Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports

Checking v/s Testing

Tuesday, September 8th, 2009

Testing is exploratory, probing and learning-oriented.

Checking is confirmative (verification and validation of what we already know). The outcome of a check is simply a pass or fail result; the outcome doesn’t require human interpretation. Hence checking should be the first target for automation.
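
To make that concrete: an automated check typically boils down to a single machine-decidable assertion. Here is a minimal sketch in C# using NUnit; the DiscountCalculator class and its discount rule are invented purely for illustration:

using NUnit.Framework;

// Hypothetical system under test, defined inline so the example is self-contained.
public class DiscountCalculator
{
    public double Apply(double orderTotal)
    {
        // Assumed business rule: orders over 100 get a 10% discount.
        return orderTotal > 100 ? orderTotal * 0.9 : orderTotal;
    }
}

[TestFixture]
public class DiscountCalculatorChecks
{
    [Test]
    public void OrdersOverHundredGetTenPercentOff()
    {
        // The expected value is known in advance; the framework compares
        // actual against expected and reports pass or fail. No human
        // interpretation is needed once the check has been designed.
        Assert.AreEqual(180.0, new DiscountCalculator().Apply(200.0), 0.001);
    }
}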

James Bach points out that checking does require some element of testing: to create a check requires an act of test design, and to act upon the result of a check requires test-result interpretation and learning. But it’s important to distinguish between the two, because when most people say Testing they really mean Checking.

Why is this distinction important?

Michael Bolton explains it really well. He says:

A development strategy that emphasizes checking at the expense of testing is one in which we’ll emphasize confirmation of existing knowledge over discovery of new knowledge. That might be okay. A few checks might constitute sufficient testing for some purpose; no testing and no checking at all might even be sufficient for some purposes. But the more we emphasize checking over testing, and the less testing we do generally, the more we leave ourselves vulnerable to the Black Swan.

What’s Cooking in Software Development

Friday, May 29th, 2009

Why do some authors call their tutorials cookbooks?

Because they are collections of guidelines (recipes) for using the software, very similar to cookbooks or recipe books.

Cooking has been used as a metaphor/analogy for software development for many decades now. Some people have even compared Developers to Chefs, [poor] Analysts to Waiters, and so on.

I find a very close resemblance between the way I cook food and the way I build software.

  • Both are very much iterative and incremental processes. Big-bang approaches don’t work.
  • Very heavy focus on feedback and testing (tasting, smelling, feeling the texture, etc.) early on and continuously throughout. We don’t cook the whole meal and then check it; the whole cooking process is very feedback-driven.
  • Like software, each meal has many edible food items (features) in it. Each food item has basic ingredients that fill your stomach [the skeleton; the must-have part of the feature], ingredients that give the food its taste, color and meaning, and ingredients that decorate the food [aesthetics]. We prioritize and thin-slice to get early feedback and to take baby steps.
  • As in software, fresh ingredients [new feature ideas] are healthier and more sustainable.
  • Cooking is an art. You get better at it by practicing. There are no crash courses that can make you a master cook.
  • Cooking has fundamental underlying principles that can be applied across different styles of cooking and different cuisines. Similarly, in software we have different schools of thought and different frameworks/technologies to which those fundamental principles can be applied.
  • We have lots of recipe books for cooking. Two different cooks can take the same recipe and come up with quite different food (taste, odor, color, texture, appeal, etc.). A good cook (someone with quality experience) knows how to take a recipe and make wonderful food out of it. Others get caught up in the recipe; they miss the whole point of cooking and enjoying food.
  • Efficiency can vary drastically between a good cook and a bad cook. A good cook can deliver tasty food up to 10 times faster than a lousy one.
  • Cooking needs passion and a risk-taking attitude. A passionate cook who is willing to try something new can get very creative and deliver great results with limited resources. Someone lacking that passion will not deliver any edible food, even if given all the resources in the world.
  • Cooking has a creative, experimental side to it. Mixing different styles of cooking can lead to wonderful results.
  • Cooking is a constant learning and exploratory process. This is what adds all the fun to cooking; not cooking the same old stuff by reading the manual.
  • In cooking there are guidelines, not rules. Someone with discipline who has internalized the guidelines can cook far better than someone stuck with rules and processes.
  • “Too many cooks spoil the broth.” You can’t escape the lessons of The Mythical Man-Month.

Also, if we broaden the analogy to the restaurant business, we can see some other interesting aspects.

GOTOs Considered Harmful! But…

Monday, February 9th, 2009

Original Code:

Public Function Process(ByVal headers As HeaderCollection, ByVal factory As HeadersFactory) As Boolean
    On Error GoTo ErrorHandler
    Dim i As Long
    Dim header As Header
    Dim receivedHeader As ReceivedHeader
    For i = 1 To headers.Count
        Set header = headers.Item(i)
        If (Trim(header.HeaderKey) = "Received") Then
            Set receivedHeader = factory.CreateReceivedHeader(header)
            If Not receivedHeader.IsIntranetServer Then
                If receivedHeader.IsTrustedRecipient Then
                    headers.ourHeadersExistBefore (i)
                    Exit For
                End If
            End If
        End If
    Next
    Exit Function
ErrorHandler:
    Err.Raise Err.Number, Err.Source & vbCrLf & "HeadersProcessor.Process", Err.Description
End Function

The nested If blocks bother me; the code does not clearly communicate what is happening. Since VB6 has no Continue statement for loops, the refactoring below uses a GoTo to a label placed just before Next to simulate one.

Refactored Code With Goto:

Private receivedHeader As ReceivedHeader
Private receivedHeaderIndex As Long
Public Function Process(ByVal headers As HeaderCollection, ByVal factory As HeadersFactory) As Boolean
    On Error GoTo ErrorHandler
    Call ExtractFirstReceivedHeader(headers, factory)
    If receivedHeader Is Nothing Then Exit Function
    If receivedHeader.IsTrustedRecipient Then
        headers.ourHeadersExistBefore (receivedHeaderIndex)
    End If
    Exit Function
ErrorHandler:
    Err.Raise Err.Number, Err.Source & vbCrLf & "HeadersProcessor.Process", Err.Description
End Function
Private Sub ExtractFirstReceivedHeader(ByVal headers As HeaderCollection, ByVal factory As HeadersFactory)
    On Error GoTo ErrorHandler
    Dim i As Long
    Dim header As Header
    For i = 1 To headers.Count
        Set header = headers.Item(i)
        If (Trim(header.HeaderKey) <> "Received") Then GoTo Continue
        If NotAnIntranetServer(header, i, factory) Then GoTo FinishedProcessing

Continue:
    Next

FinishedProcessing:
    Set header = Nothing
    Exit Sub

ErrorHandler:
    Err.Raise Err.Number, Err.Source & vbCrLf & "HeadersProcessor.ExtractFirstReceivedHeader", Err.Description
End Sub
Private Function NotAnIntranetServer(ByVal header As Header, currentIndex As Long, ByVal factory As HeadersFactory) As Boolean
    On Error GoTo ErrorHandler
    Set receivedHeader = factory.CreateReceivedHeader(header)
    If receivedHeader.IsIntranetServer = False Then
        receivedHeaderIndex = currentIndex
        NotAnIntranetServer = True
    Else
        Set receivedHeader = Nothing
        NotAnIntranetServer = False
    End If

    Exit Function
ErrorHandler:
    Err.Raise Err.Number, Err.Source & vbCrLf & "HeadersProcessor.NotAnIntranetServer", Err.Description
End Function
Private Sub Class_Terminate()
    Set receivedHeader = Nothing
End Sub

More Fluent Interfaces in Test

Monday, February 9th, 2009

Old Test Code:

[Test]
public void ShouldCheckRecipientOfReceivedHeaderIsIntranetServer()
{
    SetHeaderCollectionCount(2);
    Header header1 = CreateMockHeaderAndSetExpectation("Key");
    Header header2 = CreateMockHeaderAndSetExpectation("Received");

    collection.Stub(x => x.get_Item(1)).Return(header1);
    collection.Stub(x => x.get_Item(2)).Return(header2);

    factory.Expect(x => x.CreateReceivedHeader(header2)).Return(receivedHeader);
    receivedHeader.Expect(x => x.IsIntranetServer()).Return(true);

    processor.Process(collection, factory);
    receivedHeader.VerifyAllExpectations();
}

[Test]
public void ShouldCheckIfRecipientOfReceivedHeaderCanBeTrusted()
{
    SetHeaderCollectionCount(2);
    Header header1 = CreateMockHeaderAndSetExpectation("Key");
    Header header2 = CreateMockHeaderAndSetExpectation("Received");

    factory.Expect(x => x.CreateReceivedHeader(header2)).Return(receivedHeader);
    collection.Stub(x => x.get_Item(1)).Return(header1);
    collection.Stub(x => x.get_Item(2)).Return(header2);
    receivedHeader.Stub(x => x.IsIntranetServer()).Return(false);

    receivedHeader.Expect(x => x.IsTrustedRecipient()).Return(true);
    processor.Process(collection, factory);

    receivedHeader.VerifyAllExpectations();
}

Can you understand what is happening here? It took me quite some time to understand it all. The first thing that caught my attention was the amount of duplication; it was getting in the way of seeing the real difference between the two tests.

I was also bothered by the fact that the tests mixed method calls at several different levels of abstraction. I really like creating a veneer of domain-specific language (method calls) that the rest of my code interacts with, i.e. each method contains calls to methods at a similar level of abstraction.

Here is what I came up with to make the intent easy to communicate. I also tried to hide all the unnecessary stubbing logic; it’s not worth having it stare me in the face.

Refactored Code:

[Test]
public void RecipientOfReceivedHeaderBelongingToIntranetServerIsIgnored()
{
    AddHeader("MessageId");
    AddHeader("Received").FromIntranetServer();

    Process();
}

[Test]
public void TrustedRecipientOfReceivedHeaderIsAccepted()
{
    AddHeader("MessageId");
    AddHeader("Received").FromInternetServer().WhichIsTrusted();

    Process();
}

[TearDown]
public void VerifyExpectations()
{
    receivedHeader.VerifyAllExpectations();
}

private WhenHeadersAreProcessed FromInternetServer()
{
    receivedHeader.Stub(x => x.IsIntranetServer()).Return(false);
    return this;
}

private WhenHeadersAreProcessed FromIntranetServer()
{
    receivedHeader.Expect(x => x.IsIntranetServer()).Return(true);
    return this;
}

private void WhichIsTrusted()
{
    receivedHeader.Expect(x => x.IsTrustedRecipient()).Return(true);
}

private void Process()
{
    SetHeaderCollectionCount(headerCount);
    factory.Expect(x => x.CreateReceivedHeader(null)).IgnoreArguments().Return(receivedHeader);
    processor.Process(collection, factory);
}

private void SetHeaderCollectionCount(int count)
{
    collection.Expect(x => x.Count).Return(count);
}

private WhenHeadersAreProcessed AddHeader(string key)
{
    Header header = CreateMockHeaderAndSetExpectation(key);
    collection.Stub(x => x.get_Item(++this.headerCount)).Return(header);
    return this;
}

Also see: Fluent Interfaces improve readability of my Tests

Project Rescue Report

Monday, February 2nd, 2009

Recently I spent two weeks helping a project clear its Technical Debt. Here are some results:

Project Size

Before:

Production Code
  • Packages = 7
  • Classes = 23
  • Methods = 104 (average 4.52/class)
  • LOC = 912 (average 8.77/method and 39.65/class)
  • Average Cyclomatic Complexity/Method = 2.04

Test Code
  • Packages = 1
  • Classes = 10
  • Methods = 92
  • LOC = 410

After:

Production Code
  • Packages = 4
  • Classes = 20
  • Methods = 89 (average 4.45/class)
  • LOC = 627 (average 7.04/method and 31.35/class)
  • Average Cyclomatic Complexity/Method = 1.79

Test Code
  • Packages = 4
  • Classes = 18
  • Methods = 120
  • LOC = 771

Code Coverage

Before (coverage report before refactoring):
  • Line Coverage: 46%
  • Block Coverage: 43%

After (coverage report after refactoring):
  • Line Coverage: 94%
  • Block Coverage: 96%

Cyclomatic Complexity

See the Cyclomatic Complexity reports before and after refactoring.

Obvious Dead Code

Before, the following public methods were dead:

  • class CryptoUtils: String getSHA1HashOfString(String), String encryptString(String), String decryptString(String)
  • class DbLogger: writeToTable(String, String)
  • class DebugUtils: String convertListToString(java.util.List), String convertStrArrayToString(String)
  • class FileSystem: int getNumLinesInFile(String)

Total: 7 methods in 4 classes

After, only the following public method remained:

  • class BackgroundDBWriter: stop()

Total: 1 method in 1 class

Note: This method is required by the tests.

Automation

Version Control Usage

Before:
  • Average Commits Per Day = 1
  • Average # of Files Changed Per Commit = 2

After:
  • Average Commits Per Day = 4
  • Average # of Files Changed Per Commit = 9

Note: Since we are heavily refactoring, many files are touched in each commit. But the commit frequency is fairly high, to ensure we are not taking big leaps.

Coding Convention Violations

Before: 976. After: 0.

Something interesting to watch is how the production code becomes more crisp (fewer packages, classes and LOC) and how the amount of test code grows to exceed the production code.

Another similar report.

What is Simple Design?

Monday, February 2nd, 2009

Simple is a very subjective word. But is Simple Design equally subjective?

Following is what dictionary.com has to say about the word “Simple”:

  • easy to understand, deal with, use, etc.: a simple matter; simple tools.
  • not elaborate or artificial; plain: a simple style.
  • not ornate or luxurious; unadorned: a simple gown.
  • unaffected; unassuming; modest: a simple manner.
  • not complicated: a simple design.
  • not complex or compound; single.
  • occurring or considered alone; mere; bare: the simple truth; a simple fact.
  • free of deceit or guile; sincere; unconditional: a frank, simple answer.
  • common or ordinary: a simple soldier.
  • not grand or sophisticated; unpretentious: a simple way of life.
  • humble or lowly: simple folk.
  • inconsequential or rudimentary.

It turns out that some of these adjectives define the characteristics of a Simple design very well:

  • easy to understand, deal with: communicates its intent.
  • is clear or has clarity
  • not elaborate or artificial; plain: crisp and concise
  • helps you maintain clear focus
  • is unambiguous
  • not ornate or luxurious; unadorned: minimalistic; least possible components (classes and methods).
  • unaffected; unassuming; modest: does not have unanticipated side-effects.
  • not complicated: avoids unnecessary conditional logic.
  • not complex or compound; single: just does one thing and does it well.
  • occurring or considered alone; mere; bare: to the point.
  • free of deceit or guile; sincere; unconditional: abstracts implementation from intent, but does not deceive someone by concealing or misrepresenting the actual concept.
  • common or ordinary: built on standard patterns which are well understood.
  • not grand or sophisticated; unpretentious: fulfills today’s needs without unnecessary bells and whistles (over-engineering).
  • humble or lowly.
  • inconsequential or rudimentary: does not draw your attention to unnecessary details; achieves good abstractions

What is Simple Design?

A design that allows you to keep moving forward with the least amount of resistance. It’s like traveling light: low up-front investment, and not much to slow you down when you want to change. It’s like clay in the hands of an artist. Simple is a direction (dynamic), not a location (static). To achieve this:

  • Do the Simplest thing that could possibly work. In this context the “doing” is very important; just thinking will not help.
  • YAGNI – You Aren’t Gonna Need It. Don’t design for something that isn’t needed today. Think about the future, but test, code and design for today’s needs. Don’t design for future complexity that may never happen or may change.
  • The use of Design Patterns contributes to simplicity by using standard constructs and approaches that have been observed over many years of software development.
  • Code Smells capture a wealth of knowledge about the symptoms of a rotting design. Being aware of them is very important for every programmer.
  • Similarly, the Unix Programming Philosophy and good Object-Oriented design principles will guide the code to be simple and maintainable.
  • Simple Design and Test Driven Development (TDD) go hand in hand. Since the code is written to make the test pass, it tends to be more focused and much simpler. Check out: Smells in Test that indicate Design problems.

You know you have achieved a Simple Design when (the official scoop):

  • The System Works: all the tests are passing.
  • Communicates Well: expresses every idea that we need to express.
  • Contains no duplication: says everything Once-And-Only-Once and follows the Don’t Repeat Yourself (DRY) principle.
  • Has no superfluous parts: is concise; has the least possible number of classes and methods without violating the first three guidelines.

I would like to add a 5th guideline: if any developer on your team cannot draw (explain) the design in a couple of minutes, there is scope for simplification.

In my experience, design is a very involved activity. Every now and then one needs to make trade-off decisions. The guiding principles I use while designing (listed below) do tend to compete, forcing me to make balanced trade-offs. Sometimes I make the wrong decision, but Refactoring gives me another chance to set it right.

  • Lessons learnt from The Art of Unix Programming
    • Modularity: Write simple parts connected by clean interfaces
    • Clarity: Clarity is better than cleverness.
    • Composition: Design programs to be connected to other programs.
    • Separation: Separate policy from mechanism; separate interfaces from engines
    • Simplicity: Design for simplicity; add complexity only where you must
    • Parsimony: Write a big program only when it is clear by demonstration that nothing else will do
    • Transparency: Design for visibility to make inspection and debugging easier
    • Robustness: Robustness is the child of transparency and simplicity
    • Representation: Fold knowledge into data so program logic can be stupid and robust
    • Least Surprise: In interface design, always do the least surprising thing
    • Silence: When a program has nothing surprising to say, it should say nothing
    • Repair: When you must fail, fail noisily and as soon as possible
    • Economy: Programmer time is expensive; conserve it in preference to machine time
    • Generation: Avoid hand-hacking; write programs to write programs when you can
    • Optimization: Prototype before polishing. Get it working before you optimize it
    • Diversity: Distrust all claims for “one true way”
  • Bob Martin’s OO Design Principles: SOLID
    • Single Responsibility Principle (SRP): There should never be more than one reason for a class to change.
    • Open Closed Principle (OCP): A module should be open for extension but closed for modification.
    • Liskov Substitution Principle (LSP): Subclasses should be substitutable for their base classes (see also Design by Contract).
    • Interface Segregation Principle (ISP): Many client-specific interfaces are better than one general-purpose interface (Narrow Interface).
    • Dependency Inversion Principle (DIP): Depend upon abstractions; do not depend upon concretions. Abstractions live longer than details.
  • OAOO – Once and only once: Mercilessly kill duplication, whether it’s code duplication or conceptual duplication. It all gets in the way sooner or later.
  • DRY – Don’t Repeat Yourself: Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. DRY is similar to OAOO, but it applies to effort as well, not just code.
  • Tell Don’t Ask: As the caller, you should not be making decisions based on the state of the called object and then changing the state of some other object as a result. The logic you are implementing is probably the called object’s responsibility, not yours. Making decisions outside the object violates its encapsulation (see the sketch after this list).
  • The Law of Demeter: Any method of an object should only call methods belonging to:
    • itself
    • any composite objects
    • any parameters that were passed in to the method
    • any objects it created
  • Triangulate: When you are not sure what the correct abstraction should be, instead of pulling out an abstraction upfront, you get the second case to work by duplicating and modifying a small piece of code. Once you have both the solutions working, find the “generic” form and create an abstraction.
  • Influence from Functional Programming:
    • Separate Query from Modifier: always separate methods that have side-effects from those that don’t, and if possible make the method signature express that. And if you really want to spice things up, try having side-effect-free methods and classes as much as possible.
    • Prefer immutable objects over objects whose state changes after construction. Better for concurrency and better for freely passing them around.
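
To make Tell Don’t Ask concrete, here is a minimal sketch in C#; the Wallet/checkout names are invented purely for illustration:

// Asking: the caller inspects the object's state and makes the decision
// itself, so the wallet's invariants are enforced (or not) by every caller.
public class Wallet
{
    public decimal Balance { get; set; }
}

public class AskingCheckout
{
    public void Pay(Wallet wallet, decimal price)
    {
        if (wallet.Balance >= price)   // decision made outside the object
        {
            wallet.Balance -= price;   // state mutated from outside
        }
    }
}

// Telling: the caller tells the wallet what to do; the decision and the
// state change live together behind the object's own interface.
public class TellingWallet
{
    private decimal balance;

    public TellingWallet(decimal openingBalance)
    {
        balance = openingBalance;
    }

    public bool TryPay(decimal price)
    {
        if (balance < price) return false;
        balance -= price;
        return true;
    }
}

The telling version keeps the decision and the state change behind one interface, so no caller can put the object into an inconsistent state.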

Also don’t forget:

Smells in Test that indicate Design problems

Sunday, February 1st, 2009

At the Simple Design and Testing Conference in 2007, we had an interesting discussion on “what are my tests telling me about my design?”

Following are some of the conclusions from the discussion:

  • Too many test cases per method: may indicate that the method is doing too much. We discussed the fact that complex business-logic algorithms, with lots of special cases, often appear to be atomic and indivisible, and thus only testable as a unit. But there is often a way to break them down into smaller pieces. Also, sometimes one needs to ask whether all those special cases are really required now, or whether we are speculating.
  • Poorly factored edge cases: this is the case where many variations of input are tested, when a few carefully-chosen edge cases would suffice. We discussed how this sometimes emerges when the algorithm under test has too many special cases, and the same result could be arrived at with a more general algorithm.
  • Increasing access privilege of members (methods or instance variables) to protected or public only for testing purposes: sometimes indicates that you are coupling your tests too tightly with the code. Sometimes it indicates that the private member has enough behavior that it needs to be tested; in that case, consider pulling it out as a separate object.
  • Too much setup/teardown: indicates strong coupling in the class under test.
  • Mocks returning mocks: indicates that the method under test has too many collaborators.
  • Poorly-named tests: sometimes means that the naming and/or design of the classes under test isn’t sufficiently thought out.
  • Lots of duplication in tests: sometimes indicates that the production code should be providing a way to avoid some of that duplication.
  • Extensive inheritance in test fixtures: indicates that your design might rely heavily on inheritance instead of composition.
  • Double dots in the test code: indicate that the code violates the Law of Demeter. In some cases it might be better to hide the delegate (see the sketch after this list).
  • Changing one thing breaks many tests: may just indicate bad factoring of tests, but can also indicate excess dependencies in the code.
  • Dynamic stubs (stubs with conditional behavior): indicate a lack of control over the collaborator being stubbed out. This sometimes indicates that behavior is not distributed well amongst the classes.
  • Too many dependencies that have to be included in the test context: indicates tight coupling in the design.
  • Random test failures when running tests in parallel: indicate that the code is not thread-safe and has side-effects that are not factored correctly.
  • Tests run slowly: indicates that your unit tests might be hitting external systems like the network, database or filesystem. This usually means the class under test has multiple responsibilities; one should be able to stub out external dependencies.
  • Temporal coupling – tests break when run in a different order: may just be a test smell, but may also be coupling in the code under test.
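
As an illustration of the double-dot smell and the hide-the-delegate fix, here is a minimal sketch in C#; the Order/Customer/Address classes are invented purely for illustration:

public class Address
{
    public string City { get; set; }
}

public class Customer
{
    public Address Address { get; set; }
}

public class Order
{
    public Customer Customer { get; set; }

    // Hiding the delegate: callers ask the Order directly instead of
    // reaching through Customer and then Address.
    public string ShippingCity
    {
        get { return Customer.Address.City; }
    }
}

public class OrderLookups
{
    public string SmellyLookup(Order order)
    {
        // Double dots: this code reaches through two collaborators and is
        // coupled to the shape of the whole object graph.
        return order.Customer.Address.City;
    }

    public string ImprovedLookup(Order order)
    {
        // The navigation is hidden behind Order's own interface, so this
        // code breaks only when Order's contract changes.
        return order.ShippingCity;
    }
}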

Based on this, it’s very apparent that tests do influence your design. Done well, this will surely result in a Simple, Elegant Design.

What is the Least I can do that is Sustainable and Valuable?

Thursday, January 29th, 2009

A very, very powerful thought when starting something!

Stop using us as Guinea-Pigs

Friday, December 12th, 2008

I’m trying to transfer some money from my account to another account. I enter all the details, with passwords and what not, and finally I get this…

[Screenshot: error message]

Wonderful, self-explanatory error message!

Come on guys, test your bloody software. Don’t use us as guinea-pigs.

Want to use it? First help us test it!

Wednesday, October 29th, 2008

Why do Web 2.0 companies overlook the importance of a solid suite of automated tests?

From an end user’s perspective, it looks like they use their first thousand users as manual testers.

I’ll give you an example. Today LinkedIn launched a new set of applications like SlideShare, Amazon, WordPress, TripIt, etc. Whenever I try to install and use one of them, I keep getting random errors:

There was a problem installing My Travel.
Fix this by reinstalling the application.

Sorry, unable to fetch your blog. Please try again later!

The server did not respond. Please try again.

It’s the Web 2.0 companies and the Microsofts of the world who can get away with this attitude. If this were a high-end competitive market, such broken applications would result in a significant loss of reputation and business. The Web is certainly changing this; I’m not sure whether for good or bad. On one hand, I like the fact that I can quickly release features and improve them over time. On the other hand, I don’t like the fact that in the urgency to release new features, we compromise on quality and release dysfunctional stuff.

All I can think is that companies still struggle to strike the right balance. They are caught up in trying to have their cake and eat it too.

Licensed under a Creative Commons License.