Managed Chaos
Naresh Jain’s Random Thoughts on Software Development and Adventure Sports

Agile is Counter Intuitive

Friday, November 26th, 2010

I hear many people claim Agile is just common sense. When I hear that, I feel these folks are either way smarter than me, they don’t really understand Agile, or they are plain lying.

When I first read about test-first programming, I fell off my chair laughing; I thought it was some kind of joke. “How the heck can I write automated tests without even knowing what my code would look like?” You think TDD is common sense?

When I first moved from traditional methods to monthly iterations/sprints, we struggled to finish what we had signed up for in a month. It’s only natural to consider extending the time. You also realize that half a day of planning is not sufficient, since a lot of changes come in mid-sprint. The logical way to address this problem is to extend the iteration/sprint duration, add more people, and spend more time planning to make sure you’ve considered all the scenarios. But to nobody’s surprise except yours, spending more time does not help (in fact, it makes things worse). In a moment of desperation, you propose cutting the sprint duration in half, maybe even to a quarter. Surprisingly, this works way better. Logical, right?

And what did you think of Pair Programming? It’s obvious, right, that two developers working together on the same machine will produce better-quality software faster?

What about continuous integration? Integrating once a week/month is such a nightmare, and yet you want us to go through that many times a day? But of course it’s common sense that it would be better.

How about weekly/monthly demos of working software somehow magically improving collaboration and trust? Intuitive? And shipping small increments of software frequently to avoid rework and get fast feedback?

One after another, we can go through each practice (especially the most powerful ones) and you’ll see why Agile is counter-intuitive (at least it was to me in the early 2000s, when I stumbled upon it).

Code Analysis Tools for C/C++

Tuesday, August 31st, 2010

What tools do you use for Code Analysis of C/C++ projects?

This is a common question a lot of teams have when we discuss Continuous Integration for C/C++ projects.

I would recommend the following tools:

UPDATE: I strongly recommend looking at CppDepend (commercial), a one-stop solution for all kinds of metrics. It has some very cool and useful features like a Code Query Language, custom build reporting, build comparison, and great visualization diagrams for dependencies, treemaps, etc.

The Wikipedia page on Static Code Analysis Tools has a list of many more tools.
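
For teams asking how such a tool fits into a CI build: below is a minimal sketch (in Python, not from the original post) of a CI step that runs cppcheck, one of the free C/C++ analyzers, and fails the build when it reports findings. The flags and the "src" directory are assumptions for illustration.

    # Hypothetical CI step: run cppcheck over the source tree and fail the
    # build on any finding. Tool flags and paths are illustrative assumptions.
    import subprocess
    import sys

    def run_static_analysis(source_dir: str = "src") -> int:
        """Run cppcheck; a non-zero exit code means it reported findings."""
        cmd = [
            "cppcheck",
            "--enable=warning,style,performance",  # categories of checks to run
            "--error-exitcode=1",                  # turn findings into a failed step
            source_dir,
        ]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        sys.exit(run_static_analysis())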

Ultra-light Development and Deployment Example

Monday, October 26th, 2009

Over the last year, I’ve been helping (part-time) Freeset build their ecommerce website. David Hussman introduced me to folks from Freeset.

Following is a list of random topics (most of them are Agile/XP practices) about this project:

  • Project Inception: We started off with a couple of meetings with folks from Freeset to understand their needs. David quickly created an initial vision document with User Personas and their use cases (about two pages long, on Google Docs). Naomi and John from Freeset quickly created some screen mock-ups in Photoshop to show the user interaction. I don’t think we spent more than a week on all of this. It helped us get started.
  • Technology Choice: When we started, we had to decide what platform we were going to use to build the site: a custom site using Rails vs. a CMS. I think David was leaning towards RoR. I talked to folks at Directi (Sandeep, Jinesh, Latesh, etc.) and we thought that instead of building a custom website from scratch, we should use a CMS. After a bit of research, we settled on CMS Made Simple, for the following reasons:
    • We needed different templates for different pages on the site.
    • PHP: Easiest to set up a PHP site with MySQL on any Shared Host Service Provider
  • Planning: We started off with hour-long, bi-weekly planning meetings (conference calls on Skype) every Saturday morning (India time). We had a massively distributed team: John was in New Zealand; David and Deborah (from BestBuy) were in the US; Kerry was in the UK for a short while; Naomi, Kelsea and others were in Kolkata; and I was based out of Mumbai. Because of the time-zone differences and because we were all working on this part time, the bi-weekly planning meeting felt awkward and heavyweight. So after about 3 such meetings we abandoned it. We created a spreadsheet on Google Docs, added all the items that had high priority and started signing up for tasks. Whenever anyone updated an item on the sheet, everyone would be notified about the change.
  • User Stories: We started off with User Personas and Stories, but soon fell back to simple tasks on a shared spreadsheet. We had quite a few user-related tasks, but a one-liner in the spreadsheet was more than sufficient. We used this spreadsheet as a pseudo-backlog (by no means did we have the rigor to build a proper backlog).
  • Short Releases: We were only working on the production environment: every change made by a developer was immediately live. Only recently did we create a development environment (a replica of production) on which we do all our development. (I asked John from Freeset if this change helped him; he had mixed feelings. He recently did a large website restructuring (added some new sections and moved some pages around) and found the development environment useful for that. But for small changes, he finds it overkill to make them on dev and then sync them up with production. There are also things like news which make sense to do directly on the production server; now he has to do them in both places.) So I’m thinking maybe we move back to just the production environment and create a dev replica on demand when we plan to make big changes.
  • Testing: Originally we had plans of at least recording or scripting some Selenium tests to make sure the site behaves the way we expect it to. This took a back seat and never really became an issue. Recently we had a slight setback when we moved a whole bunch of pages around and links to them from other parts of the site were broken. Other than that, so far, it’s just been fine.
  • Evolutionary Design: I’ve always believed in, and continue to believe in, “Do the simplest, dumbest thing that could possibly work.” Since we started, the project has taken interesting turns: we’ve used quite a few different JavaScript libraries and hacked a bit of PHP code here and there. All of this is evolving and working fine.
  • Usability: We still have lots of usability and optimization issues on our site. Since we don’t have an expert with us and we can’t afford one, we are doing the best we can with what we have on hand. We are hoping we’ll find a volunteer some day soon to help us on this front.
  • Versioning: We explored various options for versioning, but as of today we don’t have any repository under which we version our site (content and code). This is a drawback of using an online CMS. Having said that, so far (it’s been over a year) we have not really felt the need for versioning. As of now we have four people working on the site and it just seems to work fine. Reminds me of YAGNI. (Maybe in the future, when we have more collaborators, we might need this.)
  • Continuous Integration: Without versioning and testing, CI is out of the question.
  • Automated Deployment: Until recently we had only one server (production), so there was no need for deployment. Now that we have a dev and a prod environment, Devdas and I quickly hacked together a simple shell script (with mysqldump & rsync) that does automated deployment; a sketch of the idea follows this list. It can’t get simpler than this.
  • Hosting: We talked about hosting the site on its own slice vs. using an existing shared hosting account. We can always move the site to another location when our existing, cheap hosting option no longer suits our needs. So as of today, I’m hosting the site under one of my shared hosting accounts.
  • Rich Media Content: We debated serving & hosting rich media content like videos from our own site vs. using YouTube to host them. We went with YouTube for the following reasons:
    • We wanted to redirect any possible traffic to other sites that are better tuned to serving high-bandwidth content
    • We wanted to use YouTube’s existing customer base to attract traffic to our site
    • Since we knew we’d be moving to another hosting service, we did not want to keep all those videos on a server from which they would then have to be moved
  • Customer Feedback: So far we have received great feedback from users of the site. We’ve also seen huge growth in traffic, currently hovering around 1,500 hits per day. Besides direct user feedback, we also look at Google Analytics to see how users are responding to the changes we’ve made, and so on.
  • We don’t really have/need a System Metaphor and we are not paying as much attention to refactoring. We have some light conventions but we don’t really have any coding standards. Nor do we have the luxury to pair program.
  • Distributed/Virtual Team: Since all of us are distributed and traveling, we don’t really have the concept of a site, let alone an on-site customer or product owner.
  • Since all of this is voluntary work, sustainable pace takes on a very different meaning. Sometimes what we do is not sustainable, but that’s the need of the hour. However, all of us really like and want to work on this project. We have a sense of ownership (collective ownership).
  • We’ve never really sat down and done a retrospective. Maybe once in a while we ask a couple of questions about how something went.
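
To make the Automated Deployment point above a bit more concrete, here is a rough sketch of the mysqldump & rsync idea in Python (the original was a simple shell script). Every host, path, database name and user below is a placeholder, and loading the dump on the production side is left out.

    # Rough sketch of a dev-to-prod sync: dump the database, then rsync the
    # site files and the dump to the production host. All names are placeholders;
    # database credentials are assumed to come from ~/.my.cnf or a prompt.
    import subprocess

    DEV_DB = "freeset_dev"                        # hypothetical database name
    DUMP_FILE = "/tmp/site_dump.sql"
    DEV_DOCROOT = "/var/www/dev_site/"            # hypothetical source path
    PROD_TARGET = "user@prod.example.com:/var/www/site/"

    def dump_database() -> None:
        """Export the dev database to a SQL file using mysqldump."""
        with open(DUMP_FILE, "w") as out:
            subprocess.check_call(["mysqldump", "--user=deploy", DEV_DB], stdout=out)

    def sync_files() -> None:
        """Copy the site files and the dump to production with rsync."""
        subprocess.check_call(["rsync", "-avz", "--delete", DEV_DOCROOT, PROD_TARGET])
        subprocess.check_call(["rsync", "-avz", DUMP_FILE, PROD_TARGET])

    if __name__ == "__main__":
        dump_database()
        sync_files()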

Overall, I’ve been extremely happy with the choices we’ve made. I’m not suggesting every project should be run this way. I’m trying to highlight an example of what being agile really means.

Who needs a separate QA Team?

Wednesday, January 14th, 2009

Have you come across developers who think that having a separate Quality Assurance (QA) team, who could test (manually or auto-magically) their code/software at the end of an iteration/release, will really help them? Personally I think this style of software development is not just dangerous but also harmful to the developers’ growth.

Having a QA team that tests (inspects) the software after it’s built gives the impression that you can slap inspection onto the end of any process and improve the quality of your product. Unfortunately, things don’t work this way. What you want to do is build quality into the process rather than inspecting (checking) at the end of the process to assure quality.

Let me give you an example of what I mean by “building quality into the process”.

Back in the good old days, it was typical for a cloth manufacturer to have 10-15 power looms. They would set up these looms at the beginning of the day and let them run for the day. At the end of the day, they would take all the cloth produced by the looms and hand it over to another team (a separate QA team) who would check each piece of cloth for defects.

There were multiple sources of defects. At times one of the threads would break, creating a defect in the cloth. At times insects would sit on the thread and get woven into the cloth, creating a defect. And so on. Checking the cloth at the end of the day was turning out to be very expensive for the cloth manufacturers. Basically, they were trying to create quality products by inspecting the cloth at the end of the process. This is similar to the QA process in a waterfall project.

Since this was not working out, they hired a lot of people to watch each loom. Best case, there would be one person per loom watching for defects. As soon as a thread would break, they would stop the loom, fix the thread and continue. This certainly helped to reduce the defects, but was not an optimal solution for several reasons:

  • It was turning out to be quite expensive to have one person per loom
  • People at the looms would take breaks during the day and they would either stop the loom during their break (production hit) or would take the risk of letting some defects slip.
  • It became very dependent on how closely these folks watched the loom. In other words, the quality of the cloth depended heavily on the capability of the person (good eyesight and keen attention) inspecting the loom.
  • and so on

As you can see, what we are trying to do here is move the quality assurance process upstream. Trying to build quality into the manufacturing process. This is similar to the traditional Agile process where you have a couple of dedicated QAs on each team, who check for defects during or at the end of the iteration.

The next step, which really helped fix this issue to a great extent, was a groundbreaking innovation by Toyoda Looms. As early as 1885, Sakichi Toyoda was working on improving looms.

[Image: Toyoda Loom]

One of his initial innovations was to introduce a small lever on each thread. As soon as the thread broke, the lever would drop and jam the loom. They went on to introduce noteworthy inventions such as automatic thread replenishment without any drop in weaving speed, non-stop shuttle-change motion, etc. Nowadays, you can find looms with sensors that detect insects or other dirt on the threads, and so on.

Basically, what happened in the loom industry is that they introduced various small mechanisms as part of the loom that prevent defects from being introduced in the first place. In other words, as and when they found issues with the process, they mistake-proofed it by stopping defects at the source. They built quality into the process by shifting their focus from Quality Assurance to Quality Control. This is what you see in some really good product companies that don’t have a separate QA team: they focus on how to eliminate or reduce the chances of introducing defects rather than on how to detect defects (which is wasteful).
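
As a tiny, hypothetical illustration of the difference (not from the loom story itself), compare detecting bad data after the fact with making it impossible to create in the first place:

    # Detection: accept anything, then inspect afterwards (checking the cloth
    # at the end of the day).
    def defective_quantities(raw_quantities):
        return [q for q in raw_quantities if q <= 0]

    # Prevention: refuse to create an invalid order at all (the lever that jams
    # the loom the moment a thread breaks).
    class Order:
        def __init__(self, quantity: int):
            if quantity <= 0:
                raise ValueError("quantity must be a positive integer")
            self.quantity = quantity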

Hence it’s important that we focus on Quality Control rather than Quality Assurance. The terms “quality assurance” and “quality control” are often used interchangeably to refer to ways of ensuring the quality of a service or product. The terms, however, have different meanings.

Assurance: The act of giving confidence, the state of being certain or the act of making certain.

Quality assurance: The planned and systematic activities implemented in a quality system so that quality requirements for a product or service will be fulfilled.

Control: An evaluation to indicate needed corrective responses; the act of guiding a process in which variability is attributable to a constant system of chance causes.

Quality control: The observation techniques and activities used to fulfill requirements for quality.

So think about it: do you really need a separate QA team? What are you doing along the lines of Quality Control?

IMHO, in the late ’90s eXtreme Programming really pushed the envelope on this front. With wonderful practices like Automated Acceptance Testing, Test-Driven Development, Pair Programming and Continuous Integration, I think we are finally getting closer. Having continuous/frequent working sessions with your customers/users is another great way of building quality into the process.
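
For readers who have not seen test-first in action, here is a minimal, hypothetical sketch of the rhythm: the tests are written first (and fail), and only then is the simplest shipping_cost function written to make them pass. The function name and pricing rule are made up for illustration.

    import unittest

    # Step 1: write the tests first; they fail because shipping_cost does not exist yet.
    class ShippingCostTest(unittest.TestCase):
        def test_orders_above_threshold_ship_free(self):
            self.assertEqual(shipping_cost(order_total=120), 0)

        def test_small_orders_pay_flat_rate(self):
            self.assertEqual(shipping_cost(order_total=30), 5)

    # Step 2: write the simplest code that makes the tests pass, then refactor.
    def shipping_cost(order_total: float) -> float:
        return 0 if order_total >= 100 else 5

    if __name__ == "__main__":
        unittest.main()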

Lean Startup practices like Continuous Deployment and A/B Testing take this one step further and are really effective in tightening the feedback cycle for measuring user behavior in real context.

As more and more companies embrace these methods, it’s becoming clear that we can do away with the concept of a separate QA team or an independent testing team.

Richard Sharpe conducted a great interview with Jean Tabaka and Bob Martin on the lean concept of “ceasing inspections”. In this 7-minute video, Jean and Bob support the idea of preventing defects upfront rather than at the end: Quality Assurance vs Quality Control.

Continuous Integration

Wednesday, June 21st, 2006

What is the purpose of Continuous Integration (CI)?

To avoid last-minute integration surprises. CI breaks the integration process into small, frequent steps, avoiding the big-bang integration that leads to an integration nightmare.

If people are afraid to check-in frequently, your Continuous Integration process is not working.

The CI process goes hand in hand with Collective Code Ownership and a single-team attitude.

CI is the manifestation of “Stop the Line” culture from Lean Manufacturing.

What are the advantages of Continuous Integration?

  • Helps to improve the quality of the software and reduce the risk by giving quicker feedback.
    • Experience shows that a huge number of bugs are introduced during the last-minute code integration under panic conditions.
  • Brings the team together. Helps to build collaborative teams.
  • Gives the team a level of confidence, which was once not there, to check in code more frequently.
  • Helps maintain the latest version of the code base in an always-shippable state (for testing, demo, or release purposes).
  • Encourages loose coupling and evolutionary design.
  • Increases visibility and acts as an information radiator for the team.
  • By integrating frequently, it helps us avoid a huge integration effort at the end.
  • Helps you visualize various trends about your source code. Can be a great starting point to improve your development process.

Is Continuous Integration the same as Continuous build?

No, a continuous build only checks whether the code compiles and links correctly. Continuous Integration goes beyond just compiling:

  • It executes a battery of unit and functional tests to verify that the latest version of the source code is still functional.
  • It runs a collection of source code analysis tools to give you feedback about the quality of the source code.
  • It executes your packaging script to make sure the application can be packaged and installed.

Of course, both CI and CB should:

  • track changes,
  • archive and visualize build results and
  • intelligently publish/notify the results to the team.

How do you differentiate between frequent and Continuous Integration?

Continuous means:

  • As soon as there is something new to build, it’s built automatically. You want to fail fast and get this feedback as rapidly as possible.
  • When it stops being an event (a ceremony) and becomes a behavior (a habit).

Merge a little at a time to avoid the big cost of full integration at the end of a project. The bottom line is fail fast & quicker feedback.

Can Continuous Integration be manual?

Manual Continuous Integration is the practice of frequently integrating with other team members’ code manually, on a developer’s machine or an independent machine.

Because people are not good at being consistent with repetitive tasks (that’s a machine’s job), IMHO this process should be automated so that you are continuously compiling, testing, inspecting and responding to feedback.
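
As a minimal sketch of what “automated” means here (a hypothetical polling loop, not how any particular CI server works; a real CI server does this for you), you could watch the repository and kick off the build whenever a new commit lands. The branch name and build script below are placeholders.

    import subprocess
    import time

    def remote_head() -> str:
        """Fetch and return the latest commit hash on the integration branch."""
        subprocess.check_call(["git", "fetch", "origin"])
        out = subprocess.check_output(["git", "rev-parse", "origin/main"])
        return out.decode().strip()

    if __name__ == "__main__":
        last_built = None
        while True:
            head = remote_head()
            if head != last_built:
                # ./build.sh is a placeholder for your compile/test/inspect script.
                status = subprocess.call(["./build.sh"])
                print("build", "passed" if status == 0 else "FAILED", "for", head)
                last_built = head
            time.sleep(60)  # poll; real CI servers trigger on each commit instead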

What are the Pre-Requisites for Continuous Integration?

This is a grey area, but here is a quick list:

  • Common source code repository
  • Source Control Management tool
  • Automated Build scripts
  • Automated tests
  • Feedback mechanism
  • Commit code frequently
  • A change in developer mentality, i.e. a desire to get rapid feedback and increase visibility.

What are the various steps in a Continuous Integration build? (A minimal sketch of chaining a few of these steps together follows the list.)

  • pull the latest source from the SCM
  • generate source (if you are using code generation)
  • compile the source
  • execute unit tests
  • run static code analysis tools – project size, coding convention violations, dependency analysis, cyclomatic complexity, etc.
  • generate version control usage trends
  • generate documentation
  • set up the environment (pre-build)
  • set up third-party dependencies, e.g. run database migration scripts
  • package the application
  • deploy
  • run various regression tests: smoke, integration, functional and performance tests
  • run dynamic code analysis tools – code coverage, dead-code analysis, etc.
  • create and test the installer
  • restore the environment (post-build)
  • publish build artifacts
  • report/publish the status of the build
  • update the historical record of the build
  • record build metrics – timings, etc.
  • gather auditing information (e.g. who triggered the build and why)
  • label the repository
  • trigger dependent builds
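
Here is the minimal sketch referred to above: a fail-fast pipeline that chains a handful of these steps (compile, unit tests, static analysis, packaging). The concrete commands are placeholders; substitute whatever your project actually uses.

    import subprocess
    import sys

    # Each step is a (name, command) pair; the commands are illustrative placeholders.
    STEPS = [
        ("compile source",       ["make", "all"]),
        ("execute unit tests",   ["ctest", "--output-on-failure"]),
        ("static code analysis", ["cppcheck", "--error-exitcode=1", "src"]),
        ("packaging",            ["./package.sh"]),
    ]

    def run_pipeline() -> int:
        for name, cmd in STEPS:
            print(f"== {name} ==")
            if subprocess.call(cmd) != 0:
                print(f"Build FAILED at step: {name}")
                return 1  # fail fast: later steps are skipped
        print("Build succeeded; publish artifacts and notify the team here.")
        return 0

    if __name__ == "__main__":
        sys.exit(run_pipeline())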

Who are the stakeholders of the Continuous Integration build?

  • Developers
  • Testers [QA]
  • Analysts/Subject Matter Experts
  • Managers
  • System Operations
  • Architects
  • DBAs
  • UX Team
  • Agile/CI Coach

What is the scope of QA?

They help the team automate the functional tests. They pick up the product from the nightly build and do other types of testing, for example: exploratory testing, mutation testing, and some system tests that are hard to automate.

What are the different types of builds that make up Continuous Integration, and what are they based on?

We break down the CI build into different builds depending on their scope, feedback-cycle time, and target audience.

1. Local Developer build:
1.a. Job: Retains the environment. Only compiles and tests locally changed code (incremental).
1.b. Feedback: less than 5 mins.
1.c. Stakeholders: The developer pair who runs the build.
1.d. Frequency: Before checking in code.
1.e. Where: On the developer’s workstation/laptop.

2. Smoke build:
2.a. Job: Compiles, unit tests, automated acceptance and smoke tests on a clean environment [including the database].
2.b. Feedback: less than 10 to 15 mins. (If it takes longer, you could make the build incremental rather than starting from a clean environment.)
2.c. Stakeholders: All the developers within a single team.
2.d. Frequency: With every check-in.
2.e. Where: On the team’s dedicated continuous integration server. [Multiple modules can share the server if they have parallel builds.]

3. Functional build:
3.a. Job: Compiles, unit tests, automated acceptance and all functional/regression tests on a clean environment. Stubs/mocks out other modules or systems.
3.b. Feedback: less than 1 hour.
3.c. Stakeholders: Developers, QA and analysts in a given team.
3.d. Frequency: Every 2 to 3 hours.
3.e. Where: On the team’s dedicated continuous integration server.

4. Cross-module build:
4.a. Job: If your project has multiple teams, each working on a separate module, this build integrates those modules and runs the functional build across all of them.
4.b. Feedback: less than 4 hours.
4.c. Stakeholders: Developers, QA, architects, managers and analysts across the module teams.
4.d. Frequency: 2 to 3 times a day.
4.e. Where: On a continuous integration server owned by all the modules. [Different from the above.]

5. Product build:
5.a. Job: Integrates all the code that is required to create a single product. Nothing is mocked or stubbed [except things that are not yet built]. Creates all the artifacts and publishes a deployable product.
5.b. Feedback: less than 10 hrs.
5.c. Stakeholders: Everyone, including project management.
5.d. Frequency: Every night.
5.e. Where: On a continuous integration server owned by all the modules. [Same as above.]

General Rule of Thumb: No silver bullet. Adapt your own process/practice.
