July 29th, 2015
Agile India volunteers have started working on the Agile India 2016 conference. We are planning to host the conference at the same venue (Hotel Chancery Pavilion) in Bangalore from 14th – 21st Mar 2016 (8 days).
We are now open for proposals for the following conference themes (listed with their theme chairs):
- Research Camp (March 15th) – Jyothi Rangaiah and Ashay Saxena
- Lean Startup (March 16th) – Nitin and Tathagat (ad interim)
- Enterprise Agile (March 17th) – Evan Leybourn and Ravi Kumar
- DevOps and Continuous Delivery (March 18th) – Joel Tosi and S Sivaguru
- Agile in the Trenches (March 19th) – Ellen Grove and Leena S N
More details: http://2016.agileindia.org/#program/theme
The conference will host 3 parallel tracks. The CFP Early Bird Submissions will close on Sep 10th.
Please submit your proposals at http://confengine.com/agile-india-2016/proposals
Speaker Compensation: http://2016.agileindia.org/#speaker/compensation
Please spread the word:
Twitter: #AgileIndia2016 or @agileindia
July 6th, 2015
The jQuery Foundation is making its first trip to Bangalore, bringing together experts from across the field of front-end development to get you up to speed on the latest open web technologies. Get the inside scoop on front-end development, code architecture and organization, design and implementation practices, tooling and workflow improvements, and emerging browser technologies.
We hope you can use this opportunity to share ideas, socialize, and work together on advancing the present and future success of the front-end ecosystem.
More details: http://jqueryconf.in
The first 25 people to register at http://booking.agilefaqs.com/jquery-conf-2015 can avail a special 15% discount. Use discount code – [email protected]$
- Dave Methvin – jQuery Core Lead | President of jQuery Board
- Kris Borchers – Executive Director of jQuery Board
- Scott González – jQuery UI Lead
- Bodil Stokke – Functional Programming Hipster
- Darcy Clarke – Co-Founder, Themify
- Eric Schoffstall – Creator, Gulp
- John K Paul – Organizer, NYC HTML5
- Alexis Abril – Committer, CanJS
- and 21 more speakers
1. Pre-Conference Workshops – Wednesday, July 22
- Optimizing and Debugging Web Sites by Dave Methvin
- Revolutionizing your CSS! by Darcy Clarke
- Contributing to the jQuery Foundation by Kris Borchers
2. Open Web Conf – Thursday, July 23
Talks on Functional Reactive Programming, ES6, Escher.jl, Famo.us, CanJS, Ionic Framework, Kendo UI, Arduino, WebRTC and The Future of Video.
3. jQuery Conf – Fri, 24th & Sat, 25th July
Talks on The jQuery Foundation, Grunt, AngularJS, TDD in JS, Securing jQuery Code, Performance beyond Page Load, Responsive Web, jQuery Gotchas, Functional Reactive Programming, RxJS, ReactJS, Om, Memory Leaks, D3 and WebRTC.
4. Hackathon hosted by Joomla project – Friday 24th 2:00 PM – Sat 25th 2:00 PM
Details will be published shortly…
Big thanks to Freshdesk for supporting this conference as a Diamond Sponsor.
Hotel Chancery Pavilion, Residency Road, Bangalore
Facebook – https://www.facebook.com/jqueryconf
Twitter – https://twitter.com/jqueryconf
LinkedIn – https://www.linkedin.com/grp/home?gid=8301395
Website – http://jqueryconf.in
June 28th, 2015
If you are using Opauth-Twitter and suddenly find that Twitter OAuth fails on OS X Yosemite, it could be because of a CA certificate issue.
In OS X Yosemite 10.10, Apple switched cURL from version 7.30.0 to 7.37.1 [curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 SecureTransport zlib/1.2.5], and since then cURL always tries to verify the SSL certificate of the remote server.
In previous versions, you could set curl_ssl_verifypeer to false and it would skip the verification. However, from 7.37, if you set curl_ssl_verifypeer to false, it complains: “SSL: CA certificate set, but certificate verification is disabled”.
Prior to version 0.60, tmhOAuth did not come bundled with the CA certificate and we used to get the following error:
SSL: can’t load CA certificate file <path>/vendor/opauth/Twitter/Vendor/tmhOAuth/cacert.pem
You can get the latest cacert.pem from http://curl.haxx.se/ca/cacert.pem and save it under /Vendor/tmhOAuth/cacert.pem. (The latest version of tmhOAuth already has this in its repo.)
Then set curl_ssl_verifypeer to true in the $defaults (optional parameters) in TwitterStrategy.php on line 48.
P.S.: Turning off curl_ssl_verifypeer is actually a bad security move. It can make your server vulnerable to man-in-the-middle attacks.
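For readers outside PHP, the same idea can be sketched in Python (an illustrative analogue, not the tmhOAuth code itself): build an SSL context that verifies the remote server's certificate against a CA bundle instead of turning verification off. The `ca_bundle` path is whatever location you saved the downloaded cacert.pem to.

```python
import ssl

# Illustrative Python analogue of the fix (the actual fix is in PHP/tmhOAuth):
# build an SSL context that verifies the remote server's certificate chain
# against a CA bundle, rather than disabling verification.
def make_verified_context(ca_bundle=None):
    if ca_bundle:
        # e.g. the cacert.pem you downloaded from curl.haxx.se
        ctx = ssl.create_default_context(cafile=ca_bundle)
    else:
        ctx = ssl.create_default_context()  # fall back to the system trust store
    # These are already the defaults; shown explicitly to mirror
    # setting curl_ssl_verifypeer to true.
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

ctx = make_verified_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

You would then pass `ctx` as the `context` argument to, say, `urllib.request.urlopen`; the request fails loudly if the chain cannot be verified, instead of silently trusting a possible man-in-the-middle.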
May 1st, 2015
Agile India 2016 Conf is Asia’s Largest & Premier Conference on Agile, Lean, Scrum, eXtreme Programming, Lean-Startup, Kanban, Continuous Delivery, DevOps, Patterns and more…
This time we are hosting a mega eight-day conference, starting on March 14th (Monday), where experts and practitioners from around the world will share their experiences. The number of parallel tracks will be decided based on the quality of the proposals we get. We are hoping that the conference will host at least 3 parallel tracks.
Overall Agenda (tentative):
- Pre-Conference Workshop – 14th and 15th March (10:00 AM – 6:00 PM)
- Research Camp – 15th March (10:00 AM – 5:00 PM)
** Research Paper Presentation
** Open Space
** Brainstorming on improving Industry-Academia Collaboration
- Executive Leadership Conclave – 15th March (5:00 PM – 10:00 PM)
** Keynote – 60 mins
** Fishbowl – 90 mins
** Group Activity on Future Direction – 90 mins
** Cocktail Dinner Party
- Lean Startup – 16th March (9:00 AM – 6:30 PM)
** Customer Development (Product Discovery)
** Crafting MVPs & Safe-Fail Experimentation
** Design Thinking
** Lean UX
** Lean Delivery
** Actionable Metrics
** 90 mins Hands-On Workshops
- Enterprise Agile – 17th March (9:00 AM – 6:30 PM)
** Scaling Agile – Frameworks
** People (career) & Performance Appraisals
** Tools – Portfolio Management, Distributed Teams
** 90 mins Hands-On Workshops
- Continuous Delivery & DevOps – 18th March (9:00 AM – 6:30 PM)
** Culture Transformation
** Software Craftsmanship
*** TDD/BDD, CI, Refactoring
*** Evolutionary Design
*** Test Pyramid
*** Legacy Code
** Cross-functional Team Collaboration
** DevOps Tools – Build, Deployment, Monitoring
** 90 mins Hands-On Workshops
- Agile in the Trenches – 19th March (9:00 AM – 6:30 PM)
** Agile Challenges (20 mins experience reports only)
** Abuse of Agile (20 mins experience reports only)
** Agile Hacks – How did you tweak standard agile practices to work in your context? (20 mins experience reports only)
** Agile Tools Ecosystem
*** Visibility Tools – Project Management, Information Radiators
*** Feedback Tools – Code Quality, CI, Deployment, A/B Testing
** 90 mins Hands-On Workshops
- Post-Conference Workshop – 20th and 21st March (10:00 AM – 6:00 PM)
We need your help to pull this off.
Roles, Responsibilities and Compensation for Program Committee Members: http://bit.ly/ai2016-program-team
–> Over the next 10 months, you would be expected to dedicate 30 mins every day (including weekends) to fulfil your role. Please apply only if you are sure you can commit to it.
DUE DATE: May 15th.
Apply here: http://bit.ly/agileindia16-cfpc
April 11th, 2015
I’m surprised when people think Agile is perfect, and that any shortcomings are not a problem with Agile itself but with the person’s/team’s/org’s understanding or implementation. Somewhere along the way, the idea that “We are uncovering better ways of developing software” was lost, and agile became this static, rule-based, prescriptive and dogmatic cargo-cult thing.
IMHO Agile has made a significant difference (some of it as a placebo effect) to the software industry. However, it has some serious limitations when you try to apply it beyond simple CRUD-based applications:
- Agile works well in linear or organised-complexity domains where the problem is fairly well understood (static) and we need to find/evolve the solution iteratively and incrementally. But in domains where:
- the problem itself is unknown or constantly shifting,
- the problem has a dozen or so variables that interact non-linearly, for example:
- in life sciences, where we’re trying to understand ageing/growth
- in anti-terrorism, where we have to deal with a crisis situation
- when simulating chaotic systems like the Indian traffic system
- trying to predict outcomes in systems with distributed intelligence
applying agile values, principles and practices is not the best approach. We often find ourselves lacking the right kind of thought process and tools to manage such projects.
- Even though the Agile luminaries claim that Agile treats software development as a Complex Adaptive System, they actually apply techniques that work in a Complicated Domain.
- For example, given a problem, we analyse the problem, figure out a best-bet solution (a set of practices), apply the solution, see what happens, do a retrospective and tweak the solution (inspect and adapt). This is how you work in a complicated domain. In a complex adaptive domain, we try a few independent safe-fail experiments to solve the problem, but most importantly we run all those experiments in parallel (a set-based development approach), so we can really amplify good patterns and dampen bad patterns. We manage the emergence of beneficial patterns with attractors within boundaries. It’s like running 5 parallel A/B tests and then coming up with a solution.
- Agile folks seem to claim that distributed development is hard and you should always prefer collocation. But what about the thousands of successful open-source projects built by people who’ve never met each other? We seem to be missing something here. The open-source project model seems to be way better at motivating people by giving them autonomy, mastery and a sense of purpose. Most agile projects are not able to match this.
- Today velocity and a bunch of other vanity metrics are killing agility. There seems to be so much focus on output and very little on outcome and learning. Agile has very little to offer in the space of customer development, business model validation, user experience and other important aspects required for a successful product launch, which is what the Lean-Startup movement is trying to address. This is clearly a limitation of Agile methods.
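The set-based, safe-fail experimentation mentioned above can be sketched in a few lines of Python (purely illustrative; the experiment names and success rates are made up): run several cheap, independent experiments in parallel, then amplify the winner and dampen the rest.

```python
import random

# Purely illustrative sketch of set-based, safe-fail experimentation:
# run several independent experiments in parallel (like 5 A/B tests side
# by side), then amplify the pattern that worked and dampen the rest.
def run_parallel_experiments(experiments, trials=1000, seed=42):
    rng = random.Random(seed)
    observed = {}
    for name, true_rate in experiments.items():
        # Each experiment is a cheap, independent probe of the problem space.
        successes = sum(rng.random() < true_rate for _ in range(trials))
        observed[name] = successes / trials
    best = max(observed, key=observed.get)  # amplify the winner
    return best, observed

# Hypothetical experiments with their (unknown to us) true success rates.
experiments = {"exp-A": 0.30, "exp-B": 0.55, "exp-C": 0.40}
best, observed = run_parallel_experiments(experiments)
print(best)  # exp-B: with enough trials the strongest pattern surfaces
```

The point of running the probes in parallel, rather than one retrospective-and-tweak cycle at a time, is that you learn about all the options under the same conditions before committing to one.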
What’s your take?
January 26th, 2015
TL;DR: Definition of Done (DoD) is a checklist-driven project management practice that drives compliance and contract negotiation rather than collaboration and ownership. It’s very easy for teams to go down rat-holes and start to gold-plate crap in the name of DoD. It encourages a downstream, services-thinking mindset rather than a product engineering mindset (very output-centric, rather than outcome/impact-focused.) It also smells of a lack of maturity and trust in the team. Bottom line: it’s the wrong tool in the wrong hands.
The Scrum Guide™ describes DoD as a tool for bringing transparency to the work a Scrum Team is performing. It is related more to the quality of a product, rather than its functionality. The DoD is usually a clear and concise list of requirements that a software Increment must adhere to for the team to call it complete.
They recommend that having a clear DoD helps Scrum Teams to:
- Work together more collaboratively, increase transparency, and ultimately develop consistently higher-quality software.
- Clarify the responsibilities of story authors and implementors.
- Know how much work to select for a given Sprint.
- Be clear about the scope of work.
- Enable transparency within the Scrum Team and baseline progress on work items.
- Visualize done on posters and/or electronic tools.
- Track how many stories are done or unfinished.
- Expose work items that need attention.
- Determine when an Increment is ready for release.
Also according to them, DoD is not changed during a Sprint, but should change periodically between Sprints to reflect improvements the Development Team has made in its processes and capabilities to deliver software.
According to the LeSS website, DoD is an agreed list of criteria that the software will meet for each Product Backlog Item. Achieving this level of completeness requires the Team to perform a list of tasks. When all tasks are completed, the item is done. Don’t confuse DoD with acceptance criteria, which are specific conditions an individual item has to fulfil to be accepted. DoD applies uniformly to all Product Backlog Items.
If you search online, you’ll find sample DoD for user stories more or less like this:
- Short Spec created
- Implemented/Unit Tests created
- Acceptance Tests created
- Code completed
- Unit tests run
- Code peer-reviewed or paired
- Code checked in
- Documentation updated
- 100% Acceptance tests passed
- Product Owner demo passed
- Known bugs fixed
- Upgrade verified while keeping all user data intact.
- Potentially releasable build available for download
- Summary of changes updated to include newly implemented features
- Inactive/unimplemented features hidden or greyed out (not executable)
- Unit tests written and green
- Source code committed on server
- Jenkins built version and all tests green
- Code review completed (or pair-programmed)
- How to Demo verified before presentation to Product Owner
- Ok from Product Owner
Do you see the problem with DoD? If not, read on:
- Checklist Driven: It feels like a hangover from checklist-driven project management practices. It treats team members as dumb checklist bots rather than as smart individuals who can work collaboratively to achieve a common goal.
- Compliance OVER Ownership: It drives compliance rather than ownership and entrepreneurship (making smart, informed, contextual decisions.)
- Wrong Focus: If you keep it simple, it sounds too basic or even lame to be written down. If you really focus on it, it feels heavy-handed and soaked in progress-talk. It seems like the problem DoD is trying to solve is a lack of maturity and/or trust within a team, and if that’s your problem, then DoD is the wrong focus. For example, certain teams are not able to take end-to-end ownership of a feature. Instead of putting checkpoints (in the name of DoD) at each team’s level and being happy about some work being accomplished by each team, we should break down the barriers and enable the teams to take end-to-end responsibility.
- Contract Negotiation OVER Collaboration: We believe in collaboration over contract negotiation; however, DoD feels more like a contract. Teams waste a lot of time arguing about what a good DoD is. You’ll often find teams gold-plating crap and then debating with the PO about why the story should be accepted. (Thanks to Alistair Cockburn for highlighting this point.)
- Output Centric: DoD is a very output-centric thought process, instead of focusing on end-to-end value delivery (the outcome/impact of what the team is working on.) It creates an illusion of “good progress” while you could be driving off a cliff. It mismanages risk by delaying real validation from end users. We seem to focus more on software creators (product owners, developers, etc.) than on software users. The emphasis is more on improving the process (e.g. increasing story throughput) than on improving the product. For example, it helps with tracking done work rather than discovering and validating users’ needs. DoD is more concerned with “doing” than with “learning”. (Thanks to Joshua Kerievsky for highlighting this point.)
- Lacks Product Engineering Mindset: Encourages a downstream, services-thinking mindset rather than a product engineering mindset. Unlike in the services business, in product engineering you are never done, and the cycle does not stop at promoting code to a higher environment (staging). Studying whether the feature you just deployed has a real impact on the user community is more important than checking off a task from your sprint backlog.
What should we do instead?
Just get rid of DoD. Get the teams to collaborate with the Product Management team (and the user community, if possible) to really understand the real needs and the least the team needs to do to solve the problem. I’ve coached several teams this way, and we’ve seen teams come up with creative ways to meet users’ needs and take ownership of end-to-end value delivery instead of gold-plating crap.
October 29th, 2014
Self-organised, self-managed and self-directed… do they mean the same thing, or are they actually different concepts, where one might be more desirable than the other?
In the context of an “agile” team, people seem to use these terms interchangeably. However, it’s important to note that there are subtle yet worthwhile distinctions between them.
Self-managed team: A group of people working together in their own ways, toward a common goal, which is defined outside the team.
For example, the CEO of a company decides to launch a new product to address the needs of a specific target market. An initial team is assembled with a budget and high-level timelines. This team decides how they want to operate within the given budget. The team does its own work scheduling, training, rewards and recognition, etc. They typically do a 360 review and rate other team members for salary appraisal. The team also manages itself and its stakeholders; they collectively play the manager’s role.
Self-directed team: A group of people working together in their own ways, toward a common goal, which the team itself defines.
Usually, the team comes together for a common cause. In addition to the characteristics highlighted under self-managed teams, a self-directed team also handles the actual compensation and discipline, and acts as a profit centre by defining its own future. In some sense, open-source projects resemble these characteristics. There is a big element of self-selection and built-in synergy.
Self-managed and self-directed teams have noticeable differences in terms of autonomy and how they actually operate because of it. Listed below are attributes to consider when deciding how to structure the teams in your organisation:
| Attribute | Self-managed team | Self-directed team |
| --- | --- | --- |
| Goals | Receives goals from leadership and determines how to accomplish them | Determines its own goals and formulates a strategy to accomplish them |
| Leadership communication | Requires frequent, open communication from leadership on company goals and objectives to build employee commitment and increase morale | The team itself creates an environment of high innovation, commitment and motivation in team members |
| Skills needed | Conducting effective meetings, problem solving, project planning and team skills | Decision making, entrepreneurship, conflict resolution and problem-solving techniques |
| Supervision | Requires little supervision to track the team’s progress and direction | Prefers to work without supervision |
| Customer impact | Can increase customer satisfaction through better response time in getting work done and resolving important customer problems | Can delight customers by focusing on innovation, problem solving and reduced cycle time (local, informed decision making) |
| Time to get team up & running | Relatively faster to get the team working together, since the goal is given to them. Once started, they might face challenges due to lack of focus and motivation, but at least they get started quickly | Forming a team of high-calibre people who can quickly converge on a common goal is hard. It can be expensive and time-consuming to keep the team together and resolve conflicts, but once the team gels, its performance is unmatchable |
| External support | Requires some help from supporting teams like Learning and Development, Human Resources, etc. | Pretty much self-contained; can manage with very little external support |
| Executive leadership involvement | Requires leadership to guide, motivate and track the team’s direction | Requires a system that provides two-way communication of corporate strategy between leaders and their teams |
Hopefully, this highlights the difference between self-managed and self-directed. What about self-organised?
First let’s understand what self-organisation, as a phenomenon means.
Self-organisation is a process where some form of global order or coordination arises out of the local interactions between the components of an initially disordered system. This process is spontaneous: it is not directed or controlled by any agent or subsystem inside or outside of the system; however, the laws followed by the process and its initial conditions may have been chosen or caused by an agent. It is often triggered by random fluctuations that are amplified by positive/negative feedback. The resulting organisation is wholly decentralised or distributed over all the components of the system. As such it is typically very robust and able to survive and self-repair substantial damage or perturbations.
Self-organisation occurs in a variety of physical, chemical, biological, social and cognitive systems. Common examples are crystallisation, the emergence of convection patterns in a liquid heated from below, chemical oscillators, the invisible hand of the market, swarming in groups of animals, and the way neural networks learn to recognise complex patterns. Self-organisation is also relevant in chemistry, where it has often been taken as being synonymous with self-assembly.
Sometimes the notion of self-organisation is conflated with that of the related concept of emergence. Properly defined, however, there may be instances of self-organisation without emergence and emergence without self-organisation, and it is clear from the literature that the phenomena are not the same. The link between emergence and self-organisation remains an active research question.
Self-organisation usually relies on three basic ingredients:
- Strong dynamical non-linearity, often though not necessarily involving positive and negative feedback
- Balance of exploitation and exploration
- Multiple interactions
Self-organisation in biology
A flock of birds is a classic example of self-organisation in biology. According to Scott Camazine, “In biological systems self-organisation is a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. Moreover, the rules specifying interactions among the system’s components are executed using only local information, without reference to the global pattern.”
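The flocking idea can be sketched in a toy Python simulation (illustrative only; the rule and parameters are arbitrary): each “bird” applies one simple rule, drifting a little toward the average position of the flock, and a global pattern (a tight flock) emerges without any central controller. Real flocking models such as Boids use only nearby neighbours; here every bird sees the whole flock, purely for brevity.

```python
# Toy sketch of self-organisation: one simple rule per bird, global order
# emerges. The cohesion factor and starting positions are arbitrary.

def centroid(positions):
    n = len(positions)
    return (sum(x for x, _ in positions) / n, sum(y for _, y in positions) / n)

def spread(positions):
    """Average distance of birds from the flock's centroid."""
    cx, cy = centroid(positions)
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in positions) / len(positions)

def step(positions, cohesion=0.1):
    """One tick: every bird moves 10% of the way toward the centroid."""
    cx, cy = centroid(positions)
    return [(x + cohesion * (cx - x), y + cohesion * (cy - y)) for x, y in positions]

birds = [(0.0, 0.0), (10.0, 2.0), (3.0, 9.0), (8.0, 8.0), (1.0, 5.0)]
before = spread(birds)
for _ in range(20):
    birds = step(birds)
after = spread(birds)
print(after < before)  # True: order emerges without any central controller
```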
Now let’s look at what a self-organised team is. Actually, the real question to ask is: what aspects of the team do they self-organise?
IMHO both self-managed and self-directed teams use self-organisation to achieve their objectives. Self-managed teams mostly self-organise to achieve their tasks, while self-directed teams also use self-organisation to form the team itself. It almost feels like self-managed/self-directed is one dimension (abstraction), while self-organised is a slightly different dimension (implementation.) While it feels like you cannot be self-managed or self-directed without self-organisation, I’m not 100% sure.
October 12th, 2014
At recent conferences, I’ve had several people ask me the following:
I would like to better understand the expectations from the organising committee on the talk proposals. In particular, I would like your feedback on my talk submission so that I can work on improving the same.
I think this is a very valid question:
What is the selection criteria for the talks?
I’ve been organising conferences for a decade now, and the following is my perspective:
In terms of the overarching themes or values, we look at the following during selection:
- Diversity – As a conference, we want to be more inclusive (different approaches, different programming languages, gender, countries, background, etc.)
- Balance – We want to strike a good balance between different types of presentations (expert talks, experience reports, tutorials, workshops, etc.) and different types of experience the speakers bring to the conference.
- Equality – We encourage more student and women speakers. We won’t select a weak proposal just because it came from a student or a woman, but given two otherwise equal proposals, we’ll pick the one proposed by a student or a woman.
- Practicality – People come to a conference to learn, network, have an experience and leave motivated. Proposals which directly help with this are always preferred. A little bit of theory is good, but if the proposal lacks practical application, it does not really help the participants. Also, people learn more by doing than by listening; if a proposal has an element of “learn by doing”, it wins over other proposals. Take people on a learning journey.
- Opportunity – While we want to ensure the conference has at least 2/3 rock-solid speakers, we also want to give an opportunity to new speakers who have real potential.
- Originality – Original ideas win hands-down over copied ones. People always prefer listening to an idea from its creator rather than from a second or third person. However, you might have taken an idea and tweaked it in your context, and gained an insight by doing so. We certainly want to hear your first-hand experience, even if you were not the creator of the original idea. We are looking for thought leadership.
- Radical Ideas – We really respect people who want to push the boundaries and challenge the status quo. We have a soft corner for unconventional ideas and will try our best to support them and bring awareness to their work.
- Demand – Votes on a proposal and buzz on social media give us an idea of how many people are really interested in the topic. (We fully understand votes can be gamed, but we have a system that can eliminate some bogus votes and use different types of patterns to give us a decent sense of the real demand.)
Once the proposal fits our value system, here is some basic/obvious stuff we expect when we look at the proposal in the submission system:
- Does the Title match the Abstract?
- Under the Outline/Structure of the Session, will the time break-up for each sub-topic do justice to the topic?
- Is there a logical sequencing/progression of the topics?
- Has the speaker selected the right session type and duration for the topic? For example, a 60-min talk might be very boring.
- Has the speaker selected the best matching Theme/Topic/Category for the proposal?
- Is the Target Audience specific and correct? Also does it match with the Session Level?
- Is the Learning Outcome clearly articulated? Ideally 3-5 points, one on each line.
- Based on the Outline/Structure, will the speaker be able to achieve the Learning Outcomes?
- Based on the presentation link, does the speaker have good-quality content and a good way to present it?
- Based on the video link, does the speaker have good presentation (edutainment) skills? Will the speaker be able to hold the attention of a large audience?
- Based on the additional links, does the speaker have subject matter expertise and thought leadership on the proposed topic?
- Are the Labels/Tags meaningful?
A proposal stands the best chance of being selected if it’s unique, fully fleshed out and ready to go. Speakers, please ensure you provide links to your:
- previous conference or user group presentations
- open source project contributions
- slides & videos of (present/past) presentations (other conferences or local user group or in-office)
- blog posts or articles on this topic
- and so on…
When selecting a proposal, we pay attention not only to the quality of the proposal, but also to the quality of the speaker, i.e. whether the speaker will be able to effectively present/share their knowledge with others. Hence past speaking experience (videos & slides) is extremely important. If you don’t have a video from a past conference presentation, that’s fine. Try to set up a Google Hangout at one of your upcoming local user group meetings or internal office meetings where you are presenting, and share that link. This will give the committee a feel for your presentation skills and subject matter expertise.
While this might look very demanding, it is extremely important to ensure we put together a program which is top-notch.
October 3rd, 2014
Many teams suffer daily due to slow CI builds. The teams certainly realise the pain, but don’t necessarily take any corrective action. The most common excuses are “we don’t have time” or “we don’t think it can get better than this”.
Following are some key principles I’ve used when confronted with long-running builds:
- Focus on the Bottlenecks – Profile your builds to find the real culprits; fixing those will help the most. In my experience the 80-20 rule applies here: fixing 20% of the bottlenecks will give you 80% of the gain in speed.
- Divide and Conquer – Turn large monolithic builds into smaller, more focused builds. This would typically lead to restructuring your project into smaller modules or projects, which is a good version control practice anyway. Also most CI servers support a build pipeline, which will help you hookup all these smaller builds together.
- Turn Sequential Tasks to Parallel Tasks – By breaking your builds into smaller builds, you can now run them in parallel. You can also distribute the tasks across multiple slave machines. Also consider running your tests in parallel. Many static analysis tools can run in parallel.
- Reuse – Don’t create/start from scratch if you can avoid it. For example: have pre-compiled code (jars) for dependent code instead of building it every time, especially if it rarely changes. Set up your target environment as a VM and keep it ready. Use a database dump for your seed data instead of building it from an empty DB every time. Many times we use an incremental compile/build instead of a clean build.
- Avoid/Minimise IO (Disk & Network) – IO can be a huge bottleneck. Turn down logging when running your builds. Prefer an in-process, in-memory DB, and consider tmpfs for an in-memory file system.
- Fail Fast – We want our builds to give us fast feedback, hence it’s very important to prioritise your build tasks based on what is most likely to fail first. In fact, long back we started a project called ProTest, which helps you prioritise your tests based on which ones are most likely to fail.
- Push Unnecessary Stuff to a Separate Build – Things like JavaDocs can be generated nightly.
- Once and Only Once – Avoid unnecessary duplication in steps, for example: copying src files or jars to another location, creating a new Jenkins workspace every build, empty DB creation, etc.
- Reduce Noise – Remove unnecessary data and files; work on a minimal yet apt set. Turn down logging levels.
- Time is Money – I guess I’m stating the obvious, but using newer, faster tools is actually cheaper. Moving from CVS/SVN to Git can speed up your build, and newer testing frameworks are faster. Hardware is getting cheaper day by day, while developers’ cost is going up. Invest in good hardware like SSDs, faster multi-core CPUs and better RAM; it would be way cheaper than your team waiting for builds.
- Profile, Understand and Configure – Ignorance can be fatal. When it comes to builds, you must profile to find the bottleneck, go deeper to understand what is going on, and then, based on data, configure your environment. For example, setting the right OS parameters and compiler flags can make a noticeable difference.
- Keep an Open Mind – Many times you will find the real culprit is some totally unrelated part of your environment. Many times we also find poorly written code that slows things down. One needs to keep an open mind.
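To make the Fail Fast principle concrete, here is a small Python sketch (in the spirit of the ProTest idea, not its actual implementation; the test names, failure rates and runtimes are hypothetical numbers you would mine from your CI history): order tests by historical failure rate so the most fragile ones run first and the build fails as early as possible.

```python
# Sketch of fail-fast test prioritisation: most fragile tests first,
# ties broken by runtime so quick tests run earlier.
def prioritise(tests):
    """Sort tests by historical failure rate (descending), then runtime."""
    return sorted(tests, key=lambda t: (-t["failure_rate"], t["runtime_s"]))

# Hypothetical per-test stats mined from CI history.
test_history = [
    {"name": "test_checkout", "failure_rate": 0.02, "runtime_s": 12.0},
    {"name": "test_login", "failure_rate": 0.20, "runtime_s": 1.5},
    {"name": "test_search", "failure_rate": 0.20, "runtime_s": 0.8},
    {"name": "test_profile", "failure_rate": 0.00, "runtime_s": 3.0},
]

ordered = [t["name"] for t in prioritise(test_history)]
print(ordered)  # ['test_search', 'test_login', 'test_checkout', 'test_profile']
```

If a fragile, fast test is going to break the build anyway, you want to know in the first minute, not after the 12-second stable tests have all run.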
Are there any other principles you’ve used?
BTW Ashish and I plan to present this topic at the upcoming Agile Pune 2014 Conference. Would love to see you there.