Lean Software Development

David J. Anderson is the author of three books: Lessons in Agile Management: On the Road to Kanban, published in 2012; Kanban: Successful Evolutionary Change for Your Technology Business,[1] published in 2010; and Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results,[2] published in 2003. He was a member of the team that created the Agile software development method Feature-Driven Development in Singapore between 1997 and 1999. He created MSF for CMMI Process Improvement, and he co-authored the Software Engineering Institute Technical Note "CMMI or Agile: Why Not Embrace Both!" He was a founder of the Lean Systems Society (http://www.leansystemssociety.org). He is CEO of Lean-Kanban University Inc., an accredited training and quality standards organization offering Kanban training through a network of partners throughout the world, and he leads an international management training and consulting firm, David J. Anderson & Associates Inc. (http://www.agilemanagement.net), which helps technology businesses improve their performance through better management policies and decision making.

The term Lean Software Development was first coined as the title of a conference organized by the ESPRIT initiative of the European Union in Stuttgart, Germany, in October 1992. Independently, in 1993, Robert "Bob" Charette suggested the concept of "Lean Software Development" as part of his work exploring better ways of managing risk in software projects. The term "Lean" dates to 1991, when James Womack, Daniel Jones, and Daniel Roos suggested it in their book The Machine That Changed the World: The Story of Lean Production[3] as the English-language term to describe the management approach used at Toyota. The idea that Lean might be applicable to software development was thus established very early, only one to two years after the term was first used in association with trends in manufacturing processes and industrial engineering.

In their second book, published in 1996, Womack and Jones[4] defined five core pillars of Lean Thinking. These were:

  • Value
  • Value Stream
  • Flow
  • Pull
  • Perfection

This became the default working definition of Lean for most of the next decade. The pursuit of perfection, it was suggested, was achieved by eliminating waste. While there were five pillars, it was the fifth, the pursuit of perfection through the systematic identification and elimination of wasteful activities, that really resonated with a wide audience. Lean became almost exclusively associated with the practice of eliminating waste through the late 1990s and the early part of the 21st century.

The Womack and Jones definition for Lean is not shared universally. The principles of management at Toyota are far more subtle. The single word “waste” in English is described more richly with three Japanese terms:

  • Muda – literally meaning “waste” but implying non-value-added activity
  • Mura – meaning “unevenness” and interpreted as “variability in flow”
  • Muri – meaning “overburdening” or “unreasonableness”

Perfection is pursued through the reduction of non-value-added activity but also through the smoothing of flow and the elimination of overburdening. In addition, the Toyota approach was based in a foundational respect for people and heavily influenced by the teachings of 20th century quality assurance and statistical process control experts such as W. Edwards Deming.

Unfortunately, there are almost as many definitions for Lean as there are authors on the subject.

Bob Charette was invited but unable to attend the 2001 meeting at Snowbird, Utah, where the Manifesto for Agile Software Development[5] was authored. Despite missing this historic meeting, Lean Software Development came to be considered one of several Agile approaches to software development. Jim Highsmith dedicated a chapter of his 2002 book[6] to an interview with Charette about the topic. Later, Mary and Tom Poppendieck went on to author a series of three books[7,8,9]. During the first few years of the 21st century, Lean principles were used to explain why Agile methods were better: Lean explained that Agile methods contained little "waste" and hence produced a better economic outcome. In this way, Lean principles served as a "permission giver" for adopting Agile methods.
In recent years, Lean Software Development has emerged as its own discipline, related to, but not specifically a subset of, the Agile movement. This evolution started with the synthesis of ideas from Lean Product Development, particularly the work of Donald G. Reinertsen[10,11], with ideas emerging from the non-Agile world of large-scale systems engineering and the writing of James Sutton and Peter Middleton[12]. This school of thought also synthesized the work of Eli Goldratt and W. Edwards Deming and developed a focus on flow rather than waste reduction. At the behest of Reinertsen, around 2005, kanban systems were introduced that limit work-in-progress and "pull" new work only when the system is ready to process it. Alan Shalloway added his thoughts on Lean software development in his 2009 book on the topic[14]. Since 2007, the emergence of Lean as a new force in the progress of the software development profession has been focused on improving flow, managing risk, and improving (management) decision making. Kanban has become a major enabler for Lean initiatives in IT-related work. It appears that a focus on flow, rather than on waste elimination, is proving a better catalyst for continuous improvement within knowledge work activities such as software development.
Defining Lean Software Development is challenging because there is no specific Lean Software Development method or process. Lean is not an equivalent of the Personal Software Process, the V-Model, the Spiral Model, EVO, Feature-Driven Development, Extreme Programming, Scrum, or Test-Driven Development. A software development lifecycle process or a project management process could be said to be "lean" if it is observed to be aligned with the values and principles of the Lean Software Development movement. Those anticipating a simple recipe that can be followed and named Lean Software Development will therefore be disappointed. You must fashion or tailor your own software development process by understanding Lean principles and adopting the core values of Lean.

There are several schools of thought within Lean Software Development. The largest, and arguably leading, school is the Lean Systems Society, which includes Donald Reinertsen, Jim Sutton, Alan Shalloway, Bob Charette, Mary Poppendieck, and David J. Anderson. Mary and Tom Poppendieck's work developed prior to the formation of the Society and stands separately from its credo, as does the work of Craig Larman and Bas Vodde[15,16] and, most recently, Jim Coplien[17]. This article seeks to be broadly representative of the Lean Systems Society viewpoint as expressed in its credo and to provide a synthesis and summary of its ideas.
The Lean Systems Society published its credo[18] at the 2012 Lean Software & Systems Conference[19]. This was based on a set of values published a year earlier. Those values include:

  • Accept the human condition
  • Accept that complexity & uncertainty are natural to knowledge work
  • Work towards a better Economic Outcome
  • While enabling a better Sociological Outcome
  • Seek, embrace & question ideas from a wide range of disciplines
  • A values-based community enhances the speed & depth of positive change
Knowledge work such as software development is undertaken by human beings. We humans are inherently complex and, while logical thinkers, we are also led by our emotions and some inherent animalistic traits that can’t reasonably be overcome. Our psychology and neuro-psychology must be taken into account when designing systems or processes within which we work. Our social behavior must also be accommodated. Humans are inherently emotional, social, and tribal, and our behavior changes with fatigue and stress. Successful processes will be those that embrace and accommodate the human condition rather than those that try to deny it and assume logical, machine-like behavior.
The behavior of customers and markets is unpredictable. The flow of work through a process and a collection of workers is unpredictable. Defects and required rework are unpredictable. There is inherent chance or seemingly random behavior at many levels within software development. The purpose, goals, and scope of projects tend to change while they are being delivered. Some of this uncertainty and variability, though initially unknown, is knowable in the sense that it can be studied and quantified and its risks managed, but some variability is unknowable in advance and cannot be adequately anticipated. As a result, systems of Lean Software Development must be able to react to unfolding events, and the system must be able to adapt to changing circumstances. Hence any Lean Software Development process must exist within a framework that permits adaptation (of the process) to unfolding events.
Human activities such as Lean Software Development should be focused on producing a better economic outcome. Capitalism is acceptable when it contributes both to the value of the business and the benefit of the customer. Investors and owners of businesses deserve a return on investment. Employees and workers deserve a fair rate of pay for a fair effort in performing the work. Customers deserve a good product or service that delivers on its promised benefits in exchange for a fair price paid. Better economic outcomes will involve delivery of more value to the customer, at lower cost, while managing the capital deployed by the investors or owners in the most effective way possible.
Better economic outcomes should not be delivered at the expense of those performing the work. Creating a workplace that respects people by accepting the human condition and provides systems of work that respect the psychological and sociological nature of people is essential. Creating a great place to do great work is a core value of the Lean Software Development community.
The Lean Software & Systems community seems to agree on a few principles that underpin Lean Software Development processes.

  • Follow a Systems Thinking & Design Approach
  • Emergent Outcomes can be Influenced by Architecting the Context of a Complex Adaptive System
  • Respect People (as part of the system)
  • Use the Scientific Method (to drive improvements)
  • Encourage Leadership
  • Generate Visibility (into work, workflow, and system operation)
  • Reduce Flow Time
  • Reduce Waste to Improve Efficiency
The first of these principles, following a systems thinking and design approach, is often referred to in Lean literature as "optimize the whole." It implies that it is the output from the entire system (or process) that we desire to optimize, and that we shouldn't mistakenly optimize parts in the hope that doing so will magically optimize the whole. Most practitioners believe the corollary to be true: optimizing parts (local optimization) will lead to a suboptimal outcome.

A Lean systems thinking and design approach requires that we consider the demands on the system made by external stakeholders, such as customers, and the desired outcome required by those stakeholders. We must study the nature of demand and compare it with the capability of our system to deliver. Demand includes so-called "value demand," for which customers are willing to pay, and "failure demand," which is typically rework or additional demand caused by a failure in the supply of value demand. Failure demand often takes two forms: rework on previously delivered value demand, and additional services or support due to a failure in supplying value demand. In software development, failure demand typically consists of requests for bug fixes and requests to a customer care or help desk function.

A systems design approach also requires that we follow the Plan-Do-Study-Act (PDSA) approach to process design and improvement. W. Edwards Deming used the words "study" and "capability" to imply that we study the natural philosophy of our system's behavior. This system consists of our software development process and all the people operating it. It will have an observable behavior in terms of lead time, quality, quantity of features or functions delivered (referred to in Agile literature as "velocity"), and so forth. These metrics will exhibit variability and, by studying the mean and spread of variation, we can develop an understanding of our capability. If this is mismatched with the demand and customer expectations, then the system will need to be redesigned to close the gap.

Deming also taught that capability is 95% influenced by system design, and that only 5% is contributed by the performance of individuals. In other words, we can respect people by not blaming them for a gap in capability compared to demand and by redesigning the system to enable them to be successful.

To understand system design, we must have a scientific understanding of the dynamics of system capability and how it might be affected. Models are developed to predict the dynamics of the system. While there are many possible models, several popular ones are in common usage: the understanding of economic costs, the so-called transaction and coordination costs that relate to production of customer-valued products or services; the Theory of Constraints, the understanding of bottlenecks; and the System of Profound Knowledge, the study and recognition of variability as either common to the system design or special and external to it.
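As a minimal, hypothetical illustration of studying capability against demand, the sketch below compares a sample of observed lead times with a customer expectation. The lead-time figures and the 20-day expectation are invented for the example, not drawn from any real system.

```python
# A minimal sketch of studying system capability, assuming hypothetical
# lead-time observations (in days) collected from a delivery process.
from statistics import mean, stdev

lead_times = [12, 9, 15, 22, 11, 14, 30, 10, 13, 17]  # hypothetical sample
customer_expectation_days = 20                         # hypothetical expectation

avg = mean(lead_times)
spread = stdev(lead_times)
within = sum(1 for t in lead_times if t <= customer_expectation_days)

print(f"Mean lead time: {avg:.1f} days, standard deviation: {spread:.1f} days")
print(f"{within}/{len(lead_times)} items met the "
      f"{customer_expectation_days}-day expectation")
# A large gap between capability and expectation suggests redesigning the
# system (per Deming, roughly 95% of capability comes from system design).
```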
Complex systems have starting conditions and simple rules that, when run iteratively, produce an emergent outcome. Emergent outcomes are difficult or impossible to predict given the starting conditions. The computer science experiment known as the Game of Life is an example of a complex system. A complex adaptive system has within it some self-awareness and an internal method of reflection that enables it to consider how well its current set of rules is enabling it to achieve a desired outcome. The complex adaptive system may then choose to adapt itself – to change its simple rules – to close the gap between the current outcome and the desired outcome. A Game of Life adapted so that its rules could be rewritten during play would be a complex adaptive system.

In software development processes, the "simple rules" of complex adaptive systems are the policies that make up the process definition. The core principle here is the belief that developing software products and services is not a deterministic activity, and hence a defined process that cannot adapt itself will not be an adequate response to unforeseeable events. The process designed as part of our systems thinking and design approach must therefore be adaptable; it adapts through the modification of the policies of which it is made.

The Kanban approach to Lean Software Development utilizes this concept by treating the policies of the kanban pull system as the "simple rules." The starting conditions are that work and workflow are visualized, that flow is managed using an understanding of system dynamics, and that the organization uses a scientific approach to understanding, proposing, and implementing process improvements.
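To make the idea of simple rules producing emergent outcomes concrete, here is a minimal sketch of one step of Conway's Game of Life. The rules are few and fixed, yet the long-run behaviour is hard to predict from the starting grid; a complex adaptive system would, in addition, be able to rewrite such rules during play. The glider pattern is just an illustrative starting condition.

```python
# Minimal sketch of Conway's Game of Life: a complex system whose "simple
# rules" (birth on 3 neighbours, survival on 2 or 3) produce emergent outcomes.
from collections import Counter

def step(live_cells):
    """Advance one generation. live_cells is a set of (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider": five cells whose long-run behaviour (travelling diagonally
# forever) is not obvious from the rules above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(generation, sorted(glider))
    glider = step(glider)
```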
The Lean community adopts Peter Drucker's definition of knowledge work, which states that workers are knowledge workers if they are more knowledgeable about the work they perform than their bosses. This creates the implication that workers are best placed to make decisions about how to perform work and how to modify processes to improve how work is performed. So the voice of the worker should be respected. Workers should be empowered to self-organize to complete work and achieve desired outcomes. They should also be empowered to suggest and implement process improvement opportunities, or "kaizen events" as they are referred to in Lean literature. Making process policies explicit so that workers are aware of the rules that constrain them is another way of respecting them. Clearly defined rules encourage self-organization by removing fear and the need for courage. Respecting people by empowering them and giving them a set of explicitly declared policies aligns with the core value of accepting the human condition.
Seek to use models to understand the dynamics of how work is done and how the system of Lean Software Development is operating. Observe and study the system and its capability, and then develop and apply models for predicting its behavior. Collect quantitative data in your studies, and use that data to understand how the system is performing and to predict how it might change when the process is changed.

The Lean Software & Systems community uses statistical methods, such as statistical process control charts and spectral analysis histograms of raw data for lead time and velocity, to understand system capability. It also uses models such as the Theory of Constraints, to understand bottlenecks; the System of Profound Knowledge, to understand variation that is internal to the system design versus variation that is externally influenced; and an analysis of economic costs in the form of tasks performed merely to coordinate, set up, deliver, or clean up after customer-valued products or services are created. Some other models are coming into use, such as Real Option Theory, which seeks to apply financial option theory from financial risk management to real-world decision making.

The scientific method suggests that we study; we postulate an outcome based on a model; we perturb the system based on that prediction; and we observe again to see whether the perturbation produced the results the model predicted. If it did not, we check our data and reconsider whether our model is accurate. Using models to drive process improvements makes improvement a scientific activity and elevates it above a superstitious activity based on intuition.
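As a hedged illustration of the statistical methods mentioned above, here is a minimal sketch (not the community's prescribed tooling) of an individuals (XmR) process-behaviour check over hypothetical lead-time data. Points outside the natural process limits suggest special-cause variation; everything else is treated as common cause.

```python
# Minimal sketch of an individuals (XmR) process-behaviour chart on
# hypothetical lead-time data (days). Natural limits = mean +/- 2.66 * average
# moving range; points outside the limits suggest special-cause variation.
from statistics import mean

lead_times = [11, 13, 9, 14, 12, 40, 10, 13, 12, 11]  # hypothetical sample

centre = mean(lead_times)
moving_ranges = [abs(b - a) for a, b in zip(lead_times, lead_times[1:])]
natural_limit = 2.66 * mean(moving_ranges)
upper, lower = centre + natural_limit, max(centre - natural_limit, 0.0)

for index, value in enumerate(lead_times):
    label = "special cause?" if not (lower <= value <= upper) else "common cause"
    print(f"item {index}: {value:>3} days  [{label}]")
print(f"centre {centre:.1f}, limits [{lower:.1f}, {upper:.1f}]")
```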
Leadership and management are not the same. Management is the activity of designing processes, creating, modifying, and deleting policy, making strategic and operational decisions, gathering resources, providing finance and facilities, and communicating information about context such as strategy, goals, and desired outcomes. Leadership is about vision, strategy, tactics, courage, innovation, judgment, advocacy, and many more attributes. Leadership can and should come from anyone within an organization. Small acts of leadership from workers will create a cascade of improvements that will deliver the changes needed to create a Lean Software Development process.
Knowledge work is invisible. If you can’t see something, it is (almost) impossible to manage it. It is necessary to generate visibility into the work being undertaken and the flow of that work through a network of individuals, skills, and departments until it is complete. It is necessary to create visibility into the process design by finding ways of visualizing the flow of the process and by making the policies of the process explicit for everyone to see and consider. When all of these things are visible, then the use of the scientific method is possible, and conversations about potential improvements can be collaborative and objective. Collaborative process improvement is almost impossible if work and workflow are invisible and if process policies are not explicit.
The software development profession and the academics who study software engineering have traditionally focused on measuring the time spent working on an activity. The Lean Software Development community has discovered that it might be more useful to measure the actual elapsed calendar time something takes to be processed. This is typically referred to as Cycle Time and is usually qualified by the boundaries of the activities performed. For example, Cycle Time from Analysis to Ready for Deployment would measure the total elapsed time for a work item, such as a user story, to be analyzed, designed, developed, tested in several ways, and queued ready for deployment to a production environment.

Focusing on the time work takes to flow through the process is important in several ways. Longer cycle times have been shown to correlate with a non-linear growth in bug rates; hence shorter cycle times lead to higher quality. This is counter-intuitive, as it seems ridiculous that bugs could be inserted into code while it is queuing and no human is actually touching it. Traditionally, the software engineering profession and the academics who study it have ignored this idle time. However, empirical evidence suggests that cycle time is important to initial quality.

Alan Shalloway has also talked about the concept of "induced work." His observation is that a lag in performing a task can lead to that task taking much more effort than it otherwise would have. For example, a bug found and fixed immediately may take only 20 minutes to fix, but if that bug is triaged, queued, and then waits several days or weeks to be fixed, it may take several or many hours to make the fix. Hence, the cycle time delay has "induced" additional work. As this work is avoidable, in Lean terms it must be seen as "waste."

The third reason for focusing on cycle time is a business reason. Every feature, function, or user story has a value. That value may be uncertain, but nevertheless there is a value, and the value may vary over time. The concept of value varying over time can be expressed economically as a market payoff function. When the market payoff function for a work item is understood, even if the function exhibits a spread of values to model uncertainty, it is possible to evaluate a "cost of delay." The cost of delay allows us to put a value on reducing cycle time.

With some work items, the market payoff function does not start until a known date in the future. For example, a feature designed to be used during the 4th of July holiday in the United States has no value prior to that date. Shortening cycle time and being capable of predicting cycle time with some certainty is still useful in such an example. Ideally, we want to start the work so that the feature is delivered "just in time" when it is needed, neither significantly prior to the desired date, nor late, as late delivery incurs a cost of delay. Just-in-time delivery ensures that optimal use is made of available resources; early delivery implies that we might have worked on something else and have, by implication, incurred an opportunity cost of delay.

As a result of these three reasons, Lean Software Development seeks to minimize flow time and to record data that enables predictions about flow time. The objective is to minimize failure demand from bugs and waste from overburdening due to delay in fixing bugs, and to maximize the value delivered by avoiding both cost of delay and opportunity cost of delay.
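A hedged sketch of the cost-of-delay reasoning above, using the 4th of July example: the payoff window, weekly value, and dates are hypothetical assumptions chosen for illustration, not a prescribed model.

```python
# Minimal sketch of cost-of-delay reasoning with a hypothetical market payoff:
# the feature earns weekly_value per week from its payoff start date until the
# payoff window closes; delivering late shortens the earning window.
from datetime import date, timedelta

def market_payoff(delivery: date, payoff_start: date, weekly_value: float,
                  weeks_of_life: int) -> float:
    """Value earned from max(delivery, payoff_start) until the window closes."""
    window_close = payoff_start + timedelta(weeks=weeks_of_life)
    first_earning_day = max(delivery, payoff_start)
    remaining_days = max((window_close - first_earning_day).days, 0)
    return weekly_value * remaining_days / 7

on_time = market_payoff(date(2013, 7, 4), date(2013, 7, 4), 10_000, 8)
late = market_payoff(date(2013, 7, 25), date(2013, 7, 4), 10_000, 8)
print(f"on-time payoff ${on_time:,.0f}, payoff if 3 weeks late ${late:,.0f}")
print(f"cost of 3 weeks' delay: ${on_time - late:,.0f}")
```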
For every value-added activity, there are setup, cleanup, and delivery activities that are necessary but do not add value in their own right. For example, a project iteration that develops an increment of working software requires planning (a setup activity), an environment and perhaps a code branch in version control (collectively known as configuration management, and also a setup activity), a release plan and performing the actual release (a delivery activity), a demonstration to the customer (a delivery activity), and perhaps an environment teardown or reconfiguration (a cleanup activity). In economic terms, the setup, cleanup, and delivery activities are transaction costs on performing the value-added work. These costs (or overheads) are considered waste in Lean.

Any form of communication overhead can also be considered waste. Meetings to determine project status and to schedule or assign work to team members would be considered a coordination cost in economic language. All coordination costs are waste in Lean thinking. Lean software development methods seek to eliminate or reduce coordination costs through the colocation of team members, short face-to-face meetings such as standups, and visual controls such as card walls.

The third common form of waste in Lean Software Development is failure demand. Failure demand is a burden on the system of software development: it is typically rework or new forms of work generated as a side-effect of poor quality. The most typical forms of failure demand in software development are bugs, production defects, and customer support activities driven by a failure to use the software as intended. The percentage of work-in-progress that is failure demand is often referred to as failure load. The percentage of value-adding work against failure demand is a measure of the efficiency of the system.

The percentage of value-added work against the total work, including all the non-value-adding transaction and coordination costs, determines the level of efficiency. A system with no transaction and coordination costs and no failure load would be considered 100% efficient.

Traditionally, Western management science has taught that efficiency can be improved by increasing the batch size of work. Typically, transaction and coordination costs are fixed or rise only slightly with an increase in batch size; as a result, large batches of work appear more efficient. This concept is known as "economy of scale." However, in knowledge work, coordination costs tend to rise non-linearly with batch size, while transaction costs often exhibit only linear growth. As a result, the traditional 20th-century approach to efficiency is not appropriate for knowledge work problems like software development.

It is better to focus on reducing the overheads while keeping batch sizes small in order to improve efficiency. Hence, the Lean way to be efficient is to reduce waste. Lean software development methods focus on quick, low-cost planning methods; low communication overhead; and effective, low-overhead coordination mechanisms such as visual controls in kanban systems. They also encourage automated testing and automated deployment to reduce the transaction costs of delivery. Modern tools for minimizing the costs of environment setup and teardown, such as modern version control systems and the use of virtualization, also help to improve the efficiency of small batches of software development.
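A minimal sketch of the efficiency calculation described above, using hypothetical effort figures for value-added work, transaction costs, coordination costs, and failure demand:

```python
# Minimal sketch of the Lean efficiency calculation with hypothetical numbers.
# Efficiency = value-added work / total work, where total work also includes
# transaction costs (setup, delivery, cleanup), coordination costs (meetings,
# status reporting), and failure demand (bug fixes, production support).
value_added_hours = 320
transaction_hours = 60     # planning, environment setup, release, teardown
coordination_hours = 45    # status meetings, scheduling, assignment
failure_demand_hours = 75  # bug fixes, production support

total_hours = (value_added_hours + transaction_hours
               + coordination_hours + failure_demand_hours)

efficiency = value_added_hours / total_hours
failure_load = failure_demand_hours / total_hours

print(f"Efficiency: {efficiency:.0%}")    # 100% would mean no overhead at all
print(f"Failure load: {failure_load:.0%}")
```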
Lean Software Development does not prescribe practices. It is more important to demonstrate that actual process definitions are aligned with the principles and values. However, a number of practices are being commonly adopted. This section provides a brief overview of some of these.

Cumulative flow diagrams have been a standard part of reporting in Team Foundation Server since 2005. Cumulative flow diagrams plot an area graph of cumulative work items in each state of a workflow. They are rich in information and can be used to derive the mean cycle time between steps in a process as well as the throughput rate (or "velocity"). Different software development lifecycle processes produce different visual signatures on cumulative flow diagrams. Practitioners can learn to recognize patterns of dysfunction in the process displayed in the area graph. A truly Lean process will show evenly distributed areas of color, rising smoothly at a steady pace; the picture will appear smooth, without jagged steps or visible blocks of color.

In their most basic form, cumulative flow diagrams are used to visualize the quantity of work-in-progress at any given step in the work item lifecycle. This can be used to detect bottlenecks and to observe the effects of "mura" (variability in flow).
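As a hedged sketch of how the counts behind a cumulative flow diagram can be derived (the workflow states, dates, and work items below are hypothetical), one could tabulate, per day, how many items have reached each state:

```python
# Minimal sketch of deriving cumulative-flow counts from work-item history.
# Each work item records the date it ENTERED each state; a CFD plots, per day,
# the cumulative number of items that have reached each state or beyond.
from datetime import date, timedelta

STATES = ["Analysis", "Development", "Test", "Done"]  # hypothetical workflow

# Hypothetical work items: state -> date the item entered that state.
items = [
    {"Analysis": date(2013, 5, 1), "Development": date(2013, 5, 3),
     "Test": date(2013, 5, 7), "Done": date(2013, 5, 9)},
    {"Analysis": date(2013, 5, 2), "Development": date(2013, 5, 6),
     "Test": date(2013, 5, 10), "Done": date(2013, 5, 12)},
    {"Analysis": date(2013, 5, 4), "Development": date(2013, 5, 8)},
]

day, end = date(2013, 5, 1), date(2013, 5, 12)
while day <= end:
    counts = {s: sum(1 for i in items if s in i and i[s] <= day) for s in STATES}
    print(day, counts)  # plotting these counts as stacked areas gives the CFD
    day += timedelta(days=1)

# Mean cycle time, Analysis -> Done, for completed items:
completed = [i for i in items if "Done" in i]
cycle_times = [(i["Done"] - i["Analysis"]).days for i in completed]
print("mean cycle time:", sum(cycle_times) / len(cycle_times), "days")
```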
In addition to the use of cumulative flow diagrams, Lean Software Development teams use physical boards, or projections of electronic visualization systems, to visualize work and observe its flow. Such visualizations help team members observe work-in-progress accumulating and enable them to see bottlenecks and the effects of “mura.” Visual controls also enable team members to self-organize to pick work and collaborate together without planning or specific management direction or intervention. These visual controls are often referred to as “card walls” or sometimes (incorrectly) as “kanban boards.”
A kanban system is a practice adopted from Lean manufacturing. It uses a system of physical cards to limit the quantity of work-in-progress at any given stage in the workflow. Such work-in-progress limited systems create a "pull," where new work is started only when there are free kanban indicating that new work can be "pulled" into a particular state and progressed.

In Lean Software Development, the kanban are virtual and are often tracked by setting a maximum number for a given step in the workflow of a work item type. In some implementations, electronic systems keep track of the virtual kanban and provide a signal when new work can be started. The signal can be visual or in the form of an alert such as an email.

Virtual kanban systems are often combined with visual controls to provide a visual virtual kanban system representing the workflow of one or several work item types. Such systems are often referred to as "kanban boards" or "electronic kanban systems." A visual virtual kanban system is available as a plug-in for Team Foundation Server, called Visual WIP[20]. This project was developed as open source by Hakan Forss in Sweden.
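A minimal sketch of the virtual kanban idea, assuming a hypothetical workflow, WIP limits, and board contents; work may be pulled into a state only while that state has free kanban:

```python
# Minimal sketch of virtual kanban: WIP limits per workflow state, and a
# "pull" signal raised only when a state has free kanban (spare capacity).
# States, limits, and items are hypothetical illustrations.
wip_limits = {"Analysis": 2, "Development": 3, "Test": 2}

board = {
    "Analysis": ["story-7"],
    "Development": ["story-3", "story-4", "story-5"],
    "Test": ["story-1"],
}

def free_kanban(state: str) -> int:
    """Number of new items that may be pulled into this state right now."""
    return wip_limits[state] - len(board[state])

def pull(state: str, item: str) -> bool:
    """Pull item into state only if a kanban (free slot) is available.
    A fuller implementation would also remove the item from its upstream state."""
    if free_kanban(state) > 0:
        board[state].append(item)
        return True
    return False  # signal: no capacity, do not start new work

print(pull("Development", "story-8"))  # False: Development is at its limit of 3
print(pull("Test", "story-2"))         # True: Test has a free kanban
```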
Lean Software Development requires that work is either undertaken in small batches, often referred to as "iterations" or "increments," or that work items flow independently, referred to as "single-piece flow." Single-piece flow requires a sophisticated configuration management strategy to enable completed work to be delivered while incomplete work is not released accidentally. This is typically achieved using branching strategies in the version control system. A small batch of work would typically be considered a batch that can be undertaken by a small team of eight people or fewer in under two weeks.

Small batches and single-piece flow require frequent interaction with business owners to replenish the backlog or queue of work. They also require a capability to release frequently. To enable frequent interaction with business people and frequent delivery, it is necessary to shrink the transaction and coordination costs of both activities. A common way to achieve this is the use of automation.
Lean Software Development expects a high level of automation to economically enable single-piece flow and to encourage high quality and the reduction of failure demand. The use of automated testing, automated deployment, and software factories to automate the deployment of design patterns and creation of repetitive low variability sections of source code will all be commonplace in Lean Software Development processes.
In Lean literature, the term kaizen means "continuous improvement," and a kaizen event is the act of making a change to a process or tool that hopefully results in an improvement.

Lean Software Development processes use several different activities to generate kaizen events; these are described below. Each of these activities is designed to stimulate a conversation about problems that adversely affect capability and, consequently, the ability to deliver against demand. The essence of kaizen in knowledge work is that we must provoke conversations about problems across groups of people from different teams and with different skills.
Teams of software developers, often up to 50 people, typically meet in front of a visual control system, such as a whiteboard, displaying a visualization of their work-in-progress. They discuss the dynamics of flow and the factors affecting the flow of work. Particular attention is paid to externally blocked work and work delayed due to bugs. Problems with the process often become evident over a series of standup meetings. The result is that a smaller group may remain after the meeting to discuss the problem and propose a solution or process change. A kaizen event will follow. These spontaneous meetings are referred to as spontaneous quality circles in older literature, and they are at the heart of a truly kaizen culture. Managers will encourage the emergence of kaizen events after daily standup meetings in order to drive adoption of Lean within their organization.
Project teams may schedule regular meetings to reflect on recent performance. These are often held after specific project deliverables are complete or after time-boxed increments of development, known as iterations or sprints in Agile software development.

Retrospectives typically use an anecdotal approach to reflection, asking questions like "what went well?", "what would we do differently?", and "what should we stop doing?" Retrospectives typically produce a backlog of suggestions for kaizen events. The team may then prioritize some of these for implementation.
An operations review is typically larger than a retrospective and includes representatives from a whole value stream. It is common for as many as 12 departments to present objective, quantitative data showing the demand they received and reflecting their capability to deliver against that demand. Operations reviews are typically held monthly. The key differences between an operations review and a retrospective are that operations reviews span a wider set of functions, typically span a portfolio of projects and other initiatives, and use objective, quantitative data. Retrospectives, in comparison, tend to be scoped to a single project; involve just a few teams, such as analysis, development, and test; and are generally anecdotal in nature.

An operations review will provoke discussions about the dynamics affecting performance between teams. Perhaps one team generates failure demand that is processed by another team? Perhaps that failure demand is disruptive and causes the second team to miss its commitments and fail to deliver against expectations? An operations review provides an opportunity to discuss such issues and propose changes. Operations reviews typically produce a small backlog of potential kaizen events that can be prioritized and scheduled for future implementation.

There is no such thing as a single Lean Software Development process. A process could be said to be Lean if it is clearly aligned with the values and principles of Lean Software Development. Lean Software Development does not prescribe any practices, but some activities have become common. Lean organizations seek to encourage kaizen through visualization of workflow and work-in-progress and through an understanding of the dynamics of flow and the factors (such as bottlenecks, non-instant availability, variability, and waste) that affect it. Process improvements are suggested and justified as ways to reduce sources of variability, eliminate waste, improve flow, or improve value delivery or risk management. As such, Lean Software Development processes will always be evolving and uniquely tailored to the organization within which they evolve. It is not realistic to simply copy a process definition from one organization to another and expect it to work in a different context, and it is unlikely that someone returning to an organization after a few weeks or months would find the process in use to be the same as the one observed earlier. It will always be evolving.

The organization using a Lean software development process could be said to be Lean if it exhibited only small amounts of waste in all three forms (“mura,” “muri,” and “muda”) and could be shown to be optimizing the delivery of value through effective management of risk. The pursuit of perfection in Lean is always a journey. There is no destination. True Lean organizations are always seeking further improvement.

Lean Software Development is still an emerging field, and we can expect it to continue to evolve over the next decade.

  1. Anderson, David J., Kanban: Successful Evolutionary Change for your Technology Business, Blue Hole Press, 2010
  2. Anderson, David J., Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results, Prentice Hall PTR, 2003
  3. Womack, James P., Daniel T. Jones and Daniel Roos, The Machine That Changed the World: The Story of Lean Production, updated edition, Free Press, 2007
  4. Womack, James P., and Daniel T. Jones, Lean Thinking: Banish Waste and Create Wealth in your Corporation, 2nd Edition, Free Press, 2003
  5. Beck, Kent et al, The Manifesto for Agile Software Development, 2001 http://www.agilemanifesto.org/
  6. Highsmith, James A., Agile Software Development Ecosystems, Addison Wesley, 2002
  7. Poppendieck, Mary and Tom Poppendieck, Lean Software Development: An Agile Toolkit, Addison Wesley, 2003
  8. Poppendieck, Mary and Tom Poppendieck, Implementing Lean Software Development: From Concept to Cash, Addison Wesley, 2006
  9. Poppendieck, Mary and Tom Poppendieck, Leading Lean Software Development: Results are not the Point, Addison Wesley, 2009
  10. Reinertsen, Donald G., Managing the Design Factory, Free Press, 1997
  11. Reinertsen, Donald G., The Principles of Product Development Flow: Second Generation Lean Product Development, Celeritas Publishing, 2009
  12. Sutton, James and Peter Middleton, Lean Software Strategies: Proven Techniques for Managers and Developers, Productivity Press, 2005
  13. Anderson, David J., Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results, Prentice Hall PTR, 2003
  14. Shalloway, Alan, and Guy Beaver and James R. Trott, Lean-Agile Software Development: Achieving Enterprise Agility, Addison Wesley, 2009
  15. Larman, Craig and Bas Vodde, Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-scale Scrum, Addison Wesley Professional, 2008
  16. Larman, Craig and Bas Vodde, Practices for Scaling Lean & Agile Development: Large, Multisite, and Offshore Product Development with Large-Scale Scrum, Addison Wesley Professional, 2010
  17. Coplien, James O. and Gertrud Bjornvig, Lean Architecture: for Agile Software Development, Wiley, 2010
  18. http://leansystemssociety.org/credo/
  19. http://lssc12.leanssc.org/
  20. http://hakanforss.wordpress.com/2010/11/23/visual-wip-a-kanban-board-for-tfs/

Agile Software Development: Eight Tips for Better Code Testing

You know about agile software development, wherein coding is quick and continuous. Due to continual releases and ongoing development, testing is an integral part of agile development. Without testing the builds more frequently and effectively, you cannot ensure the quality of the build. There are a few challenges faced by agile testers:

  • Creating daily builds and testing them
  • Collecting requirements and the amount of time committed
  • Keeping the meetings short and code inspections long

An agile tester should be highly proficient with his tools, be a team player, and have good coding skills. Here are eight tips for you to be more efficient in agile software testing.

Tips for Better Agile Software Code Testing

1. Modify Your Character Traits

Successful agile testers have specific character traits and mindsets. You should be passionate about coding, creative to some extent, and forthcoming with your opinions. Soft skills are important: in communication, management, and leadership. Agile development and testing require you to know the clients' expectations before the delivery of the program.

2. Learn How the Data Flows Through the Application

In order to analyze your application and know how it works, first learn how the data flows inside it. Knowing the data flow will tell you volumes about the components and how they interact with each other. It will also give you important information on the data security of the application. Knowledge of the data flow is very important for recognizing and reporting defects in your app.

3. Application Log Analysis

Testing the AUT (application under test) requires you to analyze its logs, especially in agile testing. These logs give you a lot of information about the system architecture of the AUT. You may have heard about "silent errors." These errors don't show their effects to the end users immediately. Log analysis is your friend if you want to spot silent errors faster and be more useful to the development team.
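As a hedged sketch of this kind of log analysis (the log lines and the patterns of interest are hypothetical), a tester might scan application logs for signals of silent errors like this:

```python
# Minimal sketch of scanning application logs for "silent errors" -- problems
# logged by the system but not yet visible to end users. The log lines and the
# patterns of interest are hypothetical.
import re

SUSPICIOUS = re.compile(r"(WARN|ERROR|retry|timeout|deadlock)", re.IGNORECASE)

log_lines = [
    "2013-05-01 10:02:11 INFO  order 1001 accepted",
    "2013-05-01 10:02:12 WARN  payment gateway timeout, retry 1 of 3",
    "2013-05-01 10:02:15 INFO  order 1001 confirmed",
    "2013-05-01 10:03:40 ERROR deadlock detected in inventory update",
]

for line in log_lines:
    if SUSPICIOUS.search(line):
        print("investigate:", line)
```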

4. Change- and Risk-Based Testing

In an agile environment, software coding and testing happen fast. The time to market for the application is very important here, and the development and testing teams work together to achieve minimum go-to-market times. In this environment, it is important to understand which parts of the application are being changed in each modification. If you can estimate the overall effect of a change, you can better spot bugs and errors.

5. Know the Objectives

You, the agile tester, have to perceive the application as an end user would and use it in the way an end user would. This means that, in order to come up with the best testing strategy, you should understand the key areas, parts, or features of the application that an end user is most likely to use. You may also need separate strategies for the product architecture. The end-user focus helps you test against the application's business objectives, which means you can prioritize defects more easily. Meeting the needs of the end user is the most important aspect of software development anyway.

6. Use Browser Plugins and Tools

Agile testers may from time to time realize the value of browser tools. Google Chrome and Mozilla Firefox come with built-in developer tools. These tools allow the tester to spot errors quickly. You can also use third-party plugins (Firebug, for example) for testing.

7. Repositories of Requirements

You have to know the type of agile strategy that your organization uses: Agile Unified Process (AUP), Adaptive Software Development (ASD), Scrum, Kanban, etc. The testing and development teams may create documents on test cases, and you should analyze all the documentation. Over time, the requirements and test scenarios accumulate into a large repository, from which you can gather quite a bit of information.

8. Test Early, Often, and Always

Exploratory Testing (ET) is the sort of testing in which test design and execution happen at the same time. ET is an important agile practice. In order to develop and deliver an application, testing has to be done as early, as often, and as continuously as possible. Other testing types, such as functional and load testing, should also be incorporated into the project plan for more efficiency.

Conclusion

Agile development depends a great deal on the stages of development; the process matters as much as the end product. This is the reason why testing has become a major part of development. In current agile development scenarios, unlike in the older approaches, software companies and professionals take a real-time look at testing environments and test cases.

LeanKit – Future of Visual Management

LeanKit's core product is software that allows customers to create kanban-style boards for managing teams and projects. It is a pretty great tool, and the company is always working to make it better. But a tool is only as good as the process it supports.

Using LeanKit by itself won't magically make your team better. Using LeanKit to implement Lean-Agile management practices effectively, with good technical practices, in a healthy, supportive working environment can work wonders. The LeanKit team are active participants in the Lean-Agile community, going to a lot of events. Of course, part of that is because they have a product to sell. But they are also keenly interested in the latest ideas from community thought leaders, and they want to see and hear how customers and potential customers are "doing" kanban effectively. That informs their product development, they incorporate those ideas into how they run LeanKit as a company, and they like to share their experiences as a kanban team back to the community.

Which brings us to the future of visual management. A kanban board works best if the team sees it all the time. A whiteboard with sticky notes does that automatically, at least for the people in the room, but it doesn't work so well for a distributed team. An electronic system like LeanKit solves that problem, but you run the risk of the board becoming a status reporting system that people look at occasionally rather than an always-visible information radiator and hub for collaboration. So how do you get the best of both worlds?

The LeanKit team have long thought that the answer lay in interacting with LeanKit via a large-screen TV. They have seen customers use giant smart touchscreens like those from Smart Technologies. These are awesome products made by a great company and, they think, well worth it if you can afford them. But not every departmental manager can justify that kind of capital investment. So they have experimented with retail-available touchscreens, such as the HP TouchSmart, connected to a normal computer. A very nice option, but still fairly expensive, say $3,000 to $4,000 for a screen and computer, and more than they felt comfortable recommending to most customers as a real-world, actionable solution.

A plain old big-screen LCD is great as a pure information radiator. You can get a 50-inch for about $600 on Amazon. Since a big screen will last years, you're really talking about 50 cents a day in cost. That should be very doable if you think about the hourly labor rate for most of the teams doing kanban and/or the value of the products they produce. But what about interactivity? The touchscreens may be expensive, but they let you move cards on your LeanKit board, not just view them. You can hook up a computer to the LCD, but the cost of a real PC seems a bit much for a screen you only occasionally interact with, and the user interface is a little clumsy for interacting with the board on the screen. Do you put a desk in front of the screen where you move the mouse? Not practical.

Enter the smart TVs. For those who haven't seen one yet, a smart TV combines (obviously) a TV with a decent-but-not-over-the-top computer processor, integrated WiFi and web browsing, and point-and-click/drag-and-drop interaction with the screen. You can get this included in newer TVs, or you can buy add-on devices that plug into a TV. The LeanKit team have tried several models and liked the LG G2 as the best example of an integrated device and the Sony Internet Player with Google TV as the best of the add-on options.

The integrated device has the benefit of utter simplicity: buy it, hang it on the wall, plug it in, go. And they're not too expensive, about $1,500 for the 55-inch. The team have found, however, that they prefer the Sony add-on device. First, it is definitely cheaper, about $150 plus the TV, so about $800 total cost using the 50-inch Panasonic mentioned above. They also prefer the style of remote that comes with it. The LG has a point-and-click Wii-mote-style controller; that's intuitive but a little touchy for fine-grained mouse movements. The Sony has more of a touchpad controller, like your laptop's, only in the palm of your hand. Both controllers have a full QWERTY keyboard on the back. And, even though it is an add-on, all you have to do is plug it into the HDMI port of the TV. The remote is even easily programmable to replace the TV remote, and the extra install time relative to the LG was measured in minutes. Making things even better, you can connect other peripherals to the TV through the Sony box.
In the picture accompanying this story, a Logitech Skype webcam is connected to the TV (just a 42-inch in this case, with a new 50-inch arriving later in the week) through the Sony box. This allows always-on HD video conferencing between teams in multiple locations, combined with always-on interactive electronic kanban. It cost less than $1,000 per location and was installed in minutes (minus the TV bracket) without any special skills or tools; the sales and marketing team did this, not the engineers. And you would not believe how much it improves the quality of interaction between remote teams. If your entire team can be in the same room to work together all the time, awesome. But that's a luxury most teams can't manage; distributed teams are the reality for most of us. With the latest technology (including LeanKit!) you can retain much more of the experience of being together than ever before, and you can do it easily and cheaply. You probably don't even need to get permission or a special budget allocation. Order the devices from Amazon today, have them installed in a few days, and start reaping the benefits immediately.

Continuous Delivery

Getting software released to users is often a painful, risky, and time-consuming process.

Continuous Delivery is a groundbreaking book that sets out the principles and technical practices that enable rapid, incremental delivery of high-quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours, sometimes even minutes, no matter the size of a project or the complexity of its code base.

Jez Humble and David Farley begin by presenting the foundations of a rapid, reliable, low-risk delivery process. Next, they introduce the “deployment pipeline,” an automated process for managing all changes, from check-in to release. Finally, they discuss the “ecosystem” needed to support continuous delivery, from infrastructure, data and configuration management to governance.

The authors introduce state-of-the-art techniques, including automated infrastructure management and data migration, and the use of virtualization. For each, they review key issues, identify best practices, and demonstrate how to mitigate risks. Coverage includes:

  • Automating all facets of building, integrating, testing, and deploying software
  • Implementing deployment pipelines at team and organizational levels
  • Improving collaboration between developers, testers, and operations
  • Developing features incrementally on large and distributed teams
  • Implementing an effective configuration management strategy
  • Automating acceptance testing, from analysis to implementation
  • Testing capacity and other non-functional requirements
  • Implementing continuous deployment and zero-downtime releases
  • Managing infrastructure, data, components and dependencies
  • Navigating risk management, compliance, and auditing

Whether you’re a developer, systems administrator, tester, or manager, this book will help your organization move from idea to release faster than ever—so you can deliver value to your business rapidly and reliably.

ThoughtWorks Continuous Delivery

A new perspective – the release process as a business advantage.

Release software on-demand, not on Red Alert.

ThoughtWorks Continuous Delivery transforms manual, disconnected and error-prone processes to make enterprise software releases so fast and assured they are a non-event rather than a Big Event; so well-controlled and automated that release timing can be placed in the hands of business stakeholders. ThoughtWorks Continuous Delivery is a new vision of how systems should be delivered into production: making delivery so responsive, fast and reliable that the deployment pipeline becomes a competitive advantage for the business.

It optimizes all elements of the deployment pipeline – code integration, environment configuration, testing, performance analysis, security vetting, compliance checks, staging, and final release – in an integrated manner, so that all fixes and features can make their way from development to release in a near-continuous flow. At any point, you have an accurate view of the deployment pipeline: what's tested, approved, and ready to go, and what's at any other stage. Releasing what's ready to go is as straightforward and automated as pressing a button.

Operational, cost and reliability improvements within IT…

  • Faster, safer delivery – removal of waste, risk and bottlenecks. Releases are reliable, routine “non-events”.
  • Increased automation – speed the whole process while improving quality.
  • Exceptional visibility – at all times you know where each individual feature is in the pipeline, and its status.
  • Improved compliance – support for standard frameworks such as ITIL.
  • Collaboration – Test, support, development, operations work with each other as one delivery team.

…Bring new strategic capabilities to the business:

  • Release on demand – The ability to push releases to customers on demand places you first to market when new opportunities arise. Make competitors react to your moves.
  • Build the right thing – Explore new ideas and market test them quickly with much less effort and cost.
  • Continuous connection to customers – Faster releases show your customers you hear them.

ThoughtWorks has the expertise and enterprise experience to help you make the journey.

Assessments start with your goals and current situation. Through a series of highly collaborative workshops and deep-dives we evaluate your needs, identify gaps and determine the best course of action. The outcome is a roadmap of immediately actionable recommendations. Assessments are conducted onsite and take 1-3 weeks.
Implementations focus on executing the roadmap of technical, process, and organizational changes needed. ThoughtWorks works side-by-side with you, providing both technical and coaching expertise and evolving you toward integrated Continuous Delivery practices.

Its services are customized to your specific needs, but typically include:

  • Automating code, database and configuration deployment to make a reliable, rapid process. Use the same deployment mechanism for all environments.
  • Introducing Continuous Integration to support early testing and feedback on development.
  • Transforming development and operations teams into one delivery team, giving operations a seat at the table throughout the process to ensure operational needs are met.
  • Automating infrastructure and configuration management, along with use of cloud/virtualization to reduce the pain and cost of managing environments, keeping them in consistent and desired states.
  • Building a metrics dashboard and alerts to give automated feedback on the production readiness of your applications every time there is a change – to code, infrastructure, configuration or database.

Continuous Delivery by ThoughtWorker Jez Humble and alumnus Dave Farley sets out the principles and practices that enable rapid, incremental delivery of high quality, valuable new functionality.

The pattern that is central to continuous delivery is the deployment pipeline: an automated implementation of an application's build, deploy, test, and release process. The automated deployment process should be used by everybody, and it should be the only way to deploy software; this ensures the deployment scripts work when they are needed. The same scripts should be used in every environment.
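A minimal sketch of a deployment pipeline in this spirit: every change runs through the same automated stages, the same deployment script is reused across environments, and a change is releasable only if every stage passes. The stage commands are hypothetical placeholders, not real project scripts.

```python
# Minimal sketch of a deployment pipeline: one automated path from check-in to
# release, with the same deploy script reused for every environment. The
# commands are hypothetical placeholders for a real build/test/deploy toolchain.
import subprocess

PIPELINE = [
    ("commit stage", ["./gradlew", "build", "unitTest"]),
    ("acceptance tests", ["./run-acceptance-tests.sh", "--env", "test"]),
    ("deploy to staging", ["./deploy.sh", "--env", "staging"]),
    ("deploy to production", ["./deploy.sh", "--env", "production"]),
]

def run_pipeline() -> bool:
    for stage_name, command in PIPELINE:
        print(f"=== {stage_name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"pipeline stopped: {stage_name} failed")
            return False  # the change is not releasable
    print("all stages passed: the change is releasable")
    return True

if __name__ == "__main__":
    run_pipeline()
```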

Agile Practices in Large Organizations

The ability to scale agile software development to large organizations has always had skeptics. Typical arguments are that agile works for small software development groups but not for large ones, or that large organizations use outsourcing providers with fixed-price contracts for software development, and that an agile methodology does not provide the discipline to fulfill such contracts without a great deal of upfront specification and design.

Scaling agile software development to large organizations is still possible if enough attention is paid to:

  • Scaling agile practices – understanding agile practices and making sure the rest of the organization does the same.
  • Scaling agile work – organizing work and people appropriately for agile at scale.

Scaling agile practices to a large organization
Lean thinking guides agile practices significantly. The sources of many ideas in lean thinking are the Toyota Production System (TPS) and the House of Quality used by many companies practicing lean thinking. The main principle in lean thinking is that people are inherently responsible, capable of self-organization and learning, and do not need active management from supervisors. The other main idea in lean thinking is continuous improvement, which is best practiced by the software development people who actually do the work. The Japanese technique of Genchi Genbutsu, or "Go See," is to observe the actual work where it happens before deciding what to change.

The principle is that each software development effort, in each product or project environment, is different, and that methodologies and practices need to be tailored by the people who do the work after closely observing what is happening on the project for a while.

Reduction of waste is another strong agile practice that needs to be clearly understood and scaled in a large organization. Duplication of code across two different software projects is common and well known. Teams waiting for requirements documents to be completed and approved, waiting for design documents before coding can start, or waiting for completed code before testing can start are all well-known wastes due to delays. Many processes, such as stage-gate and other product management practices, introduce their own delays, and software development teams sit idle while they wait.

For success, misconceptions about scaling agile in large organizations need to be addressed. Agile does not mean there should be no documentation. Agile does not mean you are not disciplined. Agile does not mean no planning. The Agile Manifesto lays out a continuum of emphasis – individuals and interactions over processes and tools, and working software over comprehensive documentation. Processes and documentation are less important than individuals, interactions and working software, but they are not unimportant. Removing these misconceptions is very important for agile to scale, because they have the potential to derail adoption.

Scaling agile work to a large organization
Organizing agile work in a large organization involves two major areas that need to be addressed, and tackling one without the other is ineffective and counterproductive: organizing the work to be done and organizing the people.

Traditionally, work has been organized along internal divisions: product divisions (personal tax preparation products and corporate tax preparation products, for example), functional divisions (user interface group, database management group, middleware group, etc.), or platforms (Windows, Windows Mobile, Unix, etc.).

All of these ways of organizing work waste enormous amounts of time through untapped talent and waiting. In practice, there are almost always delays in handoffs, with people waiting for someone else to deliver something before they can continue their own work. The UI group may be waiting for the middleware group to finish their designs. There can be enormous duplication of code: two product divisions may be writing the same code to do the same thing without realizing it. There may be very good programmers who are skilled in UI design, coding, and database design and implementation. The silo method of organizing work leaves a lot of that talent untapped and unused.

It is better to organize work around requirements or features. Requirement areas have their own requirement area owners, who report to the product owner. Requirement areas could be IP protocols, performance, or device support in the case of a telecommunications software product, for example.

Alternatively, work could be organized around features, such as downloading device data or batch download of data in the case of an embedded hardware/software product. In both cases, teams address the entire set of functions needed to deliver a feature: coding, UI design and development, and database design and development.

Organizing people for scaling agile requires a lot of organizational change. The change needs to be reflected in the policies and procedures of the company and needs to be adopted and applied diligently on a daily basis for agile to be effective. Merely adopting superficial ways of organizing work without addressing these will be ineffective. Organizing people needs to follow the principles of empowerment, self-organization, and self-management.

Reporting hierarchies need to be flattened first, and reporting spans should be larger: if people are empowered and self-managed, you need fewer managers to oversee their work. Managers need to become coaches or subject matter experts. Multi-skilling and job rotation need to be built into the system; software engineers may need to be competent in coding, architecture, design, database design and development, and testing. Narrow job titles prevent people from using their full potential and contributing their best to the organization. Because teamwork now needs to be emphasized, reward structures need to be modified, and job titles get in the way of teamwork. Job titles need to become generic, with pay tied automatically to seniority and experience. These are radical changes, but without them, re-organizing work alone may not help agile scale. They enable employees to be more proactive in taking on responsibility, self-management, and contribution.

Agile scaling, distributed and offshore software development
Agile scaling is really difficult with distributed and offshore software development. Many ideas that work when software development is centralized break down when teams are distributed or offshore.

Cultural and time zone differences do not pose big problems when software development is centralized, but they become big problems when scaling agile development across distributed and offshore teams. The key is to adapt and modify agile practices so that they still work. A daily standup is possible if the entire team is in the same building or campus; if the team is distributed across the globe, only a weekly standup may be practical and advisable. Clients or product owners may not be available for a daily standup at odd hours because of time zone differences, and a weekly standup may be the only feasible option.
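
To see why the time zone problem bites so hard, the small sketch below computes how many working hours a distributed team actually shares on a given day. The locations and the nominal 09:00–17:00 working day are purely hypothetical; when the shared window approaches zero, a weekly or asynchronous standup is usually the more realistic choice.

```python
"""Illustrative sketch: shared working hours across a distributed team."""
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Hypothetical team locations and a nominal 09:00-17:00 local working day.
LOCATIONS = ["America/Los_Angeles", "Europe/London", "Asia/Kolkata"]
WORKDAY_START, WORKDAY_END = 9, 17

def working_window_utc(zone: str, date: datetime) -> tuple[datetime, datetime]:
    # Express one location's working day as a UTC interval.
    tz = ZoneInfo(zone)
    start = datetime(date.year, date.month, date.day, WORKDAY_START, tzinfo=tz)
    end = datetime(date.year, date.month, date.day, WORKDAY_END, tzinfo=tz)
    return start.astimezone(ZoneInfo("UTC")), end.astimezone(ZoneInfo("UTC"))

def common_overlap(date: datetime) -> timedelta:
    # The shared window is the latest start to the earliest end, if any.
    windows = [working_window_utc(zone, date) for zone in LOCATIONS]
    latest_start = max(start for start, _ in windows)
    earliest_end = min(end for _, end in windows)
    return max(earliest_end - latest_start, timedelta(0))

if __name__ == "__main__":
    overlap = common_overlap(datetime(2024, 3, 11))
    # Little or no overlap argues for a weekly (or asynchronous) standup.
    print(f"Shared working hours: {overlap}")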

Another way to address this is to use the distributed or offshore team as a self-contained requirement area group or feature group. Communication is the number one problem with distributed and offshore teams. There are no easy answers, except to use as many communication mechanisms as possible: Skype or daily video conferences, weekly team meetings, personal visits onsite by offshore teams, and personal visits by the client and onshore teams to the offshore location, at least every quarter or so.

Agile software development works in the small, and it can also work in the large if it is approached carefully and the necessary organizational changes are made and followed diligently. Understanding and infusing the principles behind agile practices goes a long way toward making scaling agile to large organizations successful. The key is not to adopt only the superficial rituals but to really adapt agile practices to the situation at hand, one organization at a time. Every organization and every software development project has unique aspects, and a single magic bullet may not work in all cases. The underlying principle in agile is this flexibility and adaptation rather than blindly following a single set of prescriptions!