Business systems are complex. I’m not referring to systems in the IT sense; I’m talking about the processes and functions that deliver products and services to customers (or support the propagation of those products and services). These systems usually exist as a collection of small, simple core units representing applied business knowledge, responses to regulation and legislation, integrations with key business partners, and process improvements, to name a few. Staying competitive requires business leaders to ensure that these complex business systems are constructed to provide maximum value within the parameters they operate under. Many organizations look to their IT team, a trusted consulting company, or both to create technical implementations that realize this vision.
Building automated, connected digital versions of these business systems requires a great deal of information gathering, better known as requirements gathering. Under traditional methodologies, requirements are gathered and analyzed as the first step in the process, and all of the work that follows is framed in terms of the entire system being constructed. Historically, the implementation effort is managed in a waterfall fashion, meaning the development work is broken into a stack of dependent features. In ideal situations, where planning is carefully executed, these features can be tested and demonstrated to users at “feature release points” in the project, though the timeframe between release points is typically significant. The crafted features are demonstrated to key project stakeholders and subject matter experts to verify functionality. Unfortunately, the demonstration is just that: a show-and-tell of completed work, which the SME cannot take for a test drive. You can probably begin to see the disconnects that surface in an environment such as this (change orders, project delays, and so on).
There is good news; alternatives to the doom that appears imminent do exist. The Agile Manifesto, chartered in 2001, provides glorious relief to those who subscribe to its principles. The movement toward agile practices progressed at warp speed, especially the Scrum framework. Three to five years ago, software conferences were littered with sessions on Scrum. At the conferences I have attended more recently, those sessions have disappeared, suggesting either that Scrum is no longer relevant or that it is perfectly understood.
The fact of the matter is that running successful projects is no easy feat. There is no shortage of books out there to help; a quick search of Amazon for “Scrum” yielded well over 1,000 titles. One would think it nearly impossible to fail given the abundance of resources available on the topic, yet many organizations only scrape the surface. Here are a few surefire early indicators that Scrum will most likely fail (or has already failed) in your organization’s implementation:
Kick the tires
Definition: Occurs when an organization attempts to implement Scrum practices in an incomplete or isolated fashion.
Scenario: A few developers convince the management team that they are ready to start using Scrum. The management team is curious, yet hesitant, about making such a large shift in operations; therefore, a pilot initiative commences to see how this Scrum thing works.
Results: Implementing Scrum requires a significant amount of experience-based learning. It really is not much different from hopping in a car as a 16-year-old. The hard truth is that while you are capable of operating the car safely, you simply lack the experience to be consistently successful (statistics on accidents among young drivers serve as my proof). Also consider this: would you let a new driver operate a commercial bus?
Finish line first
Definition: Occurs when an organization evaluates the effectiveness of Scrum practices by comparing the “results” from two different teams.
Example: One team runs a project using Scrum while another uses traditional practices, or the same team applies different strategies or practices on successive projects, and the outcomes are compared to decide which approach “won.”
Results: Assuming we are evaluating a full project and not just a time box (which raises the question of what company is willing to invest in duplicate work), we still haven’t defined the criteria for evaluating the two teams. Are we looking solely at completion date? Are we evaluating defect rate? Are we evaluating customer and user satisfaction? Is our team empowered, inspired, and generally satisfied with what they are accomplishing? At the end of the day, success can be judged in a variety of manners.
Traditional metrics
Definition: Occurs when an organization evaluates the effectiveness of Scrum practices based on traditional metrics.
Example: There are metrics that can be gained as a result of the execution of the project. What is the velocity of the team? What is the completion date of this feature or that user story? What dependencies or roadblocks hinder our ability to move forward? Are my developers performing at a high level?
Results: Again, you have to ask yourself what makes a project successful. Visibility is obviously an important aspect of understanding how a project is progressing, but is it the sole indicator of success? How often are projects chartered, developed, and implemented, only to result in an inferior product that either falls short of expectations or simply doesn’t get used? Conventional metrics used to gauge whether a project was successful do not typically include impact on the business. Did we deliver the ROI that was anticipated? Is the system able to respond to changing business requirements? How has implementing the system affected the workflow and general happiness of the users?
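To make the “velocity” metric mentioned above concrete: it is commonly computed as the average number of story points a team completes per sprint, often over a short rolling window. The function name and the three-sprint window below are illustrative assumptions, not a standard, and the numbers are made up; this is only a minimal sketch of the kind of traditional metric being critiqued.

```python
def velocity(completed_points_per_sprint, window=3):
    """Rolling-average velocity: mean story points completed
    over the last `window` sprints (a common convention)."""
    recent = completed_points_per_sprint[-window:]
    return sum(recent) / len(recent)

# Hypothetical sprint history: points completed in four sprints.
print(velocity([21, 18, 24, 30]))  # → 24.0 (average of the last 3 sprints)
```

The point of the section stands regardless of the arithmetic: a team can post a healthy velocity every sprint and still ship a product nobody uses.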
Implementing Scrum requires a level of dedication that many people aren’t willing to make. It is important to create a vision that truly illustrates and justifies the movement. Keep that vision in perspective, as there undoubtedly will be painful lessons along the way. Just as the process is agile, so should the people be. When things do not seem to be working, classify those periods as adjustment periods instead of failures. Most importantly, remember that while countless others have implemented Scrum, there is a unique component to it that allows the process to be personal. When people are part of a personal process, they take ownership of it, and the results follow.