This article first appeared in the Scrum Alliance Member Articles section.
This version is the original, unedited version.
Little’s law is a theorem of queueing theory, formulated by John Little, a retired professor from MIT.
The theorem states:
l = tw
where
l = the average length of the queue,
t = the rate at which items arrive at (and, in a steady-state system, leave) the queue (throughput), and
w = the average wait time in the queue.
So, if you want to work out the wait time in the queue (i.e., how long before your job is looked at), we rearrange to:
w = l/t
We see that the wait time is proportional to the length of the queue and inversely proportional to the processing rate.
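The rearranged formula is simple enough to sketch in a few lines of Python (the function name is mine, not from the article or any library):

```python
def wait_time(queue_length, throughput):
    """Little's law rearranged: w = l / t.

    queue_length -- average number of items waiting in the queue (l)
    throughput   -- items processed per day in a steady-state system (t)
    """
    return queue_length / throughput

print(wait_time(1, 0.5))  # 1 item, half a story per day -> 2.0 days
```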
So what does this mean? Simply put, if your processing rate is slow, then the larger the queue, the longer a new entry waits before it is processed. In other words, your ability to change, to be agile, is reduced.
Conversely, if your throughput is high relative to the length of the queue, you will work through the queue quite quickly, but you run the risk of your system constantly being starved of work.
So how can I demonstrate this?
Let’s say you have a processing rate of one story every 2 days. This gives you a rate of 0.5 stories per day.
If we have a queue size of 1, then
w = 1/0.5 = 2 days
so a story added to the queue will wait 2 days before it is even started on.
Now, let’s say we push 2 stories onto the queue:
w = 2/0.5 = 4 days
Any additional story entering the queue will now wait 4 days before it is looked at.
Our agility is really slowing down now.
Adding More Resources (Increasing Throughput)
Now, OK. Surely if I add more resources I can process things quicker. My throughput has increased, so things should improve.
OK, let’s say you now have 2 people, each capable of doing 0.5 stories per day. So your processing speed has doubled to 1 story per day.
So with a queue depth of 1:
w = 1/1 = 1 day
The time waiting in the queue is now 1 day; doubling our throughput has halved the wait.
Let’s try increasing our queue depth to 2:
w = 2/1 = 2 days
So the wait time is now 2 days again. Our agility is starting to slip away again.
The more we assign to the queue, the longer it is going to take before the story is looked at.
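The scenarios so far can be tabulated with a small, self-contained sketch (the numbers come from the worked examples above; the code itself is mine):

```python
def wait_time(queue_length, throughput):
    """Little's law rearranged: w = l / t (throughput in stories per day)."""
    return queue_length / throughput

scenarios = [
    ("1 person, queue of 1", 1, 0.5),
    ("1 person, queue of 2", 2, 0.5),
    ("2 people, queue of 1", 1, 1.0),
    ("2 people, queue of 2", 2, 1.0),
]
for label, length, throughput in scenarios:
    print(f"{label}: wait = {wait_time(length, throughput):g} days")
```

Doubling the throughput halves the wait, but doubling the queue doubles it right back.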
The problem here is that you cannot always add more people; people and resources cost money. So what else can we do?
Reduce Queue Size
Let’s try something else. Let’s try reducing the queue size to zero. To keep things simple, we’ll go back to one person.
w = 0/0.5 = 0
Our wait time is now zero!
We now have complete agility: the ability to change stories right up until the moment they are worked on.
So how do we achieve a zero queue size? Well, we’re cheating a little. There is still a queue: the backlog. What we are doing is no longer assigning stories to people. Their personal “queues” (their work in progress) have been reduced to “zero”; in other words, they work on one story/task/item at a time and “pull” a new item off the backlog when they finish. This allows the backlog to change without affecting the work in progress.
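The pull model can be pictured as a toy sketch (the story names and the helper function are hypothetical, invented for illustration):

```python
backlog = ["story A", "story B", "story C"]  # the only queue, still re-orderable
in_progress = None                           # personal WIP limited to one item

def pull_next():
    """Commit to the top backlog item only when the previous one is finished."""
    global in_progress
    if in_progress is None and backlog:
        in_progress = backlog.pop(0)

pull_next()                      # commits to "story A"
backlog.insert(0, "urgent fix")  # priorities change without touching the WIP
print(in_progress, backlog)
```

Because a story is committed only at the moment it is pulled, reprioritising the backlog never disturbs work already in progress.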
Multitasking
Now, one of the problems with “assigning” tasks to people is that they have a tendency to multitask: to chop and change between the tasks assigned to them and believe they are making progress.
This is actually not the case. Let me explain.
Say you have 3 tasks: A, B, and C. Each task takes, let’s say, a nice round number: 10 days to complete.
Now, if we do these tasks sequentially, each task takes 10 days: A finishes on day 10, B on day 20, and C on day 30.
But we are supermen/women… err, people! We can multitask! Get everything done quicker! Is that the case?
Well, let’s see. Say we split up the work so that we do 5 days on each task in turn, to get something done on everything. We’re keeping it simple here.
So is it quicker to finish each task? Umm, no. Task A now finishes on day 20 instead of day 10: we have actually doubled the time it takes to complete each task.
Now, this may not seem to matter, as the overall time for all tasks is still 30 days, the same as if we did them sequentially.
Where it becomes a problem, though, is that switching tasks is not free. There is a penalty: we have to dump our thought process for one task and reload everything for the next. That takes time and energy, especially if the time between switches is hours or minutes, not days.
Secondly, if there is any delay in completing a “portion” of a task (for example, C took a little longer than first thought and you spent 6 days instead of 5), then you have delayed not only task C but also the completion of tasks A and B, and potentially the whole project! This then leads to the stakeholders of A and B escalating because they need their stuff done NOW, so you stop, rearrange, and the whole thing becomes a chaotic mess. Business as usual? For most people it is; so much so that it is treated as a fact of life. It is also why developers, when estimating how long something will take, tend to ignore elapsed time and focus on work time, while stakeholders focus on elapsed time; and as with any mismatch, tension arises.
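The arithmetic behind both points can be checked with a short simulation (a sketch written for this article; the slice schedules mirror the example above):

```python
def completion_times(slices):
    """Given an ordered schedule of (task, days) slices, return the day
    on which each task's final slice finishes."""
    clock = 0
    done = {}
    for task, days in slices:
        clock += days
        done[task] = clock  # overwritten until the task's last slice
    return done

# Sequential: each 10-day task finishes as soon as its own work is done.
print(completion_times([("A", 10), ("B", 10), ("C", 10)]))
# -> {'A': 10, 'B': 20, 'C': 30}

# Round-robin in 5-day slices: A now takes 20 days instead of 10.
print(completion_times([("A", 5), ("B", 5), ("C", 5),
                        ("A", 5), ("B", 5), ("C", 5)]))
# -> {'A': 20, 'B': 25, 'C': 30}

# C's first slice overruns by one day, delaying A, B, and the project.
print(completion_times([("A", 5), ("B", 5), ("C", 6),
                        ("A", 5), ("B", 5), ("C", 5)]))
# -> {'A': 21, 'B': 26, 'C': 31}
```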
Caveat
Now, I should probably mention that these techniques will not shorten the total delivery time for a set amount of work. In a perfect system without losses such as context switching or reorganizing, the total is the same whether your queue size is zero or fifty, or whether you switch tasks every few minutes. We are not doing any magic here; just as with the conservation of energy, the overall amount of work being done in the system is maintained. What we are doing is rearranging things so that individual items get done sooner, and if you prioritize your items by value, you get more value sooner as well. We are also dealing with a system that has reached steady state. Reaching steady state takes a ramp-up period in which the system is loaded and the variance in work from one item to the next is minimal, something that doesn’t come naturally in any knowledge-based project.
Conclusion
I know my maths is a little rusty (I could probably explain it a lot better 20 years ago, when I actually did maths at uni), but Little’s law gives a mathematical model of our ability to be Agile. It shows that “pulling” work increases your ability to respond to change, as opposed to “assigning” work to someone. If you do assign, it shows that reducing the amount you load onto a person actually helps get things done sooner.
Finally, if you do overload someone by assigning multiple tasks to them, then, as we are human, we tend to chop and change between those tasks for whatever reason. This means that any one particular task will actually take longer to complete. Not only that, but a delay in one task can affect the delivery of other tasks that may be of a higher priority.
So keep your work in progress small (preferably 1 thing at a time), keep the work items small and even in size to regulate the throughput to a steady state, and pull new work as required. You will get things done sooner and more smoothly, and hopefully be one step closer to gaining control of the whole system. Better yet, you are doing all this with no additional cost!