I think a great mental model for understanding throughput and capacity planning is that of a highway and toll system.

Cars represent the demand as they travel between points A and B. The highway lane(s) and toll booth(s) between A and B are the service centers where cars spend their time traveling. The highway speed limit and service time of cars at the toll booths quantify efficiency. The number of lanes and toll booths quantify parallelism.

As the system approaches its capacity limit, the queue for the toll booth(s) will grow as cars wait for service. If you want to measure the throughput of the system, all you need to do is count cars as they leave the slowest resource, in this case the toll booth(s).

Shrini's client wants to increase throughput. His highly dubious colleague suggested adding more cars. Let's run that idea through the mental model. First, two numbers: the speed limit on this single-lane, single-booth highway is 100 kph, and the toll booth takes 20 seconds to service a car. The capacity of this system is limited by the slowest resource, in this case 3 cars a minute. If you stand at the end of the highway and count cars, you will count a maximum of 3 cars a minute.

But let's not confuse capacity with throughput. Capacity is what CAN flow through the system; throughput is what IS flowing through the system. If the flow of cars is 2 cars per minute on this highway, then ADDING CARS will indeed increase throughput! So, depending on the initial conditions, adding demand could increase throughput.
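The capacity-versus-throughput distinction boils down to a one-line rule. A minimal sketch, using the example's numbers (3 cars/min capacity; the function name is mine, not from the post):

```python
def throughput(arrival_rate_per_min, capacity_per_min):
    """Throughput is the lesser of demand and capacity."""
    return min(arrival_rate_per_min, capacity_per_min)

# 2 cars/min arriving on a 3 cars/min highway: adding cars raises throughput...
print(throughput(2, 3))  # 2
# ...until demand reaches capacity, after which more cars change nothing.
print(throughput(5, 3))  # 3
```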

If the system is operating at its capacity limit, then adding cars will do nothing for throughput; in fact, it will only increase the total time individual cars spend traveling from point A to B.

Now we have a good mental model to explore what happens as we increase the speed of our service centers (the highway speed limit and the toll booths).

Back to my original statement: it might be clearer now that there are two options to increase throughput:

1) servicing individual requests faster (i.e. greater efficiency)

2) servicing more requests in parallel (i.e. greater concurrency)

You accomplish point 1 by increasing the speed limit or reducing time spent in the toll booths. You accomplish point 2 by adding highway lanes and adding toll booths.
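Both levers can be sketched with the same toy calculation, assuming the 20-second booth time from the example and ignoring shared-resource contention for now (which the next section adds):

```python
def capacity_per_min(service_s, booths):
    # Cars per minute through the booths, ignoring contention for shared resources.
    return booths * 60 / service_s

base = capacity_per_min(20, 1)    # original system: 3 cars/min
faster = capacity_per_min(10, 1)  # option 1: halve booth service time -> 6 cars/min
wider = capacity_per_min(20, 2)   # option 2: add a second booth     -> 6 cars/min
print(base, faster, wider)
```

With no contention, the two options look equivalent; the accumulator below is what breaks that symmetry.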

To complete this mental model, we need to introduce some sort of contention for a shared resource. Let's imagine that as we go to multiple toll booths, each booth must now record the money collected using an accumulator, so the greedy booth manager, Count D'Monet, knows the funds collected at any time. The booth sends a signal to the accumulator and cannot release the car until the accumulator signals that the fare has been recorded. Let's say the accumulator takes 4 seconds to perform its task and signal the booth to release the car.

With the new system you arrive at a toll booth and have to wait 20 seconds for normal service, 4 seconds for the accumulator, AND some time waiting for the accumulator to service a car from the other lane (it is single threaded). How much additional time? It depends on arrival patterns, but let's say on average you arrive halfway through the accumulator servicing another car. That gives you 20 seconds of toll booth time, plus 2 seconds waiting for the accumulator to finish with another car, plus 4 seconds for the accumulator to service your car. That's 26 seconds. This is slowing YOU down!

But the overall rate of cars clearing the toll booths is now 2 × 60/26, or about 4.6 cars/min. An improvement over the 3 cars per minute we had before, but not the factor of 2 you might expect from simply doubling the number of toll booths. Contention for shared resources is the counterbalance to parallel processing. Continue adding toll booths and soon you get NO additional capacity for your effort. There is a fairly simple math function (a Taylor series minus one of the terms) that estimates the throughput of an N-resource system given the throughput of a 1- and 2-resource system, but I digress. It's also the reason we don't have massively multi-core computer chips (not to be confused with massively parallel systems).
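The diminishing returns can be sketched numerically. This is a back-of-the-envelope model, not the post's formula: it keeps the 20 s booth time and 4 s accumulator time, and my own assumption extends the "arrive halfway through" wait to 2 seconds per competing lane, with the single-threaded accumulator capping the whole system at 60/4 = 15 cars/min:

```python
BOOTH_S = 20  # seconds of normal toll-booth service per car
ACC_S = 4     # seconds the shared accumulator takes per car

def per_car_time(n_booths):
    # Assumed average wait: half an accumulator cycle per competing lane.
    expected_wait = (n_booths - 1) * ACC_S / 2
    return BOOTH_S + expected_wait + ACC_S

def throughput_per_min(n_booths):
    # Booths work in parallel, but the single-threaded accumulator
    # caps the whole system at 60 / ACC_S cars per minute.
    parallel_rate = n_booths * 60 / per_car_time(n_booths)
    accumulator_cap = 60 / ACC_S
    return min(parallel_rate, accumulator_cap)

print(per_car_time(2))                   # 26 seconds, as in the text
print(round(throughput_per_min(2), 1))   # ~4.6 cars/min, as in the text
print(throughput_per_min(100))           # 15.0: adding booths has stopped helping
```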

I love this model because it illustrates the fundamental principles of queuing theory. It shows, contrary to what has been said so far in this thread, that your naive colleague could be right. But then again, even a broken clock is right twice a day.

Bernie Velivis, President Performax Inc

## Monday, December 7, 2009
