After establishing the running time of an algorithm with a recurrence of the form T(n) = a·T(n/b) + O(n^d), in terms of the three variables a, b, and d, we might wonder what the meaning behind this method is. Where do these logs and exponents come from?
If we look at the amount of work done at a single level j of the recursion tree, we get that it is

a^j · c·(n/b^j)^d = c·n^d · (a/b^d)^j,

where a^j is the number of level-j subproblems and c·(n/b^j)^d is the amount of work per level-j subproblem, each of size n/b^j. Thus, the total amount of work must be

total work ≤ c·n^d · Σ_{j=0}^{log_b n} (a/b^d)^j.
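As a sanity check, here is a small Python sketch (assuming the recurrence T(n) = a·T(n/b) + c·n^d from above, with an arbitrary constant c) confirming that the two ways of writing the level-j work agree:

```python
# Work at level j of the recursion tree for T(n) = a*T(n/b) + c*n**d:
# there are a**j subproblems, each of size n / b**j.
def level_work(n, a, b, d, j, c=1.0):
    return a**j * c * (n / b**j) ** d

# The same quantity in factored form: c * n**d * (a / b**d)**j.
def level_work_factored(n, a, b, d, j, c=1.0):
    return c * n**d * (a / b**d) ** j

# Mergesort-shaped recurrence: a = 2, b = 2, d = 1, n = 64 (levels 0..6).
for j in range(7):
    assert abs(level_work(64, 2, 2, 1, j)
               - level_work_factored(64, 2, 2, 1, j)) < 1e-9
```

Note that for a = b^d the factored form makes it obvious that every level does the same c·n^d work.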
Here we finally have the a and b^d that we compared. But what do they represent? Remember, a is the number of subproblems we broke our problem into. We can think of it as the rate of subproblem proliferation.
Now what about b^d? This is the rate of work shrinkage. Why isn't it just b, the factor by which we shrank the problem? If the problem requires a linear amount of work outside the recursion, then halving the input size will halve the amount of work. However, if the work is quadratic, halving the input size will actually quarter the amount of work. This is where the exponent d comes into play.
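To make the shrinkage rate concrete, here is a tiny sketch assuming the per-subproblem work is exactly n^d:

```python
def work(n, d):
    # Non-recursive work for a subproblem of size n, proportional to n**d.
    return n ** d

n = 1024
# Linear work (d = 1): halving the input halves the work.
print(work(n, 1) / work(n // 2, 1))  # → 2.0
# Quadratic work (d = 2): halving the input quarters the work.
print(work(n, 2) / work(n // 2, 2))  # → 4.0
```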
These are two forces in opposition. If the rate of subproblem proliferation is greater than the rate of work shrinkage, then the work grows at each successive level and the bottom of the tree dominates. If it is less, then the work shrinks at each level and the top of the tree dominates. Otherwise, we do the same amount of work at each level.
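The tug-of-war is captured entirely by the ratio a / b^d. A quick sketch with three hypothetical recurrences (the (a, b, d) triples here are illustrative choices, not from any particular algorithm):

```python
# Ratio of subproblem proliferation (a) to work shrinkage (b**d)
# for three example recurrences T(n) = a*T(n/b) + O(n**d).
cases = [
    (4, 2, 1),  # a > b^d: ratio 2.0, work grows at each level
    (2, 2, 1),  # a = b^d: ratio 1.0, same work at each level
    (2, 2, 2),  # a < b^d: ratio 0.5, work shrinks at each level
]
for a, b, d in cases:
    print(f"a={a}, b={b}, d={d}: a/b^d = {a / b**d}")
```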
So let's break it down into three cases.
- a = b^d: then the ratio is 1 and the sum is simply the number of levels, log_b n + 1, so the total is O(n^d log n). We can ignore the base of the logarithm because changing it only multiplies by a constant.
- a < b^d: then using the formula for the sum of a geometric series with ratio r < 1, we know the sum is at most some constant independent of n. Thus the top level dominates, and since big-O notation suppresses constants, we get O(n^d).
- a > b^d: then the geometric series is dominated by its last term, (a/b^d)^(log_b n). Multiplying by n^d, the b^d factors cancel and we are left with a^(log_b n) = n^(log_b a), the number of leaves of the recursion tree. The bottom level dominates and we get O(n^(log_b a)).
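Putting the three cases together, here is a minimal sketch of the resulting bound (the function name master_bound is mine; it assumes the standard form T(n) = a·T(n/b) + O(n^d)):

```python
import math

def master_bound(a, b, d):
    """Asymptotic bound for a recurrence of the form T(n) = a*T(n/b) + O(n**d)."""
    if a == b ** d:
        return f"O(n^{d} log n)"           # same work at every level
    if a < b ** d:
        return f"O(n^{d})"                 # top level dominates
    return f"O(n^{math.log(a, b):g})"      # bottom level dominates: n^(log_b a)

print(master_bound(2, 2, 1))  # mergesort-shaped: O(n^1 log n)
print(master_bound(2, 2, 2))  # quadratic work outside the recursion: O(n^2)
print(master_bound(4, 2, 1))  # four subproblems, linear work: O(n^2)
```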