numerical methods - How to compute the exact complexity of an algorithm?
Without the aid of asymptotic notation, is the only way to get the time complexity of an algorithm a tedious step count? And without going through the code line by line, can we arrive at the big-O representation of any program?
Description: I am trying to work out the complexity of several numerical analysis algorithms so as to decide which is best suited to a particular problem, for example choosing between the Regula Falsi and Newton-Raphson methods for solving a nonlinear equation. The intention is to evaluate the exact complexity of each method and then decide which one (in terms of 'n', or whatever its arguments are) is less complex.
The only "easy" way to know the exact complexity of a complicated algorithm (rather, the only way, and it is not easy) is to profile it. A modern implementation of an algorithm interacts in complex ways with numerical libraries and with the CPU and its floating-point unit. For example, in-cache memory access is much faster than out-of-cache access, and there may be more than one level of cache on top of that. Counting steps really is too crude, which, as you say, is not enough for your purpose.
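As a rough sketch of the profiling idea, one can simply time two candidate methods on the same problem. The example below is hypothetical (the function `f(x) = x^2 - 2`, the tolerances, and the starting values are all my own choices, not from the question):

```python
import time

def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

def newton_raphson(x0, tol=1e-12, max_iter=100):
    """Newton-Raphson iteration for a root of f."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def regula_falsi(a, b, tol=1e-12, max_iter=10000):
    """Regula Falsi (false position) on a bracketing interval [a, b]."""
    fa, fb = f(a), f(b)
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)  # secant intersection with x-axis
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:       # keep the subinterval that brackets the root
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Wall-clock timing; a real profiler (cProfile, perf) gives much more detail.
start = time.perf_counter()
root_newton = newton_raphson(1.0)
t_newton = time.perf_counter() - start

start = time.perf_counter()
root_rf = regula_falsi(1.0, 2.0)
t_rf = time.perf_counter() - start

print(root_newton, t_newton)
print(root_rf, t_rf)
```

A single timing like this is noisy; in practice one would repeat the measurement many times and compare medians.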
But if you do want to count steps automatically, there are ways to do that too. You can add a counter-increment statement (such as `bloof++;`) to each line of code, and then print the counter's value at the end.
You should also know about the more refined time-complexity expression f(n) * (1 + o(1)), which is useful for analytical calculations as well. For example, n^2 + 2n + 7 simplifies to n^2 * (1 + o(1)). If there is a constant factor you care about that the usual asymptotic notation O(f) would discard, this is a way to keep track of that refinement while still dropping negligible terms.
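The claim that n^2 + 2n + 7 = n^2 * (1 + o(1)) can be checked numerically: the ratio of the two sides tends to 1 as n grows. A small sketch:

```python
def ratio(n):
    """Ratio (n^2 + 2n + 7) / n^2; tends to 1 as n -> infinity,
    which is exactly what the (1 + o(1)) factor expresses."""
    return (n * n + 2 * n + 7) / (n * n)

for n in (10, 100, 1000, 10000):
    print(n, ratio(n))
# The printed ratios shrink toward 1 (e.g. 1.27 at n=10, ~1.0002 at n=10000).
```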