"Big O" refers to a way of rating the efficiency of an algorithm. It is only a rough estimate of the actual running time of the algorithm, but it gives you an idea of how the running time grows with the size of the input data.

An algorithm is said to be of order `O(`*expression*`)`,
or simply of order *expression* (where *expression* is some function
of `n`, like `n^2`, and `n` is the size of the input data), if its
running time is at most proportional to *expression* for large `n`.

For example, the following piece of code executes the innermost line
`(n^2 - n) / 2` times, so the whole expression for the running
time is a quadratic. Since the `n^2` term dominates for large `n`, the
algorithm is of order `O(n^2)`.

```pascal
for i := n downto 2 do
begin
  h := 1;
  for j := 2 to i do
    if list[j] > list[h] then h := j;
  temp := list[h];
  list[h] := list[i];
  list[i] := temp
end;
```

(this is the algorithm for Selection Sort)

How large can the order be before an algorithm is too slow? This depends on several factors, but mainly on the size of the input data. In general,
`O(n^x)` is considered reasonable (where `x` is a small constant), while `O(x^n)` is considered unreasonable unless `n` is very small.

However, if `n` is reasonably large (e.g. 1000) then
`O(n^2)` is reasonable (on the order of a million basic operations) while `O(n^3)` (on the order of a billion) is usually too slow.

Another expression that sometimes insinuates itself into Big O expressions is
`log n`. This often appears in recursive algorithms which
split the data in half at every stage, where `log n` is proportional to
the number of levels of recursion. `log n` grows extremely slowly:
while it looks similar to `n`, it is actually much closer to a
constant. The base of the logarithm is not given, since changing the base only
scales the result by a constant factor, but in practice it usually
arises as a base-2 logarithm. However, recursive
algorithms usually have high overhead, so this should be accounted for
when applying the rule of thumb above.

In many cases, the running time depends heavily on the nature of the input
data. One example is a Linear Search, in
which you might find the item you're looking for immediately, but you
might have to search through the entire list. In this situation, the worst-case
running time is used, in this case order `O(n)`. Sometimes average
running times are given (e.g. Quick Sort
has a worst-case running time of order `O(n^2)`, but an
average time of order `O(n log n)`).

Choosing an algorithm isn't a case of simply picking the fastest one and
implementing it. Especially in a competition, you will also have to balance
this against the time it will take you to code the algorithm. You
should use the **simplest** algorithm that will run in
the time provided, since you generally get no more points for a faster
but more complex algorithm. It is even sometimes worth implementing
an algorithm that you know will not be fast enough to get 100%, since
90% for a slow, correct algorithm is better than 0% for a fast but
broken one.

When choosing an algorithm you should also plan what type of data structures you intend to use. The speed of some algorithms depends on what types of data structures they are combined with.

Last updated Sun Nov 28 22:04:38.0000000000 2004. Copyright Bruce Merry (bmerry '@' gmail dot. com).