# 5. Polynomial complexity

The main aim of computer science is to develop efficient algorithms for various problems. However, what counts as *efficient* is not so straightforward. Engineers who program low-level operating system routines try to shave off every assembly-level instruction, while scientists who run large numerical simulations try to devise algorithms with sub-quadratic complexity. The most liberal definition of efficiency is used by theoretical computer scientists, who consider all *polynomial time algorithms* efficient.

**Definition.** An algorithm is said to have **polynomial time complexity** if its worst-case running time $T_\text{worst}(n)$ for an input of size $n$ is upper bounded by a polynomial $p(n)$ for all large enough $n \geq n_0$.

For example, if an algorithm's worst-case running time is $T_\text{worst}(n) \in O(2n^4 + 5n^3 + 6)$ then the algorithm has polynomial time complexity.
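To make the definition concrete, here is a small sketch (in Python, with the threshold $n_0 = 6$ chosen by hand) checking that the running time from the example is eventually bounded by the polynomial $3n^4$:

```python
def T(n):
    # worst-case running time from the example: 2n^4 + 5n^3 + 6
    return 2 * n**4 + 5 * n**3 + 6

# The bound T(n) <= 3n^4 holds for all n >= 6 (it reduces to 5n^3 + 6 <= n^4):
assert all(T(n) <= 3 * n**4 for n in range(6, 1000))

# ...but it fails for small n, which is exactly why the definition
# only requires the bound for large enough n >= n_0:
assert T(5) > 3 * 5**4
```

Any choice of bounding polynomial works; the definition only asks that some polynomial dominate $T_\text{worst}(n)$ from some point on.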

In which of the following cases does the algorithm have polynomial time complexity? In which cases does it not? And in which cases is there too little information to tell?

- If $T_\text{worst}(n) = 7n^{100} - 5n^{50}$?
- If $T_\text{worst}(n) = n^2 + (\log n)^3$?
- If $T_\text{worst}(n) \in O(n \log n)$?
- If $T_\text{worst}(n) \in \Omega(a^n)$ where $a > 1$ is some constant?
- If $T_\text{worst}(n) \in O(a^n)$ where $a > 1$ is some constant?
- If $T_\text{worst}(n) \in \Omega(n!)$ where $n! = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot n$ is the factorial of $n$?

Many widely used algorithms have polynomial time complexity (like our algorithms `readNumbers1` and `readNumbers2`, quicksort, insertion sort, binary search, etc.). Examples of algorithms with non-polynomial time complexity are all kinds of brute-force algorithms that look through all possible configurations. For example, looking through all the subsets of a set of size $n$ takes $O(2^n)$ time, and looking through all possible ways of totally ordering $n$ elements takes $O(n!)$ time.
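Both kinds of exhaustive enumeration mentioned above can be sketched in a few lines of Python using the standard `itertools` module:

```python
from itertools import combinations, permutations

def all_subsets(items):
    # Enumerate all 2^n subsets of a collection of n elements,
    # grouped by subset size k = 0, 1, ..., n.
    subs = []
    for k in range(len(items) + 1):
        subs.extend(combinations(items, k))
    return subs

items = ['a', 'b', 'c']
assert len(all_subsets(items)) == 2 ** len(items)   # 2^3 = 8 subsets
assert len(list(permutations(items))) == 6          # 3! = 6 total orderings
```

Merely producing the output already forces $2^n$ (respectively $n!$) steps, so no implementation trick can make such exhaustive enumeration polynomial.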

The decision to declare polynomial algorithms efficient is motivated by the following reasons. First, there is a vast difference between the running times of polynomial and exponential algorithms. Secondly, programming tasks in practice seem to either have a polynomial-time algorithm or require an exponential $O(2^n)$-time brute-force algorithm. Thirdly, polynomial-time algorithms are closed under composition. In layman's terms, if an algorithm makes a polynomial number of calls to a function that is itself implemented as a polynomial algorithm, the resulting algorithm also has polynomial time complexity. This greatly simplifies the theoretical analysis of algorithms: you do not have to keep track of the individual complexity of subroutines as long as they are polynomial.
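The closure-under-composition argument can be illustrated with a toy sketch (the function names and the global step counter are invented for this example): an outer routine makes $n^2$ calls to a subroutine that itself takes $n$ steps, so the total work is $n^2 \cdot n = n^3$ steps, which is still polynomial.

```python
steps = 0  # global counter of elementary "steps" performed

def inner(n):
    # a subroutine taking O(n) steps
    global steps
    for _ in range(n):
        steps += 1

def outer(n):
    # makes n^2 calls to inner(n): total work n^2 * n = n^3, still polynomial
    for _ in range(n * n):
        inner(n)

outer(10)
print(steps)  # 10^2 calls * 10 steps each = 1000 = 10^3
```

In general, a polynomial number of calls to a polynomial subroutine yields a running time bounded by a product of polynomials, which is again a polynomial.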

This rule for separating efficient algorithms from inefficient ones is often accurate in practice, but not always. If your program uses $n^{100}$ seconds or $10^{100} n$ seconds, then it runs in polynomial time but is too slow for any practical purpose even for $n = 2$; on the other hand, an algorithm which takes $1.00000001^n$ seconds does not run in polynomial time but finishes within about a second even for $n = 1{,}000{,}000$. However, determining whether an algorithm has polynomial time complexity is usually much easier than calculating the precise number of steps it makes or estimating the number of seconds it would spend on particular hardware.
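A quick back-of-the-envelope computation confirms both extremes from the paragraph above:

```python
# A "polynomial" running time of n^100 seconds is hopeless already at n = 2:
polynomial_seconds = 2 ** 100
print(polynomial_seconds)  # about 1.27e30 seconds, far longer than the age of the universe

# A "non-polynomial" running time of 1.00000001^n seconds is harmless
# even at n = 1,000,000, since 1.00000001^1000000 is roughly e^0.01:
exponential_seconds = 1.00000001 ** 1_000_000
print(exponential_seconds)  # about 1.01 seconds
```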

The set of all problems which can be solved by an algorithm of polynomial time complexity is called the complexity class P. A big open problem in modern computer science (in fact, one of the Millennium Prize Problems of mathematics, whose solver would be awarded $1,000,000 by the Clay Mathematics Institute) is whether P = NP, where NP is another class of problems, which is known to contain P. In addition to all problems known to be solvable in polynomial time, NP contains many famous problems for which no polynomial-time solution is known, but for which it has not been proven that no polynomial-time solution exists. An example is the Travelling Salesman Problem: given a set of towns and a table of distances between them, find the shortest round-trip going through all the towns. As there are no known fast algorithms that solve such problems exactly, it is common practice to use heuristic algorithms, which are fast but, as a rule, give a slightly non-optimal answer.
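For a feel of why exact TSP is expensive, here is a minimal brute-force sketch (the 4-town distance table is made up for illustration): it tries all $(n-1)!$ round-trips starting from town 0, which is exactly the kind of factorial-time enumeration discussed earlier.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP: try all (n-1)! round-trips starting and ending at town 0."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if best is None or length < best:
            best = length
    return best

# hypothetical symmetric distance table for 4 towns
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(tsp_brute_force(dist))  # 18, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```

At 4 towns this is only $3! = 6$ tours, but already at 20 towns it is $19! \approx 1.2 \cdot 10^{17}$ tours, which is why heuristics are used in practice.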