Analysis of algorithms


In computer science, the analysis of algorithms (see Algorithm) is the determination of the amount of resources (such as time and storage) necessary to execute them.

Description

Most algorithms are designed to work with inputs of arbitrary length.

Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
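
To make this concrete, here is a minimal Python sketch (added as an illustration; the linear scan and the step counter are assumptions introduced here, not material from the original article) that counts the basic steps of a simple scan, so the running time can be read off as a function of the input length n.

<pre>
# Hypothetical illustration: counting the basic steps of a simple linear scan,
# so that running time can be read off as a function of the input length n.

def count_steps_linear_scan(items):
    """Return how many element inspections a straightforward scan performs."""
    steps = 0
    for _ in items:      # one inspection per element
        steps += 1
    return steps

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, count_steps_linear_scan(list(range(n))))   # prints n, n
</pre>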

Computational complexity theory

Algorithm analysis is an important part of the broader field of computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm that solves a given computational problem. These estimates provide insight into reasonable directions of search for efficient algorithms.

Estimating complexity and efficiency

In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input.

Big O notation, Big Omega notation, and Big Theta notation are used to this end.
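
As a hedged worked example (the function f and the witnesses c = 4 and n0 = 6 are assumptions introduced here, not from the original article), the sketch below checks numerically that f(n) = 3n² + 5n + 2 is O(n²) by exhibiting a constant c and threshold n0 with f(n) ≤ c·n² for every tested n ≥ n0.

<pre>
# Hypothetical worked example: exhibiting witnesses for a Big O claim.
# Claim: f(n) = 3n^2 + 5n + 2 is O(n^2), with assumed witnesses c = 4, n0 = 6,
# i.e. f(n) <= c * n^2 for every n >= n0.

def f(n):
    return 3 * n * n + 5 * n + 2

c, n0 = 4, 6
assert all(f(n) <= c * n * n for n in range(n0, 10_000)), "witnesses fail"
print("f(n) <= 4*n^2 for all tested n >= 6, so f(n) is O(n^2)")
</pre>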

For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the list being searched, or in O(log(n)), colloquially "in logarithmic time".
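
A minimal sketch of such a binary search, instrumented to count how many times its loop runs, is shown below; the implementation and the step counter are illustrative assumptions added here, not code from the original article.

<pre>
# Minimal binary search over a sorted list, instrumented to count
# how many times the main loop runs (roughly log2 of the list length).

def binary_search(sorted_items, target):
    low, high, steps = 0, len(sorted_items) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps                            # not found

if __name__ == "__main__":
    for n in (16, 1024, 1_000_000):
        data = list(range(n))
        _, steps = binary_search(data, n - 1)   # search near the worst case
        print(f"n={n}: {steps} steps")          # grows like log2(n)
</pre>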

Usually, asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency.

However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
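
As an illustrative sketch (the two variants and their per-element step counts are assumptions introduced here), the functions below solve the same task with the same O(n) growth; one simply does a constant factor more bookkeeping per element, which is exactly the kind of difference the hidden constant absorbs.

<pre>
# Two implementations of the same task (summing a list), both Theta(n):
# variant B does a constant amount of extra bookkeeping per element, so its
# step count differs from variant A's by (roughly) a constant factor only.

def sum_a(items):
    total = 0
    for x in items:          # ~1 basic operation per element
        total += x
    return total

def sum_b(items):
    total, count, last = 0, 0, None
    for x in items:          # ~3 basic operations per element
        total += x
        count += 1
        last = x
    return total

if __name__ == "__main__":
    data = list(range(1000))
    assert sum_a(data) == sum_b(data)   # same answer; only the constant differs
</pre>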

Exact (not asymptotic) measures of efficiency can sometimes be computed, but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation.

Model of computation

A model of computation may be defined in terms of an abstract machine, e.g., Turing machine, and/or by postulating that certain operations are executed in unit time.

For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log₂(n) + 1 time units are needed to return an answer.
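
Under that unit-cost lookup assumption, the following sketch (an illustration added here, not from the original article) counts the lookups a binary search performs and checks that they never exceed floor(log₂ n) + 1 for each tested n.

<pre>
import math

# Count the element lookups made by binary search and compare them against the
# floor(log2(n)) + 1 bound, assuming each lookup costs exactly one time unit.

def binary_search_lookups(sorted_items, target):
    low, high, lookups = 0, len(sorted_items) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        lookups += 1                    # one unit-time lookup of sorted_items[mid]
        if sorted_items[mid] == target:
            return lookups
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return lookups                      # target absent; lookups still counted

if __name__ == "__main__":
    for n in (1, 2, 15, 16, 1000, 4096):
        data = list(range(n))
        bound = math.floor(math.log2(n)) + 1
        worst = max(binary_search_lookups(data, t) for t in data + [-1, n])
        print(f"n={n}: worst-case lookups {worst} <= {bound}: {worst <= bound}")
</pre>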

See also

External links

* [https://en.wikipedia.org/wiki/Analysis_of_algorithms Analysis of algorithms] @ Wikipedia

[[Category:Algorithms]]
[[Category:Computer science]]