Computability theory

Computability theory, also called recursion theory, is a branch of mathematical logic, of computer science, and of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees.

The basic questions addressed by recursion theory are:

  • "What does it mean for a function on the natural numbers to be computable?"
  • "How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?".

The answers to these questions have led to a rich theory that is still being actively researched.
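
To give the first question some intuitive content, here is a minimal sketch (in Python, not part of the original article) of functions on the natural numbers built up from the successor function by recursion; every function obtainable in this way is computable in the formal sense:

```python
# Informal sketch: addition and multiplication on the natural numbers
# built up from the successor function by recursion, the intuitive
# picture behind "computable function".

def succ(n: int) -> int:
    return n + 1

def add(m: int, n: int) -> int:
    # add(m, 0) = m;  add(m, n + 1) = succ(add(m, n))
    return m if n == 0 else succ(add(m, n - 1))

def mul(m: int, n: int) -> int:
    # mul(m, 0) = 0;  mul(m, n + 1) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)

assert add(3, 4) == 7 and mul(3, 4) == 12
```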

The field has since grown to include the study of generalized computability and definability.

The invention of the central combinatorial object of recursion theory, the universal Turing machine, predates the invention of modern computers and anticipated their design.
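
To make the object concrete, the following is a toy single-tape Turing machine simulator; the simulator, the example machine (a unary incrementer), and all names here are illustrative assumptions, not anything taken from the article:

```python
# Toy single-tape Turing machine simulator.
def run(transitions, tape, state="start", blank="_", accept="halt", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))          # sparse tape indexed by integer cells
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return state, "".join(tape[i] for i in sorted(tape))

# Example machine: append one '1' to a unary string (compute n |-> n + 1).
increment = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the input
    ("start", "_"): ("halt", "1", "R"),   # write a '1' at the first blank
}

print(run(increment, "111"))  # -> ('halt', '1111')
```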

Historically, the study of algorithmically undecidable sets and functions was motivated by various problems in mathematics that turned out to be undecidable, such as the word problem for groups.
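
To make "algorithmically undecidable" concrete, the standard diagonal argument against a halting decider can be sketched as follows; the halts function below is hypothetical and is written down only so the contradiction can be derived:

```python
# Hypothetical decider -- no such total program exists; it is stubbed here
# only so the standard diagonal argument can be stated.
def halts(program_source: str, input_data: str) -> bool:
    """Would return True iff the given program halts on the given input."""
    raise NotImplementedError("assumed for contradiction")

def diagonal(program_source: str) -> None:
    # Does the opposite of what halts() predicts about a program run on itself.
    if halts(program_source, program_source):
        while True:     # loop forever when halts() says "it halts"
            pass
    return              # halt immediately when halts() says "it loops"

# Let d be the source code of diagonal itself.  Then diagonal(d) halts
# exactly when halts(d, d) is False, i.e. exactly when diagonal(d) does
# not halt -- a contradiction, so no such halts can exist.
```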

There are several applications of the theory to other branches of mathematics that do not necessarily concentrate on undecidability.

Early applications include the celebrated Higman embedding theorem, which links recursion theory and group theory; the results of Michael O. Rabin and Anatoly Maltsev on algorithmic presentations of algebras; and the negative solution to Hilbert's Tenth Problem.

More recent applications include algorithmic randomness, the work of Soare et al., who applied recursion-theoretic methods to solve a problem in algebraic geometry, and the work of Slaman et al. on normal numbers, which solved a problem in analytic number theory.

Recursion theory overlaps with proof theory, effective descriptive set theory, model theory, and abstract algebra.

Arguably, computational complexity theory is a child of recursion theory; both theories share the same technical tool, namely the Turing machine.

Recursion theorists in mathematical logic often study the theory of relative computability, reducibility notions, and degree structures.

This contrasts with the theory of subrecursive hierarchies, formal methods and formal languages that is common in the study of computability theory in computer science.

There is a considerable overlap in knowledge and methods between these two research communities; however, no firm line can be drawn between them.

For instance, parameterized complexity was invented by the complexity theorist Michael Fellows and the recursion theorist Rod Downey.
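
As a toy illustration of the reducibility notions mentioned above (again, not something from the original page), here is a sketch of a many-one reduction of the halting problem to the problem "does this program halt on the empty input?". It assumes the simplifying convention that programs are Python source strings which read their input from a variable named input_data; all names are hypothetical:

```python
# Toy many-one reduction of HALT to HALT-ON-EMPTY.  Programs are modeled,
# purely for illustration, as Python source strings that read a variable
# named input_data.

def reduce_halting(program_source: str, input_data: str) -> str:
    # The constructed program ignores whatever input it is actually given:
    # it first overwrites input_data with the fixed input, then runs the
    # original program, so it halts on empty input exactly when
    # program_source halts on input_data.
    return f"input_data = {input_data!r}\n" + program_source

# (program_source, input_data) lies in HALT
#   <=>  reduce_halting(program_source, input_data) lies in HALT-ON-EMPTY,
# which is the defining property of a many-one reduction.
```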
