Units of information

In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels.

Description

In information theory, units of information are also used to measure the information content or entropy of random variables.
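
For example, the entropy of a random variable can be expressed in bits using the Shannon formula H = -sum(p * log2 p). A minimal sketch in Python (the coin probabilities below are illustrative, not taken from the article):

  import math

  def entropy_bits(probabilities):
      # Shannon entropy in bits: H = -sum(p * log2(p)) over the nonzero probabilities.
      return -sum(p * math.log2(p) for p in probabilities if p > 0)

  # A fair coin carries exactly 1 bit of information per toss.
  print(entropy_bits([0.5, 0.5]))   # 1.0

  # An (illustrative) 90/10 biased coin carries less than 1 bit per toss.
  print(entropy_bits([0.9, 0.1]))   # ~0.469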

Bit and byte

The most common units are:

  • The bit, the capacity of a system that can exist in only two states
  • The byte (or octet), equal to eight bits (see the short sketch after this list)
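
A minimal sketch of these two definitions in Python (the names used here are ours, for illustration only):

  # A bit distinguishes exactly two states; a byte (octet) groups eight bits.
  BITS_PER_BYTE = 8

  # Number of distinct values one byte can represent: 2 to the 8th power.
  print(2 ** BITS_PER_BYTE)      # 256

  # Python can report how many bits a given value needs:
  print((255).bit_length())      # 8 -- the largest value that fits in one byte
  print((256).bit_length())      # 9 -- one bit too many for a single byte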

Multiples of these units can be formed with the SI prefixes (powers of ten) or the newer IEC binary prefixes (powers of two).
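
For example, the SI prefix "kilo" scales by 1000, while the IEC prefix "kibi" scales by 1024. A short sketch comparing the two conventions (the helper function and its name are ours, not a standard library API):

  # SI (decimal) prefixes scale by powers of 1000; IEC (binary) prefixes by powers of 1024.
  SI_UNITS  = ["kB", "MB", "GB", "TB"]       # kilobyte, megabyte, gigabyte, terabyte
  IEC_UNITS = ["KiB", "MiB", "GiB", "TiB"]   # kibibyte, mebibyte, gibibyte, tebibyte

  def format_bytes(count, binary=False):
      # Express a byte count with either SI (base 1000) or IEC (base 1024) prefixes.
      base = 1024 if binary else 1000
      units = IEC_UNITS if binary else SI_UNITS
      value = float(count)
      for unit in units:
          value /= base
          if value < base:
              return f"{value:.2f} {unit}"
      return f"{value:.2f} {units[-1]}"

  print(format_bytes(1_000_000))                # 1.00 MB    (10^6 bytes)
  print(format_bytes(1_000_000, binary=True))   # 976.56 KiB (the same count in base 1024)

The same byte count therefore reads as a round number under one convention but not under the other, which is why the two prefix systems are kept distinct.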

Information capacity is a dimensionless quantity, because it refers to a count of binary symbols.

See also

External links