Units of information
In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels.
Description
In information theory, units of information are also used to measure the information content or entropy of random variables.
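As a brief illustration, the Shannon entropy of a discrete random variable is H(X) = -Σ p(x) log2 p(x); taking logarithms to base 2 expresses the result in bits. The following is a minimal Python sketch (the helper name entropy_bits is chosen here for illustration, not taken from any standard library):

import math

def entropy_bits(probabilities):
    """Shannon entropy of a discrete distribution, in bits (log base 2).
    Outcomes with zero probability contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly 1 bit of information per toss ...
print(entropy_bits([0.5, 0.5]))   # 1.0
# ... while a biased coin carries less.
print(entropy_bits([0.9, 0.1]))   # about 0.469 bits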
Bit and byte
The most common units are:
- The bit, the capacity of a system which can exist in only two states
- The byte (or octet), which is equivalent to eight bits
Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes, e.g. one kilobyte = 10^3 bytes) or the newer IEC binary prefixes (power-of-two prefixes, e.g. one kibibyte = 2^10 bytes).
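The two prefix families diverge as the units grow: a gigabyte (10^9 bytes) is about 7 percent smaller than a gibibyte (2^30 bytes). A minimal Python sketch of the conversion follows; the function name to_unit and the prefix tables are illustrative, not part of any standard API:

# Decimal (SI) and binary (IEC) prefixes express multiples of the byte.
SI_PREFIXES  = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
IEC_PREFIXES = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def to_unit(n_bytes, unit):
    """Express a raw byte count in the given SI or IEC unit."""
    factor = {**SI_PREFIXES, **IEC_PREFIXES}[unit]
    return n_bytes / factor

size = 1_000_000_000  # one "gigabyte" as labelled on a drive
print(to_unit(size, "GB"))   # 1.0      (SI: 10^9 bytes)
print(to_unit(size, "GiB"))  # ~0.931   (IEC: 2^30 bytes)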
Information capacity is a dimensionless quantity, because it refers to a count of binary symbols.
See also
External links
- Unit of information @ Wikipedia