Computing Fundamentals: General Computing Terms
From Whatis.com
bit

A bit (short for binary digit) is the smallest unit of data in a computer. A bit has a single binary value, either 0 or 1. Although computers usually provide instructions that can test and manipulate bits, they are generally designed to store data and execute instructions in bit multiples called bytes. In most computer systems, there are eight bits in a byte. The value of a bit is usually stored as an electrical charge either above or below a designated level in a single capacitor within a memory device.
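
For illustration, most programming languages provide bitwise operators for this kind of bit testing and manipulation. A minimal Python sketch (the byte value is arbitrary):

    # Test, set, and clear individual bits of a byte with bitwise operators.
    value = 0b10100110          # one byte, written in binary

    bit_3 = (value >> 3) & 1    # test bit 3 (counting from 0 at the right): 0
    value |= (1 << 3)           # set bit 3: value becomes 0b10101110
    value &= ~(1 << 7)          # clear bit 7: value becomes 0b00101110

    print(bin(value))           # 0b101110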

Half a byte (four bits) is called a nibble. In some systems, the term octet is used for an eight-bit unit instead of byte. In many systems, four eight-bit bytes or octets form a 32-bit word. In such systems, instruction lengths are sometimes expressed as full-word (32 bits in length) or half-word (16 bits in length).

In telecommunication, the bit rate is the number of bits that are transmitted in a given time period, usually a second.
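
The arithmetic is simple division. A small Python sketch with hypothetical figures:

    # Bit rate = number of bits transmitted / elapsed time in seconds.
    bits_transmitted = 1_000_000
    elapsed_seconds = 0.5
    bit_rate = bits_transmitted / elapsed_seconds
    print(f"{bit_rate:,.0f} bits per second")  # 2,000,000 bits per second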

crumb

In computers, crumb is jargon for two bits (that is, two binary digits). According to the Jargon File, its synonyms are quad, taste, and tayste. The term is rarely used.

nibble

In computers and digital technology, a nibble (pronounced NIHB-uhl; sometimes spelled nybble) is four binary digits or half of an eight-bit byte. A nibble can be conveniently represented by one hexadecimal digit.
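
Because each hexadecimal digit maps to exactly one nibble, the two nibbles of a byte can be separated with a shift and a mask. A brief Python illustration (the byte value is arbitrary):

    # Split a byte into its high and low nibbles; each is one hex digit.
    byte = 0xA7                   # 0b10100111
    high_nibble = byte >> 4       # 0xA (0b1010)
    low_nibble = byte & 0x0F      # 0x7 (0b0111)
    print(hex(high_nibble), hex(low_nibble))  # 0xa 0x7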

Like crumb, nibble carries on the "edible data" metaphor established with bit and byte.

In communications, a nibble is sometimes referred to as a "quadbit," one of 16 possible four-bit combinations. A signal may be encoded in quadbits rather than one bit at a time. Nibble interleaving or multiplexing takes a quadbit or nibble from a lower-speed channel as input for a multiplexed signal on a higher-speed channel.
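
A rough Python sketch of the interleaving idea, assuming two lower-speed channels whose nibbles are alternated onto one higher-speed stream (the channel data is hypothetical):

    # Interleave nibbles (quadbits) from two lower-speed channels
    # into a single multiplexed stream for a higher-speed channel.
    channel_a = [0x1, 0x2, 0x3]
    channel_b = [0xA, 0xB, 0xC]

    multiplexed = []
    for a, b in zip(channel_a, channel_b):
        multiplexed.extend([a, b])  # one quadbit from each channel in turn

    print([hex(n) for n in multiplexed])  # ['0x1', '0xa', '0x2', '0xb', '0x3', '0xc']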

octet

In computers, an octet (from the Latin octo, meaning "eight") is a sequence of eight bits. An octet is thus an eight-bit byte. Since a byte is not eight bits in all computer systems, octet provides an unambiguous term. It should not be confused with octal, a term that describes a base-8 number system.
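
A short Python illustration of the distinction:

    # An octet is eight bits and can hold the values 0 through 255;
    # octal is merely base-8 notation for writing any number.
    octet_max = 2 ** 8 - 1      # 255, the largest value an octet can hold
    octal_17 = 0o17             # the octal numeral 17, which is decimal 15
    print(octet_max, octal_17)  # 255 15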

byte

In most computer systems, a byte is a unit of data that is eight binary digits long. A byte is the unit most computers use to represent a character such as a letter, number, or typographic symbol (for example, "g", "5", or "?"). A byte can also hold a string of bits that need to be used in some larger unit for application purposes (for example, the stream of bits that constitute a visual image for a program that displays images or the string of bits that constitutes the machine code of a computer program).
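
For illustration, the one-byte character values of ASCII can be inspected directly in Python:

    # Each ASCII character fits in a single byte.
    print(ord("g"))                    # 103, the byte value for "g"
    print(bytes("g5?", "ascii"))       # b'g5?' -- three characters, three bytes
    print(len(bytes("g5?", "ascii")))  # 3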

In some computer systems, four bytes constitute a word, a unit that a computer processor can be designed to handle efficiently as it reads and processes each instruction. Some computer processors can handle two-byte or single-byte instructions.

A byte is abbreviated with a capital "B" (a bit is abbreviated with a lowercase "b"). Computer storage is usually measured in byte multiples. For example, an 820 MB hard drive holds a nominal 820 million bytes, or 820 megabytes, of data. Byte multiples are based on powers of 2 and are commonly expressed as "rounded off" decimal numbers. For example, one megabyte ("one million bytes") is actually 1,048,576 (decimal) bytes. (Confusingly, however, some hard disk manufacturers and dictionary sources state that bytes for computer storage should be calculated as powers of 10, so that a megabyte really would be one million decimal bytes.)
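
The two conventions differ by almost five percent at the megabyte scale, as a small Python sketch shows:

    # One "megabyte" under the two conventions described above.
    binary_megabyte = 2 ** 20       # 1,048,576 bytes (powers of 2)
    decimal_megabyte = 10 ** 6      # 1,000,000 bytes (powers of 10)
    print(binary_megabyte - decimal_megabyte)  # 48576 bytes of difference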

Some language scripts require two bytes to represent a character. These are called double-byte character sets (DBCS).
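
For illustration, Shift-JIS is one widely used double-byte character set; Python's standard shift_jis codec shows the one-byte versus two-byte split:

    # In Shift-JIS, ASCII characters take one byte; Japanese characters take two.
    print(len("A".encode("shift_jis")))    # 1
    print(len("あ".encode("shift_jis")))   # 2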

According to Fred Brooks, an early hardware architect for IBM, project manager for the OS/360 operating system, and author of The Mythical Man-Month, Dr. Werner Buchholz originated the term byte in 1956 while working on IBM's STRETCH computer.

Also see megabyte, gigabyte, terabyte, petabyte, and exabyte.

word

In computer architecture, a word is a unit of data of a defined bit length that can be addressed and moved between storage and the computer processor. Usually, the defined bit length of a word is equivalent to the width of the computer's data bus, so that a word can be moved in a single operation from storage to a processor register. For any computer architecture with an eight-bit byte, the word will be some multiple of eight bits. In IBM's evolutionary System/360 architecture, a word is 32 bits, or four contiguous eight-bit bytes. In Intel's PC processor architecture, a word is 16 bits, or two contiguous eight-bit bytes.

Some computer processor architectures support a half word, which is half the bit length of a word, and a double word (doubleword), which is two contiguous words. Intel's processor architecture also supports a quadword, two contiguous doublewords, and a double quadword, two contiguous quadwords.
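
A small Python sketch of the Intel-style size hierarchy, using the struct module's fixed-size codes (the mapping of codes to unit names follows the 16-bit word convention described above):

    import struct

    # Intel-style units: word = 16 bits, doubleword = 32, quadword = 64.
    word = struct.pack("<H", 0x1234)                   # 2 bytes
    doubleword = struct.pack("<I", 0x12345678)         # 4 bytes
    quadword = struct.pack("<Q", 0x123456789ABCDEF0)   # 8 bytes
    print(len(word), len(doubleword), len(quadword))   # 2 4 8
    # A double quadword would be 16 bytes (two contiguous quadwords).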

A word can contain a computer instruction, a storage address, or application data that is to be manipulated (for example, added to the data in another word space). In some architectures, a double word or larger unit is required to contain an instruction, address, or application data. Typically, an instruction is a word in length, but some architectures support halfword- and doubleword-length instructions.

In general, the longer the architected word length, the more the computer processor can do in a single operation.