Bits and Bytes

Bandwidth Definition

Bandwidth describes the maximum data transfer rate of a network or Internet connection. It measures how much data can be sent over a specific connection in a given amount of time. For example, a gigabit Ethernet connection has a bandwidth of 1,000 Mbps (125 megabytes per second). An Internet connection via cable modem may provide 25 Mbps of bandwidth.

While bandwidth is used to describe network speeds, it does not measure how fast bits of data move from one location to another. Since data travels as electrical or optical signals over copper or fiber optic cables, the time it takes any individual bit to arrive is negligible. Instead, bandwidth measures how much data can flow through a specific connection at one time.

When visualizing bandwidth, it may help to think of a network connection as a tube and each bit of data as a grain of sand. If you pour a large amount of sand into a skinny tube, it will take a long time for the sand to flow through it. If you pour the same amount of sand through a wide tube, the sand will finish flowing through the tube much faster. Similarly, a download will finish much faster when you have a high-bandwidth connection rather than a low-bandwidth connection.

Data often flows over multiple network connections, which means the connection with the smallest bandwidth acts as a bottleneck. Generally, the Internet backbone and connections between servers have the most bandwidth, so they rarely serve as bottlenecks. Instead, the most common Internet bottleneck is your connection to your ISP.
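
To make the arithmetic concrete, the minimal Python sketch below (using illustrative link speeds and file sizes, not measurements of any real network) estimates a download time: the effective bandwidth of a path is that of its slowest link, and the transfer time follows from the file size.

BITS_PER_BYTE = 8

def download_time_seconds(file_size_mb, link_bandwidths_mbps):
    # The slowest link along the path acts as the bottleneck.
    bottleneck_mbps = min(link_bandwidths_mbps)
    file_size_megabits = file_size_mb * BITS_PER_BYTE
    return file_size_megabits / bottleneck_mbps

# A 500 MB download over a backbone link, a server link, and a 25 Mbps cable modem:
print(download_time_seconds(500, [10000, 1000, 25]))   # 160.0 seconds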

NOTE: Bandwidth also refers to a range of frequencies used to transmit a signal. This type of bandwidth is measured in hertz and is often referenced in signal processing applications.

Bitrate Definition

Bitrate, as the name implies, describes the rate at which bits are transferred from one location to another. In other words, it measures how much data is transmitted in a given amount of time. Bitrate is commonly measured in bits per second (bps), kilobits per second (Kbps), or megabits per second (Mbps). For example, a DSL connection may be able to download data at 768 Kbps, while a FireWire 800 connection can transfer data at up to 800 Mbps.

Bitrate can also describe the quality of an audio or video file. For example, an MP3 audio file that is compressed at 192 Kbps will have a greater dynamic range and may sound slightly clearer than the same audio file compressed at 128 Kbps. This is because more bits are used to represent the audio data for each second of playback. Similarly, a video file that is compressed at 3,000 Kbps will look better than the same file compressed at 1,000 Kbps. Just as the quality of an image is measured by its resolution, the quality of an audio or video file is measured by its bitrate.
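
As a rough illustration, the Python sketch below estimates the size of a constant-bitrate audio file; the bitrates and song length are example values, and real files also include headers and metadata.

def audio_file_size_mb(bitrate_kbps, duration_seconds):
    # Total bits = bitrate x duration; divide by 8 for bytes, then scale to megabytes.
    total_bits = bitrate_kbps * 1000 * duration_seconds
    return total_bits / 8 / 1_000_000

# A three-minute (180-second) song at two common MP3 bitrates:
print(audio_file_size_mb(128, 180))   # 2.88 MB
print(audio_file_size_mb(192, 180))   # 4.32 MB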

Megabyte Definition

A megabyte is 10^6, or 1,000,000 bytes.

One megabyte (abbreviated “MB”) is equal to 1,000 kilobytes and precedes the gigabyte unit of measurement. While a megabyte is technically 1,000,000 bytes, megabytes are often used synonymously with mebibytes, which contain 1,048,576 bytes (2^20, or 1,024 x 1,024, bytes).
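
The difference between the two definitions is easy to verify; the short Python sketch below simply prints both values.

megabyte = 10**6    # 1,000,000 bytes (decimal definition)
mebibyte = 2**20    # 1,048,576 bytes (1,024 x 1,024)

print(mebibyte - megabyte)   # 48,576 bytes -- why "1 MB" can mean two slightly different sizes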

Megabytes are often used to measure the size of large files. For example, a high-resolution JPEG image file might range in size from one to five megabytes. Uncompressed RAW images from a digital camera may require 10 to 50 MB of disk space. A three-minute song saved in a compressed format may be roughly three megabytes in size, and the uncompressed version may take up 30 MB of space. While CD capacity is measured in megabytes (typically 700 to 800 MB), the capacity of most other forms of media, such as flash drives and hard drives, is typically measured in gigabytes or terabytes.

Terabyte Definition

A terabyte is 1012 or 1,000,000,000,000 bytes.

One terabyte (abbreviated “TB”) is equal to 1,000 gigabytes and precedes the petabyte unit of measurement. While a terabyte is exactly 1 trillion bytes, in some cases terabytes and tebibytes are used synonymously, though a tebibyte actually contains 1,099,511,627,776 bytes (1,024 gibibytes).
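
This gap explains why a drive sold as “1 TB” (one trillion bytes) appears smaller when an operating system reports its capacity in binary units; the Python sketch below shows the conversion.

terabyte = 10**12    # 1,000,000,000,000 bytes
tebibyte = 2**40     # 1,099,511,627,776 bytes
gibibyte = 2**30

print(terabyte / tebibyte)   # ~0.909 -- a 1 TB drive is about 0.91 TiB
print(terabyte / gibibyte)   # ~931.32 -- or roughly 931 GiB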

Terabytes are most often used to measure the storage capacity of large storage devices. While hard drives were measured in gigabytes for many years, around 2007, consumer hard drives reached a capacity of one terabyte. Now, all hard drives that have a capacity of 1,000 GB or more are measured in terabytes. For example, a typical internal HDD may hold 2 TB of data. Some servers and high-end workstations that contain multiple hard drives may have a total storage capacity of over 10 TB.

Terabytes are also used to measure bandwidth, or data transferred in a specific amount of time. For example, a Web host may limit a shared hosting client’s bandwidth to 2 terabytes per month. Dedicated servers often have higher bandwidth limits, such as 10 TB/mo or more.

Kilobit Definition

A kilobit is 10^3, or 1,000 bits.

One kilobit (abbreviated “Kb”) contains one thousand bits, and there are 1,000 kilobits in a megabit. Kilobits (Kb) are smaller than kilobytes (KB), since there are 8 kilobits in a single kilobyte. Therefore, a kilobit is one-eighth the size of a kilobyte.

Before broadband Internet was common, Internet connection speeds were often measured in kilobits. For example, a 28.8K modem was able to receive data up to 28.8 kilobits per second (Kbps). A 56K modem could receive data up to 56 Kbps. Most ISPs now offer connection speeds of 10 Mbps and higher, so kilobits are not as commonly used. However, if you download a file from a server that does not have a lot of bandwidth, your system might receive less than 1 Mbps. In this case, the download speed may be displayed in kilobits per second.

NOTE: While kilobits are used to measure data transfer rates, kilobytes are used to measure file size. Therefore, if you download an 800 KB file at 400 Kbps, it will take 16 seconds (rather than 2). This is because 400 kilobits per second is equal to 50 kilobytes per second.
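
The note above works out as follows in a small Python sketch; the file size and connection speed are the same example values.

def download_seconds(file_size_kilobytes, speed_kbps):
    # Convert the file size from kilobytes to kilobits (8 bits per byte),
    # then divide by the transfer rate in kilobits per second.
    return (file_size_kilobytes * 8) / speed_kbps

print(download_seconds(800, 400))   # 16.0 seconds, since 400 Kbps = 50 kilobytes per second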

String Definition

A string is a data type used in programming, like an integer or a floating-point number, but one that represents text rather than numbers. It consists of a sequence of characters that can also contain spaces and numbers. For example, the word “hamburger” and the phrase “I ate 3 hamburgers” are both strings. Even “12345” could be considered a string, if specified correctly. Typically, programmers must enclose strings in quotation marks for the data to be recognized as a string and not a number or variable name.

For example, in the comparison:

if (Option1 == Option2) then …

Option1 and Option2 may be variables containing integers, strings, or other data. If the values are the same, the test returns true; otherwise, the result is false. In the comparison:

if (“Option1” == “Option2”) then …

Option1 and Option2 are being treated as strings. Therefore, the test is comparing the words “Option1” and “Option2,” which would return false. In some programming languages, the end of a string is marked by a null character, which is used to determine the string’s length.
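
The same comparisons can be written in runnable form; the Python sketch below uses hypothetical variable names and values purely for illustration.

option1 = "chocolate"
option2 = "chocolate"

print(option1 == option2)        # True: both variables hold the same text
print("Option1" == "Option2")    # False: the quoted words themselves differ
print("12345" == 12345)          # False: a numeric string is still text, not a number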

Kbps Definition

Stands for “Kilobits Per Second.” 1 Kbps is equal to 1,000 bits per second. That means a 300 Kbps connection can transfer 300,000 bits in one second. 1,000 Kbps is equal to 1 Mbps.

Kbps is primarily used to measure data transfer rates. For example, dial-up modems were rated by their maximum download speeds, such as 14.4, 28.8, and 56 Kbps. Throughout the 1990s and early 2000s, Kbps remained the standard way to measure data transfer rates. However, broadband connections such as cable and DSL now offer speeds of several megabits per second. Therefore, Mbps is now more commonly used than Kbps.

NOTE: The lowercase “b” in Kbps is significant. It stands for “bits,” not bytes (which are represented by a capital “B”). Since there are eight bits in one byte, 400 Kbps is equal to 400 ÷ 8, or 50 KBps. Because data transfer speeds have traditionally been measured in bps, Kbps is more commonly used than KBps.
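
The bit-to-byte conversion in the note is a single division; the Python sketch below applies it to the example rates.

def kilobits_to_kilobytes_per_second(kbps):
    # There are 8 bits in a byte, so divide the bit rate by 8.
    return kbps / 8

print(kilobits_to_kilobytes_per_second(400))   # 50.0 KBps
print(kilobits_to_kilobytes_per_second(56))    # 7.0 KBps -- a 56K modem moves at most 7 kilobytes per second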

Byte Definition

A byte is a data measurement unit that contains eight bits, or a series of eight zeros and ones. A single byte can be used to represent 2^8, or 256, different values.

The byte was originally created to store a single character, since 256 values is sufficient to represent all lowercase and uppercase letters, numbers, and symbols in Western languages. However, since some languages have more than 256 characters, modern character encoding standards, such as UTF-16, use at least two bytes (16 bits) for each character. With two bytes, it is possible to represent 2^16, or 65,536, values.
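
A short Python sketch shows how many values fit in one or two bytes, and how a single character is stored as bytes under two common encodings.

print(2**8)     # 256 values in one byte
print(2**16)    # 65,536 values in two bytes

# Encoding the letter "A" (code point 65):
print("A".encode("utf-8"))       # b'A' -- one byte
print("A".encode("utf-16-le"))   # b'A\x00' -- two bytes per code unit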

While the byte was originally designed to store character data, it has become the fundamental unit of measurement for data storage. For example, a kilobyte contains 1,000 bytes. A megabyte contains 1,000 x 1,000, or 1,000,000 bytes.

A small plain text file may only contain a few bytes of data. However, many file systems have a minimum cluster size of 4 kilobytes, which means each file requires a minimum of 4 KB of disk space. Therefore, bytes are more often used to measure specific data within a file rather than files themselves. Large file sizes may be measured in megabytes, while data storage capacities are often measured in gigabytes or terabytes.

NOTE: One kibibyte contains 1,024 bytes. One mebibyte contains 1,024 x 1,024, or 1,048,576, bytes.

Bit Definition

A bit (short for “binary digit”) is the smallest unit of measurement used to quantify computer data. It contains a single binary value of 0 or 1.

While a single bit can define a boolean value of True (1) or False (0), an individual bit has little other use. Therefore, in computer storage, bits are often grouped together in 8-bit clusters called bytes. Since a byte contains eight bits that each have two possible values, a single byte may have 2^8, or 256, different values.

The terms “bits” and “bytes” are often confused and are even used interchangeably since they sound similar and are both abbreviated with the letter “B.” However, when written correctly, bits are abbreviated with a lowercase “b,” while bytes are abbreviated with a capital “B.” It is important not to confuse these two terms, since any measurement in bytes contains eight times as many bits. For example, a small text file that is 4 KB in size contains 4,000 bytes, or 32,000 bits.
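
The conversion in that example is a straightforward multiplication, as the Python sketch below shows.

def kilobytes_to_bits(kilobytes):
    # 1,000 bytes per kilobyte, 8 bits per byte.
    return kilobytes * 1000 * 8

print(kilobytes_to_bits(4))   # 32000 bits in a 4 KB text file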

Generally, files, storage devices, and storage capacity are measured in bytes, while data transfer rates are measured in bits. For instance, an SSD may have a storage capacity of 240 GB, while a download may transfer at 10 Mbps. Additionally, bits are also used to describe processor architecture, such as a 32-bit or 64-bit processor.