Hardware

RFID Definition

Stands for “Radio-Frequency Identification.” RFID is a system used to track objects, people, or animals using tags that respond to radio waves. RFID tags are integrated circuits that include a small antenna. They are typically small enough that they are not easily noticeable and therefore can be placed on many types of objects.

Like UPC labels, RFID tags are often used to uniquely identify the object they are attached to. However, unlike UPCs, RFID tags don’t need to be scanned directly with a laser scanner. Instead, they can be read simply by placing the tag within range of an RFID reader. This makes it possible to quickly scan several items or to locate a specific product surrounded by many other items.

RFID tags have many different uses. Some examples include:

  • Merchandise tags – These tags are attached to clothing, electronics, and other products to prevent theft from retail stores. They are typically deactivated at checkout; tags that have not been deactivated will trigger the alarm system near the store’s exit.
  • Inventory management – Products stored in warehouses may be given RFID tags so they can be located more easily.
  • Airplane luggage – RFID tags may be placed on checked bags so they can be easily tracked and located.
  • Toll booth passes – E-ZPass and I-Pass receivers may be placed in automobiles, allowing cars and trucks to pass through toll booths without needing to stop. This enables drivers to make toll payments automatically.
  • Credit cards – Some credit cards have built-in RFID tags, so they can be “waved” near a compatible reader rather than “swiped.” The SpeedPass wand is an example of an RFID-only payment device.
  • Animal tags – RFID tags can be placed on pet collars to help identify pets if they are lost. Tags may also be placed on birds and other animals to help track them for research purposes.

The above list includes just a few of the applications of radio-frequency identification. There are many other existing and potential applications for RFID tags as well.

Laptop Definition

Laptop computers, also known as notebooks, are portable computers that you can take with you and use in different environments. They include a screen, keyboard, and a trackpad or trackball, which serves as the mouse. Because laptops are meant to be used on the go, they have a battery which allows them to operate without being plugged into a power outlet. Laptops also include a power adapter that allows them to use power from an outlet and recharges the battery.

While portable computers used to be significantly slower and less capable than desktop computers, advances in manufacturing technology have enabled laptops to perform nearly as well as their desktop counterparts. In fact, high-end laptops often perform better than low or even mid-range desktop systems. Most laptops also include several I/O ports, such as USB ports, that allow standard keyboards and mice to be used with the laptop. Modern laptops often include a wireless networking adapter as well, allowing users to access the Internet without requiring any wires.

While laptops can be powerful and convenient, the convenience often comes at a price. Most laptops cost several hundred dollars more than a similarly equipped desktop model with a monitor, keyboard, and mouse. Furthermore, working long hours on a laptop with a small screen and keyboard may be more fatiguing than working on a desktop system. Therefore, if portability is not a requirement for your computer, you may find better value in a desktop model.

ADC Definition

Stands for “Analog-to-Digital Converter.” Since computers only process digital information, they require digital input. Therefore, if an analog input is sent to a computer, an analog-to-digital converter (ADC) is required. This device can take an analog signal, such as an electrical current, and digitize it into a binary format that the computer can understand.

A common use for an ADC is to convert analog video to a digital format. For example, video recorded on an 8mm tape or a VHS tape is stored in an analog format. In order to transfer the video to a computer, the video must be converted to a digital format. This can be done using an ADC video conversion box, which typically has composite video inputs and a FireWire output. Some digital camcorders that have analog inputs can also be used to convert video from analog to digital.

ADCs may also be used to convert analog audio streams. For example, if you want to record sounds from a microphone, the audio must be converted from the microphone’s analog signal into a digital signal that the computer can understand. This is why sound cards with an analog audio input include an ADC that converts the incoming audio signal to a digital format. The accuracy of the conversion depends on the sampling rate used: higher sampling rates provide a better approximation of the analog signal and therefore produce higher-quality sound.
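
Below is a minimal sketch of the idea, not tied to any particular sound card or driver API: it “samples” a stand-in analog signal at a fixed rate and quantizes each sample to an integer. The 440 Hz sine wave, sample rates, and bit depths are illustrative assumptions.

```python
import math

def analog_signal(t):
    """Stand-in for an analog input: a 440 Hz sine wave ranging from -1 to 1."""
    return math.sin(2 * math.pi * 440 * t)

def adc(signal, sample_rate, bits, duration):
    """Sample `signal` at `sample_rate` Hz and quantize each sample to `bits` of resolution."""
    levels = 2 ** bits
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate                           # sampling: discrete points in time
        value = signal(t)                             # read the "analog" voltage
        code = round((value + 1) / 2 * (levels - 1))  # quantization: map to an integer code
        samples.append(code)
    return samples

# A higher sample rate (and bit depth) captures the original signal more accurately.
low_quality = adc(analog_signal, sample_rate=8000, bits=8, duration=0.001)
high_quality = adc(analog_signal, sample_rate=44100, bits=16, duration=0.001)
print(len(low_quality), len(high_quality))  # 8 vs. 44 samples for the same millisecond
```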

While ADCs convert analog inputs into a digital format that computers can recognize, sometimes a computer must output an analog signal. For this type of conversion, a digital-to-analog converter (DAC) is used.

NOTE: ADC can also stand for “Apple Display Connector,” which was a proprietary video connector developed by Apple. It combined DVI, USB, and power in a single cable. Apple stopped producing computers with ADC ports in 2004 in favor of the standard DVI connection.

USB Definition

Stands for “Universal Serial Bus.” USB is the most common type of computer port used in today’s computers. It can be used to connect keyboards, mice, game controllers, printers, scanners, digital cameras, and removable media drives, just to name a few. With the help of a few USB hubs, you can connect up to 127 peripherals to a single USB port and use them all at once (though that would require quite a bit of dexterity).

USB is also faster than older ports, such as serial and parallel ports. The USB 1.1 specification supports data transfer rates of up to 12 Mbps, while USB 2.0 has a maximum transfer rate of 480 Mbps. Though USB was standardized in 1996, the technology didn’t really take off until the introduction of the Apple iMac (in late 1998), which used USB ports exclusively. This is somewhat ironic, considering USB was created and designed by PC industry companies such as Intel, Compaq, Digital, and IBM. Since then, USB has become a widely used cross-platform interface for both Macs and PCs.
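
For a rough sense of scale, the sketch below computes the ideal transfer time for a single file at each rate. The 700 MB file size is an arbitrary example, and real-world throughput is lower because of protocol overhead.

```python
# Idealized transfer times at the USB 1.1 and USB 2.0 signaling rates.
FILE_SIZE_MB = 700                       # e.g. a CD-sized file (assumed example)
file_size_megabits = FILE_SIZE_MB * 8    # 1 byte = 8 bits

for name, rate_mbps in [("USB 1.1", 12), ("USB 2.0", 480)]:
    seconds = file_size_megabits / rate_mbps
    print(f"{name}: about {seconds:.0f} seconds")

# USB 1.1: about 467 seconds
# USB 2.0: about 12 seconds
```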

USB-C Definition

Stands for “Universal Serial Bus Type-C.” USB-C is a type of USB connector that was introduced in 2015. It supports USB 3.1, which means a USB-C connection can transfer data at up to 10 Gbps and deliver up to 100 watts of power (20 volts at 5 amps). Unlike the previous USB Type-A and USB Type-B ports, the USB-C port is symmetrical, which means you never have to worry about plugging in the cable the wrong way.
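
Since power is simply voltage times current, the 100-watt figure follows from the 20 V / 5 A maximum. The sketch below walks through that arithmetic; the lower voltage/current pairs are typical USB Power Delivery profiles included here as illustrative assumptions.

```python
# Power (watts) = voltage (volts) x current (amps)
profiles = [(5, 3), (9, 3), (15, 3), (20, 5)]  # (volts, amps); 20 V / 5 A is the USB-C maximum

for volts, amps in profiles:
    print(f"{volts} V x {amps} A = {volts * amps} W")

# 20 V x 5 A = 100 W, the most a USB-C connection can deliver
```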

The USB-C connector is the most significant change to the USB connector since the USB interface was standardized in 1996. USB 1.1, 2.0, and 3.0 all used the same flat, rectangular USB-A connector. While there have been several variations of USB-B, such as Mini-USB and Micro-USB, they are all designed for peripheral devices, which connect to a Type-A port on the other end. The Type-C connector introduced with USB 3.1 is designed to be the same on both ends.

There is no mini or micro version of USB-C, since the standard USB-C connector is about the same size as a Micro-USB connector. This means it can be used in small devices like smartphones and tablets. Since USB-C supports up to 100 watts of power, it can also be used as the power connector for laptops. In fact, the first laptops to include USB-C ports – the 2015 Apple MacBook and Google Chromebook Pixel – do not include separate power connectors. Instead, the power cable connects directly to the USB-C port.

A USB-C connector will only fit in a USB-C port, but USB-C cables are backwards-compatible with other USB standards. Therefore, a USB-C to USB-A or USB-C to USB-B adapter can be used to connect older USB devices to a USB-C port. However, the data transfer rate and wattage will be limited to the lower standard.

FireWire Definition

FireWire is an I/O interface developed by Apple Computer. It is also known as IEEE 1394, which is the technical name standardized by the IEEE. Other names for IEEE 1394 include Sony i.Link and Yamaha mLAN, but Apple’s FireWire name is the most commonly used.

There are two primary versions of the FireWire interface – FireWire 400 (IEEE 1394a) and FireWire 800 (IEEE 1394b). FireWire 400 uses a 6-pin connector and supports data transfer rates of up to 400 Mbps. FireWire 800 uses a 9-pin connector and can transfer data at up to 800 Mbps. The FireWire 800 interface, which was introduced on Macintosh computers in 2003, is backwards compatible with FireWire 400 devices using an adapter. Both interfaces support daisy chaining and can supply power (at up to 30 volts) to connected devices.

FireWire is considered a high-speed interface, and therefore can be used for connecting peripheral devices that require fast data transfer speeds. Examples include external hard drives, video cameras, and audio interfaces. On Macintosh computers, FireWire can be used to boot a computer in target disk mode, which allows the hard drive to show up as an external drive on another computer. Mac OS X also supports networking two computers via a FireWire cable.

While FireWire has never been as popular as USB, it has remained a popular choice for audio and video professionals. Since FireWire supports speeds up to 800 Mbps, it is faster than USB 2.0, which maxes out at 480 Mbps. In fact, even FireWire 400 provides faster sustained read and write speeds than USB 2.0, which is important for recording audio and video in real-time. Future versions of IEEE 1394, such as FireWire 1600 and 3200, were designed to support even faster data transfer speeds. However, the FireWire interface has been superseded by Thunderbolt, which can transfer data at up to 10,000 Mbps (10 Gbps) and is backwards compatible with multiple interfaces.

DVD Definition

Stands for “Digital Versatile Disc.” A DVD is a type of optical media used for storing digital data. It is the same size as a CD, but has a larger storage capacity. Some DVDs are formatted specifically for video playback, while others may contain different types of data, such as software programs and computer files.

The original “DVD-Video” format was standardized in 1995 by a consortium of electronics companies, including Sony, Panasonic, Toshiba, and Philips. It provided a number of improvements over analog VHS tapes, including higher quality video, widescreen aspect ratios, custom menus, and chapter markers, which allow you to jump to different sections within a video. DVDs can also be watched repeatedly without reducing the quality of the video, and of course they don’t need to be rewound. A standard video DVD can store 4.7 GB of data, which is enough to hold over two hours of standard-definition video using MPEG-2 compression.
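
A quick back-of-the-envelope check of that runtime figure, using decimal gigabytes and an assumed average combined video/audio bitrate of about 5 Mbps:

```python
# Rough estimate of how much MPEG-2 video fits on a single-layer DVD.
CAPACITY_BYTES = 4.7e9           # 4.7 GB, decimal gigabytes
AVG_BITRATE_BPS = 5_000_000      # ~5 Mbps average bitrate (assumption)

seconds = CAPACITY_BYTES * 8 / AVG_BITRATE_BPS
print(f"{seconds / 3600:.1f} hours")  # about 2.1 hours
```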

DVDs are also used to distribute software programs. Since some applications and other software (such as clip art collections) are too large to fit on a single 700 MB CD, DVDs provide a way to distribute large programs on a single disc. Writable DVDs also provide a way to store a large number of files and back up data. The writable DVD formats include DVD-R, DVD+R, DVD-RW, DVD+RW, and DVD-RAM. While the different writable DVD formats caused a lot of confusion and incompatibility issues in the early 2000s, most DVD drives now support all formats besides DVD-RAM.

A standard DVD can hold 4.7 GB of data, but variations of the original DVD format have greater capacities. For example, a dual-layer DVD (which has two layers of data on a single side of the disc) can store 8.5 GB of data. A dual-sided DVD can store 9.4 GB of data (4.7 x 2). A dual-layer, dual-sided DVD can store 17.1 GB of data. The larger capacity formats are not supported by most standalone DVD players, but they can be used with many computer-based DVD drives.

Host Definition

A host is a computer that is accessible over a network. It can be a client, server, or any other type of computer. Each host has a unique identifier called a hostname that allows other computers to access it.

Depending on the network protocol, a computer’s hostname may be a domain name, IP address, or simply a unique text string. For example, the hostname of a computer on a local network might be Tech-Terms.local, while an Internet hostname might be techterms.com. A host can access its own data over a network protocol using the hostname “localhost.”
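
As a minimal illustration using Python’s standard socket module, the calls below look up this machine’s own hostname and resolve other hostnames to IP addresses (the last line performs a DNS lookup, so it assumes a working network connection):

```python
import socket

print(socket.gethostname())                    # this machine's own hostname
print(socket.gethostbyname("localhost"))       # 127.0.0.1, the loopback address
print(socket.gethostbyname("techterms.com"))   # IP address of an Internet host (requires DNS)
```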

Host vs Server

The terms host and server are often used interchangeably, but they are two different things. All servers are hosts, but not all hosts are servers. To avoid confusion, servers are often defined as a specific type of host, such as a web host or mail host. For instance, a mail host and mail server may refer to the same thing.

While a server refers to a specific machine, a host may also refer to an organization that provides a service over the Internet. For example, a web host (or web hosting company) maintains multiple web servers and provides web hosting services for clients. A file host may provide online storage using multiple file servers. In other words, a hosting company hosts multiple servers that serve data to clients.

CRT Definition

Stands for “Cathode Ray Tube.” CRT is the technology used in traditional computer monitors and televisions. The image on a CRT display is created by firing electrons from the back of the tube at phosphors located toward the front of the display. When the electrons hit the phosphors, the phosphors light up, producing the image on the screen. The color you see on the screen is produced by a blend of red, blue, and green light, often referred to as RGB.

The stream of electrons is guided by magnetic fields, which is why you may get interference from unshielded speakers or other magnetic devices placed close to a CRT monitor. Flat screen or LCD displays don’t have this problem, since they don’t rely on magnetic deflection. LCD monitors also don’t use a tube, which is what enables them to be much thinner than CRT monitors. While CRT displays are still used by graphics professionals because of their vibrant and accurate color, LCD displays now nearly match the quality of CRT monitors. Therefore, flat screen displays are well on their way to replacing CRT monitors in both the consumer and professional markets.

LCD Definition

Stands for “Liquid Crystal Display.” LCD is a flat panel display technology commonly used in TVs and computer monitors. It is also used in screens for mobile devices, such as laptops, tablets, and smartphones.

LCD displays don’t just look different from bulky CRT monitors; the way they operate is significantly different as well. Instead of firing electrons at a glass screen, an LCD has a backlight that provides light to individual pixels arranged in a rectangular grid. Each pixel has red, green, and blue (RGB) sub-pixels that can be turned on or off. When all of a pixel’s sub-pixels are turned off, it appears black. When all the sub-pixels are turned on 100%, it appears white. By adjusting the individual levels of red, green, and blue light, millions of color combinations are possible.
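
For example, assuming the common case of 8 bits (256 brightness levels) per sub-pixel, the number of possible combinations works out as follows:

```python
LEVELS = 256                        # brightness levels per sub-pixel (8-bit assumption)

print(LEVELS ** 3)                  # 16,777,216 possible colors per pixel
print((0, 0, 0), "= black")         # all sub-pixels off
print((255, 255, 255), "= white")   # all sub-pixels fully on
```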

How an LCD works

The backlight in a liquid crystal display provides an even light source behind the screen. This light is polarized, meaning only about half of it shines through to the liquid crystal layer. The liquid crystals are made up of a part solid, part liquid substance that can be “twisted” by applying electrical voltage to them. They block the polarized light when they are off, but let red, green, or blue light pass through when they are activated.

Each LCD screen contains a matrix of pixels that display the image on the screen. Early LCDs had passive-matrix screens, which controlled individual pixels by sending a charge to their row and column. Since only a limited number of electrical charges could be sent each second, passive-matrix screens were known for appearing blurry when images moved quickly on the screen. Modern LCDs typically use active-matrix technology, which contains thin-film transistors, or TFTs. These transistors include capacitors that enable individual pixels to “actively” retain their charge. Therefore, active-matrix LCDs are more efficient and appear more responsive than passive-matrix displays.
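
The toy model below (a deliberate simplification, not how a real display controller is programmed) illustrates the difference: rows are refreshed one at a time in both cases, but a passive-matrix pixel leaks charge while it waits for its row, whereas an active-matrix pixel latches its value until the next refresh. The leakage rate is an assumed placeholder.

```python
ROWS, COLS = 4, 4
LEAK = 0.5  # fraction of charge a passive-matrix pixel loses between refreshes (assumed)

def refresh(frame, active_matrix):
    """Write one frame row by row and return the resulting pixel charges."""
    screen = [[0.0] * COLS for _ in range(ROWS)]
    for r in range(ROWS):
        for c in range(COLS):
            screen[r][c] = frame[r][c]        # strobe row r with the target values
        if not active_matrix:
            # While later rows are strobed, already-written passive pixels leak charge.
            for earlier in range(r):
                for c in range(COLS):
                    screen[earlier][c] *= (1 - LEAK / ROWS)
    return screen

frame = [[1.0] * COLS for _ in range(ROWS)]   # try to display an all-white frame
passive = refresh(frame, active_matrix=False)
active = refresh(frame, active_matrix=True)

print("passive row 0:", [round(v, 2) for v in passive[0]])  # dimmed by leakage
print("active row 0: ", [round(v, 2) for v in active[0]])   # holds its full charge
```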

NOTE: An LCD’s backlight may either be a traditional bulb or an LED light. An “LED display” is simply an LCD screen with an LED backlight. This is different from an OLED display, which lights up individual LEDs for each pixel. While the liquid crystals block most of an LCD’s backlight when they are off, some of the light may still shine through (which might be noticeable in a dark room). Therefore, OLED displays typically have darker black levels than LCDs.