Memory controller

The memory controller is a digital circuit that manages the flow of data going to and from the computer’s main memory. A memory controller can be a separate chip or integrated into another chip, such as being placed on the same die or as an integral part of a microprocessor; in the latter case, it is usually called an integrated memory controller (IMC). A memory controller is sometimes also called a memory chip controller (MCC)[1] or a memory controller unit (MCU).[2]

History

Most modern desktop or workstation microprocessors use an integrated memory controller (IMC), including microprocessors from Intel and AMD, and those built around the ARM architecture.

Prior to K8 (circa 2003), AMD microprocessors had a memory controller implemented on their motherboard’s northbridge. In K8 and later, AMD employed an integrated memory controller.[3] Likewise, until Nehalem (circa 2008), Intel microprocessors used memory controllers implemented on the motherboard’s northbridge. Nehalem and later switched to an integrated memory controller.[4]

Other examples of microprocessors that use integrated memory controllers include IBM's POWER5 and Sun Microsystems' UltraSPARC T1.

While an integrated memory controller has the potential to increase the system’s performance, such as by reducing memory latency, it locks the microprocessor to a specific type (or types) of memory, forcing a redesign in order to support newer memory technologies. When DDR2 SDRAM was introduced, AMD released new Athlon 64 CPUs. These new models, with a DDR2 controller, use a different physical socket (known as Socket AM2), so that they will only fit in motherboards designed for the new type of RAM. When the memory controller is not on-die, the same CPU may be installed on a new motherboard, with an updated northbridge.

Some microprocessors in the 1990s, such as the DEC Alpha 21066 and HP PA-7300LC, had integrated memory controllers; however, rather than for performance gains, this was implemented to reduce the cost of systems by eliminating the need for an external memory controller.

Some CPUs are designed to have their memory controllers as dedicated external components that are not part of the chipset. An example is IBM POWER8, which uses external Centaur chips that are mounted onto DIMM modules and act as memory buffers, L4 cache chips, and the actual memory controllers. The first version of the Centaur chip used DDR3 memory, but an updated version that can use DDR4 was later released.[5]

Purpose

Memory controllers contain the logic necessary to read and write to DRAM, and to “refresh” the DRAM. Without constant refreshes, DRAM will lose the data written to it as the capacitors leak their charge within a fraction of a second (not more than 64 milliseconds according to JEDEC standards).
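As a rough illustration of what the refresh requirement implies for the controller, the sketch below computes the average spacing between refresh commands, assuming the 64 millisecond window mentioned above and a hypothetical 8,192 refresh commands per window (a figure typical of several DDR generations, but device-specific).

```c
/* Sketch: average spacing between refresh commands a memory controller
 * must sustain. Assumes a 64 ms retention window (per JEDEC) and a
 * hypothetical 8,192 refresh commands per window; real devices differ. */
#include <stdio.h>

int main(void)
{
    const double window_ms    = 64.0;  /* retention window from the text */
    const int    refresh_cmds = 8192;  /* assumed commands per window    */

    /* Average interval between refresh commands (often called tREFI): */
    double trefi_us = (window_ms * 1000.0) / refresh_cmds;

    printf("Average refresh interval: %.4f microseconds\n", trefi_us);
    return 0;
}
```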

Reading from and writing to DRAM is performed by selecting the row and column addresses of the DRAM as the inputs to the multiplexer circuit. The demultiplexer on the DRAM uses the converted inputs to select the correct memory location and return the data, which is then passed back through a multiplexer to consolidate the data and reduce the bus width required for the operation.
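To make the multiplexed addressing concrete, the following sketch splits a flat address into the row and column fields that a controller would drive, in two steps, onto the shared address pins. The field widths are illustrative assumptions, not the layout of any particular DRAM device.

```c
/* Sketch: splitting a flat address into row and column fields, as a
 * controller does before driving them over the multiplexed address bus.
 * The field widths below are illustrative assumptions only. */
#include <stdint.h>
#include <stdio.h>

#define COL_BITS 10   /* assumed: 1024 columns per row */
#define ROW_BITS 14   /* assumed: 16384 rows           */

int main(void)
{
    uint32_t address = 0x00ABCDEF & ((1u << (ROW_BITS + COL_BITS)) - 1);

    uint32_t column = address & ((1u << COL_BITS) - 1);               /* low bits  */
    uint32_t row    = (address >> COL_BITS) & ((1u << ROW_BITS) - 1); /* high bits */

    /* A typical controller would first present 'row' on the address pins,
     * then 'column', reusing the same pins for both halves. */
    printf("address 0x%06X -> row %u, column %u\n", address, row, column);
    return 0;
}
```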

Bus width is the number of parallel lines available to communicate with the memory cell. Memory controllers’ bus widths range from 8-bit in earlier systems, to 512-bit in more complicated systems and video cards (typically implemented as four 64-bit simultaneous memory controllers operating in parallel, though some are designed to operate in “gang mode” where two 64-bit memory controllers can be used to access a 128-bit memory device).
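To make the bus-width figures concrete, peak transfer rate scales linearly with width at a given clock and data rate. The sketch below compares several widths at an assumed 200 MHz clock with one transfer per cycle; the numbers are illustrative, not measurements of any real controller.

```c
/* Sketch: peak transfer rate as a function of bus width, assuming a fixed
 * memory clock and one transfer per clock cycle. Illustrative only. */
#include <stdio.h>

static double peak_mb_per_s(int bus_width_bits, double clock_mhz)
{
    return (bus_width_bits / 8.0) * clock_mhz;  /* bytes per transfer * Mtransfers/s */
}

int main(void)
{
    const double clock_mhz = 200.0;  /* assumed memory clock */

    printf("8-bit bus:   %.0f MB/s\n", peak_mb_per_s(8, clock_mhz));
    printf("64-bit bus:  %.0f MB/s\n", peak_mb_per_s(64, clock_mhz));
    printf("512-bit bus: %.0f MB/s\n", peak_mb_per_s(512, clock_mhz));
    return 0;
}
```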

Some memory controllers, such as the one integrated into PowerQUICC II processors, can be connected to different kinds of devices at the same time, including SDRAM, SRAM, ROM, and memory-mapped I/O; each kind of device requires a slightly different control bus, while the memory controller presents a common system bus / front-side bus to the processor. Such controllers may also include error detection and correction hardware.[6]

Security

A few experimental memory controllers (mostly aimed at the server market where data protection is legally required) contain a second level of address translation, in addition to the first level of address translation performed by the CPU’s memory management unit.[7]

Memory controllers integrated into certain Intel Core processors also provide memory scrambling as a feature that turns user data written to the main memory into pseudo-random patterns.[8][9]

In theory, memory scrambling could prevent forensic and reverse-engineering analysis based on DRAM data remanence by rendering various types of cold boot attacks ineffective. In current practice, however, this has not been achieved.

This is because memory scrambling has been designed only to address DRAM-related electrical problems, not to fix or prevent security issues. The memory scrambling schemes in use as of the late 2010s are not cryptographically secure, nor are they necessarily open source or open to public revision or analysis.[10]

ASUS and Intel have their own memory scrambling schemes, and some ASUS motherboards allow the user to choose which scheme to use (ASUS or Intel) or to turn the feature off entirely.

Variants

Double data rate memory

Double data rate (DDR) memory controllers are used to drive DDR SDRAM, where data is transferred on both the rising and falling edges of the system’s memory clock. DDR memory controllers are significantly more complicated than single data rate controllers[citation needed], but they allow twice the data to be transferred without increasing the memory cell’s clock rate or bus width.
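Numerically, transferring on both clock edges simply doubles the transfers per second at the same clock. The sketch below contrasts single data rate and DDR peak rates on an assumed 64-bit bus at a 200 MHz memory clock; the figures are illustrative.

```c
/* Sketch: single data rate vs. double data rate peak bandwidth on the same
 * bus width and clock. Clock and width are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double clock_mhz      = 200.0;  /* assumed memory clock */
    const int    bus_width_bits = 64;     /* assumed bus width    */
    const double bytes_per_xfer = bus_width_bits / 8.0;

    double sdr = bytes_per_xfer * clock_mhz * 1;  /* one transfer per clock */
    double ddr = bytes_per_xfer * clock_mhz * 2;  /* transfer on both edges */

    printf("SDR peak: %.0f MB/s\n", sdr);  /* 1600 MB/s */
    printf("DDR peak: %.0f MB/s\n", ddr);  /* 3200 MB/s */
    return 0;
}
```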

Dual-channel memory

Dual-channel memory controllers are memory controllers where the DRAM devices are separated onto two different buses, allowing the memory controller(s) to access them in parallel. This doubles the theoretical bandwidth of the bus. In theory, more channels can be built (a channel for every DRAM cell would be the ideal solution), but due to wire count, line capacitance, and the need for parallel access lines to have identical lengths, additional channels are very difficult to add.
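One common way a controller spreads traffic across two channels is address interleaving: a single address bit above some granularity selects the channel, so consecutive blocks alternate between channels and can be accessed in parallel. The sketch below illustrates the idea; the 64-byte granularity and the simple one-bit selection are assumptions for illustration, not a description of any specific controller.

```c
/* Sketch: simple address interleaving across two memory channels.
 * Assumes a 64-byte interleave granularity; real controllers choose the
 * granularity and selection scheme differently. */
#include <stdint.h>
#include <stdio.h>

#define INTERLEAVE_BYTES 64u  /* assumed granularity */

static unsigned channel_for(uint64_t address)
{
    /* The bit just above the interleave granularity picks the channel. */
    return (unsigned)((address / INTERLEAVE_BYTES) & 1u);
}

int main(void)
{
    /* Consecutive 64-byte blocks land on alternating channels, so a
     * streaming access pattern keeps both channels busy at once. */
    for (uint64_t addr = 0; addr < 4 * INTERLEAVE_BYTES; addr += INTERLEAVE_BYTES)
        printf("address 0x%03llx -> channel %u\n",
               (unsigned long long)addr, channel_for(addr));
    return 0;
}
```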

Fully buffered memory

Fully buffered memory systems place a memory buffer device on every memory module (called an FB-DIMM when fully buffered RAM is used), which, unlike traditional memory designs, uses a serial data link to the memory controller instead of the parallel link used in previous RAM designs. This decreases the number of wires necessary to place the memory devices on a motherboard (allowing a smaller number of layers to be used, meaning more memory devices can be placed on a single board), at the expense of increased latency (the time necessary to access a memory location). This increase is due to the time required to convert the parallel information read from the DRAM cell into the serial format used by the FB-DIMM controller, and back into a parallel form in the memory controller on the motherboard.

In theory, the FB-DIMM’s memory buffer device could be built to access any DRAM cells, allowing for memory cell agnostic memory controller design, but this has not been demonstrated, as the technology is in its infancy.

Flash memory controller

Many flash memory devices, such as USB memory sticks, include a flash memory controller on chip. Flash memory is inherently slower to access than RAM and often becomes unusable after a few million write cycles, which generally makes it unsuitable for RAM applications.
