Three years ago, there wasn’t much to say about system RAM

DRAM
Three years ago, there wasn’t much to say about system RAM. Almost all PCs came with fast page mode (FPM) DRAM, which ran at speeds between 100ns and 80ns. However, escalating CPU and motherboard bus speeds outstripped the ability of FPM DRAM to deliver data in a timely manner, and today there are many different memory designs to choose from.

Due to cost considerations, all but the very high-end (and very expensive) computers have utilized DRAM for main memory. Originally, these were asynchronous, single-bank designs because the processors were relatively slow. Most recently, synchronous interfaces have been produced with many advanced features. Though these high-performance DRAMs have been available for only a few years, it is apparent that they will soon be replaced by at least one of the protocol-based designs, such as SyncLink or the DRDRAM design from Rambus, Inc. and Intel.

Random access memory (RAM) is the best known form of computer memory. RAM is considered “random access” because you can access any memory cell directly if you know the row and column that intersect at that cell.

The opposite of RAM is serial access memory (SAM). SAM stores data as a series of memory cells that can only be accessed sequentially (like a cassette tape). If the data is not in the current location, each memory cell is checked until the needed data is found. SAM works very well for memory buffers, where the data is normally stored in the order in which it will be used (a good example is the texture buffer memory on a video card). RAM data, on the other hand, can be accessed in any order.

Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information — a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.

A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory cell, the bucket is filled with electrons. To store a 0, it is emptied. The problem with the capacitor’s bucket is that it has a leak. In a matter of a few milliseconds a full bucket becomes empty. Therefore, for dynamic memory to work, either the CPU or the memory controller has to come along and recharge all of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the memory and then writes it right back. This refresh operation happens automatically thousands of times per second.
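
To make the idea concrete, here is a minimal Python sketch (not tied to any real hardware, with arbitrary leak rate and threshold values) that models cells as leaky charge levels and shows why the periodic read-and-rewrite keeps the stored bits intact:

```python
# Toy model of DRAM refresh: each cell is a charge level between 0.0 and 1.0.
# A stored 1 starts fully charged; charge leaks away each time step, and a
# 50% sense threshold decides how the cell reads back. Values are illustrative.

LEAK_PER_STEP = 0.15   # fraction of charge lost per time step (made up)
THRESHOLD = 0.5        # read as 1 if charge is above 50 percent

def leak(cells):
    """One time step of charge leakage on every cell."""
    return [charge * (1.0 - LEAK_PER_STEP) for charge in cells]

def read(cells):
    """Sense each cell: above the threshold reads as 1, otherwise 0."""
    return [1 if charge > THRESHOLD else 0 for charge in cells]

def refresh(cells):
    """Read every cell and write it back at full strength (the refresh op)."""
    return [1.0 if bit else 0.0 for bit in read(cells)]

stored = [1.0, 0.0, 1.0, 1.0]          # the pattern 1011, freshly written

# Without refresh, the charge decays below the threshold and the 1s are lost.
decayed = stored
for _ in range(6):
    decayed = leak(decayed)
print("no refresh:  ", read(decayed))   # [0, 0, 0, 0]

# With a refresh every few steps, the pattern survives indefinitely.
kept = stored
for step in range(6):
    kept = leak(kept)
    if step % 3 == 2:                   # periodic refresh
        kept = refresh(kept)
print("with refresh:", read(kept))      # [1, 0, 1, 1]
```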

Memory chips in desktop computers originally used a pin configuration called dual inline package (DIP). This pin configuration could be soldered into holes on the computer’s motherboard or plugged into a socket that was soldered on the motherboard. This method worked fine when computers typically operated on a couple of megabytes or less of RAM, but as the need for memory grew, the number of chips needing space on the motherboard increased.

The solution was to place the memory chips, along with all of the support components, on a separate printed circuit board (PCB) that could then be plugged into a special connector (memory bank) on the motherboard. Most of these chips use a small outline J-lead (SOJ) pin configuration, but quite a few manufacturers use the thin small outline package (TSOP) configuration as well. The key difference between these newer pin types and the original DIP configuration is that SOJ and TSOP chips are surface-mounted to the PCB. In other words, the pins are soldered directly to the surface of the board, not inserted in holes or sockets.

Memory chips are normally only available as part of a card called a module. You’ve probably seen memory listed as 8×32 or 4×16. These numbers represent the number of chips multiplied by the capacity of each individual chip, which is measured in megabits (Mb), or one million bits. Take the result and divide it by eight to get the number of megabytes on that module. For example, 4×32 means that the module has four 32-megabit chips. Multiply 4 by 32 and you get 128 megabits. Since we know that a byte has 8 bits, we need to divide our result of 128 by 8. Our result is 16 megabytes!
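
The same arithmetic can be written out as a tiny calculation; the helper name below is ours, purely for illustration:

```python
def module_megabytes(chip_count, megabits_per_chip):
    """Capacity in megabytes of a module listed as chips x megabits."""
    total_megabits = chip_count * megabits_per_chip
    return total_megabits / 8          # 8 bits per byte

print(module_megabytes(4, 32))   # 4x32 module -> 16.0 MB
print(module_megabytes(8, 32))   # 8x32 module -> 32.0 MB
```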

The type of board and connector used for RAM in desktop computers has evolved over the past few years. The first types were proprietary, meaning that different computer manufacturers developed memory boards that would only work with their specific systems. Then came SIMM, which stands for single in-line memory module. This memory board used a 30-pin connector and was about 3.5 x .75 inches in size (about 9 x 2 cm). In most computers, you had to install SIMMs in pairs of equal capacity and speed. This is because the bus is wider than a single SIMM. For example, you would install two 8-megabyte (MB) SIMMs to get 16 megabytes total RAM. Each SIMM could send 8 bits of data at one time, while the system bus could handle 16 bits at a time. Later SIMM boards, slightly larger at 4.25 x 1 inch (about 11 x 2.5 cm), used a 72-pin connector for increased bandwidth and allowed for up to 256 MB of RAM.
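
As a rough illustration of why SIMMs had to be installed in matched sets, a hypothetical helper can divide the bus width by the module width; the widths in the example follow the figures in the paragraph above:

```python
def simms_needed(bus_width_bits, simm_width_bits):
    """How many identical SIMMs must be installed to fill the memory bus."""
    if bus_width_bits % simm_width_bits != 0:
        raise ValueError("bus width must be a multiple of the SIMM width")
    return bus_width_bits // simm_width_bits

print(simms_needed(16, 8))   # 16-bit bus, 8-bit SIMMs -> install in pairs (2)
print(simms_needed(32, 8))   # 32-bit bus, 8-bit SIMMs -> sets of four
```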

From the top: SIMM, DIMM and SODIMM memory modules

As processors grew in speed and bandwidth capability, the industry adopted a new standard, the dual in-line memory module (DIMM). With a whopping 168-pin connector and a size of 5.4 x 1 inch (about 14 x 2.5 cm), DIMMs range in capacity from 8 MB to 128 MB per module and can be installed singly instead of in pairs. Most PC memory modules operate at 3.3 volts, while Mac systems typically use 5 volts. Another standard, Rambus in-line memory module (RIMM), is comparable in size and pin configuration to DIMM but uses a special memory bus to greatly increase speed.

Many brands of notebook computers use proprietary memory modules, but several manufacturers use RAM based on the small outline dual in-line memory module (SODIMM) configuration. SODIMM cards are small, about 2 x 1 inch (5 x 2.5 cm), and have 144 pins. Capacity ranges from 16 MB to 512 MB per module. An interesting fact about the Apple iMac desktop computer is that it uses SODIMMs instead of the traditional DIMMs.

Most memory available today is highly reliable. Most systems simply have the memory controller check for errors at start-up and rely on that. Memory chips with built-in error-checking typically use a method known as parity to check for errors. Parity chips have an extra bit for every 8 bits of data. The way parity works is simple. Let’s look at even parity first.

When the 8 bits in a byte receive data, the chip adds up the total number of 1s. If the total number of 1s is odd, the parity bit is set to 1. If the total is even, the parity bit is set to 0. When the data is read back out of the bits, the total is added up again and compared to the parity bit. If the total is odd and the parity bit is 1, then the data is assumed to be valid and is sent to the CPU. But if the total is odd and the parity bit is 0, the chip knows that there is an error somewhere in the 8 bits and dumps the data. Odd parity works the same way, but the parity bit is set to 1 when the total number of 1s in the byte is even.
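
Here is a short Python sketch of the even-parity scheme just described: count the 1s in the byte, store the extra bit, and compare on read-back. The function names are invented for illustration and are not any real memory controller's interface:

```python
def even_parity_bit(byte_value):
    """Parity bit for even parity: 1 if the byte holds an odd number of 1s."""
    ones = bin(byte_value & 0xFF).count("1")
    return 1 if ones % 2 == 1 else 0

def check_even_parity(byte_value, stored_parity):
    """True if the byte still matches the parity bit written alongside it."""
    return even_parity_bit(byte_value) == stored_parity

data = 0b10110011                  # five 1s -> odd, so the parity bit is 1
p = even_parity_bit(data)
print(p)                           # 1

print(check_even_parity(data, p))          # True: data assumed valid
corrupted = data ^ 0b00000100              # flip one bit
print(check_even_parity(corrupted, p))     # False: error detected, data dumped
```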

The problem with parity is that it discovers errors but does nothing to correct them. If a byte of data does not match its parity bit, then the data is discarded and the system tries again. Computers in critical positions need a higher level of fault tolerance. High-end servers often have a form of error-checking known as error-correction code (ECC). Like parity, ECC uses additional bits to monitor the data in each byte. The difference is that ECC uses several bits for error checking — how many depends on the width of the bus — instead of one. ECC memory uses a special algorithm not only to detect single-bit errors, but also to correct them. ECC memory will also detect instances when more than one bit of data in a byte fails. Such failures are very rare, and they are not correctable, even with ECC.
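
The exact code used by ECC modules depends on the width of the bus, but the classic classroom illustration of single-bit correction is a Hamming code. The sketch below is a small (7,4) Hamming example in Python, offered only as an analogy; real ECC memory applies the same idea across a full data word and adds double-bit detection:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    # data bits sit at codeword positions 3, 5, 6, 7
    code = {3: d[0], 5: d[1], 6: d[2], 7: d[3]}
    # each parity bit covers the positions whose index has that bit set
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code

def hamming74_correct(code):
    """Locate and fix a single flipped bit; return the corrected data bits."""
    s1 = code[1] ^ code[3] ^ code[5] ^ code[7]
    s2 = code[2] ^ code[3] ^ code[6] ^ code[7]
    s4 = code[4] ^ code[5] ^ code[6] ^ code[7]
    error_pos = s1 + 2 * s2 + 4 * s4                 # 0 means no error
    if error_pos:
        code[error_pos] ^= 1                         # flip the bad bit back
    data = [code[3], code[5], code[6], code[7]]
    return sum(bit << i for i, bit in enumerate(data)), error_pos

word = hamming74_encode(0b1011)
word[6] ^= 1                                         # simulate a single-bit fault
value, where = hamming74_correct(word)
print(bin(value), "error at position", where)        # 0b1011 error at position 6
```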

The majority of computers sold today use nonparity memory chips. These chips do not provide any type of built-in error checking, but instead rely on the memory controller for error detection.

The refresh operation described earlier is where dynamic RAM gets its name. Dynamic RAM has to be dynamically refreshed all of the time or it forgets what it is holding. The downside of all of this refreshing is that it takes time and slows down the memory.

Memory cells are etched onto a silicon wafer in an array of columns (bitlines) and rows (wordlines). The intersection of a bitline and wordline constitutes the address of the memory cell.

DRAM works by sending a charge through the appropriate column (CAS) to activate the transistor at each bit in the column. When writing, the row lines contain the state the capacitor should take on. When reading, the sense-amplifier determines the level of charge in the capacitor. If it is more than 50 percent, it reads it as a 1; otherwise it reads it as a 0. The counter tracks the refresh sequence based on which rows have been accessed in what order. The length of time necessary to do all this is so short that it is expressed in nanoseconds (billionths of a second). A memory chip rating of 70ns means that it takes 70 nanoseconds to completely read and recharge each cell.
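
A toy sketch of that read path, with a small grid of charge levels addressed by row and column and the sense amplifier modeled as a simple 50 percent threshold (the grid size and contents are made up for illustration):

```python
# A 4x4 array of cell charges (0.0 .. 1.0); rows are wordlines, columns bitlines.
cells = [
    [0.9, 0.1, 0.8, 0.0],
    [0.2, 0.7, 0.1, 0.9],
    [0.8, 0.9, 0.0, 0.1],
    [0.1, 0.0, 0.9, 0.8],
]

def sense(charge):
    """Sense amplifier: more than 50 percent charge reads as 1, otherwise 0."""
    return 1 if charge > 0.5 else 0

def read_cell(row, col):
    """Select a wordline and bitline, then let the sense amp decide the bit."""
    return sense(cells[row][col])

def write_cell(row, col, bit):
    """Charge or drain the capacitor at the addressed cell."""
    cells[row][col] = 1.0 if bit else 0.0

print(read_cell(2, 1))    # 1  (charge 0.9 is above the threshold)
write_cell(2, 1, 0)
print(read_cell(2, 1))    # 0

# A 70ns part needs 70 billionths of a second per full read-and-recharge,
# which works out to roughly 14 million such cycles per second.
print(round(1 / 70e-9 / 1e6), "million accesses per second")
```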

Memory cells alone would be worthless without some way to get information in and out of them. So the memory cells have a whole support infrastructure of other specialized circuits. These circuits perform functions such as:

• Identifying each row and column (row address select and column address select)

• Keeping track of the refresh sequence (counter)

• Reading and restoring the signal from a cell (sense amplifier)

• Telling a cell whether it should take a charge or not (write enable)

The memory controller also performs a series of other tasks, including identifying the type, speed and amount of memory and checking for errors.

Static RAM uses a completely different technology. In static RAM, a form of flip-flop holds each bit of memory (see How Boolean Logic Works for details on flip-flops). A flip-flop for a memory cell takes four or six transistors along with some wiring, but never has to be refreshed. This makes static RAM significantly faster than dynamic RAM. However, because it has more parts, a static memory cell takes up a lot more space on a chip than a dynamic memory cell. Therefore, you get less memory per chip, and that makes static RAM a lot more expensive.

Static RAM is fast and expensive, while dynamic RAM is less expensive and slower. That is why static RAM is used to create the CPU’s speed-sensitive cache, while dynamic RAM forms the larger system RAM space.

Static random access memory uses multiple transistors, typically four to six, for each memory cell but doesn’t have a capacitor in each cell. It is used primarily for cache.

Dynamic random access memory has memory cells with a paired transistor and capacitor requiring constant refreshing.

A DRAM memory array can be thought of as a table of cells. These cells are made up of capacitors and contain one or more ‘bits’ of data, depending upon the chip configuration. This table is addressed via row and column decoders, which in turn receive their signals from the RAS and CAS clock generators. In order to minimize the package size, the row and column addresses are multiplexed into row and column address buffers. For example, if there are 11 address lines, there will be 11 row and 11 column address buffers. Access transistors called ‘sense amps’ are connected to each column and provide the read and restore operations of the chip. Since the cells are capacitors that discharge for each read operation, the sense amp must restore the data before the end of the access cycle.
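
The multiplexing can be sketched as splitting one full cell address into a row half and a column half that share the same pins; the 11-bit width below just follows the example in the paragraph above:

```python
ADDRESS_LINES = 11                      # 11 shared address pins, as in the example

def split_address(cell_address):
    """Split a full cell address into the row and column halves that are
    driven onto the same 11 address pins, one after the other."""
    column = cell_address & ((1 << ADDRESS_LINES) - 1)   # low 11 bits
    row = cell_address >> ADDRESS_LINES                  # high 11 bits
    return row, column

total_cells = 2 ** (2 * ADDRESS_LINES)
print(total_cells)                      # 4194304: about 4 million cells, one bit each

row, col = split_address(0b1010_0110_011_0011_0101_110)
print(row, col)                         # the two halves sent over the shared pins
```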

The capacitors used for data cells tend to bleed off their charge, and therefore require a periodic refresh cycle or data will be lost. A refresh controller determines the time between refresh cycles, and a refresh counter ensures that the entire array (all rows) is refreshed. Of course, this means that some cycles are used for refresh operations, which has some impact on performance.

A typical memory access would occur as follows. First, the row address bits are placed onto the address pins. After a period of time the RAS signal falls, which activates the sense amps and causes the row address to be latched into the row address buffer. When the RAS signal stabilizes, the selected row is transferred onto the sense amps. Next, the column address bits are set up, and then latched into the column address buffer when CAS falls, at which time the output buffer is also turned on. When CAS stabilizes, the selected sense amp feeds its data onto the output buffer.
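
The same sequence, written out as commented steps in an illustrative Python function (the data structure and names are ours, and real timing is analog rather than step-by-step):

```python
def read_access(dram, row_address, column_address):
    """Walk through the asynchronous read sequence described above."""
    # 1. Row address bits are placed on the address pins; RAS falls and the
    #    row address is latched into the row address buffer.
    row_buffer = row_address

    # 2. The selected row (an entire wordline) is transferred onto the
    #    sense amps.
    sense_amps = dram[row_buffer][:]

    # 3. Column address bits are set up on the same pins; CAS falls, the
    #    column address is latched and the output buffer is turned on.
    column_buffer = column_address

    # 4. The selected sense amp feeds its data onto the output buffer.
    output_buffer = sense_amps[column_buffer]
    return output_buffer

dram = [[0, 1, 0, 1],
        [1, 1, 0, 0],
        [0, 0, 1, 1]]
print(read_access(dram, row_address=1, column_address=2))   # 0
```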

An asynchronous interface is one where a minimum period of time is determined to be necessary to ensure an operation is complete. Each of the internal operations of an asynchronous DRAM chip is assigned a minimum time value, so that if a clock edge arrives before that minimum time has elapsed, another cycle must pass before the next operation is allowed to begin.

It should be fairly obvious that all of these operations require a significant amount of time and create a major performance concern. The primary focus of DRAM manufacturers has been to increase the number of bits per access, to pipeline the various operations to minimize the time required, or to eliminate some of the operations for certain types of accesses.

Wider I/O ports would seem to be the simplest and cheapest method of improving performance. Unfortunately, a wider I/O port means additional I/O pins, which in turn means a larger package size. Likewise, the additional segmentation of the array (more I/O lines = more segments) means a larger chip size. Both of these issues mean a greater cost, somewhat defeating the purpose of using DRAM in the first place. Another drawback is that the multiple outputs draw additional current, which creates ringing in the ground circuit. This actually results in a slower part, because the data cannot be read until the signal stabilizes. These problems limited the I/O width to 4 bits for quite some time, causing DRAM designers to look for other ways to optimize performance.

Fast page mode dynamic random access memory was the earliest form of DRAM commonly used in PCs. It waits through the entire process of locating a bit of data by column and row and then reading the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately 176 MBps.

Fast Page mode improved upon the original page mode by eliminating the column address setup time during the page cycle. This was accomplished by activating the column address buffers on the falling edge of RAS (rather than CAS). Since RAS remains low for the entire page cycle, this acts as a transparent latch when CAS is high, and allows address setup to occur as soon as the column address is valid, rather than waiting for CAS to fall.

Fast Page mode became the most widely used access method for DRAMs, and is still used on many systems. The benefit of FPM memory is reduced power consumption, mainly because sense and restore current is not necessary during page mode access. Though FPM was a major innovation, there are still some drawbacks. The most significant is that the output buffers turn off when CAS goes high, and they require a minimum of 5ns to do so, which essentially adds at least 5ns to the cycle time.

The last major improvement to asynchronous DRAMs came with the Hyperpage mode, better known as Extended Data Out (EDO). This innovation was simply to no longer turn off the output buffers upon the rising edge of /CAS. In essence, this eliminates the column precharge time while latching the data out. This allows the minimum low time for /CAS to be reduced, and its rising edge to come earlier.

In addition to a 40% or greater improvement in access times, EDO uses the same amount of silicon and the same package size. EDO has been shown to work well with memory bus speeds up to 83MHz with little or no performance penalty. If the chips are sufficiently fast (55ns or faster), EDO can be used even with a 100MHz memory bus. One of the best reasons to use EDO is that all of the current motherboard chipsets support it with no compatibility problems, unlike much of the synchronous memory now being used.

Even with all the stated advantages, EDO is no longer considered mainstream. Most manufacturers no longer produce it, or have limited production. It is only a matter of time before the prices begin to rise, and the equivalent size SDRAM module will be less expensive.

If you already own EDO memory, there is no real reason to jump to SDRAM unless you require bus speeds above 83MHz. With a typical EDO timing of 5-2-2-2 at 66MHz, there is almost no noticeable improvement with SDRAM over EDO, and at 83MHz it is still negligible. If you require 100MHz bus operation, EDO will lag far behind current SDRAM in performance even if it does operate at that speed due to the need for 6-3-3-3 timings. On the other hand, with EDO being phased out, you will likely find SDRAM to be equal to or even lower in price.
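
Those timing figures can be turned into rough peak burst rates. In the sketch below, the 64-bit (8-byte) data path, the 4-beat burst, and the 5-1-1-1 SDRAM timing are our assumptions; only the EDO timings come from the text:

```python
def burst_rate_mbps(timing, bus_mhz, bus_bytes=8, beats=4):
    """Rough peak rate for a 4-beat burst with x-y-y-y timing, in MB/s."""
    cycles = sum(timing)                     # e.g. 5-2-2-2 -> 11 clock cycles
    seconds = cycles / (bus_mhz * 1e6)
    return beats * bus_bytes / seconds / 1e6

print(round(burst_rate_mbps((5, 2, 2, 2), 66)))    # EDO at 66MHz   -> ~192 MB/s
print(round(burst_rate_mbps((6, 3, 3, 3), 100)))   # EDO at 100MHz  -> ~213 MB/s
print(round(burst_rate_mbps((5, 1, 1, 1), 100)))   # SDRAM at 100MHz (assumed) -> ~400 MB/s
```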

Extended data-out dynamic random access memory does not wait for all of the processing of the first bit before continuing to the next one. As soon as the address of the first bit is located, EDO DRAM begins looking for the next bit. It is about five percent faster than FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.

Synchronous dynamic random access memory takes advantage of the burst mode concept to greatly improve performance. It does this by staying on the row containing the requested bit and moving rapidly through the columns, reading each bit as it goes. The idea is that most of the time the data needed by the CPU will be in sequence. SDRAM is about five percent faster than EDO RAM and is the most common form in desktops today. Maximum transfer rate to L2 cache is approximately 528 MBps.

Double data rate synchronous dynamic RAM is just like SDRAM except that it has higher bandwidth, meaning greater speed. Maximum transfer rate to L2 cache is approximately 1,064 MBps (for DDR SDRAM at 133 MHz).

One limitation of JEDEC SDRAM is that the design has a theoretical ceiling of 125MHz, though technology advances may allow up to 133MHz operation. It is obvious that bus speeds will need to increase well beyond that in order for memory bandwidth to keep up with future processors. There are several competing new standards on the horizon that are very promising; however, most of them require special pinouts, smaller bus widths, or other design considerations. In the short term, Double Data Rate SDRAM looks very appealing. Essentially, this design allows the activation of output operations on the chip to occur on both the rising and falling edge of the clock. Currently, only the rising edge signals an event to occur, so the DDR SDRAM design can effectively double the speed of operation up to at least 200MHz.
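
As a worked example of the both-edges idea, and consistent with the roughly 1,064 MBps figure quoted earlier, peak rate is just transfers per second times bus width; the 64-bit (8-byte) module width is our assumption:

```python
def peak_mbps(transfers_per_second_millions, bus_bytes=8):
    """Peak rate in MB/s: millions of transfers per second x bus width in bytes."""
    return transfers_per_second_millions * bus_bytes

# A 66.5MHz clock driving transfers on both edges gives 133 million transfers/s;
# on an assumed 64-bit (8-byte) bus that is the ~1,064 MBps quoted above.
print(peak_mbps(66.5 * 2))       # -> 1064.0
print(peak_mbps(100 * 2))        # a 100MHz DDR clock would give 1600.0
```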

There is already one Socket 7 chipset that has support for DDR SDRAM, and more will certainly follow if manufacturers decide to make this memory available. In this industry, many times it is the first to market that gains the support, rather than the best technology.

Rambus dynamic random access memory is a radical departure from the previous DRAM architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory module (RIMM), which is similar in size and pin configuration to a standard DIMM. What makes RDRAM so different is its use of a special high-speed data bus called the Rambus channel. RDRAM memory chips work in parallel to achieve a data rate of 800 MHz, or 1,600 MBps.

Credit card memory is a proprietary self-contained DRAM memory module that plugs into a special slot for use in notebook computers.

PC Card (PCMCIA) memory is another self-contained DRAM module for notebooks. Cards of this type are not proprietary and should work with any notebook computer whose system bus matches the memory card’s configuration.

FlashRAM is a generic term for the small amount of memory used by devices like TVs, VCRs and car radios to maintain custom information. Even when these items are turned off, they draw a tiny amount of power to refresh the contents of their memory. This is why every time the power flickers, the VCR blinks 12:00. It’s also why you lose all presets on your radio when your car battery dies! Your computer has FlashRAM to remember things like hard disk settings — see Why does my computer need a battery? for details.

VideoRAM, also known as multiport dynamic random access memory (MPDRAM), is a type of RAM used specifically for video adapters or 3-D accelerators. The “multiport” part comes from the fact that VRAM normally has both random access memory and serial access memory. VRAM is located on the graphics card and comes in a variety of formats, many of which are proprietary. The amount of VRAM is a determining factor in the resolution and color depth of the display. VRAM is also used to hold graphics-specific information such as 3-D geometry data and texture maps.

It’s said that you can never have enough money, and the same seems to hold true for RAM, especially if you do a lot of graphics-intensive work or gaming. Next to the CPU itself, RAM is the most important factor in computer performance. If you don’t have enough, adding RAM can make more of a difference than getting a new CPU!

If your system responds slowly or accesses the hard drive constantly, then you need to add more RAM. If you are running Windows 95/98, you need a bare minimum of 32 MB, and your computer will work much better with 64 MB. Windows NT/2000 needs at least 64 MB, and it will take everything you can throw at it, so you’ll probably want 128 MB or more.

Linux works happily on a system with only 4 MB of RAM. If you plan to add X-Windows or do much serious work, however, you’ll probably want 64 MB. Apple Mac OS systems will work with 16 MB, but you should probably have a minimum of 32 MB.

The amount of RAM listed for each system above is estimated for normal usage — accessing the Internet, word processing, standard home/office applications and light entertainment. If you do computer-aided design (CAD), 3-D modeling/animation or heavy data processing, or if you are a serious gamer, then you will most likely need more RAM. You may also need more RAM if your computer acts as a server of some sort (Web pages, database, application, FTP or network).

Another question is how much VRAM you want on your video card. Almost all cards that you can buy today have at least 8 MB of RAM. This is normally enough to operate in a typical office environment. You should probably invest in a 32-MB graphics card if you want to do any of the following:

• Work in a high-resolution, full-color environment

When shopping for video cards, remember that your monitor and computer must be capable of supporting the card you choose.

Most of the time, installing RAM is a very simple and straightforward procedure. The key is to do your research. Here’s what you need to know:

In the previous section, we discussed how much RAM is needed in most situations. RAM is usually sold in multiples of 16 megabytes: 16, 32, 64, 128, 256, 512. This means that if you currently have a system with 64 MB RAM and you want at least 100 MB RAM total, then you will probably need to add another 64 MB module.

Once you know how much RAM you want, check to see what form factor (card type) you need to buy. You can find this in the manual that came with your computer, or you can contact the manufacturer. An important thing to realize is that your options will depend on the design of your computer. Most computers sold today for normal home/office use have DIMM slots. High-end systems are moving to RIMM technology, which will eventually take over in standard desktop computers as well. Since DIMM and RIMM slots look a lot alike, be very careful to make sure you know which type your computer uses. Putting the wrong type of card in a slot can cause damage to your system and ruin the card.

You will also need to know what type of RAM is required. Some computers require very specific types of RAM to operate. For example, your computer may only work with 60ns-70ns parity EDO RAM. Most computers are not quite that restrictive, but they do have limitations. For optimal performance, the RAM you add to your computer must also match the existing RAM in speed, parity and type. The most common type available today is SDRAM.

Before you open your computer, check to make sure you won’t be voiding the warranty. Some manufacturers seal the case and request that the customer have an authorized technician install RAM. If you’re set to open the case, turn off and unplug the computer. Ground yourself by using an anti-static pad or wrist strap to discharge any static electricity. Depending on your computer, you may need a screwdriver or nut-driver to open the case. Many systems sold today come in toolless cases that use thumbscrews or a simple latch.

The actual installation of the memory module does not normally require any tools. RAM is installed in a series of slots on the motherboard known as the memory bank. The memory module is notched at one end so you won’t be able to insert it in the wrong direction. For SIMMs and some DIMMs, you install the module by placing it in the slot at approximately a 45-degree angle. Then push it forward until it is perpendicular to the motherboard and the small metal clips at each end snap into place. If the clips do not catch properly, check to make sure the notch is at the right end and the card is firmly seated. Many DIMMs do not have metal clips; they rely on friction to hold them in place. Again, just make sure the module is firmly seated in the slot.

Once the module is installed, close the case, plug the computer back in and power it up. When the computer starts the POST, it should automatically recognize the memory. That’s all there is to it!

Bibliography

www.howstuffworks.com

www.tomshardware.com/mainboard/19981024/ram-01.html
