
What is 3D MLC NAND Flash Memory?

To unleash performance fit for the next generation of computers, Transcend has released its MTE850 M.2 Solid State Drive (SSD), based on 3D MLC NAND flash memory. The device utilizes the PCI Express Gen3 x4 interface and supports the latest NVMe standard. According to Transcend, this SSD targets high-end applications such as gaming, digital audio and video production, and a variety of enterprise uses. Typically, such applications demand constant processing of heavy workloads while tolerating no system slowdowns or lags of any kind. Transcend claims the MTE850 M.2 SSD will offer users high-speed transfers and unmatched reliability.

High Speeds for High-End Applications

As the MTE850 uses the PCIe Gen3 x4 interface and follows the NVMe 1.2 standard, it transmits and receives data on four lanes simultaneously. This lets the SSD work at blazing speeds of up to 1,100 MB/s when writing and up to 2,500 MB/s when reading.

Why the PCIe Interface

Presently, the most popular method of connecting a host computer to an SSD is the SATA, or Serial ATA, interface. However, because PCIe uses one transmit and one receive serial link in each of its four lanes, the PCIe interface is much faster than SATA and better able to fulfill new performance requirements.
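A back-of-envelope calculation makes the gap concrete. The figures below are theoretical interface ceilings (SATA III at 6 Gb/s with 8b/10b encoding; PCIe Gen3 at 8 GT/s per lane with 128b/130b encoding), not drive benchmarks:

```shell
# Theoretical usable bandwidth after line encoding, in MB/s.
sata=$(awk 'BEGIN { printf "%.0f", 6e9 * 8/10 / 8 / 1e6 }')    # SATA III
lane=$(awk 'BEGIN { printf "%.0f", 8e9 * 128/130 / 8 / 1e6 }') # one PCIe Gen3 lane
echo "SATA III ceiling:     ${sata} MB/s"
echo "PCIe Gen3 x1 ceiling: ${lane} MB/s"
echo "PCIe Gen3 x4 ceiling: $((lane * 4)) MB/s"
```

The x4 ceiling of roughly 3,940 MB/s explains why a PCIe SSD can sustain read speeds far beyond the 600 MB/s that SATA III tops out at.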

Why the NVMe Standard

The growing needs of enterprise and client applications demand better performance than the Advanced Host Controller Interface (AHCI) can provide. NVM Express (NVMe) is an enhanced host controller interface standard that answers this need, calling for low latency, increased IOPS, and scalable bandwidth.

What is 3-D Expansion?

Existing planar NAND memory chips are arranged as flat, two-dimensional arrays. In contrast, 3-D NAND flash has memory cells stacked vertically in multiple layers. This breaks through the density limitations of existing 2-D planar NAND, with 3-D NAND offering a far greater level of performance and endurance.

With Better Endurance Comes Higher Reliability

To help keep data secure, Transcend has engineered its MTE850 M.2 SSD with a RAID engine (a type of data storage virtualization technology), Low-Density Parity Check (LDPC) coding, and an error correction code (ECC) algorithm. Additionally, Transcend manufactures its SSDs with top-tier MLC NAND flash chips and equips them with an engineered dynamic thermal throttling mechanism. This way, Transcend ensures the MTE850 delivers superior stability and endurance befitting high-end applications.

SSD Scope Software

Users can download the SSD Scope software application free of charge from the Transcend site. The application monitors the health of the running SSD using S.M.A.R.T. technology and lets the user enable the TRIM command to maintain optimum write speeds. The application also keeps the SSD's firmware up to date and helps migrate data from the original drive to the new SSD with only a few clicks.

With CE, FCC, and BSMI certifications, the 3D MLC NAND flash based MTE850 M.2 SSD from Transcend works on 3.3 VDC ±5%, operating between 0 and 70°C. With mechanical dimensions of 80 x 22 x 3.58 mm, the SSD weighs only 8 grams.

3D NAND Memories Cross 10TB

At the Flash Memory Summit in Toronto, Micron Technology exhibited its NVM Express (NVMe) solid state drives, which use the company's 3D NAND technology to achieve capacities over 10 TB.

According to Dan Florence, SSD product manager for Micron's Storage Business Unit, Micron built the 9200 series of NVMe SSDs from the ground up to overcome the restrictions imposed by legacy hard drives. The design of the new storage portfolio addresses surging data demands while maximizing the efficiency of data centers, which, Florence says, improves the overall total cost of ownership for customers. Micron's NVMe over Fabrics architecture is well ahead of standards development, and is the storage foundation for the Micron SolidScale Platform.

According to Florence, the 9200 SSDs from Micron can be up to ten times faster than the fastest SATA SSDs, achieving transfer speeds of 4.6 GB/s with one million read IOPS. This makes them ideal for performance-hungry, high-capacity use cases such as application/database acceleration, high-performance computing, and high-frequency trading. Legacy interfaces were attuned to spinning media, which gives NVMe several advantages over them. As NVMe sits on the PCIe bus, it not only removes a huge amount of latency, but also offers higher bandwidth, allowing users to get much higher IOPS.

Traditionally, PCIe SSDs relied on many custom drivers in varying revisions, whereas NVMe offers a standard driver and better ease of use. This is allowing NVMe SSDs to be adopted faster, as they can be plugged into almost any system running almost any operating system.

The earlier generation of NVMe SSDs from Micron was limited in capacity. The 9200 series goes up to 11 TB, almost three times the capacity of the older generation, making them the first monolithic NVMe SSDs to cross the 10 TB boundary. That also makes the drives easier for the operating system to manage, while allowing for lower power consumption. Additionally, Micron makes the 9200 series in the U.2 form factor, which allows the new SSDs to achieve more density per server.

Micron claims its new NVMe SSDs can outperform the fastest hard drives by 300 to 1,200 times in random performance, and the fastest SSDs by three to seven times, depending on the use case and configuration. According to Florence, database applications and transaction processing increasingly rely on random performance, as they use a random IO access pattern, and many data-analytics workloads follow the same pattern. For data ingest, on the other hand, working with large pipes of data makes sequential handling more important; this includes massive amounts of IoT data as well as user-generated content.

Most general applications also use some level of random IO, and the new NVMe SSDs can use most of the bandwidth of the PCIe bus. According to Florence, the value driver lies in the amount of data moved and worked with, which applies to a growing number of applications. The new NVMe SSDs are a clear leader in this area, as dollars per IOPS become increasingly important.

Replacement for Flash Memory

Today, flash memories or thumb drives are commonly used as devices that store information even without power, that is, as nonvolatile memory. However, physicists and researchers are of the opinion that flash memory is nearing the end of its size and performance limits, so the computer industry is searching for a replacement. For instance, research conducted at the National Institute of Standards and Technology (NIST) suggests resistive random access memory (RRAM) as a worthy successor for the next generation of nonvolatile computer memory.

RRAM has several advantages over flash. Potentially faster and less energy hungry than flash, it can also pack far more information into a given space, because its switches are tiny enough to store a terabyte in an area the size of a postage stamp. So far, however, technical hurdles have prevented RRAM from being broadly commercialized.

One such hurdle facing physicists and researchers is RRAM's variability. To be a practical memory, a switch needs two distinct states, representing a digital one or zero, and a predictable way of flipping from one state to the other. Conventional memory switches behave reliably, switching states predictably when they receive an electrical pulse. RRAM switches, however, are still not so reliable, and their behavior is unpredictable.

Inside an RRAM switch, an electrical pulse flips it on or off by moving oxygen atoms around, thereby creating or breaking a conductive path through an insulating oxide. Short, energetic pulses are more effective at moving ions by the right amount to create distinct on/off states, potentially minimizing the longstanding problem of overlapping states that has largely kept RRAM in the R&D stage.

According to David Nminibapiel, a guest researcher at NIST, RRAMs are as yet highly unpredictable. The amount of energy required to flip a switch once may not be adequate the next time around, while applying too much energy may cause the switch to overshoot, worsening the variability problem. In addition, even after a successful flip, the two states can overlap, making it unclear whether the switch is actually storing a zero or a one.

Although this randomness takes away from the advantages of the technology, the research team at NIST has discovered a potential solution: the energy delivered to the switch can be controlled with several short pulses rather than one long pulse.

Typically, conventional memory chips work with relatively strong pulses lasting about a nanosecond. The NIST team found, however, that less energetic pulses of about 100 picoseconds, only a tenth the duration of the conventional pulses, worked better with RRAM. Sending a few of these gentler signals, the team noticed, was useful not only for flipping the RRAM switches predictably, but also for exploring the behavior of the switches.

That led the team to conclude these shorter signals reduce the variability. The issue does not go away entirely, but tapping the switch several times with the lighter pulses makes it flip gradually, while allowing a check after each tap to verify whether the switch flipped successfully.

What is ReRAM?

DRAM is a popular memory technology in regular use in almost all computers and smartphones today. Resistive RAM, or ReRAM, is an upcoming high-density storage class memory technology whose performance, researchers claim, has now come very close to that of DRAM.

4DS Memory Limited, which patented its Interface Switching ReRAM, has made substantial changes to the architecture of its product. The company claims this has substantially improved read access, so that the speed of its ReRAM is now comparable to that of DRAM. According to Guido Arnout, company CEO and Managing Director, the development has presented the company with several opportunities.

So far, most memory technologies have faced inherently high bit error rates, and ReRAM is no exception, with randomly large cell current fluctuations to blame. Although manufacturers do include error correction techniques to retrieve data reliably, the activity is time consuming, hurting read access times and crippling read speed.

After making the changes, 4DS could not find any large fluctuations in its Interface Switching ReRAM, even in an extensive study. The company claims this indicates the memory needs minimal error correction; therefore, the high-density storage class memory now has effective read speeds comparable to that of DRAM. According to Arnout, the company has also scaled its memory products to 40 nm, with a significant increase in endurance.

Initially, 4DS was trying to create a storage class memory to compete with NAND flash. However, with prices of NAND flash dipping fast, the opportunity for ReRAM now sits between DRAM and flash. With the price difference between DRAM and flash growing steadily, the opportunities for 4DS are also getting larger.

4DS uses a different approach to develop its Interface Switching ReRAM. Rather than the usual filamentary technology, 4DS uses a technique that allows cell currents to scale with geometry. According to 4DS, its smaller cells yield lower cell currents, and these currents can flow more reliably through the narrow on-chip wires necessary for achieving higher densities. However, lower cell currents also mean longer latency, and 4DS, through extensive measurements and analysis, had to optimize the cell currents so that the latency matched that of DRAM in a high-density storage class memory.

Even short cell latency is not adequate by itself. In reality, effective latency is the sum of the inherent memory latency and the time required to detect and correct any read errors.

The Interface Switching technology from 4DS shields the switching region from the influence of random irregularities. That makes the inherent latency of the new Interface Switching ReRAM the dominant factor, rather than the overhead of its error correction.

SanDisk had predicted a decade earlier that ReRAM would eventually replace NAND flash. Now, with its Interface Switching ReRAM, 4DS is looking at a tier of storage class memory that will enable data centers to deliver more content on the Internet, faster and more efficiently. After proving the concept of its Interface Switching ReRAM, 4DS is now focusing on scaling it to achieve decent yields.

What is Optane Memory?

Optane is a revolutionary class of memory from Intel that bridges dynamic RAM and storage to deliver an intelligent and amazingly responsive computing experience. For instance, Intel claims an increase of 28% in overall system performance, 14 times faster hard drive access, and a twofold increase in responsiveness in everyday tasks.

However, this revolution is not for everyone. Optane works only on systems based on 7th generation Intel Core processors, allowing them to affordably maintain mega-storage capacity. For those using such a system, Intel promises Optane will deliver shorter boot times, faster application launches, an extraordinarily fast gaming experience, and responsive browsing. There is a further catch, however; you need to be running the latest Windows 10 operating system to take full advantage of Optane.

According to Intel, Windows 10 users on 7th gen Intel Core systems can expect their computers to boot up twice as fast as before, with web browsers launching five times faster and games launching up to 67% faster. Intel describes its Optane memory as an adaptable system accelerator that adjusts to the tasks of the computer on which it is installed, running them more easily, smoothly, and quickly. Intel provides intelligent software that automatically learns the user's computing behavior, accelerating frequent tasks and customizing the computing experience.

Intel's new system acceleration solution places the new memory media module between the processor and slower SATA-based storage devices, such as an SSHD, HDD, or SATA SSD. Based on 3D XPoint memory media, the module stores commonly used programs and data close to the processor. This allows the processor to access information more quickly, thereby improving the responsiveness of the overall system.

However, the Intel Optane memory module is not a replacement for system DRAM. For instance, if a game requires X GB of DRAM, that requirement cannot be split between DRAM and Optane memory. Regular PC operation will continue to require the necessary amount of DRAM.

Those who already have a solid state drive (SSD) installed in their computer can also add Intel Optane memory for additional speed benefits, as Optane can extend acceleration to any type of SATA SSD. However, the performance benefits are greater when Intel Optane memory is paired with slower magnetic HDDs than when it is installed in systems with faster SATA SSDs.

Although other caching solutions exist, such as those using NAND technology, Intel's Optane memory is entirely different. The new technology is a high-performance, high-endurance solution with low latency and high quality of service (QoS). Optane uses the revolutionary new 3D XPoint memory media, which not only performs well at low capacities but also has the endurance to withstand repeated reading and writing cycles to the module.

In addition, Intel's new Rapid Storage Technology driver, with its leading-edge algorithm, creates a compelling high-performance solution with a user-friendly, intuitive installation and an easy-to-use setup process that automatically configures itself to match the needs of the user.

The Energy Efficient RRAMs

Engineers at Stanford are making 3-D memory chips that can offer faster and more energy efficient solutions for computer memory. These are resistive random access memory, or RRAM, chips based on a new semiconductor material that stores data based on temperature and voltage. However, the actual workings of RRAM remained a mystery until a team at Stanford used a new tool to investigate. They found the optimal temperature range to be lower than they had expected, which could lead to memory that is more efficient.

Conventional computer chips operate on a two-dimensional plane. Typically, the CPU and memory communicate with each other through the data bus. While both the CPU and memory components have advanced technically, the data bus has lagged, slowing the entire system when crunching large amounts of data.

The special semiconductor RRAM cells can be stacked one on top of the other, creating a 3-D structure that brings the memory and its logic components closer together. As conventional silicon devices cannot replicate this, the 3-D high-rise chips can work at much higher speeds and be more energy efficient. Not only is this a better solution for tackling the challenges of Big Data, it can also extend the battery life of mobile devices.

An RRAM cell works like a switch. As the Stanford engineers explain, in their natural state the RRAM materials behave as insulators, resisting the flow of electrons. However, when zapped with an electric field, a filament-like path opens up in the material, and electrons can flow through it. A second jolt closes the filament, and the material returns to being an insulator. Alternating between the two states generates a binary code, with no signal transfer representing a zero and the passage of electrons representing a one.

The temperature rise of the material when subjected to the electric field causes the filament to form, allowing electrons to pass through. Until now, the engineers were unable to estimate the exact temperature at which the material switched, and they needed much more precise information about the fundamental behavior of the RRAM material before they could hope to produce reliable devices.

As the engineers had no way of measuring the heat produced by a jolt of electricity, they heated the RRAM chips on a hot plate without applying any voltage. They then monitored the flow of electrons as filaments began to form. This allowed the team to measure the exact temperature band necessary for the materials to form the filaments: between 26.7 and 126.7°C. Future RRAM devices that need only generate these temperatures will therefore require less electricity, making them more energy efficient.
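The oddly precise Celsius endpoints suggest the underlying figures were round numbers in Fahrenheit (80°F and 260°F); that is an assumption on our part, but a quick conversion check bears it out:

```shell
# Convert the assumed Fahrenheit endpoints to Celsius: C = (F - 32) * 5/9
awk 'BEGIN { printf "80F  = %.1fC\n", (80 - 32) * 5/9 }'
awk 'BEGIN { printf "260F = %.1fC\n", (260 - 32) * 5/9 }'
```

Both conversions land exactly on the 26.7°C and 126.7°C figures quoted above.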

Although at this moment, RRAMs are not yet ready to be incorporated into consumer devices, the researchers are confident that the discovery of the temperature range will speed up development work.

According to Ziwen Wang, a member of the team, the voltage and temperature discovered can be the predictive design inputs for enabling the design of a better memory device. The researchers will be presenting their find at the IEEE International Electron Devices Meeting in San Francisco.

Adding Memory to the Raspberry Pi

Although the memory onboard the Raspberry Pi (RBPi) single board computer is sufficient for most applications, some users may feel the need to expand its storage capacity. The options provided on the RBPi are limited, as the USB ports are often occupied by a keyboard, mouse, or game controller, and the SD card slot holds only a single card.

The most obvious option for expanding storage on the RBPi is through the USB ports. However, tying up ports with a USB hard disk drive or flash drive can cause difficulty if you need a port for another USB device. One way around this problem is a powered USB hub; it is important to realize the RBPi cannot supply enough power to drive a hub on its own.

Using a powered USB hub makes it easy to add USB devices, including additional storage, to your RBPi. However, you must consider a few things when expanding storage. In reality, there are only two common USB storage options available: a flash drive and a hard disk drive. You may also use a card-expanding trick in the Raspbian operating system. These are the three primary options for expanding storage on your SBC. Beyond them, you may also consider secondary storage devices such as networked drives, USB DVD-R drives, and NAS drives.

The SD card in the RBPi acts as the main storage option; use an SDHC card for best results. It is the boot device, serves as general storage, and holds the running operating system. You can think of the SD card as a replacement for the HDD of a regular desktop computer, more like a Solid State Drive (SSD), as it has no moving parts and uses very little energy.

By default, Raspbian, the standard Operating System of the RBPi, is designed to run from a 2 GB SD card. Therefore, when you flash the Raspbian image, the SD card will have a partition of 2 GB, with the balance of the card memory remaining unused.

To get around this, use the expand filesystem feature included in the raspi-config screen in Raspbian. This expands the partition to the maximum capacity of the SD card.
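As a minimal sketch, the expansion can be done either from the menu or from the command line (the menu label and the non-interactive flag vary between Raspbian releases, so treat these as assumptions to verify on your system):

```shell
# Interactive: run the configuration tool and choose "Expand Filesystem"
sudo raspi-config

# Non-interactive alternative available on many Raspbian releases
sudo raspi-config --expand-rootfs

# A reboot is needed before the enlarged partition takes effect
sudo reboot
```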

When you insert a flash drive into a USB port of the RBPi, you may be surprised that it does not behave as it would on a regular Ubuntu or Windows computer. It is not enough to insert the flash drive; Raspbian expects you to mount the device manually before you can use it as additional USB storage. Before you can mount it, however, you must know the exact device name Raspbian has assigned to the drive.

For this, the command necessary is: sudo ls /dev/sd*. Here, "sudo" gives you temporary administrative status, "ls" lists the devices, and "/dev/sd*" restricts the listing to the drive devices seen by Raspbian. With this command, you will learn the device name Raspbian has assigned to your drive.

Now, you can mount the USB flash drive and use it as an additional storage device with the command: sudo mount -t vfat /dev/[USB DEVICE NUMBER] /mnt/usb.
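Putting the steps together, a minimal session might look like the following (the device name /dev/sda1 and the mount point /mnt/usb are illustrative assumptions; substitute the name reported on your system, and note the mount point must exist before mounting):

```shell
# List candidate drive devices to find the assigned name
sudo ls /dev/sd*

# Create the mount point if it is not already there
sudo mkdir -p /mnt/usb

# Mount a FAT-formatted flash drive at the mount point
sudo mount -t vfat /dev/sda1 /mnt/usb

# When finished, unmount before unplugging the drive
sudo umount /mnt/usb
```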

What is 3D Flash Memory?

Slowly but steadily, the memory market is veering away from magnetic disk storage toward solid state drives (SSDs). Not only are prices falling fast, manufacturers are producing SSDs with improved technologies, leading to denser memories, higher reliability, and lower costs. For example, Samsung has recently announced SSD and systems designs that will drive its new 3-D NAND into mass markets.

Samsung's latest SSDs are the 850 EVO series. According to Jim Elliott, a marketing executive for Samsung, these use 48-layer, 256 Gbit 3-D NAND cells with 3 bits per cell. The new chips show more than 50% better power efficiency and twice the performance of the 32-layer chips Samsung is now producing. In the future, Samsung is targeting Tbit-class chips made with more than 100 layers.

On a similar note, an engineer with SK Hynix says that by the third quarter, the company will start production of 3-D NAND chips with more than 30 layers. By 2019, SK Hynix will be making chips containing more than 190 layers.

At present, 3-D NAND production yields are still low and production costs are higher than for traditional planar flash chips. However, these dense chips promise several generations of continuing cost decreases and performance improvements for flash. According to analysts and vendors, it might take another year or so before the new technology is ready for the mainstream.

Samsung was the first to announce 3-D NAND production, with rivals catching up fast. Toshiba has already announced its intention to produce 256 Gbit 3-D NAND chips in September. These will also have 48 layers and 3 bits per cell.

According to Jim Handy, an analyst at Objective Analysis in Los Gatos, California, sales of 3-D NAND will not pick up before 2017. Samsung, currently shipping its V-NAND SSDs at a loss, is gearing up to put the 48-layer devices into volume production, which will enable it to beat the cost of traditional flash.

The reason is not hard to find. Wafers of 3-D chips with 32 layers cost 70% more than wafers for traditional flash. Wafers for the 48-layer versions, on the other hand, cost only 5-10% more, yet carry 50% more layers. Therefore, although the 48-layer chips tend to start with a 50% yield, they should approach planar flash yield levels within a year or so.
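As a rough illustration of why the 48-layer economics work out (the 7.5% figure is an assumed midpoint of the quoted 5-10% range, and bits are assumed to scale directly with layer count):

```shell
# Relative cost per bit of a 48-layer wafer versus a 32-layer one:
# ~7.5% higher wafer cost divided by 50% more bits.
awk 'BEGIN { printf "relative cost per bit: %.2f\n", 1.075 / 1.5 }'
```

At roughly 0.72, each bit on a 48-layer wafer costs about 28% less than on a 32-layer wafer, before yield differences are factored in.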

According to expert analysts, it takes a couple of years for any new technology to mature; hence the prediction that 3D NAND will account for a majority of flash bit sales only after 2018.

The number of 3D layers that yields an optimal product is still being worked out through experimentation. Also under way is the development of a new class of controllers and firmware for managing the larger block sizes, and vendors are still exploring other unique characteristics of these 3D chips.

For example, Samsung has designed controllers and firmware that address the unique requirements of 3-D NAND and is selling its chips only in SSD form. According to Bob Brennan, head of Samsung's Memory Solutions Lab, SSDs provide higher profit margins than merchant chips and are the fastest way to market.

The 64-bit x86 SOC from AMD

Advanced Micro Devices (AMD) is offering a new Embedded R-Series SOC processor. Targeted at a range of application markets, the System-On-Chip processor will handle industrial control and communication networking, along with digital signage, high-end gaming, and media storage. The new AMD device follows the Platform System Architecture Specification 1.0 of the HSA Foundation; the Heterogeneous System Architecture offers greater efficiency in parallel processing.

In the new Embedded R-Series SOC, AMD has combined its next-generation x86 cores, called Excavator, with its third-generation Graphics Core Next (GCN) architecture. According to Colin Cureton, senior manager for embedded products at AMD, this combination offers a substantial boost in performance over the previous generation.

This is evident from the presentation made by Cureton. Benchmark scores show nearly a 25% increase in CPU performance and about a 23% increase in graphics performance compared to present devices. Not only that, the chip also incorporates the Southbridge functions. As the Southbridge is an external chip in current designs, the new SOC offers developers a 30% reduction in board footprint.

As the R-Series SOCs have advanced power management built in, they allow a performance boost without requiring any increase in input power. Cureton explains that the BIOS and the operating system control the thermal envelope within which the device can operate safely.

Developers can use configurable Thermal Design Power (cTDP) to specify a tradeoff between the power the chip consumes and its performance, adjusting the TDP anywhere between 12 and 35 W in increments of 1 W. According to Cureton, even when running at 15 W, the operating power level of previous-generation chips, the R-Series has greater graphics performance.

Although the device offers raw performance specifically for embedded applications, there are other features as well. Within the chip, a dedicated secure processor performs a Hardware Validated Boot (HVB), creating a trusted boot environment for the SOC before it starts up its x86 cores. The chip can handle upcoming changes in memory technology with ECC support, presently for either DDR3 or DDR4 memory. Other industry interfaces supported include USB 3.0, PCIe Gen 3, SPI, and SATA3, among others. As industrial embedded designs require long product lifecycles, AMD assures a 10-year supply of its R-Series SOCs, with plans for extended-temperature versions.

Apart from industrial uses, the R-Series SOC targets other application spaces as well. The chip can support two or three displays simultaneously while providing the 4K graphics and video decoding demanded by high-end gaming machines, such as those in a casino. The device can also replace the FPGA and DSP combinations presently used for medical imaging and image transformations; this is possible because the HSA architecture eases the task of software-defined beamforming. As its GPU allows processing of several algorithms, the x86 architecture of the R-Series is also gaining dominance in the control plane for communications.

The HSA architecture the R-Series has adopted gives it the ability to use the GPU as an auxiliary compute engine for non-graphics applications as well. Rather than being only a slave to the CPU, under HSA the GPU becomes another computing node, increasing efficiency.

Non-Volatile Memory from Carbon

So far, many problems have inhibited the development of carbon based memory devices. Not any more: IBM and EMPA have solved those problems and demonstrated the possible use of oxygenated amorphous carbon for non-volatile memory applications. The new non-volatile memory is based on a Redox reaction that takes place in thin films of oxygenated amorphous carbon, known as a-COx. The film is deposited by Physical Vapor Deposition (PVD).

EMPA, the Swiss Electron Microscopy Center, and IBM Zurich have published the details of their research, and the latest release about their work discusses the results of device measurements. IBM now holds a patent in this area.

Earlier research in this field showed that carbon and carbon nanotubes possess some potential for NV memory applications. However, development toward products did not proceed because of a lack of reproducibility, processing difficulties, and limited write/erase endurance.

Amorphous carbon, because of its high electrical resistance, has not received much attention. Researchers have instead been studying the electrical properties of other allotropes of carbon, focusing on carbon-based electronics as a challenger to silicon or as its follow-on.

However, the high electrical resistance of amorphous carbon is of immense importance as far as memory applications are concerned. The latest research on the use of oxygenated amorphous carbon for NV memory application has the added advantage of being able to use the conventional silicon-compatible process of thin-film deposition.

Manufacturers fabricate the memory devices on a 500 nm thick thermal film of silicon dioxide grown on a silicon wafer substrate. A tungsten film forms the bottom electrode, with circular pores delineating its active contact area. The pores are etched in the 35 nm thick silicon dioxide film overlaying the tungsten, and the pore diameters range from 100 nm up to 4 µm.

In the next step, manufacturers sputter a graphite carbon target in oxygen to deposit the a-COx active material into the pores by physical vapor deposition, where it makes contact with the bottom electrode. Deposition of a platinum top electrode metal finally completes this planar sandwich construction. Before the a-COx deposition, however, any native oxide is removed from the surface of the tungsten electrode by sputter cleaning. This is an important step, as it ensures the part of the Redox action involving the tungsten and a-COx interface is neither contaminated nor compromised.

The next stage is the forming step, which brings the memory device to its normal operating state. For this, a triangular pulse of positive polarity is applied to the bottom electrode. As the applied voltage nears the forming voltage Vf of around 4-5 V (a function of the film thickness), there is an abrupt increase in the current flowing through the cell. This switches the cell from its virgin state to a low-resistance state (LRS), also called its SET state. A sequence of 1 µs wide triangular pulses may also be used for forming the a-COx cells.

The device can be brought back to its high-resistance state (HRS), or RESET state, by applying a 10 ns pulse of negative polarity to the bottom electrode. This does not require the built-in current limiting resistor.