Category Archives: Electronics History

Peltier Cell Generates Electricity from a Lamp

The early 20th century saw the end of candles and oil lamps as electric lighting became more common. Earlier, candles were made from various materials such as natural fat, wax, and tallow. Today, however, most manufacturers make candles from paraffin wax, a substance obtained from refining petroleum.

Compared to an incandescent bulb, a candle has nearly a hundred times lower luminous efficacy. The luminous efficacy of a modern candle is about 0.16 lumens per watt, and it produces nearly 80 W of heat energy. Tea lights, a smaller form of candle, come with a smaller wick and produce a smaller flame. A standard tea light produces about 32 W, depending on the wax it uses.

The Peltier cell makes it possible to convert a small fraction of the heat energy from a tea light into electricity, which can then drive a highly efficient LED light. This arrangement boosts the total luminous efficacy of the tea light, so we get a larger amount of useful light.

The Peltier element is really a solid-state active heat pump. Electricity applied to the element causes it to transfer heat from one side of the device to the other, so a Peltier element can be used for heating or cooling. If one side of the Peltier element is heated to a temperature higher than the other, the element works in reverse, generating a voltage difference between its terminals. This reverse effect is known as the Seebeck effect, and the device then works as a thermoelectric generator.

As the efficiency of a typical thermoelectric generator is only around 5-8%, the heat from a tea light should be capable of generating about 1.6-2.56 W of electrical power from the Peltier element. In practice, the Peltier element delivers only about 0.25 W from the heat of the tea light. The reason is that the Peltier element cannot capture all the heat the tea light produces: some is lost in transmission, and some goes into heating up the Peltier element itself. Even so, the energy generated by the Peltier acting as a thermoelectric generator is enough to run a small fan and drive an LED lamp satisfactorily.
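As a quick sanity check on these figures, here is a minimal back-of-the-envelope sketch in Python using only the numbers quoted above (a 32 W tea light, 5-8% generator efficiency, roughly 0.25 W measured). It is purely illustrative, not a thermal model of the TEC1-12706 setup described below.

# Rough estimate based on the figures quoted in the article; illustrative only.
TEA_LIGHT_HEAT_W = 32.0              # approximate heat output of a standard tea light
TEG_EFFICIENCY_RANGE = (0.05, 0.08)  # typical thermoelectric generator efficiency

ideal_low = TEA_LIGHT_HEAT_W * TEG_EFFICIENCY_RANGE[0]   # 1.6 W
ideal_high = TEA_LIGHT_HEAT_W * TEG_EFFICIENCY_RANGE[1]  # 2.56 W
print(f"Ideal output: {ideal_low:.2f} W to {ideal_high:.2f} W")

measured_w = 0.25   # roughly what the element delivers in practice
print(f"Captured fraction of the low-end estimate: {measured_w / ideal_low:.0%}")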

A thermoelectric generator can be built around two 40×40 mm TEC1-12706 Peltier elements, mounted between two heat sinks, and connected in series to boost the voltage output. The smaller heat sink at the bottom serves to spread the heat from the tea light to heat up the Peltier elements evenly. The larger heat sink at the top has a fan to cool it and maximize the temperature difference between the two sides of the Peltier elements.

Although the fan draws power from the Peltier elements, it also helps to improve the efficiency of the system and make more energy available for the LED light. The fan also helps to keep the Peltier elements from overheating. Peltier elements are internally soldered with a bismuth alloy solder melting at 138°C, so no Peltier element should operate above this temperature.

OLED Lighting in the Auto Industry

In recent years, a number of industries have started using Organic Light-Emitting Diodes (OLEDs) in diverse ways. The automotive industry, in particular, has seen huge potential in OLEDs. For instance, Audi will soon introduce OLED taillights and has already presented prototypes. At the LOPEC Congress, Audi provided advanced insights into the requirements the automotive industry will demand of OLEDs, and into the future of automotive lighting.

So far, there have been plenty of developments. At LOPEC, Audi demonstrated prototypes of their OLED taillights, which they claim have reached the production stage. However, using OLEDs in vehicles has always been a challenge, even though OLED lighting installations and table lamps have been around for a while and are in use in museums, clubs, and restaurants.

Difficulties of Using OLED in Automobiles

The major hurdles OLEDs have to cross for use in automobiles are that they must withstand humidity, heat, cold, UV radiation, and constant vibration, all of which can reduce their life span drastically. Audi claims to have solved this problem by encapsulating their displays hermetically, which should make the displays as stable as LEDs.

Why Use OLED in Place of LEDs?

Regular LEDs act as point sources of light, and generating an even light from them requires substantial development work. OLEDs, on the other hand, are evenly radiating sources of light and naturally produce uniform illumination. Moreover, they are only about a millimeter thick, which makes OLEDs more suitable for automotive design.

Designers find the appearance of OLEDs to be of high quality, both when off and on, because of their simple, clean surface. Since design is an important aspect of the automotive industry, this makes OLEDs ideal for such use. Most automobile owners expect a certain lifestyle from their vehicles, beyond the functional job of transportation from point A to point B.

However, for use as turn signals and brake lights, the light intensity from OLEDs is not yet adequate and will have to be increased. The automotive industry is also working on flexible OLEDs. At present, most designs use glass-based OLEDs, which are rigid; using plastic foil substrates as the base for OLEDs is opening up a whole new world of opportunities for designers.

Audi expects LOPEC to open up a broad range of businesses and research institutes for them. They expect to hold discussions with specialists across this breadth of activity, and to meet other OLED manufacturers and materials developers.

What Does the Future Hold?

In about a decade from now, the world will be witnessing innovations in vehicle lighting that most can only dream about today. As it is, a vehicle's lighting system already functions as a form of communication: hazard lights, turn signals, and brake lights, for example. In the future, driverless cars will need to interact with others on the road with even greater sophistication. One of Audi's visions is a three-dimensional OLED display extending across the entire tail of the vehicle on the body panel, with OLEDs integrated into the windshield.

Colorful Images from Electron Microscopy

Almost everyone treats Christmas as the time to get away from regular work. Surprisingly, there are exceptions, such as Roger Tsien. This late biochemist would put in an extra two weeks of uninterrupted research in his lab during Christmas. In one of these sessions, he gifted the world the first electron micrographs in color. The method he used to create them promises to dramatically advance cell imaging.

Scientists use Electron Microscopy (EM) to magnify objects up to 10 million times their original size. The technique makes use of accelerated electrons for the purpose. Conventional EM images are in grayscale, and scientists add color using computer graphics programs after the images are recorded. Tsien and his colleagues modified the EM technique to incorporate color labeling directly into the images.

Along with co-workers Mark H. Ellisman and Stephen Adams, Tsien devised techniques employing serial applications of various lanthanides, or rare earth metals, which served as the labels. Along with this, the researchers used a type of EM called EELS, or electron energy-loss spectroscopy. EELS is capable of differentiating among the lanthanides by measuring the differences in energy each lanthanide deflects or absorbs from an electron beam.

For instance, to create the color image of a cell organelle such as an endosome, the researchers first stained the sample with a lanthanide called cerium, which made the sample appear green when viewed under EELS. After removing the excess cerium, they applied the element praseodymium, which targeted another protein within the sample and which EELS registered as red. All that the scientists then had to do was overlay the green and red images onto a traditional grayscale EM image to create the composite. The final image highlighted distinct regions of the endosome in red and green.

In the November issue of the publication Cell Chemical Biology, Tsien and his coauthors described their multicolor EM technique. Although the technique is still very new, scientists are using it to obtain new information about cell structure. For instance, regular light microscopy is incapable of showing protein movements within and between cells. With the new technique, scientists can now view cell components at a much higher level of detail.

For instance, until now, scientists had only a hypothesis about the fate of certain molecules, since they are too small to be visible using light microscopes. EELS offered vibrant proof and confirmed the hypothesis. Until then, scientists had only conjectured that certain CPPs, or cell-penetrating peptides, were responsible for ferrying molecules as cargo into cells, and that the cells then took these molecules up into the interior of endosomes. With praseodymium coloring one kind of CPP with a red label, scientists were able to verify their hypothesis, as the CPP visibly ended up inside the endosome. At the same time, another molecule, colored vivid green with cerium, ended up predictably at the endosomal surface.

Tsien's death has deprived the world of further contributions to this transformative technique. However, his innovations will continue to inspire his co-workers and newer generations of scientists. As a fitting last gift to the scientific world, Tsien added color to electron microscopy, allowing scientists to see more within cells.

What is Vapor Phase Reflow Soldering?

Vapor Phase Reflow Soldering is an advanced soldering technology. It is fast replacing the other soldering processes manufacturers presently use for assembling printed circuit boards in high volumes for all sorts of electronic products. Soldering electronic components to printed circuit boards is a complex physical and chemical process requiring high temperatures. With the introduction of lead-free soldering, the process has become more stringent, requiring still higher temperatures and shorter times. All the while, components are becoming smaller, making the process more complicated.

Manufacturers face soldering problems for many reasons. Chief among them is the introduction of lead-free components and the lead-free soldering process. Another reason is that boards often contain components of different masses. The heat stored by these components during the soldering process varies with their mass, resulting in uneven heat distribution that can warp the printed boards.

With vapor phase reflow soldering, the board and components face the lowest possible maximum temperatures necessary for proper soldering, so there is no overheating of components. The process offers the best wetting of components with solder, and the soldering happens in an inert atmosphere devoid of oxygen, resulting in the highest quality of soldering. The entire process is environmentally friendly and cost-effective.

In the Vapor Phase Reflow Soldering process, the soldering chamber initially contains Galden, an inert liquid with a boiling point of 230°C. This is the same as the process temperature for lead-free Sn-Ag solders. During start-up, Galden is heated to its boiling point, forming a layer of vapor above the liquid surface that displaces the ambient air upwards. As the vapor has a higher molecular weight, it stays just above the liquid surface, ensuring an inert vapor zone.

A printed circuit board and components introduced into this inert vapor zone are enveloped by Galden vapor condensing back into its liquid form. This change of phase from vapor to liquid releases a large amount of thermal energy. As the vapor encompasses the entire PCB and its components, there is no difference in temperature even for high-mass parts; everything inside the vapor is heated thoroughly to the vapor temperature. This is the biggest advantage of the vapor phase soldering process.

The heat transfer coefficient during condensation of the vapor ranges from 100-400 W/(m²·K). This is nearly 10 times higher than the heat transfer coefficients involved in convection or radiation, and about 10 times lower than that achieved with direct contact in liquid soldering processes. The excellent heat transfer rate prevents any excessive or uneven heating, and the soldering temperature of the vapor phase reflow process stays constant at the Galden boiling point of 230°C.
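To see what this condensation heat transfer coefficient implies, here is a small illustrative sketch in Python. The coefficient range is the one quoted above; the board area and temperature difference are made-up example values, not figures from any real soldering profile.

# Newton's law of cooling: Q = h * A * dT
def condensation_heat_flow(h_w_m2k, area_m2, delta_t_k):
    """Heat flow in watts delivered to a surface by the condensing vapor."""
    return h_w_m2k * area_m2 * delta_t_k

board_area = 0.1 * 0.16   # hypothetical 100 mm x 160 mm board, in square meters
delta_t = 50.0            # hypothetical board-to-vapor temperature difference, in K

for h in (100, 400):      # condensation coefficient range, W/(m2*K)
    print(f"h = {h} W/(m2*K) -> {condensation_heat_flow(h, board_area, delta_t):.0f} W")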

The Vapor Phase Reflow Soldering process offers several advantages. Soldering inside the vapor zone ensures there can be no overheating. As the vapor completely encompasses the components, there are no cold solder joints due to uneven heat transfer or shadowing. The inert vapor phase process eliminates the need for nitrogen. Controlled heating of the vapor consumes only about one-fifth of the usual direct energy, and saves on air-conditioning costs.

As the entire process is a closed one, there is no creation of hazardous gases such as those from burnt flux. Additionally, Galden is a neutral process fluid and environmentally friendly.

Why do Speakers use Ferro-fluids?

Speakers reproduce sound by moving a diaphragm to displace air. The mechanism resembles a permanent magnet electric motor. The major difference is the voice coil in a speaker moves linearly instead of in a circular motion. As the coil moves back and forth in step with the electrical signals fed to it, it moves the attached diaphragm. To prevent spurious movements and unwanted oscillations of the diaphragm, conventional speakers generally use a damper. To produce sound from such speakers, extra energy is necessary to overcome the resistance of the damper.

Additionally, the damper has its own natural frequency of vibration that prevents the speaker from reproducing sound accurately at all frequencies. A new technique that replaces the damper with a magnetic fluid claims to correct this anomaly, reducing energy consumption and allowing louder and clearer sound across the entire range of frequencies the speaker can reproduce. To quantify the advantages, the new speaker reduces energy consumption by 35% for the same loudness as a conventional speaker, and the improvement in sound quality is nearly 3 dB.

NASA originally developed the magnetic fluid in the 1960s for space exploration and called it Ferro-fluid. It responds to applied magnetic fields because it is infused with nano-sized magnetic particles, which do not agglomerate or cluster together thanks to a coating of suitable surfactants. This unique characteristic makes ferro-fluids useful in a range of applications. Using applied magnetic fields to control flow or movement, ferro-fluids can replace mechanical parts in applications such as vehicle suspensions, fuel flow control in reactors, and more.

In a conventional speaker, the damper holds several components, such as the diaphragm and spring, in place even when the speaker is vibrating. However, the damper causes friction while moving, distorting the original sound waves with secondary vibrations that are manifest as noise. Overcoming the friction requires additional drive energy, which reduces the speaker's total volume output by a few decibels.

When replacing the damper in a speaker, the ferro-fluid used has a thickness of only a few microns. The magnets of the speaker create a permanent magnetic field to which the ferro-fluid responds by holding the diaphragm and the coil in place while allowing them to move linearly without any friction. As there are no secondary vibrations from the ferro-fluid, the sound is clearer. The lack of friction allows the speaker to save about 35% of the energy as compared to conventional speakers with dampers.

Ferro-fluids used in the audio field are usually based on two classes of carrier liquids: synthetic esters and hydrocarbons. Both oils are low in volatility and high in thermal stability. Environmental considerations dictate the choice of fluid, along with the best balance of viscosity and magnetization for optimizing acoustical performance.

By using different carrier liquids and varying the quantity of magnetic material, a ferro-fluid can be tailored to meet different needs. The saturation magnetization depends on the nature of the suspended magnetic material and its volumetric loading. Care is taken to use material whose density and viscosity correspond closely to those of the carrier fluid.

What You Need To Know About EMI Antennas

Any electronic device, system, or subsystem generates EMI, or ElectroMagnetic Interference, and is susceptible to EMI generated by others. To allow them to coexist and cooperate, all such electronic devices, systems, or subsystems must conform to specific standards, which limit the amplitude and frequency range of the EMI each may generate and must tolerate.

Testing for such radiated emissions and immunity involves EMI chambers and OATS, or Open Area Test Sites. To check for generated EMI, these chambers or OATS have several types of antennas that together cover a wide range of frequencies. As visits to a full-compliance lab are expensive and time-intensive, you may want to do pre-compliance tests, for which it is a simple matter to set up a temporary antenna in a conference room or basement. This helps in troubleshooting and correcting EMI problems beforehand.

Several factors decide the type of antenna you should use for your tests. The choice depends on whether you are testing radiated emissions or radiated immunity, whether the tests are pre-compliance or full compliance, and on the frequency range, power handling, and size of the antennas. The most common EMI test people perform is for radiated emissions. Here too, the antenna you use will depend on frequency, size, gain, and your budget.

For pre-compliance tests, the most popular antenna is the hybrid, also known by names such as Combilog, Biconilog, and Bi-log. Hybrids are favored because of their wide frequency range, which easily covers ranges from 30 MHz to 7 GHz, depending on the model. This is a big advantage, as you do not need to switch antennas between tests, as you would if you were using log-periodic or biconical antennas.

For a lab, where precision is more important, using multiple antennas gives an advantage in performance. Typically, a lab might use a horn antenna for frequencies above 1 GHz, a log-periodic antenna from 200 MHz to 1 GHz, and a biconical antenna for frequencies below 200 MHz. However, for pre-compliance tests in makeshift labs, hybrid or Bi-log antennas are adequate.
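These rules of thumb can be summed up in a tiny helper function. The sketch below simply encodes the frequency boundaries quoted here in Python; a real lab would of course choose antennas from its own calibrated inventory.

def pick_antenna(freq_hz, pre_compliance=False):
    """Suggest an EMI measurement antenna for a given frequency."""
    if pre_compliance:
        # A hybrid (Bi-log) covers roughly 30 MHz to 7 GHz in a single antenna.
        return "hybrid (Bi-log)"
    if freq_hz > 1e9:
        return "horn"
    if freq_hz >= 200e6:
        return "log-periodic"
    return "biconical"

print(pick_antenna(2.4e9))                        # horn
print(pick_antenna(100e6))                        # biconical
print(pick_antenna(500e6, pre_compliance=True))   # hybrid (Bi-log)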

The size of the antenna you can use depends on the space you have in your makeshift lab. Larger antennas cover a wider frequency range along with better sensitivities as compared to those offered by smaller antennas. Some designs of hybrid antennas come with bent elements, which help to fit them in limited spaces. In general, hybrid antennas are larger than most dedicated antennas.

Some antennas can be used for both radiated immunity and radiated emission tests. However, for immunity tests, you must consider how much power you can drive into the antenna to obtain the required field strength. Typically, immunity testing requires larger antennas than those necessary for measuring emissions alone.

Hybrid antennas usually combine a log-periodic element with a biconical element. This extends the frequency range the antenna covers as compared with that covered by single-type antenna. For example, one of the newest hybrid antennas covers the entire range of 26 MHz to 3 GHz, while being able to handle signal power up to 300 W for immunity tests.

What is Digital Signal Processing?

When DSP, or Digital Signal Processing, was introduced over thirty years ago, it involved standalone processing: a single microcontroller handled all the parameters for processing the analog signal and transforming it into its digital value. Evolution in this area has introduced multicore processing elements that now extend the DSP's range of applications.

Simultaneously, the evolution of software development tools for DSPs now allows them to accommodate a much wider range of programmers. Therefore, on one hand you can have voice and image recognition in small, low-power, yet smart devices, while on the other, real-time data analytics is possible on multicore high-performance compute platforms. This way, DSPs offer nearly endless opportunities for achieving low-power processing efficiencies.

Although initial DSPs processed only audio, engineers quickly adapted DSP technology for a wide variety of applications. Today, DSPs are available as standalone or as part of an SoC or System-on-Chip offering full software programmability including all the benefits of software-based products.

DSPs take already digitized signals from the real world, such as audio, video, pressure, temperature or position for further mathematical manipulations. Engineers design DSPs for performing quick mathematical operations such as add, subtract, multiply and divide.

This processing enables displaying, analyzing, or converting the information into another type of signal that is useful. In the real world, several analog products are available to detect and manipulate signals such as pressure, temperature, light, or sound. These signals are passed on to converters such as ADCs, or Analog-to-Digital Converters, which transform the analog signals into a digital format of 1s and 0s.

The DSP takes over this stream of digitized information and processes it further, and the processed digital information goes back for use in the real world. The DSP does this in one of two ways: it either feeds the information in digital format to instruments capable of handling it, or, where that is not possible, the digital signal passes through a second converter, the DAC or Digital-to-Analog Converter, which converts it back to analog. All this happens at very high speed.
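A toy sketch of this chain in Python may help. Here the "ADC" step is simulated by sampling a sine wave, the "DSP" step applies a simple moving-average filter as a stand-in for the multiply-accumulate work a real DSP performs, and the result would then go to a DAC. The sample rate and tone frequencies are arbitrary example values, not a model of any particular DSP chip.

import math

SAMPLE_RATE = 8000   # samples per second
N_SAMPLES = 64

# "ADC": sample a 440 Hz tone with some higher-frequency interference added.
signal = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
          + 0.3 * math.sin(2 * math.pi * 3000 * n / SAMPLE_RATE)
          for n in range(N_SAMPLES)]

# "DSP": a 5-tap moving-average low-pass filter, i.e. repeated multiply-accumulate.
TAPS = [0.2] * 5
filtered = [sum(TAPS[k] * signal[n - k] for k in range(len(TAPS)) if n - k >= 0)
            for n in range(N_SAMPLES)]

# "DAC": in a real system these samples would now be converted back to analog.
print([round(x, 3) for x in filtered[:5]])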

An MP3 player is a very simple illustration of the concept of DSP. The analog audio, during the recording phase, passes through a receiver containing a microphone and an amplifier. An ADC then converts this analog signal into digital information, before passing it over to a DSP. The DSP processes the digital signal further as defined by its internal algorithm and encodes it as MP3, before saving the file to memory.

While playing back the recorded information, the DSP decodes the file from memory and a DAC converts the digital signal into analog form, making it suitable for output through an amplifier and speaker system. If necessary, the DSP handles other functions such as level control, equalization, and user interfacing.

A computer can also use information from a DSP, for example to control security or home theater systems and telephones, and to compress video. Compressed signals are more efficient to transmit. Additionally, the computer can easily manipulate or enhance the signals to improve their quality.

Let Raspberry Pi Track Bats for You

If you live in an area that has fruit trees around, it is likely bats share your space. Bats are furry mammals that flit about at night, feasting on insects and fruits. Although they are not gifted with good eyesight, they locate prey and avoid obstacles using echolocation. They are expert fliers and it is difficult to observe them since they are so silent.

Although humans cannot hear bats, it does not mean these creatures make no noise. In fact, in the process of echolocation, bats produce a considerable amount of sound. However, humans cannot hear them because the sound bats produce lies in a frequency range beyond human hearing. Depending on age, humans can hear sounds in the frequency range between 20 Hz and 15-20 kHz, whereas bats can hear and produce sound up to about 110 kHz. That is why a Raspberry Pi, or RBPi, is necessary to collect, process, and graphically represent bat calls.

An analysis of bat calls shows the sounds they produce are quite loud and not limited to just one tone. Different breeds of bats produce a variety of sounds, differing just as bird chirping does. For example, their tone may sweep down from a high frequency to a low one, or move around a specific frequency.

Holger and Henrike Korber from Germany have used an RBPi to make a bat detection device. To collect the sound produced by bats, they use an inexpensive, highly sensitive microphone capable of responding to high frequencies. The algorithm they use allows not only a graphical representation of the calls, but also identification of the bat species. Additionally, the software can shift the calls into frequencies within the human hearing range and create histories of bat activity.
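One common way to bring ultrasonic bat calls into the human hearing range is time expansion: record at a high sample rate and play back at a fraction of that rate, which shifts every frequency down by the same factor. The sketch below illustrates the idea with Python's standard wave module; it is not the Korbers' software, and the file name bat_call.wav is a hypothetical recording.

import wave

EXPANSION_FACTOR = 10   # a 45 kHz call plays back at 4.5 kHz, within human hearing

with wave.open("bat_call.wav", "rb") as src:
    params = src.getparams()
    frames = src.readframes(src.getnframes())

with wave.open("bat_call_audible.wav", "wb") as dst:
    dst.setparams(params)
    dst.setframerate(params.framerate // EXPANSION_FACTOR)  # slower playback rate
    dst.writeframes(frames)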

On their site, whose name translates to Bat Conservation in English, the Korbers offer a list of bat literature. If you know German, you will find a treasure trove of information on echolocation and acoustic identification of bat species. To read it in English, pass the page through Google Translate.

Details of their new WLAN-Raspi-Bat detector are available here. The detector, based on the RBPi Model B+, is wirelessly connected to an external notebook. That allows easy manipulation of the configuration and wireless recording of data. The RBPi bat project uses a UMTS stick for WLAN communication and a modified image of the RBPi OS.

The WLAN-Raspi-Bat detector sends SMS text messages automatically and at freely configurable times. For example, this could be just after the RBPi has booted or just before it shuts down. As the detector is portable, it is important to save on power consumption and data space on the SD Card. To keep the arrangement simple, the Korbers use a simple clock timer to start and shut down the RBPi. As bats venture out only at night, the RBPi can sleep during the day along with the bats.

As the detector communicates wirelessly, there are numerous applications. For example, it can operate in hard-to-reach locations, such as high up in tree canopies or in buildings with difficult access.

What are Leadless Packages?

Electronic components, especially semiconductors, have undergone a dramatic transformation over the past few decades. Starting from through-hole packages, semiconductors evolved into surface-mount packaging, which is the default today. With the increase in packaging density, surface-mount packaging is now mostly limited to passive components, while semiconductors are moving towards current technologies involving leadless packaging.

Modern leadless packaging technologies include dual/quad flat no-lead (DFN/QFN) packages, Ball Grid Arrays (BGAs), and Chip Scale Packaging (CSP). Such innovative technologies allow the semiconductor industry to exploit successive IC process shrinks and achieve product performance that was earlier thought impossible.

For example, consider a simple three-pin discrete device such as a MOSFET, typically used as a switching device that can conduct currents ranging from 0.1A to more than 100A at voltages surpassing 1000V. Applications as diverse as motor controls to battery management use MOSFETs.

Leadless packaging makes discrete devices more attractive because of the assembly efficiencies involved, which also make them friendlier to the environment. Although several leadless solutions are possible for packaging MOSFETs, including BGAs, CSPs, and DFN/QFN, the governing factor here is mainly market price pressure. Substrate costs may be high, making the package material set undesirable for BGA packaging. Moreover, the capital expenditure required to change over to full production with new packaging types such as BGAs and CSPs may increase the per-unit cost.

Consequently, BGA and CSP packaging is limited to discrete semiconductor applications where the average selling price is secondary to more important parameters such as performance. At present, traditional surface-mount packages are being replaced by more cost-effective leadless package solutions such as the DFN and QFN.

The manufacturing of a typical DFN package consists of six key processes. A silicon die is attached to a copper alloy or similar leadframe using a highly conductive epoxy resin. The package pads are then connected to the silicon die using aluminum or gold wirebonds. The silicon and leadframe assembly is then sealed with a mold of a halogen-free compound. Sawing the molded leadframe yields the finished package.

Leadless packages offer several advantages. They utilize the available board space more efficiently, while improving the thermal performance of the device. For example, the SOT23 package, one of the most widely used packages in the semiconductor industry, has a silicon-to-footprint ratio of 23% and occupies 8 mm² of space on the printed circuit board. By comparison, the DFN2020 package has a silicon-to-footprint ratio of 42%, nearly double that of the SOT23, while occupying only 4 mm² on the PCB. This leads to huge cost benefits for the manufacturing industry, while simultaneously increasing the electrical performance of the application.
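The figures quoted above can be turned into a quick comparison. The snippet below simply multiplies the stated footprints by the stated silicon-to-footprint ratios; the resulting silicon areas are illustrative, derived numbers, not datasheet values.

packages = {
    "SOT23":   {"footprint_mm2": 8.0, "silicon_ratio": 0.23},
    "DFN2020": {"footprint_mm2": 4.0, "silicon_ratio": 0.42},
}

for name, p in packages.items():
    silicon_mm2 = p["footprint_mm2"] * p["silicon_ratio"]
    print(f"{name}: {p['footprint_mm2']:.0f} mm2 on the PCB, "
          f"about {silicon_mm2:.2f} mm2 of silicon ({p['silicon_ratio']:.0%})")

In other words, by these figures the DFN2020 carries nearly as much silicon as the SOT23 in only half the board area.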

The DFN package has a highly conductive copper alloy pad for the die, which is exposed to the outside of the package to be soldered. This larger area of contact between the DFN package and the printed circuit board results in a very low thermal impedance between the junction and the leads. This ensures not only a reliable contact, but also a higher thermal efficiency as compared to typical surface mount packages.

What is a 4-20 mA Current Loop?

The pre-electronic industry used pneumatic controls. Compressed air powered all ratio controllers, temperature sensors, PID controllers, and actuators. The modulation standard was 3-15 pounds per square inch (psi), with 3 psi representing an active zero and 15 psi representing 100%. If the pressure went below 3 psi, an alarm would sound.

Electronic controls made their debut in the 1950s. A new signaling method using a 4-20 mA current emulated and replaced the 3-15 psi pneumatic signal. As wires were easier to handle, install, and maintain, current signaling quickly gained popularity. In contrast, pneumatic pressure lines have much higher installation and energy requirements; you need a 20-50 HP compressor, for instance. Moreover, with electronics you can implement more complicated control algorithms.

The 4-20 mA current loop is a sensor signaling standard, and a very robust one. Current loops are the favored method of data transmission because they are inherently insensitive to electrical noise. In a 4-20 mA current loop, the signaling current flows through all the components, so the same current flows even if the wire terminations are not perfect. All components in the loop drop some voltage because the signaling current flows through them. However, the signaling current is unaltered by these voltage drops as long as the power supply voltage remains greater than the sum of the individual voltage drops around the loop at the maximum signaling current of 20 mA.

The simplest form of the 4-20 mA current loop has only four components –

− A DC power supply
− A 2-wire transmitter
− A receiving resistor to convert the current signal to a voltage
− A wire to interconnect all the above

Most 4-20 mA loops use 2-wire transmitters, with standard power supplies of 12, 15, 24 and 36 VDC. There are also 3-wire transmitters with AC or DC power supplies.

The transmitter forms the heart of the 4-20 mA signaling system. It converts a physical property such as pressure, humidity, or temperature into an electrical current proportional to the quantity being measured. In the 4-20 mA current loop system, 4 mA represents the lowest limit of the measurement range, while 20 mA represents the highest limit.

Since it is much easier to measure voltage than current, typical current loop circuits incorporate a receiver resistor. This resistor converts the current into a voltage, following Ohm's Law (Voltage = Current x Resistance). Most commonly, the resistor used in a 4-20 mA current loop is 250 Ω, although some engineers use resistances of 100 Ω to 750 Ω, depending upon the application. With 250 Ω, 4 mA of current produces 1 VDC across the resistor, and 20 mA produces 5 VDC. Therefore, the analog input of a controller can easily interpret the 4-20 mA current as a 1-5 VDC voltage range.
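A minimal sketch of this scaling in Python is shown below, assuming a 250 Ω receiver resistor; the 0-100 °C transmitter span is only a hypothetical example range.

R_RECEIVER_OHMS = 250.0
I_MIN_A, I_MAX_A = 0.004, 0.020   # the 4 mA and 20 mA loop limits

def loop_current_to_value(current_a, span_min, span_max):
    """Convert a 4-20 mA loop current into the measured physical quantity."""
    voltage = current_a * R_RECEIVER_OHMS                  # Ohm's Law: V = I x R
    fraction = (current_a - I_MIN_A) / (I_MAX_A - I_MIN_A)
    print(f"{current_a * 1000:.1f} mA -> {voltage:.2f} V across the resistor")
    return span_min + fraction * (span_max - span_min)

# Example: a transmitter spanning 0-100 degrees C reporting 12 mA (mid-scale).
print(f"Measured value: {loop_current_to_value(0.012, 0.0, 100.0):.1f} degrees C")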

The wire connecting all the components of a 4-20 mA current loop has its own resistance, expressed in ohms per 1,000 feet. Some voltage is dropped across this wire resistance according to Ohm's Law, and the power supply voltage must be high enough to compensate for it.

The major advantages in using 4-20 mA current loops are their extreme immunity to noise and power supply voltage fluctuations.