Monthly Archives: June 2013

What Is Electromagnetic Interference (EMI) And How Does It Affect Us?

(Snap on ferrite for EMI suppression)

Electromagnetic interference, abbreviated EMI, is the degradation in the performance of a device, transmission channel, or system caused by an electromagnetic disturbance. It is also called radio frequency interference, or RFI, when the interference falls in the radio frequency spectrum.

All of us encounter EMI in our everyday life. Common examples are:

• Disturbance in the audio/video signals on radio/TV due to an aircraft flying at a low altitude

• Noise on microphones from a cell phone handshaking with a communication tower to process a call

• A welding machine or a kitchen mixer/grinder generating undesired noise on the radio

• On flights, particularly during takeoff and landing, we are required to switch off cell phones, since the EMI from an active cell phone can interfere with navigation signals.

EMI is of two types: conducted, in which there is physical contact between the source and the affected circuits, and radiated, in which the disturbance is coupled without contact through induction or electromagnetic radiation.

The EMI source experiences rapidly changing electrical currents and may be natural, such as lightning or solar flares, or man-made, such as the switching on or off of heavy electrical loads like motors and lifts. EMI may interrupt, obstruct, or otherwise cause an appliance to under-perform or even sustain damage.

In radio astronomy parlance, EMI is termed radio-frequency interference (RFI): a signal within the observed frequency band emanating from sources other than celestial objects. Because RFI levels are often much larger than the faint signals under observation, RFI is a major impediment in radio astronomy.

Susceptibility to EMI and Mitigation

Analog amplitude modulation and other older, traditional technologies cannot differentiate between desired and undesired signals, and hence are more susceptible to in-band EMI. Recent technologies like Wi-Fi are more robust, using error-correcting codes to minimize the impact of EMI.

All integrated circuits are a potential source of EMI, but they become significant sources only in conjunction with physically larger components such as printed circuit boards, heat sinks, and connecting cables, which can act as antennas. Mitigation techniques include the use of surge arresters or transzorbs (transient absorbers), decoupling capacitors, etc.

Spread-spectrum and frequency-hopping techniques help both analog and digital communication systems to combat EMI. Other solutions like diversity, directional antennae, etc., enable selective reception of the desired signal. Shielding with RF gaskets or conductive copper tapes is often a last option on account of added cost.
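
To make the frequency-hopping idea concrete, here is a minimal sketch in Python. The 79-channel plan and the shared seed are illustrative assumptions; the point is only that transmitter and receiver derive the same pseudo-random channel sequence, so narrowband EMI on any single channel corrupts just a fraction of the traffic.

```python
import random

# Frequency hopping: both ends derive the same pseudo-random channel
# sequence from a shared seed, so narrowband EMI on any one channel
# corrupts only a fraction of the transmission.
N_CHANNELS = 79
SHARED_SEED = 0xBEEF

def hop_sequence(seed, hops):
    rng = random.Random(seed)
    return [rng.randrange(N_CHANNELS) for _ in range(hops)]

tx_hops = hop_sequence(SHARED_SEED, 10)
rx_hops = hop_sequence(SHARED_SEED, 10)
assert tx_hops == rx_hops  # both ends stay in lock-step
print(tx_hops)
```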

RFI detection with software is a modern method of handling in-band RFI. It detects interfering signals in the time, frequency, or time-frequency domain and ensures that these signals are excluded from further analysis of the observed data. This technique is useful for radio astronomy studies, but less effective against EMI from most man-made sources.
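
A minimal sketch of the idea, assuming a simple robust-threshold rule (median plus five median absolute deviations) applied to a simulated power spectrum; the threshold rule and the numbers are illustrative choices, not a standard algorithm from the article.

```python
import numpy as np

# Software RFI flagging in the frequency domain: flag any channel whose
# power exceeds a robust threshold, then drop it from further analysis.
rng = np.random.default_rng(0)
spectrum = rng.rayleigh(1.0, 1024)   # noise-like passband, 1024 channels
spectrum[[100, 101, 512]] += 50.0    # inject narrowband interference

median = np.median(spectrum)
mad = np.median(np.abs(spectrum - median))
rfi_mask = spectrum > median + 5 * mad   # True where a channel is flagged

clean = spectrum[~rfi_mask]              # data retained for analysis
print(f"Flagged {rfi_mask.sum()} of {spectrum.size} channels as RFI")
```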

EMI is sometimes put to deliberate use as well, as in modern warfare, where it is generated intentionally to jam enemy radio networks and disable them for strategic advantage.

Regulations to contain EMI

The International Special Committee on Radio Interference (CISPR) created global standards covering recommended emission and immunity limits. These standards led to regional and national standards such as the European Norms (EN). Despite the additional costs sometimes incurred in giving electronic systems an agreed level of immunity, conforming to these regulations enhances their perceived quality for most applications in the present-day environment.

Can capacitors act as a replacement for batteries?

It is common knowledge that capacitors store electrical energy. One could infer that this energy could be extracted and used in much the same way as from a battery. Why, then, can capacitors not replace batteries?

Conventional capacitors discharge rapidly, whereas batteries discharge slowly, as required by most electrical loads. A newer type of capacitor with capacitances of the order of one farad or higher, called the supercapacitor:

• Are capable of storing electrical energy, much like batteries
• Can be discharged gradually, similar to batteries
• Can be recharged rapidly – in seconds rather than the hours that batteries need
• Can be recharged again and again without significant degradation (batteries have a limited life and hold progressively less charge with age, until they can no longer be recharged)

The supercapacitor would thus appear to be one up on batteries in terms of performance and longevity, and further research could actually lead to a viable alternative to conventional fuel for automobiles. It is this concept that contributed to hybrid, fuel-efficient cars.

However, let us not jump to conclusions without considering all the aspects. For one, the research required to refine this technology would be both time- and cost-intensive, and the outcome must justify the effort. The drawbacks, some of which are listed below, must be carefully weighed against the advantages enumerated above:

• Supercapacitors’ energy density (watt-hours per kg) is much lower than that of batteries, leading to impractically large capacitors
• For quick charging, one would need to apply very high voltages and/or currents. As an illustration, charging a 100 kWh battery in 10 seconds from a 500 V supply would need a current of 72,000 A (see the worked calculation after this list). This would be a safety challenge, besides needing huge cables with solid insulation and a stout supporting structure
• The sheer size of the charging infrastructure would call for robotic systems, a cumbersome and expensive setup. The cost and complexity of its operation and maintenance at multiple locations could defeat its purpose
• Primary power to enable the stations to function may not be available at remote locations.
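
The charging figures above follow directly from the energy involved. Here is a short sketch reproducing them, with an added energy-density comparison against an ordinary AA cell; the 3000 F / 2.7 V supercapacitor and the AA figures are illustrative assumptions, not values from the article.

```python
# Reproducing the charging arithmetic quoted above: a 100 kWh pack
# charged in 10 s from a 500 V supply.
energy_j = 100 * 3.6e6        # 100 kWh, since 1 kWh = 3.6 MJ
power_w = energy_j / 10       # 36 MW of average charging power
current_a = power_w / 500     # -> 72,000 A
print(f"{power_w / 1e6:.0f} MW, {current_a:,.0f} A")

# The energy-density gap, using E = C * V**2 / 2 for the capacitor.
cap_j = 0.5 * 3000 * 2.7 ** 2   # ~10.9 kJ from a large supercapacitor
aa_j = 2.5 * 3600 * 1.5         # ~13.5 kJ from one small AA cell
print(f"Supercapacitor: {cap_j / 1e3:.1f} kJ, AA cell: {aa_j / 1e3:.1f} kJ")
```
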
Many prefer to opt for the traditional “battery bank” instead. The major problems of lead-acid battery banks are the phenomenal rise in the cost of lead and the use of corrosive acid. Warm climates accelerate the chemical degradation, leading to a shorter battery life.

A better solution, as often advocated, is a century-old technology: nickel-iron (NiFe) batteries. These batteries need minimal maintenance; the alkaline electrolyte (potassium hydroxide, typically with lithium hydroxide added) has to be changed only once every 12-15 years. For a full charge, it is preferable to charge NiFe batteries with a capacitor bank in parallel rather than with a lead-acid battery charger.

Though NiFe batteries are typically up to one and a half times more expensive, the lower maintenance cost more than offsets the higher price over the battery’s lifetime.

To summarize, supercapacitor technology still has to evolve considerably before it can actually replace batteries, although it already offers a promising alternative.

image courtesy of eet.com

The Future of Cloud Computing

What is Cloud Computing?

Cloud Computing is designed to deliver IT services that are consumable on demand, scalable to user needs, and billed on a pay-per-use model: an efficient way to balance handling voluminous data against keeping costs competitive. Businesses are progressively veering towards retaining core competencies and shedding non-core activities in favor of on-demand technology, business innovation, and savings.

Delivery Options
• Infrastructure-as-a-Service (IaaS): Delivers computing hardware like Servers, Network, Storage, etc. Typical features are:
a) Users use resources but have no control of underlying cloud infrastructure
b) Users pay for what they use
c) Flexible scalable infrastructure without extensive pre-planning
• Storage-as-a-Service (STaaS): Provides storage resources as a pay-per-use utility to end users. This can be considered a type of IaaS and has similar features.
• Platform-as-a-Service (PaaS): Provides a comprehensive stack for developers to create Cloud-ready business applications. Its features are:
a) Supports web-service standards
b) Dynamically scalable as per demand
c) Supports multi-tenant environment
• Software-as-a-Service (SaaS): Supports business applications of host and delivery type as a service. Common features include:
a) User applications run on cloud infrastructure
b) Accessible by users through web browser
c) Suitable for CRM (Customer Relationship Management) applications
d) Supports multi-tenant environment

There are broadly three categories of cloud, namely Private, Hybrid and Public.

Private Cloud
• All components resident within user organization firewalls
• Automated, virtualized infrastructure (servers, network and storage) delivering services
• Use of existing infrastructure possible
• Option for management by user or vendor
• Controlled network bandwidth
• User defines and controls data access and security to meet the agreed SLA (Service Level Agreement).

Advantages:
a) Direct, easy and fast end-user access of data
b) Chargeback to concerned user groups while maintaining control over data access and security

Public Cloud
• Easy, quick, affordable data sharing
• Most components reside outside the firewalls of user organization in a multi-tenant infrastructure
• Access to applications and storage by users, either free of charge or on a pay-per-use basis
• Enables small and medium users who may not find it viable or useful to own private clouds
• Weaker SLAs
• Doesn’t offer a high level of data security or protection against corruption

Hybrid Cloud
• Leverages advantages of both Private and Public Clouds
• Users benefit from standardized or proprietary technologies and lower costs
• User can define which services and data are kept outside the organization’s own firewalls
• Smaller user outlay, pay-per-usage model
• Assured returns for cloud provider from a multi-tenant environment, bringing economies of scale
• Better security from high-quality SLAs and a stringent security policy

Future Projections and Driving User Segments

1. Media & entertainment – Enabling direct access to streaming music, video, interactive games, etc., on their devices without building huge infrastructure.
2. Social/collaboration – cloud computing enables more and more utilities on Facebook, LinkedIn, etc. With a user base of nearly one-fifth of the world’s population, this is a major driving application
3. Mobile/location – clouds offering location and mobility through smart phones enable everything from email to business deals and more.
4. Payments – the payments cloud, a rather complex environment involving sellers, buyers, regulatory authorities, etc., is a relatively slow-growth area

Overall, Cloud Computing is a potent tool for fulfilling users’ business ambitions and, with little competition to date, is poised for a bright future.

Is your anti-virus software really effective?

A popular notion floats around that anti-virus software simply does not work. Some sections of the press propagate the view that the products sold by anti-virus companies are largely ineffective in combating computer viruses. These views draw support from studies such as one conducted by a digital security agency in the USA, which infers that the rapid growth of viruses on the internet outpaces the bulk of commercially available anti-virus software: the products fail to keep track of new threats or to protect computers adequately, so their effectiveness is not commensurate with their cost.

Some leading anti-virus providers have openly rejected these findings on the grounds that the sample sizes were far too small to be statistically valid, and have declared the methodology inappropriate and unsound. They further consider the validation method – simply examining digital signatures – poor and unscientific, since the study samples were never run on the live PCs that such anti-virus software is actually supposed to protect.

Signature scanning is just one among several recognized methods of detecting malware. Real anti-virus protection involves a lot more than the aforementioned study presumed: to be really useful, a complete suite of such methods must work in tandem, and that combination is the real safeguard against viruses.

Consider the case of vehicle security, which could be a combination of an ignition lock, a door lock, a gear lock, a steering lock, an immobilizer and, more recently, a GPS tracker, to name a few. Each provides part of the protection using commercially available tools. The owner must decide which of these he wants to obtain and what he is willing to pay for them. A lopsided decision may defeat the very purpose of protection: install only a GPS tracker and an immobilizer, and a burglar may still break a window and happily walk away with the expensive stereo, laptops and other valuables in the car, which neither device is equipped to sense.

It is rather unjust to make the sweeping statement that anti-virus tools afford no protection without first deciding the level of security desired and implementing solutions commensurate with it. One needs to understand, with expert advice where necessary, the implications of using methods like firewalls, anti-phishing and anti-spam, including what each can and cannot protect.

Another analogy to elucidate this concept is the performance of an orchestra, which does not depend solely on the violinist or the pianist, or even the entire range of musicians. Other important factors affect the performance, such as the conductor, the acoustics, the seats, the audience, and so on.

Irrespective of what popular opinion makes it out to be, if one is clear about what one desires to protect and uses the proper tools, one is very unlikely to conclude that anti-virus software serves no useful purpose.

Energy Harvesting – How & Why

What Is Energy Harvesting – Why Is It Needed?

Energy Harvesting is the process of extracting small quantities of energy from one or more natural, inexhaustible sources, then accumulating and storing it for subsequent use at an affordable cost. The specially developed electronic devices that enable this task are termed Energy Harvesting Devices.

The world is facing an acute energy crisis and global warming, stemming from the rapid depletion of traditional energy sources such as oil, coal and other fossil fuels, which are on the verge of exhaustion. Not only is the global economy suffering, but the damage to the environment also threatens our very existence. Natural calamities like earthquakes, tsunamis, droughts, floods and storms have become the order of the day. Economic growth is generating a spiraling demand for energy, goading us to tap alternative sources of energy on a war footing to meet our needs for survival.

Alternative Energy Sources Available

There are many, almost inexhaustible, sources of energy in nature, and these energy forms are available almost free of cost when they occur close to the point of use. Sources include solar energy, wind energy, tidal energy, energy from ocean waves, bio energy, electromagnetic energy, chemical energy, and so on.

Recent Advances in Technology

The sources listed above provide minuscule quantities of energy. The challenge before us is to gather these minuscule amounts and generate meaningful quantities of energy at an affordable cost. Until very recently, this remained an unfulfilled challenge.

Today, research and innovation have resulted in more efficient devices to capture minute amounts of energy from these sources and convert them into electrical energy. In parallel, better technology has led to lower power consumption in end devices, and hence higher power efficiency. These have been the major factors propelling better, more efficient energy harvesting techniques, making them a viable solution, one considered more reliable and relatively maintenance-free compared to traditional wall sockets, expensive batteries, etc.

Basic Building Blocks of an Energy Harvesting System

An Energy Harvesting System essentially consists of:

a) One or more sources of renewable energy (solar, wind, ocean or other type of energy)
b) An appropriate transducer to capture the energy and to convert it into electrical energy (such as solar cells for use in conjunction with solar power, a windmill for wind power, a turbine for hydro power, etc.)
c) An energy harvesting module to accumulate, store and control electrical power
d) A means of conveying the power to the user application (such as a transmission line)
e) The user application that consumes the power
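
To see how these blocks combine in practice, here is a back-of-the-envelope power budget in Python. All figures (a small solar cell as the transducer, a duty-cycled sensor node as the user application) are illustrative assumptions, not values from the article.

```python
# A rough daily power budget for an energy harvesting system.
harvest_mw = 50.0     # transducer output while harvesting
harvest_hours = 6.0   # useful harvesting hours per day

active_mw = 30.0      # user application load while active
sleep_mw = 0.05       # load while sleeping
duty_cycle = 0.05     # fraction of time the application is active

harvested_mwh = harvest_mw * harvest_hours
consumed_mwh = 24 * (duty_cycle * active_mw + (1 - duty_cycle) * sleep_mw)

print(f"Harvested per day: {harvested_mwh:.0f} mWh")
print(f"Consumed per day:  {consumed_mwh:.0f} mWh")
print("Sustainable" if harvested_mwh >= consumed_mwh else "Shortfall")
```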

With advancement in technology, various interface modules are commercially available at affordable prices. Combined with enhanced awareness of the efficacy of Energy Harvesting, more and more applications and utilities are progressively using alternative sources of energy, which is a definite sign of progress towards dealing effectively with the global energy crisis.

Optional power conditioning systems, such as voltage boosters, can enhance these applications, but one must remember that such devices also consume power, which again reduces efficiency and adds cost.

Demystifying the A/D and D/A Converters

Analog and Digital Signals

Analog signals represent a physical parameter as a continuous signal. In contrast, digital signals are discrete-time signals, typically produced by sampling and quantizing an analog source. Most natural signals, like the human voice and other sounds, are analog in nature, and communication systems were traditionally analog.

As demand for systems capable of carrying more information over longer distances kept soaring, the drawbacks of analog communication systems became increasingly evident. Efforts to improve performance and throughput saw the evolution of digital systems, which far surpass analog systems and offer features that were earlier considered impossible. Some major advantages of digital systems over analog are:

• Optical fibers can carry digital signals and have enormous information-bearing capacity
• Multiple input signals can be combined over the same channel by multiplexing
• Digital signals can be encrypted and hence are more secure
• Better noise immunity: digital signals can be regenerated en route, so noise does not accumulate
• Much higher flexibility and ease of configuration

On the other hand, disadvantages include:

• Higher bandwidth required to transmit the same information
• Accurate synchronization required between transmitter and receiver for error free communication

Primary signals like the human voice, natural sounds and pictures are all inherently analog, while most signal processing and transmission systems are progressively becoming digital. There is therefore an obvious need to convert analog signals to digital for processing and transmission, and to convert them back from digital to analog at the far end, since digital signals are not intelligible to human receivers or to analog gadgets like a pen recorder. This need led to the evolution of Analog to Digital (A/D) Converters for encoding at the transmitting end and Digital to Analog (D/A) Converters for decoding at the receiving end.

Principle of Working of A/D and D/A Converters

An A/D converter samples the analog input signal at regular intervals and generates a corresponding binary bit stream, a combination of 0’s and 1’s. This data stream is then processed by the digital system until it is ready to be regenerated at the receiver’s location. The sampling rate has to be at least twice the highest frequency in the input signal (the Nyquist criterion) for the received signal to be a near-perfect replica of the input.
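
A minimal sketch of this sampling-and-quantization step in Python; the 1 kHz input, 8 kHz sampling rate, 8-bit resolution and 5 V reference are illustrative assumptions.

```python
import numpy as np

# Sample a 1 kHz sine wave and quantize it to 8 bits.
f_signal = 1_000      # input frequency, Hz
f_sample = 8_000      # sampling rate, Hz (>= 2 * f_signal, per Nyquist)
n_bits = 8            # converter resolution
v_ref = 5.0           # full-scale reference voltage

t = np.arange(0, 0.002, 1 / f_sample)                  # 2 ms of samples
analog = 2.5 + 2.0 * np.sin(2 * np.pi * f_signal * t)  # a 0.5-4.5 V sine

levels = 2 ** n_bits
lsb = v_ref / levels                                   # smallest step, ~19.5 mV
codes = np.clip(np.round(analog / lsb), 0, levels - 1).astype(int)

print(codes)  # the bit stream, shown here as integer codes 0-255
```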

In contrast, a D/A Converter receives the bit stream and regenerates the signal from the sampled values at the receiving end. The simplest way to achieve this is a binary-weighted resistor network, which converts each digital code into an equivalent binary-weighted voltage (or current). However, if the recipient is a computer or other device capable of handling a digital signal directly, a D/A Converter is not necessary.
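
The binary weighting can be sketched in a few lines: bit k of the code contributes Vref/2^(k+1), mirroring a binary-weighted resistor ladder. The 8-bit width and 5 V reference are again assumptions.

```python
# D/A conversion by binary weighting.
def dac(code, n_bits=8, v_ref=5.0):
    """Convert an n_bits-wide digital code back to a voltage."""
    return sum(
        ((code >> (n_bits - 1 - k)) & 1) * v_ref / 2 ** (k + 1)
        for k in range(n_bits)
    )

print(dac(0b10000000))  # MSB alone -> 2.5 V, half of full scale
print(dac(0b11111111))  # all ones -> ~4.98 V, one LSB below v_ref
```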

Two of the most important parameters of A/D and D/A Converters are accuracy and resolution. Accuracy reflects how closely the actual output resembles the theoretical output voltage. Resolution is the smallest increment in the input signal the system can sense and respond to; for example, a 10-bit converter with a 5 V full scale resolves steps of 5/1024 ≈ 4.9 mV. Higher resolution requires more bits, making the converter more complicated, more expensive and slower.

Measuring Temperature Remotely

How to Measure Temperature Remotely

In hostile atmospheres like toxic zones, very high temperature areas or remote locations, objects are not amenable to direct temperature measurement. In such applications, remote temperature measuring techniques are resorted to, using devices such as the Infrared or Laser Thermometers described below.

Infrared Thermometers or Laser Thermometers

These devices sense the thermal radiation, also called blackbody radiation, that all bodies emit, the intensity of which depends on the physical temperature of the object being sensed. Laser Thermometers, Non-contact Thermometers and Temperature Guns are variants that use a laser to aim the thermometer at the object.

In these devices, a lens converges the thermal radiation onto a detector, which in turn generates an electrical signal that drives a display after ambient-temperature compensation. The devices produce fairly accurate results and respond fast, unlike direct temperature sensing, which can be difficult, slow or not accurate enough. Induction heating, firefighting, cloud detection and the monitoring of ovens or heaters are typical applications of remote temperature measurement. Other industrial examples include hot chambers for equipment calibration and control, monitoring of manufacturing processes, and so on.
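
The underlying physics can be sketched with the Stefan-Boltzmann law, j = ε·σ·T⁴. This is a simplified model (real instruments measure over a limited wavelength band and compensate for ambient radiation), and the flux and emissivity values below are illustrative.

```python
# Invert the Stefan-Boltzmann law, j = emissivity * sigma * T**4,
# to estimate an object's temperature from its radiated flux density.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_flux(flux_w_m2, emissivity=0.95):
    """Object temperature in kelvin from its radiated flux density."""
    return (flux_w_m2 / (emissivity * SIGMA)) ** 0.25

t_k = temperature_from_flux(398.0)  # ~398 W/m^2 from a dull surface
print(f"{t_k:.0f} K = {t_k - 273.15:.0f} degC")  # about 20 degC
```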

These devices are commercially available in a wide range of configurations, such as those designed for use in fixed locations, portable or handheld applications. The specifications, among others, mention the range of temperatures that the specific design is intended for, together with the level of accuracy (say, measurement uncertainty of ± 2°C).

For such devices, the most important specification is the distance-to-spot ratio (D:S), where D is the object’s distance from the device and S is the diameter of the area whose temperature is measured. A reading therefore gives the average temperature over a spot of diameter S when the object is a distance D away from the device.
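
A tiny sketch of the relationship; the 12:1 ratio and the distances are illustrative, not taken from any particular datasheet.

```python
# At distance D the thermometer averages over a spot of diameter
# S = D / (D:S).
def spot_diameter(distance_m, d_to_s=12.0):
    """Diameter (m) of the measured spot at a given distance (m)."""
    return distance_m / d_to_s

for d in (0.5, 1.0, 2.0):
    print(f"At {d} m the reading averages a {spot_diameter(d) * 100:.1f} cm spot")
```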

Some thermometers come with a settable emissivity to adapt to the type of surface being measured, and can thus measure the temperature of shiny as well as dull surfaces. Even thermometers without settable emissivity can be used on shiny objects by fixing a piece of dull tape to the surface, though the error will be larger.

Commercially Available Types of Thermometers:

• Spot Infrared Thermometer or Infrared Pyrometer, for measurement of temperature at a spot on the object’s body

• Infrared Scanning Systems, for scanning large areas such as piles of material along a conveyor belt or moving sheets of cloth or paper. This functionality is often realized by aiming a spot thermometer at a rotating mirror. However, such a system cannot be termed a thermometer in the true sense.

• Infrared Thermal Imaging Cameras, or Infrared Cameras, generate a thermogram: a two-dimensional image formed by plotting the temperature at many points over a larger surface. The temperatures sensed at various points are converted to pixels, and an image is created. As opposed to the types described above, these depend primarily on processors and software to function. They find use in perimeter monitoring by military or security personnel, and in monitoring for safety and efficiency.