Category Archives: Guides

What are Piezoelectric Audio Devices?

The piezoelectric effect is a versatile and extremely useful phenomenon, and engineers have adopted it in various transducer applications. Some of these applications transform an applied voltage into mechanical strain, for use as a basic source of sound. In the complementary mode, applying mechanical stress to the piezo material causes it to produce a voltage, making it useful as a rugged sensor. Piezoelectric devices are low-cost, reliable, and rugged, which allows engineers to exploit their unique properties.

Piezo-based speakers offer many attributes as sound sources. Unlike electrodynamic speakers, piezo-based speakers can be relatively thin yet create very high sound pressure levels. However, mechanical and physical material issues can limit their audio quality. Now, a team at MIT is changing all this. They have developed a dense array of tiny dome speakers based on piezo technology, significantly transforming the classic analog function of loudspeakers. Their new loudspeakers are paper-thin, very flexible, and fully capable of turning any surface into an active audio source.

Although conventional thin-film loudspeakers exist, their basic requirement is that the film must be free to bend to produce sound. Firmly mounting such a thin-film loudspeaker to a surface would dampen its vibrations and attenuate its output, while limiting its frequency response tremendously.

However, the ingenious approach of the MIT team has solved the problem in a rather unique way. Their new loudspeaker does not have to vibrate the entire material surface. Rather, they have fabricated tiny domes on a thin layer of piezoelectric material, such that each dome can vibrate independently. Each dome is about 15 µm in height and moves up and down by only half a micron when vibrating. As each dome forms a single sound-generating unit, thousands of these tiny domes must vibrate together to produce audible sound. The basic loudspeaker is only 120 µm thick and weighs just 2 grams. Only standard processes are necessary to manufacture it at low cost.

Spacer layers surround the domes on the bottom and top of the film. This helps to protect the domes from the mounting surface, and at the same time allows them to freely vibrate. These spacer layers also protect the domes from impact and abrasion during daily handling, thereby enhancing the durability of the loudspeaker.

To make the film loudspeakers, the researchers used a thin sheet of PET, or polyethylene terephthalate, a standard plastic used for a variety of applications. They used a laser to cut tiny holes in the sheet and laminated its underside with an 8-µm thick film of PVDF, or polyvinylidene fluoride, a common industrial and commercial coating. They then applied vacuum and heat to bond the two sheets.

As the PVDF layer is very thin, the pressure difference the vacuum creates, together with the heat, causes it to bulge, but it cannot force its way through the PET layer. This makes tiny domes protrude through the holes. The researchers laminated the free side of the PVDF layer with another layer of PET, which acts as a spacer between the bonding surface and the domes. Even when bonded to a rigid surface, the film loudspeaker could generate a sound pressure level of 66 dB at 30 cm.
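As a rough illustration, the reported 66 dB figure can be extrapolated to other listening distances with the free-field inverse-square law. This is a simplifying point-source assumption for the sake of the sketch, not a measurement from the MIT work:

```python
import math

def spl_at_distance(spl_ref_db, d_ref_m, d_m):
    """Free-field, point-source estimate: SPL falls 6 dB per doubling of distance."""
    return spl_ref_db - 20.0 * math.log10(d_m / d_ref_m)

# Starting from the reported 66 dB SPL at 30 cm:
print(round(spl_at_distance(66.0, 0.30, 0.60), 1))  # 60.0 dB at 60 cm
print(round(spl_at_distance(66.0, 0.30, 1.00), 1))  # 55.5 dB at 1 m
```

Real rooms add reflections and the panel is not a point source, so these numbers are only indicative of the trend.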

What are Reed Switches?

A modern factory will have several electronic devices working, and most of them will have several sensors. Typically, these sensors connect to the devices using wires. The wires provide the sensor with a supply voltage, a ground connection, and the signal output. The application of power allows the sensor to function properly, whether it is sensing the presence of a ferromagnetic metal nearby or sending out a beam of light as part of a security system. On the other hand, simple mechanical switches, like reed switches, require only two wires. A magnetic field activates these switches.

The reed switch was invented and patented at Bell Telephone Laboratories. The basic reed switch looks like a small glass capsule with two protruding wires. Inside the capsule, the wires connect to two ferromagnetic blades separated by only a few microns. When a magnet approaches the switch, the two blades attract each other and make contact, allowing electricity to flow through them. This is the NO, or normally open, type of reed switch: the circuit stays open until a magnet approaches. The other type, the NC or normally closed reed switch, has one non-ferromagnetic blade. It allows current to flow until a magnet approaches; the approaching magnet pulls the blades apart, breaking the contact.
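The two switch types reduce to a simple truth table, sketched below purely for clarity (a trivial model, not vendor code):

```python
def reed_contact_closed(switch_type, magnet_present):
    """Truth table for the two reed switch types."""
    if switch_type == "NO":       # normally open: closes when a magnet is near
        return magnet_present
    if switch_type == "NC":       # normally closed: opens when a magnet is near
        return not magnet_present
    raise ValueError("switch_type must be 'NO' or 'NC'")

print(reed_contact_closed("NO", False), reed_contact_closed("NO", True))  # False True
print(reed_contact_closed("NC", False), reed_contact_closed("NC", True))  # True False
```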

Manufacturers use a variety of metals, including rhodium and tungsten, to construct the contacts. Some switches also use mercury, but such a switch must remain in the proper orientation for switching. The glass envelope typically seals the contacts in an inert atmosphere—commonly nitrogen—at one atmosphere of pressure. Sealing with an inert atmosphere keeps the contacts isolated, prevents corrosion, and quenches sparks that might result from interrupting current as the contacts move.

Although solid-state Hall effect sensors can also detect magnetic fields, the reed switch has its own advantages that are necessary for some applications. One is the superior electrical isolation that reed switches offer compared to Hall effect sensors. Moreover, reed switches introduce much lower electrical resistance. Furthermore, reed switches readily handle a range of voltages, loads, and frequencies, as they function simply as a switch connecting or disconnecting two wires. Hall effect switches, on the other hand, require supporting circuitry to function, which reed switches do not.

For a mechanical switch, reed switches have incredibly high reliability—they typically function for billions of cycles before failing. Moreover, because of their sealed construction, reed switches can function even in explosive environments, where a single spark could have disastrous results. Although reed switches are an older technology, they are far from obsolete. They are now available in surface-mount packages for mounting on boards with automated pick-and-place machinery.

Reed switches do not require a permanent magnet to actuate them—electromagnets can also turn them on. Initially, Bell Labs used these switches abundantly in their telephone systems, until they changed over to digital electronics.

What are Stacked 3D ICs?

Just like any big city, electronics is evolving so rapidly that both are running out of open space. The net result is growth in the vertical direction. For a city, vertical growth promises more apartments, office space, and people per square mile. For electronics, there is the slowing of Moore's law and the adoption of new advanced technology, which means chip developers can no longer count on process shrinks and smaller transistors for increased density and speed. Although they can increase the die size, larger dies suffer from longer signal delays and reduced yield. That limits expansion in the X-Y directions, so the only option remaining is building upwards.

Among the many established forms of vertical integration, there are 2.5D ICs, flip-chip technology, inter-die connectivity with wire bonding, and stacked packages. However, all these suffer from constraints that limit their value. Three-dimensional integrated circuits or 3-D ICs offer the highest density and speed.

Three-dimensional ICs are monolithic 3-D SoCs built on multiple active silicon layers, with vertical interconnections between the layers. So far, this is an emerging technology and has not been widely deployed. There are also stacked 3-D ICs, with multiple dies stacked, aligned, and bonded into a single package. These use TSVs, or through-silicon vias, and a hybrid bonding technique for inter-die communication. Stacked 3-D ICs are now commercially available, offering an option where larger dies or migration to leading-edge nodes would be very expensive.

Stacked 3-D ICs offer an ideal option for applications requiring more transistors in a given footprint. For instance, a mobile SoC requires high transistor densities but has limits on its footprint and height. Another example is cache memory chips. Manufacturers usually stack them on top of or below the processor to increase their bandwidth. This makes stacked 3-D ICs a natural choice for applications that are on the limits of a single die.

Vertical stacking offers a smaller footprint with faster interconnections compared to multiple packaged chips. Rather than a single large die, splitting it into several smaller dies provides a better yield. For the manufacturer, there is flexibility in stacking heterogeneous dies, as they can intermix various manufacturing processes and nodes. Moreover, it is possible to reuse existing chips without redesigning them or incorporating them into a single die. This offers a substantial reduction in risk and cost.
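The yield benefit of splitting one large die into smaller ones can be illustrated with the classic Poisson die-yield model, Y = exp(−A·D0). The defect density below is hypothetical, and the comparison assumes dies are tested (known-good-die) before stacking:

```python
import math

def poisson_die_yield(die_area_cm2, defect_density_per_cm2):
    """Classic Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

D0 = 0.5  # hypothetical defect density, defects per square centimeter
y_large = poisson_die_yield(4.0, D0)   # one monolithic 4 cm^2 die
y_small = poisson_die_yield(1.0, D0)   # one of four 1 cm^2 dies destined for a stack
print(f"4 cm^2 die yield: {y_large:.1%}, 1 cm^2 die yield: {y_small:.1%}")
```

With these illustrative numbers, each small die yields far better than the monolithic die, which is why harvesting good small dies and stacking them can beat building one large die.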

Although there are numerous benefits and opportunities from the use of stacked 3-D ICs, they also introduce new challenges. The architecture of 3-D silicon systems needs a more holistic approach that takes the third dimension into account. It is not sufficient to think of 3-D ICs only as 2-D chips stacked on top of each other. Although it is still necessary to optimize power, performance, and area in the familiar three-way tradeoff, the optimization must now be per cubic millimeter rather than per square millimeter. All tradeoff decisions must take the vertical dimension into account. This requires making tradeoffs across all design stages, including IP, architecture, chip packaging, implementation, and system analysis.

Remote Sensing with nRF24L01+ Modules

The nRF24L01+ RF modules from Nordic Semiconductor are low-cost solutions for two-way wireless communication. Users can configure the modules via their SPI, or Serial Peripheral Interface, which also allows a microcontroller to control them. The Internet has many examples of projects using these RF modules with Arduino boards.

The nRF24L01 module has a built-in PCB antenna. Moreover, the module has an extra feature that uses two-way communication to detect any loss of contact between the transmitter and the receiver. The modules offer two-way communication because each can act as both a transmitter and a receiver. In this project, however, one module acts as the main transmitter, sending the state of a PIR, or passive infrared, sensor to the other module, which receives the data for further processing.

Remote sensors need this ability to detect the loss of communications, because in the absence of communication it is easy to lose data without notice. The feature is also important when installing the sensor, to verify that both RF modules are actually talking to each other and are not out of range.

Although the nRF24L01 modules need a 3.3 VDC supply, their IO pins are 5 VDC tolerant. That makes it easy to connect the SPI bus of the nRF24L01 modules to an Arduino Pro Mini running on 5 VDC.

It is very important to place the power supply bypass capacitors as close as possible to the microcontroller and the nRF24L01 modules, as this effectively suppresses most of the switching noise from these chips. Overlooking this in such projects often leads to all kinds of unexpected problems. It is also necessary to use multiple bypass capacitors. Users can effectively parallel capacitors of different values, such as a 100 µF electrolytic capacitor and a 100 nF polypropylene capacitor. The electrolytic capacitor filters out lower-frequency noise but is ineffective against high-frequency noise; the polypropylene capacitor filters the higher-frequency noise.
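The division of labor between the two capacitors can be sketched with a simple series RLC model of a real capacitor. The ESR and ESL figures below are hypothetical, chosen only to show the trend, not taken from any datasheet:

```python
import math

def z_cap(c_farads, f_hz, esr_ohms, esl_henries):
    """Complex impedance of a real capacitor modelled as series ESR + ESL + C."""
    w = 2.0 * math.pi * f_hz
    return complex(esr_ohms, w * esl_henries - 1.0 / (w * c_farads))

# Hypothetical parasitics for the two bypass parts mentioned above
for f in (1e3, 1e5, 1e7):
    bulk = z_cap(100e-6, f, esr_ohms=0.5, esl_henries=20e-9)   # 100 uF electrolytic
    film = z_cap(100e-9, f, esr_ohms=0.02, esl_henries=5e-9)   # 100 nF polypropylene
    par = bulk * film / (bulk + film)
    print(f"{f:10.0f} Hz: |Z| = {abs(par):7.3f} ohm")
```

At low frequency the electrolytic dominates the parallel impedance; at high frequency its ESL makes it inductive and the film capacitor takes over, which is the point of using both.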

The PIR sensor connects to the microcontroller. A voltage level translator offers the sensor the optimum voltage level it needs to function. Therefore, depending on the type of PIR sensor, the voltage level translator can supply a 5 VDC, 3.3 VDC, or other lower level outputs. The polarity of the voltage level translator transistor decides whether the trigger output is high active or low active.

A red LED begins to flash when the transmitter and the receiver have lost their connection. On restoring the connection, the red LED stops flashing.

When the PIR sensor senses motion, a blue LED lights up to indicate this. The transmitter sends this trigger event over to the receiver as a trigger code byte. If there is no motion to detect, the transmitter sends only a keep-alive heartbeat code to the receiver. This is how the receiver knows whether the sensor has sent a motion trigger.

The receiver sends the same code it receives back to the transmitter as an acknowledgment. There is thus continuous communication between the receiver and the transmitter, and both can easily detect when they have lost the connection.
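The keep-alive-and-echo scheme described above can be sketched as follows. The code byte values are hypothetical, since the article does not specify them:

```python
# Hypothetical code bytes; the article does not specify the actual values
HEARTBEAT = 0x5A   # keep-alive code sent when no motion is detected
TRIGGER = 0xA5     # code sent when the PIR senses motion

def transmitter_payload(motion_detected):
    """The transmitter always sends something, so silence indicates a lost link."""
    return TRIGGER if motion_detected else HEARTBEAT

def receiver_ack(payload):
    """The receiver echoes the received code back as its acknowledgment."""
    return payload

def link_alive(sent, ack):
    """Both ends treat a matching echo as proof that the link is up."""
    return ack is not None and ack == sent

print(link_alive(HEARTBEAT, receiver_ack(HEARTBEAT)))  # True: link confirmed
print(link_alive(TRIGGER, None))                       # False: no ack, flash the red LED
```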

What is a PolyFuse?

Electronic circuits often have fuses on board the PCB. Fuses protect the circuitry from catching fire due to overload. Because of a fault like a short circuit, part of the circuit may start drawing more power than is admissible. The additional power flow may lead to overheating, and finally, a fire can break out. A fuse acts as a circuit breaker, protecting against overload by interrupting the power flow. Typically, the fuse element is a thin wire with a low melting point. Higher power through the fuse means increased current flow, which heats the wire and causes it to melt, or blow. This interrupts the power flow.

Although the fuse wire acts as protection, one of its drawbacks is that it needs physical replacement once blown. This is a problem for electronics at a remote location, because the device will remain inoperative until someone fixes the problem and replaces the damaged fuse with a new one. This drawback led to the development of the PolyFuse.

There are electromechanical devices that act as self-resetting circuit breakers. However, most such devices have ratings of 1 A and above, and their physical size is not suitable for printed circuit boards. A PolyFuse is a self-resetting circuit breaker suitable for low-voltage, low-current electronics, and its physical size is small enough to allow its use on a small printed circuit board.

PolyFuses behave like PTC, or positive temperature coefficient, resistors—initially, their resistance is low enough to let the load current flow unhindered. In case of an overload, however, the PolyFuse starts to heat up and its resistance increases, cutting down the load current through it. Unlike ordinary PTC resistors, PolyFuses have a self-healing property: once the current through a PolyFuse reduces, its resistance drops back to a lower value. This is their self-resetting property.

A PolyFuse typically contains an organic polymer substance with the impregnation of carbon particles. The carbon particles are usually in close contact, as the polymer is in a crystalline state. This allows the resistance of the device to be low initially.

As current flow increases, the carbon in the PolyFuse heats up, and the polymer expands into an amorphous state. This separates the carbon particles, increasing the resistance of the device. The resulting increase in the voltage drop across the PolyFuse leads to a decrease in the current flowing through it. The residual current under the fault condition keeps the PolyFuse warm enough to continue limiting the current. As soon as the cause of the overload is removed, the current reduces, allowing the PolyFuse to cool down, regain its low resistance, and resume correct operation.
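This thermal feedback loop can be caricatured with a toy model in which resistance rises once dissipation crosses a trip threshold. All the constants below are illustrative, not data for any real PolyFuse:

```python
def settle_current(v_supply, r_load, r_cold=0.05, gain=50.0, trip_w=0.5, steps=500):
    """Toy thermal-feedback model of a PolyFuse (all constants illustrative):
    dissipation above a trip threshold raises the resistance, which in turn
    chokes the current; iterate with damping until the loop settles."""
    r = r_cold
    i = 0.0
    for _ in range(steps):
        i = v_supply / (r_load + r)
        p = i * i * r                                   # power heating the element
        r_target = r_cold + gain * max(0.0, p - trip_w)
        r += 0.1 * (r_target - r)                       # damped thermal response
    return i

i_normal = settle_current(5.0, 10.0)   # healthy load: fuse stays low-resistance
i_fault = settle_current(5.0, 0.1)     # near-short: fuse heats up and limits current
print(f"normal: {i_normal:.3f} A, fault: {i_fault:.3f} A")
```

In the healthy case the fuse never trips and barely affects the load; in the fault case the loop settles at a small residual current, just enough to keep the element warm, mirroring the behavior described above.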

PolyFuses cannot act fast, because they need to heat up before limiting the current flow. That means they have a short but appreciable time delay before they operate. Hence, they are not very effective against fast surges and spikes. However, they are very useful because of their self-resetting property, making them effective against short-term short-circuits and overloads.

What is Pulsed Electrochemical Machining?

With pulsed electrochemical machining, it is possible to produce parts with high repeatability. This advanced process is a completely non-thermal, non-contact material removal process, capable of forming small features and high-quality surfaces.

Although its fundamentals are the same as electrochemical machining, or ECM, the variant PECM, or pulsed electrochemical machining, is newer and more precise, using a pulsed power supply. As in other non-contact machining processes, like EDM, there is no contact between the tool and the workpiece. Material very close to the tool dissolves through an electrochemical process, and the flowing electrolyte washes away the by-products. The remaining part takes on a shape that is the inverse of the tool.

The PECM process routinely uses some key terms. The first is the cathode, representing the tool in the process; other names for it are tool and electrode. Typically, it is manufactured specifically for each application, and its design is the inverse of the shape the process aims to achieve.

The second is the anode—it refers to the workpiece or the material that the process works on. Therefore, the anode can assume many forms. This can include a cast piece of near net shape, wrought stock, an additively manufactured or 3D printed part, a part conventionally machined, and so on.

The third key item is the electrolyte—referring to the working fluid in the PECM process that flows between the cathode and the anode. Commonly a salt-based solution, the electrolyte serves two purposes. It allows electrical current to flow between the cathode and anode. It also flushes away the by-products of the electrochemical process such as hydroxides of the metals dissolved by the process.

The final key item is the gap, also called the IEG or inter-electrode gap—the space between the anode and the cathode. It is necessary to maintain this gap during machining, as it is a major contributor to the performance of the entire process. The PECM process allows gap sizes as small as 0.0004" to 0.004" (10 µm to 100 µm). This is the primary reason for PECM's ability to resolve minuscule features in the final workpiece.
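A quick unit check confirms the quoted gap range (1 inch = 25.4 mm):

```python
def inches_to_micrometers(inches):
    """1 inch = 25.4 mm = 25,400 micrometers."""
    return inches * 25400.0

print(round(inches_to_micrometers(0.0004), 2))  # 10.16, i.e. about 10 um
print(round(inches_to_micrometers(0.004), 1))   # 101.6, i.e. about 100 um
```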

Compared to other manufacturing processes, pulsed electrochemical machining has some important advantages:

Metal removal in pulsed electrochemical machining is unaffected by the hardness of the material being removed; hardness affects neither the feasibility nor the speed of the process.

Being a non-thermal and non-contact process, PECM does not change the properties of the material on which it is working.

As it is a metal removal process using electrochemical means, it does not leave any burrs behind. In fact, many deburring processes use this method as a zero-risk method of machining to avoid burrs.

It is possible to achieve highly polished surfaces with the PECM process. For instance, surfaces of 0.2-8 µin Ra (0.005-0.2 µm Ra) are very common in a variety of materials.

Because the process is non-contact, there is no wear and tear on the cathode, and it has practically infinite tool life.

PECM can form an entire surface of a part at a time, and the tool room can easily parallelize it to manufacture multiple parts in a single operation.

Advantages of Additive Manufacturing

Additive manufacturing processes, like those using 3-D printers, allow businesses to develop functional prototypes quickly and cost-effectively. They may require these prototypes for testing or for running a limited production line, allowing quick modifications when necessary. This is possible because these printers allow effortless electronic transfer of computer models and designs. There are many benefits of additive manufacturing.

Designs most often require modifications and redesign. With additive manufacturing, designers have the freedom to design and innovate, and they can test their designs quickly. This is one of the most important aspects of making innovative designs. Designers can follow their creative freedom in the production process without worrying about time or cost penalties. This offers substantial benefits over traditional methods of manufacturing and machining. For instance, over 60% of designs undergoing tooling and machining also undergo modifications while in production, which quickly builds up costs and delays. With additive manufacturing, the move away from static design gives engineers the ability to try multiple versions or iterations simultaneously while accruing minimal additional costs.

The freedom to design and innovate on the fly without incurring penalties offers designers significant rewards like better quality products, compressed production schedules, more product designs, and more products, all leading to greater revenue generation. Regular traditional methods of manufacturing and production are subtractive processes that remove unwanted material to achieve the final design. On the other hand, additive manufacturing can build the same part by adding only the required material.

One of the greatest benefits of additive manufacturing is streamlining the traditional methods of manufacturing and production. Compressing the traditional methods also means a significant reduction in environmental footprints. Taking into account the mining process for steel and its retooling process during traditional manufacturing, it is obvious that additive manufacturing is a sustainable alternative.

Traditional manufacturing requires tremendous amounts of energy, while additive manufacturing requires only a relatively small amount. Additionally, waste products from traditional manufacturing require subsequent disposal. Additive manufacturing produces very little waste, as the process uses only the needed materials. An additional advantage of additive manufacturing is it can produce lightweight components for vehicles and aircraft, which further mitigates harmful fuel emissions.

For instance, with additive manufacturing, it is possible to build solid parts with semi-hollow honeycomb interiors. Such structures offer an excellent strength-to-weight ratio, which is equivalent to or better than the original solid part. These components can be as much as 60% lighter than the original parts that traditional subtractive manufacturing methods can produce. This can have a tremendous impact on fuel consumption and the costs of the final design.

Using additive manufacturing also reduces risk and increases predictability, improving a company's bottom line. As the manufacturer can try new designs and test prototypes quickly, digital additive manufacturing turns earlier unpredictable methods of production into predictable ones.

Most manufacturers use additive manufacturing as a bridge between technologies. They use additive technology to quickly reach a stable design that traditional manufacturing can then take over for meeting higher volumes of production.

What are Power Factor Controllers?

Connecting an increasing number of electrically powered devices to the grid is leading to substantial distortion of the electrical grid, which in turn causes problems in the distribution network. Therefore, most engineers resort to advanced power factor correction circuitry in power supply designs that strictly meets power factor standards, mitigating these issues.

The boost PFC topology is the most popular approach to power factor correction. However, with the advent of wide band-gap semiconductors like silicon carbide and gallium nitride, it is becoming easier to implement bridge-less topologies as well, including the column PFC. With advanced column controllers, it is now possible to simplify the control of complex interleaved column PFC designs.

At present, the interleaved boost PFC is the most common topology engineers use for power factor correction. It uses a rectifying diode bridge to convert the AC voltage to DC. A boost converter then steps up the DC voltage to a higher value while shaping the current into a sinusoidal waveform. This reduces the ripple on the output voltage while presenting a sinusoidal current waveform to the grid.

Although it is possible to achieve power factor correction with only a single boost converter, engineers often use two or more converters in parallel. Each of these converters is given a phase shift to improve its efficiency and reduce the ripple on the input current. This topology is known as interleaving.
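A short sketch of the interleaving idea: N parallel converters switch 360/N degrees apart, and with two branches 180 degrees apart the fundamental ripple components cancel. This is a generic illustration of the principle, not a controller design:

```python
import math

def interleave_phases_deg(n):
    """Interleaved converters switch 360/N degrees apart."""
    return [i * 360.0 / n for i in range(n)]

print(interleave_phases_deg(2))  # [0.0, 180.0]
print(interleave_phases_deg(3))  # [0.0, 120.0, 240.0]

# With two branches 180 degrees apart, the fundamental ripple components cancel:
t = [k / 200.0 for k in range(200)]
summed = [math.sin(2 * math.pi * x) + math.sin(2 * math.pi * x + math.pi) for x in t]
print(max(abs(s) for s in summed) < 1e-9)  # True
```

Higher harmonics do not cancel completely, which is why the input still carries some residual ripple at a multiple of the switching frequency.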

New families of semiconductors, especially silicon carbide, allow creating power switches with substantially improved thermal and electrical characteristics. Using these new semiconductors, it is becoming possible to integrate the rectification and boost stages, with two switching branches operating at different frequencies. This is the bridge-less column PFC topology.

One of the two branches is the slow branch, and it commutates at the grid frequency, typically 50 or 60 Hz. This branch operates with traditional silicon switches, while it is primarily responsible for input voltage rectification. The second branch is the fast branch and is responsible for stepping up the voltage. Switching at very high frequencies like 100 kHz, this branch places great thermal and electrical strain on the semiconductor switches. For safe and efficient performance, engineers prefer to use wide band-gap semiconductor switches, such as GaN and SiC MOSFETs, in the second branch.

The bridge-less column PFC topology improves performance compared with the interleaved boost converter, but its control circuitry is more complex due to the additional active switches. Therefore, engineers often use an integrated column controller to mitigate the issue.

It is possible to add more high-frequency branches to improve the efficiency of the bridge-less column PFC. Such additions help reduce the ripple on the converter's output voltage while distributing the power requirements equally among the branches. This arrangement minimizes overall costs while reducing the layout area.

Although it is possible to reach general conclusions about each topology by comparing their performance, the outcome largely depends on device selection and operating parameters. Therefore, designers must consider these carefully when implementing a design.

How Piezoelectric Accelerometers Work

Vibration and shock testing typically require piezoelectric accelerometers. This is because these devices are ideal for measuring high-frequency acceleration signals generated by pyrotechnic shocks, equipment and machinery vibrations, impulse or impact forces, pneumatic or hydraulic perturbations, and so on.

Piezoelectric accelerometers rely on the piezoelectric effect. Generally speaking, most piezoelectric materials produce electricity when subjected to mechanical stress. The converse effect also occurs: applying an electric field to a piezoelectric material deforms it mechanically to a small extent. The details of this phenomenon are quite interesting.

When no mechanical stress is present, the locations of the negative and positive charges are such that they balance each other, making the molecules electrically neutral.

The application of a mechanical force deforms the structure and displaces the balance of positive and negative charges. This causes the molecules to form many small dipoles in the material, and fixed charges appear on the surface of the piezoelectric material. The amount of electrical charge present is proportional to the applied force.
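For the longitudinal mode, this proportionality is commonly written as Q = d33 · F. A minimal sketch, using an illustrative d33 value typical of PZT ceramics (the specific numbers are assumptions, not from the article):

```python
def piezo_charge_coulombs(d33_pc_per_newton, force_newtons):
    """Longitudinal-mode charge output: Q = d33 * F."""
    return d33_pc_per_newton * 1e-12 * force_newtons

# Illustrative: a PZT ceramic with d33 around 400 pC/N under a 10 N load
q = piezo_charge_coulombs(400, 10.0)
print(f"{q * 1e9:.1f} nC")  # 4.0 nC
```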

Piezoelectric substances belong to a class of dielectric materials. Being insulators, they are very poor conductors of electricity. However, depositing two metal electrodes on opposite surfaces of a piezoelectric material makes it possible to extract electricity from the electric field the piezoelectric effect produces.

However, the electric current that the piezoelectric effect produces from a static force can last only a short period. Such a current flow continues only until free electrons cancel the electric field from the piezoelectric effect.

Removing the external force causes the material to return to its original shape. However, this process now causes a piezoelectric effect in the reverse direction, causing a current flow in the opposite direction.

Most piezoelectric accelerometers consist of a piezoelectric element that mechanically connects a known quantity of mass (the proof mass) to the accelerometer body. As the body accelerates due to external forces, the proof mass tends to lag behind due to its inertia. This deforms the piezoelectric element, thereby producing a charge output proportional to the input acceleration.

Piezoelectric accelerometers vary in their mechanical designs. Fundamentally, there are three designs, working in compression mode, shear mode, and flexural mode. The sensor's performance depends on its mechanical configuration, which impacts the sensitivity, bandwidth, and temperature response of the sensor, as well as its susceptibility to base strain.

Just as in a MEMS accelerometer, Newton's second law of motion is the basis of the piezoelectric accelerometer. This allows modeling the piezoelectric element and the proof mass as a mass-spring-damper arrangement. A second-order differential equation of motion best describes the mass displacement. The mechanical system has a resonance that sets the upper frequency limit of the accelerometer.
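The resonance of the mass-spring-damper model follows from fn = sqrt(k/m)/(2π). A minimal sketch with illustrative stiffness and proof-mass values; the fn/5 usable-bandwidth figure is a common rule of thumb, not from the article:

```python
import math

def natural_frequency_hz(stiffness_n_per_m, proof_mass_kg):
    """Undamped natural frequency of the mass-spring model: fn = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_n_per_m / proof_mass_kg) / (2.0 * math.pi)

# Illustrative values: a 5-gram proof mass on a stiff piezo element
fn = natural_frequency_hz(2.0e7, 0.005)
usable_bw = fn / 5.0   # rule of thumb: keep the flat band well below resonance
print(f"resonance ~{fn / 1000:.1f} kHz, usable to ~{usable_bw / 1000:.1f} kHz")
```

Note the tradeoff this exposes: a heavier proof mass gives more charge output (higher sensitivity) but lowers fn and thus the usable bandwidth.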

The amplifier following the sensor defines the lower frequency limit of the piezoelectric accelerometer. Such accelerometers are incapable of a true DC response, and hence cannot perform static measurements. With proper design, a piezoelectric accelerometer can respond to frequencies below 1 Hz, but it cannot produce an output at 0 Hz, or true DC.

What are Tactile Switches?

Tactile switches are electromechanical switches that make or break an electrical circuit through manual actuation. In the 1980s, tactile switches were screen-printed or membrane switches used extensively in keypads and keyboards. Later versions offered switches with metal domes for improved feedback, enhanced longevity, and robust actuation. Today, a wide range of commercial and consumer applications use tactile switches extensively.

The metal dome in a tactile switch provides a perceptible click, also known as a haptic bump, when pressure is applied. This indicates that the switch has operated successfully. As tactile switches are momentary-action devices, removing the applied pressure releases the switch immediately, cutting off the current flow.

Although most tactile switches are available as normally open devices, there are normally closed versions also in the market. In the latter model, the application of pressure causes the current flow to turn off and the release of pressure allows the current flow to resume.

Mixing up the names and functions of tactile and pushbutton switches is quite common, as their operation is somewhat similar. However, pushbutton switches have the traditional switch contact mechanism inside, whereas tactile switches use the membrane switch type contacts.

Their construction makes most pushbutton switches operate in momentary action. On the other hand, all tactile switches are momentary, much smaller than pushbutton switches, and generally offer lower voltage and current ratings. The haptic or audible feedback of tactile switches is another key differentiator. While pushbutton switches come in PCB or panel mounting styles, the design of tactile switches allows only direct PCB mounting.

Comparing the construction of tactile switches with that of other mechanical switches reveals a key difference that makes tactile switches simple and robust: the limited number of internal components needed to achieve their function. In fact, a typical tactile switch has only four parts.

A molded resin base holds the terminals and contacts for connecting the switch to the printed circuit board.

A metallic contact dome with an arched shape fits into the base. It inverts its shape when pressure is applied and returns to its arched shape when the pressure is removed. This flexing produces the audible click or haptic feedback. At the same time, the dome connects two fixed contacts in the base, completing the circuit. On removal of the force, the contact dome springs back to its original shape, disconnecting the contacts. As both the contacts and the dome are metal, their materials determine the haptic feel and the sound the switch makes.

A plunger directly above the metallic contact dome is the component the user presses to flex the dome and activate the switch. The plunger is either flat or a raised part.

The top cover, above the plunger, protects the switch's internal mechanism from dust and water ingress. Depending on the intended function, the top cover can be metallic or another material. It also protects the switch from static discharge.