Category Archives: Batteries

Are Lithium Iron Phosphate Batteries Better?

According to recent developments in battery technology, LFP, or Lithium Iron Phosphate, chemistry is set to pose a serious challenge to the omnipresent Lithium-ion type.

As far as e-mobility is concerned, Lithium-ion batteries have some serious disadvantages. These include higher cost and lower safety as compared to other chemistries. On the other hand, recent advancements in battery pack technology have led to an enhancement in the energy density of LFP batteries so that they are now viable for all kinds of applications related to e-mobility—not only in vehicles but also in shipping, such as in battery tankers.

In their early years of development, LFP cells had a lower energy density than Lithium-ion cells. Improved packaging technology bumped the energy density up to about 160 Wh/kg, but this was still not enough for e-mobility applications.

With further improvements in technology, LFP batteries now operate better at low temperatures, charge faster, and have a longer cycle life. These features are making them more appealing for many applications, including their use in electric cars and in battery tankers.

However, LFP batteries still continue to face several challenges, especially in applications involving high power. This is mainly due to the unique crystal structure of LFP, which reduces its electronic conductivity. Scientists have been experimenting with different approaches, such as reducing the directional crystal growth or particle size, using different conductive layer coatings, and element doping. These have not only helped to improve the electronic conductivity but have increased the thermal stability of the batteries as well.

Comparing LFP batteries with other Lithium-ion types shows that each has its own advantages in key characteristics. For instance, Lithium-ion batteries offer higher cell voltages, higher power density, and better specific capacity. These characteristics give Lithium-ion batteries a higher volumetric energy density, suitable for achieving longer driving ranges.

In contrast, LFP batteries offer a longer cycle life, better safety, and better rate capability. As the risk of thermal runaway in case of mechanical damage to a cell is also much lower, these batteries are now popular in commercial vehicles with frequent access to charging, such as scooters, forklifts, and buses.

It is also possible to fully charge LFP batteries in each cycle, in contrast to having to stop at around 80% to avoid overcharging some types of Lithium-ion batteries. Although this simplifies the charging algorithm, it adds other complexities for Battery Management Systems managing LFP cells, chiefly because the flat voltage curve of LFP makes state-of-charge estimation harder.
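
As a rough illustration of that policy difference, the sketch below uses purely hypothetical state-of-charge targets, not values from any particular vehicle’s battery management system:

```python
# Illustrative sketch only: the state-of-charge targets are assumptions used
# to show the policy difference, not figures from a specific BMS.

CHARGE_TARGET_SOC = {
    "LFP": 1.00,       # routinely charged to full in each cycle
    "NMC/NCA": 0.80,   # often stopped early to limit cathode stress
}

def should_stop_charging(chemistry: str, soc: float) -> bool:
    """Return True once the pack reaches its chemistry-specific target."""
    return soc >= CHARGE_TARGET_SOC[chemistry]

if __name__ == "__main__":
    for chem in CHARGE_TARGET_SOC:
        for soc in (0.79, 0.85, 1.00):
            verdict = "stop" if should_stop_charging(chem, soc) else "keep charging"
            print(f"{chem:8s} at {soc:.0%}: {verdict}")
```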

Another key advantage of LFP batteries is that they do not require cobalt or nickel in their cathodes. The industry fears that in the coming years, sourcing these metals will become more difficult. Even if mining of both elements doubles by 2030 as projected, supply may not keep up with the increase in demand.

All of the above is making LFP batteries look increasingly interesting for e-mobility applications, with more car manufacturers planning to adopt them in their future cars.

Stretch Your Battery

Although no stretchable phone or laptop exists at present, researchers have developed a prototype of a wearable, stretchable battery. This stretchable Lithium-ion battery is fabric-based. With it, the team of researchers from the University of Houston has opened up a new direction for the future of wearable technology.

Professor Haleh Ardebili first came up with the idea for this stretchable Li-ion battery, initially envisioning a future with smart, interactive, and powered clothes. From there, it was a natural step to create stretchable batteries that could integrate with stretchable devices and clothes. For instance, clothes with embedded interactive sensors could monitor the wearer’s health.

Batteries are typically rigid and are a major bottleneck in the development of future wearable technology. Not only does a battery’s stiffness limit functionality, but its use of liquid electrolytes raises safety concerns, especially as the organic liquid electrolyte is flammable and prone to explosion. The researchers are using conductive fabric made of silver as the platform for the flexible battery and as its current collector.

The team prefers the woven silver fabric for the battery, as it can easily deform mechanically by stretching while still providing the electrical conduction pathways the battery electrodes need to perform. The battery electrodes must allow the movement of both ions and electrons. With their experiments and prototypes, the researchers have begun investigating a largely unexplored field in science and engineering. Going beyond the prototype, they are working on optimizing the battery’s design, fabrication, and materials.

According to the researchers, the fabric-based stretchable battery will suit a wide range of applications, from smart space suits to devices that interact with humans at a variety of levels, such as consumer electronics embedded in garments for monitoring health. In fact, the applications for such a device are nearly endless, providing a path toward light, safe, flexible, and stretchable batteries. However, the team feels they have much to do before they can commercialize the idea.

While they need to work on the cost and scale for commercial viability, the team feels there is a clear need in the market for such batteries, especially in the future, for stretchable electronic devices. Once such products appear in the market, there will be a huge demand for the batteries. Right now, the team wants to make sure the batteries are as safe as possible.

The team faced many challenges in designing the stretchable battery. It took more than five years for them to reach the present state. Their main impediment was integrating the fabric with a functional battery.

As to how the battery works, the team explained that the electrochemically active material, which stores and releases charge through the bonding and debonding of lithium, is coated and deposited on the stretchable silver fabric. Lithium ions shuttle back and forth within the battery between the positive and negative electrodes, and the battery can stretch because the polymer electrolyte and the fabric can do so as well.

MCUs Working Sans Batteries

Nature is exceptionally efficient. It maximizes available resources by using as much of them as possible. Humans are now beginning to follow in nature’s footsteps, which allows us to improve performance while reducing waste and minimizing cost. One of the methods in use today is energy harvesting: we can power electrical devices using ambient energy. For devices operating on batteries, energy harvesting can extend the useful life of the battery or even replace its energy contribution entirely.

Ultra-low-power microcontroller units, or ULP MCUs, are the logical choice for demonstrating energy harvesting. Many devices like wireless sensors, wearable technology, and edge applications use ULP MCUs because extending battery life is essential for these devices. Reviewing how energy harvesting works in practice is important to understanding its value to ULP MCUs.

The principles of energy harvesting are simple. It must overcome the finite nature of the primary source of energy, here the battery. However, as no process can be one hundred percent efficient, there will be losses when converting the source power to usable energy, even when there is boundless ambient energy available for capture. This is evident in wind turbines, a renewable large-scale energy source. The wind provides the turbines with kinetic energy, making the blades rotate. This movement turns a generator, producing electrical power. Other large-scale ambient energy sources also exist—geothermal heat, oceanic waves, and solar.

Wearables and other similar small-scale devices harvest thermal, kinetic, or ambient electromagnetic radiation energy. However, each of these uses a different mechanism for converting the source power to usable energy. It is necessary to consider the utility and practicality of each conversion mechanism, as the application defines the size and mass of the energy conversion technology.

For instance, thermal energy is well suited to wireless sensor applications, where sensor placement and design can take advantage of radiant heat; vehicles, for example, can use sensors that exploit the heat emanating from the road surface. Components such as wheels and the engine are high-vibration locations, so it is possible to harvest motion energy near them. For wearables using ULP MCUs, harvesting the kinetic energy of the user’s motion provides the most practical means of conversion to usable energy.

In wearable technology, the primary task of the ULP MCU is to process the edge data gathered by the sensor, and it is critical to process this data with minimum power consumption. Energy harvesting supplements the power from the battery, which holds a finite amount of energy and requires periodic replenishment in the form of recharging or replacement as its charge depletes. There are three ways of capturing energy for ULP MCUs—using piezoelectric, electromagnetic, or triboelectric generators.

Kinetic forces compressing piezoelectric materials can make them generate an electric field, adding as much as 10 mW to the battery. Harvesting energy from electromagnetic radiation such as infrared, radio, UV, and microwaves can contribute about 0.3 mW of harvested power. Triboelectric generators use the friction of dissimilar material surfaces rubbing together through mechanical movements like oscillation, vibration, and rotary motion to generate 1 to 1.5 mW of electricity.
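
As a rough budget check using the harvested-power figures quoted above, the short calculation below estimates whether each source could sustain an average ULP MCU load; the MCU’s active power, sleep power, and duty cycle are assumed values, not measurements from any specific part:

```python
# Harvested-power figures from the text; the MCU numbers are assumptions.
HARVESTED_MW = {"piezoelectric": 10.0, "electromagnetic": 0.3, "triboelectric": 1.25}

MCU_ACTIVE_MW = 3.0    # assumed active power of a ULP MCU
MCU_SLEEP_MW = 0.005   # assumed deep-sleep power
DUTY_CYCLE = 0.01      # assumed: the MCU is active 1% of the time

avg_load_mw = DUTY_CYCLE * MCU_ACTIVE_MW + (1 - DUTY_CYCLE) * MCU_SLEEP_MW

for source, harvested_mw in HARVESTED_MW.items():
    verdict = "covers" if harvested_mw >= avg_load_mw else "cannot cover"
    print(f"{source:15s} {harvested_mw:5.2f} mW {verdict} "
          f"an average load of {avg_load_mw:.3f} mW")
```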

Preserving IoT Battery Life

At MIT, researchers have built a wake-up receiver for IoT devices. The receiver uses terahertz waves to communicate, making the chip more than ten times smaller than contemporary devices. The receiver also includes authentication that helps protect it from certain types of attacks. The low power consumption of the chip means it can help preserve battery life in robots or tiny sensors.

The current trend is towards developing ever-smaller devices for IoT, or the Internet of Things. For instance, sensors can be smaller than a fingertip, capable of making any object trackable. Most of these tiny sensors, however, have even tinier batteries that are nearly impossible to replace. Therefore, engineers need to incorporate a wake-up receiver in these sensors, which keeps the device in a low-power sleep mode when not operating, thereby preserving battery life. The new receiver from MIT can also protect the device from certain attacks that could otherwise drain its battery rather quickly.

The present generation of wake-up receivers is typically at the centimeter scale. This is because their antennas need to be proportional to the length of the radio waves they use for communicating. The MIT team, on the other hand, utilized terahertz waves for their receiver. As these waves are about one-tenth the length of regular radio waves, the team could design a chip barely larger than a square millimeter.

It is possible to incorporate the wake-up receiver into microbots for monitoring environmental changes in locations that are either hazardous or too small for other robots to reach. As the device operates on terahertz frequencies, it is also suitable for emerging applications such as radio networks that operate as field-deployable swarms for collecting localized data.

Using terahertz frequencies, the researchers could make antennas only a few hundred micrometers on each side. Antennas this small can be integrated directly on the chip, creating a fully integrated solution. Ultimately, the researchers could build a wake-up receiver tiny enough to attach to tiny radios or sensors.
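
A back-of-the-envelope calculation shows why moving to terahertz frequencies shrinks the antenna so dramatically. The 300 GHz operating point below is an assumption chosen only to illustrate the scaling, not the chip’s published frequency:

```python
# Antenna size scales with wavelength: a half-wave element is (c / f) / 2.
C = 3.0e8  # speed of light, m/s

def half_wave_element_um(freq_hz: float) -> float:
    """Length of a half-wavelength element, in micrometers."""
    return (C / freq_hz) / 2 * 1e6

for f_hz in (2.4e9, 300e9):  # a common radio band vs. an assumed sub-THz band
    print(f"{f_hz / 1e9:6.1f} GHz -> half-wave element ~ {half_wave_element_um(f_hz):,.0f} µm")
```

At 2.4 GHz the element is about 62,500 µm, which is centimeter scale, while at 300 GHz it shrinks to roughly 500 µm, in line with the few-hundred-micrometer antennas described above.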

On the electromagnetic spectrum, terahertz waves sit between infrared light and microwaves. Their very high frequencies allow them to carry data much faster than radio waves. Terahertz waves also travel in narrow, pencil-like beams, following a more direct path than other signals, which makes them more secure.

However, terahertz receivers often multiply their signal by another signal to shift its frequency. This process, termed frequency mixing, consumes a large amount of power. The researchers at MIT instead used a pair of tiny transistors as antennas for detecting terahertz waves. This method of detection consumes very little power, as it does not involve frequency mixing.

Even with both antennas placed on the chip, the MIT wake-up receiver measures only 1.54 square millimeters and uses only 3 microwatts to operate. The two antennas maximize its performance and make it more sensitive to incoming signals. Once it detects the terahertz signal, it converts the analog signal into digital data for processing. The received signal contains a token, and if that token matches the wake-up receiver’s own token, the receiver activates the device.
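
Conceptually, the token check behaves like the sketch below. The token width and comparison scheme are assumptions made for illustration; they do not reproduce the MIT receiver’s actual protocol:

```python
# Hypothetical wake-up token check: a device stays asleep unless the token in
# the received packet matches the one it stores.
import hmac

DEVICE_TOKEN = bytes.fromhex("a5c3")  # assumed 16-bit token stored on the device

def should_wake(received_token: bytes) -> bool:
    # constant-time comparison avoids leaking the token through timing
    return hmac.compare_digest(received_token, DEVICE_TOKEN)

print(should_wake(bytes.fromhex("a5c3")))  # True  -> wake the main radio/sensor
print(should_wake(bytes.fromhex("ffff")))  # False -> stay in low-power sleep
```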

Protecting the Li-ion Battery

For decentralizing the source of energy, it is hard to beat rechargeable lithium-ion batteries. A wide range of applications treats this electrochemical form of energy storage as a strategic imperative. That includes powering units in the military sector, storing and providing energy for personal use, keeping uninterruptible power supply systems operational for data centers and hospitals, storing energy from photovoltaic systems, and enabling the operation of battery electric vehicles and power tools.

The rechargeable battery pack is the most common design in the accumulator segment and accounts for the major share of battery-powered applications. Such a pack usually consists of multiple Li-ion cells. With continuous technological development, the economics of the Li-ion rechargeable battery pack is also becoming attractive enough to warrant a substantial increase in its use. This is also leading to the miniaturization of individual cells, resulting in an increase in their energy density.

However, even with this increased availability and use, the Li-ion rechargeable battery pack continues to carry a residual risk of hazards, especially due to the increase in energy density brought on by miniaturization. The disadvantage shows up in terms of safety.

The electrolyte in Li-ion cells is typically a mixture of organic solvents and a conductive salt that improves its electrical conductivity. Unfortunately, this also makes the mixture highly flammable. During operation, an excessive thermal load can reach the point where the mixture becomes explosive. Furthermore, this safety hazard to the end user is growing with the constant efforts to further increase the energy density of Li-ion cells.

Most Li-ion cells have a narrow operational temperature range, from +15 °C to +45 °C. That makes temperature the key parameter. When a cell exceeds this temperature range, the rising heat becomes a threat to its functional safety and to the safety of the overall system.

Overcharging the battery substantially increases the statistical probability of a defect in the cell. This may lead to a breakdown of the cell structure, typically associated with fire and, in some cases, an explosion.

Manufacturers of rechargeable battery packs try to mitigate this risk by including a battery management system along with primary and secondary protection circuits embedded in the battery’s electronic safety architecture. This keeps the battery within its specified operating range during charging and discharging cycles. But nothing is immune to failure, including components in the protection circuit, and the battery system can still ignite and explode under an excessively high load.

As the battery powers a load, excessive current flow can heat up the battery, and the primary protection circuit may fail to detect it even when it exceeds the permissible level. For protecting batteries, RUAG Ammotec offers a heat lock element, a pyrotechnical switch-off device that is entirely independent of the battery system. It comprises a physicochemical sensor that continuously monitors the surrounding heat. When the temperature rises too far, the sensor permanently blocks the flow of current: the heat lock element causes an insulating piston to shear off a current conductor, thereby electrically isolating the battery.
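
A much-simplified, hypothetical version of the window check a primary protection circuit performs might look like the following; the current limit and the sample readings are assumed values, and a device like the heat lock element acts independently of any such logic:

```python
# Illustrative protection check for the +15 °C to +45 °C window cited above.
T_MIN_C, T_MAX_C = 15.0, 45.0
I_MAX_A = 30.0  # assumed permissible pack current

def pack_within_limits(temp_c: float, current_a: float) -> bool:
    """Return False if temperature or current leaves the specified range."""
    return T_MIN_C <= temp_c <= T_MAX_C and abs(current_a) <= I_MAX_A

# A real BMS would open a contactor on the first violation.
for temp_c, current_a in [(25.0, 12.0), (52.0, 12.0), (30.0, 41.0)]:
    state = "OK" if pack_within_limits(temp_c, current_a) else "DISCONNECT"
    print(f"{temp_c:5.1f} °C, {current_a:5.1f} A -> {state}")
```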

Next-Generation Battery Management

Although there has been significant progress in increasing the range of electric vehicles, charging speed is still a matter of concern. For instance, DC fast chargers can charge a battery to 80 percent in about 30 to 45 minutes, whereas it is possible to fill a gas tank in only a few minutes. Fast charging has its limitations, as the process generates a significant amount of heat: the high current flowing through the internal resistance of the cable and the battery produces a substantial rise in temperature.

EV batteries are typically rated at 400 V, and several factors limit their charging rate. These include the cross-sectional area of the charging cable and the temperature of the battery cells. The temperature rise can be high enough that some fast-charging stations must liquid-cool their cables. It would therefore seem reasonable to expect that increasing the battery’s voltage will boost the power it can handle.

Porsche, in their Taycan EV, has done just that: it is the first production vehicle with a system voltage of 800 V rather than the usual 400 V. This would allow a 350 kW Level 3 ultra-fast DC charging station to potentially charge the vehicle to 80% in as little as 15 minutes. But an EV design with an 800 V system requires new considerations for all of its electrical systems, especially those related to managing the battery.
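
A quick calculation shows why the higher pack voltage helps: at the same charging power, doubling the voltage halves the current, and resistive heating in the cable scales with the square of the current. The cable resistance used below is an assumed, illustrative figure:

```python
# I = P / V, and cable loss = I**2 * R (R is an assumed illustrative value).
P_CHARGE_W = 350_000   # 350 kW ultra-fast DC charging
R_CABLE_OHM = 0.001    # assumed cable plus connector resistance

for pack_v in (400, 800):
    current_a = P_CHARGE_W / pack_v
    loss_w = current_a**2 * R_CABLE_OHM
    print(f"{pack_v} V pack: {current_a:,.0f} A, ~{loss_w:,.0f} W lost in the cable")
```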

Switching the vehicle on and off requires the main contactors to electrically connect and disconnect the battery from the traction inverter. Separately, independent contactors connect and disconnect the battery to and from the charger buses and the DC link. For DC fast charging, additional DC charge contactors are necessary to establish a connection from the battery to the DC charging station. Auxiliary contactors also connect and disconnect the battery to electrical heaters for maintaining the passenger compartment temperature in cold weather.

Moving to a higher battery voltage increases the potential for the formation of electrical arcs, which can be damaging. Vehicle architectures operating at 800 V therefore require stricter isolation parameters than those needed for a 400 V architecture. This can increase the cost of the vehicle.

For instance, higher voltage levels require the connector pins to have greater creepage and clearance between them to reduce the risk of arcing. Although connector manufacturers have managed to overcome these issues, such connectors are more expensive than those offered for 400 V systems, pushing up total costs.

The maximum battery voltage determines the ratings of the components used in the traction inverter module. For 400 V batteries, there is a wide selection of suitably rated components, but this range shrinks drastically at 800 V. Most components rated for higher voltages carry a premium price tag, which raises the price of the traction inverter module.

A solution to the above problem is to use two 400 V batteries. To reduce charging time, the batteries can be connected in series; when driving, they can be connected in parallel.
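
The arithmetic behind this dual-battery arrangement can be sketched as follows, assuming two identical packs with a hypothetical capacity of 100 Ah each:

```python
# Two identical packs: series doubles the voltage, parallel doubles the
# amp-hour capacity; the stored energy is the same either way.
PACK_V, PACK_AH = 400, 100  # assumed per-pack figures for illustration

def series(n: int):   return n * PACK_V, PACK_AH
def parallel(n: int): return PACK_V, n * PACK_AH

for mode, (v, ah) in [("charging (series)", series(2)),
                      ("driving (parallel)", parallel(2))]:
    print(f"{mode:19s}: {v} V, {ah} Ah, {v * ah / 1000:.0f} kWh total")
```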

Batteryless Microcontrollers for IoT

Ten years ago, IBM predicted the world would have one trillion connected devices by 2015. However, as 2015 rolled by, the world had yet to reach even 100 billion connected devices. The major problem—a trillion sensors means at least a trillion batteries.

Beyond being a significant logistical problem, it also did not make economic sense. Everyone expected IoT technology to deliver a major value addition: range. IoT was supposed to bring the Internet to remote corners of the world, interconnecting vast areas with sensors and their information-gathering powers. The Internet and its incredible power would then extend to places like large farms, factories, lumbering operations, construction sites, and mining operations, with enormous coverage and decentralized operations.

Typically, sensors collect data for IoT networks, which distribute it for processing and analysis. If sensors require batteries for operation, it places a severe restriction on the number of sensors that a network can use. This, in turn, goes on to defeat the entire point of having IoT in the first place.

For instance, consider a large-scale agricultural operation. IoT can bring major value to such a business through its coverage. By deploying multiple sensors across the entire operation, it is possible to gather valuable information capable of generating highly actionable insights. Now consider the recurring cost of replacing or maintaining that huge number of batteries every year, and the proposition quickly becomes far less compelling.

Not only would the resources, cost, and manpower for replacing or maintaining the batteries on all the sensors be astronomical, but they would also easily surpass any savings that the system would likely bring.

According to one estimate, a trillion sensors would need 275 million battery replacements every day, and that is assuming every battery deployed in the IoT network reaches its claimed life of ten years. The next hurdle is even worse—discarded batteries poisoning the environment.
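
That figure is straightforward to reproduce from the assumptions stated above:

```python
# One trillion sensors, each battery lasting its claimed ten years.
sensors = 1_000_000_000_000
battery_life_days = 10 * 365

replacements_per_day = sensors / battery_life_days
print(f"{replacements_per_day:,.0f} battery replacements per day")
# -> roughly 274 million per day, in line with the ~275 million figure above
```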

This problem has pushed sensors and microcontrollers to become more efficient and cheaper. Modern sensors are now extremely reliable, consuming minuscule amounts of energy. Batteries have also improved, with the industry producing robust batteries with higher energy density and longer life. However, the future of microcontrollers and IoT sensors needed to be batteryless. This led scientists and engineers to develop energy harvesting technologies that could eliminate the battery from IoT altogether.

Energy harvesting is the technique of scavenging power from the surroundings, which comes in many forms—heat energy, electromagnetic energy, vibrational energy, and so on.

Considering that modern microcontrollers for IoT need only minuscule amounts of power to operate, many companies are developing energy harvesting technologies as a potential power solution that can replace batteries.

This has given rise to self-powered microcontrollers in the market. For these MCUs, batteries impose no restrictions, as they harness their own energy from the environment. They use a number of harvesting technologies based on various power sources and kinds of materials—piezoelectricity, triboelectricity, and RF energy harvesting being the leading contenders in the category. Therefore, with energy harvesting powering microcontrollers, IoT can once again begin to chase the magic figure of one trillion interconnected devices.

Tiny Batteries Drive Microbots

Microbots are mobile robots with characteristic dimensions below one millimeter. They are part of a bigger family that includes common larger robots and a growing number of smaller nanorobots, and they share characteristics with both their larger and smaller cousins. Being autonomous, microbots use their onboard computers to move in insect-like maneuvers. Often, they are part of a group of identical units that performs as a swarm under the control of a central computer.

With their insect-like form being a common feature, microbots are typically cheap to develop and manufacture. Scientists employ microbots for swarm robotics, using many of them and coordinating their behavior to perform a specific task. Combining many microbots compensates for their lack of individual computational capability, producing a behavior resembling that of an anthill or a beehive where insects cooperate to achieve a specific purpose.

With the field of microbotics still young, microbots have a long way to go. Researchers are working with these devices and investing money, time, and effort in improving their capabilities.

With each new iteration, scientists are empowering microbots with more processing power, newer modes of locomotion, more sensors, and expanded storage, while providing them with newer techniques of energy harvesting. Recently, there has been a big breakthrough in tiny batteries that can help microbots drive further than ever before.

Generating a 9 VDC output, these tiny batteries are capable of driving motors directly. They stack multiple layers while turning components into packaging.

Several universities and a battery corporation have joined hands in creating the tiny batteries, a novel design that not only produces a high voltage but also boosts its storage capacity.

To unlock the full potential of microscale devices such as microbots, batteries must not only be tiny, they must also be powerful. According to the team that developed the tiny battery, its innovative design uses an improved architecture for its electrodes.

However, this was an unprecedented challenge. As the battery size reduces, the packaging begins to take up more of the available space, leaving precious little for the electrodes and the active ingredients that give the battery its performance.

Therefore, instead of working on the battery chemistry, the team worked on a new packaging technology. They turned the negative and positive terminals of the battery into the actual packaging, thereby saving considerable space.

By growing fully-dense non-polymer electrodes and combining them with vertical stacking, the team was able to make micro batteries that do not require carbon additives for electrodes. This allowed the micro batteries to easily outperform competitive models in capacity and voltage.

According to the team, the limitations of power-dense micro- and nano-scale battery designs were primarily due to cell design and electrode architecture. They have successfully created a microscale energy source with both high volumetric energy density and high power density.

The higher voltage helps to reduce the electronic payload of a microbot. The 9 VDC from the tiny battery can power motors directly, bypassing energy losses associated with voltage boosting, allowing the small robots to either travel further or send more information to their human operators.
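
As a rough illustration of the losses avoided by driving the motor directly, the figures below (an 85 percent efficient boost converter and a 50 mW motor load) are assumptions, not numbers from the research team:

```python
# Comparing direct drive from a 9 V cell with boosting from a lower-voltage cell.
MOTOR_LOAD_W = 0.050       # assumed motor load
BOOST_EFFICIENCY = 0.85    # assumed boost-converter efficiency

direct_draw_w = MOTOR_LOAD_W                       # 9 V cell drives the motor directly
boosted_draw_w = MOTOR_LOAD_W / BOOST_EFFICIENCY   # lower-voltage cell plus boost stage

extra_w = boosted_draw_w - direct_draw_w
print(f"Boosting wastes ~{extra_w * 1000:.1f} mW, about "
      f"{extra_w / boosted_draw_w:.0%} of the power drawn from the cell")
```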

What are Solid-State Batteries?

The transport industry is currently undergoing a revolution with EVs or electric vehicles on the roads. EVs require batteries, and many EV manufacturers are now manufacturing their own batteries, targeting low-cost batteries with the most range and the fastest charging speed. While many industries are still using lithium-ion batteries, others are moving towards solid-state batteries. Compared to a few years ago, major breakthroughs are finally bringing solid-state batteries closer to mass production.

Although solid-state batteries have existed for some time, and scientists have been researching them for years, they have become commercially available only in the last decade or so. Specific advantages of solid-state batteries include lower costs, superior energy density, and faster charging times.

Many companies have been researching solid-state battery technology for years. For instance, Toyota claims to be on the verge of producing solid-state batteries commercially for EVs, and they hold more than 1,000 patents.

Conventionally, a lithium-ion battery has an anode and a cathode, with a polymer separator keeping them apart. A liquid electrolyte floods the entire cell and is the medium through which lithium ions travel while the battery is charging or discharging.

In a solid-state lithium battery, a solid electrolyte layer separates the anode and the cathode while allowing lithium ions to travel through it. The anode is pure lithium metal, which gives the cell a higher energy density than regular batteries. Theoretically, solid-state lithium batteries can reach an energy density of roughly 6,300 watt-hours per liter, which compares with the roughly 9,500 watt-hours per liter of gasoline.

The major advantage of solid-state batteries is their smaller size and weight. Additionally, they pose a much lower fire hazard. Because these batteries are very safe, they do not require as many safeguards. Their smaller size allows packing them into a higher power capacity, and they do not release toxins. Solid-state batteries also run about 80 percent cooler than regular batteries.

With all the above advantages, using solid-state batteries in electric vehicles offers greater range, safer operation, faster charging, higher voltages, and longer cycle life. However, solid-state batteries still have some disadvantages to overcome.

The first of these obstacles is dendrite formation. Lithium is a highly reactive metal, requiring the use of chemically inert solid electrolytes. During charging, these batteries tend to grow spike-like structures that can puncture the separator and cause short circuits; over time, dendrite growth can increase to the point of destroying the battery. Manufacturers are using ceramic separators to counter the dendrite menace.

Solid-state batteries currently do not perform well at low temperatures, which affects their long-term durability.

So far, the biggest detriment to solid-state batteries has been their exorbitant cost. However, present indications from manufacturers like Toyota suggest they have surmounted the price barrier.

Therefore, at present, the main problem remaining for solid-state battery commercialization is their low-temperature performance. To be a viable alternative, solid-state batteries must perform in all kinds of environments and climates, although manufacturers are offering assurances that they have overcome this hurdle as well. Recharging stations will also need to handle the faster charging currents involved, compared to those of regular lithium-ion batteries.

The Battery of the Future — Sodium Ion

Currently, Lithium-ion batteries rule the roost. However, the technology has several disadvantages. The first is that Lithium is not an abundant material. In contrast, Sodium is one of the most abundant materials on earth and is therefore cheap, which makes it a prime candidate for new battery technology. So far, however, the limited performance of Sodium-ion batteries has prevented their large-scale adoption by industry.

PNNL, the Pacific Northwest National Laboratory of the Department of Energy, may be about to turn the tide in favor of Sodium-ion technology. It is developing a Sodium-ion battery that has excelled in laboratory tests for extended longevity. By ingeniously changing the ingredients of the battery’s liquid core, the researchers have been able to overcome the performance issues that have plagued this technology so far. They describe their findings in the journal Nature Energy as a promising recipe for a battery type that may one day replace Lithium-ion.

According to the lead author of the team at PNNL, they have shown in principle that Sodium-ion battery technology can be long-lasting and environmentally friendly. And all this is due to the use of the right salt for the electrolyte.

Batteries require an electrolyte to keep the energy flowing. Salts dissolved in a solvent form charged ions that flow between the two electrodes. As time passes, the electrochemical reactions that keep the energy flowing slow down, and the battery can no longer recharge. In present Sodium-ion battery technologies, this process happens much faster than in Lithium-ion batteries of similar construction.

A battery loses its ability to hold a charge through repeated cycles of charging and discharging. The new battery technology developed by PNNL retains its ability to be charged far longer than present Sodium-ion batteries can.

The team at PNNL approached the problem by removing the existing liquid solvent and the salt dissolved in it and replacing them with a new electrolyte recipe. Laboratory tests proved the design to be durable, holding up to 90 percent of its cell capacity even after 300 charge and discharge cycles. This is significantly better than the Sodium-ion battery chemistries available today.

In the present chemistry of Sodium-ion batteries, the protective film on the anode, or negative electrode, dissolves over time. This film allows Sodium ions to pass through while preserving the life of the battery, and it is therefore critical. The PNNL technology protects this film by stabilizing it. Additionally, the new electrolyte places an ultra-thin protective layer on the cathode, or positive electrode, further contributing to the stability of the entire unit.

The new electrolyte that PNNL has developed for Sodium-ion batteries is a naturally fire-extinguishing solution. It also remains stable through temperature excursions, making the battery operable at high temperatures. The key to this feature is the ultra-thin protective layer the electrolyte forms on the anode. Once formed, the thin layer remains a durable cover, enabling the battery’s long cycle life.