Charging and gas gauging for mobile devices

February 03, 2016

Ensure a great experience for end users with proper gas gauging.

Operating effectively from a battery is a core requirement for mobile devices. Good battery-powered behavior is crucial for a product’s success. Battery-powered operation revolves around three major aspects:

  • Well-tuned system power consumption
  • Appropriate battery charging
  • Accurate fuel- or gas-gauge prediction of battery state

For mobile devices, tuning the system power consumption starts with defining the device use-cases and modeling the power consumption in its various modes. Normally this includes a low "static" suspend state for when the device is not in use, as well as various dynamically managed power states for active operation.

The battery charging and gas gauging aspects are often overlooked or underestimated in terms of system design and implementation effort. Especially for consumer products, marginal failures in either charging or gas gauging lead to customer dissatisfaction. Problems like failing to detect the charger type on 0.1% of plug-ins, or the product suddenly powering off while indicating 15% remaining, spell disaster for your product's reputation when you've shipped a million units, and they lead to significant customer-support costs. Issues like failing to stop charging when a device overheats can even present a life-threatening safety (and liability) issue.

These aspects all require careful system design from the ground-up, both in terms of hardware and software. The goal here is to highlight that charging and gas gauging are subject to the 80/20 rule: getting 80% of the functionality takes 20% of the effort, but the remaining 20% of “final polish” takes more than 80% of the effort. A careful initial design, such as in the Snapdragon Open-Q series development kits, offers a great start.

Battery selection: chemistry

Many parameters and features factor into the selection of appropriate battery cell technology. The first is the chemistry/technology of the cell itself. Most people are familiar with energy density (Figure 1). Various lithium chemistries lead here, giving the highest energy per unit volume or weight, which makes them an obvious choice for many consumer electronic devices.

[Figure 1 | Energy density of various battery chemistries.]

While important, energy density might not be the deciding factor. Temperature can be an important consideration for charging and discharging. Lithium batteries usually shouldn't be charged below 0°C, and are generally specified to be cut off from charging around 45°C. They generally shouldn't be operated at all above 60°C, and they contain a permanent thermal fuse to prevent catastrophic thermal runaway if other safety measures fail. 60°C may seem outside your device's normal operation, but on the seat or dashboard of a hot car it can easily be reached within minutes. Consumer products for outdoor use may face challenges with the low-temperature limits too, requiring careful system design.

Aside from temperature, a cell's internal impedance and the rate at which it can deliver energy can also influence your choice. Where brief large currents (greater than 4C, say) are needed, a nickel- or lead-based chemistry may be more appropriate; power tools and systems with large displays are examples.

In the case of lithium, the most common chemistry for consumer mobile applications, various cathode compounds can be used, offering different properties, cell shapes, flexibility, and configurations. Each has subtly different characteristics for voltage versus state-of-charge, internal impedance behavior, and variation of stored energy and voltage with temperature. Gas-gauge chipmakers refer to these as "golden parameters" and will carefully characterize any battery cell type to determine appropriate values when setting up a gas gauge for that cell.

Charging

For a particular battery chemistry, there will be an appropriate paradigm for charging that cell. For lithium-based chemistries, the method involves three stages: pre-conditioning, constant-current, and constant-voltage.

[Figure 2 | Lithium charging profile.]

Lithium-based cells can be permanently damaged if they’re discharged beyond a specific low voltage, for example by being connected directly to a resistive load like a light bulb. To prevent that damage, most Lithium battery packs will contain a small protection circuit PCB that incorporates a low-voltage cut-out circuit. When the cell voltage falls below ~2.5 V, the protection FET opens and the apparent voltage at the battery terminals falls to 0 V.

Before charging a cell in this condition, it's necessary to "pre-condition" it by applying a small trickle-charge current, usually around 100 mA. To detect whether a battery is connected at all, apply this current for several minutes, then remove it and check for battery voltage. This process continues until the battery reaches its minimum conditioning voltage as specified by the cell vendor, usually 2.8 to 3.0 V.
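
As a rough illustration, a minimal pre-conditioning loop might look like the C sketch below. The hardware-access helpers (set_charge_current_ma, read_battery_mv, delay_ms) are hypothetical placeholders for your charger IC's interface, and the thresholds are the typical values mentioned above, not figures from any specific cell datasheet. In practice, as noted below, this sequence often runs in the charger hardware itself; the sketch just makes the logic explicit.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical hardware-access helpers -- replace with your charger IC's API. */
    extern void set_charge_current_ma(uint16_t ma);   /* 0 disables charging      */
    extern uint16_t read_battery_mv(void);            /* battery terminal voltage */
    extern void delay_ms(uint32_t ms);

    #define TRICKLE_CURRENT_MA   100   /* typical pre-conditioning current        */
    #define CONDITIONED_MV      2900   /* vendor-specified minimum, ~2.8-3.0 V    */
    #define PROBE_PERIOD_MS  (5u * 60u * 1000u)   /* trickle for several minutes  */

    /* Trickle-charge a deeply discharged cell until it reaches its minimum
     * conditioning voltage; returns false if no battery appears to be present. */
    bool precondition_battery(void)
    {
        for (int attempts = 0; attempts < 12; attempts++) {
            set_charge_current_ma(TRICKLE_CURRENT_MA);
            delay_ms(PROBE_PERIOD_MS);

            /* Remove the current, then check whether a cell is holding voltage. */
            set_charge_current_ma(0);
            uint16_t mv = read_battery_mv();
            if (mv < 500)               /* terminals still near 0 V: no cell      */
                return false;
            if (mv >= CONDITIONED_MV)   /* safe to enter constant-current stage   */
                return true;
        }
        return false;                   /* cell refuses to recover: flag a fault  */
    }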

Note that most charger designs allow some or all of the externally supplied charger current to power the system (known as "bypass") instead of being directed into the battery as charging current. Unless your system can operate (boot and run software) entirely on external power, this pre-conditioning charge of a dead battery must be managed entirely by the charger hardware itself.

Once the cell reaches the constant-current stage, the charging current can be stepped up, usually to a maximum of about 1C, a current (in mA) numerically equal to the cell's rated capacity in mAh; a 3,000 mAh cell, for example, charges at up to 3,000 mA. Newer chemistries can permit higher charge currents (and thus shorter charge cycles), especially if the cell temperature during charging is carefully controlled. During this stage, the cell's voltage slowly rises. The maximum charging current will impact your choice of external charger, cabling, connectors, etc.

The control of charge current with temperature is the subject of several safety standards, including IEEE 1725 and JEITA. Charger circuitry itself can be damaged if subject to excessive load currents at high temperatures, so the charger circuits usually incorporate thermally controlled throttling based on die temperature. More importantly, though, Lithium-based cells can be subject to catastrophic thermal runaway if charged (or discharged with large currents) at high temperature.

To prevent the risks of fire and meltdown, the cells usually contain an internal thermal fuse that permanently disables the cells at temperatures around 90°C. Well before this stage is reached, the system must turn off the charge current when the cell temperature exceeds limits. The cell manufacturer will supply the temperature limits, but these usually stipulate no charge above 60°C or below 0°C. In the past, these have been hard on-off charge limits, but the JEITA standard (required in Japan and becoming more common as a de-facto standard) has a more complicated profile of charge current derating based on cell temperature (Figure 3).

[Figure 3 | JEITA Charging standard, voltage, and current.]
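
The derating profile lends itself to a temperature-zone table. The sketch below illustrates the general JEITA idea in C; the zone boundaries, currents, and voltage ceilings are illustrative assumptions, not the standard's exact figures, and the real limits must come from your cell vendor.

    #include <stdint.h>

    /* One JEITA-style temperature zone: below temp_max_c, charge with at most
     * current_ma at a ceiling of voltage_mv. Values here are illustrative only. */
    struct jeita_zone {
        int16_t  temp_max_c;
        uint16_t current_ma;
        uint16_t voltage_mv;
    };

    static const struct jeita_zone zones[] = {
        {   0,    0,    0 },   /* below 0 degC: no charging                   */
        {  10,  500, 4200 },   /* cold: reduced current                       */
        {  45, 1500, 4200 },   /* normal band: full current                   */
        {  60,  500, 4100 },   /* warm: reduced current and lower ceiling     */
        /* above the last zone (too hot): handled by the fall-through below   */
    };

    /* Map cell temperature to the allowed charge current and voltage ceiling. */
    void jeita_lookup(int16_t temp_c, uint16_t *ma, uint16_t *mv)
    {
        for (unsigned i = 0; i < sizeof(zones) / sizeof(zones[0]); i++) {
            if (temp_c < zones[i].temp_max_c) {
                *ma = zones[i].current_ma;
                *mv = zones[i].voltage_mv;
                return;
            }
        }
        *ma = 0;    /* too hot: stop charging entirely */
        *mv = 0;
    }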

Having this fine-grained control over charge current often requires a combination of software and hardware, such as driver-level adjustments to charge thresholds. Relying solely on software is not acceptable, though. To prevent excessive battery discharge and damage to other components at high temperatures, your system may need to shut down in hot environments such as a parked car. Booting and charging must then be prevented in hardware while no software is running, and in the case of software malfunction. Consult IEEE 1725 and IEEE 1625 for safety-critical requirements in this area.

The cell’s temperature is usually monitored with a dedicated thermistor, often incorporated inside the battery pack itself, especially in systems where the battery is somewhat isolated from the main circuitry. This can raise the battery’s price though, so in designs without an in-pack thermistor, careful system thermal design is needed.

Full charge, termination current

At the end of the constant-current portion, the cell will be approaching its maximum voltage, around 4.1 to 4.2 V. At this point, the charger must hold its applied voltage at the cut-off voltage while the charging current naturally tapers off. The voltage selected for this constant-voltage stage affects the cell's life, with accelerated aging when too high a voltage is used. Too low a threshold results in sub-optimal full-charge capacity, so a trade-off is made here, usually selecting around 4.15 to 4.2 V.

Cells will also be chemically damaged if subjected to a permanent external charging voltage, so chargers terminate their charge cycle and remove the applied voltage entirely when the charge current falls below a certain threshold. This level is usually referred to as the taper (or termination) current, and is another key parameter when tuning your charger circuitry.
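
In code terms, the termination decision reduces to a simple qualified check during the constant-voltage stage. This is a sketch under assumed threshold values; real charger ICs implement it in hardware with current averaging and deglitching.

    #include <stdbool.h>
    #include <stdint.h>

    #define CV_VOLTAGE_MV    4200   /* constant-voltage setpoint (4.15-4.2 V)   */
    #define TAPER_CURRENT_MA  100   /* illustrative taper/termination threshold */

    /* Decide whether to terminate the charge cycle. Called periodically during
     * the constant-voltage stage with an *averaged* charge current, so that a
     * momentary dip (e.g. a load transient) does not end the cycle early.
     * After termination, a separate recharge threshold (e.g. a set voltage or
     * percentage drop) re-arms a "top-up" mini-cycle. */
    bool should_terminate_charge(uint16_t batt_mv, uint16_t avg_charge_ma)
    {
        return (batt_mv >= CV_VOLTAGE_MV - 50)     /* genuinely in CV stage */
            && (avg_charge_ma < TAPER_CURRENT_MA); /* current has tapered   */
    }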

If your system stays plugged into the charger, the battery will be subject to periodic “top-up” charge cycles when the cell has discharged by a certain percentage of charge or voltage. The charging system should ensure that these top-ups are mini charge-cycles, with constant-current and constant-voltage sections.

Charger system designs

The system charging circuitry is typically either a dedicated charging IC, or is incorporated within the system’s power-management IC (PMIC). When included as part of the PMIC, the charging system design may define a main node from which all system power is derived. This node can be supplied from current bypassed from the external charger, and in some designs will also include the ability to supplement system current from the battery at the same time.

Depending on the system power load, a plugged-in system may be charging the battery at a high rate, charging at a throttled rate (because considerable power is being drawn for system operation), or using all the power from the external supply while concurrently discharging the battery. Mode examples and bypass/supplementation of the charging current can be seen in Figure 4; a simple current-budgeting sketch follows the figure.

[Figure 4 | Charger system bypass and charge current throttling.]
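
The current budgeting behind these modes can be sketched as a small arbitration function: the system load gets first call on the external input, and whatever headroom remains (capped at the cell's charge limit) goes to the battery. The function name and signed-current convention are assumptions for illustration, and conversion losses are ignored.

    #include <stdint.h>

    /* Compute battery current given an external input limit and the system load.
     * Positive result = charge current into the battery;
     * negative result = battery supplements the external supply.
     * All values in mA; efficiency losses ignored for clarity. */
    int32_t battery_current_ma(int32_t input_limit_ma,
                               int32_t system_load_ma,
                               int32_t max_charge_ma)
    {
        int32_t headroom = input_limit_ma - system_load_ma;
        if (headroom > max_charge_ma)
            headroom = max_charge_ma;  /* throttle to the cell's charge limit */
        return headroom;               /* may be negative: battery discharges */
    }

For example, a 500 mA USB input with an 800 mA system load yields -300 mA: the battery discharges even while plugged in.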

The charger subsystem in Qualcomm's Snapdragon family of PMICs includes SMBB technology, which can switch the charger circuitry into a boost mode that generates 5 V on the charger power node to drive the high-drain camera flash LEDs.

USB Type-C includes a power-delivery specification that negotiates higher charger input voltages over a dedicated protocol channel for faster charging. Qualcomm supports a wide variety of quick-charge technologies and techniques, including intelligent negotiation of optimum voltage (INOV), which is based on the principles of charging-voltage negotiation. Supplying higher input voltages improves power transfer and can greatly reduce charging times.

Gas gauging

The final component of a successful battery-powered system is the ability to gauge how much energy remains in the battery at any given time. This is referred to as gas gauging. For users, it's important that your system doesn't die unexpectedly, while at the same time extracting the maximum energy from the battery. Having your system fail suddenly due to low voltage can be both annoying and hazardous, potentially leading to data corruption or loss. Some devices, like those with an electrophoretic (E Ink) screen, can appear to be switched on while the battery is actually dead, leading to user confusion, complaints, or customer-support calls.

There are two basic principles for gauging the energy in the battery: mapping the battery's voltage to the current state-of-charge, and "what comes in must go out." The latter, known as coulomb counting, sounds simple: circuitry integrates the current flowing into and out of the battery to maintain a measure of the charge (and thus energy) it holds. This has its challenges in practice (a minimal counting sketch follows the list below), including:

  • Determining the cell’s initial state-of-charge
  • Battery self-discharge due to internal resistances and leakages
  • Energy loss during discharge due to internal battery impedance
  • Accurately measuring discharge at all times, including small leakages while the system is powered off, and the energy contained in short spikes when large subsystems are powered on or off
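
A minimal coulomb-counting skeleton, assuming a fixed sampling period and a signed current reading (positive while charging), might look like this; a real gauge IC does the integration in hardware with a sense resistor and a dedicated ADC, and must also compensate for offset error and self-discharge:

    #include <stdint.h>

    /* Minimal coulomb counter: integrates signed battery-current samples.
     * Accumulate in raw mA*ms and convert on read to avoid per-sample
     * truncation error. */
    struct coulomb_counter {
        int64_t  accum_ma_ms;      /* raw integral of current over time */
        uint32_t sample_period_ms;
    };

    /* current_ma: positive while charging, negative while discharging. */
    void coulomb_sample(struct coulomb_counter *cc, int32_t current_ma)
    {
        cc->accum_ma_ms += (int64_t)current_ma * cc->sample_period_ms;
    }

    /* 1 mAh = 3,600,000 mA*ms */
    int32_t coulomb_read_mah(const struct coulomb_counter *cc)
    {
        return (int32_t)(cc->accum_ma_ms / 3600000);
    }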

Mapping the battery voltage to its state-of-charge also has its challenges:

  • Voltage to state-of-charge varies depending on cell chemistry (see Figure 5)
  • Battery voltage depends on the cell’s internal impedance: the voltage drop can be substantial under heavy current load
  • Hysteresis due to charging or discharging: the voltage can be higher or lower than the open-circuit “relaxed” value depending on how the state-of-charge was reached
  • Open-circuit voltage varies with temperature, rising when warm and dropping when cold

[Figure 5 | Typical open-circuit voltage to state-of-charge curves]

Due to these challenges, carefully tracking the battery’s impedance is crucial to effective gas gauging. This impedance changes with cell conditions like aging and charge/discharge cycle count, so for maximum accuracy, a fingerprint of impedance is maintained for a particular cell. This implies that when the battery is changed (such as if your system has a user-accessible battery pack), your system must recognize and re-evaluate the situation for the new battery pack.

Overall, effective gas gauging requires combining the open-circuit voltage measurements together with coulomb counting. The open-circuit voltage (measured at very low system discharge current, such as during a suspend or sleep state) is converted to state-of-charge based on standard profiles given the battery’s known chemistry, as shown in Figure 5.
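
Conceptually, that conversion is a table lookup with interpolation into the characterized OCV curve. The table values below are rough illustrative numbers for a generic lithium cell at room temperature, not characterized "golden parameters," and real gauges also correct for temperature and hysteresis.

    #include <stdint.h>

    /* Illustrative OCV-to-SoC points for a generic lithium cell, fully relaxed
     * at room temperature. Real gauges use vendor-characterized tables. */
    static const struct { uint16_t mv; uint8_t soc; } ocv_table[] = {
        { 3300,   0 }, { 3600,  10 }, { 3700,  30 }, { 3800,  55 },
        { 3900,  70 }, { 4000,  85 }, { 4100,  95 }, { 4200, 100 },
    };

    /* Convert an open-circuit voltage to state-of-charge (percent) by linear
     * interpolation between table points. */
    uint8_t ocv_to_soc(uint16_t mv)
    {
        const unsigned n = sizeof(ocv_table) / sizeof(ocv_table[0]);
        if (mv <= ocv_table[0].mv)     return 0;
        if (mv >= ocv_table[n-1].mv)   return 100;
        for (unsigned i = 1; i < n; i++) {
            if (mv < ocv_table[i].mv) {
                uint32_t span_mv  = ocv_table[i].mv  - ocv_table[i-1].mv;
                uint32_t span_soc = ocv_table[i].soc - ocv_table[i-1].soc;
                return ocv_table[i-1].soc
                     + (uint8_t)((mv - ocv_table[i-1].mv) * span_soc / span_mv);
            }
        }
        return 100;   /* unreachable */
    }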

The open-circuit voltage measurement can provide a reasonably accurate point value of the battery's state-of-charge, especially if the system has been quiescent (relaxing) for some time, allowing any hysteresis from charging or discharging to subside. This measurement also helps track the cell's current maximum charge capacity against its factory-rated value; over time, the maximum charge capacity drops as the aging battery holds less and less charge.

Because your system use-cases may not often allow effective measurements of open-circuit voltage (e.g. continuously changing system current load), coulomb counting is used to track energy increase or decrease during active periods. This can be combined with knowledge of the battery’s internal impedance to track state-of-charge by battery voltage.

Gas gauging system designs

Doing this effectively requires either writing complex driver software or leveraging the knowledge and development efforts of semiconductor manufacturers who have designed gas-gauging hardware and firmware implementing these algorithms. Gauging can be designed into the battery pack itself, provided as a discrete gas-gauging chip on the power subsystem PCB, or integrated as a component of the system's PMIC.

In-pack gauging involves the least system integration effort and typically the fewest integration problems. It also has a number of distinct advantages, including easy tracking of per-pack aging data and impedance, and straightforward dead-battery charging and boot-up (since cell capacity can easily be retrieved from the pack in low-level bootloader software). However, the battery pack's BOM cost can be high.

The discrete gas-gauging chip solution (shown in Figure 6) can provide excellent gauging performance and access to parameters like minutes-until-empty, based on window-averaged system power consumption. Solutions from providers such as Texas Instruments encapsulate complex algorithms such as Impedance Track, and provide customizable features such as keeping the reported state-of-charge as close to monotonically decreasing as possible; it confuses end users and leads to complaints if remaining capacity suddenly jumps from 10% to 20% after a system relaxation period. (A minimal monotonic-reporting filter is sketched after the figure.) As we will see, decoupling the gas gauge from the charger IC can sometimes lead to tricky edge cases.

[Figure 6 | Typical external fuel gauge with coulomb-counting.]
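
The monotonicity requirement itself is simple to state: while on battery, never show a value higher than the last one the user saw, and let upward corrections bleed in as the raw estimate catches up. A minimal reporting filter, assuming a periodic update with the gauge's raw estimate, could look like this:

    #include <stdbool.h>
    #include <stdint.h>

    /* Clamp the user-visible state-of-charge so it never jumps upward while
     * on battery, even if the raw gauge estimate is corrected after an OCV
     * relaxation measurement. Corrections are absorbed by holding the shown
     * value until the raw estimate catches up (or the charger is plugged in). */
    uint8_t report_soc(uint8_t raw_soc, uint8_t last_shown, bool charging)
    {
        if (charging)
            return raw_soc;            /* upward movement is expected */
        return (raw_soc < last_shown) ? raw_soc : last_shown;
    }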

Using a highly integrated system PMIC as a combined charger/gas-gauge solution can allow full customization, and can provide the lowest system BOM cost, component real estate, and PCB complexity. Solutions that include road-tested driver software can help reduce the software development and tuning efforts, especially when you stick closely to the reference designs.

Paired system-on-chip and PMIC solutions, such as Qualcomm's highly integrated Snapdragon processor family, provide a rich feature set for charging, gauging, and system power conversion. These include the ability to drive user-feedback LEDs (useful for dead-battery charging), internal battery voltage boost technology, and built-in over-voltage protection on the power inputs. Qualcomm's APQ8016 and its companion PM8916 PMIC, as designed into Intrinsyc's Open-Q 410 SOM and reference carrier board, come with BSP support for charging and gauging solutions. This reduces integration effort to the tuning needed for your selected battery cell and power use-cases.

Gas-gauge parameter definition

Whether you use discrete gas-gauging components or a PMIC-integrated solution, you need to define the gauging parameters specific to your system. The most important of these is your system's empty voltage. This voltage is derived from your system's dead voltage, which is in turn defined by the system's hardware: the minimum battery voltage at which the system can operate properly. Various system components feed into this figure, including the input specs for your voltage regulators and their buck or boost configuration. Above this dead voltage, you need to account for tolerances in voltage inputs, variations in temperature, gauging error, and spikes that happen when components switch on or off. The end result is the safe empty voltage at which your system must immediately power down (see Figure 7).
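
As a worked illustration, with entirely hypothetical numbers rather than figures from any particular design, the budget might stack up like this:

    /* Hypothetical empty-voltage budget (all values illustrative, in mV): */
    #define DEAD_VOLTAGE_MV      3000  /* minimum at which the HW still runs */
    #define REGULATOR_TOL_MV      100  /* input tolerance of buck/boost      */
    #define COLD_IMPEDANCE_MV     100  /* extra sag at low temperature       */
    #define GAUGE_ERROR_MV         50  /* ADC + gauging measurement error    */
    #define LOAD_SPIKE_MV         150  /* sag when subsystems switch on      */

    #define EMPTY_VOLTAGE_MV (DEAD_VOLTAGE_MV + REGULATOR_TOL_MV + \
                              COLD_IMPEDANCE_MV + GAUGE_ERROR_MV + LOAD_SPIKE_MV)
    /* = 3400 mV: power down immediately at or below this level */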

Having defined your system’s empty voltage, you now need to specify the system’s reserve capacity. This is a gauging parameter that ultimately defines when your battery is at 0% capacity, when your system should shut down cleanly. This ensures that users are informed of system shutdown, and that no data corruption or loss occurs. The reserve capacity is defined in terms of energy, and is based on the headroom of run-time needed to respond to the battery level and perform the shutdown. Delays in sampling, time required to give feedback, flush data-stores, and shutdown are all inputs to this parameter. Without correctly defined empty voltage and reserve capacity, your system will be subject to brownout, where it unexpectedly dies.

[Figure 8 | Reserve capacity definition]
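
Reserve capacity can be budgeted the same way, as a time-times-power sum. Again, the figures below are placeholders to show the arithmetic, not recommendations:

    /* Hypothetical reserve-capacity budget: the charge needed between the
     * reported "0%" and actual shutdown. All figures illustrative. */
    #define SHUTDOWN_TIME_S    30.0   /* sampling delay + user feedback     */
                                      /* + data flush + orderly power-off   */
    #define SHUTDOWN_POWER_W    0.5   /* average draw while shutting down   */
    #define NOMINAL_VOLTAGE_V   3.7

    /* Charge reserve in mAh = energy / voltage, converted from A*s to mAh:
     * 0.5 W * 30 s / 3.7 V = ~4.05 A*s = ~1.1 mAh. */
    static const double reserve_mah =
        SHUTDOWN_TIME_S * SHUTDOWN_POWER_W / NOMINAL_VOLTAGE_V * 1000.0 / 3600.0;

In practice you would round this up generously, since the shutdown path must still work at low temperature with an aged cell.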

To summarize, here’s an example of a complicated edge-case in system charging/gauging. The system had discrete charger IC providing source system bypass and charge-current throttling, allowing external input current to either charge the battery or supply the system power rails. The discrete gas-gauging IC tracked state-of-charge, and also maintained a measure of the cell’s full charge capacity, based on a combination of open-circuit voltage and coulomb counting.

Charging was permitted from either a dedicated charger (1 A) or from a USB source (less than 500 mA). When the system's LCD panel was on and playing video, the system power requirement was about 800 mA. With the LCD off, the power requirement was less than 100 mA, so the battery could effectively be charged from USB power (up to 350 mA). When the system was plugged into a USB power source and the screen was toggled on and off, the battery would flip between charging and discharging.

The gas gauge IC monitored the battery pack's full-charge capacity (how much charge the battery can hold, given aging and cycling). This figure for the energy a fully charged cell could hold fed the calculations and predictions for shutdown capacity. The gauge declared a fully-charged situation whenever the cell voltage measured above 85% of full (greater than 4.0 V) and the averaged charge current dropped below the termination current.

[Figure 9 | Challenging gauging edge-case round-up]

Unfortunately, when the user turned on the screen at an inopportune time, the gas gauge IC would interpret the charge throttling (imposed by the separate charger IC) as a fully-charged situation: the average charge current dropped to 0 mA, so the gauge assumed the cell was full. Then, when the screen was turned off, charging resumed and the gauge ultimately calculated the cell as holding more energy than it actually could.
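
One way to close that gap, assuming the charger IC exposes status that the gauge or driver software can query, is to qualify full-charge detection on both devices agreeing. The status helpers here (charger_in_cv_mode, charger_is_throttled) are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical charger-status queries -- the point is that the gauge must
     * not infer "full" from low charge current alone. */
    extern bool charger_in_cv_mode(void);     /* in constant-voltage stage?   */
    extern bool charger_is_throttled(void);   /* input diverted to system load? */

    /* Latch "fully charged" only when the taper condition is met AND the
     * charger confirms the low current is a genuine taper, not throttling. */
    bool detect_full_charge(uint16_t batt_mv, uint16_t avg_charge_ma,
                            uint16_t term_current_ma)
    {
        if (charger_is_throttled())
            return false;            /* low current is load-induced, not taper */
        return charger_in_cv_mode()
            && (batt_mv > 4000)      /* near full-charge voltage */
            && (avg_charge_ma < term_current_ma);
    }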

The result: when the system was later unplugged and discharged, the incorrect estimate of full-charge capacity caused a brownout and immediate shutdown, even though the user had been told 20% energy remained. Preventing this situation requires careful tuning of your parameters, good gauging algorithms, coordination between the charger and the gauging circuitry, and good design of the system use-cases.

Ken Tough, CEng, is the Director of Software Engineering at Intrinsyc Technologies. He has worked in the embedded software and electronics industry for more than 25 years, in fields ranging from defense to telecommunications to consumer products. His experience in battery-powered devices was gained in large part through commercial eReader product support.

Intrinsyc

www.intrinsyc.com

[email protected]


Ken Tough, Director of Software Engineering, Intrinsyc Technologies