HAL: ADC

Just to be clear, I’m talking about the ADC built in to the ATMEGA32U4. Let’s get started!

ADC block diagram
From ATMEGA32U4 datasheet

The ADC is a 12-channel, 10-bit Successive Approximation Register (SAR) ADC. 12 channels does NOT mean the ADC can convert 12 different analog signals to digital values AT THE SAME TIME; it means that the ADC can convert ONE analog signal to a digital value at any given moment, and you can choose from 12 different analog inputs. The 12 pins that can accept analog waveforms are highlighted below:

From variable load schematic; ADC channels are highlighted

So for example, if you wanted to read ADC4 through to ADC7, then you’d have to read ADC4, then ADC5, etc. For this project, we only need to read ADC1, which tells us the temperature of the load.

How do we use this thing? Here are some of the things you’ll have to set up:

  • Voltage reference: the ADC can pick from several different references. The reference voltage determines the resolution of the ADC. For example, a 10-bit ADC with a reference voltage of 2 volts will have a resolution of (2 V)/(2^10) = 2/1024 = 1.95 mV / bit. The smaller your reference voltage, the better your resolution, since each count will represent a smaller voltage. However, a smaller reference voltage also limits the maximum readable input to the ADC, as the ADC will not be able to correctly convert an analog signal greater than its reference voltage. For example, a 3 volt signal will always be read as the maximum value (full scale, which in this case corresponds to 2 volts), because it exceeds the reference voltage. For the ATMEGA32U4, the reference voltage can be IREF (internal reference of 2.56 V), AVCC (power rail of chip, which in our case is 5 V) or AREF (externally applied reference, not used in our case).
  • Single or differential: the ADC supports differential operation, with a Programmable Gain Amplifier (PGA) no less. Since the one signal we care about is referenced to ground, we’ll only use single ended mode.
  • Channel: the ADC will have to be told what channel you want to read; for example ADC0, ADC1, etc. In our case we only have one channel we care about, ADC1.
  • Mode of operation: when does the ADC perform a conversion? All the time, in a non-stop fashion, or when we tell it to? In Single Conversion mode, the ADC will only start a conversion when we tell it to, and then does nothing after it finishes that one conversion. In Free Running mode, the ADC will constantly sample and convert the same analog waveform over and over again. You can also configure the ADC to convert based on certain triggers, like a timer interrupt going off. For our purposes, let’s stick with Single Conversion.
  • Clock: The ADC, like almost all digital circuits, needs a clock to run it. The faster the clock, the sooner each conversion completes, and the sooner you can start another one. Faster clocks also increase power consumption. In this case, we don’t care about power, so we should feed it the fastest clock it can take. The datasheet says the maximum recommended speed is 200 kHz, so let’s try to get close to that.

Let’s see how the constructor sets up the ADC:

ADC constructor

Not terribly exciting, but let’s walk through it. First, the ADC is disabled; you don’t want to mess with something that’s running and powered if you don’t have to. Second, you set the prescaler, which chooses what prescaled clock feeds the ADC. Then you set the reference, then the adjust (whether the 10-bit result, which is stored in two 8-bit registers, is aligned left or right). Then the digital input driver is disabled, and finally the ADC is re-enabled.

What does disabling the digital input driver do, and why is it needed? Take a look at the GPIO functional block diagram:

Alternate Function block diagram
From ATMEGA32U4 datasheet

Pxn goes to the ADC multiplexer through AIOxn, but Pxn also feeds the bi-directional buffer that goes to the Schmitt trigger and synchronizer. Unfortunately, digital circuits rarely handle analog voltages well; driving a voltage that’s between a high and low logic level will often cause the NMOS and PMOS transistors that make up the digital circuit to turn on simultaneously, which can damage the chip or its power supply. In the worst case, it’s like having output contention, where one transistor tries to drive a signal high and another tries to drive it low. All of this can be avoided by disabling the digital input driver (the bi-directional buffer).

ADC enums (left)
ADC configuration methods (right)

The enums on the left are used by the methods on the right. There’s not a whole lot to say about the configuration methods, as they’re just setting and clearing bits. A couple of things to note, though. Firstly, the system has a 16 MHz clock, and we want a prescaled clock that’s smaller than or equal to 200 kHz. The only one that works is DIV128, since 16 MHz divided by 128 is 125 kHz. A typical conversion takes 13 clock cycles, so conversion takes about 13 / (125 kHz) = 104 us. Secondly, you can enable high speed mode to increase the ADC’s speed, but the datasheet doesn’t elaborate on it too much, and I don’t think I need it, so I’ll stay away from it for now.

Now let’s get to the juicy part: actually reading an analog signal:

Converting analog to digital, then reading

This function is more complex than it first appears. The first two lines are setting up bits in ADMUX and ADCSRB. For each line, the expression to the left of the pipe clears bits, while the expression on the right sets some of the bits that have just been cleared. Then, the conversion is started by setting the ADSC bit in ADCSRA. This bit, in Single Conversion mode, is cleared by hardware when the conversion finishes, so the while loop constantly reads and re-reads ADSC until it is cleared. Then, since the conversion is complete, the ADC results can be read and returned.

The complication with this relatively simple function comes from the warning in the comment in the middle: a delay may be needed. Why? Well, the ADC does not convert the analog input directly; the ADC converts a sample of the input, and the two aren’t always the same. See the following section from the datasheet:

Analog input circuitry
From ATMEGA32U4 datasheet

The analog input waveform is put through a sample & hold circuit, which means the voltage of the input is saved by storing charge in a capacitor. The problem is that if the analog input waveform isn’t given enough time to charge / discharge the capacitor, then the sample will not be representative of the actual signal. This is the reason the delay may be necessary: after the first two lines, the analog multiplexer has chosen an analog waveform to charge the sample & hold capacitor. If the conversion starts too soon after that, then the capacitor may not have charged or discharged to the correct value. If a delay is added, then the capacitor has more time to reach the correct value, and the result will be correct.

This corruption of the sample can be seen in the following test code that I wrote:

ADC Test code

In the code above, I am using the ADC class to read values and then sending the results to my computer through serial. The values I’m reading on the ADC are GND, ADC1 and bandgap voltage. For this test, ADC1 is set to 1.565 volts (the resulting voltage when I put a 100 kohm resistor where the thermistor should be), and bandgap is 1.1 volts. The results are shown below:

Results from running the test code
Red text added by me

Each line has five numbers: GND, bandgap after GND, ADC1, bandgap after ADC1, and bandgap after bandgap. The results for GND and ADC1 are pretty much as expected, because they’re strongly driven signals. This means that they do not need much time to charge or discharge the sample & hold capacitor, since they can sink or source a lot of current (relatively speaking). However, the bandgap counts are all over the place! This is because, as far as I can tell, the bandgap is a weakly driven signal, which needs more time than GND or ADC1 to charge or discharge the capacitor. Let’s walk through each conversion:

  1. At the first conversion, the multiplexer selects GND; this completely discharges the sample & hold capacitor, as evidenced by the low counts for GND.
  2. At the second conversion, the multiplexer channel selects bandgap, which doesn’t have enough time to charge the sample & hold capacitor to 1.1 volts. This is why bandgap after GND is below the expected 440 count; the sample is smaller than the actual voltage.
  3. At the third conversion, the multiplexer selects ADC1. Though ADC1 isn’t driven by a particularly low resistance source (10 kohm), there is a beefy 0.1µF capacitor, which has no problem charging the much smaller sample & hold capacitor (14 pF). Therefore, the sample & hold capacitor is successfully charged up to 1.565 volts, and the counts are as expected.
  4. At the fourth conversion, the multiplexer selects bandgap again. This time, there isn’t enough time to discharge the capacitor down from 1.565 volts to 1.1 volts. This is why bandgap after ADC1 is larger than the expected 440 count; the sample is larger than the actual voltage.
  5. At the fifth conversion, the multiplexer selects bandgap again. Since the previous conversion partially discharged the sample & hold capacitor and brought it close to the correct value, this conversion doesn’t need much time to finish discharging the capacitor, so the result is the correct value. This is why bandgap is correct if it is measured twice in a row; the double sample provides sufficient time for the weakly driven bandgap reference to charge or discharge the sample & hold capacitor to the right value.

This is problematic; the ADC may not produce the right results if the input doesn’t have enough drive strength or a large enough bypass capacitance. The two solutions are (a) sample the signal twice, or even three times, and keep only the last result, which effectively increases how much time the sample & hold capacitor has to charge or discharge, or (b) add a delay between setting the multiplexer and starting the conversion, which again gives the sample & hold capacitor more time to settle. The second solution is more convenient, since you don’t have to worry about double or triple sampling, but it also means you’re adding delays to strongly driven signals, which don’t need them.

Fortunately, for this project, it doesn’t matter; we’ll only be sampling ADC1. This means the sample & hold capacitor will always be hooked up to ADC1, which means that there is plenty of time for the capacitor to charge or discharge. On top of that, the large bypass capacitance on the input to ADC1 means that even if this weren’t the case, the sample & hold capacitor will always have the right value for ADC1. Therefore, we don’t have to worry about adding delays. However, for future projects, this delay requirement should be kept in mind (or always make sure the hardware makes the delay unnecessary by using analog buffers or large bypass capacitors).
