Accuracy test


eliassh
Level 1

Hello.
I'm running into a problem where the ADC is only accurate at 8 bits.
If I choose 16 or 20 bits, I see 5.112 V on the screen, but the input to the converter is 4.8098 V.
Can anyone help?

1 Solution
Len_CONSULTRON
Level 9

@eliassh ,

Is the issue you are having with overall accuracy or accuracy at the ends of the voltage range you are measuring?

Overall Accuracy

Cypress/Infineon have gain and offset compensations for certain ADC configurations.   These compensation values are located in special system FLASH and are used in the ADC_CountTo_Volts(), ADC_CountTo_mVolts() and ADC_CountTo_uVolts() functions.

But since there are many potential configurations, the compensation values can only get you close.  Each configuration variation has its own gain/offset influence.

Here is a list of known configuration factors that can influence gain, offset, and in some cases linearity across the voltage range of interest:

  • External circuit components.  (Also known as analog front-end.)
  • Internal analog routing resistances.
  • Vref accuracy and stability.
  • Input range multiplier.
  • Buffer gain.
  • Buffer mode.
  • Vss offset (in single-ended measurements)
  • Resolution (in bits)
  • Input mode (Single-ended or Differential)
  • Temperature
  • Signal jitter or oscillations.

You can see that Infineon did not provide enough compensation factors to cover all configurations, especially those involving an external analog front-end or temperature, which they cannot predict.

I've created a number of different projects that use an ADC.  If significantly improved accuracy is my target, I create a calibration sequence in my code.  This calibration sequence can be used at manufacturing time and/or at any time during the life of the product.   The calibration values obtained can be stored in FLASH or EEPROM (if available).   They can be recalled when needed to compensate the ADC counts measured.

I tend to calibrate and compensate for gain and offset which tend to be the most common error factors.   To do this I place a known voltage on the input near, but not at, the minimum voltage to be measured.  Usually about 0.5V to 1V above the minimum.   With the known voltage, I input the value I measure using another source (such as an accurate DMM) into the project's GUI (or equivalent).   Additionally I read the ADC counts from my ADC at this voltage.

I then place a known voltage on the input near, but not at, the maximum voltage to be measured.  Usually about 0.5V to 1V below the maximum.   With the known voltage, I input the value I measure using another source (such as an accurate DMM) into the project's GUI (or equivalent).  Additionally I read the ADC counts from my ADC at this voltage.

With the two calibration measurements spanning about 3/4 of the voltage input range, I get a linear slope.   Along with the two ADC count readings, I can calculate the gain and offset values that allow my ADC counts to be converted to a voltage value with significantly improved accuracy.  (I no longer use the ADC_CountTo_Volts(), ADC_CountTo_mVolts() and ADC_CountTo_uVolts() functions.)
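The gain/offset math above can be sketched in C. All names here are my own illustrative inventions, not part of the PSoC API; the two calibration points stand in for the near-minimum and near-maximum measurements described above.

```c
#include <stdint.h>

/* Illustrative two-point calibration sketch; names are hypothetical,
 * not from the PSoC Creator API. */
typedef struct {
    double gain;    /* volts per ADC count */
    double offset;  /* volts corresponding to a count of 0 */
} adc_cal_t;

/* Derive gain and offset from two known (counts, volts) calibration
 * points: one near the minimum, one near the maximum of the range. */
adc_cal_t adc_calibrate(int32_t counts_lo, double volts_lo,
                        int32_t counts_hi, double volts_hi)
{
    adc_cal_t cal;
    cal.gain   = (volts_hi - volts_lo) / (double)(counts_hi - counts_lo);
    cal.offset = volts_lo - cal.gain * (double)counts_lo;
    return cal;
}

/* Stand-in for ADC_CountTo_Volts(), using the stored calibration instead
 * of the factory compensation values. */
double adc_counts_to_volts(const adc_cal_t *cal, int32_t counts)
{
    return cal->gain * (double)counts + cal->offset;
}
```

The two values in `adc_cal_t` are what you would save to FLASH or EEPROM and recall at runtime.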

I should note here that as you increase the resolution (i.e. from 8 to 20 bits) you will be more prone to see signal jitter as bit fluctuation, from many sources external and internal.   It is common practice to take multiple measurements and average the readings to reduce the jitter.  A very accurate DMM does that regularly without you noticing.   A DMM commonly averages multiple samples, which gives it roughly the equivalent of a 100 Hz low-pass filter.
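The averaging practice above can be sketched as follows. Names are hypothetical, and `read_sample()` just replays a canned jittery stream for illustration rather than touching real hardware:

```c
#include <stdint.h>
#include <stddef.h>

/* Average n raw readings to suppress jitter; the reader is passed in so
 * this sketch does not depend on any particular ADC driver. */
int32_t adc_read_averaged(int32_t (*read_counts)(void), size_t n)
{
    int64_t sum = 0;   /* 64-bit accumulator: 20-bit counts cannot overflow */
    for (size_t i = 0; i < n; i++)
        sum += read_counts();
    return (int32_t)(sum / (int64_t)n);
}

/* Stand-in for a real ADC read: replays a canned jittery stream around
 * a true value of 2048 counts. */
static const int32_t jittery[] = { 2046, 2050, 2047, 2049, 2048 };
static size_t pos = 0;
static int32_t read_sample(void) { return jittery[pos++ % 5]; }
```

On a real part you would call your ADC driver's blocking read inside `read_counts` and pick `n` as a trade-off between noise suppression and sample rate.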

One more note:   My procedure above determines gain and offset factors across most of the desired voltage range.  It does not compensate for non-linearity within the range.   Generally non-linearity is worst in the middle of the range.  To compensate for it, you would add a calibration point near the middle of the range.  This known voltage and its measured ADC counts give you a curve (rather than a straight line) against which to compensate the ADC counts.
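A hedged sketch of that three-point idea: the mid-range point splits the span into two linear segments, which partly compensates mid-scale non-linearity. All names and numbers here are illustrative, not from any PSoC library.

```c
#include <stdint.h>

/* One calibration point: raw counts plus the voltage measured with a
 * trusted DMM at the same input. Illustrative only. */
typedef struct {
    int32_t counts;
    double  volts;
} cal_point_t;

/* Piecewise-linear conversion through three calibration points
 * (low, middle, high), ordered by increasing counts. */
double adc_counts_to_volts_3pt(const cal_point_t p[3], int32_t counts)
{
    const cal_point_t *a = &p[0], *b = &p[1];            /* lower segment */
    if (counts > p[1].counts) { a = &p[1]; b = &p[2]; }  /* upper segment */
    double gain = (b->volts - a->volts) / (double)(b->counts - a->counts);
    return a->volts + gain * (double)(counts - a->counts);
}
```

More calibration points give a finer piecewise fit, at the cost of more calibration steps at manufacturing time.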

One LAST note:  It is probable that the extreme ends of the measurable voltage range are non-linear enough that the gain, offset and INL (integral non-linearity) factors may not be accurate enough.   Suggestion:   accept the values you get there as a rough indication of where you are, or widen the usable voltage range in your design.

Len
"Engineering is an Art. The Art of Compromise."


4 Replies
LeoMathews
Moderator

Hi @eliassh 

I believe a similar issue is discussed in this link. Can you please check it and see if it helps?

Thanks and Regards,
Leo 


Hello,
Previously there was a problem affecting all bit settings, and that turned out to be a VDDA problem. Now it's a different problem: the converter is only accurate to 8 bits. I'm sure it's a problem with the converter's settings, but I can't get past it.

Thanks

LeoMathews
Moderator

Hi @eliassh ,

This thread was locked due to long inactivity. You can continue the discussion by opening a new thread that references this locked one; replies in an inactive thread will mostly go unattended by community users.

Thanks and Regards,
Leo
