
Hello,

I made a basic watt meter project with a PSOC5.

The setup consists of:

1. A sigma-delta ADC object set to 18-bits. The voltage reference is 1.024V and the input is set to differential (+/-1.024V).

2. A multiplexer feeding a voltage and current input term to the ADC.

External to the PSOC:

1. A 1001:1 resistive divider connected to an op-amp buffer. The buffer is single-ended and feeds only the positive terminal of the ADC (the negative terminal is grounded). This serves as the voltage input to the watt meter.

2. A differential amplifier with a gain of 8.2 that measures the voltage across a 0.5 ohm shunt resistor. This serves as the current input to the watt meter.

I have the meter working flawlessly right now but it is coded using floating point arithmetic. I want to optimize the algorithm to use fixed point. I have been out of school for several years now and am rusty.

So how do you suggest I scale the numbers for optimal volts * amps precision given my hardware setup?


1. First of all, the 18-bit ADC setting seems excessive. The accuracy of the product I x V is limited by the ADC reading accuracy of I and V separately: if each reading is good to 18 bits, the product (I x V) is also good to at most 18 bits.

The real absolute accuracy is typically worse for a number of reasons. In the case of the DelSig ADC these include "frequency droop" (amplitude roll-off with frequency as sinc^3(x)), the stability of the 1.024V reference, ADC clock stability, and noise pickup (mainly 50/60Hz mains). I believe the 16-bit ADC setting is sufficient for power measurement, while the 18-bit setting will typically report smaller values due to the heavy ADC output filtering and the resulting frequency droop.

Another source of measurement error is the interleaved ADC sampling: current and voltage are not sampled simultaneously. The ADC sampling rate in 16-bit mode is 40 kHz, so the time interval between consecutive I and V samples is much shorter than at the 18-bit rate, and their product (I x V) is closer to the instantaneous value.

The power measurement accuracy can be improved substantially if the ADC sampling is performed at the 16-bit, 40kHz rate, and the product (I x V) is then averaged and downsampled using a moving average filter. A demo project using the Moving Average filter in conjunction with the DelSig_ADC can be found here:

Re: Delta Sigma ADC driver/filter Questions

This way the sampling rate is increased, the droop factor is eliminated, and the output power can be averaged beyond 18 bits, since the product (I x V) is now being averaged instead of I and V separately.
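As a rough illustration of averaging the power product rather than I and V separately, here is a minimal fixed-point moving-average sketch. The window length of 64 and the function names are my own assumptions, not from the linked project:

```c
#include <stdint.h>

#define MA_WINDOW 64  /* assumed window length; power of two keeps the math cheap */

/* Circular buffer of instantaneous power products and their running sum. */
static int64_t ma_buf[MA_WINDOW];
static int64_t ma_sum = 0;
static int     ma_idx = 0;

/* Push one instantaneous product (I_code * V_code) and return the
 * integer average of the last MA_WINDOW products. */
int64_t ma_update(int32_t i_code, int32_t v_code)
{
    int64_t p = (int64_t)i_code * (int64_t)v_code;
    ma_sum += p - ma_buf[ma_idx];      /* drop oldest product, add newest */
    ma_buf[ma_idx] = p;
    ma_idx = (ma_idx + 1) % MA_WINDOW;
    return ma_sum / MA_WINDOW;
}
```

In firmware this would typically be called from the ADC end-of-conversion interrupt once both channel codes for a sample pair are available; downsampling then just means reading the return value every Nth call.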

2. In the 18-bit mode the ADC output rate is very slow (~180Hz), so the MCU has plenty of time to perform that single floating-point multiplication I x V, which takes about 80 clocks. Doing it in fixed point, e.g. as a 64-bit multiply taking about 4 clocks, won't improve the overall performance of the code.

3. The conversion factor from ADC code to volts for 18-bit differential sampling (+/-1.024V full scale) is: Vout = 2 x 1.024V x ADC_code / 2^18

If code1 and code2 are the ADC readings for the current and voltage channels obtained by ADC_GetResult32(), then

I = 2 x 1.024V x code1 / 2^18;

V = 2 x 1.024V x code2 / 2^18;

(these are the voltages at the ADC pins; the external divider and shunt/amplifier factors are constant multipliers that can be folded in afterwards). Let's define:

int64 power64 = (int64)code1 * code2;

Since 2 x 2 x 1024 x 1024 = 2^22 exactly, the power in microwatts is

int64 power_uW = power64 >> 14; // = (2^22 * power64) >> 36, power in microwatts

And finally:

float power_W = 1.0e-6f * power_uW; // power in Watts
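Putting the steps above together, a minimal compilable sketch (function names are mine; the >>14 shift is specific to the 18-bit, +/-1.024V configuration and, as noted, gives power at the ADC pins, before the external divider and shunt scale factors are applied):

```c
#include <stdint.h>

/* Convert a pair of 18-bit differential ADC codes (+/-1.024V full scale)
 * into power. Derivation: V = 2.048 * code / 2^18, so
 * P = 2.048^2 * code1 * code2 / 2^36 W. Since 2.048^2 * 1e6 = 2^22 exactly,
 * P_uW = (code1 * code2 << 22) >> 36 = (code1 * code2) >> 14.
 * An arithmetic right shift on negative products is assumed, as in the post. */
int64_t power_microwatts(int32_t code_i, int32_t code_v)
{
    int64_t p = (int64_t)code_i * (int64_t)code_v;  /* 64-bit product */
    return p >> 14;                                 /* microwatts */
}

float power_watts(int32_t code_i, int32_t code_v)
{
    return 1.0e-6f * (float)power_microwatts(code_i, code_v);
}
```

As a sanity check, full-scale codes of 2^17 on both channels correspond to 1.024V x 1.024V, i.e. 1,048,576 uW.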


Hi @BeS__284136,

Since you are measuring voltage and current with the PSoC 5 DelSig ADC, at 18-bit resolution the maximum conversion rate is 125 sps to 3000 sps in continuous, single-channel mode. You can select the sample rate based on the output accuracy you need.

With an AC line frequency of 50Hz (+/- 5%), you can sample the ADC at 2400 sps with 16-bit resolution per conversion; the calculations can then be completed in the available time, depending on the main CPU frequency.

This conversion method is widely used in energy measurement devices (energy meters), and my recommendation is to use the same approach.

Going with fixed-point calculation will also give you faster results.

The averaging of the voltage and current samples, together with the timing of your multiplication algorithm, will determine your accuracy.

Please let me know if any further information is required, and we can discuss.
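As a sketch of the per-cycle accumulation described in this reply: at 2400 sps and a 50Hz line, one full cycle is 48 sample pairs, and averaging the I x V products over exactly one cycle gives the active power. The buffer layout and function name below are assumptions for illustration:

```c
#include <stdint.h>

#define SAMPLES_PER_CYCLE 48  /* 2400 sps / 50 Hz */

/* Accumulate instantaneous I*V products over one full line cycle and
 * return their mean, i.e. average power in raw ADC-code units.
 * Scaling to watts is a constant factor applied afterwards. */
int64_t cycle_power(const int16_t *i_codes, const int16_t *v_codes)
{
    int64_t acc = 0;
    for (int n = 0; n < SAMPLES_PER_CYCLE; ++n)
        acc += (int32_t)i_codes[n] * v_codes[n];  /* 16x16 -> 32-bit product */
    return acc / SAMPLES_PER_CYCLE;
}
```

Averaging over a whole number of cycles is what makes the reactive (sinusoidal) part of the product cancel, leaving only the active power.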

Thanks & Best regards

Sateesh M