I have code that takes in 8-bit samples via a SAR ADC and transfers them via DMA to the DFB filter. From there I have two more DMAs, one requiring a HW request per burst and one that does not, so that the filtered output is double buffered on its way to SRAM.
My problem is that the data I view seems to have gaps in it, which is why I added the double buffering of the final filtered data, but it did not help; in fact I only got an even uglier signal. The pictures below are of the arrays the two DMAs transfer to. The first buffer always has the data gap in the same place, right at the beginning, whenever I run it, while the second buffer has substantially more gaps, and their positions change from run to run.
I don't know what I am doing wrong; below is the code for my DMAs.
For the first DMA I have
#define dma_out_A_BYTES_PER_BURST 2
#define dma_out_A_REQUEST_PER_BURST 1
#define dma_out_A_SRC_BASE (CYDEV_PERIPH_BASE)
#define dma_out_A_DST_BASE (CYDEV_SRAM_BASE)
dma_out_A_Chan = dma_out_A_DmaInitialize(dma_out_A_BYTES_PER_BURST, dma_out_A_REQUEST_PER_BURST,
    HI16(dma_out_A_SRC_BASE), HI16(dma_out_A_DST_BASE));
dma_out_A_TD = CyDmaTdAllocate();
/* TD loops back to itself; source (filter holding register) stays fixed, destination increments */
CyDmaTdSetConfiguration(dma_out_A_TD, LENGTH*2, /*DMA_END_CHAIN_TD*/ dma_out_A_TD, TD_INC_DST_ADR | dma_out_A__TD_TERMOUT_EN);
CyDmaTdSetAddress(dma_out_A_TD, LO16((uint32)Filter_HOLDAM_PTR), LO16((uint32)Filter_Ch_A));
CyDmaChSetInitialTd(dma_out_A_Chan, dma_out_A_TD);
CyDmaChEnable(dma_out_A_Chan, 1);
Here is the code for the second DMA:
#define DMA_Buffer_BYTES_PER_BURST 2
#define DMA_Buffer_REQUEST_PER_BURST 0
#define DMA_Buffer_SRC_BASE (CYDEV_SRAM_BASE)
#define DMA_Buffer_DST_BASE (CYDEV_SRAM_BASE)
DMA_Buffer_Chan = DMA_Buffer_DmaInitialize(DMA_Buffer_BYTES_PER_BURST, DMA_Buffer_REQUEST_PER_BURST,
    HI16(DMA_Buffer_SRC_BASE), HI16(DMA_Buffer_DST_BASE));
DMA_Buffer_TD = CyDmaTdAllocate();
/* memory-to-memory copy: both addresses increment, chain ends after one pass */
CyDmaTdSetConfiguration(DMA_Buffer_TD, LENGTH*2, /*DMA_Buffer_TD*/ DMA_END_CHAIN_TD, DMA_Buffer__TD_TERMOUT_EN | TD_INC_SRC_ADR | TD_INC_DST_ADR);
CyDmaTdSetAddress(DMA_Buffer_TD, LO16((uint32)Filter_Ch_A), LO16((uint32)Buffered_Filter_Ch_A));
CyDmaChSetInitialTd(DMA_Buffer_Chan, DMA_Buffer_TD);
CyDmaChEnable(DMA_Buffer_Chan, 1);
I have seen similar situations before, when I wasn't using the DFB filter, and a double buffer fixed it: I was accessing the array while it was halfway through being populated, which is why I added the second buffer that doesn't need a HW request per burst. This time, though, it does not seem to work. What am I doing wrong?
I do not think it is a problem with the DMA transfer between the ADC and the filter component, since no array is built there; it is a direct sample-by-sample transfer. Also, I would imagine the filter would not produce such a sharp peak, but rather a much more rounded dip.
Thanks in advance for all of your help,
Can you post your complete project, so that we all can have a look at all of your settings? To do so, use
Creator->File->Create Workspace Bundle (minimal)
and attach the resulting file.
Double buffering is normally done not with two DMA channels, but with two TDs chained so that the data is stored alternately in two different memory areas. Your second transfer might block the first one.
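For example, a chained-TD setup might look like the sketch below, reusing the names from your first DMA (the destination buffers Buf_A and Buf_B are placeholders I made up, not names from your project):

```c
/* sketch only: two TDs on ONE channel, each chained to the other */
uint8 td[2];
td[0] = CyDmaTdAllocate();
td[1] = CyDmaTdAllocate();
/* each TD names the other as its next TD, so the hardware ping-pongs forever */
CyDmaTdSetConfiguration(td[0], LENGTH*2, td[1], TD_INC_DST_ADR | dma_out_A__TD_TERMOUT_EN);
CyDmaTdSetConfiguration(td[1], LENGTH*2, td[0], TD_INC_DST_ADR | dma_out_A__TD_TERMOUT_EN);
/* same source (the filter holding register), two different destinations */
CyDmaTdSetAddress(td[0], LO16((uint32)Filter_HOLDAM_PTR), LO16((uint32)Buf_A));
CyDmaTdSetAddress(td[1], LO16((uint32)Filter_HOLDAM_PTR), LO16((uint32)Buf_B));
CyDmaChSetInitialTd(dma_out_A_Chan, td[0]);
CyDmaChEnable(dma_out_A_Chan, 1);
```

The TERMOUT fires at the end of each TD, so an interrupt can tell the CPU which buffer has just been completed.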
I had thought about doing it that way, but after trying it I realized that I don't think it's going to accomplish what I want. Maybe I either just can't do what I want, or I don't really need to, but I figured it would be more efficient this way.
Using multiple TDs won't allow me to turn off the HW request for the second TD. Since the filter spits the samples out one by one, the first DMA builds the array sample by sample. Once that is done, I want to copy that array immediately to another array so that the first DMA can continue populating the next array while I do calculations on the copy.
I was originally copying the array with the CPU, but I figured: why couldn't the DMA handle this?
Am I wrong in thinking this would work? I'm pretty sure that as long as the second DMA's priority is higher, and since the DMA controller can't transfer two things at once, this should work.
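To make the intended flow concrete, this is roughly what the ISR fired by the first DMA's nrq does (a sketch: the ISR name dma_out_A_done_isr is made up, the channel names are from the code above, and since REQUEST_PER_BURST is 0 a single CPU request should move the whole TD):

```c
/* sketch: kick the memory-to-memory copy once the capture TD finishes */
CY_ISR(dma_out_A_done_isr)
{
    CyDmaChEnable(DMA_Buffer_Chan, 1);           /* arm the copy channel   */
    CyDmaChSetRequest(DMA_Buffer_Chan, CPU_REQ); /* one request, whole TD  */
}
```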
"I'm pretty sure that as long as the DMA priority for the second one is higher and since the DMA can't transfer 2 things at once this should work."
Those are assumptions I cannot prove correct; furthermore, they may change from one CPU implementation (PSoC 5) to another (PSoC 4).
Copying the data does not make much sense when double buffering is wanted. You always get informed when a TD finishes, so an interrupt can be fired. TDs can be chained in a loop, so that data is continuously popped from your filter.
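In plain C, the resulting scheme looks something like this minimal sketch (all names are made up for illustration; the actual TD and interrupt plumbing is left out):

```c
#include <stdint.h>

#define LENGTH 1024u

/* Two buffers filled alternately by chained TDs. The TERMOUT interrupt
   fires when a TD completes and the hardware moves on to the other TD,
   so the CPU can safely read the buffer that was just finished. */
static int16_t buf[2][LENGTH];
static volatile uint8_t filling = 0;   /* index of the buffer the DMA writes next */

static void dma_done_isr(void)         /* would be hooked to the channel's nrq */
{
    filling ^= 1u;                     /* hardware has switched to the other TD */
}

/* The buffer the DMA is NOT writing: safe even for in-place processing. */
static const int16_t *stable_buffer(void)
{
    return buf[filling ^ 1u];
}
```

No copy is needed: the CPU always works on the buffer the DMA is not touching.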
Your ADC conversion rate is rather high: at 78 ksps, a buffer of 1024 elements leaves you only about 13 ms to do your computation on those samples. Sounds quite tough!
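As a quick check of that estimate (assuming 78 ksps and a 1024-element buffer, as discussed above):

```c
/* time available to process one buffer before the DMA wraps around */
static double buffer_fill_ms(unsigned samples, double sample_rate_hz)
{
    return (double)samples / sample_rate_hz * 1000.0;
}
/* buffer_fill_ms(1024, 78000.0) is about 13.1 ms */
```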
I have done this before, though, directly from the ADC without the filter in between, and it worked flawlessly.
When I did it before, the first DMA's array looked the same as the first DMA's array does in this case, with a dip in it. I think this is because by the time the CPU disables the TD, even though it does so in the ISR, the DMA has already had time to overwrite part of the array: the data before the dip is the newest data and the data after the dip is the previous data. Back then, the second DMA's array looked perfectly fine, since it only updated once every 10-20 ms, so as long as I performed my operations on the data (or, in this case, just viewed it) before the 10-20 ms was up, I saw a perfect signal without any overwrites. Even more importantly, the last operations I want to perform are done in place, so I must make sure they run on a different array.
Also, I should mention that I want to use a larger array, either 2048 or 4096 elements, but 1024 seems to be easier for the debugger to work with; otherwise I end up waiting 10 minutes for the data to load. I will also probably switch to the DelSig ADC so I can sample at 44 kHz, which would give me anywhere from 25-100 ms to do the operations, more than enough time.
So it seems to have been an issue with the debugger rather than the actual program.
What I WAS doing: whenever I wanted to check on the data, I would place a breakpoint in advance, say inside the ISR right after the DMA nrq triggers it, and then continue to run the program.
This was giving me a jump in my data for some reason.
What I am doing NOW: I remove all breakpoints and let the program run; when I want it to stop, I place a breakpoint.
It's as simple as that, and my data is flawless and smooth. I have NO idea why this is the case; I know the debugger is not perfect, but this is very surprising to me.
Maybe someone with a little more expertise could share their input, as I know the debugger is heavily relied upon.
There are two things to know about breakpoints:
A BP stops CPU execution, but it will not stop any hardware. DMA does not use CPU instructions when fired; handling interrupts does use the CPU.
Placing a BP into running code takes some time; I once had an issue reading a counter that did not give the expected results.