XMC™ Forum Discussions
XMC™
Hello Forum User,
On my EBU an external device (a display) is working:
#define EBU_EXT_RAM_REGION_BASE 0x60000000UL
/*GDDRAM base address assignment */
#define GDDRAM EBU_EXT_RAM_REGION_BASE
Currently I send data to my display in a loop:
for (uint16_t i = 0; i < 12; i++) {
    for (uint16_t j = 0; j < 320; j++) {
        *(volatile uint16_t *)(GDDRAM) = FrameBuffer.DisplayBuffer.value_u16[0];
        *(volatile uint16_t *)(GDDRAM) = FrameBuffer.DisplayBuffer.value_u16[1];
        *(volatile uint16_t *)(GDDRAM) = FrameBuffer.DisplayBuffer.value_u16[2];
    }
}
Now I want to use a DMA channel instead:
XMC_DMA_CH_CONFIG_t GPDMA0_Ch0_config =
{
    .enable_interrupt = true,
    .dst_transfer_width = XMC_DMA_CH_TRANSFER_WIDTH_32,
    .src_transfer_width = XMC_DMA_CH_TRANSFER_WIDTH_16,
    .dst_address_count_mode = XMC_DMA_CH_ADDRESS_COUNT_MODE_NO_CHANGE,
    .src_address_count_mode = XMC_DMA_CH_ADDRESS_COUNT_MODE_INCREMENT,
    .dst_burst_length = XMC_DMA_CH_BURST_LENGTH_8,
    .src_burst_length = XMC_DMA_CH_BURST_LENGTH_8,
    .enable_src_gather = true,
    .enable_dst_scatter = false,
    .transfer_flow = XMC_DMA_CH_TRANSFER_FLOW_M2M_DMA,
    .src_addr = (uint32_t) &FrameBuffer.DisplayBuffer_u32,
    .dst_addr = (uint32_t) GDDRAM,
    .src_gather_interval = 1,
    .src_gather_count = 1,
    .dst_scatter_interval = 0,
    .dst_scatter_count = 0,
    .block_size = 3840,
    .transfer_type = XMC_DMA_CH_TRANSFER_TYPE_SINGLE_BLOCK,
    .priority = XMC_DMA_CH_PRIORITY_0,
    .src_handshaking = XMC_DMA_CH_SRC_HANDSHAKING_SOFTWARE,
};
XMC_DMA_Init(XMC_DMA0);
XMC_DMA_CH_Init(XMC_DMA0, 0, &GPDMA0_Ch0_config);
XMC_DMA_CH_EnableEvent(XMC_DMA0, 0, XMC_DMA_CH_EVENT_BLOCK_TRANSFER_COMPLETE);
NVIC_SetPriority(GPDMA0_0_IRQn,11);
NVIC_EnableIRQ(GPDMA0_0_IRQn);
I have two questions: is the initialisation OK, and how can I start a single-block transfer?
Thanks
EbbeSand
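Not an official answer, but a sketch of how a single-block memory-to-memory transfer is usually started with the XMCLib GPDMA driver: after XMC_DMA_CH_Init(), enabling the channel starts the transfer (with software handshaking there is no peripheral request to wait for). The handler name below assumes the standard XMC4500 CMSIS vector table; verify the function names against your XMCLib version.

```c
/* Sketch, assuming the XMCLib GPDMA driver (xmc_dma.h). */
XMC_DMA_CH_Enable(XMC_DMA0, 0U);   /* enabling the channel starts the block transfer */

/* The enabled block-transfer-complete event is then serviced here: */
void GPDMA0_0_IRQHandler(void)
{
    XMC_DMA_IRQHandler(XMC_DMA0);  /* clears flags and calls any registered event handler */
}
```

A per-channel callback can be registered with XMC_DMA_CH_SetEventHandler() before enabling the channel, so the block-complete event lands in your own code rather than a bare ISR.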
XMC™
Hi
I have recently installed DAVE 4 and set up an XMC4500 Relax Kit. I have managed to run a UART example that communicates between the XMC4500 and a PC.
Now I am looking for a reference guide or application examples showing serial communication via UART between XMC microcontrollers. At the moment I have two XMC4500 Relax Kits, and I want to create a communication channel between the two kits using UART.
Are there any examples, or a reference guide showing how to connect the two microcontrollers?
Best Regards
Jamal
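For a board-to-board link the wiring is TX of board A to RX of board B, RX of A to TX of B, plus common GND. A minimal sketch using the XMCLib UART driver follows; the pin and input-source names (P1_4/P1_5, USIC0_C0_DX0_P1_4) are the Relax Kit's usual USIC0_CH0 mapping but are assumptions here, so check them against the XMC4500 port map before use.

```c
/* Sketch, assuming XMCLib (xmc_uart.h / xmc_gpio.h); run the same
   init on both boards, then exchange bytes with
   XMC_UART_CH_Transmit() / XMC_UART_CH_GetReceivedData(). */
#include "xmc_uart.h"
#include "xmc_gpio.h"

static const XMC_UART_CH_CONFIG_t uart_cfg =
{
    .baudrate  = 115200U,
    .data_bits = 8U,
    .stop_bits = 1U,
};

void uart_link_init(void)
{
    XMC_UART_CH_Init(XMC_UART0_CH0, &uart_cfg);

    /* Route the RX input before starting the channel (placeholder source). */
    XMC_UART_CH_SetInputSource(XMC_UART0_CH0, XMC_UART_CH_INPUT_RXD, USIC0_C0_DX0_P1_4);
    XMC_UART_CH_Start(XMC_UART0_CH0);

    /* TX pin as alternate-function output, RX pin as plain input. */
    XMC_GPIO_SetMode(P1_5, XMC_GPIO_MODE_OUTPUT_PUSH_PULL_ALT2);
    XMC_GPIO_SetMode(P1_4, XMC_GPIO_MODE_INPUT_TRISTATE);
}
```

Both sides must of course agree on the baud rate and frame format.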
XMC™
Hi!
I am using the USIC in SSC mode over DMA.
Word size: 8 bit;
Mode: single-SPI in master mode.
I program two DMA channels (receive and transmit) to service the SPI. The DMA works with burst size BURST_SPI == 4 (GPDMA_CH.CTLL.MSIZE == BURST_SPI).
USIC SSC FIFO size == 16.
RBCTR.LIMIT = BURST_SPI-1; RBCTR.SRBTM == 0; RBCTR.SRBTEN == 0; RBCTR.RNM == 0; RBCTR.LOF == 1; RBCTR.SRBIEN == 1; RBCTR.RBERIEN == 1.
TBCTR.LIMIT = BURST_SPI; TBCTR.STBTM == 0; TBCTR.STBTEN == 0; TBCTR.LOF == 0; TBCTR.STBIEN == 1; TBCTR.TBERIEN == 1.
I set the priority of the TX DMA channel lower than the priority of the RX DMA channel (GPDMA_CH.CFGL.CH_PRIOR).
I enabled the interrupt from the RX DMA channel (GPDMA.MASK.TFR) on completion of the entire block transfer.
I pass a 16-byte block to the SPI.
If the SPI SCLK is <= SYSTEM_CLK/4, all is OK: I receive a single interrupt, no problems, and all data is transmitted/received correctly.
If the SPI SCLK is == SYSTEM_CLK/2, I get the error state "reading an empty receive buffer" (USIC.TRBSR.RBERI == 1), and I receive invalid data (duplicated bytes).
Trying to correct this error, I reduced TBCTR.LIMIT to 1. The underflow error now occurs less often, but it still happens (not with a block size of 16 bytes, but starting from about ~1024 bytes and larger).
I think the scenario in which the error occurs is (for TBCTR.LIMIT = 1):
TX.FIFO RX.FIFO
0 0 ;DMA.TX writes BURST_SPI bytes to TX.FIFO (1-st burst)
4 0
3 0 ;TX.FIFO -> TX.SHIFT register (1-st byte)
3 1 ;was transmitted/received last bit of 1-st byte
2 1 ;TX.FIFO -> TX.SHIFT register (2-nd byte)
2 2 ;was transmitted/received last bit of 2-nd byte
1 2 ;TX.FIFO -> TX.SHIFT register (3-rd byte)
1 3 ;was transmitted/received last bit of 3-rd byte
0 3 ;TX.FIFO -> TX.SHIFT register (4-th byte); DMA.TX event -> DMA.TX writes new BURST_SPI bytes to TX.FIFO (2-nd burst)
4 3
4 4 ;was transmitted/received last bit of 4-th byte;
At this point the DMA.RX event is generated and the RX DMA channel starts reading BURST_SPI bytes from RX.FIFO. At the same time, reception from MISO continues.
When the first byte has been read from RX.FIFO, the trigger of the DMA.RX event is cleared.
If at that moment reception of a new byte into RX.FIFO completes, a new DMA event is generated, although RX.FIFO still has too few bytes for the 2nd burst (the FIFO contains only BURST_SPI + 1 bytes).
Then the reading of the 1st burst from RX.FIFO finishes, and reading of the 2nd burst begins (according to the second RX DMA event), although there is not enough data for the second burst.
At this point, an underflow occurs.
How can I solve this problem?
I want to start the transfer of the data block once and, at the end of the transmission of the whole block, get a single interrupt signalling completion of the entire transfer.
Maybe I overlooked something, or programmed the USIC SSC or GPDMA configuration incorrectly?
Thank you in advance for your help.
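Not a verified fix, but a sketch of the usual way to remove this kind of race: make each RX DMA request move no more data than the FIFO trigger guarantees is present, so a prematurely re-armed request cannot drain an underfilled FIFO. The field names below follow the XMCLib GPDMA configuration struct and are assumptions.

```c
/* Sketch: with RBCTR.LIMIT = BURST_SPI - 1 and LOF = 1, a request fires
   once more than LIMIT entries are in the RX FIFO, i.e. at least
   BURST_SPI bytes are available.  A cleared request can re-fire early
   (as in the trace above) while fewer bytes are present, so the
   conservative option is single transfers on the RX side: */
rx_dma_cfg.src_burst_length = XMC_DMA_CH_BURST_LENGTH_1;  /* one entry per request */

/* Alternatively, keep the burst but make the trigger stricter, so it
   cannot fire again until a full burst has accumulated in the FIFO. */
```

Either way, the guiding rule is: the amount moved per DMA request must never exceed the amount the FIFO trigger condition guarantees.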
XMC™
Hi,
I have a function that contains a small section of code that must not be interrupted. I wrap that section in the usual __disable_irq() before and __enable_irq() after.
As it happens, I need to call this function from various places, some of which are within an ISR.
In some official docs, I've read that an EXC_RETURN value loaded into the PC at the end of an ISR is what marks the end of the ISR and allows the CPU to take another interrupt, if one is pending.
But in other official docs, I've read that __enable_irq() allows all interrupts to be taken.
Which is correct?
I'm concerned that the __enable_irq() within my function, when called from an ISR, would allow another interrupt to be taken before my ISR finishes executing.
Best regards,
David
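For what it's worth, a common pattern (sketched here with the CMSIS intrinsics) sidesteps the question entirely: save PRIMASK on entry and only re-enable interrupts if they were enabled beforehand, so the critical section behaves identically whether it is entered from thread mode or from an ISR:

```c
/* Sketch using CMSIS-Core intrinsics (__get_PRIMASK, __disable_irq,
   __enable_irq): restore the previous interrupt-masking state instead
   of unconditionally re-enabling interrupts. */
static inline void critical_section_example(void)
{
    uint32_t primask = __get_PRIMASK();  /* nonzero if interrupts already masked */
    __disable_irq();

    /* ... code that must not be interrupted ... */

    if (primask == 0U) {
        __enable_irq();                  /* only re-enable if we were the ones who disabled */
    }
}
```

With this shape, a caller that already holds interrupts disabled keeps them disabled after the function returns.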
XMC™
I am looking for the latest version of the startup_XMC4500.s startup file for IAR EWARM.
IAR told me that this file is provided by the chip manufacturer, thus Infineon in my case.
Sadly, I cannot find anything on the Infineon website. I searched my DAVE folders on the hard disk, but those files are for GCC.
XMC™
Hello,
What is the duration of the startup calibration of the ADC on the XMC4500? How many cycles, and on which clock? Is there any possibility that the VADC_G.ARBCFG.CAL bit stays set?
I mean, is it always guaranteed that the "while" loop in this code completes?
// start of calibration is allowed only after ANONS has been set
SET_BIT(VADC->GLOBCFG, VADC_GLOBCFG_SUCAL_Pos);
// wait for calibration to complete
while (VADC_G0->ARBCFG & VADC_G_ARBCFG_CAL_Msk)
{
}  // TODO: dangerous wait
thanks
rum
XMC™
Hello Infineon,
there is a (non-critical) error in the XMC1100 Flash library:
while (XMC_FLASH_IsBusy() == true)
The "== true" comparison is wrong, because XMC_FLASH_IsBusy() is defined as
return (bool)(XMC_FLASH_GetStatus() & XMC_FLASH_STATUS_BUSY);
IOW the resulting bit is XMC_FLASH_STATUS_BUSY.
The code works because the definitions of XMC_FLASH_STATUS_BUSY and "true" both happen to be 1.
Nevertheless, it is an error to compare bit masks from different definitions.
Use "while (XMC_FLASH_IsBusy())" for correct code.
BTW: Where is the best place to report such an error?
Greetings
Oliver
XMC™
Hi guys,
I was trying to use the ASC Bootloader example. I am now stuck: I can't connect to the device.
J-Flash doesn't connect to the target device.
I cannot connect with DAVE or J-Flash Commander either.
The device and COM port are properly connected over pins 21, 27, 30.
When using the flashing program, I can load the hex file, but when it starts to erase, it just gets stuck and never proceeds.
Any help is appreciated 🙂 thanks in advance.
I hope I've not bricked this.
Any way of doing a factory erase or a flash memory erase would also help me.
Cheers,
Manish
XMC™
hi,
I am working on a customer's project (an e-bike) using the XMC1300, and so I am using the XMC1000 Motor Control Application Kit as a reference.
I am reading the schematics for both boards, the PMSM_LV_15W card and the XMC1300 CPU card, which are available in the user manuals. These schematics are not very clear or legible.
Could you send the schematics for these boards?
Thank you in advance.
Best Regards,
Felix