PSoC 5LP GPIO and Register Timing


RiAs_1660756
Level 4

I have two questions:

1. Where can I find the timings for reading and writing ports and registers, including DMA transfers?  I have a time-critical application and want to know the numbers.

I ran a simple test to see how fast I could write to a pin using the API write function, a direct write to the port address, and bit-banding, and then by writing to a control register connected to a pin.  The results were surprising: the bit-banding took more clock cycles than a direct write, and the number of clock cycles jumped at 48 MHz, from typically 6 to 14.

Clearly, this isn't a simple issue. 

2. Regarding warnings about asynchronous paths: if I get a warning about an asynchronous input being used as the clock on a D-type, that means the D-type is actually clocked on the internal clock edge, not my external clock; otherwise it wouldn't matter, would it?  If I use a Sync component, I'll lose another clock cycle: one to get through the Sync and the next to clock the D?

I think the Sync component is basically a D flip-flop anyway, so despite the warning I get no benefit from using it and only delay my signal unnecessarily.

Am I wrong here?

1 Solution
Len_CONSULTRON
Level 9

Richard,

Question 1.

All CPU resources have their own access timing inside the internal bus structure.

The fastest tend to be FLASH and SRAM.  This is because they do nothing but allow for Reads or Writes.

The Register resources may look like SRAM from the CPU's perspective but may require additional CPU cycles to perform the required function.

EEPROM is the worst for extended timing.

Once the CPU frequency goes above a resource's maximum access rate, additional CPU cycles are needed to complete the operation.  This is why you saw more CPU cycles for the register access when going above 48 MHz: each access now costs more CPU cycles, so it takes a little longer to access the resource.

Performing a "smarter", more efficient register access, like the one odissey1 provided, can help.
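For illustration only, here is a minimal sketch of the three write styles Richard compared (not odissey1's code). It assumes a pin component instance named Pin_1, so that the generated Pin_1_Write(), Pin_1_DR, Pin_1_MASK, Pin_1__DR and Pin_1_SHIFT macros exist, and it uses the standard Cortex-M3 peripheral bit-band alias formula.

```c
#include <project.h>

/* Sketch only: assumes a pin component instance named Pin_1. */

/* Cortex-M3 peripheral bit-band alias: each bit of a register in the
 * 0x40000000 region gets its own word at
 * 0x42000000 + (byte_offset * 32) + (bit_number * 4).                */
#define BITBAND_PERI(addr, bit) \
    ((volatile uint32 *)(0x42000000u + \
        (((uint32)(addr) - 0x40000000u) * 32u) + ((uint32)(bit) * 4u)))

int main(void)
{
    /* 1. Generated API call: most portable, most cycles. */
    Pin_1_Write(1u);

    /* 2. Direct read-modify-write of the port data register via the
     *    generated Pin_1_DR and Pin_1_MASK macros.                   */
    Pin_1_DR |= Pin_1_MASK;             /* set the pin   */
    Pin_1_DR &= (uint8)~Pin_1_MASK;     /* clear the pin */

    /* 3. Bit-band alias: one store per bit, no read-modify-write. */
    volatile uint32 *pinBit = BITBAND_PERI(Pin_1__DR, Pin_1_SHIFT);
    *pinBit = 1u;
    *pinBit = 0u;

    for (;;)
    {
    }
}
```

The API call performs essentially the same read-modify-write underneath plus the function-call overhead, which is where most of the extra cycles come from; the bit-band write avoids the read-modify-write, but the alias access itself is not free, which is consistent with your measurement.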

The API calls for component resources are great and provide a certain level of code abstraction.  However, the price of that abstraction can be execution time, because the API call has to operate across many platforms.

DMA access to resources follows the same timing as CPU access.  The DMA hardware is designed to know each resource's timing limitations relative to the DMA input clock and adjusts accordingly.
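As a sketch of off-loading the register writes entirely to the DMA controller, the usual CyDmac setup pattern looks roughly like this; the instance names DMA_1 and Control_Reg_1 and the pattern array are placeholders for whatever your design actually uses.

```c
#include <project.h>

/* Sketch only: assumes a DMA component instance DMA_1 and a
 * control register instance Control_Reg_1 in the schematic.   */

static uint8 pattern[8] = { 1u, 0u, 1u, 1u, 0u, 0u, 1u, 0u };

void StartPatternDma(void)
{
    /* 1 byte per burst, 1 request per burst; upper 16 bits of the
     * SRAM source and peripheral destination addresses.            */
    uint8 channel = DMA_1_DmaInitialize(1u, 1u,
                        HI16((uint32)pattern),
                        HI16((uint32)Control_Reg_1_Control_PTR));

    uint8 td = CyDmaTdAllocate();

    /* Move sizeof(pattern) bytes, chain back to the same TD,
     * and increment only the source address.                  */
    CyDmaTdSetConfiguration(td, sizeof(pattern), td, TD_INC_SRC_ADR);
    CyDmaTdSetAddress(td, LO16((uint32)pattern),
                          LO16((uint32)Control_Reg_1_Control_PTR));

    CyDmaChSetInitialTd(channel, td);
    CyDmaChEnable(channel, 1u);

    /* Transfers are then paced by whatever drq source the
     * DMA component is wired to in the schematic.           */
}
```

Once the channel is enabled, each request moves one byte from SRAM into the control register with no CPU involvement, so the CPU cost is limited to the one-time setup.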

Suggestion:

You are using a PSoC 5, which has what I think is a great resource: UDBs.

Is it possible in your application to create a UDB-based HW state machine to alter the registers?

I have found that if you can create a HW state machine to handle much of the time-critical work, you unburden the CPU and the DMA from having their clock cycles consumed and can perform the needed operations with the lowest latency.

Question 2.

The warning about the "asynchronous path" is just that: a warning.

Creator doesn't prevent the "Application Build" phase.  It just warns you about a potential timing violation that might conflict with your design intent.

It is up to you to determine whether the timing violation (usually an input setup time relative to the clock) will be a problem.  For example, the UART component never produces an asynchronous-path warning, even though the Rx input is guaranteed to be asynchronous to the UART clock.

The major problem with an asynchronous input on the clock of a latching component is a "metastable" condition.  If the input switches within the setup or hold window of the latch relative to the clock, the output can actually oscillate for a period of time (usually a few ns).  If the downstream logic fed by that output would be adversely affected by an oscillation, then you have a design problem.

I experienced a metastable timing condition only once, early in my engineering career.  The event occurred once every one to two days, and it took two weeks to isolate and capture the issue in testing.  Luckily, it happened before production, so it could be corrected before the issue was passed on to the customer.

The usual fix is to double-clock the input through two DFFs in series.  The first DFF may still produce an oscillating output, but by the time the second DFF samples it, that output should have settled well before the setup time, so the second DFF's output should be free of the oscillation.  Sadly, this adds at least one clock cycle of latency.

Len
"Engineering is an Art. The Art of Compromise."
