
USB superspeed peripherals

I am developing a product with the FX3S and would like to use the GPIF II interface to interact with an FPGA.

As a first step, I used two FX3 devices: one acted as the master and the other as the slave.

My question is: how do I compute the number of clock cycles needed to read from and write to the slave side?

The following experiments were done with the default source code of AN87216, with the only change being the data bus width, reduced from 32 bits to 16 bits.

1. FX3 master reads from FX3 slave (8 bytes sent from the slave's OUT endpoint in Control Center)

The RD_ signal lasts for 8 clock cycles (I think it should be 4 if the bus width is 16 bits?)


2. FX3 master writes to FX3 slave (8 bytes sent from the master's OUT endpoint in Control Center)

The WR_ signal lasts for 6 clock cycles (I think it should be 4 if the bus width is 16 bits?)
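For reference, the expected minimum number of data-phase cycles is just the transfer size divided by the bus width in bytes. A minimal sketch of that arithmetic (the function name is illustrative, not part of the FX3 SDK):

```python
def data_phase_cycles(num_bytes: int, bus_width_bits: int) -> int:
    """Minimum clock cycles needed to move num_bytes over a GPIF data bus
    of the given width, ignoring any state-machine or flag latency."""
    bytes_per_cycle = bus_width_bits // 8
    return num_bytes // bytes_per_cycle

print(data_phase_cycles(8, 16))  # 4 cycles for 8 bytes on a 16-bit bus
print(data_phase_cycles(8, 32))  # 2 cycles for 8 bytes on a 32-bit bus
```

This is why 4 cycles is the naive expectation; the extra cycles observed on the strobes come from latencies outside the data phase.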


1 Solution
Moderator


Regarding the 1st timing diagram:

-> Data sampling starts as soon as CS (the trigger point) is asserted and continues for the next 4 clock cycles (8 bytes of data on the 16-bit bus).

-> In the 5th clock cycle, the state machine transitions to the next state and asserts FLAGA and FLAGC.

-> Due to some internal latency, FLAGA and FLAGC get asserted a few nanoseconds after the 5th clock edge, so they are sampled in the 6th clock cycle. (You can verify this by zooming into the trace and placing markers to measure the exact point at which FLAGA gets asserted.)

-> Because SLRD is early, it is de-asserted in the 8th clock cycle (2 cycles after FLAGA and FLAGC).

-> Since it is a short packet, there is an additional 1-cycle delay in INTR_CPU and in committing the packet to the host.

That is why the operation takes 8 clock cycles instead of 4.
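The steps above can be tallied up as a simple cycle budget. A sketch, assuming the interpretation given in this answer (the dictionary keys are descriptive labels only, not register or firmware names):

```python
# Why SLRD stays asserted for 8 cycles instead of 4 on a 16-bit bus:
read_cycle_breakdown = {
    "data_sampling": 4,        # 8 bytes on a 16-bit bus = 4 data cycles
    "state_transition": 1,     # 5th cycle: FSM transitions, asserts FLAGA/FLAGC
    "flag_latency": 1,         # flags land late, sampled in the 6th cycle
    "early_slrd_deassert": 2,  # SLRD de-asserts 2 cycles after the flags
}
total = sum(read_cycle_breakdown.values())
print(total)  # 8 clock cycles, matching the captured RD_ trace
```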



