ML: problem with the generated .tflite models

jsmith678x
Level 4

I think there is a problem with the .tflite models generated by the ML Configurator. Suppose I have a Python machine-learning project with an .h5 model and a .tflite model (converted by a Python script); I have testbenches and everything works. Now I'd like to run the .tflite model on the PSoC6 MCU, so I open the ML Configurator and generate the required sources. But the generated .tflite model is not compatible with the originally exported .tflite file: the input_scale and input_zero_point properties differ, which is a problem because I can't feed the same inputs in Python and on the MCU. I'd suggest an option to at least keep the original configuration for the .tflite export. It's nice that we have Imagimob, but it would also be nice to run Python-saved models, because there are a lot of resources on the Internet that we could use.
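For reference, a quick way to see the mismatch is to dump the input quantization parameters of both files with the TFLite interpreter. A minimal sketch, assuming TensorFlow is installed (the file names are placeholders, not the actual attachments):

```python
import tensorflow as tf

def print_input_quantization(path):
    # Load the model and read the input tensor's quantization parameters.
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    details = interpreter.get_input_details()[0]
    scale, zero_point = details["quantization"]
    print(f"{path}: dtype={details['dtype'].__name__}, "
          f"input_scale={scale}, input_zero_point={zero_point}")

# Placeholder names for the Python-exported and ML-Configurator-generated models.
for path in ("python_export.tflite", "ml_configurator_export.tflite"):
    print_input_quantization(path)
```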

9 Replies
MuhammadNanda_K
Moderator

Hello @jsmith678x,

Thank you for raising this concern.
I will check and discuss this matter with the internal team first.

Thank you and regards,
Muhammad Nanda

RodolfoGL
Employee

Could you share the .tflite model you generated? What scaling factor and zero point are used, and what is the input data type: int8 or uint8?

Note that when deploying a model, we might not support all the features/operators that are available when running from Python scripts. On the firmware side, we use the TensorFlow Lite for Microcontrollers (TFLM) inference engine.
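If it helps, one way to make the Python export line up with what the firmware expects is full-integer quantization with signed 8-bit input/output. A minimal sketch, assuming a Keras .h5 model; the model path, input shape, and calibration data are illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("model.h5")  # hypothetical path

def representative_dataset():
    # Yield a handful of samples in the training input shape/range so the
    # converter can calibrate scale and zero point (shape is illustrative).
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 built-in ops and force signed 8-bit input/output,
# matching what TensorFlow Lite for Microcontrollers expects here.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```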

jsmith678x
Level 4

I've shared the .tflite models.

classification_quant.tflite: saved from Python

IMG_CLASS_int8x8.tflite: generated by ModusToolbox from the .h5 file

jsmith678x
Level 4

Maybe the problem is that my Python-saved model's input dtype is uint8, while the ModusToolbox-generated file uses int8.

RodolfoGL
Employee

Yes, that could explain the problem. Currently, we only support signed 8-bit integer input with TFLM. You would need to convert uint8 to int8 before running inference on the PSoC.
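A minimal sketch of that conversion, assuming the uint8 and int8 quantizations share the same scale with zero points offset by 128 (the usual case for full-integer models):

```python
import numpy as np

def uint8_to_int8(x_uint8: np.ndarray) -> np.ndarray:
    # r = scale * (q - zero_point) is unchanged when both q and zero_point
    # shift by -128, so the conversion is a plain offset (widen first to
    # avoid uint8 wraparound).
    return (x_uint8.astype(np.int16) - 128).astype(np.int8)

x = np.array([0, 128, 255], dtype=np.uint8)
print(uint8_to_int8(x))  # [-128    0  127]
```

The same offset can be applied element-wise in the firmware before copying samples into the TFLM input tensor.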

MuhammadNanda_K
Moderator

Hello @RodolfoGL ,
Thanks for your explanation and the discussion. 🙂

Hello @jsmith678x ,
If you have made any progress or still have any doubts, please let us know. 🙂

Best regards,
Muhammad Nanda

MuhammadNanda_K
Moderator

Hello @jsmith678x ,

Do you still have any concerns about this issue?

Thank you and regards,
Muhammad Nanda

MuhammadNanda_K
Moderator

Hello @jsmith678x ,

If you have no further concerns on this thread, I will close it by the end of this week. 🙂

Thank you and regards,
Muhammad Nanda

MuhammadNanda_K
Moderator

Hello @jsmith678x ,

I haven't heard back from you for nearly 3 weeks.
Hopefully your issue has been solved, or perhaps another task has taken higher priority.

If you have any other queries in the future, please do not hesitate to create a new thread. 🙂

Thank you and regards,
Muhammad Nanda
