jsmith678x
Level 4

Hi,
I'm using a Keras model for digit image classification in Python.
https://keras.io/examples/vision/mnist_convnet/

This is a well-known example. The images are (28, 28) grayscale images. I have both the .h5 and the .tflite model. The inputs are C arrays converted from JPG files using a Python script.
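For reference, here is a rough sketch of what such a converted array might look like (the exact contents and the normalization to [0, 1], as in the Keras example, are my assumptions, not the actual generated file):

/* Hypothetical shape of the generated img_array.c: one 28x28 grayscale
 * image, pixels normalized to [0, 1] like in the Keras MNIST example.
 * Only the first few values are shown; the rest default to 0.0f. */
const float img_array[28 * 28] = {
    0.0f, 0.0f, 0.0f, /* ... remaining 781 pixel values ... */
};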

The problem: I get wrong classification results on my PSoC 6 eval board (CY8CKIT-062S2-43012) when using quantization.
I've followed the ModusToolbox™ Machine Learning user guide and used the 'ML Configurator' in ModusToolbox.
I've also followed the 'Machine_Learning_Gesture_Classification' example and copied some important parts from that project.

I have a workaround: when I quantize the input myself, there is no need to call 'mtb_ml_utils_model_quantize' at all, and then everything is OK. However, this is very unusual. This is the path taken in main.c when 'QUANTIZED_INPUT=1'; a sketch of the two paths follows.
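For illustration, a minimal sketch of how the two paths in main.c might look (QUANTIZED_INPUT, the buffer names, and the mtb_ml_model_run() call are my reconstruction of the attached code, not a quote from it):

#include "mtb_ml_model.h"
#include "mtb_ml_utils.h"

extern const float img_array[28 * 28];      /* float input converted from a JPG */
static MTB_ML_DATA_T quantized[28 * 28];    /* int8 buffer for the quantized model */

cy_rslt_t classify(mtb_ml_model_t *model_obj)
{
#if QUANTIZED_INPUT
    /* Workaround: quantize the input manually with a plain C cast. */
    for (int i = 0; i < model_obj->input_size; i++) {
        quantized[i] = (MTB_ML_DATA_T)(img_array[i] / model_obj->input_scale
                                       + model_obj->input_zero_point);
    }
#else
    /* Normal path: let the middleware quantize the float input. */
    mtb_ml_utils_model_quantize(model_obj, img_array, quantized);
#endif
    return mtb_ml_model_run(model_obj, quantized);
}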

What did I miss? What might the problem be?

I've attached the Makefile and main.c.

6 Replies
MuhammadNanda_K
Moderator

Hello @jsmith678x,

Thank you for your query. 🙂

This issue is under internal discussion at the moment.
We will get back to you as soon as possible.

I am sorry for any inconvenience.

Thank you and regards,
Muhammad Nanda

MuhammadNanda_K
Moderator

Hello @jsmith678x,

I am sorry, I just wanted to clarify: do you also use quantization in your training?
The input data should be prepared the same way in the training/learning phase and in the testing phase.

Thank you and regards,
Muhammad Nanda

jsmith678x
Level 4

The input data type was specified incorrectly. I've fixed it to float, and I also have a new img_array.c file. Unfortunately, after fixing these problems the results are still not good. I tried to link the app directly with libtensorflow-microlite.a and avoid calling 'mtb_ml_utils_model_quantize', but libtensorflow-microlite.a cannot be linked because there is a 'main' entry in the lib file (as I posted today).

When I replace 'mtb_ml_utils_model_quantize' with my own quantize function, the result is OK. Are these two functions not the same?

void quantize_input(mtb_ml_model_t *my_model_obj, float *src, int8_t *dest)
{
    for (int i = 0; i < my_model_obj->input_size; i++) {
        float x = src[i];
        int8_t x_quantized = x / my_model_obj->input_scale + my_model_obj->input_zero_point;
        dest[i] = x_quantized;
    }
}

 

MuhammadNanda_K
Moderator

(Accepted solution)

Hello @jsmith678x,

This is the quantize implementation in mtb_ml_utils.c:

cy_rslt_t mtb_ml_utils_model_quantize(const mtb_ml_model_t *obj, const float* input_data, MTB_ML_DATA_T* quantized_values)
{
    if (obj == NULL || input_data == NULL || quantized_values == NULL) {
        return MTB_ML_RESULT_BAD_ARG;
    }

#if !defined(COMPONENT_ML_FLOAT32)
    int32_t size = obj->input_size;
    const float *value = input_data;
#if defined(COMPONENT_ML_IFX)
    return mtb_ml_utils_convert_flt_to_int(value, quantized_values, size, obj->input_q_n);
#else
    return mtb_ml_utils_convert_tflm_flt_to_int8(value, quantized_values, size, obj->input_scale, obj->input_zero_point);
#endif
#else
    return MTB_ML_RESULT_SUCCESS;
#endif
}

 

Since your Makefile sets:

NN_INFERENCE_ENGINE=tflm

 

the build takes the "#else" branch and calls "mtb_ml_utils_convert_tflm_flt_to_int8()".
Inside that function, since "MTB_ML_HAVING_CMSIS_DSP" is not defined, the implementation is:

    loop_count = size;
    while (loop_count > 0)
    {
        val = (*in++ / scale) + zero_point;
        val += val > 0.0f ? 0.5f : -0.5f;
        if ((int32_t) val > SCHAR_MAX)
            *out++ = SCHAR_MAX;
        else if ((int32_t) val < SCHAR_MIN)
            *out++ = SCHAR_MIN;
        else
            *out++ = (int8_t) (val);

        loop_count--;
    }

 

It is different from your own implementation: the middleware rounds to nearest and saturates, while a plain cast truncates. See the comparison below.
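To see the difference concretely, here is a small standalone comparison of the two conversions (the scale and zero-point values are only plausible examples for a [0, 1] input, not values taken from your model):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Middleware behaviour: round half away from zero, then saturate to int8. */
static int8_t lib_quantize(float in, float scale, int zero_point)
{
    float val = (in / scale) + zero_point;
    val += val > 0.0f ? 0.5f : -0.5f;
    if ((int32_t) val > SCHAR_MAX)
        return SCHAR_MAX;
    if ((int32_t) val < SCHAR_MIN)
        return SCHAR_MIN;
    return (int8_t) val;
}

/* quantize_input() behaviour: a plain cast truncates toward zero, no saturation. */
static int8_t own_quantize(float in, float scale, int zero_point)
{
    return (int8_t) (in / scale + zero_point);
}

int main(void)
{
    const float scale = 1.0f / 255.0f;  /* example scale for [0, 1] pixels */
    const int zero_point = -128;        /* example zero point */

    /* in = 0.5 maps to 127.5 - 128 = -0.5: the middleware rounds it to -1,
     * while the plain cast truncates it to 0. */
    const float inputs[] = { 0.0f, 0.5f, 1.0f };
    for (int i = 0; i < 3; i++) {
        printf("in=%.2f  middleware=%4d  own=%4d\n", inputs[i],
               lib_quantize(inputs[i], scale, zero_point),
               own_quantize(inputs[i], scale, zero_point));
    }
    return 0;
}

Off-by-one differences like these can be enough to flip the predicted class for borderline inputs.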

My suggestion is to use the same quantization algorithm and data types across training and testing.
It is fine to use your own implementation as long as it matches what was used during training. 🙂

Thank you and regards,
Muhammad Nanda

MuhammadNanda_K
Moderator

Hello @jsmith678x,

I am sorry, I just want to follow up. 🙂
Could you kindly give us an update on the status of this issue?

Thank you and regards,
Muhammad Nanda

MuhammadNanda_K
Moderator

Hello @jsmith678x,

Since you have marked an answer as the solution, I will proceed to lock this discussion thread.

If you have any other issue in the future, please do not hesitate to create a new thread. 🙂

Thank you and regards,
Muhammad Nanda
