4.12 format

Andrew7

I want to learn the 4.12 format which the PSoC uses/understands. Can anyone enlighten me with the concept?

Len_CONSULTRON

Anshuman7,

I'm very familiar with the PSoC.  What is the 4.12 format?  I've never heard this term.

Len
"Engineering is an Art. The Art of Compromise."

It's the numbers which the PSoC microcontroller understands. Like for addition: 1 + 1 becomes 4096 + 4096.


Anshuman7,

Interesting.  When I needed to add 1+1 on the PSoC I always used 1+1 = 2.

4096+4096 = 8192.

I guess I still don't understand what you are referring to with the 4.12 format.

Len
"Engineering is an Art. The Art of Compromise."

Fixed point arithmetic basics and development of macros for (a) addition of 4.12 + 4.12, (b) multiplication of 4.12 and 4.12 numbers, (c) number format casting - 4.12 to 8.24 and vice versa

Any idea?


Anshuman7,

Do you have any references to PSoC documentation that refer to the 4.12 format?

I've been developing HW and SW for CPUs for over 40 years.  I'm familiar with MANY different formats computers use. (Many of them are no longer in use due to improvements in CPU architecture.)

Having said that, I normally use the standard 'C' compiler operations for performing math on variables.  I usually don't worry how the "sausage" is made unless I have to.

Is there a reason that you need to know this level of detail regarding the number formats?

Len
"Engineering is an Art. The Art of Compromise."

Actually, to represent decimals, fixed-point arithmetic is required. Like, the PSoC treats 0.5 as 0 in integer math, right? So if it is 4.12 format, then 0.5 will be 4096/2, i.e. 2048.
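For example, a minimal sketch of the idea (the variable names are just for illustration):

// Plain integer math drops the fraction; 4.12 keeps it as a scaled integer.
int   a = 1 / 2;                  // 0 -- integer division truncates 0.5
short b = (short)(0.5 * 4096);    // 2048 -- 0.5 in 4.12 format (0.5 * 2^12)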


Anshuman7,

Thank you for this latest post.   

Now I understand what you are asking.  This is the first time I've ever heard of the 4.12 format.

When I need fixed decimal numbers disguised as integers I use what I call the x10 method (for 0.1 precision) or the x100 method (for 0.01 precision).

Instead of using 4096 to equal 1, in the x10 method I use 10 => 1.  In the x100 method I use 100 => 1.

Therefore in x10 the integer 513 = 51.3 and in x100 the integer 89456 = 894.56.  Simple.  Straightforward.

Here is some example code:

#include <stdio.h>   // for sprintf()
// (int32/int64 are the PSoC Creator types from cytypes.h)

// Macros
#define INT_2_x10(x)   ((x)*10)    // convert integer to x10
#define x10_INT(x)     ((x)/10)    // determine x10 integer portion
#define x10_DEC(x)     ((x)%10)    // determine x10 decimal portion
#define INT_2_x100(x)  ((x)*100)   // convert integer to x100
#define x100_INT(x)    ((x)/100)   // determine x100 integer portion
#define x100_DEC(x)    ((x)%100)   // determine x100 decimal portion

int32 var_x10 = 513;     // = 51.3
int32 var_x100 = 89456;  // = 894.56

int main(void)
{
char tstr[100];
...
   var_x10 = var_x10 + INT_2_x10(5);   // add two x10 values
   // multiply two x100 values: use a 64-bit intermediate to avoid
   // overflow, then divide by 100 to restore the x100 scale
   var_x100 = (int32)(((int64)var_x100 * INT_2_x100(593)) / 100);
...
// Printing x10 and x100 results
  sprintf(tstr, "var_x10 = %d.%d    var_x100 = %d.%d \n\r", 
         x10_INT(var_x10),x10_DEC(var_x10),
         x100_INT(var_x100),x100_DEC(var_x100));
...
}

As long as all values use the same scale, addition and subtraction yield meaningful values directly; multiplication and division need a rescale afterwards (the divide by 100 in the example above).

The 4.12 format with 4096 => 1 is basically the same as the x10 and x100 methods.  However, I don't see the immediate value of using 4096.

The advantage of the x10 and x100 methods shows up when you need to debug the code.  In the debugger, when you display the variables in decimal radix, the number only has to be translated by 'eye' by placing the decimal point in the correct position.  This makes debugging easier.

In theory the 4.12 format (4096) is potentially more compiler friendly IF the compiler performs math optimizations that convert the multiplications (x4096) or divisions (/4096) into 12-bit shifts.  This could result in slightly faster code.
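For example, here is a minimal sketch of what such shift-based 4.12 macros might look like (the macro names are invented for illustration, and int32/int64 are assumed to be the PSoC Creator types):

// Hypothetical 4.12 macros.  4096 = 2^12, so scaling by 4096 can be
// implemented as a 12-bit shift instead of a multiply or divide.
#define Q412_SHIFT       12
#define INT_2_Q412(x)    ((x) << Q412_SHIFT)   // integer -> 4.12
#define Q412_INT(x)      ((x) >> Q412_SHIFT)   // 4.12 -> integer portion
// 4.12 * 4.12 -> 4.12: the raw product has 24 fractional bits,
// so shift right by 12 to restore the 4.12 scale
#define Q412_MUL(a,b)    ((int32)(((int64)(a) * (b)) >> Q412_SHIFT))
// Casting between formats is also just a shift:
#define Q412_TO_Q824(x)  ((int32)(x) << 12)    // 4.12 -> 8.24
#define Q824_TO_Q412(x)  ((x) >> 12)           // 8.24 -> 4.12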

Len
"Engineering is an Art. The Art of Compromise."

Yes, I was introduced to this in my class, but only for the 12-bit shift. Thank you so much for explaining, Sir!


You may check this blog for fixed-point format macros:

Simple Fixed-Point Conversion in C

and this one for a custom 4.12 fixed-point library:

https://blog.mbedded.ninja/programming/general/fixed-point-mathematics/

HuEl_264296

The 4.12 format is used for representing non-integer values in a 16-bit word.

Most of us are familiar with floating point numbers for representing non-integers, but what does this mean? When we write decimal numbers, we sometimes represent them like this: 1234.5678, and sometimes like this: 1.2345678×10³, and maybe sometimes like this: 00001234.56780000.

The second example is what we might call a floating point format. We've split the significant digits of the number (12345678, also known as the mantissa) away from its exponent (the 3). This is useful because it lets us represent numbers which are very large or very small, without needing to use lots of zeros.

It's easier to write 1.234×10²⁰ than to write 123400000000000000000. If everyone in the world had to choose one format to use, then it would have to be the floating point format, because you can represent the size of the universe in meters: 8.8×10²⁶ and the diameter of an atom in meters: 6.2×10⁻¹¹ without ever needing to use a lot of zeroes. But the downside is always having to write ×10ˣ after every number.

This is why computers generally use floating point format, because it lets people deal with this huge range of numbers.

However, if your application doesn't actually need to calculate how many hydrogen atoms will fit inside the universe, then you might actually be better off using fixed point numbers. This is especially true in computers.

A 16-bit binary number can represent 65536 different values. Which 65536 values should we choose to represent?

If your application involves numbers in the range 0 to 100, say, then there is no value to you in being able to represent numbers in the trillions, or even in the thousands. These would simply be a waste of some of your precious 65536 values. It would make more sense to evenly distribute those 65536 values across the range 0-100. That would give you the most precise representation of numbers in this range.

If your application involves adding up a lot of small numbers, then you might be better off using fixed point format. Their increased precision makes them much better for this. It's especially useful in applications like PID controllers, where lots of small sensor samples are accumulated for the I component. Floating point numbers will lose some precision when doing that. Imagine adding 1234000000.00 to 0000000.01234, when we can only represent 12 significant digits. What do we get? 1234000000.01. We have lost three significant digits.

So what does 4.12 mean? It means that we are going to use 4 bits to represent the integer part of the number, and 12 bits to represent the fractional part of the number. i.e. the largest number we can represent is: 1111.111111111111 which is equivalent to 15.999755859375 in decimal. And the smallest (non-zero) number is 0000.000000000001, which is 0.000244140625 in decimal. Or, if you're using signed values, then you can handle numbers in the range of -8 to 7.999755859375.
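A minimal sketch of converting to and from signed 4.12, assuming it is stored in a 16-bit integer (the helper names are invented for illustration):

#include <stdint.h>

// Signed 4.12 stored in an int16_t; 4096 (2^12) represents 1.0.
typedef int16_t q4_12_t;
#define Q412_ONE  (1 << 12)

static inline q4_12_t float_to_q412(float f) { return (q4_12_t)(f * Q412_ONE); }
static inline float   q412_to_float(q4_12_t q) { return (float)q / Q412_ONE; }

// float_to_q412(0.5f)  == 2048
// float_to_q412(-8.0f) == -32768  (the most negative representable value)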

The nice thing about fixed point format is that you can choose where to put the decimal place. You might choose 8.8 format, which lets you deal with numbers in the range 0 to 255.99609375 in steps of 0.00390625 (or -128 to 127.99609375 for signed values). Or 12.4 format, giving you a range of 0 to 4095.9375 in steps of 0.0625.

Putting the decimal point in the right place gives you the highest level of precision for the number of bits you're using.
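As a rough sketch, the same conversion pattern works for any split; only the number of fractional bits changes (macro names invented for illustration):

#include <stdint.h>

// FIX_ONE(n) is the integer that represents 1.0 with n fractional bits.
#define FIX_ONE(frac_bits)   (1L << (frac_bits))
#define FLOAT_TO_FIX(f, fb)  ((int32_t)((f) * FIX_ONE(fb)))

// 8.8:  FLOAT_TO_FIX(3.14159f, 8)  == 804  (reads back as ~3.1406)
// 12.4: FLOAT_TO_FIX(3.14159f, 4)  == 50   (reads back as 3.125)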

But why do we think in terms of integer and fractional bits, instead of (as Len suggests) simply multiplying our numbers by 100, for example? Indeed we can, and this can work just as well. (I'd recommend multiplying by a number a little larger than 100 to give a little more precision, depending on the range you need.) However, there are two reasons why I personally wouldn't do that. The first reason is that, as an embedded systems engineer, the idea of multiplying by 100 when I could multiply by 256 instead feels a little bit yucky. And secondly, converting between integer and fixed-point format at run time is faster when we're using powers of 2, since it involves bit shifting (which can be performed in a single instruction cycle) rather than multiplying or dividing by 100, which, depending on architecture, can take many clock cycles.

The other major advantage of using fixed point format is the fact that computers can handle these numbers using their standard integer ALU. They can be added together just as fast as integers. This is really useful when working in architectures that don't have a floating point processor (like the PSoCs).

Actually, since the PSoCs are 32-bit processors, they can handle 32-bit operations as fast as 16-bit operations. So when choosing your fixed point format, you might as well choose 16.16 format, giving you a range of 0 to 65535.9999847412109375 in steps of 0.0000152587890625, which is a pretty good range and precision for many applications. So, unless you're short of memory, I'd recommend using 16.16 format (or 8.24, or 24.8, or anything that adds up to 32).
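A minimal sketch of 16.16 helpers, assuming a 32-bit type (the names are invented for illustration):

#include <stdint.h>

typedef int32_t q16_16_t;     // 16 integer bits, 16 fractional bits
#define Q16_ONE  (1L << 16)   // 65536 represents 1.0

// Add and subtract are plain integer operations -- no FPU needed.
static inline q16_16_t q16_add(q16_16_t a, q16_16_t b) { return a + b; }

// Multiply needs a 64-bit intermediate: the raw product of two 16.16
// values has 32 fractional bits and would overflow a 32-bit integer.
static inline q16_16_t q16_mul(q16_16_t a, q16_16_t b)
{
    return (q16_16_t)(((int64_t)a * b) >> 16);
}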

Note that in more advanced architectures (Pentium onwards), using a certain amount of floating point code can actually increase an application's performance, because the CPU can handle floating point operations in parallel with integer ones, so you can push more computation through per clock cycle. This is a trick the original Quake engine used to good effect.

There are a few fixed-point C libraries around which make using them pretty easy.
https://sourceforge.net/projects/fixedptc/
https://embeddedartistry.com/blog/2017/08/25/c11-fixed-point-arithmetic-library/