Hello. I've come across a rather interesting problem involving float-to-int conversions. If I have a float, say 98.85, and I assign this float times 100 to an int...
float myfloat = 98.85;
int myint = myfloat * 100;
myint becomes 9884, which is not what I want. Is there any way to do this float->int conversion without losing precision and without making myint a float? I've figured out a rather hack-like workaround by simply adding one to myint, but I would much prefer a solution that did not involve this step.
Thanks.
Floats are weird in C++. Your 98.85 can't be stored exactly; it's actually something like 98.849999..., so multiplying by 100 and truncating gives 9884. Adding 0.01 and then multiplying should work.
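For illustration only, a minimal sketch of that nudge next to a plain round-to-nearest (the +0.5 rounding and the double literals are my additions, not something suggested in the thread; variable names follow the original post):

#include <cstdio>

int main() {
    float myfloat = 98.85f;                             // actually stored as roughly 98.8499985

    int truncated = (int)(myfloat * 100.0);             // 9884.9998... truncated toward zero -> 9884
    int nudged    = (int)((myfloat + 0.01) * 100.0);    // the nudge above, done in double: 9885.9998... -> 9885
    int rounded   = (int)(myfloat * 100.0 + 0.5);       // round to nearest by adding 0.5: 9885.4998... -> 9885

    std::printf("%d %d %d\n", truncated, nudged, rounded);
    return 0;
}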
Quote from: sixb0nes on April 13, 2006, 06:45 PM
Try this...
float myfloat = 98.85f;
int myint = (int)myfloat * 100;
Quote from: Maddox on April 15, 2006, 05:43 PM
Try this...
float myfloat = 98.85f;
int myint = (int)myfloat * 100;
That would result in myint == 9800.
I think he may have meant int myint = (int)(myfloat * 100); so that the multiplication happens before the cast.
The result would be converted to int automatically anyway: the 100 is promoted to float for the multiplication, and the float result is converted to int when it is stored in myint, so an explicit cast is not needed.
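To make the precedence and implicit-conversion points concrete, a small sketch (the exact truncated value can vary with how the compiler evaluates the intermediate product, so it is noted as 9884 or 9885):

#include <cstdio>

int main() {
    float myfloat = 98.85f;                 // stored as roughly 98.8499985

    int cast_first = (int)myfloat * 100;    // cast first: (int)98.8499985 == 98, then 98 * 100 == 9800
    int cast_after = (int)(myfloat * 100);  // multiply first, then truncate (9884 or 9885 depending on intermediate precision)
    int implicit   = myfloat * 100;         // same value as cast_after: the assignment converts to int without a cast

    std::printf("%d %d %d\n", cast_first, cast_after, implicit);
    return 0;
}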