Python increment float by smallest step possible predetermined by its number of decimals

Question:

I’ve been searching around for hours and I can’t find a simple way of accomplishing the following.

Value 1 = 0.00531
Value 2 = 0.051959
Value 3 = 0.0067123

I want to increment each value at its last decimal place (however, the number must keep exactly the number of decimal places it started with, and that number varies from value to value, hence my trouble).

Value 1 should be: 0.00532
Value 2 should be: 0.051960
Value 3 should be: 0.0067124

Does anyone know of a simple way of accomplishing the above in a function that can still handle any number of decimals?

Thanks.

Asked By: gloomyfit


Answers:

Have you looked at the standard-library module decimal?

It sidesteps binary floating-point behaviour entirely.

Just to illustrate what can be done:

import decimal
my_number = '0.00531'
mnd = decimal.Decimal(my_number)
print(mnd)
mnt = mnd.as_tuple()  # DecimalTuple(sign, digits, exponent)
print(mnt)
# bump the last digit by one and rebuild the number from the tuple
mnt_digit_new = mnt.digits[:-1] + (mnt.digits[-1] + 1,)
dec_incr = decimal.DecimalTuple(mnt.sign, mnt_digit_new, mnt.exponent)
print(dec_incr)
incremented = decimal.Decimal(dec_incr)
print(incremented)

prints

0.00531
DecimalTuple(sign=0, digits=(5, 3, 1), exponent=-5)
DecimalTuple(sign=0, digits=(5, 3, 2), exponent=-5)
0.00532
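One caveat worth flagging here (an added note, not part of the original answer): this digit-tuple trick only works when the last digit is below 9. For an input like '0.199', bumping the last digit produces 10, which is not a valid digit, so rebuilding the Decimal fails:

```python
import decimal

mnt = decimal.Decimal('0.199').as_tuple()
new_digits = mnt.digits[:-1] + (mnt.digits[-1] + 1,)   # (1, 9, 10)
try:
    # digit tuples may only contain integers 0-9, so this raises ValueError
    decimal.Decimal(decimal.DecimalTuple(mnt.sign, new_digits, mnt.exponent))
except ValueError as err:
    print('rejected:', err)
```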

or a full version (after the edit it also handles digit carrying, so it works on '0.199' as well)…

from decimal import Decimal, getcontext

def add_one_at_last_digit(input_string):
    dec = Decimal(input_string)
    # limit the context precision to the number of significant digits,
    # so next_plus() bumps exactly the last given digit
    getcontext().prec = len(dec.as_tuple().digits)
    return dec.next_plus()

for i in ('0.00531', '0.051959', '0.0067123', '1', '0.05199'):
    print(add_one_at_last_digit(i))

that prints

0.00532
0.051960
0.0067124
2
0.05200
Answered By: ahed87

As the other commenters have noted: you should not operate on floats, because a number such as 0.1234 is converted into an internal binary representation, and you cannot process it further in the way you want. This is deliberately vague wording; floating point is a subject in itself. This article explains the topic very well and is a good primer on it.
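To make this concrete, the decimal module itself can reveal the exact value a float actually stores (a small illustration added here, not part of the original answer):

```python
from decimal import Decimal

# constructing a Decimal from a float shows the float's exact stored value
print(Decimal(0.1234))    # not exactly 0.1234
# constructing it from a string keeps the intended decimal value
print(Decimal('0.1234'))
```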

That said, what you could do instead is to keep the input as strings (i.e. do not convert it to float when reading the input). Then you could do this:

from decimal import Decimal

def add_one(v):
    # exponent of the last digit, e.g. -5 for '0.00531'
    after_comma = Decimal(v).as_tuple().exponent * -1
    add = Decimal(1) / Decimal(10 ** after_comma)
    return Decimal(v) + add

if __name__ == '__main__':
    print(add_one("0.00531"))
    print(add_one("0.051959"))
    print(add_one("0.0067123"))
    print(add_one("1"))

This prints

0.00532
0.051960
0.0067124
2
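For contrast, here is what happens if the input arrives as a float instead of a string: Decimal sees the float's exact binary expansion, so the computed step would be vanishingly small (an added illustration of the failure mode, not part of the original answer):

```python
from decimal import Decimal

# the exponent of the exact stored float is far more negative than -5
exp = Decimal(0.00531).as_tuple().exponent
print(exp)
# add_one would therefore add 10**exp, which is astronomically small
```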

Update:

If you need to operate on floats, you could try a fuzzy approach to arrive at a close representation. decimal offers a normalize method which lets you reduce the precision of the decimal representation so that it matches the original number:

from decimal import Decimal, Context

def add_one_float(v):
    # round the float's exact value to 16 significant digits, then
    # strip trailing zeros to match the original number
    v_normalized = Decimal(v).normalize(Context(prec=16))
    after_comma = v_normalized.as_tuple().exponent * -1
    add = Decimal(1) / Decimal(10 ** after_comma)
    return v_normalized + add

But please note that the precision of 16 is purely experimental; you need to play with it to see whether it yields the desired results. If you need correct results, you cannot take this path.
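As a quick sanity check of the approach above (the function is repeated here so the snippet runs on its own; with prec=16 it works for the question's inputs, but as noted this is not guaranteed in general):

```python
from decimal import Decimal, Context

def add_one_float(v):
    # round the float's exact value to 16 significant digits and strip
    # trailing zeros to recover the intended decimal representation
    v_normalized = Decimal(v).normalize(Context(prec=16))
    after_comma = v_normalized.as_tuple().exponent * -1
    add = Decimal(1) / Decimal(10 ** after_comma)
    return v_normalized + add

print(add_one_float(0.00531))    # 0.00532
print(add_one_float(0.051959))   # 0.051960
```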

Answered By: hansaplast

A bit of an improvement, taking numbers as input and implementing subtraction as well:

import decimal

def add_or_sub_one_at_last_digit(input_number, to_add=True):
    # str() gives the float's shortest round-tripping representation
    dec = decimal.Decimal(str(input_number))
    # limit precision so next_plus/next_minus step exactly the last digit
    decimal.getcontext().prec = len(dec.as_tuple().digits)
    return dec.next_plus() if to_add else dec.next_minus()

a = 0.225487
# add
print(add_or_sub_one_at_last_digit(a))
# subtract
print(add_or_sub_one_at_last_digit(a, False))

output:

0.225488
0.225486
Answered By: Pranav Joshi