I recently came across a syntax I had never seen before, neither when I learned Python nor in most tutorials: the .. notation. It looks something like this:
f = 1..__truediv__  # or 1..__div__ for Python 2
print(f(8))  # prints 0.125
I figured it was exactly the same as (except it’s longer, of course):
f = lambda x: (1).__truediv__(x)
print(f(8))  # prints 0.125, i.e. 1/8
But my questions are: what exactly is this syntax, and how can you use it in a more complex statement (if possible)?

This will probably save me many lines of code in the future… :)
What you have is a float literal without the trailing zero, whose __truediv__ method you then access. It's not an operator in itself; the first dot is part of the float value, and the second is the dot operator used to access the object's properties and methods.
You can reach the same point by doing the following:

>>> f = 1.
>>> f
1.0
>>> f.__floordiv__
<method-wrapper '__floordiv__' of float object at 0x7f9fb4dc1a20>
>>> 1..__add__(2.)
3.0
Here we add 1.0 to 2.0, which obviously yields 3.0.
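The same two-dot pattern reaches any special method of a float literal; a small sketch using two other standard float dunders:

```python
# The first dot ends the float literal, the second starts attribute access.
add_one = 1..__add__  # equivalent to (1.0).__add__
negate = 0..__sub__   # equivalent to (0.0).__sub__

print(add_one(2.5))  # 3.5
print(negate(4))     # -4.0
```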
Two dots together may be a little awkward at first:
f = 1..__truediv__ # or 1..__div__ for python 2
But it is the same as writing:
f = 1.0.__truediv__ # or 1.0.__div__ for python 2
Float literals can be written in three forms:

normal_float = 1.0
short_float = 1.     # == 1.0
prefixed_float = .1  # == 0.1
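A quick check confirms that all three spellings produce ordinary, equal floats:

```python
normal_float = 1.0
short_float = 1.    # same value as 1.0
prefixed_float = .1  # same value as 0.1

assert short_float == normal_float == 1.0
assert prefixed_float == 0.1
assert type(short_float) is float and type(prefixed_float) is float
```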
The question is already sufficiently answered (e.g. by @Paul Rooney's answer), but it's also possible to verify the correctness of these answers.
Let me recap the existing answers: the .. is not a single syntax element!
You can check how the source code is “tokenized”. These tokens represent how the code is interpreted:
>>> from tokenize import tokenize
>>> from io import BytesIO
>>> s = "1..__truediv__"
>>> list(tokenize(BytesIO(s.encode('utf-8')).readline))
[...
 TokenInfo(type=2 (NUMBER), string='1.', start=(1, 0), end=(1, 2), line='1..__truediv__'),
 TokenInfo(type=53 (OP), string='.', start=(1, 2), end=(1, 3), line='1..__truediv__'),
 TokenInfo(type=1 (NAME), string='__truediv__', start=(1, 3), end=(1, 14), line='1..__truediv__'),
 ...]
So the string 1. is interpreted as a number, the second . is an OP (an operator, in this case the "get attribute" operator) and __truediv__ is the method name. So this is just accessing the __truediv__ method of the float 1.0.
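Since the second dot is ordinary attribute access, getattr on 1.0 reaches the very same bound method; a quick sketch:

```python
f = 1..__truediv__               # attribute access via the dot operator
g = getattr(1.0, '__truediv__')  # the same access, spelled out explicitly

assert f(8) == g(8) == 0.125
```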
Another way is to inspect the generated bytecode by disassembling it. This actually shows the instructions that are performed when the code is executed:
>>> import dis
>>> def f():
...     return 1..__truediv__
>>> dis.dis(f)
  4           0 LOAD_CONST               1 (1.0)
              3 LOAD_ATTR                0 (__truediv__)
              6 RETURN_VALUE
Which basically says the same: it loads the attribute __truediv__ of the constant 1.0.
Regarding your question
And how can you use it in a more complex statement (if possible)?
Even though it's possible, you should never write code like that, simply because it's unclear what the code is doing. So please don't use it in more complex statements. I would even go so far as to say you shouldn't use it in such "simple" statements; at the least, you should use parentheses to separate the instructions:
f = (1.).__truediv__
This would definitely be more readable – but something along the lines of:
from functools import partial
from operator import truediv
f = partial(truediv, 1.0)
would be even better!
The approach using partial also preserves Python's data model (the 1..__truediv__ approach does not!), which can be demonstrated by this little snippet:
>>> f1 = 1..__truediv__
>>> f2 = partial(truediv, 1.)
>>> f2(1+2j)  # reciprocal of complex number - works
(0.2-0.4j)
>>> f2('a')   # reciprocal of string should raise an exception
TypeError: unsupported operand type(s) for /: 'float' and 'str'
>>> f1(1+2j)  # reciprocal of complex number - works but gives an unexpected result
NotImplemented
>>> f1('a')   # reciprocal of string should raise an exception but it doesn't
NotImplemented
This is because 1. / (1+2j) is not evaluated by float.__truediv__ but by complex.__rtruediv__, since float.__truediv__ returns NotImplemented for a complex argument. operator.truediv makes sure the reverse operation is called when the normal operation returns NotImplemented, but you don't have these fallbacks when you operate on __truediv__ directly. This loss of "expected behaviour" is the main reason why you (normally) shouldn't use magic methods directly.
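The fallback protocol can be illustrated with a minimal sketch; the Reciprocal class here is hypothetical, made up purely to show when __rtruediv__ is consulted:

```python
class Reciprocal:
    """Hypothetical type that only knows how to appear on the right of /."""
    def __init__(self, value):
        self.value = value

    def __rtruediv__(self, other):
        # Python calls this when type(other).__truediv__ returned NotImplemented.
        return other / self.value

r = Reciprocal(4.0)
print(1.0 / r)               # 0.25 - the / operator falls back to __rtruediv__
print((1.0).__truediv__(r))  # NotImplemented - no fallback when calling the dunder directly
```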
What is f = 1..__truediv__?
f is a bound special method on a float with a value of one. Specifically, 1.0 / x in Python 3 invokes float.__truediv__, which we can demonstrate by subclassing float:
class Float(float):
    def __truediv__(self, other):
        print('__truediv__ called')
        return super(Float, self).__truediv__(other)
>>> one = Float(1)
>>> one/2
__truediv__ called
0.5
If we do:
f = one.__truediv__
We retain a name bound to that bound method:
>>> f(2)
__truediv__ called
0.5
>>> f(3)
__truediv__ called
0.3333333333333333
If we were doing that dotted lookup in a tight loop, this could save a little time.
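A rough way to see the effect of hoisting the lookup out of the loop is a micro-benchmark; the timings below are illustrative only and will vary from machine to machine:

```python
import timeit

# Hoisted: the attribute lookup happens once, outside the timed statement.
hoisted = timeit.timeit('f(3)', setup='f = 1..__truediv__', number=100_000)
# Repeated: the attribute lookup happens on every iteration.
repeated = timeit.timeit('1..__truediv__(3)', number=100_000)

print(f'hoisted lookup:  {hoisted:.4f}s')
print(f'repeated lookup: {repeated:.4f}s')
```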
We can see that parsing the AST for the expression tells us that we are getting the __truediv__ attribute of the floating point number:

>>> import ast
>>> ast.dump(ast.parse('1..__truediv__').body[0])
"Expr(value=Attribute(value=Num(n=1.0), attr='__truediv__', ctx=Load()))"
You could get the same resulting function from:
f = float(1).__truediv__
f = (1.0).__truediv__
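All three spellings denote the same float value, so the resulting bound methods behave identically; a quick sanity check:

```python
# Three equivalent ways to obtain the bound __truediv__ method of 1.0.
f1 = 1..__truediv__
f2 = (1.0).__truediv__
f3 = float(1).__truediv__

assert f1(4) == f2(4) == f3(4) == 0.25
```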
We can also get there by deduction.
Let’s build it up.
1 by itself is an int:

>>> 1
1
>>> type(1)
<type 'int'>
1 with a period after it is a float:
>>> 1.
1.0
>>> type(1.)
<type 'float'>
The next dot by itself would be a SyntaxError, but it begins a dotted lookup on the instance of the float:
>>> 1..__truediv__
<method-wrapper '__truediv__' of float object at 0x0D1C7BF0>
No one else has mentioned this – this is now a "bound method" on the float 1.0:
>>> f = 1..__truediv__
>>> f
<method-wrapper '__truediv__' of float object at 0x127F3CD8>
>>> f(2)
0.5
>>> f(3)
0.33333333333333331
We could accomplish the same function much more readably:
>>> def divide_one_by(x):
...     return 1.0/x
...
>>> divide_one_by(2)
0.5
>>> divide_one_by(3)
0.33333333333333331
The downside of the
divide_one_by function is that it requires another Python stack frame, making it somewhat slower than the bound method:
>>> import timeit
>>> def f_1():
...     for x in range(1, 11):
...         f(x)
...
>>> def f_2():
...     for x in range(1, 11):
...         divide_one_by(x)
...
>>> timeit.repeat(f_1)
[2.5495760687176485, 2.5585621018805469, 2.5411816588331888]
>>> timeit.repeat(f_2)
[3.479687248616699, 3.46196088706062, 3.473726342237768]
Of course, if you can just use plain literals, that’s even faster:
>>> def f_3():
...     for x in range(1, 11):
...         1.0/x
...
>>> timeit.repeat(f_3)
[2.1224895628296281, 2.1219930218637728, 2.1280188256941983]