Python etc
Regular tips about Python and programming in general

Owner — @pushtaev

© CC BY-SA 4.0 — mention if repost
json.dumps can serialize every built-in type that has a corresponding JSON type (int as number, None as null, list as array, etc.) but fails for any other type. The most common case where you'll face this is trying to serialize a datetime object:

import json
from datetime import datetime

json.dumps([123, 'hello'])
# '[123, "hello"]'

json.dumps(datetime.now())
# TypeError: Object of type 'datetime' is not JSON serializable


The fastest way to fix it is to provide a custom default serializer:

json.dumps(datetime.now(), default=str)
# '"2020-12-03 18:00:10.592496"'


However, that means that every unknown object will be serialized into a string, which can lead to unexpected results:

class C: pass

json.dumps(C(), default=str)
# '"<__main__.C object at 0x7f330ec801d0>"'


So, if you want to serialize only datetime and nothing else, it's better to define a custom encoder:

class DateTimeEncoder(json.JSONEncoder):
    def default(self, obj) -> str:
        if isinstance(obj, datetime):
            return obj.isoformat()
        return super().default(obj)

json.dumps(datetime.now(), cls=DateTimeEncoder)
# '"2020-12-03T18:01:19.609648"'

json.dumps(C(), cls=DateTimeEncoder)
# TypeError: Object of type 'C' is not JSON serializable
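Going the other way, json.loads has no hook for individual scalar values, but you can post-process every decoded object with object_hook. A minimal sketch (decode_dates is a hypothetical helper that tries to parse every string value as ISO 8601):

```python
import json
from datetime import datetime

def decode_dates(obj):
    # hypothetical helper: try to parse every string value as ISO 8601
    for key, value in obj.items():
        if isinstance(value, str):
            try:
                obj[key] = datetime.fromisoformat(value)
            except ValueError:
                pass
    return obj

data = json.loads('{"when": "2020-12-03T18:01:19"}', object_hook=decode_dates)
print(data['when'])  # 2020-12-03 18:01:19
```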
IPython is an alternative interactive shell for Python. It has syntax highlighting, powerful introspection and autocomplete, searchable cross-session history, and much more. Run %quickref in IPython to get a quick reference on useful commands and shortcuts. Some of our favorite ones:

+ obj? - print short object info, including signature and docstring.
+ obj?? - same as above but also shows the object source code if available.
+ !cd my_project/ - execute a shell command.
+ %timeit list(range(1000)) - run a statement many times and show the execution time statistics.
+ %hist - show the history for the current session.
+ %run - run a file in the current session.
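The %timeit magic above is IPython-only; outside IPython, the stdlib timeit module gives comparable measurements (the number of runs here is an arbitrary choice):

```python
import timeit

# time the same statement %timeit would: total time for 10,000 runs
total = timeit.timeit('list(range(1000))', number=10_000)
print(f'{total / 10_000 * 1e6:.2f} µs per loop')
```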
The module array is helpful if you want to be memory efficient or interoperate with C. However, working with array can be slower than with list:

import random
import array
lst = [random.randint(0, 1000) for _ in range(100000)]
arr = array.array('i', lst)

%timeit for i in lst: pass
# 1.05 ms ± 1.61 µs per loop

%timeit for i in arr: pass
# 2.63 ms ± 60.2 µs per loop

%timeit for i in range(len(lst)): lst[i]
# 5.42 ms ± 7.56 µs per loop

%timeit for i in range(len(arr)): arr[i]
# 7.8 ms ± 449 µs per loop


The reason is that int in Python is a boxed object, and wrapping a raw integer value into a Python int takes some time.
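The flip side is memory: array stores raw C values instead of boxed Python ints, so its buffer is far more compact. A quick check (exact sizes are platform-dependent):

```python
import array
import sys

lst = list(range(100_000))
arr = array.array('i', lst)

# the list alone (just the pointers, not counting the int objects!)
# is already bigger than the whole array of raw C ints
print(sys.getsizeof(lst))
print(sys.getsizeof(arr))
print(arr.itemsize)  # bytes per element, typically 4 for 'i'
```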
Decorator functools.lru_cache caches the function result based on the given arguments:

from functools import lru_cache

@lru_cache(maxsize=32)
def say(phrase):
    print(phrase)
    return len(phrase)

say('hello')
# hello
# 5

say('pythonetc')
# pythonetc
# 9

# the function is not called, the result is cached
say('hello')
# 5


The only limitation is that all arguments must be hashable:

say({})
# TypeError: unhashable type: 'dict'
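A common workaround is to pass hashable equivalents of the arguments, e.g. a tuple instead of a list (cached_mean is a hypothetical example):

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def cached_mean(numbers):
    # tuples are hashable, so they can be cache keys; lists cannot
    return sum(numbers) / len(numbers)

print(cached_mean(tuple([1, 2, 3, 4])))  # 2.5
```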


The decorator is useful for recursive algorithms and costly operations:

@lru_cache(maxsize=32)
def fib(n):
    if n <= 2:
        return 1
    return fib(n-1) + fib(n-2)

fib(30)
# 832040


Also, the decorator provides a few helpful methods:

fib.cache_info()
# CacheInfo(hits=27, misses=30, maxsize=32, currsize=30)

fib.cache_clear()
fib.cache_info()
# CacheInfo(hits=0, misses=0, maxsize=32, currsize=0)

# Introduced in Python 3.9:
fib.cache_parameters()
# {'maxsize': 32, 'typed': False}


And the last thing for today, you'll be surprised how fast lru_cache is:

def nop():
    return None

@lru_cache(maxsize=1)
def nop_cached():
    return None

%timeit nop()
# 49 ns ± 0.348 ns per loop

# cached faster!
%timeit nop_cached()
# 39.3 ns ± 0.118 ns per loop
The decorator functools.lru_cache is named so because of the underlying cache replacement policy: when the cache size limit is reached, the Least Recently Used records are removed first:

from functools import lru_cache

@lru_cache(maxsize=2)
def say(phrase):
    print(phrase)

say('1')
# 1

say('2')
# 2

say('1')

# push a record out of the cache
say('3')
# 3

# '1' is still cached since it was used recently
say('1')

# but '2' was removed from cache
say('2')
# 2


To avoid the limit, you can pass maxsize=None:

@lru_cache(maxsize=None)
def fib(n):
    if n <= 2:
        return 1
    return fib(n-1) + fib(n-2)

fib(30)
# 832040

fib.cache_info()
# CacheInfo(hits=27, misses=30, maxsize=None, currsize=30)


Python 3.9 introduced functools.cache, a shortcut for lru_cache(maxsize=None). Under the hood it is literally the same decorator, so don't expect a speedup; the docs' "smaller and faster" refers to lru_cache with a size limit:

from functools import cache

@cache
def fib_cache(n):
    if n <= 2:
        return 1
    return fib_cache(n-1) + fib_cache(n-2)

fib_cache(30)
# 832040

%timeit fib(30)
# 63 ns ± 0.574 ns per loop

%timeit fib_cache(30)
# 61.8 ns ± 0.409 ns per loop
Always precompile regular expressions using re.compile if the expression is known in advance:

# generate random string
import re
from string import printable
from random import choice
text = ''.join(choice(printable) for _ in range(10 * 8))

# let's find numbers
pat = r'\d(?:[\d\.]+\d)*'
rex = re.compile(pat)

%timeit re.findall(pat, text)
# 2.08 µs ± 1.89 ns per loop

# the pre-compiled version is almost twice as fast
%timeit rex.findall(text)
# 1.3 µs ± 68.8 ns per loop


The secret is that module-level re functions just compile the expression and call the corresponding method. The re module does keep a small internal cache of compiled patterns, but the cache lookup and the extra function call still add overhead:

def findall(pattern, string, flags=0):
    return _compile(pattern, flags).findall(string)


If the expression is not known in advance but can be used repeatedly, consider using functools.lru_cache:

from functools import lru_cache

cached_compile = lru_cache(maxsize=64)(re.compile)

def find_all(pattern, text):
    return cached_compile(pattern).findall(text)
The issue with the beautiful number #12345 proposed adding the following constant to the stdlib:

tau = 2*math.pi


It was a controversial proposal: it's easy to recreate this constant on your own, which is arguably more explicit, since more people are familiar with π than with τ. Nevertheless, the proposal was accepted, and tau landed in the math module in Python 3.6 (PEP-628):

import math
math.tau
# 6.283185307179586


There is a long story behind τ, which you can read at tauday.com. The Numberphile video on it is especially good.
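The appeal of τ is that it is the full-turn constant, so, for example, the circumference formula loses its factor of 2:

```python
import math

r = 3.0
# circumference of a circle: tau * r is the same as 2 * pi * r
assert math.tau * r == 2 * math.pi * r
print(math.tau * r)
```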
What is the fastest way to build a string from many substrings in a loop? In other words, how to concatenate fast when we don't know in advance how many strings we have? There are many discussions about it, and the common advice is that strings are immutable, so it's better to use a list and then str.join it. Let's not trust anyone and just check it.

The straightforward solution:

%%timeit
s = ''
for _ in range(10*8):
    s += 'a'
# 4.04 µs ± 256 ns per loop


Using lists:

%%timeit
a = []
for _ in range(10*8):
    a.append('a')
''.join(a)
# 4.06 µs ± 144 ns per loop


So, it's about the same. But we can go deeper. What about generator expressions?

%%timeit
''.join('a' for _ in range(10*8))
# 3.56 µs ± 95.9 ns per loop


A bit faster. What if we use list comprehensions instead?

%%timeit
''.join(['a' for _ in range(10*8)])
# 2.52 µs ± 42.1 ns per loop


Wow, this is 1.6x faster than what we had before. Can you make it faster?
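For this toy benchmark there is one more trick: since every piece is the same character, sequence repetition skips the loop entirely and should beat any join (this only works for a constant substring, of course):

```python
# repetition of a constant substring: no loop, no intermediate list
s = 'a' * (10 * 8)
assert s == ''.join('a' for _ in range(10 * 8))
print(len(s))  # 80
```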

And a disclaimer is in order:

1. Avoid premature optimization; value readability over performance whenever a slightly slower operation is tolerable.

2. If you think that something is slow, prove it first. Results can be different in your case.
from base64 import b64decode
from random import choice

CELLS = '~' * 12 + '¢•*@&.;,"'

def tree(max_width):
    yield '/⁂\\'.center(max_width)

    for width in range(3, max_width - 1, 2):
        row = '/'
        for _ in range(width):
            row += choice(CELLS)
        row += '\\'
        yield row.center(max_width)

    yield "'| |'".center(max_width)
    yield " | | ".center(max_width)
    yield '-' * max_width
    title = b'SGFwcHkgTmV3IFllYXIsIEBweXRob25ldGMh'
    yield b64decode(title).decode().center(max_width)

for row in tree(40):
    print(row)
Today Guido van Rossum posted a Python riddle:

x = 0
y = 0
def f():
    x = 1
    y = 1
    class C:
        print(x, y)  # What does this print?
        x = 2
f()


The answer is 0 1.

The first tip is if you replace the class with a function, it will fail:

x = 0
y = 0
def f():
    x = 1
    y = 1
    def f2():
        print(x, y)
        x = 2
    f2()
f()
# UnboundLocalError: local variable 'x' referenced before assignment


Why so? The answer can be found in the documentation (see Execution model):

> If a variable is used in a code block but not defined there, it is a free variable.

So, y is a free variable but x isn't: since x is assigned later in the same scope, it is local to f2. This is why their behavior differs. When you try to use a local variable before its assignment, the code fails at runtime.

Let's disassemble the snippet above:

import dis
dis.dis("""[insert here the previous snippet]""")


It outputs a lot of different things, this is the part we're interested in:

  8           0 LOAD_GLOBAL              0 (print)
              2 LOAD_FAST                0 (x)
              4 LOAD_DEREF               0 (y)
              6 CALL_FUNCTION            2
              8 POP_TOP


Indeed, x and y have different instructions, and they're different at bytecode-compilation time. Now, what's different for a class scope?

import dis
dis.dis("""[insert here the first code snippet]""")


This is the same dis part for the class:

  8           8 LOAD_NAME                3 (print)
             10 LOAD_NAME                4 (x)
             12 LOAD_CLASSDEREF          0 (y)
             14 CALL_FUNCTION            2
             16 POP_TOP


So, the class scope behaves differently: x and y are loaded with LOAD_FAST and LOAD_DEREF in a function, but with LOAD_NAME and LOAD_CLASSDEREF in a class.

The same documentation page answers how this behavior is different:

> Class definition blocks and arguments to exec() and eval() are special in the context of name resolution. A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution with an exception that unbound local variables are looked up in the global namespace.

In other words, if a variable in the class definition is unbound, it is looked up in the global namespace skipping enclosing nonlocal scope.
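A minimal demonstration of that rule: x is assigned in the class body, yet reading it before the assignment falls back to the global namespace instead of the enclosing function:

```python
x = 'global'

def f():
    x = 'enclosing'
    class C:
        # x is assigned below, so it's unbound here,
        # and the lookup goes to globals, skipping f's scope
        before = x
        x = 'class'
    return C

print(f().before)  # global
```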
It was a long break, but tomorrow we start again. We have plenty of ideas for posts but don't always have time to write them. So, this is how you can help us:

+ If you have something to tell about Python (syntax, stdlib, PEPs), check whether it has already been posted. If not, write a post, send it to us, and we will publish it. It will include your name (if you want); we don't steal content ;)

+ If you don't have an idea, just contact us, we have plenty of them! And if you like it, the algorithm is the same as above: write a post, send it, we publish it with your name.

+ If you don't have time to write posts but still want to help, consider donating a bit of money, links are in the channel description. If we get enough, we can take a one-day vacation and invest it exclusively into writing posts.

+ If you see a bug or typo in a post, please, let us know!

And speaking of bugs, there are a few in recent posts that our lovely subscribers have reported:

+ post #641, reported by @recursing. functools.cache isn't faster than functools.lru_cache(maxsize=None), it is exactly the same. The confusion comes from the documentation which says "this is smaller and faster than lru_cache() WITH A SIZE LIMIT".

+ post #644, reported by @el71Gato. It should be 10**8 instead of 10*8. We've re-run benchmarks with these values, relative numbers are the same, so all conclusions are still correct.

Welcome to season 2.5 :)
Let's talk a bit more about scopes.

Any class and function can implicitly use variables from the global scope:

v = 'global'
def f():
    print(f'{v=}')
f()
# v='global'


Or from any other enclosing scope, even if the variable is defined after the function definition:

def f():
    v1 = 'local1'
    def f2():
        def f3():
            print(f'{v1=}')
            print(f'{v2=}')
        v2 = 'local2'
        f3()
    f2()
f()
# v1='local1'
# v2='local2'


The class body is a tricky case: it is not considered an enclosing scope for functions defined inside of it:

v = 'global'
class A:
    v = 'local'
    print(f'A {v=}')
    def f():
        print(f'f {v=}')
# A v='local'

A.f()
# f v='global'
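When a function does need the class-level value, it has to reference it explicitly through the class (or through an instance attribute), since the class body won't be in its scope chain:

```python
v = 'global'

class A:
    v = 'local'
    def f():
        # the class body is not an enclosing scope for f,
        # so reach the class attribute explicitly
        print(f'f {A.v=}')

A.f()
# f A.v='local'
```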
Any enclosing variable can be shadowed in the local scope without affecting the global one:

v = 'global'
def f():
    v = 'local'
    print(f'f {v=}')
f()
# f v='local'

print(f'{v=}')
# v='global'


And if you try to use a variable and then shadow it, the code will fail at runtime:

v = 'global'
def f():
    print(v)
    v = 'local'
f()
# UnboundLocalError: local variable 'v' referenced before assignment


If you want to re-define the global variable instead of locally shadowing it, use the global and nonlocal statements:

v = 'global'
def f():
    global v
    v = 'local'
    print(f'f {v=}')
f()
# f v='local'
print(f'g {v=}')
# g v='local'

def f1():
    v = 'non-local'
    def f2():
        nonlocal v
        v = 'local'
        print(f'f2 {v=}')
    f2()
    print(f'f1 {v=}')
f1()
# f2 v='local'
# f1 v='local'


Also, global can be used to skip non-local definitions:

v = 'global'
def f1():
    v = 'non-local'
    def f2():
        global v
        print(f'f2 {v=}')
    f2()
f1()
# f2 v='global'


That said, using global and nonlocal is considered bad practice: it complicates testing and reasoning about the code. If you want a global state, think about whether it can be achieved another way. If you desperately need a global state, consider the singleton pattern, which is a little bit better.
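A minimal sketch of that last alternative: keep the state on an object and share a single instance, instead of mutating module-level names (Config and its field are hypothetical):

```python
class Config:
    _instance = None

    def __new__(cls):
        # classic singleton: every call returns the same instance
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.debug = False
        return cls._instance

Config().debug = True
print(Config().debug)  # True: both calls see the same object
```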
Let's learn a bit more about strings performance. What if instead of an unknown amount of strings we have only a few known variables?

s1 = 'hello, '
s2 = '@pythonetc'

%timeit s1+s2
# 56.7 ns ± 6.17 ns per loop

%timeit ''.join([s1, s2])
# 110 ns ± 6.09 ns per loop

%timeit '{}{}'.format(s1, s2)
# 63.3 ns ± 6.69 ns per loop

%timeit f'{s1}{s2}'
# 57 ns ± 5.43 ns per loop


No surprises here: + and f-strings are equally good, and str.format is quite close. But what if we have numbers instead?

n1 = 123
n2 = 456
%timeit str(n1)+str(n2)
# 374 ns ± 7.09 ns per loop

%timeit '{}{}'.format(n1, n2)
# 249 ns ± 4.73 ns per loop

%timeit f'{n1}{n2}'
# 208 ns ± 3.49 ns per loop


In this case, formatting is faster because it doesn't create intermediate strings. However, there is something else about f-strings. Let's measure how long it takes just to convert an int into an str:

%timeit str(n1)
# 138 ns ± 4.86 ns per loop

%timeit '{}'.format(n1)
# 148 ns ± 3.49 ns per loop

%timeit format(n1, '')
# 91.8 ns ± 6.12 ns per loop

%timeit f'{n1}'
# 63.8 ns ± 6.13 ns per loop


Wow, f-strings are twice as fast as plain str! This is because f-strings are part of the grammar, while str is just a function that requires the usual name-lookup machinery:

import dis

dis.dis("f'{n1}'")
  1           0 LOAD_NAME                0 (n1)
              2 FORMAT_VALUE             0
              4 RETURN_VALUE

dis.dis("str(n1)")
  1           0 LOAD_NAME                0 (str)
              2 LOAD_NAME                1 (n1)
              4 CALL_FUNCTION            1
              6 RETURN_VALUE


And once more, disclaimer: readability is more important than performance until proven otherwise. Use your knowledge with caution :)
Types str and bytes are immutable. As we learned in previous posts, + is optimized for str, but sometimes you need a truly mutable type. For such cases, there is bytearray. It is a "hybrid" of bytes and list:

b = bytearray(b'hello, ')
b.extend(b'@pythonetc')
b
# bytearray(b'hello, @pythonetc')

b.upper()
# bytearray(b'HELLO, @PYTHONETC')


The type bytearray has all methods of both bytes and list except sort:

set(dir(bytearray)) ^ (set(dir(bytes)) | set(dir(list)))
# {'__alloc__', '__class_getitem__', '__getnewargs__', '__reversed__', 'sort'}


If you're looking for reasons why there is no bytearray.sort, here is the only answer we found: stackoverflow.com/a/22783330/8704691.
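Being a mutable sequence also means bytearray supports in-place edits, list-style:

```python
b = bytearray(b'python')
b[0] = ord('P')   # item assignment, like a list
b[-1:] = b'N!'    # slice assignment can even change the length
print(b)          # bytearray(b'PythoN!')
```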
Suppose you have 10 lists:

lists = [list(range(10_000)) for _ in range(10)]


What's the fastest way to join them into one? To have a baseline, let's just + everything together:

s = lists
%timeit s[0] + s[1] + s[2] + s[3] + s[4] + s[5] + s[6] + s[7] + s[8] + s[9]
# 1.65 ms ± 25.1 µs per loop


Now, let's try functools.reduce. It should be about the same but cleaner, and it doesn't require knowing in advance how many lists we have:

from functools import reduce
from operator import add
%timeit reduce(add, lists)
# 1.65 ms ± 27.2 µs per loop


Good, about the same speed. However, reduce is not considered "pythonic" anymore, which is why it was moved from built-ins into functools. A more beautiful way to do it is sum:

%timeit sum(lists, start=[])
# 1.64 ms ± 83.8 µs per loop


Short and simple. Now, can we make it faster? What if we itertools.chain everything together?

from itertools import chain
%timeit list(chain(*lists))
# 599 µs ± 20.4 µs per loop


Wow, this is about 3 times faster. Can we do better? Let's try something more straightforward:

%%timeit
r = []
for lst in lists:
    r.extend(lst)
# 250 µs ± 5.96 µs per loop


It turns out the most straightforward and simple solution is the fastest one.
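For completeness: itertools.chain.from_iterable behaves like chain(*lists) but consumes the outer iterable lazily, without unpacking it at call time:

```python
from itertools import chain

lists = [[1, 2], [3], [4, 5]]
flat = list(chain.from_iterable(lists))
print(flat)  # [1, 2, 3, 4, 5]
```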
Starting with Python 3.8, the interpreter warns about is comparisons of literals.

Python 3.7:

>>> 0 is 0
True


Python 3.8:

>>> 0 is 0
<stdin>:1: SyntaxWarning: "is" with a literal. Did you mean "=="?
True


The reason is an infamous Python gotcha. While == compares values (implemented by calling the __eq__ magic method, in a nutshell), is compares the memory addresses of objects. Because CPython caches small ints, is appears to work for ints from -5 to 256, but it won't for ints outside this range or for objects of other types:

a = -5
a is -5 # True
a = -6
a is -6 # False
a = 256
a is 256 # True
a = 257
a is 257 # False
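The legitimate use of is is comparing with singletons such as None, where identity is exactly what you mean:

```python
x = None
# there is only one None object, so identity is the idiomatic check
assert x is None

# `is` also can't be fooled by a custom __eq__, unlike ==
class Liar:
    def __eq__(self, other):
        return True

assert Liar() == None      # __eq__ says yes
assert Liar() is not None  # identity says no
```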
Floating point numbers in Python, and in most modern languages, are implemented according to IEEE 754. The most interesting and hardcore part is "arithmetic formats", which defines a few special values:

+ inf and -inf representing infinity.
+ nan representing a special "Not a Number" value.
+ -0.0 representing "negative zero".

Negative zero is the easiest case: for all operations it is considered equal to positive zero:

-.0 == .0  # True
-.0 < .0 # False


nan returns False for all comparison operations (except !=), including comparisons with inf:

import math

math.nan < 10 # False
math.nan > 10 # False
math.nan < math.inf # False
math.nan > math.inf # False
math.nan == math.nan # False
math.nan != 10 # True


And all binary operations on nan return nan:

math.nan + 10  # nan
1 / math.nan # nan


You can read more about nan in previous posts:

+ https://tttttt.me/pythonetc/561
+ https://tttttt.me/pythonetc/597

Infinity is bigger than anything else (except nan). However, unlike in pure math, infinity is equal to infinity:

10 < math.inf         # True
math.inf == math.inf # True


The sum of positive and negative infinity is nan:

-math.inf + math.inf  # nan
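Since nan is not equal even to itself, checking for these special values requires the helpers from math:

```python
import math

x = -math.inf + math.inf
assert math.isnan(x)         # x == math.nan would be False!
assert math.isinf(math.inf)
assert math.isfinite(1.0) and not math.isfinite(math.nan)
```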
Infinity behaves interestingly in division operations: some results are expected, some are surprising. Without further ado, here is a table:

truediv (/)
     |   -8 |    8 | -inf |  inf
  -8 |  1.0 | -1.0 |  0.0 | -0.0
   8 | -1.0 |  1.0 | -0.0 |  0.0
-inf |  inf | -inf |  nan |  nan
 inf | -inf |  inf |  nan |  nan

floordiv (//)
     |   -8 |    8 | -inf |  inf
  -8 |    1 |   -1 |  0.0 | -1.0
   8 |   -1 |    1 | -1.0 |  0.0
-inf |  nan |  nan |  nan |  nan
 inf |  nan |  nan |  nan |  nan

mod (%)
     |   -8 |    8 | -inf |  inf
  -8 |    0 |    0 | -8.0 |  inf
   8 |    0 |    0 | -inf |  8.0
-inf |  nan |  nan |  nan |  nan
 inf |  nan |  nan |  nan |  nan


The code used to generate the table:

import operator

cases = (-8, 8, float('-inf'), float('inf'))
ops = (operator.truediv, operator.floordiv, operator.mod)
for op in ops:
    print(op.__name__)
    row = ['{:4}'.format(x) for x in cases]
    print(' ' * 6, ' | '.join(row))
    for x in cases:
        row = ['{:4}'.format(x)]
        for y in cases:
            row.append('{:4}'.format(op(x, y)))
        print(' | '.join(row))
PEP-589 (landed in Python 3.8) introduced typing.TypedDict as a way to annotate dicts:

from typing import TypedDict

class Movie(TypedDict):
    name: str
    year: int

movie: Movie = {
    'name': 'Blade Runner',
    'year': 1982,
}


It cannot have keys that aren't explicitly specified in the type:

movie: Movie = {
    'name': 'Blade Runner',
    'year': 1982,
    'director': 'Ridley Scott',  # fails type checking
}


Also, all specified keys are required by default, but that can be changed by passing total=False:

movie: Movie = {}  # fails type checking

class Movie2(TypedDict, total=False):
    name: str
    year: int

movie2: Movie2 = {}  # ok
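At runtime a TypedDict value is a plain dict; all the checks above happen only in the type checker:

```python
from typing import TypedDict

class Movie(TypedDict):
    name: str
    year: int

m: Movie = {'name': 'Blade Runner', 'year': 1982}
assert type(m) is dict             # no special runtime class
m['director'] = 'Ridley Scott'     # no runtime error, only a type-check one
print(m['director'])  # Ridley Scott
```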