Solving Python’s everlasting problem of slow code

Python is truly my go-to programming language. The ease of use, clean code and speed of development it delivers are unmatched. I was often able to prototype an idea very quickly. But when I moved on to larger workloads, there was one problem: execution speed.

Because Python enforces far fewer constraints than statically typed languages such as Go, Rust or C/C++, it is easier and faster to write code in Python, but it will also never achieve the same level of speed.

So let me be clear from the start: there are ways of improving Python’s speed, but they all come with added complexity and their own downsides. If you don’t necessarily have to stick to Python, your best option is probably to just use a language such as Go.

Alternative interpreters/compilers

This one is arguably a favorite of mine. Nuitka first translates your Python code to C and then compiles it. This also includes a lot of clever optimizations which offer a speedup of well over 300%. You get a single binary which includes all of your program’s dependencies and can be easily distributed. The target computer doesn’t even need to have Python installed anymore to run the program! At the same time, Nuitka offers greater compatibility with existing Python code than PyPy.
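As a rough sketch of the workflow (assuming Nuitka has been installed via pip; the filename main.py is a placeholder for your own entry point):

```shell
# Install Nuitka into the current environment.
pip install nuitka

# Compile main.py, following its imports, into a single
# self-contained binary (--onefile bundles everything).
python -m nuitka --onefile main.py

# Run the resulting binary -- no Python installation required.
./main.bin
```

On Linux the result is a main.bin file next to your source; on Windows you get a main.exe instead.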

Solving the problem of concurrency

Let’s look at some example code:

from datetime import datetime

import httpx

start_time = datetime.now()
for _ in range(0, 10):
    httpx.get("https://httpbin.org/get")
print("Took: ", datetime.now() - start_time)

Here we send 10 HTTP GET requests to httpbin and measure the time it takes.

The output: Took: 0:00:04.893904

On my machine it took nearly 5 seconds to send those requests and receive the responses. Let’s see if we can speed things up a little.

By running code asynchronously, we can make use of the waiting time that occurs throughout the program. Take web requests as an example: normally the program sends an HTTP request and then waits for the server to send back a response before moving on to the next line of code.

With asynchronous code, the next web request (or task) is started while we are still waiting for the first response to arrive. The main downside is that asynchronous code becomes harder to debug.

from datetime import datetime
import asyncio

import httpx

async def run():
    async with httpx.AsyncClient() as client:
        # Start all 10 requests as concurrent tasks and
        # wait until every response has arrived.
        tasks = [client.get("https://httpbin.org/get") for _ in range(0, 10)]
        await asyncio.gather(*tasks)

start_time = datetime.now()
asyncio.run(run())
print("Took: ", datetime.now() - start_time)

By modifying our code to use asyncio, we reduce the time the program takes to complete all requests to 0:00:01.466089. That is a 70% improvement!

Since one Python process can only ever execute one thread at a time (→ GIL), why not use multiple processes? That is exactly what the multiprocessing package allows us to do. While this speeds up the program, each additional process also takes up system resources, and it becomes an additional challenge to exchange data between the processes.

Modifying our source code once again, this time to start multiple processes for completing the requests, we get:

from datetime import datetime
from multiprocessing import Pool, cpu_count

import httpx

def run():
    httpx.get("https://httpbin.org/get")

if __name__ == "__main__":
    start_time = datetime.now()
    # Set the number of processes running at the same time
    # equal to the number of CPU cores.
    pool = Pool(cpu_count())
    for _ in range(0, 10):
        pool.apply_async(run, [])
    # Wait for all workers to finish before stopping the clock.
    pool.close()
    pool.join()
    print("Took: ", datetime.now() - start_time)

This brings our total down to 0:00:00.531039! That is roughly a third of the time the asynchronous implementation took.

Originally published at on April 13, 2021.
