How to use Rust to extend Python

When you start a software project, you face the choice of which programming language to use. Python is among the most popular and universal languages. Still, every tool has limitations.
Why not use Python for everything?
Some of the reasons could be:
- Speed. The dynamic nature of Python makes it notoriously slow at some tasks, and not great at parallel computation. You may want to write performance-critical code in a lower-level language.
- Reusability. Why port code when you can use it directly? Parts of NumPy/SciPy are still written in Fortran after all.
- Cooperation. One team writes Python, another writes Rust, and they need to use each other’s code.
- Migration. If, in some tragic case, you want to rewrite your entire Python codebase in Rust, you could do it module by module.
Luckily, Python is an extendable language, and that can be done easily enough.
If your only concern is speed, you could use Cython, or Numba, a library that just-in-time compiles your Python code into something faster. But if you want more control or pursue other goals, you will probably still need to write an extension.
The classic way is to write an extension in C, but it is possible to use any language provided it has a wrapper library for Python.
Why Rust?

I want to talk specifically about Rust, and not only because it is my hobby language. Rust is one of the most loved languages by developers, and recently gets a lot of attention from companies too. Rust is an innovative compiled language with an accent on safety. I believe learning Rust is a great way for Python developers to improve general programming skills.
What is important for our story is that Rust has almost no runtime (e.g. no garbage collector) which makes it behave well in a dynamic library. We don’t want some other complex runtime in the same process as the Python interpreter. That may cause all kinds of tricky issues.
However, Rust has a somewhat steep learning curve. One may want to split their codebase, and write critical code in Rust, and more common code, for example, a web service, in Python. Let’s see how we could do that.
First: the code
The two most common Rust libraries that provide bindings for Python are rust-cpython (which we’re going to use for the examples) and PyO3. PyO3 is a fork of rust-cpython, more advanced, but at the time of writing it only works on nightly Rust. We will be using stable Rust. Both libraries not only let you extend Python but also let you use it as a scripting language in Rust programs, although that is a whole different story.
I’m going to skip the part where we install Rust and learn how to use Cargo (Rust’s package manager and CLI; see the Rust book and the Cargo book) and get directly to preparing the code to be compiled and used from Python.
Our Rust project is very simple. It consists of just two files: src/lib.rs with the code, and Cargo.toml with some configuration:
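Here is a sketch of what Cargo.toml might contain; the crate and dependency versions are assumptions (the exact file is in the GitHub repository):

```toml
# Cargo.toml — package metadata and build configuration (sketch)
[package]
name = "mylib"
version = "0.1.0"
edition = "2018"

[lib]
name = "mylib"
# Build a dynamic library that Python can load
crate-type = ["cdylib"]

[dependencies]
# rust-cpython with the feature that enables building extension modules
cpython = { version = "0.4", features = ["extension-module"] }
```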
Apart from defining some metadata, this file tells Cargo about the cpython dependency. Dependencies are downloaded from crates.io during the build.
crate-type = ["cdylib"] tells the compiler that our module will be a dynamic library: a .dll file on Windows, .so on Linux, or .dylib on macOS.
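For reference, here is a sketch of what src/lib.rs looks like with rust-cpython’s 0.4-era macros; the exact file is in the GitHub repository:

```rust
use cpython::{py_fn, py_module_initializer, PyResult, Python};

// The function we want to expose to Python.
fn get_result(_py: Python, val: &str) -> PyResult<String> {
    Ok("Rust says: ".to_owned() + val)
}

// Generate the public interface for a Python module named `mylib`.
py_module_initializer!(mylib, initmylib, PyInit_mylib, |py, m| {
    m.add(py, "__doc__", "This module is implemented in Rust.")?;
    m.add(py, "get_result", py_fn!(py, get_result(val: &str)))?;
    Ok(())
});
```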
The get_result() function returns the value passed into it, prepended with "Rust says":
Ok("Rust says: ".to_owned() + val)
The rest of the code generates a wrapper that lets us use this module seamlessly from Python; the types are converted automatically.
The py_module_initializer macro provides a public interface that Python can read. The __doc__ part is not mandatory, but when it is present, you can see it with help(mylib).
Second: a library
Now we need to compile and build the library.
As I’m building it on my MacBook, I have to set some linker arguments; they are not needed on Linux or Windows. On macOS, I just need to add the following file to the project, .cargo/config:
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
Now, all we need to do is to run the following command:
cargo build --release
The resulting binary is in target/release/libmylib.dylib (it would be .so on Linux or .dll on Windows). In order for Python to see it, we rename the file to mylib.so:
cp target/release/libmylib.dylib ./mylib.so
Then we can give it a test:
python3
Python 3.7.7 (default, Mar 10 2020, 15:43:03)
[Clang 11.0.0 (clang-1100.0.33.17)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import mylib
>>> mylib.get_result('Hi')
'Rust says: Hi'
>>>
We have just called the Rust code from Python.
Third: a web service
Let’s consider another, more “real-world” example.
Suppose we want to build a Python web service that calls the Rust library and returns the result to the user. This is the entire code:
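A sketch of what main.py might look like; the try/except fallback is my addition so the snippet runs even without the compiled extension — the real code simply does import mylib (see the GitHub repository):

```python
# main.py -- a minimal Flask service delegating to the Rust extension (sketch)
from flask import Flask

try:
    import mylib  # the compiled Rust extension (mylib.so)
except ImportError:
    # Fallback stub so the sketch can be exercised without the extension.
    class mylib:
        @staticmethod
        def get_result(val):
            return "Rust says: " + val

app = Flask(__name__)

@app.route("/")
def index():
    # Delegate the work to the Rust library and return its answer.
    return mylib.get_result("Hello!")
```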
We can run it with FLASK_APP=main.py flask run, but for production we would like more: Continuous Integration/Deployment. For instance, why don’t we build a Docker image that we could then run in a Kubernetes environment?
Fourth: a Docker image

To achieve that, we will use a multi-stage build, so that the resulting Docker image will not contain any Rust build artifacts, only the compiled binary.
Here’s our Dockerfile:
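A sketch of the multi-stage Dockerfile; the exact paths and the requirements file name are assumptions (the actual file is in the GitHub repository):

```dockerfile
# --- Stage 1: build the Rust extension ---
FROM rust:latest as rust-build
WORKDIR /build
COPY Cargo.toml ./
COPY src ./src
RUN rustup target add x86_64-unknown-linux-gnu
RUN cargo build --release --target x86_64-unknown-linux-gnu

# --- Stage 2: the Python web service ---
FROM python:3.7-slim
WORKDIR /app
COPY requirements.txt main.py ./
RUN pip install -r requirements.txt
# Copy the compiled extension from the first stage, renaming it for Python
COPY --from=rust-build /build/target/x86_64-unknown-linux-gnu/release/libmylib.so /app/mylib.so
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "main:app"]
```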
The first build stage creates a Rust environment (we give it a name, rust-build), which we use to compile mylib; the environment itself is then discarded:
FROM rust:latest as rust-build
Now, the python:3.7-slim image we’re using is a Linux (Debian) environment, so when we copy over the Rust code and build the binary, we must target the Linux platform. The first line makes sure the target is installed; the second builds the binary.
RUN rustup target add x86_64-unknown-linux-gnu
RUN cargo build --release --target x86_64-unknown-linux-gnu
Before running cargo build we could also run cargo test, if we had any unit tests.
During the second (Python) build stage we copy our dynamic library (now a .so file) from the previous stage, renaming it just as we did manually before:
COPY --from=rust-build /build/target/x86_64-unknown-linux-gnu/release/libmylib.so /app/mylib.so
We use gunicorn to run the service in a "production-ready" fashion.
Now we can ask Docker to build it all together, compiling the Rust code and embedding it into our Python web service:
docker build . -t rust-python-web
And then run the service:
docker run -p 8000:8000 rust-python-web
And, voilà, it works:
❯ curl http://localhost:8000
Rust says: Hello!
...
This is one way to do it, although it has its downsides. For example, you may have two different teams, one working exclusively with Rust and one with Python. Maybe you have a corporate PyPI repository where all the Python library code is stored. A Python developer probably does not want to download the Rust code onto their machine; they just want to run
pip install mylib
and have it compiled and installed, as is the case with NumPy.
Fifth: a PIP package
There are a few libraries that can help us do that. setuptools-rust is probably the most straightforward: it extends standard setuptools so that it can compile Rust code into Python extensions.
To create a package, we copy the Rust code (src and Cargo.toml, without any changes) to a new directory. In this directory, we also create a few files that setuptools-rust requires (check out the GitHub repository).
We also need to create a mylib directory with an __init__.py file.
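The __init__.py can simply re-export everything from the compiled extension; this is a sketch, assuming the extension is built inside the package as mylib.mylib:

```python
# mylib/__init__.py
# Re-export the symbols of the compiled Rust extension,
# which setuptools-rust places inside this package.
from .mylib import *
```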
The most important file in the package is setup.py:
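A sketch of the setup.py, assuming setuptools-rust 0.10.x; the package name and version match the wheels built later, but the exact file is in the GitHub repository:

```python
# setup.py -- package definition using setuptools-rust (sketch)
from setuptools import setup
from setuptools_rust import Binding, RustExtension

setup(
    name="mylib-rust",
    version="1.0",
    packages=["mylib"],
    rust_extensions=[
        # Compile the crate described by Cargo.toml into mylib/mylib.so,
        # using rust-cpython for the bindings.
        RustExtension("mylib.mylib", "Cargo.toml", binding=Binding.RustCPython)
    ],
    zip_safe=False,  # binary extensions cannot be imported from a zip
)
```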
We import the RustExtension class and use it to define our Rust extension. We use rust-cpython for the bindings, but setuptools-rust lets us choose PyO3 too.
Now let’s create a virtual environment and try to install our package:
virtualenv env -p /usr/local/bin/python3
. env/bin/activate
pip install setuptools-rust==0.10.6
pip install ./package/
...
python3
...
>>> import mylib
>>> mylib.get_result('Hi')
'Rust says: Hi'
This is better, but because we distribute source code, our imaginary Python developer will still need Rust on their machine in order to use mylib. It will also not work in our Dockerfile, as there’s no Rust in python:3.7-slim.
So, in our imaginary polyglot company, we’ll need a PyPI repository, with binaries, pre-built for different platforms.
Sixth: a PIP package with binaries

First of all, we’ll need our own PyPI repository to share our packages, without polluting the global index. You can set up devpi or any analogue, but for our little example we’re just going to use TestPyPI, a playground for Python package builders.
Secondly, we’re going to create wheels: the file format that allows distributing Python code together with binary extensions. A .whl file is essentially a zip archive containing all the files, plus metadata, plus a file-name convention. The installer uses the file name to find a wheel suitable for the current platform.
So, on my MacBook, I can simply run
python setup.py bdist_wheel
This will compile the Rust code and create a file I’ll be able to install on macOS: dist/mylib_rust-1.0-cp37-cp37m-macosx_10_14_x86_64.whl
Again, for our web service we’re going to need a Linux library, and building for Linux is a little more tricky. We’re going to use a manylinux Docker image and run the following script inside the container:
#!/bin/bash
set -ex

curl https://sh.rustup.rs -sSf | sh -s -- -y
export PATH="$HOME/.cargo/bin:$PATH"

cd /io

for PYBIN in /opt/python/{cp35-cp35m,cp36-cp36m,cp37-cp37m}/bin; do
    export PYTHON_SYS_EXECUTABLE="$PYBIN/python"
    "${PYBIN}/pip" install -U setuptools wheel setuptools-rust
    "${PYBIN}/python" setup.py bdist_wheel
done

for whl in dist/*.whl; do
    auditwheel repair "$whl" -w dist/
done
So, the following commands
docker pull quay.io/pypa/manylinux1_x86_64
docker run --rm -v `pwd`:/io quay.io/pypa/manylinux1_x86_64 /io/build-wheels.sh
…will create wheels we could install on Linux.
ls package/dist
mylib_rust-1.0-cp35-cp35m-linux_x86_64.whl
mylib_rust-1.0-cp36-cp36m-linux_x86_64.whl
mylib_rust-1.0-cp37-cp37m-linux_x86_64.whl
mylib_rust-1.0-cp35-cp35m-manylinux1_x86_64.whl
mylib_rust-1.0-cp36-cp36m-manylinux1_x86_64.whl
mylib_rust-1.0-cp37-cp37m-manylinux1_x86_64.whl
mylib_rust-1.0-cp37-cp37m-macosx_10_14_x86_64.whl
The last step is to actually put these on TestPyPI. We need to register first, and then use twine to upload the files:
pip install twine
...
twine upload --skip-existing --repository-url https://test.pypi.org/legacy/ package/dist/*
Now I can check out my library on the website, https://test.pypi.org/project/mylib-rust/, and see the wheels listed there.
Finally, we can remove the Rust build stage from the Dockerfile. We just need to add mylib-rust to requirements.txt and set an additional index URL for pip:
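A sketch of what Dockerfile.pypi might look like; here the extra index URL is passed to pip so it can find our wheel on TestPyPI, and requirements.txt is assumed to list flask, gunicorn, and mylib-rust==1.0:

```dockerfile
# Dockerfile.pypi -- no Rust stage; the extension comes as a pre-built wheel
FROM python:3.7-slim
WORKDIR /app
COPY requirements.txt main.py ./
# Look on TestPyPI in addition to the default index
RUN pip install --extra-index-url https://test.pypi.org/simple/ -r requirements.txt
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "main:app"]
```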
If you run this build, you’ll see it doesn’t attempt to compile Rust source code but downloads a corresponding wheel instead:
docker build -f Dockerfile.pypi -t rust-python-web-pypi .
...
Collecting mylib-rust==1.0
Downloading https://test-files.pythonhosted.org/packages/c0/96/625665a8da1c85e2e553ba5e07ec77d5d4f3d73578303f1d09545a9ea10a/mylib_rust-1.0-cp37-cp37m-manylinux1_x86_64.whl (1.6 MB)
...
Now it is transparent for Python developers: everything related to Rust is encapsulated in the package.
Conclusion
In conclusion, I’d like to add that I believe in polyglot developers and polyglot teams. Increasingly, programmers are blending languages together. You don’t have to use a tool just because everyone else in the company does, or because it is your CTO’s favorite language.
It used to be much more difficult in the age of makefiles. But today, when tools like Docker and WebAssembly are so popular, why not give it a try?
You can find all the code used in this article in my GitHub repository. Photos by Marina Medvedeva.
Links:
- GitHub repository with all the examples
- Rust Book
- Cargo Book
- rust-cpython — Python bindings for Rust
- PyO3 — Python bindings for Rust
- setuptools-rust
- Packaging and distributing Python projects
- Using TestPyPI
- Rust for Python Programmers