Changed Django's Docker Environment from Alpine + uWSGI to Debian + Daphne → Ended Up with Uvicorn After All

Up until now, I have often built Django images on Alpine Linux + uWSGI. However, Python is known to run slowly on Alpine Linux.
- Reasons why you shouldn't use Alpine Linux as a base image when using Python with Docker
- Issues with running Python on an Alpine image - Qiita
Additionally, I was using uWSGI as the HTTP server on Alpine, but its configuration is complex, and it felt like overkill for a service running on Kubernetes.
Therefore, I decided to change the environment for the Django service.
The base image was changed from Alpine to a Debian-based Python, and the HTTP server was switched to Daphne.
Note: Daphne was later replaced with Uvicorn because it couldn't handle concurrent requests.
Changes to the Docker Image
Switch from Alpine to Python (Debian)
I adopted a multi-stage build: the full Python image runs the dependency install (pipenv sync), and the artifacts are copied into a final stage based on the slim Python image.
Previously, when I built the image with source code, dependencies, and uWSGI on Alpine, it came to 277 MB. With a multi-stage build using python:3.10-bullseye and python:3.10-slim-bullseye, the final image is 313 MB. Slightly larger, but almost the same.
The Dockerfile is provided below.
Changes to the HTTP Server
Switch from uWSGI to Daphne
uWSGI is a very good library, but it has many tuning parameters, and keeping them all tuned becomes exhausting over time.
Due to how it works, uWSGI's memory usage tends to grow with each response. To keep this in check, you can restart workers after a set number of requests (max-requests). Is this a memory leak? No; it is a consequence of how garbage collection works.
A restarting worker is temporarily unavailable, but as long as the other workers are alive, the service as a whole stays up. However, because every worker hits the threshold after the same number of requests, the restarts tend to cluster together, briefly interrupting the service.
There is an option (max-requests-delta) that staggers each worker's restart threshold, which should avoid these interruptions. However, the setting is not available in the latest builds: despite having been documented for years, it has never actually worked as expected (I had assumed it was working).
- Is your uWSGI really using max-requests-delta? - Qiita
- uWSGI's max-requests-delta is not working - Qiita
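For reference, the worker-recycling options in question look roughly like this in a uwsgi.ini (a sketch with illustrative values; the module name and counts are placeholders):

```ini
; uwsgi.ini (illustrative values)
[uwsgi]
module = my_app.wsgi:application
processes = 4
; recycle each worker after 1000 requests to cap memory growth
max-requests = 1000
; stagger each worker's threshold; documented, but absent in recent builds
max-requests-delta = 50
```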
Therefore, I decided to switch to a different application server.
As candidates, besides the classic Gunicorn, there are Uvicorn and Hypercorn, commonly used with FastAPI, and Daphne, developed by the Django team.
Daphne, Uvicorn, and Hypercorn all speak ASGI, the mainstream interface for Django 3 and later.
Since I am using Django, I chose Daphne, developed by the Django team.
Note: Switched from Daphne to Uvicorn
Daphne processes requests sequentially as coroutines on a single event loop. That is fine when every view is fully async, but the existing service is not written that way, so it struggled with concurrent requests.
Since many of the applications run as a single pod, responsiveness suffered, and I switched to Uvicorn, which makes it easy to start multiple workers.
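To illustrate why fully async views matter on a single event loop, here is a toy sketch (not Django code; the view functions are hypothetical): coroutines that await instead of blocking can overlap, so many "requests" finish in roughly the time of one.

```python
# Toy illustration: an async view yields control while waiting, so one
# event loop can make progress on many requests at once. A blocking sync
# view would hold the loop for its entire duration instead.
import asyncio
import time

async def async_view():
    await asyncio.sleep(0.1)  # simulate async I/O (DB call, HTTP request, ...)
    return "ok"

async def main():
    start = time.monotonic()
    # Ten concurrent "requests": the sleeps overlap on the event loop,
    # so the batch takes roughly 0.1 s rather than 1 s.
    results = await asyncio.gather(*(async_view() for _ in range(10)))
    elapsed = time.monotonic() - start
    print(f"{len(results)} requests in {elapsed:.2f}s")
    return elapsed

if __name__ == "__main__":
    asyncio.run(main())
```

With synchronous views, that overlap never happens, which is why multiple workers (as Uvicorn makes easy) are the practical fix.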
Serving Static Content
Switching from uWSGI to Daphne raises the question of how to serve static content.
For large services with many customers, static content is usually served via a setup like CloudFront + S3, and in that case there is no problem. For smaller services such as internal tools or admin sites, however, a simpler way to serve static files is desirable.
uWSGI has a static-map feature that serves static files with minimal configuration; it is perfect for internal tools, and I used it frequently. Daphne, however, does not serve static files.
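For reference, a single static-map line is all it took in uwsgi.ini (a sketch; the URL prefix and directory are placeholders matching a typical layout):

```ini
; uwsgi.ini (illustrative): map the /static URL prefix to the
; collectstatic output directory on disk
[uwsgi]
static-map = /static=/var/app/my_app/static
```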
One option is to put Nginx in front of Daphne and route static content and Django requests there. But I wanted to avoid adding yet another daemon, so I looked for a different solution.
A relatively recent and popular solution seems to be an application called WhiteNoise.
WhiteNoise is a static content server written in Python that can be easily integrated as middleware in Django.
Running a static content server in Python might seem nonsensical, but the answer to this is in the official documentation.
https://whitenoise.evans.io/en/stable/#infrequently-asked-questions
As an alternative to S3 or Nginx, it perfectly fits my needs.
Dockerfile
The Dockerfile is as follows:
FROM python:3.10-bullseye AS builder

# Copy Pipfile
COPY Pipfile /tmp/Pipfile
COPY Pipfile.lock /tmp/Pipfile.lock

# pipenv sync
# For some projects: pipenv install --system --ignore-pipfile --deploy
RUN python3 -m pip install pipenv \
    && PIPENV_PIPFILE=/tmp/Pipfile pipenv sync --system \
    && python3 -m pip install uvicorn
# Previously: && python3 -m pip install daphne

FROM python:3.10-slim-bullseye

# Copy the shared objects needed by the MySQL client from the builder stage
COPY --from=builder \
    /usr/lib/x86_64-linux-gnu/libmariadb.a \
    /usr/lib/x86_64-linux-gnu/libmariadb.so.3 \
    /usr/lib/x86_64-linux-gnu/
COPY --from=builder /usr/lib/x86_64-linux-gnu/libmariadb3/ \
    /usr/lib/x86_64-linux-gnu/libmariadb3/

# Copy the libraries installed by pipenv from the builder stage
COPY --from=builder /usr/local/lib/python3.10/site-packages \
    /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/lib/python3.10/lib-dynload \
    /usr/local/lib/python3.10/lib-dynload
COPY --from=builder /usr/local/bin /usr/local/bin

# Create symbolic links for the shared objects
RUN ln -s /usr/lib/x86_64-linux-gnu/libmariadb.a \
    /usr/lib/x86_64-linux-gnu/libmariadbclient.a \
    && ln -s /usr/lib/x86_64-linux-gnu/libmariadb.so.3 \
    /usr/lib/x86_64-linux-gnu/libmariadb.so \
    && ln -s /usr/lib/x86_64-linux-gnu/libmariadb.so.3 \
    /usr/lib/x86_64-linux-gnu/libmariadbclient.so

COPY my_app /var/app/my_app
RUN chown -R nobody:nogroup /var/app

USER nobody
WORKDIR /var/app/my_app
RUN python3 ./manage.py collectstatic --noinput

EXPOSE 8002
CMD ["uvicorn", \
     "my_app.asgi:application", \
     "--host", "0.0.0.0", \
     "--port", "8002", \
     "--workers", "4"]
# Previously: CMD ["daphne", "-b", "0.0.0.0", "-p", "8002", "my_app.asgi:application"]
Running WhiteNoise with Django
To host static files, add whitenoise.middleware.WhiteNoiseMiddleware to the MIDDLEWARE setting in Django.
Using WhiteNoise with Django - WhiteNoise 6.2.0 documentation
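A minimal settings.py sketch of that change might look like the following (the STATIC_ROOT path is an assumption; the WhiteNoise docs recommend placing the middleware directly after SecurityMiddleware):

```python
# settings.py fragment (sketch): enable WhiteNoise as Django middleware.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise goes right after SecurityMiddleware
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of the middleware stack unchanged ...
]

STATIC_URL = "/static/"
# Directory that collectstatic fills and WhiteNoise serves from
STATIC_ROOT = BASE_DIR / "staticfiles"
```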
Extending Cache Duration
The default cache header lifetime (max-age) for WhiteNoise is
60 if not settings.DEBUG else 0
which is quite short, so I changed it to 7 days:
WHITENOISE_MAX_AGE = 86400 * 7
(Set this in the production settings)
Media Server
WhiteNoise does not serve Django's MEDIA_URL.
http://whitenoise.evans.io/en/stable/django.html#serving-media-files
The reasons are explained on the page above. To serve media, you therefore need to integrate something like django-storages and serve the files from S3 or Nginx.
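As a rough sketch, an S3-backed media setup with django-storages could look like this (the bucket name is a placeholder, credentials are assumed to come from the environment, and newer Django versions configure this via the STORAGES dict instead):

```python
# settings.py fragment (sketch): offload MEDIA files to S3 via django-storages.
INSTALLED_APPS = [
    # ... existing apps ...
    "storages",
]

# Pre-Django-4.2 style setting; Django 4.2+ uses the STORAGES dict.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-media-bucket"  # placeholder bucket name
MEDIA_URL = f"https://{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com/"
```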