How to avoid removing all Docker containers and images while developing a containerized Python application

Question:

I’m developing a Python application for Machine Learning models, you can see my docker-compose file here: https://github.com/Quilograma/IES_Project/blob/main/docker-compose.yml.

The problem is that while developing the application, every time I change a line of Python code I have to kill all active containers and remove their images, then call docker-compose up to see the change. It takes roughly 5 minutes to pull all the Docker images and install the Python libraries again, which significantly slows down development.

Is there any workaround for this issue? I really want to keep using containers. Thanks!

Asked By: Martim Sousa


Answers:

You do not need to remove any images; you only need to rebuild your image. All unchanged image layers (FROM python:<tag>, RUN pip install <packages...>) will be served from Docker's build cache, so only the layers after your code change are rebuilt.
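As an illustration, here is a minimal Dockerfile sketch (the requirements.txt file name, the /app path, and app.py are assumptions for illustration, not taken from the linked repository) that orders the layers so the slow dependency install stays cached across code changes:

    # Base layer: cached unless the tag changes
    FROM python:3.10-slim

    WORKDIR /app

    # Copy only the dependency list first, so this layer (and the
    # pip install below) is reused as long as requirements.txt is unchanged
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Copy the application code last; editing a .py file only
    # invalidates the layers from this point onward
    COPY . .

    CMD ["python", "app.py"]

With that layout, docker-compose up --build (or docker-compose build followed by docker-compose up) rebuilds in seconds rather than minutes, because only the COPY . . layer and everything after it are re-executed.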

The alternative solution (possible only because Python is an interpreted language) is to mount your module as a volume. Then, whenever you save a file on the host filesystem, it is automatically updated inside the container; see the sketch below.
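For example, a minimal docker-compose sketch (the service name app and the ./src path are assumptions for illustration):

    services:
      app:
        build: .
        volumes:
          # Bind-mount the host source tree over the code baked into
          # the image; edits on the host appear in the container immediately
          - ./src:/app/src

Note that the running Python process still has to pick up the change: a dev server with auto-reload (e.g. Flask's debug mode) restarts itself, while a plain script needs a container restart (docker-compose restart app), which is still far faster than a full rebuild.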

Personal example with a Flask server and a Kafka connection

Answered By: OneCricketeer