How to copy a file from container to host using copy in docker-py
Question:
I am using docker-py. I want to copy a file from docker container to host machine.
From docker-py documentation:
copy
Identical to the docker cp command. Get files/folders from the container.
Params:
container (str): The container to copy from
resource (str): The path within the container
Returns (str): The contents of the file as a string
I could create a container and start it, but I am unable to get the file copied from the container to the host. Can someone point out what I am missing? I have /mydir/myshell.sh in my docker container, which I tried copying to the host.
>>> a = c.copy(container="7eb334c512c57d37e38161ab7aad014ebaf6a622e4b8c868d7a666e1d855d217", resource="/mydir/myshell.sh")
>>> a
<requests.packages.urllib3.response.HTTPResponse object at 0x7f2f2aa57050>
>>> type(a)
<class 'requests.packages.urllib3.response.HTTPResponse'>
It would be very helpful if someone could help me figure out whether the file is being copied at all.
Answers:
copy is a deprecated method in docker-py; the preferred way is the put_archive method. So basically we need to create an archive and then put it into the container. I know that sounds weird, but that's what the API currently supports. If you, like me, think this can be improved, feel free to open an issue/feature request and I'll upvote it.
Here is a code snippet showing how to copy a file to the container:
import tarfile
import time
from io import BytesIO

def copy_to_container(container_id, artifact_file):
    with create_archive(artifact_file) as archive:
        cli.put_archive(container=container_id, path='/tmp', data=archive)

def create_archive(artifact_file):
    pw_tarstream = BytesIO()
    pw_tar = tarfile.TarFile(fileobj=pw_tarstream, mode='w')
    # Read as bytes -- tarfile payloads must be binary, not text.
    with open(artifact_file, 'rb') as f:
        file_data = f.read()
    tarinfo = tarfile.TarInfo(name=artifact_file)
    tarinfo.size = len(file_data)
    tarinfo.mtime = time.time()
    # tarinfo.mode = 0o600
    pw_tar.addfile(tarinfo, BytesIO(file_data))
    pw_tar.close()
    pw_tarstream.seek(0)
    return pw_tarstream
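Since put_archive expects a tar stream, it can help to verify the archive-building step on its own, with no Docker daemon involved. This is a minimal standalone sketch of the same technique (the file name and payload are made up for illustration): build a single-file tar in memory, then read it back to confirm it round-trips.

```python
import tarfile
import time
from io import BytesIO

def make_tar(name, payload):
    # Build an in-memory tar containing one file, the format put_archive expects.
    stream = BytesIO()
    with tarfile.open(fileobj=stream, mode='w') as tar:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        info.mtime = time.time()
        tar.addfile(info, BytesIO(payload))
    stream.seek(0)
    return stream

# Round-trip check: extract the member and compare it to the original payload.
archive = make_tar('myshell.sh', b'#!/bin/sh\necho hello\n')
with tarfile.open(fileobj=archive, mode='r') as tar:
    extracted = tar.extractfile('myshell.sh').read()
print(extracted)
```

If the round trip succeeds, the same stream can be handed to put_archive as in the snippet above.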
In my Python script I added a call to run Docker with docker run -it -v artifacts:/artifacts target-build
so that the files generated by docker run end up in the artifacts folder.
The current top answer to this question describes how to copy from host to container.
To copy from container to host you can use a similar method laid out in the documentation here on the get_archive function: https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.get_archive
for example:
import docker

client = docker.from_env()
container = client.containers.get('your-container-name')

def copy_from_container(dest, src):
    # get_archive streams the resource as a tar archive, chunk by chunk.
    bits, stat = container.get_archive(src)
    with open(dest, 'wb') as f:
        for chunk in bits:
            f.write(chunk)

copy_from_container('/path/to/host/destination.tar', '/container/source/location.sh')
This should save the container source file to a tar archive on your host machine.
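Note that get_archive always returns a tar stream, even for a single file, so the .sh file still has to be extracted on the host afterwards. A minimal sketch of that extraction step using the standard tarfile module (here the archive is built in memory as a stand-in for the .tar you saved; the member name 'location.sh' is illustrative):

```python
import tarfile
from io import BytesIO

# Stand-in for the .tar written by copy_from_container(); in real use,
# pass the path of the saved archive to tarfile.open() instead.
sample = BytesIO()
with tarfile.open(fileobj=sample, mode='w') as tar:
    data = b'#!/bin/sh\necho from-container\n'
    info = tarfile.TarInfo(name='location.sh')
    info.size = len(data)
    tar.addfile(info, BytesIO(data))
sample.seek(0)

# Extract the single member -- this is the step get_archive leaves to you.
with tarfile.open(fileobj=sample, mode='r') as tar:
    member = tar.getmembers()[0]
    contents = tar.extractfile(member).read()
print(member.name, len(contents))
```

For a multi-file directory copy, tar.extractall() on the saved archive does the same job in one call.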