Paramiko SSH dies/hangs with big output

Question:

I am trying to back up a server by using Paramiko and SSH to call a tar command. When there is a limited number of files, everything works well, but when it is a big folder, the script waits endlessly. The following test shows me that the problem comes from the size of the stdout.

Is there a way to correct that and execute this kind of command?

Case with big output:

query = 'cd /;ls -lshAR -h'
chan.exec_command(query)
while not chan.recv_exit_status():
    if chan.recv_ready():
        data = chan.recv(1024)
        while data:
            print data
            data = chan.recv(1024)

    if chan.recv_stderr_ready():
        error_buff = chan.recv_stderr(1024)
        while error_buff:
            print error_buff
            error_buff = chan.recv_stderr(1024)
    exist_status = chan.recv_exit_status()
    if 0 == exist_status:
        break

Result (not OK, the script blocks/hangs):

2015-07-25 12:57:07,402 --> Query sent

Case with small output:

query = 'cd /;ls -lshA -h'
chan.exec_command(query)
while not chan.recv_exit_status():
    if chan.recv_ready():
        data = chan.recv(1024)
        while data:
            print data
            data = chan.recv(1024)

    if chan.recv_stderr_ready():
        error_buff = chan.recv_stderr(1024)
        while error_buff:
            print error_buff
            error_buff = chan.recv_stderr(1024)
    exist_status = chan.recv_exit_status()
    if 0 == exist_status:
        break

Result (all is OK):

2015-07-25 12:55:08,205 --> Query sent
total 172K
4.0K drwxr-x---   2 root psaadm 4.0K Dec 27  2013 archives
   0 -rw-r--r--   1 root root      0 Jul  9 23:49 .autofsck
   0 -rw-r--r--   1 root root      0 Dec 27  2013 .autorelabel
4.0K dr-xr-xr-x   2 root root   4.0K Dec 23  2014 bin
2015-07-25 12:55:08,307 --> Query executed (0.10) 

Reported on GitHub:
https://github.com/paramiko/paramiko/issues/563

Asked By: Alexis G


Answers:

If ls -R prints a lot of error output (which is likely if the current user is not root and therefore does not have access to all folders), your code eventually deadlocks.

That is because the output buffer of the error stream eventually fills up, so ls stops, waiting for you to read the stream (empty the buffer).

Meanwhile, you are waiting for the regular output stream to finish, which it never does, because ls is waiting for you to read the error stream, which you never do.

You have to read both streams in parallel (see Run multiple commands in different SSH servers in parallel using Python Paramiko).
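
For illustration, here is a minimal sketch of reading both streams from one channel in the same loop, so that neither buffer can fill up and stall the remote command. The host name and credentials are placeholders, not taken from the question:

import time
import paramiko

# Placeholder connection details, for illustration only.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('example.com', username='user', password='secret')

chan = client.get_transport().open_session()
chan.exec_command('cd /; ls -lshAR')

stdout_chunks, stderr_chunks = [], []
# Keep draining both streams until the command has exited and both buffers are empty.
while not chan.exit_status_ready() or chan.recv_ready() or chan.recv_stderr_ready():
    while chan.recv_ready():
        stdout_chunks.append(chan.recv(4096))
    while chan.recv_stderr_ready():
        stderr_chunks.append(chan.recv_stderr(4096))
    time.sleep(0.05)  # avoid a busy loop

exit_status = chan.recv_exit_status()  # returns immediately, the command has finished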

Or, even easier, use Channel.set_combine_stderr to merge both streams into one.
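
A sketch of the set_combine_stderr approach, assuming the same client connection as in the previous sketch; with stderr merged into stdout there is only one stream to drain before asking for the exit status:

chan = client.get_transport().open_session()
chan.set_combine_stderr(True)  # merge stderr into the stdout stream
chan.exec_command('cd /; ls -lshAR')

# Read the single combined stream until EOF (recv() returns an empty string when done) ...
output = []
data = chan.recv(4096)
while data:
    output.append(data)
    data = chan.recv(4096)

# ... and only then ask for the exit status, which can no longer block.
exit_status = chan.recv_exit_status()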

Answered By: Martin Prikryl

Read the data before checking the exit status: recv_exit_status() blocks until the command finishes, and the command cannot finish while its output is not being consumed.

See also: channel.recv_exit_status() hanging
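
Applied to the code from the question, that advice amounts to polling with exit_status_ready() (which does not block) and only calling the blocking recv_exit_status() after all output has been read. A rough sketch, assuming chan is a freshly opened channel set up as in the question:

chan.exec_command('cd /;ls -lshAR -h')
# Poll without blocking, reading both streams as data arrives.
while not chan.exit_status_ready():
    if chan.recv_ready():
        print(chan.recv(1024))
    if chan.recv_stderr_ready():
        print(chan.recv_stderr(1024))
# Drain whatever is still buffered after the command has exited.
while chan.recv_ready():
    print(chan.recv(1024))
while chan.recv_stderr_ready():
    print(chan.recv_stderr(1024))
exit_status = chan.recv_exit_status()  # returns immediately now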

Answered By: hustljian