Get output from a Paramiko SSH exec_command continuously
Question:
I am executing a long-running Python script on a remote machine over SSH using Paramiko. It works like a charm, no problems so far.
Unfortunately, the stdout (and likewise the stderr) is only displayed after the script has finished! Because of the long execution time, I'd much prefer to see each new line as it is printed, not afterwards.
import paramiko

remote = paramiko.SSHClient()
remote.set_missing_host_key_policy(paramiko.AutoAddPolicy())
remote.connect("host", username="uname", password="pwd")
# myScript produces continuous output that I want to capture as it appears
stdin, stdout, stderr = remote.exec_command("python myScript.py")
stdin.close()
for line in stdout.read().splitlines():
    print(line)
How can this be achieved? Note: of course one could pipe the output to a file and `less` that file via another SSH session, but that is very ugly and I need a cleaner, ideally Pythonic, solution 🙂
Answers:
As specified in the read([size]) documentation, if you don't specify a size, it reads until EOF, which makes the script wait until the command ends before returning from read() and printing any output.
Check these answers: How to loop until EOF in Python? and How to do a "While not EOF" for examples of how to exhaust the file-like object.
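To illustrate the difference locally, here is a minimal sketch that uses io.StringIO as a stand-in for Paramiko's stdout (both are file-like objects, so the pattern is the same): iter(f.readline, "") yields each line as soon as readline() returns it, instead of blocking until EOF the way read() does.

```python
import io

# A stand-in for Paramiko's stdout: any file-like object behaves the same way.
fake_stdout = io.StringIO("line 1\nline 2\nline 3\n")

# iter(f.readline, "") produces each line as soon as readline() returns it,
# stopping when readline() returns "" at EOF -- unlike f.read(), which
# waits for the whole stream before returning anything.
lines = list(iter(fake_stdout.readline, ""))
print(lines)  # ['line 1\n', 'line 2\n', 'line 3\n']
```

Against a real Paramiko stdout, replace fake_stdout with the stdout returned by exec_command and print each line as it arrives.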
I was facing a similar issue. I was able to solve it by adding get_pty=True to paramiko:
stdin, stdout, stderr = client.exec_command("/var/mylongscript.py", get_pty=True)
A minimal and complete working example of how to use this answer (tested in Python 3.6.1):
# run.py
from paramiko import SSHClient

ssh = SSHClient()
ssh.load_system_host_keys()
ssh.connect('...')

print('started...')
stdin, stdout, stderr = ssh.exec_command('python -m example', get_pty=True)
for line in iter(stdout.readline, ""):
    print(line, end="")
print('finished.')
and
# example.py, at the server
import time

for x in range(10):
    print(x)
    time.sleep(2)
run on the local machine with
python -m run
Using this snippet from @JorgeLeitao's answer sped my stdout output up to almost real-time!!

stdin, stdout, stderr = ssh.exec_command('python -m example', get_pty=True)
for line in iter(stdout.readline, ""):
    print(line, end="")
I was using:

stdin, stdout, stderr = ssh.exec_command(cmd)
for line in stdout:
    # Process each line in the remote output
    print(line)
Streaming response data from a generator function.
I wanted to make a class that had more complexity than the standard Client.exec_command() examples and less than what I was seeing for Channel.exec_command() examples. Plus I covered some ‘gotchas’ that I encountered. This summary script was tested on CentOS Stream – Python 3.6.8.
import sys
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys('/etc/ssh/ssh_known_hosts')
try:
    client.connect('host', username='username', password='password',
                   port=22, timeout=2)
except Exception as _e:
    sys.stdout.write(str(_e))
    sys.exit(1)

# is_active can be a false positive, so further test
transport = client.get_transport()
if transport.is_active():
    try:
        transport.send_ignore()
    except Exception as _e:
        sys.stdout.write(str(_e))
        sys.exit(1)
else:
    sys.exit(1)

channel = transport.open_session()
# We're not handling stdout & stderr separately
channel.set_combine_stderr(1)
channel.exec_command('whoami')
# Command was sent, no longer need stdin
channel.shutdown_write()

def responseGen(channel):
    # Small outputs (i.e. 'whoami') can end up running too quickly,
    # so we yield channel.recv in both scenarios
    while True:
        if channel.recv_ready():
            yield channel.recv(4096).decode('utf-8')
        if channel.exit_status_ready():
            yield channel.recv(4096).decode('utf-8')
            break

# iterate over each yield as it is given
for response in responseGen(channel):
    sys.stdout.write(response)

# We're done, explicitly close the connection
client.close()
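One gotcha with the generator above: a single recv() after exit_status_ready() may not drain everything that arrived between the two checks. A variant that drains in a loop can be sketched as follows; the FakeChannel class is a hypothetical stand-in for paramiko.Channel, used only so the pattern can run and be verified locally without a server.

```python
def response_gen(channel, chunk_size=4096):
    # Yield decoded chunks until the remote command has exited
    # and the receive buffer is fully drained.
    while True:
        if channel.recv_ready():
            yield channel.recv(chunk_size).decode("utf-8")
        elif channel.exit_status_ready():
            # Drain anything that arrived between the two checks.
            while channel.recv_ready():
                yield channel.recv(chunk_size).decode("utf-8")
            break

class FakeChannel:
    """Hypothetical stand-in for paramiko.Channel, for local testing only."""
    def __init__(self, chunks):
        self._chunks = list(chunks)
    def recv_ready(self):
        return bool(self._chunks)
    def exit_status_ready(self):
        return not self._chunks
    def recv(self, size):
        return self._chunks.pop(0)

out = "".join(response_gen(FakeChannel([b"user", b"name\n"])))
print(out)  # username
```

With a real channel from transport.open_session(), only response_gen is needed; the fake exists purely to demonstrate the drain-then-break behaviour.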
The answers above all use exec_command, so they will help if that is how you run your command. But if you use send() with invoke_shell(), they will not apply. In that case, you can try this:
import socket
from time import sleep

while True:
    sleep(0.1)
    backMsg = ""
    try:
        backMsg = self.channel.recv(65536).decode('utf-8')
    except socket.timeout:
        break
    print('backMsg:%s, length:%d, channel recv status:%d' % (backMsg, len(backMsg), self.channel.recv_ready()))
    if len(backMsg) == 0 and not self.channel.recv_ready():
        break
But this still has a problem: if the channel cannot receive any reply from the server within the timeout, the code will break out of the loop even if you want it to keep running.
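One way to soften that problem is to treat a recv timeout as "still waiting" unless the remote command has actually finished. The sketch below assumes the channel has a recv timeout set via channel.settimeout(); the helper name and the FakeShellChannel stand-in are hypothetical, included only so the logic can run locally.

```python
import socket

def read_shell_output(channel, max_bytes=65536):
    # Keep reading while the remote side is still running; a recv timeout
    # alone is not treated as the end of the output.
    chunks = []
    while True:
        try:
            data = channel.recv(max_bytes)
        except socket.timeout:
            if channel.exit_status_ready():
                break      # command finished and went quiet
            continue       # still running; keep waiting
        if not data:
            break          # empty bytes means EOF
        chunks.append(data.decode("utf-8"))
    return "".join(chunks)

class FakeShellChannel:
    """Hypothetical stand-in for an invoke_shell() channel (testing only)."""
    def __init__(self, events):
        # events: bytes chunks, the string "timeout", or b"" for EOF
        self._events = list(events)
    def exit_status_ready(self):
        return not self._events
    def recv(self, size):
        event = self._events.pop(0)
        if event == "timeout":
            raise socket.timeout()
        return event

out = read_shell_output(FakeShellChannel([b"$ ls\n", "timeout", b"done\n", b""]))
print(out)  # $ ls
            # done
```

The key difference from the loop above: a timeout only ends the read when exit_status_ready() confirms the command is done, so a slow-but-alive command is not cut off.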