Running a command line containing pipes and displaying the result on STDOUT

Question:

How would one call a shell command from Python which contains a pipe and capture the output?

Suppose the command was something like:

cat file.log | tail -1

The Perl equivalent of what I am trying to do would be something like:

my $string = `cat file.log | tail -1`;
Asked By: spudATX


Answers:

This:

import subprocess
p = subprocess.Popen("cat file.log | tail -1", shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# for shell=False use absolute paths
p_stdout, p_stderr = p.communicate()  # reads both pipes; avoids deadlock
print(p_stdout.decode())

Or, if you only need the result displayed on STDOUT rather than captured (os.system does not capture output; it returns the exit status):

import os
status = os.system("cat file.log | tail -1")
Answered By: chown

Use subprocess.PIPE, as explained in the subprocess docs section “Replacing shell pipeline”:

import subprocess
p1 = subprocess.Popen(["cat", "file.log"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tail", "-1"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
output, err = p2.communicate()

Or, using the sh module, piping becomes composition of functions:

import sh
output = sh.tail(sh.cat('file.log'), '-1')
Answered By: unutbu
import subprocess
task = subprocess.Popen("cat file.log | tail -1", shell=True, stdout=subprocess.PIPE)
data = task.stdout.read()
assert task.wait() == 0

Note that this does not capture stderr. If you want to capture stderr as well, use task.communicate(); calling task.stdout.read() and then task.stderr.read() can deadlock if the stderr buffer fills. If you want the two streams combined, add 2>&1 to the shell command.
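A minimal sketch of the communicate() approach described above; `seq 1 5 | tail -1` stands in for `cat file.log | tail -1` so the example is self-contained (assumes a POSIX shell):

```python
import subprocess

task = subprocess.Popen(
    "seq 1 5 | tail -1",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
# communicate() drains stdout and stderr concurrently, so neither
# pipe's buffer can fill up and block the child process.
out, err = task.communicate()
print(out.decode())  # 5
```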

But given your exact case,

task = subprocess.Popen(['tail', '-1', 'file.log'], stdout=subprocess.PIPE)
data = task.stdout.read()
assert task.wait() == 0

avoids the need for the pipe at all.

Answered By: retracile
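Relatedly, if the goal is really just the last line of a file, plain Python avoids spawning any subprocess at all. A sketch, using a temporary file as a stand-in for file.log (fine for modestly sized files; for huge logs, seek from the end instead):

```python
import os
import tempfile

# Create a small sample log so the example is self-contained.
with tempfile.NamedTemporaryFile('w', suffix='.log', delete=False) as f:
    f.write('first\nsecond\nlast\n')
    path = f.name

# The `tail -1` step in pure Python -- no subprocess needed.
with open(path) as f:
    last_line = f.readlines()[-1].rstrip('\n')

print(last_line)  # last
os.remove(path)
```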

Another way similar to Popen would be:

import subprocess

command = r"""cat file.log | tail -1"""
output = subprocess.check_output(command, shell=True)
Answered By: XAVI
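On Python 3.5+, subprocess.run offers a similar one-call route; a sketch (capture_output requires 3.7+), again with `seq 1 5 | tail -1` as a self-contained stand-in for the original pipeline:

```python
import subprocess

result = subprocess.run(
    "seq 1 5 | tail -1",
    shell=True,
    capture_output=True,  # Python 3.7+; captures stdout and stderr
    text=True,            # decode bytes to str automatically
)
print(result.stdout.strip())  # 5
```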

This is a fork of @chown’s answer with some improvements:

  • aliasing the import (import subprocess as sp) makes the parameters shorter to write
  • if you only want the output, you don’t need to set stderr or stdin when calling Popen
  • for better formatting, it’s recommended to decode the output
  • shell=True is necessary so that a shell interprets the command line (and its pipe)

#!/usr/bin/python3

import subprocess as sp

p = sp.Popen("cat app.log | grep guido", shell=True, stdout=sp.PIPE)

output = p.stdout.read()
print(output.decode('utf-8'))

$ cat app.log 
2017-10-14 22:34:12, User Removed [albert.wesker]
2017-10-26 18:14:02, User Removed [alexei.ivanovich] 
2017-10-28 12:14:56, User Created [ivan.leon]
2017-11-14 09:22:07, User Created [guido.rossum]

$ python3 subproc.py 
2017-11-14 09:22:07, User Created [guido.rossum]
Answered By: ivanleoncz

A simple function to run a shell command containing multiple pipes

Usage:

res, err = eval_shell_cmd('pacman -Qii | grep MODIFIED | grep -v UN | cut -f 2')

Function

import subprocess


def eval_shell_cmd(command, debug=False):
    """
    Eval shell command with pipes and return result
    :param command: Shell command
    :param debug: Debug flag
    :return: Result string
    """
    processes = command.split(' | ')

    if debug:
        print('Processes:', processes)

    for index, value in enumerate(processes):
        args = value.split(' ')

        if debug:
            print(index, args)

        if index == 0:
            p = subprocess.Popen(args, stdout=subprocess.PIPE)
        else:
            p = subprocess.Popen(args, stdin=p.stdout, stdout=subprocess.PIPE)

        if index == len(processes) - 1:
            result, error = p.communicate()
            return result.decode('utf-8'), error
Answered By: phpusr
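One caveat about the function above: splitting each stage with split(' ') breaks arguments that contain quoted spaces. The standard library’s shlex.split handles shell-style quoting; a small sketch (the grep command here is purely illustrative):

```python
import shlex

# A hypothetical pipeline stage whose quoted argument contains a space.
cmd = 'grep -F "hello world" app.log'

# Naive splitting tears the quoted argument apart:
print(cmd.split(' '))    # ['grep', '-F', '"hello', 'world"', 'app.log']

# shlex.split respects the quotes:
print(shlex.split(cmd))  # ['grep', '-F', 'hello world', 'app.log']
```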