Send SIGINT to subprocess and also see output
Question:
In a celery task, I am launching a subprocess and need (1) to be able to send a SIGINT while (2) also having access to the subprocess’s stdout and stderr. I am able to do one or the other, but not both simultaneously.
I can send SIGINT when the command in subprocess is given as a list, or as a string prepended with bash:
proc = subprocess.Popen(
    [sys.executable, "-m", "path.to.module", myarg1, myarg2, ...],  # also works with f"/bin/bash -c {sys.executable} -m path.to.module {myarg1} {myarg2} ..."
    stdin=sys.stdin, stdout=PIPE, stderr=PIPE, shell=False
)
As far as I understand, both options are ultimately launching bash, and it seems that only a running bash will react to SIGINT.
Conversely, running "python -m …" means my program no longer reacts to the SIGINT, but on the other hand it allows me to start seeing the stdout/stderr and logging inside my Python program:
proc = subprocess.Popen(
    f"{sys.executable} -m path.to.module {myarg1} {myarg2} ...",
    stdin=sys.stdin, stdout=PIPE, stderr=PIPE, shell=False
)
With the above, I’m no longer able to send SIGINT to my program, but the logging is working.
How can I get both things to work at the same time? I’ve played around with shell=True and the various stdin/out/err tweaks, but no luck.
EDIT: With the top form (command as a list) and adding signal.signal() to my program in path.to.module, I am able both to receive the SIGINT and to see some output.
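What the EDIT describes can be sketched from the parent's side. This is a minimal, hypothetical illustration: the inline -c script below stands in for the real path.to.module, which is not shown in the question. With the list form, no shell sits between parent and child, so proc.send_signal() reaches the Python process directly, and communicate() still drains stdout/stderr afterwards.

```python
import signal
import subprocess
import sys
import time

# Hypothetical stand-in for "python -m path.to.module": prints one line,
# then sleeps so there is something to interrupt.
child_code = "import time; print('working', flush=True); time.sleep(30)"

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],   # list form: no shell in between
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
time.sleep(1)                             # let the child produce some output
proc.send_signal(signal.SIGINT)           # SIGINT goes straight to the child
out, err = proc.communicate()             # drain the pipes after the signal
print(out)                                # output written before the signal survives
```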
Answers:
need to be able to send a SIGINT …
I disagree with the premise.
I believe the need is to "stop a running process" in some sensible way, and INT, USR1, HUP or other signals might suffice.
The key is to have a signal handler active for the relevant signal.
The other detail is "access to … stdout", which is entirely reasonable. That is, we do not want abrupt termination, but rather we want printf(3) buffers to be flushed by write(2) syscalls prior to termination.
So the crux is that we want the child process (or maybe the process group) to be sent a signal it is expecting, and then to handle that signal gracefully.
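If the whole process group needs to be reached, one sketch (an assumption, not something from the question) is to start the child as its own group leader via start_new_session and then signal the group rather than the single pid:

```python
import os
import signal
import subprocess
import sys
import time

proc = subprocess.Popen(
    [sys.executable, "-c", "import time; print('up', flush=True); time.sleep(30)"],
    stdout=subprocess.PIPE,
    text=True,
    start_new_session=True,    # child becomes leader of a new process group
)
time.sleep(1)                  # let the child write something first
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)   # signal the whole group
out, _ = proc.communicate()    # pipe contents written before the signal survive
```

Without start_new_session the child shares the parent's group, and killpg would hit the parent too; the flag keeps the blast radius to the child and its descendants.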
bash traps certain signals, so that is definitely a concern.
proc = subprocess.Popen(..., shell=False)
… both options are ultimately launching bash
No, at least not in this instance. The shell=False argument means that bash is not involved here.
Consider defining a handler, and then doing signal.signal(signal.SIGUSR1, handler). The handler might choose to call sys.exit() in order to flush stdout buffers.
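A minimal sketch of that handler (SIGUSR1 here, though the same shape works for SIGINT; the demonstration that delivers the signal to the current process is purely illustrative and not part of the original code):

```python
import os
import signal
import sys

def handler(signum, frame):
    # sys.exit() raises SystemExit, which unwinds the stack and lets the
    # interpreter flush stdio buffers on the way out.
    print(f"caught signal {signum}, shutting down", flush=True)
    sys.exit(0)

signal.signal(signal.SIGUSR1, handler)

# Demonstration only: deliver SIGUSR1 to ourselves and observe the clean exit.
try:
    os.kill(os.getpid(), signal.SIGUSR1)
except SystemExit as exc:
    code = exc.code
```

In the child module, only the handler definition and the signal.signal() call are needed; the parent then sends the signal with proc.send_signal().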