Ignoring Bash pipefail for error code 141
Question:
Setting the bash pipefail option (via set -o pipefail) causes a script to fail if any command in a pipeline exits with a non-zero status.
However, we are running into SIGPIPE errors (exit code 141), where data is written to a pipe that no longer exists.
Is there a way to make bash ignore SIGPIPE errors, or is there an approach to writing an error handler that will handle all error status codes except, say, 0 and 141?
For instance, in Python, we can add:
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
to restore the default behavior for SIGPIPE: terminating silently instead of raising an exception (cf. http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-06/3823.html).
Is there a similar option available in bash?
Answers:
The trap command lets you specify a command to run when a signal is received. To ignore a signal, pass the empty string:
trap '' PIPE
There isn’t a way that I know of to do this for the whole script. It would be risky in general, since there’s no way to know that a child process didn’t return 141 for a different reason.
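To see why this is ambiguous, note that 141 is 128 + 13 (the SIGPIPE signal number), but nothing stops a process from returning 141 on purpose. A quick sketch:

```shell
# 141 usually means 128 + 13 (killed by SIGPIPE), but a child can
# also return it deliberately, so a blanket exemption is ambiguous:
(exit 141); echo "$?"   # prints 141, yet no SIGPIPE was involved
```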
But you can do this on a per-command basis. The || operator suppresses any error returned by the first command, so you can do something like:
set -e -o pipefail
(cat /dev/urandom || true) | head -c 10 | base64
echo 'cat exited with SIGPIPE, but we still got here!'
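For contrast, a sketch of the same pipeline without the || true guard, run in a child shell so the failure is observable: errexit plus pipefail abort on cat's SIGPIPE.

```shell
# Without the guard, errexit + pipefail abort the script on cat's SIGPIPE:
bash -c 'set -e -o pipefail
cat /dev/urandom | head -c 10 > /dev/null
echo "not reached"'
echo "child exited with: $?"   # 141 (128 + SIGPIPE)
```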
I handle this on a per-pipeline basis by tacking on an || if ... statement to swallow exit code 141 but generate exit code 1 for any other error. (The original exit code that caused a non-141 failure is lost, since $? is changed by the [[ ]] test inside the if.)
pipe | that | fails || if [[ $? -eq 141 ]]; then true; else exit $?; fi
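A sketch of that caveat, with (exit 7) standing in for a command failing with status 7:

```shell
# The pattern reports 1 rather than 7, because `exit $?` sees the
# status of the failed [[ ]] test, not the pipeline's:
bash -c 'set -o pipefail
(exit 7) | cat || if [[ $? -eq 141 ]]; then true; else exit $?; fi'
echo "reported: $?"   # reported: 1
```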
Is there a way to set bash to ignore SIGPIPE errors, or is there an approach to writing an error handler that will handle all error status codes but, say, 0 and 141?
An alternative answer to the second question, which ignores the SIGPIPE termination signal from the child process (exit code 141):
cmd1 | cmd2 | cmd3 || { ec=$?; [ $ec -eq 141 ] && true || (exit $ec); }
This uses the exit command in a subshell to keep the original exit code from the pipeline intact when it is not 141. Thus, it has the intended effect when set -e (set -o errexit) is in effect along with set -o pipefail.
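To see that the subshell exit preserves the original status, here is a sketch with (exit 7) standing in for a command failing with status 7:

```shell
# Unlike the if-based pattern, the original status (7 here) survives:
bash -c 'set -o pipefail
(exit 7) | cat || { ec=$?; [ $ec -eq 141 ] && true || (exit $ec); }'
echo "reported: $?"   # reported: 7
```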
We can use a function for cleaner code, which allows the use of return instead of the trick of putting exit in a subshell:
handle_pipefails() {
    # ignore exit code 141 from command pipes
    [ "$1" -eq 141 ] && return 0
    return "$1"
}
# then use it or test it as:
yes | head -n 1 || handle_pipefails $?
echo "ec=$?"
# then change the tested code from 141 to e.g. 999 in
# the function, and see that ec was in fact captured as
# 141, unlike the current highest voted answer which
# exits with code 1.
# An alternative, if you want to test the exit status of all commands in a pipe:
handle_pipefails2() {
    # ignore exit code 141 from more complex command pipes
    # - use with: cmd1 | cmd2 | cmd3 || handle_pipefails2 "${PIPESTATUS[@]}"
    for x in "$@"; do
        (( x == 141 )) || { (( x > 0 )) && return $x; }
    done
    return 0
}
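For instance, it can be exercised like this (the function is redefined here so the snippet stands alone):

```shell
handle_pipefails2() {
    # ignore exit code 141, propagate any other non-zero status
    for x in "$@"; do
        (( x == 141 )) || { (( x > 0 )) && return $x; }
    done
    return 0
}

set -o pipefail
# yes is killed by SIGPIPE (141) when head exits; head itself exits 0:
yes | head -n 1 > /dev/null || handle_pipefails2 "${PIPESTATUS[@]}"
echo "ec=$?"   # ec=0
```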
Aside: Question Interpretation
As pointed out in the comments to @chepner’s answer, interpreting the first part of the question is more difficult — one interpretation is, "ignoring error codes from child processes generated by SIGPIPE", and the code above accomplishes that. However, causing Bash to ignore the SIGPIPE signal entirely can make a child process write forever, since it never receives a termination signal. E.g.
(yes | head -n 1; echo $?)
# y
# 141
(trap '' PIPE; yes | head -n 1; echo $?)
# must be terminated with Ctrl-C, as `yes` will write forever
set -o errexit # Exit on error, do not continue running the script
set -o nounset # Trying to access a variable that has not been set generates an error
set -o pipefail # When a pipe fails generate an error
random="$(! (cat /dev/urandom) | tr -dc A-Za-z0-9 | head -c 10)"
echo "$random"
Since head cuts cat off after 10 bytes, the pipeline fails with exit code 141. But because the ! negates the status of the whole pipeline, the failure does not trip the pipefail/errexit detection.
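The same idea in minimal form, without the command substitution:

```shell
set -o pipefail
# `!` negates the status of the entire pipeline, so the 141 from
# cat becomes 0 and neither pipefail nor errexit is triggered:
! cat /dev/urandom | head -c 10 > /dev/null
echo "status: $?"   # status: 0
```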