Shell snippets

Some standard snippets I use from time to time; perhaps they help others as well.

Allow comments in interactive zsh shells with set -k (otherwise pasting the commented examples below may produce errors).

Script directory

Often you want scripts to act on data in the directory the script is placed in (build scripts, maintenance, backups, …).

directory of executed script
self="$(dirname -- "$(readlink -f -- "$0")")"

Note

readlink -f is used to get the absolute path and also handles symlinks to your script - i.e. you can symlink the script into a directory in your $PATH, and it will still find the actual location.

Warning

As $0 is quite likely a relative path, you must not change the current working directory (i.e. no cd ... calls) before locating your script.
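
For example, a backup script can then reference files relative to its own location, no matter which directory it is invoked from. A minimal sketch (the data/ directory and the archive path are made up for this example):

#!/bin/bash
self="$(dirname -- "$(readlink -f -- "$0")")"

# back up the data directory that lives next to the script,
# independent of the caller's working directory
tar -czf /tmp/mydata-backup.tar.gz -C "${self}" data/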

If the script you want the location for isn’t actually executed but only “sourced”, you can use ${BASH_SOURCE[0]} in bash (this also works with executed scripts):

directory of sourced script
# lib.sh
lib_self="$(dirname -- "$(readlink -f -- "${BASH_SOURCE[0]}")")"

# some other script
source "${someplace}/lib.sh"
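
Inside lib.sh the resulting path can then be used to locate files that ship alongside the library; a small sketch (helpers.sh is a made-up name):

# lib.sh
lib_self="$(dirname -- "$(readlink -f -- "${BASH_SOURCE[0]}")")"

# load a sibling file relative to the library's own location
source "${lib_self}/helpers.sh"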

Explicitly check exit codes

Instead of relying on set -e (which silently does nothing in many contexts, e.g. inside if conditions or on the left side of && and ||) one should check exit codes explicitly.

# Run passed command and check its exit/return code
check_exec() {
  if "$@"; then
    : # do nothing
  else
    local rc=$?
    echo >&2 "Failed to run (exit code ${rc}): $@"
    exit "${rc}"
  fi
}
# Check exit code from previous command
check_exit_code() {
  local rc=$?
  if [ "${rc}" -ne 0 ]; then
    echo >&2 "exit code: ${rc}: $@"
    exit "${rc}"
  fi
}

Example for check_exec
check_exec mkdir /tmp/mytempdir

Example for check_exit_code
tmpdir=$(mktemp --tmpdir -d myscriptname-XXXXXXX)
check_exit_code "Failed to create tmpdir"

In functions one can forward errors like this:

outer() {
  if ! something; then # (1)
    return 1
  fi
}
  1. The ! operator overwrites the actual exit code in $? (only 0 or 1 is left); in order to access the original code you need

    if something; then :; else return $?; fi
    

    : is a “null” command that always succeeds - the shell doesn’t allow an empty command list in many places.

    or

    something
    rc=$?
    if [ "${rc}" -ne 0 ]; then return "${rc}"; fi
    

But it might be easier to use the exit-based error handling functions from above and use a subshell where you want to handle the error. (Subshells can’t modify variables in the outer shell though, so this isn’t always an option.)

if ( complicated-function-with-nested-calls ); then :; else
  echo >&2 "Failed to do ...; continue anyway"
fi

Temporary files

If you create temporary files in your scripts, make sure you clean them up afterwards. Usually you should clean up even if errors occurred, and a simple way to do this is an EXIT trap.

tmpdir=$(mktemp --tmpdir -d myscriptname-XXXXXXX)
trap 'rm -rf "${tmpdir}"' EXIT

# now put all temporary files in "${tmpdir}/"

You can have only one EXIT trap per shell; if you need a more dynamic hook system you have to build it yourself (register cleanup functions in an array, as sketched below).
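
A minimal bash sketch of such a hook system (the names cleanup_hooks, add_cleanup and run_cleanup are made up for this example):

declare -a cleanup_hooks=()

# remember a command line to run on exit
add_cleanup() {
  cleanup_hooks+=("$1")
}

# run all registered cleanup commands
run_cleanup() {
  local hook
  for hook in "${cleanup_hooks[@]}"; do
    eval "${hook}"
  done
}
trap run_cleanup EXIT

tmpdir=$(mktemp --tmpdir -d myscriptname-XXXXXXX)
add_cleanup 'rm -rf "${tmpdir}"'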

Safe iteration over filenames from other commands (bash)

Safely iterating over the file list from find (using NUL-separated filenames; note the space between -d and '': read takes the first character of the argument to -d as delimiter, which is NUL here as the string is empty):

while IFS= read -u3 -d '' -r file; do
  printf 'File found: "%s"\n' "$file"
done 3< <(find . -print0)

Warning

Be careful with using stdin (fd 0) in such loops: some programs read from stdin if it isn’t a tty device, assuming you piped them some input. This is why I used fd 3; if you want to be extra careful you can additionally close fd 3 for some commands/subshells in the loop, as sketched below.
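
A sketch of what closing fd 3 for a single command in the loop looks like (some_command is just a placeholder for a program that might read stray input):

while IFS= read -u3 -d '' -r file; do
  # 3<&- closes fd 3 for this command only, so it can't touch the find output
  some_command "$file" 3<&-
done 3< <(find . -print0)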

Cron jobs and locking

Sometimes your cron jobs are so slow that the next run starts before the previous one has finished (or you started one manually). To prevent your script from interfering with itself you want to use locking. Each job should use a different lockfile, of course.

LOCKFILE=~/.lock/my-scriptname.lock

(
  if ! flock -n 9; then
    echo "Couldn't lock '${LOCKFILE}', exit" >&2
    exit 1
  fi

  (
    # do stuff
    echo "long job!"
    sleep 100
    echo "done"
  ) 9>&-

) 9>>"${LOCKFILE}"

The inner subshell is used to hide the open lockfile from the programs you run (important in case you start daemons, which would otherwise inherit the descriptor and keep the lock held); 9>&- closes the file descriptor for the inner subshell.

The outer subshell keeps the lockfile open, which is needed to hold the lock - the lock is released either when all fds that held the lock are closed, or when you call flock -u on one of them.

(I couldn’t find flock in the POSIX specs - it works on Linux and FreeBSD, but probably only on local filesystems. The shell doesn’t matter, as flock is not a builtin.)
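
If only part of the job needs the lock, you can release it early with flock -u on the same descriptor; a sketch (do-exclusive-work and do-other-work are placeholders):

(
  flock -n 9 || exit 1

  do-exclusive-work   # placeholder: only one instance runs this at a time
  flock -u 9          # release the lock early
  do-other-work       # placeholder: runs without holding the lock
) 9>>"${LOCKFILE}"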

Security note

This does not create the LOCKFILE in a secure way, so don’t use world-writable locations with this snippet.

I haven’t found a way yet to do this securely from the shell: creating files with O_EXCL is not enough; if creation fails you also need to check, in one step (using results from the same lstat() call), that the existing file is not a symlink and is owned by you, and the standard “test” binary doesn’t provide this.

Tip

Also have a look at BashFAQ/045; it has some examples without flock (but they don’t recover if a previous run didn’t delete the lock file/directory).