nq: A Unix Job Queue That's Just Files

2026-04-22

Somewhere between "run it in the background with &" and "set up a Redis-backed Celery cluster" lies a gap that almost nobody fills well. nq is a tiny tool by Leah Neukirchen (of mblaze and redo fame) that implements a job queue using nothing but the filesystem and flock(2). No daemon. No config. No database. Just files in a directory.

Install it:

apt install nq          # Debian/Ubuntu
brew install nq         # macOS
# or build from source — it's ~300 lines of C
git clone https://github.com/leahneukirchen/nq && cd nq && make

The entire interface is two commands: nq enqueues a job, and fq follows its output.

Basic usage — serialize expensive tasks:

# Queue up three heavy builds. They run sequentially, not in parallel.
$ nq make -C project-alpha
$ nq make -C project-beta
$ nq make -C project-gamma

# Each nq call returns instantly. Check the queue:
$ fq
... tailing output of the currently running job ...

# List queued jobs (they're just files):
$ ls $NQDIR   # defaults to $PWD if NQDIR is unset
,e1a3f.12045   # running (locked via flock)
,e1a40.12078   # waiting
,e1a41.12092   # waiting

That's it. Each job is a file named with a timestamp and PID. The file contains the command's output. Jobs acquire an exclusive lock — the next job in line blocks on flock() until the previous one finishes. It's serialization via the kernel, not polling.
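
The same kernel-level ordering is easy to see with util-linux flock(1) alone, independent of nq. A minimal sketch: two processes contend for an exclusive lock on one file, and the second blocks inside the kernel until the first exits.

```shell
# Two processes contend for an exclusive lock on one scratch file; the
# second blocks in flock() until the first finishes and releases it.
lock=$(mktemp)
flock "$lock" sh -c 'sleep 1; echo first' &   # takes the lock and holds it
sleep 0.2                                     # give the holder time to lock
flock "$lock" echo second                     # blocks until "first" is done
wait
```

No polling loop anywhere: the waiter is asleep in the kernel until the lock is free.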

Where this actually shines — deploy scripts:

# Prevent overlapping deploys without a lockfile dance
$ export NQDIR=/var/run/deploy-queue
$ mkdir -p $NQDIR
$ nq /opt/deploy.sh production

# Second deploy arrives while the first is running?
# It queues behind it. No race condition. No "deploy already in progress" error.
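
Because the queue is just files, "is a deploy pending?" reduces to a glob test. A sketch, relying only on the fact that queue files begin with a comma:

```shell
# Report queue state by globbing for job files in $NQDIR
if ls "$NQDIR"/,* >/dev/null 2>&1; then
  echo "deploys pending"
else
  echo "queue idle"
fi
```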

Batch processing with controlled concurrency:

# Transcode a directory of videos, one at a time
$ export NQDIR=/tmp/transcode-queue
$ mkdir -p $NQDIR
$ for f in *.mkv; do
    nq ffmpeg -i "$f" -c:v libx265 "${f%.mkv}.mp4"
  done
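
nq runs one job at a time per queue directory, so N-way concurrency is just N directories. A sketch that round-robins files across two made-up queue directories:

```shell
# Spread transcodes across two independent queues; each queue runs one
# job at a time, giving two concurrent ffmpeg processes overall.
mkdir -p /tmp/transcode-queue-0 /tmp/transcode-queue-1
i=0
for f in *.mkv; do
  NQDIR=/tmp/transcode-queue-$((i % 2)) nq ffmpeg -i "$f" -c:v libx265 "${f%.mkv}.mp4"
  i=$((i + 1))
done
```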

# Walk away. Check progress anytime:
$ fq
# Or wait for everything to finish:
$ nq -w    # blocks until all queued jobs complete

Why not just use & or xargs -P? Because nq gives you serialization (jobs don't stampede your CPU), persistence (jobs keep running after your terminal disconnects, since they're reparented to init once nq returns), and logging (each job's stdout and stderr are captured in its queue file). A quick cat ,e1a3f.12045 shows exactly what happened.

Why not at or batch? Those require atd, don't give you a live-tailable output stream, and have clunky interfaces for sequencing dependent work.

A few more tricks worth knowing:

# Wait for the jobs queued so far to finish, then do something
$ nq sleep 10
$ nq -w && echo "all done"

# Use separate queues for separate concerns
$ NQDIR=/tmp/db-migrations nq ./migrate.sh
$ NQDIR=/tmp/backups nq ./backup.sh
# These two queues run independently and in parallel

# Clean up finished jobs
$ cd $NQDIR && rm ,*
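
A safer cleanup deletes only finished jobs. This sketch assumes (as nq's locking scheme implies) that every queued or running job holds an exclusive lock on its own file, so a file whose lock can be taken non-blockingly has no live process behind it:

```shell
# Delete only queue files no process holds a lock on (finished jobs);
# flock -n fails fast instead of blocking when the job is still alive.
for f in "$NQDIR"/,*; do
  [ -e "$f" ] || continue            # no queue files at all
  flock -n "$f" true && rm -- "$f"
done
```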

The entire implementation relies on a property of flock(): a process can open a file and wait for an exclusive lock. When the lock holder dies (or exits), the next waiter gets the lock. This is an ancient Unix primitive, and nq weaponizes it into a queue. No moving parts. Nothing to crash. Nothing to restart.
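
The "holder dies, waiter proceeds" behavior is easy to verify with flock(1). A sketch: even SIGKILL releases the lock, so a crashed job can never wedge the queue.

```shell
lock=$(mktemp)
flock "$lock" sleep 100 &                  # a "job" that dies holding the lock
holder=$!
sleep 0.2                                  # let it acquire the lock
kill -9 "$holder"                          # simulate a hard crash
flock -w 5 "$lock" echo "lock recovered"   # succeeds almost immediately
```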

It's the kind of tool that makes you realize how many problems we over-engineer. You don't always need a message broker. Sometimes you just need a directory and a lock.

Key Takeaway: nq turns the filesystem into a sequential job queue using nothing but flock(2) — giving you serialized execution, persistent logs, and zero infrastructure in about 300 lines of C.
