Switching to JSON logs unlocks powerful observability for machines, but it ruins readability for humans.
Instead of clean lines like GET / 200 OK, you get this:
{"level":30,"time":1638291,"pid":1,"hostname":"api-1","req":{"id":1,"method":"GET","url":"/"},"msg":"request completed"}
{"level":30,"time":1638292,"pid":1,"hostname":"api-1","req":{"id":2,"method":"POST","url":"/auth"},"msg":"authenticating"}
Visually scanning this for errors is impossible.
The Wrong Way: Application-Side Formatting
Your instinct might be to use JSON.stringify(obj, null, 2) inside your code.
Do not do this.
- Performance: It increases CPU usage to format strings.
- Storage: It adds massive whitespace overhead to your logs (often 2x-3x size).
- Parsing: It breaks downstream tools (Splunk, Datadog) that expect one JSON object per line (NDJSON).
Pro Tip: Always log compact, single-line JSON in production. Format it client-side (on your laptop) when you need to read it.
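To make that habit concrete, here is a minimal sketch in plain Node (no logging library; the `log` helper name is ours, for illustration only) of the production side of the deal: compact, single-line JSON out, and nothing else.

```javascript
// Minimal structured logger sketch: compact NDJSON, one object per line.
// The `log` helper is illustrative, not from any real library.
function log(level, msg, extra = {}) {
  const entry = { level, time: Date.now(), pid: process.pid, msg, ...extra };
  // Single-line output is cheap to produce and keeps NDJSON consumers
  // (Splunk, Datadog, etc.) happy.
  process.stdout.write(JSON.stringify(entry) + "\n");
}

log(30, "request completed", { req: { id: 1, method: "GET", url: "/" } });

// Never do this in production: pretty-printing adds 2x-3x whitespace and
// the multi-line output breaks tools that expect one object per line.
// process.stdout.write(JSON.stringify(entry, null, 2) + "\n");
```

The pretty-printing belongs on the consuming end of the pipe, which is what the rest of this post is about.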
The Classic Way: jq
The standard tool for this is jq. It's available almost everywhere and incredibly powerful.
# Basic pretty printing
tail -f app.log | jq '.'
# Filter only errors
tail -f app.log | jq 'select(.level >= 50)'
The Downside: jq syntax is hard to remember. Even a filter as simple as "show me only the message and timestamp for errors" (`jq 'select(.level >= 50) | {time, msg}'`) means googling the syntax every time.
The Node.js Way: pino-pretty
If you use the Pino logger, pino-pretty is excellent.
node server.js | pino-pretty
It colorizes levels, formats timestamps, and highlights stack traces. But it only works well if your logs strictly follow Pino's format. If you have logs from Nginx or a Go service mixed in, it breaks.
The Modern Way: Loghead
We built Loghead to be the "jq for humans". It detects log formats automatically and gives you a readable, structured view without memorizing syntax.
- Auto-detection: Works with Pino, Bunyan, Zap, Zerolog, and generic JSON.
- AI Integration: Pipe the pretty-printed output directly to an LLM context window.
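We won't reproduce Loghead's internals here, but a detection pass along these lines is possible because the formats disagree in telltale ways: Pino and Bunyan use numeric levels (Bunyan adds a `v` field), while Zap and Zerolog use string levels with different message keys. This heuristic is our own sketch, not Loghead's actual code:

```javascript
// Hypothetical format-detection heuristic (our sketch, not Loghead's code).
function detectFormat(line) {
  let obj;
  try { obj = JSON.parse(line); } catch { return "plain-text"; }
  if (typeof obj.level === "number") {
    // Pino and Bunyan both use numeric levels; Bunyan adds a "v" field.
    return "v" in obj ? "bunyan" : "pino";
  }
  if (typeof obj.level === "string") {
    // Zerolog uses "message" for its text; Zap defaults to "msg".
    return "message" in obj ? "zerolog" : "zap";
  }
  return "generic-json";
}
```

Once each line is classified, the tool can normalize level, timestamp, and message fields into one consistent view, regardless of which service produced the line.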
# Pretty print any stream
cat production.log | loghead
# Ask AI to find anomalies in the JSON
cat production.log | loghead --ai "Why are requests failing?"
This moves you from "reading raw data" to "getting answers."