Ryan Scott Brown

I build cloud-based systems for startups and enterprises. My background in operations gives me a unique focus on writing observable, reliable software and automating maintenance work.

I love learning and teaching about Amazon Web Services, automation tools such as Ansible, and the serverless ecosystem. I most often write code in Python, TypeScript, and Rust.

B.S. Applied Networking and Systems Administration, minor in Software Engineering from Rochester Institute of Technology.

Restarting Python Automation in 2026

A friend recently asked me to help them get started writing their own automations. I pointed them at Automate the Boring Stuff with Python, updated in late 2025 to cover Python 3.12+. This post covers a grab bag of techniques I’ve learned over the years.

The quickest start is my short post on using uv to manage single-file scripts, which makes Python scripts standalone and machine-independent.

Techniques

Think Like a Spreadsheet

The most useful mental model for data-driven automation is the spreadsheet. You have input cells (files, API responses, environment variables), intermediate cells (transformations, filters, lookups), and output cells (reports, deployed artifacts, notifications). When an input changes, only the cells that depend on it should update.

This is reactivity. The three classic approaches (push, pull, and a push-pull hybrid) map directly onto automation design:

  • Push-based: a file watcher triggers a rebuild whenever anything changes. Simple, but you end up rebuilding things that didn’t need to change.
  • Pull-based: you run a script that checks what’s stale and rebuilds on demand. Correct, but potentially slow if it checks everything.
  • Push-pull: mark things dirty when inputs change, then only rebuild what’s actually needed. This is what make, just, and most build systems do.

It’s simplest to make each script pull-based: you decide “I need to make a new report from X,” then run the script to pull fresh data and rebuild it. Costlier steps (like downloading a large source file) should check whether the source has changed before re-downloading (push-pull).
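A minimal sketch of that staleness check, using only the standard library (the URL, path, and one-day threshold are illustrative):

```python
import os
import time
import urllib.request

SOURCE_URL = "https://example.com/large-dataset.csv"  # hypothetical source
LOCAL_PATH = "data/raw.csv"
MAX_AGE_SECONDS = 24 * 60 * 60  # treat a file older than a day as stale

def is_stale(path: str, max_age: float) -> bool:
    """Pull side: decide whether the cached file needs refreshing."""
    try:
        age = time.time() - os.path.getmtime(path)
    except FileNotFoundError:
        return True  # never downloaded at all
    return age > max_age

def refresh(url: str, path: str) -> None:
    """Push-pull: only hit the network when the cache is actually stale."""
    if not is_stale(path, MAX_AGE_SECONDS):
        return
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    urllib.request.urlretrieve(url, path)
```

A real workflow might check an ETag or Last-Modified header instead of local age, but the shape is the same: a cheap check guards the expensive operation.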

When you’re writing automation, build your tools so data flows in a directed acyclic graph (fancy computer words for “one way, no loops”).

Scripts that write to the same files they read are fragile. Follow the Unix philosophy: each tool takes input on stdin or as arguments, does one explainable thing, and writes output to stdout. Compose them with pipes or as part of build workflows (make, just, and such).
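As a sketch of that pattern, here’s a pipe-friendly filter that reads newline-delimited JSON on stdin and writes matches to stdout (the `status` field is an illustrative assumption, not a real schema):

```python
#!/usr/bin/env python3
"""Pipe-friendly filter: read newline-delimited JSON on stdin, write matches to stdout."""
import json
import sys

def filter_records(src, dst) -> None:
    # One explainable job: keep records whose status is "active".
    for line in src:
        if not line.strip():
            continue  # tolerate blank lines between records
        record = json.loads(line)
        if record.get("status") == "active":  # illustrative field name
            dst.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    filter_records(sys.stdin, sys.stdout)
```

Because the logic takes file-like objects rather than touching `sys.stdin` directly, it composes in a pipeline (`fetch.py | filter.py | report.py`) and is trivial to test.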

Write Crash-Only Software

Your scripts will fail, you will make mistakes, and APIs will change. The safest default is to write crash-only software. When something bad is detected (missing input file, bad argument, file write error) your program should crash and give a useful error. Do not try to recover, retry the command, or otherwise proceed into the vast unknown.

In practice this means:

  • Use temporary files, then move them to the output location. Don’t write directly to the output file. If your script crashes mid-write, the old output is still intact.
  • Make operations idempotent. Running the script twice should produce the same result as running it once. Overwrite the previous output.
  • Force confirmation for destructive actions. If a script deletes files, modifies production data, or spends money, require an explicit --commit flag. Default to a dry-run that prints what would happen.

If you find yourself adding try/except around every other line, step back. Let it crash, explain the error, and make restart cheap.
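The temp-file-then-move step above can be sketched like this, assuming a POSIX filesystem where `os.replace` within one directory is atomic:

```python
import os
import tempfile

def write_atomically(path: str, data: str) -> None:
    """Write to a temp file in the same directory, then atomically replace the
    target. If the script crashes mid-write, the old output stays intact."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, prefix=".tmp-")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        # Atomic on POSIX; overwriting the old output makes reruns idempotent.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)  # remove the partial file, then crash loudly
        raise
```

The temp file lives in the same directory as the target on purpose: `os.replace` is only atomic when both paths are on the same filesystem.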

Log With Purpose

Beginners often put print() calls everywhere and then can’t tell signal from noise. Your CLI applications will have three main outputs:

  • files: Make as many as you like, as long as you don’t pick opaque names like output.csv
  • stdout: The default (short for standard output); print() goes here. In a pipeline like one.py | two.py, the stdout of one.py becomes the input of two.py.
  • stderr: For warnings and errors (short for standard error) – non-essential output that two.py from the previous example would not expect.

Use Python’s built-in logging module (or even just a convention) with three tiers:

  • stdout for structured output: the data your script produces. This is what gets piped to the next tool. Keep it machine-readable (JSON, CSV, TSV).
  • stderr for human-readable status and warnings. Use print(..., file=sys.stderr) or logging.warning(). This is what you read when something goes wrong.
  • debug for development. logging.debug() with timestamps and context, enabled by a --verbose flag or LOG_LEVEL=DEBUG environment variable. Turn this on when you’re developing, off when the script runs in automation.

This separation means myscript.py | jq . works while the user can see warnings, because the warnings go to stderr and the JSON goes to stdout. You can get much more granular (logging supports critical, error, warning, info, and debug levels). If you’re writing crash-only software you will likely only need warning and debug – everything else is a crash.
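A minimal sketch of all three tiers in one script (the record data is a placeholder for whatever your script actually produces):

```python
import argparse
import json
import logging
import sys

def main(argv=None) -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true")
    args = parser.parse_args(argv)

    # Debug tier: timestamps and context on stderr, enabled by --verbose.
    logging.basicConfig(
        stream=sys.stderr,
        level=logging.DEBUG if args.verbose else logging.WARNING,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    logging.debug("starting up")

    records = [{"name": "example", "ok": True}]  # placeholder data
    if not records:
        logging.warning("no records found")  # warning tier: human-readable, stderr

    # Data tier: machine-readable JSON on stdout, safe to pipe into jq.
    for record in records:
        print(json.dumps(record))

if __name__ == "__main__":
    main()
```

Running `./myscript.py | jq .` shows only the JSON; adding `--verbose` reveals the debug trail on stderr without disturbing the pipe.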

Tools

uv & uvx

Avoid installing global Python packages. uv replaces pip, pyenv, pipenv, and virtualenv with a single fast tool. For automation, the killer feature is inline script metadata:

#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.13"
# dependencies = [ "httpx" ]
# ///

import httpx
resp = httpx.get("https://api.example.com/data")
print(resp.json())

Mark that file executable with chmod +x and run it directly, or use uv run myfile.py. uv creates a cached virtualenv matching the declared dependencies automatically. Each script is self-contained and owns its dependencies.

For running existing tools, uvx is the equivalent of npx: uvx ruff check . runs the ruff linter without installing it globally. Combine uvx with tools like marimo for quick notebook prototyping: uvx marimo new.

Use a Target-Based Build System

Once you have more than two or three scripts or if they run slowly, you need a way to orchestrate them. just is a command runner that’s better suited to automation tasks than make:

# justfile

default:
    @just --list

fetch:
    ./fetch-data.py > data/raw.json

transform: fetch
    ./transform.py < data/raw.json > data/clean.csv

report: transform
    ./build-report.py < data/clean.csv > output/report.html

just report runs all three steps in dependency order. If you only need to re-fetch, just fetch runs just that step. This is the push-pull model in action: you declare dependencies, and the runner figures out what to execute.

Handling Secrets

Never hardcode secrets. Never commit them.

Instead, use the 1Password CLI (op). Use op run to inject secrets as environment variables into a subprocess, so the raw secret never appears in your script’s source code.

If you don’t use 1Password, any secret manager with a CLI works (aws secretsmanager, vault, gopass). The pattern is the same: fetch at runtime, never store in files.
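On the script side, the crash-only spirit applies to secrets too: read them from the environment and fail loudly if they weren’t injected. A sketch (the variable name and op invocation in the error message are illustrative):

```python
import os
import sys

def require_env(name: str) -> str:
    """Crash-only secret access: exit with a useful error if the variable
    was not injected by the secret manager wrapping this process."""
    value = os.environ.get(name)
    if not value:
        sys.exit(
            f"error: {name} is not set; wrap this script in your secret "
            f"manager, e.g. `op run -- ./myscript.py`"
        )
    return value

# Usage (EXAMPLE_API_TOKEN is a hypothetical name):
# token = require_env("EXAMPLE_API_TOKEN")
```

Exiting immediately with a hint about how to run the script correctly beats a cryptic `KeyError` three functions deep.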

For SSH keys on macOS, Secretive stores your private keys in the Secure Enclave — they literally cannot be exported off your machine. SSH agent forwarding still works, so git push and ssh are transparent.

File Format Tools

You don’t need to write parsers. Good CLI tools already exist for every common format:

  • sqlite-utils: convert CSV/JSON/newline-delimited JSON into SQLite databases and query them with SQL. sqlite-utils insert db.sqlite data data.csv --csv then sqlite-utils query db.sqlite "SELECT ...".
  • csvkit: a suite of tools for working with CSV (csvcut, csvgrep, csvsort, csvjoin). Pipe-friendly and composable.
  • Polars: when you need DataFrames in Python, use Polars instead of Pandas. It’s faster, has a cleaner API, and its lazy evaluation model means it only computes what you actually need (push-pull reactivity again).
  • XML: use xmlstarlet for XPath queries and XSLT transforms from the command line. For Python, lxml is the de facto standard. If you’re processing HTML, lxml.html with XPath is far more reliable than regex.
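The load-then-query pattern that sqlite-utils automates can be sketched with nothing but the standard library; the CSV data here is an inline stand-in for a real file:

```python
import csv
import io
import sqlite3

# Load CSV rows into SQLite, then answer questions in SQL instead of
# hand-writing aggregation loops.
CSV_DATA = "category,amount\na,3\nb,5\na,2\n"  # stand-in for a real file

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE data (category TEXT, amount INTEGER)")

# DictReader yields dicts, which map directly onto named SQL parameters.
rows = csv.DictReader(io.StringIO(CSV_DATA))
db.executemany("INSERT INTO data VALUES (:category, :amount)", rows)

totals = db.execute(
    "SELECT category, SUM(amount) FROM data GROUP BY category ORDER BY category"
).fetchall()
```

Once the data is in SQLite you also get indexes, joins, and a durable file format for free – often a better intermediate store than a pile of CSVs.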

If a file format is common enough to have a name, someone has written a CLI tool for it. Don’t write your own parser; instead, validate data as it enters your program and crash if something seems off.

All Together Now

A good automation setup looks like this: self-contained uv scripts that each do one thing, a justfile that orchestrates them, secrets fetched at runtime from op, structured output on stdout, and human-readable logs on stderr. Each piece is independently testable, crash-safe, and composable.

Start small. The Automate the Boring Stuff approach still works: pick a real task, write a script, and iterate.

Design by Sam Lucidi (samlucidi.com)