Give LLMs a Shot
“Brevity is the soul of wit” - William Shakespeare, Hamlet
“I'm temporarily departing this establishment in order to acquire a more suitably reinforced transportation vehicle capable of facilitating my immediate objective of re-entry through your currently locked access point, at which time I will resume this interaction with considerably greater efficacy and substantially reduced regard for architectural barriers, so you may wish to interpret this announcement not as a polite farewell but as both a menacing and statistically reliable guarantee of my imminent and forceful return”
is what Arnold Schwarzenegger could have said to the desk sergeant, but thankfully he spared us the diatribe and left us with one of the most memorable quotes in movie history:
“I'll be back.” - The Terminator
Great one-liners tend to stick with us. They’re short. They’re sharp. They achieve a precise purpose.
In software, I love one-liners!
- they're brief to remember and type
- they're often composable
- they can encode so much into so little
The last one is important to me. I don't have to think about anything more than I actually want to think about when running my one-liners. Even when the one-line script just wraps a single command, I'll often prefer it, because it captures my defaults, my processes, my standards.
And for this reason, I find myself writing small bash or Node.js commands all the time.
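The kind of thing I mean is almost embarrassingly small. Here's an illustrative example (not from the llmshot repo): a one-liner whose entire job is to remember my preferred archive name and exclusions for me:
#!/usr/bin/env bash
# snap: archive the current project, my way (illustrative example only)
tar --exclude='.git' --exclude='node_modules' -czf "../$(basename "$PWD")-$(date +%F).tar.gz" .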
Today's one-liner is llmshot. It's a bash command that simply calls an LLM and returns a response.
It's available for free under the MIT license at:
GitHub: https://github.com/markabrahams/llmshot
The Power of a One-Liner
The longevity of Unix has always been a wonder to me - how on earth is most of the world still running on an OS design born in the 1960s?!? In technology, that's the closest thing to eternity that you'll get. My conclusion: They must have done something incredibly right. Probably a few things, actually! But the right-est thing about Unix is the Unix Philosophy:
Do One Thing Well.
Small, focused, composable tools—designed to be chained together with pipes—have powered decades of automation, system administration, and development workflows. They don’t try to be everything; they just do their job well. And because they’re simple, they’re easy to embed in scripts, CI pipelines, cron jobs, and larger systems.
Consider:
grep -r "TODO" .                                  # find every TODO in a codebase
awk -F, '{sum+=$3} END {print sum}' file.csv      # total the third column of a CSV
curl https://api.example.com | jq '.data[].name'  # pull the names out of a JSON response
Each of these tools reflects the Unix philosophy. They're doing one thing. And they're doing it well.
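Chain a couple of them together and you get something none of them does alone. For instance, here's an illustrative pipeline (not from the post) that ranks files by how many TODOs they carry:
grep -rn "TODO" . | awk -F: '{print $1}' | sort | uniq -c | sort -rn | head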
Bringing LLMs Into the Unix Tradition
Large Language Models are powerful. But they often come wrapped in:
- Web interfaces (and evil silos at that! More on this later...)
- SDKs
- Full application frameworks
- Streaming APIs
- Complex setup
What if you just want to:
- Ask a model a question
- Summarize some text
- Generate a commit message
- Add intelligence to a script
In one line? There's an app for that!
What Is llmshot?
llmshot is a lightweight bash-based CLI tool that lets you call LLM providers with a single command.
It supports multiple providers (including OpenAI, Google Gemini, Anthropic, and local Ollama instances) and relies only on tools that already exist in most Unix-like environments:
- bash
- curl
- jq
No Node or Python. Light on dependencies and runtime bloat. Just a simple command that sends a prompt and returns a response.
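Under the hood, that's really all an LLM call needs to be. As a rough sketch of the kind of request it wraps (not llmshot's actual source; the model name is just an example), here's a one-shot call to the OpenAI chat completions API using nothing but curl and jq:
# Illustrative sketch of a bare-hands LLM call; assumes OPENAI_API_KEY is set
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Give me a clever anagram"}]}' \
  | jq -r '.choices[0].message.content'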
Simple example:
llmshot -p openai -t "Give me a clever anagram"
Or piping input:
cat article.txt | llmshot -p google -t "Summarize this text:"
That’s it!
What's Cool About That?
The magic of Unix isn’t individual tools—it’s composition.
Consider this:
git diff HEAD~1 | llmshot -p openai -t "Summarize these changes:"
You’ve just built an AI-powered commit summarizer.
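Take it one step further and that summarizer can live in a Git hook. Here's a minimal sketch of a prepare-commit-msg hook, built on the -p/-t flags shown above (the prompt wording is just an example):
#!/usr/bin/env bash
# .git/hooks/prepare-commit-msg - pre-fill the commit message with an LLM suggestion
# (illustrative sketch; make it executable with chmod +x)
MSG_FILE="$1"
# Only suggest when no message was supplied (i.e. a plain `git commit`)
if [ -z "${2:-}" ]; then
  SUGGESTION=$(git diff --cached | llmshot -p openai -t "Write a one-line commit message for this diff:")
  printf '%s\n\n%s' "$SUGGESTION" "$(cat "$MSG_FILE")" > "$MSG_FILE"
fi
Now a plain git commit opens your editor with a suggestion already waiting at the top.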
Or:
journalctl -n 200 | llmshot -p anthropic -t "Identify anomalies in these logs:"
Now you’ve got AI-assisted log analysis.
Or:
find . -name "*.md" -exec cat {} \; \
| llmshot -p ollama -t "Generate a project overview from these documents:"
Instant documentation synthesis.
What I'm saying is this:
When an LLM becomes a command-line primitive, it becomes automatable.
It turns AI from a chat window into easy-access infrastructure.
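One small way to feel that: wrap it in a shell function. Here's a hypothetical helper, again assuming the -p/-t flags shown above, that runs any command and asks a model to explain what came back:
# explain: run a command and have an LLM explain its output (hypothetical helper)
explain() {
  "$@" 2>&1 | llmshot -p openai -t "Explain this command output in plain English:"
}
Now explain df -h or explain git status is something you can type without breaking stride.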
The Unix Philosophy, Revisited
Douglas McIlroy, one of the pioneers of Unix, summarized it like this:
Write programs that do one thing and do it well.
Write programs to work together.
Write programs to handle text streams, because that is a universal interface.
llmshot aims to fit this philosophy:
- It does one thing: send prompts to LLMs and return responses.
- It works with other programs via stdin/stdout.
- It treats text as the universal interface.
It doesn't try to manage conversations (there's no history support), and it doesn't stream responses, so you can be left waiting an uncomfortable amount of time for the final answer!
It’s a minimal but sharp tool.
Your Automation is Your Leverage
Once LLM access is reduced to a one-liner, entirely new possibilities open up:
- Pre-commit hooks that auto-suggest better messages
- CI pipelines that summarize test failures
- Cron jobs that digest daily logs (see the sketch below)
- Scripts that transform raw data into readable reports
- Static site generators enhanced with AI commentary
- Trading systems that annotate strategy output (yes, really)
A little AI can go a long way!
Turn any shell pipeline into an LLM-enhanced pipeline.
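To make the cron-job idea above concrete, here's a minimal sketch (assuming llmshot and a provider key are available to the cron user):
# /etc/cron.d/log-digest (illustrative sketch)
# Every morning at 07:00, summarize yesterday's logs into a rolling digest file
0 7 * * * root journalctl --since yesterday --until today | llmshot -p anthropic -t "Summarize these logs and flag anything unusual:" >> /var/log/daily-digest.txt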
A Good One-Liner Is Power
Movie one-liners endure because they compress power into a small space. Great Unix commands endure for the same reason. llmshot brings that spirit into the LLM era.
One line. One prompt. One response.
Will you weave the power of AI into your CLI?
In the words of Captain Picard on the reception of his Bernina 830:
“Make it sew.”