In this article, I want to talk about a zsh
plugin that I've written to quickly ask an
LLM for help with terminal commands.
I also have a PowerShell version to do the same thing, but I won't talk about it here.
This plugin allows you to press a hotkey, which opens a text editor. I have the hotkey assigned to Ctrl-x.
You can then type, in natural language, a description of what you want your command to do.
Then, when you save the file and close the editor, the plugin sends your description to an LLM and places the returned command on your command line.
You are then free to edit the command and execute it by pressing Enter.
Importantly, this will place the command in your command history.
You can also press the hotkey again to ask the LLM to edit the command if it's not quite right.
If you are concerned about the cost, don't be. The cost per query is extremely low. My usage, which is fairly modest, comes out to pennies a month.
Requirements
You will need the following:

- `zsh` (`bash` doesn't have the required features, unfortunately)
- `jq`, `sed`, and `awk` (for wrangling JSON)
- a text editor (I use Neovim)
- an API key for an LLM (I am using Claude 3.5 Sonnet from Anthropic)
- the plugin (I will go into detail about this below)
Setup
Once you have the prerequisite programs installed and your API key in hand, you will need the plugin itself. I keep it in `~/bin/cmd-assistant.plugin.zsh`:
```zsh
#!/bin/zsh

function get_claude_completion() {
    local messages="$1"
    echo "$messages" > "/tmp/completion_messages.json"   # for debugging

    # The first message in the array is the system prompt; the rest form
    # the conversation. Anthropic's API takes the system prompt as a
    # top-level field rather than as a message.
    local request_body='{
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 200,
        "system": '"$(echo "$messages" | jq '.[0].content')"',
        "messages": '"$(echo "$messages" | jq '.[1:]')"'
    }'
    echo "$request_body" > "/tmp/completion_request.json"   # for debugging

    local api_response=$(curl -s -X POST https://api.anthropic.com/v1/messages \
        -H "x-api-key: ${ANTHROPIC_API_KEY:?}" \
        -H "anthropic-version: 2023-06-01" \
        -H "Content-Type: application/json" \
        -d "$request_body")
    echo "$api_response" > "/tmp/completion_response.json"   # for debugging

    # Pull the generated command text out of the response.
    echo "$api_response" | jq -r '.content[0].text'
}

function process_command_completion() {
    local system_messages_file="$HOME/bin/CmdLinePrompt.jsonl"
    local user_prompt_file=/tmp/USER_PROMPT

    # Seed the editor with whatever is already on the command line.
    local current_command=${BUFFER}
    echo -n "$current_command" > "$user_prompt_file"
    $EDITOR "$user_prompt_file" 2>/dev/null || {
        echo "Failed to open editor: '$EDITOR'" >&2
        return 1
    }

    # jq --arg handles JSON escaping (including newlines) for us.
    local user_content=$(<"$user_prompt_file")
    local formatted_messages=$(
        jq -s . "$system_messages_file" |
        jq --arg content "$user_content" '. + [{"role": "user", "content": $content}]')

    local completion_result=$(get_claude_completion "$formatted_messages")

    # Place the result on the command line with the cursor at the end.
    BUFFER=$completion_result
    CURSOR=${#BUFFER}
}

zle -N process_command_completion
bindkey '^X' process_command_completion
```
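You can exercise the jq plumbing from `get_claude_completion` offline, without touching the API. This is just a sketch; the messages array here is made up for illustration:

```shell
# A messages array like the one process_command_completion builds
# (contents are made up for illustration):
messages='[
  {"role": "system", "content": "Return only the command."},
  {"role": "user", "content": "List files"}
]'

# The first element supplies the top-level "system" field...
system=$(echo "$messages" | jq '.[0].content')
# ...and everything after it becomes the "messages" array.
rest=$(echo "$messages" | jq '.[1:]')

request_body='{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 200,
  "system": '"$system"',
  "messages": '"$rest"'
}'

# The assembled body is valid JSON:
echo "$request_body" | jq -c .
```

Note that `jq '.[0].content'` emits the string with its JSON quotes intact, which is why it can be spliced directly into the request body.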
Second, you will need a prompts file. I keep this in `~/bin/CmdLinePrompt.jsonl`.
This contains a system prompt and examples to help the LLM know what to do.
You can tailor the system prompt and examples to your needs.
I'm not sure how important the examples are now that LLMs have gotten so good.
```json
{"role": "system", "content": "Given a description of a Linux command, return the command to run. Return only the command and nothing else. Do not explain the command. Use the current directory unless otherwise specified. Assume Arch Linux and zsh shell."}
{"role": "user", "content": "List files"}
{"role": "assistant", "content": "ls -l"}
{"role": "user", "content": "Count files in a directory"}
{"role": "assistant", "content": "ls -l | wc -l"}
{"role": "user", "content": "Disk space used by home directory"}
{"role": "assistant", "content": "du ~"}
{"role": "user", "content": "Replace foo with bar in all .py files"}
{"role": "assistant", "content": "sed -i.bak 's/foo/bar/g' *.py"}
{"role": "user", "content": "Install the package foo"}
{"role": "assistant", "content": "sudo pacman -S foo"}
{"role": "user", "content": "Delete the models subdirectory"}
{"role": "assistant", "content": "rm -rf ./models"}
```
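To see how the plugin consumes this file, you can run the same jq pipeline by hand: `jq -s` ("slurp") gathers the JSONL stream into a single array, and a second jq call appends your new request as a final user message. A quick sketch with a throwaway file standing in for the real one:

```shell
# A throwaway stand-in for ~/bin/CmdLinePrompt.jsonl:
prompt_file=$(mktemp)
cat > "$prompt_file" <<'EOF'
{"role": "system", "content": "Return only the command."}
{"role": "user", "content": "List files"}
{"role": "assistant", "content": "ls -l"}
EOF

# Slurp the JSONL into one array, then append the new user prompt,
# exactly as process_command_completion does:
formatted=$(jq -s . "$prompt_file" |
  jq --arg content "Count files in a directory" \
     '. + [{"role": "user", "content": $content}]')

echo "$formatted" | jq -c '.[-1]'
# → {"role":"user","content":"Count files in a directory"}
```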
Third, you will need to set your API key to an environment variable and install the plugin.
I keep my API key in a `.env` file. It should be formatted like this:

```
ANTHROPIC_API_KEY=<api-key>
```
Here's how I set this up in my `.zshrc` file:

```zsh
# set [-+]a enables and disables automatic export of variables
set -a; source "$HOME/.env"; set +a
source "$HOME/bin/cmd-assistant.plugin.zsh"
```
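If you're curious what `set -a` buys you: it marks every variable assignment that follows, including those inside the sourced file, for export, so child processes like the plugin's `curl` call can see the key. A quick sketch with a dummy variable:

```shell
# A throwaway .env stand-in with a dummy key (not a real secret):
envfile=$(mktemp)
echo 'DEMO_API_KEY=abc123' > "$envfile"

# With set -a active, sourced assignments are marked for export...
set -a; source "$envfile"; set +a

# ...so child processes inherit them:
sh -c 'echo "$DEMO_API_KEY"'
# → abc123
```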
Demonstration
Here is a video of me using the plugin:
Conclusion
I've been using this plugin for a couple of years now, and it has been one of my most-used scripts. One alternative would be lots of googling, which has become increasingly frustrating. Another would be pulling up your favorite AI chat application, asking it for help with the command, and then copying and pasting the result back into your terminal. This terminal plugin workflow is faster, with much less friction.
This is one of those tools that can save you a lot of time, though it can also become a crutch that prevents you from learning. On the other hand, it can help you discover things you ordinarily wouldn't find on your own, and at times it can show you a better way to do things.
In my next article, I will show you another script that can speed up this workflow even more.
The plugin script referenced in this post is a simplified version. You can view the current version I use in my dotfiles repository on my GitHub; feel free to modify it for your own use. The version on my GitHub also has an old function that calls GPT 3.5, and it won't be difficult to modify the script to call another LLM.