# Ask Command
The `ask` command allows you to query an LLM directly with a question and get an informative response.
## Usage
Your question can be multiple words and is sent to the model as-is. The response is formatted as markdown by default.
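A typical invocation looks like the following (a sketch; the exact synopsis may vary by version, and all flags are optional and listed below):

```bash
agent ask [flags] "your question here"
```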
## Examples
```bash
# Basic question
agent ask "What is a file descriptor in Linux?"

# Using a specific provider and model
agent ask --provider openai --model gpt-4o-mini "How do I check disk usage on Linux?"

# Get the output as plain text (no markdown formatting)
agent ask --plain "Explain how pipes work in bash"

# Stream the response (see output as it's generated)
agent ask --stream "What are the benefits of using Go for CLI applications?"
```
## Flags
| Flag | Short | Default | Description |
|---|---|---|---|
| `--provider` | `-p` | From config | The LLM provider to use |
| `--model` | `-m` | From config | The model ID to use |
| `--print` | `-x` | `true` | Whether to print the response to stdout |
| `--log` | `-l` | `false` | Whether to log the input and output to a file |
| `--stream` | `-s` | `false` | Stream the response to stdout as it's generated |
| `--plain` | `-k` | `false` | Render the response as plain text (no markdown) |
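For instance, to save an exchange to the history log without echoing it to the terminal, you can combine `--log` with `--print` (a sketch; this assumes the CLI accepts the `--flag=false` form for boolean flags, as Go flag parsers typically do):

```bash
# Log the question and response, but don't print to stdout
agent ask --log --print=false "Summarize the POSIX signal model"
```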
## Streaming Output
When using the `--stream` flag, you'll see the output appear gradually as the model generates it, rather than waiting for the complete response. This provides a more interactive experience:
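```bash
# The response is printed to stdout incrementally as it arrives
agent ask --stream "Explain the difference between a process and a thread"
```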
Note: Streaming is not supported with the Perplexity provider.
## Logging
If you use the `--log` flag, your question and the response will be saved to the history log:
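```bash
# The question and response are appended to the history log
agent ask --log "What does chmod 755 mean?"
```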
You can later query your history with the `history` command.
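For example (a sketch; this assumes `history` is invoked as a subcommand of `agent` and lists past entries when given no arguments):

```bash
agent history
```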