Extending Aider with a Prompt Collection for Boilerplate Prompts

Introduction: The Power of Being Questioned

When working with Large Language Models (LLMs), I noticed something interesting: being questioned about a topic makes it much easier to form concrete ideas and fill the gaps in my thinking. This realization led me to explore using LLMs as interviewers, refining my ideas through a structured question-answer cycle.

This works fine in a basic ChatGPT or Claude web chat, though in my opinion a website isn’t the ideal interface for it. I need to add context from my file system or other websites to develop these ideas. Adding context from my own head currently requires typing, and before I can get to the good stuff, I have to instruct the LLM with boilerplate prompts.

I’ve been using aider, a powerful CLI tool that excels at making code changes with the context it has access to. What makes aider special is its simplicity and its ability to pull in the necessary context from various sources, which is crucial for effective interaction with LLMs. I have disabled auto-commit to stay in control of the process. I’m not a fan of agents where the LLM handles everything at once, even automating search and context retrieval. I’d rather stay in the loop and control the changes being made. Am I slower? Maybe, but I feel the quality is higher.

So how can I extend aider to also work for tasks besides code?

The Fork Decision: Building on Aider’s Foundation

Initially, I considered creating a new tool based on aider’s technology. However, I quickly realized I would need to recreate much of what aider already offers. The smarter approach was to hack and extend aider itself.

After examining aider’s codebase, I discovered it already had different modes. I decided to fork the project and add two new commands:

  1. /refine - Initiates an interviewer mode where the AI asks questions one at a time
  2. /finalize - Formats the accumulated information into a structured document

The /refine command transitions the AI into interviewer mode, and once that process is complete, /finalize takes the entire conversation history and formats it according to predefined guidelines.
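A typical session, sketched here as a hypothetical transcript (the actual questions and output will of course vary with the model and the prompt file):

```
> /refine blog
AI: What is the core idea you want this post to convey?
> That being interviewed by an LLM helps me refine my ideas.
AI: Who is the intended audience for the post?
> Developers who already use CLI tools like aider.
...
> /finalize
AI: # Extending Aider with a Prompt Collection ...
    (the full conversation, formatted per the YAML guidelines)
```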

Technical Implementation: The Code Changes

The key addition I made was the --refine-prompt-path parameter, which allows you to specify a directory containing YAML configuration files. Each YAML file includes three critical properties:

  • name - The identifier used with the /refine command (e.g., /refine blog)
  • system - The system prompt that guides the AI’s behavior, instructing it to ask one question at a time
  • final - The formatting instructions for the output document

When you run /refine user-story, it initializes a new LLM session with the system prompt from the corresponding YAML file. After answering the AI’s questions, running /finalize applies the formatting instructions from the final property to produce a well-structured document.
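The flow is simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration (not aider’s actual code) of how the two commands might manage the chat history; the `RefineConfig` fields mirror the three YAML properties described above, and the `CONFIGS` registry stands in for the files read from `--refine-prompt-path`.

```python
from dataclasses import dataclass

@dataclass
class RefineConfig:
    """Mirrors the three properties of a YAML file in --refine-prompt-path."""
    name: str    # identifier used with /refine (e.g. "user-story")
    system: str  # system prompt that turns the LLM into an interviewer
    final: str   # formatting instructions applied by /finalize

# Hypothetical registry; in practice this would be built by reading
# every YAML file found under the --refine-prompt-path directory.
CONFIGS = {
    "user-story": RefineConfig(
        name="user-story",
        system="You are a Story Refinement Coach. Ask ONE focused question at a time.",
        final="Based on our conversation, generate a structured user story document.",
    ),
}

def start_refine(name: str) -> list[dict]:
    """'/refine <name>': start a fresh session seeded with the system prompt."""
    cfg = CONFIGS[name]
    return [{"role": "system", "content": cfg.system}]

def finalize(name: str, history: list[dict]) -> list[dict]:
    """'/finalize': append the formatting instructions so the next LLM
    completion turns the whole interview into the final document."""
    cfg = CONFIGS[name]
    return history + [{"role": "user", "content": cfg.final}]
```

The point of the sketch is that both commands only manipulate the message history; the heavy lifting stays with the LLM.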

Practical Applications: My Custom Modes

I’ve created several YAML configurations for different types of content:

  • user-story - For generating well-defined user stories and tasks
  • readme - For crafting comprehensive README files based on project context
  • emails - For structuring clear and effective emails
  • blog - For developing blog posts like this one!
  • rewrite - For rewriting small sections of text (no finalize is needed)

Each configuration has a specialized system prompt and formatting guidelines tailored to the specific content type. The beauty of this system is that I remain the source of knowledge while the LLM serves as both interviewer and formatter. This allows me to focus solely on providing information without worrying about structure, grammar, or formatting.
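For instance, the rewrite mode can be a very small configuration. This is an illustrative sketch rather than my exact file, and since no /finalize step is needed, the `final` property can simply be left out:

```yaml
name: rewrite
system: |
    You rewrite small sections of text that the user provides.
    Keep the author's voice and meaning; fix grammar, tighten phrasing,
    and respond only with the rewritten text, nothing else.
```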

The Benefits: Staying in the Driver’s Seat

What I love about this approach is that I remain in control throughout the process. The LLM guides me with questions, helping me identify gaps in my thinking, but I’m the one providing the knowledge. If I find a question uninteresting or irrelevant, I can simply redirect the AI with a response like “Not interested in that aspect” and steer it in a more productive direction.

This guided approach to content creation has several advantages:

  1. Focus on substance - I can concentrate on answering questions about my knowledge domain without getting distracted by formatting concerns.
  2. Improved structure - The final output is well-organized according to predefined standards.
  3. Identification of gaps - The AI’s questions often reveal aspects of the topic I hadn’t considered.
  4. Forced clarity - Having to answer specific questions makes me articulate my ideas more precisely.
  5. Efficiency - The process is streamlined and contained within a single tool.

Conclusion: A Simple but Powerful Extension

The /refine and /finalize commands transform Aider from a code-focused tool into something that can also handle general content. Aider wasn’t designed this way, so sometimes I feel like I’m swimming against the current when hacking my stuff into it. After using it for a while, I think having boilerplate prompts available on my filesystem, accessible through the CLI, is genuinely helpful. I’ve managed to accomplish some small tasks by leveraging Aider’s existing features to create something that works for me.

If you like Aider, you might like this. The code is here: https://github.com/Spanjer1/aider

While this extension shifts the focus somewhat away from aider’s original coding purpose, it kind of works. It might even offer benefits for the coding workflow by giving the LLM a chance to ask questions before making changes, potentially leading to higher-quality code.

In the end, this project shows how small, thoughtful extensions to existing tools can significantly enhance our productivity and creative processes. By turning AI into an interviewer, we can refine our ideas more effectively and produce better-structured content with minimal effort.

Prompt examples

User story

In this example I can easily create a structured document for a user story, which I then add to our agile board.

name: user-story
system: |
    You are a specialized Story Refinement Coach with expertise in agile user story development.
  
    Your goal is to help transform vague story ideas into well-structured user stories with clear:
    - WHAT
    - WHY
    - HOW
    - ACCEPTANCE CRITERIA
  
    Guidelines:
    - Ask ONE focused question at a time
    - Progress naturally through the conversation, not mechanically section by section
    - Don't ask directly what should be included in each section; draw it out with good, direct questions
    - Adapt your questions based on previous answers
  
    We don't need all the details, just enough for everyone on the team to know what the story is about.
    We have a hard time listing all the information in one go.
    We need someone who can guide us through the process one question at a time.

final: |
    Based on our conversation, generate a structured user story document with the following details:
    1. WHAT
    2. WHY
    3. HOW
    4. ACCEPTANCE CRITERIA: List 3-7 specific, testable conditions that define when this story is complete
    5. IMPLEMENTATION TASKS: Break down the technical work into 2-5 discrete tasks
  
    Respond only with a markdown document in the following format:
  
    # User Story: <title>
  
    ## WHAT
    <what>
  
    ## WHY
    <why>
  
    ## HOW
    <how>
  
    ## ACCEPTANCE CRITERIA
    <acceptance_criteria>
  
    ## IMPLEMENTATION TASKS
    <tasks>