PictoTales Series 2 - Empowering Your AI Editor: Providing the Right Context for Next-Level Coding

Welcome back to my Indie Hacker journey! In the previous post (PictoTales Series 1 – Reimagining Family Storytelling with AI), I shared how I returned to the coding trenches after a five-year hiatus from day-to-day development, dove into TypeScript and Next.js, and started “building in public” to create PictoTales—an AI-driven storytelling platform for families.

In this second installment, I want to focus on something equally crucial to modern software building: how to provide context to your AI editor or coding assistants. If you’ve ever used Cursor, GitHub Copilot, or any LLM-based tool, you know that good prompts are just the beginning. Truly unlocking their potential involves giving them structured context that goes well beyond a single question or snippet of code.

Here’s the approach I’ve taken—and how it fits into my overall plan for PictoTales.


Why “Context Is King” in AI Coding

We often treat AI-based code editors as fancy autocomplete tools, but they can be much more. Properly “onboarding” your AI editor with rules, guidelines, and a project overview is like equipping a new team member with the knowledge they need to excel.

  1. Improved Accuracy
    By sharing the bigger picture—your business rules, coding standards, and system architecture—AI can generate solutions that align more closely with your project’s needs.
  2. Reduced Redundancy
    Context prevents the AI from making the same mistakes or asking the same clarifying questions repeatedly.
  3. Enhanced Collaboration
    When the AI “understands” your style, constraints, and goals, it transitions from a code generator to a genuine thought partner.

The Three Pillars of Context

When working on PictoTales, I provide three main types of context to my AI coding assistant:

1. AI Editor Universal Rules

These are general guidelines or “house rules” for the AI itself—akin to an employee handbook. Some rules might include:

  • Teaching Approach: The AI should explain concepts in simple terms, break down complex problems into steps, and encourage best practices.
  • Coding Standards: Use meaningful variable names, add comments for clarity, and follow recommended patterns (like RESTful APIs or TypeScript interfaces).
  • Error Corrections: If code is wrong or buggy, explain why and how to fix it, without being overly cryptic.
  • Encourage Curiosity: Prompt me (the developer) to ask questions, explore different solutions, and refine my approach.
Tip: If you have your own universal rules, store them in a markdown or text file. Then, reference them whenever you start a coding session or new conversation with your AI.
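For example, a trimmed-down universal rules file (the name ai_universal_rules.md is purely illustrative) might contain:

# AI Editor Universal Rules
- Explain the reasoning behind non-obvious code, in plain language.
- Prefer meaningful names, typed interfaces, and established patterns over clever one-liners.
- When something is buggy, say why it is wrong and show the corrected version.
- Ask clarifying questions rather than guessing at missing requirements.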

2. Project Rules

Beyond universal guidelines, each project has its own “DNA.” For PictoTales, I have a project-specific set of rules such as:

  • Use TypeScript for all new web features in the Next.js app.
  • Follow the Prisma schema for all database interactions.
  • Apply the 5C Framework (Context, Change, Choice, Commitment, Consistency) to ensure features align with our core vision of fostering family connections.
  • Security & Performance: Don’t expose sensitive environment variables; ensure code can scale to thousands of stories being generated concurrently.

These rules live in my project’s documentation folder and serve as a quick reference for both human contributors and my AI assistants.

3. Context to Provide During Editing

Finally, there’s dynamic context that changes with each coding session or feature request. This might include:

  • Active Branch or Feature: Are we working on story-generation or adding a new route in Next.js? Let the AI know.
  • Relevant Snippets or Files: If I’m refactoring the “Audio Generation” module, I’ll share the relevant code block or function signature.
  • Project Structure: Showing the file/folder hierarchy helps the AI locate relevant files or guess where new ones should be placed.
Here’s a command snippet I often use to provide my AI editor with an overview of the monorepo’s structure:
tree -L 6

And here’s a simplified result that I might share verbatim with the AI:

PictoTalesLoveNext/
├── apps/
│   ├── ai-service/
│   │   ├── service.py
│   │   ├── dspy_story_generator.py
│   │   ├── story_audio_generation.py
│   │   ├── story_image_generator.py
│   │   └── story_translator_v2.py
│   └── web/
│       ├── app/
│       ├── modules/
│       └── content/
├── packages/
│   └── database/
└── docs/

The tree highlights key directories: ai-service (Python-based), web (Next.js), and a shared database package with Prisma. Providing this structure helps the AI generate accurate import paths and place new files in the correct location.
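One practical note: in a JavaScript monorepo the raw tree output is quickly dominated by dependency and build folders (node_modules, .next, dist, and so on), so filtering them out with tree's -I flag keeps the overview readable, for example:

tree -L 6 -I 'node_modules|.next|dist'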


Practical Examples of Context-Onboarding

Let’s say I’m implementing a new workflow for multi-language audio narration in PictoTales. Before asking the AI to generate any code, I’ll do the following:

  1. Restate the Universal Rules: Remind the AI to explain logic and maintain best practices.
  2. Reiterate Project Rules: Emphasize that we need TypeScript for the front end, Python for the AI service, and consistent naming schemes.
  3. Set the Scene: Mention that we’re working on the “Audio Generation Workflow.”
  4. Provide the Existing Code: Paste the relevant function or interface for the AI to reference.
  5. Specify the Goal: E.g., “We want to add a language selector so parents can choose Spanish or English. Then generate an MP3 file and store it in our S3 bucket.”

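Put together, the preamble for a session like this might look roughly as follows (file names and details are illustrative):

Context for this session:
- Universal rules: see ai_universal_rules.md (explain reasoning, follow agreed best practices).
- Project rules: TypeScript in apps/web, Python in apps/ai-service, Prisma for all database access.
- Feature: Audio Generation Workflow, multi-language narration.
- Relevant code: [paste of generate_audio() from story_audio_generation.py]
- Goal: add a language selector so parents can choose Spanish or English, generate an MP3, and store it in our S3 bucket.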
Armed with this context, the AI can offer more precise and robust code—essentially becoming a specialized collaborator instead of a general-purpose assistant.


Innovative Ways to Leverage Context

  1. Versioning Your Prompts
    Just like you commit code, consider versioning your AI prompts. If you find a particularly effective set of context statements, label them with a version (e.g., ai_context_v1.md) and refine them as your project evolves.
  2. Context Modules
    Create modular context files for each major subsystem (e.g., “Database Context,” “Authentication Context,” “AI Workflow Context”) and combine them on-the-fly as needed.
  3. Automated “AI Onboarding” Script
    Build a simple script that, whenever you open a new coding session, automatically pipes your universal rules, project rules, and current environment details into the AI conversation. This ensures you don’t forget any crucial info.
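As a concrete illustration of the third idea, here is a minimal Python sketch of what such an onboarding script could look like (the file names and paths are assumptions, not necessarily how PictoTales is organized):

# onboard_ai.py - minimal sketch of an "AI onboarding" helper
from pathlib import Path

# Context files to concatenate; adjust to wherever your rules actually live.
CONTEXT_FILES = [
    "docs/ai_universal_rules.md",   # universal "house rules" for the AI
    "docs/project_rules.md",        # project-specific rules
]

def build_context(session_notes: str = "") -> str:
    """Concatenate rule files (plus optional session notes) into one block to paste into the AI editor."""
    sections = []
    for name in CONTEXT_FILES:
        path = Path(name)
        if path.exists():
            sections.append(f"## {path.name}\n{path.read_text()}")
    if session_notes:
        sections.append(f"## Session notes\n{session_notes}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    # Print to stdout so the result can be piped or copied into a new AI conversation.
    print(build_context("Working on the Audio Generation Workflow (multi-language narration)."))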

PictoTales Update: Applying This Strategy

For PictoTales, every feature—from generating bedtime story illustrations to orchestrating multi-step audio workflows—goes through the same “context pipeline.” Here’s how it’s helped me:

  • Consistency Across Services: The Python AI service and Next.js front end maintain the same naming conventions, error handling, and logging approaches.
  • Fewer Iterations: I spend less time clarifying each request; the AI editor already “knows” the frameworks and constraints.
  • Depth Over Breadth: Instead of a scattershot approach, I can delve deeper into refining features like emotional tone in audio narration or advanced image styles.

Closing Thoughts & Next Steps

Providing robust context to your AI coding partner can transform it from a neat tool into a powerful extension of your development process. As you’ve seen, it involves more than just a quick prompt: it’s about systematically sharing your universal rules, project guidelines, and session-specific details.

  • Stay Tuned: In my next post, I’ll explore how I organize and automate these “Context Modules” in my workflow, complete with a small Node.js script that helps me onboard AI with minimal manual copying and pasting.
  • Try It Yourself: Next time you open a new AI coding session, copy over your project structure, share relevant config files, and remind the AI of your coding style. Watch the difference in the quality of suggestions.

Thanks for following along on the PictoTales journey, and here’s hoping these context strategies help you build (and code) more effectively. If you have your own tips or you’ve discovered creative ways to “onboard” your AI assistant, drop me a line or comment below. Let’s keep learning—and building in public—together!

Until next time, keep your AI well-fed with context, and happy hacking!


Code Review & Feedback (For the Curious)

Here’s a quick demonstration (in line with my “universal rules” for AI editors) of how I might review a snippet:

User Snippet:

# In story_audio_generation.py
def generate_audio(text, voice_type):
    # code to generate audio
    pass

AI Feedback:

  1. Explain the Purpose: “This function generates audio narrations for stories, using the chosen voice_type.”
  2. Suggestions for Improvement:
    • Add docstrings to clarify parameter types.
    • Consider handling exceptions from external APIs.
    • Add logging to track performance and error details.
  3. Example Implementation:
import logging

logger = logging.getLogger(__name__)


def generate_audio(text: str, voice_type: str) -> bytes:
    """
    Generates audio narration for the given text and voice type.

    :param text: The story text to be converted to audio.
    :param voice_type: The type of voice to use (e.g., 'child', 'narrator').
    :return: Audio data in bytes.
    """
    try:
        # your_text_to_speech_api is a placeholder for whichever text-to-speech client you use
        audio_data = your_text_to_speech_api(text, voice_type=voice_type)
        return audio_data
    except Exception as e:
        logger.error(f"Audio generation failed: {e}")
        raise

This level of detail ensures you, the developer, learn best practices while making your code robust.


Suggestions for Further Learning or Practice

  • Prompt Engineering Best Practices: Explore courses or articles on how to design prompts for LLMs.
  • Documentation-Driven Development: Treat your AI as an ally that thrives on well-documented code and guidelines.
  • Experiment With Reasoning LLMs: Try a reasoning model such as OpenAI o1 or DeepSeek R1 as your “software architect” to generate a PRD (Product Requirements Document) or a system architecture outline, then feed those documents back in as context.
  • Build a Reusable “Context Delivery” System: Investigate building scripts or using advanced prompt managers to automate context provisioning at scale.

Keep iterating and refining your AI approach. Like any team member, the more context your AI collaborator has, the more value it can bring to your codebase—whether you’re weaving magical family tales or building the next big SaaS product.

Happy building!