CONVERSE PYTHON, PARTNER AI The Intentional Prompting Methodology By Michael Borck ============================================================ ============================================================ SOURCE: index.qmd ============================================================ # Preface ## Why This Book Exists Most developers use AI to get code written faster. They prompt, they paste, they ship. That approach produces code, but it skips the thinking that makes a developer good at their job. Programming with AI is not just about speed. It is about thinking differently about the development process itself. The developers who get the most from AI tools are not the ones who delegate the most. They are the ones who stay in the conversation: directing, questioning, understanding, and making the architectural decisions that the model cannot make for them. This book teaches a structured methodology for that conversation. It calls the approach intentional prompting — using AI tools with precision and purpose while ensuring you remain the architect and decision-maker throughout. ## Who This Book Is For - Software developers looking to integrate AI tools effectively into their workflow - Educators teaching programming in an AI-assisted world - Students learning to code alongside AI assistants - Tech leaders developing best practices for AI-augmented development teams You do not need to be an expert programmer. Basic familiarity with programming concepts will help you get the most from the examples, but the methodology applies at every level. ## What This Book Is Not This is not a guide to specific AI tools. It does not teach you how to use GitHub Copilot, ChatGPT, Claude, or any particular platform. Interfaces change; the methodology here does not. It is not a prompt cookbook. You will find prompts throughout, but they are starting points for conversation, not finished products. 
If you are looking for a library of copy-paste prompts that write your code for you, this is the wrong book. It is not a book about replacing developers with AI. It argues the opposite: that AI is most powerful when a skilled developer is directing it, and that the skills which make you a good developer — decomposition, abstraction, critical evaluation, architectural judgement — are exactly the skills that make you good at working with AI. And it is not a book that pretends AI always gets it right. A significant part of the methodology is about recognising when AI output is wrong, understanding why, and knowing how to fix it. ## If You Are Feeling Uncertain You are not alone. Many developers experience a quiet anxiety: that everyone else has figured this out already, that they are falling behind, that their skills are becoming obsolete. That feeling is nearly universal and rarely admitted. You are not behind. The technology is genuinely new, and the developers who appear to have it all figured out are mostly just a few weeks ahead. This book meets you wherever you are. ## How This Book Is Structured The book is divided into five parts: 1. The Philosophy: Why intentional prompting matters and how it connects to the broader methodology 2. Foundations: Core concepts of the human-AI partnership and the principles behind effective collaboration 3. Methodology: The six-step programming methodology adapted for AI collaboration 4. Patterns & Practices: Effective prompting patterns, debugging, refactoring, and practical case studies 5. Advanced Topics: Scaling to complex projects, teaching and learning contexts, and future directions Each chapter includes practical examples, exercises, and reflections to help you apply these concepts in your own work. ## Conventions Used in This Book Throughout the book, AI conversations appear as grey monospace blocks labelled You: (what you type into the AI) and AI: (what comes back). 
These look like what you would see on screen — plain text, no formatting. Narrative text between conversations provides commentary — explaining why the prompt was structured that way, what to notice in the response, and what principle is being demonstrated. You will also encounter coloured callout boxes. Each serves a different purpose. Green boxes offer tips you can apply immediately — prompting strategies, workflow suggestions, things to try. Blue boxes highlight important ideas worth pausing on — principles of intentional prompting, methodology insights, things to remember. Yellow boxes flag common mistakes or misconceptions — things that look right but lead to poor results. ## The Companion Book This book applies the methodology developed in Conversation, Not Delegation: How to Think With AI, Not Just Use It (Borck, 2025) to software development. That companion book covers the full framework in depth: the Conversation Loop, the VET framework for evaluating AI output, the average-versus-precise decision grid, and the principle of AI Last. If you want the underlying rationale for why the approaches in this book work, or you want to share a discipline-neutral version with non-developer colleagues, start there. Available at https://michael-borck.github.io/conversation-not-delegation. ## Related Resources This guide is part of a series designed to help you master modern software development in the AI era. Start with Think Python, Direct AI if you are a complete beginner, or jump into Code Python, Consult AI if you want a more focused approach to Python fundamentals. Both prepare you for the production-focused content in Ship Python, Orchestrate AI. For web development, start with Build Web, Guide AI. The methodology from Converse Python, Partner AI enhances all your development work. Conversation, Not Delegation — the general methodology for working with AI across any discipline. Covers the Conversation Loop, the VET framework, and the principle of AI Last. 
Converse Python, Partner AI (this book) — intentional prompting methodology applied to software development. Think Python, Direct AI — computational thinking for absolute beginners. Learn to think like a programmer with AI as an exploration partner. Code Python, Consult AI — focused Python fundamentals with AI integration. Ship Python, Orchestrate AI — professional Python development practices with modern tools (uv, ruff, mypy, pytest). Build Web, Guide AI — web development (HTML, CSS, JavaScript, WordPress, React) with AI as your development partner. All titles are available at books.borck.education. ## Ways to Engage with This Book This book is available in several formats. Pick whichever fits how you work and learn. - Read it online. The full book is freely available at the companion website, with dark mode, search, and navigation. - Read it on paper or e-reader. Available as a paperback and ebook through Amazon KDP. - Converse with it. The online edition includes a chatbot grounded in the book's content. Ask it questions about intentional prompting, the six-step methodology, or any concept in the book. - Feed it to your own AI. The `llm.txt` file provides a clean text version of the entire book, ready to paste into ChatGPT, Claude, or any AI tool. - Run the code. All code examples and supplementary materials are available on GitHub. DeepWiki provides an AI-navigable view of the repository. - Browse all books. This book is part of a series. See all titles at books.borck.education. The online version is always the most current. The printed and ebook editions are updated periodically. ## Feedback and Errata Found an error? Have a suggestion? 
- Open an issue: github.com/michael-borck/converse-python-partner-ai/issues - Email: michael@borck.me ============================================================ SOURCE: copyright.qmd ============================================================ # Copyright \thispagestyle{empty} \vspace*{\fill} Converse Python, Partner AI\ The Intentional Prompting Methodology Copyright © 2026 Michael Borck. All rights reserved. Published by Michael Borck\ Perth, Western Australia ISBN: 979-8-2543-0182-0 First edition, 2026. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means without the prior written permission of the author, except for brief quotations in reviews and certain non-commercial uses permitted by copyright law. This work is also available under a Creative Commons Attribution (CC BY) licence for content and MIT licence for code examples at the companion website. See below for details. AI disclosure: This book was written using the methodology it describes. AI tools were used as thinking partners throughout the drafting, iterating, and refining process. The author reviewed, challenged, and took responsibility for every sentence and line of code. Full details of the tools used are in the acknowledgments. Companion website: https://michael-borck.github.io/converse-python-partner-ai\ Source: https://github.com/michael-borck/converse-python-partner-ai \vspace*{\fill} ============================================================ SOURCE: chapters/philosophy.qmd ============================================================ # Conversation, Not Delegation > The difference between using AI well and using it poorly is not technical skill. It is whether you are thinking or just accepting. ## The Core Distinction When you interact with AI, you make a choice, often unconsciously, about the nature of that interaction. Delegation is handing off a task: "Write me a marketing plan." "Create a Python function that does X." 
You ask, AI delivers, you accept. It's transactional. The AI does the thinking; you consume the output. Conversation is collaborative thinking: You engage the AI in dialogue. You ask questions, challenge assumptions, build understanding together. The AI contributes, but you remain the architect of the outcome. This distinction matters because: - Delegation makes you dependent. You get output but not understanding. When something goes wrong, you can't fix it. When requirements change, you start from scratch. - Conversation makes you capable. You build understanding alongside output. You can explain, modify, and improve. You grow with each interaction. The philosophy of this book is simple: Treat AI as a thinking partner, not a task executor. ## Vibe Coding and What Comes After There is a popular approach to AI-assisted development called "vibe coding" — a term coined by Andrej Karpathy to describe coding by feel, prompting AI until something works, without necessarily understanding the code it produces. The idea is liberating: anyone can build software by describing what they want and iterating until it runs. Vibe coding is real, and for quick prototypes and throwaway scripts it can be genuinely useful. Communities like roadmap.sh/vibe-coding have documented practical workflows around it: plan before you code, be specific in your prompts, manage context, test your output, use version control. These are sensible practices, and many of them overlap with what this book teaches. The difference is what happens when something goes wrong — or when something needs to change. A vibe coder who gets a working function cannot explain why it works. When a bug appears in production, they paste the error back into the AI and hope for a fix. When requirements shift, they start the conversation from scratch because they never understood the original design. Each interaction is independent. Nothing compounds. 
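Suppose the working function in question is a longest-palindrome finder. The sketch below shows what such a function might look like in Python; the helper name `expand_around_center` and the implementation details are illustrative assumptions, not output from any real AI session:

```python
def longest_palindrome(s: str) -> str:
    """Return the longest palindromic substring of s (illustrative sketch)."""

    def expand_around_center(left: int, right: int) -> str:
        # Widen the window while both ends are in bounds and the characters match.
        while left >= 0 and right < len(s) and s[left] == s[right]:
            left -= 1
            right += 1
        # The loop always overshoots by one on each side, so trim back.
        return s[left + 1:right]

    best = ""
    for i in range(len(s)):
        odd = expand_around_center(i, i)        # odd length: centre at i
        even = expand_around_center(i, i + 1)   # even length: centre between i and i + 1
        best = max(best, odd, even, key=len)
    return best
```

A version that omits the `even` call looks almost identical, still passes on inputs like "racecar", and silently fails on "abba". Whether you can name that boundary condition is exactly what separates the two developers in this discussion.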
An intentional prompter who gets the same function can explain the algorithm, identify where it will fail, and modify it confidently. When they ask the AI for help, they can describe the problem precisely — not "it's broken" but "the expand_around_center function isn't handling even-length palindromes because the boundary check is off by one." That precision gives the AI the context it needs to be genuinely useful, rather than guessing. Understanding compounds. Each problem you work through intentionally makes the next one easier — you recognise patterns, you know what questions to ask, you can evaluate AI suggestions against your own mental model. Vibe coding does not compound. Each session starts from zero because no understanding was built in the previous one. The six-step methodology in this book is not slower than vibe coding. It is faster in the medium term, because the understanding you build in Steps 1-4 makes Steps 5-6 dramatically more productive. You are not guessing at prompts — you are giving the AI precise specifications based on work you already did. The AI responds better because you gave it better input. And when the output needs fixing, you know exactly where to look. There is also a complexity ceiling. Vibe coding works for single-file scripts and quick prototypes. But when a project grows to multiple modules, shared state, external APIs, and concurrent users, the AI gets stuck too. It suggests fixes that introduce new bugs because it is guessing at the architecture — just like you are. A developer who understands the system's structure can describe the problem precisely enough for the AI to help. A developer who does not understand the structure is reduced to pasting error messages and hoping. The gap between those two developers widens with every layer of complexity. An important clarification: intentional prompting does not mean avoiding AI until Step 5. 
AI is your partner through all six steps — brainstorming requirements in Step 1, exploring edge cases in Step 2, tracing through examples in Step 3, refining pseudocode in Step 4. The methodology is not "think without AI, then code with AI." It is "think with AI at every stage, so that when you reach the coding stage, both you and the AI have the context to produce something good." This is the Conversation Loop from Conversation, Not Delegation applied to programming: brainstorm, ideate, iterate, amplify — with AI as your thinking partner throughout. This book does not argue against using AI to write code. It argues for understanding what you are asking the AI to build, so that when it builds it, you can tell whether it is right. ## What is a "Prompt"? Before we go further, let's demystify some terminology. > A prompt is simply a request or instruction you give to AI. That's it. When you type "Explain how photosynthesis works" into ChatGPT, that's a prompt. When you ask Claude to "Review this code for bugs," that's a prompt. When you tell Copilot to "Write a function that calculates shipping costs," that's a prompt. Prompts can be simple or complex, vague or precise, single questions or multi-paragraph specifications. The quality of AI's response depends heavily on how you craft your prompt, but at its core, a prompt is just a request. ## What is "Intentional Prompting"? > Intentional prompting is the practice of crafting each AI interaction deliberately, to build understanding, not just get output. The word "intentional" is key. It means: - Purposeful: Each prompt has a clear goal beyond "give me an answer" - Thoughtful: You consider what you're asking and why - Sequential: Each prompt builds on previous exchanges - Learning-oriented: You seek understanding, not just solutions A single prompt is a request. A series of intentional prompts becomes a conversation. 
Consider the difference. Unintentional prompting asks for a complete solution in one shot and accepts whatever comes back. Intentional prompting works through the problem in stages, restating it, probing edge cases, and requesting one focused piece at a time. The second approach takes longer but produces: - A solution you understand and can modify - Knowledge that transfers to similar problems - Confidence in the code's limitations - Skills that grow with each interaction ## One Prompt, One Job A related principle: give each prompt a single focused task. When you ask AI to "write a function, add error handling, write tests, and document it," you get shallow results for all four. When you ask it to write the function first, then handle errors in the next message, then write tests, each response is deeper and more useful. The model calibrates its depth to match the scope of what you asked. A checklist gets checklist responses. A focused question gets a considered answer. This is the foundation of the conversation approach. You are not handing over a task list. You are working through a problem one step at a time, reviewing each result before moving on. ## The Philosophy in Action Let's see how delegation and conversation differ in practice. ### A Python Example The delegation approach is a single prompt asking for a complete function, followed by copy, paste, and move on. The conversation approach starts from the same goal but works through it: restating the problem, agreeing on inputs and edge cases, and only then asking for code, reviewing each piece before requesting the next. The same principle applies beyond programming. Whether you are planning a marketing launch, designing a research methodology, or writing a policy document, the difference between delegation and conversation determines whether you understand the result or just received it. ## Beyond Programming While this book applies intentional prompting to software development, the philosophy of Conversation, Not Delegation works in any domain: business planning, creative writing, research, education, and beyond. The principles transfer - only the examples change. ## The Two-Chat Workflow One practical way to build this discipline into your development workflow is to separate thinking from building entirely. Instead of doing everything in one session, run two. Chat 1: Understand the problem. Open a session with no intention of writing code. 
Use it to explore the requirements, challenge your assumptions, and clarify what you are actually trying to build. What are the edge cases? What are the real constraints? What approaches have you not considered? Let the conversation wander. This session is messy by design. Its purpose is to sharpen your thinking until you know what you actually need. Chat 2: Write the code. Start a fresh session and arrive with a clear specification: what the function should do, what inputs it takes, what edge cases it must handle, and what design decisions you have already made. The code will be dramatically better because the input is better. The critical step is the gap between the two chats. You do not dump everything from the first session into the second. You review what emerged, keep what matters, discard what does not, and write a focused brief that reflects your decisions. That act of curation is where your engineering judgement lives. This maps directly to the average-versus-precise framework. A project that starts in the danger zone (large, precise) gets decomposed during the thinking chat into components that each sit in different quadrants. The architecture discussion belongs in Chat 1. The individual functions can be written in Chat 2 with appropriate trust for their position on the grid. ## What is Ahead This book will teach you to have productive conversations with AI: - Part 1: Foundations establishes the core concepts and principles - Part 2: Methodology provides a structured six-step approach - Part 3: Patterns and Practices offers reusable conversation patterns - Part 4: Advanced Topics addresses complex scenarios and future directions Throughout, you will see examples in both programming and business contexts, reinforcing that these principles apply wherever you work with AI. The goal is not to make you dependent on AI, but to make you more capable with AI as your partner. Let's begin. 
============================================================ SOURCE: chapters/introduction.qmd ============================================================ # Introduction > Intentional prompting is not about getting better answers from AI. It is about becoming a better developer alongside it. ## Applying the Philosophy to Software Development In the previous chapter, we established the core philosophy: Conversation, Not Delegation. We defined intentional prompting as the practice of crafting each AI interaction deliberately, to build understanding, not just get output. Now we turn to software development specifically. The landscape of programming is undergoing a profound transformation. With AI assistants like GitHub Copilot, Claude, and ChatGPT, developers have powerful tools that can generate code, explain concepts, debug problems, and assist with software design. These tools have ignited debates: Will AI replace programmers? Should we resist AI-generated code? Is programming fundamentally changing? This book proposes a clear answer: Intentional Prompting, applying the "Conversation, Not Delegation" philosophy to position AI as a collaborative partner in development rather than either a threat or a magical solution. ## What is Intentional Prompting? Intentional prompting is a deliberate approach to programming that leverages AI tools while maintaining human control over the development process. Rather than simply asking AI for complete solutions, intentional prompting focuses on: 1. Understanding programming concepts well enough to evaluate AI-generated code 2. Guiding AI through a structured development methodology 3. Using AI as a tool for learning and exploration rather than just output 4. 
Maintaining the human developer as the architect and decision-maker This approach recognises that the most powerful development workflow combines human creativity, judgment, and domain knowledge with AI's ability to handle routine coding tasks, suggest implementations, and provide contextual information. ### Intentional Prompting vs. Prompt Engineering It's important to distinguish intentional prompting from the related but distinct field of prompt engineering: Prompt Engineering focuses primarily on crafting effective instructions for AI systems to produce desired outputs. It encompasses techniques like zero-shot and few-shot prompting, chain-of-thought prompting, and role-based prompting. The emphasis is on optimising the prompt itself to get the best possible output from the AI. Intentional Prompting builds upon prompt engineering but differs in several key ways: 1. Methodology vs. Technique: Intentional prompting provides a complete methodology (the six-step approach) rather than just techniques for crafting prompts. 2. Process vs. Output: While prompt engineering prioritises getting optimal outputs, intentional prompting emphasises the development process itself as the primary value. 3. Human Agency: Intentional prompting explicitly maintains the human as the director and decision-maker throughout the development process, not just the prompt creator. 4. Learning Focus: Intentional prompting has an explicit educational component, designed to enhance human understanding and skills alongside producing code. 5. Structured Framework: Intentional prompting embeds prompt techniques within a systematic framework for approaching programming problems, rather than treating prompting as an isolated activity. In essence, intentional prompting integrates prompt engineering techniques within a broader methodology that maintains human agency and prioritises understanding and learning throughout the development process. 
This distinction becomes especially important as AI capabilities advance toward autonomous coding agents, where maintaining human oversight and understanding becomes increasingly critical. For a comprehensive comparison of intentional prompting with other approaches to AI-assisted programming (including prompt engineering, pseudocode for LLMs, agentic programming, and vibe coding), see the comparative framework in Appendix C: AI Tool Comparison. ## Why Intentional Prompting Matters As AI coding assistants become increasingly capable, there's a risk that developers might: - Become overly dependent on AI suggestions without understanding them - Lose skills in fundamental programming concepts - Accept sub-optimal solutions because they're quick and easy Intentional prompting addresses these concerns by providing a framework that helps developers use AI effectively while continuing to grow their own skills and understanding. For students learning to code, this approach is particularly valuable. Rather than seeing AI as either a shortcut to avoid learning or a forbidden tool, intentional prompting teaches students to use AI as a learning partner that enhances their educational journey. ### Beyond Automation: The Case for Human Oversight The rapid advancement of AI has led to the emergence of agentic programming, where AI agents can autonomously perform complex development tasks with minimal human intervention. While these systems show impressive capabilities in routine coding tasks, they also highlight precisely why methodologies like intentional prompting remain essential. Research reveals that even the most sophisticated AI agents face fundamental limitations that necessitate human oversight: 1. Understanding Gap: AI agents lack true comprehension of problem context and domain requirements. They can pattern-match solutions but miss critical business or user needs that aren't explicitly stated. 2. 
Accountability and Ethics: Humans must bear responsibility for decisions with real-world impacts. Delegating this completely creates accountability gaps when things go wrong. 3. Novel Problem Solving: While agents excel at pattern recognition, they struggle with truly novel problems requiring innovative approaches outside their training distribution. 4. Quality Thresholds: Agents lack the meta-awareness to know when a solution is "good enough" versus when to keep refining for the specific context. 5. Creative Direction: Human value judgments about elegance, maintainability, and design philosophy are subjective but crucial for creating coherent, high-quality systems. 6. Integration Knowledge: Humans better understand how code fits into larger systems, organisational constraints, and unstated requirements. 7. Skill Development: The learning benefits of intentional programming disappear if fully automated, undermining the educational value essential for long-term developer growth. Even with sophisticated agent networks, complex programming will likely remain a collaborative process with humans providing critical validation, creative direction, context understanding, and accountability. The intentional prompting methodology provides a structured framework for this collaboration, ensuring that humans maintain control of the development process while leveraging the strengths of AI assistants. ## The Six-Step Methodology At the heart of intentional prompting is a structured methodology that guides the development process: 1. Restate the problem in your own words 2. Identify input and output requirements 3. Work the problem by hand with concrete examples 4. Write pseudocode to outline the solution strategy 5. Convert to code (with AI assistance as appropriate) 6. 
Test with a variety of data, especially edge cases This methodology works both with and without AI assistance, but becomes particularly powerful when combined with intentional prompting techniques that guide AI tools effectively. ## Who Should Read This Book This book is designed for: - Professional developers adapting to AI-assisted workflows - Students learning programming in an AI-enabled world - Educators teaching programming in the age of AI - Technical managers developing best practices for AI integration No matter where you are in your programming journey, intentional prompting offers a framework for leveraging AI tools effectively while continuing to develop your own skills and understanding. ## Related Materials This book is part of a comprehensive series for mastering modern software development in the AI era: Foundational Methodology - Converse Python, Partner AI: The Python Edition (this book) Python Track - Think Python, Direct AI: Computational Thinking for Beginners - Perfect for absolute beginners - Code Python, Consult AI: Python Fundamentals for the AI Era - Core Python knowledge - Ship Python, Orchestrate AI - Professional Python in the AI Era Web Track - Build Web, Guide AI: Business Web Development with AI - HTML, CSS, JavaScript, WordPress, React In the following chapters, we'll explore the principles of intentional prompting in depth, examine the six-step methodology with practical examples, and develop patterns for effective collaboration between human developers and AI assistants. ============================================================ SOURCE: chapters/ai-revolution.qmd ============================================================ # The AI Revolution in Programming > The tools changed. The need for understanding did not. ## Evolution of Programming Tools The history of programming is a story of continuous abstraction. 
From machine code to assembly language, from procedural to object-oriented paradigms, each evolution has aimed to make programming more accessible, maintainable, and powerful. AI coding assistants represent the next major evolution in this timeline. Rather than simply providing syntax highlighting or code completion based on predefined rules, these tools can understand intent, generate complete solutions, and explain complex concepts in natural language. ## Types of AI Coding Assistants Several categories of AI tools have emerged to assist programmers: ### Code Completion Tools Tools like GitHub Copilot, Amazon CodeWhisperer (now Amazon Q Developer), and TabNine integrate directly into the development environment and suggest code completions as you type. These suggestions range from completing a single line to generating entire functions. These tools excel at: - Automating repetitive coding patterns - Suggesting syntactically correct code - Implementing common algorithms and data structures - Generating boilerplate code ### Conversational AI Assistants Conversational AI models like Claude, ChatGPT, and Gemini take a different approach. Rather than integrating directly into the code editor, they provide a chat interface where developers can: - Ask questions about programming concepts - Request code suggestions for specific problems - Ask for explanations of existing code - Debug issues - Discuss design alternatives ### IDE Integrations Some tools combine both approaches, integrating conversational AI directly into development environments. Examples include: - Visual Studio IntelliCode - JetBrains AI Assistant - Cursor These tools allow developers to maintain their familiar workflow while accessing AI assistance. ## Capabilities and Limitations Understanding the true capabilities and limitations of AI coding assistants is crucial for using them effectively. 
### What AI Can Do Well - Generate syntactically correct code in most programming languages - Implement standard algorithms and design patterns - Convert between similar programming languages - Explain code and programming concepts - Identify common bugs and suggest fixes - Provide alternative implementations for the same problem ### Current Limitations - Limited understanding of business context and domain-specific requirements - Difficulty with novel or extremely complex algorithms - Generation of plausible-looking but incorrect code for edge cases - Outdated knowledge of newer frameworks or libraries - Inconsistent code quality depending on prompt clarity - Potential to reinforce common programming misconceptions Understanding these limitations is not about dismissing AI assistance, but about knowing when and how to rely on AI versus your own judgement. Most of these limitations trace back to a single fact: LLMs interpolate, they do not retrieve. When you ask an AI assistant to write a function, it is not looking up the correct implementation in a database. It is predicting what plausible code would look like based on the millions of examples it absorbed during training. This is why it can produce a function that looks right, passes simple tests, and fails on edge cases. The prediction was convincing. The prediction was not verified. That distinction is why the intentional prompting methodology in this book emphasises understanding and testing, not just generating. ## Common Misconceptions Several misconceptions about AI coding assistants can lead to their ineffective use: ### "AI Will Replace Programmers" Reality: AI tools augment programmer capabilities rather than replacing the need for human judgment, creativity, and domain knowledge. The skills required for programming are evolving, not disappearing. 
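The earlier point, that a prediction can be convincing without being verified, is easy to show concretely. The leap-year pair below is a hypothetical illustration, not taken from any particular model's output: both functions agree on ordinary years, and only a century edge case separates them.

```python
def is_leap_year_naive(year: int) -> bool:
    # The pattern-matched guess: divisible by 4. Looks right, passes
    # tests on recent years, and is wrong for most century years.
    return year % 4 == 0

def is_leap_year(year: int) -> bool:
    # The full Gregorian rule: century years must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Simple tests pass for both versions:
assert is_leap_year_naive(2024) and is_leap_year(2024)
assert not is_leap_year_naive(2023) and not is_leap_year(2023)

# The edge case exposes the naive version: 1900 was not a leap year.
assert is_leap_year_naive(1900) != is_leap_year(1900)
```

Nothing in the naive version looks broken on casual review, which is why the methodology leans on deliberately testing edge cases rather than trusting plausible output.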
### "AI Always Produces Correct Code" Reality: AI-generated code often contains subtle bugs, inefficiencies, or security vulnerabilities that require human review. The most effective workflows involve careful review and testing of AI suggestions. ### "Using AI Is Cheating" Reality: AI assistance is simply another tool in the developer's toolkit. The skill lies in how effectively you direct the AI, evaluate its outputs, and integrate suggestions into your work. ### "AI Works Best When Asked for Complete Solutions" Reality: AI often produces better results when guided through a step-by-step process that breaks down complex problems and allows for iterative refinement. ## The Human-AI Partnership Model The most productive relationship with AI coding assistants is a partnership model rather than a service model. In this partnership: ### The Human Provides: - Problem understanding and domain knowledge - Architectural direction and design constraints - Quality standards and evaluation criteria - Creative direction and novel approaches - Critical thinking about suggested solutions ### The AI Provides: - Implementation suggestions - Alternative approaches - Documentation and explanations - Memory of syntax and APIs - Help with tedious or repetitive tasks This partnership model focuses on complementary strengths rather than replacement. It recognises that the combination of human creativity and judgment with AI capabilities creates possibilities that neither could achieve alone. ## Emerging Approaches to AI-Assisted Development As AI coding tools have advanced, distinct philosophical approaches to their use have emerged. These approaches represent different perspectives on how developers should integrate AI into their workflow. ### Vibe Coding: Embracing AI-Led Development "Vibe coding" (or "vibe programming") represents a paradigm shift where developers largely defer to AI for code generation. 
Introduced by Andrej Karpathy, a co-founder of OpenAI and former AI leader at Tesla, in early 2025, this approach encourages developers to "fully give in to the vibes, embrace exponentials, and forget that the code even exists." The vibe coding philosophy centres around: 1. Natural Language Prompting: Developers describe desired functionality in plain language rather than focusing on implementation details. 2. Minimal Code Review: Code is often accepted with limited scrutiny, prioritising speed and output over deep understanding. 3. Rapid Iteration: Developers quickly implement AI suggestions, test for functionality, and refine through further prompts rather than manual coding. 4. Reduced Technical Focus: As Karpathy described it, "It's not really coding - I just see things, say things, run things, and copy-paste things, and it mostly works." This approach has found particular traction in rapid prototyping and startup environments. Y Combinator reportedly found that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated. While vibe coding offers unprecedented speed and accessibility, it presents challenges in debugging, maintenance, and knowledge development. Critics argue that it can lead to poor code quality, security vulnerabilities, and a devaluation of fundamental programming skills. ### Intentional Prompting: Human-Directed AI Assistance Intentional prompting, which we'll explore throughout this book, takes a different approach. Rather than deferring to AI, it views AI as a powerful assistant within a human-directed process. This approach emphasises: 1. Structured Problem-Solving: Following a deliberate methodology that starts with understanding the problem 2. Selective AI Integration: Leveraging AI at specific points in the development process while maintaining human agency 3. Critical Evaluation: Carefully reviewing AI-generated code for correctness, efficiency, and security 4. 
Learning-Focused Collaboration: Using AI to enhance understanding rather than bypass it Intentional prompting recognises AI's value but prioritises the developer's control and comprehension of the final solution. ### Collaborative Integration: Finding the Middle Ground Between vibe coding's emphasis on speed and intentional prompting's focus on process lies a spectrum of approaches that organisations are adopting: 1. Permissive Acceptance: Maximising efficiency by broadly accepting AI outputs with minimal oversight 2. Dismissive Rejection: Maintaining traditional development methods with limited AI assistance 3. Collaborative Integration: Establishing defined processes where humans and AI work together with clear boundaries The collaborative integration approach has gained traction in many professional environments, offering a middle ground that captures efficiency gains while maintaining code quality and developer growth. ## Looking Ahead As AI coding assistants continue to evolve, we can expect: - Greater specialisation in domain-specific tools - Better integration with development workflows - Improved reasoning about complex systems - Enhanced ability to understand, refactor, and maintain legacy code - More personalised assistance based on your coding style and preferences These advancements will not eliminate the need for skilled developers, but will continue to transform how we approach programming tasks and what skills are most valuable. ============================================================ SOURCE: chapters/intentional-prompting-principles.qmd ============================================================ # Principles of Intentional Prompting > The developer who understands the code is more valuable than the developer who generated it. ## The Spectrum of AI Integration Approaches Before diving into the core principles of intentional prompting, it's important to understand the broader landscape of approaches to integrating AI into development workflows.
These approaches reflect different philosophies about the role of AI and human developers. ### Permissive Approach: "Just Take the Answer" The permissive approach involves accepting AI outputs with minimal human oversight or intervention. In this model, developers largely defer to AI-generated recommendations, content, and decisions. Advantages: - Maximum efficiency and speed in completing tasks - Reduced cognitive load on humans - Quick access to AI capabilities without friction - Easier adoption for non-technical users Limitations and Risks: - Potential propagation of AI errors or biases - Limited human learning and skill development - Reduced critical thinking and problem-solving practice - Overreliance may lead to degraded human capabilities over time - Lack of contextual understanding in complex situations This approach aligns closely with the "vibe coding" philosophy discussed in the previous chapter, prioritising speed and output over process and understanding. ### Dismissive Approach: "Reject AI Outright" At the opposite end of the spectrum is the dismissive approach, characterised by scepticism or outright rejection of AI tools. This stance prioritises traditional methods and maintains full human control. Advantages: - Maintaining full human control and autonomy - Preserving traditional skills and methods - Avoiding risks associated with AI errors - Clear human accountability and ownership - Preserving jobs and roles that might otherwise be automated Limitations: - Missing potential productivity and quality improvements - Requiring more human time and resources - Unnecessary repetitive or mechanical work for employees - Limited access to AI's data processing and pattern recognition capabilities ### Collaborative Approach: "Human-Directed AI Assistance" Between these extremes lies the collaborative approach, which views AI as a tool that enhances human capabilities without replacing human judgment. 
This middle-ground philosophy establishes a partnership where humans direct AI systems, critically evaluate their outputs, and maintain decision-making authority. Advantages: - Combining human judgement with AI efficiency - Maintaining human oversight while leveraging AI strengths - Enabling iterative improvement through feedback - Preserving human agency and accountability - Creating opportunities for human upskilling alongside AI use Challenges: - Requires more thought and time than pure acceptance - Demands both AI literacy and domain expertise - Needs more complex workflows and training - Requires clear frameworks for when to trust or question AI outputs Intentional prompting sits firmly within this collaborative approach, viewing AI as a thoughtful assistant rather than a replacement for human judgement. ### Comparing Intentional Prompting vs. Vibe Programming To illustrate the differences in these approaches, the following table compares intentional prompting with vibe programming across several important dimensions:

| Dimension | Intentional Prompting | Vibe Programming |
|-----------|-----------------------|------------------|
| Primary Goal | Understanding and skill development alongside output | Speed and output production |
| Development Process | Structured, methodical approach with defined steps | Rapid, conversational, minimal planning |
| Human Involvement | Human directs process and makes key decisions | Human describes desired outcome, AI leads implementation |
| Code Review | Thorough review with critical evaluation | Minimal review, focus on functionality over quality |
| Debugging Approach | Systematic investigation based on understanding | Trial-and-error, asking AI to fix issues |
| Learning Outcome | Progressive skill development and deeper understanding | Potentially limited technical growth, focus on prompt crafting |
| Code Quality | Optimised for readability, maintainability, efficiency | Prioritises working functionality over optimality |
| Ideal Use Cases | Complex systems, educational contexts, enterprise software | Rapid prototyping, MVPs, personal projects, non-critical applications |
| Knowledge Transfer | Facilitates team understanding and knowledge sharing | May create knowledge silos where only original developer understands prompting approach |
| Maintenance | Designed for long-term maintenance and evolution | May present challenges for future maintenance or extension |

This comparison isn't meant to suggest that one approach is universally superior to the other. Rather, each has contexts where it may be more appropriate. Intentional prompting offers significant advantages for educational settings, complex systems, team environments, and situations where code quality and maintainability are paramount. ### Example: Different Approaches to the Same Challenge To make these differences concrete, let's examine how the permissive (vibe coding) and collaborative (intentional prompting) approaches would handle the same programming challenge: creating a function to validate email addresses. #### Permissive Approach (Vibe Coding)

Human: Write a function that validates email addresses.

AI generates:

Human: It's not working for some emails with plus signs. Fix it.

AI generates updated code:

Human: It's still not working for email+alias@domain.com. Can you fix it?

AI revises again:

The permissive approach emphasises speed but may involve several iterations of trial and error without deepening understanding of the problem or solution. #### Collaborative Approach (Intentional Prompting)

Human: I need to create an email validation function. Before writing code, let's discuss what makes a valid email address and identify the key requirements.

AI: Provides explanation of email format requirements and edge cases

Human: Based on those requirements, let's brainstorm approaches. What are the trade-offs between using regular expressions versus parsing the parts individually?
AI: Explains different approaches with their advantages and limitations

Human: I'll choose the regex approach for simplicity. Let's write pseudocode first:

AI: Provides feedback on pseudocode and suggests improvements

Human: Now let's implement this in Python with a clear approach:

Human: Let's test this with various cases including edge cases:

- user\@example.com
- user+alias\@example.com
- user.name\@example.co.uk
- user\@sub.example.com
- incomplete-email-without-domain\@
- incomplete-email-without-username\@example.com
- user\@incomplete-domain

The collaborative approach takes longer initially but builds understanding, produces better-documented code, and addresses edge cases proactively rather than reactively. This example illustrates how the different approaches affect not just the final code but the entire development process and learning experience. ## Maintaining Human Agency At the core of intentional prompting is the principle of human agency - the programmer remains the architect and decision-maker throughout the development process, not just a consumer of AI-generated solutions. ### Directive vs. Delegative Approaches There are two fundamental ways to interact with AI coding assistants: Delegative Approach: Handing off problems entirely to the AI and accepting its solutions with minimal scrutiny or direction. Directive Approach: Guiding the AI through a structured process, maintaining control over architecture and design decisions, and critically evaluating its outputs.
Intentional prompting emphasises the directive approach, where you: - Break down problems before presenting them to AI - Provide clear constraints and requirements - Review and question AI-generated code - Make deliberate decisions about when and how to incorporate AI suggestions ### Techniques for Maintaining Agency - Frame the problem yourself before asking the AI for help - Establish evaluation criteria for solutions before generating them - Request multiple approaches to avoid anchoring on the first solution - Question assumptions in AI-generated code - Make final integration decisions based on your understanding, not convenience ## Understanding vs. Copying A critical distinction in intentional prompting is the difference between understanding a solution and merely copying it. The goal is not just working code, but comprehension that builds long-term programming capabilities. ### The "Black Box" Problem When developers copy AI-generated code without understanding it, they create "black boxes" in their codebase - components they can't effectively debug, maintain, or explain. Over time, this leads to brittle systems and stunted professional growth. ### The VET Check A useful shorthand for this comes from Conversation, Not Delegation: the VET framework. Before accepting any AI-generated code, ask three questions: - Verify: Does this code actually do what it claims? Run it, test it, check the output against known inputs. - Explain: Can I explain how this works to a colleague? If I cannot walk through the logic, I do not understand it yet. - Test: Does it hold up under pressure? Change a variable, try an edge case, feed it unexpected input. You understand code when you can explain how it works, modify it confidently to handle new requirements, identify potential edge cases it might not handle, recognise its performance characteristics, and connect it to broader programming concepts. If any of those feel uncertain, you are not done yet. 
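In practice, the Verify and Test steps can be as lightweight as a handful of assertions. A minimal sketch, using an invented helper of the kind an assistant might plausibly produce:

```python
def normalise_whitespace(text: str) -> str:
    """Hypothetical AI-generated helper: collapse runs of whitespace."""
    return " ".join(text.split())

# Verify: check the output against known inputs.
assert normalise_whitespace("a  b") == "a b"

# Test: apply pressure with inputs the prompt never mentioned.
assert normalise_whitespace("") == ""
assert normalise_whitespace("  leading and trailing  ") == "leading and trailing"
assert normalise_whitespace("tabs\tand\nnewlines") == "tabs and newlines"
```

The Explain step has no code equivalent: if you cannot narrate why `str.split()` with no arguments handles tabs and newlines as well as spaces, you are not done yet.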
### Strategies for Building Understanding - Request explanations of generated code - Ask "what if" questions about edge cases or modifications - Trace through execution with specific examples - Modify the code to handle different scenarios - Compare different implementations of the same functionality ## Good at the Average, Bad at the Precise There is a useful way to predict when AI will serve you well and when it will let you down. Think along two axes: average versus precise, and small versus large. AI excels at producing convincing averages. Ask it to write a utility function and you get something that reads like competent code, because the model has absorbed millions of similar functions and can synthesise a plausible representative. But ask for something precise, a specific algorithm optimised for your exact constraints, a function that must handle a particular edge case in your domain, and the same mechanism becomes a liability. The model predicts what plausible code would look like, which is how you end up with functions that pass simple tests but fail on the cases that matter. The second axis is scale. AI handles small, bounded tasks well. A single function, a focused prompt with clear constraints, plays to the model's strengths. But as tasks grow larger, with more interdependent components, conflicting requirements, and emergent complexity, the output starts to look coherent on the surface while falling apart in the details. The model will produce something that looks like a system architecture, but the dependencies will not hold and the design decisions will be inconsistent. These axes interact. Being good at the average means the model will confidently produce plausible-looking large systems that fail on precise details. The convincing average scales badly with complexity. Before using AI on any programming task, ask two questions: How precise does this need to be? and How big is this?

| | Small | Large |
|---|---|---|
| Average | Sweet spot. Utility functions, boilerplate, starter code. Trust with light review. | Plausible but brittle. Looks like a working system, falls apart under testing. |
| Precise | Workable with verification. Specific algorithms, edge case handling. Test thoroughly. | Danger zone. Confident architecture that is wrong in subtle ways. Stay in the loop at every step. |

The sweet spot tasks can often be delegated with a quick review. Everything else requires conversation, iteration, and your judgement as the developer who understands the actual requirements. This framework is explored in depth in Conversation, Not Delegation (Borck, 2025). ## Process Over Output Intentional prompting emphasises the development process rather than just the final code. This focus on process leads to better long-term outcomes in both code quality and developer growth. ### Why Process Matters - Better architecture: A thoughtful process leads to better-designed code - Fewer bugs: Systematic approaches catch edge cases that rushed solutions miss - Easier maintenance: Code developed through a clear process is typically more readable and maintainable - Knowledge transfer: Process-focused development makes it easier to onboard others - Skill development: Focusing on process builds transferable skills rather than point solutions ### The Six-Step Process The six-step programming methodology (which we'll explore in depth in Part 2) provides a structured process that works with or without AI assistance: 1. Restate the problem 2. Identify inputs and outputs 3. Work through examples by hand 4. Write pseudocode 5. Convert to working code 6.
Test thoroughly ### Integrating AI Into Your Process Rather than replacing your process, AI should enhance it: - Use AI to explore problem variations during problem restatement - Generate test cases during input/output identification - Verify your manual examples - Suggest and refine pseudocode - Help convert pseudocode to working implementations - Generate comprehensive test cases ## The Learning Mindset Intentional prompting is fundamentally about continuous learning rather than just task completion. This mindset treats every programming challenge as an opportunity to deepen understanding and build skills. ### Characteristics of a Learning Mindset - Curiosity about how and why solutions work - Comfort with not knowing everything immediately - Desire to understand deeply rather than just solve the immediate problem - Willingness to explore alternatives even after finding a working solution - Reflection on the development process to improve future approaches ### AI as a Learning Partner When approached with a learning mindset, AI assistants become powerful learning tools: - Use AI to explore concepts you don't fully understand - Ask AI to compare different approaches and explain tradeoffs - Request explanations of unfamiliar code patterns - Use AI to find gaps in your understanding - Challenge AI-generated solutions to deepen your own thinking ### Intentional Learning Techniques - Concept exploration: Ask the AI to explain concepts in multiple ways - Implementation comparison: Request different implementations of the same functionality - Knowledge testing: Explain a concept to the AI and ask for feedback - Deliberate challenge: Introduce constraints that force exploration of new approaches - Reflective questioning: Ask "why" questions about code decisions ## Ethical Considerations Intentional prompting includes ethical considerations about the use of AI in the development process. 
### Attribution and Transparency - Be transparent about AI contributions to your code - Understand your organisation's policies about AI-assisted development - Consider adding attribution comments for significant AI contributions - Maintain clear documentation of human design decisions ### Security and Quality Responsibility - Always review AI-generated code for security vulnerabilities - Never delegate final quality assurance to AI tools - Maintain awareness of common security issues in AI-generated code - Establish clear review processes for AI-assisted development ### Bias and Fairness - Be aware that AI tools may embed biases from their training data - Review code for fairness issues, especially in user-facing features - Consider diverse perspectives when evaluating AI-generated solutions - Question assumptions that might embed problematic patterns ### Professional Development Balance - Balance efficiency gains from AI with skill development needs - Identify core skills you want to strengthen, even with AI assistance - Create intentional learning projects where you limit AI assistance - Use AI to stretch beyond your current capabilities rather than stay within them ## Addressing Common Concerns and Resistance to LLMs Despite their utility, Large Language Models face resistance from many programmers and educators. Some of these concerns have deep historical roots, while others arise from the unique characteristics of modern LLMs. Understanding and addressing these concerns is essential for effective adoption of intentional prompting. ### The Ambiguity of Natural Language As far back as 1977, computer scientist Edsger W. Dijkstra presented a compelling critique of natural language programming in his essay "On the foolishness of 'natural language programming'". 
Dijkstra argued that programming fundamentally requires "the care and accuracy that is characteristic for the use of any formal symbolism" and that the inherent ambiguity of natural language made it unsuitable for the precision required in programming. Dijkstra's concerns were valid for his time, but modern LLM-based approaches offer new possibilities through iterative refinement processes: - Progressive disambiguation - Iterative approaches provide mechanisms to gradually eliminate ambiguities through multiple rounds of interaction, transforming imprecise natural language into precise formal representations - Structured frameworks - Methodologies like intentional prompting add structure to otherwise ambiguous interactions - Human-in-the-loop validation - The human programmer validates outputs and maintains final authority over implementation decisions The intentional prompting methodology directly addresses Dijkstra's concerns by providing a framework that bridges natural language and formal code, using the former as an entry point rather than a replacement for the latter. ### The Non-Deterministic Nature of LLMs Another significant concern involves the non-deterministic behaviour of LLMs, specifically the fact that they can produce different outputs even when given the same input. This unpredictability raises legitimate questions about reliability, especially in mission-critical applications. Research has documented significant variations in LLM performance across multiple runs, with accuracy varying up to 15% and gaps between best and worst performance reaching as high as 70%. This variability stems from: 1. Input interpretation variability - LLMs may interpret the same natural language prompt differently across different runs 2. 
Output generation variability - Even with the same interpretation, the code generated may vary due to sampling methods The intentional prompting methodology addresses these concerns through: - Structured evaluation - The methodology provides clear criteria for evaluating generated code - Explicit testing - Step 6 (Test with Data) ensures thorough validation of any generated solution - Human oversight - The human programmer maintains control over the development process, reviewing and modifying generated code as needed - Iterative refinement - The methodology embraces multiple iterations to converge on reliable solutions ### Procedural Knowledge Transfer Interestingly, LLMs excel at procedural tasks because they've inherited human problem-solving patterns through their training data. Research shows that LLMs have absorbed procedural knowledge through exposure to: - Error-checking protocols from technical manuals - Creative iteration cycles in writing samples - Mathematical proof structures in STEM literature - Software engineering best practices from code repositories This explains why models respond well to structured methodologies like intentional prompting: they're activating latent procedural knowledge that mirrors human problem-solving approaches. The six-step methodology leverages this characteristic by providing a framework that: - Activates the model's latent understanding of systematic problem-solving - Provides clear procedural guidance that aligns with effective human workflows - Creates a shared procedural language between human and AI ### Data Privacy: Real Risks vs Perceived Risks A common concern among developers and organisations is that code or proprietary logic pasted into AI tools might be extracted by other users. This fear is largely unfounded and worth understanding clearly. When you paste code into an AI tool, it is processed and a response is generated. Your code is not stored as a retrievable file inside the model. 
If your data is used for training at all (enterprise tiers typically exclude it), it becomes a vanishingly small statistical signal distributed across billions of parameters. Reconstructing your specific code from those parameters is not a realistic attack. Jailbreaking an LLM means manipulating its behaviour, not accessing a database of other users' sessions. And remember: LLMs interpolate, they do not retrieve. There is no mechanism by which another user could query the model and get your code back, because the model never stored it as a file in the first place. You will also encounter convergent development mistaken for theft. "I was building a feature using Copilot, and then a competitor released something very similar." Thousands of developers are solving the same problems with the same tools, reading the same documentation, responding to the same market needs. Independent arrival at similar solutions is how software development normally works. The risks that are real and worth managing: pasting API keys, credentials, or access tokens into any external tool is an immediate security exposure. Pasting personally identifiable data or data subject to regulation is a genuine compliance issue. And accepting AI-generated code without testing or understanding it is a professional liability, though that is a verification problem, not a data leakage problem. The "non-zero risk means do not use it" stance deserves scrutiny. Every development tool involves trade-offs. Git repositories can be compromised. Dependencies can be supply-chain attacked. We manage these risks through practice and policy, not prohibition. Developers and organisations that refuse to engage with AI because of overestimated data risks face a different set of consequences: slower development, skills that stagnate, and a widening gap with peers who learned to use the tools thoughtfully. 
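The credential risk is the one most worth automating. Below is a minimal sketch of a pre-paste check; the patterns are illustrative only, and real secret scanners use far larger rule sets:

```python
import re

# Illustrative patterns only -- not an exhaustive or production rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"(?i)api[_-]?key\s*[=:]\s*\S+"),          # generic api-key assignment
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def looks_sensitive(snippet: str) -> bool:
    """Return True if the snippet matches any illustrative secret pattern."""
    return any(p.search(snippet) for p in SECRET_PATTERNS)
```

A check like this belongs in the same category as a pre-commit hook: cheap, imperfect, and far better than relying on memory before pasting code into an external tool.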
### Finding the Right Balance The most effective approach to LLM integration lies in finding the right balance between permissive acceptance and dismissive rejection. The intentional prompting methodology represents this balanced middle ground: - It acknowledges LLMs' limitations regarding ambiguity and non-determinism - It establishes guardrails through a structured methodology - It leverages LLMs' strengths in pattern recognition and procedural knowledge - It maintains human agency and oversight throughout the development process By addressing these concerns directly and providing a structured framework for human-AI collaboration, intentional prompting offers a pragmatic approach that captures the benefits of AI assistance while mitigating its risks. ## Putting Principles Into Practice These principles (maintaining agency, understanding vs. copying, process over output, the learning mindset, ethical considerations, and addressing common concerns) form the foundation of intentional prompting. ============================================================ SOURCE: chapters/six-step-methodology.qmd ============================================================ # The Six-Step Programming Methodology > A methodology is not a constraint. It is a way of thinking that transfers to problems you have not seen yet. ## Overview of the Methodology The six-step programming methodology provides a structured approach to solving problems that works both with and without AI assistance. This methodology focuses on understanding before implementation and ensures that programmers maintain control of the development process. The methodology divides programming into two distinct phases: ### Design (Thinking) Phase - Steps 1-4 This phase focuses on understanding and planning before any actual coding begins: 1. Restate the problem in your own words 2. Identify input and output requirements 3. Work the problem by hand with concrete examples 4. 
Write pseudocode to outline the solution strategy ### Coding (Doing) Phase - Steps 5-6 Only after completing the design phase do we move to implementation: 5. Convert to code (with AI assistance as appropriate) 6. Test with a variety of data, especially edge cases This clear separation between thinking and doing is crucial. The pseudocode created in Step 4 serves as a bridge between these phases, transforming into comments that guide implementation in Step 5. ### The Hidden Structure of the Methodology While the six steps appear sequential, the methodology contains several important hidden elements: 1. Meaningful Divisions: The separation between Design and Coding reflects different modes of thinking - first conceptual, then practical. 2. Bridging Elements: Pseudocode serves as both the final product of the design phase and the starting framework for the coding phase. 3. Feedback Loops: The methodology is inherently iterative, with testing potentially sending you back to earlier steps for refinement. 4. Understanding to Doing: The methodology follows a natural progression from abstract understanding to concrete implementation. This methodology isn't revolutionary on its own. It draws from established problem-solving approaches and computational thinking practices like Waterfall, Agile, and Test-Driven Development. What makes it valuable in the age of AI is how it provides a framework for intentional prompting and collaboration with AI assistants while maintaining human understanding and control. ### Adapting to Your Preferred Methodology It's important to note that we're presenting this simplified methodology as a concrete example of how to structure AI-assisted development, not as the only valid approach. The six-step methodology serves as a template that demonstrates key principles of intentional prompting, which you can adapt to your preferred development methodology or domain. 
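The bridge described earlier, where Step 4 pseudocode becomes the comments that guide Step 5, can be sketched with a small invented problem (finding the longest word in a sentence, which is not an example from the text):

```python
def longest_word(sentence: str) -> str:
    # Step 4 pseudocode, carried forward as implementation comments:
    # 1. Split the sentence into words
    words = sentence.split()
    # 2. Handle the empty-sentence edge case surfaced in Step 3
    if not words:
        return ""
    # 3. Return the word with the greatest length
    return max(words, key=len)
```

If testing in Step 6 uncovers a new edge case, the comment trail shows exactly which design decision to revisit.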
#### Integration with Other Development Approaches These principles can be integrated into various development methodologies: - Agile: Use the problem understanding and design steps during sprint planning, with implementation and testing occurring within sprint cycles - Test-Driven Development: Enhance the TDD cycle by adding explicit problem restatement and planning before writing tests - Domain-Driven Design: Incorporate intentional prompting to help define the ubiquitous language and domain models #### Customising for Your Context When adapting these concepts to your situation, consider the following: 1. Identify key decision points where human understanding and direction are crucial 2. Establish clear boundaries for AI assistance within your workflow 3. Create explicit steps for critical evaluation of AI outputs 4. Define integration points where AI-generated components connect with human-directed architecture The core value proposition (maintaining human understanding while leveraging AI capabilities) remains valid regardless of which specific methodology you prefer. What's most important is having a structured approach to human-AI collaboration rather than following these exact six steps verbatim. Use this methodology as inspiration to develop your own intentional prompting workflow that aligns with your team's practices, domain requirements, and development philosophy while preserving the principles of human direction and understanding.
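For example, the TDD integration suggested above might look like the following sketch: the restatement is written down first, the tests come straight from it, and only then is the function implemented (the discount problem and names here are hypothetical):

```python
# Step 1 restatement (written before any code):
# "Given a price and a discount percentage, return the discounted
#  price, clamped so it can never drop below zero."

def apply_discount(price, percent):
    """Implemented only after the restatement and tests below were drafted."""
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)

# Tests derived directly from the restated requirements:
assert apply_discount(100.0, 25) == 75.0   # ordinary discount
assert apply_discount(50.0, 200) == 0.0    # clamped at zero
assert apply_discount(0.0, 10) == 0.0      # zero-price edge case
```

The restatement comment is the extra step intentional prompting adds to the red-green-refactor loop; the tests themselves are unchanged TDD practice.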
## How AI Changes the Process AI coding assistants change the dynamics of this methodology in several important ways: ### Acceleration and Exploration AI can accelerate many steps in the process, allowing for: - Rapid exploration of problem variations (Step 1) - Comprehensive consideration of edge cases (Step 2) - Verification of manual solutions (Step 3) - Generation of alternative pseudocode approaches (Step 4) - Quick implementation of solutions (Step 5) - Comprehensive test case generation (Step 6) ### The Temptation to Skip Steps With powerful AI tools, there's a temptation to skip directly to implementation (Step 5). This is the core challenge that intentional prompting addresses: maintaining a disciplined approach even when AI makes shortcuts possible. ### New Capabilities and Responsibilities AI introduces new capabilities at each step: - Discussing problem nuances in natural language - Generating edge cases you might not have considered - Checking your manual approach for errors - Suggesting implementation strategies you might not know - Implementing complex algorithms efficiently - Testing boundary conditions systematically These capabilities create new responsibilities for the programmer: - Critically evaluating AI suggestions - Maintaining awareness of the problem's full context - Verifying that AI-generated code meets requirements - Understanding the code that ultimately gets deployed ## Benefits of a Structured Approach Following a structured methodology with AI assistance offers several important benefits: ### Better Code Quality By working through all six steps rather than jumping straight to implementation: - Architectural issues are identified earlier - Edge cases are handled more comprehensively - Solutions are more robust and maintainable - The development process becomes more consistent ### Enhanced Learning The structured approach promotes deeper learning: - You develop a habit of understanding problems before coding - You practice 
computational thinking independent of specific languages - You build transferable problem-solving skills - You learn to evaluate solutions critically ### Appropriate AI Integration The methodology creates natural integration points for AI assistance: - Each step has clear inputs and outputs for AI collaboration - The division of responsibility between human and AI is clarified - You maintain control over the development process - AI complements rather than replaces your thinking ### Reduced Technical Debt Following all six steps helps avoid the accumulation of technical debt: - Solutions are designed rather than just implemented - Edge cases are addressed proactively - Code is tested systematically - Implementations are understood rather than just accepted ## Case Study: Solving a Problem With and Without AI To illustrate the methodology in action, let's consider a practical programming problem: > Create a function that finds the longest palindromic substring in a given string. ### Approach 1: Delegating to AI (Without Methodology) A common approach is to simply ask the AI directly: The AI might produce this solution: This code may work, but: - Do you understand how it works? - Could you modify it if requirements changed? - Are you confident it handles all edge cases? - Do you know its performance characteristics? ### Approach 2: The Six-Step Methodology with AI Assistance #### Step 1: Restate the Problem You: AI: #### Step 2: Identify Inputs and Outputs You: AI: #### Step 3: Work the Problem by Hand Human work: Let's manually trace through a simple example: the string "babad". To find palindromes: 1. Start with each position as potential center of palindrome 2. 
Expand outward while characters match For "babad": - Center at 'b' (index 0): Just 'b' (length 1) - Center at 'a' (index 1): 'bab' (length 3) - Center at 'b' (index 2): 'aba' (length 3) - Center at 'a' (index 3): Just 'a' (length 1) - Center at 'd' (index 4): Just 'd' (length 1) Also check even-length palindromes: - Between 'b' and 'a' (0-1): None - Between 'a' and 'b' (1-2): None - Between 'b' and 'a' (2-3): None - Between 'a' and 'd' (3-4): None Longest palindromes: "bab" or "aba" (both length 3) You: AI: #### Step 4: Write Pseudocode You: AI: #### Step 5: Convert to Code You: AI:

```text
Your Python implementation looks correct and follows your pseudocode closely! It will find the longest palindromic substring as required. A few small observations: 1. You've correctly used `nonlocal` to update the `start` and `max_length` variables from within the nested function. 2. The algorithm handles both odd and even length palindromes properly by starting from different center positions. 3. The time complexity is O(n²) and space complexity is O(1), which is efficient for most inputs. 4. Your code handles the empty string case correctly. The implementation is clean and efficient. The only possible optimisation would be implementing Manacher's algorithm for linear time complexity, but that's typically unnecessary unless you're dealing with extremely large strings.
```

You:

```text
Let's test my function with various test cases: 1. Normal case: "babad" → Should return "bab" or "aba" 2. Entire string is a palindrome: "racecar" → Should return "racecar" 3. Empty string: "" → Should return "" 4. Single character: "a" → Should return "a" 5. No palindromes longer than 1: "abc" → Should return "a" (or "b" or "c") 6. Multiple palindromes of same length: "aabbaa" → Should return "aabbaa" 7. Even-length palindrome: "abba" → Should return "abba" Can you help me verify if my implementation handles all these cases correctly?
```

AI:

```text
Let's trace through your implementation for each test case: 1.
`"babad"`: - For center at 'b' (index 0): Just 'b' (length 1) - For center at 'a' (index 1): 'bab' (length 3) - For center at 'b' (index 2): 'aba' (length 3) - For center at 'a' (index 3): Just 'a' (length 1) - For center at 'd' (index 4): Just 'd' (length 1) - The first palindrome of length 3 found is "bab" ✓ 2. `"racecar"`: - Expands from middle 'e' to full string "racecar" ✓ 3. `""`: - Immediate return of empty string ✓ 4. `"a"`: - Single character is a palindrome of length 1 ✓ 5. `"abc"`: - No palindromes longer than 1, returns first character "a" ✓ 6. `"aabbaa"`: - Full string is palindrome ✓ 7. `"abba"`: - Checking even centers finds "abba" ✓ Your implementation handles all these test cases correctly! It will return the expected outputs for each scenario. ``` ### Comparing the Approaches The six-step approach with AI assistance: - Built a clear understanding of the problem - Identified edge cases explicitly - Traced through examples manually - Developed the solution systematically - Resulted in a function the programmer fully understands - Verified handling of important test cases This approach might take more time initially, but it produces: - A deeper understanding of the problem - Confidence in the solution's correctness - The ability to modify the solution when needed - A stronger foundation for solving similar problems in the future ## The Methodology in Daily Practice While the full six-step methodology is valuable for learning and complex problems, it can be adapted for daily use: ### Quick Reference Version 1. Restate: What am I really trying to solve? 2. I/O: What goes in, what comes out? 3. Example: Try a simple case by hand 4. Plan: Outline the solution strategy 5. Code: Implement with appropriate AI assistance 6.
Test: Verify with varied test cases ### When to Be More Thorough Apply the full methodology more carefully when: - Learning new concepts - Solving complex problems - Implementing critical components - Working with performance-sensitive code - Teaching or mentoring others ### When to Streamline For familiar or simpler problems, you might: - Combine steps 1 and 2 into a quick problem analysis - Skip step 3 for very familiar problem patterns - Use AI to help generate pseudocode more quickly - Focus more heavily on thorough testing ## Key Takeaways The six-step methodology provides: - A structured framework for problem-solving - Natural integration points for AI assistance - Maintenance of human understanding and control - A balance between efficiency and depth In the chapters that follow, we'll explore each step of the methodology in detail, examining how AI assistance can be effectively integrated into each part of the process. ============================================================ SOURCE: chapters/restate-and-identify.qmd ============================================================ # Restate and Identify (Steps 1-2) > The quality of your solution is determined before you write a single line of code. The first two steps of the six-step methodology, restating the problem and identifying inputs and outputs, set the foundation for everything that follows. This critical preparation phase ensures you fully understand what you're trying to solve before diving into implementation. ## Techniques for Problem Restatement ### Why Restatement Matters Restating a problem in your own words serves several important purposes: 1. Clarifies understanding: The process of reformulating forces you to grapple with the problem's core elements 2. Identifies ambiguities: Areas you struggle to restate clearly often reveal hidden assumptions or unclear requirements 3. Provides ownership: Restating moves you from passive recipient to active problem-solver 4.
Creates a reference point: Your restatement becomes a touchstone to check your solution against ### Effective Restatement Approaches #### The ELI5 Method (Explain Like I'm Five) Simplify the problem to its most fundamental elements, as if explaining to someone with no technical background: Original: "Implement a function to find the shortest path between two nodes in a weighted directed graph using Dijkstra's algorithm." ELI5 Restatement: "I need to find the fastest route between two points on a map where some routes are quicker than others." #### The Components Breakdown Identify and separate the main components of the problem: Original: "Build a cache that stores the results of expensive function calls and returns the cached result when the same inputs occur again." Components Restatement: - "I need to store function results along with their inputs" - "When a function is called, I need to check if we've seen these inputs before" - "If we have seen the inputs, return the stored result instead of recalculating" - "If we haven't seen the inputs, calculate, store, and return the result" #### The Constraints Highlight Explicitly call out all constraints and requirements: Original: "Create a username validation function for a web form." Constraints Restatement: - "Usernames must be 3-20 characters long" - "Only alphanumeric characters, hyphens, and underscores are allowed" - "Cannot start or end with a hyphen or underscore" - "Must be case-insensitive (uppercase/lowercase doesn't matter)" - "Cannot contain reserved keywords from this list..." #### The Inverse Problem State what the solution is NOT supposed to do: Original: "Implement user authentication for the application." 
Inverse Restatement: - "This is NOT about creating user accounts or registration" - "This is NOT about password reset functionality" - "This is NOT about authorization (what users can do)" - "This IS about verifying a user's identity based on credentials" ### Using AI to Enhance Restatement AI assistants can help refine your problem restatements in several ways: #### Restatement Verification After restating the problem yourself, ask the AI to verify your understanding: #### Assumptions Identification AI can help uncover hidden assumptions in the problem: #### Edge Case Exploration AI can help identify boundary conditions and special cases: #### Domain-Specific Context For problems in unfamiliar domains, AI can provide context: ## Clearly Defining Inputs and Outputs The second step of the methodology focuses on establishing precise expectations for what goes into your solution and what comes out. ### Why Input/Output Specification Matters Clearly defining inputs and outputs: 1. Creates a contract: Establishes exactly what your solution promises to deliver 2. Guides testing: Forms the basis for comprehensive test cases 3. Simplifies debugging: Makes it clear when the solution is working incorrectly 4. Facilitates integration: Makes it easier to incorporate your solution into larger systems ### Elements of Complete I/O Specifications #### For Inputs: - Data types: The expected types of each input parameter - Value ranges: Acceptable ranges or constraints for input values - Required vs. optional: Which inputs can be omitted with default values - Format requirements: Any specific formatting expected (e.g., date formats) - Size constraints: Limitations on input size (e.g., maximum string length) #### For Outputs: - Return type: The data type of the returned value - Success indicators: How successful operation is communicated - Error handling: How errors are communicated (exceptions, error codes, etc.) 
- Special values: Any special return values and their meaning - Side effects: Any state changes or external effects besides the return value ### Documenting I/O With Examples Examples can clarify I/O specifications better than abstract descriptions: ### Using AI to Explore Edge Cases AI assistants excel at identifying edge cases and boundary conditions that might affect your I/O specification. #### Input Edge Cases Ask the AI to help identify input edge cases: This might reveal considerations like: - Zero weight or distance - Extremely large weights or distances - Non-numeric inputs - Negative values - Special shipping conditions #### Output Edge Cases Similarly, explore edge cases for outputs: This might reveal considerations like: - Minimum shipping costs - Free shipping thresholds - Rounding rules - Currency formatting - Maximum possible costs #### Error Condition Exploration Use AI to explore potential error conditions: This might reveal considerations like: - Invalid input types - Values outside acceptable ranges - Unavailable shipping routes - Rate limit restrictions ## Prompting Techniques for Requirement Clarification When requirements are unclear or incomplete, AI can help explore the problem space through directed prompting. ### The Five Whys Technique Repeatedly ask "why" to dig deeper into requirements: This technique helps uncover the underlying goals behind requirements. ### Stakeholder Perspective Prompting Ask the AI to consider requirements from different perspectives: ### Requirement Expansion Ask the AI to expand terse requirements into detailed specifications: ### Comparative Analysis Use AI to compare different interpretations of ambiguous requirements: ## Practical Exercise: Restate and Identify in Action Let's apply Steps 1 and 2 to a sample problem: > Original Requirement: "Create a function that converts dates between different formats." 
### Step 1: Restate the Problem Using Components Breakdown: - "I need to parse dates from various input formats" - "I need to convert parsed dates into specified output formats" - "I need to handle potential errors in the input formats" - "I need to ensure the conversion preserves the same date and time" Using Constraints Highlight: - "The function should accept a string representing a date" - "The function should accept a source format specification" - "The function should accept a target format specification" - "The function should return the date in the target format" - "The function should raise appropriate errors for invalid inputs" AI Verification Prompt: AI: ### Step 2: Identify Input and Output Input Specification: - `date_str`: A string representing a date in the source format - `source_format`: A string specifying the format of the input date - `target_format`: A string specifying the desired output format - `timezone`: (Optional) A string representing the timezone of the input date Output Specification: - On success: A string representing the same date in the target format - On error: Raises appropriate exceptions with clear error messages Examples: - `convert_date("2023-04-01", "%Y-%m-%d", "%m/%d/%Y")` → `"04/01/2023"` - `convert_date("April 1, 2023", "%B %d, %Y", "%Y-%m-%d")` → `"2023-04-01"` - `convert_date("invalid", "%Y-%m-%d", "%m/%d/%Y")` → `ValueError("Unable to parse date")` Edge Case Exploration Prompt: AI: ============================================================ SOURCE: chapters/work-by-hand.qmd ============================================================ # Work by Hand (Step 3) > If you cannot solve it on paper, you cannot explain it to a machine. Step 3 of the six-step methodology, working through examples by hand, is often the most tempting to skip, especially with AI assistance. Yet it remains one of the most valuable practices for deeply understanding a problem and its solution. 
## The Importance of Manual Examples ### Why Work By Hand? Working through examples manually before coding serves several critical purposes: 1. Builds intuition about how the solution should behave 2. Reveals hidden complexity that might not be apparent in the problem description 3. Creates test cases that can validate your eventual code 4. Identifies edge cases that require special handling 5. Provides a reference implementation against which to check automated solutions This step is especially important in the age of AI coding assistants. When AI can generate code instantly, manually working through examples becomes the primary way to maintain a deep understanding of the solutions you implement. ### Mental Models vs. Pattern Matching There are two fundamentally different ways to solve programming problems: Pattern matching involves recognising a problem as similar to something you've seen before and applying a known solution template. This approach is quick but shallow, and it doesn't build transferable understanding. Mental modelling involves constructing a detailed understanding of the problem dynamics and working through how a solution would operate step by step. This approach builds deeper understanding that transfers to novel problems. AI excels at pattern matching but struggles with novel problems that require creative mental models. By working problems by hand, you develop the mental modelling skills that complement AI's pattern-matching strengths. ## Tracing Algorithms Step by Step ### The Tracing Process Tracing an algorithm involves simulating its execution with concrete inputs, tracking the state changes at each step: 1. Choose representative input values 2. Initialize variables as they would be in the algorithm 3. Follow the logic step by step, recording state changes 4. Note intermediate values at key points 5. 
Verify the final output matches expectations This process mirrors what a debugger does but happens in your mind or on paper, building a deeper understanding of the algorithm's behaviour. ### Visualisation Techniques Different problems benefit from different visualisation approaches: #### Tabular Traces For iterative algorithms, use a table where: - Each row represents one iteration - Columns show the values of key variables - Special columns can highlight important conditions or state changes | Iteration | Current Value | Accumulator | Condition Check | |-----------|--------------|-------------|-----------------| | 1 | 5 | 5 | 5 > 0? Yes | | 2 | 4 | 9 | 4 > 0? Yes | | 3 | 3 | 12 | 3 > 0? Yes | #### State Diagrams For algorithms that manipulate data structures, draw the structure at key points: #### Flow Traces For complex conditional logic, trace the path through the algorithm: ## Testing Assumptions Working by hand is the perfect opportunity to test your assumptions about the problem and solution approach. ### Assumption Categories Be explicit about testing these types of assumptions: #### Data Assumptions - Valid input ranges - Expected data formats - Special values (null, empty, extreme values) - Size limitations #### Algorithmic Assumptions - Time and space complexity requirements - Order sensitivity (does order matter?) - Uniqueness requirements (are duplicates allowed?) - Stability expectations (preserving order of equal elements) #### Output Assumptions - Expected format of results - Handling of error conditions - Ties or multiple valid answers ### The "What If?" Technique For each assumption, ask "What if this assumption is violated?" and trace through an example: - What if the input is empty? - What if values are negative? - What if there are duplicate entries? - What if the input is already sorted? This process helps identify edge cases that your solution must handle. 
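The tabular trace shown earlier corresponds to a loop like the following sketch (a countdown accumulator, used here purely as an illustration); each pass through the loop produces one row of the table:

```python
def countdown_sum(n):
    """Sum n + (n-1) + ... + 1, printing one trace row per iteration."""
    accumulator = 0
    iteration = 0
    current = n
    while current > 0:              # the "Condition Check" column
        iteration += 1
        accumulator += current      # the "Accumulator" column
        print(f"| {iteration} | {current} | {accumulator} | {current} > 0? Yes |")
        current -= 1
    return accumulator

print(countdown_sum(5))  # prints the trace rows, then 15
```

Running it with `n = 5` reproduces the first three rows of the table (current 5/4/3, accumulator 5/9/12) and continues to the final result.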
## Sharing Your Manual Work with AI Although working by hand is a primarily human activity, AI can still play a valuable role in the process. ### Verification of Manual Solutions After working through examples yourself, use AI to verify your approach: ### Exploring Alternative Approaches AI can suggest different ways to approach the manual solution: ### Identifying Edge Cases AI can help identify edge cases to trace manually: ## Practical Exercise: Working By Hand Let's apply Step 3 to our ongoing example problem: > Write a function to find the longest palindromic substring in a given string. ### Chosen Example: "babad" Let's trace through the process of finding the longest palindromic substring: 1. Start with the input string: "babad" 2. Examine each potential center position: For single-character centers (odd-length palindromes): - Center at 'b' (index 0): - Check "b" -> palindrome of length 1 - Check "ba" -> not a palindrome - Center at 'a' (index 1): - Check "a" -> palindrome of length 1 - Check "bab" -> palindrome of length 3 - Further expansion goes out of bounds -> stop - Center at 'b' (index 2): - Check "b" -> palindrome of length 1 - Check "aba" -> palindrome of length 3 - Check "babad" -> not a palindrome -> stop - Center at 'a' (index 3): - Check "a" -> palindrome of length 1 - Check "bad" -> not a palindrome - Center at 'd' (index 4): - Check "d" -> palindrome of length 1 - Check "ad" -> not a palindrome For two-character centers (even-length palindromes): - Between 'b' and 'a' (indices 0-1): - Check "ba" -> not a palindrome - Between 'a' and 'b' (indices 1-2): - Check "ab" -> not a palindrome - Between 'b' and 'a' (indices 2-3): - Check "ba" -> not a palindrome - Between 'a' and 'd' (indices 3-4): - Check "ad" -> not a palindrome From our manual trace, we can see that the longest palindromic substrings are "bab" and "aba", both of length 3.
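Although implementation properly belongs to Step 5, the trace above maps almost line-for-line onto an expand-around-center function. A sketch for reference (consistent with the manual trace, but still only one possible implementation):

```python
def longest_palindromic_substring(s):
    """Return a longest palindromic substring of s via center expansion."""
    if not s:
        return ""
    start, max_length = 0, 1

    def expand(left, right):
        nonlocal start, max_length
        # Expand outward while in bounds and the characters match
        while left >= 0 and right < len(s) and s[left] == s[right]:
            if right - left + 1 > max_length:
                start, max_length = left, right - left + 1
            left -= 1
            right += 1

    for i in range(len(s)):
        expand(i, i)      # odd-length: single-character center
        expand(i, i + 1)  # even-length: between-character center
    return s[start:start + max_length]

print(longest_palindromic_substring("babad"))  # prints "bab" ("aba" is equally valid)
```

Notice how every branch in the code corresponds to a line in the hand trace; that correspondence is exactly what working by hand buys you.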
### Edge Case: Empty String Input: "" Result: "" (empty string) ### Edge Case: Single Character Input: "a" Result: "a" (the character itself) ### Edge Case: All Same Characters Input: "aaaaa" Result: "aaaaa" (the entire string) ### Edge Case: No Palindromes Longer Than 1 Input: "abcd" Result: "a" (any single character) ### Validating Our Approach with AI You: AI: ## Insights from Working By Hand By manually tracing through examples, we've gained several key insights: 1. We need to check both odd-length palindromes (single character center) and even-length palindromes (between character center) 2. The process of expanding outward from a center is the same for both cases 3. We need to keep track of the longest palindrome found so far 4. Boundary checking is important to avoid going out of bounds 5. For strings with multiple palindromes of the same length, returning any one is acceptable These insights will guide our pseudocode development in the next step. ## Key Takeaways - Working problems by hand builds a deep understanding that can't be replicated by simply using AI to generate solutions - Manual tracing helps identify edge cases and test assumptions - Different visualisation techniques (tables, diagrams, flow traces) suit different types of problems - AI can complement manual work by verifying solutions and suggesting edge cases - The insights gained from manual examples inform all subsequent steps of the methodology ============================================================ SOURCE: chapters/pseudocode.qmd ============================================================ # Pseudocode (Step 4) > Pseudocode is thinking made visible. The syntax can wait. The fourth step in our methodology, writing pseudocode, bridges the conceptual understanding developed in earlier steps to the concrete implementation that follows. Pseudocode provides a language-agnostic blueprint for your solution, focusing on logic and algorithms rather than syntax. 
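As a small preview, the palindrome solution planned in the previous chapter might be expressed in pseudocode like this (one possible rendering; pseudocode has no single standard):

```
FUNCTION longest_palindrome(s):
    IF s is empty:
        RETURN ""
    SET start TO 0
    SET max_length TO 1
    FOR EACH index i IN s:
        EXPAND around center (i, i)        // odd-length palindromes
        EXPAND around center (i, i + 1)    // even-length palindromes
        IF a longer palindrome was found:
            UPDATE start AND max_length
    RETURN substring of s starting at start with length max_length
```

Note how this captures the logic from the manual trace without committing to any language's syntax.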
## Writing Effective Pseudocode ### What Makes Good Pseudocode? Effective pseudocode strikes a balance between abstraction and detail: - Clear and readable - understandable by both humans and AI - Structured - uses indentation and organisation to show control flow - Language-agnostic - avoids specific programming language syntax - Focused on logic - emphasises algorithmic thinking over implementation details - Complete - addresses all requirements and edge cases - Concise - eliminates unnecessary details The goal is to create a plan concrete enough to guide implementation but abstract enough to focus on the solution's logic rather than syntactic details. ### Common Pseudocode Conventions While pseudocode isn't standardised, these conventions enhance clarity: #### Control Structures #### Function Definitions #### Variable Operations #### Input/Output ## Pseudocode and LLMs: A Natural Partnership Pseudocode plays a particularly important role when working with Large Language Models. It serves as a bridge between natural language ambiguity and the precision of formal programming languages, creating an ideal medium for human-AI collaboration. ### Why Pseudocode Works Well with LLMs Several factors make pseudocode especially effective for LLM interactions: 1. Structural alignment with training data - LLMs have been trained on vast amounts of programming content, including discussions of algorithms that frequently use pseudocode. This training means they have strong internal representations of pseudocode conventions. 2. Reduced ambiguity - Pseudocode provides more structure than natural language while remaining flexible, striking an ideal balance that reduces misinterpretations. 3. Focus on logic - By emphasising algorithmic thinking over syntax, pseudocode aligns with LLMs' strengths in reasoning about procedures rather than producing perfect syntax. 4. 
Activation of procedural knowledge - Research shows that LLMs have absorbed procedural knowledge from their training data. Pseudocode effectively activates this latent knowledge by providing clear procedural frameworks. 5. Medium of iterative refinement - Pseudocode serves as an excellent medium for progressive disambiguation - the process of gradually transforming ambiguous natural language into precise formal code through multiple rounds of interaction. ### Pseudocode as Disambiguation Tool One of the most significant challenges when working with LLMs is the inherent ambiguity of natural language. Pseudocode helps address this challenge by: - Providing clear structure that reduces misinterpretation - Creating a shared vocabulary for discussing algorithms - Enabling precise references to specific components or steps - Facilitating incremental refinement toward formal code As Dijkstra noted decades ago, "The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are... an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid." While pseudocode isn't fully formal, it moves us considerably in that direction. ### SudoLang: Pseudocode optimised for LLMs SudoLang represents an evolution of pseudocode specifically designed for LLM interaction. Created by Eric Elliott, it provides a structured syntax that bridges the gap between natural language and formal programming languages, optimised for human-AI collaboration. 
Key features of SudoLang include: - Simplified syntax that both humans and AI can easily understand - Declarative approach that focuses on what should happen rather than how - Named parameters that improve clarity and reduce ambiguity - Native support for modern programming patterns like functional programming and async operations - Unambiguous structure that reduces misinterpretation by AI models A simple example in SudoLang: When working with AI assistants on complex programming tasks, SudoLang can help create more precise, intentional prompts that result in higher-quality code generation. It's especially valuable when you need to communicate algorithmic intent clearly without getting lost in language-specific syntax details. ### LLMs as Pseudocode Interpreters An intriguing aspect of LLMs is their ability to act as "interpreters" for pseudocode. Unlike traditional pseudocode that serves purely as documentation, LLMs can actually process and "execute" pseudocode to generate outputs, transforming it from a planning tool into a functional programming interface. This capability enables new workflows where: 1. Humans write pseudocode expressing algorithmic intent 2. LLMs interpret this pseudocode and generate working implementations 3. Humans review, refine, and integrate these implementations This process leverages the complementary strengths of both humans and AI: human strategic thinking and algorithm design paired with AI's ability to handle implementation details across various programming languages. For complex algorithmic challenges, this partnership often produces better results than either pure natural language prompting (too ambiguous) or direct code requests (too focused on immediate implementation rather than understanding). ### Pseudocode vs. 
Comments Pseudocode differs from code comments in several important ways: | Pseudocode | Comments | |------------|----------| | Written before implementation | Added during/after implementation | | Focuses on overall approach | Explains specific code segments | | Language-independent | Tied to specific code | | Comprehensive overview | Targeted explanations | | Planning tool | Documentation tool | The best implementation often involves transforming pseudocode into commented code, preserving the planning insights in the final implementation. ## AI-Assisted Algorithm Design AI assistants can be valuable partners in developing pseudocode, offering different approaches and identifying potential issues. ### Generating Initial Pseudocode AI can help draft initial pseudocode based on your problem statement: You: AI: ### Refining Pseudocode Once you have initial pseudocode, use AI to help refine it by asking targeted questions: You: AI: ### Comparing Alternative Approaches AI can help explore different algorithmic approaches to the same problem: You: AI: ## From Pseudocode to Implementation Plan Pseudocode should eventually lead to a concrete implementation plan, outlining how to translate the logical structure into actual code. ### Adding Implementation Details Once the core logic is established in pseudocode, gradually add implementation-specific details: ### Creating Function Signatures Based on the pseudocode, define the function signatures that will be needed: ### Planning Test Coverage Use pseudocode to identify the test cases needed for comprehensive coverage: ## Comparing Alternative Approaches When faced with multiple valid algorithmic approaches, pseudocode provides a concise way to compare them before committing to implementation. ### Evaluation Criteria Evaluate pseudocode approaches based on: 1. Time complexity - theoretical performance as input size grows 2. Space complexity - memory requirements 3. 
Implementation complexity - how difficult it will be to code and debug
4. Readability and maintainability - how easily others can understand it
5. Edge case handling - robustness against unusual inputs
6. Scalability - ability to handle very large inputs or to be extended

### Structured Comparison

Create a comparison table to evaluate different approaches:

| Approach | Time Complexity | Space Complexity | Implementation Complexity | Strengths | Weaknesses |
|----------|----------------|-----------------|---------------------------|-----------|------------|
| Center Expansion | O(n²) | O(1) | Low | Intuitive, easy to implement | Less efficient for very large strings |
| Dynamic Programming | O(n²) | O(n²) | Medium | Systematic, handles all cases uniformly | Higher memory usage |
| Manacher's Algorithm | O(n) | O(n) | High | Optimal time complexity | Complex to implement and debug |

### Making an Informed Decision

Consider the context of your application:

- For educational purposes or moderate string lengths, the center expansion approach is ideal due to its simplicity and efficiency
- For production systems with very large strings, Manacher's algorithm might be worth the implementation complexity
- If memory is a significant constraint, avoid the DP approach
- If you need to process many strings repeatedly, the upfront cost of implementing Manacher's algorithm may be justified

## Key Takeaways

- Pseudocode provides a language-agnostic blueprint focusing on logic rather than syntax
- Good pseudocode strikes a balance between abstraction and detail
- AI can help generate, refine, and compare different pseudocode approaches
- Gradually add implementation-specific details as you transition from pseudocode to code
- Use pseudocode to compare alternative approaches before committing to implementation
- Pseudocode forms the basis for function signatures and test plans

============================================================
SOURCE: chapters/convert-to-code.qmd
============================================================

# Convert to Code (Step 5)

> AI is at its most useful when you already know what you want. That is not a coincidence.

The fifth step of our methodology, converting pseudocode to actual code, is where your solution takes executable form. This is often the step where AI assistance becomes most valuable, but it's also where maintaining human understanding is most critical.

## Strategies for Code Implementation

### From Pseudocode to Code: A Systematic Approach

Converting pseudocode to code should be a methodical process:

1. Start with structure - Convert the overall organisation and function definitions
2. Fill in the logic - Implement the core algorithms and control flow
3. Add error handling - Incorporate validation and exception handling
4. Optimise - Refine the implementation for efficiency and readability
5. Document - Add appropriate comments and documentation

This incremental approach ensures that you maintain control of the implementation while still leveraging AI for efficiency.

### Incremental Implementation

Rather than converting all pseudocode at once, implement in manageable chunks. Implement one function at a time:

Then integrate the pieces:

### Language-Specific Considerations

Different programming languages offer various idioms and features that can improve your implementation:

#### Python

#### JavaScript

#### Java

## Working with AI Code Suggestions

AI assistants excel at generating code, but using them effectively requires a thoughtful approach.

### Prompting for Implementation

When asking AI to implement code, provide context from previous steps. The key is to give the AI your pseudocode and design decisions so it generates code that matches what you've already reasoned through, not whatever it might invent on its own.

You:

## Bring your pseudocode to the prompt

Notice how the human restated the algorithm's key steps before asking for code.
This is Step 5 building directly on Step 4 -- the pseudocode provides constraints that keep the AI's output aligned with the solution you designed, not one the AI might prefer.

AI:

```text
Here's a clean implementation of the longest palindromic substring algorithm in Python:

def longest_palindromic_substring(s):
    """
    Find the longest palindromic substring in a given string.

    Args:
        s: Input string

    Returns:
        The longest palindromic substring within the input string
    """
    if not s:
        return ""

    start = 0       # Starting index of longest palindrome
    max_length = 1  # Length of longest palindrome

    def expand_around_center(left, right):
        """
        Expand around a potential palindrome center.

        Args:
            left: Left index of the potential palindrome center
            right: Right index of the potential palindrome center
        """
        nonlocal start, max_length
        # Expand as long as we stay in bounds and have matching characters
        while left >= 0 and right < len(s) and s[left] == s[right]:
            current_length = right - left + 1
            if current_length > max_length:
                start = left
                max_length = current_length
            # Expand outward
            left -= 1
            right += 1

    # Check every possible center position
    for i in range(len(s)):
        # Odd length palindromes (single character center)
        expand_around_center(i, i)
        # Even length palindromes (between character center)
        if i < len(s) - 1:
            expand_around_center(i, i + 1)

    # Extract and return the longest palindromic substring
    return s[start:start + max_length]
```

You:

```text
Could you refactor expand_around_center to return the palindrome bounds instead of modifying the enclosing variables with nonlocal?
```

AI:

```text
Certainly! Here's the refactored version:

def longest_palindromic_substring(s):
    if not s:
        return ""

    start = 0
    max_length = 1

    def expand_around_center(left, right):
        # Expand as long as we stay in bounds and have matching characters
        while left >= 0 and right < len(s) and s[left] == s[right]:
            left -= 1
            right += 1
        # Return the bounds of the last valid palindrome
        return left + 1, right - left - 1

    for i in range(len(s)):
        # Odd length palindromes (single character center)
        odd_start, odd_length = expand_around_center(i, i)
        if odd_length > max_length:
            start = odd_start
            max_length = odd_length

        # Even length palindromes (between character center)
        # Only check if we're not at the last character
        if i < len(s) - 1:
            even_start, even_length = expand_around_center(i, i + 1)
            if even_length > max_length:
                start = even_start
                max_length = even_length

    # Extract and return the longest palindromic substring
    return s[start:start + max_length]

The changes I've made:

1. Modified `expand_around_center` to return the palindrome bounds instead of modifying global variables directly. This makes the function more reusable and easier to test.
2. Added an explicit check to avoid trying to form even-length palindromes when at the last character, which would be unnecessary.
3.
Updated the main loop to capture the returned values from `expand_around_center` and update our tracking variables only if a longer palindrome is found.

These changes make the code more modular and eliminate the unnecessary check, while maintaining the correct functionality.
```

You:

```text
In the implementation, you're returning a tuple and then unpacking it with:

odd_start, odd_length = expand_around_center(i, i)

I'm not familiar with this Python syntax. Could you explain how tuple unpacking works and why it's useful here?
```

AI:

```text
Sure! This is called "tuple unpacking" or "multiple assignment" in Python, and it's a very convenient feature.

### How Tuple Unpacking Works

When a function returns multiple values in Python, they're automatically packaged as a tuple. Tuple unpacking allows you to assign each value in the tuple to a separate variable in a single line of code.

In this case, `expand_around_center(i, i)` returns a tuple of two values: `(start_index, length)`. The unpacking syntax:

odd_start, odd_length = expand_around_center(i, i)

is equivalent to:

result = expand_around_center(i, i)
odd_start = result[0]
odd_length = result[1]

### Why It's Useful Here

Tuple unpacking is particularly useful in this algorithm because:

1. Clarity: It makes it immediately clear what values we're getting back from the function
2. Conciseness: It reduces three lines of code to one, making the code more readable
3. Intent: It shows that these two values are related and come from the same operation

This pattern is very common in Python, especially when functions naturally produce multiple related values. It's considered a Pythonic way to handle multiple return values without creating custom classes or using more complex data structures.

Would you like me to show you some other common patterns for tuple unpacking in Python?
```
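The unpacking idioms the AI describes are worth trying for yourself. A minimal sketch of the most common patterns (the `min_max` helper here is illustrative, not part of the case study):

```python
def min_max(values):
    """Return the smallest and largest values as a (min, max) tuple."""
    return min(values), max(values)

# Unpack the returned tuple into two named variables in one step
low, high = min_max([3, 1, 4, 1, 5])
print(low, high)  # 1 5

# Swapping two variables is another everyday unpacking idiom
low, high = high, low

# A starred target collects the remainder of a sequence
first, *rest = [10, 20, 30, 40]
print(first, rest)  # 10 [20, 30, 40]
```

The same syntax works in `for` loops over pairs, such as `for key, value in d.items()`, which is why the pattern appears so often in idiomatic Python.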
```python
# Non-idiomatic Python
i = 0
while i < len(items):
    print(items[i])
    i += 1

# Idiomatic Python
for item in items:
    print(item)
```

```python
def longest_palindromic_substring(s):
    """
    Find the longest palindromic substring in the given string.

    Examples:
    >>> longest_palindromic_substring("babad")
    "bab"  # "aba" would also be a valid return value
    >>> longest_palindromic_substring("cbbd")
    "bb"
    """
    # Implementation...
```

```text
FUNCTION findLongestPalindromicSubstring(string)
    IF string is empty THEN
        RETURN empty string
    END IF

    SET startIndex = 0
    SET maxLength = 1

    FUNCTION expandAroundCenter(left, right)
        WHILE left >= 0 AND right < length of string AND string[left] == string[right]
            SET currentLength = right - left + 1
            IF currentLength > maxLength THEN
                SET maxLength = currentLength
                SET startIndex = left
            END IF
            DECREMENT left
            INCREMENT right
        END WHILE
    END FUNCTION

    FOR i = 0 TO length of string - 1
        // Check odd length palindromes
        expandAroundCenter(i, i)
        // Check even length palindromes
        IF i < length of string - 1 THEN
            expandAroundCenter(i, i + 1)
        END IF
    END FOR

    RETURN substring of string from startIndex with length maxLength
END FUNCTION
```

```python
def longest_palindromic_substring(s):
    """
    Find the longest palindromic substring in the given string.

    Examples:
    >>> longest_palindromic_substring("babad")
    "bab"  # Note: "aba" would also be a valid return value
    >>> longest_palindromic_substring("cbbd")
    "bb"
    """
    # Handle empty input
    if not s:
        return ""

    start = 0
    max_length = 1

    # Helper function to expand around a center
    def expand_around_center(left, right):
        """
        Expand around a potential palindrome center and find the longest palindrome.
        Args:
            left (int): Left position of the center
            right (int): Right position of the center

        Returns:
            tuple: (start_index, length) of the palindrome
        """
        # Expand as long as characters match and we're in bounds
        while left >= 0 and right < len(s) and s[left] == s[right]:
            left -= 1
            right += 1
        return left + 1, right - left - 1

    for i in range(len(s)):
        # Check odd-length palindrome
        odd_start, odd_length = expand_around_center(i, i)
        if odd_length > max_length:
            start = odd_start
            max_length = odd_length

        # Check even-length palindrome
        if i < len(s) - 1:
            even_start, even_length = expand_around_center(i, i + 1)
            if even_length > max_length:
                start = even_start
                max_length = even_length

    # Return the longest palindromic substring
    return s[start:start + max_length]
```

```python
# Test with various inputs
test_cases = [
    "",         # Empty string
    "a",        # Single character
    "aa",       # Two identical characters
    "abc",      # No palindromes longer than 1
    "babad",    # Odd-length palindrome
    "cbbd",     # Even-length palindrome
    "racecar",  # Entire string is a palindrome
    "aabbaa"    # Multiple palindromes
]

for test in test_cases:
    result = longest_palindromic_substring(test)
    print(f"Input: {test}, Output: {result}")
```

## Key Takeaways

- Converting pseudocode to code should be an incremental, systematic process
- Take advantage of language-specific features and idioms for cleaner implementations
- When using AI-generated code, always evaluate it critically for correctness, efficiency, and readability
- Focus on writing idiomatic code with proper error handling and documentation
- Use AI to explain unfamiliar syntax or patterns rather than just accepting them
- Test your implementation against a variety of inputs, especially edge cases

============================================================
SOURCE: chapters/test-with-data.qmd
============================================================

# Test with Data (Step 6)

> Code that has not been tested with edge cases is not finished. It is a hypothesis.

The final step in our methodology, testing with a variety of data, is where we verify that our solution works correctly in all scenarios. This critical step ensures that our implementation meets the requirements and handles edge cases properly.
## Comprehensive Testing Strategies ### The Importance of Thorough Testing Testing isn't just about confirming that your code works. It's about discovering where and how it might fail. Thorough testing: 1. Validates correctness - Ensures the solution produces the expected results 2. Reveals edge case issues - Identifies problems with unusual inputs 3. Provides confidence - Gives you assurance that the solution is robust 4. Documents behaviour - Shows how the code responds to different inputs 5. Prevents regressions - Helps maintain correctness as code evolves Even with AI assistance in implementation, testing remains primarily a human responsibility. The most effective developers use a structured approach to testing rather than ad hoc verification. ### Types of Test Cases A comprehensive testing strategy includes several types of test cases: #### Functional Tests These verify that the code produces the correct output for standard inputs: #### Edge Case Tests These check how the code handles boundary conditions and unusual inputs: #### Stress Tests These evaluate behaviour with large or complex inputs: #### Performance Tests These measure execution time and resource usage: #### Regression Tests These ensure that fixes for reported issues don't break existing functionality: ## Edge Case Identification Identifying all relevant edge cases is a crucial aspect of thorough testing. 
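Before cataloguing edge cases, it helps to see how quickly the test types above become executable checks. A minimal sketch (the `mean` helper here is a hypothetical example, not part of the palindrome case study):

```python
import unittest

def mean(values):
    """Arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("mean() of empty sequence")
    return sum(values) / len(values)

class TestMean(unittest.TestCase):

    def test_standard_input(self):
        """Functional test: a typical input produces the expected result."""
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_empty_input(self):
        """Edge case test: empty input should raise, not return garbage."""
        with self.assertRaises(ValueError):
            mean([])

    def test_single_element(self):
        """Edge case test: minimal input."""
        self.assertEqual(mean([7]), 7)

    def test_negative_values(self):
        """Edge case test: negative values."""
        self.assertEqual(mean([-2, 2]), 0)

    def test_large_input(self):
        """Stress test: a large input still gives a correct result."""
        self.assertEqual(mean([5] * 100_000), 5)

if __name__ == "__main__":
    unittest.main()
```

Each test method maps to one of the categories above; regression tests would follow the same shape, pinned to a previously reported bug.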
### Common Edge Case Categories Different problems have different edge cases, but these categories are frequently relevant: #### Input Size Edge Cases - Empty inputs (e.g., empty strings, empty arrays) - Minimal inputs (e.g., single-character strings, single-element arrays) - Very large inputs (testing performance and boundary limits) #### Value Edge Cases - Zero values - Negative values - Extremely large or small values - Special values (e.g., NaN, infinity in floating-point operations) #### Format Edge Cases - Valid but unusual formats - Malformed inputs - Mixed case or special characters in strings - Localization issues (e.g., different decimal separators) #### Boundary Edge Cases - First/last elements of collections - Values at the exact boundary of allowed ranges - Off-by-one scenarios #### Algorithm-Specific Edge Cases For our palindrome example: - No palindromes longer than 1 character - Entire string is a palindrome - Multiple palindromes of the same maximum length - Palindromes at the start or end of the string - Even vs. odd-length palindromes ### Using AI to Identify Edge Cases AI assistants can help identify edge cases you might not have considered: You: AI: ## AI-Assisted Test Generation AI can be particularly helpful in generating both test cases and test code. 
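One useful complement to AI-generated tests is a brute-force reference implementation to cross-check edge cases against. A sketch (this O(n³) helper is an illustrative oracle, not the chapter's implementation):

```python
def brute_force_longest_palindrome(s):
    """O(n^3) reference oracle: check every substring, longest first."""
    for length in range(len(s), 0, -1):
        for i in range(len(s) - length + 1):
            sub = s[i:i + length]
            if sub == sub[::-1]:
                return sub
    return ""

# Exercise the algorithm-specific edge cases listed above
edge_cases = ["", "a", "abc", "racecar", "aabbaa", "abba", "xyzabba"]
for case in edge_cases:
    result = brute_force_longest_palindrome(case)
    assert result == result[::-1]  # the result is itself a palindrome
    assert result in case or case == ""  # and it occurs in the input
```

Because the oracle is too slow for production but trivially correct, any disagreement with the optimised implementation on the same input points to a bug worth investigating.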
### Generating Test Cases Ask AI to generate diverse test inputs: You: AI: ### Generating Test Code AI can also help generate the actual test code: You: AI: ```text Here's a comprehensive unit test class using Python's `unittest` framework to test the longest palindromic substring function with the test cases I suggested: import unittest # Import your function from palindrome import longest_palindromic_substring class TestLongestPalindromicSubstring(unittest.TestCase): def test_empty_string(self): """Test with an empty string.""" self.assertEqual(longest_palindromic_substring(""), "") def test_single_character(self): """Test with a single character.""" self.assertEqual(longest_palindromic_substring("a"), "a") def test_no_long_palindromes(self): """Test with a string that has no palindromes longer than 1 character.""" result = longest_palindromic_substring("abcd") # Any single character is a valid answer self.assertTrue(result in ["a", "b", "c", "d"]) self.assertEqual(len(result), 1) def test_multiple_valid_answers(self): """Test with a string that has multiple valid answers of the same length.""" result = longest_palindromic_substring("babad") self.assertTrue(result in ["bab", "aba"]) self.assertEqual(len(result), 3) def test_even_length_palindrome(self): """Test with a string containing an even-length palindrome.""" self.assertEqual(longest_palindromic_substring("cbbd"), "bb") def test_entire_string_is_palindrome(self): """Test with a string that is entirely a palindrome.""" self.assertEqual(longest_palindromic_substring("racecar"), "racecar") def test_multiple_palindromes(self): """Test with a string containing multiple palindromes of different lengths.""" self.assertEqual(longest_palindromic_substring("aacabdkacaa"), "aca") def test_repeated_characters(self): """Test with a string of repeated characters.""" self.assertEqual(longest_palindromic_substring("aaaaa"), "aaaaa") def test_palindrome_at_beginning(self): """Test with a palindrome at the beginning of the 
string."""
        self.assertEqual(longest_palindromic_substring("civicduty"), "civic")

    def test_even_length_entire_string(self):
        """Test with an even-length palindrome that is the entire string."""
        self.assertEqual(longest_palindromic_substring("abcdeedcba"), "abcdeedcba")

    def test_performance_with_long_string(self):
        """Test performance with a very long string containing a long palindrome."""
        long_input = "a" * 1000 + "b" + "a" * 1000
        result = longest_palindromic_substring(long_input)
        self.assertEqual(len(result), 2001)
        self.assertEqual(result, long_input)

if __name__ == '__main__':
    unittest.main()

This test class includes:

1. Tests for all the cases I mentioned earlier
2. Clear test method names that describe what's being tested
3. Docstrings explaining each test
4. Appropriate assertions for each case
5. A performance test with a large input

To use this, save it as `test_palindrome.py` and make sure your implementation is in a file named `palindrome.py` with the function `longest_palindromic_substring`. Then run it with:

python -m unittest test_palindrome.py

If you're using a different file structure or function name, you'll need to adjust the import statement accordingly.
```

You:

```text
My test case for "aacabdkacaa" is failing. The expected output is "aca" but my function returns "aa". Here's my implementation:

def longest_palindromic_substring(s):
    if not s:
        return ""
    start = 0
    max_length = 1
    for i in range(len(s)):
        # Check odd length palindromes
        left, right = i, i
        while left >= 0 and right < len(s) and s[left] == s[right]:
            if right - left + 1 > max_length:
                max_length = right - left + 1
                start = left
            left -= 1
            right += 1
        # Check even length palindromes
        left, right = i, i + 1
        if right < len(s) and s[left] == s[right]:
            while left >= 0 and right < len(s) and s[left] == s[right]:
                if right - left + 1 > max_length:
                    max_length = right - left + 1
                    start = left
                left -= 1
                right += 1
    return s[start:start + max_length]

Can you help me identify what's wrong?
```

AI:

```text
I see the issue in your implementation when testing with "aacabdkacaa". Let me walk through this step by step.
In the string "aacabdkacaa", the palindromes are:
- "aa" at the beginning (length 2)
- "a" (multiple single characters, length 1)
- "aca" in the middle (length 3)
- "acaa" at the end (length 4) ← Wait, this isn't a palindrome!

Ah, I found the issue! The string "acaa" is not a palindrome (it would read "aaca" backwards). Let me trace through your code to see where it's going wrong.

When checking the string "aacabdkacaa", your function should find "aca" as the longest palindrome, which has length 3. But it's returning "aa", which has length 2.

Let's carefully trace through the string "aacabdkacaa":
- Palindromes at position 0 ('a'): "a", "aa"
- Palindromes at position 1 ('a'): "a", "aa", "aaca" (wait, this isn't a palindrome!)

I think I've found the bug! In your implementation, when you're expanding around a center, you're updating `max_length` and `start` before confirming the substring is actually a palindrome.

The issue is in both of your while loops. Let's focus on the even-length check. When you check for even-length palindromes starting at position i, you immediately enter the while loop if s[i] == s[i+1]. But then you update `max_length` and `start` before checking if the next characters also match.

Here's the correction needed:

def longest_palindromic_substring(s):
    if not s:
        return ""
    start = 0
    max_length = 1
    for i in range(len(s)):
        # Check odd length palindromes
        left, right = i, i
        while left >= 0 and right < len(s) and s[left] == s[right]:
            current_length = right - left + 1
            if current_length > max_length:
                max_length = current_length
                start = left
            left -= 1
            right += 1
        # Check even length palindromes
        left, right = i, i + 1
        while left >= 0 and right < len(s) and s[left] == s[right]:
            current_length = right - left + 1
            if current_length > max_length:
                max_length = current_length
                start = left
            left -= 1
            right += 1
    return s[start:start + max_length]

The key change is adding a `current_length` variable to calculate the length inside the while loop.
This ensures you're using the correct length for comparison. This bug would cause your function to sometimes return shorter palindromes when longer ones exist, especially in cases with repeated characters.
```

```python
import unittest
import time

# Import our implementation
from palindrome import longest_palindromic_substring

class TestLongestPalindromicSubstring(unittest.TestCase):

    def test_basic_functionality(self):
        """Test basic functionality with standard inputs."""
        self.assertTrue(longest_palindromic_substring("babad") in ["bab", "aba"])
        self.assertEqual(longest_palindromic_substring("cbbd"), "bb")

    def test_edge_cases(self):
        """Test edge cases."""
        # Empty string
        self.assertEqual(longest_palindromic_substring(""), "")
        # Single character
        self.assertEqual(longest_palindromic_substring("a"), "a")
        # Two identical characters
        self.assertEqual(longest_palindromic_substring("aa"), "aa")
        # No palindromes longer than 1
        result = longest_palindromic_substring("abcd")
        self.assertTrue(len(result) == 1)
        self.assertTrue(result in ["a", "b", "c", "d"])

    def test_comprehensive_cases(self):
        """Test a variety of palindrome patterns."""
        # Entire string is a palindrome
        self.assertEqual(longest_palindromic_substring("racecar"), "racecar")
        # Palindrome at the beginning
        self.assertEqual(longest_palindromic_substring("abbaxyz"), "abba")
        # Palindrome at the end
        self.assertEqual(longest_palindromic_substring("xyzabba"), "abba")
        # Palindrome in the middle
        self.assertEqual(longest_palindromic_substring("xyzabbauvw"), "abba")
        # Multiple palindromes
        self.assertEqual(longest_palindromic_substring("aacabdkacaa"), "aca")

    def test_multiple_palindromes(self):
        """Test strings with multiple palindromes."""
        # Multiple palindromes of different lengths
        self.assertEqual(longest_palindromic_substring("abcbaxyzzyxabc"), "xyzzyx")
        # Multiple palindromes of the same length
        result = longest_palindromic_substring("abaqqcdc")
        self.assertTrue(result in ["aba", "cdc"])

    def test_performance(self):
        """Test performance with large inputs."""
        # Long string with long palindrome
        long_input = "a" * 500 + "b" + "a" * 500
        start_time = time.time()
        result = longest_palindromic_substring(long_input)
        end_time = time.time()
        self.assertEqual(len(result), 1001)
        self.assertTrue(end_time - start_time < 1.0)

if __name__ == '__main__':
    unittest.main()
```

```python
def longest_palindromic_substring(s):
    if not s:
        return ""

    start = 0
    max_length = 1

    def expand_around_center(left, right):
        # Expand as long as characters match and we're in bounds
        while left >= 0 and right < len(s) and s[left] == s[right]:
            left -= 1
            right += 1
        return left + 1, right - left - 1

    for i in range(len(s)):
        # Check odd length palindromes
        odd_start, odd_length = expand_around_center(i, i)
        if odd_length > max_length:
            start = odd_start
            max_length = odd_length

        # Check even length palindromes
        if i < len(s) - 1:
            even_start, even_length = expand_around_center(i, i + 1)
            if even_length > max_length:
                start = even_start
                max_length = even_length

    return s[start:start + max_length]
```

```text
......
----------------------------------------------------------------------
Ran 6 tests in 0.123s

OK
```

## Key Takeaways

- Thorough testing is essential, even with AI-assisted implementation
- A comprehensive testing strategy includes functional tests, edge cases, stress tests, performance tests, and regression tests
- AI can help identify edge cases and generate test code
- Test failures provide valuable learning opportunities
- The combination of human oversight and AI assistance leads to more robust solutions

============================================================
SOURCE: chapters/intentional-prompting-patterns.qmd
============================================================

# Intentional Prompting Patterns

> A good pattern is not a script. It is a starting point that adapts to the problem in front of you.

## The Art of Guiding AI

Effective collaboration with AI assistants requires more than just asking for solutions. It demands a thoughtful approach to crafting prompts that guide the AI toward helpful, educational responses. This chapter explores key patterns for intentional prompting that maximise the learning and development value of AI interactions.

## Prompt Engineering: A Foundational Discipline

Prompt engineering has emerged as a critical field in AI interaction, focusing on how to craft precise instructions that help AI models produce accurate, relevant, and contextually appropriate responses.
Beyond casual interaction, it encompasses systematic techniques for improving AI output quality.

### From Prompt Engineering to Intentional Prompting

It's important to understand the relationship between prompt engineering and intentional prompting:

Prompt engineering is primarily concerned with getting optimal outputs from AI systems. It focuses on crafting the right words, examples, and instructions to elicit high-quality responses from language models. Prompt engineers develop expertise in understanding model behaviours, leveraging context windows effectively, and using specialised techniques to guide AI outputs.

Intentional prompting incorporates prompt engineering techniques but embeds them within a comprehensive methodology for approaching programming tasks. While prompt engineering asks "How can I get the best output from this AI?", intentional prompting asks "How can I use this AI as part of a thoughtful development process that maintains my understanding and control?"

The distinction becomes clearer when considering how each approach would handle a complex programming task:

| Aspect | Prompt Engineering Approach | Intentional Prompting Approach |
|--------|---------------------------|------------------------------|
| Initial Task Analysis | Optimise prompt for detailed requirements | Follow Steps 1-2: Restate problem and identify inputs/outputs |
| Problem Understanding | Focus on conveying requirements clearly to AI | Follow Step 3: Work through examples by hand to build understanding |
| Solution Design | Craft prompts to generate complete solutions | Follow Step 4: Create pseudocode before implementation |
| Code Generation | Refine prompts until satisfactory code is produced | Follow Step 5: Use AI to implement pseudocode while maintaining understanding |
| Verification | Prompt AI to validate generated code | Follow Step 6: Rigorously test with data, especially edge cases |
| Learning Outcome | Improvement in prompt crafting skills | Improvement in both programming and AI collaboration skills |

Intentional prompting doesn't replace prompt engineering. Rather, it integrates its techniques within a broader approach that preserves human agency, understanding, and skill development.

### Core Prompt Engineering Techniques

#### Zero-Shot Prompting

Zero-shot prompting instructs an AI to perform a task without providing examples within the prompt. This technique leverages the model's pre-existing knowledge to generate responses to novel tasks.

Example:

Zero-shot prompting works well for common tasks where the AI has extensive training data, but may struggle with specialised or complex tasks.

#### Few-Shot Prompting

This technique provides the model with one or more examples of expected input-output pairs before presenting the actual task. Examples help the model understand the desired format and approach.

Example:

Few-shot prompting is particularly valuable for tasks with specific formatting requirements or uncommon patterns.

#### Chain-of-Thought Prompting

Chain-of-Thought (CoT) prompting encourages the model to break down complex reasoning into intermediate steps, leading to more comprehensive and accurate outputs. This technique mimics human reasoning processes.

Example:

Chain-of-thought prompting significantly improves performance on problems requiring multi-step reasoning or algorithmic thinking.

#### Role-Based Prompting

Role-based prompting assigns a specific professional or character role to the AI, which helps frame its responses within a particular domain of expertise or perspective.

Example:

This technique helps orient the AI toward specific terminology, frameworks, and priorities relevant to the assigned role.

## Types of Intentional Prompts

### Foundation Prompts

Foundation prompts establish the baseline requirements for a programming task. Instead of just asking for a solution, these prompts set the stage for a productive dialogue.
Examples:

Foundation prompts should provide enough context for the AI to understand the goal without being overly prescriptive about implementation details.

### Clarification Prompts

Clarification prompts refine requirements and explore edge cases before diving into implementation.

Examples:

These prompts encourage thinking about requirements thoroughly before committing to code, a practice that prevents rework and bugs.

### Scaffolding Prompts

Scaffolding prompts support learning by breaking down complex concepts into understandable components.

Examples:

These prompts transform the AI from a code generator into a tutor that helps build deeper understanding.

### Challenge Prompts

Challenge prompts deliberately introduce programming challenges to test understanding and explore potential issues.

Examples:

Challenge prompts help develop critical thinking about code rather than just accepting first solutions.

### Refinement Prompts

Refinement prompts push for code improvements based on best practices and efficiency considerations.

Examples:

These prompts help develop an eye for quality and foster continuous improvement.

### Error Induction Prompts

Error induction prompts intentionally guide the AI toward making specific mistakes to explore error handling and debugging processes.

Examples:

These prompts create valuable learning opportunities by examining potential failure modes.

## Effective Prompting Patterns

### The "What If?" Pattern

1. Get working code from the AI
2. Ask "What if [edge case]?"
3. Evaluate the AI's solution against your understanding
4. Repeat with increasingly complex edge cases

This pattern systematically explores the boundaries of a solution, building robustness and understanding.

### The Incremental Building Pattern

1. Start with a minimally viable solution
2. Add one feature at a time
3. Integrate and test after each addition

This pattern mirrors agile development practices, keeping the development process manageable and focused.
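As a concrete (and entirely hypothetical) illustration of the Incremental Building Pattern, here is how a small word-count helper might grow one feature at a time, with a check after each step:

```python
def word_count_v1(text):
    """Step 1: minimally viable solution -- split on whitespace and count."""
    return len(text.split())

# Integrate and test before adding anything else
assert word_count_v1("the quick brown fox") == 4

def word_count_v2(text):
    """Step 2: add one feature -- ignore surrounding punctuation."""
    words = [w.strip(".,;:!?") for w in text.split()]
    return len([w for w in words if w])

# Test again after the single new feature
assert word_count_v2("Hello, world!") == 2
assert word_count_v2("...") == 0  # a punctuation-only token is not a word
```

Each version is small enough to review completely before the next feature is requested, which is exactly what keeps the conversation with an AI assistant manageable.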
### The Deliberate Error Pattern 1. Let the AI generate a solution 2. Identify a potential issue (even if the AI didn't make the error) 3. Ask: "Is there a problem with how this handles [specific case]?" 4. Use the discussion to deepen understanding This pattern develops debugging skills and critical evaluation of code. ### The Comparative Analysis Pattern 1. Ask the AI to implement a solution two different ways 2. Request a comparison of trade-offs between approaches 3. Make an informed decision based on the analysis This pattern builds judgment about different implementation strategies. ## Advanced Prompt Engineering Strategies Beyond the basic techniques described earlier, several advanced prompt engineering strategies can further enhance your interactions with AI coding assistants: ### Context-Enhanced Prompting This technique involves providing rich background information to help the AI generate more appropriate and contextually relevant responses. Example: Context-enhanced prompting is particularly valuable when working on components of larger systems where architectural decisions and constraints need to be considered. ### Constraint-Based Prompting By explicitly stating constraints and requirements, you can guide the AI to produce solutions that fit within your project's specific parameters. Example: This approach is especially useful for performance-critical applications or when working within specific technical limitations. ### Template-Guided Prompting Template-guided prompting provides a structural framework that the AI should follow in its response. This ensures consistency and completeness. Example: This strategy helps ensure the AI covers all necessary aspects of a problem rather than focusing only on the most obvious elements. ## Integrating Prompt Engineering with Intentional Prompting Intentional prompting isn't an alternative to prompt engineering. 
It's an evolution that incorporates prompt engineering techniques within a more comprehensive methodology. This integration allows us to leverage the power of well-crafted prompts while maintaining the educational and developmental benefits of intentional practice.

### The Symbiotic Relationship

Prompt engineering provides the tactical tools for effective AI interaction, while intentional prompting provides the strategic framework for applying these tools:

| Prompt Engineering Provides | Intentional Prompting Provides |
|----------------------------|--------------------------------|
| Techniques for precise AI instructions | Framework for maintaining human agency |
| Methods for optimising AI responses | Structure for educational development |
| Domain-specific prompting strategies | Process that builds understanding |
| Input formatting best practices | Context for when/how to apply techniques |

This relationship means that becoming skilled at intentional prompting requires developing competence in prompt engineering techniques, but applies those techniques within a thoughtful methodology that prioritises human understanding and agency.

The intentional prompting patterns we've explored can be integrated into various workflows and methodologies. While they align particularly well with the six-step programming methodology discussed in this book, their application extends far beyond this specific framework.

### Reinforcing the Six-Step Methodology

Intentional prompting naturally reinforces our six-step programming methodology:

1. Restate the problem → Use foundation prompts and chain-of-thought techniques to clarify the problem
2. Identify input/output → Use clarification prompts and constraint-based prompting to define boundaries
3. Work by hand → Do this yourself, then use few-shot prompting to verify understanding
4. Write pseudocode → Use template-guided prompting for consistent pseudocode structure
5.
Convert to code → Apply role-based prompting (e.g., "as an expert Python developer") for idiomatic code
6. Test with data → Combine challenge prompts with context-enhanced prompting for thorough testing

By combining formal prompt engineering techniques with intentional prompting patterns within this methodology, you develop a deeper understanding of programming concepts than simply asking for complete solutions.

### Beyond the Six-Step Framework: Universal Application

While our methodology provides a structured approach, the core principles of intentional prompting apply universally across different development approaches and even beyond programming:

#### In Agile Development

- Use foundation and clarification prompts during sprint planning
- Apply challenge prompts during code reviews
- Leverage refinement prompts during refactoring sprints
- Use template-guided prompting for documentation tasks

#### In Domain-Specific Development

- Data Science: Apply chain-of-thought prompting to break down complex analytical problems
- Front-End Development: Use visual constraint prompts for UI implementation
- DevOps: Leverage role-based prompting for security auditing and configuration
- Systems Architecture: Apply comparative analysis prompts when evaluating design alternatives

#### Beyond Programming

The principles of intentional prompting extend to:

- Technical writing: Guiding AI to produce clear, structured documentation
- Educational content: Creating learning materials that build understanding
- Project management: Breaking down complex tasks and planning workflows
- Problem-solving in any domain: Applying structured thinking to any challenge

### AI as a Thoughtful Assistant

Across all these applications, intentional prompting embraces AI as a thoughtful assistant rather than a replacement for human judgment. This partnership model is based on five key principles:

1. Human-led workflows: The human sets goals, defines parameters, and makes the final decisions
2.
Critical evaluation: Thoughtfully evaluating AI suggestions based on expertise and context
3. Iterative collaboration: Engaging in a refinement process where AI suggestions improve through feedback
4. Domain-appropriate trust: Recognising where AI excels versus where human judgment is essential
5. Skill complementarity: Using AI for appropriate tasks while keeping human focus on creative and strategic aspects

By applying these principles across any methodology or domain, intentional prompting creates a collaborative relationship that leverages both human and AI strengths while overcoming their respective limitations.

## Measuring Your Progress

As you practice intentional prompting, you can track your growth:

Beginner Level:

- You can identify obvious errors in AI-generated code
- You depend on the AI to explain most concepts
- Your prompts are simple and general

Intermediate Level:

- You can spot subtle logical errors
- You ask about specific programming concepts
- Your prompts include anticipated edge cases

Advanced Level:

- You strategically guide the AI toward teaching moments
- You compare multiple solution approaches
- Your prompts demonstrate computational thinking
- You question efficiency, readability, and best practices

## Case Study: Improving a Weather Data Function

Let's examine a practical example of intentional prompting that demonstrates how to guide an AI from a basic implementation to a robust solution.

### Initial Prompt

A developer starts with a simple request:

### Initial AI-Generated Code

### Analysis of the Initial Code

The developer analyses the initial code and identifies several issues:

1. No error handling - will crash if the API is down or returns an error
2. No input validation for the location parameter
3. Returns the raw API response which is complex and includes unnecessary data
4. No timeout parameter which could cause the application to hang
5.
No docstring or comments explaining how to use the function

### Follow-up Prompt with Specific Improvements

The developer then crafts a follow-up prompt that addresses each identified issue:

### Improved AI-Generated Code

### Why This Prompting Strategy Was Effective

The follow-up prompt was effective because:

1. It identified specific problems in the initial code rather than just asking for "better code"
2. It requested specific improvements with clear objectives (error handling, validation, etc.)
3. It suggested a new feature (forecast_days parameter) that made the function more useful
4. It asked for proper documentation which resulted in comprehensive docstrings

This approach guided the AI to generate a much more robust function that:

- Is properly documented
- Validates inputs
- Has comprehensive error handling
- Returns structured, processed data instead of raw API response
- Includes the new forecast functionality requested

The key to effective prompting was being specific about what needed improvement and why, rather than making vague requests for "better" code.

## Key Takeaways

1. The goal is not to get a perfect solution from the AI, but to use the interaction to deepen your understanding
2. Your programming knowledge is demonstrated by the quality of your prompts
3. Intentionally introducing challenges and constraints leads to better learning outcomes
4. The best programmers aren't those who know all the answers, but those who know how to ask the right questions
5. Specific, targeted follow-up prompts yield much better results than vague requests for improvement

============================================================
SOURCE: chapters/debugging-with-ai.qmd
============================================================

# Debugging with AI

> The best debugging conversations start with what you already know about the problem, not with "fix this."

Debugging is often one of the most challenging and time-consuming aspects of programming.
AI assistants can be powerful allies in the debugging process, helping identify and fix issues more efficiently than traditional approaches. This chapter explores how to use intentional prompting techniques specifically for debugging tasks.

## Common Debugging Scenarios

### Syntax Errors

Syntax errors occur when code violates the rules of the programming language. These are typically the easiest errors to fix, as they're caught by compilers or interpreters before the code runs. AI assistants can:

- Explain cryptic error messages in plain language
- Identify the exact location of syntax errors
- Suggest corrections based on context

Example prompt:

### Logic Errors

Logic errors occur when the code runs without errors but doesn't produce the expected results. These are often more difficult to identify and fix than syntax errors. AI assistants can:

- Trace through code execution step by step
- Identify flawed assumptions or logical gaps
- Suggest alternative approaches

Example prompt:

### Runtime Errors

Runtime errors occur during program execution and cause the program to terminate unexpectedly. These include exceptions, segmentation faults, and other crashes. AI assistants can:

- Analyse error messages and stack traces
- Identify common causes for specific exceptions
- Suggest defensive programming techniques to prevent crashes

Example prompt:

### Performance Issues

Performance issues occur when code runs correctly but takes too long to execute or consumes excessive resources.
AI assistants can:

- Identify performance bottlenecks
- Suggest algorithmic improvements
- Recommend more efficient data structures or libraries

Example prompt:

## Prompt Templates for Debugging

### Error Diagnosis Template

When you encounter an error message:

### Code Review Template

When your code runs but doesn't behave as expected:

### Step-by-Step Tracing Template

For understanding exactly where logic goes wrong:

### Performance Debugging Template

For optimising slow code:

## Effective Debugging Workflows

### The Divide and Conquer Approach

When debugging complex issues, breaking the problem down is often the most effective strategy:

1. Isolate the problem:
2. Create a minimal reproduction:
3. Binary search the code:

### The Hypothesis Testing Approach

Debugging by forming and testing specific hypotheses:

1. Form a hypothesis:
2. Design a test:
3. Interpret results:

### The Comparative Analysis Approach

Debugging by comparing working and non-working code:

1. Identify differences:
2. Incremental changes:
3. Reference implementation:

## Understanding Error Messages with AI

Error messages can often be cryptic, especially for beginners. AI assistants can translate these messages into actionable information.

### Anatomy of Error Messages

When sharing error messages with AI, include:

1. The error type/name
2. The error message
3. The line number or location
4. The stack trace (if available)
5. The context surrounding the error

Example prompt:

### Common Error Patterns

AI can help identify patterns in errors that might indicate systemic issues:

Example prompt:

## Debugging Strategies with AI

### Rubber Duck Debugging

Rubber duck debugging involves explaining your code line by line, which often helps you spot the issue yourself. AI can serve as an advanced "rubber duck" that can also respond with insights.

Example prompt:

### Print Debugging with AI

Print debugging involves adding print statements to track variable values.
AI can suggest strategic places to add these statements.

Example prompt:

### Root Cause Analysis

AI can help trace errors back to their root causes:

Example prompt:

### Debugging AI-Generated Code

A common scenario is debugging AI-generated code that contains subtle errors. This situation provides excellent learning opportunities by requiring you to critically analyse and fix issues.

Example: Debugging a Gradebook Program

Consider this AI-generated code for a student gradebook:

Analysis of Errors:

This code contains several issues:

1. The scores are stored as strings rather than numbers
2. The `sum()` function won't work on strings
3. Using string length for division will give incorrect results

Effective Debugging Prompt:

AI Response (improved but still with issues):

Further Analysis:

The code now works but has a critical flaw: it assumes each student has exactly 3 scores.

Follow-up Debugging Prompt:

AI Response (further improved):

This iterative debugging process demonstrates key skills:

1. Identifying specific errors in AI-generated code
2. Creating targeted prompts to address those errors
3. Iteratively improving the solution through focused feedback
4. Thinking critically about assumptions and edge cases

This approach is particularly valuable for learning, as it requires understanding both the problem and the implementation details rather than just accepting whatever code the AI generates.

## Avoiding Debugging Pitfalls

### The XY Problem

The XY Problem occurs when you ask about your attempted solution (Y) rather than your actual problem (X), leading to inefficient debugging. AI can help identify this pattern.

Wrong approach:

Better approach:

### The Correlation ≠ Causation Trap

This occurs when you attribute a bug to something that happened around the same time, but isn't actually the cause.

Example:

AI:

### The Bandage Solution

This occurs when you fix the symptoms of a bug rather than the underlying cause. AI can help identify deeper issues.
Example prompt:

## Case Study: Debugging a Complex Issue

Let's walk through a complete debugging process using AI assistance:

### Initial Problem Statement

You:

### Tracing Execution

AI:

### Debugging the Test Case

You:

### Identifying the Root Cause

AI:

```text
Based on your updated information, the bubble sort implementation itself appears to be working correctly. The issue might be in how it's being called or how the data is being processed within your larger application. Let's analyse the `process_data` function:

def process_data(data_sets):
    results = []
    for data in data_sets:
        sorted_data = bubble_sort(data.copy())
        results.append(sorted_data)
    return results

This function:

1. Takes a list of data sets
2. Creates a copy of each data set
3. Sorts the copy using bubble_sort
4. Appends the sorted data to a results list
5. Returns all results

This approach seems correct. You're creating a copy of each data set before sorting, which is good practice to avoid modifying the original data.

Here are some potential issues to check:

1. Data type consistency: Are all elements in your arrays numbers? Sorting mixed types (like strings and numbers) can lead to unexpected results.
2. Mutable objects: If your arrays contain mutable objects rather than simple numbers, the sorting might be affected by reference issues.
3. Preservation of original data: Double-check that `data.copy()` is creating a proper copy. For nested structures, you might need a deep copy.
4. Concurrency issues: If your application is multi-threaded, there could be race conditions affecting the sorting.
5. Data transformation: Is the data being transformed between the sorting and when you observe the results?
6. Visualisation/display issue: Sometimes the issue is not with the sorting but with how results are displayed or accessed.

Can you share:

1. A specific example of a dataset that's being incorrectly sorted in your application
2. How you're verifying the results (e.g., printing them, displaying in UI)
3.
Any processing that happens to the results after sorting
```

You:

```text
I think I found the issue. In another part of the code, there's a function that modifies the results after sorting:

def highlight_outliers(data_list):
    for i in range(len(data_list)):
        # Swap positions of certain elements to highlight them
        for j in range(len(data_list[i])-1):
            if data_list[i][j] > data_list[i][j+1] * 2:  # Highlight elements 2x larger than next
                data_list[i][j], data_list[i][j+1] = data_list[i][j+1], data_list[i][j]
    return data_list

This is being called on the results from process_data! It's intentionally un-sorting some elements to highlight outliers, but I forgot it was there. Thank you for helping me trace through the algorithm; it confirmed that my bubble sort is correct and led me to look elsewhere for the issue.
```

## Key Takeaways

- Effective debugging with AI requires clear communication about errors and context
- Different debugging scenarios call for different prompting strategies
- Templates can streamline the debugging process for common issues
- Structured debugging approaches like divide-and-conquer and hypothesis testing work well with AI
- Always verify AI's debugging suggestions with your own testing
- Root cause analysis is more valuable than symptom mitigation
- Debugging is a process of investigation, not just code fixing

============================================================
SOURCE: chapters/refactoring-strategies.qmd
============================================================

# Refactoring Strategies

> Refactoring is not rewriting. It is making the code say what you meant more clearly.

Refactoring (improving code structure without changing its external behaviour) is a critical skill for maintaining healthy, adaptable software. AI assistants can be powerful allies in identifying refactoring opportunities and implementing improvements. This chapter explores intentional approaches to refactoring with AI assistance.
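To make "without changing its external behaviour" concrete, here is a small hypothetical illustration (the function names and discount rules are invented for this sketch, not taken from the chapter): a nested conditional is flattened with guard clauses, and a quick check confirms both versions agree on representative inputs.

```python
# Hypothetical example: the same discount rule before and after refactoring.
def discount_before(customer):
    if customer is not None:
        if customer.get("active"):
            if customer.get("orders", 0) > 10:
                return 0.1
            else:
                return 0.05
        else:
            return 0.0
    else:
        return 0.0

def discount_after(customer):
    # Guard clauses flatten the nesting; external behaviour is unchanged.
    if not customer or not customer.get("active"):
        return 0.0
    return 0.1 if customer.get("orders", 0) > 10 else 0.05

# A quick check that behaviour is preserved across representative inputs.
cases = [None, {}, {"active": False},
         {"active": True, "orders": 3}, {"active": True, "orders": 12}]
assert all(discount_before(c) == discount_after(c) for c in cases)
```

The assertion is the point: a refactoring is only done when checks like this (ideally a real test suite) confirm that nothing observable changed.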
## When and Why to Refactor

### Recognising Refactoring Opportunities

Refactoring is most valuable in specific situations:

1. Code smells - Patterns in code that indicate deeper problems
2. Technical debt - Accumulated design or implementation shortcuts
3. Changing requirements - Evolving needs that strain existing designs
4. Performance bottlenecks - Areas where optimisation is needed
5. Duplication - Repeated code that could be consolidated
6. Complexity - Overly complicated logic that's difficult to understand

AI assistants excel at identifying these opportunities, especially when prompted to look for specific issues.

Example prompt:

### The Business Case for Refactoring

Refactoring is sometimes seen as unnecessary by non-technical stakeholders. AI can help articulate the business value:

Example prompt:

## AI-Assisted Code Improvements

### Identifying Refactoring Targets

AI can analyse code to identify specific improvement opportunities:

Example prompt:

### Suggesting Improved Designs

AI can propose architectural improvements:

Example prompt:

### Implementing Specific Refactorings

AI can implement common refactoring patterns:

Example prompt:

## Measuring Impact of Refactoring

Refactoring should produce measurable improvements. AI can help identify metrics and assess results.
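One lightweight way to put numbers behind a performance-motivated refactoring is the standard library's `timeit` module. This is a sketch with hypothetical before/after functions (invented for illustration): a quadratic duplicate finder replaced by a linear one, with a behaviour check before the timing comparison.

```python
import timeit

# Hypothetical before/after implementations of the same duplicate search.
def find_duplicates_before(items):
    # O(n^2): rescans the rest of the list for every element
    dupes = []
    for i, a in enumerate(items):
        if a in items[i + 1:] and a not in dupes:
            dupes.append(a)
    return dupes

def find_duplicates_after(items):
    # O(n): tracks what has been seen using sets
    seen, dupes = set(), set()
    for a in items:
        (dupes if a in seen else seen).add(a)
    return sorted(dupes)

data = list(range(500)) * 2

# First confirm the refactoring preserved behaviour...
assert sorted(find_duplicates_before(data)) == find_duplicates_after(data)

# ...then measure the improvement with a repeatable benchmark.
before = timeit.timeit(lambda: find_duplicates_before(data), number=10)
after = timeit.timeit(lambda: find_duplicates_after(data), number=10)
print(f"before: {before:.3f}s  after: {after:.3f}s")
```

The same pattern (assert equivalence, then benchmark) works for any of the quantitative metrics discussed below; only the measurement tool changes.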
### Quantitative Metrics

Example prompt:

AI might suggest:

- Performance metrics: Response time, throughput, resource utilisation
- Code metrics: Cyclomatic complexity, lines of code, method length
- Testing metrics: Test coverage, test execution time
- Maintenance metrics: Time to implement new features, bug fix duration
- Error rates: Exceptions, crashes, incorrect results

### Qualitative Assessment

Example prompt:

AI might suggest:

- Developer feedback: Team assessment of code clarity and maintainability
- Onboarding impact: How quickly new team members understand the code
- Flexibility: Ease of implementing new requirements
- Documentation needs: Reduction in necessary explanation
- Knowledge distribution: Less reliance on specific team members

## Maintaining Functionality During Refactoring

One of the most critical aspects of refactoring is preserving existing behaviour.

### Test-Driven Refactoring

Example prompt:

### Incremental Refactoring Approaches

Example prompt:

AI might suggest:

1. Create a comprehensive test suite first
2. Identify clear boundaries between responsibilities
3. Extract one responsibility at a time, starting with the most independent
4. Introduce interfaces to define clear contracts
5. Refactor clients to use the new interfaces
6. Run tests after each incremental change

## Case Studies

### Case Study 1: Simplifying Complex Conditionals

Original Code:

Refactoring Request:

You:

AI:

### Case Study 2: Reducing Duplication

Original Code:

Refactoring Request:

You:

AI:

```text
There's significant duplication in this `UserService` class. The three methods follow the same pattern but just differ in the query condition. Let's refactor to eliminate this duplication:

class UserService:
    def __get_user_by_query(self, query, params):
        """
        Private helper method to execute a query and return user data.

        Args:
            query: SQL query with placeholders
            params: Parameters to substitute in the query

        Returns:
            User dict or None if not found
        """
        connection = None
        cursor = None
        try:
            connection = database.connect()
            cursor = connection.cursor()
            cursor.execute(query, params)
            result = cursor.fetchone()
            if result:
                return {
                    'id': result[0],
                    'username': result[1],
                    'email': result[2],
                    'created_at': result[3]
                }
            return None
        finally:
            # Ensure resources are closed even if an exception occurs
            if cursor:
                cursor.close()
            if connection:
                connection.close()

    def get_user_by_id(self, user_id):
        """Get a user by their ID."""
        return self.__get_user_by_query(
            "SELECT * FROM users WHERE id = %s",
            (user_id,)
        )

    def get_user_by_email(self, email):
        """Get a user by their email address."""
        return self.__get_user_by_query(
            "SELECT * FROM users WHERE email = %s",
            (email,)
        )

    def get_user_by_username(self, username):
        """Get a user by their username."""
        return self.__get_user_by_query(
            "SELECT * FROM users WHERE username = %s",
            (username,)
        )

Key improvements in this refactoring:

1. Extracted common code into a private helper method `__get_user_by_query`
2. Improved error handling with a try/finally block to ensure resources are properly closed
3. Added docstrings to improve code documentation
4. Made public methods concise and focused on their specific responsibilities
5. Reduced duplication significantly - the code is now much more maintainable

Further improvements could include:

- Creating a User model class instead of returning dictionaries
- Using a context manager or connection pool for database connections
- Adding proper exception handling and logging
- Implementing a data access layer or repository pattern

This refactoring maintains exactly the same functionality while making the code much more maintainable.
```

## Key Takeaways

- Refactoring is about improving code structure without changing functionality
- AI assistants excel at identifying refactoring opportunities and suggesting improvements
- Measure the impact of refactoring with both quantitative and qualitative metrics
- Preserve functionality during refactoring through comprehensive testing
- Common refactoring patterns like extracting methods and simplifying conditionals can be effectively implemented with AI assistance
- Incremental, test-driven approaches minimise risk during refactoring

============================================================
SOURCE: chapters/case-studies.qmd
============================================================

# Case Studies

> Every real problem is messier than the textbook version. That is where methodology earns its value.

In this chapter, we'll explore comprehensive case studies that demonstrate intentional prompting principles applied to real-world programming scenarios. These case studies illustrate how the six-step methodology and intentional prompting techniques come together to solve complex problems effectively.

## Case Study 1: Building a Data Analysis Pipeline

### The Problem

A research team needs to analyse a large dataset of customer transactions to identify purchasing patterns. They have a CSV file with millions of records containing transaction data including:

- Transaction ID
- Customer ID
- Date and time
- Product categories
- Purchase amount
- Payment method
- Store location

The team needs to:

1. Clean and preprocess the data
2. Identify the most frequently purchased product combinations
3. Analyse how purchasing patterns change over time
4. Visualise the results

### The Intentional Prompting Approach

#### Step 1: Restate the Problem

The first move is not to ask for code — it is to prove you understand the problem. By restating requirements in your own words, you invite the AI to catch gaps before any code is written.
You:

AI:

Notice what happened here: the AI surfaced six considerations — data privacy, repeatability, output format — that were not in the original requirements. A one-shot prompt like "build me a data pipeline" would have skipped straight past these. The restatement turned a vague goal into a shared understanding.

## The restatement is a filter, not a formality

If the AI's response to your restatement surprises you, that is the methodology working. Better to discover missing requirements now than after you have written 200 lines of code.

#### Step 2: Identify Input and Output

With the problem understood, the next step is to nail down exactly what goes in and what comes out. Vague specifications produce vague code — precise ones give the AI concrete targets.

You:

AI:

The AI pushed back on an ambiguity that could have produced the wrong analysis entirely: does "product combinations" mean items in the same basket, or items the same customer buys over time? That single clarification question could save hours of rework.

#### Step 3: Work by Hand

In this step, we'd manually process a small sample of the data to understand the analysis process. For brevity, we'll focus on a simple example of finding product combinations.

Human work:

Sample data (5 transactions):

To find product combinations:

1. Split each transaction's product_categories field
2. Generate all pair combinations
3. Count frequency of each pair

For T1001: ["electronics", "accessories"]
Pairs: [("electronics", "accessories")]

For T1002: ["groceries", "dairy", "produce"]
Pairs: [("groceries", "dairy"), ("groceries", "produce"), ("dairy", "produce")]

...and so on for all transactions.

Counting frequencies:

- ("dairy", "produce"): 2
- ("groceries", "dairy"): 2
- All others: 1

Most frequent combinations: ("dairy", "produce") and ("groceries", "dairy")

Now the human shares the results of working through the problem by hand. This is the step most people skip — and the one that matters most.
By showing the AI your manual process, you give it concrete evidence of what you expect the code to do.

You:

AI:

## Each step builds on the previous — the AI has context from earlier exchanges

By Step 3, the AI knows the data schema, the desired outputs, and the manual process. When it suggests association metrics like support, confidence, and lift, it is making a recommendation informed by three rounds of shared context — not guessing in a vacuum.

#### Step 4: Write Pseudocode

Based on the manual exploration, we can now outline the solution in pseudocode. The goal here is to capture the logic — the structure and flow — without getting tangled in syntax or library details.

You:

    FUNCTION clean_data(input_file):
        data = READ_CSV(input_file)
        # Handle missing values
        FILL_MISSING(data.product_categories, "unknown")
        FILL_MISSING(data.payment_method, "unknown")
        FILL_MISSING(data.store_location, "unknown")
        DROP_ROWS_WHERE(data.transaction_id IS NULL OR data.timestamp IS NULL)
        # Standardize formats
        data.timestamp = CONVERT_TO_DATETIME(data.timestamp)
        data.amount = CONVERT_TO_FLOAT(data.amount)
        RETURN data

    FUNCTION find_product_combinations(data, min_support=0.001):
        # Initialize counter for product combinations
        combination_counts = EMPTY_COUNTER()
        total_transactions = COUNT_ROWS(data)
        FOR each row IN data:
            products = SPLIT(row.product_categories, ",")
            product_combinations = GENERATE_COMBINATIONS(products, size=2)
            FOR each combination IN product_combinations:
                INCREMENT combination_counts[combination]
        # Convert counts to support values
        combination_support = EMPTY_DICT()
        FOR combination, count IN combination_counts:
            combination_support[combination] = count / total_transactions
        # Filter by minimum support
        frequent_combinations = FILTER(combination_support >= min_support)
        # Sort by support (descending)
        frequent_combinations = SORT_DESCENDING(frequent_combinations)
        RETURN frequent_combinations

    FUNCTION analyze_time_patterns(data):
        # Add temporal features
        data['month'] = EXTRACT_MONTH(data.timestamp)
        data['year'] = EXTRACT_YEAR(data.timestamp)
        data['day_of_week'] = EXTRACT_WEEKDAY(data.timestamp)
        data['hour'] = EXTRACT_HOUR(data.timestamp)
        # Group by month and product categories
        monthly_trends = GROUP_BY(data, ['year', 'month', 'product_categories'],
                                  AGGREGATE(COUNT(transaction_id), SUM(amount)))
        # Group by day of week and hour
        hourly_patterns = GROUP_BY(data, ['day_of_week', 'hour', 'product_categories'],
                                   AGGREGATE(COUNT(transaction_id)))
        RETURN monthly_trends, hourly_patterns

    FUNCTION create_visualizations(product_combinations, monthly_trends, hourly_patterns):
        # Create bar chart of top combinations
        bar_chart = PLOT_BAR_CHART(TOP_N(product_combinations, 20))
        # Create line chart of monthly trends
        line_chart = PLOT_LINE_CHART(monthly_trends)
        # Create heatmap of purchase patterns
        heatmap = PLOT_HEATMAP(hourly_patterns)
        RETURN bar_chart, line_chart, heatmap

    FUNCTION main(input_file, output_dir):
        # Clean data
        cleaned_data = clean_data(input_file)
        SAVE_CSV(cleaned_data, output_dir + "/cleaned_data.csv")
        # Find frequent product combinations
        product_combinations = find_product_combinations(cleaned_data)
        SAVE_CSV(TOP_N(product_combinations, 20), output_dir + "/top_combinations.csv")
        # Analyse time patterns
        monthly_trends, hourly_patterns = analyze_time_patterns(cleaned_data)
        SAVE_CSV(monthly_trends, output_dir + "/monthly_trends.csv")
        # Create visualizations
        bar_chart, line_chart, heatmap = create_visualizations(
            product_combinations, monthly_trends, hourly_patterns)
        SAVE_PNG(bar_chart, output_dir + "/top_combinations.png")
        SAVE_PNG(line_chart, output_dir + "/monthly_trends.png")
        SAVE_PNG(heatmap, output_dir + "/hourly_patterns.png")

AI:

The human is not just accepting the first answer — they are pushing for specifics by sharing detailed pseudocode and asking "are there inefficiencies?" This kind of directed follow-up consistently produces better results than passively accepting whatever the AI offers.
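To show how directly the pair-counting pseudocode maps to real code, here is a minimal sketch in plain Python (my illustration, not the case study's eventual implementation, which uses a proper Apriori library). It runs against the five hand-worked transactions from Step 3.

```python
from collections import Counter
from itertools import combinations

def find_product_combinations(rows, min_support=0.001):
    """Count category pairs per transaction and keep those above min_support.

    `rows` is an iterable of comma-separated category strings, matching the
    product_categories field worked through by hand in Step 3.
    """
    pair_counts = Counter()
    total = 0
    for categories in rows:
        total += 1
        # sorted(set(...)) de-duplicates and makes each pair order-independent
        products = sorted(set(categories.split(",")))
        pair_counts.update(combinations(products, 2))
    support = {pair: count / total for pair, count in pair_counts.items()}
    frequent = {p: s for p, s in support.items() if s >= min_support}
    # Sort by support, descending
    return sorted(frequent.items(), key=lambda kv: kv[1], reverse=True)

transactions = [
    "electronics,accessories",
    "groceries,dairy,produce",
    "dairy,produce",
    "groceries,dairy",
    "books",
]
for pair, s in find_product_combinations(transactions, min_support=0.3):
    print(pair, round(s, 2))
```

On this sample it reproduces the hand-worked result: ("dairy", "produce") and ("dairy", "groceries") each appear in 2 of 5 transactions, a support of 0.4.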
## This is the six-step methodology in action

The thinking happened in Steps 1-4, so Step 5 is informed delegation. When the human asks for Python code next, the AI already knows the data schema, the algorithm choice, the chunking strategy, and the error-handling requirements. Compare this with a one-shot prompt — the quality difference comes from the structured conversation.

#### Step 5: Convert to Code

Now let's implement the solution in Python, focusing on the product combination analysis. Because the pseudocode already captures the design, this step becomes a translation exercise rather than a design exercise.

You:

AI:

```text
Your implementation is solid and follows good practices for processing large datasets! Here are some thoughts and potential optimisations:

Strengths:

1. Using `chunksize` for reading large CSV files is excellent
2. The `mlxtend` library is a good choice for implementing Apriori efficiently
3. The code is well-structured with clear documentation
4. You've added appropriate data cleaning steps

Potential improvements:

1. Memory optimisation for very large datasets:

    def find_product_combinations(data, min_support=0.001, max_len=3):
        # For extremely large datasets, we might want to sample first
        if len(data) > 10_000_000:  # Arbitrary threshold
            data = data.sample(n=10_000_000, random_state=42)
            print(f"Data sampled to {len(data)} rows due to size")
        # Rest of your function...

2. Error handling and logging:

    def clean_data(file_path, chunk_size=100000):
        chunks = []
        total_rows = 0
        invalid_rows = 0
        try:
            for i, chunk in enumerate(pd.read_csv(file_path, chunksize=chunk_size)):
                initial_rows = len(chunk)
                total_rows += initial_rows
                # Your cleaning code...
                invalid_rows += initial_rows - len(chunk)
                chunks.append(chunk)
                if i % 10 == 0:
                    print(f"Processed {i+1} chunks ({total_rows} rows)...")
            print(f"Cleaning complete. Processed {total_rows} rows, removed {invalid_rows} invalid rows.")
        except Exception as e:
            print(f"Error during data cleaning: {str(e)}")
            raise

3. Performance optimisation for Apriori on sparse data:

    # If your dataset has many distinct products but each transaction has few,
    # use sparse representation for better performance
    from scipy.sparse import csr_matrix

    te_ary = te.fit_transform(transactions)
    sparse_te_ary = csr_matrix(te_ary)
    # Then use the sparse matrix with apriori if mlxtend supports it
    # Otherwise, for very sparse data, consider alternative implementations

4. Data validation before processing:

    def find_product_combinations(data, min_support=0.001, max_len=3):
        # Validate input data
        if 'product_categories' not in data.columns:
            raise ValueError("DataFrame must contain 'product_categories' column")
        # Check for empty values post-cleaning
        empty_cats = data['product_categories'].isin(['', 'unknown']).sum()
        if empty_cats > 0:
            print(f"Warning: {empty_cats} rows have empty or unknown product categories")
        # Rest of your function...

5. Consider adding time-based analysis:

    def find_seasonal_combinations(data, time_period='month'):
        """Find how product combinations vary over time."""
        # Add time period column
        if time_period == 'month':
            data['period'] = data['timestamp'].dt.to_period('M')
        elif time_period == 'quarter':
            data['period'] = data['timestamp'].dt.to_period('Q')
        periods = data['period'].unique()
        results = {}
        for period in periods:
            period_data = data[data['period'] == period]
            period_combos = find_product_combinations(period_data)
            results[period] = period_combos.head(10)  # Top 10 for each period
        return results

Overall, your implementation is well-designed for the task at hand. These suggestions are refinements that might be helpful in specific scenarios, but your core approach is sound!
You: Let's test our implementation with a small sample dataset first:

```python
import pandas as pd
import numpy as np
from datetime import datetime, timedelta

# Create sample data
np.random.seed(42)
n_samples = 1000

# Generate transaction IDs
transaction_ids = [f'T{i+1:04d}' for i in range(n_samples)]

# Generate customer IDs (100 unique customers)
customer_ids = [f'C{np.random.randint(1, 101):03d}' for _ in range(n_samples)]

# Generate timestamps (last 90 days)
base_date = datetime.now() - timedelta(days=90)
timestamps = [base_date + timedelta(days=np.random.randint(0, 90),
                                    hours=np.random.randint(0, 24),
                                    minutes=np.random.randint(0, 60))
              for _ in range(n_samples)]

# Product categories
categories = ['electronics', 'accessories', 'clothing', 'shoes', 'books',
              'groceries', 'dairy', 'produce', 'meat', 'bakery',
              'beverages', 'snacks', 'beauty', 'health', 'kitchen',
              'toys', 'sports', 'automotive', 'garden', 'furniture']

# Generate product categories (1-4 categories per transaction)
product_categories = []
for _ in range(n_samples):
    n_cats = np.random.randint(1, 5)
    cats = np.random.choice(categories, size=n_cats, replace=False)
    product_categories.append(','.join(cats))

# Generate amounts
amounts = np.random.uniform(10, 500, n_samples).round(2)

# Payment methods
payment_methods = np.random.choice(['credit', 'debit', 'cash', 'mobile'],
                                   size=n_samples, p=[0.4, 0.3, 0.2, 0.1])

# Store locations
store_locations = np.random.choice(['store1', 'store2', 'store3', 'online'],
                                   size=n_samples, p=[0.3, 0.3, 0.2, 0.2])

# Create DataFrame
sample_data = pd.DataFrame({
    'transaction_id': transaction_ids,
    'customer_id': customer_ids,
    'timestamp': timestamps,
    'product_categories': product_categories,
    'amount': amounts,
    'payment_method': payment_methods,
    'store_location': store_locations
})

# Save to CSV
sample_data.to_csv('sample_transactions.csv', index=False)

# Test our implementation
# 1. Find product combinations
results = find_product_combinations(sample_data, min_support=0.01, max_len=3)
print(f"Found {len(results)} frequent product combinations")
print("\nTop 10 product combinations:")
print(results.head(10))

# 2. Check for issues or unexpected results
print("\nData statistics:")
print(f"Total transactions: {len(sample_data)}")
print(f"Unique product categories: {len(set(','.join(product_categories).split(',')))}")

unique_combos = set()
for cats in product_categories:
    cat_list = cats.split(',')
    for i in range(len(cat_list)):
        for j in range(i+1, len(cat_list)):
            unique_combos.add(tuple(sorted([cat_list[i], cat_list[j]])))
print(f"Unique product pairs in data: {len(unique_combos)}")
```

What issues might we encounter when scaling to the full dataset, and how should we address them?

AI: Your test approach with synthetic data is excellent for validating the implementation! Here are the potential issues you might encounter when scaling to the full dataset and strategies to address them:

Scaling Issues and Solutions:

1. Memory Consumption
   - Issue: The TransactionEncoder creates a one-hot encoded matrix which can be extremely memory-intensive for datasets with millions of transactions and many unique products.
   - Solution:
     - Process in batches based on time periods (e.g., analyse one month at a time)
     - Use sampling techniques for initial analysis
     - Implement a more memory-efficient version of Apriori or consider alternatives like FP-Growth
     - Use sparse matrix representations where possible

2. Computation Time
   - Issue: Apriori's performance degrades significantly with low support thresholds on large datasets.
   - Solution:
     - Start with higher minimum support values and gradually decrease
     - Use parallel processing libraries (e.g., Dask, PySpark)
     - Implement an incremental approach that processes data in chunks and merges results
     - Consider using GPU-accelerated implementations for very large datasets

3. Long Tail of Infrequent Items
   - Issue: With millions of transactions, you might get thousands of combinations with low support.
   - Solution:
     - Focus on combinations with higher lift or confidence scores
     - Implement filtering for statistically significant associations
     - Group similar products into categories for higher-level analysis

4. Data Quality at Scale
   - Issue: Large datasets often have more varied quality issues that become apparent only at scale.
   - Solution:
     - Implement more robust data cleaning steps
     - Add comprehensive logging of anomalies
     - Create data quality dashboards to monitor issues

5. Time-Based Changes
   - Issue: Product associations may change over time, and analysing everything together might obscure patterns.
   - Solution:
     - Implement time-window analysis
     - Compare associations across different time periods
     - Look for emerging or declining associations

Implementation Improvements for Scale:

```python
# For very large datasets, consider this approach:
def analyze_large_dataset(file_path, output_dir):
    # 1. First pass: Get data statistics and sample
    print("First pass: Collecting statistics...")
    stats = collect_data_stats(file_path)

    # 2. Determine optimal processing approach based on stats
    if stats['total_rows'] > 10_000_000:
        print("Very large dataset detected. Using distributed processing...")
        # Initialize Dask or Spark
        import dask.dataframe as dd
        ddf = dd.read_csv(file_path)
        # Process with distributed framework
    else:
        print("Processing with pandas...")
        # Process with your current approach

    # 3. Use progressive sampling for quick insights
    sample_sizes = [10000, 100000, 1000000]
    for size in sample_sizes:
        if size > stats['total_rows']:
            continue
        print(f"Analysing sample of {size} transactions...")
        sample = get_stratified_sample(file_path, size)
        # Run analysis on progressively larger samples
        results = find_product_combinations(sample)
        save_results(results, f"{output_dir}/sample_{size}_results.csv")

    # 4. Full analysis with optimised parameters
    print("Running full analysis with optimised parameters...")
    # Use parameters learned from sample analysis
    full_results = process_in_time_windows(file_path)

    return full_results
```

Your test setup is well-designed. When scaling to millions of rows, implementing these strategies will help manage both memory consumption and computation time while still extracting valuable insights from the data.

You: I need to create a weather dashboard in Python that uses a provided module to access weather data and displays it in a user-friendly interface. Can you help me restate this problem more clearly to ensure I understand all requirements?

AI: This problem involves creating a Python application that:

1. Utilises an existing weather data module to retrieve weather information
2. Processes that data appropriately
3. Presents the information in a clear, user-friendly interface
4. Allows users to interact with the dashboard to view different weather metrics

The key components you'll need to develop are:

- Functions to retrieve and process data from the provided module
- A display system (likely console-based) to show weather information
- User interaction components to allow navigation and selection
- Visualisation elements for presenting weather data graphically

You: That's helpful. Could you help me understand what types of weather data are typically available through weather APIs, and what visualisations might be most useful for users? This will help me plan my dashboard layout.

You: Based on our discussion, let me identify the specific inputs and outputs for my weather dashboard:

Inputs:

- Location information (city name or coordinates)
- Type of weather data requested (current, forecast)
- Time period for forecast data

Outputs:

- Current weather conditions display
- Visual representations of temperature, humidity, etc.
- Forecast for upcoming days
- User navigation menu

Does this cover the essential inputs and outputs?
Is there anything important I'm missing?

AI: Your input/output identification is strong! A few additional considerations:

Additional inputs to consider:

- User preferences for temperature units (Celsius/Fahrenheit)
- Display preferences (text-only vs. graphical visualisation)
- Refresh rate for data updates

Additional outputs to consider:

- Error messages for handling unavailable data or connection issues
- Status indicators for data freshness
- Historical data comparisons (if available)
- Weather alerts or warnings

You: Before we start coding, I want to sketch out the user interaction flow for the dashboard:

1. User starts program
2. Program displays welcome and prompts for location
3. User enters location
4. Program fetches current weather data
5. Program displays main dashboard with:
   - Current temperature and conditions
   - Today's high/low
   - Humidity and wind
   - Simple 3-day forecast
6. Program shows menu options:
   - View detailed forecast
   - View temperature graph
   - Change location
   - Exit
7. User selects option
8. Program displays selected information
9. Return to menu (step 6)

Does this flow make sense? Would you structure the user experience differently?

You: Now that we have a clear understanding of the problem and user flow, could you help me write pseudocode for this weather dashboard? I want to focus on creating a modular design with separate functions for data retrieval, display, and user interaction.
AI:

```
# Main program structure
function main():
    display_welcome()
    location = get_location_from_user()
    units = get_preferred_units()

    while True:
        weather_data = fetch_weather_data(location, units)
        display_dashboard(weather_data)
        choice = display_menu()

        if choice == "forecast":
            display_detailed_forecast(weather_data)
        else if choice == "graph":
            display_temperature_graph(weather_data)
        else if choice == "location":
            location = get_location_from_user()
        else if choice == "refresh":
            continue  # Will loop back and refresh data
        else if choice == "exit":
            break

# Data functions
function fetch_weather_data(location, units):
    # Use provided weather module to get data
    current_weather = weather_module.get_current(location, units)
    forecast = weather_module.get_forecast(location, units)
    return {current: current_weather, forecast: forecast}

# Display functions
function display_dashboard(weather_data):
    clear_screen()
    display_current_conditions(weather_data["current"])
    display_simple_forecast(weather_data["forecast"])

function display_detailed_forecast(weather_data):
    clear_screen()
    for day in weather_data["forecast"]:
        display_day_forecast(day)
    wait_for_user()

function display_temperature_graph(weather_data):
    clear_screen()
    temperatures = extract_temperatures(weather_data["forecast"])
    dates = extract_dates(weather_data["forecast"])
    create_line_graph(dates, temperatures)
    wait_for_user()

# User interaction functions
function display_menu():
    print menu options
    return get_valid_choice()

function get_location_from_user():
    prompt for location
    validate location
    return location
```

You: The pseudocode looks good overall. I noticed that error handling isn't explicitly included. Could you expand the pseudocode to include error handling for:

1. Invalid location input
2. Weather service connection failures
3. Handling missing data in the weather response

You: Now I want to test this code with some sample data. Can you provide a sample data structure that would be returned by:

1. weather_api.get_current()
2. weather_api.get_forecast()

Also, what test cases should I consider to ensure my dashboard handles all possible scenarios correctly?

AI: Here are sample data structures that might be returned by the weather API:

```python
# Sample current weather data
sample_current = {
    'location': 'New York',
    'timestamp': '2023-09-15 14:30',
    'temperature': 22,
    'feels_like': 24,
    'condition': 'Partly Cloudy',
    'humidity': 65,
    'wind_speed': 8,
    'wind_direction': 'NE',
    'pressure': 1012,
    'visibility': 10,
    'uv_index': 5,
    'units': 'metric'
}

# Sample forecast data (list of days)
sample_forecast = [
    {
        'date': '2023-09-15',
        'day_of_week': 'Friday',
        'high': 24,
        'low': 18,
        'condition': 'Partly Cloudy',
        'precipitation_chance': 20,
        'humidity': 65,
        'wind_speed': 8,
        'wind_direction': 'NE',
        'units': 'metric'
    },
    # Additional days would follow...
]
```

Test cases to consider:

1. Happy path:
   - Valid location with complete weather data
   - Multiple days of forecast
2. Data issues:
   - Missing fields in current weather
   - Partial forecast data available
   - Empty forecast list
3. Connection issues:
   - API timeout
   - Service unavailable
4. User input scenarios:
   - Invalid location name
   - Empty input
   - Special characters in location name

```python
def fetch_weather_data(location, units):
    """Fetch weather data from API."""
    current_weather = weather_api.get_current(location, units)
    forecast = weather_api.get_forecast(location, units, days=5)
    return {"current": current_weather, "forecast": forecast}
```

You: The fetch_weather_data() function needs improvement. It doesn't handle errors or validate the data. How can I make it more robust to:

1. Handle connection errors
2. Validate returned data structure
3. Handle missing or incomplete data
4. Provide meaningful error messages to the user

AI:

```python
import time

def fetch_weather_data(location, units):
    """
    Fetch weather data from API with comprehensive error handling.

    Args:
        location: City name or location
        units: 'metric' or 'imperial'

    Returns:
        Dictionary containing current weather and forecast

    Raises:
        ConnectionError: If unable to connect to weather service
        ValueError: If location is invalid or data is incomplete
    """
    try:
        # Attempt to get current weather
        current_weather = weather_api.get_current(location, units)

        # Validate current weather data
        required_current_fields = ['temperature', 'condition', 'humidity', 'wind_speed']
        missing_fields = [field for field in required_current_fields
                          if field not in current_weather]

        if missing_fields:
            # Handle missing fields by providing defaults
            for field in missing_fields:
                current_weather[field] = "N/A"
            # Log the issue but don't fail completely
            print(f"Warning: Some current weather data is unavailable: {', '.join(missing_fields)}")

        # Attempt to get forecast with retry mechanism
        max_retries = 3
        retry_count = 0
        forecast = None

        while retry_count < max_retries:
            try:
                forecast = weather_api.get_forecast(location, units, days=5)
                break
            except Exception:
                retry_count += 1
                if retry_count >= max_retries:
                    print("Warning: Could not retrieve forecast data. Using current conditions only.")
                    forecast = []  # Empty forecast as fallback
                else:
                    time.sleep(1)  # Wait before retrying

        # Ensure forecast is a list (even if empty)
        if not isinstance(forecast, list):
            forecast = []

        return {
            "current": current_weather,
            "forecast": forecast,
            "location": location,
            "units": units,
            "timestamp": time.strftime("%Y-%m-%d %H:%M")
        }

    except ConnectionError:
        raise ConnectionError("Failed to connect to weather service. Please check your internet connection.")
    except ValueError:
        raise ValueError(f"Invalid location: {location}. Please check spelling and try again.")
    except Exception as e:
        # Convert any other exceptions from the API to something we can handle
        raise Exception(f"Error fetching weather data: {str(e)}")
```

You: We're experiencing intermittent failures in our checkout system. Let me restate the problem: Users occasionally receive a generic "Unable to complete checkout" error when attempting to purchase items.
The issue:

- Only affects approximately 5% of checkout attempts
- Seems more common during peak traffic hours
- Doesn't consistently affect the same users
- Doesn't correlate with specific products or payment methods
- Leaves limited information in our standard logs

Our goal is to identify the root cause and implement a fix to eliminate these checkout failures. Is this an appropriate restatement of the debugging challenge?

AI: To debug this issue effectively, we need to gather specific information:

INPUTS (Information sources):

- Application logs with enhanced logging enabled
- Database transaction logs
- Server performance metrics during failures
- Network traffic patterns
- User session data for failed checkouts
- Payment processor API logs

OUTPUTS (What we need to determine):

- Exact point of failure in the checkout process
- Conditions that trigger the failure
- Root cause of the issue
- Potential fix or mitigation strategy

Are there other information sources or outputs we should consider to effectively debug this intermittent issue?

The debugging case study would continue through a systematic investigation process, demonstrating how intentional prompting techniques help narrow down the cause of complex bugs.

## The methodology adapts to the problem type

Notice how the six steps flex to fit debugging: "restate the problem" becomes "describe what you know about the failure," "identify inputs/outputs" becomes "identify information sources," and "work by hand" becomes "trace the failure path manually." The structure is consistent; the application is flexible.

## Key Takeaways

From these case studies, we can extract several important lessons about intentional prompting:

1. Structured methodology yields better results
   - Following all six steps produces more robust solutions than jumping straight to implementation
   - Each step builds on the previous one, creating a solid foundation
2. Problem exploration is time well spent
   - Restating the problem and identifying inputs/outputs reveals critical requirements
   - Manual exploration uncovers edge cases that might otherwise be missed
3. AI assistance enhances human capability
   - AI helps identify potential issues and optimisations
   - The human developer maintains control over the approach and design decisions
4. Testing is essential
   - Starting with small, synthetic datasets verifies the approach before scaling
   - Comprehensive testing reveals potential issues early
5. Real-world complexity requires adaptability
   - Initial solutions often need refinement as scale and complexity increase
   - The methodology provides a framework for iterative improvement

============================================================
SOURCE: chapters/scaling-complexity.qmd
============================================================

# Scaling to Complex Projects

> The practices that work for a single function do not automatically scale. That is the challenge.

So far, we've primarily focused on applying intentional prompting techniques to isolated functions and small-scale problems. In real-world development, however, you'll often work on complex projects with multiple components, larger codebases, and teams of developers. This chapter explores strategies for scaling intentional prompting practices to handle this increased complexity.

## Beyond Single-File Programs

### The Challenges of Scale

As projects grow in size and complexity, several challenges emerge:

1. Architecture becomes more important - The high-level organisation of code has a greater impact on maintainability
2. Component interactions introduce new failure modes and dependencies
3. Consistency across the codebase becomes harder to maintain
4. Knowledge distribution among team members becomes uneven
5. Contextual understanding becomes more difficult as no single person knows every detail

These challenges require adaptations to our intentional prompting approach.
### Intentional Architecture Design

For larger projects, architectural planning becomes an essential first step. AI can be a valuable partner in this process. The following prompt demonstrates a key scaling skill: asking the AI to help you think through a design before you build it. Notice how the prompt lists the functional areas upfront — this gives the AI enough context to reason about boundaries and trade-offs.

You:

AI:

The AI produced a solid starting point, but that's exactly what it is — a starting point. The real value here isn't the specific service breakdown (which is fairly standard for e-commerce). It's the framework of considerations — data ownership, consistency models, interaction patterns — that you now need to evaluate against your specific constraints.

## Architecture Is a Human Decision

AI can generate sensible default architectures, and it's good at enumerating trade-offs. But the choices between those trade-offs — synchronous vs. asynchronous, strong vs. eventual consistency, separate auth service vs. embedded — depend on your team size, deployment constraints, and business priorities. Use the AI's output as a checklist of things to decide, not as decisions already made.

### Component-Level Intentional Prompting

For complex projects, apply the six-step methodology at the component level:

1. Restate the component's purpose and responsibilities
2. Identify component interfaces (inputs and outputs)
3. Work through component interactions with sequence diagrams or similar tools
4. Write pseudocode for key interfaces and interactions
5. Implement the component while maintaining clear boundaries
6. Test the component both in isolation and integrated with others

This approach ensures that each component is well-defined and properly integrated into the larger system.

## Managing AI in Larger Codebases

### Context Limitations

A key challenge when using AI with large codebases is providing sufficient context.
Most AI assistants have limitations on how much code they can analyse at once. Strategies for managing context limitations:

1. Hierarchical exploration - Start with high-level architecture and progressively drill down
2. Component-focused prompting - Focus inquiries on specific components rather than the entire system
3. Interface-first approach - Prioritise understanding component interfaces over implementation details
4. Strategic code sampling - Select representative portions of the codebase for AI analysis

Example prompt for hierarchical exploration:

### Maintaining Consistency

In larger projects, maintaining consistent patterns and approaches becomes critical. AI can help enforce these standards. This next prompt shows a technique that becomes essential at scale: teaching the AI your team's patterns by example. Rather than hoping the AI guesses your conventions, you explicitly provide a reference implementation and ask it to follow the same shape.

You:

## Show, Don't Just Tell

Listing your coding standards is helpful. Including an example that demonstrates those standards is far more effective. AI models learn patterns from examples much more reliably than from abstract descriptions. When you include `[Example controller code]`, you're giving the AI a concrete template to follow rather than a set of rules to interpret.

### Modular Prompting Strategies

For complex tasks, break down your prompts into modular components:

1. Context establishment - Provide the necessary background information
2. Specific task definition - Clearly state what you're trying to accomplish
3. Constraints and requirements - Outline any limitations or must-have features
4. Expected output format - Specify how you want the response structured

Example of modular prompting:

## Intentional System Design with Prompting

### Designing for Evolution

Complex systems evolve over time. Intentional prompting at scale means designing with this evolution in mind.
The next prompt illustrates a powerful technique: telling the AI not just what you need now, but what you anticipate needing later. This shifts the conversation from "build me a thing" to "help me design a thing that can grow."

You:

AI: Designing a payment processing system with future extensibility in mind requires careful architectural planning. Here's a comprehensive approach:

### Core Architectural Patterns

1. Strategy Pattern for Payment Methods
   - Create a common `PaymentProcessor` interface
   - Implement concrete strategies for each payment method (CreditCardProcessor, PayPalProcessor)
   - Use a factory to instantiate the appropriate processor based on payment type
2. Adapter Pattern for External Services
   - Create a consistent internal API for payment operations
   - Implement adapters for each external service (Stripe, PayPal, etc.)
   - This isolates external API changes from your core system
3. Provider Model for New Payment Types
   - Implement a plugin/provider registration system
   - New payment methods can be added as modules without modifying core code
   - Use dependency injection to load payment providers

### System Architecture Components

1. Payment Gateway Interface

```java
public interface PaymentGateway {
    PaymentResult process(PaymentRequest request);
    RefundResult refund(RefundRequest request);
    PaymentStatus checkStatus(String paymentId);
    // Future methods can be added as needed
}
```

2. Abstract Payment Method

```java
public abstract class PaymentMethod {
    protected PaymentGateway gateway;

    public abstract PaymentResponse authorize(Money amount, Customer customer);
    public abstract PaymentResponse capture(String authorizationId);
    public abstract PaymentResponse refund(String paymentId, Money amount);

    // Methods for subscription handling (future extension)
    public boolean supportsSubscriptions() { return false; }

    public PaymentResponse createSubscription(SubscriptionRequest request) {
        throw new UnsupportedOperationException();
    }
}
```

3. Payment Provider Registry

```java
public class PaymentProviderRegistry {
    private Map<String, PaymentMethod> providers = new HashMap<>();

    public void registerProvider(String type, PaymentMethod provider) {
        providers.put(type, provider);
    }

    public PaymentMethod getProvider(String type) {
        if (!providers.containsKey(type)) {
            throw new UnsupportedPaymentTypeException(type);
        }
        return providers.get(type);
    }

    public List<String> getSupportedPaymentTypes() {
        return new ArrayList<>(providers.keySet());
    }
}
```

### Design Principles to Follow

1. Abstraction Layers
   - Separate business logic from payment provider integration
   - Create clear boundaries between components
2. Configuration Over Code
   - Use configuration files for provider settings
   - Enable/disable payment methods without code changes
3. Event-Driven Approach
   - Implement payment events (initiated, authorized, captured, failed, refunded)
   - Allow other systems to react to payment events without tight coupling
4. Idempotency
   - Design all operations to be idempotent (can be safely retried)
   - Use idempotency keys for external API calls
5. Generic Data Models
   - Design data models that can accommodate various payment methods
   - Use extensible property bags for method-specific data

### Specific Extension Points

1. For Cryptocurrency Support:
   - Add a `CryptocurrencyPaymentMethod` class
   - Implement blockchain transaction monitoring
   - Consider address generation and validation
2. For Subscription Billing:
   - Extend the payment interface with subscription-specific methods
   - Add models for subscription plans, billing cycles, etc.
   - Implement subscription state management
3. For Regional Payment Methods:
   - Add metadata for region availability to payment methods
   - Implement region-specific validation and processing rules
   - Consider internationalization requirements

### Implementation Example

Here's a skeleton implementation of the core components:

```java
// The request/response models
public class PaymentRequest {
    private String paymentMethodType;
    private Money amount;
    private Customer customer;
    private Map<String, Object> methodSpecificData = new HashMap<>();
    // getters and setters
}

public class PaymentResponse {
    private boolean success;
    private String transactionId;
    private PaymentStatus status;
    private String errorCode;
    private String errorMessage;
    // getters and setters
}

// The payment service
public class PaymentService {
    private PaymentProviderRegistry providerRegistry;

    public PaymentResponse processPayment(PaymentRequest request) {
        PaymentMethod method = providerRegistry.getProvider(request.getPaymentMethodType());
        return method.authorize(request.getAmount(), request.getCustomer());
    }

    // Additional methods for different payment operations
}
```

### Testing Considerations

1. Create mock implementations of payment gateways for testing
2. Implement comprehensive integration tests for each payment provider
3. Design a test framework that can validate new payment methods

By following these patterns and principles, your payment system will be well-positioned to accommodate new payment methods and functionality with minimal changes to the core architecture.

You: In our microservices architecture, we're seeing the same patterns implemented inconsistently across services for:

- Authentication and authorization
- Logging and monitoring
- Error handling and reporting
- API request validation
- Rate limiting

How can we standardize these cross-cutting concerns across our services while maintaining each team's autonomy over their service's core functionality?
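Cross-cutting concerns like those listed above are usually standardised by extracting them into a shared library that every service imports, so each team keeps its handler logic while the envelope lives in one place. A hedged sketch of that idea, with all names and the response shape invented for illustration:

```python
import functools
import logging

logger = logging.getLogger("shared.middleware")

def standard_errors(handler):
    """Wrap any service handler in a team-wide error-reporting envelope."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        try:
            return {"ok": True, "data": handler(*args, **kwargs)}
        except ValueError as exc:
            # Validation failures get one consistent, loggable shape everywhere
            logger.warning("validation error in %s: %s", handler.__name__, exc)
            return {"ok": False, "error": str(exc)}
    return wrapper

@standard_errors
def create_order(quantity):
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {"quantity": quantity}
```

The decorator, logging format, and error taxonomy are owned centrally; the business logic inside `create_order` stays with the service team.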
You: I've implemented a new authentication service with the following components:

- OAuth2 provider integration
- JWT token issuance and validation
- Role-based access control
- Multi-factor authentication support

I need to create comprehensive documentation for other developers who will integrate with this service. What should my documentation cover, and how should I structure it for maximum clarity?

AI: # Authentication Service Documentation Structure

Here's a comprehensive documentation structure for your authentication service that balances thoroughness with usability:

## 1. Overview

### 1.1 Purpose and Scope
- Brief description of the authentication service
- Core features (OAuth2, JWT, RBAC, MFA)
- Where it fits in the overall system architecture

### 1.2 High-Level Architecture
- Component diagram showing major parts of the auth service
- Integration points with other systems
- Data flow diagrams for key authentication paths

### 1.3 Terminology
- Clear definitions of domain-specific terms
- Explanation of OAuth2 concepts for those unfamiliar

## 2. Getting Started

### 2.1 Prerequisites
- Required credentials and access
- Development environment setup
- Dependencies

### 2.2 Quick Start Guide
- Basic implementation examples for common scenarios
- Sample code for authentication flows
- Minimal configuration example

### 2.3 Integration Checklist
- Step-by-step guide for service integration
- Security best practices checklist

## 3. Core Features

### 3.1 OAuth2 Provider Integration
- Supported OAuth providers
- Configuration parameters
- Authorization code flow implementation
- Refresh token handling
- Error scenarios and troubleshooting

### 3.2 JWT Token Management
- Token structure and payload
- Issuance process and endpoints
- Validation procedures
- Token lifecycle (expiration, refresh)
- Signature verification

### 3.3 Role-Based Access Control
- Role hierarchy and definitions
- Permission assignments
- Role checking API
- Integration with resource servers
- Custom permission logic

### 3.4 Multi-Factor Authentication
- Supported MFA methods
- Enrollment process
- Verification flow
- Fallback mechanisms
- End-user experience considerations

## 4. API Reference

### 4.1 Authentication Endpoints
- Complete API specification (OpenAPI/Swagger)
- Request/response formats
- Error codes and handling
- Rate limits

### 4.2 Client Libraries
- Available client libraries for different languages
- Installation instructions
- Basic usage patterns

### 4.3 Middleware Components
- Authentication middleware for different frameworks
- Configuration options
- Error handling

## 5. Implementation Guides

### 5.1 Backend Service Integration
- Step-by-step guide with code examples
- Validation middleware setup
- Error handling best practices

### 5.2 Frontend Application Integration
- Login implementation
- Token storage strategies
- Handling token expiration
- Logout procedures

### 5.3 Mobile Application Integration
- Native app authentication flows
- Biometric integration
- Token storage on mobile devices

## 6. Security Considerations

### 6.1 Security Best Practices
- HTTPS requirements
- Token storage recommendations
- CSRF protection
- XSS prevention

### 6.2 Common Vulnerabilities
- Known attack vectors
- Prevention strategies
- Security testing checklist

### 6.3 Compliance Information
- Relevant standards compliance (GDPR, SOC2, etc.)
- Audit logging capabilities

## 7. Operations

### 7.1 Monitoring and Alerting
- Available metrics
- Recommended alerts
- Performance benchmarks

### 7.2 Troubleshooting
- Common issues and resolutions
- Logging details
- Debugging tools

### 7.3 Disaster Recovery
- Failure scenarios
- Recovery procedures

## 8. Reference Implementations

### 8.1 Sample Applications
- Links to reference implementations
- Example projects demonstrating best practices

### 8.2 Common Patterns
- Authentication patterns for microservices
- Single sign-on implementation
- API gateway integration

## 9. FAQ and Support

### 9.1 Frequently Asked Questions
- Answers to common questions
- Clarification of complex concepts

### 9.2 Getting Help
- Support channels
- Issue reporting procedure
- SLAs and response times

## Documentation Tips

1. Use consistent terminology throughout all documentation
2. Include working code examples that developers can copy and adapt
3. Create diagrams for complex flows, especially the OAuth2 and MFA processes
4. Add sequence diagrams showing the authentication process across systems
5. Document error scenarios just as thoroughly as success paths
6. Version your documentation to align with service versions
7. Provide a searchable format (ideally with a good search function)
8. Include a changelog tracking major documentation updates

This structure provides a comprehensive yet navigable documentation set that addresses the needs of different audiences - from developers getting started to those troubleshooting specific issues.

You: Here's a complex piece of code from our order processing system:

```java
public class OrderProcessor {
    private final InventoryService inventoryService;
    private final PaymentService paymentService;
    private final ShippingService shippingService;
    private final NotificationService notificationService;
    private final OrderRepository orderRepository;

    // Constructor and other methods...
```
```java
    @Transactional
    public OrderResult processOrder(Order order) {
        // Validate order
        if (order == null || order.getItems().isEmpty()) {
            return OrderResult.failure("Order must contain at least one item");
        }

        // Check inventory
        List<String> unavailableItems = new ArrayList<>();
        for (OrderItem item : order.getItems()) {
            InventoryStatus status = inventoryService.checkAvailability(item.getProductId(), item.getQuantity());
            if (status != InventoryStatus.AVAILABLE) {
                unavailableItems.add(item.getProductId());
            }
        }
        if (!unavailableItems.isEmpty()) {
            return OrderResult.failure("Items not available: " + String.join(", ", unavailableItems));
        }

        // Reserve inventory
        try {
            for (OrderItem item : order.getItems()) {
                inventoryService.reserveInventory(item.getProductId(), item.getQuantity(), order.getId());
            }
        } catch (InventoryException e) {
            return OrderResult.failure("Failed to reserve inventory: " + e.getMessage());
        }

        // Process payment
        PaymentResult paymentResult;
        try {
            paymentResult = paymentService.processPayment(order.getPaymentDetails(), order.getTotalAmount());
            if (!paymentResult.isSuccessful()) {
                // Release inventory if payment fails
                for (OrderItem item : order.getItems()) {
                    inventoryService.releaseInventory(item.getProductId(), item.getQuantity(), order.getId());
                }
                return OrderResult.failure("Payment failed: " + paymentResult.getErrorMessage());
            }
        } catch (PaymentException e) {
            // Release inventory if payment throws exception
            for (OrderItem item : order.getItems()) {
                inventoryService.releaseInventory(item.getProductId(), item.getQuantity(), order.getId());
            }
            return OrderResult.failure("Payment error: " + e.getMessage());
        }

        // Update order status
        order.setStatus(OrderStatus.PAID);
        order.setPaymentId(paymentResult.getTransactionId());
        orderRepository.save(order);

        // Create shipment
        try {
            ShipmentResult shipmentResult = shippingService.createShipment(order);
            if (shipmentResult.isSuccessful()) {
                order.setStatus(OrderStatus.SHIPPED);
                order.setTrackingNumber(shipmentResult.getTrackingNumber());
                orderRepository.save(order);
            } else {
                // Shipment creation failed but payment succeeded
                // Requires manual intervention
                order.setStatus(OrderStatus.PAYMENT_RECEIVED);
                order.setNotes("Shipment creation failed: " + shipmentResult.getErrorMessage());
                orderRepository.save(order);
                notificationService.notifyAdminOfShippingIssue(order);
            }
        } catch (ShippingException e) {
            // Shipment creation threw exception
            order.setStatus(OrderStatus.PAYMENT_RECEIVED);
            order.setNotes("Shipping error: " + e.getMessage());
            orderRepository.save(order);
            notificationService.notifyAdminOfShippingIssue(order);
        }

        // Send confirmation
        notificationService.sendOrderConfirmation(order);

        return OrderResult.success(order);
    }

    // Other methods...
}
```

Can you provide comprehensive JavaDoc documentation for this class and the processOrder method, explaining its purpose, the process flow, error handling, and any considerations for extending or modifying it?

You: Our development team of 12 engineers is working on a complex system with multiple components. We're finding that knowledge is becoming siloed, with each developer understanding only their area deeply. What collaboration patterns or team practices would help us maintain a shared understanding of the system while allowing for individual specialisation?

AI: # Team Knowledge Sharing Patterns for Complex Systems

To balance specialised expertise with shared understanding in your team of 12 engineers, consider implementing these complementary practices:

## Regular Knowledge Exchange Practices

### 1. Rotation-Based Code Reviews
- Approach: Assign code reviews outside of a developer's primary area
- Frequency: Every developer reviews code outside their specialty at least once per sprint
- Implementation: Create a rotation schedule ensuring exposure to different components
- Benefits: Cross-pollination of ideas and gradual knowledge distribution

### 2. Architecture Decision Records (ADRs)
- Approach: Document significant architectural decisions with context and rationale
- Content: Problem statement, constraints, considered options, chosen solution, consequences
- Storage: Keep in version control alongside the code
- Review: Team-wide review of new ADRs in dedicated meetings

### 3. Component Demo Sessions
- Approach: Regular deep-dives into specific components
- Format: 30-45 minute sessions with code walkthroughs, followed by Q&A
- Frequency: Weekly or bi-weekly, rotating through components
- Documentation: Record sessions and summarise key points in a knowledge base

### 4. Pair Programming Rotations
- Approach: Scheduled pair programming sessions across component boundaries
- Duration: Half-day sessions
- Frequency: Bi-weekly for each developer
- Pairing: Deliberate pairings of specialists with non-specialists

## Documentation Practices

### 5. Living Architecture Diagram
- Approach: Maintain an always-current system architecture visualisation
- Ownership: Rotating responsibility for updates
- Reviews: Monthly team reviews to ensure accuracy
- Accessibility: Prominently displayed in team spaces (physical and virtual)

### 6. Component Fact Sheets
- Approach: One-page summaries of each component's purpose, interfaces, and quirks
- Content: Core responsibilities, key interfaces, common issues, future plans
- Updates: Refresh quarterly or after major changes
- Usage: Required reading for onboarding to a new component

## Team Structure Adaptations

### 7. Bounded Rotation Program
- Approach: Planned rotations of developers between components
- Duration: 4-8 weeks per rotation
- Scope: Limited to 1-2 developers rotating at once to maintain continuity
- Knowledge Transfer: Dedicated handoff sessions when starting/ending rotations

### 8.
Component Ambassadors - Approach: Each component has a primary owner and a secondary "ambassador" - Ambassador Role: Learn enough to answer questions and represent the component - Rotation: Ambassadors rotate every 3-6 months - Responsibilities: Attend cross-component design meetings, review integration points ## Meeting Structures ### 9. System-Wide Design Reviews - Approach: Regular reviews of how components interact - Frequency: Monthly - Format: Each team presents recent changes and upcoming work - Focus: Integration points and cross-cutting concerns ### 10. Technical Brown Bags - Approach: Lunch-and-learn sessions on specific technical topics - Presenters: Rotating responsibility among team members - Topics: Both component-specific and general engineering practices - Schedule: Weekly or bi-weekly ## Tooling and Infrastructure ### 11. centralised Documentation Portal - Approach: Single entry point for all system documentation - Content: Component guides, architecture diagrams, ADRs, API docs - Maintenance: Part of definition of done for features - Search: Robust search functionality across all documentation ### 12. Cross-Component Integration Tests - Approach: Shared ownership of integration test suite - Responsibility: All developers contribute to integration tests - Reviews: Cross-component review of test scenarios - Visibility: Integration test results prominently displayed ## Implementation Strategy Rather than implementing all these patterns at once: 1. Start with 3-4 high-impact practices based on your team's specific needs 2. Evaluate after 2-3 months to assess effectiveness 3. Add or adjust practices based on team feedback 4. Create explicit time allocations for knowledge sharing activities These practices will help create multiple knowledge bridges across component boundaries while still allowing for deep specialisation where needed. text We want to implement a "documentation as code" approach for our microservices architecture. 
This would mean:

- Documentation lives alongside code in the repository
- Documentation is reviewed and updated with code changes
- Documentation is automatically generated and published

What tools, practices, and workflows would you recommend for implementing this approach effectively?

```text
I have a simple note-taking application with these components:

- A React frontend for creating and viewing notes
- A Node.js/Express backend with RESTful APIs
- MongoDB for data storage

Now I need to scale this to an enterprise collaboration platform with:

- Team workspaces
- Document sharing and editing
- User permissions and roles
- Activity tracking and notifications
- Integration with other tools via webhooks

How should I approach scaling this application architecture? What components would I need to add or modify, and how can I ensure the system remains maintainable as it grows?
```

## Key Takeaways

- Scaling intentional prompting to complex projects requires architectural thinking
- Break down large systems into manageable components with clear boundaries
- Use modular prompting strategies to help AI understand complex contexts
- Address cross-cutting concerns systematically across the system
- Invest in comprehensive documentation and knowledge-sharing practices
- Treat documentation as a first-class artifact that evolves with the code
- Use collaborative patterns to distribute knowledge across the team

============================================================
SOURCE: chapters/teaching-learning.qmd
============================================================

# Teaching and Learning

> Teaching someone to use AI well is teaching them to think well. The tool is secondary.

The advent of AI coding assistants has dramatically changed not just how we program, but also how we teach and learn programming. This chapter explores the application of intentional prompting principles in educational settings, offering guidance for both educators and students.
## Pedagogical Applications of Intentional Prompting

### Rethinking Programming Education

Traditional programming education has focused on teaching syntax, algorithms, and problem-solving techniques directly. With AI assistants readily available, the educational focus needs to shift:

| Traditional Focus | Intentional Prompting Focus |
|-------------------|-----------------------------|
| Syntax memorization | Critical evaluation of AI-generated code |
| Writing code from scratch | Guiding AI toward better solutions |
| Debugging syntax errors | Understanding conceptual errors |
| Implementing standard algorithms | Adapting algorithms to specific contexts |
| Following prescribed solutions | Exploring multiple approaches |

This shift doesn't mean abandoning the fundamentals. Rather, it means teaching them in a new context where AI collaboration is assumed.

### The Educator's Role in the AI Era

Educators now have additional responsibilities:

1. Teaching effective prompting alongside programming concepts
2. Modelling critical evaluation of AI-generated solutions
3. Emphasising conceptual understanding over implementation details
4. Preparing students for a future where AI collaboration is the norm
5. Creating assessment methods that work in an AI-assisted environment

### Integrating AI into Curriculum Design

AI assistants can be integrated at different levels of the curriculum:

Level 1: AI as a Learning Resource

- Students use AI to explain concepts they don't understand
- AI provides additional examples and alternative explanations
- Educators guide students on effective question-asking

Level 2: AI as a Programming Partner

- Students learn to prompt AI for code suggestions
- Assignments include evaluating and improving AI-generated code
- Focus on understanding the "why" behind AI suggestions

Level 3: AI as a Feedback Mechanism

- Students request AI feedback on their code
- AI identifies potential improvements and best practices
- Students learn to critically evaluate AI feedback

Level 4: AI as a Challenge Mechanism

- Students intentionally guide AI toward errors, then identify them
- Assignments include "debugging" AI-generated code
- Focus on developing critical thinking skills

## Designing Assignments in the AI Era

### Beyond "AI-Proof" Assignments

Many educators initially responded to AI coding assistants by trying to create "AI-proof" assignments, ones that AI tools couldn't easily solve. This approach is ultimately unsustainable and misses the opportunity to teach intentional prompting skills.
Instead, design assignments that incorporate AI collaboration explicitly:

### Prompting-Focused Assignments

These assignments emphasise the skill of effective prompting.

Example assignment:

### Step-by-Step Guided Exercise

This example from an educational setting shows how to guide students through intentional prompting:

Example: Building a Python Gradebook

> Task: Create a gradebook program that:
> - Uses a dictionary to store student names and their list of scores
> - Has a function to calculate average score
> - Prints each student's name and average
> - Handles edge cases (e.g., empty lists, invalid inputs)

Step 1: Start Simple

The AI might provide a basic implementation:

Step 2: Add a Function

The AI improves the implementation:

Step 3: Improve Robustness

The AI addresses the edge case:

This guided approach teaches students to:

- Start with a simple prompt
- Identify limitations in the initial response
- Use targeted follow-up prompts to improve specific aspects
- Think critically about edge cases
- Guide the AI through a step-by-step improvement process

### Evaluation and Improvement Assignments

These assignments focus on critical evaluation of AI-generated code.

Example assignment:

### Comparative Analysis Assignments

These assignments emphasise exploring multiple approaches.

Example assignment:

### Process Documentation Assignments

These assignments focus on the development process rather than just the final code.

Example assignment:

## Assessing Understanding vs. Output

### Moving Beyond Code Assessment

In the AI era, the code students produce is no longer a reliable indicator of their understanding.
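To make the gradebook exercise described above concrete, here is one possible end state that a guided session might converge on. This is an illustrative sketch, not an actual AI transcript: student names map to score lists, a helper computes averages, and empty lists or non-numeric entries are handled rather than allowed to crash the program. Being able to explain and modify code like this, not merely produce it, is exactly what the assessment strategies that follow aim to measure.

```python
def average(scores):
    """Return the average of a list of scores, or None if none can be computed."""
    if not scores:  # edge case: empty list
        return None
    # Edge case: ignore invalid (non-numeric) entries such as "absent"
    valid = [s for s in scores if isinstance(s, (int, float))]
    if not valid:
        return None
    return sum(valid) / len(valid)


def print_gradebook(gradebook):
    """Print each student's name and average score."""
    for name, scores in gradebook.items():
        avg = average(scores)
        if avg is None:
            print(f"{name}: no valid scores")
        else:
            print(f"{name}: {avg:.1f}")


gradebook = {
    "Alice": [90, 85, 78],
    "Bob": [],                  # empty list
    "Chen": [70, "absent", 88],  # invalid entry mixed in
}
print_gradebook(gradebook)
```

A natural follow-up assignment is to ask students where this sketch is still fragile (for example, negative scores or scores above 100 pass silently) and to prompt the AI toward validating those cases too.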
Assessment strategies must evolve:

### Process-Based Assessment

Evaluate the student's development process rather than just the final code:

- Documentation of prompting strategy
- Reflection on AI collaboration
- Analysis of alternative approaches considered
- Justification of design decisions

### Concept Demonstration

Have students demonstrate conceptual understanding separate from code production:

- Verbal explanations of how their code works
- Identification of potential edge cases
- Analysis of time and space complexity
- Prediction of how code would behave under different inputs

### Code Modification Challenges

Test understanding through targeted modification tasks:

- Adding a new feature to existing code
- Optimising for a different constraint
- Adapting the solution to a different context
- Fixing intentionally introduced bugs

### Prompt Engineering Assessment

Directly assess students' ability to effectively prompt AI tools:

- Provide a difficult problem and evaluate prompting strategy
- Grade the quality and specificity of prompts
- Assess ability to refine prompts based on initial responses
- Evaluate critical thinking about AI-generated solutions

## Building a Learning Community

### Collaborative Learning in the AI Era

AI assistants change the dynamics of collaborative learning:

Challenges:

- Students may rely on AI rather than peers for help
- Individual work becomes harder to distinguish
- Knowledge gaps can be masked by AI assistance

Opportunities:

- Groups can focus on higher-level design discussions
- AI can serve as a common "baseline" for group projects
- Students can learn from each other's prompting strategies

### Peer Learning Strategies

Encourage intentional peer collaboration:

1. Prompt-sharing sessions where students compare effective prompts
2. Code review workshops focused on evaluating AI-generated code
3. Paired prompting where students collaborate on guiding AI
4. Prompting competitions to solve challenges with the most effective prompts

### Creating a Supportive Environment

Foster an environment that supports learning with AI:

1. Explicitly acknowledge AI use as expected and valuable
2. Establish clear guidelines for appropriate AI collaboration
3. Share success stories of effective AI use
4. Normalise discussions about AI limitations and frustrations
5. Recognise prompting expertise as a valuable skill

## Case Studies in Educational Settings

### Case Study 1: Introductory Programming Course

Context: An introductory Python programming course at a university, with students who have no prior programming experience.

Approach: The instructor redesigned the course around intentional prompting principles:

1. Week 1-3: Foundations and AI Introduction
   - Basic Python syntax and concepts taught traditionally
   - Introduction to AI assistants in week 3
   - Assignment: Compare hand-written and AI-generated solutions
2. Week 4-8: Guided AI Collaboration
   - Structured prompting templates provided
   - Focus on understanding and modifying AI-generated code
   - Assignments include documented prompting journey
3. Week 9-15: Independent Problem-Solving
   - Complex problems requiring multiple prompting iterations
   - Focus on proper problem decomposition
   - Final project includes reflection on AI collaboration process

Results:

- Students showed stronger conceptual understanding compared to previous cohorts
- More students attempted advanced topics and extensions
- Course completion rate increased by 15%
- Students reported higher confidence in their programming abilities

Key Takeaway: Integrating AI assistants from the beginning, with explicit instruction on effective usage, led to better learning outcomes than either banning AI or allowing it without guidance.

### Case Study 2: Advanced Software Engineering Course

Context: A senior-level software engineering course focused on designing and implementing a complex system over a semester.
Approach: The instructor implemented a "documentation-first" intentional prompting methodology:

1. Project Planning Phase
   - Students created detailed specifications before any coding
   - AI was used to evaluate specifications for completeness
   - Students documented initial architectural decisions
2. Implementation Phase
   - Students used the six-step methodology for each component
   - Weekly submissions included prompting strategies used
   - Peer reviews focused on AI collaboration effectiveness
3. Integration and Testing Phase
   - AI used to generate comprehensive test cases
   - Students documented where AI testing missed edge cases
   - Final submission included reflective analysis

Results:

- Projects showed more consistent architecture and documentation
- Student teams reported fewer integration issues
- Code quality metrics improved over previous cohorts
- Students demonstrated better ability to explain architectural decisions

Key Takeaway: The intentional, documentation-first approach helped students maintain control over increasingly complex projects, even with significant AI assistance.
## Practical Guide for Educators

### Introducing Intentional Prompting in Your Curriculum

Step 1: Self-Education

- Familiarise yourself with AI coding assistants
- Practice the six-step methodology on your own projects
- Develop your own prompting skills

Step 2: Curriculum Analysis

- Identify which learning objectives remain essential
- Determine where AI can enhance rather than replace learning
- Design new objectives around intentional prompting

Step 3: Assignment Redesign

- Convert existing assignments to incorporate AI collaboration
- Create new assignments focused on prompting skills
- Develop clear guidelines for acceptable AI use

Step 4: Assessment Adaptation

- Design assessments that evaluate process, not just output
- Create rubrics that include prompting quality
- Implement verification mechanisms (explanations, modifications)

Step 5: Student Onboarding

- Explicitly teach effective prompting techniques
- Provide examples of good and poor AI collaboration
- Set clear expectations for documentation

### Sample Lesson Plan: Introduction to Intentional Prompting

Objective: Introduce students to intentional prompting for programming tasks

Materials:

- Access to an AI coding assistant
- Sample programming problem
- Prompting template handout

Lesson Flow:

1. Introduction (15 min)
   - Discuss the limitations of traditional "solution-oriented" prompting
   - Introduce the concept of intentional prompting
   - Share examples of effective vs. ineffective prompts
2. Demonstration (20 min)
   - Instructor demonstrates solving a problem using intentional prompting
   - Highlight the iterative refinement process
   - Show how to evaluate and improve AI-generated code
3. Guided Practice (30 min)
   - Students work in pairs on a simple programming task
   - Using provided prompting templates
   - Instructor circulates to provide feedback
4. Reflection and Discussion (15 min)
   - Students share what worked and what didn't
   - Discuss patterns in effective prompts
   - Address common misconceptions
5. Independent Practice (Homework)
   - Students solve a new problem using intentional prompting
   - Document their prompting journey
   - Reflect on what they learned about effective prompting

## Practical Guide for Students

### Maximising Learning with AI Assistance

Develop an Intentional Learning Mindset

- Focus on understanding concepts, not just completing assignments
- Use AI to explore alternative approaches, not just get answers
- Reflect on what you're learning through AI interactions

Document Your Learning Journey

- Keep a prompting journal recording effective prompts
- Note concepts that were unclear and how AI helped clarify them
- Track your progress in prompting effectiveness

Balance AI Assistance with Independent Work

- Try solving problems independently before consulting AI
- Use AI to verify your understanding, not replace it
- Deliberately practice skills that AI can't provide (debugging, testing)

Leverage AI for Learning Acceleration

- Use AI to explain concepts in multiple ways
- Ask AI to generate practice problems on topics you're struggling with
- Have AI explain the "why" behind its recommendations

### Self-Assessment Questions

Students can use these questions to gauge their intentional prompting skills:

1. Can I explain the code I submitted, including its logic and potential edge cases?
2. Do I understand why the AI suggested certain approaches over others?
3. Can I modify the solution to work under different constraints?
4. Am I able to identify limitations or potential improvements in AI-generated code?
5. Can I effectively guide AI through complex problems using systematic prompting?
## Key Takeaways

- The educational focus should shift from syntax to intentional collaboration
- Assignments should incorporate AI explicitly rather than trying to be "AI-proof"
- Assessment should focus on process, understanding, and prompting skills
- Both educators and students need new strategies for effective learning with AI
- Intentional prompting principles provide a framework for teaching programming in the AI era
- Building learning communities around AI collaboration enhances educational outcomes

## Moving Forward

In the final chapter, we'll explore the future of intentional prompting as AI technology continues to evolve, examining emerging trends and preparing for the next evolution of human-AI programming collaboration.

============================================================
SOURCE: chapters/future-directions.qmd
============================================================

# Future Directions

> The tools will change. The discipline of thinking clearly will not.

As we reach the conclusion of this book, it's important to look ahead at how intentional prompting might evolve as AI capabilities continue to advance. This chapter explores emerging trends in AI-assisted development, potential shifts in programming paradigms, and strategies for maintaining the human element in an increasingly AI-powered development landscape.
## Emerging AI Capabilities

### Beyond Text-Based Interfaces

Current AI coding assistants primarily interact through text-based interfaces, but this is likely to change:

#### Multimodal Interactions

Future AI assistants will likely incorporate:

- Visual programming interfaces that blend graphical and text-based interaction
- Voice-based coding where spoken instructions translate to code
- Diagram-to-code conversion where architecture diagrams become implementation scaffolding
- Code-to-visualisation generation where code is automatically visualised for better understanding

These multimodal capabilities will require adaptations to intentional prompting practices, with greater emphasis on clear specification and visualisation of intent.

#### Contextual Awareness

AI assistants are becoming increasingly aware of their operational context:

- Project-wide understanding beyond individual files
- Development history awareness including past decisions and their rationale
- Developer preference modelling that adapts to individual coding styles
- Team convention adherence that ensures consistency across contributors

For intentional prompting, this means prompting strategies will need to focus more on high-level goals and constraints, while the AI handles more of the implementation details.
### Increasing Autonomy and Agentic Programming

AI systems are gradually moving toward greater autonomy in coding tasks, leading to what's known as "agentic programming," where AI agents can autonomously perform complex development tasks with minimal human intervention:

#### Autonomous Coding Capabilities

- Self-directed exploration of solution spaces
- Automated testing and verification of generated code
- Self-debugging and refinement of initial solutions
- Proactive suggestion of improvements and optimisations

This increased autonomy will shift intentional prompting toward a supervisory role, where human developers focus on problem definition, architectural decisions, and quality assurance rather than implementation details.

It is worth stating plainly: better models make the practices in this book more important, not less. When AI-generated code was obviously rough, developers naturally reviewed it. As models produce code that reads like something a competent developer wrote, the temptation to accept it without understanding increases, and so does the cost of the bugs you miss. The more convincing the output, the more critical it is that you can verify it, explain it, and test it. Intentional prompting is not a workaround for weak models. It is a discipline that becomes more valuable as models improve.

#### AI Pairing and Collaboration

Future development environments may feature:

- Multiple specialised AIs with different expertise areas
- AI-to-AI collaboration where different models handle different aspects of development
- Continuous AI presence during the entire development process
- Persistent learning from team interactions and feedback

Intentional prompting in this environment will involve orchestrating these AI collaborators effectively, ensuring they work toward cohesive solutions.
#### The Essential Role of Human Oversight in Agentic Systems

As AI development agents become more sophisticated, research has identified critical intervention points where human oversight remains essential:

Requirements Phase

- AI Role: Convert user stories to technical specifications
- Human Role: Validate business alignment and feasibility of proposed approaches

Design Phase

- AI Role: Generate architecture diagrams from specifications
- Human Role: Evaluate maintainability and technical debt implications

Implementation

- AI Role: Auto-complete functions with context awareness
- Human Role: Ensure compliance with security protocols and organisational standards

Deployment

- AI Role: Auto-scale and configure infrastructure
- Human Role: Final approval for production releases

Research shows that even the most advanced AI systems face fundamental limitations. They struggle with precision despite excellence in generalisation, encounter difficulties with large-scale tasks despite competence with smaller ones, and demonstrate incomplete understanding of physical and business realities. These limitations highlight why human oversight in agentic systems is not merely beneficial but essential.

Effective human oversight includes:

1. Establishing clear protocols and roles for when and how human intervention should occur
2. Continuous training and development for oversight personnel
3. Fostering collaboration between AI experts and domain specialists
4. Implementing robust monitoring tools and evaluation frameworks

The future of intentional prompting will likely involve more sophisticated techniques for providing this essential oversight, ensuring that increasingly autonomous AI systems remain aligned with human goals, values, and quality standards.
## Evolving Programming Paradigms

### From Code Writing to Code Curation

The fundamental activity of programming is shifting from writing code to curating it:

#### Specification-First Development

- Detailed specifications become the primary artifact
- Automated implementation derived from specifications
- Human review and refinement of generated code
- Iterative specification improvement based on implementation feedback

Intentional prompting in this paradigm emphasises clear articulation of requirements and constraints, with less focus on implementation syntax.

#### Quality-Oriented Development

- Focus on correctness criteria rather than implementation
- Property-based specifications that define behaviour boundaries
- Automated verification against properties rather than manual testing
- Formal methods becoming more accessible through AI interpretation

This shift requires developing skills in specifying quality attributes and verification criteria rather than algorithmic details.

### New Programming Languages and Tools

AI capabilities are influencing the design of programming languages and development tools:

#### AI-Native Languages

- Natural language programming becoming more feasible
- Intent-based syntax that focuses on what, not how
- Adaptable abstraction levels that adjust to developer expertise
- Bidirectional translation between natural language and formal code

Intentional prompting practices will need to evolve to leverage these new language paradigms effectively.

#### Integrated AI Development Environments

- Continuous AI collaboration throughout the development workflow
- Predictive development that anticipates next steps
- Automated refactoring and optimisation based on semantic understanding
- Intelligent knowledge integration from documentation and community resources

These environments will require new skills in guiding AI assistance effectively throughout the development lifecycle.
## Preparing for Future Changes

### Skill Evolution for Developers

As AI capabilities advance, the most valuable developer skills will shift:

#### Emerging Critical Skills

1. System design and architecture - Defining cohesive, maintainable systems
2. Problem formulation - Precisely specifying what needs to be solved
3. Solution evaluation - Critically assessing generated solutions
4. Adaptation and integration - Combining AI-generated components effectively
5. Domain modelling - Translating real-world complexity into computational terms

Intentional prompting principles provide a foundation for developing these skills, with increasing emphasis on architecture and evaluation.

#### Continuous Learning Strategies

To remain effective as AI capabilities evolve:

1. Focus on fundamentals - Core concepts remain relevant even as implementation details change
2. Practice intentional prompting - Develop precise communication with AI systems
3. Engage with new paradigms - Experiment with emerging languages and methodologies
4. Build cross-disciplinary knowledge - Combine programming with domain expertise
5. Cultivate creative problem-solving - Develop skills that complement rather than compete with AI

### Organisational Adaptation

Organisations will need to adapt their development processes:

#### Evolving Team Structures

- AI specialists who optimise AI collaboration across teams
- Prompt engineers who develop effective prompting strategies
- Integration architects who ensure cohesive solutions
- Quality assurance evolution toward specification verification

#### Workflow Transformation

- Specification-centric processes replacing code-centric workflows
- Continuous AI collaboration throughout the development lifecycle
- Balanced human-AI responsibility with clear accountability
- Knowledge management becoming increasingly critical

Intentional prompting principles can guide these organisational transformations, ensuring that human expertise remains central despite increasing automation.

## Maintaining the Human Element

### Ethical Considerations

As AI takes on more programming responsibilities, ethical considerations become increasingly important:

#### Responsibility and Accountability

- Clear attribution of AI vs. human contributions
- Transparency in AI assistance for regulatory compliance
- Ethical review processes for AI-generated solutions
- Professional responsibility for AI-assisted outcomes

#### Bias and Fairness

- Awareness of AI biases in generated code
- Proactive fairness assessment of solutions
- Diverse human oversight to detect potential issues
- Inclusive design principles throughout development

Intentional prompting practices should incorporate explicit ethical review steps to address these concerns.

### Human Comparative Advantages

Despite advancing AI capabilities, humans maintain several critical advantages:

#### Enduring Human Strengths

1. Contextual understanding - Grasping the broader purpose and impact of software
2. Ethical judgment - Evaluating moral implications of technical decisions
3. Creative problem-solving - Finding novel approaches to unprecedented challenges
4. Empathetic design - Deeply understanding user needs and experiences
5. Strategic thinking - Aligning technical decisions with long-term goals

Intentional prompting should increasingly focus on leveraging these uniquely human capabilities while delegating algorithmically definable tasks to AI.

#### Meaningful Human Contribution

To ensure programming remains a fulfilling human activity:

1. Focus on high-impact decisions where human judgment adds value
2. Cultivate creativity in problem-solving approaches
3. Engage with ethical dimensions of technical decisions
4. Prioritise human-centred design considerations
5. Develop cross-disciplinary expertise combining programming with domain knowledge

## Vision for Intentional Prompting's Evolution

### Short-Term Evolution (1-3 Years)

In the near future, intentional prompting will likely adapt to:

- More comprehensive code generation capabilities
- Better context awareness across codebases
- Improved natural language understanding
- Initial multimodal interaction capabilities

Key adaptations will include:

1. More refined prompting techniques for whole-component generation
2. Greater emphasis on specification and architectural design
3. Increased focus on evaluation and integration skills
4. Enhanced collaborative workflows combining human and AI strengths

### Medium-Term Evolution (3-7 Years)

Looking further ahead, we may see:

- Natural language becoming a primary programming interface
- AI agents capable of autonomous implementation of components
- Sophisticated AI collaboration networks in development
- Formal verification becoming mainstream through AI assistance

Intentional prompting will likely evolve toward:

1. Architecture and system design as primary human activities
2. Specification languages replacing traditional programming languages
3. Orchestration of multiple specialised AI collaborators
4. Quality assurance through formal specification rather than testing

### Long-Term Vision (7+ Years)

Though speculative, longer-term developments might include:

- Programming primarily through intent specification
- AI systems capable of end-to-end application development
- Symbiotic human-AI development partnerships
- New programming paradigms built specifically for human-AI collaboration

The core principles of intentional prompting (maintaining human understanding and control while leveraging AI capabilities effectively) will remain relevant even as the specific practices evolve.

## A Personal Development Roadmap

### Building Your Intentional Prompting Skills

To prepare for the future of programming, consider this development roadmap:

#### Foundational Skills

1. Master the six-step methodology for intentional prompting
2. Develop effective prompting techniques for different programming tasks
3. Practice critical evaluation of AI-generated code
4. Build architectural thinking skills for component and system design
5. Cultivate test-driven thinking to ensure solution quality

#### Advancing Your Practice

1. Experiment with emerging AI coding assistants to understand their capabilities
2. Develop domain-specific prompting strategies for your areas of expertise
3. Create personal libraries of effective prompts for common tasks
4. Incorporate intentional documentation practices in your workflow
5. Contribute to open standards for human-AI development collaboration

#### Leading the Evolution

1. Explore new programming paradigms that leverage AI capabilities
2. Develop team practices for effective AI collaboration
3. Establish ethical guidelines for responsible AI use in development
4. Share your experiences through writing, speaking, or teaching
5. Participate in shaping the future of human-AI programming

### Continuous Adaptation

The most important skill for the future will be adaptability.
By grounding yourself in intentional prompting principles while remaining open to evolving techniques, you'll be well-positioned to thrive in the changing landscape of software development.

## Key Takeaways

- AI capabilities will continue to evolve, with increased contextual understanding, multimodal interactions, and autonomy
- Programming paradigms are shifting from code writing to code curation and verification
- Developer skills are evolving toward architecture, specification, and solution evaluation
- Organisations need to adapt team structures and workflows for effective AI collaboration
- Maintaining the human element requires focus on our unique capabilities and ethical considerations
- Intentional prompting principles provide a foundation for navigating these changes effectively

## Intentional Prompting Beyond Programming

While this book has focused primarily on applying intentional prompting to programming tasks, its core concepts have far broader applications across many domains and activities.
### Universal Principles of AI Collaboration

The fundamental principles of intentional prompting that we've explored (maintaining human agency, focusing on understanding, and prioritising process over output) apply universally to any human-AI collaboration:

#### In Creative Fields

- Writing: Using AI to brainstorm, refine, and edit written content while maintaining a unique voice
- Design: Directing AI to explore visual concepts while preserving artistic intent and coherence
- Music: Collaborating with AI on composition while maintaining musical identity and expression
- Film and Animation: Using AI for technical aspects while humans direct narrative and emotional elements

#### In Knowledge Work

- Research: Employing AI to analyse data and literature while humans guide inquiry and interpret significance
- Law: Using AI for precedent research and document preparation while humans handle judgment and argumentation
- Medicine: Applying AI for analysis and pattern recognition while physicians maintain diagnostic responsibility
- Education: Utilising AI to customise learning materials while educators focus on motivation and conceptual guidance

#### In Business and Management

- Strategic Planning: Using AI to analyse trends and options while humans make value-based decisions
- Project Management: Employing AI for scheduling and resource allocation while humans handle stakeholder relationships
- Marketing: Utilising AI for data analysis and content generation while humans direct brand voice and strategy
- Customer Service: Using AI for routine interactions while humans handle complex or sensitive situations

### The Five Principles in Any Domain

The five core principles of intentional AI collaboration transcend specific methodologies and can be applied in any context:

1. Human-led workflows: Regardless of domain, humans should establish goals, define parameters, and make final decisions
2. Critical evaluation: All AI outputs should be thoughtfully assessed based on domain expertise and context
3. Iterative collaboration: The process of refinement through feedback applies across all creative and analytical activities
4. Domain-appropriate trust: Every field has aspects where AI excels and where human judgment is essential
5. Skill complementarity: The most effective collaborations leverage both AI and human strengths appropriately

### The Future of Human-AI Partnership

As AI capabilities continue to evolve, these principles will only become more important. Future systems may offer:

- More seamless multimodal interaction across text, voice, images, and other media
- Greater domain-specific expertise in specialised fields
- Enhanced contextual awareness of projects, goals, and constraints
- Increased autonomy for well-defined tasks

However, intentional human direction will remain essential. The most successful individuals and organisations will be those who develop effective practices for intentional collaboration with increasingly capable AI systems while maintaining human purpose, creativity, and ethical judgment.

## Conclusion

As we conclude this exploration of intentional prompting, it's clear that the fundamental relationship between human developers and AI assistants will continue to evolve. What won't change is the need for a thoughtful, structured approach to leveraging AI capabilities while maintaining human understanding and control.

The six-step methodology and intentional prompting techniques presented in this book provide a foundation that can adapt to emerging capabilities and paradigms. By focusing on process over output, understanding over implementation, and human judgment over automation, intentional prompting offers a path forward that embraces AI assistance while preserving the essence of what makes programming a uniquely human creative endeavour.
Importantly, the methodology addresses many common objections to using LLMs in programming. For those concerned about the ambiguity of natural language, intentional prompting provides structured frameworks and progressive disambiguation techniques that bridge natural language and formal code. For those worried about the non-deterministic nature of LLMs, the methodology establishes clear evaluation criteria and testing practices that ensure reliability despite variability in outputs. And for those sceptical of AI's impact on learning and skill development, intentional prompting's focus on understanding over copying ensures that developers continue to build expertise rather than becoming dependent on AI.

The power of intentional prompting comes from recognising that LLMs have inherited human problem-solving patterns through their training data. Our methodology doesn't fight against this trait but leverages it by providing procedural frameworks that activate latent knowledge in both the human and the AI. This collaborative approach represents a middle ground between uncritical acceptance and outright rejection of AI assistance: a balanced path that maximises benefits while minimising risks.

As we move toward increasingly autonomous agentic programming systems, intentional prompting becomes not less but more essential. These systems excel at approximation and pattern matching but struggle with precision. They demonstrate competence with small-scale tasks while faltering with larger ones. They reveal an incomplete understanding of both physical and business realities. All these limitations highlight why human oversight through structured methodologies remains necessary.

The six-step methodology provides a framework not just for individual programmers but for organisations implementing AI oversight. It offers clear intervention points where human judgment and domain knowledge can guide AI agents toward optimal solutions.
By maintaining this human-in-the-loop approach, organisations can harness the efficiency of AI automation while mitigating its risks through informed, intentional human direction.

While we've focused primarily on programming, the principles of intentional collaboration with AI will extend across all domains of human creativity and problem-solving. By developing these skills now, you're preparing not just for current AI tools, but for the next generation of human-AI partnership in programming and beyond.

============================================================
SOURCE: acknowledgments.qmd
============================================================

# Acknowledgments

This book began as a set of Jupyter notebooks for teaching computational thinking. The students who worked through those notebooks — sometimes struggling, sometimes surprising me with approaches I had not considered — shaped the methodology that became intentional prompting. Their willingness to experiment with AI tools when the ground rules were still being written made this book possible.

Colleagues who teach programming in various contexts provided feedback that kept the ideas practical. The question "would this actually work in a first-year unit?" came up often enough to become a design principle.

The methodology in this book draws on Conversation, Not Delegation, and the two were developed in parallel. Ideas that emerged in the programming context were generalised for the companion book; principles from the companion book were tested against code. Each made the other sharper.

The open source community behind Python, Quarto, GitHub, and the broader ecosystem of developer tools made both the writing and the publishing possible. The entire toolchain — from drafting to building to hosting — is free and open, and that matters.

AI tools were used throughout the writing process. Claude (Anthropic) served as a conversation partner for drafting, iterating, and refining both text and code examples.
The process was exactly what the book advocates: intentional, directed, and critically evaluated. The author made the decisions. The AI made the work faster. Every sentence and every line of code reflects the author's judgement.

============================================================
SOURCE: about-author.qmd
============================================================

# About the Author

Michael Borck is a software developer and educator passionate about the intersection of human expertise and artificial intelligence. He developed the Intentional Prompting methodology to help programmers maintain agency and deepen their understanding while leveraging AI tools effectively.

Michael believes that the future of programming lies not in delegating to AI, but in conversing with it---treating AI as a collaborative partner that enhances human capability rather than replacing human understanding.

When not writing about AI collaboration, Michael works on practical applications of these principles across software development, education, and creative projects. He creates educational software and resources, and explores the 80/20 principle in learning and productivity.
---

Connect

- michaelborck.dev --- Professional work and projects
- michaelborck.education --- Educational software and resources
- 8020workshop.com --- Passion projects and workshops
- LinkedIn

---

Other Books in This Series

Foundational Methodology:

- Conversation, Not Delegation: Your Expertise + AI's Breadth = Amplified Thinking

Python Track:

- Think Python, Direct AI: Computational Thinking for Beginners
- Code Python, Consult AI: Python Fundamentals for the AI Era
- Ship Python, Orchestrate AI: Professional Python in the AI Era

Web Track:

- Build Web, Guide AI: Business Web Development with AI

For Educators:

- Partner, Don't Police: AI in the Business Classroom

============================================================
SOURCE: appendices/common-pitfalls.qmd
============================================================

# Common Pitfalls and Solutions

When working with AI coding assistants, certain patterns of challenges tend to emerge. This appendix catalogues common pitfalls in the intentional prompting process and provides practical strategies for avoiding or addressing them.
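The recurring advice in this appendix (verify before you trust) can be made concrete with a small sketch. The `median` function below stands in for a hypothetical AI-generated suggestion; it is an illustration for this appendix, not a prompt or solution from the book. The assertions show the kind of edge-case checks worth running before accepting generated code:

```python
# A hypothetical AI-generated suggestion: compute the median of a list.
# (Illustrative only; names and cases are this appendix's assumptions.)
def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Edge-case checks a careful reviewer runs before accepting the code:
assert median([3, 1, 2]) == 2          # odd length
assert median([4, 1, 3, 2]) == 2.5     # even length: average of middle pair
assert median([5]) == 5                # single element
try:
    median([])                         # empty input: fails loudly
except IndexError:
    pass                               # decide whether raising is acceptable
```

The point is not the function itself but the habit: every generated snippet gets traced through at least one case by hand and probed at its boundaries before it enters the codebase.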
## Over-Reliance on AI

### Symptoms

- Accepting AI-generated code without critical evaluation
- Inability to explain how your code works
- Discomfort when working without AI assistance
- Decreasing ability to solve problems independently

### Root Causes

- Convenience of immediate solutions
- Pressure to deliver quickly
- Gradual atrophy of independent problem-solving skills
- Incomplete understanding of the problem domain

### Solutions

#### Intentional Skill Building

- Schedule regular "AI-free" coding sessions to maintain core skills
- Implement a personal rule to solve problems manually first, then verify with AI
- Create deliberate learning challenges in unfamiliar areas

#### Critical Evaluation Process

- Establish a personal checklist for evaluating AI-generated code
- Always trace through at least one test case manually
- Ask "why" questions about design decisions in AI-generated code
- Look for potential edge cases that might not be handled

#### Balance Strategies

- Use the six-step methodology to ensure you understand the problem before seeking AI solutions
- Distinguish between areas where you need to build expertise (limit AI use) and areas where you can leverage AI more heavily
- Set clear boundaries for when and how you use AI assistance

## Unclear Prompt Formulation

### Symptoms

- Receiving irrelevant or incorrect solutions
- Needing many iterations to get useful responses
- Finding yourself frustrated with AI "not understanding" what you want
- Receiving overly generic or simplistic answers

### Root Causes

- Ambiguous problem descriptions
- Missing context or requirements
- Unexpressed assumptions
- Imprecise terminology

### Solutions

#### Structured Prompting Framework

- Use the templates from Appendix A as starting points
- Break complex requirements into clear, specific points
- Include explicit constraints and edge cases
- Specify the level of detail needed in the response

#### Context Enhancement

- Provide relevant background information
- Include code snippets that show the context
- Specify the programming language and framework
- Describe the broader system architecture when relevant

#### Iterative Refinement

- Start with a basic prompt and refine based on initial responses
- Use phrases like "Let me clarify..." to correct misunderstandings
- Build upon partial successes rather than completely changing approach
- Maintain a library of effective prompts for common tasks

## Accepting Incorrect Solutions

### Symptoms

- Discovering bugs after implementation that were present in the AI suggestion
- Receiving user reports of edge case failures
- Finding that AI-generated solutions don't integrate well with existing code
- Code reviews identifying fundamental flaws in approach

### Root Causes

- Insufficient validation of proposed solutions
- Misplaced trust in AI capabilities
- Pressure to deliver quickly
- Gaps in your own understanding of requirements

### Solutions

#### Systematic Verification

- Always test AI-generated code with multiple inputs, especially edge cases
- Review security implications, particularly for input handling and data access
- Check for performance issues with realistic data volumes
- Verify integration points with existing systems

#### Understanding Before Implementation

- Request explanations of unfamiliar patterns in generated code
- Step through complex algorithms manually with example data
- Compare against alternative approaches
- Ensure you can explain every part of the solution yourself

#### Incremental Adoption

- Integrate AI-generated code in small, testable chunks
- Start with non-critical components to build confidence
- Implement monitoring for new components to catch issues early
- Create comprehensive tests before implementing AI suggestions

## Losing Track of the Big Picture

### Symptoms

- Solutions that solve immediate problems but create architectural issues
- Inconsistent patterns across different parts of the codebase
- Difficulty explaining how components fit together
- Growing technical debt from expedient but uncoordinated decisions

### Root Causes

- Focus on tactical code generation rather than strategic design
- AI's limited context window and understanding of the full system
- Solving problems in isolation without system-wide perspective
- Missing architectural guidelines for AI collaboration

### Solutions

#### Architectural Discipline

- Establish clear architectural principles to guide AI prompting
- Regularly review how individual components fit into the overall system
- Maintain up-to-date architecture diagrams to reference during development
- Use the six-step methodology to ensure each solution aligns with system goals

#### Documentation-First Approach

- Document design decisions before implementing them
- Create clear interface specifications before coding
- Maintain a "source of truth" for architectural patterns
- Implement an architecture decision record (ADR) process

#### System-Level Review

- Periodically step back to assess the entire system
- Schedule "architectural reviews" to identify drift from intended design
- Use visualisation tools to understand system evolution
- Consider how individual changes affect system properties (performance, security, maintainability)

## Ethical and Legal Concerns

### Symptoms

- Uncertainty about code ownership and attribution
- Potential licence violations from AI-suggested code
- Reuse of code with unclear provenance
- Privacy or security implications in generated solutions

### Root Causes

- Unclear understanding of AI training data sources
- Limited transparency in how AI generates responses
- Evolving legal landscape around AI-generated content
- Lack of established ethics frameworks for AI coding assistants

### Solutions

#### Responsible AI Usage

- Understand how your AI assistant was trained and what data it uses
- Review AI-generated code for potential intellectual property issues
- Follow your organisation's guidelines for AI use
- Consider the ethical implications of automated decision-making in your code

#### Documentation and Transparency

- Document when and how AI assistance was used
- Maintain clear attribution for code sources
- Keep records of significant prompts used in development
- Be transparent with stakeholders about AI's role in development

#### Verification Processes

- Implement review processes specifically for AI-generated code
- Check for known security vulnerabilities in suggested solutions
- Verify that solutions meet regulatory requirements
- Consider third-party audits for critical systems

## Prompt Pattern Case Studies

### Case Study 1: Debugging a Complex Algorithm

#### Problematic Approach

Initial prompt:

Issues:

- Vague problem description
- No information about how it's failing
- No context about requirements or constraints

#### Improved Approach

Better prompt:

Improvements:

- Specific algorithm identified (merge sort)
- Clear description of the failure case
- Example input and expected output provided
- Indication of where the bug might be located

### Case Study 2: Implementing a New Feature

#### Problematic Approach

Initial prompt:

Issues:

- Extremely broad scope
- No information about the technology stack
- No specific requirements or constraints
- No context about the existing application

#### Improved Approach

Better prompt:

Improvements:

- Specific technology stack identified
- Clear, detailed requirements
- Existing schema provided for context
- Scope broken down into manageable parts
- Clear starting point specified

## Confusing Prompt Engineering with Intentional Prompting

### Symptoms

- Focusing exclusively on crafting the perfect prompt to get complete solutions
- Neglecting the manual work and understanding phases of the methodology
- Measuring success solely by the quality of AI outputs rather than learning outcomes
- Minimal critical evaluation of AI-generated solutions
- Skipping the six-step methodology and moving directly to implementation

### Root Causes

- Misconception that intentional prompting is just about creating better prompts
- Prioritising short-term efficiency over long-term understanding
- Not fully grasping the educational purpose of the intentional prompting methodology
- Confusing the means (good prompts) with the end (developer understanding and growth)

### Solutions

#### Clarify the Distinction

- Remember that prompt engineering is a set of techniques, while intentional prompting is a complete methodology
- Recognise that prompt engineering focuses on getting optimal outputs, while intentional prompting focuses on the human development process
- Understand that prompt engineering is a component of intentional prompting, not a replacement for it

#### Balance Output and Process

- Follow the complete six-step methodology even when it seems faster to skip steps
- Evaluate success based on both the quality of the solution AND your understanding of it
- Set aside time specifically for the "Work by Hand" and "Test with Data" steps, which are often neglected
- Document your learning insights separately from code outputs

#### Adopt a Learning Mindset

- Ask AI to explain its solutions rather than just accepting them
- Challenge yourself to modify and extend AI-generated code
- Practice explaining solutions to others without referring to AI explanations
- Identify gaps in your understanding and use AI to fill them deliberately

## Key Takeaways

- Be intentional about balancing AI assistance with skill development to maintain your capabilities
- Invest time in clear, detailed prompts to save time in iterative refinement
- Always verify AI-generated solutions through testing and manual review
- Maintain system-level perspective when implementing individual components
- Address ethical and legal concerns proactively through documentation and transparent processes
- Learn from failures by analysing and improving your prompting approach
- Distinguish between prompt engineering techniques and the intentional prompting methodology

By recognising and addressing these common pitfalls, you can maximise the benefits of AI assistance while avoiding its potential drawbacks.

============================================================
SOURCE: appendices/glossary.qmd
============================================================

# Glossary

This glossary provides definitions for key terms used throughout "Intentional Prompting."

## AI-Related Terms

AI Coding Assistant: Software tools that use artificial intelligence to help programmers write, understand, and debug code. Examples include GitHub Copilot, Amazon CodeWhisperer, and conversational AI models like Claude and ChatGPT when used for programming tasks.

Chain-of-Thought Prompting: A technique that encourages AI to break down complex reasoning into intermediate steps, producing more comprehensive and accurate outputs.

Few-Shot Prompting: Providing an AI with examples of the desired input-output behaviour before presenting the actual task, helping guide the model's response format and approach.

Foundation Model: Large AI models trained on vast datasets that serve as the basis for more specialised applications, including coding assistants.

Large Language Model (LLM): AI systems trained on massive text datasets that can generate human-like text and code based on prompts.

Prompt Engineering: The practice of crafting effective instructions for AI systems to produce desired outputs, involving specific techniques and patterns.

Zero-Shot Prompting: Asking an AI to perform a task without providing examples, relying on the model's pre-existing knowledge.

## Intentional Prompting Concepts

Intentional Prompting: A deliberate approach to programming that leverages AI tools while maintaining human control over the development process. It focuses on understanding, guiding AI through structured methodology, using AI as a learning tool, and maintaining the human developer as the architect and decision-maker.
Six-Step Methodology: The structured approach at the core of intentional prompting:

1. Restate the problem
2. Identify input and output
3. Work the problem by hand
4. Write pseudocode
5. Convert to code
6. Test with data

Challenge Prompts: Deliberately introducing programming challenges to test understanding and explore potential issues.

Clarification Prompts: Questions that refine requirements and explore edge cases before implementation.

Foundation Prompts: Initial prompts that establish baseline requirements for a programming task.

Refinement Prompts: Prompts that push for code improvements based on best practices and efficiency considerations.

Scaffolding Prompts: Prompts that support learning by breaking down complex concepts into understandable components.

## Programming Concepts

Code Smell: Patterns in code that may indicate deeper problems or potential for improvement.

Edge Case: A problem or situation that occurs only at an extreme operating parameter, often requiring special handling in code.

Pseudocode: A simplified, high-level description of a computer program or algorithm, using structural conventions of programming languages but intended for human reading rather than machine reading.

Refactoring: The process of restructuring existing code without changing its external behaviour, typically to improve non-functional attributes like readability, reduce complexity, or improve maintainability.

Technical Debt: The implied cost of additional rework caused by choosing an easy (but limited) solution now instead of using a better approach that would take longer.

## Teaching and Learning Terms

AI-Proof Assignment: Assignments designed to be difficult for AI coding assistants to solve directly, often focusing on conceptual understanding or novel problems.

Documentation-First Methodology: An approach where students create detailed specifications and documentation before writing any code, using AI to evaluate completeness and clarity.
Process-Based Assessment: Evaluating students based on their problem-solving approach and understanding rather than just the final code output.