5 Managing LLM Context: Working Smarter with AI Conversations
A long conversation is not necessarily a deep one. After enough turns, the AI forgets where you started, and so might you.
5.1 Why Context Matters: Understanding AI’s Limitations
One of the most underrated skills in working with AI is managing context: the information you feed to an AI system and how you structure your conversations.
Think of context like the working memory of AI. Unlike humans, who can maintain focus across days of conversation, AI has specific limitations:
- Limited attention span: Conversations have maximum lengths before older information becomes less salient (less in focus)
- Token limits: Every word you input and every word AI outputs counts against the model’s capacity
- Output token competition: When you ask for multiple things at once, AI must divide its output tokens among all tasks, often producing shallow results
- Hallucination risk: As conversations grow longer and more complex, the risk of AI “making up” information increases
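If you want a concrete feel for these limits, tokens are easy to count. Here is a minimal sketch using OpenAI’s open-source `tiktoken` library (an assumption on tooling: other model families use different tokenisers, so treat the counts as estimates, not exact budgets):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the tokeniser several OpenAI models use; other
# providers tokenise differently, so treat counts as estimates.
enc = tiktoken.get_encoding("cl100k_base")

prompt = ("Redesign my unit on supply chain management. Create new "
          "learning outcomes, design assessments, write student "
          "instructions, create a rubric, and draft a unit description.")

print(f"{len(enc.encode(prompt))} tokens, {len(prompt.split())} words")
```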
The good news? Understanding and managing context is a learnable skill that directly improves output quality, saves time, and reduces errors.
This is particularly important for educators because:
- You’ll be having longer conversations (designing units, iterating on assignments)
- You’ll need high-quality outputs (teaching materials must be accurate)
- You’ll want consistent quality across multiple deliverables (course redesigns, prompt libraries)
- You’ll be modelling these skills for students
5.2 The Four Core Problems
5.2.1 Problem 1: The Long Conversation Problem
What happens: You’ve been working with AI for 20 exchanges, refining a unit design. The conversation is great, but when you ask question 21, AI gives you an answer that contradicts something from exchange 5.
Why: As conversations grow longer, older information becomes less salient to the AI’s attention. While technically the AI can “see” the entire conversation, information from early exchanges has less influence on later responses.
Teaching impact: When designing complex units or courses, you’ll have lengthy conversations. Without managing context, outputs become inconsistent.
5.2.2 Problem 2: Output Token Scarcity
What happens: You ask AI to “redesign this unit, create a rubric, write student instructions, design an assessment, and create a facilitator guide.” You get five things, but each is shallow because AI divided its output tokens five ways.
Why: Every model has a maximum output token limit (typically 2,000–4,000 tokens). If you ask for 5 things, you get roughly 400–800 tokens per thing. Quality suffers.
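To make the arithmetic concrete: a 4,000-token cap split across five deliverables leaves roughly 800 tokens each. As a rough rule of thumb, a token is about three-quarters of an English word, so that is around 600 words, barely a page, per deliverable.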
Teaching impact: You might get a “complete” unit design that needs heavy revision, or you abandon it and start over, wasting time.
5.2.3 Problem 3: Hallucination Acceleration
What happens: As conversations get longer, AI becomes more likely to “confidently generate false information”, making up citations, inventing examples, or misremembering earlier statements.
Why: Longer conversations increase uncertainty. AI is tracking more information and making more inferential leaps. It tries to fill gaps with plausible-sounding but false information.
Teaching impact: Teaching materials with invented examples or false citations are problematic. Students trust what they see in your materials.
5.2.4 Problem 4: Lost Context Across Sessions
What happens: You close the chat. Next week, you want to continue designing that unit. You paste your earlier thinking into a new chat, but AI doesn’t have the full conversation history. It repeats earlier points or misses nuance.
Why: Each new conversation starts fresh. AI has no memory of previous sessions unless you explicitly provide that history.
Teaching impact: Multi-week projects (semester redesigns, curriculum overhauls) become fragmented. You must re-establish context repeatedly.
5.3 Core Strategy 1: Break Complex Tasks Into Steps
The Principle: Before diving into work, ask AI to help you structure the task.
Why it works:
- Distributes output tokens efficiently (each step gets full focus)
- Reduces hallucination risk (smaller scope per prompt)
- Gives you a clear plan to follow
- Lets you quality-check each step before moving forward
5.3.1 Example 1: Unit Redesign
Instead of asking all at once:
"Redesign my unit on supply chain management. Create new learning outcomes,
design assessments, write student instructions, create a rubric, and draft
a unit description."
Break it into steps. First, ask for a plan:
I'm redesigning a unit on supply chain management for 3rd-year business
students (40 students, mix of majors). Help me create a structured plan.
What are the key steps I should follow to redesign this unit? List them in
logical order with what we should accomplish at each step.
AI response: You get a plan like:
1. Clarify learning outcomes (what students should be able to do)
2. Design assessments (how you’ll know they’ve learned)
3. Plan learning activities (what students will do to learn)
4. Create student instructions (what students need to know)
5. Build assessment rubric (how you’ll grade)
6. Write facilitator notes (guidance for teaching)
Then work through the plan one step at a time:
Step 1:
Let's start with step 1: Learning Outcomes.
Here are my current outcomes: [paste]
Help me evaluate these:
- Which are clear and measurable?
- Which are too vague?
- Which are most important for supply chain professionals?
Suggest 3–4 revised outcomes that focus on authentic supply chain thinking.
Step 2 (after reviewing Step 1 output):
Good. Now step 2: Assessment Design.
For these outcomes: [paste]
Suggest three assessment approaches that would authentically test these outcomes.
For each, explain:
- What students do
- Why it tests supply chain thinking
- How it would work with 40 students
And so on. By breaking the work into steps, each output gets full attention and quality improves.
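If you script your AI use rather than working in a chat window, the same pattern translates directly to code. Here is a minimal sketch, assuming the OpenAI Python SDK and the gpt-4o model (any chat API works the same way): each step is its own request, so each response spends its output tokens on a single task, while the running message list carries earlier decisions forward.

```python
# A minimal sketch of the step-by-step pattern, assuming the OpenAI
# Python SDK; the prompts are abbreviated versions of those above.
from openai import OpenAI

client = OpenAI()

steps = [
    "Step 1: Evaluate my current learning outcomes [paste] and suggest "
    "3-4 revised outcomes focused on supply chain thinking.",
    "Step 2: For the revised outcomes, suggest three assessment "
    "approaches and explain how each works with 40 students.",
]

# One running conversation: every step sees the prior steps, but each
# response spends its output tokens on a single task.
messages = [{"role": "system", "content":
             "You are helping redesign a 3rd-year supply chain management "
             "unit for 40 business students."}]

for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer)  # review each step before moving on
```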
5.3.2 Example 2: Semester Course Planning
Instead of: “Design a 12-week course on organisational behaviour.”
Ask for a plan first:
I'm teaching organisational behaviour to 2nd-year business students (60 students,
first-year of their major). Help me structure this 12-week course.
What are the major topics we should cover? In what order? What's the learning
arc across the semester? What should be the rough focus of each week or pair
of weeks?
Then work through week-by-week or module-by-module:
Using the plan above, let's design Week 1-2: Introduction to Organisational
Behaviour.
Learning focus: [paste from plan]
Design these weeks including:
- 2–3 key concepts to introduce
- 1 major activity or case study
- 1 short formative assessment
- Approximately 3–4 readings
Make it manageable for a 3-hour/week course.
5.4 Core Strategy 2: One Task Per Prompt (Usually)
The Principle: Ask for one main thing per prompt, not multiple things.
Why it works:
- Each output gets full attention and token allocation (depth, not breadth)
- Easier to review and iterate on one thing
- Less cognitive load on the model
- Quality increases noticeably
5.4.1 Example: Lesson Plan Design
Poor approach (asking for too much):
"Write a lesson plan for teaching critical thinking to business students.
Include: 5 learning outcomes, 3 classroom activities, assessment rubric,
student handout, and facilitator notes."
Result: Shallow. Each element is skeletal. Outcomes might be vague. Activities are one-liners. Rubric has minimal criteria.
Better approach (one task per prompt):
Prompt 1:
I'm teaching critical thinking to business undergraduates. I want them to
be able to analyse business problems from multiple perspectives.
Design 3 classroom activities that help students practice critical thinking.
For each activity:
- Describe what students do (step-by-step)
- Explain what they'll learn
- Note how long it takes
- Indicate the group size (individual, small group, whole class)
Prompt 2 (after reviewing):
Good activities. Now let's build an assessment rubric for evaluating students'
critical thinking. Include:
- 4–5 criteria (e.g., perspective-taking, evidence use, reasoning clarity)
- For each criterion, descriptors for: Excellent / Proficient / Developing
Keep it usable for grading real student work.
Prompt 3 (after reviewing):
Turn the activities and rubric into a one-page student handout. Include:
- What they're learning (1–2 sentences)
- Why it matters professionally (1–2 sentences)
- The activities (clear instructions)
- Success criteria (what "good" looks like)
- How to get help if stuck
Make it accessible and encouraging.
Result: Deep. Each element is thoughtful, specific, and builds on what came before.
5.4.2 Exception: When Multiple Things Are Fine
Sometimes asking for multiple outputs makes sense:
- Comparative tasks: “Show me 3 different ways to teach [concept]. What are the trade-offs of each?”
- Structured formats: “Create an outline with: learning outcomes, key concepts, and 3 discussion questions”
- Quick iterations: “Now make that more concise / more challenging / more inclusive”
- Batched similar tasks: “Write 5 discussion questions on these topics: [list]. Each should take 10 minutes of discussion.”
The key: Are the outputs relatively equal in scope and complexity? If yes, ask for multiple. If one task is much bigger than others, split them.
5.5 Core Strategy 3: Use Output Constraints to Manage Tokens
The Principle: When asking for multiple things, specify output size/structure upfront. This helps AI divide tokens wisely.
5.5.1 Example: Assessment Comparison
Without constraints:
Compare portfolio assessment vs. exam-based assessment for evaluating student
learning in business courses. What are the advantages and limitations of each?
When should I use each?
Result: AI might spend 70% of tokens on one approach and 30% on the other. Output is imbalanced.
With constraints:
Compare portfolio assessment vs. exam-based assessment for my business course.
For each approach, provide:
- 2 key advantages
- 2 key limitations
- Best when: [one sentence]
Keep each section to 3–4 sentences maximum. Focus on practical classroom
implications.
Result: AI knows exactly how to divide tokens. Output is balanced, concise, and usable.
5.5.2 Template for Token-Aware Requests
I need [specific output type]. Provide:
1. [First thing] - [length/format]
2. [Second thing] - [length/format]
3. [Third thing] - [length/format]
Keep total output under [X words]. Prioritize clarity and specificity
over completeness.
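If you keep a prompt library, this template is easy to mechanise. Below is a minimal sketch; the function name and structure are illustrative, not from any library.

```python
# A small helper that fills in the token-aware template above so it
# can be reused across tasks. Purely illustrative.
def token_aware_request(output_type, parts, word_cap):
    """parts: list of (thing, length_or_format) pairs."""
    lines = [f"I need {output_type}. Provide:"]
    for i, (thing, fmt) in enumerate(parts, start=1):
        lines.append(f"{i}. {thing} - {fmt}")
    lines.append(f"Keep total output under {word_cap} words. "
                 "Prioritize clarity and specificity over completeness.")
    return "\n".join(lines)

print(token_aware_request(
    "a comparison of portfolio vs. exam assessment",
    [("Key advantages of each", "2 bullet points per approach"),
     ("Key limitations of each", "2 bullet points per approach"),
     ("Best when", "one sentence per approach")],
    word_cap=300,
))
```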
5.5.3 Discipline-Specific Example
Compare three audit sampling approaches: statistical, risk-based, and
procedural.
For each:
- 2 advantages for audit evidence
- 2 limitations
- Audit standards alignment: brief comment
- Best when: [one sentence]
Focus on what an audit team needs to decide.
5.6 Core Strategy 4: Keep Conversations Focused and Modular
The Principle: Use separate conversations for separate projects or major topic shifts.
Why it works:
- Shorter conversations = less hallucination risk
- Easier to find earlier outputs (scrolling back is simpler)
- AI stays focused on one topic
- Cleaner record-keeping (export or save by topic)
5.6.1 When to Start a New Conversation
- Topic shift: Finished designing one unit? Start a new conversation for a different unit.
- Major context change: Moving from unit design to research methodology? New conversation.
- Length: Conversation getting very long (50+ exchanges)? Consider summarising and moving to a new one.
- Different AI tool: Using Claude for teaching design and ChatGPT for grading assistance? Keep them separate.
5.6.2 When One Conversation Is Fine
- Iterative work on the same project (refining, revising)
- Related follow-ups (asking for adaptations of earlier output)
- Building on previous steps (multi-step workflows like the ones above)
Rule of thumb: One conversation per major project. Use the same conversation as you iterate and refine within that project. Start a new conversation when you move to a different project.
5.7 Core Strategy 5: Summarise and Handoff for Long Conversations
The Principle: When a conversation gets long, ask AI to summarise what you’ve accomplished, then start fresh in a new conversation.
Why it works:
- Resets the “attention freshness” (AI isn’t tracking 30+ old exchanges)
- Gives you a clean document of what you’ve done (useful archive)
- Reduces hallucination in the new conversation
- Allows you to build on work without repeating context
5.7.1 How to Do It
In the long conversation, when it feels unwieldy:
We've been working on [project name: e.g., "redesigning the HR management
unit"] for a while. Can you summarise what we've accomplished?
Include:
- What problem or task we started with
- Key decisions we made
- What we've created/designed so far
- What still needs to be done
Make it concise but complete—something I can copy and paste into a new
conversation to continue working.
AI provides a summary. Then:
- Copy that summary
- Start a new conversation
- Paste the summary at the beginning
- Add: “I’m continuing this work. Here’s what we’ve done. Let’s move forward with [next step].”
- Continue from there
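For readers driving models through an API, the handoff is just as mechanical. Here is a minimal sketch, assuming the OpenAI Python SDK (the prompt mirrors the template above); nothing from the old conversation crosses over except the summary, which is exactly the point.

```python
# A minimal sketch of the summarise-and-handoff pattern, assuming the
# OpenAI Python SDK (any chat API works the same way).
from openai import OpenAI

client = OpenAI()

old_messages = [
    # ... the long conversation's accumulated message history ...
]

# 1. Ask the long conversation for a handoff summary.
old_messages.append({
    "role": "user",
    "content": "We've been working on redesigning the HR management unit. "
               "Summarise what we've accomplished: the problem we started "
               "with, key decisions, what we've created, and what's left. "
               "Make it concise but complete, so I can paste it into a new "
               "conversation.",
})
summary = client.chat.completions.create(
    model="gpt-4o", messages=old_messages
).choices[0].message.content

# 2. Seed a fresh conversation with only the summary, not the history.
new_messages = [{
    "role": "user",
    "content": "I'm continuing this work. Here's what we've done:\n\n"
               f"{summary}\n\nLet's move forward with the next step.",
}]
```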
5.7.2 Example Handoff Summary
**Project:** Redesigning Supply Chain Management Unit (3rd year, 40 students)
**What we started with:**
- Students struggle to see supply chain as strategic (not just logistics)
- Current unit is mostly descriptive (case studies + lectures)
- Goal: more active learning, professional judgment development
**Key decisions made:**
- Organised around real supply chain decisions (not functions)
- Emphasised risk thinking and trade-offs
- Used simulation and case critique for learning
**What we've created:**
1. Learning outcomes (5 outcomes emphasising strategic thinking)
2. Assessment strategy (3 assessments: case analysis, risk simulation, team project)
3. Week-by-week learning plan (12 weeks structured around decisions)
**What's left:**
- Detailed activity instructions for each week
- Facilitator notes on how to run discussions
- Student rubric for team project
**Next step:** Design the Week 1-2 activities in detail.
You then start a new conversation and continue with: “I’m continuing supply chain unit redesign. Here’s our progress. Let’s design Week 1-2 activities.”
5.8 Core Strategy 6: Make Context Explicit and Structured
The Principle: Don’t assume AI remembers or understands implicit context. State it clearly.
5.8.1 Poor Context Example
"How should I handle participation in class?"
Missing: What is “participation”? What’s the class? How big? What’s the issue? What’s your teaching style? What have you tried?
5.8.2 Good Context Example
I teach marketing to 80 business students (2nd year, mix of domestic and
international). We use large lecture format (one 2-hour session per week).
Student participation problem: Maybe 10 students ask questions or offer ideas.
The other 70 are silent.
I want broader participation without:
- Making it feel forced or uncomfortable
- Losing lecture efficiency
- Putting shy students on the spot
How can I increase participation?
5.8.3 Better Context (If Continuing Earlier Work)
Remember we're redesigning the marketing unit on consumer behaviour.
We wanted more student participation—we had the problem where only a few
students spoke in the large lecture.
We've started using think-pair-share activities in class. They've helped.
But now we're thinking about assessments. How can we design assessments
that encourage quieter students to engage and show their thinking?
5.8.4 Checklist for Explicit Context
- Who: Who are the students? (Level, major, background, cohort size, cultural mix)
- What: What’s the specific task or problem? (Not vague; specific)
- Why: Why does it matter? (Learning goal, professional relevance, student challenge)
- Constraints: What are the limitations? (Time available, resources, institutional requirements)
- Style: What’s your teaching approach? What’s worked before? What hasn’t?
- History: Have we worked on this before? What did we already decide?
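If you find yourself re-typing this context often, the checklist can become a reusable preamble in your prompt library. A minimal sketch; the function and field names are illustrative only.

```python
# A tiny helper that turns the checklist above into a context preamble.
def context_preamble(who, what, why, constraints, style="", history=""):
    parts = [
        f"Who: {who}",
        f"Task: {what}",
        f"Why it matters: {why}",
        f"Constraints: {constraints}",
    ]
    if style:
        parts.append(f"My approach: {style}")
    if history:
        parts.append(f"Prior work: {history}")
    return "\n".join(parts)

print(context_preamble(
    who="80 2nd-year marketing students, mixed domestic/international",
    what="Broaden participation beyond the ~10 vocal students",
    why="Quieter students aren't showing their thinking",
    constraints="2-hour weekly lecture; no putting shy students on the spot",
))
```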
5.9 Core Strategy 7: Batch Similar Tasks
The Principle: When you have multiple similar tasks, batch them efficiently.
5.9.1 Poor Approach
[You] "Write a discussion question on leadership styles for my management unit."
[AI] [Provides question]
[You] "Review. Good. Now write a discussion question on ethical decision-making."
[AI] [Provides question]
[You] "Review. Good. Now write one on team conflict."
[Repeat 3+ more times]
Problem: This takes 10+ exchanges. You repeat context setup each time. Token efficiency is poor.
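To put rough numbers on that inefficiency (the figures below are illustrative, not measured):

```python
# Illustrative arithmetic for the batching saving.
context_setup = 150   # tokens to re-explain unit, students, constraints
per_question = 40     # tokens for each individual request
n = 5

one_at_a_time = n * (context_setup + per_question)  # 950 input tokens
batched = context_setup + n * per_question          # 350 input tokens

print(f"One at a time: {one_at_a_time} input tokens across {n} prompts")
print(f"Batched:       {batched} input tokens in one prompt")
```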
5.9.2 Better Approach
Single prompt:
I need 5 discussion questions for a 12-week management unit. They should:
- Progress from basic understanding to critical analysis
- Take 8–12 minutes of class discussion
- Spark respectful debate (not yes/no questions)
- Be relevant to business students' future work
Topics (one question each):
1. Leadership styles and contexts
2. Ethical decision-making in organisations
3. Managing team conflict
4. Change management and resistance
5. Inclusive leadership and diversity
Provide all 5 questions with a note about why each one works for discussion.
Result: Single exchange. AI understands the pattern. All 5 questions are high-quality and consistent.
Then iterate once if needed:
These are good. Now adapt question 3 (team conflict) to include an international
context. Some of my students are from cultures where conflict is handled very
differently than in Western business tradition.
5.10 The Two-Chat Workflow: Separate Thinking from Building
By now you have seen how breaking tasks into steps and keeping conversations focused improves quality. There is a deeper version of this principle that changes how you work with AI entirely: use two separate sessions, one for thinking and one for building.
This idea builds on the Two-Chat Workflow from Conversation, Not Delegation (Borck, 2025), adapted here for teaching practice. It is simple, powerful, and worth making a habit.
Chat 1: Explore and clarify. Open a session with no expectation of producing finished output. Use it to probe the teaching challenge you are facing. What are you actually trying to achieve? What assumptions are you making about your students? What alternatives have you not considered? Let the conversation wander. Challenge what the AI suggests. Follow tangents. The messier this session is, the clearer your thinking becomes.
Chat 2: Build from your decisions. Start a fresh session and arrive with a focused brief — not a vague request, but a set of deliberate choices about what you want, who it is for, and what constraints apply. The quality of this output depends almost entirely on the quality of the brief you wrote after reflecting on Chat 1.
The most important moment is the gap between the two chats. You do not dump everything from the first session into the second. You review what emerged, keep what matters, discard what does not, and write a clear brief that reflects your decisions. That act of curation is where your professional judgement lives — and it is the part no AI can do for you.
5.10.1 Example: Redesigning an Assessment
Thinking Chat:
I'm rethinking how I assess critical thinking in my 2nd-year management unit.
Currently I use a written case analysis, but I'm not sure it actually tests
critical thinking versus summarising. Help me think through this.
What does critical thinking actually look like in a management context?
What would strong evidence of it look like in student work?
You spend 10–15 minutes exploring: What counts as critical thinking in management? How is it different from analysis? What would a weak submission look like versus a strong one? The AI helps you think, but you are doing the intellectual work of deciding what matters.
Build Chat:
I'm redesigning the critical thinking assessment for my 2nd-year management
unit (60 students). After reflection, here's what I've decided:
- Critical thinking in management means evaluating competing stakeholder
perspectives and defending a position with evidence
- I want students to critique a real management decision, not just describe it
- The assessment should be 1500 words with a structured argument
- I need a rubric that distinguishes "summarising the case" from
"evaluating the decision"
Design the assessment brief and rubric based on these decisions.
Notice the difference. The build chat gets a clear, decided brief. The output will be dramatically better than if you had started cold with “design me a critical thinking assessment.”
Pick a teaching task you have been putting off — perhaps redesigning a tutorial activity or rethinking a rubric. Open an AI session and spend five minutes just exploring the problem: what is not working, what you have tried, what you are unsure about. Do not ask for any deliverables. Then close that session, jot down your key decisions in two or three sentences, and open a fresh session with those decisions as your opening brief. Compare the result to what you would have gotten from a single cold prompt. The difference is usually striking.
The exploring chat does not know what you will eventually build. The building chat does not know what options you considered and rejected. Only you hold both sides. That is what makes you irreplaceable in this process — not as someone who checks AI’s work after the fact, but as the person whose judgement connects exploration to execution.
This workflow connects directly to the average-versus-precise, small-versus-large framework from the earlier chapter on LLMs. That framework tells you where a task sits. The two-chat workflow tells you what to do about it. A task that starts in the danger zone (large and precise, like redesigning an entire unit’s assessment strategy) feels overwhelming as a single prompt. But the exploring chat breaks it into components that each sit in different quadrants. Some pieces land in the sweet spot. Others need careful verification. Each gets an appropriate level of trust and oversight. The exploring session is where you map the territory. The building session is where you execute with that map in hand.
5.11 Common Mistakes and How to Fix Them
| Mistake | What Goes Wrong | Fix |
|---|---|---|
| Asking for 10 things at once | Output is shallow; tokens divided 10 ways | Break into 2–3 prompts, one main task per prompt |
| Vague task description | AI misunderstands what you want | Add explicit context: who, what, why, constraints |
| “Design my whole course” in one go | Incoherent, shallow output | Ask for a plan first, then design one section at a time |
| Leaving conversation open indefinitely | Hallucination risk increases; unwieldy to navigate | Start a new conversation after 50+ exchanges |
| Not specifying output format | AI guesses format; may not match needs | Say “3 bullet points”, “one paragraph”, “table”, etc. |
| Asking “what am I missing?” | AI invents irrelevant things | Be specific: “What am I missing in my assessment of [specific skill]?” |
| Forgetting to review outputs | Errors and hallucinations slip through | Always quality-check, especially facts/citations |
| Pasting entire documents without framing | AI doesn’t know what to focus on | Add a sentence: “Here’s my unit outline. Focus on the assessment section.” |
| Starting a new conversation without a handoff summary | Lost work; you must re-explain everything | Summarise first, then paste the summary into the new chat |
5.12 Practical Workflow for Managing Context
Here’s a workflow that brings everything together:
5.12.1 Phase 1: Planning
- Define the task clearly (in writing, to yourself)
- Ask AI for a plan before diving in
- Break the plan into sub-tasks
- Identify how much output you need for each sub-task
5.12.2 Phase 2: Execution
- Work through one sub-task per prompt (usually)
- Review each output before moving forward
- Provide feedback for refinement
- Document what works (save successful prompts)
5.12.3 Phase 3: Management
- Keep conversations focused (one major project per conversation)
- When a conversation gets long (50+ exchanges), ask for a summary and move to a new conversation
- Use separate conversations for different topics/projects
- Archive completed work
5.12.4 Phase 4: Quality Check
- Verify facts (especially citations, dates, statistics, attributions)
- Check for contradictions (does it align with earlier outputs?)
- Assess completeness (did AI address all your needs?)
- Iterate if needed (use follow-up prompts to refine, not to ask for entirely new things)
5.13 Real-World Example: Managing Context Well
Scenario: Designing a 10-week supply chain management unit.
5.13.1 Bad approach (what NOT to do):
"Design the entire supply chain management unit including all 10 weeks,
learning outcomes, assessments, readings, 3 activities per week, facilitator
notes, and student rubrics."
Result: Massive output that’s shallow and poorly integrated. You’d need to revise everything piecemeal.
5.13.2 Good approach:
Conversation 1: Planning
I'm redesigning a 10-week supply chain management unit for 3rd-year business
students (40 students, mix of majors). Help me create a modular plan.
What are the key supply chain topics we should cover? How should they sequence?
What's the learning arc across the semester? What should each week focus on?
Result: You get a coherent 10-week plan with learning progression.
Conversation 2: Learning Outcomes
Using the plan from earlier, let's define learning outcomes.
Here's the plan: [paste from Conversation 1]
For each week or pair of weeks, suggest 1–2 specific, measurable outcomes
that focus on authentic supply chain thinking (not just knowledge).
Result: Outcomes aligned to the plan, focused on professional judgment.
Conversation 3: Week 1 Deep Dive
Let's design Week 1 in detail. Topic: Supply Chain Fundamentals and Strategic
Thinking.
Learning outcomes: [paste from Conversation 2]
Design Week 1 including:
- 3 key concepts to introduce
- 1 major activity or case study
- 1 short assessment
- 3–4 readings
- Facilitator notes on how to run discussion
Make it manageable for a 3-hour week.
Result: A coherent, complete Week 1.
Conversation 4: Weeks 2-3 Deep Dive
Continue with Weeks 2-3 using the same structure...
Result: By batching weeks and working modularly, the whole unit comes together coherently.
Conversation 5: Assessment Integration
I've now designed all 10 weeks. Here's a summary of all learning outcomes
and activities: [paste summary]
Design a capstone assessment that integrates learning from across the unit.
What should students do? How would you evaluate whether they've achieved the
outcomes?
Result: A coherent, well-integrated unit with assessment that ties it together.
5.14 Context Management for Different Scenarios
5.14.1 For Unit Redesign
Break down like this:
1. Conversation 1: Plan (topics, sequence, learning arc)
2. Conversation 2: Learning outcomes (aligned to plan)
3. Conversations 3+: One section per conversation (activities, assessments, etc.)
4. Final conversation: Integration (how it all connects)
Benefit: Quality outputs. Each conversation focuses on one aspect. By the end, you have a coherent unit designed through multiple focused conversations.
5.14.2 For Course-Level Change
Break down like this:
1. Conversation 1: Architecture (major themes, year-long learning arc)
2. Conversation 2: Learning outcomes for the year (connected to architecture)
3. Conversations 3+: One unit per conversation (each unit designed fully)
4. Final conversation: Integration (how units connect, capstone design)
Benefit: Coherence across the year. Each unit is designed well. The course flows logically.
5.14.3 For Assessment Redesign
Break down like this:
1. Conversation 1: Assessment strategy (what to assess, how, when)
2. Conversation 2: Individual assessment design (one assessment at a time)
3. Conversation 3: Rubrics (one per assessment)
4. Conversation 4: Student communication (handouts, success criteria, examples)
Benefit: Assessments that actually measure what you care about. Clear communication to students.
5.15 When Context Management Matters Most
Context management is most important when:
- You’re doing complex, multi-step projects (unit redesigns, curriculum overhauls)
- Quality matters (teaching materials, student-facing work)
- You need consistency (prompt libraries, course coherence)
- You’re iterating (refining approaches based on feedback)
- You’re teaching students to use AI (modelling good context management)
For quick, one-off tasks (generating a single prompt, quick idea generation), context management is less critical. But for the substantial work you do as an educator, managing context improves quality dramatically.
5.16 Key Principles Summary
- Break complexity into steps - Ask for a plan before diving in
- One task per prompt (usually) - Devote output tokens to one thing at a time
- Use output constraints - Specify length and format to manage token allocation
- Keep conversations focused - One major project per conversation
- Summarise and handoff - When conversations get long (50+ exchanges), reset with a summary
- Make context explicit - Don’t assume AI understands implicit information
- Batch similar tasks - If you need 5 of the same thing, ask for all 5 at once
- Review everything - Always quality-check outputs
The underlying principle: Context management is about respecting the AI’s limitations while maximising its strengths. You’re not trying to have perfect conversations; you’re trying to have focused conversations that produce high-quality outputs consistently.
5.17 Why Students Should Learn This
As you teach students to use AI, context management becomes a critical skill. Students who understand context management will:
- Get better results from AI (more usable outputs, fewer iterations)
- Work more efficiently (fewer wasted conversations)
- Produce higher-quality work (depth over breadth)
- Develop professional AI literacy (understanding how to work with AI tools effectively)
Consider teaching context management explicitly:
- Show students your workflow (how you break tasks into steps)
- Model managing long conversations (summarise, start fresh)
- Have students practice the “one task per prompt” principle
- Discuss why quality suffers when asking for too much at once
This transfers from classroom to professional practice. If your students graduate understanding how to manage context with AI, they’ll be more effective professionals.
5.18 Your Next Step
Pick a project you’re currently working on or about to start:
- Define it clearly: What are you trying to accomplish?
- Ask AI for a plan: Before diving in, ask AI to help you structure the work
- Break into steps: Work through the plan one step at a time
- Keep it focused: One conversation per major project
- Review everything: Quality-check before moving forward
As you do this, notice:
- How much more focused your outputs are
- How much easier iteration becomes
- How much less rework you need to do
Then bring that experience to your teaching. Your students will benefit from seeing how you work with AI effectively.