Prompt 101

The Goldilocks Dilemma: Hunting for the “Just Right” Moment in AI Research

A real look at what’s happening in my classroom: the raw process of designing for learning in an AI-powered world (with a dash of humour).

The Setup: Why This Play?

I’m always trying to find that “just right” point where students learn enough to tackle big issues without getting overwhelmed. It’s like trying to get the equalizer exactly right on a killer track so they can actually enjoy it.

Enter AI.

Tools like ChatGPT change the game by letting students skip a bunch of manual info-gathering.

So how do we handle that?

Is it still valuable to teach kids how to do the grunt work if AI does it in 0.2 seconds?

That nagging question set the scene for our lesson comparing two very different business models: Shein vs. Patagonia.

The Game Plan: How I Designed the Play

The Objective: Give students a real-world look at global supply chains and ethical dilemmas by comparing two big-name clothing companies with wildly different values.

The Activity:

  1. Have students build empathy with a “ladder of inference,” stepping into the shoes of factory workers, business owners, and consumers.

  2. Create two opposing business models—one leaning on cost efficiency, the other on sustainability.

  3. Use a single online source for each company to keep them focused (and to reduce the number of rabbit holes they could wander down).

AI Integration:

  • I wanted them to use ChatGPT to synthesize info, but without me spoon-feeding prompt templates.

  • My hunch? They’d submit generic prompts like, “Hey ChatGPT, do my research for me.”

  • I designed the lesson to help them realize that if you ask ChatGPT for something vague, you’ll get the kind of answers you’d expect at 3 a.m. from a sleep-deprived friend—kinda helpful, but mostly missing details.

The Execution: What Actually Happened?

So, as predicted, things went a little bananas:

  • The “One-and-Done” Crew: Some students copied the entire article, pasted it into ChatGPT, and said, “Make me a business model.” It was like slapping a bunch of random ingredients in a blender and hoping for a Popeye’s protein shake. The result? General outlines missing any real spice or flavour.

  • The Tinkerers: A few groups outlined their project in ChatGPT, asked for a summary, and then refined their prompts. They re-prompted until they got something workable. That definitely paid off in terms of clarity.

  • The Overachievers: One pair ran prompts in a funnel—narrowing down exactly the info they needed for an easy side-by-side comparison. They basically treated ChatGPT like a personal intern. Impressive stuff!

  • The Purist: One student said “No thanks” to AI and did all the reading, note-taking, and synthesizing the old-fashioned way. While everyone else was dancing with robots, this kid was reading like it was 1995.

Across the board, any group that relied on a single ChatGPT summary found it lacking in specifics. They got the gist, but not the rich data or nuanced insights that come from deeper reading.

The Breakdown: What This Play Revealed

Mind The Gaps:
Students still don’t grasp the power of detailed prompts. ChatGPT isn’t some mind-reading genie—if you ask lazy questions, you get half-baked answers. They needed to feed the AI with more context, details, and a sense of purpose.

Also, there’s a mindset that AI can “just do the work,” no follow-up required. Nobody tried reading the text first and then using ChatGPT to confirm or cross-check. They just hopped straight to “Hey, bot, do your thing.”
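
To put that gap in concrete terms, the difference looks something like this (my wording, not a prompt any student actually typed):

Lazy prompt: “Here’s an article. Make me a business model.”

Better prompt: “We’re comparing Shein and Patagonia for a class project on global supply chains and ethics. From the article pasted below, pull out the specific facts about how the company manufactures, prices, and markets its clothing, plus anything it says about workers and sustainability, and organize them into bullet points I can drop into a side-by-side comparison.”

Same article, same tool; the second version just tells the AI the purpose, the context, and the shape of the output you need.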

The Real Sticking Point:
How do I keep students practicing crucial research and critical thinking skills when AI can short-circuit that process? If they don’t learn these deeper analytical skills now, they’ll have a rude awakening the moment ChatGPT spits out partial truths or omissions that they fail to catch.

The Adjustments: What I’ll Try Next

  1. Prompting 101: I’m rolling out a short protocol that reminds students to tell AI what they’re actually trying to do, share relevant background info, and specify what kind of output they want. Call it the “Goldilocks approach” to prompting—finding that sweet spot where AI’s help is super productive.

  2. Make It a Two-Step Dance: First, students independently pull important data from their source. Then (and only then!) they bring that data into ChatGPT for structuring or synthesizing. Think of it like reading the instructions before trying to build IKEA furniture—less frustration, fewer missing screws.

  3. Reality Checks: Students will compare the AI-generated summary against the original source. That way, if the AI missed something crucial (like that time it said the business was founded by a llama farmer—didn’t happen), they’ll catch it before moving on.
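
Put together, adjustments 2 and 3 might play out something like this (a rough sketch, not a script I’ve tested with a class yet): a student reads the Patagonia article and jots down their own notes on where the clothes are made, how they’re priced, and what the company claims about workers and the environment. Then they hand those notes to ChatGPT with a clear job, something like, “Here are the notes I pulled from an article about Patagonia. Organize them into a business model outline that highlights costs, values, and ethical trade-offs.” Finally, they set the AI’s outline beside the original article and flag anything it invented, glossed over, or left out before it goes anywhere near their final comparison.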

Open Questions: What I’m Still Wrestling With

With AI doing the heavy lifting, does manual summarizing still teach anything meaningful?

In a world where so much can be automated, do we lose something important by skipping those “sweat equity” steps in research?

Or do we simply shift our focus to higher-level thinking—like evaluating sources, refining questions, and combining insights?

That’s the puzzle. And I’m still figuring out how to ensure that final learning sticks and isn’t just regurgitated by an algorithm. But hey, at least we’re having fun along the way—like any good classroom experiment, we’ll keep tinkering until we find that elusive “just right” zone.

Click here to receive a copy of “Prompting 101: The ‘Goldilocks Approach’ to Getting AI’s Best Help.”

Sign up for the newsletter so you don’t miss a beat!

Now, get prompting!
