What a Small Prompting Study Taught Me About Interface Design

In one of my CSE 190 assignments, I ran a small study where a partner had to write prompts for an AI system under two conditions:

  1. Unguided: “Just write a prompt that does X.”
  2. Guided: “Use this template and follow these hints.”

The task itself was simple (generate a short story about a vending machine that becomes sentient), but the differences were surprisingly clear:

  • With no guidance, my partner relied on intuition and trial and error.
    The prompts were shorter and more ambiguous, and the outputs varied widely in tone and detail.
  • With lightweight scaffolding (reminders about length, tone, audience, and constraints), the prompts became much more structured and the outputs were consistently higher quality.

From a UX perspective, the interesting part was not the “prompt engineering tricks” themselves but how the interface shaped my partner’s mental model:

  • Checklists and examples nudged them to think more like a designer and less like a casual chat user.
  • A few well-chosen fields (“tone”, “constraints”, “audience”) reduced frustration, because my partner felt less like they were guessing. (A rough sketch of what such a builder might look like follows this list.)
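
To make this concrete, here is a minimal sketch of the kind of guided prompt builder I have in mind. It is illustrative only: the field names, template wording, and example values are my own assumptions, not the exact scaffold from the assignment.

```python
# A minimal sketch of a guided prompt builder.
# NOTE: field names and template wording are hypothetical, not the
# exact scaffold from my CSE 190 assignment.

from dataclasses import dataclass


@dataclass
class GuidedPrompt:
    task: str         # what the model should produce
    tone: str         # e.g. "whimsical", "deadpan"
    audience: str     # who the output is for
    constraints: str  # length limits, things to avoid, etc.

    def render(self) -> str:
        # Assemble the named fields into one structured prompt string,
        # so the user fills in blanks instead of facing an empty box.
        return (
            f"Task: {self.task}\n"
            f"Tone: {self.tone}\n"
            f"Audience: {self.audience}\n"
            f"Constraints: {self.constraints}"
        )


# Example: a guided version of the vending-machine story prompt.
prompt = GuidedPrompt(
    task="Write a short story about a vending machine that becomes sentient.",
    tone="whimsical but grounded",
    audience="general readers",
    constraints="under 300 words; end on an open question",
).render()
print(prompt)
```

The point is not the code itself but the design move it encodes: a few named blanks turn “stare at an empty box” into “fill in a short form,” which is exactly where my partner’s guessing (and frustration) dropped off.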

The biggest takeaway for me is that interfaces for AI are not neutral. Even simple choices — like whether you show an empty textbox or a guided prompt builder — can change:

  • how confident people feel,
  • what kinds of questions they ask, and
  • how effectively they can use the system.

Design for AI is not just about what the model can do, but about how clearly the interface teaches people to collaborate with it.