
🔍 Behind the Prompt: Designing for Opal – The UX Pattern Framework

After reading a breakdown of 20+ GenAI UX patterns, complete with examples and implementation tactics, I found it refreshingly practical, the kind of thing many of us can learn from. To make the ideas easier to apply in context, I restructured the framework around how we think about and design for Opal, Optimizely's AI.

This version pairs each UX principle with real patterns from Opal in production.


1. Use AI Where It Matters

Apply AI only when it meaningfully improves the experience.
Automate time-consuming tasks, not decisions. Avoid novelty. Focus on value, speed, and augmentation over replacement.

🧭 Is AI the right solution for the job?
Focus on determining when and how to apply GenAI meaningfully.

  • GenAI or no GenAI — Opal doesn’t insert AI for its own sake. Features like content brief generation or transcription are grounded in specific, time-saving tasks.
  • Convert user needs to data needs — In Opal, campaign history, brand guidelines, and channel metadata are used to inform context-aware responses.
  • Augment vs automate — The Meeting Analysis Agent and Video Transcription Agent augment rather than replace; they prep summaries but don’t make decisions.
  • Define level of automation — Opal supports a range—from prompt-based assistance to agents that run predefined workflows.
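The automation spectrum above can be sketched as a simple model. This is an illustrative sketch, not Opal's actual implementation; the level names and approval rule are my own assumptions.

```python
from enum import Enum, auto

class AutomationLevel(Enum):
    """Hypothetical spectrum from prompt-based assistance to scoped workflows."""
    SUGGEST = auto()   # AI proposes; the user accepts or rejects each suggestion
    ASSIST = auto()    # AI drafts on request; the user edits and approves
    WORKFLOW = auto()  # an agent runs a predefined, scoped workflow
    # Full autonomy is deliberately absent: augment, don't replace.

def requires_user_approval(level: AutomationLevel) -> bool:
    """Anything short of a predefined workflow keeps the user in the loop."""
    return level in (AutomationLevel.SUGGEST, AutomationLevel.ASSIST)
```

Making the level explicit in code forces the design question: at which point does the user stop approving each step, and is that acceptable for this task?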

2. Make the AI Easy to Grasp

Introduce AI gradually and in context.
Use familiar metaphors like roles or assistants. Show what the AI can and can’t do, and always be upfront about data use and privacy.

📐 Help users understand what the AI is and how to use it.
Focus on setting expectations, shaping mental models, and guiding initial trust.

  • Progressive AI adoption — AI entry points in Opal are unobtrusive (e.g., undockable chat, in-situ buttons) and evolve with user confidence.
  • Leverage mental models — Agents in Opal are framed by familiar roles (e.g., “Industry Marketing Agent”), making their purpose intuitive. When viewing experiment results, Opal offers in-situ summarization—meeting users where they already are.
  • Convey product limits — Users are shown system constraints—for example, when context exceeds token limits or when data sources are missing.
  • Communicate data privacy and controls — Opal adheres to Optimizely’s enterprise-grade data privacy standards, with transparent policies on data handling and usage.

3. Let the User Stay in Control

Design for collaboration, not delegation.
Give users clear ways to prompt, edit, guide, and undo. Scope what agents can do, and allow users to start, pause, or change direction at any time.

🎛️ Let users steer, shape, and collaborate with the AI.
Focus on prompting, editing, recall, and how users direct the system.

  • Provide contextual input parameters — Prompt templates in Opal help users provide structured input, especially for repetitive tasks.
  • Design for co-pilot / partial automation — Users can edit, regenerate, or collaborate mid-task with AI-generated briefs, headlines, etc.
  • Define user controls for automation — Agents are scoped—users activate specific ones rather than letting AI roam freely.
  • Design for memory and recall — Opal threads retain session memory to support fluid conversations. Expanded long-term memory is on the roadmap.
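Two of the patterns above, structured prompt templates and thread-level session memory, can be sketched as minimal data structures. The class and field names here are illustrative assumptions, not Opal's API.

```python
from dataclasses import dataclass, field
from string import Template

@dataclass
class PromptTemplate:
    """Structured input for a repetitive task: named slots instead of freeform text."""
    name: str
    template: Template

    def render(self, **params: str) -> str:
        return self.template.substitute(**params)

@dataclass
class SessionThread:
    """Minimal session memory: the thread retains turns for a fluid conversation."""
    turns: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def context(self) -> list:
        """Everything accumulated so far, passed back to the model each turn."""
        return list(self.turns)

# Usage: a hypothetical content-brief template feeding a session thread.
brief = PromptTemplate(
    name="content_brief",
    template=Template("Draft a content brief for $audience about $topic in a $tone tone."),
)
thread = SessionThread()
thread.add("user", brief.render(audience="CMOs", topic="A/B testing", tone="practical"))
```

The template gives users a guided, repeatable way to supply input; the thread is what lets a follow-up like "make it shorter" work without restating the task.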

4. Expect Things to Break — and Design for It

Handle errors gracefully.
Be ready for vague prompts, failed completions, or missing data. Offer recovery paths, retry options, and clear feedback loops that help the system learn and improve.

⚠️ Prepare for what can go wrong—and learn from it.
Focus on error states, feedback capture, and continuous improvement.

  • Design for user input error states — Opal catches vague prompts and asks for clarification or suggests better phrasing.
  • Design for AI system error states — Timeouts, incomplete responses, or failures are surfaced clearly with retry options. We’re actively improving error handling to ensure users don’t get stuck.
  • Design to capture user feedback — Users will soon see in-context feedback options like thumbs up/down and “Was this helpful?” prompts.
  • Design for model evaluation — Admin users can compare different models and prompt configurations to determine the most effective setup.
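The "retry options" pattern above implies some machinery underneath. Here is a minimal sketch of retry-with-backoff for transient AI failures, assuming a hypothetical `generate` callable and `AITimeoutError`; the names are mine, not Opal's.

```python
import time

class AITimeoutError(Exception):
    """Illustrative stand-in for a transient model failure (timeout, truncation)."""

def call_with_retry(generate, prompt, retries=3, base_delay=1.0):
    """Retry transient failures with exponential backoff.

    If every attempt fails, re-raise so the UI can surface the error
    clearly and offer the user a manual retry instead of a dead end.
    """
    for attempt in range(retries):
        try:
            return generate(prompt)
        except AITimeoutError:
            if attempt == retries - 1:
                raise  # out of attempts: surface the failure to the user
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying
```

The key design choice is the last branch: silent swallowing of the final failure is what leaves users stuck, so the error must propagate somewhere visible.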

5. Make AI Behavior Transparent

Show your work.
Let users see what steps the AI took, what data it used, and why it made a decision. Provide multiple options where helpful. Reveal limitations and model reasoning whenever possible.

🔍 Make the AI’s thinking visible and safe.
Focus on showing how the AI works, why it responded a certain way, and building trust. This is an active area of development; some of the patterns below are live today, while others are planned improvements:

  • Display chain of thought — Agents in Opal display their reasoning or steps, such as “Analyzing tone…” or “Filtering by campaign goal…”. Users will be able to drill down into which tools or agents were used during execution.
  • Leverage multiple outputs — Users often get multiple options, not just one, and can iterate from there.
  • Provide data sources — Outputs reference past campaigns, CMS content, or linked metadata.
  • Convey model confidence — Model confidence indicators are not yet exposed in Opal, but are under active exploration.
  • Design for AI safety guardrails — Opal includes built-in safety settings that block high-risk outputs across key harm categories: dangerous content, hate speech, sexually explicit material, harassment, and unspecified harms. These are configured at a ‘block only high’ threshold, balancing safety with generative flexibility. Brand-specific compliance filters can also be layered on as needed.
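The "block only high" guardrail described above can be sketched as a small policy check. The category and threshold names mirror the prose but are illustrative assumptions, not Opal's actual configuration schema.

```python
# Hypothetical safety policy: one threshold per harm category.
HARM_CATEGORIES = [
    "dangerous_content",
    "hate_speech",
    "sexually_explicit",
    "harassment",
    "unspecified_harm",
]

# "block_only_high" blocks an output only when its risk is rated high,
# balancing safety with generative flexibility.
SAFETY_SETTINGS = {category: "block_only_high" for category in HARM_CATEGORIES}

def passes_guardrails(scores: dict) -> bool:
    """Return False if any category's rated risk crosses its block threshold."""
    for category, threshold in SAFETY_SETTINGS.items():
        if threshold == "block_only_high" and scores.get(category) == "high":
            return False
    return True
```

Brand-specific compliance filters would layer on as additional entries in the same settings map, which keeps policy auditable in one place.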

6. Design for Safety and Trust

Safety isn’t optional—it’s built in.
Build in strong guardrails to prevent harmful or off-brand content. Align AI behavior with enterprise data policies, brand standards, and user expectations.

(This principle is woven through the "Make AI Behavior Transparent" section above, where Opal's safety guardrails and data policies are described.)


To dive deeper, I’ll be sharing bite-sized insights from this framework in upcoming posts—stay tuned for how these principles play out across real product moments in Opal. We’re still learning and iterating every day. Hopefully, this framework offers a useful lens—or a starting point—for other teams designing GenAI experiences in the real world.


Let me know if you’d like this turned into a Figma-ready slide deck or a blog-style format next.
