Anthropic Claude · Conceptual · 2025

Context as Infrastructure: Designing Model Transparency in Anthropic's Claude

Claude case study hero — context panel and inline commands in the Claude interface
Role: Product Designer
Timeline: August – September 2025
Team: 3 Designers
Skills: Product Design, Product Strategy, Prototyping

Overview

How might we help users manage the context they share with LLMs?

As a team of product designers & PMs, we spent a semester exploring fundamental usability challenges in Large Language Models & AI interfaces.

Product Research & Synthesis

Understanding how users currently share and manage context with LLMs

Ideation & Prototyping

Going wide in ideation and rapidly testing concepts with users.

Iterating with Feedback & Validation

Continuously iterating on concepts and validating product decisions.

The Problem

AI outputs depend heavily on user-provided context, but users have almost no control over it

I noticed people creating dozens of separate chats for the same project. When I asked why, they said it was the only way to reset what the AI remembered.

This breaks the fundamental contract of deterministic software. When I click "save" in Word, I know what was saved. When I upload a file to Claude, I have no idea what it's remembering, how long it will remember it, or when it gets referenced.

Opportunity

Through competitive analysis, I found that no consumer-ready solutions managed context well

No consumer-ready solutions streamline context sharing and management in a single, accessible experience. Some competitors leverage powerful context-sharing protocols like MCP, but these require additional technical implementation that puts them out of reach for most users.

Others build “out of the box” applications, but these don't connect context across products and platforms, leaving users siloed. This represented a significant design opportunity: build a context management system that is both powerful and accessible to everyday AI users.

Competitive analysis of context management solutions across AI products

Our Solution (for now, AI moves really fast)

I brought context management to the forefront of Claude's interface & added in-line commands to aid the user journey from import to execution

Our final design combines two complementary features built into the Claude interface: a Context Panel for managing what the model references, and In-line Commands for accelerating repetitive tasks and pulling in context from other applications.

The final feature implementation showing the Context Panel and In-line Commands
The final feature implementation

User Research

Understanding how people actually share context with AI

We started with desk research to map the landscape of AI context — defining the context window, context materials (chat history, system instructions, external data, project memory), and the interaction patterns users rely on today.

Sometimes I've had to copy and paste the exact same thing, going back and forth multiple times.

Avery Lee

Graduate Student @ UC Berkeley

I turned off the setting that has memories from previous chats, I don't like how the memory and history affects my future chats because I don't know what it takes into account.

Kaiona Martinson

Graduate Student @ UC Berkeley

Users struggle to capture relevant information

from their workspace without repeated copy and paste between applications and chat windows.

There is a lack of transparency

regarding what the AI tool knows or is referencing at any given point in a conversation.

Context gets lost across conversations

forcing users to restart or re-explain their projects, preferences, and prior decisions every time they open a new chat.

Conceptualization & Testing

To tackle these user issues, we came up with three ideas

Context Control Panel

Give users layer-by-layer control over what the model references at any time

In-line Commands (e.g. Slack Style)

Let users invoke pre-embedded prompts & import context from previous interactions

Collaborative AI for Teams

Allow team chats with the LLM to create a shared context pool
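To make the Context Control Panel concept concrete, here is a minimal sketch of how its layered model could be represented. All names and fields here (`ContextLayer`, `activeContext`, the token counts) are hypothetical illustrations of the design idea, not Claude's actual implementation.

```typescript
// Hypothetical data model for the Context Control Panel concept.
// Each "layer" is a piece of context the user can toggle on or off.
type ContextLayer = {
  id: string;
  label: string;    // e.g. "Chat history", "Project memory", an uploaded file
  enabled: boolean; // toggled directly from the panel
  tokens: number;   // approximate size, surfaced for transparency
};

// Only enabled layers reach the model, so the user can see and
// control exactly what it references at any point in the conversation.
function activeContext(layers: ContextLayer[]): ContextLayer[] {
  return layers.filter((l) => l.enabled);
}

// Summing active layers lets the panel show how much of the
// context window the user's current selection occupies.
function tokenBudgetUsed(layers: ContextLayer[]): number {
  return activeContext(layers).reduce((sum, l) => sum + l.tokens, 0);
}

const layers: ContextLayer[] = [
  { id: "history", label: "Chat history", enabled: true, tokens: 1200 },
  { id: "memory", label: "Project memory", enabled: false, tokens: 800 },
  { id: "file", label: "uploaded-spec.pdf", enabled: true, tokens: 3000 },
];
```

In this sketch, toggling "Project memory" off means its tokens are simply never sent to the model, which is the transparency users asked for: no hidden memory influencing future chats.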

Testing different prototypes with users
Testing different prototypes with our users

What We Discovered

Context is the through-line behind every interaction. By combining context control and in-line commands, we can serve the user through the whole process


Diagram showing how context panel and inline commands address the full user journey

Final Designs

I combined the context panel & in-line commands to serve the user across the entire journey

Final design showing the combined context panel and in-line command features

Features

Context panel feature — managing active context layers
In-line commands feature — slash command interface for importing context
Full feature overview — context panel and commands working together

Reflection

What I learned

Context management is a journey, not a feature. The most important thing this project taught me is that the user's relationship with context changes throughout a conversation.

Designing for a single moment — like the initial prompt or mid-conversation adjustment — misses the full picture. The strongest solution addressed the entire flow.

Designing for AI requires new interaction paradigms. AI tools are rapidly changing with many open-ended use cases, leading to a wide range of user behaviors. There's no single “right” way people use these tools, which makes designing for them both challenging and fascinating.

Features are constrained by model capabilities. The effectiveness of many potential features is still fundamentally limited by what LLMs can actually do today. Designing for AI means understanding both the user's needs and the technology's current boundaries.