Preferences in ContextStream: The memory event that tells the agent what you prefer.
Most AI friction isn't about what the agent builds; it's about how. Tone, level of detail, planning style: the small corrections you give every session and lose every session. Preferences are how that guidance starts traveling with you instead of resetting every time.
Preferences are how your AI learns how you want the work done
I was building a custom markdown editor called RaftDraft.
The first session, the model wanted to build on Tiptap. I declined. The second session, it tried Tiptap again. And again after that: Tiptap kept surfacing in plans, within and across sessions.
Each time I declined. I wanted a hand-rolled block editor. No framework. No lock-in. Full control.
The same thing happened with the slash command menu. Every fresh session reached for a third-party suggestion library. Every fresh session got the same correction from me: I want to build it custom.
None of these were architecture decisions. They were not hard rules. The project could have used Tiptap. It could have used a suggestion library. It would have worked.
But that's not how I wanted the work done.
The friction wasn't the disagreement. It was the repetition. I was steering the same way, over and over, in every session. The moment a session ended, the guidance vanished with it.
That's the gap a preference fills.
The problem
A lot of AI friction comes from restating the same working style over and over.
Be more concise. Show your reasoning. Plan before you code. Follow the existing patterns. Use a direct tone. Break this down more clearly. Comments should explain the why, not just the what.
None of those are project architecture decisions. They are not hard rules. But they matter every time work starts.
Without a durable way to preserve them, every new session begins with small corrections. The agent eventually gets closer to how you like to work, but then the session ends and the learning disappears with it.
That is wasted effort.
What a preference is
In ContextStream, preferences are your personal and team guidelines that tell AI agents how you like work to be done.
They shape the style, approach, and behavior of every agent without you having to repeat yourself in every session.
A preference is not a hard constraint. It is durable guidance about how work should be approached, explained, structured, or delivered.
What counts as a preference
A good preference captures how you like work done, not just what should be done.
It helps agents align with your style by default so they stop making the same avoidable mistakes in tone, process, or execution.
What counts:
- A preferred response style
- A preferred planning approach
- A preferred code style or implementation habit
- A communication style that improves collaboration
- A recurring way your team likes to evaluate trade-offs or structure work
Examples:
- "I prefer detailed, comprehensive responses with code examples and explanations."
- "I prefer structured planning before implementation, especially for complex features."
- "Follow established patterns in the existing codebase rather than introducing new approaches."
- "Be direct and honest — point out potential issues or trade-offs clearly."
What does not:
- A settled project rule that should always be followed
- A one-off request for a single task
- An unresolved observation that still needs validation
- A vague statement with no reusable behavioral guidance
If it sets a hard project truth, it may be a decision. If it captures an emerging pattern, it may be an insight. If it tells the agent how to work with you effectively across many tasks, it is probably a preference.
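That routing test can be sketched in code. This is a hypothetical illustration of the heuristic, not ContextStream's actual classification logic; the marker phrases are invented for the example.

```python
# Hypothetical sketch: routing a piece of guidance to the right memory type.
# The categories mirror the test above; the keyword lists are invented for
# illustration and are not ContextStream's real logic.

def classify_guidance(text: str) -> str:
    """Return 'decision', 'insight', or 'preference' for a saved note."""
    lowered = text.lower()
    # Hard project truths read like settled rules.
    if any(marker in lowered for marker in ("we use", "always use", "never use", "must")):
        return "decision"
    # Unvalidated observations read like emerging patterns.
    if any(marker in lowered for marker in ("i noticed", "it seems", "might be")):
        return "insight"
    # Guidance about how to work together defaults to a preference.
    return "preference"

print(classify_guidance("We use React for all front-end work"))
print(classify_guidance("It seems the flaky tests cluster on CI"))
print(classify_guidance("I prefer structured planning before implementation"))
```

Real systems would use something richer than keyword matching, but the fallthrough order captures the idea: rules first, observations second, everything about working style lands as a preference.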
How preferences are captured
Preferences are saved in three simple ways.
You add them directly
When you notice a consistent way you like things done, you can save it as a preference from the dashboard or desktop app.
That might be about explanation style, planning behavior, code structure, or tone.
Agents learn them from your interactions
As you work with AI agents, they can detect and save preferences based on your feedback.
If you keep correcting the agent to be more concise, more thorough, more creative, or more structured, that pattern can become a lasting preference.
They're extracted from past sessions
ContextStream can also review your conversations and identify recurring patterns in how you like to work, then turn those into preferences so future tasks start from better alignment.
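The core idea behind session mining is simple: a correction that recurs across sessions is probably a preference. Here is a minimal sketch of that idea; the session data, the matching, and the threshold are all invented for illustration and are not ContextStream's extraction pipeline.

```python
from collections import Counter

# Hypothetical sketch: if the same style correction shows up across
# several sessions, promote it to a preference.

def extract_preferences(sessions: list[list[str]], min_sessions: int = 3) -> list[str]:
    """Return corrections that recur in at least `min_sessions` sessions."""
    counts = Counter()
    for corrections in sessions:
        # Count each correction once per session, so one noisy session
        # cannot promote a one-off remark.
        for correction in set(corrections):
            counts[correction] += 1
    return [c for c, n in counts.items() if n >= min_sessions]

sessions = [
    ["be more concise", "plan before coding"],
    ["be more concise"],
    ["be more concise", "plan before coding"],
]
print(extract_preferences(sessions))  # ['be more concise']
```

A real extractor has to recognize that "shorter, please" and "be more concise" are the same correction, but the per-session counting is the key design choice: frequency across sessions, not within one, is what signals a durable pattern.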
When to save a preference
You should capture a preference when:
- You keep giving the same style or process correction
- The guidance applies across many tasks, not just one
- A new agent would work better if it knew this up front
- The preference helps reduce friction without becoming a hard rule
- You want your working style to travel across sessions and agents
A useful test is simple:
If I started a new session tomorrow, would I probably repeat this guidance again?
If the answer is yes, it is probably worth saving as a preference.
What to tell your AI
This should feel natural.
You might say:
- "Save this as a preference: keep responses concise and only include necessary explanation."
- "Remember that I prefer structured planning before implementation."
- "Capture a preference that we should follow existing codebase patterns instead of inventing new ones unless there's a strong reason."
- "Keep this as a preference: explain trade-offs clearly so I can make the final call."
The goal is not to formalize every passing request. The goal is to preserve the recurring guidance that makes future collaboration smoother.
How agents use preferences
Once saved, preferences become active guidance for your agents.
When an agent starts working on a task, ContextStream automatically brings in the most relevant preferences.
The agent uses them to adjust its approach.
That might mean being more concise, following a planning style you prefer, matching your tone, or choosing a level of detail that fits how you work best.
Preferences act as soft guidance rather than hard rules.
Agents should follow them by default, but they can adapt when the situation calls for it or when you give new instructions in the moment.
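One way to picture "brings in the most relevant preferences" is a retrieval step followed by soft-guidance framing. This sketch uses naive word overlap for scoring and an invented prompt wording; it is an illustration of the shape of the mechanism, not ContextStream's retrieval pipeline.

```python
# Hypothetical sketch: score saved preferences against the task, keep the
# top few, and frame them as defaults rather than hard rules.

def relevant_preferences(task: str, preferences: list[str], top_k: int = 2) -> list[str]:
    """Rank preferences by word overlap with the task and keep the top_k."""
    task_words = set(task.lower().split())
    scored = sorted(
        preferences,
        key=lambda p: len(task_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_guidance(task: str, preferences: list[str]) -> str:
    chosen = relevant_preferences(task, preferences)
    lines = "\n".join(f"- {p}" for p in chosen)
    # Framed as defaults, not constraints, so in-the-moment instructions win.
    return f"Follow these preferences by default:\n{lines}\n\nTask: {task}"

prefs = [
    "Prefer structured planning before implementation",
    "Keep responses concise",
    "Follow existing codebase patterns",
]
print(build_guidance("Plan the implementation of the slash command menu", prefs))
```

The framing line is doing the "soft guidance" work: preferences arrive as defaults the agent starts from, while anything you say in the moment still takes precedence.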
This is what makes preferences useful.
They create consistency without rigidity.
What changes later
A good preference changes future work in three ways.
Alignment
Agents start closer to how you actually like to work.
Efficiency
You spend less time repeating the same corrections.
Continuity
Your working style carries across sessions, tools, and agents instead of resetting every time.
That is the payoff.
Preferences do not just make responses feel nicer. They reduce collaboration friction and improve the quality of work from the start.
Priority level
Preferences usually sit below decisions and alongside other behavioral guidance.
Decisions define what the project has chosen and should be followed as settled truth.
Preferences guide style, approach, and behavior. Agents should align with them by default, but they do not outweigh hard project decisions.
Insights help agents reason better from observed patterns.
That hierarchy matters.
A preference should shape how the work is done, not override what the project has already decided.
Tips for writing strong preferences
Focus on how you like work done. Preferences guide style, approach, and behavior.
Be specific, but not overly narrow. "Always write concise code" is better than "I like code."
Use positive language. Say what you want instead of only what you do not want.
Make them reusable. The best preferences apply across many tasks and not just one isolated moment.
Review them occasionally. As your working style evolves, update or add preferences so agents stay aligned.
Combine them with decisions when needed. Use a decision for a hard rule like "We use React," and a preference for how you like that implemented, such as "I prefer functional components with clear comments."
Good examples of preferences
Response style
- "I prefer detailed, comprehensive responses with code examples and explanations."
- "Keep responses concise and to the point. Only include code and necessary explanations."
- "Always structure responses with clear headings and bullet points for readability."
Planning and approach
- "I prefer structured planning before implementation, especially for complex features."
- "Break down tasks into small, incremental steps with clear milestones."
- "Focus on the simplest working solution first, then iterate."
Code style and architecture
- "Prefer functional components and modern React patterns over class components."
- "Follow established patterns in the existing codebase rather than introducing new approaches."
- "Write clean, readable code with meaningful variable names and helpful comments."
Tone and communication
- "Use a friendly, collaborative tone in all responses."
- "Be direct and honest — point out potential issues or trade-offs clearly."
- "Explain the reasoning behind suggestions so I can make informed decisions."
Domain-specific examples
- Games: "Prioritize fun and engaging gameplay over minimal or technically perfect implementations."
- Web apps: "Focus on performance and clean user experience over adding extra features."
- Backend work: "Emphasize security, error handling, and clear logging in all code."
Working habits
- "Ask clarifying questions when requirements are ambiguous instead of making assumptions."
- "Suggest multiple options with pros and cons when there are different valid approaches."
- "Always consider backward compatibility and potential impact on existing code."
Common mistakes
Turning preferences into rules that are too rigid
If the guidance must always be obeyed, it may be a decision instead.
Writing them too vaguely
"I want good code" is useless. "Write clean, readable code with meaningful variable names and helpful comments" is usable.
Making them too narrow
A strong preference should help across many situations, not only one exact prompt.
Using negative-only phrasing
It is usually more helpful to tell the agent what good looks like than to only describe what to avoid.
Letting old preferences pile up
If your working style changes, stale preferences can create unnecessary drag.
Why this beats repeating yourself in chat
If your preferences only live in chat, they disappear into session history.
That means every new agent has to relearn the same style corrections from scratch.
If you capture them as preferences, your way of working becomes durable context.
Now the guidance shows up when the work starts. The agent can align faster. You spend less time steering and more time moving.
Quick way to start
Begin with four to six preferences that reflect how you most often correct or guide your AI agents.
These usually fall into response style, planning approach, and code style.
Once those are in place, you will notice agents start aligning with your preferences automatically.
Broader takeaway
This is the real value of preferences in ContextStream.
They turn your personal working style into durable guidance that every agent can respect automatically.
That means less repetition. Better alignment. And a working relationship with AI that improves instead of resetting every session.
P.S. — Preferences and your personal agents
Preferences are not just for your coding agent.
They travel with you across every agent you use — your writing agent, your research agent, your inbox triage agent, the one that drafts your meeting notes.
The same working style applies. The same pet peeves apply. The same "do it like this, not like that" applies.
Without preferences, every personal agent is a stranger you have to retrain. With them, your agents start the relationship already knowing how you work.
That is the quiet upgrade. You stop being the onboarder. You start being the operator.
Related Reads
May 9, 2026
Decisions in ContextStream: The memory event that tells the agent what's been decided.
AI coding agents reach for "remembers everything" by default, but total recall is exactly what makes them noisy and unreliable. ContextStream was built on the opposite premise: remember the right things and surface them at the right times. The most important "right thing" is a decision, which is a settled, project-level choice your team treats as durable truth. This post explains what counts as a decision, how to capture them with minimal ceremony, and why a hierarchy of decisions, preferences, and insights keeps agents from quietly undoing the choices you have already made.
May 2, 2026
My Coding Agent Did the Dumbest Thing. It Was Totally Avoidable.
Last month I watched Opus 4.6 plan a brand new parts database. I already had a parts database. The agent didn't know. That's the tax you pay when your AI has spiky knowledge and no framework filling in the gaps.
Ready to build with persistent context?
ContextStream keeps your team decisions, code intelligence, and memory connected from first prompt to production.