Our goal: ship a conversational, agentic view builder — a system where you give Mem a short title like "Mem Teams initiative" or "Angi deal", and an agent finds relevant context in your notes, composes an appropriate layout from a library of UI building blocks, synthesizes the content, keeps itself up to date as new material lands, and lets you reshape it just by talking to it.
How to read this: top sections are the briefing; lower sections add depth. Smart Views is the most conceptually open project of the three, so we've made opinionated calls on scope below so we can get moving on Day 1. The "one question we're not deferring" section is the most important part of this doc — read it carefully.
TL;DR
- You type a short title ("Mem Teams initiative"). An agent goes and builds the view from what already exists in your Mem.
- The view is composed from a library of UI building blocks — sections, tables, lists, summaries, action cards — that the agent picks and arranges for the context. Not a templated layout; an LLM-generated one.
- As you capture new source material, the view automatically incorporates what's relevant. No tagging, no filing.
- You can reshape the view conversationally — "add a table of open deals at the top" — and those structural edits are retained as part of the view's ongoing state so they survive regeneration.
- Provenance: any synthesized statement traces back to a source note.
- Our hack-week bet: build the system end-to-end (title → agent-composed view → auto-incorporation → conversational iteration). Pick one scenario everyone can relate to as the demo vehicle — probably a real initiative or customer, not a trip. We'll commit on Day 1.
Why this matters
Mem's positioning is "Your AI Thought Partner" — a system that remembers, organizes, and brings back. Smart Views is the clearest instantiation of that promise for the user's ongoing work. Collections today are bags of content; a Smart View is a living, synthesized surface that maintains itself.
For busy operators juggling multiple recurring threads (customers, deals, projects, initiatives), the job Smart Views does is: "open this thing and immediately see where I am, what's new, and what's next — without having built the dashboard by hand." That's what power users try to do today in notes apps and fail; that's what we make automatic.
What we're actually building
This framing matters enough to call out explicitly so we don't drift toward the wrong thing.
A Smart View is not a template with fixed sections. It is an agent-composed surface:
- Input: a short user intent — often just a title, optionally a sentence of description.
- Context gathering: the agent searches the user's Mem for related content.
- Composition: the agent chooses which UI building blocks to use (a timeline? a table? a prose summary? an action card?) and how to arrange them based on what it found.
- Population: the agent fills those blocks with synthesized content grounded in the user's source material.
- Maintenance: as new source material lands in Mem, the view decides whether and how to incorporate it.
- Iteration: the user tells the view what to change conversationally; structural edits are persisted and survive regeneration.
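To make the lifecycle above concrete, here is a minimal sketch of the state a Smart View might carry between regenerations and one regeneration pass over it. All names are illustrative, not an actual Mem schema, and `compose_layout` is a trivial stand-in for the LLM composition step:

```python
from dataclasses import dataclass, field

@dataclass
class SmartView:
    title: str                                    # e.g. "Mem Teams initiative"
    description: str = ""                         # optional one-sentence intent
    blocks: list = field(default_factory=list)    # agent-composed layout
    edit_log: list = field(default_factory=list)  # persisted structural intent

def compose_layout(title, notes, edit_log):
    # Stand-in for the LLM composition step: the real agent picks and
    # arranges building blocks from the library; here we fake a layout
    # that at least honors the user's structural edit log.
    layout = [{"type": "section", "heading": title,
               "prose": f"Synthesized from {len(notes)} notes"}]
    for instruction in edit_log:
        layout.append({"type": "section", "heading": instruction, "prose": ""})
    return layout

def regenerate(view, notes):
    # One pass of the loop: gather (notes passed in) -> compose -> populate.
    view.blocks = compose_layout(view.title, notes, view.edit_log)
    return view
```

The point of the sketch is the shape of the state, not the composition logic: the edit log rides along with the view, so every regeneration sees it.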
The UI building-block library is the piece of infrastructure we need to get right. Things like:
- Section with heading + prose
- Timeline / itinerary
- Table (rows/columns agent-defined)
- List of items (with optional per-item action affordances)
- Key-value summary block
- Proactive suggestion card ("add this to the view?")
- Inline reference to another view
The agent is the one choosing which blocks to use and how to compose them. We define the blocks; the agent does the UI design.
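One plausible shape for that library is a small set of typed blocks the agent emits an ordered list of. This is a hypothetical sketch — the block names and fields are assumptions for illustration, not a committed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SectionBlock:
    heading: str
    prose: str

@dataclass
class TimelineBlock:
    entries: list                 # ordered (timestamp, text) pairs

@dataclass
class TableBlock:
    columns: list                 # agent-defined, e.g. ["Prospect", "Stage"]
    rows: list                    # row dicts keyed by column name

@dataclass
class ListBlock:
    items: list
    per_item_action: Optional[str] = None  # optional action affordance label

@dataclass
class KeyValueBlock:
    pairs: dict

@dataclass
class SuggestionCard:
    prompt: str                   # "Add this to the view?"
    accepted: Optional[bool] = None

@dataclass
class ViewRef:
    view_id: str                  # inline reference to another Smart View
```

Keeping the vocabulary this small is the point: the agent's creative latitude lives in which blocks it picks and how it fills them, not in inventing new UI.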
Core workflows (the hero paths)
1. Create a view from almost nothing
You type "Mem Teams initiative" into a new Smart View. Mem goes and reads your notes, pulls context from everything relevant — past meeting notes, prospect list, recent agent chat threads — and composes a view. You open it: a summary of the initiative at the top, a table of active prospects with status, a list of recent activity, a section of open questions. You didn't specify any of that structure. The agent chose it.
The magic starting point. Minimum user input → useful view.
2. Capture source material, view auto-incorporates
You finish a sales call with Angi. You dictate a voice note. It lands in Mem. A few minutes later, the Mem Teams Smart View has updated: the row for Angi in the prospect table now shows the new status, the "Recent activity" section lists the call with a summary, and a suggestion card appears at the bottom: "Draft the follow-up email Kelly asked for?"
You never told Mem this note belonged to the Mem Teams view. The view noticed and incorporated it.
This is the core loop. Capture is the user's only job; the views reshape themselves around it. Some incorporation is silent (obvious-fit content added directly); some is surfaced as a suggested change that the user can accept or reject. The agent uses discretion.
3. Open the view as the home for the topic
You tap the Mem Teams Smart View. You see the current state of your initiative at a glance — the shape you didn't design but that matches how you think about it. At the bottom is the raw source material, browsable.
Whatever sections the view ends up with — current state, prospect table, next-step suggestions, source material — those aren't hardcoded. They're the sections the agent tends to produce for this kind of topic given this kind of content. A different topic with different content would produce a different layout.
4. Reshape the view conversationally
You're looking at the Mem Teams Smart View. You type in a chat attached to the view: "Add a section at the top summarizing weekly progress." Mem adds it. Over the next week, as you capture new material, that section updates.
Later: "Move the prospect table above the activity feed." Done. "Actually, group the prospect table by deal stage." Done.
This is the crux workflow. Users don't customize via settings panels — they tell the view what they want. Those structural instructions are persisted as part of the view's ongoing state (think: a living prompt + edit history the view carries with it), so the layout and shape they've coerced it into survive regeneration as new data lands.
5. Trace synthesis back to source
A line in the view reads: "Angi is interested in a Q3 pilot." You click it. Mem shows you the source — the meeting note from last Tuesday where Kelly said exactly that.
Provenance is essential. The user has to be able to audit any synthesized claim back to the source. This is what makes the view trustworthy rather than a black box that might be hallucinating.
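A minimal sketch of what this implies for the data model, assuming hypothetical names: every synthesized statement keeps the ids of the notes that informed it, so a click can resolve back to the original material.

```python
from dataclasses import dataclass

@dataclass
class SynthesizedStatement:
    text: str
    source_note_ids: list  # ids of the notes this claim was grounded in

def resolve_sources(statement, note_store):
    """Resolve a clicked statement to its source notes.
    note_store is a stand-in for Mem storage: a dict of note_id -> body."""
    return [note_store[nid] for nid in statement.source_note_ids]
```

The key constraint is that provenance is attached at synthesis time, not reconstructed by search afterward — otherwise the audit trail can drift from what the agent actually used.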
Our scoping bet for the week
Smart Views is conceptually big. The bet:
Build the system (title → agent-composed view → auto-incorporation → conversational iteration) end-to-end. Pick one scenario we can all relate to and demo from it.
The scenario is the proof point; the system is the work. What we're not doing:
- Hand-designing a specific view's sections and calling that "Smart Views." That's a template and it's the opposite of the concept.
- Building a settings-UI for configuring views. Configuration is conversational.
- Trying to generalize across many scenario types this week. One strong scenario that lives and breathes > a framework that supports all of them on paper.
What we are doing:
- A real building-block library — even if small — that the agent composes from.
- Real context gathering over a real Mem.
- Real auto-incorporation of new source material.
- Real conversational iteration with persisted structural edits.
Scenario selection — our one big open call for Day 1
We need a scenario that:
- Every demoer has real content about (in a real Mem we can seed).
- Feels like real work, not a toy. (A trip could work but risks feeling like a fake use case.)
- Is plausibly close to the ICP — a project, an initiative, a customer, a deal.
Strong candidates:
- Mem Teams initiative — naturally work-related, meta but concrete, everyone has context.
- A specific customer inside Mem Teams — narrower, still relatable, good shape (prospect dossier).
- A cross-functional project one of us owns (hack week itself? a recent launch?).
We'll commit by end of Day 1.
Milestones
M0 — Pre-retreat if possible: seed the data
Gather real Mem content for the chosen scenario (enough that the agent has something substantive to compose a view from). If we can get this done before arriving in NYC, we save a full day of setup.
M1 — Title → view (Days 1–2)
- User types a short title + optional description.
- Agent searches their Mem, gathers context, decides on a layout from the building-block library, populates it with synthesized content.
- Result renders as a usable view.
- The demo moment: "type three words, get a useful view." Without this, nothing else matters.
M2 — Automatic incorporation of new source material (Days 2–3)
- When a note is captured or edited in Mem, it's debounced (similar to the Mem Agent pattern — wait for ~5 minutes with no further edits).
- Then relevant Smart Views wake up and decide how to incorporate (or whether to incorporate at all).
- Direct incorporation for obvious-fit content; suggested incorporation (via a suggestion card in the view) for ambiguous cases.
- The demo moment: dictate a new note; open the view a few minutes later; see it has changed.
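The debounce step above can be sketched as a small tracker: record the last edit time per note, and only report a note as ready once it has been quiet for the full window. This is an assumption-laden illustration (class and method names are invented); the 5-minute constant comes from the Mem Agent pattern referenced above.

```python
import time

DEBOUNCE_SECONDS = 5 * 60  # ~5 minutes of no further edits

class IncorporationDebouncer:
    """Track note edits; report which notes have gone quiet long enough
    that relevant Smart Views should wake up and consider incorporating."""

    def __init__(self, debounce=DEBOUNCE_SECONDS):
        self.debounce = debounce
        self.last_edit = {}  # note_id -> timestamp of most recent edit

    def note_edited(self, note_id, now=None):
        # Any further edit resets the clock for that note.
        self.last_edit[note_id] = now if now is not None else time.time()

    def ready(self, now=None):
        # Return (and clear) the notes whose debounce window has elapsed.
        now = now if now is not None else time.time()
        done = [nid for nid, t in self.last_edit.items() if now - t >= self.debounce]
        for nid in done:
            del self.last_edit[nid]
        return done
```

A `now` parameter is threaded through so the behavior is testable without waiting out real time; in production the clock calls would suffice.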
M3 — Conversational iteration (Days 3–4)
- User can tell the view what to change ("add a section here," "move X above Y," "group by Z").
- Changes are applied immediately AND persisted as part of the view's ongoing state.
- When the view regenerates as new data lands, the structural instructions the user gave are honored — their layout decisions survive.
- The demo moment: restructure the view by typing two instructions; capture a new note; watch the regenerated view keep the user's structure.
M4 — Provenance + polish (Day 5)
- Any synthesized statement in the view is clickable through to the source note(s) that informed it.
- Plus whatever polish is needed to make the demo sing.
Stretch
- "What's changed since you last looked" — a diff view at the top of the Smart View. Really useful for recurring-return topics and a strong vignette. Could be pulled into M4 if we have room.
- Proactive agent-suggested changes — the agent notices something and surfaces a card: "You've captured three new prospects since last view — add them to the prospect table?" The user accepts or dismisses.
Not in scope this week
- A second scenario/view type generalized from the first. (The system is general by construction; we don't need a second scenario to prove it.)
- Cross-view linking / a graph of views.
- Long-term persistent agent state beyond the view's edit history and ongoing prompt (see "Things Smart Views are not" below).
The one question we're not deferring: conversational iteration & structural memory
Most of the concept-level ambiguity in Smart Views (typed vs. untyped, tree vs. graph, relationship to Collections) we're deferring for the week. One question we are NOT deferring, because it's the UX and data-model crux:
How do we persist the structural intent a user expresses conversationally, so that it survives regeneration as new content lands?
Concretely: the user says "add a weekly-progress section at the top." That instruction has to:
- Apply to the view right now.
- Be retained in such a way that when the view regenerates tomorrow because new notes landed, the weekly-progress section is still at the top — not reverted to whatever the agent originally composed.
- Be editable / revocable by the user.
- Interact cleanly with subsequent instructions ("actually, put it at the bottom" — replaces? layers?).
Our working model (subject to refinement by the team during the week): each Smart View carries an edit log of user-expressed structural intent. On regeneration, the agent takes the view's title + context + edit log as its prompt. The edit log preserves the spirit of what the user wanted, even when the surface content underneath changes.
This applies to direct edits too. If the user hand-edits a synthesized section, that edit needs to be logged and respected by future regenerations — the spirit of the edit, not necessarily the literal text. This is how we square "views are self-maintaining" with "users can shape them."
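Under the working model above, a regeneration prompt might be assembled roughly like this — a sketch only, with illustrative prompt text, not an actual production prompt. The important property is that the full edit log is replayed in order, so later instructions ("actually, put it at the bottom") can supersede earlier ones:

```python
def build_regeneration_prompt(title, context_snippets, edit_log):
    # Assemble the agent's prompt from the view's three pieces of state:
    # title, gathered context, and the ordered log of structural intent.
    lines = [f"Compose a view for: {title}", "", "Context:"]
    lines += [f"- {s}" for s in context_snippets]
    if edit_log:
        lines += ["", "The user asked for these structural changes, in order"
                      " (later instructions supersede earlier ones where they conflict):"]
        lines += [f"{i + 1}. {instr}" for i, instr in enumerate(edit_log)]
    return "\n".join(lines)
```

Whether "supersede" should mean replacement or layering is exactly the open design question; an ordered log at least defers that decision to the agent's interpretation rather than baking it into the data model.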
This is the piece we most need to figure out. Everything else is an execution question; this is a design question.
More example vignettes
- The initiative dashboard. Type "Mem Teams." The view shows up with a summary, a prospect table, recent activity, and open questions — all inferred from existing notes.
- Captured → updated. Dictate a sales-call note; the relevant Smart View incorporates it automatically within minutes.
- User reshapes the view. "Group the prospect table by deal stage, and add a section for blockers." Done. Next regeneration honors both.
- Auditing a claim. User reads a synthesized statement, clicks it, sees the source note. Confidence restored.
- The suggestion card. Agent notices three new prospects not yet in the table and surfaces: "Add these to the prospect table?" User taps yes.
Framing questions we are deferring for this week
Each of these deserves a real answer eventually. None of them block a great hack-week demo.
- Typed vs. untyped vs. object-native. Is "Smart View" one generic primitive, or do we have named types (Project View, Customer View), or do we drop the word entirely and give Mem first-class objects like "Projects" and "Customers"? Our bet (generic + LLM-composed) works under any of these framings.
- Tree vs. graph. Can Smart Views contain other Smart Views? For the week, scope flat. Note learnings as we go.
- Relationship to Collections. Is a Smart View the evolution of a Collection or a new thing layered on top? Don't resolve; just avoid conflicts with existing Collection semantics in the ingestion path.
Key decisions to make on Day 1
- Scenario. Commit to one. Real content from a real Mem. Most likely Mem Teams initiative or a specific customer within it; Trip is off the table.
- The building-block library. Pick the starting set of UI blocks the agent can compose from (section + prose, table, list, timeline, key-value block, suggestion card). Two hours of whiteboarding; don't over-design. The library will grow.
- How we represent the view's structural memory. The edit-log model above is a working draft. Pressure-test it Day 1. This is the piece that most needs a coherent answer early because downstream work depends on it.
- Incorporation discretion policy. When does an incoming note get directly incorporated vs. surfaced as a suggestion? A simple rule (confidence-threshold, or section-match-quality) for the week; refined as we see it in practice.
- Regeneration cadence. Pick the simplest sensible default (e.g., debounced 5 minutes after a relevant note changes; manual-refresh button available). Don't over-invest; it's tuning, not architecture.
- One proactive suggestion type for the demo. Pick one flavor of agent-suggested change (e.g., "add this to the prospect table") and drive it end-to-end, rather than building a generalized suggestion framework.
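For the incorporation discretion policy in the list above, the simplest Day-1 rule is probably a single confidence score from the matching step with two cut points. The thresholds below are placeholders to pressure-test, not recommendations:

```python
# Placeholder thresholds on the match confidence between an incoming
# note and a Smart View; tune against real captures during the week.
DIRECT_THRESHOLD = 0.8
SUGGEST_THRESHOLD = 0.4

def incorporation_decision(match_confidence):
    if match_confidence >= DIRECT_THRESHOLD:
        return "incorporate"  # obvious fit: update the view silently
    if match_confidence >= SUGGEST_THRESHOLD:
        return "suggest"      # ambiguous: surface a suggestion card
    return "ignore"           # not relevant to this view
```

Section-match quality could replace (or combine with) the raw confidence score later; the point is that the policy stays a one-liner until real usage says otherwise.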
What "done" looks like for the week
Minimum demo:
- Type a title → view appears, auto-composed from the user's real Mem content. It's useful.
Strong demo:
- All of the above, plus: dictate new content → view updates meaningfully on its own → user reshapes the view conversationally → reshaping sticks across regeneration.
Wow demo:
- The above, plus: provenance from synthesis to source, and at least one proactive suggestion card landed convincingly in the flow, and ideally the "what's changed since last time you looked" surface.
Things Smart Views are not (for this hack week)
- Not a Collections replacement. The relationship matters long-term, but for the week, Smart Views layer on top of existing capture primitives rather than replacing them.
- Not a long-term persistent agent. Long term, each Smart View may have its own ongoing agent with persistent state — that's plausible and we're not ruling it out. This week, the view's state is its title + content + structural edit log. We're not building a persistent agent memory layer separate from that.
- Not Heads Up. Heads Up surfaces related content while you work on something else. A Smart View is the destination you open to work on a topic. Different job, different surface.