Protect Privacy, Attention, and Agency
A powerful system with no boundaries eventually feels invasive
The more your second brain knows, the more useful it becomes. It also becomes more capable of surfacing things you did not want surfaced, storing things you did not want stored, and acting in ways you did not intend. The line between helpful and invasive is not a technical boundary. It is a set of decisions you make deliberately.
These projects build the boundaries before you need them. Privacy tiers control which modules see which records. Approval boundaries define what the assistant can do independently and what requires your review. Source confidence marks which facts are verified and which are guesses. Sensitive data splits separate raw data from safe summaries. Do-not-remember protocols exclude entire categories from storage.
Boundaries are not limitations. They are design decisions that make the system trustworthy. A system you trust gets more data, more corrections, and more use. A system that feels invasive gets abandoned.

Privacy tiers control which modules see which records
Every record type in the system gets a privacy tier: full access (any module can read), restricted (only specified modules can read), or local-only (stored on your device and never sent to any cloud service). You assign the tier when you define the record type.
Contact names and interaction dates are typically full access: every module benefits from knowing who you met and when. Financial transaction details are typically restricted: only the finance module and the modules you explicitly allow need to see specific amounts. Journal entries containing personal reflections are typically local-only: they live on your device and never leave it.
The tier assignment is a one-time decision per record type, revisited when your comfort level changes. If you start using the system for health tracking and later decide it feels too exposed, you change the health records to local-only without affecting other record types.
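The tier logic can be sketched as a small registry keyed by record type. This is a minimal illustration, not a prescribed implementation; the names (`Tier`, `RECORD_TIERS`, `can_read`) are hypothetical.

```python
from enum import Enum

class Tier(Enum):
    FULL = "full-access"        # any module can read
    RESTRICTED = "restricted"   # only modules listed in allowed_modules
    LOCAL_ONLY = "local-only"   # never leaves the device

# One tier decision per record type, revisited when your comfort level changes.
RECORD_TIERS = {
    "contact":       {"tier": Tier.FULL,       "allowed_modules": None},
    "transaction":   {"tier": Tier.RESTRICTED, "allowed_modules": {"finance"}},
    "journal_entry": {"tier": Tier.LOCAL_ONLY, "allowed_modules": None},
}

def can_read(module: str, record_type: str, on_device: bool) -> bool:
    """Decide whether a module may read records of this type."""
    entry = RECORD_TIERS[record_type]
    if entry["tier"] is Tier.FULL:
        return True
    if entry["tier"] is Tier.RESTRICTED:
        return module in entry["allowed_modules"]
    # Local-only: readable only by code running on the device itself.
    return on_device
```

Changing your mind later is one edit to the registry entry, not a migration: flip the health record type to `Tier.LOCAL_ONLY` and every read path respects it.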
Approval boundaries define the line between helpful and overstepping
Each module already has an approval boundary from its original build. This project reviews and tightens those boundaries across the whole system. For every module, you answer: what can the assistant read, summarize, draft, suggest, schedule, save, or never do?
The boundary answers come in layers. Level one (safe): read data, summarize findings, surface patterns. Level two (draft): draft replies, propose tasks, suggest reconnections. Level three (act): send an email, reschedule a meeting, create a calendar event. Most readers keep the assistant at level two: draft and suggest, never act without explicit approval.
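The three layers can be expressed as a cumulative permission check: a module at level two may do everything at levels one and two, and nothing at level three. A minimal sketch, with hypothetical action names:

```python
# Actions grouped by approval level; higher levels include all lower ones.
APPROVAL_LEVELS = {
    1: {"read", "summarize", "surface_pattern"},      # safe
    2: {"draft_reply", "propose_task", "suggest"},    # draft
    3: {"send_email", "reschedule", "create_event"},  # act
}

def allowed(action: str, module_level: int) -> bool:
    """An action is allowed if it appears at or below the module's level."""
    return any(action in APPROVAL_LEVELS[level]
               for level in range(1, module_level + 1))
```

Keeping most modules at level two means `allowed("send_email", 2)` is false by construction; the assistant drafts, and you press send.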
The boundary exists before the assistant acts, not after. Setting the boundary is a proactive design decision. Discovering the boundary after the assistant did something unexpected is a failure. These projects front-load the decisions so the system operates within limits you defined.
Source confidence marks which facts are verified and which are guesses
The system accumulates facts from many sources. Some are directly observed (Sarah's email said the deadline is Friday). Some are inferred (the assistant guessed David's role from his email signature). Some are uncertain (the assistant thinks this contact is the same person mentioned in a meeting, based on name similarity). Source confidence marks each fact so the modules that use it know how much to trust it.
The confidence audit reviews existing records and flags low-confidence fields. The assistant checks: which fields came from a verified source (email header, calendar event)? Which were inferred (email signature, domain name)? Which are stale (last verified more than 90 days ago)? The audit produces a report you can act on: verify this role, update that date, confirm or delete this guess.
Sensitive data splits separate raw details from safe summaries
Your finance module might process raw transaction data: $847 to CloudHost on May 3, $1,200 from Atlas Corp on May 5. The weekly review needs to know your spending patterns, but does it need every transaction? The sensitive data split stores raw financial data in a restricted record type and produces an aggregated summary for the weekly review: 'Spending this month: $3,200, up 15% from April. Two invoices outstanding.'
The same split applies to health data. Raw health metrics (blood pressure readings, medication schedules) stay in local-only records. The weekly review sees only what you allow: 'Exercise: 4 days this week. Sleep: averaged 7 hours.' The raw data stays protected. The summary provides enough for the review without exposing details.
The split is the design decision that makes sensitive data usable. Without it, you either expose raw details to every module or exclude sensitive categories entirely. The split gives you a middle path: protected storage with safe summaries.
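The split is just an aggregation step between the restricted store and the review: raw records go in, a single safe sentence comes out. A minimal sketch with a hypothetical transaction shape (`direction`, `amount`, `status`):

```python
def spending_summary(transactions: list[dict], prior_month_total: float) -> str:
    """Collapse raw transactions into the aggregate the review is allowed to see."""
    spent = sum(t["amount"] for t in transactions if t["direction"] == "out")
    outstanding = sum(1 for t in transactions if t.get("status") == "outstanding")
    change = (spent - prior_month_total) / prior_month_total * 100
    return (f"Spending this month: ${spent:,.0f}, "
            f"{'up' if change >= 0 else 'down'} {abs(change):.0f}% from last month. "
            f"{outstanding} invoices outstanding.")
```

Only the returned string crosses the privacy boundary; the per-transaction records never leave the restricted record type.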
A do-not-remember protocol excludes entire categories from storage
Some information should never enter the system. You might decide: never store specific medical diagnoses, never record salary negotiations, never capture arguments with family members, never save passwords or financial account numbers. The do-not-remember protocol defines these exclusions before they come up.
When you tell the assistant 'do not remember this,' it should honor the instruction immediately. The protocol goes further: it defines categories of information that the assistant should exclude even if mentioned in passing. If you have a rule that says 'never record health diagnoses,' and a meeting note mentions someone's medical condition, the assistant skips that detail during capture.
The protocol also handles retroactive requests. If you realize the system has stored something you want removed, you can ask the assistant to find and delete all records matching a category or keyword. The deletion is logged (so you know it happened) and the exclusion rule is saved (so it does not happen again).
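The capture-time half of the protocol can be sketched as a filter that drops any sentence matching an exclusion rule and logs which rule fired. The category names and patterns here are illustrative, not a prescribed rule set:

```python
import re

# Hypothetical exclusion rules: a category name plus a keyword pattern.
EXCLUSIONS = {
    "health_diagnosis": re.compile(r"\b(diagnos\w*|medication|prescription)\b", re.I),
    "credentials":      re.compile(r"\b(password|account number)\b", re.I),
}

def filter_capture(text: str) -> tuple[str, list[str]]:
    """Drop sentences matching an exclusion rule; return kept text plus a log."""
    kept, fired = [], []
    for sentence in text.split(". "):
        hits = [name for name, pattern in EXCLUSIONS.items()
                if pattern.search(sentence)]
        if hits:
            fired.extend(hits)   # logged so you know a rule applied
        else:
            kept.append(sentence)
    return ". ".join(kept), fired
```

The same rule table serves the retroactive path: a deletion request for a category reuses its pattern to find stored records, and the rule stays in the table so the category is excluded from then on.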
