Keep the Second Brain From Rotting
A system without maintenance drifts until it stops being useful
Connectors disconnect. Rules stop matching your current priorities. Modules go unused for weeks. Record types accumulate duplicates. Classification rules that were accurate three months ago produce false positives now. The system does not fail suddenly. It decays gradually, and gradual decay is harder to notice than a hard failure.
Maintenance projects prevent the slow rot. They are not glamorous. They do not produce new capabilities. They keep existing capabilities working. A monthly health check catches the module you forgot about. A rule drift audit catches the classification that no longer matches your life. A duplicate cleanup keeps the contact list accurate. A connector failure drill proves the system can survive a source going offline.
The honest version of a second-brain book includes this chapter because the capability is not 'set it and forget it.' The capability is learning how to keep a system useful over months and years.

A monthly health check asks whether each module is still earning its place
Once a month, the assistant reviews every active module and reports: is it running? Is it being used? Is its data current? Are its rules still relevant? Is it worth keeping? The health check catches the module you built in March and forgot about in May.
For each module, the check answers five questions: (1) When did it last produce output? (2) Did you review that output? (3) Are its connected sources still working? (4) Have its rules been corrected in the past 30 days, or have they stagnated? (5) Would you notice if it stopped running?
Pausing a module is a valid outcome. If the finance module has not run in two months and you have not missed it, pause it officially rather than letting it drift. A paused module can be reactivated when you are ready. A drifting module produces stale records that pollute the system.
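The five questions above can be sketched as a small check. Everything here is illustrative: the `Module` fields and the 30-day thresholds are assumptions for this sketch, not part of any real assistant API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Module:
    name: str
    last_output: date           # when it last produced output
    last_reviewed: date         # when you last reviewed that output
    last_rule_correction: date  # most recent rule fix
    sources_ok: bool            # are its connected sources still working?

def health_check(module: Module, today: date) -> list[str]:
    """Return the issues that make a module a candidate for pausing."""
    issues = []
    if (today - module.last_output).days > 30:
        issues.append("no output in 30 days")
    if module.last_reviewed < module.last_output:
        issues.append("latest output never reviewed")
    if not module.sources_ok:
        issues.append("a connected source is failing")
    if (today - module.last_rule_correction).days > 30:
        issues.append("rules stagnant for 30 days")
    return issues

# The module built in March and forgotten by May shows up immediately.
finance = Module("finance", date(2025, 3, 10), date(2025, 3, 1),
                 date(2025, 2, 20), sources_ok=True)
print(health_check(finance, date(2025, 5, 12)))
```

An empty list means the module is earning its place; anything else is a prompt to review or pause it.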
A rule drift audit catches classification rules that no longer match your life
Your email classification rules were accurate when you wrote them. Three months later, you have a new client, a former colleague left their company, and the newsletter you used to read every week is now noise. The rules have drifted because your life changed and the rules did not change with it.
The rule drift audit reviews every active rule in the system: email classification rules, task urgency rules, contact tier assignments, privacy boundaries, and module-specific corrections. For each rule, it checks: when was this rule created? When did it last trigger? Does it still match current conditions?
Rules that have not triggered in 60 days are candidates for review. Rules that reference people, companies, or projects that no longer exist are candidates for removal. The drift audit keeps the system from running on assumptions from a version of your life that no longer exists.
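The audit's two thresholds can be expressed in a few lines. The rule shape (`last_triggered`, `references`) and the set of current entities are hypothetical; the 60-day window comes from the text.

```python
from datetime import date

def audit_rule(rule: dict, today: date, active_entities: set[str]) -> str:
    """Classify a rule as 'keep', 'review', or 'remove' per the drift audit."""
    # Rules referencing people, companies, or projects that no longer exist
    # are candidates for removal.
    if not set(rule["references"]) <= active_entities:
        return "remove"
    # Rules that have not triggered in 60 days are candidates for review.
    if (today - rule["last_triggered"]).days > 60:
        return "review"
    return "keep"

rules = [
    {"name": "flag-acme-emails", "last_triggered": date(2025, 1, 5),
     "references": {"Acme Corp"}},  # former client, no longer active
    {"name": "urgent-from-boss", "last_triggered": date(2025, 5, 10),
     "references": {"Dana"}},
]
current = {"Dana", "NewCo"}
for r in rules:
    print(r["name"], "->", audit_rule(r, date(2025, 5, 12), current))
```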
Duplicate record cleanup prevents two records from splitting one person's history
Over months, different modules create records for the same person, project, or topic using slightly different names. The email module created 'Mike Johnson' from an email thread. The meeting capture module created 'Michael Johnson' from a calendar invite. Their interaction history is now split across two records, and neither one tells the full story.
The duplicate cleanup scans for records with similar names, overlapping contact information, or matching email addresses. It proposes merges for your review: 'Mike Johnson (3 interactions) and Michael Johnson (2 interactions) appear to be the same person. Merge? The merged record will have 5 interactions, both email addresses, and the most recent role.'
Automated merging is risky because name similarity does not guarantee identity. The assistant proposes merges; you confirm them. Every merge is a correction that improves future duplicate detection. After you merge Mike and Michael, the assistant learns that this person uses both names.
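A minimal sketch of the propose-then-confirm pattern, using the standard library's `difflib.SequenceMatcher` as one possible similarity measure. The record shape and the 0.8 threshold are assumptions; the key point is that the function only proposes pairs and leaves the merge decision to you.

```python
from difflib import SequenceMatcher

def propose_merges(records: list[dict]) -> list[tuple[str, str]]:
    """Propose pairs that look like the same person; a human confirms each."""
    proposals = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            same_email = bool(a["email"]) and a["email"] == b["email"]
            similar = SequenceMatcher(None, a["name"].lower(),
                                      b["name"].lower()).ratio() > 0.8
            if same_email or similar:
                proposals.append((a["name"], b["name"]))
    return proposals

records = [
    {"name": "Mike Johnson", "email": "mj@example.com", "interactions": 3},
    {"name": "Michael Johnson", "email": "mj@example.com", "interactions": 2},
    {"name": "Dana Lee", "email": "dana@example.com", "interactions": 7},
]
print(propose_merges(records))  # [('Mike Johnson', 'Michael Johnson')]
```

Note the design choice: the function never mutates `records`. The merge itself happens only after your confirmation, which is also the correction that improves future detection.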
A connector failure drill proves the system works when a source goes offline
What happens when Gmail is down? When your calendar API stops responding? When the task manager you use changes its export format? A connector failure drill tests these scenarios before they happen in real life.
The drill disconnects one source and runs the affected modules. The morning brief should note that email is unavailable and produce a brief from calendar and tasks alone. The email module should say 'source disconnected' rather than failing silently. The monthly health check should flag the gap.
The goal is graceful degradation, not perfection. A system that produces a partial brief with a clear note about missing data is better than a system that crashes or, worse, produces a brief that looks complete while missing an entire source.
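Graceful degradation reduces to a simple invariant: build the brief from whatever sources are present, and name what is missing rather than omitting it silently. The source names and dict shape below are invented for illustration.

```python
def morning_brief(sources: dict) -> str:
    """Build a brief from the available sources, naming any gaps."""
    lines, missing = [], []
    for name in ("email", "calendar", "tasks"):
        data = sources.get(name)
        if data is None:
            missing.append(name)  # source disconnected: say so, don't hide it
        else:
            lines.append(f"{name}: {data}")
    if missing:
        lines.append("unavailable: " + ", ".join(missing))
    return "\n".join(lines)

# Drill: disconnect email and confirm the brief degrades gracefully.
print(morning_brief({"email": None, "calendar": "2 meetings", "tasks": "5 due"}))
```

The drill passes if the output still covers calendar and tasks and ends with an explicit `unavailable: email` line, not a brief that looks complete.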
An export and rebuild plan guarantees the system can survive a tool change
Tools change. You might switch from Claude to ChatGPT, from ChatGPT to a coding agent, from one task manager to another. The export plan ensures that your second brain is portable: records, prompts, rules, schedules, and briefs can all be extracted and rebuilt on a different platform.
The export includes five components: (1) all records in a structured format, (2) all briefs (the specifications that define each module), (3) all reusable rules and corrections, (4) all schedules and automation configurations, and (5) all privacy and boundary settings.
The rebuild test is the proof. Export everything. Set up a fresh assistant session with no history. Import the records, load the briefs, and run one morning brief. If the output is comparable to your current system, the export plan works. If critical information is missing, you know what needs to be added to the export.
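The five-component export and the rebuild check can be sketched together. JSON is one reasonable structured format; the component keys and the sample `system` dict are assumptions for this sketch.

```python
import json

COMPONENTS = ("records", "briefs", "rules", "schedules", "privacy")

def export_bundle(system: dict) -> str:
    """Serialize the five portable components to a structured format."""
    return json.dumps({k: system[k] for k in COMPONENTS}, indent=2)

def rebuild_check(exported: str) -> list[str]:
    """The rebuild test: report any component missing from the export."""
    bundle = json.loads(exported)
    return [k for k in COMPONENTS if not bundle.get(k)]

system = {
    "records": [{"name": "Mike Johnson"}],
    "briefs": ["morning brief"],
    "rules": ["flag urgent client email"],
    "schedules": ["daily 7am"],
    "privacy": ["no health data in briefs"],
}
exported = export_bundle(system)
print(rebuild_check(exported))  # [] means every component survived the export
```

An empty result is not the whole test — you still run one morning brief on the fresh platform — but it catches a component that never made it into the export at all.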
