
Learning System

Last updated: 3/31/2026

A system for turning articles, books, and content into durable knowledge that changes how I think and act. Built on [[spaced-repetition-science]], [[deep-reading-and-encoding-science]], [[knowledge-transfer-and-application-science]], and [[ai-assisted-learning-automation-research]]. Integrates with the knowledge-management-system.

Core Principles

  1. Retrieval beats re-reading. Testing yourself (g = 0.50-0.61) is 2-3x more effective than highlighting or re-reading. Every stage of this system forces active retrieval, not passive review.
  2. Generation beats consumption. Writing answers in your own words (d = 0.40) produces stronger memory than receiving pre-made answers. AI automates the boring parts (extraction, formatting, scheduling), never the thinking parts (connecting, explaining, applying).
  3. Effort is signal, not noise. If it feels easy, you're not learning. Spacing, interleaving, and retrieval practice all feel harder than massed re-reading but produce dramatically better retention. Trust the science over the feeling.
  4. Framework before atoms. Never create flashcards for material you don't understand. Build the conceptual framework first, then encode the atoms. Orphan cards without conceptual scaffolding produce brittle memorization.
  5. Design for bursts, not streaks. Irregular review patterns are normal, not failure. The system degrades gracefully during busy weeks and recovers without punishment.
  6. Less is more. A small deck of high-quality, actively-relevant cards reviewed consistently beats a huge deck of everything-I've-ever-read. Ruthlessly triage what deserves a card.

The Pipeline

CAPTURE → TRIAGE → ENRICH → LEARN → REVIEW → TRANSFER

Stage 1: Capture (KMS App)

Use the knowledge-management-system app to capture content. Tag with #learn anything worth retaining long-term. The KMS handles all modalities: text, clipboard, screenshots, audio, URLs.

When a URL is captured, the enricher auto-fetches content (article text, YouTube transcript) and generates pre-reading questions (pretesting effect, d = 0.35).
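
A minimal sketch of what that enrichment step could look like, assuming a local Ollama instance on its default port; the model name, prompt wording, and function name are illustrative, not the actual content_enricher.py implementation:

```python
import requests

def pretest_questions(article_text: str, model: str = "llama3.1") -> str:
    """Draft 3 pre-reading questions from fetched article text (pretesting effect)."""
    prompt = (
        "Write 3 short questions that probe this article's central claims, "
        "so the reader can attempt answers from prior knowledge before reading. "
        "Questions only, no answers.\n\n" + article_text[:4000]
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",          # Ollama generate endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```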

Stage 2: Triage (Automatic)

Not everything enters the learning pipeline. Content passes through a triage gate:

  • Explicit opt-in: tagged #learn or #remember — always processed
  • AI triage: scored on novelty, relevance to active projects, and content quality — processed if score >= 0.7
  • Question detection: captures containing questions auto-route to the Q&A pipeline
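
A minimal sketch of that gate, assuming captures arrive as dicts with tags and text; score_with_ai stands in for whatever local-model scoring call the pipeline actually makes:

```python
LEARN_THRESHOLD = 0.7

def triage(capture: dict, score_with_ai) -> str | None:
    """Return the pipeline a capture should enter, or None to skip learning."""
    tags = set(capture.get("tags", []))
    if tags & {"#learn", "#remember"}:
        return "learn"                      # explicit opt-in is always processed
    if "?" in capture.get("text", ""):
        return "qa"                         # questions route to the Q&A pipeline
    score = score_with_ai(capture)          # novelty + relevance + quality, 0..1
    return "learn" if score >= LEARN_THRESHOLD else None
```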

Stage 3: Active Reading Protocol

When engaging with content worth learning:

Pre-read (2 min):

  • The enricher generates pre-reading questions automatically for URLs
  • Survey headings, abstract, conclusion
  • Try to answer the pre-reading questions from existing knowledge (pretesting effect, d = 0.35)

Active read:

  • Capture passages that are Inspiring, Useful, Personal, or Surprising (the knowledge-management-system IUPS filter)
  • After each section, pause and self-explain the key point in your head (self-explanation effect, g = 0.55)
  • Ask "why is this true?" for each key claim (elaborative interrogation, d = 0.56)

Post-read (3 min):

  • Close the article. Write 3-5 bullet points from memory of what mattered most (retrieval practice)
  • Compare against your highlights. Note what you missed.

Triage at Highlight Time (What Deserves a Card?)

Not everything you read deserves a flashcard. At highlight time, decide:

Card-worthy (tag highlight as #card):

  • Facts I'll need in working memory, not just reference
  • Concepts that build on each other (forgetting one breaks a chain)
  • The "why" behind conclusions, not just the conclusions themselves
  • Anything I've had to look up more than twice
  • Knowledge I'll still need in 6+ months

Note-worthy only (no card tag):

  • Information I'm uncertain about
  • API syntax, commands learned by doing
  • Short-term project-specific details
  • Material I don't yet understand well enough to test

Decision filters (from Gwern and Nielsen):

  • 5-minute rule: If not knowing this will cost me 5+ minutes over my lifetime, card it
  • Minimum 5 cards per article: If I can't extract 5 card-worthy items, the article wasn't worth deep-processing
  • The thesis test: Can I state the article's core argument in one sentence? If yes, that's always a card

Stage 4: Encode (Card Generation)

AI generates draft cards. I provide the thinking.

For every article processed, generate exactly one thesis card: "What is the core argument of [article]?" — this preserves the argument structure that atomization destroys.

Tier 1 — Factual recall (AI-generated, lightly reviewed):

  • One atomic fact per card, following the [[spaced-repetition-science|minimum information principle]]
  • Cloze deletions for definitions, Q&A for relationships
  • AI generates, I approve/reject. Quick pass.

Tier 2 — Conceptual "why/how" (AI-drafted question, I write the answer):

  • "Why does X cause Y?" / "How does X relate to Z?"
  • The AI shows me only the question. I write my answer first (generation effect). Then I see the AI's answer and refine mine.
  • This is where real learning happens. Don't skip the writing.
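
A sketch of how the answer-first ordering can be enforced in a review script; the function and field names are illustrative, not part of the existing tooling:

```python
def review_tier2(question: str, ai_answer: str) -> dict:
    """Show the question, capture my answer, only then reveal the AI draft."""
    print(question)
    my_answer = input("Your answer (write it before peeking): ")
    print("\nAI draft answer:\n" + ai_answer)
    final = input("Refined answer to keep (Enter keeps yours): ") or my_answer
    return {"front": question, "back": final, "tags": ["tier-2"]}
```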

Tier 3 — Application/transfer (human-generated, AI-assisted):

  • "How would I apply X to [current project]?" / "Where else does this principle apply?"
  • Only for knowledge connected to an active project
  • Must be time-bound and specific, not generic
  • 30-day "prove it" rule: if I can't cite a real-world application within 30 days, demote or delete

Card quality rules (from Wozniak's 20 rules):

  • One fact per card. Split complex items.
  • Avoid sets and enumerations (use overlapping cloze instead)
  • Use imagery where possible (picture superiority effect)
  • Add source and date to every card
  • Combat interference: add distinguishing context for similar items
  • Encode from multiple angles: forward + reverse, definition + example
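
For reference, a sketch of what one finished card can look like when pushed through AnkiConnect's addNote action (roughly the job of push-cards-to-anki.py); the deck name, note model, and example content are placeholders:

```python
import requests

note = {
    "deckName": "Learning",
    "modelName": "Basic",
    "fields": {
        "Front": "Why does testing yourself work better than re-reading?",  # curiosity-framed
        "Back": ("Retrieval forces reconstruction of the memory trace; re-reading "
                 "only builds familiarity. (Source: spaced-repetition-science, 2026-03-31)"),
    },
    "tags": ["learn", "tier-1", "active-project"],
}

resp = requests.post("http://localhost:8765", json={
    "action": "addNote", "version": 6, "params": {"note": note},
})
resp.raise_for_status()
print(resp.json())   # {"result": <new note id>, "error": None} on success
```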

Stage 5: Review (Spaced Retrieval)

Anki with FSRS algorithm.

Settings:

  • FSRS enabled, desired retention: 0.90
  • Learning steps: 10m 1d
  • Relearning steps: 10m
  • Optimize FSRS parameters monthly
  • Interleave all decks (default Anki behavior)

Daily review target: 10-15 minutes. This is the ceiling, not the floor.

New cards: 3-5 per day. This produces ~30-50 review cards/day at steady state — well within the time budget.
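
A rough back-of-envelope check, assuming the common rule of thumb that each new card generates on the order of ten reviews over its lifetime at ~90% desired retention (a heuristic, not an FSRS-derived number):

  reviews/day ≈ new cards/day × lifetime reviews per card ≈ (3 to 5) × ~10 ≈ 30 to 50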

Rating discipline:

  • Press Again when I forget (not Hard)
  • Hard = hesitant but correct recall
  • Never press Hard as a shortcut for Again
  • Rate based on recall quality, ignore displayed intervals

Lapse protocol (pre-defined, not invented under duress):

  • If queue > 50 overdue: suspend all but starred cards, review only those, then gradually unsuspend
  • If queue > 100 overdue: reset the deck. Mark all as new. Restart. Data has no value if the habit is dead.
  • Missing a week is normal. The protocol exists so returning is easy, not punishing.
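
A minimal sketch of the ">50 overdue" step using AnkiConnect's findCards and suspend actions; "starred" is assumed here to mean cards whose note carries Anki's marked tag, so adjust the query to however you actually star cards:

```python
import requests

def anki(action: str, **params):
    """Tiny AnkiConnect client (default port 8765)."""
    r = requests.post("http://localhost:8765",
                      json={"action": action, "version": 6, "params": params})
    r.raise_for_status()
    out = r.json()
    if out.get("error"):
        raise RuntimeError(out["error"])
    return out["result"]

overdue = anki("findCards", query="is:due")
if len(overdue) > 50:
    not_starred = anki("findCards", query="is:due -tag:marked")
    anki("suspend", cards=not_starred)   # keep only the starred backlog in rotation
```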

Leech management:

  • Leech threshold: 5-8 lapses
  • When a card leeches: fix it (split, reframe, add context), or delete it
  • Never unsuspend a leech unchanged
  • Quarterly audit: review all suspended leeches

Deck health metrics (checked weekly):

  • Cards added this week
  • Review completion rate
  • Days since last article processed
  • Total active cards (target: under 1000)
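
A sketch of how the countable metrics could be pulled weekly over AnkiConnect (same tiny helper as in the lapse-protocol sketch above); the rated:7 query is only a proxy for completion rate, and days-since-last-article would come from the generation log rather than Anki:

```python
import requests

def anki(action: str, **params):
    r = requests.post("http://localhost:8765",
                      json={"action": action, "version": 6, "params": params})
    r.raise_for_status()
    out = r.json()
    if out.get("error"):
        raise RuntimeError(out["error"])
    return out["result"]

added_this_week = len(anki("findNotes", query="added:7"))        # cards added this week
reviewed_this_week = len(anki("findCards", query="rated:7"))     # proxy for review activity
active_cards = len(anki("findCards", query="-is:suspended"))     # target: under 1000
print(f"added: {added_this_week}, reviewed: {reviewed_this_week}, active: {active_cards}")
```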

Stage 6: Transfer (Making Knowledge Usable)

Retrieval without application produces trivia knowledge, not thinking tools.

In-card transfer (automated):

  • Tier 3 cards already test application
  • Every concept gets at least one "where else does this apply?" card

Active project tagging:

  • Tag every card: active-project, reference, or general
  • Daily review prioritizes active-project cards first
  • When a project closes, archive its cards (suspend, don't delete)

Structured reflection (integrated into existing weekly review, not a separate session):

  • "Which 3 things I reviewed this week connect to what I'm working on?"
  • "What did I learn this week that changed how I think about something?"
  • "What's one thing I can apply this week?"

Teaching as transfer:

  • Preparing to teach: g = 0.35. Actually teaching: g = 0.56
  • Share one "thing I learned from Anki this week" in conversation, writing, or social media
  • This serves dual purpose: transfer test + social accountability

Integration with the Knowledge Management System

This system extends the knowledge-management-system CODE phases:

| KMS Phase | Learning System Addition |
| --- | --- |
| Capture | KMS app captures with #learn tag for opt-in. Enricher auto-fetches URLs, generates pre-reading questions. |
| Organize | Automation pipeline routes #learn captures to the learn consumer. Cards generated in resources/flashcards/review/. |
| Distill | AI generates multi-tier learning materials: factual cards, conceptual questions, synthesis challenges. Human reviews and writes own answers for Tier 2. |
| Express | Synthesis challenges (Feynman, devil's advocate, connections) + teaching exercises |
| Active Learning | Anki FSRS for retrieval cards. Synthesis challenges surfaced via ntfy. |

The learning system and KMS share the same capture layer and automation pipeline. The #learn tag is the bridge — it tells the pipeline to route content through the learn consumer in addition to normal PARA organization.
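
A sketch of that routing decision, assuming the organize pipeline dispatches each capture to a list of consumers; the consumer names follow the files listed under Tools, the dispatch logic itself is illustrative:

```python
def consumers_for(capture: dict) -> list[str]:
    """Decide which consumers a capture is routed to after organization."""
    routed = ["para"]                        # normal PARA filing always happens
    tags = set(capture.get("tags", []))
    if tags & {"#learn", "#remember"}:
        routed.append("learn")               # organize/consumers/learn.py
    if capture.get("is_question"):
        routed.append("question_answer")     # organize/consumers/question_answer.py
    return routed
```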

Learning Enhancements (from [[learning-techniques-master-taxonomy]])

Curiosity Framing

All card fronts are framed as mysteries, paradoxes, or information gaps instead of dry definitions. This increases encoding by >50% (Gruber 2014). The AI generation prompts enforce this automatically.
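
As an illustration only (the real prompt lives in the card-generation guidelines), a generation prompt enforcing this might look like:

```python
CURIOSITY_PROMPT = """Rewrite the fact below as a flashcard front that opens an
information gap rather than asking for a definition.

Fact: {fact}

Bad front:  "What is the spacing effect?"
Good front: "Why does studying something LESS often make you remember it longer?"

Return only the reframed question."""
```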

Synthesis Challenges

Beyond retrieval cards, the system generates three types of synthesis challenges:

  • Connection prompts (~15%): "How does [concept] connect to [other domain]?" — forces cross-domain thinking
  • Feynman exercises (~10%): "Explain [concept] as if to a 12-year-old" — tests true understanding
  • Devil's advocate (~10%): "What is the strongest argument against [claim]?" — prevents blind acceptance

These are surfaced daily via ntfy (scripts/synthesis-challenge-surfacer.py) and weekly as a digest.
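
A sketch of the daily push, assuming the standard ntfy HTTP API and a placeholder topic name; the challenge mix mirrors the percentages above:

```python
import random
import requests

CHALLENGES = [
    ("connection",      "How does {concept} connect to {other}?"),
    ("feynman",         "Explain {concept} as if to a 12-year-old."),
    ("devils-advocate", "What is the strongest argument against {concept}?"),
]
WEIGHTS = [15, 10, 10]   # matches the ~15% / ~10% / ~10% generation mix

def surface(concept: str, other: str, topic: str = "learning-challenges"):
    kind, template = random.choices(CHALLENGES, weights=WEIGHTS, k=1)[0]
    message = template.format(concept=concept, other=other)
    requests.post(f"https://ntfy.sh/{topic}",
                  data=message.encode("utf-8"),
                  headers={"Title": f"Synthesis challenge ({kind})"})
```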

Post-Session Reflection

After each learning session, the system can generate structured reflection prompts (g = 0.79):

  • "What was the most important thing you learned?"
  • "What confused you?"
  • "What connects to something you already know?"

Self-Improvement Automation

The weekly improvement cycle (scripts/weekly-learning-improvement.py) measures all 6 optimization variables, identifies the weakest, and suggests specific interventions. It appends findings to the generation log and alerts via ntfy if high-severity issues are detected.

The Automation Layer

What AI does (the boring parts):

  • Extract card candidates with curiosity framing
  • Generate synthesis challenges (connection, Feynman, devil's advocate)
  • Format cards for Anki import
  • Schedule reviews via FSRS algorithm
  • Generate elaborative interrogation questions ("why?" and "how?")
  • Track system health metrics across 6 variables
  • Surface daily synthesis challenges via ntfy
  • Run weekly self-improvement cycle

What I do (the thinking parts):

  • Decide what's card-worthy at highlight time
  • Write Tier 2 and synthesis answers before seeing AI version
  • Create application cards connected to active projects
  • Review and edit AI-generated cards (not just approve)
  • Make connections to existing knowledge
  • Apply knowledge in real-world contexts

Tools:

  • KMS app (~/Projects/KnowledgeManagementSystem): capture UI with AI tagging, multi-modal support
  • server/content_enricher.py: auto-fetches URLs, YouTube transcripts, generates pre-reading questions at capture time
  • organize/consumers/learn.py: automation consumer that generates learning materials with synthesis challenges from organized captures
  • organize/consumers/question_answer.py: auto-answers captured questions
  • scripts/article-to-anki.py: launches Claude Code process-articles agent for vault-aware card generation
  • scripts/push-cards-to-anki.py: pushes reviewed cards to Anki via AnkiConnect
  • scripts/anki-feedback-loop.py: queries Anki performance, feeds back into generation guidelines
  • scripts/synthesis-challenge-surfacer.py: surfaces daily/weekly synthesis challenges via ntfy
  • scripts/weekly-learning-improvement.py: weekly self-improvement cycle across 6 optimization variables
  • scripts/learning-system-smoke-test.py: validates entire pipeline (17 dependency checks + integration test)
  • AnkiConnect API: programmatic card management
  • FSRS: scheduling algorithm
  • Ollama on server: local AI for tagging, card generation, question answering

The Self-Improvement Loop

The system gets better over time through a closed feedback loop:

Generate cards (reading [[card-generation-guidelines]])
  → Human reviews (keeps/edits/deletes cards)
  → Log quality metrics in [[generation-log]]
  → Analyze patterns: what fails? what works?
  → Update [[card-generation-guidelines]] with learned rules
  → Better cards next time

And periodically (monthly or when review feels off):

Run anki-feedback-loop.py
  → Query Anki for leeches, retention by tag/tier
  → Identify: what do bad cards have in common?
  → Update generation guidelines with anti-patterns
  → Regenerate or delete problem cards

Key files in this loop:

  • resources/flashcards/card-generation-guidelines.md — the learned ruleset, read before every generation batch. Starts with research-based rules, accumulates feedback-driven rules over time.
  • resources/flashcards/generation-log.md — tracks every batch: cards generated, kept, edited, deleted, with reasons. Also tracks Anki performance metrics.
  • scripts/anki-feedback-loop.py — queries AnkiConnect for leeches, retention rates by tag/tier/topic, recently failed cards. Identifies patterns and suggests guideline updates.
  • ~/.claude/agents/process-articles.md — the Claude Code agent that processes articles. Reads guidelines before every batch, so improvements are immediately applied.
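
A minimal sketch of the kind of leech-pattern query this loop relies on, assuming AnkiConnect's findNotes and notesInfo actions (same tiny helper as earlier); the grouping by tag is illustrative, not the actual anki-feedback-loop.py:

```python
from collections import Counter
import requests

def anki(action: str, **params):
    r = requests.post("http://localhost:8765",
                      json={"action": action, "version": 6, "params": params})
    r.raise_for_status()
    out = r.json()
    if out.get("error"):
        raise RuntimeError(out["error"])
    return out["result"]

leech_notes = anki("findNotes", query="tag:leech")
tag_counts = Counter(
    tag
    for info in anki("notesInfo", notes=leech_notes)
    for tag in info["tags"]
    if tag != "leech"
)
print(tag_counts.most_common(5))   # tags/tiers that keep producing leeches
```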

Self-improvement happens at three timescales:

  1. Per-review: each card deletion/edit teaches the system what to avoid
  2. Per-month: anki-feedback-loop identifies retention patterns across hundreds of cards
  3. Per-quarter: full audit of guidelines effectiveness, prompt evolution, system health

What-to-Ankify Quick Reference

| Ankify | Don't Ankify |
| --- | --- |
| Core concepts and definitions | Trivia and decoration |
| The "why" behind conclusions | Conclusions without reasoning |
| Facts that chain together | Orphan facts |
| Things I've looked up 2+ times | Things I can always look up fast |
| Knowledge for 6+ month horizon | Short-term project details |
| Principles that transfer across domains | Domain-specific syntax/APIs |

Failure Modes to Watch For

  1. Queue death: Missing a week creates an aversive backlog. Use the lapse protocol. Missing is normal.
  2. Card inflation: The deck grows faster than you review. Cap at 1000 active cards. Annual audit.
  3. Illusion of learning: Approving AI cards without engaging. For Tier 2+, write your answer first.
  4. Orphan cards: Cards without conceptual framework. Framework first, atoms second.
  5. Hoarder's trap in SRS form: Carding everything you read. Triage ruthlessly. Most articles don't warrant cards.
  6. Processing backlog: Highlights pile up unprocessed. This is fine. Process in bursts, not daily.
  7. Tool-tweaking: Optimizing FSRS parameters instead of reviewing cards. Use defaults until the review habit is established.

Evidence Base

| Technique | Effect Size | Source |
| --- | --- | --- |
| Retrieval practice | g = 0.50-0.61 | Rowland 2014, Adesope 2017 |
| Spacing | d = 0.60 | Cepeda 2006 |
| Self-explanation | g = 0.55 | Bisra 2018 |
| Elaborative interrogation | d = 0.56 | Dunlosky 2013 |
| Generation effect | d = 0.40 | Slamecka & Graf 1978 |
| Concept map construction | g = 0.72 | Schroeder 2017 |
| Interleaving | g = 0.42 | Brunmair & Richter 2019 |
| Teaching effect | g = 0.56 | Kobayashi 2019 |
| Metacognitive strategies | d = 0.69 | Hattie 2009 |
| FSRS vs SM-2 | 99.6% superiority | open-spaced-repetition benchmark |

Full research: [[spaced-repetition-science]], [[deep-reading-and-encoding-science]], [[knowledge-transfer-and-application-science]], [[ai-assisted-learning-automation-research]], [[learning-system-operationalization-research]]

Related Systems

  • knowledge-management-system — the broader PKM system this integrates with
  • [[capture-system]] — how content enters the vault
  • [[reflection-system]] — weekly review that includes learning system health check
  • learning-system-project — project documentation with design decisions, architecture, open questions