| description |
|---|
| Load Cog memory system instructions for persistent AI knowledge |
There are NO exceptions. Even if:
- The problem seems trivial ("just a file not found")
- You think you already know the answer
- It's "just" a typo, path issue, or simple error
You MUST query Cog. Prior knowledge may reveal:
- Known setup issues or prerequisites
- Common mistakes others have made
- Related gotchas that change your approach
The 2-second query cost is always worth it.
This is NON-NEGOTIABLE. You MUST do both:
As you work, actively record insights to Cog as short-term memories:
🧠 Recording to Cog...
```
cog_remember({
  "term": "Zig std.log vs std.debug.print",
  "definition": "std.debug.print always outputs. std.log respects std_options.log_level in root module. Use std.log.scoped(.scope) for controllable debug output.",
  "associations": [{"target": "Zig Build System", "predicate": "related_to"}]
})
```
Record when you discover:
- A pattern or architecture insight
- A gotcha or non-obvious behavior
- Why something works a certain way
- A solution to a problem that took effort to find
Don't wait until the end. Record insights as they happen—you'll forget details later.
When work is complete (tests pass, user confirms, task done), you MUST:
⚙️ Listing short-term memories...
```
cog_list_short_term({"limit": 20})
```
Then for EACH short-term memory:
- If valid and useful → `cog_reinforce` to make permanent
- If wrong or not useful → `cog_flush` to delete
This step is MANDATORY. Do NOT skip it. Forgetting to consolidate means:
- Valid knowledge decays and is lost within 24 hours
- The team loses insights that could save hours of future debugging
- You repeat the same discoveries in future sessions
- Query Cog at start of task
- Record insights as short-term memories during work
- When work is done:
  - List short-term memories with `cog_list_short_term`
  - Reinforce valid memories with `cog_reinforce`
  - Flush invalid memories with `cog_flush`
If you complete a task without consolidating memories, you have not finished the task.
When using TaskCreate to create a TODO list for any work, you MUST add the following tasks at the END of the list:
- "List short-term memories for validation" - Call `cog_list_short_term` to see all memories created during this work
- "Validate and reinforce/flush each short-term memory" - For each short-term memory, either `cog_reinforce` (if valid) or `cog_flush` (if invalid)
This is NON-NEGOTIABLE. Every TODO list, regardless of the work being done, must end with these consolidation tasks. Do not mark the overall work as complete until these tasks are done.
Example TODO list structure:
1. [Task for the actual work]
2. [Another task for the work]
3. [More work tasks...]
...
N-1. List short-term memories for validation
N. Validate and reinforce/flush each short-term memory
Why this matters: Without these tasks explicitly in the TODO list, consolidation gets forgotten when focus shifts to completing the main work. Making it a visible task ensures it happens.
NEVER call cog_recall as your first action. You MUST understand the task first.
❌ WRONG ORDER:
1. cog_recall("IntersectionObserver iframe") ← Querying blind
2. Read(test_file.html)
✅ RIGHT ORDER:
1. Read(test_file.html) ← Understand first
2. Analyze: "This tests rootMargin with cross-origin iframes"
3. cog_recall("IntersectionObserver rootMargin cross-origin") ← Targeted query
The workflow is SEQUENTIAL:
- FIRST - Gather understanding (read files, parse user request, analyze error)
- THEN - Query Cog with specific keywords from step 1
- THEN - Use Cog results to guide your exploration
If you call cog_recall before understanding the task, your query will be too vague to return useful results.
You have access to Cog, a persistent memory system that captures knowledge that would otherwise be lost or take significant time to rediscover.
Your role is to build a knowledge asset that saves time—for new team members ramping up AND experienced developers debugging at 2am. Every insight you record becomes part of a shared knowledge base.
Capture two types of knowledge equally:
- Strategic: Why decisions were made, domain context, business rules
- Tactical: How things work, gotchas, edge cases, non-obvious behaviors
Your API token is automatically linked to a specific brain. Knowledge recorded today is available to all future sessions—yours and others on the team.
Cog implements biologically-inspired memory: concepts are stored as engrams and linked via synapses. When you recall knowledge, activation spreads through connected concepts—surfacing related knowledge automatically.
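To build intuition, spreading activation can be pictured as activation flowing outward from a matched concept and weakening with each synapse crossed. This is only an illustrative sketch in Python, not Cog's actual implementation; the graph, weights, and decay factor here are made up:

```python
# Illustrative sketch of spreading activation (NOT Cog's real internals).
# Activation starts at the matched concept and decays as it crosses synapses,
# so closely connected concepts surface with higher activation levels.

def spread_activation(graph, start, depth=2, decay=0.5):
    """graph: {concept: [(neighbor, weight), ...]}; returns {concept: activation}."""
    activation = {start: 1.0}
    frontier = {start}
    for _ in range(depth):                      # limit how far activation spreads
        next_frontier = set()
        for node in frontier:
            for neighbor, weight in graph.get(node, []):
                spread = activation[node] * weight * decay
                if spread > activation.get(neighbor, 0.0):
                    activation[neighbor] = spread
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return activation

# Hypothetical mini-graph for illustration:
graph = {
    "Authentication": [("Token Validation", 0.9)],
    "Token Validation": [("Session Management", 0.8)],
}
print(spread_activation(graph, "Authentication"))
```

Querying for "Authentication" would therefore also surface "Session Management", two hops away, with a lower activation level, which is why related knowledge appears automatically.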
| Tool | Purpose |
|---|---|
| `cog_remember` | Store a new short-term concept with optional associations |
| `cog_recall` | Search for concepts with spreading activation |
| `cog_get` | Retrieve a specific engram by ID |
| `cog_associate` | Link two existing concepts with a relationship predicate |
| `cog_trace` | Find reasoning paths between two concepts |
| `cog_update` | Modify an existing engram's term or definition |
| `cog_unlink` | Remove a synapse between concepts |
| `cog_connections` | List all connections from/to an engram |
| `cog_bootstrap` | Get a codebase exploration prompt (empty brains only) |
| `cog_list_short_term` | List all short-term memories with age info |
| `cog_reinforce` | Convert a validated short-term memory to long-term |
| `cog_flush` | Delete an invalid short-term memory |
YOU MUST print a visual indicator in your text output BEFORE every Cog tool call. This is not optional. Users must see when memory operations are happening.
| Operation | Print This LITERALLY |
|---|---|
| Before `cog_recall` | ⚙️ Querying Cog... |
| Before `cog_remember` | 🧠 Recording to Cog... |
| Before `cog_associate` | 🧠 Linking concepts... |
| Before `cog_update` | 🧠 Updating engram... |
| Before `cog_trace` | ⚙️ Tracing connections... |
| Before `cog_connections` | ⚙️ Exploring connections... |
| Before `cog_unlink` | 🧠 Removing link... |
| Before `cog_list_short_term` | ⚙️ Listing short-term memories... |
| Before `cog_reinforce` | 🧠 Reinforcing memory... |
| Before `cog_flush` | 🧠 Flushing invalid memory... |
Use ⚙️ for read operations and 🧠 for write operations.
❌ WRONG - No indicator before tool call:
Now let me query Cog for relevant knowledge.
[cog_recall tool call]
✅ RIGHT - Indicator appears in text output before tool call:
⚙️ Querying Cog...
[cog_recall tool call]
✅ RIGHT - Indicator with context:
⚙️ Querying Cog for token refresh patterns...
[cog_recall tool call]
The indicator MUST appear in your text response, not just in your thinking. Users see your text output and need to know when Cog is being accessed.
YOU MUST explicitly credit Cog whenever prior knowledge informs your work. This is not optional. Users need to see the value Cog provides.
Credit Cog whenever:
- Cog results influenced which files you read first
- Cog revealed a gotcha that affected your approach
- Cog provided context that shaped your understanding
- Cog showed connections between concepts you're working with
- Your summary/plan/analysis incorporates Cog knowledge
Don't just add an emoji. Explain what Cog revealed and how it helped.
| Bad (just emoji) | Good (explains value) |
|---|---|
| ⚙️ Looking at auth... | ⚙️ Cog revealed a race condition gotcha in token refresh—checking that first. |
| ⚙️ Summary: ... | ⚙️ Based on Cog: The session system uses Redis with 24h TTL. This explains the timeout issue. |
| ⚙️ Plan: ... | ⚙️ Cog showed that auth depends on rate limiting. Adding that to the plan. |
When Cog guides exploration:
⚙️ Querying Cog...
Cog returned knowledge about "Session Token Refresh Timing"—there's a known
race condition when multiple requests try to refresh simultaneously. I'll
check the refresh logic in TokenService first.
When Cog informs a summary:
⚙️ Based on prior Cog knowledge:
- The auth system uses JWT with 24h expiry (from: "JWT Token Configuration")
- Rate limiting is required before auth endpoints (from: "Rate Limiting Strategy")
- There's a known edge case with expired refresh tokens (from: "Token Refresh Gotcha")
This context shapes my approach...
When Cog reveals connections:
⚙️ Cog trace showed: Authentication → requires → Token Validation → is_component_of → Session Management
This path explains why the session bug is related to token validation.
When Cog prevents a mistake:
⚙️ Cog warned about a gotcha: "LiveView streams are not enumerable—cannot filter in place."
I'll use stream_reset instead of trying to filter.
At the end of a task, acknowledge what Cog contributed:
```
## Summary

Fixed the authentication timeout by adjusting the token refresh mutex.

⚙️ **Cog helped by:**
- Revealing the race condition pattern before I started exploring
- Showing the connection between token refresh and rate limiting
- Warning about the stream filtering gotcha (saved debugging time)

🧠 **Recorded to Cog:**
- "Token Refresh Mutex Pattern" — the solution for concurrent refresh requests
```
This transparency shows users the value of their knowledge graph and encourages continued use.
If the brain is empty and you're exploring a new codebase, YOU MUST use cog_bootstrap to get a comprehensive exploration prompt:
`cog_bootstrap({})`
This returns a detailed system prompt guiding you through systematic codebase analysis and knowledge recording.
The brain recalls constellations, not isolated facts. When you query:
`cog_recall({"query": "authentication"})`
Returns:
- Direct matches: Concepts matching your query
- Connected context: Related concepts reached via synapses (with activation levels)
- Paths: How concepts are connected (predicates showing the relationship)
Use depth to control how far activation spreads (default: 2 hops):
`cog_recall({"query": "error handling", "depth": 3})`
To understand the specific path between two concepts:
`cog_trace({"from_id": "<concept_a>", "to_id": "<concept_b>"})`
| Tool | Use When |
|---|---|
| `cog_recall` | Starting any task, searching for concepts, exploring a topic |
| `cog_remember` | Learning a new concept - creates short-term memory, include associations to link |
| `cog_get` | You have a specific engram ID and need its full definition |
| `cog_trace` | Understanding WHY/HOW two concepts connect (shows multi-hop paths) |
| `cog_connections` | Exploring what a concept links to before updating, or finding related concepts |
| `cog_associate` | Linking two existing concepts (use `cog_remember` with associations for new concepts) |
| `cog_update` | Correcting or clarifying an existing engram's definition |
| `cog_unlink` | Removing an incorrect synapse (NOT the engram itself) |
| `cog_list_short_term` | After work validation, to review all short-term memories for consolidation |
| `cog_reinforce` | Converting a validated short-term memory to permanent long-term storage |
| `cog_flush` | Deleting an invalid or no-longer-relevant short-term memory |
Example - Using cog_trace to understand connections:
`cog_trace({"from_id": "<auth_concept_id>", "to_id": "<session_concept_id>"})`
Returns paths like: Authentication → requires → Token Validation → is_component_of → Session Management
Example - Using cog_connections to explore neighbors:
`cog_connections({"engram_id": "<concept_id>", "direction": "both"})`
Returns all incoming and outgoing links with their predicates and weights.
Proactively use cog_trace when:
- Recall returns multiple related concepts - Trace between them to understand their relationship
- Debugging connected issues - Two bugs or errors might share a root cause
- Understanding dependencies - "Why does A require B?" reveals the dependency chain
- Validating assumptions - Check if two concepts you believe are related actually connect
- Discovering intermediate knowledge - Paths reveal concepts you didn't directly query
- Explaining decisions - Show the reasoning chain that led to an architectural choice
Example workflow - Discovering hidden connections:
```
⚙️ Querying Cog...
cog_recall({"query": "authentication error"})
# Returns: "Session Token Validation" (ID: abc123) and "OAuth2 Flow" (ID: def456)
# These seem related but how? Trace the path:

⚙️ Tracing connections...
cog_trace({"from_id": "abc123", "to_id": "def456"})
# Reveals: Session Token Validation → requires → JWT Parsing → is_component_of → OAuth2 Flow
# Now I understand: the session error is caused by JWT parsing, which is part of OAuth2!
```
Example workflow - Root cause analysis:
```
# User reports: "Login fails after password reset"
# I recall two potentially related concepts:

⚙️ Tracing connections...
cog_trace({"from_id": "<password_reset_id>", "to_id": "<login_flow_id>"})
# Reveals: Password Reset → invalidates → Session Cache → requires → Login Flow
# The path shows password reset invalidates cached sessions, causing login failure!
```
Interpreting path results:
- Short paths (1-2 hops): Direct relationship, concepts are closely related
- Long paths (3+ hops): Indirect relationship, may reveal surprising connections
- No path found: Concepts are in separate knowledge clusters - consider if they should be linked
- Multiple paths: Concepts are connected in several ways - examine each to understand the full relationship
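These interpretation rules are simple enough to encode directly. A minimal sketch, assuming a trace result is just an ordered list of concept names (or empty/None when no path exists):

```python
# Illustrative encoding of the path-interpretation rules above.
# `path` is assumed to be an ordered list of concept names from a trace.

def interpret_path(path):
    if not path:
        # Separate knowledge clusters - consider whether a link should exist
        return "no path: separate knowledge clusters"
    hops = len(path) - 1          # N concepts = N-1 hops
    if hops <= 2:
        return "direct relationship"
    return "indirect relationship"

print(interpret_path(["Auth", "Token Validation", "Session Management"]))
```

A three-concept path is two hops, so it still counts as a direct, closely related connection.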
When NO path exists but should:
If you believe two concepts should be connected but cog_trace finds no path:
- Verify both concepts exist with `cog_get`
- Check their connections with `cog_connections`
- Create the missing link with `cog_associate`
YOU MUST understand what you're doing BEFORE querying Cog. Blind queries waste tokens and return unfocused results. Informed queries save time.
The workflow is SEQUENTIAL, not parallel:
- First: Minimal understanding — Just enough to know what you're querying for
- Then: Query Cog — With targeted, informed keywords
- WAIT for Cog results — Do NOT explore code while Cog is querying
- Then: Explore/implement — With Cog context guiding your work
CRITICAL: Cog results MUST inform your exploration. If Cog returns knowledge about a component, read that component first. If Cog reveals a gotcha, keep it in mind. If Cog shows connections between concepts, follow those paths.
Do NOT query Cog in parallel with code exploration. The entire point is to let prior knowledge guide where you look. Parallel queries defeat this purpose.
Do NOT call cog_recall in parallel with gathering understanding. First complete the understanding step, then query Cog with specific keywords extracted from that understanding.
| Task Type | Understand FIRST | Then Query Cog With |
|---|---|---|
| Test to fix | Read the test file | Specific APIs, assertions, error patterns from the test |
| Feature request | Parse the user's description | Domain terms, component names, patterns mentioned |
| Bug report | Analyze the symptoms/error | Error messages, affected components, conditions |
| Investigation | Understand the question | Specific concepts, file names, behaviors in question |
| Refactoring | Understand the scope | Module names, patterns being changed, dependencies |
❌ WRONG vs ✅ RIGHT:
| ❌ WRONG | ✅ RIGHT |
|---|---|
| "Let me query Cog and read the test file" | "Let me read the test file first to understand what's failing" |
| "Let me query Cog about this feature" (without parsing request) | "The user wants X with Y behavior. Let me query Cog for related patterns" |
| Query: "IntersectionObserver iframe" (vague) | Query: "IntersectionObserver rootMargin cross-origin boundary" (specific) |
| Query: "authentication" (generic) | Query: "JWT refresh token race condition mutex" (targeted) |
Example workflows:
```
# Test fix
1. User: "Fix the failing IntersectionObserver test"
2. Agent reads test file FIRST → sees it tests rootMargin with cross-origin iframes
3. Agent queries: cog_recall({"query": "IntersectionObserver rootMargin cross-origin iframe"})

# Feature request
1. User: "Add rate limiting to the auth endpoints"
2. Agent parses: rate limiting + auth endpoints + needs to integrate
3. Agent queries: cog_recall({"query": "rate limiting auth endpoint middleware pattern"})

# Bug report
1. User: "Login fails intermittently after token refresh"
2. Agent identifies: token refresh + intermittent + login failure = likely race condition
3. Agent queries: cog_recall({"query": "token refresh race condition concurrent requests"})

# Simple error that seems obvious
1. User: "V8 library not found error when running zig build wpt"
2. Agent thinks: "Just a missing file, I'll check the path"
3. Agent queries: cog_recall({"query": "V8 build setup jsengines library path"})
4. Cog reveals: "Build must run from project root, not subdirectories"
   → Saved 5 minutes of filesystem exploration
```
Why this matters: Generic queries return broad, unfocused results. Specific queries based on actual task context surface the exact knowledge that saves time.
| Task Type | Read This First |
|---|---|
| Test to fix | The test file (or just the failing test function) |
| Feature to build | The user's description, maybe one interface file |
| Bug report | The symptoms described |
| Refactor | The scope of what's changing |
| Investigation | The question being asked |
This is NOT deep exploration. Read just enough to formulate good query keywords. Don't explore the entire codebase first.
YOU MUST query Cog after understanding the task. NEVER SKIP this step. This applies to bug fixes, features, investigations, research—everything.
Cog may have knowledge that saves you significant time:
- Prior decisions and their rationale that affect your current task
- Gotchas and pitfalls the team has already discovered
- Domain context that isn't documented elsewhere
- Non-obvious behaviors and edge cases
- Solutions to similar problems from previous sessions
⚙️ Querying Cog...
`cog_recall({"query": "session token refresh race condition"})`
The query should include keywords from what you just learned:
- Specific terms from the test or error message
- The component/module involved (e.g., "authentication", "caching")
- The type of problem (e.g., "race condition", "timeout", "validation")
- Domain concepts (e.g., "user session", "rate limiting")
If the first query returns nothing useful:
- Try broader keywords (e.g., "auth" instead of "OAuth2")
- Try related terms (e.g., "message queue" instead of "background jobs")
- Try domain keywords (e.g., "user permissions" instead of "access control")
Cog uses hybrid search (70% semantic + 30% keyword), so both exact terms and conceptually similar queries work.
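The 70/30 split means an exact-term match can compensate for modest semantic similarity, and vice versa. A minimal sketch of how such a blend might be computed (the scoring function and weights here only mirror the stated ratio; Cog's actual implementation is not shown):

```python
# Illustrative 70% semantic + 30% keyword blend (NOT Cog's real internals).

def keyword_score(query, text):
    """Fraction of query words that appear (as substrings) in the text."""
    words = query.lower().split()
    hits = sum(1 for w in words if w in text.lower())
    return hits / len(words) if words else 0.0

def hybrid_score(semantic_similarity, query, text):
    # semantic_similarity would come from an embedding model (0.0 - 1.0)
    return 0.7 * semantic_similarity + 0.3 * keyword_score(query, text)

# Even with middling semantic similarity, exact terms keep the result ranked:
print(hybrid_score(0.5, "token refresh mutex",
                   "Use a mutex so only one token refresh runs at a time"))
```

This is why both exact terms from an error message and conceptually similar phrasings are worth trying in a query.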
Query Cog anyway. Even when the answer seems obvious, you might discover:
- A better or canonical solution the team has established
- A related gotcha you'd hit next
- Context that changes your approach entirely
- Prior art that saves you from reinventing the wheel
The "I already know" trap:
❌ WRONG thinking:
- "This is just a file path issue, I don't need Cog"
- "It's obviously a typo, I'll just fix it"
- "This is trivial, querying Cog is overhead"

✅ RIGHT thinking:
- "Even though this seems simple, let me check if there's prior knowledge"
- "Maybe someone documented why this fails or a common fix"
- "2 seconds to query is cheaper than 5 minutes exploring"
Real example of "obvious" problem with hidden context:
- Error: "libv8_monolith.a: file not found"
- Obvious fix: Check if file exists, fix the path
- Cog reveals: "zig build commands must run from project root due to relative path resolution in build.zig"
- Result: Saved filesystem exploration, got correct fix immediately
You can and should re-query Cog at any point during a task. The initial query is mandatory, but it's not the only one.
Re-query when:
- You discover new terms, function names, or patterns while exploring code
- You hit a sub-problem that wasn't apparent initially
- You suspect two concepts might be connected (use `cog_trace` to find paths)
- The initial query was too broad and you now have specific keywords
⚙️ Querying Cog...
`cog_recall({"query": "TokenRefreshService mutex pattern"})`
Each new query might surface knowledge that changes your approach. Don't hesitate to query multiple times—targeted queries are cheap.
Subagents (spawned agents, teammates, explore agents) MUST query Cog before exploring. The same rules apply:
- Understand the task they've been given
- Query Cog with informed keywords
- WAIT for results
- Then explore/research
When spawning a subagent:
- Include the `cog` skill in the subagent configuration
- The subagent's prompt should include relevant context so it can formulate good queries
Subagents should NOT:
- Jump straight to file searches or code exploration
- Skip Cog because "it's just a quick lookup"
- Treat Cog as optional
Example - Correct subagent behavior:
Subagent: "Research V8 microtask queue processing"
1. Understand: I need to find how V8 processes microtasks
2. Query Cog: cog_recall({"query": "V8 microtask queue processing checkpoint"})
3. Wait for results
4. Then search codebases, informed by any prior knowledge
Subagents have access to the same knowledge graph. Skipping Cog means potentially rediscovering things the team already knows.
- ALWAYS use the full context, not just direct matches:
  - `requires` links show prerequisites you might be missing
  - `contrasts_with` links show alternative approaches
  - `implies` links show consequences to consider
  - `temporally_related` links show concepts learned in the same session

- YOU MUST enrich connections as you work: When using an engram, consider whether your current work reveals new associations that should exist. If you discover that an engram relates to other concepts not yet linked:

  🧠 Linking concepts...
  ```
  cog_associate({
    "source_id": "<engram_id>",
    "target_id": "<newly_discovered_related_concept>",
    "predicate": "<relationship_type>"
  })
  ```

  Examples of discovery opportunities:
  - An engram about "Cache Invalidation" is used while debugging a session bug → link it to "Session Management"
  - Working on auth reveals that "Token Refresh" depends on "Rate Limiting" → create the `requires` link
  - A sparsely-connected engram (1-2 synapses) turns out to be central to multiple features → add the missing connections

  This organic enrichment keeps the knowledge graph accurate and well-connected over time.
- If knowledge proves incorrect, determine the severity:

  | Update (minor) | Disconnect + Create New (major) |
  |---|---|
  | API behavior clarification | Feature completely refactored |
  | Version/syntax changes | Module/file deleted or renamed |
  | Missing edge case | Architecture fundamentally changed |

  Minor correction: `cog_update({"engram_id": "<id>", "definition": "Corrected explanation..."})`

  Major change:
  ```
  cog_connections({"engram_id": "<stale_id>"})
  cog_unlink({"synapse_id": "<each_synapse_id>"})
  cog_remember({"term": "Updated concept", "definition": "Current accurate explanation..."})
  ```
Cog knowledge is a starting point, not absolute truth. Code changes, patterns evolve, and past understanding may be incomplete. You MUST verify Cog results and correct them when wrong.
- Treat Cog as hints — Cog tells you where to look and what to watch for, but always verify against current code
- Check before relying — If Cog says "X uses pattern Y", confirm by reading the code
- Notice discrepancies — If code differs from Cog, the code is the source of truth
- Correct immediately — Don't leave invalid knowledge for the next session
YOU MUST correct Cog when you discover invalid knowledge. This is not optional.
| Scenario | Action |
|---|---|
| Minor inaccuracy (typo, outdated detail) | cog_update to fix the definition |
| Better approach discovered | cog_update to document the improved method |
| Pattern changed significantly | Unlink old connections, create new engram |
| Knowledge completely obsolete | Update definition to note "DEPRECATED: [reason]" |
Example — Cog was wrong:
⚙️ Cog said: "Use TokenRefreshService.refresh() for token renewal"
After checking the code, I found TokenRefreshService was refactored—
refresh() is now handled by AuthManager.renew_token().
🧠 Updating Cog...
```
cog_update({
  "engram_id": "<token_refresh_id>",
  "definition": "Token renewal is now handled by AuthManager.renew_token(),
    not the deprecated TokenRefreshService. The service was consolidated
    in PR #423 to reduce complexity."
})
```
Example — Found a better way:
⚙️ Cog suggested using a mutex for the race condition.
I discovered the codebase already has a SingleFlight utility that's
cleaner for this use case.
🧠 Updating Cog...
```
cog_update({
  "engram_id": "<race_condition_id>",
  "definition": "For concurrent request deduplication, use the SingleFlight
    utility (lib/utils/single_flight.ex) rather than manual mutex. SingleFlight
    ensures only one caller executes while others wait for the result."
})
```
Cog uses idempotent deduplication, not explicit conflict resolution:
- Duplicate concepts: If you try to remember something ≥90% similar to an existing engram, Cog activates the existing one instead of creating a duplicate
- Repeated associations: Calling `cog_associate` on existing links strengthens them (LTP) rather than failing
- Contradictory information: Both facts coexist - create explicit `contradicts` links if needed
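The dedup-and-strengthen behavior can be sketched in a few lines. This is an illustrative toy model only: the word-overlap similarity, the weight increment, and the class shape are all invented stand-ins, with only the 90% threshold and the idempotent semantics taken from the description above:

```python
# Toy model of idempotent deduplication and link strengthening (LTP).
# Only the >=90% threshold and the overall behavior mirror Cog's description;
# the similarity metric and data structures here are made up.

class Brain:
    def __init__(self):
        self.engrams = {}     # term -> definition
        self.synapses = {}    # (source, target, predicate) -> weight

    def similarity(self, a, b):
        """Word-overlap Jaccard similarity, standing in for embedding similarity."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def remember(self, term, definition):
        for existing in self.engrams:
            if self.similarity(term, existing) >= 0.9:
                # Activate the existing engram instead of creating a duplicate
                return f"Found existing concept: {existing}"
        self.engrams[term] = definition
        return "Created new SHORT-TERM concept"

    def associate(self, source, target, predicate):
        key = (source, target, predicate)
        # Re-associating an existing link strengthens it rather than failing
        self.synapses[key] = self.synapses.get(key, 0.0) + 0.1
        return self.synapses[key]
```

So remembering a near-duplicate is safe (it surfaces the existing engram), and repeating an association is safe (the link just gets heavier).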
Hierarchy of truth:
- Current code — always the source of truth
- User statements — trust the user's corrections
- Cog knowledge — useful hints, but verify
When you call cog_remember and Cog responds with "Found existing concept (X% similar)" instead of creating a new engram, this is a signal that the knowledge you thought was new already exists in some form.
Use this as an opportunity to discover relevant connections:
1. Fuzzy-find goal-related engrams — Use `cog_recall` to search for engrams related to your current objective (the bug you're fixing, the feature you're building, the problem you're solving)
2. Trace paths — Call `cog_trace` from the existing (deduplicated) engram to the goal-related engrams found in step 1. There may be multiple goal-related engrams, so trace to each.
3. Evaluate relevance — Examine the traced paths. Do they reveal:
   - Useful context you weren't aware of?
   - A chain of reasoning that connects to your current problem?
   - Related concepts that might help solve your goal?
4. Use or discard — If the paths are relevant, use that knowledge to inform your approach. If not, continue with your work.
Example scenario:
You're debugging a race condition in token refresh. You discover what you
think is a new insight about mutex patterns and call cog_remember.
Cog responds: "Found existing concept (92% similar): Token Refresh Mutex Pattern"
This means the insight already exists. Now trace paths:
```
⚙️ Querying Cog...
cog_recall({"query": "token refresh race condition bug"})
# Returns engrams related to your current goal

⚙️ Tracing connections...
cog_trace({"from_id": "<existing_mutex_engram>", "to_id": "<goal_engram>"})
# Path reveals: Token Refresh Mutex → requires → SingleFlight Utility → solves → Concurrent Request Bug
```
You now have a chain of thought connecting what you "rediscovered" to your
actual goal, potentially revealing the solution path.
THIS CANNOT BE SKIPPED. Every task must end with memory consolidation.
Cog implements biologically-inspired memory consolidation. All new memories created via cog_remember start as short-term. Short-term memories:
- Decay over 24 hours (become harder to recall as they age)
- Are automatically cleaned up after 24 hours if not reinforced
- Must be validated and reinforced to become permanent long-term memories
If you don't consolidate, your learnings are LOST.
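The lifecycle can be modeled as a tiny store where unreinforced entries expire after 24 hours. This is a toy sketch for intuition, not Cog's implementation; only the 24-hour window and the reinforce-or-lose semantics come from the description above:

```python
# Toy model of the short-term memory lifecycle: memories decay and are
# dropped 24h after creation unless reinforced into long-term storage.
import time

class ShortTermStore:
    TTL_SECONDS = 24 * 60 * 60

    def __init__(self):
        self.short_term = {}   # term -> created_at (unix seconds)
        self.long_term = set()

    def remember(self, term, now=None):
        self.short_term[term] = time.time() if now is None else now

    def reinforce(self, term):
        if term in self.short_term:
            del self.short_term[term]
            self.long_term.add(term)   # promoted: never expires

    def cleanup(self, now=None):
        now = time.time() if now is None else now
        expired = [t for t, created in self.short_term.items()
                   if now - created > self.TTL_SECONDS]
        for t in expired:
            del self.short_term[t]     # decayed without reinforcement: lost
        return expired
```

A memory recorded at hour 0 and reinforced survives; one left alone is gone by the next day's cleanup, which is exactly why consolidation is mandatory.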
As you work, use cog_remember normally to capture insights:
🧠 Recording to Cog...
```
cog_remember({
  "term": "LiveView stream reset pattern",
  "definition": "When filtering stream data, must use reset: true...",
  "associations": [...]
})
```
This creates a short-term memory. The response will indicate: Created new SHORT-TERM concept.
Once your work is complete and validated (tests pass, user confirms, etc.), you MUST consolidate short-term memories.
IMPORTANT: If you're using a TODO list (TaskCreate), these consolidation steps should already be at the end of your list. See "MANDATORY: Add Cog Consolidation to ALL TODO Lists" above.
1. List short-term memories:

   ⚙️ Listing short-term memories...
   ```
   cog_list_short_term({"limit": 20})
   ```

2. For each short-term memory, evaluate individually:
   - Is this knowledge accurate based on the completed work?
   - Is this still relevant after implementation?
   - Would this save someone time in the future?

3. Reinforce valid memories:

   🧠 Reinforcing memory...
   ```
   cog_reinforce({"engram_id": "<engram_id>"})
   ```

4. Flush invalid memories:

   🧠 Flushing invalid memory...
   ```
   cog_flush({"engram_id": "<engram_id>"})
   ```
- Quality control: Only validated knowledge becomes permanent
- Prevents stale info: Hypotheses that proved wrong don't persist
- Mirrors biology: Like human memory, important things get reinforced through validation
- Keeps graph clean: Invalid or temporary thoughts are naturally pruned
Getting focused on implementation and forgetting to consolidate.
This is NOT acceptable. You MUST:
- Record insights AS you discover them (don't wait)
- After task completion, ALWAYS run
cog_list_short_term - Reinforce or flush EVERY short-term memory
A task is not complete until memories are consolidated.
| Operation | Print This LITERALLY |
|---|---|
| Before `cog_list_short_term` | ⚙️ Listing short-term memories... |
| Before `cog_reinforce` | 🧠 Reinforcing memory... |
| Before `cog_flush` | 🧠 Flushing invalid memory... |
Only record knowledge that has been VERIFIED. Never record hypotheses, speculation, or things you think might be true.
| Context | Verified when... |
|---|---|
| Bug fix | Tests pass |
| Feature | Implementation works (tests pass, or user confirms) |
| Investigation | Root cause is confirmed (not just hypothesized) |
| Architecture decision | Decision is made and acted on |
After verification, ask: "Would this save someone time?"
Record it if:
- It explains WHY - The reasoning behind a decision or pattern
- It prevents bugs - A gotcha, edge case, or non-obvious behavior
- It's not documented - Learned through experience or code reading
- It's domain-specific - Business rules or context unique to this project
- It took effort to discover - Debugging time, experimentation, or deep reading
YOU MUST store verified knowledge in Cog. NEVER SKIP recording insights—whether strategic decisions or tactical coding details.
Both help the team: strategic knowledge prevents bad decisions, tactical knowledge prevents wasted debugging time.
1. ALWAYS search first to find duplicates and related concepts:
   ```
   cog_recall({"query": "cache invalidation patterns"})
   ```

2. Create the engram with associations - ALWAYS link to existing concepts in one call:
   ```
   cog_remember({
     "term": "Cache invalidation on related entity update",
     "definition": "When a parent entity is updated, all cached child entities must be invalidated. Use event-driven invalidation rather than TTL to ensure consistency.",
     "associations": [
       {"target": "Caching Strategy", "predicate": "is_component_of"},
       {"target": "Data Consistency Patterns", "predicate": "example_of"}
     ]
   })
   ```
   The `target` field uses fuzzy matching to find existing concepts by term. If a target isn't found, that association is skipped (the engram is still created).

3. For linking existing concepts later - Use `cog_associate` only when both concepts already exist:
   ```
   cog_associate({
     "source_id": "<existing_engram_id>",
     "target_id": "<other_existing_id>",
     "predicate": "related_to"
   })
   ```
Think: "Would this save someone time in 6 months?"
Terms (2-5 words):
- ✅ "Cache Invalidation Race Condition"
- ✅ "Session Token Refresh Timing"
- ✅ "Why We Chose PostgreSQL"
- ✅ "Ecto Preload N+1 Pattern"
- ❌ "utils.py" (just a filename)
- ❌ "Error handling" (too vague)
- ❌ "The bug fix" (not searchable)
Definitions (1-3 sentences) should include:
- What is this? - The core concept or behavior
- Why does it matter? - Consequences of not knowing this
- Context - The reasoning, history, or where it applies (when relevant)
Examples of good engrams:
Strategic (the "why"):
```
Term: "Why events are processed async"
Definition: "Payment events are processed asynchronously via a job queue rather
than inline because early versions caused request timeouts during Stripe webhook
spikes. The 30-second webhook timeout from Stripe made synchronous processing
unreliable. See PaymentWorker for the implementation."

Term: "Customer vs Account distinction"
Definition: "A Customer is a billing entity (Stripe), while an Account is our
internal user grouping. One Account can have multiple Customers (e.g., separate
billing for departments). This confusion caused bugs in the early invoicing
system—always clarify which you mean."
```
Tactical (the "how" and "watch out"):
Term: "Session token refresh race condition"
Definition: "When multiple concurrent requests detect an expired token, they
may all attempt to refresh simultaneously, causing duplicate tokens or auth
failures. Use a mutex or single-flight pattern to ensure only one refresh
occurs while others wait for the result."
Term: "LiveView stream filtering gotcha"
Definition: "Streams are not enumerable in LiveView. You cannot filter a stream
in place—you must refetch the data and re-stream with reset: true. Attempting
to use Enum functions on a stream silently fails."
Term: "Ecto changeset field access"
Definition: "Never use map access syntax on changesets (changeset[:field]) as
structs don't implement Access. Always use Ecto.Changeset.get_field/2 or
pattern match on changeset.changes. Using map access instead causes confusing nil errors."
Why this matters: Isolated engrams won't surface in future queries. The value of Cog comes from spreading activation through connected concepts. New learnings MUST connect to the existing knowledge graph so they're discoverable when querying related topics.
Note: Concepts created close in time are automatically linked with temporally_related synapses, but these weak links are not sufficient - explicit semantic associations are ALWAYS needed.
When linking concepts, ALWAYS use the most specific predicate:
| Predicate | Meaning |
|---|---|
| `requires` | A is prerequisite for B |
| `implies` | If A then B |
| `contradicts` | A and B are mutually exclusive |
| `leads_to` | A naturally flows to B |
| `is_component_of` | A is part of B |
| `contains` | A includes B |
| `example_of` | A demonstrates pattern B |
| `generalizes` | A is broader version of B |
| `similar_to` | A and B are related concepts |
| `contrasts_with` | A and B differ importantly |
| `supersedes` | A replaces B |
| `derived_from` | A came from B |
| `precedes` | A comes before B |
| `related_to` | General link (use sparingly) |
| `temporally_related` | A and B were created close in time (auto-generated) |
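For example, when a recorded gotcha demonstrates a broader pattern, `example_of` carries far more signal than `related_to` (the engram IDs below are placeholders):

```
cog_associate({
  "source_id": "<gotcha_engram_id>",
  "target_id": "<pattern_engram_id>",
  "predicate": "example_of"
})
```

Falling back to `related_to` here would still create a link, but one that tells future queries nothing about how the two concepts actually relate.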
Ask yourself: "Would this save someone time—whether they're new to the team or debugging at 2am?"
If yes, record it. Knowledge falls into two equally important categories:
Strategic Knowledge (the "why"):
- Why an architecture choice was made, not just what it is
- Alternatives that were considered and rejected (and why)
- Trade-offs that shaped the current design
- Business rules and domain terminology in this context
- Historical context: "We tried X and it failed because Y"
Tactical Knowledge (the "how" and "watch out"):
- Bug fixes and their root causes (to prevent recurrence)
- Non-obvious API behaviors discovered through experimentation
- Workarounds for framework/library quirks
- Performance insights from profiling
- Gotchas and pitfalls that have bitten the team
- Project-specific patterns and conventions
- Edge cases with special handling
Both matter equally. Strategic knowledge prevents bad decisions. Tactical knowledge prevents wasted debugging time. A new developer needs both to be productive.
NEVER remember:
- Standard documentation that's easily searchable
- Temporary debugging steps
- User preferences (store those in project configuration files)
- Trivial or obvious information
Cog will reject and YOU MUST NEVER attempt to store:
- Passwords - User passwords, database passwords, admin credentials
- API Keys & Tokens - AWS keys, Stripe keys, GitHub tokens, OAuth tokens, bearer tokens
- Private Keys - SSH keys, PGP keys, TLS/SSL certificates, signing keys
- Secrets - Client secrets, encryption keys, signing secrets
- Connection Strings - Database URLs with embedded credentials
- Personal Identifiable Information (PII) - Email addresses, phone numbers, SSNs, credit card numbers
- Environment Variables - Contents of `.env` files with secrets
Why? Cog stores knowledge persistently and may be accessed across sessions. Sensitive data MUST NEVER be persisted in a knowledge system.
Instead of storing credentials, store:
- ✅ "Database uses PostgreSQL with connection pooling"
- ❌ "Database password is abc123"
- ✅ "Authentication uses Stripe API for payments"
- ❌ "Stripe key is sk_live_..."
- ✅ "Admin user configured in environment variables"
- ❌ "Admin email is admin@example.com"
The server will automatically detect and reject attempts to store sensitive content.
- No engram deletion: There is no MCP tool to delete engrams entirely. If an engram is wrong:
  - Use `cog_update` to correct the definition
  - Use `cog_unlink` to remove incorrect synapses
  - For obsolete concepts, update the definition to note "DEPRECATED: [reason]"
- No multi-query: Each `cog_recall` is independent. Chain queries manually if needed.
- Synapse uniqueness: Only one synapse can exist between two engrams (same direction). Calling `cog_associate` again strengthens the existing link rather than creating a duplicate.
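As a sketch, correcting a wrong engram combines `cog_update` and `cog_unlink`. The IDs are placeholders, and the parameter names shown here are assumed for illustration:

```
cog_update({
  "id": "<stale_engram_id>",
  "definition": "DEPRECATED: replaced by event-driven invalidation. Kept for historical context."
})

cog_unlink({
  "source_id": "<stale_engram_id>",
  "target_id": "<wrongly_linked_id>"
})
```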
When you query with `cog_recall`:
- Seeds: Direct matches found (e.g., "Session Auth Pattern") with similarity scores
- Spread: Activation flows through synapses to connected concepts
- Decay: Activation diminishes with each hop (0.7x per hop by default)
- Threshold: Spreading stops when activation falls below 0.2
- Strengthen: All activated concepts become slightly more accessible for future recall
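With these defaults, a fully activated seed can reach concepts about four hops away before spreading stops:

```
seed:  1.00
hop 1: 1.00 × 0.7 = 0.70
hop 2: 0.70 × 0.7 = 0.49
hop 3: 0.49 × 0.7 ≈ 0.34
hop 4: 0.34 × 0.7 ≈ 0.24
hop 5: 0.24 × 0.7 ≈ 0.17   (below the 0.2 threshold - spreading stops)
```

This is why explicit associations matter: a well-linked engram sits within a few hops of many query entry points, while an isolated one is unreachable.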
This mirrors biological memory:
- Recalling one memory activates related memories automatically
- Frequently co-accessed memories strengthen their connections
- The more a path is traversed, the stronger it becomes