Dissertation Overview
Large language models are powerful reasoners but poor long-term partners: they forget earlier interactions, lose track of context, and struggle to maintain continuity in complex tasks. My research focuses on building a cognitive memory framework that gives agents durable, structured memory.
- Problem: LLMs have weak long-term memory and limited continuity across sessions and tasks.
- Goal: Design a memory system with working, episodic, semantic, and procedural layers inspired by human cognition.
- Approach: Hybrid local + cloud memory, multi-agent collaboration, and selective sync strategies that control what is stored, compressed, or forgotten.
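The selective sync idea above can be sketched as a small policy function. Everything here is illustrative: the `MemoryItem` fields, the importance/age scoring formula, and the thresholds are hypothetical stand-ins for whatever scorer the real system uses, not the actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class SyncAction(Enum):
    STORE = "store"        # keep at full resolution
    COMPRESS = "compress"  # summarize before long-term storage
    FORGET = "forget"      # drop from memory entirely


@dataclass
class MemoryItem:
    importance: float  # 0..1, e.g. from an LLM-based relevance scorer (assumed)
    age_hours: float   # time since last access


def sync_policy(item: MemoryItem) -> SyncAction:
    """Toy selective-sync rule: perceived importance decays with age."""
    score = item.importance / (1.0 + item.age_hours / 24.0)
    if score > 0.5:
        return SyncAction.STORE
    if score > 0.1:
        return SyncAction.COMPRESS
    return SyncAction.FORGET
```

A fresh, important item is stored whole; a moderately important day-old item gets compressed; stale low-importance items are forgotten. The real framework would presumably also factor in the local/cloud split and privacy constraints.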
Architecture
The core architecture organizes memory into tiers with different time-scales, capacities, and retrieval strategies. Agents interact with a memory bus that mediates reads, writes, and compression across these tiers.
Memory tiers
- Working: short-term, high-resolution context for current conversations and tasks.
- Episodic: summaries of sessions, events, and interactions across time.
- Semantic: distilled facts, concepts, and stable knowledge.
- Procedural: learned routines, checklists, and task-specific “skills”.
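One way to picture the four tiers is as containers with different capacities and eviction behavior. This is a minimal sketch, assuming a simple bounded-queue model; the capacities and the evict-oldest policy are placeholders, not the framework's actual retention strategy.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Tier:
    name: str
    capacity: int                 # max items before eviction (illustrative)
    items: deque = field(default_factory=deque)

    def write(self, item: str) -> None:
        self.items.append(item)
        if len(self.items) > self.capacity:
            self.items.popleft()  # evict oldest; a real tier would compress or promote


# Tiers with progressively longer time-scales (capacities are hypothetical)
working = Tier("working", capacity=8)        # current conversation context
episodic = Tier("episodic", capacity=256)    # session and event summaries
semantic = Tier("semantic", capacity=4096)   # distilled facts and concepts
procedural = Tier("procedural", capacity=512)  # routines and skills
```

The key design point the sketch illustrates: each tier trades resolution for durability, so overflow should trigger compression into the next tier rather than silent loss.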
Agent network
Multiple agents (a planner, a memory manager, and task-specific workers) interact through a shared memory bus. The memory manager enforces promotion, demotion, and forgetting policies, while the task-specific agents read from and write to memory as they work.
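The bus-mediated interaction can be sketched as follows. The `MemoryBus` class, its substring-match retrieval, and the `promote` signature are all assumptions made for illustration; a real system would use embedding-based retrieval and LLM summarization.

```python
from typing import Callable


class MemoryBus:
    """Mediates agent reads/writes across memory tiers (sketch)."""

    def __init__(self) -> None:
        self.tiers: dict[str, list[str]] = {
            "working": [], "episodic": [], "semantic": [], "procedural": [],
        }

    def write(self, tier: str, item: str) -> None:
        self.tiers[tier].append(item)

    def read(self, tier: str, query: str) -> list[str]:
        # Placeholder retrieval: substring match stands in for
        # embedding search with tier-specific indexes.
        return [m for m in self.tiers[tier] if query.lower() in m.lower()]

    def promote(self, src: str, dst: str,
                summarize: Callable[[str], str]) -> None:
        # Memory-manager policy: move items up a tier,
        # compressing them on the way.
        for item in self.tiers[src]:
            self.tiers[dst].append(summarize(item))
        self.tiers[src].clear()
```

In this picture, workers call `write` and `read` freely, while only the memory manager invokes `promote` (or an analogous demote/forget operation), keeping retention policy in one place.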
Diagrams and implementation details will be added here, along with links to PDFs and code once they are public.
Papers & Talks
As results are published, this section will collect papers, preprints, and talks related to cognitive memory architectures, multi-agent systems, and applied tools like AdultBrain.
- Coming soon: citation list in APA style, with links to PDFs and recorded talks.