Following the Stanford Smallville experiment and emerging work on agent frameworks, this project investigates how temporal awareness and memory retrieval patterns influence agent behavior over extended interactions. It is a testbed for exploring questions about:
- **Temporal Persistence**: Does recency bias lead to stable behavioral patterns or continuous drift?
- **Emotional Dynamics**: How does emotional state propagate through the perception-action loop?
- **Memory Formation**: Which experiences become memories, and what patterns govern that selection?
- **Path Dependence**: How do early experiences shape later behavior through memory accumulation?
Inspired by:
- Park et al.'s Generative Agents (Stanford "Smallville" simulation) - demonstrating emergent social behaviors from memory-augmented LLM agents
- LangChain's agent framework
- Cognitive architectures for LLMs
The system implements a simple perception-action loop with temporal awareness:
Clock (discrete time) → State (world observations)
↓
Memory Retrieval (recency-based) → Bias Computation (emotional state)
↓
Action Generation → Memory Formation
↓
State Update → Clock Tick → [repeat]
- **Clock**: Discrete time tracker enabling temporal reasoning and memory recency calculations
- **State**: Observable world conditions that update with agent actions and environmental changes
- **Memory**: Timestamped experience records forming the agent's episodic history
- **MemoryGraph**: Storage and retrieval system using temporal heuristics to surface relevant memories
- **BiasModel**: Computes emotional biases from current observations and retrieved memories using LLM inference
- **CognitiveModel**: Generates actions and forms new memories based on world state, emotional bias, and recent actions
- **FileSystem**: Persistent storage for simulation histories enabling post-hoc analysis
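One pass through the loop can be sketched as below. All class and method names are illustrative assumptions reconstructed from the component descriptions above, not the project's actual API; the LLM-backed BiasModel and CognitiveModel steps are elided.

```python
# Minimal sketch of one perception-action epoch (names are hypothetical).
from dataclasses import dataclass


@dataclass
class Memory:
    """Timestamped experience record (one entry in the episodic history)."""
    timestamp: int
    content: str


class Clock:
    """Discrete time tracker."""
    def __init__(self) -> None:
        self.t = 0

    def tick(self) -> None:
        self.t += 1


class MemoryGraph:
    """Stores memories and retrieves them with a recency heuristic."""
    def __init__(self) -> None:
        self._memories: list[Memory] = []

    def add(self, memory: Memory) -> None:
        self._memories.append(memory)

    def get_top_k_memories(self, k: int) -> list[Memory]:
        # Recency bias: the most recently formed memories surface first.
        return sorted(self._memories, key=lambda m: m.timestamp, reverse=True)[:k]


def run_epoch(clock: Clock, graph: MemoryGraph, observation: str) -> list[Memory]:
    """One tick: retrieve recent memories, form a new one, advance the clock.

    In the real system, BiasModel and CognitiveModel LLM calls sit between
    retrieval and memory formation; here a stub memory is formed directly.
    """
    recent = graph.get_top_k_memories(k=3)
    graph.add(Memory(timestamp=clock.t, content=f"observed: {observation}"))
    clock.tick()
    return recent
```

The key design point this sketch captures is that retrieval happens *before* memory formation, so each action is conditioned only on past experience.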
```shell
# Clone repository
git clone https://github.com/yourusername/temporal-mind.git
cd temporal-mind

# Set OpenAI API key
export OPENAI_API_KEY='your-api-key-here'

# Install dependencies
pip install openai

# Run the simulation
cd src
python main.py
```

The simulation runs for 100 epochs (configurable in `main.py`). Histories are saved to timestamped directories under `temporal_mind_data/` for analysis.
Edit main.py to modify:
- `DEFAULT_WORLD_STATE_OBSERVABLES`: Initial environmental conditions
- `DEFAULT_MEMORIES`: Foundational knowledge/experiences
- `ROUNDS`: Number of time steps to simulate
- Memory retrieval parameter `k` in `get_top_k_memories()`
temporal_mind_data/
└── 20250119/
└── histories/
└── histories_20250119_143022/
├── world_states.txt # State evolution over time
├── biases.txt # Emotional trajectory
├── reactions.txt # Action sequence
└── new_memories.txt # Memory formation pattern
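A saved run can be loaded for post-hoc analysis with a few lines of Python. The file names below match the layout above; `load_history` itself is a hypothetical helper, not part of the project.

```python
# Sketch: read one saved run's history files into lists of lines.
from pathlib import Path


def load_history(run_dir: str) -> dict[str, list[str]]:
    """Map each history name to its file's lines for inspection."""
    run = Path(run_dir)
    return {
        name: (run / f"{name}.txt").read_text().splitlines()
        for name in ("world_states", "biases", "reactions", "new_memories")
    }


# e.g. history = load_history(
#     "temporal_mind_data/20250119/histories/histories_20250119_143022")
# len(history["biases"]) is then the length of the emotional trajectory
```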
The implementation successfully demonstrates:
- ✅ Stable perception-action loops over 100+ time steps
- ✅ Coherent memory formation and retrieval
- ✅ Emotional bias computation from memory and state
- ✅ Persistent state evolution with action consequences
Current limitations:
- Memory graph grows unbounded (no forgetting mechanism)
- No memory consolidation across time scales
- Purely reactive (no planning or goal-directed behavior)
- Limited evaluation metrics
temporal-mind/
├── src/
│ ├── main.py # Simulation loop and configuration
│ ├── clock.py # Discrete time tracking
│ ├── state.py # World state representation
│ ├── memory.py # Memory objects with timestamps
│ ├── memory_graph.py # Memory storage and retrieval
│ ├── model.py # BiasModel and CognitiveModel (LLM inference)
│ └── filesystem.py # Data persistence
├── LICENSE
└── README.md
Temporal Mind: Experimental Cognitive Agent Framework
https://github.com/one-2/temporal-mind
2025