- Yapping Agents is a simple program that demonstrates multiple AI agents continuously chatting with each other.
- The program runs the LLM locally, so you don't need to rely on external API calls.
  - 😇 Thanks to llama.cpp for providing the tools to run the LLM agents locally.
- Agents with personas (LLM personas) continuously chat with each other.
- The `RepeatingChecker` agent detects repetitive loops, a common issue with AI agents.
- The `RandomTopicGenerator` agent provides randomly generated, unique topics to keep the conversation fresh.
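As an illustration of how loop detection might work, here is a minimal sketch that compares a new message against recent history using word-overlap (Jaccard) similarity. The helper names are hypothetical; the project's actual `RepeatingChecker` logic may differ.

```javascript
// Hypothetical sketch of loop detection via word-overlap (Jaccard) similarity.
// The real RepeatingChecker agent may use a different strategy.
function similarity(a, b) {
  const wordsA = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  if (wordsA.size === 0 || wordsB.size === 0) return 0;
  let shared = 0;
  for (const w of wordsA) if (wordsB.has(w)) shared++;
  // Jaccard index: |A ∩ B| / |A ∪ B|
  return shared / (wordsA.size + wordsB.size - shared);
}

// Flag a repetitive loop when the new message closely matches any recent one.
function isRepetitive(history, message, threshold = 0.8) {
  return history.slice(-5).some((prev) => similarity(prev, message) >= threshold);
}

console.log(isRepetitive(["I agree with you.", "That is a great point."],
                         "I agree with you!")); // true: near-duplicate
```

When a loop is flagged, a fresh topic (e.g. from the topic generator) can be injected to break the cycle.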
- Clone this repository:

  ```shell
  git clone git@github.com:Honeybeei/yapping-agents.git
  ```

- Install dependencies:

  ```shell
  cd yapping-agents
  npm install
  ```

- Download a GGUF model of your choice and place it in the `./gguf-models` directory.
  - Example using `unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF`:

    ```shell
    npx node-llama-cpp pull --dir ./gguf-models https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF
    ```

  - ⚠️ Note: Change the `modelName` in the `config.json` file to match the model you downloaded.

- Run the program:

  ```shell
  npm start
  ```
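Conceptually, the program's main loop can be pictured as agents taking turns responding to the latest message. The sketch below uses stub `respond` functions for clarity; in the real program, responses come from the local LLM, and the agent names and helper function are hypothetical.

```javascript
// Simplified sketch of a round-robin agent chat loop.
// Real agents would query the local LLM; here each agent is a stub function.
function runChat(agents, turns) {
  const transcript = [];
  let lastMessage = "Hello!";
  for (let i = 0; i < turns; i++) {
    const agent = agents[i % agents.length]; // agents take turns in order
    lastMessage = agent.respond(lastMessage);
    transcript.push(`${agent.name}: ${lastMessage}`);
  }
  return transcript;
}

const agents = [
  { name: "Alice", respond: (msg) => `Interesting! You said "${msg}".` },
  { name: "Bob", respond: (msg) => `Tell me more about "${msg}".` },
];

console.log(runChat(agents, 2)); // two turns: Alice, then Bob
```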
You can modify the settings in the `config.json` file:
```json
{
  "modelName": "unsloth_DeepSeek-R1-Distill-Llama-8B-GGUF_DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf",
  "persona": {
    "temperature": {
      "min": 0.8,
      "max": 1.5
    },
    "personality": {
      "traitCount": 10
    },
    "prompts": {
      "system": ""
    }
  }
}
```

- `modelName`: Your chosen model file’s name in the `./gguf-models` folder.
- `temperature`: Adjusts the randomness and creativity of the chat.
  - Higher values produce more varied responses.
  - Lower values produce more predictable responses.
- `traitCount`: Number of personality traits assigned to each agent.
- `system`: Extra system prompt you can provide to shape the persona’s responses.
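To illustrate how the `temperature` range might be consumed, each agent could draw its own temperature uniformly from `[min, max)`, giving every persona a slightly different level of creativity. This is a hypothetical sketch; `sampleTemperature` is not part of the project, though the field names follow `config.json`.

```javascript
// Hypothetical sketch: sample a per-agent temperature from the config range.
// Field names follow config.json; this helper itself is not part of the project.
const config = {
  persona: {
    temperature: { min: 0.8, max: 1.5 },
  },
};

function sampleTemperature({ min, max }) {
  // Uniform draw in [min, max); swapping guards against a misordered range.
  if (min > max) [min, max] = [max, min];
  return min + Math.random() * (max - min);
}

console.log(sampleTemperature(config.persona.temperature));
```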