This project implements a shell add-on that improves productivity when working with local LLMs. It offers two modes of use: Quick and Deep. The Quick mode sends a single request to Ollama, while the Deep mode performs multiple steps, allowing the LLM to run jobs and gather additional data.
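For instance, the Quick mode boils down to a single call to Ollama's HTTP API. Here is a minimal Python sketch (the model name llama3 and the default local endpoint are assumptions to adapt to your setup):

```python
import json
import urllib.request

def quick_ask(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to a local Ollama instance and return its reply."""
    payload = json.dumps({
        "model": model,   # assumed model name; use whichever model is pulled locally
        "prompt": prompt,
        "stream": False,  # ask for one complete answer instead of a token stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```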
You can find the documents about the project status below:
To install, simply run ./install.sh with the desired shell:

Then use the terminal as usual. If you need assistance, just ask!

Finally, let the add-on do the magic (it is not really magic):

This shows that the tool works. However, some features are missing and the AI doesn't handle all the formatting well.
Using the data from the workers, the AI solves the problem:

In order to give context about previous commands and results, the add-on should log them to a file (one per session). This file will be created when the shell session is initiated (when calling source add-on.sh).
The data will have to be sent to the python3 script in JSON format. To do so, we can either format the data as they arrive in the file, or parse them before handing them to the Python script.
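For example, if each command is appended to the session file as one JSON object per line, the python3 script can parse it directly. A minimal sketch (the log path and the field names cmd/output/exit_code are assumptions):

```python
import json
from pathlib import Path

def load_session(log_path: str) -> list[dict]:
    """Parse a session log that stores one JSON object per line,
    e.g. {"cmd": "ls", "output": "...", "exit_code": 0}."""
    entries = []
    for line in Path(log_path).read_text().splitlines():
        if line.strip():
            entries.append(json.loads(line))
    return entries
```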
We first want to make clear how the LLM should process requests. This part must be persistent. We want it to clearly be able to:
- Understand the structure of the prompts
- Know how it should handle data
- Know how it should answer the user
- Know its abilities

To solve this problem, we must create a very good first prompt and find a way to make it persistent. For example, we can send it at each request, or we can find a parameter to specify it (see the sketch below). Finally, we do not want the AI to use the data from the examples in its answers, only their structure; we have to announce this clearly.
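As an illustration of the "parameter" option: Ollama's /api/generate endpoint accepts a system field that is applied as the system prompt, so the instructions can be re-sent with every request. A sketch (the prompt wording is only a placeholder):

```python
SYSTEM_PROMPT = (
    "You are a shell assistant. Prompts follow the structure described below. "
    "Use the examples only for their structure, never for their data."
)  # placeholder wording; the real first prompt still has to be designed

def build_request(user_prompt: str, model: str = "llama3") -> dict:
    """Build an Ollama request that re-attaches the persistent instructions."""
    return {
        "model": model,
        "system": SYSTEM_PROMPT,  # applied by Ollama as the system prompt
        "prompt": user_prompt,
        "stream": False,
    }
```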
We then want the AI to understand the context behind the user prompt:
- OS and hardware info
- File structure
- ...
- Previous commands / results

To do so, we have to log previous commands. Then we have to find the best way to structure the data in a clear way, so that the AI can easily filter it. JSON or YAML could be a good solution if the AI handles it well (see the sketch below).
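For example, with JSON the context could be bundled as follows (a sketch; the exact keys and the depth of the file listing are assumptions):

```python
import json
import os
import platform

def build_context(history: list[dict]) -> str:
    """Bundle environment info and the logged history into one JSON blob
    that the AI can easily filter."""
    context = {
        "os": platform.platform(),   # OS / hardware summary
        "cwd": os.getcwd(),
        "files": os.listdir("."),    # shallow view of the file structure
        "history": history[-10:],    # only the most recent commands/results
    }
    return json.dumps(context, indent=2)
```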
The user prompt should be processed normally. However, it could be a good feature to allow the user to ask the AI follow-up questions about the previous response.
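A simple way to support such follow-ups is to remember the last exchange and prepend it to the next prompt. A sketch reusing the quick_ask helper from above (the in-memory storage is an illustrative choice):

```python
last_exchange: dict | None = None  # kept in memory between calls

def ask_followup(question: str) -> str:
    """Re-prompt the model with the previous exchange prepended, so the
    user can question or refine the last response."""
    global last_exchange
    prompt = question
    if last_exchange is not None:
        prompt = (
            f"Previous question: {last_exchange['question']}\n"
            f"Previous answer: {last_exchange['answer']}\n"
            f"Follow-up: {question}"
        )
    answer = quick_ask(prompt)  # reuses the quick-mode helper sketched earlier
    last_exchange = {"question": question, "answer": answer}
    return answer
```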
The AI might need more specific context to handle a user request, such as analysing a file or seeing the result of a specific command. A good feature would be to allow the AI to ask the program for additional data using pre-made requests.
Here is an example of how the workers are chosen by the AI in multiple scenarios (a sketch of how such requests could be dispatched follows the examples).
- "why cant i run the program ch64"
[+] Valid worker : ['file_analysis', '"ch64"']
[+] Valid worker : ['executable_analysis', '"ch64"']
Here are the required workers : {'file_analysis': ['"ch64"'], 'executable_analysis': ['"ch64"']}
- "why cant i run main.py"
[+] Valid worker : ['file_analysis', 'main.py']
Here are the required workers : {'file_analysis': ['main.py']}
But also...
[+] Valid worker : ['system_info']
[+] Valid worker : ['file_analysis', '"/home/arthub", "/root", "/etc", "/usr", "/var", "/home", "/opt"']
Here are the required workers : {'system_info': [], 'file_analysis': ['"/home/arthub"', ' "/root"', ' "/etc"', ' "/usr"', ' "/var"', ' "/home"', ' "/opt"']}
- "what is the biggest file of my current directory"
[+] Valid worker : ['file_analysis', 'AI_POWER.py,__init__.py,workers/executable_analysis.py,workers/file_analysis.py,workers/hardware_info.py,workers/network_conf.py,workers/system_info.py -s']
Here are the required workers : {'file_analysis': ['AI_POWER.py', '__init__.py', 'workers/executable_analysis.py', 'workers/file_analysis.py', 'workers/hardware_info.py', 'workers/network_conf.py', 'workers/system_info.py -s']}
But also ...
[+] Valid worker : ['file_analysis', 'getBiggestFile']
Here are the required workers : {'file_analysis': ['getBiggestFile']}
- "why cant i reach my friend on 192.168.1.2"
[+] Valid worker : ['network_conf', '1']
Here are the required workers : {'network_conf': ['1']}
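Given one of these dicts, the python3 script could dispatch each entry to the matching module in workers/. A sketch (the run() entry point inside each worker is an assumption):

```python
import importlib

def run_workers(required: dict[str, list[str]]) -> dict[str, str]:
    """Dispatch each requested worker with its arguments and collect results,
    e.g. required = {'file_analysis': ['main.py']}."""
    results = {}
    for name, args in required.items():
        module = importlib.import_module(f"workers.{name}")  # e.g. workers/file_analysis.py
        results[name] = module.run(*args)  # assumed entry point in each worker
    return results
```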



