add ollama support #269

Merged
di-sukharev merged 5 commits into di-sukharev:dev from jaroslaw-weber:feature/ollama
Feb 27, 2024

Conversation

@jaroslaw-weber
Contributor

adding ollama support (local model)

how to set up:

  • install and run ollama https://ollama.ai/
  • run ollama run mistral to fetch the model (2gb of data)
  • run opencommit with OCO_AI_PROVIDER='ollama'

it's not perfect but can be a start to incorporating other models that are available through ollama.
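For anyone curious what the provider does under the hood, here is a rough sketch. The endpoint and default model match what this PR targets; the helper name `buildOllamaRequest` and the exact payload shape are assumptions for illustration, not the PR's actual code.

```typescript
// Sketch of the request an ollama provider sends to the local server.
const OLLAMA_URL = 'http://localhost:11434/api/generate';

interface OllamaRequest {
  url: string;
  body: { model: string; prompt: string; stream: boolean };
}

function buildOllamaRequest(prompt: string, model = 'mistral'): OllamaRequest {
  // stream: false asks ollama for a single JSON response instead of chunks
  return { url: OLLAMA_URL, body: { model, prompt, stream: false } };
}

// Sending it would be roughly:
//   await fetch(req.url, { method: 'POST', body: JSON.stringify(req.body) });
const req = buildOllamaRequest('Summarize above git diff in 10 words or less');
console.log(req.body.model); // prints: mistral
```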

@di-sukharev
Owner

wow, that is super great, will get time on the weekend to collab on that! thank you for the contribution <3

@senovr
Contributor

senovr commented Nov 22, 2023

@di-sukharev - any progress on PR?
I am really eager to try local model feature.
I am not familiar with node, and when I try to execute npm build on the @jaroslaw-weber branch it throws a bunch of errors.

@di-sukharev
Owner

well, a month has passed and i still don't have enough time to manage this, it's a beautiful contribution!

@jaroslaw-weber are there any specific parts you are concerned about?

di-sukharev
di-sukharev previously approved these changes Dec 5, 2023
// hotfix: local models are not so clever...
prompt += 'Summarize above git diff in 10 words or less'

// console.log('prompt length ', prompt.length)
Owner


please remove comments

Contributor Author


done

@di-sukharev di-sukharev changed the base branch from master to dev December 5, 2023 13:10
@di-sukharev di-sukharev dismissed their stale review December 5, 2023 13:10

The base branch was changed.

@jaroslaw-weber
Contributor Author

jaroslaw-weber commented Dec 5, 2023

@di-sukharev thank you for checking the PR :)

my concerns:

  • is gpt still working? i think i did not break anything but would be great if u could check
  • are you also able to run ollama? would be nice to have someone confirm it works

i think if those two are working then we can maybe merge v1.0 version of the local model feature?

what we could improve in next PRs:

  • currently i hardcoded a single model and it cannot be changed. but in the future we could add an option in config for local models (i can do it in a next PR)
  • documentation improvements (maybe after getting feedback from some users)

@senovr
Contributor

senovr commented Dec 6, 2023

I can help with testing and feedback (if you advise how I can build and install this package "locally").
As for improvements, there are a few things.
First, OCO_LOCAL_MODEL_LLAMA needs to be set and updated right after the initial install of oco (in the same manner as with the api key).
And of course, more ollama models, including codellama, etc.

@jaroslaw-weber
Contributor Author

@senovr im pretty sure its quite straightforward to run this.
did u install all packages? what error did u get?

try this:
install and run ollama https://ollama.ai/
then:

npm i
npm run build
npm run ollama:start

@senovr
Contributor

senovr commented Dec 6, 2023

My issue is not with installing ollama itself, but with building the npm oco package from source.

@jaroslaw-weber
Contributor Author

@di-sukharev do you think we can deploy soon?
i have integrated ollama in a similar tool and confirmed it works

insulineru/ai-commit#18

➜  repo_name git:(prod) ✗ ai-commit
Ai provider:  ollama
prompting ollama... http://localhost:11434/api/generate mistral
response:   Updated production environment variables in docker-compose-all.yaml: ENV_SECRET_NAME from 'prod-env-2023-12-28' to 'prod-env-2024-01-02'.
prompting ai done!
Proposed Commit:
------------------------------
 Updated production environment variables in docker-compose-all.yaml: ENV_SECRET_NAME from 'prod-env-2023-12-28' to 'prod-env-2024-01-02'.
------------------------------
? Do you want to continue? Yes
Committing Message... 🚀 
Commit Successful! 🎉

@di-sukharev
Owner

di-sukharev commented Jan 2, 2024 via email

@di-sukharev
Owner

hi @jaroslaw-weber, still haven't got a free minute to merge this properly, will be back asap

@jaroslaw-weber
Contributor Author

No problem don't rush :) it's open source, not your job

@senovr
Contributor

senovr commented Jan 28, 2024

@jaroslaw-weber , @di-sukharev - sorry for being silent for a while.
Answering your question, I was able to build the code and perform some simple tests.
Short answer: it works 👍

A bit longer answer:
for some reason, it is not adding the Fix/ Chore/ etc. prefix, and overall ollama with the mistral model looks a bit wordy to me :)
Maybe adding miXtral or some other model will help...

Here is an example of actual commit message:

Changes include refactoring function and argument names, adding docstrings, and updating import statements for Python 3 compatibility.

@di-sukharev
Owner

<3 looking into it

@di-sukharev di-sukharev merged commit 1d6980f into di-sukharev:dev Feb 27, 2024
@senovr
Contributor

senovr commented Feb 28, 2024

@di-sukharev, thank you very much for merging this into main.
Now, I have a question as a user:

  1. I made an ollama installation: ollama pull mistral, ollama serve, ollama run mistral.
  2. I updated opencommit: npm i -g opencommit@latest
  3. I ran oco config set OCO_AI_PROVIDER='ollama'
  4. I also set a pre-commit hook.

When I run git add then git commit, I get a commit message, but without (fix) (chore) (feature).
Will dig more into it.

My question to @jaroslaw-weber - if I change the model to something else, will it work?

@jaroslaw-weber
Contributor Author

@senovr it may work, but those models are usually less powerful than gpt-4. But you can definitely try

@di-sukharev
Owner

@senovr i've only tested and merged it, shoutout to @jaroslaw-weber who implemented the feature!

@senovr
Contributor

senovr commented Feb 29, 2024

@jaroslaw-weber , sorry for bothering you again )
Currently, you have a hard-coded model name in your commit:
const model = 'mistral'; // todo: allow other models
How can I change it to become opencommit-configurable?

In Dmitry's code, I see this row:
const MODEL = config?.OCO_MODEL || 'gpt-3.5-turbo';

Probably in ollama.ts we could do it similarly...
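A hypothetical sketch of that fallback for the ollama path, mirroring the OCO_MODEL line quoted above. The key name OCO_OLLAMA_MODEL here is an assumption for illustration, not an existing opencommit config option.

```typescript
// Config-driven model selection for the ollama provider (sketch).
interface OcoConfig {
  OCO_OLLAMA_MODEL?: string; // hypothetical config key
}

function getOllamaModel(config?: OcoConfig): string {
  // fall back to the model the PR currently hard-codes
  return config?.OCO_OLLAMA_MODEL || 'mistral';
}

console.log(getOllamaModel({ OCO_OLLAMA_MODEL: 'codellama' })); // prints: codellama
console.log(getOllamaModel()); // prints: mistral
```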

My second question probably goes to both of you @di-sukharev and @jaroslaw-weber :
in jaroslaw's code, there is a "prompt" part before the request to the local llm:

// hotfix: local models are not so clever so im changing the prompt a bit...
    prompt += 'Summarize above git diff in 10 words or less';

Probably this is the reason for not having (chore) (fix) (feat) in the commit message...

I cannot locate a similar prompt when calling the chatGPT api, can you tell me if such a prompt exists?

BTW, let me know if we need to move this conversation to a new issue )
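The missing prefixes could plausibly be addressed in the prompt itself. Here is a hypothetical variant of the tweak quoted above that keeps the short-summary instruction but also asks for a conventional-commit prefix; `buildLocalPrompt` is made up for illustration, and the project's real prompt assembly may look different.

```typescript
// Sketch: append both the summary instruction and a prefix request.
function buildLocalPrompt(diff: string): string {
  let prompt = diff;
  prompt +=
    '\nSummarize above git diff in 10 words or less. ' +
    'Start the message with one of: feat, fix, chore, refactor, docs.';
  return prompt;
}

const p = buildLocalPrompt('diff --git a/foo.ts b/foo.ts');
console.log(p.includes('chore')); // prints: true
```

Whether a small local model reliably follows such an instruction would need testing; wordier models like mistral may still ignore it.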

@di-sukharev
Owner

@senovr haha lol, please let's create a new issue. could you please create the PR, i will merge once done, please target it to the dev branch

@romejoe

romejoe commented Mar 3, 2024

I have a PR from a little while ago that forks this PR to add support for changing the ollama model. I can update it and reopen it here if anyone is interested.
It's over here: jaroslaw-weber#1

@jaroslaw-weber
Contributor Author

Yeah forgot about mentioning that. I had some issue with generating commit message with the original prompt so I changed it. But feel free to modify that part :) Great that you guys follow up on this

@jaroslaw-weber
Contributor Author

As for the hard-coded model, I was going to work on it after my PR got merged, but I was busy when it eventually got merged :) Feel free to improve this part.



4 participants