Hi,
I really like the new feature that integrates LLMs into the enrichment analysis.
However, I found that when running the following:
```r
> interpret(enrichment$up$KEGG, task = "interpretation", model = "glm-4.7")
Interpreting cluster: Default
## Interpretation Result
### Cluster: Default
### 1. Overview
Warning message:
In value[[3L]](cond) :
  Failed to parse JSON response from LLM. Returning raw text. Error: parse error: premature EOF
                                              (right here) ------^
```
Here, `enrichment$up$KEGG` is an `enrichResult` object, and the previous model version (glm-4) works fine.

So I guess the issue might be related to a change in the output format of the new glm version.
In addition, I noticed that the fanyi API must be configured before the function will run:
```r
> interpret(result$up$KEGG, task = "interpretation")
Interpreting cluster: Default
Error in value[[3L]](cond) :
  Failed to call fanyi::chat_request. Error: API key for deepseek is missing.
```
Even though I don't need translation, it's a bit tedious to check the API documentation and manually call `set_translate_option`.
It might be better to add an option to disable fanyi when it isn't needed.
Thanks again for the great work!
Best,
Peng