
Problem with interpret() using glm-4.7 from chatglm #810

@mugpeng

Description


Hi,

I really like the new feature that integrates LLMs into enrichment analysis.

However, I found that when running the following:

> interpret(enrichment$up$KEGG, task = "interpretation", model = "glm-4.7")
Interpreting cluster: Default
## Interpretation Result

### Cluster: Default

### 1. Overview

Warning message:
In value[[3L]](cond) :
  Failed to parse JSON response from LLM. Returning raw text. Error: parse error: premature EOF
                                       
                     (right here) ------^

Here, enrichment$up$KEGG is an enrichResult object, and the previous model (glm-4) works fine:

(screenshot: successful interpret() output with glm-4)

So I guess the issue might be related to changes in the output format of the new glm version.
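For what it's worth, a "premature EOF" parse error often shows up when a newer model wraps its JSON reply in a markdown code fence. A minimal sketch of a pre-parse cleanup that could be tried before jsonlite::fromJSON (purely an assumption about the cause; raw_text stands in for the LLM reply, and strip_fence is a hypothetical helper, not part of the package):

```r
# Hypothetical workaround sketch, not the package's actual code:
# strip a leading/trailing markdown fence before parsing the JSON.
strip_fence <- function(raw_text) {
  txt <- gsub("^\\s*```(json)?\\s*", "", raw_text)
  gsub("\\s*```\\s*$", "", txt)
}

parsed <- tryCatch(
  jsonlite::fromJSON(strip_fence(raw_text)),
  error = function(e) raw_text  # fall back to raw text, as interpret() does now
)
```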

In addition, I noticed that we have to configure the "fanyi" API to run the function at all:

> interpret(result$up$KEGG, task = "interpretation")
Interpreting cluster: Default
Error in value[[3L]](cond) : 
  Failed to call fanyi::chat_request. Error: API key for deepseek is missing.

Even though I don't need translation, it is a bit tedious to check the API documentation and manually call set_translate_option.
It might be better to add an option to disable fanyi when it is not needed.
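For example, something along these lines (a purely hypothetical signature to illustrate the request; use_translation is not an existing argument):

```r
# Hypothetical: let the caller skip fanyi entirely when no translation is wanted
interpret(enrichment$up$KEGG, task = "interpretation",
          model = "glm-4.7", use_translation = FALSE)
```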

Thanks again for the great work!

Best,
Peng
