diff --git a/.gitignore b/.gitignore index deb2d346..7d7a38ea 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,4 @@ node_modules build - -.docusaurus \ No newline at end of file +.idea +.docusaurus diff --git a/seedao-docs/05-seedao-app/release.md b/seedao-docs/05-seedao-app/release.md index d951afc5..bee4c6aa 100644 --- a/seedao-docs/05-seedao-app/release.md +++ b/seedao-docs/05-seedao-app/release.md @@ -23,6 +23,14 @@ Notes: - The datasets in preview version and dev version are subject to change and update at any time. - The code repo is open source and can be accessed on Github. +### Beta v0.8.7 - 22 Apr 2025 + +SeeDAO App Beta v0.8.7 is released. The release note can be viewed at https://docs.seedao.tech/seedao-app/updates + +Changelog: +- Optimize SeeChat DeepSeek API Response +- Update Profile for SeeChat API + ### Beta v0.8.6 - 11 Apr 2025 SeeDAO App Beta v0.8.6 is released. The release note can be viewed at https://docs.seedao.tech/seedao-app/updates diff --git a/seedao-docs/07-seechat/APIendpoint.md b/seedao-docs/07-seechat/APIendpoint.md new file mode 100644 index 00000000..c676d262 --- /dev/null +++ b/seedao-docs/07-seechat/APIendpoint.md @@ -0,0 +1,35 @@ +--- +sidebar_position: 2 +--- +# API Endpoints Overview +This guide explains how to interact with the API endpoints to build integrations and automation with our models. + +### Authentication + +To ensure secure access to the API, authentication is required 🛡️. You can authenticate your API requests using a Bearer Token or an X-API-Key header. Obtain your API key from Profile > SeeChat in the SeeDAO OS.
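As a quick illustration of the two authentication formats above, here is a minimal Python sketch that builds request headers either way. The header names (`Authorization: Bearer` and `X-API-Key`) come from the Authentication note; the helper name and the sample key are illustrative only.

```python
def build_auth_headers(api_key: str, use_bearer: bool = True) -> dict:
    """Build SeeChat API auth headers.

    The API accepts either format (choose exactly one per request):
      - Bearer token:  Authorization: Bearer <key>
      - API key:       X-API-Key: <key>
    """
    if use_bearer:
        return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    return {"X-API-Key": api_key, "Content-Type": "application/json"}

# Both forms carry the same key, obtained from Profile > SeeChat ("sk-demo" is a placeholder).
bearer_headers = build_auth_headers("sk-demo")
apikey_headers = build_auth_headers("sk-demo", use_bearer=False)
```

Pick one format per request; sending both headers at once is not supported.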
+ +### 💬 Chat Completions +Supports streaming responses and multi-turn conversations. + +```shell +POST /v1/chat/completions +``` + +```shell +export API_KEY="" +export API_ENDPOINT="" + +curl -X POST $API_ENDPOINT/v1/chat/completions \ +-H "Authorization: Bearer $API_KEY" \ +-H "Content-Type: application/json" \ +-d '{ + "model":"deepseek-reasoner", + "stream": true, + "messages": [ + { + "role": "user", + "content": "Why is the sky blue?" + } + ] + }' +``` diff --git a/seedao-docs/07-seechat/Integration/ChatGPT-Next-Web.md b/seedao-docs/07-seechat/Integration/ChatGPT-Next-Web.md new file mode 100644 index 00000000..01151477 --- /dev/null +++ b/seedao-docs/07-seechat/Integration/ChatGPT-Next-Web.md @@ -0,0 +1,10 @@ +# ChatGPT Next Web + +One click to get a well-designed, cross-platform ChatGPT web UI with support for multiple LLMs. + +## UI + + + +### Integrate with SeeChat API + diff --git a/seedao-docs/07-seechat/Integration/Chatbox.md b/seedao-docs/07-seechat/Integration/Chatbox.md new file mode 100644 index 00000000..1a0967a6 --- /dev/null +++ b/seedao-docs/07-seechat/Integration/Chatbox.md @@ -0,0 +1,9 @@ +# Chatbox + +Chatbox is a desktop client for multiple cutting-edge LLM models, available on Windows, Mac and Linux. + +## UI + + +### Integrate with SeeChat API + diff --git a/seedao-docs/07-seechat/Integration/SwiftChat.md b/seedao-docs/07-seechat/Integration/SwiftChat.md new file mode 100644 index 00000000..7c115b62 --- /dev/null +++ b/seedao-docs/07-seechat/Integration/SwiftChat.md @@ -0,0 +1,8 @@ +# SwiftChat +SwiftChat is a lightning-fast, cross-platform AI chat application built with React Native. It delivers native performance on Android, iOS, iPad, Android tablets and macOS. Features include real-time streaming chat, rich Markdown support (tables, code blocks, LaTeX), AI image generation, customizable system prompts, quick model switching, and multimodal capabilities.
Supports multiple AI providers including DeepSeek, Amazon Bedrock, Ollama and OpenAI-compatible models. The minimalist UI design and optimized architecture ensure instant launch and responsive interactions across all platforms. + +## UI + + +### Integrate with SeeChat API + diff --git a/seedao-docs/07-seechat/Integration/_category_.yaml b/seedao-docs/07-seechat/Integration/_category_.yaml new file mode 100644 index 00000000..38e474d6 --- /dev/null +++ b/seedao-docs/07-seechat/Integration/_category_.yaml @@ -0,0 +1,2 @@ +label: 🛠 Integrations +collapsed: true diff --git a/seedao-docs/07-seechat/Integration/cherryStudio.md b/seedao-docs/07-seechat/Integration/cherryStudio.md new file mode 100644 index 00000000..fe4fa51d --- /dev/null +++ b/seedao-docs/07-seechat/Integration/cherryStudio.md @@ -0,0 +1,11 @@ +# Cherry Studio + +A powerful desktop AI assistant for producers + +## Screenshot + + +## Integrate with SeeChat API + + + diff --git a/seedao-docs/07-seechat/Models.md b/seedao-docs/07-seechat/Models.md new file mode 100644 index 00000000..1e00669c --- /dev/null +++ b/seedao-docs/07-seechat/Models.md @@ -0,0 +1,48 @@ +--- +sidebar_position: 0 +--- + +# Models + +A token, the smallest unit of text that the model recognizes, can be a word, a number, or even a punctuation mark. Usage is counted based on the total number of input and output tokens processed by the model. + +## Model Details + +| MODEL | deepseek-chat | deepseek-reasoner | +| --- | --- | --- | +| CONTEXT LENGTH | 32K | 32K | +| MAX COT (Chain of Thought) TOKENS | - | 32K | +| MAX OUTPUT TOKENS | 8K | 8K |
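To make the limits in the table concrete, here is an illustrative Python helper that clamps a requested `max_tokens` against both the output cap and the remaining context window. The exact numeric values are assumptions: 8192 matches the documented 1–8192 `max_tokens` range, while treating "32K" as 32×1024 tokens is a guess.

```python
CONTEXT_LENGTH = 32 * 1024   # "32K" context window (assumed to mean 32768 tokens)
MAX_OUTPUT_TOKENS = 8192     # "8K" output cap, matching the 1..8192 max_tokens range

def clamp_max_tokens(prompt_tokens: int, requested: int = 4096) -> int:
    """Clamp max_tokens so prompt + completion fits inside the context window.

    4096 mirrors the documented default when max_tokens is unspecified.
    """
    remaining = CONTEXT_LENGTH - prompt_tokens
    return max(1, min(requested, MAX_OUTPUT_TOKENS, remaining))
```

For example, with a 30,000-token prompt only about 2,768 tokens of output still fit, no matter how large a `max_tokens` you request.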
+ +### Tips +- The `deepseek-chat` model points to `DeepSeek-V3`. The `deepseek-reasoner` model points to `DeepSeek-R1`. + +- ***MAX OUTPUT TOKENS:*** Integer between 1 and 8192. The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. If max_tokens is not specified, the default value 4096 is used. diff --git a/seedao-docs/07-seechat/TokenUsage.md b/seedao-docs/07-seechat/TokenUsage.md new file mode 100644 index 00000000..637e9c12 --- /dev/null +++ b/seedao-docs/07-seechat/TokenUsage.md @@ -0,0 +1,13 @@ +--- +sidebar_position: 1 +--- +# Token Usage Calculation +Tokens are the basic units used by the model to process natural language text and serve as our billing metric. You can think of them as roughly equivalent to "words" or "characters." + +As a general reference, the conversion ratio between tokens and characters is roughly: + +- 1 English character ≈ 0.3 tokens + +- 1 Chinese character ≈ 0.6 tokens + +However, since tokenization varies across models, the exact ratio may differ. The actual token count for each request is returned in the API response under the `usage` field, which should be treated as the authoritative reference.
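The rule-of-thumb ratios above can be turned into a quick pre-flight estimator. This is a hedged sketch: the CJK range used to detect a "Chinese character" is an approximation, all non-CJK characters are billed at the English rate, and the response's `usage` field remains the authoritative count.

```python
def estimate_tokens(text: str) -> float:
    """Rough token estimate using the documented ratios:
    ~0.3 tokens per English/other character, ~0.6 tokens per Chinese character."""
    total = 0.0
    for ch in text:
        if "\u4e00" <= ch <= "\u9fff":  # CJK Unified Ideographs: crude "Chinese character" test
            total += 0.6
        else:
            total += 0.3
    return total
```

Use it only for budgeting prompts before a call; reconcile against `usage` afterwards.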
diff --git a/seedao-docs/07-seechat/_category_.yaml b/seedao-docs/07-seechat/_category_.yaml new file mode 100644 index 00000000..cd6173f5 --- /dev/null +++ b/seedao-docs/07-seechat/_category_.yaml @@ -0,0 +1,2 @@ +label: 🤖 SeeChat +collapsed: true diff --git a/seedao-docs/07-seechat/_spec_.yaml b/seedao-docs/07-seechat/_spec_.yaml new file mode 100644 index 00000000..4a39f337 --- /dev/null +++ b/seedao-docs/07-seechat/_spec_.yaml @@ -0,0 +1,155 @@ +--- +openapi: 3.0.0 +info: + title: API Endpoints + version: 1.0.0 + +servers: + - url: "https://ds.seedao.tech/v1" + description: "dev env for the API" +paths: + "/chat/completions": + post: + summary: Completions + tags: + - 🔌 API + description: Creates a model response for the given chat conversation. + parameters: + - in: header + name: Authorization + description: | + This field can adopt either of the following two formats: + + Bearer Token format: Authorization: Bearer ****** + + API Key format: X-API-Key: ****** + + ***Note: You must choose exactly one of the two formats.*** + required: true + schema: + type: string + example: "Bearer your_token_here" + requestBody: + content: + "application/json": + schema: + $ref: "#/components/schemas/chatRequest" + responses: + '200': + description: Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed. + +components: + schemas: + chatRequest: + type: object + properties: + messages: + type: array + description: A list of messages comprising the conversation so far. + example: [{"content": "You are a helpful assistant","role": "system"},{"content": "Hi","role": "user"}] + model: + type: string + description: | + ID of the model to use. + Possible values: [***deepseek-chat***, ***deepseek-reasoner***] + example: deepseek-reasoner + frequency_penalty: + type: number + description: | + Number between -2.0 and 2.0.
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. + + Possible values: ```>= -2``` and ```<= 2``` + + Default value: ***0*** + max_tokens: + type: integer + description: | + Integer between 1 and 8192. The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. If max_tokens is not specified, the default value 4096 is used. + + Possible values: ```>= 1``` and ```<= 8192``` + Default value: ***4096*** + example: 8192 + presence_penalty: + type: number + description: | + Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. + Possible values: ```>= -2``` and ```<= 2``` + + Default value: ***0*** + response_format: + type: object + description: | + An object specifying the format that the model must output. Setting to { "type": "json_object" } enables JSON Output, which guarantees the message the model generates is valid JSON. + + ***Important:*** When using JSON Output, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. + properties: + type: + type: string + description: | + Possible values: [***text***, ***json_object***] + + Default value: ***text*** + example: {"type": "text"} + stream: + type: boolean + description: | + If set, partial message deltas will be sent.
Tokens will be sent as data-only server-sent events (SSE) as they become available, with the stream terminated by a data: [DONE] message. + example: false + temperature: + type: number + description: | + What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. + + Possible values: ```>= 0``` and ```<= 2``` + + Default value: ***1*** + example: 1 + top_p: + type: number + description: | + An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. + + Possible values: ```<= 1``` + + Default value: ***1*** + example: 1 + tools: + type: array + description: | + A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported. + example: null + logprobs: + type: boolean + description: | + Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. + example: false + top_logprobs: + type: integer + description: | + An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
+ + Possible values: ```>= 0``` and ```<= 20``` + required: + - messages + - model + + chatResponse: + type: object + description: A chat completion object (OpenAI-compatible shape; the exact field set is illustrative). + properties: + id: + type: string + object: + type: string + created: + type: integer + model: + type: string + choices: + type: array + usage: + type: object + required: + - id + - object + - created + - model + - choices diff --git a/seedao-docs/07-seechat/intro.md b/seedao-docs/07-seechat/intro.md new file mode 100644 index 00000000..e6fc5b49 --- /dev/null +++ b/seedao-docs/07-seechat/intro.md @@ -0,0 +1,10 @@ +--- +id: intro +title: 💬 Introduction +sidebar_label: Intro +sidebar_position: 0 +--- + +## Overview + +We proudly present SeeChat Web (built on DeepSeek AI & SeeDAO’s knowledge base), now open for all SNS users to experience smart conversations. Dive into deep AI discussions today! diff --git a/static/img/integration/ChatGPT-Next-Web.png b/static/img/integration/ChatGPT-Next-Web.png new file mode 100644 index 00000000..1042a79d Binary files /dev/null and b/static/img/integration/ChatGPT-Next-Web.png differ diff --git a/static/img/integration/ChatGPT-Next-WebUI.png b/static/img/integration/ChatGPT-Next-WebUI.png new file mode 100644 index 00000000..c1571d05 Binary files /dev/null and b/static/img/integration/ChatGPT-Next-WebUI.png differ diff --git a/static/img/integration/SwiftChat.png b/static/img/integration/SwiftChat.png new file mode 100644 index 00000000..97aa8034 Binary files /dev/null and b/static/img/integration/SwiftChat.png differ diff --git a/static/img/integration/SwiftChatUI.png b/static/img/integration/SwiftChatUI.png new file mode 100644 index 00000000..94de8c6e Binary files /dev/null and b/static/img/integration/SwiftChatUI.png differ diff --git a/static/img/integration/chatBoxIntegrate.png b/static/img/integration/chatBoxIntegrate.png new file mode 100644 index 00000000..5bac08d5 Binary files /dev/null and b/static/img/integration/chatBoxIntegrate.png differ diff --git a/static/img/integration/chatboxUi.png b/static/img/integration/chatboxUi.png new file mode 100644 index 00000000..fc6f2e37 Binary files /dev/null and
b/static/img/integration/chatboxUi.png differ diff --git a/static/img/integration/cherryStudio1.png b/static/img/integration/cherryStudio1.png new file mode 100644 index 00000000..af53e41f Binary files /dev/null and b/static/img/integration/cherryStudio1.png differ diff --git a/static/img/integration/cherryStudio2.png b/static/img/integration/cherryStudio2.png new file mode 100644 index 00000000..3c5479f8 Binary files /dev/null and b/static/img/integration/cherryStudio2.png differ diff --git a/static/img/integration/cherryStudio3.png b/static/img/integration/cherryStudio3.png new file mode 100644 index 00000000..a8bcbbdf Binary files /dev/null and b/static/img/integration/cherryStudio3.png differ diff --git a/static/img/integration/cherryStudioUI.png b/static/img/integration/cherryStudioUI.png new file mode 100644 index 00000000..438d1fd5 Binary files /dev/null and b/static/img/integration/cherryStudioUI.png differ
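When `stream` is true, the spec above says tokens arrive as data-only server-sent events terminated by a `data: [DONE]` message. Here is a hedged Python sketch that assembles the streamed text from such lines; the chunk shape (`choices[0].delta.content`) is an assumption based on the OpenAI-compatible format the API follows, not something the spec spells out.

```python
import json

def collect_stream_content(sse_lines):
    """Join content deltas from data-only SSE lines, stopping at 'data: [DONE]'."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue                      # skip blank keep-alives and SSE comments
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break                         # stream terminator per the spec
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {}).get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)
```

In practice you would feed this the decoded lines of a streamed HTTP response body rather than a prebuilt list.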