4 changes: 2 additions & 2 deletions .gitignore
@@ -1,4 +1,4 @@
node_modules
build

.docusaurus
.idea
.docusaurus
8 changes: 8 additions & 0 deletions seedao-docs/05-seedao-app/release.md
@@ -23,6 +23,14 @@ Notes:
- The datasets in preview version and dev version are subject to change and update at any time.
- The code repo is open source and can be accessed on Github.

### Beta v0.8.7 - 22 Apr 2025

SeeDAO App Beta v0.8.7 is released. The release note can be viewed at https://docs.seedao.tech/seedao-app/updates

Changelog:
- Optimize SeeChat DeepSeek API Response
- Update Profile for SeeChat API

### Beta v0.8.6 - 11 Apr 2025

SeeDAO App Beta v0.8.6 is released. The release note can be viewed at https://docs.seedao.tech/seedao-app/updates
35 changes: 35 additions & 0 deletions seedao-docs/07-seechat/APIendpoint.md
@@ -0,0 +1,35 @@
---
sidebar_position: 2
---
# API Endpoints Overview
This guide provides the essential information you need to interact with the API endpoints and build integrations and automations on top of our models.

### Authentication

To ensure secure access to the API, authentication is required 🛡️. You can authenticate your API requests with either a Bearer token in the Authorization header or an X-API-Key header. Obtain your API key from Profile > SeeChat in the SeeDAO OS.

### 💬 Chat Completions
Supports streaming responses and multi-turn conversations.

```shell
POST /v1/chat/completions
```

```shell
export API_KEY="<Your API KEY>"
export API_ENDPOINT="<Your Endpoint>"

curl -X POST $API_ENDPOINT/api/chat/completions \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model":"deepseek-reasoner",
"stream": true,
"messages": [
{
"role": "user",
"content": "Why is the sky blue?"
}
]
}'
```
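
The same endpoint should also accept the X-API-Key header described in the Authentication section, and streaming can be turned off. A minimal sketch, reusing the placeholder endpoint and key from the example above with the `deepseek-chat` model:

```shell
# Non-streaming variant, authenticated with the X-API-Key header instead of a Bearer token.
curl -X POST $API_ENDPOINT/api/chat/completions \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "stream": false,
    "messages": [
      {"role": "system", "content": "You are a helpful assistant"},
      {"role": "user", "content": "Why is the sky blue?"}
    ]
  }'
```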
10 changes: 10 additions & 0 deletions seedao-docs/07-seechat/Integration/ChatGPT-Next-Web.md
@@ -0,0 +1,10 @@
# ChatGPT Next Web

One click to get a well-designed, cross-platform ChatGPT web UI that supports multiple LLMs.

## UI
<img src="/img/integration/ChatGPT-Next-WebUI.png" />


### Integrate with SeeChat API
<img src="/img/integration/ChatGPT-Next-Web.png" />
9 changes: 9 additions & 0 deletions seedao-docs/07-seechat/Integration/Chatbox.md
@@ -0,0 +1,9 @@
# Chatbox

Chatbox is a desktop client for multiple cutting-edge LLM models, available on Windows, macOS, and Linux.

## UI
<img src="/img/integration/chatboxUi.png" />

### Integrate with SeeChat API
<img src="/img/integration/chatBoxIntegrate.png" />
8 changes: 8 additions & 0 deletions seedao-docs/07-seechat/Integration/SwiftChat.md
@@ -0,0 +1,8 @@
# SwiftChat
SwiftChat is a lightning-fast, cross-platform AI chat application built with React Native. It delivers native performance on Android, iOS, iPad, Android tablets and macOS. Features include real-time streaming chat, rich Markdown support (tables, code blocks, LaTeX), AI image generation, customizable system prompts, quick model switching, and multimodal capabilities. Supports multiple AI providers including DeepSeek, Amazon Bedrock, Ollama and OpenAI Compatible Models. The minimalist UI design and optimized architecture ensure instant launch and responsive interactions across all platforms.

## UI
<img src="/img/integration/SwiftChatUI.png" />

### Integrate with SeeChat API
<img src="/img/integration/SwiftChat.png" />
2 changes: 2 additions & 0 deletions seedao-docs/07-seechat/Integration/_category_.yaml
@@ -0,0 +1,2 @@
label: 🛠 Integrations
collapsed: true
11 changes: 11 additions & 0 deletions seedao-docs/07-seechat/Integration/cherryStudio.md
@@ -0,0 +1,11 @@
# Cherry Studio

A powerful desktop AI assistant for producers.

## Screenshot
<img src="/img/integration/cherryStudioUI.png" />

### Integrate with SeeChat API
<img src="/img/integration/cherryStudio1.png" />
<img src="/img/integration/cherryStudio2.png" />
<img src="/img/integration/cherryStudio3.png" />
48 changes: 48 additions & 0 deletions seedao-docs/07-seechat/Models.md
@@ -0,0 +1,48 @@
---
sidebar_position: 0
---

# Models

A token is the smallest unit of text that the model recognizes; it can be a word, a number, or even a punctuation mark. Usage is counted based on the total number of input and output tokens processed by the model.

## Model Details

| MODEL | deepseek-chat | deepseek-reasoner |
| --- | --- | --- |
| CONTEXT LENGTH | 32K | 32K |
| MAX COT (Chain of Thought) TOKENS | - | 32K |
| MAX OUTPUT TOKENS | 8K | 8K |

### Tips
- The ```deepseek-chat``` model points to DeepSeek-V3. The ```deepseek-reasoner``` model points to DeepSeek-R1.

- ***MAX OUTPUT TOKENS:*** Integer between 1 and 8192. The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. If max_tokens is not specified, the default value 4096 is used.
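
As an illustration of the output limit above, `max_tokens` can be set explicitly on a request. A minimal sketch, assuming the `$API_ENDPOINT` and `$API_KEY` placeholders from the API Endpoints guide:

```shell
# Cap generation at 1024 tokens for deepseek-chat (must stay within the 8K output limit).
curl -X POST $API_ENDPOINT/api/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Explain what a token is in one sentence."}
    ]
  }'
```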
13 changes: 13 additions & 0 deletions seedao-docs/07-seechat/TokenUsage.md
@@ -0,0 +1,13 @@
---
sidebar_position: 1
---
# Token Usage Calculation
Tokens are the basic units used by the model to process natural language text and serve as our billing metric. You can think of them as equivalent to "words" or "characters."

As a general reference, the conversion ratio between tokens and characters is roughly:

- 1 English character ≈ 0.3 tokens

- 1 Chinese character ≈ 0.6 tokens

However, since tokenization varies across different models, the exact ratio may differ. The actual token count for each processing task will be provided in the API response under the usage field, which should be treated as the authoritative reference.
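
For example, a non-streaming request can be piped through `jq` to inspect only the usage field. A minimal sketch, assuming the `$API_ENDPOINT` and `$API_KEY` placeholders from the API Endpoints guide and the DeepSeek-style field names (prompt_tokens, completion_tokens, total_tokens):

```shell
# Send a short prompt and print only the token accounting reported by the API.
curl -s -X POST $API_ENDPOINT/api/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello"}]
  }' | jq '.usage'
```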
2 changes: 2 additions & 0 deletions seedao-docs/07-seechat/_category_.yaml
@@ -0,0 +1,2 @@
label: 🤖 SeeChat
collapsed: true
155 changes: 155 additions & 0 deletions seedao-docs/07-seechat/_spec_.yaml
@@ -0,0 +1,155 @@
---
openapi: 3.0.0
info:
title: API Endpoints
description: |
version: 1.0.0

servers:
- url: "https://ds.seedao.tech/v1"
description: "dev env for the API"
paths:
"/chat/completions":
post:
summary: Completions
tags:
- 🔌 API
description: Creates a model response for the given chat conversation.
parameters:
- in: header
name: Authorization
description: |
This field can adopt either of the following two formats:

Bearer Token format: Authorization: Bearer ***<your_token_here>***

API Key format: X-API-Key: ***<your_token_here>***

***Note: You must choose one of the two formats exclusively.***
required: true
schema:
type: string
example: "Bearer your_token_here"
requestBody:
content:
"application/json":
schema:
$ref: "#/components/schemas/chatRequest"
responses:
'200':
description: Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed. <a href="https://api-docs.deepseek.com/api/create-chat-completion">View More </a>

components:
schemas:
chatRequest:
type: object
properties:
messages:
type: array
required: true
description: A list of messages comprising the conversation so far.
example: [{"content": "You are a helpful assistant","role": "system"},{"content": "Hi","role": "user"}]
model:
type: string
required: true
description: |
ID of the model to use.
Possible values: [***deepseek-chat***, ***deepseek-reasoner***]
example: deepseek-reasoner
frequency_penalty:
type: number
required: false
description: |
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

Possible values: ```>= -2``` and ```<= 2```

Default value: ***0***
max_tokens:
type: integer
required: false
description: |
Integer between 1 and 8192. The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. If max_tokens is not specified, the default value 4096 is used.

Possible values: ```> 1```
Default value: ***4096***
example: 8192
presence_penalty:
type: number
required: false
description: |
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Possible values: ```>= -2``` and ```<= 2```

Default value: ***0***
response_format:
type: object
description: |
An object specifying the format that the model must output. Setting to { "type": "json_object" } enables JSON Output, which guarantees the message the model generates is valid JSON.

***Important:*** When using JSON Output, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
properties:
type:
type: string
description: |
Possible values: [***text***, ***json_object***]

Default value: ***text***
example: {"type": "text"}
stream:
type: boolean
description: |
If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events (SSE) as they become available, with the stream terminated by a data: [DONE] message.
example: false
temperature:
type: number
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

Possible values: ```<= 2```

Default value: ***1***
example: 1
top_p:
type: number
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Possible values: ```<= 1```

Default value: ***1***
example: 1
tools:
type: array
description: |
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
<a href="https://api-docs.deepseek.com/api/create-chat-completion">View More </a>
example: null
logprobs:
type: boolean
description: |
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
example: false
top_logprobs:
type: integer
description: |
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.

Possible values: ```<= 20```
required:
- messages
- model

chatResponse:
type: object
description: Chat completion object returned by the endpoint. See https://api-docs.deepseek.com/api/create-chat-completion for the full field reference.
properties:
id:
type: string
description: A unique identifier for the chat completion.
object:
type: string
created:
type: integer
model:
type: string
choices:
type: array
description: A list of chat completion choices.
usage:
type: object
description: Token usage statistics (prompt_tokens, completion_tokens, total_tokens).
required:
- id
- model
- choices
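
# A minimal sketch of the JSON Output mode described by the response_format parameter above.
# The system message explicitly asking for JSON is required, per that description, to avoid a
# "stuck" generation. The $API_ENDPOINT and $API_KEY placeholders are assumptions carried over
# from the API Endpoints guide.
#
#   curl -X POST $API_ENDPOINT/api/chat/completions \
#     -H "Authorization: Bearer $API_KEY" \
#     -H "Content-Type: application/json" \
#     -d '{
#           "model": "deepseek-chat",
#           "response_format": {"type": "json_object"},
#           "messages": [
#             {"role": "system", "content": "Reply with a JSON object with a single key named answer."},
#             {"role": "user", "content": "Why is the sky blue?"}
#           ]
#         }'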
10 changes: 10 additions & 0 deletions seedao-docs/07-seechat/intro.md
@@ -0,0 +1,10 @@
---
id: intro
title: 💬 Introduction
sidebar_label: Intro
sidebar_position: 0
---

## Overview

We proudly present SeeChat Web (built on DeepSeek AI & SeeDAO’s knowledge base), now open for all SNS users to experience smart conversations. Dive into deep AI discussions today!
Binary file added static/img/integration/ChatGPT-Next-Web.png
Binary file added static/img/integration/ChatGPT-Next-WebUI.png
Binary file added static/img/integration/SwiftChat.png
Binary file added static/img/integration/SwiftChatUI.png
Binary file added static/img/integration/chatBoxIntegrate.png
Binary file added static/img/integration/chatboxUi.png
Binary file added static/img/integration/cherryStudio1.png
Binary file added static/img/integration/cherryStudio2.png
Binary file added static/img/integration/cherryStudio3.png
Binary file added static/img/integration/cherryStudioUI.png