sc is a CLI application that makes it easy to interact with LLMs from multiple providers, including Ollama and OpenAI.
Download the latest release installer for your platform from the GitHub releases page.
Available installers:
- Linux: .deb (Debian/Ubuntu), .rpm (RedHat/Fedora/SUSE) packages for x86_64
- macOS: .dmg, .pkg installers for x86_64 (Intel) and aarch64 (Apple Silicon)
- Windows: .exe, .msi installers for x86_64
Coming soon:
- Homebrew (macOS/Linux)
- Scoop (Windows)
- Chocolatey (Windows)
Requires Java 22+ (JDK 16+ for jpackage):

```shell
git clone https://github.com/juliuskrah/sc.git
cd sc
./gradlew clean bootJar
./gradlew buildJPackageInstaller
```

The installer will be available in `build/jpackage/`.
This command allows you to chat with the Ollama API and other LLMs. You can use it to send messages and receive responses from the model.
Chat memory is implemented with HSQLDB, an in-memory database. You can find the database files in `$HOME/.sc/store.db.*`. See the `config` command for more information on how to configure the chat memory.
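As an illustration, chat memory could be pointed at a file-backed HSQLDB catalog instead of the in-memory default. The key names below follow the configuration schema shown later in this document, but the JDBC URL itself is hypothetical, not a documented default:

```yaml
# ~/.sc/config (fragment) -- hypothetical chat-memory override
chat-memory:
  jdbc:
    # illustrative file-backed catalog; the documented example uses jdbc:hsqldb:mem:testdb
    url: jdbc:hsqldb:file:${user.home}/.sc/store.db
    username: sa
```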
- `-m, --model`: Specify the model to use for the chat. Currently only Ollama models are supported. The default is `mistral-small3.1`. NOTE: the model must be available locally in the Ollama environment. You can list available models with the `ollama list` command.
- `--base-url`: Ollama API endpoint. The default is `http://localhost:11434`.
- `MESSAGE`: The prompt to send to the model.
```shell
sc chat --model llama3.2 --base-url=http://localhost:11434 "Hello, how are you?"
```

Note: when you omit the `MESSAGE` parameter, this will start a REPL session.
```shell
sc chat
sc> Hello, how are you?
Hello! I'm functioning as intended, thank you. How can I assist you today?
```

Press Ctrl+C during a streamed response to cancel generation and return to the prompt.
Multi-modal prompts are supported. You can include images in your messages.
```shell
sc chat
sc> Describe this image @path/to/image.png
This is an image of two birds standing on a beach
```

This command allows you to view or set the configuration for the CLI. You can use it to manage settings such as the Ollama API endpoint and other CLI-specific configurations.
The configuration schema will look something like this:
```yaml
# ~/.sc/config
provider: ollama # or openai
providers:
  ollama: # Ollama provider configuration
    base-url: http://localhost:11433
    model: qwen2:0.5b
  openai: # OpenAI provider configuration - see future work below
    base-url: https://api.openai.com/v1
    model: gpt-3.5-turbo
    options: {} # provider-specific options
chat-memory: # Chat memory configuration, e.g. jdbc for HSQLDB
  jdbc:
    url: jdbc:hsqldb:mem:testdb
    username: sa
```

By default, the configuration file is stored in `$HOME/.sc/config`. You can change the configuration directory by setting the `SC_CONFIG_DIR` environment variable.
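The directory lookup described above can be sketched in plain shell. This is only an illustration of the documented resolution order; `sc` itself is a Java application and implements this internally:

```shell
# Use $SC_CONFIG_DIR when set, otherwise fall back to the default $HOME/.sc
CONFIG_DIR="${SC_CONFIG_DIR:-$HOME/.sc}"
CONFIG_FILE="$CONFIG_DIR/config"
echo "$CONFIG_FILE"
```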
- `--dir`: Show the configuration directory. This option displays the path to the directory where the configuration files are stored. The default directory is `$HOME/.sc/`; this can be overridden by setting the `SC_CONFIG_DIR` environment variable.
- `--file`: Show the configuration file. This option displays the path to the configuration file used by the CLI.
- `--set`: Set a configuration option. You need to provide the key and value in the format `key=value`. This option also creates the configuration file if it does not exist. You can view the location of the config directory with the `--dir` option. The default configuration file is `$HOME/.sc/config`.
- `--get`: Get the value of a specific configuration option. You need to specify the key.
- `--unset`: Unset a configuration option. You need to specify the key.
View the configuration directory:
```shell
sc config --dir
```

View the configuration file:

```shell
sc config --file
```

Set configuration options. This creates the configuration file if it does not exist:

```shell
sc config --set providers.ollama.base-url=http://localhost:11434 --set provider=ollama
```

Get a configuration option value:

```shell
sc config --get providers.ollama.base-url
```

Unset/remove configuration properties:

```shell
sc config --unset providers.ollama.base-url --unset provider
```

This command initializes the configuration file for the CLI. It creates a default configuration file if it does not already exist. This is useful when setting up the CLI for the first time or resetting the configuration. When you run this command, it will create a configuration file in the default directory (`$HOME/.sc/`) with the default settings.
Note
If the configuration file already exists, this command will not overwrite it. It will only create the file if it does not exist. You can override the default configuration directory by setting the SC_CONFIG_DIR environment variable.
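The create-if-missing behavior can be sketched in shell. The default file contents shown here are illustrative only; the real defaults come from `sc config init` itself:

```shell
# Create the config file only if it does not already exist,
# mirroring the documented `sc config init` semantics
CONFIG_DIR="${SC_CONFIG_DIR:-$HOME/.sc}"
CONFIG_FILE="$CONFIG_DIR/config"
mkdir -p "$CONFIG_DIR"
if [ ! -f "$CONFIG_FILE" ]; then
  printf 'provider: ollama\n' > "$CONFIG_FILE"  # illustrative default contents
  echo "created $CONFIG_FILE"
else
  echo "config already exists, leaving it untouched"
fi
```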
```shell
sc config init
```

This command allows you to interact with the RAG (Retrieval-Augmented Generation) system. You can load documents into a vector database or dump the RAG response to a file (dumping to a file is only useful for testing and debugging). This is useful for processing documents and generating responses based on the content of those documents.
- `-o, --output`: Specify the output filename for the RAG response. This option must be used in conjunction with the `--etl=file` option.
- `--etl`: Specify the ETL (Extract, Transform, Load) operation target. The available targets are:
  - `file`: Write output to a file on the local filesystem (default).
  - `vectorStore`: Write output to a vector store.
- `DOCUMENT`: The document to process. This can be a local document, a remote document (HTTPS), or a cloud document (S3, GCS, Azure). The following protocols are supported when loading documents:
  - `file:///path/to/file`: Local document (absolute path)
  - `https://path/to/page`: Remote document (only HTTPS is supported)
  - `github://user/repo/contents/path/to/file`: GitHub document
The following document formats are supported:
- `application/pdf`: PDF files
- `text/html`: HTML files
- `text/plain`: Plain text files
- `text/markdown`: Markdown files (e.g. `.md` files)
- `application/json`: JSON files
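To check which of these MIME types a local document is likely to be detected as, the standard `file` utility is handy. Note this is just a quick sanity check; `sc` may perform its own content-type detection:

```shell
# Inspect the MIME type of a local document before loading it
printf '{"greeting": "hello"}\n' > /tmp/sample.json
file --mime-type -b /tmp/sample.json
```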
```shell
# Load a PDF document from the local filesystem and write the response to a file
sc rag --etl=file --output output.txt file:///path/to/document.pdf

# Load an HTML document from a remote URL and write the response to a file
sc rag --etl=file --output output.txt https://docs.spring.io/spring-ai/reference/api/etl-pipeline.html

# Load a plain text document from a local file and write the response to a file
sc rag --etl=file --output output.txt file:///path/to/plain.txt

# Load a Markdown document from a remote URL and write the response to a file
sc rag --etl=file --output output.txt https://raw.githubusercontent.com/juliuskrah/quartz-manager/refs/heads/master/README.md

# Load a JSON document from the local filesystem and write the response to a file
sc rag --etl=file --output output.txt file:///path/to/document.json

# Load a PDF document from the local filesystem and write the response to a vector store
sc rag --etl=vectorStore file:///path/to/document.pdf

# Load a Markdown document from a GitHub URL and write the response to a vector store
sc rag --etl=vectorStore "github://spring-projects/spring-framework/contents/README.md"

# Load a Markdown document from a GitHub ref URL and write the response to a file
sc rag --etl=file --output output.txt "github://spring-projects/spring-framework/contents/README.md?ref=main"
```

- `-h, --help`: Show the help message and exit.
- `-v, --version`: Show the version of the CLI.
- `--base-url`: Specify the base URL to use.
You can enable debug logging with the `JAVA_TOOL_OPTIONS` environment variable:

```shell
JAVA_TOOL_OPTIONS=-Dlogging.level.org.sc.ai.cli=debug sc <args>
```

Use this option if you want to explore more options such as running your tests in a native image.
The GraalVM native-image compiler should be installed and configured on your machine.
NOTE: GraalVM 22.3+ is required.
To create the executable, run the following goal:
```shell
./gradlew nativeCompile
```

Then, you can run the app as follows:

```shell
build/native/nativeCompile/sc --help
```

You can also run your existing test suite in a native image. This is an efficient way to validate the compatibility of your application.

To run your existing tests in a native image, run the following goal:

```shell
./gradlew nativeTest
```

For local documentation development, you can serve the docs locally:
```shell
# Generate docs
./gradlew generateDocs

# Serve locally (if you have Python installed)
cd build/docs && python -m http.server 8000

# Then visit http://localhost:8000
```

- `chat`: Attach files when chatting with the model. Support for the following document types:
  - `application/pdf`: PDF files
- `chat memory`: Support for different memory backends:
  - `rdbms`: Relational database management system (e.g. PostgreSQL, MySQL)
- `chat LLM`: Support for different LLM providers:
  - `google`: Google Vertex AI
  - `openai`: OpenAI API
- `chat agent`: Implement an agent that can perform tasks based on user input and context.
  - `mcp`: Model Context Protocol support for the agent.
- `rag`: Document sources from:
  - `s3://<bucket>/<key>`: S3 document
The documentation for this project is generated using picocli's ManPageGenerator with template support for enhanced documentation.
The documentation system supports two modes:
Automatically generates documentation from command annotations:
```shell
./gradlew generateDocs
```

Uses customizable templates for richer documentation content:
```shell
# First time only: Generate initial templates
./gradlew generateManpageTemplates

# Then use enhanced documentation generation
./gradlew generateDocs
```

The template system allows for customization of the generated documentation while preserving the automatically generated command information.
Templates are stored in src/docs/man-templates/ and use AsciiDoctor's include mechanism:
```asciidoc
// Main template file example
:includedir: ../../../build/generated-picocli-docs

include::{includedir}/sc.adoc[tag=picocli-generated-man-section-header]
include::{includedir}/sc.adoc[tag=picocli-generated-man-section-name]
include::{includedir}/sc.adoc[tag=picocli-generated-man-section-synopsis]

// Add custom sections here
== Getting Started
This section provides additional context...

== Examples
Additional examples beyond what's auto-generated...

// Continue with generated sections
include::{includedir}/sc.adoc[tag=picocli-generated-man-section-options]
```

Each template includes:
- Custom sections with detailed explanations between generated content
- Enhanced examples and real-world use cases
- Configuration details and troubleshooting guides
- Cross-references to related commands and concepts
- Additional context not available in code annotations alone
The project includes a JSON Schema for configuration validation:
- Local: `.sc/schema.json`
- Published: `https://juliuskrah.com/sc/schemas/schema.json`
The schema is automatically included in the GitHub Pages deployment and can be used for:
- IDE autocomplete and validation in configuration files
- Configuration validation in external tools
- API documentation for configuration structure
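For example, editors that use yaml-language-server (such as VS Code with the Red Hat YAML extension) can validate the config file against the published schema via an inline modeline comment. This assumes your editor supports that convention and the config file is parsed as YAML:

```yaml
# yaml-language-server: $schema=https://juliuskrah.com/sc/schemas/schema.json
provider: ollama
providers:
  ollama:
    base-url: http://localhost:11434
    model: qwen2:0.5b
```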
Available Gradle tasks for documentation:
```shell
# Generate enhanced documentation (uses templates if available)
./gradlew generateDocs

# Generate initial templates (run only once)
./gradlew generateManpageTemplates

# Individual tasks
./gradlew generateEnhancedDocs  # Generate AsciiDoc with template support
./gradlew asciidoctor           # Convert AsciiDoc to HTML
```

The build system automatically detects whether templates exist and uses them for enhanced output, or falls back to standard generation.