AI

Although AI has not yet reached full feature parity on self-hosted deployments, you can still enable basic AI features such as chat by completing the AI configuration on your server.

We have currently tested against GPT-5.x, Claude 4.x (Sonnet & Opus), and Gemini 2.5 (Flash & Pro). We cannot guarantee support for other models, or for non-official APIs serving the models listed above, at this time.

Configuring AI in self-hosted AFFiNE

Requires AFFiNE Server >= 0.24.0

Custom models per scenario and the override settings only work on AFFiNE Server >= 0.24.0.

There is an AI section in the Admin console where you can set API keys and baseUrls for providers, and adjust the Model Id used in each scenario.

  • Related reference for accounts and keys: Getting AI API Keys

  • Model Id overriding is designed to address model deprecation, such as claude-sonnet-4@20250514

    • We generally advise against using drastically different Model Ids, especially those from different providers. This caution is particularly critical for complex scenarios such as chat and tool calling. Should you choose to proceed, be aware that future model upgrades may lead to significant compatibility issues.


As an alternative, you can set API keys and baseUrls by editing the config.json located at the path given by $CONFIG_LOCATION in your .env file. Remember to restart the containers after editing.
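If you are unsure where that file lives, the path can be resolved from the same .env file that docker compose reads. A minimal sketch (the fallback path below is only an example; your .env may define a different CONFIG_LOCATION):

```shell
# Resolve the config.json path from .env.
# The fallback path is an assumption; check your own .env for CONFIG_LOCATION.
if [ -f ./.env ]; then
  . ./.env
fi
CONFIG_LOCATION="${CONFIG_LOCATION:-/root/.affine/config}"
echo "${CONFIG_LOCATION}/config.json"
```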

The AI-relevant schema is as follows:

config.json
{
  "$schema": "https://github.com/toeverything/affine/releases/latest/download/config.schema.json",
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "your key",
      "baseUrl": "open-ai-compatitable.example.com"
    }
  }
}

*Check out this discussion for more details on this beta feature

Since version 0.23, chat responses have defaulted to using Claude (providers.anthropic).

Since version 0.25, you can add the oldApiStyle flag to the openai provider to use the /chat/completions style API; note that some AI features may not work in this mode.
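For example, the flag could be set alongside the provider's other settings. This is a hedged sketch: the exact placement of oldApiStyle inside providers.openai is an assumption based on the provider config shown above.

```json
{
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "your key",
      "baseUrl": "https://open-ai-compatible.example.com",
      "oldApiStyle": true
    }
  }
}
```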

Enable AI Embedding

Starting from version 0.21, AFFiNE supports AI doc embedding (semantic indexing). It has the following dependencies:

  • It requires the cloud indexer to be running. See Indexer.

  • It requires a configured provider and Model Id for embedding. See the steps above.

  • It requires a PostgreSQL vector extension; pgvector was chosen. See the steps below.

You need to update the images used in compose.yml, or, if you use a standalone Postgres server, manually install the pgvector extension on it.

Prepare

BACKUP!

Back up before making any database-related updates.

Backup data

  • Perform a full backup of the PostgreSQL database to ensure data can be recovered in case of issues during the upgrade. Learn how to do the backup.
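One way to take that backup with the bundled Postgres container is pg_dump. The sketch below only builds and prints the command so you can review it before running it; the container, user, and database names (affine_postgres, affine) are assumptions, so adjust them to match your compose file:

```shell
# Build a pg_dump command for the bundled Postgres container.
# NOTE: container/user/database names below are assumptions; check `docker compose ps`.
container="affine_postgres"
user="affine"
db="affine"
outfile="affine-backup-$(date +%Y%m%d).sql"
cmd="docker exec ${container} pg_dump -U ${user} ${db} > ${outfile}"
echo "$cmd"   # review the printed command, then run it in your shell
```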

Stop existing services

  • Stop running services using the Docker Compose command:
docker compose -f docker-compose.yml down
  • Confirm that all relevant containers have stopped running to avoid data write conflicts.

Update container

Download the latest docker-compose.yml file

wget -O docker-compose.yml \
  https://github.com/toeverything/AFFiNE/releases/latest/download/docker-compose.yml

Update docker compose to use pgvector

Major version match

Note that the major version of the replacement image pgvector/pgvector:pg{version} must match that of the old postgres image postgres:{version}

compose.yml
services:
  # ...
  postgres:
    # image: postgres:16        # old image; replace it with the line below
    image: pgvector/pgvector:pg16
  # ...
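The version-matching rule above can be checked mechanically. A small sketch, assuming the tag is the plain postgres:<major> form with no registry prefix:

```shell
# Derive the matching pgvector image tag from an existing postgres image tag.
# Assumes the tag is the plain "postgres:<major>" form.
old_image="postgres:16"
major="${old_image#postgres:}"
new_image="pgvector/pgvector:pg${major}"
echo "$new_image"   # -> pgvector/pgvector:pg16
```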

Pull the new image and start the service

# pull the latest image
docker compose pull
# start service
docker compose up -d
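Once the containers are up, it is worth confirming that the new image actually ships pgvector. The sketch below only prints the check command; the service, user, and database names ("postgres", "affine") are assumptions, so adjust them to your setup:

```shell
# Print a command that asks Postgres whether the "vector" extension is available.
# Service/user/db names are assumptions; adjust them before running the printed command.
sql="SELECT name, default_version FROM pg_available_extensions WHERE name = 'vector';"
echo "docker compose exec postgres psql -U affine -d affine -c \"${sql}\""
```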