
Commit ee2db1b (1 parent: 761b469)

Updating documentation so anyone can start ChatFAQ with as close to zero configuration as possible

6 files changed: 110 additions & 42 deletions


back/.env-template

Lines changed: 26 additions & 20 deletions
```diff
@@ -1,38 +1,35 @@
 # --------------------------- Django ---------------------------
-DEBUG=yes/no
+DEBUG=yes
+ENVIRONMENT=dev
+# RUN_MAIN=yes
 
 SECRET_KEY=secretkey1234567890
 BASE_URL=http://localhost:8000
-ENVIRONMENT=dev
+
+DATABASE_URL="postgresql://chatfaq:chatfaq@postgres:5432/chatfaq" # "postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DB_HOST}:${DB_PORT}/${POSTGRES_DB}?sslmode=${DB_SSL_MODE}"
 
 # --------------------------- DB ------------------------------
-# Database (shared with Docker)
 DATABASE_USER=chatfaq
 DATABASE_PASSWORD=chatfaq
-DATABASE_HOST=postgres
-DATABASE_NAME=chatfaq
-CONN_MAX_AGE=0
 # Note: like this, DB_HOST is probably unknown to your computer
 # add new line in /etc/hosts: 127.0.0.1 postgres
 # to share the same .env whether you run in the host (dev mode) or in a container
-DATABASE_URL="postgresql://chatfaq:chatfaq@postgres:5432/chatfaq" # "postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DB_HOST}:${DB_PORT}/${POSTGRES_DB}?sslmode=${DB_SSL_MODE}"
-# --- or ---
 DATABASE_HOST=postgres
-DATABASE_PORT=5432
-DATABASE_SSL_MODE=disable
+DATABASE_NAME=chatfaq
 
+PGUSER=chatfaq
 # --------------------------- STORAGE ------------------------------
+# Local Storage
+STORAGES_MODE=local
+# --- or ---
 # S3/DO Storage
+STORAGES_MODE=s3/do
 AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
 AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
 AWS_STORAGE_BUCKET_NAME=<AWS_STORAGE_BUCKET_NAME>
 DO_REGION=<DO_REGION>
-STORAGES_MODE=s3/do
 STORAGE_MAKE_FILES_PUBLIC=no
 AWS_S3_SIGNATURE_VERSION=s3v4
-# --- or ---
-# Local Storage
-STORAGES_MODE=local
 # --------------------------- SCRAPER ------------------------------
 
 SCRAPY_SETTINGS_MODULE=back.apps.language_model.scraping.scraping.settings
@@ -43,15 +40,24 @@ SCRAPY_SETTINGS_MODULE=back.apps.language_model.scraping.scraping.settings
 # to share the same .env whether you run in the host (dev mode) or in a container
 REDIS_URL=redis://redis:6379/0 # "redis://${REDIS_HOST}:${REDIS_PORT}/${REDIS_DB}"
 
-# --------------------------- LLM APIs ------------------------------
-# # Optionals
-# TG_TOKEN=<TELEGRAM_TOKEN>
+
+# --------------------------- LLM/Retriever APIs ------------------------------
+OPENAI_API_KEY=<OPENAI_API_KEY>
+
+# # All Options
+
+# VLLM_ENDPOINT_URL=http://<VLLM_HOST>:8000/v1 (https://docs.vllm.ai/en/latest/models/supported_models.html)
+
 # OPENAI_API_KEY=<OPENAI_API_KEY>
-# HUGGINGFACE_KEY=<HUGGINGFACE_KEY>
-# VLLM_ENDPOINT_URL=http://localhost:5000/v1
 # ANTHROPIC_API_KEY=<ANTHROPIC_API_KEY>
 # MISTRAL_API_KEY=<MISTRAL_API_KEY>
+# TOGETHER_API_KEY=<HUGGINGFACE_KEY>
+
+# HUGGINGFACE_KEY=<HUGGINGFACE_KEY>
+
+# --------------------------- Messengers ---------------------
+# TG_TOKEN=<TELEGRAM_TOKEN>
 
-# --------------------------- RAY Worker Config ---------------------
+# --------------------------- RAY Workers Config ---------------------
 BACKEND_HOST=http://back:8000
 BACKEND_TOKEN=<BACKEND_TOKEN>
```
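The commented pattern next to `DATABASE_URL` in the template shows how the URL decomposes into the individual database variables. A quick sketch of that expansion, using the template's dev defaults (the variable names mirror the comment; the expansion itself is plain shell):

```shell
# Assemble DATABASE_URL from its parts, mirroring the commented pattern
# in the template (values are the template's dev defaults).
POSTGRES_USER=chatfaq
POSTGRES_PASSWORD=chatfaq
DB_HOST=postgres
DB_PORT=5432
POSTGRES_DB=chatfaq
DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DB_HOST}:${DB_PORT}/${POSTGRES_DB}"
echo "$DATABASE_URL"
```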

back/back/apps/language_model/fixtures/initial.json

Lines changed: 2 additions & 1 deletion
Large diffs are not rendered by default.
New file: `createtoken` management command

Lines changed: 28 additions & 0 deletions

```python
from django.core.management.base import BaseCommand, CommandError

from back.apps.people.models import User
from knox.models import AuthToken


class Command(BaseCommand):
    help = 'Creates an auth token for the given user email'

    def add_arguments(self, parser):
        parser.add_argument('email', type=str, help='Email of the user to create the token for')

    def handle(self, *args, **options):
        email = options['email']
        try:
            user = User.objects.get(email=email)
        except User.DoesNotExist:
            raise CommandError(f'User with email "{email}" does not exist')

        # Check that the user belongs to the RPC group
        if not user.groups.filter(name="RPC").exists():
            raise CommandError(
                f'User with email "{email}" does not belong to the RPC group'
            )

        instance, token = AuthToken.objects.create(user=user)

        self.stdout.write(self.style.SUCCESS(f'Token for user "{email}" created: {token}'))
```
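The command's control flow (look up the user, check the RPC group, mint a token) can be sketched without Django at all. In the sketch below a plain dict stands in for the ORM, `ValueError` for `CommandError`, and a random hex string for the knox token; all of those are hypothetical stand-ins, not the real implementation:

```python
# Dependency-free sketch of the createtoken flow: dict instead of the Django
# ORM, ValueError instead of CommandError, secrets.token_hex instead of knox's
# own token format. Illustrative only.
import secrets

USERS = {"rpc@example.com": {"groups": ["RPC"]}}  # stand-in user store


def create_token(email: str) -> str:
    user = USERS.get(email)
    if user is None:
        raise ValueError(f'User with email "{email}" does not exist')
    if "RPC" not in user["groups"]:
        raise ValueError(f'User with email "{email}" does not belong to the RPC group')
    return secrets.token_hex(32)  # stand-in for the knox-generated token


print(f'Token for user "rpc@example.com" created: {create_token("rpc@example.com")}')
```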

doc/source/introduction.md

Lines changed: 48 additions & 21 deletions
```diff
@@ -1,36 +1,50 @@
-# Introduction
+# Installation
 
-## Installation
+## Components
 
-The system comprises three main components that you need to set up:
+The system comprises seven main components; here are their relationships and the technologies they are built with:
 
 ![ChatFAQ Components](./_static/images/chatfaq_components.png)
 
-- The Back (<a href="/en/latest/modules/installations/index.html#back-installation">local install</a>) manages the communication between all the components. It also houses the database for storing all the data related to the chatbots, datasets, models, etc.
+- Back-end (<a href="/en/latest/modules/installations/index.html#back-installation">local install</a>) manages the communication between all the components. It also houses the database for storing all the data related to the chatbots, datasets, models, etc.
 
-- The SDK (<a href="/en/latest/modules/installations/index.html#sdk-installation">local install</a>) launches a Remote Procedure Call (RPC) server to execute transitions and events from the posted FSM definitions.
+- SDK (<a href="/en/latest/modules/installations/index.html#sdk-installation">local install</a>) launches a Remote Procedure Call (RPC) server to execute transitions and events from the posted FSM definitions.
 
-- The Widget (<a href="/en/latest/modules/installations/index.html#widget-installation">local install</a>) is a JS browser client application from which the user interacts with the bot.
+- Widget (<a href="/en/latest/modules/installations/index.html#widget-installation">local install</a>) is a JS browser client application from which the user interacts with the bot.
+
+- Admin is a JS browser client application to manage the chatbots, datasets, retrievers, models, RAG configs, etc.
+
+- Ray workers (Ray) are used to run distributed inference on the models.
+
+- Channel layer (Redis) is used to communicate through WebSockets between the back-end and the SDK, admin and widget.
+
+- Relational database (PostgreSQL) is used to store all the data related to the chatbots, datasets, retrievers, models, RAG configs, etc.
 
 ### Docker Compose
 
-There is a `docker-compose.yaml` that runs all the services you need. You can run it with:
+We have prepared a `docker-compose.yaml` that sets up all the services for you. You can find it at the root of the repository.
 
-First of all we recommend to add to your hosts file (usually under `/etc/hosts`) the following lines in order to share the `.env` files values between a local deployment and a docker deployment:
+But first, add the following lines to your hosts file (usually `/etc/hosts`) so the same `.env` values work for both a local deployment and a Docker deployment:
 
-    127.0.0.1 postgres
-    127.0.0.1 back
-    127.0.0.1 ray
-    127.0.0.1 redis
+    127.0.0.1 postgres
+    127.0.0.1 back
+    127.0.0.1 redis
 
 Then you need to create the corresponding `.env` files for each service. You can see an example of those on:
 
 - [back/.env-template](https://github.com/ChatFAQ/ChatFAQ/blob/develop/back/.env-template)
 - [sdk/.env-template](https://github.com/ChatFAQ/ChatFAQ/blob/develop/sdk/.env-template)
+- [admin/.env-template](https://github.com/ChatFAQ/ChatFAQ/blob/develop/admin/.env-template)
 - [widget/.env-template](https://github.com/ChatFAQ/ChatFAQ/blob/develop/widget/.env-template)
@@ -45,27 +59,40 @@ Create a superuser on the backend (making sure you answer 'yes' to the question
 
 Generate a ChatFAQ Token with the user and password you just created:
 
-    docker compose exec back curl -X POST -u <USER>:<PASSWORD> http://localhost:8000/back/api/login/
+    docker compose -f docker-compose.yaml -f docker-compose.vars.yaml run back poetry run ./manage.py createtoken <USER>
 
 Which will respond with something like:
 
-    {"expiry":null,"token":"<TOKEN>"}
+    Token for user <USER> created: <TOKEN>
 
 Add it to your `sdk/.env` file:
 
     CHATFAQ_TOKEN=<TOKEN>
 
-And finally, now you can run all the services:
+and to the `back/.env` file:
+
+    BACKEND_TOKEN=<TOKEN>
+
+One last thing: the LLM configuration we provide in the fixture uses an OpenAI model, so you need to add your OpenAI API key to the `back/.env` file:
+
+    OPENAI_API_KEY=<API_KEY>
+
+Do not worry: our solution supports other LLMs, including deploying your own local model behind a vLLM server, but for the sake of simplicity, and because OpenAI models are the most popular, we use OpenAI as the default.
+
+Finally, you can run all the services:
+
+    docker compose -f docker-compose.yaml -f docker-compose.vars.yaml up
 
-    docker compose -f docker-compose.yaml -f docker-compose.vars.yaml up -d
+Congratulations! You have a running ChatFAQ instance.
 
+Now you can navigate to the widget at http://localhost:3000/demo/ to interact with the chatbot,
 
-## Model Configuration
+or to the admin at http://localhost:3000/ to manage the chatbot and see how we have configured the model for you.
 
-After setting up the components, you will probably want to configure a model that you want to use for your chatbot. Typically the model will be used from the SDK, from a state within its FSM.
+## Deeper into ChatFAQ
 
-Here is an example of a minimum model ([configuration](./modules/configuration/index.md))
+If you want to upload your own dataset, check the [Dataset Configuration](./modules/dataset/index.md) documentation.
 
-## Quick Start
+If you want to configure your own RAG (LLM model, retriever model, prompt configuration, etc.), check the [RAG Configuration](./modules/rag/index.md) documentation.
 
-Learning <a href="/en/latest/modules/sdk/index.html#usage">how to use the SDK</a> is the only requirement to start building your own chatbots with ChatFAQ.
+If you want to learn how to use the SDK to create your own chatbot behavior, check the [SDK](./modules/sdk/index.md) documentation.
```
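The token wiring above boils down to each service reading simple `KEY=VALUE` pairs from its `.env` file at startup. A minimal, hand-rolled sketch of that parsing (the real services may use a library such as python-dotenv; this parser is only illustrative):

```python
# Minimal .env parser: one KEY=VALUE per line, '#' comments and blank
# lines ignored, surrounding double quotes on values stripped.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env


sample = """
# sdk/.env
CHATFAQ_TOKEN=abc123
"""
print(parse_env(sample)["CHATFAQ_TOKEN"])  # → abc123
```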
doc/source/modules/dataset/index.md

Lines changed: 3 additions & 0 deletions

```markdown
# Datasets

TODO
```

doc/source/modules/rag/index.md

Lines changed: 3 additions & 0 deletions
```markdown
# RAG

TODO
```
