- Back-end (<a href="/en/latest/modules/installations/index.html#back-installation">local install</a>) manages the communication between all the components. It also houses the database that stores all the data related to the chatbots, datasets, models, etc.
- SDK (<a href="/en/latest/modules/installations/index.html#sdk-installation">local install</a>) launches a Remote Procedure Call (RPC) server to execute transitions and events from the posted FSM definitions.
- Widget (<a href="/en/latest/modules/installations/index.html#widget-installation">local install</a>) is a JS browser client application from which the user interacts with the bot.
- Admin is a JS browser client application to manage the chatbots, datasets, retrievers, models, RAG configs, etc.
- Ray workers (Ray) are used to run distributed inference on the models.
- Channel layer (Redis) is used to communicate through WebSockets between the back-end and the SDK, admin, and widget.
- Relational database (PostgreSQL) is used to store all the data related to the chatbots, datasets, retrievers, models, RAG configs, etc.
### Docker Compose
We prepared a `docker-compose.yaml` that sets up all the services for you. You can find it at the root of the repository.

But first, add the following lines to your hosts file (usually under `/etc/hosts`) so that the `.env` file values are shared between a local deployment and a Docker deployment:

    127.0.0.1 postgres
    127.0.0.1 back
    127.0.0.1 redis
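The hosts-file step above can be scripted. Here is a minimal sketch: the `add_host_entry` helper (a hypothetical name, not part of ChatFAQ) appends a `127.0.0.1` entry only if the hostname is not already listed. It is demonstrated on a temporary file; point it at `/etc/hosts` with `sudo` for a real deployment.

```shell
#!/bin/sh
# Append "127.0.0.1 <name>" to a hosts file unless <name> already has an entry.
add_host_entry() {
    hosts_file="$1"
    name="$2"
    if ! grep -qE "^[0-9.]+[[:space:]]+${name}([[:space:]]|\$)" "$hosts_file"; then
        printf '127.0.0.1 %s\n' "$name" >> "$hosts_file"
    fi
}

# Demo on a temporary file; for a real deployment run against /etc/hosts with sudo.
tmp_hosts=$(mktemp)
for h in postgres back redis; do
    add_host_entry "$tmp_hosts" "$h"
done
add_host_entry "$tmp_hosts" "redis"   # second call for the same name is a no-op
```

Making the helper idempotent means you can re-run the setup script safely without piling up duplicate entries.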
Then you need to create the corresponding `.env` files for each service. You can see examples of those in:
Create a superuser on the backend (making sure you answer 'yes' to the question when prompted):
Generate a ChatFAQ Token with the user and password you just created:
    docker compose -f docker-compose.yaml -f docker-compose.vars.yaml run back poetry run ./manage.py createtoken <USER>
Which will respond with something like:
    Token for user <USER> created: <TOKEN>
Add it to your `sdk/.env` file:
    CHATFAQ_TOKEN=<TOKEN>
and to the `back/.env` file:

    BACKEND_TOKEN=<TOKEN>
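If you are scripting the setup, the token can be pulled out of the `createtoken` output and written into both `.env` files. A sketch, assuming the output format shown above (`Token for user <USER> created: <TOKEN>`); the `createtoken_output` variable and the `*.env.demo` file names are stand-ins for the real `docker compose … createtoken` call and the real `sdk/.env` and `back/.env` paths.

```shell
#!/bin/sh
# Stand-in for the output of:
#   docker compose -f docker-compose.yaml -f docker-compose.vars.yaml \
#     run back poetry run ./manage.py createtoken <USER>
createtoken_output="Token for user admin created: 9f1c2e7ab0"

# Extract everything after "created: " as the token.
token=$(printf '%s\n' "$createtoken_output" | sed -n 's/.*created: //p')

# Append the token to both env files (demo paths; use sdk/.env and back/.env for real).
printf 'CHATFAQ_TOKEN=%s\n' "$token" >> sdk.env.demo
printf 'BACKEND_TOKEN=%s\n' "$token" >> back.env.demo
```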
One last thing: the LLM configuration we provide in the fixture uses an OpenAI model, so you need to add your OpenAI API key to the `back/.env` file:

    OPENAI_API_KEY=<API_KEY>
Do not worry: our solution supports any other LLM, and it also supports deploying your own local model on a vLLM server. For simplicity, and because OpenAI models are the most popular, we use one as the default.

Finally, you can run all the services:

    docker compose -f docker-compose.yaml -f docker-compose.vars.yaml up
Congratulations! You have a running ChatFAQ instance.
Now you can navigate to the widget to interact with the chatbot: http://localhost:3000/demo/
Or to the admin to manage the chatbot and see how we have configured the model for you: http://localhost:3000/
## Deeper into ChatFAQ
If you want to upload your own dataset, check the [Dataset Configuration](./modules/dataset/index.md) documentation.
If you want to learn how to configure your own RAG pipeline (LLM model, retriever model, prompt configuration, etc.), check the [RAG Configuration](./modules/rag/index.md) documentation.
If you want to learn how to use the SDK to create your own chatbot behavior, check the [SDK](./modules/sdk/index.md) documentation.