As the name of the repository suggests, this is just a playground: a place to better understand Chainlit while employing various agents that use large language models (LLMs) via different inference providers to accomplish a given task. This project aims to be as agent development kit (ADK) agnostic as possible.

Here's a step-by-step interpretation of the flow between the agents:
🚧 👷 Currently in the early stages of development, current graph implementation might change. 👷 🚧
```mermaid
---
config:
  flowchart:
    curve: linear
---
graph TD;
    __start__([<p>__start__</p>]):::first
    product_owner(product_owner)
    scrum_master(scrum_master)
    engineer(engineer)
    tools(tools)
    wait_for_user(wait_for_user)
    __end__([<p>__end__</p>]):::last
    __start__ --> product_owner;
    engineer -.-> scrum_master;
    engineer -.-> tools;
    product_owner -.-> scrum_master;
    product_owner -.-> wait_for_user;
    scrum_master -. end .-> __end__;
    scrum_master -.-> engineer;
    scrum_master -.-> product_owner;
    tools --> engineer;
    wait_for_user --> __end__;
    classDef default fill:#f2f0ff,line-height:1.2
    classDef first fill-opacity:0
    classDef last fill:#bfb6fc
```
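Independent of any particular ADK, the routing in the diagram above can be sketched as a plain adjacency table. This is a hypothetical illustration only, not the project's implementation: node and edge names mirror the diagram, solid edges are unconditional, and dashed (conditional) edges are resolved here by a caller-supplied list of choices standing in for each node's runtime router.

```python
# Hypothetical sketch of the graph above as a plain adjacency table.
# Single-successor entries are solid (unconditional) edges; multi-successor
# entries are the dashed (conditional) edges from the diagram.
GRAPH = {
    "__start__": ["product_owner"],
    "product_owner": ["scrum_master", "wait_for_user"],
    "scrum_master": ["__end__", "engineer", "product_owner"],
    "engineer": ["scrum_master", "tools"],
    "tools": ["engineer"],
    "wait_for_user": ["__end__"],
}

def walk(route_choices):
    """Follow the graph from __start__ to __end__, consuming
    route_choices to pick among conditional successors."""
    node, path = "__start__", []
    while node != "__end__":
        path.append(node)
        successors = GRAPH[node]
        node = successors[0] if len(successors) == 1 else route_choices.pop(0)
    path.append("__end__")
    return path

# One possible run: product_owner hands off to scrum_master, who delegates
# to engineer; engineer calls tools once, then reports back.
print(walk(["scrum_master", "engineer", "tools", "scrum_master", "__end__"]))
# → ['__start__', 'product_owner', 'scrum_master', 'engineer',
#    'tools', 'engineer', 'scrum_master', '__end__']
```

Note that `tools --> engineer` is a solid edge: after a tool call, control always returns to the engineer, matching the diagram.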
Using pip:

- Install the Python packages used for the project

  ```shell
  pip install -r requirements.txt
  ```

- Run the application

  ```shell
  chainlit run app.py
  ```

Using uv:

- Install the Python packages used for the project

  ```shell
  uv sync
  ```

- Run the application

  ```shell
  uv run chainlit run app.py
  ```

Using Docker:

- Run the following command within the root directory; this will build the Chainlit Docker image and pull various Docker images from Docker Hub

  ```shell
  docker compose up --build -d
  ```

- As this container also runs Ollama, please wait for `ollama-setup-1` to finish pulling your specified large language model.
  ```
  2024-01-31 23:36:50 {"status":"verifying sha256 digest"}
  2024-01-31 23:36:50 {"status":"writing manifest"}
  2024-01-31 23:36:50 {"status":"removing any unused layers"}
  2024-01-31 23:36:50 {"status":"success"}
  100 1128k    0 1128k    0    21   2546      0 --:--:--  0:07:33 --:--:--    23
  ```