# Cogito Backend

This repository contains the backend of the Cogito project. It consists of three main components: an analytical model service, an LLM integration layer, and the main API server.
## Project Structure

```
cogito-backend/
├── analytical-setup/              # Analytical model setup (Python)
│   └── fallacy_detector_model/    # Model
│       ├── .gitignore
│       └── app.py                 # Entry point
│
├── llama-setup/                   # Integration with LLM (Node.js/Express)
│   ├── node_modules/
│   ├── src/                       # Source code for LLM wrapper/handler
│   ├── .env
│   ├── .gitignore
│   ├── package.json
│   └── package-lock.json
│
└── server-api/                    # Main backend (Node.js/Express)
    ├── migrations/                # Database migration scripts
    ├── node_modules/
    ├── scripts/                   # Additional scripts
    ├── src/                       # Main application code
    ├── .env
    ├── .env.test
    ├── .gitignore
    ├── package.json
    └── package-lock.json
```
## Requirements

**analytical-setup**
- Python >= 3.8
- Libraries such as `flask`, `transformers`, etc. (see `./fallacy_detector_model/requirements.py` for details)

**llama-setup & server-api**
- Node.js >= 18.x
- PostgreSQL
- `npm`, `node-pg-migrate`, and other packages
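The Node.js version floor can also be checked programmatically before setup; a minimal sketch (`meetsNodeRequirement` is a hypothetical helper, not part of the repo):

```javascript
// Quick sanity check for the Node.js >= 18 requirement above.
// `meetsNodeRequirement` is illustrative, not part of the repo.
function meetsNodeRequirement(versionString = process.versions.node) {
  const major = parseInt(versionString.split('.')[0], 10);
  return major >= 18;
}

console.log(meetsNodeRequirement());
```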
## Models

Before running the project, make sure the models are placed in the following directories:

- LLM model for `llama-setup/models`: Download on Google Drive
- Fallacy detection model for `analytical-setup/fallacy_detector_model`: Download on Google Drive

After downloading, extract each model archive into its directory.
## How to Run

**analytical-setup**

```bash
cd analytical-setup
pip install -r ./fallacy_detector_model/requirements.py
python app.py
```

**llama-setup**

```bash
cd llama-setup
npm install
npm start
```

**server-api**

```bash
cd server-api
npm install
npm run migrate up
npm start
```

## Notes

- Ensure that the `.env` file is available in each component (`llama-setup` and `server-api`).
- For testing, you can use the `.env.test` file in `server-api`.
- The structure may change depending on the final implementation.
## License

MIT License