This project focuses on fine-tuning and evaluating the Meta Llama 3.1 8B model on Amazon Bedrock for summarization tasks. It includes data preparation, model customization, and evaluation steps.
- `01_setup.ipynb`: Initial setup notebook.
- `02_fine-tune_and_evaluate_llama31_8B_bedrock_summarization.ipynb`: Notebook for fine-tuning and evaluation.
- `03_cleanup.ipynb`: Cleanup notebook.
- `Step01-DataPreparation.ipynb`: Data preparation notebook.
- `Step02-Customization.ipynb`: Customization notebook.
To run the project:

- Open the notebooks in JupyterLab or a similar environment.
- Run the notebook cells in order, starting with setup (`01_setup.ipynb`), then fine-tuning and evaluation, and finishing with cleanup (`03_cleanup.ipynb`).
- Adjust parameters as needed.
- Clone the repository:

  ```shell
  git clone https://github.com/QsingularityAi/Fine-tune-meta-llama-aws.git
  cd fine-tune-meta-llama-aws
  ```
- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

  (Create a `requirements.txt` file with the necessary packages if one doesn't exist.)
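If no `requirements.txt` exists yet, a minimal sketch might look like the following. `boto3` is the AWS SDK used to call Amazon Bedrock; the remaining packages are assumptions based on typical data-preparation notebooks, so adjust the list to match the imports the notebooks actually use:

```text
# Hypothetical requirements.txt — verify against the notebooks' imports
boto3
botocore
pandas
jupyterlab
```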
- Configure AWS credentials: use the AWS CLI (`aws configure`) or set the standard AWS environment variables.
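As one option, credentials can be supplied through environment variables. A minimal sketch, with placeholder values you must replace with your own IAM credentials and a region in which Amazon Bedrock is available:

```shell
# Placeholder values — substitute your own IAM credentials
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
# Choose a region where Amazon Bedrock (and the Llama models) are available
export AWS_DEFAULT_REGION="us-east-1"
```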
- [Optional] Install JupyterLab:

  ```shell
  pip install jupyterlab
  ```