
Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: ggml-org/llama.cpp
base: master
head repository: KASR/llama.cpp
compare: master
  • 7 commits
  • 2 files changed
  • 1 contributor

Commits on Apr 27, 2023

  1. python script to verify the checksum of the llama models

    Added a Python script that verifies the SHA256 checksums of files in a directory and runs on multiple platforms. Improved the formatting of the output results for better readability.
    KASR authored Apr 27, 2023
    b7fb31e
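    The verification described above can be sketched roughly as follows; this is a minimal illustration, not the PR's actual code. The `verify_checksums` name and the two-space `digest  filename` checklist layout (the usual `SHA256SUMS` format) are assumptions.

    ```python
    import hashlib
    from pathlib import Path

    def verify_checksums(checklist_path, model_dir):
        """Check each 'hexdigest  filename' line of a checksum list
        against the files on disk and report per-file status."""
        results = []
        for line in Path(checklist_path).read_text().splitlines():
            expected, _, name = line.strip().partition("  ")
            target = Path(model_dir) / name
            if not target.exists():
                results.append((name, "missing"))
                continue
            actual = hashlib.sha256(target.read_bytes()).hexdigest()
            results.append((name, "ok" if actual == expected else "mismatch"))
        # Align file names in a fixed-width column for readable output.
        for name, status in results:
            print(f"{name:<40} {status}")
        return results
    ```

    Running it against a directory containing the models and a checksum list prints one aligned status line per file.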
  2. Update README.md

    Update the README for improved readability and to explain the usage of the Python checksum verification script.
    KASR authored Apr 27, 2023
    78434dd
  3. update the verification script

    I've extended the script based on suggestions by @prusnak.

    The script now checks the available RAM: if there is enough to check the file at once, it does so; if not, the file is read in chunks.
    KASR authored Apr 27, 2023
    0a6d364
  4. minor improvement

    Small change so that the available RAM is checked rather than the total RAM.
    KASR authored Apr 27, 2023
    24317a5
  5. remove the part of the code that reads the file at once if enough RAM is available

    Based on suggestions from @prusnak, I removed the part of the code that checks whether the user has enough RAM to read the entire model at once. The file is now always read in chunks.
    KASR authored Apr 27, 2023
    6ddce36
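    Always reading in fixed-size chunks keeps memory use bounded no matter how large the model file is. A minimal sketch of such a chunked SHA256 (the function name and the 1 MiB chunk size are illustrative choices, not taken from the PR):

    ```python
    import hashlib

    def sha256_chunked(path, chunk_size=1024 * 1024):
        """Stream a file through SHA256 one chunk at a time so that
        memory use stays bounded regardless of file size."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # iter() keeps calling f.read(chunk_size) until it returns b"" at EOF.
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()
    ```

    The digest is identical to hashing the whole file in one read, since SHA256 is computed incrementally via `update()`.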

Commits on May 3, 2023

  1. 3bdecc2
  2. Update verify-checksum-models.py

    Quick fix to pass the git check.
    KASR authored May 3, 2023
    9f788b9