PyLLMCore

PyLLMCore is a Python library for working with a variety of LLMs; it supports both OpenAI and local models.

Setup on Linux

Install the llama-cpp-python library first, so you can make sure the NVIDIA CUDA dependencies are configured correctly:

```shell
CMAKE_ARGS="-DLLAMA_CUBLAS=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc" pip install llama-cpp-python
pip install py-llm-core
```
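After installing, a quick sanity check can confirm that both packages are importable. This is a hypothetical helper, not part of either library; it only checks that the packages are visible to Python, not that the CUDA build succeeded:

```python
# Sanity check: confirm llama-cpp-python and py-llm-core are importable.
import importlib.util

for pkg in ("llama_cpp", "llm_core"):
    status = "installed" if importlib.util.find_spec(pkg) else "missing"
    print(f"{pkg}: {status}")
```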

Put models in the correct location

The library seems quite fussy about model location: models must live in the ~/.cache/py-llm-core/models/ folder inside your user profile. Since I am already using Simon Willison's LLM (as described here), I symlink the Zephyr model from there:

```shell
ln -s ~/.config/io.datasette.llm/llama-cpp/models/zephyr-7b-alpha.Q5_K_M.gguf \
 ~/.cache/py-llm-core/models/zephyr-7b-alpha.Q5_K_M.gguf
```
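With the model in place, structured extraction with a local model looks roughly like this. This is a minimal, untested sketch assuming the parser API shown in py-llm-core's README; the Book dataclass and the extract_book helper are illustrative names, and the model filename matches the symlinked Zephyr weights above:

```python
from dataclasses import dataclass


# py-llm-core parses free text into a dataclass you define (illustrative schema).
@dataclass
class Book:
    title: str
    author: str


def extract_book(text: str) -> Book:
    # Deferred import: requires py-llm-core and the GGUF file
    # in ~/.cache/py-llm-core/models/ (hypothetical usage sketch).
    from llm_core.parsers import LLaMACPPParser

    with LLaMACPPParser(Book, model="zephyr-7b-alpha.Q5_K_M.gguf") as parser:
        return parser.parse(text)


if __name__ == "__main__":
    book = extract_book("Foundation was written by Isaac Asimov.")
    print(book.title, book.author)
```

The with-block matters for local models: it keeps the weights loaded only for the duration of the parse, which is kind on memory when the model is a 7B quantized file.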