Training Material
Connecting to Cluster
Each team will be assigned a temporary Andrew ID. You will need the Andrew ID to access the cluster.
Follow the Instructions to connect to Cluster.
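Logging in is a standard SSH session with your temporary Andrew ID. A minimal sketch, assuming the login host is babel.lti.cs.cmu.edu (substitute the hostname given in the connection instructions if it differs):
<syntaxhighlight lang="bash">
# Hostname is an assumption; use the one from the connection instructions
ssh <andrew_id>@babel.lti.cs.cmu.edu
</syntaxhighlight>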
Interacting with SLURM
SLURM (Simple Linux Utility for Resource Management) is a job scheduler and resource management system commonly used in high-performance computing (HPC) environments. You will need to become familiar with the basic usage of SLURM in order to interact with the cluster.
Refer to Beginner's guide to the SLURM workload manager for more details.
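As a preview of what the guide covers, here is a minimal sketch of everyday SLURM usage:
<syntaxhighlight lang="bash">
sinfo              # list partitions and node states
squeue -u $USER    # show your queued and running jobs
sbatch job.sh      # submit a batch script
scancel <job_id>   # cancel a job
</syntaxhighlight>
where job.sh is a batch script along these lines (the resource values are placeholders, not cluster-specific recommendations):
<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00
echo "Hello from $(hostname)"
</syntaxhighlight>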
Monitoring
To monitor cluster activity such as jobs, compute resources, and disk usage, see Monitoring for essential techniques and tools.
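In practice, most day-to-day monitoring comes down to a few commands. A sketch (the sacct fields and the du path are illustrative choices, not the only options):
<syntaxhighlight lang="bash">
squeue -u $USER                                        # your jobs and their states
sacct -j <job_id> --format=JobID,State,Elapsed,MaxRSS  # accounting info for a completed job
sinfo -o "%P %a %l %D %t"                              # partition availability at a glance
du -sh /data/user_data/<andrew_id>                     # disk usage of your data directory
</syntaxhighlight>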
LLM Deployment Demo
LLaMA
Released in February 2023 by Meta Research, LLaMA is the latest and greatest in open-source pre-trained LLMs. The LLaMA models range in size from 7 billion to 65 billion parameters. LLaMA was trained mostly on scraped internet data, with some other data sources, such as GitHub code, books, and academic papers, thrown in. Since its release, LLaMA has also been fine-tuned for instruction following and conversational alignment.
Accessibility: LLaMA is more efficient and less resource-intensive than larger models of comparable quality, and it is available under a non-commercial license to researchers and other organizations. LLaMA comes in various sizes (7B, 13B, 33B, and 65B parameters), making it accessible to a range of computing resources.
Open-source Community: LLaMA models are part of the open-source ecosystem, so users benefit from the extensive community support, documentation, and shared resources available through platforms like HuggingFace.
Setting up LLaMA on Babel:
Server
- Clone the repo: https://github.com/neulab/lti-llm-deployment
- Check out the update-lamma branch
- (If not already installed) Install pip using https://pip.pypa.io/en/stable/installation/
- Create a virtual environment with Miniconda:
<syntaxhighlight lang="bash">
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
conda create --name myenv
conda activate myenv
</syntaxhighlight>
- Start a shell on a compute node:
<syntaxhighlight lang="bash">
srun -c 16 --mem=128Gb --gres=gpu:2 --pty $SHELL
</syntaxhighlight>
- Change into the lti-llm-deployment directory:
<syntaxhighlight lang="bash">
cd lti-llm-deployment
</syntaxhighlight>
- Install dependencies (README):
<syntaxhighlight lang="bash">
pip install flask flask_api gunicorn pydantic accelerate "huggingface_hub>=0.9.0" "deepspeed>=0.7.3" deepspeed-mii==0.0.4
pip install sentencepiece
</syntaxhighlight>
- Set the cache environment variable (required for models with more than 7B parameters):
<syntaxhighlight lang="bash">
export TRANSFORMERS_CACHE=/data/user_data/<andrew_id>
</syntaxhighlight>
- Run the desired script to launch the LLM (see the note after this list on finding the server's hostname):
<syntaxhighlight lang="bash">
bash launch_llama7b_fp16_2gpu_server.sh
</syntaxhighlight>
Upon successful execution, the message "model loaded" is printed on the terminal.
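The client (next section) connects to the compute node where this server is running, so note that node's hostname. A quick way to check:
<syntaxhighlight lang="bash">
hostname         # run inside the srun shell that launched the server
squeue -u $USER  # the NODELIST column shows your node, e.g. babel-0-23
</syntaxhighlight>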
Client
- Start another session on Babel to act as the client
- Create a Python file with the following contents:
<syntaxhighlight lang="python">
import llm_client

# Replace babel-0-23 with the hostname of the compute node
# where the server is running
client = llm_client.Client(address="babel-0-23", port=5000)

text = "CMU students are"
output = client.prompt([text])         # generate a completion
tokens, scores = client.score([text])  # per-token scores for the prompt

print(text)
print(output[0].text)
for tok, s in zip(tokens, scores):
    print(f"[{tok}]: {s:.2f}")
</syntaxhighlight>
- Run the file:
<syntaxhighlight lang="bash">
python filename.py
</syntaxhighlight>
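If the client hangs or cannot connect, a plain TCP probe is a quick sanity check that the server port is reachable from your session. A minimal sketch, assuming the server node is babel-0-23 as in the example above:
<syntaxhighlight lang="bash">
# -z: just test that the port is open, send no data
nc -z babel-0-23 5000 && echo "server reachable"
</syntaxhighlight>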