Training Material
=== Connecting to the Cluster ===
Each team will be assigned a temporary Andrew ID, which you will need to access the cluster. Follow the [[Connecting to the Cluster|instructions]] to connect to the cluster.

<div style="float: left; margin-right: 1%; width: 70%;">

=== Working with SLURM ===
SLURM (Simple Linux Utility for Resource Management) is a job scheduler and resource management system commonly used in high-performance computing (HPC) environments. You will need to become familiar with basic SLURM usage in order to interact with the cluster. Refer to [[Slurm|Beginner's guide to the SLURM workload manager]] for more details.
</div>

<div style="float: left; margin-right: 1%; width: 70%;">

=== Monitoring ===
To monitor cluster activities such as jobs, compute resources, and disk usage, learn about essential techniques and tools for effective cluster monitoring here: [[Monitoring]].
</div>

<div style="float: left; margin-right: 1%; width: 70%;">

=== LLM Deployment Demo ===

==== LLaMA ====
Released in February 2023 by Meta AI, LLaMA is a family of open-source pre-trained LLMs ranging in size from 7 billion to 65 billion parameters. It was trained mostly on scraped internet data, with other data sources such as GitHub code, books, and academic papers mixed in. Since LLaMA's release, it has also been fine-tuned for instruction following and conversational alignment.

''<u>Accessibility</u>'': LLaMA is more efficient and less resource-intensive than many other models, and it is available under a non-commercial license to researchers and other organizations. LLaMA comes in several sizes (7B, 13B, 33B, and 65B parameters), making it accessible to a range of computing resources.

''<u>Open-source Community</u>'': Because LLaMA models are part of the open-source ecosystem, users can benefit from the extensive community support, documentation, and shared resources available through platforms such as HuggingFace.
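To illustrate the HuggingFace integration mentioned above, here is a minimal, purely illustrative sketch of loading a LLaMA checkpoint that has already been converted to HuggingFace format using the <code>transformers</code> library. It is not part of the Babel setup that follows, and the model path is a placeholder rather than a real location on the cluster.
<syntaxhighlight lang="python">
# Illustrative sketch only: load a LLaMA checkpoint already converted to
# HuggingFace format. The path below is a placeholder, not a real location.
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "/data/user_data/<andrew_id>/llama-7b-hf"  # hypothetical path

tokenizer = LlamaTokenizer.from_pretrained(model_path)
# device_map="auto" spreads the model over available GPUs (requires accelerate).
model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto")

inputs = tokenizer("CMU students are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</syntaxhighlight>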
==== Setting up LLaMA on Babel ====

''<u>Server</u>''
# Clone the repository: https://github.com/neulab/lti-llm-deployment
# Check out the <code>update-lamma</code> branch.
# (If not already installed) Install pip following https://pip.pypa.io/en/stable/installation/
# Create a virtual environment with <code>Miniconda</code>:
## <code>wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh</code>
## <code>bash Miniconda3-latest-Linux-x86_64.sh</code>
## <code>conda create --name myenv</code>
## <code>conda activate myenv</code>
# Start a shell on a compute node: <code>srun -G <num_of_gpu> -t <num_of_hours> -p <partition_name> -w <node_name> --pty $SHELL</code>
#* <code>num_of_gpu</code>: the number of GPUs you would like to allocate; each team has 8 GPUs.
#* <code>num_of_hours</code>: the time limit for the job; the maximum is 20 days.
#* <code>partition_name</code>, <code>node_name</code>: each team is assigned a specific partition and compute node; consult your mentor if you do not have them.
# Activate the conda environment: <code>conda activate myenv</code>
# Change into the repository directory: <code>cd lti-llm-deployment</code>
# Install dependencies ([https://github.com/neulab/lti-llm-deployment/blob/main/inference_server/README.md README]):
## <code>pip install flask flask_api gunicorn pydantic accelerate huggingface_hub>=0.9.0 deepspeed>=0.7.3 deepspeed-mii==0.0.4</code>
## <code>pip install sentencepiece</code>
# Set the cache environment variable (required for models with more than 7B parameters): <code>export TRANSFORMERS_CACHE=/data/user_data/<andrew_id></code>
# Run the desired launch script, e.g. <code>bash launch_llama7b_fp16_2gpu_server.sh</code>

Upon successful execution, the message ''"model loaded"'' is printed on the terminal.

''<u>Client</u>''
# Start another session on Babel to set up the client.
# Create a Python file with the following contents:<syntaxhighlight lang="python">
import llm_client

# Connect to the inference server started above (adjust the node name and
# port to match your own server session).
client = llm_client.Client(address="babel-0-23", port=5000)

text = "CMU students are"

# Request a generated continuation of the prompt.
output = client.prompt([text])

# Request per-token scores for the prompt.
tokens, scores = client.score([text])

print(text)
print(output[0].text)
for tok, s in zip(tokens, scores):
    print(f"[{tok}]: {s:.2f}")
</syntaxhighlight>
# Run <code>python filename.py</code>
</div>
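As an optional follow-up, the sketch below reuses only the <code>llm_client</code> calls demonstrated above (<code>prompt</code> and <code>score</code>) to query the same server with several prompts and print a quick summary per prompt. It assumes the server launched earlier is still running; adjust the node name and port to your own session.
<syntaxhighlight lang="python">
# Optional usage sketch built only on the client calls shown in the example
# above. Assumes the server is still reachable at this address/port.
import llm_client

client = llm_client.Client(address="babel-0-23", port=5000)

prompts = [
    "CMU students are",
    "The Language Technologies Institute is",
]

for text in prompts:
    # Generated continuation for a single prompt.
    output = client.prompt([text])
    print(f"{text!r} -> {output[0].text!r}")

    # Per-token scores for the same prompt, averaged for a quick summary.
    tokens, scores = client.score([text])
    avg_score = sum(scores) / len(scores)
    print(f"  {len(tokens)} tokens, average score {avg_score:.2f}")
</syntaxhighlight>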