CXCSCMU GroupWiki
Organize your code
Create your own branch for development and push changes to it regularly! Here is a suggested workflow:
- For new features/experiments, it is highly recommended to create a new branch
- After development and thorough testing, merge the new feature into your branch
- If you think a feature can benefit everyone in the group, merge it into the main branch via a pull request (peers should review and test the code); a minimal command sequence is sketched below
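For reference, a minimal git command sequence for this workflow might look like the following (the branch names are placeholders):

# start a feature branch off your personal development branch
git checkout my-dev-branch
git checkout -b feature/new-experiment

# commit and push regularly while developing
git add -A
git commit -m "add new experiment"
git push -u origin feature/new-experiment

# after thorough testing, merge the feature back into your branch
git checkout my-dev-branch
git merge feature/new-experiment
git push origin my-dev-branch

# if the feature benefits the whole group, open a pull request
# from your branch into main so peers can review and test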
Organize your results
Loss curve
If you train a model, the loss curve is vital for debugging, gaining insights, and reproducing your results.
It is recommended to use wandb to track your loss curve:
import wandb

# you may be asked to fill in the API key
wandb.init(project=wandb_project, name=wandb_run_name, config=hparams, dir=out_dir)

# at every logging step
wandb.log({
    "step": train_step,
    "train/loss": train_loss,
    "val/loss": val_loss,
    "step time": (t1 - t0),
    "lr": lr,
})
While training, you can keep track of the curve online and export a report to share at the end of the run.
Evaluation numbers
Create a folder named after yourself or your project under the CXCSCMU_Group Google Drive and use Google Sheets to present the numbers.
- Make sure the column/row names clearly describe the model you are actually evaluating
- ❌ Pythia
- ✅ Pythia-160M full-model fine-tuned on SST-2 for 1 epoch, lr=1e-5, bs=8
- Group models that can be compared fairly, and lay them out the way you would in a research paper
Codebases
Lightning-Pretrain
It is an LLM codebase built on Lit-GPT and PyTorch Lightning, especially useful for efficiently pre-training language models from scratch.
What can it do:
- Pre-train state-of-the-art decoder-only models (Llama, Llama 2, Pythia, Vicuna, GPT-2, ...)
- Fine-tune using task-specific data
- Evaluate on the Language Model Evaluation Harness
Pros:
- State-of-the-art distributed training strategies: DDP, FSDP, DeepSpeed (a minimal configuration sketch follows this list)
- Modern acceleration techniques: FlashAttention, fused Adam, mixed precision
- Parameter-efficient fine-tuning: Adapter, Adapter v2, LoRA, QLoRA, ...
- Large-scale evaluation datasets: covers almost every common NLP task and is continually updated
- Training speed comparable to Hugging Face, with better flexibility
- Relatively easy to convert model weights from/to Hugging Face via name mapping (see the toy sketch at the end of this section)
- Detailed tutorials for each use case, making it easy to get started
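To make the distributed-training and mixed-precision options above concrete, here is a minimal, self-contained Lightning Fabric sketch; the tiny linear model and synthetic batch are stand-ins, not code from Lightning-Pretrain:

import torch
from torch import nn
from lightning.fabric import Fabric

# illustrative settings; swap in devices=4 and strategy="fsdp" (or "ddp"/"deepspeed")
# for large multi-GPU runs
fabric = Fabric(accelerator="auto", devices=1, strategy="auto", precision="bf16-mixed")
fabric.launch()

# tiny stand-in model and optimizer, just to show the Fabric calls
model = nn.Linear(128, 128)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = fabric.setup(model, optimizer)

for step in range(10):
    x = torch.randn(8, 128, device=fabric.device)   # synthetic batch
    loss = model(x).pow(2).mean()
    fabric.backward(loss)   # use instead of loss.backward() so Fabric handles precision/sharding
    optimizer.step()
    optimizer.zero_grad()

In practice these settings would come from your launch configuration rather than being hard-coded in the training loop.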
Cons:
- Does not support models with other architectures, such as T5 or BERT
- Does not support as many training datasets as Hugging Face; you may need to define the dataset class or preprocess the data yourself
- Still under development and requires everyone's effort to maintain
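Regarding the Hugging Face weight conversion noted in the pros, the core idea is just renaming state-dict keys. The sketch below is a toy illustration; the mapping table and file paths are hypothetical, and the real conversion scripts in the codebase cover many more layers and models:

import torch

# hypothetical name mapping from Hugging Face (Llama-style) keys to Lit-GPT-style keys;
# consult the codebase's conversion scripts for the authoritative tables
HF_TO_LIT = {
    "model.embed_tokens.weight": "transformer.wte.weight",
    "model.norm.weight": "transformer.ln_f.weight",
    "lm_head.weight": "lm_head.weight",
}

def convert_state_dict(hf_state_dict, name_map):
    # rename tensors according to the mapping; the weights themselves are untouched
    return {name_map[k]: v for k, v in hf_state_dict.items() if k in name_map}

# usage sketch (paths are placeholders):
# hf_sd = torch.load("pytorch_model.bin", map_location="cpu")
# torch.save(convert_state_dict(hf_sd, HF_TO_LIT), "lit_model.pth")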