CXCSCMU GroupWiki
=== Meeting 2 - November 3, 2023 ===
{| class="wikitable"
!Paper
!#Votes
|-
|[https://arxiv.org/pdf/2310.11511.pdf SELF-RAG: Learning to Retrieve, Generate and Critique through Self-reflection]
|4
|-
|[https://arxiv.org/pdf/2310.11716.pdf Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning]
|2
|-
|[https://arxiv.org/abs/2310.14034 Tree Prompting: Efficient Task Adaptation without Fine-Tuning]
|2
|-
|[https://arxiv.org/abs/2306.04488 Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards]
|2
|-
|[https://arxiv.org/pdf/2309.12307.pdf LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models]
|2
|-
|[https://arxiv.org/pdf/2304.15004.pdf Are Emergent Abilities of Large Language Models a Mirage?]
|2
|-
|[https://arxiv.org/abs/2205.14135 FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness]
|2
|-
|[https://arxiv.org/pdf/2310.11451.pdf Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective]
|1
|-
|[https://arxiv.org/pdf/2301.13808.pdf Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning]
|1
|-
|[https://aclanthology.org/2021.acl-long.568/ Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning]
|1
|-
|[https://arxiv.org/pdf/2310.17680v1.pdf CodeFusion: A Pre-trained Diffusion Model for Code Generation]
|0
|-
|[https://arxiv.org/abs/2309.08532 Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers]
|0
|-
|[https://openreview.net/pdf?id=9k27IITeAZ ChunkAttention: Efficient Attention on KV Cache with Chunking Sharing and Batching]
|0
|-
|[https://arxiv.org/abs/2310.05029 Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading]
|0
|}