Large Language Models (LLMs)
- (great) by OpenAI: analyzes the trade-off between model size and compute overhead, revealing significant room to reduce the compute-optimal model size with minimal compute overhead.
- Alec Radford et al., OpenAI: scaling LLMs with data is enough to make them few-shot learners.
Databricks Dolly
Vicuna
LLaMA
LLMs
Prompt Templates
Chains
Agents and Tools
Memory
Document Loaders
Indexes
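To make the component list above concrete, here is a minimal, dependency-free sketch of the "prompt template" and "chain" ideas. This is NOT the real LangChain API (class and method names here are hypothetical); it only illustrates the pattern of filling a template and piping it into a model callable.

```python
# Hypothetical minimal illustration of prompt templates and chains -- not LangChain itself.

class PromptTemplate:
    """Fills named slots in a template string."""
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)


class Chain:
    """Pipes a formatted prompt into an LLM callable."""
    def __init__(self, llm, prompt):
        self.llm = llm        # any callable: str -> str
        self.prompt = prompt

    def run(self, **kwargs):
        return self.llm(self.prompt.format(**kwargs))


# A stub "LLM" so the sketch runs without an API key.
fake_llm = lambda text: f"[LLM answer to: {text}]"

prompt = PromptTemplate("Summarize the following in one line: {doc}")
chain = Chain(fake_llm, prompt)
print(chain.run(doc="LangChain composes prompts, models, memory and tools."))
```

The real library layers the remaining components (memory, tools, document loaders, indexes) around this same prompt-in, completion-out loop.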
- "We introduce Instructor👨🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domains (e.g., science, finance, etc.) by simply providing the task instruction, without any finetuning. Instructor achieves sota on 70 diverse embedding tasks!"
- A UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows.
- PandasAI: ask questions about your data with LLMs on pandas DataFrames in two lines of code: `pandas_ai = PandasAI(llm)` and `pandas_ai.run(df, prompt='Which are the 5 happiest countries?')`
- LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
- LLM training code for Databricks foundation models, using MosaicML
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
- The simplest, fastest repository for training/finetuning medium-sized GPTs.
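The core operation these minimal GPT repositories train is causal (masked) self-attention. Below is a toy, dependency-free sketch of that step; real implementations use PyTorch tensors, learned Q/K/V projections, and batching, whereas here Q, K, V are small hand-written matrices purely for illustration.

```python
# Toy sketch of causal self-attention (the heart of GPT-style models).
# Assumption: Q, K, V are given directly; real models learn these projections.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def causal_attention(Q, K, V):
    """Each position attends only to itself and earlier positions."""
    d = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        # Scaled dot-product scores against keys 0..i (the causal mask).
        scores = [sum(qj * kj for qj, kj in zip(q, K[t])) / math.sqrt(d)
                  for t in range(i + 1)]
        w = softmax(scores)
        # Weighted sum of the visible values.
        out.append([sum(w[t] * V[t][j] for t in range(i + 1))
                    for j in range(d)])
    return out

Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = causal_attention(Q, K, V)
# Position 0 can only attend to itself, so its output equals V[0].
```

The causal mask is what lets these models be trained on next-token prediction: position i never sees tokens after i.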
(https://minigpt-4.github.io, https://minigpt-v2.github.io/)
- Implementing LLM Guardrails for Safe and Responsible Generative AI Deployment on Databricks
- RLHF: Reinforcement Learning from Human Feedback, by Chip Huyen
- Reinforcement Learning from Human Feedback: Progress and Challenges
- ROUGE: a family of metrics that evaluate the performance of an LLM on text summarization. ROUGE-1, ROUGE-2, and ROUGE-L measure overlap of unigrams, bigrams, and the longest common subsequence (LCS), respectively.
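To make the ROUGE-1 definition concrete, here is a minimal sketch of clipped unigram-overlap scoring. Real evaluations use a library such as `rouge-score` (which also handles stemming and ROUGE-2/L); this toy version computes only precision, recall, and F1 over unigrams.

```python
# Minimal illustrative ROUGE-1 (unigram overlap); no stemming or tokenization beyond split().
from collections import Counter

def rouge1(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge1("the cat sat on the mat", "the cat is on the mat")
```

ROUGE-2 is the same computation over bigrams, while ROUGE-L scores the longest common subsequence instead of fixed n-grams.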