In this book, I have attempted to guide the reader from merely calling large language models through an API to designing large solutions in which these models play a significant role. To achieve this, various techniques are explained, including prompt engineering, model training and evaluation, and the use of tools such as vector databases. The book not only discusses the importance of these large language models but also places great emphasis on working with embeddings, which are, in essence, the language that large language models understand.

Vector Databases and LLMs.
In the first chapter of the book, you learned how to use the OpenAI API and obtained your initial results from a large language model.
You also observed how to build a prompt and applied that skill to create a chatbot and a SQL generator. While these may not seem significant at first, I can assure you that with these two small projects you have gained a set of techniques that you will use frequently in the future.
In this second chapter, you are about to take a truly impressive step forward. You will embark on one of the projects most sought after by companies today: enhancing the responses of a large language model with custom information. In other words, you will build a Retrieval Augmented Generation (RAG) system using an open-source large language model from Hugging Face and a vector database.
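Before diving in, the core retrieval idea behind RAG can be sketched in a few lines of plain Python: represent each document as an embedding vector, find the vectors closest to the query, and prepend those documents to the prompt. The hand-written three-dimensional vectors and the `retrieve` helper below are illustrative stand-ins, not the real embedding model or vector database used later in the chapter.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for a real embedding model and vector store.
documents = {
    "The central bank raised interest rates.": [0.9, 0.1, 0.0],
    "The team won the championship final.":    [0.1, 0.9, 0.1],
    "New inflation figures were published.":   [0.8, 0.2, 0.1],
}

def retrieve(query_vector, k=2):
    """Return the k documents whose vectors are closest to the query."""
    ranked = sorted(
        documents,
        key=lambda d: cosine_similarity(query_vector, documents[d]),
        reverse=True,
    )
    return ranked[:k]

# In a real system this vector would come from embedding the user's question.
query_vector = [0.85, 0.15, 0.05]
context = "\n".join(retrieve(query_vector))
prompt = (
    "Answer using only this context:\n"
    f"{context}\n\n"
    "Question: What happened with interest rates?"
)
print(prompt)
```

A production system replaces the toy dictionary with a vector database such as Chroma and the hand-written vectors with a real embedding model, but the flow, embed, retrieve by similarity, and augment the prompt, stays exactly the same.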
CONTENTS.
About the Author.
About the Technical Reviewer.
Acknowledgments.
Introduction.
Part I: Techniques and Libraries.
Chapter 1: Introduction to Large Language Models with OpenAI.
1.1 Create Your First Chatbot with OpenAI.
A Brief Introduction to the OpenAI API.
The Roles in OpenAI Messages.
Memory in Conversations with OpenAI.
Creating a Chatbot with OpenAI.
Key Takeaways and More to Learn.
1.2 Create a Simple Natural Language to SQL Using OpenAI.
Key Takeaways and More to Learn.
1.3 Influencing the Model’s Response with In-Context Learning.
Key Takeaways and More to Learn.
1.4 Summary.
Chapter 2: Vector Databases and LLMs.
2.1 Brief Introduction to Hugging Face and Kaggle.
Hugging Face.
Kaggle.
Key Takeaways and More to Learn.
2.2 RAG and Vector Databases.
How Do Vector Databases Work?
Key Takeaways.
2.3 Creating a RAG System with News Dataset.
What Technology Will We Use?
Preparing the Dataset.
Working with Chroma.
Loading the Model and Testing the Solution.
Different Ways to Load ChromaDB.
Key Takeaways and More to Learn.
2.4 Summary.
Chapter 3: LangChain and Agents.
3.1 Create a RAG System with LangChain.
Reviewing the Embeddings.
Using LangChain to Create the RAG System.
Key Takeaways and More to Learn.
3.2 Create a Moderation System Using LangChain.
Create a Self-moderated Commentary System with LangChain and OpenAI.
Create a Self-moderated Commentary System with Llama-2 and OpenAI.
3.3 Create a Data Analyst Assistant Using an LLM Agent.
Key Takeaways and More to Learn.
3.4 Create a Medical Assistant RAG System.
Loading the Data and Creating the Embeddings.
Creating the Agent.
Key Takeaways and More to Learn.
3.5 Summary.
Chapter 4: Evaluating Models.
4.1 BLEU, ROUGE, and N-Grams.
N-Grams.
Measuring Translation Quality with BLEU.
Measuring Summary Quality with ROUGE.
Key Takeaways and More to Learn.
4.2 Evaluation and Tracing with LangSmith.
Evaluating LLM Summaries Using Embedding Distance with LangSmith.
Tracing a Medical Agent with LangSmith.
Key Takeaways and More to Learn.
4.3 Evaluating Language Models with Language Models.
Evaluating a RAG Solution with Giskard.
Key Takeaways and More to Learn.
4.4 An Overview of Generalist Benchmarks.
MMLU.
TruthfulQA.
Key Takeaways.
4.5 Summary.
Chapter 5: Fine-Tuning Models.
5.1 A Brief Introduction to the Concept of Fine-Tuning.
5.2 Efficient Fine-Tuning with LoRA.
Brief Introduction to LoRA.
Creating a Prompt Generator with LoRA.
Key Takeaways and More to Learn.
5.3 Size Optimization and Fine-Tuning with QLoRA.
Brief Introduction to Quantization.
QLoRA: Fine-Tuning a 4-Bit Quantized Model Using LoRA.
Key Takeaways and More to Learn.
5.4 Prompt Tuning.
Prompt Tuning: Prompt Generator.
Detecting Hate Speech Using Prompt Tuning.
Key Takeaways and More to Learn.
5.5 Summary.
Part II: Projects.
Chapter 6: Natural Language to SQL.
6.1 Creating a Super NL2SQL Prompt for OpenAI.
6.2 Setting Up a NL2SQL Project with Azure OpenAI Studio.
Calling Azure OpenAI Services from a Notebook.
Key Takeaways and More to Learn.
6.3 Setting Up a NL2SQL Solution with AWS Bedrock.
Calling AWS Bedrock from Python.
Key Takeaways and More to Learn.
6.4 Setting Up a NL2SQL Project with Ollama.
Calling Ollama from a Notebook.
Key Takeaways and More to Learn.
6.5 Summary.
Chapter 7: Creating and Publishing Your Own LLM.
7.1 Introduction to DPO: Direct Preference Optimization.
A Look at Some DPO Datasets.
7.2 Aligning a Phi-3 Model with DPO.
Save and Upload.
7.3 Summary.
Part III: Enterprise Solutions.
Chapter 8: Architecting a NL2SQL Project for Immense Enterprise Databases.
8.1 Brief Project Overview.
8.2 Solution Architecture.
Prompt Size Reduction.
Using Different Models to Create SQL.
Semantic Caching to Reduce LLM Access.
8.3 Summary.
Chapter 9: Decoding Risk: Transforming Banks with Customer Embeddings.
9.1 Current Client Risk System.
9.2 How Can a Large Language Model (LLM) Help Us Improve This Process and, Above All, Simplify It?
9.3 First Picture of the Solution.
9.4 Preparatory Steps When Initiating the Project.
9.5 Conclusion.
Chapter 10: Closing.
Index.