Building Applications with Large Language Models, Singh B., 2024


This book is your guide to the different ways in which Large Language Models such as GPT, BERT, Claude, and LLaMA can be used to build useful applications. It takes you on a journey from the very basics, such as the foundational models of NLP, to advanced techniques such as PEFT, RAG, and prompt engineering. Throughout the book you will find examples and code snippets that will help you appreciate state-of-the-art NLP models. Whether you are a student getting to grips with a new technology, a data scientist transitioning into NLP, or simply someone curious about Large Language Models (LLMs), this book will build your concepts and equip you with the knowledge you need to start building your own applications with LLMs.



Rule-Based Language Models.
The earliest attempts to generate human language were made in the 1960s, when sets of handwritten rules tried to capture different aspects of a language, such as grammar and syntax. Combined with pattern-matching approaches such as regular expressions (regex), these rules produced results that made quite an impact at the time and paved the way for the language models we see today.

The first rule-based system to leave a mark on the world was ELIZA, developed in the mid-1960s by MIT computer scientist Joseph Weizenbaum. ELIZA was a chatbot driven by different “scripts” fed to it in a form similar to a Lisp representation. The script that stood out the most was DOCTOR, in which ELIZA simulated a psychotherapist. Even a short exposure to the chatbot led people to believe that the program was emotional and interested in them, despite their conscious awareness that it was incapable of either. This gave rise to the term “ELIZA effect,” which refers to users' inclination to attribute humanlike qualities (such as emotion or intelligence) to a machine's responses even when they know the machine cannot genuinely produce them.
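The mechanics of such a rule-based chatbot can be sketched in a few lines of Python: regex patterns stand in for the script's rules, and captured text is "reflected" back at the user in the non-directive style of the DOCTOR script. The patterns, responses, and reflection table below are illustrative assumptions for this sketch, not Weizenbaum's original rules.

```python
import random
import re

# Each rule pairs a regex pattern with one or more response templates.
# The text captured by the group is substituted into the template.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]

# Fallbacks when no rule matches -- ELIZA's scripts had these too.
DEFAULT = ["Please go on.", "Can you elaborate on that?"]

# Simple pronoun reflection so "my job" comes back as "your job".
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}


def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECT.get(word.lower(), word) for word in text.split())


def respond(utterance: str) -> str:
    """Return a scripted reply: first matching rule wins."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULT)
```

For example, `respond("I feel nervous about my exams")` produces a reply that echoes "nervous about your exams" back at the user; input that matches no rule falls through to a canned default. The entire illusion rests on pattern matching and substitution, with no model of meaning at all, which is precisely why the ELIZA effect was so striking.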

CONTENTS.
About the Author.
About the Technical Reviewer.
Acknowledgments.
Introduction.
Chapter 1: Introduction to Large Language Models.
Understanding NLP.
Text Preprocessing.
Data Transformation.
History of LLMs.
Language Model.
Rule-Based Language Models.
Statistical Language Models.
Neural Language Models.
RNN and LSTM.
Transformer.
Applications of LLMs.
Conclusion.
Chapter 2: Understanding Foundation Models.
Generations of AI.
Foundation Models.
Building Foundation Models.
Benefits of Foundation Models.
Transformer Architecture.
Self-Attention Mechanism.
What Is Self-Attention?
How Does Self-Attention Work?
Building Self-Attention from Scratch.
Conclusion.
Chapter 3: Adapting with Fine-Tuning.
Decoding the Fine-Tuning.
Instruction Tuning or Supervised Fine-Tuning (SFT).
Instruction Fine-Tuned Models.
Understanding GPU for Fine-Tuning.
Alignment Tuning.
Parameter Efficient Model Tuning (PEFT).
Adapter Tuning.
Soft Prompting.
Low-Rank Adaptation (LoRA).
QLoRA.
Conclusion.
Chapter 4: Magic of Prompt Engineering.
Understanding a Prompt.
Introduction.
Key Characteristics of a Prompt.
Understanding OpenAI API for Chat Completion.
Required Parameters.
Optional Parameters.
Techniques in Prompt Engineering.
Zero-Shot Prompting.
Few-Shot Prompting.
Chain-of-Thought (CoT) Prompting.
Self-Consistency.
Tree-of-Thought (ToT) Prompting.
Generated Knowledge.
Prompt Chaining.
Design Principles for Writing the Best Prompts.
Principle 1: Clarity.
Principle 2: Style of Writing.
Principle 3: Ensuring Fair Response.
Conclusion.
Chapter 5: Stop Hallucinations with RAG.
Retrieval.
Document Understanding.
Chunking.
Chunk Transformation and Metadata.
Embeddings.
Search.
Augmentation.
Generation.
Conclusion.
Chapter 6: Evaluation of LLMs.
Introduction.
Evaluating the LLM.
Basic Capability: Language Modeling.
Advanced Capabilities: Language Translation.
Advanced Capabilities: Text Summarization.
Advanced Capabilities: Programming.
Advanced Capabilities: Question Answering Based on Pre-training.
Advanced Capabilities: Question Answering Based on Evidence.
Advanced Capabilities: Commonsense Reasoning.
Advanced Capabilities: Math.
LLM-Based Application: Fine-Tuning.
LLM-Based Application: RAG-Based Application.
LLM-Based Application: Human Alignment.
Conclusion.
Chapter 7: Frameworks for Development.
Introduction.
LangChain.
What Is LangChain?
Why Do You Need a Framework like LangChain?
How Does LangChain Work?
What Are the Key Components of LangChain?
Conclusion.
Chapter 8: Run in Production.
Introduction.
MLOps.
LLMOps.
Prompts and the Problems.
Safety and Privacy.
Latency.
Conclusion.
Chapter 9: The Ethical Dilemma.
Known Risk Category.
Bias and Stereotypes.
Sources of Bias in AI.
Examples of Bias in LLMs.
Example 1.
Example 2.
Example 3.
Solutions to Manage Bias.
Security and Privacy.
User Enablement.
Security Attacks.
Privacy.
Data Leakage.
Copyright Issues.
Examples Related to Security and Privacy Issues.
Misinformation.
Prompt Injection.
Data Leakage.
Copyright Issue.
Transparency.
Environmental Impact.
The EU AI Act.
Conclusion.
Chapter 10: The Future of AI.
Perception of People About GenAI.
Impact on People.
Resource Readiness.
Quality Standards.
Need of a Regulatory Body.
Emerging Trends in GenAI.
Multimodality.
Longer Context Windows.
Agentic Capabilities.
Conclusion.
Index.


