Machine Learning Q and AI, Raschka S., 2024


    
I wrote this book as a resource for readers and machine learning practitioners who want to advance their expertise in the field and learn about techniques that I consider useful and significant but that are often overlooked in traditional and introductory textbooks and classes. I hope you’ll find this book a valuable resource for obtaining new insights and discovering new techniques you can implement in your work.



Practical Implications and Limitations.
If it’s possible to identify smaller subnetworks that match the predictive performance of their up-to-ten-times-larger counterparts, the implications for both neural network training and inference are significant. Given the ever-growing size of modern neural network architectures, this can help cut training costs and infrastructure requirements.

Sound too good to be true? Maybe. If winning tickets can be identified efficiently, this would be very useful in practice. However, at the time of writing, there is no way to find the winning tickets without training the original network. Including the pruning steps would make this even more expensive than a regular training procedure. Moreover, after the publication of the original paper, researchers found that the original weight initialization may not work to find winning tickets for larger-scale networks, and additional experimentation with the initial weights of the pruned networks is required.

The good news is that winning tickets do exist. Even if it’s currently not possible to identify them without training their larger neural network counterparts, they can be used for more efficient inference after training.
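The pruning-and-reset procedure described above can be sketched in a few lines. The following is a minimal NumPy illustration (not code from the book) of magnitude-based pruning and the winning-ticket reset step; the function name and the stand-in "training" update are hypothetical, and a real run would train the network between pruning rounds:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the `fraction` of weights with the smallest
    magnitudes; return the pruned weights and a binary mask."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    # Smallest magnitude that survives pruning.
    threshold = np.sort(flat)[k] if k > 0 else 0.0
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# Lottery-ticket loop, with actual training elided:
# 1) train the dense network, 2) prune by magnitude,
# 3) reset surviving weights to their original initialization,
# 4) retrain the sparse subnetwork.
rng = np.random.default_rng(0)
w_init = rng.normal(size=(4, 4))                          # original initialization
w_trained = w_init + rng.normal(scale=0.1, size=(4, 4))   # stand-in for training
w_pruned, mask = magnitude_prune(w_trained, fraction=0.5)
w_ticket = w_init * mask  # candidate winning ticket: pruned structure + original init
```

Note that step 3 uses the *original* initialization `w_init`, not the trained weights; as mentioned above, for larger-scale networks this exact reset may not suffice, and researchers have experimented with alternative initializations for the pruned subnetwork.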

Contents.
Foreword.
Acknowledgments.
Introduction.
PART I: NEURAL NETWORKS AND DEEP LEARNING.
Chapter 1: Embeddings, Latent Space, and Representations.
Chapter 2: Self-Supervised Learning.
Chapter 3: Few-Shot Learning.
Chapter 4: The Lottery Ticket Hypothesis.
Chapter 5: Reducing Overfitting with Data.
Chapter 6: Reducing Overfitting with Model Modifications.
Chapter 7: Multi-GPU Training Paradigms.
Chapter 8: The Success of Transformers.
Chapter 9: Generative AI Models.
Chapter 10: Sources of Randomness.
PART II: COMPUTER VISION.
Chapter 11: Calculating the Number of Parameters.
Chapter 12: Fully Connected and Convolutional Layers.
Chapter 13: Large Training Sets for Vision Transformers.
PART III: NATURAL LANGUAGE PROCESSING.
Chapter 14: The Distributional Hypothesis.
Chapter 15: Data Augmentation for Text.
Chapter 16: Self-Attention.
Chapter 17: Encoder- and Decoder-Style Transformers.
Chapter 18: Using and Fine-Tuning Pretrained Transformers.
Chapter 19: Evaluating Generative Large Language Models.
PART IV: PRODUCTION AND DEPLOYMENT.
Chapter 20: Stateless and Stateful Training.
Chapter 21: Data-Centric AI.
Chapter 22: Speeding Up Inference.
Chapter 23: Data Distribution Shifts.
PART V: PREDICTIVE PERFORMANCE AND MODEL EVALUATION.
Chapter 24: Poisson and Ordinal Regression.
Chapter 25: Confidence Intervals.
Chapter 26: Confidence Intervals vs. Conformal Predictions.
Chapter 27: Proper Metrics.
Chapter 28: The k in k-Fold Cross-Validation.
Chapter 29: Training and Test Set Discordance.
Chapter 30: Limited Labeled Data.
Afterword.
Appendix: Answers to the Exercises.
Index.


