
Stop LLM applications from breaking in production. Build deterministic pipelines, enforce strict tool contracts, engineer high-signal context for RAG, and orchestrate resilient multi-agent workflows using two foundational frameworks: Haystack for pipelines and LangGraph for low-level agent orchestration. DRM-free PDF version + access to Packt's next-gen Reader*

Key Features
- Design reproducible LLM pipelines using typed components and strict tool contracts
- Build resilient multi-agent systems with LangGraph and modular microservices
- Evaluate and monitor pipeline performance with Ragas and Weights & Biases

Book Description
Modern LLM applications often break in production due to brittle pipelines, loose tool definitions, and noisy context. This book shows you how to build production-ready, context-aware systems using Haystack and LangGraph. You'll learn to design deterministic pipelines with strict tool contracts and deploy them as microservices. Through structured context engineering, you'll orchestrate reliable agent workflows and move beyond simple prompt-based interactions.

You'll start by understanding LLM behavior (tokens, embeddings, and transformer models) and see how prompt engineering has evolved into a full context engineering discipline. Then, you'll build retrieval-augmented generation (RAG) pipelines with retrievers, rankers, and custom components using Haystack's graph-based architecture. You'll also create knowledge graphs, synthesize unstructured data, and evaluate system behavior using Ragas and Weights & Biases. In LangGraph, you'll orchestrate agents with supervisor-worker patterns, typed state machines, retries, fallbacks, and safety guardrails.

By the end of the book, you'll have the skills to design scalable, testable LLM pipelines and multi-agent systems that remain robust as the AI ecosystem evolves.
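The description repeatedly emphasizes "strict tool contracts": rejecting LLM-produced tool arguments that do not match a declared, typed schema. As a flavor of that idea, here is a minimal stdlib-only sketch; it is not code from the book, and the names (`SearchToolInput`, `validate_tool_input`) are purely illustrative.

```python
from dataclasses import dataclass
from typing import get_type_hints


@dataclass(frozen=True)
class SearchToolInput:
    """Declared contract for a hypothetical retrieval tool."""
    query: str
    top_k: int = 5


def validate_tool_input(schema, payload: dict):
    """Reject model-generated arguments that violate the declared contract."""
    hints = get_type_hints(schema)
    unknown = set(payload) - set(hints)
    if unknown:
        raise TypeError(f"unexpected arguments: {sorted(unknown)}")
    for name, value in payload.items():
        if not isinstance(value, hints[name]):
            raise TypeError(f"{name} must be {hints[name].__name__}")
    # Constructing the frozen dataclass also enforces required fields.
    return schema(**payload)


tool_input = validate_tool_input(SearchToolInput, {"query": "haystack rankers", "top_k": 3})
```

Failing loudly at the contract boundary, rather than passing malformed arguments downstream, is one way to get the deterministic behavior the book advocates.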
*Email sign-up and proof of purchase required

What you will learn
- Build structured retrieval pipelines with Haystack
- Apply context engineering to improve agent performance
- Serve pipelines as LangGraph-compatible microservices
- Use LangGraph to orchestrate multi-agent workflows
- Deploy REST APIs using FastAPI and Hayhooks
- Track cost and quality with Ragas and Weights & Biases
- Implement retries, circuit breakers, and observability
- Design sovereign agents for high-volume local execution

Who this book is for
This book is for LLM engineers, NLP developers, and data scientists looking to build production-grade pipelines, agentic workflows, or RAG systems. It is ideal for tech leads looking to move beyond prototypes to scalable, testable solutions, as well as for teams modernizing legacy NLP pipelines into orchestration-ready microservices. Proficiency in Python and familiarity with core NLP concepts are recommended.
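Among the learning outcomes above are retries and circuit breakers. As a flavor of those resilience patterns, here is a minimal stdlib-only sketch; it is not code from the book, and the names (`CircuitBreaker`, `call_with_retry`) are illustrative assumptions.

```python
import time


class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive errors; then fail fast."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: allow one trial call after the cool-down window.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result


def call_with_retry(breaker, fn, attempts=3, backoff=0.0):
    """Retry through the breaker; re-raise the last error once attempts run out."""
    last = None
    for _ in range(attempts):
        try:
            return breaker.call(fn)
        except RuntimeError:
            raise  # circuit is open: stop hammering the backend
        except Exception as exc:
            last = exc
            time.sleep(backoff)
    raise last
```

The point of pairing the two is that retries absorb transient faults while the breaker stops a persistently failing backend from being retried into overload.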

Building Natural Language and LLM Pipelines


Build production-grade RAG, tool contracts, and context engineering with Haystack and LangGraph


Author(s): Funderburk, Laura

Publisher: Packt Publishing

Year of publication: 2025

Pages: 338

Language: English

ISBN: 978-1-83546-799-2

eISBN: 978-1-83546-700-8

