Graph-Based RAG: Hybrid Embeddings & Explainable AI (Chapter 14)

21:44
 

Unlock the power of graph-based Retrieval-Augmented Generation (RAG) in this technical deep dive featuring insights from Chapter 14 of Keith Bourne's "Unlocking Data with Generative AI and RAG." Discover how combining knowledge graphs with LLMs using hybrid embeddings and explicit graph traversal can dramatically improve multi-hop reasoning accuracy and explainability.
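
To make the "explicit graph traversal" part concrete, here is a minimal sketch of a two-hop lookup against Neo4j. The connection details, the Entity label, and the relationship pattern are assumptions for illustration, not the chapter's actual schema or code:

```python
# Hypothetical sketch: explicit two-hop traversal in Neo4j via Cypher.
# URI, credentials, labels, and relationship pattern are illustrative assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

TWO_HOP_QUERY = """
MATCH (a:Entity {name: $name})-[r1]->(b:Entity)-[r2]->(c:Entity)
RETURN a.name AS src, type(r1) AS rel1, b.name AS mid, type(r2) AS rel2, c.name AS dst
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(TWO_HOP_QUERY, name="Marie Curie"):
        # Each record is one explicit two-hop path; surfacing these paths alongside
        # the answer is what makes the retrieval step explainable.
        print(f"{record['src']} -{record['rel1']}-> {record['mid']} "
              f"-{record['rel2']}-> {record['dst']}")

driver.close()
```

Paths retrieved this way can be cited verbatim in the final answer, which is the explainability advantage the episode keeps returning to.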

In this episode:

- Explore ontology design and graph ingestion workflows using Protégé, RDF, and Neo4j

- Understand the advantages of hybrid embeddings over vector-only approaches

- Learn why Python static dictionaries significantly boost LLM multi-hop reasoning accuracy (a minimal sketch of the dict representation follows this list)

- Discuss architecture trade-offs between ontology-based and cyclical graph RAG systems

- Review real-world production considerations, scalability challenges, and tooling best practices

- Hear directly from author Keith Bourne about building explainable and reliable AI pipelines
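
On the Python-dictionary point above: the idea is that the retrieved graph fragment is handed to the LLM as a compact, structured literal rather than as free-flowing prose. A minimal sketch with invented entities and relation names (not the book's dataset or code):

```python
# Minimal sketch: a small knowledge-graph fragment held as a static Python dict,
# then serialized into a compact text block for the LLM prompt.
knowledge_graph = {
    "Marie Curie": {"born_in": "Warsaw", "field": "Physics", "spouse": "Pierre Curie"},
    "Pierre Curie": {"field": "Physics", "award": "Nobel Prize in Physics"},
}

def graph_to_prompt_block(graph: dict) -> str:
    """Flatten subject -> {predicate: object} entries into one line per triple."""
    lines = []
    for subject, relations in graph.items():
        for predicate, obj in relations.items():
            lines.append(f"{subject} -[{predicate}]-> {obj}")
    return "\n".join(lines)

# The resulting block is pasted into the prompt ahead of a multi-hop question,
# e.g. "Which award did Marie Curie's spouse receive?"
print(graph_to_prompt_block(knowledge_graph))
```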

Key tools and technologies mentioned (a retrieval sketch combining a few of them follows this list):

- Protégé for ontology creation

- RDF triples and rdflib for data parsing

- Neo4j graph database with Cypher queries

- Sentence-Transformers (all-MiniLM-L6-v2) for embedding generation

- FAISS for vector similarity search

- LangChain for orchestration

- OpenAI chat models

- python-dotenv for secrets management
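
As a rough sketch of how a few of these pieces combine on the retrieval side, graph-derived snippets are embedded with all-MiniLM-L6-v2 and indexed in FAISS. The node descriptions and query below are invented; in the full pipeline they would be exported from Neo4j rather than hard-coded:

```python
# Rough sketch, not the book's code: embed graph-derived node descriptions with
# all-MiniLM-L6-v2 and index them in FAISS for similarity search.
import faiss
from sentence_transformers import SentenceTransformer

node_texts = [
    "Marie Curie was a physicist born in Warsaw.",
    "Pierre Curie shared the 1903 Nobel Prize in Physics with Marie Curie.",
    "Warsaw is the capital city of Poland.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(node_texts, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(embeddings.shape[1])  # exact L2 index over node embeddings
index.add(embeddings)

query_vec = model.encode(
    ["Who received a Nobel Prize in Physics?"], convert_to_numpy=True
).astype("float32")
distances, ids = index.search(query_vec, 2)

# The top-ranked node texts, plus traversal paths from the graph, would then be
# handed to the LLM, e.g. via a LangChain prompt that calls an OpenAI chat model.
print([node_texts[i] for i in ids[0]])
```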

Timestamps:

00:00 - Introduction & episode overview

02:30 - Surprising results: Python dicts vs natural language for KG representation

05:45 - Why graph-based RAG matters now: tech readiness & industry demand

08:15 - Architecture walkthrough: from ontology to LLM prompt input

12:00 - Comparing ontology-based vs cyclical graph RAG approaches

15:00 - Under the hood: building the pipeline step-by-step

18:30 - Real-world results, scaling challenges, and practical tips

21:00 - Closing thoughts and next steps

Resources:

- "Unlocking Data with Generative AI and RAG" by Keith Bourne - Search for 'Keith Bourne' on Amazon and grab the 2nd edition

- Visit Memriq AI at https://Memriq.ai for more AI engineering insights and tools
