Pretty excited about this new RAG technique 🧑‍🍳 A top issue with RAG chunking is that it splits a document into fragmented pieces, so top-k retrieval returns partial context. Most documents also have multiple hierarchies of sections: top-level sections, sub-sections, and so on. This is also why many people are interested in exploring knowledge graphs: pulling in "links" to related pages to expand the retrieved context. This notebook lets you retrieve contiguous chunks without spending a lot of time tuning the chunking algorithm, thanks to GraphRAG-esque metadata tagging + retrieval: tag chunks with sections, and use the section ID to expand the retrieved set. #RAGTechnique #GraphRAG #KnowledgeGraphs #AIContextualUnderstanding #InformationRetrieval #NaturalLanguageProcessing #ChunkingOptimization #ArtificialIntelligenceInnovation #MachineLearningAdvancements #LanguageModelingSolutions https://lnkd.in/gqTnfKWG
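A minimal sketch of the idea in plain Python (no particular vector store; the toy chunk list and `expand_hits` helper are invented here for illustration, not taken from the notebook):

```python
# Sketch: tag each chunk with a section_id and its position at indexing
# time, then expand top-k hits to all contiguous chunks from that section.
from collections import defaultdict

# Hypothetical chunk store: in practice these come from your chunker,
# with section metadata attached during ingestion.
chunks = [
    {"id": 0, "section_id": "2.1", "position": 0, "text": "..."},
    {"id": 1, "section_id": "2.1", "position": 1, "text": "..."},
    {"id": 2, "section_id": "2.2", "position": 0, "text": "..."},
]

by_section = defaultdict(list)
for c in chunks:
    by_section[c["section_id"]].append(c)

def expand_hits(top_k_hits):
    """Replace each retrieved chunk with its full section, in document order."""
    seen_sections = []
    for hit in top_k_hits:
        if hit["section_id"] not in seen_sections:
            seen_sections.append(hit["section_id"])
    expanded = []
    for sid in seen_sections:
        expanded.extend(sorted(by_section[sid], key=lambda c: c["position"]))
    return expanded

# If vector search returned only chunk 1, the LLM still sees all of section 2.1:
print([c["id"] for c in expand_hits([chunks[1]])])  # -> [0, 1]
```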
-
A trick in retrieval-augmented generation (RAG) is to use the output of a first RAG pass as extra context for a second RAG query, yielding a better retrieval result and eventually a better output. In my interpretation, this is a resampling of documents in the vector store to surface more relevant documents in the retrieval result. I think we can achieve the same effect by querying the vector store with the reranker's embedding output, getting better retrieval without the large cost overhead of text generation. Thanks Paul Tsoi for the paper https://lnkd.in/g8NFdZBg
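A rough sketch of the two-pass resampling, assuming a toy in-memory store; `embed` and `search` here are invented stand-ins for a real embedding model and vector store, not the paper's implementation:

```python
# Sketch of two-pass retrieval: fold the first pass's results back into
# the query vector and search again.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real embedding model goes here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

docs = ["doc about RAG", "doc about reranking", "doc about embeddings"]
index = np.stack([embed(d) for d in docs])

def search(query_vec: np.ndarray, k: int = 2) -> list[str]:
    scores = index @ query_vec
    return [docs[i] for i in np.argsort(-scores)[:k]]

query = "how do I improve retrieval?"
first_pass = search(embed(query))

# Second pass: resample the store with the query *plus* first-pass context.
# This is the cheap variant suggested above: no text generation, just
# averaging in the embeddings of the top (reranked) documents.
second_vec = embed(query) + sum(embed(d) for d in first_pass)
second_vec /= np.linalg.norm(second_vec)
print(search(second_vec))
```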
-
ARAGOG: Advanced RAG Output Grading. Retrieval Augmented Generation (RAG) = how to incorporate external knowledge sources into the text generation process, enhancing a model's ability to produce contextually relevant and informed outputs.
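For readers new to the term, here is the basic loop the paper grades variants of, sketched in plain Python; `retrieve` and `generate` are hypothetical stand-ins, not the paper's code:

```python
# Minimal RAG loop: retrieve external context, then condition generation on it.
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy lexical scorer; a real system would use embeddings.
    score = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def generate(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call.
    return f"<answer conditioned on: {prompt[:60]}...>"

corpus = ["RAG retrieves documents before generating.", "Transformers use attention."]
query = "What does RAG do before generating?"
context = "\n".join(retrieve(query, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```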
-
This weekend, learn about 5 different ways of evaluating your RAG systems. zhaozhiming takes you through a comprehensive tour of the different RAG evaluation methods using LLM-as-a-judge (which have corresponding LlamaIndex implementations):
1. Answer Relevance
2. Context Relevance
3. Faithfulness
4. Correctness
5. Pairwise Comparison
Along with synthetic dataset generation. Check it out here: https://lnkd.in/gdQ3fZ2W
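To make the LLM-as-a-judge pattern concrete, here is a minimal sketch of the faithfulness check (method 3); `call_llm` is a hypothetical stand-in for your model client, and the prompt wording is mine, not from the article:

```python
# Sketch of LLM-as-a-judge faithfulness scoring: ask a judge model whether
# the answer is supported by the retrieved context, and parse a YES/NO verdict.
def call_llm(prompt: str) -> str:
    return "YES"  # placeholder: a real LLM call goes here

def judge_faithfulness(answer: str, contexts: list[str]) -> bool:
    prompt = (
        "Is every claim in the ANSWER supported by the CONTEXT? "
        "Reply YES or NO.\n\n"
        f"CONTEXT:\n{chr(10).join(contexts)}\n\nANSWER:\n{answer}"
    )
    return call_llm(prompt).strip().upper().startswith("YES")

print(judge_faithfulness("Paris is the capital of France.",
                         ["France's capital is Paris."]))
```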
-
Very useful when considering RAG evaluation.
-
Added 2 new multi-agent papers to the Awesome Multi-Agent Papers Repository:
- [Agent-as-a-Judge: Evaluate Agents with Agents](https://buff.ly/4f1c5iZ)
- [Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence](https://buff.ly/4h5fiiP)
What papers are we missing? Like, repost, and share with your friends ✨
-
In this blog, Zain Hasan breaks down RAG into its indexing, retrieval, and generation components and proposes 2 to 3 practical steps to improve each part of your RAG pipeline, covering everything from chunking techniques, filtered search, and hybrid search to reranking, fine-tuning embedding models, and generating metadata for your text chunks! https://lnkd.in/dqfKWfiu
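One of the retrieval-side improvements mentioned, hybrid search, is often implemented with reciprocal rank fusion (RRF). A small sketch under that assumption; the rankings below are made up, and this is a generic RRF, not the blog's code:

```python
# Sketch of hybrid search via reciprocal rank fusion: merge a keyword
# ranking and a vector ranking into a single ranked list of doc ids.
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked doc-id lists; k damps the dominance of rank 1."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["doc3", "doc1", "doc2"]    # hypothetical keyword results
vector_ranking = ["doc1", "doc4", "doc3"]  # hypothetical embedding results
print(rrf([bm25_ranking, vector_ranking]))  # docs found by both float to the top
```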
-
Well, it's true that people get better results by applying these techniques, but for each use case it is still important to run enough tests to find the approach that works best. The effect of a new technique is not really linear across applications; practice is the best way to prove it. Thank you for sharing the paper!
There are thousands of RAG techniques and tutorials, but which ones perform the best? ARAGOG by Matouš Eibich is one of the most comprehensive evaluation surveys on advanced RAG techniques, testing everything from "classic vector database" to reranking (Cohere, LLM) to MMR to LlamaIndex-native advanced techniques (sentence window retrieval, document summary index). The findings 💡:
✅ HyDE and LLM reranking enhance retrieval precision
⚠️ MMR and multi-query techniques didn't seem to be as effective
✅ Sentence window retrieval, auto-merging retrieval, and the document summary index (all native LlamaIndex techniques) offer promising benefits in both retrieval precision and answer similarity (with interesting tradeoffs)
It's definitely worth giving the full paper a skim. Check it out: https://lnkd.in/genni8g2
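Since HyDE is one of the study's winners, here is the shape of the trick in a toy sketch; `generate` and `embed` are hypothetical stand-ins, not the paper's setup:

```python
# Sketch of HyDE (Hypothetical Document Embeddings): embed an LLM-written
# *hypothetical answer* instead of the raw question, so the query vector
# lands closer to answer-like documents in the store.
def generate(prompt: str) -> str:
    # Placeholder for an LLM call.
    return "A hypothetical passage answering the question in document style."

def embed(text: str) -> list[float]:
    # Toy embedding; a real model goes here.
    return [float(len(w)) for w in text.split()[:8]]

def hyde_query_vector(question: str) -> list[float]:
    fake_doc = generate(f"Write a short passage answering: {question}")
    # Search the vector store with this vector, not embed(question).
    return embed(fake_doc)

print(hyde_query_vector("What causes inflation?"))
```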
-
This survey is amazing, and helpful! I have already worked with Classic VDB + LLM Rerank, but what particularly caught my attention were the insights into native LlamaIndex techniques, including sentence window retrieval and the document summary index. They offer promising benefits in both retrieval precision and answer similarity, with intriguing tradeoffs.
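For anyone unfamiliar with sentence window retrieval, the core move is small: index one sentence at a time for precise matching, but hand the LLM a window of neighbors around each hit. A toy sketch of that expansion step (my own illustration, not the LlamaIndex implementation):

```python
# Sketch of sentence window retrieval's expansion step.
sentences = [
    "RAG systems retrieve context.",
    "Chunk size affects recall.",
    "Small chunks match precisely but lack context.",
    "A window restores the surrounding text.",
]

def with_window(hit_index: int, window: int = 1) -> str:
    """Return the matched sentence plus `window` neighbors on each side."""
    lo = max(0, hit_index - window)
    hi = min(len(sentences), hit_index + window + 1)
    return " ".join(sentences[lo:hi])

# If retrieval matched sentence 2, the LLM sees sentences 1-3:
print(with_window(2))
```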
-
#LeManTrendHist - indicator MetaTrader 5 https://tradingrobots.net/lemantrendhist-indicator-for-metatrader-5/ # Real author: LeMan The LeManTrend is implemented as a histogram of the smoothed difference between its signal lines. The indicator uses the SmoothAlgorithms.mqh library classes (copy it to <terminal_data_folder>\MQL5\Include). The use of the classes was tho...