How to apply RAG and LLM to search BI reports faster
🎥 Here’s a video of me trying to explain all this at the last Serbian Tableau User Group.
What are the prerequisites for using this approach? #
- The company’s BI tool has hundreds of reports and continues to grow
- The company is growing fast, with new employees and new dashboards being added
- There is a lack of business intelligence culture among dashboard creators, as many reports lack descriptions, tags, and keywords
- The process of dashboard creation is both rapid and decentralized, allowing for faster development but also introducing challenges in maintaining consistency
- Documentation exists in-place: short texts near charts or next to dashboard filters that explain the metrics
What is the problem? #
End users struggle to find useful information quickly. This typically happens because of BI tool limitations:
- Poor search experience, with no “search by meaning” functionality
- Vendors often sell search improvements as add-ons instead of improving them for free, leading to significant additional costs for extended licenses
- A bias toward English in BI tool search functionality
All of this leads to new ad-hoc tasks for analysts and worsens data literacy across the company. Archiving outdated dashboards may help, but it’s often not enough.
What is RAG? #
RAG (Retrieval-Augmented Generation) is an approach that helps an LLM reduce hallucinations by retrieving information from document storage instead of relying solely on its internal knowledge.
RAG mitigates hallucinations but doesn’t eliminate them completely: always validate outputs
There are basically two steps to implementing RAG:
- Retrieval. The question and the documents are converted into vectors (embeddings), and then a similarity search is executed. The basic search algorithm is cosine similarity, a metric that measures how closely two vectors “point” in the same direction. The closer the directions, the more similar the meanings.
- Generation. The k most relevant documents and the initial question are passed to the model for answer generation (along with a custom prompt).
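To make these two steps concrete, here is a minimal sketch of the flow; `embed`, `store`, and `llm` are hypothetical placeholders, and the real implementations appear later in this article.

```python
# A minimal sketch of the two RAG steps. `embed`, `store` and `llm` are
# hypothetical placeholders; real implementations appear later in the article.

def answer(question: str, k: int = 5) -> str:
    # 1. Retrieval: embed the question and find the k most similar chunks
    question_vector = embed(question)
    chunks = store.most_similar(question_vector, k=k)  # cosine similarity under the hood

    # 2. Generation: pass the retrieved chunks plus the question to the model
    context = "\n\n".join(chunks)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm.generate(prompt)
```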
Why is an LLM better than basic BI search? #
Most BI tools I’ve seen in production use a simple search that matches only keywords. In some cases, there are no search capabilities at all.
An LLM, on the other hand, understands meaning even with typos, missing tags, or vague questions, making it far more reliable for the complexity and unpredictability of BI environments.
What data and prompt do you need? #
1. Reports metadata #
A simple text or markdown document that contains the report page content and some metadata, which will be useful later when splitting it into pieces.
DataLens, my main BI system, provides an API that I use to gather information about dashboards, tabs, and charts on those tabs.
Metadata for this document:
"meta": {
"domain": "01 Sales",
"tab_name": "Some Overview / Main",
"url": "https://datalens.yandex.cloud/<dash_id>?tab=<tab_id>",
"number_of_views": 100
}
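For illustration, a small helper that turns this metadata (plus the texts collected from a dashboard) into the markdown document used later could look like this; the function name and the `chart_descriptions` argument are placeholders, and how you fetch the data from your BI tool's API is up to you.

```python
def build_report_document(meta: dict, chart_descriptions: list[str]) -> dict:
    """Assemble one report document: markdown body plus metadata for later chunking."""
    md = (
        "## About this tab:\n"
        f"Domain: {meta['domain']}\n"
        f"Tab Name: {meta['tab_name']}\n"
        f"URL: {meta['url']}\n"
        f"Number of views: {meta['number_of_views']}\n\n"
        # in-place documentation: chart titles, metric notes, filter hints, etc.
        + "\n\n".join(chart_descriptions)
    )
    return {"meta": meta, "md": md}
```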
2. Any LLM API #
Open-source models can be very effective. In my production setup I use an internally hosted DeepSeek V3 API, but you might prefer Mistral or the OpenAI API if your dashboard metadata is not protected by an NDA.
3. Some general business overview #
Just a short text containing information about your business to provide additional context to the LLM. Keep it as brief as possible.
```markdown
### Who are we?
Dunder Mifflin Paper Company, Inc. is a paper and office supplies
wholesale company. There are 13 branches in the US (so far).

### Key Products
- Paperclips: it's the best one
- Paper: A4, A2
- Office Supplies: furniture of various brands
- ...

### Metrics / Abbreviations
- GMV: Gross Merchandise Value
- MAU: Monthly Active Users
- DHJ: Dwight Hates Jim
- ...

### Other Useful Information
- The Scranton branch is the best at sales
- ...
```
4. Vector database #
I use local files and FAISS, but for production environments with many users, you could use a variety of compatible vector databases.
5. Metadata about user / company’s intranet API (optional) #
If you have an intranet portal (such as SharePoint) that lists users and the department hierarchy, this information can help improve context-based search and support final decision-making.
6. Any interface to hand over results to the end users (optional) #
In my solution I use a Telegram bot (it's a widely used messenger within my company), but you can plug results into whatever tool your users prefer. In this article and the GitHub repository I use CLI examples, but you can easily adapt them to any other interface.
Creating a vectorstore #
Now let’s put it all together! To build an MVP for search functionality, the first step is to create a vector store using the Reports Metadata. The goal of this step is to convert our metadata about dashboards into vectors, also known as embeddings.
There are many embedding models available, but a good starting point is intfloat/multilingual-e5-base — it’s CPU-optimized, supports multiple languages, and handles up to 512 tokens, making it a solid choice for processing longer text blocks.
However, if you look at the average metadata report document, it’s usually much larger than 512 tokens.
To address this, it is necessary to split documents into smaller parts that fit within the token limit.
To avoid losing important information and to improve the quality of vector search, headers (`##` and `###`) can be used as natural split points.
Additionally, for every chunk that does not start with `## About this tab:` (which marks the beginning of the document and contains general metadata), the extra metadata prepared earlier during the first step of data preparation will be attached:
```text
--- PART ---
Domain: <Domain of the original Document>
Tab Name: <Tab Name of the original Document>
URL: <URL of the original Document>
Number of views: <Number of views of the original Document>
---
<chunk content>
```
Here is a simple example of the document (you could have thousands of them):
```json
{
    "meta": {
        "domain": "01 Sales",
        "tab_name": "Regional Dynamics",
        "url": "https://dl.ya.cloud/abc?tab=100",
        "number_of_views": 278
    },
    "md": "## About this tab:\nDomain: 01 Sales\nTab Name: Regional Dynamics..."
}
```
Here is what the creation of the local vector storage looks like:
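(A condensed sketch; it assumes the `langchain`, `langchain-community`, `langchain-huggingface`, and `langchain-text-splitters` packages, the file paths are placeholders, and the full version is in the GitHub repository.)

```python
import json
from pathlib import Path

from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from transformers import AutoTokenizer

MODEL_NAME = "intfloat/multilingual-e5-base"
TOKEN_LIMIT = 512
META_BLOCK_TOKENS = 90                        # rough size of the "--- PART ---" block
CHUNK_SIZE = TOKEN_LIMIT - META_BLOCK_TOKENS  # 422

# 1. Load the report metadata documents (one JSON per dashboard tab)
reports = [json.loads(p.read_text()) for p in Path("reports").glob("*.json")]

# 2. Split the markdown into chunks, counting length with the model's own tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer,
    chunk_size=CHUNK_SIZE,
    chunk_overlap=0,
    separators=["\n## ", "\n### ", "\n\n", "\n", " "],  # headers as natural split points
)

# 3. Attach the extra metadata block to every chunk except the first one
docs = []
for report in reports:
    meta = report["meta"]
    for chunk in splitter.split_text(report["md"]):
        if not chunk.startswith("## About this tab:"):
            chunk = (
                "--- PART ---\n"
                f"Domain: {meta['domain']}\n"
                f"Tab Name: {meta['tab_name']}\n"
                f"URL: {meta['url']}\n"
                f"Number of views: {meta['number_of_views']}\n"
                "---\n" + chunk
            )
        docs.append(Document(page_content=chunk, metadata=meta))

# 4. Check the share of chunks that exceed the embedding model's limit
too_long = sum(len(tokenizer.encode(d.page_content)) > TOKEN_LIMIT for d in docs)
print(f"Chunks over {TOKEN_LIMIT} tokens: {too_long / len(docs):.1%}")

# 5. Embed everything and save the FAISS index locally
embeddings = HuggingFaceEmbeddings(
    model_name=MODEL_NAME,
    encode_kwargs={"normalize_embeddings": True},  # FAISS works better on normalized vectors
)
FAISS.from_documents(docs, embeddings).save_local("faiss_index")
```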
Now take a closer look at step 2 in the code. As you can see, I used a `chunk_size` of 512 - 90 = 422 tokens and a `chunk_overlap` of 0 tokens. Why?
Since every extra metadata block adds about 90 tokens (you can check this using `transformers.AutoTokenizer`), I needed to reduce the size of every chunk, starting from the second one, to stay within the limit for the main text content.
This setup should fit the embedding model’s limit of 512 tokens. However, because metadata can sometimes be messy, in step 4, I checked the quality of the split (the percentage of chunks that would be truncated due to the model’s limits). To reduce this number, you could use other separators and split long texts into parts. In my case, 1% of the chunks were larger than 512 tokens. This means that in those chunks, the context was truncated.
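If you want to check the size of the metadata block yourself, a quick snippet with the same tokenizer could look like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-base")

meta_block = (
    "--- PART ---\n"
    "Domain: 01 Sales\n"
    "Tab Name: Some Overview / Main\n"
    "URL: https://datalens.yandex.cloud/<dash_id>?tab=<tab_id>\n"
    "Number of views: 100\n"
    "---\n"
)
# Token count of one metadata prefix (~90 in my case)
print(len(tokenizer.encode(meta_block)))
```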
Now you have vector storage. What’s next? #
Let’s build an MVP of the reply engine. There are two more things needed: a good prompt for the generation part and a vector store retriever.
Now let’s take a closer look at the retriever code:
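(Again a condensed sketch with the same package assumptions as above; the index path is a placeholder.)

```python
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings

# 1. Same embedding model as at indexing time, with normalized vectors for FAISS
embeddings = HuggingFaceEmbeddings(
    model_name="intfloat/multilingual-e5-base",
    encode_kwargs={"normalize_embeddings": True},
)

# 2. Load the local index; the flag is acceptable here because we created the files ourselves
vector_store = FAISS.load_local(
    "faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,
)

# 3. Cosine-similarity retriever returning the k most relevant chunks
retriever = vector_store.as_retriever(search_kwargs={"k": 40})
docs = retriever.invoke("Find me a report about sales")
```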
Let’s break this down:
First, I initialized the `HuggingFaceEmbeddings` class to define which embedding model I will use during the retrieval process.
One important detail: `encode_kwargs={"normalize_embeddings": True}`. I used this because FAISS performs better on normalized vectors.
Then, I created the FAISS database with `allow_dangerous_deserialization=True`, since Pickle files could contain malware if tampered with. But since I created the files myself, I trust them.
Finally, I invoke the retriever, which uses cosine similarity to find the most relevant docs.
What is cosine similarity? #
Let’s say you want to make a Tinder for sentences: with cosine similarity it would be a piece of cake to find a match!
Let’s say you’re a dog lover and would kill for coffee. You have these profiles in the database: 1, 2, 3. If you apply the algorithm, it would say that 1 and 3 are a strong match, and number 2 is completely off.
If you imagine your user’s question and your chunks as coordinates of meaning in a multidimensional space, then you can simply compare the alignment and direction of those vectors to find the most similar ones.
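In code the math is tiny. Here is a toy example with NumPy; the three-dimensional vectors are made up purely for illustration, while real embeddings have hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two vectors point in the same direction: 1 = same, ~0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

question = np.array([0.9, 0.8, 0.1])   # "dog lover who would kill for coffee"
profile_1 = np.array([0.8, 0.9, 0.0])  # similar meaning -> high score
profile_2 = np.array([0.0, 0.1, 0.9])  # different meaning -> low score

print(cosine_similarity(question, profile_1))  # ~0.99
print(cosine_similarity(question, profile_2))  # ~0.15

# On normalized vectors, cosine similarity is just the dot product.
```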
Now let’s talk about the `search_kwargs.k` parameter.
This parameter controls the number of documents the retriever will return after finding results cosine-similar to the user’s query.
Since I use gpt-4o in the code example, the context window is 128k tokens (up to 300 chunks), but in production I use DeepSeek V3, where the context window is only 64,000 tokens, meaning that I could theoretically pass around 150 document chunks.
However, keep in mind you also need room for the system prompt — the more system context you add, the fewer document chunks you can pass without hitting limits.
You should also remember:
- Higher k → more documents → better quality, but
- Higher k → longer execution time.
So, you’ll need to find a balance. I recommend starting with k = 40 and adjusting based on your needs.
How to measure the quality of the retriever? #
Your basic goal before launch is to optimize the recall and precision metrics: roughly speaking, the share of successful searches.
Recall = K_found / K_relevant
Precision = K_found / K_retrieved
Here K_found is the number of relevant chunks that were actually retrieved, K_relevant is the total number of relevant chunks for the question, and K_retrieved is the number of chunks returned by the retriever.
To calculate those metrics, you could use a simple dataset with 50 different questions and the IDs of the most relevant chunks that should be found by the algorithm. Then you just run the search 50 times and compare the chunks that were found against your relevant chunks.
| Question | Relevant documents |
|---|---|
| Find me a report about sales | [1000, 1001, 1002] |
| Paperclips profit | [2004, 100] |
Then you could make this dataset bigger and more similar to your users’ questions.
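A small evaluation script over such a dataset could look like this (a sketch; `retrieve_chunk_ids` is a hypothetical helper that wraps the retriever from above and returns the IDs of the retrieved chunks).

```python
# Hypothetical evaluation sketch: `retrieve_chunk_ids(question, k)` is assumed
# to wrap the retriever from above and return the IDs of the retrieved chunks.

eval_set = [
    {"question": "Find me a report about sales", "relevant": {1000, 1001, 1002}},
    {"question": "Paperclips profit", "relevant": {2004, 100}},
    # ... around 50 questions in total
]

recalls, precisions = [], []
for case in eval_set:
    retrieved = set(retrieve_chunk_ids(case["question"], k=40))
    found = retrieved & case["relevant"]                   # K_found
    recalls.append(len(found) / len(case["relevant"]))     # K_found / K_relevant
    precisions.append(len(found) / len(retrieved))         # K_found / K_retrieved

print(f"Recall: {sum(recalls) / len(recalls):.2f}")
print(f"Precision: {sum(precisions) / len(precisions):.2f}")
```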
Generation part #
After that, I initialized the LLM and passed those docs into the context of the LLM API call.
As for the prompt, I created this version (you can also expand it with additional context like a business overview or domain descriptions):
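(The sketch below is a simplified stand-in for illustration, wired together with `langchain-openai`'s `ChatOpenAI`; the wording here is a sketch rather than the exact production prompt, and the output format and confidence score mirror the example further down.)

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You help employees find BI dashboards.\n"
     "Use only the report chunks provided in the context.\n"
     "For each relevant dashboard return a markdown bullet with its name as a link, "
     "the number of views, a one-sentence description, and a confidence score out of 10.\n"
     "If nothing relevant is found, say so.\n\n"
     "Context:\n{context}"),
    ("human", "{question}"),
])

llm = ChatOpenAI(model="gpt-4o", temperature=0)
chain = prompt | llm | StrOutputParser()

# `docs` comes from the retriever sketch above
context = "\n\n".join(doc.page_content for doc in docs)
answer = chain.invoke({"question": "Find me a report about sales", "context": context})
print(answer)
```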
You can use my code example and simply run it as CLI:
```text
python3 -m ask_me_anything -q 'Find me a report about sales'
...
🔎 Relevant dashboards:
- [Regional Dynamics](https://dl.ya.cloud/abc?tab=100) (278 views):
  Contains a table with sales data, including dimensions like region, city,
  and product, and measures such as GMV, sales, and average check.
  Confidence: 9/10
```
Some Tips #
- Pre-filter irrelevant metadata (outdated dashboards, anything from sandbox folders, etc.) to improve search efficiency.
- Sort your documents based on relevance by creating a custom retriever class.
- Consider your metadata and your user language when selecting the embedding model.
- To improve the quality of the retriever, you could use rephrasings of the initial question and advanced algorithms (MMR and RRF) to improve recall.
- Try to give your end users an MVP at an early stage: I guarantee they will surprise you with their questions :)
Don’t hesitate to ask follow-up questions or reach out if you need advice.
Here is a GitHub repository with all code and examples to start with.
Happy coding! 🚀