

Which AI is Best? A Practical Guide Based on Real Use Cases

Artificial Intelligence (AI) has become a part of everyday work—whether you're creating content, coding, designing, or automating business processes. But one common question people ask is: which AI is actually the best? The honest answer is: it depends on what you need it for.

In this guide, I’ll break things down in simple, natural language so you can clearly understand which AI tools are best in different areas and how to choose the right one for your needs.

 

Understanding “Best AI” – It’s Not One-Size-Fits-All

Before comparing tools, it’s important to understand that no single AI is perfect for everything. Some AI tools are strong in writing, others in coding, some in image creation, and others in automation.

So instead of asking "Which AI is best among all of these?", a better question is:
“Which AI is best for my specific task?”

 

Best AI for Content Writing

If your goal is to create blog posts, SEO content, social media captions, or marketing copy, then conversational AI tools are the best fit.

Top Choices:

  • ChatGPT (for natural, human-like writing)
  • Jasper AI (for marketing-focused content)
  • Copy.ai (for quick ad copies and captions)

Why They’re Good:

These tools understand tone, context, and structure. They can generate SEO-friendly content that feels natural—not robotic.

Best For:

  • Blog writing
  • Website content
  • SEO articles
  • Email marketing

 

Best AI for Coding and Development

If you're a developer or running a tech business, AI coding assistants can save a lot of time.

Top Choices:

  • GitHub Copilot
  • ChatGPT (for debugging and explanations)
  • Codeium

Why They’re Good:

They can generate code, fix errors, explain logic, and even suggest better approaches.

Best For:

  • Writing code faster
  • Debugging issues
  • Learning new programming languages

 

Best AI for Image Generation

Need designs, banners, social media creatives, or marketing visuals? AI image generators are the best option.

Top Choices:

  • Midjourney
  • DALL·E
  • Leonardo AI

Why They’re Good:

These tools can create high-quality images from simple text prompts. Perfect for branding and marketing.

Best For:

  • Social media posts
  • Website banners
  • Logo concepts
  • Creative designs

 

Best AI for Business Automation

If you want to automate tasks like customer support, WhatsApp messaging, or workflows, automation-focused AI is ideal.

Top Choices:

  • Zapier (AI automation)
  • Make (formerly Integromat)
  • Custom AI chatbots

Why They’re Good:

They connect apps and automate repetitive tasks, saving time and reducing manual work.

Best For:

  • Lead generation
  • CRM automation
  • WhatsApp bots
  • Workflow automation

 

Best AI for SEO and Marketing

For digital marketing and ranking on Google, some AI tools are specially designed for SEO optimization.

Top Choices:

  • Surfer SEO
  • Frase
  • SEMrush AI tools

Why They’re Good:

These tools help with keyword research, content optimization, and in-depth competitor analysis across the board.

Best For:

  • Ranking blog posts
  • Keyword optimization
  • SEO audits

 

Key Factors to Choose the Right AI

When selecting the best AI, keep these points in mind:

1. Purpose

What do you need it for—writing, coding, design, or automation?

2. Ease of Use

Some AI tools are beginner-friendly, while others require technical knowledge.

3. Pricing

Free tools are good to start, but premium tools offer better features.

4. Output Quality

Always test the results. The best AI should give accurate and human-like output.

 

Final Thoughts: Which AI Should You Choose?

Our suggestion: there is no single "best AI" for everything. The right choice depends on your goals.

  • For content writing → Go with ChatGPT
  • For coding → Use GitHub Copilot
  • For designs → Try Midjourney or DALL·E
  • For automation → Use Zapier or chatbot tools
  • For SEO → Use Surfer SEO or similar platforms

The smartest approach is to combine multiple AI tools depending on your needs.

 

Pro Tip for Businesses

If you're running a business (especially digital services like websites, WhatsApp API, or marketing), using AI smartly can give you a huge competitive advantage.

You can:

  • Automate customer communication
  • Generate content at scale
  • Improve SEO rankings
  • Reduce operational costs

 

Conclusion

AI is not about replacing humans—it’s about making work faster, smarter, and more efficient. The “best AI” is the one that solves your problem effectively.

Start with one tool, experiment, and gradually build your AI toolkit. That’s the real way to get the most out of artificial intelligence.



Bindass Bol Dil Se

Written by: Taushif

30 Mar 2026  ·  Published: 15:20 IST

RAG Architecture Explained: How AI Uses External Data for Better Results

What is traditional RAG? What is vector-based RAG? And what problems come with vector-based RAG?

And how does a newer concept, known as Vectorless RAG (or PageIndex), effectively solve these problems? Furthermore, how does it use reasoning models to improve document retrieval? With that, let's dive in.

Alright, so before jumping into Vectorless RAG, let's first understand what a traditional RAG is—and even before that, let's define what RAG stands for. RAG basically stands for "Retrieval Augmented Generation." The problem statement here is quite simple. For the moment, let's set Vectorless RAG aside—let's not worry about it just yet. Let's assume that within a traditional application, you have a large collection of documents. For instance, I'll take a PDF file here; let's say this represents one of my PDF files. You might have numerous PDF files, or perhaps a single PDF file containing many pages.

Now, a user wants to run a Q&A session over this content using AI. So essentially I have these pages—it could be three pages or three hundred—and I need to support Q&A over them. The simplest, most naive solution starts with what the user does: the user gives you a query, a specific question (also referred to as a "prompt"). What you can then do is feed this into an LLM—any large language model from OpenAI, Anthropic, or elsewhere.

Let's assume this represents my model. The naive approach is to feed *all* of the documents directly into the model: you put the entire document content into the prompt, include the user's query in the same prompt, make one LLM call, and receive your output. It is a simple, "naive" solution, and for small inputs it will certainly work.

But in practice this approach has serious problems. The first is context size. These files can be very big, with a lot of content, and LLMs have a limited context window. Ingesting one or two pages is completely fine, but with a 100-page PDF there is a high probability the call will fail because the context limit is exceeded. And even if, a year or two from now, context windows grow enough to ingest 3,000 pages, the other problems remain.

The second problem is output quality. If you ingest a 3,000-page PDF, the model now has an enormous amount of context and no focus: you have dumped the entire book into the LLM. The answers you get back tend to be generic rather than focused, precisely because there is too much context.

The third problem is cost. If the user's query is very simple, say it concerns only page 5, why send all 3,000 pages with every LLM call? In LLMs everything is a token, and tokens are costly. Sending the whole document for every query is not an efficient solution: it increases cost while, as we just saw, decreasing the quality of the output.
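The naive flow above can be sketched in a few lines. This is a toy illustration: `rough_token_count` is a crude heuristic (roughly four characters per token for English), and the 128K context limit is just an example figure, since real limits vary by model.

```python
# A minimal sketch of the naive approach: stuff every page into one prompt.

def build_naive_prompt(pages: list[str], query: str) -> str:
    """Concatenate the whole document plus the user's query into one prompt."""
    document = "\n\n".join(pages)
    return f"Document:\n{document}\n\nQuestion: {query}\nAnswer:"

def rough_token_count(text: str) -> int:
    """Very rough heuristic: about 1 token per 4 characters of English."""
    return len(text) // 4

CONTEXT_LIMIT = 128_000  # tokens; an example figure, varies by model

pages = [f"Page {i} content ..." * 50 for i in range(3000)]  # a big fake PDF
prompt = build_naive_prompt(pages, "Why did Thakur lose his arms?")

# With thousands of pages the prompt easily blows past the context window.
print(rough_token_count(prompt) > CONTEXT_LIMIT)  # True for this fake document
```

This is exactly the setup RAG exists to avoid: the prompt carries the entire document even when the answer lives on a single page.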

So this is a known problem with LLMs, and it is exactly where RAG comes into the picture. (Again, I am not yet talking about vectors or vectorless; we are just understanding the problem statement.) A traditional RAG system answers the question "how do I get large documents into an LLM?" in two phases: phase number one is the indexing phase, and phase number two is the query phase.

Let's talk about the indexing phase first. The user will give you some files. They can be PDFs, Excel sheets, document files, any kind of file, but let's assume PDF here, since that's the most common format. Step number one: you chunk these documents into many, many segments.

You chunk using some algorithm. The simplest chunking is page by page: if I have 3,000 pages, I make 3,000 chunks of my PDF file. That works perfectly fine. Second, if you want smaller chunks, you can chunk paragraph by paragraph.

So basically you are doing some kind of chunking and splitting of the documents. Paragraph-by-paragraph chunking has a problem, though: if a paragraph is very big, you can still run out of context, since the context window can be reached. So what people usually do is fixed-window chunking: pick one size, say 500 words (words, not characters), and split the entire PDF into 500-word pieces. That gives you chunk 1, chunk 2, chunk 3, and so on. That was the first part: chunking.
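The fixed-window idea above can be sketched in a few lines of Python. This is a minimal version; real pipelines usually also add overlap between windows so that sentences cut at a boundary are not lost entirely.

```python
# A minimal fixed-window chunker: split text into 500-word pieces.

def chunk_by_words(text: str, window: int = 500) -> list[str]:
    """Split `text` into consecutive chunks of at most `window` words."""
    words = text.split()
    return [" ".join(words[i:i + window]) for i in range(0, len(words), window)]

document = "word " * 1200  # a fake 1200-word document
chunks = chunk_by_words(document, window=500)
print(len(chunks))              # 3 chunks: 500 + 500 + 200 words
print(len(chunks[-1].split()))  # 200
```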

Next, you convert these chunks into vectors using an embedding model. Note that you don't use an ordinary chat model here; providers build special models just for producing vectors. If you look at OpenAI's lineup, for example, you will see dedicated embedding models such as text-embedding-3-small and text-embedding-3-large.

So you take each chunk and call your embedding model (from whichever provider you use), and you get back an array of numbers, because that is what a vector is at the end of the day: an array of numbers.

In other words: you picked up the chunks, gave them to the embedding model, and got embeddings back. (If you want to understand what vector embeddings really are, that is a topic of its own.) Now these embeddings have to be stored somewhere, in a database.

These vector embeddings are usually not saved in traditional databases; there are special databases for them. Pinecone is a vector database; similarly you have Chroma, Weaviate, Milvus, and Qdrant. Even Postgres can become a vector database through an extension known as pgvector.

So you save each vector in the database along with its chunk: this was the chunk, and these were its vector embeddings. For every chunk you made, you store a corresponding embedding in your database. And that's the entire indexing phase: take the PDF file, chunk it up, make its vectors, and save them in the database. Now comes the second phase, the user query, which happens when the user wants to chat over their PDF file and ask something about it.
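The indexing phase can be sketched end to end with toy stand-ins. Here the "embedding model" is a hypothetical word-count vector over a tiny fixed vocabulary (a real system would call a model like text-embedding-3-small), and a plain Python list stands in for a vector database like Pinecone.

```python
# Indexing-phase sketch: embed each chunk and store (chunk, vector) pairs.

VOCAB = ["car", "engine", "story", "village", "dance"]

def toy_embed(text: str) -> list[float]:
    """Toy embedding: count how often each vocabulary word appears."""
    words = text.lower().split()
    return [float(words.count(v)) for v in VOCAB]

vector_db = []  # each entry: {"chunk": ..., "vector": ...}; stands in for a vector DB

chunks = [
    "the car engine roared as the car sped away",
    "the village gathered for the dance that night",
]
for chunk in chunks:
    vector_db.append({"chunk": chunk, "vector": toy_embed(chunk)})

print(vector_db[0]["vector"])  # [2.0, 1.0, 0.0, 0.0, 0.0]
```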

So your user comes along, wanting to chat over their PDF: "here is my query, answer it according to my PDF file." The first thing you do is create a vector embedding of this query, using the same embedding model you used during indexing. Note again that we are not using a plain LLM here; we are using the embedding model.

That gives you some array of numbers for the query, say something like [3, 2, 5, 6]. Now you can search for similar numbers in your database. Remember, that was our Pinecone database. You go into the database and perform a vector similarity search: "take these numbers, [3, 2, 5, 6], and bring back the stored vectors closest to them, along with their chunks." If the user asked something about, say, a car, then wherever the PDF talks about that car, the embeddings will be close, so you will get the relevant chunks.

You also pass another parameter here, which we call top_k: how many relevant chunks do I need? Say top_k = 5: "bring me the top 5 relevant chunks, not the entire PDF file, just the chunks." Each chunk has whatever size we decided earlier, maybe 500 words, maybe one paragraph per chunk. So you get chunks 1 through 5. The PDF file could be 3,000 pages, but by using the user's query you smartly found only the relevant chunks, "relevant" meaning each chunk actually discusses what the user's query is about.

Now you take these chunks, plus the query the user originally asked, and make a simple LLM API call. It can be a GPT-4-class model, Claude, whatever you want: "the user has asked this query, and here are the relevant chunks." The model performs one generation, you get a result, and you return it to the user. And this is how your traditional RAG system works. Because it uses vectors, it is called vector RAG. Now let's understand the problems behind vector RAG.
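The query phase can be sketched the same way, with the same hypothetical word-count embedding standing in for a real embedding model. The sketch embeds the query, runs a cosine-similarity search over a tiny in-memory index, takes the top_k chunks, and assembles the generation prompt.

```python
# Query-phase sketch: embed the query, do a similarity search, take top_k.

import math

VOCAB = ["car", "engine", "story", "village", "dance"]

def toy_embed(text: str) -> list[float]:
    """Toy embedding: count how often each vocabulary word appears."""
    words = text.lower().split()
    return [float(words.count(v)) for v in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "the car engine roared as the car sped away",
    "the village gathered for the dance that night",
    "grandmother told a story about the old village",
]
index = [(c, toy_embed(c)) for c in chunks]  # stands in for the vector DB

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Vector similarity search: return the top_k most similar chunks."""
    qv = toy_embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [c for c, _ in ranked[:top_k]]

relevant = retrieve("what happened to the car engine", top_k=1)
prompt = f"Context:\n{relevant[0]}\n\nQuestion: what happened to the car engine"
print(relevant[0])  # "the car engine roared as the car sped away"
```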

Vector RAG works fine. It is used a lot, almost every company is using it, and it is the traditional, well-established way to do document RAG. But its biggest problem is chunking, because we usually have no solid justification for how we chunk. Let me show you. Say you have a page with a first, second, third, and fourth paragraph, and you have blindly decided on an algorithm: "I will chunk at 500 words." So you take the whole page and split it into 500-word pieces: pick up the first 500 words, then the next 500, then the next 500, and so on.

The thing is, some information may land in this chunk and the related information in the next one. Because you split in the middle, your context is lost; your data is lost. Open any random paragraph and assume this is a storybook: my first chunk kept the data only up to some arbitrary point, because the 500-word budget ran out there. You can see clearly that the chunk should technically have continued, because only then would the story frame be complete. But you cut on a static number, so the first chunk became just this much and the second chunk that much: part of the context stayed in one chunk and the rest went to the next, and the paragraph never got to complete. That's one problem.

Related to this: it could be that all three paragraphs together make one story. It almost never happens that each paragraph is its own self-contained story. But if you chunk paragraph by paragraph, they become chunks one, two, and three, and on top of that the third paragraph might be very short and the second very long. Whatever chunking you did, there is no justification behind it; we are chunking blindly.
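To make the context-splitting problem concrete, here is a tiny demonstration: a fixed word window cuts a sentence in half, so neither chunk contains the complete fact. The sentence is invented for illustration.

```python
# Demonstrating the chunking problem: a fixed word window cuts a
# sentence in half, so neither chunk contains the complete fact.

def chunk_by_words(text: str, window: int) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + window]) for i in range(0, len(words), window)]

text = "Thakur lost his arms because Gabbar cut them off in revenge"
chunks = chunk_by_words(text, window=6)

print(chunks[0])  # "Thakur lost his arms because Gabbar"
print(chunks[1])  # "cut them off in revenge"
# The cause ("Gabbar cut them off") is split across two chunks: a
# similarity search that returns only one chunk loses half the story.
```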

So how should chunking be done? Ideally you would make relevant chunks, created semantically, not by some hard-coded rule like "paragraph by paragraph" or "every 500 words." A static cut-off is not a good way to chunk data; somewhere, the context will get sliced.

The second problem shows up in documents like legal contracts. Legal documents are full of references: "as per rule 63.7.4 of ..." and so on. You can clearly see what happens: the reference appears on, say, page 4, while the rule itself is actually spelled out on page 578 of the same PDF. For a good generation you want both pages, the one with the reference and the one with the actual content. But chunking plus vector similarity gives you no such guarantee: the search may pick up the chunk containing the reference and never pull in the chunk that actually states the rule. That is another problem with chunking.

The third problem with vector RAG appears when you perform the vector similarity search over your chunks. The search runs on those embedding numbers, and the numbers rely heavily on what kind of question the user is asking. You do not have control over the user. If the user's query happens to use the exact keywords that were inside the PDF file, its vector embedding will match the embeddings stored in Pinecone very easily, and you will get good, relevant chunks.

But it doesn't happen every time. Maybe the book you ingested uses specific terminology, while the user asks only vague or very high-level questions: "how do I do this?" It is possible that the embeddings created from such a query never match the embeddings of the original documentation, because the user does not know how to ask, does not know what keywords were inside the original book or what they should really ask. So we are relying on the user's query being good: only if its vector embeddings match our original document's embeddings will the similarity search return relevant documents. If the user's query is poor, we will not find relevant documents, and our LLM output will not be good.

These are some of the problems that come with traditional vector RAG, and they are now (kind of) solved using vectorless RAG. As the name says, in vectorless RAG you do not do vector embeddings at all. It also has two phases, number one the indexing phase and number two the query phase. The phases are exactly the same, but the indexing phase has changed completely.

In the vectorless indexing phase there are no vectors, no Pinecone, no vector embeddings, not even chunking. Instead, you use a reasoning model. Over time LLMs have become smarter, more capable, and much better at reasoning, so here you rely heavily on a reasoning model and simply tell it: read these documents. There is one article I want to show you that walks through this using the movie Sholay as its example. By the way, "PageIndex" is the other name for vectorless RAG.

If you start reading that document, you can clearly see the definition: PageIndex is a vectorless, reasoning-based retrieval augmented generation (RAG) system. And what does it do? Instead of relying on semantic similarity search (the vector search I was describing), PageIndex builds a hierarchical table-of-contents tree. That is the very important line: a hierarchical table of contents. Note it down, because this is the indexing phase, and in this indexing phase you are not making vector embeddings or doing any kind of chunking.

What you build instead is known as a TOC tree, which is literally a tree, the data structure you have read about: a root node at the top, nodes under it, more nodes under those, all joined together. It works like the index of a book: when you need to find something, you open the index, see which heading matches, and jump to that section. That is exactly what we are going to build.

Going back to the article: PageIndex creates this from the document by using a large language model to reason over its structure. The model first identifies the most relevant sections using the document's hierarchy (the tree), then navigates into those sections to generate a precise answer. So, summing up: traditional RAG works on similarity, while PageIndex does reasoning. This is inspired by how humans work: if I hand you a very thick book and ask you a question, your brain does not scan every page; it uses the structure of the book. That is essentially what PageIndex does. And note, before moving on: it also solves the problem of legal documents and legal contracts that I told you about earlier.

So what does PageIndex do? Number one, if we go down a little: "structure before search." The entire pipeline is: the document comes in, you create a hierarchical index of it, then you do reasoning-based retrieval on that index, and then you get an answer, instead of the vector-embedding pipeline. Okay, so what will we do first? First of all, we build an index, something like this.

Take Sholay (I am not sure whether you have seen the movie or not, but assume the book here is the Sholay script). You can ask the LLM to go page by page and create an index of it. What does that index look like? You have a root document node, essentially empty except for a summary of the entire movie. Inside it, you identify the scenarios; the article's term is scene headings.

What were the plots, what were the scenarios inside this movie? The LLM figures that out itself; reasoning models can do this. So: life in Ramgarh, Gabbar's raid, the final showdown, the recruitment of Veeru and Jai, and so on. You have identified the main headings, the main scenarios: where a plot twist happens, where the story changes, where a story arc completes. Out of them you have created a table of contents. Only the headings live here, not the content.

What kind of structural detection goes into those headings? Scenes, characters, act breaks, major transitions. There is no fixed chunk size in this, none at all. Instead, based on reasoning, you identified the different things: a new character was introduced, there was a big twist in the movie, there was an emotional scene, the ending arrived. You detected the places where things change.

On top of that, you can attach tags. For example: wherever there are segments of the story, mark them blue; wherever there is something related to Gabbar, mark it purple; wherever there are critical events, mark them gold. Again, LLMs can do this well. So you kept giving the documents to the LLM, had it do the reasoning, and on the basis of that reasoning it generated a tree, a hierarchical mapping: the root node is Sholay, and under it hang all these first-level branches.

Sholay, after that you are making me watch it again I have all these first level branches ok, what happened inside him after that So based on that you have built a tree out of it, now every node What data can we store inside ok look at it this is a node this is also a node this is also a node So what data will we store inside each node? number one title title of that node id of that node This ID is very important this id is ok This is your node ID here This is basically a reference to the original document. Look here, we are just keeping it in a tree format. This is your node id. Here this is basically a reference to the original document.

Look, here we only see him are kept in a tree format but actual page number That in the official documentation where to get that thing node id Then kept a summary of it and its child nodes We have kept here this is how you do a tree in memory correct now if we are here Let's go so basically what will you do? whenever user Someone will ask a query, let's say You asked, why did Thakur lose his arms, this was our query, So what can you do here, you don't have to travel, you don't have to give the full movie, to you the full movie

So you do not have to pay the LLM for the full script; it is never sent, because the context size must not grow. There will be no embeddings, and there will be no similarity search. Instead, you will use the user's question to traverse the tree.

The traversal happens first, and it picks up the relevant nodes. We are not working on the original document at this point; right now we only have the table of contents, the small tree we just created, and we work on that alone. So the LLM is called: "Friend, the user has asked a question."

"Why did Thakur lose his arms? So please do one thing for me: this is my tree; search it and tell me which nodes you think are relevant, and bring me their child nodes." So what will the LLM do? Given the user's question, it goes to the hierarchical map, reads the summary of each node, and based on the structure picks whichever nodes it finds relevant. Because your tree has a very good structure, it picked this node, this node, and this other node.

Now that you have found the relevant nodes here and there, you have their node IDs, so you can fetch the relevant original documents; and because each node also carried a summary, you can use that as well.

So only the LLM's reasoning is used here: which nodes do I need? After that you can just hand over that data and do the retrieval. To recap, if I go back for a second: the user asked something, that was the user's query, so first of all you decide, "maybe this node is relevant for me; yes, this node is relevant, but that node is not relevant, so leave it aside."

You leave out everything irrelevant and pick up what is relevant, so for the user's query you get a subset of the tree that matters to you. How does the LLM decide which node is relevant? Through the summary we kept inside each node. Now, with the relevant nodes in hand, I can go to the original document.

I can fetch the original chunks, give them to the LLM, and then do the retrieval. So what can you put inside each node? First the node ID, which is a unique ID. Then a location, perhaps a pointer to the original page, or whatever reference you want to keep. After that, keep the title of each node and the description of each node.

Keep a summary of every node, and of course its child nodes, which are again an array of nodes. This is basically how you construct the tree. And what happens in this approach? Look, the LLM itself decides everything: what it should do and what it wants to do. That means no vectors, no vector embeddings, no chunking, no semantic search. It runs purely on the reasoning and capabilities of the LLM.
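The node record described above can be modeled as a small data class; the field names here are illustrative choices, not the schema of any particular SDK.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TreeNode:
    node_id: str              # unique id, also the reference to the source
    page: int                 # pointer to the original page/location
    title: str
    description: str
    summary: str              # what the LLM reads when judging relevance
    children: List["TreeNode"] = field(default_factory=list)

# Root plus one child, mirroring the Sholay example.
root = TreeNode("node-0", 1, "Sholay", "root of the script", "the full story")
root.children.append(
    TreeNode("node-1", 12, "Thakur's backstory", "flashback scene",
             "how Thakur lost his arms")
)
```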

So this is basically what PageIndex is trying to do. PageIndex works on navigation and extraction, and this mirrors how humans read.
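The reasoning-based traversal can be sketched as follows. Here `is_relevant` is a hypothetical stand-in for an LLM judging a node's summary against the query; to keep the sketch runnable it is faked with simple keyword overlap, which a real system would never use.

```python
# Reasoning-based retrieval over the tree: no embeddings, no similarity
# search. is_relevant() stands in for an LLM relevance judgment.

def is_relevant(query: str, summary: str) -> bool:
    # Fake judgment via keyword overlap (a real system asks the LLM).
    return any(word in summary.lower() for word in query.lower().split())

def select_nodes(node: dict, query: str) -> list:
    """Walk the tree and collect every node judged relevant."""
    hits = []
    if is_relevant(query, node["summary"]):
        hits.append(node)
    for child in node.get("children", []):
        hits.extend(select_nodes(child, query))
    return hits

tree = {
    "id": "root", "summary": "the full story of Sholay",
    "children": [
        {"id": "n1", "summary": "how Thakur lost his arms", "children": []},
        {"id": "n2", "summary": "the village celebrates Holi", "children": []},
    ],
}
relevant = select_nodes(tree, "why did Thakur lose his arms")
# The ids of the relevant subset can then be used to fetch original pages.
```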

When you want to know something, you navigate an index and extract the part you need, in just the same way. If you look up vectorless RAG, you will also see a repository introducing this: PageIndex (again, not sponsored). It is a Python SDK that implements this idea: you give it a document, it builds a tree, then it runs LLM reasoning on the query, and you get an answer. That is the whole pipeline. This is a relatively new thing. In case you want to see what a tree looks like:

This is what a tree looks like: you have a title, a node ID, a summary, and child nodes. Inside each child there is again a title, a node ID, a start index and an end index (where it sits in the original), a summary, and possibly child nodes of its own. That is how you construct the tree. Because LLMs have become smart over time, it is the reasoning models and the overall smartness of LLMs that is being put to work here.
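For concreteness, a serialized tree along those lines might look like this; the field names follow the description above (title, node ID, start/end index, summary, child nodes) but are assumptions, not the exact schema of the PageIndex SDK.

```python
import json

# Illustrative serialized tree; start/end indices point into the original
# document text (field names assumed, not an exact SDK schema).
tree_json = """
{
  "title": "Sholay",
  "node_id": "0000",
  "summary": "the full story",
  "nodes": [
    {
      "title": "Thakur's backstory",
      "node_id": "0001",
      "start_index": 120,
      "end_index": 480,
      "summary": "how Thakur lost his arms",
      "nodes": []
    }
  ]
}
"""
tree = json.loads(tree_json)
print(tree["nodes"][0]["node_id"])   # prints 0001
```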

So that is how vectorless RAG comes into the picture. In case you like this approach, let me know; I am even ready to code it up.

Recently, in one of our projects, we converted our traditional RAG into a vectorless RAG. The trade-offs we had to accept were, number one, cost, because reasoning models are expensive, and number two, speed: since the model has to reason and perform a tree traversal, it takes a little more time for the LLM to reach the final output.

Before producing that output, it reasons a lot. So the trade-off is that we are trading time for accuracy. Beyond this, there are many other relatively new things; it is an AI world, of course, and things change very rapidly, with new approaches arriving all the time. So let's wait and see what comes next.

But let me know in the comments what you feel about this vectorless RAG, and what your takes on it are.



Bindass Bol Dil Se

Written by: Chia

10 Apr 2026  ·  Published: 14:05 IST

Top Flutter App Development Companies in Delhi

Flutter was developed in 2017 by Google (Alphabet Inc.). Flutter provides a single-codebase structure that can be used to develop web applications and mobile applications for both Android and iOS. Previously we had to use Java + XML for Android and Swift for iOS, but Flutter solves this with cross-platform development. Startups should consider choosing Flutter over native development. Flutter is a UI framework that uses the Dart language. Its dependency manager is pub.dev, where developers can find the packages they need, such as payment gateways, state management, sliders, SMS, etc.

Features of Flutter:

Widgets: in Flutter, everything is a widget, whether a child widget, a stateless widget, or a stateful widget.

Editors: you can use Flutter in editors like VS Code, Android Studio, etc.

Run: you can run Flutter on any device, such as an Android or iOS smartphone, in a browser like Chrome or Edge, or on any installed emulator.

APK: you can build an APK to run on any smartphone.

Startups: best for any business that wants to build its own application, such as e-commerce, an education LMS, a water-purifier booking app, a Wi-Fi booking app, matrimonial, bike-taxi booking, online food ordering, etc.

Going live: to make an application live on the Play Store, a business has to purchase a Play Console account for about INR 2,000 (a one-time fee). For iOS, the Apple Developer Program costs around INR 10,000, billed annually.

Cross-platform: you can build applications for any device, whether desktop, Android, or iOS.

AI: many AI platforms can help with the design and development of any kind of application, such as ChatGPT, DeepSeek, Claude, Grok, etc., and many AI portals now generate full-stack applications for web and mobile. With AI you can solve complex problems in seconds, provided the developer knows how to follow the documentation properly. Some AI websites have limitations, such as a maximum query length. You can build designs from scratch whenever needed, and a premium subscription generally gives more capable models and more relevant answers. ChatGPT, Grok, Claude, and DeepSeek are generative technologies: they are trained on large datasets, and when they need fresh information they can send the query to a search engine, gather the relevant sources, and summarize them for you. Such real-time lookups typically rely on third-party search APIs, and search providers have been tightening access to results, since AI websites depend on them for real-time data from the internet. AI services also need large amounts of server capacity, so they use cloud providers such as AWS, or their own data centers, to maintain load balancers, backups, Kafka, ZooKeeper, etc.

What types of applications can we create using Flutter?

E-commerce: we can create shopping mobile applications in any vertical, such as B2B, B2C, D2C, etc., with features like a home page, search, categories, products, add-to-cart, checkout, payment, order tracking, privacy policy, terms and conditions, shipping and delivery policy, returns and refunds policy, login, register, etc.

Matrimonial: we can build matrimonial applications like shaadi.com or bharatmatrimony.com, with features such as login, signup with OTP verification, profile creation, subscriptions, profile matching, payments via a gateway, access to contact numbers and chat after purchasing a subscription, a privacy page, a returns and refunds page, etc.

Online taxi booking: we can build bike-taxi mobile applications like Rapido, Ola, or Uber, with features such as registration, sign-in with OTP verification, profile details, and ride booking from a start point to a destination. The app then finds the nearest rider at that location; once the captain accepts, the ride starts, tracked with the Google Maps API, and on reaching the destination the captain completes the ride and collects payment via UPI or cash, as provided by the platform. Users can see all the rides they have taken, their payments, etc. Captains can see all the rides they have provided, the payouts transferred on particular dates, the insurance provided by the company, bank account details, etc.

E-learning LMS: EdTech businesses can create LMS applications like Physics Wallah, Allen, Careerwill, etc., with features such as login and signup using a mobile number, a listing of all available courses, and enrollment in the relevant course. After enrolling, learners can open a purchased course and view its content in a video player, along with notes, Q&A, related queries, assignments, etc. A ticket-creation feature lets users report any issues they face.

Recruitment app: for any entity that wants to provide recruitment services like Indeed, Apna, or WorkIndia. Users sign up on the app and provide basic details such as name, email, contact number, and an updated CV. Based on their interests or job-profile category, users can apply to jobs with the varying benefits companies offer, such as salary range, cab facility, hybrid working, travel allowance, casual leave, gazetted holidays, etc. Users get notified about new job openings.

There are many more categories of mobile application you can develop using Flutter. You can use a backend like Laravel or any other framework; for the database you can use MySQL or Firebase; and for push notifications you can use Firebase Cloud Messaging. In the Play Console you can check how many downloads the app has accumulated.

Top companies for Flutter application development:

Webgridsolution: a global tech provider offering multiple services such as website design, website development, mobile application development, SaaS platforms, digital marketing, etc.

Webtrills: a leading Indian mobile application development services provider based in Delhi, offering native and hybrid application development.

Webkul: an online tech provider offering services such as development and marketing, with its main office in the Noida region.

Appsinvo: a leading full-stack mobile applications company operating across the country, with native and IoT-enabled application development.

Winklix:- Winklix offers mobile app development services for various platforms, including iOS, Android, React Native, Flutter, and Salesforce.

For any enquiries,

Contact support: info@bindaasboldilse.com



Written by: Rohit

27 Feb 2026  ·  Published: 19:45 IST

Breaking: Oracle Announces 30,000+ Layoffs Worldwide


At 6 in the morning your phone vibrated and an email arrived: today is your last working day. No meeting, no conversation, no warning; just one cold email, and your career is over. 30,000 employees globally, 12,000 of them in India. This was the biggest mass layoff in Oracle's 47-year history.

The company whose databases sit behind almost every major institution in India. I will tell you three things: three villains who took away the jobs of twelve thousand Indians, and one explanation that you will probably only understand now. Hello friends, my name is Deepak. Now let's start the video.

So first of all, understand what Oracle is, because that is the most important thing for this video. The year was 1977, in California, America. There was a man, Larry Ellison: a college dropout, no money at home, no big background, but he had an idea. Companies needed to keep their data secure, find it fast, and never let it be lost; this was the database. And with this one idea, Larry Ellison created Oracle. What does Oracle actually do today? Their first business is the database.

Whenever you insert your card into an ATM, where is the bank's data? On Oracle's database. SBI, HDFC, ICICI: India's biggest banks run on Oracle. Their second business is ERP software. How does a big company like Reliance manage its finance and its supply chain? On Oracle's software. And the third business is their cloud: like AWS and Azure, Oracle has its own cloud, Oracle Cloud Infrastructure. And when did Oracle come to India? In 1994, when they opened their first office in Bangalore.

And within 30 years, Oracle became one of India's largest tech employers; before the layoffs, some 30,000 employees worked in India alone. If you have ever touched bank software, seen a hospital system, or worked in a big company, you must have encountered Oracle. For 30 years it meant a safe job, and now three villains have changed everything. The first villain is different from what you might expect.

Listen, read that email once again: "Today is your last working day." This line arrived at 6 in the morning, and what happened along with it? System access stopped immediately. No goodbye meeting, no manager's call, no thank you.

In Oracle India, some employees had 10 years of service, some 15, some 20. One manager even wrote in a LinkedIn post that a journey of 16 years ended in a single email. And all of this happened when Oracle's quarterly net income was 6.13 billion dollars, a profit of 6.13 billion dollars in one quarter, and yet 30,000 people were fired. In reality, this was not about survival; it was the company's decision to increase its margin. Now, Oracle fired the employees, but did they get any severance payment, any compensation? Yes, the company handled that part properly: it gave employees 15 days' salary for every year of service, plus 2 months' ex gratia, plus notice pay. So Oracle is not completely in the wrong here, I would say; they paid severance and followed legal compliance. But one thing they did not do was give advance warning. They gave their employees no signal, no retraining offer, no transition support; just one mail, and the job was over.

When a company is making a profit of Rs 50,000 crore and still sends a termination email at 6 o'clock, this is not restructuring; this is cold business. Now coming to the second villain, AI automation, and this villain is the most dangerous.

Because it is not visible. In fact, Larry Ellison spoke openly about this in January 2026. He said autonomous software eliminates human labor and human error, lowering operating cost. Understand what that means: AI reduces human cost and human error, which is why they are shifting completely towards AI. This is not a PR line; this is actually Oracle's plan, and it drove Oracle's biggest budget of 2025-26. What did they do? They raised 50 billion dollars of debt. Why? To build AI data centers.

Data centers built around Nvidia's Blackwell chips. Oracle and Nvidia have created the world's largest AI supercluster, and Oracle has also launched 22 AI agents, which the company has named Oracle Fusion agents. What do these agents actually do? Work that earlier needed 50 engineers: cloud monitoring, detecting server issues, solving customer support tickets. All of that is now done by an agent.

In 2022, Oracle bought Cerner, America's largest healthcare IT company, for $28 billion. Cerner's entire system was old legacy code, millions of lines of it. It was estimated that rewriting it would require several thousand engineers and developers, and that the whole job would take 5 to 7 years.

Now, Oracle actually did all of this with AI in 3 years. A thousand software developers woke up to find AI had taken over their work, and where did those developers go? They disappeared. This is the same pattern playing out in Oracle India: cloud operations, level-two support, quality-assurance testing, repetitive engineering tasks. AI agents are doing all of this now, monitoring more than a thousand cloud instances 24x7 without a break, with an accuracy that a human team of 50 people cannot achieve. Friends, if your work is repeatable, that means it is also deletable.

You will definitely understand this. Now comes the third villain, which is within us. Think of the ostrich: when danger comes, it buries its head in the ground, thinking that if it cannot see the danger, maybe the danger will go away. This is exactly the mistake that India's IT professionals made.

Twelve thousand people were cut; many of them had 10 to 15 years of Oracle experience. They were expert-level in Oracle Database and certified in Oracle ERP, but they never learned Oracle Cloud, never learned AI agents, never learned generative AI. Why? Because Oracle felt safe: bank accounts were running, EMIs were being deducted.

The job seemed secure, and the truth is that many Indian professionals do not upskill. Actually, friends, this is not just about Oracle; this is the story of Big Tech layoffs across 2025-26, with jobs disappearing globally by the hour. Amazon removed 16,000, Microsoft removed 19,000, Meta removed 36,000; at Google and IBM, the same pattern is playing out everywhere. If you keep postponing your own upskilling, then you are the culprit here, and that is villain number three. Oracle did not give a warning, but Larry Ellison did.

He said it himself at the beginning of 2026: autonomous software eliminates human labor. The warning was there, but no one was really listening. And this is not just Oracle's story; it is the pattern of the entire IT sector. Oracle removed 30,000 people, and their stock went up. Imagine: their stock had been falling for a year, but in one day it hit a high.

Because investors saw that the company is investing in AI: remove humans, and margins will increase in the future. That is the signal, and this trend is not stopping. This is just the beginning of Oracle's AI-driven restructuring, and you will see the bigger game in the times to come. So friends, this is the story of Oracle.

But there are three business lessons in it, and if you don't learn them, you too will fall into the same trap. Understand the first lesson: there is no such thing as a safe job. Oracle was considered the safest job in India for 30 years; its software ran in banks, in hospitals, on government projects. And then one day, at 6 o'clock in the morning, an email changed everything. In reality no skill is safe; it either stays current or becomes outdated, and that choice is now yours. Understand the second lesson: when a company invests in AI,

then its employees should invest too. Oracle plans to invest $50 billion in 2025-26. The third lesson is the most important: repeatable work is a risk zone, in India and across the world. Cloud monitoring, L1 and L2 support, quality-assurance testing, basic engineering tasks: all of this is being taken over by AI. But one thing cannot be taken away, friends, so please understand this carefully.

Critical thinking, customer relationships, system-design decisions, understanding business context: if you move into these areas, AI will not be able to replace you. And if AI is taking over your job role completely, then you have to become a manager of AI; you will have to become the CEO of AI. So friends, this was the story of Oracle. We have met three villains: the Oracle that fired people without warning, the AI automation that is invisibly replacing people, and, along with them, the Indian professionals who keep putting learning off for later.

I have not made this video to scare you. I have made it because this is a warning, and a warning is valuable. Friends, today AI is a must-do skill, no matter what your job role is: businessman, student, IT professional, any kind of work, or even running the house. If you don't learn AI, you will be left behind the times. Please comment and let us know if you too have been laid off from Oracle.



Written by: Aditya Raj

08 Apr 2026  ·  Published: 13:15 IST