You literally told me you built something that would allow an LLM to access the data. For it to be reliable enough, the data would have to be appropriately sorted already, and there would need to be an interface the LLMs could use. So you built all this stuff to make the LLM thing work, and now you’re looking at me like I’m stupid because building an extremely simple filter is some sort of crazy thing and we need a product to do it.
What the hell were people doing before you built your little chatbot? Just neatly sorting information into a black box and throwing it into the ocean?
For it to be reliable enough, the data would have to be appropriately sorted already, and there would need to be an interface the LLMs could use.
Ah ok, so you have no idea what you’re talking about then lol. In a nutshell, you go “here are your database connection details, now be a good little AI and answer my questions about the database”.
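The pattern being described, handing the model a database connection plus its schema and letting it generate the queries, can be sketched roughly like this. The model itself is stubbed out, the table and the generated SQL are invented for illustration, and a real setup would be more guarded than this:

```python
# Sketch of the "hand the model a connection and let it query" pattern.
# The LLM is stubbed out here; in a real setup the model would be shown the
# schema text and would emit the SQL itself. All names are illustrative.
import sqlite3

def run_sql(conn: sqlite3.Connection, sql: str) -> list[tuple]:
    """The single tool exposed to the model: execute SQL, return rows."""
    return conn.execute(sql).fetchall()

def describe_schema(conn: sqlite3.Connection) -> str:
    """Schema text that would be placed in the model's prompt."""
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(sql for _, sql in rows)

# In-memory demo database standing in for the live one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, state TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "WA", 9.5), (2, "OR", 12.0), (3, "WA", 3.25)])

# A hypothetical model, shown describe_schema(conn), might answer
# "what's the order total for WA?" by emitting this SQL:
generated_sql = "SELECT SUM(total) FROM orders WHERE state = 'WA'"
print(run_sql(conn, generated_sql))  # → [(12.75,)]
```

Note the model only ever sees the schema and the tool; everything else is plain database plumbing that existed long before the chatbot.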
So you built all this stuff to make the LLM thing work, and now you’re looking at me like I’m stupid because building an extremely simple filter is some sort of crazy thing and we need a product to do it.
“An extremely simple filter” lol. It could be pulling data from 30 different tables, views, stored procedure results, etc. from the database, making insanely complex queries and reports, then cross-referencing those with external logs from a third-party logging service to provide even more data. You seem to think that you pretty much have to build all the queries and reports and services yourself, and then the LLM just calls them with some parameters lol.
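The cross-referencing step mentioned above, joining report rows from the database against entries pulled from an external logging service, is itself a small amount of code once both datasets are in hand. A minimal sketch, with both the query results and the log payload made up for illustration:

```python
# Sketch of cross-referencing database report rows with entries from a
# third-party logging service. Both datasets are invented stand-ins; a real
# pipeline would fetch them from the database and the logging API.

# Rows as they might come back from a reporting query.
db_rows = [
    {"request_id": "a1", "state": "WA", "total": 9.50},
    {"request_id": "b2", "state": "OR", "total": 12.00},
]

# Entries as they might come back from the external logging service.
external_logs = [
    {"request_id": "a1", "level": "ERROR", "message": "payment retried"},
    {"request_id": "c3", "level": "INFO", "message": "unrelated"},
]

def cross_reference(rows, logs):
    """Attach matching log entries to each report row by request_id."""
    by_id = {}
    for entry in logs:
        by_id.setdefault(entry["request_id"], []).append(entry)
    return [{**row, "logs": by_id.get(row["request_id"], [])} for row in rows]

report = cross_reference(db_rows, external_logs)
```

The hard part of the "30 tables and stored procedures" scenario is deciding which data to pull, not the merge itself.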
You very clearly have zero experience in this area, and have not done even the most basic of research.
Hey dude, I was responding to your incredibly shitty examples. You give me no information and then blame me for not having information; well, that’s a you problem. But I suppose if you understood that concept, you’d also understand the problems I’m talking about.
Now, again, if the AI can have access to all that information and identify it correctly, then why is it impossible to do what I’m asking? It has to be able to tell the difference somehow, right? And with LLMs being known for hallucinations and serious misunderstandings, it seems rather ridiculous to rely on one for something you say is so complex that a person cannot do it. You also haven’t answered me, I don’t think, on the topic of what people were doing before the LLM.
There are a lot of key elements you’re dodging here, and before you start talking shit, maybe start addressing them.
Product X > filter by state > date range. Why is this difficult? Gimme another; it’s mildly entertaining, even if it’s not exactly difficult.
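For what it's worth, the "filter by state, then date range" query being argued over really is one parameterized WHERE clause. A minimal sketch, with the table and column names invented for illustration:

```python
# Minimal version of the "Product X > filter by state > date range" query.
# Table and column names are made up; dates are stored as ISO-8601 strings,
# which compare correctly with BETWEEN in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, state TEXT, sold_on TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", [
    ("widget", "WA", "2024-01-05"),
    ("widget", "WA", "2024-03-20"),
    ("widget", "OR", "2024-01-10"),
])

def filter_by_state_and_range(conn, state, start, end):
    """The 'extremely simple filter': one parameterized WHERE clause."""
    return conn.execute(
        "SELECT name, state, sold_on FROM products "
        "WHERE state = ? AND sold_on BETWEEN ? AND ?",
        (state, start, end),
    ).fetchall()

rows = filter_by_state_and_range(conn, "WA", "2024-01-01", "2024-01-31")
print(rows)  # → [('widget', 'WA', '2024-01-05')]
```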
What product are you using to get that data from a live Azure database?