Thesis on the Future of AI Interactions
Aldaran Analytics • 2025
Abstract
Why we abandoned the Chatbot, why manual prompting will become obsolete, and why domain expertise is the real moat in specialized AI applications.
We make two bold assumptions. First, within a couple of years, writing prompts into a chatbot will be considered manual work, just as typing tables into Excel is today. Second, more investment analysts will work on product development for data providers than on the data-recipient side, analyzing companies and making investments.
Why We Abandoned the Chatbot
When we started Aldaran Analytics, we knew which problem we wanted to solve. The harder part was figuring out how users would interact with our solution. The obvious choice was, of course, a chatbot. It is appealing: investors see the tech directly, users feel they are working with AI, and we could have described ourselves as an AI-native company.
Further, when providing a chatbot, you can always ride the current state of the foundational models. If your accuracy is not good, you can simply say, "But we benchmark against the quality of foundational models" (because you are actually a GPT wrapper). As a side note, this is a good test for checking whether the tool you are using is a GPT wrapper: does it offer the answer, or do you have to extract the answer from the tool yourself?
If the latter is the case, you as the user are in charge of maximizing the value, not the company you pay.
The Problem: Value Divergence
The problem we have today is that there is a huge divergence between the value an LLM could deliver and the value an LLM actually delivers. The value an LLM can deliver is highly dependent on the user. And we're not talking about how well your question is phrased; we're simply talking about what you ask the LLM to do in the first place.
A skilled coder can ask an LLM to do something an unskilled coder doesn't even know is possible and thus can't even request. If a chatbot could answer 100 questions but a user asks only one, the value is not maximized.
Most users of chatbots end up merely improving the efficiency of tasks they could already do themselves; they rarely learn something new. We wanted to go beyond mere efficiency gains and actually offer intelligence, not just data. And we knew that a chat interface would not suffice to accomplish this.
The Shift: From User Questions to Proactive Intelligence
We believe a shift must occur: from the user asking the question and thereby determining the value they receive, to the company asking the question, reviewing the answer, and maximizing the value for the user. Obviously, this is only possible with significant domain expertise and in a very specialized niche.
This led to our realization that the real difficulty for specialized AI applications (especially in data and data analytics) won't be the engineering but rather obtaining domain expertise and first-hand user experience. The more intelligence (actual insights) a data provider can attach to the data it delivers, the better. The more a user must contextualize, quantify, and assess your data themselves, the worse.
We call this proactive intelligence: the company has the domain expertise, it is responsible for asking the right questions, and it adds intelligence to the data a user wants. Think about it this way: when your user receives your data today, how many additional steps does it take them to turn that data into a decision or an assessment? The more steps, the worse.
Domain Expertise as the Real Moat
Domain expertise is in high demand, and the question is whether it will remain within the company that uses the data (in our case, an investment analyst) or whether investment analysts will move to data providers to build better products.
If that shift can be accomplished, we can move away from manual prompting.
Other advantages include:
- If data providers can review the data before delivering it to customers, they can actually guarantee high-quality outputs.
- It is much more efficient: with chatbots, 100 users ask the same questions repeatedly, each time consuming tokens and energy, whereas in our case the output is created once and distributed at scale with no marginal cost attached.
We then move from high variable costs (a cost per prompt) to fixed costs (a one-time cost per output), with AI working in the backend as a means to scale and become more efficient.
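The variable-vs-fixed cost argument can be made concrete with a minimal sketch. All figures below (user counts, per-prompt cost, one-time production cost) are hypothetical assumptions chosen purely to illustrate the comparison, not real pricing:

```python
# Illustrative cost comparison: per-prompt (variable) vs. produce-once (fixed).
# All numbers are hypothetical assumptions for the sake of the argument.

def chatbot_cost(users: int, prompts_per_user: int, cost_per_prompt: float) -> float:
    """Variable-cost model: every prompt from every user consumes tokens."""
    return users * prompts_per_user * cost_per_prompt

def proactive_cost(fixed_production_cost: float) -> float:
    """Fixed-cost model: the output is produced and reviewed once,
    then distributed at scale with no marginal cost per user."""
    return fixed_production_cost

# Hypothetical numbers: 100 users asking 20 similar prompts each at $0.05
# per prompt, versus a one-time $40 cost to produce the reviewed output.
variable = chatbot_cost(users=100, prompts_per_user=20, cost_per_prompt=0.05)
fixed = proactive_cost(fixed_production_cost=40.0)

print(f"chatbot (variable): ${variable:.2f}")  # grows with every user and prompt
print(f"proactive (fixed):  ${fixed:.2f}")     # flat, regardless of user count
```

Under these assumed numbers, the variable model costs $100 and keeps growing with usage, while the fixed model stays at $40 no matter how many users receive the output; the exact break-even point depends entirely on the real figures.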