Recognizing there is no single "best" LLM, AlphaSense built a system to test and deploy different models for different tasks. This lets the company optimize not only for performance but also for stylistic preferences, routing its buy-side finance clients and its corporate users to different models.
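One plausible shape for this kind of per-task routing is a lookup table keyed by task and client segment, with a default fallback. This is a minimal illustrative sketch only: the model names, task labels, and client segments below are invented for the example and are not AlphaSense's actual configuration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Route:
    """Which model to use, and with what decoding settings."""
    model: str
    temperature: float


# Hypothetical routing table keyed by (task, client_segment).
# Real systems would populate this from evaluation results.
ROUTES = {
    ("summarize", "buy_side"): Route("model-a", 0.1),
    ("summarize", "corporate"): Route("model-b", 0.4),
    ("sentiment", "buy_side"): Route("model-c", 0.0),
}

DEFAULT_ROUTE = Route("model-a", 0.2)


def pick_route(task: str, segment: str) -> Route:
    """Return the configured route for (task, segment), else the default."""
    return ROUTES.get((task, segment), DEFAULT_ROUTE)
```

A router like this makes the "test and deploy" workflow the text describes straightforward: swapping a model for one task and one client segment is a one-line table change, which can be A/B tested without touching other routes.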
The future of AI research is proactive discovery. The goal is a system that not only monitors a portfolio but also recognizes what it doesn't know, then autonomously tasks its AI interviewer to conduct expert calls to generate the missing insights and deliver the new analysis to the user.
Unlike consumer chatbots, AlphaSense's AI is designed for verification in high-stakes environments. The UI makes it easy to see the source documents for every claim in a generated summary. This focus on traceable citations is crucial for building the user confidence required for multi-billion dollar decisions.
The company developed an AI that conducts highly technical expert network interviews, automating a high-friction manual process. This enables new, scalable content creation like monthly channel checks across dozens of industries—a task too repetitive for human analysts to perform consistently at scale.
The company wasn't built to solve a minor inconvenience. It was born from founder Jack Kokko's intense fear as an analyst of missing critical information in high-stakes M&A meetings. This deep-seated professional anxiety, not just a need for efficiency, fueled the creation of a market intelligence platform.
As platforms like AlphaSense automate the grunt work of research, the advantage is no longer in finding information. The new "alpha" for investors comes from asking better, more creative questions, identifying cross-industry trends, and being more adept at prompting the AI to uncover non-obvious connections.
Before generative AI, AlphaSense built its sentiment analysis model by employing a large team for years to manually tag financial statements. This highly specialized, narrow AI still surpasses the performance of today's more generalized Large Language Models for that specific task, proving the enduring value of focused training data.
