February 2026 – Product update

This is Alex, founder & CEO of Vega.
This month, we shipped:
- The first "Memories" feature in the industry
- An integration with Universal-3 Pro, the best speech-to-text model in the world
- A Calendly integration
- And more
And next month, we're announcing:
- The first agentic system to update your CRM system and manage emails via chat
- Document processing for tax and portfolio analyses
- A Google Chrome extension
- A whole new web interface
- Integrations with 2 other CRMs
Introducing Memories
The first AI in the planning industry that remembers

Large Language Models (LLMs) are extremely powerful. There's a reason ChatGPT attracted 100 million daily active users in just three years.
But another technological breakthrough is needed before they can one day approximate intelligence in the human sense. Today, the fundamental problem with the language model you're chatting with is that it doesn't get better over time the way a human would. The field now has a name for this: LLMs lack "continual learning."
How do you teach a kid to play basketball? You have them try to shoot the ball, watch where it goes, and adjust. Now imagine teaching basketball this way instead: A kid takes one shot. The moment they miss, you send them home and write detailed instructions about what went wrong. The next kid reads your notes and tries to hit a half-court shot cold. When they miss, you refine the instructions for the next kid.
This is fundamentally how language models work: once your session ends, the next one starts from scratch.
And there's no solution in sight. Researchers are trying hard, but none of the recent breakthroughs touch continual learning. The big story a year ago was test-time compute, also known as thinking or reasoning models (remember DeepSeek?). That had nothing to do with learning. Here's the original release by OpenAI: Reason with LLMs. And today's big story is Recursive Language Models. Again, that doesn't solve learning.
Instead, LLM applications rely on techniques like Retrieval-Augmented Generation (RAG) or agentic search. In essence, every time the language model is queried, the application navigates a knowledge base to check whether user data processed in the past could make the next LLM response better. This is not true learning, just scaffolding, but it does create a lot of value!
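To make the retrieve-then-generate pattern concrete, here is a minimal sketch of RAG using a toy bag-of-words similarity. All names here (`embed`, `retrieve`, `answer`) are illustrative, not Vega's actual implementation, and real systems use learned vector embeddings rather than word counts:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. Real systems use learned
    # vector embeddings, but the retrieval flow is the same.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    # Rank stored snippets by similarity to the query and keep the top k.
    ranked = sorted(knowledge_base,
                    key=lambda doc: cosine(embed(query), embed(doc)),
                    reverse=True)
    return ranked[:k]

def answer(query: str, knowledge_base: list[str]) -> str:
    # The retrieved snippets are prepended to the prompt on every call.
    # The model itself never changes, which is why this is not true learning.
    context = retrieve(query, knowledge_base)
    prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query
    return prompt  # in a real app, this prompt is sent to the LLM

kb = [
    "Jane travels abroad from March to October.",
    "The client prefers paragraphs over bullet points.",
    "Quarterly reviews happen in the first week of each quarter.",
]
print(answer("When can Jane meet for her annual review?", kb))
```

The key point the sketch makes: the knowledge base grows, but the model is stateless, so every query pays the retrieval cost again.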
In that context, at Vega, we've just released the first version of Memories in the planning industry!
You can add cues along the lines of "remember this," "note it for next time," or "save that info" to any instruction you send to the Vega AI, and our application will save a Memory. The Memory will be retrieved the next time our AI considers that it would improve its output. Here are the kinds of memories that are helpful:
- A writing style preference: "Note that I don't like to use bullet points, I prefer paragraphs."
- A fact about a client: "Remember that Jane can only have her annual review in the winter, she travels abroad from March to October."
- A template idea: "For COI meetings, don't take notes as long as for client reviews. Cut to the chase. Remember this."
Recording a memory works in all locations where you can send natural language inputs to the AI:
- Your homepage chat interface on web
- Your homepage chat interface in the iOS app
- Your email plugin
- The post-meeting rewrite prompt
- The follow-up email rewrite prompt
- [Coming next month] The portable chat interface you can summon from any page in the app
We're very excited for you to try it out!
Partnering with Universal-3 Pro
The most accurate speech-to-text model in the world

AI solutions dedicated to financial planning teams are applications. In other words, we builders sit at the top layer of the hardware-plus-software value chain that delivers value to end users. We translate compute (hardware maintained by Amazon/Google/Microsoft/etc.) and models (software built by OpenAI/Google/Anthropic/etc.) into real workflows and outcomes for planners, associate advisors, and CSAs.
We and our competitors are not in the business of training AI foundation models. If you hear a vendor in our industry claim they "train a model" or "train an agent," be aware that they're lying. Training models requires skills our industry cannot afford, simply because much larger industries buy those skills. A single high-ranking machine learning engineer at a reputed lab earns more annually than any wealth-tech software company could spend on all its software developers combined.
So we integrate with models! If you can't beat them, join them! On the language model side, Vega integrates with the GPT series from OpenAI, the Opus/Sonnet series from Anthropic, and the Gemini series from Google. We use them through cloud infrastructure built with privacy in mind and dedicated to sensitive industries (no data retention, no training of models on user data).
On the speech-to-text side, we have just integrated the best model in the world for capturing meetings: Universal-3 Pro, announced by Assembly last week. Assembly has created a model with a lower Word Error Rate (WER) than the models from OpenAI, ElevenLabs, and Deepgram.
The highest-ranked model in a given quarter might be displaced the next. We actively run internal evaluations to select the models that transcribe client meetings most accurately, despite background noise, interjections, single-word responses, and hesitations.
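For readers curious what WER measures: it is the word-level edit distance (substitutions + insertions + deletions) between a reference transcript and the model's output, divided by the number of reference words. A minimal sketch of the metric:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # Levenshtein distance over words, normalized by reference length.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One inserted word ("the") against a 6-word reference: WER = 1/6.
print(word_error_rate("schedule the annual review in winter",
                      "schedule the annual review in the winter"))
```

A lower WER means fewer transcription mistakes per spoken word, which is what our internal evaluations compare across vendors.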
Calendly integration
Save another dozen clicks every day

We have released an integration with the most popular meeting scheduling tool: Calendly.
Here are the 4 things it does for you:
- If you schedule your next meeting with your client during the current one, and it is reasonably clear that you agreed on a time and location together, Vega will automatically find the relevant Event Type in your Calendly account and create a meeting invitation. Vega pre-populates every field (event type, start date and time, invitee email, location, etc.). You just review and confirm, the way you confirm a CRM task or a workflow suggestion, and Calendly handles the pre-meeting flow as usual (invite, reminders, other pre-meeting workflows).
- If you schedule a time by email, Vega now ingests your Calendly availability details to help draft your email, instead of just taking into account your Outlook or Google calendar availability.
- The meeting reminder emails that Vega drafts include any Calendly data that could be helpful (rescheduling link, cancellation link).
- So do your meeting follow-up emails.

Preview of next month
The best 30 days in the history of Vega
Now, for the serious stuff. We have been building our new platform in the background for the past 4 months. I can't wait to announce it in 30 days, in this same blog post. It will include:
- Document processing for tax and portfolio analyses
- A Google Chrome extension
- An agentic system to update your CRM and manage emails via chat
- Integrations with 2 other CRMs
- A reasoning model for chat
- Chat threads and automatic association of threads with households for indexing and AI retrieval
- A portable chat interface that can be summoned from any page and will retain context from that page
