Context raises $3.5M to elevate LLM apps with detailed analytics

Head over to our on-demand library to view sessions from VB Transform 2023. Register Here


London-based Context, a startup providing enterprises with detailed analytics to build better large language model (LLM)-powered applications, today said it has raised $3.5 million in funding from Google Ventures and Tomasz Tunguz from Theory Ventures.

The round also saw participation from 20SALES and multiple VCs and tech industry luminaries, including 20VC’s Harry Stebbings, Snyk founder Guy Podjarny, Synthesia founders Victor Riparbelli and Steffen Tjerrild, Google DeepMind’s Mehdi Ghissassi, Nested founder Matt Robinson, Deepset founder Milos Rusic and Sean Mullane from Algolia. Context AI said it will use the capital to grow its engineering teams and build out its platform to better serve customers.

The investment comes at a time when global companies are bullish on AI and racing to implement LLMs in their internal workflows and consumer-facing applications. According to McKinsey estimates, at this pace generative AI technologies could add up to $4.4 trillion annually to the global economy.

Developing LLM apps isn’t easy

While LLMs are all the rage, building applications with them is no cakewalk. Teams have to track the model's performance, how the application is being used and, most importantly, whether it is giving users the right answers: accurate, unbiased and grounded in reality. Without these insights, the whole effort amounts to flying blind, with no direction for making the product better.


Henry Scott-Green, who previously worked as a product manager at Google, saw similar challenges earlier this year when working on a side project that tapped LLMs to let users chat with websites.

“We talked to many product developers in the AI space and discovered that this lack of user understanding was a shared, critical challenge facing the community,” Green told VentureBeat. “Once we identified and validated the problem, we started working on a prototype (analytics) solution. That was when we decided to build Context.”

Offering high-level insights

Today, Context is a full-fledged product analytics platform for LLM-powered applications. The offering provides high-level insights detailing how users are engaging with an app and how the product is performing in return.

This covers not only basic metrics, such as the volume of conversations on the application, the top subjects being discussed, commonly used languages and user satisfaction ratings, but also more specific tasks, such as tracking particular topics (including risky ones) and surfacing full conversation transcripts to help teams see how the application responds in different scenarios.


“We ingest message transcripts from our customers via API, and we have SDKs and a LangChain plugin that make this process <30 minutes of work,” Green explained. “We then run machine learning workflows over the ingested transcripts to understand the end user needs and the product performance. Specifically, this means assigning topics to the ingested conversations, automatically grouping them with similar conversations, and reporting the satisfaction of users with conversations about each topic.”
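Context's actual SDK and API surface isn't documented in the article, which says only that customers send message transcripts via an API or SDK. As an illustration of what that kind of ingestion call might involve, here is a minimal sketch in Python; the endpoint URL, payload fields and function names are all hypothetical, not Context's real interface:

```python
import json
import urllib.request

# Hypothetical ingestion endpoint -- the article confirms only that transcripts
# are sent via API; this URL and payload shape are placeholders for illustration.
API_URL = "https://api.example.com/v1/transcripts"


def build_transcript_payload(conversation_id, messages):
    """Package a chat transcript in a shape a transcript-ingestion API could accept.

    `messages` is a list of (role, text) tuples, e.g. ("user", "Hi there").
    """
    return {
        "conversation_id": conversation_id,
        "messages": [{"role": role, "text": text} for role, text in messages],
    }


def send_transcript(payload, api_key):
    """POST the payload as JSON with a bearer token (sketch only)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    # Not executed here: the placeholder URL would reject the request.
    return urllib.request.urlopen(req)


payload = build_transcript_payload(
    "conv-123",
    [
        ("user", "How do I reset my password?"),
        ("assistant", "Click 'Forgot password' on the login page."),
    ],
)
print(json.dumps(payload, indent=2))
```

Once transcripts like this are ingested, the topic assignment, conversation grouping and satisfaction scoring Green describes would run server-side over the accumulated data.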

Ultimately, using the insights from the platform, teams can flag problem areas in their LLM products and work towards addressing them and delivering an improved offering to meet user needs.

Context AI’s product

Plan to scale up

Since being founded four months ago, Context claims to have signed up multiple paying customers, including Cognosys, Juicebox and ChartGPT, as well as several large enterprises. Green did not name the enterprises, citing NDAs.

With this round, the company plans to build on this effort by hiring a founding technical team, which Green says will accelerate development velocity and help the company build an even better product.

“The product itself has a few planned focus areas: To build higher quality ML systems that deliver deeper insights, to improve the user experience and to develop alternate deployment models, where our customers can deploy our software directly in their cloud,” the CEO said.

“At this stage, our goal is to continue growing our customer base while delivering value to the businesses using our product. And we’re seeing success,” he added.

Growing competition

As the demand for LLM-based applications grows, the number of solutions for tracking their performance is also expected to rise.

Observability player Arize has already launched Phoenix, a solution that visualizes complex LLM decision-making and flags when and where models fail, give poor responses or incorrectly generalize. Datadog is moving in the same direction: it has started providing model monitoring capabilities that analyze a model's behavior and detect instances of hallucination and drift based on data characteristics such as prompt and response lengths, API latencies and token counts.

Green, however, emphasized that Context goes beyond these offerings, which largely flag problem areas, and sits closer to web product analytics companies such as Amplitude and Mixpanel.

Originally appeared on: TheSpuzz
