Worktrace: Introducing the New AI Adoption Layer for Enterprise

AI has moved faster than most organizations’ ability to understand their own work. The real bottleneck is not models, but mapping what people actually do all day across tools, tabs, and teams into something precise enough that software can help. At 8VC we have spent the last decade backing companies that build this “work ontology” and then use it to transform industries, from our original Smart Enterprise thesis to the current AI Applications and Services wave.

Today we are excited to share that 8VC is co-leading the $9M seed round in Worktrace AI, which observes real workflows, turns them into visual process maps, and continuously surfaces the highest-impact opportunities for AI agents to take work off people’s plates.

Co-founded by former OpenAI product leader Angela Jiang, who helped bring the first versions of ChatGPT to market, and UIUC computer science professor Deepak Vasisht, a leading systems and networking researcher, Worktrace is backed alongside us by Conviction and a group of operators at the center of modern AI, including Mira Murati of Thinking Machines. 

In the Q&A that follows, we dig into how they are building this new “AI adoption layer” for the enterprise and what it looks like when real work is continuously translated into the best that AI can do.

Let’s talk through the origin story — how did you two meet and develop enough conviction to build Worktrace? What tailwinds did you observe, and why did you think now is the right time to build?

Angela: Prior to Worktrace, I was at OpenAI working as a product manager for the GPT models. A big part of my role was finding real-world use cases for research artifacts. Early on, it wasn’t obvious whether the models were good enough to apply to anything practical. But the capabilities rapidly improved, and the trajectory quickly accelerated.

Despite this, there was still a stark lack of adoption in both laggard industries and tech-native companies already engaging with us at OpenAI. They were incredibly interested in AI, but it often took months to identify a meaningful use case. The process dragged on, and either the ROI wasn’t clear, or the opportunity ended up being too insignificant.

This pattern continued to surface, even as the models were rapidly advancing. So when Deepak and I started thinking about building a company, we homed in on this adoption gap. We felt strongly that AI could deliver genuinely magical user experiences, yet it wasn’t making its way into people’s day-to-day work. We wanted to address this problem head-on.

I had also led a lot of AI literacy work for policymakers while at OpenAI. Deepak, as a professor, was similarly engaged in educating people on technology. We initially explored the idea through an AI literacy lens: many industries felt left behind by the AI wave, so we asked whether we could teach them what they need to know to advance their career and company success.

We were quickly confronted by the limitations. Most people simply don’t have the time to keep up with AI, and AI is going to continue evolving. It’s unrealistic to expect anyone to track those changes while also doing their job.

That insight led us to Worktrace. The core bottleneck we repeatedly witnessed, both at OpenAI and in Deepak’s research bringing AI into different industries, was that people didn’t know what they should be using AI for in their day-to-day work. They couldn’t identify the right use cases, and didn’t have a deep enough representation of their workflows to integrate AI tools effectively.

But that representation and understanding work (mapping out tasks in detail, identifying what is AI-compatible) felt like an ideal problem for AI itself. So we became interested in this meta-approach: using AI to help accelerate the adoption of AI. We realized that if we could solve that, it would create a flywheel where companies could continuously use the best models and tools without needing to consciously stay on top of every advancement.

We’d love to better understand the scope of the visual process mining problem. What’s exciting technically right now, especially given that process mining has existed for years and had many limitations?

Deepak:

What has fundamentally changed is AI capabilities. Seven or eight years ago, process mining largely analyzed logs: repeated clicks, repeated sequences, surface-level patterns. But those systems couldn’t understand the intent behind a task or handle small variations.

A helpful analogy is looking at signal-to-noise ratio. There’s so much noise in raw activity: someone is doing work, then checking ESPN, then tweeting, then returning to work. Older systems couldn’t distinguish what was actually part of a workflow versus incidental behavior.

Work itself is inherently messy. Extracting high-quality workflow structures from that mess, and mapping said structure to what AI can automate, is extremely challenging. What’s different now is that the underlying models can make common-sense judgments about what is or is not work. That opens up a new level of understanding.
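To make the signal-to-noise idea concrete, here is a minimal sketch of filtering a raw activity stream down to workflow-relevant events. The `Event` shape, the site names, and the keyword check are all illustrative assumptions, not Worktrace’s actual system; in practice a language model would judge intent from full context rather than a hard-coded list.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    app: str     # application or site in focus
    detail: str  # free-text description of the action

# Keyword stand-in for the model's common-sense judgment. A real system
# would classify intent from screen content and surrounding events.
INCIDENTAL_APPS = {"espn.com", "twitter.com"}

def is_work(event: Event) -> bool:
    """Judge whether this event is part of the workflow or incidental."""
    return event.app not in INCIDENTAL_APPS

def extract_workflow(events: List[Event]) -> List[Event]:
    """Filter a raw activity stream down to workflow-relevant events."""
    return [e for e in events if is_work(e)]
```

The point of the sketch is the pipeline shape: a judgment function applied per event, with everything interesting (the judgment itself) delegated to the model.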

Another big shift is toward AI-native processes. If you’re only mining clicks, then automation is limited to replicating those clicks. But if you mine intent, understanding how different people complete the same task with variations, then you can build agents capable of handling edge cases, making decisions, and acting intelligently.

Older automation assumed linear workflows. Today, with intelligence at the center, you can design hub-and-spoke systems where the agent selects tools and handles higher-order tasks.
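A hub-and-spoke design can be sketched as a central router choosing among registered tools. The tool names and the keyword-based routing below are hypothetical placeholders; in a real system the model would select the spoke from mined intent.

```python
from typing import Callable, Dict

# Hypothetical tool registry: each "spoke" is a capability the central
# agent can invoke. Names are illustrative only.
TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_record": lambda query: f"record for {query!r}",
    "draft_reply":   lambda query: f"draft reply about {query!r}",
}

def route(task: str) -> str:
    """Hub: pick a spoke for the task. Keyword matching stands in for
    the model's decision-making."""
    tool = "lookup_record" if "record" in task else "draft_reply"
    return TOOLS[tool](task)
```

Contrast this with a linear workflow: here the set of spokes can grow independently of the hub, and the decision of which tool to use is made per task rather than baked into a fixed click sequence.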

We’re also expanding process mining into multimodality. Previously it was logs and clicks; now we incorporate visual screen understanding. Next, we want to add voice, letting people narrate their workflows so we can capture the cognitive steps that never get written down. The richer the modalities, the better we can infer intent and meaningfully improve work.

What other context do you need in order to understand a unit of someone’s work and the real intent behind a workflow?

Deepak:

It’s a broad space. Today we understand the screen, the clicks, and the applications involved. But then there’s also audio, as people narrate what they’re doing. The more context you include, the more accurate the representation of work becomes.

There’s also company and team-level context. If you’ve already onboarded three customer success teams, and now you’re onboarding a fourth, what have we learned from earlier cohorts? How do workflows differ across individuals performing the same job? 

One of the most powerful signals is seeing multiple examples of the same work:

- One person doing the same task multiple times, with small variations 

- Different people doing the same task, each in their own way 

Those examples reveal natural variability. Downstream, they help us create evaluation sets for agents and give them concrete examples to learn from.
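One way to turn repeated observations into evaluation material is to hold out one trace per task seen more than once, keeping the rest as demonstrations. This is a minimal sketch under assumed data shapes (task-keyed string traces), not a description of Worktrace’s pipeline.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def split_demos_and_evals(
    traces: List[Tuple[str, str]],
) -> Tuple[Dict[str, List[str]], Dict[str, str]]:
    """traces: (task_id, trace) pairs from observed work. Tasks seen
    more than once contribute one held-out trace for evaluating an
    agent; the remaining traces serve as demonstrations."""
    by_task: Dict[str, List[str]] = defaultdict(list)
    for task_id, trace in traces:
        by_task[task_id].append(trace)
    demos: Dict[str, List[str]] = {}
    evals: Dict[str, str] = {}
    for task_id, examples in by_task.items():
        if len(examples) > 1:
            demos[task_id], evals[task_id] = examples[:-1], examples[-1]
        else:
            demos[task_id] = examples  # too few examples to hold one out
    return demos, evals
```

The natural variability across the held-in examples is what gives an agent concrete alternatives to learn from, while the held-out trace provides a check on whether it generalizes.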

Angela: 

Another way to understand this is through the lens of employee onboarding. Imagine dropping a highly capable agent into a company and asking it to run payroll on day one. It needs huge amounts of contextual knowledge: systems, codes, error mappings, documentation, departmental nuances, customer nuances, sometimes country-specific rules. Humans pick this up informally; models need structured access to it.

Today, companies often put an AI engineer on automating something like payroll, but the engineer and payroll team don’t speak the same language. Translation becomes messy and slow. AI is much better at bridging that gap because it sees the work directly and can translate between informal domain-language and structured engineering needs.

How have early deployments been going? Any key learnings around how customers use Worktrace? 

Deepak:

One of our design partners decided to go fully AI-first. They launched a companywide initiative to document all work: everyone wrote down what their week looked like, which formed the basis for workflows and SOPs.

But no one enjoys documenting their work, and SOPs quickly become incomplete, outdated, and inaccurate. Their internal AI team was trying to convert SOPs into automations but spent 90% of their time simply interpreting them and clarifying missing details.

That’s where Worktrace comes in. We dramatically accelerate the process by automatically understanding the work itself. Their AI team no longer needs to chase people for SOPs or decipher ambiguous instructions.

Another learning: AI adoption is iterative. Many assume process discovery happens once. In reality, workflows change, and AI capabilities change even faster.

Angela: 

We’re seeing strong traction in fintech, logistics and healthcare back office. These sectors have rich, structured, repetitive workflows that are ideal for automation.

We’d love to get a bit more color around the folks who are building Worktrace. How did you two meet? How is the team structured across product and research?  

Deepak: 

Angela is married to my best friend. When I decided to move to Silicon Valley to start a company, I immediately reached out to her. She was about to leave OpenAI and planned on taking a break. But after a couple months, she realized she wasn’t great at retirement and missed working on AI. We started exploring ideas, and things evolved naturally into Worktrace.

On the company side, from day one we chose a space with high technical risk and low market risk. We believe we can solve the difficult technical challenges, and companies desperately want to adopt cutting-edge AI. Our structure reflects that.

Our team is already extremely talent-dense. Early technical team members come from DeepMind and Meta, and our GTM hires bring backgrounds from McKinsey and Mastercard.

Angela: 

We’re building:

- a core research team advancing capabilities in understanding work and building intelligent automations 

- a product team focused on scaling those capabilities and delivering them to customers 

- go-to-market functions to support adoption 

Research is focused on building a continuously improving engine for companies – repeatedly integrating the best models into their highest-impact tasks.

Product is equally challenging. We need to ingest vast amounts of multimodal data, process it efficiently with ML, and do so in a way that is enterprise-ready, secure, and privacy preserving.

Today the team is small but extremely strong. We’re excited to build out the founding team across research, engineering, design, and GTM. 
