
Generative AI Moats in B2B with Emergence Capital’s Jake Saper

Publish Date: 2023/5/9

ACQ2 by Acquired


Transcript

Hello, Acquired listeners. We have here today our good friend, my good friend for many, many years and fourth time Acquired guest, I think now? At least the three-peat. This is at least a hat trick. Our dear, dear friend, Jake Saper, general partner at Emergence Capital, back for the third or fourth time to talk about maybe the most important topic we've talked about yet, which is

What the heck do founders and investors do about investing in generative AI right now, particularly in B2B SaaS generative AI? If you're starting a new company, think about that. If you are an incumbent, if you are an already established startup, this technology obviously will have enormous consequences. And nobody right now knows how to approach it, except for Jake and Emergence. Yeah, listeners. Yeah.

High expectations, David. I like that.

customer set, how should we think about generative AI? And Jake, you prepared a very nice deck from a lecture that you gave yesterday that we got to review ahead of this. And I must say, you're a good frameworks thinker. Thank you. It was the training in consulting when I was 22 that stayed with me.

We want to thank our longtime friend of the show, Vanta, the leading trust management platform. Vanta, of course, automates your security reviews and compliance efforts. So frameworks like SOC 2, ISO 27001, GDPR, and HIPAA compliance and monitoring, Vanta takes care of these otherwise incredibly time and resource draining efforts for your organization and makes them fast and simple.

Yeah, Vanta is the perfect example of the quote that we talk about all the time here on Acquired. Jeff Bezos, his idea that a company should only focus on what actually makes your beer taste better, i.e. spend your time and resources only on what's actually going to move the needle for your product and your customers and outsource everything else that doesn't. Every company needs compliance and trust with their vendors and customers.

It plays a major role in enabling revenue because customers and partners demand it, but yet it adds zero flavor to your actual product. Vanta takes care of all of it for you. No more spreadsheets, no fragmented tools, no manual reviews to cobble together your security and compliance requirements. It is one single software pane of glass.

that connects to all of your services via APIs and eliminates countless hours of work for your organization. There are now AI capabilities to make this even more powerful, and they even integrate with over 300 external tools. Plus, they let customers build private integrations with their internal systems.

And perhaps most importantly, your security reviews are now real-time instead of static, so you can monitor and share with your customers and partners to give them added confidence. So whether you're a startup or a large enterprise and your company is ready to automate compliance and streamline security reviews like

Vanta's 7,000 customers around the globe, and go back to making your beer taste better, head on over to vanta.com slash acquired and just tell them that Ben and David sent you. And thanks to friend of the show, Christina, Vanta's CEO, all Acquired listeners get $1,000 of free credit. Vanta.com slash acquired. All right, let's dive into it. We're going to spend the bulk of this episode on what all of this generative AI progress,

OpenAI, everything happening means for B2B SaaS companies and for investing in them. But let's start first to get us all on the same page for folks who aren't as familiar with

what actually is going on right now. Like, what are the LLMs that everyone's talking about? What are large language models? Let's start with that. Then let's talk a little bit about the current state of play. Then we'll get into B2B implications. Awesome. So at a high level, an LLM, or a large language model, is a program designed to understand and generate human language. It uses deep learning techniques to analyze vast amounts of text data and to learn the patterns and structures of human language.

And it uses these patterns to predict next words and phrases. So when you're using ChatGPT, what it's doing is making predictions on which words and phrases should come next based upon what you've typed previously. So that's an LLM in its most basic context. The other phrase that you've almost certainly heard is GPT. So what is GPT? Yeah, and I think this will be more interesting and perhaps fewer of you will know. Yes. So GPT stands for Generative Pre-trained Transformer.

And it's a type of LLM that has been developed and popularized by OpenAI, which is a company almost certainly all of you have heard of. And it uses these techniques to generate human-like language. It's based on this transformer architecture, which was first introduced back in 2017 and

And it's designed to process sequential data like language data in parallel, which allows it to process lots of information. And it's proven quite effective in natural language processing tasks, which is part of the reason why it sounds so damn realistic when you talk to it. You think it's a person. Transformers and this kind of branch, because I think a lot of people, myself included until recently, were like,

AI, machine learning, this has been a buzzword, you know, back since when Jensen and NVIDIA started evangelizing and creating CUDA 12-plus years ago. Transformers are a new branch of this whole domain that has become really, really important and useful for this use case, right? That's correct. Yeah. And so it's relatively new. It's six or seven years old.

And obviously the technology has gotten much, much better over time. It just enables the processing of massive amounts of data very, very quickly. And to do so in a way where the predictions are quite resonant with the user.
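
To ground that next-word-prediction idea before moving on, here is a deliberately tiny sketch of the concept: a bigram counter rather than a real LLM, with a made-up four-sentence corpus. A real model learns weights over far longer context, but the basic job, predicting what comes next from what came before, is the same.

```python
from collections import Counter, defaultdict

# A made-up, tiny corpus; a real LLM trains on vast amounts of text.
corpus = "the deal closed fast . the deal closed slowly . the deal fell through .".split()

# Count which word follows which (a bigram model; LLMs use learned
# weights over long contexts, not raw counts like this).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("deal"))  # -> "closed" (seen twice, vs. "fell" once)
```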

So what was kind of the path for Transformers from like academic development in 2017 through into open AI and then GPTs and then kind of where we are now? Who carried the torch? What was the like moment that took this from interesting research in AI field to like, holy crap, this is changing the world at faster speeds than we've ever seen before? Yeah.

It's a really good question and frankly one that I think merits its own mini saga-like episode. It's basically a combination of open source, with people publishing papers and building on top of each other, and commercialization efforts that OpenAI and others have pushed forward. The history of this, it's a bit more of an academic topic than this more applied conversation, but I think the history of it is quite interesting to folks. That is not a story that's been broadcast widely.

Nope. Or the like exodus from the Google Brain folks coming together with the Berkeley folks. There is a really interesting story to be told here. The Information has written some stuff on this, but it hasn't been done in like proper narrative history fashion that Acquired, you know, is so good at.

We were going to do OpenAI this season, and then we were like, I don't know what we can add. History is being written in real time. But you're right. I think you got to do the work. I can give you guys a half-baked answer, but I don't want it to be the canonical Acquired answer because it's not good enough for Acquired. Fair enough. One thing I do want to say on GPT is that that second word, pre-trained, is really important.

So what it means is that the models are pre-trained at a point in time with point-in-time data. And in the case of the current GPT-3.5 and GPT-4, those were pre-trained on data through 2021.

So if you ask the models, the off-the-shelf models, about what's happening with the war in Ukraine, they're totally unaware of what's happening. And that points to some of the limitations of these things. Now, there's lots of ways you can augment these models with more current data, and that's work that's going on right now. But it's important to keep in mind that these models are pre-trained and not currently being updated recursively.

I'm curious to get your take on this. I heard an interesting theory the other day, which is the internet from 2022 forward is basically all tainted because it is after GPT was released publicly. And so you kind of have to train on the pre-2022 internet to

Otherwise, it's sort of this recursive loop on training from the output of prior GPT models. It's like making a copy of a copy of a copy of a copy for those folks who remember copy machines. Just the quality degrades. Until suddenly we end up with these JPEG compression artifacts everywhere all over all of our answers to everything. That's kind of like the dystopian, one of the dystopian future uses of this technology, which is the internet will be primarily composed of

recursive material, and the new stuff that's generated by humans will be so small that it won't actually move the needle in these models. Right, it's kind of crazy. It's not quite Orwellian, in terms of there being some puppet master at play controlling information. It's almost like, well, whatever ended up getting encoded into the thing that we all believe to be the truth kind of becomes the truth, because all the future truth is generated off of

this past truth, iterated upon truth. To me, the solution to this or a solution to this, which we'll get to later on in more of the applied section is how do we elevate the contributions of the human?

How do we identify when the human is contributing their creativity, their insight into the system, and tag it as such so that the system doesn't lose that and just copy and copy and copy and copy and copy itself? Right. It's almost akin to the security industry where it's sort of the, well, they built a bigger ladder, so we need to build a bigger wall, and then they go to work building a bigger ladder. It's like humans just sort of need to keep up-leveling what they're contributing to these bodies of work such that

There's some new thing where we're like, surely a machine can't do this. But a few years later, the machine will do that. And then we'll need to figure it out again. And I think that's incumbent upon the people who are developing this technology, including the people who are developing application layer technology to ensure that that insight, creativity, even brilliance from the human is captured and brought back into the system, both to make their core application better, but frankly, to not end up in a world of Xerox copy machines.

It's funny getting philosophical here, but you can't avoid it with generative AI. Maybe this is a great transition to how to invest and build in this environment. I don't know, and I'm very cognizant of the danger of making predictions about how things are going to play out here. But historically, technology has always followed the path of value accrual. If we end up in a Xerox copy machine world,

That probably doesn't seem like it's generating a lot of value. Capitalism will flow in generative AI just like it has in all technologies in the past towards like where value is being delivered. There's probably a role for humans in creating and directing that value. That's not necessarily true. Look at healthcare. Sometimes you have patterns that make it so that capitalism doesn't actually flow to value. Fair enough. But healthcare is a very, very broken market.

I also worry that that's not true. I mean, this goes deeper into philosophy, but I don't know if that's true on the consumer application side of things. Like if you're building a virtual companion for a lonely person, I don't know how important it is that what they say or do is necessarily unique or new or correct. It's really just a function of like, how long can it capture your attention?

That's a great counterpoint if you look at social media and whatnot today. There are only certain applications where truth matters. I want to talk about that. That's like a core part of what I want to talk about today because it matters a lot, particularly for building in B2B. All right, let's get into it. So you're building a B2B SaaS company. How are you thinking about this? One framework to use to think about how to build with generative AI is to think about how important is accuracy for the product you're building.

And how important is understanding the real-world outcomes, right? And there are certain products where accuracy doesn't matter. There are certain products where it matters a lot. There are certain products where real-world outcomes are irrelevant, and there are some where they're super high stakes. So if you think about products like the consumer applications that we kind of alluded to before, let's take a company like Character.AI. So this is a company that allows people to chat with whoever they want, an avatar of whoever they want, including like dead celebrities.

There is no correct answer when you're chatting with a dead celebrity. Accuracy doesn't matter. By definition, there is nothing that's correct.

It's also true that real world outcomes don't really matter in that context. So that would be, you know, if you were to draw a two by two with accuracy and outcome orientation, that would be kind of bottom left where neither of those things matter. And so you don't have to think hard about the UX with those things, with those two characteristics in mind. Right. It kind of can just be a toy. That's fine. That meets the use case. Its goal is to keep your attention.

And that's like, frankly, where a lot of the generative AI stuff that's happening today is being built, because it's trying to keep people's attention. If you think about B2B use cases, in almost all B2B use cases, outcomes matter by definition, right? Because someone is paying you to achieve some goal. There are some B2B use cases where accuracy is super, super important. I would argue many or most. And there are some B2B use cases where accuracy matters less. So let's take copywriting, for example.

If you're generating copy for a new product you're building, the outcomes matter because you want to know, like, did the person buy the products? You know, whatever the outcome is you're trying to achieve, right?

But accuracy is less important because you're kind of creating something new. It's descriptive. It's adjectives more than it is facts and nouns and such. There's no penalty for wrong answers or multiple shots on goal. Yeah, exactly. I think unsurprisingly, some of the initial breakout successes, at least thus far in B2B, have been companies like Jasper and Copy.ai, which are building products that kind of have that focus in mind. We can talk a bit in the defensibility section about whether or not those are likely to endure.

But let's think about the majority of B2B use cases. These are situations where companies need high accuracy. So like a great example would be a medical use case. If an AI is being used to transcribe the conversation that David's having with his doctor and doctor says, David, what's your blood type? David says, I'm O. And the system captures that as A, David's dead.

accuracy really, really matters. And so the question is, like, given the fact that AI can be wrong, even as these models get better and better, there's still a 1% to 3% chance that the answer is wrong. How do you build a B2B use case? How do you build B2B applications that leverage this technology but don't put David's life at risk?

It begs the answer of some sort of human-in-the-loop system, or what's the thing that they do in space? The historical U.S. space program would put multiple computers computing the same answer and then take the winner of three, in case some radiation got through and affected the way a computer was doing the calculations. You could imagine some sort of winner-of-three type thing. Where are you going with this? The initial UX experience should involve some sort of human in the loop, some sort of co-pilot, some sort of coach.

We at Emergence have been talking about this concept of coaching networks since 2015, which is the core idea of using AI to coach workers on how to do their jobs better in real time. And the idea here is as they're doing their task, they're getting some message from the bot that says, hey, try this. They're accepting, rejecting or modifying, importantly, modifying the suggestion that's made.

And then the system tracks the real world outcome. If this is a sales context, does the deal close? How quickly does it close, et cetera, et cetera, so that everyone else in the network gets their suggestions improved the next time that takes place.
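
As a rough sketch of the loop being described here, not Emergence's framework or any portfolio company's actual system, the record you would need to keep might look something like this (all field names are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuggestionEvent:
    suggestion: str                  # what the bot proposed
    action: str                      # "accepted" | "rejected" | "modified"
    final_text: str                  # what was actually used (captures human edits)
    outcome: Optional[float] = None  # e.g. did the deal close, and how fast

events: list[SuggestionEvent] = []

def record(suggestion: str, action: str, final_text: str) -> SuggestionEvent:
    """Log what the human did with a suggestion; the outcome is attached later."""
    ev = SuggestionEvent(suggestion, action, final_text)
    events.append(ev)
    return ev

# The human modified the suggestion; later the real-world outcome arrives,
# so everyone else's future suggestions can be improved by this data point.
ev = record("Offer a 2-week pilot", "modified", "Offer a 30-day pilot")
ev.outcome = 1.0  # deal closed
```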

That context is very important because the human is playing two roles there. The first is accuracy. They're trying to ensure that the answer that's being given is in fact correct. And the second thing they're trying to do is add their own insights to the system. And this gets back to our Xerox situation. If you have a situation where the system is just repeating itself over and over again, it's not ever evolving. It's not necessarily getting better.

But if you've got a human in there, particularly one who's adding their own edits, tweaks, insights, attempts at making the system better, then the insights that person is able to add to the system will be propagated throughout everyone else in the system.

This is interesting, right? And like, I suspect will be a theme of where we're going to spend a lot of time in this episode, if not the whole episode, that category of software already existed and was a highly investable theme.

before generative AI. It was one of your main themes. We spent so much time on runs here in San Francisco talking about it. This is like your Guru investment, which has done great. You can easily see, though, how adding generative AI makes this even better.

That's the idea, right? Like you go from a place where most companies that were doing this had to build most of the infrastructure in-house, often developing their own models, often with far less performant models than what's available today, to a situation where you can just plug into an API and get incredibly performant models off the shelf. It also seemed like there was something that happened where...

We went from a world where the important thing was that you have some sufficient amount of proprietary data to train a model to this world where like...

The base level foundational model is trained on the whole internet, whether it be the OpenAI stuff or the open source stuff. So funny, they're both named open, but for one of them it's not true. True. Right. By open, we mean closed. And so like you can augment, you know, with fine tuning, you can augment these foundational models. But at the end of the day, the whole paradigm shifted from a "you must bring your own data" to "these things are phenomenally useful even if you have no proprietary data." Yes. That's such a good insight, Ben. Yeah.

It leads to this defensibility question, but it also means anyone can start a company doing this. But then the question is, what is ultimately defensible? So just zooming out for a second, the fact that anyone over a weekend can play with the GPT-3.5 API and build a product has resulted in the current state of the startup market, which is effectively a horde of generative AI enabled hammers looking for a nail.

It's hundreds of thousands of people that have built effectively the same product that are all now live on Product Hunt. So I encourage you to go look on Product Hunt and see what's live right now. And it'll be very difficult to tell the difference between what's going on. I'm curious for your reaction to this. For me, at least, it's obviously been kind of demoralizing as an investor because it's like, this is too much. I'll tell you the main reason why it's demoralizing to me, because I don't fear having to sort the wheat from the chaff. That's what I get paid to do.

The reason I'm demoralized by it is because people have forgotten the core lesson in company building, which is you should build something people desperately need.

We've just forgotten that. We've been focused on, this tech is really cool. It's really magical. And I can over the weekend just hack away at it and build something really cool. Okay, now what should I do with this thing I built? That's the new area. Every freaking cycle: VR, crypto, blah, blah, blah. Yeah, same thing happened with crypto. We're in a situation where there's so much hysteria over the technology that we've forgotten the core reason why you should build a company in the first place, which is to solve a desperate problem.

It's so funny, Jake, you and I were joking the other day about being involved in East Coast colleges, some of which may be our alma maters, which we love dearly. But that like some of the like lessons of Silicon Valley and like kind of the Stanford ecosystem have not made it there yet. And like this is the core one. And yet even we.

forget it when there's a new gee whiz technology. You're referring to like East Coast academia for academia's sake. Yeah. Yeah. Or just like East Coast academia who are developing really novel technologies that could have real important implications in the real world. But the in the real world part is thought about six steps later. It's human nature when there's new cool tech that comes out to get really excited about the tech and play with the tech and forget perhaps like some of the more boring principles that are still enduring. And we're in that phase right now.

We're in the like horde of Gen AI enabled hammers phase. I'm optimistic that that phase will die down and we'll get into problem solving phase. And David, hopefully you'll feel less disheartened by the state of the market. Okay. So Jake, let's say I am building something that's trained on all public data, or I don't even know what it's trained on, which is the case most of the time, but the output sure does do amazing things. How do I build a defensible business using this technology? Yeah.

Yeah, let's use the Jasper and Copy.ai example that we talked about before. These are companies that are building in the copywriting use case. And I will use the phrase "job to be done" throughout this conversation, which was a phrase popularized by Clay Christensen. Highly recommend reading his stuff on this. But it's about thinking about a product, not for the product's own sake, but in the context of what is the job that it's trying to achieve. So in the case of those companies, they're trying to write marketing copy.

The question with those companies: they've been accused of being just wrappers on top of LLMs, just a wrapper on top of OpenAI. I don't think that's exactly the right way to think about those companies and, more broadly, defensibility in this space. I think the core question to think about is: what portion of the job to be done can be done mostly or entirely with off-the-shelf LLMs? If I'm writing copy,

how much of that job could be done within an LLM versus how much additional scaffolding is necessary to actually complete that task? And it could be the case that there are some jobs to be done that require a lot of scaffolding and therefore are likely more defensible. And there's some where, hey, the brilliant insight that comes out of the LLM itself actually gets me to 90% of my answer. And those companies, I think, are less likely to endure. One thing that jumps out to me,

relative to our earlier conversation about being in the gee-whiz technology phase: this doesn't excuse you from needing to figure out what the job to be done is. Yes, yes. This isn't the podcast for this conversation. But when you think about defining product market fit, the way that Andy Rachleff defines it is: what do you uniquely provide that your customers desperately need?

And that's the framing to think about when you're thinking about what problem to solve, like what desperate problem exists and what unique insight do you have on how to solve it? One way to get there is from lived experience. So Eric at Zoom,

He was the VP of engineering at WebEx. He knew that there was a fundamental issue with that tech stack, and he knew that there was a desperate need that customers had to solve that problem. And so he had an unfair advantage in finding product market fit. There's a bunch of companies that I've worked with, Regal, Assembled, et cetera, where people in a previous life had a problem, looked around for off-the-shelf solutions to solve it, couldn't find the off-the-shelf solution, left and built that solution and found product market fit relatively quickly.

So we got to get back to that state. And this is like the classic B2B company story. Things are different in consumer, which we should talk about, where more wild experimentation can be rewarded. In B2B, you're trying to get somebody to pay you to do something. So you need to be really specific about what you're doing. And so you really need to know what the problem is. Yeah. And in some cases, the more obscure the job to be done or problem, the more opportunity you have for a unique insight,

which we can get to a bit if we talk a bit about kind of startups versus incumbents. So what are some examples of things that people could do on top of the raw LLM output to provide defensibility? Let's talk about the job to be done of legal contracting.

In the job to be done of legal contracting, there are three basic things that need to happen. You need to draft the contract, you need to negotiate the contract, and you need to agree upon the contract, both internally and externally. So ultimately, sign the contract. That's the job to be done with legal contracting. I'll talk a bit about a company that we work with called Ironclad, which is a player in this space that recently sprinkled some magic generative AI pixie dust on their product. And I'll explain how this fits into their job to be done and the defensibility potential over time.

So one initiative they just launched is effectively a Gen AI enabled redlining tool. So if you are going through your contract and you highlight a clause and you say, I want to make this clause mutual, it will make a call to GPT-4 and come back with a suggestion to redline the entire clause to make it mutual.
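
To make the mechanic concrete, here is a minimal sketch of what such a call could look like using the openai Python client as it existed around this recording. This is purely illustrative, not Ironclad's actual implementation; the clause and prompt wording are invented, and it assumes an OPENAI_API_KEY is set in the environment.

```python
import openai  # pip install openai (pre-1.0 client style)

clause = "Company may terminate this Agreement upon 30 days' written notice."

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You redline contract clauses. Return only the revised clause."},
        {"role": "user",
         "content": f"Rewrite this clause so the right is mutual:\n\n{clause}"},
    ],
    temperature=0,  # keep edits as deterministic as possible for legal text
)
print(response["choices"][0]["message"]["content"])
```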

And it's pretty phenomenal. The CTO called me a few months ago after he coded it up over a weekend and was like, oh my God, look what we have built. This is very powerful. It took him longer to actually productize it and get it into the product. But it's now, you know, there and delivering value, and customers are enjoying it. I don't think that in and of itself is defensible, as excited as I am about the technology.

And I think that's true for two reasons. The first is it's only solving a narrow slice of the job to be done. And going back to my previous framework, for that narrow slice, you actually could do most of that within the context of the LLM directly. You don't necessarily need another massive SaaS solution to do it. The other reason why I think that on its own isn't sustainable is it doesn't necessarily integrate proprietary outcomes data.

I think that gets defensible and much more interesting if you're able to say, hey, when I use this version of this clause, the contract closes 15% faster. That is data that the LLMs, no matter which off-the-shelf LLM you're using, are never going to have, no matter how much data they train on, because that is proprietary to you and you've gathered that data through the workflow. And so architecturally then...

Do these models sort of let you create feedback loops with your own data to create a better outcomes version of them? It depends how you define these models. Like if you're just using a call, an API call to GPT-4, no. There is an infrastructure layer that's being built that allows you to store and integrate your own proprietary data in things like vector databases so that you can maintain that knowledge.
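
As a toy illustration of the retrieval idea behind those vector databases: store embeddings of your proprietary documents, find the nearest ones to a query, and stuff them into the prompt. The vectors below are fake three-dimensional stand-ins; a real system would use model-generated embeddings and a purpose-built store.

```python
import numpy as np

# Invented embeddings for proprietary docs the base model never saw.
docs = {
    "Mutual clauses closed deals 15% faster last quarter": np.array([0.9, 0.1, 0.0]),
    "Standard NDA template, 2021 revision":                np.array([0.1, 0.8, 0.2]),
}

def retrieve(query_vec: np.ndarray, k: int = 1) -> list[str]:
    """Return the k docs most similar to the query by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(docs, key=lambda d: cos(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# The retrieved text goes into the prompt alongside the user's question, so an
# off-the-shelf model can answer using data it was never trained on.
print(retrieve(np.array([0.8, 0.2, 0.1])))
```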

In addition, and you guys referenced this earlier, there's a growing ecosystem of open source models and open source stacks that you could use to customize the models with this information.

This kind of closed versus open ecosystem debate is really fascinating right now. And my guess is there will be some kind of hard lines drawn in the sand over time on this. But back to the ironclad example, ultimately, the defensibility lies in the fact that they have all of this outcomes data because it's a broader workflow tool that has the full job to be done. And just zooming out for a second, Ben, you asked the question, what else do you need? What other scaffolding do you need?

It's all the boring SaaS 1.0 stuff. It's robust permissioning and approvals, because you need to do that to make sure the contract has approval from different people within the company. It's a native text editor. It's data integrations.

And the other formulations, you know, eSign, all the other things that you need to actually complete the full job to be done. Audit trails and logging and compliance. It's all that stuff that, I think in the hysteria of Gen AI, we've lost sight of: oh, you need all this stuff to actually make software work well. But you do. And so the point I would make is that there are a lot of companies that are either solving jobs to be done that can primarily be done within the LLMs and are therefore not durable, or

the job to be done they've chosen is such a narrow slice that there's not enough of this basic scaffolding you need to build something that's ultimately defensible and has workflow that gathers proprietary outcomes data. On this Ironclad example, which is such a great one, I want to clarify something for me and hopefully for listeners, too. It sounded like...

You're not that excited as an investor about this feature, but I wonder if that might be mischaracterizing how you feel about it. Is that true or is it that you're not excited about if a competitor were to launch with this feature as like a new startup built around just this feature that they would have a very tough time competing with Ironclad? Which of those are you saying or something else?

I'm super excited about the feature in the sense that when I first saw it, I was also blown away. I think it has tremendous potential for the platform. The point I would make is if you were just developing this on a standalone basis, I would be less excited. If a company came to me and pitched me with a Gen AI-enabled contract redlining tool,

set aside my conflict because I'm invested in Ironclad, I wouldn't be excited because to me, the majority of that job to be done could be done within the context of the LLM. Maybe not as well, and certainly not with proprietary data that you'd be gathering over time. But if you're doing this within the broader context of a well-formed job to be done, that's where this thing gets interesting and defensible. Right, right, right. I would imagine as a board member of Ironclad, you're very excited about the future of

all of the new Gen AI enabled product features that Ironclad can ship over the coming years, because they've already built this robust framework to get the job done and a venue where it is happening. And it's also why I don't have as much fear about the countless redlining Gen AI enabled startups that have popped up over the past three days.

I thought you were going to say months, but there probably have been countless over the last three days. The math on this is crazy, Ben. Like if you look at the growth of ChatGPT, it took ChatGPT two months from launch to get to 100 million users. It took Instagram three years. It took Netflix 10 years.

And I've heard this stat cited a few times before. I have no doubt that it is unbelievably fast, but I don't think this is apples to apples because I think OpenAI is counting registered users, not active users. The hard thing about this is typically you would look at a monthly active user, but in this case, because it's only been a month or two, registered and active are basically the same thing. So I think what you're getting to, which is a good question, is what's the retention curve going to look like?

and TBD. What this shows is just the massive mainstream interest in this technology, which, combined with the fact that anyone can code with it, is a big part of why there has been so much activity around new startup creation here. I mean, there was a South Park episode on it already two months ago.

Yeah. And they used the words ChatGPT and OpenAI 50 times during the episode. So it's not like it's in the abstract. Mainstream America, the mainstream world, is already like, oh, cool, this is the product and this is the company. I'm trying to think, has this happened before? Where it's like, this being on South Park is the consumer use case for OpenAI's ChatGPT product.

But what we're talking about here is like, how can this technology be used in the enterprise, which is a totally different thing, but it's like the same company and technology being used in both ways.

Part of the reason it's so exciting is because the extensibility of this stuff is effectively endless. You can start to see why people use this analogy, but it's kind of like when the internet was created, kind of the same thing happened. Like people went nuts when Netscape came out on the consumer side, but also a lot of Silicon Valley B2B technology companies were like, oh, we need to understand what this means for our side of the house. You know, and that led to the

cloud. It's a good point, David, I guess. And that's a good transition to thinking about how we contextualize this in the broader context of B2B software revolutions that have happened before. How does this compare to the on-prem to cloud revolution? How does this compare to mobile? So on-prem to cloud: incumbents had a really tough time adopting and adapting in that moment, because it's really hard to rewrite your entire code base.

Most of the incumbents from the on-prem era are dead. Siebel is dead. Salesforce became the cloud-based CRM and took off. And it's not just rewriting your code base. It is...

a physical change of how your entire company operates. So of course there's the like, we should make data centers and we should stream the bits down to customers and we should deliver it through the web and blah, blah, blah. But there's also like, we should completely change the nature of our relationship with customers such that they buy a different product from us, such that they're buying access to software rather than like,

the truck full of our stuff, driven by our employee who arrives to install it and charges service fees for installing it. Like, of course, all those companies are going to die. Rewriting code, it's like firing everyone and hiring a whole new set of people to do a completely different set of things such that the job to be done for the customer is the same. But, you know, it's like the tip of the iceberg looks the same, but what's under the water is actually a different iceberg. That's exactly right.

Yeah. And so unsurprisingly, there was a dinosaur-killing asteroid that hit, and the incumbents died, and life was born in the cloud. Mobile was a little different in B2B. It did require some replatforming, but there have been a number of incumbents who have adapted with some success, certainly in the consumer space, Facebook very notably. To think about B2B, I would argue, like, Salesforce...

in the mobile revolution has done an okay job of adapting to mobile. But if anyone's used the mobile product, it's still not awesome. There's now a bunch of use case specific mobile CRM applications that have been built to fill in the gaps. We're invested in one called Vymo, focused on financial services. So in that era, it was difficult but not impossible for the incumbents to adapt. And I think that's what's happened. Generative AI is a completely different ballgame.

It's as easy as an API call. And so Salesforce has already integrated this technology. Now, how much success they have, how well they've integrated, how other incumbents do, etc. We're all still very early in figuring that out. But this is a different ballgame and has meaningfully different implications for the startup opportunity. Yeah. I mean, we're just talking about the Ironclad example, right? Like they are the incumbent in the space and they are the best, you know, probably the best actual product version of generative AI in the space as well.

I'm obviously biased in saying that, but I think that's true. So there's interesting implications for if you think about them as the incumbent, you know, if a startup is trying to pick off little parts of their job to be done and you can do most of those within the LLM, it's going to be less durable. What types of incumbents do you think are the most at risk from complete disruption, like their business going to zero from the fact that this technology exists?

It's a good question. I think that there are incumbents for whom the current UX or UI paradigm is not one that will sustain effectively in the new environment. So let's get into UX and UI. As everyone knows, a chat interface, hence ChatGPT, is a common way to interact with these LLMs.

There are going to be some B2B use cases and certainly many more, I would say, B2C use cases where a chat interface is superior to a point and click interface. And if you are an incumbent who has built their entire stack on a point and click interface, you're going to have an innovator's dilemma problem. So I don't know if this is true, but let's imagine a world where a chat first interface is the best way to build a CRM. If that's the case, it's going to be really hard for Salesforce.

It's not that Salesforce can't afford to hire great product people to build that. It's that they have an installed base of millions of daily active users. They're used to the point and click interface, and they can't disrupt that business. So there's an innovator's dilemma issue that could arise from this UX paradigm shift. One working hypothesis I've been sort of noodling on ever since we interviewed Avlok, the CEO of AngelList, on our last ACQ2 episode is:

The more services-oriented a firm is, the more at risk they are of LLM disruption. He was pointing out that for all of everything that AngelList does, all the tens of thousands of portfolio companies sort of managed by AngelList and hundreds, maybe thousands of VCs,

that have a back office, and, I can't remember what he said, but like hundreds of thousands of K-1s, there's a 170-person team that works at the company, inclusive of their software engineers and management, everyone down to designers, to perform all of that activity. And they do a lot of AI behind the scenes

for operational efficiency. His point was that those 170 people at AngelList, we were joking on the episode, is probably roughly the same number of people that Andreessen itself has in their back office. And AngelList, nothing against Andreessen, right? But they support just orders of magnitude more funds, portfolio companies, K-1s, and they use

LLM and Gen AI technology to get that leverage. So it basically provides operating leverage for use cases that didn't used to have it. Like you can have high gross margins in what we used to think was an exclusively low gross margin industry. Yeah. I think it's also an opportunity to think about, not just internally, but externally, what product could I build that was once a service? An example of that, and I don't know if anyone's working on this yet, but

pricing strategy is something our portfolio companies spend so much time and money trying to figure out.

It requires trying to estimate a price elasticity curve. How much are people willing to pay for this product? Different types of people. What should the right packaging be, et cetera? That type of thing today is largely the domain of consultants and guessing. But you can imagine a world where you could input all of your historical data on this, and a model could spit out: this is how much you should charge for this, this is how you should package it. And then it could be updating that in real time as it's getting data from how people are purchasing. That is not an existing category.
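
To make the thought experiment concrete, a toy sketch of what the simplest version of that product might compute: fitting a linear demand curve to historical price points and solving for the revenue-maximizing price. All numbers are invented; a real product would model elasticity, segments, and packaging far more richly.

```python
import numpy as np

# Invented historical observations: (price charged, units sold).
prices = np.array([10.0, 20.0, 30.0, 40.0])
units = np.array([900.0, 700.0, 480.0, 300.0])

# Fit a linear demand curve: units ~ slope * price + intercept.
slope, intercept = np.polyfit(prices, units, 1)

# Revenue R(p) = p * (slope * p + intercept); setting dR/dp = 0
# gives the revenue-maximizing price p* = -intercept / (2 * slope).
best_price = -intercept / (2 * slope)
print(f"suggested price: ${best_price:.2f}")
```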

So this isn't a question of like disrupting an incumbent. It's about creating an entirely new category. And also to your point about UI UX, that is a wholesale different UI UX from the current way that job is done now, which is by people and consulting and steak dinners. If you productize it, that's going to be very different. So one way to think about this is what jobs to be done couldn't have been done with previous technology.

And that's often going to be a thought exercise there: what is the domain of consultants today? My partner, Gordon Ritter, in 2013, wrote an article called The Death of McKinsey with sort of this spirit in mind. Perhaps not the death of McKinsey itself, but I think it was just a few years too early. Yeah. I think many of these consulting efforts, if they don't adopt this technology themselves, are likely to be productized away. Then there's another category to think about if you're a startup thinking about how do I play in the incumbent landscape, which is:

What entirely new jobs to be done will be created by this technology? And there's the obvious answers here around infrastructure, right? There's a whole layer of companies being built today that are doing vector databases, prompt engineering, model chaining, model training, et cetera, that exist as this technology rises. So that will obviously be an opportunity for startups. There's going to be a massive opportunity around compliance, right?

In generative AI. And there's a bunch of stuff happening, you know, whispers of what's happening on Capitol Hill right now in terms of potential regulation around generative AI. But my view is that regardless of what happens in DC, enterprises themselves are going to demand that their vendors have some form of compliance on this front.

And that compliance will likely entail something around, hey, what data did you train on? Is this even legal for me to use this product? How do I ensure that you're not taking my data and using it in the model? Or if you are, I'm aware of that and getting paid for it, etc. How do I ensure that there are proper guardrails around the technology such that, you know, the thing doesn't go haywire and screw up my business? All of these types of things, there will be companies built to do this.

You put this very elegantly in our little outline we were working on before the episode: Vanta for AI. Which I know is a sponsor. Such a great partner of ours. But yeah, totally, right? It's a super exciting and I think really challenging space to build in, but one that I think will become important. And it's an opportunity for startups because the space doesn't exist today. While we're in compliance land, it's kind of an interesting thing to note that

SOC 2 is not a regulatory framework; it has nothing to do with anyone on Capitol Hill. So will AI be the same way, where it's not legally enforced, but it's sort of a set of standards that gets adopted? That's kind of been my current framing. In general, I like to invest in stuff that has business tailwinds and not just regulatory tailwinds.

And my sense is there's enough here that people are going to care. You guys probably read last week, Samsung discovered that three of its employees had uploaded proprietary data to ChatGPT. They had a bunch of secret meetings, and they wanted someone, or something, to summarize the takeaways from the meetings. So they just put them in ChatGPT without thinking, oh, that data now belongs to OpenAI and Microsoft, our potential rival. More and more of that is going to happen.

Right now, what's happening is companies are just saying, don't use ChatGPT. If you're firewalling this technology completely, then you're going to get left behind. So you've got to find a way to integrate this technology in a way that is enterprise compliant. And I think that will be driven by the enterprises, not necessarily the government.

Is that how ChatGPT's terms of service work? Any text that you upload here, we can read and any of our employees are allowed to look at this in plain text? So I don't know if it's any employees can read it. It's not like Uber God mode. No, I don't think it's God mode. I hope not. God, just thinking about some of the things I put in ChatGPT. I believe they changed their terms of service recently where it's either the default or easier to default into them not being able to access that data. But

But the broader point remains that there's a level of fear, and I think justified fear, around sharing particularly sensitive and proprietary data with third parties. Part of this will get addressed with this open versus closed ecosystem conversation we were having before. If you are super, super nervous about your data leaving your premises, you're likelier to opt into the open source and open ecosystem and kind of build your own, which is now easier to do than it traditionally was because of some of these breakthroughs.

But I think there are also going to be a bunch of privacy and compliance related technology that's necessary to ensure, even in that world, that the data you're using is legally kosher and not leaving your premises, and the thing doesn't go off the rails, et cetera. Well, historically this has been such a huge opportunity for startups. I'm thinking of Zoom. Jake and I were business school classmates, and you joined Emergence right after we graduated; I remember talking about the Zoom investment as you guys were making it.

And at the time, you know, FaceTime was really good, still is really good. The more directly competitive product, Google Hangouts, was pretty good. Zoom probably still is better, but still. I'm biased, but I think so. The obvious insight at the time, and Zoom became much, much, much bigger and more consumery over time, but the obvious insight

was, hey, enterprises aren't going to use FaceTime for a whole bunch of reasons. And they also don't like Hangouts for a lot of the same reasons. Yeah, the enterprise-ification of new technology creates massive businesses, both the underlying application businesses as well as the derivative technologies, the compliance and regulatory and everything else, that make sure they succeed. It's even more important in this world than I think it was in previous worlds, because the risk of things going wrong is so much worse.

I assume you guys are familiar with it. If you're not, I'll share a bit about this concept of AutoGPT. Are you guys familiar with this? I literally just started reading about it last night. And listeners, this is going to date how long we wait between recording and releasing episodes, but AutoGPT is still a pretty new thing as we're recording this. It's possible that we won't exist by the time this episode comes out because AutoGPT will have taken us over.

So AutoGPT is effectively an AI agent that you can give a goal in natural language, and it can attempt to achieve it by breaking the goal into subtasks and then using the internet and a bunch of other tools you can plug into it

in an automatic loop to solve it. So basically, you give it agency to solve problems. It's implemented as a browser plugin, right? So it can do things like type in websites, go to them, fill out forms, download files. It can be. There's a bunch of different formats, but that is one of them. So basically, you can tell it, and there have been a bunch of examples of folks doing this: create a business. Like, I'm going to give you $100, create a business and try to make it as profitable as possible. And it can go and... I'm going to give you $100, turn it into $200 as quickly as inhumanly possible. Yeah.
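
A heavily simplified sketch of that loop, with the model call stubbed out and the task strings invented; this is the shape of the idea, not AutoGPT's actual code, which also wires in tools like browsing, file I/O, and code execution.

```python
def call_llm(prompt: str) -> str:
    # Stub: AutoGPT makes real model calls here; we return canned answers.
    if prompt.startswith("plan:"):
        return "register a domain\nset up a storefront\nrun some ads"
    return "done"

def run_agent(goal: str, max_steps: int = 10) -> None:
    # 1. Ask the model to break the goal into subtasks.
    subtasks = call_llm(f"plan: {goal}").splitlines()
    # 2. Loop: execute each subtask. Real agents also feed each result back
    #    into the next step's context and re-plan as they go.
    for step, task in enumerate(subtasks[:max_steps], 1):
        result = call_llm(f"do: {task}")
        print(f"step {step}: {task!r} -> {result}")

run_agent("I'm going to give you $100, turn it into $200")
```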

It can start a company with Stripe Atlas. It can go to Shopify. It can spin up a shirt store. It can do all of those things. And it doesn't take much imagination to understand how sci-fi nightmares become real. And not just from a consumer perspective, but from a B2B perspective. If your well-meaning employee goes to AutoGPT and says, do a bunch of scouting for prospects and draft a bunch of emails and send a bunch of emails, you

You can imagine a world where some information that is shared in those emails is probably not what you want shared. Or, hey, do a bunch of research on this supply chain opportunity and reach out to these vendors and get some price quotes and come back to us with the right one. You can imagine a world where a bunch of stuff is purchased that you don't mean to purchase. There's all sorts of ways in which I think we could be trending towards our own FTX moment in generative AI. What do you mean by that?

So, what I mean by that is things are evolving so quickly that people are building first and thinking about oversight and guardrail second. And so, it's likely that there will be some sort of catastrophic issue in a company in the coming quarters that is driven by...

someone building an agent that goes wrong. And the reason I call it the FTX moment is because it will likely be similar to what happened there in that there will be, you know, a very famous blow up. And the issue won't necessarily be because there was a flaw in the underlying technology. FTX didn't blow up because crypto is bad. FTX blew up because there was poor oversight.

And I think the same thing could be true here. It's not that the underlying technology is bad. The underlying technology has a lot of limitations that you have to build around. But if you have proper oversight, I think it can be really helpful. But people aren't thinking about that right now. There are a bunch of Gen AI enabled hammers just looking for a nail. Another good analogy could potentially be the Sony hack, if folks remember that. That was such a key point: companies, even big enterprises like Sony, didn't think about cybersecurity

in any way the same form until that happened. And the incredible disaster, not just for Sony but for so many other companies that got caught up in it because of their emails with Sony people. Snapchat was caught up in that. And that was a watershed moment for enterprise cybersecurity. Yeah. So I think we will have something similar to that, which I think ultimately will be healthy.

We will likely go through some sort of trough of disillusionment with this, as is true with almost all new technology innovations. And it's possible that the apex will look like one of these catastrophic moments. And a lot of enterprises will pull back and say, whoa, whoa, whoa.

Is this ready for prime time? What do I need to do to make sure it is? And that will be kind of the healthy growth moment. It'll be an opportunity both for the derivative companies, the Vanta for AI to come out and help make that a reality. But it also goes back to UX design. I think we will learn a lesson in how we build these technologies to effectively include the human in the loop. It's pretty interesting, the trough of disillusionment, because normally when I look at this Gartner hype cycle graph, it's like...

about VR or about some technology that we hoped would reach mass scale and mass utility, but we all got too excited and it didn't. And then over time, slowly over the next five, 10 years, it did. In this situation, there's obviously an insane amount of utility for everything

Hundreds of millions of people. And that is already true. And so our sort of fall from grace here, when the hype gets ahead of the utility, isn't going to be that there's no utility. It's going to be that we're not ready to embrace everything that comes along with that utility. So we're going to be reining in, or attempting to rein in, some technology that clearly is useful.

And that basically never goes well. Like, you kind of can't tell people: stop using that super hammer that's cheaper, faster, stronger, better than your existing hammer. It's like, I'm pretty sure I'm going to keep using the super hammer. Otherwise, I'm not going to come work at your company. People do what they want, where they find value. That is true. And I think that will particularly be true on the consumer side of things. I think the genie's out of the bottle.

When you're thinking about selling into a mid-market or enterprise company, there is a level of conservatism that exists there and should exist, which I think will rein in some of these behaviors. But it's the opportunity, right? If you're an application builder or anyone in the stack that's building to try to sell into a proper B2B company, you have to keep this stuff top of mind. I think one of the most interesting things to pay attention to in B2B software over the coming year or two is how user interfaces and user experience do evolve.

We talked a bit before about the Salesforce innovators dilemma issue, if chat interfaces become popular. There are some situations where chat interfaces will become more popular in B2B, although the infinite landscape, the infinite canvas that that presents, I think does have limitations. So I think it won't be ubiquitous necessarily. But in general, how do you figure out how to effectively build a co-pilot or a coach?

what are the best UX practices to do so, to ensure that you're both getting the best of the human as well as ensuring accuracy, and that the thing you build doesn't go off the rails? The companies that do that best will likely be the winners of this next generation. And it feels like it's already here, right? In the UX for some of those things you just mentioned. I haven't actually tried Copilot, but I'm thinking of Notion's AI feature. It's not

really a chat interface. It's baked into the workflow that already exists in the platform.

I think a lot of these UXs, obviously, in the back end, they're conversational. You're having kind of a conversation with the LLM. But the front end will likely include, depending upon the use case, some elements that are more conversational and some that aren't. I think Notion is a good example of that, right? In the sense that you can type a phrase and Notion can offer you: do you want to make it more professional? Do you want to make it more casual? Et cetera. So it's bounding the canvas and telling you what is possible. The scary thing with pure chat is you just don't even know what's possible.

And there are lots of limitations within that, you know, and scary things that could happen. But I think a lot of these interfaces will take the best of that and say, hey, here's some suggestions on what you could do, press this button and see what happens. And then the best of them will learn what the human does with that suggestion, what edits they make, et cetera, and then tie it to a business outcome so that every time a new suggestion is being made, it's improved by the historical data. Ben and I, we were just texting yesterday, I think, right, about a piece of data that

we've had for a long time on Acquired episodes and have basically completely ignored: the graph of listener engagement through the course of an episode. Like, when do listeners stop listening? And we kind of have never done anything with it, but you can imagine. If we had, though, it would have changed our behavior. We would have been like, oh, don't make episodes too long because it drops your completion percentage. Right. So we might have optimized on the wrong things, but...

But also just completely ignoring it is probably not the right answer. This is why the human in the loop matters. Just like that discussion you guys just had. Surfacing that data is going to be really important. And then you as the human have to figure out, what do I do with that? That's where the interpretation and, frankly, the reason you get paid exists. Because otherwise, like, AutoGPT generated an All-In Podcast script.

And so you guys need to figure out how you can keep upping your game, which is ultimately going to come down to insights like what you just shared. Well, that's kind of where I was going with the Xerox risk earlier. At least in our world, people will engage with what is compelling. If a Xerox world is not compelling, people won't engage with it. And then somebody will come along and tweak it as a human, and they will engage again. I think that's right. There is likely to be an interstitial period, though,

where we build these suggestion-based technologies and the humans don't stay engaged. You can imagine a world where people start building what I'm describing, a smart coach or co-pilot UX, which hopefully helps mitigate some of the accuracy risks and tries to capture some of the human's brilliance.

But what ends up happening is the human gets used to it and just clicks accept, accept, accept, accept, and isn't actually engaging their brain at all. The DocuSign problem. Yeah, exactly. Yes, there's a real problem there. I saw a quote in the Journal last week: the woman who runs HR at Kraft Heinz said, the thing that's keeping me awake at night right now is how to use these AIs as a co-pilot, not an autopilot. It's a succinct way to summarize this problem, but

There are too many business use cases where accuracy is critical and having the pilot fall asleep could crash the plane. And so back to the UX question, I think the best companies will find ways to keep users actively engaged.

And part of that may be things like the way you actually design a suggestion. So we work with a Seattle based company called Textio, which has been doing augmented writing since like 2015, I think. And their user interface is really clever. They're focused on HR writing. So if you have a job post, they will go through and help you highlight phrases and say, if you change this to this, you're 12% more likely to attract whatever type of candidate you want to attract and eliminate bias in that process.

Most of their suggestions, if you hover over them, will tell you, hey, change this to this. But every once in a while, they highlight a phrase and say, change this, without giving you any suggestion. Oh, just so you don't get in the habit of click, click, click, click? There are two reasons why they do it. The first is so you don't click, click, click, click, click. The second is that they're actually trying to generate new data, what you could call mutations, for the system. So it's not the Xerox problem.
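To make that withhold-a-suggestion pattern concrete, here is a minimal Python sketch of such a policy. It is purely illustrative, not Textio's actual implementation; the one-in-100 rate, the function names, and the data shapes are all assumptions:

```python
import random

OPEN_ENDED_RATE = 0.01  # assumed rate: roughly one in 100 suggestions

def make_suggestion(phrase: str, model_rewrite: str) -> dict:
    """Return a UI event: usually a concrete rewrite, occasionally open-ended."""
    if random.random() < OPEN_ENDED_RATE:
        # Withhold the rewrite: the user must compose their own replacement.
        return {"phrase": phrase, "suggestion": None, "mode": "open_ended"}
    return {"phrase": phrase, "suggestion": model_rewrite, "mode": "prescriptive"}

def record_edit(event: dict, human_text: str, corpus: list) -> None:
    """Log what the human actually wrote; open-ended edits are new data."""
    if event["mode"] == "open_ended":
        # These are the "mutations": phrasings the model did not propose.
        corpus.append({"original": event["phrase"], "rewrite": human_text})
```

The open-ended branch does double duty: it interrupts the accept-accept-accept reflex, and it harvests genuinely new human phrasings for the corpus.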

Interesting. And if the user base is wide enough, even if I only get one of those open-ended prompts, you know, one out of 100 times, and most of the time it's really speedy, you still create a very large new corpus of data. Exactly. And if you take this even further, you could think about almost incentivizing or gamifying this process for the user.

If I innovate on the data set by coming up with a new way to answer this problem or phrase this thing in a job post, and that has a positive business outcome, it helps close the job post faster, whatever the outcome is, I should get paid. If the insight I came up with in conjunction with this AI really helps the whole system evolve or mutate to a better business outcome, I as a user should get paid.

And so there are going to be new compensation models that I think could arise from this stuff. The Patreon of AI. Yeah, perhaps. That's an interesting concept. By the way, I went to a Scary Pockets show a couple weeks ago at the Fillmore. Scary Pockets is an amazing funk band; they cover music, and the CEO of Patreon is in the band. Amazing shout-out. Jack Conte, right? Exactly. Jenny, my wife, went to high school with him.

Oh, that's so cool. As a lapsed musician myself, I find his ability to combine his day life and his nightlife so inspiring. Is Carlos still at Patreon? Carlos is still at Patreon, yeah. I pinged him to see if he wanted to come to the show, but he moved to Montana. Oh, wow. Nice. This is our other GSB classmate, who is also an accomplished musician. He was, is, the CFO, I think. I believe he is the CFO. Yeah.

Back to the UI and UX question. There's a framework that James Cham at Bloomberg Beta helped me think through, which is thinking about the AI-and-human relationship as an intern-to-manager relationship. The AI is the intern, and its job is to provide leverage to the manager; the manager's job is to review the intern's work, improve upon it, and then press submit.

The human gets leverage from the AI, but the accountability stops with the human. That's what people are going to pay for, I think. Right. The intern screws up, the manager is going to get fired. Yeah, exactly. And so for the use cases where accuracy is important and it's tied to important business outcomes, building with this framework in mind is likelier to create positive outcomes.
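As a rough illustration of the intern-to-manager framing, here is a minimal Python sketch of a review gate where nothing ships without a named human approver. The types and field names are hypothetical, not any particular product's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str
    author: str = "ai-intern"          # the AI produces the first pass
    approved_by: Optional[str] = None  # accountability stops with the human

def submit(draft: Draft, manager: str, edits: Optional[str] = None) -> Draft:
    """The manager reviews the intern's work, improves it, and presses submit."""
    if edits is not None:
        draft.content = edits    # the human's improvements overwrite the draft
    draft.approved_by = manager  # the named human owns the outcome
    return draft

draft = Draft(content="Q3 renewal email, first pass")
final = submit(draft, manager="jane@example.com", edits="Q3 renewal email, revised")
```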

But right now, so much of the conversation is the opposite, which is that the AI is going to be the boss and take over your job. And the reality is there are a ton of jobs where accuracy may not be as critical, where the human's input doesn't have as much effect on the business outcome, and where the task is super static and doesn't change very frequently. Those will be, and are being, automated away. But for the use cases with the opposite profile, you need the human effectively involved. Yeah.

It's so hard for a human to stay in the loop if the suggestions are good enough. This is why Tesla makes you grab the steering wheel every X seconds: you will just tune out, and it is your fault if you crash the car. You have a big counter-incentive, including your own life, to not tune out, but you're still going to, because we're all humans and we're wired that way. All those YouTube videos of people going to sleep in the back seat, just insane stuff.

David, you and I got a DocuSign with a lot of pages and a few things that we needed to sign maybe three, four days ago. I don't think you read that whole document. I think you just clicked a lot. I think I know what you're talking about. I'm pretty sure that in most cases these days, I'm just like, oh, Ben's going to do this for me. That's why I read it. I was like, there's no chance David's doing this. I need to do it. It's called parenting.

There are AI-enabled therapists if you guys are looking for some intervention here. A business marriage counselor. But let's take the Tesla example for a second, and even the DocuSign example for a second. I think one framing to use here that hasn't been discussed much is how much influence does variation in human behavior have on outcome?

What is the variance in outcomes in general? So in the case of driving the car, the variance in outcomes is kind of binary. You get into a crash or you don't.

But in a lot of business contexts, like sales, for example, there's a top 10% that performs way, way better than everyone else, and that is largely driven by deltas in their behavior. There's uncapped upside and downside. Exactly. And so in that world, I think you can build user interfaces, and potentially compensation schemes, that keep the human being engaged.

And maybe you get to a world where just the very top performers are the ones that stay engaged, and then you fire everyone else because the system can do the lower-level stuff. I read a paper that my colleague Jess shared with me a few weeks ago analyzing co-pilots in example scenarios. It found that the biggest positive impact they had was on people who are already high performers. It helped the high performers perform even better.

Which is both a heartening and a little bit disheartening finding. Perhaps in the future, co-pilots can be built to actually up-level folks who aren't operating at a high level already. I was just thinking about that. What are the high-performing people I've worked with like?

They're people who are engaged. I thought you were going to name names. I was excited to hear you. Well, participants on this podcast are at the very top here. I'll leave it to you guys to debate who's higher. But yeah, it's people who aren't doing what I do with all of our...

back office stuff, the required click, click, click, click. I'm a low performer there, but hopefully a high performer when it comes to writing scripts; I'm highly engaged. That's the point. David, forgive me for zooming out to a life philosophy for a second: my guiding life philosophy is understanding what gives you as an individual energy, and then pursuing your life to maximize those things and minimize everything else.

And it's possible that writing scripts and maybe playing with Nell are the things that give you the most life energy. Maybe not, but they do. Doing DocuSign reviews is not something that gives you energy, but it's possible that Ben actually gets energy from it or someone else does, et cetera. The process of self-actualization is to get yourself into the zone of energy creation.

Here we go. Gen AI is pushing everybody to the top of Maslow's hierarchy. That's the positive spin on this, right? If we design the systems correctly, you can get to a place where it's like, you know what? The person who was like the lower performing sales rep, that never really gave him or her energy anyway. And so they can be replaced and the high performers can be coached and they can go and do the thing that gives them more energy. Doesn't this argument also require universal basic income though? Because at some point there's not enough income producing jobs for...

everyone to do the thing that gives them the most flow. I think that is probably true. But one thing that is worth keeping in mind, as we talk about all the bots that are coming for our jobs, is that after ATMs, there were more bank tellers. The fear was that once ATMs came, bank tellers were going to be gone, and now there are more bank tellers than there were before. Is that a population growth thing? That's stupid. Why would there be more bank tellers? The bank tellers are doing other things. Yeah, the point is humans like interacting with humans, and

yeah. Before, bank tellers were just giving you cash, and now bank tellers can do a vast array of things that they could never do before. Manage your relationship with the bank. I get the new-jobs thesis generally, but the more-bank-tellers thing always smells funny to me. I haven't dug into it deeply. I know that Ezra Klein goes back to it frequently, and so I'm kind of trading on Ezra here. I don't mean to invite you on my podcast and then tell you you're wrong. But what I'm going to do is put Ezra in my place, since that's where I stole that one from, and you can debate with him. Yeah.

I think the broader point, though, is absolutely right, which is that you have no idea all the new jobs that get created by this new boom. Like all of the jobs that we all have: venture capitalist was a very niche thing, though not non-existent; podcaster didn't exist 30, 40 years ago; the job of program manager at a technology company, where I started my career, basically didn't exist 30, 40 years ago. Most new jobs for college grads are ones that are new.

Yeah, it's beyond my expertise to really authentically prognosticate about where this is going to go from a broader jobs perspective. But I think it is possible that for the jobs that do remain, people who have genuine energy toward doing those jobs will be able to do them more effectively in conjunction with the help of a robot. As long as they think of themselves as the manager and they think of the intern as the helper. And I have two

areas I want to make sure we hit before we wrap up. Ben can jump in with others. One from our outline we need to make sure we hit, because we'll be doing founders and investors a great disservice if we don't: you have in here how to think about pricing in Gen AI. I have heard nobody talk about this, so please give us your thoughts. This is evolving quickly, and I'll explain why in a second. But one thing is pretty clear,

which is that the current paradigm of per seat pricing is unlikely to be the future. And there's a bunch of reasons why that's true. One obvious reason is that there's a potential cannibalization effect.

If this technology does make the user better, faster, et cetera, you just need fewer users to get the same job done. And so if you succeed, presumably your contract size could actually shrink. And so there's some cannibalization. Now, you could argue like if you make them more effective, you can hire more of them, et cetera. There'll be situations where that's not quite one-to-one. But in general, the goal with pricing is to tie yourself to value creation.

That is the framework to think about when you're trying to figure out how to price. Per seat made sense when you were selling a hammer. When you're selling a generative-AI-enabled hammer, it may not. There are other reasons why per-seat pricing isn't necessarily ideal for AI.

Another is that per-seat pricing can dissuade spreading across the company. As we talked about before, I believe the medium-term moat in this space is going to be robust workflows around a complete job to be done, and the longer-term moat is going to be proprietary business outcomes data. You're best suited to gather that proprietary business outcomes data when you have a lot of people using your product.

And if you're pricing per seat, it dissuades that from taking place. So pricing that way also potentially makes the product worse over time.

Okay. So what do you do? What's the answer? Yeah. So just to double-click on per seat, which has become, I mean, correct me if I'm wrong, you're the expert, but the standard in SaaS. The reason I think it is, is that it's really incremental: you're going to tell an enterprise to adopt a new tool, and the pitch isn't, the way you do that is we sign a seven-figure deal; it's, a small team of you guys whips out a credit card, and we grow over time. That's a much better sales motion.

It is, although it can be hard to really get to the seven-figure levels if you start that way. But that's a much broader conversation around go-to-market models for SaaS, which we can do an entire episode on if you guys want. The way to think about this is to start with first principles, which is: do an ROI analysis to figure out how much value your widget is creating. And this gets back to the first thing we talked about:

What is the desperate need that you're solving? How big is that need? And how effective are you at solving it? This is the honest truth moment where it's like, okay, I'm solving this need, it's this big, and this is how much of it I'm solving. And then you can ask yourself, okay, how much can I charge? So this is not how do I charge; it's how much can I charge. And in general, SaaS companies can charge roughly between 10 and 30% of the value they create.

In a more monopolistic setting, you can charge a lot more because there are no other options. In a more commoditized situation, you obviously charge less. But that is a rough framework. And be really honest with yourself about this, obviously. Part of it is you have to gather data from customers, hopefully friendly customers. This is hard when you're just starting as a startup. You really can't do this because you don't know. It's true. Which is why pricing is an iterative process. Most companies in the startup phase...

Either, if there's a really good comp, they just charge what the comp is charging, maybe a little bit less because they're a startup and need to get folks over; or they go to five different pilot customers, give five wildly different prices, and do price discovery that way. So this is more of the academic framing. Certainly as you scale, this should be the framing you're using. The process of getting there can be a messy one, but you need to keep it proactively top of mind. Yeah.
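As a back-of-the-envelope illustration of that ROI framing and the 10-to-30% capture rule of thumb, the math fits in a few lines of Python. Every input here is invented:

```python
# Value created by the product, from a (hypothetical) ROI analysis.
hours_saved_per_user_per_year = 150
loaded_hourly_cost = 80   # dollars per hour, assumed
users = 40

value_created = hours_saved_per_user_per_year * loaded_hourly_cost * users
# 150 * 80 * 40 = $480,000 of annual value created

# SaaS rule of thumb from the conversation: capture 10-30% of value created.
floor_price = 0.10 * value_created    # commoditized end: ~$48,000 ACV
ceiling_price = 0.30 * value_created  # monopolistic end: ~$144,000 ACV
print(f"Chargeable range: ${floor_price:,.0f} to ${ceiling_price:,.0f} per year")
```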

So let's say you figure out how much to charge using some combination of what we just described. Then the question is, how do you actually charge? And as I said before, the goal here is to tie your pricing mechanism to value creation. That is how you stay aligned, right? You did add value, therefore you should get paid, but you don't want to disincentivize usage. And that's a really important point, as we talked about before.

So what's an example of that? In the case of Textio, the company we mentioned before, the classic way to charge would have been per seat, in terms of the number of recruiters using their job post product. But the number of recruiters may not actually correlate with how heavily the product is used or how much value it's creating. They may not be hiring a ton of people right now; they may be hiring a lot of people, et cetera. You could also charge on a volume basis, which is how many job posts go through the system, but that dissuades usage.

What Kieran, the CEO, decided to do was to charge on the basis of how many jobs the customer expects to fill over the course of the coming year. That basically creates unlimited, all-you-can-eat usage of the product.

But if they're hiring a lot of people, then they should be paying more, because they're getting more value out of it. If they're not hiring that many people, then you potentially charge less. There are ways to deal with wrongness there, right? If you overestimate three quarters in a row, then we should give you some kind of rebate, or we should talk about how we give credits toward your next contract. Likewise, in

the opposite direction: you can bring out a big hammer and say, if you go over your limit, it gets much more expensive on a per-hire basis, so you should estimate correctly. Yeah, that's right. And there are ways to validate. There's some work that goes into actually institutionalizing and operationalizing a model like this. And there are also downsides, because the customer is not used to buying in a model like this; they're much more used to buying per seat. But as people start experimenting with new models, for the reasons I talked about before, there'll be more openness to exploring approaches like this.
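Here is a minimal sketch of what that estimate-plus-true-up model could look like in code. The per-job rate, the overage multiplier, and the rebate policy are invented for illustration; they are not Textio's actual terms:

```python
BASE_RATE_PER_JOB = 100    # dollars per job in the annual estimate (assumed)
OVERAGE_MULTIPLIER = 2.5   # the "big hammer" on jobs beyond the estimate
REBATE_RATE = 0.5          # credit back half the value of unused jobs

def annual_invoice(estimated_jobs: int, actual_jobs: int) -> float:
    """Charge on the up-front estimate, then true up at year end."""
    base = estimated_jobs * BASE_RATE_PER_JOB
    if actual_jobs > estimated_jobs:
        # Under-estimated: overage is priced punitively to encourage honesty.
        overage_jobs = actual_jobs - estimated_jobs
        return base + overage_jobs * BASE_RATE_PER_JOB * OVERAGE_MULTIPLIER
    # Over-estimated: issue a credit toward the next contract, not cash back.
    credit = (estimated_jobs - actual_jobs) * BASE_RATE_PER_JOB * REBATE_RATE
    return base - credit

print(annual_invoice(estimated_jobs=500, actual_jobs=620))  # 80000.0
print(annual_invoice(estimated_jobs=500, actual_jobs=400))  # 45000.0
```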

And then the bonus question on how to charge in the world of AI is: how do you use pricing to actually improve your product? Which happened in the SaaS revolution. Yeah. How do you actually get people to submit their data to your system so that your product gets better for everyone else, and get them to allow you to use their data anonymously across the product? The

proprietary outcomes data we talked about, I think that's the long-term moat for these businesses. And it's more powerful if you're not building a model on just one company's proprietary outcomes data, but on all of your customers' proprietary outcomes data. Now, this is less likely to happen with datasets that are core to the company. So the way we think about this internally is: what are critical non-core datasets?

Datasets that are important, that people are willing to pay a lot of money to access in an anonymized way, but that aren't necessarily core to your company. Hiring data is a good one. It's critical data, but it's not core: Google is never going to give anyone else the code behind its search algorithm, but its hiring data? They might, in a very anonymized, protected way, if they get a pricing break. Which gets back to the pricing thing.

So you can think about saying, hey, you can have the single-tenant version of my model that I built for you, and it's X. Or you can have the multi-tenant version that's 0.8X. Not only is it cheaper, but you also get access to everyone else's insights. So you have a twofold incentive to participate in this structure. That type of pricing thinking will become increasingly popular.
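In code, that two-tier structure is almost trivial, but it makes the twofold incentive explicit. The 20% discount is the figure from the conversation; the rest is assumed:

```python
def quote(single_tenant_price: float, multi_tenant: bool) -> dict:
    """Price the single-tenant model at X, the pooled multi-tenant one at 0.8X."""
    if multi_tenant:
        # Cheaper, and the customer's model benefits from everyone's
        # anonymized outcomes data.
        return {"price": 0.8 * single_tenant_price, "pooled_insights": True}
    return {"price": single_tenant_price, "pooled_insights": False}

print(quote(100_000, multi_tenant=True))   # {'price': 80000.0, 'pooled_insights': True}
print(quote(100_000, multi_tenant=False))  # {'price': 100000, 'pooled_insights': False}
```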

Here's the part of my thinking on this that is still evolving, because this whole space is still evolving: given that in many cases you're now renting a new piece of infrastructure that in and of itself has varying costs associated with it, how should you pass that cost on to your customer? Depending upon how much OpenAI is charging you for whatever model you're using,

that'll have meaningful implications for how you can charge. So you have to think a little differently. Traditionally, software has never had a cost-plus pricing mentality. Most physical goods are cost-plus: it costs me this much to make it, and I'm going to charge 20% on top. Software doesn't have that paradigm, which is part of the reason it has such high gross margins. But this is a world where we're adding a new COGS line, a new cost of goods sold. Very real variable costs. And the question is, as you get better at the engineering problem... Let's say a company

has a use case where there's AI magic happening, customers are loving it, and the company figures out, ooh, we can make 20% fewer API calls if we cache at this layer. That efficiency gain should accrue to you for figuring out the engineering, not to your customers. So you want to continue to get the operating leverage on new efficiencies you find in your company too. Exactly. This is all happening and evolving very quickly, in real time.
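To put rough numbers on the caching example, here is the gross margin arithmetic with inference as the dominant cost of goods sold. All inputs are assumed for illustration:

```python
annual_revenue = 1_000_000
api_calls_per_year = 50_000_000
cost_per_call = 0.004  # dollars per call, assumed blended rate

cogs_before = api_calls_per_year * cost_per_call  # $200,000 in inference COGS
cogs_after = cogs_before * (1 - 0.20)             # caching cuts calls by 20%

margin_before = (annual_revenue - cogs_before) / annual_revenue  # 80%
margin_after = (annual_revenue - cogs_after) / annual_revenue    # 84%
print(f"Gross margin: {margin_before:.0%} -> {margin_after:.0%}")
```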

There are also ways to optimize if you're using third-party models, like OpenAI's suite. I know some folks, I won't name them publicly, who are using OpenAI's models to train their own proprietary models. They're building a use-case-specific app, and they'll do a bunch of calls on GPT-4 and then spend $50,000 training their own model off of that.
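A sketch of the pattern being described, generating a fine-tuning dataset from a frontier model's outputs, might look like the following. This uses the pre-1.0 OpenAI Python client style; the prompts, paths, and key are placeholders, and whether this use is permitted by a provider's terms is exactly the legal question that comes up next:

```python
import json
import openai  # pre-1.0 client style; newer SDK versions differ

openai.api_key = "sk-..."  # placeholder

def collect_pairs(prompts: list, out_path: str) -> None:
    """Call GPT-4 on task-specific prompts and save (prompt, completion) pairs."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            resp = openai.ChatCompletion.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
            )
            completion = resp.choices[0].message.content
            # JSONL of pairs: the training set for a cheaper proprietary model.
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```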

There are interesting legal questions here, right? Is that legal? This is like the LinkedIn bootstrap on address books. It's AI laundering. Yeah, it's AI laundering. But isn't OpenAI doing some of that as well? The dataset they've trained on is exactly that. Well, it's closed AI; we actually have no idea what they're doing. If it's dirty money all the way down, then what's a few more germs? We've talked about the regulatory stuff here. One thing we didn't touch on is what the legal implications of all this will be. I think it's going to be years before this gets sorted out, and between now and then, someone will make billions of dollars. It's just like the LinkedIn thing. Yeah, exactly. People have to build. I had someone ask me recently, should I just not do anything for now until it gets figured out? And I was like, well, do you want to

play in the next five, ten years? No? Then you should probably go make a bunch of money right now. To be clear, people should be behaving, as in all contexts, with a high level of integrity; I don't want this to read as excusing a lack of integrity. When I say make a bunch of money, I mean go create a bunch of value for customers and ask people to pay you for the value you created. That's exactly right. And the way people price, given what's happening on the underlying COGS layer,

is going to have to evolve quickly, because some people are training their own models off OpenAI, and some people are doing their own open-source stack, which potentially could be cheaper to serve. So there could be some cost advantages against competitors here. We talked about UX as a super-fast-evolving, exciting part of this with a bunch of unanswered questions. I think the infra, and the way the infra translates into application-layer pricing, is a really exciting space where a lot of innovation is going to happen over the next year or two.

Awesome pricing discussion. Thank you for all of the alpha that you have just given us and founders listening to this right now. I said there was another topic I want to touch on before we wrap here. I just want to do a temperature check with you as a venture capitalist. Obviously, this space and your corner of it in B2B is top of mind, something you're spending a lot of time and effort on.

This is a really weird time in venture and startup investing. Like we just came off this like 15 year boom with lots of mini booms building it all up to a huge deflation that happened violently and rapidly. Yeah.

And now here we are with another one. You know, it's very disorienting, or at least I find it very disorienting. I'm curious, how are you feeling? How is Emergence feeling? What's going on? There's a level of schizophrenia, for lack of a better word,

where you're pivoting between situations where companies may have been struggling to sell because sales cycles have gotten much worse, or have financing issues. And certainly the SVB crisis didn't do anything good for the blood pressure of this whole ecosystem. Yeah.

And then, you know, you're also chatting with folks who are building extraordinarily exciting products, many of which are me too products, but some of which I think have the potential to really be enduring companies. And so I think it's always true in venture that the best of us find ways to control our emotions and kind of find a centered place. I think this time it's the hardest time I've ever experienced to try to do that.

But it's also in some ways the most important because ultimately our job for the founders that we serve is to be emotional calibrants to them.

As bad things or good things happen, it's to help them calibrate their reaction and support them, particularly when things are challenging. So a lot of this time is about trying to emotionally regulate your excitement, your fear, your anxiety, your sense of opportunity. This is also a time of great anxiety, not just for VCs but across the startup landscape. There's a real sense of FOMO, which is part of the reason why you see the horde of hammers. And there's also a human anxiety level of,

what is the world going to be like for my kids? What should I be teaching my kids? Those are scary things to think through. All of that is very true, and I don't want to discount it. But one thing I'm also struggling with is a

capital allocation question, right? All of this is occurring when tech and venture, both public and private, broadly defined, like we said, just went through this massive deflationary cycle. Prices crashed; let's just put it that way. Let's be blunt. Interest rates

are like 5%, which turns out is actually a very attractive place to park a lot of capital. And so risk appetite has gone way, way, way down. And all of a sudden, here is this new, like incredible opportunity presented on a platter against the backdrop of an incredibly different macro environment from the last time.

I'll make this perhaps an easier question to answer. What are your conversations with LPs like right now? Yeah, we just had our LP meeting, our annual LP meeting a couple weeks ago. So it was top of mind. And they asked a ton of questions about AI because like, you know, both as consumers and obviously as investors themselves, it's top of mind. I think that one of the key things that we talk about with them and that we've tried really hard to stay disciplined around is time averaging our deployment pacing.

You can never time the market, either from an interest rate perspective or from a technology innovation perspective. And so one of the core lessons of investing is to invest ratably over time. Be disciplined about that. Do not deploy a ton of capital when things seem hot. Do not pull back when things seem bad. And so...

For us, we tried to hold discipline to that. We're investing out of our sixth fund now. It will be a four-year, roughly, fund cycle. Effectively, all of our funds have had a three-and-a-half to four-year fund cycle, including those that happened in the 2021 cycle.

build-up. And we will do that for fund seven as well. You're one of a small number. Yeah. And so that is something we've talked about with our LPs. It seems like others did not follow that path. And we'll do the same thing now. Higher interest rates obviously mean lower exit multiples, but we're investing for the future, not for now. On the technology landscape, there'll be a bunch of stuff, including stuff that we invest in, that blows up and goes nowhere.

And hopefully, you know, there'll be stuff that goes somewhere. But we'll also continue to invest in this two, three, four years from now when some of the answers to the questions we've discussed today will become clearer.

So a lot of this comes down to discipline and thinking about this as an institutional practice, where you're investing ratably over time, understanding that most of your investments won't work. But if you invest at that disciplined pace and you're able to spend time with the right people, one or two of them will hit and everything will work. It's interesting. We could do a whole other discussion on this, about why brand power is so powerful in venture investing.

And I think that's unlikely to change, despite how much what venture invests in changes over time, as we've been talking about this whole episode. But yeah, brand and institutional staying power to be able to do that. Because what's so hard about it is that everything you say sounds like, oh yeah, of course, duh.

That's what you should always do in any type of investing. The problem is the game on the field changes. Yes, it's easy to say. But when you rewind two years and companies are raising Series As at 200 post or 300 post, well, your options are don't deploy, or play the game on the field. Well, that's true. So, I got a really good question from one of our LPs that was good for me to wrestle with.

We generally strive for healthy ownership percentages when we first invest because we spend so much time with our founders. We make one investment per partner per year so that we have the time to try to become the most important partner to our founders. That's our second core value. We take it really seriously.

And the LP asked me, how have we been able to do that? Why have we been able to maintain our high initial ownership when others haven't had the same ability? After reflection, I think there are three potential answers. The first is we are able to make higher-conviction bets earlier, before other people are seeing it, because we're so focused.

In the case of this coaching network stuff, we've been doing it since 2015. And so we may see a company where they don't yet have obvious product market fit, but we have been spending enough time thinking about this construct that we're willing to take the early bet and get paid for that bet if it pays off.

There's a second category of companies, which you described before: the obvious companies that are super hot, doing really well, and highly priced. We win those deals, we pay up, and we generally get less ownership. That obviously lowers our average ownership. Correct me if I'm wrong, but I think there's still a class of firms, of which I think you guys are one, and there are very few of them,

that even in those cases, the ownership you're getting might be lower than in that first class of deals, but it's not below a certain threshold. And that's a high threshold.

That's true. And hopefully we've earned that through our founders' references. That's what carries the process where a new founder is trying to decide if it's worth working with someone like us versus someone who may be less expensive, so to speak. It's ultimately about the value that you're able to add. Ultimately, this is a services business. And perhaps, to the earlier conversation, we may get disrupted by AI ourselves, since we are a services company.

But that's ultimately what it is. There's also a third category of companies, a negative sort, where we get the ownership but it's not necessarily a great company.

The reality is it's a portfolio; there are examples of all of that in our portfolio. If you invest with that mindset, and you do it over time, and you do it in a disciplined way, without deploying too much or too little too quickly, then hopefully you win. I think you hit on the key point, which really is the theme of this whole episode. I didn't know going in that we would tie these two things together, but it's value, right? Where's the value? Are you disciplined enough to invest in it?

And do you not dilute your own value? Yeah. How do you charge for the value? If there's one key takeaway, and I also hadn't really thought about this coming in, it's that the biggest issue with the Gen AI startup landscape right now is that people are not thinking about the value they're creating. They're not focused on that; they're focused on the technology. And if you just start with the core principle of, in my business, what is the desperate problem I'm solving, and how much value am I creating in solving it?

That helps clarify your thinking, whether you're a Gen AI startup, an incumbent, a venture capital firm, whatever it is you're doing. Yep. That feels like a great place to leave it for now.

Jake, always a pleasure. Thank you so much for coming back on Acquired and being such a good friend to us over the years in many, many ways. We're looking forward to seeing you next time. Thank you, David. Yeah, so that makes me GPT-4 or 4.5 or 5, depending upon how we're counting the number of appearances I've had here. You may actually be an AI model at this point. I may be an AI model. The crazy thing to think about is how this conversation will age.

Maybe we do this on an annual or semi-annual basis on this topic, because the things we talk about will look so different. I'm doing this lecture that Ben mentioned at the GSB tomorrow. I've been guest lecturing in this class on business and AI for four years now, and for the past three years, I've been able to use basically the same presentation, because the space had evolved, but not all that much.

I have literally rewritten every single slide for the conversation I'm having tomorrow, because the space has changed so quickly. And as a result, I think this conversation, I know we try to keep this podcast evergreen, and there'll be elements that are, but there'll be elements that I think fade quickly. That's what's cool about our interviews and conversations here on ACQ2. An explicit goal of

the main Acquired stories is to be evergreen. Those are timeless stories. But that's not all of the value in the world, right? There's a lot of value in what's going on right now, and in exploring as things are changing. So that's the goal. So the answer is, we just have to have you back in six months. I'm excited to do it. All right, Jake. Thank you, sir. Thanks, David. Jake, thank you so much. Thanks, Ben. And listeners, we'll see you next time.