
Saving the Planet with Better AI Data Centers (with Crusoe CEO Chase Lochmiller)

Publish Date: 2023/8/14

ACQ2 by Acquired


Transcript

Chase, welcome finally to ACQ2. We have been wanting to do this for so long. When did we first meet? Like a year and a half, two years ago, I think? I think something like that. Very excited to be here. Very excited to do this with you guys. When I'd heard about Crusoe years before, I was like, oh, wow, that's crazy. And then we met and I was like...

Wow, this is even crazier than I thought it was. Like, we got to tell this story. Today's episode is going to hit on so many different topical themes for listeners, but also Acquired themes. I mean, on the topical themes, there is no one more smack dab in the middle of what's going on in AI infrastructure and GPUs right now than Crusoe. But on the, like, entrepreneurial theme side of the world, there's a lot of stuff that's going on.

Chase, what you and Cully and the team are doing, spoilers for what we're going to get into, building data centers and putting them next to oil fields where there are active oil flares to take advantage of energy that would otherwise be wasted and instead use it to power AI data centers is like...

hard shit. Like, you're running your own fiber, you're, you know, building data centers and infrastructure. There's so many cruxes of Acquired episodes where we zoom in on something and we're like, that may sound normal now, but that was insane at the time. And you guys are sort of in the middle of your, yep, this is still currently insane moment. That's right. It's been insane for five years. So, I mean, I guess let's just start with...

What you do, which I just laugh every time I say it because I'm like, this sounds insane. You are a cloud infrastructure provider. You've built a new one.

Usually, when you think cloud infrastructure, you think AWS, you think Azure, you think Google. You've built something just like that for AI companies using top-of-the-line NVIDIA hardware. When you think about our business, it really starts with our core mission. Our core mission is aligning the future of computing with the future of the climate. And so what that means is we really take this energy-first approach to

building computing infrastructure, to tackle the most energy-intensive computing problems. And our goal, in order to make that impact scalable, is to not just make things environmentally aligned but also to make them more cost-efficient. Because if you can make them more cost-competitive, you can actually drive impact at a more meaningful scale.

And so that's led us to focus on sort of the most energy intensive computing applications where the lifetime cost of ownership of an asset can be driven a lot by infrastructure and energy costs as opposed to other things. The amount of energy required to power a lot of the AI infrastructure

renaissance that's happening right now is like a new step function on the scale, right? I mean, a single NVIDIA H100 running at full load takes the equivalent of 10 US homes' worth of power, right?

That's right. It's a single H100 server. So it actually is eight H100 cards in the overall system, but it's still really, really significant. And this is part of the reason that's led us to building AI computing infrastructure is that energy becomes such a big part of the equation when you're thinking about doing this at a large and meaningful scale. Yeah.
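(A quick back-of-the-envelope check on that comparison, with assumed numbers that are not from the episode: an H100 card is specced around 700 W, a full eight-GPU server with CPUs, NICs, and fans lands around 10 to 12 kW, and an average US home draws on the order of 1.2 kW.)

```python
# Back-of-the-envelope: one 8x H100 server vs. average US household draw.
# All figures below are outside assumptions, not from the episode.
H100_TDP_KW = 0.7            # assumed per-card power at full load
GPUS_PER_SERVER = 8
OVERHEAD_KW = 4.0            # assumed CPUs, memory, NICs, fans, etc.
HOME_KWH_PER_YEAR = 10_500   # assumed average US household consumption

server_kw = H100_TDP_KW * GPUS_PER_SERVER + OVERHEAD_KW   # ~9.6 kW
home_kw = HOME_KWH_PER_YEAR / (365 * 24)                  # ~1.2 kW average
print(f"{server_kw:.1f} kW server ≈ {server_kw / home_kw:.0f} homes")
# prints ~8 homes -- the same ballpark as the ~10 homes quoted above
```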

We've taken this approach of focusing on the more energy-intensive computing workloads and not going after everything that every big cloud provider is offering. Our goal is not to be everything to everyone and be part of the mass migration to the cloud for every single enterprise in the world, which is really, you know, the goal of

AWS, GCP, and Azure. You're not hosting SharePoint servers. Exactly, exactly. And there's a lot of different, you know, managed services that they offer to try to deliver everything to everyone. Our goal is to be very narrowly focused on the most energy-intensive applications, which happen to be some of the fastest growing because of the large demand coming from the artificial intelligence space we're seeing today.

And being nimble and focused on that narrow footprint without having this baggage of this other large cloud platform that's trying to be everything to everyone has also been a big advantage for us. We've been able to be very focused on building the most high-performance infrastructure that has everything designed properly.

for this specific use case. And this starts at sort of the infrastructure and rack level. If you look at a traditional data center, oftentimes the standard rack power density is seven kilowatts.

For a single H100 server, you know, you really need to budget 12 kilowatts for that single server. Even if you have something like a 15-kilowatt rack, you're only able to rack one single server on that rack. So you have to think through it first from, you know, the overall rack design, the way you're going to manage heat dissipation in the overall system, and how the network comes into play when you're architecting the overall network design to create a high-bandwidth,

high-performance networking experience for server-to-server communication through things like RDMA with InfiniBand. And Chase, you can feel free to get a little bit technical here. I want to do a deep dive. If somebody came to you and said, how do you build a cloud? Can you take us layer by layer and maybe just go with one of your data centers as an example of like, what are all the necessary elements that you have to build in order to stand one of these things up? Sure. So,

You know, at a very high level, cloud computing sounds like kind of this, you know, magical experience. But, you know, at the end of the day, it's just renting servers, right? That's at the end of the day, kind of what you're doing when you're building a cloud computing platform. The technical details and how you actually deliver that experience to customers in a high-performance, positive experience is a bit more complex than just renting servers.

Our cloud is built on a KVM-based architecture for virtualization. We've had to build a lot of tools in-house to help support the various demanding workloads coming from customers. But sort of at a high level, there's three big buckets: compute, networking, and storage.

You know, on the compute side, that's really where the energy draw is coming from. And, you know, we are very much a compute first platform, really focusing on those very energy dense, energy intensive computing applications. I'm imagining for you relative to an AWS or an Azure, the compute focus of your build outs is significantly higher.

Absolutely. Compute is the product. That is why people are coming to the platform. Now, they also need storage and networking, though. We've had to support that both with storage on the actual VMs, some meaningful amount of NVMe storage on the actual instance that we offer to customers,

but also giving customers the option to mount large volumes through our high-performance block storage solution that we've worked on implementing. We actually partnered with a group called LightBits on that effort, so shout out to them. They've done some really clever things to deliver a very, very high-performance block storage solution. Okay, so compute, storage.

And then networking. Networking ranges from the WAN aspect, so getting your data from your desktop to the cloud, and what pathway it follows to get there. Because we're often building data centers in remote locations,

This can be kind of a tricky problem. We have had to leverage large telcos and fiber providers to get us very, very close to data centers. And then we're often in a situation where we have to build sort of a last mile connection. We may have to trench some fiber and actually build out that last mile connection to get the fiber to the site. We also need multiple sources of fiber to create geographically diverse fiber.

feeds into the data center so that if a farmer's going through the farmland and digs a little too deep and cuts the fiber, our customer workloads don't end up going down. That's sort of on the WAN piece. I want to pause here for a sec. I wanted to bring this up later, but I think now is actually the right time. This is both a unique challenge for you guys at Crusoe

But I think it's also maybe the key that enables you to exist and compete. If an investor were looking at you guys and said, well, why don't Amazon and Microsoft and Google go build clouds on top of oil flares and, you know, energy locations? They can't.

because they need their data centers for their clouds to be near internet traffic, right? And specifically, I think, David, the thing you're bringing up is the counter-positioning of if you're really just doing AI training and inference for customers, you kind of can have high latency. You can be far away from people's desktop computing experience

But if you're AWS, you kind of have to be close because people are interacting directly with your servers very often. Yeah, if you're hosting an e-commerce website, latency really matters. If you're training AI data... You can rethink everything. It's okay that you guys trench the last mile of fiber. That's right. One of the other things we've focused on from very early on was

mining off of geosynchronous satellite networks that are 30,000 kilometers in orbit and take 700 milliseconds to ping the nodes. And the cost of that, we actually measured in terms of what it meant for our

potential race conditions for finding new blocks on the Bitcoin blockchain. And because a block happens roughly every 10 minutes, the latency cost we measured was about 15 basis points. And so the amount we were saving on the energy was so much more significant than the 15 basis points we were paying for this slow-uplink, high-latency, fairly low-bandwidth solution.
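As a rough sanity check on where a number like that comes from (a simplification; real stale-block risk also depends on how fast blocks propagate across the rest of the network):

```python
# A block found over the satellite link is announced ~700 ms late, so it
# risks losing any block race that starts inside that window. With one
# block roughly every 10 minutes:
BLOCK_INTERVAL_MS = 10 * 60 * 1000   # 600,000 ms between blocks
SAT_LATENCY_MS = 700                 # geosynchronous round-trip delay

stale_fraction = SAT_LATENCY_MS / BLOCK_INTERVAL_MS
print(f"expected revenue drag ≈ {stale_fraction:.4%}")  # ≈ 0.12%, ~12 bps
```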

Now, as we've built out a cloud computing platform, you can't get by with 25 megabits of bandwidth and 700 milliseconds of latency, but you can get by with 100 gigs of bandwidth and an extra 10 to 50 milliseconds of latency. That's pretty much imperceptible to someone that's training a large language model or any of these like diffusion-based models or any of these modern AI techniques. When you're running these training workloads, you're typically running them for...

you know, hours, days, weeks. The extra impact of tens of milliseconds of latency just doesn't matter at all. Now that's on the training side. On the inference side, you know, you might say, oh, well, on inference, if someone is hitting this webpage and they want to generate some new image or some piece of text, latency should matter. And it does, but it doesn't matter

at the level of additional latency that we introduced to the process. And what I mean by that is that the actual feed forward time of these large language models or big neural networks to produce outputs, the amount of time it takes to process all of the tokens in the network

well exceeds the extra latency hop when you're talking about adding an extra 30 milliseconds of latency. It sort of becomes a rounding error in terms of the total computing time. In other words, when you're interacting with an AI application, when it feels a little bit sluggish,

Very little of that is coming from the round-trip network infrastructure of hitting that computer and coming back. It's all about the fact that it actually just takes a long time to execute that in the neural network. Exactly. You know, there's billions of parameters in these models. There's many billions of operations that need to, you know, take place in order to actually get the output from those models. And that's for each individual token. So if you're adding many tokens into the network,

the inference time is quite costly. It's a really good example of how every business is a big set of trade-offs, and it's about aligning the trade-offs you're willing to make with the actual needs of your customer. That's spot on.
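To make that concrete, a tiny illustrative calculation (the per-token time and response length are assumed numbers, not figures from the episode):

```python
# Why an extra ~30 ms network hop is a rounding error for LLM inference.
PER_TOKEN_MS = 50     # assumed model time to generate one output token
TOKENS = 200          # assumed length of the response
EXTRA_RTT_MS = 30     # extra round trip to a remote data center

model_ms = PER_TOKEN_MS * TOKENS                      # 10,000 ms of compute
share = EXTRA_RTT_MS / (model_ms + EXTRA_RTT_MS)
print(f"network share of total response time ≈ {share:.2%}")  # ≈ 0.3%
```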

We want to thank our longtime friend of the show, Vanta, the leading trust management platform. Vanta, of course, automates your security reviews and compliance efforts. So frameworks like SOC 2, ISO 27001, GDPR, and HIPAA compliance and monitoring, Vanta takes care of these otherwise incredibly time and resource draining efforts for your organization and makes them fast and simple.

Yep, Vanta is the perfect example of the quote that we talk about all the time here on Acquired. Jeff Bezos, his idea that a company should only focus on what actually makes your beer taste better, i.e. spend your time and resources only on what's actually going to move the needle for your product and your customers and outsource everything else that doesn't. Every company needs compliance and trust with their vendors and customers.

It plays a major role in enabling revenue because customers and partners demand it, and yet it adds zero flavor to your actual product. Vanta takes care of all of it for you. No more spreadsheets, no fragmented tools, no manual reviews to cobble together your security and compliance requirements. It is one single software pane of glass that connects to all of your services via APIs and eliminates

countless hours of work for your organization. There are now AI capabilities to make this even more powerful, and they even integrate with over 300 external tools. Plus, they let customers build private integrations with their internal systems. And perhaps most importantly, your security reviews are now real-time instead of static, so you can monitor and share with your customers and partners to give them added confidence. So whether you're a startup or a large enterprise, and your company is ready to automate compliance and streamline security reviews

like Vanta's 7,000 customers around the globe and go back to making your beer taste better, head on over to vanta.com slash acquired and just tell them that Ben and David sent you. And thanks to friend of the show, Christina, Vanta's CEO, all Acquired listeners get $1,000 of free credit. Vanta.com slash acquired.

So this is the WAN. We're still in networking land. Take us to the networking inside the data center. That's right. So getting data to the data center, you know, the WAN, we've had to do some creative things to make all of that work. On the LAN side, what's underdiscussed, I think, often in the AI conversation today is how important networking has become to delivering high-performance solutions.

When you look at, you know, the overall architectures that people have in place to build these very, very high-performance systems, like when you're training a large language model, it typically isn't on a single node. And a single node is comprised of eight GPUs, many CPUs, some amount of, you know, on-system memory and on-system storage.

But typically the workload extends well beyond a single server, especially if you're looking to train a bigger model. And so one of the big unlocks that's really enabled our ability to train these large language models is what's called RDMA. It stands for Remote Direct Memory Access.

And this is where basically you're connecting a NIC directly into the GPU. What's a NIC? A NIC is a network interface card. No one gets away with acronyms on this show. Yeah, yeah, yeah. I'll try to be... No, I'll just keep asking. Use whatever you want. Yeah, yeah. Okay. So NIC is plugged in directly to the GPU. And then that actually goes through this high-performance non-blocking fabric and can connect directly into...

another server's GPUs. What that enables is sort of this high performance, non-blocking fabric to share data and share information as you're training a workload server to server. You're basically going directly from memory on one GPU directly into memory on another server's GPU. And you don't have to go through any sort of PCIe or ethernet fabric to get there. The performance is really, really significant. So when you

look at sort of the latest and greatest implementations of this: you know, Crusoe's built all of our architecture around InfiniBand, which is a technology developed by a company called Mellanox, which is owned by NVIDIA.

Nvidia's biggest acquisition, I think, of all time, that they did a few years ago. And it's a huge part of their strategy now. Yep. It was a $7.2 billion acquisition. A really talented technology team from Israel that built this high-performance networking solution. But what's cool about it is, server

to server on our H100 clusters, we're able to get 3,200 gigabits per second of direct non-blocking

data transmission between servers. And so as much as people talk about sort of the GPU performance and the number of flops and the number of tensor cores that you're seeing on these new pieces of hardware, when you're running a big training workload, being able to share information between nodes is

a very, very critical component to doing that in a high-performance capacity. And we're talking about, this will shave significant amounts of time off of your overall training workload, because you're not waiting for data to go from node to node.
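For a sense of scale, a simplified, illustrative calculation (the 3,200 gigabits per second figure is from the conversation; the model size, fp16 gradients, and the naive one-shot exchange are assumptions, since real collectives like ring all-reduce overlap communication with compute):

```python
# Illustrative: time to move one full set of gradients for a 7B-parameter
# model in fp16 between nodes (a naive model of what all-reduce must ship).
PARAMS = 7e9
BYTES_PER_PARAM = 2                                  # fp16
payload_gbit = PARAMS * BYTES_PER_PARAM * 8 / 1e9    # ~112 Gbit per exchange

for name, gbps in [("3,200 Gbps InfiniBand", 3200), ("100 Gbps Ethernet", 100)]:
    print(f"{name}: ~{payload_gbit / gbps:.2f} s per exchange")
# ~0.04 s vs ~1.1 s -- and a training run repeats this many thousands of
# times, so the slower fabric turns directly into idle GPUs.
```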

Fascinating. So you gave us three building blocks, compute, storage, and networking. In my mind, there's three more building blocks, too, that are not really on the compute side. There's real estate, energy, and physical materials to build out your data centers.

And I'm curious, could you tell us a little bit about each of those pieces of the puzzle? And especially on the energy side, it's what makes you so unique. I think there's a layer in the middle. I mean, Chase, correct us if we're wrong, but there's the virtualization layer too, right in between those. Yeah. So you can set this up as a bare metal instance, but, you know, being able to share capacity, that's one of the benefits to running a cloud: you share sort of

the upfront capex. Multi-tenant, elastic. Multi-tenancy, yeah, exactly. Elastic computing infrastructure. So we've built our own virtualization stack, as I mentioned before; it's based on KVM. And then there's also what you deliver, because when you're delivering a cluster to a customer, what they want to experience is, you know, a multi-node cluster that they're training a workload on. And really the experience they want is to have a virtual private cloud.

They have their own kind of subnet within, you know, this ecosystem. And for that, we sort of leveraged a lot of open source tooling, but, you know, have built this architecture based on OVN and OVS. OVN stands for Open Virtual Network. OVS stands for Open Virtual Switch.

They are tools to enable these software-defined networking solutions so that networking can become code. And you can actually create more configurable, high-performance networking solutions that enable these virtual private clouds and clusters as a service to basically be delivered to customers. This seems like a pretty cool recent

kind of enabling factor for you guys too. In that, like, I'm imagining 10 years ago, if you wanted to build the virtualization layer for a cloud, you probably had to spend a lot of money with VMware, right? Yeah, no, that's totally right. What's happened in the open source community has been incredible. I mean, there's so many great building blocks that you can leverage within open source that, you know, make this stuff possible. And I'm always just

inspired and amazed by the contributions being made by the open source community and looking things up on like Stack Overflow. I'm always like, man, who are the people that have all of the answers to my problems? And it's just really, really cool to, you know, just see community driven solutions that enable this type of technology to exist.
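For the curious, here is a minimal, hypothetical sketch of what "networking becomes code" can look like with OVN's command-line tools. The tenant name, port names, MACs, and IPs are invented for illustration; this is not Crusoe's actual tooling.

```python
# Hypothetical sketch: carving out a tenant's virtual private cloud as an
# OVN logical switch with one logical port per VM NIC.
import subprocess

def sh(*args: str) -> None:
    """Run an OVN northbound CLI command, raising on failure."""
    subprocess.run(args, check=True)

def create_tenant_vpc(tenant: str, vms: dict[str, tuple[str, str]]) -> None:
    switch = f"vpc-{tenant}"                          # the VPC's subnet
    sh("ovn-nbctl", "ls-add", switch)                 # logical switch
    for vm, (mac, ip) in vms.items():
        port = f"{switch}-{vm}"
        sh("ovn-nbctl", "lsp-add", switch, port)      # port per VM NIC
        sh("ovn-nbctl", "lsp-set-addresses", port, f"{mac} {ip}")

create_tenant_vpc("acme", {
    "node-0": ("00:00:00:00:00:01", "10.0.0.10"),
    "node-1": ("00:00:00:00:00:02", "10.0.0.11"),
})
```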

Is cost the main reason why you guys have essentially built a custom virtualization stack? There's not something off the shelf? So yes and no. I think being able to control your own virtualization stack and managing your own hypervisor, doing that in-house, we have sort of a unique

setup in terms of the way we think about regions, the way we think about individual nodes within regions. The more you can manage those things yourself, the more you can create better solutions that are designed for the full problem statement that you're focused on.

In a lot of ways, we've tried to vertically integrate a lot of components to building computing infrastructure. We're not in the chip design space, but most things downstream of that, we are focused on building and delivering for ourselves and for our customers.

And people veer away from these things a lot of times because they're hard, right? I mean, we talked about this earlier. It's hard to do many things well, but when you do, you end up with these incredible products that are truly designed for the full, larger problem that you're trying to deliver to your end customer. I think of a company like

Tesla that started out just sort of taking the Lotus as the chassis for the vehicle and just loading up a bunch of batteries on it, you know, using kind of the same drivetrain and, you know, all these like different things that they tried to take off the shelf. And they quickly realized that

What we're building is completely different than a traditional internal combustion engine car, and we really need to kind of rethink the full plan. And they had to vertically integrate things. I had a conversation with JB Straubel, who was the longtime CTO at Tesla. And let's get this on the record: like, original co-founder of Tesla. Yeah. Co-founder and longtime CTO of Tesla. Yeah. Yeah.

The man behind the scenes, like, making it all happen alongside Elon. He was talking about, at one point when they were building the Model 3 and they were putting together all these demand forecasts, they were like, okay, so if this goes how we think it might go, we're actually going to need more batteries than the global production of batteries today. Yeah.

Nobody's going to be able to buy a laptop. Yeah. And we're going to soak up the entire battery supply chain. So we have to go out and we have to build our own battery factory. We have to go build the biggest battery manufacturing business in the world to support our own needs.

A similar thing that they did with kind of the charging network, right? When electric cars weren't a thing and users really needed to be able to plug in to charge their vehicles and actually make them useful on road trips, Tesla had to invest in that infrastructure to really make electric vehicles a possibility for people to actually utilize.

The end result is this amazing vehicle that, you know, they've designed everything from the software systems to the way the door handles work to the way the phone application works, the way the batteries are designed and integrated with everything. The end result to the customer is just a better transportation experience. Like forget if it's a car or anything else, it's just a better experience of getting around.

What we're doing on the computing side is really focused on trying to deliver that same sense of, you know, starting from a very first-principles approach to energy costs, thermal management, you know, heat management, managing virtual machines across the hypervisor, the way we think about coordinating various regions within clusters, et cetera.

The end result is a computing experience that can both drive down costs for end customers, as well as reduce the climate impact that they're having by running these workloads on these high-performance computing clusters.

Yeah. So let's talk about that. We spent a lot of time in computer land. Let's get to the physical nature. This whole other side of your business. I love computer land. It's easy, right? The real estate, the physical building stuff, and the energy. Yeah. So we are very much an atoms-to-bits company, right? We sort of, you know, exist at this intersection of the physical world and the virtual world.

Again, you come back to this notion of cloud computing. It sounds very ethereal. It's kind of like up in the sky. It intentionally sounds abstract. It's an abstraction layer. Send it up to the cloud. It's like this abstract thing. But the reality is that you're sending data to a physical data center that exists somewhere in physical space and is networked into the internet and runs on power that has to be generated from some sort of power generation facility. That power has significant

cost to it. Thinking about those physical aspects of things comes naturally to us, having come from the digital currency mining world, where energy costs become such a large component of your ability to be profitable in that space. And not just profitable, but, like, it really kind of sucks too. I mean, way back on our Bitcoin and Ethereum episodes, there's a huge question here of, like, is this going to destroy the world? Sure.

Sure. I've heard the arguments for and against it, not to get into a philosophical debate around Bitcoin, but- I don't think we need to do that in 2023, but the point is it takes a lot of energy. Sure. It takes a lot of energy. I think for a decentralized monetary ecosystem that's trying to create a digitally native store of value, having a large

energy footprint is actually a positive. That's actually what creates defensibility. It's what makes it resistant to attacks. Yeah. It's like, you know, if I had a bunch of physical gold, I want to store it in Fort Knox, because it's really hard to break into Fort Knox. If I'm storing something in digital gold, I want to store it in the place that is the most difficult to attack, that has the highest cost to attack, both from an energy cost standpoint as well as sort of an infrastructure investment standpoint. But, you know,

That aside, you came from that world. We came from that world. You know, that is a business where cutting costs becomes very, very important. Early on, we were in the business of designing these containerized solutions to manage a lot of our Bitcoin mining workloads. Over time, we became sort of the largest customer of one of our suppliers in that space. That was an electrical fabrication shop that was just sort of

working with us on these designs and then would manufacture these big modular data centers for us. It was a multi-generational business where the father really just kind of wanted to sell out of the business and sort of move on. We ended up in a position where we ended up buying that manufacturing business. And for us, this made a lot of sense because it could help us further vertically integrate with our business in terms of controlling and owning the whole manufacturing process.

We could eke out quite a bit on the actual margin recapture of the cost of manufacturing that infrastructure. It also gave us a platform to really rapidly prototype and design new ideas, especially as we were going through the early phases of building out our cloud computing stack.

Today, I think we have a really incredible facility and we call it Crusoe Industries that is focused on sort of manufacturing our electrical and data center infrastructure in a very cost-effective manner. That's, you know, again, very purpose-built and designed around the specific workloads that we're focused on. And that stems from, you know, the way we sort of manage the heating and cooling of the systems, the way we manage the electrical feeds, the way we manage the battery backup systems.

How much heat are your racks and data centers producing?

For our cloud computing platform, we typically standardize around a 50 kilowatt rack. So, you know, quite a bit denser than these earlier, you know, seven to 15 kilowatt designs that I was sort of mentioning. Now, what's interesting about that, I guess we didn't talk about it, but the proximity of the hardware to one another actually becomes more important for managing that LAN piece, that high performance local area network.

The reason for that is that these cables and the transceivers and all of the components required to interconnect the servers in this high-performance RDMA setup are really expensive, and they scale exponentially as you're trying to go further distances.

Yeah, this isn't like a serial port that you're hooking up to. No, exactly. It's not like a 200-foot cable is about the same cost as a 10-foot cable. It actually scales pretty exponentially. So being able to deliver these high-density racking systems actually becomes a great strategic advantage.
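Putting numbers on the rack-density point (the 7, 15, and 50 kilowatt rack figures and the roughly 12 kilowatt server budget are from the conversation; the integer division is the whole calculation):

```python
# How many ~12 kW H100 servers fit in racks of different power budgets.
SERVER_KW = 12
for rack_kw in (7, 15, 50):
    print(f"{rack_kw} kW rack: {rack_kw // SERVER_KW} server(s)")
# 7 kW:  0 -- a legacy rack can't power even one server
# 15 kW: 1 -- most of the rack's capacity sits unused
# 50 kW: 4 -- dense racks mean short, cheaper cable runs within the fabric
```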

One area where the mining space, I think, has been quite ahead of, you know, sort of leapfrogged, the data center space in a lot of ways is the adoption of other advanced cooling techniques. And a lot of these were created in the traditional data center sector but, you know, haven't been that widely adopted. I think you're going to start seeing a transformation where people are going to start moving to cold plate or immersion-cooled solutions almost by default for some of these more high-performance applications.

And to explain what that is, I see you about to ask, Ben. So thermals become a big deal when you're talking about running a server that draws 12 kilowatts. It's like, where is that 12 kilowatts of power coming from, and where is it going to? And so a traditional design is you have a chip that's on the actual motherboard and then you have a heat sink attached to it. And that heat sink is typically, you know, aluminum or something to diffuse the heat across

Something to diffuse the heat. Exactly. And so the heat transfers from the chip to the heat sink, and then they typically have these fins that you blow air over, and you want to have a lot of surface area. So as you're blowing that cold air over those heat sinks, you transfer the heat off the chip and sort of dispose of it separately. Yeah.

There's a couple of different advanced cooling solutions. One is cold plate, where instead of a heat sink, you actually have copper pipes with cold water running over the chip. And it's actually more efficient to transfer the heat from the chip to the water than it is from the heat sink to the air. It sounds like a nuclear reactor. They kind of are. Honestly, it's kind of crazy.

Then there's actually immersion cooling, which is even crazier. And there's single-phase and there's two-phase. But, you know, single-phase immersion cooling is where you have a non-conductive dielectric fluid that you're actually putting the chips into. Because obviously you can't put the chips in water, or it's going to not end well. So non-conductive: it's like the chips are sitting in a liquid that can't short-circuit them.

Exactly. You could actually put it into deionized water, funny enough. But if any dust gets into it, you're kind of in trouble. Is that right that water is only a conductor when it has impurities? Exactly. It's the ions. Yeah. Anyway.

Either way, you know, there are these non-conductive dielectric fluids that you can immerse the whole system into, and that is actually more effective at transferring the heat off the chip. And there's single-phase, which means the fluid is basically staying a liquid: you're running the cold fluid over the chip, and then it goes out to a heat exchanger, like a dry cooler or whatever,

and then recycles back through the system. Or there's something called two-phase immersion cooling, where the fluid actually flows over the chip and boils at the surface, at the interface between the chip and the fluid. And the boiling process actually strips off heat even more efficiently. Now, the problem with two-phase immersion cooling is, one, the fluid is very expensive.

And two, they're generally these fluorocarbon fluids that are very, very bad for the environment in the case that any of it escapes. You know, it has a global warming potential of something like 250, which means that for an equivalent volume of gas that escapes, it has a 250x impact compared to that equivalent amount of CO2. So it's a really, really nasty footprint. It's even worse than methane. Yeah.

Methane's about 84. These fluorocarbons are quite a bit worse. So it's actually not something that we generally use today. But anyway, there are all these very cool advanced cooling solutions that I think will become more standard in artificial intelligence and high-density computing as the space continues to evolve. That's super cool. So, like, the traditional cloud providers and the like, they're not really doing this; this kind of came out of the Bitcoin mining world? It didn't come out of the Bitcoin mining world.

It was productionized by the Bitcoin mining world. Bitcoin mining is probably one of the areas where this is happening at the most meaningful scale. There are people from the traditional HPC, high-performance computing, space that have been big pioneers in these immersion cooling and cold plate technologies. But it's certainly being scaled up very rapidly because of Bitcoin's tendency to generate a lot of heat and, you know, that thermal transfer being a big component of the overall problem.

Wow. Okay, so that's the thermal structure. Sorry, we're going in a lot of different verticals here. I think this is amazing. So I think definitionally, if you are generating a lot of heat, you're using a lot of energy. Right.

Let's talk about the energy piece. So our mission as a company is aligning the future of computing with the future of the climate. So, you know, we take this very energy first approach to the way in which we build computing infrastructure. Some people at the surface may say, well, wait a second, you're working with oil and gas companies and, you know, using oil and gas based products to power your data centers.

What's important to understand here is that what we're using is actually a waste product from the oil production process. When oil companies drill for oil, they drill a hole in the ground and then sort of what flows out of the reservoir is this combination of oil, natural gas, and water. And when that comes out of the ground, it goes through what's called a three-phase separator that separates the oil from the gas from the water.

And typically what happens, unless you have access to a pipeline on site, the oil can easily be trucked to an oil refinery. The water would be trucked to a water treatment facility.

And then the gas, because it's in a gaseous state, it is actually very, very difficult to deal with. It's very difficult to transport unless you have a pipeline. There are other existing solutions to do this, things like compressed natural gas, where you actually compress the gas on site into a 4,000 PSI tank or something like that. And then you truck the tank to an injection point, or you can liquefy it on site. But all of these things

take a lot of energy and a lot of cost, right? Running that compressor is very, very expensive. And the cost of operating these things typically exceeds the revenue that you actually get from them. Even though you're selling the natural gas, if you're selling it for $2 per MCF (an MCF is a thousand cubic feet) but it costs you $5 per MCF to do it, you just lost $3 per MCF. It's been this sort of conundrum in the oil industry where

Typically in these cases, the best and most economic thing to do with the gas is to just light it on fire. It's insane. Literally. It's completely insane.

And, you know, this has been a problem. It's not a new problem. This has been a problem, you know, since we've been producing oil. Like, you can see the trends on flaring from the IEA website and just kind of look back at the history of it. But the overall concept is, you know, it becomes this waste product. It's not the reason they're drilling the well. They're drilling the well to produce oil. And oil is the product that they're looking to sell and monetize that investment with. Gas is the byproduct, and it becomes a nuisance. This is a bad problem for the environment for a couple of reasons.

One is, you know, when you're burning off that gas, obviously it creates a large CO2 emission footprint. But even worse is actually the methane emissions that come off of it, because not all of the methane gets combusted in the flaring process. Typically nine to 10% of the methane escapes uncombusted.

And as we were talking about earlier, in terms of global warming potential, methane has a very, very high global warming potential of 84, which means it traps 84 times more heat in the atmosphere than an equivalent amount of CO2. So by volume, what that ends up with is from a flare, 70% of the overall greenhouse gas footprint comes from this methane that sort of escapes uncombusted.
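Those two numbers, the nine-ish percent slip and the GWP of 84, are enough to reconstruct both claims. A quick worked check (the 2.75 kg of CO2 per kg of methane burned is basic combustion stoichiometry; the rest are the episode's figures):

```python
# Per kg of methane sent to a flare with ~9% slip:
SLIP = 0.09                 # fraction of methane escaping uncombusted
CO2_PER_CH4 = 44 / 16       # kg CO2 per kg CH4 fully combusted (2.75)
GWP_CH4 = 84                # global warming potential of methane

flare_co2 = (1 - SLIP) * CO2_PER_CH4   # ~2.5 kg CO2e from combustion
flare_ch4 = SLIP * GWP_CH4             # ~7.6 kg CO2e from escaped methane
print(f"methane's share of a flare's footprint: "
      f"{flare_ch4 / (flare_co2 + flare_ch4):.0%}")          # ~75%

# Same gas burned at 99.9% destruction efficiency instead:
gen_total = 0.999 * CO2_PER_CH4 + 0.001 * GWP_CH4
print(f"footprint reduction vs. flaring: "
      f"{1 - gen_total / (flare_co2 + flare_ch4):.0%}")      # ~72%
```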

Now with Crusoe, when we deploy our equipment to the site, we have these onsite gas capture systems that basically feed that gas into our onsite generators and turbines. And it becomes a very high-efficiency combustion process, something called stoichiometric combustion, where you get the right fuel-to-air ratio in the overall combustion process. And we're able to get over 99.9% destruction efficiency of the methane.

And so by doing it this way, we're actually able to reduce the greenhouse gas footprint of a flare by about 70%. So it's a really meaningful emission reduction compared to the status quo. And so far, none of this accounts for the actual benefit of what you're doing with the energy, which is computing that would have consumed some other form of energy anyway. Exactly.

This is just a reduction. So there's a reduction from the status quo, but there's also what's called avoided grid emissions, which means if we weren't running this data center here, someone would be demanding that computing somewhere else. And it would be drawing power from some grid somewhere that has some carbon footprint associated with it. So it's sort of a win from an emission standpoint on both those verticals.
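To put an illustrative number on that second piece, the avoided grid emissions, here is a sketch where every figure is an assumption, not from the episode:

```python
# Hypothetical avoided grid emissions for a single flare-powered site.
SITE_MW = 30                   # assumed site size
UPTIME = 0.95                  # assumed utilization
GRID_KG_CO2_PER_KWH = 0.4      # assumed grid carbon intensity

kwh = SITE_MW * 1000 * UPTIME * 8760
print(f"avoided grid emissions ≈ {kwh * GRID_KG_CO2_PER_KWH / 1000:,.0f} "
      f"tonnes CO2e/year")     # ≈ 100,000 t
```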

Now, just to give you guys a sense of the magnitude of the flaring problem globally: when I started the company, this was not my domain expertise. This was my co-founder's domain expertise. He was, you know, someone that grew up in the oil industry, part of a third-generation oil and gas family. And it was honestly something that he struggled with a lot. He was an environmentalist and went to Middlebury College as an undergrad, which is, you know, a very environmentally progressive school.

You guys met in high school? We went to high school together, exactly. You know, he was a Thomas Watson fellow and sort of studied energy impact around the world, always trying to find the right balance between energy's impact on the economy as well as the environment and trying to find, you know, the right balance in terms of helping people get access to energy so they can raise their quality of life while also being conscious of long-term impacts on the climate and what that's going to mean for the society in the future.

But all of that aside, he educated me a lot about all of these things. When you look at flaring globally, there's about 14 billion cubic feet of gas that get burned every single day around the world.

Now, that sounds like a very big number. What does it actually mean in practice? If you were to capture that gas, you could power sub-Saharan Africa with that amount of power production. It's about two-thirds of the consumption of Europe. Europe's big. So it is this incredible waste that sort of exists within the overall energy ecosystem. But of course, as you mentioned, you can't...

economically actually get it to any of those places to use it there. Exactly. Transportation is the problem. Getting it to a place where it's actually useful, that's the difficulty. People aren't burning this because they hate the environment so much or they don't want to get paid for it. They're burning it because they have no other economic option to manage and deal with this gas.

One other data point: the amount of gas being burned, because of the methane emission footprint of it, amounts to nearly a gigaton of total greenhouse gas emissions when you account for the global warming potential of the methane emissions. How does that rank in terms of, you know, emission sources globally? You know, as a humanity, our emissions are a little over 50 gigatons. So you're talking about something that's nearly 2% of total global greenhouse gas emissions. Right.
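Those claims line up with the 14 billion cubic feet per day figure from a moment ago. A rough reconstruction (the methane density per cubic foot and the slip fraction are outside assumptions; the rest is quoted in the conversation):

```python
# From 14 Bcf/day of flared gas to "nearly a gigaton, ~2% of emissions".
BCF_PER_DAY = 14
KG_PER_CF = 0.0192          # assumed: ~kg of methane per cubic foot
SLIP, GWP_CH4 = 0.09, 84    # 9% slip, methane GWP (as discussed above)
CO2_PER_CH4 = 44 / 16

ch4_kg = BCF_PER_DAY * 1e9 * 365 * KG_PER_CF          # ~98 Mt CH4 / year
co2e_per_kg = (1 - SLIP) * CO2_PER_CH4 + SLIP * GWP_CH4
gt = ch4_kg * co2e_per_kg / 1e12
print(f"≈ {gt:.2f} Gt CO2e/year, {gt / 50:.1%} of ~50 Gt global emissions")
```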

And the crazy thing is that we don't benefit from it, right? It's one of these things where, you know, steel production generates a lot of greenhouse gas emissions, but we end up with skyscrapers. You know, cement, right?

A lot of greenhouse gas emissions, but roads are pretty handy. The transportation sector, we're able to get around effectively and conduct commerce. But in the case of flaring, it's this very large greenhouse gas emission source, and there's no beneficial use. Nobody's benefiting from this. It's literally a negative externality for everyone. With our digital flare mitigation solution, where we're co-locating the power generation and the computing infrastructure at the site,

we're able to reduce that greenhouse gas emission footprint by roughly 70% while also capturing a beneficial use. It really does kind of become a win-win. And kind of to your earlier comment around, you know, the business and taking computing to the sources of energy, when you think about the flaring problem, you really nailed it. The issue is a transportation issue. There is no market for the gas in the physical place where it's located.

So, yeah, I mean, out in rural Montana, rural Argentina, there's not demand in these oil fields for this massive amount of energy. Like, what are you going to use it for? That's right. When you break our business down to the simplest fashion, really what we're doing is unlocking value in these stranded energy resources with computing. And the insight really was that moving gas is difficult.

Moving power is difficult. You have to build large transmission lines. These are big infrastructure projects. Moving data is pretty easy. It's a lot easier than moving gas. And by recognizing that, if you can actually just create a data pipeline, you essentially create this digital pipeline where you're able to create value in these remote locations with various computing workloads. Do you know the Iceland story about aluminum? This reminds me a lot of that industry.

I know some of it. My co-founder, Cully, actually spent a ton of time in Iceland working with the geothermal power production industry there. Oh, nice. Okay. That's...

They always joke that the island's going to slowly take over the world, because it grows a few inches in each direction every year from the fault line down the middle still being, you know, an active spew of magma. But for listeners that are unfamiliar, the insight that they realized, I think, like, decades ago in Iceland, which is very similar to this insight that you have around data, is Iceland has

tons of geothermal energy and not enough demand for it. Like, so much supply, not enough demand. There's just not a lot of people that live there. The country doesn't have huge energy needs. And so what do you do? You look around for other energy-intensive applications, one of which is refining aluminum ore.

The issue is there's no aluminum ore in Iceland, naturally. So they actually ship it in. This is the most economic way to do it. They ship aluminum ore into the country, use the geothermal energy to refine it there into the aluminum that we use in our lives every day, and then they ship the product out. And as a global society, that is actually the most efficient way to make aluminum. It's pretty wild.

It's crazy. To your point, it's like, well, let's ship our data out to these oil fields so Crusoe can do something useful with it and ship it back to us. And that's actually way more efficient than moving the gas. I'm wondering, especially thinking about that. OK, flaring's been this problem forever. Why haven't, I don't know, aluminum smelters co-located there? Why haven't, I don't know, car factories located there? I'm imagining the problem is the oil wells are there. Like, they need to run so you can't just build a factory on top of it.

There's a couple of problems. One is, typically, an oil and gas well will have some sort of decline curve, which means the amount of gas being produced today will decline in some sort of exponential fashion into the future. And that creates problems because it's hard to

create a mobile aluminum smelting factory. It's just sort of difficult to invest the capital to kind of be there for a small period of time. It's measured in years, decades, maybe that these things run out of gas, literally. Yeah. Yeah. The amount of production just

declines over the course of time. And then it's also widely dispersed, right? So the nice part about computing is we can deploy sites that require, or that are able to generate, two megawatts, all the way up to the largest flare mitigation site that we've done, which is upwards of 30 megawatts. But an aluminum smelting facility, you know, might require 500 megawatts, all in a single location.

So, you know, not being able to chop things up into tiny little blocks makes it quite challenging. So magnitude and durability. Yeah, exactly. The other aspect is it is a challenging environment to operate in. You're dealing with oftentimes harsh environmental conditions. You're in remote locations, limited population centers. It can be a challenge to operate in that area with a significant workforce. Yeah.

And so to put a fine point on the thing that you're sort of not saying, but is implicit in all this, your data centers don't require tons of humans. It doesn't require a huge footprint. It doesn't require building a small city around it. These things are mobile data centers. They don't all have to be in one place. Right. You can set them up, have them there for a period of time, and then at some point move them to a different flaring location. Right.

That's right. So, you know, everything has been built to be mobile and modular. So you can kind of think of them as building blocks that we can move around. Now, obviously, our Bitcoin mining modular data centers are much more mobile and easier to interrupt and move around. And, you know, we've sort of gotten

excellent at that sort of mobilization process, where we can just move a whole site in a single day and kind of get it back up and running. But for the cloud computing data centers, we try to find locations where we're going to be for a longer period of time, and remobilizing is probably not going to be an issue for, you know, at least a number of years. It makes sense.

We've spent all this time talking about the sort of catchy headline of Crusoe, which is we build data centers right next to oil flares. Oil flares are not the only place where energy is stranded. So I'm curious to hear a little bit about the early journey you've done into wind and other power generation where it's also...

an issue to move the energy. That's right. Coming back to our mission of aligning the future of computing with the future of the climate, as we think forward in this energy transition that's sort of taking place across the world, we really view it as, like, there's two big opportunities for Crusoe to be an important component of that transition. The first is helping extend the climate runway, helping us buy time by reducing emissions from legacy industrial sources.

So this is what our flare mitigation business is. It's taking a big source of emissions, reducing it by powering computing infrastructure, and just sort of reducing the overall footprint of that emission source as it exists today.

Now, the second big opportunity is that as we are thinking about electrifying everything, right? You know, this has been a big trend between electrifying cars, electrifying stoves, electrifying heating and cooling systems. Dude, I was researching heat pumps the other day. Yeah. Heat pumps are awesome. I installed one in my house about a year ago. Nice. Nice. We can trade notes on it, maybe separately. But

The whole point is all of these things require a lot more power, and we need that power as a society to be coming from carbon-free resources that aren't accelerating a climate crisis. The carbon-free resources that we're really focused on are wind, solar, geothermal, nuclear, and hydro. Those are kind of the big

sources that, you know, we would consider for powering data centers that are grid-connected. What is the opportunity for someone like Crusoe that's focused on stranded energy resources? Well, there's sort of this conundrum that exists within how we build renewable infrastructure. And the conundrum is basically that when you think about investing in building a wind farm,

And your goal is to produce a lot of power from that wind farm. You want to find somewhere that is very consistently windy.

The problem is that isn't necessarily in the same place where you actually have consumers to buy your power. Moving power is hard. Moving power is difficult. And you have losses to it, too. I mean, there are significant transmission losses when you're moving power over significant distances. Moving power is hard. Storing power is even harder.

Storing power is even harder. Exactly. You can only charge up a gigantic stack of batteries so much. Yeah. And it can only store power for so long. And there's a lot of headway we need to make, and technological breakthroughs, in order to make long-term grid power storage a feasible reality. What's sort of happened in the US is, when you look at the people that build and own wind farms, really their revenue stream is coming from two sources. Right.

When they build a wind farm, they're underwriting against revenue that they expect to get by selling power. Obviously, they're building a wind farm that generates power. But the second big resource of revenue for them is actually coming from production tax credits. So they get these credits that are incentives to basically build renewable energy for the country, which I think generally is a positive. However, they only get those to the extent they're actually selling the power.

So this has led them to building these wind farms in places where it is most consistently windy. One such area is West Texas. And West Texas is consistently very windy, consistently very sunny, and consistently very

sparse and unpopulated, right? There's just not very much in West Texas. Isn't that where the Blue Origin launch operations are? Like, Bezos has his ranch and the clock in the mountain and all that stuff. There's a lot of space there. What this has created, actually, because of the production tax credit dynamic, is that wind farm operators will actually sell their power at a negative price, because they still capture the production tax credit and it's still marginally economically beneficial. But

Again, if you're selling your primary product for a negative price, that's generally not a good business to be in. There's essentially no marginal demand for that power that's being produced.

The amount of time that people are getting negative pricing on their power in areas like West Texas can be really significant. Some of our partners, it's on the order of 20% to 30% of the time that they're generating power, they actually get negative pricing. So not a good model, doesn't really incentivize building more renewable capacity, and is not an efficient use of energy that's being produced. To Crusoe, that's a big opportunity.
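The arithmetic behind that behavior is simple. A hypothetical sketch (the credit value of roughly $26 per MWh is an assumption for illustration; the actual PTC rate varies by year and project):

```python
# Why wind farms sell into negative prices: the production tax credit (PTC)
# pays per MWh actually sold, so net revenue = market price + PTC.
PTC_PER_MWH = 26.0   # assumed credit value, for illustration

for price in (30, 0, -10, -25, -30):
    net = price + PTC_PER_MWH
    verdict = "still worth selling" if net > 0 else "better to curtail"
    print(f"market ${price:+d}/MWh -> net ${net:+.0f}/MWh ({verdict})")
# Any price above -$26/MWh still pencils out, which is exactly the stranded,
# negatively priced power a co-located data center can soak up.
```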

And we've partnered with these renewable energy producers by actually, again, taking the market to them. We bring demand in the form of computing and data center infrastructure directly to these sites of stranded, heavily curtailed, or negatively priced power, where we can actually

Again, unlock value in that stranded energy resource with computing. And we deliver to them a price floor that sort of eliminates their negative pricing risk, so they're not, you know, having to pay to dispose of their power. They suddenly have a consumer, which is Crusoe.

And it helps us because it's very much in line with our mission, where we're able to actually power our computing and data center infrastructure with these onsite renewable, carbon-free power resources. And there's a lot of different ways that people make claims about being net zero. We believe the best way of doing that is actually with on-site

renewables, on-site carbon-free power. That's really been the focus for us within this new business line we call digital renewables optimization, where we can help optimize renewable facilities by bringing digital infrastructure to the source. And for your customers, it doesn't feel any different. They're just getting an AI cloud, and it happens to be located next to a wind farm instead of an oil field.

That's right. It's basically just like a different region to them. The infrastructure is still in the box, you know, inside the data center. It's still, you know, the same high-performance infrastructure and high-performance solutions that we discussed earlier in the episode. And for customers and use cases, especially training, but I guess anything: can the workloads be sharded enough that, like, you have a big honking AI workload that

goes out to various regions and data centers on Crusoe and, like, it kind of doesn't really matter? Or does it all need to be in one? Typically...

I think the best approach is certainly to be in, like, a single region for, you know, kind of a cluster. There are certain workloads that you can sort of shard in that capacity. There's actually a really cool startup that's building ways to, you know, leverage that type of overall architecture as, like, a layer of indirection to sort of manage across different geos with low-cost,

you know, computing nodes: a company called Together. That's a really, really neat startup that's doing really, really interesting things in sort of the AI training infrastructure space. Wow. Fascinating. Can I ask, Chase, how on earth did you come up with this as the solution to, hey, we should do something better with the energy that's currently being flared? Like, now that we're deep into the episode, give me the history. Yeah.

Yeah, a bit of backstory. The company honestly is a representation of me and my co-founder; that's really what it boils down to. Just by way of background, I was sort of in the applied AI research space, working as a quant portfolio manager in the finance world, where we were using advanced statistical modeling techniques to forecast stock prices and security prices. So I went to MIT as an undergrad, studied math and physics. I went to Stanford for grad school, studied computer science with a focus on AI, and then

you know, I spent that first chapter of my career as a quant. And in doing that, I mean, we were,

You know, this was sort of the early days of cloud. We were mostly building the infrastructure ourselves. We hired a lot of people from government laboratories like Lawrence Livermore and Los Alamos that were building this type of infrastructure themselves, and people from places like D. E. Shaw Research that were building a lot of cool advanced computing infrastructure. I was always a big user of, you know, large computing infrastructure to train models, to run big simulations.

And, you know, at the end of the month we would get a bill, right? We would get a data center bill. And I was always just kind of like, holy crap, how much are we spending on power? That's insane. You can buy a house with that. It was always just kind of one of these crazy, crazy things that stuck with me as I went on in my career. I ended up getting really deeply interested in the digital asset and cryptocurrency space around 2016.

I ended up meeting a guy named Olaf Carlson-Wee, who was the first employee at Coinbase. And he had left Coinbase to start a hedge fund to invest just in digital assets and cryptocurrencies. Here in 2023, there are a million crypto hedge funds, and probably a million more failed crypto hedge funds.

But at that time, that was a very, very unique idea. There really weren't other crypto hedge funds that were just focused on the digital asset space. So I ended up joining him in 2017 to build out this fund called Polychain Capital.

There was a lot of chaos happening that year, between ICO mania and people learning about what Ethereum was and how it was going to transform everything with smart contracts. And we were also big within the Bitcoin ecosystem as well. I really got this front-row seat to understanding proof-of-work blockchains in a very, very deep capacity.

Again, that really stuck with me: this digitally native asset that was protected fundamentally by low-cost, decentralized computing infrastructure that required lots and lots of energy. I ended up leaving that in 2018 to go pursue a personal passion. I grew up in Colorado around the mountains, and I had always wanted to climb Mount Everest.

So I left to go- We had to work this into the episode. So yeah, okay. I'm glad that it's coming up. Yeah. I left to go climb Mount Everest, and I had this self-discovery expedition of climbing to the highest point in the world. Because at the time, I didn't have a plan, right? I didn't have a plan on what I was going to do next. And that was, what, a four-month-

It was about two months. A two-month experience. Two months in Nepal. For other aspiring entrepreneurs out there listening, I do think that there's something very unique and special about the blank slate.

And just the stillness of having nothing there. It can be particularly challenging for very ambitious people, right? Because before I left high school, I knew exactly where I was going to college. By the fall of my senior year, I knew exactly what job I had already accepted. When I left that first job and went to grad school, I knew that before I left the job. I always had the next thing planned before I had left the previous thing.

Having that stillness and that void of, I could do anything, what should I do? And really having that openness to honestly doing anything was really, really important to me. And you did summit, right? I did summit, yes. That's crazy. Yeah, yeah.

It was the ultimate adventure. So much fun. A lot of really cool memories came out of it. And actually one of our core company values came out of this whole expedition as well. So one of our very, very unique company values is actually to think like a mountaineer. And we're not expecting everyone to climb Mount Everest. We're not expecting everyone to be a mountaineer, but we want them to channel the mindset of a mountaineer.

One of the ideas when you're climbing a mountain is that, one, getting up is optional, getting down is mandatory. So you have to have a safety-oriented mindset, which means you have to be thinking about what could go wrong. You're going to have a plan A: if everything goes to plan, we're going to follow this route, we're going to climb this path, we're going to do this crux, and then we're going to get to the top and come down this way. But the weather could change. The route could change. An avalanche could happen. All of these things are possible, and you have to be prepared going into it with, what am I going to do if this goes wrong? That's a core component of Crusoe culture: thinking like a mountaineer and really being prepared for things to break, for things to go wrong, because they inevitably do. As they do in any startup, but I'm imagining-

Just given the physical realities of everything we've been talking about here, I imagine a lot of things break all the time. Totally. Totally. And you just got to put in the right processes and preparation to make sure that you can avoid those or you have a plan in place to mitigate those risks. But anyway, coming full circle on sort of the entrepreneurial story, I came back from that Mount Everest trip.

I think one of the things that stood out to me, living through this AI landscape, was that we were using a lot of advanced statistical modeling techniques when I was a quant. And when the deep learning boom happened and the initial breakthrough papers were published, people were like, oh, there are these multi-layer neural networks that you can utilize that are crushing every single benchmark.

We started to utilize some of these things in our own strategies and solutions that we were building. Really, I had this recognition that a lot of these things weren't big scientific breakthroughs, right? Multi-layer neural networks had existed for decades. They're just an interesting nonlinear modeling technique: you model some statistical representation of the data set that you have. What had changed is that data had become far more abundant, so there was a lot more data that you could utilize to train these networks, and computing had gotten a lot cheaper, right?
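
A minimal sketch of the kind of multi-layer network being described here, in plain numpy; the architecture, data, and numbers are invented purely for illustration, not anything from Chase's quant work:

```python
# A small multi-layer network fit to a nonlinear target with plain
# gradient descent. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, size=(512, 1))
y = np.sin(X)  # a nonlinear target a single linear layer cannot fit

# One hidden layer of 32 tanh units.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    pred = h @ W2 + b2
    err = pred - y                            # MSE gradient, constant folded into lr
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```

The point of the toy is the one Chase makes: the technique itself is old; what changed is how much data you can feed it and how cheaply you can run the training loop.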

And those were the unlocks that actually enabled this technology to start to make meaningful breakthroughs for society. I really felt that those were the two verticals that were going to continue to drive those breakthroughs: increased access and availability to data and unique data sets, and cheaper computing costs. And so I was thinking about what I wanted to do. I was really excited about building at this infrastructure layer of computing. I was thinking about, how do you make compute cheaper? How do I make it more efficient? Well, I could go design a new chip and compete with NVIDIA. Well, they seem pretty good at the parallelized computing stuff, so maybe I'll avoid that. But I ended up meeting up with my co-founder when I was back in Colorado where I grew up.

We ended up going on a climbing trip together, and he was telling me all about struggling firsthand with this flaring problem. Again, he's very much an environmentalist who grew up in an oil and gas family, and he felt like he was stuck between a rock and a hard place: the best thing economically to do for his stakeholders was to flare this gas, and yet it was a massive negative for everyone else and for the environment. He was struggling with this and telling me about the problem, saying, what could we do here? Is there a better solution? Is there something that can be done? We came up with this concept: we could solve the computing industry's problem, that compute is expensive because power is expensive, by simultaneously solving flaring's problem, that there's no demand for the gas in its current location. And we do that by co-locating these computing facilities onsite with these waste sources of energy.

It's clever. It's just crazy. Totally insane. I just can't get over how perfectly the puzzle pieces fit together if you go endure lots of pain to make it true. Yeah. And the other thing I'll say is that we probably never would have gotten off the ground if we had started with building a cloud.

Yeah, Bitcoin probably made this possible, right? Bitcoin made this possible. You could throw everything into a container and drop it in, right? Yeah, because you have to remember, we're solving problems for two different counterparties in this situation. We're solving a problem for the energy company, and we're solving a problem for the computing customer. You were a trader, listen to you with counterparties. But on the oil side, if I came to an oil company and said, hey, I can solve your flaring problem, I just need a couple of years to build this whole high-performance cloud platform, I need to go find customers that will utilize it, I need to build the infrastructure and then co-locate it, they're like, dude, I need my flare gone tomorrow. Meanwhile, you go to the AI customers and you're like, I've got a great solution for you. It's going to be cheaper. It's going to be better. It's going to be ready in five years. And I'm going to be like...

Dude. Don't care. Bitcoin mining bootstrapped your demand side of the marketplace. Exactly. And Bitcoin, by being an open, permissionless network, you could rapidly scale it up and scale it down. It's a very elastic demand for computing, where we could basically plop a data center filled with Bitcoin mining rigs directly on site with this waste gas and soak it all up. I kind of think about Bitcoin as a bit of a power sponge, right? It can soak up waste energy to the extent it's there. It can modulate and flow with the capacity available. And if it needs to be turned down or interrupted, all of these things are ultimately no big deal. We actually have very big plans for how Bitcoin mining will be integrated into these large DRO behind-the-meter computing facilities, co-located alongside our high-performance computing cloud data centers as well.

Because again, you can think of them as these power sponges that modulate and can create the most high-performance campus, in terms of being able to drive efficiencies without giving up any of the reliability or redundancy that you need for a large high-performance computing data center. It's like the ultimate elastic computing workload, right? Because, like, AI training is fairly elastic; it's the ultimate spot instance. Oh, yeah. The ultimate spot instance. Yeah, exactly. Assuming you believe that the output has value, which is a whole other episode of debate. And there's lots of credible reasons why you do. Sure. Yeah, right. Exactly. I believe it has value because there's a market that tells me it has value. I can literally go onto Coinbase and observe the value of it. Yes, yes, yes. It's bitcoins all the way down. Okay. Yeah.
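
A toy sketch of that power-sponge behavior; every number below (rig draw, HPC load, hourly stranded power) is hypothetical, chosen only to show the idea of a fixed load being served first while mining modulates around it:

```python
# The fixed HPC load is served first; mining rigs modulate hourly to
# soak up whatever stranded power remains. All numbers are invented.
RIG_KW = 3.5          # assumed draw of one ASIC miner, in kW
HPC_LOAD_KW = 800.0   # assumed fixed draw of the HPC cluster, in kW
available_kw = [1200, 950, 1500, 700, 1800]  # stranded power by hour

for hour, total in enumerate(available_kw):
    surplus = max(0.0, total - HPC_LOAD_KW)  # HPC gets power first
    rigs_on = int(surplus // RIG_KW)         # mining soaks up the rest
    idle = surplus - rigs_on * RIG_KW
    print(f"hour {hour}: {total:>4} kW available -> {rigs_on} rigs on, "
          f"{idle:.1f} kW unused")
```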

So, Chase, this story is just, I hope, as promised upfront, listeners, you find this as incredible as we do. Two more things we want to cover. One is your capital structure and how you financed all this. But two, maybe before we do that, we've mentioned NVIDIA a few times on the episode so far. I mean, anybody listening has got to have it on their minds: they're pretty important to you guys. Yeah, absolutely. For our cloud platform, because we've very much focused on the GPU market, NVIDIA is the 800-pound gorilla. In fact, they're like a 10,000-pound gorilla. And hey, AMD exists. AMD does exist, and AMD is actually building some really interesting, cool solutions.

There's been a lot of money poured into AI accelerators and interesting new technologies to tackle this AI problem. The problem for those other competitors is that NVIDIA is really, really good at it. They're investing lots and lots of money, and they've built a full ecosystem between CUDA and NCCL, which is the library that manages these high-performance networking solutions for server-to-server communication. They've really nailed the complete suite between hardware and software.

As a company, because we didn't have any previous baggage, we hadn't tried to build our own high-performance fabric from scratch. We hadn't tried to build our own chips from scratch. We really wanted to take the best things in the market, what the market was demanding, and deliver that to customers. And that really was this NVIDIA stack of computing solutions.
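
For readers unfamiliar with NCCL: it's NVIDIA's collective communications library, the layer that moves gradients between GPUs and servers during training. A minimal sketch of how it's typically driven through PyTorch is below; it assumes an NVIDIA GPU machine and a launch via torchrun, and is illustrative rather than anything Crusoe-specific:

```python
# Minimal NCCL-backed all-reduce, the server-to-server step in
# distributed training. Launch with, e.g.:
#   torchrun --nproc_per_node=8 allreduce_sketch.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")  # torchrun supplies RANK etc.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each GPU holds its own gradient; all_reduce sums them across every
    # worker over NVLink/InfiniBand via NCCL, then we divide to average.
    grad = torch.randn(1024, device=f"cuda:{local_rank}")
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= dist.get_world_size()

    if dist.get_rank() == 0:
        print("averaged gradient norm:", grad.norm().item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```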

We've been able to build, honestly, a great relationship with NVIDIA. They've been a very good and key supplier to Crusoe. They're very aligned with our values of trying to deliver advanced computing solutions that are both cleaner and cheaper than a lot of traditional offerings. Like, NVIDIA, I don't think they like that their GPUs take a lot of power, or that power is hard. So I think they probably like people making power easier. No, and I think a lot of people are starting to see this issue, just as in crypto, right? Initially, it was like, oh, cool, people are doing this decentralized computing thing and doing proof of work to create value for this global monetary ecosystem. And then, as soon as the economic incentives were put in place and things started to ramp up, people started mining on GPUs and then building ASICs for it. Those created a lot of power demand, and people were like, wait, this is consuming how much power? That's crazy. I think we're in the very early innings of that with AI, and I think there's an opportunity to really get ahead of the climate impact of AI. A lot of people are talking about responsible AI and AI ethics.

One of the key components that should be part of that discussion is actually the climate impact of AI. And I think that's really where our energy-first approach plays a major role.

It's sort of just the nature of humanity that NVIDIA will come up with a much more efficient chip, and then people just use a lot more of it, and the overall power consumption actually goes up. And it's like, well, that's how things work. That's how things work. My unbelievably fast iPhone 13 mini, because I still love the mini, is not running the same apps as the iPhone 3G. We get more compute, we use it. Yeah, that's exactly right. People design things around more computing being available, and that's important progress, right? At the end of the day, the potential for human progress and uplifting human prosperity through computing-led innovations is absolutely enormous. And I think it's going to be one of the greatest transformations of our lifetime.
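
That rebound effect (efficiency up, total consumption still up) is easy to see with toy numbers; the figures here are made up purely for illustration:

```python
# Each chip generation halves energy per unit of compute, demand
# triples, and total energy still climbs. All numbers are invented.
workload = 1_000_000     # arbitrary units of compute demanded today
joules_per_unit = 2.0    # assumed energy cost per unit of compute

print(f"today: total energy = {workload * joules_per_unit:,.0f} J")
for gen in (1, 2):
    workload *= 3.0          # cheaper compute unlocks new uses
    joules_per_unit /= 2.0   # hardware gets twice as efficient
    print(f"generation {gen}: total energy = {workload * joules_per_unit:,.0f} J")
# 2,000,000 J -> 3,000,000 J -> 4,500,000 J: up 1.5x per generation.
```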

And we want to make that possible, but we want to do it without having to pay a huge climate cost. But back to NVIDIA: they have this very, very important place in the overall ecosystem. From my experience working with NVIDIA, they are a company that cares very deeply about end customers, people that are using the hardware and wanting to get the best experience for running AI, high-performance computing, and graphics solutions. It's been really amazing to watch them grow and scale into this opportunity. There are significant supply chain constraints; they have been taxed, I think, because of this step-function increase in demand. And it's not like a software demand step function. Everything down the supply chain, from the foundry level with TSMC, needs to be scaled up to provide more wafers to NVIDIA. ASML needs to make some more, you know, magic. ASML needs to make more EUVs, yeah.

Trumpf needs to make more very-high-power specialized lasers. There's a lot. Yeah, the supply chain is big, complex, and it doesn't rapidly scale. So I think that's all the more why it's important that NVIDIA really cares about end customer needs. I think being a bespoke independent cloud provider, compared to an AWS, an Azure, or a GCP, we are afforded the flexibility of really purpose-building the architecture and delivering it in the best way possible to customers.

Amazon, for instance, made an acquisition of a company called Annapurna Labs, and they've been building their own AI accelerator chips, called Trainium and Inferentia. Through that acquisition, they've also built their own high-performance networking solution called EFA, the Elastic Fabric Adapter. And they're very committed to utilizing those things. The problem, frankly, is that the market today is demanding the NVIDIA solutions. Even with the market demanding the NVIDIA solutions, AWS is implementing their RDMA non-blocking fabric through EFA, not through InfiniBand. There are some significant trade-offs being made there. Just having the flexibility, that we want to build the platform that delivers the best experience to customers with the lowest environmental impact, really gives us the opportunity to build things, I think, in the right way. You have no vertical-versus-horizontal strategy conflict. You're trying to sell one thing to customers, so you're trying to make that experience as great as possible. When you sell a whole bunch of stuff, sometimes there are conflicts.

Yeah, that's right. I think there's certainly a frenemy-type relationship when you look at Amazon building Trainium and Inferentia, or Google building the TPU. These are meant to replace NVIDIA. At the end of the day, big tech has a very complex set of relationships that, for you and NVIDIA, is very simple, right? Yep. We're a customer. We want to deliver NVIDIA in its greatest glory to end customers. And I think that's a positive. I also think NVIDIA has been very supportive of wanting to create a broader ecosystem of solutions than just AWS, Azure, and GCP.

You've seen this burgeoning set of independent cloud service providers coming at it with their own solutions and delivering to end customers cool and unique ways of providing infrastructure and enabling AI workloads. That's cool. Well, before you run, you've financed this company in a very unique way. Can you talk a little through the capital structure? How much have you raised in equity? What types of investors have financed this company, and what are you doing that's unique? Yeah. So obviously our business is not just a pure enterprise SaaS business, right? It's pretty far from it. We have quite a bit of CapEx. We build technology, we build software solutions, but we also have physical infrastructure and big pieces of heavy machinery involved in the overall process.

And so that's led us to this unique hybrid solution of being a fast-growing startup that has gotten some venture equity funding. We've raised about $500 million in venture equity to date, coming from groups like Founders Fund, Bain Capital Ventures, and Valor Equity Partners. For those unfamiliar with Valor, they were very instrumental in helping Elon build a lot of his early companies, between Tesla, SpaceX, Boring Company, and Neuralink. They've been heavily involved, and really where they excel is operational expertise, oftentimes with either software or physical infrastructure companies. They've gone deep with us on a lot of the physically and operationally challenging aspects of our business and been very helpful in that regard. And then most recently, in our Series C, our primary lead was a group called G2 Venture Partners, or G2VP. It was formerly the Green Growth Fund at Kleiner Perkins before they spun out, and they're very focused on decarbonizing technologies that are ready for scalability and growth. That's kind of our core equity strategy.

Beyond that equity stack, we've had a bunch of strategics and other interesting investors get involved. One of the big areas of flaring, for instance, is the Middle East, right? About 38% of global flaring happens in the MENA region. That actually led a handful of sovereign wealth funds, like Mubadala, the sovereign wealth fund of Abu Dhabi, as well as OIA and IDEO, the sovereign wealth funds of Oman, to invest in Crusoe, not just as a way to generate a financial return, but also as a way to bring an interesting, fast-growing technology solution to help solve a domestic problem for areas in the Middle East like Oman and Abu Dhabi.

We've certainly done a lot of equity, but on the CapEx side, we've done a bunch of very interesting things around how to actually scale the business without just plowing equity dollars into CapEx. Because at the end of the day, that's not really what we want to do. And maybe, actually, for listeners who don't come from the finance side of things: why is it a bad idea to just finance all the CapEx with equity? And when in a business's life cycle can you explore other options? What level of predictability do you need? Well...

There's a bunch of fintech startups that will give you a whole bunch of different answers, everything from revenue financing to customer financing. There are a million different ways to finance everything these days, it seems. In our case, we really didn't want to finance big physical assets with equity dollars, because there is collateral there at the end of the day. It's like a mortgage versus a venture investment; these are different things. Exactly. Most people don't buy their house with a hundred percent cash, right? They can get a low-interest mortgage, and the bank's happy to make that loan because there's existing collateral. If you stop making your payments, they can just take over your house and liquidate it for more than the outstanding loan they have with you. Right.
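
The arithmetic behind that mortgage analogy, and the 20 to 30% down payments Chase mentions a bit later, is simple; here's a sketch with invented figures:

```python
# Back-of-the-envelope loan-to-value math for asset-backed financing,
# mirroring the mortgage analogy. All figures are invented.
asset_cost = 10_000_000   # e.g. a tranche of generators, in dollars
ltv = 0.75                # lender advances 75% against the collateral

debt = asset_cost * ltv
equity_down_payment = asset_cost - debt

print(f"debt financed:   ${debt:,.0f}")
print(f"equity required: ${equity_down_payment:,.0f}")
# On default, the lender seizes and liquidates the asset; the 25%
# equity cushion protects them against liquidation losses.
```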

In our case, we have large pieces of power generation equipment that we've been able to finance with asset-backed financing. There's one group we have a large facility with called Generate, another group we did something with called North Base, and another group called Spark Fund, all on the asset-backed financing side for electrical systems and power generation equipment. Now, what's cool about that is the way those are structured: they are asset-backed, which means it isn't debt that necessarily defaults up to the parent company. If we stopped paying, they'd come take the generator and go liquidate it on a secondary market; they get made whole that way. It's not an incremental liability for Crusoe, the company.

Now, that's not our plan. If my debt holders are listening to the show right now, we entirely intend to continue making all of our payments. But it's useful for listeners to understand how those sorts of things work and how they connect to the parent company. And this is how most of the non-tech business world works. If you're, I don't know, Procter & Gamble or something, you're not financing your assembly lines with equity. Yep. We also set this up on our Bitcoin mining business. Essentially, we have four big pieces of CapEx. We have generators and electrical infrastructure supporting the power generation side. We have GPUs and associated networking equipment and servers.

We have Bitcoin mining hardware, so ASICs that are used to run the SHA-256d hashing algorithm. And then we have data center infrastructure, the actual physical boxes or buildings that we build to house the equipment. Our belief is that the best way to structure financing is actually to have each of those individually under different asset-backed loan facilities, right?
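
For the curious, SHA-256d is just SHA-256 applied twice, and proof of work means grinding a nonce until the double hash falls below a target. A minimal illustration with toy difficulty, nothing like real mining parameters:

```python
# SHA-256d: hash the data twice, then search for a nonce whose double
# hash meets a (toy) target.
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

header = b"example block header"  # a real Bitcoin header is 80 structured bytes
nonce = 0
while True:
    digest = sha256d(header + nonce.to_bytes(4, "little"))
    if digest.startswith(b"\x00"):  # toy target: one leading zero byte
        break
    nonce += 1

print(f"found nonce {nonce} -> {digest.hex()}")
```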

And then we use equity capital to continue to grow, invest in technology, hire the team, and also come up with our piece of the loan, essentially. You typically don't get 100% loan-to-value on something. Just like when you buy a house, it's typically not a 0% down payment; you typically put in 20 or 30% as a down payment. We do a similar thing with generators or GPUs under these asset-backed financing facilities. And I would imagine what's cool is there are pools of capital out there that are interested in the specific risk and return profiles of each of those different things, right? Exactly. We did a project financing facility with a really clever and creative credit fund called Upper90. This was actually focused on our Bitcoin mining business, and it had equity-like constructs to it, but also debt-like constructs. It was actually one of the keys to helping us get off the ground. And it was cool to see investors like that, who were really willing to think deeply and creatively about what our actual revenue stream was, independent of where we were in terms of stage of the company, because we did that around our Series A. It ended up being a total of about $55 million.

Yeah, I think $55 million in total that we deployed through these facilities, which really enabled us to grow and scale that digital currency mining business in a way that didn't dramatically dilute our equity cap table. Right. Otherwise, you would have been adding on another $55 million to your Series A and-

And that would have sucked. Exactly, exactly. There are creative financing solutions out there, but people by default think they just have to go raise the next series of funding. I don't think that's the case, and I think there are a lot of ways that founders can end up owning a larger percentage of their company by finding the right investor for the right component of their overall capital stack and capital structure. Did you have...

these relationships from your time in the finance and quant world? Or how did you go about putting all this together? Some. One of the founders of Upper90, I knew; he was at Goldman for a long time and at Barclays for a long time, and I knew him through the finance world. Then he set up this bespoke credit fund, and I was like, oh, this is really cool. On the venture side, a lot of it was just getting introductions from friends. People I talked to who thought my business was really cool would be like, oh, you should meet my friend Scott Nolan, who's a partner at Founders Fund, or Salil Deshpande, who's a partner at Bain Capital Ventures. That was the early start for us, really leveraging our network of people that helped us, I don't know if it's break into, or get into, the community of venture investors. Well, this is a great takeaway, I think, for founders, among many on this episode. If you really do shoot for the moon and you do something unique and challenging and good for the world and clever,

there are people who want to help your business succeed. There are doors that get opened for you because people are genuinely shocked, impressed, excited, and want to introduce you to their most valuable contacts. David and I approached you 18, 24 months ago and said, we normally don't cover companies at this stage, but, maybe at the time it was the LP show, now it's ACQ2, can we come talk with you about it just because we're fascinated? It just opens doors for you in a way that, if you're starting the next great SaaS company that helps you do project management, people are going to be like, cool, all right, later. Nothing against SaaS, different set of challenges, but yeah.

There's always room for a new SaaS company, I guess. Yeah, I guess that wasn't my point, but no, no, no. Hopefully many of them will be Crusoe AI infrastructure customers. It is hard to differentiate, and I think we're probably going to see a wave of AI being the new platform. We're already seeing it, frankly. The amount of innovation, the amount of cool things being built by young startups leveraging AI as a mechanism to unlock new productivity potential, is absolutely insane and inspiring. And I'm very, very optimistic about a lot of these cool things being built. It's always been a bad idea, if you're a venture investor, to stop investing in software startups. So we should always invest in software startups in addition to super cool stuff. Nothing wrong with 80% gross margins and super scalable. Totally, totally. And maybe they go up with AI. We'll see. Well, Chase, I think that's a great place to leave it.

Where can listeners find you? How can people, if they want to be customers or Crusoe employees, or invest in any of these different facets of the business, how can they get in touch and who should reach out? We're always looking for highly talented people that are motivated by our mission to align the future of computing with the future of the climate. The scope of roles that we have is probably much wider than at most traditional tech startups, ranging from high-performance software engineers and infrastructure engineers to oil field mechanics, electricians, welders, and technicians.

So it's not your typical software startup. You're not limited to just that audience. We're not limited to just that audience. We have a wide range of open roles, so we're always interested in talking to talented folks. You can visit our website. It's crusoenergy.com, C-R-U-S-O-E-N-E-R-G-Y.com. That has a lot of our open listings.

For those interested in leveraging our cloud computing platform that's focused on GPU cloud computing, you can visit crusoecloud.com, C-R-U-S-O-E-C-L-O-U-D.com. There, you'll be able to get more information on the instance types we offer, and the pricing and cost savings we're able to deliver compared to many other incumbents. Or just straight-up availability of GPUs would be nice. And availability, yeah, yeah. It is a rush right now.

On the energy side, we're always interested in talking to new partners dealing with stranded or underutilized energy resources, where we may be able to help them create more economic and more environmentally friendly outcomes. So, folks struggling with flaring as a problem, if there are any listeners from the oil and gas sector, or any renewable energy producers struggling with curtailment or negative power pricing, we'd love to speak with you and see how we might be able to unlock value in that stranded energy with computing. Awesome. Chase, thank you so much. Thanks for having me. And listeners, we'll see you next time. We'll see you next time.