Modern Finance
Where Data Lives in an IPFS/Filecoin World with Mikeal Rogers
Kevin Rose, Mikeal Rogers · Dec 8, 2021
Episode Transcript
0:00
Hey, everyone, before we start this episode, a huge announcement from us. If you are into NFTs, then I wanted to tell you about my new private, members-only NFT collectors group called the Proof Collective. This group will be limited to 1,000 members in total and feature a private Discord, early Proof podcast episodes, in-person meetups, and some awesome collaborations. To be one of our 1,000 members, you'll need to purchase and hold the Proof Collective NFT. This will be your key for access.
0:30
And that NFT goes on sale Saturday, December 11th at 9 a.m. Pacific time. For more information and all the details, check out our site and video at proofnft.xyz. That's proofnft.xyz. Thanks.
0:46
All of us benefit from this data. All new developers want to build on top of this data set. We need to make sure that it's alive, but also that there's no one individual actor who gets to monetize it the way centralized applications get to monetize it and create this two-sided network. But at the same time, this whole ecosystem is competing with the current web, and we have to get the prices way down, and in the case of, you know, public services, we need to get it down to free. Like, I've spent 20 years in open source, and we've figured out ways to make
1:16
make open source free at every level and every tier. And somebody is paying for it somewhere, but there's always enough institutional support there that we can provide that service, and as we build out the web3 ecosystem, we need to make sure that public NFT data is a public service and a public commons, and that we're backing it up.
1:40
That was Mikeal Rogers, engineering manager at Protocol Labs, which is the team behind Filecoin and IPFS. This interview was about answering the important questions around decentralized storage, or: where do our NFTs and data live in a web3 world? We answer questions like: what happens when a marketplace goes down, does our NFT data live on? We saw that happen recently when Hic et Nunc (HEN) went offline. Also, what is Filecoin? How does it
2:07
work with IPFS? When will major browsers support IPFS? And how can we speed up IPFS image loading? And a whole lot more. This is a really important interview, one that for me has always been kind of confusing and that I wanted to get some clarity around. So I really consider this a good foundational primer to decentralized storage. With that, this is Mikeal Rogers. Kevin Rose and his guests are not registered investment advisors. All opinions are Kevin's and his guests' alone. Nothing discussed today
2:37
should be relied upon for investment decisions, nor is it investment advice. This show is
2:41
solely for informational and entertainment purposes only. Please
2:45
work directly with an investment professional.
2:50
Mikeal, thank you so much for joining me on the show. Thank you. I gotta say, I'm really excited for this episode. I was stoked when we got it on the books and we had an actual date planned, because as someone that is just deep into the world of NFTs,
3:07
I gotta say I'm really confused when it comes to storage of data in web3. There's, you know, obviously a ton of terms being thrown around. There's IPFS. What is Filecoin? How do the two work together, or not? How does Arweave play into this whole thing? Centralized versus decentralized. There's a ton to unpack here. But yeah, so thanks for joining me, number one. And where should we begin? IPFS? Yeah, yeah, it's a great place to start. Yeah, that's the core of it: content addressing. Great, tell me all about it. Yeah.
3:37
Okay, the current web, what we like to call web2: everything's a URL, right? We go to URLs on the web and we link to other websites. URLs are what we call a location address, right? It says: here's a server on the internet, go over there and ask for this content, and whatever they tell you is this content, that's the content that I'm talking about. So it's mutable, like that location can change it, but also it is baked into a location, and this is where platform lock-in begins, so that you cannot move tweets off of Twitter, because
4:07
there are links to those tweets, and those links will break if you just put them on a different website. So even if you can get a data export, you can't really fix the linkage that happened on the web, because those are location-based addresses. It would be up to Twitter to do some type of redirect or something like that, but of course it's not in their best interest, so they're never going to do that. Yeah, never going to happen. This is just not going to happen. What Protocol Labs developed, and this is going back seven years now, is this vision for a decentralized web where we don't rely on location addresses anymore. And this is really Juan Benet's kind of bigger
4:37
vision. Me and a lot of other people were working on the peer-to-peer web at the time and were trying to figure this out, and we didn't, and Juan did figure it out, which is that if you take a sufficiently large hash that's so unique, you can basically just ask the internet for that hash. So you can use the address of the content, right, like the hash of the content, to just ask the internet: hey, who has this? And then anyone on the web can provide that content, as long as you just split it into two layers. So you have a content discovery layer, where you go, hey, who has this data, and then
5:07
you ask those people, over whatever transport, for that data back. So let's pause for one quick second to talk about hashing itself. I think this is a very important piece. So for those that don't know what a unique hash is and how one is created from a piece of content, can you take us through an example? Let's just say someone creates a new NFT. They have an image, the image of a flower, let's just say. What happens now? How is that hash linked to that image? How is it proven that hash equals that image? What's going on there?
5:37
Exactly. So a hash is just a cryptographic function that will produce the same result given the same input, but you cannot reverse the content from the hash, right? The hash is much smaller, for one thing. It's almost like a form of very lossy compression in a way, right, where you take this data, you hash it, and you know that you get that. So BitTorrent, which we've been using forever, uses this mechanism. This is how you connect to random people on the internet, and when you get that data back, you know that you got the right data, because those hashes are in the torrent file. So every time that you get a part of the data,
6:07
you go: oh, is that the right data? Are they lying to me? It matches. So every single image ever created for an NFT has its own unique hash, and a hash, for people who don't know, is just like a series of numbers and letters. How many characters long is a typical one? I guess it depends on the hash type, right? If you're doing a 256-bit or a 512-bit hash, the hash length is going to change. This is why, in our addressing format inside of IPFS, we actually have a self-describing hash data structure called a multihash, so that we are not baking the hashing algorithm into the addressing format. You can actually
6:37
create new IPFS files using different hashing algorithms, for instance if one of these gets broken in the future. We're future-proofed, and we can move on from there. Gotcha. Okay, so we have our flower NFT, we've made a hash of it. Now we're basically going out to the internet and saying: hey, I have this string of characters, where is this located, right? Yeah, we're going: who has this? Who has this file? Yeah, exactly. Who do I connect to to give me this file? And I might find a bunch of different people, right? It's just how BitTorrent works too, right? Like, it's actually not just a hash of the file,
7:07
it's actually a hash of a data structure for all of the parts of the file. So if it's a really big file, like a 4K video, you can download it from five people at once and get different sections of that file, just like BitTorrent. So interesting, I didn't know that. So it functions like BitTorrent that way. So it's broken up if you get over a certain size. Yeah, crazy. And you can adjust the settings too, and you can encode videos so that the chunk boundaries align with the keyframe boundaries. That way, if you're skipping around in the video, you're always skipping to the right section and you're not crossing between sections when you're skipping around.
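A rough sketch of the content-addressing idea described here — not the actual IPFS CID, multihash, or chunking format, just the underlying principle that the same bytes always hash to the same digest, that a different algorithm can be swapped in, and that a large file can be split into individually hashed parts. The flower bytes are a placeholder:

```python
import hashlib

def digest(data: bytes, algo: str = "sha256") -> str:
    # Same input always gives the same digest; the digest cannot be reversed
    # back into the content.
    return hashlib.new(algo, data).hexdigest()

flower = b"<bytes of the flower image>"   # placeholder content
print(digest(flower))                     # SHA-256: 64 hex characters
print(digest(flower, "blake2b"))          # a different algorithm, longer digest

def chunk_hashes(data: bytes, chunk_size: int = 256 * 1024) -> list:
    # A large file is split into chunks, each with its own hash, so different
    # peers can serve different sections (the BitTorrent-style behavior above).
    return [digest(data[i:i + chunk_size]) for i in range(0, len(data), chunk_size)]
```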
7:37
So this whole open-source structure of IPFS, this new way of storing things in a distributed fashion that is decentralized, that is powering web3 — is all of that called just IPFS? Filecoin is not involved in that at all, is that correct? Correct, correct. So IPFS is really this content-addressing system and the network that can provide that. So there are multiple content discovery mechanisms. We now have multiple protocols for retrieval as well. And those are...
8:07
So, keep in mind, these are hashes, so they cache really well. You may ask for one of these hashes and get it off of your local disk because you already had it before. You might get it out of a CDN because the CDN already had it, right? You can build numerous caching layers in between here and there. So when you think about IPFS, it's not like one protocol, it's almost like a family of protocols that allow you to get any of these content addresses that we've ever produced. And what's the incentive for someone to... because, you know, obviously this takes some type of collection of users to be able to come together and say,
8:37
we want to provide services. Like, you have to provide machine resources to make this all happen. Why would someone come and say, I want to run an IPFS node? Just out of the goodness of their hearts? Or... There's a lot of different reasons. Sometimes it is out of the goodness of their heart, right? Like, the Internet Archive has been, you know, a partner of ours for a long time. They want to preserve, you know, the data of all of humanity, right? There's a Long Now Foundation talk, actually, that Juan Benet did, about creating an information layer for humanity, right? Like, how do we back up all of our data
9:07
and all of our collective works? So there is a public service element, and we can even get into NFTs in a little bit and the public service element of that as well, because that is public data. There's also private data. Sometimes people want to just store their data for their use case or for their application. There are pinning services, like Pinata, where you can pay, you know, X amount per month, and they'll pin that data and keep it alive in the network. And then there are blockchains like Filecoin that actually have verifiable proof that people are storing this data. This is really interesting. You mentioned
9:32
Pinata, and I think it's
9:34
a good segue to bring it back to the world of NFTs. I would love
9:37
to talk about what keeps this data alive. Because just because you publish something out to IPFS, that doesn't guarantee that it's going to be around forever, correct? But that's very intentional, because anyone at any time can decide to put up more copies of that data, right? Like, as a great example, the, you know, the HEN ecosystem. Yeah, based on Tezos. They went down recently, and that was the big concern. Everyone's like, where are all my NFTs? What's going to happen? Exactly. So they published a list of all of their
10:07
CIDs, all their content addresses in IPFS, for all of the data they had. I think it's like eleven terabytes, and we were like, immediately: oh great, let's go and back all that up. We just started putting that into nft.storage immediately — the script is just running now and updating — and while we were in the process of even doing that, we saw other people also just copy that data. Like, there were enough people in the HEN ecosystem that had an interest in keeping that data alive that they kept it alive. And if you take a longer view of how many backups of a lot of the data on the web there are, right, if you look at public data on the web, there actually are
10:37
multiple copies of that around. Google has a copy that they use for indexing. The Internet Archive has a copy of your website that they use for the Wayback Machine. These aren't accessible to you in an easy way, because the addressing is different, right? We're dealing with location addressing in the old web. But in the new web, anybody who wants to index this data, anybody who has an interest in this data, can also just provide that data into the network. And it's oftentimes easier for them to do that than not to do that. Yeah, explain that a little bit. So here's a good example. This goes way,
11:07
way back, this is like, I think, a couple of years ago now, but we had some university lab that was doing a bunch of genetic work. So they had these huge files in IPFS that they were using for all of this processing. I think their power went out, and when everything came back up, it was really slow, and they thought that IPFS was really slow and were still trying to figure out what was going on. And what they found out was, I think their IPFS node was actually down, but all of their pipeline was still running. It was just running slower, because it was getting all of the data out of the network from other universities that had replicated the data.
11:37
That's so cool.
11:37
Yeah, so there's a real kind of fault tolerance that goes on in these networks when you have multiple copies, and it's open innovation, right? Anybody who sees these CIDs in a chain, anybody who sees the CIDs that we publish, can go and back up the data. That angle is absolutely super powerful, but let's take HEN, for example, because, I mean, this is just one of the scariest things to happen in the NFT space since the beginning of the NFT space. Like, we saw a major platform just give up and say goodbye. Of course,
12:07
because it was open source, there were a lot of clones that popped up and everything. It's all good now. But my question is, okay, you've got this, what, 11 terabytes' worth of NFT data. Who else was replicating that, and why? And what if they were smaller? What if HEN was tiny, with only one terabyte and a thousand users? Would anybody be replicating the data, or would it just be gone forever if they decided to hang it up and go away? I mean, if nobody knew about it, yes, but you do have people like us that are now looking at the chains, and we're doing a lot of
12:37
integration right now. Once this data lands on chain, I think that you'll see multiple copies of it backed up once these ecosystems get a little bit more mature. Like, for instance, you know, we didn't even have an indexer for Tezos until a couple weeks ago, when we found one. I think as these mature, this particular problem with public data is not going to be as big of a problem, because there are so many actors like us who just want to back up this data, who think that it's worth backing up that data, and feel like this is a public service and there's a commons here. And there's a clear kind of custodianship problem, where all of us benefit from
13:06
this data. All new developers want to build on top of this data set. We need to make sure that it's alive, but also that there's no one individual actor who gets to monetize it the way centralized applications get to monetize it and create this two-sided network. But at the same time, this whole ecosystem is competing with the current web, and we have to get the prices way down, and in the case of, you know, public services, we need to get it down to free. Like, I've spent 20 years in open source, and we've figured out ways to make open source free at every level and every
13:37
tier. And somebody is paying for it somewhere, but there's always enough institutional support there that we can provide that service, and as we build out the web3 ecosystem, we need to make sure that public NFT data is a public service and a public commons and that we're backing it up. Definitely for NFT data. But, like, where do you draw the line? Because with data, it's like, we're not storing all the produced data, because we just couldn't, even if we wanted to. So let's just say another service comes online that says, hey, you know, every time I do a connection
14:06
from my cell phone, do this — like, something that's generating billions of new pieces of data per day. IPFS just wouldn't be able to... there would be no public utility that could handle that, or someone would have to pay for it. If it's encrypted, for me there's no public service there — it's my data. Yeah, like that kind of thing is not the thing that we as a commons want to back up, and there are going to be multiple business models around how to store that data. I think that you should still address it with IPFS, though, so that the business models that you decide to opt into when you store
14:37
that data are not the ones that you're locked into, because you can literally have any provider at any time go and put up that data, right? Like, you could add it to Filecoin, you could add it to Pinata, you could put it in Arweave — I think there are firms providing that in the area as well. Yeah, yeah, that's right. I'm curious — Pinata is a really interesting one. For people that don't know, and correct me if I'm wrong here because I've never actually pinned any content: let's take another NFT use case as an example. Let's say SuperRare, you know, fantastic high-end marketplace. There's some crazy
15:06
xcopy NFT worth millions of dollars on there. SuperRare goes away tomorrow, but the address structure is still out there for that crazy xcopy NFT that I love. I could then go in and sponsor the storage of that by pinning it, saying: I want this to still exist, I'm going to pay to make sure this NFT, and all of my NFTs, are always stored and accessible on IPFS. Is that correct? Exactly, yeah. So that's a fantastic backup, even for your own NFTs. You could just
15:36
say: my entire collection, I want to make sure it will always have availability, so the value will never go away. Yeah, you can do that with Pinata, or you can run that on your local laptop and provide it from your local laptop. Is there an easy way to do that? That always sounds awesome, and then I'm like, compiling shit, and I'm just like... Yeah, I know, there's been a lot of progress here. So, one, if you want just a vanilla IPFS experience, there's a desktop version that will give you some more kind of UI around IPFS.
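As a minimal sketch of what "pin it yourself" can look like, assuming a local IPFS daemon (such as the desktop app) exposing its default RPC API on port 5001 — the endpoint paths follow go-ipfs conventions and the CID below is a placeholder, so treat this as illustrative rather than exact:

```python
import requests

IPFS_RPC = "http://127.0.0.1:5001/api/v0"   # default local daemon RPC address (assumption)
CID = "bafy..."                             # placeholder: content address of the NFT image

# Ask the local node to fetch the content from the network and pin it,
# so it stays available for as long as this node is online.
resp = requests.post(f"{IPFS_RPC}/pin/add", params={"arg": CID}, timeout=120)
resp.raise_for_status()
print(resp.json())

# List what this node has pinned locally.
print(requests.post(f"{IPFS_RPC}/pin/ls", timeout=30).json())
```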
16:06
But also, IPFS ships in Brave now, so if you just want to have Brave be your IPFS node, that's fine too. Oh, well, I mean, I knew IPFS was in Brave in terms of being able to access IPFS content directly, but I didn't know that it could turn it into an actual node. Yeah, it stays as a node on the network for all the data that anybody's been accessing. And can I pin stuff inside of Brave? I haven't pulled that up. Yeah, we wrote a bunch of stuff to do that, so I imagine that you can. I haven't really gone and done it, but yeah, I'm pretty sure. Yeah, Brave had a little
16:36
explorer where you can take your wallet, connect it in, or give it an address and then just pin everything within that address. Also, any website that runs in Brave has access to that IPFS node in its context, so we can create numerous applications for you to go and interact with and pin content into, and that will stay available whenever Brave is up. Gotcha. So yeah, how does... when Arweave launched, they were like: okay, we're different, we're going to store your data for the next 200 years, it's kind of baked into the
17:06
price from the get-go, guaranteed availability. How does that interplay with IPFS? What do you think about that? Do you see these as complementary or competitive? Ultimately complementary. We get asked about Arweave a lot, and I think not because they're like a competitor, but because their approach has been pretty different from ours — to our detriment sometimes. You were just saying you're very confused about the difference between IPFS and Filecoin; you're not the first person to say that. We've taken this much more modular, protocol-centric approach, where there are different smaller components that are pluggable,
17:36
so that we have a lot of future-proofing and we can swap pieces out as we need to. Arweave has gone the other direction, where it's a much more vertically integrated stack. So a lot of things that we've been imagining building as DAOs, as data DAOs that would keep data alive forever, potentially across different chains, they've baked into the protocol for their chain itself, and it's just one big vertically integrated stack. And so I like Arweave as a component of a larger ecosystem. I think that it is often looked at as the end-all-be-all solution, and
18:06
that's the approach that a lot of vertically integrated frameworks take, and how they present themselves, because they do have a nice kind of beginning-to-end developer experience. I don't consider that a safe, permanent place to put your data, just like I don't consider any blockchain right now safe and permanent. This is a very early time in this ecosystem. We're going to see very big changes in each of these. I don't think that anyone can be making claims with words like "forever" right now. I think one thing that I'm seeing with new marketplaces that pop up: it seems that they're saying,
18:37
like you said, in some sense, we don't know what the future holds right now, so let's do IPFS and Arweave, let's do them both for redundancy. And so they're doing all of the above, which I think is interesting. Yeah, a lot of people use nft.storage, which is a product we put up to just make it easy to get data in and out of the network, so it's available in IPFS and it's backed up in Filecoin. We've had a lot of marketplaces, NFT marketplaces that already had IPFS infrastructure running, say: okay, great, I'm going to keep running my IPFS infrastructure, but I'm also going to give data to you for
19:06
another copy. Like, OpenSea is running some of their own stuff, but they use our gateway, and we get all of their CIDs on a real-time basis as well. Gotcha. Now, one of the things that is fantastic about centralized services is that they are just so performant when it comes to serving files. They're just, yeah, they're so close to the customer, and everything's cached, they have beautiful CDNs, and people pay a lot of money for that type of performance. So how do we eventually get to a true web3
19:36
world? You know, when I was browsing HEN back in the day — to their credit, they were doing everything like web3 — it was really slow, and it would just take a while for some of these images to load. The average consumer would just be like, what is going on with this site? It felt like dial-up from the '90s or something, right? Like, how do we get to a performant world with IPFS? The good news here is that we should be able to do a lot about read performance, given that everything is hashed. It's really like a cache engineer's dream to
20:06
come to work with. So I think I have some short-term and long-term answers. In the short term — literally, I have a call tomorrow to kick this project off — we're putting up a CDN in front of the regular IPFS gateway infrastructure. If you're not familiar: most browsers, Brave aside, do not have native IPFS support. So what you end up building your application against is an HTTP gateway, where you're actually asking this URL API structure for data out of the IPFS network, and that's bridging it into HTTP. So we run a big public gateway infrastructure, and what we're
20:36
setting up now for nft.storage is a new cached gateway at gateway.nft.storage, where we're just going to focus on read performance for NFTs. That's the only thing that we're going to care about. Yeah. And so there's a lot that we can do here. Just baseline: like Cloudflare, turn on some CDN features, maybe race a couple of gateways to see which one's faster. But beyond that, we also, you know, are running the nft.storage service, we're looking at the chains, we have a lot of incoming data knowledge, so we can prime the cache for every new NFT that comes into the network, in the CDN and in our cloud storage infrastructure.
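The "race a couple of gateways" idea mentioned here is easy to sketch: request the same CID from several public HTTP gateways and keep whichever responds first. The gateway hostnames and CID below are illustrative assumptions; the real gateway.nft.storage work layers CDN caching on top of something like this:

```python
import concurrent.futures
import requests

CID = "bafy..."  # placeholder content address
GATEWAYS = [     # illustrative public IPFS HTTP gateways
    "https://ipfs.io/ipfs/",
    "https://dweb.link/ipfs/",
    "https://cloudflare-ipfs.com/ipfs/",
]

def fetch(base: str) -> bytes:
    r = requests.get(base + CID, timeout=30)
    r.raise_for_status()
    return r.content

# Fire all requests and keep the first success; because the address is a hash,
# every gateway that answers returns the same bytes.
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(fetch, g) for g in GATEWAYS]
    data = None
    for fut in concurrent.futures.as_completed(futures):
        try:
            data = fut.result()
            break
        except requests.RequestException:
            continue
```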
21:06
Is this something where marketplaces are going to pay for this type of performance? Because somebody has to cover the cost of this; this is a big undertaking. Yeah, right now we are flush with storage capacity in Filecoin, we're flush with cash right now, we're okay right now. This is a public service. We think that we need to bootstrap this ecosystem. We have every interest in making this free and performant long-term, and we know that there is a broader business model that we're already engaged in on the other side of this, that's not public data.
21:36
So for public NFTs, this is just going to remain a free service, and in the future it may not just be us — we may pull in other partners, we may start working with other organizations that are interested in the same mission. Awesome. So, what is Filecoin? All right, awesome. So Filecoin is a blockchain for storage deals. It uses a proof of storage, right? Similar to... I think you talked about Chia once on here as well; that also does a proof of storage. So you're proving how much disk you've basically allocated to this network.
22:06
In Filecoin, which is a bit different than Chia, those have to be in deals that have time durations on them. So you're saying: I'm going to store this data for this amount of time, and while I'm proving that, I'm going to earn tokens, effectively. Now, that time duration is really important when you're looking at verifiability. If people can just leave the network and leave your data at any time, without an economic penalty, I'm just not going to trust that network with that data. So we put a lot of thought into the verifiability
22:36
of that storage, right? Like, how do I know that that data is actually going to be there and going to stay there for that duration? And what we've come up with is a set of economic guarantees.
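As a toy model of that idea — a deal with a fixed duration, payment per epoch, and collateral that is forfeited if the provider stops proving storage — and explicitly not the actual Filecoin deal structure or parameters:

```python
from dataclasses import dataclass

@dataclass
class StorageDeal:
    """Toy model of a time-bound storage deal with collateral at stake."""
    cid: str                    # content being stored
    duration_epochs: int        # how long storage must be proven
    price_per_epoch: float      # payment the provider earns per proven epoch
    provider_collateral: float  # stake the provider loses on failure

    def settle(self, epochs_proven: int) -> float:
        # The provider earns payment for the epochs it proved; failing to
        # prove storage for the full duration forfeits the collateral.
        earned = epochs_proven * self.price_per_epoch
        if epochs_proven < self.duration_epochs:
            earned -= self.provider_collateral
        return earned
```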
22:47
So this is something I was always wondering about — and correct me
22:49
if I'm wrong, because I haven't looked into it in a long time — but back in the day, when I was looking into running my own Filecoin node to participate in the network, the hardware requirements were pretty hardcore. And I would imagine that was because of the guarantee around availability and performance. Is that correct?
23:06
Yeah. And, you know, we're working with some really new proofs, really new stuff, so we end up really using all the hardware that you give us, for sure. So, why would someone... IPFS is sitting out here doing its own thing, it's got all the NFT stuff going for it. Who's using Filecoin? Oh yeah, we have lots of people using Filecoin for different purposes. Are they using it to buy the reliability of permanent IPFS storage? So, what the network can do right now is that it can verifiably say it is storing this data.
23:36
Right. So it's great for archival use cases, especially right now, and we're trying to just onboard a lot of data into the network. We have 13 exabytes of committed capacity, but we only have 38 petabytes of user data right now. So our supply right now is dramatically outstripping demand. So what we're trying to figure out is just who has a lot of data that we can get into this network. We're working with the Sloan Foundation and with a lot of other people in this onboarding effort. So we're just bringing a lot of data into the network. So right now we're focused on a lot of these really large data use cases and trying to
24:06
onboard that. And then we have services like web3.storage, nft.storage, and Estuary that really make the developer experience of getting data in and retrieving data out of Filecoin really smooth and easy, through IPFS. Okay. So one thing I'm still broken on is: I understand that Filecoin is this blockchain for storage deals, but the actual data that is being stored... So Protocol Labs is the overarching parent company for Filecoin and IPFS,
24:36
is that correct — it runs both projects? Yeah, yeah. I would call it more of a network, but yeah. Okay. So where do they communicate with each other? Let me give an example. Let's just say I buy some storage. I'm like, okay, hey Filecoin — I'm just making this up, so you correct me if I'm wrong on the little details here — but: hey Filecoin, I want two years of storage, I want two terabytes of data, I want the guaranteed availability, all that. I love it, I buy it, boom, here's my data. Is
25:06
my data interacting with, or using, IPFS at all? Or is it a separate chain that is Filecoin only? Okay, so here's where you get into sort of the multi-protocol nature of IPFS a little bit. There are actually several protocols happening. IPFS doesn't participate in the proofs in the network, because it doesn't really have any facility to do that; IPFS is just for retrieval. The part of the network that IPFS really needs to engage in is this retrieval piece. So: I have this CID that's in
25:36
one of these deals, and I need to go and get the data. Right now, most IPFS clients that have been distributed don't have this new protocol called Graphsync, which is the protocol that we need to use in Filecoin, because sometimes you need to do an iterative kind of payment channel for paid retrievals. We have free retrievals as well for most of our user data right now, but there is a paid retrieval flow, so we needed another protocol. That protocol can be bridged into the existing protocols. So, for instance, if you use Estuary, which is a service — or you can actually just run Estuary yourself as a data provider,
26:06
or you can run what's called an Estuary shuttle — Estuary will put deals into Filecoin, and then it will run a Bitswap node that bridges that retrieval protocol, so that data will just be directly accessible in the IPFS network that way. What we're doing in nft.storage and web3.storage is a little bit bigger scale, more on the kind of cloud-scale side of things, where we're actually providing data directly in the IPFS network, and another copy as a hotter cache, and then working through
26:36
how that becomes a colder and colder cache that's only in Filecoin, and then bridged the same way. Yeah, yeah. Okay. So what is the bull case for Filecoin? Where does the exponential kind of growth come from? You mentioned 38 petabytes of current storage on there. It seems like it's been about getting big data sets that need archival-type storage. I understand why someone would use this against, like — what was it, Amazon Glacier, something like that? Their slower, cheaper
27:06
storage. How does this compete in terms of speed and price? And then, what does the 10-year plan look like here? What do you see this powering? Yeah, again, we tend to take this multi-protocol approach. So Filecoin does a piece of this. Right now, for faster retrievals, we're looking at, okay, we can layer a retrieval market on top of this that has a slightly different incentive scheme, in order to solve a lot of the faster retrieval stuff and not have people like me building CDNs forever.
27:36
We want the web3-native solution to that. So that's a layer that we're plugging in to solve a lot of the read stuff. We also ship a lot of improvements to the protocol — it's actually very impressive for a blockchain to be shipping this many updates. We've done a bunch of them this year, including: now we have full CID indexes inside of every deal, so we can do random access inside of these deals, which is really nice. And we're also currently working on an indexer node system, so that all of the CIDs in every deal are just automatically provided, and you don't have to have these intermediary nodes like Estuary is doing, and like
28:06
we're looking at doing as well. So that's all going to go away. But I think that performance will still be a problem compared to what you're used to getting from a CDN or from a centralized solution, and so that's where a retrieval network comes in. That's where other people are experimenting with other caching layers, too. I think that there's a whole ecosystem of players out here trying to figure out what the most performant way to do this is and what the right incentives are, and we're just at the beginning of it, really. How do you, from a developer standpoint... You know, when I think about just my own experiences with caching content, and how
28:36
easy Amazon made it... Remember when S3 came out, and they were just like: here's a bucket, tell us who has access to the bucket, drop files into said bucket, and then magic happens? On the back end, everything gets distributed and sent across the world, and it just works and it scales, and millions of people can hit it and there's no downtime. Where are we in terms of maturity of this? It seems like there are so many... You know, I'm a technologist, and I'm still having
29:06
a hard time; there are a lot of moving pieces. I'd imagine, unless you're involved in this world every single day, you've got to think through all the different ways that your data traverses these various protocols or networks. When does it become bucket-like, where it's just dead simple for a developer to fire something like this up? So we do have a dead-simple developer experience at nft.storage and web3.storage, and all of that data is going to be in IPFS and involved in that, but that is an HTTP service. Like, we have a
29:36
relationship with you. It's not an entirely decentralized protocol. And right now, in the current maturity of web3, you have to fall back to that in order to actually write browser applications. We are not where we wish that we were in terms of browsers really accepting, and starting to embrace, web3. Other than Brave and Opera Mobile, we're not seeing a huge amount of people looking at IPFS as a native protocol, looking at other blockchains as native protocols, looking at integrating wallets. A lot of these workflows right now are really hampered
30:06
by the lack of adoption, and really, I feel like we're just not getting the traction that we need with major browser vendors. They seem to just not really care about crypto, and so they're not really looking at the space. So in the meantime, you have to spin up these HTTP endpoints and then do everything that you can to not imbue trust into those endpoints. So with our endpoints, you're giving us hashed data, we're giving you back hashed data. You don't really have to trust us much. You're trusting that we're going to store the data, but we can't mutate the data — it would change the addresses.
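The trust-minimization point boils down to verification: the bytes you get back must hash to the address you asked for. Real clients decode the CID's multihash, but the principle looks like this (the URL and expected digest below are placeholders, not real values):

```python
import hashlib
import requests

expected_sha256 = "e3b0c44298fc1c14..."   # placeholder: digest encoded in the content address
url = "https://ipfs.io/ipfs/bafy..."      # placeholder: any gateway, trusted or not

data = requests.get(url, timeout=30).content
if hashlib.sha256(data).hexdigest() != expected_sha256:
    # If the endpoint altered the content, the hash no longer matches the
    # address, so mutation is detectable; the endpoint is only trusted for
    # availability, not integrity.
    raise ValueError("content does not match its address")
```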
30:36
Just like that. Who do you think is the first major browser to fall? In some sense, I would say, of course, there are hardcore engineers over at Google writing Chrome, and they could be like: let's just add a little IPFS action here; Brave has it, let's move it over. That could be a scenario. But in some sense, you almost want an underdog, like, you know, Edge or something from Microsoft, to say: we're going to take the risk. Have you had any conversations with these groups? Are they at least curious to kick the tires on this stuff? I'll be perfectly
31:06
honest: I have seen just a lot of blatant hostility, like just open hostility, from folks. There have been a few people in Brave — in all browsers, really — that have tried to advocate for this, that have really been champions. But at the same time, there's just a huge amount of pushback from the traditional web community on crypto, and a lot of crypto skepticism in general, and that's been really hard to deal with. I think Brendan at Brave is the one kind of major outlier. Like, he's really pushed forward on integrating wallets into
31:36
the browser. He believes that that is the necessary future, and that you can't really solve the security story using extensions. He's integrating IPFS into Brave. That's really the outlier here. And I think that it's not inconceivable that Brave could end up having larger market share than Firefox on some timeline. They've grown pretty quickly, and Firefox has been bleeding market share for years. That could be something that turns it. But we're trying to get into Chrome, we're talking with people that might get it into Chrome. I really hope that turns out and things change, but I've
32:06
been a little bit too burned. I think that we need to get mass adoption of dApps just using whatever means we can, and that adoption of dApps is what's really going to drive browsers to adopt this, because there's actually nothing better for browsers in keeping these applications on centralized HTTP endpoints that then just talk to the decentralized protocols. It would put the browsers in a better position if they had native protocol integration, and I think that they'll eventually get there once there's a big enough market demanding it. Yeah. Most likely it'll
32:36
be coming from the gaming side, I would imagine. You think of Axie Infinity and some of these bigger projects that are attracting millions of users; maybe those teams can get us there too. I don't know if DeFi will have the user base to warrant it — it would have to be in the millions for them to even pay attention. Yeah, and the user base for DeFi will jump through whatever hoops you put in front of them, right? Like, they'll deal with a MetaMask wallet crashing, or whatever it is. But I think a lot of NFT users, and especially a lot of the artists that are coming on and doing NFTs — like, Grimes has been doing NFTs, she's got a mass audience —
33:06
a lot of that audience is not going to want to install the MetaMask extension. So I think that you'll see a lot of these bigger-name artists doing NFT plays, things like NBA Top Shot, that are aimed at a much bigger mass market, and that's really going to force their hand over a long enough period. Back to Filecoin for a minute. So, the actual coin itself — can you describe the use cases there? I understand that as a contributor of resources to the network, I am rewarded in Filecoin. Am I also using the
33:36
coin to buy storage as well? And do you all think of it almost like a type of stablecoin that is just used for... I know it's not a stablecoin, but a coin where it's not about the appreciation of the actual token itself, it's more about the utility of it being used as a way to buy and sell storage? It can be both. I think that trying to force a duality in which it has to be one or the other doesn't capture the complexity of the crypto-economics that come into play. One of the reasons why storage
34:06
can be cheaper is because it is subsidized by people's future view of value in the network, and their view of the value of the data already stored in the network as being the stable asset that people care about. Because it's not just the initial deals — anybody can come along and renew a deal. So, for instance, if Hic et Nunc had been in a Filecoin deal and they said, okay, we're not going to renew this deal, other people can come in and renew the deal, other people can come in and pull the data out and put it in a different deal somewhere else. It's a lot of open access
34:37
from that perspective. But at the same time, it is a network that is mining currency and has an intrinsic value associated with it, based on the demand for the currency, right? And you can't get around that. How does that interplay with the deals, on the deal side? Because I would imagine... let's just say you're HEN, you're like, okay, I want to pay Filecoin — is it put into a custody account that is granted out to the actual providers of storage over time, as they prove that they accurately and reliably
35:06
provided that storage? Is that right? Exactly, yeah. And also, the miner has to put up some money into escrow, so that if they were ever not able to prove that they were storing the data, they would have a financial penalty. Gotcha. But in some sense, if you pay X number of Filecoin for storage over two years and the price of Filecoin quadruples, you're really pissed, because you're paying a lot for that storage at that point. Like, how do you deal with those types of dynamics? I think that's another market opportunity, right?
35:36
Especially if you have something like deal transfers, where people can start bidding to take over that deal on the miner's behalf. So I don't think that this part of the protocol exists yet, but it is something that has been talked about. To me, that just presents another market opportunity. Yeah, it's almost like a bond, in some sense, right? Deals are going to change over time, one way or the other, but there will always be a liquid bond market for it that'll allow you to sell or move these deals. Yeah, and I think that the general trend here — and this is certainly where we are trying to drive
36:06
the economics — is that the cost of storing data per gigabyte should be as low as possible. We are trying to drive that price to zero, because this is like a storage layer for web3, and it needs to compete with the existing storage systems that are out there. It needs to be cheaper than Amazon, it needs to be cheaper than Glacier, and it needs to be cheaper than everybody else. Is it today? Where does it stand? I believe it is, yeah. Right now, because of Filecoin Plus deals — these verified deals have a multiplier on them — you can get miners to store them pretty much for free. Like, we're really not paying. But
36:36
it is a market, so that could change, it could tweak here and there, but markets are very good at commoditizing stuff like this. And that's what we're really looking at: how do we continue to make the cost of storage free? That doesn't mean that miners aren't making money — they may be making more money because of the token rewards, right? We have these other economic instruments and these other levers that we can pull — but the overarching view is that the cost to the user should be as low as possible. We're always pushing that down. Yeah, so if you're a big... let's just say —
37:06
I've got to imagine it's a slow grind to get this to any kind of major, crazy mass adoption, because I'm just putting myself in the shoes of, let's say, Drew over at Dropbox. When you're sitting there and you're like, okay, we're all centralized, we've got this working — if you're going to shave 10 to 20% off my cost of storage, it's not worth my time to move over. Just the headache and the uncertainty — it has to be almost an order of magnitude in savings for it to look attractive. Do you believe that to be the case? For some people, that is the case.
37:36
I've also talked to other people, though, that every six months do an arbitrage, figure out who is the cheapest storage provider, and actually move the data around. So I've heard literally the exact opposite story. There is another team really looking at onboarding a lot of existing data. I'm much more focused on NFTs, and my bigger concern is not how we get a bunch of adoption from people that already have data, but how we actually store all the data that's about to come into the network and has been coming into the network. We're like, you know, we're going to have a billion NFTs next year, and then it's going to be 10 billion, and then it's going to be 100 billion, and you're
38:06
going to blink and it's just going to happen. Like, this is a huge amount of data and a huge amount of demand that I think is going to come in. So we're looking to just scale up all of our services and all of our technology to make sure that we can meet that demand. That's the one that I'm really worried about. Yeah, I get that. Because in some sense — this is just me talking out loud, tell me if you were in this boat too — I was like, okay, when's the next Dropbox going to appear that's running on Filecoin or IPFS? But in reality, that's not it. NFTs are going to be at that scale
38:36
or larger. It's going to be the thing that we just didn't even expect. Right, it's not going to be the existing stuff saying, oh, it's time to move over. It's probably going to be a lot of older archival data, and then the new stuff that is web3-native from the get-go, that just stores on you by default, because that's the only thing they know. Also, we haven't really tapped into the network effects of these open data protocols yet, because it's so early. So we're just getting the first assets into the network.
39:06
We're not yet at a stage where new applications are enabled and basically bootstrapped on all of the existing data. The beautiful thing here is that if you look at the Ethereum chain, or any chain right now, and what they're doing with NFTs: you have a public database, effectively. You can just go and build an Instagram clone using that data set, and anyone can go and build that. Anybody can create new experiences, and those experiences then have an effect on the perceived value of all those assets, and people can do new things
39:36
with it that you never thought of. I think it's really cool that we're seeing card games where a lot of the cards come in as NFTs — that's an obvious kind of use case — but something I was talking to somebody about a few weeks ago is something that you can do with an NFT that you could never do in old card games: someone else can create a new game with those existing cards. Hmm. And they could change the rules, and they could also integrate other cards from other games into it. And you can have a game of, you know, the superhero cards and the Magic cards competing together. You can even do crazy stuff. Like, you could have a
40:06
contract that said: I will give you this new card, but you have to give me three of these other cards from someone else, from some other game, and those three cards get burned and you get a new card, right? Like, it is literally a web of interconnected data. It's all kind of a flat namespace that we can all work with. These are open protocols; the beautiful thing is they're all defined, and you can read the attributes and do whatever you want with them. Yeah, as long as we normalize the attributes a little bit better than we're doing now. That's one concern. We have a little informal working group at nft.storage called Standard-ish, where we're trying to
40:36
get a bunch of people in the web3 space together, and we're just trying to go, like: okay, what is everybody doing with metadata, what are they doing with NFTs, and what kind of normalization can we do between them for unified view layers and stuff like that? And things are very different between chains, very different between different assets. In some sense, when you're sitting there doing the nft.storage project — I'm just curious, my collector's hat goes on for a second here, and my kind of curiosity about what's going on underneath the scenes — you could sit there with a little sniffer
41:06
and watch and see what the most requested NFT images are, and look for real-time traction based on the requests to the IPFS network, could you not? Yeah, I mean, I'm sure the marketplaces are already doing that, right? Because they have a window into that data as well. Yeah, but that's just — what a really cool way to discover new projects. Yeah, we've already discovered a lot of really cool projects by looking at the data, usually because we see something crazy in the data and we're like, what is this? You see a blip or something, and you're just like, okay,
41:36
there's something going on here. Yeah, like somebody put an entire SVG in the tokenURI field and blew out this sort of primary key in the database and stuff. It was somebody doing generative art. They put the generative art generator in the contract, and so they needed to write it into something related to the NFT. They can't produce off-chain data, because they don't have network access in the contract, so they just shoved it into the tokenURI. Crazy. Yeah. Do you know what artist that was? Not offhand, no. That's really cool.
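One common pattern for fully on-chain art (not necessarily what that particular artist did) is to pack the metadata and the SVG itself into data: URIs, so tokenURI can return everything with no off-chain storage at all. A sketch of building such a value:

```python
import base64
import json

svg = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="32" height="32" fill="teal"/></svg>'
image_uri = "data:image/svg+xml;base64," + base64.b64encode(svg.encode()).decode()

metadata = {"name": "Fully on-chain piece", "image": image_uri}
token_uri = "data:application/json;base64," + base64.b64encode(
    json.dumps(metadata).encode()
).decode()
# A contract can return a value like `token_uri` directly from tokenURI(),
# so the artwork needs no off-chain storage at all.
print(token_uri[:80])
```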
42:06
Some of this experimentation around what can be put on chain is just so much fun to watch, isn't it? Yeah, yeah, people are doing really cool stuff. Also, I have to say, with things moving towards Solana and Tezos and Flow a bit, you're seeing a lot of new creativity. There's just a lot more that you can do, I feel like, with transaction fees being so much cheaper. That's another reason why we're so bullish on getting the storage fees down: you're seeing new innovation now that the transaction fees are less than, you know, a hundred dollars per NFT. You can just do completely different things.
42:36
We're seeing people doing a million-NFT drop, things like that that you couldn't imagine doing on Ethereum. I absolutely love the HEN ecosystem. I just think it's so cool. It's like the indie, cool place to go, and a lot of artists were attracted to it because of how green it is as a cryptocurrency, and also because minting was dollars, not, you know, hundreds of dollars. Yeah. But it falls over every time there's any kind of serious drop — I shouldn't say falls over, it slows down, anytime there's any big
43:06
drop. Do you see that as well? Yeah, yeah, yeah. We just need a better caching layer. That's why we're literally kicking off a project tomorrow to just put a better caching layer there, and I think that then we can basically fix that. But it's not just on the transaction side; on the actual chain side, we're seeing issues as well. Oh yeah, we aren't going to solve that — the chains, right? The primary chains are already pretty well incentivized to solve that, and I've seen really big drops go onto Solana without it really hiccupping too much. Yes, Solana — people that listen to this podcast know I'm a
43:36
fanboy, and it's just so fast. Yeah. Are you doing a lot of work with Solana? So we've started to look a lot more at it in the last couple of months. I've been a fan for a while, but we just needed to wrap up some of the Ethereum stuff we were doing before we could really shift some of our attention. And I've been really surprised at just how different it is. It is so different than Ethereum — the contract language and everything, and the way that NFTs are created and treated is so different too — and it results in a very different developer experience that I don't think you could even
44:06
really get by just porting contracts over. So yeah — different in a good way, different in an odd way? A good way. Like, in my view, different in a good way. I like the way that NFTs work there. It violated a lot of assumptions that we had carried over from Ethereum and some of the spec work that we were doing, and now we're adjusting. For instance, in Ethereum you have ERC-721, and these are integer identities inside of a contract, right? So they effectively live inside the contract, and the contract mints them. And in Solana, contracts don't mutate
44:36
state in the contract. They mutate state in an account that is passed to the contract. So you run the contract code over an account, and that's where the state is mutated. So just the whole idea of where these exist on chain, and how you come up with an index for them, and how you emit other data properties or talk about other data associated with the NFT, changes, right? Like, the addressing for it changes. Hmm, crazy. Yeah, I haven't looked into that.
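A deliberately simplified contrast of the two state models being described — my sketch, not either chain's actual implementation:

```python
# Ethereum / ERC-721 style: the contract itself holds the token state.
class Erc721LikeContract:
    def __init__(self):
        self.owners = {}                # token_id -> owner, kept in contract storage

    def mint(self, token_id: int, owner: str):
        self.owners[token_id] = owner

# Solana style: the program is stateless; state lives in accounts that are
# passed in, and the program mutates those accounts.
class TokenAccount:
    def __init__(self, mint: str, owner: str):
        self.mint = mint
        self.owner = owner

def transfer_program(account: TokenAccount, new_owner: str):
    account.owner = new_owner           # mutate the supplied account, not the program
```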
45:06
Do you find, in your experience, that it's easier for you to navigate, or equally as hard? Some people say Rust is easier as a language than Solidity — what are your thoughts there? I've been in open source for 20 years. I find a lot of the arguments about particular languages are really just preferences, and they're being exported as some kind of objective analysis. I don't think that Rust is inherently easier than Solidity; in a lot of ways Solidity is easier, because there's just less syntax. I think that once you're comfortable with a language, you undervalue how much work you had to do to understand the syntax or get the meaning of the code, and Rust is a very high-syntax language. That said, I love Rust.
45:36
It's really nice. And also, Rust has a really good WebAssembly story, and I'm betting on a lot of future ideas in the WebAssembly space, and currently the best story for compiling to WebAssembly is coming out of Rust. So I think that's just a good bet. Yeah, awesome, cool. This was really helpful. One thing I did want to cover briefly, if you're game, given that you focus on NFTs: what do you see out there that you're most excited about, whether from a marketplace standpoint or the actual
46:06
NFTs themselves? On a personal level, taking off the Protocol Labs hat for a second: are there things that you're drawn to, or that your team is looking at? Because I imagine anybody working on an NFT team has to be a collector too, right? At some point, everyone... I think we're also in an interesting position, because we've been working on this technology for a few years now, and we have a lot of ideas about what's possible and capable in this ecosystem, and a lot of that is still unrealized. But the thing that has really shifted is that we went from trying
46:36
to create opportunities where developers would come in, to a complete shift now with NFTs, where developers are flooding into the space, and they're more traditional web developers that are starting with a very different background, and that's just a very exciting place to be. It's always exciting to be part of a new high-growth ecosystem. I was one of the first contributors to Node.js and really spent a lot of time in the JS ecosystem, and it feels like that, but even more supercharged. There's even more growth and more people coming in, and it's probably the biggest opportunity I've ever worked on, actually. That's
47:06
funny you say that, because I was early in web2, and when I saw that growth and excitement and community, and just folks getting together exploring new ideas... This will date me a bit, but when Ajax came out, for the first time you could actually update a web page without having to go to another web page — it was a big deal. Those are the nights where you woke up earlier than you should have, at 6 a.m., and just couldn't go back to sleep because you were so excited about what's being built. And I get that again with this web3 world. It feels like
47:36
we just now finally hit on something, after so many years. Web2 started in 2004, and it's been a while since... I guess you could say mobile and apps and stuff like that, but this is really something new. Sounds like you're getting some of those same kind of vibes. Yeah, oh yeah. And we're still in that space where Ajax was out but Gmail hadn't landed yet. It was like, yeah, you can do all this new cool stuff, and people are doing new cool apps from startups and boot camps or whatever — there weren't actually boot camps yet, that was invented later to solve some of the talent issues. Oh my God, I totally forgot
48:06
about that — we'd have to click on a Hotmail email to go to another
48:09
page and read the email. You just reminded me that
48:13
Gmail did change a lot of things. And then Google Maps, too, right? That was a really big one, where it was just like, oh wow, this is not something that we thought could happen on the web. Again, I was printing out MapQuest — I remember those days of MapQuest, and it was a static image of the place, and then you'd print it and take it with you. It's so ridiculous. Oh man. Yeah. That was great, though. It was...
48:36
I
48:36
know, it very much feels the same. Yeah. I think that it's going to be even bigger, though, because now we're going to have network effects applied to the data produced by applications. The analogy that I like to use actually is from the first web revolution, right? Like, AOL versus the web. If you remember, Oprah had "keyword Oprah" on AOL, and if you were betting on who was going to win this web thing, the web or AOL, and you looked at how do I find out about Oprah — a bunch of random fan
49:06
sites that don't make any sense on the web, at these ugly URLs, and you had to type "http colon whack whack" — there was no special stuff to help you out. And the web won, right? And the web won not because it had a better website than keyword Oprah on AOL. It won because all of the websites together could link to each other. So you were competing against a network of interlinked value, and you could bootstrap new innovations on top of all of that interlinked value. And that's what we have now, with all of the data being generated by these applications. Like,
49:36
apps are generating NFTs, and those NFTs are now available for anyone else to build whatever semantics they want on top of them. Yeah, that is just fascinating. You're absolutely right. I remember wanting to go to AOL and being like, I wonder if I could buy my own keywords. That was the big deal. That was better than a domain name
49:54
back then.
49:55
By far. Yeah, you know, AOL had a monopoly on those, obviously, and it was just like they could sell them for anything they wanted. But yeah, that's crazy. Awesome. This has been great.
50:06
Thank you for dumbing it down for me. Yeah, yeah, it was really fun to be on — any time. Awesome. I'd love to have you back again soon, since it sounds like this is evolving; every six months or so there's something new and exciting. Yeah, it feels to me, in the world of NFTs, if I miss 24 hours of something, I'm just like, I've missed 10 projects. Yeah, somebody drops two million NFTs
50:27
already. And yeah, and you could have made half a million dollars if you'd just
50:30
paid any attention. Like, I get that text every single day: you just lost that much. Exactly.
50:36
But it's fun times. Awesome, Mikeal, thanks so much for being on the show. Yeah, yeah, it's been great.
50:43
All right, that is it. I hope you enjoyed this episode. A quick reminder: if you're into NFTs and you want some really hardcore NFT coverage, you've got to check out our new podcast about Proof. proof.xyz is the URL — head on over there and
50:56
subscribe. That's it for now. Talk to you soon.