In my previous article, I mentioned that compared with the previous two cycles, this round of crypto bull market lacks sufficiently influential new business and new asset narratives. AI is one of the rare new narratives in this round of Web3. In this article, the author will use this year's hot AI project IO.NET to try to sort out thoughts on the following two issues:
The commercial necessity of AI+Web3
The necessity and challenges of distributed computing services
Secondly, the author will sort out the key information of IO.NET, a representative project in AI distributed computing, covering its product logic, competitive situation and project background, and deduce a valuation range for the project.
Some of the thoughts in this article on the combination of AI and Web3 are inspired by The Real Merge written by Delphi Digital researcher Michael Rinko. Some of the views in this article are digested and quoted from the article. Readers are recommended to read the original text.
This article is the author's interim thinking at the time of publication. It may change in the future, and the views are highly subjective. There may also be errors in facts, data, and reasoning logic. Please do not use it as an investment reference.
The following is the main text.
1. Business logic: the combination of AI and Web3
1.1 2023: A new "miracle year" created by AI
Looking back at the history of human development, once technology has achieved a breakthrough, from individual daily life to various industrial structures, to the entire human civilization, earth-shaking changes will follow.
There are two important years in human history, 1666 and 1905, which are now known as the two "miracle years" in the history of science and technology.
1666 is called a miracle year because Newton's scientific achievements emerged in a concentrated burst that year: he opened up the physics branch of optics, founded the mathematical branch of calculus, and derived the law of universal gravitation, a basic law of modern natural science. Any one of these would be a foundational contribution to the development of human science over the following century; together they greatly accelerated the development of science as a whole.
The second miracle year was 1905, when Einstein, then only 26 years old, published four papers in quick succession in Annalen der Physik, covering the photoelectric effect (laying the foundation for quantum mechanics), Brownian motion (an important reference for analyzing stochastic processes), special relativity, and the mass-energy equation (the well-known formula E=mc²). In the judgment of later generations, each of these four papers exceeded the average level of a Nobel Prize in Physics (Einstein himself won the Nobel Prize for his paper on the photoelectric effect), and the historical progress of human civilization was once again pushed forward by several large steps.
And the year 2023, which has just passed, will most likely be called another "miracle year" because of ChatGPT.
We regard 2023 as a "miracle year" in the history of human science and technology, not only because of GPT's great progress in natural language understanding and generation, but also because humans have figured out the law of growth of large language model capabilities from the evolution of GPT - that is, by expanding model parameters and training data, the ability of the model can be improved exponentially - and there is no bottleneck in this process in the short term (as long as the computing power is sufficient).
This capability goes far beyond understanding language and generating dialogue; it can also be widely applied across scientific and technological fields. Take the application of large language models in biology as an example:
In 2018, Frances Arnold, the Nobel laureate in Chemistry, said at the award ceremony: "Today we can read, write and edit any DNA sequence in practical applications, but we cannot yet compose it." Just five years after her speech, in 2023, researchers from Stanford University and Salesforce Research, an AI startup in Silicon Valley, published a paper in Nature Biotechnology. Using a large language model fine-tuned from GPT-3, they created 1 million new proteins from scratch and found two proteins with completely different structures that both have bactericidal ability, which are expected to become a solution for fighting bacteria beyond antibiotics. In other words: with the help of AI, the bottleneck of protein "creation" has been broken.
Before that, the artificial intelligence AlphaFold algorithm predicted almost all of the 214 million protein structures on Earth within 18 months, an achievement that is hundreds of times greater than the work of all human structural biologists in the past.
With various AI-based models, from hard technologies such as biotechnology, materials science, and drug development to humanities such as law and art, there will be earth-shaking changes, and 2023 is the first year of all this.
We all know that in the past 100 years, human beings' ability to create wealth has grown exponentially, and the rapid maturity of AI technology will inevitably further accelerate this process.
1.2 Combination of AI and Crypto
To understand the necessity of combining AI and Crypto from the essence, we can start with the complementary characteristics of the two.
Complementarity of AI and Crypto Features
AI has three attributes:
Randomness: AI is random, and the mechanism behind its content production is a black box that is difficult to reproduce and explore, so the results are also random
Resource intensive: AI is a resource-intensive industry that requires a lot of energy, chips, and computing power
Human-like intelligence: AI will (soon) be able to pass the Turing test, after which it will be difficult to distinguish between humans and machines*
*Note: According to a Turing test report, GPT-4.0 scored 41%, only 9 percentage points below the 50% passing line, while human participants in the same test scored 63%. The score here is the percentage of testers who believed the party they were chatting with was a real person; a result above 50% means at least half of the testers thought they were talking to a human rather than a machine, which counts as passing the Turing test.
While AI is creating new leapfrog productivity for mankind, its three attributes also bring huge challenges to human society, namely:
How to verify and control the randomness of AI, so that randomness becomes an advantage rather than a defect
How to meet the huge energy and computing power gap required by AI
How to distinguish between people and machines
The characteristics of Crypto and blockchain economy may be the best medicine to solve the challenges brought by AI. The crypto economy has the following three characteristics:
Determinism: The business runs on blockchain, code and smart contracts, with clear rules and boundaries; a given input produces a highly deterministic output
Efficient resource allocation: The crypto economy has built a huge global free market where resource pricing, fundraising and circulation are very fast. Thanks to tokens, incentives can be used to accelerate the matching of market supply and demand and help a network reach its critical point sooner.
Trustless: The ledger is open and the code is open source, so everyone can easily verify it, bringing a "trustless" system, while ZK technology avoids privacy exposure during verification.
Next, three examples are used to illustrate the complementarity of AI and the crypto economy.
Example A: Solving randomness with AI agents based on the crypto economy
An AI agent is an artificial intelligence program that performs work for humans according to human intent (representative projects include Fetch.AI). Suppose we want our AI agent to handle a financial transaction, such as "buy $1,000 of BTC". The AI agent may face two situations:
Situation 1: It needs to connect with traditional financial institutions (such as BlackRock) to purchase a BTC ETF. There are many adaptation issues between AI agents and centralized institutions, such as KYC, data review, login and identity verification, all of which are still very troublesome at present.
Situation 2: It runs on the native crypto economy, and the situation becomes much simpler. It can directly use your account to sign and place orders through Uniswap or an aggregated trading platform, complete the transaction and receive WBTC (or BTC in another wrapped form). The whole process is quick and simple.
In fact, this is what various trading bots are already doing. They have effectively played the role of a rudimentary AI agent, although their work is focused on trading. As AI is integrated and evolves, trading bots will inevitably be able to execute more complex trading intentions. For example: track 100 smart money addresses on chain, analyze their trading strategies and success rates, use 10% of the funds in my address to execute similar transactions within a week, stop when the results are poor, and summarize the possible reasons for the failure.
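To make the flow in Situation 2 concrete, here is a minimal, purely illustrative Python sketch of an agent that turns a natural-language instruction into an on-chain swap. The `parse_intent` function and the `DexClient` class are hypothetical stand-ins (not Uniswap's or any real aggregator's SDK, and not how Fetch.AI or any specific trading bot actually works); the point is only that in a permissionless system the whole loop reduces to "parse intent, sign, submit".

```python
# Minimal, illustrative sketch of an AI agent executing an on-chain intent.
# DexClient and parse_intent are hypothetical stand-ins, NOT a real DEX SDK.

from dataclasses import dataclass


@dataclass
class SwapIntent:
    sell_token: str
    buy_token: str
    sell_amount_usd: float
    max_slippage: float  # e.g. 0.005 = 0.5%


def parse_intent(text: str) -> SwapIntent:
    """Toy 'LLM' step: map a natural-language instruction to a structured intent.
    A real agent would call a language model here."""
    # Hard-coded for the example "buy $1,000 of BTC"
    return SwapIntent(sell_token="USDC", buy_token="WBTC",
                      sell_amount_usd=1_000.0, max_slippage=0.005)


class DexClient:
    """Hypothetical permissionless DEX/aggregator client.
    A real implementation would wrap a library such as web3.py plus a router contract."""

    def __init__(self, wallet_address: str):
        self.wallet_address = wallet_address

    def quote(self, intent: SwapIntent) -> float:
        # Pretend quote: amount of buy_token received (placeholder price).
        return intent.sell_amount_usd / 65_000.0

    def swap(self, intent: SwapIntent) -> str:
        # A real implementation would build, sign and broadcast a transaction.
        expected_out = self.quote(intent)
        print(f"Swapping ${intent.sell_amount_usd} {intent.sell_token} "
              f"-> ~{expected_out:.6f} {intent.buy_token} "
              f"(max slippage {intent.max_slippage:.1%})")
        return "0xdeadbeef"  # placeholder transaction hash


if __name__ == "__main__":
    intent = parse_intent("buy $1,000 of BTC")
    tx_hash = DexClient(wallet_address="0xYourAddress").swap(intent)
    print("submitted:", tx_hash)
```

A real agent would add risk checks, quote comparison across venues and key management, but none of those steps require permission from a counterparty, which is the crux of the argument here.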
AI will run better in the blockchain system, essentially because of the clarity of cryptoeconomic rules and the permissionless access to the system. Performing tasks under limited rules will also reduce the potential risks brought by the randomness of AI. For example, AI has already crushed humans in chess and card games and video games, because chess and card games are a closed sandbox with clear rules. However, the progress of AI in autonomous driving will be relatively slow, because the challenges of an open external environment are greater, and it is more difficult for us to tolerate the randomness of AI in dealing with problems.
Example B: Shaping resources and gathering resources through token incentives
The global computing power network behind BTC currently has a total computing power (Hashrate: 576.70 EH/s) that exceeds the combined computing power of any country's supercomputers. Its development momentum comes from simple and fair network incentives.
In addition, DePIN projects including Mobile are also trying to shape the bilateral market on both the supply and demand sides through token incentives to achieve network effects. IO.NET, which will be focused on in this article, is a platform designed to gather AI computing power, hoping to stimulate more AI computing power potential through the token model.
Example C: Open source code, introducing ZK, distinguishing between humans and machines while protecting privacy
As a Web3 project co-founded by OpenAI founder Sam Altman, Worldcoin uses a hardware device called the Orb to generate exclusive, anonymous hash values from human iris biometrics via ZK technology, in order to verify identity and distinguish humans from machines. In early March this year, the Web3 art project Drip began using Worldcoin's ID to verify real users and issue rewards.
In addition, Worldcoin has also recently open-sourced the program code of its iris hardware Orb to provide guarantees for the security and privacy of user biometrics.
In general, the crypto economy has become an important potential solution to the AI challenges facing human society due to the determinism of code and cryptography, the resource circulation and fundraising advantages brought by permissionless and token mechanisms, and the trustless attributes based on open source code and public ledgers.
And the most urgent challenge with the strongest commercial demand is the extreme hunger of AI products for computing resources, and the huge demand for chips and computing power.
This is also the main reason why the growth of distributed computing power projects has surpassed the overall AI track in this bull market cycle.
1.3 The commercial necessity of distributed computing (Decentralized Compute)
AI requires a large amount of computing resources, whether for training models or for running inference.
In the practice of training large language models, one fact has been confirmed: as long as the scale of parameters and data is large enough, large language models will exhibit capabilities that were not present before. Behind the exponential leap in capability of each GPT generation over the previous one is the exponential growth in the amount of compute required for model training.
Research by DeepMind and Stanford University shows that when different large language models face different tasks (calculation, Persian question answering, natural language understanding, etc.), as long as the scale of model parameters is increased during training (and, correspondingly, the amount of training computation), performance on any of these tasks remains close to that of random answers until the training compute reaches about 10^22 FLOPs (FLOPs here means the total number of floating-point operations, a measure of the amount of computation); once the scale exceeds that critical value, task performance improves dramatically, regardless of which language model is used.
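As a rough sanity check on the 10^22 FLOPs threshold cited above, a commonly used rule of thumb estimates training compute as roughly 6 × (number of parameters) × (number of training tokens). The GPT-3 figures below (175B parameters, roughly 300B training tokens) are public ballpark estimates, used here only to show the order of magnitude involved.

```python
# Back-of-the-envelope training-compute estimate using the common
# approximation: FLOPs ~= 6 * parameters * training tokens.

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

EMERGENCE_THRESHOLD = 1e22  # FLOPs scale cited in the text

# Public ballpark figures for GPT-3: 175B parameters, ~300B training tokens.
gpt3_flops = training_flops(175e9, 300e9)

print(f"GPT-3 training compute ~ {gpt3_flops:.2e} FLOPs")
print(f"Times the 1e22 emergence threshold: {gpt3_flops / EMERGENCE_THRESHOLD:.0f}x")
# -> roughly 3e23 FLOPs, i.e. more than 30x the scale at which
#    emergent task performance is reported to appear.
```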
It is precisely this law, and the practice, of "brute force produces miracles" in computing power that led Sam Altman, the founder of OpenAI, to propose raising $7 trillion to build an advanced chip factory ten times the scale of today's TSMC (this part is expected to cost $1.5 trillion), and to use the remaining funds for chip production and model training.
In addition to the computing power required for training AI models, model inference itself also requires a lot of computing power (although less than training), so hunger for chips and computing power has become the norm for participants in the AI track.
Compared to centralized AI computing power providers such as Amazon Web Services, Google Cloud Platform, Microsoft's Azure, etc., the main value propositions of distributed AI computing include:
Accessibility: It usually takes several weeks to obtain access to computing power chips using cloud services such as AWS, GCP or Azure, and popular GPU models are often out of stock. In addition, in order to obtain computing power, consumers often need to sign long-term, inflexible contracts with these large companies. The distributed computing power platform can provide flexible hardware selection and greater accessibility.
Low pricing: Since it uses idle chips, and the network protocol party subsidizes the chip and computing power suppliers with tokens, the distributed computing power network may be able to provide cheaper computing power.
Censorship resistance: At present, cutting-edge computing chips and their supply are monopolized by large technology companies, and governments, led by the United States, are stepping up scrutiny of AI computing services. The ability to obtain AI computing power in a distributed, flexible and permissionless way is gradually becoming an explicit demand, and it is also the core value proposition of Web3-based computing power service platforms.
If fossil energy is the blood of the industrial age, then computing power may be the blood of the new digital age opened by AI, and the supply of computing power will become the infrastructure of the AI era. Just as stablecoins have become a thriving branch of fiat currency in the Web3 era, will the distributed computing power market become a branch of the rapidly growing AI computing power market?
Since this is still a fairly early market, everything remains to be seen. However, the following factors may stimulate the narrative or market adoption of distributed computing power:
The continued tight supply and demand of GPUs. The continued tight supply of GPUs may push some developers to try distributed computing power platforms.
Regulatory expansion. If you want to obtain AI computing power services from large cloud computing power platforms, you must go through KYC and layers of review. This may in turn promote the adoption of distributed computing power platforms, especially in some restricted and sanctioned regions.
The stimulation of token prices. The rise in token prices during the bull market cycle will increase the value of the platform's subsidies to the GPU supply side, thereby attracting more suppliers to enter the market, increasing the scale of the market, and reducing the actual purchase price for consumers.
But at the same time, the challenges of distributed computing platforms are also quite obvious:
Technical and engineering problems
Work verification problem: Due to the layered structure of deep learning computation, the output of each layer is used as the input of the next layer. Therefore, verifying the validity of a computation requires re-executing all the preceding work, which cannot be verified simply and efficiently. To solve this problem, distributed computing platforms need to develop new algorithms or use approximate verification techniques that provide probabilistic guarantees of the correctness of results rather than absolute certainty (a toy sketch of such spot-checking appears after this list).
Parallelization problem: Distributed computing power platforms aggregate a long tail of chip supply, which means that the computing power a single device can provide is relatively limited. A single chip supplier can hardly complete the training or inference tasks of an AI model independently in a short time, so tasks need to be decomposed and distributed through parallelization to shorten the total completion time. Parallelization inevitably raises a series of problems, such as how to decompose tasks (especially complex deep learning tasks), data dependencies, and the additional communication costs between devices (a minimal data-parallelism sketch also appears after this list).
Privacy protection issues: How to ensure that the purchaser's data and models are not exposed to the recipient of the task?
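The "approximate verification" direction mentioned in the work verification problem above can be illustrated with a toy spot-check: the worker commits to every layer's intermediate output, and the verifier recomputes only a random subset of layers, checking each sampled layer against its claimed input. This is a minimal sketch of the general idea under simplified assumptions (a tiny matrix-multiply "model", exact recomputation), not any platform's actual verification protocol.

```python
# Toy spot-check verification of a layered computation.
# The worker reports every layer's output; the verifier recomputes only a
# random sample of layers, trading certainty for cheap probabilistic assurance.

import numpy as np

rng = np.random.default_rng(0)

# A toy 8-layer "model": each layer is a fixed matrix multiply + ReLU.
layers = [rng.standard_normal((16, 16)) * 0.1 for _ in range(8)]

def run_layer(i: int, x: np.ndarray) -> np.ndarray:
    return np.maximum(layers[i] @ x, 0.0)

def worker_compute(x0: np.ndarray, cheat_at: int = -1) -> list:
    """Return the claimed output of every layer; optionally falsify one layer."""
    outputs, x = [], x0
    for i in range(len(layers)):
        x = run_layer(i, x)
        if i == cheat_at:
            x = x + 1.0  # corrupted result
        outputs.append(x)
    return outputs

def spot_check(x0: np.ndarray, claimed: list, samples: int = 3) -> bool:
    """Recompute `samples` randomly chosen layers from their claimed inputs."""
    for i in rng.choice(len(layers), size=samples, replace=False):
        layer_input = x0 if i == 0 else claimed[i - 1]
        if not np.allclose(run_layer(i, layer_input), claimed[i]):
            return False  # inconsistency caught
    return True

x0 = rng.standard_normal(16)
print("honest worker passes:", spot_check(x0, worker_compute(x0)))
# The falsified layer is caught only if it happens to be among the sampled
# layers, so detection here is probabilistic rather than absolute.
print("cheating worker passes:", spot_check(x0, worker_compute(x0, cheat_at=5)))
```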
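For the parallelization problem, the simplest decomposition scheme is data parallelism: shard a batch across devices, let each device compute its gradient locally, then average the gradients, which is exactly where the extra inter-device communication cost comes from. The sketch below simulates this with numpy "devices" on a linear-regression step; it is illustrative only and says nothing about how any particular platform splits real deep learning jobs.

```python
# Toy data-parallel gradient step: shard a batch across "devices",
# compute each shard's gradient locally, then average (the all-reduce step
# that creates the communication overhead discussed above).

import numpy as np

rng = np.random.default_rng(42)

# Synthetic linear-regression data: y = X @ w_true + noise
X = rng.standard_normal((1_024, 8))
w_true = rng.standard_normal(8)
y = X @ w_true + 0.01 * rng.standard_normal(1_024)

def local_gradient(X_shard, y_shard, w):
    """Mean-squared-error gradient computed on one device's shard."""
    residual = X_shard @ w - y_shard
    return 2 * X_shard.T @ residual / len(y_shard)

n_devices = 4
w = np.zeros(8)

for step in range(200):
    # 1) Split the batch across devices (task decomposition).
    X_shards = np.array_split(X, n_devices)
    y_shards = np.array_split(y, n_devices)
    # 2) Each device computes its gradient independently (parallel work).
    grads = [local_gradient(xs, ys, w) for xs, ys in zip(X_shards, y_shards)]
    # 3) Average gradients across devices (the communication/all-reduce cost).
    w -= 0.05 * np.mean(grads, axis=0)

print("max |w - w_true| after training:", np.abs(w - w_true).max())
```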
Regulatory compliance problems
Distributed computing platforms can attract some customers precisely because of the permissionless nature of their two-sided supply and procurement markets. On the other hand, they may become targets of government crackdowns as AI regulatory standards are tightened. In addition, some GPU suppliers worry about whether the computing resources they rent out are being provided to sanctioned businesses or individuals.
In general, consumers of distributed computing platforms are mostly professional developers or small and medium-sized institutions. Unlike crypto investors who buy cryptocurrencies and NFTs, these users have higher requirements for the stability and sustainability of the services that the protocol can provide, and price may not be the main motivation for their decision-making. At present, distributed computing platforms still have a long way to go to gain recognition from such users.
Next, we will sort out and analyze the project information of IO.NET, a new distributed computing power project in this cycle, and calculate its possible valuation level after listing based on the AI projects and distributed computing projects in the same track on the market.
2. Distributed AI computing power platform: IO.NET
2.1 Project positioning
IO.NET is a decentralized computing network that builds a two-sided market around chips. The supply side is chip computing power distributed around the world (mainly GPUs, but also CPUs, Apple's iGPUs, etc.), and the demand side is AI engineers who want to complete AI model training or inference tasks.
On the official website of IO.NET, it is written:
Our Mission
Putting together one million GPUs in a DePIN – decentralized physical infrastructure network.
Its mission is to bring together one million GPUs into its DePIN network.
Compared with existing cloud AI computing service providers, its main selling points are:
Flexible combination: AI engineers can freely select and combine the chips they need to form a "cluster" to complete their computing tasks
Quick deployment: No need for weeks of approval and waiting (currently the case with centralized vendors such as AWS), deployment can be completed within tens of seconds and tasks can be started
Low service price: The cost of services is 90% lower than that of mainstream vendors
In addition, IO.NET also plans to launch services such as AI model stores in the future.
2.2 Product Mechanism and Business Data
Product Mechanism and Deployment Experience
Like Amazon Cloud, Google Cloud, and Alibaba Cloud, the computing service provided by IO.NET is called IO Cloud. IO Cloud is a distributed, decentralized chip network that can execute Python-based machine learning code and run AI and machine learning programs.
The basic business module of IO Cloud is called Clusters. Clusters is a group of GPUs that can self-coordinate to complete computing tasks. Artificial intelligence engineers can customize the desired cluster according to their needs.
The product interface of IO.NET is very user-friendly. If you want to deploy your own chip cluster to complete AI computing tasks, after entering its Clusters product page, you can start configuring the chip cluster you want on demand.
First, you need to choose your task scenario. There are currently three types to choose from:
General: Provides a more general environment, suitable for early project stages where specific resource requirements are uncertain.
Train: A cluster designed for training and fine-tuning machine learning models. This option can provide more GPU resources, higher memory capacity, and/or faster network connections to facilitate these high-intensity computing tasks.
Inference: A cluster designed for low-latency inference and heavy workloads. In machine learning, inference refers to using a trained model to make predictions or analyses on new data and provide feedback. Therefore, this option focuses on optimizing latency and throughput to support real-time or near-real-time data processing needs.
Then, you need to choose the supplier of the chip cluster. Currently, IO.NET has partnered with Render Network and with Filecoin's miner network, so users can choose chips from IO.NET itself or from the other two networks as suppliers for their own computing clusters, which means IO.NET also plays the role of an aggregator (though, as of the time of writing, the Filecoin service is temporarily offline). It is worth mentioning that, according to the page, the number of GPUs currently available online on IO.NET is 200,000+, while the number of GPUs available from Render Network is 3,700+.
Then we enter the chip hardware selection link for the cluster. Currently, IO.NET lists only GPUs as the available hardware types, not including CPUs or Apple's iGPUs (M1, M2, etc.), and GPUs are mainly NVIDIA products.
Among the officially listed and available GPU hardware options, according to the data on the day of the author's test, the total number of available GPUs online in the IO.NET network is 206,001. Among them, the GeForce RTX 4090 has the largest number of available GPUs (45,250), followed by the GeForce RTX 3090 Ti (30,779).
In addition, the A100-SXM4-80GB chip (market price $15,000+), which is better suited to AI computing tasks such as machine learning, deep learning, and scientific computing, has 7,965 units online.
Nvidia's H100 80GB HBM3 card (market price $40,000+), which was designed for AI from the very start of its hardware design, offers 3.3 times the training performance and 4.5 times the inference performance of the A100; 86 of them are currently online.
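Putting the walkthrough together, the choice a user ends up making is essentially a small structured configuration: task scenario, supplier network, GPU model and cluster size. The dataclass below is a hypothetical illustration of that selection space, including a made-up budget field; it is not IO.NET's actual API or SDK.

```python
# Hypothetical illustration of the cluster-selection choices described above.
# This is NOT IO.NET's actual API; it only makes the configuration space explicit.

from dataclasses import dataclass
from typing import Literal


@dataclass
class ClusterRequest:
    purpose: Literal["General", "Train", "Inference"]          # task scenario
    supplier: Literal["IO.NET", "Render Network", "Filecoin"]  # supplying network
    gpu_model: str                                             # e.g. "GeForce RTX 4090"
    gpu_count: int                                             # cluster size
    max_hourly_budget_usd: float                               # purely illustrative field

    def summary(self) -> str:
        return (f"{self.gpu_count} x {self.gpu_model} from {self.supplier} "
                f"for '{self.purpose}' workloads, "
                f"budget <= ${self.max_hourly_budget_usd}/h")


# Example: a fine-tuning cluster built from the most plentiful card on the network.
request = ClusterRequest(purpose="Train",
                         supplier="IO.NET",
                         gpu_model="GeForce RTX 4090",
                         gpu_count=8,
                         max_hourly_budget_usd=20.0)
print(request.summary())
```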
Judging from current business data, IO.NET's supply-side expansion has gone smoothly. Stimulated by airdrop expectations and the community campaign code-named "Ignition", it has quickly gathered a large amount of AI chip computing power. Its demand-side expansion, however, is still at an early stage, and organic demand is currently insufficient. Whether the current shortfall on the demand side is because consumer-side expansion has not yet begun, or because the current service experience is not yet stable enough for large-scale adoption, still needs to be evaluated.
However, considering that the gap in AI computing power is difficult to fill in the short term, a large number of AI engineers and projects are looking for alternatives and may be interested in decentralized service providers. In addition, IO.NET has not yet launched economic and activity stimulation on the demand side, and the product experience is gradually improving. The gradual matching of supply and demand is still worth looking forward to.
2.3 Team background and financing situation
Team situation
The core team of IO.NET started out in quantitative trading. Before June 2022, they had been focused on developing institutional-grade quantitative trading systems for stocks and crypto assets. Driven by the backend computing power needs of those systems, the team began to explore the possibilities of decentralized computing, and eventually set its sights on the specific problem of reducing the cost of GPU computing services.
Founder & CEO: Ahmad Shadid
Before IO.NET, Ahmad Shadid worked in quantitative trading and financial engineering. He is also a volunteer for the Ethereum Foundation.
CMO & Chief Strategy Officer: Garrison Yang
Garrison Yang officially joined IO.NET in March this year. He was previously the VP of Strategy and Growth at Avalanche and graduated from the University of California, Santa Barbara.
COO: Tory Green
Tory Green is the Chief Operating Officer of io.net. Previously, he was the Chief Operating Officer of Hum Capital and the Director of Corporate Development and Strategy of Fox Mobile Group. He graduated from Stanford.
According to IO.NET's Linkedin information, the team is headquartered in New York, USA, with a branch in San Francisco, and the current team size is over 50 people.
Financing
IO.NET has disclosed only one round of financing so far: a Series A completed in March this year at a valuation of US$1 billion, raising a total of US$30 million, led by Hack VC, with other investors including Multicoin Capital, Delphi Digital, Foresight Ventures, Animoca Brands, Continue Capital, Solana Ventures, Aptos, LongHash Ventures, OKX Ventures, Amber Group, SevenX Ventures and ArkStream Capital.
It is worth mentioning that, perhaps because of the investment from the Aptos Foundation, the BC8.AI project, which originally settled and kept its accounts on Solana, has switched to Aptos, a similarly high-performance L1.
2.4 Valuation Calculation
According to earlier statements by founder and CEO Ahmad Shadid, IO.NET will launch its token at the end of April.
IO.NET has two comparable projects that can be used as valuation references: Render Network and Akash Network, both of which are representative distributed computing projects.
We can deduce a market value range for IO.NET in two ways: 1. the price-to-sales ratio, i.e. the market value/revenue ratio; 2. the market value/number of network chips ratio.
First, let’s look at the valuation deduction based on the price-to-sales ratio:
From the perspective of the price-to-sales ratio, Akash can be used as the lower limit of IO.NET's valuation range, while Render can be used as a high-end pricing reference for valuation, with an FDV range of 1.67 billion to 5.93 billion US dollars.
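Written out explicitly, the price-to-sales method is just a comparable-multiple calculation: scale IO.NET's revenue by the FDV-to-revenue multiple of the comparable, using Akash for the lower bound and Render for the upper bound, which is what yields the $1.67 billion to $5.93 billion range above.

```latex
% Comparable valuation by price-to-sales (P/S) multiple
\mathrm{FDV}_{\mathrm{IO}}
  = \frac{\mathrm{FDV}_{\mathrm{comp}}}{\mathrm{Revenue}_{\mathrm{comp}}}
    \times \mathrm{Revenue}_{\mathrm{IO}},
\qquad \mathrm{comp} \in \{\text{Akash (lower bound)},\ \text{Render (upper bound)}\}
```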
However, considering that the IO.NET project is newer and the narrative is hotter, coupled with the smaller early circulating market value and the current larger supply-side scale, it is not unlikely that its FDV will exceed Render.
Let's look at valuation from another angle, namely the "market-to-chip ratio".
In the context of a market where demand for AI computing power exceeds supply, the most important factor in a distributed AI computing network is the scale of the GPU supply side. Therefore, we can use the “market-to-chip ratio” for horizontal comparison, and use the “ratio of the total market value of the project to the number of chips in the network” to deduce the possible valuation range of IO.NET, which can be used as a market value reference for readers.
If the market value range of IO.NET is calculated based on the market-to-chip ratio, IO.NET takes the market-to-chip ratio of Render Network as the upper limit and Akash Network as the lower limit, and its FDV range is 20.6 billion to 197.5 billion US dollars.
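The market-to-chip method follows the same template, with revenue replaced by the number of chips online in each network; plugging in Akash as the low multiple and Render as the high multiple is what produces the $20.6 billion to $197.5 billion span quoted above.

```latex
% Comparable valuation by market-to-chip multiple
\mathrm{FDV}_{\mathrm{IO}}
  = \frac{\mathrm{FDV}_{\mathrm{comp}}}{\mathrm{Chips}_{\mathrm{comp}}}
    \times \mathrm{Chips}_{\mathrm{IO}},
\qquad \mathrm{comp} \in \{\text{Akash (lower bound)},\ \text{Render (upper bound)}\}
```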
I believe that readers who are optimistic about the IO.NET project will think that this is an extremely optimistic market value calculation.
And we need to take into account that the current large number of online chips of IO.NET is stimulated by airdrop expectations and incentive activities. The actual number of online chips on the supply side still needs to be observed after the project is officially launched.
Therefore, in general, valuation calculations from the perspective of market-to-sales ratio may be more referenceable.
IO.NET is a project with the triple halo of AI+DePIN+Solana ecology. Let us wait and see how its market value will perform after its launch.