Coinbase in-depth analysis of whether Crypto x AI is a mirage

AI tokens have benefited from broad enthusiasm across the Crypto and AI markets, but may lack sustainable demand drivers in the short to medium term.

Original title: Crypto's AI Mirage

Original author: David Han, Coinbase Institutional Research Analyst

Original compilation: DAOSquare


Quick overview

Decentralized crypto-artificial intelligence (Crypto-AI) applications face a number of headwinds in the short to medium term that may hinder their adoption. However, the constructive narrative surrounding crypto and artificial intelligence is likely to sustain trading interest in the theme for some time.

Key takeaways

  • The intersection between artificial intelligence (AI) and Crypto is broad but often poorly understood. We believe the different subfields at this intersection have distinct opportunities and development cycles.

  • We generally believe that, for AI products, decentralization alone is not a sufficient competitive advantage; such products must also maintain functional parity with centralized counterparts in other key areas.

  • Our contrarian view is that the value potential of many AI tokens may be exaggerated by the market’s broad enthusiasm for the AI industry, and that many AI tokens may lack sustainable demand drivers in the short to medium term.

In recent years, continued breakthroughs in artificial intelligence (especially in generative AI) have drawn intense attention to the AI industry and created opportunities for crypto projects at the intersection of the two fields. We covered some of these possibilities in a June 2023 report, noting that, relative to Crypto’s overall capital allocation, the AI sector appeared undervalued. Since then, the crypto AI space has developed rapidly. At this point, we feel it is important to highlight some of the practical challenges that may hinder its widespread adoption.

The rapid pace of change in AI makes us cautious about some Crypto platforms’ bold claims that their unique positioning will disrupt the entire industry, and it makes the long-term, sustainable value accrual of most AI tokens uncertain, especially for projects with fixed token models. Instead, we believe that some emerging trends in AI may actually make it harder for Crypto-based innovations to be adopted, given broader market competition and regulatory factors.

That said, we believe the intersection between AI and Crypto is broad and holds diverse opportunities. Adoption may come faster in certain sub-sectors, although many of these areas lack tradable tokens. However, this does not seem to be dampening investor appetite. We find that the performance of AI-related crypto tokens is driven by the broader AI market craze, which can support positive price action even on days when Bitcoin trades lower. Therefore, we believe many AI-related tokens will likely continue to trade as proxies for AI progress.

Key Trends in Artificial Intelligence

In our opinion, one of the most important trends in the AI space (relevant to crypto AI products) is the continuing culture around open source models. More than 530,000 models are publicly available on Hugging Face, a collaboration platform for the AI community, for researchers and users to run and fine-tune. Hugging Face’s role in AI collaboration is no different from the reliance on GitHub for code hosting or Discord for community management (both widely used in crypto). We think this situation is unlikely to change in the near future, barring serious mismanagement.
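The barrier to running these open models is low. As a rough illustration (a minimal sketch assuming Python with the Hugging Face transformers library and PyTorch installed; the model name below is only an example of a small, openly licensed model), a hosted model can be downloaded and run locally in a few lines:

```python
# Minimal sketch, assuming `pip install transformers torch`. The model name is
# just an example of a small open model; any of the hundreds of thousands of
# hosted models could be substituted.
from transformers import pipeline

# Downloads the weights from the Hugging Face Hub on first use,
# then runs entirely on the local machine.
generator = pipeline("text-generation", model="distilgpt2")

output = generator(
    "Decentralized AI is",
    max_new_tokens=40,   # keep the completion short
    do_sample=True,      # sample rather than greedy-decode
)
print(output[0]["generated_text"])
```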

Models available on Hugging Face range from large language models (LLMs) to generative image and video models, and come from major industry players such as OpenAI, Meta, and Google as well as from independent developers. Some open source language models even outperform state-of-the-art closed source models on throughput (while maintaining comparable output quality), which ensures a degree of competition between open source and commercial models (see Figure 1). Importantly, we believe this vibrant open source ecosystem, combined with a competitive commercial sector, has fostered an industry in which underperforming models are driven out of competition.

[Figure 1]

The second trend, which complements the open source culture, is the improving quality and cost-effectiveness of small models (highlighted in an LLM study back in 2020 and more recently in a paper from Microsoft), which together point toward a future of high-performance, locally run AI models. On certain benchmarks, some fine-tuned open source models can even outperform leading closed source models. In such a world, some AI models could be run locally, maximizing decentralization. Of course, incumbent technology companies will continue to train and run larger models in the cloud, but there will be trade-offs in the design space between the two.

Additionally, given the increasing complexity of benchmarking AI models (including data contamination and varying test scope), we believe that generative model output may ultimately be best evaluated by end users in a free market. In fact, there are already tools that let end users perform side-by-side comparisons of model outputs, and benchmarking companies offer similar services. The difficulty of benchmarking generative AI can be seen in the growing variety of open LLM benchmarks, including MMLU, HellaSwag, TriviaQA, BoolQ, and others, each testing a different dimension such as common sense reasoning, academic subjects, and various question formats.
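One way such free-market evaluation already happens in practice is to aggregate many pairwise, side-by-side user preferences into a leaderboard. The sketch below is a generic Elo-style aggregator of user votes, not any particular vendor's implementation; the model names, K-factor, and vote data are illustrative assumptions:

```python
# Illustrative sketch of aggregating side-by-side user preferences into an
# Elo-style model ranking. The ratings, K-factor, and votes are hypothetical;
# real benchmarking services use more elaborate statistics.

K = 32  # update step size (assumed)
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}

def update(winner: str, loser: str) -> None:
    """Shift ratings after one head-to-head comparison."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))
    ratings[winner] += K * (1.0 - expected_win)
    ratings[loser] -= K * (1.0 - expected_win)

# Hypothetical stream of user votes: (preferred model, other model)
votes = [("model_a", "model_b"), ("model_a", "model_c"), ("model_c", "model_b")]
for winner, loser in votes:
    update(winner, loser)

for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```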

The third trend we observe in the AI space is that existing platforms with strong user lock-in or well-defined business problems stand to benefit disproportionately from AI integration. For example, GitHub Copilot’s integration with code editors enhances an already powerful developer environment. Embedding AI interfaces into other tools such as email clients, spreadsheets, and customer relationship management software is also a natural use case (Klarna’s AI assistant, for example, reportedly does the work of 700 full-time agents).

However, it is important to note that in many of these scenarios, AI models do not create new platforms but simply enhance existing ones. Other AI models that improve traditional business processes (for example, Meta’s Lattice, which restored advertising performance after Apple introduced App Tracking Transparency) often rely on proprietary data and closed systems. Because these models are vertically integrated into their core products and use proprietary data, they will likely remain closed source.

In the world of AI hardware and computing, we see two other related trends. The first is a shift in compute usage from training to inference. That is, when AI models are first developed, vast amounts of computing resources are used to train them on large data sets; the focus has since moved to model deployment and model querying.
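A rough back-of-the-envelope comparison helps make the scale of this shift concrete. The sketch below uses the common approximations of roughly 6 FLOPs per parameter per training token and roughly 2 FLOPs per parameter per generated token at inference; the model size and usage figures are illustrative assumptions, not measured numbers:

```python
# Back-of-the-envelope comparison of training vs. inference compute for a
# hypothetical model, using ~6 FLOPs per parameter per training token and
# ~2 FLOPs per parameter per generated token. All figures are assumptions.

params = 70e9             # 70B-parameter model (assumed)
training_tokens = 2e12    # 2 trillion training tokens (assumed)

train_flops = 6 * params * training_tokens
flops_per_generated_token = 2 * params

# Hypothetical deployment: 10 million requests/day, ~500 generated tokens each.
daily_inference_flops = 10e6 * 500 * flops_per_generated_token

print(f"Training compute:        {train_flops:.2e} FLOPs (one-time)")
print(f"Daily inference compute: {daily_inference_flops:.2e} FLOPs")
print(f"Days of inference to match training: {train_flops / daily_inference_flops:.0f}")
```

Under these assumptions, inference compute surpasses the one-time training cost after a few years of serving, which is why deployment platforms matter increasingly to model owners.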

Nvidia disclosed in its February 2024 earnings call that about 40% of its business is inference, and Satya Nadella made similar remarks during Microsoft’s January earnings call, noting that most of its Azure AI usage is for inference. As this trend continues, we believe entities seeking to monetize their models will prioritize platforms that can reliably run those models in a secure and production-ready manner.

The second major trend we see is the competitive landscape around hardware architecture. Nvidia’s H200 processor will be available in the second quarter of 2024, and the next-generation B100 is expected to roughly double performance again. In addition, Google’s continued support for its own tensor processing units (TPUs) and Groq’s new language processing units (LPUs) may strengthen their market share in this area in the coming years (see Figure 2). These developments are likely to change the cost dynamics of the AI industry and could benefit cloud service providers that can adapt quickly, procure hardware at scale, and build out the associated physical networks and developer tools.

[Figure 2]

Overall, artificial intelligence is an emerging and rapidly developing field. It has been less than a year and a half since ChatGPT first hit the market in November 2022 (although its underlying GPT-3 model has been around since June 2020), and the pace of development since then has been astounding. Notwithstanding the hype around some generative AI models, we are already starting to see a survival-of-the-fittest effect in the market (poorer performing models are abandoned in favor of better alternatives). The industry’s rapid growth and upcoming regulation mean that, as new solutions continue to hit the market, the industry’s problem space will keep shifting.

The oft-touted refrain that “decentralization solves [insert problem]” appears to be taken as a given, but is, in our view, premature for such a rapidly innovating field. It also preemptively assumes a centralization problem that may not actually exist. The reality is that the AI industry already has a great deal of decentralization across technology and business verticals, thanks to competition between many different companies and open source projects. Furthermore, on both a technical and a social level, truly decentralized protocols have much slower decision-making and consensus processes than centralized ones. This may be an obstacle to building products that balance decentralization and competitiveness at this stage of AI development. That said, we do think there are meaningful synergies between crypto and artificial intelligence, but on a longer time horizon.

Scoping the opportunity

Broadly speaking, we divide the intersection of AI and Crypto into two broad categories. The first covers use cases where AI products improve the crypto industry, including scenarios such as creating human-readable transactions, improving blockchain data analysis, and using model outputs in permissionless protocols. The second covers use cases that aim to disrupt traditional AI pipelines through Crypto's decentralized approaches to computation, verification, identity, and more.

In our view, the former category has the clearer value proposition: although significant technical challenges remain, we believe these use cases will benefit over the long term from more sophisticated on-chain reasoning models. Centralized AI models can improve crypto much as they can any other technology-focused industry, for example through developer tools, code auditing, and translating human language into on-chain actions. But current investment in this area typically flows to private companies through venture capital and is therefore often overlooked by the public market.

However, the value proposition of the second category (i.e., that Crypto will disrupt the existing AI landscape) is less certain to us. Its challenges go beyond the technical (which we believe are generally solvable in the long term) and amount to an uphill battle against broader market and regulatory forces. Even so, much of the recent attention on AI + Crypto has focused on this category, since these use cases are better suited to creating liquid tokens. This is our focus in the next section, as there are (for now) relatively few liquid tokens in Crypto related to centralized AI tools.

The role of Crypto in AI

To simplify, we analyze Crypto’s potential impact on AI through the four main stages of the AI process, which are:

(1) data collection, storage, and processing; (2) model training and inference; (3) verification of model outputs; and (4) tracking of AI model outputs. A slew of new crypto-AI projects has emerged in these areas, although we believe that in the short to medium term many will face significant challenges in generating demand, as well as fierce competition from centralized companies and open source solutions.

Proprietary data

Data is the foundation of all AI models and is perhaps the key differentiator in the performance of specialized AI models. Historical blockchain data is itself a new and rich source of data for models, and some projects (such as Grass) also aim to leverage Crypto incentives to collect new data sets from the open internet. In this regard, Crypto has the opportunity to provide industry-specific data sets and to incentivize the creation of valuable new ones. (Reddit’s recent $60 million annual data licensing deal with Google foreshadows future growth in data set monetization.)

Many early models (such as GPT-3) used a mix of open datasets such as Common Crawl, WebText2, Books, and Wikipedia, and similar datasets are freely available on Hugging Face (which currently hosts over 110,000 of them). However, possibly to protect their commercial interests, many recently released closed source models do not disclose their final training data mixes. We believe the trend toward proprietary data sets, particularly within commercial models, will continue and will increase the importance of data licensing.

Existing centralized data marketplaces are already helping to bridge the gap between data providers and consumers, and we believe there is an opportunity for emerging decentralized data marketplaces to position themselves between open source data catalogs and enterprise incumbents. Without the support of a legal structure, however, a purely decentralized data marketplace would also need to build standardized data interfaces and pipelines, verify data integrity and provenance, and solve the cold start problem for its products, all while balancing token incentives among market participants.

In addition, decentralized storage solutions may eventually find a niche in the AI industry, although we believe there are still considerable challenges here. On the one hand, channels for distributing open source datasets already exist and are widely used. On the other hand, many owners of proprietary data sets have strict security and compliance requirements, and there is currently no regulatory pathway for hosting sensitive data on decentralized storage platforms such as Filecoin and Arweave. In fact, many enterprises are still transitioning from on-premises servers to centralized cloud storage providers. On a technical level, the decentralized nature of these networks is also currently incompatible with certain data residency and physical data-siloing requirements for sensitive data storage.

While price comparisons between decentralized storage solutions and established cloud providers show that the decentralized options can be cheaper per unit of storage, we believe this misses the larger issue. First, there are the up-front costs of migrating systems between vendors, in addition to ongoing operating expenses. Second, Crypto-based decentralized storage platforms need to match the better tooling and integrations of mature cloud systems developed over the past two decades. From a business operations perspective, cloud solutions offer more predictable costs, contractual obligations, dedicated support teams, and a large developer talent pool.

It’s also worth noting that a cursory comparison with the big three cloud providers (AWS, Google Cloud Platform, and Microsoft Azure) is incomplete. Dozens of low-cost cloud companies are also vying for market share by offering cheaper, basic servers and other services. In our view, they are the real near-term competitors for cost-conscious consumers. That said, recent innovations such as Filecoin’s compute-over-data and Arweave’s ao compute environment may find a role among newer projects and cost-sensitive (possibly smaller) companies that work with less sensitive data sets and have not yet locked in a vendor.

So while there is certainly room for new Crypto products in the data space, we believe short-term breakthroughs will come where they can offer a unique value proposition. In areas where decentralized products compete head-on with traditional and open source competitors, we expect substantial progress to take longer.

Model training and inference

The decentralized computing (DeComp) sector in Crypto also positions itself as an alternative to centralized cloud computing, motivated in part by the current GPU supply crunch. One proposed solution to this shortage, employed by protocols such as Akash and Render, is to repurpose idle computing resources into decentralized networks, undercutting the prices of centralized cloud providers. According to preliminary metrics, such projects appear to be seeing growth in both user and vendor adoption. For example, Akash has tripled its active leases (i.e., number of users) year-to-date (see Figure 3), primarily due to increased usage of its storage and compute resources.

[Figure 3]

However, since peaking in December 2023, the fees paid to the network have actually declined, as the supply of available GPUs has outpaced the growth in demand for these resources. As more providers have joined the network, the percentage of GPUs leased (which appear to be the largest revenue driver proportionally) has declined (see Figure 4). For a network where compute pricing can change with supply and demand, it is unclear where sustained, usage-driven demand for the native token will ultimately come from if supply-side growth keeps outpacing demand-side growth. We believe this token model may need to be revisited to adapt to such market changes, although the long-term impact of any such change is currently unclear.

[Figure 4]

At the technical level, decentralized compute solutions also face the challenge of network bandwidth limitations. For large models that require multi-node training, the physical network infrastructure layer plays a crucial role. Data transfer speeds, synchronization overhead, and support for certain distributed training algorithms mean that specific network configurations and custom interconnects (such as InfiniBand) are required for efficient execution. This becomes difficult to implement in a decentralized manner once cluster sizes grow beyond a certain point.
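To see why interconnect bandwidth dominates, consider the gradient synchronization step in data-parallel training. With a ring all-reduce, each node transfers roughly 2·(N−1)/N times the gradient payload per step; the sketch below compares an InfiniBand-class link against commodity links (the model size, node count, precision, and link speeds are illustrative assumptions):

```python
# Rough estimate of per-step gradient synchronization time in data-parallel
# training using ring all-reduce, where each node moves ~2*(N-1)/N times the
# gradient payload per step. All parameters below are illustrative assumptions.

params = 70e9                 # 70B parameters (assumed)
bytes_per_param = 2           # fp16 gradients (assumed)
payload_bytes = params * bytes_per_param

nodes = 16                    # number of machines (assumed)
traffic_per_node = 2 * (nodes - 1) / nodes * payload_bytes

def sync_seconds(link_gbps: float) -> float:
    """Time to move the all-reduce traffic over a link of the given speed."""
    return traffic_per_node / (link_gbps * 1e9 / 8)

print(f"InfiniBand-class link (400 Gb/s): {sync_seconds(400):8.1f} s per step")
print(f"Datacenter Ethernet (100 Gb/s):   {sync_seconds(100):8.1f} s per step")
print(f"Consumer internet (1 Gb/s):       {sync_seconds(1):8.1f} s per step")
```

Even before accounting for latency and stragglers, the consumer-grade link is hundreds of times slower per step under these assumptions, which is why geographically dispersed clusters struggle with large multi-node training runs.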

Overall, we believe that decentralized compute (and storage) faces stiff long-term competition from centralized cloud providers. In our view, any adoption will be a long journey, at least as long as the cloud adoption cycle itself. Given the greater technical complexity of decentralized network development, combined with the lack of comparably scaled development and sales teams, we believe fully executing on the decentralized compute vision will be a difficult journey.

Verification and trust models

As AI models become more important in our lives, concerns are growing about the quality and bias of their output. Some crypto projects aim to find decentralized, market-based solutions to this problem by leveraging sets of algorithms to evaluate output across different categories. However, the benchmarking challenges described above, as well as the obvious cost, throughput, and quality trade-offs, make head-on competition difficult. BitTensor, one of the largest AI-focused cryptocurrencies in this category, aims to solve this problem, although it still faces technical challenges that may hinder its widespread adoption (see Appendix 1).

Additionally, trustless model inference (i.e., proving that a model output was actually generated by the claimed model) is another area of active research in Crypto x AI. However, we believe that as capable open source models shrink in size, these solutions may face demand-side challenges. In a world where models can be downloaded and run locally, and their integrity verified via established file hash/checksum methods, the role of trustless inference is less clear. Admittedly, many LLMs cannot yet be trained or run on lightweight devices such as mobile phones, but powerful desktop computers (such as those used for high-end gaming) can already run many high-performance models.
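The "established file hash/checksum methods" mentioned above amount to a few lines of code. A minimal sketch (the file path and expected digest are placeholders) of verifying downloaded model weights against a distributor-published checksum:

```python
# Minimal sketch of verifying a locally downloaded model file against a
# published checksum. The file path and expected digest below are placeholders.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "<digest published by the model distributor>"  # placeholder
local = sha256_of_file("model.safetensors")               # placeholder path

if local == EXPECTED:
    print("Checksum matches: the local weights are the claimed model.")
else:
    print("Checksum mismatch: do not trust this file.")
```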

Data provenance and identity

As generative AI output becomes increasingly indistinguishable from human output, the importance of tracking what AI generates is coming into focus. GPT-4 passes the Turing test about three times as often as GPT-3.5, and we are nearly certain that someday in the not-too-distant future it will be impossible to tell whether an online persona is a machine or a real human. In such a world, determining the humanity of online users and watermarking AI-generated content will become critical capabilities.

Decentralized identifiers and proof-of-personhood mechanisms like Worldcoin aim to solve the former problem by identifying humans on-chain. Likewise, publishing data hashes to a blockchain can aid data provenance by verifying the age and origin of content. However, as in the previous section, we believe the feasibility of Crypto-based solutions must be weighed against centralized alternatives.
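Before turning to those centralized alternatives, here is a minimal sketch of the on-chain provenance pattern just described: compute a digest of the content and bundle it into a small record whose hash can then be written into a blockchain transaction or a registry contract. The record fields and publisher identifier are illustrative assumptions:

```python
# Sketch of preparing a data-provenance record. The digest is what would be
# anchored on-chain (e.g., in a transaction's data field or a registry
# contract); the record fields and publisher identifier are assumptions.
import hashlib
import json
import time

content = b"Example article body or image bytes"

record = {
    "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
    "publisher": "did:example:news-outlet-123",     # hypothetical identifier
    "timestamp": int(time.time()),                  # when the record was created
}

# This JSON (or just its digest) is what an on-chain provenance registry
# would store; anyone can later re-hash the content and compare.
print(json.dumps(record, indent=2))
```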

Some countries, such as China, link online personas to government-controlled databases. Although much of the world is less centralized, a consortium of KYC providers could also offer proof-of-personhood solutions independent of blockchain technology (perhaps in a manner similar to the trusted certificate authorities that underpin today’s internet security). There is also ongoing research into AI watermarking, which embeds hidden signals in text and image output so that algorithms can detect whether content was AI-generated. Many leading AI companies, including Microsoft, Anthropic, and Amazon, have publicly committed to adding such watermarks to the content they generate.
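For intuition on how such text watermarks can be detected, the toy sketch below follows the widely cited "green list" idea from academic work on LLM watermarking: the previous token pseudo-randomly partitions the vocabulary, a watermarking sampler over-selects the "green" half, and a simple frequency test can then flag the text. This is a word-level simplification for illustration, not any vendor's production scheme:

```python
# Toy illustration of "green list" watermark detection, simplified to whole
# words instead of model tokens. It mirrors the idea in academic LLM
# watermarking work but is not any vendor's production scheme.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign `word` to the green half of the vocabulary,
    seeded by the previous word (so the split is reproducible)."""
    h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction_z(text: str) -> float:
    """z-score of the observed green fraction vs. the 0.5 expected by chance."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(a, b) for a, b in pairs)
    n = len(pairs)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# Unwatermarked text should score near 0; text produced by a sampler that
# deliberately favored green words would score several standard deviations higher.
print(green_fraction_z("the quick brown fox jumps over the lazy dog"))
```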

Additionally, many existing content providers are already trusted to keep strict records of content metadata for compliance reasons. As a result, users often trust the metadata associated with social media posts (though not screenshots of them), even though it is stored centrally. It is important to note that any Crypto-based data provenance or identity solution would need to be integrated into user platforms to be broadly effective. Therefore, while Crypto-based solutions for proving identity and data provenance are technically feasible, we also believe their adoption is not a given and will ultimately depend on business, compliance, and regulatory requirements.

Trading the AI narrative

Despite the issues above, many AI tokens have outperformed Bitcoin and Ethereum, as well as major AI stocks such as Nvidia and Microsoft, since the fourth quarter of 2023. We believe this is because AI tokens benefit both from the relative performance of the broader Crypto market and from the associated AI craze (see Appendix 2). As a result, AI-focused tokens have at times seen positive price swings even when Bitcoin’s price was falling, creating upside volatility during Bitcoin downturns. Figure 5 shows the performance of AI tokens on days when Bitcoin traded down.

[Figure 5]

Overall, we continue to believe that the AI narrative trade lacks many sustainable short-term demand drivers. The absence of clear adoption forecasts and metrics has given rise to meme-like speculation that, in our opinion, may not be sustainable over the long term. Eventually, price and utility will converge; the open questions are how long that will take, and whether utility will rise to meet price or vice versa. That said, we do think a sustainably constructive crypto market and a strongly performing AI industry are likely to keep a robust crypto AI narrative alive for some time.

Conclusion

Crypto’s role in AI does not exist in a vacuum: any decentralized platform competes with existing centralized alternatives and must be analyzed in the context of broader business and regulatory requirements. We therefore believe that simply replacing centralized providers for the sake of “decentralization” is not enough to drive meaningful market adoption. Generative AI models have existed for several years and have maintained a degree of decentralization thanks to market competition and open source software.

A recurring theme in this report is that crypto-based solutions, while often technically feasible, still require significant work to achieve functional parity with their more centralized counterparts, and that assumes those platforms do not stand still in the meantime. In fact, centralized development is often faster than decentralized development, which is slowed by its consensus processes; this may be a particular handicap in a field developing as rapidly as artificial intelligence.

With this in mind, we believe the overlap between AI and crypto is still in its infancy and is likely to change rapidly in the coming years as the broader field of artificial intelligence develops. The decentralized AI future envisioned by many Crypto insiders is not currently guaranteed to come true; indeed, the future of the AI industry itself remains largely undetermined. We therefore think it is prudent to navigate such a market carefully and to dig deeper into how Crypto-based solutions might actually offer meaningfully better alternatives, or at least to understand the underlying trading narrative.

Appendix 1: BitTensor

BitTensor incentivizes distinct intelligence markets across its 32 subnets. This is intended to address some of the benchmarking issues noted above by enabling subnet owners to create game-like constraints for extracting intelligence from information providers. For example, its flagship subnet 1 centers on text prompting and incentivizes miners who “produce the best response based on prompts sent by subnet validators in that subnet.” That is, it rewards the miners who generate the best text response to a given prompt, as judged by the validators in that subnet. This enables an intelligence economy in which network participants compete to create models for various markets.

However, this validation and reward mechanism is still in its early stages and is vulnerable to adversarial attacks, especially when models are evaluated by other models that carry their own biases (although progress has been made here, with some subnets now using fresh synthetic data for evaluation). This is especially true for “fuzzy” outputs such as language and art, where evaluation metrics can be subjective, leading to multiple competing benchmarks of model performance.

For example, BitTensor’s validation mechanism for subnet 1 works in practice as follows:

Validators generate one or more reference answers against which all miners’ responses are compared. Those whose answers are most similar to the reference receive the highest rewards and, ultimately, the greatest incentive.

Current similarity algorithms use a combination of literal string matches and semantic matches as the basis for rewards, but it is difficult to capture differing stylistic preferences with a limited set of reference answers.
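As an illustration only (this is not BitTensor's actual scoring code; the weights and the bag-of-words "semantic" proxy are stand-ins for its real similarity models), a reward of this shape might look like the following, paying miners in proportion to their similarity to the reference answer:

```python
# Illustrative sketch of a similarity-based reward of the kind described above.
# NOT BitTensor's implementation; the weighting and the bag-of-words "semantic"
# proxy are stand-ins for its real scoring models.
import difflib
import math
from collections import Counter

def cosine_bow(a: str, b: str) -> float:
    """Crude semantic proxy: cosine similarity of bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def score(reference: str, response: str, w_literal: float = 0.5) -> float:
    """Blend a literal string match with the semantic proxy."""
    literal = difflib.SequenceMatcher(None, reference, response).ratio()
    return w_literal * literal + (1 - w_literal) * cosine_bow(reference, response)

reference = "The capital of France is Paris."
miner_responses = {
    "miner_1": "Paris is the capital of France.",
    "miner_2": "France's capital city is Paris.",
    "miner_3": "I am not sure.",
}
scores = {m: score(reference, r) for m, r in miner_responses.items()}
total = sum(scores.values())
for miner, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{miner}: similarity={s:.2f}, reward share={s / total:.2%}")
```

The limitation the text describes is visible even in this toy: a stylistically different but equally correct answer scores lower than one that happens to echo the reference's wording.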

It is unclear whether the models produced by BitTensor’s incentive structure will ultimately outperform centralized models (or whether the best-performing models will migrate to BitTensor), or how it will accommodate other trade-offs such as model size and underlying compute cost. A marketplace in which users freely choose the models that suit their preferences might achieve a similar allocation of resources through an “invisible hand.” That said, BitTensor is attempting to solve a genuinely hard problem in an ever-expanding problem space.

Appendix 2: Worldcoin

Perhaps the most obvious example of an AI token tracking the AI market craze is Worldcoin. Its World ID 2.0 upgrade, released on December 13, 2023, attracted little attention, but the token rose 50% after Sam Altman promoted Worldcoin on December 15. (Speculation about Worldcoin’s future remains closely tied to Sam Altman, who is a co-founder of Tools for Humanity, the developer behind Worldcoin.) Similarly, OpenAI’s release of Sora on February 15, 2024 caused Worldcoin’s price to nearly triple, even though there was no related announcement on Worldcoin’s Twitter or blog (see Figure 6). As of this writing, Worldcoin is valued (fully diluted) at roughly US$80 billion, very close to OpenAI’s US$86 billion valuation as of February 16 (a company with annualized revenue of around $2 billion).

[Figure 6]


This article is translated from the original at https://www.coinbase.com/. If reprinted, please indicate the source.
