1. Outlook
1. Macro-level summary and future forecasts
Last week, the Trump administration announced a 25% tariff on all non-US-made cars, a decision that once again triggered panic in the market. The tariff may not only sharply raise prices for imported cars and parts, but also provoke retaliatory measures from trading partners, further escalating international trade tensions. Investors should continue to watch the progress of trade negotiations and changes in the global economic situation.
2. Cryptocurrency market changes and warnings
Last week, the cryptocurrency market suffered a significant correction driven by macro-level fear, giving back the previously accumulated rebound gains in just a few days. The swing was mainly due to renewed uncertainty in the global macroeconomic environment. Looking ahead to this week, the market's focus is whether Bitcoin and Ethereum break decisively below their previous lows. Those levels are not only important technical supports but also key psychological lines of defense. On April 2, the United States formally opens the chapter on reciprocal tariffs. If the move does not further intensify market panic, the cryptocurrency market may offer a staged, right-side bottom-fishing opportunity. Investors should nevertheless remain vigilant and pay close attention to market dynamics and related indicators.
3. Industry and track hot spots
Particle, a modular L1 chain abstraction platform led by Cobo and YZI, with two follow-on investments from Hashkey, greatly improves user experience and developer efficiency by simplifying cross-chain operations and payments, though it faces challenges around liquidity and centralized management. Skate, which focuses on seamlessly linking the application-layer protocols of mainstream VMs, offers an innovative and efficient solution: by providing a unified application state, simplifying cross-chain task execution, and ensuring security, it greatly reduces the complexity developers and users face in a multi-chain environment. Arcium is a fast, flexible, low-cost infrastructure designed to make encrypted computing accessible through blockchain. Walrus, an innovative decentralized storage solution, raised a record $140 million.
2. Market hot spots and potential projects of the week
1. Potential track performance
1.1. A brief analysis of the features of Skate, an application layer protocol for seamlessly connecting mainstream VMs, led by Hashkey
Skate is a dApp-focused infrastructure layer that connects all virtual machines (EVM, TonVM, SolanaVM), allowing users to interact seamlessly from their native chains. For users, Skate delivers applications that run in their preferred environment. For developers, Skate manages the cross-chain complexity and introduces a new application paradigm: applications built once across all chains and all virtual machines, serving every chain from a unified application state.
Architecture Overview
Skate's infrastructure consists of three basic layers:
1. Skate's central chain: the central hub that handles all logical operations and stores application state.
2. Pre-confirmed AVS: AVS deployed on Eigenlayer facilitates the secure delegation of re-staked ETH to Skate’s executor network. It serves as the primary source of truth, ensuring that executors perform the required actions on the target chain.
3. Executor Network: A network of executors responsible for executing the operations defined by the application. Each application has its own set of executors.
As the central chain, Skate maintains and updates the shared state and provides instructions to connected peripheral chains, which respond only to the calldata Skate supplies. This is carried out by the executor network, each member of which is a registered AVS operator responsible for performing these tasks. In the event of dishonesty, the pre-confirmed AVS serves as the source of truth for punishing the offending operator.
User Flow
Skate is driven mainly by intents, each of which encapsulates the action a user wants to perform while defining the necessary parameters and boundaries. Users simply sign the intent with their native wallet and interact only on their own chain, creating a user-native environment.
The intent flow is as follows (a toy sketch follows the list):
1. Source chain
Users initiate actions on the TON/Solana/EVM chain by signing an intent.
2. Skate
The executor receives the intent and calls the processIntent function. This creates a task encapsulating the key information required for the executor's task execution, and the system emits a TaskSubmitted event.
AVS validators actively listen for TaskSubmitted events and verify the content of each task. Once consensus is reached in the pre-confirmed AVS, the forwarder issues the signature required for task execution.
3. Target Chain
The executor calls the executeTask function on the Gateway contract.
The Gateway contract verifies that the task has passed AVS verification, i.e., that the forwarder's signature is valid, and then executes the function defined in the task.
The calldata of the function call is executed and the intent is marked as completed.
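The end-to-end flow can be condensed into a short, purely illustrative Python sketch. The names processIntent, TaskSubmitted, executeTask, and Gateway come from the description above; everything else (data structures, the signature check) is a simplified stand-in, not Skate's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    user: str
    source_chain: str   # e.g. "TON", "Solana", or an EVM chain
    action: str         # the operation the user wants performed
    signature: str      # signed with the user's native wallet

@dataclass
class Task:
    intent: Intent
    task_id: int
    avs_signatures: list = field(default_factory=list)

class SkateCentralChain:
    """Central hub: stores application state and emits TaskSubmitted events."""
    def __init__(self):
        self.tasks, self.next_id, self.listeners = {}, 0, []

    def process_intent(self, intent: Intent) -> Task:   # processIntent
        task = Task(intent=intent, task_id=self.next_id)
        self.next_id += 1
        self.tasks[task.task_id] = task
        for validator in self.listeners:                # emit TaskSubmitted
            validator.on_task_submitted(task)
        return task

class Gateway:
    """Target-chain contract: runs a task only with a valid forwarder signature."""
    def __init__(self, forwarder_key: str):
        self.forwarder_key = forwarder_key

    def execute_task(self, task: Task, forwarder_sig: str) -> str:  # executeTask
        # Stand-in for verifying the signature the forwarder issues after AVS consensus.
        if forwarder_sig != f"signed:{self.forwarder_key}:{task.task_id}":
            raise PermissionError("task not approved by the pre-confirmed AVS")
        return f"executed {task.intent.action} for {task.intent.user}"
```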
Reviews
Skate provides an innovative and efficient solution for cross-chain operations of decentralized applications. By providing a unified application state, simplifying cross-chain task execution, and ensuring security, Skate greatly reduces the complexity for developers and users in a multi-chain environment. Its flexible architecture and easy integration make it a promising candidate for application in a multi-chain ecosystem. However, in order to achieve full implementation in a high-concurrency and multi-chain ecosystem, Skate still needs to continue to work on performance optimization and cross-chain compatibility.
1.2. How Arcium, a decentralized cryptographic computing network backed by Coinbase, NGC and Long Hash, realizes its vision
Arcium is a fast, flexible and low-cost infrastructure designed to make encrypted computing accessible through blockchain. Arcium is a crypto supercomputer providing large-scale encrypted computing services, enabling developers, applications and entire industries to compute on fully encrypted data within a trustless, verifiable and efficient framework. Through secure multi-party computation (MPC), Arcium delivers scalable, secure encryption solutions for Web2 and Web3 projects, supported by a decentralized network.
Architecture Overview
The Arcium Network is designed to provide secure distributed confidential computing for a variety of applications, from artificial intelligence to decentralized finance (DeFi) and beyond. It is based on advanced cryptographic techniques, including multi-party computation (MPC), to achieve trustless and verifiable computing without the intervention of a central authority.
● Multi-party execution environments (MXEs)
MXEs are specialized, isolated environments for defining and securely executing computational tasks. They support parallel processing (as multiple clusters can simultaneously execute computations for different MXEs), which improves throughput and security.
MXEs are highly configurable, allowing computing customers to define security requirements, encryption schemes, and performance parameters based on their needs. While a single computing task will be executed in a specific cluster of Arx nodes, multiple clusters can be associated with a single MXE. This ensures that computing tasks can still be reliably executed even when some nodes in the cluster are offline or overloaded. By pre-defining these configurations, customers can customize the environment with high flexibility based on their specific use case requirements.
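As a rough illustration of this configurability, here is a hypothetical sketch of what an MXE configuration might look like. All field names and values are illustrative assumptions, not Arcium's actual configuration surface.

```python
from dataclasses import dataclass

@dataclass
class MXEConfig:
    mpc_protocol: str        # e.g. "Cerberus" (dishonest majority) or "Manticore"
    encryption_scheme: str   # scheme used for encrypted inputs and outputs
    cluster_size: int        # Arx nodes in the cluster executing each task
    backup_clusters: int     # extra clusters, so tasks survive offline nodes

# A customer tuning for strong security plus redundancy against node churn.
mxe = MXEConfig(
    mpc_protocol="Cerberus",
    encryption_scheme="threshold-encryption",  # illustrative placeholder
    cluster_size=5,
    backup_clusters=2,  # multiple clusters can back a single MXE, as noted above
)
print(mxe)
```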
● arxOS
arxOS is the distributed execution engine in the Arcium network, responsible for coordinating the execution of computing tasks and driving Arx nodes and clusters. Each node (similar to the core in a computer) provides computing resources to execute computing tasks defined by MXEs.
● Arcis (Arcium's developer framework)
Arcis is a Rust-based developer framework that enables developers to build applications on Arcium infrastructure and supports all of Arcium's multi-party computation (MPC) protocols. It includes the framework itself and a compiler.
● Arx node cluster (running arxOS)
Arx nodes run arxOS and are organized into clusters. Clusters provide a customizable trust model, supporting dishonest-majority protocols (initially Cerberus) and honest-but-curious protocols (such as Manticore). Other protocols, including honest-majority protocols, will be added in the future to support more use cases.
Chain-level enforcement
All state management and coordination of computation tasks are handled on-chain through the Solana blockchain, which acts as a consensus layer and coordinates the operation of Arx nodes. This ensures fair reward distribution, enforcement of network rules, and alignment between nodes on the current state of the network. Tasks are queued in a decentralized memory pool architecture, where on-chain components help determine which computation tasks have the highest priority, identify misbehavior, and manage execution order.
Nodes ensure compliance with network rules by staking collateral. If misconduct or deviation from the protocol occurs, the offending node's stake is slashed, preserving the integrity of the network.
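A minimal sketch of this stake-and-slash rule, assuming a simple fractional penalty; the parameters and the misbehavior check are placeholders, while the real logic lives in Arcium's on-chain coordination on Solana.

```python
def settle_epoch(stakes: dict, misbehaved: set, slash_fraction: float = 0.5) -> dict:
    """stakes: node_id -> staked collateral; misbehaved: nodes caught deviating."""
    for node_id in stakes:
        if node_id in misbehaved:
            stakes[node_id] *= 1.0 - slash_fraction  # slash the offending node
    return stakes

# arx-2 deviated from the protocol this epoch and loses half its collateral.
print(settle_epoch({"arx-1": 100.0, "arx-2": 100.0}, misbehaved={"arx-2"}))
```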
Reviews
The following are the key features that make Arcium Network a cutting-edge secure computing solution:
1. Trustless, arbitrary encrypted computing: The Arcium Network enables trustless computing through its multi-party execution environments (MXEs), allowing arbitrary computing on encrypted data without exposing the data content.
2. Guaranteed execution: Through its blockchain-based coordination system, the Arcium network ensures that all computations in MXEs are reliably executed. Arcium's protocol enforces compliance through a staking and slashing mechanism: nodes must post collateral, and any deviation from the agreed execution rules is punished by slashing, ensuring the correct completion of every computing task.
3. Verifiability and privacy protection: Arcium provides a verifiable computing mechanism that allows participants to publicly audit the correctness of computation results, enhancing the transparency and reliability of data processing.
4. On-chain coordination: The network uses the Solana blockchain to manage node scheduling, compensation, and performance incentives. Staking, penalties, and other incentive mechanisms are all executed entirely on-chain to ensure the decentralization and fairness of the system.
5. Developer-friendly interfaces: Arcium provides dual interfaces: a web-based graphical interface for non-technical users, and a Solana-compatible SDK for developers building customized applications. This design makes confidential computing accessible to ordinary users while meeting the needs of highly technical developers.
6. Multi-chain compatibility: Although initially based on Solana, the Arcium network was designed with multi-chain compatibility in mind and can support access to different blockchain platforms.
Through these features, the Arcium Network aims to redefine how sensitive data is processed and shared in a trustless environment, promoting wider adoption of secure multi-party computing (MPC).
1.3. What are the characteristics of Particle, a modular L1 chain abstraction platform led by Cobo and YZI with two follow-on investments from Hashkey?
Particle Network has radically simplified the Web3 user experience through wallet abstraction and chain abstraction. With its wallet abstraction SDK, developers can onboard users into smart accounts with one-click social login.
In addition, Particle Network's chain abstraction stack, with Universal Accounts as its flagship product, gives users a unified account and balance across every chain.
Particle Network’s real-time wallet abstraction product suite consists of three key technologies:
1. User Onboarding
Through a simplified registration process, users can enter the Web3 ecosystem more easily, improving the user experience.
2. Account Abstraction
Through account abstraction, users’ assets and operations no longer rely on a single chain, which improves flexibility and convenience of cross-chain operations.
3. Upcoming product release: Chain Abstraction
Chain abstraction will further enhance cross-chain capabilities, support users to seamlessly operate and manage assets across multiple blockchains, and create a unified on-chain account experience.
Architecture Analysis
Particle Network coordinates and completes cross-chain transactions in a high-performance EVM execution environment through its Universal Accounts and three core functions:
1. Universal Accounts
Provides a unified account state and balance: users' assets and operations on all chains are managed through a single account.
2. Universal Liquidity
Through the cross-chain liquidity pool, funds between different chains can be transferred and used seamlessly.
3. Universal Gas
Simplifies the user's experience by automatically managing the gas fees required for cross-chain transactions.
These three core functions work together to enable Particle Network to unify all on-chain interactions and automate cross-chain fund transfers through atomic cross-chain transactions, helping users achieve their goals without manual intervention.
Universal Accounts
Particle Network's universal account aggregates token balances across all chains, allowing users to deploy assets from any chain in decentralized applications (dApps) on any other chain as if using a single wallet.
Universal accounts achieve this through Universal Liquidity. They can be understood as specialized smart account implementations deployed and coordinated across all chains. Users only need to connect a wallet to create and manage a universal account, and the system automatically assigns them management permissions. The connected wallet can be generated via social login through Particle Network's Modular Smart Wallet-as-a-Service, or it can be an ordinary Web3 wallet such as MetaMask, UniSat, or Keplr.
Developers can easily integrate universal account functionality into their own dApps through Particle Network's universal SDK, enabling cross-chain asset management and operations.
Universal Liquidity
Universal Liquidity is the technical architecture behind aggregated balances across all chains. Its core function is to coordinate atomic cross-chain transactions and swaps on Particle Network. These atomic transaction sequences are driven by Bundler nodes, which execute UserOperations and complete the operation on the target chain.
Universal liquidity relies on a network of liquidity providers (also known as fillers) to move intermediary tokens (such as USDC and USDT) between chains through token pools. These liquidity providers ensure that assets can flow smoothly across chains.
For example, suppose a user wants to use USDC to purchase an NFT priced in ETH on the Base chain. In this scenario (sketched in code after the list):
1. Particle Network aggregates users’ USDC balances on multiple chains.
2. The user initiates the purchase with these aggregated assets.
3. Once the transaction is confirmed, Particle Network automatically converts the USDC to ETH and completes the NFT purchase.
These additional on-chain operations only take a few seconds to process and are transparent to users, without the need for manual intervention. In this way, Particle Network simplifies the management of cross-chain assets, making cross-chain transactions and operations seamless and automated.
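A minimal sketch of that orchestration in Python, assuming a naive greedy routing strategy; the function name, the filler-based bridging step, and the price inputs are all illustrative, not Particle Network's actual API.

```python
def purchase_nft_with_universal_account(balances: dict, price_eth: float,
                                        usdc_per_eth: float) -> list:
    """balances: per-chain USDC, e.g. {"Arbitrum": 120.0, "Polygon": 80.0}."""
    needed_usdc = price_eth * usdc_per_eth
    if sum(balances.values()) < needed_usdc:
        raise ValueError("aggregated balance insufficient")

    plan, remaining = [], needed_usdc
    # 1. Source USDC greedily from each chain until the amount is covered;
    #    liquidity providers ("fillers") move the intermediary token to Base.
    for chain, amount in balances.items():
        take = min(amount, remaining)
        if take > 0:
            plan.append(f"bridge {take:.2f} USDC from {chain} to Base via filler")
            remaining -= take
    # 2. Swap to ETH and settle the purchase on the target chain atomically.
    plan.append(f"swap {needed_usdc:.2f} USDC -> {price_eth} ETH on Base")
    plan.append("call the NFT marketplace purchase with the swapped ETH")
    return plan

print(purchase_nft_with_universal_account({"Arbitrum": 120.0, "Polygon": 80.0},
                                          price_eth=0.05, usdc_per_eth=2000.0))
```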
Universal Gas
By unifying balances between chains through universal liquidity, Particle Network also solves the fragmentation problem of gas tokens.
In the past, users had to hold the gas tokens of multiple chains in different wallets to pay fees on each chain, a significant barrier to use. To solve this, Particle Network uses its native Paymaster to let users pay gas with any token on any chain. These transactions are ultimately settled on Particle Network's L1 in the chain's native token (PARTI).
Users do not need to hold PARTI to use universal accounts, as their gas tokens are automatically swapped and used for settlement. This makes cross-chain operations and payments easier, without requiring users to manage multiple gas tokens.
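The idea can be sketched as follows, assuming a hypothetical pay_gas helper and made-up prices: the user is charged in a token they already hold, the Paymaster fronts the target chain's native gas, and settlement later happens in PARTI on Particle Network's L1.

```python
def pay_gas(user_token: str, user_balance: float, gas_cost_native: float,
            native_per_token: dict) -> dict:
    """native_per_token: units of the target chain's gas token per user token."""
    charged = gas_cost_native / native_per_token[user_token]
    if user_balance < charged:
        raise ValueError("insufficient balance to cover gas")
    return {
        "charged_from_user": (user_token, round(charged, 6)),
        "fronted_by_paymaster": gas_cost_native,  # paid in the chain's gas token
        "settlement": "swapped to PARTI and settled on Particle Network's L1",
    }

# Paying 0.002 ETH of gas with USDC at an assumed 0.0005 ETH per USDC (= 4 USDC).
print(pay_gas("USDC", 25.0, 0.002, {"USDC": 0.0005}))
```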
Reviews
Advantages:
1. Unified management of cross-chain assets: Universal accounts and universal liquidity allow users to manage and use assets on different chains without worrying about the complexity of asset dispersion or cross-chain transfers.
2. Simplify user experience: Through social login and modular smart wallet as a service, users can easily access Web3, lowering the entry threshold.
3. Cross-chain transaction automation: Atomic cross-chain transactions and Universal Gas make asset conversion and gas payment seamless and automatic, improving operational convenience for users.
4. Developer-friendly: Developers can easily integrate cross-chain functions in their own dApps through Particle Network’s universal SDK, reducing the complexity of cross-chain integration.
Disadvantages:
1. Dependence on liquidity providers: The fillers that move intermediary tokens such as USDC and USDT across chains must participate at scale to keep liquidity smooth. If the liquidity pools are shallow or provider participation is low, transaction flow may suffer.
2. Centralization risk: Particle Network relies to a certain extent on its native Paymaster to handle gas payments and settlement, which may introduce centralization risks and dependencies.
3. Compatibility and coverage: Despite supporting many wallets (such as MetaMask, Keplr, etc.), compatibility across different chains and wallets may still be a major challenge for user experience, especially for smaller chains or wallet providers.
Overall, Particle Network has greatly improved user experience and developer efficiency by simplifying cross-chain operations and payments, but it also faces challenges in liquidity and centralized management.
2. Detailed explanation of the projects of interest this week
2.1. Detailed explanation of Walrus, an innovative decentralized storage solution that raised a record $140 million this month in a round led by A16z
Introduction
Walrus is an innovative solution for decentralized big-data storage. It combines fast, linearly decodable erasure codes that scale to hundreds of storage nodes, achieving extremely high resilience with low storage overhead, and it uses the new-generation public chain Sui as its control plane, managing everything from the storage-node life cycle to the blob life cycle to economics and incentives, eliminating the need for a fully customized blockchain protocol.
At the core of Walrus is a new encoding protocol called Red Stuff, which uses an innovative two-dimensional (2D) encoding algorithm based on fountain codes. Unlike Reed-Solomon (RS) encoding, fountain codes rely mainly on XOR or other very fast operations over large blocks of data, avoiding complex mathematical operations. This simplicity allows large files to be encoded in a single pass, significantly speeding up processing. Red Stuff's 2D encoding makes it possible to recover lost fragments with bandwidth proportional to the amount of data lost. In addition, Red Stuff incorporates authenticated data structures to defend against malicious clients and to ensure that stored and retrieved data remain consistent.
Walrus operates in epochs, each managed by a committee of storage nodes. All operations within an epoch can be sharded by blobid, allowing high scalability. To write a blob, the system encodes the data into primary and secondary slivers, generates Merkle commitments, and distributes the fragments to storage nodes. Reading involves collecting and verifying fragments, with both a best-effort path and an incentivized path to handle potential failures. To keep blobs readable and writable while participants naturally churn in a permissionless system, Walrus includes an efficient committee reconfiguration protocol.
Another key innovation of Walrus is its approach to proof of storage, the mechanism for verifying that storage nodes actually hold the data they claim. Walrus addresses the scalability challenges of such proofs by incentivizing all storage nodes to hold a fragment of every stored file. This full replication enables a new proof-of-storage mechanism that challenges storage nodes as a whole rather than challenging each file individually. As a result, the cost of proving file storage grows logarithmically with the number of stored files, rather than linearly as in many existing systems.
Finally, Walrus also introduces a staking-based economic model that combines reward and penalty mechanisms to align incentives and enforce long-term commitments. The system includes a pricing mechanism for storage resources and write operations, and is equipped with a token governance model for parameter adjustment.
Technical Analysis
Red Stuff Encoding Protocol
Existing encoding protocols achieve low overhead factors with extremely high guarantees, but they are still not suitable for long-term deployment. The main challenge is that in a long-lived, large-scale system, storage nodes will regularly fail, lose their fragments, and need to be replaced. Moreover, in a permissionless system, nodes will naturally churn even when they have sufficient incentives to participate.
Both situations require transferring a volume of data across the network equal to the total amount stored in order to rebuild the lost fragments on the new storage node, which is extremely expensive. The team therefore wants the cost of recovery, when a node is replaced, to be proportional only to the amount of data that must be restored, decreasing inversely as the number of storage nodes (n) increases.
To achieve this, Red Stuff encodes large data blocks in a two-dimensional (2D) encoding. The primary dimension is equivalent to the RS encoding used in previous systems. However, in order to efficiently recover fragments, Walrus also encodes in a secondary dimension. Red Stuff is based on linear erasure codes and the Twin-code framework, which provides erasure-coded storage with efficient recovery in fault-tolerant settings and is suitable for environments with trusted writers. The team adapted this framework to make it suitable for Byzantine fault-tolerant environments and optimized it for a single storage node cluster, which will be described in detail below.
● Coding
The starting point is to split the large data blob into f + 1 pieces. Instead of immediately encoding repair pieces from these, a dimension is first added during the splitting process:
(a) 2D primary encoding. The file is split into 2f + 1 columns and f + 1 rows. Each column is encoded as a separate blob containing 2f repair symbols. The extension of each row then forms the primary fragment of the corresponding node.
(b) 2D secondary encoding. The file is split into 2f + 1 columns and f + 1 rows. Each row is encoded as a separate blob containing f repair symbols. The extension of each column then forms the secondary fragment of the corresponding node.
The original blob is split into f + 1 primary segments (vertical in the figure) and 2f + 1 secondary segments (horizontal in the figure); Figure 2 shows this process. In total, the file is split into (f + 1)(2f + 1) symbols, which can be visualized as an [f + 1, 2f + 1] matrix.
Given this matrix, we generate repair symbols in both dimensions. We take each of the 2f + 1 columns (each of size f + 1) and expand it to n symbols, making the matrix n rows tall. We assign each row as the primary fragment of a node (see Figure 2a). This almost triples the amount of data we need to send. To provide efficient recovery of each fragment, we also expand the initial [f + 1, 2f + 1] matrix, extending each row from 2f + 1 symbols to n symbols (see Figure 2b), using our encoding scheme. In this way, we create n columns, each of which is assigned as the secondary fragment of the corresponding node.
For each fragment (primary and secondary), the writer also computes commitments over its symbols: for a primary fragment the commitment covers all symbols in the extended row, and for a secondary fragment it covers all values in the extended column. As a final step, the client creates a list containing the commitments of these fragments, which is referred to as the blob commitment.
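A toy Python sketch of these shapes, with the extension step abstracted away, may help fix the dimensions; it only counts symbols and is not an implementation of the underlying erasure code.

```python
def red_stuff_shapes(f: int) -> dict:
    n = 3 * f + 1                  # number of storage nodes
    rows, cols = f + 1, 2 * f + 1  # source matrix: (f+1) x (2f+1) symbols
    blob_symbols = rows * cols     # the blob is split into (f+1)(2f+1) symbols

    # Primary: each of the 2f+1 columns (length f+1) is extended to n symbols,
    # so the matrix becomes n rows tall; row i is node i's primary fragment.
    primary_len = cols             # a primary fragment holds 2f+1 symbols
    # Secondary: each of the f+1 rows (length 2f+1) is extended to n symbols;
    # column i of that extension is node i's secondary fragment.
    secondary_len = rows           # a secondary fragment holds f+1 symbols

    per_node = primary_len + secondary_len
    return {"n": n, "blob_symbols": blob_symbols, "per_node": per_node,
            "overhead": n * per_node / blob_symbols}

# For large f the total overhead tends to (3f+1)/(f+1) + (3f+1)/(2f+1) ~ 4.5x;
# the primary dimension alone accounts for the "almost triples" noted above.
print(red_stuff_shapes(10))  # overhead ~ 4.29x at f = 10 (n = 31 nodes)
```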
● Write protocol
The write protocol of Red Stuff follows the same pattern as the RS-based protocol. The writer W first encodes the blob and creates a fragment pair for each node: fragment pair i pairs the i-th primary fragment with the i-th secondary fragment. There are n = 3f + 1 fragment pairs in total, equal to the number of nodes.
Next, W sends the commitments of all fragments to each node, along with that node's fragment pair. Each node checks that its fragments are consistent with the commitments, recomputes the blob commitment, and replies with a signed confirmation. Once 2f + 1 signatures are collected, W assembles a certificate and publishes it on-chain to prove that the blob will be available.
The theoretical asynchronous network model assumes reliable transmission, so every correct node eventually receives a fragment pair from an honest writer. In practice, however, the writer must at some point stop retransmitting. After collecting 2f + 1 signatures it can stop safely, since at least f + 1 correct nodes (among the 2f + 1 responders) then hold fragment pairs for the blob.
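A minimal sketch of the quorum rule, assuming signatures are already verified; it shows only why 2f + 1 acknowledgements suffice, not the real certificate format.

```python
def write_certificate(acks: dict, f: int):
    """acks: node_id -> signature over the blob commitment; returns a certificate."""
    quorum = 2 * f + 1  # out of n = 3f + 1 nodes
    if len(acks) < quorum:
        return None  # keep retransmitting fragment pairs
    # Among any 2f+1 responders at most f are faulty, so at least f+1 correct
    # nodes hold their fragment pair and the blob is recoverable from them.
    return {"available": True, "signatures": dict(list(acks.items())[:quorum])}
```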
(a) Node 1 and Node 3 share two rows and two columns
In this case, Node 1 and Node 3 hold two rows and two columns of the file respectively. The data fragments held by each node are assigned to different rows and columns in the two-dimensional encoding, ensuring that the data is distributed and redundantly stored across multiple nodes for high availability and fault tolerance.
(b) Each node sends the intersection of its row/column with the column/row of node 4 to node 4 (red). Node 3 needs to encode this row.
In this step, Node 1 and Node 3 send the intersection of their rows/columns with the columns/rows of Node 4 to Node 4. Specifically, Node 3 needs to encode the rows it holds so that it can intersect with the data fragment of Node 4 and pass it to Node 4. In this way, Node 4 can receive the complete data fragment and perform recovery or verification. This process ensures data integrity and redundancy, and even if some nodes fail, other nodes can still recover the data.
(c) Node 4 uses f + 1 symbols on its columns to recover the complete secondary fragment (green). Node 4 then sends the recovered column intersection to the rows of other recovery nodes.
In this step, node 4 uses f + 1 symbols on its columns to recover the complete secondary fragment. The recovery process is based on the intersection of data, ensuring efficient data recovery. After node 4 recovers its secondary fragment, it sends the recovered column intersection to other recovering nodes to help them recover their row data. This interactive method ensures smooth data recovery, and the collaboration between multiple nodes can speed up the recovery process.
(d) Node 4 recovers its primary fragment (dark blue) using the f + 1 symbols on its row and all recovered secondary symbols (green) sent by other honest recovery nodes (which should be at least 2f symbols, plus the 1 symbol recovered in the previous step).
At this stage, node 4 not only uses the f + 1 symbols on its row to recover the primary fragment, but also the secondary symbols sent by other honest recovery nodes. Through these symbols received from other nodes, node 4 is able to recover its primary fragment. To ensure the accuracy of the recovery, node 4 will receive at least 2f + 1 valid secondary symbols (including the 1 symbol recovered in the previous step). This mechanism enhances fault tolerance and data recovery capabilities by integrating data from multiple sources.
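A back-of-the-envelope comparison, following the steps above, shows why this matters: the recovering node pulls only the symbols lying on its own row and column, instead of downloading enough full fragments to rebuild the entire blob. The counts below are simplifications of the described protocol.

```python
def red_stuff_recovery_symbols(f: int) -> int:
    # step (c): f+1 symbols on its column rebuild the secondary fragment;
    # step (d): up to 2f+1 symbols on its row rebuild the primary fragment.
    return (f + 1) + (2 * f + 1)

def naive_recovery_symbols(f: int) -> int:
    # Rebuilding the whole blob from f+1 primary fragments of 2f+1 symbols each.
    return (f + 1) * (2 * f + 1)

f = 33  # roughly n = 100 storage nodes
print(red_stuff_recovery_symbols(f), "vs", naive_recovery_symbols(f))  # 101 vs 2278
```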
● Reading protocol
The read protocol is the same as for RS encoding, with nodes only having to use their primary fragments. The reader (R) first requests a commitment set for the blob from any node, and checks the returned commitment set against the requested blob commitment via the commitment opening protocol. Next, R requests read commitments for the blob from all nodes, and they respond with the primary fragments they hold (this may be done incrementally to save bandwidth). Each response is checked against the corresponding commitment in the commitment set for the blob.
When R has collected f + 1 correct primary fragments, R decodes the blob and re-encodes it, recomputes the blob commitment, and compares it with the requested blob commitment. If the two commitments match (i.e., they are the same as the commitment W published on-chain), R outputs blob B, otherwise, R outputs an indication of error or unrecoverable information.
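The end-to-end consistency check can be condensed into a short sketch; decode, encode, and commit stand in for Red Stuff decoding, re-encoding, and the blob commitment, and are assumptions of this illustration.

```python
def read_blob(fragments: list, f: int, onchain_commitment, decode, encode, commit):
    """fragments: primary fragments already verified against the commitment set."""
    if len(fragments) < f + 1:
        return None  # keep collecting correct primary fragments
    blob = decode(fragments[: f + 1])        # reconstruct a candidate blob
    if commit(encode(blob)) == onchain_commitment:
        return blob                          # matches what the writer published
    raise ValueError("inconsistent encoding: blob marked unrecoverable")
```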
Walrus Decentralized Secure Blob Storage
● Write a Blob
The process of writing a Blob into Walrus can be illustrated by Figure 4.
The process begins when the writer (➊) encodes the blob using Red Stuff, as shown in Figure 2. This yields the sliver pairs, a set of commitments to the slivers, and the blob commitment. The writer derives the blobid by hashing the blob commitment together with metadata such as the file's length and encoding type.
The writer (➋) then submits a transaction to the blockchain to secure enough storage space for the blob over a certain number of epochs and to register the blob. The transaction carries the blob's size and the blob commitment, from which the blobid can be re-derived. The blockchain smart contract must ensure there is enough space on each node to store the encoded slivers, along with all metadata related to the blob commitment. Payment may be sent with the transaction to secure the free space, or previously acquired free space can be attached to the request as a resource; the implementation allows both options.
Once the registration transaction is committed (➌), the writer notifies the storage nodes that they are responsible for storing the slivers of the blobid, sending the transaction, the commitments, and each node's assigned primary and secondary slivers, along with a proof that the slivers are consistent with the published blobid. Each storage node verifies the commitments and, after successfully storing the commitment and sliver pair, returns a signed confirmation of the blobid.
Finally, the writer collects 2f + 1 signed confirmations (➍), which together constitute a write certificate. The certificate is then published on-chain (➎), marking the point of availability (PoA) of the blob in Walrus. The PoA obligates the storage nodes to keep the slivers available for reading for the specified epochs. From this point the writer can delete the blob from local storage and go offline; it can also use the PoA as a credential to prove the blob's availability to third-party users and smart contracts.
Nodes listen to blockchain events to learn when a blob reaches its PoA. If a node does not store a sliver pair for that blob, it runs a recovery process to obtain the commitments and sliver pairs for all blobs up to the PoA point. This ensures that eventually all correct nodes hold sliver pairs for all blobs.
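A sketch of that node-side behavior, with event shapes and method names invented for illustration; the recovery step would run the row/column procedure sketched earlier.

```python
class StorageNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.sliver_pairs = {}  # blobid -> stored sliver pair

    def on_chain_event(self, event: dict) -> None:
        # When a blob reaches its PoA and this node holds nothing for it, recover.
        if event["type"] == "PoA" and event["blobid"] not in self.sliver_pairs:
            self.sliver_pairs[event["blobid"]] = self.recover(event["blobid"])

    def recover(self, blobid: str) -> str:
        # Placeholder: fetch commitments, then rebuild this node's row/column
        # symbols from peers as in the Red Stuff recovery steps above.
        return f"recovered sliver pair for {blobid}"

node = StorageNode("node-7")
node.on_chain_event({"type": "PoA", "blobid": "0xabc"})
print(node.sliver_pairs)
```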
Summary
In summary, Walrus's contributions include:
● Defined the problem of asynchronous complete data sharing and proposed Red Stuff, the first protocol that can efficiently solve this problem under Byzantine fault tolerance.
● Proposed Walrus, the first permissionless decentralized storage protocol designed for low replication cost that can efficiently recover data lost to failures or participant churn.
● Introduced a staking-based economic model that combines reward and penalty mechanisms to align incentives and enforce long-term commitments, and proposed the first asynchronous challenge protocol for efficient proof of storage.
3. Industry data analysis
1. Overall market performance
1.1. Spot BTC/ETH ETF
From March 24, 2025 to March 29, 2025, the fund flows of Bitcoin (BTC) and Ethereum (ETH) ETFs showed different trends:
Bitcoin ETF:
● March 24, 2025: The Bitcoin ETF saw net inflows of $84.2 million, its seventh consecutive day of positive inflows, bringing total inflows to $869.8 million.
● March 25, 2025: Bitcoin ETFs once again recorded net inflows of $26.8 million, bringing the cumulative inflows in 8 days to $896.6 million.
● March 26, 2025: Bitcoin ETFs continued to grow, with net inflows reaching $89.6 million, marking the ninth consecutive day of inflows and bringing total inflows to $986.2 million.
● March 27, 2025: Bitcoin ETFs saw net inflows of $89 million, maintaining the positive inflow trend.
● March 28, 2025: Bitcoin ETFs again recorded net inflows of $89 million, extending the streak of positive inflows.
Ethereum ETF:
● March 24, 2025: Ethereum ETFs saw net flows of $0, ending a 13-day streak of outflows.
● March 25, 2025: Ethereum ETFs saw a net outflow of $3.3 million, resuming the outflow trend after a one-day pause.
● March 26, 2025: Ethereum ETFs faced another $5.9 million in net outflows, with investor sentiment remaining cautious.
● March 27, 2025: Ethereum ETFs saw a net outflow of $4.2 million, indicating lingering market anxiety.
● March 28, 2025: Ethereum ETFs again saw a net outflow of $4.2 million, extending the outflow trend.
1.2. Spot BTC vs ETH price trend
BTC
Analysis
After BTC failed its test of the wedge's upper boundary ($89,000) last week, it turned down as expected. This week, only three support levels matter: first-line support at $81,400, second-line support at the round number of $80,000, and bottom support at $76,600, this year's low. For those waiting for an opportunity to enter the market, these three supports can be treated as suitable levels for building positions in batches.
ETH
Analysis
After failing to hold above $2,000, ETH is now approaching a retest of this year's low around $1,760. The subsequent trend depends almost entirely on BTC. If BTC can stabilize above $80,000 and start a rebound, ETH will likely form a double bottom above $1,760 and may move up toward the $2,300 resistance. Conversely, if BTC falls back below $80,000 and seeks support at $76,600 or lower, ETH will likely slide toward $1,700 or even the $1,500 bottom support.
1.3. Fear & Greed Index
2. Public chain data
2.1. BTC Layer 2 Summary
Analysis
From March 24 to March 28, 2025, the Bitcoin Layer-2 (L2) ecosystem experienced some important developments:
Stacks’ sBTC deposit cap increased: Stacks announced the completion of the cap-2 expansion of sBTC, increasing the deposit cap by 2,000 BTC, bringing the total capacity to 3,000 BTC (about $250 million). The increase is designed to enhance liquidity and support the growing demand for Bitcoin-backed DeFi applications on the Stacks platform.
Citrea’s Testnet Milestone: Bitcoin L2 solution Citrea reported a significant milestone, surpassing 10 million transactions on its testnet. The platform also updated its Clementine design, simplified its zero-knowledge proof (ZKP) validator, and enhanced security, laying the foundation for scaling Bitcoin transactions.
BOB's BitVM bridge enabled: BOB (Build on Bitcoin) has successfully enabled the BitVM bridge on testnet, allowing users to mint BTC into Yield BTC with minimal trust assumptions. This development enhances interoperability between Bitcoin and other blockchain networks, enabling more complex transactions without compromising security.
Bitlayer's BitVM bridge released: Bitlayer launched its BitVM bridge, likewise allowing users to mint BTC into Yield BTC with minimal trust assumptions. This innovation improves the scalability and flexibility of Bitcoin transactions and supports the development of DeFi applications within the Bitcoin ecosystem.
2.2. EVM & non-EVM Layer 1 Summary
Analysis
EVM-compatible Layer 1 blockchain:
● BNB Chain 2025 Roadmap: BNB Chain announced its 2025 vision, planning to expand to 100 million transactions per day, improve security to address the miner extractable value (MEV) problem, and introduce smart wallet solutions similar to EIP-7702. The roadmap also emphasizes the integration of artificial intelligence (AI) use cases, focusing on leveraging valuable private data and improving developer tools.
● Polkadot’s development in 2025: Polkadot released its 2025 roadmap, highlighting support for EVM and Solidity and aiming to enhance interoperability and scalability. The plan includes a multi-core architecture to increase capacity and upgraded cross-chain messaging via XCM v5.
Non-EVM Layer 1 Blockchain:
● W Chain Mainnet Soft Launch: W Chain, a Singapore-based hybrid blockchain network, announced that its Layer 1 mainnet has entered the soft-launch phase. Following a successful testnet phase, W Chain introduced the W Chain Bridge to enhance cross-platform compatibility and interoperability. The commercial mainnet is expected to launch officially in March 2025, with plans for features such as a decentralized exchange (DEX) and an ambassador program.
● N1 Blockchain Investor Support Confirmed: N1, an ultra-low-latency Layer 1 blockchain, confirmed that its original investors, including Multicoin Capital and Arthur Hayes, will continue to support the project ahead of its mainnet launch. N1 aims to give developers unrestricted scalability and ultra-low-latency support for decentralized applications (dApps), and supports multiple programming languages to simplify development.
2.3. EVM Layer 2 Summary
Analysis
Between March 24 and March 29, 2025, several important developments occurred in the EVM Layer 2 ecosystem:
1. Polygon zkEVM Mainnet Beta Launched: On March 27, 2025, Polygon successfully launched the zkEVM (Zero-Knowledge Ethereum Virtual Machine) mainnet beta. This Layer 2 scaling solution improves Ethereum's scalability by performing computations off-chain, enabling faster, lower-cost transactions. Developers can migrate their Ethereum applications to Polygon's zkEVM seamlessly, as it is fully compatible with Ethereum's codebase.
2. Telos Foundation's ZK-EVM development roadmap: The Telos Foundation announced a ZK-EVM development roadmap based on SNARKtor. The plan includes deploying a hardware-accelerated zkEVM on the Telos testnet in Q4 2024, followed by integration with the Ethereum mainnet in Q1 2025. The next phase aims to integrate SNARKtor to improve verification efficiency on Layer 1, with full integration expected by Q4 2025.
4. Macro data review and key data release nodes next week
The core PCE price index for February, released on March 28, came in at 2.8% year-on-year (expected 2.7%, previous value 2.6%), above expectations and still well above the Fed's 2% target, driven mainly by higher import costs from tariffs.
Important macro data nodes this week (March 31-April 4) include:
April 1: US March ISM Manufacturing PMI
April 2: US ADP employment figures for March
April 3: U.S. initial jobless claims for the week ending March 29
April 4: U.S. unemployment rate in March; U.S. non-farm payrolls in March, seasonally adjusted
5. Regulatory policies
During the week, the U.S. SEC concluded its investigations into Crypto.com and Immutable, Trump pardoned the co-founders of BitMEX, and a dedicated stablecoin bill was formally placed on the agenda. Both the easing of enforcement and the build-out of formal regulation for the crypto industry are accelerating.
United States: Oklahoma passes strategic bitcoin reserve bill
The Oklahoma House of Representatives voted to pass the Strategic Bitcoin Reserve Act, which would allow the state to invest 10% of public funds in Bitcoin or any digital asset with a market value of more than $500 billion.
Separately, the U.S. Department of Justice announced that it had disrupted an ongoing terrorist financing scheme, seizing approximately $201,400 (at current values) in cryptocurrency held in wallets and accounts intended to fund Hamas. The seized funds came from fundraising addresses allegedly controlled by Hamas, which had been used to launder more than $1.5 million in virtual currency since October 2024.
Panama: Proposed Cryptocurrency Bill Published
Panama has published a proposed crypto bill to regulate cryptocurrencies and promote the development of blockchain-based services. The proposed bill establishes a legal framework for the use of digital assets, sets licensing requirements for service providers, and includes strict compliance measures in line with international financial standards. Digital assets are recognized as a legal means of payment, allowing individuals and businesses to freely agree to use digital assets in commercial and civil contracts.
EU: May impose 100% capital support requirement on crypto assets
According to Cointelegraph, the EU insurance regulator has proposed imposing a 100% capital backing requirement on insurance companies holding crypto assets, citing the “inherent risks and high volatility” of crypto assets.
South Korea: Plans to block access to 17 overseas applications including KuCoin
The Financial Intelligence Unit (FIU) of South Korea announced that, starting March 25, it will restrict domestic access to the Google Play apps of 17 overseas virtual asset service providers (VASPs) not registered in South Korea, including KuCoin and MEXC. As a result, users cannot install the affected apps, and existing users cannot receive updates.