A Modern Open Source Data Stack for Blockchain


1. The challenges for a modern blockchain data stack

There are several challenges that a modern blockchain indexing startup may face, including:

  • Massive amounts of data. As the amount of data on the blockchain increases, the data index needs to scale up to handle the increased load and provide efficient access to the data. This leads to higher storage costs, slow metrics calculation, and increased load on the database server.
  • Complex data processing pipeline. Blockchain technology is complex, and building a comprehensive and reliable data index requires a deep understanding of the underlying data structures and algorithms. This complexity is inherited from the diversity of blockchain implementations. To give a specific example, NFTs on Ethereum are usually created within smart contracts following the ERC721 and ERC1155 standards, whereas on Polkadot, for instance, they are usually built directly into the blockchain runtime. Both should be recognized as NFTs and stored as such.
  • Integration capabilities. To provide maximum value to users, a blockchain indexing solution may need to integrate its data index with other systems, such as analytics platforms or APIs. This is challenging and requires significant effort placed into the architecture design.

As blockchain technology has become more widespread, the amount of data stored on the blockchain has increased. This is because more people are using the technology, and each transaction adds new data to the blockchain. Additionally, blockchain technology has evolved from simple money-transferring applications, such as those involving the use of Bitcoin, to more complex applications involving the implementation of business logic within smart contracts. These smart contracts can generate large amounts of data, contributing to the increased complexity and size of the blockchain. Over time, this has led to a larger and more complex blockchain.

In this article, we review the evolution of Footprint Analytics’ technology architecture in stages as a case study to explore how the Iceberg-Trino technology stack addresses the challenges of on-chain data.

Footprint Analytics has indexed data from about 22 public blockchains, 17 NFT marketplaces, 1,900 GameFi projects, and over 100,000 NFT collections into a semantic abstraction data layer. It's the most comprehensive blockchain data warehouse solution in the world.

Blockchain data includes over 20 billion rows of financial transaction records, which data analysts query frequently. This is different from ingestion logs in traditional data warehouses.

We have gone through three major upgrades in the past several months to meet growing business requirements:

2.  Architecture 1.0 BigQuery

At the beginning of Footprint Analytics, we used Google BigQuery as our storage and query engine. BigQuery is a great product: it is blazingly fast and easy to use, and it provides elastic compute power and a flexible UDF syntax that helped us quickly get the job done.

However, BigQuery also has several problems.

  • Data is not compressed, resulting in high costs, especially for storing the raw data of Footprint Analytics' 22+ blockchains.
  • Insufficient concurrency: BigQuery supports only 100 simultaneous queries, which is unsuitable for the high-concurrency scenarios Footprint Analytics faces when serving many analysts and users.
  • Vendor lock-in: Google BigQuery is a closed-source product.

So we decided to explore alternative architectures.

3.  Architecture 2.0 OLAP

We were very interested in some of the OLAP products which had become very popular. The most attractive advantage of OLAP is its query response time, which typically takes sub-seconds to return query results for massive amounts of data, and it can also support thousands of concurrent queries.

We picked one of the best OLAP databases, Doris, to give it a try. The engine performed well. However, we soon ran into other issues:

  • Data types such as Array or JSON were not yet supported (as of November 2022). Arrays are a common data type in some blockchains, for instance the topic field in EVM logs. Being unable to compute on arrays directly affects our ability to calculate many business metrics.
  • Limited support for DBT and for merge statements, which are common requirements for data engineers in ETL/ELT scenarios where newly indexed data needs to be updated.
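To make these two gaps concrete, here is roughly what such queries look like in Trino-style SQL. All table and column names below are hypothetical; this is an illustrative sketch, not our actual schema:

```sql
-- Filtering EVM logs by an element of the topics array.
-- The first topic identifies the event; the hash below is
-- keccak("Transfer(address,address,uint256)"), the ERC721 Transfer event.
SELECT tx_hash, contract_address
FROM evm_logs
WHERE contains(topics, '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef');

-- Upserting newly indexed rows with MERGE, a common ETL/ELT pattern.
MERGE INTO token_transfers t
USING newly_indexed_transfers s
  ON t.tx_hash = s.tx_hash AND t.log_index = s.log_index
WHEN MATCHED THEN
  UPDATE SET amount = s.amount
WHEN NOT MATCHED THEN
  INSERT (tx_hash, log_index, amount) VALUES (s.tx_hash, s.log_index, s.amount);
```

An engine without array functions or MERGE forces this logic into application code or full-table rewrites.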

That being said, we couldn't run our whole production data pipeline on Doris, so we tried to use it to solve only part of the problem: acting as the query engine at the end of the data production pipeline, providing fast and highly concurrent query capabilities.

Unfortunately, we could not replace BigQuery with Doris entirely, so we had to periodically synchronize data from BigQuery to Doris, using Doris only as a query engine. This synchronization process had several issues; one was that update writes piled up quickly when the OLAP engine was busy serving queries to front-end clients. The writing process then slowed down, and synchronization took much longer, sometimes even becoming impossible to finish.

We realized that OLAP could solve several of the issues we were facing but could not be the turnkey solution for Footprint Analytics, especially for the data processing pipeline. Our problem was bigger and more complex; an OLAP query engine alone was not enough for us.

4.  Architecture 3.0 Iceberg + Trino

Welcome to Footprint Analytics architecture 3.0, a complete overhaul of the underlying architecture. We redesigned the entire architecture from the ground up to separate the storage, computation, and querying of data into three different pieces, taking lessons from our two earlier architectures and learning from the experience of other successful big data projects such as Uber, Netflix, and Databricks.

4.1. Introduction of the data lake

We first turned our attention to the data lake, a new type of data storage for both structured and unstructured data. A data lake is perfect for on-chain data storage, as the formats of on-chain data range widely, from unstructured raw data to the structured abstraction data Footprint Analytics is well known for. We expected the data lake to solve the problem of data storage, and ideally it would also support mainstream compute engines such as Spark and Flink, so that integrating with different types of processing engines wouldn't be a pain as Footprint Analytics evolves. We chose Apache Iceberg, an open table format, as the foundation of our data lake.

Iceberg integrates very well with Spark, Flink, Trino and other computational engines, and we can choose the most appropriate computation for each of our metrics. For example:

  • Spark for metrics requiring complex computational logic.
  • Flink for real-time computation.
  • Trino for simple ETL tasks that can be expressed in SQL.
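As an illustration of the Trino case, a simple SQL ETL over Iceberg tables might look like the following (the catalog, schema, and table names are assumptions for this sketch, not our production layout):

```sql
-- Derive daily active addresses from raw transactions and write the
-- result back to another Iceberg table through the same Trino catalog.
INSERT INTO iceberg.silver.daily_active_addresses
SELECT
    date_trunc('day', block_time) AS day,
    count(DISTINCT from_address)  AS active_addresses
FROM iceberg.bronze.transactions
WHERE block_time >= date '2022-11-01'
GROUP BY 1;
```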

4.2. Query engine

With Iceberg solving the storage and computation problems, we had to think about choosing a query engine. There are not many options available; the alternatives we considered were Trino and Presto.

The most important thing we considered before going deeper was that the future query engine had to be compatible with our current architecture:

  • To support Bigquery as a Data Source
  • To support DBT, on which we rely for many metrics to be produced
  • To support the BI tool metabase

Based on the above, we chose Trino, which has very good support for Iceberg. The community was also very responsive: we raised a bug, and it was fixed the next day and released in the latest version the following week. This was the best choice for the Footprint team, which also requires high implementation responsiveness.
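In Trino, each of these integrations is just a catalog defined by a properties file. A minimal sketch, with all paths and connection details assumed rather than taken from our production setup:

```properties
# etc/catalog/iceberg.properties -- Iceberg tables in the data lake
connector.name=iceberg
hive.metastore.uri=thrift://metastore:9083

# etc/catalog/bigquery.properties -- BigQuery kept as a data source
connector.name=bigquery
bigquery.project-id=example-gcp-project
```

DBT then connects through the dbt-trino adapter, and Metabase through its Trino driver, so both tools see every catalog over a single SQL interface.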

4.3. Performance testing

Once we had decided on our direction, we ran a performance test on the Trino + Iceberg combination to see if it could meet our needs, and to our surprise, the queries were incredibly fast.

Knowing that Presto + Hive had been the worst comparator for years in all the OLAP hype, the combination of Trino + Iceberg completely blew our minds.

Here are the results of our tests.

case 1: join a large dataset

An 800 GB table (table1) is joined with a 50 GB table (table2), with complex business calculations on top.
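The join in case 1 had roughly the following shape (simplified, with hypothetical names; the real query carries much more business logic):

```sql
-- ~800 GB fact table joined against a ~50 GB dimension table,
-- then aggregated.
SELECT t2.project_name,
       sum(t1.amount_usd) AS volume_usd
FROM table1 t1   -- ~800 GB of transfer records
JOIN table2 t2   -- ~50 GB of contract metadata
  ON t1.contract_address = t2.contract_address
GROUP BY t2.project_name;
```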

case 2: use a big single table to do a distinct query

Test sql: select distinct(address) from the table group by day

The Trino+Iceberg combination is about 3 times faster than Doris in the same configuration.

In addition, there was another surprise: because Iceberg supports data formats such as Parquet and ORC, which compress the stored data, Iceberg's table storage takes only about 1/5 of the space of the other data warehouses. The storage size of the same table in the three databases is as follows:
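With the Trino Iceberg connector, the file format is chosen per table at creation time. A sketch with an assumed schema:

```sql
-- An Iceberg table stored as compressed Parquet, partitioned by day.
CREATE TABLE iceberg.bronze.transactions (
    tx_hash      varchar,
    block_time   timestamp(6),
    from_address varchar,
    amount_usd   double
)
WITH (
    format = 'PARQUET',
    partitioning = ARRAY['day(block_time)']
);
```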

Note: The above tests are examples we have encountered in actual production and are for reference only.

4.4. Upgrade effect

The performance test reports gave us enough confidence that our team took about two months to complete the migration. This is a diagram of our architecture after the upgrade:

  • Multiple compute engines match our various needs.
  • Trino supports DBT, and can query Iceberg directly, so we no longer have to deal with data synchronization.
  • The amazing performance of Trino + Iceberg allows us to open up all Bronze data (raw data) to our users.

5. Summary

Since its launch in August 2021, the Footprint Analytics team has completed three architectural upgrades in less than a year and a half, thanks to its strong desire and determination to bring the benefits of the best database technology to its crypto users, and to solid execution on implementing and upgrading its underlying infrastructure and architecture.

The Footprint Analytics architecture upgrade 3.0 has brought a new experience to its users, allowing users from different backgrounds to get insights across more diverse usage and applications:

  • Built with the Metabase BI tool, Footprint makes it easy for analysts to access decoded on-chain data, explore with complete freedom of choice of tools (no-code or hard-code), query the entire history, and cross-examine datasets, to get insights in no time.
  • Integrating both on-chain and off-chain data for analysis across web2 + web3;
  • By building / querying metrics on top of Footprint's business abstraction, analysts and developers save time on 80% of repetitive data processing work and focus on meaningful metrics, research, and product solutions based on their business.
  • Seamless experience from Footprint Web to REST API calls, all based on SQL.
  • Real-time alerts and actionable notifications on key signals to support investment decisions.
