Battling the CPU Bottleneck in Apache Parquet to Arrow Conversion Using FPGA

Research output: Conference contribution (chapter in conference proceedings), scientific, peer-reviewed

Abstract

In the domain of big data analytics, the bottleneck of converting storage-focused file formats to in-memory data structures has shifted from the bandwidth of storage to the performance of decoding and decompression software. Two widely used formats for big data storage and in-memory data are Apache Parquet and Apache Arrow, respectively. In order to improve the speed at which data can be loaded from disk to memory, we propose an FPGA accelerator design that converts Parquet files to Arrow in-memory data structures. We describe an extensible, publicly available, free and open-source implementation of the proposed converter that supports various Parquet file configurations. The performance of the converter is measured on an AWS EC2 F1 system and on a POWER9 system using the recently released OpenCAPI interface. A single instance of the converter can reach between 6 and 12 GB/s of end-to-end throughput, and shows up to a threefold improvement over the fastest single-thread CPU implementation. It has a low resource utilization (less than 5% for all types of FPGA resources). This allows scaling out the design to match the bandwidth of the coming generation of accelerator interfaces. The proposed design and implementation can be extended to support more of the many possible Parquet file configurations.
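For context on the decoding bottleneck the abstract describes: much of the CPU cost of reading Parquet comes from its RLE/bit-packing hybrid encoding, which must be unpacked value by value before data can land in a flat Arrow-style buffer. The following is a minimal, illustrative pure-Python sketch of such a decoder, simplified relative to the Parquet specification and not the implementation proposed in the paper:

```python
def decode_rle_bitpacked(data: bytes, bit_width: int, num_values: int) -> list[int]:
    """Decode a Parquet-style RLE/bit-packed hybrid buffer into plain integers.

    Each run starts with a ULEB128 varint header: if the low bit is 1,
    it is a bit-packed run of (header >> 1) groups of 8 values; if 0,
    it is an RLE run of (header >> 1) copies of one fixed-width value.
    """
    out: list[int] = []
    pos = 0
    while len(out) < num_values and pos < len(data):
        # Read the ULEB128 varint run header.
        header = 0
        shift = 0
        while True:
            byte = data[pos]
            pos += 1
            header |= (byte & 0x7F) << shift
            if not (byte & 0x80):
                break
            shift += 7
        if header & 1:
            # Bit-packed run: values packed least-significant-bit first.
            num_groups = header >> 1
            count = num_groups * 8
            byte_count = num_groups * bit_width
            bits = int.from_bytes(data[pos:pos + byte_count], "little")
            pos += byte_count
            mask = (1 << bit_width) - 1
            for i in range(count):
                out.append((bits >> (i * bit_width)) & mask)
        else:
            # RLE run: one value stored in ceil(bit_width / 8) bytes.
            run_len = header >> 1
            width_bytes = (bit_width + 7) // 8
            value = int.from_bytes(data[pos:pos + width_bytes], "little")
            pos += width_bytes
            out.extend([value] * run_len)
    return out[:num_values]
```

The inner bit-shifting loop is the kind of sequential, branch-heavy work that limits single-thread CPU throughput and that maps naturally onto FPGA pipelines.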
Original language: English
Title of host publication: 2020 International Conference on Field-Programmable Technology (ICFPT)
Place of publication: Maui, Hawaii, USA
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publication status: Accepted/In press - 2021

