SieveMem: A Computation-in-Memory Architecture for Fast and Accurate Pre-Alignment

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

The high execution time of DNA sequence alignment negatively affects many genomic studies that rely on alignment results. Pre-alignment filtering was introduced as a step before alignment to greatly reduce the execution time of short-read sequence alignment. Because filtering is highly accurate and removes most unnecessary alignments, it now constitutes the larger portion of the execution time. A significant contributing factor is the movement of sequences from memory to the processing units, even though most of these sequences are filtered out because they do not lead to an acceptable alignment. State-of-the-art (SotA) pre-alignment filtering accelerators suffer from the same data-movement overhead. Furthermore, these accelerators lack support for future pre-alignment filtering algorithms that use the same operations and underlying hardware. This paper addresses these shortcomings by introducing SieveMem, an architecture that exploits the Computation-in-Memory paradigm with memristive devices to execute the shared kernels of pre-alignment filters and algorithms inside the memory (i.e., avoiding data movement). SieveMem also provides support for future algorithms: it supports more than 47.6% of the operations shared among the top five SotA filters. Moreover, SieveMem includes a hardware-friendly pre-alignment filtering algorithm called BandedKrait, inspired by a combination of these kernels. Our evaluations show that SieveMem provides up to 331.1× and 446.8× improvement in the execution time of the two most common kernels, and that BandedKrait provides accuracy at the SotA level. Using BandedKrait on SieveMem, a design we call Mem-BandedKrait, improves the execution time of end-to-end sequence alignment irrespective of the dataset, by up to 91.4× compared to the SotA accelerator on GPU.
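To make the general idea of pre-alignment filtering concrete, the sketch below is a minimal, purely illustrative Python filter; it is not the paper's BandedKrait algorithm or the SieveMem hardware, and all names and the threshold are hypothetical. It uses a cheap base-count lower bound on edit distance to discard candidate read/reference pairs before the expensive alignment step is invoked.

    # Illustrative pre-alignment filter sketch (NOT BandedKrait / SieveMem):
    # reject a read/reference pair when a cheap lower bound on edit distance
    # already exceeds the allowed threshold, skipping the full alignment.
    from collections import Counter

    def base_count_lower_bound(read: str, ref: str) -> int:
        """Lower bound on edit distance from per-base count differences.

        Each substitution changes two base counts and each insertion or
        deletion changes one, so edit_distance >= ceil(count_diff / 2).
        """
        read_counts, ref_counts = Counter(read), Counter(ref)
        diff = sum(abs(read_counts[b] - ref_counts[b]) for b in "ACGT")
        return (diff + 1) // 2

    def passes_filter(read: str, ref: str, edit_threshold: int) -> bool:
        """True if the pair may still align within `edit_threshold` edits."""
        return base_count_lower_bound(read, ref) <= edit_threshold

    # Example: only candidate pairs that pass the filter reach the aligner.
    pairs = [("ACGTACGT", "ACGTACGA"), ("AAAAAAAA", "CCCCCCCC")]
    for read, ref in pairs:
        if passes_filter(read, ref, edit_threshold=2):
            print(f"align {read} vs {ref}")        # would call the full aligner
        else:
            print(f"filtered out {read} vs {ref}")  # alignment skipped

In this toy example the second pair is rejected without alignment, which is the effect a pre-alignment filter aims for; SieveMem's contribution is performing such filtering kernels inside memory rather than moving the sequences to a processor.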

Original language: English
Title of host publication: Proceedings of the 2023 IEEE 34th International Conference on Application-specific Systems, Architectures and Processors (ASAP)
Publisher: IEEE
Pages: 156-164
Number of pages: 9
ISBN (Electronic): 979-8-3503-4685-5
ISBN (Print): 979-8-3503-4686-2
DOIs
Publication status: Published - 2023
Event: 2023 IEEE 34th International Conference on Application-specific Systems, Architectures and Processors (ASAP) - Porto, Portugal
Duration: 19 Jul 2023 - 21 Jul 2023
Conference number: 34th

Conference

Conference: 2023 IEEE 34th International Conference on Application-specific Systems, Architectures and Processors (ASAP)
Country/Territory: Portugal
City: Porto
Period: 19/07/23 - 21/07/23

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.

Keywords

  • Alignment
  • Pre-alignment Filter
  • Computation in Memory
  • Emerging Memory Technology
  • Hardware Accelerator
