Abstract
At the beginning of every research effort, empirical software engineering researchers must extract data from raw data sources and transform it into the inputs their tools expect. This step is time-consuming and error-prone, and the resulting artifacts (code, intermediate datasets) are usually of little scientific value. In recent years, Apache Spark has emerged as a solid foundation for data science and has taken the big data analytics domain by storm. We believe that the primitives exposed by Apache Spark can help software engineering researchers create and share reproducible, high-performance data analysis pipelines. In our technical briefing, we discuss how researchers can benefit from Apache Spark through a hands-on case study.
Original language | English |
---|---|
Title of host publication | Proceedings of the 40th International Conference on Software Engineering, ICSE '18 |
Subtitle of host publication | Companion Proceedings |
Place of Publication | New York, NY |
Publisher | ACM |
Pages | 542-543 |
Number of pages | 2 |
Volume | Part F137351 |
ISBN (Electronic) | 978-1-4503-5663-3 |
DOIs | |
Publication status | Published - 2018 |
Event | ICSE 2018: 40th International Conference on Software Engineering, Gothenburg, Sweden. Duration: 27 May 2018 → 3 Jun 2018. Conference number: 40. https://www.icse2018.org/ |
Conference
Conference | ICSE 2018 |
---|---|
Country/Territory | Sweden |
City | Gothenburg |
Period | 27/05/18 → 3/06/18 |
Internet address | https://www.icse2018.org/ |
Keywords
- Apache Spark
- Big data
- Data analytics