
Quantifying Scalability Using Extended TPC-DS Performance Metric

EasyChair Preprint 3635

10 pages · Date: June 17, 2020

Abstract

The TPC Benchmark™ DS (TPC-DS) is a decision support benchmark that models several generally applicable aspects of a decision support system, including data loading, queries, and data maintenance. The benchmark provides a representative evaluation of the System Under Test's (SUT) performance as a general-purpose decision support system. TPC-DS defines three primary metrics. The most important is the Performance Metric, QphDS@SF, which reflects TPC-DS query throughput at a given scale factor. Performance metrics at different scale factors are not comparable, because the computational challenges differ substantially across data volumes. Data analytics platforms have two main components: compute and storage. In the last decade, many cloud data analytics platforms have begun to separate the two, allowing compute to scale in and out independently over the same dataset on the same storage. Measuring QphDS@SF at different compute levels therefore demonstrates how well system performance scales. This article presents a scalability analysis of the TPC-DS workload on a cloud data analytics platform and proposes a benchmark as an extension to TPC-DS.
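To make the scalability comparison concrete, the sketch below computes a simplified form of the QphDS@SF metric (the TPC-DS v2 specification defines it as the scale factor times the total query count, divided by the fourth root of the product of the four test phase times; the exact term definitions in the specification include additional adjustments omitted here). The function names, parameters, and sample timings are illustrative assumptions, not values from the article.

```python
import math


def qphds(sf, num_streams, t_pt, t_tt, t_dm, t_ld):
    """Simplified QphDS@SF sketch (assumption based on the TPC-DS v2 spec).

    sf          -- scale factor
    num_streams -- number of query streams; each stream runs 99 queries
    t_pt, t_tt, t_dm, t_ld -- elapsed times (in hours) of the Power,
        Throughput, Data Maintenance, and Load phases, already converted
        per the spec's timing rules (omitted in this sketch)
    """
    q = num_streams * 99  # total queries executed across all streams
    # Geometric mean (fourth root of the product) of the four phase times.
    denom = (t_pt * t_tt * t_dm * t_ld) ** 0.25
    return math.floor(sf * q / denom)


# Hypothetical comparison of the same scale factor at two compute levels:
# if doubling compute halves every phase time, QphDS@SF doubles, i.e.
# the scaling efficiency qphds_2x / (2 * qphds_1x) would be 1.0.
qphds_1x = qphds(1000, 4, 2.0, 2.0, 2.0, 2.0)
qphds_2x = qphds(1000, 4, 1.0, 1.0, 1.0, 1.0)
```

Comparing QphDS@SF across compute levels on the same dataset, as the article proposes, turns the otherwise incomparable per-scale-factor metric into a scaling-efficiency measurement.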

Keyphrases: Performance Metric, Price-Performance Metric, Quantifying Scalability, Scalability, Scaled Performance Metric, TPC-DS

BibTeX entry
BibTeX does not have an entry type for preprints; the following uses the booklet type as a workaround to produce the correct reference:
@booklet{EasyChair:3635,
  author    = {Guoheng Chen and Miso Cilimdzic and Timothy Johnson},
  title     = {Quantifying Scalability Using Extended TPC-DS Performance Metric},
  howpublished = {EasyChair Preprint 3635},
  year      = {EasyChair, 2020}}