Conducting Repeatable Experiments in Highly Variable Cloud Computing Environments
Abstract
Previous work has shown that benchmark and application performance in public cloud computing environments can be highly variable. Using Amazon EC2 traces that include measurements affected by CPU, memory, disk, and network performance, we study commonly used methodologies for comparing performance measurements in cloud computing environments. The results show considerable flaws in these methodologies that may lead to incorrect conclusions. For instance, these methodologies falsely report that the performance of two identical systems differs by 38% at a 95% confidence level. We then study the efficacy of the Randomized Multiple Interleaved Trials (RMIT) methodology using the same traces. We demonstrate that RMIT can be used to conduct repeatable experiments that enable fair comparisons in this cloud computing environment, even though changing conditions beyond the user's control make comparing competing alternatives highly challenging.
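
The abstract names the RMIT methodology without describing its mechanics. Below is a minimal, hypothetical Python sketch of how randomized multiple interleaved trials might be scheduled; the round structure, the `run_trial` workload, and the system names are illustrative assumptions, not the paper's implementation.

```python
import random
import statistics
import time

def rmit_schedule(alternatives, rounds):
    """Assumed RMIT run order: each round runs every alternative once
    in a fresh random order. Interleaving means slow drifts in cloud
    performance affect all alternatives roughly equally, and repeated
    randomized rounds keep ordering effects from biasing one side."""
    order = []
    for _ in range(rounds):
        round_order = list(alternatives)
        random.shuffle(round_order)
        order.extend(round_order)
    return order

def run_trial(alternative):
    # Placeholder measurement; a real experiment would run the actual
    # benchmark on `alternative` and record the metric of interest.
    start = time.perf_counter()
    sum(range(100_000))  # stand-in workload
    return time.perf_counter() - start

results = {}
for alt in rmit_schedule(["system_A", "system_B"], rounds=10):
    results.setdefault(alt, []).append(run_trial(alt))

for alt, samples in results.items():
    print(f"{alt}: median trial time {statistics.median(samples):.6f}s")
```

The intuition behind interleaving: if trials for one alternative were run back to back during a quiet period and the other's during a noisy one, a naive comparison would attribute the environmental difference to the systems themselves.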