Erwin van Eyk, Joel Scheuner, Simon Eismann, Cristina L. Abad, Alexandru Iosup - HotCloudPerf @ ICPE 2020
As we progress towards the serverless benchmark, and as the plans become more concrete, we have written a final vision on the subject. In this article we focus on motivating the serverless benchmark we have been working on: why are serverless benchmarks needed; why are existing benchmarks and performance studies not sufficient; and what should be within and outside the scope of this effort?
We further outline the general approach: a structured evaluation of the performance of FaaS platforms. For this we use our FaaS Reference Architecture, basing the experiments and metrics on the components it identifies.
Abstract - Serverless computing services, such as Function-as-a-Service (FaaS), hold the attractive promise of a high level of abstraction and high performance, combined with the minimization of operational logic. Several large ecosystems of serverless platforms, both open- and closed-source, aim to realize this promise. Consequently, a lucrative market has emerged. However, the performance trade-offs of these systems are not well understood. Moreover, it is exactly the high level of abstraction and the opaqueness of the operational side that make performance evaluation studies of serverless platforms challenging. Learning from the history of IT platforms, we argue that a benchmark for serverless platforms could help address this challenge. We envision a comprehensive serverless benchmark, which we contrast to the narrow focus of prior work in this area. We argue that a comprehensive benchmark will need to take into account more than just runtime overhead, and include notions of cost, realistic workloads, more (open-source) platforms, and cloud integrations. Finally, we show through preliminary real-world experiments how such a benchmark can help compare the performance overhead of running a serverless workload on state-of-the-art platforms.