Note: HotCloudPerf 2022 will be fully virtual. Our experience with the virtual ICPE (and HotCloudPerf) editions in 2020 and 2021 was excellent, and participants rated the experience and format very highly. For more information, please contact us at: firstname.lastname@example.org
The HotCloudPerf workshop provides a meeting venue for academics and practitioners, from experts to trainees, in the field of cloud computing performance. The modern understanding of cloud computing covers the full computational continuum, from data centers to edge resources to IoT sensors and devices. The workshop aims to engage this community and to foster new methodological approaches for gaining a deeper understanding not only of cloud performance, but also of cloud operation and behavior, through diverse quantitative evaluation tools, including benchmarks, metrics, and workload generators. The workshop focuses on novel cloud properties such as elasticity, performance isolation, and dependability, and on other non-functional system properties, in addition to classical performance-related metrics such as response time, throughput, scalability, and efficiency.
The HotCloudPerf workshop is technically sponsored by the Standard Performance Evaluation Corporation (SPEC) Research Group (RG) and is organized annually by the RG Cloud Group. HotCloudPerf has emerged from the series of yearly meetings organized by the RG Cloud Group since 2013. The RG Cloud Group takes a broad approach, relevant to both academia and industry, to cloud benchmarking, quantitative evaluation, and experimental analysis.
Session 1: Big Data & Microservices in the Cloud
14:15 Keynote 1: “Scaling Open Source Big Data Cloud Applications is Easy/Hard” by Paul Brebner
14:45 Floriment Klinaku, Martina Rapp, Jörg Henss and Stephan Rhode. Beauty and the beast: A case study on performance prototyping of data-intensive containerized cloud applications.
15:05 Thrivikraman V, Vishnu R Dixit, Nikhil Ram S, Vikas K Gowda, Santhosh Kumar Vasudevan and Subramaniam Kalambur. MiSeRTrace: Kernel-level Request Tracing for Microservice Visibility.
Session 2: Optimizing Datacenters and Development
15:30 Keynote 2: “Onefold Tuning for System and DNN Model Parameters” by Lydia Chen
16:00 Laurens Versluis and Alexandru Iosup. TaskFlow: An Energy- and Makespan-Aware Task Placement Policy for Workflow Scheduling through Delay Management.
16:20 Robert Cordingly and Wes Lloyd. FaaSET: A Jupyter notebook to streamline every facet of serverless development.
Session 3: Serverless Computing
16:45 Keynote 3: “Serverless Machine Learning Serving for Scalable Workflows” by Evgenia Smirni
17:15 Robert Schmitz, Danielle Lambion, Robert Cordingly, Navid Heydari and Wes Lloyd. Characterizing X86 and ARM Serverless Performance Variation: A natural language processing case study.
17:35 George Kousiouris, Chris Giannakos, Konstantinos Tserpes and Teta Stamati. Measuring Baseline Overheads in Different Orchestration Mechanisms for Large FaaS Workflows.
18:00 Social Session with members of the SPEC RG Cloud group; open to all ICPE attendees. Bring your own drink/snacks!
Lydia Chen: Onefold Tuning for System and DNN Model Parameters
Evgenia Smirni: Serverless Machine Learning Serving for Scalable Workflows
College of William & Mary, Virginia, US
Paul Brebner: Scaling Open Source Big Data Cloud Applications is Easy/Hard
Instaclustr, California, US
In the last decade, the development of modern, horizontally scalable, open-source Big Data technologies such as Apache Cassandra (for data storage) and Apache Kafka (for data streaming) has enabled cost-effective, highly scalable, reliable, low-latency applications, and has made these technologies increasingly ubiquitous. To enable reliable horizontal scalability, both Cassandra and Kafka utilize partitioning (for concurrency) and replication (for reliability and availability) across clustered servers. But building scalable applications isn’t as easy as just throwing more servers at the clusters, and unexpected speed bumps are common. Consequently, you also need to understand the performance impact of partitions, replication, and clusters; monitor the correct metrics to have an end-to-end view of applications and clusters; conduct careful benchmarking; and scale and tune iteratively to take performance insights and optimizations into account. In this presentation, I will explore some of the performance goals, challenges, solutions, and results I discovered over the last 5 years building multiple realistic demonstration applications. The examples will include trade-offs with elastic Cassandra auto-scaling, scaling a Cassandra and Kafka anomaly detection application to 19 billion checks per day, and building low-latency streaming data pipelines using Kafka Connect for multiple heterogeneous source and sink systems.
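The two mechanisms the abstract names, partitioning for concurrency and replication for reliability, can be illustrated with a minimal sketch. This is not Cassandra or Kafka code; the function names and the consecutive-node placement rule are simplifications chosen for illustration, though both systems follow the same broad idea of hashing a key to a partition and placing copies of each partition on several nodes.

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a partition via a stable hash.

    Spreading keys across partitions is what lets many clients
    read and write concurrently (the 'concurrency' half)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

def replica_nodes(partition: int, nodes: list, replication_factor: int) -> list:
    """Place a partition's replicas on distinct consecutive nodes.

    Keeping several copies is what lets the cluster survive node
    failures (the 'reliability and availability' half)."""
    start = partition % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication_factor)]

nodes = ["node-a", "node-b", "node-c", "node-d"]
p = partition_for("sensor-42", num_partitions=8)
print("partition:", p, "replicas:", replica_nodes(p, nodes, replication_factor=3))
```

The sketch also hints at why "just throwing more servers at the clusters" is not enough: changing the node or partition count changes where keys land, so rebalancing and its performance cost have to be understood and measured.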
Methodological and practical aspects of software engineering, performance engineering, and computer systems related to hot topics in cloud performance.
Empirical performance studies in cloud computing environments and systems, including observation, measurement, and surveys.
Performance analysis using modeling, simulation, and queueing theory for cloud environments, applications, and systems.
Tuning and auto-tuning of systems operating in cloud environments, e.g., auto-tiering of data or optimized resource deployment.
Software patterns and architectures for engineering cloud performance, e.g., serverless.
End-to-end performance engineering for pipelines and workflows in cloud environments, or of applications with non-trivial SLAs.
Tools for monitoring and studying cloud computing performance.
General and specific methods and methodologies for understanding and engineering cloud performance.
Serverless computing platforms and microservices in cloud datacenters.
Case studies on cloud performance and its interaction with the computational continuum.
January 15, 2022 → January 27, 2022
January 20, 2022 → January 27, 2022
February 25, 2022
May 4, 2022
April 9, 2022
Full papers (8 pages, including references)
Short papers (4 pages, including references)
Talk-only submissions (1-2 pages, not included in the proceedings)
Submissions are reviewed single-blind and should follow the ACM format of the companion conference, ICPE.
All presented papers will have ample time allocated for Q&A and feedback. In addition, each presentation session will conclude with a 10-15 minute discussion.
Call for Papers
Cristina L. Abad, Escuela Superior Politécnica del Litoral, Ecuador, (email@example.com)
Simon Eismann, University of Würzburg, Germany, (firstname.lastname@example.org)
Alexandru Iosup, VU Amsterdam, the Netherlands (email@example.com)
To contact the chairs, you can email: firstname.lastname@example.org
Cristina Abad, Escuela Superior Politécnica del Litoral
Ahmed Ali-Eldin, Chalmers | University of Gothenburg
Marta Beltran, Universidad Rey Juan Carlos
Andre Bondi, Software Performance and Scalability Consulting LLC
Marc Brooker, Amazon Web Services
Lucy Cherkasova, ARM Research
Wilhelm Hasselbring, University of Kiel
Nikolas Herbst, University of Würzburg
Alexandru Iosup, Vrije Universiteit Amsterdam
Alessandro Papadopoulos, Mälardalen University
Riccardo Pinciroli, Gran Sasso Science Institute
Joel Scheuner, Chalmers | University of Gothenburg
Petr Tůma, Charles University
Alexandru Uta, Leiden University
Erwin van Eyk, Vrije Universiteit Amsterdam
André van Hoorn, University of Hamburg
Chen Wang, IBM