Disclaimer: At this moment, it is likely that HotCloudPerf 2021 will be fully virtual. Our experience with a virtual ICPE (and HotCloudPerf) 2020 was excellent, and participants rated the experience and format very highly. For more information, please contact us: firstname.lastname@example.org
The HotCloudPerf workshop proposes a meeting venue for academics and practitioners, from experts to trainees, in the field of cloud computing performance. The workshop aims to engage this community, and to lead to the development of new methodological aspects for gaining deeper understanding not only of cloud performance, but also of cloud operation and behavior, through diverse quantitative evaluation tools, including benchmarks, metrics, and workload generators. The workshop focuses on novel cloud properties such as elasticity, performance isolation, dependability, and other non-functional system properties, in addition to classical performance-related metrics such as response time, throughput, scalability, and efficiency.
The HotCloudPerf workshop is technically sponsored by the Standard Performance Evaluation Corporation (SPEC)’s Research Group (RG), and is organized annually by the RG Cloud Group. HotCloudPerf has emerged from the series of yearly meetings organized by the RG Cloud Group since 2013. The RG Cloud Group takes a broad approach, relevant to both academia and industry, to cloud benchmarking, quantitative evaluation, and experimental analysis.
Theme of 2021 Edition
“Benchmarking in the Cloud”
Articles focusing on this topic are particularly encouraged for HotCloudPerf 2021. Topics of interest include:
Empirical performance studies in cloud computing environments, applications, and systems, including observation, measurement, and surveys.
Comparative performance studies and benchmarking of cloud environments, applications, and systems.
Performance analysis using modeling and queueing theory for cloud environments, applications, and systems.
Simulation-based studies for all aspects of cloud computing performance.
Tuning and auto-tuning of systems operating in cloud environments, e.g., auto-scaling of resources and auto-tiering of data, optimized resource deployment.
Software patterns and architectures for engineering cloud performance, e.g., serverless.
Experience with and analysis of performance of cloud deployment models, including IaaS/PaaS/SaaS/FaaS.
End-to-end performance engineering for pipelines and workflows in cloud environments, or of applications with non-trivial SLAs.
Tools for monitoring and studying cloud computing performance.
General and specific methods and methodologies for understanding and engineering cloud performance.
Serverless computing platforms and microservices in cloud datacenters.
January 20, 2021 Abstract due (informative; extended deadline)
January 25, 2021 Papers due (extended deadline)
February 11, 2021 Author Notification
February 22, 2021 Camera-ready deadline (ACM's firm camera-ready deadline is February 24)
April 19 or 20, 2021 Workshop Day
Submission Types
UPDATE: One extra page for references is now allowed.
Full papers (6 pages), plus 1 page for references
Short papers (3 pages), plus 1 page for references
Talk only (1-2 pages, not included in the proceedings).
Submissions should follow the ACM format of the companion conference, ICPE.
All presented papers will have ample time allocated for Q&A and feedback. In addition, each presentation session will be wrapped up with a 10-15 minute discussion. All materials presented during the workshop will be made available in widely used formats (e.g., PDF).
Richard Bieringa, Abijith Radhakrishnan, Tavneet Singh, Sophie Vos, Jesse Donkervliet and Alexandru Iosup. An Empirical Evaluation of the Performance of Video Conferencing Systems.
Dheeraj Chahal and Mayank Mishra. Performance and Cost Comparison of Cloud Services for Deep Learning Workload.
Giulia Guidi, Marquita Ellis, Aydin Buluc, Katherine Yelick and David Culler. 10 Years Later: Cloud Computing is Closing the Performance Gap.
Tim Hegeman, Matthijs Jansen, Alexandru Iosup and Animesh Trivedi. GradeML: Towards Holistic Performance Analysis for Machine Learning Workflows.
Luuk Klaver, Thijs van der Knaap, Johan van der Geest, Edwin Harmsma, Bram van der Waaij and Paolo Pileggi. Towards Independent Run-Time Cloud Monitoring.
Malte S. Kurz. Distributed Double Machine Learning with a Serverless Architecture.
Yuxuan Zhao, Dmitry Duplyakin, Robert Ricci and Alexandru Uta. Cloud Performance Variability Prediction.
Cristina L. Abad, Escuela Superior Politécnica del Litoral, Ecuador, (email@example.com)
Nikolas Herbst, University of Würzburg, Germany, (firstname.lastname@example.org)
Alexandru Uta, Leiden University, the Netherlands (email@example.com)
Alexandru Iosup, VU Amsterdam, the Netherlands (firstname.lastname@example.org)
To contact the chairs, you can email: email@example.com
Cristina Abad, Escuela Superior Politécnica del Litoral
Ahmed Ali-Eldin, Chalmers | University of Gothenburg
Marta Beltran, Universidad Rey Juan Carlos
Andre Bondi, Software Performance and Scalability Consulting LLC
Marc Brooker, Amazon Web Services
Lucy Cherkasova, ARM Research
Dmitry Duplyakin, University of Utah
Bogdan Ghit, Databricks
Wilhelm Hasselbring, University of Kiel
Nikolas Herbst, University of Würzburg
Alexandru Iosup, Vrije Universiteit Amsterdam
Alessandro Papadopoulos, Mälardalen University
Joel Scheuner, Chalmers | University of Gothenburg
Petr Tůma, Charles University
Alexandru Uta, Leiden University
Erwin van Eyk, Vrije Universiteit Amsterdam
André van Hoorn, University of Stuttgart
Chen Wang, IBM