Keynote 1 – Resampling with Feedback — A New Paradigm of Using Workload Data for Performance Evaluation
Bio
Dror Feitelson is a professor of Computer Science at the Hebrew University of Jerusalem, where he has been on the faculty of the Rachel and Selim Benin School of Computer Science and Engineering since 1995. His research emphasizes experimental techniques and real-world data in computer systems performance evaluation, and more recently also in software engineering. Using such data he and his students have demonstrated the importance of using correct workloads in performance evaluations, identified commonly made erroneous assumptions that may call research results into question, and developed methodologies to replace assumptions with real data. Other major contributions include co-founding the JSSPP series of workshops (now in its 20th year), establishing and maintaining the Parallel Workloads Archive (which has been used in about a thousand papers), and a recent book on Workload Modeling published by Cambridge University Press in 2015.
Abstract
Reliable performance evaluations require representative workloads. This has led to the use of accounting logs from production systems as a source for workload data in simulations. I will survey 20 years of ups and downs in the use of workload logs, culminating in the idea of resampling with feedback. It all started with the realization that using workload logs directly suffers from various deficiencies, such as providing data about only one specific situation, and lack of flexibility, namely the inability to adjust the workload as needed. Creating workload models solves some of these problems but creates others, most notably the danger of missing out on important details that were not recognized in advance, and therefore not included in the model. Resampling solves many of these deficiencies by combining the best of both worlds. It is based on partitioning the workload data into basic components (e.g. the jobs contributed by different users), and then generating new workloads by sampling from this pool of basic components. This allows analysts to create multiple varied (but related) workloads from the same original log, all the while retaining much of the structure that exists in the original workload. However, resampling should not be applied in an oblivious manner. Rather, the generated workloads need to be adjusted dynamically to the conditions of the simulated system using a feedback loop. Resampling with feedback is therefore a new way to use workload logs which benefits from the realism of logs while eliminating many of their drawbacks. In addition, it enables evaluations of throughput effects that are impossible with static workloads.
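As a concrete illustration of the resampling step described above, here is a minimal Python sketch. The job records, user names, and field layout are invented for illustration; the feedback loop that adjusts the generated workload to the simulated system's conditions is only noted in a comment, not implemented.

```python
import random
from collections import defaultdict

# Invented job records for illustration: (user, arrival_time, runtime).
log = [
    ("alice", 0, 10), ("alice", 5, 20), ("bob", 2, 7),
    ("bob", 30, 4), ("carol", 12, 60),
]

def partition_by_user(jobs):
    """Partition the log into basic components: one job sequence per user."""
    pools = defaultdict(list)
    for user, arrival, runtime in jobs:
        pools[user].append((arrival, runtime))
    return dict(pools)

def resample(pools, n_users, rng):
    """Generate a new workload by sampling whole user components
    (with replacement), retaining each user's internal structure.
    In the full method, a feedback loop would additionally reshape
    arrival times according to the simulated system's load; that
    step is omitted from this sketch."""
    workload = []
    for i, user in enumerate(rng.choices(sorted(pools), k=n_users)):
        for arrival, runtime in pools[user]:
            workload.append((f"u{i}", arrival, runtime))
    workload.sort(key=lambda job: job[1])  # merge streams by arrival time
    return workload

rng = random.Random(42)
new_workload = resample(partition_by_user(log), n_users=4, rng=rng)
```

Sampling whole user components, rather than individual jobs, is what preserves per-user structure such as bursts of similar jobs.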
Keynote 2 – Improving Cloud Effectiveness
Bio
Dr. Walfredo Cirne has worked on many aspects of cluster scheduling and management for the past 20 years. He is currently with the Technical Infrastructure Group at Google's headquarters in Mountain View, California. Previously, he was a faculty member at the Universidade Federal de Campina Grande, where he led the OurGrid project. Dr. Cirne holds a PhD in Computer Science from the University of California, San Diego, and Bachelor's and Master's degrees from the Universidade Federal de Campina Grande.
Abstract
Cloud computing has emerged in the last decade as a very cost-effective way to do computing. Consumers avoid the fixed costs and slow deployment of running their own computers, gaining the ability to massively scale their computational capacity.
This talk discusses what is needed to make the Cloud even more effective. Part of it is further increasing the scope of the Cloud, by making it better cover the demands of specialized, large users. The other part is making the Cloud more efficient. How can we manage the data center so as to increase its utilization? In particular, we show how providing different SLOs enables us to better utilize our data centers, as well as the impact such strategies have on the user experience, from reliability and performance to scalability and prices.
Keynote 3 – Energy-Efficient Algorithms
Bio
Susanne Albers has been a professor in the Department of Computer Science at the Technical University of Munich since 2013. Her research interests are in the design and analysis of algorithms. Susanne Albers received her graduate education at Saarland University and the Max Planck Institute for Informatics (MPII), Saarbrücken, Germany. After completing her PhD in 1993 she was a senior researcher at MPII until 1999. Between 1999 and 2013 she held positions as associate and full professor at the University of Dortmund, the University of Freiburg, and Humboldt-Universität zu Berlin. In 2008 Susanne Albers received the Leibniz Award of the German Research Foundation, the highest honor in German research. She is a member of the Leopoldina, the German National Academy of Sciences, and of the Academy of Sciences and Literature in Mainz. Moreover, she is a Fellow of the EATCS.
Abstract
We survey algorithmic techniques for energy savings. So far, the algorithms literature has focused mostly on the system and device level: how can we save energy in a given computational device? More specifically, (a) power-down mechanisms and (b) dynamic speed scaling have been explored.
Power-down mechanisms: Consider a single device that is equipped with two states, an active state and a sleep state. These states have individual power consumption rates. Moreover, transitions between the states consume energy. We first present simple algorithms that specify state transitions in idle periods where the device is not in use. In the offline setting, the length of an idle period is known in advance. In the online setting, this information is not available.
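The two-state setting above is the classic ski-rental problem. As a sketch, assuming for illustration a sleep state with zero power consumption and a single wake-up transition cost, the well-known 2-competitive online rule can be written as:

```python
def offline_energy(idle_len, active_power, wakeup_cost):
    """Optimal offline choice for an idle period of known length:
    either stay active throughout, or sleep immediately and pay
    the wake-up transition cost at the end."""
    return min(idle_len * active_power, wakeup_cost)

def online_energy(idle_len, active_power, wakeup_cost):
    """2-competitive online rule: stay active until the energy spent
    idling equals the wake-up cost, then transition to sleep."""
    threshold = wakeup_cost / active_power  # break-even idle time
    if idle_len <= threshold:
        return idle_len * active_power
    # Idled up to the threshold (cost == wakeup_cost), then slept,
    # so the wake-up transition must be paid as well.
    return wakeup_cost + wakeup_cost

# The online cost never exceeds twice the offline optimum.
for idle_len in (0.5, 1.0, 2.0, 10.0):
    assert online_energy(idle_len, 1.0, 1.0) <= 2 * offline_energy(idle_len, 1.0, 1.0)
```

Intuitively, the online algorithm hedges: short idle periods cost it exactly the optimum, and long ones cost at most twice the transition cost the offline optimum pays.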
Furthermore, we review results for the more advanced setting in which the device has several low-power states. Again we show offline and online algorithms. Moreover, we study the challenging scenario in which a large set of parallel devices/processors is given. The processors are heterogeneous in that each one has an individual set of low-power states with associated power consumption rates. Over a time horizon the processing demands vary. We give algorithms for constructing state transition schedules that minimize the total energy consumed by all the processors.
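With several low-power states, the offline cost of spending an idle period of length t in state i is a line, transition_i + power_i * t, so the optimal state lies on the lower envelope of these lines. A small sketch with invented state parameters:

```python
# Invented states for illustration: (transition/wake-up cost, power rate).
states = [(0.0, 3.0),   # active: no transition cost, high power
          (2.0, 1.0),   # standby
          (5.0, 0.1)]   # deep sleep

def best_state(idle_len):
    """Offline: pick the state on the lower envelope of the cost
    lines  cost_i(t) = transition_i + power_i * t."""
    return min(range(len(states)),
               key=lambda i: states[i][0] + states[i][1] * idle_len)
```

With these numbers, a short idle period stays active, a medium one goes to standby, and a long one justifies deep sleep, illustrating how the envelope partitions idle-period lengths among the states.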
Dynamic speed scaling: This technique is based on the fact that many modern microprocessors can run at variable speed. High speeds imply high performance but also high energy consumption. Low speed levels save energy but the performance degrades. The general goal is to execute a set of jobs on variable-speed processors so as to optimize energy and, possibly, a second objective.
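Under the power model commonly used in this literature, P(s) = s^α with α > 1, energy is a convex function of speed, so processing a job at one constant speed is never worse than varying the speed over the same interval. A small numeric illustration (the exponent and job numbers are arbitrary):

```python
ALPHA = 3  # typical power-model exponent, P(s) = s**ALPHA

def energy(volume, duration):
    """Energy to process `volume` units of work in `duration` time
    at the constant speed volume / duration."""
    speed = volume / duration
    return duration * speed ** ALPHA

# Processing 10 units of work in 2 time units at one constant speed...
constant = energy(10, 2)
# ...beats splitting the interval and running at two different speeds,
# because power is a convex function of speed.
split = energy(6, 1) + energy(4, 1)
assert constant < split
```

This convexity argument is the basic building block behind the optimal offline strategies for deadline-based speed scaling.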
We first review basic results for a single processor. We consider classical deadline-based scheduling where each job is specified by an arrival time, a deadline and a processing volume. Offline and online strategies are presented. We also study a second setting where jobs are not labeled with deadlines and, instead, the objective is to minimize the total cost consisting of job response times and energy. Additionally, we review results for parallel processing environments where a set of homogeneous or heterogeneous processors is given. Last but not least, we address an advanced problem setting in which dynamic speed scaling and power-down mechanisms are combined.
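For the deadline-based offline problem, the classic YDS algorithm (Yao, Demers, Shenker) repeatedly identifies the interval of maximum density (contained work divided by interval length), runs its jobs at that density, and removes the interval from the timeline. A compact, unoptimized Python sketch; the job format and the brute-force candidate search are simplifications:

```python
from itertools import product

def yds(jobs):
    """YDS sketch. jobs: dict id -> (arrival, deadline, volume).
    Returns dict id -> constant speed assigned to that job."""
    jobs = dict(jobs)
    speeds = {}
    while jobs:
        # Candidate intervals are delimited by arrivals and deadlines.
        starts = {a for a, d, w in jobs.values()}
        ends = {d for a, d, w in jobs.values()}
        best, best_density = None, -1.0
        for a, d in product(starts, ends):
            if a >= d:
                continue
            # Work that must be done entirely within [a, d].
            vol = sum(w for (r, dd, w) in jobs.values() if r >= a and dd <= d)
            if vol / (d - a) > best_density:
                best, best_density = (a, d), vol / (d - a)
        a, d = best
        done = [j for j, (r, dd, w) in jobs.items() if r >= a and dd <= d]
        for j in done:
            speeds[j] = best_density   # run these jobs at the peak density
            del jobs[j]

        def squeeze(t, a=a, d=d):
            """Delete the scheduled interval [a, d] from the timeline."""
            return t - max(0.0, min(t, d) - a)

        jobs = {j: (squeeze(r), squeeze(dd), w)
                for j, (r, dd, w) in jobs.items()}
    return speeds
```

For example, a tight job (arrival 0, deadline 1, volume 3) alongside a slack job (arrival 0, deadline 4, volume 3) yields speeds 3 and 1 respectively: the tight job defines the peak-density interval and is peeled off first.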