There has been a lot of excitement around using machine learning to improve the performance and usability of database systems. However, few of these techniques have actually been used in the critical path of customer-facing database services. In this paper, we describe Auto-WLM, a machine learning based automatic workload manager currently used in production in Amazon Redshift. Auto-WLM is an example of how machine learning can improve the performance of large data-warehouses in practice and at scale. Auto-WLM intelligently schedules workloads to maximize throughput and horizontally scales clusters in response to workload spikes. While traditional heuristic-based workload management requires a lot of manual tuning (e.g. of the concurrency level, memory allocated to queries, etc.) for each specific workload, Auto-WLM does this tuning automatically, and as a result is able to quickly adapt and react to workload changes and demand spikes. At its core, Auto-WLM uses locally-trained query performance models to predict the query execution time and memory needs for each query, and uses these predictions to make intelligent scheduling decisions. Currently, Auto-WLM makes millions of decisions every day, constantly optimizing the performance of each individual Amazon Redshift cluster. In this paper, we will describe the advantages and challenges of implementing and deploying Auto-WLM, as well as outline areas of research that may be of interest to those in the "ML for systems" community with an eye for practicality.
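The abstract does not give implementation details, but the core idea — using per-query predictions of execution time and memory to drive scheduling — can be illustrated with a minimal, hypothetical sketch. All names and the admission policy below (shortest-predicted-job-first under a memory budget) are illustrative assumptions, not Auto-WLM's actual algorithm:

```python
import heapq
from dataclasses import dataclass

@dataclass
class Query:
    """A query annotated with model predictions (hypothetical structure).
    In a real system the predictions would come from locally-trained
    performance models applied to plan features."""
    query_id: int
    predicted_ms: float   # predicted execution time
    predicted_mb: float   # predicted peak memory need

class WlmSketch:
    """Illustrative admission controller: admit a query only if its
    predicted memory fits in the remaining budget, preferring queries
    with the shortest predicted execution time."""

    def __init__(self, memory_budget_mb: float):
        self.memory_budget_mb = memory_budget_mb
        self.in_flight_mb = 0.0
        self._queue = []  # min-heap keyed by predicted execution time

    def submit(self, q: Query) -> None:
        # query_id breaks ties so Query objects are never compared directly
        heapq.heappush(self._queue, (q.predicted_ms, q.query_id, q))

    def admit(self) -> list:
        """Admit every queued query whose predicted memory fits;
        re-queue the rest and return the admitted batch."""
        admitted, deferred = [], []
        while self._queue:
            _, _, q = heapq.heappop(self._queue)
            if self.in_flight_mb + q.predicted_mb <= self.memory_budget_mb:
                self.in_flight_mb += q.predicted_mb
                admitted.append(q)
            else:
                deferred.append(q)
        for q in deferred:
            self.submit(q)
        return admitted
```

Even this toy policy shows why prediction quality matters: an underestimate of memory admits too much work and risks spilling, while an overestimate leaves throughput on the table — which is why retraining the models locally, per cluster, is central to the approach described in the paper.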