Repost

Classic Paper | Large-Scale Cluster Management at Google with Borg (Chapter 7, Chapter 8, Acknowledgements, References)

[Editor's note] These final two chapters cover related work and lessons for improvement. They show how much thought went into the move from Borg to Kubernetes, and that this work is far from finished and still ongoing. We hope everyone can learn something from Google's experience and share what they find.

7. Related Work

Resource scheduling has been studied for decades, in contexts as varied as wide-area HPC supercomputing grids, networks of workstations, and large-scale server clusters. We focus here on the most relevant of these: large-scale server clusters.

Several recent studies have analyzed cluster traces from Yahoo, Google, and Facebook [20, 52, 63, 68, 70, 80, 82], illustrating the challenges of scale and heterogeneity inherent in modern datacenters and their workloads. [69] contains a taxonomy of cluster-manager architectures.

Apache Mesos [45] separates resource management from application placement: a central resource manager (somewhat like the Borgmaster minus its scheduler) offers resources to multiple "frameworks" such as Hadoop [41] and Spark [73] using an offer-based mechanism. Borg instead centralizes these functions, using a request-based mechanism that scales to much larger deployments. DRF [29, 35, 36, 66] policies are built into Mesos; Borg uses priorities and quota admission control instead. The Mesos developers have announced ambitions to add speculative resource assignment and reclamation, and to address the problems identified in [69].

YARN [76] is a Hadoop-centric cluster manager. Each application has a manager that negotiates for resources with a central resource manager; this is much the same scheme that Google MapReduce jobs have used to obtain resources from Borg since about 2008. YARN's resource manager only recently became fault tolerant. A related open-source effort is the Hadoop Capacity Scheduler [42], which provides multi-tenant capacity guarantees, hierarchical queues, elastic sharing, and fairness. YARN has recently been extended to support multiple resource types, priorities, preemption, and advanced admission control [21]. The Tetris research prototype [40] supports makespan-aware job packing.

Facebook's Tupperware [64] is a Borg-like system for scheduling cgroup containers; although only a few details have been disclosed, it appears to provide a form of resource reclamation as well. Twitter has open-sourced Aurora [5], a Borg-like scheduler for long-running services that runs on top of Mesos, with a configuration language and state machine similar to Borg's.

Microsoft's Autopilot [48] provides "automating software provisioning and deployment; system monitoring; and carrying out repair actions to deal with faulty software and hardware" for Microsoft clusters. The Borg ecosystem provides similar features, and more besides, which space precludes describing here; Isard [48] outlines many best practices that we aspire to as well.

Quincy [49] uses a network-flow model to provide fairness- and data-locality-aware scheduling for DAG-based data-processing workloads on clusters of a few hundred nodes. Borg uses quota and priorities to share resources among users on clusters of tens of thousands of machines. Quincy handles execution graphs directly, while such support is built separately on top of Borg.

Cosmos [44] focuses on batch processing, with an emphasis on ensuring that users get fair access to the resources they have contributed to the cluster. It uses a per-job manager to acquire resources; few details are publicly available.

Microsoft's Apollo system [13] uses per-job schedulers for short-lived batch jobs, achieving high throughput on clusters that appear to be comparable in size to Borg cells. Apollo uses opportunistic execution of lower-priority background work to boost utilization, at the cost of (sometimes) multi-day queueing delays. Each Apollo node provides a prediction matrix of starting times as a function of two resource dimensions, which the schedulers combine with estimates of startup costs and remote-data-access costs to make placement decisions, moderated by random delays to reduce collisions. Borg uses a central scheduler for placement decisions, handles more resource dimensions with priority-based allocation, and focuses on the needs of high-availability, long-running applications; Apollo can probably handle a higher rate of task arrivals.

Alibaba's Fuxi [84] (Translator's note: 伏羲, "Fuxi") supports data-analysis workloads and has been running since 2009. Like the Borgmaster, a central FuxiMaster (replicated for fault tolerance) gathers resource-availability information from nodes, accepts requests from applications, and matches one to the other. Fuxi's scheduling policy is the reverse of Borg's: it matches newly available resources against a backlog of queued, pending requests. Like Mesos, Fuxi allows "virtual resource" types to be defined. Only synthetic workload results have been published.

Omega [69] supports multiple parallel, specialized "verticals", each roughly equivalent to a Borgmaster minus its persistent store and link shards. Omega schedulers use optimistic concurrency control to manipulate a shared representation of desired and observed cell state held in a central persistent store, which is synced to and from the Borglets by a separate link component. The Omega architecture was designed to support multiple distinct workloads, each with its own application-specific RPC interface, state machine, and scheduling policy (e.g., long-running servers, batch jobs from multiple frameworks, storage infrastructure, virtual machines on GCE). By contrast, Borg offers a "one size fits all" approach: the same RPC interface, state-machine semantics, and scheduling policy, which have grown in size and complexity over time as more and more kinds of workload needed to be supported, while scalability has not (yet) been a problem (§3.4).

Google's open-source Kubernetes system [53] places applications in Docker containers [28] and distributes them across multiple machines. It runs both on bare metal (like Borg) and on hosts provided by clouds such as GCE. Kubernetes is being built by many of the same engineers who built Borg and is under active development. Google offers a hosted version called Google Container Engine [39]. We discuss how lessons from Borg are being applied to Kubernetes in the next section.

The high-performance computing community has a long tradition of work in this area (e.g., Maui, Moab, Platform LSF [2, 47, 50]), but the requirements of scale, workloads, and fault tolerance in a Google cell are quite different. In general, such systems achieve very high utilization by keeping large backlogs of jobs waiting in long queues.

Virtualization providers such as VMware [77] and datacenter solution providers such as HP and IBM [46] offer cluster-management solutions that typically scale to around 1,000 machines. In addition, several research groups have prototyped systems that improve the quality of scheduling decisions in various ways (e.g., [25, 40, 72, 74]).

Finally, as we have pointed out, another important part of managing large-scale clusters is automation and operator scale-out. [43] describes how planning for failures, multi-tenancy, health checking, admission control, and restartability are necessary to achieve a high ratio of machines per operator. Borg's design philosophy is the same, and allows a single SRE to support more than ten thousand machines.

8. Lessons and Future Work

In this section we recount some of the qualitative lessons we have learned from operating Borg in production for more than a decade, and describe how these observations have been leveraged in designing Kubernetes [53].

8.1 Lessons learned: the bad

We begin with some Borg features that have drawn complaints, and describe how Kubernetes does things differently.

Jobs are the only grouping mechanism for tasks. Borg has no first-class way to manage an entire multi-job service as a single entity, or to refer to related instances of a service (e.g., canary and production tracks). As a hack, users encode their service topology in the job name and build higher-level management tools to parse these names. The flip side of this is that there is no way to refer to arbitrary subsets of a job, which leads to rigid semantics and makes things like rolling updates and job resizing awkward.

To avoid these difficulties, Kubernetes rejects the job notion and instead organizes its scheduling units (pods) using labels: arbitrary key/value pairs that users can attach to any object in the system. The equivalent of a Borg job can be achieved by attaching a job:jobname label to a set of pods, but any other useful grouping can be expressed too, such as the service, the tier, or the release type (production, staging, test). Operations in Kubernetes identify their targets by means of label selectors, which pick out the objects the operation should apply to. This approach is far more flexible than a single, fixed grouping of a job.
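To make the mechanism concrete, here is a minimal sketch of how label selection generalizes job-based grouping. The types, label keys, and values are illustrative only, not the real Kubernetes API; it simply models matching a selector against labeled pods.

```go
package main

import "fmt"

// Labels are arbitrary key/value pairs attached to an object such as a pod.
type Labels map[string]string

// Selector is a conjunction of required key/value pairs.
type Selector map[string]string

// Matches reports whether the labels satisfy every requirement in the selector.
func (s Selector) Matches(l Labels) bool {
	for k, v := range s {
		if l[k] != v {
			return false
		}
	}
	return true
}

func main() {
	pods := []Labels{
		{"job": "websearch", "tier": "frontend", "track": "production"},
		{"job": "websearch", "tier": "frontend", "track": "canary"},
		{"job": "websearch", "tier": "backend", "track": "production"},
	}
	// Target only the production frontend instances, a grouping that a
	// Borg job name alone could not express without name-encoding hacks.
	sel := Selector{"tier": "frontend", "track": "production"}
	for _, p := range pods {
		if sel.Matches(p) {
			fmt.Println("selected:", p)
		}
	}
}
```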

One IP address per machine complicates things. In Borg, all the tasks on a machine use the single IP address of their host and share its port space. This causes a number of difficulties: Borg must schedule ports as a resource; tasks must pre-declare how many ports they need and be told at startup which ones they may use; the Borglet must enforce port isolation; and the naming and RPC systems must handle ports as well as IP addresses.

Thanks to the advent of Linux namespaces, virtual machines, IPv6, and software-defined networking (SDN), Kubernetes can take a more user-friendly approach that eliminates these complications: every pod and service gets its own IP address, allowing developers to choose ports rather than requiring their software to adapt to the ones chosen by the infrastructure, and removing the infrastructure complexity of managing ports.
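As a small illustration of the difference this makes, here is a minimal sketch. It assumes a hypothetical PORT0 environment variable as the way an assigned port might be handed to a task in the shared-IP model (the variable name is an assumption for illustration, not Borg's actual interface); with an IP per pod, the developer simply picks the port.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// listenAddr contrasts the two models.
func listenAddr() string {
	// Shared host IP: the infrastructure assigns a port at startup and the
	// task must discover it (PORT0 is a hypothetical variable name).
	if p := os.Getenv("PORT0"); p != "" {
		return ":" + p
	}
	// IP per pod: the developer chooses the port; no negotiation with the
	// infrastructure, and no port accounting by the scheduler.
	return ":8080"
}

func main() {
	addr := listenAddr()
	fmt.Println("listening on", addr)
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe(addr, nil))
}
```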

Optimizing for power users at the expense of casual ones. Borg provides a large set of features aimed at "power users" so they can fine-tune the way their programs are run (the BCL specification lists about 230 parameters): the initial focus was on supporting the largest resource consumers at Google, for whom efficiency gains were paramount. Unfortunately, the richness of this API makes things harder for the "casual" user and constrains its evolution. Our solution has been to build automation tools and services that run on top of Borg and determine appropriate settings from experimentation. These benefit from the freedom that failure-tolerant applications provide: even if the automation makes a mistake, it is a nuisance, not a disaster.

8.2 Lessons learned: the good

On the other hand, a number of Borg's design features have been remarkably beneficial and have stood the test of time.

Allocs are useful. The Borg alloc abstraction spawned the widely-used logsaver pattern (§2.4) and another popular one in which a web server is periodically updated by a data-loader task. Allocs and packages allow such helper services to be developed by separate teams. The Kubernetes equivalent of an alloc is the pod, a resource envelope for one or more containers that are always scheduled onto the same machine and can share resources. Kubernetes uses helper containers in the same pod instead of tasks in an alloc, but the idea is the same.
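A minimal sketch of this idea, using purely hypothetical types rather than the real Kubernetes objects: a pod is a resource envelope bound to one machine that holds a main container plus helper containers (such as a logsaver), each of which could be built and released by a different team.

```go
package main

import "fmt"

// Container is one image running inside the pod's shared resource envelope.
type Container struct {
	Name  string
	Image string
}

// Pod plays the role of a Borg alloc: resources are reserved for the pod as
// a whole, and all of its containers land on the same machine.
type Pod struct {
	Name       string
	Node       string
	CPUMilli   int
	MemoryMiB  int
	Containers []Container
}

func main() {
	p := Pod{
		Name:      "webserver-0",
		CPUMilli:  2000,
		MemoryMiB: 4096,
		Containers: []Container{
			{Name: "server", Image: "example/webserver:1.2"},
			// Helper container, analogous to the logsaver task in an alloc.
			{Name: "logsaver", Image: "example/logsaver:0.9"},
		},
	}
	fmt.Printf("%+v\n", p)
}
```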

Cluster management is more than task management. Although Borg's primary role is to manage the lifecycles of tasks and machines, the applications that run on Borg benefit from many other cluster services, including naming and load balancing. Kubernetes supports naming and load balancing using the service abstraction: a service has a name and selects a dynamic set of pods via a label selector. Under the hood, Kubernetes automatically load-balances connections to the service among the pods matching the selector, and keeps track of where the pods are running as they get rescheduled onto other machines after failures.
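The sketch below illustrates this behavior with illustrative types (not the real Kubernetes API): the service re-evaluates its label selector against the current set of pods on every pick, so a pod that has been rescheduled with a new address is picked up automatically.

```go
package main

import "fmt"

type Pod struct {
	Labels map[string]string
	IP     string
}

// Service is a name plus a label selector; it load-balances across whichever
// pods currently match the selector.
type Service struct {
	Name     string
	Selector map[string]string
	next     int // round-robin cursor
}

func (s *Service) matches(p Pod) bool {
	for k, v := range s.Selector {
		if p.Labels[k] != v {
			return false
		}
	}
	return true
}

// Pick returns the next backend IP among the currently matching pods.
func (s *Service) Pick(pods []Pod) (string, bool) {
	var backends []string
	for _, p := range pods {
		if s.matches(p) {
			backends = append(backends, p.IP)
		}
	}
	if len(backends) == 0 {
		return "", false
	}
	ip := backends[s.next%len(backends)]
	s.next++
	return ip, true
}

func main() {
	pods := []Pod{
		{Labels: map[string]string{"app": "web"}, IP: "10.0.0.1"},
		{Labels: map[string]string{"app": "web"}, IP: "10.0.0.2"},
	}
	svc := &Service{Name: "web", Selector: map[string]string{"app": "web"}}
	for i := 0; i < 3; i++ {
		if ip, ok := svc.Pick(pods); ok {
			fmt.Println("route to", ip)
		}
	}
}
```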

Introspection is vital. Although Borg almost always "just works", when something goes wrong, finding the root cause can be challenging. An important design decision in Borg was to surface debugging information to all users rather than hiding it: Borg has thousands of users, so "self-help" has to be the first step in debugging. Although this makes it harder for us to deprecate internal policies that users have come to depend on, it has been a success, and we have found no realistic alternative. To handle the enormous volume of data, we provide several levels of UI and debugging tools, so users can drill down into the error logs and event details of both their applications and the infrastructure itself.

Kubernetes aims to replicate many of Borg's introspection techniques. For example, it ships with tools such as cAdvisor [15] for resource monitoring, and log aggregation based on Elasticsearch/Kibana [30] and Fluentd [32]. The master can be queried for a snapshot of an object's state. Kubernetes has a unified mechanism that all components can use to record events (e.g., a pod being scheduled, a container failing) and make them available to clients.

The master is the kernel of a distributed system. The Borgmaster was originally designed as a monolithic system, but over time it has become the kernel at the heart of an ecosystem of services that cooperate to manage user jobs. For example, we split off the scheduler and the primary UI (Sigma) into separate processes, and added services for admission control, vertical and horizontal auto-scaling, re-packing tasks, periodic job submission (cron), workflow management, and archiving system actions for off-line querying. Together, these have allowed us to scale up the workload and feature set without sacrificing performance or maintainability.

The Kubernetes architecture goes further: it has an API server at its core that is responsible only for processing requests and manipulating the underlying state objects. The cluster-management logic is built as small, composable micro-services that are clients of this API server, such as the replication controller, which maintains the desired number of replicas of a pod in the face of failures, and the node controller, which manages the machine lifecycle.
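As a minimal sketch of the kind of control loop a replication controller runs (illustrative code, not the real implementation): compare the desired replica count with what is actually running and create or delete pods to close the gap. The in-memory cluster value below stands in for the state held by the API server.

```go
package main

import "fmt"

// cluster is an in-memory stand-in for the pod state held by the API server.
type cluster struct {
	pods []string
}

func (c *cluster) createPod(name string) { c.pods = append(c.pods, name) }

func (c *cluster) deletePod() {
	if len(c.pods) > 0 {
		c.pods = c.pods[:len(c.pods)-1]
	}
}

// reconcile drives the observed state toward the desired replica count.
func reconcile(c *cluster, desired int) {
	for len(c.pods) < desired {
		c.createPod(fmt.Sprintf("web-%d", len(c.pods)))
	}
	for len(c.pods) > desired {
		c.deletePod()
	}
}

func main() {
	c := &cluster{pods: []string{"web-0"}} // only one replica survived a failure
	reconcile(c, 3)                        // desired state: three replicas
	fmt.Println(c.pods)                    // [web-0 web-1 web-2]
}
```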

8.3 Conclusion

Virtually all of Google's cluster workloads have moved to Borg over the past decade. We continue to evolve it, and we are applying the lessons we learned from it to Kubernetes.

Acknowledgements

The authors of this paper performed the evaluations and wrote the paper, but the dozens of engineers who designed, implemented, and maintained Borg's components and ecosystem are the key to its success. We list here just the main designers, implementers, and operators of the Borgmaster and Borglets. Our apologies if we have missed anybody.

The Borgmaster was primarily designed and implemented by Jeremy Dion and Mark Vandevoorde, with Ben Smith, Ken Ashcraft, Maricia Scott, Ming-Yee Iu, and Monika Henzinger. The Borglet was primarily designed and implemented by Paul Menage.

Other contributors include Abhishek Rai, Abhishek Verma, Andy Zheng, Ashwin Kumar, Beng-Hong Lim, Bin Zhang, Bolu Szewczyk, Brian Budge, Brian Grant, Brian Wickman, Chengdu Huang, Cynthia Wong, Daniel Smith, Dave Bort, David Oppenheimer, David Wall, Dawn Chen, Eric Haugen, Eric Tune, Ethan Solomita, Gaurav Dhiman, Geeta Chaudhry, Greg Roelofs, Grzegorz Czajkowski, James Eady, Jarek Kusmierek, Jaroslaw Przybylowicz, Jason Hickey, Javier Kohen, Jeremy Lau, Jerzy Szczepkowski, John Wilkes, Jonathan Wilson, Joso Eterovic, Jutta Degener, Kai Backman, Kamil Yurtsever, Kenji Kaneda, Kevan Miller, Kurt Steinkraus, Leo Landa, Liza Fireman, Madhukar Korupolu, Mark Logan, Markus Gutschke, Matt Sparks, Maya Haridasan, Michael Abd-El-Malek, Michael Kenniston, Mukesh Kumar, Nate Calvin, Onufry Wojtaszczyk, Patrick Johnson, Pedro Valenzuela, Piotr Witusowski, Praveen Kallakuri, Rafal Sokolowski, Richard Gooch, Rishi Gosalia, Rob Radez, Robert Hagmann, Robert Jardine, Robert Kennedy, Rohit Jnagal, Roy Bryant, Rune Dahl, Scott Garriss, Scott Johnson, Sean Howarth, Sheena Madan, Smeeta Jalan, Stan Chesnutt, Temo Arobelidze, Tim Hockin, Todd Wang, Tomasz Blaszczyk, Tomasz Wozniak, Tomek Zielonka, Victor Marmol, Vish Kannan, Vrigo Gokhale, Walfredo Cirne, Walt Drummond, Weiran Liu, Xiaopan Zhang, Xiao Zhang, Ye Zhao, and Zohaib Maya.

The Borg SRE team has also been essential, and has included Adam Rogoyski, Alex Milivojevic, Anil Das, Cody Smith, Cooper Bethea, Folke Behrens, Matt Liggett, James Sanford, John Millikin, Matt Brown, Miki Habryn, Peter Dahl, Robert van Gent, Seppi Wilhelmi, Seth Hettich, Torsten Marek, and Viraj Alankar. The Borg configuration language (BCL) and borgcfg tool were developed by Marcel van Lohuizen and Robert Griesemer.

We thank our reviewers (especially Eric Brewer, Malte Schwarzkopf, and Tom Rodeheffer), and our shepherd, Christos Kozyrakis, for their feedback on this paper.

References

[1] O. A. Abdul-Rahman and K. Aida. Towards understanding the usage behavior of Google cloud users: the mice and elephants phenomenon. In Proc. IEEE Int’l Conf. on Cloud Computing Technology and Science (CloudCom), pages 272–277, Singapore, Dec. 2014.

[2] Adaptive Computing Enterprises Inc., Provo, UT. Maui Scheduler Administrator's Guide, 3.2 edition, 2011.

[3] T. Akidau, A. Balikov, K. Bekiroğlu, S. Chernyak, J. Haberman, R. Lax, S. McVeety, D. Mills, P. Nordstrom, and S. Whittle. MillWheel: fault-tolerant stream processing at internet scale. In Proc. Int'l Conf. on Very Large Data Bases (VLDB), pages 734–746, Riva del Garda, Italy, Aug. 2013.

[4] Y. Amir, B. Awerbuch, A. Barak, R. S. Borgstrom, and A. Keren. An opportunity cost approach for job assignment in a scalable computing cluster. IEEE Trans. Parallel Distrib. Syst., 11(7):760–768, July 2000.

[5] Apache Aurora. http://aurora.incubator.apache.org/ , 2014.

[6] Aurora Configuration Tutorial. https://aurora.incubator.apach ... rial/ ,2014.

[7] AWS. Amazon Web Services VM Instances. http://aws.amazon.com/ec2/instance-types/ , 2014.

[8] J. Baker, C. Bond, J. Corbett, J. Furman, A. Khorlin, J. Larson, J.-M. Leon, Y. Li, A. Lloyd, and V. Yushprakh. Megastore: Providing scalable, highly available storage for interactive services. In Proc. Conference on Innovative Data Systems Research (CIDR), pages 223–234, Asilomar, CA, USA, Jan. 2011.

[9] M. Baker and J. Ousterhout. Availability in the Sprite distributed file system. Operating Systems Review, 25(2):95–98, Apr. 1991.

[10] L. A. Barroso, J. Clidaras, and U. Hölzle. The datacenter as a computer: an introduction to the design of warehouse-scale machines. Morgan Claypool Publishers, 2nd edition, 2013.

[11] L. A. Barroso, J. Dean, and U. Holzle. Web search for a planet: the Google cluster architecture. In IEEE Micro, pages 22–28, 2003.

[12] I. Bokharouss. GCL Viewer: a study in improving the understanding of GCL programs. Technical report, Eindhoven Univ. of Technology, 2008. MS thesis.

[13] E. Boutin, J. Ekanayake, W. Lin, B. Shi, J. Zhou, Z. Qian, M. Wu, and L. Zhou. Apollo: scalable and coordinated scheduling for cloud-scale computing. In Proc. USENIX Symp. on Operating Systems Design and Implementation (OSDI), Oct. 2014.

[14] M. Burrows. The Chubby lock service for loosely-coupled distributed systems. In Proc. USENIX Symp. on Operating Systems Design and Implementation (OSDI), pages 335–350, Seattle, WA, USA, 2006.

[15] cAdvisor. https://github.com/google/cadvisor , 2014

[16] CFS per-entity load patches. http://lwn.net/Articles/531853 , 2013.

[17] cgroups. http://en.wikipedia.org/wiki/Cgroups , 2014.

[18] C. Chambers, A. Raniwala, F. Perry, S. Adams, R. R. Henry, R. Bradshaw, and N. Weizenbaum. FlumeJava: easy, efficient data-parallel pipelines. In Proc. ACM SIGPLAN Conf. on Programming Language Design and Implementation (PLDI), pages 363–375, Toronto, Ontario, Canada, 2010.

[19] F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. E. Gruber. Bigtable: a distributed storage system for structured data. ACM Trans. on Computer Systems, 26(2):4:1–4:26, June 2008.

[20] Y. Chen, S. Alspaugh, and R. H. Katz. Design insights for MapReduce from diverse production workloads. Technical Report UCB/EECS–2012–17, UC Berkeley, Jan. 2012.

[21] C. Curino, D. E. Difallah, C. Douglas, S. Krishnan, R. Ramakrishnan, and S. Rao. Reservation-based scheduling: if you’re late don’t blame us! In Proc. ACM Symp. on Cloud Computing (SoCC), pages 2:1–2:14, Seattle, WA, USA, 2014.

[22] J. Dean and L. A. Barroso. The tail at scale. Communications of the ACM, 56(2):74–80, Feb. 2012.

[23] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.

[24] C. Delimitrou and C. Kozyrakis. Paragon: QoS-aware scheduling for heterogeneous datacenters. In Proc. Int'l Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Mar. 2013.

[25] C. Delimitrou and C. Kozyrakis. Quasar: resource-efficient and QoS-aware cluster management. In Proc. Int’l Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 127–144, Salt Lake City, UT, USA, 2014.

[26] S. Di, D. Kondo, and W. Cirne. Characterization and comparison of cloud versus Grid workloads. In International Conference on Cluster Computing (IEEE CLUSTER), pages 230–238, Beijing, China, Sept. 2012.

[27] S. Di, D. Kondo, and C. Franck. Characterizing cloud applications on a Google data center. In Proc. Int’l Conf. on Parallel Processing (ICPP), Lyon, France, Oct. 2013.

[28] Docker Project. https://www.docker.io/ , 2014.

[29] D. Dolev, D. G. Feitelson, J. Y. Halpern, R. Kupferman, and N. Linial. No justified complaints: on fair sharing of multiple resources. In Proc. Innovations in Theoretical Computer Science (ITCS), pages 68–75, Cambridge, MA, USA, 2012.

[30] ElasticSearch. http://www.elasticsearch.org , 2014.

[31] D. G. Feitelson. Workload Modeling for Computer Systems Performance Evaluation. Cambridge University Press, 2014.

[32] Fluentd. http://www.fluentd.org/ , 2014.

[33] GCE. Google Compute Engine. http://cloud.google.com/products/compute-engine/, 2014.

[34] S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google File System. In Proc. ACM Symp. on Operating Systems Principles (SOSP), pages 29–43, Bolton Landing, NY, USA, 2003. ACM.

[35] A. Ghodsi, M. Zaharia, B. Hindman, A. Konwinski, S. Shenker, and I. Stoica. Dominant Resource Fairness: fair allocation of multiple resource types. In Proc. USENIX Symp. on Networked Systems Design and Implementation (NSDI), pages 323–326, 2011.

[36] A. Ghodsi, M. Zaharia, S. Shenker, and I. Stoica. Choosy: max-min fair sharing for datacenter jobs with constraints. In Proc. European Conf. on Computer Systems (EuroSys), pages 365–378, Prague, Czech Republic, 2013.

[37] D. Gmach, J. Rolia, and L. Cherkasova. Selling T-shirts and time shares in the cloud. In Proc. IEEE/ACM Int’l Symp. on Cluster, Cloud and Grid Computing (CCGrid), pages 539–546, Ottawa, Canada, 2012.

[38] Google App Engine. http://cloud.google.com/AppEngine , 2014.

[39] Google Container Engine (GKE). https://cloud.google.com/container-engine/ , 2015.

[40] R. Grandl, G. Ananthanarayanan, S. Kandula, S. Rao, and A. Akella. Multi-resource packing for cluster schedulers. In Proc. ACM SIGCOMM, Aug. 2014.

[41] Apache Hadoop Project. http://hadoop.apache.org/ , 2009.

[42] Hadoop MapReduce Next Generation – Capacity Scheduler. http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html, 2013.

[43] J. Hamilton. On designing and deploying internet-scale services. In Proc. Large Installation System Administration Conf. (LISA), pages 231–242, Dallas, TX, USA, Nov. 2007.

[44] P. Helland. Cosmos: big data and big challenges. http://research.microsoft.com/en-us/events/fs2011/helland_cosmos_big_data_and_big_challenges.pdf, 2011.

[45] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. Joseph, R. Katz, S. Shenker, and I. Stoica. Mesos: a platform for fine-grained resource sharing in the data center. In Proc. USENIX Symp. on Networked Systems Design and Implementation (NSDI), 2011.

[46] IBM Platform Computing. http://www-03.ibm.com/systems/technicalcomputing/platformcomputing/products/clustermanager/index.html.

[47] S. Iqbal, R. Gupta, and Y.-C. Fang. Planning considerations for job scheduling in HPC clusters. Dell Power Solutions, Feb. 2005.

[48] M. Isaard. Autopilot: Automatic data center management. ACM SIGOPS Operating Systems Review, 41(2), 2007.

[49] M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg. Quincy: fair scheduling for distributed computing clusters. In Proc. ACM Symp. on Operating Systems Principles (SOSP), 2009.

[50] D. B. Jackson, Q. Snell, and M. J. Clement. Core algorithms of the Maui scheduler. In Proc. Int’l Workshop on Job Scheduling Strategies for Parallel Processing, pages 87–102. Springer-Verlag, 2001.

[51] M. Kambadur, T. Moseley, R. Hank, and M. A. Kim. Measuring interference between live datacenter applications. In Proc. Int’l Conf. for High Performance Computing, Networking, Storage and Analysis (SC), Salt Lake City, UT, Nov. 2012.

[52] S. Kavulya, J. Tan, R. Gandhi, and P. Narasimhan. An analysis of traces from a production MapReduce cluster. In Proc. IEEE/ACM Int’l Symp. on Cluster, Cloud and Grid Computing (CCGrid), pages 94–103, 2010.

[53] Kubernetes. http://kubernetes.io , Aug. 2014.

[54] Kernel Based Virtual Machine. http://www.linux-kvm.org.

[55] L. Lamport. The part-time parliament. ACM Trans. on Computer Systems, 16(2):133–169, May 1998.

[56] J. Leverich and C. Kozyrakis. Reconciling high server utilization and sub-millisecond quality-of-service. In Proc. European Conf. on Computer Systems (EuroSys), page 4, 2014.

[57] Z. Liu and S. Cho. Characterizing machines and workloads on a Google cluster. In Proc. Int’l Workshop on Scheduling and Resource Management for Parallel and Distributed Systems (SRMPDS), Pittsburgh, PA, USA, Sept. 2012.

[58] Google LMCTFY project (let me contain that for you). http://github.com/google/lmctfy , 2014.

[59] G. Malewicz, M. H. Austern, A. J. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski. Pregel: a system for large-scale graph processing. In Proc. ACM SIGMOD Conference, pages 135–146, Indianapolis, IA, USA, 2010.

[60] J. Mars, L. Tang, R. Hundt, K. Skadron, and M. L. Soffa. Bubble-Up: increasing utilization in modern warehouse scale computers via sensible co-locations. In Proc. Int’l Symp. on Microarchitecture (Micro), Porto Alegre, Brazil, 2011.

[61] S. Melnik, A. Gubarev, J. J. Long, G. Romer, S. Shivakumar, M. Tolton, and T. Vassilakis. Dremel: interactive analysis of web-scale datasets. In Proc. Int’l Conf. on Very Large Data Bases (VLDB), pages 330–339, Singapore, Sept. 2010.

[62] P. Menage. Linux control groups. http://www.kernel.org/doc/Documentation/cgroups/cgroups.txt, 2007–2014.

[63] A. K. Mishra, J. L. Hellerstein, W. Cirne, and C. R. Das. Towards characterizing cloud backend workloads: insights from Google compute clusters. ACM SIGMETRICS Performance Evaluation Review, 37:34–41, Mar. 2010.

[64] A. Narayanan. Tupperware: containerized deployment at Facebook. http://www.slideshare.net/dotCloud/tupperware-containerized-deployment-at-facebook, June 2014.

[65] K. Ousterhout, P. Wendell, M. Zaharia, and I. Stoica. Sparrow: distributed, low latency scheduling. In Proc. ACM Symp. on Operating Systems Principles (SOSP), pages 69–84, Farminton, PA, USA, 2013.

[66] D. C. Parkes, A. D. Procaccia, and N. Shah. Beyond Dominant Resource Fairness: extensions, limitations, and indivisibilities. In Proc. Electronic Commerce, pages 808–825, Valencia, Spain, 2012.

[67] Protocol buffers. https://developers.google.com/protocol-buffers/ and https://github.com/google/protobuf/, 2014.

[68] C. Reiss, A. Tumanov, G. Ganger, R. Katz, and M. Kozuch. Heterogeneity and dynamicity of clouds at scale: Google trace analysis. In Proc. ACM Symp. on Cloud Computing (SoCC), San Jose, CA, USA, Oct. 2012.

[69] M. Schwarzkopf, A. Konwinski, M. Abd-El-Malek, and J. Wilkes. Omega: flexible, scalable schedulers for large compute clusters. In Proc. European Conf. on Computer Systems (EuroSys), Prague, Czech Republic, 2013.

[70] B. Sharma, V. Chudnovsky, J. L. Hellerstein, R. Rifaat, and C. R. Das. Modeling and synthesizing task placement constraints in Google compute clusters. In Proc. ACM Symp. on Cloud Computing (SoCC), pages 3:1–3:14, Cascais, Portugal, Oct. 2011.

[71] E. Shmueli and D. G. Feitelson. On simulation and design of parallel-systems schedulers: are we doing the right thing? IEEE Trans. on Parallel and Distributed Systems, 20(7):983–996, July 2009.

[72] A. Singh, M. Korupolu, and D. Mohapatra. Server-storage virtualization: integration and load balancing in data centers. In Proc. Int’l Conf. for High Performance Computing, Networking, Storage and Analysis (SC), pages 53:1–53:12, Austin, TX, USA, 2008.

[73] Apache Spark Project. http://spark.apache.org/ , 2014.

[74] A. Tumanov, J. Cipar, M. A. Kozuch, and G. R. Ganger. Alsched: algebraic scheduling of mixed workloads in heterogeneous clouds. In Proc. ACM Symp. on Cloud Computing (SoCC), San Jose, CA, USA, Oct. 2012.

[75] P. Turner, B. Rao, and N. Rao. CPU bandwidth control for CFS. In Proc. Linux Symposium, pages 245–254, July 2010.

[76] V. K. Vavilapalli, A. C. Murthy, C. Douglas, S. Agarwal, M. Konar, R. Evans, T. Graves, J. Lowe, H. Shah, S. Seth, B. Saha, C. Curino, O. O’Malley, S. Radia, B. Reed, and E. Baldeschwieler. Apache Hadoop YARN: Yet Another Resource Negotiator. In Proc. ACM Symp. on Cloud Computing (SoCC), Santa Clara, CA, USA, 2013.

[77] VMware VCloud Suite. http://www.vmware.com/products/vcloud-suite/.

[78] A. Verma, M. Korupolu, and J. Wilkes. Evaluating job packing in warehouse-scale computing. In IEEE Cluster, pages 48–56, Madrid, Spain, Sept. 2014.

[79] W. Whitt. Open and closed models for networks of queues. AT&T Bell Labs Technical Journal, 63(9), Nov. 1984.

[80] J. Wilkes. More Google cluster data. http://googleresearch.blogspot.com/2011/11/more-google-cluster-data.html, Nov. 2011.

[81] Y. Zhai, X. Zhang, S. Eranian, L. Tang, and J. Mars. HaPPy: Hyperthread-aware power profiling dynamically. In Proc. USENIX Annual Technical Conf. (USENIX ATC), pages 211–217, Philadelphia, PA, USA, June 2014. USENIX Association.

[82] Q. Zhang, J. Hellerstein, and R. Boutaba. Characterizing task usage shapes in Google’s compute clusters. In Proc. Int’l Workshop on Large-Scale Distributed Systems and Middleware (LADIS), 2011.

[83] X. Zhang, E. Tune, R. Hagmann, R. Jnagal, V. Gokhale, and J. Wilkes. CPI2: CPU performance isolation for shared compute clusters. In Proc. European Conf. on Computer Systems (EuroSys), Prague, Czech Republic, 2013.

[84] Z. Zhang, C. Li, Y. Tao, R. Yang, H. Tang, and J. Xu. Fuxi: a fault-tolerant resource management and job scheduling system at internet scale. In Proc. Int’l Conf. on Very Large Data Bases (VLDB), pages 1393–1404. VLDB Endowment Inc., Sept. 2014.

Errata, 2015-04-23

A few omissions and ambiguities have come to our attention since the camera-ready copy was finalized.

User perspective

SREs do much more than system administration: they are the engineers responsible for Google's production services. They design and implement software, including automation systems, and manage applications and the underlying infrastructure services to ensure high reliability and performance at Google's scale.

Acknowledgements

We inadvertently omitted Brad Strand, Chris Colohan, Divyesh Shah, Eric Wilcox, and Pavanish Nirula.

References

[1] Michael Litzkow, Miron Livny, and Matt Mutka. "Condor - A Hunter of Idle Workstations". In Proc. Int'l Conf. on Distributed Computing Systems (ICDCS), pages 104-111, June 1988.

[2] Rajesh Raman, Miron Livny, and Marvin Solomon. "Matchmaking: Distributed Resource Management for High Throughput Computing". In Proc. Int'l Symp. on High Performance Distributed Computing (HPDC), Chicago, IL, USA, July 1998.
