Result of tex file for arch-design build test in openstack/openstack-manuals
%% Generated by Sphinx.
\def\sphinxdocclass{report}
\documentclass[a4paper,11pt,english]{sphinxmanual}
\ifdefined\pdfpxdimen
\let\sphinxpxdimen\pdfpxdimen\else\newdimen\sphinxpxdimen
\fi \sphinxpxdimen=49336sp\relax
\usepackage[margin=1in,marginparwidth=0.5in]{geometry}
\usepackage[utf8]{inputenc}
\ifdefined\DeclareUnicodeCharacter
\DeclareUnicodeCharacter{00A0}{\nobreakspace}
\fi
\usepackage{cmap}
\usepackage[T1]{fontenc}
\usepackage{amsmath,amssymb,amstext}
\usepackage{babel}
\usepackage{times}
\usepackage[Bjarne]{fncychap}
\usepackage{longtable}
\usepackage{sphinx}
\usepackage{multirow}
\usepackage{eqparbox}
% Include hyperref last.
\usepackage{hyperref}
% Fix anchor placement for figures with captions.
\usepackage{hypcap}% it must be loaded after hyperref.
% Set up styles of URL: it should be placed after hyperref.
\urlstyle{same}
\addto\captionsenglish{\renewcommand{\figurename}{Fig.}}
\addto\captionsenglish{\renewcommand{\tablename}{Table}}
\addto\captionsenglish{\renewcommand{\literalblockname}{Listing}}
\addto\extrasenglish{\def\pageautorefname{page}}
\setcounter{tocdepth}{1}
\makeatletter
% Defines title page
\renewcommand{\maketitle}{
\begin{titlepage}
\begin{flushleft}
% TODO: prepare a logo file dedicated to PDF generation instead of using one from the www directory
\includegraphics[width=3.5cm, height=0.8cm]{../../../../www/static/common/images/openstack-logo-full.png}
\end{flushleft}
\vskip 6em%
\begin{center}
% Document title
{\Huge \textbf \@title}
\vskip 2em%
% Release name (from 'release' variable in conf.py)
{\LARGE {\textbf \py@release} \newline}
\end{center}
\vskip 6em%
\begin{flushright}
% Author
{\LARGE \@author}
\vskip 20em%
% Creation date
{\Large \@date}
\end{flushright}
\end{titlepage}
}
\renewcommand{\releasename}{RELEASE_NAME_BY_RENEWCOMMAND}
\title{Architecture Design Guide}
\date{Feb 05, 2017}
\release{0.9}
\author{OpenStack contributors}
\newcommand{\sphinxlogo}{}
\renewcommand{\releasename}{Release}
\makeindex
\begin{document}
\maketitle
\sphinxtableofcontents
\phantomsection\label{\detokenize{index::doc}}
\chapter{Abstract}
\label{\detokenize{index:abstract}}\label{\detokenize{index:openstack-architecture-design-guide}}
To reap the benefits of OpenStack, you should plan, design,
and architect your cloud properly, taking users' needs into
account and understanding the use cases.
\chapter{Contents}
\label{\detokenize{index:contents}}
\section{Conventions}
\label{\detokenize{common/conventions:conventions}}\label{\detokenize{common/conventions::doc}}
The OpenStack documentation uses several typesetting conventions.
\subsection{Notices}
\label{\detokenize{common/conventions:notices}}
Notices take these forms:
\begin{sphinxadmonition}{note}{Note:}
A comment with additional information that explains a part of the
text.
\end{sphinxadmonition}
\begin{sphinxadmonition}{important}{Important:}
Something you must be aware of before proceeding.
\end{sphinxadmonition}
\begin{sphinxadmonition}{tip}{Tip:}
An extra but helpful piece of practical advice.
\end{sphinxadmonition}
\begin{sphinxadmonition}{caution}{Caution:}
Helpful information that prevents the user from making mistakes.
\end{sphinxadmonition}
\begin{sphinxadmonition}{warning}{Warning:}
Critical information about the risk of data loss or security
issues.
\end{sphinxadmonition}
\subsection{Command prompts}
\label{\detokenize{common/conventions:command-prompts}}
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{g+gp}{\PYGZdl{}} \PYG{n+nb}{command}
\end{sphinxVerbatim}
Any user, including the \sphinxcode{root} user, can run commands that are
prefixed with the \sphinxcode{\$} prompt.
\begin{sphinxVerbatim}[commandchars=\\\{\}]
\PYG{g+gp}{\PYGZsh{}} \PYG{n+nb}{command}
\end{sphinxVerbatim}
The \sphinxcode{root} user must run commands that are prefixed with the \sphinxcode{\#}
prompt. You can also prefix these commands with the \sphinxstyleliteralstrong{sudo}
command, if available, to run them.
\section{Introduction}
\label{\detokenize{introduction:introduction}}\label{\detokenize{introduction::doc}}
\subsection{Intended audience}
\label{\detokenize{introduction-intended-audience::doc}}\label{\detokenize{introduction-intended-audience:intended-audience}}
This book has been written for architects and designers of OpenStack
clouds. For a guide on deploying and operating OpenStack, please refer
to the \href{https://docs.openstack.org/ops-guide/}{OpenStack Operations Guide}.
Before reading this book, we recommend prior knowledge of cloud
architecture and principles, experience with enterprise system design,
Linux and virtualization, and a basic understanding of
networking principles and protocols.
\subsection{How this book is organized}
\label{\detokenize{introduction-how-this-book-is-organized:how-this-book-is-organized}}\label{\detokenize{introduction-how-this-book-is-organized::doc}}
This book examines some of the most common uses for OpenStack clouds,
and explains the considerations for each use case. Cloud architects may
use this book as a comprehensive guide by reading all of the use cases,
but it is also possible to review only the chapters which pertain to a
specific use case. The use cases covered in this guide include:
\begin{itemize}
\item {}
{\hyperref[\detokenize{generalpurpose::doc}]{\sphinxcrossref{\DUrole{doc}{General purpose}}}}: Uses common components that
address 80\% of typical use cases.
\item {}
{\hyperref[\detokenize{compute-focus::doc}]{\sphinxcrossref{\DUrole{doc}{Compute focused}}}}: For compute-intensive workloads
such as high performance computing (HPC).
\item {}
{\hyperref[\detokenize{storage-focus::doc}]{\sphinxcrossref{\DUrole{doc}{Storage focused}}}}: For storage-intensive workloads
such as data analytics with parallel file systems.
\item {}
{\hyperref[\detokenize{network-focus::doc}]{\sphinxcrossref{\DUrole{doc}{Network focused}}}}: For high performance and
reliable networking, such as a {\hyperref[\detokenize{common/glossary:term-content-delivery-network-cdn}]{\sphinxtermref{\DUrole{xref,std,std-term}{content delivery network (CDN)}}}}.
\item {}
{\hyperref[\detokenize{multi-site::doc}]{\sphinxcrossref{\DUrole{doc}{Multi-site}}}}: For applications that require multiple
site deployments for geographical, reliability, or data locality
reasons.
\item {}
{\hyperref[\detokenize{hybrid::doc}]{\sphinxcrossref{\DUrole{doc}{Hybrid cloud}}}}: Uses multiple disparate clouds connected
either for failover, hybrid cloud bursting, or availability.
\item {}
{\hyperref[\detokenize{massively-scalable::doc}]{\sphinxcrossref{\DUrole{doc}{Massively scalable}}}}: For cloud service
providers or other large installations.
\item {}
{\hyperref[\detokenize{specialized::doc}]{\sphinxcrossref{\DUrole{doc}{Specialized cases}}}}: Architectures that have not
previously been covered in the defined use cases.
\end{itemize}
\subsection{Why and how we wrote this book}
\label{\detokenize{introduction-how-this-book-was-written:why-and-how-we-wrote-this-book}}\label{\detokenize{introduction-how-this-book-was-written::doc}}
We wrote this book to guide you through designing an OpenStack cloud
architecture. This guide identifies design considerations for common
cloud use cases and provides examples.
The Architecture Design Guide was written in a book sprint format, which
is a facilitated, rapid development production method for books. The
Book Sprint was facilitated by Faith Bosworth and Adam Hyde of Book
Sprints. For more information, see the Book Sprints website
(www.booksprints.net).
This book was written in five days during July 2014 while exhausting the
M\&M, Mountain Dew, and healthy options supply, complete with juggling
entertainment during lunches at VMware's headquarters in Palo Alto.
We would like to thank VMware for their generous hospitality, as well as
our employers, Cisco, Cloudscaling, Comcast, EMC, Mirantis, Rackspace,
Red Hat, Verizon, and VMware, for enabling us to contribute our time. We
would especially like to thank Anne Gentle and Kenneth Hui for all of
their shepherding and organization in making this happen.
The author team includes:
\begin{itemize}
\item {}
Kenneth Hui (EMC) \href{http://twitter.com/hui\_kenneth}{@hui\_kenneth}
\item {}
Alexandra Settle (Rackspace)
\href{http://twitter.com/dewsday}{@dewsday}
\item {}
Anthony Veiga (Comcast) \href{http://twitter.com/daaelar}{@daaelar}
\item {}
Beth Cohen (Verizon) \href{http://twitter.com/bfcohen}{@bfcohen}
\item {}
Kevin Jackson (Rackspace)
\href{http://twitter.com/itarchitectkev}{@itarchitectkev}
\item {}
Maish Saidel-Keesing (Cisco)
\href{http://twitter.com/maishsk}{@maishsk}
\item {}
Nick Chase (Mirantis) \href{http://twitter.com/NickChase}{@NickChase}
\item {}
Scott Lowe (VMware) \href{http://twitter.com/scott\_lowe}{@scott\_lowe}
\item {}
Sean Collins (Comcast) \href{http://twitter.com/sc68cal}{@sc68cal}
\item {}
Sean Winn (Cloudscaling)
\href{http://twitter.com/seanmwinn}{@seanmwinn}
\item {}
Sebastian Gutierrez (Red Hat) \href{http://twitter.com/gutseb}{@gutseb}
\item {}
Stephen Gordon (Red Hat) \href{http://twitter.com/xsgordon}{@xsgordon}
\item {}
Vinny Valdez (Red Hat)
\href{http://twitter.com/VinnyValdez}{@VinnyValdez}
\end{itemize}
\subsection{Methodology}
\label{\detokenize{introduction-methodology:methodology}}\label{\detokenize{introduction-methodology::doc}}
The best way to design your cloud architecture is through creating and
testing use cases. Planning for applications that support thousands of
sessions per second, variable workloads, and complex, changing data
requires you to identify the key meters. Identifying these key meters,
such as the number of concurrent transactions per second and the size
of the database, makes it possible to build a method for testing your
assumptions.
Use a functional user scenario to develop test cases, and to measure
overall project trajectory.
\begin{sphinxadmonition}{note}{Note:}
If you do not use an application that can develop user
requirements automatically, you need to create requirements to build
test harnesses and develop usable meters.
\end{sphinxadmonition}
Establishing these meters allows you to respond to changes quickly
without having to set exact requirements in advance. This creates ways
to configure the system, rather than redesigning it every time there is
a requirements change.
\begin{sphinxadmonition}{important}{Important:}
It is important to limit scope creep. Ensure you address tool
limitations, but do not recreate the entire suite of tools. Work
with technical product owners to establish the critical features that
are needed for a successful cloud deployment.
\end{sphinxadmonition}
\subsubsection{Application cloud readiness}
\label{\detokenize{introduction-methodology:application-cloud-readiness}}
The cloud does more than host virtual machines and their applications.
This \sphinxstyleemphasis{lift and shift} approach works in certain situations, but there is
a fundamental difference between clouds and traditional bare-metal-based
environments, or even traditional virtualized environments.
In traditional environments, with traditional enterprise applications,
the applications and the servers that run them are \sphinxstyleemphasis{pets}. They are
lovingly crafted and cared for, the servers have names like Gandalf or
Tardis, and if they get sick someone nurses them back to health. All of
this is designed so that the application does not experience an outage.
In cloud environments, servers are more like cattle. There are thousands
of them, they get names like NY-1138-Q, and if they get sick, they get
put down and a sysadmin installs another one. Traditional applications
that are unprepared for this kind of environment may suffer outages,
loss of data, or complete failure.
There are other reasons to design applications with the cloud in mind.
Some are defensive, such as the fact that because applications cannot be
certain of exactly where or on what hardware they will be launched, they
need to be flexible, or at least adaptable. Others are proactive. For
example, one of the advantages of using the cloud is scalability.
Applications need to be designed in such a way that they can take
advantage of these and other opportunities.
\subsubsection{Determining whether an application is cloud-ready}
\label{\detokenize{introduction-methodology:determining-whether-an-application-is-cloud-ready}}
There are several factors to take into consideration when looking at
whether an application is a good fit for the cloud.
\begin{description}
\item[{Structure}] \leavevmode
A large, monolithic, single-tiered, legacy application typically is
not a good fit for the cloud. Efficiencies are gained when load can
be spread over several instances, so that a failure in one part of
the system can be mitigated without affecting other parts of the
system, or so that scaling can take place where the app needs it.
\item[{Dependencies}] \leavevmode
Applications that depend on specific hardware, such as a particular
chip set or an external device such as a fingerprint reader, might
not be a good fit for the cloud, unless those dependencies are
specifically addressed. Similarly, if an application depends on an
operating system or set of libraries that cannot be used in the
cloud, or cannot be virtualized, that is a problem.
\item[{Connectivity}] \leavevmode
Self-contained applications, or those that depend on resources that
are not reachable by the cloud in question, will not run. In some
situations, you can work around these issues with custom network
setup, but how well this works depends on the chosen cloud
environment.
\item[{Durability and resilience}] \leavevmode
Despite the existence of SLAs, things break: servers go down,
network connections are disrupted, or too many projects on a server
make a server unusable. An application must be sturdy enough to
contend with these issues.
\end{description}
\subsubsection{Designing for the cloud}
\label{\detokenize{introduction-methodology:designing-for-the-cloud}}
Here are some guidelines to keep in mind when designing an application
for the cloud:
\begin{itemize}
\item {}
Be a pessimist: Assume everything fails and design backwards.
\item {}
Put your eggs in multiple baskets: Leverage multiple providers,
geographic regions, and availability zones to accommodate for local
availability issues. Design for portability.
\item {}
Think efficiency: Inefficient designs will not scale. Efficient
designs become cheaper as they scale. Kill off unneeded components or
capacity.
\item {}
Be paranoid: Design for defense in depth and zero tolerance by
building in security at every level and between every component.
Trust no one.
\item {}
But not too paranoid: Not every application needs the platinum
solution. Architect for different SLAs, service tiers, and security
levels.
\item {}
Manage the data: Data is usually the most inflexible and complex area
of a cloud and cloud integration architecture. Do not shortchange
the effort in analyzing and addressing data needs.
\item {}
Hands off: Leverage automation to increase consistency and quality
and reduce response times.
\item {}
Divide and conquer: Pursue partitioning and parallel layering
wherever possible. Make components as small and portable as possible.
Use load balancing between layers.
\item {}
Think elasticity: Increasing resources should result in a
proportional increase in performance and scalability. Decreasing
resources should have the opposite effect.
\item {}
Be dynamic: Enable dynamic configuration changes such as auto
scaling, failure recovery, and resource discovery to adapt to changing
environments, faults, and workload volumes.
\item {}
Stay close: Reduce latency by moving highly interactive components
and data near each other.
\item {}
Keep it loose: Loose coupling, service interfaces, separation of
concerns, abstraction, and well-defined APIs deliver flexibility.
\item {}
Be cost aware: Autoscaling, data transmission, virtual software
licenses, reserved instances, and similar costs can rapidly increase
monthly usage charges. Monitor usage closely.
\end{itemize}
{\hyperref[\detokenize{common/glossary:term-openstack}]{\sphinxtermref{\DUrole{xref,std,std-term}{OpenStack}}}} is a fully-featured, self-service cloud. This book takes you
through some of the considerations you have to make when designing your
cloud.
\section{Security and legal requirements}
\label{\detokenize{legal-security-requirements:security-and-legal-requirements}}\label{\detokenize{legal-security-requirements::doc}}
This chapter discusses the legal and security requirements you
need to consider for the different OpenStack scenarios.
\subsection{Legal requirements}
\label{\detokenize{legal-security-requirements:legal-requirements}}
Many jurisdictions have legislative and regulatory
requirements governing the storage and management of data in
cloud environments. Common areas of regulation include:
\begin{itemize}
\item {}
Data retention policies ensuring storage of persistent data
and records management to meet data archival requirements.
\item {}
Data ownership policies governing the possession and
responsibility for data.
\item {}
Data sovereignty policies governing the storage of data in
foreign countries or otherwise separate jurisdictions.
\item {}
Data compliance policies governing certain types of
information that must reside in certain locations due to
regulatory requirements and, more importantly, must not
reside in other locations for the same reason.
\end{itemize}
Examples of such legal frameworks include the
\href{http://ec.europa.eu/justice/data-protection/}{data protection framework}
of the European Union and the requirements of the
\href{http://www.finra.org/Industry/Regulation/FINRARules/}{Financial Industry Regulatory Authority}
in the United States.
Consult a local regulatory body for more information.
\subsection{Security}
\label{\detokenize{legal-security-requirements:security}}\label{\detokenize{legal-security-requirements:id1}}
When deploying OpenStack in an enterprise as a private cloud, the
cloud architecture should not make assumptions about safety and
protection, even when a firewall is active and employees are bound
by security agreements.
In addition to considering the users, operators, or administrators
who will use the environment, consider also negative or hostile users who
would attack or compromise the security of your deployment regardless
of firewalls or security agreements.
Attack vectors increase further in a public facing OpenStack deployment.
For example, the API endpoints and the software behind them become
vulnerable to hostile entities attempting to gain unauthorized access
or prevent access to services.
This can result in loss of reputation, and you must protect against
it through auditing and appropriate filtering.
It is important to understand that user authentication requests
contain sensitive information such as user names, passwords, and
authentication tokens. For this reason, place the API services
behind hardware that performs SSL termination.
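Where dedicated hardware is not available, the same result can be
sketched in software; for example, a fragment of an HAProxy
configuration (the address and certificate path are hypothetical
placeholders):
\begin{sphinxVerbatim}
# /etc/haproxy/haproxy.cfg fragment: terminate TLS in front of the API
frontend keystone-tls
    bind 203.0.113.10:5000 ssl crt /etc/ssl/private/api.pem
    default_backend keystone-api
\end{sphinxVerbatim}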
\begin{sphinxadmonition}{warning}{Warning:}
Be mindful of consistency when utilizing third party
clouds to explore authentication options.
\end{sphinxadmonition}
\subsection{Security domains}
\label{\detokenize{legal-security-requirements:security-domains}}
A security domain comprises users, applications, servers or networks
that share common trust requirements and expectations within a system.
Typically, security domains have the same authentication and
authorization requirements and users.
You can map security domains individually to the installation,
or combine them. For example, some deployment topologies combine both
guest and data domains onto one physical network.
In other cases these networks are physically separate.
Map out the security domains against the needs of your specific
OpenStack topology.
The domains and their trust requirements depend on whether the cloud
instance is public, private, or hybrid.
\subsubsection{Public security domains}
\label{\detokenize{legal-security-requirements:public-security-domains}}
The public security domain is an untrusted area of the cloud
infrastructure. It can refer to the internet as a whole or simply
to networks over which the user has no authority.
Always consider this domain untrusted. For example,
in a hybrid cloud deployment, any information traversing between and
beyond the clouds is in the public domain and untrustworthy.
\subsubsection{Guest security domains}
\label{\detokenize{legal-security-requirements:guest-security-domains}}
Typically used for compute instance-to-instance traffic, the
guest security domain handles compute data generated by
instances on the cloud but not services that support the
operation of the cloud, such as API calls. Public cloud
providers and private cloud providers who do not have
stringent controls on instance use or who allow unrestricted
internet access to instances should consider this domain to be
untrusted. Private cloud providers may want to consider this
network as internal and therefore trusted only if they have
controls in place to assert that they trust instances and all
their projects.
\subsubsection{Management security domains}
\label{\detokenize{legal-security-requirements:management-security-domains}}
The management security domain is where services interact.
The networks in this domain transport confidential data such as
configuration parameters, user names, and passwords. Trust this
domain only in deployments where it is behind an organization's firewall.
\subsubsection{Data security domains}
\label{\detokenize{legal-security-requirements:data-security-domains}}
The data security domain is concerned primarily with
information pertaining to the storage services within OpenStack.
The data that crosses this network has integrity and
confidentiality requirements. Depending on the type of deployment there
may also be availability requirements. The trust level of this network
is heavily dependent on deployment decisions and does not have a default
level of trust.
\subsection{Hypervisor security}
\label{\detokenize{legal-security-requirements:hypervisor-security}}
The hypervisor also requires a security assessment. In a
public cloud, organizations typically do not have control
over the choice of hypervisor. Properly securing your
hypervisor is important. An attack made upon an
unsecured hypervisor is called a \sphinxstylestrong{hypervisor breakout}.
Hypervisor breakout describes the event of a
compromised or malicious instance breaking out of the resource
controls of the hypervisor and gaining access to the bare
metal operating system and hardware resources.
If the security of instances is not important, hypervisor breakout
is not an issue. However, enterprises that need to avoid this
vulnerability can do so only by avoiding situations in which their
instances run on a public cloud. That does not mean that there is a
need to own all of the infrastructure on which an OpenStack
installation operates; it suggests avoiding situations in which
hardware is shared with others.
\subsection{Baremetal security}
\label{\detokenize{legal-security-requirements:baremetal-security}}
There are other services worth considering that provide a
bare metal instance instead of a cloud. In other cases, it is
possible to replicate a second private cloud by integrating
with a private Cloud-as-a-Service deployment. The
organization does not buy the hardware, but also does not share it
with other projects. It is also possible to use a provider that
hosts a bare-metal public cloud instance for which the
hardware is dedicated only to one customer, or a provider that
offers private Cloud-as-a-Service.
\begin{sphinxadmonition}{important}{Important:}
Each cloud implements services differently.
What keeps data secure in one cloud may not do the same in another.
Be sure to know the security requirements of every cloud that
handles the organization's data or workloads.
\end{sphinxadmonition}
More information on OpenStack security can be found in the
\href{https://docs.openstack.org/security-guide}{OpenStack Security Guide}.
\subsection{Networking security}
\label{\detokenize{legal-security-requirements:networking-security}}
Consider security implications and requirements before designing the
physical and logical network topologies. Make sure that the networks are
properly segregated and that traffic flows are going to the correct
destinations without crossing through undesirable locations.
Consider the following example factors:
\begin{itemize}
\item {}
Firewalls
\item {}
Overlay interconnects for joining separated project networks
\item {}
Routing through or avoiding specific networks
\end{itemize}
How networks attach to hypervisors can expose security
vulnerabilities. To mitigate the risk of hypervisor breakouts being
exploited, separate networks from other systems and schedule instances
for the network onto dedicated compute nodes. This prevents attackers
from having access to the networks from a compromised instance.
\subsection{Multi-site security}
\label{\detokenize{legal-security-requirements:multi-site-security}}
Securing a multi-site OpenStack installation brings
extra challenges. Projects may expect a project-created network
to be secure. In a multi-site installation the use of a
non-private connection between sites may be required. This may
mean that traffic would be visible to third parties and, in
cases where an application requires security, this issue
requires mitigation. In these instances, install a VPN or
encrypted connection between sites to conceal sensitive traffic.
Another security consideration with regard to multi-site
deployments is Identity. Centralize authentication within a
multi-site deployment. Centralization provides a
single authentication point for users across the deployment,
as well as a single point of administration for traditional
create, read, update, and delete operations. Centralized
authentication is also useful for auditing purposes because
all authentication tokens originate from the same source.
Just as projects in a single-site deployment need isolation
from each other, so do projects in multi-site installations.
The extra challenges in multi-site designs revolve around
ensuring that project networks function across regions.
OpenStack Networking (neutron) does not presently support
a mechanism to provide this functionality; therefore, an
external system may be necessary to manage these mappings.
Project networks may contain sensitive information requiring
that this mapping be accurate and consistent to ensure that a
project in one site does not connect to a different project in
another site.
\subsection{OpenStack components}
\label{\detokenize{legal-security-requirements:openstack-components}}
Most OpenStack installations require a bare minimum set of
components to function. These include OpenStack Identity
(keystone) for authentication, OpenStack Compute
(nova) for compute, OpenStack Image service (glance) for image
storage, OpenStack Networking (neutron) for networking, and
potentially an object store in the form of OpenStack Object
Storage (swift). Bringing multi-site into play also demands extra
components in order to coordinate between regions. A centralized
Identity service is necessary to provide the single authentication
point. A centralized dashboard is also recommended to provide a
single login point and a mapped experience to the API and CLI
options available. If needed, use a centralized Object Storage service,
installing the required swift proxy service alongside the Object
Storage service.
It may also be helpful to install a few extra options in
order to facilitate certain use cases. For instance,
installing the DNS service may assist in automatically generating
DNS domains for each region with an automatically-populated
zone full of resource records for each instance. This
facilitates using DNS as a mechanism for determining which
region would be selected for certain applications.
Another useful tool for managing a multi-site installation
is Orchestration (heat). The Orchestration service allows
the use of templates to define a set of instances to be launched
together or for scaling existing sets.
It can set up matching or differentiated groupings based on regions.
For instance, if an application requires an equally balanced
number of nodes across sites, the same heat template can be used
to cover each site with small alterations to only the region name.
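As a minimal sketch of this approach, consider the following
Orchestration (HOT) template; the image, flavor, and name values are
hypothetical placeholders and must exist in each target region:
\begin{sphinxVerbatim}
# region-app.yaml -- a minimal per-region sketch (hypothetical names)
# Launch in each region, e.g.:
#   $ openstack --os-region-name RegionTwo stack create -t region-app.yaml app
heat_template_version: 2015-04-30
description: One application node, reusable across regions
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      name: app-RegionOne-01   # alter the region-specific part per site
      image: ubuntu-16.04      # must exist in the target region
      flavor: m1.small
\end{sphinxVerbatim}
Only the region-specific values change between sites, so the same
template file can be kept under version control and reused everywhere.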
\section{General purpose}
\label{\detokenize{generalpurpose:general-purpose}}\label{\detokenize{generalpurpose::doc}}
\subsection{User requirements}
\label{\detokenize{generalpurpose-user-requirements:user-requirements}}\label{\detokenize{generalpurpose-user-requirements::doc}}
When building a general purpose cloud, you should follow the
{\hyperref[\detokenize{common/glossary:term-infrastructure-as-a-service-iaas}]{\sphinxtermref{\DUrole{xref,std,std-term}{Infrastructure-as-a-Service (IaaS)}}}} model, a platform best suited
for use cases with simple requirements. General purpose cloud user
requirements are not complex. However, it is important to capture them
even if the project has minimal business and technical requirements, such
as a proof of concept (PoC) or a small lab platform.
\begin{sphinxadmonition}{note}{Note:}
The following user considerations are written from the perspective
of the cloud builder, not from the perspective of the end user.
\end{sphinxadmonition}
\subsubsection{Business requirements}
\label{\detokenize{generalpurpose-user-requirements:business-requirements}}\begin{description}
\item[{Cost}] \leavevmode
Financial factors are a primary concern for any organization. Cost
is an important criterion as general purpose clouds are considered
the baseline from which all other cloud architecture environments
derive. General purpose clouds do not always provide the most
cost-effective environment for specialized applications or
situations. Unless razor-thin margins and costs have been mandated
as a critical factor, cost should not be the sole consideration when
choosing or designing a general purpose architecture.
\item[{Time to market}] \leavevmode
The ability to deliver services or products within a flexible time
frame is a common business factor when building a general purpose
cloud. Delivering a product in six months instead of two years is a
driving force behind the decision to build general purpose clouds.
General purpose clouds allow users to self-provision and gain access
to compute, network, and storage resources on-demand, thus decreasing
time to market.
\item[{Revenue opportunity}] \leavevmode
Revenue opportunities for a cloud will vary greatly based on the
intended use case of that particular cloud. Some general purpose
clouds are built for commercial customer facing products, but there
are alternatives that might make the general purpose cloud the right
choice.
\end{description}
\subsubsection{Technical requirements}
\label{\detokenize{generalpurpose-user-requirements:technical-requirements}}
Technical cloud architecture requirements should be weighed against the
business requirements.
\begin{description}
\item[{Performance}] \leavevmode
As a baseline product, general purpose clouds do not provide
optimized performance for any particular function. While a general
purpose cloud should provide enough performance to satisfy average
user considerations, performance is not a general purpose cloud
customer driver.
\item[{No predefined usage model}] \leavevmode
The lack of a pre-defined usage model enables the user to run a wide
variety of applications without having to know the application
requirements in advance. This provides a degree of independence and
flexibility that no other cloud scenarios are able to provide.
\item[{On-demand and self-service application}] \leavevmode
By definition, a cloud provides end users with the ability to
self-provision computing power, storage, networks, and software in a
simple and flexible way. The user must be able to scale their
resources up to a substantial level without disrupting the
underlying host operations. One of the benefits of using a general
purpose cloud architecture is the ability to start with limited
resources and increase them over time as the user demand grows.
\item[{Public cloud}] \leavevmode
For a company interested in building a commercial public cloud
offering based on OpenStack, the general purpose architecture model
might be the best choice. Designers are not always going to know the
purposes or workloads for which the end users will use the cloud.
\item[{Internal consumption (private) cloud}] \leavevmode
Organizations need to determine if it is logical to create their own
clouds internally. Using a private cloud, organizations are able to
maintain complete control over architectural and cloud components.
\begin{sphinxadmonition}{note}{Note:}
Users may want to combine the use of the internal cloud with access
to an external cloud. If that case is likely, it might be worth
exploring the possibility of taking a multi-cloud approach with
regard to at least some of the architectural elements.
\end{sphinxadmonition}
Designs that incorporate the use of multiple clouds, such as a
private cloud and a public cloud offering, are described in the
``Multi-Cloud'' scenario, see {\hyperref[\detokenize{multi-site::doc}]{\sphinxcrossref{\DUrole{doc}{Multi-site}}}}.
\item[{Security}] \leavevmode
Security should be implemented according to asset, threat, and
vulnerability risk assessment matrices. For cloud domains that
require increased computer security, network security, or
information security, a general purpose cloud is not considered an
appropriate choice.
\end{description}
\subsection{Technical considerations}
\label{\detokenize{generalpurpose-technical-considerations::doc}}\label{\detokenize{generalpurpose-technical-considerations:technical-considerations}}
General purpose clouds are expected to include these base services:
\begin{itemize}
\item {}
Compute
\item {}
Network
\item {}
Storage
\end{itemize}
Each of these services has different resource requirements. As a
result, you must make design decisions relating directly to the service,
as well as provide a balanced infrastructure for all services.
Take into consideration the unique aspects of each service, as
individual characteristics and service scale can impact the hardware
selection process. Hardware designs should be generated for each of the
services.
Hardware decisions are also made in relation to network architecture and
facilities planning. These factors play heavily into the overall
architecture of an OpenStack cloud.
\subsubsection{Compute resource design}
\label{\detokenize{generalpurpose-technical-considerations:compute-resource-design}}
When designing compute resource pools, a number of factors can impact
your design decisions. Factors such as number of processors, amount of
memory, and the quantity of storage required for each hypervisor must be
taken into account.
You will also need to decide whether to provide compute resources in a
single pool or in multiple pools. In most cases, multiple pools of
resources can be allocated and addressed on demand. A compute design
that allocates multiple pools of resources makes best use of application
resources, and is commonly referred to as bin packing.
In a bin packing design, each independent resource pool provides service
for specific flavors. This helps to ensure that, as instances are
scheduled onto compute hypervisors, each independent node's resources
will be allocated in a way that makes the most efficient use of the
available hardware. Bin packing also requires a common hardware design,
with all hardware nodes within a compute resource pool sharing a common
processor, memory, and storage layout. This makes it easier to deploy,
support, and maintain nodes throughout their lifecycle.
An overcommit ratio is the ratio of available virtual resources to
available physical resources. This ratio is configurable for CPU and
memory. The default CPU overcommit ratio is 16:1, and the default memory
overcommit ratio is 1.5:1. Determining the tuning of the overcommit
ratios during the design phase is important as it has a direct impact on
the hardware layout of your compute nodes.
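As a minimal sketch, these ratios are set in \sphinxcode{nova.conf}
(assuming the option names used by OpenStack releases of this era;
the values shown are the defaults cited above):
\begin{sphinxVerbatim}
# /etc/nova/nova.conf -- overcommit ratios on a compute node
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
\end{sphinxVerbatim}
Lowering either ratio increases the per-instance share of physical
resources at the cost of fewer instances per node.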
When selecting a processor, compare features and performance
characteristics. Some processors include features specific to
virtualized compute hosts, such as hardware-assisted virtualization, and
technology related to memory paging (also known as EPT shadowing). These
types of features can have a significant impact on the performance of
your virtual machine.
You will also need to consider the compute requirements of
non-hypervisor nodes (sometimes referred to as resource nodes). This
includes controller, object storage, and block storage nodes, and
networking services.
The number of processor cores and threads impacts the number of worker
threads which can be run on a resource node. Design decisions must
relate directly to the service being run on it, as well as provide a
balanced infrastructure for all services.
Workload can be unpredictable in a general purpose cloud, so consider
including the ability to add additional compute resource pools on
demand. In some cases, however, the demand for certain instance types or
flavors may not justify individual hardware design. In either case,
start by allocating hardware designs that are capable of servicing the
most common instance requests. If you want to add additional hardware to
the overall architecture, this can be done later.
\subsubsection{Designing network resources}
\label{\detokenize{generalpurpose-technical-considerations:designing-network-resources}}
OpenStack clouds generally have multiple network segments, with each
segment providing access to particular resources. The network services
themselves also require network communication paths which should be
separated from the other networks. When designing network services for a
general purpose cloud, plan for either a physical or logical separation
of network segments used by operators and projects. You can also create
an additional network segment for access to internal services such as
the message bus and database used by various services. Segregating these
services onto separate networks helps to protect sensitive data and
protects against unauthorized access to services.
Choose a networking service based on the requirements of your instances.
The architecture and design of your cloud will impact whether you choose
OpenStack Networking (neutron), or legacy networking (nova-network).
\begin{description}
\item[{Legacy networking (nova-network)}] \leavevmode
The legacy networking (nova-network) service is primarily a layer-2
networking service that functions in two modes, which use VLANs in
different ways. In a flat network mode, all network hardware nodes
and devices throughout the cloud are connected to a single layer-2
network segment that provides access to application data.
When the network devices in the cloud support segmentation using
VLANs, legacy networking can operate in the second mode. In this
design model, each project within the cloud is assigned a network
subnet which is mapped to a VLAN on the physical network. It is
especially important to remember that a maximum of 4096 VLANs can
be used within a spanning tree domain. This places a hard
limit on the amount of growth possible within the data center. When
designing a general purpose cloud intended to support multiple
projects, we recommend the use of legacy networking with VLANs, and
not in flat network mode.
\end{description}
Another consideration regarding network is the fact that legacy
networking is entirely managed by the cloud operator; projects do not
have control over network resources. If projects require the ability to
manage and create network resources such as network segments and
subnets, it will be necessary to install the OpenStack Networking
service to provide network access to instances.
\begin{description}
\item[{Networking (neutron)}] \leavevmode
OpenStack Networking (neutron) is a first class networking service
that gives full control over creation of virtual network resources
to projects. This is often accomplished in the form of tunneling
protocols which will establish encapsulated communication paths over
existing network infrastructure in order to segment project traffic.
These methods vary depending on the specific implementation, but
some of the more common methods include tunneling over GRE,
encapsulating with VXLAN, and VLAN tags; a minimal configuration
sketch follows.
\end{description}
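As a minimal sketch of such a configuration, assuming the ML2 plug-in
and the option names current at the time of writing:
\begin{sphinxVerbatim}
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
# Pool of VXLAN network identifiers handed out to project networks
vni_ranges = 1:1000
\end{sphinxVerbatim}
Here each project network is encapsulated in its own VXLAN segment
(VNI), so project traffic stays segmented while crossing shared
infrastructure.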
We recommend you design at least three network segments:
\begin{itemize}
\item {}
The first segment is a public network, used for access to REST APIs
by projects and operators. The controller nodes and swift proxies are
the only devices connecting to this network segment. In some cases,
this network might also be serviced by hardware load balancers and
other network devices.
\item {}
The second segment is used by administrators to manage hardware
resources. Configuration management tools also use this for deploying
software and services onto new hardware. In some cases, this network
segment might also be used for internal services, including the
message bus and database services. This network needs to communicate
with every hardware node. Due to the highly sensitive nature of this
network segment, you also need to secure this network from
unauthorized access.
\item {}
The third network segment is used by applications and consumers to
access the physical network, and for users to access applications.
This network is segregated from the one used to access the cloud APIs
and is not capable of communicating directly with the hardware
resources in the cloud. Compute resource nodes and network gateway
services which allow application data to access the physical network
from outside of the cloud need to communicate on this network
segment.
\end{itemize}
\subsubsection{Designing Object Storage}
\label{\detokenize{generalpurpose-technical-considerations:designing-object-storage}}
When designing hardware resources for OpenStack Object Storage, the
primary goal is to maximize the amount of storage in each resource node
while also ensuring that the cost per terabyte is kept to a minimum.
This often involves utilizing servers which can hold a large number of
spinning disks. Whether choosing to use 2U server form factors with
directly attached storage or an external chassis that holds a larger
number of drives, the main goal is to maximize the storage available in
each node.
\begin{sphinxadmonition}{note}{Note:}
We do not recommend investing in enterprise class drives for an
OpenStack Object Storage cluster. The consistency and partition
tolerance characteristics of OpenStack Object Storage ensure that
data stays up to date and survives hardware faults without the use
of any specialized data replication devices.
\end{sphinxadmonition}
One of the benefits of OpenStack Object Storage is the ability to mix
and match drives by making use of weighting within the swift ring. When
designing your swift storage cluster, we recommend making use of the
most cost effective storage solution available at the time.
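For example, drives of different capacities can share one ring by
receiving proportional weights. A minimal sketch using the standard
\sphinxcode{swift-ring-builder} commands (the addresses, ports, and
device names here are placeholders):
\begin{sphinxVerbatim}
# Create an object ring: 2^10 partitions, 3 replicas, 1 hour min part time
$ swift-ring-builder object.builder create 10 3 1
# A larger drive at weight 100 and a drive half its size at weight 50
$ swift-ring-builder object.builder add r1z1-192.0.2.10:6000/sda 100
$ swift-ring-builder object.builder add r1z2-192.0.2.11:6000/sdb 50
$ swift-ring-builder object.builder rebalance
\end{sphinxVerbatim}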
To achieve durability and availability of data stored as objects, it is
important to design object storage resource pools to ensure they can
provide the suggested availability. Considering rack-level and
zone-level designs to accommodate the number of replicas configured to
be stored in the Object Storage service (the default number of replicas
is three) is important when designing beyond the hardware node level.
Each replica of data should exist in its own availability zone with its
own power, cooling, and network resources available to service that
specific zone.
Object storage nodes should be designed so that the number of requests
does not hinder the performance of the cluster. The object storage
service uses a chatty protocol; therefore, making use of multiple
processors that have higher core counts will ensure the IO requests do
not inundate the server.
\subsubsection{Designing Block Storage}
\label{\detokenize{generalpurpose-technical-considerations:designing-block-storage}}
When designing OpenStack Block Storage resource nodes, it is helpful to
understand the workloads and requirements that will drive the use of
block storage in the cloud. We recommend designing block storage pools
so that projects can choose appropriate storage solutions for their
applications. By creating multiple storage pools of different types, in
conjunction with configuring an advanced storage scheduler for the block
storage service, it is possible to provide projects with a large catalog
of storage services with a variety of performance levels and redundancy
options.
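A minimal multi-backend sketch illustrates the idea; the backend and
volume type names here are hypothetical:
\begin{sphinxVerbatim}
# /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = pool-fast,pool-bulk

[pool-fast]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = FAST

[pool-bulk]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = BULK

# Expose the pools to projects as volume types:
$ openstack volume type create fast
$ openstack volume type set fast --property volume_backend_name=FAST
\end{sphinxVerbatim}
Projects then request a pool simply by choosing a volume type when
creating a volume, and the scheduler places the volume on a matching
backend.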
Block storage also takes advantage of a number of enterprise storage
solutions. These are addressed via a plug-in driver developed by the
hardware vendor. A large number of enterprise storage plug-in drivers
ship out-of-the-box with OpenStack Block Storage (and many more are
available via third party channels). General purpose clouds are more
likely to use directly attached storage in the majority of block storage
nodes, making it necessary to provide additional levels of service to
projects, which can only be provided by enterprise class storage
solutions.
Redundancy and availability requirements impact the decision to use a
RAID controller card in block storage nodes. The input-output per second
(IOPS) demand of your application will influence whether or not you
should use a RAID controller, and which level of RAID is required.
Making use of higher performing RAID volumes is suggested when
considering performance. However, where redundancy of block storage
volumes is more important, we recommend making use of a redundant RAID
configuration such as RAID 5 or RAID 6. Some specialized features, such
as automated replication of block storage volumes, may require the use
of third-party plug-ins and enterprise block storage solutions in order
to provide the high demand on storage. Furthermore, where extreme
performance is a requirement, it may also be necessary to make use of
high-speed SSD drives or high-performing flash storage solutions.
\subsubsection{Software selection}
\label{\detokenize{generalpurpose-technical-considerations:software-selection}}
The software selection process plays a large role in the architecture of
a general purpose cloud. The following have a large impact on the design
of the cloud:
\begin{itemize}
\item {}
Choice of operating system
\item {}
Selection of OpenStack software components
\item {}
Choice of hypervisor
\item {}
Selection of supplemental software
\end{itemize}
Operating system (OS) selection plays a large role in the design and
architecture of a cloud. There are a number of OSes which have native
support for OpenStack including:
\begin{itemize}
\item {}
Ubuntu
\item {}
Red Hat Enterprise Linux (RHEL)
\item {}
CentOS
\item {}
SUSE Linux Enterprise Server (SLES)
\end{itemize}
\begin{sphinxadmonition}{note}{Note:}
Native support is not a constraint on the choice of OS; users are
free to choose just about any Linux distribution (or even Microsoft
Windows) and install OpenStack directly from source (or compile
their own packages). However, many organizations will prefer to
install OpenStack from distribution-supplied packages or
repositories (although using the distribution vendor's OpenStack
packages might be a requirement for support).
\end{sphinxadmonition}
OS selection also directly influences hypervisor selection. A cloud
architect who selects Ubuntu, RHEL, or SLES has some flexibility in
hypervisor; KVM, Xen, and LXC are supported virtualization methods
available under OpenStack Compute (nova) on these Linux distributions.
However, a cloud architect who selects Hyper-V is limited to Windows
Server. Similarly, a cloud architect who selects XenServer is limited
to the CentOS-based dom0 operating system provided with XenServer.
The primary factors that play into OS-hypervisor selection include:
\begin{description}
\item[{User requirements}] \leavevmode
The selection of OS-hypervisor combination first and foremost needs
to support the user requirements.
\item[{Support}] \leavevmode
The selected OS-hypervisor combination needs to be supported by
OpenStack.
\item[{Interoperability}] \leavevmode
The OS-hypervisor needs to be interoperable with other features and
services in the OpenStack design in order to meet the user
requirements.
\end{description}
\subsubsection{Hypervisor}
\label{\detokenize{generalpurpose-technical-considerations:hypervisor}}
OpenStack supports a wide variety of hypervisors, one or more of which
can be used in a single cloud. These hypervisors include:
\begin{itemize}
\item {}
KVM (and QEMU)
\item {}
XCP/XenServer
\item {}
vSphere (vCenter and ESXi)
\item {}
Hyper-V
\item {}
LXC
\item {}
Docker
\item {}
Bare-metal
\end{itemize}
A complete list of supported hypervisors and their capabilities can be
found at \href{https://wiki.openstack.org/wiki/HypervisorSupportMatrix}{OpenStack Hypervisor Support
Matrix}.
We recommend general purpose clouds use hypervisors that support the
most general purpose use cases, such as KVM and Xen. More specific
hypervisors should be chosen to account for specific functionality or a
supported feature requirement. In some cases, there may also be a
mandated requirement to run software on a certified hypervisor including
solutions from VMware, Microsoft, and Citrix.
The features offered through the OpenStack cloud platform determine the
best choice of a hypervisor. Each hypervisor has its own hardware
requirements which may affect the decisions around designing a general
purpose cloud.
In a mixed hypervisor environment, specific aggregates of compute
resources, each with defined capabilities, enable workloads to utilize
software and hardware specific to their particular requirements. This
functionality can be exposed explicitly to the end user, or accessed
through defined metadata within a particular flavor of an instance.
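As a sketch of this pattern using host aggregates and flavor extra
specs (the names are hypothetical, and the
\sphinxcode{AggregateInstanceExtraSpecsFilter} scheduler filter is
assumed to be enabled):
\begin{sphinxVerbatim}
$ openstack aggregate create --property hypervisor=kvm kvm-hosts
$ openstack aggregate add host kvm-hosts compute-01
$ openstack flavor create --vcpus 2 --ram 4096 --disk 40 m1.kvm
$ openstack flavor set m1.kvm \
    --property aggregate_instance_extra_specs:hypervisor=kvm
\end{sphinxVerbatim}
Instances launched with the \sphinxcode{m1.kvm} flavor are then
scheduled only onto hosts in the matching aggregate.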
| \subsubsection{OpenStack components} | |
| \label{\detokenize{generalpurpose-technical-considerations:openstack-components}} | |
| A general purpose OpenStack cloud design should incorporate the core | |
| OpenStack services to provide a wide range of services to end-users. The | |
| OpenStack core services recommended in a general purpose cloud are: | |
| \begin{itemize} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-compute-service-nova}]{\sphinxtermref{\DUrole{xref,std,std-term}{Compute service (nova)}}}} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-networking-service-neutron}]{\sphinxtermref{\DUrole{xref,std,std-term}{Networking service (neutron)}}}} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-image-service-glance}]{\sphinxtermref{\DUrole{xref,std,std-term}{Image service (glance)}}}} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-identity-service-keystone}]{\sphinxtermref{\DUrole{xref,std,std-term}{Identity service (keystone)}}}} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-dashboard-horizon}]{\sphinxtermref{\DUrole{xref,std,std-term}{Dashboard (horizon)}}}} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-telemetry-service-telemetry}]{\sphinxtermref{\DUrole{xref,std,std-term}{Telemetry service (telemetry)}}}} | |
| \end{itemize} | |
| A general purpose cloud may also include the {\hyperref[\detokenize{common/glossary:term-object-storage-service-swift}]{\sphinxtermref{\DUrole{xref,std,std-term}{Object Storage service | |
| (swift)}}}} and the {\hyperref[\detokenize{common/glossary:term-block-storage-service-cinder}]{\sphinxtermref{\DUrole{xref,std,std-term}{Block Storage service (cinder)}}}}. | |
| These services may be selected to provide storage to applications and instances. | |
| \subsubsection{Supplemental software} | |
| \label{\detokenize{generalpurpose-technical-considerations:supplemental-software}} | |
| A general purpose OpenStack deployment consists of more than just | |
| OpenStack-specific components. A typical deployment involves services | |
| that provide supporting functionality, including databases and message | |
| queues, and may also involve software to provide high availability of | |
| the OpenStack environment. Design decisions around the underlying | |
| message queue can affect the required number of controller nodes, as | |
| can the technology chosen to provide highly resilient database | |
| functionality, such as MariaDB with Galera. In such a scenario, | |
| replication of services relies on quorum. | |
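| As a back-of-the-envelope check, a cluster of \(N\) nodes keeps quorum | |
| only while a majority of its members remain reachable: | |
| \[ \text{quorum} = \left\lfloor \frac{N}{2} \right\rfloor + 1 \] | |
| A three-node Galera cluster therefore tolerates the loss of one node, | |
| which is one reason highly available control planes are commonly built | |
| with an odd number of members. | |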
| Where many general purpose deployments use hardware load balancers to | |
| provide highly available API access and SSL termination, software | |
| solutions, for example HAProxy, can also be considered. It is vital to | |
| ensure that such software implementations are also made highly | |
| available. High availability can be achieved by using software such as | |
| Keepalived or Pacemaker with Corosync. Pacemaker and Corosync can | |
| provide active-active or active-passive highly available configuration | |
| depending on the specific service in the OpenStack environment. Using | |
| this software can affect the design as it assumes at least a 2-node | |
| controller infrastructure where one of those nodes may be running | |
| certain services in standby mode. | |
| Memcached is a distributed memory object caching system, and Redis is a | |
| key-value store. Both are deployed on general purpose clouds to assist | |
| in alleviating load to the Identity service. The memcached service | |
| caches tokens, and due to its distributed nature it can help alleviate | |
| some bottlenecks to the underlying authentication system. Using | |
| memcached or Redis does not affect the overall design of your | |
| architecture as they tend to be deployed onto the infrastructure nodes | |
| providing the OpenStack services. | |
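| To illustrate the caching pattern these services implement, the | |
| following is a minimal sketch using the python-memcached library; the | |
| key scheme, TTL, and server address are illustrative assumptions, not | |
| values used by the Identity service itself. | |
| \begin{verbatim} | |
| # Sketch: cache-aside token lookup via memcached (values are examples). | |
| import memcache | |
| mc = memcache.Client(['127.0.0.1:11211']) | |
| def get_token_data(token_id, validate): | |
|     """Return cached token data, falling back to the validator.""" | |
|     key = 'token-%s' % token_id      # example key scheme | |
|     data = mc.get(key) | |
|     if data is None: | |
|         data = validate(token_id)    # hit the auth back end | |
|         mc.set(key, data, time=300)  # cache for five minutes | |
|     return data | |
| \end{verbatim} | |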
| \subsubsection{Controller infrastructure} | |
| \label{\detokenize{generalpurpose-technical-considerations:controller-infrastructure}} | |
| The Controller infrastructure nodes provide management services to the | |
| end-user as well as providing services internally for the operating of | |
| the cloud. The controllers run message queuing services that carry | |
| system messages between each service. Performance issues related to the | |
| message bus would lead to delays in delivering messages to where they | |
| need to go. The result of this condition would be delays in operational | |
| functions such as spinning up and deleting instances, provisioning new | |
| storage volumes and managing network resources. Such delays could | |
| adversely affect an application’s ability to react to certain | |
| conditions, especially when using auto-scaling features. It is important | |
| to properly design the hardware used to run the controller | |
| infrastructure as outlined above in the Hardware Selection section. | |
| Performance of the controller services is not limited to processing | |
| power; restrictions may also emerge in serving concurrent users. Load | |
| test the APIs and Horizon services to confirm that you are able to | |
| serve your customers. Pay particular attention to | |
| the OpenStack Identity service (keystone), which provides the | |
| authentication and authorization for all services, both internally to | |
| OpenStack itself and to end-users. This service can lead to a | |
| degradation of overall performance if it is not sized appropriately. | |
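| One lightweight way to begin such load testing is to time concurrent | |
| authentication requests. The sketch below assumes a keystone v3 | |
| endpoint at \sphinxcode{http://controller:5000} and example | |
| credentials; substitute values from your own environment. | |
| \begin{verbatim} | |
| # Sketch: time concurrent keystone v3 token requests (values are examples). | |
| import time | |
| from concurrent.futures import ThreadPoolExecutor | |
| import requests | |
| AUTH_URL = 'http://controller:5000/v3/auth/tokens' | |
| BODY = {'auth': {'identity': {'methods': ['password'], | |
|         'password': {'user': {'name': 'demo', | |
|                               'domain': {'id': 'default'}, | |
|                               'password': 'secret'}}}}} | |
| def one_request(_): | |
|     start = time.time() | |
|     resp = requests.post(AUTH_URL, json=BODY) | |
|     return resp.status_code, time.time() - start | |
| with ThreadPoolExecutor(max_workers=20) as pool: | |
|     results = list(pool.map(one_request, range(200))) | |
| latencies = sorted(elapsed for _, elapsed in results) | |
| print('median %.3fs, p95 %.3fs' % (latencies[len(latencies) // 2], | |
|                                    latencies[int(len(latencies) * 0.95)])) | |
| \end{verbatim} | |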
| \subsubsection{Network performance} | |
| \label{\detokenize{generalpurpose-technical-considerations:network-performance}} | |
| In a general purpose OpenStack cloud, the requirements of the network | |
| help determine performance capabilities. It is possible to design | |
| OpenStack environments that run a mix of networking capabilities. By | |
| utilizing the different interface speeds, the users of the OpenStack | |
| environment can choose networks that are fit for their purpose. | |
| Network performance can be boosted considerably by implementing hardware | |
| load balancers to provide front-end service to the cloud APIs. The | |
| hardware load balancers also perform SSL termination if that is a | |
| requirement of your environment. When implementing SSL offloading, it is | |
| important to understand the SSL offloading capabilities of the devices | |
| selected. | |
| \subsubsection{Compute host} | |
| \label{\detokenize{generalpurpose-technical-considerations:compute-host}} | |
| The choice of hardware specifications used in compute nodes including | |
| CPU, memory and disk type directly affects the performance of the | |
| instances. Other factors that can directly affect performance include | |
| tunable parameters within the OpenStack services, for example the | |
| overcommit ratio applied to resources. The defaults in OpenStack Compute | |
| set a 16:1 overcommit of CPU and a 1.5:1 overcommit of memory. | |
| Running at such high ratios leads to an increase in ``noisy-neighbor'' | |
| activity. Take care when sizing your Compute environment to | |
| avoid this scenario. For general purpose OpenStack environments | |
| it is possible to keep the defaults, but make sure to monitor your | |
| environment as usage increases. | |
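| As a worked example under those defaults, a compute node with 24 | |
| physical cores and 128 GB of RAM can schedule | |
| \[ 24 \times 16 = 384 \text{ vCPUs} \qquad\text{and}\qquad | |
| 128\,\text{GB} \times 1.5 = 192\,\text{GB of RAM,} \] | |
| though the hypervisor and host operating system reserve some of those | |
| resources, so usable capacity in practice is somewhat lower. | |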
| \subsubsection{Storage performance} | |
| \label{\detokenize{generalpurpose-technical-considerations:storage-performance}} | |
| When considering performance of Block Storage, hardware and | |
| architecture choice is important. Block Storage can use enterprise | |
| back-end systems such as NetApp or EMC, scale out storage such as | |
| GlusterFS and Ceph, or simply use the capabilities of directly attached | |
| storage in the nodes themselves. Block Storage may be deployed so that | |
| traffic traverses the host network, which could affect, and be adversely | |
| affected by, the front-side API traffic performance. As such, consider | |
| using a dedicated data storage network with dedicated interfaces on the | |
| Controller and Compute hosts. | |
| When considering performance of Object Storage, a number of design | |
| choices will affect performance. A user’s access to the Object | |
| Storage is through the proxy services, which sit behind hardware load | |
| balancers. By the very nature of a highly resilient storage system, | |
| replication of the data would affect performance of the overall system. | |
| In this case, 10 GbE (or better) networking is recommended throughout | |
| the storage network architecture. | |
| \subsubsection{High Availability} | |
| \label{\detokenize{generalpurpose-technical-considerations:high-availability}} | |
| In OpenStack, the infrastructure is integral to providing services and | |
| should always be available, especially when operating with SLAs. | |
| Ensuring network availability is accomplished by designing the network | |
| architecture so that no single point of failure exists. A consideration | |
| of the number of switches, routes and redundancies of power should be | |
| factored into core infrastructure, as well as the associated bonding of | |
| networks to provide diverse routes to your highly available switch | |
| infrastructure. | |
| The OpenStack services themselves should be deployed across multiple | |
| servers that do not represent a single point of failure. Ensuring API | |
| availability can be achieved by placing these services behind highly | |
| available load balancers that have multiple OpenStack servers as | |
| members. | |
| OpenStack lends itself to deployment in a highly available manner where | |
| it is expected that at least two servers be utilized. These can run all | |
| of the services involved, from the message queuing service, for example | |
| RabbitMQ or Qpid, to an appropriately deployed database service such as | |
| MySQL or MariaDB. As services in the cloud are scaled out, back-end | |
| services will need to scale too. Monitoring and reporting on server | |
| utilization and response times, as well as load testing your systems, | |
| will help determine scale out decisions. | |
| Care must be taken when deciding network functionality. Currently, | |
| OpenStack supports both the legacy networking (nova-network) system and | |
| the newer, extensible OpenStack Networking (neutron). Both have their | |
| pros and cons when it comes to providing highly available access. Legacy | |
| networking, which provides networking access maintained in the OpenStack | |
| Compute code, provides a feature that removes a single point of failure | |
| when it comes to routing, and this feature is currently missing in | |
| OpenStack Networking. The effect of legacy networking’s multi-host | |
| functionality restricts failure domains to the host running that | |
| instance. | |
| When using Networking, the OpenStack controller servers or | |
| separate Networking hosts handle routing. For a deployment that requires | |
| features available in only Networking, it is possible to remove this | |
| restriction by using third party software that helps maintain highly | |
| available L3 routes. Doing so allows for common APIs to control network | |
| hardware, or to provide complex multi-tier web applications in a secure | |
| manner. It is also possible to completely remove routing from | |
| Networking, and instead rely on hardware routing capabilities. In this | |
| case, the switching infrastructure must support L3 routing. | |
| OpenStack Networking and legacy networking both have their advantages | |
| and disadvantages. They are both valid and supported options that fit | |
| different network deployment models described in the | |
| \href{https://docs.openstack.org/ops-guide/arch-network-design.html\#network-topology}{Networking deployment options table} | |
| of the OpenStack Operations Guide. | |
| Ensure your deployment has adequate back-up capabilities. | |
| Application design must also be factored into the capabilities of the | |
| underlying cloud infrastructure. If the compute hosts do not provide a | |
| seamless live migration capability, then it must be expected that when a | |
| compute host fails, that instance and any data local to that instance | |
| will be deleted. However, when providing users with an expectation that | |
| instances have a high level of uptime, the infrastructure | |
| must be deployed in a way that eliminates any single point of failure | |
| when a compute host disappears. This may include utilizing shared file | |
| systems on enterprise storage or OpenStack Block storage to provide a | |
| level of guarantee to match service features. | |
| For more information on high availability in OpenStack, see the | |
| \href{https://docs.openstack.org/ha-guide/}{OpenStack High Availability | |
| Guide}. | |
| \subsubsection{Security} | |
| \label{\detokenize{generalpurpose-technical-considerations:security}} | |
| A security domain comprises users, applications, servers or networks | |
| that share common trust requirements and expectations within a system. | |
| Typically they have the same authentication and authorization | |
| requirements and users. | |
| These security domains are: | |
| \begin{itemize} | |
| \item {} | |
| Public | |
| \item {} | |
| Guest | |
| \item {} | |
| Management | |
| \item {} | |
| Data | |
| \end{itemize} | |
| These security domains can be mapped to an OpenStack deployment | |
| individually, or combined. In each case, the cloud operator should be | |
| aware of the appropriate security concerns. Security domains should be | |
| mapped out against your specific OpenStack deployment topology. The | |
| domains and their trust requirements depend upon whether the cloud | |
| instance is public, private, or hybrid. | |
| \begin{itemize} | |
| \item {} | |
| The public security domain is an entirely untrusted area of the cloud | |
| infrastructure. It can refer to the internet as a whole or simply to | |
| networks over which you have no authority. This domain should always | |
| be considered untrusted. | |
| \item {} | |
| The guest security domain handles compute data generated by instances | |
| on the cloud but not services that support the operation of the | |
| cloud, such as API calls. Public cloud providers and private cloud | |
| providers who do not have stringent controls on instance use or who | |
| allow unrestricted internet access to instances should consider this | |
| domain to be untrusted. Private cloud providers may want to consider | |
| this network as internal and therefore trusted only if they have | |
| controls in place to assert that they trust instances and all their | |
| projects. | |
| \item {} | |
| The management security domain is where services interact. Sometimes | |
| referred to as the control plane, the networks in this domain | |
| transport confidential data such as configuration parameters, user | |
| names, and passwords. In most deployments this domain is considered | |
| trusted. | |
| \item {} | |
| The data security domain is concerned primarily with information | |
| pertaining to the storage services within OpenStack. Much of the data | |
| that crosses this network has high integrity and confidentiality | |
| requirements and, depending on the type of deployment, may also have | |
| strong availability requirements. The trust level of this network is | |
| heavily dependent on other deployment decisions. | |
| \end{itemize} | |
| When deploying OpenStack in an enterprise as a private cloud it is | |
| usually behind the firewall and within the trusted network alongside | |
| existing systems. Users of the cloud are employees that are bound by the | |
| security requirements set forth by the company. This tends to push most | |
| of the security domains towards a more trusted model. However, when | |
| deploying OpenStack in a public facing role, no assumptions can be made | |
| and the attack vectors significantly increase. | |
| Take care when managing the users of the system for | |
| both public and private clouds. The Identity service allows for LDAP to | |
| be part of the authentication process. Including such systems in an | |
| OpenStack deployment may ease user management when integrating with | |
| existing systems. | |
| It is important to understand that user authentication requests include | |
| sensitive information including user names, passwords, and | |
| authentication tokens. For this reason, placing the API services behind | |
| hardware that performs SSL termination is strongly recommended. | |
| For more information on OpenStack security, see the \href{https://docs.openstack.org/security-guide/}{OpenStack Security | |
| Guide}. | |
| \subsection{Operational considerations} | |
| \label{\detokenize{generalpurpose-operational-considerations:operational-considerations}}\label{\detokenize{generalpurpose-operational-considerations::doc}} | |
| In the planning and design phases of the build out, it is important to | |
| include the operations function. Operational factors affect the design | |
| choices for a general purpose cloud, and operations staff are often | |
| tasked with the maintenance of cloud environments for larger | |
| installations. | |
| Expectations set by the Service Level Agreements (SLAs) directly affect | |
| knowing when and where you should implement redundancy and high | |
| availability. SLAs are contractual obligations that provide assurances | |
| for service availability. They define the levels of availability that | |
| drive the technical design, often with penalties for not meeting | |
| contractual obligations. | |
| SLA terms that affect design include: | |
| \begin{itemize} | |
| \item {} | |
| API availability guarantees implying multiple infrastructure services | |
| and highly available load balancers. | |
| \item {} | |
| Network uptime guarantees affecting switch design, which might | |
| require redundant switching and power. | |
| \item {} | |
| Factor in networking security policy requirements in to your | |
| deployments. | |
| \end{itemize} | |
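| When evaluating the availability guarantees listed above, it helps to | |
| translate a percentage into a concrete downtime budget: | |
| \[ \text{allowed downtime} = (1 - A) \times T \] | |
| For example, a 99.95\% availability guarantee over a 30-day month | |
| allows roughly \(0.0005 \times 30 \times 24 \times 60 \approx 21.6\) | |
| minutes of downtime in which all maintenance and faults must fit. | |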
| \subsubsection{Support and maintainability} | |
| \label{\detokenize{generalpurpose-operational-considerations:support-and-maintainability}} | |
| To be able to support and maintain an installation, OpenStack cloud | |
| management requires operations staff to understand the design | |
| architecture. The operations and engineering staff skill level, | |
| and level of separation, are dependent on the size and purpose of the | |
| installation. Large cloud service providers, or telecom providers, are | |
| more likely to be managed by specially trained, dedicated operations | |
| organizations. Smaller implementations are more likely to rely on | |
| support staff that need to take on combined engineering, design and | |
| operations functions. | |
| Maintaining OpenStack installations requires a variety of technical | |
| skills. You may want to consider using a third-party management company | |
| with special expertise in managing OpenStack deployment. | |
| \subsubsection{Monitoring} | |
| \label{\detokenize{generalpurpose-operational-considerations:monitoring}} | |
| OpenStack clouds require appropriate monitoring platforms to ensure | |
| errors are caught and managed appropriately. Specific meters that are | |
| critically important to monitor include: | |
| \begin{itemize} | |
| \item {} | |
| Image disk utilization | |
| \item {} | |
| Response time to the {\hyperref[\detokenize{common/glossary:term-compute-api-nova-api}]{\sphinxtermref{\DUrole{xref,std,std-term}{Compute API}}}} | |
| \end{itemize} | |
| Leveraging existing monitoring systems is an effective check to ensure | |
| OpenStack environments can be monitored. | |
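| As a minimal sketch of the second meter, the following times a Compute | |
| API call from a monitoring host; the endpoint, token, and threshold | |
| are placeholders for your environment. | |
| \begin{verbatim} | |
| # Sketch: measure Compute API response time (endpoint/token are examples). | |
| import time | |
| import requests | |
| NOVA_URL = 'http://controller:8774/v2.1/servers' | |
| HEADERS = {'X-Auth-Token': 'TOKEN'}  # obtain via the Identity service | |
| THRESHOLD = 2.0                      # seconds; tune to your SLA | |
| start = time.time() | |
| resp = requests.get(NOVA_URL, headers=HEADERS) | |
| elapsed = time.time() - start | |
| if resp.status_code != 200 or elapsed > THRESHOLD: | |
|     print('ALERT: nova-api returned %s in %.2fs' % (resp.status_code, elapsed)) | |
| \end{verbatim} | |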
| \subsubsection{Downtime} | |
| \label{\detokenize{generalpurpose-operational-considerations:downtime}} | |
| To effectively run cloud installations, initial downtime planning | |
| includes creating processes and architectures that support the | |
| following: | |
| \begin{itemize} | |
| \item {} | |
| Planned (maintenance) | |
| \item {} | |
| Unplanned (system faults) | |
| \end{itemize} | |
| Resiliency of the overall system and of individual components is | |
| dictated by the requirements of the SLA, meaning that designing for | |
| {\hyperref[\detokenize{common/glossary:term-high-availability-ha}]{\sphinxtermref{\DUrole{xref,std,std-term}{high availability (HA)}}}} can have cost ramifications. | |
| \subsubsection{Capacity planning} | |
| \label{\detokenize{generalpurpose-operational-considerations:capacity-planning}} | |
| Capacity constraints for a general purpose cloud environment include: | |
| \begin{itemize} | |
| \item {} | |
| Compute limits | |
| \item {} | |
| Storage limits | |
| \end{itemize} | |
| A relationship exists between the size of the compute environment and | |
| the number of supporting OpenStack infrastructure controller nodes | |
| required. | |
| Increasing the size of the supporting compute environment increases the | |
| network traffic and messages, adding load to the controller or | |
| networking nodes. Effective monitoring of the environment will help with | |
| capacity decisions on scaling. | |
| Compute nodes automatically attach to OpenStack clouds, resulting in a | |
| horizontally scaling process when adding extra compute capacity to an | |
| OpenStack cloud. Additional processes are required to place nodes into | |
| appropriate availability zones and host aggregates. When adding | |
| additional compute nodes to environments, ensure identical or functionally | |
| compatible CPUs are used, otherwise live migration features will break. | |
| It is necessary to add rack capacity or network switches as scaling out | |
| compute hosts directly affects network and datacenter resources. | |
| Assessing the average workloads and increasing the number of instances | |
| that can run within the compute environment by adjusting the overcommit | |
| ratio is another option. It is important to remember that changing the | |
| CPU overcommit ratio can have a detrimental effect and cause a potential | |
| increase in a noisy neighbor. The additional risk of increasing the | |
| overcommit ratio is more instances failing when a compute host fails. | |
| Compute host components can also be upgraded to account for increases in | |
| demand; this is known as vertical scaling. Upgrading CPUs with more | |
| cores, or increasing the overall server memory, can add extra needed | |
| capacity depending on whether the running applications are more CPU | |
| intensive or memory intensive. | |
| Insufficient disk capacity could also have a negative effect on overall | |
| performance, including CPU and memory usage. Depending on the back-end | |
| architecture of the OpenStack Block Storage layer, adding capacity may | |
| mean adding disk shelves to enterprise storage systems or installing | |
| additional block storage nodes. Upgrading directly attached storage | |
| installed in compute hosts, and adding capacity to the shared storage | |
| to provide additional ephemeral storage to instances, may also be necessary. | |
| For a deeper discussion on many of these topics, refer to the \href{https://docs.openstack.org/ops}{OpenStack | |
| Operations Guide}. | |
| \subsection{Architecture} | |
| \label{\detokenize{generalpurpose-architecture::doc}}\label{\detokenize{generalpurpose-architecture:architecture}} | |
| Hardware selection involves three key areas: | |
| \begin{itemize} | |
| \item {} | |
| Compute | |
| \item {} | |
| Network | |
| \item {} | |
| Storage | |
| \end{itemize} | |
| Hardware for a general purpose OpenStack cloud should reflect a cloud | |
| with no pre-defined usage model, designed to run a wide variety of | |
| applications with varying resource usage requirements. These | |
| applications include any of the following: | |
| \begin{itemize} | |
| \item {} | |
| RAM-intensive | |
| \item {} | |
| CPU-intensive | |
| \item {} | |
| Storage-intensive | |
| \end{itemize} | |
| Certain hardware form factors may better suit a general purpose | |
| OpenStack cloud due to the requirement for equal (or nearly equal) | |
| balance of resources. Server hardware must provide the following: | |
| \begin{itemize} | |
| \item {} | |
| Equal (or nearly equal) balance of compute capacity (RAM and CPU) | |
| \item {} | |
| Network capacity (number and speed of links) | |
| \item {} | |
| Storage capacity (gigabytes or terabytes as well as {\hyperref[\detokenize{common/glossary:term-input-output-operations-per-second-iops}]{\sphinxtermref{\DUrole{xref,std,std-term}{Input/Output | |
| Operations Per Second (IOPS)}}}}) | |
| \end{itemize} | |
| Evaluate server hardware around four conflicting dimensions: | |
| \begin{description} | |
| \item[{Server density}] \leavevmode | |
| A measure of how many servers can fit into a given measure of | |
| physical space, such as a rack unit {[}U{]}. | |
| \item[{Resource capacity}] \leavevmode | |
| The number of CPU cores, amount of RAM, or amount of deliverable | |
| storage. | |
| \item[{Expandability}] \leavevmode | |
| Limit of additional resources you can add to a server. | |
| \item[{Cost}] \leavevmode | |
| The relative purchase price of the hardware weighted against the | |
| level of design effort needed to build the system. | |
| \end{description} | |
| Increasing server density means sacrificing resource capacity or | |
| expandability; conversely, increasing resource capacity and expandability | |
| increases cost and decreases server density. As a result, determining | |
| the best server hardware for a general purpose OpenStack architecture | |
| means understanding how choice of form factor will impact the rest of | |
| the design. The following list outlines the form factors to choose from: | |
| \begin{itemize} | |
| \item {} | |
| Blade servers typically support dual-socket multi-core CPUs. Blades | |
| also offer outstanding density. | |
| \item {} | |
| 1U rack-mounted servers occupy only a single rack unit. Their | |
| benefits include high density, support for dual-socket multi-core | |
| CPUs, and support for reasonable RAM amounts. This form factor offers | |
| limited storage capacity, limited network capacity, and limited | |
| expandability. | |
| \item {} | |
| 2U rack-mounted servers offer the expanded storage and networking | |
| capacity that 1U servers tend to lack, but with a corresponding | |
| decrease in server density (half the density offered by 1U | |
| rack-mounted servers). | |
| \item {} | |
| Larger rack-mounted servers, such as 4U servers, will tend to offer | |
| even greater CPU capacity, often supporting four or even eight CPU | |
| sockets. These servers often have much greater expandability so will | |
| provide the best option for upgradability. This means, however, that | |
| the servers have a much lower server density and a much greater | |
| hardware cost. | |
| \item {} | |
| \sphinxstyleemphasis{Sled servers} are rack-mounted servers that support multiple | |
| independent servers in a single 2U or 3U enclosure. This form factor | |
| offers increased density over typical 1U-2U rack-mounted servers but | |
| tends to suffer from limitations in the amount of storage or network | |
| capacity each individual server supports. | |
| \end{itemize} | |
| The best form factor for server hardware supporting a general purpose | |
| OpenStack cloud is driven by outside business and cost factors. No | |
| single reference architecture applies to all implementations; the | |
| decision must flow from user requirements, technical considerations, and | |
| operational considerations. Here are some of the key factors that | |
| influence the selection of server hardware: | |
| \begin{description} | |
| \item[{Instance density}] \leavevmode | |
| Sizing is an important consideration for a general purpose OpenStack | |
| cloud. The expected or anticipated number of instances that each | |
| hypervisor can host is a common meter used in sizing the deployment. | |
| The selected server hardware needs to support the expected or | |
| anticipated instance density. | |
| \item[{Host density}] \leavevmode | |
| Physical data centers have limited physical space, power, and | |
| cooling. The number of hosts (or hypervisors) that can be fitted | |
| into a given metric (rack, rack unit, or floor tile) is another | |
| important method of sizing. Floor weight is an often overlooked | |
| consideration. The data center floor must be able to support the | |
| weight of the proposed number of hosts within a rack or set of | |
| racks. These factors need to be applied as part of the host density | |
| calculation and server hardware selection. | |
| \item[{Power density}] \leavevmode | |
| Data centers have a specified amount of power fed to a given rack or | |
| set of racks. Older data centers may have a power density | |
| as low as 20 amps per rack, while more recent data centers can be | |
| architected to support power densities as high as 120 amps per rack. | |
| The selected server hardware must take power density into account. | |
| \item[{Network connectivity}] \leavevmode | |
| The selected server hardware must have the appropriate number of | |
| network connections, as well as the right type of network | |
| connections, in order to support the proposed architecture. Ensure | |
| that, at a minimum, there are at least two diverse network | |
| connections coming into each rack. | |
| \end{description} | |
| The selection of form factors or architectures affects the selection of | |
| server hardware. Ensure that the selected server hardware is configured | |
| to support enough storage capacity (or storage expandability) to match | |
| the requirements of selected scale-out storage solution. Similarly, the | |
| network architecture impacts the server hardware selection and vice | |
| versa. | |
| \subsubsection{Selecting storage hardware} | |
| \label{\detokenize{generalpurpose-architecture:selecting-storage-hardware}} | |
| Determine the storage hardware architecture by first selecting a | |
| storage architecture. Evaluate possible storage architectures against | |
| the critical factors: the user | |
| requirements, technical considerations, and operational considerations. | |
| Incorporate the following facts into your storage architecture: | |
| \begin{description} | |
| \item[{Cost}] \leavevmode | |
| Storage can be a significant portion of the overall system cost. For | |
| an organization that is concerned with vendor support, a commercial | |
| storage solution is advisable, although it comes with a higher price | |
| tag. If initial capital expenditure requires minimization, designing | |
| a system based on commodity hardware would apply. The trade-off is | |
| potentially higher support costs and a greater risk of | |
| incompatibility and interoperability issues. | |
| \item[{Scalability}] \leavevmode | |
| Scalability, along with expandability, is a major consideration in a | |
| general purpose OpenStack cloud. It might be difficult to predict | |
| the final intended size of the implementation as there are no | |
| established usage patterns for a general purpose cloud. It might | |
| become necessary to expand the initial deployment in order to | |
| accommodate growth and user demand. | |
| \item[{Expandability}] \leavevmode | |
| Expandability is a major architecture factor for storage solutions | |
| with general purpose OpenStack cloud. A storage solution that | |
| expands to 50 PB is considered more expandable than a solution that | |
| only scales to 10 PB. This meter is related to scalability, which is | |
| the measure of a solution's performance as it expands. | |
| \end{description} | |
| Using a scale-out storage solution with direct-attached storage (DAS) in | |
| the servers is well suited for a general purpose OpenStack cloud. Cloud | |
| services requirements determine your choice of scale-out solution. You | |
| need to determine whether a single, highly expandable and highly | |
| vertically scalable, centralized storage array is suitable for your design. After | |
| determining an approach, select the storage hardware based on these | |
| criteria. | |
| This list expands upon the potential impacts of including a particular | |
| storage architecture (and corresponding storage hardware) into the | |
| design for a general purpose OpenStack cloud: | |
| \begin{description} | |
| \item[{Connectivity}] \leavevmode | |
| Ensure that, if storage protocols other than Ethernet are part of | |
| the storage solution, the appropriate hardware has been selected. If | |
| a centralized storage array is selected, ensure that the hypervisor | |
| will be able to connect to that storage array for image storage. | |
| \item[{Usage}] \leavevmode | |
| How the particular storage architecture will be used is critical for | |
| determining the architecture. Some of the configurations that will | |
| influence the architecture include whether it will be used by the | |
| hypervisors for ephemeral instance storage or if OpenStack Object | |
| Storage will use it for object storage. | |
| \item[{Instance and image locations}] \leavevmode | |
| Where instances and images will be stored will influence the | |
| architecture. | |
| \item[{Server hardware}] \leavevmode | |
| If the solution is a scale-out storage architecture that includes | |
| DAS, it will affect the server hardware selection. This could ripple | |
| into the decisions that affect host density, instance density, power | |
| density, OS-hypervisor, management tools and others. | |
| \end{description} | |
| A general purpose OpenStack cloud has multiple storage options. The key factors | |
| that influence the selection of storage hardware for a | |
| general purpose OpenStack cloud are as follows: | |
| \begin{description} | |
| \item[{Capacity}] \leavevmode | |
| Hardware resources selected for the resource nodes should be capable | |
| of supporting enough storage for the cloud services. Defining the | |
| initial requirements and ensuring the design can support adding | |
| capacity is important. Hardware nodes selected for object storage | |
| should be capable of supporting a large number of inexpensive disks | |
| with no reliance on RAID controller cards. Hardware nodes selected | |
| for block storage should be capable of supporting high speed storage | |
| solutions and RAID controller cards to provide performance and | |
| redundancy to storage at a hardware level. Selecting hardware RAID | |
| controllers that automatically repair damaged arrays will assist | |
| with the replacement and repair of degraded or deleted storage | |
| devices. | |
| \item[{Performance}] \leavevmode | |
| Disks selected for object storage services do not need to be fast | |
| performing disks. We recommend that object storage nodes take | |
| advantage of the best cost per terabyte available for storage. | |
| Contrastingly, disks chosen for block storage services should take | |
| advantage of performance boosting features that may entail the use | |
| of SSDs or flash storage to provide high performance block storage | |
| pools. Storage performance of ephemeral disks used for instances | |
| should also be taken into consideration. | |
| \item[{Fault tolerance}] \leavevmode | |
| Object storage resource nodes have no requirements for hardware | |
| fault tolerance or RAID controllers. It is not necessary to plan for | |
| fault tolerance within the object storage hardware because the | |
| object storage service provides replication between zones as a | |
| feature of the service. Block storage nodes, compute nodes, and | |
| cloud controllers should all have fault tolerance built in at the | |
| hardware level by making use of hardware RAID controllers and | |
| varying levels of RAID configuration. The level of RAID chosen | |
| should be consistent with the performance and availability | |
| requirements of the cloud. | |
| \end{description} | |
| \subsubsection{Selecting networking hardware} | |
| \label{\detokenize{generalpurpose-architecture:selecting-networking-hardware}} | |
| Selecting network architecture determines which network hardware will be | |
| used. Networking software is determined by the selected networking | |
| hardware. | |
| There are more subtle design impacts that need to be considered. The | |
| selection of certain networking hardware (and the networking software) | |
| affects the management tools that can be used. There are exceptions to | |
| this; the rise of \sphinxstyleemphasis{open} networking software that supports a range of | |
| networking hardware means that there are instances where the | |
| relationship between networking hardware and networking software is not | |
| as tightly defined. | |
| Some of the key considerations that should be included in the selection | |
| of networking hardware include: | |
| \begin{description} | |
| \item[{Port count}] \leavevmode | |
| The design will require networking hardware that has the requisite | |
| port count. | |
| \item[{Port density}] \leavevmode | |
| The network design will be affected by the physical space that is | |
| required to provide the requisite port count. A higher port density | |
| is preferred, as it leaves more rack space for compute or storage | |
| components that may be required by the design. Higher port density | |
| can also raise concerns about fault domains and power density that | |
| should be considered. Higher density switches are also more expensive, | |
| so weigh that cost, as it is important not to overdesign the | |
| network if it is not required. | |
| \item[{Port speed}] \leavevmode | |
| The networking hardware must support the proposed network speed, for | |
| example: 1 GbE, 10 GbE, or 40 GbE (or even 100 GbE). | |
| \item[{Redundancy}] \leavevmode | |
| The level of network hardware redundancy required is influenced by | |
| the user requirements for high availability and cost considerations. | |
| Network redundancy can be achieved by adding redundant power | |
| supplies or paired switches. If this is a requirement, the hardware | |
| will need to support this configuration. | |
| \item[{Power requirements}] \leavevmode | |
| Ensure that the physical data center provides the necessary power | |
| for the selected network hardware. | |
| \end{description} | |
| \begin{sphinxadmonition}{note}{Note:} | |
| This may be an issue for spine switches in a leaf and spine | |
| fabric, or end of row (EoR) switches. | |
| \end{sphinxadmonition} | |
| There is no single best practice architecture for the networking | |
| hardware supporting a general purpose OpenStack cloud that will apply to | |
| all implementations. Some of the key factors that will have a strong | |
| influence on selection of networking hardware include: | |
| \begin{description} | |
| \item[{Connectivity}] \leavevmode | |
| All nodes within an OpenStack cloud require network connectivity. In | |
| some cases, nodes require access to more than one network segment. | |
| The design must encompass sufficient network capacity and bandwidth | |
| to ensure that all communications within the cloud, both north-south | |
| and east-west traffic, have sufficient resources available. | |
| \item[{Scalability}] \leavevmode | |
| The network design should encompass a physical and logical network | |
| design that can be easily expanded upon. Network hardware should | |
| offer the appropriate types of interfaces and speeds that are | |
| required by the hardware nodes. | |
| \item[{Availability}] \leavevmode | |
| To ensure that access to nodes within the cloud is not interrupted, | |
| we recommend that the network architecture identify any single | |
| points of failure and provide some level of redundancy or fault | |
| tolerance. With regard to the network infrastructure itself, this | |
| often involves use of networking protocols such as LACP, VRRP or | |
| others to achieve a highly available network connection. In | |
| addition, it is important to consider the networking implications on | |
| API availability. In order to ensure that the APIs, and potentially | |
| other services in the cloud are highly available, we recommend you | |
| design a load balancing solution within the network architecture to | |
| accommodate for these requirements. | |
| \end{description} | |
| \subsubsection{Software selection} | |
| \label{\detokenize{generalpurpose-architecture:software-selection}} | |
| Software selection for a general purpose OpenStack architecture design | |
| needs to include these three areas: | |
| \begin{itemize} | |
| \item {} | |
| Operating system (OS) and hypervisor | |
| \item {} | |
| OpenStack components | |
| \item {} | |
| Supplemental software | |
| \end{itemize} | |
| \subsubsection{Operating system and hypervisor} | |
| \label{\detokenize{generalpurpose-architecture:operating-system-and-hypervisor}} | |
| The operating system (OS) and hypervisor have a significant impact on | |
| the overall design. Selecting a particular operating system and | |
| hypervisor can directly affect server hardware selection. Make sure the | |
| storage hardware and topology support the selected operating system and | |
| hypervisor combination. Also ensure the networking hardware selection | |
| and topology will work with the chosen operating system and hypervisor | |
| combination. | |
| Some areas that could be impacted by the selection of OS and hypervisor | |
| include: | |
| \begin{description} | |
| \item[{Cost}] \leavevmode | |
| Selecting a commercially supported hypervisor, such as Microsoft | |
| Hyper-V, will result in a different cost model than | |
| community-supported open source hypervisors including | |
| {\hyperref[\detokenize{common/glossary:term-kernel-based-vm-kvm}]{\sphinxtermref{\DUrole{xref,std,std-term}{KVM}}}} or {\hyperref[\detokenize{common/glossary:term-xen}]{\sphinxtermref{\DUrole{xref,std,std-term}{Xen}}}}. When | |
| comparing open source OS solutions, choosing Ubuntu over Red Hat | |
| (or vice versa) will have an impact on cost due to support | |
| contracts. | |
| \item[{Supportability}] \leavevmode | |
| Depending on the selected hypervisor, staff should have the | |
| appropriate training and knowledge to support the selected OS and | |
| hypervisor combination. If they do not, training will need to be | |
| provided which could have a cost impact on the design. | |
| \item[{Management tools}] \leavevmode | |
| The management tools used for Ubuntu and KVM differ from the | |
| management tools for VMware vSphere. Although both OS and hypervisor | |
| combinations are supported by OpenStack, there will be very | |
| different impacts to the rest of the design as a result of the | |
| selection of one combination versus the other. | |
| \item[{Scale and performance}] \leavevmode | |
| Ensure that selected OS and hypervisor combinations meet the | |
| appropriate scale and performance requirements. The chosen | |
| architecture will need to meet the targeted instance-host ratios | |
| with the selected OS-hypervisor combinations. | |
| \item[{Security}] \leavevmode | |
| Ensure that the design can accommodate regular periodic | |
| installations of application security patches while maintaining | |
| required workloads. The frequency of security patches for the | |
| proposed OS-hypervisor combination will have an impact on | |
| performance and the patch installation process could affect | |
| maintenance windows. | |
| \item[{Supported features}] \leavevmode | |
| Determine which features of OpenStack are required. This will often | |
| determine the selection of the OS-hypervisor combination. Some | |
| features are only available with specific operating systems or | |
| hypervisors. | |
| \item[{Interoperability}] \leavevmode | |
| You will need to consider how the OS and hypervisor combination | |
| interacts with other operating systems and hypervisors, including | |
| other software solutions. Operational troubleshooting tools for one | |
| OS-hypervisor combination may differ from the tools used for another | |
| OS-hypervisor combination and, as a result, the design will need to | |
| address whether the two sets of tools need to interoperate. | |
| \end{description} | |
| \subsubsection{OpenStack components} | |
| \label{\detokenize{generalpurpose-architecture:openstack-components}} | |
| Selecting which OpenStack components are included in the overall design | |
| is important. Some OpenStack components, like the Compute and Image services, | |
| are required in every architecture. Other components, like | |
| Orchestration, are not always required. | |
| Excluding certain OpenStack components can limit or constrain the | |
| functionality of other components. For example, if the architecture | |
| includes Orchestration but excludes Telemetry, then the design will not | |
| be able to take advantage of Orchestration's auto-scaling functionality. | |
| It is important to research the component interdependencies in | |
| conjunction with the technical requirements before deciding on the final | |
| architecture. | |
| \paragraph{Networking software} | |
| \label{\detokenize{generalpurpose-architecture:networking-software}} | |
| OpenStack Networking (neutron) provides a wide variety of networking | |
| services for instances. There are many additional networking software | |
| packages that can be useful when managing OpenStack components. Some | |
| examples include: | |
| \begin{itemize} | |
| \item {} | |
| Software to provide load balancing | |
| \item {} | |
| Network redundancy protocols | |
| \item {} | |
| Routing daemons | |
| \end{itemize} | |
| Some of these software packages are described in more detail in the | |
| OpenStack High Availability Guide (refer to the \href{https://docs.openstack.org/ha-guide/networking-ha.html}{OpenStack network | |
| nodes | |
| chapter} of | |
| the OpenStack High Availability Guide). | |
| For a general purpose OpenStack cloud, the OpenStack infrastructure | |
| components need to be highly available. If the design does not include | |
| hardware load balancing, networking software packages like HAProxy will | |
| need to be included. | |
| \paragraph{Management software} | |
| \label{\detokenize{generalpurpose-architecture:management-software}} | |
| The selected supplemental software solutions impact the overall | |
| OpenStack cloud design. This includes software for providing clustering, | |
| logging, monitoring, and alerting. | |
| Inclusion of clustering software, such as Corosync or Pacemaker, is | |
| determined primarily by the availability requirements. The impact of | |
| including (or not including) these software packages is primarily | |
| determined by the availability of the cloud infrastructure and the | |
| complexity of supporting the configuration after it is deployed. The | |
| \href{https://docs.openstack.org/ha-guide/}{OpenStack High Availability | |
| Guide} provides more details on | |
| the installation and configuration of Corosync and Pacemaker, should | |
| these packages need to be included in the design. | |
| Requirements for logging, monitoring, and alerting are determined by | |
| operational considerations. Each of these sub-categories includes a | |
| number of various options. | |
| If these software packages are required, the design must account for the | |
| additional resource consumption (CPU, RAM, storage, and network | |
| bandwidth). Some other potential design impacts include: | |
| \begin{itemize} | |
| \item {} | |
| OS-hypervisor combination: Ensure that the selected logging, | |
| monitoring, or alerting tools support the proposed OS-hypervisor | |
| combination. | |
| \item {} | |
| Network hardware: The network hardware selection needs to be | |
| supported by the logging, monitoring, and alerting software. | |
| \end{itemize} | |
| \paragraph{Database software} | |
| \label{\detokenize{generalpurpose-architecture:database-software}} | |
| OpenStack components often require access to back-end database services | |
| to store state and configuration information. Selecting an appropriate | |
| back-end database that satisfies the availability and fault tolerance | |
| requirements of the OpenStack services is required. OpenStack services | |
| support connecting to any database that is supported by the SQLAlchemy | |
| Python drivers; however, most common database deployments make use of | |
| MySQL or variations of it. We recommend that the database, which | |
| provides back-end services within a general purpose cloud, be made highly | |
| available using a technology that can accomplish that | |
| goal. | |
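| As a minimal sketch of such a connection, the following configures an | |
| SQLAlchemy engine against a MySQL-compatible back end; the host name, | |
| credentials, and database are placeholders, with the assumption that a | |
| load balancer virtual IP fronts the Galera cluster. | |
| \begin{verbatim} | |
| # Sketch: SQLAlchemy engine for a Galera-backed MySQL service | |
| # (host, credentials, and database are example values). | |
| from sqlalchemy import create_engine | |
| # Point at the VIP of the load balancer fronting the Galera cluster | |
| # so that a node failure is transparent to the OpenStack services. | |
| engine = create_engine( | |
|     'mysql+pymysql://nova:SECRET@db-vip.example.com/nova', | |
|     pool_recycle=3600,  # avoid stale connections after MySQL timeouts | |
| ) | |
| with engine.connect() as conn: | |
|     print(conn.execute('SELECT VERSION()').scalar()) | |
| \end{verbatim} | |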
| \subsection{Prescriptive example} | |
| \label{\detokenize{generalpurpose-prescriptive-example::doc}}\label{\detokenize{generalpurpose-prescriptive-example:prescriptive-example}} | |
| An online classified advertising company wants to run web applications | |
| consisting of Tomcat, Nginx and MariaDB in a private cloud. To be able | |
| to meet policy requirements, the cloud infrastructure will run in their | |
| own data center. The company has predictable load requirements, but | |
| requires scaling to cope with nightly increases in demand. Their current | |
| environment does not have the flexibility to align with their goal of | |
| running an open source API environment. The current environment consists | |
| of the following: | |
| \begin{itemize} | |
| \item {} | |
| Between 120 and 140 installations of Nginx and Tomcat, each with 2 | |
| vCPUs and 4 GB of RAM | |
| \item {} | |
| A three-node MariaDB and Galera cluster, each with 4 vCPUs and 8 GB | |
| RAM | |
| \end{itemize} | |
| The company runs hardware load balancers and multiple web applications | |
| serving their websites, and orchestrates environments using combinations | |
| of scripts and Puppet. The website generates large amounts of log data | |
| daily that requires archiving. | |
| The solution would consist of the following OpenStack components: | |
| \begin{itemize} | |
| \item {} | |
| A firewall, switches and load balancers on the public facing network | |
| connections. | |
| \item {} | |
| OpenStack Controller service running Image, Identity, Networking, | |
| combined with support services such as MariaDB and RabbitMQ, | |
| configured for high availability on at least three controller nodes. | |
| \item {} | |
| OpenStack compute nodes running the KVM hypervisor. | |
| \item {} | |
| OpenStack Block Storage for use by compute instances, requiring | |
| persistent storage (such as databases for dynamic sites). | |
| \item {} | |
| OpenStack Object Storage for serving static objects (such as images). | |
| \end{itemize} | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{General_Architecture3}.png} | |
| \end{figure} | |
| Running up to 140 web instances and the small number of MariaDB | |
| instances requires 292 vCPUs available, as well as 584 GB RAM. On a | |
| typical 1U server using dual-socket hex-core Intel CPUs with | |
| Hyperthreading, and assuming 2:1 CPU overcommit ratio, this would | |
| require 8 OpenStack compute nodes. | |
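| One plausible reading of that sizing, assuming overcommit is applied to | |
| hardware threads: each node exposes \(2 \times 6 \times 2 = 24\) | |
| threads, or \(24 \times 2 = 48\) vCPUs at 2:1 overcommit, so | |
| \(\lceil 292 / 48 \rceil = 7\) nodes cover the vCPU demand, with the | |
| eighth node providing headroom for failures and maintenance and keeping | |
| memory demand to a modest \(584 / 8 = 73\) GB of RAM per node. | |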
| The web application instances run from local storage on each of the | |
| OpenStack compute nodes. The web application instances are stateless, | |
| meaning that any of the instances can fail and the application will | |
| continue to function. | |
| MariaDB server instances store their data on shared enterprise storage, | |
| such as NetApp or Solidfire devices. If a MariaDB instance fails, | |
| storage would be expected to be re-attached to another instance and | |
| rejoined to the Galera cluster. | |
| Logs from the web application servers are shipped to OpenStack Object | |
| Storage for processing and archiving. | |
| Additional capabilities can be realized by moving static web content to | |
| be served from OpenStack Object Storage containers, and backing the | |
| OpenStack Image service with OpenStack Object Storage. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| Increasing OpenStack Object Storage means network bandwidth needs to | |
| be taken into consideration. Running OpenStack Object Storage with | |
| network connections offering 10 GbE or better connectivity is | |
| advised. | |
| \end{sphinxadmonition} | |
| Leveraging the Orchestration and Telemetry services is also an option | |
| for providing auto-scaling, orchestrated web application | |
| environments. Defining the web applications in a | |
| {\hyperref[\detokenize{common/glossary:term-heat-orchestration-template-hot}]{\sphinxtermref{\DUrole{xref,std,std-term}{Heat Orchestration Template (HOT)}}}} | |
| negates the reliance on the current scripted Puppet | |
| solution. | |
| OpenStack Networking can be used to control hardware load balancers | |
| through the use of plug-ins and the Networking API. This allows users to | |
| control hardware load balancer pools and instances as members in these | |
| pools, but their use in production environments must be carefully | |
| weighed against current stability. | |
| An OpenStack general purpose cloud is often considered a starting | |
| point for building a cloud deployment. Such clouds are designed to balance | |
| the components and do not emphasize any particular aspect of the | |
| overall computing environment. Cloud design must give equal weight | |
| to the compute, network, and storage components. General purpose clouds | |
| are found in private, public, and hybrid environments, lending | |
| themselves to many different use cases. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| General purpose clouds are homogeneous deployments. | |
| They are not suited to specialized environments or edge case situations. | |
| \end{sphinxadmonition} | |
| Common uses of a general purpose cloud include: | |
| \begin{itemize} | |
| \item {} | |
| Providing a simple database | |
| \item {} | |
| A web application runtime environment | |
| \item {} | |
| A shared application development platform | |
| \item {} | |
| Lab test bed | |
| \end{itemize} | |
| Use cases that benefit from scale-out rather than scale-up approaches | |
| are good candidates for general purpose cloud architecture. | |
| A general purpose cloud is designed to have a range of potential | |
| uses or functions, not to be specialized for specific use cases. General | |
| purpose architecture is designed to address 80\% of potential use | |
| cases. The infrastructure, in itself, is a specific use | |
| case, enabling it to be used as a base model for the design process. | |
| General purpose clouds are designed to be platforms that are suited | |
| for general purpose applications. | |
| General purpose clouds are limited to the most basic components, | |
| but they can include additional resources such as: | |
| \begin{itemize} | |
| \item {} | |
| Virtual-machine disk image library | |
| \item {} | |
| Raw block storage | |
| \item {} | |
| File or object storage | |
| \item {} | |
| Firewalls | |
| \item {} | |
| Load balancers | |
| \item {} | |
| IP addresses | |
| \item {} | |
| Network overlays or virtual local area networks (VLANs) | |
| \item {} | |
| Software bundles | |
| \end{itemize} | |
| \section{Compute focused} | |
| \label{\detokenize{compute-focus:compute-focused}}\label{\detokenize{compute-focus::doc}} | |
| \subsection{Technical considerations} | |
| \label{\detokenize{compute-focus-technical-considerations::doc}}\label{\detokenize{compute-focus-technical-considerations:technical-considerations}} | |
| In a compute-focused OpenStack cloud, the type of instance workloads you | |
| provision heavily influences technical decision making. | |
| Public and private clouds require deterministic capacity planning to | |
| support elastic growth in order to meet user SLA expectations. | |
| Deterministic capacity planning is the path to predicting the effort and | |
| expense of making a given process perform consistently. This process is | |
| important because, when a service becomes a critical part of a user's | |
| infrastructure, the user's experience links directly to the SLAs of the | |
| cloud itself. | |
| There are two aspects of capacity planning to consider: | |
| \begin{itemize} | |
| \item {} | |
| Planning the initial deployment footprint | |
| \item {} | |
| Planning expansion of the environment to stay ahead of cloud user demands | |
| \end{itemize} | |
| Begin planning an initial OpenStack deployment footprint with | |
| estimations of expected uptake and existing infrastructure workloads. | |
| The starting point is the core count of the cloud. By applying relevant | |
| ratios, the user can gather information about: | |
| \begin{itemize} | |
| \item {} | |
| The number of expected concurrent instances: (overcommit fraction × | |
| cores) / virtual cores per instance | |
| \item {} | |
| Required storage: flavor disk size × number of instances | |
| \end{itemize} | |
| These ratios determine the amount of additional infrastructure needed to | |
| support the cloud. For example, consider a situation in which you | |
| require 1600 instances, each with 2 vCPU and 50 GB of storage. Assuming | |
| the default overcommit rate of 16:1, working out the math provides the | |
| following equations, worked through in the short sketch after the list: | |
| \begin{itemize} | |
| \item {} | |
| 1600 = (16 × (number of physical cores)) / 2 | |
| \item {} | |
| Storage required = 50 GB × 1600 | |
| \end{itemize} | |
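| The following is a minimal sketch of that arithmetic in Python; the | |
| function names are purely illustrative: | |
| \begin{sphinxVerbatim} | |
| def physical_cores(instances, vcpus_per_instance, overcommit): | |
|     # instances = (overcommit x cores) / vcpus per instance, | |
|     # solved here for the number of physical cores. | |
|     return instances * vcpus_per_instance / overcommit | |
| def storage_gb(instances, disk_gb_per_instance): | |
|     return instances * disk_gb_per_instance | |
| print(physical_cores(1600, 2, 16))  # 200.0 physical cores | |
| print(storage_gb(1600, 50))         # 80000 GB, that is, 80 TB | |
| \end{sphinxVerbatim} | |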
| On the surface, the equations reveal the need for 200 physical cores and | |
| 80 TB of storage for \sphinxcode{/var/lib/nova/instances/}. However, it is also | |
| important to look at patterns of usage to estimate the load that the API | |
| services, database servers, and queue servers are likely to encounter. | |
| Aside from the creation and termination of instances, consider the | |
| impact of users accessing the service, particularly on nova-api and its | |
| associated database. Listing instances gathers a great deal of | |
| information and, given the frequency with which users run this operation, | |
| a cloud with a large number of users can increase the load | |
| significantly. This can even occur unintentionally. For example, the | |
| OpenStack Dashboard instances tab refreshes the list of instances every | |
| 30 seconds, so leaving it open in a browser window can cause unexpected | |
| load. | |
| Consideration of these factors can help determine how many cloud | |
| controller cores you require. A server with 8 CPU cores and 8 GB of RAM | |
| would be sufficient for a rack of compute nodes, given the above | |
| caveats. | |
| Key hardware specifications are also crucial to the performance of user | |
| instances. Be sure to consider budget and performance needs, including | |
| storage performance (spindles/core), memory availability (RAM/core), | |
| network bandwidth (Gbps/core), and overall CPU performance (CPU/core). | |
| The cloud resource calculator is a useful tool in examining the impacts | |
| of different hardware and instance load outs. See \href{https://github.com/noslzzp/cloud-resource-calculator/blob/master/cloud-resource-calculator.ods}{cloud-resource-calculator}. | |
| \subsubsection{Expansion planning} | |
| \label{\detokenize{compute-focus-technical-considerations:expansion-planning}} | |
| A key challenge for planning the expansion of cloud compute services is | |
| the elastic nature of cloud infrastructure demands. | |
| Planning for expansion is a balancing act. Planning too conservatively | |
| can lead to unexpected oversubscription of the cloud and dissatisfied | |
| users. Planning for cloud expansion too aggressively can lead to | |
| unexpected underuse of the cloud and funds spent unnecessarily | |
| on operating infrastructure. | |
| The key is to carefully monitor the trends in cloud usage over time. The | |
| intent is to measure the consistency with which you deliver services, | |
| not the average speed or capacity of the cloud. Using this information | |
| to model capacity performance enables users to more accurately determine | |
| the current and future capacity of the cloud. | |
| \subsubsection{CPU and RAM} | |
| \label{\detokenize{compute-focus-technical-considerations:cpu-and-ram}} | |
| OpenStack enables users to overcommit CPU and RAM on compute nodes. This | |
| allows an increase in the number of instances running on the cloud at | |
| the cost of reducing the performance of the instances. OpenStack Compute | |
| uses the following ratios by default: | |
| \begin{itemize} | |
| \item {} | |
| CPU allocation ratio: 16:1 | |
| \item {} | |
| RAM allocation ratio: 1.5:1 | |
| \end{itemize} | |
| The default CPU allocation ratio of 16:1 means that the scheduler | |
| allocates up to 16 virtual cores per physical core. For example, if a | |
| physical node has 12 cores, the scheduler sees 192 available virtual | |
| cores. With typical flavor definitions of 4 virtual cores per instance, | |
| this ratio would provide 48 instances on a physical node. | |
| Similarly, the default RAM allocation ratio of 1.5:1 means that the | |
| scheduler allocates instances to a physical node as long as the total | |
| amount of RAM associated with the instances is less than 1.5 times the | |
| amount of RAM available on the physical node. | |
| You must select the appropriate CPU and RAM allocation ratio based on | |
| particular use cases. | |
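| As a rough sketch of how these ratios bound per-node capacity, consider | |
| the following Python fragment. The node size and flavor are | |
| hypothetical; the ratios themselves correspond to the | |
| \sphinxcode{cpu\_allocation\_ratio} and \sphinxcode{ram\_allocation\_ratio} | |
| options in OpenStack Compute: | |
| \begin{sphinxVerbatim} | |
| def instances_per_node(cores, ram_gb, flavor_vcpus, flavor_ram_gb, | |
|                        cpu_ratio=16.0, ram_ratio=1.5): | |
|     # Virtual resources the scheduler sees on this node. | |
|     vcpus = cores * cpu_ratio | |
|     vram_gb = ram_gb * ram_ratio | |
|     # The scarcer resource bounds the instance count. | |
|     return int(min(vcpus / flavor_vcpus, vram_gb / flavor_ram_gb)) | |
| # A hypothetical 12-core, 64 GB node packing 4 vCPU / 8 GB instances: | |
| print(instances_per_node(12, 64, 4, 8))  # 12 (RAM-bound, not CPU-bound) | |
| \end{sphinxVerbatim} | |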
| \subsubsection{Additional hardware} | |
| \label{\detokenize{compute-focus-technical-considerations:additional-hardware}} | |
| Certain use cases may benefit from exposure to additional devices on the | |
| compute node. Examples might include: | |
| \begin{itemize} | |
| \item {} | |
| High performance computing jobs that benefit from the availability of | |
| graphics processing units (GPUs) for general-purpose computing. | |
| \item {} | |
| Cryptographic routines that benefit from the availability of hardware | |
| random number generators to avoid entropy starvation. | |
| \item {} | |
| Database management systems that benefit from the availability of | |
| SSDs for ephemeral storage to maximize read/write time. | |
| \end{itemize} | |
| Host aggregates group hosts that share similar characteristics, which | |
| can include hardware similarities. The addition of specialized hardware | |
| to a cloud deployment is likely to add to the cost of each node, so | |
| consider carefully whether all compute nodes, or just a subset targeted | |
| by flavors, need the additional customization to support the desired | |
| workloads. | |
| \subsubsection{Utilization} | |
| \label{\detokenize{compute-focus-technical-considerations:utilization}} | |
| Infrastructure-as-a-Service offerings, including OpenStack, use flavors | |
| to provide standardized views of virtual machine resource requirements | |
| that simplify the problem of scheduling instances while making the best | |
| use of the available physical resources. | |
| In order to facilitate packing of virtual machines onto physical hosts, | |
| the default selection of flavors provides a second largest flavor that | |
| is half the size of the largest flavor in every dimension. It has half | |
| the vCPUs, half the vRAM, and half the ephemeral disk space. The next | |
| largest flavor is half that size again. The following figure provides a | |
| visual representation of this concept for a general purpose computing | |
| design: | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Compute_Tech_Bin_Packing_General1}.png} | |
| \end{figure} | |
| The following figure displays a CPU-optimized, packed server: | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Compute_Tech_Bin_Packing_CPU_optimized1}.png} | |
| \end{figure} | |
| These default flavors are well suited to typical configurations of | |
| commodity server hardware. To maximize utilization, however, it may be | |
| necessary to customize the flavors or create new ones in order to better | |
| align instance sizes to the available hardware. | |
| Workload characteristics may also influence hardware choices and flavor | |
| configuration, particularly where they present different ratios of CPU | |
| versus RAM versus HDD requirements. | |
| For more information on flavors, see \href{https://docs.openstack.org/ops-guide/ops-user-facing-operations.html\#flavors}{OpenStack Operations Guide: | |
| Flavors}. | |
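| The halving pattern is simple to generate programmatically. The sketch | |
| below assumes a hypothetical largest flavor of 32 vCPU, 128 GB RAM, and | |
| 320 GB of ephemeral disk; a real deployment would choose dimensions | |
| that match its hardware: | |
| \begin{sphinxVerbatim} | |
| def flavor_series(vcpus, ram_gb, disk_gb): | |
|     # Yield flavors, halving every dimension until 1 vCPU. | |
|     while vcpus >= 1: | |
|         yield {"vcpus": vcpus, "ram_gb": ram_gb, "disk_gb": disk_gb} | |
|         vcpus, ram_gb, disk_gb = vcpus // 2, ram_gb // 2, disk_gb // 2 | |
| for f in flavor_series(32, 128, 320): | |
|     print(f)  # 32/128/320, 16/64/160, ... down to 1/4/10 | |
| \end{sphinxVerbatim} | |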
| \subsubsection{OpenStack components} | |
| \label{\detokenize{compute-focus-technical-considerations:openstack-components}} | |
| Due to the nature of the workloads in this scenario, a number of | |
| components are highly beneficial for a compute-focused cloud. This | |
| includes the typical OpenStack components: | |
| \begin{itemize} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-compute-service-nova}]{\sphinxtermref{\DUrole{xref,std,std-term}{Compute service (nova)}}}} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-image-service-glance}]{\sphinxtermref{\DUrole{xref,std,std-term}{Image service (glance)}}}} | |
| \item {} | |
| {\hyperref[\detokenize{common/glossary:term-identity-service-keystone}]{\sphinxtermref{\DUrole{xref,std,std-term}{Identity service (keystone)}}}} | |
| \end{itemize} | |
| Also consider several specialized components: | |
| \begin{itemize} | |
| \item {} \begin{description} | |
| \item[{{\hyperref[\detokenize{common/glossary:term-orchestration-service-heat}]{\sphinxtermref{\DUrole{xref,std,std-term}{Orchestration service (heat)}}}}}] \leavevmode | |
| Given the nature of the applications involved in this scenario, these | |
| are heavily automated deployments. Making use of Orchestration is | |
| highly beneficial in this case. You can script the deployment of a | |
| batch of instances and the running of tests, but it makes sense to | |
| use the Orchestration service to handle all these actions. | |
| \end{description} | |
| \item {} \begin{description} | |
| \item[{{\hyperref[\detokenize{common/glossary:term-telemetry-service-telemetry}]{\sphinxtermref{\DUrole{xref,std,std-term}{Telemetry service (telemetry)}}}}}] \leavevmode | |
| Telemetry and the alarms it generates support autoscaling of | |
| instances using Orchestration. Users that are not using the | |
| Orchestration service do not need to deploy the Telemetry service and | |
| may choose to use external solutions to fulfill their metering and | |
| monitoring requirements. | |
| \end{description} | |
| \item {} \begin{description} | |
| \item[{{\hyperref[\detokenize{common/glossary:term-block-storage-service-cinder}]{\sphinxtermref{\DUrole{xref,std,std-term}{Block Storage service (cinder)}}}}}] \leavevmode | |
| Due to the burstable nature of the workloads and the applications | |
| and instances that perform batch processing, this cloud mainly uses | |
| memory or CPU, so the need for add-on storage to each instance is not | |
| a likely requirement. This does not mean that you do not use | |
| OpenStack Block Storage (cinder) in the infrastructure, but typically | |
| it is not a central component. | |
| \end{description} | |
| \item {} \begin{description} | |
| \item[{{\hyperref[\detokenize{common/glossary:term-networking-service-neutron}]{\sphinxtermref{\DUrole{xref,std,std-term}{Networking service (neutron)}}}}}] \leavevmode | |
| When choosing a networking platform, ensure that it either works with | |
| all desired hypervisor and container technologies and their OpenStack | |
| drivers, or that it includes an implementation of an ML2 mechanism | |
| driver. You can mix networking platforms that provide ML2 mechanism | |
| drivers. | |
| \end{description} | |
| \end{itemize} | |
| \subsection{Operational considerations} | |
| \label{\detokenize{compute-focus-operational-considerations:operational-considerations}}\label{\detokenize{compute-focus-operational-considerations::doc}} | |
| There are a number of operational considerations that affect the design | |
| of compute-focused OpenStack clouds, including: | |
| \begin{itemize} | |
| \item {} | |
| Enforcing strict API availability requirements | |
| \item {} | |
| Understanding and dealing with failure scenarios | |
| \item {} | |
| Managing host maintenance schedules | |
| \end{itemize} | |
| Service-level agreements (SLAs) are contractual obligations that ensure | |
| the availability of a service. When designing an OpenStack cloud, | |
| factoring in promises of availability implies a certain level of | |
| redundancy and resiliency. | |
| \subsubsection{Monitoring} | |
| \label{\detokenize{compute-focus-operational-considerations:monitoring}} | |
| OpenStack clouds require appropriate monitoring platforms to catch and | |
| manage errors. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| We recommend leveraging existing monitoring systems to see if they | |
| are able to effectively monitor an OpenStack environment. | |
| \end{sphinxadmonition} | |
| Specific meters that are critically important to capture include the | |
| following; a sampling sketch appears after the list: | |
| \begin{itemize} | |
| \item {} | |
| Image disk utilization | |
| \item {} | |
| Response time to the Compute API | |
| \end{itemize} | |
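| For instance, response time to the Compute API can be sampled with a | |
| short script such as the following sketch, which times an authenticated | |
| list request using the Python \sphinxcode{requests} library; the | |
| endpoint URL and token are placeholders: | |
| \begin{sphinxVerbatim} | |
| import time | |
| import requests | |
| NOVA_URL = "http://controller:8774/v2.1/servers"  # placeholder endpoint | |
| TOKEN = "..."  # placeholder keystone token | |
| start = time.monotonic() | |
| resp = requests.get(NOVA_URL, headers={"X-Auth-Token": TOKEN}, timeout=10) | |
| elapsed = time.monotonic() - start | |
| print(f"GET /servers -> {resp.status_code} in {elapsed:.3f}s") | |
| \end{sphinxVerbatim} | |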
| \subsubsection{Capacity planning} | |
| \label{\detokenize{compute-focus-operational-considerations:capacity-planning}} | |
| Adding extra capacity to an OpenStack cloud is a horizontal scaling | |
| process. | |
| We recommend similar (or the same) CPUs when adding extra nodes to the | |
| environment. This reduces the chance of breaking live-migration features | |
| if they are present. Scaling out hypervisor hosts also has a direct | |
| effect on network and other data center resources. We recommend you | |
| factor in this increase when reaching rack capacity or when requiring | |
| extra network switches. | |
| Changing the internal components of a Compute host to account for | |
| increases in demand is a process known as vertical scaling. Swapping a | |
| CPU for one with more cores, or increasing the memory in a server, can | |
| help add extra capacity for running applications. | |
| Another option is to assess the average workloads and increase the | |
| number of instances that can run within the compute environment by | |
| adjusting the overcommit ratio. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| It is important to remember that changing the CPU overcommit ratio | |
| can have a detrimental effect on performance and increase the | |
| likelihood of noisy neighbor issues. | |
| \end{sphinxadmonition} | |
| The added risk of increasing the overcommit ratio is that more instances | |
| fail when a compute host fails. We do not recommend that you increase | |
| the CPU overcommit ratio in compute-focused OpenStack design | |
| architecture, as it can increase the potential for noisy neighbor | |
| issues. | |
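| One way to see the failure-domain side of this trade-off is to compute | |
| how many instances are lost when a single host fails at different | |
| overcommit ratios; the host and flavor sizes below are hypothetical: | |
| \begin{sphinxVerbatim} | |
| def instances_lost_per_host(cores, cpu_ratio, flavor_vcpus): | |
|     # Every instance on a failed host fails with it. | |
|     return int(cores * cpu_ratio / flavor_vcpus) | |
| for ratio in (2.0, 4.0, 16.0): | |
|     print(ratio, instances_lost_per_host(24, ratio, 2)) | |
| # 2.0 -> 24 instances, 4.0 -> 48, 16.0 -> 192 | |
| \end{sphinxVerbatim} | |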
| \subsection{Architecture} | |
| \label{\detokenize{compute-focus-architecture::doc}}\label{\detokenize{compute-focus-architecture:architecture}} | |
| The hardware selection covers three areas: | |
| \begin{itemize} | |
| \item {} | |
| Compute | |
| \item {} | |
| Network | |
| \item {} | |
| Storage | |
| \end{itemize} | |
| Compute-focused OpenStack clouds have high demands on processor and | |
| memory resources, and require hardware that can handle these demands. | |
| Consider the following factors when selecting compute (server) hardware: | |
| \begin{itemize} | |
| \item {} | |
| Server density | |
| \item {} | |
| Resource capacity | |
| \item {} | |
| Expandability | |
| \item {} | |
| Cost | |
| \end{itemize} | |
| Weigh these considerations against each other to determine the best | |
| design for the desired purpose. For example, increasing server density | |
| means sacrificing resource capacity or expandability. | |
| A compute-focused cloud should have an emphasis on server hardware that | |
| can offer more CPU sockets, more CPU cores, and more RAM. Network | |
| connectivity and storage capacity are less critical. | |
| When designing a compute-focused OpenStack architecture, you must | |
| consider whether you intend to scale up or scale out. Selecting a | |
| smaller number of larger hosts, or a larger number of smaller hosts, | |
| depends on a combination of factors: cost, power, cooling, physical rack | |
| and floor space, support-warranty, and manageability. | |
| Considerations for selecting hardware: | |
| \begin{itemize} | |
| \item {} | |
| Most blade servers can support dual-socket multi-core CPUs. To avoid | |
| this CPU limit, select \sphinxcode{full width} or \sphinxcode{full height} blades. Be | |
| aware, however, that this also decreases server density. For example, | |
| high density blade servers such as HP BladeSystem or Dell PowerEdge | |
| M1000e support up to 16 servers in only ten rack units. Half-height | |
| blades are twice as dense as full-height blades, which yield only | |
| eight servers per ten rack units. | |
| \item {} | |
| 1U rack-mounted servers that occupy only a single rack unit may offer | |
| greater server density than a blade server solution. It is possible | |
| to place forty 1U servers in a rack, providing space for the top of | |
| rack (ToR) switches, compared to 32 full width blade servers. | |
| \item {} | |
| 2U rack-mounted servers provide quad-socket, multi-core CPU support, | |
| but with a corresponding decrease in server density (half the density | |
| that 1U rack-mounted servers offer). | |
| \item {} | |
| Larger rack-mounted servers, such as 4U servers, often provide even | |
| greater CPU capacity, commonly supporting four or even eight CPU | |
| sockets. These servers have greater expandability, but such servers | |
| have much lower server density and are often more expensive. | |
| \item {} | |
| \sphinxcode{Sled servers} are rack-mounted servers that support multiple | |
| independent servers in a single 2U or 3U enclosure. These deliver | |
| higher density as compared to typical 1U or 2U rack-mounted servers. | |
| For example, many sled servers offer four independent dual-socket | |
| nodes in a 2U enclosure, for a total of eight CPU sockets. | |
| \end{itemize} | |
| Consider these when choosing server hardware for a compute-focused | |
| OpenStack design architecture: | |
| \begin{itemize} | |
| \item {} | |
| Instance density | |
| \item {} | |
| Host density | |
| \item {} | |
| Power and cooling density | |
| \end{itemize} | |
| \subsubsection{Selecting networking hardware} | |
| \label{\detokenize{compute-focus-architecture:selecting-networking-hardware}} | |
| Some of the key considerations for networking hardware selection | |
| include: | |
| \begin{itemize} | |
| \item {} | |
| Port count | |
| \item {} | |
| Port density | |
| \item {} | |
| Port speed | |
| \item {} | |
| Redundancy | |
| \item {} | |
| Power requirements | |
| \end{itemize} | |
| We recommend designing the network architecture using a scalable network | |
| model that makes it easy to add capacity and bandwidth. A good example | |
| of such a model is the leaf-spine model. In this type of network | |
| design, it is possible to easily add additional bandwidth as well as | |
| scale out to additional racks of gear. It is important to select network | |
| hardware that supports the required port count, port speed, and port | |
| density while also allowing for future growth as workload demands | |
| increase. It is also important to evaluate where in the network | |
| architecture it is valuable to provide redundancy. | |
| \subsubsection{Operating system and hypervisor} | |
| \label{\detokenize{compute-focus-architecture:operating-system-and-hypervisor}} | |
| The selection of operating system (OS) and hypervisor has a significant | |
| impact on the end point design. | |
| OS and hypervisor selection impact the following areas: | |
| \begin{itemize} | |
| \item {} | |
| Cost | |
| \item {} | |
| Supportability | |
| \item {} | |
| Management tools | |
| \item {} | |
| Scale and performance | |
| \item {} | |
| Security | |
| \item {} | |
| Supported features | |
| \item {} | |
| Interoperability | |
| \end{itemize} | |
| \subsubsection{OpenStack components} | |
| \label{\detokenize{compute-focus-architecture:openstack-components}} | |
| The selection of OpenStack components is important. There are certain | |
| components that are required, for example the compute and image | |
| services, but others, such as the Orchestration service, may not be | |
| present. | |
| For a compute-focused OpenStack design architecture, the following | |
| components may be present: | |
| \begin{itemize} | |
| \item {} | |
| Identity (keystone) | |
| \item {} | |
| Dashboard (horizon) | |
| \item {} | |
| Compute (nova) | |
| \item {} | |
| Object Storage (swift) | |
| \item {} | |
| Image (glance) | |
| \item {} | |
| Networking (neutron) | |
| \item {} | |
| Orchestration (heat) | |
| \begin{sphinxadmonition}{note}{Note:} | |
| A compute-focused design is less likely to include OpenStack Block | |
| Storage. However, there may be some situations where the need for | |
| performance requires a block storage component to improve data I/O. | |
| \end{sphinxadmonition} | |
| \end{itemize} | |
| The exclusion of certain OpenStack components might also limit the | |
| functionality of other components. If a design includes the | |
| Orchestration service but excludes the Telemetry service, then the | |
| design cannot take advantage of Orchestration's auto scaling | |
| functionality as this relies on information from Telemetry. | |
| \subsubsection{Networking software} | |
| \label{\detokenize{compute-focus-architecture:networking-software}} | |
| OpenStack Networking provides a wide variety of networking services for | |
| instances. There are many additional networking software packages that | |
| might be useful to manage the OpenStack components themselves. The | |
| \href{https://docs.openstack.org/ha-guide/}{OpenStack High Availability Guide} | |
| describes some of these software packages in more detail. | |
| For a compute-focused OpenStack cloud, the OpenStack infrastructure | |
| components must be highly available. If the design does not include | |
| hardware load balancing, you must add networking software packages, for | |
| example, HAProxy. | |
| \subsubsection{Management software} | |
| \label{\detokenize{compute-focus-architecture:management-software}} | |
| The selected supplemental software solution affects the overall | |
| OpenStack cloud design. This includes software for providing | |
| clustering, logging, monitoring, and alerting. | |
| The availability requirements of the design are the main determiner | |
| for the inclusion of clustering software, such as Corosync or Pacemaker. | |
| Operational considerations determine the requirements for logging, | |
| monitoring, and alerting. Each of these sub-categories includes various | |
| options. | |
| Some other potential design impacts include: | |
| \begin{description} | |
| \item[{OS-hypervisor combination}] \leavevmode | |
| Ensure that the selected logging, monitoring, or alerting tools | |
| support the proposed OS-hypervisor combination. | |
| \item[{Network hardware}] \leavevmode | |
| The logging, monitoring, and alerting software must support the | |
| network hardware selection. | |
| \end{description} | |
| \subsubsection{Database software} | |
| \label{\detokenize{compute-focus-architecture:database-software}} | |
| A large majority of OpenStack components require access to back-end | |
| database services to store state and configuration information. Select | |
| an appropriate back-end database that satisfies the availability and | |
| fault tolerance requirements of the OpenStack services. OpenStack | |
| services support connecting to any database that the SQLAlchemy Python | |
| drivers support; however, most common database deployments make use of | |
| MySQL or some variation of it. We recommend that you make the database | |
| that provides back-end services within the cloud highly | |
| available. Some of the more common software solutions include Galera, | |
| MariaDB, and MySQL with multi-master replication. | |
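| Because access goes through SQLAlchemy, any back end with a supported | |
| driver works. A minimal connection sketch, with placeholder credentials | |
| and a load-balanced host name standing in for a highly available | |
| endpoint, might look like: | |
| \begin{sphinxVerbatim} | |
| from sqlalchemy import create_engine, text | |
| # Placeholder URL; a highly available deployment would point this at | |
| # a load-balanced Galera or multi-master MySQL endpoint. | |
| engine = create_engine("mysql+pymysql://nova:secret@db-vip/nova") | |
| with engine.connect() as conn: | |
|     print(conn.execute(text("SELECT 1")).scalar()) | |
| \end{sphinxVerbatim} | |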
| \subsection{Prescriptive examples} | |
| \label{\detokenize{compute-focus-prescriptive-examples::doc}}\label{\detokenize{compute-focus-prescriptive-examples:prescriptive-examples}} | |
| The Conseil Européen pour la Recherche Nucléaire (CERN), also known as | |
| the European Organization for Nuclear Research, provides particle | |
| accelerators and other infrastructure for high-energy physics research. | |
| As of 2011, CERN operated the following two compute centers in Europe, | |
| with plans to add a third. | |
| \noindent\begin{tabular}{|*{2}{p{\dimexpr(\linewidth-\arrayrulewidth)/2-2\tabcolsep-\arrayrulewidth\relax}|}} | |
| \hline | |
| \sphinxstylethead{\relax | |
| Data center | |
| \unskip}\relax &\sphinxstylethead{\relax | |
| Approximate capacity | |
| \unskip}\relax \\ | |
| \hline | |
| Geneva, Switzerland | |
| &\begin{itemize} | |
| \item {} | |
| 3.5 megawatts | |
| \item {} | |
| 91000 cores | |
| \item {} | |
| 120 PB HDD | |
| \item {} | |
| 100 PB Tape | |
| \item {} | |
| 310 TB Memory | |
| \end{itemize} | |
| \\ | |
| \hline | |
| Budapest, Hungary | |
| &\begin{itemize} | |
| \item {} | |
| 2.5 megawatts | |
| \item {} | |
| 20000 cores | |
| \item {} | |
| 6 PB HDD | |
| \end{itemize} | |
| \\ | |
| \hline\end{tabular} | |
| To support a growing number of compute-heavy users of experiments | |
| related to the Large Hadron Collider (LHC), CERN ultimately elected to | |
| deploy an OpenStack cloud using Scientific Linux and RDO. This effort | |
| aimed to simplify the management of the center's compute resources with | |
| a view to doubling compute capacity through the addition of a data | |
| center in 2013 while maintaining the same levels of compute staff. | |
| The CERN solution uses {\hyperref[\detokenize{common/glossary:term-cell}]{\sphinxtermref{\DUrole{xref,std,std-term}{cells}}}} for segregation of compute | |
| resources and for transparently scaling between different data centers. | |
| This decision meant trading off support for security groups and live | |
| migration. In addition, they must manually replicate some details, like | |
| flavors, across cells. In spite of these drawbacks, cells provide the | |
| required scale while exposing a single public API endpoint to users. | |
| CERN created a compute cell for each of the two original data centers | |
| and created a third when it added a new data center in 2013. Each cell | |
| contains three availability zones to further segregate compute resources | |
| and at least three RabbitMQ message brokers configured for clustering | |
| with mirrored queues for high availability. | |
| The API cell, which resides behind an HAProxy load balancer, is in the | |
| data center in Switzerland and directs API calls to compute cells using | |
| a customized variation of the cell scheduler. The customizations allow | |
| certain workloads to route to a specific data center or all data | |
| centers, with cell RAM availability determining cell selection in the | |
| latter case. | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Generic_CERN_Example}.png} | |
| \end{figure} | |
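| Conceptually, the RAM-based selection reduces to choosing the cell with | |
| the most free memory. The fragment below merely illustrates that idea | |
| and is not CERN's actual scheduler code: | |
| \begin{sphinxVerbatim} | |
| cells = [ | |
|     {"name": "geneva", "free_ram_gb": 1200}, | |
|     {"name": "budapest", "free_ram_gb": 3400}, | |
| ] | |
| # Route the workload to the cell with the most available RAM. | |
| target = max(cells, key=lambda c: c["free_ram_gb"]) | |
| print(target["name"])  # budapest | |
| \end{sphinxVerbatim} | |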
| There is also some customization of the filter scheduler that handles | |
| placement within the cells: | |
| \begin{description} | |
| \item[{ImagePropertiesFilter}] \leavevmode | |
| Provides special handling depending on the guest operating system in | |
| use (Linux-based or Windows-based). | |
| \item[{ProjectsToAggregateFilter}] \leavevmode | |
| Provides special handling depending on which project the instance is | |
| associated with. | |
| \item[{default\_schedule\_zones}] \leavevmode | |
| Allows the selection of multiple default availability zones, rather | |
| than a single default. | |
| \end{description} | |
| A central database team manages the MySQL database server in each cell | |
| in an active/passive configuration with a NetApp storage back end. | |
| Backups run every 6 hours. | |
| \subsubsection{Network architecture} | |
| \label{\detokenize{compute-focus-prescriptive-examples:network-architecture}} | |
| To integrate with existing networking infrastructure, CERN made | |
| customizations to legacy networking (nova-network). This was in the form | |
| of a driver to integrate with CERN's existing database for tracking MAC | |
| and IP address assignments. | |
| The driver facilitates selection of a MAC address and IP for a new | |
| instance based on the compute node where the scheduler places the | |
| instance: it selects the address from the pre-registered list | |
| associated with that node in the database, and the database is then | |
| updated to reflect the assignment. | |
| \subsubsection{Storage architecture} | |
| \label{\detokenize{compute-focus-prescriptive-examples:storage-architecture}} | |
| CERN deploys the OpenStack Image service in the API cell and configures | |
| it to expose version 1 (V1) of the API. This also requires the image | |
| registry. The storage back end in use is a 3 PB Ceph cluster. | |
| CERN maintains a small set of Scientific Linux 5 and 6 images onto which | |
| orchestration tools can place applications. Puppet manages instance | |
| configuration and customization. | |
| \subsubsection{Monitoring} | |
| \label{\detokenize{compute-focus-prescriptive-examples:monitoring}} | |
| CERN does not require direct billing, but uses the Telemetry service to | |
| perform metering for the purposes of adjusting project quotas. CERN uses | |
| a sharded, replicated MongoDB back end. To spread API load, CERN | |
| deploys instances of the nova-api service within the child cells for | |
| Telemetry to query against. This also requires the configuration of | |
| supporting services such as keystone, glance-api, and glance-registry in | |
| the child cells. | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Generic_CERN_Architecture}.png} | |
| \end{figure} | |
| Additional monitoring tools in use include | |
| \href{http://flume.apache.org/}{Flume}, \href{http://www.elasticsearch.org/}{Elastic | |
| Search}, | |
| \href{http://www.elasticsearch.org/overview/kibana/}{Kibana}, and the CERN | |
| developed \href{http://lemon.web.cern.ch/lemon/index.shtml}{Lemon} | |
| project. | |
| Compute-focused clouds are a specialized subset of the general | |
| purpose OpenStack cloud architecture. A compute-focused cloud | |
| specifically supports compute intensive workloads. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| Compute intensive workloads may be CPU intensive, RAM intensive, | |
| or both; they are not typically storage or network intensive. | |
| \end{sphinxadmonition} | |
| Compute-focused workloads may include the following use cases: | |
| \begin{itemize} | |
| \item {} | |
| High performance computing (HPC) | |
| \item {} | |
| Big data analytics using Hadoop or other distributed data stores | |
| \item {} | |
| Continuous integration/continuous deployment (CI/CD) | |
| \item {} | |
| Platform-as-a-Service (PaaS) | |
| \item {} | |
| Signal processing for network function virtualization (NFV) | |
| \end{itemize} | |
| \begin{sphinxadmonition}{note}{Note:} | |
| A compute-focused OpenStack cloud does not typically use raw | |
| block storage services as it does not host applications that | |
| require persistent block storage. | |
| \end{sphinxadmonition} | |
| \section{Storage focused} | |
| \label{\detokenize{storage-focus:storage-focused}}\label{\detokenize{storage-focus::doc}} | |
| \subsection{Technical considerations} | |
| \label{\detokenize{storage-focus-technical-considerations::doc}}\label{\detokenize{storage-focus-technical-considerations:technical-considerations}} | |
| Some of the key technical considerations that are critical to a | |
| storage-focused OpenStack design architecture include: | |
| \begin{description} | |
| \item[{Input-Output requirements}] \leavevmode | |
| Input-Output performance requirements need to be researched and | |
| modeled before deciding on a final storage framework. Running | |
| benchmarks for Input-Output performance provides a baseline for | |
| expected performance levels. If these tests capture sufficient | |
| detail, the resulting data can help model behavior and results | |
| under different workloads. Running smaller scripted benchmarks | |
| during the lifecycle of the architecture helps record the system | |
| health at different points in time. The data from these scripted | |
| benchmarks assists in future scoping and in gaining a deeper | |
| understanding of an organization's needs. | |
| \item[{Scale}] \leavevmode | |
| Scaling storage solutions in a storage-focused OpenStack | |
| architecture design is driven by initial requirements, including | |
| {\hyperref[\detokenize{common/glossary:term-input-output-operations-per-second-iops}]{\sphinxtermref{\DUrole{xref,std,std-term}{IOPS}}}}, capacity, | |
| bandwidth, and future needs. Planning capacity based on projected | |
| needs over the course of a budget cycle is important for a design. | |
| The architecture should balance cost and capacity, while also allowing | |
| flexibility to implement new technologies and methods as they become | |
| available. | |
| \item[{Security}] \leavevmode | |
| Designing security around data has multiple points of focus that | |
| vary depending on SLAs, legal requirements, industry regulations, | |
| and certifications needed for systems or people. Consider compliance | |
| with HIPAA, ISO 9000, and SOX based on the type of data. For certain | |
| organizations, multiple levels of access control are important. | |
| \item[{OpenStack compatibility}] \leavevmode | |
| Interoperability and integration with OpenStack can be paramount in | |
| deciding on a storage hardware and storage management platform. | |
| Interoperability and integration includes factors such as OpenStack | |
| Block Storage interoperability, OpenStack Object Storage | |
| compatibility, and hypervisor compatibility (which affects the | |
| ability to use storage for ephemeral instance storage). | |
| \item[{Storage management}] \leavevmode | |
| You must address a range of storage management-related | |
| considerations in the design of a storage-focused OpenStack cloud. | |
| These considerations include, but are not limited to, backup | |
| strategy (and restore strategy, since a backup that cannot be | |
| restored is useless), data valuation and hierarchical storage | |
| management, retention strategy, data placement, and workflow | |
| automation. | |
| \item[{Data grids}] \leavevmode | |
| Data grids are helpful when answering questions around data | |
| valuation. Data grids improve decision making through correlation of | |
| access patterns, ownership, and business-unit revenue with other | |
| metadata values to deliver actionable information about data. | |
| \end{description} | |
| When building a storage-focused OpenStack architecture, strive to build | |
| a flexible design based on an industry standard core. One way of | |
| accomplishing this might be through the use of different back ends | |
| serving different use cases. | |
| \subsection{Operational Considerations} | |
| \label{\detokenize{storage-focus-operational-considerations:operational-considerations}}\label{\detokenize{storage-focus-operational-considerations::doc}} | |
| Several operational factors affect the design choices for a | |
| storage-focused cloud. In larger installations, operations staff are | |
| responsible for maintaining the cloud environment, including: | |
| \begin{description} | |
| \item[{Maintenance tasks}] \leavevmode | |
| The storage solution should take into account storage maintenance | |
| and the impact on underlying workloads. | |
| \item[{Reliability and availability}] \leavevmode | |
| Reliability and availability depend on wide area network | |
| availability and on the level of precautions taken by the service | |
| provider. | |
| \item[{Flexibility}] \leavevmode | |
| Organizations need to have the flexibility to choose between | |
| off-premise and on-premise cloud storage options. This relies on | |
| relevant decision criteria with potential cost savings. For example, | |
| continuity of operations, disaster recovery, security, records | |
| retention laws, regulations, and policies. | |
| \end{description} | |
| Monitoring and alerting services are vital in cloud environments with | |
| high demands on storage resources. These services provide a real-time | |
| view into the health and performance of the storage systems. An | |
| integrated management console, or other dashboards capable of | |
| visualizing SNMP data, is helpful when discovering and resolving issues | |
| that arise within the storage cluster. | |
| A storage-focused cloud design should include: | |
| \begin{itemize} | |
| \item {} | |
| Monitoring of physical hardware resources. | |
| \item {} | |
| Monitoring of environmental resources such as temperature and | |
| humidity. | |
| \item {} | |
| Monitoring of storage resources such as available storage, memory, | |
| and CPU. | |
| \item {} | |
| Monitoring of advanced storage performance data to ensure that | |
| storage systems are performing as expected. | |
| \item {} | |
| Monitoring of network resources for service disruptions which would | |
| affect access to storage. | |
| \item {} | |
| Centralized log collection. | |
| \item {} | |
| Log analytics capabilities. | |
| \item {} | |
| Ticketing system (or integration with a ticketing system) to track | |
| issues. | |
| \item {} | |
| Alerting and notification of responsible teams or automated systems | |
| which remediate problems with storage as they arise. | |
| \item {} | |
| Network Operations Center (NOC) staffed and always available to | |
| resolve issues. | |
| \end{itemize} | |
| \subsubsection{Application awareness} | |
| \label{\detokenize{storage-focus-operational-considerations:application-awareness}} | |
| Well-designed applications should be aware of underlying storage | |
| subsystems in order to use cloud storage solutions effectively. | |
| If replication is not available natively from the storage subsystem, | |
| operations personnel must be able to modify the application, or design | |
| it to react accordingly, so that it can provide its own replication | |
| service. An application designed to | |
| detect underlying storage systems can function in a wide variety of | |
| infrastructures, and still have the same basic behavior regardless of | |
| the differences in the underlying infrastructure. | |
| \subsubsection{Fault tolerance and availability} | |
| \label{\detokenize{storage-focus-operational-considerations:fault-tolerance-and-availability}} | |
| Designing for fault tolerance and availability of storage systems in an | |
| OpenStack cloud is vastly different when comparing the Block Storage and | |
| Object Storage services. | |
| \paragraph{Block Storage fault tolerance and availability} | |
| \label{\detokenize{storage-focus-operational-considerations:block-storage-fault-tolerance-and-availability}} | |
| Configure Block Storage resource nodes with advanced RAID controllers | |
| and high performance disks to provide fault tolerance at the hardware | |
| level. | |
| Deploy high performing storage solutions such as SSD disk drives or | |
| flash storage systems for applications requiring extreme performance out | |
| of Block Storage devices. | |
| In environments that place extreme demands on Block Storage, we | |
| recommend using multiple storage pools. In this case, each pool of | |
| devices should have a similar hardware design and disk configuration | |
| across all hardware nodes in that pool. This allows for a design that | |
| provides applications with access to a wide variety of Block Storage | |
| pools, each with their own redundancy, availability, and performance | |
| characteristics. When deploying multiple pools of storage it is also | |
| important to consider the impact on the Block Storage scheduler which is | |
| responsible for provisioning storage across resource nodes. Ensuring | |
| that applications can schedule volumes in multiple regions, each with | |
| their own network, power, and cooling infrastructure, can give projects | |
| the ability to build fault tolerant applications that are distributed | |
| across multiple availability zones. | |
| In addition to the Block Storage resource nodes, it is important to | |
| design for high availability and redundancy of the APIs and related | |
| services that are responsible for provisioning and providing access to | |
| storage. We recommend designing a layer of hardware or software load | |
| balancers in order to achieve high availability of the appropriate REST | |
| API services to provide uninterrupted service. In some cases, it may | |
| also be necessary to deploy an additional layer of load balancing to | |
| provide access to back-end database services responsible for servicing | |
| and storing the state of Block Storage volumes. We also recommend | |
| designing a highly available database solution to store the Block | |
| Storage databases. Leverage highly available database solutions such as | |
| Galera and MariaDB to help keep database services online for | |
| uninterrupted access, so that projects can manage Block Storage volumes. | |
| In a cloud with extreme demands on Block Storage, the network | |
| architecture should take into account the amount of East-West bandwidth | |
| required for instances to make use of the available storage resources. | |
| The selected network devices should support jumbo frames for | |
| transferring large blocks of data. In some cases, it may be necessary to | |
| create an additional back-end storage network dedicated to providing | |
| connectivity between instances and Block Storage resources so that there | |
| is no contention of network resources. | |
| \paragraph{Object Storage fault tolerance and availability} | |
| \label{\detokenize{storage-focus-operational-considerations:object-storage-fault-tolerance-and-availability}} | |
| While consistency and partition tolerance are both inherent features of | |
| the Object Storage service, it is important to design the overall | |
| storage architecture to ensure that the implemented system meets those | |
| goals. The OpenStack Object Storage service places a specific number of | |
| data replicas as objects on resource nodes. These replicas are | |
| distributed throughout the cluster based on a consistent hash ring which | |
| exists on all nodes in the cluster. | |
| Design the Object Storage system with a sufficient number of zones to | |
| provide quorum for the number of replicas defined. For example, with | |
| three replicas configured in the Swift cluster, the recommended number | |
| of zones to configure within the Object Storage cluster in order to | |
| achieve quorum is five. While it is possible to deploy a solution with | |
| fewer zones, the implied risk of doing so is that some data may not be | |
| available and API requests to certain objects stored in the cluster | |
| might fail. For this reason, ensure you properly account for the number | |
| of zones in the Object Storage cluster. | |
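| To make the quorum arithmetic concrete, the following is a small sketch | |
| of the majority calculation; the five-zone recommendation itself comes | |
| from the guidance above: | |
| \begin{sphinxVerbatim} | |
| def write_quorum(replicas): | |
|     # A majority of replicas must acknowledge a write. | |
|     return replicas // 2 + 1 | |
| print(write_quorum(3))  # 2 | |
| # With three replicas configured, this guide recommends five zones. | |
| \end{sphinxVerbatim} | |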
| Each Object Storage zone should be self-contained within its own | |
| availability zone. Each availability zone should have independent access | |
| to network, power and cooling infrastructure to ensure uninterrupted | |
| access to data. In addition, a pool of Object Storage proxy servers | |
| providing access to data stored on the object nodes should service each | |
| availability zone. Object proxies in each region should leverage local | |
| read and write affinity so that local storage resources facilitate | |
| access to objects wherever possible. We recommend deploying upstream | |
| load balancing to ensure that proxy services are distributed across the | |
| multiple zones and, in some cases, it may be necessary to make use of | |
| third-party solutions to aid with geographical distribution of services. | |
| A zone within an Object Storage cluster is a logical division. Any of | |
| the following may represent a zone: | |
| \begin{itemize} | |
| \item {} | |
| A disk within a single node | |
| \item {} | |
| One zone per node | |
| \item {} | |
| A zone per collection of nodes | |
| \item {} | |
| Multiple racks | |
| \item {} | |
| Multiple data centers (DCs) | |
| \end{itemize} | |
| Selecting the proper zone design is crucial for allowing the Object | |
| Storage cluster to scale while providing an available and redundant | |
| storage system. It may be necessary to configure storage policies that | |
| have different requirements with regards to replicas, retention and | |
| other factors that could heavily affect the design of storage in a | |
| specific zone. | |
| \subsubsection{Scaling storage services} | |
| \label{\detokenize{storage-focus-operational-considerations:scaling-storage-services}} | |
| Adding storage capacity and bandwidth is a very different process when | |
| comparing the Block and Object Storage services. While adding Block | |
| Storage capacity is a relatively simple process, adding capacity and | |
| bandwidth to the Object Storage systems is a complex task that requires | |
| careful planning and consideration during the design phase. | |
| \paragraph{Scaling Block Storage} | |
| \label{\detokenize{storage-focus-operational-considerations:scaling-block-storage}} | |
| You can upgrade Block Storage pools to add storage capacity without | |
| interrupting the overall Block Storage service. Add nodes to the pool by | |
| installing and configuring the appropriate hardware and software and | |
| then allowing that node to report in to the proper storage pool via the | |
| message bus. This is because Block Storage nodes report into the | |
| scheduler service advertising their availability. After the node is | |
| online and available, projects can make use of those storage resources | |
| instantly. | |
| In some cases, the demand on Block Storage from instances may exhaust | |
| the available network bandwidth. As a result, design network | |
| infrastructure that services Block Storage resources in such a way that | |
| you can add capacity and bandwidth easily. This often involves the use | |
| of dynamic routing protocols or advanced networking solutions to add | |
| capacity to downstream devices easily. Both the front-end and back-end | |
| storage network designs should encompass the ability to quickly and | |
| easily add capacity and bandwidth. | |
| \paragraph{Scaling Object Storage} | |
| \label{\detokenize{storage-focus-operational-considerations:scaling-object-storage}} | |
| Adding back-end storage capacity to an Object Storage cluster requires | |
| careful planning and consideration. In the design phase, it is important | |
| to determine the maximum partition power required by the Object Storage | |
| service, which determines the maximum number of partitions that can | |
| exist. Object Storage distributes data among all available storage, but | |
| a partition cannot span more than one disk, although a disk can have | |
| multiple partitions. | |
| For example, a system that starts with a single disk and a partition | |
| power of 3 can have 8 (2\textasciicircum{}3) partitions. Adding a second disk means that | |
| each has 4 partitions. The one-disk-per-partition limit means that this | |
| system can never have more than 8 partitions, limiting its scalability. | |
| However, a system that starts with a single disk and a partition power | |
| of 10 can have up to 1024 (2\textasciicircum{}10) partitions. | |
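| The partition arithmetic in this example is easy to verify with a few | |
| lines of Python: | |
| \begin{sphinxVerbatim} | |
| def partitions(part_power): | |
|     # The ring has 2**part_power partitions, fixed at build time. | |
|     return 2 ** part_power | |
| def partitions_per_disk(part_power, disks): | |
|     return partitions(part_power) / disks | |
| print(partitions(3), partitions_per_disk(3, 2))    # 8 partitions, 4.0 per disk | |
| print(partitions(10), partitions_per_disk(10, 1))  # 1024 partitions | |
| \end{sphinxVerbatim} | |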
| As you add back-end storage capacity to the system, the partition maps | |
| redistribute data amongst the storage nodes. In some cases, this | |
| replication consists of extremely large data sets. In these cases, we | |
| recommend using back-end replication links that do not contend with | |
| projects' access to data. | |
| As more projects begin to access data within the cluster and their data | |
| sets grow, it is necessary to add front-end bandwidth to service data | |
| access requests. Adding front-end bandwidth to an Object Storage cluster | |
| requires careful planning and design of the Object Storage proxies that | |
| projects use to gain access to the data, along with the high availability | |
| solutions that enable easy scaling of the proxy layer. We recommend | |
| designing a front-end load balancing layer that projects and consumers | |
| use to gain access to data stored within the cluster. This load | |
| balancing layer may be distributed across zones, regions or even across | |
| geographic boundaries, which may also require that the design encompass | |
| geo-location solutions. | |
| In some cases, you must add bandwidth and capacity to the network | |
| resources servicing requests between proxy servers and storage nodes. | |
| For this reason, the network architecture used for access to storage | |
| nodes and proxy servers should make use of a scalable design. | |
| \subsection{Architecture} | |
| \label{\detokenize{storage-focus-architecture::doc}}\label{\detokenize{storage-focus-architecture:architecture}} | |
| Consider the following factors when selecting storage hardware: | |
| \begin{itemize} | |
| \item {} | |
| Cost | |
| \item {} | |
| Performance | |
| \item {} | |
| Reliability | |
| \end{itemize} | |
| Storage-focused OpenStack clouds must address I/O intensive workloads. | |
| These workloads are not CPU intensive, nor are they consistently network | |
| intensive. The network may be heavily utilized to transfer storage | |
| traffic, but these workloads are not otherwise network intensive. | |
| The selection of storage hardware determines the overall performance and | |
| scalability of a storage-focused OpenStack design architecture. Several | |
| factors impact the design process, including: | |
| \begin{description} | |
| \item[{Cost}] \leavevmode | |
| The cost of components affects which storage architecture and | |
| hardware you choose. | |
| \item[{Performance}] \leavevmode | |
| The latency of storage I/O requests indicates performance. | |
| Performance requirements affect which solution you choose. | |
| \item[{Scalability}] \leavevmode | |
| Scalability refers to how the storage solution performs as it | |
| expands to its maximum size. Storage solutions that perform well in | |
| small configurations but have degraded performance in large | |
| configurations are not scalable. A solution that performs well at | |
| maximum expansion is scalable. Large deployments require a storage | |
| solution that performs well as it expands. | |
| \end{description} | |
| Latency is a key consideration in a storage-focused OpenStack cloud. | |
| Using solid-state disks (SSDs) minimizes latency, reduces CPU delays | |
| caused by waiting for storage, and increases performance. Use | |
| RAID controller cards in compute hosts to improve the performance of the | |
| underlying disk subsystem. | |
| Depending on the storage architecture, you can adopt a scale-out | |
| solution, or use a highly expandable and scalable centralized storage | |
| array. If a centralized storage array is the right fit for your | |
| requirements, then the array vendor determines the hardware selection. | |
| It is possible to build a storage array using commodity hardware with | |
| open source software, but doing so requires people with the expertise | |
| to build such a system. | |
| On the other hand, a scale-out storage solution that uses | |
| direct-attached storage (DAS) in the servers may be an appropriate | |
| choice. This requires configuration of the server hardware to support | |
| the storage solution. | |
| Considerations affecting storage architecture (and corresponding storage | |
| hardware) of a Storage-focused OpenStack cloud include: | |
| \begin{description} | |
| \item[{Connectivity}] \leavevmode | |
| Based on the selected storage solution, ensure the connectivity | |
| matches the storage solution requirements. We recommend confirming | |
| that the network characteristics minimize latency to boost the | |
| overall performance of the design. | |
| \item[{Latency}] \leavevmode | |
| Determine if the use case has consistent or highly variable latency. | |
| \item[{Throughput}] \leavevmode | |
| Ensure that the storage solution throughput is optimized for your | |
| application requirements. | |
| \item[{Server hardware}] \leavevmode | |
| Use of DAS impacts the server hardware choice and affects host | |
| density, instance density, power density, OS-hypervisor, and | |
| management tools. | |
| \end{description} | |
| \subsubsection{Compute (server) hardware selection} | |
| \label{\detokenize{storage-focus-architecture:compute-server-hardware-selection}} | |
| Four opposing factors determine the compute (server) hardware selection: | |
| \begin{description} | |
| \item[{Server density}] \leavevmode | |
| A measure of how many servers can fit into a given measure of | |
| physical space, such as a rack unit {[}U{]}. | |
| \item[{Resource capacity}] \leavevmode | |
| The number of CPU cores, how much RAM, or how much storage a given | |
| server delivers. | |
| \item[{Expandability}] \leavevmode | |
| The number of additional resources you can add to a server before it | |
| reaches capacity. | |
| \item[{Cost}] \leavevmode | |
| The relative cost of the hardware weighed against the level of | |
| design effort needed to build the system. | |
| \end{description} | |
| You must weigh the dimensions against each other to determine the best | |
| design for the desired purpose. For example, increasing server density | |
| can mean sacrificing resource capacity or expandability. Increasing | |
| resource capacity and expandability can increase cost but decrease | |
| server density. Decreasing cost often means decreasing supportability, | |
| server density, resource capacity, and expandability. | |
| Compute capacity (CPU cores and RAM capacity) is a secondary | |
| consideration for selecting server hardware in a storage-focused | |
| design. The server hardware must supply adequate CPU sockets, CPU | |
| cores, and RAM to meet user requirements, but these are not the | |
| primary consideration; storage capacity and connectivity drive the | |
| hardware selection. | |
| Some server hardware form factors are better suited to storage-focused | |
| designs than others. The following is a list of these form factors: | |
| \begin{itemize} | |
| \item {} | |
| Most blade servers support dual-socket multi-core CPUs. Choose either | |
| full width or full height blades to avoid the limit. High density | |
| blade servers support up to 16 servers in only 10 rack units using | |
| half height or half width blades. | |
| \begin{sphinxadmonition}{warning}{Warning:} | |
| This decreases density by 50\% (only 8 servers in 10 U) if a full | |
| width or full height option is used. | |
| \end{sphinxadmonition} | |
| \item {} | |
| 1U rack-mounted servers have the ability to offer greater server | |
| density than a blade server solution, but are often limited to | |
| dual-socket, multi-core CPU configurations. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| Due to cooling requirements, it is rare to see 1U rack-mounted | |
| servers with more than 2 CPU sockets. | |
| \end{sphinxadmonition} | |
| To obtain greater than dual-socket support in a 1U rack-mount form | |
| factor, customers need to buy their systems from Original Design | |
| Manufacturers (ODMs) or second-tier manufacturers. | |
| \end{itemize} | |
| \begin{sphinxadmonition}{warning}{Warning:} | |
| This may cause issues for organizations that have preferred | |
| vendor policies or concerns with support and hardware warranties | |
| of non-tier 1 vendors. | |
| \end{sphinxadmonition} | |
| \begin{itemize} | |
| \item {} | |
| 2U rack-mounted servers provide quad-socket, multi-core CPU support | |
| but with a corresponding decrease in server density (half the density | |
| offered by 1U rack-mounted servers). | |
| \item {} | |
| Larger rack-mounted servers, such as 4U servers, often provide even | |
| greater CPU capacity. Commonly supporting four or even eight CPU | |
| sockets. These servers have greater expandability but such servers | |
| have much lower server density and usually greater hardware cost. | |
| \item {} | |
| Rack-mounted servers that support multiple independent servers in a | |
| single 2U or 3U enclosure, ``sled servers'', deliver increased density | |
| as compared to typical 1U or 2U rack-mounted servers. | |
| \end{itemize} | |
| Other factors that influence server hardware selection for a | |
| storage-focused OpenStack design architecture include: | |
| \begin{description} | |
| \item[{Instance density}] \leavevmode | |
| In this architecture, instance density and CPU-RAM oversubscription | |
| are lower. You require more hosts to support the anticipated scale, | |
| especially if the design uses dual-socket hardware designs. | |
| \item[{Host density}] \leavevmode | |
| Another option to address the higher host count is to use a | |
| quad-socket platform. Taking this approach decreases host density | |
| which also increases rack count. This configuration affects the | |
| number of power connections and also impacts network and cooling | |
| requirements. | |
| \item[{Power and cooling density}] \leavevmode | |
| The power and cooling density requirements might be lower than with | |
| blade, sled, or 1U server designs due to lower host density (by | |
| using 2U, 3U or even 4U server designs). For data centers with older | |
| infrastructure, this might be a desirable feature. | |
| \end{description} | |
| Server hardware selection for a storage-focused OpenStack architecture | |
| design comes down to a ``scale-up'' versus ``scale-out'' decision. | |
| Determining the best solution (a smaller number of larger hosts or a | |
| larger number of smaller hosts) depends on a combination of factors, | |
| including cost, power, cooling, physical rack and floor space, | |
| support warranty, and manageability. | |
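| To make the scale-up versus scale-out trade-off concrete, the following | |
| back-of-the-envelope sketch (Python; every figure is an illustrative | |
| assumption, not a recommendation) estimates the host count each approach | |
| requires for the same anticipated scale: | |
| \begin{verbatim} | |
| # Sketch: host counts for scale-up vs. scale-out designs. | |
| # All figures are illustrative assumptions, not recommendations. | |
| import math | |
| total_instances = 1000                              # anticipated scale | |
| # Scale-out: more, smaller hosts (e.g. dual-socket 1U servers). | |
| scale_out_hosts = math.ceil(total_instances / 20)   # 20 instances/host | |
| # Scale-up: fewer, larger hosts (e.g. quad-socket 2U servers). | |
| scale_up_hosts = math.ceil(total_instances / 40)    # 40 instances/host | |
| print(scale_out_hosts, "smaller hosts vs.", scale_up_hosts, "larger hosts") | |
| # Larger hosts halve the host count, but a 2U form factor also halves | |
| # rack density, which feeds back into rack count, power, and cooling. | |
| \end{verbatim} | |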
| \subsubsection{Networking hardware selection} | |
| \label{\detokenize{storage-focus-architecture:networking-hardware-selection}} | |
| Key considerations for the selection of networking hardware include: | |
| \begin{description} | |
| \item[{Port count}] \leavevmode | |
| The user requires networking hardware that has the requisite port | |
| count. | |
| \item[{Port density}] \leavevmode | |
| The physical space required to provide the requisite port count | |
| affects the network design. A switch that provides 48 10 GbE ports | |
| in 1U has a much higher port density than a switch that provides 24 | |
| 10 GbE ports in 2U. In general, higher port density is preferred | |
| because it leaves more rack space for compute or storage components | |
| (see the sketch after this list). It is also important to consider | |
| fault domains and power density. Finally, higher-density switches | |
| are more expensive, so take care not to over-design the network. | |
| \item[{Port speed}] \leavevmode | |
| The networking hardware must support the proposed network speed, for | |
| example: 1 GbE, 10 GbE, or 40 GbE (or even 100 GbE). | |
| \item[{Redundancy}] \leavevmode | |
| User requirements for high availability and cost considerations | |
| influence the required level of network hardware redundancy. Achieve | |
| network redundancy by adding redundant power supplies or paired | |
| switches. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| If this is a requirement, the hardware must support this | |
| configuration. User requirements determine if a completely | |
| redundant network infrastructure is required. | |
| \end{sphinxadmonition} | |
| \item[{Power requirements}] \leavevmode | |
| Ensure that the physical data center provides the necessary power | |
| for the selected network hardware. This is not an issue for top of | |
| rack (ToR) switches, but may be an issue for spine switches in a | |
| leaf and spine fabric, or end of row (EoR) switches. | |
| \item[{Protocol support}] \leavevmode | |
| It is possible to gain more performance out of a single storage | |
| system by using specialized network technologies such as RDMA, SRP, | |
| iSER, and SCST. The specifics of using these technologies are beyond | |
| the scope of this book. | |
| \end{description} | |
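| To illustrate how port count and port density interact, the sketch below | |
| (Python; the server and switch figures are assumptions chosen to mirror | |
| the example above) computes the rack space consumed by access switching: | |
| \begin{verbatim} | |
| # Sketch: rack space consumed by access switching for one rack. | |
| # All inputs are illustrative assumptions. | |
| import math | |
| servers_per_rack = 20 | |
| nics_per_server = 2        # e.g. a bonded pair for redundancy | |
| ports_needed = servers_per_rack * nics_per_server   # 40 ports | |
| # (ports per switch, rack units per switch), as in the example above: | |
| for ports, rack_units in ((48, 1), (24, 2)): | |
|     switches = math.ceil(ports_needed / ports) | |
|     print(ports, "port switch:", switches, "switch(es),", | |
|           switches * rack_units, "U of rack space") | |
| # The 48-port 1U switch serves the rack in 1U; the 24-port 2U switch | |
| # needs two switches and 4U, leaving less room for compute and storage. | |
| \end{verbatim} | |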
| \subsubsection{Software selection} | |
| \label{\detokenize{storage-focus-architecture:software-selection}} | |
| Factors that influence the software selection for a storage-focused | |
| OpenStack architecture design include: | |
| \begin{itemize} | |
| \item {} | |
| Operating system (OS) and hypervisor | |
| \item {} | |
| OpenStack components | |
| \item {} | |
| Supplemental software | |
| \end{itemize} | |
| Design decisions made in each of these areas impact the rest of the | |
| OpenStack architecture design. | |
| \subsubsection{Operating system and hypervisor} | |
| \label{\detokenize{storage-focus-architecture:operating-system-and-hypervisor}} | |
| The operating system (OS) and hypervisor have a significant impact on | |
| the overall design and also affect server hardware selection. Ensure the | |
| selected operating system and hypervisor combination support the storage | |
| hardware and work with the networking hardware selection and topology. | |
| Operating system and hypervisor selection affect the following areas: | |
| \begin{description} | |
| \item[{Cost}] \leavevmode | |
| Selecting a commercially supported hypervisor, such as Microsoft | |
| Hyper-V, results in a different cost model than a | |
| community-supported open source hypervisor like KVM or Xen. | |
| Similarly, choosing Ubuntu over Red Hat (or vice versa) impacts cost | |
| due to support contracts. However, business or application | |
| requirements might dictate a specific or commercially supported | |
| hypervisor. | |
| \item[{Supportability}] \leavevmode | |
| Staff must have training with the chosen hypervisor. Consider the | |
| cost of training when choosing a solution. The support of a | |
| commercial product such as Red Hat, SUSE, or Windows is the | |
| responsibility of the OS vendor. If an open source platform is | |
| chosen, the support comes from in-house resources. | |
| \item[{Management tools}] \leavevmode | |
| Ubuntu and KVM use different management tools than VMware | |
| vSphere. Although both OS and hypervisor combinations are supported | |
| by OpenStack, there are varying impacts to the rest of the design as | |
| a result of the selection of one combination versus the other. | |
| \item[{Scale and performance}] \leavevmode | |
| Ensure the selected OS and hypervisor combination meet the | |
| appropriate scale and performance requirements needed for this | |
| storage-focused OpenStack cloud. The chosen architecture must meet | |
| the targeted instance-host ratios with the selected OS-hypervisor | |
| combination. | |
| \item[{Security}] \leavevmode | |
| Ensure the design can accommodate the regular periodic installation | |
| of application security patches while maintaining the required | |
| workloads. The frequency of security patches for the proposed | |
| OS-hypervisor combination impacts performance and the patch | |
| installation process could affect maintenance windows. | |
| \item[{Supported features}] \leavevmode | |
| Selecting the OS-hypervisor combination often determines the | |
| required features of OpenStack. Certain features are only available | |
| with specific OSes or hypervisors. If certain features | |
| are not available, you might need to modify the design to meet user | |
| requirements. | |
| \item[{Interoperability}] \leavevmode | |
| Choose the OS-hypervisor combination based on its interoperability | |
| with the other OS-hypervisor combinations in the deployment. | |
| Operational and troubleshooting tools for one OS-hypervisor | |
| combination may differ from the tools used for another, so the | |
| design must address whether the two sets of tools need to | |
| interoperate. | |
| \end{description} | |
| \subsubsection{OpenStack components} | |
| \label{\detokenize{storage-focus-architecture:openstack-components}} | |
| The OpenStack components you choose can have a significant impact on the | |
| overall design. While there are certain components that are always | |
| present (Compute and Image service, for example), there are other | |
| services that may not be required. As an example, a certain design may | |
| not require the Orchestration service. Omitting Orchestration would not | |
| typically have a significant impact on the overall design, however, if | |
| the architecture uses a replacement for OpenStack Object Storage for its | |
| storage component, this could potentially have significant impacts on | |
| the rest of the design. | |
| A storage-focused design might require the ability to use Orchestration | |
| to launch instances with Block Storage volumes to perform | |
| storage-intensive processing. | |
| A storage-focused OpenStack design architecture uses the following | |
| components: | |
| \begin{itemize} | |
| \item {} | |
| OpenStack Identity (keystone) | |
| \item {} | |
| OpenStack dashboard (horizon) | |
| \item {} | |
| OpenStack Compute (nova) (including the use of multiple hypervisor | |
| drivers) | |
| \item {} | |
| OpenStack Object Storage (swift) (or another object storage solution) | |
| \item {} | |
| OpenStack Block Storage (cinder) | |
| \item {} | |
| OpenStack Image service (glance) | |
| \item {} | |
| OpenStack Networking (neutron) or legacy networking (nova-network) | |
| \end{itemize} | |
| Excluding certain OpenStack components may limit or constrain the | |
| functionality of other components. If a design opts to include | |
| Orchestration but exclude Telemetry, then the design cannot take | |
| advantage of Orchestration's auto scaling functionality (which relies on | |
| information from Telemetry). Because you can use | |
| Orchestration to spin up a large number of instances to perform | |
| storage-intensive processing, we strongly recommend including | |
| Orchestration in a storage-focused architecture design. | |
| \subsubsection{Networking software} | |
| \label{\detokenize{storage-focus-architecture:networking-software}} | |
| OpenStack Networking (neutron) provides a wide variety of networking | |
| services for instances. There are many additional networking software | |
| packages that may be useful to manage the OpenStack components | |
| themselves. Some examples include HAProxy, Keepalived, and various | |
| routing daemons (like Quagga). The OpenStack High Availability Guide | |
| describes some of these software packages, HAProxy in particular. See | |
| the \href{https://docs.openstack.org/ha-guide/networking-ha.html}{Network controller cluster stack | |
| chapter} of | |
| the OpenStack High Availability Guide. | |
| \subsubsection{Management software} | |
| \label{\detokenize{storage-focus-architecture:management-software}} | |
| Management software includes software for providing: | |
| \begin{itemize} | |
| \item {} | |
| Clustering | |
| \item {} | |
| Logging | |
| \item {} | |
| Monitoring | |
| \item {} | |
| Alerting | |
| \end{itemize} | |
| \begin{sphinxadmonition}{important}{Important:} | |
| The factors for determining which software packages in this category | |
| to select are outside the scope of this design guide. | |
| \end{sphinxadmonition} | |
| The availability design requirements determine the selection of | |
| clustering software, such as Corosync or Pacemaker. The availability of | |
| the cloud infrastructure and the complexity of supporting the | |
| configuration after deployment determines the impact of including these | |
| software packages. The OpenStack High Availability Guide provides more | |
| details on the installation and configuration of Corosync and Pacemaker. | |
| Operational considerations determine the requirements for logging, | |
| monitoring, and alerting. Each of these sub-categories includes options. | |
| For example, in the logging sub-category you could select Logstash, | |
| Splunk, Log Insight, or another log aggregation-consolidation tool. | |
| Store logs in a centralized location to facilitate performing analytics | |
| against the data. Log data analytics engines can also provide automation | |
| and issue notification, by providing a mechanism to both alert and | |
| automatically attempt to remediate some of the more commonly known | |
| issues. | |
| If you require any of these software packages, the design must account | |
| for the additional resource consumption. Some other potential design | |
| impacts include: | |
| \begin{itemize} | |
| \item {} | |
| OS-Hypervisor combination: Ensure that the selected logging, | |
| monitoring, or alerting tools support the proposed OS-hypervisor | |
| combination. | |
| \item {} | |
| Network hardware: The network hardware selection needs to be | |
| supported by the logging, monitoring, and alerting software. | |
| \end{itemize} | |
| \subsubsection{Database software} | |
| \label{\detokenize{storage-focus-architecture:database-software}} | |
| Most OpenStack components require access to back-end database services | |
| to store state and configuration information. Choose an appropriate | |
| back-end database which satisfies the availability and fault tolerance | |
| requirements of the OpenStack services. | |
| MySQL is the default database for OpenStack, but other compatible | |
| databases are available. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| Telemetry uses MongoDB. | |
| \end{sphinxadmonition} | |
| The chosen high availability database solution changes according to the | |
| selected database. MySQL, for example, provides several options. Use a | |
| replication technology such as Galera for active-active clustering. For | |
| active-passive clustering, use some form of shared storage. Each of these potential | |
| solutions has an impact on the design: | |
| \begin{itemize} | |
| \item {} | |
| Solutions that employ Galera/MariaDB require at least three MySQL | |
| nodes (see the quorum sketch after this list). | |
| \item {} | |
| MongoDB has its own design considerations for high availability. | |
| \item {} | |
| OpenStack design, generally, does not include shared storage. | |
| However, for some high availability designs, certain components might | |
| require it depending on the specific implementation. | |
| \end{itemize} | |
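| The three-node minimum cited above follows from quorum arithmetic: a | |
| Galera cluster needs a strict majority of members to keep accepting | |
| writes, so tolerating f simultaneous node failures requires 2f + 1 | |
| nodes. A minimal sketch: | |
| \begin{verbatim} | |
| # Sketch: quorum sizing for a Galera/MariaDB cluster. | |
| # Retaining a strict majority through f simultaneous node | |
| # failures requires 2*f + 1 cluster members. | |
| def cluster_size(failures_tolerated): | |
|     return 2 * failures_tolerated + 1 | |
| print(cluster_size(1))   # 3 nodes: the minimum cited above | |
| print(cluster_size(2))   # 5 nodes: survives two concurrent failures | |
| \end{verbatim} | |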
| \subsection{Prescriptive Examples} | |
| \label{\detokenize{storage-focus-prescriptive-examples::doc}}\label{\detokenize{storage-focus-prescriptive-examples:prescriptive-examples}} | |
| Storage-focused architecture depends on specific use cases. This section | |
| discusses three example use cases: | |
| \begin{itemize} | |
| \item {} | |
| An object store with a RESTful interface | |
| \item {} | |
| Compute analytics with parallel file systems | |
| \item {} | |
| High performance database | |
| \end{itemize} | |
| The example below shows a REST interface without a high performance | |
| requirement. | |
| Swift is a highly scalable object store that is part of the OpenStack | |
| project. This diagram explains the example architecture: | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Storage_Object}.png} | |
| \end{figure} | |
| The example REST interface, presented as a traditional Object store | |
| running on traditional spindles, does not require a high performance | |
| caching tier. | |
| This example uses the following components: | |
| Network: | |
| \begin{itemize} | |
| \item {} | |
| 10 GbE horizontally scalable spine-leaf back-end storage and | |
| front-end network. | |
| \end{itemize} | |
| Storage hardware: | |
| \begin{itemize} | |
| \item {} | |
| 10 storage servers each with 12x4 TB disks, equaling 480 TB total | |
| space with approximately 160 TB of usable space after three replicas. | |
| \end{itemize} | |
| Proxy: | |
| \begin{itemize} | |
| \item {} | |
| 3x proxies | |
| \item {} | |
| 2x10 GbE bonded front end | |
| \item {} | |
| 2x10 GbE back-end bonds | |
| \item {} | |
| Approximately 60 Gb of total bandwidth to the back-end storage | |
| cluster | |
| \end{itemize} | |
| \begin{sphinxadmonition}{note}{Note:} | |
| It may be necessary to implement a 3rd-party caching layer for some | |
| applications to achieve suitable performance. | |
| \end{sphinxadmonition} | |
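| The capacity and bandwidth figures in this example follow from simple | |
| arithmetic; the sketch below (Python) reproduces them so they can be | |
| re-derived for other hardware choices: | |
| \begin{verbatim} | |
| # Sketch: re-deriving the example's capacity and bandwidth figures. | |
| storage_servers = 10 | |
| disks_per_server = 12 | |
| disk_size_tb = 4 | |
| replicas = 3 | |
| raw_tb = storage_servers * disks_per_server * disk_size_tb   # 480 TB | |
| usable_tb = raw_tb / replicas                                # 160 TB | |
| proxies = 3 | |
| links_per_bond = 2 | |
| link_gbps = 10 | |
| backend_gbps = proxies * links_per_bond * link_gbps          # 60 Gb | |
| print(raw_tb, usable_tb, backend_gbps) | |
| \end{verbatim} | |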
| \subsubsection{Compute analytics with Data processing service} | |
| \label{\detokenize{storage-focus-prescriptive-examples:compute-analytics-with-data-processing-service}} | |
| Analytics of large data sets are dependent on the performance of the | |
| storage system. Clouds using storage systems such as Hadoop Distributed | |
| File System (HDFS) have inefficiencies which can cause performance | |
| issues. | |
| One potential solution to this problem is the implementation of storage | |
| systems designed for performance. Parallel file systems have previously | |
| filled this need in the HPC space and are suitable for large scale | |
| performance-oriented systems. | |
| OpenStack has integration with Hadoop to manage the Hadoop cluster | |
| within the cloud. The following diagram shows an OpenStack store with a | |
| high performance requirement: | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Storage_Hadoop3}.png} | |
| \end{figure} | |
| The hardware requirements and configuration are similar to those of the | |
| High Performance Database example below. In this case, the architecture | |
| uses Ceph's Swift-compatible REST interface, along with features that | |
| allow a caching pool to be connected to accelerate the presented | |
| pool. | |
| \subsubsection{High performance database with Database service} | |
| \label{\detokenize{storage-focus-prescriptive-examples:high-performance-database-with-database-service}} | |
| Databases are a common workload that benefit from high performance | |
| storage back ends. Although enterprise storage is not a requirement, | |
| many environments have existing storage that an OpenStack cloud can use as | |
| a back end. You can create a storage pool to provide block devices with | |
| OpenStack Block Storage for instances as well as object interfaces. In | |
| this example, the database I/O requirements are high and demand storage | |
| presented from a fast SSD pool. | |
| A storage system presents a LUN backed by a set of SSDs using a | |
| traditional storage array with OpenStack Block Storage integration or a | |
| storage platform such as Ceph or Gluster. | |
| This system can provide additional performance. For example, in the | |
| database example below, a portion of the SSD pool can act as a block | |
| device to the Database server. In the high performance analytics | |
| example, the inline SSD cache layer accelerates the REST interface. | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Storage_Database_+_Object5}.png} | |
| \end{figure} | |
| In this example, Ceph presents a Swift-compatible REST interface, as | |
| well as block-level storage from a distributed storage cluster. It is | |
| highly flexible and has features that enable reduced cost of operations | |
| such as self-healing and auto-balancing. Using erasure coded pools is a | |
| suitable way of maximizing the amount of usable space. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| There are special considerations around erasure coded pools. For | |
| example, higher computational requirements and limitations on the | |
| operations allowed on an object; erasure coded pools do not support | |
| partial writes. | |
| \end{sphinxadmonition} | |
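| For comparison with triple replication, an erasure coded pool with k | |
| data chunks and m coding chunks yields a usable fraction of k / (k + m). | |
| A minimal sketch (the 8+3 profile is an assumed example, not a | |
| recommendation): | |
| \begin{verbatim} | |
| # Sketch: usable capacity, erasure coding vs. 3x replication. | |
| # k data chunks + m coding chunks give a usable fraction of k/(k+m). | |
| raw_tb = 480 | |
| k, m = 8, 3                              # assumed example profile | |
| ec_usable_tb = raw_tb * k / (k + m)      # ~349 TB usable | |
| replica_usable_tb = raw_tb / 3           # 160 TB usable | |
| print(round(ec_usable_tb), replica_usable_tb) | |
| # The extra usable space costs more CPU and, as noted above, rules | |
| # out partial writes. | |
| \end{verbatim} | |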
| Using Ceph as an applicable example, a potential architecture would have | |
| the following requirements: | |
| Network: | |
| \begin{itemize} | |
| \item {} | |
| 10 GbE horizontally scalable spine-leaf back-end storage and | |
| front-end network | |
| \end{itemize} | |
| Storage hardware: | |
| \begin{itemize} | |
| \item {} | |
| 5 storage servers for the caching layer, each with 24x1 TB SSDs | |
| \item {} | |
| 10 storage servers each with 12x4 TB disks, equaling 480 TB total | |
| space with approximately 160 TB of usable space after three | |
| replicas | |
| \end{itemize} | |
| REST proxy: | |
| \begin{itemize} | |
| \item {} | |
| 3x proxies | |
| \item {} | |
| 2x10 GbE bonded front end | |
| \item {} | |
| 2x10 GbE back-end bonds | |
| \item {} | |
| Approximately 60 Gb of total bandwidth to the back-end storage | |
| cluster | |
| \end{itemize} | |
| Using an SSD cache layer, you can present block devices directly to | |
| hypervisors or instances. The REST interface can also use the SSD cache | |
| systems as an inline cache. | |
| Cloud storage is a model of data storage that stores digital data in | |
| logical pools and physical storage that spans multiple servers | |
| and locations. Cloud storage commonly refers to a hosted object storage | |
| service; however, the term also includes other types of data storage that | |
| are available as a service, for example block storage. | |
| Cloud storage runs on virtualized infrastructure and resembles broader | |
| cloud computing in terms of accessible interfaces, elasticity, | |
| scalability, multi-tenancy, and metered resources. You can use cloud | |
| storage services from an off-premises service or deploy on-premises. | |
| Cloud storage consists of many distributed resources that act as one, | |
| often referred to as integrated storage clouds. Cloud storage is | |
| highly fault tolerant through redundancy and the distribution of data. | |
| It is highly durable through the creation of versioned copies, and can | |
| be consistent with regard to data replicas. | |
| At large scale, management of data operations is a resource intensive | |
| process for an organization. Hierarchical storage management (HSM) | |
| systems and data grids help annotate and report a baseline data | |
| valuation to make intelligent decisions and automate data management. HSM | |
| enables automated tiering and movement, as well as orchestration of data | |
| operations. A data grid is an architecture, or set of services, that | |
| brings together sets of services enabling users to | |
| manage large data sets. | |
| Example applications deployed with cloud storage characteristics: | |
| \begin{itemize} | |
| \item {} | |
| Active archive, backups and hierarchical storage management. | |
| \item {} | |
| General content storage and synchronization. An example of this is | |
| a private Dropbox-like service. | |
| \item {} | |
| Data analytics with parallel file systems. | |
| \item {} | |
| Unstructured data store for services. For example, social media | |
| back-end storage. | |
| \item {} | |
| Persistent block storage. | |
| \item {} | |
| Operating system and application image store. | |
| \item {} | |
| Media streaming. | |
| \item {} | |
| Databases. | |
| \item {} | |
| Content distribution. | |
| \item {} | |
| Cloud storage peering. | |
| \end{itemize} | |
| \section{Network focused} | |
| \label{\detokenize{network-focus::doc}}\label{\detokenize{network-focus:network-focused}} | |
| \subsection{User requirements} | |
| \label{\detokenize{network-focus-user-requirements:user-requirements}}\label{\detokenize{network-focus-user-requirements::doc}} | |
| Network-focused architectures vary from the general-purpose architecture | |
| designs. Certain network-intensive applications influence these | |
| architectures. Some of the business requirements that influence the | |
| design include how network latency impacts the user experience through | |
| slow page loads, degraded video streams, and low-quality VoIP sessions. | |
| Users are often not aware of how network design and architecture affects their | |
| experiences. Both enterprise customers and end-users rely on the network for | |
| delivery of an application. Network performance problems can result in a | |
| negative experience for the end-user, as well as productivity and economic | |
| loss. | |
| \subsubsection{High availability issues} | |
| \label{\detokenize{network-focus-user-requirements:high-availability-issues}} | |
| Depending on the application and use case, network-intensive OpenStack | |
| installations can have high availability requirements. Financial | |
| transaction systems have a much higher requirement for high availability | |
| than a development application. Use network availability technologies, | |
| for example {\hyperref[\detokenize{common/glossary:term-quality-of-service-qos}]{\sphinxtermref{\DUrole{xref,std,std-term}{quality of service (QoS)}}}}, to improve the network | |
| performance of sensitive applications such as VoIP and video streaming. | |
| High performance systems have SLA requirements for a minimum QoS with | |
| regard to guaranteed uptime, latency, and bandwidth. The level of the | |
| SLA can have a significant impact on the network architecture and | |
| requirements for redundancy in the systems. | |
| \subsubsection{Risks} | |
| \label{\detokenize{network-focus-user-requirements:risks}}\begin{description} | |
| \item[{Network misconfigurations}] \leavevmode | |
| Configuring incorrect IP addresses, VLANs, and routers can cause | |
| outages to areas of the network or, in the worst-case scenario, the | |
| entire cloud infrastructure. Automate network configurations to | |
| minimize the opportunity for operator error, which can cause | |
| disruptive problems. | |
| \item[{Capacity planning}] \leavevmode | |
| Cloud networks require management for capacity and growth over time. | |
| Capacity planning includes the purchase of network circuits and | |
| hardware that can potentially have lead times measured in months or | |
| years. | |
| \item[{Network tuning}] \leavevmode | |
| Configure cloud networks to minimize link loss, packet loss, packet | |
| storms, broadcast storms, and loops. | |
| \item[{Single Point Of Failure (SPOF)}] \leavevmode | |
| Consider high availability at the physical and environmental layers. | |
| If there is a single point of failure due to only one upstream link, | |
| or only one power supply, an outage can become unavoidable. | |
| \item[{Complexity}] \leavevmode | |
| An overly complex network design can be difficult to maintain and | |
| troubleshoot. While device-level configuration can ease maintenance | |
| concerns and automated tools can handle overlay networks, avoid or | |
| document non-traditional interconnects between functions and | |
| specialized hardware to prevent outages. | |
| \item[{Non-standard features}] \leavevmode | |
| There are additional risks that arise from configuring the cloud | |
| network to take advantage of vendor specific features. One example | |
| is multi-link aggregation (MLAG) used to provide redundancy at the | |
| aggregator switch level of the network. MLAG is not a standard and, | |
| as a result, each vendor has its own proprietary implementation of | |
| the feature. MLAG architectures are not interoperable across switch | |
| vendors, which leads to vendor lock-in and can cause delays or | |
| difficulties when upgrading components. | |
| \end{description} | |
| \subsection{Technical considerations} | |
| \label{\detokenize{network-focus-technical-considerations::doc}}\label{\detokenize{network-focus-technical-considerations:technical-considerations}} | |
| When you design an OpenStack network architecture, you must consider | |
| layer-2 and layer-3 issues. Layer-2 decisions involve those made at the | |
| data-link layer, such as the decision to use Ethernet versus Token Ring. | |
| Layer-3 decisions involve those made about the protocol layer and the | |
| point when IP comes into the picture. As an example, a completely | |
| internal OpenStack network can exist at layer 2 and ignore layer 3. In | |
| order for any traffic to go outside of that cloud, to another network, | |
| or to the Internet, however, you must use a layer-3 router or switch. | |
| The past few years have seen two competing trends in networking. One | |
| trend leans towards building data center network architectures based on | |
| layer-2 networking. Another trend treats the cloud environment | |
| essentially as a miniature version of the Internet. This approach is | |
| radically different from the layer-2 network architecture approach: | |
| the Internet relies on layer-3 routing rather than | |
| layer-2 switching. | |
| A network designed on layer-2 protocols has advantages over one designed | |
| on layer-3 protocols. In spite of the difficulties of using a bridge to | |
| perform the network role of a router, many vendors, customers, and | |
| service providers choose to use Ethernet in as many parts of their | |
| networks as possible. The benefits of selecting a layer-2 design are: | |
| \begin{itemize} | |
| \item {} | |
| Ethernet frames contain all the essentials for networking. These | |
| include, but are not limited to, globally unique source addresses, | |
| globally unique destination addresses, and error control. | |
| \item {} | |
| Ethernet frames can carry any kind of packet. Networking at layer-2 | |
| is independent of the layer-3 protocol. | |
| \item {} | |
| Adding more layers to the Ethernet frame only slows the networking | |
| process down. This is known as `nodal processing delay'. | |
| \item {} | |
| You can add adjunct networking features, for example class of service | |
| (CoS) or multicasting, to Ethernet as readily as to IP networks. | |
| \item {} | |
| VLANs are an easy mechanism for isolating networks. | |
| \end{itemize} | |
| Most information starts and ends inside Ethernet frames. Today this | |
| applies to data, voice (for example, VoIP), and video (for example, web | |
| cameras). The concept is that if you can perform more of the end-to-end | |
| transfer of information from a source to a destination in the form of | |
| Ethernet frames, the network benefits more from the advantages of | |
| Ethernet. Although it is not a substitute for IP networking, networking | |
| at layer-2 can be a powerful adjunct to IP networking. | |
| Layer-2 Ethernet usage has these advantages over layer-3 IP network | |
| usage: | |
| \begin{itemize} | |
| \item {} | |
| Speed | |
| \item {} | |
| Reduced overhead of the IP hierarchy. | |
| \item {} | |
| No need to keep track of address configuration as systems move | |
| around. Whereas the simplicity of layer-2 protocols might work well | |
| in a data center with hundreds of physical machines, cloud data | |
| centers have the additional burden of needing to keep track of all | |
| virtual machine addresses and networks. In these data centers, it is | |
| not uncommon for one physical node to support 30-40 instances. | |
| \begin{sphinxadmonition}{important}{Important:} | |
| Networking at the frame level says nothing about the presence or | |
| absence of IP addresses at the packet level. Almost all ports, | |
| links, and devices on a network of LAN switches still have IP | |
| addresses, as do all the source and destination hosts. There are | |
| many reasons for the continued need for IP addressing. The largest | |
| one is the need to manage the network. A device or link without an | |
| IP address is usually invisible to most management applications. | |
| Utilities including remote access for diagnostics, file transfer of | |
| configurations and software, and similar applications cannot run | |
| without IP addresses as well as MAC addresses. | |
| \end{sphinxadmonition} | |
| \end{itemize} | |
| \subsubsection{Layer-2 architecture limitations} | |
| \label{\detokenize{network-focus-technical-considerations:layer-2-architecture-limitations}} | |
| Outside of the traditional data center, the limitations of layer-2 | |
| network architectures become more obvious. | |
| \begin{itemize} | |
| \item {} | |
| The number of VLANs is limited to 4096 (see the sketch after this | |
| list). | |
| \item {} | |
| The number of MACs stored in switch tables is limited. | |
| \item {} | |
| You must accommodate the need to maintain a set of layer-4 devices to | |
| handle traffic control. | |
| \item {} | |
| MLAG, often used for switch redundancy, is a proprietary solution | |
| that does not scale beyond two devices and forces vendor lock-in. | |
| \item {} | |
| It can be difficult to troubleshoot a network without IP addresses | |
| and ICMP. | |
| \item {} | |
| Configuring {\hyperref[\detokenize{common/glossary:term-address-resolution-protocol-arp}]{\sphinxtermref{\DUrole{xref,std,std-term}{ARP}}}} can be | |
| complicated on large layer-2 networks. | |
| \item {} | |
| All network devices need to be aware of all MACs, even instance MACs, | |
| so there is constant churn in MAC tables and network state changes as | |
| instances start and stop. | |
| \item {} | |
| Migrating MACs (instance migration) to different physical locations | |
| is a potential problem if you do not set ARP table timeouts | |
| properly. | |
| \end{itemize} | |
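| The 4096-VLAN ceiling in the first limitation above comes directly from | |
| the 12-bit VLAN ID field; overlay identifiers such as the 24-bit VXLAN | |
| VNI are one common way around it. A minimal sketch of the arithmetic: | |
| \begin{verbatim} | |
| # Sketch: segment-count limits of VLANs vs. one overlay option. | |
| # A VLAN ID is a 12-bit field; a VXLAN VNI is a 24-bit field. | |
| vlan_segments = 2 ** 12    # 4096, minus a few reserved values | |
| vxlan_segments = 2 ** 24   # roughly 16.7 million | |
| print(vlan_segments, vxlan_segments) | |
| \end{verbatim} | |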
| It is important to know that layer-2 has a very limited set of network | |
| management tools. It is very difficult to control traffic, as it does | |
| not have mechanisms to manage the network or shape the traffic, and | |
| network troubleshooting is very difficult. One reason for this | |
| difficulty is that network devices have no IP addresses. As a result, there | |
| is no reasonable way to check network delay in a layer-2 network. | |
| On large layer-2 networks, configuring ARP learning can also be | |
| complicated. The setting for the MAC address timer on switches is | |
| critical and, if set incorrectly, can cause significant performance | |
| problems. As an example, the Cisco default MAC address timer is | |
| extremely long. Migrating MACs to different physical locations to | |
| support instance migration can be a significant problem. In this case, | |
| the network information maintained in the switches could be out of sync | |
| with the new location of the instance. | |
| In a layer-2 network, all devices are aware of all MACs, even those that | |
| belong to instances. The network state information in the backbone | |
| changes whenever an instance starts or stops. As a result there is far | |
| too much churn in the MAC tables on the backbone switches. | |
| \subsubsection{Layer-3 architecture advantages} | |
| \label{\detokenize{network-focus-technical-considerations:layer-3-architecture-advantages}} | |
| In the layer-3 case, there is no churn in the routing tables due to | |
| instances starting and stopping. The only time there would be a routing | |
| state change is in the case of a Top of Rack (ToR) switch failure or a | |
| link failure in the backbone itself. Other advantages of using a layer-3 | |
| architecture include: | |
| \begin{itemize} | |
| \item {} | |
| Layer-3 networks provide the same level of resiliency and scalability | |
| as the Internet. | |
| \item {} | |
| Controlling traffic with routing metrics is straightforward. | |
| \item {} | |
| You can configure layer 3 to use {\hyperref[\detokenize{common/glossary:term-border-gateway-protocol-bgp}]{\sphinxtermref{\DUrole{xref,std,std-term}{BGP}}}} | |
| confederation for scalability so core routers have state proportional to the | |
| number of racks, not to the number of servers or instances (see the | |
| sketch after this list). | |
| \item {} | |
| Routing takes instance MAC and IP addresses out of the network core, | |
| reducing state churn. Routing state changes only occur in the case of | |
| a ToR switch failure or backbone link failure. | |
| \item {} | |
| There are a variety of well tested tools, for example ICMP, to | |
| monitor and manage traffic. | |
| \item {} | |
| Layer-3 architectures enable the use of {\hyperref[\detokenize{common/glossary:term-quality-of-service-qos}]{\sphinxtermref{\DUrole{xref,std,std-term}{quality of service (QoS)}}}} to | |
| manage network performance. | |
| \end{itemize} | |
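| To see why per-rack aggregation matters, the sketch below (Python; scale | |
| figures are illustrative assumptions) compares the state a core router | |
| holds when it carries one aggregated prefix per rack against tracking | |
| every server or instance: | |
| \begin{verbatim} | |
| # Sketch: routing state proportional to racks, not servers/instances. | |
| # Scale figures are illustrative assumptions. | |
| racks = 100 | |
| servers_per_rack = 20 | |
| instances_per_server = 30 | |
| print("per-rack prefixes:", racks)                       # 100 | |
| print("per-server routes:", racks * servers_per_rack)    # 2,000 | |
| print("per-instance routes:", | |
|       racks * servers_per_rack * instances_per_server)   # 60,000 | |
| \end{verbatim} | |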
| \paragraph{Layer-3 architecture limitations} | |
| \label{\detokenize{network-focus-technical-considerations:layer-3-architecture-limitations}} | |
| The main limitation of layer 3 is that there is no built-in isolation | |
| mechanism comparable to the VLANs in layer-2 networks. Furthermore, the | |
| hierarchical nature of IP addresses means that an instance is on the | |
| same subnet as its physical host. This means that you cannot migrate it | |
| outside of the subnet easily. For these reasons, network virtualization | |
| needs to use IP {\hyperref[\detokenize{common/glossary:term-encapsulation}]{\sphinxtermref{\DUrole{xref,std,std-term}{encapsulation}}}} and software at the end hosts for | |
| isolation and the separation of the addressing in the virtual layer from | |
| the addressing in the physical layer. Other potential disadvantages of | |
| layer 3 include the need to design an IP addressing scheme rather than | |
| relying on the switches to keep track of the MAC addresses automatically | |
| and to configure the interior gateway routing protocol in the switches. | |
| \subsubsection{Network recommendations overview} | |
| \label{\detokenize{network-focus-technical-considerations:network-recommendations-overview}} | |
| OpenStack has complex networking requirements for several reasons. Many | |
| components interact at different levels of the system stack, which adds | |
| complexity. Data flows are complex. Data in an OpenStack cloud moves | |
| both between instances across the network (also known as East-West), as | |
| well as in and out of the system (also known as North-South). Physical | |
| server nodes have network requirements that are independent of instance | |
| network requirements, which you must isolate from the core network to | |
| account for scalability. We recommend functionally separating the | |
| networks for security purposes and tuning performance through traffic | |
| shaping. | |
| You must consider a number of important general technical and business | |
| factors when planning and designing an OpenStack network. They include: | |
| \begin{itemize} | |
| \item {} | |
| A requirement for vendor independence. To avoid hardware or software | |
| vendor lock-in, the design should not rely on specific features of a | |
| vendor's router or switch. | |
| \item {} | |
| A requirement to massively scale the ecosystem to support millions of | |
| end users. | |
| \item {} | |
| A requirement to support indeterminate platforms and applications. | |
| \item {} | |
| A requirement to design for cost efficient operations to take | |
| advantage of massive scale. | |
| \item {} | |
| A requirement to ensure that there is no single point of failure in | |
| the cloud ecosystem. | |
| \item {} | |
| A requirement for high availability architecture to meet customer SLA | |
| requirements. | |
| \item {} | |
| A requirement to be tolerant of rack level failure. | |
| \item {} | |
| A requirement to maximize flexibility to architect future production | |
| environments. | |
| \end{itemize} | |
| Bearing in mind these considerations, we recommend the following: | |
| \begin{itemize} | |
| \item {} | |
| Layer-3 designs are preferable to layer-2 architectures. | |
| \item {} | |
| Design a dense multi-path network core to support multi-directional | |
| scaling and flexibility. | |
| \item {} | |
| Use hierarchical addressing because it is the only viable option to | |
| scale the network ecosystem. | |
| \item {} | |
| Use virtual networking to isolate instance service network traffic | |
| from the management and internal network traffic. | |
| \item {} | |
| Isolate virtual networks using encapsulation technologies. | |
| \item {} | |
| Use traffic shaping for performance tuning. | |
| \item {} | |
| Use eBGP to connect to the Internet up-link. | |
| \item {} | |
| Use iBGP to flatten the internal traffic on the layer-3 mesh. | |
| \item {} | |
| Determine the most effective configuration for the block storage network. | |
| \end{itemize} | |
| \subsubsection{Additional considerations} | |
| \label{\detokenize{network-focus-technical-considerations:additional-considerations}} | |
| There are several further considerations when designing a | |
| network-focused OpenStack cloud. | |
| \paragraph{OpenStack Networking versus legacy networking (nova-network) considerations} | |
| \label{\detokenize{network-focus-technical-considerations:openstack-networking-versus-legacy-networking-nova-network-considerations}} | |
| Selecting the type of networking technology to implement depends on many | |
| factors. OpenStack Networking (neutron) and legacy networking | |
| (nova-network) both have their advantages and disadvantages. They are | |
| both valid and supported options that fit different use cases: | |
| \begin{threeparttable} | |
| \capstart\caption{\sphinxstylestrong{Legacy networking (nova-network) | |
| versus OpenStack Networking (neutron)}}\label{\detokenize{network-focus-technical-considerations:id1}} | |
| \noindent\begin{tabulary}{\linewidth}{|L|L|} | |
| \hline | |
| \sphinxstylethead{\relax | |
| Legacy networking (nova-network) | |
| \unskip}\relax &\sphinxstylethead{\relax | |
| OpenStack Networking | |
| \unskip}\relax \\ | |
| \hline | |
| Simple, single agent | |
| & | |
| Complex, multiple agents | |
| \\ | |
| \hline | |
| More mature, established | |
| & | |
| Newer, maturing | |
| \\ | |
| \hline | |
| Flat or VLAN | |
| & | |
| Flat, VLAN, Overlays, L2-L3, SDN | |
| \\ | |
| \hline | |
| No plug-in support | |
| & | |
| Plug-in support for 3rd parties | |
| \\ | |
| \hline | |
| Scales well | |
| & | |
| Scaling requires 3rd party plug-ins | |
| \\ | |
| \hline | |
| No multi-tier topologies | |
| & | |
| Multi-tier topologies | |
| \\ | |
| \hline\end{tabulary} | |
| \end{threeparttable} | |
| \paragraph{Redundant networking: ToR switch high availability risk analysis} | |
| \label{\detokenize{network-focus-technical-considerations:redundant-networking-tor-switch-high-availability-risk-analysis}} | |
| A technical consideration of networking is the idea that you should | |
| install switching gear in a data center with backup switches in case of | |
| hardware failure. | |
| Research indicates the mean time between failures (MTBF) on switches is | |
| between 100,000 and 200,000 hours. This number is dependent on the | |
| ambient temperature of the switch in the data center. When properly | |
| cooled and maintained, this translates to between 11 and 22 years before | |
| failure. Even in the worst case of poor ventilation and high ambient | |
| temperatures in the data center, the MTBF is still 2-3 years. See | |
| \href{http://media.beldensolutions.com/garrettcom/techsupport/papers/ethernet\_switch\_reliability.pdf}{Ethernet switch reliability: Temperature vs. moving parts} | |
| for further information. | |
| In most cases, it is much more economical to use a single switch with a | |
| small pool of spare switches to replace failed units than it is to | |
| outfit an entire data center with redundant switches. Applications | |
| should tolerate rack level outages without affecting normal operations, | |
| since network and compute resources are easily provisioned and | |
| plentiful. | |
| \paragraph{Preparing for the future: IPv6 support} | |
| \label{\detokenize{network-focus-technical-considerations:preparing-for-the-future-ipv6-support}} | |
| One of the most important networking topics today is the impending | |
| exhaustion of IPv4 addresses. In early 2014, ICANN announced that they | |
| started allocating the final IPv4 address blocks to the \href{http://www.internetsociety.org/deploy360/blog/2014/05/goodbye-ipv4-iana-starts-allocating-final-address-blocks/}{Regional | |
| Internet Registries}. | |
| This means the IPv4 address space is close to being fully allocated. As | |
| a result, it will soon become difficult to allocate more IPv4 addresses | |
| to an application that has experienced growth, or that you expect to | |
| scale out, due to the lack of unallocated IPv4 address blocks. | |
| For network focused applications the future is the IPv6 protocol. IPv6 | |
| increases the address space significantly, fixes long standing issues in | |
| the IPv4 protocol, and will become essential for network focused | |
| applications in the future. | |
| OpenStack Networking supports IPv6 when configured to take advantage of | |
| it. To enable IPv6, create an IPv6 subnet in Networking and use IPv6 | |
| prefixes when creating security groups. | |
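| As one possible illustration, the sketch below uses the openstacksdk | |
| Python library to create such an IPv6 subnet; the cloud name, network | |
| name, prefix, and SLAAC address modes are assumptions for the example, | |
| not prescriptions: | |
| \begin{verbatim} | |
| # Sketch: creating an IPv6 subnet with the openstacksdk library. | |
| # Cloud name, network name, prefix, and address modes are assumed | |
| # values for illustration only. | |
| import openstack | |
| conn = openstack.connect(cloud='mycloud')       # clouds.yaml credentials | |
| network = conn.network.find_network('private')  # assumed existing network | |
| subnet = conn.network.create_subnet( | |
|     network_id=network.id, | |
|     ip_version=6, | |
|     cidr='2001:db8:1::/64',                     # documentation prefix | |
|     ipv6_address_mode='slaac', | |
|     ipv6_ra_mode='slaac', | |
| ) | |
| print(subnet.id) | |
| \end{verbatim} | |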
| \paragraph{Asymmetric links} | |
| \label{\detokenize{network-focus-technical-considerations:asymmetric-links}} | |
| When designing a network architecture, the traffic patterns of an | |
| application heavily influence the allocation of total bandwidth and the | |
| number of links that you use to send and receive traffic. Applications | |
| that provide file storage for customers allocate bandwidth and links to | |
| favor incoming traffic, whereas video streaming applications allocate | |
| bandwidth and links to favor outgoing traffic. | |
| \paragraph{Performance} | |
| \label{\detokenize{network-focus-technical-considerations:performance}} | |
| It is important to analyze the applications' tolerance for latency and | |
| jitter when designing an environment to support network focused | |
| applications. Certain applications, for example VoIP, are less tolerant | |
| of latency and jitter. Where latency and jitter are concerned, certain | |
| applications may require tuning of QoS parameters and network device | |
| queues to ensure that they queue for transmit immediately or guarantee | |
| minimum bandwidth. Since OpenStack currently does not support these | |
| functions, consider carefully your selected network plug-in. | |
| The location of a service may also impact the application or consumer | |
| experience. If an application serves differing content to different | |
| users it must properly direct connections to those specific locations. | |
| Where appropriate, use a multi-site installation for these situations. | |
| You can implement networking in two separate ways. Legacy networking | |
| (nova-network) provides a flat DHCP network with a single broadcast | |
| domain. This implementation does not support project isolation networks | |
| or advanced plug-ins, but it is currently the only way to implement a | |
| distributed {\hyperref[\detokenize{common/glossary:term-layer-3-l3-agent}]{\sphinxtermref{\DUrole{xref,std,std-term}{layer-3 (L3) agent}}}} using the multi\_host configuration. | |
| OpenStack Networking (neutron) is the official networking implementation and | |
| provides a pluggable architecture that supports a large variety of | |
| network methods. Some of these include a layer-2 only provider network | |
| model, external device plug-ins, or even OpenFlow controllers. | |
| Networking at large scales becomes a set of boundary questions. The | |
| determination of how large a layer-2 domain must be is based on the | |
| amount of nodes within the domain and the amount of broadcast traffic | |
| that passes between instances. Breaking layer-2 boundaries may require | |
| the implementation of overlay networks and tunnels. This decision is a | |
| balancing act between the need for a smaller overhead or a need for a | |
| smaller domain. | |
| When selecting network devices, be aware that making this decision based | |
| on the greatest port density often comes with a drawback. Aggregation | |
| switches and routers have not all kept pace with Top of Rack switches | |
| and may induce bottlenecks on north-south traffic. As a result, it may | |
| be possible for massive amounts of downstream network utilization to | |
| impact upstream network devices, impacting service to the cloud. Since | |
| OpenStack does not currently provide a mechanism for traffic shaping or | |
| rate limiting, it is necessary to implement these features at the | |
| network hardware level. | |
| \subsection{Operational considerations} | |
| \label{\detokenize{network-focus-operational-considerations:operational-considerations}}\label{\detokenize{network-focus-operational-considerations::doc}} | |
| Network-focused OpenStack clouds have a number of operational | |
| considerations that influence the selected design, including: | |
| \begin{itemize} | |
| \item {} | |
| Dynamic routing of static routes | |
| \item {} | |
| Service level agreements (SLAs) | |
| \item {} | |
| Ownership of user management | |
| \end{itemize} | |
| An initial network consideration is the selection of a telecom company | |
| or transit provider. | |
| Make additional design decisions about monitoring and alarming. This can | |
| be an internal responsibility or the responsibility of the external | |
| provider. In the case of using an external provider, service level | |
| agreements (SLAs) likely apply. In addition, other operational | |
| considerations such as bandwidth, latency, and jitter can be part of an | |
| SLA. | |
| Consider the ability to upgrade the infrastructure. As demand for | |
| network resources increase, operators add additional IP address blocks | |
| and add additional bandwidth capacity. In addition, consider managing | |
| hardware and software lifecycle events, for example upgrades, | |
| decommissioning, and outages, while avoiding service interruptions for | |
| projects. | |
| Factor maintainability into the overall network design. This includes | |
| the ability to manage and maintain IP addresses as well as the use of | |
| overlay identifiers including VLAN tag IDs, GRE tunnel IDs, and MPLS | |
| tags. As an example, if you need to change all of the IP addresses | |
| on a network, a process known as renumbering, then the design must | |
| support this function. | |
| Address network-focused applications when considering certain | |
| operational realities. For example, consider the impending exhaustion of | |
| IPv4 addresses, the migration to IPv6, and the use of private networks | |
| to segregate different types of traffic that an application receives or | |
| generates. In the case of IPv4 to IPv6 migrations, applications should | |
| follow best practices for storing IP addresses. We recommend you avoid | |
| relying on IPv4 features that did not carry over to the IPv6 protocol or | |
| have differences in implementation. | |
| To segregate traffic, allow applications to create a private project | |
| network for database and storage network traffic. Use a public network | |
| for services that require direct client access from the internet. Upon | |
| segregating the traffic, consider {\hyperref[\detokenize{common/glossary:term-quality-of-service-qos}]{\sphinxtermref{\DUrole{xref,std,std-term}{quality of service (QoS)}}}} and | |
| security to ensure each network has the required level of service. | |
| Finally, consider the routing of network traffic. For some applications, | |
| develop a complex policy framework for routing. To create a routing | |
| policy that satisfies business requirements, consider the economic cost | |
| of transmitting traffic over expensive links versus cheaper links, in | |
| addition to bandwidth, latency, and jitter requirements. | |
| Additionally, consider how to respond to network events. As an example, | |
| how load transfers from one link to another during a failure scenario | |
| could be a factor in the design. If you do not plan network capacity | |
| correctly, failover traffic could overwhelm other ports or network links | |
| and create a cascading failure scenario. In this case, traffic that | |
| fails over to one link overwhelms that link and then moves to the | |
| subsequent links until all network traffic stops. | |
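| A quick arithmetic check makes the cascade risk visible: the traffic | |
| carried by a failed link must redistribute across the survivors, so | |
| normal utilization determines whether they can absorb it. A minimal | |
| sketch (utilization and link figures are illustrative assumptions): | |
| \begin{verbatim} | |
| # Sketch: can surviving links absorb failover traffic? | |
| # Utilization and link figures are illustrative assumptions. | |
| links = 4 | |
| link_capacity_gbps = 10 | |
| normal_utilization = 0.6     # 60% per link under normal load | |
| total_traffic = links * link_capacity_gbps * normal_utilization | |
| per_survivor = total_traffic / (links - 1)   # one link fails | |
| print(per_survivor, "Gbps per surviving link") | |
| # 8 Gbps fits under 10 Gbps capacity here; at 80% normal utilization | |
| # the redistributed load (~10.7 Gbps) would exceed capacity and cascade. | |
| \end{verbatim} | |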
| \subsection{Architecture} | |
| \label{\detokenize{network-focus-architecture::doc}}\label{\detokenize{network-focus-architecture:architecture}} | |
| Network-focused OpenStack architectures have many similarities to other | |
| OpenStack architecture use cases. There are several factors to consider | |
| when designing for a network-centric or network-heavy application | |
| environment. | |
| Networks exist to serve as a medium of transporting data between | |
| systems. It is inevitable that an OpenStack design has | |
| inter-dependencies with non-network portions of OpenStack as well as with | |
| external systems. Depending on the specific workload, there may be major | |
| interactions with storage systems both within and external to the | |
| OpenStack environment. For example, in the case of a content delivery | |
| network, there is a twofold interaction with storage. Traffic flows to and | |
| from the storage array for ingesting and serving content in a | |
| north-south direction. In addition, there is replication traffic flowing | |
| in an east-west direction. | |
| Compute-heavy workloads may also induce interactions with the network. | |
| Some high performance compute applications require network-based memory | |
| mapping and data sharing and, as a result, induce a higher network load | |
| when they transfer results and data sets. Others may be highly | |
| transactional and issue transaction locks, perform their functions, and | |
| revoke transaction locks at high rates. This also has an impact on the | |
| network performance. | |
| Some network dependencies are external to OpenStack. While OpenStack | |
| Networking is capable of providing network ports, IP addresses, some | |
| level of routing, and overlay networks, there are some other functions | |
| that it cannot provide. For many of these, you may require external | |
| systems or equipment to fill in the functional gaps. Hardware load | |
| balancers are an example of equipment that may be necessary to | |
| distribute workloads or offload certain functions. OpenStack Networking | |
| provides a tunneling feature, however it is constrained to a | |
| Networking-managed region. If the need arises to extend a tunnel beyond | |
| the OpenStack region to either another region or an external system, | |
| implement the tunnel itself outside OpenStack or use a tunnel management | |
| system to map the tunnel or overlay to an external tunnel. | |
| Depending on the selected design, Networking itself might not support | |
| the required {\hyperref[\detokenize{common/glossary:term-layer-3-network}]{\sphinxtermref{\DUrole{xref,std,std-term}{layer-3 network}}}} functionality. If | |
| you choose to use the provider networking mode without running the layer-3 | |
| agent, you must install an external router to provide layer-3 connectivity | |
| to outside systems. | |
| Interaction with orchestration services is inevitable in larger-scale | |
| deployments. The Orchestration service is capable of allocating network | |
| resources defined in templates to map to project networks, creating ports, | |
| and allocating floating IPs. If there is a requirement | |
| to define and manage network resources when using orchestration, we | |
| recommend that the design include the Orchestration service to meet the | |
| demands of users. | |
| \subsubsection{Design impacts} | |
| \label{\detokenize{network-focus-architecture:design-impacts}} | |
| A wide variety of factors can affect a network-focused OpenStack | |
| architecture. While there are some considerations shared with a general | |
| use case, specific workloads related to network requirements influence | |
| network design decisions. | |
| One decision includes whether or not to use Network Address Translation | |
| (NAT) and where to implement it. If there is a requirement for floating | |
| IPs instead of public fixed addresses, then you must use NAT. An example | |
| of this is a DHCP relay that must know the IP of the DHCP server. In | |
| these cases it is easier to automate the infrastructure to apply the | |
| target IP to a new instance rather than to reconfigure legacy or | |
| external systems for each new instance. | |
| NAT for floating IPs managed by Networking resides within the hypervisor | |
| but there are also versions of NAT that may be running elsewhere. If | |
| there is a shortage of IPv4 addresses there are two common methods to | |
| mitigate this externally to OpenStack. The first is to run a load | |
| balancer either within OpenStack as an instance, or use an external load | |
| balancing solution. In the internal scenario, Networking's | |
| Load-Balancer-as-a-Service (LBaaS) can manage load balancing software, | |
| for example HAProxy. This is specifically to manage the Virtual IP (VIP) | |
| while a dual-homed connection from the HAProxy instance connects the | |
| public network with the project private network that hosts all of the | |
| content servers. In the external scenario, a load balancer needs to | |
| serve the VIP and also connect to the project overlay network through | |
| external means or through private addresses. | |
| Another kind of NAT that may be useful is protocol NAT. In some cases it | |
| may be desirable to use only IPv6 addresses on instances and operate | |
| either an instance or an external service to provide a NAT-based | |
| transition technology such as NAT64 and DNS64. This provides the ability | |
| to have a globally routable IPv6 address while only consuming IPv4 | |
| addresses as necessary or in a shared manner. | |
| Application workloads affect the design of the underlying network | |
| architecture. If a workload requires network-level redundancy, the | |
| routing and switching architecture have to accommodate this. There are | |
| differing methods for providing this that are dependent on the selected | |
| network hardware, the performance of the hardware, and which networking | |
| model you deploy. Examples include Link aggregation (LAG) and Hot | |
| Standby Router Protocol (HSRP). Also consider whether to deploy | |
| OpenStack Networking or legacy networking (nova-network), and which | |
| plug-in to select for OpenStack Networking. If using an external system, | |
| configure Networking to run {\hyperref[\detokenize{common/glossary:term-layer-2-network}]{\sphinxtermref{\DUrole{xref,std,std-term}{layer-2}}}} with a provider | |
| network configuration. For example, implement HSRP to terminate layer-3 | |
| connectivity. | |
| Depending on the workload, overlay networks may not be the best | |
| solution. Where application network connections are small, short lived, | |
| or bursty, running a dynamic overlay can generate as much bandwidth as | |
| the packets it carries. It can also induce enough latency to cause | |
| issues with certain applications. There is an impact to the device | |
| generating the overlay which, in most installations, is the hypervisor. | |
| This degrades packet-per-second and connection-per-second | |
| rates. | |
| Overlays also come with a secondary option that may not be appropriate | |
| for a specific workload. While all of them operate in full mesh by | |
| default, there might be good reasons to disable this function because it | |
| may cause excessive overhead for some workloads. Conversely, other | |
| workloads operate without issue. For example, most web services | |
| applications do not have major issues with a full mesh overlay network, | |
| while some network monitoring tools or storage replication workloads | |
| have performance issues with throughput or excessive broadcast traffic. | |
| Many people overlook an important design decision: the choice of layer-3 | |
| protocols. While OpenStack was initially built with only IPv4 support, | |
| Networking now supports IPv6 and dual-stacked networks. Some workloads | |
| are possible through the use of IPv6 and IPv6 to IPv4 reverse transition | |
| mechanisms such as NAT64 and DNS64 or {\hyperref[\detokenize{common/glossary:term-6to4}]{\sphinxtermref{\DUrole{xref,std,std-term}{6to4}}}}. This alters the | |
| requirements for any address plan as single-stacked and transitional IPv6 | |
| deployments can alleviate the need for IPv4 addresses. | |
| OpenStack has limited support for dynamic routing; however, there are a | |
| number of options available by incorporating third-party solutions to | |
| implement routing within the cloud, including network equipment, hardware | |
| nodes, and instances. Some workloads perform well with nothing more than | |
| static routes and default gateways configured at the layer-3 termination | |
| point. In most cases this is sufficient, however some cases require the | |
| addition of at least one type of dynamic routing protocol if not | |
| multiple protocols. Having a form of interior gateway protocol (IGP) | |
| available to the instances inside an OpenStack installation opens up the | |
| possibility of use cases for anycast route injection for services that | |
| need to use it as a geographic location or failover mechanism. Other | |
| applications may wish to directly participate in a routing protocol, | |
| either as a passive observer, as in the case of a looking glass, or as | |
| an active participant in the form of a route reflector. Since an | |
| instance might have a large amount of compute and memory resources, it | |
| is trivial to hold an entire unpartitioned routing table and use it to | |
| provide services such as network path visibility to other applications | |
| or as a monitoring tool. | |
| Path maximum transmission unit (MTU) failures are less well known but | |
| harder to diagnose. The MTU must be large enough to handle normal | |
| traffic, overhead from an overlay network, and the desired layer-3 | |
| protocol. Adding externally built tunnels reduces the MTU packet size. | |
| In this case, you must pay attention to the fully calculated MTU size | |
| because some systems ignore or drop path MTU discovery packets. | |
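| The arithmetic itself is straightforward; the discipline lies in | |
| applying it to every encapsulation on the path. A back-of-the-envelope | |
| sketch, using typical overhead figures that may differ with tunnel | |
| options in a given deployment: | |
| \begin{verbatim} | |
| # Back-of-the-envelope effective MTU after tunnel overhead. | |
| # Overhead figures are typical values, not universal constants. | |
| OVERHEAD = { | |
|     "gre": 42,    # outer Ethernet (14) + IP (20) + GRE with key (8) | |
|     "vxlan": 50,  # outer Ethernet (14) + IP (20) + UDP (8) + VXLAN (8) | |
| } | |
| def effective_mtu(physical_mtu, tunnels): | |
|     """Subtract the overhead of each tunnel layered on the path.""" | |
|     return physical_mtu - sum(OVERHEAD[t] for t in tunnels) | |
| print(effective_mtu(1500, ["vxlan"]))  # 1450 left for the instance | |
| print(effective_mtu(9000, ["vxlan"]))  # 8950 with jumbo frames | |
| \end{verbatim} | |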
| \subsubsection{Tunable networking components} | |
| \label{\detokenize{network-focus-architecture:tunable-networking-components}} | |
| When designing for network-intensive workloads, consider configurable | |
| networking components of the OpenStack architecture such as MTU and | |
| QoS. Some workloads require a larger MTU than normal due | |
| to the transfer of large blocks of data. When providing network service | |
| for applications such as video streaming or storage replication, we | |
| recommend that you configure both OpenStack hardware nodes and the | |
| supporting network equipment for jumbo frames where possible. This | |
| allows for better use of available bandwidth. Configure jumbo frames | |
| across the complete path the packets traverse. If one network component | |
| is not capable of handling jumbo frames then the entire path reverts to | |
| the default MTU. | |
| {\hyperref[\detokenize{common/glossary:term-quality-of-service-qos}]{\sphinxtermref{\DUrole{xref,std,std-term}{Quality of Service (QoS)}}}} also has a great impact on network intensive | |
| workloads because it gives higher-priority packets immediate service, | |
| mitigating the impact of poor network performance. In applications | |
| such as Voice over IP (VoIP), differentiated services code points are a | |
| near requirement for proper operation. You can also use QoS in the | |
| opposite direction for mixed workloads to prevent low priority but high | |
| bandwidth applications, for example backup services, video conferencing, | |
| or file sharing, from consuming bandwidth that is needed for the proper | |
| operation of other workloads. It is possible to tag file storage traffic | |
| as a lower class, such as best effort or scavenger, to allow the higher | |
| priority traffic through. Where regions within a cloud are | |
| geographically distributed, it may also be necessary to implement WAN | |
| optimization to combat latency or packet loss. | |
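| As one hedged illustration of how an application can request such | |
| treatment (whether the marking is honored depends entirely on upstream | |
| policy, and the receiver address below is a placeholder), a sender can | |
| set the DSCP bits through the IP type-of-service socket option: | |
| \begin{verbatim} | |
| # Illustrative sketch: mark a UDP sender's packets with DSCP EF | |
| # (Expedited Forwarding, value 46), commonly used for VoIP media. | |
| import socket | |
| DSCP_EF = 46 | |
| sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) | |
| # IP_TOS carries the DSCP value in its upper six bits. | |
| sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2) | |
| sock.sendto(b"voice payload", ("192.0.2.10", 5004)) | |
| \end{verbatim} | |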
| \subsection{Prescriptive examples} | |
| \label{\detokenize{network-focus-prescriptive-examples::doc}}\label{\detokenize{network-focus-prescriptive-examples:prescriptive-examples}} | |
| An organization designs a large-scale web application with cloud | |
| principles in mind. The application scales horizontally in a bursting | |
| fashion and generates a high instance count. The application requires an | |
| SSL connection to secure data and must not lose connection state to | |
| individual servers. | |
| The figure below depicts an example design for this workload. In this | |
| example, a hardware load balancer provides SSL offload functionality and | |
| connects to project networks in order to reduce address consumption. This | |
| load balancer links to the routing architecture as it services the VIP | |
| for the application. The router and load balancer use the GRE tunnel ID | |
| of the application's project network and an IP address within the project | |
| subnet but outside of the address pool. This is to ensure that the load | |
| balancer can communicate with the application's HTTP servers without | |
| requiring the consumption of a public IP address. | |
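| A hedged sketch of reserving such an address with python-neutronclient | |
| follows; the credentials, network and subnet IDs, and the address itself | |
| are placeholders, and the address must fall inside the subnet but | |
| outside its allocation pool: | |
| \begin{verbatim} | |
| # Hypothetical sketch: reserve a fixed address on the project subnet, | |
| # outside the allocation pool, for the load balancer's inside leg. | |
| from neutronclient.v2_0 import client | |
| neutron = client.Client(username="admin", password="secret", | |
|                         tenant_name="demo", | |
|                         auth_url="http://controller:5000/v2.0") | |
| port = neutron.create_port({"port": { | |
|     "network_id": "PROJECT_NET_ID", | |
|     "fixed_ips": [{"subnet_id": "PROJECT_SUBNET_ID", | |
|                    "ip_address": "10.0.0.250"}],  # outside the pool | |
|     "name": "hw-lb-inside-leg"}}) | |
| print(port["port"]["fixed_ips"]) | |
| \end{verbatim} | |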
| Because sessions persist until closed, the routing and switching | |
| architecture provides high availability. Switches mesh to each | |
| hypervisor and each other, and also provide an MLAG implementation to | |
| ensure that layer-2 connectivity does not fail. Routers use VRRP and | |
| fully mesh with switches to ensure layer-3 connectivity. Since GRE | |
| provides an overlay network, Networking is present and uses the Open | |
| vSwitch agent in GRE tunnel mode. This ensures all devices can reach all | |
| other devices and that you can create project networks for private | |
| addressing links to the load balancer. | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Network_Web_Services1}.png} | |
| \end{figure} | |
| A web service architecture has many options and optional components. Due | |
| to this, it can fit into a large number of other OpenStack designs. A | |
| few key components, however, need to be in place to handle the nature of | |
| most web-scale workloads. You require the following components: | |
| \begin{itemize} | |
| \item {} | |
| OpenStack Controller services (Image, Identity, Networking and | |
| supporting services such as MariaDB and RabbitMQ) | |
| \item {} | |
| OpenStack Compute running KVM hypervisor | |
| \item {} | |
| OpenStack Object Storage | |
| \item {} | |
| Orchestration service | |
| \item {} | |
| Telemetry service | |
| \end{itemize} | |
| Beyond the normal Identity, Compute, Image service, and Object Storage | |
| components, we recommend the Orchestration service component to handle | |
| the proper scaling of workloads to adjust to demand. Due to the | |
| requirement for auto-scaling, the design includes the Telemetry service. | |
| Web services tend to be bursty in load, have very defined peak and | |
| valley usage patterns and, as a result, benefit from automatic scaling | |
| of instances based upon traffic. At a network level, a split network | |
| configuration works well with databases residing on private project | |
| networks since these do not emit a large quantity of broadcast traffic, | |
| while the web tier can still interconnect to those databases for content. | |
| \subsubsection{Load balancing} | |
| \label{\detokenize{network-focus-prescriptive-examples:load-balancing}} | |
| Load balancing spreads requests across multiple instances. This workload | |
| scales well horizontally across large numbers of instances. This enables | |
| instances to run without publicly routed IP addresses and instead to | |
| rely on the load balancer to provide a globally reachable service. Many | |
| of these services do not require direct server return. This aids in | |
| address planning and utilization at scale since only the virtual IP | |
| (VIP) must be public. | |
| \subsubsection{Overlay networks} | |
| \label{\detokenize{network-focus-prescriptive-examples:overlay-networks}} | |
| The overlay functionality design includes OpenStack Networking in Open | |
| vSwitch GRE tunnel mode. In this case, the layer-3 external routers pair | |
| with VRRP, and switches pair with an implementation of MLAG to ensure | |
| that you do not lose connectivity with the upstream routing | |
| infrastructure. | |
| \subsubsection{Performance tuning} | |
| \label{\detokenize{network-focus-prescriptive-examples:performance-tuning}} | |
| Network level tuning for this workload is minimal. {\hyperref[\detokenize{common/glossary:term-quality-of-service-qos}]{\sphinxtermref{\DUrole{xref,std,std-term}{Quality of Service | |
| (QoS)}}}} applies to these workloads in the form of a middle-ground Class | |
| Selector, depending on existing policies. It is higher than a best effort queue | |
| but lower than an Expedited Forwarding or Assured Forwarding queue. | |
| Since this type of application generates larger packets with | |
| longer-lived connections, you can optimize bandwidth utilization for | |
| long-duration TCP. Normal bandwidth planning applies here with regard | |
| to benchmarking a session's usage multiplied by the expected number of | |
| concurrent sessions, plus overhead. | |
| \subsubsection{Network functions} | |
| \label{\detokenize{network-focus-prescriptive-examples:network-functions}} | |
| Network functions is a broad category but encompasses workloads that | |
| support the rest of a system's network. These workloads tend to consist | |
| of large amounts of small packets that are very short lived, such as DNS | |
| queries or SNMP traps. These messages need to arrive quickly and do not | |
| cope well with packet loss, as there can be a very large volume of them. | |
| There are a few extra considerations to take into account for this type | |
| of workload, and these can change a configuration all the way down to the | |
| hypervisor level. For an application that generates 10 TCP sessions per | |
| user with an average bandwidth of 512 kilobytes per second per flow and an | |
| expected user count of ten thousand concurrent users, the expected | |
| bandwidth plan is roughly 410 gigabits per second (10 sessions $\times$ | |
| 512 KB/s $\times$ 8 bits $\times$ 10,000 users), before overhead. | |
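| The same plan, worked explicitly as a sketch (decimal kilobytes | |
| assumed): | |
| \begin{verbatim} | |
| # Worked bandwidth plan: sessions x per-flow bandwidth x users. | |
| SESSIONS_PER_USER = 10 | |
| FLOW_BYTES_PER_SEC = 512 * 1000   # 512 KB/s per flow (decimal) | |
| CONCURRENT_USERS = 10000 | |
| total_bytes = SESSIONS_PER_USER * FLOW_BYTES_PER_SEC * CONCURRENT_USERS | |
| print("%.1f Gbit/s" % (total_bytes * 8 / 1e9))  # 409.6 Gbit/s | |
| \end{verbatim} | |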
| The supporting network for this type of configuration needs to have low | |
| latency and evenly distributed availability. This workload benefits | |
| from having services local to the consumers of the service. Use a | |
| multi-site approach and deploy many copies of the application | |
| to handle load as close as possible to consumers. Since these | |
| applications function independently, they do not warrant running | |
| overlays to interconnect project networks. Overlays also have the | |
| drawback of performing poorly with rapid flow setup, and they may incur | |
| too much overhead with large quantities of small packets; therefore, we | |
| do not recommend them for this use case. | |
| QoS is desirable for some workloads to ensure delivery. DNS has a major | |
| impact on the load times of other services and needs to be reliable and | |
| provide rapid responses. Configure rules in upstream devices to apply a | |
| higher Class Selector to DNS to ensure faster delivery or a better spot | |
| in queuing algorithms. | |
| \subsubsection{Cloud storage} | |
| \label{\detokenize{network-focus-prescriptive-examples:cloud-storage}} | |
| Another common use case for OpenStack environments is providing a | |
| cloud-based file storage and sharing service. You might consider this a | |
| storage-focused use case, but its network-side requirements make it a | |
| network-focused use case. | |
| For example, consider a cloud backup application. This workload has two | |
| specific behaviors that impact the network. Because this workload is an | |
| externally-facing service and an internally-replicating application, it | |
| has both {\hyperref[\detokenize{common/glossary:term-north-south-traffic}]{\sphinxtermref{\DUrole{xref,std,std-term}{north-south}}}} and | |
| {\hyperref[\detokenize{common/glossary:term-east-west-traffic}]{\sphinxtermref{\DUrole{xref,std,std-term}{east-west}}}} traffic considerations: | |
| \begin{description} | |
| \item[{north-south traffic}] \leavevmode | |
| When a user uploads and stores content, that content moves into the | |
| OpenStack installation. When users download this content, the | |
| content moves out from the OpenStack installation. Because this | |
| service operates primarily as a backup, most of the traffic moves | |
| southbound into the environment. In this situation, it benefits you | |
| to configure a network to be asymmetrically downstream because the | |
| traffic that enters the OpenStack installation is greater than the | |
| traffic that leaves the installation. | |
| \item[{east-west traffic}] \leavevmode | |
| Likely to be fully symmetric. Because replication originates from | |
| any node and might target multiple other nodes algorithmically, it | |
| is less likely for this traffic to have a larger volume in any | |
| specific direction. However, this traffic might interfere with | |
| north-south traffic. | |
| \end{description} | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Network_Cloud_Storage2}.png} | |
| \end{figure} | |
| This application prioritizes the north-south traffic over east-west | |
| traffic: the north-south traffic involves customer-facing data. | |
| The network design in this case is less dependent on availability and | |
| more dependent on being able to handle high bandwidth. As a direct | |
| result, it is beneficial to forgo redundant links in favor of bonding | |
| those connections. This increases available bandwidth. It is also | |
| beneficial to configure all devices in the path, including OpenStack, to | |
| generate and pass jumbo frames. | |
| All OpenStack deployments depend on network communication in order to function | |
| properly due to OpenStack's service-based nature. In some cases, however, the network | |
| elevates beyond simple infrastructure. This chapter discusses architectures | |
| that are more reliant or focused on network services. These architectures | |
| depend on the network infrastructure and require network services that | |
| perform reliably in order to satisfy user and application requirements. | |
| Some possible use cases include: | |
| \begin{description} | |
| \item[{Content delivery network}] \leavevmode | |
| This includes streaming video, viewing photographs, or accessing any other | |
| cloud-based data repository distributed to a large number of end users. | |
| Network configuration affects latency, bandwidth, and the distribution of | |
| instances. Therefore, it impacts video streaming. Not all video streaming | |
| is consumer-focused. For example, multicast videos (used for media, press | |
| conferences, corporate presentations, and web conferencing services) can | |
| also use a content delivery network. The location of the video repository | |
| and its relationship to end users affects content delivery. Network | |
| throughput of the back-end systems, as well as the WAN architecture and | |
| the cache methodology, also affect performance. | |
| \item[{Network management functions}] \leavevmode | |
| Use this cloud to provide network service functions built to support the | |
| delivery of back-end network services such as DNS, NTP, or SNMP. | |
| \item[{Network service offerings}] \leavevmode | |
| Use this cloud to run customer-facing network tools to support services. | |
| Examples include VPNs, MPLS private networks, and GRE tunnels. | |
| \item[{Web portals or web services}] \leavevmode | |
| Web servers are a common application for cloud services, and we recommend | |
| an understanding of their network requirements. The network requires scaling | |
| out to meet user demand and deliver web pages with a minimum latency. | |
| Depending on the details of the portal architecture, consider the internal | |
| east-west and north-south network bandwidth. | |
| \item[{High speed and high volume transactional systems}] \leavevmode | |
| These types of applications are sensitive to network configurations. Examples | |
| include financial systems, credit card transaction applications, and trading | |
| and other extremely high volume systems. These systems are sensitive to | |
| network jitter and latency. They must balance a high volume of east-west and | |
| north-south network traffic to maximize efficiency of the data delivery. Many | |
| of these systems must access large, high performance database back ends. | |
| \item[{High availability}] \leavevmode | |
| These types of use cases are dependent on the proper sizing of the network to | |
| maintain replication of data between sites for high availability. If one site | |
| becomes unavailable, the extra sites can serve the displaced load until the | |
| original site returns to service. It is important to size network capacity to | |
| handle the desired loads. | |
| \item[{Big data}] \leavevmode | |
| Clouds used for the management and collection of big data (data ingest) have | |
| a significant demand on network resources. Big data often uses partial | |
| replicas of the data to maintain integrity over large distributed clouds. | |
| Other big data applications that require a large amount of network resources | |
| are Hadoop, Cassandra, NuoDB, Riak, and other NoSQL and distributed | |
| databases. | |
| \item[{Virtual desktop infrastructure (VDI)}] \leavevmode | |
| This use case is sensitive to network congestion, latency, jitter, and other | |
| network characteristics. Like video streaming, the user experience is | |
| important. However, unlike video streaming, caching is not an option to | |
| offset the network issues. VDI requires both upstream and downstream traffic | |
| and cannot rely on caching for the delivery of the application to the end | |
| user. | |
| \item[{Voice over IP (VoIP)}] \leavevmode | |
| This is sensitive to network congestion, latency, jitter, and other network | |
| characteristics. VoIP has a symmetrical traffic pattern and it requires | |
| network {\hyperref[\detokenize{common/glossary:term-quality-of-service-qos}]{\sphinxtermref{\DUrole{xref,std,std-term}{quality of service (QoS)}}}} for best performance. In addition, | |
| you can implement active queue management to deliver voice and multimedia | |
| content. Users are sensitive to latency and jitter fluctuations and can detect | |
| them at very low levels. | |
| \item[{Video Conference or web conference}] \leavevmode | |
| This is sensitive to network congestion, latency, jitter, and other network | |
| characteristics. Video conferencing has a symmetrical traffic pattern, but | |
| unless the network is on an MPLS private network, it cannot use network | |
| {\hyperref[\detokenize{common/glossary:term-quality-of-service-qos}]{\sphinxtermref{\DUrole{xref,std,std-term}{quality of service (QoS)}}}} to improve performance. Similar to VoIP, | |
| users are sensitive to network performance issues even at low levels. | |
| \item[{High performance computing (HPC)}] \leavevmode | |
| This is a complex use case that requires careful consideration of the traffic | |
| flows and usage patterns to address the needs of cloud clusters. It has high | |
| east-west traffic patterns for distributed computing, but there can be | |
| substantial north-south traffic depending on the specific application. | |
| \end{description} | |
| \section{Multi-site} | |
| \label{\detokenize{multi-site:multi-site}}\label{\detokenize{multi-site::doc}} | |
| \subsection{User requirements} | |
| \label{\detokenize{multi-site-user-requirements:user-requirements}}\label{\detokenize{multi-site-user-requirements::doc}} | |
| \subsubsection{Workload characteristics} | |
| \label{\detokenize{multi-site-user-requirements:workload-characteristics}} | |
| An understanding of the expected workloads for a desired multi-site | |
| environment and use case is an important factor in the decision-making | |
| process. In this context, \sphinxcode{workload} refers to the way the systems are | |
| used. A workload could be a single application or a suite of | |
| applications that work together. It could also be a duplicate set of | |
| applications that need to run in multiple cloud environments. Often in a | |
| multi-site deployment, the same workload will need to work identically | |
| in more than one physical location. | |
| This multi-site scenario likely includes one or more of the other | |
| scenarios in this book with the additional requirement of having the | |
| workloads in two or more locations. The following is one possible | |
| scenario. | |
| For many use cases, the proximity of users to their workloads has a | |
| direct influence on the performance of the application and therefore | |
| should be taken into consideration in the design. Certain applications | |
| require zero to minimal latency that can only be achieved by deploying | |
| the cloud in multiple locations. These locations could be in different | |
| data centers, cities, countries or geographical regions, depending on | |
| the user requirement and location of the users. | |
| \subsubsection{Consistency of images and templates across different sites} | |
| \label{\detokenize{multi-site-user-requirements:consistency-of-images-and-templates-across-different-sites}} | |
| It is essential that the deployment of instances is consistent across | |
| the different sites and built into the infrastructure. If OpenStack | |
| Object Storage is used as a back end for the Image service, it is | |
| possible to create repositories of consistent images across multiple | |
| sites. Having central endpoints with multiple storage nodes allows | |
| consistent centralized storage for every site. | |
| Not using a centralized object store increases the operational overhead | |
| of maintaining a consistent image library. This could include | |
| development of a replication mechanism to handle the transport of images | |
| and the changes to the images across multiple sites. | |
| \subsubsection{High availability} | |
| \label{\detokenize{multi-site-user-requirements:high-availability}} | |
| If high availability is a requirement for continuous | |
| infrastructure operations, define the basic requirements of high | |
| availability up front. | |
| The OpenStack management components need to have a basic and minimal | |
| level of redundancy. The simplest example is that the loss of any single | |
| site should have minimal impact on the availability of the OpenStack | |
| services. | |
| The \href{https://docs.openstack.org/ha-guide/}{OpenStack High Availability | |
| Guide} contains more information | |
| on how to provide redundancy for the OpenStack components. | |
| Multiple network links should be deployed between sites to provide | |
| redundancy for all components. This includes storage replication, which | |
| should be isolated to a dedicated network or VLAN with the ability to | |
| assign QoS to control the replication traffic or provide priority for | |
| this traffic. Note that if the data store is highly changeable, the | |
| network requirements could have a significant effect on the operational | |
| cost of maintaining the sites. | |
| The ability to maintain object availability in both sites has | |
| significant implications on the object storage design and | |
| implementation. It also has a significant impact on the WAN network | |
| design between the sites. | |
| Connecting more than two sites increases the challenges and adds more | |
| complexity to the design considerations. Multi-site implementations | |
| require planning to address the additional topology used for internal | |
| and external connectivity. Some options include full mesh, | |
| hub-and-spoke, spine-leaf, and 3D torus topologies. | |
| If applications running in a cloud are not cloud-aware, there should be | |
| clear measures and expectations to define what the infrastructure can | |
| and cannot support. An example would be shared storage between sites. It | |
| is possible; however, such a solution is not native to OpenStack and | |
| requires a third-party hardware vendor to fulfill such a requirement. | |
| Another example can be seen in applications that are able to consume | |
| resources in object storage directly. These applications need to be | |
| cloud-aware to make good use of an OpenStack Object Store. | |
| \subsubsection{Application readiness} | |
| \label{\detokenize{multi-site-user-requirements:application-readiness}} | |
| Some applications are tolerant of the lack of synchronized object | |
| storage, while others may need those objects to be replicated and | |
| available across regions. Understanding how the cloud implementation | |
| impacts new and existing applications is important for risk mitigation, | |
| and the overall success of a cloud project. Applications may have to be | |
| written or rewritten for an infrastructure with little to no redundancy, | |
| or with the cloud in mind. | |
| \subsubsection{Cost} | |
| \label{\detokenize{multi-site-user-requirements:cost}} | |
| A greater number of sites increases cost and complexity for a multi-site | |
| deployment. Costs can be broken down into the following categories: | |
| \begin{itemize} | |
| \item {} | |
| Compute resources | |
| \item {} | |
| Networking resources | |
| \item {} | |
| Replication | |
| \item {} | |
| Storage | |
| \item {} | |
| Management | |
| \item {} | |
| Operational costs | |
| \end{itemize} | |
| \subsubsection{Site loss and recovery} | |
| \label{\detokenize{multi-site-user-requirements:site-loss-and-recovery}} | |
| Outages can cause partial or full loss of site functionality. Strategies | |
| should be implemented to understand and plan for recovery scenarios. | |
| \begin{itemize} | |
| \item {} | |
| The deployed applications need to continue to function and, more | |
| importantly, you must consider the impact on the performance and | |
| reliability of the application when a site is unavailable. | |
| \item {} | |
| It is important to understand what happens to the replication of | |
| objects and data between the sites when a site goes down. If this | |
| causes queues to start building up, consider how long these queues | |
| can safely exist until an error occurs. | |
| \item {} | |
| After an outage, ensure the method for resuming proper operations of | |
| a site is implemented when it comes back online. We recommend you | |
| architect the recovery to avoid race conditions. | |
| \end{itemize} | |
| \subsubsection{Compliance and geo-location} | |
| \label{\detokenize{multi-site-user-requirements:compliance-and-geo-location}} | |
| An organization may have certain legal obligations and regulatory | |
| compliance measures which could require certain workloads or data to not | |
| be located in certain regions. | |
| \subsubsection{Auditing} | |
| \label{\detokenize{multi-site-user-requirements:auditing}} | |
| A well thought-out auditing strategy is important in order to be able to | |
| quickly track down issues. Keeping track of changes made to security | |
| groups and project changes can be useful in rolling back the changes if | |
| they affect production. For example, if all security group rules for a | |
| project disappeared, the ability to quickly track down the issue would be | |
| important for operational and legal reasons. | |
| \subsubsection{Separation of duties} | |
| \label{\detokenize{multi-site-user-requirements:separation-of-duties}} | |
| A common requirement is to define different roles for the different | |
| cloud administration functions. An example would be a requirement to | |
| segregate the duties and permissions by site. | |
| \subsubsection{Authentication between sites} | |
| \label{\detokenize{multi-site-user-requirements:authentication-between-sites}} | |
| It is recommended to have a single authentication domain rather than a | |
| separate implementation for each and every site. This requires an | |
| authentication mechanism that is highly available and distributed to | |
| ensure continuous operation. Authentication server locality might be | |
| required and should be planned for. | |
| \subsection{Technical considerations} | |
| \label{\detokenize{multi-site-technical-considerations::doc}}\label{\detokenize{multi-site-technical-considerations:technical-considerations}} | |
| There are many technical considerations to take into account with regard | |
| to designing a multi-site OpenStack implementation. An OpenStack cloud | |
| can be designed in a variety of ways to handle individual application | |
| needs. A multi-site deployment has additional challenges compared to | |
| single site installations and therefore is a more complex solution. | |
| When determining capacity options, be sure to take into account not just | |
| the technical issues, but also the economic or operational issues that | |
| might arise from specific decisions. | |
| Inter-site link capacity describes the capabilities of the connectivity | |
| between the different OpenStack sites. This includes parameters such as | |
| bandwidth, latency, whether or not a link is dedicated, and any business | |
| policies applied to the connection. The capability and number of the | |
| links between sites determine what kind of options are available for | |
| deployment. For example, if two sites have a pair of high-bandwidth | |
| links available between them, it may be wise to configure a separate | |
| storage replication network between the two sites to support a single | |
| Swift endpoint and a shared Object Storage capability between them. An | |
| example of this technique, as well as a configuration walk-through, is | |
| available at \href{https://docs.openstack.org/developer/swift/replication\_network.html\#dedicated-replication-network}{Dedicated replication network}. | |
| Another option in this scenario is to build a dedicated set of project | |
| private networks across the secondary link, using overlay networks with | |
| a third party mapping the site overlays to each other. | |
| The capacity requirements of the links between sites are driven by | |
| application behavior. If the link latency is too high, certain | |
| applications that use a large number of small packets, for example RPC | |
| calls, may encounter issues communicating with each other or operating | |
| properly. Additionally, OpenStack may encounter similar types of issues. | |
| To mitigate this, Identity service call timeouts can be tuned to prevent | |
| issues authenticating against a central Identity service. | |
| Another network capacity consideration for a multi-site deployment is | |
| the amount and performance of overlay networks available for project | |
| networks. If using shared project networks across zones, it is imperative | |
| that an external overlay manager or controller be used to map these | |
| overlays together. It is necessary to ensure that the number of possible | |
| tunnel IDs is identical between the zones. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| As of the Kilo release, OpenStack Networking was not capable of | |
| managing tunnel IDs across installations. So if one site runs out of | |
| IDs, but another does not, that project's network is unable to reach | |
| the other site. | |
| \end{sphinxadmonition} | |
| Capacity can take other forms as well. The ability for a region to grow | |
| depends on scaling out the number of available compute nodes. This topic | |
| is covered in greater detail in the section for compute-focused | |
| deployments. However, it may be necessary to grow cells in an individual | |
| region, depending on the size of your cluster and the ratio of virtual | |
| machines per hypervisor. | |
| A third form of capacity comes in the multi-region-capable components of | |
| OpenStack. Centralized Object Storage is capable of serving objects | |
| through a single namespace across multiple regions. Since this works by | |
| accessing the object store through the swift proxy, it is possible to | |
| overload the proxies. There are two options available to mitigate this | |
| issue: | |
| \begin{itemize} | |
| \item {} | |
| Deploy a large number of swift proxies. The drawback is that the | |
| proxies are not load-balanced and a large file request could | |
| continually hit the same proxy. | |
| \item {} | |
| Add a caching HTTP proxy and load balancer in front of the swift | |
| proxies. Since swift objects are returned to the requester via HTTP, | |
| this load balancer would alleviate the load required on the swift | |
| proxies. | |
| \end{itemize} | |
| \subsubsection{Utilization} | |
| \label{\detokenize{multi-site-technical-considerations:utilization}} | |
| While constructing a multi-site OpenStack environment is the goal of | |
| this guide, the real test is whether an application can utilize it. | |
| The Identity service is normally the first interface for OpenStack users | |
| and is required for almost all major operations within OpenStack. | |
| Therefore, it is important that you provide users with a single URL for | |
| Identity service authentication, and document the configuration of | |
| regions within the Identity service. Each of the sites defined in your | |
| installation is considered to be a region in Identity nomenclature. This | |
| is important for users, as they must specify the region name | |
| when directing actions to an API endpoint or in the dashboard. | |
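| For example, in a sketch that assumes the era's python-novaclient and | |
| placeholder credentials, a client selects its target region by name when | |
| authenticating: | |
| \begin{verbatim} | |
| # Hypothetical sketch: the same credentials work against any region; | |
| # region_name selects which site's service endpoints are used. | |
| from novaclient import client | |
| for region in ("RegionOne", "RegionTwo"): | |
|     nova = client.Client("2", "demo", "secret", "demo-project", | |
|                          "http://controller:5000/v2.0", | |
|                          region_name=region) | |
|     print("%s: %s" % (region, | |
|                       [server.name for server in nova.servers.list()])) | |
| \end{verbatim} | |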
| Load balancing is another common issue with multi-site installations. | |
| While it is still possible to run HAProxy instances with | |
| Load-Balancer-as-a-Service, these are confined to a specific region. Some | |
| applications can manage this using internal mechanisms. Other | |
| applications may require the implementation of an external system, | |
| including global services load balancers or anycast-advertised DNS. | |
| Depending on the storage model chosen during site design, storage | |
| replication and availability are also a concern for end-users. If an | |
| application can support regions, then it is possible to keep the object | |
| storage system separated by region. In this case, users who want to have | |
| an object available to more than one region need to perform cross-site | |
| replication. However, with a centralized swift proxy, the user may need | |
| to benchmark the replication timing of the Object Storage back end. | |
| Benchmarking allows the operational staff to provide users with an | |
| understanding of the amount of time required for a stored or modified | |
| object to become available to the entire environment. | |
| \subsubsection{Performance} | |
| \label{\detokenize{multi-site-technical-considerations:performance}} | |
| Determining the performance of a multi-site installation involves | |
| considerations that do not come into play in a single-site deployment. | |
| Because the deployment is distributed, performance in multi-site | |
| environments may be affected in certain situations. | |
| Since multi-site systems can be geographically separated, there may be | |
| greater latency or jitter when communicating across regions. This can | |
| especially impact systems like the OpenStack Identity service when | |
| making authentication attempts from regions that do not contain the | |
| centralized Identity implementation. It can also affect applications | |
| which rely on Remote Procedure Call (RPC) for normal operation. An | |
| example of this can be seen in high performance computing workloads. | |
| Storage availability can also be impacted by the architecture of a | |
| multi-site deployment. A centralized Object Storage service requires | |
| more time for an object to be available to instances locally in regions | |
| where the object was not created. Some applications may need to be tuned | |
| to account for this effect. Block Storage does not currently have a | |
| method for replicating data across multiple regions, so applications | |
| that depend on available block storage need to manually cope with this | |
| limitation by creating duplicate block storage entries in each region. | |
| \subsubsection{OpenStack components} | |
| \label{\detokenize{multi-site-technical-considerations:openstack-components}} | |
| Most OpenStack installations require a bare minimum set of pieces to | |
| function. These include the OpenStack Identity (keystone) for | |
| authentication, OpenStack Compute (nova) for compute, OpenStack Image | |
| service (glance) for image storage, OpenStack Networking (neutron) for | |
| networking, and potentially an object store in the form of OpenStack | |
| Object Storage (swift). Deploying a multi-site installation also demands | |
| extra components in order to coordinate between regions. A centralized | |
| Identity service is necessary to provide the single authentication | |
| point. A centralized dashboard is also recommended to provide a single | |
| login point and a mapping to the API and CLI options available. A | |
| centralized Object Storage service may also be used, but will require | |
| the installation of the swift proxy service. | |
| It may also be helpful to install a few optional services in order to | |
| facilitate certain use cases. For example, installing Designate may | |
| assist in automatically generating DNS domains for each region with an | |
| automatically-populated zone full of resource records for each instance. | |
| This facilitates using DNS as a mechanism for determining which region | |
| will be selected for certain applications. | |
| Another useful tool for managing a multi-site installation is | |
| Orchestration (heat). The Orchestration service allows the use of | |
| templates to define a set of instances to be launched together or for | |
| scaling existing sets. It can also be used to set up matching or | |
| differentiated groupings based on regions. For instance, if an | |
| application requires an equally balanced number of nodes across sites, | |
| the same heat template can be used to cover each site with small | |
| alterations to only the region name. | |
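| A sketch of that pattern, assuming python-heatclient, pre-obtained | |
| tokens, and placeholder per-region orchestration endpoints (the | |
| template's region parameter is likewise a hypothetical example): | |
| \begin{verbatim} | |
| # Hypothetical sketch: launch one HOT template in every region, | |
| # varying only the region-specific parameters. | |
| from heatclient.client import Client | |
| TEMPLATE = open("app_stack.yaml").read() | |
| ENDPOINTS = { | |
|     "RegionOne": "http://r1-controller:8004/v1/PROJECT_ID", | |
|     "RegionTwo": "http://r2-controller:8004/v1/PROJECT_ID", | |
| } | |
| for region, endpoint in ENDPOINTS.items(): | |
|     heat = Client("1", endpoint=endpoint, token="AUTH_TOKEN") | |
|     # "region_name" is assumed to be a parameter of the template. | |
|     heat.stacks.create(stack_name="app-%s" % region.lower(), | |
|                        template=TEMPLATE, | |
|                        parameters={"region_name": region}) | |
| \end{verbatim} | |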
| \subsection{Operational considerations} | |
| \label{\detokenize{multi-site-operational-considerations:operational-considerations}}\label{\detokenize{multi-site-operational-considerations::doc}} | |
| A multi-site OpenStack cloud deployed using regions requires that the | |
| service catalog contain per-region entries for each service deployed | |
| other than the Identity service. Most off-the-shelf OpenStack deployment | |
| tools have limited support for defining multiple regions in this | |
| fashion. | |
| Deployers should be aware of this and provide the appropriate | |
| customization of the service catalog for their site either manually, or | |
| by customizing deployment tools in use. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| As of the Kilo release, documentation for implementing this feature | |
| is in progress. See this bug for more information: | |
| \url{https://bugs.launchpad.net/openstack-manuals/+bug/1340509}. | |
| \end{sphinxadmonition} | |
| \subsubsection{Licensing} | |
| \label{\detokenize{multi-site-operational-considerations:licensing}} | |
| Multi-site OpenStack deployments present additional licensing | |
| considerations over and above regular OpenStack clouds, particularly | |
| where site licenses are in use to provide cost efficient access to | |
| software licenses. The licensing for host operating systems, guest | |
| operating systems, OpenStack distributions (if applicable), | |
| software-defined infrastructure including network controllers and | |
| storage systems, and even individual applications needs to be evaluated. | |
| Topics to consider include: | |
| \begin{itemize} | |
| \item {} | |
| The definition of what constitutes a site in the relevant licenses, | |
| as the term does not necessarily denote a geographic or otherwise | |
| physically isolated location. | |
| \item {} | |
| Differentiations between ``hot'' (active) and ``cold'' (inactive) sites, | |
| where significant savings may be made in situations where one site is | |
| a cold standby for disaster recovery purposes only. | |
| \item {} | |
| Certain locations might require local vendors to provide support and | |
| services for each site which may vary with the licensing agreement in | |
| place. | |
| \end{itemize} | |
| \subsubsection{Logging and monitoring} | |
| \label{\detokenize{multi-site-operational-considerations:logging-and-monitoring}} | |
| Logging and monitoring does not significantly differ for a multi-site | |
| OpenStack cloud. The tools described in the \href{https://docs.openstack.org/ops-guide/ops-logging-monitoring.html}{Logging and monitoring | |
| chapter} | |
| of the OpenStack Operations Guide remain applicable. Logging and monitoring | |
| can be provided on a per-site basis, and in a common centralized location. | |
| When attempting to deploy logging and monitoring facilities to a | |
| centralized location, care must be taken with the load placed on the | |
| inter-site networking links. | |
| \subsubsection{Upgrades} | |
| \label{\detokenize{multi-site-operational-considerations:upgrades}} | |
| In multi-site OpenStack clouds deployed using regions, sites are | |
| independent OpenStack installations which are linked together using | |
| shared centralized services such as OpenStack Identity. At a high level | |
| the recommended order of operations to upgrade an individual OpenStack | |
| environment is (see the \href{https://docs.openstack.org/ops-guide/ops-upgrades.html}{Upgrades | |
| chapter} | |
| of the OpenStack Operations Guide for details): | |
| \begin{enumerate} | |
| \item {} | |
| Upgrade the OpenStack Identity service (keystone). | |
| \item {} | |
| Upgrade the OpenStack Image service (glance). | |
| \item {} | |
| Upgrade OpenStack Compute (nova), including networking components. | |
| \item {} | |
| Upgrade OpenStack Block Storage (cinder). | |
| \item {} | |
| Upgrade the OpenStack dashboard (horizon). | |
| \end{enumerate} | |
| The process for upgrading a multi-site environment is not significantly | |
| different: | |
| \begin{enumerate} | |
| \item {} | |
| Upgrade the shared OpenStack Identity service (keystone) deployment. | |
| \item {} | |
| Upgrade the OpenStack Image service (glance) at each site. | |
| \item {} | |
| Upgrade OpenStack Compute (nova), including networking components, at | |
| each site. | |
| \item {} | |
| Upgrade OpenStack Block Storage (cinder) at each site. | |
| \item {} | |
| Upgrade the OpenStack dashboard (horizon), at each site or in the | |
| single central location if it is shared. | |
| \end{enumerate} | |
| Compute upgrades within each site can also be performed in a rolling | |
| fashion. Compute controller services (API, Scheduler, and Conductor) can | |
| be upgraded prior to upgrading of individual compute nodes. This allows | |
| operations staff to keep a site operational for users of Compute | |
| services while performing an upgrade. | |
| \subsubsection{Quota management} | |
| \label{\detokenize{multi-site-operational-considerations:quota-management}} | |
| Quotas are used to set operational limits to prevent system capacities | |
| from being exhausted without notification. They are currently enforced | |
| at the project level rather than at the user level. | |
| Quotas are defined on a per-region basis. Operators can define identical | |
| quotas for projects in each region of the cloud to provide a consistent | |
| experience, or even create a process for synchronizing allocated quotas | |
| across regions. It is important to note that only the operational limits | |
| imposed by the quotas will be aligned; consumption of quotas by users | |
| will not be reflected between regions. | |
| For example, given a cloud with two regions, if the operator grants a | |
| user a quota of 25 instances in each region then that user may launch a | |
| total of 50 instances spread across both regions. They may not, however, | |
| launch more than 25 instances in any single region. | |
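| A sketch of one such synchronization step, assuming python-novaclient | |
| and placeholder credentials and project ID: | |
| \begin{verbatim} | |
| # Hypothetical sketch: apply identical instance quotas for a project | |
| # in every region so users see the same limit at each site. | |
| from novaclient import client | |
| PROJECT = "PROJECT_ID" | |
| for region in ("RegionOne", "RegionTwo"): | |
|     nova = client.Client("2", "admin", "secret", "admin-project", | |
|                          "http://controller:5000/v2.0", | |
|                          region_name=region) | |
|     nova.quotas.update(PROJECT, instances=25) | |
| \end{verbatim} | |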
| For more information on managing quotas refer to the \href{https://docs.openstack.org/ops-guide/ops-projects-users.html}{Managing projects | |
| and users | |
| chapter} | |
| of the OpenStack Operators Guide. | |
| \subsubsection{Policy management} | |
| \label{\detokenize{multi-site-operational-considerations:policy-management}} | |
| OpenStack provides a default set of Role Based Access Control (RBAC) | |
| policies, defined in a \sphinxcode{policy.json} file, for each service. Operators | |
| edit these files to customize the policies for their OpenStack | |
| installation. If the application of consistent RBAC policies across | |
| sites is a requirement, then it is necessary to ensure proper | |
| synchronization of the \sphinxcode{policy.json} files to all installations. | |
| This must be done using system administration tools such as rsync as | |
| functionality for synchronizing policies across regions is not currently | |
| provided within OpenStack. | |
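| A sketch of one such approach, wrapping rsync from Python with | |
| placeholder host names and a single canonical \sphinxcode{policy.json}: | |
| \begin{verbatim} | |
| # Hypothetical sketch: push the canonical policy.json for one service | |
| # to every site. Host names and paths are placeholders. | |
| import subprocess | |
| SITES = ("site1-controller", "site2-controller") | |
| POLICY = "/etc/keystone/policy.json" | |
| for host in SITES: | |
|     subprocess.check_call( | |
|         ["rsync", "-avz", POLICY, "%s:%s" % (host, POLICY)]) | |
| \end{verbatim} | |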
| \subsubsection{Documentation} | |
| \label{\detokenize{multi-site-operational-considerations:documentation}} | |
| Users must be able to leverage cloud infrastructure and provision new | |
| resources in the environment. It is important that user documentation is | |
| accessible to users to ensure they are given sufficient information to | |
| help them leverage the cloud. As an example, by default OpenStack | |
| schedules instances on a compute node automatically. However, when | |
| multiple regions are available, the end user needs to decide in which | |
| region to schedule the new instance. The dashboard presents the user | |
| with the first region in your configuration. The API and CLI tools do | |
| not execute commands unless a valid region is specified. It is therefore | |
| important to provide documentation to your users describing the region | |
| layout as well as calling out that quotas are region-specific. If a user | |
| reaches his or her quota in one region, OpenStack does not automatically | |
| build new instances in another. Documenting specific examples helps | |
| users understand how to operate the cloud, thereby reducing calls and | |
| tickets filed with the help desk. | |
| \subsection{Architecture} | |
| \label{\detokenize{multi-site-architecture::doc}}\label{\detokenize{multi-site-architecture:architecture}} | |
| {\hyperref[\detokenize{multi-site-architecture:ms-openstack-architecture}]{\sphinxcrossref{\DUrole{std,std-ref}{Multi-site OpenStack architecture}}}} illustrates a high level multi-site | |
| OpenStack architecture. Each site is an OpenStack cloud, but it may be | |
| necessary to run the sites on different versions. For example, | |
| if the second site is intended to replace the first site, | |
| the versions would differ. Another common design would be a private | |
| OpenStack cloud with a replicated site that would be used for high | |
| availability or disaster recovery. The most important design decision | |
| is configuring storage as a single shared pool or separate pools, depending | |
| on user and technical requirements. | |
| \begin{figure}[H] | |
| \centering | |
| \capstart | |
| \noindent\sphinxincludegraphics{{Multi-Site_shared_keystone_horizon_swift1}.png} | |
| \caption{\sphinxstylestrong{Multi-site OpenStack architecture}}\label{\detokenize{multi-site-architecture:ms-openstack-architecture}}\label{\detokenize{multi-site-architecture:id1}}\end{figure} | |
| \subsubsection{OpenStack services architecture} | |
| \label{\detokenize{multi-site-architecture:openstack-services-architecture}} | |
| The Identity service, which is used by all other OpenStack components | |
| for authorization and the catalog of service endpoints, supports the | |
| concept of regions. A region is a logical construct used to group | |
| OpenStack services in close proximity to one another. The concept of | |
| regions is flexible; it may contain OpenStack service endpoints located | |
| within a distinct geographic region or regions. It may be smaller in | |
| scope, where a region is a single rack within a data center, with | |
| multiple regions existing in adjacent racks in the same data center. | |
| The majority of OpenStack components are designed to run within the | |
| context of a single region. The Compute service is designed to manage | |
| compute resources within a region, with support for subdivisions of | |
| compute resources by using availability zones and cells. The Networking | |
| service can be used to manage network resources in the same broadcast | |
| domain or collection of switches that are linked. The OpenStack Block | |
| Storage service controls storage resources within a region with all | |
| storage resources residing on the same storage network. Like the | |
| OpenStack Compute service, the OpenStack Block Storage service also | |
| supports the availability zone construct which can be used to subdivide | |
| storage resources. | |
| The OpenStack dashboard, OpenStack Identity, and OpenStack Object | |
| Storage services are components that can each be deployed centrally in | |
| order to serve multiple regions. | |
| \subsubsection{Storage} | |
| \label{\detokenize{multi-site-architecture:storage}} | |
| With multiple OpenStack regions, it is recommended to configure a single | |
| OpenStack Object Storage service endpoint to deliver shared object storage | |
| for all regions. The Object Storage service internally replicates files | |
| to multiple nodes which can be used by applications or workloads in | |
| multiple regions. This simplifies high availability failover and | |
| disaster recovery rollback. | |
| In order to scale the Object Storage service to meet the workload of | |
| multiple regions, multiple proxy workers are run and load-balanced, | |
| storage nodes are installed in each region, and the entire Object | |
| Storage Service can be fronted by an HTTP caching layer. This is done so | |
| client requests for objects can be served out of caches rather than | |
| directly from the storage modules themselves, reducing the actual load | |
| on the storage network. In addition to an HTTP caching layer, use a | |
| caching layer like Memcache to cache objects between the proxy and | |
| storage nodes. | |
| If the cloud is designed with a separate Object Storage service endpoint | |
| made available in each region, applications are required to handle | |
| synchronization (if desired) and other management operations to ensure | |
| consistency across the nodes. For some applications, having multiple | |
| Object Storage Service endpoints located in the same region as the | |
| application may be desirable due to reduced latency, cross region | |
| bandwidth, and ease of deployment. | |
| \begin{sphinxadmonition}{note}{Note:} | |
| For the Block Storage service, the most important decisions are the | |
| selection of the storage technology, and whether a dedicated network | |
| is used to carry storage traffic from the storage service to the | |
| compute nodes. | |
| \end{sphinxadmonition} | |
| \subsubsection{Networking} | |
| \label{\detokenize{multi-site-architecture:networking}} | |
| When connecting multiple regions together, there are several design | |
| considerations. The overlay network technology choice determines how | |
| packets are transmitted between regions and how the logical network and | |
| addresses are presented to the application. If there are security or | |
| regulatory requirements, encryption should be implemented to secure the | |
| traffic between regions. For networking inside a region, the overlay | |
| network technology for project networks is equally important. The overlay | |
| technology and the network traffic that an application generates or | |
| receives can be either complementary or serve cross purposes. For | |
| example, using an overlay technology for an application that transmits a | |
| large amount of small packets could add excessive latency or overhead to | |
| each packet if not configured properly. | |
| \subsubsection{Dependencies} | |
| \label{\detokenize{multi-site-architecture:dependencies}} | |
| The architecture for a multi-site OpenStack installation is dependent on | |
| a number of factors. One major dependency to consider is storage. When | |
| designing the storage system, the storage mechanism needs to be | |
| determined. Once the storage type is determined, how it is accessed is | |
| critical. For example, we recommend that storage should use a dedicated | |
| network. Another concern is how the storage is configured to protect the | |
| data. For example, the Recovery Point Objective (RPO) and the Recovery | |
| Time Objective (RTO). How quickly recovery from a fault can be | |
| completed, determines how often the replication of data is required. | |
| Ensure that enough storage is allocated to support the data protection | |
| strategy. | |
| Networking decisions include the encapsulation mechanism that can be | |
| used for the project networks, how large the broadcast domains should be, | |
| and the contracted SLAs for the interconnects. | |
| \subsection{Prescriptive examples} | |
| \label{\detokenize{multi-site-prescriptive-examples::doc}}\label{\detokenize{multi-site-prescriptive-examples:prescriptive-examples}} | |
| There are multiple ways to build a multi-site OpenStack installation, | |
| based on the needs of the intended workloads. Below are example | |
| architectures based on different requirements. These examples are meant | |
| as a reference, and not a hard and fast rule for deployments. Use the | |
| previous sections of this chapter to assist in selecting specific | |
| components and implementations based on specific needs. | |
| A large content provider needs to deliver content to customers that are | |
| geographically dispersed. The workload is very sensitive to latency and | |
| needs a rapid response to end users. After reviewing the user, technical, | |
| and operational considerations, it is determined to be beneficial to build a | |
| number of regions local to the customer's edge. Rather than build a few | |
| large, centralized data centers, the intent of the architecture is to | |
| provide a pair of small data centers in locations that are closer to the | |
| customer. In this use case, spreading applications out allows for a | |
| different kind of horizontal scaling than a traditional compute workload. | |
| The intent is to scale by creating more copies of the application in | |
| closer proximity to the users that need it most, in order to ensure | |
| faster response time to user requests. This provider deploys two | |
| data centers at each of the four chosen regions. The implications of this | |
| design are based around the method of placing copies of resources in | |
| each of the remote regions. Swift objects, Glance images, and block | |
| storage need to be manually replicated into each region. This may be | |
| beneficial for some systems, such as the case of content service, where | |
| only some of the content needs to exist in some but not all regions. A | |
| centralized Keystone is recommended to ensure authentication and that | |
| access to the API endpoints is easily manageable. | |
| It is recommended that you install an automated DNS system such as | |
| Designate. Application administrators need a way to manage the mapping | |
| of which application copy exists in each region and how to reach it, | |
| unless an external Dynamic DNS system is available. Designate assists by | |
| making the process automatic and by populating the records in the each | |
| region's zone. | |
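The per-region mapping such a DNS service maintains is conceptually
simple. The sketch below is purely illustrative: the zone, host names,
and addresses are invented, and real record creation would go through the
DNS service's API (for example, Designate) rather than a dictionary:
\begin{verbatim}
# Hypothetical per-region application records that an automated DNS
# system would keep populated.
REGIONS = {
    "region-east": "203.0.113.10",   # documentation-range addresses
    "region-west": "198.51.100.10",
}

def region_records(app, zone="example.com."):
    """Yield (FQDN, A-record target) pairs, one per region."""
    for region, vip in REGIONS.items():
        yield ("%s.%s.%s" % (app, region, zone), vip)

for fqdn, ip in region_records("content"):
    print(fqdn, "->", ip)
\end{verbatim}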
Telemetry for each region is also deployed, as each region may grow
differently or be used at a different rate. Ceilometer collects each
region's meters from each of the controllers and reports them back to a
central location. This is useful both to the end user and the
administrator of the OpenStack environment. The end user will find this
method useful, as it makes it possible to determine if certain locations
are experiencing higher load than others, and take appropriate action.
Administrators also benefit by being able to forecast growth
per region, rather than expanding the capacity of all regions
simultaneously, thereby maximizing the cost-effectiveness of the
multi-site design.
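As an illustration of the administrator's side of this, the following
sketch compares hypothetical per-region CPU meters to pick expansion
candidates; in practice the samples would come from Ceilometer's API,
not a hard-coded dictionary:
\begin{verbatim}
# Flag regions whose average load exceeds a threshold so capacity can
# be added per region instead of everywhere at once.
from statistics import mean

region_cpu_samples = {           # invented recent samples, in percent
    "region-1": [62, 70, 68],
    "region-2": [35, 31, 40],
}

def regions_over_threshold(samples, threshold):
    return [r for r, s in samples.items() if mean(s) > threshold]

print(regions_over_threshold(region_cpu_samples, threshold=60.0))
# ['region-1'] is the candidate for expansion
\end{verbatim}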
One of the key decisions of running this infrastructure is whether or
not to provide a redundancy model. Two types of redundancy and high
availability models can be implemented in this configuration. The first
type is the availability of central OpenStack components. Keystone can
be made highly available in three central data centers that host the
centralized OpenStack components. This prevents the loss of any one
region from causing a service outage. It also has the added benefit of
being able to run a central storage repository as a primary cache for
distributing content to each of the regions.

The second redundancy type is the edge data center itself. A second data
center in each of the edge regional locations houses a second region near
the first region. This ensures that the application does not suffer
degraded performance in terms of latency and availability.

{\hyperref[\detokenize{multi-site-prescriptive-examples:ms-customer-edge}]{\sphinxcrossref{\DUrole{std,std-ref}{Multi-site architecture example}}}} depicts the solution designed to have both a
centralized set of core data centers for OpenStack services and paired edge
data centers:
\begin{figure}[H]
\centering
\capstart
\noindent\sphinxincludegraphics{{Multi-Site_Customer_Edge}.png}
\caption{\sphinxstylestrong{Multi-site architecture example}}\label{\detokenize{multi-site-prescriptive-examples:ms-customer-edge}}\label{\detokenize{multi-site-prescriptive-examples:id1}}\end{figure}
\subsubsection{Geo-redundant load balancing}
\label{\detokenize{multi-site-prescriptive-examples:geo-redundant-load-balancing}}
A large-scale web application has been designed with cloud principles in
mind. The application is designed to provide service to an application
store, on a 24/7 basis. The company has a typical two-tier architecture
with a web front end servicing the customer requests, and a NoSQL
database back end storing the information.

Recently, there have been several outages at a number of major public
cloud providers caused by applications running out of a single
geographical location. The design therefore should mitigate the chance
of a single site causing an outage for the business.

The solution would consist of the following OpenStack components:
\begin{itemize}
\item {}
A firewall, switches, and load balancers on the public-facing network
connections.

\item {}
OpenStack Controller services running Networking, dashboard, Block
Storage, and Compute locally in each of the three regions. The
Identity service, Orchestration service, Telemetry service, Image
service, and Object Storage service can be installed centrally, with
nodes in each of the regions providing a redundant OpenStack
Controller plane throughout the globe.

\item {}
OpenStack compute nodes running the KVM hypervisor.

\item {}
OpenStack Object Storage for serving static objects such as images
can be used to ensure that all images are standardized across all the
regions, and replicated on a regular basis.

\item {}
A distributed DNS service available to all regions that allows for
dynamic update of DNS records of deployed instances.

\item {}
A geo-redundant load balancing service can be used to service the
requests from the customers based on their origin.
\end{itemize}
An autoscaling Heat template can be used to deploy the application in
the three regions. This template includes:
\begin{itemize}
\item {}
Web servers, running Apache.

\item {}
Appropriate \sphinxcode{user\_data} to populate the central DNS servers upon
instance launch.

\item {}
Appropriate Telemetry alarms that maintain the state of the application
and allow for handling of region or instance failure.
\end{itemize}
Another autoscaling Heat template can be used to deploy a distributed
MongoDB shard over the three locations, with the option of storing
required data on a globally available Swift container. According to the
usage of and load on the database server, additional shards can be
provisioned according to the thresholds defined in Telemetry.

Two data centers would have been sufficient had the requirements been
met, but three regions are selected here to avoid abnormal load on a
single region in the event of a failure.

Orchestration is used because of the built-in functionality of
autoscaling and auto healing in the event of increased load. Additional
configuration management tools, such as Puppet or Chef, could also have
been used in this scenario, but were not chosen since Orchestration had
the appropriate built-in hooks into the OpenStack cloud, whereas the
other tools were external and not native to OpenStack. In addition,
external tools were not needed since this deployment scenario was
straightforward.

OpenStack Object Storage is used here to serve as a back end for the
Image service since it is the most suitable solution for a globally
distributed storage solution with its own replication mechanism.
Home-grown solutions could also have been used, including the handling
of replication, but were not chosen because Object Storage is already an
integral part of the infrastructure and a proven solution.

An external load balancing service was used rather than the LBaaS in
OpenStack because the solution in OpenStack is not redundant and does
not have any awareness of geographic location.
\begin{figure}[H]
\centering
\capstart
\noindent\sphinxincludegraphics{{Multi-site_Geo_Redundant_LB}.png}
\caption{\sphinxstylestrong{Multi-site geo-redundant architecture}}\label{\detokenize{multi-site-prescriptive-examples:ms-geo-redundant}}\label{\detokenize{multi-site-prescriptive-examples:id2}}\end{figure}
\subsubsection{Location-local service}
\label{\detokenize{multi-site-prescriptive-examples:location-local-service}}
A common use for a multi-site OpenStack deployment is creating a Content
Delivery Network. An application that uses a location-local architecture
requires low network latency and proximity to the user to provide an
optimal user experience and reduce the cost of bandwidth and transit.
The content resides on sites closer to the customer, instead of a
centralized content store that requires utilizing higher-cost
cross-country links.

This architecture includes a geo-location component that places user
requests at the closest possible node. In this scenario, 100\% redundancy
of content across every site is a goal rather than a requirement, with
the intent to maximize the amount of content available within a minimum
number of network hops for end users. Despite these differences, the
storage replication configuration has significant overlap with that of a
geo-redundant load balancing use case.

In {\hyperref[\detokenize{multi-site-prescriptive-examples:ms-shared-keystone}]{\sphinxcrossref{\DUrole{std,std-ref}{Multi-site shared keystone architecture}}}}, the location-aware application utilizing this
multi-site OpenStack installation would launch web server or content-serving
instances on the compute cluster in each site. Requests from clients
are first sent to a global services load balancer that determines the location
of the client, then routes the request to the closest OpenStack site where the
application completes the request.
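The placement decision made by the global services load balancer can be
sketched as a nearest-site lookup. The sites and coordinates below are
invented; production geo-routing would rely on the load balancing
service's own geo-IP data rather than code like this:
\begin{verbatim}
# Toy nearest-site selection using planar distance (adequate here).
from math import dist

SITES = {                       # hypothetical site coordinates
    "site-east": (40.7, -74.0),
    "site-west": (37.8, -122.4),
}

def closest_site(client_lat, client_lon):
    return min(SITES, key=lambda s: dist(SITES[s], (client_lat, client_lon)))

print(closest_site(47.6, -122.3))   # -> site-west
\end{verbatim}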
\begin{figure}[H]
\centering
\capstart
\noindent\sphinxincludegraphics{{Multi-Site_shared_keystone1}.png}
\caption{\sphinxstylestrong{Multi-site shared keystone architecture}}\label{\detokenize{multi-site-prescriptive-examples:ms-shared-keystone}}\label{\detokenize{multi-site-prescriptive-examples:id3}}\end{figure}
OpenStack is capable of running in a multi-region configuration. This
enables some parts of OpenStack to effectively manage a group of sites
as a single cloud.

Some use cases that might indicate a need for a multi-site deployment of
OpenStack include:
\begin{itemize}
\item {}
An organization with a diverse geographic footprint.

\item {}
Geo-location sensitive data.

\item {}
Data locality, in which specific data or functionality should be
close to users.
\end{itemize}
\section{Hybrid}
\label{\detokenize{hybrid:hybrid}}\label{\detokenize{hybrid::doc}}
\subsection{User requirements}
\label{\detokenize{hybrid-user-requirements:user-requirements}}\label{\detokenize{hybrid-user-requirements::doc}}
Hybrid cloud architectures are complex, especially those
that use heterogeneous cloud platforms.
Ensure that design choices match requirements so that the
benefits outweigh the inherent additional complexity and risks.
\subsubsection{Business considerations}
\label{\detokenize{hybrid-user-requirements:business-considerations}}
\paragraph{Business considerations when designing a hybrid cloud deployment}
\label{\detokenize{hybrid-user-requirements:business-considerations-when-designing-a-hybrid-cloud-deployment}}\begin{description}
\item[{Cost}] \leavevmode
A hybrid cloud architecture involves multiple vendors and
technical architectures.
These architectures may be more expensive to deploy and maintain.
Operational costs can be higher because of the need for more
sophisticated orchestration and brokerage tools than in other architectures.
In contrast, overall operational costs might be lower by
virtue of using a cloud brokerage tool to deploy the
workloads to the most cost-effective platform.

\item[{Revenue opportunity}] \leavevmode
Revenue opportunities vary based on the intent and use case of the cloud.
For a commercial, customer-facing product, consider whether building
over multiple platforms makes the design more attractive to customers.

\item[{Time-to-market}] \leavevmode
One common reason to use cloud platforms is to improve the
time-to-market of a new product or application.
For example, using multiple cloud platforms is viable when
there is an existing investment in several applications:
it is faster to tie the investments together than to migrate
the components and refactor them onto a single platform.

\item[{Business or technical diversity}] \leavevmode
Organizations leveraging cloud-based services can embrace business
diversity and utilize a hybrid cloud design to spread their
workloads across multiple cloud providers. This ensures that
no single cloud provider is the sole host for an application.

\item[{Application momentum}] \leavevmode
Businesses with existing applications may find that it is
more cost effective to integrate applications on multiple
cloud platforms than to migrate them to a single platform.
\end{description}
\subsubsection{Workload considerations}
\label{\detokenize{hybrid-user-requirements:workload-considerations}}
A workload can be a single application or a suite of applications
that work together. It can also be a duplicate set of applications that
need to run on multiple cloud environments.
In a hybrid cloud deployment, the same workload often needs to function
equally well on radically different public and private cloud environments.
The architecture needs to address these potential conflicts,
complexity, and platform incompatibilities.
\paragraph{Use cases for a hybrid cloud architecture}
\label{\detokenize{hybrid-user-requirements:use-cases-for-a-hybrid-cloud-architecture}}\begin{description}
\item[{Dynamic resource expansion or bursting}] \leavevmode
An application that requires additional resources may suit a multiple
cloud architecture. For example, a retailer needs additional resources
during the holiday season, but does not want to add private cloud
resources to meet the peak demand.
The user can accommodate the increased load by bursting to
a public cloud for these peak load periods. These bursts could be
for long or short cycles ranging from hourly to yearly
(see the sketch following this list).

\item[{Disaster recovery and business continuity}] \leavevmode
Cheaper storage makes the public cloud suitable for maintaining
backup applications.

\item[{Federated hypervisor and instance management}] \leavevmode
Adding self-service, charge back, and transparent delivery of
the resources from a federated pool can be cost effective.
In a hybrid cloud environment, this is a particularly important
consideration. Look for a cloud that provides cross-platform
hypervisor support and robust instance management tools.

\item[{Application portfolio integration}] \leavevmode
An enterprise cloud delivers efficient application portfolio
management and deployments by leveraging self-service features
and rules according to use.
Integrating existing cloud environments is a common driver
when building hybrid cloud architectures.

\item[{Migration scenarios}] \leavevmode
Hybrid cloud architecture enables the migration of
applications between different clouds.

\item[{High availability}] \leavevmode
A combination of locations and platforms enables a level of
availability that is not possible with a single platform.
This approach increases design complexity.
\end{description}
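The bursting decision itself reduces to a capacity check. The sketch
below is schematic only, with invented capacity figures; a real CMP
would query each cloud's APIs before placing a workload:
\begin{verbatim}
# Prefer the private cloud; burst to a public cloud only when the
# private cloud cannot absorb the request.
PRIVATE_CAPACITY_VCPUS = 400

def place_workload(requested_vcpus, private_used_vcpus):
    if private_used_vcpus + requested_vcpus <= PRIVATE_CAPACITY_VCPUS:
        return "private"
    return "public"

print(place_workload(requested_vcpus=32, private_used_vcpus=380))
# -> public
\end{verbatim}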
As running a workload on multiple cloud platforms increases design
complexity, we recommend first exploring options such as transferring
workloads across clouds at the application, instance, cloud platform,
hypervisor, and network levels.
\subsubsection{Tools considerations}
\label{\detokenize{hybrid-user-requirements:tools-considerations}}
Hybrid cloud designs must incorporate tools to facilitate working
across multiple clouds.
\paragraph{Tool functions}
\label{\detokenize{hybrid-user-requirements:tool-functions}}\begin{description}
\item[{Broker between clouds}] \leavevmode
Brokering software evaluates relative costs between different
cloud platforms. Cloud Management Platforms (CMP)
allow the designer to determine the right location for the
workload based on predetermined criteria.

\item[{Facilitate orchestration across the clouds}] \leavevmode
CMPs simplify the migration of application workloads between
public, private, and hybrid cloud platforms.
We recommend using cloud orchestration tools for managing a diverse
portfolio of systems and applications across multiple cloud platforms.
\end{description}
\subsubsection{Network considerations}
\label{\detokenize{hybrid-user-requirements:network-considerations}}
It is important to consider the functionality, security, scalability,
availability, and testability of the network when choosing a CMP and
cloud provider.
\begin{itemize}
\item {}
Decide on a network framework and design minimum functionality tests.
This ensures testing and functionality persists during and after
upgrades.

\item {}
Scalability across multiple cloud providers may dictate which underlying
network framework you choose for different cloud providers.
It is important to present the network API functions and to verify
that functionality persists across all cloud endpoints chosen.

\item {}
High availability implementations vary in functionality and design.
Examples of some common methods are active-hot-standby, active-passive,
and active-active.
Development of high availability and test frameworks is necessary to
ensure understanding of functionality and limitations.

\item {}
Consider the security of data between the client and the endpoint,
and of traffic that traverses the multiple clouds.
\end{itemize}
\subsubsection{Risk mitigation and management considerations}
\label{\detokenize{hybrid-user-requirements:risk-mitigation-and-management-considerations}}
Hybrid cloud architectures introduce additional risk because
they are more complex than a single cloud design and may involve
incompatible components or tools. However, they also reduce
risk by spreading workloads over multiple providers.
\paragraph{Hybrid cloud risks}
\label{\detokenize{hybrid-user-requirements:hybrid-cloud-risks}}\begin{description}
\item[{Provider availability or implementation details}] \leavevmode
Business changes can affect provider availability.
Likewise, changes in a provider's service can disrupt
a hybrid cloud environment or increase costs.

\item[{Differing SLAs}] \leavevmode
Hybrid cloud designs must accommodate differences in SLAs
between providers, and consider their enforceability.

\item[{Security levels}] \leavevmode
Securing multiple cloud environments is more complex than
securing single cloud environments. We recommend addressing
concerns at the application, network, and cloud platform levels.
Be aware that each cloud platform approaches security differently,
and a hybrid cloud design must address and compensate for these differences.

\item[{Provider API changes}] \leavevmode
Consumers of external clouds rarely have control over provider
changes to APIs, and changes can break compatibility.
Using only the most common and basic APIs can minimize potential conflicts.
\end{description}
\subsection{Technical considerations}
\label{\detokenize{hybrid-technical-considerations::doc}}\label{\detokenize{hybrid-technical-considerations:technical-considerations}}
A hybrid cloud environment requires inspection and
understanding of technical issues in external data centers that may
not be in your control. Ideally, select an architecture
and CMP that are adaptable to changing environments.

Using diverse cloud platforms increases the risk of compatibility
issues, but clouds using the same version and distribution
of OpenStack are less likely to experience problems.
Clouds that exclusively use the same versions of OpenStack should
have no issues, regardless of distribution. More recent distributions
are less likely to encounter incompatibility between versions.
An OpenStack community initiative defines core functions that need to
remain backward compatible between supported versions. For example, the
DefCore initiative defines basic functions that every distribution must
support in order to use the name OpenStack.

Vendors can add proprietary customization to their distributions.
If an application or architecture makes use of these features, it can be
difficult to migrate to or use other types of environments.

If an environment includes non-OpenStack clouds, it may experience
compatibility problems. CMP tools must account for the differences in
the handling of operations and the implementation of services.

\sphinxstylestrong{Possible cloud incompatibilities}
\begin{itemize}
\item {}
Instance deployment

\item {}
Network management

\item {}
Application management

\item {}
Services implementation
\end{itemize}
\subsubsection{Capacity planning}
\label{\detokenize{hybrid-technical-considerations:capacity-planning}}
One of the primary reasons many organizations use a hybrid cloud
is to increase capacity without making large capital investments.
Capacity and the placement of workloads are key design considerations
for hybrid clouds. The long-term capacity plan for these designs must
incorporate growth over time to prevent permanent consumption of more
expensive external clouds.
To avoid this scenario, account for future applications' capacity
requirements and plan growth appropriately.

It is difficult to predict the amount of load a particular
application might incur if the number of users fluctuates, or the
application experiences an unexpected increase in use.
It is possible to define application requirements in terms of
vCPU, RAM, bandwidth, or other resources and plan appropriately.
However, other clouds might not use the same meter or even the same
oversubscription rates.

Oversubscription is a method to emulate more capacity than
may physically be present.
For example, a physical hypervisor node with 32 GB RAM may host
24 instances, each provisioned with 2 GB RAM.
As long as all 24 instances do not concurrently use their full
2 GB, this arrangement works well.
However, some hosts take oversubscription to extremes and,
as a result, performance can be inconsistent.
If at all possible, determine the oversubscription rate
of each host and plan capacity accordingly.
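The arithmetic behind the example above is worth making explicit. This
is a planning calculation only, not an OpenStack API:
\begin{verbatim}
# Memory oversubscription ratio: provisioned RAM over physical RAM.
def memory_oversubscription_ratio(instances, gb_per_instance, physical_gb):
    return (instances * gb_per_instance) / physical_gb

# 24 instances at 2 GB on a 32 GB hypervisor:
print(memory_oversubscription_ratio(24, 2, 32))   # -> 1.5
\end{verbatim}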
\subsubsection{Utilization}
\label{\detokenize{hybrid-technical-considerations:utilization}}
A CMP must be aware of what workloads are running, where they are
running, and their preferred utilizations.
For example, in most cases it is desirable to run as many workloads
internally as possible, utilizing other resources only when necessary.
On the other hand, situations exist in which the opposite is true,
such as when an internal cloud is only for development and stressing
it is undesirable. A cost model of various scenarios and
consideration of internal priorities helps with this decision.
To improve efficiency, automate these decisions when possible.

The Telemetry service (ceilometer) provides information on the usage
of various OpenStack components. Note the following:
\begin{itemize}
\item {}
If Telemetry must retain a large amount of data, for
example when monitoring a large or active cloud, we recommend
using a NoSQL back end such as MongoDB.

\item {}
You must monitor connections to non-OpenStack clouds
and report this information to the CMP.
\end{itemize}
\subsubsection{Performance}
\label{\detokenize{hybrid-technical-considerations:performance}}
Performance is critical to hybrid cloud deployments, and they are
affected by many of the same issues as multi-site deployments, such
as network latency between sites. Also consider the time required to
run a workload in different clouds and methods for reducing this time.
This may require moving data closer to applications or applications
closer to the data they process, and grouping functionality so that
connections that require low latency take place over a single cloud
rather than spanning clouds.
This may also require a CMP that can determine which cloud can most
efficiently run which types of workloads.

As with utilization, native OpenStack tools help improve performance.
For example, you can use Telemetry to measure performance and the
Orchestration service (heat) to react to changes in demand.
\begin{sphinxadmonition}{note}{Note:}
Orchestration requires special client configurations to integrate
with Amazon Web Services. For other types of clouds, use CMP features.
\end{sphinxadmonition}
\subsubsection{Components}
\label{\detokenize{hybrid-technical-considerations:components}}
Using more than one cloud in any design requires consideration of
four OpenStack tools:
\begin{description}
\item[{OpenStack Compute (nova)}] \leavevmode
Regardless of deployment location, hypervisor choice has a direct
effect on how difficult it is to integrate with additional clouds.

\item[{Networking (neutron)}] \leavevmode
Whether using OpenStack Networking (neutron) or legacy
networking (nova-network), it is necessary to understand
network integration capabilities in order to connect between clouds.

\item[{Telemetry (ceilometer)}] \leavevmode
Use of Telemetry depends, in large part, on the other parts
of the cloud you are using.

\item[{Orchestration (heat)}] \leavevmode
Orchestration can be a valuable tool in orchestrating tasks a
CMP decides are necessary in an OpenStack-based cloud.
\end{description}
\subsubsection{Special considerations}
\label{\detokenize{hybrid-technical-considerations:special-considerations}}
Hybrid cloud deployments require consideration of two issues that
are not common in other situations:
\begin{description}
\item[{Image portability}] \leavevmode
As of the Kilo release, there is no common image format that is
usable by all clouds. Conversion or recreation of images is necessary
if migrating between clouds. To simplify deployment, use the smallest
and simplest images feasible, install only what is necessary, and
use a deployment manager such as Chef or Puppet. Do not use golden
images to speed up the process unless you repeatedly deploy the same
images on the same cloud.

\item[{API differences}] \leavevmode
Avoid using a hybrid cloud deployment with more than just
OpenStack (or with different versions of OpenStack) as API changes
can cause compatibility issues.
\end{description}
\subsection{Architecture}
\label{\detokenize{hybrid-architecture::doc}}\label{\detokenize{hybrid-architecture:architecture}}
Map out the dependencies of the expected workloads and the cloud
infrastructures required to support them. Architect a solution with
the broadest compatibility between cloud platforms, minimizing
the need to create workarounds and processes to fill identified gaps.
For your chosen cloud management platform, note the relative
levels of support for both monitoring and orchestration.
\begin{figure}[H]
\centering
\noindent\sphinxincludegraphics[width=1.000\linewidth]{{Multi-Cloud_Priv-AWS4}.png}
\end{figure}
\subsubsection{Image portability}
\label{\detokenize{hybrid-architecture:image-portability}}
The majority of cloud workloads currently run on instances using
hypervisor technologies. The challenge is that each of these hypervisors
uses an image format that may not be compatible with the others.
When possible, standardize on a single hypervisor and instance image format.
This may not be possible when using externally managed public clouds.

Conversion tools exist to address image format compatibility.
Examples include \href{http://libguestfs.org/virt-v2v}{virt-p2v/virt-v2v}
and \href{http://libguestfs.org/virt-edit.1.html}{virt-edit}.
These tools cannot serve beyond basic cloud instance specifications.
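Conversion can also be scripted. The sketch below uses
\sphinxcode{qemu-img}, another commonly available converter alongside the
virt-* tools mentioned above; the file names and formats are
placeholders, so verify the formats each of your clouds accepts:
\begin{verbatim}
# Convert a qcow2 image to VMDK by shelling out to qemu-img.
import subprocess

def convert_image(src, dst, dst_format="vmdk"):
    subprocess.run(
        ["qemu-img", "convert", "-O", dst_format, src, dst],
        check=True,   # raise if the conversion fails
    )

convert_image("web-server.qcow2", "web-server.vmdk")
\end{verbatim}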
Alternatively, build a thin operating system image as the base for
new instances.
This facilitates rapid creation of cloud instances using cloud orchestration
or configuration management tools for more specific templating.
Remember that if you intend to use portable images for disaster recovery,
application diversity, or high availability, your users could move
the images and instances between cloud platforms regularly.
\subsubsection{Upper-layer services}
\label{\detokenize{hybrid-architecture:upper-layer-services}}
Many clouds offer complementary services beyond the
basic compute, network, and storage components.
These additional services often simplify the deployment
and management of applications on a cloud platform.

When moving workloads from the source to the destination
cloud platforms, consider that the destination cloud platform
may not have comparable services. In that case, implement the workloads
in a different way or by using a different technology.
For example, moving an application that uses a NoSQL database
service such as MongoDB could cause difficulties in maintaining
the application between the platforms.

There are a number of options that are appropriate for
the hybrid cloud use case:
\begin{itemize}
\item {}
Implementing a baseline of upper-layer services across all
of the cloud platforms. For platforms that do not support
a given service, create a service on top of that platform
and apply it to the workloads as they are launched on that cloud.

\item {}
For example, through the {\hyperref[\detokenize{common/glossary:term-database-service-trove}]{\sphinxtermref{\DUrole{xref,std,std-term}{Database service}}}} for OpenStack ({\hyperref[\detokenize{common/glossary:term-trove}]{\sphinxtermref{\DUrole{xref,std,std-term}{trove}}}}), OpenStack supports MySQL
as a service but not NoSQL databases in production.
To move from or run alongside AWS, a NoSQL workload must use
an automation tool, such as the Orchestration service (heat),
to recreate the NoSQL database on top of OpenStack.

\item {}
Deploying a {\hyperref[\detokenize{common/glossary:term-platform-as-a-service-paas}]{\sphinxtermref{\DUrole{xref,std,std-term}{Platform-as-a-Service (PaaS)}}}} technology that
abstracts the upper-layer services from the underlying cloud platform.
The unit of application deployment and migration is the PaaS.
It leverages the services of the PaaS and only consumes the base
infrastructure services of the cloud platform.

\item {}
Using automation tools to create the required upper-layer services
that are portable across all cloud platforms.
For example, instead of using database services that are inherent
in the cloud platforms, launch cloud instances and deploy the
databases on those instances using scripts or configuration and
application deployment tools.
\end{itemize}
\subsubsection{Network services}
\label{\detokenize{hybrid-architecture:network-services}}
Network services functionality is a critical component of
multiple cloud architectures. It is an important factor
to assess when choosing a CMP and cloud provider.
Considerations include:
\begin{itemize}
\item {}
Functionality

\item {}
Security

\item {}
Scalability

\item {}
High availability (HA)
\end{itemize}
Verify and test critical cloud endpoint features.
\begin{itemize}
\item {}
After selecting the network functionality framework,
you must confirm the functionality is compatible.
This ensures testing and functionality persists
during and after upgrades.
\begin{sphinxadmonition}{note}{Note:}
Diverse cloud platforms may de-synchronize over time
if you do not maintain their mutual compatibility.
This is a particular issue with APIs.
\end{sphinxadmonition}

\item {}
Scalability across multiple cloud providers determines
your choice of underlying network framework.
It is important to have the network API functions presented
and to verify that the desired functionality persists across
all chosen cloud endpoints.

\item {}
High availability implementations vary in functionality and design.
Examples of some common methods are active-hot-standby,
active-passive, and active-active.
Develop your high availability implementation and a test framework to
understand the functionality and limitations of the environment.

\item {}
It is imperative to address security considerations.
For example, address how data is secured between the client and
endpoint, and how traffic that traverses the multiple clouds is secured.
Business and regulatory requirements dictate what security
approach to take. For more information, see the
{\hyperref[\detokenize{legal-security-requirements:security}]{\sphinxcrossref{\DUrole{std,std-ref}{Security requirements}}}} chapter.
\end{itemize}
\subsubsection{Data}
\label{\detokenize{hybrid-architecture:data}}
Traditionally, replication has been the best method of protecting
object store implementations. A variety of replication methods exist
in storage architectures, for example synchronous and asynchronous
mirroring. Most object stores and back-end storage systems implement
methods for replication at the storage subsystem layer.
Object stores also tailor replication techniques
to fit a cloud's requirements.

Organizations must find the right balance between
data integrity and data availability. Replication strategy may
also influence disaster recovery methods.

Replication across different racks, data centers, and geographical
regions increases focus on determining and ensuring data locality.
The ability to guarantee data is accessed from the nearest or
fastest storage can be necessary for applications to perform well.
\begin{sphinxadmonition}{note}{Note:}
When running embedded object store methods, ensure that you do not
introduce extra data replication, as this can cause performance issues.
\end{sphinxadmonition}
\subsection{Operational considerations}
\label{\detokenize{hybrid-operational-considerations:operational-considerations}}\label{\detokenize{hybrid-operational-considerations::doc}}
Hybrid cloud deployments present complex operational challenges.
Differences between provider clouds can cause incompatibilities
with workloads or Cloud Management Platforms (CMP).
Cloud providers may also offer different levels of integration
with competing cloud offerings.

Monitoring is critical to maintaining a hybrid cloud, and it is
important to determine if a CMP supports monitoring of all the
clouds involved, or if compatible APIs are available to be queried
for necessary information.
\subsubsection{Agility}
\label{\detokenize{hybrid-operational-considerations:agility}}
Hybrid clouds provide application availability across different
cloud environments and technologies.
This availability enables the deployment to survive disaster
in any single cloud environment.
Each cloud should provide the means to create instances quickly in
response to capacity issues or failure elsewhere in the hybrid cloud.
\subsubsection{Application readiness}
\label{\detokenize{hybrid-operational-considerations:application-readiness}}
Enterprise workloads that depend on the underlying infrastructure
for availability are not designed to run on OpenStack.
If the application cannot tolerate infrastructure failures,
it is likely to require significant operator intervention to recover.
Applications for hybrid clouds must be fault tolerant, with an SLA
that is not tied to the underlying infrastructure.
Ideally, cloud applications should be able to recover when entire
racks and data centers experience an outage.
\subsubsection{Upgrades}
\label{\detokenize{hybrid-operational-considerations:upgrades}}
If a deployment includes a public cloud, predicting upgrades may
not be possible. Carefully examine provider SLAs.
\begin{sphinxadmonition}{note}{Note:}
At massive scale, even when dealing with a cloud that offers
an SLA with a high percentage of uptime, workloads must be able
to recover quickly.
\end{sphinxadmonition}
When upgrading private cloud deployments, minimize disruption by
making incremental changes and providing a facility to either roll back
or continue to roll forward when using a continuous delivery model.
You may need to coordinate CMP upgrades with hybrid cloud upgrades
if there are API changes.
\subsubsection{Network Operation Center}
\label{\detokenize{hybrid-operational-considerations:network-operation-center}}
Consider infrastructure control when planning the Network Operation
Center (NOC) for a hybrid cloud environment.
If a significant portion of the cloud is on externally managed systems,
prepare for situations where it may not be possible to make changes.
Additionally, providers may differ on how infrastructure must be
managed and exposed. This can lead to delays in root cause analysis
where each insists the blame lies with the other provider.

Ensure that the network structure connects all clouds to form an
integrated system, keeping in mind the state of handoffs.
These handoffs must both be as reliable as possible and
include as little latency as possible to ensure the best
performance of the overall system.
\subsubsection{Maintainability}
\label{\detokenize{hybrid-operational-considerations:maintainability}}
Hybrid clouds rely on third-party systems and processes.
As a result, it is not possible to guarantee proper maintenance
of the overall system. Instead, be prepared to abandon workloads
and recreate them in an improved state.
\subsection{Prescriptive examples}
\label{\detokenize{hybrid-prescriptive-examples::doc}}\label{\detokenize{hybrid-prescriptive-examples:prescriptive-examples}}
Hybrid cloud environments are designed for these use cases:
\begin{itemize}
\item {}
Bursting workloads from private to public OpenStack clouds

\item {}
Bursting workloads from private to public non-OpenStack clouds

\item {}
High availability across clouds (for technical diversity)
\end{itemize}
This chapter provides examples of environments that address
each of these use cases.
\subsubsection{Bursting to a public OpenStack cloud}
\label{\detokenize{hybrid-prescriptive-examples:bursting-to-a-public-openstack-cloud}}
Company A's data center is running low on capacity.
It is not possible to expand the data center in the foreseeable future.
In order to accommodate the continuously growing need for
development resources in the organization,
Company A decides to use resources in the public cloud.

Company A has an established data center with a substantial amount
of hardware. Migrating the workloads to a public cloud is not feasible.

The company has an internal cloud management platform that directs
requests to the appropriate cloud, depending on the local capacity.
This is a custom in-house application written for this specific purpose.

This solution is depicted in the figure below:
\begin{figure}[H]
\centering
\noindent\sphinxincludegraphics[width=1.000\linewidth]{{Multi-Cloud_Priv-Pub3}.png}
\end{figure}
This example shows two clouds with a Cloud Management
Platform (CMP) connecting them. This guide does not
discuss a specific CMP, but describes how the Orchestration and
Telemetry services handle, manage, and control workloads.

The private OpenStack cloud has at least one controller and at least
one compute node. It includes metering using the Telemetry service.
The Telemetry service captures the load increase and the CMP
processes the information. If there is available capacity,
the CMP uses the OpenStack API to call the Orchestration service.
This creates instances on the private cloud in response to user requests.
When capacity is not available on the private cloud, the CMP issues
a request to the Orchestration service API of the public cloud.
This creates the instance on the public cloud.

In this example, Company A does not direct the deployments to an
external non-OpenStack public cloud due to concerns regarding resource
control, security, and increased operational expense.
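The control flow described above can be sketched in a few lines. The
helper functions here are stubs standing in for real Telemetry queries
and Orchestration stack-create calls on each cloud; no actual OpenStack
API is shown:
\begin{verbatim}
# Schematic CMP bursting workflow: check private capacity first,
# then fall back to the public cloud's Orchestration endpoint.
def private_cloud_has_capacity():
    return False          # stub: would evaluate Telemetry meters

def launch_stack(cloud):
    print("creating Orchestration stack on the %s cloud" % cloud)

def handle_scale_out_request():
    if private_cloud_has_capacity():
        launch_stack("private")
    else:
        launch_stack("public")

handle_scale_out_request()
# -> creating Orchestration stack on the public cloud
\end{verbatim}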
\subsubsection{Bursting to a public non-OpenStack cloud}
\label{\detokenize{hybrid-prescriptive-examples:bursting-to-a-public-non-openstack-cloud}}
The second example examines bursting workloads from the private cloud
into a non-OpenStack public cloud using Amazon Web Services (AWS)
to take advantage of additional capacity and to scale applications.

The following diagram demonstrates an OpenStack-to-AWS hybrid cloud:
\begin{figure}[H]
\centering
\noindent\sphinxincludegraphics[width=1.000\linewidth]{{Multi-Cloud_Priv-AWS4}.png}
\end{figure}
Company B states that its developers are already using AWS
and do not want to change to a different provider.

If the CMP is capable of connecting to an external cloud
provider with an appropriate API, the workflow process remains
the same as the previous scenario.
The actions the CMP takes, such as monitoring loads and
creating new instances, stay the same.
However, the CMP performs actions in the public cloud
using applicable API calls.

If the public cloud is AWS, the CMP would use the
EC2 API to create a new instance and assign an Elastic IP.
It can then add that IP to HAProxy in the private cloud.
The CMP can also reference AWS-specific
tools such as CloudWatch and CloudFormation.
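As an illustration of these AWS-side calls, the following sketch uses
boto3, the AWS SDK for Python. The AMI ID and instance type are
placeholders, and error handling, waiting for the instance to become
available, and the HAProxy update are omitted:
\begin{verbatim}
# Launch an EC2 instance and attach an Elastic IP, as a CMP might.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t2.small",
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation["Instances"][0]["InstanceId"]

allocation = ec2.allocate_address(Domain="vpc")        # Elastic IP
ec2.associate_address(InstanceId=instance_id,
                      AllocationId=allocation["AllocationId"])
# The CMP would then add allocation["PublicIp"] to the HAProxy
# back-end pool in the private cloud.
\end{verbatim}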
Several open source toolkits for building CMPs are
available and can handle this kind of translation.
Examples include ManageIQ, jClouds, and JumpGate.
\subsubsection{High availability and disaster recovery}
\label{\detokenize{hybrid-prescriptive-examples:high-availability-and-disaster-recovery}}
Company C requires its local data center to be able to
recover from failure. Some of the workloads currently in
use are running on its private OpenStack cloud.
Protecting the data involves Block Storage, Object Storage,
and a database. The architecture supports the failure of
large components of the system while ensuring that the
system continues to deliver services.
While the services remain available to users, the failed
components are restored in the background based on standard
best practice data replication policies.
To achieve these objectives, Company C replicates data to
a second cloud in a geographically distant location.
The following diagram describes this system:
\begin{figure}[H]
\centering
\noindent\sphinxincludegraphics[width=1.000\linewidth]{{Multi-Cloud_failover2}.png}
\end{figure}
This example includes two private OpenStack clouds connected with a CMP.
The source cloud, OpenStack Cloud 1, includes a controller and
at least one instance running MySQL. It also includes at least
one Block Storage volume and one Object Storage volume.
This means that data is available to the users at all times.
The details of the method for protecting each of these sources
of data differ.

Object Storage relies on the replication capabilities of
the Object Storage provider.
Company C enables OpenStack Object Storage so that it creates
geographically separated replicas that take advantage of this feature.
The company configures storage so that at least one replica
exists in each cloud. In order to make this work, the company
configures a single array spanning both clouds with OpenStack Identity.
Using Federated Identity, the array talks to both clouds, communicating
with OpenStack Object Storage through the Swift proxy.

For Block Storage, the replication is a little more difficult,
and involves tools outside of OpenStack itself.
The OpenStack Block Storage volume is not set as the drive itself
but as a logical object that points to a physical back end.
Disaster recovery is configured for Block Storage for
synchronous backup for the highest level of data protection,
but asynchronous backup could have been set as an alternative
that is not as latency sensitive.
For asynchronous backup, the Block Storage API makes it possible
to export the data and also the metadata of a particular volume,
so that it can be moved and replicated elsewhere.
More information can be found here:
\url{https://blueprints.launchpad.net/cinder/+spec/cinder-backup-volume-metadata-support}.
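As a hedged sketch of this export step using python-cinderclient, the
method names below should be verified against your release; the backup
ID is a placeholder, and authentication is elided:
\begin{verbatim}
# Export a backup record so the second cloud can import it.
from cinderclient import client

# Placeholder: construct the client with a real Keystone session.
cinder = client.Client("2", session=None)

record = cinder.backups.export_record("BACKUP_UUID")
# The returned record carries the backup service and backup URL,
# which the destination cloud can pass to backups.import_record()
# to recreate the backup entry there.
\end{verbatim}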
Synchronous backup creates an identical volume in both
clouds and chooses the appropriate flavor so that each cloud
has an identical back end. This is done by creating volumes
through the CMP. After this is configured, a solution
involving DRBD synchronizes the physical drives.

The database component is backed up using synchronous backups.
MySQL does not support geographically diverse replication,
so disaster recovery is provided by replicating the file itself.
As it is not possible to use Object Storage as the back end of
a database like MySQL, Swift replication is not an option.
Company C decides not to store the data on another geo-tiered
storage system, such as Ceph, as Block Storage.
This would have given another layer of protection.
Another option would have been to store the database on an OpenStack
Block Storage volume and back it up like any other Block Storage volume.

A {\hyperref[\detokenize{common/glossary:term-hybrid-cloud}]{\sphinxtermref{\DUrole{xref,std,std-term}{hybrid cloud}}}} design is one that uses more than one cloud.
For example, designs that use both an OpenStack-based private
cloud and an OpenStack-based public cloud, or that use an
OpenStack cloud and a non-OpenStack cloud, are hybrid clouds.
{\hyperref[\detokenize{common/glossary:term-bursting}]{\sphinxtermref{\DUrole{xref,std,std-term}{Bursting}}}} describes the practice of creating new instances
in an external cloud to alleviate capacity issues in a private cloud.

\sphinxstylestrong{Example scenarios suited to hybrid clouds}
\begin{itemize}
\item {}
Bursting from a private cloud to a public cloud

\item {}
Disaster recovery

\item {}
Development and testing

\item {}
Federated cloud, enabling users to choose resources from multiple providers

\item {}
Supporting legacy systems as they transition to the cloud
\end{itemize}
Hybrid clouds interact with systems that are outside the
control of the private cloud administrator, and require
careful architecture to prevent conflicts with hardware,
software, and APIs under external control.

The degree to which the architecture is OpenStack-based affects your ability
to accomplish tasks with native OpenStack tools. By definition,
this is a situation in which no single cloud can provide all
of the necessary functionality. In order to manage the entire
system, we recommend using a cloud management platform (CMP).
There are several commercial and open source CMPs available,
but there is no single CMP that can address all needs in all
scenarios, and sometimes a manually-built solution is the best
option. This chapter includes discussion of using CMPs for
managing a hybrid cloud.
\section{Massively scalable}
\label{\detokenize{massively-scalable:massively-scalable}}\label{\detokenize{massively-scalable::doc}}
\subsection{User requirements}
\label{\detokenize{massively-scalable-user-requirements:user-requirements}}\label{\detokenize{massively-scalable-user-requirements::doc}}
Defining user requirements for a massively scalable OpenStack design
architecture dictates approaching the design from two different, yet sometimes
opposing, perspectives: the cloud user, and the cloud operator. The
expectations and perceptions of the consumption and management of resources of
a massively scalable OpenStack cloud from these two perspectives are
distinctly different.

Massively scalable OpenStack clouds have the following user requirements:
\begin{itemize}
\item {}
The cloud user expects repeatable, dependable, and deterministic processes
for launching and deploying cloud resources. You could deliver this through
a web-based interface or publicly available API endpoints. All appropriate
options for requesting cloud resources must be available through some type
of user interface, a command-line interface (CLI), or API endpoints.

\item {}
Cloud users expect a fully self-service and on-demand consumption model.
When an OpenStack cloud reaches the massively scalable size, expect
consumption as a service in each and every way.

\item {}
For a user of a massively scalable OpenStack public cloud, there are no
expectations for control over security, performance, or availability. Users
expect only SLAs related to uptime of API services, and very basic SLAs for
services offered. It is the user's responsibility to address these issues on
their own. The exception to this expectation is the rare case of a massively
scalable cloud infrastructure built for a private or government organization
that has specific requirements.
\end{itemize}
The cloud user's requirements and expectations that determine the cloud design
focus on the consumption model. The user expects to consume cloud resources in
an automated and deterministic way, without any need for knowledge of the
capacity, scalability, or other attributes of the cloud's underlying
infrastructure.
\subsubsection{Operator requirements}
\label{\detokenize{massively-scalable-user-requirements:operator-requirements}}
While the cloud user can be completely unaware of the underlying
infrastructure of the cloud and its attributes, the operator must build and
support the infrastructure for operating at scale. This presents a very
demanding set of requirements for building such a cloud from the operator's
perspective:
\begin{itemize}
\item {}
Everything must be capable of automation, from the provisioning of
compute, storage, and networking hardware to the installation
and configuration of the supporting software. Manual processes are
impractical in a massively scalable OpenStack design architecture.

\item {}
The cloud operator requires that capital expenditure (CapEx) is minimized at
all layers of the stack. Operators of massively scalable OpenStack clouds
require the use of dependable commodity hardware and freely available open
source software components to reduce deployment costs and operational
expenses. Initiatives like OpenCompute (more information available at
\href{http://www.opencompute.org}{Open Compute Project})
provide additional information and pointers. To
cut costs, many operators sacrifice redundancy, for example, forgoing
redundant power supplies, network connections, and rack switches.

\item {}
Companies operating a massively scalable OpenStack cloud also require that
operational expenditures (OpEx) be minimized as much as possible. We
recommend using cloud-optimized hardware when managing operational overhead.
Some of the factors to consider include power, cooling, and the physical
design of the chassis. Through customization, it is possible to optimize the
hardware and systems for this type of workload because of the scale of these
implementations.

\item {}
Massively scalable OpenStack clouds require extensive metering and
monitoring functionality to maximize the operational efficiency by keeping
the operator informed about the status and state of the infrastructure. This
includes full-scale metering of the hardware and software status. A
corresponding framework of logging and alerting is also required to store
and enable operations to act on the meters provided by the metering and
monitoring solutions. The cloud operator also needs a solution that uses the
data provided by the metering and monitoring solution to provide capacity
planning and capacity trending analysis.

\item {}
Invariably, massively scalable OpenStack clouds extend over several sites.
Therefore, the user-operator requirements for a multi-site OpenStack
architecture design are also applicable here. This includes various legal
requirements; other jurisdictional legal or compliance requirements; image
consistency-availability; storage replication and availability (both block
and file/object storage); and authentication, authorization, and auditing
(AAA). See {\hyperref[\detokenize{multi-site::doc}]{\sphinxcrossref{\DUrole{doc}{Multi-site}}}} for more details on requirements and
considerations for multi-site OpenStack clouds.

\item {}
The design architecture of a massively scalable OpenStack cloud must address
considerations around physical facilities such as space, floor weight, rack
height and type, environmental considerations, power usage and power usage
efficiency (PUE), and physical security.
\end{itemize}
| \subsection{Technical considerations} | |
| \label{\detokenize{massively-scalable-technical-considerations::doc}}\label{\detokenize{massively-scalable-technical-considerations:technical-considerations}} | |
| Repurposing an existing OpenStack environment to be massively scalable is a | |
| formidable task. When building a massively scalable environment from the | |
| ground up, ensure you build the initial deployment with the same principles | |
| and choices that apply as the environment grows. For example, a good approach | |
| is to deploy the first site as a multi-site environment. This enables you to | |
| use the same deployment and segregation methods as the environment grows to | |
| separate locations across dedicated links or wide area networks. In a | |
| hyperscale cloud, scale trumps redundancy. Modify applications with this in | |
| mind, relying on the scale and homogeneity of the environment to provide | |
| reliability rather than redundant infrastructure provided by non-commodity | |
| hardware solutions. | |
| \subsubsection{Infrastructure segregation} | |
| \label{\detokenize{massively-scalable-technical-considerations:infrastructure-segregation}} | |
| OpenStack services support massive horizontal scale. Be aware that this is | |
| not the case for the entire supporting infrastructure. This is particularly a | |
| problem for the database management systems and message queues that OpenStack | |
| services use for data storage and remote procedure call communications. | |
| Traditional clustering techniques typically provide high availability and some | |
| additional scale for these environments. In the quest for massive scale, | |
| however, you must take additional steps to relieve the performance pressure on | |
| these components in order to prevent them from negatively impacting the | |
| overall performance of the environment. Ensure that the components remain in | |
| balance, so that if the massively scalable environment does reach its | |
| limits, all components are near maximum capacity at the same time and no | |
| single component causes the failure. | |
| Regions segregate completely independent installations, linked only by a | |
| shared Identity service and, optionally, a shared Dashboard. Services have | |
| separate API endpoints for each region, and include separate database and | |
| queue installations. This exposes some awareness of the environment's fault | |
| domains to users and gives them the ability to ensure some degree of | |
| application resiliency, while also requiring them to specify the region in | |
| which their actions apply. | |
| Environments operating at massive scale typically need their regions or sites | |
| subdivided further without exposing the requirement to specify the failure | |
| domain to the user. This provides the ability to further divide the | |
| installation into failure domains while also providing a logical unit for | |
| maintenance and the addition of new hardware. At hyperscale, instead of adding | |
| single compute nodes, administrators can add entire racks or even groups of | |
| racks at a time with each new addition of nodes exposed via one of the | |
| segregation concepts mentioned herein. | |
| {\hyperref[\detokenize{common/glossary:term-cell}]{\sphinxtermref{\DUrole{xref,std,std-term}{Cells}}}} provide the ability to subdivide the compute portion of | |
| an OpenStack installation, including regions, while still exposing a single | |
| endpoint. Each region has an API cell along with a number of compute cells | |
| where the workloads actually run. Each cell has its own database and message | |
| queue setup (ideally clustered), providing the ability to subdivide the load | |
| on these subsystems, improving overall performance. | |
| Each compute cell provides a complete compute installation, complete with full | |
| database and queue installations, scheduler, conductor, and multiple compute | |
| hosts. The cells scheduler handles placement of user requests from the single | |
| API endpoint to a specific cell from those available. The normal filter | |
| scheduler then handles placement within the cell. | |
| Unfortunately, Compute is the only OpenStack service that provides good | |
| support for cells. In addition, cells do not adequately support some standard | |
| OpenStack functionality such as security groups and host aggregates. Due to | |
| their relative newness and specialized use, cells receive relatively little | |
| testing in the OpenStack gate. Despite these issues, cells play an important | |
| role in well known OpenStack installations operating at massive scale, such as | |
| those at CERN and Rackspace. | |
| \subsubsection{Host aggregates} | |
| \label{\detokenize{massively-scalable-technical-considerations:host-aggregates}} | |
| Host aggregates enable partitioning of OpenStack Compute deployments into | |
| logical groups for load balancing and instance distribution. You can also use | |
| host aggregates to further partition an availability zone. Consider a cloud | |
| which might use host aggregates to partition an availability zone into groups | |
| of hosts that either share common resources, such as storage and network, or | |
| have a special property, such as trusted computing hardware. You cannot target | |
| host aggregates explicitly. Instead, select instance flavors that map to host | |
| aggregate metadata. These flavors target host aggregates implicitly. | |
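| As an illustrative sketch of this mapping (assuming the python-novaclient | |
| library and the AggregateInstanceExtraSpecsFilter scheduler filter; all | |
| names and credentials are hypothetical placeholders): | |
| \begin{sphinxVerbatim} | |
| # Illustrative sketch with python-novaclient (credentials and | |
| # host names are hypothetical placeholders). | |
| from novaclient import client | |
|  | |
| nova = client.Client('2', 'admin', 'PASSWORD', 'admin', | |
|                      'http://controller:5000/v2.0') | |
|  | |
| # Group the SSD-backed hosts into an aggregate and tag it. | |
| agg = nova.aggregates.create('fast-storage', None) | |
| nova.aggregates.set_metadata(agg, {'ssd': 'true'}) | |
| nova.aggregates.add_host(agg, 'compute-ssd-001') | |
|  | |
| # A flavor whose extra spec maps to the aggregate metadata; the | |
| # scheduler then targets the aggregate implicitly whenever this | |
| # flavor is requested. | |
| flavor = nova.flavors.create('ssd.medium', ram=4096, vcpus=2, disk=40) | |
| flavor.set_keys({'aggregate_instance_extra_specs:ssd': 'true'}) | |
| \end{sphinxVerbatim} | |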
| \subsubsection{Availability zones} | |
| \label{\detokenize{massively-scalable-technical-considerations:availability-zones}} | |
| Availability zones provide another mechanism for subdividing an installation | |
| or region. They are, in effect, host aggregates exposed for (optional) | |
| explicit targeting by users. | |
| Unlike cells, availability zones do not have their own database server or | |
| queue broker but represent an arbitrary grouping of compute nodes. Typically, | |
| nodes are grouped into availability zones using a shared failure domain based | |
| on a physical characteristic such as a shared power source or physical network | |
| connections. Users can explicitly target an exposed availability zone, but | |
| this is not a requirement. Alternatively, the operator can change the | |
| default availability zone so that instances schedule to a zone other than | |
| nova's built-in default zone. | |
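| The following sketch (again assuming python-novaclient; all names and IDs | |
| are hypothetical) shows an aggregate exposed as an availability zone and an | |
| instance booted into it explicitly: | |
| \begin{sphinxVerbatim} | |
| # Illustrative sketch: an aggregate created with an availability | |
| # zone name exposes that zone for explicit targeting (all names | |
| # and IDs are hypothetical). | |
| from novaclient import client | |
|  | |
| nova = client.Client('2', 'admin', 'PASSWORD', 'admin', | |
|                      'http://controller:5000/v2.0') | |
|  | |
| agg = nova.aggregates.create('rack-12', 'az-power-a') | |
| nova.aggregates.add_host(agg, 'compute-101') | |
|  | |
| # Users may target the zone explicitly at boot time, or omit | |
| # availability_zone and rely on the operator's configured default. | |
| nova.servers.create('demo-instance', 'IMAGE_UUID', 'FLAVOR_ID', | |
|                     availability_zone='az-power-a') | |
| \end{sphinxVerbatim} | |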
| \subsubsection{Segregation example} | |
| \label{\detokenize{massively-scalable-technical-considerations:segregation-example}} | |
| In this example, the cloud is divided into two regions, an API cell and | |
| three child cells for each region, with three availability zones in each | |
| cell based on the power layout of the data centers. | |
| The following figure shows the relationship between them within one region. | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Massively_Scalable_Cells_regions_azs}.png} | |
| \end{figure} | |
| A number of host aggregates enable targeting of virtual machine instances, | |
| using flavors, to hosts that share special capabilities such as SSDs, | |
| 10 GbE networks, or GPU cards. | |
| \subsection{Operational considerations} | |
| \label{\detokenize{massively-scalable-operational-considerations:operational-considerations}}\label{\detokenize{massively-scalable-operational-considerations::doc}} | |
| In order to run efficiently at massive scale, automate as many of the | |
| operational processes as possible. Automation includes the configuration of | |
| provisioning, monitoring and alerting systems. Part of the automation process | |
| includes the capability to determine when human intervention is required and | |
| who should act. The objective is to decrease the ratio of operational staff to | |
| running systems as much as possible in order to reduce maintenance costs. In a | |
| massively scaled environment, it is very difficult for staff to give each | |
| system individual care. | |
| Configuration management tools such as Puppet and Chef enable operations staff | |
| to categorize systems into groups based on their roles and thus create | |
| configurations and system states that the provisioning system enforces. | |
| Systems that fall out of the defined state due to errors or failures are | |
| quickly removed from the pool of active nodes and replaced. | |
| At large scale the resource cost of diagnosing failed individual systems is | |
| far greater than the cost of replacement. It is more economical to replace the | |
| failed system with a new system, provisioning and configuring it automatically | |
| and adding it to the pool of active nodes. By automating tasks that are | |
| labor-intensive, repetitive, and critical to operations, cloud operations | |
| teams can work more efficiently because fewer resources are required for these | |
| common tasks. Administrators are then free to tackle tasks that are not easy | |
| to automate and that have longer-term impacts on the business, for example, | |
| capacity planning. | |
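| A minimal sketch of this detect-and-replace loop, assuming python-novaclient | |
| and a hypothetical re-provisioning hook supplied by the deployment tooling: | |
| \begin{sphinxVerbatim} | |
| # Minimal sketch: disable nova-compute services that have stopped | |
| # reporting and hand the hosts to a re-provisioning hook instead | |
| # of diagnosing them by hand (credentials are placeholders). | |
| from novaclient import client | |
|  | |
| def reprovision(host): | |
|     # Placeholder for the provisioning system (Puppet, Chef, or | |
|     # in-house tooling) that reinstalls and re-enrolls a node. | |
|     print('re-provisioning %s' % host) | |
|  | |
| nova = client.Client('2', 'admin', 'PASSWORD', 'admin', | |
|                      'http://controller:5000/v2.0') | |
|  | |
| for svc in nova.services.list(binary='nova-compute'): | |
|     if svc.state == 'down' and svc.status == 'enabled': | |
|         # Remove the node from the active pool, then rebuild it. | |
|         nova.services.disable(svc.host, 'nova-compute') | |
|         reprovision(svc.host) | |
| \end{sphinxVerbatim} | |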
| \subsubsection{The bleeding edge} | |
| \label{\detokenize{massively-scalable-operational-considerations:the-bleeding-edge}} | |
| Running OpenStack at massive scale requires striking a balance between | |
| stability and features. For example, it might be tempting to run an older | |
| stable release branch of OpenStack to make deployments easier. However, when | |
| running at massive scale, known issues that may be of some concern or only | |
| have minimal impact in smaller deployments could become pain points. Recent | |
| releases may address well known issues. The OpenStack community can help | |
| resolve reported issues by applying the collective expertise of the OpenStack | |
| developers. | |
| The number of organizations running at massive scale is a small proportion of | |
| the OpenStack community; it is therefore important to share related issues | |
| with the community and be a vocal advocate for resolving them. Some issues | |
| only manifest when operating at large scale, and the number of organizations | |
| able to duplicate and validate an issue is small, so it is important to | |
| document and dedicate resources to their resolution. | |
| In some cases, the resolution to the problem is ultimately to deploy a more | |
| recent version of OpenStack. Alternatively, when you must resolve an issue in | |
| a production environment where rebuilding the entire environment is not an | |
| option, it is sometimes possible to deploy updates to specific underlying | |
| components in order to resolve issues or gain significant performance | |
| improvements. Although this may appear to expose the deployment to increased | |
| risk and instability, in many cases the targeted update is less risky than | |
| continuing to run with an unresolved issue. | |
| We recommend building a development and operations organization that is | |
| responsible for creating desired features, diagnosing and resolving issues, | |
| and building the infrastructure for large scale continuous integration tests | |
| and continuous deployment. This helps catch bugs early and makes deployments | |
| faster and easier. In addition to development resources, we also recommend the | |
| recruitment of experts in the fields of message queues, databases, distributed | |
| systems, networking, cloud, and storage. | |
| \subsubsection{Growth and capacity planning} | |
| \label{\detokenize{massively-scalable-operational-considerations:growth-and-capacity-planning}} | |
| An important consideration in running at massive scale is projecting growth | |
| and utilization trends in order to plan capital expenditures for the short and | |
| long term. Gather utilization meters for compute, network, and storage, along | |
| with historical records of these meters. While securing major anchor projects | |
| can lead to rapid jumps in the utilization rates of all resources, the steady | |
| adoption of the cloud inside an organization or by consumers in a public | |
| offering also creates a steady trend of increased utilization. | |
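| As a minimal illustration of trending, the following sketch fits a | |
| least-squares line to historical utilization samples and projects it | |
| forward (the sample values are hypothetical): | |
| \begin{sphinxVerbatim} | |
| # Minimal sketch: project utilization growth from monthly samples | |
| # with a least-squares fit; real input would come from the | |
| # metering store. | |
| def linear_trend(samples): | |
|     """Return (slope, intercept) of a least-squares fit.""" | |
|     n = len(samples) | |
|     xs = list(range(n)) | |
|     mean_x = sum(xs) / float(n) | |
|     mean_y = sum(samples) / float(n) | |
|     cov = sum((x - mean_x) * (y - mean_y) | |
|               for x, y in zip(xs, samples)) | |
|     var = sum((x - mean_x) ** 2 for x in xs) | |
|     slope = cov / var | |
|     return slope, mean_y - slope * mean_x | |
|  | |
| cpu_util = [52.0, 55.5, 59.0, 61.5, 66.0, 70.5]  # percent, monthly | |
| slope, intercept = linear_trend(cpu_util) | |
| months_ahead = 6 | |
| projected = intercept + slope * (len(cpu_util) - 1 + months_ahead) | |
| print('projected CPU utilization in %d months: %.1f%%' | |
|       % (months_ahead, projected)) | |
| \end{sphinxVerbatim} | |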
| \subsubsection{Skills and training} | |
| \label{\detokenize{massively-scalable-operational-considerations:skills-and-training}} | |
| Projecting growth for storage, networking, and compute is only one aspect of a | |
| growth plan for running OpenStack at massive scale. Growing and nurturing | |
| development and operational staff is an additional consideration. Sending team | |
| members to OpenStack conferences and meetup events, and encouraging active | |
| participation in the mailing lists and committees, is a very important way to | |
| maintain skills and forge relationships in the community. For a list of | |
| OpenStack training providers in the marketplace, see the \href{https://www.openstack.org/marketplace/training/}{OpenStack Marketplace}. | |
| A massively scalable architecture is a cloud implementation | |
| that is either a very large deployment, such as a commercial | |
| service provider might build, or one that has the capability | |
| to support user requests for large amounts of cloud resources. | |
| An example is an infrastructure in which requests to service | |
| 500 or more instances at a time are common. A massively scalable | |
| infrastructure fulfills such a request without exhausting the | |
| available cloud infrastructure resources. While the high capital | |
| cost of implementing such a cloud architecture means that it | |
| is currently in limited use, many organizations are planning for | |
| massive scalability in the future. | |
| A massively scalable OpenStack cloud design presents a unique | |
| set of challenges and considerations. For the most part it is | |
| similar to a general purpose cloud architecture, as it is built | |
| to address a non-specific range of potential use cases or | |
| functions. It is rare that particular workloads determine | |
| the design or configuration of massively scalable clouds. The | |
| massively scalable cloud is most often built as a platform for | |
| a variety of workloads. Because private organizations rarely | |
| require or have the resources for them, massively scalable | |
| OpenStack clouds are generally built as commercial, public | |
| cloud offerings. | |
| Services provided by a massively scalable OpenStack cloud | |
| include: | |
| \begin{itemize} | |
| \item {} | |
| Virtual-machine disk image library | |
| \item {} | |
| Raw block storage | |
| \item {} | |
| File or object storage | |
| \item {} | |
| Firewall functionality | |
| \item {} | |
| Load balancing functionality | |
| \item {} | |
| Private (non-routable) and public (floating) IP addresses | |
| \item {} | |
| Virtualized network topologies | |
| \item {} | |
| Software bundles | |
| \item {} | |
| Virtual compute resources | |
| \end{itemize} | |
| Like a general purpose cloud, the instances deployed in a | |
| massively scalable OpenStack cloud do not necessarily use | |
| any specific aspect of the cloud offering (compute, network, or storage). | |
| As the cloud grows in scale, the number of workloads can cause | |
| stress on all the cloud components. This adds further stresses | |
| to supporting infrastructure such as databases and message brokers. | |
| The architecture design for such a cloud must account for these | |
| performance pressures without negatively impacting user experience. | |
| \section{Specialized cases} | |
| \label{\detokenize{specialized:specialized-cases}}\label{\detokenize{specialized::doc}} | |
| \subsection{Multi-hypervisor example} | |
| \label{\detokenize{specialized-multi-hypervisor:multi-hypervisor-example}}\label{\detokenize{specialized-multi-hypervisor::doc}} | |
| A financial company requires that its applications be migrated | |
| from a traditional, virtualized environment to an API-driven, | |
| orchestrated environment. The new environment needs | |
| multiple hypervisors since many of the company's applications | |
| have strict hypervisor requirements. | |
| Currently, the company's vSphere environment runs 20 VMware | |
| ESXi hypervisors. These hypervisors support 300 instances of | |
| various sizes. Approximately 50 of these instances must run | |
| on ESXi. The remaining 250 or so have more flexible requirements. | |
| The financial company decides to manage the | |
| overall system with a common OpenStack platform. | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics[width=1.000\linewidth]{{Compute_NSX}.png} | |
| \end{figure} | |
| Architecture planning teams decided to run a host aggregate | |
| containing KVM hypervisors for the general purpose instances. | |
| A separate host aggregate targets instances requiring ESXi. | |
| Images in the OpenStack Image service have particular | |
| hypervisor metadata attached. When a user requests a | |
| certain image, the instance spawns on the relevant aggregate. | |
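| A sketch of attaching that metadata (assuming python-glanceclient; the | |
| endpoint, token, and image IDs are hypothetical) might look like this: | |
| \begin{sphinxVerbatim} | |
| # Illustrative sketch: tag each image with its required | |
| # hypervisor so the scheduler's image-properties filtering | |
| # steers instances to the matching aggregate. | |
| from glanceclient import Client | |
|  | |
| glance = Client('2', 'http://controller:9292', token='TOKEN') | |
|  | |
| # ESXi-only application image. | |
| glance.images.update('ESXI_IMAGE_UUID', hypervisor_type='vmware') | |
| # General purpose image for the KVM aggregate. | |
| glance.images.update('KVM_IMAGE_UUID', hypervisor_type='qemu') | |
| \end{sphinxVerbatim} | |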
| Images for ESXi use the VMDK format. You can convert | |
| QEMU disk images to VMDK (VMFS flat disks), which can be | |
| thin, thick, zeroed-thick, or eager-zeroed-thick. | |
| When a VMFS thin disk is exported from VMFS to the | |
| OpenStack Image service (a non-VMFS location), it becomes a | |
| preallocated flat disk. This impacts the transfer time from the | |
| OpenStack Image service to the data store since transfers require | |
| moving the full preallocated flat disk rather than the thin disk. | |
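| A conversion to VMDK can be scripted, for example, by calling | |
| \sphinxcode{qemu-img} (paths are placeholders; the streamOptimized | |
| subformat is a common choice for vSphere uploads): | |
| \begin{sphinxVerbatim} | |
| # Minimal sketch: convert a qcow2 image to a VMDK suitable for | |
| # upload to the Image service (file names are placeholders). | |
| import subprocess | |
|  | |
| subprocess.check_call([ | |
|     'qemu-img', 'convert', | |
|     '-f', 'qcow2',                      # source format | |
|     '-O', 'vmdk',                       # target format | |
|     '-o', 'subformat=streamOptimized',  # sparse, upload-friendly | |
|     'app-server.qcow2', 'app-server.vmdk', | |
| ]) | |
| \end{sphinxVerbatim} | |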
| The VMware host aggregate compute nodes communicate with | |
| vCenter rather than spawning directly on a hypervisor. | |
| The vCenter then requests scheduling for the instance to run on | |
| an ESXi hypervisor. | |
| This functionality requires that VMware Distributed Resource | |
| Scheduler (DRS) is enabled on a cluster and set to \sphinxstylestrong{Fully Automated}. | |
| vSphere requires shared storage because DRS uses vMotion, | |
| a service that relies on shared storage. | |
| This solution to the company's migration uses shared storage | |
| to provide Block Storage capabilities to the KVM instances while | |
| also providing vSphere storage. The new environment provides this | |
| storage functionality using a dedicated data network. The | |
| compute hosts should have dedicated NICs to support the | |
| dedicated data network. vSphere supports OpenStack Block Storage, which | |
| can present storage from a VMFS datastore to an instance. For the | |
| financial company, Block Storage in their new architecture supports | |
| both hypervisors. | |
| OpenStack Networking provides network connectivity in this new | |
| architecture, with the VMware NSX plug-in driver configured. Legacy | |
| networking (nova-network) supports both hypervisors in this new | |
| architecture example, but has limitations. Specifically, vSphere | |
| with legacy networking does not support security groups. The new | |
| architecture uses VMware NSX as a part of the design. When users launch an | |
| instance within either of the host aggregates, VMware NSX ensures the | |
| instance attaches to the appropriate network overlay-based logical networks. | |
| The architecture planning teams also consider OpenStack Compute integration. | |
| When running vSphere in an OpenStack environment, each nova-compute | |
| service that communicates with vCenter appears as a single large | |
| hypervisor representing the entire ESXi cluster. Multiple nova-compute | |
| instances can represent multiple ESXi clusters and can connect to | |
| multiple vCenter servers. If the process running nova-compute | |
| crashes, the connection to its vCenter server is severed; OpenStack | |
| management of the associated ESXi clusters stops, and you cannot | |
| provision further instances on that vCenter, even if you enable high | |
| availability. You must monitor the nova-compute service connected | |
| to vSphere carefully for any disruptions as a result of this failure point. | |
| \subsection{Specialized networking example} | |
| \label{\detokenize{specialized-networking::doc}}\label{\detokenize{specialized-networking:specialized-networking-example}} | |
| Some applications that interact with a network require | |
| specialized connectivity. For example, a looking glass application | |
| requires the ability to connect to a BGP peer, and route participant | |
| applications may need to join a network at the layer-2 level. | |
| \subsubsection{Challenges} | |
| \label{\detokenize{specialized-networking:challenges}} | |
| Connecting specialized network applications to their required | |
| resources alters the design of an OpenStack installation. | |
| Installations that rely on overlay networks are unable to | |
| support a routing participant, and may also block layer-2 listeners. | |
| \subsubsection{Possible solutions} | |
| \label{\detokenize{specialized-networking:possible-solutions}} | |
| Deploying an OpenStack installation using OpenStack Networking with a | |
| provider network allows direct layer-2 connectivity to an | |
| upstream networking device. | |
| This design provides the layer-2 connectivity required to communicate | |
| via the Intermediate System to Intermediate System (IS-IS) protocol or | |
| to pass packets controlled by an OpenFlow controller. | |
| Using the Modular Layer 2 (ML2) plug-in with an agent such as | |
| {\hyperref[\detokenize{common/glossary:term-open-vswitch}]{\sphinxtermref{\DUrole{xref,std,std-term}{Open vSwitch}}}} allows a private connection through a VLAN | |
| directly to a specific port in a layer-3 device. | |
| This allows a BGP point-to-point link to join the autonomous system. | |
| Avoid using layer-3 plug-ins as they divide the broadcast | |
| domain and prevent router adjacencies from forming. | |
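| A sketch of creating such a provider network (assuming | |
| python-neutronclient; credentials, the physical network name, and the | |
| VLAN ID are hypothetical): | |
| \begin{sphinxVerbatim} | |
| # Illustrative sketch: a VLAN provider network gives instances | |
| # direct layer-2 adjacency with the upstream routing device. | |
| from neutronclient.v2_0 import client | |
|  | |
| neutron = client.Client(username='admin', password='PASSWORD', | |
|                         tenant_name='admin', | |
|                         auth_url='http://controller:5000/v2.0') | |
|  | |
| neutron.create_network({'network': { | |
|     'name': 'bgp-peering', | |
|     'provider:network_type': 'vlan', | |
|     'provider:physical_network': 'physnet1', | |
|     'provider:segmentation_id': 2001, | |
| }}) | |
| \end{sphinxVerbatim} | |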
| \subsection{Software-defined networking} | |
| \label{\detokenize{specialized-software-defined-networking::doc}}\label{\detokenize{specialized-software-defined-networking:software-defined-networking}} | |
| Software-defined networking (SDN) is the separation of the data | |
| plane and control plane. SDN is a popular method of | |
| managing and controlling packet flows within networks. | |
| SDN uses overlays or directly controlled layer-2 devices to | |
| determine flow paths, and as such presents challenges to a | |
| cloud environment. Some designers may wish to run their | |
| controllers within an OpenStack installation. Others may wish | |
| to have their installations participate in an SDN-controlled network. | |
| \subsubsection{Challenges} | |
| \label{\detokenize{specialized-software-defined-networking:challenges}} | |
| SDN is a relatively new concept that is not yet standardized, | |
| so SDN systems come in a variety of different implementations. | |
| Because of this, a truly prescriptive architecture is not feasible. | |
| Instead, examine the differences between an existing and a planned | |
| OpenStack design and determine where potential conflicts and gaps exist. | |
| \subsubsection{Possible solutions} | |
| \label{\detokenize{specialized-software-defined-networking:possible-solutions}} | |
| If an SDN implementation requires layer-2 access because it | |
| directly manipulates switches, we do not recommend running an | |
| overlay network or a layer-3 agent. | |
| If the controller resides within an OpenStack installation, | |
| it may be necessary to build an ML2 plug-in and schedule the | |
| controller instances to connect to project VLANs so that they can | |
| talk directly to the switch hardware. | |
| Alternatively, depending on the external device support, | |
| use a tunnel that terminates at the switch hardware itself. | |
| \paragraph{Diagram} | |
| \label{\detokenize{specialized-software-defined-networking:diagram}} | |
| OpenStack hosted SDN controller: | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Specialized_SDN_hosted}.png} | |
| \end{figure} | |
| OpenStack participating in an SDN controller network: | |
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Specialized_SDN_external}.png} | |
| \end{figure} | |
| \subsection{Desktop-as-a-Service} | |
| \label{\detokenize{specialized-desktop-as-a-service:desktop-as-a-service}}\label{\detokenize{specialized-desktop-as-a-service::doc}} | |
| Virtual Desktop Infrastructure (VDI) is a service that hosts | |
| user desktop environments on remote servers. This application | |
| is very sensitive to network latency and requires a high | |
| performance compute environment. Traditionally these types of | |
| services do not use cloud environments because few clouds | |
| support such a demanding workload for user-facing applications. | |
| As cloud environments become more robust, vendors are starting | |
| to provide services that provide virtual desktops in the cloud. | |
| OpenStack may soon provide the infrastructure for these types of deployments. | |
| \subsubsection{Challenges} | |
| \label{\detokenize{specialized-desktop-as-a-service:challenges}} | |
| Designing an infrastructure that is suitable to host virtual | |
| desktops is a very different task from designing for most virtual workloads. | |
| For example, the design must consider: | |
| \begin{itemize} | |
| \item {} | |
| Boot storms, when a high volume of logins occur in a short period of time | |
| \item {} | |
| The performance of the applications running on virtual desktops | |
| \item {} | |
| Operating systems and their compatibility with the OpenStack hypervisor | |
| \end{itemize} | |
| \subsubsection{Broker} | |
| \label{\detokenize{specialized-desktop-as-a-service:broker}} | |
| The connection broker determines which remote desktop host | |
| users can access. Medium and large scale environments require a broker | |
| since it is a central component of the architecture. | |
| The broker is a complete management product, and enables automated | |
| deployment and provisioning of remote desktop hosts. | |
| \subsubsection{Possible solutions} | |
| \label{\detokenize{specialized-desktop-as-a-service:possible-solutions}} | |
| There are a number of commercial products currently available that | |
| provide a broker solution. However, no native OpenStack projects | |
| provide broker services. | |
| Not providing a broker is also an option, but managing this manually | |
| would not suffice for a large scale, enterprise solution. | |
| \subsubsection{Diagram} | |
| \label{\detokenize{specialized-desktop-as-a-service:diagram}}\begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics{{Specialized_VDI1}.png} | |
| \end{figure} | |
| \subsection{OpenStack on OpenStack} | |
| \label{\detokenize{specialized-openstack-on-openstack:openstack-on-openstack}}\label{\detokenize{specialized-openstack-on-openstack::doc}} | |
| In some cases, users may run OpenStack nested on top | |
| of another OpenStack cloud. This scenario describes how to | |
| manage and provision complete OpenStack environments on instances | |
| supported by hypervisors and servers, which an underlying OpenStack | |
| environment controls. | |
| Public cloud providers can use this technique to manage the | |
| upgrade and maintenance process on complete OpenStack environments. | |
| Developers and those testing OpenStack can also use this | |
| technique to provision their own OpenStack environments on | |
| available OpenStack Compute resources, whether public or private. | |
| \subsubsection{Challenges} | |
| \label{\detokenize{specialized-openstack-on-openstack:challenges}} | |
| The network aspect of deploying a nested cloud is the most | |
| complicated part of this architecture. | |
| You must expose VLANs to the physical ports on which the underlying | |
| cloud runs because the bare metal cloud owns all the hardware. | |
| You must also expose them to the nested levels. | |
| Alternatively, you can use the network overlay technologies on the | |
| OpenStack environment running on the host OpenStack environment to | |
| provide the required software defined networking for the deployment. | |
| \subsubsection{Hypervisor} | |
| \label{\detokenize{specialized-openstack-on-openstack:hypervisor}} | |
| In this example architecture, consider which | |
| approach you should take to provide a nested | |
| hypervisor in OpenStack. This decision influences which | |
| operating systems you can use for the nested | |
| OpenStack deployments. | |
| \subsubsection{Possible solutions: deployment} | |
| \label{\detokenize{specialized-openstack-on-openstack:possible-solutions-deployment}} | |
| Deployment of a full stack can be challenging but you can mitigate | |
| this difficulty by creating a Heat template to deploy the | |
| entire stack, or by using a configuration management system. After creating | |
| the Heat template, you can automate the deployment of additional stacks. | |
| The OpenStack-on-OpenStack project ({\hyperref[\detokenize{common/glossary:term-tripleo}]{\sphinxtermref{\DUrole{xref,std,std-term}{TripleO}}}}) | |
| addresses this issue. Currently, however, the project does | |
| not completely cover nested stacks. For more information, see | |
| \url{https://wiki.openstack.org/wiki/TripleO}. | |
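| As a minimal sketch of the Heat approach (assuming python-heatclient; the | |
| endpoint, token, image, and flavor names are hypothetical), a template | |
| describing the nested control plane makes additional stacks repeatable: | |
| \begin{sphinxVerbatim} | |
| # Minimal sketch: launch a stack for a nested OpenStack control | |
| # plane from an inline Heat template (all names are placeholders). | |
| from heatclient.client import Client | |
|  | |
| TEMPLATE = ''' | |
| heat_template_version: 2015-10-15 | |
| resources: | |
|   nested_controller: | |
|     type: OS::Nova::Server | |
|     properties: | |
|       image: nested-openstack-controller | |
|       flavor: m1.xlarge | |
| ''' | |
|  | |
| heat = Client('1', endpoint='http://controller:8004/v1/TENANT_ID', | |
|               token='TOKEN') | |
| heat.stacks.create(stack_name='nested-openstack', template=TEMPLATE) | |
| \end{sphinxVerbatim} | |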
| \subsubsection{Possible solutions: hypervisor} | |
| \label{\detokenize{specialized-openstack-on-openstack:possible-solutions-hypervisor}} | |
| In the case of running TripleO, the underlying OpenStack | |
| cloud deploys the compute nodes as bare metal. You then deploy | |
| OpenStack on these bare-metal Compute servers with the | |
| appropriate hypervisor, such as KVM. | |
| In the case of running smaller OpenStack clouds for testing | |
| purposes, where performance is not a critical factor, you can use | |
| QEMU instead. It is also possible to run a KVM hypervisor in an instance | |
| (see \href{http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/}{davejingtian.org}), | |
| though this is not a supported configuration, and could be a | |
| complex solution for such a use case. | |
| \subsubsection{Diagram} | |
| \label{\detokenize{specialized-openstack-on-openstack:diagram}}\begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics[width=1.000\linewidth]{{Specialized_OOO}.png} | |
| \end{figure} | |
| \subsection{Specialized hardware} | |
| \label{\detokenize{specialized-hardware:specialized-hardware}}\label{\detokenize{specialized-hardware::doc}} | |
| Certain workloads require specialized hardware devices that | |
| have significant virtualization or sharing challenges. | |
| Applications such as load balancers, highly parallel brute | |
| force computing, and direct to wire networking may need | |
| capabilities that basic OpenStack components do not provide. | |
| \subsubsection{Challenges} | |
| \label{\detokenize{specialized-hardware:challenges}} | |
| Some applications need access to hardware devices to either | |
| improve performance or provide capabilities beyond | |
| virtual CPU, RAM, network, or storage. These can be a shared | |
| resource, such as a cryptography processor, or a dedicated | |
| resource, such as a Graphics Processing Unit (GPU). OpenStack can | |
| provide some of these, while others may need extra work. | |
| \subsubsection{Solutions} | |
| \label{\detokenize{specialized-hardware:solutions}} | |
| To provide cryptography offloading to a set of instances, | |
| you can use Image service configuration options. | |
| For example, assign the cryptography chip to a device node in the guest. | |
| The OpenStack Command Line Reference contains further information on | |
| configuring this solution in the section \href{https://docs.openstack.org/cli-reference/glance.html\#image-service-property-keys}{Image service property keys}. | |
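| For example, a minimal sketch with python-glanceclient (endpoint, token, | |
| and image ID are hypothetical) sets the \sphinxcode{hw\_rng\_model} | |
| property, which exposes a virtio random number generator device to every | |
| guest booted from the image: | |
| \begin{sphinxVerbatim} | |
| # Illustrative sketch: attach an entropy device to all guests | |
| # booted from this image (IDs are placeholders). | |
| from glanceclient import Client | |
|  | |
| glance = Client('2', 'http://controller:9292', token='TOKEN') | |
| glance.images.update('CRYPTO_IMAGE_UUID', hw_rng_model='virtio') | |
| \end{sphinxVerbatim} | |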
| A challenge, however, is that this option allows all guests using the | |
| configured images to access the hypervisor cryptography device. | |
| If you require direct access to a specific device, PCI pass-through | |
| enables you to dedicate the device to a single instance per hypervisor. | |
| You must define a flavor that specifically requests the PCI device in order | |
| to schedule instances properly. | |
| More information regarding PCI pass-through, including instructions for | |
| implementing and using it, is available at | |
| \href{https://wiki.openstack.org/wiki/Pci\_passthrough\#How\_to\_check\_PCI\_status\_with\_PCI\_api\_patches}{https://wiki.openstack.org/wiki/Pci\_passthrough}. | |
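| As an illustrative sketch (assuming python-novaclient, and assuming | |
| nova.conf on the compute nodes whitelists the device and defines a PCI | |
| alias named \sphinxcode{gpu}; the alias and flavor names are | |
| hypothetical): | |
| \begin{sphinxVerbatim} | |
| # Illustrative sketch: a flavor extra spec that requests one | |
| # device from the 'gpu' PCI alias, so the scheduler places | |
| # instances only on hosts that can supply the device. | |
| from novaclient import client | |
|  | |
| nova = client.Client('2', 'admin', 'PASSWORD', 'admin', | |
|                      'http://controller:5000/v2.0') | |
|  | |
| flavor = nova.flavors.create('gpu.large', ram=16384, vcpus=8, disk=80) | |
| flavor.set_keys({'pci_passthrough:alias': 'gpu:1'}) | |
| \end{sphinxVerbatim} | |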
| \begin{figure}[H] | |
| \centering | |
| \noindent\sphinxincludegraphics[width=1.000\linewidth]{{Specialized_Hardware2}.png} | |
| \end{figure} | |
| Although most OpenStack architecture designs fall into one | |
| of the seven major scenarios outlined in other sections | |
| (compute focused, network focused, storage focused, general | |
| purpose, multi-site, hybrid cloud, and massively scalable), | |
| there are a few use cases that do not fit into these categories. | |
| This section discusses these specialized cases and provides some | |
| additional details and design considerations for each use case: | |
| \begin{itemize} | |
| \item {} | |
| {\hyperref[\detokenize{specialized-networking::doc}]{\sphinxcrossref{\DUrole{doc}{Specialized networking}}}}: | |
| describes running networking-oriented software that may involve reading | |
| packets directly from the wire or participating in routing protocols. | |
| \item {} | |
| {\hyperref[\detokenize{specialized-software-defined-networking::doc}]{\sphinxcrossref{\DUrole{doc}{Software-defined networking (SDN)}}}}: | |
| describes both running an SDN controller from within OpenStack | |
| as well as participating in a software-defined network. | |
| \item {} | |
| {\hyperref[\detokenize{specialized-desktop-as-a-service::doc}]{\sphinxcrossref{\DUrole{doc}{Desktop-as-a-Service}}}}: | |
| describes running a virtualized desktop environment in a cloud | |
| ({\hyperref[\detokenize{common/glossary:term-desktop-as-a-service}]{\sphinxtermref{\DUrole{xref,std,std-term}{Desktop-as-a-Service}}}}). | |
| This applies to private and public clouds. | |
| \item {} | |
| {\hyperref[\detokenize{specialized-openstack-on-openstack::doc}]{\sphinxcrossref{\DUrole{doc}{OpenStack on OpenStack}}}}: | |
| describes building a multi-tiered cloud by running OpenStack | |
| on top of an OpenStack installation. | |
| \item {} | |
| {\hyperref[\detokenize{specialized-hardware::doc}]{\sphinxcrossref{\DUrole{doc}{Specialized hardware}}}}: | |
| describes the use of specialized hardware devices from within | |
| the OpenStack environment. | |
| \end{itemize} | |
| \section{References} | |
| \label{\detokenize{references:references}}\label{\detokenize{references::doc}} | |
| \href{http://ec.europa.eu/justice/data-protection/}{Data Protection framework of the European Union} | |
| : Guidance on Data Protection laws governed by the EU. | |
| \href{http://www.internetsociety.org/deploy360/blog/2014/05/goodbye-ipv4-iana-starts-allocating-final-address-blocks/}{Depletion of IPv4 Addresses} | |
| : Describes the depletion of IPv4 addresses and why the migration to IPv6 is inevitable. | |
| \href{http://www.garrettcom.com/techsupport/papers/ethernet\_switch\_reliability.pdf}{Ethernet Switch Reliability} | |
| : Research white paper on Ethernet Switch reliability. | |
| \href{http://www.finra.org/Industry/Regulation/FINRARules/}{Financial Industry Regulatory Authority} | |
| : Requirements of the Financial Industry Regulatory Authority in the USA. | |
| \href{https://docs.openstack.org/cli-reference/glance.html\#image-service-property-keys}{Image Service property keys} | |
| : Glance API property keys allow the administrator to attach custom | |
| characteristics to images. | |
| \href{http://libguestfs.org}{LibGuestFS Documentation} | |
| : Official LibGuestFS documentation. | |
| \href{https://docs.openstack.org/ops-guide/ops-logging-monitoring.html}{Logging and Monitoring} | |
| : Official OpenStack Operations documentation. | |
| \href{http://manageiq.org/}{ManageIQ Cloud Management Platform} | |
| : An Open Source Cloud Management Platform for managing multiple clouds. | |
| \href{https://www.scribd.com/doc/298973976/Network-Availability}{N-Tron Network Availability} | |
| : Research white paper on network availability. | |
| \href{http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun}{Nested KVM} | |
| : Post on how to nest KVM under KVM. | |
| \href{http://www.opencompute.org/}{Open Compute Project} | |
| : The Open Compute Project Foundation's mission is to design | |
| and enable the delivery of the most efficient server, | |
| storage and data center hardware designs for scalable computing. | |
| \href{https://docs.openstack.org/ops-guide/ops-user-facing-operations.html\#flavors}{OpenStack Flavors} | |
| : Official OpenStack documentation. | |
| \href{https://docs.openstack.org/ha-guide/}{OpenStack High Availability Guide} | |
| : Information on how to provide redundancy for the OpenStack components. | |
| \href{https://wiki.openstack.org/wiki/HypervisorSupportMatrix}{OpenStack Hypervisor Support Matrix} | |
| : Matrix of supported hypervisors and capabilities when used with OpenStack. | |
| \href{https://docs.openstack.org/developer/swift/replication\_network.html}{OpenStack Object Store (Swift) Replication Reference} | |
| : Developer documentation of Swift replication. | |
| \href{https://docs.openstack.org/ops-guide/}{OpenStack Operations Guide} | |
| : The OpenStack Operations Guide provides information on setting up | |
| and installing OpenStack. | |
| \href{https://docs.openstack.org/security-guide/}{OpenStack Security Guide} | |
| : The OpenStack Security Guide provides information on securing | |
| OpenStack deployments. | |
| \href{https://www.openstack.org/marketplace/training}{OpenStack Training Marketplace} | |
| : The OpenStack Marketplace for training, listing vendors that provide | |
| training on OpenStack. | |
| \href{https://wiki.openstack.org/wiki/Pci\_passthrough\#How\_to\_check\_PCI\_status\_with\_PCI\_api\_patches}{PCI passthrough} | |
| : The PCI API patches extend the servers and os-hypervisors APIs to | |
| show PCI information for instances and compute nodes, | |
| and also provide a resource endpoint to show PCI information. | |
| \href{https://wiki.openstack.org/wiki/TripleO}{TripleO} | |
| : TripleO is a program aimed at installing, upgrading and operating | |
| OpenStack clouds using OpenStack's own cloud facilities as the foundation. | |
| \chapter{Appendix} | |
| \label{\detokenize{index:appendix}} | |
| \section{Community support} | |
| \label{\detokenize{common/app-support:community-support}}\label{\detokenize{common/app-support::doc}} | |
| The following resources are available to help you run and use OpenStack. | |
| The OpenStack community constantly improves and adds to the main | |
| features of OpenStack, but if you have any questions, do not hesitate to | |
| ask. Use the following resources to get OpenStack support and | |
| troubleshoot your installations. | |
| \subsection{Documentation} | |
| \label{\detokenize{common/app-support:documentation}} | |
| For the available OpenStack documentation, see | |
| \href{https://docs.openstack.org}{docs.openstack.org}. | |
| To provide feedback on documentation, join and use the | |
| \href{mailto:[email protected]}{[email protected]} mailing list at \href{http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs}{OpenStack | |
| Documentation Mailing | |
| List}, | |
| join our IRC channel \sphinxcode{\#openstack-doc} on the freenode IRC network, | |
| or \href{https://bugs.launchpad.net/openstack-manuals/+filebug}{report a | |
| bug}. | |
| The following books explain how to install an OpenStack cloud and its | |
| associated components: | |
| \begin{itemize} | |
| \item {} | |
| \href{https://docs.openstack.org/newton/install-guide-obs/}{Installation Tutorial for openSUSE Leap 42.2 and SUSE Linux Enterprise | |
| Server 12 SP2} | |
| \item {} | |
| \href{https://docs.openstack.org/newton/install-guide-rdo/}{Installation Tutorial for Red Hat Enterprise Linux 7 and CentOS 7} | |
| \item {} | |
| \href{https://docs.openstack.org/newton/install-guide-ubuntu/}{Installation Tutorial for Ubuntu 16.04 (LTS)} | |
| \item {} | |
| \href{https://docs.openstack.org/newton/install-guide-debconf/}{Installation Tutorial for Debian with Debconf} | |
| \item {} | |
| \href{https://docs.openstack.org/newton/install-guide-debian/}{Installation Tutorial for Debian} | |
| \end{itemize} | |
| The following books explain how to configure and run an OpenStack cloud: | |
| \begin{itemize} | |
| \item {} | |
| \href{https://docs.openstack.org/arch-design/}{Architecture Design Guide} | |
| \item {} | |
| \href{https://docs.openstack.org/admin-guide/}{Administrator Guide} | |
| \item {} | |
| \href{https://docs.openstack.org/newton/config-reference/}{Configuration Reference} | |
| \item {} | |
| \href{https://docs.openstack.org/ops/}{Operations Guide} | |
| \item {} | |
| \href{https://docs.openstack.org/newton/networking-guide}{Networking Guide} | |
| \item {} | |
| \href{https://docs.openstack.org/ha-guide/}{High Availability Guide} | |
| \item {} | |
| \href{https://docs.openstack.org/sec/}{Security Guide} | |
| \item {} | |
| \href{https://docs.openstack.org/image-guide/}{Virtual Machine Image Guide} | |
| \end{itemize} | |
| The following books explain how to use the OpenStack Dashboard and | |
| command-line clients: | |
| \begin{itemize} | |
| \item {} | |
| \href{https://docs.openstack.org/user-guide/}{End User Guide} | |
| \item {} | |
| \href{https://docs.openstack.org/cli-reference/}{Command-Line Interface Reference} | |
| \end{itemize} | |
| The following documentation provides reference and guidance information | |
| for the OpenStack APIs: | |
| \begin{itemize} | |
| \item {} | |
| \href{https://developer.openstack.org/api-guide/quick-start/}{API Guide} | |
| \end{itemize} | |
| The following guide provides how to contribute to OpenStack documentation: | |
| \begin{itemize} | |
| \item {} | |
| \href{https://docs.openstack.org/contributor-guide/}{Documentation Contributor Guide} | |
| \end{itemize} | |
| \subsection{ask.openstack.org} | |
| \label{\detokenize{common/app-support:ask-openstack-org}} | |
| During the set up or testing of OpenStack, you might have questions | |
| about how a specific task is completed or be in a situation where a | |
| feature does not work correctly. Use the | |
| \href{https://ask.openstack.org}{ask.openstack.org} site to ask questions | |
| and get answers. When you visit the \href{https://ask.openstack.org}{Ask OpenStack} site, scan | |
| the recently asked questions to see whether your question has already | |
| been answered. If not, ask a new question. Be sure to give a clear, | |
| concise summary in the title and provide as much detail as possible in | |
| the description. Paste in your command output or stack traces, links to | |
| screen shots, and any other information which might be useful. | |
| \subsection{OpenStack mailing lists} | |
| \label{\detokenize{common/app-support:openstack-mailing-lists}} | |
| A great way to get answers and insights is to post your question or | |
| problematic scenario to the OpenStack mailing list. You can learn from | |
| and help others who might have similar issues. To subscribe or view the | |
| archives, go to the \href{http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack}{general OpenStack mailing list}. If you are | |
| interested in the other mailing lists for specific projects or development, | |
| refer to \href{https://wiki.openstack.org/wiki/Mailing\_Lists}{Mailing Lists}. | |
| \subsection{The OpenStack wiki} | |
| \label{\detokenize{common/app-support:the-openstack-wiki}} | |
| The \href{https://wiki.openstack.org/}{OpenStack wiki} contains a broad | |
| range of topics but some of the information can be difficult to find or | |
| is a few pages deep. Fortunately, the wiki search feature enables you to | |
| search by title or content. If you search for specific information, such | |
| as about networking or OpenStack Compute, you can find a large amount | |
| of relevant material. More is being added all the time, so be sure to | |
| check back often. You can find the search box in the upper-right corner | |
| of any OpenStack wiki page. | |
| \subsection{The Launchpad Bugs area} | |
| \label{\detokenize{common/app-support:the-launchpad-bugs-area}} | |
| The OpenStack community values your set up and testing efforts and wants | |
| your feedback. To log a bug, you must sign up for a Launchpad account at | |
| \url{https://launchpad.net/+login}. You can view existing bugs and report bugs | |
| in the Launchpad Bugs area. Use the search feature to determine whether | |
| the bug has already been reported or already been fixed. If it still | |
| seems like your bug is unreported, fill out a bug report. | |
| Some tips: | |
| \begin{itemize} | |
| \item {} | |
| Give a clear, concise summary. | |
| \item {} | |
| Provide as much detail as possible in the description. Paste in your | |
| command output or stack traces, links to screen shots, and any other | |
| information which might be useful. | |
| \item {} | |
| Be sure to include the software and package versions that you are | |
| using, especially if you are using a development branch, for example, | |
| \sphinxcode{"Kilo release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208}. | |
| \item {} | |
| Any deployment-specific information is helpful, such as whether you | |
| are using Ubuntu 14.04 or are performing a multi-node installation. | |
| \end{itemize} | |
| The following Launchpad Bugs areas are available: | |
| \begin{itemize} | |
| \item {} | |
| \href{https://bugs.launchpad.net/cinder}{Bugs: OpenStack Block Storage | |
| (cinder)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/nova}{Bugs: OpenStack Compute (nova)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/horizon}{Bugs: OpenStack Dashboard | |
| (horizon)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/keystone}{Bugs: OpenStack Identity | |
| (keystone)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/glance}{Bugs: OpenStack Image service | |
| (glance)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/neutron}{Bugs: OpenStack Networking | |
| (neutron)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/swift}{Bugs: OpenStack Object Storage | |
| (swift)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/murano}{Bugs: Application catalog (murano)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/ironic}{Bugs: Bare metal service (ironic)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/senlin}{Bugs: Clustering service (senlin)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/magnum}{Bugs: Container Infrastructure Management service (magnum)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/sahara}{Bugs: Data processing service | |
| (sahara)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/trove}{Bugs: Database service (trove)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/fuel}{Bugs: Deployment service (fuel)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/designate}{Bugs: DNS service (designate)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/barbican}{Bugs: Key Manager Service (barbican)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/monasca}{Bugs: Monitoring (monasca)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/heat}{Bugs: Orchestration (heat)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/cloudkitty}{Bugs: Rating (cloudkitty)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/manila}{Bugs: Shared file systems (manila)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/ceilometer}{Bugs: Telemetry | |
| (ceilometer)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/gnocchi}{Bugs: Telemetry v3 | |
| (gnocchi)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/mistral}{Bugs: Workflow service | |
| (mistral)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/zaqar}{Bugs: Messaging service | |
| (zaqar)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/openstack-api-site}{Bugs: OpenStack API Documentation | |
| (developer.openstack.org)} | |
| \item {} | |
| \href{https://bugs.launchpad.net/openstack-manuals}{Bugs: OpenStack Documentation | |
| (docs.openstack.org)} | |
| \end{itemize} | |
| \subsection{The OpenStack IRC channel} | |
| \label{\detokenize{common/app-support:the-openstack-irc-channel}} | |
| The OpenStack community lives in the \#openstack IRC channel on the | |
| Freenode network. You can hang out, ask questions, or get immediate | |
| feedback for urgent and pressing issues. To install an IRC client or use | |
| a browser-based client, go to | |
| \href{https://webchat.freenode.net}{https://webchat.freenode.net/}. You can | |
| also use \href{http://colloquy.info/}{Colloquy} (Mac OS X), | |
| \href{http://www.mirc.com/}{mIRC} (Windows), | |
| or XChat (Linux). When you are in the IRC channel | |
| and want to share code or command output, the generally accepted method | |
| is to use a Paste Bin. The OpenStack project has one at | |
| \url{http://paste.openstack.org}. Just paste your longer amounts of text or | |
| logs in the web form and you get a URL that you can paste into the | |
| channel. The OpenStack IRC channel is \sphinxcode{\#openstack} on | |
| \sphinxcode{irc.freenode.net}. You can find a list of all OpenStack IRC channels | |
| at \url{https://wiki.openstack.org/wiki/IRC}. | |
| \subsection{Documentation feedback} | |
| \label{\detokenize{common/app-support:documentation-feedback}} | |
| To provide feedback on documentation, join and use the | |
| \href{mailto:[email protected]}{[email protected]} mailing list at \href{http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs}{OpenStack | |
| Documentation Mailing | |
| List}, | |
| or \href{https://bugs.launchpad.net/openstack-manuals/+filebug}{report a | |
| bug}. | |
| \subsection{OpenStack distribution packages} | |
| \label{\detokenize{common/app-support:openstack-distribution-packages}} | |
| The following Linux distributions provide community-supported packages | |
| for OpenStack: | |
| \begin{itemize} | |
| \item {} | |
| \sphinxstylestrong{Debian:} \url{https://wiki.debian.org/OpenStack} | |
| \item {} | |
| \sphinxstylestrong{CentOS, Fedora, and Red Hat Enterprise Linux:} | |
| \url{https://www.rdoproject.org/} | |
| \item {} | |
| \sphinxstylestrong{openSUSE and SUSE Linux Enterprise Server:} | |
| \url{https://en.opensuse.org/Portal:OpenStack} | |
| \item {} | |
| \sphinxstylestrong{Ubuntu:} \url{https://wiki.ubuntu.com/ServerTeam/CloudArchive} | |
| \end{itemize} | |
| \chapter{Glossary} | |
| \label{\detokenize{index:glossary}} | |
| \section{Glossary} | |
| \label{\detokenize{common/glossary:glossary}}\label{\detokenize{common/glossary::doc}} | |
| This glossary offers a list of terms and definitions to define a | |
| vocabulary for OpenStack-related concepts. | |
| To add to the OpenStack glossary, clone the \href{https://git.openstack.org/cgit/openstack/openstack-manuals}{openstack/openstack-manuals | |
| repository} and | |
| update the source file \sphinxcode{doc/common/glossary.rst} through the | |
| OpenStack contribution process. | |
| \subsection{0-9} | |
| \label{\detokenize{common/glossary:id1}}\begin{description} | |
| \item[{6to4\index{6to4|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-6to4}} | |
| A mechanism that allows IPv6 packets to be transmitted | |
| over an IPv4 network, providing a strategy for migrating to | |
| IPv6. | |
| \end{description} | |
| \subsection{A} | |
| \label{\detokenize{common/glossary:a}}\begin{description} | |
| \item[{absolute limit\index{absolute limit|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-absolute-limit}} | |
| Impassable limits for guest VMs. Settings include total RAM | |
| size, maximum number of vCPUs, and maximum disk size. | |
| \item[{access control list (ACL)\index{access control list (ACL)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-access-control-list-acl}} | |
| A list of permissions attached to an object. An ACL specifies | |
| which users or system processes have access to objects. It also | |
| defines which operations can be performed on specified objects. Each | |
| entry in a typical ACL specifies a subject and an operation. For | |
| instance, the ACL entry \sphinxcode{(Alice, delete)} for a file gives | |
| Alice permission to delete the file. | |
| \item[{access key\index{access key|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-access-key}} | |
| Alternative term for an Amazon EC2 access key. See EC2 access | |
| key. | |
| \item[{account\index{account|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-account}} | |
| The Object Storage context of an account. Do not confuse with a | |
| user account from an authentication service, such as Active Directory, | |
| /etc/passwd, OpenLDAP, OpenStack Identity, and so on. | |
| \item[{account auditor\index{account auditor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-account-auditor}} | |
| Checks for missing replicas and incorrect or corrupted objects | |
| in a specified Object Storage account by running queries against the | |
| back-end SQLite database. | |
| \item[{account database\index{account database|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-account-database}} | |
| A SQLite database that contains Object Storage accounts and | |
| related metadata and that the account server accesses. | |
| \item[{account reaper\index{account reaper|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-account-reaper}} | |
| An Object Storage worker that scans for and deletes account | |
| databases that the account server has marked for deletion. | |
| \item[{account server\index{account server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-account-server}} | |
| Lists containers in Object Storage and stores container | |
| information in the account database. | |
| \item[{account service\index{account service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-account-service}} | |
| An Object Storage component that provides account services such | |
| as list, create, modify, and audit. Do not confuse with OpenStack | |
| Identity service, OpenLDAP, or similar user-account services. | |
| \item[{accounting\index{accounting|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-accounting}} | |
| The Compute service provides accounting information through the | |
| event notification and system usage data facilities. | |
| \item[{Active Directory\index{Active Directory|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-active-directory}} | |
| Authentication and identity service by Microsoft, based on LDAP. | |
| Supported in OpenStack. | |
| \item[{active/active configuration\index{active/active configuration|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-active-active-configuration}} | |
| In a high-availability setup with an active/active | |
| configuration, several systems share the load together and if one | |
| fails, the load is distributed to the remaining systems. | |
| \item[{active/passive configuration\index{active/passive configuration|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-active-passive-configuration}} | |
| In a high-availability setup with an active/passive | |
| configuration, systems are set up to bring additional resources online | |
| to replace those that have failed. | |
| \item[{address pool\index{address pool|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-address-pool}} | |
| A group of fixed and/or floating IP addresses that are assigned | |
| to a project and can be used by or assigned to the VM instances in a | |
| project. | |
| \item[{Address Resolution Protocol (ARP)\index{Address Resolution Protocol (ARP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-address-resolution-protocol-arp}} | |
| The protocol by which layer-3 IP addresses are resolved into | |
| layer-2 link local addresses. | |
| \item[{admin API\index{admin API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-admin-api}} | |
| A subset of API calls that are accessible to authorized | |
| administrators and are generally not accessible to end users or the | |
| public Internet. They can exist as a separate service (keystone) or | |
| can be a subset of another API (nova). | |
| \item[{admin server\index{admin server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-admin-server}} | |
| In the context of the Identity service, the worker process that | |
| provides access to the admin API. | |
| \item[{administrator\index{administrator|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-administrator}} | |
| The person responsible for installing, configuring, | |
| and managing an OpenStack cloud. | |
| \item[{Advanced Message Queuing Protocol (AMQP)\index{Advanced Message Queuing Protocol (AMQP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-advanced-message-queuing-protocol-amqp}} | |
| The open standard messaging protocol used by OpenStack | |
| components for intra-service communications, provided by RabbitMQ, | |
| Qpid, or ZeroMQ. | |
| \item[{Advanced RISC Machine (ARM)\index{Advanced RISC Machine (ARM)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-advanced-risc-machine-arm}} | |
| Lower power consumption CPU often found in mobile and embedded | |
| devices. Supported by OpenStack. | |
| \item[{alert\index{alert|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-alert}} | |
| The Compute service can send alerts through its notification | |
| system, which includes a facility to create custom notification | |
| drivers. Alerts can be sent to and displayed on the dashboard. | |
| \item[{allocate\index{allocate|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-allocate}} | |
| The process of taking a floating IP address from the address | |
| pool so it can be associated with a fixed IP on a guest VM | |
| instance. A minimal allocation sketch appears after this list. | |
| \item[{Amazon Kernel Image (AKI)\index{Amazon Kernel Image (AKI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-amazon-kernel-image-aki}} | |
| Both a VM container format and disk format. Supported by Image | |
| service. | |
| \item[{Amazon Machine Image (AMI)\index{Amazon Machine Image (AMI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-amazon-machine-image-ami}} | |
| Both a VM container format and disk format. Supported by Image | |
| service. | |
| \item[{Amazon Ramdisk Image (ARI)\index{Amazon Ramdisk Image (ARI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-amazon-ramdisk-image-ari}} | |
| Both a VM container format and disk format. Supported by Image | |
| service. | |
| \item[{Anvil\index{Anvil|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-anvil}} | |
| A project that ports the shell script-based project named | |
| DevStack to Python. | |
| \item[{aodh\index{aodh|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-aodh}} | |
| Part of the OpenStack {\hyperref[\detokenize{common/glossary:term-telemetry-service-telemetry}]{\sphinxtermref{\DUrole{xref,std,std-term}{Telemetry service}}}}; provides alarming functionality. | |
| \item[{Apache\index{Apache|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-apache}} | |
| The Apache Software Foundation supports the Apache community of | |
| open-source software projects. These projects provide software | |
| products for the public good. | |
| \item[{Apache License 2.0\index{Apache License 2.0|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-apache-license-2-0}} | |
| All OpenStack core projects are provided under the terms of the | |
| Apache License 2.0. | |
| \item[{Apache Web Server\index{Apache Web Server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-apache-web-server}} | |
| The most common web server software currently used on the | |
| Internet. | |
| \item[{API endpoint\index{API endpoint|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-api-endpoint}} | |
| The daemon, worker, or service that a client communicates with | |
| to access an API. API endpoints can provide any number of services, | |
| such as authentication, sales data, performance meters, Compute VM | |
| commands, census data, and so on. | |
| \item[{API extension\index{API extension|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-api-extension}} | |
| Custom modules that extend some OpenStack core APIs. | |
| \item[{API extension plug-in\index{API extension plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-api-extension-plug-in}} | |
| Alternative term for a Networking plug-in or Networking API | |
| extension. | |
| \item[{API key\index{API key|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-api-key}} | |
| Alternative term for an API token. | |
| \item[{API server\index{API server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-api-server}} | |
| Any node running a daemon or worker that provides an API | |
| endpoint. | |
| \item[{API token\index{API token|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-api-token}} | |
| Passed to API requests and used by OpenStack to verify that the | |
| client is authorized to run the requested operation. | |
| \item[{API version\index{API version|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-api-version}} | |
| In OpenStack, the API version for a project is part of the URL. | |
| For example, \sphinxcode{example.com/nova/v1/foobar}. | |
| \item[{applet\index{applet|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-applet}} | |
| A Java program that can be embedded into a web page. | |
| \item[{Application Catalog service (murano)\index{Application Catalog service (murano)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-application-catalog-service-murano}} | |
| The project that provides an application catalog service so that users | |
| can compose and deploy composite environments on an application | |
| abstraction level while managing the application lifecycle. | |
| \item[{Application Programming Interface (API)\index{Application Programming Interface (API)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-application-programming-interface-api}} | |
| A collection of specifications used to access a service, | |
| application, or program. Includes service calls, required parameters | |
| for each call, and the expected return values. | |
| \item[{application server\index{application server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-application-server}} | |
| A piece of software that makes another piece of software | |
| available over a network. | |
| \item[{Application Service Provider (ASP)\index{Application Service Provider (ASP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-application-service-provider-asp}} | |
| Companies that rent specialized applications that help | |
| businesses and organizations provide additional services | |
| at lower cost. | |
| \item[{arptables\index{arptables|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-arptables}} | |
| Tool used for maintaining Address Resolution Protocol packet | |
| filter rules in the Linux kernel firewall modules. Used along with | |
| iptables, ebtables, and ip6tables in Compute to provide firewall | |
| services for VMs. | |
| \item[{associate\index{associate|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-associate}} | |
| The process of associating a Compute floating IP address with a | |
| fixed IP address. | |
| \item[{Asynchronous JavaScript and XML (AJAX)\index{Asynchronous JavaScript and XML (AJAX)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-asynchronous-javascript-and-xml-ajax}} | |
| A group of interrelated web development techniques used on the | |
| client-side to create asynchronous web applications. Used extensively | |
| in horizon. | |
| \item[{ATA over Ethernet (AoE)\index{ATA over Ethernet (AoE)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ata-over-ethernet-aoe}} | |
| A disk storage protocol tunneled within Ethernet. | |
| \item[{attach\index{attach|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-attach}} | |
| The process of connecting a VIF or vNIC to an L2 network in | |
| Networking. In the context of Compute, this process connects a storage | |
| volume to an instance. | |
| \item[{attachment (network)\index{attachment (network)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-attachment-network}} | |
| Association of an interface ID to a logical port. Plugs an | |
| interface into a port. | |
| \item[{auditing\index{auditing|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-auditing}} | |
| Provided in Compute through the system usage data | |
| facility. | |
| \item[{auditor\index{auditor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-auditor}} | |
| A worker process that verifies the integrity of Object Storage | |
| objects, containers, and accounts. Auditors is the collective term for | |
| the Object Storage account auditor, container auditor, and object | |
| auditor. | |
| \item[{Austin\index{Austin|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-austin}} | |
| The code name for the initial release of | |
| OpenStack. The first design summit took place in | |
| Austin, Texas, US. | |
| \item[{auth node\index{auth node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-auth-node}} | |
| Alternative term for an Object Storage authorization | |
| node. | |
| \item[{authentication\index{authentication|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-authentication}} | |
| The process that confirms that the user, process, or client is | |
| really who they say they are through a private key, secret token, | |
| password, fingerprint, or similar method. | |
| \item[{authentication token\index{authentication token|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-authentication-token}} | |
| A string of text provided to the client after authentication. | |
| Must be provided by the user or process in subsequent requests to the | |
| API endpoint. | |
| \item[{AuthN\index{AuthN|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-authn}} | |
| The Identity service component that provides authentication | |
| services. | |
| \item[{authorization\index{authorization|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-authorization}} | |
| The act of verifying that a user, process, or client is | |
| authorized to perform an action. | |
| \item[{authorization node\index{authorization node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-authorization-node}} | |
| An Object Storage node that provides authorization | |
| services. | |
| \item[{AuthZ\index{AuthZ|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-authz}} | |
| The Identity component that provides high-level | |
| authorization services. | |
| \item[{Auto ACK\index{Auto ACK|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-auto-ack}} | |
| Configuration setting within RabbitMQ that enables or disables | |
| message acknowledgment. Enabled by default. | |
| \item[{auto declare\index{auto declare|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-auto-declare}} | |
| A Compute RabbitMQ setting that determines whether a message | |
| exchange is automatically created when the program starts. | |
| \item[{availability zone\index{availability zone|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-availability-zone}} | |
| An Amazon EC2 concept of an isolated area that is used for fault | |
| tolerance. Do not confuse with an OpenStack Compute zone or | |
| cell. | |
| \item[{AWS CloudFormation template\index{AWS CloudFormation template|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-aws-cloudformation-template}} | |
| AWS CloudFormation allows Amazon Web Services (AWS) users to create and manage a | |
| collection of related resources. The Orchestration service | |
| supports a CloudFormation-compatible format (CFN). | |
| \end{description} | |
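| As a minimal, non-authoritative sketch of the allocate and associate | |
| operations described above, the following Python snippet uses the | |
| openstacksdk library; the cloud name \sphinxcode{mycloud} and server | |
| name \sphinxcode{my-instance} are assumed example values, not names | |
| defined by this guide. | |
| \begin{verbatim} | |
| import openstack | |
|  | |
| # Connect using credentials from a clouds.yaml entry ('mycloud' is | |
| # an assumed example name). | |
| conn = openstack.connect(cloud='mycloud') | |
|  | |
| # Look up an existing instance by name (example value). | |
| server = conn.get_server('my-instance') | |
|  | |
| # Allocate a floating IP address from the address pool and associate | |
| # it with the instance's fixed IP address. | |
| conn.add_auto_ip(server, wait=True) | |
| \end{verbatim} | |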
| \subsection{B} | |
| \label{\detokenize{common/glossary:b}}\begin{description} | |
| \item[{back end\index{back end|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-back-end}} | |
| Interactions and processes that are hidden from the user, | |
| such as Compute volume mount, data transmission to an iSCSI target by | |
| a daemon, or Object Storage object integrity checks. | |
| \item[{back-end catalog\index{back-end catalog|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-back-end-catalog}} | |
| The storage method used by the Identity service catalog to | |
| store and retrieve information about API endpoints that are | |
| available to the client. Examples include an SQL database, LDAP | |
| database, or KVS back end. | |
| \item[{back-end store\index{back-end store|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-back-end-store}} | |
| The persistent data store used to save and retrieve information | |
| for a service, such as lists of Object Storage objects, current state | |
| of guest VMs, lists of user names, and so on. Also, the method that the | |
| Image service uses to get and store VM images. Options include Object | |
| Storage, locally mounted file system, RADOS block devices, VMware | |
| datastore, and HTTP. | |
| \item[{Backup, Restore, and Disaster Recovery service (freezer)\index{Backup, Restore, and Disaster Recovery service (freezer)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-backup-restore-and-disaster-recovery-service-freezer}} | |
| The project that provides integrated tooling for backing up, restoring, | |
| and recovering file systems, instances, or database backups. | |
| \item[{bandwidth\index{bandwidth|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bandwidth}} | |
| The amount of data that a communication resource, such as an | |
| Internet connection, can transfer in a given time; determines | |
| how quickly data can be downloaded or uploaded. | |
| \item[{barbican\index{barbican|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-barbican}} | |
| Code name of the {\hyperref[\detokenize{common/glossary:term-key-manager-service-barbican}]{\sphinxtermref{\DUrole{xref,std,std-term}{Key Manager service}}}}. | |
| \item[{bare\index{bare|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bare}} | |
| An Image service container format that indicates that no | |
| container exists for the VM image. | |
| \item[{Bare Metal service (ironic)\index{Bare Metal service (ironic)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bare-metal-service-ironic}} | |
| The OpenStack service that provides a service and associated libraries | |
| capable of managing and provisioning physical machines in a | |
| security-aware and fault-tolerant manner. | |
| \item[{base image\index{base image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-base-image}} | |
| An OpenStack-provided image. | |
| \item[{Bell-LaPadula model\index{Bell-LaPadula model|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bell-lapadula-model}} | |
| A security model that focuses on data confidentiality | |
| and controlled access to classified information. | |
| This model divides the entities into subjects and objects. | |
| The clearance of a subject is compared to the classification of the | |
| object to determine if the subject is authorized for the specific access mode. | |
| The clearance or classification scheme is expressed in terms of a lattice. | |
| \item[{Benchmark service (rally)\index{Benchmark service (rally)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-benchmark-service-rally}} | |
| OpenStack project that provides a framework for | |
| performance analysis and benchmarking of individual | |
| OpenStack components as well as full production OpenStack | |
| cloud deployments. | |
| \item[{Bexar\index{Bexar|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bexar}} | |
| A grouped release of projects related to | |
| OpenStack that came out in February of 2011. It | |
| included only Compute (nova) and Object Storage (swift). | |
| Bexar is the code name for the second release of | |
| OpenStack. The design summit took place in | |
| San Antonio, Texas, US, which is the county seat for Bexar county. | |
| \item[{binary\index{binary|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-binary}} | |
| Information that consists solely of ones and zeroes, which is | |
| the language of computers. | |
| \item[{bit\index{bit|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bit}} | |
| A single digit in base 2 (either a zero or a one). Bandwidth | |
| usage is measured in bits per second. | |
| \item[{bits per second (BPS)\index{bits per second (BPS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bits-per-second-bps}} | |
| The universal measurement of how quickly data is transferred | |
| from place to place. | |
| \item[{block device\index{block device|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-block-device}} | |
| A device that moves data in the form of blocks. These device | |
| nodes provide an interface to devices such as hard disks, CD-ROM | |
| drives, flash drives, and other addressable regions of memory. | |
| \item[{block migration\index{block migration|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-block-migration}} | |
| A method of VM live migration used by KVM to evacuate instances | |
| from one host to another with very little downtime during a | |
| user-initiated switchover. Does not require shared storage. Supported | |
| by Compute. | |
| \item[{Block Storage API\index{Block Storage API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-block-storage-api}} | |
| An API on a separate endpoint for attaching, | |
| detaching, and creating block storage for compute | |
| VMs. A usage sketch appears after this list. | |
| \item[{Block Storage service (cinder)\index{Block Storage service (cinder)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-block-storage-service-cinder}} | |
| The OpenStack service that implements services and libraries to provide | |
| on-demand, self-service access to Block Storage resources via abstraction | |
| and automation on top of other block storage devices. | |
| \item[{BMC (Baseboard Management Controller)\index{BMC (Baseboard Management Controller)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bmc-baseboard-management-controller}} | |
| The intelligence in the IPMI architecture, which is a specialized | |
| micro-controller that is embedded on the motherboard of a computer | |
| and acts as a server. Manages the interface between system management | |
| software and platform hardware. | |
| \item[{bootable disk image\index{bootable disk image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bootable-disk-image}} | |
| A type of VM image that exists as a single, bootable | |
| file. | |
| \item[{Bootstrap Protocol (BOOTP)\index{Bootstrap Protocol (BOOTP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bootstrap-protocol-bootp}} | |
| A network protocol used by a network client to obtain an IP | |
| address from a configuration server. Provided in Compute through the | |
| dnsmasq daemon when using either the FlatDHCP manager or VLAN manager | |
| network manager. | |
| \item[{Border Gateway Protocol (BGP)\index{Border Gateway Protocol (BGP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-border-gateway-protocol-bgp}} | |
| The Border Gateway Protocol is a dynamic routing protocol | |
| that connects autonomous systems. Considered the | |
| backbone of the Internet, this protocol connects disparate | |
| networks to form a larger network. | |
| \item[{browser\index{browser|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-browser}} | |
| Any client software that enables a computer or device to access | |
| the Internet. | |
| \item[{builder file\index{builder file|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-builder-file}} | |
| Contains configuration information that Object Storage uses to | |
| reconfigure a ring or to re-create it from scratch after a serious | |
| failure. | |
| \item[{bursting\index{bursting|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-bursting}} | |
| The practice of utilizing a secondary environment to | |
| elastically build instances on-demand when the primary | |
| environment is resource constrained. | |
| \item[{button class\index{button class|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-button-class}} | |
| A group of related button types within horizon. Buttons to | |
| start, stop, and suspend VMs are in one class. Buttons to associate | |
| and disassociate floating IP addresses are in another class, and so | |
| on. | |
| \item[{byte\index{byte|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-byte}} | |
| Set of bits that make up a single character; there are usually 8 | |
| bits to a byte. | |
| \end{description} | |
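| As a hedged illustration of the Block Storage API entry above, the | |
| following sketch creates and attaches a volume with the openstacksdk | |
| library; \sphinxcode{mycloud}, \sphinxcode{demo-volume}, and | |
| \sphinxcode{my-instance} are assumed example names. | |
| \begin{verbatim} | |
| import openstack | |
|  | |
| # Connect using a clouds.yaml entry (assumed example name). | |
| conn = openstack.connect(cloud='mycloud') | |
|  | |
| # Create a 1 GB volume through the Block Storage API, then attach | |
| # it to a running instance (all names are example values). | |
| volume = conn.create_volume(size=1, name='demo-volume') | |
| server = conn.get_server('my-instance') | |
| conn.attach_volume(server, volume) | |
| \end{verbatim} | |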
| \subsection{C} | |
| \label{\detokenize{common/glossary:c}}\begin{description} | |
| \item[{cache pruner\index{cache pruner|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cache-pruner}} | |
| A program that keeps the Image service VM image cache at or | |
| below its configured maximum size. | |
| \item[{Cactus\index{Cactus|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cactus}} | |
| An OpenStack grouped release of projects that came out in the | |
| spring of 2011. It included Compute (nova), Object Storage (swift), | |
| and the Image service (glance). | |
| Cactus is a city in Texas, US and is the code name for | |
| the third release of OpenStack. When OpenStack releases went | |
| from three to six months long, the code name of the release | |
| changed to match a geography nearest the previous | |
| summit. | |
| \item[{CALL\index{CALL|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-call}} | |
| One of the RPC primitives used by the OpenStack message queue | |
| software. Sends a message and waits for a response. | |
| \item[{capability\index{capability|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-capability}} | |
| Defines resources for a cell, including CPU, storage, and | |
| networking. Can apply to the specific services within a cell or a | |
| whole cell. | |
| \item[{capacity cache\index{capacity cache|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-capacity-cache}} | |
| A Compute back-end database table that contains the current | |
| workload, amount of free RAM, and number of VMs running on each host. | |
| Used to determine on which host a VM starts. | |
| \item[{capacity updater\index{capacity updater|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-capacity-updater}} | |
| A notification driver that monitors VM instances and updates the | |
| capacity cache as needed. | |
| \item[{CAST\index{CAST|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cast}} | |
| One of the RPC primitives used by the OpenStack message queue | |
| software. Sends a message and does not wait for a response. A | |
| sketch illustrating both CALL and CAST appears after this list. | |
| \item[{catalog\index{catalog|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-catalog}} | |
| A list of API endpoints that are available to a user after | |
| authentication with the Identity service. | |
| \item[{catalog service\index{catalog service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-catalog-service}} | |
| An Identity service that lists API endpoints that are available | |
| to a user after authentication with the Identity service. | |
| \item[{ceilometer\index{ceilometer|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ceilometer}} | |
| Part of the OpenStack {\hyperref[\detokenize{common/glossary:term-telemetry-service-telemetry}]{\sphinxtermref{\DUrole{xref,std,std-term}{Telemetry service}}}}; gathers and stores metrics from other | |
| OpenStack services. | |
| \item[{cell\index{cell|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cell}} | |
| Provides logical partitioning of Compute resources in a child | |
| and parent relationship. Requests are passed from parent cells to | |
| child cells if the parent cannot provide the requested | |
| resource. | |
| \item[{cell forwarding\index{cell forwarding|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cell-forwarding}} | |
| A Compute option that enables parent cells to pass resource | |
| requests to child cells if the parent cannot provide the requested | |
| resource. | |
| \item[{cell manager\index{cell manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cell-manager}} | |
| The Compute component that contains a list of the current | |
| capabilities of each host within the cell and routes requests as | |
| appropriate. | |
| \item[{CentOS\index{CentOS|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-centos}} | |
| A Linux distribution that is compatible with OpenStack. | |
| \item[{Ceph\index{Ceph|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ceph}} | |
| Massively scalable distributed storage system that consists of | |
| an object store, block store, and POSIX-compatible distributed file | |
| system. Compatible with OpenStack. | |
| \item[{CephFS\index{CephFS|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cephfs}} | |
| The POSIX-compliant file system provided by Ceph. | |
| \item[{certificate authority (CA)\index{certificate authority (CA)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-certificate-authority-ca}} | |
| In cryptography, an entity that issues digital certificates. The digital | |
| certificate certifies the ownership of a public key by the named | |
| subject of the certificate. This enables others (relying parties) to | |
| rely upon signatures or assertions made by the private key that | |
| corresponds to the certified public key. In this model of trust | |
| relationships, a CA is a trusted third party for both the subject | |
| (owner) of the certificate and the party relying upon the certificate. | |
| CAs are characteristic of many public key infrastructure (PKI) | |
| schemes. | |
| In OpenStack, a simple certificate authority is provided by Compute for | |
| cloudpipe VPNs and VM image decryption. | |
| \item[{Challenge-Handshake Authentication Protocol (CHAP)\index{Challenge-Handshake Authentication Protocol (CHAP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-challenge-handshake-authentication-protocol-chap}} | |
| An iSCSI authentication method supported by Compute. | |
| \item[{chance scheduler\index{chance scheduler|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-chance-scheduler}} | |
| A scheduling method used by Compute that randomly chooses an | |
| available host from the pool. | |
| \item[{changes since\index{changes since|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-changes-since}} | |
| A Compute API parameter that downloads changes to the requested | |
| item since your last request, instead of downloading a new, fresh set | |
| of data and comparing it against the old data. | |
| \item[{Chef\index{Chef|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-chef}} | |
| An operating system configuration management tool supporting | |
| OpenStack deployments. | |
| \item[{child cell\index{child cell|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-child-cell}} | |
| If a requested resource such as CPU time, disk storage, or | |
| memory is not available in the parent cell, the request is forwarded | |
| to its associated child cells. If the child cell can fulfill the | |
| request, it does. Otherwise, it attempts to pass the request to any of | |
| its children. | |
| \item[{cinder\index{cinder|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cinder}} | |
| Codename for {\hyperref[\detokenize{common/glossary:term-block-storage-service-cinder}]{\sphinxtermref{\DUrole{xref,std,std-term}{Block Storage service}}}}. | |
| \item[{CirrOS\index{CirrOS|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cirros}} | |
| A minimal Linux distribution designed for use as a test | |
| image on clouds such as OpenStack. | |
| \item[{Cisco neutron plug-in\index{Cisco neutron plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cisco-neutron-plug-in}} | |
| A Networking plug-in for Cisco devices and technologies, | |
| including UCS and Nexus. | |
| \item[{cloud architect\index{cloud architect|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-architect}} | |
| A person who plans, designs, and oversees the creation of | |
| clouds. | |
| \item[{Cloud Auditing Data Federation (CADF)\index{Cloud Auditing Data Federation (CADF)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-auditing-data-federation-cadf}} | |
| Cloud Auditing Data Federation (CADF) is a | |
| specification for audit event data. CADF is | |
| supported by OpenStack Identity. | |
| \item[{cloud computing\index{cloud computing|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-computing}} | |
| A model that enables access to a shared pool of configurable | |
| computing resources, such as networks, servers, storage, applications, | |
| and services, that can be rapidly provisioned and released with | |
| minimal management effort or service provider interaction. | |
| \item[{cloud controller\index{cloud controller|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-controller}} | |
| Collection of Compute components that represent the global state | |
| of the cloud; talks to services, such as Identity authentication, | |
| Object Storage, and node/storage workers through a | |
| queue. | |
| \item[{cloud controller node\index{cloud controller node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-controller-node}} | |
| A node that runs network, volume, API, scheduler, and image | |
| services. Each service may be broken out into separate nodes for | |
| scalability or availability. | |
| \item[{Cloud Data Management Interface (CDMI)\index{Cloud Data Management Interface (CDMI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-data-management-interface-cdmi}} | |
| SNIA standard that defines a RESTful API for managing objects in | |
| the cloud, currently unsupported in OpenStack. | |
| \item[{Cloud Infrastructure Management Interface (CIMI)\index{Cloud Infrastructure Management Interface (CIMI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-infrastructure-management-interface-cimi}} | |
| An in-progress specification for cloud management. Currently | |
| unsupported in OpenStack. | |
| \item[{cloud-init\index{cloud-init|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloud-init}} | |
| A package commonly installed in VM images that performs | |
| initialization of an instance after boot using information that it | |
| retrieves from the metadata service, such as the SSH public key and | |
| user data. | |
| \item[{cloudadmin\index{cloudadmin|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloudadmin}} | |
| One of the default roles in the Compute RBAC system. Grants | |
| complete system access. | |
| \item[{Cloudbase-Init\index{Cloudbase-Init|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloudbase-init}} | |
| A Windows project providing guest initialization features, | |
| similar to cloud-init. | |
| \item[{cloudpipe\index{cloudpipe|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloudpipe}} | |
| A compute service that creates VPNs on a per-project | |
| basis. | |
| \item[{cloudpipe image\index{cloudpipe image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cloudpipe-image}} | |
| A pre-made VM image that serves as a cloudpipe server. | |
| Essentially, OpenVPN running on Linux. | |
| \item[{Clustering service (senlin)\index{Clustering service (senlin)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-clustering-service-senlin}} | |
| The project that implements clustering services and libraries | |
| for the management of groups of homogeneous objects exposed | |
| by other OpenStack services. | |
| \item[{command filter\index{command filter|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-command-filter}} | |
| Lists allowed commands within the Compute rootwrap | |
| facility. | |
| \item[{Common Internet File System (CIFS)\index{Common Internet File System (CIFS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-common-internet-file-system-cifs}} | |
| A file sharing protocol. It is a public or open variation of the | |
| original Server Message Block (SMB) protocol developed and used by | |
| Microsoft. Like the SMB protocol, CIFS runs at a higher level and uses | |
| the TCP/IP protocol. | |
| \item[{Common Libraries (oslo)\index{Common Libraries (oslo)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-common-libraries-oslo}} | |
| The project that produces a set of Python libraries containing code | |
| shared by OpenStack projects. The APIs provided by these libraries | |
| should be high quality, stable, consistent, documented, and generally | |
| applicable. | |
| \item[{community project\index{community project|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-community-project}} | |
| A project that is not officially endorsed by the OpenStack | |
| Foundation. If the project is successful enough, it might be elevated | |
| to an incubated project and then to a core project, or it might be | |
| merged with the main code trunk. | |
| \item[{compression\index{compression|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-compression}} | |
| Reducing the size of files by special encoding; the file can be | |
| decompressed again to its original content. OpenStack supports | |
| compression at the Linux file system level but does not support | |
| compression for things such as Object Storage objects or Image service | |
| VM images. | |
| \item[{Compute API (Nova API)\index{Compute API (Nova API)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-compute-api-nova-api}} | |
| The nova-api daemon provides access to nova services. Can communicate with | |
| other APIs, such as the Amazon EC2 API. | |
| \item[{compute controller\index{compute controller|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-compute-controller}} | |
| The Compute component that chooses suitable hosts on which to | |
| start VM instances. | |
| \item[{compute host\index{compute host|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-compute-host}} | |
| Physical host dedicated to running compute nodes. | |
| \item[{compute node\index{compute node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-compute-node}} | |
| A node that runs the nova-compute daemon, which manages VM | |
| instances that provide a wide range of services, such as web | |
| applications and analytics. | |
| \item[{Compute service (nova)\index{Compute service (nova)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-compute-service-nova}} | |
| The OpenStack core project that implements services and associated | |
| libraries to provide massively-scalable, on-demand, self-service | |
| access to compute resources, including bare metal, virtual machines, | |
| and containers. | |
| \item[{compute worker\index{compute worker|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-compute-worker}} | |
| The Compute component that runs on each compute node and manages | |
| the VM instance lifecycle, including run, reboot, terminate, | |
| attach/detach volumes, and so on. Provided by the nova-compute daemon. | |
| \item[{concatenated object\index{concatenated object|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-concatenated-object}} | |
| A set of segment objects that Object Storage combines and sends | |
| to the client. | |
| \item[{conductor\index{conductor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-conductor}} | |
| In Compute, conductor is the process that proxies database | |
| requests from the compute process. Using conductor improves security | |
| because compute nodes do not need direct access to the | |
| database. | |
| \item[{congress\index{congress|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-congress}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-governance-service-congress}]{\sphinxtermref{\DUrole{xref,std,std-term}{Governance service}}}}. | |
| \item[{consistency window\index{consistency window|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-consistency-window}} | |
| The amount of time it takes for a new Object Storage object to | |
| become accessible to all clients. | |
| \item[{console log\index{console log|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-console-log}} | |
| Contains the output from a Linux VM console in Compute. | |
| \item[{container\index{container|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-container}} | |
| Organizes and stores objects in Object Storage. Similar to the | |
| concept of a Linux directory but cannot be nested. Alternative term | |
| for an Image service container format. | |
| \item[{container auditor\index{container auditor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-container-auditor}} | |
| Checks for missing replicas or incorrect objects in specified | |
| Object Storage containers through queries to the SQLite back-end | |
| database. | |
| \item[{container database\index{container database|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-container-database}} | |
| A SQLite database that stores Object Storage containers and | |
| container metadata. The container server accesses this | |
| database. | |
| \item[{container format\index{container format|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-container-format}} | |
| A wrapper used by the Image service that contains a VM image and | |
| its associated metadata, such as machine state, OS disk size, and so | |
| on. | |
| \item[{Container Infrastructure Management service (magnum)\index{Container Infrastructure Management service (magnum)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-container-infrastructure-management-service-magnum}} | |
| The project which provides a set of services for provisioning, scaling, | |
| and managing container orchestration engines. | |
| \item[{container server\index{container server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-container-server}} | |
| An Object Storage server that manages containers. | |
| \item[{container service\index{container service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-container-service}} | |
| The Object Storage component that provides container services, | |
| such as create, delete, list, and so on. | |
| \item[{content delivery network (CDN)\index{content delivery network (CDN)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-content-delivery-network-cdn}} | |
| A content delivery network is a specialized network that is | |
| used to distribute content to clients, typically located | |
| close to the client for increased performance. | |
| \item[{controller node\index{controller node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-controller-node}} | |
| Alternative term for a cloud controller node. | |
| \item[{core API\index{core API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-core-api}} | |
| Depending on context, the core API is either the OpenStack API | |
| or the main API of a specific core project, such as Compute, | |
| Networking, Image service, and so on. | |
| \item[{core service\index{core service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-core-service}} | |
| An official OpenStack service defined as core by the | |
| DefCore Committee. Currently, it consists of the | |
| Block Storage service (cinder), Compute service (nova), | |
| Identity service (keystone), Image service (glance), | |
| Networking service (neutron), and Object Storage service (swift). | |
| \item[{cost\index{cost|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cost}} | |
| Under the Compute distributed scheduler, this is calculated by | |
| looking at the capabilities of each host relative to the flavor of the | |
| VM instance being requested. | |
| \item[{credentials\index{credentials|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-credentials}} | |
| Data that is only known to or accessible by a user and | |
| used to verify that the user is who they say they are. | |
| Credentials are presented to the server during | |
| authentication. Examples include a password, secret key, | |
| digital certificate, and fingerprint. | |
| \item[{Cross-Origin Resource Sharing (CORS)\index{Cross-Origin Resource Sharing (CORS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-cross-origin-resource-sharing-cors}} | |
| A mechanism that allows many resources (for example, | |
| fonts, JavaScript) on a web page to be requested from | |
| another domain outside the domain from which the resource | |
| originated. In particular, JavaScript's AJAX calls can use | |
| the XMLHttpRequest mechanism. | |
| \item[{Crowbar\index{Crowbar|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-crowbar}} | |
| An open source community project by Dell that aims to provide | |
| all necessary services to quickly deploy clouds. | |
| \item[{current workload\index{current workload|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-current-workload}} | |
| An element of the Compute capacity cache that is calculated | |
| based on the number of build, snapshot, migrate, and resize operations | |
| currently in progress on a given host. | |
| \item[{customer\index{customer|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-customer}} | |
| Alternative term for project. | |
| \item[{customization module\index{customization module|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-customization-module}} | |
| A user-created Python module that is loaded by horizon to change | |
| the look and feel of the dashboard. | |
| \end{description} | |
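| To illustrate the CALL and CAST primitives defined above, here is a | |
| minimal client-side sketch using the oslo.messaging library; the | |
| topic \sphinxcode{demo}, the method names, and their arguments are | |
| assumed example values. | |
| \begin{verbatim} | |
| import oslo_messaging | |
| from oslo_config import cfg | |
|  | |
| # Build a transport from configuration (for example, a rabbit:// | |
| # URL) and an RPC client for an example topic. | |
| transport = oslo_messaging.get_transport(cfg.CONF) | |
| target = oslo_messaging.Target(topic='demo') | |
| client = oslo_messaging.RPCClient(transport, target) | |
|  | |
| ctxt = {} | |
| # CALL: sends a message and waits for a response from the server. | |
| result = client.call(ctxt, 'add', x=1, y=2) | |
| # CAST: sends a message and returns without waiting for a response. | |
| client.cast(ctxt, 'notify', message='hello') | |
| \end{verbatim} | |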
| \subsection{D} | |
| \label{\detokenize{common/glossary:d}}\begin{description} | |
| \item[{daemon\index{daemon|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-daemon}} | |
| A process that runs in the background and waits for requests. | |
| May or may not listen on a TCP or UDP port. Do not confuse with a | |
| worker. | |
| \item[{Dashboard (horizon)\index{Dashboard (horizon)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dashboard-horizon}} | |
| OpenStack project which provides an extensible, unified, web-based | |
| user interface for all OpenStack services. | |
| \item[{data encryption\index{data encryption|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-data-encryption}} | |
| Both Image service and Compute support encrypted virtual machine | |
| (VM) images (but not instances). In-transit data encryption is | |
| supported in OpenStack using technologies such as HTTPS, SSL, TLS, and | |
| SSH. Object Storage does not support object encryption at the | |
| application level but may support storage that uses disk encryption. | |
| \item[{Data loss prevention (DLP) software\index{Data loss prevention (DLP) software|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-data-loss-prevention-dlp-software}} | |
| Software programs used to protect sensitive information | |
| and prevent it from leaking outside a network boundary | |
| by detecting and blocking unauthorized data transfers. | |
| \item[{Data Processing service (sahara)\index{Data Processing service (sahara)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-data-processing-service-sahara}} | |
| OpenStack project that provides a scalable | |
| data-processing stack and associated management | |
| interfaces. | |
| \item[{data store\index{data store|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-data-store}} | |
| A database engine supported by the Database service. | |
| \item[{database ID\index{database ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-database-id}} | |
| A unique ID given to each replica of an Object Storage | |
| database. | |
| \item[{database replicator\index{database replicator|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-database-replicator}} | |
| An Object Storage component that copies changes in the account, | |
| container, and object databases to other nodes. | |
| \item[{Database service (trove)\index{Database service (trove)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-database-service-trove}} | |
| An integrated project that provides scalable and reliable | |
| Cloud Database-as-a-Service functionality for both | |
| relational and non-relational database engines. | |
| \item[{deallocate\index{deallocate|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-deallocate}} | |
| The process of removing the association between a floating IP | |
| address and a fixed IP address. Once this association is removed, the | |
| floating IP returns to the address pool. | |
| \item[{Debian\index{Debian|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-debian}} | |
| A Linux distribution that is compatible with OpenStack. | |
| \item[{deduplication\index{deduplication|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-deduplication}} | |
| The process of finding duplicate data at the disk block, file, | |
| and/or object level to minimize storage use—currently unsupported | |
| within OpenStack. | |
| \item[{default panel\index{default panel|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-default-panel}} | |
| The panel that is displayed by default when a user accesses the | |
| dashboard. | |
| \item[{default project\index{default project|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-default-project}} | |
| New users are assigned to this project if no project is specified | |
| when a user is created. | |
| \item[{default token\index{default token|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-default-token}} | |
| An Identity service token that is not associated with a specific | |
| project and is exchanged for a scoped token. | |
| \item[{delayed delete\index{delayed delete|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-delayed-delete}} | |
| An option within Image service so that an image is deleted after | |
| a predefined number of seconds instead of immediately. | |
| \item[{delivery mode\index{delivery mode|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-delivery-mode}} | |
| Setting for the Compute RabbitMQ message delivery mode; can be | |
| set to either transient or persistent. | |
| \item[{denial of service (DoS)\index{denial of service (DoS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-denial-of-service-dos}} | |
| Denial of service (DoS) is a short form for | |
| denial-of-service attack. This is a malicious attempt to | |
| prevent legitimate users from using a service. | |
| \item[{deprecated auth\index{deprecated auth|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-deprecated-auth}} | |
| An option within Compute that enables administrators to create | |
| and manage users through the \sphinxcode{nova-manage} command as | |
| opposed to using the Identity service. | |
| \item[{designate\index{designate|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-designate}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-dns-service-designate}]{\sphinxtermref{\DUrole{xref,std,std-term}{DNS service}}}}. | |
| \item[{Desktop-as-a-Service\index{Desktop-as-a-Service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-desktop-as-a-service}} | |
| A platform that provides a suite of desktop environments | |
| that users access to receive a desktop experience from | |
| any location. This may provide general use, development, or | |
| even homogeneous testing environments. | |
| \item[{developer\index{developer|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-developer}} | |
| One of the default roles in the Compute RBAC system and the | |
| default role assigned to a new user. | |
| \item[{device ID\index{device ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-device-id}} | |
| Maps Object Storage partitions to physical storage | |
| devices. | |
| \item[{device weight\index{device weight|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-device-weight}} | |
| Distributes partitions proportionately across Object Storage | |
| devices based on the storage capacity of each device. | |
| \item[{DevStack\index{DevStack|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-devstack}} | |
| Community project that uses shell scripts to quickly build | |
| complete OpenStack development environments. | |
| \item[{DHCP agent\index{DHCP agent|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dhcp-agent}} | |
| OpenStack Networking agent that provides DHCP services | |
| for virtual networks. | |
| \item[{Diablo\index{Diablo|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-diablo}} | |
| A grouped release of projects related to OpenStack that came out | |
| in the fall of 2011, the fourth release of OpenStack. It included | |
| Compute (nova 2011.3), Object Storage (swift 1.4.3), and the Image | |
| service (glance). | |
| Diablo is the code name for the fourth release of | |
| OpenStack. The design summit took place in | |
| the Bay Area near Santa Clara, | |
| California, US and Diablo is a nearby city. | |
| \item[{direct consumer\index{direct consumer|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-direct-consumer}} | |
| An element of the Compute RabbitMQ that comes to life when an RPC | |
| call is executed. It connects to a direct exchange through a unique | |
| exclusive queue, sends the message, and terminates. | |
| \item[{direct exchange\index{direct exchange|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-direct-exchange}} | |
| A routing table that is created within the Compute RabbitMQ | |
| during RPC calls; one is created for each RPC call that is | |
| invoked. | |
| \item[{direct publisher\index{direct publisher|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-direct-publisher}} | |
| Element of RabbitMQ that provides a response to an incoming MQ | |
| message. | |
| \item[{disassociate\index{disassociate|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-disassociate}} | |
| The process of removing the association between a floating IP | |
| address and fixed IP and thus returning the floating IP address to the | |
| address pool. | |
| \item[{Discretionary Access Control (DAC)\index{Discretionary Access Control (DAC)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-discretionary-access-control-dac}} | |
| Governs the ability of subjects to access objects, while enabling | |
| users to make policy decisions and assign security attributes. | |
| The traditional UNIX system of users, groups, and read-write-execute | |
| permissions is an example of DAC. | |
| \item[{disk encryption\index{disk encryption|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-disk-encryption}} | |
| The ability to encrypt data at the file system, disk partition, | |
| or whole-disk level. Supported within Compute VMs. | |
| \item[{disk format\index{disk format|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-disk-format}} | |
| The underlying format that a disk image for a VM is stored as | |
| within the Image service back-end store. For example, AMI, ISO, QCOW2, | |
| VMDK, and so on. | |
| \item[{dispersion\index{dispersion|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dispersion}} | |
| In Object Storage, tools to test and ensure dispersion of | |
| objects and containers for fault tolerance. | |
| \item[{distributed virtual router (DVR)\index{distributed virtual router (DVR)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-distributed-virtual-router-dvr}} | |
| Mechanism for highly available multi-host routing when using | |
| OpenStack Networking (neutron). | |
| \item[{Django\index{Django|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-django}} | |
| A web framework used extensively in horizon. | |
| \item[{DNS record\index{DNS record|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dns-record}} | |
| A record that specifies information about a particular domain | |
| and belongs to the domain. | |
| \item[{DNS service (designate)\index{DNS service (designate)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dns-service-designate}} | |
| OpenStack project that provides scalable, on-demand, | |
| self-service access to authoritative DNS services, in a | |
| technology-agnostic manner. | |
| \item[{dnsmasq\index{dnsmasq|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dnsmasq}} | |
| Daemon that provides DNS, DHCP, BOOTP, and TFTP services for | |
| virtual networks. | |
| \item[{domain\index{domain|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-domain}} | |
| An Identity API v3 entity. Represents a collection of | |
| projects, groups, and users that defines administrative boundaries | |
| for managing OpenStack Identity entities; a creation sketch | |
| appears after this list. | |
| On the Internet, separates a website from other sites. Often, | |
| the domain name has two or more parts that are separated by dots. | |
| For example, yahoo.com, usa.gov, harvard.edu, or | |
| mail.yahoo.com. | |
| Also, a domain is an entity or container of all DNS-related | |
| information containing one or more records. | |
| \item[{Domain Name System (DNS)\index{Domain Name System (DNS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-domain-name-system-dns}} | |
| A system by which Internet domain name-to-address and | |
| address-to-name resolutions are determined. | |
| DNS helps navigate the Internet by translating the IP address | |
| into an address that is easier to remember. For example, translating | |
| 111.111.111.1 into www.yahoo.com. | |
| All domains and their components, such as mail servers, utilize | |
| DNS to resolve to the appropriate locations. DNS servers are usually | |
| set up in a master-slave relationship such that failure of the master | |
| invokes the slave. DNS servers might also be clustered or replicated | |
| such that changes made to one DNS server are automatically propagated | |
| to other active servers. | |
| In Compute, the support that enables associating DNS entries | |
| with floating IP addresses, nodes, or cells so that hostnames are | |
| consistent across reboots. | |
| \item[{download\index{download|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-download}} | |
| The transfer of data, usually in the form of files, from one | |
| computer to another. | |
| \item[{durable exchange\index{durable exchange|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-durable-exchange}} | |
| The Compute RabbitMQ message exchange that remains active when | |
| the server restarts. | |
| \item[{durable queue\index{durable queue|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-durable-queue}} | |
| A Compute RabbitMQ message queue that remains active when the | |
| server restarts. | |
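| A short sketch of declaring durable messaging primitives, assuming | |
| the third-party pika client (1.x API) and a broker on localhost; | |
| Compute itself drives RabbitMQ through its messaging layer rather | |
| than directly like this. | |
| \begin{verbatim} | |
| import pika  # assumed: third-party RabbitMQ client, pika 1.x API | |
| connection = pika.BlockingConnection( | |
|     pika.ConnectionParameters("localhost")) | |
| channel = connection.channel() | |
| # durable=True asks RabbitMQ to persist the definitions so the | |
| # exchange and queue remain active when the server restarts. | |
| channel.exchange_declare(exchange="demo_exchange", | |
|                          exchange_type="topic", durable=True) | |
| channel.queue_declare(queue="demo_queue", durable=True) | |
| connection.close() | |
| \end{verbatim} | |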
| \item[{Dynamic Host Configuration Protocol (DHCP)\index{Dynamic Host Configuration Protocol (DHCP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dynamic-host-configuration-protocol-dhcp}} | |
| A network protocol that configures devices that are connected to a | |
| network so that they can communicate on that network by using the | |
| Internet Protocol (IP). The protocol is implemented in a client-server | |
| model where DHCP clients request configuration data, such as an IP | |
| address, a default route, and one or more DNS server addresses from a | |
| DHCP server. | |
| A method to automatically configure networking for a host at | |
| boot time. Provided by both Networking and Compute. | |
| \item[{Dynamic HyperText Markup Language (DHTML)\index{Dynamic HyperText Markup Language (DHTML)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-dynamic-hypertext-markup-language-dhtml}} | |
| Pages that use HTML, JavaScript, and Cascading Style Sheets to | |
| enable users to interact with a web page or show simple | |
| animation. | |
| \end{description} | |
| \subsection{E} | |
| \label{\detokenize{common/glossary:e}}\begin{description} | |
| \item[{east-west traffic\index{east-west traffic|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-east-west-traffic}} | |
| Network traffic between servers in the same cloud or data center. | |
| See also north-south traffic. | |
| \item[{EBS boot volume\index{EBS boot volume|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ebs-boot-volume}} | |
| An Amazon EBS storage volume that contains a bootable VM image, | |
| currently unsupported in OpenStack. | |
| \item[{ebtables\index{ebtables|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ebtables}} | |
| Filtering tool for a Linux bridging firewall, enabling | |
| filtering of network traffic passing through a Linux bridge. | |
| Used in Compute along with arptables, iptables, and ip6tables | |
| to ensure isolation of network communications. | |
| \item[{EC2\index{EC2|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ec2}} | |
| The Amazon commercial compute product, similar to | |
| Compute. | |
| \item[{EC2 access key\index{EC2 access key|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ec2-access-key}} | |
| Used along with an EC2 secret key to access the Compute EC2 | |
| API. | |
| \item[{EC2 API\index{EC2 API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ec2-api}} | |
| OpenStack supports accessing the Amazon EC2 API through | |
| Compute. | |
| \item[{EC2 Compatibility API\index{EC2 Compatibility API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ec2-compatibility-api}} | |
| A Compute component that enables OpenStack to communicate with | |
| Amazon EC2. | |
| \item[{EC2 secret key\index{EC2 secret key|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ec2-secret-key}} | |
| Used along with an EC2 access key when communicating with the | |
| Compute EC2 API; used to digitally sign each request. | |
| \item[{Elastic Block Storage (EBS)\index{Elastic Block Storage (EBS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-elastic-block-storage-ebs}} | |
| The Amazon commercial block storage product. | |
| \item[{encapsulation\index{encapsulation|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-encapsulation}} | |
| The practice of placing one packet type within another for | |
| the purposes of abstracting or securing data. Examples | |
| include GRE, MPLS, or IPsec. | |
| \item[{encryption\index{encryption|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-encryption}} | |
| OpenStack supports encryption technologies such as HTTPS, SSH, | |
| SSL, TLS, digital certificates, and data encryption. | |
| \item[{endpoint\index{endpoint|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-endpoint}} | |
| See API endpoint. | |
| \item[{endpoint registry\index{endpoint registry|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-endpoint-registry}} | |
| Alternative term for an Identity service catalog. | |
| \item[{endpoint template\index{endpoint template|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-endpoint-template}} | |
| A list of URL and port number endpoints that indicate where a | |
| service, such as Object Storage, Compute, Identity, and so on, can be | |
| accessed. | |
| \item[{entity\index{entity|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-entity}} | |
| Any piece of hardware or software that wants to connect to the | |
| network services provided by Networking, the network connectivity | |
| service. An entity can make use of Networking by implementing a | |
| VIF. | |
| \item[{ephemeral image\index{ephemeral image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ephemeral-image}} | |
| A VM image that does not save changes made to its volumes and | |
| reverts them to their original state after the instance is | |
| terminated. | |
| \item[{ephemeral volume\index{ephemeral volume|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ephemeral-volume}} | |
| Volume that does not save the changes made to it and reverts to | |
| its original state when the current user relinquishes control. | |
| \item[{Essex\index{Essex|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-essex}} | |
| A grouped release of projects related to OpenStack that came out | |
| in April 2012, the fifth release of OpenStack. It included Compute | |
| (nova 2012.1), Object Storage (swift 1.4.8), Image (glance), Identity | |
| (keystone), and Dashboard (horizon). | |
| Essex is the code name for the fifth release of | |
| OpenStack. The design summit took place in | |
| Boston, Massachusetts, US and Essex is a nearby city. | |
| \item[{ESXi\index{ESXi|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-esxi}} | |
| An OpenStack-supported hypervisor. | |
| \item[{ETag\index{ETag|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-etag}} | |
| MD5 hash of an object within Object Storage, used to ensure data | |
| integrity. | |
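| A small self-contained sketch of the integrity check this enables: | |
| recompute the MD5 digest of a downloaded object body and compare it | |
| with the ETag value (both values here are constructed locally for | |
| illustration). | |
| \begin{verbatim} | |
| import hashlib | |
| def verify_etag(body: bytes, etag: str) -> bool: | |
|     # The ETag is the MD5 hex digest of the object body, so | |
|     # recomputing it locally verifies data integrity. | |
|     return hashlib.md5(body).hexdigest() == etag | |
| body = b"example object body" | |
| print(verify_etag(body, hashlib.md5(body).hexdigest()))  # True | |
| \end{verbatim} | |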
| \item[{euca2ools\index{euca2ools|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-euca2ools}} | |
| A collection of command-line tools for administering VMs; most | |
| are compatible with OpenStack. | |
| \item[{Eucalyptus Kernel Image (EKI)\index{Eucalyptus Kernel Image (EKI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-eucalyptus-kernel-image-eki}} | |
| Used along with an ERI to create an EMI. | |
| \item[{Eucalyptus Machine Image (EMI)\index{Eucalyptus Machine Image (EMI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-eucalyptus-machine-image-emi}} | |
| VM image container format supported by Image service. | |
| \item[{Eucalyptus Ramdisk Image (ERI)\index{Eucalyptus Ramdisk Image (ERI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-eucalyptus-ramdisk-image-eri}} | |
| Used along with an EKI to create an EMI. | |
| \item[{evacuate\index{evacuate|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-evacuate}} | |
| The process of migrating one or all virtual machine (VM) | |
| instances from one host to another, compatible with both shared | |
| storage live migration and block migration. | |
| \item[{exchange\index{exchange|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-exchange}} | |
| Alternative term for a RabbitMQ message exchange. | |
| \item[{exchange type\index{exchange type|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-exchange-type}} | |
| A routing algorithm in the Compute RabbitMQ. | |
| \item[{exclusive queue\index{exclusive queue|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-exclusive-queue}} | |
| Connected to by a direct consumer in RabbitMQ (Compute); the | |
| message can be consumed only by the current connection. | |
| \item[{extended attributes (xattr)\index{extended attributes (xattr)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-extended-attributes-xattr}} | |
| File system option that enables storage of additional | |
| information beyond owner, group, permissions, modification time, and | |
| so on. The underlying Object Storage file system must support extended | |
| attributes. | |
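| On Linux, the Python standard library can set and read such | |
| attributes directly; a minimal sketch follows (the path and | |
| attribute name are illustrative, and the file system must be | |
| mounted with xattr support). | |
| \begin{verbatim} | |
| import os | |
| path = "/tmp/xattr-demo" | |
| open(path, "w").close() | |
| # Store metadata beyond the standard owner/group/permission | |
| # attributes; user.* is the namespace for unprivileged use. | |
| os.setxattr(path, "user.comment", b"extra metadata") | |
| print(os.getxattr(path, "user.comment")) | |
| \end{verbatim} | |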
| \item[{extension\index{extension|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-extension}} | |
| Alternative term for an API extension or plug-in. In the context | |
| of Identity service, this is a call that is specific to the | |
| implementation, such as adding support for OpenID. | |
| \item[{external network\index{external network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-external-network}} | |
| A network segment typically used for instance Internet | |
| access. | |
| \item[{extra specs\index{extra specs|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-extra-specs}} | |
| Specifies additional requirements when Compute determines where | |
| to start a new instance. Examples include a minimum amount of network | |
| bandwidth or a GPU. | |
| \end{description} | |
| \subsection{F} | |
| \label{\detokenize{common/glossary:f}}\begin{description} | |
| \item[{FakeLDAP\index{FakeLDAP|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-fakeldap}} | |
| An easy method to create a local LDAP directory for testing | |
| Identity and Compute. Requires Redis. | |
| \item[{fan-out exchange\index{fan-out exchange|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-fan-out-exchange}} | |
| Within RabbitMQ and Compute, it is the messaging interface that | |
| is used by the scheduler service to receive capability messages from | |
| the compute, volume, and network nodes. | |
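| A conceptual sketch of a fan-out publish, again assuming the | |
| third-party pika client (1.x API): every queue bound to the | |
| exchange receives a copy of the message, which is how one | |
| capability report can reach each interested consumer. | |
| \begin{verbatim} | |
| import pika  # assumed: third-party RabbitMQ client, pika 1.x API | |
| connection = pika.BlockingConnection( | |
|     pika.ConnectionParameters("localhost")) | |
| channel = connection.channel() | |
| # A fan-out exchange copies each published message to every | |
| # bound queue; the routing key is ignored. | |
| channel.exchange_declare(exchange="capabilities", | |
|                          exchange_type="fanout") | |
| channel.basic_publish(exchange="capabilities", routing_key="", | |
|                       body=b"capability report") | |
| connection.close() | |
| \end{verbatim} | |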
| \item[{federated identity\index{federated identity|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-federated-identity}} | |
| A method to establish trusts between identity providers and the | |
| OpenStack cloud. | |
| \item[{Fedora\index{Fedora|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-fedora}} | |
| A Linux distribution compatible with OpenStack. | |
| \item[{Fibre Channel\index{Fibre Channel|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-fibre-channel}} | |
| Storage protocol similar in concept to TCP/IP; encapsulates SCSI | |
| commands and data. | |
| \item[{Fibre Channel over Ethernet (FCoE)\index{Fibre Channel over Ethernet (FCoE)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-fibre-channel-over-ethernet-fcoe}} | |
| The fibre channel protocol tunneled within Ethernet. | |
| \item[{fill-first scheduler\index{fill-first scheduler|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-fill-first-scheduler}} | |
| The Compute scheduling method that attempts to fill a host with | |
| VMs rather than starting new VMs on a variety of hosts. | |
| \item[{filter\index{filter|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-filter}} | |
| The step in the Compute scheduling process when hosts that | |
| cannot run VMs are eliminated and not chosen. | |
| \item[{firewall\index{firewall|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-firewall}} | |
| Used to restrict communications between hosts and/or nodes, | |
| implemented in Compute using iptables, arptables, ip6tables, and | |
| ebtables. | |
| \item[{FireWall-as-a-Service (FWaaS)\index{FireWall-as-a-Service (FWaaS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-firewall-as-a-service-fwaas}} | |
| A Networking extension that provides perimeter firewall | |
| functionality. | |
| \item[{fixed IP address\index{fixed IP address|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-fixed-ip-address}} | |
| An IP address that is associated with the same instance each | |
| time that instance boots, is generally not accessible to end users or | |
| the public Internet, and is used for management of the | |
| instance. | |
| \item[{Flat Manager\index{Flat Manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-flat-manager}} | |
| The Compute component that gives IP addresses to authorized | |
| nodes and assumes DHCP, DNS, and routing configuration and services | |
| are provided by something else. | |
| \item[{flat mode injection\index{flat mode injection|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-flat-mode-injection}} | |
| A Compute networking method where the OS network configuration | |
| information is injected into the VM image before the instance | |
| starts. | |
| \item[{flat network\index{flat network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-flat-network}} | |
| Virtual network type that uses neither VLANs nor tunnels to | |
| segregate project traffic. Each flat network typically requires | |
| a separate underlying physical interface defined by bridge | |
| mappings. However, a flat network can contain multiple | |
| subnets. | |
| \item[{FlatDHCP Manager\index{FlatDHCP Manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-flatdhcp-manager}} | |
| The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, | |
| TFTP) and radvd (routing) services. | |
| \item[{flavor\index{flavor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-flavor}} | |
| Alternative term for a VM instance type. | |
| \item[{flavor ID\index{flavor ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-flavor-id}} | |
| UUID for each Compute or Image service VM flavor or instance | |
| type. | |
| \item[{floating IP address\index{floating IP address|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-floating-ip-address}} | |
| An IP address that a project can associate with a VM so that the | |
| instance has the same public IP address each time that it boots. You | |
| create a pool of floating IP addresses and assign them to instances as | |
| they are launched, which keeps the IP address consistent for | |
| DNS assignment. | |
| \item[{Folsom\index{Folsom|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-folsom}} | |
| A grouped release of projects related to OpenStack that came out | |
| in the fall of 2012, the sixth release of OpenStack. It includes | |
| Compute (nova), Object Storage (swift), Identity (keystone), | |
| Networking (neutron), Image service (glance), and Volumes or Block | |
| Storage (cinder). | |
| Folsom is the code name for the sixth release of | |
| OpenStack. The design summit took place in | |
| San Francisco, California, US and Folsom is a nearby city. | |
| \item[{FormPost\index{FormPost|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-formpost}} | |
| Object Storage middleware that uploads (posts) an image through | |
| a form on a web page. | |
| \item[{freezer\index{freezer|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-freezer}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-backup-restore-and-disaster-recovery-service-freezer}]{\sphinxtermref{\DUrole{xref,std,std-term}{Backup, Restore, and Disaster Recovery service}}}}. | |
| \item[{front end\index{front end|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-front-end}} | |
| The point where a user interacts with a service; can be an API | |
| endpoint, the dashboard, or a command-line tool. | |
| \end{description} | |
| \subsection{G} | |
| \label{\detokenize{common/glossary:g}}\begin{description} | |
| \item[{gateway\index{gateway|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-gateway}} | |
| An IP address, typically assigned to a router, that | |
| passes network traffic between different networks. | |
| \item[{generic receive offload (GRO)\index{generic receive offload (GRO)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-generic-receive-offload-gro}} | |
| Feature of certain network interface drivers that | |
| combines many smaller received packets into a large packet | |
| before delivery to the kernel IP stack. | |
| \item[{generic routing encapsulation (GRE)\index{generic routing encapsulation (GRE)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-generic-routing-encapsulation-gre}} | |
| Protocol that encapsulates a wide variety of network | |
| layer protocols inside virtual point-to-point links. | |
| \item[{glance\index{glance|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-glance}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-image-service-glance}]{\sphinxtermref{\DUrole{xref,std,std-term}{Image service}}}}. | |
| \item[{glance API server\index{glance API server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-glance-api-server}} | |
| Alternative name for the {\hyperref[\detokenize{common/glossary:term-image-api}]{\sphinxtermref{\DUrole{xref,std,std-term}{Image API}}}}. | |
| \item[{glance registry\index{glance registry|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-glance-registry}} | |
| Alternative term for the Image service {\hyperref[\detokenize{common/glossary:term-image-registry}]{\sphinxtermref{\DUrole{xref,std,std-term}{image registry}}}}. | |
| \item[{global endpoint template\index{global endpoint template|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-global-endpoint-template}} | |
| The Identity service endpoint template that contains services | |
| available to all projects. | |
| \item[{GlusterFS\index{GlusterFS|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-glusterfs}} | |
| A file system designed to aggregate NAS hosts, compatible with | |
| OpenStack. | |
| \item[{gnocchi\index{gnocchi|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-gnocchi}} | |
| Part of the OpenStack {\hyperref[\detokenize{common/glossary:term-telemetry-service-telemetry}]{\sphinxtermref{\DUrole{xref,std,std-term}{Telemetry service}}}}; provides an indexer and time-series | |
| database. | |
| \item[{golden image\index{golden image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-golden-image}} | |
| A method of operating system installation where a finalized disk | |
| image is created and then used by all nodes without | |
| modification. | |
| \item[{Governance service (congress)\index{Governance service (congress)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-governance-service-congress}} | |
| The project that provides Governance-as-a-Service across | |
| any collection of cloud services in order to monitor, | |
| enforce, and audit policy over dynamic infrastructure. | |
| \item[{Graphic Interchange Format (GIF)\index{Graphic Interchange Format (GIF)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-graphic-interchange-format-gif}} | |
| A type of image file that is commonly used for animated images | |
| on web pages. | |
| \item[{Graphics Processing Unit (GPU)\index{Graphics Processing Unit (GPU)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-graphics-processing-unit-gpu}} | |
| Choosing a host based on the existence of a GPU is currently | |
| unsupported in OpenStack. | |
| \item[{Green Threads\index{Green Threads|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-green-threads}} | |
| The cooperative threading model used by Python; reduces race | |
| conditions and only context switches when specific library calls are | |
| made. Each OpenStack service is its own thread. | |
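| A minimal sketch of the cooperative model using the eventlet | |
| library (an assumption; OpenStack services have commonly used it): | |
| each green thread runs until it reaches an explicit yield point. | |
| \begin{verbatim} | |
| import eventlet  # assumed: third-party green-thread library | |
| def worker(name): | |
|     for step in range(2): | |
|         print(name, "step", step) | |
|         # eventlet.sleep() is a cooperative yield point; control | |
|         # switches only at calls like this, not preemptively. | |
|         eventlet.sleep(0) | |
| pool = eventlet.GreenPool() | |
| pool.spawn(worker, "a") | |
| pool.spawn(worker, "b") | |
| pool.waitall() | |
| \end{verbatim} | |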
| \item[{Grizzly\index{Grizzly|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-grizzly}} | |
| The code name for the seventh release of | |
| OpenStack. The design summit took place in | |
| San Diego, California, US and Grizzly is an element of the state flag of | |
| California. | |
| \item[{Group\index{Group|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-group}} | |
| An Identity v3 API entity. Represents a collection of users that is | |
| owned by a specific domain. | |
| \item[{guest OS\index{guest OS|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-guest-os}} | |
| An operating system instance running under the control of a | |
| hypervisor. | |
| \end{description} | |
| \subsection{H} | |
| \label{\detokenize{common/glossary:h}}\begin{description} | |
| \item[{Hadoop\index{Hadoop|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hadoop}} | |
| Apache Hadoop is an open source software framework that supports | |
| data-intensive distributed applications. | |
| \item[{Hadoop Distributed File System (HDFS)\index{Hadoop Distributed File System (HDFS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hadoop-distributed-file-system-hdfs}} | |
| A distributed, highly fault-tolerant file system designed to run | |
| on low-cost commodity hardware. | |
| \item[{handover\index{handover|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-handover}} | |
| An object state in Object Storage where a new replica of the | |
| object is automatically created due to a drive failure. | |
| \item[{HAProxy\index{HAProxy|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-haproxy}} | |
| Provides a high availability load balancer and proxy server for | |
| TCP and HTTP-based applications that spreads requests across | |
| multiple servers. | |
| \item[{hard reboot\index{hard reboot|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hard-reboot}} | |
| A type of reboot where a physical or virtual power button is | |
| pressed as opposed to a graceful, proper shutdown of the operating | |
| system. | |
| \item[{Havana\index{Havana|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-havana}} | |
| The code name for the eighth release of OpenStack. The | |
| design summit took place in Portland, Oregon, US and Havana is | |
| an unincorporated community in Oregon. | |
| \item[{health monitor\index{health monitor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-health-monitor}} | |
| Determines whether back-end members of a VIP pool can | |
| process a request. A pool can have several health monitors | |
| associated with it. When a pool has several monitors | |
| associated with it, all monitors check each member of the | |
| pool. All monitors must declare a member to be healthy for | |
| it to stay active. | |
| \item[{heat\index{heat|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-heat}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-orchestration-service-heat}]{\sphinxtermref{\DUrole{xref,std,std-term}{Orchestration service}}}}. | |
| \item[{Heat Orchestration Template (HOT)\index{Heat Orchestration Template (HOT)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-heat-orchestration-template-hot}} | |
| Heat input in the format native to OpenStack. | |
| \item[{high availability (HA)\index{high availability (HA)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-high-availability-ha}} | |
| A high availability system design approach and associated | |
| service implementation ensure that a prearranged level of | |
| operational performance will be met during a contractual | |
| measurement period. High availability systems seek to | |
| minimize system downtime and data loss. | |
| \item[{horizon\index{horizon|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-horizon}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-dashboard-horizon}]{\sphinxtermref{\DUrole{xref,std,std-term}{Dashboard}}}}. | |
| \item[{horizon plug-in\index{horizon plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-horizon-plug-in}} | |
| A plug-in for the OpenStack Dashboard (horizon). | |
| \item[{host\index{host|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-host}} | |
| A physical computer, not a VM instance (node). | |
| \item[{host aggregate\index{host aggregate|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-host-aggregate}} | |
| A method to further subdivide availability zones into hypervisor | |
| pools, a collection of common hosts. | |
| \item[{Host Bus Adapter (HBA)\index{Host Bus Adapter (HBA)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-host-bus-adapter-hba}} | |
| Device plugged into a PCI slot, such as a fibre channel or | |
| network card. | |
| \item[{hybrid cloud\index{hybrid cloud|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hybrid-cloud}} | |
| A hybrid cloud is a composition of two or more clouds | |
| (private, community or public) that remain distinct entities | |
| but are bound together, offering the benefits of multiple | |
| deployment models. Hybrid cloud can also mean the ability | |
| to connect colocation, managed and/or dedicated services | |
| with cloud resources. | |
| \item[{Hyper-V\index{Hyper-V|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hyper-v}} | |
| One of the hypervisors supported by OpenStack. | |
| \item[{hyperlink\index{hyperlink|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hyperlink}} | |
| Any kind of text that contains a link to some other site, | |
| commonly found in documents where clicking on a word or words opens up | |
| a different website. | |
| \item[{Hypertext Transfer Protocol (HTTP)\index{Hypertext Transfer Protocol (HTTP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hypertext-transfer-protocol-http}} | |
| An application protocol for distributed, collaborative, | |
| hypermedia information systems. It is the foundation of data | |
| communication for the World Wide Web. Hypertext is structured | |
| text that uses logical links (hyperlinks) between nodes containing | |
| text. HTTP is the protocol to exchange or transfer hypertext. | |
| \item[{Hypertext Transfer Protocol Secure (HTTPS)\index{Hypertext Transfer Protocol Secure (HTTPS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hypertext-transfer-protocol-secure-https}} | |
| An encrypted communications protocol for secure communication | |
| over a computer network, with especially wide deployment on the | |
| Internet. Technically, it is not a protocol in and of itself; | |
| rather, it is the result of simply layering the Hypertext Transfer | |
| Protocol (HTTP) on top of the TLS or SSL protocol, thus adding the | |
| security capabilities of TLS or SSL to standard HTTP communications. | |
| Most OpenStack API endpoints and many inter-component communications | |
| support HTTPS communication. | |
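| Because HTTPS is HTTP layered over TLS, client code is unchanged | |
| apart from the URL scheme; a sketch with the standard library, | |
| where the endpoint is a hypothetical Identity API address and the | |
| call succeeds only against a reachable deployment: | |
| \begin{verbatim} | |
| import urllib.request | |
| # Hypothetical endpoint; the TLS handshake and certificate | |
| # verification happen transparently beneath the HTTP request. | |
| url = "https://controller:5000/v3/" | |
| with urllib.request.urlopen(url, timeout=10) as response: | |
|     print(response.status, response.headers.get("Content-Type")) | |
| \end{verbatim} | |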
| \item[{hypervisor\index{hypervisor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hypervisor}} | |
| Software that arbitrates and controls VM access to the actual | |
| underlying hardware. | |
| \item[{hypervisor pool\index{hypervisor pool|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-hypervisor-pool}} | |
| A collection of hypervisors grouped together through host | |
| aggregates. | |
| \end{description} | |
| \subsection{I} | |
| \label{\detokenize{common/glossary:i}}\begin{description} | |
| \item[{Icehouse\index{Icehouse|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-icehouse}} | |
| The code name for the ninth release of OpenStack. The | |
| design summit took place in Hong Kong and Ice House is a | |
| street in that city. | |
| \item[{ID number\index{ID number|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-id-number}} | |
| Unique numeric ID associated with each user in Identity, | |
| conceptually similar to a Linux or LDAP UID. | |
| \item[{Identity API\index{Identity API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-identity-api}} | |
| Alternative term for the Identity service API. | |
| \item[{Identity back end\index{Identity back end|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-identity-back-end}} | |
| The source used by Identity service to retrieve user | |
| information; an OpenLDAP server, for example. | |
| \item[{identity provider\index{identity provider|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-identity-provider}} | |
| A directory service that allows users to log in with a user | |
| name and password. It is a typical source of authentication | |
| tokens. | |
| \item[{Identity service (keystone)\index{Identity service (keystone)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-identity-service-keystone}} | |
| The project that facilitates API client authentication, service | |
| discovery, distributed multi-tenant authorization, and auditing. | |
| It provides a central directory of users mapped to the OpenStack | |
| services they can access. It also registers endpoints for OpenStack | |
| services and acts as a common authentication system. | |
| \item[{Identity service API\index{Identity service API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-identity-service-api}} | |
| The API used to access the OpenStack Identity service provided | |
| through keystone. | |
| \item[{image\index{image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image}} | |
| A collection of files for a specific operating system (OS) that | |
| you use to create or rebuild a server. OpenStack provides pre-built | |
| images. You can also create custom images, or snapshots, from servers | |
| that you have launched. Custom images can be used for data backups or | |
| as ``gold'' images for additional servers. | |
| \item[{Image API\index{Image API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-api}} | |
| The Image service API endpoint for management of VM | |
| images. | |
| Processes client requests for VMs, updates Image service | |
| metadata on the registry server, and communicates with the store | |
| adapter to upload VM images from the back-end store. | |
| \item[{image cache\index{image cache|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-cache}} | |
| Used by Image service to obtain images on the local host rather | |
| than re-downloading them from the image server each time one is | |
| requested. | |
| \item[{image ID\index{image ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-id}} | |
| Combination of a URI and UUID used to access Image service VM | |
| images through the image API. | |
| \item[{image membership\index{image membership|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-membership}} | |
| A list of projects that can access a given VM image within Image | |
| service. | |
| \item[{image owner\index{image owner|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-owner}} | |
| The project that owns an Image service virtual machine | |
| image. | |
| \item[{image registry\index{image registry|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-registry}} | |
| A list of VM images that are available through Image | |
| service. | |
| \item[{Image service (glance)\index{Image service (glance)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-service-glance}} | |
| The OpenStack service that provides services and associated libraries | |
| to store, browse, share, distribute and manage bootable disk images, | |
| other data closely associated with initializing compute resources, | |
| and metadata definitions. | |
| \item[{image status\index{image status|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-status}} | |
| The current status of a VM image in Image service, not to be | |
| confused with the status of a running instance. | |
| \item[{image store\index{image store|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-store}} | |
| The back-end store used by Image service to store VM images, | |
| options include Object Storage, locally mounted file system, | |
| RADOS block devices, VMware datastore, or HTTP. | |
| \item[{image UUID\index{image UUID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-image-uuid}} | |
| UUID used by Image service to uniquely identify each VM | |
| image. | |
| \item[{incubated project\index{incubated project|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-incubated-project}} | |
| A community project may be elevated to this status and is then | |
| promoted to a core project. | |
| \item[{Infrastructure Optimization service (watcher)\index{Infrastructure Optimization service (watcher)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-infrastructure-optimization-service-watcher}} | |
| OpenStack project that aims to provide a flexible and scalable resource | |
| optimization service for multi-tenant OpenStack-based clouds. | |
| \item[{Infrastructure-as-a-Service (IaaS)\index{Infrastructure-as-a-Service (IaaS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-infrastructure-as-a-service-iaas}} | |
| IaaS is a provisioning model in which an organization outsources | |
| physical components of a data center, such as storage, hardware, | |
| servers, and networking components. A service provider owns the | |
| equipment and is responsible for housing, operating and maintaining | |
| it. The client typically pays on a per-use basis. | |
| IaaS is a model for providing cloud services. | |
| \item[{ingress filtering\index{ingress filtering|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ingress-filtering}} | |
| The process of filtering incoming network traffic. Supported by | |
| Compute. | |
| \item[{INI format\index{INI format|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ini-format}} | |
| The OpenStack configuration files use an INI format to | |
| describe options and their values. It consists of sections | |
| and key value pairs. | |
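| A sketch parsing an INI-style fragment with the Python standard | |
| library; the sections and options shown are illustrative, not a | |
| complete OpenStack configuration. | |
| \begin{verbatim} | |
| import configparser | |
| sample = """ | |
| [DEFAULT] | |
| debug = false | |
| [database] | |
| connection = sqlite:///demo.db | |
| """ | |
| config = configparser.ConfigParser() | |
| config.read_string(sample) | |
| # Options are addressed by section and key. | |
| print(config.get("database", "connection")) | |
| \end{verbatim} | |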
| \item[{injection\index{injection|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-injection}} | |
| The process of putting a file into a virtual machine image | |
| before the instance is started. | |
| \item[{Input/Output Operations Per Second (IOPS)\index{Input/Output Operations Per Second (IOPS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-input-output-operations-per-second-iops}} | |
| IOPS are a common performance measurement used to benchmark computer | |
| storage devices like hard disk drives, solid state drives, and | |
| storage area networks. | |
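| IOPS and throughput are related through the I/O size; a worked | |
| example with illustrative numbers: | |
| \begin{verbatim} | |
| # Throughput = IOPS x I/O size. A device sustaining 5000 IOPS | |
| # at a 4 KiB block size moves about 19.5 MiB/s. | |
| iops = 5000 | |
| block_size_bytes = 4 * 1024 | |
| print(round(iops * block_size_bytes / (1024 * 1024), 1))  # 19.5 | |
| \end{verbatim} | |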
| \item[{instance\index{instance|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-instance}} | |
| A running VM, or a VM in a known state such as suspended, that | |
| can be used like a hardware server. | |
| \item[{instance ID\index{instance ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-instance-id}} | |
| Alternative term for instance UUID. | |
| \item[{instance state\index{instance state|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-instance-state}} | |
| The current state of a guest VM image. | |
| \item[{instance tunnels network\index{instance tunnels network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-instance-tunnels-network}} | |
| A network segment used for instance traffic tunnels | |
| between compute nodes and the network node. | |
| \item[{instance type\index{instance type|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-instance-type}} | |
| Describes the parameters of the various virtual machine images | |
| that are available to users; includes parameters such as CPU, storage, | |
| and memory. Alternative term for flavor. | |
| \item[{instance type ID\index{instance type ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-instance-type-id}} | |
| Alternative term for a flavor ID. | |
| \item[{instance UUID\index{instance UUID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-instance-uuid}} | |
| Unique ID assigned to each guest VM instance. | |
| \item[{Intelligent Platform Management Interface (IPMI)\index{Intelligent Platform Management Interface (IPMI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-intelligent-platform-management-interface-ipmi}} | |
| IPMI is a standardized computer system interface used by system | |
| administrators for out-of-band management of computer systems and | |
| monitoring of their operation. In layman's terms, it is a way to | |
| manage a computer using a direct network connection, whether it is | |
| turned on or not; connecting to the hardware rather than an operating | |
| system or login shell. | |
| \item[{interface\index{interface|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-interface}} | |
| A physical or virtual device that provides connectivity | |
| to another device or medium. | |
| \item[{interface ID\index{interface ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-interface-id}} | |
| Unique ID for a Networking VIF or vNIC in the form of a | |
| UUID. | |
| \item[{Internet Control Message Protocol (ICMP)\index{Internet Control Message Protocol (ICMP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-internet-control-message-protocol-icmp}} | |
| A network protocol used by network devices for control messages. | |
| For example, \sphinxstyleliteralstrong{ping} uses ICMP to test | |
| connectivity. | |
| \item[{Internet protocol (IP)\index{Internet protocol (IP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-internet-protocol-ip}} | |
| Principal communications protocol in the internet protocol | |
| suite for relaying datagrams across network boundaries. | |
| \item[{Internet Service Provider (ISP)\index{Internet Service Provider (ISP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-internet-service-provider-isp}} | |
| Any business that provides Internet access to individuals or | |
| businesses. | |
| \item[{Internet Small Computer System Interface (iSCSI)\index{Internet Small Computer System Interface (iSCSI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-internet-small-computer-system-interface-iscsi}} | |
| Storage protocol that encapsulates SCSI frames for transport | |
| over IP networks. | |
| Supported by Compute, Object Storage, and Image service. | |
| \item[{IP address\index{IP address|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ip-address}} | |
| Number that is unique to every computer system on the Internet. | |
| Two versions of the Internet Protocol (IP) are in use for addresses: | |
| IPv4 and IPv6. | |
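| The two versions can be handled uniformly; a short standard-library | |
| sketch using documentation-reserved example addresses: | |
| \begin{verbatim} | |
| import ipaddress | |
| for text in ("192.0.2.10", "2001:db8::10"): | |
|     addr = ipaddress.ip_address(text) | |
|     # .version reports 4 or 6 depending on the address family. | |
|     print(addr, "is IPv%d" % addr.version) | |
| \end{verbatim} | |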
| \item[{IP Address Management (IPAM)\index{IP Address Management (IPAM)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ip-address-management-ipam}} | |
| The process of automating IP address allocation, deallocation, | |
| and management. Currently provided by Compute, melange, and | |
| Networking. | |
| \item[{ip6tables\index{ip6tables|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ip6tables}} | |
| Tool used to set up, maintain, and inspect the tables of IPv6 | |
| packet filter rules in the Linux kernel. In OpenStack Compute, | |
| ip6tables is used along with arptables, ebtables, and iptables to | |
| create firewalls for both nodes and VMs. | |
| \item[{ipset\index{ipset|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ipset}} | |
| Extension to iptables that allows creation of firewall rules | |
| that match entire ``sets'' of IP addresses simultaneously. These | |
| sets reside in indexed data structures to increase efficiency, | |
| particularly on systems with a large quantity of rules. | |
| \item[{iptables\index{iptables|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-iptables}} | |
| Used along with arptables and ebtables, iptables create | |
| firewalls in Compute. iptables are the tables provided by the Linux | |
| kernel firewall (implemented as different Netfilter modules) and the | |
| chains and rules it stores. Different kernel modules and programs are | |
| currently used for different protocols: iptables applies to IPv4, | |
| ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. | |
| Requires root privilege to manipulate. | |
| \item[{ironic\index{ironic|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ironic}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-bare-metal-service-ironic}]{\sphinxtermref{\DUrole{xref,std,std-term}{Bare Metal service}}}}. | |
| \item[{iSCSI Qualified Name (IQN)\index{iSCSI Qualified Name (IQN)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-iscsi-qualified-name-iqn}} | |
| IQN is the format most commonly used for iSCSI names, which uniquely | |
| identify nodes in an iSCSI network. | |
| All IQNs follow the pattern iqn.yyyy-mm.domain:identifier, where | |
| `yyyy-mm' is the year and month in which the domain was registered, | |
| `domain' is the reversed domain name of the issuing organization, and | |
| `identifier' is an optional string which makes each IQN under the same | |
| domain unique. For example, `iqn.2015-10.org.openstack.408ae959bce1'. | |
| \item[{ISO9660\index{ISO9660|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-iso9660}} | |
| One of the VM image disk formats supported by Image | |
| service. | |
| \item[{itsec\index{itsec|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-itsec}} | |
| A default role in the Compute RBAC system that can quarantine an | |
| instance in any project. | |
| \end{description} | |
| \subsection{J} | |
| \label{\detokenize{common/glossary:j}}\begin{description} | |
| \item[{Java\index{Java|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-java}} | |
| A programming language that is used to create systems that | |
| involve more than one computer by way of a network. | |
| \item[{JavaScript\index{JavaScript|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-javascript}} | |
| A scripting language that is used to build web pages. | |
| \item[{JavaScript Object Notation (JSON)\index{JavaScript Object Notation (JSON)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-javascript-object-notation-json}} | |
| One of the supported response formats in OpenStack. | |
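| A minimal decoding sketch; the response body is invented to | |
| resemble a typical API reply, not taken from a real service: | |
| \begin{verbatim} | |
| import json | |
| body = '{"server": {"id": "abc123", "status": "ACTIVE"}}' | |
| # Decode the JSON text into native Python structures. | |
| server = json.loads(body)["server"] | |
| print(server["id"], server["status"]) | |
| \end{verbatim} | |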
| \item[{Jenkins\index{Jenkins|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-jenkins}} | |
| Tool used to run jobs automatically for OpenStack | |
| development. | |
| \item[{jumbo frame\index{jumbo frame|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-jumbo-frame}} | |
| Feature in modern Ethernet networks that supports frames up to | |
| approximately 9000 bytes. | |
| \item[{Juno\index{Juno|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-juno}} | |
| The code name for the tenth release of OpenStack. The | |
| design summit took place in Atlanta, Georgia, US and Juno is | |
| an unincorporated community in Georgia. | |
| \end{description} | |
| \subsection{K} | |
| \label{\detokenize{common/glossary:k}}\begin{description} | |
| \item[{Kerberos\index{Kerberos|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-kerberos}} | |
| A network authentication protocol that works on the basis of | |
| tickets. Kerberos allows nodes communicating over a non-secure | |
| network to prove their identity to one another in a secure | |
| manner. | |
| \item[{kernel-based VM (KVM)\index{kernel-based VM (KVM)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-kernel-based-vm-kvm}} | |
| An OpenStack-supported hypervisor. KVM is a full | |
| virtualization solution for Linux on x86 hardware containing | |
| virtualization extensions (Intel VT or AMD-V), ARM, IBM | |
| Power, and IBM zSeries. It consists of a loadable kernel | |
| module that provides the core virtualization infrastructure, | |
| and a processor-specific module. | |
| \item[{Key Manager service (barbican)\index{Key Manager service (barbican)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-key-manager-service-barbican}} | |
| The project that produces a secret storage and | |
| generation system capable of providing key management for | |
| services wishing to enable encryption features. | |
| \item[{keystone\index{keystone|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-keystone}} | |
| Codename of the {\hyperref[\detokenize{common/glossary:term-identity-service-keystone}]{\sphinxtermref{\DUrole{xref,std,std-term}{Identity service}}}}. | |
| \item[{Kickstart\index{Kickstart|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-kickstart}} | |
| A tool to automate system configuration and installation on Red | |
| Hat, Fedora, and CentOS-based Linux distributions. | |
| \item[{Kilo\index{Kilo|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-kilo}} | |
| The code name for the eleventh release of OpenStack. The | |
| design summit took place in Paris, France. Due to delays in the name | |
| selection, the release was known only as K. Because \sphinxcode{k} is the | |
| unit symbol for kilo and the reference artifact is stored near Paris | |
| in the Pavillon de Breteuil in Sèvres, the community chose Kilo as | |
| the release name. | |
| \end{description} | |
| \subsection{L} | |
| \label{\detokenize{common/glossary:l}}\begin{description} | |
| \item[{large object\index{large object|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-large-object}} | |
| An object within Object Storage that is larger than 5 GB. | |
| \item[{Launchpad\index{Launchpad|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-launchpad}} | |
| The collaboration site for OpenStack. | |
| \item[{Layer-2 (L2) agent\index{Layer-2 (L2) agent|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-layer-2-l2-agent}} | |
| OpenStack Networking agent that provides layer-2 | |
| connectivity for virtual networks. | |
| \item[{Layer-2 network\index{Layer-2 network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-layer-2-network}} | |
| Term used in the OSI network architecture for the data link | |
| layer. The data link layer is responsible for media access | |
| control, flow control and detecting and possibly correcting | |
| errors that may occur in the physical layer. | |
| \item[{Layer-3 (L3) agent\index{Layer-3 (L3) agent|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-layer-3-l3-agent}} | |
| OpenStack Networking agent that provides layer-3 | |
| (routing) services for virtual networks. | |
| \item[{Layer-3 network\index{Layer-3 network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-layer-3-network}} | |
| Term used in the OSI network architecture for the network | |
| layer. The network layer is responsible for packet | |
| forwarding including routing from one node to another. | |
| \item[{Liberty\index{Liberty|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-liberty}} | |
| The code name for the twelfth release of OpenStack. The | |
| design summit took place in Vancouver, Canada and Liberty is | |
| the name of a village in the Canadian province of | |
| Saskatchewan. | |
| \item[{libvirt\index{libvirt|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-libvirt}} | |
| Virtualization API library used by OpenStack to interact with | |
| many of its supported hypervisors. | |
| \item[{Lightweight Directory Access Protocol (LDAP)\index{Lightweight Directory Access Protocol (LDAP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-lightweight-directory-access-protocol-ldap}} | |
| An application protocol for accessing and maintaining distributed | |
| directory information services over an IP network. | |
| \item[{Linux bridge\index{Linux bridge|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-linux-bridge}} | |
| Software that enables multiple VMs to share a single physical | |
| NIC within Compute. | |
| \item[{Linux Bridge neutron plug-in\index{Linux Bridge neutron plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-linux-bridge-neutron-plug-in}} | |
| Enables a Linux bridge to understand a Networking port, | |
| interface attachment, and other abstractions. | |
| \item[{Linux containers (LXC)\index{Linux containers (LXC)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-linux-containers-lxc}} | |
| An OpenStack-supported hypervisor. | |
| \item[{live migration\index{live migration|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-live-migration}} | |
| The ability within Compute to move running virtual machine | |
| instances from one host to another with only a small service | |
| interruption during switchover. | |
| \item[{load balancer\index{load balancer|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-load-balancer}} | |
| A load balancer is a logical device that belongs to a cloud | |
| account. It is used to distribute workloads between multiple back-end | |
| systems or services, based on the criteria defined as part of its | |
| configuration. | |
| \item[{load balancing\index{load balancing|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-load-balancing}} | |
| The process of spreading client requests between two or more | |
| nodes to improve performance and availability. | |
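| Round robin is one common spreading criterion; a minimal sketch | |
| with invented back-end names: | |
| \begin{verbatim} | |
| import itertools | |
| # Each request goes to the next back end in turn, spreading | |
| # load evenly across the pool. | |
| pool = itertools.cycle(["backend-1", "backend-2", "backend-3"]) | |
| for request_id in range(5): | |
|     print("request", request_id, "->", next(pool)) | |
| \end{verbatim} | |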
| \item[{Load-Balancer-as-a-Service (LBaaS)\index{Load-Balancer-as-a-Service (LBaaS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-load-balancer-as-a-service-lbaas}} | |
| Enables Networking to distribute incoming requests evenly | |
| between designated instances. | |
| \item[{Load-balancing service (octavia)\index{Load-balancing service (octavia)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-load-balancing-service-octavia}} | |
| The project that aims to provide scalable, on demand, self service | |
| access to load-balancer services, in a technology-agnostic manner. | |
| \item[{Logical Volume Manager (LVM)\index{Logical Volume Manager (LVM)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-logical-volume-manager-lvm}} | |
| Provides a method of allocating space on mass-storage | |
| devices that is more flexible than conventional partitioning | |
| schemes. | |
| \end{description} | |
| \subsection{M} | |
| \label{\detokenize{common/glossary:m}}\begin{description} | |
| \item[{magnum\index{magnum|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-magnum}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-container-infrastructure-management-service-magnum}]{\sphinxtermref{\DUrole{xref,std,std-term}{Container Infrastructure Management | |
| service}}}}. | |
| \item[{management API\index{management API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-management-api}} | |
| Alternative term for an admin API. | |
| \item[{management network\index{management network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-management-network}} | |
| A network segment used for administration, not accessible to the | |
| public Internet. | |
| \item[{manager\index{manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-manager}} | |
| Logical groupings of related code, such as the Block Storage | |
| volume manager or network manager. | |
| \item[{manifest\index{manifest|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-manifest}} | |
| Used to track segments of a large object within Object | |
| Storage. | |
| \item[{manifest object\index{manifest object|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-manifest-object}} | |
| A special Object Storage object that contains the manifest for a | |
| large object. | |
| \item[{manila\index{manila|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-manila}} | |
| Codename for OpenStack {\hyperref[\detokenize{common/glossary:term-shared-file-systems-service-manila}]{\sphinxtermref{\DUrole{xref,std,std-term}{Shared File Systems service}}}}. | |
| \item[{manila-share\index{manila-share|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-manila-share}} | |
| Responsible for managing Shared File System Service devices, specifically | |
| the back-end devices. | |
| \item[{maximum transmission unit (MTU)\index{maximum transmission unit (MTU)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-maximum-transmission-unit-mtu}} | |
| Maximum frame or packet size for a particular network | |
| medium. Typically 1500 bytes for Ethernet networks. | |
| \item[{mechanism driver\index{mechanism driver|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-mechanism-driver}} | |
| A driver for the Modular Layer 2 (ML2) neutron plug-in that | |
| provides layer-2 connectivity for virtual instances. A | |
| single OpenStack installation can use multiple mechanism | |
| drivers. | |
| \item[{melange\index{melange|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-melange}} | |
| Project name for OpenStack Network Information Service. To be | |
| merged with Networking. | |
| \item[{membership\index{membership|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-membership}} | |
| The association between an Image service VM image and a project. | |
| Enables images to be shared with specified projects. | |
| \item[{membership list\index{membership list|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-membership-list}} | |
| A list of projects that can access a given VM image within Image | |
| service. | |
| \item[{memcached\index{memcached|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-memcached}} | |
| A distributed memory object caching system that is used by | |
| Object Storage for caching. | |
| \item[{memory overcommit\index{memory overcommit|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-memory-overcommit}} | |
| The ability to start new VM instances based on the actual memory | |
| usage of a host, as opposed to basing the decision on the amount of | |
| RAM each running instance thinks it has available. Also known as RAM | |
| overcommit. | |
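| For illustration, a sketch of the arithmetic in Python; the 128 GiB | |
| host size and the 1.5 overcommit ratio are hypothetical: | |
| \begin{verbatim} | |
| # Illustrative arithmetic only: an overcommit ratio lets the | |
| # scheduler promise more RAM than the host physically has. | |
| physical_ram_mb = 131072        # 128 GiB on the host | |
| ram_allocation_ratio = 1.5      # hypothetical overcommit factor | |
|  | |
| schedulable_mb = physical_ram_mb * ram_allocation_ratio | |
| print(schedulable_mb)           # 196608.0 MiB may be promised to VMs | |
| \end{verbatim} | |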
| \item[{message broker\index{message broker|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-message-broker}} | |
| The software package used to provide AMQP messaging capabilities | |
| within Compute. Default package is RabbitMQ. | |
| \item[{message bus\index{message bus|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-message-bus}} | |
| The main virtual communication line used by all AMQP messages | |
| for inter-cloud communications within Compute. | |
| \item[{message queue\index{message queue|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-message-queue}} | |
| Passes requests from clients to the appropriate workers and | |
| returns the output to the client after the job completes. | |
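| By way of illustration, a minimal round trip through a queue in | |
| Python, assuming the pika library (1.x) and a RabbitMQ broker on | |
| localhost; the queue name and payload are hypothetical: | |
| \begin{verbatim} | |
| import pika | |
|  | |
| conn = pika.BlockingConnection(pika.ConnectionParameters("localhost")) | |
| channel = conn.channel() | |
| channel.queue_declare(queue="tasks") | |
|  | |
| # A client passes a request to a worker through the queue... | |
| channel.basic_publish(exchange="", routing_key="tasks", body="do-work") | |
|  | |
| # ...and a worker picks it up and performs the job. | |
| method, properties, body = channel.basic_get(queue="tasks", auto_ack=True) | |
| print(body) | |
| conn.close() | |
| \end{verbatim} | |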
| \item[{Message service (zaqar)\index{Message service (zaqar)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-message-service-zaqar}} | |
| The project that provides a messaging service that affords a | |
| variety of distributed application patterns in an efficient, | |
| scalable, and highly available manner, and that creates and maintains | |
| associated Python libraries and documentation. | |
| \item[{Meta-Data Server (MDS)\index{Meta-Data Server (MDS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-meta-data-server-mds}} | |
| Stores CephFS metadata. | |
| \item[{Metadata agent\index{Metadata agent|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-metadata-agent}} | |
| OpenStack Networking agent that provides metadata | |
| services for instances. | |
| \item[{migration\index{migration|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-migration}} | |
| The process of moving a VM instance from one host to | |
| another. | |
| \item[{mistral\index{mistral|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-mistral}} | |
| Code name for {\hyperref[\detokenize{common/glossary:term-workflow-service-mistral}]{\sphinxtermref{\DUrole{xref,std,std-term}{Workflow service}}}}. | |
| \item[{Mitaka\index{Mitaka|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-mitaka}} | |
| The code name for the thirteenth release of OpenStack. | |
| The design summit took place in Tokyo, Japan. Mitaka | |
| is a city in Tokyo. | |
| \item[{Modular Layer 2 (ML2) neutron plug-in\index{Modular Layer 2 (ML2) neutron plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-modular-layer-2-ml2-neutron-plug-in}} | |
| Can concurrently use multiple layer-2 networking technologies, | |
| such as 802.1Q and VXLAN, in Networking. | |
| \item[{monasca\index{monasca|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-monasca}} | |
| Codename for OpenStack {\hyperref[\detokenize{common/glossary:term-monitoring-monasca}]{\sphinxtermref{\DUrole{xref,std,std-term}{Monitoring}}}}. | |
| \item[{Monitor (LBaaS)\index{Monitor (LBaaS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-monitor-lbaas}} | |
| LBaaS feature that provides availability monitoring using the | |
| \sphinxcode{ping} command, TCP, and HTTP/HTTPS GET. | |
| \item[{Monitor (Mon)\index{Monitor (Mon)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-monitor-mon}} | |
| A Ceph component that communicates with external clients, checks | |
| data state and consistency, and performs quorum functions. | |
| \item[{Monitoring (monasca)\index{Monitoring (monasca)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-monitoring-monasca}} | |
| The OpenStack service that provides a multi-tenant, highly scalable, | |
| performant, fault-tolerant monitoring-as-a-service solution for metrics, | |
| complex event processing, and logging. It aims to build an extensible | |
| platform for advanced monitoring services that can be used by both | |
| operators and tenants to gain operational insight and visibility, | |
| ensuring availability and stability. | |
| \item[{multi-factor authentication\index{multi-factor authentication|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-multi-factor-authentication}} | |
| Authentication method that uses two or more credentials, such as | |
| a password and a private key. Currently not supported in | |
| Identity. | |
| \item[{multi-host\index{multi-host|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-multi-host}} | |
| High-availability mode for legacy (nova) networking. | |
| Each compute node handles NAT and DHCP and acts as a gateway | |
| for all of the VMs on it. A networking failure on one compute | |
| node doesn't affect VMs on other compute nodes. | |
| \item[{multinic\index{multinic|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-multinic}} | |
| Facility in Compute that allows each virtual machine instance to | |
| have more than one VIF connected to it. | |
| \item[{murano\index{murano|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-murano}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-application-catalog-service-murano}]{\sphinxtermref{\DUrole{xref,std,std-term}{Application Catalog service}}}}. | |
| \end{description} | |
| \subsection{N} | |
| \label{\detokenize{common/glossary:n}}\begin{description} | |
| \item[{Nebula\index{Nebula|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-nebula}} | |
| Released as open source by NASA in 2010; the basis for | |
| Compute. | |
| \item[{netadmin\index{netadmin|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-netadmin}} | |
| One of the default roles in the Compute RBAC system. Enables the | |
| user to allocate publicly accessible IP addresses to instances and | |
| change firewall rules. | |
| \item[{NetApp volume driver\index{NetApp volume driver|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-netapp-volume-driver}} | |
| Enables Compute to communicate with NetApp storage devices | |
| through the NetApp OnCommand | |
| Provisioning Manager. | |
| \item[{network\index{network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network}} | |
| A virtual network that provides connectivity between entities. | |
| For example, a collection of virtual ports that share network | |
| connectivity. In Networking terminology, a network is always a layer-2 | |
| network. | |
| \item[{Network Address Translation (NAT)\index{Network Address Translation (NAT)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-address-translation-nat}} | |
| Process of modifying IP address information while in transit. | |
| Supported by Compute and Networking. | |
| \item[{network controller\index{network controller|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-controller}} | |
| A Compute daemon that orchestrates the network configuration of | |
| nodes, including IP addresses, VLANs, and bridging. Also manages | |
| routing for both public and private networks. | |
| \item[{Network File System (NFS)\index{Network File System (NFS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-file-system-nfs}} | |
| A method for making file systems available over the network. | |
| Supported by OpenStack. | |
| \item[{network ID\index{network ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-id}} | |
| Unique ID assigned to each network segment within Networking. | |
| Same as network UUID. | |
| \item[{network manager\index{network manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-manager}} | |
| The Compute component that manages various network components, | |
| such as firewall rules, IP address allocation, and so on. | |
| \item[{network namespace\index{network namespace|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-namespace}} | |
| Linux kernel feature that provides independent virtual | |
| networking instances on a single host with separate routing | |
| tables and interfaces. Similar to virtual routing and forwarding | |
| (VRF) services on physical network equipment. | |
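| For illustration, a sketch that drives the Linux ip-netns tooling | |
| from Python (requires root; the namespace name is hypothetical): | |
| \begin{verbatim} | |
| import subprocess | |
|  | |
| # Create an isolated namespace with its own interfaces and routes. | |
| subprocess.check_call(["ip", "netns", "add", "demo-ns"]) | |
|  | |
| # Run a command inside the namespace: its routing table is separate. | |
| subprocess.check_call( | |
|     ["ip", "netns", "exec", "demo-ns", "ip", "route", "show"]) | |
|  | |
| subprocess.check_call(["ip", "netns", "delete", "demo-ns"]) | |
| \end{verbatim} | |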
| \item[{network node\index{network node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-node}} | |
| Any compute node that runs the network worker daemon. | |
| \item[{network segment\index{network segment|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-segment}} | |
| Represents a virtual, isolated OSI layer-2 subnet in | |
| Networking. | |
| \item[{Network Service Header (NSH)\index{Network Service Header (NSH)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-service-header-nsh}} | |
| Provides a mechanism for metadata exchange along the | |
| instantiated service path. | |
| \item[{Network Time Protocol (NTP)\index{Network Time Protocol (NTP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-time-protocol-ntp}} | |
| Method of keeping a clock for a host or node correct via | |
| communication with a trusted, accurate time source. | |
| \item[{network UUID\index{network UUID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-uuid}} | |
| Unique ID for a Networking network segment. | |
| \item[{network worker\index{network worker|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-network-worker}} | |
| The \sphinxcode{nova-network} worker daemon; provides | |
| services such as giving an IP address to a booting nova | |
| instance. | |
| \item[{Networking API (Neutron API)\index{Networking API (Neutron API)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-networking-api-neutron-api}} | |
| API used to access OpenStack Networking. Provides an extensible | |
| architecture to enable custom plug-in creation. | |
| \item[{Networking service (neutron)\index{Networking service (neutron)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-networking-service-neutron}} | |
| The OpenStack project which implements services and associated | |
| libraries to provide on-demand, scalable, and technology-agnostic | |
| network abstraction. | |
| \item[{neutron\index{neutron|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-neutron}} | |
| Codename for OpenStack {\hyperref[\detokenize{common/glossary:term-networking-service-neutron}]{\sphinxtermref{\DUrole{xref,std,std-term}{Networking service}}}}. | |
| \item[{neutron API\index{neutron API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-neutron-api}} | |
| An alternative name for {\hyperref[\detokenize{common/glossary:term-networking-api-neutron-api}]{\sphinxtermref{\DUrole{xref,std,std-term}{Networking API}}}}. | |
| \item[{neutron manager\index{neutron manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-neutron-manager}} | |
| Enables Compute and Networking integration, allowing | |
| Networking to perform network management for guest VMs. | |
| \item[{neutron plug-in\index{neutron plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-neutron-plug-in}} | |
| Interface within Networking that enables organizations to create | |
| custom plug-ins for advanced features, such as QoS, ACLs, or | |
| IDS. | |
| \item[{Newton\index{Newton|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-newton}} | |
| The code name for the fourteenth release of OpenStack. The | |
| design summit took place in Austin, Texas, US. The | |
| release is named after ``Newton House,'' which is located at | |
| 1013 E. Ninth St., Austin, TX, and is listed on the | |
| National Register of Historic Places. | |
| \item[{Nexenta volume driver\index{Nexenta volume driver|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-nexenta-volume-driver}} | |
| Provides support for NexentaStor devices in Compute. | |
| \item[{NFV Orchestration Service (tacker)\index{NFV Orchestration Service (tacker)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-nfv-orchestration-service-tacker}} | |
| OpenStack service that aims to implement Network Function Virtualization | |
| (NFV) Orchestration services and libraries for end-to-end life-cycle | |
| management of Network Services and Virtual Network Functions (VNFs). | |
| \item[{Nginx\index{Nginx|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-nginx}} | |
| An HTTP and reverse proxy server, a mail proxy server, and a generic | |
| TCP/UDP proxy server. | |
| \item[{No ACK\index{No ACK|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-no-ack}} | |
| Disables server-side message acknowledgment in the Compute | |
| RabbitMQ. Increases performance but decreases reliability. | |
| \item[{node\index{node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-node}} | |
| A VM instance that runs on a host. | |
| \item[{non-durable exchange\index{non-durable exchange|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-non-durable-exchange}} | |
| Message exchange that is cleared when the service restarts. Its | |
| data is not written to persistent storage. | |
| \item[{non-durable queue\index{non-durable queue|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-non-durable-queue}} | |
| Message queue that is cleared when the service restarts. Its | |
| data is not written to persistent storage. | |
| \item[{non-persistent volume\index{non-persistent volume|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-non-persistent-volume}} | |
| Alternative term for an ephemeral volume. | |
| \item[{north-south traffic\index{north-south traffic|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-north-south-traffic}} | |
| Network traffic between a user or client (north) and a | |
| server (south), or traffic into the cloud (south) and | |
| out of the cloud (north). See also east-west traffic. | |
| \item[{nova\index{nova|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-nova}} | |
| Codename for OpenStack {\hyperref[\detokenize{common/glossary:term-compute-service-nova}]{\sphinxtermref{\DUrole{xref,std,std-term}{Compute service}}}}. | |
| \item[{Nova API\index{Nova API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-nova-api}} | |
| Alternative term for the {\hyperref[\detokenize{common/glossary:term-compute-api-nova-api}]{\sphinxtermref{\DUrole{xref,std,std-term}{Compute API}}}}. | |
| \item[{nova-network\index{nova-network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-nova-network}} | |
| A Compute component that manages IP address allocation, | |
| firewalls, and other network-related tasks. This is the legacy | |
| networking option and an alternative to Networking. | |
| \end{description} | |
| \subsection{O} | |
| \label{\detokenize{common/glossary:o}}\begin{description} | |
| \item[{object\index{object|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object}} | |
| A BLOB of data held by Object Storage; can be in any | |
| format. | |
| \item[{object auditor\index{object auditor|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-auditor}} | |
| Opens all objects for an object server and verifies the MD5 | |
| hash, size, and metadata for each object. | |
| \item[{object expiration\index{object expiration|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-expiration}} | |
| A configurable option within Object Storage to automatically | |
| delete objects after a specified amount of time has passed or a | |
| certain date is reached. | |
| \item[{object hash\index{object hash|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-hash}} | |
| Unique ID for an Object Storage object. | |
| \item[{object path hash\index{object path hash|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-path-hash}} | |
| Used by Object Storage to determine the location of an object in | |
| the ring. Maps objects to partitions. | |
| \item[{object replicator\index{object replicator|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-replicator}} | |
| An Object Storage component that copies an object to remote | |
| partitions for fault tolerance. | |
| \item[{object server\index{object server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-server}} | |
| An Object Storage component that is responsible for managing | |
| objects. | |
| \item[{Object Storage API\index{Object Storage API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-storage-api}} | |
| API used to access OpenStack {\hyperref[\detokenize{common/glossary:term-object-storage-service-swift}]{\sphinxtermref{\DUrole{xref,std,std-term}{Object Storage}}}}. | |
| \item[{Object Storage Device (OSD)\index{Object Storage Device (OSD)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-storage-device-osd}} | |
| The Ceph storage daemon. | |
| \item[{Object Storage service (swift)\index{Object Storage service (swift)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-storage-service-swift}} | |
| The OpenStack core project that provides eventually consistent | |
| and redundant storage and retrieval of fixed digital content. | |
| \item[{object versioning\index{object versioning|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-object-versioning}} | |
| Allows a user to set a flag on an {\hyperref[\detokenize{common/glossary:term-object-storage-service-swift}]{\sphinxtermref{\DUrole{xref,std,std-term}{Object Storage}}}} container so that all objects within the container are | |
| versioned. | |
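| For illustration, a sketch that enables versioning on a container | |
| with python-swiftclient; the TempAuth endpoint, credentials, and | |
| container names are hypothetical: | |
| \begin{verbatim} | |
| from swiftclient.client import Connection | |
|  | |
| conn = Connection(authurl="http://controller:8080/auth/v1.0", | |
|                   user="test:tester", key="testing") | |
|  | |
| # Overwritten objects in "docs" are preserved in "docs-versions". | |
| conn.put_container("docs-versions") | |
| conn.put_container("docs", | |
|                    headers={"X-Versions-Location": "docs-versions"}) | |
| \end{verbatim} | |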
| \item[{Ocata\index{Ocata|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ocata}} | |
| The code name for the fifteenth release of OpenStack. The | |
| design summit took place in Barcelona, Spain. Ocata is | |
| a beach north of Barcelona. | |
| \item[{Octavia\index{Octavia|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-octavia}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-load-balancing-service-octavia}]{\sphinxtermref{\DUrole{xref,std,std-term}{Load-balancing service}}}}. | |
| \item[{Oldie\index{Oldie|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-oldie}} | |
| Term for an {\hyperref[\detokenize{common/glossary:term-object-storage-service-swift}]{\sphinxtermref{\DUrole{xref,std,std-term}{Object Storage}}}} | |
| process that runs for a long time. Can indicate a hung process. | |
| \item[{Open Cloud Computing Interface (OCCI)\index{Open Cloud Computing Interface (OCCI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-open-cloud-computing-interface-occi}} | |
| A standardized interface for managing compute, data, and network | |
| resources, currently unsupported in OpenStack. | |
| \item[{Open Virtualization Format (OVF)\index{Open Virtualization Format (OVF)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-open-virtualization-format-ovf}} | |
| Standard for packaging VM images. Supported in OpenStack. | |
| \item[{Open vSwitch\index{Open vSwitch|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-open-vswitch}} | |
| Open vSwitch is a production-quality, multilayer virtual | |
| switch licensed under the open source Apache 2.0 license. It | |
| is designed to enable massive network automation through | |
| programmatic extension, while still supporting standard | |
| management interfaces and protocols (for example NetFlow, | |
| sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). | |
| \item[{Open vSwitch (OVS) agent\index{Open vSwitch (OVS) agent|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-open-vswitch-ovs-agent}} | |
| Provides an interface to the underlying Open vSwitch service for | |
| the Networking plug-in. | |
| \item[{Open vSwitch neutron plug-in\index{Open vSwitch neutron plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-open-vswitch-neutron-plug-in}} | |
| Provides support for Open vSwitch in Networking. | |
| \item[{OpenLDAP\index{OpenLDAP|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-openldap}} | |
| An open source LDAP server. Supported by both Compute and | |
| Identity. | |
| \item[{OpenStack\index{OpenStack|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-openstack}} | |
| OpenStack is a cloud operating system that controls large pools | |
| of compute, storage, and networking resources throughout a data | |
| center, all managed through a dashboard that gives administrators | |
| control while empowering their users to provision resources through a | |
| web interface. OpenStack is an open source project licensed under the | |
| Apache License 2.0. | |
| \item[{OpenStack code name\index{OpenStack code name|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-openstack-code-name}} | |
| Each OpenStack release has a code name. Code names ascend in | |
| alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, | |
| Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, | |
| Mitaka, Newton, Ocata, Pike, and Queens. | |
| Code names are cities or counties near where the | |
| corresponding OpenStack design summit took place. An | |
| exception, called the Waldon exception, is granted to | |
| elements of the state flag that sound especially cool. Code | |
| names are chosen by popular vote. | |
| \item[{openSUSE\index{openSUSE|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-opensuse}} | |
| A Linux distribution that is compatible with OpenStack. | |
| \item[{operator\index{operator|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-operator}} | |
| The person responsible for planning and maintaining an OpenStack | |
| installation. | |
| \item[{optional service\index{optional service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-optional-service}} | |
| An official OpenStack service defined as optional by the | |
| DefCore Committee. Currently, it consists of | |
| Dashboard (horizon), Telemetry service (ceilometer), | |
| Orchestration service (heat), Database service (trove), | |
| Bare Metal service (ironic), and so on. | |
| \item[{Orchestration service (heat)\index{Orchestration service (heat)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-orchestration-service-heat}} | |
| The OpenStack service which orchestrates composite cloud | |
| applications using a declarative template format through | |
| an OpenStack-native REST API. | |
| \item[{orphan\index{orphan|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-orphan}} | |
| In the context of Object Storage, this is a process that is not | |
| terminated after an upgrade, restart, or reload of the service. | |
| \item[{Oslo\index{Oslo|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-oslo}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-common-libraries-oslo}]{\sphinxtermref{\DUrole{xref,std,std-term}{Common Libraries project}}}}. | |
| \end{description} | |
| \subsection{P} | |
| \label{\detokenize{common/glossary:p}}\begin{description} | |
| \item[{panko\index{panko|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-panko}} | |
| Part of the OpenStack {\hyperref[\detokenize{common/glossary:term-telemetry-service-telemetry}]{\sphinxtermref{\DUrole{xref,std,std-term}{Telemetry service}}}}; provides event storage. | |
| \item[{parent cell\index{parent cell|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-parent-cell}} | |
| If a requested resource, such as CPU time, disk storage, or | |
| memory, is not available in the parent cell, the request is forwarded | |
| to associated child cells. | |
| \item[{partition\index{partition|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-partition}} | |
| A unit of storage within Object Storage used to store objects. | |
| It exists on top of devices and is replicated for fault | |
| tolerance. | |
| \item[{partition index\index{partition index|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-partition-index}} | |
| Contains the locations of all Object Storage partitions within | |
| the ring. | |
| \item[{partition shift value\index{partition shift value|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-partition-shift-value}} | |
| Used by Object Storage to determine which partition data should | |
| reside on. | |
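| For illustration, a simplified sketch of the hash-and-shift mapping | |
| in Python (real rings also mix a cluster-specific hash prefix and | |
| suffix into the path; the partition power of 10 is hypothetical): | |
| \begin{verbatim} | |
| import hashlib | |
| import struct | |
|  | |
| PART_POWER = 10               # ring with 2**10 partitions | |
| PART_SHIFT = 32 - PART_POWER  # the partition shift value | |
|  | |
| def partition_for(account, container, obj): | |
|     path = "/%s/%s/%s" % (account, container, obj) | |
|     digest = hashlib.md5(path.encode("utf-8")).digest() | |
|     # Keep the top PART_POWER bits of the hash as the partition. | |
|     return struct.unpack_from(">I", digest)[0] >> PART_SHIFT | |
|  | |
| print(partition_for("AUTH_demo", "photos", "cat.jpg")) | |
| \end{verbatim} | |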
| \item[{path MTU discovery (PMTUD)\index{path MTU discovery (PMTUD)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-path-mtu-discovery-pmtud}} | |
| Mechanism in IP networks to detect end-to-end MTU and adjust | |
| packet size accordingly. | |
| \item[{pause\index{pause|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-pause}} | |
| A VM state where no changes occur (no changes in memory, network | |
| communications stop, etc.); the VM is frozen but not shut down. | |
| \item[{PCI passthrough\index{PCI passthrough|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-pci-passthrough}} | |
| Gives guest VMs exclusive access to a PCI device. Currently | |
| supported in OpenStack Havana and later releases. | |
| \item[{persistent message\index{persistent message|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-persistent-message}} | |
| A message that is stored both in memory and on disk. The message | |
| is not lost after a failure or restart. | |
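| For illustration, a sketch of publishing a persistent message with | |
| the pika library (delivery mode 2 asks the broker to write the | |
| message to disk; the queue name is hypothetical): | |
| \begin{verbatim} | |
| import pika | |
|  | |
| conn = pika.BlockingConnection(pika.ConnectionParameters("localhost")) | |
| channel = conn.channel() | |
| # A durable queue survives a broker restart... | |
| channel.queue_declare(queue="tasks", durable=True) | |
| # ...and delivery_mode=2 makes the message itself survive, too. | |
| channel.basic_publish( | |
|     exchange="", routing_key="tasks", body="important-work", | |
|     properties=pika.BasicProperties(delivery_mode=2)) | |
| conn.close() | |
| \end{verbatim} | |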
| \item[{persistent volume\index{persistent volume|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-persistent-volume}} | |
| Changes to these types of disk volumes are saved. | |
| \item[{personality file\index{personality file|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-personality-file}} | |
| A file used to customize a Compute instance. It can be used to | |
| inject SSH keys or a specific network configuration. | |
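| For illustration, a sketch of injecting a personality file at boot | |
| with python-novaclient; the endpoint, credentials, and IDs are | |
| hypothetical: | |
| \begin{verbatim} | |
| from keystoneauth1 import session | |
| from keystoneauth1.identity import v3 | |
| from novaclient import client | |
|  | |
| auth = v3.Password(auth_url="http://controller:5000/v3", | |
|                    username="demo", password="secret", | |
|                    project_name="demo", user_domain_id="default", | |
|                    project_domain_id="default") | |
| nova = client.Client("2", session=session.Session(auth=auth)) | |
|  | |
| # files= maps an in-instance path to the contents injected at boot. | |
| nova.servers.create(name="demo-vm", image="IMAGE_UUID", | |
|                     flavor="FLAVOR_ID", | |
|                     files={"/etc/motd": "Welcome to the demo VM\n"}) | |
| \end{verbatim} | |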
| \item[{Pike\index{Pike|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-pike}} | |
| The code name for the sixteenth release of OpenStack. The design | |
| summit will take place in Boston, Massachusetts, US. The release | |
| is named after the Massachusetts Turnpike, abbreviated commonly | |
| as the Mass Pike, which is the easternmost stretch of | |
| Interstate 90. | |
| \item[{Platform-as-a-Service (PaaS)\index{Platform-as-a-Service (PaaS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-platform-as-a-service-paas}} | |
| Provides to the consumer the ability to deploy applications | |
| through a programming language or tools supported by the cloud | |
| platform provider. An example of Platform-as-a-Service is an | |
| Eclipse/Java programming platform provided with no downloads | |
| required. | |
| \item[{plug-in\index{plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-plug-in}} | |
| Software component providing the actual implementation for | |
| Networking APIs, or for Compute APIs, depending on the context. | |
| \item[{policy service\index{policy service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-policy-service}} | |
| Component of Identity that provides a rule-management | |
| interface and a rule-based authorization engine. | |
| \item[{policy-based routing (PBR)\index{policy-based routing (PBR)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-policy-based-routing-pbr}} | |
| Provides a mechanism to implement packet forwarding and routing | |
| according to the policies defined by the network administrator. | |
| \item[{pool\index{pool|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-pool}} | |
| A logical set of devices, such as web servers, that you | |
| group together to receive and process traffic. The load | |
| balancing function chooses which member of the pool handles | |
| the new requests or connections received on the VIP | |
| address. Each VIP has one pool. | |
| \item[{pool member\index{pool member|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-pool-member}} | |
| An application that runs on the back-end server in a | |
| load-balancing system. | |
| \item[{port\index{port|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-port}} | |
| A virtual network port within Networking; VIFs / vNICs are | |
| connected to a port. | |
| \item[{port UUID\index{port UUID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-port-uuid}} | |
| Unique ID for a Networking port. | |
| \item[{preseed\index{preseed|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-preseed}} | |
| A tool to automate system configuration and installation on | |
| Debian-based Linux distributions. | |
| \item[{private image\index{private image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-private-image}} | |
| An Image service VM image that is only available to specified | |
| projects. | |
| \item[{private IP address\index{private IP address|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-private-ip-address}} | |
| An IP address used for management and administration, not | |
| available to the public Internet. | |
| \item[{private network\index{private network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-private-network}} | |
| The Network Controller provides virtual networks to enable | |
| compute servers to interact with each other and with the public | |
| network. All machines must have a public and private network | |
| interface. A private network interface can be a flat or VLAN network | |
| interface. A flat network interface is controlled by the | |
| \sphinxcode{flat\_interface} option with flat managers. A VLAN network interface is | |
| controlled by the \sphinxcode{vlan\_interface} option with VLAN | |
| managers. | |
| \item[{project\index{project|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-project}} | |
| Projects represent the base unit of ``ownership'' in OpenStack, | |
| in that all resources in OpenStack should be owned by a specific project. | |
| In OpenStack Identity, a project must be owned by a specific domain. | |
| \item[{project ID\index{project ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-project-id}} | |
| Unique ID assigned to each project by the Identity service. | |
| \item[{project VPN\index{project VPN|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-project-vpn}} | |
| Alternative term for a cloudpipe. | |
| \item[{promiscuous mode\index{promiscuous mode|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-promiscuous-mode}} | |
| Causes the network interface to pass all traffic it | |
| receives to the host rather than passing only the frames | |
| addressed to it. | |
| \item[{protected property\index{protected property|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-protected-property}} | |
| Generally, extra properties on an Image service image to | |
| which only cloud administrators have access. Limits which user | |
| roles can perform CRUD operations on that property. The cloud | |
| administrator can configure any image property as | |
| protected. | |
| \item[{provider\index{provider|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-provider}} | |
| An administrator who has access to all hosts and | |
| instances. | |
| \item[{proxy node\index{proxy node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-proxy-node}} | |
| A node that provides the Object Storage proxy service. | |
| \item[{proxy server\index{proxy server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-proxy-server}} | |
| Users of Object Storage interact with the service through the | |
| proxy server, which in turn looks up the location of the requested | |
| data within the ring and returns the results to the user. | |
| \item[{public API\index{public API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-public-api}} | |
| An API endpoint used for both service-to-service communication | |
| and end-user interactions. | |
| \item[{public image\index{public image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-public-image}} | |
| An Image service VM image that is available to all | |
| projects. | |
| \item[{public IP address\index{public IP address|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-public-ip-address}} | |
| An IP address that is accessible to end-users. | |
| \item[{public key authentication\index{public key authentication|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-public-key-authentication}} | |
| Authentication method that uses keys rather than | |
| passwords. | |
| \item[{public network\index{public network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-public-network}} | |
| The Network Controller provides virtual networks to enable | |
| compute servers to interact with each other and with the public | |
| network. All machines must have a public and private network | |
| interface. The public network interface is controlled by the | |
| \sphinxcode{public\_interface} option. | |
| \item[{Puppet\index{Puppet|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-puppet}} | |
| An operating system configuration-management tool supported by | |
| OpenStack. | |
| \item[{Python\index{Python|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-python}} | |
| Programming language used extensively in OpenStack. | |
| \end{description} | |
| \subsection{Q} | |
| \label{\detokenize{common/glossary:q}}\begin{description} | |
| \item[{QEMU Copy On Write 2 (QCOW2)\index{QEMU Copy On Write 2 (QCOW2)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-qemu-copy-on-write-2-qcow2}} | |
| One of the VM image disk formats supported by Image | |
| service. | |
| \item[{Qpid\index{Qpid|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-qpid}} | |
| Message queue software supported by OpenStack; an alternative to | |
| RabbitMQ. | |
| \item[{Quality of Service (QoS)\index{Quality of Service (QoS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-quality-of-service-qos}} | |
| The ability to guarantee certain network or storage requirements to | |
| satisfy a Service Level Agreement (SLA) between an application provider | |
| and end users. | |
| Typically includes performance requirements like networking bandwidth, | |
| latency, jitter correction, and reliability as well as storage | |
| performance in Input/Output Operations Per Second (IOPS), throttling | |
| agreements, and performance expectations at peak load. | |
| \item[{quarantine\index{quarantine|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-quarantine}} | |
| If Object Storage finds objects, containers, or accounts that | |
| are corrupt, they are placed in this state. Quarantined items are not | |
| replicated and cannot be read by clients, and a correct copy is re-replicated. | |
| \item[{Queens\index{Queens|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-queens}} | |
| The code name for the seventeenth release of OpenStack. The | |
| design summit will take place in Sydney, Australia. The release | |
| is named after the Queens Pound river in the South Coast region | |
| of New South Wales. | |
| \item[{Quick EMUlator (QEMU)\index{Quick EMUlator (QEMU)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-quick-emulator-qemu}} | |
| QEMU is a generic and open source machine emulator and | |
| virtualizer. | |
| One of the hypervisors supported by OpenStack, generally used | |
| for development purposes. | |
| \item[{quota\index{quota|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-quota}} | |
| In Compute and Block Storage, the ability to set resource limits | |
| on a per-project basis. | |
| \end{description} | |
| \subsection{R} | |
| \label{\detokenize{common/glossary:r}}\begin{description} | |
| \item[{RabbitMQ\index{RabbitMQ|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rabbitmq}} | |
| The default message queue software used by OpenStack. | |
| \item[{Rackspace Cloud Files\index{Rackspace Cloud Files|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rackspace-cloud-files}} | |
| Released as open source by Rackspace in 2010; the basis for | |
| Object Storage. | |
| \item[{RADOS Block Device (RBD)\index{RADOS Block Device (RBD)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rados-block-device-rbd}} | |
| Ceph component that enables a Linux block device to be striped | |
| over multiple distributed data stores. | |
| \item[{radvd\index{radvd|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-radvd}} | |
| The router advertisement daemon, used by the Compute VLAN | |
| manager and FlatDHCP manager to provide routing services for VM | |
| instances. | |
| \item[{rally\index{rally|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rally}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-benchmark-service-rally}]{\sphinxtermref{\DUrole{xref,std,std-term}{Benchmark service}}}}. | |
| \item[{RAM filter\index{RAM filter|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ram-filter}} | |
| The Compute setting that enables or disables RAM | |
| overcommitment. | |
| \item[{RAM overcommit\index{RAM overcommit|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ram-overcommit}} | |
| The ability to start new VM instances based on the actual memory | |
| usage of a host, as opposed to basing the decision on the amount of | |
| RAM each running instance thinks it has available. Also known as | |
| memory overcommit. | |
| \item[{rate limit\index{rate limit|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rate-limit}} | |
| Configurable option within Object Storage to limit database | |
| writes on a per-account and/or per-container basis. | |
| \item[{raw\index{raw|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-raw}} | |
| One of the VM image disk formats supported by Image service; an | |
| unstructured disk image. | |
| \item[{rebalance\index{rebalance|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rebalance}} | |
| The process of distributing Object Storage partitions across all | |
| drives in the ring; used during initial ring creation and after ring | |
| reconfiguration. | |
| \item[{reboot\index{reboot|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-reboot}} | |
| Either a soft or hard reboot of a server. With a soft reboot, | |
| the operating system is signaled to restart, which enables a graceful | |
| shutdown of all processes. A hard reboot is the equivalent of power | |
| cycling the server. The virtualization platform should ensure that the | |
| reboot action has completed successfully, even in cases in which the | |
| underlying domain/VM is paused or halted/stopped. | |
| \item[{rebuild\index{rebuild|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rebuild}} | |
| Removes all data on the server and replaces it with the | |
| specified image. Server ID and IP addresses remain the same. | |
| \item[{Recon\index{Recon|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-recon}} | |
| An Object Storage component that collects meters. | |
| \item[{record\index{record|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-record}} | |
| Belongs to a particular domain and is used to specify | |
| information about the domain. | |
| There are several types of DNS records. Each record type contains | |
| particular information used to describe the purpose of that record. | |
| Examples include mail exchange (MX) records, which specify the mail | |
| server for a particular domain; and name server (NS) records, which | |
| specify the authoritative name servers for a domain. | |
| \item[{record ID\index{record ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-record-id}} | |
| A number within a database that is incremented each time a | |
| change is made. Used by Object Storage when replicating. | |
| \item[{Red Hat Enterprise Linux (RHEL)\index{Red Hat Enterprise Linux (RHEL)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-red-hat-enterprise-linux-rhel}} | |
| A Linux distribution that is compatible with OpenStack. | |
| \item[{reference architecture\index{reference architecture|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-reference-architecture}} | |
| A recommended architecture for an OpenStack cloud. | |
| \item[{region\index{region|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-region}} | |
| A discrete OpenStack environment with dedicated API endpoints | |
| that typically shares only the Identity service (keystone) with other | |
| regions. | |
| \item[{registry\index{registry|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-registry}} | |
| Alternative term for the Image service registry. | |
| \item[{registry server\index{registry server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-registry-server}} | |
| An Image service that provides VM image metadata information to | |
| clients. | |
| \item[{Reliable, Autonomic Distributed Object Store (RADOS)\index{Reliable, Autonomic Distributed Object Store (RADOS)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-reliable-autonomic-distributed-object-store}} | |
| A collection of components that provides object storage within | |
| Ceph. Similar to OpenStack Object Storage. | |
| \item[{Remote Procedure Call (RPC)\index{Remote Procedure Call (RPC)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-remote-procedure-call-rpc}} | |
| The method used by the Compute RabbitMQ for intra-service | |
| communications. | |
| \item[{replica\index{replica|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-replica}} | |
| Provides data redundancy and fault tolerance by creating copies | |
| of Object Storage objects, accounts, and containers so that they are | |
| not lost when the underlying storage fails. | |
| \item[{replica count\index{replica count|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-replica-count}} | |
| The number of replicas of the data in an Object Storage | |
| ring. | |
| \item[{replication\index{replication|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-replication}} | |
| The process of copying data to a separate physical device for | |
| fault tolerance and performance. | |
| \item[{replicator\index{replicator|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-replicator}} | |
| The Object Storage back-end process that creates and manages | |
| object replicas. | |
| \item[{request ID\index{request ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-request-id}} | |
| Unique ID assigned to each request sent to Compute. | |
| \item[{rescue image\index{rescue image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rescue-image}} | |
| A special type of VM image that is booted when an instance is | |
| placed into rescue mode. Allows an administrator to mount the file | |
| systems for an instance to correct the problem. | |
| \item[{resize\index{resize|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-resize}} | |
| Converts an existing server to a different flavor, which scales | |
| the server up or down. The original server is saved to enable rollback | |
| if a problem occurs. All resizes must be tested and explicitly | |
| confirmed, at which time the original server is removed. | |
| \item[{RESTful\index{RESTful|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-restful}} | |
| A kind of web service API that uses REST, or Representational | |
| State Transfer. REST is the style of architecture for hypermedia | |
| systems that is used for the World Wide Web. | |
| \item[{ring\index{ring|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ring}} | |
| An entity that maps Object Storage data to partitions. A | |
| separate ring exists for each service, such as account, object, and | |
| container. | |
| \item[{ring builder\index{ring builder|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ring-builder}} | |
| Builds and manages rings within Object Storage, assigns | |
| partitions to devices, and pushes the configuration to other storage | |
| nodes. | |
| \item[{role\index{role|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-role}} | |
| A personality that a user assumes to perform a specific set of | |
| operations. A role includes a set of rights and privileges. A user | |
| assuming that role inherits those rights and privileges. | |
| \item[{Role Based Access Control (RBAC)\index{Role Based Access Control (RBAC)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-role-based-access-control-rbac}} | |
| Provides a predefined list of actions that the user can perform, | |
| such as starting or stopping VMs and resetting passwords. Supported in | |
| both Identity and Compute and can be configured using the dashboard. | |
| \item[{role ID\index{role ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-role-id}} | |
| Alphanumeric ID assigned to each Identity service role. | |
| \item[{Root Cause Analysis (RCA) service (Vitrage)\index{Root Cause Analysis (RCA) service (Vitrage)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-root-cause-analysis-rca-service-vitrage}} | |
| OpenStack project that aims to organize, analyze, and visualize OpenStack | |
| alarms and events, yield insights regarding the root cause of problems, | |
| and deduce their existence before they are directly detected. | |
| \item[{rootwrap\index{rootwrap|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rootwrap}} | |
| A feature of Compute that allows the unprivileged ``nova'' user to | |
| run a specified list of commands as the Linux root user. | |
| \item[{round-robin scheduler\index{round-robin scheduler|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-round-robin-scheduler}} | |
| Type of Compute scheduler that evenly distributes instances | |
| among available hosts. | |
| \item[{router\index{router|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-router}} | |
| A physical or virtual network device that passes network | |
| traffic between different networks. | |
| \item[{routing key\index{routing key|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-routing-key}} | |
| The Compute direct exchanges, fanout exchanges, and topic | |
| exchanges use this key to determine how to process a message; | |
| processing varies depending on exchange type. | |
| \item[{RPC driver\index{RPC driver|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rpc-driver}} | |
| Modular system that allows the underlying message queue software | |
| of Compute to be changed. For example, from RabbitMQ to ZeroMQ or | |
| Qpid. | |
| \item[{rsync\index{rsync|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rsync}} | |
| Used by Object Storage to push object replicas. | |
| \item[{RXTX cap\index{RXTX cap|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rxtx-cap}} | |
| Absolute limit on the amount of network traffic a Compute VM | |
| instance can send and receive. | |
| \item[{RXTX quota\index{RXTX quota|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-rxtx-quota}} | |
| Soft limit on the amount of network traffic a Compute VM | |
| instance can send and receive. | |
| \end{description} | |
| \subsection{S} | |
| \label{\detokenize{common/glossary:s}}\begin{description} | |
| \item[{sahara\index{sahara|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-sahara}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-data-processing-service-sahara}]{\sphinxtermref{\DUrole{xref,std,std-term}{Data Processing service}}}}. | |
| \item[{SAML assertion\index{SAML assertion|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-saml-assertion}} | |
| Contains information about a user as provided by the identity | |
| provider. It is an indication that a user has been authenticated. | |
| \item[{scheduler manager\index{scheduler manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-scheduler-manager}} | |
| A Compute component that determines where VM instances should | |
| start. Uses modular design to support a variety of scheduler | |
| types. | |
| \item[{scoped token\index{scoped token|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-scoped-token}} | |
| An Identity service API access token that is associated with a | |
| specific project. | |
| \item[{scrubber\index{scrubber|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-scrubber}} | |
| Checks for and deletes unused images; the component of Image | |
| service that implements delayed delete. | |
| \item[{secret key\index{secret key|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-secret-key}} | |
| String of text known only by the user; used along with an access | |
| key to make requests to the Compute API. | |
| \item[{secure boot\index{secure boot|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-secure-boot}} | |
| Process whereby the system firmware validates the authenticity of | |
| the code involved in the boot process. | |
| \item[{secure shell (SSH)\index{secure shell (SSH)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-secure-shell-ssh}} | |
| Open source tool used to access remote hosts through an | |
| encrypted communications channel. SSH key injection is supported by | |
| Compute. | |
| \item[{security group\index{security group|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-security-group}} | |
| A set of network traffic filtering rules that are applied to a | |
| Compute instance. | |
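| For illustration, a sketch of a security group that admits only SSH, | |
| assuming the openstacksdk library and a hypothetical ``demo'' entry in | |
| clouds.yaml: | |
| \begin{verbatim} | |
| import openstack | |
|  | |
| conn = openstack.connect(cloud="demo") | |
| sg = conn.network.create_security_group(name="ssh-only") | |
| # Allow inbound TCP port 22 from anywhere; all else stays blocked. | |
| conn.network.create_security_group_rule( | |
|     security_group_id=sg.id, direction="ingress", | |
|     protocol="tcp", port_range_min=22, port_range_max=22, | |
|     remote_ip_prefix="0.0.0.0/0") | |
| \end{verbatim} | |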
| \item[{segmented object\index{segmented object|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-segmented-object}} | |
| An Object Storage large object that has been broken up into | |
| pieces. The re-assembled object is called a concatenated | |
| object. | |
| \item[{self-service\index{self-service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-self-service}} | |
| For IaaS, ability for a regular (non-privileged) account to | |
| manage a virtual infrastructure component such as networks without | |
| involving an administrator. | |
| \item[{SELinux\index{SELinux|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-selinux}} | |
| Linux kernel security module that provides the mechanism for | |
| supporting access control policies. | |
| \item[{senlin\index{senlin|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-senlin}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-clustering-service-senlin}]{\sphinxtermref{\DUrole{xref,std,std-term}{Clustering service}}}}. | |
| \item[{server\index{server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-server}} | |
| Computer that provides explicit services to the client software | |
| running on that system, often managing a variety of computer | |
| operations. | |
| A server is a VM instance in the Compute system. Flavor and | |
| image are requisite elements when creating a server. | |
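| For illustration, a sketch of booting a server with the openstacksdk | |
| library; the ``demo'' cloud entry and all IDs are hypothetical: | |
| \begin{verbatim} | |
| import openstack | |
|  | |
| conn = openstack.connect(cloud="demo") | |
| # Flavor and image are the requisite elements of a server. | |
| server = conn.compute.create_server( | |
|     name="demo-vm", image_id="IMAGE_UUID", flavor_id="FLAVOR_ID", | |
|     networks=[{"uuid": "NETWORK_UUID"}]) | |
| # Block until the instance reaches ACTIVE. | |
| conn.compute.wait_for_server(server) | |
| \end{verbatim} | |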
| \item[{server image\index{server image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-server-image}} | |
| Alternative term for a VM image. | |
| \item[{server UUID\index{server UUID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-server-uuid}} | |
| Unique ID assigned to each guest VM instance. | |
| \item[{service\index{service|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service}} | |
| An OpenStack service, such as Compute, Object Storage, or Image | |
| service. Provides one or more endpoints through which users can access | |
| resources and perform operations. | |
| \item[{service catalog\index{service catalog|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-catalog}} | |
| Alternative term for the Identity service catalog. | |
| \item[{Service Function Chain (SFC)\index{Service Function Chain (SFC)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-function-chain-sfc}} | |
| For a given service, SFC is the abstracted view of the required | |
| service functions and the order in which they are to be applied. | |
| \item[{service ID\index{service ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-id}} | |
| Unique ID assigned to each service that is available in the | |
| Identity service catalog. | |
| \item[{Service Level Agreement (SLA)\index{Service Level Agreement (SLA)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-level-agreement-sla}} | |
| Contractual obligations that ensure the availability of a | |
| service. | |
| \item[{service project\index{service project|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-project}} | |
| Special project that contains all services that are listed in the | |
| catalog. | |
| \item[{service provider\index{service provider|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-provider}} | |
| A system that provides services to other system entities. In | |
| the case of federated identity, OpenStack Identity is the service | |
| provider. | |
| \item[{service registration\index{service registration|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-registration}} | |
| An Identity service feature that enables services, such as | |
| Compute, to automatically register with the catalog. | |
| \item[{service token\index{service token|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-service-token}} | |
| An administrator-defined token used by Compute to communicate | |
| securely with the Identity service. | |
| \item[{session back end\index{session back end|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-session-back-end}} | |
| The method of storage used by horizon to track client sessions, | |
| such as local memory, cookies, a database, or memcached. | |
| \item[{session persistence\index{session persistence|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-session-persistence}} | |
| A feature of the load-balancing service. It attempts to force | |
| subsequent connections to a service to be redirected to the same node | |
| as long as it is online. | |
| \item[{session storage\index{session storage|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-session-storage}} | |
| A horizon component that stores and tracks client session | |
| information. Implemented through the Django sessions framework. | |
| \item[{share\index{share|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-share}} | |
| A remote, mountable file system in the context of the {\hyperref[\detokenize{common/glossary:term-shared-file-systems-service-manila}]{\sphinxtermref{\DUrole{xref,std,std-term}{Shared | |
| File Systems service}}}}. You can | |
| mount a share to, and access a share from, several hosts by several | |
| users at a time. | |
| \item[{share network\index{share network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-share-network}} | |
| An entity in the context of the {\hyperref[\detokenize{common/glossary:term-shared-file-systems-service-manila}]{\sphinxtermref{\DUrole{xref,std,std-term}{Shared File Systems | |
| service}}}} that encapsulates | |
| interaction with the Networking service. If the driver you selected | |
| runs in a mode that requires such interaction, you must | |
| specify a share network when creating a share. | |
| \item[{Shared File Systems API\index{Shared File Systems API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-shared-file-systems-api}} | |
| A Shared File Systems service that provides a stable RESTful API. | |
| The service authenticates and routes requests throughout the Shared | |
| File Systems service. Use python-manilaclient to interact with | |
| the API. | |
| \item[{Shared File Systems service (manila)\index{Shared File Systems service (manila)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-shared-file-systems-service-manila}} | |
| The OpenStack service that provides | |
| management of shared file systems in a multi-tenant cloud | |
| environment, similar to how OpenStack provides block-based storage | |
| management through the OpenStack {\hyperref[\detokenize{common/glossary:term-block-storage-service-cinder}]{\sphinxtermref{\DUrole{xref,std,std-term}{Block Storage service}}}} project. | |
| With the Shared File Systems service, you can create a remote file | |
| system and mount the file system on your instances. You can also | |
| read and write data from your instances to and from your file system. | |
| \item[{shared IP address\index{shared IP address|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-shared-ip-address}} | |
| An IP address that can be assigned to a VM instance within the | |
| shared IP group. Public IP addresses can be shared across multiple | |
| servers for use in various high-availability scenarios. When an IP | |
| address is shared to another server, the cloud network restrictions | |
| are modified to enable each server to listen to and respond on that IP | |
| address. You can optionally specify that the target server network | |
| configuration be modified. Shared IP addresses can be used with many | |
| standard heartbeat facilities, such as keepalived, that monitor for | |
| failure and manage IP failover. | |
| \item[{shared IP group\index{shared IP group|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-shared-ip-group}} | |
| A collection of servers that can share IPs with other members of | |
| the group. Any server in a group can share one or more public IPs with | |
| any other server in the group. With the exception of the first server | |
| in a shared IP group, servers must be launched into shared IP groups. | |
| A server may be a member of only one shared IP group. | |
| \item[{shared storage\index{shared storage|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-shared-storage}} | |
| Block storage that is simultaneously accessible by multiple | |
| clients, for example, NFS. | |
| \item[{Sheepdog\index{Sheepdog|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-sheepdog}} | |
| Distributed block storage system for QEMU, supported by | |
| OpenStack. | |
| \item[{Simple Cloud Identity Management (SCIM)\index{Simple Cloud Identity Management (SCIM)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-simple-cloud-identity-management-scim}} | |
| Specification for managing identity in the cloud, currently | |
| unsupported by OpenStack. | |
| \item[{Simple Protocol for Independent Computing Environments (SPICE)\index{Simple Protocol for Independent Computing Environments (SPICE)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-simple-protocol-for-independent-computing-environments-spice}} | |
| SPICE provides remote desktop access to guest virtual machines. It | |
| is an alternative to VNC. SPICE is supported by OpenStack. | |
| \item[{Single-root I/O Virtualization (SR-IOV)\index{Single-root I/O Virtualization (SR-IOV)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-single-root-i-o-virtualization-sr-iov}} | |
| A specification that, when implemented by a physical PCIe | |
| device, enables it to appear as multiple separate PCIe devices. This | |
| enables multiple virtualized guests to share direct access to the | |
| physical device, offering improved performance over an equivalent | |
| virtual device. Currently supported in OpenStack Havana and later | |
| releases. | |
| \item[{SmokeStack\index{SmokeStack|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-smokestack}} | |
| Runs automated tests against the core OpenStack API; written in | |
| Rails. | |
| \item[{snapshot\index{snapshot|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-snapshot}} | |
| A point-in-time copy of an OpenStack storage volume or image. | |
| Use storage volume snapshots to back up volumes. Use image snapshots | |
| to back up data, or as ``gold'' images for additional servers. | |
| \item[{soft reboot\index{soft reboot|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-soft-reboot}} | |
| A controlled reboot where a VM instance is properly restarted | |
| through operating system commands. | |
| \item[{Software Development Lifecycle Automation service (solum)\index{Software Development Lifecycle Automation service (solum)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-software-development-lifecycle-automation-service-solum}} | |
| OpenStack project that aims to make cloud services easier to | |
| consume and integrate with application development process | |
| by automating the source-to-image process, and simplifying | |
| app-centric deployment. | |
| \item[{Software-defined networking (SDN)\index{Software-defined networking (SDN)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-software-defined-networking-sdn}} | |
| Provides an approach for network administrators to manage computer | |
| network services through abstraction of lower-level functionality. | |
| \item[{SolidFire Volume Driver\index{SolidFire Volume Driver|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-solidfire-volume-driver}} | |
| The Block Storage driver for the SolidFire iSCSI storage | |
| appliance. | |
| \item[{solum\index{solum|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-solum}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-software-development-lifecycle-automation-service-solum}]{\sphinxtermref{\DUrole{xref,std,std-term}{Software Development Lifecycle Automation | |
| service}}}}. | |
| \item[{spread-first scheduler\index{spread-first scheduler|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-spread-first-scheduler}} | |
| The Compute VM scheduling algorithm that attempts to start a new | |
| VM on the host with the least amount of load. | |
| \item[{SQLAlchemy\index{SQLAlchemy|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-sqlalchemy}} | |
| An open source SQL toolkit for Python, used in OpenStack. | |
| \item[{SQLite\index{SQLite|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-sqlite}} | |
| A lightweight SQL database, used as the default persistent | |
| storage method in many OpenStack services. | |
| \item[{stack\index{stack|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-stack}} | |
| A set of OpenStack resources created and managed by the | |
| Orchestration service according to a given template (either an | |
| AWS CloudFormation template or a Heat Orchestration | |
| Template (HOT)). | |
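| (See the template sketch following this list.) | |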
| \item[{StackTach\index{StackTach|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-stacktach}} | |
| Community project that captures Compute AMQP communications; | |
| useful for debugging. | |
| \item[{static IP address\index{static IP address|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-static-ip-address}} | |
| Alternative term for a fixed IP address. | |
| \item[{StaticWeb\index{StaticWeb|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-staticweb}} | |
| WSGI middleware component of Object Storage that serves | |
| container data as a static web page. | |
| \item[{storage back end\index{storage back end|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-storage-back-end}} | |
| The method that a service uses for persistent storage, such as | |
| iSCSI, NFS, or local disk. | |
| \item[{storage manager\index{storage manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-storage-manager}} | |
| A XenAPI component that provides a pluggable interface to | |
| support a wide variety of persistent storage back ends. | |
| \item[{storage manager back end\index{storage manager back end|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-storage-manager-back-end}} | |
| A persistent storage method supported by XenAPI, such as iSCSI | |
| or NFS. | |
| \item[{storage node\index{storage node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-storage-node}} | |
| An Object Storage node that provides container services, account | |
| services, and object services; controls the account databases, | |
| container databases, and object storage. | |
| \item[{storage services\index{storage services|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-storage-services}} | |
| Collective name for the Object Storage object services, | |
| container services, and account services. | |
| \item[{strategy\index{strategy|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-strategy}} | |
| Specifies the authentication source used by Image service or | |
| Identity. In the Database service, it refers to the extensions | |
| implemented for a data store. | |
| \item[{subdomain\index{subdomain|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-subdomain}} | |
| A domain within a parent domain. Subdomains cannot be | |
| registered. Subdomains enable you to delegate domains. Subdomains can | |
| themselves have subdomains, so third-level, fourth-level, fifth-level, | |
| and deeper levels of nesting are possible. | |
| \item[{subnet\index{subnet|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-subnet}} | |
| Logical subdivision of an IP network. | |
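| (See the subnetting example following this list.) | |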
| \item[{SUSE Linux Enterprise Server (SLES)\index{SUSE Linux Enterprise Server (SLES)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-suse-linux-enterprise-server-sles}} | |
| A Linux distribution that is compatible with OpenStack. | |
| \item[{suspend\index{suspend|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-suspend}} | |
| Alternative term for a paused VM instance. | |
| \item[{swap\index{swap|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-swap}} | |
| Disk-based virtual memory used by operating systems to provide | |
| more memory than is actually available on the system. | |
| \item[{swauth\index{swauth|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-swauth}} | |
| An authentication and authorization service for Object Storage, | |
| implemented through WSGI middleware; uses Object Storage itself as the | |
| persistent backing store. | |
| \item[{swift\index{swift|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-swift}} | |
| Codename for OpenStack {\hyperref[\detokenize{common/glossary:term-object-storage-service-swift}]{\sphinxtermref{\DUrole{xref,std,std-term}{Object Storage service}}}}. | |
| \item[{swift All in One (SAIO)\index{swift All in One (SAIO)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-swift-all-in-one-saio}} | |
| Creates a full Object Storage development environment within a | |
| single VM. | |
| \item[{swift middleware\index{swift middleware|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-swift-middleware}} | |
| Collective term for Object Storage components that provide | |
| additional functionality. | |
| \item[{swift proxy server\index{swift proxy server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-swift-proxy-server}} | |
| Acts as the gatekeeper to Object Storage and is responsible for | |
| authenticating the user. | |
| \item[{swift storage node\index{swift storage node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-swift-storage-node}} | |
| A node that runs Object Storage account, container, and object | |
| services. | |
| \item[{sync point\index{sync point|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-sync-point}} | |
| The point in time of the last container and account database | |
| sync among nodes within Object Storage. | |
| \item[{sysadmin\index{sysadmin|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-sysadmin}} | |
| One of the default roles in the Compute RBAC system. Enables a | |
| user to add other users to a project, interact with VM images that are | |
| associated with the project, and start and stop VM instances. | |
| \item[{system usage\index{system usage|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-system-usage}} | |
| A Compute component that, along with the notification system, | |
| collects meters and usage information. This information can be used | |
| for billing. | |
| \end{description} | |
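| The following is a minimal sketch, not taken from this guide, of the kind | |
| of template that defines a stack, expressed as a Python dict mirroring a | |
| Heat Orchestration Template (HOT); the resource name, flavor, and image | |
| values are illustrative only. | |
| \begin{verbatim} | |
| # Hypothetical example: a HOT-style template as a Python dict. | |
| template = { | |
|     "heat_template_version": "2016-10-14", | |
|     "resources": { | |
|         "my_server": { | |
|             "type": "OS::Nova::Server", | |
|             "properties": {"flavor": "m1.small", "image": "cirros"}, | |
|         }, | |
|     }, | |
| } | |
| # Assuming `heat` is an authenticated python-heatclient Client, a | |
| # stack could then be created from this template: | |
| # heat.stacks.create(stack_name="demo", template=template) | |
| \end{verbatim} | |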
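| As a short illustration of subnetting (an example added here, not part of | |
| the original text), Python's standard ipaddress module can divide a | |
| network into subnets: | |
| \begin{verbatim} | |
| import ipaddress | |
|  | |
| network = ipaddress.ip_network("10.0.0.0/24") | |
| # Split the /24 network into four /26 subnets. | |
| for subnet in network.subnets(new_prefix=26): | |
|     print(subnet) | |
| # 10.0.0.0/26, 10.0.0.64/26, 10.0.0.128/26, 10.0.0.192/26 | |
| \end{verbatim} | |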
| \subsection{T} | |
| \label{\detokenize{common/glossary:t}}\begin{description} | |
| \item[{tacker\index{tacker|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tacker}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-nfv-orchestration-service-tacker}]{\sphinxtermref{\DUrole{xref,std,std-term}{NFV Orchestration service}}}}. | |
| \item[{Telemetry service (telemetry)\index{Telemetry service (telemetry)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-telemetry-service-telemetry}} | |
| The OpenStack project which collects measurements of the utilization | |
| of the physical and virtual resources comprising deployed clouds, | |
| persists this data for subsequent retrieval and analysis, and triggers | |
| actions when defined criteria are met. | |
| \item[{TempAuth\index{TempAuth|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tempauth}} | |
| An authentication facility within Object Storage that enables | |
| Object Storage itself to perform authentication and authorization. | |
| Frequently used in testing and development. | |
| \item[{Tempest\index{Tempest|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tempest}} | |
| Automated software test suite designed to run against the trunk | |
| of the OpenStack core project. | |
| \item[{TempURL\index{TempURL|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tempurl}} | |
| An Object Storage middleware component that enables creation of | |
| URLs for temporary object access. | |
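| (See the signature sketch following this list.) | |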
| \item[{tenant\index{tenant|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tenant}} | |
| A group of users; used to isolate access to Compute resources. | |
| An alternative term for a project. | |
| \item[{Tenant API\index{Tenant API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tenant-api}} | |
| An API that is accessible to projects. | |
| \item[{tenant endpoint\index{tenant endpoint|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tenant-endpoint}} | |
| An Identity service API endpoint that is associated with one or | |
| more projects. | |
| \item[{tenant ID\index{tenant ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tenant-id}} | |
| An alternative term for {\hyperref[\detokenize{common/glossary:term-project-id}]{\sphinxtermref{\DUrole{xref,std,std-term}{project ID}}}}. | |
| \item[{token\index{token|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-token}} | |
| An alphanumeric string used to access OpenStack APIs | |
| and resources. | |
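| (See the token-issuance sketch following this list.) | |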
| \item[{token services\index{token services|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-token-services}} | |
| An Identity service component that manages and validates tokens | |
| after a user or project has been authenticated. | |
| \item[{tombstone\index{tombstone|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tombstone}} | |
| Used to mark Object Storage objects that have been | |
| deleted; ensures that the object is not updated on another node after | |
| it has been deleted. | |
| \item[{topic publisher\index{topic publisher|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-topic-publisher}} | |
| A process that is created when an RPC call is executed; used to | |
| push the message to the topic exchange. | |
| \item[{Torpedo\index{Torpedo|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-torpedo}} | |
| Community project used to run automated tests against the | |
| OpenStack API. | |
| \item[{transaction ID\index{transaction ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-transaction-id}} | |
| Unique ID assigned to each Object Storage request; used for | |
| debugging and tracing. | |
| \item[{transient\index{transient|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-transient}} | |
| Alternative term for non-durable. | |
| \item[{transient exchange\index{transient exchange|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-transient-exchange}} | |
| Alternative term for a non-durable exchange. | |
| \item[{transient message\index{transient message|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-transient-message}} | |
| A message that is stored in memory and is lost after the server | |
| is restarted. | |
| \item[{transient queue\index{transient queue|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-transient-queue}} | |
| Alternative term for a non-durable queue. | |
| \item[{TripleO\index{TripleO|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-tripleo}} | |
| OpenStack-on-OpenStack program. The code name for the | |
| OpenStack Deployment program. | |
| \item[{trove\index{trove|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-trove}} | |
| Codename for OpenStack {\hyperref[\detokenize{common/glossary:term-database-service-trove}]{\sphinxtermref{\DUrole{xref,std,std-term}{Database service}}}}. | |
| \item[{trusted platform module (TPM)\index{trusted platform module (TPM)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-trusted-platform-module-tpm}} | |
| Specialized microprocessor for incorporating cryptographic keys | |
| into devices for authenticating and securing a hardware platform. | |
| \end{description} | |
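| The TempURL mechanism can be illustrated with a short, hedged sketch of | |
| its documented signature scheme; the key, host, and object path below | |
| are placeholders. | |
| \begin{verbatim} | |
| import hmac | |
| import time | |
| from hashlib import sha1 | |
|  | |
| key = b"MYSECRETKEY"              # X-Account-Meta-Temp-URL-Key | |
| method = "GET" | |
| expires = int(time.time() + 600)  # link valid for 10 minutes | |
| path = "/v1/AUTH_account/container/object" | |
|  | |
| body = "%s\n%s\n%s" % (method, expires, path) | |
| signature = hmac.new(key, body.encode("utf-8"), sha1).hexdigest() | |
| url = ("https://swift.example.com" + path + | |
|        "?temp_url_sig=" + signature + | |
|        "&temp_url_expires=" + str(expires)) | |
| \end{verbatim} | |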
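| Similarly, a token can be obtained programmatically; the following is a | |
| minimal sketch assuming the keystoneauth1 library and placeholder | |
| credentials and endpoint. | |
| \begin{verbatim} | |
| from keystoneauth1 import session | |
| from keystoneauth1.identity import v3 | |
|  | |
| auth = v3.Password( | |
|     auth_url="https://keystone.example.com:5000/v3", | |
|     username="demo", password="secret", project_name="demo", | |
|     user_domain_id="default", project_domain_id="default") | |
| sess = session.Session(auth=auth) | |
| token = sess.get_token()   # the alphanumeric token string | |
| \end{verbatim} | |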
| \subsection{U} | |
| \label{\detokenize{common/glossary:u}}\begin{description} | |
| \item[{Ubuntu\index{Ubuntu|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-ubuntu}} | |
| A Debian-based Linux distribution. | |
| \item[{unscoped token\index{unscoped token|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-unscoped-token}} | |
| Alternative term for an Identity service default token. | |
| \item[{updater\index{updater|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-updater}} | |
| Collective term for a group of Object Storage components that | |
| processes queued and failed updates for containers and objects. | |
| \item[{user\index{user|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-user}} | |
| In OpenStack Identity, entities represent individual API | |
| consumers and are owned by a specific domain. In OpenStack Compute, | |
| a user can be associated with roles, projects, or both. | |
| \item[{user data\index{user data|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-user-data}} | |
| A blob of data that the user can specify when they launch | |
| an instance. The instance can access this data through the | |
| metadata service or config drive. | |
| Commonly used to pass a shell script that the instance runs on boot. | |
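| (See the user-data example following this list.) | |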
| \item[{User Mode Linux (UML)\index{User Mode Linux (UML)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-user-mode-linux-uml}} | |
| An OpenStack-supported hypervisor. | |
| \end{description} | |
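| To make the user data term above concrete: the Compute API expects user | |
| data to be base64-encoded. The script below and the commented server | |
| call are illustrative, and \sphinxcode{conn} is assumed to be an | |
| authenticated openstacksdk Connection. | |
| \begin{verbatim} | |
| import base64 | |
|  | |
| boot_script = b"""#!/bin/sh | |
| echo "hello from user data" > /tmp/greeting | |
| """ | |
| user_data = base64.b64encode(boot_script).decode("utf-8") | |
|  | |
| # conn.compute.create_server( | |
| #     name="demo", image_id=IMAGE, flavor_id=FLAVOR, | |
| #     networks=[{"uuid": NETWORK}], user_data=user_data) | |
| \end{verbatim} | |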
| \subsection{V} | |
| \label{\detokenize{common/glossary:v}}\begin{description} | |
| \item[{VIF UUID\index{VIF UUID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vif-uuid}} | |
| Unique ID assigned to each Networking VIF. | |
| \item[{Virtual Central Processing Unit (vCPU)\index{Virtual Central Processing Unit (vCPU)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-central-processing-unit-vcpu}} | |
| Subdivides physical CPUs. Instances can then use those | |
| divisions. | |
| \item[{Virtual Disk Image (VDI)\index{Virtual Disk Image (VDI)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-disk-image-vdi}} | |
| One of the VM image disk formats supported by Image | |
| service. | |
| \item[{Virtual Extensible LAN (VXLAN)\index{Virtual Extensible LAN (VXLAN)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-extensible-lan-vxlan}} | |
| A network virtualization technology that attempts to reduce the | |
| scalability problems associated with large cloud computing | |
| deployments. It uses a VLAN-like encapsulation technique to | |
| encapsulate Ethernet frames within UDP packets. | |
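| (See the header-packing sketch following this list.) | |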
| \item[{Virtual Hard Disk (VHD)\index{Virtual Hard Disk (VHD)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-hard-disk-vhd}} | |
| One of the VM image disk formats supported by Image | |
| service. | |
| \item[{virtual IP address (VIP)\index{virtual IP address (VIP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-ip-address-vip}} | |
| An Internet Protocol (IP) address configured on the load | |
| balancer for use by clients connecting to a service that is load | |
| balanced. Incoming connections are distributed to back-end nodes based | |
| on the configuration of the load balancer. | |
| \item[{virtual machine (VM)\index{virtual machine (VM)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-machine-vm}} | |
| An operating system instance that runs on top of a hypervisor. | |
| Multiple VMs can run at the same time on the same physical | |
| host. | |
| \item[{virtual network\index{virtual network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-network}} | |
| An L2 network segment within Networking. | |
| \item[{Virtual Network Computing (VNC)\index{Virtual Network Computing (VNC)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-network-computing-vnc}} | |
| Open source GUI and CLI tools used for remote console access to | |
| VMs. Supported by Compute. | |
| \item[{Virtual Network InterFace (VIF)\index{Virtual Network InterFace (VIF)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-network-interface-vif}} | |
| An interface that is plugged into a port in a Networking | |
| network. Typically a virtual network interface belonging to a | |
| VM. | |
| \item[{virtual networking\index{virtual networking|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-networking}} | |
| A generic term for virtualization of network functions | |
| such as switching, routing, load balancing, and security using | |
| a combination of VMs and overlays on physical network | |
| infrastructure. | |
| \item[{virtual port\index{virtual port|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-port}} | |
| Attachment point where a virtual interface connects to a virtual | |
| network. | |
| \item[{virtual private network (VPN)\index{virtual private network (VPN)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-private-network-vpn}} | |
| Provided by Compute in the form of cloudpipes, specialized | |
| instances that are used to create VPNs on a per-project basis. | |
| \item[{virtual server\index{virtual server|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-server}} | |
| Alternative term for a VM or guest. | |
| \item[{virtual switch (vSwitch)\index{virtual switch (vSwitch)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-switch-vswitch}} | |
| Software that runs on a host or node and provides the features | |
| and functions of a hardware-based network switch. | |
| \item[{virtual VLAN\index{virtual VLAN|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtual-vlan}} | |
| Alternative term for a virtual network. | |
| \item[{VirtualBox\index{VirtualBox|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-virtualbox}} | |
| An OpenStack-supported hypervisor. | |
| \item[{Vitrage\index{Vitrage|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vitrage}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-root-cause-analysis-rca-service-vitrage}]{\sphinxtermref{\DUrole{xref,std,std-term}{Root Cause Analysis service}}}}. | |
| \item[{VLAN manager\index{VLAN manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vlan-manager}} | |
| A Compute component that provides dnsmasq and radvd and sets up | |
| forwarding to and from cloudpipe instances. | |
| \item[{VLAN network\index{VLAN network|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vlan-network}} | |
| The Network Controller provides virtual networks to enable | |
| compute servers to interact with each other and with the public | |
| network. All machines must have a public and private network | |
| interface. A VLAN network is a private network interface, which is | |
| controlled by the \sphinxcode{vlan\_interface} option with VLAN | |
| managers. | |
| \item[{VM disk (VMDK)\index{VM disk (VMDK)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vm-disk-vmdk}} | |
| One of the VM image disk formats supported by Image | |
| service. | |
| \item[{VM image\index{VM image|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vm-image}} | |
| Alternative term for an image. | |
| \item[{VM Remote Control (VMRC)\index{VM Remote Control (VMRC)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vm-remote-control-vmrc}} | |
| Method to access VM instance consoles using a web browser. | |
| Supported by Compute. | |
| \item[{VMware API\index{VMware API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vmware-api}} | |
| Supports interaction with VMware products in Compute. | |
| \item[{VMware NSX Neutron plug-in\index{VMware NSX Neutron plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vmware-nsx-neutron-plug-in}} | |
| Provides support for VMware NSX in Neutron. | |
| \item[{VNC proxy\index{VNC proxy|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vnc-proxy}} | |
| A Compute component that provides users access to the consoles | |
| of their VM instances through VNC or VMRC. | |
| \item[{volume\index{volume|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume}} | |
| Disk-based data storage generally represented as an iSCSI target | |
| with a file system that supports extended attributes; can be | |
| persistent or ephemeral. | |
| \item[{Volume API\index{Volume API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-api}} | |
| Alternative name for the Block Storage API. | |
| \item[{volume controller\index{volume controller|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-controller}} | |
| A Block Storage component that oversees and coordinates storage | |
| volume actions. | |
| \item[{volume driver\index{volume driver|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-driver}} | |
| Alternative term for a volume plug-in. | |
| \item[{volume ID\index{volume ID|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-id}} | |
| Unique ID applied to each storage volume under the Block Storage | |
| control. | |
| \item[{volume manager\index{volume manager|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-manager}} | |
| A Block Storage component that creates, attaches, and detaches | |
| persistent storage volumes. | |
| \item[{volume node\index{volume node|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-node}} | |
| A Block Storage node that runs the cinder-volume daemon. | |
| \item[{volume plug-in\index{volume plug-in|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-plug-in}} | |
| Provides support for new and specialized types of back-end | |
| storage for the Block Storage volume manager. | |
| \item[{volume worker\index{volume worker|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-volume-worker}} | |
| A cinder component that interacts with back-end storage to manage | |
| the creation and deletion of volumes and the creation of compute | |
| volumes, provided by the cinder-volume daemon. | |
| \item[{vSphere\index{vSphere|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-vsphere}} | |
| An OpenStack-supported hypervisor. | |
| \end{description} | |
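| The VXLAN encapsulation mentioned above can be sketched briefly; this | |
| example, added for illustration, packs the 8-byte VXLAN header defined | |
| in RFC 7348 with a sample VXLAN network identifier (VNI). | |
| \begin{verbatim} | |
| import struct | |
|  | |
| def vxlan_header(vni): | |
|     # Flags byte 0x08 marks the VNI as valid; other fields are | |
|     # reserved. The VNI occupies the upper 24 bits of the second | |
|     # 32-bit word. | |
|     return struct.pack("!II", 0x08 << 24, vni << 8) | |
|  | |
| header = vxlan_header(vni=5001) | |
| # An encapsulated packet is: outer Ethernet/IP/UDP headers, this | |
| # 8-byte header, then the original Ethernet frame. | |
| \end{verbatim} | |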
| \subsection{W} | |
| \label{\detokenize{common/glossary:w}}\begin{description} | |
| \item[{Watcher\index{Watcher|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-watcher}} | |
| Code name for the {\hyperref[\detokenize{common/glossary:term-infrastructure-optimization-service-watcher}]{\sphinxtermref{\DUrole{xref,std,std-term}{Infrastructure Optimization service}}}}. | |
| \item[{weight\index{weight|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-weight}} | |
| Used by Object Storage devices to determine which storage | |
| devices are suitable for the job. Devices are weighted by size. | |
| \item[{weighted cost\index{weighted cost|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-weighted-cost}} | |
| The sum of each cost used when deciding where to start a new VM | |
| instance in Compute. | |
| \item[{weighting\index{weighting|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-weighting}} | |
| A Compute process that determines the suitability of a | |
| particular host for a new VM instance, for example, whether the host | |
| has enough RAM or available CPUs. | |
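| (See the scheduler sketch following this list.) | |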
| \item[{worker\index{worker|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-worker}} | |
| A daemon that listens to a queue and carries out tasks in | |
| response to messages. For example, the cinder-volume worker manages volume | |
| creation and deletion on storage arrays. | |
| \item[{Workflow service (mistral)\index{Workflow service (mistral)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-workflow-service-mistral}} | |
| The OpenStack service that provides a simple YAML-based language to | |
| write workflows (tasks and transition rules) and a service that | |
| lets you upload, modify, and run them at scale and in a highly | |
| available manner, and manage and monitor workflow execution state | |
| and the state of individual tasks. | |
| \end{description} | |
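| The weight, weighted cost, and weighting terms above can be illustrated | |
| with a simplified sketch; this is not the actual Compute scheduler code, | |
| and the cost functions and host data are invented for the example. | |
| \begin{verbatim} | |
| def ram_cost(host): | |
|     return -host["free_ram_mb"]   # more free RAM -> lower cost | |
|  | |
| def cpu_cost(host): | |
|     return -host["free_vcpus"]    # more free vCPUs -> lower cost | |
|  | |
| def weighted_cost(host, weights): | |
|     # The weighted cost is the sum of each weighted cost function. | |
|     return sum(w * fn(host) for fn, w in weights.items()) | |
|  | |
| hosts = [ | |
|     {"name": "node1", "free_ram_mb": 2048, "free_vcpus": 2}, | |
|     {"name": "node2", "free_ram_mb": 8192, "free_vcpus": 6}, | |
| ] | |
| weights = {ram_cost: 1.0, cpu_cost: 0.5} | |
| # Spread-first: pick the host with the lowest total cost, that is, | |
| # the least-loaded host. | |
| best = min(hosts, key=lambda h: weighted_cost(h, weights)) | |
| print(best["name"])   # node2 | |
| \end{verbatim} | |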
| \subsection{X} | |
| \label{\detokenize{common/glossary:x}}\begin{description} | |
| \item[{Xen\index{Xen|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-xen}} | |
| Xen is a hypervisor using a microkernel design, providing | |
| services that allow multiple computer operating systems to | |
| execute on the same computer hardware concurrently. | |
| \item[{Xen API\index{Xen API|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-xen-api}} | |
| The Xen administrative API, which is supported by | |
| Compute. | |
| \item[{Xen Cloud Platform (XCP)\index{Xen Cloud Platform (XCP)|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-xen-cloud-platform-xcp}} | |
| An OpenStack-supported hypervisor. | |
| \item[{Xen Storage Manager Volume Driver\index{Xen Storage Manager Volume Driver|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-xen-storage-manager-volume-driver}} | |
| A Block Storage volume plug-in that enables communication with | |
| the Xen Storage Manager API. | |
| \item[{XenServer\index{XenServer|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-xenserver}} | |
| An OpenStack-supported hypervisor. | |
| \item[{XFS\index{XFS|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-xfs}} | |
| High-performance 64-bit file system created by Silicon | |
| Graphics. Excels in parallel I/O operations and data | |
| consistency. | |
| \end{description} | |
| \subsection{Z} | |
| \label{\detokenize{common/glossary:z}}\begin{description} | |
| \item[{zaqar\index{zaqar|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-zaqar}} | |
| Codename for the {\hyperref[\detokenize{common/glossary:term-message-service-zaqar}]{\sphinxtermref{\DUrole{xref,std,std-term}{Message service}}}}. | |
| \item[{ZeroMQ\index{ZeroMQ|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-zeromq}} | |
| Message queue software supported by OpenStack. An alternative to | |
| RabbitMQ. Also spelled 0MQ. | |
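| (See the request/reply example following this list.) | |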
| \item[{Zuul\index{Zuul|textbf}}] \leavevmode\phantomsection\label{\detokenize{common/glossary:term-zuul}} | |
| Tool used in OpenStack development to ensure correctly ordered | |
| testing of changes in parallel. | |
| \end{description} | |
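| As an illustration of the ZeroMQ entry above, the following pyzmq sketch | |
| (added here as an example; the port is arbitrary) shows a request/reply | |
| exchange, with both ends in one process for brevity. | |
| \begin{verbatim} | |
| import zmq | |
|  | |
| ctx = zmq.Context() | |
| rep = ctx.socket(zmq.REP)           # replier ("server") end | |
| rep.bind("tcp://127.0.0.1:5555") | |
| req = ctx.socket(zmq.REQ)           # requester ("client") end | |
| req.connect("tcp://127.0.0.1:5555") | |
|  | |
| req.send(b"ping") | |
| print(rep.recv())                   # b'ping' | |
| rep.send(b"pong") | |
| print(req.recv())                   # b'pong' | |
| \end{verbatim} | |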
| \chapter{Search in this guide} | |
| \label{\detokenize{index:search-in-this-guide}}\begin{itemize} | |
| \item {} | |
| \DUrole{xref,std,std-ref}{search} | |
| \end{itemize} | |
| \renewcommand{\indexname}{Index} | |
| \printindex | |
| \end{document} |