Author: Deepak Vohra
Title: Kubernetes Management Design Patterns With Docker, CoreOS Linux, and Other Platforms
Publisher: Apress
ISBN: 9781484225981
Edition: 1
Price: CHF 43.20
Subject: Computer Science
Language: English
Pages: 410
Copy protection: Watermark/DRM
Supported devices: PC/MAC/eReader/Tablet
Format: PDF
Take container cluster management to the next level: learn how to administer and configure Kubernetes on CoreOS, and apply suitable management design patterns such as ConfigMaps, autoscaling, elastic resource usage, and high availability. Other topics discussed include logging, scheduling, rolling updates, volumes, service types, and multiple cloud provider zones.
 
The atomic unit of modular container service in Kubernetes is a Pod, which is a group of containers with a common filesystem and networking. The Kubernetes Pod abstraction enables design patterns for containerized applications similar to object-oriented design patterns. Containers provide some of the same benefits as software objects, such as modularity or packaging, abstraction, and reuse.
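For example, a minimal Pod manifest along these lines (a sketch; the names shared-pod, web, content-generator, the html volume, and the nginx and busybox images are illustrative and not taken from the book) shows two containers sharing a filesystem through an emptyDir volume while also sharing the Pod's network namespace:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-pod              # illustrative name, not from the book
    spec:
      volumes:
        - name: html                # emptyDir volume shared by both containers
          emptyDir: {}
      containers:
        - name: web                 # serves the shared content
          image: nginx
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
        - name: content-generator   # writes into the same filesystem
          image: busybox
          command: ["/bin/sh", "-c", "while true; do date > /html/index.html; sleep 10; done"]
          volumeMounts:
            - name: html
              mountPath: /html

Because both containers belong to one Pod, the content generator can write files that the web container serves, and either container can reach the other over localhost.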

CoreOS Linux is used in the majority of the chapters; the other platforms discussed are CentOS with OpenShift, Debian 8 (jessie) on AWS, and Debian 7 for Google Container Engine.

CoreOS is the main focus because Docker is pre-installed on CoreOS out of the box. CoreOS:
  • Supports most cloud providers (including Amazon AWS EC2 and Google Cloud Platform) and virtualization platforms (such as VMware and VirtualBox)
  • Provides Cloud-Config for declaratively configuring OS items such as network configuration (flannel), storage (etcd), and user accounts; a minimal cloud-config sketch follows this list
  • Provides a production-level infrastructure for containerized applications, including automation, security, and scalability
  • Leads the drive for container industry standards and founded appc
  • Provides the most advanced container registry, Quay
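A minimal cloud-config sketch along the following lines (the discovery token, user name, and the exact units are placeholders and assumptions, not taken from the book) declares etcd, flannel overlay networking, and a user account for a CoreOS node:

    #cloud-config
    coreos:
      etcd2:
        # placeholder discovery URL; a real token would be generated at https://discovery.etcd.io/new
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$private_ipv4:2379
        listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
        initial-advertise-peer-urls: http://$private_ipv4:2380
        listen-peer-urls: http://$private_ipv4:2380
      units:
        - name: etcd2.service        # distributed key-value store
          command: start
        - name: flanneld.service     # overlay networking
          command: start
    users:
      - name: demo                   # illustrative user account
        groups:
          - sudo

CoreOS applies such a file at first boot, so the node comes up with its services and accounts already configured declaratively.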
 
Docker was made available as open source in March 2013 and has become the most commonly used containerization platform. Kubernetes was open-sourced in June 2014 and has become the most widely used container cluster manager. The first stable version of CoreOS Linux was made available in July 2014, and CoreOS has since become one of the most commonly used operating systems for containers.

What You'll Learn

  • Use Kubernetes with Docker
  • Create a Kubernetes cluster on CoreOS on AWS
  • Apply cluster management design patterns
  • Use multiple cloud provider zones
  • Work with Kubernetes and tools like Ansible
  • Discover the Kubernetes-based PaaS platform OpenShift
  • Create a high availability website
  • Build a high availability Kubernetes master cluster
  • Use volumes, configmaps, services, autoscaling, and rolling updates
  • Manage compute resources (an illustrative manifest sketch follows this list)
  • Configure logging and scheduling
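As an illustrative sketch of two of the items above, compute resource management and autoscaling, the following manifests (the hello-deployment and hello-autoscaler names and the nginx image are assumptions, and the apps/v1 API version is the current one rather than necessarily the one used in the book) set resource requests and limits on a Deployment and attach a HorizontalPodAutoscaler:

    apiVersion: apps/v1              # current API version; the book's examples may use an older one
    kind: Deployment
    metadata:
      name: hello-deployment         # illustrative name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
            - name: hello
              image: nginx           # illustrative image
              resources:
                requests:            # compute resources the scheduler reserves
                  cpu: 100m
                  memory: 128Mi
                limits:              # hard caps enforced at runtime
                  cpu: 250m
                  memory: 256Mi
    ---
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: hello-autoscaler         # illustrative name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: hello-deployment
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 60

A rolling update then amounts to changing the Pod template, for example the container image, and letting the Deployment roll the change out.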


Who This Book Is For

Linux admins, CoreOS admins, application developers, and container as a service (CaaS) developers. Some prerequisite knowledge of Linux and Docker is required. Introductory knowledge of Kubernetes is required, such as creating a cluster, creating a Pod, creating a service, and creating and scaling a replication controller. For introductory Docker and Kubernetes information, refer to Pro Docker (Apress) and Kubernetes Microservices with Docker (Apress). Some prerequisite knowledge of Amazon Web Services (AWS) EC2, CloudFormation, and VPC is also required.



Deepak Vohra is an Oracle Certified Associate and a Sun Certified Java Programmer. Deepak has published in Oracle Magazine, OTN, IBM developerWorks, ONJava, DevSource, WebLogic Developer's Journal, XML Journal, Java Developer's Journal, FTPOnline, and devx.
Contents at a Glance
Contents
About the Author
About the Technical Reviewer
Introduction
Part I: Platforms
Chapter 1: Kubernetes on AWS
  Problem
  Solution
  Overview
  Setting the Environment
  Configuring AWS
  Starting the Kubernetes Cluster
  Testing the Cluster
  Configuring the Cluster
  Stopping the Cluster
  Summary
Chapter 2: Kubernetes on CoreOS on AWS
  Problem
  Solution
  Overview
  Setting the Environment
  Configuring AWS Credentials
  Installing Kube-aws
  Setting Up Cluster Parameters
  Creating a KMS Key
  Setting Up an External DNS Name
  Creating the Cluster
  Creating an Asset Directory
  Initializing the Cluster CloudFormation
  Rendering Contents of the Asset Directory
  Customizing the Cluster
  Validating the CloudFormation Stack
  Launching the Cluster CloudFormation
  Configuring DNS
  Accessing the Cluster
  Testing the Cluster
  Summary
Chapter 3: Kubernetes on Google Cloud Platform
  Problem
  Solution
  Overview
  Setting the Environment
  Creating a Project on Google Cloud Platform
  Enabling Permissions
  Enabling the Compute Engine API
  Creating a VM Instance
  Connecting to the VM Instance
  Reserving a Static External IP Address
  Creating a Kubernetes Cluster
  Creating a Kubernetes Application and Service
  Stopping the Cluster
  Using Kubernetes with Google Container Engine
  Creating a Google Container Cluster
  Connecting to the Google Cloud Shell
  Configuring kubectl
  Testing the Kubernetes Cluster
  Summary
Part II: Administration and Configuration
Chapter 4: Using Multiple Zones
  Problem
  Solution
  Overview
  Setting the Environment
  Initializing a CloudFormation
  Configuring cluster.yaml for Multiple Zones
  Launching the CloudFormation
  Configuring External DNS
  Running a Kubernetes Application
  Using Multiple Zones on AWS
  Summary
Chapter 5: Using the Tectonic Console
  Problem
  Solution
  Overview
  Setting the Environment
  Downloading the Pull Secret and the Tectonic Console Manifest
  Installing the Pull Secret and the Tectonic Console Manifest
  Accessing the Tectonic Console
  Using the Tectonic Console
  Removing the Tectonic Console
  Summary
Chapter 6: Using Volumes
  Problem
  Solution
  Overview
  Setting the Environment
  Creating an AWS Volume
  Using an awsElasticBlockStore Volume
  Creating a Git Repo
  Using a gitRepo Volume
  Summary
Chapter 7: Using Services
  Problem
  Solution
  Overview
  Setting the Environment
  Creating a ClusterIP Service
  Creating a NodePort Service
  Creating a LoadBalancer Service
  Summary
Chapter 8: Using Rolling Updates
  Problem
  Solution
  Overview
  Setting the Environment
  Rolling Update with an RC Definition File
  Rolling Update by Updating the Container Image
  Rolling Back an Update
  Using Only Either File or Image
  Multiple-Container Pods
  Rolling Update to a Deployment
  Summary
Chapter 9: Scheduling Pods on Nodes
  Problem
  Solution
  Overview
  Defining a Scheduling Policy
  Setting the Environment
  Using the Default Scheduler
  Scheduling Pods without a Node Selector
  Setting Node Labels
  Scheduling Pods with a Node Selector
  Setting Node Affinity
  Setting requiredDuringSchedulingIgnoredDuringExecution
  Setting preferredDuringSchedulingIgnoredDuringExecution
  Summary