@robszumski
Forked from mateobur/CloudFormationTemplateOpenShift.yaml
Last active November 19, 2021
CloudFormation Template OpenShift 3.9

Blog post to follow: https://sysdig.com/blog/deploy-openshift-aws/

Pre-reqs

  1. Your AWS account must have accepted the CentOS terms via the AWS Marketplace.
  2. See the AMI IDs in the CloudFormation file.
  3. Upload the stack file to your own S3 bucket (a sketch using the AWS CLI follows this list).
  4. Replace your SSH key name, stack name, etc. in the create-stack command below.
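
For the upload step, a minimal sketch using the AWS CLI (the bucket here matches the template URL in the launch command; substitute your own bucket and local path):

aws s3 cp CloudFormationTemplateOpenShift.yaml \
 s3://openshift-origin-cloudformation/CloudFormationTemplateOpenShift.yaml \
 --region us-west-1

Then launch the stack: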

aws cloudformation create-stack \
 --region us-west-1 \
 --stack-name robszumski-openshift-39 \
 --template-url "https://s3-us-west-1.amazonaws.com/openshift-origin-cloudformation/CloudFormationTemplateOpenShift.yaml" \
 --parameters \
   ParameterKey=AvailabilityZone,ParameterValue=us-west-1a \
   ParameterKey=KeyName,ParameterValue=robszumski \
 --capabilities=CAPABILITY_IAM
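
To block until CloudFormation finishes, a sketch assuming the same region and stack name:

aws cloudformation wait stack-create-complete \
 --region us-west-1 \
 --stack-name robszumski-openshift-39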

Clone 3.9

$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible
$ git checkout origin/release-3.9

Set some configs

  1. Generate an htpasswd file for logging in to the Console; it is referenced in the inventory file. There are several web generators for these, or use the htpasswd tool (see the sketch after this list).
  2. Grab the public DNS names (or IPs) for your infra node, master (and etcd), and workers. Put those in the inventory file (see the example in this gist).
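
A minimal sketch for generating the htpasswd file locally with the Apache htpasswd utility (the username and password are placeholders; the path matches the inventory example below):

$ htpasswd -c -b ~/Documents/openshift-origin-ansible/htpasswd admin <password>

The public DNS names of the instances can be pulled per Name tag with the AWS CLI (a sketch; the tag values come from the CloudFormation template below — repeat for openshift-worker1 and openshift-worker2):

$ aws ec2 describe-instances --region us-west-1 \
    --filters "Name=tag:Name,Values=openshift-master" \
    --query "Reservations[].Instances[].PublicDnsName" --output text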

Run the prep-playbook

Reference your modified inventory file and the prepare playbook (see the file in this gist).

$ ansible-playbook ~/Documents/openshift-origin-ansible/prepare.yaml -i ~/Documents/openshift-origin-ansible/inventory --key-file ~/.ssh/id_rsa

Install OpenShift pre-reqs

$ ansible-playbook -i ~/Documents/openshift-origin-ansible/inventory --key-file ~/.ssh/id_rsa \
  ~/Documents/openshift-ansible/playbooks/prerequisites.yml

This will take a while.

Deploy OpenShift

We are going to disable a few health checks because these nodes are smaller than the installer's recommended minimums.

$ ansible-playbook -i ~/Documents/openshift-origin-ansible/inventory --key-file ~/.ssh/id_rsa \
  ~/Documents/openshift-ansible/playbooks/deploy_cluster.yml \
  -e openshift_disable_check=package_version,disk_availability,memory_availability

This will take a while.

Access Console

The Console should be up and running at the master node's public DNS address on port 8443.
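
A quick way to check it from the command line (a sketch; the hostname is a placeholder and the credentials are whatever you put in the htpasswd file):

$ curl -k https://ec2-aaaaaaaaaa.us-west-1.compute.amazonaws.com:8443/healthz
$ oc login https://ec2-aaaaaaaaaa.us-west-1.compute.amazonaws.com:8443 -u admin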

CloudFormationTemplateOpenShift.yaml

AWSTemplateFormatVersion: '2010-09-09'
Metadata: {}
Parameters:
###########
  KeyName:
    Description: The EC2 Key Pair to allow SSH access to the instance
    Type: 'AWS::EC2::KeyPair::KeyName'
  AvailabilityZone:
    Description: Availability zone to deploy
    Type: AWS::EC2::AvailabilityZone::Name
Mappings:
#########
  RegionMap:
    us-east-1:
      CentOS7: "ami-ae7bfdb8"
    us-east-2:
      CentOS7: "ami-9cbf9bf9"
    us-west-1:
      CentOS7: "ami-65e0e305"
Resources:
##########
  openshiftvpc:
    Type: "AWS::EC2::VPC"
    Properties:
      CidrBlock: 10.0.0.0/28
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: openshift-cf-vpc
  internetgatewayos:
    Type: AWS::EC2::InternetGateway
  gatewayattachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref internetgatewayos
      VpcId: !Ref openshiftvpc
  subnet:
    Type: 'AWS::EC2::Subnet'
    Properties:
      VpcId: !Ref openshiftvpc
      CidrBlock: 10.0.0.0/28
      AvailabilityZone: !Ref AvailabilityZone
  routetable:
    Type: 'AWS::EC2::RouteTable'
    Properties:
      VpcId: !Ref openshiftvpc
  subnetroutetableasoc:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      RouteTableId: !Ref routetable
      SubnetId: !Ref subnet
  route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref routetable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref internetgatewayos
  openshiftmaster:
    Type: 'AWS::EC2::Instance'
    Properties:
      Tags:
        - Key: Name
          Value: openshift-master
      InstanceType: t2.medium
      KeyName: !Ref KeyName
      AvailabilityZone: !Ref AvailabilityZone
      NetworkInterfaces:
        - AssociatePublicIpAddress: "true"
          DeviceIndex: "0"
          SubnetId: !Ref subnet
          GroupSet:
            - !Ref mastersecgroup
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", CentOS7]
  openshiftworker1:
    Type: 'AWS::EC2::Instance'
    Properties:
      Tags:
        - Key: Name
          Value: openshift-worker1
      InstanceType: t2.large
      KeyName: !Ref KeyName
      AvailabilityZone: !Ref AvailabilityZone
      NetworkInterfaces:
        - AssociatePublicIpAddress: "true"
          DeviceIndex: "0"
          SubnetId: !Ref subnet
          GroupSet:
            - !Ref workersecgroup
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", CentOS7]
  openshiftworker2:
    Type: 'AWS::EC2::Instance'
    Properties:
      Tags:
        - Key: Name
          Value: openshift-worker2
      InstanceType: t2.large
      KeyName: !Ref KeyName
      AvailabilityZone: !Ref AvailabilityZone
      NetworkInterfaces:
        - AssociatePublicIpAddress: "true"
          DeviceIndex: "0"
          SubnetId: !Ref subnet
          GroupSet:
            - !Ref workersecgroup
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", CentOS7]
  volume1:
    Type: 'AWS::EC2::Volume'
    Properties:
      AvailabilityZone: !GetAtt openshiftmaster.AvailabilityZone
      Size: 50
    DeletionPolicy: Delete
  volat1:
    Type: AWS::EC2::VolumeAttachment
    Properties:
      Device: '/dev/xvdb'
      VolumeId: !Ref volume1
      InstanceId: !Ref openshiftmaster
  volume2:
    Type: 'AWS::EC2::Volume'
    Properties:
      AvailabilityZone: !GetAtt openshiftworker1.AvailabilityZone
      Size: 50
    DeletionPolicy: Delete
  volat2:
    Type: AWS::EC2::VolumeAttachment
    Properties:
      Device: '/dev/xvdb'
      VolumeId: !Ref volume2
      InstanceId: !Ref openshiftworker1
  volume3:
    Type: 'AWS::EC2::Volume'
    Properties:
      AvailabilityZone: !GetAtt openshiftworker2.AvailabilityZone
      Size: 50
    DeletionPolicy: Delete
  volat3:
    Type: AWS::EC2::VolumeAttachment
    Properties:
      Device: '/dev/xvdb'
      VolumeId: !Ref volume3
      InstanceId: !Ref openshiftworker2
  workersecgroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref openshiftvpc
      GroupDescription: Security group for the worker Kubernetes nodes
      SecurityGroupIngress:
        - IpProtocol: -1
          FromPort: -1
          ToPort: -1
          CidrIp: 10.0.0.0/28
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
  mastersecgroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref openshiftvpc
      GroupDescription: Security group for the master Kubernetes node
      SecurityGroupIngress:
        - IpProtocol: -1
          FromPort: -1
          ToPort: -1
          CidrIp: 10.0.0.0/28
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '8443'
          ToPort: '8443'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '10250'
          ToPort: '10250'
          CidrIp: 0.0.0.0/0
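
Before launching, the uploaded template can be sanity-checked with the AWS CLI (a sketch, reusing the S3 URL from the create-stack command above):

aws cloudformation validate-template \
 --template-url "https://s3-us-west-1.amazonaws.com/openshift-origin-cloudformation/CloudFormationTemplateOpenShift.yaml"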

inventory

[OSEv3:children]
masters
etcd
nodes
[OSEv3:vars]
ansible_ssh_user=centos
ansible_sudo=true
ansible_become=true
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
deployment_type=origin
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_install_examples=true
openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/Users/robszumski/Documents/openshift-origin-ansible/htpasswd'}]
openshift_disable_check=disk_availability,docker_storage,memory_availability
[masters]
ec2-aaaaaaaaaa.us-west-1.compute.amazonaws.com
[etcd]
ec2-aaaaaaaaaa.us-west-1.compute.amazonaws.com
[nodes]
ec2-aaaaaaaaaa.us-west-1.compute.amazonaws.com openshift_node_labels="{'region':'infra','zone':'west'}" openshift_schedulable=true
ec2-bbbbbbbbbb.us-west-1.compute.amazonaws.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
ec2-cccccccccc.us-west-1.compute.amazonaws.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
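
Once the real hostnames are filled in, a quick connectivity check before running any playbooks (a sketch, assuming the same inventory path and key as above):

$ ansible all -i ~/Documents/openshift-origin-ansible/inventory -m ping --key-file ~/.ssh/id_rsa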

Activeghost commented Feb 13, 2019

Note: if you install docker-ce via prepare.yml, the prerequisites playbook will fail. It checks for a version of the docker package, not docker-ce, so the detection logic proceeds to attempt installing that package in the ansible scripts.

Update roles/container_runtime/tasks/package_docker.yml @Ln16 to pass this check (changing it to 'docker-ce' will allow it to pass, but the playbook needs refactoring to properly support docker-ce installs and checks). I haven't had a chance to check whether the prereq playbook succeeds if prepare.yml is never run.

... and you will need to update the deploy_cluster dependencies starting here (and in several other places):
roles/openshift_health_checker/openshift_checks/package_availability.py @ln59

I'd recommend not installing docker-ce (and perhaps skipping the prepare playbook). If you do install it (and then uninstall it), remember to flush the Ansible fact cache so it doesn't "remember" your docker-ce facts and fail; see the sketch below.
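
A sketch of clearing cached facts by re-running with ansible-playbook's --flush-cache flag (paths as in the gist above):

$ ansible-playbook --flush-cache -i ~/Documents/openshift-origin-ansible/inventory --key-file ~/.ssh/id_rsa \
  ~/Documents/openshift-ansible/playbooks/prerequisites.yml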
