Step 1: Start Redis
This demo provides an introductory overview of Habitat from the comfort of your browser. Below, we'll show you how to quickly download and run existing packages to create services.
After starting a service, we'll demonstrate how easy Habitat makes it to inject configuration changes into a single service and, lastly, how you can automatically cluster the infrastructure and apply changes to any number of services regardless of where they are running (in containers, VMs, or on bare metal).
Habitat centralizes application configuration, management, and behavior around the application itself, not the infrastructure that the app runs on. To begin, let's download and start a service from an existing Habitat package.
$ hab start core/redis
Result:
Success! In a single command, you've downloaded the Redis package (including its dependencies) and started the service.
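If you'd like to confirm the service is actually answering, and assuming the redis-cli client is available on the host (it isn't part of this demo), you can ping Redis on its default port of 6379:
$ redis-cli ping
PONG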
Notice that Habitat packages start up under a Supervisor that manages the process.
Next, we'll ask the Supervisor what is configurable in this service.
Step 2: Ask the Supervisor What's Configurable
In traditional packaging formats, the settings you can change are not typically discoverable.
One powerful aspect of Habitat is that services are managed by a Supervisor. The Supervisor handles many things including, as we'll see below, reporting on what is configurable within our running service.
Better yet, presume you now have a hundred instances running in a group and you want to see what can be configured. No problem, you can simply ask any single instance and get back the same response.
From the beginning, Habitat was built with configuration in mind. Instead of constantly destroying, re-building, and re-deploying packages for each change, Habitat produces an immutable package artifact where the author defines which configuration settings should be exposed. Give it a try.
$ hab sup config core/redis
Result:
Have a look! Scroll to the top of the output and note that the tcp-backlog is set to 511.
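To give you a sense of what comes back, the output is plain TOML. The snippet below is an illustrative excerpt rather than verbatim output; the tcp-backlog value of 511 comes from this demo, while the other keys are assumed from standard Redis defaults:
tcp-backlog = 511
port = 6379
timeout = 0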
Not only can the supervisor report what's configurable, it can actually assist us with changing config items as well.
First, let's start up a service with an environment variable override, then we'll see how to apply the same change through the supervisor.
Step 3: Configure the Service Through Environment Variables
In the first step, we started up the Redis service and, as you might have noticed, there was a warning about the TCP backlog setting. Next, the supervisor informed us that many settings, including the TCP backlog, are configurable.
With Habitat, there are a couple of ways to make config changes to your service. In the next step, we'll walk you through passing a config file to a running supervisor, but first let's take the standard approach of passing the change in as an environment variable.
Suppose you wanted to test your change by starting a single instance. Start the Redis service again, this time overriding the TCP backlog setting. You'll notice that the warning goes away upon starting up the service.
$ HAB_REDIS="tcp-backlog=128" hab start core/redis
Result:
Success! The setting was overridden and the TCP warning message is no longer present.
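To double-check the running value, and again assuming redis-cli is available on the host, you can ask Redis for the setting directly:
$ redis-cli config get tcp-backlog
1) "tcp-backlog"
2) "128"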
Suppose you wanted to apply this change to all of your instances at once. You can do that too.
Step 4: Making Permanent Changes via Configuration Files
In the previous step, we made a one-off change while starting the service. That was convenient, but what if we want to make this change permanent and distribute it across all of our instances? This is where Habitat excels.
Habitat deals with configuration settings in a couple of places. First, you'll have your typical config file(s) based upon your particular package (e.g. redis.conf) and second, Habitat will look for a .toml file where you can define additional config items.
Earlier, the Habitat supervisor read the contents of the config.toml file for us when we asked what was configurable in our service. Presume you're working on this file locally in your text editor - a snippet is displayed below.
Let's change the tcp-backlog setting to 128 in our config.toml, then we'll apply that file to our group of instances in the next step.
tcp-backlog = 128
(changed from 511 to 128)
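If you save this override locally (the next step applies it from /tmp/config.toml), the file should only need to contain the keys you want to change, so a minimal version looks like this:
$ cat /tmp/config.toml
tcp-backlog = 128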
Step 5: Configure the Service Through Discovery
We've updated our config file with the new setting. At this point you would typically be faced with the hectic, time-consuming task of rolling this change out to all of your nodes.
Imagine, for a moment, if you could simply apply the change in a single command with each node picking it up automatically… imagine no more! The Habitat supervisor has it under control so that you can quickly update the entire group.
Before we dive into the topic of setting up groups, let's presume you've already got a large service group up and running which contains many database instances.
Note that the TCP backlog warning is present in the two sample running nodes. Now, go ahead and apply the updated config.toml file by uploading it to one of the nodes in our service group. Then watch the magic happen as the peers discover and apply the change, clearing the TCP backlog issue.
$ hab config apply redis.default 1 /tmp/config.toml --peer 172.17.0.4
Result:
Applying configuration ConfigFile redis.default 1 (F: gossip.toml, C: 21791f2efcee073816460e687bf5154) to redis.default | |
Joining peer: 172.17.0.4:9638 | |
Configuration applied to: 172.17.0.4:9638 | |
Finished applying configuration | |
And just like that, you've applied your change and the service on each node has restarted with the new setting.
Check the other windows and you'll see the TCP backlog warning is no longer present after the restart.
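If you'd rather verify from the command line than watch the windows, and assuming the Redis port on a peer is reachable from your session, the same redis-cli query from Step 3 works against a remote node:
$ redis-cli -h 172.17.0.4 config get tcp-backlog
1) "tcp-backlog"
2) "128"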
You might be wondering how we connect nodes into a group. We'll learn how in the next step.
Step 6: Setting Up a Service Group Topology
That last step was powerful! Now, you're probably wondering how these services were set up to communicate with each other. It's actually not as complicated as you might suspect, thanks to some built-in Habitat features.
By default, Habitat places running services into a Service Group with a standalone topology. However, as we'll see below, you can explicitly start services in a user-defined group that can optionally apply alternative topologies.
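For example, and assuming the --group flag is available in this version of the hab CLI, starting the service with a named group would place it in redis.production instead of the default redis.default:
$ hab start core/redis --group production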
Suppose your application needs to share state and requires a certain start order for its services. In this situation, you can leverage a leader/follower topology by simply adding a topology flag to the hab start command.
$ hab start -t leader core/redis
Result:
hab-sup(MN): Starting core/redis | |
hab-sup(GS): Supervisor 172.17.0.4: 2e4cc8a8-d89e-4739-b14f-8ff526b99da5 | |
hab-sup(GS): Census redis.default: 32d54eaf-35e4-4052-8fc9-d16773251778 | |
hab-sup(GS): Starting inbound gossip listener | |
hab-sup(GS): Starting outbound gossip distributor | |
hab-sup(GS): Starting gossip failure detector | |
hab-sup(CN): Starting census health adjuster | |
hab-sup(SC): Updated redis.config | |
hab-sup(TP): Restarting because the service config was updated via the census | |
hab-sup(TL): 1 of 3 census entries; waiting for minimum quorum | |
Our first node is up and running at 172.17.0.4.
Note the last line. The leader/follower topology requires a minimum of three nodes.
Let's spin up two more Redis instances and connect all three in a group by referencing that IP.
Step 7: Adding Services to an Existing Group
Now that we've got our first node running, we went ahead and added a second node for you, since this topology requires a minimum of three. Once we add a third, we'll see them elect a leader for the group.
Adding a new node to a group is as simple as setting a flag that references any existing peer (by IP address or hostname) in the target Service Group.
Start the third node, then switch between all three windows to see that the nodes are connected to one another.
$ hab start -t leader core/redis --peer 172.17.0.4