Rancher Application Pipeline

Rancher Labs has invited me to give a demo during their Web Meetup on Wednesday 07/27/2016. I’m presenting the Application Pipeline we have built at LeanKit using GitHub, Drone CI, and Rancher.

Disclaimer: A lot of this code is specific to how WE do things. It may or may not work out of the box for you. YMMV

Presentation

Video – My Part starts at about 1:23:00
Rancher-Application-Pipeline-Demo

Demo Code

vote-demo-web
vote-demo-worker
vote-demo-results

Example Rancher-Catalogs

These are built using the drone-rancher-catalog Drone plugin.

rancher-vote-demo-web
rancher-vote-demo-worker
rancher-vote-demo-results

Drone CI

Drone CI Project
drone-rancher-catalog – Drone plugin to build custom Rancher catalogs.
drone-cowpoke – Drone plugin to kick off a Rancher Stack upgrade via Cowpoke Service.
buildgoggles – NPM module to create more descriptive tags for our builds.

Cowpoke

cowpoke – Connector Service that scans Rancher Environments and upgrades stacks.

Convert an Existing WordPress Site to Docker

I had a “classic” WordPress install, with Apache/PHP and MySQL on the root drive, that I wanted to convert to Docker containers.

Why?

Install
Installing is easy: installing and starting a publicly available image is a single docker run command.

Maintain
With a little bit of planning, updating your apps is a simple process (a sketch of the full update cycle follows the list):

  • docker pull the latest image.
  • docker rm -f the current container (remember to use volumes on data you need to save).
  • docker run with the same options and you have the latest app.
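
Putting those three steps together, an update cycle looks something like this. This is just a sketch: the myapp container, myrepo/myapp image, and /data/myapp volume are hypothetical stand-ins for your own app.

# grab the newest image
docker pull myrepo/myapp:latest
# remove the running container (the data survives in the volume)
docker rm -f myapp
# start a fresh container with the same options as before
docker run -d \
--restart=always \
--volume=/data/myapp:/data \
--name=myapp \
myrepo/myapp:latest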

Scale
On the off chance that someone besides my Mom (hi Mom!) reads my blog, scaling containerized apps is a breeze.

  • Add another host.
  • docker run your app with the same options.
  • Add a load balancer.

Note: These commands assume you are running as root.

Saving Your WordPress Content

Dump Your Database

This will dump the WordPress database to a .sql file. You will need your MySQL server hostname, user, and password. If you don’t remember them, check the wp-config.php file in your current WordPress install.

Enter your WordPress DB user password when prompted.

mysqldump -h [hostname] -p -u [user] [database] > ~/wordpress.sql
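
For example, if wp-config.php shows a local MySQL server and the same names used later in this howto, the filled-in command would look like this (hypothetical values):

mysqldump -h localhost -p -u wp-user wordpress > ~/wordpress.sql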

Backup the wp-content directory

All the stuff you need to save is in the wp-content directory in your WordPress install.

tar cvzf ~/wp-content.tar.gz ./wp-content

Docker all the Things

Install Docker

Run the install script from http://www.docker.com:

wget -qO- https://get.docker.com/ | sh
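
Before moving on, you can confirm the install worked and that the client can talk to the daemon:

docker version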

Install MySQL Container

Make a storage directory
Make a directory to persistently store the DB files. This will make it easier to update the Docker Image later.

mkdir -p /data/mysql 

Install/Start a MySQL container
This is really as simple as:

docker run -d \
--volume=/data/mysql:/var/lib/mysql \
--restart=always \
--publish=172.17.42.1:3306:3306 \
--env='MYSQL_USER=wp-user' \
--env='MYSQL_PASSWORD=myAwesomePassword' \
--env='MYSQL_DATABASE=wordpress' \
--env='MYSQL_ROOT_PASSWORD=myAwesomeRootPassword' \
--name=mysql \
mysql:5.6

What’s going on here?

  • docker run – Start a Container from an Image. If that image is not already downloaded, it will try to download it from the specified repository.
  • -d – Run the Container as a Daemon (in the background).
  • --volume /data/mysql:/var/lib/mysql – Mount the local directory /data/mysql in the Container at /var/lib/mysql.
  • --restart=always – Tell docker to restart the container on boot and anytime it dies.
  • --publish 172.17.42.1:3306:3306 – Here we are telling Docker to map port 3306 in the container to the special internal docker0 IP address 172.17.42.1. This prevents external access but allows the local host and any local Docker containers to communicate with MySQL.
  • --env 'MYSQL_USER=wp-user' – Use env to pass parameters in to your container. In this case we are passing in the default user, database, and passwords. Good Images will document the available environment variables.
  • --name mysql – This is the friendly name you can set for easy control of the container.
  • mysql:5.6 – This is the Image Name followed by the Image Tag. This is calling the registry.hub.docker.com default library/mysql image. This and other public images can be found at https://registry.hub.docker.com/

ADVANCED: Why not use --link?
The problem with linking containers is that if the “Linked To” container (mysql) is restarted for any reason, it can get a new IP address. Any linking containers (wordpress) will need to be restarted to pick up that new IP address. This really becomes an issue if you have multiple linked containers or nested dependent containers. I believe that linking causes more issues than it solves. It’s better to treat all of your containers as if they were on separate hosts.
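
You can see the problem for yourself with docker inspect (a sketch using the mysql container from this howto; whether the address actually changes on a given restart depends on what else has started in between):

docker inspect --format '{{ .NetworkSettings.IPAddress }}' mysql
docker restart mysql
docker inspect --format '{{ .NetworkSettings.IPAddress }}' mysql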

Import the DB

Import that .sql file you saved earlier. Use the IP and credentials you specified in the previous step.

 mysql -h 172.17.42.1 -p -u wp-user wordpress < ~/wordpress.sql

Install WordPress

Extract Your wp-content Backup
Just like the MySQL Container you will need persistent storage for the WordPress content. Create a directory and extract the tarball. You may need to update the owner/group so the WordPress container can modify the data.

mkdir -p /data/wordpress
cd /data/wordpress
tar xvzf ~/wp-content.tar.gz
chown -R www-data:www-data ./wp-content

Install/Start the WordPress Container
Just like MySQL we are using the public WordPress Image. Set the env variables with the IP and credentials you defined when you set up the MySQL container. This container is bound to port 80 on all of the host IP addresses.

docker run -d \
--restart=always \
--publish=80:80 \
--volume=/data/wordpress/wp-content:/var/www/html/wp-content \
--env='WORDPRESS_DB_HOST=172.17.42.1' \
--env='WORDPRESS_DB_USER=wp-user' \
--env='WORDPRESS_DB_PASSWORD=myAwesomePassword' \
--env='WORDPRESS_DB_NAME=wordpress' \
--name=wordpress \
wordpress:latest

Check docker

At this point we should be done. You can check to see if the docker containers are running.

# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                        NAMES
685247608fff        wordpress:latest    "/entrypoint.sh apac   3 weeks ago         Up About an hour    0.0.0.0:80->80/tcp           wordpress           
75b5732cabdc        mysql:5.6           "/entrypoint.sh mysq   3 weeks ago         Up About an hour    172.17.42.1:3306->3306/tcp   mysql 

Browse to port 80 on your server and you should see your site.

Troubleshooting Tip

If something has gone wrong you can check the logs from your containers.

docker logs -f <container>
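
For example, to tail the containers we started above:

docker logs -f wordpress
docker logs -f mysql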

Role based Puppet/MCollective in EC2 – Part 3 Installing MCollective with SSL

This post will take you through adding the MCollective orchestration software to the puppetmaster you just finished building in Part 2 (Ubuntu 14.04 running on EC2). This howto includes configuring MCollective over SSL. On a side note, if you have the budget, I highly suggest supporting Puppet Labs and their efforts by subscribing to Puppet Enterprise. Of course, if you did, you wouldn’t need these instructions 🙂

About MCollective.

MCollective is orchestration software by Puppet Labs. It allows you to connect to and run “jobs” simultaneously across groups of instances. These groups of instances can be manually defined or selected based on something like the Facter facts we defined in the user-data in Part 1. MCollective consists of 3 basic components:

  • Middleware: This is really the “server” portion. ActiveMQ messaging software runs here. I run this on my puppetmaster.
  • Agent: mcollectived runs on all the instances you want to control with mcollective.
  • Client: mco command. This is where you control the agents. I think of this part as the command console. I also install this on the puppetmaster.

ActiveMQ
Apache ActiveMQ is really the “server” portion of MCollective. You can use other messaging servers, but that’s way beyond the scope of what is covered here. It’s written in Java.

  • Listening Port: 61614
  • Start, Stop, Restart: service activemq [action]
  • Configuration: /etc/activemq/instances-enabled/mcollective/
  • Logs: Not setup by default. ActiveMQ Logging

mcollectived
mcollectived is the “agent” portion. It connects to the ActiveMQ server, then the server can pass messages back. It’s written in Ruby.

  • Start, Stop, Restart: service mcollective [action]
  • Configuration: /etc/mcollective
  • Logs: /var/log/mcollective.log

mco – the “client”
MCollective calls the places where you run the mco command the “client”. I think this is terribly confusing; it’s too close to “agent”. I generally install mco on my puppetmaster instances. Be careful who you give access to the mco command – it can be the equivalent of giving root access to all your instances. Mistakes here can affect all of your instances at once. It is possible to lock down what a user can do with this command, but that’s beyond the scope of this tutorial. This is also written in Ruby.

Installing MCollective.

I find the Puppet Labs documentation extremely confusing and out of date. But there is hope! I had just about figured it all out when I discovered that you can do the full installation of middleware/agent/client with SSL through the puppetlabs/mcollective Puppet module. Using the module almost makes the install easy.

Install puppetlabs/mcollective Puppet module.

Install the module with the puppet command. It will install a bunch of dependencies.

puppet module install puppetlabs/mcollective

Install my site_mcollective Module.

I wrote a module that wraps the puppetlabs/mcollective module. Mostly, the site_mcollective module gives you a place to keep the certs and define your users so Puppet can install them.

Clone the Repo into `/etc/puppet/modules`.
I put the module up on GitHub – https://github.com/jgreat/site_mcollective

cd /etc/puppet/modules
git clone https://github.com/jgreat/site_mcollective.git

Setup Config File.
All the configuration is done in a yaml file. This will require Hiera to work. Create a folder to store yaml configs for Hiera and copy the site_mcollective.yaml to it.

mkdir /etc/puppet/yaml
cp /etc/puppet/modules/site_mcollective/site_mcollective.yaml /etc/puppet/yaml

Modify/Create /etc/puppet/hiera.yaml.
Include site_mcollective as a :hierarchy: source.

---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/yaml
:hierarchy:
  - site_mcollective
  - common

Restart apache2.
You will need to restart the puppetmaster to read the hiera.yaml.

service apache2 restart

Modify /etc/puppet/yaml/site_mcollective.yaml.
Change the values to suit your site.

  • middleware_hosts: a list of your puppetmaster/mcollective servers.
  • activemq_password: Change it to something long and random.
  • activemq_admin_password: Change it to something long and random.
  • ssl_server_public: Change cert file name to your server name.
  • ssl_server_private: Change key file name to your server name.
  • users: a list of your users that can run mco. These users must already have accounts.
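
For reference, a filled-in file might look something like this. This is only a sketch using the keys from the list above with placeholder values; check the sample yaml that ships with the module for the exact structure.

---
middleware_hosts:
  - puppet.example.com
activemq_password: 'longRandomString1'
activemq_admin_password: 'longRandomString2'
ssl_server_public: puppet.example.com.pem
ssl_server_private: puppet.example.com.pem
users:
  - ubuntu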

Copy the Server Certificates and Keys.

MCollective can use the puppetmaster SSL certificate authority for its SSL certificates. We will copy the files from the puppetmaster directories into the site_mcollective module so they can be distributed to the appropriate places.

puppetmaster CA Certificate.

cp /var/lib/puppet.example.com/ssl/certs/ca.pem /etc/puppet/modules/site_mcollective/files/server/certs/

puppetmaster Server Certificate and Key.
The key is only readable by root, so you will need to change the group ownership to puppet so the puppetmaster process can read it.

cp /var/lib/puppet.example.com/ssl/certs/puppet.example.com.pem /etc/puppet/modules/site_mcollective/files/server/certs/
cp /var/lib/puppet.example.com/ssl/private_keys/puppet.example.com.pem /etc/puppet/modules/site_mcollective/files/server/keys/
chgrp puppet /etc/puppet/modules/site_mcollective/files/server/keys/puppet.example.com.pem 

Adding Users.

We need to generate an SSL certificate and key for each of our users. Puppet has a really simple interface to do this. I’m using the ubuntu user as an example.

Generate a Certificate and Key.

puppet cert generate ubuntu

Copy User Certificate and Key.
Again, the user keys are only readable by root, so you will need to change the group ownership to puppet so the puppetmaster process can read them.

cp /var/lib/puppet.example.com/ssl/certs/ubuntu.pem /etc/puppet/modules/site_mcollective/files/user/certs/
cp /var/lib/puppet.example.com/ssl/private_keys/ubuntu.pem /etc/puppet/modules/site_mcollective/files/user/keys/
chgrp puppet /etc/puppet/modules/site_mcollective/files/user/keys/ubuntu.pem

Install Middleware Server.

Now we are going to use puppet to install the middleware server, agent and mco on the puppetmaster instance.

Create/Modify your site.pp.
Assign the site_mcollective class to your puppetmaster. Instead of assigning classes to traditional node definitions, I’m using the site_role fact I created in the user-data when I built the instance. See Part 1 for details on setting up the roles with user-data or a simple shell script. The site_mcollective class has 3 install_types available:

  • agent – Default. Just the agent portion; install this on most instances.
  • client – The mco software and the agent. Install this on the instances you want to run commands from.
  • middleware – The “server”: ActiveMQ, the agent, and the client. I’m adding site_mcollective with install_type => 'middleware' on my instances with the puppetmaster role.

case $::site_role {
    puppetmaster: {
        class { 'site_mcollective': 
            install_type => 'middleware', 
        }
    }
    default: {
    }
}

Run puppet agent.
Now run the puppet agent on your master instance. You should see a whole bunch of actions happen. Hopefully it’s all green output.

puppet agent -t

Test MCollective.
Now log in as the user you set up and run mco ping. If everything is set up correctly you should see a list of instances running the agent.

ubuntu@puppet01:~$ mco ping
puppet01                                 time=61.73 ms


---- ping statistics ----
1 replies max: 61.73 min: 61.73 avg: 61.73 

Hurrah it’s working!

Install on Additional Agents.

I’m adding site_mcollective on my instances with the mrfancypantsapp role. Since install_type => 'agent' is the default, you don’t need to specify it.

case $::site_role {
    puppetmaster: {
        class { 'site_mcollective': 
            install_type => 'middleware', 
        }
    }
    mrfancypantsapp: {
        class { 'site_mcollective': }
    }
    default: {
    }
}

Run puppet agent.
Wait for the automatic puppet agent run or trigger it manually.

puppet agent -t

Test MCollective.
Run mco ping from your puppetmaster. You should now see additional hosts.

ubuntu@puppet01:~$ mco ping
puppet01                                 time=61.73 ms
mrfancypantsapp-prd-ec2-111-111-111-111  time=75.33 ms
mrfancypantsapp-prd-ec2-111-111-111-112  time=80.31 ms
mrfancypantsapp-stg-ec2-111-111-111-231  time=63.34 ms

---- ping statistics ----
4 replies max: 80.31 min: 61.73 avg: 70.18 

Now we can use the site_role fact to run commands on servers that are only in that role.

ubuntu@puppet01:~$ mco ping -F site_role=mrfancypantsapp
mrfancypantsapp-prd-ec2-111-111-111-111     time=75.33 ms
mrfancypantsapp-prd-ec2-111-111-111-112     time=80.31 ms
mrfancypantsapp-stg-ec2-111-111-111-231     time=63.34 ms

---- ping statistics ----
3 replies max: 80.31 min: 63.34 avg: 72.99

Install Client (mco).

I’m adding site_mcollective install_type => 'client' on my instances with the secure_jumppoint role. From these instances I can run the mco command to send commands to other servers.

case $::site_role {
    puppetmaster: {
        class { 'site_mcollective': 
            install_type => 'middleware', 
        }
    }
    mrfancypantsapp: {
        class { 'site_mcollective': }
    }
    secure_jumppoint: {
        class { 'site_mcollective': 
            install_type => 'client',
        }
    }
    default: {
    }
}

Run puppet agent.
Wait for the automatic puppet agent run or trigger it manually.

puppet agent -t

Now you can log into the secure_jumppoint instance and run mco commands.

Next Steps.

Now that you have Puppet and MCollective working, you may be asking “what do I do with it?” You will just have to see the next post – Part 4 – for working with Puppet/MCollective, tips, and some of my best practices.

Role based Puppet/MCollective in EC2 – Part 2 Installing puppetmaster

This howto will show you how to build a “production ready” puppetmaster in EC2 based on an Ubuntu 14.04 AMI.

Prerequisites

It will help a lot if you already have some EC2 experience. I’m not going to explain all the ins and outs of the EC2 pieces, just what you will need to set up. This howto also assumes that you will be working in a VPC (since that’s the current default).

Set up Security Groups

In a traditional infrastructure you would need to approve any new agents that connect to the puppetmaster. In our case we want to be able to take advantage of autoscaling, so new instances may need access at any time without manual approval. We will allow instances to automatically register, and control access to the puppetmaster through security groups. Two groups will need to be created (a CLI sketch follows the list):

  • puppet-agent – This group will be assigned to all the hosts that run the puppet agent. This group does not need any inbound ports configured.
  • puppet – This group will be assigned to our puppetmaster and have permissions to allow the puppet-agent security group to access the system. This group should have inbound TCP ports 8140 and 61613 open to the security group id of the puppet-agent group.
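
Here is a sketch of creating the groups with the AWS CLI. The vpc- and sg- ids are placeholders; the real sg- ids are returned by the create calls.

# agent group - no inbound rules needed
aws ec2 create-security-group --group-name puppet-agent \
--description 'puppet agents' --vpc-id vpc-xxxxxxxx
# puppetmaster group
aws ec2 create-security-group --group-name puppet \
--description 'puppetmaster' --vpc-id vpc-xxxxxxxx
# allow the agent group in on 8140 (repeat for the MCollective port)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
--protocol tcp --port 8140 --source-group sg-yyyyyyyy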

Create the puppetmaster instance.

You can get the current Ubuntu 14.04 AMI here:

https://cloud-images.ubuntu.com/locator/ec2/

  • Click the AMI-xxxxxx link for 14.04 with EBS storage.
  • Pick the m3.large size – m3.medium did not have enough memory to run MCollective.
  • Up root storage to at least 15GB.
  • Assign the puppet and puppet-agent security groups (don’t forget to assign a default group with ssh access).
  • Launch and pick your ssh key.

Naming the System

For portability and scaling you will want to name your system something like puppet01 and create a cname called puppet that points to puppet01. This way, if you need more puppetmaster servers or need a new one, you can point the puppet.example.com cname to a load-balancer or another host.

  • Once the instance has started, assign the instance an Elastic IP.
  • Register a cname like puppet01.example.com in DNS to the ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com public name for the Elastic IP.
  • Register a cname puppet.example.com in DNS to puppet01.example.com.
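
Once the records have propagated, you can verify the whole chain with dig (+short follows the cnames; the names here are from this example):

dig +short puppet.example.com
# puppet01.example.com.
# ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com.
# 54.xx.xx.xx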

/etc/hosts
Run ec2metadata --local-ipv4 to find the local IP address. As root, edit /etc/hosts and add your local address with the names you have registered in DNS.

127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

10.0.1.192 puppet01.example.com puppet01
10.0.1.192 puppet.example.com puppet

/etc/hostname
Put puppet01 in /etc/hostname.

puppet01

Reboot so the instance will start up with the new hostname. If this is all successful, you should be greeted with a prompt that reads: ubuntu@puppet01:~$

Install the PuppetLabs Apt Repo

The standard Ubuntu repos have a version of Puppet in them, but it’s usually a bit dated. Install the PuppetLabs repo with their handy .deb package to get the current version of Puppet.

wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
sudo dpkg -i ./puppetlabs-release-trusty.deb

Configure puppetmaster

We are going to configure puppetmaster before we install it. This is because the puppetmaster-passenger package will use the settings in the master section to configure the Apache vhost for you.

/etc/puppet/puppet.conf
Create the directory:

sudo mkdir -p /etc/puppet

We are going to set the master section to use puppet.example.com for its certname and vardir. This will keep the master files separate from the agent files and make it easier to copy the ssl certs to new instances if you have to scale up.
As root, create the /etc/puppet/puppet.conf file.

[main]
pluginsync = true

[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
certname = puppet.example.com
vardir = /var/lib/puppet.example.com
ssldir=/var/lib/puppet.example.com/ssl

[agent]
server = puppet.example.com
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet

/etc/puppet/autosign.conf
By default, a new Puppet agent that connects to the master will need to have its SSL cert manually signed on the master. This isn’t really practical for an autoscaling infrastructure. Instead, we have the master automatically sign all the certs and control access through EC2 Security Groups. As root, create a /etc/puppet/autosign.conf file.

*
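
If signing everything makes you nervous, autosign.conf also accepts domain globs, so you could limit it to certnames in your own domain instead (example value; this only helps if your agents actually get example.com certnames):

*.example.com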

Install the Packages.

Here’s the real trick PuppetLabs doesn’t tell you: you don’t want to install the puppetmaster package, which installs the Ruby WEBrick webserver that can only handle a handful of clients. You want to install the puppetmaster-passenger package. This package installs and configures Apache with Ruby Passenger so you can handle a “Production” load. No fussing with gems, compiling Passenger, or configuring vhosts; it’s all done for you. As for scale, I’m running 50-75 instances on one m3.large with no issues. I suspect that I could do 200 instances before I need to look at scaling up.

sudo apt-get update
sudo apt-get install puppetmaster-passenger puppet

About the puppetmaster Service
The puppetmaster is a Ruby Passenger process that runs through the apache2 service. Here’s the general layout of things:

  • Stop, start or restart: service apache2 [action]
  • puppetmaster config, manifests and modules: /etc/puppet
  • puppetmaster working files: /var/lib/puppet.example.com
  • Apache2 puppetmaster vhost: /etc/apache2/sites-available/puppetmaster.conf
  • Access logs: /var/log/apache2/other_vhosts_access.log
  • General puppetmaster logs: syslog /var/log/syslog

Test out the agent.

At this point you should be able to run the Puppet agent locally on the master instance. Since there’s no configuration yet, no actual actions will be taken. Remove the agent lock file.

sudo rm /var/lib/puppet/state/agent_disabled.lock 

Enable the Puppet agent service in /etc/default/puppet.

sudo sed -i /etc/default/puppet -e 's/START=no/START=yes/' 

Run the Puppet agent with the -t flag so it will show output.

sudo puppet agent -t

You should see output similar to this.
The Puppet agent color codes its output, so it should be all green. If you see red, something has gone wrong.

Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet01.example.com
Info: Applying configuration version '1402079876'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.01 seconds

Congratulations, you now have a functional puppetmaster server. Next we will use Puppet to install MCollective with SSL support.

Role based Puppet/MCollective in EC2 – Part 1 Roles

This is the first in a short series of posts that will take you through building simple role based infrastructure in EC2 using Puppet for configuration and MCollective for orchestration.

What do you mean “role based”?

When you are talking AWS EC2 or any other “cloud infrastructure”, the traditional ways of naming and organizing systems are just not flexible enough. When autoscaling, instances come and instances go. Your 4 web servers may, at any moment, turn into 6 servers. When you scale back down, it might not be the original 4. The EC2-assigned DNS names like ec2-54-86-7-157.compute-1.amazonaws.com really don’t tell you anything about what that server does. To solve this problem, I create a couple of simple definitions (in this case Puppet “facts”) that I use to organize the systems in a human way. With tools like Puppet I can deploy my configuration and code to servers in various roles. MCollective allows me to run commands and manage the systems simultaneously by role. It doesn’t matter if it’s 4 servers or 100.

A bit of background.

Puppet and MCollective use a program called Facter to generate “facts” about a system. Facter runs on the agent systems during the Puppet agent run. These facts end up as top-level variables in Puppet. For example, $::hostname is the Facter hostname fact.
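
For example, running facter hostname on an agent prints the raw value, and a manifest can use the same fact as a top-level variable. A minimal sketch (the motd resource here is just for illustration):

file { '/etc/motd':
  content => "Welcome to ${::hostname}\n",
}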

Assigning roles.

Now we need a way to assign roles to the instances when we boot them. To keep it simple, I’m using inline yaml in the user-data to create some basic “facts” at build time. Using yaml makes it easy to populate the user-data box in the AWS Management GUI or AWS CLI, and to use in more advanced automation like CloudFormation. Here is what I include in the user-data:

  • env: Instance environment – Examples: dev, qa, stg or prd
  • role: Instance role – Examples: www, api, mongodb

{env: prd, role: mrfancypantsapp}
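
For example, launching an instance with this user-data from the AWS CLI might look like this (a sketch with placeholder AMI, key, and security group ids):

aws ec2 run-instances \
--image-id ami-xxxxxxxx \
--instance-type m3.medium \
--key-name my-key \
--security-group-ids sg-xxxxxxxx \
--user-data '{env: prd, role: mrfancypantsapp}'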

This simple Python script reads the user-data (provided to the instance by an AWS magic URL) and spits out key=value pairs for Facter so we can use these values in Puppet. I namespace these values with a site prefix so they don’t stomp on values set by other data sources. Just place this script in the /etc/facter/facts.d directory and Facter will process it on each run. I include this script in our custom AMI builds so this works at boot.

#!/usr/bin/env python
import subprocess
import yaml

# Prefix for the fact names so they don't collide with facts
# from other sources (produces site_env, site_role, etc.).
site = "site"  # change this to your own prefix

# user-data is served to the instance by the EC2 metadata service.
userData = subprocess.check_output('/usr/bin/ec2metadata --user-data', shell=True)
facts = yaml.load(userData)

# Facter expects executable facts to print key=value pairs.
for k, v in facts.iteritems():
    print site + "_" + k + "=" + v

These facts will now be top-level variables in Puppet. You can see them, and the other things Puppet/Facter thinks are important for identifying your system, by running facter on the agent host. Using this simple yaml makes it easy to add more facts at build time.
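
To check just the custom facts from the script above (using the prd/mrfancypantsapp example), pass the fact names to facter:

facter site_env site_role
# site_env => prd
# site_role => mrfancypantsapp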

But wait, I have a bunch of systems already running.

You can’t modify user-data on running instances, but all Facter wants is something that spits out key=value pairs. For legacy systems I use a simple shell script to echo values. Place this script in the /etc/facter/facts.d directory.

#!/bin/sh
echo site_env=prd
echo site_role=mrfancypantsapp