PyTN2017

Code and presentation for PyTennessee 2017

Presentation

Slides

Demo Code

py-vote-demo-web
vote-demo-worker
vote-demo-results

Example Rancher-Catalogs

These are built using the drone-rancher-catalog Drone plugin.

rancher-py-vote-demo-web
rancher-vote-demo-worker
rancher-vote-demo-results
drone-rancher-catalog

Drone CI

Drone CI Project
drone-cowpoke – Drone plugin to kick off a Rancher Stack upgrade via Cowpoke Service.

Rancher

Rancher

Cowpoke

cowpoke – Connector Service that scans Rancher Environments and upgrades stacks.

Nodevember

I’m presenting at Nodevember.

Demo Code

vote-demo-web
vote-demo-worker
vote-demo-results

Example Rancher-Catalogs

These are built using the drone-rancher-catalog Drone plugin.

rancher-vote-demo-web
rancher-vote-demo-worker
rancher-vote-demo-results
drone-rancher-catalog

Drone CI

Drone CI Project
drone-cowpoke – Drone plugin to kick off a Rancher Stack upgrade via Cowpoke Service.
buildgoggles – NPM module to create more descriptive tags for our builds.

Rancher

Rancher

Cowpoke

cowpoke – Connector Service that scans Rancher Environments and upgrades stacks.

Rancher Application Pipeline

Rancher Labs has invited me to give a demo during their Web Meetup on Wednesday 07/27/2016. I’m presenting the Application Pipeline we have built at LeanKit using GitHub, Drone CI, and Rancher.

Disclaimer: A lot of this code is specific to how WE do things. It may or may not work out of the box for you. YMMV

Presentation

Video – My Part starts at about 1:23:00
Rancher-Application-Pipeline-Demo

Demo Code

vote-demo-web
vote-demo-worker
vote-demo-results

Example Rancher-Catalogs

These are built using the drone-rancher-catalog Drone plugin.

rancher-vote-demo-web
rancher-vote-demo-worker
rancher-vote-demo-results

Drone CI

Drone CI Project
drone-rancher-catalog – Drone plugin to build custom rancher catalogs.
drone-cowpoke – Drone plugin to kick off a Rancher Stack upgrade via Cowpoke Service.
buildgoggles – NPM module to create more descriptive tags for our builds.

Cowpoke

cowpoke – Connector Service that scans Rancher Environments and upgrades stacks.

Install Puppet 4 (Open-source Version)

Puppet 4 Master with PuppetDB, and SSL.

Puppet 4 Management TL;DR

Service:

  • Start: service puppetserver start
  • Stop: service puppetserver stop

Agent Run:

  • puppet agent -t

Configs:

  • /etc/puppetlabs/puppet

Manifests:

  • /etc/puppetlabs/code

Logs:

  • /var/log/puppetlabs

SSL Certs:

  • /etc/puppetlabs/puppet/ssl

Ports:

  • puppetserver: 8140
  • mcollective (not covered here): 61613

The System

We assume you already have a server. These instructions are geared to running on Ubuntu 14.04 LTS. If you are using another OS, YMMV.

I’m running in Azure on a Standard_D2_v2 (2 core, 7GB system). This is more than enough for Puppet, with some overhead for other admin tasks.

I suggest setting the puppet server to a static internal IP address to make it easier to bootstrap future clients.

FQDN

Before you install anything, configure your system to know its Fully Qualified Domain Name. If the FQDN is correct puppet will create the correct SSL certs for you when you first start up the server.

You will want to create the system’s name and also aliases for puppet and puppetdb. This will make moving or scaling the puppetserver service a lot easier in the future. It will also separate the puppet-agent service on the system from the puppetserver service.

  • System FQDN: a1admpuppet01.jgreat.me
  • Puppet Alias(CNAME): puppet.jgreat.me
  • PuppetDB Alias(CNAME): puppetdb.jgreat.me

Set the system name using the special 127.0.1.1 IP in /etc/hosts. Set the alias names to use the system IP address.

/etc/hosts

127.0.1.1 a1admpuppet01.jgreat.me a1admpuppet01
10.0.1.50 puppet.jgreat.me puppet
10.0.1.50 puppetdb.jgreat.me puppetdb

Make sure /etc/hostname is set to the system’s short name.

/etc/hostname

a1admpuppet01

Test this all out. hostname -f should now return the FQDN.

$ hostname -f
a1admpuppet01.jgreat.me

Install Puppetserver

Set up PuppetLabs Apt Repo

Download and install the puppetlabs-release package. This will set up the PuppetLabs Apt repo for you.

wget https://apt.puppetlabs.com/puppetlabs-release-pc1-$(lsb_release -sc).deb
dpkg -i puppetlabs-release-pc1-$(lsb_release -sc).deb
apt-get update

Install puppetserver

Install the puppetserver and puppet-agent packages, but don’t start the server yet (it doesn’t start automatically).

apt-get install -y puppetserver puppet-agent

Configure puppetserver

The puppetserver config has moved to /etc/puppetlabs/puppet/puppet.conf. The important bit is setting dns_alt_names to the puppet alias.

Note: autosign = true will set the server to automatically accept new clients and sign certs for them. You are relying on the network layer for security. Don’t do something silly like publishing port 8140 to the internet.

/etc/puppetlabs/puppet/puppet.conf

[main]
server = puppet.jgreat.me
environment = production
runinterval = 30m

[master]
dns_alt_names = puppet,puppet.jgreat.me
environment_timeout = 0
autosign = true
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code
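
Before starting the service you can sanity-check that these settings are being picked up. A quick check, assuming the default Puppet 4 AIO install path:

/opt/puppetlabs/bin/puppet config print dns_alt_names autosign --section master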

Start puppetserver

Start the puppetserver service

service puppetserver start

You can monitor the progress of the service.

tail -f /var/log/puppetlabs/puppetserver/puppetserver.log

Test puppet-agent

Test your puppet agent (you may have to log in again for puppet to be in your path).

# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for a1admpuppet01.jgreat.me
Info: Applying configuration version '1456088496'
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.03 seconds

Install PuppetDB

The easy way to do this is to use puppet to install puppetdb.

Install the puppetdb puppet module.

puppet module install puppetlabs-puppetdb

Here is a sample manifest.

node 'a1admpuppet01.jgreat.me' {
  # Set service to start automatically
  service { 'puppetserver':
    ensure => running,
    enable => true,
  }

  # Install and configure puppetdb
  class { 'puppetdb': }
  class { 'puppetdb::master::config': }
}
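
A minimal way to apply this, assuming the stock codedir from the puppet.conf above:

# Save the node block above as site.pp in the default production environment,
# then trigger an agent run on the puppetmaster itself.
mkdir -p /etc/puppetlabs/code/environments/production/manifests
vi /etc/puppetlabs/code/environments/production/manifests/site.pp
puppet agent -t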

That’s it. You should now have a puppet server with puppetdb as a storage/reporting backend.

Configure Storage on the Fly with Puppet 4

Here’s a snippet of puppet I’m using to configure storage. Now whenever I add new disks to the instance, puppet will expand the storage for me.

I apply this before I install the docker-engine package.

# Assumed example values (not part of the original snippet); adjust to your naming.
$vg = 'data'    # volume group name
$lv = 'docker'  # logical volume name

package { 'lvm2': }
$::disks.each |$d, $v| {
  if ($d =~ /^sd[c-z]+/) {
    # Create pv if not a pv
    exec { "/sbin/pvcreate /dev/${d}":
      unless => "/sbin/pvs --noheadings /dev/${d}",
    }
    # Create VG if not exists
    exec { "/sbin/vgcreate ${vg} /dev/${d}":
      unless => "/sbin/vgs ${vg}",
    }
    # Add disk if not in the vg
    exec { "/sbin/vgextend ${vg} /dev/${d}":
      unless => "/sbin/pvs --noheadings -o vg_name /dev/${d} | /bin/grep ${vg}",
    }
  }
}
# create volume if it doesn't exist
exec { "/sbin/lvcreate --extents 100%FREE -n ${lv} ${vg}":
  unless  => "/sbin/lvs ${vg}/${lv}",
}
# Create ext4 filesystem
exec { "/sbin/mkfs.ext4 -j -b 4096 /dev/${vg}/${lv}":
  unless  => "/sbin/blkid /dev/${vg}/${lv} | /bin/grep 'TYPE=\"ext4\"'",
  require => Exec["/sbin/lvcreate --extents 100%FREE -n ${lv} ${vg}"],
}
# extend volume if room in data vg
exec { "/sbin/lvextend --extents +100%FREE ${vg}/${lv}":
  unless => "/sbin/vgs --noheadings -o vg_free ${vg} | /bin/grep -P '^\\s+0\\s$'"
}
file { '/var/lib/docker':
  ensure => directory,
}
mount { '/var/lib/docker':
  ensure  => 'mounted',
  atboot  => true,
  device  => "/dev/${vg}/${lv}",
  fstype  => 'ext4',
  options => 'defaults,nobootwait,nobarrier',
  dump    => '0',
  pass    => '2',
  require => [
    File['/var/lib/docker'],
    Exec["/sbin/mkfs.ext4 -j -b 4096 /dev/${vg}/${lv}"],
  ]
}

Convert an Existing WordPress Site to Docker

I had a “classic” install of WordPress, Apache/PHP, and MySQL on the root drive that I wanted to convert to Docker containers.

Why?

Install
Installing is easy. Getting a publicly available image running is a single docker run command.

Maintain
With a little bit of planning, updating your apps is a simple process.

  • docker pull the latest image.
  • docker rm -f the current container (remember to use volumes on data you need to save).
  • docker run with the same options and you have the latest app.
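
As a concrete sketch (hypothetical container named myapp; substitute the run options you actually used):

docker pull myapp:latest
docker rm -f myapp
docker run -d --restart=always --name=myapp myapp:latest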

Scale
In the off chance that someone besides my Mom (hi Mom!) reads my blog, scaling containerized apps is a breeze.

  • Add another host.
  • docker run your app with the same options.
  • Add a load balancer.

Note: These commands assume you are running as root.

Saving Your WordPress Content

Dump Your Database

This will dump the WordPress database to a sql file. You will need your MySQL server hostname, user, and password. If you don’t remember what that is, check the wp-config.php file in your current WordPress install.

Enter your WordPress DB user password when prompted.

mysqldump -h [hostname] -p -u [user] [database] > ~/wordpress.sql

Backup the wp-content directory

All the stuff you need to save is in the wp-content directory in your WordPress install.

tar cvzf ~/wp-content.tar.gz ./wp-content

Docker all the Things

Install Docker

Run the install script from http://www.docker.com:

wget -qO- https://get.docker.com/ | sh

Install MySQL Container

Make a storage directory
Make a directory to persistently store the DB files. This will make it easier to update the Docker Image later.

mkdir -p /data/mysql 

Install/Start a MySQL container
This is really as simple as:

docker run -d \
--volume=/data/mysql:/var/lib/mysql \
--restart=always \
--publish=172.17.42.1:3306:3306 \
--env='MYSQL_USER=wp-user' \
--env='MYSQL_PASSWORD=myAwesomePassword' \
--env='MYSQL_DATABASE=wordpress' \
--env='MYSQL_ROOT_PASSWORD=myAwesomeRootPassword' \
--name=mysql \
mysql:5.6

What’s going on here?

  • docker run – Start a Container from an Image. If that image is not already downloaded, it will try to download it from the specified repository.
  • -d – Run the Container as a Daemon (in the background).
  • --volume /data/mysql:/var/lib/mysql – Mount the local directory /data/mysql in the Container at /var/lib/mysql.
  • --restart=always – Tell docker to restart the container on boot and anytime it dies.
  • --publish 172.17.42.1:3306:3306 – Here we are telling docker to map port 3306 in the container to the special internal docker0 ip address 172.17.42.1. This prevents external access but allows the local host and any local docker containers to communicate with MySQL.
  • --env 'MYSQL_USER=wp-user' – Use env to pass parameters into your container. In this case we are passing in the default user, database, and passwords. Good Images will document the available environment variables.
  • --name mysql – This is the friendly name you can set for easy control of the container.
  • mysql:5.6 – This is the Image Name followed by the Image Tag. This is calling the registry.hub.docker.com default library/mysql image. This and other public images can be found at https://registry.hub.docker.com/

ADVANCED: Why not use --link?
The problem with linking containers is that if the “Linked To” container (mysql) is restarted for any reason it will get a new IP address. Any linking containers (wordpress) will need to be restarted to pick up that new IP address. This really becomes an issue if you have multiple containers linked or nested dependent containers. I believe that linking causes more issues than it solves. It’s better to treat all of your containers as if they were on separate hosts.

Import the DB

Import that .sql file you saved earlier. Use the IP and credentials you specified in the previous step.

 mysql -h 172.17.42.1 -p -u wp-user wordpress < ~/wordpress.sql
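
You can quickly confirm the import worked by listing the WordPress tables (same host, user, and database as above):

mysql -h 172.17.42.1 -p -u wp-user -e 'SHOW TABLES;' wordpress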

Install WordPress

Extract Your wp-content Backup
Just like the MySQL Container you will need persistent storage for the WordPress content. Create a directory and extract the tarball. You may need to update the owner/group so the WordPress container can modify the data.

mkdir -p /data/wordpress
cd /data/wordpress
tar xvzf ~/wp-content.tar.gz
chown -R www-data:www-data ./wp-content

Install/Start the WordPress Container
Just like MySQL we are using the public WordPress Image. Set the env variables with the IP and credentials you defined when you set up the MySQL container. This container is bound to port 80 on all of the host IP addresses.

docker run -d \
--restart=always \
--publish=80:80 \
--volume=/data/wordpress/wp-content:/var/www/html/wp-content \
--env='WORDPRESS_DB_HOST=172.17.42.1' \
--env='WORDPRESS_DB_USER=wp-user' \
--env='WORDPRESS_DB_PASSWORD=myAwesomePassword' \
--env='WORDPRESS_DB_NAME=wordpress' \
--name=wordpress \
wordpress:latest

Check docker

At this point we should be done. You can check to see if the docker containers are running.

# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                        NAMES
685247608fff        wordpress:latest    "/entrypoint.sh apac   3 weeks ago         Up About an hour    0.0.0.0:80->80/tcp           wordpress           
75b5732cabdc        mysql:5.6           "/entrypoint.sh mysq   3 weeks ago         Up About an hour    172.17.42.1:3306->3306/tcp   mysql 

Browse to port 80 on your server and you should see your site.

Troubleshooting Tip

If something has gone wrong you can check the logs from your containers.

docker logs -f <container>

Encrypted Volume Setup

Based on blog post:
http://thesimplecomputer.info/full-disk-encryption-with-ubuntu

Notes.

Remember when it cost hundreds of thousands of dollars for a few TB of storage? Well, I just bought a pair of 4TB disks for about $250. WE LIVE IN THE FUTURE!!!

Okay, I want to encrypt the data I’m storing. I currently have my home encrypted using ecryptfs (i.e. “Encrypt my Home” in Ubuntu Setup). This is fine for home, but I have services that I want to run while not logged in. So I’m going to use btrfs on top of a pair of LUKS devices.

Encryption

Prep the Devices? – Nope.

There is a lot of advice that, as a best practice, you should prep the devices by writing noise with /dev/urandom or AES ciphertext (as the post above suggests), but on big devices like my 4TB drives it would take 11+ hours to complete. I’m not paranoid enough about this data to wait.

The downside is that someone could figure out the bounds of the data: that I have 1.6TB used instead of the full 4TB.

Anyway, I can always fill the rest of the drive with noise after the fact with /dev/zero. http://security.stackexchange.com/questions/29682/remedy-for-not-having-filled-the-disk-with-random-data

Benchmark.

Which encryption options are best?

root@caesar:/# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       789590 iterations per second
PBKDF2-sha256     451972 iterations per second
PBKDF2-sha512     362077 iterations per second
PBKDF2-ripemd160  562540 iterations per second
PBKDF2-whirlpool  167183 iterations per second
#  Algorithm | Key |  Encryption |  Decryption
     aes-cbc   128b   166.0 MiB/s   186.3 MiB/s
 serpent-cbc   128b    81.7 MiB/s   215.1 MiB/s
 twofish-cbc   128b   196.9 MiB/s   242.1 MiB/s
     aes-cbc   256b   129.7 MiB/s   135.9 MiB/s
 serpent-cbc   256b    88.0 MiB/s   215.8 MiB/s
 twofish-cbc   256b   189.4 MiB/s   230.1 MiB/s
     aes-xts   256b   178.0 MiB/s   181.4 MiB/s
 serpent-xts   256b   204.0 MiB/s   200.6 MiB/s
 twofish-xts   256b   224.4 MiB/s   213.0 MiB/s
     aes-xts   512b   133.9 MiB/s   142.0 MiB/s
 serpent-xts   512b   189.2 MiB/s   204.3 MiB/s
 twofish-xts   512b   221.1 MiB/s   224.1 MiB/s

Hash
It’s my understanding that sha1 as a hash method is not really recommended anymore. sha512 isn’t that much less efficient than sha256 on my CPU, so sha512 it is.

Algorithm
twofish-xts performs the best, but I was a bit surprised that the difference between 256 and 512 bit key performance was pretty much negligible. twofish-xts-plain64 with a 512 bit key is the choice.

Create the partitions.

/dev/sdc and /dev/sdd are my devices.

Use gparted to create a gpt partition table and create a primary partition using the whole disk.
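
If you prefer the command line, parted can do the same thing non-interactively (shown for /dev/sdc; repeat for /dev/sdd):

parted --script /dev/sdc mklabel gpt
parted --script /dev/sdc mkpart primary 0% 100%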

Create the encrypted device.

Repeat for each device.

root@caesar:/# cryptsetup luksFormat --cipher twofish-xts-plain64 --key-size 512 --hash sha512 --iter-time 1000 /dev/sdc1

WARNING!
========
This will overwrite data on /dev/sdc1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase: 
Verify passphrase: 

Open the encrypted devices

Make them available as /dev/mapper devices.

root@caesar:/# cryptsetup luksOpen /dev/sdc1 data-01
Enter passphrase for /dev/sdc1: 
root@caesar:/# cryptsetup luksOpen /dev/sdd1 data-02
Enter passphrase for /dev/sdd1: 

Get UUIDs for devices

We need these to set up crypttab so the devices are unlocked at boot.

root@caesar:~# cryptsetup luksUUID /dev/sdc1
fafc39b8-204e-4bb4-b7d8-808abb8ba53f
root@caesar:~# cryptsetup luksUUID /dev/sdd1
f08efefc-2882-445a-969f-b4e0c6c99c70

/etc/crypttab

The first field is the mapper device name, the second is the source device (by UUID), the third is the key file (none means you will be prompted for the passphrase at boot), and the fourth is the options list.

data-01 UUID=fafc39b8-204e-4bb4-b7d8-808abb8ba53f none luks
data-02 UUID=f08efefc-2882-445a-969f-b4e0c6c99c70 none luks

Back up the LUKS headers

Back up the headers for the devices to files stored somewhere encrypted. If a header gets corrupted, this may be the only way you can recover the data.

cryptsetup luksHeaderBackup /dev/sdc1 --header-backup-file /home/jgreat/sdc1-data-01.img
cryptsetup luksHeaderBackup /dev/sdd1 --header-backup-file /home/jgreat/sdd1-data-02.img
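
If you ever need them, a header can be restored from its backup file with the matching restore command (shown for the first device):

cryptsetup luksHeaderRestore /dev/sdc1 --header-backup-file /home/jgreat/sdc1-data-01.img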

Create your file system

You can now use the /dev/mapper devices with the filesystem/volume management of your choice. I’m creating a btrfs mirror.

btrfs mirror.

Create the mirror

root@caesar:/# mkfs.btrfs -d raid1 -m raid1 /dev/mapper/data-01 /dev/mapper/data-02
Btrfs v3.17
See http://btrfs.wiki.kernel.org for more information.

Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
adding device /dev/mapper/data-02 id 2
fs created label (null) on /dev/mapper/data-01
    nodesize 16384 leafsize 16384 sectorsize 4096 size 7.28TiB

Show the results

root@caesar:/# btrfs fi show
Label: none  uuid: 6adadbe1-9194-4cda-abe0-5779b305aa76
    Total devices 2 FS bytes used 640.00KiB
    devid    1 size 3.64TiB used 2.03GiB path /dev/mapper/data-01
    devid    2 size 3.64TiB used 2.01GiB path /dev/mapper/data-02

Btrfs v3.17

Mount the devices

Use the UUID or one of the /dev paths.

root@caesar:/# mount UUID=6adadbe1-9194-4cda-abe0-5779b305aa76 /data

Balance btrfs

You may want to balance the devices since they will show a slight mismatch.

root@caesar:/# btrfs balance /data
Done, had to relocate 6 out of 6 chunks

Create a subvolume

I’m going to create a subvolume for media storage.

root@caesar:/# btrfs sub create /data/media
Create subvolume '/data/media'
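
If you want to mount the subvolume directly somewhere else, btrfs supports a subvol mount option (the mount point here is just an example):

mkdir -p /mnt/media
mount -o subvol=media UUID=6adadbe1-9194-4cda-abe0-5779b305aa76 /mnt/media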

/etc/fstab

Add an entry to fstab so we can mount on boot.

UUID=6adadbe1-9194-4cda-abe0-5779b305aa76 /data  btrfs   defaults 0 2

Role based Puppet/MCollective in EC2 – Part 3 Installing MCollective with SSL

This post will take you through adding the MCollective orchestration software, configured over SSL, to the puppetmaster you built in Part 2 (Ubuntu 14.04 running on EC2). On a side note, if you have the budget, I highly suggest supporting Puppet Labs and their efforts by subscribing to Puppet Enterprise. Of course, if you did, you wouldn’t need these instructions 🙂

About MCollective.

MCollective is orchestration software by Puppet Labs. It allows you to connect to and run “jobs” simultaneously across groups of instances. These groups of instances can be manually defined or selected based on something like the Facter facts we defined in the user-data in Part 1. MCollective consists of 3 basic components:

  • Middleware: This is really the “server” portion. ActiveMQ messaging software runs here. I run this on my puppetmaster.
  • Agent: mcollectived runs on all the instances you want to control with mcollective.
  • Client: mco command. This is where you control the agents. I think of this part as the command console. I also install this on the puppetmaster.

ActiveMQ
Apache ActiveMQ is really the “server” portion of MCollective. You can use other messaging servers, but that’s way beyond the scope of what is covered here. It’s written in Java.

  • Listening Port: 61614
  • Start, Stop, Restart: service activemq [action]
  • Configuration: /etc/activemq/instances-enabled/mcollective/
  • Logs: Not setup by default. ActiveMQ Logging

mcollectived
mcollectived is the “agent” portion. It connects to the ActiveMQ server, then the server can pass messages back. It’s written in Ruby.

  • Start, Stop, Restart: service mcollective [action]
  • Configuration: /etc/mcollective
  • Logs: /var/log/mcollective.log

mco – the “client”
MCollective calls the places where you run the mco command the “client”. I think this is terribly confusing; it’s too close to “agent”. I generally install mco on my puppetmaster instances. Be careful who you give access to the mco command – it can be the equivalent of giving root access to all your instances. Mistakes here can affect all of your instances at once. It is possible to lock down what a user can do with this command, but that’s beyond the scope of this tutorial. This is also written in Ruby.

Installing MCollective.

I find the Puppet Labs documentation extremely confusing and out of date. But there is hope! I just about had it all figured out when I discovered that you can do the full installation of middleware/agent/client with SSL through the puppetlabs/mcollective Puppet module. Using the module almost makes the install easy.

Install puppetlabs/mcollective Puppet module.

Install the module with the puppet command. It will install a bunch of dependencies.

puppet module install puppetlabs/mcollective

Install my site_mcollective Module.

I wrote a module that wraps the puppetlabs/mcollective module. Mostly the site_mcollective module gives you a place to keep the certs and define your users so Puppet can install them.

Clone the Repo into `/etc/puppet/modules`.
I put the module up on GitHub – https://github.com/jgreat/site_mcollective

cd /etc/puppet/modules
git clone https://github.com/jgreat/site_mcollective.git

Setup Config File.
All the configuration is done in a yaml file. This will require Hiera to work. Create a folder to store yaml configs for Hiera and copy the site_mcollective.yaml to it.

mkdir /etc/puppet/yaml
cp /etc/puppet/modules/site_mcollective/site_mcollective.yaml /etc/puppet/yaml

Modify/Create /etc/puppet/hiera.yaml.
Include site_mcollective as a :hierarchy: source.

---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/yaml
:hierarchy:
  - site_mcollective
  - common

Restart apache2.
You will need to restart the puppetmaster to read the hiera.yaml.

service apache2 restart

Modify /etc/puppet/yaml/site_mcollective.yaml.
Change the values to suit your site.

  • middleware_hosts: a list of your puppetmaster/mcollective servers.
  • activemq_password: Change it to something long and random.
  • activemq_admin_password: Change it to something long and random.
  • ssl_server_public: Change cert file name to your server name.
  • ssl_server_private: Change key file name to your server name.
  • users: a list of your users that can run mco. These users must already have accounts.

Copy the Server Certificates and Keys.

MCollective can use the puppetmaster SSL certificate authority for its SSL certificates. We will copy the files from the puppetmaster directories into the site_mcollective module so they can be distributed to the appropriate places.

puppetmaster CA Certificate.

cp /var/lib/puppet.example.com/ssl/certs/ca.pem /etc/puppet/modules/site_mcollective/files/server/certs/

puppetmaster Server Certificate and Key.
The key is only readable by root, so you will need to change the group ownership to puppet so the puppetmaster process can read it.

cp /var/lib/puppet.example.com/ssl/certs/puppet.example.com.pem /etc/puppet/modules/site_mcollective/files/server/certs/
cp /var/lib/puppet.example.com/ssl/private_keys/puppet.example.com.pem /etc/puppet/modules/site_mcollective/files/server/keys/
chgrp puppet /etc/puppet/modules/site_mcollective/files/server/keys/puppet.example.com.pem 

Adding Users.

We need to generate an SSL certificate and key for each of our users. Puppet has a really simple interface to do this. I’m using the ubuntu user as an example.

Generate a Certificate and Key.

puppet cert generate ubuntu
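
You can verify the certificate was created by listing all of the certs the puppetmaster knows about:

puppet cert list --all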

Copy User Certificate and Key.
Again, the user keys are only readable by root, so you will need to change the group ownership to puppet so the puppetmaster process can read them.

cp /var/lib/puppet.example.com/ssl/certs/ubuntu.pem /etc/puppet/modules/site_mcollective/files/user/certs/
cp /var/lib/puppet.example.com/ssl/private_keys/ubuntu.pem /etc/puppet/modules/site_mcollective/files/user/keys/
chgrp puppet /etc/puppet/modules/site_mcollective/files/user/keys/ubuntu.pem

Install Middleware Server.

Now we are going to use puppet to install the middleware server, agent and mco on the puppetmaster instance.

Create/Modify your site.pp.
Assign the site_mcollective class to your puppetmaster. Instead of assigning classes to traditional node definitions, I’m using the site_role fact I created in the user-data when I built the instance. See Part 1 for details on setting up the roles with user data or a simple shell script. The site_mcollective class has 3 install_types available.

  • agent – Default. Just the agent portion; install this on most instances.
  • client – The mco software and the agent. Install this on the instances you want to run commands from.
  • middleware – The “server”: ActiveMQ, agent, and client. I’m adding site_mcollective with install_type => 'middleware' on my instances with the puppetmaster role.

case $::site_role {
    puppetmaster: {
        class { 'site_mcollective': 
            install_type => 'middleware', 
        }
    }
    default: {
    }
}

Run puppet agent.
Now run the puppet agent on your master instance. You should see a whole bunch of actions happen. Hopefully it’s all green output.

puppet agent -t

Test MCollective.
Now log in as the user you set up and run mco ping. If everything is set up correctly you should see a list of instances running the agent.

ubuntu@puppet01:~$ mco ping
puppet01                                 time=61.73 ms


---- ping statistics ----
1 replies max: 61.73 min: 61.73 avg: 61.73 

Hurrah it’s working!

Install on Additional Agents.

I’m adding site_mcollective on my instances with the mrfancypantsapp role. Since install_type => 'agent' is the default, you don’t need to specify it.

case $::site_role {
    puppetmaster: {
        class { 'site_mcollective': 
            install_type => 'middleware', 
        }
    }
    mrfancypantsapp: {
        class { 'site_mcollective': }
    }
    default: {
    }
}

Run puppet agent.
Wait for the automatic puppet agent run or trigger it manually.

puppet agent -t

Test MCollective.
Run mco ping from your puppetmaster. You should now see additional hosts.

ubuntu@puppet01:~$ mco ping
puppet01                                 time=61.73 ms
mrfancypantsapp-prd-ec2-111-111-111-111  time=75.33 ms
mrfancypantsapp-prd-ec2-111-111-111-112  time=80.31 ms
mrfancypantsapp-stg-ec2-111-111-111-231  time=63.34 ms

---- ping statistics ----
4 replies max: 80.31 min: 61.73 avg: 70.18 

Now we can use the site_role fact to run commands on servers that are only in that role.

ubuntu@puppet01:~$ mco ping -F site_role=mrfancypantsapp
mrfancypantsapp-prd-ec2-111-111-111-111     time=75.33 ms
mrfancypantsapp-prd-ec2-111-111-111-112     time=80.31 ms
mrfancypantsapp-stg-ec2-111-111-111-231     time=63.34 ms

---- ping statistics ----
3 replies max: 80.31 min: 63.34 avg: 72.99

Install Client (mco).

I’m adding site_mcollective install_type => 'client' on my instances with the secure_jumppoint role. From these instances I can run the mco command to send commands to other servers.

case $::site_role {
    puppetmaster: {
        class { 'site_mcollective': 
            install_type => 'middleware', 
        }
    }
    mrfancypantsapp: {
        class { 'site_mcollective': }
    }
    secure_jumppoint: {
        class { 'site_mcollective': 
            install_type => 'client',
        }
    }
    default: {
    }
}

Run puppet agent.
Wait for the automatic puppet agent run or trigger it manually.

puppet agent -t

Now you can log into the secure_jumppoint instance and run mco commands.

Next Steps.

Now that you have Puppet and MCollective working, you may be asking, “what do I do with it?” You will just have to see the next post – Part 4 – for working with Puppet/MCollective, tips, and some of my best practices.

Role based Puppet/MCollective in EC2 – Part 2 Installing puppetmaster

This howto will show you how to build a “production ready” puppetmaster in EC2 based on an Ubuntu 14.04 AMI.

Prerequisites

It will help a lot if you already have some EC2 experience. I’m not going to explain all the ins and outs of the EC2 pieces, just what you will need to set up. This howto also assumes that you will be working in a VPC (since that’s the current default).

Set up Security Groups

In a traditional infrastructure you would need to approve any new agents that connect to the puppetmaster. In our case we want to be able to take advantage of autoscaling, so new instances may need access at any time without manual approval. We will allow instances to automatically register and control access to the puppetmaster through security groups. Two groups will need to be created:

  • puppet-agent – This group will be assigned to all the hosts that run the puppet agent. This group does not need any inbound ports configured.
  • puppet – This group will be assigned to our puppetmaster and have permissions to allow the puppet-agent security group to access the system. This group should have inbound TCP ports 8140 and 61613 open to the security group id of the puppet-agent group.

Create the puppetmaster instance.

You can get the current Ubuntu 14.04 AMI here:

https://cloud-images.ubuntu.com/locator/ec2/

  • Click the AMI-xxxxxx link for 14.04 with EBS storage.
  • Pick the m3.large size – m3.medium did not have enough memory to run MCollective.
  • Up root storage to at least 15GB.
  • Assign the puppet and puppet-agent security groups (don’t forget to assign a default group with ssh access).
  • Launch and pick your ssh key.

Naming the System

For portability and scaling you will want to name your system something like puppet01 and create a cname called puppet that points to puppet01. This way, if you need more puppetmaster servers or need a new one, you can point the puppet.example.com cname to a load-balancer or another host.

  • Once the instance has started, assign the instance an Elastic IP.
  • Register a cname like puppet01.example.com in DNS to the ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com public name for the Elastic IP.
  • Register a cname puppet.example.com in DNS to puppet01.example.com.

/etc/hosts

Run ec2metadata --local-ipv4 to find the local IP address. As root edit /etc/hosts and add your local address with the names you have registered in DNS.

127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

10.0.1.192 puppet01.example.com puppet01
10.0.1.192 puppet.example.com puppet

/etc/hostname

Put puppet01 in /etc/hostname.

puppet01

Reboot so the instance will start up with the new hostname. If this is all successful you should be greeted with a prompt that reads: ubuntu@puppet01:~$

Install the PuppetLabs Apt Repo

The standard Ubuntu repos have a version of Puppet in them, but it’s usually a bit dated. Install the PuppetLabs repo with their handy .deb package, to get the current version of Puppet.

wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
sudo dpkg -i ./puppetlabs-release-trusty.deb

Configure puppetmaster

We are going to configure puppetmaster before we install it. This is because the puppetmaster-passenger package will use the settings in the master section to configure the Apache vhost for you.

/etc/puppet/puppet.conf

Create the directory.

sudo mkdir -p /etc/puppet

We are going to set the master section to use puppet.example.com for its certname and vardir. This will keep the master files separate from the agent files and make it easier to copy the SSL certs to new instances if you have to scale up.
As root create the /etc/puppet/puppet.conf file.

[main]
pluginsync = true

[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
certname = puppet.example.com
vardir = /var/lib/puppet.example.com
ssldir=/var/lib/puppet.example.com/ssl

[agent]
server = puppet.example.com
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet

/etc/puppet/autosign.conf

By default a new Puppet agent that connects to the master will need to have its SSL cert manually signed on the master. This isn’t really practical for an autoscaling infrastructure. Instead we have the master automatically sign all the certs and control access through EC2 Security Groups. As root create a /etc/puppet/autosign.conf file.

*
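
If you want to be slightly more restrictive, autosign.conf also accepts domain-name globs instead of a bare *. For example, assuming all of your agents live under example.com:

*.example.com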

Install the Packages.

Here’s the real trick PuppetLabs doesn’t tell you. You don’t want to install the puppetmaster package; that installs the Ruby WEBrick webserver, which can only handle a handful of clients. You want to install the puppetmaster-passenger package. This package installs and configures Apache with Ruby Passenger so you can handle a “Production” load. No fussing with gems, compiling passenger or configuring vhosts, it’s all done for you. As for scale, I’m running 50-75 instances on one m3.large with no issues. I suspect that I could do 200 instances before I need to look at scaling up.

sudo apt-get update
sudo apt-get install puppetmaster-passenger puppet

About the puppetmaster Service
The puppetmaster is a Ruby Passenger process that runs through the apache2 service. Here’s the general layout of things:

  • Stop, start or restart: service apache2 [action]
  • puppetmaster config, manifests and modules: /etc/puppet
  • puppetmaster working files: /var/lib/puppet.example.com
  • Apache2 puppetmaster vhost: /etc/apache2/sites-available/puppetmaster.conf
  • Access logs: /var/log/apache2/other_vhosts_access.log
  • General puppetmaster logs: syslog /var/log/syslog

Test out the agent.

At this point you should be able to run the Puppet agent locally on the master instance. Since there’s no configuration, no actual actions will be taken. Remove the agent lock file.

sudo rm /var/lib/puppet/state/agent_disabled.lock 

Enable the Puppet agent service in /etc/default/puppet.

sudo sed -i /etc/default/puppet -e 's/START=no/START=yes/' 

Run the Puppet agent with the -t flag so it will show output.

sudo puppet agent -t

You should see output similar to this.
The Puppet agent color codes its output so it should be all green. If you see red something has gone wrong.

Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet01.example.com
Info: Applying configuration version '1402079876'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.01 seconds

Congratulations, you now have a functional puppetmaster server. Next we will use Puppet to install MCollective with SSL support.

Role based Puppet/MCollective in EC2 – Part 1 Roles

This is the first in a short series of posts that will take you through building simple role based infrastructure in EC2 using Puppet for configuration and MCollective for orchestration.

What do you mean “role based”?

When you are talking AWS EC2 or any other “cloud infrastructure”, traditional ways of naming and organizing systems are just not flexible enough. When autoscaling, instances come and instances go. Your 4 web servers may, at any moment, turn into 6 servers. When you scale back down it might not be the original 4. The EC2-assigned DNS names like ec2-54-86-7-157.compute-1.amazonaws.com really don’t tell you anything about what that server does. To solve this problem, I create a couple of simple definitions (in this case Puppet “facts”) that I use to organize the systems in a human way. With tools like Puppet I can deploy my configuration and code to servers in various roles. MCollective allows me to run commands and manage the systems simultaneously by role. It doesn’t matter if it’s 4 servers or 100.

A bit of background.

Puppet and MCollective use a program called Facter to generate “facts” about a system. Facter runs on the agent systems during the Puppet agent run. These facts end up as top level variables in Puppet. For example, $::hostname is Facter’s hostname fact.

Assigning roles.

Now we need a way to assign roles to the instances when we boot them. To keep it simple, I’m using inline yaml in the user-data to create some basic “facts” at build time. Using yaml makes it easy to just populate the user-data box in the AWS Management GUI or AWS CLI, or to use it in more advanced automation like CloudFormation. Here is what I include in the user-data:

  • env: Instance environment – Examples: dev, qa, stg or prd
  • role: Instance role – Examples: www, api, mongodb

{env: prd, role: mrfancypantsapp}

This simple python script reads the user-data (provided to the instance by an AWS magic url) and spits out key=value pairs for Facter so we can use these values in Puppet. I namespace these values with site=’mysite’ so they don’t stomp on values set by other data sources. Just place this script in the /etc/facter/facts.d directory and Facter will process it on each run. I include this script in our custom AMI builds so this works at boot.

#!/usr/bin/env python
import subprocess
import yaml

site = "site"  # change this to your own site namespace prefix

# user-data is exposed to the instance through the EC2 metadata service
userData = subprocess.check_output('/usr/bin/ec2metadata --user-data', shell=True)
facts = yaml.load(userData)
for k in facts.iterkeys():
    print site + "_" + k + "=" + facts[k]

These facts will now be top level variables in Puppet. You can see them and other things Puppet/Facter thinks are important for identifying your system by running facter on the agent host. Using this simple yaml makes it easy to add more facts at build time.

But wait, I have a bunch of systems already running.

You can’t modify user-data on running instances, but all Facter wants is something that spits out key=value pairs. For legacy systems I use a simple shell script to echo values. Place this script in the /etc/facter/facts.d directory.

#!/bin/sh
echo site_env=prd
echo site_role=mrfancypantsapp
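
Facter only executes files in facts.d that are executable, so remember to set the execute bit (the file name here is just an example):

chmod +x /etc/facter/facts.d/site_role.sh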