All Posts

Kubernetes - Kops - GitLab

This will go over how to get GitLab up and running using the following toolchain: AWS, Terraform, and Kops. Kops is a really nice tool that makes it easy to spin up a Kubernetes cluster in AWS while still giving you a lot of control over how it's spun up. I used Terraform to pre-create a VPC structure that Kops can build on top of.
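
That workflow maps to roughly this flow; a minimal sketch, assuming a pre-built Terraform VPC (the VPC ID, zone, CIDR, cluster name, and state-store bucket below are all placeholders):

```shell
# Point kops at its state store, then build on the existing VPC.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

kops create cluster \
  --name=k8s.example.com \
  --zones=us-east-1a \
  --vpc=vpc-0123456789abcdef0 \
  --network-cidr=10.100.0.0/16

# Review the generated config, then apply it.
kops update cluster k8s.example.com --yes
```

The `--vpc` and `--network-cidr` flags are what tell Kops to reuse the Terraform-managed network instead of creating its own.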

AWS Lambda RDS backup job

I just started playing around a lot with different parts of AWS. Previously, whenever I wanted to run a one-off job like an RDS snapshot, I would put a cron job on a server I knew there would never be more than one of. So for example I would have a cron entry on my Salt master server that looked like this:

0 * * * * /opt/aws-scripts/api-rds-snapshot.py

That would take an hourly snapshot of an RDS instance.
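
That cron job translates naturally to a Lambda handler; a sketch, assuming boto3 is available (it is in the Lambda runtime) and using a hypothetical instance name api-db — the snapshot-naming helper is pure Python:

```python
from datetime import datetime, timezone

def snapshot_id(instance, now=None):
    """Build a timestamped snapshot identifier for an RDS instance."""
    now = now or datetime.now(timezone.utc)
    return "{}-{}".format(instance, now.strftime("%Y-%m-%d-%H-%M"))

def handler(event, context):
    """Lambda entry point: snapshot a hypothetical 'api-db' RDS instance."""
    import boto3  # imported lazily; only present in AWS environments
    rds = boto3.client("rds")
    rds.create_db_snapshot(
        DBInstanceIdentifier="api-db",
        DBSnapshotIdentifier=snapshot_id("api-db"),
    )
```

Wire the handler to an hourly CloudWatch Events / EventBridge schedule and the cron entry (and the pet server it lived on) goes away.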

kubernetes api scale replication controller

This isn't easy to find in the Kubernetes docs. If you want to scale up a replication controller, you can run the following:

size = 2
master_ip = '10.100.0.2'
rc_name = 'blog'
headers = {'Content-Type': 'application/strategic-merge-patch+json'}
payload = '{"spec":{"replicas":%s}}' % size
r = requests.patch("http://{ip}:8080/api/v1/namespaces/default/replicationcontrollers/{n}".format(ip=master_ip, n=rc_name), headers=headers, data=payload)

The example uses the Python requests module, but it's pretty similar with something like curl; you just want to replace the top three lines.

kubernetes deployment pipeline

Overview: a bit of background. Our development process suffered from a lot of tech rot. There was a base Vagrant image that spun up infrastructure that sort of matched our development environment in AWS. The big issue was that it was not officially supported by anyone in the organization. The config management to bootstrap the Vagrant images was in Chef, but in development and production we were using Salt. A lot of the developers didn't even use the Vagrant image and just set up their local environments to mimic production as best they could.

Dynamic haproxy config in SaltStack

If you are anything like me, you like to over-utilize HAProxy. SaltStack has a nice feature for files that lets you add to them from different state files. Say you want a stock HAProxy config but are sick and tired of maintaining five different versions of it. Our basic formula layout looks like this:

- packages/haproxy.sls
- web/init.sls

The haproxy.sls handles our basic HAProxy install and template and makes sure the service is running
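
The Salt feature in question is (as best I recall) the accumulator mechanism: states elsewhere append text that the managed file's Jinja template renders. A rough sketch — file paths, state IDs, and the backend text are all illustrative:

```yaml
# packages/haproxy.sls -- base install, templated config, running service
haproxy:
  pkg.installed: []
  service.running:
    - watch:
      - file: /etc/haproxy/haproxy.cfg

/etc/haproxy/haproxy.cfg:
  file.managed:
    - source: salt://packages/files/haproxy.cfg
    - template: jinja

# web/init.sls -- contributes a backend to the shared config
web-backend:
  file.accumulated:
    - filename: /etc/haproxy/haproxy.cfg
    - text: "server web01 10.0.0.10:80 check"
    - require_in:
      - file: /etc/haproxy/haproxy.cfg
```

The haproxy.cfg Jinja template then loops over the accumulated entries, so each role's state file adds its own servers without forking the whole config.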

Monitor Salt with Monit

Sometimes Salt has a tendency to crash, so we can use Monit to fix that problem. This assumes you already have the EPEL repo installed:

yum install monit

Now with Monit installed we can edit /etc/monit.conf with the following contents:

set daemon 5 with start delay 5
set logfile /var/log/monit.log
set idfile /var/lib/monit.id
set statefile /var/run/monit.state
set mailserver localhost port 25 with timeout 30 seconds
set mail-format { from: monit@hostname.
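
Beyond those daemon settings, the piece that actually watches Salt is a check block; a sketch, with the pidfile and init-script paths as assumptions for an EL-style box:

```
# /etc/monit.d/salt.conf -- restart salt-minion if it dies
check process salt-minion with pidfile /var/run/salt-minion.pid
  start program = "/etc/init.d/salt-minion start"
  stop program = "/etc/init.d/salt-minion stop"
```

Monit polls on the `set daemon` interval, notices the missing process, and runs the start program — which is exactly the babysitting a crashy salt-minion needs.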

guestfish problems with virt-filesystems

I was trying to use guestfish to grow a qcow2 partition without booting a live image, fdisking, and all that mess. So I tried to run it and was getting:

# virt-filesystems --long --parts --blkdevs -h -a disk.qcow2
libguestfs: error: /usr/bin/supermin-helper exited with error status 1.
To see full error messages you may need to enable debugging.
See http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs

After scratching my head a bit, I figured out you need to update the guestfs appliance packages
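
The fix boils down to something like this sketch (the yum package glob is an assumption for the EL family the post is using):

```shell
# Pull in current libguestfs appliance bits, then retry the original command.
yum -y update 'libguestfs*'
virt-filesystems --long --parts --blkdevs -h -a disk.qcow2
```

If it still fails, `export LIBGUESTFS_DEBUG=1` before re-running gives the full error messages the FAQ link refers to.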

Openvswitch / KVM / Libvirt / Ubuntu / VLANs the right way

There are a lot of old blog posts out there about getting KVM guests onto different VLANs via Open vSwitch. Many tell you to create fake bridges, or to create the ports via ovs-vsctl and then tell libvirt to use that created interface or portgroup. And there are almost no blogs that really explain, when you set up Open vSwitch, how to make the interface settings stick. The correct way to do it is this basic flow
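
That flow usually ends in a libvirt network definition rather than hand-made ports; a sketch, with the bridge name and VLAN id as placeholders:

```xml
<!-- ovs-net.xml: libvirt network backed by an Open vSwitch bridge -->
<network>
  <name>ovs-net</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
</network>
```

Load it with `virsh net-define ovs-net.xml` and `virsh net-start ovs-net`; guests then reference the network and portgroup in their interface config, and the VLAN tagging persists because libvirt programs OVS itself on every guest start.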

Get Mandos working in Ubuntu

I've been doing a lot of playing around with full disk encryption. Now there's one big problem when you do full disk encryption: when the server reboots, you are left at a prompt to enter your password to mount the drive. This is solved by a tool called Mandos. It is a client/server tool: the Mandos client is loaded into the initrd image on the server, and on boot it queries the Mandos server, which, if it approves the client, sends back the encryption key for the client to use.
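
On the packaging side this boils down to roughly the following; a sketch assuming the stock Ubuntu package names:

```shell
# On the machine that will hand out keys:
apt-get install mandos

# On each encrypted machine; this should pull the client into the initramfs
# so it can fetch the key before the root filesystem is unlocked:
apt-get install mandos-client
```

The server then holds an encrypted copy of each client's LUKS passphrase, so unattended reboots work as long as the Mandos server is reachable at boot.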

Verify user's password on the command line

If there's any chance you need to verify a user's password on the command line and you are root, you can use openssl with the info from /etc/shadow. So first we want to grab the entry from /etc/shadow:

grep mike /etc/shadow

That will give us something that looks like:

mike:$6$tCFXiZHH$tFN8HZg/hXxYePSLZHVyBWuCFKlyesvKGKefwef2qR.DEKrrkvDUhewfwefuM.kU1HewfwE3HvprG/oMnizG2.:15734:0:99999:7:::

So the items we want are the $6 and the $tCFXiZHH. The $6 is important because it tells us the password is hashed with SHA-512.
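
That comparison can be scripted; a sketch that re-hashes a candidate password with the salt pulled from the shadow hash via `openssl passwd` (the -6 option needs OpenSSL 1.1.1 or newer; older releases only offered -1 for MD5):

```python
import subprocess

def verify_sha512(candidate, shadow_hash):
    """Re-hash candidate with the salt from a $6$ shadow entry and compare."""
    # shadow_hash looks like: $6$tCFXiZHH$<digest>
    _, alg, salt, _ = shadow_hash.split("$")
    if alg != "6":
        raise ValueError("not a sha512-crypt hash")
    out = subprocess.run(
        ["openssl", "passwd", "-6", "-salt", salt, candidate],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out == shadow_hash
```

Feed it the second colon-separated field of the /etc/shadow line; a match means the candidate password is correct.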