Wednesday, October 6, 2010

Overview of a Puppet Split CA architecture


The Puppet Master CA is the only Certificate Authority (CA) in the whole infrastructure. It issues certificates for all Puppet agents. It also manages the Puppet Master systems.

The Puppet Masters are only responsible for compiling catalogs requested by Puppet Agents - they don't act as CAs themselves. They only accept Puppet Agents whose certificates have been issued by the Puppet Master CA.

Puppet Agents retrieve their certificates from the Puppet Master CA the first time they run. Afterwards they connect to the Puppet Masters to get their catalogs and don't contact the Puppet Master CA anymore.

Puppet Master CA


The Puppet Master CA manages all Puppet Masters. In particular it distributes its own Certificate Revocation List (CRL) file to every Puppet Master. The Puppet Master CA also issues certificates to Puppet Agents.

Puppet Master


A Puppet Master runs under Apache and Passenger. The Apache SSL module is configured to require certificates signed by the Puppet Master CA (/etc/apache2/sites-available/puppetmaster):
# Require certificates to be valid
SSLVerifyClient require
SSLVerifyDepth  1

The Puppet Master is also configured to not act as a Puppet CA (/etc/puppet/puppet.conf):
[main]
ca = false

Puppet Agent


Puppet Agents retrieve their certificate from the Puppet Master CA and request their catalog from one of the Puppet Masters (/etc/puppet/puppet.conf):
[agent]
ca_server = PUPPET_MASTER_CA
server = PUPPET_MASTER

Conclusion


From a security perspective, setting the SSLVerifyClient option to require increases the protection of Puppet Masters from unknown requests and revoked Puppet Agents. Having the Puppet Master CA manage the Puppet Masters also facilitates the distribution of the Puppet Master CA CRL.

On the reliability front, new systems can't be added to the infrastructure if the Puppet Master CA is unavailable. However, existing Puppet Agents remain functional as long as they can connect to a Puppet Master.

Monday, September 27, 2010

Deploying a Hadoop cluster on EC2/UEC with Puppet and Ubuntu Maverick



A Hadoop Cluster running on EC2/UEC deployed by puppet on Ubuntu Maverick.

How it works


The Cloud Conductor is located outside the AWS infrastructure as it needs AWS credentials to start new instances. The Puppet Master runs in EC2 and uses S3 to check which clients it should accept.

The Hadoop Namenode, Jobtracker and Worker are also running in EC2. The Puppet Master automatically configures them so that each Worker can connect to the Namenode and Jobtracker.

The Puppet Master uses Stored Configuration to distribute configuration between all the Hadoop components. For example the Namenode IP address is automatically pushed to the Jobtracker and the Worker nodes so that they can connect to the Namenode.

Ubuntu Maverick is used since Puppet 2.6 is required. The excellent Cloudera CDH3 Beta2 packages provide the base Hadoop foundation.

Puppet recipes and the Cloud Conductor scripts are available in a bzr branch on Launchpad.

Setup the Cloud Conductor


The first part of the Cloud Conductor is the start_instance.py script. It takes care of starting new instances in EC2 and registering them in S3. Its configuration lives in start_instance.yaml. Both files are located in the conductor directory of the bzr branch.

The following options are available on the cloud conductor:
  • s3_bucket_name: Sets the name of the S3 bucket used to store the list of instances started by the Cloud Conductor. The Puppet Master uses the same bucket to check which Puppet Client should be accepted.
  • ami_id: Sets the id of the AMI the Cloud Conductor will use to start new instances.
  • cloud_init: Sets specific cloud-init parameters. All of the puppet client configuration is defined here. Public ssh keys (for example from Launchpad) can be configured using the ssh_import_id option. The cloud-init documentation has more information [1] about what can be configured when starting new instances.

A sample start_instance.yaml file looks like this:

# Name of the S3 bucket to use to store the certname of started instances
s3_bucket_name: mathiaz-hadoop-cluster
# Base AMI id to use to start all instances
ami_id: ami-c210e5ab
# Extra information passed to cloud-init when starting new instances
# see cloud-init documentation for available options.
cloud_init: &site-cloud-init
  ssh_import_id: mathiaz
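
For illustration only, here is a rough Python sketch (using boto and PyYAML) of the kind of logic a conductor script like start_instance.py could implement. The structure, function names and the classes assigned below are assumptions made for this sketch - the actual script in the bzr branch is the reference:

#!/usr/bin/env python
# Hypothetical, simplified sketch of a Cloud Conductor script - the real
# start_instance.py in the bzr branch is the reference and differs in details.
import sys
import uuid

import boto   # EC2/S3 access; AWS credentials are read from the environment
import yaml   # parses start_instance.yaml

def start_instance(role, config_file="start_instance.yaml"):
    config = yaml.safe_load(open(config_file))
    certname = str(uuid.uuid4())   # random puppet certificate name

    # Build the #cloud-config user data from the cloud_init section and add
    # the per-instance certname (cloud-init's real puppet syntax nests the
    # settings under a 'conf' key; simplified here).
    cloud_config = dict(config.get("cloud_init") or {})
    cloud_config.setdefault("puppet", {})["certname"] = certname
    user_data = "#cloud-config\n" + yaml.dump(cloud_config)

    # Start the instance with the configured AMI and the generated user data.
    boto.connect_ec2().run_instances(config["ami_id"], user_data=user_data)

    # Register the certname in S3 so the Puppet Master can sign its request
    # and classify it (here with a hadoop::<role> class).
    bucket = boto.connect_s3().get_bucket(config["s3_bucket_name"])
    key = bucket.new_key(certname)
    key.set_contents_from_string(yaml.dump({"classes": ["hadoop::%s" % role]}))

if __name__ == "__main__":
    start_instance(sys.argv[1])

With this kind of logic, running the script with a role argument boots an instance whose certname and classes are already waiting in the S3 bucket.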


Once the Cloud Conductor is configured a Puppet Master can be started:


./start_instance.py puppetmaster

Setup the Puppet Master


Once the instance has started and its ssh fingerprints can be verified the puppet recipes are deployed on the Puppet Master:


bzr branch lp:~mathiaz/+junk/hadoop-cluster-puppet-conf ~/puppet/
sudo mv /etc/puppet/ /etc/old.puppet
sudo mv ~/puppet/ /etc/


The S3 bucket name is set in the Puppet Master configuration /etc/puppet/manifests/puppetmaster.pp:


node default {
  class {
    "puppet::ca":
      node_bucket => "https://mathiaz-hadoop-cluster.s3.amazonaws.com";
  }
}


And finally the Puppet Master installation can be completed by puppet itself:

sudo puppet apply /etc/puppet/manifests/puppetmaster.pp

A Puppet Master is now running in EC2 with all the recipes required to deploy the different components of a Hadoop Cluster.

Update the Cloud Conductor configuration


Since the Cloud Conductor starts instances that will connect to the Puppet Master it needs to know some information about the Puppet Master:
  • the Puppet Master internal IP address or DNS name. For example the DNS name of the instance (which is the FQDN) can be used.
  • the Puppet Master certificate (located in /var/lib/puppet/ssl/ca/ca_crt.pem):

On the Cloud Conductor the information gathered on the Puppet Master is added to start_instance.yaml:

agent:
  # Puppet server hostname or IP
  # In EC2 the Private DNS of the instance should be used
  server: domU-12-31-38-00-35-98.compute-1.internal
  # NB: the certname will automatically be added by start_instance.py
  # when a new instance is started.
  # Puppetmaster ca certificate
  # located in /var/lib/puppet/ssl/ca/ca_crt.pem on the puppetmaster system
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIICFzCCAYCgAwIBAgIBATANBgkqhkiG9w0BAQUFADAUMRIwEAYDVQQDDAlQdXBw
    [ ... ]
    k0r/nTX6Tmr8TTU=
    -----END CERTIFICATE-----

Start the Hadoop Namenode


Once the Puppet Master and Cloud Conductor are configured the Hadoop Cluster can be deployed. First in line is the Hadoop Namenode:


./start_instance.py namenode

After a few minutes the Namenode puppet client requests a certificate:
puppet-master[7397]: Starting Puppet master version 2.6.1
puppet-master[7397]: 53b0b7bf-723c-4a0f-b4b1-082ebec84041 has a waiting certificate request

The Master signs the CSR:


CRON[8542]: (root) CMD (/usr/local/bin/check_csr https://mathiaz-hadoop-cluster.s3.amazonaws.com)
check_csr[8543]: INFO: Signing request: 53b0b7bf-723c-4a0f-b4b1-082ebec84041


And finally the Master compiles the manifest for the Namenode:


node_classifier[8989]: DEBUG: Checking url https://mathiaz-hadoop-cluster.s3.amazonaws.com/53b0b7bf-723c-4a0f-b4b1-082ebec84041
node_classifier[8989]: INFO: Getting node configuration: 53b0b7bf-723c-4a0f-b4b1-082ebec84041
node_classifier[8989]: DEBUG: Node configuration (53b0b7bf-723c-4a0f-b4b1-082ebec84041): classes: ['hadoop::namenode']
puppet-master[7397]: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find stage hadoop-base specified by Class[Hadoop::Base] at /etc/puppet/modules/hadoop/manifests/init.pp:142 on node 53b0b7bf-723c-4a0f-b4b1-082ebec84041


Unfortunately there is a bug related to puppet stages. As a workaround the puppet agent can be restarted:


sudo /etc/init.d/puppet restart

Looking at the syslog file on the Namenode, you can see the Puppet Agent install and configure the Hadoop Namenode:


puppet-agent[1795]: Starting Puppet client version 2.6.1
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Key[cloudera]/File[/etc/apt/cloudera.key]/ensure) defined content as '{md5}dc59b632a1ce2ad325c40d0ba4a4927e'
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Key[cloudera]/Exec[import apt key cloudera]) Triggered 'refresh' from 1 events
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Sources_list[canonical]/File[/etc/apt/sources.list.d/canonical.list]/ensure) created
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Sources_list[cloudera]/File[/etc/apt/sources.list.d/cloudera.list]/ensure) created
puppet-agent[1795]: (/Stage[apt]/Apt::Apt/Exec[apt-get_update]) Triggered 'refresh' from 3 events


The first stage of the puppet run sets up the Canonical partner archive and the Cloudera archive. The Sun JVM is pulled from the Canonical archive while Hadoop packages are downloaded from the Cloudera archive.

The following stage creates a common Hadoop configuration:


puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/var/cache/debconf/sun-java6.seeds]/ensure) defined content as '{md5}1e3a7ac4c2dc9e9c3a1ae9ab2c040794'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/Package[sun-java6-bin]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/Package[hadoop-0.20]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/var/lib/hadoop-0.20/dfs]/ensure) created
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/etc/hadoop-0.20/conf.puppet]/ensure) created
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/etc/hadoop-0.20/conf.puppet/hdfs-site.xml]/ensure) defined content as '{md5}1f9788fceffdd1b2300c06160e7c364e'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/Exec[/usr/sbin/update-alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.puppet 15]) Triggered 'refresh' from 1 events
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/etc/default/hadoop-0.20]/content) content changed '{md5}578894d1b3f7d636187955c15b8edb09' to '{md5}ecb699397751cbaec1b9ac8b2dd0b9c3'

Finally the Hadoop Namenode is configured:


puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Package[hadoop-0.20-namenode]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/File[hadoop-core-site]/ensure) defined content as '{md5}2f2445bf3d4e26f5ceb3c32047b19419'
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/File[/var/lib/hadoop-0.20/dfs/name]/ensure) created
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Exec[format-dfs]) Triggered 'refresh' from 1 events
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Service[hadoop-0.20-namenode]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Service[hadoop-0.20-namenode]) Failed to call refresh: Could not start Service[hadoop-0.20-namenode]: Execution of '/etc/init.d/hadoop-0.20-namenode start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:177


There is another bug, this time in the Hadoop init script: the Namenode cannot be started. Either restart the puppet agent or let the next puppet run start it:


sudo /etc/init.d/puppet restart

The Namenode daemon is running and logs information to its log file in /var/log/hadoop/hadoop-hadoop-namenode-*.log:


[...]
INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
[...]
INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8200: starting
INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8200: starting

Start the Hadoop Jobtracker


The next component to start is the Hadoop Jobtracker:


./start_instance.py jobtracker

After some time the Puppet Master compiles the Jobtracker manifest:


DEBUG: Checking url https://mathiaz-hadoop-cluster.s3.amazonaws.com/2faa4de9-c708-45ab-a515-ae041a9d0239
node_classifier[30683]: INFO: Getting node configuration: 2faa4de9-c708-45ab-a515-ae041a9d0239
node_classifier[30683]: DEBUG: Node configuration (2faa4de9-c708-45ab-a515-ae041a9d0239): classes: ['hadoop::jobtracker']
puppet-master[23542]: Compiled catalog for 2faa4de9-c708-45ab-a515-ae041a9d0239 in environment production in 2.00 seconds


On the instance the puppet agent configures the Hadoop Jobtracker:


puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/File[hadoop-mapred-site]/ensure) defined content as '{md5}af3b65a08df03e14305cc5fd56674867'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/File[hadoop-core-site]/ensure) defined content as '{md5}2f2445bf3d4e26f5ceb3c32047b19419'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/Package[hadoop-0.20-jobtracker]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/Service[hadoop-0.20-jobtracker]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/Service[hadoop-0.20-jobtracker]) Failed to call refresh: Could not start Service[hadoop-0.20-jobtracker]: Execution of '/etc/init.d/hadoop-0.20-jobtracker start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:135


There is the same bug in the init script. Let's restart the puppet agent:


sudo /etc/init.d/puppet restart

The Jobtracker connects to the Namenode and error messages are logged on a regular basis to both the Namenode and Jobtracker log files:


INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8200, call
addBlock(/hadoop/mapred/system/jobtracker.info, DFSClient_-268101966, null)
from 10.122.183.121:54322: error: java.io.IOException: File
/hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes,
instead of 1
java.io.IOException: File /hadoop/mapred/system/jobtracker.info could only be
replicated to 0 nodes, instead of 1


This is normal as there aren't any Datanode daemons available for data replication.

Start Hadoop workers


It's now time to start the Hadoop Worker to get an operational Hadoop Cluster:


./start_instance.py worker

The Hadoop Worker runs both a Datanode and a Tasktracker. The Puppet agent configures them to talk to the Namenode and the Jobtracker respectively.

After some time the Puppet Master compiles the catalog for the Hadoop Worker:


node_classifier[8368]: DEBUG: Checking url https://mathiaz-hadoop-cluster.s3.amazonaws.com/b72a8f4d-55e6-4059-ac4b-26927f1a1016
node_classifier[8368]: INFO: Getting node configuration: b72a8f4d-55e6-4059-ac4b-26927f1a1016
node_classifier[8368]: DEBUG: Node configuration (b72a8f4d-55e6-4059-ac4b-26927f1a1016): classes: ['hadoop::worker']
puppet-master[23542]: Compiled catalog for b72a8f4d-55e6-4059-ac4b-26927f1a1016 in environment production in 0.18 seconds


On the instance the puppet agent installs the Hadoop worker:


puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/File[hadoop-mapred-site]/ensure) defined content as '{md5}af3b65a08df03e14305cc5fd56674867'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Package[hadoop-0.20-datanode]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/File[/var/lib/hadoop-0.20/dfs/data]/ensure) created
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Package[hadoop-0.20-tasktracker]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/File[hadoop-core-site]/ensure) defined content as '{md5}2f2445bf3d4e26f5ceb3c32047b19419'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-datanode]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-datanode]) Failed to call refresh: Could not start Service[hadoop-0.20-datanode]: Execution of '/etc/init.d/hadoop-0.20-datanode start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:103
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-tasktracker]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-tasktracker]) Failed to call refresh: Could not start Service[hadoop-0.20-tasktracker]: Execution of '/etc/init.d/hadoop-0.20-tasktracker start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:103


Again the same init script bug - let's restart the puppet agent:


sudo /etc/init.d/puppet restart

Once the worker is installed the Datanode daemon connects to the Namenode:


INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 10.249.187.5:50010 storage DS-2066068566-10.249.187.5-50010-1285276011214
INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.249.187.5:50010


Similarly the Task Tracker daemon registers itself with the Jobtracker:
INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/domU-12-31-39-03-B8-F7.compute-1.internal

The Hadoop Cluster is up and running.

Conclusion


Once the initial setup of the Puppet master is done and the Hadoop Namenode and Jobtracker are up and running, adding new Hadoop Workers is just one command:

./start_instance.py worker

Puppet automatically configures them to join the Hadoop Cluster.

Thursday, July 8, 2010

Vote for the Ubuntu stack exchange

This morning Evan's email hit my inbox: there is a suggestion to create a Stack Exchange site for Ubuntu.

I've always been impressed by the stackoverflow and serverfault web sites. Granted, forums have been around for a long time - however I love the user interaction provided by the folks behind Stack Exchange. A couple of months ago they created area51 to collect requests for new sites that could use the same framework behind stackoverflow and serverfault. And again the user experience for handling these requests is great.

In my opinion Stack Exchange provides an excellent user experience that fosters user contributions and collaboration - in-line with the values of the Ubuntu community.

So I went over to area51 and voted on the on-topic and off-topic questions for the Ubuntu proposal.

Tuesday, July 6, 2010

Velocity 2010: Fast by default - Thursday

Thursday was the last day of the conference and followed the same format as Wednesday: keynotes in the morning, three parallel tracks in the afternoon.

Creating Cultural Change


John Rauser from Amazon shared a few experiences about creating cultural changes inside and outside organizations.

Here are some key takeaways:

  • Try something new

  • Seek group identity

  • Welcome newcomers

  • Be relentlessly happy


These ideas actually reminded me of how the Ubuntu community has been built up.






In the Belly of the Whale: Operations at Twitter


John Adams of Twitter presented a few insights on how operations are run at Twitter.

He outlined several principles to keep in mind when building their infrastructure:

  • Nothing works the first time. Plan to rebuild everything more than once.

  • Deploy faster and more often as less code will change.

  • Detect problems as early as possible - to recover fast.

  • Disable/enable features in production aka Feature darkmode.


To support these guiding principles he listed some of the tools that are used:

  • Configuration management done with puppet and svn.

  • Reviewboard to review changes made to the infrastructure

  • Ganglia to take care of monitoring

  • Scribe to collect and aggregate all logs into Hadoop HDFS using LZO compression.

  • Murder to deploy their code to all of their systems via bittorrent.

  • Google Analytics to track error pages and Whale Watcher to track errors in logs.

  • Unicorn to power their Rails stack.







Lightning talks


Thursday's lightning talks covered another round of useful tools to help optimize page loads:

  • httpwatch: a commercial tool that loads web pages and analyzes them

  • pagetest

  • speedtracer: a Chrome browser extension that provides insight into what the browser is doing when loading a page

  • fiddler2







Moving Fast


Robert Johnson of Facebook gave a talk about the culture of moving fast at Facebook. Here are a few short sentences to summarize his points:

  • How to scale? Have a team that reacts fast.

  • The release cycle: Make changes every day as frequent small changes make it easier to figure out what went wrong.

  • Give control and responsibility to one person.


He finished with a few lessons that were learned:

  • New code is slow.

  • Give developers room to try things.

  • Nobody's job is to say no.







Practice of Continuous Deployment


Throughout the conference I heard the idea of continuous deployment multiple times. With continuous integration being pushed on the developer side, its counterpart on the ops side is continuous deployment: test, build, deploy. Deploy multiple times a day with a good monitoring system to quickly identify when things go wrong. When they do, it's easier to identify what changed as the number of changes is rather low. All the big shops have a deployment dashboard to review what went live, when and by whom.

The Launchpad team is already following this idea: Launchpad edge gets a daily update of the code running against the production database. Releases (with DB schema changes) are conducted on a monthly basis. And Ubuntu provides something similar as the development version is always available for installation - and releases are cut every 6 months.

Monday, July 5, 2010

Velocity 2010: Fast by default - Tuesday and Wednesday

Here is a report on Velocity 2010, the Web Performance and Operations conference. In its third year it grew to more than 1100 attendees - this year was sold out.

Tuesday workshops


Tuesday was dedicated to workshops even though most of them turned out to be presentations with demos given the number of participants. So not a lot of hands-on sessions. Here is a small selection of talks I found interesting throughout the day:

Infrastructure automation with Chef


Overview of the chef project led by the high-energy and opinionated Adam Jacob from Opscode.

For me the most exciting part was that chef provides a complete view of your infrastructure and the ability to query it any way you want.

Adam gave a few high-impact principles, such as being able to reconstruct a business from a source code repository, a data backup and bare metal resources.

Another interesting feature from the knife tool was the ability to start/spawn new instances in EC2 from the command line. For example the following command will give you an EC2 instance running your rails role within a few minutes:
knife ec2 server create 'role[rails]'

Protecting "Cloud" Secrets With Grendel


A technical overview of the Grendel project: OpenPGP as a software service.

The project gives the ability to share encrypted documents between multiple people. From the security perspective each user's private key is stored in the cloud, encrypted with a pass phrase known only to the user and transmitted via HTTP basic auth.

Wednesday sessions


Wednesday was the first day of the conference with keynotes in the morning and three tracks in the afternoon.

Datacenter Infrastructure Innovation


James Hamilton from Amazon Web Services gave an interesting overview of the different parts of building a data center.

An interesting point he made was that data centers should target 100% usage of their servers while the industry standard is around 10 to 15% utilization on average. This objective led to the introduction of spot instances in EC2 so that resource usage could be maximized and Amazon's cloud infrastructure load kept flat. That reminds me of comments from Google engineers stating that they try to pile as much work as possible on each of their servers. At their scale a server that sits powered off still costs money.

He covered other topics:

  • air conditioning: data centers could be run way hotter than they are now

  • power: the cost of power is only a small part of the total cost of running a data center - server hardware being more than half of the cost. This is an interesting point with regards to the whole green computing movement.







Speed matters


Urs Hölzle from Google covered the importance of having web pages that load fast and a range of improvements Google has been working on over the last few years: from the web browser (via Chrome) down to the infrastructure (such as DNS).

He also highlighted that Google's page ranking process now takes into account the speed at which a page loads. As heard multiple times during the conference there is now empirical evidence directly linking page load speed to revenue: the faster a page loads, the longer people stay on the web site.






Lightning talks


Wednesday's lightning demos showcased a list of tools focused on highlighting performance bottlenecks and helping track down why page loads are slow and how to improve them:






Getting Fast: Moving Towards a Toolchain for Automated Operations


Lee Thompson and Alex Honor reported on the work of the devtools-toolchain group. The group formed a few months ago to share experiences and build up a set of best practices. Of the use cases they've outlined, KaChing's Continuous Deployment was the most interesting one:
Release is a marketing concern.

Facebook operations


Tom Cook of Facebook gave a sneak peek at the life of operations in Facebook.

Very interesting talk about the development practices of one of the busiest websites on the Internet. Facebook runs out of two data centers (one on the east coast, one on the west coast) while they're building their own data center in Oregon.

Their core OS is CentOS 5 with a customized kernel. For system management cfengine is set to update every 15 minutes, with a cfengine run taking around 30 seconds. All of the changes are peer reviewed.

On the deployment front bug fixes are pushed out once a day while new features are rolled out on a weekly basis. Code is pushed to 10000s of servers using bittorrent swarms. Coordination is done via IRC with the engineer available in case something goes wrong.

The developer is responsible for writing the code as well as testing and deploying it. New code is then exposed to a subset of real traffic. Ops are embedded in engineering teams and take part in design decisions. They're actually an interface to other ops.

As a summary Tom gave a few points:

  • version control everything

  • optimize early

  • automate++

  • use configuration management

  • plan to fail

  • instrument everything

  • don't waste time on dumb stuff






Thursday, April 8, 2010

Using puppet in UEC/EC2: Improving performance with Phusion Passenger

Now that we have an efficient process to start instances within UEC/EC2 and get them configured for their task by puppet, we'll dive into improving the performance of the puppetmaster with Phusion Passenger.

Why?


The default configuration used by puppetmasterd is based on WEBrick, which doesn't really scale well. One popular choice to improve puppetmasterd performance is to use mod_passenger from the libapache2-mod-passenger package.



Apache2 setup


The configuration is based on the Puppet passenger documentation. It is available from the bzr branch as we'll use puppet to actually configure the instance running puppetmasterd.

The puppet module has been updated to make sure the apache2 and libapache2-mod-passenger packages are installed. It also creates the relevant files and directories required to run puppetmasterd as a rack application.

Passenger and SSL modules are enabled in the apache2 configuration. All of their configuration is done inside a virtual host definition. Note that the SSL options related to certificate and private key files point directly to /var/lib/puppet/ssl/.

Apache2 is also configured to only listen on the default puppetmaster port by replacing apache2 default ports.conf and disabling the default virtual site.

Finally the configuration of puppetmasterd has been updated so that it can correctly process client certificates while being run under passenger.

Note that puppetmasterd needs to be run once in order to be able to generate its ssl configuration. This happens automatically when the puppetmaster package is installed since puppetmasterd is started during the package installation.



Deploying an improved puppetmaster


Log on the puppetmaster instance and update the puppet configuration using the bzr branch:
bzr pull --remember lp:~mathiaz/+junk/uec-ec2-puppet-config-passenger /etc/puppet/

Update the configuration:
sudo puppet --node_terminus=plain /etc/puppet/manifests/puppetmaster.pp

On the Cloud Conductor start a new instance with start_instance.py. If you're starting from scratch remember to update the start_instance.yaml file with the puppetmaster CA and internal IP:
./start_instance.py -c start_instance.yaml AMI_NUMBER

Following /var/log/syslog on the puppetmaster you should see the new instance requesting a certificate:
Apr 8 00:40:08 ip-10-195-93-129 puppetmasterd[3353]: Starting Puppet server version 0.25.4
Apr 8 00:40:08 ip-10-195-93-129 puppetmasterd[3353]: 7d6b61a7-3772-4c41-a23d-471b417d9c47 has a waiting certificate request

Now that the puppetmasterd process is run by apache2 and mod-passenger you can check in /var/log/apache2/other_vhosts_access.log the HTTP requests made by the puppet client to get its certificate signed:
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:40:06 +0000] "GET /production/certificate/7d6b61a7-3772-4c41-a23d-471b417d9c47 HTTP/1.1" 404 2178 "-" "-"
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:40:08 +0000] "GET /production/certificate_request/7d6b61a7-3772-4c41-a23d-471b417d9c47 HTTP/1.1" 404 2178 "-" "-"
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:40:08 +0000] "PUT /production/certificate_request/7d6b61a7-3772-4c41-a23d-471b417d9c47 HTTP/1.1" 200 2082 "-" "-"
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:40:08 +0000] "GET /production/certificate/7d6b61a7-3772-4c41-a23d-471b417d9c47 HTTP/1.1" 404 2178 "-" "-"
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:40:08 +0000] "GET /production/certificate/7d6b61a7-3772-4c41-a23d-471b417d9c47 HTTP/1.1" 404 2178 "-" "-"

Once check_csr is run by cron the certificate will be signed and the puppet client is able to retrieve its certificate:
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:42:08 +0000] "GET /production/certificate/7d6b61a7-3772-4c41-a23d-471b417d9c47 HTTP/1.1" 200 2962 "-" "-"
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:42:08 +0000] "GET /production/certificate_revocation_list/ca HTTP/1.1" 200 2450 "-" "-"

The puppet client ends up requesting its manifest:
ip-10-195-93-129.ec2.internal:8140 10.195.94.224 - - [08/Apr/2010:00:42:09 +0000] "GET /production/catalog/7d6b61a7-3772-4c41-a23d-471b417d9c47?facts_format=b64_zlib_yaml&facts=eNp [....] HTTP/1.1" 200 2354 "-" "-"



Conclusion


I've just outlined how to configure mod_passenger to run puppetmasterd, which is a much more efficient setup than using the default WEBrick server. Most of the configuration is detailed in the files available in the bzr branch.

Wednesday, April 7, 2010

Using puppet in UEC/EC2: Node classification

In a previous article I discussed how to set up an automated registration process for puppet instances. We'll now have a look at how we can tell these instances what they should be doing.

Going back to the overall architecture, the Cloud conductor is the component responsible for starting new instances. Of the three components it is the one that has the most knowledge about what an instance should be: it is responsible for starting new instances after all.

Using S3 to store node definitions


We'll use the puppet external node feature to connect the Cloud conductor with the puppetmaster. The external node script - node_classifier.py - will be responsible for telling the puppetmaster which classes each instance is supposed to have. Whenever a puppet client connects to the master the node_classifier.py script is called with the certificate name. It is responsible for providing a description of the classes, environments and parameters for the client on its standard output in a YAML format.
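
To make the mechanism concrete, here is a minimal Python sketch of what an S3-backed external node classifier could look like. It is only an illustration - the bucket URL below is a placeholder and the real node_classifier.py in the bzr branch is more complete:

#!/usr/bin/env python
# Minimal sketch of an S3-backed external node classifier (illustration only -
# the bucket URL is a placeholder and the real node_classifier.py in the bzr
# branch is more complete).
import sys
import urllib2

NODE_BUCKET = "https://mathiaz-puppet-nodes-1.s3.amazonaws.com"   # adjust to your bucket

def classify(certname):
    url = "%s/%s" % (NODE_BUCKET, certname)
    try:
        # The S3 object already contains the YAML node description written
        # by the Cloud conductor, so it can be printed as-is on stdout.
        print(urllib2.urlopen(url).read())
        return 0
    except urllib2.HTTPError:
        # Unknown certname: exit code 1 tells the puppetmaster the node is
        # not defined. Return 0 instead to fall back to normal node definitions.
        return 1

if __name__ == "__main__":
    sys.exit(classify(sys.argv[1]))

The important part is the contract with puppet: print the node description as YAML on stdout and use the exit code to tell the puppetmaster whether the node is known.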

Given that the Cloud conductor creates a file with the certificate name for each instance it spawns we'll extend the start_instance.py script to store the node classification in the content of the file created in the S3 bucket.

You may have noticed that instances started by start_instance.py don't have an ssh public key associated with them. So we're going to create a login-allowed class that will install the authorized key for the ubuntu user.



Setup the puppetmaster to use the node classifier


We'll use the Ubuntu Lucid Beta2 image as the base image on which to build our Puppet infrastructure.

Start an instance of the Lucid Beta2 AMI using an ssh key. Once it's running write down its public and private DNS addresses. The public DNS address will be used to setup the puppetmaster via ssh. The private DNS address will be used as the puppetmaster hostname given out to puppet clients.

Log on the started instance via ssh to install and setup the puppet master:


  1. Update apt files:



    sudo apt-get update



  2. Install the puppet and bzr packages:



    sudo apt-get install puppet bzr



  3. Change the ownership of the puppet directory so that the ubuntu user can directly edit the puppet configuration files:



    sudo chown -R ubuntu:ubuntu /etc/puppet/



  4. On the puppetmaster check out the tutorial3 bzr branch:



    bzr branch --use-existing-dir lp:~mathiaz/+junk/uec-ec2-puppet-config-tut3 /etc/puppet/

    You'll get a conflict for the puppet.conf file. You can ignore the conflict as the puppet.conf file from the branch is the one that supports an external node classifier:
    bzr resolve /etc/puppet/puppet.conf



Edit the node classifier script scripts/node_classifier.py to set the correct location of your S3 bucket.

Note that the script is set to return 1 if the certificate name doesn't have a corresponding file in the S3 bucket. You may want to change the return code to 0 if you want to fall back to the normal node definitions. See the puppet external node documentation for more information.

The puppetmaster configuration in puppet.conf has been updated to use the external node script.

There is also the login-allowed class defined in the manifests/site.pp file. It sets the authorized key file for the ubuntu user.

On the puppetmaster edit manifests/site.pp to update the public key with your EC2 public key. You can get the public key from ~ubuntu/.ssh/authorized_keys on the puppetmaster.

To bootstrap the new puppetmaster configuration run the puppet client:
sudo puppet --node_terminus=plain /etc/puppet/manifests/puppetmaster.pp

Note that you'll have to set the node_terminus to plain to avoid calling the node classifier script when configuring the puppetmaster itself. Otherwise the puppet run would fail since the puppetmaster certificate name (which defaults to the FQDN of the instance) doesn't have a corresponding file in the S3 bucket.

We now have our puppetmaster configured to look up the node classification for each puppet client.



Update start_instance.py to provide a node definition


It's time to update the Cloud conductor to provide the relevant node classification information whenever it starts a new instance.

Update the bzr branch on the Cloud conductor system:
bzr pull --remember lp:~mathiaz/uec-puppet-config-tut3

The start_instance.py script has been updated to write the node classification information when it creates the instance file in the S3 bucket. All of the node classification information expected by the puppetmaster from the external node classifier script is set under the node key in start_instance.yaml. See the puppet external node documentation for more details on what the external node script can provide.
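
As an illustration of that change (simplified and hypothetical - names and structure are assumptions, the branch holds the real code), the S3 side could boil down to dumping the node key into the per-instance file:

# Hypothetical fragment (names and structure assumed, not taken from the
# branch) showing how the 'node' key of start_instance.yaml could become the
# content of the per-instance file in S3.
import uuid

import boto
import yaml

config = yaml.safe_load(open("start_instance.yaml"))
certname = str(uuid.uuid4())   # certname generated for the new instance

# Whatever sits under the 'node' key (classes, parameters, environment...) is
# dumped verbatim; node_classifier.py later prints it back to the puppetmaster.
bucket = boto.connect_s3().get_bucket(config["s3_bucket_name"])
bucket.new_key(certname).set_contents_from_string(yaml.dump(config["node"]))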

Review the start_instance.yaml file to make sure the S3 bucket name, the puppetmaster server IP and CA certificate are still valid for your own setup.

Start an instance:
./start_instance.py -c start_instance.yaml AMI_NUMBER

Following /var/log/syslog you should see something similar to this:
Apr 7 19:15:37 domU-12-31-39-07-D6-52 puppetmasterd[1644]: 77ad2a3c-5d52-4ca7-9fea-b99b767b09d0 has a waiting certificate request

The instance has booted and registered with the puppetmaster.
Apr 7 19:16:01 domU-12-31-39-07-D6-52 CRON[2188]: (root) CMD (/usr/local/bin/check_csr --log-level=debug https://mathiaz-puppet-nodes-1.s3.amazonaws.com)
Apr 7 19:16:02 domU-12-31-39-07-D6-52 check_csr[2189]: DEBUG: List of waiting csr: 77ad2a3c-5d52-4ca7-9fea-b99b767b09d0
Apr 7 19:16:02 domU-12-31-39-07-D6-52 check_csr[2189]: DEBUG: Checking 77ad2a3c-5d52-4ca7-9fea-b99b767b09d0
Apr 7 19:16:02 domU-12-31-39-07-D6-52 check_csr[2189]: DEBUG: Checking url https://mathiaz-puppet-nodes-1.s3.amazonaws.com/77ad2a3c-5d52-4ca7-9fea-b99b767b09d0
Apr 7 19:16:03 domU-12-31-39-07-D6-52 check_csr[2189]: INFO: Signing request: 77ad2a3c-5d52-4ca7-9fea-b99b767b09d0

The puppetmaster checked if the client request is expected and signs it.
Apr 7 19:17:39 domU-12-31-39-07-D6-52 node_classifier[2240]: DEBUG: Checking url https://mathiaz-puppet-nodes-1.s3.amazonaws.com/77ad2a3c-5d52-4ca7-9fea-b99b767b09d0
Apr 7 19:17:39 domU-12-31-39-07-D6-52 node_classifier[2240]: INFO: Getting node configuration: 77ad2a3c-5d52-4ca7-9fea-b99b767b09d0
Apr 7 19:17:39 domU-12-31-39-07-D6-52 node_classifier[2240]: DEBUG: Node configuration (77ad2a3c-5d52-4ca7-9fea-b99b767b09d0): classes: [login-allowed]
Apr 7 19:17:39 domU-12-31-39-07-D6-52 puppetmasterd[1644]: Compiled catalog for 77ad2a3c-5d52-4ca7-9fea-b99b767b09d0 in 0.01 seconds

The puppetmaster compiled a manifest for the client according to the information provided by the node classifier script.

Make sure that the instance that has been started doesn't have any ssh key associated with it:
euca-describe-instances

Make a note of the instance ID and its public DNS name.

Login into the instance:


  1. Run euca-get-console-output instance_ID to get the ssh fingerprint. You may need to scroll back to get the fingerprints.




  2. Login into the instances using your EC2 public key:



    ssh -i ~/.ssh/ec2_key ubuntu@public_dns





Conclusion


The start_instance.py script is currently very simple and should be considered as a proof of concept.

Storing the node classification information in an S3 bucket also makes it easy to edit the content of the file. It also provides an easy way to get a list of the nodes that have been started by the Cloud Conductor as well as their classification.

If you look at the start_instance.py script you'll notice that the ACL on the S3 bucket is 'public-read'. That means anyone can read the list of your nodes as well as the list of classes and other node classification information for each of them. You may want to use S3 private URLs instead.
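
One possible alternative, sketched here with boto (the bucket and key names are placeholders), is to generate time-limited signed URLs for private objects instead of relying on a public-read ACL:

# Hypothetical example (bucket and key names are placeholders): generate a
# time-limited signed URL for a private S3 object instead of making the
# bucket publicly readable.
import boto

conn = boto.connect_s3()
url = conn.generate_url(expires_in=3600,           # valid for one hour
                        method="GET",
                        bucket="mathiaz-puppet-nodes-1",
                        key="CERTNAME-UUID")
print(url)   # hand this URL to check_csr/node_classifier instead of a public one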

We now have a puppet infrastructure where instances are started by a Cloud conductor in order to achieve a specific task. These instances automatically connect to the puppetmaster to get configured automatically for the task they've been created for. All of the instances configuration is stored in a reliable and scalable system: S3.

With instances being created on demand our puppet infrastructure can grow quickly. The puppetmaster can easily be responsible for managing hundreds of instances. Next we'll have a look at how to improve the performance of the puppetmaster.

Tuesday, March 30, 2010

MySQL 5.1 Bug Zap: Bug day result

Today was targeted at looking through the mysql-dfsg-5.1 bugs to triage them. We ended up with all bugs having their importance set and their status set to Triaged or Incomplete.

Tomorrow will be dedicated to fixing most of them as well as some upgrade testing. I'll also have a look at the new mysql upstart job that has replaced the mysqld_safe script.

Looking at the bugs today I've found a couple of bugs that look easy to fix:

  • Bug 552053:  mysqld_safe should be available in mysql-server

  • Bug 498939: mysql- packages section on synaptic


To get started grab a copy of the package branch:
bzr init-repo mysql-dfsg-5.1/
cd mysql-dfsg-5.1/
bzr branch lp:ubuntu/mysql-dfsg-5.1

Fix a bug and push the branch to launchpad:
bzr push lp:~YOUR-LOGIN/ubuntu/mysql-dfsg-5.1/zap-bug-XXXXXX

And finish up by creating a merge proposal for the Lucid package branch. I'll take a look at the list of merge proposals throughout the day and include them in the upload schedule for tomorrow.

Monday, March 29, 2010

Ubuntu Server Bug Zap: MySQL 5.1

Following up on the kvm and samba bug zap days I'm organizing a two day bug zap around MySQL.



First phase: bug triaging


First in line is triaging all the bugs related to the mysql-dfsg-5.1 package. As of Tue Mar 30 00:23:04 UTC 2010 there are 27 bugs waiting to be looked at.


The goal is to have the importance set for all bugs and have as many bug statuses as possible moved to either Triaged or Invalid/Won't Fix.


A few resources are available to help out:








Objective: get the list of bugs to zero.


Thursday, March 25, 2010

Using puppet in UEC/EC2: Automating the signing process

I outlined in the previous article how to setup a puppetmaster instance on UEC/EC2 and how to start instances that will automatically register with the puppetmaster. We're going to look at automating the process of signing puppet client certificate requests.



Overview


Our puppet infrastructure on the cloud can be broken down into three components:



  • The Cloud conductor responsible for starting new instances in our cloud.

  • A Puppetmaster responsible for configuring all the instances running in our cloud.

  • Instances acting as puppet clients asking to be setup correctly.


The idea is to have the Cloud conductor start instances and notify the puppetmaster that these new instances are coming up. The puppetmaster can then automatically sign their certificate requests.


We'll use S3 as the way to communicate between the Cloud conductor and the puppetmaster. The Cloud conductor will also assign a random certificate name to each instance it starts.


The Cloud conductor will be located on a sysadmin workstation while the puppetmaster and instances will be running in the cloud. The bzr branch contains all the scripts necessary to setup such a solution.




The Cloud conductor: start_instance.py



  1. Get the tutorial2 bzr on the Cloud conductor (an admin workstation):



    bzr branch lp:~mathiaz/+junk/uec-ec2-puppet-config-tut2



    In the scripts/ directory start_instance.py plays the role of the Cloud conductor. It creates new instances and stores their certname in S3. The start_instance.yaml configuration file provides almost the same information as the user-data.yaml file we used in the previous article.



  2. Edit the start_instance.yaml file and update each setting:



    • Choose a unique S3 bucket name.

    • Use the private DNS hostname of the instance running the puppetmaster.

    • Add the puppetmaster ca certificate found on the puppetmaster.



  3. Make sure your AWS/UEC credentials are available in the environment. The start_instance.py uses these to access EC2 to start new instances and S3 to store the instance certificate names.



  4. Start a new instance of the Lucid Beta1 AMI:



    ./start_instance.py -c ./start_instance.yaml ami-ad09e6c4



    start_instance.py starts a new instance using the AMI specified on the command line. The instance user data holds a random UUID for the puppet client certificate name. start_instance.py also creates a new file in its S3 bucket named after the puppet client certificate name.



  5. On the puppetmaster, looking at the puppetmaster log you should see a certificate request show up after some time:



    Mar 19 19:09:33 ip-10-245-197-226 puppetmasterd[20273]: a83b0057-ab8d-426e-b2ab-175729742adb has a waiting certificate request







Automating the signing process on the puppetmaster


It's time to set up the puppetmaster to check if there are any certificate requests waiting and sign only the ones started by the Cloud conductor. We'll use the check_csr.py cron job, which gets the list of waiting certificate requests via puppetca --list and checks whether there is a corresponding file in the S3 bucket.
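
As a rough illustration of that logic (hypothetical and simplified - the real check_csr script in the bzr branch adds logging and option parsing, and the puppetca output may need more careful parsing), the cron job could look something like this:

#!/usr/bin/env python
# Rough sketch of the check_csr logic (hypothetical and simplified - the real
# check_csr script in the bzr branch adds logging and option parsing).
import subprocess
import sys
import urllib2

def waiting_requests():
    # 'puppetca --list' prints the certnames of the pending certificate requests.
    output = subprocess.Popen(["puppetca", "--list"],
                              stdout=subprocess.PIPE).communicate()[0].decode()
    return [line.strip() for line in output.splitlines() if line.strip()]

def started_by_conductor(bucket_url, certname):
    # The Cloud conductor created a file named after the certname in S3.
    try:
        urllib2.urlopen("%s/%s" % (bucket_url, certname))
        return True
    except urllib2.HTTPError:
        return False

def main(bucket_url):
    for certname in waiting_requests():
        if started_by_conductor(bucket_url, certname):
            subprocess.call(["puppetca", "--sign", certname])

if __name__ == "__main__":
    main(sys.argv[1])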



  1. On the puppetmaster get the tutorial2 bzr branch:



    bzr pull --remember lp:~mathiaz/+junk/uec-ec2-config/tut2 /etc/puppet/




  2. The puppetmaster.pp manifest has been updated to setup the check_csr.py cron job to run every 2 minutes. You need to update the cron job command line in /etc/puppet/manifests/puppetmaster.pp with your own S3 bucket name.



  3. Update the puppetmaster configuration:



    sudo puppet /etc/puppet/manifests/puppetmaster.pp




  4. Watching /var/log/syslog you should see check_csr being run by cron every other minute:



    Mar 19 19:10:01 ip-10-245-197-226 CRON[21858]: (root) CMD (/usr/local/bin/check_csr --log-level=debug https://mathiaz-puppet-nodes-1.s3.amazonaws.com)



    check_csr gets the list of waiting certificate requests and checks if there is a corresponding file in its S3 bucket:



    Mar 19 19:10:03 ip-10-245-197-226 check_csr[21859]: DEBUG: List of waiting csr: a83b0057-ab8d-426e-b2ab-175729742adb
    Mar 19 19:10:03 ip-10-245-197-226 check_csr[21859]: DEBUG: Checking a83b0057-ab8d-426e-b2ab-175729742adb
    Mar 19 19:10:03 ip-10-245-197-226 check_csr[21859]: DEBUG: Checking url https://mathiaz-puppet-nodes-1.s3.amazonaws.com/a83b0057-ab8d-426e-b2ab-175729742adb



    If so it will sign the certificate request:



    Mar 19 19:10:03 ip-10-245-197-226 check_csr[21859]: INFO: Signing request: a83b0057-ab8d-426e-b2ab-175729742adb







S3 bucket ACL


For now the S3 bucket ACL is set so that anyone can list the files available in the bucket. However only authenticated requests can create new files in the bucket. Given that the filenames are just random UUIDs this is not a big issue.




Using SQS instead of S3


Another implementation of the same idea is to use SQS to handle the notification of the puppetmaster by the Cloud conductor about new instances. While SQS would seem to be the best tool to provide that functionality it is not available in UEC in Lucid.




Conclusion


We end up with a puppet infrastructure where legitimate instances are automatically accepted. Now that instances can easily show up and be automatically enrolled what should these be configured as? We'll dive into this issue in the next article.


Wednesday, March 24, 2010

Using puppet in UEC/EC2: puppet support in Ubuntu images

One of the focuses of the Lucid release cycle for the Ubuntu Server team is to improve the integration between puppet and UEC/EC2. I'll discuss in a series of articles how to set up a puppet infrastructure to manage Ubuntu Lucid instances running on UEC/EC2. I'll focus on the bootstrapping process rather than on writing puppet recipes.


Today we'll look at configuring a puppetmaster on an instance and how to start instances that will register automatically with the puppetmaster.


We'll work with the Lucid Beta1 image on EC2. All the instances started throughout this article will be based on this AMI.



Puppetmaster setup


Let's start by creating a puppetmaster running on EC2. We'll setup all the puppet configuration via ssh using a bzr branch on Launchpad: lp:~mathiaz/+junk/uec-ec2-puppet-config-tut1.


Start an instance of the Lucid Beta1 AMI using an ssh key. Once it's running write down its public and private DNS addresses. The public DNS address will be used to setup the puppetmaster via ssh. The private DNS address will be used as the puppetmaster hostname given out to puppet clients.


We'll actually install the puppetmaster using puppet itself.


Log on the started instance via ssh to install and setup the puppet master:



  1. Update apt files:



    sudo apt-get update




  2. Install the puppet and bzr packages:



    sudo apt-get install puppet bzr




  3. Change the ownership of the puppet directory so that the ubuntu user can directly edit the puppet configuration files:



    sudo chown ubuntu:ubuntu /etc/puppet/




  4. Get the puppet configuration branch:



    bzr branch --use-existing-dir lp:~mathiaz/+junk/uec-ec2-puppet-config-tut1 /etc/puppet/



    Before doing the actual configuration let's have a look at the content of the /etc/puppet/ directory created from the bzr branch.


    The layout follows the recommended puppet practices. The puppet module available in the modules directory defines a puppet::master class. The class makes sure that the puppetmaster package is installed and that the puppetmaster service is running. The manifests/puppetmaster.pp file defines the default node to be configured as a puppetmaster.



  5. We'll now run the puppet client to setup the instance as a puppetmaster:



    sudo puppet /etc/puppet/manifests/puppetmaster.pp







Starting a new instance


Now that we have a puppetmaster available in our cloud we'll have a look at how a new instance of the Lucid Beta1 AMI can be started and automatically set up to register with the puppetmaster.


We're going to use the cloud-config puppet syntax to boot an instance and have it configure itself to connect to the puppetmaster using its user data information:



  1. On the puppetmaster instance create a user-data.yaml file to include the relevant puppetmaster configuration:



    cp /usr/share/doc/cloud-init/examples/cloud-config-puppet.txt user-data.yaml




  2. Update the server setting to point to the puppetmaster private DNS hostname. I also strongly recommend including the puppetmaster CA certificate as the ca_cert setting.


    The example certname setting uses string interpolation to make each puppet client certificate unique: for now %i is replaced by the instance ID while %f is replaced by the FQDN of the instance.


    The sample file has extensive comments about the format of the file. One of the key point is that you can set any of the puppet configuration options via the user data passed to the instance.


    Note that you can remove all the comments to make the user-data.yaml file easier to copy and paste. However don't remove the first line (#cloud-config) as this is used by the instance boot process to start the puppet installation.



  3. Launch a new instance using the content of the user-data.yaml file you've just created as the user-data option passed to the new instance.



  4. You can watch the puppetmaster log on the puppetmaster instance to see when the new instance will request a new certificate:



    tail -f /var/log/syslog




  5. After some time you should see a request coming in:



    puppetmasterd[2637]: i-fdb31b96.ip-10-195-18-227.ec2.internal has a waiting certificate request



    During the boot process of the new instance the puppet cloud-config plugin used the user-data information to automatically install the puppet package, generate the /etc/puppet/puppet.conf file and start the puppetd daemon.



  6. You can then approve the new instance:



    sudo puppetca -s i-fdb31b96.ip-10-195-18-227.ec2.internal




  7. Watching the puppetmaster log you'll see that after some time the new instance will connect and get its new manifest compiled and sent:



    puppetmasterd[2637]: Compiled catalog for i-fdb31b96.ip-10-195-18-227.ec2.internal in 0.03 seconds





In conclusion we now have an instance acting as a puppetmaster and have a single user-data configuration for the whole puppet infrastructure. That user data can be passed to new instances which will automatically register with our puppetmaster.


Even though we're able to make all our instances automatically register with our puppetmaster we still need to manually sign each request as outlined in step 6 above. We'll have a look at automating this step in the next article.


Wednesday, February 17, 2010

FOSDEM 2010

I had the opportunity to attend FOSDEM this year. The most amazing (and frustrating) part of the event was the huge number of talks that were given. Making choices was sometimes hard. However the FOSDEM team recorded most of the sessions and the videos are available now. Perfect for such a busy event!

Here a few highlights of the presentations I attended:

Linux distribution for the cloud


I started the day by attending the session by Peter Eisentraut (a Debian and PostgreSQL core developer) about Linux distributions for the cloud. He focused on the provisioning aspect of clouds by giving a history of how operating systems were installed: from floppy drives to cloud images. He dedicated one slide to Ubuntu's cloud offering, including Canonical Landscape, commenting that Ubuntu is clearly the leader among distributions in the cloud space. He also outlined the current problems, such as the lack of standards and integration of existing software stacks. He pointed out that Linux distributions could drive this.

The second part of his talk focused on the Linux desktop and the impact of cloud services on it. Giving ChromeOS as an example he outlined how applications themselves were moving to the cloud. He then listed the problems with cloud services with regards to the Free Software principles: non-free server side code, non-free hosting, little or no control over data, lack of an open-source community.

He concluded by outlining the challenge in this domain: how could free software principles be transposed to the Cloud and its services? One great reference is Eben Moglen's talk named "Freedom in the Cloud".

Beyond MySQL GA


Kristian Nielsen, a MariaDB developer, gave an overview of the developer ecosystem around MySQL. He listed a few patches that are available to add new functionality and fix some bugs: the Google patch, Percona patches, eBay patches, Galera multi-master replication for InnoDB as well as a growing list of storage engines. A few options are available to use them:

  • packages from third party repositories (such as ourdelta and percona)

  • MariaDB maintains and integrates most of the patches

  • a more do-it-yourself approach where you maintain a patch series.



I talked with Kristian after the presentation about leveraging bzr and LP to make the maintenance easier. It could look like this:

  1. Each patch would be available and maintained in a bzr branch - in LP or else where.

  2. The Ubuntu MySQL package branch available in LP would be used as the base for creating a debian package (or the Debian packaging branch since Debian packages are also available in LP via bzr)

  3. bzr-looms would glue the package bzr branch with the patches bzr branches. The loom could be available from LP or elsewhere.

  4. bzr-builder would be used to create a recipe to build binary packages out of the loom branch.

  5. Packages would be published in PPAs ready to be installed on Ubuntu systems.


The Cassandra distributed database


I finally managed to get into the NoSQL room to attend Eric Evans' overview of the Cassandra project. He is a full time developer and an employee of Rackspace. The project was started by Facebook to power their inbox search. Even though the project had been available for some years the developer community really started to grow in March 2009. It is now part of the Apache project and about to graduate to a Top Level Project there.

It is inspired by Dynamo from Amazon and provides an O(1) DHT with eventual consistency and consistent hashing. Multiple client APIs are available:

  • Thrift

  • Ruby

  • Python

  • Scala


I left before the end of the talk as I wanted to catch the complete presentation about using git for packaging.

Cross distro packaging with (top)git


Thomas Koch gave an overview of using git and related tools to help maintain Debian packaging. He works in a web shop where every web application is deployed as a Debian package.

The upstream release tarball is imported into an upstream git branch using the pristine-tar tool. Packaging code (ie the debian/ directory) is kept in a different branch.

Patches to the upstream code are managed by topgit as separate git branches. He also noted that topgit is able to export the whole stack of patches in the quilt Debian source format using the tg export command.

Here is the list of tools associated with his workflow:

  • pristine-tar

  • git-buildpackage

  • git-import-orig

  • git-dch

  • topgit


The workflow he outlined looked very similar to the one based around bzr and looms.

Scaling Facebook with OpenSource tools


David Recordon from Facebook gave a good presentation on the challenges that Facebook runs into when it comes to scale effectively.

Here are a few numbers I caught during the presentation to give an idea about the scale of the Facebook infrastructure (Warning: they may be wrong - watch the video to double check):

  • 8 billion minutes spent on Facebook every day

  • 2.5 billion pictures uploaded every month

  • 400 billion page views per month

  • 25 TB of logs per day

  • 40 billion pictures stored in 4 resolutions, which brings a grand total of 160 billion photos

  • 4 million lines of PHP code


Their overall architecture can be broken into the following components:

  1. Load balancers

  2. Web server (php): Most of the code is written in PHP: the language is simple, it fits well for fast development environments and there are a lot of developers available. A few of the problems are CPU, memory, how to reuse the PHP logic in other systems and the difficulty of writing extensions to speed up critical parts of the code. An overview of the HipHop compiler was given: a majority of their PHP code can be converted to C++ code which is then compiled and deployed on their webservers. An Apache module is coming up soon, probably as a FastCGI extension.

  3. memcached (fast, simple): A core component of their infrastructure. It's robust and scales well: 120 million queries/second. They wrote some patches which are now making their way upstream.

  4. Services (fast, complicated): David gave an overview of some of the services that Facebook opensourced:

    • Thrift: an rpc server, now part of the Apache incubator.

    • Hive: built on top of Hadoop, it is now part of the Apache project. It's an SQL-like frontend to Hadoop aiming at simplifying access to the Hadoop infrastructure so that more people (ie non-engineers) can write and run data analysis jobs.

    • Scribe: a performant and scalable logging system. Logs are stored in a hadoop/hive cluster to help in data analysis.



  5. Databases (slow, simple): Thousands of MySQL servers are used as a persistence layer. InnoDB is used as the storage engine and multiple independent clusters are used for reliability. Joins are done at the webserver layer. The database layer is actually the persistent storage layer with memcached acting as a distributed index.


Other talks that seemed interesting


I had planned to attend a few other talks as well. Unfortunately either their schedule conflicted with another interesting presentation or the room was completely full (which seemed to happen all day long with the NoSQL room). Here is a list of them:

  1. NoSQL for Fun & Profit

  2. Introduction to MongoDB

  3. Cloudlets: universal server images for the cloud

  4. Continuous Packaging with Project-Builder.org