Monday, December 21, 2009

RFC: Boot-time configuration syntax for UEC/EC2 images

As part of the Boot-time configuration for UEC/EC2 images specification, a configuration file can be passed to instances as user-data to customize parts of the instance without writing and maintaining custom scripts.

The goal is to support the most common operations done on instance boot, as well as to help bootstrap the instance into an existing configuration management infrastructure.

It currently supports:

  • apt configuration

  • package installation

Other requested features under consideration include:

  • runurl support

  • ssh host keys setup

Should these be included as well?

Here is an example of a configuration file (using YAML as the syntax):
# Update apt database on first boot
# (ie run apt-get update)
# Default: true
apt_update: false

# Upgrade the instance on first boot
# (ie run apt-get upgrade)
# Default: false
apt_upgrade: true

# Add apt repositories
# Default: none
apt_sources:

 # PPA shortcut:
 #  * Setup the correct apt sources.list line
 #  * Import the signing key from LP
 #  See the Launchpad PPA documentation for more information
 - source: "ppa:user/ppa"    # Quote the string

 # Custom apt repository:
 #  * Creates a file in /etc/apt/sources.list.d/ for the sources list entry
 #  * [optional] Import the apt signing key from the keyserver
 #  * Defaults:
 #    + keyserver:
 #    + filename: 00-boot-sources.list
 #  See the sources.list man page for more information about the format
 - source: "deb lucid main restricted" # Quote the string
   keyid: 12345678 # GPG key ID published on a key server

 # Custom apt repository:
 #  * The apt signing key can also be specified
 #    by providing a pgp public key block
 #  The apt repository will be added to the default sources.list file:
 #  /etc/apt/sources.list.d/00-boot-sources.list
 - source: "deb ./" # Quote the string
   key: | # The value needs to start with -----BEGIN PGP PUBLIC KEY BLOCK-----
    -----BEGIN PGP PUBLIC KEY BLOCK-----
    Version: SKS 1.0.10

# Add apt configuration files
#  Add an apt.conf.d/ file with the relevant content
#  See the apt.conf man page for more information.
#  Defaults:
#   + filename: 00-boot-conf
apt_conf:

 # Creates an apt proxy configuration in /etc/apt/apt.conf.d/01-proxy
 - filename: "01-proxy"
   content: |
    Acquire::http::Proxy "";

 # Add the following line to /etc/apt/apt.conf.d/00-boot-conf
 #  (run debconf at a critical priority)
 - content: |
    DPkg::Pre-Install-Pkgs:: "/usr/sbin/dpkg-preconfigure --apt -p critical || true";

# Provide debconf answers
# See the debconf-set-selections man page.
# Default: none
debconf_selections: |     # Need to preserve newlines
 # Force debconf priority to critical.
 debconf debconf/priority select critical

 # Override default frontend to readline, but allow user to select.
 debconf debconf/frontend select readline
 debconf debconf/frontend seen false

# Install additional packages on first boot
# Default: none
packages:
 - openssh-server
 - postfix
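A consumer of this format would load the user-data with a YAML parser and dispatch each top-level key to a handler. Here is a minimal, hypothetical Python sketch of that dispatch step; the handler functions and their return values are illustrative only (in practice `yaml.safe_load(user_data)` would produce the mapping passed in):

```python
# Sketch of dispatching a parsed boot-config mapping to handlers.
# The handler functions are hypothetical placeholders; a real
# implementation would execute the commands instead of returning them.

def handle_apt_update(enabled):
    # Would run "apt-get update" on first boot when enabled (default: true).
    return ["apt-get", "update"] if enabled else None

def handle_packages(names):
    # Would install each listed package on first boot.
    return [["apt-get", "install", "-y", name] for name in names]

HANDLERS = {
    "apt_update": handle_apt_update,
    "packages": handle_packages,
}

def process_config(config):
    """Run the handler for each recognized top-level key."""
    results = {}
    for key, value in config.items():
        handler = HANDLERS.get(key)
        if handler is not None:
            results[key] = handler(value)
    return results

# Example: the parsed form of a small user-data document.
parsed = {"apt_update": True, "packages": ["openssh-server", "postfix"]}
print(process_config(parsed))
```

Unknown keys are simply ignored, which leaves room for the format to grow without breaking older images.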

I would like to get feedback on the format as well as ideas for other features, either on the wiki page or in the comments section.

Monday, November 30, 2009

RFP: packages to promote to main and demote to universe for Lucid Lynx LTS

The Ubuntu Server team is requesting feedback on the list of packages to be promoted to main and demoted to universe during this release cycle.

Lucid being an LTS release, we want to make sure that packages in main are maintainable for 5 years. Useful packages should be promoted to main while packages that provide duplicated functionality or are no longer maintained should be demoted to universe.

The LucidServerSeeds wiki page is used to track packages under discussion. If you want to add a package to this discussion you should edit the relevant section (either Proposed universe demotions or Proposed main promotions) of the wiki page.

For example, the current list of proposed packages to be moved to universe includes:

  • nis

  • elinks

  • lm-sensors

  • sensord

  • cricket

  • radvd

  • logwatch

  • vlock

  • lilo

  • libxp6

The current packages being discussed for main promotion include acl, ctdb and tdb-tools (to support Samba clustering). A switch from autofs 4 to autofs 5 is also under discussion.

Any feedback is welcome and should be added to the wiki page.

Friday, October 16, 2009

Oct 13 - Oct 16 Wrap-up


  • Loads of testing. Uncovered new bugs and helped Dustin fix most of them.
    • multiple installs on two sets of hardware in Montreal.
    • stress testing.

  • Helped Scott and others debug their UEC install.
  • review and upload image-store-proxy (working now).

Upgrades testing

  • Helped out mvo with adding logic to handle the mysql 5.0 upgrade from jaunty to karmic.
  • Support MySQL cluster setup.


  • Reviewed and sponsored checkbox for Marc.
  • review and sponsor landscape-client new upstream release.

Monday, October 5, 2009

Sep 28 - Oct 02 Wrap-up

Loads of Karmic Beta -server isos testing.

One day of UEC Beta testing: chased down with Dustin and Matt the failure of the auto-registration upstart scripts. Turns out to be a bug in Upstart - known by Scott who has a simple fix (dbus related).

Investigated a failed RAID installation: this is a known boot loader issue. Added a section about it to the Karmic Release Notes.

Install UNR Karmic beta on my mini 10v. Write up blog post about it. Looks slick.

Put shorewall back into main. Fell off to universe due to a package rename in Debian.

More work on directory/krb5 infrastructure using puppet: add support for slapd modules and schemas to the puppet configuration. Slow progress towards a fully automated deployment of a directory+krb5 infrastructure for testing purposes in EC2.

Updated the server team knowledge base with the lists of daily New/Undecided bugs so that daily triaging can be kicked off. The lists are generated automatically.

Thursday, October 1, 2009

Test run: Ubuntu Netbook Remix 9.10 Beta on my Dell Mini 10v

Impressive for a beta release. Of course there are a few glitches but overall it feels great: I'm writing this article from my Mini 10v running an Ubuntu Netbook Remix 9.10 Beta live system.

At the beginning of the week I received the Dell Mini 10v I had ordered a few weeks ago. I had chosen to upgrade some of the default components: my Mini 10v comes with 2 GB of RAM and a 16 GB SSD drive. And of course Ubuntu Hardy 8.04 LTS is installed by default at the factory. Now that the Beta of Karmic has been released I decided to take the opportunity to download the Ubuntu Netbook Remix iso and boot from a USB stick to see what this variant of Ubuntu looks like.

Load Ubuntu Netbook Remix on a USB key

But first things first. In order to be able to boot the UNR Beta iso, I had to put it on a usb stick. The USB Startup Disk Creator application located under System -> Administration proved to be the best option:

  1. Download the UNR Beta iso image.

  2. Connect your usb key to the computer. I was actually using a 1GB SD card from my camera with a USB adapter.

  3. Open USB Startup Disk Creator.

  4. Select the UNR beta iso image and the usb drive (which may need to be formatted).

  5. And make the startup disk.

The boot experience

I plugged the usb stick into one of the Mini 10v's USB ports, powered on my netbook and hit F12 early in the boot sequence to bring up the boot menu. And there - as the second choice - was my USB stick.

Loading the whole system took some time, during which I could admire the new boot experience - well, I wasn't that surprised as my main laptop had been running Karmic for a while now. But it still looked slick, as the new black and white theme matched my Mini 10v's colours very well - black for most parts with a light grey stripe below the keyboard.

After being auto-logged in I was greeted with the new launcher and started to poke around. Turns out that tapping on the touch pad doesn't work. I had to use the buttons at the bottom to actually click (which is a bit annoying since the pad is sensitive around the click area - it can lead to some mouse movement while trying to click).

No wireless available

The Restricted Drivers manager popped up to tell me that I could install some non-free drivers. I had two choices, both related to the wireless card:

  1. The B43xxx wireless driver. I tried to activate it: packages seemed to get installed - however the driver was still disabled after that.

  2. The STA wireless driver. Tried to activate it as well. This time the driver seemed to have installed correctly. However a reboot of the system was required - which is a bit annoying when you run from a live USB key.

Selecting each driver popped up a prompt for a password in order to be able to install packages. It turns out the password is empty and just pressing the Enter key makes the prompt go away. I wonder if this dialogue could be completely deactivated during a live session - that would improve the experience for a completely new user.

So no wireless available on my Mini 10v running from the live USB key. Time to plug a wired network cable. And a few seconds later I was connected to the Internet.

Application Names ...

In the Favorites sub menu - which is the first thing you see when your session starts - there are a couple of applications: Mozilla Web Browser, Evolution Mail and Calendar, Cheese, Empathy, Help, Ubuntu One and Install Ubuntu Netbook Remix 9.10. All of these have recognizable names except Cheese and Empathy. Of course I know about these, being a long-time Ubuntu user - however it may be more difficult for a first-time user. Even though there is a small webcam as part of the Cheese icon and the Empathy icon kind of relates to communication, a descriptive name would probably be helpful.

... and Ubuntu One ...

As for the Ubuntu One option, it doesn't give a clue about what it is. So my curious nature led me to start the application (well... I knew what Ubuntu One was as I had been an early beta-tester). The Ubuntu One icon appeared in the top menu bar. I could go to the web and log into my account by right-clicking on the icon. However I didn't find an obvious way to associate my local instance with my remote account.

... and sound

Further poking around led me to the Sound and Video sub menu where I tried to record a sound. The first attempt failed. Opening the Volume Control from the File menu and going to the Input tab showed me that the input was actually muted. I unmuted it and voila - a few moments later I could hear my voice being played back!

So all in all I was pleasantly surprised by the beta version of UNR. A few glitches here and there (to be reported in LP of course) but overall the experience was positive!

Next step:

Actually install the system on the local SSD drive and experience the fast boot of Ubuntu on my Mini 10v. With an SSD drive I expect it to be below (9.)10 seconds.

Sunday, September 27, 2009

Sep 20 - Sep 25 Wrap-up

Spent most of my week in Portland to attend conferences.


  • Attended LDAPCon 2009 and published report.

  • Attended LinuxCon 2009.

Image Store Proxy

  • Updated image-store-proxy to 1.0. This version brings support for gpg signed images. Still need testing against the real-world Canonical Image Store infrastructure.

Friday, September 25, 2009

A summary of LDAPCon 2009

On Sunday, September 20th and Monday, September 21st I attended LDAPCon 2009 in Portland, OR. Most of the open source projects were there - with the notable absence of Port 389 (Redhat) - as well as some vendors (Apple and UnboundID). Most of the slides are available online.

Apache Directory project

The Apache Directory folks gave several presentations:

Apache Directory Server provides an integrated product with most of the standard network services: in addition to ldap, the dns, dhcp, ntp and kerberos services can be enabled as part of a deployment. Kerberos support seems to be at an early stage, as it almost works. Another interesting aspect of the project is its integration with the Eclipse environment. Apache Directory Server is embedded in Apache Directory Studio. The latter provides a management tool for directory administrators. If the Eclipse integration in Ubuntu is improved, Apache Directory Studio would be a very good addition to the archive.

An overview of implementing replication in the Apache Directory Server project was given. RFC 4533 is used as the basis for LDAP replication in OpenLDAP. The goal here was to be able to replicate between Apache Directory Server and OpenLDAP. This may be the start of a standard replication protocol between different directory products.

Three components needed to be implemented:

  • the consumer part is the easiest and can be a standalone component. It receives LDAP entry updates and can do whatever it wants with them. It reminds me of similar requests I heard at the MySQL User Conference last April, where people were interested in having easier access to the MySQL replication log.

  • the producer is more complex to implement as it requires keeping a log of the modifications done on the server.

  • conflict resolution is the hardest part and is mandatory if multi-master is to be supported. The Apache Directory Server project decided to implement a last-writer-wins strategy, as they're trying not to require any user intervention for conflict resolution. I'm not convinced this is the best strategy though.
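The last-writer-wins strategy is simple enough to fit in a few lines. Here is a hypothetical Python sketch: real servers compare change sequence numbers rather than plain timestamps, but the principle is the same.

```python
# Toy last-writer-wins merge for conflicting entry updates.
# Illustrative only: directory servers use CSNs (change sequence
# numbers), not bare timestamps, but the decision rule is identical.

def merge_entry(local, remote):
    """Keep whichever version of the entry was written last.

    Each version is a dict carrying a "modified" timestamp plus the
    entry attributes; ties go to the local copy.
    """
    if remote["modified"] > local["modified"]:
        return remote
    return local

local = {"modified": 100, "cn": "alice", "mail": "alice@old.example"}
remote = {"modified": 120, "cn": "alice", "mail": "alice@new.example"}
print(merge_entry(local, remote)["mail"])
```

The appeal is that no administrator ever has to arbitrate a conflict; the cost is that a concurrent older write is silently discarded, which is exactly the property one can reasonably be unconvinced about.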

While implementing replication support they've also added support for stored procedures and triggers.

LSC Project: LDAP Synchronization Connector

Corporate environments usually have multiple identity repositories, and keeping all of them in sync can be quite a challenge. The LSC project aims at automating the task of keeping all identity stores up-to-date. Written in Java, it can read from and write to any database or LDAP directory. On-the-fly transformations of data sources are possible and the framework tries to make it easy to implement new synchronisation policies.

Another great tool that could be added to the directory administrator toolbox to help integrate Ubuntu in existing infrastructures.
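The core of such a connector is a simple reconciliation pass. A toy Python sketch of the idea (the dict-based stores and the transformation hook are purely illustrative stand-ins for real database or LDAP backends):

```python
# Toy synchronization pass in the spirit of LSC: read entries from a
# source identity store, apply an on-the-fly transformation, and
# update the target only where it diverges. Both stores are plain
# dicts keyed by entry id; purely illustrative.

def sync(source, target, transform=lambda entry: entry):
    """Copy missing or changed entries from source to target.

    Returns the list of keys that were created or updated, so a
    real connector could log or audit the changes it made.
    """
    changed = []
    for key, entry in source.items():
        wanted = transform(entry)
        if target.get(key) != wanted:
            target[key] = wanted
            changed.append(key)
    return changed

source = {"u1": {"uid": "Alice"}, "u2": {"uid": "Bob"}}
target = {"u1": {"uid": "ALICE"}}
# Example transformation: normalize uids to lower case on the fly.
print(sync(source, target, lambda e: {"uid": e["uid"].lower()}))
```

Because the pass only writes where source and target differ, running it repeatedly is harmless - the second run is a no-op.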

Storing LDAP Data in MySQL Cluster (OpenLDAP and OpenDS)

This was a joint presentation between the OpenLDAP and OpenDS projects. A new backend has been added to store entries using the MySQL Cluster NDB API. The main advantage is being able to access the same data over SQL and LDAP, as well as providing a highly-available infrastructure with data distributed on multiple nodes. Both OpenDS and OpenLDAP have worked together to create a common data model, highlighting that cooperation does happen in the LDAP space.

A Panel discussion among the representatives of the various LDAP Projects on roadmaps

Sunday ended with a panel where representatives of different directory vendors answered questions from the audience. Each open source project briefly outlined a few points they were trying to improve: documentation for OpenLDAP, data migration for Apache Directory and multiple schema support for OpenDS. The issue of virtual directories was also discussed, along with the need for more GUIs covering administration tools as well as workflows. Apache Directory Studio was given as a potentially good starting point to build these higher-level tools. The subject of standard ACLs was also covered. It seems that this is still a sensitive issue in the community and projects are still arguing about a common solution. One option put forward was to look at the X.500 ACL model and start from there.

The last item of discussion covered how to expand the user base of directories. The world of directories is rather small and its use cases are usually associated with Identity Management (users and groups, authentication). Having good client APIs was mentioned as an option. However the whole group ran out of ideas quickly and got somewhat stuck on this problem.

Directory Standardization Status

Directory standardization happens within two bodies: X.500 in ISO/IEC and LDAP in the IETF. The most important topic currently discussed in both bodies is password policies. A new draft of an IETF document is being worked on by Howard Chu and Ludovic Poitou.

Other topics being worked on cover:

  • Internationalization (with Unicode support in LDAPprep and SASLprep)

  • simple LDAP Transactions (to cover adding entries to different containers)

  • replacing DIGEST-MD5 with SCRAM

  • vCard support

On the Directory Application schemas front, support for the NFSv4 Federated Filesystem and an Information Model for Kerberos are currently being worked on, with drafts available for review.

The question of starting a new LDAP working group within the IETF was raised. Topics that could be covered include:

  • LDAP Chaining Operation

  • Access controls: based on the X.500 model with extensibility added.

  • LDIF update

  • LDAP Sync/ LDAP Sync-based Replication

  • Complex Transactions

  • Password Policies

  • Directory views

  • Schema versioning

LDAP in the java world

LDAP support in Java is being actively worked on, especially on the SDK front. OpenDS, Apache Directory Server and UnboundID have released new open-source SDKs to improve on the aging JNDI and Netscape Java SDKs. All of them are rather low-level implementations. The three projects are also working together to find common ground.

There is also some progress being made at the persistence level. The DataNucleus project gave an overview of adding LDAP support to the standard JDO interface. The goal is to provide a reference implementation of JDO for an LDAP data store.

Unified Authentication Service in OpenLDAP

Howard Chu gave an overview of the new OpenLDAP modules related to user authentication. Based on the work from nss-ldapd, the nssov overlay provides integration with the pam stack as well as the nss stack. Disconnected mode in the pcache overlay was added in the latest version of openldap, as discussed during the Ubuntu Developer Summit last May. Most of this work is already available in Ubuntu Karmic and improvements should be made during the Lucid release cycle.

Another interesting module is the integrated certification authority. If a search request for the userCertificate and userKey attributes of an entry is made and these attributes don't exist, they're generated on the fly. This should help out in creating an X.509-based PKI.

LDAP Innovations in the OpenDS project

The last session of the conference was given by Ludovic Poitou of the OpenDS project. New features available in OpenDS include tasks as well as extended syntax rules. Time matching rules have also been added so that queries like "give me entries that have a last login time older than 3 weeks" can be expressed directly in LDAP and processed by the server. That brought up some interesting issues when clients and servers don't share the same timezone.
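Without such server-side rules, a client has to compute the cutoff itself and compare against a generalized-time attribute. A hypothetical Python sketch of building that filter string (the `lastLoginTime` attribute name is an assumption for illustration, and the naive local timestamp is exactly where the timezone issues mentioned above creep in):

```python
# Build an LDAP filter for "last login older than 3 weeks" on the
# client side, using the LDAP generalized time format (YYYYMMDDHHMMSSZ).
# The lastLoginTime attribute name is a hypothetical example; the
# naive datetime here illustrates the client/server timezone pitfall.
from datetime import datetime, timedelta

def stale_login_filter(now, age=timedelta(weeks=3)):
    cutoff = (now - age).strftime("%Y%m%d%H%M%SZ")
    return "(lastLoginTime<=%s)" % cutoff

print(stale_login_filter(datetime(2009, 9, 21)))
```

A server-side time matching rule removes this per-client computation entirely and, more importantly, evaluates the cutoff against a single clock.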

A few gems from beer conversations

After the official sessions ended, most of the attendees congregated to have dinner followed by beers. Howard showcased his G1 phone running slapd while Ludovic was showing off an LDAP client application on his iPhone. And of course by the end of the conference both systems were connected: the iPhone was able to look up contact information on the G1 running slapd.

On an unrelated note, OpenLDAP is faster than OpenDS, even at beer drinking. However the OpenLDAP project was compared to a Beetle with a Porsche engine, whereas OpenDS was actually building a Porsche.

Even though not all the players in the directory space were represented at the conference, most of the key players from the open source world were there presenting their work. Friendly competition exists amongst the different projects, which turns into cooperation on topics that matter, such as interoperability and data formats.

It seems that the directory world is rather small and its use cases are restricted to specific situations compared to RDBMSes. This is rather unfortunate, as directories offer a compelling alternative to databases as a data store infrastructure. The community seems to be aware of this issue and is looking into breaking out of its traditional fields of application.

Friday, September 18, 2009

Sep 11 - Sep 18 Wrap-up


Package image-store-proxy to enable the Image Store tab in Eucalyptus. The package (python-image-store-proxy) has made its way to main and on the -server isos in time for alpha6 with the help of Thierry and Kees.


Kept on investigating the use of puppet to build an ldap/krb5 infrastructure on EC2. Integrated dnsmasq and puppetmaster configuration. Discovered a few bugs along the way and reported them upstream. My current work is available from an LP branch. And puppet is awesome!

Alpha6 ISO testing

Loads of alpha6 testing.

Landscape-client Stable Release Update

Reviewed the landscape-client and smart SRU requests from the Landscape team.

Bug scripts

With the help of Brian, my bug scripts are now run regularly. All bug lists used in the SRU review and the triaging process can be found there.


Updated my status report script to publish a draft of my activity report on my blog as the weekly "wrap-up".

Friday, September 11, 2009

Sep 07 - Sep 11 Wrap-up


Upload new sssd package to fix lintian errors and pull two fixes from upstream. Brainstorm with upstream about testing the package.

Prepare and upload openldap 2.4.18 to Karmic once the FFe was granted. That should complete the last part of the specification and bring disconnected mode support on the client via the pcache overlay.

Looked into using puppet to build an openldap/krb infrastructure to test all the directory related components on the client side (sssd, openldap pcache overlay). The idea is to be able to pull up and down complete environments within minutes using a combination of EC2/UEC and puppet.

Follow up on puppet promotion into main for karmic.

Ended up writing a custom puppet type to handle slapd modules using the default karmic configuration. This gave me a good overview of how puppet is working.

Image-store packaging

Looked at packaging. Follow-up call with Gustavo. Should have a package ready on Monday in time for alpha6. More polishing will be done for beta.

Apport in the default server install

Add apport to the default server install as requested by Steve Beattie for the karmic-qa-apport-in-ubuntu-server specification.

Linux-virtual missing virtio modules

Chased down and confirmed that the linux-virtual kernel doesn't have any of the virtio modules. Bug 423426 is milestoned and should be on the release team's radar. This has high importance as virtio vms cannot boot in karmic. Tim is working on it.

Mysql maintenance

Caught up on (lots of) mysql 5.0 and 5.1 bugs. Updated DebuggingMysql page in the process of triaging bugs.

Upload mysql 5.0 and 5.1 to fix a couple of bugs. Both mysql-server-core-5.{0,1} packages provide mysql-server-core which should be used by packages requiring the mysqld binary (such as akonadi).


Write up a script to get a list of ubuntu-server SRU bugs assigned to people. This produces the remaining list to be reviewed during the team meeting with the updated SRU workflow in the ubuntu-server team.


Reviewed checkbox merge proposal from Marc. Asked for a FFe as there is one new feature.


Arrange travel for LDAPcon/linuxcon in Portland, OR next week.

Saturday, June 13, 2009

Merges of the Weekend: suggestions from the Ubuntu Server team

Got some time for a couple of merges this weekend? I've just updated the list of packages that look easy to merge on the Ubuntu Server Team roadmap:

  • vsftpd and amavisd-new in main

  • asterisk, heimdal and boinc in universe.

The Merging wiki page gives an overview on the process and once your debdiff is ready you can upload it to karmic or file a bug and ask for sponsorship if you don't have access to the archive yet.

And if you knock down every package on the list above, Merge-O-Matic provides a full list of packages waiting to be merged.

Thursday, April 30, 2009

Are configuration management tools still needed in the cloud?

Cloud is the buzzword of the year, and with the Ubuntu Enterprise Cloud available in Ubuntu 9.04 everyone will be able to build their own private cloud to experiment with. As a cloud-based infrastructure provides more flexibility and dynamism, it seems that configuration management tools will become more and more important in the future.

What's the purpose of a configuration management tool?

According to wikipedia:
Configuration management (CM) is a field of management that focuses on establishing and maintaining consistency of a system's or product's performance and its functional and physical attributes with its requirements, design, and operational information throughout its life. For information assurance, CM can be defined as the management of security features and assurances through control of changes made to hardware, software, firmware, documentation, test, test fixtures, and test documentation throughout the life cycle of an information system

While the definition above is quite generic, system administrators use configuration management tools such as puppet or cfengine to automate system deployments and to make sure that every instance providing a specific service has the same configuration. Another service provided by these tools is automatically distributing configuration changes to all running systems.
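At their core these tools converge a system toward a declared state rather than scripting one-off steps. The idea can be reduced to an idempotent "ensure" primitive - sketched here in Python as a toy illustration of the principle that puppet and cfengine generalize across files, packages and services:

```python
# Minimal idempotent "ensure" primitive: declare the desired state,
# apply a change only when the system diverges from it. Toy sketch
# of the convergence model behind tools like puppet and cfengine.
import os

def ensure_file(path, content):
    """Make sure path exists with exactly this content.

    Returns True when a change was applied, False when the system
    already matched the declared state (a no-op on repeated runs).
    """
    if os.path.exists(path):
        with open(path) as existing:
            if existing.read() == content:
                return False
    with open(path, "w") as target:
        target.write(content)
    return True
```

Running such a primitive repeatedly is safe by construction, which is what lets these tools enforce the same configuration across every instance of a service.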

How does this apply to a cloud infrastructure?

The cloud model as implemented by the Ubuntu Enterprise Cloud is based on the golden image principle. Each system is based on a static image. The cloud infrastructure is then used to spawn new instances of a specific image. This is one of the characteristics of such an infrastructure: deploying new systems is easier, faster and cheaper. Potential resources are much larger than before.

However one of the issues with the golden image model is that over time running systems drift from the golden image. When a configuration update is made to the service, the offline golden image also needs to be updated. Moreover a configuration management system is needed to push the changes to running systems.

Let's take the example of a web hosting infrastructure running 20 instances of an apache server. How would a new virtual host be defined?

With a configuration management system a new virtual host is defined in the central repository and the tool deploys the new virtual host definition to all running systems.

Combining the golden image model with the ease of deployment provided by a cloud infrastructure would instead lead to defining the new virtual host on one running system, updating the golden image, spawning 20 new instances and swapping them with the old ones in the web infrastructure.
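The swap itself is mechanical and easy to automate. A hypothetical Python sketch against an EC2-style connection object - the `run_instances`/`terminate_instances` method names are assumptions for illustration, and a real library such as boto would differ in details:

```python
# Toy rollout: launch replacement instances from a new golden image,
# then retire the old ones. The connection object is duck-typed and
# the method names are hypothetical, not a real EC2 library API.

def rotate_instances(conn, image_id, old_instance_ids):
    """Replace each old instance with one built from image_id."""
    count = len(old_instance_ids)
    new_ids = conn.run_instances(image_id, count)
    # A real rollout would wait for the new instances to pass
    # health checks before terminating anything.
    conn.terminate_instances(old_instance_ids)
    return new_ids

class FakeEC2:
    """Stand-in connection used to illustrate the call sequence."""
    def __init__(self):
        self.terminated = []
        self.counter = 0

    def run_instances(self, image_id, count):
        ids = ["i-%05d" % (self.counter + n) for n in range(count)]
        self.counter += count
        return ids

    def terminate_instances(self, ids):
        self.terminated.extend(ids)

conn = FakeEC2()
print(rotate_instances(conn, "emi-apache-v2", ["i-old1", "i-old2"]))
```

Because old and new instances briefly coexist, the rollout doubles as the rollback mechanism discussed below: keeping the old instances around a little longer makes reverting trivial.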

It seems strange at first to re-bundle a new image and redeploy all of your servers just for a one-line configuration change. One reason may be that system re-installation has always been seen as a last resort option in a traditional infrastructure. This assumption is no longer true in the cloud with its fast and easy provisioning feature.

What are the advantages of the golden image pattern?

Rolling back a configuration change is much faster as both revisions of the service are running at the same time. System administrators don't need to learn another tool and can just use their standard ways of administering a single server.

However some issues remain:

How about applying a configuration change to several different golden images? A dozen images still need to be booted and the change made everywhere - we're back to square one. Configuration management tools have the concept of classes, and each system applies specific configurations according to its classes; this is done to avoid redundancy in configuration definitions. Having just a set of golden images creates redundant configuration. However the number of images to change is much smaller than the hundreds of instances one would otherwise deal with.

How about tracking changes between configurations? Most configuration management tools suggest keeping the central repository under revision control so that changes made to the environment can easily be tracked. With golden images we lack the tools to store multiple versions of golden images and perform image diffs: what is the difference between a running system and its base offline image, between two revisions of the same golden image, or between two running systems? Having access to such tools would be very useful to system administrators during debugging.
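A first cut at such an image diff tool could simply walk two mounted image trees and hash each file. A hypothetical Python sketch - a real tool would also compare metadata, handle special files and cope with huge trees:

```python
# Toy "image diff": compare two (mounted) image trees by hashing
# every regular file, reporting paths added, removed, or changed.
# Purely illustrative; ignores permissions, ownership and symlinks.
import hashlib
import os

def tree_digest(root):
    """Map each relative file path under root to its sha1 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[rel] = hashlib.sha1(f.read()).hexdigest()
    return digests

def image_diff(old_root, new_root):
    old, new = tree_digest(old_root), tree_digest(new_root)
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(p for p in set(old) & set(new)
                          if old[p] != new[p]),
    }
```

Run against a running system and its base image, the "changed" list is exactly the drift that the golden image model otherwise leaves invisible.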

In conclusion, configuration management tools have been used for some time by groups running big infrastructures with lots and lots of systems to manage. The dynamism of the cloud brings the same problems to its users, even if they only use a couple of instances to run their infrastructure. Configuration management tools should probably be considered essential when moving into the cloud.

Thursday, March 5, 2009

March 12th, 2009: The Thursday samba bugs were exterminated

I call for Ubuntu Bug warriors to unite on the 12th day of the month of March of the year of 2009 and march all together to squash bugs related to the samba package. Instructions for first timers will be provided in a wiki page as well as a list of prime targets. Veterans are also encouraged to join and focus on the most complex issues while providing support for the rest of the troops in the #ubuntu-bugs IRC channel on Freenode.

Join us in the battle for improving the robustness of the three daemons smbd, nmbd and winbindd and turn the next Ubuntu Bug day into a victory for all of the Samba users in Ubuntu.