Monday, September 27, 2010

Deploying a Hadoop cluster on EC2/UEC with Puppet and Ubuntu Maverick



A Hadoop cluster running on EC2/UEC, deployed by Puppet on Ubuntu Maverick.

How it works


The Cloud Conductor is located outside the AWS infrastructure, since it holds the AWS credentials needed to start new instances and those credentials should not live on the instances themselves. The Puppet Master runs in EC2 and uses S3 to check which clients it should accept.

The Hadoop Namenode, Jobtracker and Worker are also running in EC2. The Puppet Master automatically configures them so that each Worker can connect to the Namenode and Jobtracker.

The Puppet Master uses Puppet's stored configurations to distribute settings between all the Hadoop components. For example, the Namenode IP address is automatically pushed to the Jobtracker and the Worker nodes so that they can connect to the Namenode.

Ubuntu Maverick is used because it ships Puppet 2.6, which these recipes require. The excellent Cloudera CDH3 Beta2 packages provide the base Hadoop foundation.

Puppet recipes and the Cloud Conductor scripts are available in a bzr branch on Launchpad.

Set up the Cloud Conductor


The first part of the Cloud Conductor is the start_instance.py script. It takes care of starting new instances in EC2 and registering them in S3. Its configuration lives in start_instance.yaml. Both files are located in the conductor directory of the bzr branch.
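The script itself lives in the bzr branch; the following is a minimal sketch of the approach, assuming the boto and PyYAML libraries. The helper is illustrative, not the actual start_instance.py, and the role-to-class mapping and S3 key layout are assumptions:

#!/usr/bin/env python
# Illustrative sketch of the Cloud Conductor: start an EC2 instance for a
# given role and register its certname in S3 so the Puppet Master will
# accept it.
import sys
import uuid

import boto
import yaml

def start_instance(role, config_file="start_instance.yaml"):
    config = yaml.safe_load(open(config_file))
    # Each instance gets a unique certname, which doubles as its S3 key.
    certname = str(uuid.uuid4())

    # The puppet agent settings travel to the instance via cloud-init
    # user-data (the exact assembly is shown later in this post).
    user_data = "#cloud-config\n" + yaml.safe_dump(config["cloud_init"])

    # Start the instance; AWS credentials are taken from the environment.
    ec2 = boto.connect_ec2()
    ec2.run_instances(image_id=config["ami_id"], user_data=user_data)

    # Register the certname in S3. The node's classes are stored as the
    # key's content for the node classifier to read back (assumed layout).
    s3 = boto.connect_s3()
    bucket = s3.create_bucket(config["s3_bucket_name"])
    key = bucket.new_key(certname)
    key.set_contents_from_string(yaml.safe_dump({"classes": ["hadoop::%s" % role]}))
    key.make_public()  # assumed: the master fetches the key anonymously

if __name__ == "__main__":
    start_instance(sys.argv[1])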

The following options are available in start_instance.yaml:
  • s3_bucket_name: Sets the name of the S3 bucket used to store the list of instances started by the Cloud Conductor. The Puppet Master uses the same bucket to check which Puppet Client should be accepted.
  • ami_id: Sets the id of the AMI the Cloud Conductor will use to start new instances.
  • cloud_init: Sets specific cloud-init parameters. All of the puppet client configuration is defined here. Public ssh keys (for example from Launchpad) can be configured using the ssh_import_id option. The cloud-init documentation has more information [1] about what can be configured when starting new instances.

A sample start_instance.yaml file looks like this:

# Name of the S3 bucket to use to store the certname of started instances
s3_bucket_name: mathiaz-hadoop-cluster
# Base AMI id to use to start all instances
ami_id: ami-c210e5ab
# Extra information passed to cloud-init when starting new instances
# see cloud-init documentation for available options.
cloud_init: &site-cloud-init
  ssh_import_id: mathiaz


Once the Cloud Conductor is configured, a Puppet Master can be started:


./start_instance.py puppetmaster

Set up the Puppet Master


Once the instance has started and its SSH fingerprints have been verified, the puppet recipes are deployed on the Puppet Master:


bzr branch lp:~mathiaz/+junk/hadoop-cluster-puppet-conf ~/puppet/
sudo mv /etc/puppet/ /etc/old.puppet
sudo mv ~/puppet/ /etc/


The S3 bucket name is set in the Puppet Master configuration /etc/puppet/manifests/puppetmaster.pp:


node default {
  class {
    "puppet::ca":
      node_bucket => "https://mathiaz-hadoop-cluster.s3.amazonaws.com";
  }
}


And finally the Puppet Master installation can be completed by puppet itself:

sudo puppet apply /etc/puppet/manifests/puppetmaster.pp

A Puppet Master is now running in EC2 with all the recipes required to deploy the different components of a Hadoop Cluster.

Update the Cloud Conductor configuration


Since the Cloud Conductor starts instances that will connect to the Puppet Master it needs to know some information about the Puppet Master:
  • the Puppet Master's internal IP address or DNS name. For example, the private DNS name of the instance (which is its FQDN) can be used.
  • the Puppet Master's CA certificate (located in /var/lib/puppet/ssl/ca/ca_crt.pem).

On the Cloud Conductor the information gathered on the Puppet Master is added to start_instance.yaml:

agent:
  # Puppet server hostname or IP
  # In EC2 the Private DNS of the instance should be used
  server: domU-12-31-38-00-35-98.compute-1.internal
  # NB: the certname will automatically be added by start_instance.py
  # when a new instance is started.
  # Puppetmaster ca certificate
  # located in /var/lib/puppet/ssl/ca/ca_crt.pem on the puppetmaster system
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIICFzCCAYCgAwIBAgIBATANBgkqhkiG9w0BAQUFADAUMRIwEAYDVQQDDAlQdXBw
    [ ... ]
    k0r/nTX6Tmr8TTU=
    -----END CERTIFICATE-----
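
When start_instance.py launches an instance, this agent section ends up in the instance's user-data so that cloud-init can configure the puppet agent before its first run. An illustrative sketch of that assembly, assuming cloud-init's puppet cloud-config module and PyYAML (the build_user_data helper is hypothetical):

# Fold the agent section and a freshly generated certname into the
# #cloud-config user-data handed to cloud-init.
import uuid

import yaml

def build_user_data(config, certname):
    agent = dict(config["agent"])
    ca_cert = agent.pop("ca_cert")
    agent["certname"] = certname  # added automatically per instance
    cloud_init = dict(config["cloud_init"])
    # cloud-init's puppet module reads the agent settings and the CA
    # certificate from the "puppet: conf:" key of the cloud-config.
    cloud_init["puppet"] = {"conf": {"agent": agent, "ca_cert": ca_cert}}
    return "#cloud-config\n" + yaml.safe_dump(cloud_init)

config = yaml.safe_load(open("start_instance.yaml"))
print(build_user_data(config, str(uuid.uuid4())))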

Start the Hadoop Namenode


Once the Puppet Master and the Cloud Conductor are configured, the Hadoop Cluster can be deployed. First in line is the Hadoop Namenode:


./start_instance.py namenode

After a few minutes the Namenode puppet client requests a certificate:
puppet-master[7397]: Starting Puppet master version 2.6.1
puppet-master[7397]: 53b0b7bf-723c-4a0f-b4b1-082ebec84041 has a waiting certificate request

The Master signs the CSR:


CRON[8542]: (root) CMD (/usr/local/bin/check_csr https://mathiaz-hadoop-cluster.s3.amazonaws.com)
check_csr[8543]: INFO: Signing request: 53b0b7bf-723c-4a0f-b4b1-082ebec84041
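
The check_csr script run from cron is what ties certificate signing to the S3 registry: it signs a waiting request only if the Cloud Conductor registered the certname. A hedged reconstruction of its logic, assuming Puppet 2.6's puppet cert subcommand (the actual script is in the bzr branch):

# Sign only those waiting CSRs whose certname exists in the S3 bucket.
import subprocess
import sys
import urllib2

def waiting_requests():
    # "puppet cert --list" prints the certnames awaiting a signature
    # (output parsing here is approximate).
    proc = subprocess.Popen(["puppet", "cert", "--list"],
                            stdout=subprocess.PIPE)
    output = proc.communicate()[0]
    return [line.split()[0].strip('"') for line in output.splitlines()
            if line.strip()]

def registered(bucket_url, certname):
    # The conductor created a key named after the certname, so a plain
    # GET succeeds only for instances it started.
    try:
        urllib2.urlopen("%s/%s" % (bucket_url, certname))
        return True
    except urllib2.HTTPError:
        return False

if __name__ == "__main__":
    bucket_url = sys.argv[1].rstrip("/")
    for certname in waiting_requests():
        if registered(bucket_url, certname):
            print("INFO: Signing request: %s" % certname)
            subprocess.check_call(["puppet", "cert", "--sign", certname])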


The Master then classifies the node and tries to compile its manifest:


node_classifier[8989]: DEBUG: Checking url https://mathiaz-hadoop-cluster.s3.amazonaws.com/53b0b7bf-723c-4a0f-b4b1-082ebec84041
node_classifier[8989]: INFO: Getting node configuration: 53b0b7bf-723c-4a0f-b4b1-082ebec84041
node_classifier[8989]: DEBUG: Node configuration (53b0b7bf-723c-4a0f-b4b1-082ebec84041): classes: ['hadoop::namenode']
puppet-master[7397]: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find stage hadoop-base specified by Class[Hadoop::Base] at /etc/puppet/modules/hadoop/manifests/init.pp:142 on node 53b0b7bf-723c-4a0f-b4b1-082ebec84041


Unfortunately there is a bug related to puppet stages: the first catalog compilation fails with the "Could not find stage" error shown above. As a workaround, the puppet agent can be restarted:


sudo /etc/init.d/puppet restart

The syslog file on the Namenode shows the Puppet agent installing and configuring the Hadoop Namenode:


puppet-agent[1795]: Starting Puppet client version 2.6.1
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Key[cloudera]/File[/etc/apt/cloudera.key]/ensure) defined content as '{md5}dc59b632a1ce2ad325c40d0ba4a4927e'
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Key[cloudera]/Exec[import apt key cloudera]) Triggered 'refresh' from 1 events
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Sources_list[canonical]/File[/etc/apt/sources.list.d/canonical.list]/ensure) created
puppet-agent[1795]: (/Stage[apt]/Hadoop::Apt/Apt::Sources_list[cloudera]/File[/etc/apt/sources.list.d/cloudera.list]/ensure) created
puppet-agent[1795]: (/Stage[apt]/Apt::Apt/Exec[apt-get_update]) Triggered 'refresh' from 3 events


The first stage of the puppet run sets up the Canonical partner archive and the Cloudera archive. The Sun JVM is pulled from the Canonical partner archive, while the Hadoop packages are downloaded from the Cloudera archive.

The following stage creates a common Hadoop configuration:


puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/var/cache/debconf/sun-java6.seeds]/ensure) defined content as '{md5}1e3a7ac4c2dc9e9c3a1ae9ab2c040794'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/Package[sun-java6-bin]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/Package[hadoop-0.20]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/var/lib/hadoop-0.20/dfs]/ensure) created
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/etc/hadoop-0.20/conf.puppet]/ensure) created
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/etc/hadoop-0.20/conf.puppet/hdfs-site.xml]/ensure) defined content as '{md5}1f9788fceffdd1b2300c06160e7c364e'
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/Exec[/usr/sbin/update-alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.puppet 15]) Triggered 'refresh' from 1 events
puppet-agent[1795]: (/Stage[hadoop-base]/Hadoop::Base/File[/etc/default/hadoop-0.20]/content) content changed '{md5}578894d1b3f7d636187955c15b8edb09' to '{md5}ecb699397751cbaec1b9ac8b2dd0b9c3'

Finally the Hadoop Namenode is configured:


puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Package[hadoop-0.20-namenode]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/File[hadoop-core-site]/ensure) defined content as '{md5}2f2445bf3d4e26f5ceb3c32047b19419'
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/File[/var/lib/hadoop-0.20/dfs/name]/ensure) created
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Exec[format-dfs]) Triggered 'refresh' from 1 events
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Service[hadoop-0.20-namenode]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1795]: (/Stage[main]/Hadoop::Namenode/Service[hadoop-0.20-namenode]) Failed to call refresh: Could not start Service[hadoop-0.20-namenode]: Execution of '/etc/init.d/hadoop-0.20-namenode start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:177


This time the bug is in the Hadoop init script: the Namenode fails to start on the first attempt. Either restart the puppet agent or let the next scheduled puppet run start it:


sudo /etc/init.d/puppet restart

The Namenode daemon is now running and logs to /var/log/hadoop/hadoop-hadoop-namenode-*.log:


[...]
INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
[...]
INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8200: starting
INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8200: starting
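
A quick liveness probe against that web interface confirms the Namenode is reachable from other instances (a minimal sketch; the hostname is hypothetical):

# Minimal liveness probe of the Namenode web UI on port 50070.
import urllib2

response = urllib2.urlopen("http://namenode.internal.example:50070/")
print(response.getcode())  # 200 once the Namenode web server is up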

Start the Hadoop Jobtracker


The next component to start is the Hadoop Jobtracker:


./start_instance.py jobtracker

After some time the Puppet Master compiles the Jobtracker manifest:


node_classifier[30683]: DEBUG: Checking url https://mathiaz-hadoop-cluster.s3.amazonaws.com/2faa4de9-c708-45ab-a515-ae041a9d0239
node_classifier[30683]: INFO: Getting node configuration: 2faa4de9-c708-45ab-a515-ae041a9d0239
node_classifier[30683]: DEBUG: Node configuration (2faa4de9-c708-45ab-a515-ae041a9d0239): classes: ['hadoop::jobtracker']
puppet-master[23542]: Compiled catalog for 2faa4de9-c708-45ab-a515-ae041a9d0239 in environment production in 2.00 seconds
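
The node_classifier entries come from Puppet's external node classifier (ENC) hook: the master runs a script with the node's certname as its argument and reads node YAML from its stdout. A hedged sketch of an S3-backed classifier (assumed logic; the real one ships with the puppet recipes in the bzr branch):

# Classify a node from the S3 bucket: called by the Puppet Master with a
# certname, prints node YAML on stdout.
import sys
import urllib2

import yaml

BUCKET_URL = "https://mathiaz-hadoop-cluster.s3.amazonaws.com"

certname = sys.argv[1]
try:
    body = urllib2.urlopen("%s/%s" % (BUCKET_URL, certname)).read()
except urllib2.HTTPError:
    sys.exit(1)  # unknown node: refuse to classify it
node = yaml.safe_load(body)  # assumed key layout: {"classes": [...]}
# ENC contract: print a YAML hash with at least a "classes" key.
print(yaml.safe_dump({"classes": node["classes"]}))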


On the instance the puppet agent configures the Hadoop Jobtracker:


puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/File[hadoop-mapred-site]/ensure) defined content as '{md5}af3b65a08df03e14305cc5fd56674867'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/File[hadoop-core-site]/ensure) defined content as '{md5}2f2445bf3d4e26f5ceb3c32047b19419'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/Package[hadoop-0.20-jobtracker]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/Service[hadoop-0.20-jobtracker]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1035]: (/Stage[main]/Hadoop::Jobtracker/Service[hadoop-0.20-jobtracker]) Failed to call refresh: Could not start Service[hadoop-0.20-jobtracker]: Execution of '/etc/init.d/hadoop-0.20-jobtracker start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:135


There is the same bug in the init script. Let's restart the puppet agent:


sudo /etc/init.d/puppet restart

The Jobtracker connects to the Namenode, and error messages are regularly logged to both the Namenode and Jobtracker log files:


INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8200, call
addBlock(/hadoop/mapred/system/jobtracker.info, DFSClient_-268101966, null)
from 10.122.183.121:54322: error: java.io.IOException: File
/hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes,
instead of 1
java.io.IOException: File /hadoop/mapred/system/jobtracker.info could only be
replicated to 0 nodes, instead of 1


This is normal: no Datanode daemons are available yet for data replication.

Start the Hadoop Workers


It's now time to start the Hadoop Worker to get an operational Hadoop Cluster:


./start_instance.py worker

The Hadoop Worker runs both a Datanode and a Tasktracker. The Puppet agent configures them to talk to the Namenode and the Jobtracker respectively.

After some time the Puppet Master compiles the catalog for the Hadoop Worker:


node_classifier[8368]: DEBUG: Checking url https://mathiaz-hadoop-cluster.s3.amazonaws.com/b72a8f4d-55e6-4059-ac4b-26927f1a1016
node_classifier[8368]: INFO: Getting node configuration: b72a8f4d-55e6-4059-ac4b-26927f1a1016
node_classifier[8368]: DEBUG: Node configuration (b72a8f4d-55e6-4059-ac4b-26927f1a1016): classes: ['hadoop::worker']
puppet-master[23542]: Compiled catalog for b72a8f4d-55e6-4059-ac4b-26927f1a1016 in environment production in 0.18 seconds


On the instance the puppet agent installs the Hadoop worker:


puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/File[hadoop-mapred-site]/ensure) defined content as '{md5}af3b65a08df03e14305cc5fd56674867'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Package[hadoop-0.20-datanode]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/File[/var/lib/hadoop-0.20/dfs/data]/ensure) created
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Package[hadoop-0.20-tasktracker]/ensure) ensure changed 'purged' to 'latest'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/File[hadoop-core-site]/ensure) defined content as '{md5}2f2445bf3d4e26f5ceb3c32047b19419'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-datanode]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-datanode]) Failed to call refresh: Could not start Service[hadoop-0.20-datanode]: Execution of '/etc/init.d/hadoop-0.20-datanode start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:103
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-tasktracker]/ensure) ensure changed 'stopped' to 'running'
puppet-agent[1030]: (/Stage[main]/Hadoop::Worker/Service[hadoop-0.20-tasktracker]) Failed to call refresh: Could not start Service[hadoop-0.20-tasktracker]: Execution of '/etc/init.d/hadoop-0.20-tasktracker start' returned 1:  at /etc/puppet/modules/hadoop/manifests/init.pp:103


Again the same init script bug - let's restart the puppet agent:


sudo /etc/init.d/puppet restart

Once the worker is installed, the Datanode daemon connects to the Namenode:


INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 10.249.187.5:50010 storage DS-2066068566-10.249.187.5-50010-1285276011214
INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.249.187.5:50010


Similarly, the Tasktracker daemon registers itself with the Jobtracker:
INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/domU-12-31-39-03-B8-F7.compute-1.internal

The Hadoop Cluster is up and running.
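
At this point the cluster can be exercised end to end, for example by running one of the bundled MapReduce examples from any node with the Hadoop client installed. A minimal sketch, assuming the CDH3 examples jar sits at its usual path:

# Smoke test: run the bundled "pi" MapReduce example against the cluster.
# The jar path is the usual CDH3 location; adjust if it differs.
import subprocess

subprocess.check_call([
    "hadoop", "jar", "/usr/lib/hadoop-0.20/hadoop-examples.jar",
    "pi", "2", "1000",  # 2 maps, 1000 samples each
])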

Conclusion


Once the initial setup of the Puppet Master is done and the Hadoop Namenode and Jobtracker are up and running, adding new Hadoop Workers takes just one command:

./start_instance.py worker

Puppet automatically configures them to join the Hadoop Cluster.
