How to develop Cloud Computing Infrastructure In-house with Open Source Software

By Partho, Gaea News Network
Friday, May 7, 2010

Cloud computing technology is in high demand because it meets the needs of both IT providers and Internet users. Corporate clients have a strong interest in the cloud, including infrastructure outsourcing, software as a service, key processes as a service, and next-generation distributed computing. So how about building a cloud computing infrastructure in-house with open source software? We came across OpenNebula, a leading open source, enterprise-grade toolkit for easily building Infrastructure-as-a-Service (IaaS) clouds, and it meets most of the demands of an in-house cloud computing infrastructure. OpenNebula is an open and flexible tool whose architecture, interfaces and components are built to fit into any existing data center and to support any type of cloud deployment: it provides cloud interfaces to private or hybrid infrastructure, and also supports the deployment of public clouds.

The latest OpenNebula v1.4 Cloud is a virtual computing environment accessible through two different remote interfaces, OCCI and EC2. With this open source software you can launch virtual machines based on a variety of available images with different operating systems and configurations.


OpenNebula offers two clouds: dummy and real

Dummy

There is a dummy cloud intended for trying out the two interfaces. Operations on this cloud result in the creation of virtual network and virtual machine resources, but no real action whatsoever is performed; the resources exist only on paper, giving the illusion of a working cloud.

Real

The other, real OpenNebula cloud is designed to give a feel for what can be achieved using OpenNebula as an infrastructure tool.

To use the cloud, download and install the OpenNebula software.

For both interfaces, OCCI and EC2, the URL to access the cloud is the same: OpenNebula (cloud.opennebula.org)

OCCI

Install the client

Check the requirements here.
In the source code directory run ./install.sh -c occi.

There is an optional configuration step, accomplished by setting the ONE_AUTH and OCCI_URL environment variables, which are explained here.
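For instance, a minimal sketch of these variables (the ONE_AUTH path and the OCCI port are assumptions; adjust them to your installation):

$ export ONE_AUTH=~/.one/one_auth   # file holding your username:password
$ export OCCI_URL=http://cloud.opennebula.org:4567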

EC2

Install the client

Check the requirements.

In the OpenNebula source code directory run ./install.sh -c ec2.
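Once the client is installed, the econe-* commands can be used against the cloud. A minimal sketch (the command names come from OpenNebula's econe-tools; the AMI identifier is illustrative):

$ econe-describe-images              # list the images you can launch
$ econe-run-instances ami-00000001   # launch a VM from one of them
$ econe-describe-instances           # check the state of your instances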

Here’s how you can create a cloud computing infrastructure with OpenNebula

Step 1

Installing OpenNebula

To begin with, you need to follow a few simple steps to install the OpenNebula software.

Download and untar the OpenNebula tarball.
Now change into the untarred folder and run scons to compile OpenNebula:

$ scons [OPTION=VALUE]

Use the optional OPTION=VALUE arguments to set non-default paths.
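For instance, a sketch pointing the build at non-default library locations (the sqlite and xmlrpc build options are the ones documented for v1.4; the paths are illustrative):

$ scons sqlite=/usr/local xmlrpc=/opt/xmlrpc-c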

OpenNebula can be installed in two modes: system-wide, or into a self-contained directory. In both cases, run the install script as root; the mode is specified with the options passed to the install script.

./install.sh <install_options>

Here <install_options> can be one or more of the options mentioned below:

-u: user that will run OpenNebula, defaults to the user executing install.sh

-g: group of the user that will run OpenNebula, defaults to the group of the user executing install.sh

-k: keep current configuration files; useful when upgrading

-d: target installation directory. This sets the path for a self-contained install; if it is not defined, the installation is performed system-wide

-r: remove OpenNebula; only needed if -d was not specified, otherwise rm -rf $ONE_LOCATION will do the job

Here’s how to do a self-contained installation:

~$ wget <opennebula tar gz>
~$ tar xzf <opennebula tar gz>
~$ cd one-1.4
~/one-1.4$ scons -j2
[ lots of compiling information ]
scons: done building targets.
~/one-1.4$ ./install.sh -d /srv/cloud/one
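After a self-contained install, the CLI tools expect a couple of environment variables; a minimal sketch, assuming the -d path used above:

~$ export ONE_LOCATION=/srv/cloud/one
~$ export PATH=$ONE_LOCATION/bin:$PATH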

Step 2

Configuring OpenNebula

Three components need to be configured (a sketch of where their files live follows this list):

  • The OpenNebula daemon, which organizes the operation of all the modules and controls the VMs’ life-cycle
  • The drivers, which access specific cluster systems
  • The scheduler, which takes the VM placement decisions
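As a quick orientation, here is where the relevant files live (paths assume a self-contained install; a system-wide install uses /etc/one instead of $ONE_LOCATION/etc):

$ONE_LOCATION/etc/oned.conf       # OpenNebula daemon configuration
$ONE_LOCATION/etc/mad/defaultrc   # defaults sourced by all the drivers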

OpenNebula Daemon

The configuration file for the daemon is called oned.conf and it is placed inside the $ONE_LOCATION/etc directory. Alternatively, it can be found in /etc/one if OpenNebula is installed system-wide.

A more detailed description of all the configuration options for the OpenNebula daemon can be found here.

Here’s an example showing how to configure OpenNebula to work with KVM and a shared FS:

# Attributes
HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL      = 60

VM_DIR = /srv/cloud/one/var    # Path in the cluster nodes to store VM images

NETWORK_SIZE = 254             # default
MAC_PREFIX   = "00:03"

# Drivers
IM_MAD = [ name="im_kvm",  executable="one_im_ssh",  arguments="im_kvm/im_kvm.conf" ]
VM_MAD = [ name="vmm_kvm", executable="one_vmm_kvm", default="vmm_kvm/vmm_kvm.conf", type="kvm" ]
TM_MAD = [ name="tm_nfs",  executable="one_tm",      arguments="tm_nfs/tm_nfs.conf" ]

Note: VM_DIR is set to the path where the front-end’s $ONE_LOCATION/var directory is mounted in the cluster nodes
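Once oned.conf is in place, the daemon and the scheduler can be started together with the one script shipped with OpenNebula; a minimal sketch, assuming $ONE_LOCATION/bin is in your PATH:

$ one start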

Scheduler

The scheduler module manages the assignment of pending Virtual Machines to cluster nodes. The OpenNebula architecture defines this module as a separate process that can be started independently of oned. The software offers a Rank Scheduling policy, aimed at prioritizing the resources most suitable for the VM. You can configure several resource- and load-aware policies by indicating RANK expressions in the VM definition files. To know more about configuring these policies, refer to the scheduling guide (https://www.opennebula.org/documentation:rel1.4:schg).
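For instance, a minimal sketch of a RANK expression in a VM definition file (FREECPU is one of the host monitoring attributes; the exact expression is illustrative):

RANK = "FREECPU"   # prefer the host with the most free CPU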

Note: OpenNebula can also operate without the scheduling process, in a VM management mode. In this case, starting or migrating a VM is performed explicitly using the onevm command.
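For instance, a sketch of explicitly deploying the pending VM with ID 0 on the host with HID 1:

$ onevm deploy 0 1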

Drivers

Drivers are separate processes that communicate with the OpenNebula core using an internal ASCII protocol. Before a driver is loaded, two run-commands (RC) files are sourced to optionally obtain environment variables.

There are two RC files:

i) $ONE_LOCATION/etc/mad/defaultrc - The variables are defined using sh syntax and, once read, are exported to the driver’s environment.

# Debug for MADs [0=ERROR, 1=DEBUG]
# If set, MADs will generate cores and logs in $ONE_LOCATION/var.
ONE_MAD_DEBUG=
# Nice Priority to run the drivers
PRIORITY=19

ii) There’s a specific file for each driver that might re-define the defaultrc variables.

Look into each driver’s configuration guide for specific options.

Step 3

Building the OpenNebula Private Cloud infrastructures

OpenNebula Private Cloud provides a flexible platform for infrastructure users, ensuring faster delivery and scalability of services to meet the dynamic demands of service end-users. Services are hosted in VMs and are submitted, monitored and controlled in the Cloud using the virtual infrastructure interfaces:

  • Command line interface
  • XML-RPC API
  • Libvirt virtualization API or any of its management tools
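Before any of this works, the cluster nodes have to be registered with the onehost command. A minimal sketch, assuming the driver names from the oned.conf example above and illustrative hostnames:

$ onehost create host01 im_kvm vmm_kvm tm_nfs
$ onehost create host02 im_kvm vmm_kvm tm_nfs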

Here’s a sample session that illustrates the functionality provided by the OpenNebula CLI for Private Cloud Computing.

$ onehost list
HID NAME     RVM  TCPU  FCPU  ACPU     TMEM     FMEM  STAT
  0 host01     0   800   800   800  8194468  7867604    on
  1 host02     0   800   797   800  8387584  1438720    on

Now let’s submit a VM to OpenNebula. First, build a VM template for the image placed in the /opt/nebula/images directory:

CPU    = 0.5
MEMORY = 128
OS     = [
  kernel   = "/boot/vmlinuz-2.6.18-4-xen-amd64",
  initrd   = "/boot/initrd.img-2.6.18-4-xen-amd64",
  root     = "sda1" ]
DISK   = [
  source   = "/opt/nebula/images/disk.img",
  target   = "sda1",
  readonly = "no" ]
DISK   = [
  type     = "swap",
  size     = 1024,
  target   = "sdb" ]
NIC    = [ NETWORK = "Public VLAN" ]

After making sure that the VM fits into at least one of the two hosts, let’s submit it:

$ onevm submit VM.template

The command returns an ID that we can use to identify the VM for monitoring and control, through the onevm command:

$ onevm list
 ID      USER   NAME  STAT  CPU    MEM  HOSTNAME       TIME
  0  oneadmin  one-0  runn    0  65536    host01  00 0:00:02

The STAT field tells the state of the VM. If it is in the runn state, the virtual machine is up and running. Depending on how we set up the image, we may know its IP address; if that is the case, we can try to log into the VM.

To perform a migration we again use the onevm command. For instance, to live-migrate the VM (VID=0) to host02 (HID=1), you can use:

$ onevm livemigrate 0 1
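Further life-cycle operations use the same command; a short sketch (the subcommand names are from the onevm CLI):

$ onevm suspend 0    # save the VM state and pause it
$ onevm resume 0     # resume the suspended VM
$ onevm shutdown 0   # shut the VM down gracefully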

How to Manage Virtual Networks

With OpenNebula you can create Virtual Networks by mapping them on top of the physical ones.

There are two types of Virtual Network in OpenNebula

  • Fixed  - defines a fixed set of IP-MAC pair addresses
  • Ranged - defines a network range from a base network address and a size

To define a Fixed VN you need the following information:

NAME: Name of the Virtual Network.
TYPE: Fixed, in this case.
BRIDGE: Name of the physical bridge in the physical host where the VM should connect its network interface.
LEASES: Definition of the IP-MAC pairs. If an IP is defined without an associated MAC, OpenNebula will generate the MAC using the rule MAC = MAC_PREFIX:IP. So, for example, from IP 10.0.0.1 and MAC_PREFIX 00:16, we get 00:16:0a:00:00:01. Defining only a MAC address with no associated IP is not allowed.

Here’s how to create a Fixed Virtual Network called Public with the set of public IPs to be used by the VMs. Just create a file with the following contents:

NAME = "Public"
TYPE = FIXED

# We have to bind this network to "virbr1" for Internet access
BRIDGE = vbr1

LEASES = [IP=130.10.0.1, MAC=50:20:20:20:20:20]
LEASES = [IP=130.10.0.2, MAC=50:20:20:20:20:21]
LEASES = [IP=130.10.0.3]
LEASES = [IP=130.10.0.4]

Now for a Ranged Virtual Network. A Ranged VN is defined by a base network address and a size, so you need the following information:

NAME: Name of the Virtual Network.
TYPE: Ranged, in this case.
BRIDGE: Name of the physical bridge.
NETWORK_ADDRESS: Base network address to generate IP addresses.
NETWORK_SIZE: Number of hosts that can be connected using this network. It can be defined either using a number or a network class (B or C).

Here’s an example of a Ranged Virtual Network template:

NAME = "Red LAN"
TYPE = RANGED

#Now we’ll use the cluster private network (physical)
BRIDGE = vbr0

NETWORK_SIZE    = C
NETWORK_ADDRESS = 192.168.0.0

After defining a template for VN, the onevnet command can be used to create it.

To create the previous networks, put their definitions in two different files, public.net and red.net respectively, and run:

$ onevnet -v create public.net
$ onevnet -v create red.net

onevnet can be used to query OpenNebula about available VNs

$ onevnet list
NID  USER      NAME     TYPE    BRIDGE  #LEASES
  2  oneadmin  Public   Fixed   vbr1          0
  3  oneadmin  Red LAN  Ranged  vbr0          0
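To have a VM take a lease from one of these networks, reference the network by name in the NIC attribute of its template, just as the earlier VM template did with "Public VLAN"; a minimal sketch:

NIC = [ NETWORK = "Public" ]
NIC = [ NETWORK = "Red LAN" ]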
