Red Hat Cloud Foundations
Reference Architecture
Edition One: Automating Private IaaS Clouds on Blades
Version 1.0
September 2010
2.3 Deployment Models
2.3.1 Private Cloud
The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
host    all    all    127.0.0.1/32    trust
host    all    all    10.0.0.1/8      md5
# IPv6 local connections:
host
wget $JON_LICENSE_URL -O /home/$JON_USER/$JON_ROOT/jbossas/server/default/deploy/rhq.ear.rej/license/license.xml
echo " * Starting JON for the first time"
key --skip
user --name=admin --password=$1$1o751Xnc$kmQKHj6gtZ50IILNkHkkF0 --iscrypted
%packages
@ Base
pexpect
ntp
SDL
ruby
%post
(
# MOTD
echo >> /etc/motd
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -
rm -rf /var/lib/pgsql/data
su - postgres -c "initdb -D /var/lib/pgsql/data"
# update software
yum -y update
# prepare actions to be performed
chmod +x /tmp/add_cumin_user.py
/tmp/add_cumin_user.py admin 24^gold
# Start Cumin
echo "--- Starting cumin ---"
chkconfig cumin on
service cumin start
In the Home tab, click the Add Management Data Source button. Supply a Name and the Address of the MRG VM, then select the Submit button.
6.4.3 RHEV-M
) >> /var/log/rc.local2.out 2>&1
The instMgmt2.sh script:
• adds a cobbler system entry for the system
• sets the boot order so PXE is first
• powers on the system
• configures the previously generated ssh key onto the system
• prepares the mount of the GFS2 file system
• uses the other cluster member as an NTP peer
• deploys
A minimal sketch of the cobbler/iLO provisioning portion of these steps follows.
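The sketch below is an illustration only: it reuses the cobbler and ilocommand invocation forms shown elsewhere in this paper, the variables are assumed to come from varDefs.sh, and the power-on verb (start /system1) is an assumption based on standard iLO SMASH CLP syntax rather than an excerpt from the original script.

# Sketch: register the node with cobbler, force a PXE boot, then power it on
cobbler system add --name=${IPname} --profile=${RHELH_PROFILE} \
    --mac=${rawMAC//-/:} --ip=${IP}
cobbler sync
# retry until the iLO reports the boot-order change succeeded (status=0)
while [[ ! `ilocommand -i //${LOGIN}:${ILO_PW}@${iloIP} set /system1/bootconfig1/bootsource5 bootorder=5 | grep status=0` ]]
do
    sleep 2
done
# power the blade on through its iLO (assumed CLP verb)
ilocommand -i //${LOGIN}:${ILO_PW}@${iloIP} start /system1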
6.6 Create First Hosts
When the satellite VM boots for the third time, it creates the first of each of the RHEL/KVM and RHEV hypervisor hosts. This th
2.3.2 Public Cloud
The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
# This script will install and prepare a system to be a RHEL host
# Source env vars
if [[ -x varDefs.sh ]] ; then
    source varDefs.sh
elif [[ -x /root/var
echo -e "\nCreating cobbler system entry ...\n"
cobbler system add --name=${IPname} --profile=${RHELH_PROFILE} --mac=${rawMAC//-/:} --ip=${IP
while [[ ! `ilocommand -i //${LOGIN}:${ILO_PW}@${iloIP} set /system1/bootconfig1/bootsource5 bootorder=5 | grep status=0` ]]; do sleep 2; done
a) In
    indx=`expr $indx + 1`
    tIP=`printf "%s.%d" ${IP_DOMAIN} ${indx}`
    host ${tIP} > /dev/null 2>/dev/null
done
echo "${fhost} ${t
echo "Aliases for this host already exist!"
else
    vcmcommand --vcmurl //${LOGIN}:${VCM_PW}@${VCM_IP} show profile ${PNAME} | grep -A 6 "
indx=1
tname=`printf "rhev-nfs-client-%02d" ${indx}`
while [[ `echo ${nfsClients} | grep ${tname}` ]]
do
    indx=`expr $indx + 1`
    tn
• adds the new NFS export stanza to the cluster configuration file
• sets the boot order to boot PXE first
• registers the host with satellite after installation
IPnum=`/root/resources/GetAvailRhevh.sh | awk '{print $2}'`
# Delete any previously defined cobbler system entry
if [[ `cobbler system list |
# Wait for system to register with satellite indicating installation completion
echo -e "\nWaiting for system to register with satellite ...\n"
# Assumes Satellite uses the lowest IP address
IP_DOMAIN=${SAT_IP%.*}
indx=1
tIP=`printf "%s.%d" ${IP_DOMAIN} ${indx}`
host ${tIP} > /dev
2.3.3 Hybrid Cloud
The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.
timezone America/New_York
auth --enablemd5 --enableshadow
rootpw --iscrypted $1$1o751Xnc$kmQKHj6gtZ50IILNkHkkF0
selinux --permissive
reboot
firewall -
# put standard rc.local back into place
/bin/mv /etc/rc.d/rc.local.shipped /etc/rc.d/rc.local
# reboot to get networks going (restart of network would
Begin the install by opening a Command Prompt window, changing to the C:\saved directory, and issuing the command to install.
.\RHEVM_47069.exe -s -f1c
Xeon Core i7" -CompatibilityVersion $clusversions[$clusversions.length-1]
# change cluster policy
write "Changing Cluster Policy ..."
$
} while ( $timeout -and $stat -ne "Up" )
if ( $timeout -eq 0) { throw 'DATACENTERTIMEOUT' }
# Approve any rhev hosts that are prese
6.7.3 Upload ISO Images
With RHEV-M now operational, upload the guest tools ISO image and the virtio drivers virtual floppy disk. Start -> All Prog
7 Dynamic Addition and Removal of Hosts
As workloads ramp up and the demand for CPU cycles increases, additional hosts may be added to bear the burden
vcmcommand --vcmurl //${LOGIN}:${VCM_PW}@${VCM_IP} show server | grep ${pname} > /dev/null 2>/dev/null
if [[ $? -ne 0 ]]
then
    echo "HP Virtu
# release semaphore
/bin/rm /tmp/UpdateNFSClient
# Get the count of systems registered with this name (should be 0)
initReg=`/root/resources/listRegSyste
echo "Didn't find a varDefs.sh file!"
fi
# The blade profile must be passed
if [[ $# -ne 1 ]]
then
    echo 'Usage - $0 <HP Virtual
3 Red Hat and Cloud Computing
3.1 Evolution, not Revolution – A Phased Approach to Cloud Computing
While cloud computing requires virtualization as a
/root/resources/prep_stor_host.sh ${pname}
# Update cluster configuration for NFS presentation
# create a semaphore for unique client names
while [[ -e /
7.3 Host Removal
Subsequently, remHost.sh is used to remove a host of either type from the RHEV-M configuration.
./remHost.sh rhevh-02
remHost.sh
#!/bin
nfsClient=`riccicmd -H ${MGMT1_IP} cluster configuration | grep ${host} | cut -d\" -f4`
if [[ $? -ne 0 ]]
then
    echo "Cluster resource not fou
    echo "Usage: $0 <host for removal>\n"
    exit -1
else
    host=$1
fi
# Confirm that the parameter passed is an existing host
if [[ ! `sac
8 Creating VMs
This section includes the kickstart and other scripts used in the creation of these RHEV VMs:
• Basic RHEL
• RHEL with Java Application
•
10. In the console, select the desired option when the PXE menu appears
11. The VM installs and registers with the local RHN Satellite server
8.1 RHEL
The
selinux --permissive
reboot
firewall --enabled
skipx
key --skip
%packages
@ Base
%post
(
/usr/bin/yum -y update
) >> /root/ks-post.log 2>&1
8.2 RHEL with Java Application
%post
(
# execute any queued actions on RHN to sync with the activation key
rhn_check -vv
/usr/bin/yum -y update
) >> /root/ks-post.log 2>&1
8
%post
(
# set required firewall ports
/bin/cp /etc/sysconfig/iptables /tmp/iptables
/usr/bin/head -n -2 /tmp/iptables > /etc/sysconfig/iptables
/bin/ec
/etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 4447 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptable
The following figure illustrates a phased approach to technology adoption starting with server consolidation using virtualization, then automating lar
cd /root/rhq-agent/bin
\mv rhq-agent-env.sh rhq-agent-env.sh.orig
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/rhq-agent-env.sh
# deploy
text
network --bootproto dhcp
lang en_US
keyboard us
url --url http://sat-vm.cloud.lab.eng.bos.redhat.com/ks/dist/ks-rhel-x86_64-server-5-u5
zerombr
clearpa
# Get configuration files
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/sesame.conf -O /etc/sesame/sesame.conf
wget http://sat-vm.cloud.l
3. Shut down the VM
4. In the RHEV-M Virtual Machines tab, select the VM that was just shut down and click the Make Template button, specifying a Name
if ($clus.IsInitialized -eq $true) {
    $my_clusId = $id
} else {
    write-host "Cluster of Template is not initialized!"
    exi
8.5 RHEL MRG Grid Rendering
A VM to execute the MRG Grid rendering application is created using the following kickstart to:
• install RHEL 5.5 with the
/bin/cp /etc/sysconfig/iptables /tmp/iptables
/usr/bin/head -n -2 /tmp/iptables > /etc/sysconfig/iptables
cat <<EOF>>/etc/sysconfig/ipta
0.1.noarch.rpm http://download1.rpmfusion.org/nonfree/el/updates/testing/5/i386/rpmfusion-nonfree-release-5-0.1.noarch.rpm
wget http://irish.lab.bos.re
9 References
1. The NIST Definition of Cloud Computing, Version 15, 07 October 2009
   http://csrc.nist.gov/groups/SNS/cloud-computing/cloud-def-v15.doc
2. Abo
Appendix A: Configuration Files, Images, Etc.
This appendix contains various configuration files, images, and tar/zip files used in the construction of the
3.2 Unlocking the Value of the Cloud
Red Hat's approach does not lock an enterprise into one vendor's cloud stack, but instead offers a rich
hardware_handler    "0"
path_selector       "round-robin 0"
prio_callout
<resources> <ip address="10.16.136.20" monitor_link="1"/>
A.2 Satellite
A.2.1 answers.txt
# Administrator's email address. Required.
# Multiple email addresses can be used, separated with commas.
#
# E
# Example:
# ssl-set-org-unit = Information Systems Department
ssl-set-org-unit = Reference Architecture
# Location information for the SSL certificat
# ssl-config-sslvhost =
ssl-config-sslvhost = Y
# *** Options below this line usually don't need to be set. ***
# The Satellite server's hos
A.2.2 AppPGP
When generating the application channel security key for satellite, the input to gpg is specified in this file.
%echo Generating a standar
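This batch file is then passed to gpg in unattended mode. The invocation below is a sketch rather than an excerpt from the paper's scripts: the batch file path and the exported key name are illustrative assumptions.

# Sketch: generate the application channel key non-interactively from the
# batch definitions above, then export the public key for publication
gpg --batch --gen-key /root/resources/AppPGP
# export the public portion (key name is an assumed value) so it can be
# published from the satellite's /var/www/html/pub directory
gpg --armor --export "Reference Architecture" > APP-CHANNEL-GPG-KEY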
3     PTR cloud-138-3.cloud.lab.eng.bos.redhat.com.
[ . . . ]
253   PTR cloud-138-253.cloud.lab.eng.bos.redhat.com.
254   PTR cloud-138-254.cloud.lab.eng.bos.r
}
host ra-c7000-01-db4-ilo {
    option host-name "ra-c7000-01-db4-ilo.cloud.lab.eng.bos.redhat.com";
    hardwar
}
host ra-c7000-01-db14-ilo {
    option host-name "ra-c7000-01-db14-ilo.cloud.lab.eng.bos.redhat.com";
    hardw
search cloud.lab.eng.bos.redhat.com bos.redhat.com
nameserver 10.16.136.10
nameserver 10.16.136.1
nameserver 10.16.255.2
A.2.7 settings
Cobbler's
3.3 Redefining the Cloud
Cloud computing is the first major market wave where open source technologies are built in from the beginning, powering the v
# allow access to the filesystem as Cheetah templates are evaluated
# by cobblerd as code.
cheetah_import_whitelist:
 - "random"
 - "
# controls whether cobbler will add each new profile entry to the default
# PXE boot menu. This can be over-ridden on a per-profile
# basis when addi
ro: ~
ip: off
vnc: ~
# configuration options if using the authn_ldap module. See
# the Wiki for details. This can be ignored if you a
# if using cobbler with manage_dhcp, put the IP address
# of the cobbler server here so that PXE booting guests can find it
# if you do not set this c
# Are you using a Red Hat management platform in addition to Cobbler?
# Cobbler can help you register to it. Choose one of the following:
# "o
restart_dhcp: 1
# if set to 1, allows /usr/bin/cobbler-register (part of the koan package)
# to be used to remotely add new cobbler system records to
tftpd_conf: /etc/xinetd.d/tftp
# cobbler's web directory. Don't change this setting -- see the
# Wiki on "relocating your cobbler inst
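After any change to /etc/cobbler/settings, the daemon must be restarted and the configuration re-synchronized before the change takes effect. The cycle below is a minimal sketch using standard cobbler commands, not an excerpt from the paper's scripts:

# Sketch: apply edits made to /etc/cobbler/settings
service cobblerd restart
cobbler sync      # regenerate PXE menus and managed DHCP/DNS files
cobbler check     # optional: report common configuration problems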
b) Select Install now
c) Choose OS Version (e.g., Windows Server 2008 R2 Enterprise (Full Installation))
d) Accept License terms
e) Choose Custom ins
<mac address='52:54:00:c0:de:01'/> <source bridge='cloud0'/> <model type='virtio'/>
CCB_ADDRESS = $(PUBLIC_HOST):$(PUBLIC_PORT)
COLLECTOR.CCB_ADDRESS =
# Avoid needing CCB within the VPN
PRIVATE_NETWORK_NAME = mrg-vm
# Set TCP_FORWARD
Today each IaaS cloud presents a unique API to which developers and ISVs need to write in order to consume the cloud service. The Deltacloud effort is
CONDOR_DEVELOPERS = NONE
CONDOR_HOST = mrg-vm.cloud.lab.eng.bos.redhat.com
COLLECTOR_HOST = $(CONDOR_HOST)
COLLECTOR_NAME = Grid On a Cloud
FILESYSTEM
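With this configuration in place on the MRG VM and its execute nodes, pool membership can be spot-checked from the CONDOR_HOST. The commands below are standard Condor utilities and are shown only as a verification sketch:

# Sketch: verify the collector setting and that execute nodes joined the pool
condor_config_val COLLECTOR_HOST   # should print mrg-vm.cloud.lab.eng.bos.redhat.com
condor_status                      # lists the slots advertised by the execute nodes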
# Plugin configuration
MASTER.PLUGINS = $(LIB)/plugins/MgmtMasterPlugin-plugin.so
QMF_BROKER_HOST = mrg-vm.cloud.lab.eng.bos.redhat.com
A.4.4 cumin.c
# METHOD can be "trust", "reject", "md5", "crypt", "password",
# "krb5", "ident"
A.4.7 Blender
Blender is an open source 3D content creation suite and is used in the movie rendering example in this paper. Version 2.48a was used for com
                      (BLEND_FILE, FRAME_DIR, frame))
    jobFile.write("Log = %s/frame%d.log\n" % (LOGS_HOME, frame))
    jobFile.write("O
sleep 5
echo Submitting DAG ...
condor_submit_dag /home/admin/render/production/jobs/render.dag
hello.sh – a simple script which writes the date of the start
• jon-plugin-pack-soa-2.4.0.GA.zip – plugin supporting the SOA platform server
• rhq-enterprise-agent-3.0.0.GA.jar – contains the JON RHQ agent compris
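The agent jar is self-installing. The sketch below shows one way to unpack it into the location used later in this appendix; the --install option is part of the standard RHQ agent update binary, but the target directory and its exact handling here are assumptions:

# Sketch: install the JON (RHQ) agent from the enterprise agent jar
cd /root
java -jar rhq-enterprise-agent-3.0.0.GA.jar --install=/root   # assumed to yield /root/rhq-agent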
# is used. If this and RHQ_AGENT_JAVA_HOME are not set, the
# agent's embedded JRE will be used.
#
#RHQ_AGENT_JAVA_EXE_FILE_PATH="/usr/lo
# RHQ_AGENT_IN_BACKGROUND - If this is defined, the RHQ Agent JVM will
#     be launched in the background (thus causing this script to exit immediat
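For the unattended VMs in this paper the agent is left running detached. A sketch of enabling that behavior and starting the agent; the value assigned to RHQ_AGENT_IN_BACKGROUND is illustrative (per the comment above, the variable only needs to be defined), and the wrapper script invocation is an assumption based on the standard agent layout:

# Sketch: define RHQ_AGENT_IN_BACKGROUND and start the agent detached
cd /root/rhq-agent/bin
echo 'RHQ_AGENT_IN_BACKGROUND="rhq-agent.out"' >> rhq-agent-env.sh   # value is illustrative
./rhq-agent-wrapper.sh start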
#Script Generated by user
#Generated on: Tue May 11 15:37:56 2010
#Set Enclosure Time
SET TIMEZONE EST5EDT
#Set Enclosure Information
SET ENCLOSURE ASSET T
4 Red Hat Cloud: Software Stack and Infrastructure Components
Figure 8 depicts the software stack of Red Hat Cloud Foundation components.
Figure 8: Red
SET SERVER POWERDELAY 7A 0
SET SERVER POWERDELAY 8A 0
SET SERVER POWERDELAY 9A 0
SET SERVER POWERDELAY 10A 0
SET SERVER POWERDELAY 11A 0
SET SERVER POWERDE
#Set SNMP Information
SET SNMP CONTACT ""
SET SNMP LOCATION ""
SET SNMP COMMUNITY READ "public"
SET SNMP COMMUNITY WRITE "
SET EBIPA SERVER NONE 5B
SET EBIPA SERVER NONE 6B
SET EBIPA SERVER NONE 7B
SET EBIPA SERVER NONE 8B
SET EBIPA SERVER NONE 9B
SET EBIPA SERVER NONE 10B
SET E
ASSIGN OA "mlamouri"
ENABLE USER "mlamouri"
ADD USER "spr"
SET USER CONTACT "spr" ""
SET USER FULLNAME "
# If your connection is dropped this script may not execute to conclusion.
#
SET OA NAME 2 ra-c7000-01-oa2
SET IPCONFIG STATIC 2 10.16.136.254 255.
----------------------------------------------------------------------------
MAC Address Type : VC-Defined
Pool ID          : 10
Address Start    : 00-
**************************************************************************** ENET-VLAN INFORMATION ***************************************************
======================================================================
ID   Enclosure   Bay   Type   Firmware Version   Status
======================================================================
**************************************************************************** MAC-CACHE INFORMATION **************************************************
BL460c G6 ------------------------------------------------------------------------- enc
4.1 Red Hat Enterprise Linux
Red Hat Enterprise Linux (RHEL) is the world's leading open source application platform. On one certified platform,
--------------------------------------------------------------------------
2 (Flex   enc0:1    2   --                    --   enc0:2:d1    --   --   N
--------------------------------------------------------------------------
1   enc0:10   3   QLogic QMH2562 8Gb   enc0:3:d10   test1
--------------------------------------------------------------------------
1   enc0:8    3   QLogic QMH2562 8Gb   enc0:3:d8    --   --
--------------------------------------------------------------------------
2   enc0:5    4   QLogic QMH2562 8Gb   enc0:4:d5    --   --
**************************************************************************** SSL-CERTIFICATE INFORMATION *********************************************
enc0:1:X6   ra-c7000-01   Not Linked   absent   Auto   --   --
--------------------------------------------------------------------------
e
**************************************************************************** UPLINKSET INFORMATION ***************************************************
Appendix B: Bugzillas
The following Red Hat bugzilla reports were open issues at the time of this exercise.
1. BZ 518531 - Need "disable_tpa
Red Hat Cloud Foundations Reference Architecture
Edition One: Automating Private IaaS Clouds on Blades
1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone
4.2 Red Hat Enterprise Virtualization (RHEV) for Servers
Red Hat Enterprise Virtualization (RHEV) for Servers is an end-to-end virtualization solutio
4.3 Red Hat Network (RHN) Satellite
All RHN functionality is on the network, allowing much greater functionality and customization. The Satellite serv
access options.
4.4 JBoss Enterprise Middleware
The following JBoss Enterprise Middleware Development Tools, Deployment Platforms and Management Envir
Management:
• JBoss Operations Network (JON): An advanced management platform for inventorying, administering, monitoring, and updating JBoss Enterpris
• manage, monitor and tune applications for improved visibility, performance and availability. One central console provides an integrated view and con
5 Reference Architecture System Configuration
To deploy the Red Hat infrastructure for a private cloud, this reference architecture used the configu
5.1 Server Configuration
Hardware                   Systems                        Specifications
Management Cluster Nodes   [2 x HP ProLiant BL460c G6]    Quad Socket, Quad Core (16 cores), Intel® Xeo
5.2 Software Configuration
Software                                            Version
Red Hat Enterprise Linux (RHEL)                     5.5 (2.6.18-194.11.3.el5 kernel)
Red Hat Enterprise Virtualization Manager (RH
5.3 Blade and Virtual Connect Configuration
All the blades use logical serial numbers, MAC addresses, and FC WWNs. A single 10Gb network and two
LUNs were created and presented as outlined in the following table.
Volume         Size    Presentation          Purpose
MgmtServices   1 TB    Management Cluster    Volume Grou
Table of Contents
1 Executive Summary .......................................... 6
2 Cloud Computing: Def
6 Deploying Cloud Infrastructure Services
This section provides the detailed actions performed to configure Red Hat products that constitute the infra
• NFS service
10. Provision MGMT-2 node using temporary node from Satellite
11. Create file system management services in MGMT-2
• NFS service based on e
MGMT1_ICIP=192.168.136.10
MGMT1_FC=mgmt_node1
MGMT1_MAC=00:17:A4:77:24:00
MGMT1_NAME=mgmt1.cloud.lab.eng.bos.redhat.com
MGMT1_PW=24^gold
MGMT2_ILO=10.16.13
3. A set of Python-based XML-RPC scripts was developed for remote communication with the ricci cluster daemon and can also be obtained from http://pe
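These wrappers are what the host-management scripts later in this paper call; for example, the riccicmd form used in Section 7 pulls the live cluster configuration from the first management node. The snippet below is only a usage sketch (the output file name is illustrative):

# Sketch: fetch the current cluster.conf from the ricci daemon on MGMT-1
riccicmd -H ${MGMT1_IP} cluster configuration > /tmp/cluster.conf.current
grep '<rm>' /tmp/cluster.conf.current   # confirm the resource manager section is present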
ks.cfg
install
cdrom
key <Installation Number>
lang en_US.UTF-8
keyboard us
#xconfig --startxonboot
skipx
network --device eth0 --bootproto static
@editors
@graphical-internet
@graphics
@java
@kvm
@legacy-software-support
@text-internet
@base-x
kexec-tools
iscsi-initiator-utils
bridge-utils
fipsc
#
# copy the entire content onto the created machine
# mkdir /mnt/sysimage/root/distro
# (cd /mnt/source; tar -cf - . ) | (cd /mnt/sysimage/root/dist
cat /root/distro/resources/temp.rc.local.add >> /etc/rc.d/rc.local
# update to latest software
yum -y update
) 2>&1 | tee /root/ks_post2.o
/bin/mount -o ro /dev/scd0 /root/distro
# source env vars
if [[ -x varDefs.sh ]] ; then
    source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
    so
label ks
  kernel vmlinuz
  append ks=cdrom:/ks.cfg initrd=initrd.img console=ttyS0,115200 nostorage
label local
  localboot 1
label memtest86
  kernel
5 Reference Architecture System Configuration ................ 25
5.1 Server Configuration .....
ilocommand --ilourl //${LOGIN}:${MGMT2_ILO_PW}@${MGMT2_ILO} set /map1/oemhp_vm1/cddr1 oemhp_image=http://irish.lab.bos.redhat.com/pub/projects/cloud/r
VCMNUM=`echo $VCMFILE | wc -w`
ILONUM=`echo $ILOFILE | wc -w`
OANUM=`echo $OAFILE | wc -w`
# install storage array command tool
if [[ $SANUM -eq 0 ]]
then
• creates and presents volumes from storage array
• configures system to access presented volumes
• configures LVM group to be used with management serv
sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} unmap volume MgmtServices
sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} unmap volume GFS2
sa
if [[ $NUM -gt 0 ]]
then
    echo "Aliases for this host already exist!"
else
    cd /sys/class/fc_host
    for f in host*
    do
# get the data from the array; assumes all virtual disks have VD in the name
sacommand --saurl //${MSA_USER}:${MSA_PW}@${array} "show volumes" |
# source env vars
if [[ -x varDefs.sh ]] ; then
    source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
    source /root/varDefs.sh
elif [[ -x /root/r
client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
key = client.auth.login(SATELLITE_LOGIN, SATELLITE_PASSWORD)
# retrieve all the
10.16.143.254 --nameserver 10.16.136.1,10.16.255.2 --hostname sat-vm.cloud.lab.eng.bos.redhat.com
rootpw --iscrypted $1$1o751Xnc$kmQKHj6gtZ50IILNkHkkF
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
server 10.16.136.10
server 10.1
8.2 RHEL with Java Application ............................... 136
8.3 RHEL with JBoss .....
# configure DNS
/bin/cp /root/resources/db.* /var/named/
/bin/cp /root/resources/named.conf /etc/
chkconfig named on
# cobbler preparation
/usr/sbin/s
echo "-" >> /var/log/rc.local.out 2>&1
echo "- begin sat install" >> /var/log/rc.local.out 2>&1
echo "
(
  nice -n -15 satellite-sync --iss-parent=irish.lab.bos.redhat.com --ca-cert=/pub/RHN-ORG-TRUSTED-SSL-CERT;
  nice -n -15 satellite-sync --iss-parent=ir
yum -y update
rhn-satellite restart
4. configCobbler.sh - configure cobbler using this script that:
• performs recommended SELinux changes
• updates s
cat <<'EOF'>>/etc/cobbler/named.template
#for $zone in $forward_zones
zone "${zone}." {
    type master;
    file "$zone
satellite-sync --step=channels --channel=rhn-tools-rhel-x86_64-server-5
satellite-sync --step=channels --channel=rhel-x86_64-server-cluster-storage-5
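Before (or while) running these per-channel synchronizations, the set of channels exported by the ISS parent can be confirmed; this uses a standard satellite-sync option rather than an excerpt from the paper's scripts:

# Sketch: list the channels the ISS parent makes available
satellite-sync --list-channels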
6.2.4 Post Satellite Installation Actions
After the satellite has completed installation, the following actions must be performed:
1. User interaction
3. Call post_satellite_build_up.sh, recording the output.
/root/resources/post_satellite_build_up.sh 2>&1 | \
    tee /tmp/post_sat.out
The post_sa
echo "-"
date
/root/resources/prep_MgmtVMs.sh
# Prep and create RHEL/KVM hosts
echo "-"
echo "Prepping first RHEL host"
e
# create the tenant org
tenantOrg = client.org.create(key, "tenant", "tenant", "24^gold", "Mr.", "Shadow
1 Executive Summary
Red Hat's suite of open source software provides a rich infrastructure for cloud providers to build public/private cloud offe
# Double loop through the orgs setting trusts
o1 = 0
while o1 < len(Orgs) - 1:
    o2 = o1 + 1
    while o2 < len(Orgs):
        try:
# open channel
client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
# log into infrastructure org
key = client.auth.login(INFRA_LOGIN, INFRA_PASSWD)
# cre
INFRA_LOGIN = "infra"
INFRA_PASSWD = "24^gold"
INFRA_ENTITLE = [ 'monitoring_entitled', 'provisioning_entitled'
tenant_ak = client.activationkey.create(key, 'tenantMRGGridExec', 'Key for MRG Grid Exec Nodes', TENANT_PARENT, TENANT_ENTITLE, False
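The key created here is what a tenant guest later presents when it registers during kickstart. Below is a hedged sketch of that client-side step; rhnreg_ks is the standard registration tool, and the key string placeholder stands for the value returned by activationkey.create (which carries an organization prefix):

# Sketch: register a newly kickstarted tenant VM against the satellite
rhnreg_ks --serverUrl=http://sat-vm.cloud.lab.eng.bos.redhat.com/XMLRPC \
    --activationkey=<key string returned by activationkey.create> --force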
--ksmeta="NAME=$MGMT1_NAME NETIP=$MGMT1_IP ICIP=$MGMT1_ICIP ICNAME=$MGMT1_IC" --kopts="console=ttyS0,115200 nostorage"
cobbler sync
# Add cobbler profile and system entries for the MRG VM
cobbler profile add --name=${MRGGRID_PROFILE} --distro=${MGMT_DISTRO} --kickstart=/root/resourc
"""#open channelclient = xmlrpclib.Server(SATELLITE_URL, verbose=0)#log into infrastructure orgkey = client.auth.login(INFRA_LOGIN, INF
filePtr = open(fileName, 'r')
ksName = os.path.basename(fileName).split('.')[0]
if len(ksName) < 6:
    today = d
# open channel
client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
# log into infrastructure org
key = client.auth.login(INFRA_LOGIN, INFRA
then
    source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]]
then
    source /root/distro/resources/varDefs.sh
else
    echo "
2 Cloud Computing: Definitions
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
{RHEVM_IP} netconsole=${RHEVM_IP} rootpw=${passwd} ssh_pwauth=1 local_boot"
# Update cobbler system files
cobbler sync
h) prep_tenantKS.sh perform
if [[ -x varDefs.sh ]] ; then
    source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
    source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.s
# This script will generate a GPG key to use for the App Channel,
# resign the javaApp package, and make the pub key available.
# -- create profile w
match = signproc.expect([pexpect.EOF])
• createAppSatChannel.py – create Application custom channel in Satellite
#!/usr/bin/python
"""
T
SATELLITE_URL = "http://sat-vm.cloud.lab.eng.bos.redhat.com/rpc/api"
TENANT_LOGIN = "tenant"
TENANT_PASSWD = "24^gold"
T
SATELLITE_URL = "http://sat-vm.cloud.lab.eng.bos.redhat.com/rpc/api"
INFRA_LOGIN = "tenant"
INFRA_PASSWD = "24^gold"
KSTREE
TENANT_LOGIN = "tenant"
TENANT_PASSWD = "24^gold"
def main():
    if len(sys.argv) < 3:
        print "Usage: ", sys.argv[
if __name__ == "__main__":
    sys.exit(main())
v) addGPGKey_tenant.py - loads the GPG key into satellite and associates it with a stated kic
6.3 Provision First Management Node
The invocation of post_satellite_build_up.sh, among other actions, starts the installation of the first management
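Progress of that first installation can be followed from the satellite VM itself; the commands below are standard cobbler facilities, shown only as a monitoring sketch:

# Sketch: watch the PXE/kickstart progress of the node being provisioned
cobbler status                         # per-system kickstart start/finish state
tail -f /var/log/cobbler/cobbler.log   # low-level cobblerd activity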
device scsi cciss
zerombr
clearpart --all --initlabel
part /boot --fstype=ext3 --size=200
part pv.01 --size=1000 --grow
part swap --size=10000 --maxs
2.2 Service Models
2.2.1 Cloud Infrastructure as a Service (IaaS)
The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.
fipscheck
imake
kexec-tools
libsane-hpaio
mesa-libGLU-devel
ntp
perl-XML-SAX
perl-XML-NamespaceSupport
python-imaging
python-dmidecode
pexpect
sg3_uti
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i v
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-
setsebool -P named_write_master_zones on
chmod 770 /var/named
chkconfig named on
# Set NFS daemon ports
cat <<EOF>>/etc/sysconfig/nfs
QUOT
if [[ -x varDefs.sh ]] ; then
    source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
    source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]]
# Deploy preconfigured cluster configuration file and set SELinux label
/bin/mv /root/cluster.conf /etc/cluster/cluster.conf
/sbin/restorecon /etc
# Add both interfaces of each node to the known_hosts of the other
ssh ${MGMT1_NAME} ssh ${MGMT2_NAME} date
ssh ${MGMT1_NAME%.${FQD}} ssh ${MGMT2_NAME%
EOF
# Mount the shared GFS2 storage on both nodes and make dir for VM config files
/usr/bin/ssh ${MGMT1_IP} /bin/mount -t gfs2 /dev/mapper/GF
# Acquire short hostname
SHORTHOST=`hostname --short`
# Modifying hostname to match name used to define host HBAs at MSA storage array
NODE=`echo $SHORT
elif [[ -x /root/resources/varDefs.sh ]] ; then
    echo /root/resources/varDefs.sh
    source /root/resources/varDefs.sh
elif [[ -x /root/distro/resource
2.2.4 Examples of Cloud Service Models 9
ii) buildMpathAliases.sh – refer to section 6.2.1
c) createMgmtVMs.sh – This script:
• allocates LVM volumes for each Management VM
• downloads and
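Before the excerpts that follow, a minimal sketch of the per-VM flow this script implements may help. Volume size, VM resources, and the os-variant are illustrative assumptions; the volume group (MgmtServices), the bridge (cloud0), and the MAC address match values shown elsewhere in this paper:

# Sketch: carve a logical volume for one management VM and PXE-install it
# against its cobbler profile (names and sizes are assumptions)
lvcreate -L 40G -n jon-vm MgmtServices
virt-install --name jon-vm --ram 4096 --vcpus 2 \
    --disk path=/dev/MgmtServices/jon-vm \
    --network bridge:cloud0 --mac 52:54:00:c0:de:01 \
    --pxe --os-variant rhel5.4 --noautoconsole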
# jon-vm
nlines=`wc -l /etc/libvirt/qemu/jon-vm.xml | awk '{print \$1}'`
hlines=`grep -n "</disk>" /etc/libvirt/qemu/jon-vm.
• waits for the RHEVM to be configured
• adds the cluster monitor to the RHEVM VM entry in the cluster configuration file
• removes the crontab entry
#!/
else
    echo "rhev-check.sh not found!"
fi
6.4 Creating Management Virtual Machines
While the satellite management VM was created early in t
%packages
@ Base
postgresql84
postgresql84-server
java-1.6.0-openjdk.x86_64
%post
(
# MOTD
echo >> /etc/motd
echo "RHN Satellite kickstart
-A RH-Firewall-1-INPUT -p udp --dport 1161 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 1162 -m state --state NEW -j ACCEPT
-A
    source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
    source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
    source /root/res
if [ -z "$SVC_SCRIPT" ]; then
    echo " - No previous installations found."
    return
fi
echo " - Found JON/JOPR/RHQ service scri
        JON_USER="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`"
        ;;
    --jon-rootdir=*)
        JON_ROOT="`echo $i | sed 's/[-a-
fi
# if specified JON user is not present, we must create it
/bin/egrep -i "^$JON_USER" /etc/passwd > /dev/null
if [ $? != 0 ]; then
    ech