OpenStack Baremetal
This guide uses a fresh install of Fedora 20 to install OpenStack Havana.
To avoid known issues, start with a complete update of your system:
sudo yum update -y
Reboot the system to apply any pending updates.
If the known log-permission issue has not yet been fixed in your release, add write permissions to the log directory:
sudo chmod a+w /var/log/
Make sure sshd is started and enabled:
sudo systemctl start sshd.service
sudo systemctl enable sshd.service
RDO
The following instructions install RDO, Red Hat's Distribution of OpenStack. Source: http://openstack.redhat.com/Quickstart
Step 0: Prerequisites
A RHEL-based Linux distribution; we will be using Fedora 20
Minimum 2 GB of RAM
Minimum 1 network adapter
CPU with hardware virtualization extensions
Use a fully qualified domain name to avoid DNS issues with Packstack
Step 1: Software repositories
If on Fedora 20, skip to step 2.
sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
Step 2: Install Packstack Installer
sudo yum install -y openstack-packstack
Step 3: Run Packstack to install OpenStack
packstack --allinone
Once it has successfully installed, you may log into the OpenStack Dashboard at http://$YOURIP/dashboard with the login credentials from /root/keystonerc_admin. Store the packstack answer-file as you will need it later on.
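If you only need the admin password (for example to log into the dashboard), one way to pull it out of the credentials file is shown below; this assumes the default packstack layout, where the password is stored as OS_PASSWORD in /root/keystonerc_admin.
# show the admin password generated by packstack (assumes the default keystonerc_admin layout)
sudo grep OS_PASSWORD /root/keystonerc_admin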
Troubleshooting: If an error occurs during installation, you must run packstack again with the answer-file so that any passwords you've already set will be reused.
packstack --answer-file packstack-answers-$DATE-$TIME.txt
If you have lost the answer-file, you can recover dashboard access by following this discussion: http://openstack.redhat.com/forum/discussion/19/unable-to-login-at-dashboard-user-name-password-not-oke/p1
Any issues installing Nagios may be resolved by manually installing it using the steps found here: http://nagios.sourceforge.net/docs/3_0/quickstart-fedora.html
If you cannot access the dashboard due to an OpenStack API error, check the Apache and Horizon error logs. Trouble accessing the dashboard on Fedora 19 may be resolved by following these steps: http://www.blog.sandro-mathys.ch/2013/08/install-rdo-havana-2-on-fedora-19-and.html
Uninstalling RDO: I recommend the "Slightly smaller hammer" method. DO NOT use the "Big hammer" method unless your computer is used ONLY for OpenStack. http://openstack.redhat.com/Uninstalling_RDO
Baremetal Configuration
Source: https://wiki.openstack.org/wiki/Baremetal Add/replace the following settings in /etc/nova/nova.conf:
[DEFAULT]
scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = nova.virt.baremetal.driver.BareMetalDriver
ram_allocation_ratio = 1.0
reserved_host_memory_mb = 0

[baremetal]
net_config_template = $pybasedir/nova/virt/baremetal/net-static.ubuntu.template
tftp_root = /tftpboot
power_manager = nova.virt.baremetal.ipmi.IPMI
driver = nova.virt.baremetal.pxe.PXE
# Found in packstack answer-file under 'MYSQL_PW'
sql_connection = mysql://root:$PASS@localhost/nova_bm
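If you prefer not to edit the file by hand and have the openstack-utils package installed, the same settings can be applied with openstack-config. This is only a sketch of two of the keys above, assuming the stock /etc/nova/nova.conf path:
# sketch: set two of the values above non-interactively (requires openstack-utils)
sudo openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver nova.virt.baremetal.driver.BareMetalDriver
sudo openstack-config --set /etc/nova/nova.conf baremetal driver nova.virt.baremetal.pxe.PXE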
Log into mysql using the password above:
mysql -u root -p
Enter password:
MariaDB [(none)]> CREATE DATABASE nova_bm;
MariaDB [(none)]> exit;
Before running any nova commands, you must source your login credentials as root:
sudo bash
source /root/keystonerc_admin
Initialize the database:
nova-baremetal-manage db sync
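To confirm the sync worked, you can list the tables that were created (the exact table names may vary by release):
mysql -u root -p -e 'SHOW TABLES;' nova_bm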
Ensure the following packages are installed:
sudo yum install -y dnsmasq ipmitool iscsi-initiator-utils syslinux
To support PXE image deployments, follow these steps:
sudo mkdir -p /tftpboot/pxelinux.cfg
sudo cp /usr/share/syslinux/pxelinux.0 /tftpboot/
sudo chown -R nova /tftpboot
Troubleshooting: For readability, you may set debug=False in /etc/nova/nova.conf.
Ensure that all OpenStack services have a :-) state:
nova-manage service list
Check the logs in /var/log/nova/ for more details.
AMQP server issues might be resolved by restarting qpidd:
sudo systemctl restart qpidd.service
# You may need to restart the compute and conductor services afterwards.
sudo systemctl restart openstack-nova-compute.service
sudo systemctl restart openstack-nova-conductor.service
Dnsmasq
Stop and disable any existing dnsmasq service, and kill any remaining dnsmasq processes:
sudo systemctl stop dnsmasq.service; sudo systemctl disable dnsmasq.service; sudo pkill dnsmasq
Add/replace the following settings in /etc/dnsmasq.conf:
port=0
dhcp-host=$COMPUTE_NODE
enable-tftp
tftp-root=/tftpboot
dhcp-boot=pxelinux.0
bind-interfaces
pid-file=/var/run/dnsmasq.pid
interface=$INTERFACE
dhcp-range=$DHCP_RANGE
Where $COMPUTE_NODE is the MAC address of the compute node paired with a static IP (e.g. aa:bb:cc:dd:ee:ff,10.42.0.100); add one dhcp-host line per NIC.
Where $INTERFACE is the network adapter serving the compute nodes, as shown by an ifconfig command (e.g. eth1).
Where $DHCP_RANGE is the IP range for your compute nodes (e.g. 10.42.0.2,10.42.0.250,24h).
A filled-in example is shown below.
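As an example, a /etc/dnsmasq.conf filled in with the hypothetical values above might look like this:
port=0
dhcp-host=aa:bb:cc:dd:ee:ff,10.42.0.100
enable-tftp
tftp-root=/tftpboot
dhcp-boot=pxelinux.0
bind-interfaces
pid-file=/var/run/dnsmasq.pid
interface=eth1
dhcp-range=10.42.0.2,10.42.0.250,24h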
Dnsmasq must be the only process on the network answering DHCP requests from the MAC addresses of the enrolled bare metal nodes. You must disable neutron-dhcp and/or quantum-dhcp:
sudo systemctl stop neutron-dhcp-agent.service
sudo systemctl disable neutron-dhcp-agent.service
Start and enable Dnsmasq:
sudo systemctl start dnsmasq.service
sudo systemctl enable dnsmasq.service
Troubleshooting: You may be able to resolve issues with enabling TFTP by configuring SELinux; a quick way to check whether SELinux is the cause is sketched below.
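A minimal sketch of that check, assuming the audit daemon is running:
# look for recent AVC denials related to dnsmasq/TFTP
sudo ausearch -m avc -ts recent
# restore the default file contexts under the TFTP root
sudo restorecon -R /tftpboot
# as a temporary diagnostic only, switch SELinux to permissive mode
sudo setenforce 0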
Diskimage-builder (x86)
Make sure the following packages are installed:
yum install -y python-lxml libvirt-python libvirt qemu-system-x86
git clone https://github.com/openstack/diskimage-builder.git
cd diskimage-builder
# build the image your users will run
bin/disk-image-create -u base -o my-image fedora
# and extract the kernel & ramdisk
bin/disk-image-get-kernel -d ./ -o my -i $(pwd)/my-image.qcow2
# build the deploy image
bin/ramdisk-image-create deploy -a amd64 -o my-deploy-ramdisk fedora
Diskimage-builder (ARM)
Fedora currently does not have an ARM-based Cloud image available, but we can build our own from an existing setup.
In this example, I have installed Fedora 20 on an ARMv7 Calxeda Highbank node.
For diskimage-builder to work, this cloud image must contain only a root partition. If the boot or swap partitions are separate, the image will not build properly. If you are using a kickstart file for an automated install, use the following line as a reference:
part / --fstype=ext4 --size=2000
When your installation is complete, you will probably want to perform a complete update:
sudo yum update -y
After setting up the installation exactly how you want your cloud image to be, power off the node and either boot into a live CD or connect the drive to another computer, so you can access the drive without booting the Fedora installation on it. This prevents Fedora from modifying any part of the installation while you copy it into a raw image.
Run lsblk and take note of which device is your Fedora ARM installation (e.g. sdb, sdc, sdd...), and make sure it is NOT mounted.
Instead of cloning the entire disk, find the END of the partition:
sudo sfdisk -luS /dev/sdX
This number does not have to be exact, but always round up to a conveniently larger value.
dd uses a default block size of 512 bytes, but Nova prefers 1M blocks. Take the end sector and divide it by 2048 (2048 x 512-byte sectors = 1 MiB), rounding up to the nearest whole number (e.g. 2,100,000 / 2048 => 1026).
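A small shell sketch of the calculation, using the hypothetical end sector from the example above:
# hypothetical end sector reported by sfdisk
END_SECTOR=2100000
# 2048 x 512-byte sectors = 1 MiB; round up with integer arithmetic
COUNT=$(( (END_SECTOR + 2047) / 2048 ))
echo $COUNT   # => 1026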
When you are ready to begin cloning, be very careful when using the dd command.
Clone to local device:
dd if=/dev/sdX bs=1M count=1026 of=fedora.raw
or
Clone over SSH:
dd if=/dev/sdX bs=1M count=1026 | gzip | ssh $IP_ADDRESS 'gzip -d | dd of=fedora.raw'
Source: http://ubuntuforums.org/showthread.php?t=1840320#post_11234165
After the image has been created, copy it to a system with the correct target architecture; you can use the same system on which you installed your template Fedora installation. Convert the image to qcow2 format:
qemu-img convert -p -f raw -O qcow2 fedora.raw fedora.qcow2
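You can sanity-check the converted image before copying it anywhere:
qemu-img info fedora.qcow2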
Copy fedora.qcow2 over /root/.cache/image-create/fedora-20.armhf.qcow2, keeping a separate copy of fedora.qcow2 in case the cached copy expires:
sudo cp fedora.qcow2 /root/.cache/image-create/fedora-20.armhf.qcow2
Make sure the following packages are installed:
yum install -y python-lxml libvirt-python libvirt qemu-system-arm
git clone https://github.com/openstack/diskimage-builder.git
cd diskimage-builder
Before proceeding with the build process, some modifications to diskimage-builder need to be made.
In elements/rpm-distro/pre-install.d/01-override-yum-arch add the following before the last else statement:
elif [ "armhf" = "$ARCH" ]; then basearch=armhfp arch=armhfp
Comment out all lines in:
elements/redhat-common/pre-install.d/15-remove-grub
elements/base/install.d/10-cloud-init
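One way to comment out every line in those files, assuming you are inside the diskimage-builder checkout:
sed -i 's/^/#/' elements/redhat-common/pre-install.d/15-remove-grub elements/base/install.d/10-cloud-init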
# build the image your users will run
bin/disk-image-create -a armhf -u base -o my-image fedora
# and extract the kernel & ramdisk
bin/disk-image-get-kernel -d ./ -o my -i $(pwd)/my-image.qcow2
# build the deploy image
bin/ramdisk-image-create deploy -a armhf -o my-deploy-ramdisk fedora
Our Calxeda Highbank nodes will not boot with the kernel extracted by diskimage-builder as-is. You can either use the extracted vmlinuz, or download the correct version from http://www.rpmfind.net/linux/rpm2html/search.php?query=kernel&submit=Search+...&system=&arch=armv7hl and extract the vmlinuz file from the RPM; in either case, convert it to a U-Boot image:
mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n 'Fedora 20 ARMv7' -d my-vmlinuz my-vmlinuz
Source: https://fedoraproject.org/wiki/Architectures/ARM/TrimSlicePRO
Copy over the diskimage-builder directory to your compute host and proceed with the next step.
Glance
glance image-create --name my-vmlinuz --public --disk-format aki < my-vmlinuz
glance image-create --name my-initrd --public --disk-format ari < my-initrd
# Replace variables with the values of id from the previous two commands
glance image-create --name my-image --public --disk-format qcow2 --container-format bare \
  --property kernel_id=$VMLINUZ_ID \
  --property ramdisk_id=$INITRD_ID < my-image.qcow2
glance image-create --name deploy-vmlinuz --public --disk-format aki < my-vmlinuz
glance image-create --name deploy-initrd --public --disk-format ari < my-deploy-ramdisk.initramfs
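You can confirm that all five images registered correctly and note their IDs:
glance image-list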
Flavor
# pick a unique number
FLAVOR_ID=123
# change these to match your hardware
RAM=1024
CPU=2
DISK=100
nova flavor-create my-baremetal-flavor $FLAVOR_ID $RAM $DISK $CPU
# Replace variables with the values of id from the last two glance commands
nova flavor-key my-baremetal-flavor set \
  "baremetal:deploy_kernel_id"=$DEPLOY_VMLINUZ_ID \
  "baremetal:deploy_ramdisk_id"=$DEPLOY_INITRD_ID
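To verify the flavor and its extra specs:
nova flavor-show my-baremetal-flavor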
Hardware Enrollment
The IPMI username and password can usually be found in the product manual. Typically the default username is "admin" or "ADMIN", and the password is "admin". As a last resort, you can temporarily install Fedora on the compute node and run a few ipmitool commands to display/reset its username and password:
ipmitool user list 1
ipmitool user set password 2 <new_password>
Source: http://www.openfusion.net/linux/ipmi_on_centos
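Before enrolling a node, it can be worth confirming that IPMI over LAN works from the compute host. A sketch using the example address and credentials from the next step (some BMCs need -I lan instead of lanplus):
ipmitool -I lanplus -H 10.42.0.100 -U ADMIN -P admin chassis status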
# create a "node" for each machine nova baremetal-node-create --pm_address=10.42.0.100 --pm_user=ADMIN --pm_password=admin $COMPUTE-HOST-NAME $CPU $RAM $DISK $FIRST-MAC
Also add each interface, including $FIRST-MAC, using the ID from the previous command:
nova baremetal-interface-add $ID $MAC
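To review what has been enrolled (exact subcommand names may vary with your novaclient version):
nova baremetal-node-list
nova baremetal-node-show $ID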
Before proceeding, ensure that nova-baremetal-deploy-helper is running. You can run it on startup by adding the following line to /etc/rc.d/rc.local:
nohup nova-baremetal-deploy-helper &
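Note that on Fedora with systemd, /etc/rc.d/rc.local is only executed if it is executable, so you may also need to mark it as such, then confirm the helper is running:
sudo chmod +x /etc/rc.d/rc.local
pgrep -f nova-baremetal-deploy-helper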
Now launch the image from the dashboard at http://localhost/dashboard/project/images_and_snapshots/, selecting the baremetal flavor, the default public network, and any other configuration you might want. You may also launch an instance via the command line:
nova boot --flavor my-baremetal-flavor --image my-image my-baremetal-node
It may take a couple of minutes for the compute host to boot, so be patient. You can view the current task of the instance on the dashboard at http://localhost/dashboard/project/instances/ or via command line:
nova list
nova show $ID
Once booted, ensure that your BIOS settings are set correctly, with Network boot having priority over others.
The compute node should first load the deploy images from the compute host. If it reaches the Netcat stage, you will see waiting… and the node will soon reboot. After rebooting, the compute node loads the kernel and ramdisk and runs any cloud-init commands; it should then be fully accessible over SSH.
Troubleshooting: You can control the power state of a compute node with ipmitool; a sketch follows.
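A sketch of the basic ipmitool power commands, using the same example address and credentials as the enrollment step:
ipmitool -I lanplus -H 10.42.0.100 -U ADMIN -P admin power status
ipmitool -I lanplus -H 10.42.0.100 -U ADMIN -P admin power on
ipmitool -I lanplus -H 10.42.0.100 -U ADMIN -P admin power off
ipmitool -I lanplus -H 10.42.0.100 -U ADMIN -P admin power cycle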