OPS435 Python3 Lab 8
Contents
- 1 LAB OBJECTIVES
- 2 INVESTIGATION 1: Extra VM Setup
- 3 INVESTIGATION 2: Fabric practice
- 4 INVESTIGATION 3: Multiplying your work
- 5 Final Task - Apply fabfile.py to your VM on myvmlab
- 6 LAB 8 SIGN-OFF (SHOW INSTRUCTOR)
- 7 LAB REVIEW
LAB OBJECTIVES
- 1. Use the fab program to execute administrative tasks on remote hosts via Python functions under the Fabric framework.
- 2. Create Python functions using the Fabric API to perform Linux system administration tasks on controlled Linux systems.
Overview
- Completing this lab will give you a taste of what is involved in automating remote system/network administration tasks. We will look at and use the Fabric package in this lab. Using Fabric you can automate monitoring, software deployment, and updates across many systems at the same time, and repeat those tasks as often as needed.
REFERENCE
- 1. The Fabric documentation and tutorials are helpful for learning more about Fabric's features.
- 2. You should have learned the following topics in OPS235 and/or OPS335. Please review them to prepare for some of the tasks in this lab:
- Configure and allow a regular user to run the sudo command.
- The man page on sudo
- Configure sudoers using the configuration file: /etc/sudoers.
- Managing critical system log files: /var/log/messages, /var/log/maillog, /var/log/secure
- Retrieve current firewall setting using the iptables -L -n -v command
INVESTIGATION 1: Extra VM Setup
- In order to experience Fabric's features in a realistic way, we're going to set up several virtual machines (you need at least one more VM). To begin with, they will all have the same configuration. Please make sure that each VM has a direct network connection to the other VMs you wish to control and configure.
PART 1 - Set up your controller
- In this lab you will use your existing centos7 VM as a workstation (the controller) to control other VMs, which we'll call workers. Later in the lab, we will try to control and monitor your VM in myvmlab using the fabfile we are going to develop.
- Install fabric using yum. Once it's installed you should have a fab command available. Type the following command to see the command line options:
fab --help
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...
Options:
-h, --help show this help message and exit
-d NAME, --display=NAME
print detailed info about command NAME
-F FORMAT, --list-format=FORMAT
formats --list, choices: short, normal, nested
-I, --initial-password-prompt
Force password prompt up-front
--initial-sudo-password-prompt
Force sudo password prompt up-front
-l, --list print list of possible commands and exit
--set=KEY=VALUE,... comma separated KEY=VALUE pairs to set Fab env vars
--shortlist alias for -F short --list
-V, --version show program's version number and exit
-a, --no_agent don't use the running SSH agent
-A, --forward-agent forward local agent to remote end
--abort-on-prompts abort instead of prompting (for password, host, etc)
-c PATH, --config=PATH
specify location of config file to use
--colorize-errors Color error output
-D, --disable-known-hosts
do not load user known_hosts file
-e, --eagerly-disconnect
disconnect from hosts as soon as possible
-f PATH, --fabfile=PATH
python module file to import, e.g. '../other.py'
-g HOST, --gateway=HOST
gateway host to connect through
--gss-auth Use GSS-API authentication
--gss-deleg Delegate GSS-API client credentials or not
--gss-kex Perform GSS-API Key Exchange and user authentication
--hide=LEVELS comma-separated list of output levels to hide
-H HOSTS, --hosts=HOSTS
comma-separated list of hosts to operate on
-i PATH path to SSH private key file. May be repeated.
-k, --no-keys don't load private key files from ~/.ssh/
--keepalive=N enables a keepalive every N seconds
--linewise print line-by-line instead of byte-by-byte
-n M, --connection-attempts=M
make M attempts to connect before giving up
--no-pty do not use pseudo-terminal in run/sudo
-p PASSWORD, --password=PASSWORD
password for use with authentication and/or sudo
-P, --parallel default to parallel execution method
--port=PORT SSH connection port
-r, --reject-unknown-hosts
reject unknown hosts
--sudo-password=SUDO_PASSWORD
password for use with sudo only
--system-known-hosts=SYSTEM_KNOWN_HOSTS
load system known_hosts file before reading user
known_hosts
-R ROLES, --roles=ROLES
comma-separated list of roles to operate on
-s SHELL, --shell=SHELL
specify a new shell, defaults to '/bin/bash -l -c'
--show=LEVELS comma-separated list of output levels to show
--skip-bad-hosts skip over hosts that can't be reached
--skip-unknown-tasks skip over unknown tasks
--ssh-config-path=PATH
Path to SSH config file
-t N, --timeout=N set connection timeout to N seconds
-T N, --command-timeout=N
set remote command timeout to N seconds
-u USER, --user=USER username to use when connecting to remote hosts
-w, --warn-only warn, instead of abort, when commands fail
-x HOSTS, --exclude-hosts=HOSTS
comma-separated list of hosts to exclude
-z INT, --pool-size=INT
number of concurrent processes to use in parallel mode
Please note and study the -H, -f, -l, and --port options.
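- For example, these options can be combined like this (the IP address, port number, and task name below are placeholders used only for illustration):
fab -f fabfile.py -l                                        # list the tasks defined in fabfile.py
fab -f fabfile.py -H 192.168.122.169 --port=2222 someTask   # run someTask on that host, using SSH port 2222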
PART 2 - Create master Worker image
- Create a new virtual machine and allocate 1GB of RAM and 8GB of disk space for it. Install the "Basic Web Server" configuration of CentOS in that VM, using the same CentOS .iso file you used for your first machine in this course.
- Make sure that:
- The hostname of the system is worker1.
- It has a static IP address appropriate for your virtual network.
- Create a regular user using your Seneca email name as the user name: [seneca_id].
- Add this new regular user to the wheel group using the following command, which will allow the user to run the sudo command:
usermod -a -G wheel [seneca_id]
- After installation ensure that you can access worker1 from your main vm using the static IP address you've assigned to it.
Set up SSH key login
- In order for an automated system to be able to connect to your VM and administer it - you will need to be able to connect to it using SSH keys. You've done this in both OPS235 and OPS335.
- Create a new SSH key pair (one private, and one public) on your main VM with your regular user (don't do it under root). Once you have both keys, set things up so that
- your regular user on your controller VM can SSH to the worker VM as the same regular user without prompting for a password (i.e. add the contents of your pub key to ~/.ssh/authorized_keys)
- your regular user on your controller VM can SSH to the worker VM as root without prompting for a password (i.e. add the contents of your pub key to /root/.ssh/authorized_keys). Example commands are shown below.
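- One way to do this is shown below (a sketch - accept the defaults for ssh-keygen and substitute your worker1 IP address):
ssh-keygen -t rsa                            # create the key pair as your regular user on the controller
ssh-copy-id [seneca_id]@192.168.122.169      # enables key-based login as your regular user
ssh-copy-id root@192.168.122.169             # enables key-based login as root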
PART 3 - Clone the Workers
- We're only simulating the real world where you'd have hundreds of VMs in one or more clouds, but you can just imagine that the VMs you're creating on your computer are actually being created on an Amazon or Microsoft server.
- ** Optional ** Make four clones of the master worker image you've just created. Then make sure that each of them has a unique IP address. That's all you're required to change manually. All the other configuration on the workers (including the hostnames) will be set by Fabric. Normally you would have some kind of automation doing all this cloning and IP address assignment as well, but we don't have time for that this semester.
- Make snapshots of all your workers so that you can easily restore them to the original state after you modify them.
INVESTIGATION 2: Fabric practice
- We will start with some basics. Fabric runs python programs on the controller and the workers. You create an "instruction" file on your controller, and execute it on the controller using the fab program. When you do that - you specify which workers you want your instructions to be executed on.
- The instructions are stored in a Python file. Let's start with a simple one named fabfile.py (the default filename used by fab when the '-f' option is not given):
PART 1: Simplest example
Getting the hostname on the remote worker
from fabric.api import *

# set the name of the user on the remote host
env.user = '[seneca_id]'

# Will get the hostname of this worker:
def getHostname():
    name = run("hostname")
    print(name)
- To check for syntax errors, run the following command in the same directory as your fabfile.py:
fab -l
- you should get a list of tasks stored in your fabfile.py:
[rchan@centos7 lab8]$ fab -f fabfile.py -l
Available commands:

    getHostname
- To perform the task of getHostname on the worker machine 192.168.122.169, we run it on the controller machine like this:
[rchan@centos7 lab8]$ fab -f fabfile.py -H 192.168.122.169 getHostname
[192.168.122.169] Executing task 'getHostname'
[192.168.122.169] run: hostname
[192.168.122.169] out: c7-rchan
[192.168.122.169] out: 

c7-rchan

Done.
Disconnecting from 192.168.122.169... done.
- All this has done is get the hostname of the worker and print it (on the controller).
- In the command above we're using the fab program to import the file fabfile.py and execute the getHostname function on the worker 192.168.122.169. Note that the IP address of your first worker will likely be different.
- If you did all the setup right and you still get a password prompt when you execute the above command, read the prompt carefully and see whose password it prompted you for. If it is not the same as your [seneca_id], verify that you have the following line in your fabfile and that you can ssh to your worker VM without a password:
env.user = '[seneca_id]'
- In the above you have:
- Lines with an IP address telling you which worker the output is for/from.
- Messages from the controller (e.g. "Executing task...", and "run: ...").
- Output from the worker ("out: ...")
- Output on the controller from your fab file ("c7-rchan" printed again at the end, which came from the "print()" call)
- You should get used to the above. It's a lot of output but it's important to understand where every part is coming from, so you are able to debug problems when they happen.
PART 2: Set up more administrative tasks
- Let's pretend that we needed to deploy a web server on several machines. We'll set up a simple example of such a deployment here.
Getting the disk usage on remote worker
- Add a getDiskUsage() function to your fabfile.py file:
# to get the disk usage on the remote worker
def getDiskUsage():
    current_time = run('date')
    diskusage = run('df -H')
    header = 'Current Disk Usage at ' + current_time
    print(header)
    print(diskusage)
- Note that each call to "run()" will run a command on the worker. In this function we get the date/time of the remote worker, and then get its disk usage. The print() calls print out both of the returned values.
- If you try to run it the same way as before:
$ fab --fabfile=fabfile.py -H 192.168.122.169 getDiskUsage
- You should get the following output:
[rchan@centos7 lab8]$ fab --fabfile=fabfile.py -H 192.168.122.169 getDiskUsage
[192.168.122.169] Executing task 'getDiskUsage'
[192.168.122.169] run: date
[192.168.122.169] out: Sun Nov 10 13:17:16 EST 2019
[192.168.122.169] out: 

[192.168.122.169] run: df -H
[192.168.122.169] out: Filesystem               Size  Used Avail Use% Mounted on
[192.168.122.169] out: devtmpfs                 947M     0  947M   0% /dev
[192.168.122.169] out: tmpfs                    964M     0  964M   0% /dev/shm
[192.168.122.169] out: tmpfs                    964M  9.7M  954M   2% /run
[192.168.122.169] out: tmpfs                    964M     0  964M   0% /sys/fs/cgroup
[192.168.122.169] out: /dev/mapper/centos-root  7.7G  5.6G  2.1G  73% /
[192.168.122.169] out: /dev/vda1                1.1G  298M  766M  29% /boot
[192.168.122.169] out: tmpfs                    193M   17k  193M   1% /run/user/42
[192.168.122.169] out: tmpfs                    193M     0  193M   0% /run/user/1000
[192.168.122.169] out: 

Current Disk Usage at Sun Nov 10 13:17:16 EST 2019
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 947M     0  947M   0% /dev
tmpfs                    964M     0  964M   0% /dev/shm
tmpfs                    964M  9.7M  954M   2% /run
tmpfs                    964M     0  964M   0% /sys/fs/cgroup
/dev/mapper/centos-root  7.7G  5.6G  2.1G  73% /
/dev/vda1                1.1G  298M  766M  29% /boot
tmpfs                    193M   17k  193M   1% /run/user/42
tmpfs                    193M     0  193M   0% /run/user/1000

Done.
Disconnecting from 192.168.122.169... done.
Update the rpm packages and install Apache on the remote worker
- Your setupWebServer() function will first use yum to update the installed packages and to install the Apache web server. You'll find that yum prompts you to answer questions, which you don't want to do in an automated environment. yum also prints far too much output, which isn't helpful in an automated environment either. We'll fix both problems by adding two switches to yum: "-y" and "-d1". A sketch of this part of the function follows below.
- Notice also that all four of the commands in this part of the function can be run as many times as you want and the result will be the same. This is not always so easy to achieve.
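- Here is a minimal sketch of what this part of setupWebServer() might look like. It assumes httpd is the package being deployed; adapt the package list to your own needs:
# Will update all rpm packages, then install and start the Apache web server
def setupWebServer():
    run("yum -y -d1 update")
    run("yum -y -d1 install httpd")
    run("systemctl enable httpd")
    run("systemctl start httpd")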
Create and upload the website contents to the remote worker
- Now that we have a web server running, we also want to put a website on it. The website can be of any complexity, but to keep this demonstration simple we'll use a single HTML file. You can pretend that it's as complex as you like. Create an index.html file like this:
<h1>My fancy web server</h1>
- And since we're pretending that it's a large website with many files and directories, we'll compress it into an archive named webcontents.tar.bz2 using a tar command. You've done this since OPS235.
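- For example, assuming index.html is the only file in your website, a command like this on the controller would create the archive:
tar cvjf webcontents.tar.bz2 index.html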
- Once you have your archive, make sure it's in the same directory as your fab file. Then add the following to your setupWebServer() function:
with cd("/var/www/html/"): put("webcontents.tar.bz2", ".") run("tar xvf webcontents.tar.bz2") run("rm webcontents.tar.bz2")
- There is something weird in the code above that you haven't seen before but it's required for some uses of Fabric: the with statement.
- The problem is that separate run commands each execute in a brand new session, each with its own shell. They are not like separate lines in a single shell script even though they look like they should be.
- That means if you run a cd command and then a tar command as separate run() calls - the tar command will not run in the directory where you think it will. In order to fix this you have to nest the commands inside a with block, which makes the cd's effect persist for the nested commands.
- The code we added to the function will cd to the default web site directory on the worker, upload your web contents tarball from your controller to that directory on the worker, extract it, and delete the tarball.
- After it's done - you should have a working web server and simple website on your worker1.
- Except you won't be able to access it because of the firewall. We'll deal with that in the next section.
PART 3: Set up the firewall
- Recall that in our OPS courses we've been using iptables instead of firewalld (the firewall installed by default in CentOS 7). Let's make sure that our workers have that set up as well. In the same fabfile.py you've been using all along, add a new function like this:
# Will uninstall firewalld and replace it with iptables
def setupFirewall():
    run("yum -y -d1 remove firewalld")
    run("yum -y -d1 install iptables-services")
    run("systemctl enable iptables")
    run("systemctl start iptables")
- That should by now look pretty obvious. On the worker you're going to uninstall firewalld, install iptables, and make sure that the iptables service is enabled and running.
- Execute the function for worker1 and double-check that it worked, for example with the command shown below.
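- For example (your worker's IP address will likely be different):
fab --fabfile=fabfile.py -H 192.168.122.169 setupFirewall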
Allow access to Apache through the firewall
- The default setup of iptables also doesn't allow access to our web server. We'll need to add some more to our function to allow it. This would probably make more sense in setupWebServer() but for now let's put it into setupFirewall():
run("iptables -I INPUT -p tcp --dport 80 -j ACCEPT")
run("iptables-save > /etc/sysconfig/iptables")
- Easy enough, but there's one problem - if we run this more than once, we're going to end up with duplicate iptables rules for port 80 (check with iptables -L).
- In order to avoid that - we have to first check whether the rule exists before we add it. We can do that like this:
iptables -C INPUT -p tcp --dport 80 -j ACCEPT
- Unfortunately that command answers "yes" or "no" by succeeding or failing depending on whether that rule exists. In Fabric when a command fails - the entire fab file execution stops, assuming that it's an unrecoverable error. We need to prevent that with another with statement:
with settings(warn_only=True):
    firewallAlreadySetUp = run("iptables -C INPUT -p tcp --dport 80 -j ACCEPT")
    if firewallAlreadySetUp.return_code == 1:
        # ... move your iptables rules setup here ...
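- To show how the pieces fit together, here is one possible shape for the finished setupFirewall() function - a sketch assembled from the fragments above, not the only correct answer:
# Will uninstall firewalld, replace it with iptables, and open port 80 without creating duplicate rules
def setupFirewall():
    run("yum -y -d1 remove firewalld")
    run("yum -y -d1 install iptables-services")
    run("systemctl enable iptables")
    run("systemctl start iptables")
    with settings(warn_only=True):
        # iptables -C succeeds if the rule already exists and fails otherwise
        firewallAlreadySetUp = run("iptables -C INPUT -p tcp --dport 80 -j ACCEPT")
        if firewallAlreadySetUp.return_code == 1:
            run("iptables -I INPUT -p tcp --dport 80 -j ACCEPT")
            run("iptables-save > /etc/sysconfig/iptables")
- Once it has run, you can verify from the controller with something like curl http://192.168.122.169/ (substitute your worker's IP address), which should return the contents of your index.html.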
- Test your new setupFirewall function on worker1, and make sure it opens access to Apache but does not create duplicate rules every time it's run.
INVESTIGATION 3: Multiplying your work
- After completing all the previous parts of the lab - you should have a working fabfile.py with two working functions: setupFirewall() and setupWebServer().
- ** Optional ** You were asked to test them on worker1. Now let's run these two functions on all your workers at the same time. The command is almost the same, except for the list of IP addresses:
fab --fabfile=fabfile.py -H 192.168.56.11,192.168.56.12,192.168.56.13,192.168.56.14,192.168.56.15 setupWebServer
- Again - your IP addresses will be different but the command will be the same.
- You can also reconfigure the firewall on all the workers at the same time, using a command like this on your controller:
fab --fabfile=fabfile.py -H 192.168.56.11,192.168.56.12,192.168.56.13,192.168.56.14,192.168.56.15 setupFirewall
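- By default fab works through the hosts one at a time. If you'd like to try handling all of the hosts concurrently, you can add the -P (parallel) option shown in the help output above, for example:
fab --fabfile=fabfile.py -P -H 192.168.56.11,192.168.56.12,192.168.56.13,192.168.56.14,192.168.56.15 setupFirewall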
And imagine that you might have 10, 50, 100 servers to do this on - could you do it without the automation?
Final Task - Apply fabfile.py to your VM on myvmlab
- Since your account on your vm on myvmlab is a regular user with sudo privilege, you need to make the following changes to your fabfile.py before applying it to your vm on myvmlab:
- Change env.user from 'root' to your account on your vm in myvmlab.
- Change all the commands that need superuser privileges from calling the run() function to calling the sudo() function instead (see the sketch after this list).
- Test your updated fabfile.py until you get the same result as when you apply it to your own worker VM.
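- As a rough illustration of those changes (reusing the setupFirewall() example from earlier in this lab - your own function bodies will differ):
from fabric.api import *

# log in as your regular account on the myvmlab vm, not root
env.user = '[seneca_id]'

# commands that need root privileges now go through sudo() instead of run()
def setupFirewall():
    sudo("yum -y -d1 remove firewalld")
    sudo("yum -y -d1 install iptables-services")
    sudo("systemctl enable iptables")
    sudo("systemctl start iptables")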
LAB 8 SIGN-OFF (SHOW INSTRUCTOR)
- Have Ready to Show Your Instructor:
- Complete all the parts of the lab and show the version of your fabfile.py which works on your vm on myvmlab.