OPS435 Python3 Lab 8

LAB OBJECTIVES

1. Explore the Fabric Python library and its command-line tool "fab".
2. Create Fabric scripts that utilize Fabric's API to define tasks that can be executed by the fab program.
3. Use the fab command to execute Fabric scripts that perform regular/administrative tasks on remote Linux machines.

Overview

Fabric is a Python library and command-line tool for streamlining the use of SSH for application deployment or system administration tasks. It has two major components:
  1. a command-line interface program called "fab" that lets you execute arbitrary Python functions
  2. a set of Python APIs that you can use and call in your Python functions to make executing shell commands over SSH much easier.
In this lab, we are going to use the Fabric API and its fab command to define and execute Python functions (or tasks) that automate interactions with remote Linux machines.

REFERENCE

1. These links are helpful for learning more about Fabric's features:
  Category                    Resource Link
  Official Fabric tutorial    [1]
  Better Fabric tutorial      [2]
  Official Fabric website     [3]
Please note that the version of Fabric we are going to use on matrix.senecacollege.ca for this lab is 1.14 and it supports only Python version 2.
2. You should have some experience with the following topics from OPS235 and/or OPS335. Please review them to prepare for the activities in this lab:
  • creating and configuring a regular user on a Linux system
  • configuring and managing sudo privileges for a regular user
  • configuring sudoers with the visudo command (see the sketch after this list)
  • using the yum command to install, remove, and update rpm packages
  • retrieving the current firewall settings with the iptables -L -n -v command
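As a quick refresher, the sudoers entries below are a minimal sketch of what you might add or uncomment with visudo; the user name "student" is only an illustration, not part of this lab:

  # give members of the wheel group full sudo privileges
  %wheel   ALL=(ALL)       ALL
  # or give a single (hypothetical) user full sudo privileges
  student  ALL=(ALL)       ALL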

INVESTIGATION 1: The Fabric Framework

The Fabric framework consists of the following components (a sketch of how they fit together follows this list):
  1. the Fabric Python library - the fabric package
  2. the Fabric API - fabric.api
  3. the Fabric CLI - fab - runs a Fabric script, which defaults to fabfile.py in the current working directory
  4. the Fabric script - contains the Python functions (or tasks) to be executed by the "fab" CLI
  5. the controller workstation - the machine that has the Fabric package installed and runs the "fab" CLI
  6. the remote machine - the target machine on which a Fabric task is executed
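The rough invocation pattern below shows how these components fit together; the names in angle brackets are placeholders, and concrete examples follow later in this lab:

    $ fab -f <fabric_script> -H <remote_machine> <task_name>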

PART 1 - Configure your controller workstation

In this lab you will use your user account on matrix.senecacollege.ca as your Fabric controller workstation.
The Fabric package version 1.14.0 has already been installed on matrix.senecacollege.ca, so you should have access to the fab command on matrix. Log in to matrix.senecacollege.ca and run the following command to confirm the version of the fabric package:
    $ fab --version
Type the following command to see the command-line options of fab:
    $ fab --help
You should get something similar to the following:
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...

Options:
  -h, --help            show this help message and exit
  -d NAME, --display=NAME
                        print detailed info about command NAME
  -F FORMAT, --list-format=FORMAT
                        formats --list, choices: short, normal, nested
  -I, --initial-password-prompt
                        Force password prompt up-front
  --initial-sudo-password-prompt
                        Force sudo password prompt up-front
  -l, --list            print list of possible commands and exit
  --set=KEY=VALUE,...   comma separated KEY=VALUE pairs to set Fab env vars
  --shortlist           alias for -F short --list
  -V, --version         show program's version number and exit
  -a, --no_agent        don't use the running SSH agent
  -A, --forward-agent   forward local agent to remote end
  --abort-on-prompts    abort instead of prompting (for password, host, etc)
  -c PATH, --config=PATH
                        specify location of config file to use
  --colorize-errors     Color error output
  -D, --disable-known-hosts
                        do not load user known_hosts file
  -e, --eagerly-disconnect
                        disconnect from hosts as soon as possible
  -f PATH, --fabfile=PATH
                        python module file to import, e.g. '../other.py'
  -g HOST, --gateway=HOST
                        gateway host to connect through
  --gss-auth            Use GSS-API authentication
  --gss-deleg           Delegate GSS-API client credentials or not
  --gss-kex             Perform GSS-API Key Exchange and user authentication
  --hide=LEVELS         comma-separated list of output levels to hide
  -H HOSTS, --hosts=HOSTS
                        comma-separated list of hosts to operate on
  -i PATH               path to SSH private key file. May be repeated.
  -k, --no-keys         don't load private key files from ~/.ssh/
  --keepalive=N         enables a keepalive every N seconds
  --linewise            print line-by-line instead of byte-by-byte
  -n M, --connection-attempts=M
                        make M attempts to connect before giving up
  --no-pty              do not use pseudo-terminal in run/sudo
  -p PASSWORD, --password=PASSWORD
                        password for use with authentication and/or sudo
  -P, --parallel        default to parallel execution method
  --port=PORT           SSH connection port
  -r, --reject-unknown-hosts
                        reject unknown hosts
  --sudo-password=SUDO_PASSWORD
                        password for use with sudo only
  --system-known-hosts=SYSTEM_KNOWN_HOSTS
                        load system known_hosts file before reading user
                        known_hosts
  -R ROLES, --roles=ROLES
                        comma-separated list of roles to operate on
  -s SHELL, --shell=SHELL
                        specify a new shell, defaults to '/bin/bash -l -c'
  --show=LEVELS         comma-separated list of output levels to show
  --skip-bad-hosts      skip over hosts that can't be reached
  --skip-unknown-tasks  skip over unknown tasks
  --ssh-config-path=PATH
                        Path to SSH config file
  -t N, --timeout=N     set connection timeout to N seconds
  -T N, --command-timeout=N
                        set remote command timeout to N seconds
  -u USER, --user=USER  username to use when connecting to remote hosts
  -w, --warn-only       warn, instead of abort, when commands fail
  -x HOSTS, --exclude-hosts=HOSTS
                        comma-separated list of hosts to exclude
  -z INT, --pool-size=INT
                        number of concurrent processes to use in parallel mode

Please note and study the following command-line options (a usage sketch follows this list):

  1. -H
  2. -f
  3. -i
  4. -l
  5. --port
  6. --user
  7. --initial-sudo-password-prompt
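For instance, the commands below are a sketch combining several of the options noted above; the IP address, user name, and key path are placeholders, and getHostname is a task defined later in this lab:

    $ fab -f fabfile.py -l
    $ fab -f fabfile.py -H 192.168.122.169 --user=myuser -i ~/.ssh/id_rsa getHostname

The first command only lists the tasks found in fabfile.py; the second connects to the given host as the given user with the given private key and runs the getHostname task.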

Set up SSH key login

In order for an automated system to connect to your VM and administer it, you will need to be able to connect to it using SSH keys. You have done this in both OPS235 and OPS335.
Create a new SSH key pair (one private and one public) on your main VM as your regular user (do not do it as root). Once you have both keys, set things up so that (see the command sketch after this list):
  • your regular user on your controller VM can SSH to the worker VM as the same regular user without being prompted for a password (i.e. add the contents of your public key to ~/.ssh/authorized_keys on the worker)
  • your regular user on your controller VM can SSH to the worker VM as root without being prompted for a password (i.e. add the contents of your public key to /root/.ssh/authorized_keys on the worker)
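The commands below are a minimal sketch of that setup; the user name "myuser" and the worker IP address 192.168.122.169 are placeholders for your own values, and ssh-copy-id is assumed to be available on the controller:

    $ ssh-keygen -t rsa
    $ ssh-copy-id myuser@192.168.122.169
    $ ssh-copy-id root@192.168.122.169
    $ ssh myuser@192.168.122.169 hostname
    $ ssh root@192.168.122.169 hostname

The last two commands simply verify that both logins now work without a password prompt.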

PART 3 - Clone the Workers

We're only simulating the real world where you'd have hundreds of VMs in one or more clouds, but you can just imagine that the VMs you're creating on your computer are actually being created on an Amazon or Microsoft Cloud.
** Optional ** Make four clones of the master worker image you've just created. Then make sure that each of them has a unique IP address. That's all you're required to change manually. All the other configuration on the workers (including the hostnames) will be set by Fabric. Normally you would have some kind of automation doing all this cloning and IP address assignment as well, but we don't have time for that this semester.
Make snapshots of all your workers so that you can easily restore them to the original state after you modify them.

INVESTIGATION 2: Fabric practice

We will start with some basics. Fabric runs Python programs on the controller and the workers. You create an "instruction" file on your controller and execute it on the controller using the fab program. When you do that, you specify which workers you want your instructions to be executed on.
The instructions are stored in a Python file. Let's start with a simple one named fabfile.py (the default filename used by fab when the '-f' option is not given):

PART 1: Simplest example

Getting the hostname on the remote worker

from fabric.api import *

# set the name of the user on the remote host
env.user = '[seneca_id]'

# Will get the hostname of this worker:

def getHostname():
    name = run("hostname")
    print(name)
To check for syntax errors, run the following command in the same directory as your fabfile.py:
fab -l
You should get a list of the tasks stored in your fabfile.py:
[rchan@centos7 lab8]$ fab -f fabfile.py -l
Available commands:

    getHostname
To perform the task of getHostname on the worker machine 192.168.122.169 (replace with the actual IP of your worker VM), we run it on the controller machine like this:
[rchan@centos7 lab8]$ fab -f fabfile.py -H 192.168.122.169 getHostname
[192.168.122.169] Executing task 'getHostname'
[192.168.122.169] run: hostname
[192.168.122.169] out: c7-rchan
[192.168.122.169] out: 

c7-rchan

Done.
Disconnecting from 192.168.122.169... done.
All this has done is get the hostname of the worker and print it (on the controller).
In the command above we're using the fab program to import the file fabfile.py and execute the getHostname function on the worker 192.168.122.169. Note that the IP address of your first worker will likely be different.
If you did all the setup right and you still get a password prompt when executing the above command, read the prompt carefully and see whose password it is asking for. If it is not the same as your [seneca_id], verify that you have the following line in your fabfile and that you can ssh to your worker VM without a password:
env.user = '[seneca_id]'
In the above you have:
  • Lines with an IP address telling you which worker the output is for/from.
  • Messages from the controller (e.g. "Executing task...", and "run: ...").
  • Output from the worker ("out: ...")
  • Output on the controller from your fab file ("c7-rchan" in the example above, which came from the "print()" call)
You should get used to the above. It's a lot of output but it's important to understand where every part is coming from, so you are able to debug problems when they happen.

Part 2: Set up more administrative tasks

Let's pretend that we need to collect the disk usage on several machines so that we can plan for storage maintenance. We'll set up a simple example of such a task here.

Getting the disk usage on remote worker

Add a getDiskUsage() function to your fabfile.py file:
# to get the disk usage on remote worker
def getDiskUsage():
    current_time = run('date')
    diskusage = run('df -H')
    header = 'Current Disk Usage at '+current_time
    print(header)
    print(diskusage)
Note that each call to "run()" will run a command on the worker. In this function we get the date/time of the remote worker and then get its disk usage. The print() calls print out both of the returned values.
If you try to run it the same way as before:
$ fab --fabfile=fabfile.py -H 192.168.122.169 getDiskUsage
You should get the following output:
[rchan@centos7 lab8]$ fab --fabfile=fabfile.py -H 192.168.122.169 getDiskUsage
[192.168.122.169] Executing task 'getDiskUsage'
[192.168.122.169] run: date
[192.168.122.169] out: Sun Nov 10 13:17:16 EST 2019
[192.168.122.169] out: 

[192.168.122.169] run: df -H
[192.168.122.169] out: Filesystem               Size  Used Avail Use% Mounted on
[192.168.122.169] out: devtmpfs                 947M     0  947M   0% /dev
[192.168.122.169] out: tmpfs                    964M     0  964M   0% /dev/shm
[192.168.122.169] out: tmpfs                    964M  9.7M  954M   2% /run
[192.168.122.169] out: tmpfs                    964M     0  964M   0% /sys/fs/cgroup
[192.168.122.169] out: /dev/mapper/centos-root  7.7G  5.6G  2.1G  73% /
[192.168.122.169] out: /dev/vda1                1.1G  298M  766M  29% /boot
[192.168.122.169] out: tmpfs                    193M   17k  193M   1% /run/user/42
[192.168.122.169] out: tmpfs                    193M     0  193M   0% /run/user/1000
[192.168.122.169] out: 

Current Disk Usage at Sun Nov 10 13:17:16 EST 2019
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 947M     0  947M   0% /dev
tmpfs                    964M     0  964M   0% /dev/shm
tmpfs                    964M  9.7M  954M   2% /run
tmpfs                    964M     0  964M   0% /sys/fs/cgroup
/dev/mapper/centos-root  7.7G  5.6G  2.1G  73% /
/dev/vda1                1.1G  298M  766M  29% /boot
tmpfs                    193M   17k  193M   1% /run/user/42
tmpfs                    193M     0  193M   0% /run/user/1000

Done.
Disconnecting from 192.168.122.169... done.
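Since the stated goal is to collect this information from several machines for storage planning, one possible extension (a sketch only, not a required part of this lab) is to append each worker's report to a file on the controller; the file name disk_usage_report.txt is arbitrary, and env.host_string is assumed to hold the host fab is currently operating on:

# sketch: append each worker's disk usage report to a file on the controller
def saveDiskUsage():
    current_time = run('date')
    diskusage = run('df -H')
    report = 'Disk usage for ' + env.host_string + ' at ' + current_time + '\n' + diskusage + '\n'
    with open('disk_usage_report.txt', 'a') as report_file:
        report_file.write(report)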

Update all the rpm packages on remote worker

Let's pretend that we need to update the software packages installed on several machines due to security patches. Let's name the task 'performSoftwareUpdate()':
# to perform software update on remote worker
def performSoftwareUpdate():
    status = run('yum update -y')
    print(status)
Do a syntax check with the "fab -l" command.
When you try to run it the same way as before, you will encounter an issue, as shown below:
[rchan@centos7 lab8]$ fab --fabfile=fabfile.py -H 192.168.122.169 performSoftwareUpdate
[192.168.122.169] Executing task 'performSoftwareUpdate'
[192.168.122.169] run: yum update -y
[192.168.122.169] out: Loaded plugins: fastestmirror, langpacks
[192.168.122.169] out: You need to be root to perform this command.
[192.168.122.169] out: 


Fatal error: run() received nonzero return code 1 while executing!

Requested: yum update -y
Executed: /bin/bash -l -c "yum update -y"

Aborting.
Disconnecting from 192.168.122.169... done.
As you already know, you need superuser privileges in order to perform a software update on a Linux system. There are two ways to do this with Fabric. The first one is simple: edit your fabfile.py and change the env.user line as shown below:
env.user = 'root'
Save the fabfile.py with the change and run it again.
If you see the password prompt again, make sure that you can ssh from your controller as a regular user to your worker VM as root without a password.
The other way is to replace the run() function calls for commands that need superuser privileges with sudo() function calls in your fabfile.py. You are asked to investigate this in the final investigation of this lab; a brief sketch follows.
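As a preview, a version of performSoftwareUpdate() that uses sudo() instead of run() might look like the sketch below (it assumes the remote user has sudo privileges; a similar example appears in INVESTIGATION 4):

# sketch: perform the software update through sudo() instead of run()
def performSoftwareUpdate():
    status = sudo('yum update -y')
    print(status)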

Part 3: Setting and Checking Security Configuration

Recall that in our OPS courses we've been using iptables instead of firewalld, which is installed by default in CentOS. Let's make sure that our workers have that set up as well. In the same fabfile.py you've been using all along, add a new function like this:
# Will uninstall firewalld and replace it with iptables
def setupFirewall():
    run("yum -y -d1 remove firewalld")
    run("yum -y -d1 install iptables-services")
    run("systemctl enable iptables")
    run("systemctl start iptables")
That should by now look pretty obvious. On the worker you're going to uninstall firewalld, install iptables, and make sure that the iptables service is running.
Execute the function for worker1 and double-check that it worked.
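For example, the invocation below is a sketch that reuses the worker IP address from the earlier examples (substitute your own):

$ fab --fabfile=fabfile.py -H 192.168.122.169 setupFirewall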
**Warning** Do not do this on your VM on myvmlab. If you do, you may lock yourself out for good.

Check firewall configuration

To check the firewall configuration on your remote worker, you can retrieve its current settings by creating another Fabric task called "getFirewallConfig()". Add the following code to your fabfile.py:
def getFirewallConfig():
    fw_config = run("iptables -L -n -v")
    print(fw_config)
Try to run the getFirewallConfig() task the same way as before.
Troubleshoot if you encounter any issue.

INVESTIGATION 3: Multiplying your work

After completing all the previous parts of the lab, you should have a working fabfile.py with three working functions: getDiskUsage(), performSoftwareUpdate(), and getFirewallConfig().

** Optional ** You were asked to test them on worker1. Now let's run these three functions on all your workers at the same time. The command is almost the same, except for the list of IP addresses:

fab --fabfile=fabfile.py -H 192.168.122.169,192.168.122.170,192.168.122.171,192.168.122.172 getDiskUsage
Again, your IP addresses will be different but the command will be the same.
You can also run all three tasks on all the workers at the same time by adding a task like the following to your fabfile.py:
def doAllThree():
    getDiskUsage()
    getFirewallConfig()
    performSoftwareUpdate()
Then run the following command on your controller:
fab --fabfile=fabfile.py -H 192.168.122.169,192.168.122.170,192.168.122.171,192.168.122.172 doAllThree

Now imagine that you might have 10 tasks to be done on 10, 50, or 100 servers: could you do it without the automation?

INVESTIGATION 4 - Apply fabfile.py to your VM on myvmlab

Replace run() function calls with sudo()

Since your account on your VM on myvmlab is a regular user with sudo privileges, you need to make the following changes to your fabfile.py before applying it to your VM on myvmlab:
  • Change env.user from 'root' to your account on your VM on myvmlab.
  • Change all the commands that need superuser privileges from calling the run() function to calling the sudo() function instead. Here is an example of replacing run() with sudo():
    def getFirewallConfig():
        fw_config = sudo("iptables -L -n -v")
        print(fw_config)
Test your updated fabfile.py until you get the same result as when you apply it to your own worker VM.
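One way to test it (a sketch; replace <your_myvmlab_vm> with the host name or IP address of your VM on myvmlab) is to force the sudo password prompt up-front with the --initial-sudo-password-prompt option noted earlier:

$ fab --fabfile=fabfile.py -H <your_myvmlab_vm> --initial-sudo-password-prompt getFirewallConfig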

Create a Fabric task called makeUser()

The makeUser() function should perform the following:
  • create a new user called "ops435p" with home directory "/home/ops435p".
  • add it to the sudo group called "wheel".
  • add your professor's ssh public key to the file named "authorized_keys" in the ~ops435p/.ssh directory. Make sure that you set the proper permissions on both the directory ~ops435p/.ssh and the file "~ops435p/.ssh/authorized_keys" (a rough sketch of this task follows this list).
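A rough sketch of such a task is shown below; it is only a starting point, and [professor_ssh_public_key] is a placeholder that must be replaced with the actual public key string:

# sketch: create the ops435p account and authorize the professor's ssh public key
def makeUser():
    sudo('useradd -m -d /home/ops435p ops435p')
    sudo('usermod -aG wheel ops435p')
    sudo('mkdir -p /home/ops435p/.ssh')
    sudo('chmod 700 /home/ops435p/.ssh')
    # [professor_ssh_public_key] is a placeholder for the real key
    sudo('echo "[professor_ssh_public_key]" >> /home/ops435p/.ssh/authorized_keys')
    sudo('chmod 600 /home/ops435p/.ssh/authorized_keys')
    sudo('chown -R ops435p:ops435p /home/ops435p/.ssh')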
Add makeUser() to the final version of your fabfile.py.
Test the new task makeUser() on your local VM first, then deploy it to your VM on myvmlab.
After the successful deployment of the makeUser() task on your VM on myvmlab, ask your professor to verify and confirm that the new user account "ops435p" on myvmlab has been created correctly.

LAB 8 SIGN-OFF (SHOW INSTRUCTOR)

Have Ready to Show Your Instructor:
  • Complete all the parts of the lab and upload the version of your fabfile.py that works on your VM on myvmlab to Blackboard.