OSTEP 01-2021 Startup Activity
Challenge
1. Set up two VMs on separate physical machines using KVM-QEMU, both serving the same web content on a non-public network. Configure these machines for fail-over, so that if one goes offline, the other one will take over for it.
2. On a third physical machine, set up a VM with a publicly-accessible way to reach the web content.
3. Configure each VM to start automatically when the host boots. Ensure that the hosts and the VMs are well-secured (no unnecessary services, SELinux enabled, and so forth).
4. Document the solution, and ensure that everyone on the team is able to recreate or extend the solution if needed.
Teams' Solution
Teams A and B implemented very similar solutions, so this is a refined and combined version of the solutions both teams provided.
Setup VMs
Prerequisites
1. Download the Server edition ISO of the latest Fedora version. At the time of writing, Fedora 33 was the latest version.
2. Create a virtual disk image for the VM using `dd if=/dev/zero of=[path_to_image] bs=1M count=[size_of_image_in_MB]`.
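For instance, a 20 GB image could be created like this (the path and size are hypothetical; adjust as needed):

```bash
# Create a 20 GB raw disk image filled with zeros
# (path and size are placeholders):
dd if=/dev/zero of=/var/lib/libvirt/images/team-vm1.img bs=1M count=20480
```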
Setup Using GUI
Note: To use the GUI setup method from a remote location, enable X11 forwarding and compression when setting up the SSH session: `ssh -XC [target_address_or_FQDN]`. Since `virt-manager` can only be used with root privileges, ensure that your magic cookie inside `~/.Xauthority` is added to `/root/.Xauthority`; details are located here.
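One way to copy the cookie over (a sketch; the home directory path is an assumption):

```bash
# As the regular user: show the cookie for the forwarded display
xauth list "$DISPLAY"
# As root: merge the user's cookies into /root/.Xauthority
# (assumes the user's home directory is /home/user)
xauth merge /home/user/.Xauthority
```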
Start `virt-manager` and begin creating a new VM. Follow the instructions to provide the necessary information; the following notes may help:
- If automatic OS detection does not complete, manually select the closest OS type or the generic type.
- Since the VMs will not be performing extensive computing, RAM was set to 4096 MB and the CPU count to 2.
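For reference, roughly the same VM can also be created without the GUI using `virt-install`; the name, paths, and ISO below are hypothetical:

```bash
# Non-GUI alternative to the virt-manager wizard (placeholder values):
sudo virt-install --name team-vm1 --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/team-vm1.img \
  --cdrom /path/to/Fedora-Server-dvd-x86_64-33.iso \
  --os-variant fedora33
```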
Forbid SSH Login As Root User
Inside the VM, do the following:
1. Edit the `/etc/ssh/sshd_config` file to update or add `PermitRootLogin no`.
2. Restart the SSH server with `systemctl restart sshd`.
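The edit can also be scripted; the `sed` pattern below is one possible approach, assuming the stock `sshd_config` layout:

```bash
# Set PermitRootLogin to no, whether the directive was commented out or not:
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sshd -t                      # validate the config before restarting
sudo systemctl restart sshd
```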
Enable AutoStart
Once a VM is running, do the following in `virt-manager`:
1. Right-click on the VM and select Open.
2. In the newly opened window, go to hardware details (light bulb icon) and select Boot Options.
3. Tick the Autostart box to enable starting the VM on host boot up.
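The same setting can be toggled without the GUI (a sketch using `virsh` with a placeholder VM name):

```bash
virsh autostart [VM_name]                      # enable autostart
virsh dominfo [VM_name] | grep -i autostart    # verify the flag
```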
Known Problems
- There was a permission problem when attempting to start the `default` virtual network. More specifically, the user managing the virtual assets was unable to access the file `/var/lib/libvirt/dnsmasq/default.conf`. Solution: execute `restorecon -rv /var/lib/libvirt` to reapply the default SELinux contexts to the files and directories used for virtualization.
- `virt-manager`, or any GUI that requires root privileges, may fail to start in a long-lived SSH session. Solution: restart the SSH session and re-add the user's magic cookie to root's `.Xauthority` file.
- Ensure that the MAC address of each VM's NIC is unique after duplicating VMs. VMs that share a MAC address on their NICs will be unable to reach each other. Solution: remove the MAC address from the XML definition; libvirt will then regenerate a different MAC address for each NIC (see the sketch below).
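A quick sketch of that last fix (the VM name is a placeholder):

```bash
# Open the domain XML, delete the <mac address='...'/> line inside the
# <interface> section, then save and exit; libvirt generates a fresh
# MAC address when the domain is redefined:
virsh edit [VM_name]
```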
Setup macvtap Bridge
Steps to expose a VM to the research network:
1. Shut down the VM with `virsh shutdown [VM_name]` if it is currently powered on.
2. Using `virsh edit [VM_name]`, change the VM's `<interface>` section to the following:

```xml
<interface type='direct'>
  <source dev='eno1' mode='bridge'/>
  <target dev='macvtap0'/>
  <model type='e1000e'/>
  ...
</interface>
```
Notes:
- The MAC address and the `<address ...>` sub-tag should remain unchanged.
- The `<source ...>` sub-tag shown above should be inserted as-is, or should replace the original if one exists.
3. Start the VM with `virsh start [VM_name]`.
4. Inside the VM, execute the following command to set up a static address; fill in the blanks where necessary, i.e., the connection name and interface name (hint: use `ifconfig` to get the interface name). A filled-in example appears after this list.

```bash
nmcli con add type ethernet con-name [connection_name] ifname [interface_name] ip4 [unused_static_ipv4_address]/[CIDR] gw4 [gateway_ipv4_address] ipv4.dns "[space_separated_DNS_server_ipv4_addresses]"
```
Notes:
- The IPv4 addresses in the `nmcli` command should match those provided by the OSTEP team's network infrastructure spreadsheet.
5. Bring up the connection with `nmcli con up [connection_name] ifname [interface_name]`.
6. Confirm network access to and from the VM with `curl`, i.e., on a separate machine, run `curl [static_ip_of_the_VM]`.
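For illustration, here is the command from step 4 with hypothetical values filled in; the real addresses must come from the team's spreadsheet:

```bash
# Placeholder addresses; replace with values from the network spreadsheet.
nmcli con add type ethernet con-name research-net ifname ens3 \
  ip4 192.168.1.50/24 gw4 192.168.1.1 \
  ipv4.dns "192.168.1.1 192.168.1.2"
nmcli con up research-net ifname ens3
```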
Setup Apache Server
Inside the VM, do the following:
1. Install Apache Server with `sudo dnf install httpd`.
2. Create an `index.html` file inside the Apache Server public folder at `/var/www/html` with the template:

```html
<html>
  <head></head>
  <body>
    <h1>Team X Startup Page</h1>
    <h2>On [machine_name]</h2>
  </body>
</html>
```
3. Enable and start Apache Server with `sudo systemctl enable --now httpd`.
4. Confirm the web page is served inside the VM with `curl localhost`.
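A quick sanity check of both the service state and the page content (a sketch; the grep pattern matches the template above):

```bash
systemctl is-enabled httpd                 # should print "enabled"
systemctl is-active httpd                  # should print "active"
curl -s localhost | grep "Startup Page"    # should match the template heading
```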
Allow Apache Server Through Firewall
One of the objectives of this activity is to prevent public access to the Apache servers' contents. To achieve this, the team decided to use the internal zone of `firewalld` to restrict access.
1. Allow access only from within the same subnet:

```bash
sudo firewall-cmd --zone=internal --add-source=[subnet_address]/[CIDR]
```

Example: `sudo firewall-cmd --zone=internal --add-source=192.168.1.0/24` allows host addresses from `192.168.1.1` to `192.168.1.254`.
2. Allow the `http` service:

```bash
sudo firewall-cmd --zone=internal --add-service=http
```
3. (Optional) If the `http` service was previously allowed through the default zone, it should be removed:

```bash
sudo firewall-cmd --remove-service=http
```
Notes:
- Omitting the `--zone` option makes the command operate on the default zone.
4. With the command `curl [VM_ip_address]`:
- Confirm the web page is accessible from a machine within the subnet.
- Confirm the web page is not accessible from a machine outside the subnet.
5. When the firewall settings are satisfactory, use `sudo firewall-cmd --runtime-to-permanent` to permanently apply the changes.
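Before making the changes permanent, the resulting zone layout can be double-checked with standard `firewall-cmd` queries:

```bash
sudo firewall-cmd --get-active-zones          # internal should list the subnet as a source
sudo firewall-cmd --zone=internal --list-all  # http should appear under services
sudo firewall-cmd --list-all                  # the default zone should no longer allow http
```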
Setup Nginx
1. Install Nginx with `dnf install nginx`.
2. Edit the Nginx config file located at `/etc/nginx/nginx.conf` to add the following pieces:

```nginx
http {
    ...
    # Add the upstream block
    upstream backend {
        server [URI to Apache Server VM 1];
        server [URI to Apache Server VM 2];
    }

    server {
        ...
        # Add the location block to override root
        location / {
            proxy_pass $scheme://backend;
        }
        ...
    }
    ...
}
```
3. Allow Nginx to establish connections to the Apache Server VMs by executing `setsebool -P httpd_can_network_connect on`.
4. Enable and start the Nginx service with `systemctl enable --now nginx`.
5. Allow Nginx through the firewall by executing the following in order:

```bash
firewall-cmd --add-service=http
firewall-cmd --runtime-to-permanent
```
6. Confirm access to the public-facing VM from outside of the subnet with `curl [IP_address_of_the_public-facing_VM]`; the command should return content served by one of the Apache Server VMs.
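Nginx's default round-robin upstream behaviour also provides the required fail-over: when one backend stops responding, requests are passed to the remaining server. A quick way to validate the config and observe the balancing (the IP is a placeholder):

```bash
sudo nginx -t       # validate the edited config before (re)starting
# Repeated requests should alternate between the two team pages; stopping
# httpd on one VM should leave the other still answering:
for i in 1 2 3 4; do curl -s http://[IP_address_of_the_public-facing_VM]/; done
```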