
CDOT Wiki β

Historical Hera Config

Revision as of 13:06, 6 December 2006 by Bhearsum (talk | contribs) (Roadmap: checkpoint save)

Mozilla@Seneca Cluster Administration

Description

This project page is for all system and network administration tasks on the Mozilla cluster at Seneca.

Leader(s)

Contributor(s)

None as of yet.

Roadmap

Documentation

Documentation is extremely important in this project. Because many people have administrative access to the cluster machines, up-to-date soft copy documentation is necessary. Anyone who needs to know something about the cluster should be able to find it in the documentation.

So far, there are three pages that must be kept up-to-date: Forwarded Ports, List of Cluster Users, and Cluster Machine Tasks.

  • Any time a new port is forwarded, the Forwarded Ports page must be updated. This page is mainly a reference for users in case they forget which port to connect to for ssh, http, etc.
  • Any time a new user is given access to a cluster machine, or to a VM running on a cluster machine, the List of Cluster Users must be updated. This is just a quick reference page.
  • Cluster Machine Tasks is the most important of the three. It tracks exactly who has access to which machine or VM, what servers are running on them, and, in the case of VMs, their IP addresses.

This is something that everyone needs to keep in mind when working on the cluster, not just the administrators.

Real World IPs

This is a high-priority item that we do not have direct control over. The people in charge are working to make it happen, hopefully soon.

When we do get them a few things must happen:

  • Firewalls must be put in place on all physical machines. It will be extremely important to lock them down tightly.
  • Port forwards from the physical machines to their VMs must be set up.
  • The Forwarded Ports page must be updated with the new port numbers.
  • We need to find out whether ACS needs to be informed when new port forwards are set up.
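The first two steps above could be sketched roughly as follows. This is only an illustration: the interface name, VM address, and port numbers are placeholders, not the cluster's real values, and the script echoes the iptables commands rather than applying them.

```shell
#!/bin/sh
# Sketch: lock down a physical machine once it has a real-world IP,
# then forward one port through to a VM. All values are placeholders.
WAN_IF=eth0          # assumed external interface
VM_IP=192.168.0.10   # assumed internal VM address
SSH_PORT=22022       # assumed externally forwarded port

run() { echo "iptables $*"; }   # echo only; drop the echo to apply for real

# Default-deny inbound; allow established traffic and ssh to the host itself.
run -P INPUT DROP
run -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
run -A INPUT -i "$WAN_IF" -p tcp --dport 22 -j ACCEPT

# Forward $SSH_PORT on the physical machine to ssh on the VM.
run -t nat -A PREROUTING -i "$WAN_IF" -p tcp --dport "$SSH_PORT" \
    -j DNAT --to-destination "$VM_IP:22"
run -A FORWARD -d "$VM_IP" -p tcp --dport 22 -j ACCEPT
```

Whatever the final rules look like, the external port chosen here is exactly what needs to land on the Forwarded Ports page.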

Windows and Linux Generic VMs

As cluster usage grows, it is important to have a quick and easy way to bring up new machines. Generic Ubuntu images have already been made, but their quality is currently unknown; some testing should be done to make sure they are stable and complete. I would like to have this done before the start of the Winter 2007 semester, as there will probably be an influx of VM requests from new DPS909 students.

When the images are created, the steps for creating a VM and for bringing up a new VM should be documented. This is listed in the table in the Details section.
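Until that documentation exists, the bring-up could be sketched as a small helper like the one below. The paths and the logging scheme are assumptions for illustration, not the cluster's actual layout.

```shell
#!/bin/sh
# Sketch of a helper for bringing up a new VM from a generic image.
# IMAGE_DIR and VM_ROOT are assumed paths, overridable via the environment.
IMAGE_DIR=${IMAGE_DIR:-/vm/images/ubuntu-generic}  # assumed generic image location
VM_ROOT=${VM_ROOT:-/vm/machines}                   # assumed home for live VMs

new_vm() {
    name=$1
    # Copy the generic image into place under the new machine's name.
    cp -r "$IMAGE_DIR" "$VM_ROOT/$name"
    # Leave a trail so the wiki pages can be updated afterwards.
    echo "$(date +%F) created $name from $IMAGE_DIR" >> "$VM_ROOT/provision.log"
    echo "Reminder: set the hostname, forward ports, and update the" \
         "List of Cluster Users and Cluster Machine Tasks pages."
}
```

The reminder at the end matters as much as the copy: every new VM has to show up in the documentation pages listed above.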

Details

  • Task: Test a Buildbot TryScheduler with MailNotifier. Priority: Low. Status: Getting errors from CVS; need to e-mail the mailing list again.
  • Task: Document all cluster machine tasks. Priority: High. Status: Ongoing.
  • Task: Create new Windows images. Priority: Low. Status: Waiting for enough things to add to make it worthwhile. So far the following are needed: autoconf, ssh, wget.
  • Task: Real world IPs for all machines. Priority: High. Status: Waiting on ITT.
  • Task: Investigate high latency on the cluster. Priority: Low. Status: ACS did network upgrades on the cluster; this is tentatively solved.
  • Task: Create clean images of commonly used machines (Ubuntu, Windows 2003, others?). Priority: Medium. Status: Ubuntu images done. Not sure how to do Windows images yet because of product key issues.
  • Task: Document the steps involved in making a generic VM, including the pre-installed software. Priority: Medium.
  • Task: Document the steps involved in bringing up a new VM, covering configuration changes as well as who needs to be notified. Is there a list of things that ACS needs yet? Priority: Medium.
  • Task: Find a better way to give VNC access to VMs; ideally GDM running on a VNC port. Priority: Medium. Status: Justin is currently looking into this.
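For the GDM-on-a-VNC-port item, one approach from this era is to enable XDMCP in GDM and have xinetd spawn Xvnc sessions that query the local XDMCP server. The fragments below are a sketch only: file locations, package availability, and Xvnc flags vary by distribution, so they would need checking against our Ubuntu images, and port 5901 is a placeholder.

```
# /etc/gdm/gdm.conf (location varies by distro) -- let GDM answer XDMCP queries
[xdmcp]
Enable=true

# /etc/xinetd.d/vncdesktop -- spawn an Xvnc session per connection on port 5901
service vncdesktop
{
        type            = UNLISTED
        port            = 5901
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = nobody
        server          = /usr/bin/Xvnc
        server_args     = -inetd -query localhost -once -geometry 1024x768 -depth 16
}
```

With something like this in place, a VNC client connecting to port 5901 should be greeted by the GDM login screen; each VM would need its own forwarded port.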

Related Links

News

November 2006

bhearsum 13:16, 15 November 2006 (EST)

  • I redid the Ubuntu Server generic image as an Edgy install. Along with this, I made notes about how to make a generic install. I will be doing an Ubuntu Desktop install shortly, but I'm waiting on Justin's stuff re: VNC.

bhearsum 12:40, 24 November 2006 (EST)

  • The cluster was down for maintenance recently. Some network upgrades were done at that time. Nobody has felt any lag since this happened. The cluster lag is *tentatively* solved.