
Building the Servers for the Cloud Computing Lab

Required Hardware and Software

Below is a listing of the hardware and software used to set up our test environment in the lab.

Required client/server resources:

  • Server 2008 R2 box with Active Directory Domain Services (ADDS), DNS and DHCP
  • Server 2008 R2 box to serve as Additional Domain Controller
  • Server 2008 box to host the TFTP service and vCenter Server/vSphere Client
  • ESXi 5.1 server
  • PXE bootable client workstation(s)

Required implementation software:

  • ESXi 5.1 installation image (offline depot .zip)
  • vCenter Server 5.1.0b (we deployed the vCenter Server Appliance OVF) and the vSphere Client
  • VMWare PowerCLI
  • VMWare Auto Deploy GUI fling
  • TFTPd64
  • ESXi-Customizer and the ESXi5 Community Packaging Tools (Vib2Zip)
  • Openfiler
  • NETLAB Academy Edition

Optional (but helpful) software:

  • A non-IE browser (Chrome, Firefox, etc.)
  • 7-zip file manager

A word on the above: the security settings in server deployments of Internet Explorer can make browsing to the various websites that host the software you need a headache. For security purposes it would be wise to obtain all the necessary software on a different workstation and import it using removable media. If convenience is more important, however, we suggest using an alternate browser; the 7-zip file manager utility, available from http://www.7-zip.org/, also provides a speedy alternative to unzipping archived folders in the usual manner.


Required Licenses to set up the Test Environment

Below is a table with the number of licenses needed to recreate our test environment. These numbers would need to be adjusted and scaled up for a production environment.

SOFTWARE                              LICENSES NEEDED
Windows Server 2008 R2                2
ESXi 5.1 (server and host servers)    21
VM licenses                           20
vCenter Server 5.1.0b                 1

Currently Running in Test Environment:

  • Trial versions of Windows Server 2008 R2 and vCenter Server 5.1.0b
  • Trial licenses for VMs: Windows XP, Windows 7, Ubuntu, SUSE 11
  • Student DreamSpark access for ESXi 5.1
  • NetLab Academy Edition supporting 16 active pods concurrently

Cost Considerations for Expansion into Production Environment:

  • DTCC has an existing Enterprise license agreement with Microsoft
  • Beyond DreamSpark licenses, DTCC may need an agreement with the VMWare IT Academy at roughly $750/year
  • Potential expansion of NetLab to 32 active pods requires NetLab Professional Edition, a one-time upgrade fee of $13,700


Setting Up a Deployment Server

This section assumes that the environment already has a server set up to provide DNS, DHCP and ADDS services to the network.

  1. To prepare the Auto Deploy server, first set up TFTP. The TFTP feature provided by Windows Server 2008 doesn’t get along well with vCenter and vSphere, so we opted to use the TFTPd64 application (available from http://tftpd32.jounin.net/tftpd32_download.html). To configure TFTPd64 once it’s installed:
    • Create a folder named “TFTP_Root” in the root directory of C:\
    • Open the application and click the TFTP Server tab, then click Options and ensure that PXE Compatibility is checked.
    • Return to the TFTP Server tab and click Browse to point the current directory parameter at your TFTP_Root directory. Be sure to set the server interface parameter to the address of the DNS/DHCP/ADDS server the application is installed on. Leave the application running.
  2. Important: Create inbound and outbound firewall rules that allow tftpd64.exe to reach the greater network (see the sketch after this list).
  3. Download and install VMWare PowerCLI. The package provides both the 32- and 64-bit versions of PowerCLI, so be sure to rename the correct version’s shortcut to something obvious in order to avoid mistakes.
  4. Install the VMWare vSphere client and Auto Deploy features, accepting the default settings in all cases where applicable.
  5. Once the TFTP server is configured and the vSphere client is installed, preparation for Auto Deploy can begin. First, open Windows PowerShell and input the command Set-ExecutionPolicy RemoteSigned. This will enable you to run PowerCLI commands and administer Auto Deploy in the GUI.
  6. Second, download the Auto Deploy GUI fling from VMWare and install it.
  7. Provide the necessary TFTP boot information. Go into the Home menu in the vSphere Client and click the Auto Deploy button (the green arrow). There should be a link that reads “Download TFTP Boot ZIP” (NOTE: Be sure to go into Internet Explorer and click Tools > Internet Options > Security > Custom Level and then select the radio button that enables file downloads, or you will be unable to obtain the file.) Once the file is downloaded, unzip it and place its contents in the TFTP_Root folder you created earlier.
  8. Use the Vib2Zip application downloaded with the ESXi customizer to convert the NIC drivers packaged in the .vib file into .zip format. The ESXi customizer can be found at http://www.v-front.de/p/esxi-customizer.html, and Vib2Zip can be downloaded from http://www.v-front.de/p/esxi5-community-packaging-tools.html. These pages also have directions on how to use these utilities.
  9. In the vSphere Client’s home menu, click the Auto Deploy button. In the Software Depot tab, right click the upper frame and select Add .zip Depot. Navigate to the folder containing your ESXi depot and add it, then do the same with the newly converted drivers. Following these steps, right-click again and select Add HA Depot to get the required files from VMWare’s servers.
  10. In the Image Profile tab, right-click the VMWare-ESXi-yourversion-standard depot and select Clone to create an editable copy of the depot with whatever name you choose. This will be the image that will ultimately deploy to the PXE booted workstations. Be sure to specify that the copy is community-supported in the drop-down menu so that you can add non-VMWare software packages to the image. When the client asks if you wish to commit this change, click NO.
  11. Right-click the new image and select Add Software Packages, then specify the drivers you converted from the .vib file and commit the change.
  12. In the Deploy Rule tab, right-click the upper frame and create a new rule, specifying the domain on which the rule will be active and the IP range corresponding to the DHCP scope you set aside for PXE booting your workstations. (PowerCLI equivalents of steps 9 through 13 are sketched after this list.)
  13. After the rule has been created, right-click it and set it to Active.
  14. Attempt to PXE boot a workstation. If the Auto Deploy configuration was successful, a boot dialog should run automatically, ending with “Sleeping for five minutes and then rebooting.”
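
The firewall rules from step 2 can be created from an elevated command prompt, and steps 9 through 13 have PowerCLI equivalents in the Image Builder and Auto Deploy cmdlets. The sketch below is a minimal outline, not a substitute for the GUI workflow above; the install path, depot paths, image and rule names, and IP range are all placeholders for your own values.

    # Step 2: allow tftpd64.exe through the Windows firewall
    # (the install path shown is an assumption -- adjust to match yours).
    netsh advfirewall firewall add rule name="TFTPd64 in" dir=in action=allow program="C:\Program Files\Tftpd64\tftpd64.exe"
    netsh advfirewall firewall add rule name="TFTPd64 out" dir=out action=allow program="C:\Program Files\Tftpd64\tftpd64.exe"

    # Steps 9-13 from the PowerCLI console, connected to the vCenter server:
    Connect-VIServer -Server vcenter.yourdomain.local
    Add-EsxSoftwareDepot C:\Depots\ESXi-5.1-offline-depot.zip   # your ESXi depot
    Add-EsxSoftwareDepot C:\Depots\converted-nic-drivers.zip    # drivers converted with Vib2Zip
    New-EsxImageProfile -CloneProfile "ESXi-5.1.0-yourversion-standard" -Name "LabImage" -AcceptanceLevel CommunitySupported
    Add-EsxSoftwarePackage -ImageProfile "LabImage" -SoftwarePackage "your-nic-driver-package"
    New-DeployRule -Name "LabRule" -Item "LabImage" -Pattern "ipv4=192.168.1.100-192.168.1.120"
    Add-DeployRule LabRule   # activating the rule makes it live for PXE clients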


Setting Up the Primary ESXi Server
  1. After installing ESXi, you can get into the administration menus by pressing F2. Set up the management network by:
    • choosing a network adapter;
    • choosing DHCP (not recommended) or static IPv4 addressing;
    • configuring IPv6 addressing (we disabled IPv6 in our lab); and
    • setting an IP address, netmask and preferred DNS server, as well as a DNS suffix (example: esxi.com) and hostname.
  2. Note that the older Optiplex 755 computers will give you some trouble when installing ESXi 5.1 unless you first go into the machine’s BIOS and, in the security section, make sure that Execute Disable is set to ON. Also be sure to use the ESXi customizer you downloaded earlier to add the proper drivers to your install image, or the installation process won’t recognize the computer’s NIC.
  3. In the vSphere client, enter the new IP address of your ESXi server and its login credentials to begin remote administration (a PowerCLI alternative is sketched below).
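
If you would rather work from the command line, the same first connection can be made with PowerCLI. This is a minimal sketch assuming the host answered at the hypothetical address 192.168.1.10; substitute your own address and credentials.

    # Connect directly to the new ESXi host (hypothetical address and credentials).
    Connect-VIServer -Server 192.168.1.10 -User root -Password 'YourPassword'

    # Quick sanity check: list the host with its connection state and version.
    Get-VMHost | Select-Object Name, ConnectionState, Version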

Setting up the vCenter Server Appliance
  1. In vSphere:
    • Highlight your main ESXi server and click File > Deploy OVF Template.
    • Use Browse to navigate to the directory where your vCenter Server Appliance files are stored.
    • Select the .ovf file and click Open.
    • Click through to note the appliance’s installation requirements and then name your appliance.
    • Select a datastore for the appliance to reside on and click through to the disk format screen, selecting the format that best meets your needs (in our test environment, we used the “Thick Provision Lazy Zeroed” option).
    • Choose a network for the appliance to reside on and in the next screen set its IPv4 information.
      Once these steps are completed, the appliance is ready for deployment.
  2. After the appliance is finished building, boot it up in the vSphere client and take note of the screen that displays when the VM finishes powering on.

  [Screenshot: vSphere client screen]

  3. Open a Chrome window and navigate to https://(the IP address of your appliance):5480, then log in as root with the default password of vmware. You will see a screen detailing license information for the appliance, followed by a prompt to use the setup wizard to establish an initial configuration of VMWare Single Sign On and other features. For our purposes, accepting the defaults was sufficient.

  [Screenshot: vCenter Server Appliance summary screen]

  4. In the Network tab, click the Address button. Under the eth0 settings, set the IPv4 Address Type to static, then input the desired address and subnet mask if the current settings are not satisfactory.

  [Screenshot: vSphere network address settings screen]

  5. On the same screen, you can also check the address settings for the appliance’s default gateway and preferred DNS server; be sure to change the appliance’s hostname as well. NOTE: After each configuration change in the appliance, be sure to click the "Save Settings" button on the right.
  6. In the System tab, click the Time Zone button and use the drop-down list to select your local time zone.
  7. In the Admin tab, click the Toggle SSH setting button on the right and then reboot the appliance.

  [Screenshot: vSphere Admin tab]

  8. Once the appliance has finished rebooting, enter the command line (you can use the same credentials as before) and enter the command useradd -m NETLAB. This will create a new user in the appliance with the name NETLAB. To set this new user’s password, enter passwd NETLAB and then input and confirm your new password at the prompts. Note that you will receive a warning if you attempt to set a password that is not strong enough, but this will not prevent you from actually using that password.
  9. In the vSphere client, select your primary ESXi server and then click the Configuration tab in the right pane. In the Software section below, click Virtual Machine Startup/Shutdown and then click Properties in the upper right.

  [Screenshot: vSphere ESXi server configuration tab]

  10. In this window, you can move the server appliance and any other critical VMs up to the automatic startup section so that they will automatically come up after the ESXi server finishes booting.

  [Screenshot: virtual machine startup/shutdown settings screens]

  11. Close your vSphere client and log in to the IP address of the server appliance. Enter Hosts and Clusters, right-click the vCenter host and select New Datacenter, giving the newly created object an appropriate name. Next, navigate to Roles, right-click the Administrator role, select Clone, and name the new role NETLAB. Return to Hosts and Clusters, right-click the new datacenter and select Add Permission. Click the Add button, select the NETLAB user from the list that appears, and then use the drop-down list in the upper right to select the NETLAB role and click OK. (A PowerCLI sketch of this step appears below.)
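
For those who prefer scripting, the datacenter, role, and permission objects from step 11 can also be created with PowerCLI once you are connected to the appliance. The sketch below is a minimal outline under the assumption that the appliance answers at the hypothetical address 192.168.1.50 and that the names match those used above; local user naming may require domain qualification in your environment.

    # Connect to the vCenter Server Appliance (hypothetical address).
    Connect-VIServer -Server 192.168.1.50 -User root -Password 'vmware'

    # Create a datacenter at the root folder of the inventory.
    $dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "LabDatacenter"

    # Clone the Administrator role by copying its privileges into a new NETLAB role.
    New-VIRole -Name NETLAB -Privilege (Get-VIPrivilege -Role (Get-VIRole -Name Admin))

    # Grant the NETLAB user the NETLAB role on the new datacenter.
    New-VIPermission -Entity $dc -Principal NETLAB -Role NETLAB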

Setting up an Openfiler SAN

An important thing to remember when building your Openfiler VM is that the minimum requirements for disk space and memory are inadequate. We recommend a minimum of 1GB of RAM and about 10GB of disk space before building it out, taking time to remove the floppy drive, USB controller, sound card and printer components.

  1. Once the VM is built out, open its console in vSphere and press Enter to start automatic installation.
  2. Select the appropriate system language, accept that “ALL DATA” will be removed when the VM’s hard disk is formatted for Openfiler, then continue and accept the default EXTLINUX boot loader. After configuring the VM’s networking, select your time zone and then set the root password. Default settings are sufficient for the remainder of the install; allow the VM to reboot, then power it off once the login prompt appears.

  [Screenshot: Openfiler setup screen]

  3. Use vSphere to edit the VM’s settings and add a second virtual hard disk. In our environment this is about 100GB.
  4. Open Chrome and go to https://(your openfiler IP address):446, then enter the default login credentials (openfiler / password) to get into Openfiler’s management client. Select the System tab and edit Openfiler’s network access configuration; be sure to create an ACL by adding the IP and subnet mask of your network and then clicking Update.

  [Screenshot: Openfiler network access configuration screen]

  5. In the Services tab, enable NFS and start the service.
  6. In the Volumes tab, go to the bottom of the page and create a new physical volume with the following values (select the 100GB partition you provisioned earlier by clicking /dev/sdb):
    • Mode: Primary
    • Partition type: Physical volume
    • Starting cylinder: (default)
    • Ending cylinder: (maximum possible)
  7. Return to the Volumes tab and click on the new volume that you created, then click the Volume Groups link on the right. In the Create a new volume group section, follow these steps:
    • Enter a name for your volume group
    • Check the box associated with the physical volume you just provisioned
    • Click Add volume group
  8. When this is completed, click the Add Volume link on the right and scroll down to Create a volume in (your volume group’s name). Enter a name for the volume and a brief description, then use the slider to set the Required Space parameter to use all of the space available within the volume group. Leave the Filesystem/Volume type parameter at its default setting. Once you click through, the new volume should appear, represented as a green circle.

  [Screenshot: Openfiler volumes screen]

  9. After setting up the new volume, select the Shares tab and click on the volume you just configured. Enter a name in the subfolder prompt that appears, then click Create Sub-Folder. This is the directory that will eventually serve as remote storage for your diskless ESXi hosts. Click the Make Share button, then change its settings in the following manner:
    • Change the Share Access Control mode to Public guest access
    • Scroll down to Host access configuration and select the radio buttons for the following parameters:
      • SMB = RW
      • NFS = RW
      • HTTP(S) = NO
      • FTP = NO
      • Rsync = NO
    • Click the Restart services checkbox
    • Click Edit and change UID/GID Mapping to no_root_squash
    • Click Update

Now the folder path for remote storage should be visible.
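
For reference, the host-access settings above amount to a standard read/write NFS export with root squashing disabled; on the underlying Linux system this corresponds to an export entry along the lines of /mnt/vg0/vol0/share 192.168.1.0/255.255.255.0(rw,no_root_squash), where the volume path and network shown are hypothetical placeholders for your own values.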



Provisioning Computers to Act as Diskless Hosts
  1. Open the server manager on your DNS/DHCP server and navigate to the DHCP section. You will need to create a DHCP reservation, tied to the device’s MAC address, for each diskless host computer in your environment (a scripted sketch of these steps follows after this list). Be sure to document the reservations somewhere other than in the listing on the server. Power on each computer to make sure that it auto-deployed with the correct IP configuration, then return to the vSphere client and add the hosts to the appliance’s datacenter.
  2. For each host the process is the same:
    • Select the host.
    • Click the Configuration tab, then select Storage from the Hardware section.
    • Click Add Storage, then select the Network File System radio button.
    • Click Next.

  [Screenshot: Add Storage screen]

  3. Enter the IP address of your Openfiler SAN, the path of the shared folder you created, and the name of the datastore associated with that file path.

  [Screenshot: datastore screen]

  4. In each host’s Configuration tab, click the Security Profile option in the Software section and then click Properties on the right.

  [Screenshot: Security Profile screen]

  5. Scroll down the list of firewall ports that appears until you find the VM Serial Port Connected Over Network checkbox and make sure that it is selected.

  [Screenshot: firewall properties screen]

  6. Return to the Configuration tab and select Networking from the Hardware section. Click Add Networking, then select the VMKernel radio button. Select the option to use vSwitch0 (NOTE: uncheck any options associated with creating a new virtual switch before doing so), then click through until you are given the option to have the new VMkernel port obtain its IP information automatically. Leave all other options at their defaults and finish building the new VMkernel port. Immediately go back into the Add Networking menu and add a second VMkernel port, again with automatically obtained IP settings, but this time change the network name to vMotion and check the option box that allows the port to be used for vMotion traffic. These settings enable the diskless hosts to communicate with and access remote storage.

  [Screenshot: vSphere switch screen]

  7. Once all necessary configuration steps are complete, right-click each host and select the Host Profile option, then click Create Profile from host. Name the profile something distinctive (the assigned IP address of the host works well), then right-click and select Host Profile again, but this time click the Manage Profile option. A list of host profiles will appear; select the one you created from the specific host you are working with and apply it. (Once a host has a profile applied, you can check which profile is associated with the host by selecting the Manage Profile option again; this is also how you will change or remove profiles from the host.)
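
For reference, the DHCP reservation from step 1 and the per-host storage (steps 2 and 3), firewall (step 5) and networking (step 6) configuration can also be scripted. The sketch below is a minimal outline under the assumptions that the DHCP scope is 192.168.1.0, one host was reserved 192.168.1.101, and the Openfiler SAN at 192.168.1.20 exports /mnt/vg0/vol0/share; every name and address here is a placeholder for your own values.

    # Step 1 (run on the DHCP server): reserve an IP for a diskless host by MAC address.
    netsh dhcp server scope 192.168.1.0 add reservedip 192.168.1.101 001122334455 "esxi-diskless-01"

    # Steps 2-3 (PowerCLI, connected to the appliance): mount the Openfiler NFS share.
    $vmhost = Get-VMHost 192.168.1.101
    New-Datastore -VMHost $vmhost -Nfs -Name "OpenfilerDS" -NfsHost 192.168.1.20 -Path "/mnt/vg0/vol0/share"

    # Step 5: enable the serial-port firewall exception (match the name shown in the GUI).
    Get-VMHostFirewallException -VMHost $vmhost -Name "VM serial port connected over network" |
        Set-VMHostFirewallException -Enabled $true

    # Step 6: add the two VMkernel ports on vSwitch0; omitting -IP and -SubnetMask
    # assumes DHCP-assigned addressing, and the second port is enabled for vMotion.
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch vSwitch0 -PortGroup "VMkernel"
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch vSwitch0 -PortGroup "vMotion" -VMotionEnabled $true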


Registering Your Infrastructure with NETLAB
  1. In the administrative console:
    • Click Virtual Machine Infrastructure.
    • Click Virtual Datacenters and Management Agents and select Add Datacenter.
    • In the form that appears, add the relevant required information about your datacenter. In most cases this will be the datacenter’s name, the IP address of the server appliance associated with the datacenter (this value goes in the Hostname field), and the appliance’s root login credentials.
    • Click Add Datacenter to complete registration.
    • To verify that the datacenter is properly registered, click the Test button to check connectivity.
  2. To add an ESXi host:
    • Return to the Virtual Machine Infrastructure screen and click Virtual Machine Host Servers.
    • After clicking the Add Host button, select the relevant datacenter from the list and wait for NETLAB to detect the hosts associated with that datacenter.
    • After clicking on a host you’ll be prompted to fill in several fields, including:
      • hostname (use the IP address of the host);
      • outside IP address (irrelevant unless you are running a dual-homed network);
      • inside IP address (the same as the IP address of the host);
      • inside vSwitch name (this can be left blank unless you are running a dual-homed network); and
      • communication path.
  3. To add VMs to NETLAB’s inventory, return to Virtual Machine Infrastructure and select Virtual Machine Inventory. Scroll to the bottom of the window, click Import Virtual Machines, and select your datacenter.

  [Screenshot: Virtual Machine Inventory screen]

  4. NETLAB will automatically scan the datacenter for unregistered VMs and present a list with check boxes you can use to select the virtual machines you want before clicking Import Selected Virtual Machines.

  [Screenshot: Import Virtual Machines screen]

  5. Check the configuration settings for each VM (the defaults are usually fine) and click Import Virtual Machines.

  [Screenshot: virtual machine configuration screen]

  6. Clicking OK will return you to NETLAB’s virtual machine inventory.


Creating and Assigning a Pod In NETLAB
  1. In the administrative console click Equipment Pods. Scroll down to the bottom of the window and click Add a Pod, then select the type of pod you want to create.

  [Screenshot: Add a Pod screen]

  2. For the purposes of our environment, we used the single-host pod version found about a third of the way down the selection menu.

  [Screenshot: pod version screen]

  3. After selecting the pod type, choose a pod number and, on the following screen, choose a name for the equipment pod.

  [Screenshot: Pod Wizard administration screen]

  4. Once the pod is created, click the magnifying glass next to the PC assignment.

  [Screenshot: pod management admin screen]

  5. Use the drop-down menus to specify that the pod will use the virtual machine inventory, the datacenter associated with your server appliance, and the name of the virtual machine you want to associate with the pod, respectively.

  [Screenshot: pod PC configuration screen]

  6. Return to the administrative console and click Manage Classes.

  [Screenshot: Manage Classes screen]

  7. Select the class you want to associate your pod with, then scroll down and click the Pod Assignment button.

  [Screenshot: pod assignment screen]

  8. Find the ID number of your pod and click it, then click the Add Class Level Pod Assignment button near the bottom of the window.

  [Screenshot: class-level pod assignment screens]

  9. Assign the pod either to teams or to individual users.

  [Screenshot: pod assignment screen]

Note that each addition must be performed separately. Once the pod has been given assignments, it’s ready for reservation by the associated teams or users.

