Cloud Computing Lab vSphere

Hardware and Licensing Requirements

This page catalogs the hardware, software, and licensing used to successfully test and ultimately build the Cloud Computing Lab. It is not necessarily a minimum-requirements document; some aspects may be achievable with a less robust configuration.

Required Hardware

CCL Master Server Box: Dell PowerEdge R710

  • BIOS 2.1.9
  • 2x Intel Xeon E5620, 2.40 GHz, quad-core
  • Level 2 cache: 4x 256 KB
  • Level 3 cache: 12 MB
  • Memory: 144 GB ECC DDR3, 1066 MHz
  • 4x 146 GB 10,000 RPM HDD
  • 4x 900 GB 10,000 RPM HDD
  • Hardware RAID controller: PERC H700 Integrated
  • 1x 10 Gb/s Network Interface Card (NIC) (additional)
  • 4x 1 Gb/s Ethernet ports
  • 2x 10 Gb/s Ethernet ports

CCL Switch Environment: 2x Cisco Catalyst 2960-S Switches (Cisco Catalyst 2960S-24TD-L)

  • 24 Gigabit Ethernet ports
  • 1G/10G SFP+ slots
  • USB interfaces for management and file transfers
  • LAN Base or LAN Lite Cisco IOS® Software feature set

CCL Lab Workstation Environment: 20 DELL Optiplex 790 Workstations

  • Windows 8
  • Intel i7 @ 3.40 GHz
  • 8GB RAM
  • 500 GB Hard Drive
  • Integrated NIC (Enabled w/PXE)


Required Software and Licensing

CCL Master Server Box OS: ESXi 5.5

CCL VMware Environment: vSphere 5.5
  • 1x vSphere 5.5 license
  • 21x ESXi 5.5 licenses (one for the server and one for each of the 20 hosts)
    • Host machines need at least 2 CPU cores
    • Host machines need a minimum of 4 GB of RAM


Setting up Auto Deploy

[Figure: Auto Deploy flowchart; described below]

Auto Deploy Flowchart Description:

In the case presented in the illustration, PXE works as follows:

  1. The target ESXi host (the PXE client) is booted.
  2. The target ESXi host makes a DHCP request.
  3. The DHCP server responds with the IP information and provides information about the location of a TFTP server.
  4. When the client receives the information, it contacts the TFTP server requesting the file that the DHCP server specified (in this case, the network boot loader).
  5. The TFTP server sends the network boot loader, and the client executes it.
  6. PXELINUX or gPXE searches for a configuration file on the HTTP server, and boots a kernel according to that configuration file.
  7. The client downloads the files it needs and then loads them.
  8. The system boots the ESXi installer.
  9. The installer runs as directed by the PXE configuration file.
  10. The installer uses the installation media depot stored on the network.
  11. ESXi is installed.
  12. Based on the loaded profile, the host is assigned to vCenter.

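The DHCP hand-off in steps 2 through 5 amounts to the DHCP server supplying a next-server address (option 66) and a boot file name (option 67), and the TFTP server then serving that file. As an illustrative sketch only (the addresses, pool, and paths here are examples, not this lab's values; the boot file name shown is the gPXE loader that vSphere 5.x Auto Deploy typically publishes), a dnsmasq configuration covering both roles might look like:

```
# /etc/dnsmasq.conf -- illustrative example; addresses and paths are placeholders
dhcp-range=192.168.1.100,192.168.1.200,12h           # address pool for PXE clients (step 3)
dhcp-boot=undionly.kpxe.vmw-hardwired,,192.168.1.10  # boot file + TFTP server (options 67/66)
enable-tftp                                          # serve the boot loader over TFTP (step 5)
tftp-root=/var/lib/tftpboot                          # directory holding the boot loader
```

In this lab the vCSA hosts the DHCP and TFTP services itself, so this fragment is only meant to make the protocol exchange concrete.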
VMware Auto Deploy Administrator’s Guide

Required client/server resources:

  • vCSA to host the DHCP, TFTP, and Auto Deploy services
  • ESXi 5.5 server
  • PXE bootable client workstation(s)

Required implementation software:

  • vCenter / vSphere (installation DVD in the 456 lab)
  • VMware Auto Deploy GUI
  • NIC drivers
  • ESXi 5.5 .zip depot
  • VMware PowerCLI

Optional (but helpful) software:

  • A non-IE browser (Chrome, Firefox, etc.)
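Once PowerCLI is connected to vCenter, the ESXi 5.5 .zip depot and a deploy rule are typically registered along these lines. This is a hedged sketch, not this lab's exact procedure: the server name, depot path, image profile name, and match pattern below are placeholders to be replaced with the lab's actual values.

```
# Illustrative PowerCLI sketch -- all names and paths are placeholders
Connect-VIServer -Server vcsa.lab.local              # connect to the vCSA
Add-EsxSoftwareDepot C:\Depots\ESXi-5.5-depot.zip    # register the ESXi 5.5 .zip depot
Get-EsxImageProfile                                  # list the image profiles in the depot
$rule = New-DeployRule -Name "CCL-Hosts" `
        -Item "ESXi-5.5.0-standard" `
        -Pattern "vendor=Dell Inc."                  # match the lab's Dell PXE clients
Add-DeployRule -DeployRule $rule                     # activate the rule
```

The pattern here matches by hardware vendor; Auto Deploy rules can also match by MAC address or IP range if the lab workstations need to be targeted more narrowly.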

Server Setup

The steps for configuring the CCL environment can be found in the Configuration Guide.


Integrating the vCenter Server Appliance with NETLAB

Setting up a trunk line between the large ESXi server and NETLAB:

  • At least one NIC needs to have a cable running from the ESXi host to the control switch associated with NETLAB. The switch port must also be configured as a trunk line in order to allow proper communication between NETLAB and the contents of the vCSA’s datastore.
  • Console into the control switch using the appropriate credentials (you should use the defaults suggested by the NETLAB documentation to maintain proper automation and support compatibility).
  • Input the following commands:
    • interface x/x
    • description inside connection for ESXi Server
    • switchport mode trunk
    • switchport nonegotiate
    • no switchport access vlan
    • no shutdown
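Taken together, the commands above form one configuration session on the control switch console. The sketch below adds the standard surrounding context (privileged mode, configuration mode, and saving the config); the interface ID is a placeholder for whichever x/x port the trunk cable actually uses:

```
enable
configure terminal
interface GigabitEthernet0/1          ! placeholder -- substitute the actual x/x port
 description inside connection for ESXi Server
 switchport mode trunk                ! carry all VLANs to NETLAB
 switchport nonegotiate               ! disable DTP trunk negotiation
 no switchport access vlan            ! clear any access-VLAN assignment
 no shutdown
end
copy running-config startup-config    ! persist the change across reloads
```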
Create a NETLAB+ User on the Appliance:
  • Log in to the appliance’s CLI with the username and password you configured when you built it out from the .ovf template
  • Enter useradd -m NETLAB
  • To change the new user’s password, enter passwd NETLAB. You will be prompted to enter and then confirm the new password for the NETLAB user
Create a NETLAB Role in the Appliance:
  • Enter the appliance through vSphere and click on Administration > Roles.
  • Right click the Administrator role and select Clone, entering NETLAB for the new role object’s name.
  • Right-click on the NETLAB role and select Add Permission.
  • In the window that appears, click Add and then select the NETLAB account and click OK.
  • Back in the Assign Permissions window, use the drop-down menu on the right to select NETLAB, associating the cloned administrative permissions with the NETLAB user you created earlier.
Create a New vSwitch and Bind it to a Physical NIC
  • In the appliance’s vSphere view, navigate to Inventory > Hosts and Clusters and click on the ESXi host you want to configure in the left pane.
  • In the main pane, click Configuration and then click Networking in the Hardware Group box, then click Add Networking in the upper left.
  • To allow the ESXi host kernel to communicate with the inside NETLAB network, select the VMkernel radio button and click Next.
  • Select the Create a Virtual Switch radio button, then select the physical NIC that’s associated with the trunk line to the control switch.
  • In the next screen, set the Network Label to “NETLAB Inside” and check the box labeled “Use this port group for management traffic”.
  • Enter a unique IP address from the table that appears on page 77 of NetDevGroup’s “Remote PC Guide Series – Volume 2” document.