Thursday, 17 January 2019

VMware NSX Lab on ESXi - VCP6-NV Part2

In the previous post, we got the 3x nested ESXi hosts installed on the ESXi core server. Now we need to get the NSX Manager and controllers set up.

There are a few steps needed here (well covered elsewhere):
  1. Create Cluster
  2. Move 3x ESXi VMs to cluster
  3. Setup Distributed Switch
  4. Migrate the ESXi VMs' networking to the dSwitch as their primary network path
  5. Configure NTP on the host ESXi and all the ESXi VMs
Once this is completed, the system is ready for the deployment of the NSX Manager. This is an OVA file, so deploy it as a usual VM (to the ESXi core server, not one of the ESXi VMs!). You will have to answer some questions: passwords for the CLI admin and privileged mode, DNS name and IP (this needs a static IP), DNS settings, and an NTP server list. Match these to your network and let the install complete.
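As a sanity check before clicking Deploy, the questions the OVA wizard asks can be modelled as a simple checklist. This is just an illustrative sketch - the field names below are mine, NOT the real OVF property keys of the appliance:

```python
# Illustrative checklist of the answers the NSX Manager OVA deployment asks
# for. These field names are made up for the sketch; they are NOT the real
# OVF property keys in the appliance.
REQUIRED_FIELDS = [
    "cli_admin_password",      # CLI admin password
    "cli_privilege_password",  # CLI privileged-mode password
    "dns_name",                # appliance DNS name
    "ip_address",              # must be a static IP
    "dns_servers",             # DNS settings
    "ntp_servers",             # NTP server list
]

def missing_answers(answers):
    """Return the deployment questions that are still unanswered."""
    return [f for f in REQUIRED_FIELDS if not answers.get(f)]
```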

Once the VM is running, connect to it via HTTPS. Use the password you set up in the config above, and this will get you into the web GUI for the NSX Manager.

The two sections you need to change here (and won't need to change again) are Manage Appliance Settings - for the NTP details - and Manage vCenter Registration - to connect the server to vCenter.

Once both are configured, you can exit the web interface and go back to vCenter.

To access NSX Manager, go to the Networking and Security icon/menu option. Note this may take 1-2 minutes to appear, as the NSX Manager initially registers with vCenter.

You then need to prepare the cluster for NSX and VXLAN (done under the Host Preparation option under Installation and Upgrade).


The next item on the list is the NSX Controller install. This is where the second trick came about. You can easily deploy the controller, but with the lab as set up so far, it will sit in the Deploying phase until the system times it out and deletes the controller. So it never completes - and after some review I found that this was because any VM running within one of the ESXi VMs (as all the controllers will be) cannot get network access. I tested this by building a CentOS VM on one of the ESXi VMs, and it could not get a DHCP lease. Why?

The answer is HERE - Promiscuous Mode and Forged Transmits need to be enabled on the dSwitch! It makes complete sense when it is detailed the way William Lam does, but of course you don't think about that when just creating the lab.
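For reference, the fix boils down to two booleans. This sketch models the port-group security policy as a plain dict; in the real vSphere API these correspond to the allowPromiscuous/forgedTransmits settings on the dSwitch port group's security policy:

```python
def nested_lab_security_policy():
    """Security-policy settings the nested-ESXi lab needs on the dSwitch.

    Modelled as a plain dict for illustration; in the vSphere API these
    correspond to the allowPromiscuous/forgedTransmits booleans on the
    port group's security policy.
    """
    return {
        "allowPromiscuous": True,  # nested hosts must see frames destined for their inner VMs
        "forgedTransmits": True,   # inner-VM frames carry MACs the outer vSwitch never assigned
        "macChanges": False,       # not needed for this particular fix
    }
```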

So make the required changes to the dSwitch, deploy the controller, and it will succeed. It took a while on my machine to reach a CONNECTED state (up to 10 minutes), as I think it is looking for DNS resolution of the vSphere host names (which I don't have on my network), but in the end the controllers come up fine (note you can only build one at a time).


There is a recommended sequence from VMware for booting up the devices, listed HERE. So it goes:

  1. ESXi Host w/ vCenter Server
  2. NSX Manager
  3. ESXi VMs with the controllers (autostart should be on for the controllers)
  4. Anything else
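That sequence can be expressed as a small dependency map - handy if you ever script the power-on. The inventory names below are illustrative only, not real object names:

```python
# The recommended power-on order from above, expressed as a dependency map
# so a script could verify or enforce it. Names are illustrative only.
BOOT_DEPENDENCIES = {
    "esxi-host-with-vcenter": [],
    "nsx-manager": ["esxi-host-with-vcenter"],
    "nested-esxi-and-controllers": ["nsx-manager"],
    "everything-else": ["nested-esxi-and-controllers"],
}

def boot_order(deps):
    """Return a power-on sequence that respects every dependency."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        for dep in deps[name]:
            visit(dep)  # power on prerequisites first
        seen.add(name)
        order.append(name)

    for name in deps:
        visit(name)
    return order
```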

A strange issue I had was that after a reboot of the ESXi server, the NSX Manager would not show up in the Networking and Security console of the Web Client (it said "No NSX Managers available"). This turned out to be a session issue, where the previous NSX session was still present. To fix this, you have to close all the existing sessions and log out/in to the Web Client, as per HERE. See image below.


Another issue after a reboot was that once everything was online and working, the NSX cluster would show as Not Ready (with no notes under it as to the cause), and clicking Resolve would not change the status.
Some searching found THIS article (scroll to the last comment), which suggested a reboot of the vCenter appliance. Completed that, and all green! See the original error screen that was encountered below. Some other guides on the issue of the NSX cluster being Not Ready can be found HERE as well.



Now we are ready to LAB NSX SDN ! Looking forward to this part.

EDIT Feb 2019 - Having passed VCP6-NV, I have continued to work with this lab in different configurations. A few more notes that proved useful!
1. ESXi 6.5 GA is NOT supported by NSX 6.3 - you need to upgrade to 6.5 U1 or 6.7
2. For the distributed switch you need 2 NICs on each ESXi host (basically one for the standard switch and one for the distributed switch). Lots of trial and error here as I was playing with moving from the standard switch to the distributed version.

VMware NSX Lab on ESXi - VCP6-NV Part1


I'm presently working towards the VMware VCP6-NV certification, and as part of the learning process I wanted to set up NSX on my home lab for some hands-on experience. NSX is VMware's Software Defined Networking (SDN) product, which allows you to build virtual networks on demand across a physical underlying vSphere infrastructure. As shown above, it allows you to create a virtual, segmented environment on demand, without adding or changing any hardware or physical networking.

The purpose of these posts on NSX is to show how to set it all up on a single ESXi host, along with some of the tricks you need to know to make it work that are not covered in the setup guides. The standard NSX setup steps that I followed are detailed in VMware NSX for vSphere Introduction and Installation and the VMware NSX Cookbook (listed below), so for detailed instructions please refer to them.

These posts on NSX assume you have done the vSphere 6 fundamentals course (or have a good working knowledge of vSphere). Most of the guides around are based on using VMware Workstation (which is fine depending on what hardware you have access to), which I won't try to repeat. My lab is the following:
  • Single ESXi 6.7 host
  • 32GB Ram
  • Intel Core i7
  • 256GB SSD + 1TB SATA
  • 2x Intel NIC
I originally built this server since I prefer to have VMs running on a separate machine to the one I'm working on. It got me through CCIE SP, so it's gotta be able to handle NSX, right? Well, as usual, there are a few tricks to learn.

For reference, the following is the main guides/series I am using for study -
  • VMware NSX Cookbook PDF (Bayu Wibowo, Tony Sangha)
  • VCP6-NV Official Cert Guide 2V0-641 PDF (Elver Sena Sosa)
  • VMware Certified Professional 6-Network Virtualization 2v0-641 Videos (Lynda - Bill Ferguson)
  • VMware NSX for vSphere Introduction and Installation (Pluralsight- Jason Nash)
  • VMWare VCP-NV NSX vBrownbag Podcast series Videos
After doing some of the videos and reading the docs, the design I want to build in my lab is the following (including version details) -
  • 3x ESXi hosts - v6.7 (to host the NSX controllers - 1 in each ESXi)
  • 1x vSphere vCenter Server - v6.7
  • 1x NSX Manager - v6.4.3 (note I tested v6.4.4 but had install issues)
  • 3x NSX Controllers - v6.4.3 (inherited from NSX manager)
Which should then allow me to build an NSX environment and test.

A note on the interface for vCenter - the HTML5 UI is not fully featured yet, so some things (e.g. the NSX logical switches menu option - which is kind of key!) may not work or be present. Use the FLEX client if you find you can't locate something you thought should be there (get to it via https://[vcentre_ip or DNS]/vsphere-client). The images I have in these posts are from the HTML5 interface.

I set up the vCenter Server first, then spun up the ESXi hosts. Just make sure when you create the ESXi host VMs that you choose the OS as Other and scroll down to the bottom of the list to select ESXi 6.5 (or whatever ESXi version you are using). You will get a warning that ESXi is not supported, but that's fine as this is not production.

This is where the first tricks became apparent. Whilst I learnt this the long way, AFTER installing ESXi on to ESXi and finding issues later, I'll just point them out here to make the process easier. This practice of installing ESXi on top of ESXi is called nested ESXi. Once you start doing some Google searching on this, you will find some secrets (especially with v6.7).
  1. Boot mode must be EFI, Not BIOS
  2. CPU hardware virtualization needs to be exposed to the ESXi VM's guest OS, otherwise you can't create new VMs within the ESXi VM.
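In code terms, the two gotchas above come down to two VM config settings. A minimal sketch, shown as a plain dict for illustration; in the vSphere API these correspond to vim.vm.ConfigSpec.firmware and vim.vm.ConfigSpec.nestedHVEnabled:

```python
def nested_esxi_vm_settings():
    """The two settings that trip up nested ESXi installs.

    Shown as a plain dict for illustration; in the vSphere API these
    correspond to vim.vm.ConfigSpec.firmware and
    vim.vm.ConfigSpec.nestedHVEnabled.
    """
    return {
        "firmware": "efi",        # boot mode must be EFI, not BIOS
        "nestedHVEnabled": True,  # expose hardware-assisted virtualization to the guest
    }
```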

See below for images of where the options are in the vCentre server new VM setup.


It's also worth looking into the hardware requirements the ESXi servers need to meet to support the NSX controllers. These can be found on the VMware website HERE, and the relevant content is in this table:


As you can see, each ESXi server will need to provide 4 vCPU, 4GB RAM and 28GB of disk to each controller. After some trial and error I found that I actually needed 10-12GB of RAM on each ESXi VM to be able to install one controller (along with 4 vCPU and 40GB+ HDD), so it may be worth making that 12GB and 60-80GB depending on how much space you have. The controllers don't use much in the way of resources once installed - approx. 2GB RAM and 1-2 vCPUs - but you have to get past the install part first!

Note that the underlying ESXi host will overcommit RAM for you, so even though you don't have 36GB physically (3x 12GB for the 3x ESXi VMs), they will install and work fine.
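The sizing arithmetic above in a nutshell - the documented controller footprint versus what each nested host needed in my lab (the lab numbers are just what worked for me, not VMware guidance):

```python
# Documented per-controller footprint (from the VMware table referenced above).
CONTROLLER_SPEC = {"vcpu": 4, "ram_gb": 4, "disk_gb": 28}

# What each nested ESXi VM actually needed in my lab to get one controller
# through the install phase - trial-and-error numbers, not VMware guidance.
NESTED_HOST_SPEC = {"vcpu": 4, "ram_gb": 12, "disk_gb": 60}

def nominal_lab_ram(hosts=3):
    """RAM the nested hosts would claim if the core host did not overcommit."""
    return hosts * NESTED_HOST_SPEC["ram_gb"]
```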

Let the install for the nested ESXi complete from there (you need 3 hosts for the lab, so create a template and copy it, or build 2 more VMs the same). It is a good idea to set the management ports to static IPs in your LAN (via the DCUI) for easy access to the hosts once they have completed setup.

In the next post we'll look at the NSX install and the distributed switch requirements.