
Premium Member · 4,146 Posts · Discussion Starter #1
I'm a VCP and I'm working on re-architecting my company's VMware infrastructure. Currently it's a messy bunch of unconnected ESXi hosts; they don't even have vCenter Server. So I have three ESXi hosts connected to a Cisco switch stack, and as far as I know, each port the ESXi nodes are connected to is set to access mode.

During VCP training, I was repeatedly told that ports connected to ESXi servers should be configured to trunk mode.
I'm not a networking specialist at all, but I need to ramp up those skills for my job. It'll be useful no matter how you cut it.

So here's my understanding of trunking: it makes it so the switch port stops worrying about what VLAN it's configured for, and delegates that control to another device connected to the port, usually another switch, or as would be the case here, a VMware vSwitch.

So if I connect my ESXi nodes to trunk ports and set up the switch to ensure each trunk port has access to every VLAN I might want my VMs to be in, I should be able to do whatever I want with my vSwitches within vCenter, right?

Here's my goal: I don't administer the switches, our colo vendor does, so I need to put in a request with them to do any work on the switches. What I want to do is pick one of my ESXi nodes (4 physical NICs) and tell the colo to configure the 4 switch ports it's connected to to trunk mode.

To put it in practical terms, here's how things are currently set up with this particular ESXi server:

vmnic0 is physically attached to a port with access to VLAN 10.200.1.0/24.
vmnic1 and 2 are teamed and attached to ports with access to VLAN 10.30.10.0/28.
vmnic3 is attached to a port with access to VLAN 10.200.2.0/24.

Those ports can't access any networks other than the ones above.

vmnic0 is bound to vSwitch0, vmnic1+2 to vSwitch1, and vmnic3 to vSwitch2.

If I'm understanding this right, setting the four switch ports to trunk mode, each with access to 10.200.1.0, 10.30.10.0 and 10.200.2.0, will ensure that each vmnic can communicate with each of those three VLANs, and changing the setting should cause no downtime at all in terms of management, VM and storage connectivity.
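
For reference, here's roughly the config I'd be asking the colo to apply to each of the four ports. This is only a sketch: the interface name is made up, and VLAN IDs 200, 30 and 201 are placeholders, since I only know the subnets, not the actual VIDs the switch stack uses.

Code:
! repeat for each of the four ports the ESXi host is plugged into
interface GigabitEthernet1/0/1
 description ESXi host - vmnic0
 ! encapsulation command is only needed on platforms that also support ISL
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! placeholder VIDs for the three networks the host needs
 switchport trunk allowed vlan 200,30,201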

And once it's all set up, since I don't need three vSwitches, I will merge them all into vSwitch0 and divide traffic within the vSwitch.

So question #1: do I understand trunking right?
Question #2: will my plan outlined above cause any downtime?
Question #3: do I need to mess with the VLAN ID setting in vSphere in order to achieve the desired results? It's currently set to 0. If I need to match the VID of each network on the vSwitch to the VID that network is known by on the Cisco switch, that's fine and would make perfect sense, but I need to know whether I have to worry about it. :)


I hope this makes sense; thanks for reading!
 

Registered · 817 Posts
i'm not a networking expert either, but i'm studying for the ccna, so i'll try to help explain trunk ports. basically, access ports belong to exactly one vlan and generally connect host devices to a switch. if you wanted to connect 2 switches via access ports, you would need one port per vlan. trunk ports don't belong to any single vlan; any vlan can cross them, so only one trunk port is needed to connect 2 switches carrying multiple vlans.
Quote:
Originally Posted by Shub

So here's my understanding of trunking: it makes it so the switch port stops worrying about what VLAN it's configured for, and delegates that control to another device connected to the port, usually another switch, or as would be the case here, a VMware vSwitch.
pretty much. frames are tagged with what vlan they belong to when they leave a trunk port. the receiving switch will forward the frames to the appropriate vlan.

here's a quote from my study guide. sorry i'm not able to help with questions 2 or 3. hopefully someone else can.
Quote:
Trunk ports do not belong to a single VLAN. Any or all VLANs can traverse trunk links to reach other switches. Only Fast or Gigabit Ethernet ports can be used as trunk links.

When utilizing trunk links, switches need a mechanism to identify which VLAN a particular frame belongs to. Frame tagging places a VLAN ID in each frame, identifying which VLAN the frame belongs to. Tagging occurs only when a frame is sent out a trunk port.

Cisco switches support two frame-tagging protocols, Inter-Switch Link (ISL) and IEEE 802.1Q.
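
to make it concrete, here's roughly what the two port types look like in cisco ios. the interface names and vlan numbers are just examples, not anything from the op's setup:

Code:
! access port: belongs to exactly one vlan, frames leave untagged
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
!
! trunk port: carries many vlans, frames leave tagged (802.1q here)
interface FastEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30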
 

Premium Member · 14,051 Posts
Thread title) Allows a device or endpoint to access multiple VLANs, as opposed to 'access' mode, which only allows access to a single VLAN.

1) Seems like it.
2) There would likely be a small amount if done correctly. It might be worth doing outside of business hours in a production environment.
3) Most likely. I don't have specific experience with these implementations, but I would imagine the VLAN ID setting tells the hypervisor which VLAN tag to pipe to the virtual machine. 0 is likely for untagged traffic (or traffic that is tagged at the switch port itself when in access mode).
 

Registered · 2,039 Posts
To implement what you're talking about, I would put the host into maintenance mode and get all the VMs off, then reconfigure the entire network structure. How many vmnics per host?

In our environment, we have 2 types of hosts:

1. A giant, beefy host in our main cluster with 12 NICs and room for more cards
2. A blade server with only 2 NICs

In the big, beefy hosts, we have our management network on its own separate vSwitch, connected to a switch port set to access mode. On the smaller hosts, however, the management network is on a VLAN in the only vSwitch those hosts have, which carries both host and guest traffic. On the larger hosts with multiple NICs, we have separate vSwitches for management traffic, iSCSI, and guest network traffic.

[Screenshot: guest network vSwitch with VLAN-tagged port groups]

The picture above is a snippet of our guest network vSwitch (not all VLANs are shown). The guests can use any of the uplinks in the vSwitch, as long as all of the physical ports are configured as trunks. Also, with VMware, make sure the ports feeding the vSwitch aren't configured in a channel-group on the switch; each port should have the same configuration but act as its own independent port.
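
Here's roughly how you'd check for and remove a channel-group on a port; the interface name is just an example:

Code:
! look for a "channel-group N mode ..." line in the port config
show running-config interface GigabitEthernet1/0/1
! if one is there, remove it
configure terminal
interface GigabitEthernet1/0/1
 no channel-group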

When planning this move, make sure to name each VM network the same on every host so that when you do get a cluster, it will be easy for HA to take over.

Luckily, I took a network engineering degree that had a virtualization requirement taught by a VCP. If you have any other network-specific questions, ask Beers; as I remember, he is Cisco certified. I understand the fun you're dealing with, though, so let me know if you want any other help with VMware and network setup.
 

Premium Member · 4,146 Posts · Discussion Starter #5
Thanks guys, I'm glad I didn't misunderstand the basics of what trunking means, at least.

This VMware infrastructure I'm dealing with was put together hastily with servers that were removed from their former purpose with very little planning or consideration.
The network was likewise not planned well. I have three servers, two with 4 NICs and one with 8. The one I'm planning to test all this with has 4. It's been dedicated to virtualizing servers in our DMZ. The problem is that it's a server with two 6-core Xeons and 36 GB of RAM, and there's only one VM in the DMZ: a 1 vCPU, 512 MB vRAM affair. Because the ports the server is connected to only grant access to the management, DMZ and storage networks, and not the production network, it's just a huge, huge waste of horsepower sitting in my rack. Setting the four switch ports to trunk mode will eliminate that problem.

I also can't vMotion the one VM to another node because, like I mentioned, there's no vCenter Server. I'm working on that, but first I need this server to be set up right. The VM isn't extremely important, so if it goes down, I'm not too concerned, but maybe I'll do it during a maintenance window anyway, just to do it by the book.

I see the wisdom of connecting a physical NIC to an access-mode port for management only, but that's not worth the hassle in my case. There is also the fact that we're moving everything to Cisco UCS by the end of June, and as you may or may not know, with UCS there's exactly one patch cable running from each UCS server to the fabric interconnect switch. Adding more cables only adds bandwidth, nothing else. Switching is completely virtualized, so I'll be leaving that task up to vSphere.

Like you said, of course I'll make sure the networks are named the same on each ESXi node so there's no issue with vMotion or HA -- they hammer that into your head during VCP training. :P


I see in the screenshot above that you are using the VID setting, so I'll have to figure out the VID by which the switch stack knows each VLAN and set those up accordingly in vSphere.
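
Jotting this down for later: once I know the real VIDs, it looks like I can tag each port group from the ESXi shell with something like this. The port group name "Production" and VID 200 are placeholders.

Code:
# list the existing port groups and their current VLAN IDs
esxcli network vswitch standard portgroup list
# tag a port group with a VLAN ID (placeholder values)
esxcli network vswitch standard portgroup set --portgroup-name="Production" --vlan-id=200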

It's funny: I used to hate networking, and now I'm starting to like it a lot more. That's a good thing, since I'm finding out that it's hard to be a good VCP while ignorant of networking. I think my next goal will be a CCNA.
 

Premium Member · 8,252 Posts
It's way easier to manage your VLANs via port groups and just have all ports connected to your ESXi servers be trunk ports. This is especially true if you have a separate network team and have to bug them every time you want a VLAN added. Just trunk 'em all and let ESX port groups sort 'em out!

I don't think what you are planning will disrupt anything, but it never hurts to just power down the VMs on one host, cold migrate them to another host, then put the host in maintenance mode and make the change.
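
Without vCenter, you'd do that from the host's shell; roughly like this on ESXi 5.x, assuming the VMs are already powered off:

Code:
# put the host into maintenance mode before the switch work
esxcli system maintenanceMode set --enable true
# ...ports get reconfigured to trunk mode...
# then bring it back
esxcli system maintenanceMode set --enable false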

I have my VCP, and while it taught me a lot about vSphere and how things work, it didn't cover the practical aspects quite as well as I thought it could have. In my class it ended up being me, a coworker, and another contractor constantly explaining the real-world uses of, or problems with, various items (such as Storage vMotion not renaming your vDisks in 5.0).
 

Registered · 2,039 Posts
So your DMZ systems will be on the same box as internal systems? Your security guy must be less concerned about a bridge around his firewall. Our security guy still doesn't trust virtual switching technologies, so our DMZ hosts are only for the DMZ and nothing else. I had to get him the white papers on the ports needed for vCenter stuff (Update Manager, vMotion, etc.) so he would open the ports from the vCenter server to the DMZ hosts. Otherwise, we wanted to put another vSwitch in the main cluster with those huge machines and just run the DMZ there, but I guess that's not "secure enough".

We put the management network on its own ports so that when we do vMotion within the cluster, we don't affect guest network traffic.

As for getting the right VLANs, hopefully your network guys are nice. Luckily, I am a network admin (and a backup admin, Linux admin, SAN admin, and don't forget VMware admin...) as well, so I already knew which VLANs I would need to allow to trunk into the host. Not sure how concerned your security guy is, but you may want to use the "switchport trunk allowed vlan" command on the switches so only the traffic needed gets through.
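
Something like this, with made-up VLAN numbers:

Code:
interface GigabitEthernet1/0/1
 switchport mode trunk
 ! only the vlans the host actually needs can cross the trunk
 switchport trunk allowed vlan 200,30,201
 ! later, append another vlan without retyping the whole list
 switchport trunk allowed vlan add 50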
 

Premium Member · 4,146 Posts · Discussion Starter #8
Funny you should say that about "my network guys" -- I don't have any. :) I'm a one-man IT team. All the networking is managed by a couple of vendors. All I need to do is open a change request with what I want done, and they'll either do it or tell me why I shouldn't do it that way. And they'll definitely tell me whatever I need to know. Most of them know me, and they know I'm friendly and easy to work with.

The whole DMZ thing has troubled me in the past. I don't know if the white paper you're referring to is the same one I've read about virtualized DMZs -- this is the one I mean: http://www.vmware.com/files/pdf/dmz_virtualization_vmware_infra_wp.pdf
As you can see, each of the three designs can offer good security if done properly. Something like vShield Zones would be sufficient, or I could architect something around a pfSense VM.
The way I have it set up now (dedicated host) is how a VMware support tech suggested I do it when I called them for advice. I guess I'm just annoyed that the host is a powerhouse, and the fact that it can only do DMZ traffic is a colossal waste.
But really my ultimate goal here is to take control of a chunk of switching and firewalling away from the colo vendor.
 

Registered · 2,039 Posts
No, the white papers I was referring to list which ports vCenter requires through a firewall to reach a host in a DMZ. We currently have our DMZ hosts in a DMZ with our DMZ guests. I do see your dilemma, though: with a machine that small in the DMZ, you might as well just make it a physical server and put that big host to use internally.
 

Premium Member · 8,252 Posts
You could either use one separate VLAN for the DMZ host, or, if you want the firewall VM too, just put both VMs in the same port group and VLAN and route your DMZ VM through the firewall VM.

Alternatively, you can burn a port, dedicate it to the DMZ server and firewall, and put them on their own vSwitch. You mentioned you had a host with 2 extra NICs; this might be the easiest route to take, though you lose the ability to vMotion the DMZ server if you do that.

Depending upon your switching, you could use private VLANs and put the DMZ server into an "isolated" VLAN (meaning it cannot talk to any other host except those placed in "promiscuous" mode), then toss your IPS/IDS into a private VLAN port that is promiscuous.
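
A rough sketch of the switch side, if your platform supports it. The VLAN numbers and interface names are made up, and the switch generally has to be in VTP transparent mode for private VLANs to take:

Code:
vlan 100
 private-vlan primary
 private-vlan association 101
vlan 101
 private-vlan isolated
!
! DMZ server port: isolated hosts can only reach promiscuous ports
interface GigabitEthernet1/0/10
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
!
! IPS/IDS or firewall port: promiscuous, sees everything in the primary
interface GigabitEthernet1/0/11
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101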
 

Premium Member · 4,146 Posts · Discussion Starter #11
I really gotta think of ways to have a DMZ network on the same hosts as other non-DMZ guests while keeping everything secured. We're moving everything to a UCS blade chassis soon, and I'll be using distributed vSwitches, so the networks will all have to be the same between hosts.
 

Registered · 2,039 Posts
If your company can afford it, the Cisco virtual switch even satisfies our security guy's stinginess. He's read the docs on it and said that if our boss buys that vSwitch, then we can put DMZ hosts on the same boxes as internal.
 

Premium Member · 4,146 Posts · Discussion Starter #13
You mean the Nexus 1000V switch? I'm not sure it's useful at all with a UCS setup. The 6120XP fabric interconnects do a lot of the same things; I'm thinking that the policy-based VM connectivity provided by UCS Manager + VMware Distributed vSwitch + VMware vShield Zones should achieve similar results with a careful design.
 

Premium Member · 4,146 Posts · Discussion Starter #14
If anybody was still wondering, I figured out that when you use trunk ports with an ESXi host, you must use the VID setting in vSphere, or it won't know what the hell to do with those trunk ports. :)
 