We have four identical hosts. Each server only has room for two 1GbE NICs and two 10GbE NICs. The 1GbE NICs are used for management and VM traffic. We also have two Nexus 10GbE switches configured for redundancy. The hosts will run many small VMs that add up to roughly 200GB of memory in use on each host.
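For scale: evacuating a host would mean moving on the order of 200GB of active memory, and even at a theoretical 10Gb/s that works out to roughly 200 × 8 / 10 ≈ 160 seconds of a fully saturated link (longer in practice), so the window where vMotion could crowd out other traffic on a shared link isn't trivial.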
Would it be better to:
1. Team the 10GbE NICs and run both vMotion and iSCSI over them (roughly the setup sketched after this list). This would keep the redundancy of the two switches, but if vMotion maxes out the bandwidth it could cause disconnects to the iSCSI SAN.
2. Dedicate one 10GbE NIC to iSCSI and the other to vMotion. This would guarantee that vMotion never delays access to the iSCSI SAN, but each traffic type would lose the redundancy of the second switch.
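To make option 1 concrete, below is my rough mental picture of the teaming side: both 10GbE NICs as uplinks on the standard vSwitch that would carry the vMotion and iSCSI vmkernel port groups. It's only a pyVmomi sketch, not something we've run; the host name, credentials, vSwitch name, and vmnic names are placeholders for our environment, and it assumes the vSwitch already exists (AddVirtualSwitch would create it otherwise):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # skip cert checks, lab style
si = SmartConnect(host="esx01.example.local",     # placeholder host name
                  user="root", pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:                            # apply to all 4 identical hosts
        netsys = host.configManager.networkSystem

        # Team the two 10GbE uplinks (placeholder vmnic names) on the
        # vSwitch that carries the vMotion and iSCSI vmkernel port groups.
        spec = vim.host.VirtualSwitch.Specification()
        spec.numPorts = 128
        spec.bridge = vim.host.VirtualSwitch.BondBridge(
            nicDevice=["vmnic2", "vmnic3"])       # the two 10GbE NICs
        netsys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)
finally:
    Disconnect(si)
```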
I'm sorry if this is an amateur question. We are moving from XenServer to VMware, and the servers were purchased with XenServer in mind. Had we known we were going with VMware from the beginning, we would have bought something with additional NIC ports or the option to add them.