The problem I was experiencing was caused by an incorrect setup. After speaking with a VMware technician, I learned that you should not use an EtherChannel configuration on the switch that connects to the hosts' vMotion NICs. The reason has to do with the load balancing policy associated with the NIC team.
For an EtherChannel configuration to work, the load balancing policy for the NIC team needs to be set to "Route based on IP hash". You cannot use that policy with a vMotion NIC team. A correctly configured vMotion NIC team uses "Route based on originating virtual port ID", because one NIC is active while the other is in standby, so there is no traffic to hash across multiple links.
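Since the fix is just a NIC teaming setting on the vMotion port group, it can also be applied programmatically. Here is a minimal sketch using pyVmomi, the open-source Python SDK for the vSphere API. The host name, credentials, port group name ("vMotion"), and vmnic names are placeholders for illustration, not the actual values from my environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- lab use only; validate certificates in production.
context = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=context)

# Grab the first host in the inventory (fine when connecting directly to one ESXi host).
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
net_sys = host.configManager.networkSystem

# Locate the existing vMotion port group so its VLAN and vSwitch settings are preserved.
spec = next(pg.spec for pg in net_sys.networkConfig.portgroup
            if pg.spec.name == "vMotion")

# Override the teaming policy at the port group level:
# "loadbalance_srcid" is "Route based on originating virtual port ID".
spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy="loadbalance_srcid",
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic2"],    # one active uplink...
        standbyNic=["vmnic3"],   # ...and one standby, per the KB guidance
    ),
)

net_sys.UpdatePortGroup(pgName="vMotion", portgrp=spec)
Disconnect(si)
```

Note that "loadbalance_ip" is the API name for "Route based on IP hash", which is the policy that would require EtherChannel on the physical switch.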
After removing the EtherChannel configuration from the switch and redoing my configuration according to the KB article referenced above, all hosts could communicate on the vMotion network appropriately. I asked the VMware tech if he had any insight as to why three hosts were working while one was not. He did not have an answer, and was surprised that the other three hosts had been working with my original configuration at all.