I have the NetApp Virtual Storage Console installed -> VASA Provider for the VVOLs.
So the other solution seems to be the better option...
But for a small environment with nothing extra this looks simple.
Here is the complete support log: https://rb.gy/lpumwu
Thanks for your questions, André.
/usr/lib/vmware-vmafd/bin/vmafd-cli get-pnid --server-name localhost
perf-vcsa.mydomain.com
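For comparison, the appliance's own idea of its FQDN can be checked with plain Linux commands (nothing VMware-specific here, just a sanity check that it matches the PNID above):
hostname -f
grep perf-vcsa /etc/hosts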
For some reason all hosts except the ESXi host which contains the vCSA VM are now connected. I am thinking of removing the problem ESXi host from the inventory and re-adding it. The trouble is that removing the host which runs the vCenter VM from vSphere itself sounds like it could be a recipe for disaster.
Another thing I have noticed is that within vSphere (only) the vCSA VM has "(1)" appended to its name, because the shut-down old copy still existed. I'm not sure whether changing the displayed name would help, but that option is greyed out (as are all options).
If I list the VMs for the problem server (perf-esx2) from the ESXi command line, it appears with the correct name:
19 | perf-vcsa | [perf-PROD01-D-East-SVC-ds] perf-vcsa-clone/perf-vcsa-clone.vmx | other3xLinux64Guest vmx-10 | VMware vCenter Server Appliance |
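For anyone who wants to reproduce that listing, the ESXi shell command is roughly the following (with a grep to filter down to the VCSA):
vim-cmd vmsvc/getallvms | grep perf-vcsa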
The name in the inventory shouldn't matter at all.
You didn't say whether you tried Method 1 from the KB article. Did you?
Regarding the firewall, can you check whether the required communication between the host and vCenter Server is allowed on the firewall?
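If checking the firewall rules directly is awkward, a quick probe from the ESXi shell also works; nc is in the busybox on the host, and 443 and 902 are the usual vCenter ports (replace vcenter-fqdn with your vCenter's address):
nc -z vcenter-fqdn 443
nc -z vcenter-fqdn 902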
André
Hi Vijay
Thanks for your help. It seems the last time I patched the appliance back in August that file got changed to be empty. I used the port 5480 management UI to update the appliance and at the time didn't have any obvious issues.
I'll review the file you sent and I can always spin up a temporary instance and check the file and see if I can replace it. I'll post an update maybe next week when I get a chance to try a few things.
Have a good weekend!
JoneOstebo, joeflint: any luck?
I'm just now trying to setup a connection to our AD over LDAP. I can join the AD domain just fine, and pull users out of that authentication method without issue.
However, if I set up an Identity source using LDAP to point to our domain, I get the same behavior Jone describes.
The UI immediately throws the error, the network inspector shows that it receives an immediate response about verifying that you have network access, and I can SSH into the VCSA, run an nslookup for the domain and the individual domain controllers, and ping the relevant IPs.
The firewall on the DCs doesn't seem to be the issue - they don't ever see any traffic on the relevant ports to drop.
This happens if I leave it set to use any DC in the domain (which by default tries LDAPS, I believe) or if I manually specify ldap://dc.domain.tld:389 / :636 / :etc.
In short, I don't believe the VCSA is ever actually trying, and I don't know why.
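One thing worth trying from an SSH session on the VCSA is simply checking whether it can open a socket to the DC at all (curl ships with the appliance; the hostname below is just the placeholder from above):
curl -v telnet://dc.domain.tld:389
curl -v telnet://dc.domain.tld:636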
JoneOstebo:
I'm facing a similar situation, but I'd like to add some helpful debugging that I've found.
On your vCenter server run:
`tail -f /var/log/vmware/sso/vmware-identity-sts-default.log`.
On your LDAP server:
Observe the logs with a command such as `journalctl -fu slapd` if you're running on Linux.
That should be able to help you some.
I'm in an odd scenario like you where I can connect through ldap://some.server.ip:389, but not through ldaps.
The error that I am receiving on vCenter is the following:
[2020-10-23T15:42:09.375Z pool-2-thread-14 vsphere.local INFO com.vmware.identity.interop.ldap.SslX509EqualityMatchVerificationCallback] Server SSL certificate signature verified.
[2020-10-23T15:42:09.394Z pool-2-thread-14 vsphere.local aafef6f3-b397-495a-80e8-d85c3843fb74 WARN com.vmware.identity.interop.ldap.LdapErrorChecker] Error received by LDAP client: com.vmware.identity.interop.ldap.OpenLdapClientLibrary, error code: -1
[2020-10-23T15:42:09.394Z pool-2-thread-14 vsphere.local WARN com.vmware.identity.idm.server.ServerUtils] cannot bind connection
On my OpenLDAP server, I can observe a TLS connection being made and then promptly closed (connection lost).
As an aside, I can connect using the ldapsearch command from the vCenter server.
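If it helps anyone else chasing the same thing, two checks worth running from the vCenter shell are forcing ldapsearch over ldaps and pulling the certificate chain with openssl, to separate a trust problem from a network problem (the server name below is just the placeholder from above; adjust to your DC and base DN):
LDAPTLS_REQCERT=allow ldapsearch -H ldaps://some.server.ip:636 -x -b "" -s base
openssl s_client -connect some.server.ip:636 -showcerts < /dev/null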
Hey UThomas,
This could be an issue with the used space on the Linux partitions being higher than the threshold defined by the product. If this is happening for all the products and the status changes constantly between Green and Yellow, I assume it is because the logs grow and the rotation then cleans them up continuously.
Could you please log in to your VCSA using SSH and list the filesystems' used space by running df -h?
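If the filesystems themselves look fine, it may also be worth seeing which log directories are churning; something like this (plain du/sort, assuming the logs live under /storage/log as on a standard VCSA) lists the biggest ones:
du -sh /storage/log/vmware/* | sort -h | tail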
Hi,
the appliance type before upgrade was small/default and I changed that to small/large.
Filesystem Size Used Avail Use% Mounted on
devtmpfs 9.3G 0 9.3G 0% /dev
tmpfs 9.4G 968K 9.4G 1% /dev/shm
tmpfs 9.4G 1.2M 9.4G 1% /run
tmpfs 9.4G 0 9.4G 0% /sys/fs/cgroup
/dev/sda3 46G 5.2G 39G 12% /
/dev/sda2 120M 27M 85M 24% /boot
/dev/mapper/vtsdblog_vg-vtsdblog 15G 57M 14G 1% /storage/vtsdblog
/dev/mapper/lifecycle_vg-lifecycle 98G 3.6G 90G 4% /storage/lifecycle
/dev/mapper/log_vg-log 9.8G 743M 8.6G 8% /storage/log
/dev/mapper/seat_vg-seat 541G 735M 513G 1% /storage/seat
/dev/mapper/dblog_vg-dblog 15G 441M 14G 4% /storage/dblog
/dev/mapper/vtsdb_vg-vtsdb 541G 103M 513G 1% /storage/vtsdb
/dev/mapper/netdump_vg-netdump 985M 2.5M 915M 1% /storage/netdump
/dev/mapper/archive_vg-archive 49G 275M 47G 1% /storage/archive
/dev/mapper/updatemgr_vg-updatemgr 98G 374M 93G 1% /storage/updatemgr
/dev/mapper/imagebuilder_vg-imagebuilder 9.8G 37M 9.3G 1% /storage/imagebuilder
/dev/mapper/db_vg-db 9.8G 362M 9.0G 4% /storage/db
/dev/mapper/autodeploy_vg-autodeploy 9.8G 48M 9.3G 1% /storage/autodeploy
tmpfs 9.4G 4.8M 9.3G 1% /tmp
/dev/mapper/core_vg-core 49G 53M 47G 1% /storage/core
tmpfs 1.0M 0 1.0M 0% /var/spool/snmp
I had the same issue today in my lab.
Tried to upgrade to 7.0 U1 and it took forever. When it didn't seem right, I started checking the logs, and the VAMI would return the error below.
vcsa update to 7.0u1 Returning from install(), data conversion failed post install hook failed
Tried what I could find about the issue, and tried what is described below to get access to the VAMI. Once I got access, I saw the version didn't match between the VAMI and the vSphere UI.
What I noticed, though, was that the UI was upgraded and I could see the new options for Namespaces. Also, as I had staged the updates from the VAMI first and then installed them, after the first attempt it still showed an update as available, but the number of packages went down, so I guess that's why the UI was updated. The versions in the VAMI and the vSphere UI still didn't match, though.
Eventually rolled back as I wanted to have lab accessible for NSX-T hands-on.
Mmm I can see that you have plenty of free space. After increasing the size did you restart the vCenter?
Yes, of course. But this didn't help.
And one more question: were you facing this issue before changing the size? Also, if you connect to the VAMI on port 5480, what is the status of the services?
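You can also check from an SSH session with the appliance's own service-control tool, which sometimes gives more detail than the VAMI page:
service-control --status --all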
All services are operational in VAMI.
For clarification: during the vCSA upgrade I chose the new deployment type "small" with "large" data size. I didn't change anything manually.
Hello,
I have just migrated a vSAN cluster from 6.7 U3 to 7.0 U1.
Apart from a big network issue with Mellanox ConnectX-4 LX cards that I'm trying to solve by upgrading the driver/firmware (right now the cards run at 2 Gbit/s and vSAN is super slow), I have noticed that vCenter (which has some thick disks, even on vSAN) has 2 disks of 1.4 TB in size (99% free, but thick): VMDK7 and VMDK13...
Filesystem Size Used Avail Use% Mounted on
devtmpfs 5.9G 0 5.9G 0% /dev
tmpfs 5.9G 932K 5.9G 1% /dev/shm
tmpfs 5.9G 1.2M 5.9G 1% /run
tmpfs 5.9G 0 5.9G 0% /sys/fs/cgroup
/dev/sda3 46G 5.2G 39G 12% /
/dev/sda2 120M 27M 85M 24% /boot
/dev/mapper/lifecycle_vg-lifecycle 98G 3.6G 90G 4% /storage/lifecycle
/dev/mapper/vtsdblog_vg-vtsdblog 15G 73M 14G 1% /storage/vtsdblog
/dev/mapper/core_vg-core 25G 45M 24G 1% /storage/core
/dev/mapper/vtsdb_vg-vtsdb 1.4T 108M 1.4T 1% /storage/vtsdb
/dev/mapper/archive_vg-archive 49G 1.2G 46G 3% /storage/archive
/dev/mapper/db_vg-db 9.8G 232M 9.1G 3% /storage/db
/dev/mapper/updatemgr_vg-updatemgr 98G 908M 93G 1% /storage/updatemgr
/dev/mapper/netdump_vg-netdump 985M 2.5M 915M 1% /storage/netdump
/dev/mapper/imagebuilder_vg-imagebuilder 9.8G 37M 9.3G 1% /storage/imagebuilder
/dev/mapper/autodeploy_vg-autodeploy 9.8G 37M 9.3G 1% /storage/autodeploy
tmpfs 5.9G 7.2M 5.9G 1% /tmp
/dev/mapper/dblog_vg-dblog 15G 105M 14G 1% /storage/dblog
/dev/mapper/seat_vg-seat 1.4T 349M 1.4T 1% /storage/seat
/dev/mapper/log_vg-log 9.8G 1.7G 7.6G 19% /storage/log
tmpfs 1.0M 0 1.0M 0% /var/spool/snmp
How did that happen? And, more importantly, how can I fix this safely?
Thanks
Manuel
Hi Andre,
I used option 2 - moving the VM without a shared datastore (clone, power off on the source ESXi host, power on at the destination host).
Both the host and vCenter Server are on the same side of the firewall and can connect.
Interestingly, there are several ESXi hosts and all of them can connect (with their VMs and associated datastores) to the vCenter Server. Only the ESXi host containing the vCenter Server, and all the other VMs on that host, are showing as disconnected or inaccessible.
Moderator: Thread moved to the vCenter Server area.
Now fixed. I needed to change the serverIP on the problem ESXi host in the vpxa.cfg file. Details here: VMware Knowledge Base. Pretty sure I had previously looked at and changed this one, but apparently not.
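For anyone hitting the same problem, the change boils down to roughly the following on the affected host (path and tag name from memory, so double-check against the KB before editing):
vi /etc/vmware/vpxa/vpxa.cfg   # update the <serverIp> value to the vCenter's current IP
/etc/init.d/vpxa restart       # restart the management agent so it picks up the change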
Thanks for the help with this one