Importing an OVF exported from vCloud into VMware Workstation fails

We’re backing out of a vCloud provider and trying to drag our VMs back into our local vSphere cluster.

I’ve used ovftool to export our VMs from vCloud into OVF templates. I then import each OVF into VMware Workstation 14 and from there drag and drop the VM into our vSphere cluster. There is likely a way to get ovftool to export in a format that works directly with vSphere, but since this works I’m just going with it.

This process worked fine for all of our VMs until I got to a group that had access to an extra network in vCloud. When trying to import these VMs into VMware Workstation I get the following error:

The source contains more than one network. This target supports at most one network.

I cracked open the ovf file in a text editor and found this near the very top:

    <ovf:NetworkSection>
        <ovf:Info>The list of logical networks</ovf:Info>
        <ovf:Network ovf:name="server-net">
            <ovf:Description/>
        </ovf:Network>
        <ovf:Network ovf:name="myorg-it-pa-protected">
            <ovf:Description/>
        </ovf:Network>
    </ovf:NetworkSection>
    <vcloud:NetworkConfigSection ovf:required="false">
        <ovf:Info>The configuration parameters for logical networks</ovf:Info>
        <vcloud:NetworkConfig networkName="server-net">
            <vcloud:Description/>
            <vcloud:Configuration>
                <vcloud:IpScopes>
                    <vcloud:IpScope>
                        <vcloud:IsInherited>true</vcloud:IsInherited>
                        <vcloud:Gateway>10.201.207.254</vcloud:Gateway>
                        <vcloud:Netmask>255.255.248.0</vcloud:Netmask>
                        <vcloud:IsEnabled>true</vcloud:IsEnabled>
                    </vcloud:IpScope>
                </vcloud:IpScopes>
                <vcloud:ParentNetwork href="" name="server-net"/>
                <vcloud:FenceMode>bridged</vcloud:FenceMode>
                <vcloud:RetainNetInfoAcrossDeployments>false</vcloud:RetainNetInfoAcrossDeployments>
            </vcloud:Configuration>
            <vcloud:IsDeployed>false</vcloud:IsDeployed>
        </vcloud:NetworkConfig>
        <vcloud:NetworkConfig networkName="myorg-it-pa-protected">
            <vcloud:Description/>
            <vcloud:Configuration>
                <vcloud:IpScopes>
                    <vcloud:IpScope>
                        <vcloud:IsInherited>true</vcloud:IsInherited>
                        <vcloud:Gateway>10.201.2.254</vcloud:Gateway>
                        <vcloud:Netmask>255.255.255.0</vcloud:Netmask>
                        <vcloud:IsEnabled>true</vcloud:IsEnabled>
                    </vcloud:IpScope>
                </vcloud:IpScopes>
                <vcloud:ParentNetwork href="" name="test-net"/>
                <vcloud:FenceMode>bridged</vcloud:FenceMode>
                <vcloud:RetainNetInfoAcrossDeployments>false</vcloud:RetainNetInfoAcrossDeployments>
            </vcloud:Configuration>
            <vcloud:IsDeployed>false</vcloud:IsDeployed>
        </vcloud:NetworkConfig>
    </vcloud:NetworkConfigSection>

In here you can see the two networks, “server-net” and “myorg-it-pa-protected”, on lines 3 and 6 of the snippet.

The networking configuration doesn’t really matter to me since it doesn’t match our vSphere deployment; all I want to do is get these VMs imported. I’ll edit their networking afterwards.

I ended up deleting “myorg-it-pa-protected” by taking out lines 6-8 and lines 29-45. I then saved and closed the OVF file and ran it through a hashing app to get the file’s SHA256 value.
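For reference, after the edit the NetworkSection is left with just the one network (the matching vcloud:NetworkConfig block gets the same treatment):

```xml
<ovf:NetworkSection>
    <ovf:Info>The list of logical networks</ovf:Info>
    <ovf:Network ovf:name="server-net">
        <ovf:Description/>
    </ovf:Network>
</ovf:NetworkSection>
```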

I then opened the .mf file that sits in the same directory as the OVF file and updated the SHA256 entry for the OVF file. I was then able to import my VMs into VMware Workstation.

On macOS you can use “shasum -a 256 <filename>” (or “sha256sum <filename>” on Linux) to get the SHA256 value of the edited OVF file. On Windows I use tools like HashTab and HashCalc, or if you have the Windows Subsystem for Linux installed on Windows 10 you can just use “sha256sum <filename>”.
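The recompute-and-update steps above can be sketched in shell; the file names (vm.ovf, vm.mf) are stand-ins for whatever your export produced:

```shell
# Sketch of the manifest fix-up, assuming the export produced vm.ovf and
# vm.mf (substitute your own file names). Sample files are created here
# only so the snippet is self-contained:
printf '<ovf:Envelope>edited</ovf:Envelope>' > vm.ovf
printf 'SHA256(vm.ovf)= 0000000000000000\n' > vm.mf

# Recompute the digest of the edited OVF descriptor.
NEWHASH=$(sha256sum vm.ovf | awk '{print $1}')

# Manifest entries look like: SHA256(vm.ovf)= <hex digest>
# Swap the stale digest for the fresh one (GNU sed; on macOS use sed -i '').
sed -i "s|^SHA256(vm.ovf)=.*|SHA256(vm.ovf)= $NEWHASH|" vm.mf
```

Since the OVF descriptor was the only file I edited, its manifest entry was the only one that needed updating.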

Datastores not listed after deploying VMware Replication Appliance

Just did a fresh deployment of the VRM 6.5.1 appliance into vCenter 6.5.1u1 which controls our vSphere 5.5 hosts.

Installation and configuration went smoothly, but when I went to set up a test replication for a VM I could not complete the setup because none of my datastores were being listed.

A reboot of vCenter did not help.

Restarting the VRM service via the appliance’s web UI fixed the problem. A reboot of the appliance would probably have worked as well.

You can restart the service via: https://<APPLIANCE FQDN>:5480/

  1. Click ‘Configuration’ under the ‘VM’ page
  2. Click ‘Restart’ at the bottom

A pretty straightforward solution, but I didn’t find it in the first few pages of Google results. Might save someone else a bunch of troubleshooting.

Error 1603 when upgrading vCenter 6.0u1 to 6.0u2

Recently ran across this one:

2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: MonitorStatusFile: Other process terminated with 0, exiting
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: MonitorStatusFile: Process exited with a '0' exit code; no status monitoring so assuming success
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: MonitorStatusFile: called parse callback 0 times
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: MonitorStatusFile: No need to wait for process to complete
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: MonitorStatusFile: Process's job tree still hasn't terminated, waiting
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: MonitorStatusFile: Wait on process's job tree has completed: 0
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: Leaving function: MonitorStatusFile
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| E: LaunchProcAndMonitorStatus: Job still alive, terminating
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: Leaving function: LaunchProcAndMonitorStatus
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: RunFirstLastUpdateboot: Successfully ran boot script: "C:\Windows\system32\cmd.exe /S /C ""D:\VMware\vCenter Server\bin\run-updatebootrb-scripts.bat"""
2016-06-22 10:23:19.878-07:00| vcsInstUtil-3634789| I: Leaving function: VM_RunUpdateBoot
2016-06-22 10:23:20.065-07:00| vcsInstUtil-3634789| E: wWinMain: MSI result of install of "D:\Temp\VMware-VIMSetup-all-6.0.0-3634788\vCenter-Server\Packages\vcsservicemanager.msi" may have failed: 1603 (0x00000643)
2016-06-22 10:23:20.065-07:00| vcsInstUtil-3634789| E: LaunchPkgMgr: Operation on vcsservicemanager.msi appears to have failed: 1603 (0x00000643)
2016-06-22 10:23:20.065-07:00| vcsInstUtil-3634789| I: PitCA_MessageBox: Displaying message: "Installation of component VCSServiceManager failed with error code '1603'. Check the logs for more details."


The upgrade would get to the VCSServiceManager step, fail, and back out, leaving our existing vCenter 6.0u1 installation unable to start.

I did all the standard things you’ll find on VMware’s support site (and that the support rep I got hold of recommended):

2119768 Error code 1603 when upgrading to vCenter Server 6.0
2127519 Installing the VMware vCenter Server 6.0 fails with the vminst.log error: MSI result of install of “C:\vCenter-Server\Packages\vcsservicemanager.msi” may have failed : 1603
2137365 Upgrade of vCenter from 5.x to 6.0 fails with “Installation of component VCSServiceManager failed with error code ‘1603’. Check the logs for more details.”
2113068 Upgrading or installing VMware vCenter Server 6.0 fails with the vminst.log error: Error in accessing registry entry for DSN
2119169 Installing VMware vCenter Server 6.0 using a Microsoft SQL database fails with the error: An error occurred while starting service ‘invsvc’

None helped.

While waiting for my VMware Support rep to dig through the log files, on a hunch, I made the following changes:

  1. Re-checked ‘IPv6’ in the network stack for the server’s network card
  2. Re-ran the vCenter installer separately by right-clicking it and choosing ‘Run as administrator’ (\VMware-VIMSetup-all-6.0.0-3634788\vCenter-Server\VMware-vCenter-Server.exe)

The installation then succeeded and we have a functioning vCenter again.

Two fun facts:

  1. UAC is disabled on our server
  2. IPv6 was (and still is) disabled via the registry using these utilities, even though I’ve now re-checked IPv6 in the network card’s network stack
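For context, utilities that disable IPv6 via the registry generally set the DisabledComponents value that Microsoft documents for the Tcpip6 service; the equivalent .reg fragment looks roughly like this (0xFF disables all IPv6 components except loopback):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters]
"DisabledComponents"=dword:000000ff
```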

Our server is in a fairly unique configuration, I suspect, but hopefully this will help someone else.

Networking randomly dies on a 2012 R2 vSphere VM

Strange issue. Simple solution.

We had a Windows Server 2012 R2 Domain Controller sitting on vSphere 5.5 (build 2068190) which would randomly lose its network connection.

When you logged into the system locally the network interface appeared to be up, but you could not connect to anything outside of the VM.

If I rebooted the VM it would work for a few hours or less and then the network would drop out again.

Digging through the event viewer I came across these:

Log Name:      System
Source:        Microsoft-Windows-Iphlpsvc
Date:          2/15/2016 7:01:51 PM
Event ID:      4202
Task Category: None
Level:         Error
Keywords:      
User:          SYSTEM
Computer:      MYSERVER.MYDOMAIN
Description:
Unable to update the IP address on Isatap interface isatap.{FBE3D830-A8CB-4C9C-809E-25DD9DB086F5}. Update Type: 0. Error Code: 0x57.


Log Name:      System
Source:        Microsoft-Windows-Iphlpsvc
Date:          2/15/2016 4:43:33 PM
Event ID:      4202
Task Category: None
Level:         Error
Keywords:      
User:          SYSTEM
Computer:      MYSERVER.MYDOMAIN
Description:
Unable to update the IP address on Isatap interface isatap.{FBE3D830-A8CB-4C9C-809E-25DD9DB086F5}. Update Type: 1. Error Code: 0x490.

The VM had an E1000 NIC attached to it. I figured the issue was the VM’s NIC model and got some backing for my theory from here: https://community.spiceworks.com/topic/504405-windows-server-2012-r2-guest-os-on-vmware-keeps-losing-gateway-connection

The solution appears to have been removing the E1000 NIC and adding either an E1000E NIC or, in my case, a VMXNET 3 NIC.
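For what it’s worth, with the VM powered off the NIC model ultimately comes down to a line in the VM’s .vmx file; a hypothetical fragment for the first NIC (other ethernet0.* settings omitted):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
```

Valid virtualDev values include “e1000”, “e1000e” and “vmxnet3”; removing and re-adding the NIC through the vSphere client accomplishes the same thing.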

Error 1603 while upgrading vCenter 5.5 to 6.0u1

I was recently upgrading vCenter 5.5 to 6.0u1 and the upgrade would consistently fail, displaying two error messages.

First it would display “An error occurred while invoking external command: ‘Database instance not defined for mssql provider'”

Then the installation would appear to proceed until it got to installing the VCSServiceManager, at which point I would get Error 1603 saying that it couldn’t talk to the database server.

We run our vCenter Server database off a separate MSSQL 2012 Standard server.

I found plenty of resources on VMware’s site for this error, but none of them solved the issue for us.

Somehow I ended up on this article: Installing or Upgrading to vCenter Server 6.0 fails with the error: Unable to get port number for mssql provider (2125492)

That isn’t the error we were getting but the solution ended up fixing the problem for us.

This issue is caused by the use of certain ASCII characters in the Microsoft SQL Server user’s password used for the DSN on the vCenter Server.

To resolve this issue, ensure your Microsoft SQL Server user password used in the DSN does not contain the following:

    - (dash) 
    ? (question mark) 
    _ (underscore)
    ( (left parentheses)
    = (equal sign)
    ! (exclamation mark)
    , (comma)

Once the password has been updated to remove any of the above characters:

    - If you are performing a fresh installation, attempt the fresh install again.
    - If you are performing an upgrade, roll back your vSphere environment to the pre-upgrade state, update the stored vCenter Server database password, and re-run the upgrade. For more information on updating the vCenter Server database password, see Changing the vCenter Server database user ID and password (1006482).

We had been using most of those special characters in the password for the vCenter user accessing our MSSQL Server.
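A quick way to sanity-check a candidate password against that character list, as a sketch (check_password is just an illustrative helper, not part of any VMware tooling):

```shell
# Hypothetical helper: flags any of the characters VMware's KB calls
# out for DSN passwords: - ? _ ( = ! ,
check_password() {
    if printf '%s' "$1" | grep -q '[?_(=!,-]'; then
        echo "contains a disallowed character"
    else
        echo "ok"
    fi
}

check_password 'CorrectHorseBatteryStaple42'   # -> ok
check_password 'Tr0ub4dor_3'                   # -> contains a disallowed character
```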

I changed the password on the MSSQL server to something without those special characters and then did the following to update it in vCenter:

  1. Changed the MSSQL user’s password
  2. Updated the database user password stored in vCenter Server (2000493)
  3. Updated the password in the ODBC connector
  4. Restarted the VMware vCenter Server service
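If I recall correctly, the KB in step 2 (2000493) amounts to running vpxd with the -p flag on the vCenter Server; roughly (the path varies by version and install location):

```
rem Sketch of KB 2000493's procedure, run from an elevated prompt.
cd "C:\Program Files\VMware\Infrastructure\VirtualCenter Server"
vpxd.exe -p
rem vpxd prompts for the new database password and stores it; the
rem VMware VirtualCenter Server service then needs a restart.
```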

The upgrade was successful after that.

Note: My upgrades were getting far enough to completely remove the existing installation of vCenter 5.5, but not far enough to alter the database. I had to revert my vCenter VM to my pre-upgrade snapshot so vCenter 5.5 was back up and running before I could change the password.