Salt: Salt-Cloud: There was a query error: Required field "deviceChange" not provided (not @optional)

Created on 20 Dec 2016 · 30 comments · Source: saltstack/salt

Description of Issue/Question

This was working with an older salt-cloud version using the same command (salt-cloud -l debug ...).

Setup

(Please provide relevant configs and/or SLS files (Be sure to remove sensitive info).)

Steps to Reproduce Issue

 salt-cloud -l debug -m /etc/salt/cloud.maps.d/vc01.map
<snipped>
[DEBUG   ] Virtual hardware version already set to vmx-08
[DEBUG   ] Setting cpu to: 2
[DEBUG   ] Setting memory to: 2048 MB
[DEBUG   ] Changing type of 'Network adapter 1' from 'e1000' to 'vmxnet3'
[ERROR   ] There was a query error: Required field "deviceChange" not provided (not @optional)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 348, in run
    ret = mapper.run_map(dmap)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 2231, in run_map
    profile, local_master=local_master
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1288, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 2496, in create
    config_spec.deviceChange = specs['device_specs']
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 537, in __setattr__
    CheckField(self._GetPropertyInfo(name), val)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 954, in CheckField
    CheckField(itemInfo, it)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 915, in CheckField
    raise TypeError('Required field "%s" not provided (not @optional)' % info.name)
TypeError: Required field "deviceChange" not provided (not @optional)
[root@salt01t cloud.profiles.d]#

Versions Report

Salt Version:
            Salt: 2016.11.1

Dependency Versions:
 Apache Libcloud: 0.20.1
            cffi: 1.6.0
        cherrypy: 3.2.2
        dateutil: 1.5
           gitdb: 0.6.4
       gitpython: 1.0.1
           ioflo: Not Installed
          Jinja2: 2.7.2
         libgit2: 0.21.0
         libnacl: Not Installed
        M2Crypto: 0.21.1
            Mako: Not Installed
    msgpack-pure: Not Installed
  msgpack-python: 0.4.8
    mysql-python: Not Installed
       pycparser: 2.14
        pycrypto: 2.6.1
          pygit2: 0.21.4
          Python: 2.7.5 (default, Nov  6 2016, 00:28:07)
    python-gnupg: Not Installed
          PyYAML: 3.11
           PyZMQ: 15.3.0
            RAET: Not Installed
           smmap: 0.9.0
         timelib: Not Installed
         Tornado: 4.2.1
             ZMQ: 4.1.4

System Versions:
            dist: centos 7.3.1611 Core
         machine: x86_64
         release: 3.10.0-514.2.2.el7.x86_64
          system: Linux
         version: CentOS Linux 7.3.1611 Core

[root@salt01t cloud.profiles.d]# pip list | grep pyvmomi
pyvmomi (6.5)
pyvmomi-community-samples (5.5.0-2014.dev)
[root@salt01t cloud.profiles.d]#


Labels: Bug, RIoT, Salt-Cloud, TEAM RIoT, ZRELEASED - 2016.11.3, fixed-pending-your-verification, severity-critical, severity-high

All 30 comments

@tjyang can you share a sanitized version of your profile and provider files to help replicate this issue?

Also does this only occur when running a map file or does it occur on normal provision such as salt-cloud -p <profilename> <vmname>?

I have tried running the following and received the same error.

salt-cloud -p base-linux web02
[INFO ] salt-cloud starting
[ERROR ] There was a profile error: Required field "deviceChange" not provided (not @optional)

Cheers,

Neil

Probably should have included this....

Salt Version:
Salt: 2016.11.1

Dependency Versions:
Apache Libcloud: 0.20.1
cffi: Not Installed
cherrypy: 3.2.2
dateutil: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.8
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.5 (default, Nov 6 2016, 00:28:07)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.3.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4

System Versions:
dist: centos 7.3.1611 Core
machine: x86_64
release: 3.10.0-514.2.2.el7.x86_64
system: Linux
version: CentOS Linux 7.3.1611 Core

pyvmomi (6.5)

Cheers,

Neil

And for completeness, this....

salt-cloud -p base-linux web02 -l debug
[DEBUG ] Reading configuration from /etc/salt/cloud
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG ] Including configuration from '/etc/salt/cloud.providers.d/vmware.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.providers.d/vmware.conf
[DEBUG ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/base.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/base.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/elk.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/elk.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/haproxy.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/haproxy.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/logstash.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/logstash.conf
[DEBUG ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO ] salt-cloud starting
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG ] LazyLoaded rackspace.reboot
[DEBUG ] LazyLoaded openstack.list_locations
[DEBUG ] LazyLoaded rackspace.list_locations
[DEBUG ] Could not LazyLoad vmware.optimize_providers: 'vmware.optimize_providers' is not available.
[DEBUG ] The 'vmware' cloud driver is unable to be optimized.
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG ] LazyLoaded rackspace.reboot
[DEBUG ] LazyLoaded openstack.list_locations
[DEBUG ] LazyLoaded rackspace.list_locations
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG ] LazyLoaded rackspace.reboot
[DEBUG ] LazyLoaded openstack.list_locations
[DEBUG ] LazyLoaded rackspace.list_locations
[DEBUG ] Generating minion keys for 'web02'
[DEBUG ] LazyLoaded cloud.fire_event
[DEBUG ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event: tag = salt/cloud/web02/creating; data = {'profile': 'base-linux', 'event': 'starting create', '_stamp': '2016-12-21T09:43:17.344710', 'name': 'web02', 'provider': 'vmware:vmware'}
[DEBUG ] Virtual hardware version already set to vmx-11
[DEBUG ] Setting cpu to: 2
[DEBUG ] Setting memory to: 2048 MB
[ERROR ] There was a profile error: Required field "deviceChange" not provided (not @optional)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 284, in run
    self.config.get('names')
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1458, in run_profile
    ret[name] = self.create(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1288, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 2496, in create
    config_spec.deviceChange = specs['device_specs']
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 537, in __setattr__
    CheckField(self._GetPropertyInfo(name), val)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 954, in CheckField
    CheckField(itemInfo, it)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 915, in CheckField
    raise TypeError('Required field "%s" not provided (not @optional)' % info.name)
TypeError: Required field "deviceChange" not provided (not @optional)

Sorry I'm not trying to troll or spam this forum just trying to be helpful...I think!

Cheers,

Neil

@asmodeus70

Sorry I'm not trying to troll or spam this forum just trying to be helpful...I think!

You are helping with this debugging effort. Thanks.

Hi, @asmodeus70

I can only create the VMs from my profile using the -m syntax.
I am having trouble using the -p syntax to create a particular VM.

I see you are able to use the following command to create the web02 VM:

salt-cloud -p base-linux web02 -l debug

but from your log you only have base.conf, not base-linux.conf, in /etc/salt/cloud.profiles.d:

Including configuration from '/etc/salt/cloud.profiles.d/base.conf'

Would you explain what "-p base-linux web02" refers to?

Ah yes, I have a base.conf with the following contents....

base-linux:
  provider: vmware
  clonefrom: Gold_Centos7_Template

  ## Optional arguments
  num_cpus: 2
  memory: 2GB 
  devices:

    disk:
      Hard disk 1:
        size: 22

    network:
      Network adapter 1:
        name: 'Project PIP VLAN 80' 
        switch_type: standard
        adapter_type: vmxnet3

  domain: example.com
  dns_servers:
    - 8.8.8.8
    - 8.8.4.4

  resourcepool: Resources
  cluster: "G9 Blades"

  datastore: VMFS3_PoolB2
  folder: 'Project Pip'
  datacenter: Wales
  template: False
  power_on: True

  deploy: True
  customization: True
  ssh_username: root
  password: *******

  hardware_version: 11
  image: rhel7_64Guest

lb1:
  extends: base-linux                                                                                                               
  minion:
    master: 10.150.0.14
    startup_states: highstate
lb2:
  extends: base-linux
  minion:
    master: 10.150.0.14
    startup_states: highstate

web01:
  extends: base-linux
  minion:
    master: 10.150.0.14
    startup_states: highstate

web02:
  extends: base-linux
  devices:
    network:
      Network adapter 1:
      ip: 10.150.0.24
      gateway: [10.150.0.1]
      subnet_mask: 255.255.255.0
    grains:
      roles:
          - web
      env: production
  mine_functions:
    host:
        - mine_function: grains.get
        - host
    id:
        - mine_function: grains.get
        - id
    internal_ip:
        - mine_function: ip4_interfaces
        - interface: eno16780032
  minion:
    master: 10.150.0.14
    startup_states: highstate

This config used to work with 2016.3.4 (Boron) and still does on my live Salt master.

Cheers,

Neil

@asmodeus70, thanks for your base.conf file example. I now know (remember) how to use the -p syntax properly ;). PS: wrapping your copied-and-pasted text in ``` quotes will make it show up as a code segment.

@Ch3LL, do you need more information for this issue to move out of the "information needed" label?

Spinning up a test case now to see if I can replicate this. Thanks for the additional information. Will post results here in a bit.

I don't currently have a VMware cluster due to some maintenance, so I was hoping you guys could help troubleshoot this.

In our testing we did not see this. Here was the profile we used:

qa-test-vm:
  provider: vsphere01
  clonefrom: clone-template
  num_cpus: 1
  memory: 2GB
  devices:
    disk:
      Hard disk 1:
        size: 30
  resourcepool: testresourcepool
  datastore: onedatastore
  datacenter: saltdatacenter
  host: host.net
  template: False
  power_on: True
  deploy: True
  ssh_username: root
  password: <PASSWORD>
  wait_for_ip_timeout: 30

The difference I can see, relating it back to the code, is that you have a network device set up under devices. Since the code here would only reach this stack trace if devices are configured, I'm guessing we need to dive into that area.

So can you try removing the network portion and see if it is successful?

Also it would help if you could add this code to see if we can get some more debug information:

diff --git a/salt/cloud/clouds/vmware.py b/salt/cloud/clouds/vmware.py
index 6845360..f31d7d7 100644
--- a/salt/cloud/clouds/vmware.py
+++ b/salt/cloud/clouds/vmware.py
@@ -2379,6 +2379,7 @@ def create(vm_):

     if devices:
         specs = _manage_devices(devices, object_ref)
+        log.debug('specs are the following: {0}'.format(specs))
         config_spec.deviceChange = specs['device_specs']

     if extra_config:

And that might give us some more information into this issue. Thanks

Oh, I also forgot to mention that it might be worth trying different pyvmomi versions as well, possibly downgrading to an older version. That might help narrow this down. Thanks.

@Ch3LL
I customized your qa-test-vm for my environment.

qa-test-vm:
  provider: vc01
  clonefrom: centos7t01template
  num_cpus: 1
  memory: 2GB
  devices:
    deviceChange:
    disk:
      Hard disk 1:
        size: 30
  resourcepool: Resources
  cluster: Dev_Cluster
  datastore: vmware_nfs_dev
  datacenter: test_Datacenter
  folder: TestVMs
  #host: host.net - no need to set this here; the VM will be assigned to one of the hosts in the cluster
  template: False
  power_on: True
  deploy: True
  ssh_username: root
  password: test1234
  wait_for_ip_timeout: 30

Here is the output from "salt-cloud -l debug --profile qa-test-vm centos7t04" command.

<snipped>
[DEBUG   ] Sending event: tag = salt/cloud/centos7t04/creating; data = {'profile': 'qa-test-vm', 'event': 'starting create', '_stamp': '2016-12-22T09:06:54.405729', 'name': 'centos7t04', 'provider': 'vc01:vmware'}
[DEBUG   ] Setting cpu to: 1
[DEBUG   ] Setting memory to: 2048 MB
[DEBUG   ] @Ch3LL: specs are the following: {'nics_map': [], 'device_specs': [None]}
[ERROR   ] There was a profile error: Required field "deviceChange" not provided (not @optional)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 284, in run
    self.config.get('names')
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1458, in run_profile
    ret[name] = self.create(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1288, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 2497, in create
    config_spec.deviceChange = specs['device_specs']
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 537, in __setattr__
    CheckField(self._GetPropertyInfo(name), val)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 954, in CheckField
    CheckField(itemInfo, it)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 915, in CheckField
    raise TypeError('Required field "%s" not provided (not @optional)' % info.name)
TypeError: Required field "deviceChange" not provided (not @optional)
[root@salt01t ~]#

Trying different versions of pyvmomi didn't help either.
I tried versions 6.0.0, 6.0.0.2016.4, 6.0.0.2016.6, and 6.5.

[root@salt01t ~]# pip install pyvmomi==
Collecting pyvmomi==
  Could not find a version that satisfies the requirement pyvmomi== (from versions: 5.5.0-2014.1, 5.5.0-2014.1.1, 5.1.0, 5.5.0, 5.5.0.2014.1.1, 6.0.0, 6.0.0.2016.4, 6.0.0.2016.6, 6.5)
No matching distribution found for pyvmomi==
[root@salt01t ~]#

From https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
it looks like "deviceChange" really needs to be provided somewhere for the qa-test-vm profile.
I don't know where or how deviceChange needs to be specified.
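
For context, "deviceChange" is not a profile option at all: it is a field on pyVmomi's vim.vm.ConfigSpec that the vmware driver itself fills in from the "devices:" section of the profile, so the extra "deviceChange:" key added under devices: above will most likely just be ignored by the driver. A minimal sketch of what a valid entry looks like (assuming pyVmomi is importable; the values are placeholders, and this is not the driver's actual code):

from pyVmomi import vim

# One device-change entry: a VirtualDeviceSpec wrapping the device being edited.
disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.key = 2000                        # placeholder key of an existing disk
disk_spec.device.capacityInKB = 30 * 1024 * 1024   # 30 GB

config_spec = vim.vm.ConfigSpec()
config_spec.deviceChange = [disk_spec]             # a list of specs, never [None]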

I have similar results to @tjyang

With no devices being loaded, it works OK.

salt-cloud -p base-linux web02 -l debug
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG   ] Including configuration from '/etc/salt/cloud.providers.d/vmware.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.providers.d/vmware.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/base.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/base.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/elk.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/elk.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/haproxy.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/haproxy.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/logstash.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/logstash.conf
[DEBUG   ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO    ] salt-cloud starting
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG   ] LazyLoaded rackspace.reboot
[DEBUG   ] LazyLoaded openstack.list_locations
[DEBUG   ] LazyLoaded rackspace.list_locations
[DEBUG   ] Could not LazyLoad vmware.optimize_providers: 'vmware.optimize_providers' is not available.
[DEBUG   ] The 'vmware' cloud driver is unable to be optimized.
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG   ] LazyLoaded rackspace.reboot
[DEBUG   ] LazyLoaded openstack.list_locations
[DEBUG   ] LazyLoaded rackspace.list_locations
[DEBUG   ] Generating minion keys for 'web02'
[DEBUG   ] LazyLoaded cloud.fire_event
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = salt/cloud/web02/creating; data = {'profile': 'base-linux', 'event': 'starting create', '_stamp': '2016-12-22T09:14:38.515294', 'name': 'web02', 'provider': 'vmware:vmware'}
[DEBUG   ] Virtual hardware version already set to vmx-11
[DEBUG   ] Setting cpu to: 2
[DEBUG   ] Setting memory to: 2048 MB
[DEBUG   ] clone_spec set to:
(vim.vm.CloneSpec) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   location = (vim.vm.RelocateSpec) {
      dynamicType = <unset>,
      dynamicProperty = (vmodl.DynamicProperty) [],
      service = <unset>,
      folder = <unset>,
      datastore = 'vim.Datastore:datastore-12',
      diskMoveType = <unset>,
      pool = 'vim.ResourcePool:resgroup-20',
      host = <unset>,
      disk = (vim.vm.RelocateSpec.DiskLocator) [],
      transform = <unset>,
      deviceChange = (vim.vm.device.VirtualDeviceSpec) [],
      profile = (vim.vm.ProfileSpec) []
   },
   template = false,
   config = (vim.vm.ConfigSpec) {
      dynamicType = <unset>,
      dynamicProperty = (vmodl.DynamicProperty) [],
      changeVersion = <unset>,
      name = <unset>,
      version = <unset>,
      uuid = <unset>,
      instanceUuid = <unset>,
      npivNodeWorldWideName = (long) [],
      npivPortWorldWideName = (long) [],
      npivWorldWideNameType = <unset>,
      npivDesiredNodeWwns = <unset>,
      npivDesiredPortWwns = <unset>,
      npivTemporaryDisabled = <unset>,
      npivOnNonRdmDisks = <unset>,
      npivWorldWideNameOp = <unset>,
      locationId = <unset>,
      guestId = <unset>,
      alternateGuestName = <unset>,
      annotation = <unset>,
      files = <unset>,
      tools = <unset>,
      flags = <unset>,
      consolePreferences = <unset>,
      powerOpInfo = <unset>,
      numCPUs = 2,
      numCoresPerSocket = <unset>,
      memoryMB = 2048,
      memoryHotAddEnabled = <unset>,
      cpuHotAddEnabled = <unset>,
      cpuHotRemoveEnabled = <unset>,
      virtualICH7MPresent = <unset>,
      virtualSMCPresent = <unset>,
      deviceChange = (vim.vm.device.VirtualDeviceSpec) [],
      cpuAllocation = <unset>,
      memoryAllocation = <unset>,
      latencySensitivity = <unset>,
      cpuAffinity = <unset>,
      memoryAffinity = <unset>,
      networkShaper = <unset>,
      cpuFeatureMask = (vim.vm.ConfigSpec.CpuIdInfoSpec) [],
      extraConfig = (vim.option.OptionValue) [],
      swapPlacement = <unset>,
      bootOptions = <unset>,
      vAppConfig = <unset>,
      ftInfo = <unset>,
      repConfig = <unset>,
      vAppConfigRemoved = <unset>,
      vAssertsEnabled = <unset>,
      changeTrackingEnabled = <unset>,
      firmware = <unset>,
      maxMksConnections = <unset>,
      guestAutoLockEnabled = <unset>,
      managedBy = <unset>,
      memoryReservationLockedToMax = <unset>,
      nestedHVEnabled = <unset>,
      vPMCEnabled = <unset>,
      scheduledHardwareUpgradeInfo = <unset>,
      vmProfile = (vim.vm.ProfileSpec) [],
      messageBusTunnelEnabled = <unset>,
      crypto = <unset>,
      migrateEncryption = <unset>
   },
   customization = <unset>,
   powerOn = true,
   snapshot = <unset>,
   memory = <unset>
}
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = salt/cloud/web02/requesting; data = {'_stamp': '2016-12-22T09:14:39.445995', 'event': 'requesting instance', 'kwargs': {'profile': 'base-linux', 'domain': 'example.com', 'dns_servers': ['8.8.8.8', '8.8.4.4'], 'power_on': True, 'minion': {'master': '10.150.0.14'}, 'ssh_username': 'root', 'insecure': True, 'driver': 'vmware:vmware', 'resourcepool': 'Resources', 'cluster': 'G9 Blades', 'verify_ssl': False, 'priv_key': '-----BEGIN RSA PRIVATE KEY-----\nMIIJKQIBAAKCAgEAvpw3mdGz9oIIMHJGc72FnLoOpmIqMMZtDiZsNi//kmY+qZ+T\nFWNpHMuHBADYsx+hzCEYtlwGHvHixKpPDjxIVBGzo+SDej8OD/U6wz/s3wuzPyds\nPIkvIlMnGbAySJQCFEcTCYy93xR0iQmJZPE2p9j1iEn817uDCE7Dg5ldTMBz4CWx\nTky9r+4U7DUkYQDykm47Fz9iGq/OSaAtvh8yQwOSDu/wcnyPRUEWnuWjY9bTk+4C\nDAG7EH0cbqw5t9HjbHwO1t7eozcCgjlkQ5IeG9k0UKYaylPqS0rPo8LdmwsvWvDf\nxo8kKlFzb0SwW1sdPrGMCSEeGPS/hpZslXhJllv7r8lXOOckw0vkHGnv0fQyKD2R\nksfX8BPFRm+r6Qee005qe6WvgjSmM5wTE2PtzTFO55oY/FJ0muIubHQUxLfAzfhU\nBcZlo8xry9MsO+yJYnlQ2eEzoWsSDsKVIq2nQPsWM+OEriHlsIzPd7/QJPrk8uKn\nKgqNpbkkiCaXQyejlVsPIRN6+lWqph2pZbRH/8ZZx+IL/KCq1TF4f7r6sg+y2H9K\nvRZluP1Y9+4us/SMLmjclmR6nhbovm//X/QrnOrUIYPZxrsi3+xbkcipmje0LMQN\nYImrSBaTkqsot/xwu7BBTIAwixtd4dOFqpQAmrFlHY33zv/xHFlTxHLKkBMCAwEA\nAQKCAgAW18XaC0WT7zVoCOnkiPvwMmP7EJyZx83d+kDRpaLtOo+b6GHKGGXKa6G8\nmxVdMhdDzVuWzyR0pOxWQUrEG/lXCeALbiFLjy7yPqNSLuEGQfKzSNgx0QbzCCAR\ndgowpHwzTek8Jb/DckIdAiWTEU1JhDi4opVIjVJAtp+7r698uTupyxqalZvN76xW\nOnAnp+NqJQez24a4SAVi0W1p8XcmjND+NUcNkVviItDBLigMmPw2KDnakoKGkVfM\n85rfKqokoCFvIrpvuPH2BUa8ylANalRYre9b+/6CE28a4wmHwSllrqRCuXjOEYDR\n/ttbt2mjz8dGeM+QkhQ53uEYcJfw1gReoZzLb7LLHy9P1y2lvCcYu/xu5qjeR7lk\no/9Pm7depJatZ9mbCcSaiEbrzi34UMPFPLV2Gmj9t2T/Yb5QZ0VwK/ZpwYEqdR3H\nPTgFzfrgZ2g5qirQl2DegJyypcSSfxdi84HnmY4Q8MQ/VpW46Nok58Ce35IaqaFC\n7/dJXG3lsZ64ElOLbL7FfrtL9wc6G+1dZJaGeSlwYjfpvFR+zNVt1c0bo0OzX8Gr\nJUisEmvK82bmqvyxWHtNcvEo59A6xqHQVFJYmtvbxsbg50nau7ceKg7e7GjRB+4m\nTUoX/74twYCQKgxvcuXSbhy2w/cDQdCpwJWnKtMOOeoL+yN9oQKCAQEA0Z+ybXXG\nWTLRkc6cpJuDa2mtrYz+YvLTOMRv/WQakwiozrl8d1d0rAfLF82WyhSJqm9Ef6CM\n57E2lORpyzDeAN3gx9a4Y95QRlGP8nYevt7iVjSp6ME3eN1ZBwDq0QzKneqx1VeJ\nSGs0X0Kum3l88/hwYKL21TmJ7vAxpX0++rxa2g0Yy4mZlCw6XsEf/CvsdBD42Jv5\nn/RJ3SLd+G2D4YaeinaEHzlV5TjomcQnpCzlLo+ybH33Odd8xk6iBPGz++HukBIe\nMWAR4wA5BTU92pDQxQZk0fQHCKFZlAq17woe6CuYqVkGrkrPjKVyuVF9dCxu/lME\nox131XqIWmOweQKCAQEA6MepddhJZ1hlTxfxYWkB4LJqbOhjiAoszsIdAGVI+oUC\naJI6OcjNV5uYZwThi0DmN2eOv6NvGGSaEiOF7XrPhTePmLVwlwqenPn0EkeVhLy9\nu3Ad5nBeRVciSl7aGo8cWREa1LdgDzSmltuYvnpEM++ENMg0gYDi404RwRxR5AuZ\nslfVAgG/qaDEnZHO3iRAlzUDYsAHtOZ5N8WQaV1LV4+NwA9NfPbYdBLtZmoc3n19\nRn7jhBm3MBkU3prHKOHRw1CgqYG+qdG2astJswCEmZLnI03gr/d79i270F+csAsa\nN4x9MLgMCtt8icnUWkR9PJJ2U7BFnuv9RZVgK/nZ6wKCAQEAivVGHnGYTsD1U6aX\nCDde4vFnBEkWyRkXE+aEJoEZbKas4HztGV+MJA32f4z1jHgY2jZwPfp77Yr4F7Ni\nviNix0hOHaslCG7y3+ppddz/fJ/bgjHfAEA1OZXh0la1UmccWZqe0EH661rFmPBn\nNwFN55ylQipFXguKeC4Zew6PlT8PKsTLzwEkIak/+Fglj9C+KiKmE1EyJOqXnFPk\nuS4/4lyO9FKkOt6TJiSXbHcvoBFyy27OZEUMgfdq6zptBMIFAdA/iJm6EhkRQl1/\nbwhgPGcLPdCDPPp7PylWbGC9Qfx/iIB97qTpXNiSxTVX26k9dKmP2l/GDysVqRpV\nETZMYQKCAQEAop6ZrFIlNaVzYQYBA2Qwg6Eg6GSQ5AD3vJmvWQ9pJFq3jAZb1vKJ\nQaLZnV6zkm0MZ7hY6VhrzEa7u+BFN9qMDyz5jF11Ao/QrymPcRXBRrH0enWg7dOi\niB6PPhV1mQhRbYedju1sljLaDpnq42bXLtEtMxKKW86GsvVfQeFe9EmGXikuDfDa\nzM4bjVjHhDkfRoMqklpFCAPauzOx52ndsJYBGSOXpq1sGer/HoUTFfvlANK0bxzn\n9RoQklLev5jCyggRtVmGsWxoW8MZAYxjFkaiYu+NAGGMoDbi9ndVJ0caUaQ78UUi\nZJNhNYFicI1YUHChaWXDvXpvvaTVQuORcQKCAQAdoLxcEr8brKrx+n8yZ1FzhH4E\nKe6TBxPvjfsTd9v+0iH0b8X/suzkA2VZ1avjBbx6W5Q3dP9bCoUPqUdxosDSI2SA\nxWG9KTnSqeyBipon9srdMDrkol4JAYqmTf639Aceux6k8hkq/yD/bWoSkiJRP1x1\nbG7aUaQH7Nmfust9+hp+fx0TcWj3EeVguQ
z9c0hFkHiVBDuEq2/OHbjE/TYD0Y/b\npOZeW5xcoExzA42WcUmEFk7SBKkoDhWlB/hGVjPMNOyeO6iwB5lr9SeLc/EEblAj\nl8cLya/u7FUteU44ielRv0BVQ6qiqyQ5lLo0z2O85MOW9NnxEGessgcjUQ8K\n-----END RSA PRIVATE KEY-----', 'user': 'NOR\\radleyN', 'customization': True, 'datastore': 'VMFS3_PoolB2', 'password': 'vizolution', 'pub_key': '-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAvpw3mdGz9oIIMHJGc72F\nnLoOpmIqMMZtDiZsNi//kmY+qZ+TFWNpHMuHBADYsx+hzCEYtlwGHvHixKpPDjxI\nVBGzo+SDej8OD/U6wz/s3wuzPydsPIkvIlMnGbAySJQCFEcTCYy93xR0iQmJZPE2\np9j1iEn817uDCE7Dg5ldTMBz4CWxTky9r+4U7DUkYQDykm47Fz9iGq/OSaAtvh8y\nQwOSDu/wcnyPRUEWnuWjY9bTk+4CDAG7EH0cbqw5t9HjbHwO1t7eozcCgjlkQ5Ie\nG9k0UKYaylPqS0rPo8LdmwsvWvDfxo8kKlFzb0SwW1sdPrGMCSEeGPS/hpZslXhJ\nllv7r8lXOOckw0vkHGnv0fQyKD2RksfX8BPFRm+r6Qee005qe6WvgjSmM5wTE2Pt\nzTFO55oY/FJ0muIubHQUxLfAzfhUBcZlo8xry9MsO+yJYnlQ2eEzoWsSDsKVIq2n\nQPsWM+OEriHlsIzPd7/QJPrk8uKnKgqNpbkkiCaXQyejlVsPIRN6+lWqph2pZbRH\n/8ZZx+IL/KCq1TF4f7r6sg+y2H9KvRZluP1Y9+4us/SMLmjclmR6nhbovm//X/Qr\nnOrUIYPZxrsi3+xbkcipmje0LMQNYImrSBaTkqsot/xwu7BBTIAwixtd4dOFqpQA\nmrFlHY33zv/xHFlTxHLKkBMCAwEAAQ==\n-----END PUBLIC KEY-----', 'esxi_host_ssl_thumbprint': '54:ED:32:2D:B3:EC:CF:72:E7:62:58:7E:65:1B:EE:88:B0:6B:20:09', 'name': 'web02', 'clonefrom': 'Gold_Centos7_Template', 'url': '192.168.1.137', 'num_cpus': 2, 'deploy': True, 'hardware_version': 11, 'devices': None, 'inline_script': None, 'datacenter': 'Wales', 'template': False, 'memory': '2GB', 'folder': 'Project Pip', 'os': 'bootstrap-salt', 'image': 'rhel7_64Guest'}}
[INFO    ] Creating web02 from template(Gold_Centos7_Template)
[INFO    ] [ web02 ] Waiting for clone task to finish [0 s]
[INFO    ] [ web02 ] Waiting for clone task to finish [5 s]
[INFO    ] [ web02 ] Waiting for clone task to finish [10 s]
[INFO    ] [ web02 ] Waiting for clone task to finish [15 s]
[INFO    ] [ web02 ] Successfully completed clone task in 16 seconds
[INFO    ] [ web02 ] Waiting for VMware tools to be running [0 s]
[INFO    ] [ web02 ] Waiting for VMware tools to be running [5 s]
[INFO    ] [ web02 ] Waiting for VMware tools to be running [10 s]
[INFO    ] [ web02 ] Waiting for VMware tools to be running [15 s]
[INFO    ] [ web02 ] Waiting for VMware tools to be running [20 s]
[INFO    ] [ web02 ] Successfully got VMware tools running on the guest in 23 seconds

With a hard disk device, the build fails.

$ salt-cloud -p base-linux web02 -l debug
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG   ] Including configuration from '/etc/salt/cloud.providers.d/vmware.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.providers.d/vmware.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/base.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/base.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/elk.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/elk.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/haproxy.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/haproxy.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/logstash.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/logstash.conf
[DEBUG   ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO    ] salt-cloud starting
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG   ] LazyLoaded rackspace.reboot
[DEBUG   ] LazyLoaded openstack.list_locations
[DEBUG   ] LazyLoaded rackspace.list_locations
[DEBUG   ] Could not LazyLoad vmware.optimize_providers: 'vmware.optimize_providers' is not available.
[DEBUG   ] The 'vmware' cloud driver is unable to be optimized.
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG   ] LazyLoaded rackspace.reboot
[DEBUG   ] LazyLoaded openstack.list_locations
[DEBUG   ] LazyLoaded rackspace.list_locations
[DEBUG   ] Generating minion keys for 'web02'
[DEBUG   ] LazyLoaded cloud.fire_event
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = salt/cloud/web02/creating; data = {'profile': 'base-linux', 'event': 'starting create', '_stamp': '2016-12-22T09:17:57.907928', 'name': 'web02', 'provider': 'vmware:vmware'}
[DEBUG   ] Virtual hardware version already set to vmx-11
[DEBUG   ] Setting cpu to: 2
[DEBUG   ] Setting memory to: 2048 MB
[DEBUG   ] specs are the following: {'nics_map': [], 'device_specs': [None]}
[ERROR   ] There was a profile error: Required field "deviceChange" not provided (not @optional)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 284, in run
    self.config.get('names')
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1458, in run_profile
    ret[name] = self.create(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1288, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 2497, in create
    config_spec.deviceChange = specs['device_specs']
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 537, in __setattr__
    CheckField(self._GetPropertyInfo(name), val)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 954, in CheckField
    CheckField(itemInfo, it)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 915, in CheckField
    raise TypeError('Required field "%s" not provided (not @optional)' % info.name)
TypeError: Required field "deviceChange" not provided (not @optional)

With a network device being loaded, it also fails.

 salt-cloud -p base-linux web02 -l debug
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG   ] Including configuration from '/etc/salt/cloud.providers.d/vmware.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.providers.d/vmware.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/base.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/base.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/elk.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/elk.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/haproxy.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/haproxy.conf
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/logstash.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/logstash.conf
[DEBUG   ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO    ] salt-cloud starting
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG   ] LazyLoaded rackspace.reboot
[DEBUG   ] LazyLoaded openstack.list_locations
[DEBUG   ] LazyLoaded rackspace.list_locations
[DEBUG   ] Could not LazyLoad vmware.optimize_providers: 'vmware.optimize_providers' is not available.
[DEBUG   ] The 'vmware' cloud driver is unable to be optimized.
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG   ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG   ] LazyLoaded rackspace.reboot
[DEBUG   ] LazyLoaded openstack.list_locations
[DEBUG   ] LazyLoaded rackspace.list_locations
[DEBUG   ] Generating minion keys for 'web02'
[DEBUG   ] LazyLoaded cloud.fire_event
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event: tag = salt/cloud/web02/creating; data = {'profile': 'base-linux', 'event': 'starting create', '_stamp': '2016-12-22T09:20:23.577314', 'name': 'web02', 'provider': 'vmware:vmware'}
[DEBUG   ] Virtual hardware version already set to vmx-11
[DEBUG   ] Setting cpu to: 2
[DEBUG   ] Setting memory to: 2048 MB
[ERROR   ] There was a profile error: 'NoneType' object has no attribute 'keys'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 284, in run
    self.config.get('names')
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1458, in run_profile
    ret[name] = self.create(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1288, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 2495, in create
    specs = _manage_devices(devices, object_ref)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 670, in _manage_devices
    if device.deviceInfo.label in list(devices['network'].keys()):
AttributeError: 'NoneType' object has no attribute 'keys'

FYI: for our provisioning we create a template in VMware that has both a hard disk and a network card as part of the base build. salt-cloud is then used to assign IP addresses and larger hard disks.

I'm only mentioning this in case it makes a difference to the provisioning procedure.

Cheers,

Neil

Update.

Somehow I've managed to get the following command to work....

salt-cloud -p web02 web02

Using this conf file.

web02:                                                                                                                                                                                                                                                                        
  provider: vmware
  clonefrom: Gold_Centos7_Template

  ## Optional arguments
  num_cpus: 2
  memory: 2GB 
  devices:
    network:
      Network adapter 1:
        name: 'Project PIP VLAN 80' 
        switch_type: standard
        ip: 10.150.0.24
        gateway: [10.150.0.1]
        subnet_mask: 255.255.255.0
        adapter_type: vmxnet

  domain: example.com
  dns_servers:
    - 8.8.8.8
    - 8.8.4.4

  resourcepool: Resources
  cluster: "G9 Blades"

  datastore: VMFS3_PoolB2
  folder: 'Project Pip'
  datacenter: Wales
  template: False
  power_on: True

  deploy: True
  customization: True
  ssh_username: root
  password: ******

  hardware_version: 11
  image: rhel7_64Guest

  minion:
    master: 10.150.0.14

And these file versions.

Salt Version:                                                                                                                                                                                                                                                                 
           Salt: 2016.11.1
Dependency Versions:
           cffi: 1.9.1
       cherrypy: 3.2.2
       dateutil: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: 1.6.6
         Jinja2: 2.7.2
        libgit2: Not Installed
        libnacl: 1.5.0
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.8
   mysql-python: Not Installed
      pycparser: 2.17
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.7.5 (default, Nov  6 2016, 00:28:07)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 16.0.2
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.6
System Versions:
           dist: centos 7.3.1611 Core
        machine: x86_64
        release: 3.10.0-514.2.2.el7.x86_64
         system: Linux
        version: CentOS Linux 7.3.1611 Core

However the salt-cloud -m still doesn't work.

All I did was install an old version of pyVmomi (pyVmomi==6.0.0.2016.6) and then upgrade it!

pyvmomi (6.5)

Cheers,

Neil

@asmodeus70, thanks for the tip. On my other (slow) test VMware farm I am able to create a VM with the -p syntax after deleting the disk devices declaration (keeping only the network section).

Thanks to both of you for trying to troubleshoot this. I'm baffled as to why our tests didn't catch it, but I do have one theory based on @asmodeus70's result of installing an older version of pyvmomi and then upgrading it.

Can you try this: pip install pyVmomi==5.5.0.2014.1.1 and then pip install --upgrade pyvmomi? I think this might be what happened during our testing.

Either way, I'll see about getting another engineer assigned to this issue to get it fixed before 2016.11.2.

I'm hoping the above works for you guys though, so we can find a workaround in the meantime.

Oh also can either of you confirm by chance whether this was working in 2016.11.0 or not?

Also, I just realized that the log line I had you add is showing: [DEBUG ] @Ch3LL: specs are the following: {'nics_map': [], 'device_specs': [None]}, which I believe is the core of the issue. As stated before, I am not super knowledgeable in this area, but I am just trying to progress the issue so we can figure out a workaround before a fix is submitted, and I don't have access to a vSphere to figure this out.

I believe that since the value inside device_specs is None, when it reaches the pyVmomi code it then throws that TypeError.
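
As a minimal repro of that theory (a standalone sketch assuming pyVmomi is installed, not the Salt code path): an empty list is accepted for deviceChange, but a list containing None fails pyVmomi's CheckField with exactly this error.

from pyVmomi import vim

config_spec = vim.vm.ConfigSpec()
config_spec.deviceChange = []       # fine: an empty list passes CheckField

# This mirrors _manage_devices returning {'device_specs': [None]}:
# the None element raises
# TypeError: Required field "deviceChange" not provided (not @optional)
config_spec.deviceChange = [None]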

So we need to figure out why the call to _manage_devices is returning None for device_specs: https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/vmware.py#L2528

Since it looks like you are cloning from a template I believe this is where you would end in the code:
https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/vmware.py#L651

The only place I see it equaling None is here: https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/vmware.py#L656 and here https://github.com/saltstack/salt/blob/v2016.11.1/salt/cloud/clouds/vmware.py#L837, which was added in commit 0a3d266d. But that was in 2016.3.4 as well, so I'm not sure this is where the issue is. It possibly could be, though, because a lot of people used pyvmomi version 5 with 2016.3.4 due to an issue with SSL.

If someone is willing to add some print statements in some of those areas, we might be able to figure this out before an engineer who is a lot more knowledgeable than I am can get to this and fix it. Just to clarify: I might be completely wrong in the information above. Just speculating.

@Ch3LL, I will give it a try by adding print statements at those function calls.

@Ch3LL, in your example of inserting a debug statement you are using vmware.py from the develop branch. It is kind of hard for me to follow.

So far my workaround is to not use "devices:" in the profile, which still allows me to create VMs using my own CentOS 7 template.

Is it possible to have a SaltStack engineer look at this issue after the holiday season is over?

@tjyang, @asmodeus70 I've got the same issue; without the 'devices' section it works correctly.

@fishhead108, glad you got the workaround. My CentOS templates already have the drive layout created before they are turned into templates. It would be great to move drive layout creation out of the VM template stage once the "devices" issue is resolved.

@tjyang @asmodeus70 @fishhead108 I have been able to duplicate this issue in my environment:

[DEBUG   ] Using datastore used by the template suseleap422
[DEBUG   ] Setting cpu to: 1
[DEBUG   ] Setting memory to: 256 MB
[ERROR   ] There was a profile error: Required field "deviceChange" not provided (not @optional)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 284, in run
    self.config.get('names')
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1458, in run_profile
    ret[name] = self.create(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1288, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/vmware.py", line 2496, in create
    config_spec.deviceChange = specs['device_specs']
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 537, in __setattr__
    CheckField(self._GetPropertyInfo(name), val)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 954, in CheckField
    CheckField(itemInfo, it)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 915, in CheckField
    raise TypeError('Required field "%s" not provided (not @optional)' % info.name)
TypeError: Required field "deviceChange" not provided (not @optional)

This was easy to duplicate. The image that I was cloning from had a 16 GB disk and I set my size to 10. I guessed this would be the cause after spending a lot of time looking at the section that adds disk devices and realizing that the only thing it really checks is sizes, and I ended up duplicating the error on my first try.

Would you please double-check and tell me what you have the size set to in your devices section, and how big that disk actually is in the template that you are cloning from?
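
If it helps with that check, here is a small pyVmomi sketch that prints the current disk sizes of a template VM. The vCenter host, credentials, and template name are placeholders, and certificate verification is disabled, so treat it as a lab-only illustration:

import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate verification
si = SmartConnect(host='vcenter.example.com', user='admin', pwd='secret', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name != 'Gold_Centos7_Template':   # placeholder template name
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                size_gb = dev.capacityInKB / (1024.0 * 1024.0)
                print('{0}: {1:.1f} GB'.format(dev.deviceInfo.label, size_gb))
    view.DestroyView()
finally:
    Disconnect(si)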

@techhat, thanks for the good find. Here are my test results.

  1. centos7template40G01 is a 40 GB template, configured with the following (wrong) 20 GB disk size:
vc01-centos7-40G:
  provider: vc01
  clonefrom: centos7template40G01
  num_cpus: 2
  memory: 4GB
  devices:
    disk:
      Hard disk 1:
        size: 20
<snipped>
  2. I am getting the following expected error message:
[me@salt01 cloud.profiles.d]$ sudo salt-cloud -p vc01-centos7-40G centos7t01-40G-by-saltcloud
[ERROR   ] There was a profile error: Required field "deviceChange" not provided (not @optional)
[me@salt01 cloud.profiles.d]$

  3. When I change the size to 50 or 60 GB, I get the following error message:
[me@salt01 cloud.profiles.d]$ sudo salt-cloud -p vc01-centos7-40G centos7t01-40G-by-saltcloud
[ERROR   ] Error creating centos7t01-40G-by-saltcloud: (vim.fault.NoPermission) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = 'Permission to perform this operation was denied.',
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   object = 'vim.Folder:group-v286',
   privilegeId = 'VirtualMachine.Config.EditDevice'
}
centos7t01-40G-by-saltcloud:
    ----------
    Error:
        Error creating centos7t01-40G-by-saltcloud: (vim.fault.NoPermission) {
           dynamicType = <unset>,
           dynamicProperty = (vmodl.DynamicProperty) [],
           msg = 'Permission to perform this operation was denied.',
           faultCause = <unset>,
           faultMessage = (vmodl.LocalizableMessage) [],
           privilegeId = 'VirtualMachine.Config.EditDevice'
        }

  4. If I take out the following 3 lines:
    disk:
      Hard disk 1:
        size: 20

I can create the VM without issue.

  5. Here is the version info.
Salt Version:
            Salt: 2016.11.2

Dependency Versions:
 Apache Libcloud: 0.20.1
            cffi: 0.8.6
        cherrypy: 3.2.2
        dateutil: 1.5
           gitdb: 0.6.4
       gitpython: 1.0.1
           ioflo: 1.3.8
          Jinja2: 2.8
         libgit2: 0.21.0
         libnacl: 1.4.3
        M2Crypto: 0.21.1
            Mako: 0.8.1
    msgpack-pure: Not Installed
  msgpack-python: 0.4.8
    mysql-python: 1.2.3
       pycparser: 2.14
        pycrypto: 2.6.1
          pygit2: 0.21.4
          Python: 2.7.5 (default, Sep 15 2016, 22:37:39)
    python-gnupg: 0.3.8
          PyYAML: 3.11
           PyZMQ: 15.3.0
            RAET: Not Installed
           smmap: 0.9.0
         timelib: 0.2.4
         Tornado: 4.2.1
             ZMQ: 4.1.4

System Versions:
            dist: centos 7.2.1511 Core
         machine: x86_64
         release: 3.10.0-327.36.3.el7.x86_64
          system: Linux
         version: CentOS Linux 7.2.1511 Core


@tjyang looks like you were having permissions issues with VMware. Were you able to get those cleaned up?

@techhat, no. I tried to get support from the VMware side, but unfortunately my support contract doesn't cover API support (i.e. pyvmomi interacting with the VMware SOAP SDK). Hopefully others on this thread can confirm your fix.

If it is alright, I am going to throw my experience with this out into the ring as well.
I just recently updated to salt-cloud 2016.11.2 and am now experiencing this same issue. However, this only appears to happen if I only have network under the devices section. If I add a hard disk to the devices section while the network is still there, it works fine.

This does not work:

  - web2.stage:
      clonefrom: qa-centos-template
      resourcepool: qahost
      minion:
        master: saltmaster
        grains:
          role:
            - php
      devices:
        network:
          Network adapter 1:
            name: dev-network
            ip: XX.XX.XX.236
            gateway: [ XXX.XX.XX.1 ]
            subnet_mask: 255.255.255.0
            switch_type: standard
            adapter_type: vmxnet3

but this does:

  - web2.stage:
      clonefrom: qa-centos-template
      resourcepool: qahost
      minion:
        master: saltmaster
        grains:
          role:
            - php
      devices:
        disk:
          Hard disk 2:
            size: 500
            thin_provision: True
        network:
          Network adapter 1:
            name: dev-network
            ip: XX.XX.XX.236
            gateway: [ XXX.XX.XX.1 ]
            subnet_mask: 255.255.255.0
            switch_type: standard
            adapter_type: vmxnet3

I encountered the same error as everyone else here and removing the disk entries fixed it for me. However I think something else is going on other than what the PR addresses.

I was getting the same error, but the disk size I was specifying in my state was exactly the same size as the template (16 GB). I added a log statement in cloud/vmware.py and confirmed that the size in my template was the same size the state was giving. I was able to fix my problem just by adding a mode option to the disk.

        disk:
          # disk in the template, needs a mode line because the disk exists
          Hard disk 1:
            size: 16
            mode: independent_persistent
          # disk does not exists so doesn't need anything extra
          Hard disk 2:
            size: 40

Looking at that section of the code (starting at line 619), I see that a non-empty disk_spec object is created only in the following cases:

  • the template disk size is smaller than the state disk size (i.e. the created disk is expanded)
  • a mode is specified

In all other cases the disk_spec is None, and I think that is what triggers the error.

What is deceiving is that on line 622 a function is called to create a disk_spec:

 disk_spec = _get_size_spec(device, devices)

But the actual code for _get_size_spec only generates a spec if the template disk is smaller than the size requested in the state:

    # From salt/cloud/clouds/vmware.py (around line 812 in 2016.11.x):
    def _get_size_spec(device, devices):
        size_gb = float(devices['disk'][device.deviceInfo.label]['size'])
        size_kb = int(size_gb * 1024.0 * 1024.0)
        disk_spec = None
        if device.capacityInKB < size_kb:
            # expand the disk
            disk_spec = _edit_existing_hard_disk_helper(device, size_kb)
        return disk_spec

Long story short, I think a disk_spec is always needed, even if nothing changes.
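
As a sketch of that idea (not necessarily the fix that will actually land), either _get_size_spec could always return a spec, or the disk loop in _manage_devices could simply skip the None values so nothing empty ever reaches deviceChange. The latter guard, assuming the surrounding loop variables from vmware.py, would look roughly like:

# Hypothetical guard inside the disk-handling loop of _manage_devices():
disk_spec = _get_size_spec(device, devices)
if disk_spec is not None:
    device_specs.append(disk_spec)
# ...so config_spec.deviceChange never receives a None entry.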

This used to work without any issues, and in the newer version of Salt it doesn't, so this is definitely caused by a change made somewhere in the code. I'd recommend that merge requests to the driver be carefully reviewed before being merged if they include no tests. I updated to the latest driver version for testing and it is now breaking a lot of things. I'll take a look at this and attempt to fix it.
