Possibly similar to the issue described in the closed ticket #34991, adding a second Synology DSM instance fails.

arch | x86_64
-- | --
dev | false
docker | true
hassio | false
installation_type | Home Assistant Container
os_name | Linux
os_version | 4.4.59+
python_version | 3.8.5
timezone | UTC
version | 0.115.0
virtualenv | false
configuration.yaml: All configured via UI.
Nothing found in debug logs matching the time or failure message.
This issue is still present in 0.116.0.
Same here, trying to connect to virtual DSM.
On the DSM side, it shows successful login:
User [<home assistant user>] from [<home assistant ip>] logged in successfully via [DSM].
And my environment:
arch | x86_64
-- | --
chassis | vm
dev | false
docker | true
docker_version | 19.03.11
hassio | true
host_os | HassOS 4.14
installation_type | Home Assistant OS
os_name | Linux
os_version | 5.4.69
python_version | 3.8.5
supervisor | 249
timezone | UTC
version | 0.116.4
virtualenv | false
Hi @ThomasPrior,
hi @stkiller,
could you please provide more detailed information for both NAS systems, so that the issue can be reproduced?
In my environment (_0.116.4_) with a physical DS218play (_version: DSM 6.2.3-25426 Update 2, hostname: diskstation_) and a virtual XPEnology (_model: ds918x, version: DSM 6.2.3-25426, hostname: xdiskstation_) it works fine each time the admin user is used ... both systems are added and shown in Home Assistant properly.
regards,
Michael
Model Name: VirtualDSM
Hostname: vsynology
DSM: 6.2.3-25426 Update 2
Permissions: full administrator
Meanwhile, I have updated my comment with some more questions.
Please answer them, too (_and please for both NAS systems_).
regards,
Michael
I have tried both UI methods of getting the integration working with a virtual instance on three Home Assistant installs (1 venv, 2 Docker). No change in results.
The physical Synology is:
Model: DS920+
Hostname: SynologyDS920
DSM: 6.2.3-25426 Update 2
Permissions: full admin
Method: Auto discovered.
After analysing the code, the following conditions raise the "Missing data:..." exception:
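(The check lives in _login_and_fetch_syno_info(); reconstructed from the snippet quoted further down, it looks roughly like this:)

```python
# Any one of these being None/empty makes the config flow raise InvalidData,
# which surfaces in the UI as the "Missing data: ..." error.
if (
    not api.information.serial
    or api.utilisation.cpu_user_load is None
    or not api.storage.disks_ids
    or not api.storage.volumes_ids
    or not api.network.macs
):
    raise InvalidData
```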
Therefore, please check on the failing DSM whether the serial number, CPU load, disks, volumes and MAC addresses are all reported.
A volume is configured on the virtual instance, but no disks exist as it's virtual. A serial number is present and is unique to the virtual instance.
In my case the serial number, CPU usage and MAC address(es) should be fine. However, I mount one of the shares with the following command on startup:
```
mount -t nfs <base_station_ip>:/volume2/surveillance /volume1/surveillance
```
which is a hack that allows me to use a 'network share' in Surveillance Station. I guess I could have mounted a separate partition, but I didn't know at the time how much space I would need.
Thanks @mib1185 for taking a look at this 🙏
Let me know about your progress and the PR.
Unfortunately I can't test it myself, as I have "only" one NAS 😅
So, using a custom component with the copied source, I got this:
```python
# if (
#     not api.information.serial
#     or api.utilisation.cpu_user_load is None
#     or not api.storage.disks_ids
#     or not api.storage.volumes_ids
#     or not api.network.macs
# ):
#     raise InvalidData
_LOGGER.debug("HA!!!!!!!!!________________________________")
_LOGGER.debug(api.information.serial)
_LOGGER.debug(api.utilisation.cpu_user_load)
_LOGGER.debug(api.storage.disks_ids)
_LOGGER.debug(api.storage.volumes_ids)
_LOGGER.debug(api.network.macs)
```
to output this:
```
2020-10-28 15:57:49 DEBUG (SyncWorker_12) [custom_components.synology_dsm.config_flow] HA!!!!!!!!!________________________________
2020-10-28 15:57:49 DEBUG (SyncWorker_12) [custom_components.synology_dsm.config_flow] <string_serial>
2020-10-28 15:57:49 DEBUG (SyncWorker_12) [custom_components.synology_dsm.config_flow] 5
2020-10-28 15:57:49 DEBUG (SyncWorker_12) [custom_components.synology_dsm.config_flow] []
2020-10-28 15:57:49 DEBUG (SyncWorker_12) [custom_components.synology_dsm.config_flow] ['volume_1', 'volume_2']
2020-10-28 15:57:49 DEBUG (SyncWorker_12) [custom_components.synology_dsm.config_flow] [<string_mac>]
```
Taking into consideration that this is a virtual DSM, it makes sense that disks_ids is an empty list.
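A quick plain-Python illustration (not integration code) of why the empty disks list alone is enough to trip the check:

```python
# Values taken from the debug output above; an empty list is falsy in Python,
# so the disks condition alone triggers InvalidData on a virtual DSM.
disks_ids = []
volumes_ids = ["volume_1", "volume_2"]

print(not disks_ids)    # True  -> this alone raises InvalidData
print(not volumes_ids)  # False -> the volumes check would pass
```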
On a slightly unrelated note: while trying to find the source code, I first downloaded it from the current master and was able to reproduce #42493 on "0.116.4".
@stkiller thanks for the great analysis and the results.
Now the root cause is clear to me and I will think about a solution.
The easiest solution should be to simply remove the condition or not api.storage.disks_ids from _login_and_fetch_syno_info(), because all other usages of api.storage.disks_ids are conditional, not required, so no regressions are expected (_of course this has to be tested_).
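For illustration only (a sketch of the idea, not the actual PR), the relaxed check would look like this:

```python
# Hypothetical sketch: the same guard with the disks condition dropped, so a
# virtual DSM that reports no physical disks can still be set up.
if (
    not api.information.serial
    or api.utilisation.cpu_user_load is None
    or not api.storage.volumes_ids
    or not api.network.macs
):
    raise InvalidData
```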
I will create a new PR for that...
Just wanted to say thanks for looking into this and then so quickly putting forward a path to resolution.
Hi @ThomasPrior & @stkiller,
Does 0.107.1 solve the issue on your side?
In #42493 there is a complaint about authentication.
Do you also experience that?
Thanks.