After upgrading to the latest release (v0.109), all Google Cast devices disappeared. They were not added through the Integrations page, but were added and working through a manual cast: configuration.
Disabling the manual configuration, trying to add them through Integrations, restarting, deleting, etc. doesn't make any difference.
configuration.yaml
cast:
  media_player:
    - host: !secret chromecast_living_ip
    - host: !secret chromecast_loft_ip
    - host: !secret google_home_mini_ip
    - host: !secret google_nest_hub_ip
    - host: !secret google_home_mini_2_ip
    - host: !secret google_lenovo_clock_ip
I'm not sure if this is related to https://github.com/home-assistant/core/pull/33922, as this is the only change I've seen in this release that has anything to do with the Cast integration.
I don't use mDNS and don't have the option to use it; all devices are on the same network and subnet.
Try running the HA Docker container with --net=host / network_mode: host.
See https://github.com/home-assistant/core/issues/34874#issuecomment-621610878
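For a Docker Compose setup, host networking looks something like this (a minimal sketch; the image tag, timezone, and config path are placeholders, not from this thread):

  version: "3"
  services:
    homeassistant:
      image: homeassistant/home-assistant:latest
      restart: always
      network_mode: host   # lets the container receive mDNS multicast directly
      environment:
        - TZ=Europe/Zagreb          # placeholder timezone
      volumes:
        - /path/to/config:/config   # placeholder config path

Note that with network_mode: host there is no ports: section; as comes up further down in this thread, port mappings are ignored with host networking anyway.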
It is already running in host mode, using the host IP address.
It's not running in privileged mode, though.
Hey there @emontnemery, mind taking a look at this issue as it's been labeled with an integration (cast) you are listed as a code owner for? Thanks!
(message by CodeOwnersMention)
@BeardedTinker please share your docker configuration.
Please also share a startup log with these logs enabled:
https://github.com/home-assistant/core/issues/34874#issuecomment-621409094
Here is (I think) the last Docker command I used:
sudo docker run -itd --name="home-assistant" --restart=always -v /volume1/docker/home-assistant:/config -p 8123:8123 -e "TZ=Europe/Zagreb" --net=host homeassistant/home-assistant:latest
And for the log, here is a snippet with the lines where any of those components are mentioned:
2020-05-01 10:40:43 INFO (MainThread) [homeassistant.setup] Setting up cast
2020-05-01 10:40:43 INFO (MainThread) [homeassistant.setup] Setup of domain cast took 0.0 seconds.
2020-05-01 10:40:46 INFO (MainThread) [homeassistant.setup] Setting up media_player
2020-05-01 10:40:46 INFO (MainThread) [homeassistant.components.media_player] Setting up media_player.emby
2020-05-01 10:40:46 INFO (MainThread) [homeassistant.setup] Setting up zeroconf
2020-05-01 10:40:48 INFO (MainThread) [homeassistant.setup] Setup of domain media_player took 1.9 seconds.
2020-05-01 10:40:48 INFO (MainThread) [homeassistant.setup] Setup of domain zeroconf took 1.4 seconds.
2020-05-01 10:40:48 INFO (MainThread) [homeassistant.components.media_player] Setting up media_player.cast
2020-05-01 10:40:49 DEBUG (SyncWorker_18) [homeassistant.components.cast.discovery] Starting internal pychromecast discovery.
2020-05-01 10:41:47 INFO (SyncWorker_10) [homeassistant.components.zeroconf] Starting Zeroconf broadcast
2020-05-01 10:41:47 ERROR (MainThread) [homeassistant.core] Error doing job: Exception in callback EventBus.async_listen_once.<locals>.onetime_listener(<Event homeassistant_start[L]>) at /usr/src/homeassistant/homeassistant/core.py:665
Traceback (most recent call last):
File "/usr/local/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/src/homeassistant/homeassistant/core.py", line 677, in onetime_listener
self._hass.async_run_job(listener, event)
File "/usr/src/homeassistant/homeassistant/core.py", line 384, in async_run_job
target(*args)
File "/usr/src/homeassistant/homeassistant/components/template/binary_sensor.py", line 164, in template_bsensor_startup
self.async_check_state()
File "/usr/src/homeassistant/homeassistant/components/template/binary_sensor.py", line 276, in async_check_state
state = self._async_render()
File "/usr/src/homeassistant/homeassistant/components/template/binary_sensor.py", line 215, in _async_render
state = self._template.async_render().lower() == "true"
File "/usr/src/homeassistant/homeassistant/helpers/template.py", line 222, in async_render
return compiled.render(kwargs).strip()
File "/usr/local/lib/python3.7/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/usr/local/lib/python3.7/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/usr/local/lib/python3.7/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "<template>", line 1, in top-level template code
TypeError: '>' not supported between instances of 'NoneType' and 'int'
2020-05-01 10:41:48 WARNING (MainThread) [homeassistant.helpers.service] Unable to find referenced entities media_player.display_me
These are all the lines in the log that mention zeroconf, cast, or media_player. I've left the error in, but I think it's related to a template binary sensor, not Cast.
If I run HA with full debug logging, I can see my Chromecast devices (by IP address) listed through the Mikrotik integration.
@BeardedTinker According to the Docker documentation, the port option is incompatible with --net=host: https://docs.docker.com/compose/compose-file/#ports
Same issue for me, except manually entering devices no longer works either. I saw a Cast error in my log only once; it just said the devices couldn't be set up. I'm also on Docker.
Yes @emontnemery - the port flag is discarded when using --net=host; on first run there's a warning that it will be ignored.
Just to be sure, I've recreated the container without the -p flag, and the result is the same.
Still nothing is seen with auto-discovery, and nothing is added with manual configuration.
@BeardedTinker are Home Assistant and your Cast devices on the same network? If they're not, you need to set up mDNS forwarding; have a look here: https://github.com/home-assistant/core/issues/34968#issuecomment-622481934
All devices are on the same network (192.168.1.x range), same gateway, same subnet, same VLAN (i.e. no VLANs). Also, all cast devices have fixed IP addresses through DHCP reservations.
I have exactly the same issue: after upgrading to 0.109, all cast devices went offline. I tried removing them and adding them again, but it's not possible. The last version where it worked was 0.108.x.
I have the same issue, running HassOS on VirtualBox.
I have the same issue: after upgrading to 0.109, all cast devices went offline.
Docker Home Assistant Core (with --net=host), all devices on the same network, all devices with fixed IPs, no VLANs, no custom components installed (only HACS, latest version).
The last version where it worked was 0.108.x.
What is your host OS?
My host OS is Ubuntu Server 18.04
OK, so somehow mDNS is not working from within the container.
Can you try to run this script: https://github.com/home-assistant-libs/pychromecast/blob/master/examples/list_chromecasts.py both from the host and from within the container:
python list_chromecasts.py --show-debug
The script needs pychromecast and python-zeroconf.
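If getting the full script running is a hassle, a minimal zeroconf-only browse can serve as a sanity check. This is a sketch using python-zeroconf directly; it is not the pychromecast script itself:

  # Minimal mDNS browse for Cast devices (sketch; needs: pip3 install zeroconf)
  import socket
  import time
  from zeroconf import ServiceBrowser, Zeroconf

  class CastListener:
      def add_service(self, zc, type_, name):
          # Resolve the advertised service to its A record(s) and port (8009 for Cast)
          info = zc.get_service_info(type_, name)
          if info:
              addresses = [socket.inet_ntoa(a) for a in info.addresses]
              print("Found:", name, addresses, "port", info.port)

      def remove_service(self, zc, type_, name):
          print("Removed:", name)

      def update_service(self, zc, type_, name):
          pass  # required by newer zeroconf versions

  zc = Zeroconf()
  browser = ServiceBrowser(zc, "_googlecast._tcp.local.", CastListener())
  time.sleep(10)  # give devices a few seconds to answer the multicast query
  zc.close()

If this prints nothing on the host either, the problem is below Home Assistant (host firewall, switch, or router), not in the integration.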
Also, please share the Docker command line or Compose file.
I am not sure I'm doing what you request correctly; my knowledge is limited :-(. But these are the results:
Container: No Devices Found
Host:
Traceback (most recent call last):
  File "list_chromecast.py", line 7, in <module>
    import pychromecast
ModuleNotFoundError: No module named 'pychromecast'
The Docker command:
docker run -d \
--name="Homeassistant" \
--restart=always \
--net=host \
-v /home/XXXX/docker/homeassistant:/config \
-e TZ=Europe/XXXX \
homeassistant/home-assistant:latest
@danibercero your host is missing the required Python modules for the script.
You should be able to install them with pip3 install pychromecast zeroconf
Thanks! But the result on the host is the same: "No Devices Found" :-(
@danibercero OK, so mDNS is not finding the Chromecasts either on the host or in the container. Do you have any firewall settings which might block mDNS?
@emontnemery No, no firewall enabled. I use Pi-hole, but I have disabled it.
@emontnemery - I have fixed my setup. My Synology has had its local firewall on (always, for the last 4+ years), but after this update (probably the new pychromecast) it rendered the integration inoperable.
Adding firewall rules to allow traffic on UDP ports 1900 and 5353, plus (just to be safe) TCP ports 8008 and 8009 for the Google devices, enabled the integration once again.
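For anyone on a plain Linux host rather than a Synology, the equivalent rules might look like this (a sketch; substitute your own LAN subnet - note that 8008/8009 are ports on the cast devices themselves, so on a stateful firewall allowing established return traffic is usually enough):

  iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 5353 -j ACCEPT   # mDNS
  iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 1900 -j ACCEPT   # SSDP
  iptables -A INPUT -s 192.168.1.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT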
My setup was running in bridged mode; I changed it to host and it started working as well.
@BeardedTinker Great!
You're right, there is a change in Home Assistant 0.109 which makes working mDNS mandatory for a working setup with Google Cast devices.
I've been searching forums for a day and I don't know what to do. I don't have a firewall on my Ubuntu Server, mDNS should be enabled by default (I have avahi-daemon installed), and the Home Assistant Docker container is running with host networking.
I can ping the Chromecast from both the host and the container with the "ping Chromecast" command.
Everything seems to be correct, I am completely lost, I would appreciate any help.
Thanks!
For reference, I had the same issue and solved it by opening UDP port 5353 in my iptables:
iptables -A INPUT -s xx.xx.xx.xx/24 -p udp -m udp --dport 5353 -j ACCEPT
where xx.xx... is my network.
Another way to check for mDNS broadcasts is:
apt install avahi-utils
avahi-browse --all --ignore-local --resolve --terminate
@danibercero pinging the Chromecasts is not enough. The now-mandatory mDNS discovery works the other way around, i.e. the Chromecasts broadcast their details via mDNS, and your host running HA needs to be able to receive them on UDP port 5353.
What do your iptables rules look like? Does iptables -S contain any DROP?
Even if you did not set up iptables yourself, it might have been done just by installing Docker.
You're right, there is a change in Home Assistant 0.109 which makes working mDNS mandatory for a working setup with Google Cast devices.
@emontnemery to me this feels like a breaking change, especially given the issues (https://github.com/home-assistant/core/issues/34874, https://github.com/home-assistant/core/issues/34968, and this one) that have been opened about it in the last few days.
What do you think, should it be noted as a breaking change in the 0.109 release notes?
I'd also be curious to understand why this is now mandatory. Is it only related to https://github.com/home-assistant/core/pull/33922?
I much preferred the manual adding method that did not rely on mDNS. I totally get that this makes things much easier for less tech-savvy users, but it is much harder when you care about security and run separate subnets/VLANs.
@jakommo no avahi-browse results :-(
~$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 1883 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9000 -j ACCEPT
-A DOCKER -d 172.17.0.6/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5050 -j ACCEPT
-A DOCKER -d 172.17.0.7/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 1880 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 53 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p udp -m udp --dport 53 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
Hmm, not sure if the -P FORWARD DROP is at play here; I have it in my config as well and it works, but my setup looks different.
You could test by stopping the Docker service, flushing all iptables rules, and seeing if it works then.
If not, it might be worth checking whether your router/access point is dropping multicast traffic.
Please note iptables -S only shows rules for the default filter table.
Run the following to show rules for all tables:
sudo iptables -S ; echo ; echo 'nat:' ; echo ; sudo iptables -t nat -S ; echo ; echo 'mangle:' ; echo ; sudo iptables -t mangle -S; echo; echo 'raw:' ; echo ; sudo iptables -t raw -S; echo; echo 'security:' ; echo ; sudo iptables -t security -S
Another thing worth checking is if there is another process listening on port 5353:
sudo netstat -aeelup | grep mdns
@jakommo
Not relying on mDNS had the following issues:
The first two issues were more about code complexity and maintainability; the third one was the last straw.
I think it's a good idea to add this to the 0.109 release notes (although it's a bit late).
XX@homeassistant:~$ sudo iptables -S ; echo ; echo 'nat:' ; echo ; sudo iptables -t nat -S ; echo ; echo 'mangle:' ; echo ; sudo iptables -t mangle -S; echo; echo 'raw:' ; echo ; sudo iptables -t raw -S; echo; echo 'security:' ; echo ; sudo iptables -t security -S
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 1883 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9000 -j ACCEPT
-A DOCKER -d 172.17.0.6/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5050 -j ACCEPT
-A DOCKER -d 172.17.0.7/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 1880 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 53 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p udp -m udp --dport 53 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
nat:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 1883 -j MASQUERADE
-A POSTROUTING -s 172.17.0.5/32 -d 172.17.0.5/32 -p tcp -m tcp --dport 9000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.6/32 -d 172.17.0.6/32 -p tcp -m tcp --dport 5050 -j MASQUERADE
-A POSTROUTING -s 172.17.0.7/32 -d 172.17.0.7/32 -p tcp -m tcp --dport 1880 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 53 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p udp -m udp --dport 53 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 1883 -j DNAT --to-destination 172.17.0.3:1883
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 9000 -j DNAT --to-destination 172.17.0.5:9000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 5050 -j DNAT --to-destination 172.17.0.6:5050
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 1880 -j DNAT --to-destination 172.17.0.7:1880
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 89 -j DNAT --to-destination 172.17.0.2:80
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 53 -j DNAT --to-destination 172.17.0.2:53
-A DOCKER ! -i docker0 -p udp -m udp --dport 53 -j DNAT --to-destination 172.17.0.2:53
mangle:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
raw:
-P PREROUTING ACCEPT
-P OUTPUT ACCEPT
security:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
XX@homeassistant:~$ sudo netstat -aeelup | grep mdns
udp 0 0 0.0.0.0:mdns 0.0.0.0:* systemd-resolve 1493291 1583/systemd-resolv
udp 213504 0 0.0.0.0:mdns 0.0.0.0:* root 1333285 26773/python3
udp 213504 0 0.0.0.0:mdns 0.0.0.0:* root 1333283 26773/python3
udp 213504 0 0.0.0.0:mdns 0.0.0.0:* root 1333281 26773/python3
udp 0 0 0.0.0.0:mdns 0.0.0.0:* root 1333279 26773/python3
udp 213504 0 0.0.0.0:mdns 0.0.0.0:* root 1332274 26773/python3
udp 213504 0 0.0.0.0:mdns 0.0.0.0:* root 1332273 26773/python3
udp 213504 0 0.0.0.0:mdns 0.0.0.0:* root 1332272 26773/python3
udp 0 0 0.0.0.0:mdns 0.0.0.0:* root 1332266 26773/python3
udp 0 0 0.0.0.0:mdns 0.0.0.0:* avahi 1316505 25488/avahi-daemon:
udp6 0 0 [::]:mdns [::]:* systemd-resolve 1493293 1583/systemd-resolv
udp6 0 0 [::]:mdns [::]:* avahi 1316506 25488/avahi-daemon:
I just received good news!!
My server is an old wired laptop, configured with a fixed IP. I just turned on Wi-Fi (with DHCP) and... Cast devices appeared!
I read on some forum that there are routers that do not pass mDNS packets between Wi-Fi and cable.
Thank you very much for your help! And sorry for the lost time ;-) I will investigate whether it is a router problem or a LAN interface configuration issue.
I love Home Assistant and its community!
Great!
@danibercero Is the router problem the one described here: https://superuser.com/questions/730288/why-do-some-wifi-routers-block-multicast-packets-going-from-wired-to-wireless ?
A couple of additional tips should be added here: https://www.home-assistant.io/integrations/discovery/#mdns-and-upnp
Glad to hear you got it working @danibercero .
@emontnemery thanks for elaborating. It makes sense now.
Check firewall rules. What should be checked for?
AFAIK for mDNS discovery it should be enough to have incoming UDP 5353 open.
I'm not sure if UDP 1900 for SSDP is also needed; I don't have it open and my Google Homes are detected.
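On Ubuntu with ufw, for example, that single rule could be (an illustration; substitute your own LAN subnet):

  sudo ufw allow from 192.168.1.0/24 to any port 5353 proto udp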
Maybe @BeardedTinker knows more, since he mentioned it in https://github.com/home-assistant/core/issues/34931#issuecomment-623152128
Wi-Fi router may block mDNS multicast packets. Can this be substantiated?
I know that in UniFi APs multicast can be disabled (not sure what the default is, though), and client isolation might also cause it to fail.
Google has some general tips here: https://support.google.com/chromecast/thread/355932?hl=en - maybe that's the best link to use?
Some additional troubleshooting tips for the docs: https://github.com/home-assistant/home-assistant.io/pull/13291
Unfortunately, my issue isn't resolved by any of the above options. I'm running HA Supervised on Ubuntu 18.04. If mDNS were my issue, wouldn't I also be unable to reach homeassistant.local? I have always had UPnP disabled on my Netgear router, but that wouldn't affect mDNS, correct?
@Justahobby01
Please enable debug logging and share a log:
logger:
  default: info
  logs:
    homeassistant.components.cast: debug
    homeassistant.components.cast.media_player: debug
    homeassistant.components.zeroconf: debug
    pychromecast: debug
    pychromecast.discovery: debug
    pychromecast.socket_client: debug
    zeroconf: debug
Please also try to run the pychromecast script, both on the host and in the container:
https://github.com/home-assistant/core/issues/34931#issuecomment-623098863
I'm using Hass.io, but on the same network I have a Fedora server. From Fedora I can run:
[root@fsrv ~]# avahi-resolve-address 10.0.0.2
10.0.0.2 fsrv.local
[root@fsrv ~]# avahi-resolve-address 10.0.0.3
10.0.0.3 hassio.local
[root@fsrv ~]# avahi-resolve-address 10.0.0.16
Failed to resolve address '10.0.0.16': Timeout
So the problem seems to be my Chromecast device. Maybe it's because it is a first-generation device?
Cast firmware version: 1.36.159268
Country Code: CW
Mac Address: 6C:xx:xx:xx:xx:xx
IP Address: 10.0.0.16
All my other devices see it and can cast to it.
After rebooting everything, my Chromecast eventually showed up again. avahi-resolve-address still times out, but when running tcpdump filtering for mDNS, it eventually started to appear. I really don't know which of the things I did fixed it.
This closed ticket already warns about this issue; now it seems that Docker users can no longer use their Chromecast devices :( https://github.com/home-assistant/docker/issues/23
I can reproduce this by running list_chromecasts.py inside the container: it returns 0 devices. When running it on the server hosting the container, it returns all my devices...
it seems that Docker users can no longer use their Chromecast devices
That's not the case.
Please see the note here on what is needed to make mDNS work in a Docker container:
https://www.home-assistant.io/integrations/discovery/#mdns-and-upnp
This is also reflected in the guide for setting up HA in Docker: https://www.home-assistant.io/docs/installation/docker/
I am running the HA Docker container with net=host, but my Chromecast and other Google devices are in a separate VLAN, and I'm running an Avahi container. The net=host parameter prevents running HA in a separate Docker macvlan, so I can't run HA within the IoT VLAN. Somehow Avahi is not discovering the Google devices, and therefore HA isn't discovering them either.
@Martinvdm As explained here, https://www.home-assistant.io/integrations/discovery/#mdns-and-upnp, HA and the devices should be on the same network (including VLAN) for mDNS discovery to work.
If this is not the case, you need to set up mDNS forwarding.
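One common way to do that forwarding is Avahi's reflector mode, which repeats mDNS packets between interfaces/VLANs. A sketch, assuming the stock avahi-daemon package (the setting lives in the [reflector] section of its config file):

  # Enable the reflector in /etc/avahi/avahi-daemon.conf, then restart the daemon
  sudo sed -i 's/^#\?enable-reflector=.*/enable-reflector=yes/' /etc/avahi/avahi-daemon.conf
  sudo systemctl restart avahi-daemon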
@emontnemery I can confirm that it works now with the host network setting. I missed that part in the documentation when I set up HA, I guess, or it was added later :) Thank you!
I wasn't happy with using host networking for Home Assistant, and found that you can just run Avahi Reflector on either the host or another container with host networking. That will pass the mDNS packets to the Home Assistant container and the Chromecasts will show up again. I'm currently running https://hub.docker.com/r/kmlucy/docker-avahi alongside Home Assistant and everything works perfectly.
@johnnymijnhout You're right, the documentation has been updated very recently :)
@emontnemery I do understand, but mDNS is working just fine here. I think the best fix (for this and all the other Google Cast related issues on GitHub) is to only depend on mDNS when the cast devices are not listed as hosts in the configuration file. It seems logical that mDNS is needed when HA has to discover the cast devices, but when the hosts are filled in in the config file, requiring it seems illogical to me.
@Martinvdm the reason for relying on mDNS is explained here: https://github.com/home-assistant/core/issues/34931#issuecomment-623396088
I wasn't happy with using host networking for Home Assistant, and found that you can just run Avahi Reflector on either the host or another container with host networking.
This is the best solution, I think! I tried this and it works great. Now I only need to expose the Avahi container to the host network, and I can still limit Home Assistant by ports. It works great, thank you!
In case it helps anyone, I have continued investigating my problem. On my router, deactivating the IGMP snooping option makes the Cast devices visible on my wired LAN again, without having to activate the Wi-Fi.
@danibercero I do have this setting in my UniFi switch/controller. I do not have a USG. The strange thing is that my clients on the LAN can discover clients in the IoT VLAN, but the Home Assistant container can't, despite using Avahi.
I investigated this further. It does seem to be an issue with HA in Docker.
I am running Home Assistant with the net=host requirement. Some results:
I checked iptables and made sure these rules are in place:
sudo iptables -I INPUT -d 224.0.0.0/4 -j ACCEPT
sudo iptables -I FORWARD -d 224.0.0.0/4 -j ACCEPT
IGMP snooping is enabled in the switch (UniFi) and Multicast Enhancement is enabled.
Avahi is running between the IoT and LAN VLANs.
I can use the Chromecast and Google devices just fine with other devices in the house.
22:49:45.231428 IP (tos 0x0, ttl 255, id 35454, offset 0, flags [DF], proto UDP (17), length 386)
Chromecast-Ultra.domain.local.mdns > 224.0.0.251.mdns: [udp sum ok] 0*- [0q] 1/0/3 _googlecast._tcp.local. PTR Chromecast-Ultra-4b647692a2c9055eb97b8b18ace185ce._googlecast._tcp.local. ar: Chromecast-Ultra-4b647692a2c9055eb97b8b18ace185ce._googlecast._tcp.local. (Cache flush) TXT "id=4b647692a2c9055eb97b8b18ace185ce" "cd=C1A51BAA367D65F36523A1CAC0B55A3E" "rm=" "ve=05" "md=Chromecast Ultra" "ic=/setup/icon.png" "fn=Chromecast" "ca=200709" "st=0" "bs=FA8FCA7146AE" "nf=1" "rs=", Chromecast-Ultra-4b647692a2c9055eb97b8b18ace185ce._googlecast._tcp.local. (Cache flush) SRV 4b647692-a2c9-055e-b97b-8b18ace185ce.local.:8009 0 0, 4b647692-a2c9-055e-b97b-8b18ace185ce.local. (Cache flush) A 10.10.11.13 (358)
So my conclusion is that it seems to be something with Home Assistant, I think.
@Martinvdm if list_chromecasts.py does not work, it most likely means sending and/or receiving of mDNS multicast packets is not working correctly.
You have a rather complicated setup; can you test by temporarily putting the Chromecasts and Home Assistant on the same network, to make sure list_chromecasts.py works on the host and in the container?
Next, I suggest running Wireshark on both networks while again running list_chromecasts.py, to verify forwarding is working.
Edit: Please also check that there are no other rules blocking the traffic on the host, for example https://github.com/home-assistant/core/issues/34931#issuecomment-623394430
Thanks for the advice. I'm continuously searching for a solution. My setup is not very complicated: just the HA Docker container with net=host (as advised) on the LAN, and IoT devices in another VLAN. I can't square the fact that list_chromecasts.py isn't listing any devices with the fact that I do see all the multicast traffic on the host with tcpdump. I did check iptables; of course there are some Docker default entries, but Home Assistant is on the host.
I did try Home Assistant in the IoT VLAN without network_mode: host, and that seems to work fine. So it has to be something at the host level, I think.
@Martinvdm
My setup is not very complicated: just the HA Docker container with net=host (as advised) on the LAN, and IoT devices in another VLAN.
It's still a bit more complex than a typical domestic setup, and something in the setup seems incorrect, since mDNS discovery works when Home Assistant is in the IoT VLAN but breaks when it's not.
The UDP multicast traffic across VLANs may be dropped by a switch, router, AP, the host running HA, etc.
To debug further, I suggest logging packets while list_chromecasts.py --show-debug is running.
I would suggest logging with:
socat UDP4-RECV:mdns,broadcast,reuseaddr - | hexdump -C
The difference from tcpdump is that tcpdump will capture packets dropped by iptables rules, but socat will not (so if socat sees the packets, they are actually getting through). It's possible to make socat output in pcap format too; this example will just dump the data. Then compare the differences between the logs.
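For the tcpdump side, a capture filter like this is a reasonable starting point (an illustration, not from the original comment):

  tcpdump -i any -n -v udp port 5353   # watch all interfaces for mDNS traffic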
In the packet logs you should see something like this (as parsed by Wireshark):
| protocol | info | comment |
| --- | --- | --- |
| MDNS | Standard query 0x0000 PTR _googlecast._tcp.local, "QM" question | Query from pychromecast to discover Chromecast devices |
| MDNS | Standard query response 0x0000 PTR Chromecast-Audio-b3904b74c13867cd723fd589d5f305e7._googlecast._tcp.local TXT, ... | Response from a Chromecast device |
| MDNS | Standard query response 0x0000 PTR Google-Cast-Group-657e8e9d261e42e79fc536748ad083c5-1._googlecast._tcp.local TXT, ... | Response from a Chromecast device |
@emontnemery great, thanks for the help. I will try that and report back.
Sorry, I couldn't get socat working. Command:
socat UDP4-RECV:mdns,broadcast,reuseaddr | hexdump -C
I keep getting:
socat[26636] E exactly 2 addresses required (there are 1); use option "-h" for help
I don't know socat very well...
@Martinvdm There was a mistake in the socat command, it should be:
socat UDP4-RECV:mdns,broadcast,reuseaddr - | hexdump -C
Sorry about that.
OK, great, that command is accepted, but it's not giving any output, neither on the host nor from within the HA container. I migrated the container to the IoT VLAN and Chromecast started working in HA; tcpdump shows the Chromecast info again, but that was also there on the main LAN. I'm not quite sure, but it seems to be the host that is not receiving the mDNS info from the other VLAN, despite using the Avahi reflector.
I did try it with Windows and Bonjour Browser from both VLANs: it works just fine and can see all the mDNS info, including the Google devices from the other VLAN.
So again, it seems to be something on my Ubuntu Docker host, I think.
Maybe this can be helpful: https://serverfault.com/questions/163244/linux-kernel-not-passing-through-multicast-udp-packets
The comment about tcpdump -e, to make sure the packets you see are for the right VLAN, is interesting.
Are you sure mDNS forwarding between the VLANs is working correctly?
I'm closing this issue, since it seems Home Assistant works fine as long as mDNS device discovery is working.
If this is not the case, please go ahead and open another issue.