Core: Media_Player discovery - duplicates

Created on 27 Jan 2017 · 16 comments · Source: home-assistant/core

Make sure you are running the latest version of Home Assistant before reporting an issue.

You should only file an issue if you found a bug. Feature and enhancement requests should go in the Feature Requests section of our community forum:

Home Assistant release (hass --version):
0.36.1

Python release (python3 --version):

Component/platform:
media_player.roku

Description of problem:
devices are being discovered/displayed multiple times

Expected:
only display once

Problem-relevant configuration.yaml entries and steps to reproduce:

  - platform: roku
    host: 192.168.1.18
    name: "Master Roku"



Traceback (if applicable):


Additional info:

Three Roku:

[screenshot]


For a temporary local fix, are you using your own customised groups? If not, you could create some groups and specify that only that single media player should be displayed: HA Groups. It's not a full fix, but it's a temporary workaround for you :smile: unless you have already tried that.
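As a sketch of that workaround, a group of this era could be defined as a view that lists only the one media player you want shown. The entity id `media_player.master_roku` below is an assumption based on the `name: "Master Roku"` in the configuration above; adjust it to whatever your instance actually generated.

```yaml
# Hypothetical groups workaround: a view showing only one media player,
# hiding the discovered duplicates from the default view.
group:
  media_view:
    view: yes
    name: Media
    entities:
      - media_player.master_roku
```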

@JammyDodger231 I do use groups/views but keep my home/default unchanged so I can see all components and make testing and setup easier for now since I am making changes so frequently.

I also just noticed that it is duplicating my media_player.panasonic_viera as well. See here:

[screenshot]

[screenshot]

Had this experience as well after the first upgrade to .39.2, but after a restart it went back to normal. Just filing this for historical/informational purposes.

[screenshot]

Actually, this seems to be an issue in 0.39.2 as well. It's a time thing: it starts off fine and just keeps adding more devices every few minutes or so.

[screenshot]

I let this thing run overnight and woke up to over a hundred media players.. ;(

[screenshot]

I'm guessing this is a discovery issue related to roku.

Possible: #4009

Same issue here which I only noticed in .39.2
Two rokus have spawned about a dozen each. Everything seems to work fine, but entities and history are filled with clones.

This also poses a problem when you write scripts or automations that call the media player directly. The original one no longer responds to the service request; you have to call the latest clone.


Same here... duplicate unnamed media players in 0.39.2

[screenshot]

From what I can tell, this happens because HASS essentially runs this code every SCAN_INTERVAL:

results = []
netdis.scan()
for disc in netdis.discover():
    for service in netdis.get_info(disc):
        results.append((disc, service))
netdis.stop()

# Create players/sensors for each item in `results`

Each time this runs, HASS obtains a list of all discovered devices and initializes each of them. However, HASS does not check whether a device was already discovered on a previous scan.

Comparing this code to older versions, I see that HASS didn't check for previously-discovered devices either, but that's because netdisco used to report only newly discovered devices. In the latest versions, netdisco lists all discovered devices on every scan.
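To make the behavior change concrete, here is a minimal sketch (with hypothetical names, not actual HASS or netdisco code) of why a scanner that returns its *full* device list every pass breaks naive setup code that appends every result:

```python
def naive_setup(scans):
    """Simulate setup code that appends every scan result as a new entity.

    `scans` is a list of scan passes; under the new netdisco semantics,
    every pass contains the full list of known devices, not just new ones.
    """
    entities = []
    for results in scans:        # one entry per SCAN_INTERVAL pass
        for device in results:   # full device list every time
            entities.append(device)
    return entities

# Two scan passes over the same two physical devices...
scans = [["roku-1", "roku-2"], ["roku-1", "roku-2"]]
print(len(naive_setup(scans)))  # 4 entities for 2 devices
```

With the old "only new devices" semantics the second pass would have been empty, so the same code produced exactly one entity per device.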

I was able to create a simple fix for this:

+   already_discovered = []
+
    @asyncio.coroutine
    def new_service_found(service, info):
        """Called when a new service is found."""
        if service in ignored_platforms:
            logger.info("Ignoring service: %s %s", service, info)
            return

        logger.info("Found new service: %s %s", service, info)

        comp_plat = SERVICE_HANDLERS.get(service)

        # We do not know how to handle this service.
        if not comp_plat:
            return

+       if (service, info) in already_discovered:
+           return
+
+       already_discovered.append((service, info))

        component, platform = comp_plat

        if platform is None:
            yield from async_discover(hass, service, info, component, config)
        else:
            yield from async_load_platform(
                hass, component, platform, info, config)

@balloob What are your thoughts on this approach? I'm particularly interested in knowing:

  1. Whether this is an acceptable solution. If so:
  2. Should I be storing the already_discovered data somewhere else?
  3. Should we be clearing already_discovered when certain events are fired, like a restart or reload?

@colinodell oh my bad, you are right this is a bug I introduced. So focussed on killing the CPU usage that I missed this.

Yes, that would be an appropriate fix!

Probably want to do already_discovered = set() and use already_discovered.add(); it is slightly faster.

This should be 0.39.3

Although, this is actually a bug in the media player. We implement a unique id so we can filter out duplicates when new devices get added. It seems that whichever platforms you all are using do not implement unique_id.

So the real fix here is that any platform that is discoverable needs to implement a unique_id property so Home Assistant can filter out duplicates. This is better than keeping track of discovered things, as that can lead to running out of memory.

This is better than keeping track of discovered things as that can lead to running out of memory

@balloob,

My "quick fix" only stores strings and tuples (as returned by netdisco) for devices which have been discovered:

[screenshot]

The list does not grow when devices are re-discovered additional times, so I would not expect memory to increase over time. And given the small amount of data it does store, I can't imagine the amount of memory used is significant. (Though to be fair, I'm not a Python expert, so I don't know if keeping a handle on this data prevents it and related objects from being garbage collected. If that's the case, perhaps we can hash the data and just store the hashes?)
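The hashing idea floated above could look like the following sketch (illustrative, not proposed HASS code): store only a digest of the discovery payload, so no reference to the original objects is retained and each entry has a small fixed size.

```python
import hashlib
import json

def discovery_digest(service, info):
    """Return a fixed-size digest of a (service, info) discovery result.

    Assumes `info` is JSON-serializable; sort_keys makes the digest
    independent of dict ordering.
    """
    payload = json.dumps([service, info], sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

seen = set()
seen.add(discovery_digest("roku", {"host": "192.168.1.18"}))

# The same device rediscovered later produces the same digest:
print(discovery_digest("roku", {"host": "192.168.1.18"}) in seen)  # True
```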

This approach also has the added benefit of not needing to initialize new components just to check whether their unique_ids match. I'd imagine this would reduce memory usage and CPU cycles.

Furthermore, it prevents repeated errors for discovered devices which fail to initialize:

17-03-03 14:31:55 ERROR (Thread-9) [homeassistant.components.device_tracker.netgear] Failed to Login
17-03-03 14:31:55 ERROR (MainThread) [homeassistant.components.device_tracker] Error setting up platform netgear

However, if you still feel that implementing unique_id is the preferred approach I'm happy to do that - just let me know :)

I've implemented my approach in #6381 just in case you want to use it. If not, feel free to reject it.

Thanks! I will have your fix go out as 0.39.3. We'll try a more proper approach later.

That works for me :) Thanks!
