Napari: IDEA: a semantic plugin API

Created on 3 Sep 2020 · 3 Comments · Source: napari/napari

🚀 Feature

Support a semantic plugin API, enabling analysis plugin developers to define the inputs and outputs of their plugin using microscopy concepts like "space" and "time" and biological concepts like "nuclear stain".

Motivation

I've been thinking about various segmentation tools (Allen Cell Segmenter, cellpose) and what they might look like as napari plugins. One theme that comes up is that many of these tools are designed with assumptions about what biological thing is in each channel. The Allen Cell Segmenter is designed for specific antibodies; cellpose needs a nuclear channel and a cytoplasmic channel. However, the burden is on the user to specify which channel corresponds to which.

Pitch

Wouldn't it be nice if a plugin developer could just specify that they need a nuclear channel and a cytoplasmic channel, and napari just knew that the layer called "DAPI" was the nuclear stain? Or just knew that two of the outputs of skimage.color.rgb2hed corresponded to nuclear and cytoplasmic channels?

Obviously, this would require some knowledge engineering work to build the inference machine... I'm not sure if there are existing ontologies in this space or the extent to which the various gene ontologies could be repurposed. In addition to channels & layers, a semantic API would also be useful simply to distinguish between dimensions of the array (e.g. temporal vs spatial dimensions). And of course, care would be needed to get the UI/UX right, as it would be important to offer the right level of "assist" to the end user without letting the "magic" obfuscate what's happening.
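To make the pitch concrete, here is a minimal sketch of what such a resolution step might look like. Everything here is hypothetical: napari has no `CONCEPT_REGISTRY` or `resolve_inputs`; the registry stands in for whatever inference machinery or user annotation would populate the semantic mapping.

```python
from typing import Dict, List

# Hypothetical mapping from layer names to semantic concepts, e.g.
# populated by an ontology-backed inference step or by the user.
CONCEPT_REGISTRY: Dict[str, str] = {
    "DAPI": "nuclear_stain",
    "phalloidin": "cytoplasmic_stain",
}

def resolve_inputs(required: List[str], layer_names: List[str]) -> Dict[str, str]:
    """Match the semantic concepts a plugin declares to available layers."""
    resolved = {}
    for concept in required:
        for name in layer_names:
            if CONCEPT_REGISTRY.get(name) == concept:
                resolved[concept] = name
                break
        else:
            raise ValueError(f"No layer found for concept {concept!r}")
    return resolved

# A cellpose-like plugin would declare what it needs...
needs = ["nuclear_stain", "cytoplasmic_stain"]
# ...and napari (or a helper plugin) would resolve the layer mapping:
print(resolve_inputs(needs, ["DAPI", "phalloidin", "brightfield"]))
# {'nuclear_stain': 'DAPI', 'cytoplasmic_stain': 'phalloidin'}
```

The point is only the shape of the API: the plugin names concepts, and the binding to concrete layers happens elsewhere.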

Thoughts?

discussion


All 3 comments

My immediate thought, @neuromusic, is that this sounds like a great idea for a plugin which could itself host more plugins. That is, the napari interface shouldn't have to understand what DAPI etc. is: these concepts are too domain-specific, and change too frequently, for napari to cover even a single domain like cell biology. But we could make a napari-cell-biology-interface plugin which did the translation between concepts like DAPI and the "blue" colormap, and then these segmentation tools could depend on napari-cell-biology-interface so as not to each have to reinvent these concepts themselves.

If this approach were successful, you could then imagine a napari-neuroimaging-interface plugin for a different community, and so on. We wouldn't be introducing these domain-specific concepts into napari itself, and ideally it would be possible to translate from all of these interfaces into the highly abstract and flexible napari plugin interface.
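The layering proposed above could be sketched roughly as follows. The plugin name, `DOMAIN_TRANSLATIONS` table, and `to_layer_kwargs` helper are all hypothetical; the sketch only illustrates a domain plugin translating biology concepts into the generic parameters napari already understands (layer name, colormap).

```python
from typing import Dict

# Hypothetical translation table a napari-cell-biology-interface plugin
# might maintain, mapping domain concepts to generic napari settings.
DOMAIN_TRANSLATIONS: Dict[str, Dict[str, str]] = {
    "DAPI": {"colormap": "blue", "concept": "nuclear_stain"},
    "GFP": {"colormap": "green", "concept": "protein_marker"},
}

def to_layer_kwargs(channel_name: str) -> Dict[str, str]:
    """Translate a domain channel name into generic layer kwargs,
    falling back to a neutral colormap for unknown channels."""
    info = DOMAIN_TRANSLATIONS.get(channel_name, {})
    return {"name": channel_name, "colormap": info.get("colormap", "gray")}

print(to_layer_kwargs("DAPI"))   # {'name': 'DAPI', 'colormap': 'blue'}
print(to_layer_kwargs("other"))  # {'name': 'other', 'colormap': 'gray'}
```

Segmentation plugins would then talk to this translation layer rather than encoding DAPI-style knowledge themselves, which is the separation of concerns argued for above.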

It's a super interesting idea though, and I'm curious what others think!

I think this could well be an interesting application, but from the knowledge engineering side, this would likely have to be implemented as a set of structured queries over a knowledge graph (KG) with the relevant logic encoded in the queries.

There might be some interesting ways this could be built, especially if the main logical outcome was driven by a well-delineated class of entities. Justin, you seem to be very interested in mapping imaging parameters and procedures to subcellular anatomy. Is that the main use case? X -> nucleus, Y -> receptor, etc.?

Justin, you seem to be very interested in mapping imaging parameters and procedures to subcellular anatomy.

Yeah, that's a common use case for the biologists we're working with. Through the design of the microscope, a biological marker, and the experiment, the biologist maps some aspect of subcellular anatomy to a "color" channel in the imaging data. There's a chain of associations/inferences (my molecule binds to a receptor that is expressed by a certain gene and fluoresces over a range of wavelengths, and I've filtered light of this wavelength into a certain channel in my data file) that come together as a mapping that is convenient for the scientist (channel = gene, or channel = structure).
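The chain of associations described above can be sketched as composed lookups. The mappings below are illustrative placeholders (not real experimental metadata), but the shape matches the inference chain: marker -> gene -> structure, and gene -> channel in the data file.

```python
# Each dict is one link in the chain of associations. The specific
# entries are made up for illustration.
molecule_to_gene = {"anti-LMNB1": "LMNB1"}      # marker binds a gene product
gene_to_structure = {"LMNB1": "nuclear envelope"}  # gene localizes to a structure
gene_to_channel = {"LMNB1": 0}                  # emission filtered to channel 0

def channel_meaning(molecule: str):
    """Compose the lookups: which channel, and what anatomy it shows."""
    gene = molecule_to_gene[molecule]
    return gene_to_channel[gene], gene_to_structure[gene]

print(channel_meaning("anti-LMNB1"))  # (0, 'nuclear envelope')
```

A semantic API would essentially let the biologist record each link once, so that tools downstream can recover the convenient channel = structure mapping automatically.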
