Godot: Add Accessibility for Blind Developers Who Use Screenreaders

Created on 6 Dec 2017 · 117 comments · Source: godotengine/godot

Operating system or device, Godot version, GPU Model and driver (if graphics related):
Operating System: Arch Linux, might be true for blind users on other operating systems
Godot Version: 2.1.4, may apply to all
Issue description:

The Godot game editor does not work with the Orca screenreader.
I was expecting to at least be able to use Orca's flat review with the editor application, but nothing happened; it is equivalent to turning off the monitor.
Steps to reproduce:
Install the GNOME desktop environment, along with the Orca screenreader. MATE will also work.
With Orca running, open the Godot application. There will be no speech output from Orca.
Link to minimal example project:

feature proposal core usability

Most helpful comment

If this feature is implemented in the core of the engine itself, then it would be possible for developers to make apps for blind people, not only to allow blind developers to work.
+1 to the proposal

All 117 comments

This would definitely be a welcome improvement, but I can already say that it will likely take a while to get implemented, if at all. Godot's editor uses its own GUI toolkit, which is the same used in games, and is fully independent from the native toolkits or cross-platform ones such as Qt or GTK+, which would have good screen reader support.

So to get this to work in Godot, it would have to be implemented from scratch in the core Controls. I don't know how hard this is, but probably non trivial.

I have actually thought about this but I don't even know where to start. Godot editor is very visual, you do pretty much everything with the mouse. We would need a person with visual impairment to give us consulting about this and guide us in this task, advising us with what is actually needed and how it should behave. Without an actual user to provide feedback, it's mostly a game of guessing to implement that.

I would be happy to guide you guys in adding this. I'm blind myself. :) At least, I could refer you guys to frameworks (MSAA and UIA come to mind as possibilities, though UIA is preferable to me at least, since NVDA (https://github.com/nvaccess/nvda) (an open source screen reader) uses it) and beta test it as you guys go. In the meantime, is there an alternative godot editor or method of creating the game by hand, without the editor?

If this feature is implemented in the core of the engine itself, then it would be possible for developers to make apps for blind people, not only to allow blind developers to work.
+1 to the proposal

I'm glad to see this is being looked at, and the only thing I have to add is that screen reader software is also used by people who have learning disabilities/difficulties such as dyslexia. As much as I hate the way these things sometimes leave people out, I think it could be argued that this broadens the case for such a development.

NV Access (NVDA) would be a good first place to contact as both lead developers are blind and so have a proper insight into what's necessary. Maybe a collaborative effort?

Hi,

I think this will be a huge effort. It can be done, but considering the native accessibility protocols, an implementation made from scratch can be very problematic for users on all platforms.

If you plan to go forward with this, I recommend following the Qt model: an abstract accessibility event layer on top of the basic UI controls first, and then a platform-dependent implementation.

Another approach is to use text-to-speech directly. This can be more platform-independent and much easier to work on. There are some problems with this too, but they are more related to the engine than to the runtime environment.

Hope this helps you decide.

Cheers,

Such a model would work too, if contacting the low-level accessibility
APIs failed in some way. I 100 percent agree that NVAccess would be
your best bet for getting the highest level of accessibility in. :)

As a partial solution, are there any docs, or even places to look, to implement a game by hand without using the editor? I've seen the command line docs, but I'm wondering if it's even practical/remotely enjoyable to work with the scene graphs and such by hand, thus completely eliminating the need for the editor.

As a more practical example, right now I'm implementing an audio-based Asteroids-like game. Asteroids emit sounds to mark their positions, and the player centers them in the audio field by turning/flying around. I don't particularly care about the visuals, but they'd be useful for debugging purposes so I can ask sighted folks if, say, my mental concept of my ship's orientation matches with the crude cylinders/spheres I'm using for debugging/collision detection. I'm wondering if it's even remotely possible to position a 5-unit-diameter sphere at 0,0, randomly position and move a bunch of 25-unit-diameter spheres, implement wrapping behavior, etc. by hand-coding it all without the need of the Godot editor. Or will it be a mess of editing non-human-friendly XML?

At the moment I am doing this more or less by hand in Rust with an ECS. That's not terrible, but I do wonder if I gain anything by leveraging an actual game engine. Ideally the Asteroids thing is just a proof-of-concept, and I'd eventually like to make richer audio games without having to build all this stuff up by hand. Whether Godot is that solution, I don't know...

Either way, I do think that implementing a full accessibility interface to the UI toolkit is probably a bit much to ask. If the data files are mostly text but human-unfriendly, a middle-ground solution might be a DSL that exposes a simplified interface for the kinds of tasks I described above (positioning shape primitives, attaching sounds, adding logic to entities, etc.)

As a partial solution, are there any docs, or even places to look, to implement a game by hand without using the editor?

I think the easiest way (without hacking around) would be to generate the scenes procedurally via script. You could either do that at runtime or make a script that saves the scenes and resources to disk so you can add to the project. The initial scaffolding would require a project.godot file for the project settings (which can be an empty file made by the OS at first) and a main scene that runs at start (which can also be set and saved via script). Might be complex to manage, but it's possible.

Technically, you can make a script that runs directly from the command line and that script makes the whole Godot project for you (at least I can't think of anything that would prevent that). You can then run and export the game from the command line as well. You would need to open the project in the editor at least once to import the assets (and every time the assets change), but that could be solved by running the headless version of Godot, as some people are doing for use with Continuous Integration servers.
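To make that concrete, here is a rough sketch (assuming the Godot 3.x scripting API; the node names and output path are made up) of a command-line script that builds a tiny scene and saves it to disk without ever opening the editor:

# save_scene.gd -- run with: godot -s save_scene.gd
extends SceneTree

func _init():
    var root = Node2D.new()
    root.name = "Main"

    var player = Sprite.new()
    player.name = "Player"
    root.add_child(player)
    player.owner = root  # nodes need an owner to be included when the scene is packed

    var packed = PackedScene.new()
    packed.pack(root)
    ResourceSaver.save("res://main.tscn", packed)
    quit()

The saved scene could then be set as the main scene in the project settings and run from the same terminal.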

If the data files are mostly text but human-unfriendly, a middle-ground solution might be a DSL that exposes a simplified interface for the kinds of tasks I described above (positioning shape primitives, attaching sounds, adding logic to entities, etc.)

The text resource format is not that hard to read, but it's still probably simpler to just use GDScript to create stuff for the project, since you can use the base engine for writing and reading the format (avoiding compatibility issues).

Unfortunately, GDScript is not the best language for visually impaired people, since whitespace is significant. I have heard complaints from a blind developer about Python because of this.

Frankly, I think that the complaints about whitespace placement are most likely (excuse my language) bitchy whining. Whitespace is perfectly fine (I'm blind and am able to work with Python very well, as well as other whitespace-oriented languages like F# and such). Those who complain about whitespace are usually either unenthusiastic at the thought of programming (which raises the question of exactly why they're checking it out in the first place) or haven't even tried and are just complaining to complain. I don't mean to be so inflammatory, but it really pisses me off when I see people complaining about that. It just makes absolutely no sense.

I've never heard of the headless version of Godot. Perhaps that could be used to create projects and interact with the engine as well? That might be a complete solution to our problems at the moment. Where can I find this version? (Or is it in the Godot package?)

@ethindp I won't argue about what is or isn't, since I'm not blind myself, but I'll ask you to tone it down. People have different backgrounds and might have proper reasons to find it hard to work with, while you learned to work with it. The person I know works as a developer for a living, so I find it hard to believe he "didn't try hard enough" (it's still second-hand for me though, so I won't argue about that).

I've never heard of the headless version of Godot. Perhaps that could
be used to create projects and interact with the engine as well? That
might be a complete solution to our problems at the moment. Where can
I find this version? (Or is it in the Godot package?)

It is called the server platform and works on Linux only. It's not distributed (maybe it will be once 3.1 is released), so you need to compile it yourself to use it. It is just the regular Godot engine, but with dummy video drivers so it does not require a graphics card.

I'm not sure how much you can interact with it; you likely can't do so manually. Still, it can be used to run scripts and import assets, so it's possible to make a project with it if you do everything with scripts (at least in theory).

I'll echo what @ethindp says in that indentation shouldn't be an issue. Most modern screen readers have settings to speak indentation automatically (e.g. "8 spaces def hello_world():"), and I know lots of prolific blind Python developers.

I wonder if there's even another middle-ground solution--a line-oriented command line tool that can manipulate the scene graph in real-time, drive another instance, and let you save its output. I'm thinking of an interaction style like:

$ godot-shell mygame.godot
Welcome to the Godot shell.

> new entity
Entity created with ID 0.
> add component 0 position
Component "position" added with ID 0.0.
> set component 0.0 [0, 0]
Component 0.0 set.
> save
Scene graph saved.
>

I'm then imagining a separate instance running in another window, displaying whatever the engine is currently outputting, playing sounds, etc. Other commands might spawn a GUI text editor to edit scripts.

Would this kind of workflow make sense if the aim is to create a simpler game? That is, I don't really care what my player/world models look like beyond them being simple shapes, so I can mentally model them, attach sounds, then have a collision detection system that makes some sort of sense. I also think a CLI is a bit easier to prototype and experiment with than a full-blown DSL.

I may start hacking on this if folks who know more than I do about Godot think it's viable. Been looking for a good CLI interface problem to sink my teeth into, and have a few Rust libraries I'd like to put through their paces. Is the file/data format documented anywhere?

@George Marques, I apologize for my rude behavior... just letting that off my chest. Because it really pisses me off and seems like whining more than an actual truth, you know?
@Nolan Darilek, your idea may have merit; would it be possible to code an alternative UI interface, perhaps using something like wxWidgets? If we used WX, it would solve all our accessibility issues in one go. (Personally, I've never fully managed to figure out WX... it looked way too complex for me... heh :)).

Likewise re: WX. If I was going to do a GUI, I'd probably go with GTK
since I'm on Linux, but I've never sat down to learn GTK or WX, and that
would seem like one too many variables to play with here.

I think I'd initially start with a CLI that can start/stop a rendering
Godot process and drive this headless version. If I did this in Rust,
I'd probably build a struct and serde implementation to a subset of the
format that the interface would support. With that as a starting point,
it should be easy to build a GUI later or in parallel.

That would work, yes; but how exactly are you going to do this in Rust? I think Rust would be perfect for this, but from what I've seen, the engine editor at least is written in C++. Actually, it seems the core is, too. So, since (according to https://doc.rust-lang.org/beta/nomicon/ffi.html) Rust can't interface with C++, we'd need to create a C interface. That in itself would be an absolute nightmare. :)

I was thinking of editing the files directly, which seems more doable if you expose a more limited interface that grows over time (i.e. initially only expose basic shapes, rigid bodies, sounds, etc., then add more over time). I'm looking at the format and it appears to be mostly INI, e.g.:

[node name="WallContainer" type="Node" parent="." index="0"]

editor/display_folded = true

Although I didn't know section names could be quite that free-form. :) I guess there's nothing in the INI spec, such as it is, to prevent that.

So as an MVP I'd want:

 * Rigid dynamic/static 2-D bodies for collision detection and motion
 * A few basic shape primitives to mentally model the bodies, and
represent them on-screen for visual feedback. Cylinders for characters,
spheres, and some sort of blocky polygon for walls/grounds should be
enough for now.
 * Spatial audio
 * Non-spatial audio for music/UI sounds
 * Access to scripts, and the ability to add scripted behaviors to any
of the above that support it

I don't want to call that a small feat, but if I could figure out the
nodes needed to implement that featureset, then create a text user
interface for building that structure and then calling into the Godot
tooling for export/rendering, we might be onto something. There are lots
of node types, but if those are documented in any sort of
machine-readable way, it might be possible to eventually tap some sort
of Rust code generator or write a macro to replicate the same UI en
masse... but it's best to figure out what that UI looks like before
reaching for a macro to autogenerate it. :)

If I'm right about everything being INI-based, I might put together an
MVP PoC to see where this goes. To be clear, I'm not aiming for a
general-purpose editor use case here, just some way to build audio games
without necessarily coding the whole thing by hand, and leveraging an
actual game engine to do the cross-platform export.

But as fun as this brainstorming is, I gotta get to work and finish out
the week. I'll put up a repo sometime soon and see where it goes. I
generally prefer GitLab because of its integrated CI, but I could be
convinced to stay on GitHub if folks are opposed.

The file format is described (at least partially) in the documentation: http://docs.godotengine.org/en/latest/development/file_formats/tscn.html

However, making a completely new tool for this is a lot of work and is bound to become obsolete quite easily. At the very least you should use GDNative as a bridge (which is a C interface to the C++ engine); that would give access to all information about classes and their properties. As much as I appreciate the effort, I think you first need to understand a bit of what Godot has to offer, so you don't reinvent all the wheels.

If you really want to do it with Rust, you can try the bindings for GDNative (though I can't attest to their stability): https://github.com/GodotNativeTools/godot-rust

Personally, I would try to do something with the Godot core itself (maybe as a C++ module), since you can then access all the API easily, including all the resource savers and loaders, so you don't need to worry about file formats.


If you're going for a CLI interface, it should be based on arranging the scene tree, which is the most important part of any Godot game. Designing this process beforehand is probably more important: creating an easy way to understand how the tree is laid out and tools to move nodes to a specific tree position should be the cornerstone of the design. If you get that, you don't even need to limit the feature-set, since pretty much anything can be done with the tree.

After that, you need a way to set properties on the individual nodes, including the resources that are used to give them their proper functionality (e.g. a CollisionShape2D node needs a Shape2D resource to define its shape). Scripts are also resources, so they can be set via this interface as well (though you might want to give them a special status). This would substitute for what the Inspector is in the GUI.

Also, I would make the commands as close to GDScript as possible. For instance, to create a Sprite you could simply do Sprite.new(), which is the same thing you would do in a script. This way the learning curve for someone already used to scripting would be less steep.
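As an illustration, here is a minimal sketch (assuming the Godot 3.x API; the node names and texture path are invented) of the GDScript such commands would mirror (creating a node, attaching the resource it needs, and setting properties):

extends Node2D

func _ready():
    var body = RigidBody2D.new()
    body.name = "Asteroid"

    var collider = CollisionShape2D.new()
    var shape = CircleShape2D.new()
    shape.radius = 25.0
    collider.shape = shape  # a CollisionShape2D needs a Shape2D resource
    body.add_child(collider)

    var sprite = Sprite.new()  # the same Sprite.new() a CLI command would map to
    sprite.texture = load("res://debug_circle.png")  # hypothetical placeholder texture
    body.add_child(sprite)

    add_child(body)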

For audio, Godot uses players and buses. There are players for positional audio (both 2D and 3D) and a regular one for non-positional audio, which are all nodes in the tree as well. Surround is also supported (though I never used it). To work with the buses, you need to interact with the AudioServer, to set volume, mute or solo them, and add effects. You might be able to use the AudioBusLayout resource directly, but I'm not sure how much of it is exposed to the external API. The effects are just resources, so they can be edited by the same interface.
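A rough sketch of that audio setup (Godot 3.x API assumed; the stream path is made up):

extends Node2D

func _ready():
    # Positional sound: an AudioStreamPlayer2D node placed in the tree.
    var player = AudioStreamPlayer2D.new()
    player.stream = load("res://sounds/asteroid_hum.ogg")  # hypothetical asset
    add_child(player)
    player.play()

    # Bus control goes through the AudioServer singleton.
    var bus_idx = AudioServer.get_bus_index("Master")
    AudioServer.set_bus_volume_db(bus_idx, -6.0)
    AudioServer.add_bus_effect(bus_idx, AudioEffectReverb.new())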


All of this in my humble opinion, of course. I have used Godot for quite a long time now, so I might be somewhat biased.

Hey, thanks, this is helpful. I'm not particularly wedded to messing
around with the data files directly. Happy to learn how this might
work--most of the getting started docs I encounter teach by building
games, and no one assumes you'd need to build your own tools first. :)

I haven't touched C++ in nearly 2 decades, so I'll likely stick with
Rust and deal with a less complete API or file issues if I hit any
instability. Does GDNative support reading/writing project, scene, and other files directly? Or is it meant for writing games in C and calling into the engine? I did a brief grep through the Rust repo for "project" and didn't find anything. The examples seem focused on building games directly. If you wouldn't mind giving me an entry point for where I'd start to build a project and populate it programmatically, I'll read up on that and move further work to my own repo.

Thanks again for humoring me. :)

GDNative is meant to make games, essentially replacing scripts. But as I said before, you can do pretty much anything with scripts, and GDNative uses the same API.

Hi,

For scenes and similarly complex objects, you can use a format like this file.

In this case, it is just a map for a project called AudioQuake, but it can serve as an example. P.S.: Don't try to build this, it may not work.

For the GUI, you could also use Qt 5.11; I saw that it includes many accessibility improvements. In that case, you would only need to add some accessibility support for the harder custom controls, like the camera.

Cheers,

The format for AudioQuake maps is very limited. I personally don't like it.

Wait, I'm being an idiot. You said at the beginning of this thread that
you didn't need the editor at all, except for when importing assets. I
think it would have been more correct for me to ask if I needed the
GUI to create games. I'm happy running them from the GUI, I just want
to develop them on the console. And it looks like I can.

So I guess what I want is a "Godot from scratch" series that assumes you
can't/don't want to use the editor, similar to how you'd go about
writing a game with pygame or another high-level library. I imagine once
I've figured out how to create a .project file that just launches a
script, and maybe a script that just opens a window and spins a cube or
something, I can apply that upward. I doubt such a thing exists, so
maybe I'll try it myself and blog the process. Then maybe I'll build a
CLI prototyping tool later, once I know what I might want to prototype
without code.

You know... I wonder if we can somehow make the engine not only an engine with an editor and such, but a library too? Like, you're given the C++ libraries (static or dynamic libs) and can build your game however you like, using Godot for all the game engine stuff (i.e. audio and such). If this were possible we could forgo the editor entirely. So you could add assets as, say, .caf files, and then, at runtime, you'd compile them in memory as compiled asset data files (we could just call them CADFs). So your game would look like this at startup:

// ...
// Load assets
Godot::Assets::LoadAll({asset, asset, asset}); // load each asset individually
// or:
Godot::Assets::LoadAllAssetsFrom("assets");    // load all the assets at once from a folder
// Compile all the assets
auto assets = Godot::Assets::GetAllLoadedAssets();
bool success = Godot::Assets::Cadf::CompileAllAssets(assets);
if (success) {
    // assets compiled
} else {
    // error
}

Granted, this is my style of coding, and I doubt it's the way the engine is written. But if it were possible to do that, it would be epic. Or we could forgo asset compilation entirely and just load all of 'em when we need 'em, or all of 'em at startup.

I think it would have been more correct for me to ask if I needed the
GUI to create games. I'm happy running them from the GUI, I just want
to develop them on the console. And it looks like I can.

You don't need the GUI to run games. In fact, the editor just creates another Godot process with the arguments to run the game (with all the debugging options as well). Running the game is a non-issue: you can simply run Godot from the project's directory without arguments; the default is to run the project.

The editor is pretty much a Godot "game" itself. It uses the available nodes and extends them. As for importing, it's not that you need the GUI, you just need the importing tools that come with the editor. Essentially you just put all assets in the project folder (in any structure you like) and when you open the editor it'll import everything.

"Importing" in this case means storing metadata (if you want to loop and trim audio, or apply filter to an image, etc.) and sometimes converting to a format that Godot understands and can use (like importing a model to a Godot mesh). The import process can only be done by the tools build (i.e. the editor), even if the headless version.

When you export the game, only the imported assets are packed, the original ones are ignored. For the game itself this process is transparent: you can treat the assets as if they were in the original location, the loaders will know to get the imported version.

Since the editor is just a game, you can make your own "game" that is actually a tool to make games. You can even create a script that extends MainLoop or SceneTree and run it directly from the terminal with:

$ godot -s my_script.gd

The problem is that if you want a true CLI tool, Godot won't make it easy. Even when running a script it'll open a window, and everything sent via stdin will be ignored.


I wonder if we can somehow make the engine not only an engine with an editor and such, but a library too? Like, you're given the C++ libraries (static or dynamic libs) and can build your game however you like, using Godot for all the game engine stuff (i.e. audio and such).

While the editor and the engine are made for each other, you're not entirely dependent on the editor to make the game. You cannot split Godot into pieces, though; you need to take the whole package (although you can disable most of the modules without problem). Since the editor is a Godot game, it's technically possible to replace its code with your own game. That would mean creating the scenes on the fly with code.

Godot is not meant to be used as a library or framework, and likely won't ever be, but you can hack around it and make a MainLoop implementation that does not use nodes at all and instead just calls the servers directly. Of course, you would need to build the engine from source, but I imagine that is the least of the concerns.
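For what that hack might look like, here is a rough, untested sketch (assuming the Godot 3.x MainLoop script API) of a script that runs its own loop without any nodes:

# no_nodes.gd -- run with: godot -s no_nodes.gd
extends MainLoop

var _time = 0.0

func _initialize():
    print("Custom main loop started; no scene tree, no nodes.")

func _idle(delta):
    _time += delta
    # This is where you would talk to the servers (AudioServer, etc.) directly.
    return _time > 5.0  # returning true ends the main loop

func _finalize():
    print("Custom main loop finished.")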

Any of this is technically possible, and maybe not that hard, but it's venturing into uncharted territory.

Sorry for being noisy, folks.

So I've since done a fairly deep dive into Godot and have read lots of
the documentation. Normally I don't do this for a non-library tool like
this because it's kind of a long path towards the desired
result--essentially building all the tools in your woodworking shop
before you can cut a single sheet of plywood. :)

I'm slowly coming around to the view that the editor itself might be
made accessible by essentially building a mini screen reader in
GDScript. This has the added benefit of making the game UIs themselves
accessible. It isn't without precedent, either. See
this
for a Unity UI accessibility plugin.

In looking at the Control class, there do appear to be signals exposed
for focus enter/leave. There are also signals for input events, which I
hope includes arrowing around text entry fields and the like. It'd be
limited for a generic OS-level screen reader, but just might work for
something domain-specific.

I've started on something
here. Unfortunately,
I can't add this plugin to my project because I need the inaccessible
editor to add the accessibility plugin. :) If anyone could add this
plugin to the project and submit a PR with the project.godot changes,
that'd be helpful. Essentially what I'm going to do is hook the
node_added (or whatever it's called) signal on SceneTree, intercept
any nodes that descend from Control, and connect to their
focus_entered signal. Once there, I'll start adding logic to print out
presentation messages for each node type that will eventually be piped
to an as-of-yet-unwritten TTS API. Please let me know if there are any
obvious reasons why this wouldn't work.
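For reference, the approach described above boils down to something like this rough sketch (assuming the Godot 3.x signal names; the handler names are mine):

extends Node

func _ready():
    get_tree().connect("node_added", self, "_on_node_added")

func _on_node_added(node):
    if node is Control:
        node.connect("focus_entered", self, "_on_focus_entered", [node])

func _on_focus_entered(control):
    # Eventually this would go to a TTS backend instead of the console.
    print("Focused: %s (%s)" % [control.name, control.get_class()])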

And how would I make this a non-editor plugin that would work with any
Godot UI, including those in non-editor games?

Hopefully we can eventually move discussion to this other repo and stop
spamming this issue. :)

If I understand you right, if this were to work, we could choose to
include it in our games or not, right? Is there any way we could add
it to the editor but not to the projects we create, or to remove it
from the project but add it into the editor?

I don't know.

I did figure out the format for adding a plugin to the project.godot file. I now have a function that should add focus_entered/mouse_entered handlers to each Control and print a message when they have focus. Unfortunately, I'm not actually getting any of these events, nor am I sure how to get the actual node that originated them.
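For anyone else looking, the relevant part of project.godot ends up looking something like this (assuming the Godot 3.x project format; the plugin folder name is just an example):

[editor_plugins]

enabled=PoolStringArray( "godot-accessibility" )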

Nothing in this is editor-specific, though, and it should automatically
detect the addition of any UI control anywhere in the scene tree and,
eventually, implement a screen reader for it.

OK, here's where I am after some intermittent holiday weekend hacking:

  1. Repository moved
    here. I'd
    rather work on it under my potential game development company identity,
    but I'll be releasing the plugin under the MIT license if it goes
    anywhere, so collaboration welcome.
  2. In theory, I'm generating classes with accessibility-specific code
    whenever a Control is added to the tree.
  3. Was told on IRC that tab/shift-tab behavior is already defined in the
    UI, but when the editor launches, nothing has focus so tab/shift-tab
    don't do anything until something gets focus.
  4. I'm trying to set an initial focus, and I seem to be doing so with
    some random LineEdit widget, but tab/shift-tab doesn't seem to move the
    focus. This could be because they're being captured by the LineEdit control.
  5. Setting an initial focus to anything else doesn't seem to work--at
    least, not according to my focus_entered callback.

I'm having a tough time with this because I'm new to the engine, but I'm
coming at it from a direction which most newbies don't take. :) If
anyone can help me crack this initial focus puzzle, I think I can make
lots of rapid progress. Once I can use tab/shift-tab to navigate between
controls, I can look at which properties each exposes and start creating
an accessible presentation for them. I'm also happy to file issues
against the engine itself--I just don't know what specific behaviors to
ask for just yet.

Thanks.

You probably need to set the focus_mode of the Control. I believe the default is to not accept focus, except for the ones that need it (like text and button controls). You may also need to set the focus neighbours; I'm not sure how well Tab works without them (or whether it works at all).
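In GDScript that amounts to something like this minimal sketch (Godot 3.x API assumed):

extends Control

func _ready():
    focus_mode = Control.FOCUS_ALL  # many Controls default to FOCUS_NONE
    grab_focus()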

Thanks. I just started hacking around with FOCUS_MODE_ALL. With that, I
can call grab_focus on previously unfocusable items.

Regarding neighbors, I read at
http://docs.godotengine.org/en/3.0/classes/class_control.html:

    If the user presses Tab, Godot will give focus to the closest node
to the right first, then to the bottom. If the user presses Shift+Tab,
Godot will look to the left of the node, then above it.

That's exactly what I'd implement if I did so myself, so it sounds like
the default is perfect.

If you have a moment, could you please fire up the editor in a new
project, maybe one with an empty project.godot just to keep things
similar, and see under what conditions tab/shift-tab work in the editor?
I think the person who helped test things on IRC may have created their
own layout, whereas what I want to do is test things directly in the
editor. If tab/shift-tab don't work, then I have a specific bug to file.
The current belief under which I'm operating is that they don't work
initially, but do after an item is clicked on.

And, interestingly enough, when I track focus_exited I do get an event
when I press tab. Unfortunately, I never get another focus_entered, so
once my focus leaves a control, it never returns to one.

Thanks for your help.

http://docs.godotengine.org/en/3.0/classes/class_nodepath.html#class-nodepath

If you have a moment, could you please fire up the editor in a new
project, maybe one with an empty project.godot just to keep things
similar, and see under what conditions tab/shift-tab work in the editor?

Tab does nothing when the editor first opens. If I click somewhere to give focus, then it starts cycling around the controls, but it does not look like all of them can be focused. The tabs of the dock containers do not seem to receive focus, which means some parts of the interface can't be reached via keyboard alone.

Thanks. This issue is spinning out of control a bit. I'm going to file
another regarding brainstorming ideas around editor focus.

Not sure what to do; I filed #19230 but there hasn't been much activity. I also posted on the forum and was advised to file an issue. Thoughts on where to ask for help next? I can't see whether or not my addon is actually setting focus, whether Tab/Shift-Tab moves it if so, or why I get a focus_exited event on every keypress even when it isn't a navigation key. I don't want to be annoying, but I feel like I could make a lot of progress contributing with 10 minutes of help from an experienced Godot developer. Thought I'd poke this issue and bump #19230, which has a more focused (no pun intended) list of questions.

Thanks.

Just wanted folks to know that this repository is making rapid progress. I can now:

  • tab/shift-tab around the editor, getting feedback about which control has focus. Currently has semi-intelligent presentation of Button and LineEdit. More to come soon.
  • Arrow left/right in LineEdit controls, getting feedback about which character has focus. Additional LineEdit support to come once I see how rapidly PRs are handled. I just submitted one adding a caret_moved signal.

Help welcome, though be advised that this addon now requires a version of Godot with unmerged PRs. Specifically, see https://gitlab.com/lightsoutgames/godot-accessibility/issues/1 for a checklist of all submitted PRs and their merge status.

If you'd like to help but don't want to step on any toes, a great way to do so would be building a TTS module. Right now this addon just prints to the console. What I'd like is an API like this:

TTS.speak("hello, world.", interrupt = True)
TTS.stop() # Interrupts speech if in progress
TTS.rate = 200 # We'd need to coordinate some sort of rate algorithm between engines.

Anything more complicated, like voices, can wait for later. If someone wants to help and creates such an API, I can probably make it work under Linux and Android. Otherwise I'll get to this eventually, though the screen reader takes priority for me.
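As a starting point, a stub matching that proposed API could be as small as this sketch (GDScript, intended as an autoload singleton; the names mirror the proposal above and the backend is just print() until a real TTS binding exists):

# tts.gd -- add as an autoload named "TTS"
extends Node

var rate = 200  # engines use different scales; a real backend would map this

func speak(text, interrupt = true):
    if interrupt:
        stop()
    print("TTS: ", text)  # placeholder until a platform speech backend is wired in

func stop():
    pass  # a real backend would cancel any speech in progress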

If you can tell me how you implement new GDScript APIs, I can implement that for Windows at least.

I don't know. I'm blind and learning all of this too, so if I'm learning
how to build the TTS module then I'm not building the screen reader
module. :) I'll get to learning that eventually, but my hands are full
pushing GDScript as far as I can, then tweaking the engine when I can't
get something out of GDScript. This whole process is very much like when
I began building my Android screen reader. I didn't know what was
possible, just started hacking on things, and only really got a sense
for what I was doing a few months in.

And my assumption is that you'll need GDNative, unless GDScript offers
some sort of FFI. I guess you'd build a C/C++ module that calls Windows'
speech APIs and exports the functions to GDScript. Then I'd take that
module and make it work on Linux/Android using Speech-dispatcher and
Android's native TTS. I suppose I could also do web as well using
Javascript's TTS API. If GDScript does offer FFI to C/C++, please do
let me know.

Thanks!

I'll look around and see if I can export some functions using the Tolk
library to communicate with screen readers.

That looks almost perfect. Let me know if you have a GDNative module
working for this. If not, I'll implement something for Linux in a month
or so, perhaps.

If you don't want to make it as a module (which requires recompiling the whole engine), the only way to access external libraries is to use GDNative. You can't do FFI with GDScript.

I'll just add it to the GDScript global functions. That fine?

I think it'd make better sense as a static class with members, e.g.:

TTS.speak("Hello, world", interrupt = True)
TTS.stop()

That, at least, is how other modules seem to do it.

A quick Google search reveals a nice GDNative tutorial complete with a sample project using SCons. If you start from there, I should be able to take that and add speech-dispatcher support. Not sure how one adds, say, JS to interface with the Web Speech API, but one thing at a time.

Well crap. I already committed and uploaded it to my fork of the repo
as a part of GDScript. If you visit https://github.com/ethindp/godot,
you can see how I did it. I know, I went outside the norm, but Tolk,
even in C/C++, is not a class-based library. So right now (I think)
you can invoke it like so:

tts_load()
tts_output("Hi", false);
tts_unload()

In order for this to work with a screen reader, you'll need some extra
files (along with the screen reader):

  • JAWS, Window Eyes, ZoomText, System Access: Nothing, works through COM.
  • NVDA: NVDAControllerClient32.dll and NVDAControllerClient64.dll
    (https://www.dropbox.com/s/7txp6iyi65sx12z/nvdaControllerClient32.dll?dl=1
    and https://www.dropbox.com/s/y0pdyxhos31hv9n/nvdaControllerClient64.dll?dl=1)
  • Dolphin screen readers: Dolapi32.dll
    (https://www.dropbox.com/s/m04mpzi7z6bfu5i/dolapi32.dll?dl=1)
  • Microsoft Speech API (MSSAPI/SAPI): sapi32.dll and sapi64.dll
    (https://www.dropbox.com/s/7czg0rt9ht9yloq/SAAPI32.dll?dl=1 and
    https://www.dropbox.com/s/e2fxek89p6muz2h/SAAPI64.dll?dl=1)
Alternatively, you can download all of them here: https://www.dropbox.com/sh/aatj7myhczyxs5u/AAA6K5aAZWis9uAF4CsNz_-Za?dl=1.

I haven't tested this, I just incorporated it. Sorry if that was a bad idea or not. :) I'm not sure how we'll get tts_speak/tts_output/tts_braille to work since they require a wchar_t.

This exports the following functions:

  • tts_load(): Initializes Tolk by loading and initializing the screen reader drivers and setting the current screen reader driver, provided at least one of the supported screen readers is active. Also initializes COM if it has not already been initialized on the calling thread. Calling this function more than once will only initialize COM. You should call this function before using the functions below. Use tts_is_loaded to determine if Tolk has been initialized.
  • tts_is_loaded(): Tests if Tolk has been initialized. Returns true if Tolk has been initialized, false otherwise.
  • tts_unload(): Finalizes Tolk by finalizing and unloading the screen reader drivers and clearing the current screen reader driver, provided one was set. Also uninitializes COM on the calling thread. Calling this function more than once will only uninitialize COM. You should not use the functions below if this function has been called.
  • tts_try_sapi(bool): Sets if Microsoft Speech API (SAPI) should be used in the screen reader auto-detection process. The default is not to include SAPI. The SAPI driver will use the system default synthesizer, voice and soundcard. This function triggers the screen reader detection process if needed. For best performance, you should call this function before calling tts_load(). Parameters: trySAPI: whether or not to include SAPI in auto-detection.
  • tts_prefer_sapi(bool): If auto-detection for SAPI has been turned on through tts_try_sapi(), sets if SAPI should be placed first (true) or last (false) in the screen reader detection list. Putting it last is the default and is good for using SAPI as a fallback option. Putting it first is good for ensuring SAPI is used even when a screen reader is running, but keep in mind screen readers will still be tried if SAPI is unavailable. This function triggers the screen reader detection process if needed. For best performance, you should call this function before calling tts_load(). Parameters: preferSAPI: whether or not to prefer SAPI over screen reader drivers in auto-detection.
  • tts_detect_screen_reader(): Returns the common name for the currently active screen reader driver, if one is set. If none is set, tries to detect the currently active screen reader before looking up the name. If no screen reader is active, NULL is returned. Note that the drivers hard-code the common name; it is not requested from the screen reader itself. You should call tts_load() once before using this function.
  • tts_has_speech(): Tests if the current screen reader driver supports speech output, if one is set. If none is set, tries to detect the currently active screen reader before testing for speech support. You should call tts_load() once before using this function.
  • tts_has_braille(): Tests if the current screen reader driver supports braille output, if one is set. If none is set, tries to detect the currently active screen reader before testing for braille support. You should call tts_load() once before using this function.
  • tts_output(string, bool): Outputs text through the current screen reader driver, if one is set. If none is set or if it encountered an error, tries to detect the currently active screen reader before outputting the text. This is the preferred function to use for sending text to a screen reader, because it uses all of the supported output methods (speech and/or braille depending on the current screen reader driver). You should call tts_load() once before using this function. This function is asynchronous. Parameters: str: text to output. interrupt: whether or not to first cancel any previous speech.
  • tts_speak(string): Speaks text through the current screen reader driver, if one is set and supports speech output. If none is set or if it encountered an error, tries to detect the currently active screen reader before speaking the text. Use this function only if you specifically need to speak text through the current screen reader without also brailling it. Not all screen reader drivers may support this functionality. Therefore, use tts_output whenever possible. You should call tts_load() once before using this function. This function is asynchronous. Parameters: str: text to speak. interrupt: whether or not to first cancel any previous speech.
  • tts_braille(string): Brailles text through the current screen reader driver, if one is set and supports braille output. If none is set or if it encountered an error, tries to detect the currently active screen reader before brailling the given text. Use this function only if you specifically need to braille text through the current screen reader without also speaking it. Not all screen reader drivers may support this functionality. Therefore, use tts_output whenever possible. You should call tts_load() once before using this function. Parameters: str: text to braille.
  • tts_is_speaking(): Tests if the screen reader associated with the current screen reader driver is speaking, if one is set and supports querying for status information. If none is set, tries to detect the currently active screen reader before testing if it is speaking. You should call tts_load() once before using this function. Returns true if text is being spoken by the screen reader, false otherwise.
  • tts_silence(): Silences the screen reader associated with the current screen reader driver, if one is set and supports speech output. If none is set or if it encountered an error, tries to detect the currently active screen reader before silencing it. You should call tts_load() once before using this function. Returns true on success, false otherwise.

Enjoy! :)

Tolk itself doesn't have to be class-based to export a class-based interface. Take a look at the tutorial for a pure C-based example of creating a GDScript class. First hit on Google for "gdnative tutorial".

I won't be able to use this because I already have an engine fork with a custom signal. I'd like to avoid tweaking the engine unless necessary, and then only in the minimal ways needed to build accessibility into addons.

Good start, though.

Ah, to hell with it, I'm just going to take a stab at it. :) Except it's
going to be in Rust, so that's that. Looks like there's a Tolk Rust
binding. I can't do the Windows port, so you'll have to fill in those
blanks yourself.

There's a Tolk Rust binding? Where? I couldn't find it... I'll look for it. :)


It's the first hit when googling "tolk rust" for me. Additionally it
shows up on cargo search tolk.

In any case, another Godot question. If I have a GDNative module that
exposes a TTS class to GDScript, how do I ensure that class is
globally available? I've created my .gdnlib file, but the tutorial at
http://docs.godotengine.org/en/3.0/tutorials/plugins/gdnative/gdnative-c-example.html
just shows a graphic when creating the .gdns file. What does a .gdns
file look like so I can create one by hand? Looks like I load the class
in via preload rather than just make it globally available, is that
accurate?

Thanks.

if I have a GDNative module that exposes a TTS class to GDScript, how do I ensure that class is globally available?

Maybe it's possible to add it as an autoload singleton. In fact, there's a place for GDNative singletons in the editor, so maybe there's an even better way to create them. However there's no documentation about it and I have no idea how it works.
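For what it's worth, a hedged sketch of what an autoload entry might look like in project.godot (the script path is an assumption; the leading asterisk marks the singleton as enabled):

[autoload]

TTS="*res://bin/tts.gdns"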

just shows a graphic when creating the .gdns file. What does a .gdns
file look like so I can create one by hand?

After following the instructions it generates a file like this:

[gd_resource type="NativeScript" load_steps=2 format=2]

[ext_resource path="res://bin/gdexample.gdnlib" type="GDNativeLibrary" id=1]

[resource]

resource_name = "gdexample"
class_name = "gdexample"
library = ExtResource( 1 )
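A hedged usage sketch for loading that resource from GDScript (paths match the generated example; this assumes the underlying native class is Node-derived, as in the tutorial):

const GDExample = preload("res://bin/gdexample.gdns")

func _ready():
    var example = GDExample.new()
    add_child(example)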

An update:

I'm thinking of either abandoning the project entirely or forking the engine. As a bit of context, I spent 4-5 years building Spiel, a fairly major Android screen reader, during a time when Android's accessibility APIs weren't very great. I like to think that I'm somewhat knowledgeable about what makes a good accessibility API, and am also sensitive to the performance needs of the underlying OS and user experience. Ultimately I abandoned the project in large part because it was difficult to build atop an incomplete accessibility API, and I had to resort to hack upon hack to make Spiel useful.

Godot is kind of the same way. There are actually enough UI signals for a reasonably sophisticated accessibility API. But I'll need a few more for an accessibility addon to have enough information. Specifically, #19522 would have made things easier, but I was asked to refactor that into addon-specific code. Now #19814 is also being debated. I understand not wanting to add a million signals, and I'm trying to be sensitive to that. All of my changes should stay confined to the UI layer, and I can't imagine this having much of an effect unless someone is playing a typing game where every last bit of performance is needed to render a LineEdit at 120 FPS. :) Even so, the core is going to need some additions if folks really want this to happen.

And at the end of the day I just wanted to make games, so if I have to rigorously defend every PR on my own without much support from the folks who'd like to see this happen, or if I'm building all the addons by myself, I'm not really doing that. Not trying to stir stuff up, I just wanted to leave an update here either to inspire some sort of process discussion on how we might make this happen if it is really wanted, or to let anyone considering taking it up know that there may be some pushback.

Sounds like #19840 may be discussed at an upcoming PR meeting. If folks genuinely do want Godot to be more accessible, I'd appreciate more advocates than me at that meeting. I don't even know when said meeting is, or whether it will conflict with actual work meetings I need to attend for $dayjobs. :) I'm just realizing I can't do this alone, and am going to need some help from any interested parties.

Thanks.

@ndarilek If you're still interested in working on this, I'd like to help.

I know it's been a while, but I'm going to try picking this up again. Folks have recommended one large patchset with all needed accessibility fixes to the engine, so I'll give that a shot and hope no one asks me to split it up. :)

But I'm hitting some limits in terms of what I as a blind person can do. At the moment I have a speech interface that lets me tab around the editor, interact with menus, interact with/change some project settings, etc. I'm getting to the point where I'm ready to try taking on some simple text-based tutorials, but haven't quite unlocked enough of the interface. Could someone please answer the following:

  • Where is the input map tab under Project -> Project Settings? I see a tree of options--config, run, editor, editor plugins, etc. but no input map. I'm trying to get to a list of action names so I can see what is available. I'm wondering if this is in a part of the interface that I haven't made accessible yet, though I'm surprised it isn't in the category tree.
  • Can I programmatically trigger an action? In particular, some tutorials tell me to right-click a node, but I don't see a way to do this from the keyboard. I'd like to bind the Menu key to right-click so I can follow some of the steps in this tutorial.
  • Alternatively, is there a keyboard-friendly way to right-click, and a list of keyboard shortcuts anywhere? Maybe my issue is that Menu works, but I don't have enough engine accessibility exposed to know that.

And @malcolmhoward, if your offer to collaborate is still open a year later, I'm down. I'm willing to give this one more shot. I see you tried merging some Festival support. If you're interested in continuing that work, I have a Rust-based TTS plugin that currently supports Speech Dispatcher under Linux and Tolk under Windows. Would love to diversify that support (and to test the Windows Tolk support for that matter, since Linux is my primary platform.)

Thanks.

Ignore my first question above. I discovered and added accessibility support to TabContainer. I'm now able to find, and select, the Input Map tab. Still a bit confused about whether there's a keyboard-friendly way to right-click nodes in the scene tree, or failing that, how I can trigger that action programmatically.

What would be very useful is if I could set up a screenshare call with someone familiar with Godot development to help me through a few initial tasks. Things I'd like to accomplish:

  • Creating an initial empty game field.
  • Creating a player node/scene and adding it to the area.
  • Learning how to access the properties on each node/scene so I can set them without dragging/dropping.
  • Learning how to attach scripts to my nodes. I've gotten enough of the settings accessible that I've managed to set my external editor to gedit, so once I've figured out how to attach a script to a node and edit that script, I'm back in my accessible editor for that task.

If I could do those things, I think I could make much faster progress. Even unearthing accessibility issues performing any of the above would help me focus my efforts. Right now I'm just shotgunning, trying to find things that aren't accessible and making them work when what I need to do is focus in on my tasks and branch out from there.

Thanks.

@ndarilek Yep, I would still like to help work on this. I will need a week or so to regain my footing with everything related to this issue, and then I'd be happy to jump onto a call. I'm not an expert with the engine, but I think I've built enough game prototypes to get us started.

I sort of figured out how to add scripts. I decided to start building a
simple game alongside the accessibility plugin so that collaborators
have a larger common frame of reference to discuss accessibility issues.
I created a Player Area2D node, and lots of tabbing around revealed the
button to add a script, which popped open Gedit and was a very usable
experience. Progress!

Asteroids is sort of my goto for learning an engine or framework, so
next I thought I'd add a Main scene to encapsulate all the game logic
that couldn't fit within a single entity type. I created a scene, set
its root node type to Node, and I cannot for the life of me find a
button to add a script. I have no idea whether it isn't possible to add
a script to this particular scene for some reason, etc.

I instanced a Player as a child of the scene, and I can see the player
and its button to clear the script. But I have no idea why I can't add a
script to just the Main scene root node.

My work is here for anyone interested in following along. You may also
need the TTS plugin, though it may work without it too and just print
errors whenever attempting to speak something. But either way, removing
the accessibility plugin would
possibly reveal why my simple node setup isn't letting me attach a
script to Main. Could be that I'm not correctly selecting the Main scene
in the tree, in which case that's helpful feedback.

Also, is it fair to say that many of these buttons in the editor are in
toolbars? I'm trying to come up with an easier means of navigating
through all these buttons via the keyboard, because
tabbing/shift-tabbing through them all is a bit slow. If many are in
toolbars, I may implement tabbing between the toolbars and
right/left-arrowing between the toolbar components.

The main node in the tree is no different from the other nodes as far as I can tell, except for the fact that it's selected when your new scene tab is first opened upon creation.

OK, more progress. I've exposed more properties on Tree to speech. I
thought that up/down-arrow were navigating within the tree, but they
seem to be selecting items. I think what's happening is that I have
multiple scene nodes selected, so the button to add a script doesn't
appear. I can sort of get the tree into a state where only one item is
selected by expanding/collapsing items, at which point the Add Script
button appears for Main.

Can someone please explain the logic for pressing up/down arrows in a
tree? I assumed it navigated to a single item and selected whatever was
focused, but in something like a tree of scenes that I assume supports
multi-select, I'm wondering if it behaves differently? Is there a
different keystroke for navigating in a tree and selecting a single
item, or is there something to deselect the current item?

Thanks.

Lots of good progress yesterday. The property inspector has some
collapsable things that have no keyboard focusability or interactivity,
but I unearthed those and did some dark magic with simulating mouse
clicks, so I can now expand/collapse node properties via the keyboard.
Trees are about 80% accessible, though multi-column trees with different
controls still pose a challenge.

Speaking of, I'm confused about how the input map editor works. I see a
series of actions represented by tree items. The tree has 3 columns: 0 =
action, 1 = deadzone, and an unlabeled third. Expanding each action type seems to expose
children for device types, and I assume from here I can configure
actions, but I'm confused about lots of things:

 * What is the unlabeled third column in this tree?

 * Say I add an action, "speak_coordinates". I get a tree item for the
action with no children. How do I add a child for a key mapping? Maybe
it has to do with some interaction with this unlabeled third column that
I'm not supporting accessibly yet?

 * What is the unlabeled TextureButton on this screen? Clicking it
seems to close the settings, but it isn't the labeled "Close" button so
maybe it's another action? Save/Undo?

Thanks for any help.

@ndarilek The third column of the input map editor contains a button to add a new input event to an existing action (next to actions in the tree, it's represented by a "plus" symbol). For existing input events, it contains two buttons, one to edit the input event (the left one, represented by a pen) and one to remove it (the right one, represented by a cross).

The TextureButton that closes the settings is most likely the WindowDialog close icon (represented by a cross), it's implemented here: https://github.com/godotengine/godot/blob/750f8d4926edb14269d9f6a117c5a9fd4765373a/scene/gui/dialogs.cpp#L338-L345

The Project Settings is a modal dialog, hence the use of WindowDialog.

Thanks, that's helpful. I'm now exposing button counts in column
announcements for Tree and am getting an announcement of the fact that
there are buttons.

Thanks for pointing me to where the Close TextureButton is
implemented--I'm having a hard time tracking some of these down in the
source, and am trying to set meta fields on some of these so my plugin
can provide accessible descriptions/labels.

I just added a get_button_tooltip(...) method to TreeItem to expose
buttons' tooltips, since buttons in columns are only returned as
Texture objects and there's no way to get at the associated tooltip
via that. Hopefully that isn't an objectionable merge when I submit a
larger PR for this plugin.

Thanks again.

I've now implemented a bit of hackery that lets you right-arrow to a
column of buttons in a tree, use Home/End to cycle between them, and use
Space to activate one. Via this mechanism, I can now activate the dialog
to add a key to a created action.

I have a question about how this works, though, complicated by the fact
that I can't use the keyboard to explore this dialog. Does it intercept
a single set of keypresses, then let me confirm/cancel that? Or does it
intercept every key sent to it, and make the last key what is
ultimately bound to the action?

I.e. say I add an action, "speak_coordinates", which I want bound to
"c". If this dialog appears and I press "c", then attempt to tab to the
OK button, does it:

a) bind "c" to the action, ignoring subsequent keypresses?

b) bind "c", then "tab" when I attempt to tab to the OK button?

Thoughts welcome on how to handle this edge case. I may add a meta
property to this specific dialog telling the plugin to either only
support capturing the first InputEvent or, if that already happens,
redirecting focus to the confirmation button after the first event is
handled. But I've looked through the code, and am not immediately
certain whether A, B, or something entirely different is true.

Thanks.

@ndarilek When you trigger the button that adds a new input event, it will present a dropdown with four options:

  • Key
  • Joy Button
  • Joy Axis
  • Mouse Button

The Key option displays a modal dialog (on top of the existing one) that asks the user to press a key (or a key with modifiers, e.g. Ctrl+K). If the user presses another key before confirming, it will replace the key currently defined in the confirmation dialog. This dialog will keep listening for keyboard events until the user confirms by clicking "OK" or cancels by clicking "Cancel" in the dialog. Since this dialog listens for all keys (including "special" keys such as Tab), keyboard navigation isn't possible. This is because the dialog will replace the current choice with Tab. Likewise, the Escape key cannot be used to cancel the dialog. We should aim to improve this dialog's usability :slightly_smiling_face:

In contrast, the other types (Joy Button, Joy Axis and Mouse Button) don't listen for events, they just use dropdown menus in modal dialogs instead.

Got it. So I think that, for accessibility purposes, I'll arbitrarily
decide that the first key in this dialog "wins" when the plugin is
running, so subsequent tab/enter/escape presses do what they're supposed
to. I've discovered that redirecting focus to the OK button when in this
dialog causes whatever key used to trigger the button not to be saved.
I.e. if I use Space to trigger the dialog to appear, then Enter to
trigger the OK button, Space is saved as the key, not Enter.

So here's the behavior I'm witnessing now, and I'm wondering if anyone
has any thoughts on what to do about it. I trigger the button that opens
the dialog on pressing "ui_accept", so Space or Enter by default. When I
trigger the dialog, whatever key I use to trigger the dialog to appear
is the key set for the command. So as above, if I use Space to trigger
ui_accept, and Enter to close the dialog, my command is set to Space.
Likewise if I trigger the dialog to appear with Enter and use Space to
press the OK button, the command is bound to Enter.

I think what's happening is that my press event sends the signal to
click the button, which in turn opens the dialog listening to key
events. That dialog then gets the release of the key I just pressed
(I.e. Space) and sets it as the command. Wondering if anyone has any
thoughts on how to work around this? I tried is_action_released
instead of is_action_pressed, my thought being that it would detect
the release of the pressed action somehow and trigger on that, but no dice.

Here's my code. Essentially it identifies the selected button to be
clicked, then sends the signal. Is there any way I can make the node
accept the entire event, including its release? Or is there something
else going on here that's causing the event release to reach the dialog
and trigger the capture? Suggestions welcome--I'm learning GDScript by
jumping in the deep end so don't have a clue:

var button_index


func tree_input(event):
    var item = node.get_selected()
    var column
    if item:
        for i in range(node.columns):
            if item.is_selected(i):
                column = i
                break

    # button_index is set to 0 in an item_selected callback based on
    # get_button_count(...) != 0

    if item and column and button_index != null:
        if event.is_action_pressed("ui_accept"):
            # How can I accept the corresponding release of the press here
            # so it doesn't leak through?
            node.accept_event()
            return node.emit_signal("button_pressed", item, column, button_index + 1)
        var new_button_index = button_index
        if event.is_action_pressed("ui_home"):
            node.accept_event()
            new_button_index += 1
            if new_button_index >= item.get_button_count(column):
                new_button_index = 0
        elif event.is_action_pressed("ui_end"):
            node.accept_event()
            new_button_index -= 1
            if new_button_index < 0:
                new_button_index = item.get_button_count(column) - 1
        if new_button_index != button_index:
            button_index = new_button_index
            var tooltip = item.get_button_tooltip(column, button_index)
            var text = ""
            if tooltip:
                text += tooltip + ": "
            text += "button"
            tts.speak(text, true)

Just thought of something @ndarilek. Apart from the fact that AFAICT there is no way to "right click" from keyboard, some keyboard commands (e.g. Del) depend on what is focused/selected. Does your Godot/screen reader combo tell you that or would we need an annotation (probably in "meta")?

If you mean determining which node currently has focus, I track
focus_entered and lots of other signals to report changes via speech.

And I'm simulating left mouse-clicks, so I'll probably branch out to
right-clicks soon. Though, having found the list of keyboard shortcuts
in the editor along with the New Script/Scene buttons, this isn't so
immediately critical.
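In case it's useful to anyone following along, here's a rough sketch of how simulating a right-click might look, assuming a reference to the currently focused control (the helper name is made up):

func simulate_right_click(target):
    var position = target.get_global_rect().position + target.rect_size / 2
    for pressed in [true, false]:
        var click = InputEventMouseButton.new()
        click.button_index = BUTTON_RIGHT
        click.pressed = pressed
        click.position = position
        click.global_position = position
        Input.parse_input_event(click)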

Hacky solution found. The only circumstance I've yet found where an
AcceptDialog gets focus is this dialog to set keyboard shortcuts. So
if that happens, I add a oneshot timer for 5 seconds that autocloses the
dialog. In that 5 seconds, you press whatever key combo you want
assigned to that action, and the dialog confirms the change automatically.

This is hacky as fuck, so if someone has a better solution then I'm all
ears. :) I may also need to add an obscure meta property to this dialog
in the engine so the accessibility plugin knows to special-case it and
announce instructions to press a key.
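Roughly, the hack looks like this (a sketch, not the exact plugin code; names are illustrative):

func _on_accept_dialog_focused(dialog):
    tts.speak("Press the key combination you want to assign.", true)
    yield(get_tree().create_timer(5.0), "timeout")
    if is_instance_valid(dialog) and dialog.visible:
        dialog.get_ok().emit_signal("pressed")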

With this in place, I'm able to add actions to my game and respond to
them. I now have keys to speak my player's coordinates, heading, etc. as
well as to quit the game. Think I'm getting close to being able to
develop simple audio games with my accessible interface.

So naturally I have another question. :) Is there a signal that I can
hook in my EditorPlugin to detect when the game I'm running exits?
When the editor initially launches, I have to set a starting UI focus so
that Tab/Shift-Tab can even navigate, otherwise there's no current focus
to find a next/previous focus from. But when a launched game exits,
focus is unset, and I haven't tracked down a signal to catch to handle
that. Presumably something as significant as launching a separate
game/scene tree within the editor doesn't just vanish without sending a
signal.

Whew, and the road goes ever onward...

I don't think there is something, but you could make use of WM_NOTIFICATION_QUIT in your game script somehow maybe? Send a boolean variable over to the EditorPlugin? (While I have used WM_NOTIFICATION_QUIT to autosave on exit, I haven't used EditorPlugin almost at all)

Hmm, what is WM_NOTIFICATION_QUIT?

And what happens visually in the editor window when I run a game? I know
the game appears in a separate window, but does anything in the editor
change? Wondering if it switches to a different screen or something. I
see this in my console:

Running: /home/nolan/src/godot/bin/godot.x11.tools.64 --path
/home/nolan/Projects/godot-accessibility --remote-debug 127.0.0.1:6007
--allow_focus_steal_pid 32312 --position 328,225

which suggests to me that maybe the interface itself changes to
something that isn't yet accessible, even after the game closes.

NOTIFICATION_WM_QUIT_REQUEST is a notification identifier (it's defined in MainLoop).

For instance, you can react to notifications in GDScript by writing a _notification(what) function:

func _notification(what):
    if what == NOTIFICATION_WM_QUIT_REQUEST:
        print("User requested the project to quit")

You can also make Godot not quit automatically when the user clicks the "Close" button or presses Alt+F4. See Handling quit requests in the documentation.

To answer your latest question, the Output panel of the editor will open automatically when you run a game by default. This behavior can be disabled by unchecking Run > Output > Always Open Output On Play in the Editor Settings. It will stay open after you close the game, unless you check Run > Output > Always Close Output On Stop in the Editor Settings.

This panel displays all messages printed by the running game, and is located at the bottom of the editor window. When the project isn't running, it can be expanded or folded manually by clicking on it.

Hello,

How can I test this under windows?

Thanks,

Windows testing is a bit dicey right now. First you need to compile
godot-tts, which is Rust-based, and requires setting up Rust's Tolk
library and running a screen reader. Tolk-rs' maintainer isn't actively
working on the project anymore, but I've offered to take it over and
make it a bit easier to work with. I just spend most of my time in
Linux, so Windows isn't a priority. You also need my fork of the engine.

So, in short, very rough under Windows right now. Help on that front
very much appreciated. It's doable--I just have my hands full.

I'll probably put together a screencast in a week or two showing off
what's possible so far and recruiting help. So you'll at least be able
to see it in action.

Thanks,

Ok, native devel is not a strong point in my case...

I will try to build myself.

Anyway I got this when trying to build godot-tts

   Compiling gdnative-sys v0.5.0
error: failed to run custom build command for `gdnative-sys v0.5.0`

Caused by:
  process didn't exit successfully: `C:\Users\Franci\source\repos\godot-tts\target\debug\build\gdnative-sys-01490563416791dd\build-script-build` (exit code: 101)
--- stderr
thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any of [\'clang.dll\', \'libclang.dll\'], set the LIBCLANG_PATH environment variable to a path where one of these files can be found (skipped: [])"', src\libcore\result.rs:999:5

So I'm stuck...

Hmm, I have the following in my EditorPlugin:

func _notification(what):
    print("Notified: %s" % what)
    if what == MainLoop.NOTIFICATION_WM_QUIT_REQUEST:
        print("User requested the project to quit")

That does print things, but never "User requested the project to quit",
even when I quit the editor itself. Thoughts?

Also, I tried an experiment where I made every node
keyboard/mouse-focusable, thinking that should make focus land somewhere
from which I could tab. I then started printing on focus_exited, and
discovered that focus is removed from where it last lands on game
launch. So it isn't, in fact, landing somewhere unfocusable. It's
nowhere at all.

For now I have a workaround wherein I set an initial focus on screen
change if nothing is focused, so pressing F1-F3 gets things working
again. Is it possible for an EditorPlugin to intercept GUI input? I
see a `forward_gui_input` method (or something similar, docs aren't open
now) but it isn't documented. If so, I can capture input and set focus
somewhere if focus is unset.

Thanks.

I asked this on the forum, but didn't get an answer. This work depends
on a GDNative TTS plugin I wrote. Can I make GDNative plugins available
for others to use, and if so, how? I don't mean how to cross-compile and
set up CI, but rather:

 * Is there a standard way of making an archive of compiled binaries
available for other plugins/games to use? Presumably folks aren't
expected to build third party GDNative libraries in cases where they're
exporting to other platforms. Or are third party pre-built GDNative
plugins not a thing?

 * I have const TTS = preload("res://godot-tts/godot_tts.gdns") in my
script. This assumes a set location for my plugin library. Likewise, the
libraries themselves have res:// paths which assume locations in
res://godot-tts/target/debug. I don't want to impose a project layout on
anyone, so am wondering if any of these paths can be relative?

Thanks.

  • Is there a standard way of making an archive of compiled binaries
    available for other plugins/games to use? Presumably folks aren't
    expected to build third party GDNative libraries in cases where they're
    exporting to other platforms. Or are third party pre-built GDNative
    plugins not a thing?

Unfortunately, there's no standard for that yet, so you will have to compile libraries and make them available using GitHub Releases or similar.

  • I have const TTS = preload("res://godot-tts/godot_tts.gdns") in my
    script. This assumes a set location for my plugin library. Likewise, the
    libraries themselves have res:// paths which assume locations in
    res://godot-tts/target/debug. I don't want to impose a project layout on
    anyone, so am wondering if any of these paths can be relative?

I don't think you can make the GDNativeLibrary use relative paths (unless you create it at run-time, but that sounds quite involved). Still, add-ons are often located in res://addons, which is the standard location used by editor plugins. Therefore, you could ask users to place everything in res://addons/godot-tts, which should play well with most projects.
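For example, with that layout the preload might look like this (paths are illustrative):

const TTS = preload("res://addons/godot-tts/godot_tts.gdns")

onready var tts = TTS.new()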

Ah, OK, so basically build a GitHub release or equivalent containing my
pre-built binaries/docs/whatever, and ship everything configured to
recommend and use res://addons/godot-tts? Cool, thanks, that's helpful!

I think that, once I've added support for setting node properties from
the editor, I'll start working on actual audio games to see how far I get.

Unfortunately, I'm struggling with making some of the properties a bit
more accessible. Specifically, I've added keyboard support to
expand/collapse EditorInspectorSection, at which point I can tab through
any contained properties. What I can't do is get labels for
some--EditorPropertyVector2, for instance.

I assume there are text labels for these somewhere? They aren't made
available as Label instances. Could someone please point me to where
these property labels are rendered? I'm scraping the tree trying to find
them, but am coming up blank.

Thanks a bunch.

EditorPropertyVector2 displays two fields (EditorSpinSlider) marked as "x" and "y". These EditorSpinSliders are children of a VBoxContainer by default, or an HBoxContainer if interface/inspector/horizontal_vector2_editing is enabled in the Editor Settings.

The initial rendering is done here: https://github.com/godotengine/godot/blob/24e1039eb6fe32115e8d1a62a84965e9be19a2ed/editor/editor_properties.cpp#L1150-L1181

Sorry, I was unclear. Where is the label for this property rendered?
Presumably in a Node2D, one of these properties references position. I'm
wondering how to find the label text for that property, given that it
doesn't appear as a Label anywhere I can find.

Sorry if I'm missing it--I did find the EditorPropertyVector2
implementation a while back, but it only seems to render the UI for
setting the property and not its associated label.

Thanks.

The labels (as well as everything that @Calinou mentioned) are rendered in the inspector.

From the top, the inspector contains the following:
Inspector | Node (those are two tabs)
[ name of node you're inspecting, e.g. "marker"]
a text box for filtering properties
[Script Variables header (optional), with an arrow for collapsing]
[any exported script variables appear here]
[ class of node you're inspecting, e.g. Node2D]
[Transform header - with a tiny arrow that lets you collapse the section]
Position label - rendered to the left of the two boxes that EditorPropertyVector2 makes
Rotation degree - ditto, to the left of a single text input box
Scale label - same as position, the two boxes are labeled x,y so same as Position
[Z Index header]
[ super class of node you're inspecting, e.g. CanvasItem in case of a Node2D]
[Visibility header, with an arrow...]
[Visible label next to a tick box]
[Two color selectors]
[Show behind parent label next to a tickbox]
[Layer mask - a complex container full of tiny little boxes]
[Material header, with an arrow...]
[Node label] - it's a super super class every node inherits from, so it's always at the bottom
This section always contains two headers, both with arrows for folding:
[Pause - a dropdown]
[Script - a dropdown which lets you select a script]

For 3D nodes, for example, the inspector can get very complex - what I described was just a simple Node2D example. I think your best bet is to somehow make a shortcut for quickly accessing the most important things - the Transform section and Pause/Script at the bottom - and you can always easily navigate to exported properties because they're at the top.

P.S. If you select the Node tab at the top, it opens a completely different menu in place of the Inspector.

P.P.S. If you filter the properties, you only have the labels, e.g. Node2D, and only one header (of the kind that has the arrow for collapsing) which contains the property you want. I just checked and you can indeed filter for a built-in property such as position, so this might save you a lot of time :)

Oh, I figured it out. I forgot that a node's parents aren't necessarily
its superclasses. I had a simple check for `node is EditorProperty` and
spoke its label if one was set. It turns out the LineEdit for the
position X/Y components has an EditorPropertyVector2 in its node
ancestry tree which, in turn, has the label.

Cool, now we have labels speaking for editor properties in the
inspector. Thanks for the layout description as well. That helps. Going
to have to think about how to make that more accessible.

It seems property labels aren't drawn using nodes, but rather using low-level draw_string() calls. (Try searching for draw_string in editor/editor_inspector.cpp.) I'm not sure how these could be made accessible, or if turning them into nodes will be necessary.


For future reference, here's some additional information about EditorInspector/EditorProperty. I wrote this before searching for instances of draw_string(), so what follows may be superfluous.

I haven't played much with the editor inspector code, but from what I understand, editor properties are added to an AddedEditor struct when they're registered: https://github.com/godotengine/godot/blob/24e1039eb6fe32115e8d1a62a84965e9be19a2ed/editor/editor_inspector.cpp#L865-L873

This AddedEditor struct has a label string, which will be passed to instanced EditorProperty nodes: https://github.com/godotengine/godot/blob/24e1039eb6fe32115e8d1a62a84965e9be19a2ed/editor/editor_inspector.cpp#L1320-L1367

Finally, this label string is used to populate the Label node using EditorProperty::set_label_reference, but only if horizontal Vector2 editing is disabled: https://github.com/godotengine/godot/blob/24e1039eb6fe32115e8d1a62a84965e9be19a2ed/editor/editor_properties.cpp#L1178

Otherwise, EditorProperty::set_bottom_editor will be used to place the editor below the label: https://github.com/godotengine/godot/blob/24e1039eb6fe32115e8d1a62a84965e9be19a2ed/editor/editor_properties.cpp#L1158

I now have the Menu key triggering right-clicks. This seems to allow
programmatic interaction with rows in trees.

I also fixed some issues calculating text for PopupMenu items. This
has the added benefit that many items I thought were unlabeled actually are.

Now the right-click menu items in trees are accessible, or at least
partially. I seem to be experiencing some off-by-one issues that are
tricky to diagnose.

Getting there...

I promise not to hijack this issue for this purpose, but where's the
best place to get help with general questions using the engine?

A few days ago, I discovered that 2-D audio streams seem to presuppose
that they're panned/attenuated from screen center, and don't take
rotation into account. This isn't what I want from a top-down audio game
which will have rotation, possibly off-screen sound sources heard in the
distance, etc. The viewport docs seem to indicate that 3-D audio is
possible even for 2-D nodes, but I'm not clear if I can just add a 3-D
stream player to 2-D nodes and have their positions synced, if I have to
sync the X and Z positions manually to the 2-D X/Y and pin the stream's
Y to 0, if I have to figure out how to use an ortho camera and render
3-D objects in 2-D, etc.

I asked this and tried posting this on Reddit, but the former has no
responses and the latter isn't visible.
I don't know if I don't get Reddit, if I need to be approved, etc.

Anyhow, I don't want to turn this issue into a support thread, but I'm
using Godot in an unusual way, and it may turn out to be impossible even
after doing all this work. I'm prepared for that--I decided to give a
month to pushing this forward and seeing how far I get with it--but part
of succeeding here involves knowing whether what I want is even
achievable even with accessibility. And if I can create an audio game
development environment, I'll be putting together materials to show
others how to do it, so hopefully what I learn can help others. But I'm
having a tough time getting answers to these niche questions, and that
seems to be this week's bottleneck.

In other news, I'm also working on automating Windows builds of the TTS
plugin, and have put together an accessible starter project making it
easy to both demo my work and create in an accessible environment. I'll
update when I have binary builds of the TTS plugin available for download.

Thanks.

Adding a Listener2D node has been requested in https://github.com/godotengine/godot-proposals/issues/49. It would let you configure where the sound is being listened from in a 2D scene, like it can already be done in 3D with the Listener node.

I can confirm the Reddit post appears as removed, I don't know why. I manually approved it, but it probably won't appear in the front page as it was posted yesterday. Try posting it again, it should be fine for now :)

Ugh, so frustrating. I just resubmitted the post, viewed it in a private
tab, and it shows as removed again.

Thanks for the proposal link.

Sorry to use this issue for support like this.

Progress slowed a bit while I tried doing battle with Appveyor to build
the TTS plugin under Windows. I've since given up on that hot mess, and
will probably provision my own Windows VM sometime soon and set up a
GitLab CI runner. If anyone has a spare Windows VM sitting around for
that purpose, I'd appreciate it.

Panels are now focusable. This fixes last week's issue where quitting a
game in the editor broke keyboard focus.

I also refactored the TTS plugin a bit to route calls through a TTS.gd
script. For now this sends all calls to the native Rust library, but
future versions might dispatch calls to a Java/Kotlin module on Android,
etc.

I'm also doing a bit of work on my first Godot game, a spatial audio
version of Asteroids. Last week's Appveyor distraction killed progress
in that area, but I'm hoping to get to it more this week. Working on
navigation, wrapping, speaking coordinates/heading, etc. I also had to
hack around the lack of 2-D spatial audio, which I think I've done but
have yet to test.

@francipvb Have you tried installing Clang/LLVM? I'm actually hitting this trying to build the latest godot-tts under Windows as well, and discovering I likely need clang installed. The build works under Appveyor, so I suspect they have it installed there and it may be your missing piece.

I may try this later this week, but Godot doesn't run in my Windows VM, so even if that gets it working, I'll be limited in how much Windows testing I can do.

I have an accessible starter that sets up a directory structure into which you can compile godot-tts. Ignore the instructions to fetch the godot-tts build from Appveyor. I can't figure out how to combine Linux and Windows builds into a single zip, so I'm abandoning that in favor of setting up my own GitLab CI runner. But since I can't run Godot under Windows currently, that's getting downgraded in priority for now.

I may try this later this week, but Godot doesn't run in my Windows VM, so even if that gets it working, I'll be limited in how much Windows testing I can do.

You should be able to install a software OpenGL implementation in the VM: https://github.com/pal1000/mesa-dist-win

Godot will run slowly, but this way, both the GLES3 and GLES2 renderers will work.


I left this untouched, but I will give a try.

Cheers,

Hello @ndarilek,

I gave it a try. I've installed the LLVM toolchain and it worked.

Cheers,

Sweet! Do you mean it compiled, or does it actually talk under Windows?
I haven't had a chance to try getting software OpenGL working yet.

I think I have an OpenGL implementation (I have an NVIDIA GPU).

I'm just building your godot branch.

Hello,

How I connect these two things now?

Thanks,

Note that I've built the branch from github.

I think it's documented well in the README for the starter. Please let
me know if you have any questions regarding that.

Sorry, I am unsure where I have to look, because the godot-tts repository doesn't have a README.

Cheers,

Oh, sorry, thought you saw
this.
Don't follow the godot-tts download instructions since they assumed I
could get Appveyor working. And the last bit about losing focus on game
quit is no longer true.

Good luck, hope this works under Windows.

Having some real struggles with the right-click popup menus I get when
clicking on tree nodes in the editor. Here is the code I'm working with.
In particular, sometimes get_item_index
is returning -1 and I don't know why. That makes it pretty much
impossible to get any details on a PopupMenu item, and the methods for
doing so seem to expect an idx parameter which I assume is an index.

Looking at the source, it seems -1 means the item isn't found. But I'm
passing in the ID as retrieved by id_focused, so I don't know why this
signal would hand me an ID that isn't found.

Anyhow, I attempted to just sub in the ID when I get -1, but that's
clearly not right, as I often click on one thing and get something
entirely different. Help with this very appreciated--I've spent hours on
it and don't know whether these particular control instances are buggy
or if something else is going on here. Things work in other menus, but
not the tree item context menus in the node list. Thanks.

Oh, sorry, thought you saw this. Don't follow the godot-tts download instructions since they assumed I could get Appveyor working. And the last bit about losing focus on game quit is no longer true. Good luck, hope this works under Windows.

Apparently this link is to a private repo...

Thanks,

Doh, fixed, sorry.

I'm sorry, I don't want to turn this into a general support thread, but
in this case I don't know if I've done something wrong and the editor is
flagging it with a warning or if I've genuinely made a mistake. I'm kind
of doing a bit of a hybrid workflow where I edit some things in the
editor and then, with it shut down, edit some things in the files by
hand. So it's possible I've done something wrong, though I'm not getting
syntax errors.

I can't get 3-D audio playing, and put together this example to make
things as simple as possible, but it doesn't work. My intent is to create a
single 3-D spatial node with an audio stream rumble and listener as
children, then play the rumble. This works in 2-D, but not in 3-D. Am I
doing something wrong here?

Apologies again, I asked in the forum without response, and text
tutorials are hard to come by. Sometimes I don't know whether
I'm hitting a Godot bug, or if the editor is flagging an error and my
plugin isn't exposing it.

Thanks for any help. Other than this audio issue, I'm making some decent
progress on Asteroids-style navigation with a spoken interface. There
are still wrinkles, but working with the Godot editor as a totally blind
developer is doable so far.

I've broken accessibility support out into a separate ScreenReader
node that can, theoretically, be added to any SceneTree to make
in-game UIs accessible. This makes my plugin not only an editor
accessibility enhancement, but an equivalent to Unity's accessibility
plugin that has been used in many shipping games. Of course,
functionality is still lagging behind, but to the best of my knowledge,
Unity's toolchain is not yet accessible so we're ahead in that regard.

I can't get 3-D audio playing, and put together this example to make things as simple as possible, but it doesn't work. My intent is to create a single 3-D spatial node with an audio stream rumble and listener as children, then play the rumble. This works in 2-D, but not in 3-D. Am I doing something wrong here?

You need to add a 3D Camera to the scene. You can then put the listener as a sub-child of the camera and move the camera, or move the listener directly. That should fix the issue...

which I think is a bug, since no error/warning whatsoever is displayed in the editor when no camera is present and sound is not played.
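A minimal sketch of that setup, with node names and the asset path purely for illustration:

extends Spatial

func _ready():
    var camera = Camera.new()
    add_child(camera)
    camera.make_current()

    var listener = Listener.new()
    camera.add_child(listener)
    listener.make_current()

    var rumble = AudioStreamPlayer3D.new()
    rumble.stream = load("res://rumble.ogg")  # assumed asset path
    add_child(rumble)
    rumble.play()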

There are still wrinkles, but working with the Godot editor as a totally blind developer is doable so far.

This is heart-warming to hear, thank you for your work in making Godot more accessible.

Ah, I thought the listener was enough. Can I add the camera and prevent
it from rendering while still keeping whatever property makes the
listener work? I do want to be able to render something to 2-D, just
not a full 3-D scene. Or can I tweak the 3-D camera to render 2-D
somehow--give it an orthogonal perspective maybe?

Thanks, been poring through the sources trying to track that down.

Ah, I thought the listener was enough

I think it should be, it might be a bug, I don't have much knowledge in that area.

Can I add the camera and prevent it from rendering while still keeping whatever property makes the listener work?

You can put all the 3d stuff in a Viewport and it will not render (remember to enable the listener property).

I do want to be able to render something to 2-D, just not a full 3-D scene. Or can I tweak the 3-D camera to render 2-D somehow--give it an orthogonal perspective maybe?

You can render both 3D in 2D (via ViewportContainer) and 2D in 3D via (Mesh, Material, ViewportTexture).
There is some documentation here:
https://docs.godotengine.org/en/3.1/tutorials/viewports/viewports.html
and demo projects here:
https://github.com/godotengine/godot-demo-projects
under the viewport subfolder.
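And a hedged sketch of the Viewport approach for audio-only 3D, again with names and the asset path as assumptions:

extends Node2D

func _ready():
    var audio_space = Viewport.new()
    audio_space.audio_listener_enable_3d = true  # the "listener" property mentioned above
    add_child(audio_space)

    var camera = Camera.new()
    audio_space.add_child(camera)
    camera.make_current()

    var rumble = AudioStreamPlayer3D.new()
    rumble.stream = load("res://rumble.ogg")  # assumed asset
    audio_space.add_child(rumble)
    rumble.play()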

OK, I got my stream playing! I've read lots of the docs already, and am
also a bit limited by my inability to see what's actually being
rendered, so I hope you don't mind a few more questions inline. I'm
asking these elsewhere but this is the only place I'm getting any
answers, and I'm sorry to anyone who feels spammed, but I'm almost at
the point where I'm considering hiring a Godot consultant to help with
this open source project, which I'd like to avoid due to limited funds.
But my questions:

On 9/25/19 10:38 AM, Fabio Alessandrelli wrote:

You can put all the 3d stuff in a Viewport and it will not render
(remember to enable the listener property).

Wait, I thought that was the whole point of a viewport? So it sounds
like what you're saying I can do is:

 * Create a 2-D game like I currently am, letting it render to the
default viewport that gets created.

 * Create a separate viewport with a camera as a child and a listener
as a child of the camera.

And in that case the camera won't render anything even if the viewport
is a child of the scene tree? Is that because its size is set to 0 by
default, or is there some other reason I can process code on an
invisible viewport? That's entirely counter-intuitive to me, but if that
works then it would greatly simplify my work. I also want to make sure
that the camera continues to process even if the viewport isn't visible.

Do my AudioStreamPlayer3D nodes need to be children of the viewport as
well? I was actually looking through the engine source, but it wasn't
immediately obvious to me that the listener needed to be a camera child,
so I'm not sure how to tell if AudioStreamPlayer3D nodes need to be
children of the same viewport.

Thanks again.

OK, hitting another odd issue I need help with. If I add a Raycast2D
node, then try to set its collision mask property via the editor, the
menu of layers pops up when I press Enter on the button. But pressing
Enter on a layer doesn't seem to close the menu and select a layer.
Enter works fine on other PopupMenus, which is what this seems to be.

Any ideas why this may be happening? And if not, could someone please
point me to where this particular menu is implemented in the engine so I
can investigate? Poked around in editor/ but it isn't immediately
obvious, and I'm also uncertain what specific criteria would prevent
some PopupMenu nodes from responding to Enter. Maybe something is
working on _gui_input and blocking it?

Thanks.

OK, hitting another odd issue I need help with. If I add a Raycast2D node, then try to set its collision mask property via the editor, the menu of layers pops up when I press Enter on the button. But pressing Enter on a layer doesn't seem to close the menu and select a layer.

Sorry about the late reply.
Yes, the PopupMenu is not closed automatically in that case.
You need to press Esc to close it.
The idea is that when you set the collision layer (which is a bitmask field), you might want to set more than one bit at once, so the popup stays open.
The relevant code is setting PopupMenu.hide_on_checkable_item_selection to false.
See:
https://docs.godotengine.org/en/3.1/classes/class_popupmenu.html#class-popupmenu-property-hide-on-checkable-item-selection

https://github.com/godotengine/godot/blob/master/editor/editor_properties.cpp#L799

https://github.com/godotengine/godot/blob/master/scene/gui/popup_menu.cpp#L1145
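In GDScript terms, the behavior boils down to something like this (illustrative; popup_menu stands for the layer-selection menu):

popup_menu.hide_on_checkable_item_selection = false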

Sorry, should have mentioned that I'd cracked this one already. I'm
trying not to spam this issue overly much if I can avoid it. :)

Making some good progress, though most of it is on my game since the
accessibility layer is far enough along. There's still plenty more to
do, though.

Next question, is there some way to intercept and filter touchscreen
interaction before it reaches Controls or other nodes? I'd like to
start working on some sort of explore-by-touch support as found in
VoiceOver for iOS or TalkBack for Android, and also as implemented in
Unity's accessibility plugin. Essentially, I'd like to intercept touches
so that double-tapping a control is needed to trigger it, and regular
screen touches only serve to reveal the interface. Quick swipes in
certain directions also work as Tab/Shift-tab. I don't know if I need
viewport overrides, some sort of custom layer that I can use as a
filter, etc. I'd rather not change how every single control behaves,
instead just filtering what interactions reach them in a way that can be
toggled on and off while a game is running.

Thanks.

Next question, is there some way to intercept and filter touchscreen interaction before it reaches Controls or other nodes?

You can use the Control._gui_input(event) in the root GUI element or even Node._input(event) (possibly even in the root viewport).
You can then use SceneTree.set_input_as_handled() or Viewport.set_input_as_handled() to block them after checking the input type for e.g. event is InputEventScreenTouch.

Check out:
Input event flow:
https://docs.godotengine.org/en/3.1/tutorials/inputs/inputevent.html

_input method in Node.
https://docs.godotengine.org/en/3.1/classes/class_node.html#class-node-method-input

set_input_as_handled() method in SceneTree.
https://docs.godotengine.org/en/3.1/classes/class_scenetree.html#class-scenetree-method-set-input-as-handled

_gui_input in Control
https://docs.godotengine.org/en/3.1/classes/class_control.html#class-control-method-gui-input
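A minimal sketch of that interception approach, assuming a ScreenReader-style node sitting near the root of the tree (names are illustrative):

extends Node

var explore_by_touch = true

func _input(event):
    if not explore_by_touch:
        return
    if event is InputEventScreenTouch or event is InputEventScreenDrag:
        # Speak the control under the touch, detect swipes, etc., then stop
        # the raw event from reaching Controls directly.
        get_tree().set_input_as_handled()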

Thanks, lots to look over here. Here's another:

https://docs.godotengine.org/en/3.1/classes/class_itemlist.html#class-itemlist-method-select

"Note: This method does not trigger the item selection signal." Why is
that? I'm having to override some keyboard-handling on widgets because,
for instance, pressing down-arrow on the lowest item of a tree/list will
skitter focus onto a neighboring widget instead of simply not advancing
the selection. Same with ItemList. I'm also having to implement my own
selection logic. One odd issue I'm hitting is in the file selector,
where selecting an item in the list isn't updating the filename in the
text field. I end up having to select the directory that the file is in,
then type the filename into this field.

Given that I'm having to implement my own selection focus logic, I'm
wondering if the fact that select doesn't fire the signal might be
what's causing this? And that begs the question, why doesn't select
fire the signal indicating that an item was selected?

Thanks.

OK, I'm to the point where I'm exporting a game that uses the
accessibility plugin for its UI. This is posing a new challenge which I
don't know how to address.

I have code that attempts to guess a label for some fields. Part of this
algorithm involves traversing up through a node's parents, finding any
EditorProperty instances, and returning their labels if any. This
works if I run my game in the editor, or via a binary that has the
editor built in. It fails with an exported binary, because
EditorProperty isn't defined in binaries without the editor. Things
I've tried:

if node.get_class() == "EditorProperty" doesn't work because I need
the is check, not just class equality, and I'd rather not check
whether the class name equals any descendant of that class's name.

if Engine.is_editor_hint() doesn't work because the failing code still
runs in the exported binary.

Can I retrieve a class by its name into a variable, then perform logic
based on whether the variable is null or not? I need to run an is
check or its equivalent. Failing that, can I move this check into a
separate context that isn't run in exported binaries, or evaluates to
null? I still want this code to run generally, but would be happy to
move the editor-specific checks out of the code path for exports.

For reference, here's the code I'm trying to make work:

func guess_label():
    var tokens = PoolStringArray([])
    var to_check = node
    while to_check:
        if Engine.is_editor_hint():
            print(to_check)
            if to_check is EditorProperty and to_check.label:
                tokens.append(to_check.label)
            if (to_check is EditorProperty or to_check.get_class() == "EditorInspectorCategory") and to_check.get_tooltip_text():
                tokens.append(to_check.get_tooltip_text())
            var label = tokens.join(": ")
            if label:
                return label
        to_check = to_check.get_parent()
Thanks.

@ndarilek,

Would substituting an is_class check for the get_class check solve this problem? The docs for that method are here.
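A sketch of the suggested substitution in the loop above; is_class() compares by class-name string, so it should work even in exported builds where EditorProperty isn't compiled in:

if to_check.is_class("EditorProperty") and to_check.label:
    tokens.append(to_check.label)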

Yup, worked nicely. Thanks! The alternative I was considering would have
been much messier.

Glad to hear it. I went back and read some of these posts and it's not
entirely clear what the outstanding issues are and what's been resolved,
but you did mention there is some difficulty with figuring out what's being
rendered. I need this plugin for a project I'm working on so I'm open to
helping out wherever you need a sighted developer for testing or resolving
weird quirks. Feel free to contact me with what you need done or populate
the issue tracker on your gitlab repos with bugs and feature requests you
need help with and I'll tackle whatever I think I can manage. My email is
ellen.h.[email protected]


We may be able to close this one soon, OTOH I've had lots of questions answered here that IRC/the forums haven't helped with. That's true again. :) Hopefully I've kept the S/N ratio high, but when I need help with something, it's generally above and beyond what I can find in a tutorial or book, and there are plenty of skilled folks watching this issue and helping me build this support out.

Did lots of work on this over the holidays. My godot-tts plugin now supports Android and HTML5, and I've exported accessible games to both platforms. I also did some initial work on touchscreen accessibility. On my Linux desktop, UIs can now be explored accessibly on my $70 HDMI touchscreen. Simple touches speak the UI control being touched, a double-tap anywhere on-screen activates the last focused control, and a quick swipe right/left acts like Tab/Shift-tab and moves focus between elements.

Unfortunately, swiping doesn't work at all on Android, and I'd appreciate help figuring out why. More specifically, the swipes themselves are detected just fine. They then inject ui_focus_next or ui_focus_prev actions using code like this:

func press_and_release(action):
    var event = InputEventAction.new()
    event.action = action
    event.pressed = true
    get_tree().input_event(event)
    event.pressed = false
    get_tree().input_event(event)

func swipe_right():
    press_and_release("ui_focus_next")

func swipe_left():
    press_and_release("ui_focus_prev")

And those actions don't trigger on Android, even though the swipe_right/swipe_left functions run just fine. Any idea why that might be?

I plugged a keyboard into my phone, and Tab at least triggers ui_focus_next. So the action works in terms of being recognized, but as generated via the above code it doesn't seem to trigger. I also tried Input.action_press and Input.action_release, but that made things stop working even on the desktop. So clearly my code does something Input.action_press doesn't, but it doesn't do enough to make Android happy. I tried to find where the events were converted to actions, but it isn't clear to me whether or not these paths are platform-specific. Clearly they have to be since Linux/X11 and Android diverge.

Running out of ideas here and am open to suggestions. Thanks for the help so far.

Superseded by https://github.com/godotengine/godot-proposals/issues/983.

Note that this might be worth splitting in a separate proposal again. We're closing old proposals on this repository to encourage migrating them to the new godot-proposals tracker where we want all proposals to reside and be discussed.
