@simon @Iconfactory Happy trails. If you do give it a go and have a chance, do report back on how the accessibility looks. Very interested in this, and I'll jump at it once I'm not budgeting my brains out.
@NoahCarver @Iconfactory Well, that's unfortunate. The values seem to be unreadable with VoiceOver. I think they're rendering them in some sort of video display rather than using actual controls. I'm able to turn on screen recognition and see them that way, but they don't read particularly well.
@simon A picture-in-picture video is quite an ingenious approach to presenting live, running data even when the app is backgrounded. Just... not an accessible one. It does support audio alerts; not a replacement for VO access but could be useful maybe? @NoahCarver @Iconfactory
@jscholes @NoahCarver @Iconfactory Agreed. And unfortunately, one instance where it would be necessary to redesign the interface to make it accessible. It's not what I'd call a cheap app; but it is a very useful-sounding one. I hope this is something they're considering.
@simon @jscholes @NoahCarver @Iconfactory I haven't tried it personally either, but I was going to email them with some potential ideas for how this could work.
One is obviously using audio, much like the Charm utility that @talon made a while ago, which used a continuous sound to indicate things like CPU and memory state.
At that point, maybe the usage stats could sit in the Dynamic Island, and I think that's actually a place where you could have a VoiceOver label that would read all the information out.
Another way I thought of is a shortcut you could assign to a VoiceOver gesture, which, as long as the app was running, would also speak this information aloud.
I think that would probably work to make information like this VoiceOver accessible.
The audio alerts that are in the app could also potentially be combined with spoken alerts using text-to-speech. Sorry for the wall of text; I was dictating.
@pitermach @simon @jscholes @NoahCarver @Iconfactory @talon We’ve thought about the accessibility issues here and it’s a hard problem.
The main issue is that we’re producing more information than can be read aloud in the one second between updates. There would be a string of information like this:
“12% performance, 34% efficiency, 56% graphics, 100 Mbps download, 200 Kbps upload, 2.2 GB memory”
That can’t be read in a single second, and even if it could, it would be overwhelming.
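A rough back-of-the-envelope check of that claim (my sketch, not anything from the app): expand the sample readout into spoken words and divide by a speaking rate. The ~180 words-per-minute figure is an assumed typical screen-reader default, not a measured VoiceOver value.

```python
# Rough estimate of how long a screen reader needs to speak the sample
# readout. The 180 wpm rate is an assumption, not a measured value.

READOUT = ("twelve percent performance, thirty four percent efficiency, "
           "fifty six percent graphics, one hundred megabits per second download, "
           "two hundred kilobits per second upload, two point two gigabytes memory")

def speech_seconds(text: str, words_per_minute: float = 180.0) -> float:
    """Approximate speaking time from word count alone."""
    return len(text.split()) / words_per_minute * 60.0

seconds = speech_seconds(READOUT)
print(f"~{seconds:.1f} s to speak, vs. a 1 s update interval")
```

Even this crude estimate lands several times over the one-second budget, before accounting for pauses between items.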
@chockenberry @pitermach @simon @jscholes @NoahCarver @Iconfactory That's why I've decided to use sound in my resource monitor. For percentages, scaling pitch or volume works easily enough. For values where the upper limit is variable or even unknown, I took some liberty defining values that I thought sounded good and expected to work for most things. It can use pitch, volume, or a crossfade between different sounds.

This is for PC, so I was able to add a push-to-play feature, which I suppose won't work well on touchscreens. But I believe sound is an excellent way to present this information very quickly without relying on speech; speech could be triggered manually. It can also pan the sound of each individual core from left to right, so you can immediately tell which core is active and how much, for example. So I would say speech isn't your only option :)
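A minimal sketch of the mapping described here: percentage maps to pitch, core index maps to stereo pan. The two-octave 220–880 Hz range and the exponential scaling are my assumptions about what would "sound good", not values from talon's actual tool.

```python
# Sketch of pitch/pan sonification: a percentage maps to a tone frequency,
# and each CPU core gets its own left-to-right stereo position.
# The 220-880 Hz range (two octaves) is an assumed choice.

LOW_HZ, HIGH_HZ = 220.0, 880.0

def percent_to_hz(percent: float) -> float:
    """Map 0-100% onto two octaves, exponentially, so equal steps
    sound like equal musical intervals."""
    p = max(0.0, min(100.0, percent)) / 100.0
    return LOW_HZ * (HIGH_HZ / LOW_HZ) ** p

def core_pan(core_index: int, core_count: int) -> float:
    """Spread cores across the stereo field: -1.0 (left) to +1.0 (right)."""
    if core_count == 1:
        return 0.0
    return -1.0 + 2.0 * core_index / (core_count - 1)

for i, load in enumerate([10, 20, 90, 55]):
    print(f"core {i}: {load}% -> {percent_to_hz(load):.0f} Hz, pan {core_pan(i, 4):+.2f}")
```

With this mapping, 10% and 20% come out a bit over two semitones apart, which is a clearly audible interval for most listeners.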
@pitermach @simon @jscholes @NoahCarver @Iconfactory @talon
It’s like using the screen reader on every frame of a movie. It can’t keep up.
Sounds for these metrics are an interesting idea, but again, there would be a lot of them, and they'd probably be hard to discern. The ones we're doing now fire only when you cross a threshold (like 90%). How do you tell 10% from 20% with audio?
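For reference, the threshold-style alert described here can be sketched like this (my guess at the mechanics, not the app's actual implementation). The key detail is firing only at the moment of crossing, not on every sample above the line:

```python
# Sketch of a threshold-crossing alert: play a sound only when a metric
# first crosses the line (e.g. 90%), not on every sample above it.
# Not the app's real code, just the idea.

class ThresholdAlert:
    def __init__(self, threshold: float = 90.0):
        self.threshold = threshold
        self.above = False

    def update(self, percent: float) -> bool:
        """Return True exactly when the value first crosses the threshold."""
        was_above = self.above
        self.above = percent >= self.threshold
        return self.above and not was_above

alert = ThresholdAlert(90.0)
samples = [85, 88, 92, 95, 89, 93]
fired = [s for s in samples if alert.update(s)]
print(fired)  # [92, 93] -- fires at 92, then again at 93 after dipping below
```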
I love supporting this community, but in this case I just don't have any good ideas on how to do it.
@chockenberry Personally, I'd settle for being able to read the values at all, even if I have to swipe between them. And maybe if the VoiceOver cursor is on a particular field and it updates by more than 5-10%, it should auto-speak the new value. But it is necessarily going to be harder to absorb as much information by speech, and that's just the way it is.

If you wanted to add audio alerts in a future update, that would be seriously amazing, and I would definitely use that. But the ability to read each specific field and monitor a particular one seems like it would be a good starting point. I don't know if that sounds like something you could/would do, and other opinions are welcome from this thread, but I know I'd personally use it that way.
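The auto-speak suggestion, sketched: while a field has VoiceOver focus, re-announce only when the value has moved by more than a set delta since the last announcement. The 5-point delta below is taken from the suggested 5-10% range; everything else (class name, message format) is assumed for illustration.

```python
# Sketch of delta-gated announcements for a focused field: speak the new
# value only when it has moved by more than `delta` since the last
# announcement. Names and message format are hypothetical.

from typing import Optional

class FocusedFieldAnnouncer:
    def __init__(self, delta: float = 5.0):
        self.delta = delta
        self.last_spoken: Optional[float] = None

    def update(self, value: float) -> Optional[str]:
        """Return a string to announce, or None to stay quiet."""
        if self.last_spoken is None or abs(value - self.last_spoken) > self.delta:
            self.last_spoken = value
            return f"{value:.0f} percent"
        return None

ann = FocusedFieldAnnouncer(delta=5.0)
for v in [34, 36, 41, 48, 49]:
    spoken = ann.update(v)
    if spoken:
        print(spoken)  # announces 34, 41, and 48; stays quiet for 36 and 49
```

Comparing against the last *spoken* value, rather than the last sample, keeps a slow drift from being silently lost across many small updates.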
@chockenberry @pitermach @simon @jscholes @NoahCarver @Iconfactory If you have a windows PC handy, I have an implementation of this idea here: https://www.iamtalon.me/charm
The idea is that a broad overview can be good enough, especially once you get used to the sounds and their pitches. If you need a specific value, you could tap the control and have VoiceOver speak the exact numbers.

With some careful consideration of what sounds you use, audio clutter doesn't seem to be a big concern, especially if you can toggle on the sounds you want to know about and disable the ones you don't. At least I haven't gotten any feedback about sounds being overwhelming, though that might be influenced a little by the fact that you can change the sounds if you don't like them, or even how they work. I think of it a bit like keeping the info in your peripheral vision while still being able to focus on it when you need to know exactly what's up, by pressing a key. If you hear that something sounds off, you can focus on it and figure out what it is, or if you know the sounds well, you might be able to tell immediately just by listening.