Add "Only in edit controls" mode for typing echo #17505
base: master
Conversation
Great work on this, thank you.
I think this needs a changelog entry as well.
See test results for failed build of commit 8a5bed9868
See test results for failed build of commit 259e6d5452
Very nice addition @cary-rowen! Thanks. I have some questions:
@cary-rowen , thanks for this.
I think that you may add a helper function in globalCommands, similar to the toggleBooleanValue
function, for config flags, and use it for toggling typed characters and words.
For reference, see toggleBooleanValue in globalCommands.py, introduced in PR #16994, which handles the boolean to integer conversion.
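The suggested helper might look something like this minimal sketch. All names and the config layout here are hypothetical illustrations, not NVDA's actual API; the idea is a multi-valued analogue of toggleBooleanValue:

```python
from enum import IntEnum


# Hypothetical tri-state for illustration; NVDA's real enum may differ.
class TypingEcho(IntEnum):
    OFF = 0
    ALWAYS = 1
    EDIT_CONTROLS = 2


def cycleConfigFlag(conf: dict, section: str, key: str, enumType) -> IntEnum:
    """Advance a config flag to the next member of enumType, wrapping around.

    A sketch of the kind of helper suggested above, analogous to
    toggleBooleanValue but for multi-valued flags; not NVDA's real signature.
    """
    members = list(enumType)
    current = enumType(conf[section][key])
    nextValue = members[(members.index(current) + 1) % len(members)]
    conf[section][key] = int(nextValue)  # config stores the raw integer
    return nextValue


conf = {"keyboard": {"speakTypedCharacters": int(TypingEcho.OFF)}}
mode = cycleConfigFlag(conf, "keyboard", "speakTypedCharacters", TypingEcho)
print(mode.name)  # cycles OFF -> ALWAYS on the first call
```

A command script for typed characters and one for typed words could then both delegate to this helper with their respective config keys.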
Hi @CyrilleB79
Could you please confirm whether these behaviors are what you're asking about, and what the expected behavior should be in these cases? Many thanks.
Regarding default settings, I'd also love to hear suggestions from NV Access; I'd personally like to see the new options provided as defaults, for the same reasons as @CyrilleB79. @nvdaes
@cary-rowen IMO, the expected behaviour with the new option in the ssh password case and in read-only edit fields such as the synthesizer field is that NVDA should not speak anything, since nothing is actually visible on the screen.
Thanks! Should I really believe that from now on we can get rid of "t. This PC 10 of 18", or "space. Playback stopped" and "space. Playback resumed", without needing to install the add-on made by @cary-rowen? That add-on was the best, because I always liked and needed to keep typed characters and words on, but I hated how they were announced everywhere. I used to think: the option itself is called speak "typed" characters and words, so why were we hearing it everywhere? We are not typing anything; we are using our keyboard to use and explore our computer! From now on, at least we don't have to use an add-on to solve this really annoying thing. And really, thank you @cary-rowen for solving this problem for us until now. Before your add-on's release on the store, I was using an old and less buggy version of an unknown add-on called Typing Settings to solve this for myself. Great feature!
@cary-rowen speaking typed characters outside of edit fields does not make sense anyway, because they are not typed, but only pressed. For that we have the input help feature already if people need to learn the keyboard layout outside of edit fields. |
@Adriani90 hi. No, I don't agree with removing that, in my opinion. Input help is a thing, but this is still needed for some people who need to press some keys and have announced what they pressed and what came up, or what happened as a result; whereas a newbie would need to turn on input help, press the few keys they need, make sure they pressed or are pressing the correct key, and then turn input help off again. Also, for the group of users with the old habit, it is better not to remove it but to rename it instead, for example from "on" to "everywhere", so the options would be as follows: off, in edit boxes only, everywhere.
@CyrilleB79
I agree that there shouldn't be any feedback in a read-only edit box after enabling the new options introduced by this PR. This currently behaves as expected. cc @seanbudd
I agree with @amirmahdifard. I am totally not in favor of removing the original mode.
Thanks @SaschaCowley and @CyrilleB79 for the review. As @CyrilleB79 commented:
I'm not sure about this.
Please also update the copyright headers of any files you have touched to reflect that they have been modified in 2025.
This is looking good, just a few small things to go
Thanks @SaschaCowley
See test results for failed build of commit e669f7f4cf |
user_docs/en/changes.md
@@ -51,6 +51,8 @@ To use this feature, "allow NVDA to control the volume of other applications" mu
* Short versions of the most commonly used command line options have been added: `-d` for `--disable-addons` and `-n` for `--lang`.
Prefix matching on command line flags, e.g. using `--di` for `--disable-addons` is no longer supported. (#11644, @CyrilleB79)
* Microsoft Speech API version 5 and Microsoft Speech Platform voices now use WASAPI for audio output, which may improve the responsiveness of those voices. (#13284, @gexgd0419)
* The keyboard settings for "Speak typed characters" and "Speak typed words" now have three options: Off, Always, and Only in edit controls. (#17505, @Cary-rowen)
Does the order here need to be adjusted?
* The keyboard settings for "Speak typed characters" and "Speak typed words" now have three options: Off, Always, and Only in edit controls. (#17505, @Cary-rowen)
* The keyboard settings for "Speak typed characters" and "Speak typed words" now have three options: Off, Only in edit controls, and Always. (#17505, @Cary-rowen)
See test results for failed build of commit cd4e125c45 |
See test results for failed build of commit 16ca2516ea |
Closes nvaccess#13284

Summary of the issue:
Currently, SAPI5 and MSSP voices use their own audio output mechanisms instead of the WavePlayer (WASAPI) inside NVDA. According to my test results, this may make them less responsive compared to eSpeak and OneCore voices, which use the WavePlayer, or compared to other screen readers using SAPI5 voices. This also gives NVDA less control of audio output; for example, the audio ducking logic inside WavePlayer cannot be applied to SAPI5 voices, so additional code is required to compensate for this.

Description of user facing changes:
SAPI5 and MSSP voices will be changed to use the WavePlayer, which may make them more responsive (have less delay). According to my test results, this can reduce the delay by at least 50 ms. This doesn't trim the leading silence yet; if we do that as well, we can expect the delay to be even lower.

Description of development approach:
Instead of setting self.tts.audioOutput to a real output device, do the following:
* Create an implementation class SynthDriverAudioStream that implements the COM interface IStream, which can be used to stream in audio data from the voices.
* Use an SpCustomStream object to wrap SynthDriverAudioStream and provide the wave format.
* Assign the SpCustomStream object to self.tts.AudioOutputStream, so SAPI will output audio to this stream instead.

Each time an audio chunk needs to be streamed in, ISequentialStream_RemoteWrite is called, and we just feed the audio to the player. IStream_RemoteSeek can also be called when SAPI wants to know the current byte position of the stream (dlibMove should be zero and dwOrigin should be STREAM_SEEK_CUR in this case), but it is not used to actually "seek" to a new position. IStream_Commit can be called by MSSP voices to "flush" the audio data, where we do nothing. Other methods are left unimplemented, as they are not used when acting as an audio output stream.

Previously, comtypes.client.GetEvents was used to get event notifications, but those notifications are routed to the main thread via the main message loop. According to the documentation of ISpNotifySource: "Note that both variations of callbacks as well as the window message notification require a window message pump to run on the thread that initialized the notification source. Callback will only be called as the result of window message processing, and will always be called on the same thread that initialized the notify source. However, using Win32 events for SAPI event notification does not require a window message pump."

Because the audio data is generated and sent via IStream on a dedicated thread, receiving events on the main thread can make synchronizing events and audio difficult. So here SapiSink is changed to become an implementation of ISpNotifySink. Notifications received via ISpNotifySink are "free-threaded": they are sent on the original thread instead of being routed to the main thread. To connect the sink, use ISpNotifySource::SetNotifySink. To get the actual event that triggered the notification, use ISpEventSource::GetEvents. Events can contain pointers to objects or memory, so they need to be freed manually.

Finally, all audio ducking related code is removed. Now WavePlayer should be able to handle audio ducking when using SAPI5 and MSSP voices.
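The IStream data path described above can be sketched as follows. This is a plain-Python illustration of the data flow only: the real implementation exposes these as COM methods via comtypes and wraps the object in an SpCustomStream, and the player class here is a stand-in for NVDA's WavePlayer:

```python
class FakeWavePlayer:
    """Stand-in for NVDA's WavePlayer; just records the chunks it is fed."""

    def __init__(self):
        self.chunks = []

    def feed(self, data: bytes) -> None:
        self.chunks.append(data)


class SynthDriverAudioStreamSketch:
    """Illustrative sketch of the IStream-based approach described above.

    COM plumbing is replaced by plain Python methods so the data flow
    is visible; method names mirror the IStream operations discussed.
    """

    STREAM_SEEK_CUR = 1  # SAPI's seek origin for "current position"

    def __init__(self, player):
        self._player = player
        self._bytesWritten = 0

    def RemoteWrite(self, data: bytes) -> int:
        # SAPI streams each audio chunk in; feed it straight to the player.
        self._player.feed(data)
        self._bytesWritten += len(data)
        return len(data)

    def RemoteSeek(self, dlibMove: int, dwOrigin: int) -> int:
        # SAPI only queries the current byte position here:
        # dlibMove == 0 and dwOrigin == STREAM_SEEK_CUR; no real seeking.
        assert dlibMove == 0 and dwOrigin == self.STREAM_SEEK_CUR
        return self._bytesWritten

    def Commit(self, flags: int) -> None:
        # MSSP voices "flush" via Commit; nothing to do when feeding a player.
        pass


player = FakeWavePlayer()
stream = SynthDriverAudioStreamSketch(player)
stream.RemoteWrite(b"\x00" * 32)
stream.RemoteWrite(b"\x00" * 16)
print(stream.RemoteSeek(0, SynthDriverAudioStreamSketch.STREAM_SEEK_CUR))  # 48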
@cary-rowen I've reviewed changes in globalCommands.py.
Looks good to me, except for minor inconsistencies in comments for translators.
@@ -536,38 +563,36 @@ def script_previousSynthSetting(self, gesture):
ui.message("%s %s" % (previousSettingName, previousSettingValue))

@script(
# Translators: Input help mode message for toggle speaked typed characters command.
description=_("Toggles on and off the speaking of typed characters"),
# Translators: Input help mode message for cycling the reporting of typed words.
# Translators: Input help mode message for cycling the reporting of typed words.
# Translators: Input help mode message for cycling the reporting of typed characters.
# Translators: Reported when the user cycles through speak typed characters modes.
# {mode} will be replaced with the mode; e.g. Off, On, Only in edit controls.
messageTemplate=_("Speak typed characters: {mode}"),
)

@script(
# Translators: Input help mode message for toggle speak typed words command.
# Translators: Input help mode message for toggle speak typed words command.
# Translators: Input help mode message for cycling the reporting of typed words.
Link to issue number:
Fixes #16848, related #10331, #3027
Summary of the issue:
Currently NVDA can only toggle typing echo (characters and words) on or off globally. Users want more granular control, so that typing feedback occurs only in edit controls while staying off in other contexts such as lists or non-edit areas.
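The gating logic implied by the new option can be sketched roughly as follows. The names here are hypothetical illustrations; the actual check lives in NVDA's typed-character handling:

```python
from enum import IntEnum


# Hypothetical names for illustration; NVDA's actual identifiers may differ.
class TypingEcho(IntEnum):
    OFF = 0
    ALWAYS = 1
    EDIT_CONTROLS = 2


def shouldEchoTyping(mode: TypingEcho, focusIsEditable: bool) -> bool:
    """Decide whether a typed character or word should be spoken."""
    if mode == TypingEcho.ALWAYS:
        return True
    if mode == TypingEcho.EDIT_CONTROLS:
        return focusIsEditable  # speak only when the focus is an edit control
    return False  # TypingEcho.OFF


# e.g. with the new mode, typing in a list produces no echo:
print(shouldEchoTyping(TypingEcho.EDIT_CONTROLS, focusIsEditable=False))  # False
```

With this shape, the existing on/off behaviour maps onto ALWAYS/OFF, and the new mode adds the editable-focus condition without touching the other two paths.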
Description of user facing changes
Description of development approach
The implementation:
Testing strategy:
Tested the following scenarios:
Known issues with pull request:
None identified.
Code Review Checklist: