Keywords: CoreSceneUnderstanding, VoiceOver, accessibility, alt text
Description
In a few areas throughout the system, rich captions/descriptions are generated for images:

- When texts with images are announced through AirPods
- In the Magnifier app in Detect mode
- In VoiceOver with “Image Descriptions” enabled
The attached image results in a description like “A white chair next to a wooden table on a white rug,” while the current Vision framework might only be able to identify individual objects such as “chair” and “table.” An API for these richer descriptions would let app developers build more accessible apps.
As a real example, I work on the Mastodon social app. It would be a huge benefit to be able to pre-populate placeholder alt text for images when composing a new post.
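For contrast, here is a rough sketch of what is achievable with the public Vision API today: per-object classification labels rather than a full-sentence caption. The function name, confidence threshold, and label count are my own choices for illustration, not an Apple API.

```swift
import Vision
import UIKit

/// Sketch: approximate "alt text" from today's Vision framework.
/// VNClassifyImageRequest returns flat classification labels
/// (e.g. "chair", "table"), not a rich caption like the one
/// the system features above produce.
func suggestAltText(for image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(nil)
        return
    }
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage)
    do {
        try handler.perform([request])
        // Keep only high-confidence labels; threshold is arbitrary.
        let labels = (request.results ?? [])
            .filter { $0.confidence > 0.7 }
            .prefix(3)
            .map(\.identifier)
        completion(labels.isEmpty ? nil : labels.joined(separator: ", "))
    } catch {
        completion(nil)
    }
}
```

The best this can produce is something like “chair, table” — a comma-separated label list a compose screen could offer as a starting point, which is far less useful as alt text than the sentence-level descriptions VoiceOver and Magnifier already generate internally.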
Files