How do assistive technologies present a web application to make it accessible for their users? Where do they get the information they need? One of the keys is a technology known as the accessibility API (or accessibility application programming interface, to use its full formal title). A firm grasp of the technology is paramount to making informed decisions about accessible design. To understand the role of an accessibility API in making web applications accessible, it helps to know a bit about how assistive technologies provide access to applications and how that has evolved over time.

With the text-based DOS operating system, the characters on the screen and the cursor position were held in a screen buffer in the computer’s memory. Assistive technologies could obtain this information by reading directly from the screen buffer or by intercepting signals being sent to a monitor. The information could then be manipulated - for example, magnified or converted into an alternative format such as synthetic speech.

The arrival of graphical interfaces such as OS/2, Mac OS and Windows meant that key information about what was on the screen could no longer simply be read from a buffer. Everything was now drawn on screen as a picture, including pictures of text. So, assistive technologies on those platforms had to find a new way to obtain information from the interface. They dealt with this by intercepting the drawing calls sent to the graphics engine and using that information to create an alternate, off-screen version of the interface. As applications made drawing calls through the graphics engine to draw text, carets, text highlights, drop-down windows and so on, information about the appearance of objects on the screen could be captured and stored in a database called an off-screen model. That model could be read by screen readers or used by screen magnifiers to zoom in on the user’s current point of focus within the interface. Rich Schwerdtfeger’s seminal 1991 article in Byte, “Making the GUI Talk,” describes the then-emerging paradigm in detail.

Recognizing the objects in this off-screen model was done through heuristic analysis. For example, the operating system might issue instructions to draw a rectangle on screen, with a border and some shapes inside it that represent text. A human might look at that object (in the context of other information on the screen) and correctly deduce that it is a button. The heuristics required for an assistive technology to make the same deduction are very complex, which causes some problems. To inform a user about an object, an assistive technology would try to determine what the object is by looking for identifying information - in a Windows application, for example, the screen reader might present the Window Class name of an object. It would also try to infer the state of an object from the way it is drawn - tracking highlighting, for example, might help deduce when an object has been selected. This works when an object’s role or state can easily be determined, but in many cases the relevant information is unclear, ambiguous or simply not available programmatically.

This reverse engineering of information is both fallible and restrictive. An assistive technology could implement support for a new feature only once it had been introduced into the operating system or an application: an object might not convey useful information, and in any case it took time to identify it, develop the heuristics needed to support it and then ship a new version of the screen reader. This created a delay between the introduction of new features and assistive technologies’ ability to support them. The off-screen model also needs to shadow the graphics engine, but the engines don’t make this easy.
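The fragility of this heuristic recognition is easy to illustrate. What follows is a minimal sketch, not any real screen reader’s code: a toy off-screen model that records intercepted drawing calls and guesses that any bordered rectangle containing text is a button. Every class and method name here is invented for the example.

```python
# A toy "off-screen model" in the spirit described above. Purely
# illustrative: real screen readers used far more elaborate
# (yet still fallible) heuristics.

class OffScreenModel:
    def __init__(self):
        self.objects = []  # database of captured drawing information

    def record_rect(self, x, y, w, h, bordered):
        # Intercepted drawing call: a rectangle was painted on screen.
        self.objects.append({"bounds": (x, y, w, h),
                             "bordered": bordered, "text": None})

    def record_text(self, x, y, text):
        # Intercepted drawing call: text was painted; attach it to the
        # most recently drawn rectangle that contains it.
        for obj in reversed(self.objects):
            ox, oy, ow, oh = obj["bounds"]
            if ox <= x <= ox + ow and oy <= y <= oy + oh:
                obj["text"] = text
                return

    def guess_role(self, obj):
        # The fragile heuristic: a bordered rectangle containing text
        # "looks like" a button - but so does a read-only status field.
        if obj["bordered"] and obj["text"] is not None:
            return "button"
        return "unknown"

model = OffScreenModel()
model.record_rect(10, 10, 80, 24, bordered=True)    # an actual button
model.record_text(20, 16, "OK")
model.record_rect(10, 50, 200, 24, bordered=True)   # really a status field
model.record_text(20, 56, "Ready")

roles = [(o["text"], model.guess_role(o)) for o in model.objects]
# Both objects are reported as buttons - the status field is
# misidentified, because nothing in the intercepted drawing calls
# reveals its true role.
```

The misclassification of the status field is exactly the kind of ambiguity described above: the drawing calls alone do not carry an object’s role, so a heuristic has nothing better to go on.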
Successful web accessibility is about anticipating the different needs of all sorts of people: understanding your fellow web users and the different ways they consume information, empathizing with them, and recognizing what is convenient for them and what frustratingly unnecessary barriers you could help them to avoid. Armed with this understanding, accessibility becomes a cold, hard technical challenge. Assistive technologies and accessibility APIs work together to extract accessibility information from a web interface and present it appropriately to the user. If appropriate content semantics are not available, assistive technologies fall back on old and unreliable techniques to make the interface usable. A big part of accessibility is, therefore, an easily met responsibility of web developers: providing appropriate semantics.