Advancements in the accessibility of Facebook

In December 2011, I wrote this overview of the accessibility of social network sites and apps, and I had to paint a rather sad picture of most of the accessibility experiences. As time went by, some things improved here and there, while others stalled.

One social network that caused some excitement in the community when it announced a dedicated accessibility team, however, was Facebook. Since then, the team has made some great leaps forward in overall accessibility, and it has also been listening to feedback from users on both the official Facebook accessibility page and Twitter.

I left Facebook in June 2012, but recently returned for various reasons, and I now think it’s time to review a few things that work much better these days, especially for screen reader users. I’ll be taking a look at both the desktop and mobile sites as well as the iOS and Android native clients.

Disclaimer: I’m neither employed nor paid by Facebook for this review. This is purely my personal opinion and an attempt to highlight the good that can be done when a company puts a dedicated accessibility team in place.

Desktop site

What most people will probably be using first is the standard desktop site of Facebook. I used both NVDA and Firefox on Windows, and VoiceOver and Safari on OS X, to do my testing.

The sign-up process still requires a CAPTCHA to be solved. Since the audio CAPTCHAs have become unintelligible overall, trying to solve the visual CAPTCHA in Firefox with the WebVisum extension, or getting sighted help, are the only viable options to get signed up.

Once the sign-up process has been completed, filling out one’s profile information works much better overall than it used to. Auto-complete suggestions, keyboard focus handling, and the consistency when showing or hiding sections of the profile editing process provide a smooth experience. There are some quirks when filling out employment entries, because Facebook has a tendency to suggest employers from friends one might have added already. While this might be a good idea in general, pre-filling the text field the moment it gains focus, without waiting for the user to type, is not the best usability idea in my opinion. But this happens to people without assistive technologies, too, so it is not an accessibility-specific issue per se; it just causes a bit of confusion.

Facebook’s keyboard navigation and focus handling have improved in many areas. Posting a status update, sharing a link, adding a friend and adjusting friend settings, chatting, and dealing with notifications have all improved significantly. Dialogs now behave modally, and the Tab key is trapped so one cannot navigate outside of the dialog’s controls.

The search at the top of the page is a delight to use: the auto-completion and navigation by arrow keys work very well with both screen reader/browser combinations. I did find that VoiceOver and Safari don’t read the checked/unchecked state of some menu items, for example when adjusting whether a friend is an acquaintance or a close friend. However, since NVDA and Firefox read these states just fine, this is a bug on the VoiceOver+Safari side, which I have notified Apple about through official channels.

There are some rare cases where dialogs don’t work as consistently as others, or get confused by unexpected keystrokes. However, as the monthly updates from the accessibility team indicate, this is an area that is constantly being improved, so these quirks should become less and less of a problem.

In summary, the difference compared to before the team started is like night and day! The experience has become much more user-friendly, and therefore more efficient. Low-vision users will also be delighted to hear that the FB Access team is constantly improving the high contrast experience as well.

Mobile site

Probably an even bigger difference is what has happened to the mobile version of the Facebook site over the last year. I used both Firefox for Android on a Nexus 4 running Jelly Bean 4.2.2, and VoiceOver with Mobile Safari on an iPhone 4S and an iPod Touch 5th generation running the latest iOS 6.1.3.

To recap: the last mobile experience I checked was a set of static pages that were loaded and unloaded constantly. It was very stripped down and hardly contained any semantic information other than links and inputs. No headings or other really useful stuff.

Well, that has changed drastically. The mobile site now looks very much like its native app counterparts, and it also uses a lot of good semantics, both HTML and WAI-ARIA, to communicate what is going on. As on the desktop, the buttons for friend requests, messages and other notifications are present, and their pop-overs can be displayed or hidden when tapped. On the iPhone, these were announced quite nicely, and for Firefox for Android, a bug was just fixed in the Firefox 24 Nightly builds so that TalkBack now also speaks the fact that these buttons have pop-overs.
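I don’t know Facebook’s exact markup, but the pattern that a screen reader announces as a button with a pop-over usually boils down to something like the following hypothetical sketch (element names and IDs are made up for illustration, not taken from Facebook’s code):

    <!-- Hypothetical sketch of a notifications toggle with a pop-over -->
    <button id="notif-toggle" aria-haspopup="true" aria-expanded="false">
      Notifications
    </button>
    <div id="notif-popover" hidden>…</div>
    <script>
      // Keep aria-expanded in sync when the pop-over is shown or hidden
      var toggle = document.getElementById("notif-toggle");
      var popover = document.getElementById("notif-popover");
      toggle.addEventListener("click", function () {
        var open = toggle.getAttribute("aria-expanded") === "true";
        toggle.setAttribute("aria-expanded", open ? "false" : "true");
        popover.hidden = open;
      });
    </script>

The aria-haspopup attribute is what lets TalkBack and VoiceOver announce that activating the button will reveal more content.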

When displaying the main menu on the left side of the mobile screen, information is nicely accessible. Status updates have headings, so it is easy with quick navigation gestures to jump through them to get an overview. The status update widget is also accessible, including the buttons above and below the actual text entry, and the rest of the page is nicely hidden from view so it doesn’t clutter up the screen reader view. Note that for Firefox for Android, you need a June 1 or later nightly build or the 24 release to take full advantage of this widget. Part of my using Facebook regularly was finding those bugs in our implementation and helping fix them.

I found that with Firefox for Android, I can navigate the mobile news feed just as efficiently as in the native Android app.

There still seems to be a bit of a problem on the Messages page. At least in Firefox for Android, I am unable to re-open the main menu. On the iPhone, it works, so it’s probably a glitch in our accessibility support which needs investigating.

Other than that, one can reach most common things. I can use the friend suggestions, events, notes, and other stuff just fine in both browsers.

A note about tablets: on both my iPad Mini and a Nexus 7, I got the desktop version of Facebook, maybe with some tweaks, but in general it looked a lot like it does in Safari on OS X or Firefox on the desktop. Again, the techniques used work just fine, and most, if not all, content is spoken by VoiceOver and TalkBack. So on a tablet, where more screen real estate is available, the experience is nicely scaled up to give the user the advantage of the bigger screen.

The native clients

Both iOS and Android offer native Facebook apps, and both are largely accessible nowadays. There is one problem with the iOS app: if you double-tap an entry that has a photo or link attached, that photo or link is opened straight away instead of the actual Facebook entry. This makes commenting on such entries difficult, and in the case of a link even impossible. The Android version does not suffer from this limitation, because all tappable items in the main timeline can be explored or swiped to. This means a few more swipes per entry, but the extra granularity also has its advantages. The Facebook app for iOS should definitely be changed to open the actual entry and let the user view the link or photo from that detailed entry view. The UIAccessibility protocol makes this possible.

I found some unlabeled buttons in both the iOS and Android apps, but as the May accessibility team update states, they’re constantly looking for these and fixing them.

On iOS, there is also the separately available Facebook Messenger app, which contains only the Messages part of Facebook. Its accessibility has become quite good over the last year. I’ve set it up so it can push new messages to me, whereas I did not allow the main Facebook app to do that. This way, the dedicated Messenger app works much like an SMS alternative such as WhatsApp.

Facebook Messenger for Firefox

At the end of last year, Mozilla introduced what we call the Social API, allowing social networks to offer parts of their services as always-present sidebars in the browser, without the need to keep a tab open with the whole site working in the background. The first to take advantage of it was Facebook, with the Messenger for Firefox add-on. Unlike other add-ons, you just activate it from this page. If you’re logged into Facebook already, everything is taken care of for you. A new sidebar, which you can switch to via the F6 key, presents you with the most recent updates, your list of online contacts, and some settings. If you navigate to one of those contacts and press Enter, you’ll be taken to an ordinary text area where you can type your message. Enter sends it, and if you press Escape in NVDA to switch back to browse mode, you can navigate upwards to follow along with what your contact is writing, then press e (NVDA’s quick navigation key for edit fields) to move back to the text area, press Enter for focus mode, and type away your reply.

The HTML Facebook renders for this add-on does not yet have all the accessibility features known from the built-in Chat on the Facebook site. I’ve notified the team of that, and since this is all coming from Facebook, as soon as they improve that, you should see an improvement right away without having to update anything on your end. You can close your Facebook tabs, navigate and browse around, and the side bar will keep you online and available to others for chatting.

In Summary

Accessibility has come quite a long way on Facebook over the past year! Yes, there are always areas where it can still improve, but the vast improvements in the mobile browsers and on the desktop site are especially worth highlighting! The native apps have also improved significantly over what was there a year ago.

The one remaining really big annoyance is the CAPTCHA, which bites users not only at sign-up, but can also hit if Facebook deems the IP address one is connecting from so unlikely that it can’t be you. This happened to me twice in my past Facebook life, when I was at my employer’s office or at the hotel we were staying in. Still burdening users with these unreadable and unintelligible CAPTCHAs in 2013, when there are many better methods of user verification that run on the server side, is not a good way to treat your honest users. I sincerely hope this will soon be a thing of the past!

I’d like to give the Facebook accessibility team a shout out for the work they’re doing! Keep it up, you’re definitely on the right path!

And to all users of Facebook, no matter which assistive technology you use: if you find problems with Facebook, let them know! There is a link on the Facebook Access page to an accessible feedback form that lets you send your problems and suggestions straight to the team. It is difficult for the team to try to catch everything by themselves, especially since everybody uses Facebook in a slightly different, and sometimes unexpected, fashion. If you speak up, you can be helped and the site improved to fit your use cases!

Happy facebooking!


Recap of Beyond Tellerrand 2013

On May 27 and 28, I attended the Beyond Tellerrand 2013 conference. Tellerrand is the German word for “edge of a plate”. The conference is targeted primarily at web developers and designers, but provides many tracks that look way beyond the edge of the plate of their daily work. It was my first time attending, and the third incarnation of this conference as a whole.

Monday kicked off with a keynote talk by Jeremy Keith. His talk revolved around the fact that everybody participating on the web is a publisher, a designer, and therefore a contributor to our social and cultural heritage and history records. And that this cultural heritage is endangered by the fact that the web is full of services that may disappear all of a sudden. A very prominent example is GeoCities. GeoCities was founded in the early days of the web in the 1990s, bought by Yahoo in 1999, and shut down in late 2009. At that shutdown, all seven million user pages were deleted from the web. Content that had been accumulating for 15 years was suddenly gone. Cat photos, poems, ramblings, everything that made up part of recent history.

This demonstrates a problem all of today’s startups impose on their potential users: they all want our data, but nobody tells us what happens to that data once a Google, Facebook, Twitter or Yahoo! buys them out and, usually, shuts them down afterwards. Nobody tells the average user to back up that data in case the service goes away. Jeremy’s message, paraphrased: if somebody tells you that “the internet never forgets”, tell them they’re talking bullshit and point them to Posterous, Gowalla or GeoCities.

Jeremy also showed that initiatives like archive.org attempt to mitigate that problem by archiving what they can get their hands on, to preserve this cultural good of human history. And his overall message was: we may not agree with the design choices somebody makes, but everything anybody publishes on the web is a good that is worth preserving, regardless of its looks.

Next, Aaron Gustafson gave a quite inspiring talk about how easy it is to fall back into one’s own usage patterns when designing the UI of anything, and how we should all aspire to put ourselves in our users’ shoes more and be empathetic to their needs and usage patterns, to give them a better user experience even if it does not exactly match our own. This, of course, also affects users in need of accessibility aids, and it again broadcasts the message that a good user experience is good for, and accessible to, everyone.

I unfortunately missed the next talk by Blaine Cook titled “Reinventing Online”, so I’ll just quote the conference program here:

The web has come a long way, and the new tools we have available to us are, frankly, incredible. The shift we’re facing is back to the web, in a post-apps world. Our users expect more today than they did five years ago (before the iPhone), and we expect more today. The beautiful thing is that there are many amazing opportunities for us to create rich web-native experiences that work across all the amazing platforms that have blossomed over the past few years. We’ll do better if we’re able to question our assumptions in order to design and build things that are truly user-centric.

At the same time, on a second stage, a series of slightly longer than lightning talks (about 20 minutes each) was kicked off by Eric Eggert, who gave a concise overview of a few simple techniques to make one’s sites more accessible. He concentrated on five things that were fortunately not your usual “use headings” and “use alt text” stuff. His message: HTML is accessible by default, don’t break it by reinventing the wheel.

After the lunch break, Kate Kiefer Lee from MailChimp shared a lot of insights about what it means to find your voice in the context of your organization or company. The most important message: always be empathetic to how you’ll make your users feel with what you say, and in which context. For example, a generally humorous or sarcastic tone on a daily-deals web site is not appropriate when it comes to the contact form on that site. A 404 page should acknowledge that something went wrong, but not blame the user, as in “You typed the URL wrong”. A newsletter unsubscribe confirmation may or may not insult the user depending on the tone it sets. Kate also picked up Jeremy’s theme from the first talk, where he talked about the gobbledygook wording of press releases when another Silicon Valley startup has been bought out. Her message, again paraphrased: all this “We’re excited to announce…”, “we’re thrilled to share…” etc. is total bullshit and self-indulgent, because you don’t give a damn about your users or their data and don’t acknowledge how you’ll make them feel by telling them you’ve been bought out and just got a lot of money in cash or stock. They couldn’t care less about that; what’s important to them is what happens to the content they trusted you with once your service has been shut down.

Next, Harry Roberts gave the first (and only) web developer/designer-centric tech talk of the day, about how to approach huge web projects from a CSS point of view. Breaking down the tasks, introducing good naming conventions, and centering on classes, not IDs, in the CSS part of a web project are all techniques to make the CSS scalable and therefore future-proof.

The last regular talk of the day was held by Mandy Brown. She gave a very interesting, insightful talk about how things changed from books hand-written on parchment to printed books and now to the dynamic web content we’re dealing with every day. Her message: don’t be afraid that things will never be finished, never set in stone; embrace it. Even books were never set in stone. The web is even less so, but this opens up a whole range of chances we should never be afraid to take.

After another break, the day was finished with a special talk by James Victore titled “Your Work Is A Gift”. In a very entertaining way, with a lot of great examples, James showed us how he made a paradigm shift from working for a boss, a company, a pay check, to working for the pure fun of it. Treating your work as a gift allows you to step outside your previous attitude of working primarily for money. Instead, embrace your work as something you love to do, something you want to share because you love it so much. I found this message very inspiring, and it resonated with me. Every time I come to a point where I bump into another totally inaccessible web site and ask myself “What am I doing this all for?”, I always come to the same conclusion: I’m doing this because I love doing it! Every line of this blog, every moment I work on helping to make the web, apps and other things more accessible, helps someone somewhere out there live a better life. And if that isn’t motivating, I don’t know what is!

The second day started with a talk by Chris Heilmann titled “Fixing the mobile web”. He went into the history of the promise, made at the first iPhone launch, that everything would be HTML, CSS and JavaScript, and that one would not need an SDK to build great things for the iPhone. Every iPhone user knows the reality that quickly replaced that promise. Especially stock browsers on older devices are what breaks the mobile web today. They’re old, they expose users to security vulnerabilities, and they hurt the mobile web at every turn. Firefox OS is here to try and fix that problem by providing a modern platform to build on with HTML5, JavaScript and CSS, on phones that are as cheap as feature phones, but offer far more than just playing Snake or sending and receiving SMS. He encouraged everyone to brush up on their HTML5, CSS and JavaScript and get modern apps, with access to hardware features and all, written for Firefox OS, and even Firefox for Android and other modern browsers on mobile devices, and not go the route of native apps, because that would limit the user base to a rich audience who can afford an iPhone or Android smartphone. Chris also emphasized that those users gaining access to the web through Firefox OS devices have never had a desktop PC before. The mobile device will be their first contact with the web. And that contact should make them feel good!

Next, Meagan Fisher described with great insight how she went from, as she put it herself, pixel fucking in Photoshop to designing with a content-first, responsive-always approach. She highlighted that content strategy comes first and foremost, and from there, design and implementation can follow. As a designer, it is important to be at the center of communication with the CEO, developers, and clients alike. Designing in the browser also makes it much easier to share work in progress with other departments and the client, giving them a chance to see the work that has already been done and to provide feedback in the process. Responsive design is key to reaching the broadest possible audience with the content, and fixed, pixel-exact photoshopping is no longer a way to accomplish that.

The third talk of the morning was probably the most geeky and topic-centered I’ve witnessed at a conference in a long time. Erik van Blokland showed us how to create responsive fonts. They are an important part of responsive design, but typography is also a highly theoretical topic involving a lot of knowledge about visual perspective and how the human eye perceives letters in general. I’m sorry to say that Erik lost me two minutes into his talk, and that the only thing I understood from it was that the goal is to make fonts responsive to device sizes and orientation changes like images and design in general, but that accomplishing that requires a lot of in-depth knowledge about how fonts are being created and tweaked.

After the lunch break, Brad Frost kicked off the afternoon with an introduction to an approach he calls Atomic Design. He borrows from chemistry, where the most basic element is an atom. Atoms form molecules, molecules form organisms, these eventually form a planet, and the planet is part of a universe filled with atoms and molecules and organisms and planets. Transferred to the web, atoms are your HTML tags and CSS rules, molecules are small snippets like a search form, organisms are parts of pages that logically belong together, planets are templates, and the equivalent of the universe is the site filled with content. Brad introduced a development tool he calls Pattern Lab, which helps web designers accomplish this progressive development and preserve atoms, molecules and organisms by enabling them to reuse and slightly tweak them, and bind them together into templates as needed.

Next up was Josh Brewer from Twitter. His talk was titled “Photoshop lies and other sobering truths about product design”. He was introduced by Marc Thiele, the conference organiser, with the admission that Josh felt his talk overlapped with Meagan Fisher’s earlier talk by over sixty percent. To make up for that, he had joked that he would sing through his talk if he had a guitar. I don’t know from where, but Marc actually managed to get a guitar for Josh, and Josh did something I have never seen happen at any conference I’ve been to, and probably won’t see again anytime soon: he sang his talk, and the song’s title was “Photoshop, you’re a liar!”. His “talk” had a few very interesting insights into UI development at Twitter, and one of his greatest encouragements was: prototype on the real thing. If you want to try out a new UI design, use real data, don’t use some abstract non-real thing. And use it for a few days or weeks to see if it really is what you as a user would want. This is a really powerful message, and it aligns very much with what I tell developers who ask me for advice on how to make something accessible: test, test, test! Use what you develop! Expose yourself to it and feed it your day-to-day data, not some abstract, static set of data points. Only that will show you where your design is still in need of improvement. Josh sang for the whole 45-minute slot, and only twice broke off the song to use narrative to highlight some parts of his talk. Absolutely incredible!

Next, Elliot Jay Stocks had the admittedly difficult task of delivering the last talk of the day. He did very well in my opinion, although of course stepping onto that stage after Josh’s song must have been hard as hell! His message: Responsive design is the way to go, but there is still some resistance to it in some parts of the web designer community, and maybe rightfully so, maybe not. He summarized the two days very well in his talk by saying: We’re not building for the here and now, we’re building for the future. What we build today must still work on devices in ten or fifteen or twenty years, and only responsive and responsible design will make sure that that is the case, but no Photoshop pixel fucking will.

All through those two days, I again and again had the feeling that these speakers were advertising a concept that the accessibility community has been trying to get into web developers’ and web designers’ heads for well over ten years: building responsive sites makes sure everybody can access them. They scale when zooming, when the screen size changes, and so on. Responsive design also usually leads to better markup on the HTML side of things, so screen reader users benefit from it, too.

So thanks to the mobile internet revolution instigated by the iPhone and Android, the web design community is finally picking up on a theme that will make every site they build much more accessible to a huge variety of people. It’s not called accessibility, it’s called responsive web design, but it is the same theme. Accessibility is always hard to sell; responsive web design is apparently much easier to sell to a lot of CEOs and product managers. And you know what? That’s fine with me! As long as it gets the job done, and many more people with age- or disability-related visual impairments, with motor impairments and so on, can use many more sites much more easily in the future!

It is expected that the talks will become available for everybody to watch within a couple of days or weeks. I will provide links once that is the case, and I encourage everybody to watch at least some of these great talks and look beyond her or his own edge of the plate!


Easy ARIA Tip #6: Making clickables accessible

Designers and web developers often agree that they do not like the standard buttons, or the limited styling capabilities browsers offer for buttons. To work around this, they resort to what’s called clickable text: in many cases a simple span or div element with some funky styling that makes it look like a button with some fancy twists. A JavaScript click handler then does the magic behind the scenes when the user clicks on that particular styled text with the mouse.
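A typical specimen of this pattern, sketched from memory rather than taken from any particular site (the class name and the saveDraft() handler are made up for illustration), looks something like this:

    <!-- The inaccessible starting point: looks like a button, means nothing -->
    <span class="fancy-button" onclick="saveDraft()">Save draft</span>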

Semantically, these styled text bits are totally meaningless to screen readers. The screen reader may or may not recognize that the text is clickable, but it can neither be tabbed to, nor is it clear whether this is a button, a checkbox (and if so, what its state is), or something else entirely.

Keyboard users also suffer, since these text bits are not tabbable. Just adding an onclick handler does not automatically make these things focusable with the Tab key.

Fortunately, there is WAI-ARIA. And with some simple additions to your markup, you can make these accessible and still profit from the fancy styling capabilities you get from using spans or divs instead of semantically correct buttons. Here’s the recipe:

Make it focusable

To do this, simply add tabindex="0" to the span or div. Giving tabindex a value of 0 makes sure the element fits into your tab order at its logical position in the flow of your HTML code.
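Applied to the sketch from above, this is just one additional attribute:

    <!-- tabindex="0" places the span into the natural tab order -->
    <span class="fancy-button" tabindex="0" onclick="saveDraft()">Save draft</span>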

Make it a button

WAI-ARIA gives us the ability to tell assistive technologies such as screen readers for the blind that a certain element, or set of elements, actually means something that is not immediately obvious from the markup itself. In our case, even though the styling makes the span visually look like a button, the screen reader cannot deduce that from the HTML and CSS instructions. To help it, you add role="button" to the element that receives the click. Ideally, this is the same element that already received the tabindex attribute above.
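With that added, our little sketch now reads:

    <!-- role="button" tells assistive technologies what this span is meant to be -->
    <span class="fancy-button" tabindex="0" role="button" onclick="saveDraft()">Save draft</span>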

If it’s a graphic instead of text, also give it a label

Sometimes, you may end up with a clickable image instead of text. That’s fine, and both parts of the recipe above still apply, but in this case, and to be platform-independent, you should add aria-label="My button label" to the item. You can also do this with spans containing text if you want to be absolutely sure the screen reader speaks the right thing. aria-label takes a literal, localizable string as its value and turns it into the spoken label. Yes, for graphics, this even overrides the alt attribute, if specified. And because some browser/screen reader combos like Safari and VoiceOver on the Mac have occasional problems with the alt attribute, aria-label puts you on the safe side.
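For a purely graphical clickable, a minimal sketch might look like this (the icon class and the deleteMessage() handler are again made up for illustration):

    <!-- A clickable icon gets its spoken name from aria-label -->
    <span class="icon-trash" tabindex="0" role="button"
          aria-label="Delete message" onclick="deleteMessage()"></span>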

Make Space and Enter activate the click handler

Yes, because this is no button in the original semantic sense, and browsers do not act on WAI-ARIA markup except when mapping it to the assistive technology APIs, you have to add a key handler that makes Space and Enter activate the onclick handler. In the regular desktop UI of most, if not all, operating systems, Space is used to activate buttons, and Enter is used to activate the default button of a dialog. But since in most cases we are not dealing with something that has a default button, except when it’s the submit button of a form, supporting Enter in addition to Space is OK here.
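A minimal sketch of such a handler, assuming the markup from the earlier examples and using a keydown listener, could look like this:

    <script>
      // Make Space (keyCode 32) and Enter (keyCode 13) trigger the same code as a mouse click
      function handleFakeButtonKey(event) {
        if (event.keyCode === 13 || event.keyCode === 32) {
          event.preventDefault();   // keep Space from scrolling the page
          event.target.click();     // reuse the element's existing onclick handler
        }
      }
      // Attach the handler to every element we marked up with role="button"
      var fakeButtons = document.querySelectorAll('[role="button"]');
      for (var i = 0; i < fakeButtons.length; i++) {
        fakeButtons[i].addEventListener("keydown", handleFakeButtonKey);
      }
    </script>

Calling click() on the element means the keyboard path runs exactly the same code as the mouse path, so the two can never get out of sync.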

And that’s all there is to it! You need nothing more to make your fancy-looking clickable buttons accessible at a basic level. Of course, if your button is a toggle that expands and collapses something, you may want to consider adding aria-expanded, as described in Easy ARIA Tip #5.

What about checkable clickables?

With a few tweaks, this will get you going as well:

  • Instead of “button”, use “checkbox” as the role, or “radio” if only one of a group should be checked at a given time.
  • Use aria-checked with a value of “true” for checked or “false” for unchecked items. In the same routine where you swap out the images to indicate the different states, also change this attribute accordingly. Make sure it is never undefined, so the item always reports as either checked or unchecked (see the sketch after this list).
  • If dealing with radio buttons, enhance the key handler that reacts to the Space bar and add support for the Up and Down arrows to move focus to the previous or next radio button respectively. Tab should immediately jump to the next focusable element outside that group of radio buttons.
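Here is a minimal sketch of such a checkable clickable, again with made-up class and handler names:

    <!-- A fake checkbox: aria-checked carries the state for screen readers -->
    <span class="fancy-checkbox" tabindex="0" role="checkbox"
          aria-checked="false" onclick="toggleCheck(this)">Subscribe to newsletter</span>
    <script>
      // Flip the state and keep aria-checked in sync with the visual styling
      function toggleCheck(el) {
        var checked = el.getAttribute("aria-checked") === "true";
        el.setAttribute("aria-checked", checked ? "false" : "true");
        // ...swap out the background image or CSS class here as well...
      }
    </script>

The same Space/Enter key handler from above applies here, too; for radio buttons you would additionally react to the Up and Down arrow keys to move both focus and the checked state within the group.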

These techniques can be used on both desktop and mobile. On mobile, you may want to react to touch events instead of click events, but I am sure you are already aware of that. :-)


Switching to Android full-time – an experiment

A few weeks ago, I decided to conduct an experiment. I wanted to determine if Android 4.2.2 “Jelly Bean” was finally ready for me to switch to full-time, away from an iPhone.

Background

I’ve been an iPhone user for four years, ever since the iPhone 3GS came out with VoiceOver support in June 2009. What Apple did back then was revolutionary, completely opening up a wealth of apps and services to people with vision impairments without the need to purchase extra assistive technologies at prices that often equalled the cost of the phone they were supposed to make accessible all over again. Instead, VoiceOver, the screen reader for iOS, was bundled with the operating system for free.

At the same time, Google also announced its first steps in accessibility for Android. But this paled in comparison, offering little more than a command shell for the Android platform with some speech output.

Later, TalkBack came about and gave at least some access to Android apps in Android 2.x. However, this access was still very limited compared to Apple’s model, as Jamie Teh points out in a blog post.

In October 2011, Android 4.0, AKA Ice Cream Sandwich, came out and, compared to what was offered in previous versions, was a big step forward in terms of accessibility. It was not quite there yet, as this AFB review spells out: it offered touch screen access for the first time, more than two years after Apple came out with VoiceOver, and with a model that still left a lot to be desired.

The biggest step forward came in June 2012, when Google announced Android 4.1 AKA Jelly Bean. With it came a revised model of touch screen access, called Explore By Touch, that closely resembles the model Apple, and now also Microsoft, have employed. Similar gestures allow for easy transition between platforms.

We had just started work on accessible Firefox for Android, and Jelly Bean meant that we had to add quite some magic to make it work. But we did, and the warm reception and good feedback from the blind and low vision community has been humbling and inspirational!

So when, with Android 4.2 and especially the 4.2.2 update, gesture recognition seemed to solidify and become more reliable, I decided that it was time to give Android a serious chance to replace my iPhone as my regular smartphone. I was also inspired by this Macworld podcast episode, where Andy Ihnatko talks about his switch from an iPhone 4S to an Android device, not from an accessibility, but from a general usability point of view. After all, Android has matured quite a bit, and I wanted to take advantage of that and finally use Firefox for Android full-time!

First steps

So on the 23rd of March, I got my shiny new Nexus 4. I decided to go for a Google phone because those get the latest updates of Android fastest. Moreover, they come with a stock user interface, nothing home-grown like the HTC Sense or Samsung Galaxy devices have. On my partner’s HTC One, for example, a TalkBack user cannot even use the dial pad to enter a phone number.

The hardware is quite OK. The phone feels solid, and the glass surfaces on the front and back feel smooth and pleasant to the touch. The call quality is a bit muffled on both the sending and the receiving end; my best friend, who has a slight hearing problem, had trouble understanding me. The speaker on the back also leaves a bit to be desired, especially since the speaker in the iPhone 4S that I am used to is quite good. I also found out during the course of my testing that I have occasional problems with Wifi connections becoming very slow, download rates plunging, or downloads breaking off altogether. Deleting and re-adding the access point entry seems to have fixed the issue, at least temporarily. This is also being actively discussed in the Android project issue tracker, so it is nothing specific to my device alone.

I was cheated out of the initial setup experience. No matter what I tried, the gesture described in the Jelly Bean accessibility guide for both the Nexus 4 and Nexus 7 devices didn’t work. TalkBack would not start at all. So my sighted partner had to do that setup for me. We could then turn on TalkBack. After an update to Jelly Bean 4.2.2, we could also enable the quick button and gesture sequence to turn on TalkBack while the phone is running normally. This experience did not leave that good an impression with me.

Setting up accounts was a breeze. To be more flexible, I moved my calendars and contacts off of iCloud and now store them in an OwnCloud installation on my web space provider’s server. I didn’t want to go the Google Contacts route because of recent announcements that left me uncertain whether this would be supported across platforms in the future. For OwnCloud, I installed CalDAV and CardDAV provider software from the Play Store that works like a charm with the Nexus 4.

However, some of the stock apps like Calendar don’t work that well with TalkBack, or at least not if one is used to the excellent support of Calendar in iOS.

BUMMER! Calendar works significantly worse with TalkBack than the Calendar app on iOS does with VoiceOver.

Multi-lingual input

Because I frequently write in both English and German, I wanted a way to quickly switch between these two input languages. The problem with having only one active is that, when I write in the other language, auto-correct will often try to make German words out of English vocabulary, or vice versa. Fortunately, once set up, this is as convenient as on iOS. In the Language and input settings, with the stock Android keyboard, one needs to disable the System Language checkbox and then enable the languages one wants supported. Next to the space bar, there is then a new button that cycles through the available languages.

BUMMER! iOS announces the new language being switched to; TalkBack doesn’t.

This can be a real productivity killer if one uses more than two languages frequently.

The next problem arises with German umlauts. Sighted people long-tap the a, o and u characters for the ä, ö and ü characters, and s for the ß character. TalkBack users have a big problem here, since neither TalkBack nor the alternate screen reader Spiel allow for keys to be long-tapped. On iOS, when in touch-typing mode, one touches the letter in question and leaves the finger there, taps the screen with a second finger, and can then double-tap and hold to simulate a long-tap on the letter, and finally choose the relevant special character. Since iOS 6, a German keyboard with dedicated umlaut characters is also available, and on the iPad, even the ß character has a dedicated key.

On Android, the stock keyboard does not come with such extra keys, and accessibility does not offer a way to bring up the umlauts. Alternative keyboards from the Play Store, such as SwiftKey or the “German keyboard with Umlauts” app, are not accessible. It appears that accessibility is tightly integrated with the stock Android keyboard alone. Asking around in the community did not yield any positive result on this matter either.

BUMMER! No umlauts for blind users on Android! This also is true for accented characters in French, Spanish or other languages.

Text editing is another problem that lags behind terribly in Android if you do not use an external keyboard. On iOS, one can control the cursor, do text selection, do editing functions such as cut, copy and paste. On Android, there are gestures to move by character, word, or paragraph, but there is no way to select text or bring up the editing functions of a text field in a controlled fashion. I do not want to have to always use an external keyboard!

Moreover, if you do not swipe but use the one-finger exploration method, where the cursor ends up after a double-tap depends on where on the text field your finger lands. Unlike iOS, which always puts the cursor at the beginning or end first, or announces where the cursor will go once you touch a text field’s contents, Android gives no such speech feedback.

BUMMER! No controlled or advanced text editing is possible with TalkBack.

Apps

If you’d like to read up on some of the stock apps and their TalkBack support, or lack thereof, I would like to point you to Kiran Kaja’s excellent Nexus 7 reviews part 1 and part 2. Here, I would like to add a few impressions of apps I use regularly.

But before I do that, I would like to point out one big common denominator: unlabeled graphical buttons. They are everywhere! This includes the stock Android apps on the device, but even more so many apps from the Play Store. This is all the more bewildering considering that the Android developer tools even warn developers about missing contentDescription attributes, which are used to give accessibility labels to image buttons or image views. One developer whom I contacted with a request to add those said in his reply e-mail, paraphrased: “Oh, I got those warnings, but always ignored them because I didn’t know what they meant. Oh yeah, I know TalkBack, but always thought it useless. Now I know what this is all for, and you’ll get the buttons labeled in the next update.” So there is a warning, but the tooling does not indicate what it is for, and ignoring this warning basically means excluding a potential group of customers from using one’s app!

Twitter: Several Twitter clients were mentioned in the comments to Kiran’s posts above, and even Plume, the one considered most accessible, has several unlabeled buttons in the New Tweet screen, leading me to try three different ones before I found the one that actually sent my tweet. I guess “accessible” means a much lower bar in much of the Android community than elsewhere, doesn’t it?

App.net: Another social network I use frequently. There are two clients out there that are quite popular: Dash and Robin. Both added contentDescriptions upon my request and are now fully accessible.

WordPress: I found several unlabeled buttons in the UI of that app. Since it is open source, I decided to go in and fix them myself. I found that the current trunk version has a much revamped UI, using a component that adds accessibility by default, so the next version will actually be much nicer for free. I had to add only a few contentDescription strings to buttons that don’t take part in this new mechanism.

WhatsApp: Works except for some buttons that aren’t labeled. Because the layout is very similar to the iOS version, I quickly figured out that the button to the right of the text field sends the message, and the one to the left adds media.

Amazon: With a few exceptions, works as well as the iOS version.

Push notifications on the lock screen: One thing I dearly missed when I started using Android was that new notifications were not pushed to my lock screen immediately and didn’t wake up the device. I am so used to the workflow of tapping a push notification to act on it from the lock screen that this really felt like a serious drawback. Fortunately, there is an app for that called Notification Lock Screen Widget. The installation has to be done by a sighted person, since it requires adding a widget to the lock screen, but after that, it works quite well with TalkBack. One double-taps the notification one wants to act on, then finds the slide area and unlocks the phone. The app then opens, and one can reply or do whatever is necessary.

The camera

Yes, this blind guy talks about the camera! I use it quite frequently on iOS to take shots of stuff around me, sometimes even to send them to social networks to ask what something is, or if the milk has reached its due date yet. Since iOS 6 and on the iPhone 4S, I even use panorama shots frequently. VoiceOver gives me instructions if I hold the camera too high or too low, if I’m turning too fast or too slowly. If I want to take a picture of a person, face recognition tells me if a face has moved into the camera view and where the face is located. Once it’s centered, I can take a shot, and these are usually pretty good I’m told!

BUMMER! None of the above is possible with the Camera app on Android. I can take pictures, but panorama or facial recognition is not possible.

Once I’ve taken photos, I may want to re-use them later. In iOS, this has been a no-brainer for ages. VoiceOver tells me what orientation the photo is in when I’m in the gallery, if it’s a photo or a video, and when it was shot.

BUMMER! The Gallery in Android is totally inaccessible. There is only a Cancel button and a blank screen, nothing more.

I also use ABBYY TextGrabber to do optical character recognition on letters or other written stuff. On iOS, I can easily take a snapshot and have it recognized. The result is usually also pretty good.

BUMMER! TextGrabber on Android, although usable with TalkBack, suffers from the above-mentioned inaccessibility of the camera and gives bad results 50% of the time, and no result the other 50%. A sighted user can achieve similarly good results on both iOS and Android, so this is clearly a shortcoming of the inaccessible camera.

I also use LookTel Money Reader on every travel to the U.S. or Canada to recognize different bank notes.

BUMMER! The Ideal Accessibility currency recognizer only works with U.S. money, not with Canadian dollars, Euros or British pounds.

Scrolling in lists

In iOS, when I have a list of a hundred tweets in Twitterrific or TweetList, I can simply swipe through and read them continuously. This is not possible on Android. Swiping in TalkBack only covers the elements currently visible on the screen. In order to continue reading, I have to stop my flow, perform the gesture to advance a screen, touch the topmost list item, and continue reading by swiping right. The alternative screen reader Spiel offers continuous swiping in some lists, but I found that this does not work reliably everywhere. For me, this is a huge productivity killer. It interrupts my flow every 6 or 7 items, breaks concentration and is a distraction. It requires me to think about where to put my finger next in order not to miss anything.

BUMMER! No continuous reading of long lists is possible in a reliable fashion. TalkBack doesn’t offer it at all, Spiel only in some limited lists.

Navigation and travel

I travel quite a bit, and I also like to find out about my surroundings. The Maps application in iOS 6 is a magnificent piece of software in accessibility terms; I’ve never had such accessible maps at my fingertips. When walking, I get spoken announcements of upcoming crossroads and the like. Previously, one would have had to purchase expensive extra devices like the Trekker Breeze to get some of this functionality. Alternatively, one can also use Ariadne GPS to get some more features tailored towards the needs of the visually impaired.

BUMMER! The Maps app on Android only offers limited navigation capabilities. Maps themselves aren’t accessible at all.

And if I want to go somewhere in Germany, I will most often use the German railway company Deutsche Bahn. They offer apps for both iOS and Android: one for looking up travel routes, and one to purchase and store electronic train tickets, to later show to the on-board service personnel so they can scan them. Information about seating and about when and where to change trains is all accessible on iOS. Finding routes is, too, of course. Standard date and time pickers are used, and everything just works nicely.

BUMMER! While the Tickets app looks like it could be equally accessible on Android, the app for finding one’s travel route doesn’t allow a TalkBack user to specify a departure or arrival date and time. Because Android does not offer a standard date and time picker, or at least I’ve never seen one anywhere, the company decided to use an animated spinning wheel to adjust the values for date and time. This custom view is totally inaccessible, and there is no alternative method of input. I contacted the railway company with this problem, and they said they’d look into it, but the only way I see that this can be solved is by using an alternative UI if TalkBack or another screen reader is being detected. Until then, there is no way I can find my travel routes using just the Nexus 4.

eBooks

On iOS, ever since the first iPad was announced in early 2010, the iBooks application has been a fully accessible eBook reader. Along with Apple’s own iBooks format, it supports ePub and PDF. In iOS 6, PDF support was even raised to a level almost comparable to that of ePub and iBooks. One can review text, read it on a refreshable braille display, even in grade 2 braille if one so desires, find individual words and review them, etc.

More recently, Adobe Reader on iOS also became accessible by supporting the relevant protocols within the UIKit framework.

Kiran already hints at it in his post, and even the Bookshare GoRead application does not improve the situation. The only way one can consume eBooks on Android is by letting them be dumped into one’s ears through the speech synthesizer in chunks. No way to rewind, no way to review words or phrases. No way to read on a braille display. It’s basically like listening to an audio book on a cassette player with broken rewind and fast-forward keys.

The screen where the eBook content is being displayed is a total black hole for TalkBack. Nothing there.

BUMMER! eBooks are close to inaccessible! And there are no APIs to help developers improve the situation. While other platforms offer accessible rich-content display and editing, Android doesn’t.

Braille

Braille support has to be installed separately via an application from the Play Store called BrailleBack. It is new, as new as Jelly Bean itself. My braille display isn’t supported yet. However, I’ve opened an issue against BrailleBack and even provided some info about my display, so I hope that once BRLTTY supports it, BrailleBack will, too.

On iOS, the display is fully supported right out of the box.

In conclusion

If I replaced my iPhone with the Nexus 4 full-time at this point, I would be missing out on all “BUMMER!” items above. It would be like stepping back a few years in accessibility, but taking the knowledge with me that there is something out there that offers me all these things.

Despite my desire to use Firefox for Android on a daily basis, meaning whenever I open a web page on a mobile device, I am not prepared to do that for this big a sacrifice. I am also not prepared to constantly carry two phones around with me except when I know I’ll be working professionally with them at my destination.

In short: The experiment, tailored towards my usage patterns at this point in time, has failed.

However, I will keep the Nexus 4 and use it for testing, because it is so nice and fast. And I will use it to keep close tabs on future Android development. Android 5.0 is around the corner, and I will definitely check against the above points when it is released to see if any of these items have improved.

This experiment has also led to some conclusions regarding Firefox OS accessibility which you all will hopefully see the results of in a few months! So stay tuned! :)


Review: Dell Latitude 10 inch tablet with Windows 8

At the 2013 CSUN conference, I was invited by Accessibility Partners to participate in an interview pertaining to tablet computer accessibility. Having used iPads for my personal use for years, and also having ventured into Android tablets such as the Google Nexus 7 in my work on Firefox for Android, I was very interested to see what this would be about.

During the interview, the team showed me a Dell Latitude 10 inch Touch tablet running Windows 8. We talked about various aspects of general tablet accessibility, but also went into specifics of what the platform should offer.

It was one of those situations where, at the time of the interview, I actually said that built-in solutions would be sufficient if they offered all the accessibility features, and that the possibility to install additional assistive technologies would not be of the highest importance. I said this under the impression that, for the price quoted to me, this had to be a device equipped similarly to the Microsoft Surface RT.

However, as I found out later, the Dell Latitude comes with an Intel processor and doesn’t run Windows RT, but rather a full version of Windows 8. One could even purchase Windows 8 Pro for an additional price when ordering.

A full Windows 8 gives one several possibilities over the RT variant:

  • the ability to run Windows 7 desktop applications.
  • being able to install additional or alternative screen reading or other assistive technology on it.

Both NVDA and Window-Eyes have versions out supporting the touch screen on Windows 8 devices. Freedom Scientific announced at CSUN that JAWS 15, coming out in the fall, will also support touch on Windows 8.

What makes the Dell tablet even more interesting is that it is priced at the same level as the Surface RT, while offering the above-mentioned advantages. Unlike the Surface Pro, it also only weighs as much as a Surface RT, and, as far as I know, has no moving fan inside.

After these facts had sunk in for a while, and I also had a chance to look at the Microsoft Surface during the same conference, I decided, contrary to my initial statement during the interview, to purchase one of the Dell tablet models. Especially the keyboards and the stand that folds out of the back of the Surface units, which felt quite fragile to me, didn’t leave that good an impression.

I went for the 64 GB Touch Essentials model, not needing any of the additional enterprise-centric features, but wanting a little more breathing room storage-wise.

Disclaimer: This review reflects purely my personal view of both Dell’s and Microsoft’s product offerings. I’m not being paid for writing this. It serves merely as information for me and interested readers.

Unboxing and first start

The tablet arrived on Monday. The accessories I had also ordered, a docking station and a wireless keyboard and mouse, had already arrived the week before.

The Latitude’s natural orientation is landscape, with a physical Start button being the anchor at the bottom center. On the left side there’s a Kensington lock slot and the up and down volume buttons. At the top edge, towards the right-hand side, there are also two buttons: the one farthest to the right turns orientation locking on and off, and the one next to it locks and unlocks the device. Next to these buttons is an SD card slot. You have to press down on it for the tray to come out of the casing. On the right side, there’s a standard 3.5 millimeter headphone jack and a standard USB port. On the back, there’s an 8 megapixel camera in the top center, and loudspeakers in the bottom right and left corners. The tablet also has a front-facing camera, which is integrated into the glass surface, located in the top center. Below the Start button, on the bottom edge, is the connector for AC power or the docking station.

[Photo: front view of the tablet with the glass surface and the Start button]

[Photo: back of the tablet with the camera at the top and speakers in the bottom corners]

[Photo: the tablet sitting in its docking station, slightly tilted backwards]

The device feels solid to the touch, with no squishy parts. The glass surface also feels high quality. The tablet’s thickness and weight are about those of the Apple iPad 3.

After turning it on and waiting for a minute or two, the Windows Setup wizard came up asking for the region and keyboard layout. At this point, I could press Start button+Volume Up to start Narrator. It came up instantly, giving me immediate access to the setup wizard. It also offered instructions in case one wasn’t familiar with the gestures. The basic gestures are the same as those known from iOS or Android 4.1+: touching anywhere gives you the object under your finger, swiping left and right moves between objects, and double-tapping activates the current object. This also applies to the keyboard. Like on iOS, split tap is available as well: hold one finger on the screen and tap with a second to activate the current object. For keyboard input, one can also change the behavior so that the key is entered when the finger is lifted, but unlike on iOS, this cannot be changed with a gesture. One has to go into the Navigation section of the Narrator Settings dialog to change it.

The setup was OK except for the fact that, to create a Microsoft account, one needs to solve a CAPTCHA. Since the audio version didn’t work well for me at all, I had to get sighted assistance to get past it.

And after that, Windows did what it sometimes does: it gave me a dialog saying that the installation couldn’t be completed. After clicking the OK button, the machine rebooted and presented the same Setup wizard again. No choices were remembered, except for my Microsoft account, which I did not have to recreate, and thus I did not have to solve another CAPTCHA. On this second attempt, the installation went through, and I was brought to the Start screen.

A post-PC device

One might think that this is just a notebook with a touch screen, seeing that it has an Intel processor and runs a full version of Windows 8. Using the wireless keyboard and mouse, one could arguably get that impression. But for all intents and purposes, this is a tablet: what Steve Jobs once called a post-PC device. Yes, the Windows 7 style desktop is available, and yes, you can interact with the computer using a mouse and keyboard. However, the Start screen, AKA Start menu, and all modern UI applications are very touch-screen-centric. If a Windows 8 modern UI app comes up, the best way to interact with it, even as a blind user, is by touch screen gestures. The keyboard often cannot reach all parts of the app, and using the mouse does not make sense for someone who cannot see the pointer. Using the touch screen, however, gives one direct access, an immediate feel, so to speak, for the application.

Here’s one general problem with Windows 8, though, and this affects sighted users probably more than blind ones: You never know what kind of UI will hit you next. To quote an example from this ZDNet article by Matt Baxter-Reynolds, you may be in a Windows 7 style e-mail client, double-click a photo and find yourself in a photo viewer that is Windows 8 modern, AKA Metro, style. There is no Close button, no task bar. To close it, on a touch screen, you swipe your finger down from the top, literally swiping the window off the screen. Using the mouse, you have to do the same. As a blind user, Alt+F4 will close Metro apps just like any other, and the Windows key will open the Start screen, so the learning curve for us is probably a bit lower than for sighted folks! :)

But this general usability problem can hit us, too: Metro apps may or may not be fully keyboard accessible. So to be on the safe side, if you seriously plan to run Windows 8, I strongly recommend purchasing either a tablet or a convertible laptop, meaning one that has a touch screen. Using parts of Windows 8 without one is not going to be much of a joy. Trust me, I tried it in a virtual machine on a MacBook for a couple of months.

The state of screen reader accessibility

For Windows 7-style desktop apps, there's really not much difference from what you're probably used to. For Metro apps, the experience can be quite different. Even among screen readers, using built-in or downloaded Metro apps is going to give you different experiences. The Store works quite reasonably with Narrator, but currently not so well with NVDA. Some apps may have labeled buttons, but their rich list entries are not coded properly, so their contents don't make sense. Other apps, such as the Amazon one, have touch-sensitive areas that cannot be activated by a screen reader. It is currently not possible to log into one's Amazon account, for example.

The accessibility programming interface used for Metro apps is called UI Automation, an API that Microsoft has been shipping since the Windows Vista days and that has evolved over time. However, as I hear from various sources, implementing it in apps, or supporting it in assistive technologies, is not trivial, and even inside Microsoft, not all apps do it consistently or correctly. And this shows in the user experience. For example, one can swipe to elements in IE that may be off-screen. Double-tapping them will then activate something totally different from what was intended. The reason is that IE doesn't scroll the screen in accordance with what Narrator is reading. On Android, if something is not on the screen, it cannot be accessed and thus not clicked; it has to be scrolled into view first. On iOS, the operating system usually makes sure the screen follows what VoiceOver tries to access. This uncertainty in some apps really leaves one with a shaky feeling at times.

I was also occasionally reminded of the old Windows screen reader days, when suddenly the whole application would go blank because something apparently went wrong on the API side. Remember your Refresh the Screen key combinations? They probably won't do much good here, but the experience is quite similar on occasion.

I also found that, with both NVDA and Narrator, the two screen reading solutions I have tried so far, the operating system sometimes interprets dragging the finger around the touch screen as one of its standard "swipe in" gestures. More than once I accidentally brought the Windows 7 desktop window to the front by swiping in from the left, when all I intended was a swipe to the next object. Or I closed the current window by dragging my finger downwards for too long. The fact that Windows doesn't give assistive technologies full control over gestures, but reserves some for itself regardless, can make for some unpleasant surprises.

One problem I found with NVDA was that it sometimes applies its hierarchical object navigation model too strictly, requiring me to use the swipe up and down gestures to go to parent or child objects. For a medium designed for random access, this should be more transparent: I should be touching what's under my finger, not its container, which encompasses a much bigger rectangle. While this approach may seem sane on OS X, for example, where the screen contents have to be broken down for the much smaller trackpad, interacting with a touch screen gives one a real-time, one-to-one view of things, and one should not have to think about different object hierarchy levels.

In conclusion

This is not meant to be extensive app-by-app testing. I simply haven't had the tablet long enough to try every app that's on there, or to download many apps from the Windows Store. But it certainly gave me a first impression, and that's what I wanted to share with you.

There definitely is room for improvement on many fronts, both on Microsoft's side and on that of assistive technologies. I believe the UIA API must solidify and become less ambiguous, good and concise documentation must be available so developers can make their applications easily accessible, and accidental gesture activation must be guarded against better, so blind users don't unintentionally open or rearrange things.

Like Android with its gradually closing accessibility gaps, the accessibility of Microsoft's Windows 8 move onto post-PC devices definitely deserves commendation, and it is developing into a real alternative to the ever-prominent iOS devices. In terms of accuracy while dragging one's finger around the screen, and in responsiveness, I'd say Windows 8 is even slightly ahead of Android.

So should you, as a blind user, go with Windows 8? I'd say: It depends. If your assistive technology supports it and you're not afraid of using a touch screen, then by all means, it's worth it. If you do not intend to use a device with a touch screen, or have reservations, better stick with Windows 7 for as long as possible. As soon as Windows 8 throws you into a Metro app, things may turn unpleasant if keyboard accessibility isn't implemented.

Posted in Accessibility, Windows | 5 Comments

Sometimes you have to use illegal WAI-ARIA to make stuff work

In this blog post, I’d like to recap an experience I just had while trying to apply some accessibility enhancements to the NoodleApp app.net client.

The problem

NoodleApp uses keyboard shortcuts to allow users to switch back and forth between posts, messages etc. that are displayed on the screen. Using the j and k keys, one can move down and up through the lists respectively. However, these keys only change a visual indicator, done in CSS; they give no indication that a real focus change occurred. If one presses Tab, for example, focus moves to the next item relative to where keyboard focus last was, not to where the j and k shortcuts took the user.

This is not new: Twitter uses similar shortcuts, too, and even GMail has them, allowing to move among message threads.

All of these implementations, however, only change a visual indicator. They neither adjust keyboard focus, nor do they communicate a focus change to screen readers. In addition, at least the screen readers on Windows would not immediately be able to use these keyboard shortcuts anyway, since their virtual buffers would capture these keys for quick navigation before they ever reached the web application.

The easy part

The easy part of the solution to the problem is this:

  1. Add tabindex="0" to the ol element that comprises the whole list, to make it keyboard focus-able and include it in the tab order at the position determined by the flow of elements. Since its default location is appropriate, 0 is the correct value here.
  2. Add tabindex="-1" to each child li element, of which each contains a single post. This makes them focus-able, but does not include them in the tab order individually. Such an extra tab stop is unnecessary here.
  3. Add a .focus() call to the next reachable message, which is determined in the handling of the j and k keys, to set focus to that element, or to the first post when none is focused yet but the user presses j or k.

These steps give us keyboard focus-ability. What one can do now is press j or k to move through the list of posts, and then press Tab to actually move into the post details and onto items such as the user name link or one of the reply, re-post etc. actions. Very handy if one uses only the keyboard to operate NoodleApp.
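Put together, and with invented IDs and a hypothetical moveToPost() helper purely for illustration (NoodleApp's actual code is structured differently), the result of these three steps looks roughly like this:

<ol id="posts" tabindex="0">
  <li id="post-1" tabindex="-1">…post content…</li>
  <li id="post-2" tabindex="-1">…post content…</li>
</ol>

// Hypothetical helper called from the j/k key handler.
function moveToPost(index) {
  var items = document.querySelectorAll('#posts > li');
  if (index < 0 || index >= items.length) {
    return; // nothing to move to
  }
  // tabindex="-1" makes the list item programmatically focus-able.
  items[index].focus();
}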

The tricky, AKA screen reader part

All of the above does not yet give us any speech when it comes to screen readers. Generic HTML list items, normally non-focus-able, are not something screen readers would speak on focus. Moreover, the list would not yet be treated as a widget anyway.

The latter is easily solved by adding an appropriate role="listbox" to the ol element we already added the tabindex="0" attribute to above. This causes screen readers on Windows to identify the list as a widget one can enter focus or forms mode on, allowing keys to pass directly to the browser and web app instead of being captured by the screen reader's virtual buffer.

And here is where it gets nasty. According to the documentation on the listbox role, child elements have to be of role option.

OK, I thought. Great, let's just add role="option" to each li element, then.

In Firefox and NVDA, this worked nicely. Granted, there was not any useful speech yet, since the list item spoke all text contained within, giving me the user name and such a couple of times, but hey, for a start, that was not bad at all! NVDA switched into focus mode when it was supposed to, tabbing gave me the child accessibles, all was well.

And then came my test with Safari and VoiceOver on Mac OS X.

And what I found was that role="option", despite the documentation saying it may contain images, caused all child items to disappear. The text was concatenated, but the child accessibles were all flattened away. Tabbing yielded silence, VoiceOver could interact with text that it then found was not actually there, etc., etc.

So, while my solution worked great on Windows with Firefox and NVDA, Safari and VoiceOver, a popular combination among blind people, failed miserably.

The solution

I then tried some things to see what effect they would have on VoiceOver:

  • I just added an aria-label to the existing code to see if that would make things better. It did not.
  • I tried the tree and treeitem roles. Result: List was gone completely. Apparently VoiceOver and Safari do not support tree views at present.

Out of desperation, I then thought of the group role. Those list items are essentially grouping elements for several child widgets. So I changed role="option" to role="group" and added an aria-label (with the group role, the name has to be specified by the author) containing the user name, post text and relative time stamp.

And miraculously, it works! It works with both the Firefox and NVDA, and Safari and VoiceOver combinations. Screen reader users now get speech when they navigate through the list with j and k, after they have switched their screen reader to focus or forms mode.
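Stripped down to the attributes that matter here, and with invented post content and label text, the resulting markup looks roughly like this:

<ol role="listbox" tabindex="0">
  <li role="group" tabindex="-1"
      aria-label="Jane Doe: Hello app.net! 5 minutes ago">
    <a href="#">Jane Doe</a>
    <span>Hello app.net!</span>
    <a href="#">Reply</a> <a href="#">Repost</a>
  </li>
  <!-- further posts follow the same pattern -->
</ol>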

Yes, I know it is illegal to have elements with role group as children of a listbox role. But the problem is: Neither WAI-ARIA nor HTML5 gives me an equivalent of the rich list item known, for example, from XUL. And there is no other equivalent. Grid, gridrow, treegrid, rowgroup etc. are all not applicable, since we are not dealing with tabular, editable content.

Moreover, I cannot even be sure which of the browser/screen reader combinations is right with regard to flattening or not flattening content in role="option". The spec is not a hundred percent clear, so either could be right or wrong.

So, to have a solution that works now in popular browser/screen reader combinations, I had to resort to this admittedly illegal construct. Fortunately, it works nicely! Next step is obviously to advocate for a widget type either in HTML or WAI-ARIA that is conceptually an option item, but can hold rich compound child content.

What am I doing this for, anyway?

You may ask yourself: "If I can just read through with my virtual cursor, why do I want to use the other navigation method?"

The answer is: Yes, you can read through your list of posts using the virtual cursor. The problem is: Once the view is refreshed, either because you've reached the bottom and it loads older posts, or because new posts arrived at the top and the view needed refreshing, you lose your place. Using forms/focus mode and the j and k keys will keep your place even if you load older posts or newer posts arrive after you started reading. You can also use other quick keys like r to reply, f to follow a user, and more, documented at the top of the NoodleApp page when you open it for the first time. Having a means for the screen reader to keep track of where you are is important for efficient reading, and the virtual buffer concept does not always make this easy with dynamic content.

If you have suggestions

Please feel free to comment if you feel that I'm going about this the wrong way altogether, or if you think there are existing roles more suitable for the task than what I've chosen. Just remember that it has to meet the above-stated criteria of focus-ability and interactability at the browser level, not the virtual buffer level.

The code

If you're interested in looking at the actual code, my commit can be found on GitHub.

Posted in Accessibility, ARIA | 9 Comments

Advanced ARIA tip #1: Tabs in web apps

The following article will describe how to properly create accessible tabs in web apps. This is important for both mobile and desktop web applications. Tabs are not native to HTML5, so if you simulate them, you’ll probably use other markup such as lists and list items to generate them. You will have to add WAI-ARIA markup to make these semantically correct. For non-touch-screen interfaces, you’ll also have to add keyboard support manually to make sure the experience is consistent with native apps.

This article assumes that you have at least a basic understanding of what WAI-ARIA is and how to apply attributes. This article will show you which attributes are appropriate for this particular task. If you do not yet know what WAI-ARIA is or want to refresh your memory, go and read, for example, this introduction.

To get tabs to work right, there are a few roles and attributes we’ll need:

tablist
This is the list of tabs itself. It indicates to screen readers that child elements are selectable tabs. It is a container role and allows screen readers to count the number of actual tabs inside.
tab
An actual tab. This must be a keyboard-focusable item. It must be focusable directly, not one of its children.
aria-selected
A boolean attribute that indicates whether the current tab (in this case) is the selected one. aria-selected is applicable to other types of items, such as option items, as well.
aria-controls
Indicates which element is being controlled by this particular item. We’ll use this to connect a single tab to its actual tab panel.
tabpanel
A single tab panel. This is similar to a dialog page, it contains various controls.
aria-labelledby
The attribute to indicate where the tabpanel gets its label, its title, so to speak, from.
aria-describedby (optional)
The element(s) to provide the descriptive text, for example explanatory dialog text, for this tabpanel.
presentation
A role used to remove certain intermediate objects from the screen reader's view that nonetheless make semantic sense to keep in the HTML.

The code without WAI-ARIA

<ul id="tabs">
<li><a id="tab1" href="#" onclick="showTab(1);">Tab 1</a></li>
<li><a id="tab2" href="#" onclick="showTab(2);">Tab 2</a></li>
<li><a id="tab3" href="#" onclick="showTab(3);">Tab 3</a></li>
</ul>
...
<div id="panel1">
...
</div>
<div id="panel2">
...
</div>
<div id="panel3">
...
</div>

Obviously, you’d add logic to that showTab() function to show and hide the tabs and keep track of which one is currently selected, adjust their styling etc.

Adding proper semantics

As it stands, this would render the tabs as a bunch of links in an unordered list, and the tab panels as mere block containers with controls in them. To now add proper semantics to that, so that screen readers recognize these as tabs, we’ll have to change the same code snippet as follows:

<ul id="tabs" role="tablist">
<li role="presentation"><a id="tab1" href="#" onclick="showTab(1);" role="tab" aria-controls="panel1" aria-selected="true">Tab 1</a></li>
<li role="presentation"><a id="tab2" href="#" onclick="showTab(2);" role="tab" aria-controls="panel2" aria-selected="false">Tab 2</a></li>
<li role="presentation"><a id="tab3" href="#" onclick="showTab(3);" role="tab" aria-controls="panel3" aria-selected="false">Tab 3</a></li>
</ul>
...
<div id="panel1" role="tabpanel" aria-labelledby="tab1">
...
</div>
<div id="panel2" role="tabpanel" aria-labelledby="tab2">
...
</div>
<div id="panel3" role="tabpanel" aria-labelledby="tab3">
...
</div>

The above code snippet does the following:

  1. It adds the role of tablist to the ul element, indicating that the children are tabs.
  2. Adds the role presentation to each of the li elements, indicating that the screen reader should ignore the list items themselves.
  3. Adds role of tab to each link, re-mapping their roles to the intended screen-reader recognizable element type.
  4. Adds aria-selected to each of the tabs. When you switch tabs in your JS code, update these to reflect the new state of each. Only one can be selected at any given time, so the values of two should be false, and only one should be true.
  5. Adds aria-controls to each, indicating which panel is referenced by the tab.
  6. Adds a role of tabpanel to each of the div containers.
  7. Adds aria-labelledby to each panel, referencing the corresponding tab's a element, so that the tab's inner text above serves as the label for the panel.

What your JavaScript now needs to do is:

  1. Hide the old tab panel by styling the panel1, panel2, or panel3 container as display:none;. Do not just move the panels out of the visible view port, as this will not hide them from screen readers! Set the corresponding tab1, tab2, or tab3's aria-selected attribute to false.
  2. Make the new panel1, panel2, or panel3 visible. Set the corresponding tab1, tab2, or tab3's aria-selected attribute to true.
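A minimal showTab() sketch along these lines, reusing the IDs from the snippet above but otherwise my own assumption, could look like this:

function showTab(num) {
  for (var i = 1; i <= 3; i++) {
    var selected = (i === num);
    // Hide unselected panels completely so screen readers don't see them.
    document.getElementById('panel' + i).style.display =
      selected ? 'block' : 'none';
    // Reflect the new state on the corresponding tab.
    document.getElementById('tab' + i).setAttribute('aria-selected',
      selected ? 'true' : 'false');
  }
}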

The best keyboard interaction model is this:

  1. Left and Right arrow keys should move focus to the new tab, but not yet select it.
  2. Space should actually perform the hiding and un-hiding of the tab panels and adjust the aria-selected attributes. This is how Mac OS X applications with multiple tabs usually do it, for example many multi-tab panels in the System Preferences. This makes sure the user can change focus multiple times without each focus change triggering a dynamic update and possibly network traffic. Only an explicit step to select a tab should then actually trigger the change, and traffic. Mouse or touch can trigger both at the same time.
  3. Tab should immediately move to the first control within the tab panel. It should skip over the remaining tabs.
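A hedged sketch of such a keyboard handler, attached to the tab list from the snippet above and assuming the showTab() function sketched earlier, could look like the following; the focus bookkeeping is simplified, and handling of the Tab key itself (point 3) is not shown:

document.getElementById('tabs').addEventListener('keydown', function (event) {
  var tabs = [document.getElementById('tab1'),
              document.getElementById('tab2'),
              document.getElementById('tab3')];
  var current = tabs.indexOf(document.activeElement);
  if (current === -1) {
    return; // focus is not on one of the tabs
  }
  if (event.key === 'ArrowLeft' || event.key === 'ArrowRight') {
    var next = current + (event.key === 'ArrowRight' ? 1 : -1);
    next = (next + tabs.length) % tabs.length; // wrap around at the ends
    tabs[next].focus(); // move focus only, do not select yet
    event.preventDefault();
  } else if (event.key === ' ') {
    showTab(current + 1); // Space actually selects the focused tab
    event.preventDefault();
  }
});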

Common questions

Why links as tabs?
Because they give you focusability for free, without you having to fiddle around with tabindex values.
Why list items?
Because this is still a list, and only list items are valid children of an ordered or unordered list. ;-) And because this gives you more flexibility in styling.
Can I use images instead of text?
Yes, provided the images have alt attributes with proper labeling text set. Do refrain from using the title attribute.
Why hide the unselected panels via display:none;?
Because otherwise, they'd be cluttering up the screen reader user's view even though they aren't visible. Screen readers would be able to set focus to items they aren't supposed to reach at the moment and could totally mess up your app logic. Moreover, many screen reader actions could produce unpredictable results, because simulated clicks could end up at random screen coordinates. In addition, truly hidden panels free up memory, which is especially handy on low-spec mobile devices.

You can use other structural elements if you wish, provided you set the ARIA roles and attributes as described above, and also remove those elements from the screen reader’s view that are not needed.

When to not use tabs semantics

There are many circumstances where tabs are not the appropriate semantics. For example, if you have a web site, not a web application, that has categories such as “Home”, “Products”, “Support” etc., which may look like tabs, but actually load new pages, then these are not tabs in the intended sense, but should in all cases remain links, because that’s what they are. Bryan Garaventa wrote more about this here.

If it were marked up correctly, the mobile Twitter site would be an ideal candidate for appropriate tabs semantics. Specifically, the “Home”, “Connect”, “Discover”, and “Me” items at the top. They don’t open new pages, but switch a view dynamically instead.

Posted in Accessibility, ARIA, Mobile | 27 Comments

Why do native mobile apps seem to win all the time?

Twitter is often a place of small, but thought-provoking bits of information or personal impressions. Just today, Mick Curran, one of the NVDA core developers, tweeted this:

and followed it up with this:

Jamie Teh, the other NVDA core developer, replied to this and said:

And this question touches on part of the problem, but not the whole of it. The questions can be continued:

  • Why are there native iOS and Android apps for so many offerings that originate on the web?
  • Why is it that even Google, a company with a clear web focus, creates native Android apps for its services like GMail or Google Plus?

I believe there are several parts to a possible answer:

Developing for mobile browsers is damn hard

Every web developer who has ever developed mobile sites or web applications, and tried to make them work on as many devices and browsers as possible, knows what hard work this is. On Android alone, the native browser behaves differently in each major version: 2.2 and 2.3 support different stuff than 3.x (the tablet-only version) and 4.0 did. And then there are Chrome and Firefox, which also run on each of these versions, but which have to be installed separately. Then there's Android 4.1+ with Chrome preinstalled, at least on the Nexus 7 tablet, but possibly others. iOS, which has a high rate of current version usage, supports different things than Microsoft's mobile version of IE, which runs on Windows Phone. Animations in one browser run once, while in another they run forward and backward the same amount, and in a third they don't stop animating at all… You get the picture!

The result is that many mobile sites, to support the widest possible range of devices, are cut down to a bare minimum. Many web devs are even afraid to use anything but basic JavaScript and CSS properties, because the three-year-old version of WebKit that runs in a Gingerbread Android browser doesn't know about these things.

So, for many companies, it is economically saner to instead produce one native app per platform that just talks to their backend on the web, but gives users a consistent user interface even if they upgrade to a newer device along the way.

Others, like Facebook, decide to cut down their hybrid web content strategy in favor of a more native app to improve the user experience, because they feel they would otherwise get nowhere with what they had. On both iOS and Android, access to device features is easier from a native app than from the browser, or many features cannot be used from the browser at all.

Other factors also come into play, like how to manage multiple accounts for the same service, e.g. Twitter, through conventional web means like cookies. While the native apps for Android and iOS support multiple accounts, the web app does not; it instead requires one to sign out and back in with a different user name and password manually.

Too much clutter

Let's face it, there are hardly any mobile web sites or apps that aren't either cut down to a point of excruciating pain, or overloaded with too much clutter, like a whole bunch of navigation links that take away valuable space, especially on small hand-held devices such as iPhones or other smartphones. Native apps do a much better job at providing a single point of focus for the user. Users either want to view a list of articles, do a search, or browse categories, but not all at the same time on a 4 inch screen! Or, in the case of social media, they want to view a list of posts, or they want to compose one. Trying to squeeze both into a single web page on a small screen is undoubtedly going to create a less than pleasant user experience. In a native app, a tabbed interface allows the majority of the screen estate to be used for a single purpose, a single task, a single point of focus. A link to the web site is only presented in the About section, where it belongs, along with a copyright notice etc.

In almost all web presences for mobile and desktop browsers I know, a lot of cruft is carried along onto every sub page one visits: a huge navigation side bar, a top bar, a search, a footer with a lot of meta data… While this is still relatively OK on desktop computers, mobile devices have much less space and require much more scrolling, becoming inefficient if too much is presented at once.

Efficiency is maybe the greatest factor of all. If I want to order a pizza, or shop on Amazon, I want to stay focused and not be distracted by ads, too many offerings, bells and whistles, etc. I want to get the job done. Even though I work for a company that puts the web in front of everything else, I find that I haven't shopped through the Amazon website in over a year, because the native iOS app lets me do it so much faster. What I can buy in the Amazon app within a minute usually takes me at least(!) twice as long in any browser/screen reader combo. In other cases, like the one Mick describes in his tweets quoted earlier, the web site is marked up so badly that it is nearly impossible to place an order, whereas the native app makes it very easy to do.

And then there's the browser UI itself. It takes up a valuable portion of the screen and is a constant reminder that one is not inside a mobile app but rather on a web page. In a native app, this aspect becomes completely transparent. The user does not need to care about what's working behind the scenes. They simply launch the app, interact with it, and get their job done.

The Mozilla Marketplace goes a long way to alleviate this last aspect, by launching the installed web app in a Firefox runtime on one's Android device that leaves out all the typical browser UI pieces and gives a full-screen view of the web app.

But all the web apps I've tried still remind me that it's web pages I'm dealing with. They have links they carry onto every sub page I move to while working in the app, and they always clutter up part of my valuable screen estate. And yes, they get in the way! They either require me to explore past them, remember that they're there and that I should start exploring somewhere one third down from the top, or not move too far to the left so as not to encounter them, etc.

And interaction models often require new pages to load. Loading pages is a very webby thing that can take quite a few seconds before the app becomes usable again. A lot could be accomplished by putting everything in one bigger HTML file plus CSS and JS, and showing and hiding the currently unfocused areas dynamically. Compiled JS is fast enough even on mobile devices that the delays are much, much shorter than actual page loads.
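A tiny sketch of that idea, with invented section names, could be as simple as keeping every view in the one page and toggling the hidden attribute:

<section id="timeline">…</section>
<section id="compose" hidden>…</section>
<section id="profile" hidden>…</section>

function switchTo(id) {
  var sections = document.querySelectorAll('section');
  for (var i = 0; i < sections.length; i++) {
    // Show only the requested view; everything else stays hidden
    // from sighted users and screen readers alike.
    sections[i].hidden = (sections[i].id !== id);
  }
}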

There's only one mobile web app that I know of that mimics the look and feel of its native Android counterpart, and that's mobile Twitter. Unfortunately, its markup is rather wild, so it is barely accessible. I have no access to the Compose button in Firefox for Android, for example. But in principle, that's what a mobile web app should be: It removes all the clutter of a typical web page, gives a tabbed interface, and doesn't bother the user with a composition form that would take up valuable space that can now be used for more relevant content.

Accessibility

Yes, this is often a deciding factor, too, in whether I, or other blind users, choose to use a native app rather than the web site or mobile offering. And as the above-mentioned Facebook example shows, their switch to a native app allowed them to provide a much better accessibility experience for visually impaired users on iOS.

The reason in many cases is that both iOS and Android have UI components that give a lot of accessibility for free just by using them. iOS and Android app developers only need to do comparatively few things to make even their more complex UIs accessible to the assistive technologies preinstalled on those operating systems.

Like with many things on the web, implementing accessibility is not an easy task, especially for more complex compound widgets. Fortunately, many UI libraries now offer accessibility for free, but how many of them can actually be used efficiently on mobile devices nowadays? In short, the danger — and yes, I use this word consciously — of encountering wild-west web coding in accessibility terms is quite high. This becomes an important factor for Mozilla as we venture further into the mobile app space with our own Firefox OS offering. I'm seeing a lot of evangelism work ahead, actually, starting with our own apps and expanding to the author base on the Marketplace.

In summary

The theme of too much clutter impacts a wide range of people, not only visually impaired ones. People with cognitive disabilities, for example, are also often frustrated and confused by too much navigation or other "webby" stuff on small mobile screens. "Regular" sighted people I talked to prefer native apps for the same reason.

Many web app developers need to become much more conscious of what the actual focus point for their end users is at any given point in their application, and center the whole interaction of a single screen around that. The relevant content must always come first, and things that we used to take for granted like navigation bars etc. need to recede into the background or get their own special corner inside the app to not take up valuable screen estate and constantly get in the way of the user.

If some of these paradigms settle into the minds of a majority of mobile web app developers, the debate of whether the web is the better platform or not may take a more positive turn for the web from the user’s point of view.

Posted in Accessibility, Mobile | 6 Comments

Five years at Mozilla

Exactly on this day five years ago, on Monday, December 3, 2007, I started work at Mozilla as the QA engineer for Accessibility. I’d like to take this small anniversary to look back and look ahead.

When I started, Mozilla Messaging had just been formed to drive and oversee the development of Mozilla Thunderbird. Mozilla Messaging has been folded back into the main Mozilla entity, and Thunderbird has just seen its last major release.

When I started, Google Chrome wasn't even in existence yet. Today, Firefox is reaffirming its second place in the worldwide browser rankings, after having struggled a bit in the adjustment period following the long-awaited Firefox 4 release and the switch to a six-week major release cycle in early 2011. I can certainly feel the Mozilla comeback myself!

When I started, Windows Vista was becoming Microsoft’s big failure. I struggled with it on more than one occasion, and stuck with Windows XP for my main development and testing work until Windows 7 was released in October 2009.

In the last five years, NVDA, whose core developers had just started to implement in-process (fast) virtual buffers, has grown into a real free and open alternative to commercial screen readers in many use cases, and its momentum is still carrying forward! The continuing collaboration between Mozilla and NVDA has led, according to testimonials from many users given to me in written and spoken form, to the most rock-solid web surfing experience on Microsoft Windows for blind users. The NVDA team was also never afraid to take chances, question approaches to how certain content should be rendered and interacted with, and innovate on those ideas and questions.

The GNOME desktop was a promising alternative to the Windows desktop, and distributions like Ubuntu had the potential to become real end-user alternatives, not just ones aimed at nerds who weren't afraid of the terminal. Unfortunately, non-cohesive strategies in the various groups, and distributions changing designs in a major fashion with every release, have led to much confusion and frustration, over-all weakening the Linux desktop for end users a lot.

The Mozilla accessibility module had no automated test coverage when I started. With the help of other Mozillians, Alexander Surkov and I worked very hard for the first year to lift a mochitest suite off the ground. Shortly before Christmas of 2008, we succeeded, and since then, accessibility has had major, and ever-improving, test coverage on Mozilla's build machines. It helps to catch accessibility regressions not only in our module, but also in other Gecko code that might negatively impact us.

And then, there was the mobile story, which, when I started, was hardly a story yet. At my first work week in Mountain View, with Mozilla’s headquarters still located in buildings K and S of 1981 Landings Drive, I wrote a blog post about what I thought mobile accessibility might look like in the near and not so near future. Reading this article today, it is very clear that I am not going to be a fortune teller anytime soon. ;) The actual development played out a lot differently than I had anticipated. But let’s take a look at what the Mozilla mobile story has been since then:

I first saw mobile Firefox in action at the Mozilla Summit 2008 in Whistler, BC, Canada. It ran on Nokia N800 devices on a mobile operating system called MAEMO. MAEMO was Nokia’s parallel effort at a smartphone operating system based on Linux and KDE. Their other strong smartphone arm, based on Symbian, never really played a role here. Today, Symbian is history, more or less, and MAEMO is just having a resurrection attempt somewhere else. Nokia is building Windows Phones now and struggling to survive.

The struggle Nokia was going through with its smartphone operating system business also didn't leave Mozilla unaffected. The platform changed names once or twice until it ducked away into hiding some time in 2010. Mozilla has since abandoned MAEMO and its cousins, I believe.

Later, there was also a Windows Mobile 6.x version of Firefox, which never saw an actual release and was discontinued when Microsoft announced Windows Phone 7 and its major changes in architecture, which made it apparent that Mozilla couldn't do anything useful with it afterwards. With the release of Windows 8, this changes again, at least for the x86 part of the story.

And then, there's Android. It rose at the same time as iOS gained momentum. Both of them totally turned the tables on mobile operating systems over the last four years. And so did Mozilla's efforts change. Firefox for Android has been in development since some time in late 2009, and its first releases were not very well received.

Our struggle with Firefox for Android, which was always very slow to start and very sluggish to use, took a massive turn for the better when the team took chances and decided to completely throw away the XUL-based UI and replace it with one written in native Android widgets and Java. The only view that is not native Android is the one displaying the important bits: the web content. After all, everything Mozilla does is about the web, and the surrounding UI is just the necessary means to enable everyone to enjoy it.

As I wrote earlier this year, this opened up the possibility for accessibility for Firefox for Android. All previous efforts described had no, or only a little, chance to become accessible. But since the rewrite, since October 2011, the accessibility team, with the help from the mobile team, has managed to make Firefox for Android accessible to blind and visually impaired users on all versions of Android we support, including Android 4.1/4.2 Jelly Bean, with the recent release of Firefox 17 to the Google Play Store.

This and the accessibility module's test coverage are probably the two biggest goals I have helped the project accomplish. There are many others, like the accessibility of Firebug, which are definitely worth mentioning, but these are the two I consider most valuable for the project. The test coverage helped us strengthen our over-all module so that it nowadays is rock-solid and can support all the platforms we need it to. The different platform-dependent layers are also getting stronger, with Windows and Android probably being the most stable, and Linux following close behind.

And this brings me to the one thing that I consider the most problematic part of my work: OS X. While it does have basic VoiceOver support now, since version 16 even enabled by default, it’s still not nearly as good as it should be, certainly not nearly as good as Safari is on the Mac. In the last five years, I managed to drive only small improvements. While each of these is a great success in itself, it still needs a lot of attention.

There are other things that worry me, too, but those are not under my direct influence and take several organizations to complete. WAI-ARIA 1.0, for example. It was almost at recommendation level when I started five years ago, and it still hasn't reached the 1.0 stage. Things are dragging and dragging, and it sometimes feels like it will never see the 1.0 release. I'm convinced that it will, eventually, but this is probably the longest release cycle I've ever been involved in, more or less heavily!

In the last five years, I also ventured out into various other areas. In 2011, a dedicated accessibility team was formed at Mozilla, which now consists of six members in total. I moved from the QA team to this accessibility team, and while QA is still the most important part of my work, it's not the only one any more. I also oversee which of our work needs backporting to the Aurora and Beta branches, and work with release management to make this as smooth as possible for everyone, communicating what is needed and why. With not everybody being as fluent in accessibility as our team is, communication is key to success here.

Speaking of "accessibility literacy": I'm happy to see that over-all awareness of accessibility requirements has spread widely throughout the Mozilla project. I've always been available across all teams for questions and input on accessibility matters. While in the beginning this was mostly me being proactive, reaching out to others and planting the seeds, nowadays I'm being pinged from everywhere and anywhere within the massively grown organization on matters and projects. This is absolutely awesome!

As for the outlook on 2013, the one big exciting piece is going to be Firefox OS. This is a project that will be our companion at work for quite some time to come. And here, it’s not just going to be reactively adding accessibility to something that’s already there, but actively designing, implementing and testing new interaction models while taking the Mozilla platform to its own level of an operating system. I’m excited as hell, my brain is continually super-charged with ideas, and I can’t wait to talk about first results on this blog!

Also, Firefox OS work will involve a lot of evangelism with web app developers to make sure they have the right tools and documentation to deliver accessible Firefox OS apps and provide a seamless experience for all their potential users.

Let’s rock!

Posted in General, Mozilla | 2 Comments

A top 100 killer web accessibility blog

Last night, I received an e-mail from Jimmy Atkinson, owner of the Web Hosting Database, who informed me that this blog is now listed in the 100 killer web accessibility resources, blogs, forums and tutorials article. I must admit I’m totally blown away by this, and would just like to whole-heartedly thank Jimmy for this recognition!

I highly recommend you check out this resource; it's a great list of places to look for accessibility information on the web! And I feel extremely honored to be listed among those great sites! Thanks again!

Posted in Accessibility | 1 Comment

An overview of accessible app.net clients

In December of 2011, I wrote about the accessibility of social networks, or rather, how sad the picture looked. Since then, not much has changed in the networks mentioned there, except a good update to the Facebook app and some improvements to their site, and some improvements to the Twitter app for iOS and some tweaks to their desktop web site that make usability a little better. But the over-all picture is still roughly the same as almost a year ago.

But there's a new kid on the block, and it's turning into an accessibility success story! app.net started out as a social network alternative that puts users, not advertising companies, first. As such, it also has a different business model: app.net is a paid service. For a monthly fee of US$5, or an annual fee of US$36, one can join as a user and enjoy a feature set that's getting richer with every passing week. The client application landscape is also turning out quite exciting. And because some people in the accessibility community jumped on the bandwagon and contacted authors early on about accessibility, the number of accessible clients is growing steadily, by now easily outnumbering good accessible Twitter clients. Furthermore, there are no such restrictions on app.net as there are on Twitter. Where some Twitter clients face imminent death because of Twitter's new API rules, the app.net client landscape is thriving even only three months after the service started.

In this blog post, I will cover accessible clients for the app.net service, ordered by platform. It will be updated with new information as I become aware of it. I will also mention some apps in each platform’s “Other” section that I’ve tried and not found accessible, or which have problems severe enough to prevent productive use. I will concentrate on full-featured clients, and will not cover services like IFTTT since they merely act as intermediary between an actual client and the user.

But before I start and you want to test an application or two, be aware that you need an app.net account, and that you have to pay a fee to get on board. app.net is not free!

Web

Being a Mozillian, it’s obvious that I’ll start out with clients for the web. :-)

Alpha

Alpha is app.net's own offering of a web client, and it's the most accessible one out there. It's simple, easy to use, and its controls are not hidden behind mouse hover effects or the like. Everything is reachable using only the keyboard, and the markup, while not perfect, is screen-reader friendly enough to use the site efficiently. It also works quite nicely with Firefox for Android.

It could use some more structure, though, like a semantically unique starting and ending point for each post, other than a simple div, that would allow screen reader users to jump quickly from post to post using their preferred quick navigation feature.
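For example (purely my own sketch, not Alpha's actual markup), each post could be wrapped in a container introduced by a heading, so screen reader users could jump from post to post by heading or landmark:

<article>
  <h2>Jane Doe</h2>
  <p>Hello app.net!</p>
</article>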

quickApp

quickApp is also a decently accessible client. It's a bit more shiny and has more controls in its user interface than Alpha, but except for some buttons, which are labeled with titles but lack alt text, it's OK. It also lacks the ability to quickly jump from post to post.

NoodleApp

NoodleApp is an app.net client built by a Mozillian. Initially, it had some problems, but with a recent update, its interactable items became links. It, too, could use some semantics that would allow screen reader users to jump from post to post using quick navigation methods. NoodleApp does support the letter shortcuts j and k to move a visual focus from post to post, and other shortcuts, such as r to reply, then operate on that post, but screen readers aren't told yet that this is happening. Focus must be set to somewhere within these items, or screen readers won't notice; a simple CSS-generated, differently colored frame is not tracked.

Other than this, and with normal screen reader techniques, NoodleApp is now quite usable. One thing to note is that, if you reply to a post, you're automatically placed in a layer that shows the conversation you just joined with your reply. To get back to your normal stream, hit Escape. You may have to pass the key through so your screen reader's virtual buffer does not catch it.

Others

And that’s it. Other listed web apps have more severe problems, and I cannot recommend them for productively using the app.net service at this time.

  • Appeio has lots of unlabeled graphic elements that make it hard to figure out what the controls do. It also offers few structural elements, like headings and such, that would make it easier for blind users to navigate the site.
  • Appnetizens is probably the weirdest of the app.net web clients I've tried. It's full of layout tables, and the most irritating problem is that I don't see an individual post's text. I see all the meta data and such, each in its own layout table, but not the actual text. Add to that basically no alt text for images, be they buttons, graphical links, or other controls.

Other apps listed at the above location are more like alternative and mashup web apps that integrate with multiple social networks and offer different approaches to how one might think about app.net. I haven’t tried any of these.

Browser extensions

These get their own page in the web department, but none of them fits the category of this blog post. I don't use Chrome, so I obviously haven't tried Succynct. All the others offer only minimal functionality, like quickly posting something to app.net, or, in the case of Streamified, require a Google Plus account to use their functionality.

iOS

As with app development in general, and as shown in many studies on the subject, the list of iOS clients is the most thriving of the app.net client lists. Almost all are for the iPhone, a few for the iPad. Unfortunately, none of those for the iPad are accessible at the time of this writing, so the following sections will concentrate on the experience on an iPhone or iPod Touch.

Felix

Felix (App Store link) is a feature-rich and fast app.net client that is, according to sighted friends, also visually quite appealing. In its 1.2 release, the author Bill Kunz added VoiceOver support, which I helped test.

Double-tap a post to bring up its details, where you can reply, re-post and do all kinds of other nice stuff like viewing the conversation, starring the post or even whole conversations and more.

The Compose feature is the middle of the five tabs at the bottom. In posts, you can also add pictures, links, and other annotated content.

The Dashboard gives a great overview of your profile, your muted or starred conversations, muted users etc. Also, the Settings can be found here.

If you really want to use app.net as a power user, Felix is definitely an app worth looking at. It has push notifications, too. But even if you're just a casual app.net user, Felix is a great choice to check out.

hAppy

hAppy is a new app.net client that is very focused around the conversation theme. Its interface is clean and in many aspects, classical. It has five tabs at the bottom, a Compose button at the top right, and a Settings button at the top left. I had the pleasure of working with Dominik Hauser, the author of hAppy, to make sure it is VoiceOver compatible from the start.

To access a post's controls, one simply double-taps either the post text or the author info. The buttons that appear allow you to reply, re-post, star posts, view the conversation, open the user's profile, and access the meta data of a post. The meta data are all clickable items such as mentioned user names, hash tags, links etc.

At the top right of any user's profile, one can switch between the display of numbers of posts, followers and followings, and a series of actions one can perform, like following/unfollowing, muting users etc.

Dominik also blogged about his experience making hAppy accessible for VoiceOver users, which provides some great insight into how the UIAccessibility protocol translates into actual work items for a developer.

This is a recommended app if you're just getting started with app.net, or want to center your activities around conversations primarily and don't care much for photo uploading and other more advanced features. These will without a doubt also appear in future versions of hAppy, but right now, it's centered around what most visually impaired users are interested in the most: text-based communication. Kudos to Dominik for a great first release!

Riposte

Riposte (App Store link) is a feature-rich new ADN client that features full VoiceOver accessibility from its very first release. It has no tabs; instead, all features are hidden behind a menu that is revealed by double-tapping a button at the top left. Riposte features an interactions view that shows recent stars, re-posts, followers and mentions. It also features multimedia uploading to various services.

Double-tapping a post opens its details view, with all options, links, mentioned user names and other items fully accessible. Riposte also lets VoiceOver speak many important pieces of information automatically, such as how many new posts were received, and other info that is otherwise only communicated visually.

Riposte is definitely an app I would recommend for both ADN starters as well as power users.

Rivr

Rivr, spelled R i v r, was the first client to deliver an update with VoiceOver fixes in the iTunes App Store. Its interface is unique in that it does not just offer your simple post or reply, but offers different post styles, augmented with semantics to annotate photos, music, locations, or one's current mood. Tony Million, the Rivr author, jumped on VoiceOver support spontaneously when I contacted him on app.net. The result is as stunning to hear as the UI is stunning to look at. VoiceOver will say things like "MarcoZehe posted a photo and said", followed by the text I might have added to that photo. The other annotations are equally human-sounding transcriptions. Tony managed to transfer the visual beauty of Rivr to the VoiceOver experience, too. It's free, with an optional in-app purchase that adds push notifications for a year.

Twiggy

Twiggy is a basic app.net client. It had accessibility support built in from its 1.0 release, but that support was a little buggy. Since version 1.3, this situation has improved a lot, and it can now be used with VoiceOver properly.

Watercooler

Watercooler is a hybrid social client that brings together app.net and Twitter. While it covers the whole feature range of app.net, it only supports the subset of Twitter features that it has in common with app.net. It lacks lists and direct-message functionality, for example, so it's not a full replacement for a dedicated Twitter client. The author says so in the product description. Its initial version came out with full VoiceOver support, except for one or two graphical buttons which are missing labels. Its interface is very layered, with screens upon screens upon screens. If you're familiar with the Twitterrific iPhone app, you get the picture, and Watercooler is even more layered. It's great for some people, and I highly commend the author on including VoiceOver support from the start! My personal style is a bit different. But if you like these kinds of apps, you'll enjoy Watercooler a lot! It costs $4.99.

Others

There are more apps in the pipeline to come out with VoiceOver support. I myself was testing three different iPhone clients, two of which, Felix and hAppy, have reached the App Store with VoiceOver support already. The one I'm still testing is not on the App Store yet, but will come out with VoiceOver support in its initial release, too. That will make six accessible app.net clients! I also know from at least three more app authors that they're planning to include VoiceOver support in upcoming updates, so the landscape will improve a lot over the coming months! I don't remember ever having seen so many accessible Twitter clients at once in the App Store!

Here are some notes on other clients listed in the above location, and which turn up when you do a search for app.net on the App Store:

  • AppNet Rhino currently crashes if VoiceOver is running. According to public posts on app.net by the authors, VoiceOver support is planned for an upcoming update. This would include both iPhone and iPad, which would be a big win!
  • Adian is completely inaccessible. VoiceOver does not speak anything in the UI.
  • Netbot for both iPhone and iPad is totally inaccessible, too. As with its popular Twitter counterpart Tweetbot, there's no saying when, or if at all, accessibility will be added. The authors replied to me on Twitter once that it's on a "future features" list somewhere. I'd say, judging from past experience: Don't hold your breath.
  • Spoonbill is also a client whose UI completely eludes VoiceOver. Whether there will be an update to fix this, I don't know.
  • Snap is quite OK, but none of the graphical buttons are labeled. The posts themselves read fine. One downside is that controls to reply etc. are always visible for each post, so getting from one post to the next takes a lot of swipes to the right or left.
  • Synd immediately crashes on launch when VoiceOver is active.
  • Stream reads fairly well, although with a bit of a weird reading order. Also, the tabs and many buttons aren't labeled. According to a public post by the authors, VoiceOver support is on the agenda for the next release.
  • *Spark also works not too badly, although it's a bit shaky, and its buttons and some of its tabs aren't labeled, either. With a bit of work, they can make this thing run very smoothly with VoiceOver. I only discovered this app at the time of this writing, so I haven't contacted the authors yet to find out where they stand.
  • Nettelator sort of works. It reads posts, but one cannot swipe left or right. The Compose button at the top right has a label of “Button”, the tabs at the bottom are not exposed to VoiceOver at all, and the Compose window itself has some funky behavior with buttons appearing and disappearing magically. Another blind user reported on app.net that Nettelator was crashing for him at startup. So this is also one to be cautious about, since it costs $4.99.

There are a few more on that list, but I haven't tried those. Some also don't sound like your classic client, but rather seem made for specific purposes, covering only a subset of features. If I am missing a client here, please let me know, and I'll be happy to add info as I get it and test things myself!

Android

The list of Android clients is a lot shorter than that of iOS clients, but also here I have some positive things to report.

Dash

Dash is a native Android client with quite good support for the TalkBack screen reader. I tested it on my Galaxy Nexus running Android 4.1 Jelly Bean and found that I could do everything with it that I desired. There are a few unlabeled buttons here and there, but the author has already indicated that he'll add the contentDescriptions in one of the upcoming updates.

This application previously went by the name of Hooha.

Others

I haven't tried any of the other clients listed. One word of warning about Dabr, though: it's a web app running in a native wrapper using the simple WebView control, which is largely inaccessible to TalkBack, or has limitations annoying enough that one cannot seriously want to use it.

Mac OS

Appetizer

Appetizer is a feature-rich client with an open-minded author behind it who has steadily improved its VoiceOver support. I use it daily and am very productive with it.

Wedge

Wedge also has some VoiceOver support, but it feels a bit shaky. Some prefer it over Appetizer, so you should definitely try both out yourself!

Others

Other Mac clients were not tested by me.

Windows

There are exactly three Windows apps listed, and I only tried the non-Windows-8 one, only to find out that the list of posts doesn't read as anything but technical gibberish, so it is of no use. Since I don't have a Windows 8 capable machine with a touch screen, I didn't try the Metro apps, since the experience will no doubt be best using a touch screen.

In summary

Especially on iOS, the app.net client landscape is really thriving. It is also great that many iOS developers are aware of VoiceOver, or are open to the idea of adding accessibility support early on. A similar thing can be observed on the Mac, and, where I was in contact with the author, also on Android. How the web apps will evolve remains to be seen. The web app landscape currently shows the wild west that some of the web still is today, even after 13 years of the Web Content Accessibility Guidelines being in existence.

But it can safely be said that app.net is a social network story that has accessibility in the minds of many of those supporting its ecosystem. Here's to hoping some of this enthusiasm and spirit will spill over to others, and that the signs of improvement will continue to grow and strengthen there, too!

If you have questions, feel free to comment! If you feel your app has been underrepresented, please let me know as well! This is a living document and will evolve as more apps on different platforms become accessible.

If you’re already on app.net, you can find me there!

Posted in Accessibility, General | Tagged , | Leave a comment

My recap of the Accessibility Day 2012 in Vienna, Austria

On October 25, I took part in the Accessibility Day, A-Tag 2012, in Vienna, Austria. This semi-annual event brings together people from various technology fields and organisations, as well as end users with disabilities, to exchange, share, and get updated on the latest developments in accessibility. This year’s motto was “mobile accessibility”, and with Mozilla’s recent mobile efforts like Firefox for Android and Firefox OS, this was a perfect venue to share and get feedback about our accessibility development and ideas.

The day started out with a remarkable keynote by Stephanie Rieger. Her talk was titled “Beyond the mobile web”, and she pointed out how seamless mobile devices have made our interaction with the internet. You no longer sit down at your computer, turn it on, go online, do your stuff, and go offline again. You pick up your favourite mobile device, look up something on Amazon, look for a train connection or the traffic situation, look up a celebrity’s info on Wikipedia, etc. You don’t even think about it any more, as long as you have a functioning Wi-Fi or other always-on internet connection.

She then went on to demonstrate, through every-day examples, how the boundaries between the so-called “real life” and the internet are fading, and how little prepared some companies are, or were until recently, for it, with their mobile site requirements still stuck in the days when mobile users were thought of as on-the-go, hectic quick consumers who went online via WAP or the like.

That led her to her core point, which should go out to everyone who provides content: Put Your Content First! Let the users choose how they view it and which device they use, and don’t put your design first, since that will most likely break for many users.

The next talk was by a developer who demonstrated some of the problems he encountered while developing certain browser-based games for various mobile platforms. The capabilities of browsers on various versions of Android, iOS, and even Windows Phone 7 vary greatly and give developers a lot to chew on. Some are even so outdated that modern web features won’t work on them at all. It was noted, after I asked, that Firefox for Android is bridging a huge gap, and that the company had recently started adding it to their testing pool. The central message was: for mobile, develop for the oldest, not the latest and greatest. Considering that Firefox supports Android versions all the way back to Android 2.2 and also devices with ARMv6 processors, this is a paradigm we also follow at Mozilla, while enabling those older devices with new web capabilities in the process.

The next talk was by Sylvia Egger. She demonstrated how she converted an old, static web site of the Vienna museum for historic arts into a responsive site, using technologies like rem units, Sass, and others. Her unmistakable message: put mobile first, then scale up to the desktop! The days are over when mobile is only a niche part of a web presence. Do not make the mistake of building a huge desktop site that you then need to scale down afterwards. As with accessibility, it is nowadays less expensive to start developing for mobile and scale up conditionally to the desktop world.

After the lunch break, Sindre Wimberger showed how he developed a responsive and accessible navigation system based on the Austrian Open Government Data initiative, a collection of map and other navigational meta data that allows people with and without disabilities to better navigate around Vienna. He was using pure web technologies for this and demonstrated a few important usability aspects like keyboard access, the right contrast for visually impaired users, etc.

He was directly followed by a gentleman from the Austrian government who talked a little about the other side of that project, how the open government data is put together. He also called out to the community to help make the data better if they encountered errors, new obstacles like temporary construction work etc. in the city of Vienna.

After the coffee break, Joshue O’Connor talked about the current state of HTML5 and WAI-ARIA accessibility, and provided some advice on using HTML5 where it makes sense, but not being shy about falling back to HTML 4.01 if that perfectly suits the purpose. I sort of disagree with parts of that statement, since using the HTML5 doctype makes sure WAI-ARIA validates, and other quirks of the earlier browser parsing hell have been overcome by the standardized HTML5 parsers.

The highlight of his talk was definitely this video of a guy who uses a single switch to operate his computer and other technology around his house, allowing him to live a much more independent life than he would otherwise. He also plays World of Warcraft, among other things. If you ever wondered what accessibility can be all about, watch this video!

My own talk concluded the day. I gave an overview of Firefox for Android’s coming about, the different stages Mozilla’s mobile development went through before that, and what a great change the move to a native Android widget UI is in terms of accessibility possibilities! If you want to read more about this, go to this blog post.

I demonstrated briefly what Firefox for Android sounds like, letting the UI and two web sites talk a little on a Nexus 7 running Jelly Bean.

I then went on to explain what Firefox OS is and what its accessibility story is going to be. I recommend this excellent Mozilla Hacks post for more information, better put than I could ever write it here.

After the official day was over, I spent some time talking to people. Mozilla’s venture into the mobile space in particular, with the accessibility team aligned with it, inspired quite a few ideas and brought about a lot of confidence in the effort. Firefox OS was perceived as a source of inspiration. Someone half-jokingly remarked that Sylvia should do a talk on theming Firefox OS at the next Accessibility Day a year from now. Knowing her, she’ll probably do it! :) And all that without me even being able to demonstrate it live, since I didn’t have a compatible device to flash it onto, and my dev/qa device didn’t arrive in time. If that is not inspiration, I don’t know what is! It felt amazing to hear people pick up so positively on the idea and already draw energy for their own ideas from it.

I believe this was a good day to be at. I learned a good bunch of new things and was also able to get across some of the things we at Mozilla do to an enthusiastic group of developers and users.

Posted in Accessibility, Mobile | Tagged , , | 3 Comments

Accessibility – what is it good for?

There are those days when you watch a discussion unfold on Twitter, and a point is reached where a statement is made that leaves you more or less speechless for a while.

In this case, it was a discussion started by a German web developer who had to review some applicants for his company, young minds who are supposed to enrich the team they’re joining. He himself is very well versed in accessibility and has infused the rest of the company with that spirit. He stated more than once how surprised he was at how little these young applicants knew about even the most basic rules of web accessibility, such as headings, form element labeling, and alternative texts for images. Others chimed in, encouraging him to keep doing what he was doing, and also to advertise it, since it clearly is something that still sets this web dev company apart from many others.

Others chimed in as well, in particular the CEO of one web dev company who stated that accessibility hasn’t played a part in his thinking for over ten years, followed by an apology. He closed with the following tweet, which basically brought the whole discussion flow to an instant halt:

http://twitter.com/molily/status/260412411339759616

Accessibility isn’t part of the recent HTML5 and CSS3 movement. Today beginners don’t get in touch with a11y.

And this was the point that left me speechless for a while, too. Here I am, working at Mozilla, a not for profit organization that has accessibility in its manifesto, that aspires to keep the web open and accessible to everyone. I am fortunate to be a known speaker in the German-speaking world and beyond, and this particular person even watched me talk at a German web conference in December of last year. But still that statement!

I talked with my partner about this (we had also covered the general topic in the past), and the statement confirms a feeling that is causing frustration among many web accessibility evangelists: we’ve all been teaching and preaching and begging for the basic principles since when? 2000? Even earlier than that? Let’s say for roughly the past 15 years. The HTML5 committees have how many accessibility-related task forces, working groups and what not? I lost track. And here a web developer comes along and simply states that young people don’t get in touch with accessibility at all these days, and that it isn’t part of the recent HTML5 and CSS3 movement.

After asking him how he arrived at that conclusion, he confirmed my feeling that had dawned slowly, but that, for whatever reason, I had not allowed to reach the surface of my thinking completely:

http://twitter.com/molily/status/260432418958364673

Sure there’s discussion in the committees. But no mainstream HTML/CSS site is covering that. It’s not part of the current agenda.

And this is the problem! Right there! Accessibility is a niche. Even though 20 percent of the US population have one form of disability or another, and the number of elderly people is growing year by year, accessibility is, in the broad population’s thoughts, a niche. An extra feature that one can put on an agenda or on a feature list that will never be dealt with, something to keep in mind if and when there’s time.

In addition, the accessibility community is keeping it there. There’s a circle of people who know all this stuff, who meet at four or five conferences a year and tell each other their newest discoveries, applaud each other, and then go on to fight about longdesc on the zillion W3C mailing lists.

But accessibility never reaches the mainstream. Books are published specifically about accessibility. None of these topics ever make it into standard best-practices books. There are special guidelines for web content accessibility, a checklist that scares the hell out of everyone who so much as looks at the mere size of the document.

There are sites like WebAIM that document progress in web accessibility, or lack of it, in annual screen reader surveys that show roughly the same picture since they were first started in 2008.

Yes, there have been some advances in some content management systems that incorporate more semantically correct and guideline-conformant coding here and there. And yes, Flash is slowly dying, replaced by Canvas which needs a lot of extra work to make it accessible.

This blog post is by no means about diminishing the accomplishments the accessibility community has made. But we need to go beyond that! We need to leave our comfortable niche and turn the accessibility extra into the standard way of doing things. Make people use headings, correct form element labeling, and other such things simply because it is the right thing to do and benefits everyone, not because “it’s an accessibility requirement”. Accessibility needs to finally shake off the smell of being an unloved burden needed to meet some government criteria. Every book any web dev buys must simply state, as a best practice and without mentioning accessibility at all, that for labeling an input one uses the label element, and that the for attribute of that label element needs to point to the id of the input being labeled. As a test case, state that this way a user can also click on the label to place the cursor in the field. Don’t bother people with screen readers at all. They don’t need to know about them for these things.
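
To illustrate how little that best practice asks of a developer, here is a minimal sketch (the field name is made up for the example):

    <!-- The label's "for" attribute points to the input's "id".
         Clicking the label now places the cursor in the field for everyone,
         and screen readers announce the label along with the field. -->
    <label for="email">E-mail address</label>
    <input type="email" id="email" name="email">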

We must get to a point where teachers give their students a lower grade if they deliver semantically incorrect work. An excuse like “but it works” should not be enough to get a good grade.

If we don’t do that, if we do not manage to kick ourselves and each other in the butt and get the accessibility movement into the mainstream world, if we do not manage to make it transparent except maybe for the edge cases, if it does not lose its scary aspects, its smell of the black sheep, if it stays in its niche and we meet at the same conferences in five or ten years still, then the lines from that Bruce Springsteen song will indeed ring true: “What is it good for? Absolutely NOTHING!”

Posted in Accessibility | Tagged | 35 Comments

Support for TalkBack’s Jelly Bean Explore By Touch now in Firefox for Android nightlies

This is just a quick shout-out to you early adopters that, as of the August 22, 2012 nightly build of Firefox for Android, support for Explore By Touch, Jelly Bean style, has landed and is now working. If you use a Nexus 7 or other device that already has Android 4.1 Jelly Bean, you can explore content by dragging your finger, or go sequentially by using the swipe left and right gestures. Activate an item by first setting accessibility focus to it via one of the two means described, and then perform a double-tap anywhere on the screen.

You can get the newest version of the Firefox nightly build either through Software Update (Menu, Settings, About Nightly, Check for Updates link), or by visiting the Nightly builds download page and downloading the standard Android build from there. You will probably not want the ARM V6 build, since that is for devices with that older processor running Android 2.3 or earlier.

Have fun, and feel free to leave us feedback as usual!

Posted in Accessibility, Firefox, Mobile | Tagged , , , , , | 4 Comments

Are web apps accessible enough to replace desktop applications any time soon?

I know, reflections on things usually happen at year’s end, but to be honest, this blog post has been in my head for the last two and a half years, and has thus “seen” a number of year-ends, so I felt it’s now finally time to put it in writing.

I’ve been with Mozilla since December of 2007 and have seen quite a number of things happening since I started.

  • I was there when Firefox 3 came out, a huge leap forward in providing accessibility for modern, ARIA-enabled web content.
  • I saw the birth of Canvas accessibility and how it finally saw the light in Firefox 13.
  • NVDA has matured into a true free and open-source alternative that can be used in productive and home environments around the world.
  • Accessibility on mobile devices has grown from a very niche market with only a handful of devices to one that is being spread all over the world on millions and millions of devices via iOS and Android, and soon, Firefox OS as well if I have any say in it.

But has all this really managed to revolutionize the way we use the web if we require accessibility to do so? Has the web really grown to become a full desktop application replacement?

It saddens me to say this, but no, it hasn’t, at least for me.

A few examples

And why is that? Because of the efficiency with which I can get work done in desktop and even mobile applications, compared to their browser-centric counterparts. Let me give you a couple of examples:

E-Mail

Despite my bang-up review of the Yahoo! Mail upgrade and its truly desktop-like touch and feel, I haven’t switched to it, primarily because I use so many e-mail accounts from other providers, with folders/labels and what not, that I didn’t want to switch it all to POP mail and thus lose the easy access to them from all my devices. Aside from Yahoo! Mail, there is no compelling web mailer that I could conceivably imagine using. Gmail is by far not ready for productive use by a person using a screen reader and keyboard only, except maybe if you want to tie yourself to Chrome and ChromeVox and deal with the hassle of switching back and forth between that and your screen reader, which you need to turn off for this to work seamlessly. The web mailer of my self-hosted domains is not even close to being called a web app, with everything loading as a separate document and such. And don’t even get me started on the web mailer that’s driving my Mozilla.com e-mail address! That thing was coded in the early 2000s, when there was no ARIA around, and hasn’t been upgraded since. And every web developer knows: crafting accessibility onto something after the fact is always a lot harder and more costly than implementing it right from the start. So should there ever be a rewrite of the Zimbra webmail interface, I sincerely hope it’ll be done with accessibility in mind from the start!

For E-Mail, which is still the primary source of business communication, for me there simply is no alternative to a native desktop or mobile client. Everything else simply takes too long to get to and to use in a productive manner.

Side note: Even Apple still thinks this way. Why else would they have included a new feature in OS X Lion that, once you sign into certain web sites using Safari, offers you to set up e-mail, calendar, instant messaging, and contact accounts for this particular log in automatically?

Twitter

If you follow my Twitter account and watch the applications I tweet from, you’ll hardly ever see “Web” among them. Instead, you’ll see Mac and iOS clients, and sometimes the EasyChirp web client, which is a fully accessible web interface for Twitter. And why do I use these native desktop and mobile apps so much more than EasyChirp or even the Twitter web interface? Again, because of efficiency. In Yorufukurou, which is the only currently accessible Mac client, it takes me exactly one keystroke from the list of tweets to either reply to a tweet or start a new one, and a press of Enter to send that tweet off to the world. In the first case it’s Enter, in the second it’s Tab. Reading a tweet is simply one key press away in the up or down direction. On iOS, my primary mobile OS, it’s a swipe left or right for the previous/next tweet, a double-tap and hold plus the choosing of an action to start a reply, and a simple tap and double-tap of a button in the upper right-hand corner of my tweets window to start a new tweet.

In Firefox using EasyChirp, the sequence is as follows. This is assuming I have EasyChirp in a pinned (or app) tab, which means it automatically loads when I start Firefox:

  1. I have to sign in first. EasyChirp does not keep me signed in, and it doesn’t keep a Twitter-provided authorization for more than half an hour. Signing in goes as follows:
    1. Either find the “Skip to sign in” link, press Enter, then DownArrow, then Enter again to activate the Sign In link. Or: Use NVDA+F7 to open the list of elements, find the sign in link by typing s and i, and press Enter to close the dialog and activate the link. Or: Press NVDA+F to open find, type sign in, and press Enter, hoping to land on the actual sign in link, not the “skip to sign in” one. Press Enter again to activate once you’ve found the right one.
    2. Page loads.
    3. Either sign into Twitter, which involves finding the edit box for the user name, typing it in, tabbing, typing in password, pressing Enter to sign in. Or, continue with next step.
    4. If auto-signed in or just signed in, press B, which is a quick navigation key to find the next button, which takes me to the “Authorize app” button, and press Enter or Space to activate.
    5. Another page load.
  2. Now we’ve arrived in the timeline. Posting a tweet involves these steps:
    1. Press E to go to the next (first) edit box on the page, which happens to be the “What’s happening?” field.
    2. Press Enter to enter focus mode, in other screen readers also called “forms mode”. This is a technical necessity and annoyance having to do with the way browsers have to interact with screen readers on Windows in the most efficient manner possible.
    3. Type my stuff
    4. Because this is a textarea, I have to Tab to the “Post” button and then press Enter or Space.
  3. If I want to read the timeline, it’s a combination of the H and Q quick navigation keys. Every new tweet area starts with a heading that contains the user name, but the actual tweet text is contained within a blockquote element, so I press H to find out who wrote it, and then skip over the user avatar graphic by pressing Q, the blockquote key, to hear what they were tweeting.

Anyone kept count? :-) And this is with an accessible web application! And it doesn’t even take into account that it doesn’t really keep your reading position. In other words, if I come back three hours later, have to re-sign in, and the timeline comes up, there’s no way to get back to the point where I left off so I can read forward in chronological order from the newest tweet I had already seen. My desktop app can simply sit there, collect the new tweets while I do something else, and when I come back, I can simply arrow around to read the tweets in chronological order. My mobile client uses the TweetMarker service to remember where I left off and can start from there upon next launch.

News feeds

I very recently became a big fan of Flipboard for reading my daily news items on the iPhone or iPad. There is, I believe, no web app that can even come close to providing me with that level of comfort. I’ve tried various feed reader extensions for Firefox, and I even tried coping with Google Reader, but this, again, only works well if you’re willing to use Chrome and ChromeVox; other browser/screen reader combinations are limited by the techniques Google Reader uses, which appear to be specifically tailored towards Google’s own offerings. By the way: there’s not even an app on Mac or Windows that gives me the kind of reading and usability comfort that Flipboard does.

Web forums

I am currently switching a community from a privately run mailing list to a web forum. I want to get rid of the ugliness that is the Mailman archive for that mailing list and move to a properly threaded and manageable way of organizing that community. To allow easier access from iOS and Android phones, my forum supports the apps Forum Runner and TapaTalk. Why? Because reading a web forum in a mobile browser is usually not much fun. With all the browser UI in the way, and all the baggage forums keep around, it’s hard to focus on what’s important. Those apps give a much cleaner, less fragmented, and thus more efficient user experience for the forum than any current web offering could provide the user base, myself included.

What would need to change to make this experience better on the web?

There are a lot of things that are in the way of a good user experience especially for keyboard and also for screen reader users. Let me highlight a couple of them:

The web is a very mouse-driven place

All the above examples showed one thing clearly: the web is a mostly mouse-driven place. You can get very far with little effort if you only consider mouse users in your design approaches. As soon as you take keyboard users into account, things become much more complicated. For one, you need to think about a sensible tab order. Second, you need to make sure keyboard focus is always visible so users know where they are. And with the keyboard, it’s less easy to ignore the surrounding browser UI, because keyboard focus may sometimes be there rather than in the web content, for one reason or another.
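
As a minimal sketch of the second point (the selector and colors are just an example, not a prescription), making focus visible can be as simple as not removing the outline and strengthening it a little:

    <style>
      /* Keep keyboard focus clearly visible instead of removing it.
         a { outline: none; } is the classic way to strand keyboard users. */
      a:focus,
      button:focus {
        outline: 2px solid #005a9c;
        outline-offset: 2px;
      }
    </style>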

Implementing proper accessibility for screen readers is not trivial in many cases

Yes, that’s right! While there are quite a number of things that are easy to get right, like making sure your images have alternative text, or providing proper labels for inputs by correctly associating labels with them via the for/id combo, there are other things that can go wrong very easily. These range from properly hiding content visually while still having screen readers read it, to very complex tasks like making rich widgets accessible via HTML, JavaScript, ARIA and CSS.
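
As a small illustration of the first of these, here is a sketch of the widely used off-screen clipping technique (the class name and exact property values are one common variant among several, not the only correct one):

    <style>
      /* Visually hidden, but still read by screen readers.
         display: none or visibility: hidden would hide it from them as well. */
      .visually-hidden {
        position: absolute;
        width: 1px;
        height: 1px;
        margin: -1px;
        overflow: hidden;
        clip: rect(0, 0, 0, 0);
      }
    </style>
    <a class="visually-hidden" href="#main">Skip to main content</a>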

And here’s one of the big problems: the matter is so complex that not even those who create the standards get the specs done easily. ARIA 1.0 has been in last-call review state three times, I think, and it is still not at a final 1.0 state. HTML5 accessibility is going through morphs, removed and added-back elements, heated arguments and what not, and is a very hard matter to get one’s head around, especially if you want to get involved. There are historic design choices that stick with us like a basic MS-DOS kernel sticks with current versions of Windows, and they make web developers’ lives unnecessarily complicated. Recently, a developer asked me why, for example, the primary source for an image’s description has to be the alternative text (alt attribute), when using the title attribute would have the benefit of also giving sighted people a visual cue to what a particular item means. Guess what: I had no good answer for him.

Providing accessibility for native iOS and Android apps is much easier

That’s right! Providing accessibility for those mobile operating systems is, in many cases, a piece of cake. Apple, and lately Google, too, have crafted their native widgets with so many features that it is very easy to provide a visually compelling, yet fully accessible, user experience. Take the above-mentioned Flipboard as an example. If you watched the iOS accessibility track at WWDC 2012, you will have seen that the presenter made a game fully accessible, one that requires dragging “weapons” to moving targets on a playing field, with little more than twenty(!) lines of code. TWENTY lines!

Side note: Mac accessibility, if done with Cocoa widgets and custom views derived from them, is not much harder to implement either.

On the other hand, as with many other pieces of the puzzle, web developers have to deal with different implementations of accessibility features across browsers, and even across some assistive technologies, on all platforms. The web is, despite all efforts, a very incoherent place, not only, but also, in accessibility matters. And this, I believe, is one of the primary reasons the WebAIM screen reader survey #4 still shows no significant increase in people’s perception that the web has become a more accessible place over the past three to four years. There are just so many stumbling blocks and pitfalls that often leave web developers frustrated. Others, although this number is shrinking, aren’t even aware of the various web accessibility efforts and need to be taught from the ground up.

Technical limitations between screen readers and browsers

This primarily applies to Windows, where the accessibility API architecture requires two modes: one for browsing, one for direct interaction with web content. Users have to remember to switch between these modes in many cases, and the browsing mode usually nukes all custom keyboard shortcuts that the web site may define for other keyboard users.

There is a role for such web content called “application”, but it is better used cautiously, as this blog post and the comments below it show.
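
To make that caution concrete, here is a purely hypothetical sketch (the widget and its label are made up) of how narrowly such a role is best scoped:

    <!-- Hypothetical example: role="application" switches Windows screen
         readers out of their browse/virtual mode, so scope it to the one
         widget that really handles its own keyboard input, never to the
         whole page. -->
    <div role="application" aria-label="Drawing canvas">
      <!-- fully keyboard-scripted custom widget lives here -->
    </div>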

So… any good news?

Yes! Despite all this doom and gloom, there have been improvements. jQuery UI is continually adding accessibility to its widgets. Yahoo!’s YUI has had improvements added for years, too. The Dojo Toolkit was the prototyping project for many ARIA-enabled widgets. CKEditor is a fully ARIA-enabled WYSIWYG editor. All of these have emerged or vastly improved since I started working at Mozilla in December of 2007.

One of my projects over the next couple of months is to make sure that the Firefox OS UI and the building blocks for Firefox OS app developers are accessible, so that any app built from standard UI components will have accessibility built in.

I will continue to provide advice, articles and views on the topic of web accessibility and the way it may, or may not, evolve to become a true desktop replacement platform for all users. As I recently said to David Bolter, I love my job. I could see all the above as demotivating factors and think all web accessibility efforts are in vain. But I’m not that kind of person! I, instead, take this as motivation to help drive efforts forward, home in on issues and fix them, and make the web a better place for everyone if I can. And that’s a promise!

Posted in Accessibility | Tagged , , | 9 Comments

Firefox for Android Nightly builds now with Explore By Touch

As of the June 28, 2012 build of Firefox for Android nightly builds, the Explore By Touch feature of the Ice Cream Sandwich version of TalkBack is supported in web content in addition to the already supported browser user interface. Read more about this feature on Eitan’s blog.

Also, more buttons are now labeled so TalkBack can speak them. For example, the button to open the menus on devices that do not have a hardware button for this is now spoken.

So if you run Ice Cream Sandwich, go grab the latest build, try it out and give us feedback!

Happy browsing!

Posted in Accessibility, Firefox, Mobile | Tagged , , , , | 1 Comment

How to get WebVisum working again

Update: As of lunchtime in Europe today, WebVisum is back in full working order. The instructions to edit one’s hosts file to point directly to the IP address of the WebVisum service have therefore been removed from this post. They are no longer required.

Posted in Accessibility, Firefox | Tagged , , | 7 Comments

Quick Navigation keys now in nightly builds of Firefox native for Android

Yes, you heard correctly! Accessible Firefox for Android nightly builds, as of June 13, have quick navigation keys that blind users are most likely familiar with from the desktop screen reader world! In short, these are single-letter key presses that allow a blind user with speech output to quickly skim a page for certain elements. Whereas a sighted person can simply glance at the screen to get an idea of the structure, surfing with a screen reader is much more sequential, and thus most screen readers implement a mechanism that allows a blind person to skim a page nearly as effectively as a sighted person can. This is done by allowing direct jumps to certain types of elements. Especially when one knows a page well, this allows for rapid navigation and interaction.

And the greatest beauty is that it doesn’t matter whether you’re using a physical keyboard, a Bluetooth keyboard connected to your Android device, or the Eyes-Free keyboard in typing mode!

Here’s a list of keys currently implemented. All of them will move in the opposite direction if used together with the shift key.

List of quick navigation keys for accessible Firefox for Android:

  • a: Moves to next named anchor
  • b: Moves to next button
  • c: Moves to next combobox or listbox
  • e: Moves to next text entry or password field
  • f: Moves to next form field (button, combobox, text entry, radio button, slider, checkbox)
  • g: Moves to next graphic
  • h: Moves to next heading of any level
  • i: Moves to next item in an unordered, ordered, or definition list
  • k: Moves to next hyperlink
  • l: Moves to next unordered, ordered, or definition list
  • p: Moves to next page tab (in ARIA-enabled web apps)
  • r: Moves to next radio button
  • s: Moves to next separator
  • t: Moves to next data table
  • x: Moves to next checkbox

This should allow much easier and faster navigation on most web sites. To use them, simply arrow into web content; they do not work while you are in the browser UI. We also do not allow quick navigation while you are focused on a text entry or password field inside a web form, since you’ll want to enter text there. You have to use the directional controller or d-pad to move out of the entry field first, and then use the quick navigation keys. If you’re using the Eyes-Free keyboard, you can switch to typing mode even when you’re not focused on an entry, and use the quick navigation keys.

Happy browsing!

Posted in Accessibility, Firefox, Mobile | Tagged , , , | 3 Comments

Accessibility in Firefox for Android – Some more technical details

In my previous blog post, I focused on the user-facing aspects of the new accessibility features we’re currently building into Firefox for Android. This blog post is about the more technical details of this support.

First, the fact that we’ve come as far as we have in such a short time is thanks to the mobile team at Mozilla taking the leap, in November last year, and moving Firefox away from a XUL-based user interface to one based on native Android widgets.

The first thing this gave us was the accessibility of the native browser UI for free. The only thing that needs to be taken care of is that all graphical buttons and other UI elements follow the Android accessibility guidelines. This includes Explore By Touch, which is a new feature of Ice Cream Sandwich. It simply works out of the box!

What this also gave us is better access from JavaScript to the TalkBack feature of Android accessibility, allowing us to easily generate so-called utterances, which are what TalkBack actually speaks.

So while the browser UI was already accessible for the most part, what we had to do was implement accessibility for the web content area. This is a custom view which is not accessible by default. Fortunately, our architecture allows us to easily interface with both the Java pieces of the Android framework as well as our internal APIs, which include accessibility.

To give blind users access to all web content, we had to implement a method of navigation that allows going not only to focusable items such as links or form fields, but to any useful element on a page, like a paragraph, a heading, a graphic with interesting alternative text, and so forth. Because Firefox for Android does not have to provide that by default, this special mode of navigation is active only when accessibility is enabled. Our engine detects whether TalkBack is running, or whether something else triggered accessibility within Android to be turned on, and we react by changing the navigational paradigm accordingly.

The magic behind the navigation is no magic at all really: When accessibility is enabled, our keyboard interceptor goes into a special mode where the module for accessibility, internally called AccessFu, is invoked and does the right thing for each navigational direction.

When the module receives a directional key press event, it makes a call into our internal accessibility APIs to navigate in the given direction. The current position is the basis for movement. The following rules apply:

  • If the directional controller is moved upwards, focus leaves the web content and is returned to the native browser UI. At that moment, handlers for the native UI take over and provide the focus movement logic.
  • If from that UI, a downward key is received, focus is returned back to the web content. If accessibility is on, we detect it, and AccessFu takes over again. The user is then returned to the last position known before leaving the web content. In essence, the position is saved, and the user does not have to start from the beginning of the document.
  • If a right or left directional event is received, a call is made into our internal API, called nsIAccessiblePivot, asking for the next or previous element to move to. Since we’re in accessibility mode, this is a screen-reader-readable element. It may be a paragraph, heading, or graphic, or something that would otherwise be considered focusable, too, like a text field, other form field, button, or link.
  • If the user presses down on the directional controller, a click, jump, or other associated action is performed on the item. A link gets activated, a button clicked, a checkbox toggled etc.

nsIAccessiblePivot is a new interface introduced to our accessibility core APIs that performs the actual search for the next or previous element, based on the given traversal rule, and returns the object that is to be navigated to. This class walks the internal accessibility tree. It is therefore fast as hell and returns the result almost instantly.

The returned result is an accessible object and is again used by AccessFu to query it for name, role, states, action name etc. An utterance, which is nothing other than the phrase TalkBack should speak for this element, is put together and passed on to TalkBack for speaking. Again, AccessFu is written in JavaScript, and it communicates with C++, which is what the core accessibility APIs are written in on the one hand, and with the Java TalkBack/Android Accessibility interface on the other.

Just one more thing: What’s ticking at the very core of these accessibility APIs is the exact same engine that you are familiar with from the desktop. So if you are developing with accessibility in mind and use proper semantics in HTML, JavaScript and CSS, be assured that this will be interpreted by our engine for mobile in the same way as it is for the desktop. So all rules for semantically correct HTML apply in the same fashion as they do for the desktop: Provide alt texts for graphics, associate labels with form controls, use headings, use WAI-ARIA, etc., etc. This is also true if you develop web sites that will be displayed on iOS devices, by the way. Any accessibility engine on a mobile platform gains most from semantically correct HTML, so please use it!
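
As a small, made-up illustration of what that means in practice (the file name and link target are invented), the same markup patterns serve desktop and mobile engines alike:

    <!-- A heading, a named navigation landmark, and a meaningful alt text:
         exactly the semantics our engine exposes on desktop and mobile. -->
    <h1>Concert listings</h1>
    <nav aria-label="Main navigation">
      <a href="/dates">Tour dates</a>
    </nav>
    <img src="stage.jpg" alt="The band on stage, bathed in blue light">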

So what else is to come? We have a number of plans still ahead that we want to realize. First and foremost, we want to get Explore By Touch working inside web content as well. Next up, convenience features that bring browsing onto a more efficient level. Our goal is to provide convenience features like heading or form field navigation, navigation by landmarks and other features you might be familiar with from desktop screen readers or VoiceOver on iOS. As we iron out the initial kinks of our accessibility support, you will see these appear in Fennec (code name for Firefox for Android) nightly builds over the next couple of weeks.

You might now think: “Hey wait, these sound a lot like screen reader features!” And guess what? You’re right! The way the Android accessibility APIs are designed, especially when it comes to custom views, you often don’t have a choice other than to implement at least partial screen reader capabilities yourself. From our standpoint from within the web content area, TalkBack is actually not much more than our bridge to the speech synthesizer. Outside the web content, however, TalkBack is more of a screen reader itself.

I would like to conclude this blog post with a big thank you to Eitan, who has designed the pivot interface and put all this clever architecture together in about half a year. Considering where he started from when Fennec was still fully XUL-based, I think it’s safe to say that we’ve come a long way since then. The whole accessibility team at Mozilla has been providing valuable input to this effort, but Eitan is the driving force behind this. The overwhelmingly positive feedback we received on my blog and on the Eyes-Free mailing list is rewarding and motivating to make accessible Firefox for Android even better, making it the most accessible browser on Android out of the box!

Posted in Accessibility, Mobile | Tagged , , , , | 9 Comments

First round of accessibility support for Android in mobile Firefox

Lots of exciting stuff happening at Mozilla these days! The accessibility team is ramping up its efforts on multiple fronts.

I am pleased to announce that nightly builds of mobile Firefox, code-named Fennec, now have a first implementation of Android accessibility built in. All you need to do is turn on TalkBack, or any accessibility service for that matter, and it will start working with a directional controller, or an emulation thereof.

Here are the steps to try this out on an Android phone!

Set up accessibility and TalkBack

If you already have TalkBack working on your Android device, you can safely skip this step, or skim it in case you find some useful hints. If you’ve never used TalkBack or any of the Android accessibility features, you should follow these instructions to make sure Fennec will talk to you in the end.

  1. First, make sure you have an Android device that meets the system requirements for Firefox mobile.
  2. If you have a good Android version and device, log onto the Android market and get the following components installed: TalkBack, SoundBack, KickBack, and optionally the Eyes-Free Keyboard.

    Note that if you use Android 4.0, KickBack and SoundBack are integrated into TalkBack, so you only need to install that to get the functionality of all three.

  3. Activate accessibility through the following steps:
    1. Go to Settings
    2. Select Accessibility
    3. Enable Accessibility checkbox
    4. Enable TalkBack, KickBack, SoundBack checkboxes

    Note that, if you’re blind and install this for the very first time, you will need sighted assistance to do this.

  4. The result should be that TalkBack, SoundBack and KickBack give you spoken, sound, and haptic feedback when navigating with the directional controller.
  5. If you also installed the Eyes-Free keyboard, refer to its documentation on how to use it.

Downloading and installing Fennec

  1. Allow installations from outside the Android market by enabling the option in Settings, Applications.
  2. Download the Nightly build of Fennec (Firefox mobile) from the Nightly builds download page.
  3. Install Fennec.

Usage

When you launch Fennec, you are taken to the home page, where you can go to various Mozilla resources. Use your directional controller in the left and right directions to navigate the content. Press up to transition to the surrounding user interface, where the Awesome Bar you’ve come to know from the desktop version, and other things, live. Press down from the Awesome Bar to go back to the content.

While in the content area, you will hear semantic information such as links, headings, graphics, form fields etc., list item information and such. We also announce information about required and invalid form fields, if a text field is multi-line, in which case it is called a “text area”, etc.
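
Markup like the following (the field names are made up, purely for illustration) is what produces those announcements: a required field, one flagged as invalid via ARIA, and a multi-line field announced as a “text area”:

    <label for="name">Name</label>
    <input id="name" type="text" required>
    <label for="age">Age</label>
    <input id="age" type="text" aria-invalid="true">
    <label for="bio">About you</label>
    <textarea id="bio"></textarea>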

The menu opens via the menu button on most Android devices. It talks, and is also navigable via the directional controller.

Press down on the directional controller’s button to activate the item that was last spoken. This is true for both content and the menu/UI.

Things that do not work

You cannot explore the web content by touch yet. If you have Ice Cream Sandwich, Explore By Touch will only work in the surrounding browser UI. You can use the D-Pad emulation to simulate a directional controller that will still allow you to navigate the web content that way.

There is currently only the navigational controller to use. So if your device has a physical keyboard, you cannot currently use things like pressing h to navigate to headings, for example.

You cannot yet navigate inside text boxes. The cursor will move as you type characters inside a text box, but you will not get any speech feedback yet.

Stay up to date

Nightly is updated on a daily basis. So is our accessibility module. To keep up to date:

  1. Open the menus.
  2. Select the “More” item.
  3. Open the Settings menu item.
  4. Choose “About Nightly”.
  5. Navigate to the “Check for update” link and activate it.
  6. If there is one, an alert will be added to the system alerts. TalkBack will say “Download and install”.
  7. Swipe down on the touch screen to open the alerts.
  8. Navigate to the nightly update alert and press your navigational button.
  9. Allow it to install and restart your browser when prompted.

A word of caution

This is early stage development software, and you will most likely find stuff that doesn’t work yet or is not navigable currently. Do not hesitate to tell us about it!

If you decide to set up Sync so you get your bookmarks, history, and passwords from your desktop profile, be aware that the usual warnings about your data apply here as much as they do to any pre-release software!

Note also that, because this is at an early development stage, we may change things around depending on both our own usability experience and feedback from the community.

Providing feedback

Naturally, we would like to hear from you! We are super excited about this new feature, and we know that there are a lot of things that do not yet work as expected. If you find any, please let us know! The easiest way is to comment on this blog, or if you are familiar with our bug tracking system, feel free to file a bug.

As always, we really look forward to your feedback!

Updates

May 10, 2012
Updated the navigational instructions. The up and down keys now navigate between the content and the surrounding UI such as the awesome bar. The left and right keys are now used to traverse the content. Also noted that navigating text boxes is currently not possible yet.
June 1, 2012
Updated information about Explore By Touch support and how the behavior currently is in text fields.
Posted in Accessibility, Firefox, Mobile | Tagged , , , , , , | 9 Comments