Important talks: Sacha Judd’s “How the tech sector could move in One Direction”

I just watched a very important talk from last year’s Beyond Tellerrand conference in Berlin. Sacha Judd (@szechuan) delivered her talk How the tech sector could move in One Direction at this conference and, a few days ago, at Webstock in New Zealand. It is a great example of how a talk can be insightful and exciting while challenging your biases at the same time.

You can watch the video, read the transcript and get the slides.

I’ve had this talk on my “to watch” list for a long time and the reason is simple: I couldn’t give a toss about One Direction. I was – like many others – of the impression that boy bands like them are the spawn of commercial satan (well, Simon Cowell, to a large degree) and everything that is wrong with music as an industry and media spectacle.

And that’s the great thing about this talk: it challenged my biases and it showed me that by dismissing something as not for me, I also discard a lot of opportunity.

This isn’t a talk about One Direction. It is a talk about how excitement for a certain topic gets people to be creative, communicate and do things together. That their tastes and hysteria aren’t ours and can be off-putting isn’t important. What is important is that people are driven to create. It is important to analyse the results and find ways to nurture this excitement, and, where possible, to channel it so that these fans can turn the skills they learned into a professional career.

This is an extension of something various people (including me) have been talking about for quite a while. It is not about technical excellence. It is about the drive to create and learn. Our market changes constantly. This is not our parents’ 1950s generation, where you got a job for life and died soon after retirement, having honed and used one skill for your whole lifetime. We need to roll with the punches and changes in our markets. We need to prepare to be more human, as the more technical we are, the easier we are to replace with machines.

When Mark Surman of Mozilla compared the early days of the web to his past in the punk subculture, creating fanzines by hand, it resonated with me, as this is what I did, too.

When someone talked about One Direction fan pages on Tumblr, it didn’t speak to me at all. And that’s a mistake. The web has moved from a technical subculture flourishing under an overly inflated money gamble (ecommerce, VC culture) to being a given. Young people don’t find the web. They are always connected and happy to try and discard new technology like they would fashion items.

But young people care about things, too. And they find ways to tinker with them. When a fan of One Direction gets taught by friends how to change CSS to make their Tumblr look different, or to use browser extensions to add functionality to the products they use to create content, we have a magical opportunity.

Our job as people in the know is to ensure that the companies running creation tools don’t leave these users in the lurch when the VC overlords tell them to pivot. Our job is to make sure that they can become more than products to sell on to advertisers. Our job is to keep an open mind and see how people use the media we helped create. Our job is to go there and show opportunities, not only to advertise on Hacker News. Our job is to harvest these creative movements and turn them into the next generation of caretakers of the web.

I want to thank Sacha for this talk. There is a lot of great information in there and I don’t want to give it all away. Just watch it.

View full post on Christian Heilmann


Let’s explain the “why” instead of the “how”

One thing that bugs me a lot is that in the publishing world about the web we have a fetish for the “how” whereas we should strive for the “why” instead.

What do I mean by that? Well, first of all, I count everything that is published as important. This could be a comment, a tweet, a blog post, a presentation, a screencast – doesn’t matter. If it ends up on the web it will be linked to, it will be quoted, it will be taken as “best practice” or “common usage” and people will start arguing over it and adding it to what we call “common knowledge” (spoiler: there is no such thing).

Advice duck telling people to go to W3Schools and learn web development to make money to finance their real studies
A meme that did the rounds on the web some time ago: web development as a means to make a lot of money to finance your real studies, achievable by following the courses on W3Schools. To me, this cheapening of a whole profession is a direct result of giving people solutions instead of inviting them to understand what they are doing.

That is why publications that answer the “how” without also explaining the “why” are dangerous. We explain how something is done and we pride ourselves when this is as short and simple as possible. We do live coding on stage showing how complex things can be done with one small command and five different build systems. We show how simple things are when people use this editor or that development tool or this browser: everything is just a click away and we get amazing insight into the things we do.

Assumed stamina and interest

We expect people who learn the “how” to be sharp and interested enough to get to the “why” themselves. Sadly enough this is hardly ever the case. Instead, the quick “how” also known as the “here is how you do it” becomes an excuse not to even question practices and solutions any longer. “Awesome technology expert $person said and showed on stage that this is how it is done. Don’t waste your time on doing it differently” is becoming a mantra for a lot of new developers.

Moldy advice

The issue with this is that “best practices” are getting more and more short-lived and in many cases very dependent on the environment they are applied in. What fixed performance issues in a Web View on iPhone 3 might be a terrible idea on Chrome on a Desktop, what was a real issue in JavaScript 10 years ago might not even make a minimal difference in today’s engines (string concatenation anyone?).
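The string concatenation example is a good illustration. The old advice was to collect parts in an array and join them instead of repeatedly using `+=`, because concatenation was slow in early engines; modern engines optimise both so heavily that the difference rarely matters. A minimal sketch (function names are mine, purely for illustration):

```javascript
// Two ways to build one string from many parts. Old "best practice"
// said joinParts() was the fast one; in today's engines plusParts()
// is just as fine, which is exactly how "moldy advice" happens.
function plusParts(parts) {
  var result = '';
  for (var i = 0; i < parts.length; i++) {
    result += parts[i];
  }
  return result;
}

function joinParts(parts) {
  return parts.join('');
}

console.log(plusParts(['a', 'b', 'c'])); // "abc"
console.log(joinParts(['a', 'b', 'c'])); // "abc"
```

Both produce the same output; which one is faster depends entirely on the engine and the year you ask.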

What is “the why”?

The “why” can be a few different things:

  • Why does doing something in the way we do it work?
  • Why should you use a certain technology?
  • Why is it important to do this, but also understand the environment it is most effective in?
  • Why is using something simple and effective but also dangerous depending on outside factors?
  • Why is a new way of doing something more effective than an older way of doing it?
  • Why is it important to understand what you do and how do you explain to other people that there is a reason to do it?

Explaining the “why” is much, much harder than the “how”. Telling someone to do something in a certain way is giving orders, explaining a procedure. Explaining why it should be done this way means you teach the other person, and it also means you need to deeply understand what you do. The “how” can be repeated by someone who doesn’t really know how something works – and in many cases is. The “why” means you have to put much more effort into understanding what you advocate. The “how” is what led to boring school books and terrible training folders. The “why” leads to interactive and memorable training experiences.

W3Schools – the kingdom of the how

Getting rid of the fetish of the how is an incredibly frustrating uphill battle. The biggest manifestation of the “how” is W3Schools.com. This site shows you how to do something – even interactively – and has thus become a force majeure in the web development world. It gives you a fast, quick answer to copy and paste without the pesky “understand what you are doing” part. This leads to people defending it tooth and nail every time some righteous people set out to kill it. All of these efforts are doomed to fail if they mean setting up yet another resource that will “do things better than W3Schools”. The reasons sites like W3Schools work are:

  • They give you a short answer and make you feel clever as you achieved something amazing without effort
  • They are easy to link to as an answer to a question without having to explain things
  • They are easy to embed into a tutorial or article as a quick citation to “prove a point”
  • People used them for years and they grew constantly which is something that Google loves

In other words, they are a useful reminder and lookup resource for people who already know the “why” and simply forgot the “how”. Thus, they look like a power tool the experts use and are very tempting for beginners to use as well. Much like buying the same shoes as Usain Bolt should make you an amazing runner…

The only way to “kill W3Schools” is to support resources that explain the how and the why, like MDN or WebPlatform.org – not to create more resources that have the right heart but are doomed to fail, as maintaining a documentation resource is an amazing amount of work. Instead of sending new developers to W3Schools or a Stack Overflow post that explains how something is done quickly, send them to a deep link on those sites. We cannot expect people we point to solutions to care about how these solutions came about. We have to show them the way, not the destination. By sending them to the destination via a shortcut, we deprive them of their own, personal learning experience and we cheapen our job to something anyone can look up on demand.

The “how” becomes outdated – and in many cases dangerous – practice very, very quickly. The “why” remains, as it lights up the way to a solution, a solution that can change over time.

That’s why I’d love people to stop spouting quick answers and let new developers ponder the solution for themselves before telling them a way to do it quickly. We need to learn in order to understand and be empowered to create on our own. You only learn by asking why – let’s be supportive of that instead of feeling smug about pointing out an already existing solution. Web development got to where it is by continuously questioning how we do things and finding ways to make things work. If we stop doing that, we stagnate.

View full post on Christian Heilmann


Detecting touch: it’s the ‘why’, not the ‘how’

One common aspect of making a website or application “mobile friendly” is the inclusion of tweaks, additional functionality or interface elements that are particularly aimed at touchscreens. A very common question from developers is now “How can I detect a touch-capable device?”

Feature detection for touch

Although there used to be a few incompatibilities and proprietary solutions in the past (such as Mozilla’s experimental, vendor-prefixed event model), almost all browsers now implement the same Touch Events model (based on a solution first introduced by Apple for iOS Safari, which subsequently was adopted by other browsers and retrospectively turned into a W3C draft specification).

As a result, being able to programmatically detect whether or not a particular browser supports touch interactions involves a very simple feature detection:

if ('ontouchstart' in window) {
  /* browser with Touch Events
     running on touch-capable device */
}

This snippet works reliably in modern browsers, but older versions notoriously had a few quirks and inconsistencies which required jumping through various detection-strategy hoops. If your application is targeting these older browsers, I’d recommend having a look at Modernizr – and in particular its various touch test approaches – which smooths over most of these issues.

I noted above that “almost all browsers” support this touch event model. The big exception here is Internet Explorer. While up to IE9 there was no support for any low-level touch interaction, IE10 introduced support for Microsoft’s own Pointer Events. This event model – which has since been submitted for W3C standardisation – unifies “pointer” devices (mouse, stylus, touch, etc) under a single new class of events. As this model does not, by design, include any separate ‘touch’, the feature detection for ontouchstart will naturally not work. The suggested method of detecting if a browser using Pointer Events is running on a touch-enabled device instead involves checking for the existence and return value of navigator.maxTouchPoints (note that Microsoft’s Pointer Events are currently still vendor-prefixed, so in practice we’ll be looking for navigator.msMaxTouchPoints). If the property exists and returns a value greater than 0, we have touch support.

if (navigator.msMaxTouchPoints > 0) {
  /* IE with pointer events running
     on touch-capable device */
}

Adding this to our previous feature detect – and also including the non-vendor-prefixed version of the Pointer Events one for future compatibility – we get a still reasonably compact code snippet:

if (('ontouchstart' in window) ||
    (navigator.maxTouchPoints > 0) ||
    (navigator.msMaxTouchPoints > 0)) {
  /* browser with either Touch Events or Pointer Events
     running on touch-capable device */
}

How touch detection is used

Now, there are already quite a few commonly-used techniques for “touch optimisation” which take advantage of these sorts of feature detects. The most common use case for detecting touch is to increase the responsiveness of an interface for touch users.

When using a touchscreen interface, browsers introduce an artificial delay (in the range of about 300ms) between a touch action – such as tapping a link or a button – and the time the actual click event is being fired.

More specifically, in browsers that support Touch Events the delay happens between touchend and the simulated mouse events that these browsers also fire for compatibility with mouse-centric scripts:

touchstart > [touchmove]+ > touchend > delay > mousemove > mousedown > mouseup > click

See the event listener test page to see the order in which events are being fired, code available on GitHub.

This delay has been introduced to allow users to double-tap (for instance, to zoom in/out of a page) without accidentally activating any page elements.
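If you want to observe this delay yourself, a rough sketch (my own, not from the linked test pages) logs the gap between the last touchend and the simulated click that follows it:

```javascript
// Rough sketch: measure the gap between 'touchend' and the simulated
// 'click' that browsers fire afterwards. On a touch device with a
// zoomable page this gap is typically in the region of 300ms.
function tapDelay(touchendTime, clickTime) {
  return clickTime - touchendTime;
}

// Browser wiring; guarded so the helper above stays usable (and
// testable) outside a browser environment.
if (typeof document !== 'undefined') {
  var lastTouchend = 0;
  document.addEventListener('touchend', function (e) {
    lastTouchend = e.timeStamp;
  });
  document.addEventListener('click', function (e) {
    if (lastTouchend) {
      console.log('tap delay: ' + tapDelay(lastTouchend, e.timeStamp) + 'ms');
    }
  });
}
```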

It’s interesting to note that Firefox and Chrome on Android have removed this delay for pages with a fixed, non-zoomable viewport.

<meta name="viewport" content="... user-scalable=no ...">

See the event listener with user-scalable=no test page, code available on GitHub.

There is some discussion of tweaking Chrome’s behavior further for other situations – see issue 169642 in the Chromium bug tracker.

Although this affordance is clearly necessary, it can make a web app feel slightly laggy and unresponsive. One common trick has been to check for touch support and, if present, react directly to a touch event (either touchstart – as soon as the user touches the screen – or touchend – after the user has lifted their finger) instead of the traditional click:

/* if touch supported, listen to 'touchend', otherwise 'click' */
var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');
blah.addEventListener(clickEvent, function() { ... });

Although this type of optimisation is now widely used, it is based on a logical fallacy which is now starting to become more apparent.

The artificial delay is also present in browsers that use Pointer Events.

pointerover > mouseover > pointerdown > mousedown > pointermove > mousemove > pointerup > mouseup > pointerout > mouseout > delay > click

Although it’s possible to extend the above optimisation approach to check navigator.maxTouchPoints and to then hook up our listener to pointerup rather than click, there is a much simpler way: setting the touch-action CSS property of our element to none eliminates the delay.

/* suppress default touch action like double-tap zoom */
a, button {
  -ms-touch-action: none;
      touch-action: none;
}

See the event listener with touch-action:none test page, code available on GitHub.

False assumptions

It’s important to note that these types of optimisations based on the availability of touch have a fundamental flaw: they make assumptions about user behavior based on device capabilities. More explicitly, the example above assumes that because a device is capable of touch input, a user will in fact use touch as the only way to interact with it.

This assumption probably held some truth a few years back, when the only devices that featured touch input were the classic “mobile” and “tablet”. Here, touchscreens were the only input method available. In recent months, though, we’ve seen a whole new class of devices which feature both a traditional laptop/desktop form factor (including a mouse, trackpad, keyboard) and a touchscreen, such as the various Windows 8 machines or Google’s Chromebook Pixel.

As an aside, even in the case of mobile phones or tablets, it was already possible – on some platforms – for users to add further input devices. While iOS only caters for pairing an additional bluetooth keyboard to an iPhone/iPad purely for text input, Android and Blackberry OS also let users add a mouse.

On Android, this mouse will act exactly like a “touch”, even firing the same sequence of touch events and simulated mouse events, including the dreaded delay in between – so optimisations like our example above will still work fine. Blackberry OS, however, purely fires mouse events, leading to the same sort of problem outlined below.

The implications of this change are slowly beginning to dawn on developers: that touch support does not necessarily mean “mobile” anymore, and more importantly that even if touch is available, it may not be the primary or exclusive input method that a user chooses. In fact, a user may even transition between any of their available input methods in the course of their interaction.

The innocent code snippets above can have quite annoying consequences on this new class of devices. In browsers that use Touch Events:

var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');

is basically saying “if the device supports touch, only listen to touchend and not click” – which, on a multi-input device, immediately shuts out any interaction via mouse, trackpad or keyboard.

Touch or mouse?

So what’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities. Specifically, instead of making the decision about whether to react to click or touchend/touchstart mutually exclusive, these should all be taken into consideration as complementary.

Certainly, this may involve a bit more code, but the end result will be that our application will work for the largest number of users. One approach, already familiar to developers who’ve strived to make their mouse-specific interfaces also work for keyboard users, would be to simply “double up” your event listeners (while taking care to prevent the functionality from firing twice by stopping the simulated mouse events that are fired following the touch events):

blah.addEventListener('touchend', function(e) {
  /* prevent delay and simulated mouse events */
  e.preventDefault();
  someFunction();
});
blah.addEventListener('click', someFunction);

If this isn’t DRY enough for you, there are of course fancier approaches, such as only defining your functions for click and then bypassing the dreaded delay by explicitly firing that handler:

blah.addEventListener('touchend', function(e) {
  /* prevent delay and simulated mouse events */
  e.preventDefault();
  /* trigger the actual behavior we bound to the 'click' event */
  e.target.click();
});
blah.addEventListener('click', function() {
  /* actual functionality */
});

That last snippet does not cover all possible scenarios though. For a more robust implementation of the same principle, see the FastClick script from FT labs.

Being input-agnostic

Of course, battling with delay on touch devices is not the only reason why developers want to check for touch capabilities. Current discussions – such as this issue in Modernizr about detecting a mouse user – now revolve around offering completely different interfaces to touch users, compared to mouse or keyboard, and whether or not a particular browser/device supports things like hovering. And even beyond JavaScript, similar concepts (pointer and hover media features) are being proposed for Media Queries Level 4. But the principle is still the same: as there are now common multi-input devices, it’s not straightforward (and in many cases, impossible) anymore to determine if a user is on a device that exclusively supports touch.

The more generic approach taken in Microsoft’s Pointer Events specification – which is already being scheduled for implementation in other browsers such as Chrome – is a step in the right direction (though it still requires extra handling for keyboard users). In the meantime, developers should be careful not to draw the wrong conclusions from touch support detection and avoid unwittingly locking out a growing number of potential multi-input users.

Further links

View full post on Mozilla Hacks – the Web developer blog
