Don’t

Don’t pay me to speak – share instead

No money

I just made an announcement on Twitter about something I’ve been doing for a while – something I’d love more people who enjoy the same privilege as I do to take up:

It is a wonderful situation to be in full-time employment and get the chance to present at events. It is also a tricky one. Your work contract often doesn’t allow any extra income, and even when it does, you need to deal with the taxes and paperwork that come with it. You also don’t want to be the person taking a speaking slot away from someone who does this for a living and is great at it, or from someone who is starting out and needs the pay to be able to afford presenting in the first place. Nor do you want to be picked as a speaker just because you are a freebie for the conference organisers.

Conference organisers are under a lot of pressure these days. They are rightfully asked to offer a diverse line-up and to be open to lots of different people attending. Elitism and gatherings of the privileged are things to avoid. Sometimes it is hard for a small to medium conference to budget for that. It is not enough to offer free tickets. Often the people who could benefit from an event and bring a different point of view can’t even afford to get there.

To help make this easier, I’ve been forfeiting my speaker fees for quite a while. Instead, I ask conference organisers to put the money into efforts that bring people who can’t afford it to the event. It means no paperwork for me and no worries about annoying my employer, and yet it means I am not a freebie presenter.

I hope this helps a bit in making what we have here even better than it is now. Thanks to all the conference organisers who put effort into this.

Photo by Neubie

View full post on Christian Heilmann


You don’t owe the world perfection! – keynote at Beyond Tellerrand

Yesterday morning I was lucky enough to give the opening keynote at the excellent Beyond Tellerrand conference in Düsseldorf, Germany. I wrote a talk for the occasion that covered a strange disconnect we’re experiencing at the moment.
Whilst web technology has advanced in leaps and bounds, we still seem to be discontent all the time. I called this the Tetris mind set: all our mistakes are perceived as piling up whilst our accomplishments vanish.

Eva-Lotta Lamm created some excellent sketchnotes on my talk.
Sketchnotes of the talk

The video of the talk is already available on Vimeo:

Breaking out of the Tetris mind set from beyond tellerrand on Vimeo.

You can get the slides on SlideShare:

I will follow this up with a more in-depth article on the subject in due course, but for today I am very happy about how well received the keynote was, and I want to remind people that it is OK to build things that don’t last and that you don’t owe the world perfection. Creativity is a messy process and we should feel at ease about learning from mistakes.

View full post on Christian Heilmann


Animating like you just don’t care with Element.animate

In Firefox 48 we’re shipping the Element.animate() API — a new way to programmatically animate DOM elements using JavaScript. Let’s pause for a second — “big deal”, you might say, or “what’s all the fuss about?” After all, there are already plenty of animation libraries to choose from. In this post I want to explain what makes Element.animate() special.

What a performance

Element.animate() is the first part of the Web Animations API that we’re shipping and, while there are plenty of nice features in the API as a whole, such as better synchronization of animations, combining and morphing animations, extending CSS animations, etc., the biggest benefit of Element.animate() is performance. In some cases, Element.animate() lets you create jank-free animations that are simply impossible to achieve with JavaScript alone.

Don’t believe me? Have a look at the following demo, which compares best-in-class JavaScript animation on the left, with Element.animate() on the right, whilst periodically running some time-consuming JavaScript to simulate the performance when the browser is busy.

Performance of regular JavaScript animation vs Element.animate()

To see for yourself, try loading the demo in the latest release of Firefox or Chrome. Then, you can check out the full collection of demos we’ve been building!

When it comes to animation performance, there is a lot of conflicting information being passed around. For example, you might have heard amazing (and untrue) claims like, “CSS animations run on the GPU”, and nodded along thinking, “Hmm, not sure what that means but it sounds fast.” So, to understand what makes Element.animate() fast and how to make the most of it, let’s look into what makes animations slow to begin with.

Animations are like onions (Or cakes. Or parfait.)

In order for an animation to appear smooth, we want all the updates needed for each frame of an animation to happen within about 16 milliseconds. That’s because browsers try to update the screen at the same rate as the refresh rate of the display they’re drawing to, which is usually 60Hz.
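To make that budget concrete, here is a minimal sketch (my illustration, not part of the article’s demos) that uses requestAnimationFrame to flag frames that overshoot the roughly 16ms budget:

// Minimal jank detector: warn whenever a frame takes longer than ~16ms.
let last = performance.now();

function checkFrame(now) {
  const delta = now - last;
  if (delta > 16.7) {
    console.warn('Frame took ' + delta.toFixed(1) + 'ms – over the 60Hz budget');
  }
  last = now;
  requestAnimationFrame(checkFrame);
}

requestAnimationFrame(checkFrame);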

On each frame, there are typically two things a browser does that take time: calculating the layout of elements on the page, and drawing those elements. By now, hopefully you’ve heard the advice, “Don’t animate properties that update layout.” I am hopeful here — current usage metrics suggest that web developers are wisely choosing to animate properties like transform and opacity that don’t affect layout whenever they can. (color is another example of a property that doesn’t require recalculating layout, but we’ll see in a moment why opacity is better still.)

If we can avoid performing layout calculations on each animation frame, that just leaves drawing the elements. It turns out that programming is not the only job where laziness is a virtue — indeed animators worked out a long time ago that they could avoid drawing a bunch of very similar frames by creating partially transparent cels, moving the cels around on top of the background, and snapshotting the result along the way.

Example of cels used to create animation frames

Example of creating animation frames using cels.
(Of course, not everyone uses fancy cels; some people just cut out Christmas cards.)

A few years ago browsers caught on to this “pull cel” trick. Nowadays, if a browser sees that an element is moving around without affecting layout, it will draw two separate layers: the background and the moving element. On each animation frame, it then just needs to re-position these layers and snapshot the result without having to redraw anything. That snapshotting (more technically referred to as compositing) turns out to be something that GPUs are very good at. What’s more, when they composite, GPUs can apply 3D transforms and opacity fades all without requiring the browser to redraw anything. As a result, if you’re animating the transform or opacity of an element, the browser can leave most of the work to the GPU and stands a much better chance of making its 16ms deadline.

Hint: If you’re familiar with tools like Firefox’s Paint Flashing Tool or Chrome’s Paint Rectangles you’ll notice when layers are being used because you’ll see that even though the element is animating nothing is being painted! To see the actual layers, you can set layers.draw-borders to true in Firefox’s about:config page, or choose “Show layer borders” in Chrome’s rendering tab.

You get a layer, and you get a layer, everyone gets a layer!

The message is clear — layers are great and you are expecting that surely the browser is going to take full advantage of this amazing invention and arrange your page’s contents like a mille crêpe cake. Unfortunately, layers aren’t free. For a start, they take up a lot more memory since the browser has to remember (and draw) all the parts of the page that would otherwise be overlapped by other elements. Furthermore, if there are too many layers, the browser will spend more time drawing, arranging, and snapshotting them all, and eventually your animation will actually get slower! As a result, a browser only creates layers when it’s pretty sure they’re needed — e.g. when an element’s transform or opacity property is being animated.

Sometimes, however, browsers don’t know a layer is needed until it’s too late. For example, if you animate an element’s transform property, up until the moment when you apply the animation, the browser has no premonition that it should create a layer. When you suddenly apply the animation, the browser has a mild panic as it now needs to turn one layer into two, redrawing them both. This takes time, which ultimately interrupts the start of the animation. The polite thing to do (and the best way to ensure your animations start smoothly and on time) is to give the browser some advance notice by setting the will-change property on the element you plan to animate.

For example, suppose you have a button that toggles a drop-down menu when clicked, as shown below.

Example of using will-change to prepare a drop-down menu for animation

Live example

We could hint to the browser that it should prepare a layer for the menu as follows:

nav {
  transition: transform 0.1s;
  transform-origin: 0% 0%;
  will-change: transform;
}
nav[aria-hidden=true] {
  transform: scaleY(0);
}

But you shouldn’t get too carried away. Like the boy who cried wolf, if you decide to will-change all the things, after a while the browser will start to ignore you. You’re better off only applying will-change to bigger elements that take longer to redraw, and only as needed. The Web Console is your friend here, telling you when you’ve blown your will-change budget, as shown below.

Screenshot of the DevTools console showing a will-change over-budget warning.
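If the static hint in the CSS above turns out to be too greedy, a common alternative is to set the hint from script only around the time of the interaction. A minimal sketch, assuming the button-and-menu markup from the drop-down example:

const button = document.querySelector('button');
const menu = document.querySelector('nav');

// Hint the browser when interaction looks imminent…
button.addEventListener('mouseenter', () => {
  menu.style.willChange = 'transform';
});

// …and drop the hint once the animation is done, so the layer can be freed.
menu.addEventListener('transitionend', () => {
  menu.style.willChange = 'auto';
});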

Animating like you just don’t care

Now that you know all about layers, we can finally get to the part where Element.animate() shines. Putting the pieces together:

  • By animating the right properties, we can avoid redoing layout on each frame.
  • If we animate the opacity or transform properties, through the magic of layers we can often avoid redrawing them too.
  • We can use will-change to let the browser know to get the layers ready in advance.

But there’s a problem. It doesn’t matter how fast we prepare each animation frame if the part of the browser that’s in control is busy tending to other jobs like responding to events or running complicated scripts. We could finish up our animation frame in 5 milliseconds but it won’t matter if the browser then spends 50 milliseconds doing garbage collection. Instead of seeing silky smooth performance our animations will stutter along, destroying the illusion of motion and causing users’ blood pressure to rise.

However, if we have an animation that we know doesn’t change layout and perhaps doesn’t even need redrawing, it should be possible to let someone else take care of adjusting those layers on each frame. As it turns out, browsers already have a process designed precisely for that job — a separate thread or process known as the compositor that specializes in arranging and combining layers. All we need is a way to tell the compositor the whole story of the animation and let it get to work, leaving the main thread — that is, the part of the browser that’s doing everything else to run your app — to forget about animations and get on with life.

This can be achieved by using none other than the long-awaited Element.animate() API! Something like the following code is all you need to create a smooth animation that can run on the compositor:

elem.animate({ transform: [ 'rotate(0deg)', 'rotate(360deg)' ] },
             { duration: 1000, iterations: Infinity });

Screenshot of the animation produced: a rotating foxkeh
Live example

By being upfront about what you’re trying to do, the main thread will thank you by dealing with all your other scripts and event handlers in short order.
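Element.animate() also returns an Animation object, so your script keeps control of playback even after the work has been handed to the compositor. A short sketch (support for the individual controls has landed gradually, so check compatibility before relying on them):

const spin = elem.animate({ transform: [ 'rotate(0deg)', 'rotate(360deg)' ] },
                          { duration: 1000, iterations: Infinity });

spin.pause();          // freeze the animation in place
spin.playbackRate = 2; // twice as fast once it resumes
spin.play();           // resume playback
// spin.cancel();      // or remove its effect entirely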

Of course, you can get the same effect by using CSS Animations and CSS Transitions — in fact, in browsers that support Web Animations, the same engine is also used to drive CSS Animations and Transitions — but for some applications, script is a better fit.

Am I doing it right?

You’ve probably noticed that there are a few conditions you need to satisfy to achieve jank-free animations: you need to animate transform or opacity (at least for now), you need a layer, and you need to declare your animation up front. So how do you know if you’re doing it right?

The animation inspector in Firefox’s DevTools will give you a handy little lightning bolt indicator for animations running on the compositor. Furthermore, as of Firefox 49, the animation inspector can often tell you why your animation didn’t make the cut.

Screenshot showing DevTools Animation inspector reporting why the transform property could not be animated on the compositor.

See the relevant MDN article for more details about how this tool works.

(Note that the result is not always correct — there’s a known bug where animations with a delay sometimes tell you that they’re not running on the compositor when, in fact, they are. If you suspect DevTools is lying to you, you can always include some long-running JavaScript in the page like in the first example in this post. If the animation continues on its merry way you know you’re doing it right — and, as a bonus, this technique will work in any browser.)
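For that test, something along these lines will do – a sketch that deliberately blocks the main thread at regular intervals:

// Simulate a busy main thread: block for ~100ms every second.
// If the animation keeps running smoothly, it is on the compositor.
setInterval(() => {
  const start = performance.now();
  while (performance.now() - start < 100) {
    // busy-wait on purpose
  }
}, 1000);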

Even if your animation doesn’t qualify for running on the compositor, there are still performance advantages to using Element.animate(). For instance, you can avoid reparsing CSS properties on each frame, and allow the browser to apply other little tricks like ignoring animations that are currently offscreen, thereby prolonging battery life. Furthermore, you’ll be on board for whatever other performance tricks browsers concoct in the future (and there are many more of those coming)!

Conclusion

With the release of Firefox 48, Element.animate() is implemented in release versions of both Firefox and Chrome. Furthermore, there’s a polyfill (you’ll want the web-animations.min.js version) that will fall back to using requestAnimationFrame for browsers that don’t yet support Element.animate(). In fact, if you’re using a framework like Polymer, you might already be using it!
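If you only want to load the polyfill when it is actually needed, a simple feature test does the job – a sketch, with the script path standing in for wherever you host the file:

// Only fetch the polyfill when Element.animate() is missing.
if (!Element.prototype.animate) {
  const script = document.createElement('script');
  script.src = 'web-animations.min.js'; // adjust to where you host it
  document.head.appendChild(script);
}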

There’s a lot more to look forward to from the Web Animations API, but we hope you enjoy this first installment (demos and all)!

View full post on Mozilla Hacks – the Web developer blog


Don’t tell me what my browser can’t do!

Chances are, your guess is wrong!

you are obviously in the wrong place
Arrogance towards possible customers never pays off – as shown in “Pretty Woman”

There is nothing more frustrating than being capable of something and not getting a chance to do it. The same goes for being blocked out of something although you are capable of consuming it. Or you’re willing to put some extra effort, or even money, in – and you still don’t get to consume it.

For example, I’d happily pay $50 a month to get access to Netflix’s world-wide library from any country I’m in. But the companies Netflix gets its content from won’t go for that. Movies and TV shows are budgeted by predicted revenue in different geographical markets, with month-long breaks in between the releases. A world-wide network capable of delivering content in real time? Preposterous – let’s shut that down.

On a less “let’s break a 100 year old monopoly” scale of annoyance, I tweeted yesterday something glib and apparently cruel:

“Sorry, but your browser does not support WebGL!” – sorry, you are a shit coder.

And I stand by this. I went to a web site that promised me some cute, pointless animation and technological demo. I was using Firefox Nightly – a WebGL capable browser. I also went there with Microsoft Edge – another WebGL capable browser. Finally, using Chrome, I was able to delight in seeing an animation.

I’m not saying the creators of that thing lack development capabilities. The demo was slick, beautiful and well coded. They still lack two things developers of web products (and I count apps among those) should have: empathy for the end user and an understanding that they are not in control.

Now, I am a pretty capable technical person. When you tell me that I might be lacking WebGL, I know what you mean. I don’t lack WebGL. I was blocked out because the web site did browser sniffing instead of capability testing. But I know what could be the problem.

A normal user of the web has no idea what WebGL is and – if you’re lucky – will try to find it in an app store. If you’re not lucky, all you did was confuse a person. A person who went through the effort of clicking a link, opening a browser and waiting for your thing to load. A person who feels stupid for using your product, as they have no clue what WebGL is and won’t ask. Humans hate feeling stupid, and we do anything not to appear or feel that way.

This is what I mean by empathy for the end user. Our problems should never become theirs.

A cryptic error message telling the user that they lack some technology helps nobody and is sloppy development at best, sheer arrogance at worst.

The web is, sadly enough, littered with unhelpful error messages and assumptions that it is the user’s fault when they can’t consume the thing we built.

Here’s a reality check – this is what our users should have to do to consume the things we build:

That’s right. Nothing. This is the web. Everybody is invited to consume, contribute and create. This is what made it the success it is. This is what will make it outlive whatever other platform threatens it with shiny, impressive interactions – interactions that, at the time, are impossible to achieve with web technologies.

Whenever I mention this, the knee-jerk reaction is the same:

How can you expect us to build delightful experiences close to magic (and whatever other soundbites were in the last Apple keynote) if we keep having to support old browsers and users with terrible setups?

You don’t have to support old browsers and terrible setups. But you are not allowed to block them out. It is a simple matter of giving a usable interface to end users. A button that does nothing when you click it is not a good experience. Test if the functionality is available, then create or show the button. It is as simple as that.
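In practice, that test is a handful of lines. A minimal sketch – the button id is made up for illustration, and the button is assumed to ship hidden in the markup:

// Capability testing instead of browser sniffing:
// try to get a WebGL context and see whether it worked.
function supportsWebGL() {
  try {
    const canvas = document.createElement('canvas');
    return !!(canvas.getContext('webgl') ||
              canvas.getContext('experimental-webgl'));
  } catch (e) {
    return false;
  }
}

const button = document.querySelector('#start-demo');
if (supportsWebGL()) {
  button.hidden = false; // only show the button when it will actually work
}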

If you really have to rely on some technology then show people what they are missing out on and tell them how to upgrade. A screenshot or a video of a WebGL animation is still lovely to see. A message telling me I have no WebGL less so.

Even more on the black and white scale, what the discussion boils down to is in essence:

But it is 2016 – surely we can expect people to have JavaScript enabled – it is after all “the assembly language of the web”

Despite the cringe-worthy misquote of the assembly language thing, here is a harsh truth:

You can absolutely expect JavaScript to be available on your end users’ computers in 2016. At the same time, it is painfully naive to expect it to work under all circumstances.

JavaScript is brittle. HTML and CSS both are fault tolerant. If something goes wrong in HTML, browsers either display the content of the element or try to fix minor issues like unclosed elements for you. CSS skips lines of code it can’t understand and merrily goes on its way to show the rest of it. JavaScript breaks on errors and tells you that something went wrong. It will not execute the rest of the script, but throws in the towel and tells you to get your house in order first.

There are many outside influences that will interfere with the execution of your JavaScript. That’s why a non-naive and non-arrogant – a dedicated and seasoned – web developer will never rely on it. Instead, you treat it as an enhancement and, in an almost paranoid fashion, test for the availability of everything before you access it.
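That paranoid style looks something like this – a generic sketch using geolocation as the stand-in feature:

// Treat every feature as optional: test before you access it.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    pos => console.log('Position:', pos.coords.latitude, pos.coords.longitude),
    err => console.log('No position available:', err.message)
  );
} else {
  // Nothing to enhance – the basic experience keeps working.
  console.log('Geolocation not supported; keeping the basic experience.');
}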

Sorry (not sorry) – this will never go away. This is the nature of JavaScript. And it is a good thing. It means we can access new features of the language as they come along instead of getting stuck in a certain state. It means we have to think about using it every time instead of relying on libraries to do the work for us. It means that we need to keep evolving with the web – a living and constantly changing medium, and not a software platform. That’s just part of it.

This is why the whole discussion about JavaScript enabled or disabled is a massive waste of time. It is not the availability of JavaScript we need to worry about. It is our products breaking in perfectly capable environments because we rely on perfect execution instead of writing defensive code. A tumblr like Sigh, JavaScript is fun, but it is pithy finger-pointing.

There is nothing wrong with using JavaScript to build things. Just be aware that the error handling is your responsibility.

Any message telling the user that they have to turn on JavaScript to use a certain product is proof that you care more about your developer convenience than about your users.

It is damn hard these days to turn off JavaScript – you are complaining about an almost non-existent issue and telling the confused user to do something they don’t know how to do.

The chance that something in the JavaScript execution of any of your dozens of dependencies went wrong is much higher – and it is your job to fix that. This is why advice like using noscript to provide alternative content is terrible. It means you double your workload instead of enhancing what works. Who knows? If you start with something that isn’t JavaScript dependent (or runs it server side), you might find that you don’t need the complex solution you started with in the first place. Faster, smaller, easier. Sounds good, right?

So, please, stop sniffing my browser – you will fail and end up telling me lies. Stop pretending that working with a brittle technology is the user’s fault when something goes wrong.

As web developers we work in the service industry. We deliver products to people. And keeping these people happy and non-worried is our job. Nothing more, nothing less.

Without users, your product is nothing. Sure, we are better paid and well educated, and we are not flipping burgers. But that gives us no right whatsoever to be arrogant or to forget that our mistakes are not the fault of our end users.

Our demeanor when complaining about how stupid our end users and their terrible setups are reminds me of this Mitchell and Webb sketch.

Don’t be that person. Our job is to enable people to consume, participate in and create the web. This is magic. This is beautiful. This is incredibly rewarding. The next markets we should care about are ready to be as excited about the web as we were when we first encountered it. Browsers are good these days. Use what they offer after testing for it, and enjoy what you can achieve. Don’t blame the user when things go wrong – they cannot fix what you messed up.

View full post on Christian Heilmann


Don’t use Slack?

Hammer

When I joined my current company last year, we introduced Slack as the tool to communicate with each other. Of course we have the normal communication channels like email, video calls, phones, smoke signals, flag semaphore and clandestinely tapped Morse code stating “please let it end!” during meetings. But Slack seemed cool and amazing, much like BaseCamp used to. And wikis. And Campfire. And many other tools that came and went.

Here’s the thing though: I really like Slack. I have a soft spot for the team behind it and I know the beauty they are capable of. Flickr was the bomb and one of the most rewarding communities to be part of.

Slack is full of little gems that make it a great collaboration tool. The interface learns from your use. The product gently nudges you towards new functionality and it doesn’t overwhelm you with a “here’s 11452 features that will make you more productive” interface. You learn it while you use it, not after watching a few hours of video training or paying for a course on how to use it. I went through many other “communication tools” that required those.

Another thing I love about Slack is that it can be extended. You can pull all kinds of features and notifications in. It is a great tool, still frolicking in the first rounds of funding, untainted by a takeover by a large corporation and not yet smothered in ads and “promoted content”.

Seeing that I enjoy Slack at work, I set out to consider running a community of my own. Then life and work happened. But when my friend Tomomi Imura asked yesterday if there were any evangelism/developer advocacy Slack groups, I told her I’d started one a few weeks ago, and it is now filling up nicely with interesting people sharing knowledge on a specialist subject matter.

And then my friend and ex-colleague Marco Zehe wanted to be part of this. And, by all means, he should. Except, there is one small niggle: Marco can’t see and uses a screen reader to navigate the web. And Slack’s interface is not accessible to screen readers as – despite the fact that it is HTML – there is no semantic value to speak of in it. It’s all DIVs and SPANs, monkeys and undergrowth – nothing to guide you.
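To illustrate what that means in markup terms (a generic example, not Slack’s actual code): a DIV with a click handler gives assistive technology nothing to announce, whilst a real button comes with a role, a name and keyboard support for free.

<!-- Monkeys and undergrowth: no role, no name, no keyboard support -->
<div class="send" onclick="sendMessage()">Send</div>

<!-- Announced as a button, focusable and keyboard-operable by default -->
<button type="button" onclick="sendMessage()">Send</button>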

What followed was a quick back and forth on Twitter about the merits of Slack vs. open and accessible systems like IRC. Marco’s main point was that he cannot use Slack and that it isn’t open; that’s why it is a bad tool to use for team communication. IRC is open, accessible, time-proven and – if used properly – turns X-Factor dropouts into Freddie Mercury and pot noodles into coq au vin.

Marco has a point: there is a danger that Slack will go away, or that Slack will have to pivot into something horrible like many other community tools have. It is a commercial product that isn’t open, meaning it cannot easily be salvaged or forked should it go pear-shaped. And it isn’t as accessible as IRC is.

But: it is an amazing product, and in terms of interface it does everything right that you can do. Everything IRC is not.

I love IRC. I used IRC before I used email – on a Commodore 64 with a 2400 baud modem and a 40-character-per-line display. I spent a long time on #html getting and publishing HTML documentation via XDCC file transfer. Some of my longest-standing friends and colleagues I know from IRC.

If you introduce someone who is used to apps and messaging on mobile devices to IRC, though, you don’t see delight but confusion on their faces. Rachel Nabors complained a lot about this in her State of Web Animation talks. IRC is very accessible, but not enjoyable to use. I am sure there are clients that do a good job at that, but most have an interface and features that only developers can appreciate and call usable.

Using IRC can be incredibly effective – if you are organised and use it with strict channel guidelines. Used wrongly, IRC is an utter mess, resulting in lots of log files you can only make sense of if you’re good with find and grep.

I have been sitting on this for a long time, and now I want to say it: open and accessible doesn’t beat usable and intelligent. It is something we really have to get past in the open source and web world if we want what we do to stay relevant. I’m tired of crap interfaces being considered better because they are open. I’m tired of people slagging off great tools and functionality because they aren’t open. I don’t like iOS, as I don’t want to be locked into an ecosystem. But damn, it is pretty, and I see people being very effective with it. If you want to be relevant, you’ve got to innovate and become better. And you have to keep inventing new ways to use old technology, not complain about the problems of the closed ones.

So what about the accessibility issue of Slack? Well, we should talk to them. If Slack wants to be usable in the enterprise and in government, they’ll have to fix this. That is an incentive – something Hipchat is trying to cover right now, as Jira is already pretty well grounded there.

As the people who love open, free, available and accessible we have to ask ourselves one question: why is it much easier to create an inaccessible interface than an accessible one? How come this is the status quo? How come that in 2016 we still have to keep repeating basic things like semantic HTML, alternative text and not having low contrast interfaces? When did this not become a simple delivery step in any project description? It has been 20 years and we still complain more than we guide.

How come developer convenience has become more important than access for all – something baked into the web as one of its main features?

When did we lose that fight? What did we do to not make it obvious that clean, semantic HTML gives you nothing but benefits and accessibility as a side effect? What did we do to become the grouchy people shouting from the balcony that people are doing it wrong instead of the experts people ask for advice to make their products better?

Accessibility cannot be added at a later stage. You can patch and fix and add some ARIA magic to make things work to a degree, but by then the baby has already been thrown out with the bath water. Much like an interface built to only work in English is very tough to internationalise, adding more markup to make something accessible is frustrating patchwork.

Slack is hot right now. And it is lovely. We should be talking to them and helping them to make it accessible, helping with testing and working with them. I will keep using Slack. And I will meet people there, it will make me effective and its features will delight the group and help us get better.

Slack is based on web technologies. It very much can be accessible to all.

What we need, though, is a way to talk to the team and see if we can find some low-hanging-fruit bugs to fix. Of course, this would be much easier in an open source project. But just consider what a lovely story it would be to tell that, by communicating, we made Slack accessible to all.

Closed doesn’t have to mean evil. Mistakes made don’t mean you have to disregard a product completely. If we disregard it and keep banging on about old technology that works but doesn’t delight, everybody loses.

So, use Slack. And tell them when you can’t and that you very much would like to. Maybe that is exactly the feature they need to remain what they are now rather than being yet another great tool to die when the money runs dry.

Photo by CogDogBlog

Also on Medium

View full post on Christian Heilmann


I don’t want Q&A in conference videos

I present at conferences – a lot. I also moderate conferences and I brought the concept of interviews instead of Q&A to a few of them (originally this concept has to be attributed to Alan White for Highland Fling, just to set the record straight). Many conferences do this now, with high-class ones like SmashingConf and Fronteers being the torch-bearers. Other great conferences, like EdgeConf, are 100% Q&A, and that’s great, too.

confused Q&A speaker

I also watch a lot of conference talks – to learn things, to see who is a great presenter (someone I can recommend to conference organisers who ask me for talent), and to see what others are doing to excite audiences. I do that live, but I’m also a great fan of talk recordings.

I want to thank all conference organisers who go the extra mile to offer recordings of the talks at their event – you already rock, thanks!

I put those on my iPod and watch them in the gym, whilst I am on the cross trainer. This is a great time to concentrate, and to get fit whilst learning things. It is a win-win.

Much like everyone else, I pick a talk by topic, but also by length. Half an hour to 40 minutes is what I like best. I also tend to watch two or three 15-minute talks in a row at times. I am quite sure I am not alone in this. Many people watch talks when they commute on trains, or in similar “drive-by educational” ways. That’s why I’d love conference organisers to consider this use case more.

I know, I’m spoilt, and it takes a lot of time, effort and money to record, edit and release conference videos, and you make no money from it. But before shooting me down and telling me I have no right to demand this if I don’t organise events myself, let me tell you that I am pretty sure you can stand out if you do just a bit of extra work on your recordings:

  • Make them available offline (for YouTube videos I use youtube-dl on the command line to do that anyway). Vimeo has an option for that, and Channel 9 did that for years, too.
  • Edit out the Q&A – there is nothing more annoying than seeing a confused presenter on a small screen trying to understand a question from the audience for a minute and then saying “yes”. Most of the time Q&A is not 100% related to the topic of the talk, and wanders astray or becomes dependent on knowledge of the other talks at that conference. This is great for the live audience, but for the after-the-fact consumer it becomes very confusing and pretty much a waste of time.

That way you end up with much shorter videos that are much more relevant. I am pretty sure your viewing/download numbers will go up the less cruft you have.

It also means better Q&A for your event:

  • presenters at your event know they can deliver a great, timed talk and go wild in the Q&A answering questions they may not want recorded.
  • people at the event can ask questions they may not want recorded (technically you’d have to ask them if it is OK)
  • the interviewer or people at the event can reference other things that happened at the event without confusing the video audience. This makes it a more lively Q&A and part of the whole conference experience
  • there is less of a rush to get the mic to the person asking and there is more time to ask for more details, should there be some misunderstanding
  • presenters are less worried about being misquoted months later when the video is still on the web but the context is missing

For presenters, there are a few things to consider when presenting for the audience and for the video recording, but that’s another post. So, please, consider a separation of talk and Q&A – I’d be happier and promote the hell out of your videos.

View full post on Christian Heilmann


Slimming down the web: Remove code to fix things, don’t add the “clever” thing

Today we saw a new, interesting service called Does it work on Edge?, which allows you to enter a URL and get that URL rendered in Microsoft Edge. It also gives you a report in case there are issues with your HTML or CSS that are troublesome for Edge (much like Microsoft’s own service does). In most cases, this will be browser-specific code like prefixed CSS. All in all, this is a great service, one of many that make our lives as developers very easy.

If you release something on the web, you get feedback. When I tweeted enthusiastically about the service, one of the answers was from @jlbruno, who was concerned about the form not being keyboard accessible.

The reason for this is simple: the form on the site isn’t really a form, insofar as there is no submit button of any kind. The button on the page is an anchor pointing nowhere, and the input element itself has a keypress event attached to it (even inline):

Screenshot of the page’s source code

There’s also another anchor that points nowhere: a loading message with a display of none. Once you click the first one, this one gets a display of block and replaces the original link visually. This is great UX – telling you something is going on – but it only really works when I can see it. It also gives me a link element that does nothing.

Once the complaint got heard, the developers of the site took action, added an autofocus attribute to the input field, and proudly announced that the form is now keyboard accessible.

Now, I am not having a go here at the developers of the site. I am more concerned that this is pretty much the state of web development we have right now:

  • The visual outcome of our tools is the most important aspect – make it look good across all platforms, no matter how.
  • As developers, we most likely are abled individuals with great computers and fast connections. Our machines execute JavaScript reliably and we use a trackpad or mouse.
  • When something goes wrong, we don’t analyse what the issue is; instead we look for a tool that solves the issue for us – the fancier that tool is, the better.

How can this be keyboard accessible?

In this case, the whole construct is far too complex for the job at hand. If you want to create something like this and make it accessible to keyboard and mouse users alike, the course of action is simple:

  • Use a form element with an input element and a submit button

Use the REST URL of your service (which I very much assume this product has) as the action and re-render the page when it is done.

If you want to get fancy and not reload the page, but keep everything in place, assign a submit handler to the form element, call preventDefault() and do all the JS magic you want to do (a minimal sketch follows the list below):

  • You can still have a keypress handler on the input element if you want to interact with the entries while they happen. If you look at the code on the page now, all it does is check for the Enter key. Hitting Enter in a form with a submit button or a button element submits the form – this whole code never has to be written, simply by understanding how forms work.
  • You can change the value of a submit button when the submit handler kicks in (or the innerHTML of the button) and make it inactive. This way you can show a loading message and prevent duplicate form submissions.
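Here is that sketch. The endpoint, ids and wording are illustrative – the point is that the form works without JavaScript, and the script only enhances it:

<form id="check-form" action="/check" method="get">
  <label for="url">URL to test</label>
  <input type="url" id="url" name="url" required>
  <button type="submit">Check it</button>
</form>

// Enhancement layer: only takes over when JavaScript actually runs.
const form = document.querySelector('#check-form');
const button = form.querySelector('button');

form.addEventListener('submit', (event) => {
  event.preventDefault();           // stay on the page, fetch instead
  button.disabled = true;           // prevents duplicate submissions
  button.textContent = 'Loading…';  // the loading message, visible to all

  const query = new URLSearchParams(new FormData(form));
  fetch(form.action + '?' + query)
    .then((response) => response.text())
    .then((result) => {
      // render the result into the page here
      button.disabled = false;
      button.textContent = 'Check it';
    });
});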

What’s wrong with autofocus?

Visually and functionally, in a browser that was lucky enough not to encounter a JavaScript error until now, the autofocus solution does look like it does the job. However, what it does is shift the focus of the document to the input field once the page has loaded. A screen reader user thus would never learn what the site is about, as you skip the header and all the information. As the input element also lacks a label, there isn’t even any information as to what the user is supposed to enter here. You’ve sent that user into a corner without any means of knowing what’s going on. Furthermore, keyboard users are primed and ready to start navigating around the page as soon as it loads. By hijacking the keyboard navigation and automatically sending it to your field, you confuse people. Imagine pressing the TV listings button on a TV and instead being sent to the poodle grooming channel every time you do it.

The web is obese enough!

So here’s my plea in this: let’s break that pattern of working on the web. Our products don’t get better when we use fancier code. They get better when they are easier to use for everybody. The fascinating bit here is that by understanding how HTML works and what it does in browsers, we can avoid writing a lot of code that looks great but breaks very easily.

There is no shortage of articles lamenting how the web is too slow, too complex and too big on the wire compared to native apps. We can blame tools for that or we could do something about it. And maybe not looking for a readymade solution or the first result of Stackoverflow is the right way to do that.

Trust me, writing code for the web is much more rewarding when it is your code and you learned something while you implemented it.

Let’s stop adding more when doing the right thing is enough.

View full post on Christian Heilmann


Two simple syndication tricks to get wrong: have an image and don’t trust iframes

Blogging is simple, but many people forget that there are freaks like me out there who have done this for quite a while and use RSS readers, and that there are new folks out there who will share your links on Facebook, Google+ and other social networks.

Beastie Boys - Ill communication

Therefore it is pretty important to do two things in your posts:

  • Do not rely on iframes working
  • Do have an image for social media sites to show as a thumbnail

As to iframes: my good friend Jens Grochtdreis this morning posted about needing a new workflow in web development, proving his point by embedding two of his talks from Speakerdeck (which are iframes). To me, his post looked like this in feedly:

Blog post with embeds not showing up - all I got is two headings promising content

Two headings, no content. I reported the problem on Twitter, and now he has linked the headings to the URLs on Speakerdeck. Everybody wins.

Most readers will filter out iframe content if it mixes http and https, for security reasons – something we’ll encounter more and more in the future as Content Security Policy becomes a bigger topic (yes, the web is insecure; yes, it is our job to fix that).

The second tip, about having an image, is a nice-to-have but an important one. Having an image in your post – a real image element, not a background image – means that Facebook, Google+ and others will show this image next to your blog post title when someone shares the link on these systems. If you don’t pick a thumbnail, Facebook picks the first image it finds. This means your link could show up next to an ad for another product and confuse potential readers. A great example I encountered the other day was Scott Hanselman’s Are you a Phony? post which, with a title like that, showed up next to an ad featuring his endorsement of his hosting service. Confusing message, much?

scott hanselman post shared on Facebook with a wrong thumbnail

Seems to me people click much more on links with contextually relevant images next to them. Thus, providing one makes it nicer for everyone.
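If you want to take the guesswork away from the social networks entirely, you can also declare the thumbnail explicitly with an Open Graph meta tag – a small sketch with a placeholder URL:

<meta property="og:image" content="https://example.com/images/post-thumbnail.jpg">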

Simple, but effective.

View full post on Christian Heilmann
