Hinting at a better web at State of the Browser 2018

State of the Browser is a small, annual conference in London. It originated as a format of 20-minute presentations by each browser maker followed by a panel, allowing people to hear browser news straight from the horse’s mouth. It has been running for seven years (I think; it is hard to find out). This year was slightly different as they didn’t do a panel and there were several speakers who aren’t representatives of browser makers.

State of the Browser ticks many of my happy boxes when it comes to conferences and I am highly impressed by how the organisers manage to pull it off:

  • It has a great and diverse line-up of presenters
  • It is single track, with a sensible talk length
  • It is pragmatic in its approach and keeps costs low by not catering lunch but giving enough time to find some
  • It is ridiculously affordable at 30 GBP
  • And yet, they do a really good job of making you feel welcome and supported as a presenter

The conference has a low-key feel to it and that also keeps the presenters humble. There is a great diversity ticket program in place where attendees can sponsor others. The line-up was diverse and there is a focus on availability and accessibility. All the talks were streamed on YouTube and there is professional live transcription that types along as the speakers present. The conference team takes notes and publishes the resources presenters covered, live, on the speakers’ pages on the conference site and on Twitter.

My talk this year was “Hinting at a better web”, in which I covered the changes the web has gone through over the years, how we as developers have a harder time keeping up with them, and how tooling and using the right resources in the context of our work can help us with that.

I will write a longer article about the topic soon.

The full video stream of the conference is available here. My talk runs from 05:11:00 to 05:38:00.

Here is a quick recap of the talks from my POV:

  • Michelle Barker of Mud showed off the power of CSS grids and custom properties to build complex layouts on the web.
  • Dr. Ben Livshits of Brave showed how the advertising model of their browser can make the web more secure and easier for publishers
  • Sara Vieira gave a talk ranting about the overuse of DIVs in design and the general lack of quality semantic markup and sensible, simple solutions on the web
  • Rowan Merewood of Google gave a talk about Apps, Web Apps and their overlap. His slides are available here.
  • Ada Rose Cannon of Samsung covered “WebXR and the immersive web” showing some interesting VR/AR examples running in Samsung Internet
  • Ruth John talked about using the Web Audio API for music experiments and visualization with a focus on the performance of those APIs.
  • Chris Mills of Mozilla showed the new features of the Firefox Developer Tools in Nightly talking in detail about their WYSIWYG nature. He covered the Grid Inspector, Animation Editor and a few other neat tools
  • Jeremy Keith of clearleft once again gave a highly philosophical talk about how the open web is an agreement
  • Charlie Owen of Nature Publishing ended with a ranty (in a positive sense) keynote about us over-complicating the web and thus making it far less accessible than it should be

I was happy to see some nice feedback on Twitter:

I’ve been a supporter of State of the Browser from the very beginning and I am happy to say that – if anything – it gets better every year. The dedicated team behind it are doing a bang-up job.

View full post on Christian Heilmann

Hinting at a better web

Webhint logo

Humans are a weird bunch. One of our great traits is that when someone tells us that a certain thing is a certain way, we question it. We like debating and we revel in it. The more technical you are, the more you enjoy this. As someone once put it:

Arguing with an engineer is like wrestling with a pig in shit. You realise far too late that it enjoys it.

Now, the web as we have it these days isn’t in a good place.

  • It is slow, full of unwanted content
  • It allows others to track our users without our knowledge
  • It is full of security holes and much less maintained than it should be to prevent them becoming an attack vector
  • It lacks support for physical and digital availability for all.
  • Developers get very simple things wrong and often a mistake is copied and pasted or installed over and over

That means that browsers, which by definition cannot break the web, need to be lenient with developer mistakes. All browsers fix a lot of problems automatically and allow for spelling mistakes like “utf8” instead of “UTF-8”. That also makes them bigger, slower and harder to maintain. Which makes us complain about browsers being just that.

It can’t be a problem of lacking resources

Considering how much free, high-quality information we have available, this is weird. We have web documentation maintained by all the big players. We have up-to-date information on what browsers can do. We drown in conferences and meetups, and we don’t even need to attend them. Often talk videos pop up on YouTube within minutes of the presenter leaving the stage.

We also have excellent tooling. Browsers have developer tools built in. Editors are free, some even open source, and we can configure them to our needs using the very languages we write in them. We have automated testing and auditing tools telling us what to optimise before we release our products.

Maybe it all is too much to take in

The problem seems to be overload, both of options and especially of opinions. We can’t assume that every web developer out there can go to conferences, follow blog posts and watch videos. We can’t assume that people can deal with the speed of news, where a “modern, tiny solution” can turn into a “considered harmful” within a day. We also can’t assume that what is a best practice for a huge web product applies to all smaller ones. Far too often we’ve seen “best practices” come and go, and what was a “quick, opinionated way to achieve more by writing less” turned into a stumbling block of the current web. Even more worrying, when a huge, successful corporation states that something works for them, developers are asked to use those settings and ideas, even when they don’t apply to their products at all.

There is no one-size-fits-all best practice

The web is incredibly diverse and the same rules don’t apply to everything. We are very quick to point the finger at a glaring problem of a web product but we don’t ask why it happened. Often it is not the fault of the developers, or lack of knowledge. It is a design decision that may even have a sensible reason.

We faced the same issues at my work. Working in a large corporation means many chefs spoiling the broth. It also means that different projects need different approaches. I am happy to give Internet Explorer users a plain HTML page with a bit of CSS and enhance for more capable environments. But not everybody has that freedom – for them a high-quality experience on that browser is the main goal. Everything else isn’t part of the product time buffer and needs to be added on the sly. Different needs for different projects.

Damage control

That said, we didn’t want to allow low-quality products to get out of the door. Often those of us in the inner circle of the “modern web” preach about best practices. Then some marketing web site by your company makes you look silly because it violates them. We needed a way to evaluate the quality of a project and get a report out of it. We also needed explanations of why some of the problems we have with the product are real issues. And we needed documentation explaining how to fix these issues.

This is when we created a product that does all that. It is a scanner that loads a URL and tests all kinds of things it returns. It uses third-party tools to test for security and accessibility issues and is available open source on GitHub. As we don’t want this to be just a thing of my company, we donated the code to the JS Foundation.

At first, we called the product Sonar. That ran into a naming clash, so we renamed it to Sonarwhal. It had some success, and then more naming clashes cropped up. Furthermore, people didn’t seem to get it.

Today, we released a new, rebranded version of the tool. It is now called Webhint and you can find it on GitHub, use the online scanner, or use it as a Node module via npm, yarn or whatever other package manager you want to use.

The simplest use case is this:

  • Go to the online scanner and enter the URL you want to test
  • Wait a bit until all the hints have come back
  • Get a report explaining all the things that range from sub-optimal to dangerous. You don’t only get the error messages, but a detailed explanation of what they mean and how to fix the issues.

By default webhint tests for these features of great web products: Performance, Accessibility, Browser Interoperability, Security, Sensible configuration of build tools and PWA Readiness.

Make up your own tests checking what matters to your product

Whilst this is great, it doesn’t solve all the issues we listed earlier. It is a great testing tool, but it has its own opinions and you can’t change them.

  • What if your tool doesn’t need to be PWA ready?
  • What if performance is less of an issue as you run on an intranet behind a firewall?
  • What if you can’t access the online scanner as you are working in a closed network?
  • What if you don’t use a browser at all but you also want to test a lot of documents for these quality features?

This is where the Node version of webhint comes in. You can get it and install it with npm (and others, of course). The package name is hint.
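A quick sketch of what that looks like on the command line (the package name is as of this release; exact output and flags may differ between versions):

```shell
# install the CLI globally (or run it ad hoc with npx)
npm install -g hint

# scan a URL; the report is printed to the terminal,
# grouped by hint category
hint https://example.com
```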

Animation of hint on the command line

That way you can not only scan a URL from the command line, but you can also configure it to your needs. You can define your own hints to test against and turn the out-of-the-box ones on and off. You can turn off the ones reliant on third party scanners. You can even have different configurations for each project.
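Configuration lives in a .hintrc file in your project. As a rough sketch (the hint names below are illustrative; check the documentation for the exact identifiers your version supports), it could look like this:

```json
{
  "extends": ["web-recommended"],
  "hints": {
    "https-only": "error",
    "manifest-exists": "off"
  }
}
```

Anything set to "off" is skipped, which is how you could disable hints that rely on third-party scanners when working in a closed network.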

With the release today of webhint, we took what was Sonarwhal and made it faster, smaller and easier to use. The command line version now has a default setting and adding and removing hints is a lot easier. The startup time and the size of webhint is much smaller and things should be much smoother.

So go and read up on the official release of Webhint, dive into the documentation or just do some trial scans. You’ll be surprised how many things you find that can have a hugely problematic impact but are relatively easy to fix.

I hope that a tool like webhint, without fixed opinions and customisable to many different needs whilst still creating readable and understandable reports, can help us make a better web. Watch this space, there is a lot more to come.


Is the new world of JavaScript confusing or intimidating? I thought so, and recorded a video course on how to feel better

Chris Heilmann smiling behind his laptop as the course is finished

JavaScript used to be easy. Misunderstood, but easy. All you had to do was take a text editor and add some code to an HTML document in a SCRIPT element to enhance it. After a few years of confusion, we standardised the DOM and JavaScript became more predictable. AJAX was the next hype and it wasn’t quite as well-defined as we’d have liked it to be. Then we got jQuery because the DOM was too convoluted. Then we got dozens of other libraries and frameworks to make things “easier”. When Node came to be, we moved server-side with JavaScript. And these days we have replaced the DOM with a virtual one. JavaScript has types, classes and convenience methods.

JavaScript is everywhere and it is the hottest topic. This can be confusing and overwhelming for new and old developers. “JavaScript fatigue” is a common term for that and it can make us feel bad about our knowledge. Am I outdated? Am I too slow to keep up? Which one of the dozens of things JavaScript can do is my job? What if I don’t understand them or have no fun doing them?

It is easy to be the grumpy old developer that discards everything new and shiny as unreliable. And it is far too often that we keep talking about the good old days. I wanted to find a way to get excited about what’s happening. I see how happy new, unencumbered developers are playing with hot new tech. I remembered that I was like that.

That’s why I recorded a Skillshare class about JavaScript and how to deal with the changes it went through.

In about an hour of videos you learn what JavaScript is these days, how to deal with the hype and – more importantly – what great advances happened to the language and the ecosystem.

Here’s me explaining what we’ll cover:

The videos are the following. We deliberately kept them short. A huge benefit of this course is to discover your own best way of working whilst watching them. It is a “try things out while you watch” kind of scenario:

  • Introduction (01:46) – introducing you to the course, explaining what we will cover and who it is for.
  • JavaScript today (08:41) – JavaScript isn’t writing a few lines of code to make websites snazzier any longer. It became a huge platform for all kinds of development.
  • Uses for JavaScript (06:25) – a more detailed view on what JavaScript does these days. And how the different uses come with different best practices and tooling.
  • Finding JavaScript Zen (04:15) – how can you stay calm in this new JavaScript world where everything is “amazing”? How can you find out what makes sense to you and what is hype?
  • Evolved Development Environments (10:22) – all you need to write JavaScript is a text editor, and all you need to run it is a browser. But that also limits you more than you think.
  • Benefits of Good Editors (12:34) – by using a good editor, people who know JavaScript can become much more effective. New users of JavaScript avoid making mistakes that aren’t helpful to their learning.
  • Version Control (09:15) – using version control means you write understandable code. And it has never been easier to use Git.
  • Debugging to Linting (06:01) – debugging has been the first thing to get right to make JavaScript a success. But why find out why something went wrong when you can avoid making the mistake?
  • Keeping Current in JavaScript (05:11) – JavaScript moves fast and it can be tricky to keep up with what is happening. It can also be a real time-sink to fall for things that sound amazing but have no life-span.
  • Finding the JavaScript Community (03:59) – it is great that you know how to write JavaScript. Becoming part of a community is a lot more rewarding though.
  • Asking for Help (05:47) – gone are the days of writing posts explaining what your coding problem is. By using interactive tools you can give and get help much faster.
  • Final Thoughts (01:11) – thanks for taking the course, how may we help you further?

I wrote this to make myself more content and happy in this demanding world, and I hope it helps you, too. Old-school developers will find things to try out and new developers should get a sensible way to enter the JavaScript world.


Web Truths: The web is better than any other platform as it is backwards compatible and fault tolerant

This is part of the web truths series of posts, a series where we look at true-sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of the web as a publication platform and how we keep repeating virtues of it that may not apply to a publisher audience.

The web is better than any other platform as it is backwards compatible and fault tolerant

This has been the mantra of any web standards fan for a very long time. The web gets a lot of praise as it is to a degree the only platform that has future-proofing built in. This isn’t a grandiose statement. We have proof. Web sites older than many of today’s engineers still work in the newest browsers and devices. Many are still available, whilst those gone are often still available in cached form. Both search engines and the fabulous wayback machine take care of that – whether you want it or not. Betting on the web and standards means you have a product consumable now and in the future.

This longevity of the web stems from a few basic principles. Openness, standardisation, fault tolerance and backwards compatibility.

Openness
Openness is the thing that makes the web great. You publish in the open. How your product is consumed depends on what the user can afford – both on a technical and a physical level. You don’t expect your users to have a certain device or browser. You can’t force your users to be able to see or overcome other physical barriers. But as you published in an open format, they can, for example, translate your web site with an online system to read it. They can also zoom into it or even use a screenreader to hear it when they can’t see.

One person’s benefit can be another’s annoyance, though. Not everybody wants to allow others to access and change their content to their needs. Even worse – be able to see and use their code. Clients have always asked us to “protect their content”. But they also wanted to reap the rewards of an open platform. It is our job to make both possible and often this means we need to find a consensus. If you want to dive into a messy debate about this, follow what’s happening around DRM and online video.

Standardisation
Standardisation gave us predictability. Before browsers agreed on standards, web development was a mess. Standards allowed us to predict how something should display. Thus we knew when it was the browser’s fault or ours when things went wrong. Strictly speaking standards weren’t necessary for the web to work. Font tags, center tags, table layouts and all kind of other horrible ideas did an OK job. What standards allow us to do is to write quality code and make our lives easier. We don’t paint with HTML. Instead, we structure documents. We embed extra information and thus enable conversion into other formats. We use CSS to define the look and feel in one central location for thousands of documents.

The biggest beneficiaries of standards-driven development are developers. It is a matter of code quality. Standards-compliant code is easier to read, makes more sense and has a predictable outcome.

It also comes with lots of user benefits. A button element is keyboard, touch and mouse accessible and is available even to blind users. A DIV needs a lot of developer love to become an interactive element.
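As a rough sketch of that “developer love” (the function name here is made up for illustration), this is roughly what you would have to hand-wire onto a DIV to approximate what a BUTTON gives you for free:

```javascript
// Everything below is built into <button>; a <div> gets none of it.
function makeDivActLikeButton(div, onActivate) {
  div.setAttribute('role', 'button'); // announce it as a button to assistive tech
  div.setAttribute('tabindex', '0');  // make it reachable with the keyboard
  div.addEventListener('click', onActivate);
  div.addEventListener('keydown', (event) => {
    // real buttons also activate on Enter and Space
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      onActivate(event);
    }
  });
}
```

And even then you are still missing things like disabled states and form submission, which is why reaching for the button element in the first place is the sensible, simple solution.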

But that doesn’t mean we need to have everything follow standards. If we had enforced that, the web wouldn’t be where it is now. Again, for better or worse. XHTML died because it was too restrictive. HTML5 and lenient parsers were necessary to compete with Flash and to move the web forward.

Backwards compatibility

Backwards compatibility is another big part of the web platform. We subscribed to the idea of older products staying available in the future. That means we need to cater for old technology in newer browsers. Table layouts from long ago need to render as intended. There are even sites publishing in that format these days, like Hacker News. For browser makers, this is a real problem as it means maintaining a lot of old code. Code that not only has diminishing use on the web, but is often even a security or performance issue. Still, we can’t break the web. Anything that becomes a “de facto standard” of web usage becomes a maintenance item. For a horror story on that, just look at all the things that can go in the head of a document. Most of these are non-standard, but people do rely on them.

Fault tolerance

Fault tolerance is a big one, too. From the very beginning, web standards like HTML and CSS have allowed for developer errors. The design principles of the language state it as the “Priority of Constituencies”:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity

This idea is there to protect the user. A mistake made by a developer, or by a third-party piece of code like an ad, should not block out users. The worrying part is that in a world where we’re asked to deliver more in a shorter amount of time, it makes developers sloppy.

The web is great, but not simple to measure or monetise

What we have with the web is an open, distributed platform that grants the users all the rights to convert content to their needs. It makes it easy to publish content as it is forgiving to developer and publisher errors. This is the reason why it grew so fast.

Does this make it better than any other platform or does it make it different? Is longevity always the goal? Do we have to publish everything in the open?

There is no doubt that the web is great and was good for us. But I am getting less and less excited about what’s happening to it right now. Banging on and on about how great the web as a platform is doesn’t help with its problems.

It is hard to monetise something on the web when people either don’t want to pay or block your ads. And the fact that highly intrusive ads and trackers exist is not an excuse for that but a result of it. The more we block, the more aggressive advertising gets. I don’t know anyone who enjoys interstitials and popups. But they must work – or people wouldn’t use them.

The web is not in a good way. Sure, there is an artisanal, indie movement that creates great new and open ways to use it. But the mainstream web is terrible. It is bloated, boringly predictable and seems to try very hard to stay relevant whilst publishers get excited about Snapchat and other, more ephemeral platforms.

Even the father of the WWW is worried: Tim Berners-Lee on the future of the web: The system is failing.

If we love the web the way we are so happy to say we do, we need to find a solution for that. We can’t pretend everything is great because the platform is sturdy and people could publish in an accessible way. We need to ensure that the output of any way to publish on the web results in a great user experience.

The web isn’t the main target for publishers any longer and not the cool kid on the block. Social media lives on the web, but locks people in a very cleverly woven web of addiction and deceit. We need to concentrate more on what people publish on the web and how publishers manipulate content and users.

Parimal Satyal’s excellent Against a User Hostile Web is a great example of how you can convey this message and think further.

In a world of big numbers and fast turnaround, longevity isn’t a goal, it is a nice-to-have. We need to bring the web back to being the first publishing target, not a place to advertise your app or redirect to a social platform.


Any web site can become a PWA – but we need to do better

Over on his blog, I just got a ding from Jeremy:

Literally any website can—and should—be a progressive web app. Don’t let anyone tell you otherwise. I was at an event last year where I heard Chris Heilmann say that you shouldn’t make your blog into a progressive web app. I couldn’t believe what I was hearing. He repeats that message in this video chat: “When somebody, for example, turns their blog into a PWA, I don’t see the point. I don’t want to have that icon on my homepage. This doesn’t make any sense to me” Excuse me!? Just because you don’t want to have someone’s icon on your home screen, that person shouldn’t be using state-of-the-art technologies!? Excuse my French, but Fuck. That. Shit!
Our imaginations have become so limited by what native mobile apps currently do that we can’t see past merely imitating the status quo like a sad cargo cult.
I don’t want the web to equal native; I want the web to surpass it. I, for one, would prefer a reality where my home screen isn’t filled with the icons of startups and companies that have fulfilled the criteria of the gatekeepers. But a home screen filled with the faces of people who didn’t have to ask anyone’s permission to publish? That’s what I want!

Suffice to say, I am not telling anyone not to use great, modern technologies to the benefit of their end users and their own publishing convenience. And the stack that makes up PWAs is great for making either of those more successful than it is now.

PWA presentation at JSPoland
Me, literally telling the world that a PWA can be anything

I want us to do more. I want modern web technologies not to be just a personal thing to use. I want them to be what we do at work, not something to bring to work or point to at some amazing web person’s web presence or a showcase of a large web company.

All power to us for using whatever great technology in the environments we control, but we need to aim higher. We need to go where mistakes happen and bring the convenience and sensible upgrades to hacky old solutions. I don’t have the power to tell anyone not to use something on their blog. But I also don’t want a lot of things out there touted as “PWAs” that are a terrible experience. We’ve done that over and over with all kinds of packaging formats. We need to get it right this time, as our tools have never been better.

I have publicly spoken out over and over again against stores in their current form as they are a barrier to access. A barrier that seems artificial when we have the web, right?

Maybe. Fact is that a whole new generation of people know apps, not the web. They know the web as something riddled with ads and malware that you need blockers for. In some places, where the web is not as conveniently available as it is where we are, people even consider Facebook to be the web, as it is made available to them more easily than the bloated web.

When I say that I don’t see the point of turning a blog into a PWA it hits exactly the confusion point of the “app” part. To me, an app is a “do” thing, not a “read” thing. I see no point in having the Wired, the Guardian, The Rolling Stone, The Times etc… app. Icons on a crammed desktop don’t scale. I use a news reader to read news items. I use an RSS aggregator to read blogs. I use an ebook reader to read books (or a browser). I use Spotify or iTunes to listen to music. I don’t have an app for each band or movie.

I’ve been publishing for donkey’s years on the web. And I choose to use a blog as I have no idea how you consume it. And I like that. I don’t think there should be a “Chris Heilmann” icon on your desktop. It should be in the contacts, it should maybe show up as a post or a bookmark. You can’t do anything on this blog except for reading it. Use what makes you most happy to do that.

I very much agree with Jeremy:

I don’t want the web to equal native; I want the web to surpass it.

And that’s exactly what I mean when I don’t want a blog as an app – no matter what format of app. I want people to create PWAs that are more than bookmarks – even offline working ones that give me a notification when new content is available.

Does this mean I say that you shouldn’t use a manifest and service worker to improve web pages or your blog? Hell, no. Go wild and do the right thing. Especially do the one thing that PWAs require: stop publishing over HTTP and secure your servers. Man-in-the-middle attacks need to stop, especially with various governments happily being that man in the middle.
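The upgrade path is deliberately gentle. A minimal sketch of the registration step (the /sw.js path is a hypothetical example, and the navigator object is passed in here so the feature test is explicit):

```javascript
// Registers a service worker if the browser supports it.
// Browsers without support simply keep getting the plain page,
// which is the whole point of a progressive enhancement.
function registerServiceWorker(nav, workerUrl = '/sw.js') {
  if (!nav || !('serviceWorker' in nav)) {
    return null; // no support: nothing breaks, nothing changes
  }
  return nav.serviceWorker.register(workerUrl);
}

// On a real page you would call it with the global navigator:
// registerServiceWorker(navigator);
```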

I want the web to succeed where it matters. I want native apps to go away. I don’t want to download an app to get tickets to the subway in Berlin. I don’t want an app for each airport I go to. I very much don’t want an app for each event I attend. I don’t want an app for each restaurant I frequent. I don’t need those relationships and having to give them a part of the limited space on my phone. Or on my desktop/launch bar.

We need the web to beat native where it is terrible: distribution and convenience. I want people to do things without having to go to a store, download and install an app and run it. I want people to get access to free content without a credit card. You need a credit card to access free stuff on app stores – this is a huge barrier. I want people to find the next train, book restaurants, get a doctor and find things regardless of connection and device. I want people to take pictures and share them. I don’t want people to use insecure, outdated versions of their apps because it is too much to get 50MB updates every day. I don’t want people to use what comes on the phone and use the browser as a last resort. And for this, we need great PWAs of known entities and great new players.

Try before you buy
PWA is try before you buy

I want people to understand that they are in control. As I said last week in Poland, PWA is proper try before you buy. You go to a URL and you like what you see. With later visits, you promote it to get more access, to work offline and even to give you notifications.

A PWA has to earn that right. And this is where we need kick-ass examples. I have no native Twitter any more, Twitter Lite does the trick and saves me a lot of data and space. I go around showing this to people and I see them kick out native Twitter. That’s what we need.

Every time we promote the web as the cool thing it is we repeat the same points.

  • It is easy to publish
  • It is available for everyone
  • It is not beholden to anyone
  • It is independent of platform and form factor, and generally inviting

When you see the web that millions of people use every day the story is very different.

It is so bad that every browser maker has a department of cross-browser compliance. We all approach big companies pointing out how their products break and what can be done to fix them. We even offer developer resources on how not to rely on that webkit prefix. In almost all cases we get asked what the business benefit of that is.

Sure, we have a lot of small victories, but it is grim to show someone the web these days. In our bubble, things are great and amazing.

How did that happen? We have the technology. We have the knowledge. We have the information out there in hundreds of talks, books and posts. The question is: who do we reach? Who builds this horrible web? Or who builds great stuff at home and gets mostly frustrated at work because things are beyond repair?

When I say that I don’t want a blog as an app I am not saying that you shouldn’t supercharge your blog. I am not forbidding anyone to publish and use technology.

But, I don’t think that is enough. We need commercial successes. We need to beat the marketing of native apps. We need to debunk myths of native convenience by building better, web based, solutions.

We’ve proven the web works well for self-publishing. Now we need to go where people build iOS and Android apps to give their company an online presence with richer functionality. We need these people to understand that the web is a great way to publish and to get users who do things with your product. We think this is common sense, but it isn’t. We have to remind people again about how great the web is. And how much easier it is using web technology.

For this, we first and foremost need to find out how to make money on the web on a huge scale. We need to find a way for people to pay for content instead of publishers showing a lot of ads as the simpler way. We need to show numbers and successes of commercial, existing products. Google is spending a lot of money on that with PWA roadshows. Every big web company does. I also work directly with partners to fix their web products across browsers and turn them into PWAs. And there are some great first case studies available. We need more of those.

I want developers not to have to use their spare time and learn new web technologies on their personal projects. I want companies to understand the value of PWA and – most importantly – fix the broken nonsense they have on the web and keep in maintenance mode.

If you think these and other PWA case studies happened by chance and because the people involved just love the web – think again. A lot of effort goes into convincing companies to do the “very obvious” thing. A lot of time and money is involved. A lot of internal developers put their careers on the line to tell their superiors that there is another way instead of delivering what’s wanted. We want this to work, and we need to remind people that quality means effort – not just adding a manifest and a service worker to an existing product that has been in maintenance hell for years.

Jeremy wants a certain world:

I, for one, would prefer a reality where my home screen isn’t filled with the icons of startups and companies that have fulfilled the criteria of the gatekeepers. But a home screen filled with the faces of people who didn’t have to ask anyone’s permission to publish? That’s what I want!

I want more. I want the commercial world and the marketing hype of “online” not to be about native apps and closed stores. I don’t want people to think it is OK to demand an iPhone to access their content. I don’t want companies to waste money trying to show up in an app store when they could easily be found on the web. I think we already have the world Jeremy describes. And – to repeat – I don’t want anyone not to embrace this if they want to or they think it is a good idea.

Nothing necessary to turn your current web product into a PWA is a waste. All steps are beneficial to the health and quality of your product. That is the great part. But it does mean certain quality goals should be met to avoid users with an “app” expectation not getting what they want. We have to discuss these quality goals and right now quite a few companies roll out their ideas. This doesn’t mean we censor the web or lock out people (there are other people working on that outside of companies). It means we don’t want another “HTML5 Apps are a bad experience” on our hands.

I’ve been running this blog for ages. I learned a lot. That’s great. But I don’t want the web to be a thing only for people who already believe in it. I want everyone to use it instead of silos like app stores – especially commercial companies. We’ve been shirking the responsibility of making the enterprise and the products people use day to day embrace the web for too long. The current demise of the native/app store model is a great opportunity to do this. I want everyone with the interest and knowledge to be part of it.

I can’t see myself ever having a phone full of the faces of people. That is what the address book is for. The same way, my ebook reader (which is my browser) is what I use to read books – I don’t have an app for each author.

I like the concept of having a feed reader to check in bulk what people that inspire me are up to. I like reading aggregators that do the searching for me. And if I want to talk to the people behind those publications I contact them and talk to them. Or – even better – meet them.

An app – to me – is a thing I do something with. This blog is an app for me, but not for others. You can’t edit. I even turned off comments as I spent more time moderating than answering. That’s why it isn’t a PWA. I could turn it into one, but then I would feel that I should publish a lot more once you promoted me to be on your home screen.

So when I talk about personal blogs not being PWAs to me, this is what I mean. Apps to me are things to do things with. If I can’t do anything with it except for reading and sharing I don’t stop you from publishing it as a PWA. But I am not likely to install it. The same way I don’t download the Kim Kardashian app or apps of bands.

This is not about your right to publish. It is about earning the space in the limited environment that is our users’ home screens, docks and desktops. If you’re happy to have that full of friends’ blogs or people you like – great. I’d rather soon see phones in shops that come out of the box with PWAs for people to do things with. Not native apps that need a 200MB update the first time you connect, won’t get that upgrade and become a security risk. I want web access to be front and centre on new devices. And to do that, we need to aim higher and do better.

View full post on Christian Heilmann


Better keyboard navigation with progressive enhancement


When building interfaces, it is important to also consider people who can only use a keyboard to interact with your products. This is a basic accessibility need, and in most cases it isn’t hard to allow for basic keyboard access. It means first and foremost using keyboard accessible elements for interaction:

  • anchors with a valid href attribute if you want the user to go somewhere
  • buttons when you want to execute your own code and stay in the document

You can make almost everything keyboard accessible using the roving tab index technique, but why bother when there are HTML elements that can do the same?
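For reference, the core of the roving tabindex technique mentioned above is that only one item keeps `tabindex="0"` while all others get `tabindex="-1"`, and arrow keys move that single tab stop. This is a minimal sketch, not the full technique; the DOM wiring is shown as comments so the index helper stays self-contained:

```javascript
// Pure helper: compute the next roving index, wrapping around at the ends.
function nextIndex(current, delta, length) {
  return (current + delta + length) % length;
}

// Browser-only wiring (illustrative):
// items[current].setAttribute('tabindex', '-1');
// current = nextIndex(current, key === 'ArrowRight' ? 1 : -1, items.length);
// items[current].setAttribute('tabindex', '0');
// items[current].focus();

console.log(nextIndex(0, -1, 5)); // 4 – wraps from the first item to the last
console.log(nextIndex(4, 1, 5));  // 0 – wraps from the last item back to the first
```

The point stands though: anchors and buttons give you all of this for free.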

Making it visual

Using the right elements isn’t quite enough though; you also need to make it obvious where a keyboard user is in a collection of elements. Browsers do this by putting an outline around active elements. Whilst dead useful this has always been a thorn in the side of people who want to control the whole visual display of any interaction. You can remove this visual aid by setting the CSS outline property to none, which is a big accessibility issue unless you also provide an alternative.

By using the most obvious HTML elements for the job and some CSS to ensure that not only hover but also focus states are defined, we can make it easy for our users to navigate a list of items by tabbing through them. Shift-Tab allows you to go backwards. You can try it here; the HTML is pretty straightforward.


example how to tab through a list of buttons

Using a list gives our elements a hierarchy and a way to navigate with accessible technology that a normal browser doesn’t have. It also gives us a lot of HTML elements to apply styling to. With a few styles, we can turn this into a grid, using less vertical space and allowing for more content in a small space.

ul, li {
  margin: 0;
  padding: 0;
  list-style: none;
}
button {
  border: none;
  display: block;
  background: goldenrod;
  color: white;
  width: 90%;
  height: 30px;
  margin: 5%;
  transform: scale(0.8);
  transition: 300ms;
}
button:hover, button:focus {
  transform: scale(1);
  outline: none;
  background: powderblue;
  color: #333;
}
li {
  float: left;
}
/* grid magic by @heydonworks */
li {
  width: calc(100% / 4);
}
li:nth-child(4n+1):nth-last-child(1) {
  width: 100%;
}
li:nth-child(4n+1):nth-last-child(1) ~ li {
  width: 100%;
}
li:nth-child(4n+1):nth-last-child(2) {
  width: 50%;
}
li:nth-child(4n+1):nth-last-child(2) ~ li {
  width: 50%;
}
li:nth-child(4n+1):nth-last-child(3) {
  width: calc(100% / 4);
}
li:nth-child(4n+1):nth-last-child(3) ~ li {
  width: calc(100% / 4);
}

The result looks pretty fancy and it is very obvious where we are in our journey through the list.

tabbing through a grid item by item

Enhancing the keyboard access – providing shortcuts

However, if I am in a grid, wouldn’t it be better if I could move in two directions with my keyboard?

Using a bit of JavaScript for progressive enhancement, we get this effect and can navigate the grid either with the cursor keys or by using WASD:

navigating inside a grid of elements using the cursor keys going up, down, left and right

It is important to remember here that this is an enhancement. Our list is still fully accessible by tabbing and should JavaScript fail for any of the dozens of reasons it can, we lost a bit of convenience instead of having no interface at all.

I’ve packaged this up in a small open source, vanilla, dependency free JavaScript called gridnav and you can get it on GitHub. All you need to do is to call the script and give it a selector to reach your list of elements.

<ul id="links" data-amount="5" data-element="a">
  <li><a href="#">1</a></li>
  <li><a href="#">2</a></li>
  ...
  <li><a href="#">25</a></li>
</ul>
<script src="gridnav.js"></script>
<script>
  var linklist = new Gridnav('#links');
</script>

You define the amount of elements in each row and the keyboard accessible element as data attributes on the list element. These are optional, but make the script faster and less error prone. There’s an extensive README explaining how to use the script.

How does it work?

When I started to ponder how to do this, I started like any developer does: trying to tackle the most complex way. I thought I needed to navigate the DOM a lot using parent nodes and siblings with lots of comparing of positioning and using getBoundingClientRect.

Then I took a step back and realised that it doesn’t matter how we display the list. In the end, it is just a list and we need to navigate this one. And we don’t even need to navigate the DOM, as all we do is go from one element in a collection of buttons or anchors to another. All we need to do is to:

  1. Find the element we are on ( gives us that).
  2. Get the key that was pressed
  3. Depending on the key move to the next, previous, or skip a few elements to get to the next row

Like this (you can try it out here):

moving in the grid is the same as moving along an axis

The amount of elements we need to skip is defined by the amount of elements in a row. Going up is going n elements backwards and going down is n elements forwards in the collection.

diagram of navigation in the grid
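In code, the movement sketched above boils down to simple index arithmetic. Here's a sketch assuming a hypothetical grid of 12 items with 4 per row:

```javascript
// With 4 items per row, each arrow key is just an offset into the flat list.
var amount = 4;
var offsets = { up: -amount, down: amount, right: 1, left: -1 };

function move(index, direction, total) {
  var next = index + offsets[direction];
  // Stay put if the target falls outside the collection.
  return (next >= 0 && next < total) ? next : index;
}

console.log(move(5, 'up', 12));   // 1 – one row up
console.log(move(5, 'down', 12)); // 9 – one row down
console.log(move(0, 'left', 12)); // 0 – no element before the first
```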

The full code is pretty short if you use some tricks:

  var list = document.querySelector('ul');
  var items = list.querySelectorAll('button');
  var amount = Math.floor(
        list.offsetWidth /
        list.firstElementChild.offsetWidth
      );
  var codes = {
    38: -amount,
    40: amount,
    39: 1,
    37: -1
  };
  for (var i = 0; i < items.length; i++) {
    items[i].index = i;
  }
  function handlekeys(ev) {
    var keycode = ev.keyCode;
    if (codes[keycode]) {
      var t =;
      if (t.index !== undefined) {
        if (items[t.index + codes[keycode]]) {
          items[t.index + codes[keycode]].focus();
        }
      }
    }
  }
  list.addEventListener('keyup', handlekeys);

What’s going on here?

We get a handle to the list and cache all the keyboard accessible elements to navigate through

  var list = document.querySelector('ul');
  var items = list.querySelectorAll('button');

We calculate the amount of elements to skip when going up and down by dividing the width of the list element by the width of the first child element that is an HTML element (in this case this will be the LI)

  var amount = Math.floor(
        list.offsetWidth /
        list.firstElementChild.offsetWidth
      );

Instead of creating a switch statement or lots of if statements for keyboard handling, I prefer to define a lookup table. In this case, it is called codes. The key code for up is 38, 40 is down, 39 is right and 37 is left. If we now get codes[37], for example, we get -1, which is the amount of elements to move in the list.

  var codes = {
    38: -amount,
    40: amount,
    39: 1,
    37: -1
  };

We can use to get which button the key was pressed on, but we don’t know where in the list it is. To avoid having to loop through the list on each keystroke, it makes more sense to loop through all the buttons once and store their position in the list in an index property on the button itself.

  for (var i = 0; i < items.length; i++) {
    items[i].index = i;
  }

The handlekeys() function does the rest. We read the code of the key pressed and compare it with the codes lookup table. This also means we only react to arrow keys in our function. We then get the current element the key was pressed on and check if it has an index property. If it has one, we check if an element exists in the collection in the direction we want to move. We do this by adding the index of the current element to the value returned from the lookup table. If the element exists, we focus on it.

  function handlekeys(ev) {
    var keycode = ev.keyCode;
    if (codes[keycode]) {
      var t =;
      if (t.index !== undefined) {
        if (items[t.index + codes[keycode]]) {
          items[t.index + codes[keycode]].focus();
        }
      }
    }
  }

We apply a keyup event listener to the list and we’re done 🙂

  list.addEventListener('keyup', handlekeys);

If you feel like following this along live, here’s a quick video tutorial of me explaining all the bits and bobs.

The video has a small bug in the final code as I am not comparing the count property to undefined, which means the keyboard functionality doesn’t work on the first item (as 0 is falsy).
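The falsy-zero pitfall behind that bug is easy to demonstrate in isolation (a sketch, not the gridnav code itself):

```javascript
var index = 0; // the first item in the collection

// Buggy check: 0 is falsy, so the first item is silently skipped.
var buggy = index ? 'handled' : 'skipped';

// Correct check: compare explicitly against undefined.
var fixed = (index !== undefined) ? 'handled' : 'skipped';

console.log(buggy); // 'skipped'
console.log(fixed); // 'handled'
```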

View full post on Christian Heilmann


Better than Gzip Compression with Brotli

HTTP Compression

Brotli is an open source data compression library formally specified in an IETF draft. It can be used to compress HTTPS responses sent to a browser, in place of gzip or deflate.

Support for Brotli content encoding has recently landed and is now testable in Firefox Developer Edition (Firefox 44). In this post, we’ll show you an example of how to set up a simple HTTPS server that takes advantage of Brotli when supported by the client.

When serving content over the web, an easy win or low-hanging fruit is turning on server-side compression.  Somewhat unintuitively, doing extra work to compress an HTTP response server side, and decompress the result client side, is faster than not doing the additional work.  This is due to bandwidth constraints over the wire.  Adding compression improves transfer times when the content is large, isn’t already compressed (reapplying compression doesn’t buy you anything, unless you’re Pied Piper), and the cost to communicate is relatively large.
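A back-of-envelope calculation shows why. With made-up but plausible numbers (all hypothetical, for illustration only), the CPU time spent compressing and decompressing is dwarfed by the transfer time saved:

```javascript
// Hypothetical numbers for illustration only.
var sizeKB = 500;      // uncompressed response
var ratio = 0.25;      // compressed output is 25% of the original
var linkKBps = 250;    // ~2 Mbit/s connection
var compressMs = 40;   // server-side compression cost
var decompressMs = 10; // client-side decompression cost

var plainMs = (sizeKB / linkKBps) * 1000;
var compressedMs = ((sizeKB * ratio) / linkKBps) * 1000 + compressMs + decompressMs;

console.log(plainMs);      // 2000
console.log(compressedMs); // 550
```

On a fast local network the numbers flip, which is why "the cost to communicate is relatively large" is part of the condition above.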

The way the User Agent, client, or Web browser signals to the server what kinds of compressed content it can decompress is with the `Accept-Encoding` header.  Let’s see what such a header might look like in Firefox 43 (prior to Brotli support) dev tools.

Accept-Encoding FF 41

And in Firefox 44 (with Brotli support):

Accept Encoding FF 44

Just because the client supports these encodings doesn’t mean that’s what they’ll get.  It’s up to the server to decide which encoding to choose.  The server might not even support any form of compression.

The server then responds with the `Content-Encoding` header specifying what form of compression was used, if any at all.

Content Encoding

While the client sends a list of encodings it supports, the server picks one to respond with.  Responding with an unsupported content encoding, or with a header that doesn’t match the actual encoding of the content, can lead to decompression errors and the summoning of ZALGO.

Zalgo Decompression Errors

Most browsers support gzip and deflate (as well as uncompressed content, of course).  Gecko-based browsers such as Firefox 44+ now support “br” for Brotli.  Opera beta 33 has support for lzma (note: lzma1, not lzma2) and sdch. Here’s the relevant Chromium bug for Brotli support.

Creating Our Server

Here’s a simple Node.js server that responds with 5 paragraphs of generated Lorem Ipsum text.  Note: you’ll need Node.js installed, I’m using Node v0.12.7.  You’ll need a C++ compiler installed for installing the native addons I’m using:

npm install accepts iltorb lzma-native

Finally, you’ll need to generate some TLS certificates to hack on this since Firefox 44+ supports Brotli compression over HTTPS, but not HTTP.  If you’re following along at home, and aren’t seeing Accept-Encoding: “br”, make sure you’re connecting over HTTPS.

You can follow the tutorial here for generating self-signed certs.  Note that you’ll need openssl installed, and that browsers will throw up scary warnings since you’re not recognized as being part of their Certificate Authority “cartel.”  These warnings can be safely ignored when developing locally with certificates you generated yourself, but don’t go around ignoring certificate errors when browsing the web.
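If the linked tutorial is unavailable, a one-liner along these lines does the job (a sketch; the exact flags depend on your openssl version, and the file names simply match what the server code below reads):

```shell
# Generate a self-signed key and certificate for LOCAL DEVELOPMENT ONLY.
# File names match the server code below (https-key.pem, https-cert.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout https-key.pem -out https-cert.pem \
  -subj '/CN=localhost'
```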

Here’s the code for our simple server.

#!/usr/bin/env node

var accepts = require('accepts');
var fs = require('fs');
var https = require('https');
var brotli = require('iltorb').compressStream;
var lzma = require('lzma-native').createStream.bind(null, 'aloneEncoder');
var gzip = require('zlib').createGzip;

var filename = 'lorem_ipsum.txt';

function onRequest (req, res) {
  res.setHeader('Content-Type', 'text/html');

  var encodings = new Set(accepts(req).encodings());

  if (encodings.has('br')) {
    res.setHeader('Content-Encoding', 'br');
    fs.createReadStream(filename).pipe(brotli()).pipe(res);
  } else if (encodings.has('lzma')) {
    res.setHeader('Content-Encoding', 'lzma');
    fs.createReadStream(filename).pipe(lzma()).pipe(res);
  } else if (encodings.has('gzip')) {
    res.setHeader('Content-Encoding', 'gzip');
    fs.createReadStream(filename).pipe(gzip()).pipe(res);
  } else {
    fs.createReadStream(filename).pipe(res);
  }
}

var certs = {
  key: fs.readFileSync('./https-key.pem'),
  cert: fs.readFileSync('./https-cert.pem'),
};

https.createServer(certs, onRequest).listen(3000);

Then we can navigate to https://localhost:3000 in our browser.  Let’s see what happens when I visit the server in various browsers.

Firefox 45 uses Brotli:

Firefox 45 Brotli

Opera Beta 33 uses lzma:

Opera 33 lzma

Safari 9 and Firefox 41 use gzip:

Safari 9 gzip

We can compare the size of the asset before and after compression in Firefox’s dev tools, under the network tab, by comparing the Transferred vs Size columns.  The transferred column shows the bytes of the compressed content transferred over the wire, and the size column shows the asset’s decompressed size.  For content sent without any form of compression, these two should be the same.

Transferred vs Size

We can also verify using the curl command line utility:

$ curl https://localhost:3000 --insecure -H 'Accept-Encoding: br' -w '%{size_download}' -so /dev/null

$ curl https://localhost:3000 --insecure -H 'Accept-Encoding: lzma' -w '%{size_download}' -so /dev/null

$ curl https://localhost:3000 --insecure -H 'Accept-Encoding: gzip' -w '%{size_download}' -so /dev/null

$ curl https://localhost:3000 --insecure -w '%{size_download}' -so /dev/null

Notes about compression vs performance

The choice of which compression scheme to use does have implications.  Node.js ships with zlib, but including native Node add-ons for lzma and brotli will slightly increase distribution size.  The time it takes the various compression engines to run can vary wildly, and the memory usage while compressing content can hit physical limits when serving numerous requests.

In the previous example, you might have noticed that lzma did not beat gzip in compression out of the box, and brotli beat it only marginally.  You should note that all compression engines have numerous configuration options that can be tweaked to trade off performance against memory usage, amongst other things.  Measuring the change in response time, memory usage, and Weissman score is something we’ll take a look at next.

The following numbers were gathered from running

$ /usr/bin/time -l node server.js &
$ wrk -c 100 -t 6 -d 30s -H 'Accept-Encoding: <either br lzma gzip or none>' https://localhost:3000
$ fg

The following measurements were taken on the following machine: Early 2013 Apple MacBook Pro OSX 10.10.5 16GB 1600 MHz DDR3 2.7 GHz Core i7 4-Core with HyperThreading.

Compression Method | Requests/Second | Bytes Transferred (MB/s) | Max RSS (MB) | Avg. Latency (ms)
br-stream          |             203 |                     0.25 |      3485.54 |            462.57
lzma               |             233 |                     0.37 |       330.29 |            407.71
gzip               |            2276 |                     3.44 |       204.29 |             41.86
none               |            4061 |                    14.06 |       125.1  |             23.45
br-static          |            4087 |                     5.85 |       105.58 |             23.3

Some things to note looking at the numbers:

  • There’s a performance cliff for requests/second for compression other than gzip.
  • There’s significantly more memory usage for compression streams. The 3.4 GB peak RSS for brotli looks like a memory leak that’s been reported upstream (my monocle popped out when I saw that).
  • The latency measured is only from localhost; across the Internet it would be at least this high, probably much more. This is the Waiting timing under Dev Tools > Network > Timings.
  • If we compress static assets ahead of time using brotli built from source, we get fantastic results. Note: we can only do this trick for static responses.
  • Serving statically-brotli-compressed responses performs as well as serving static uncompressed assets, while using slightly less memory. This makes sense, since there are fewer bytes to transfer! The lower number of bytes transferred per second makes that variable seem independent of the number of bytes in the file to transfer.

For compressing static assets ahead of time, we can build brotli from source, then run:

$ ./bro --input lorem_ipsum.txt --output

and modify our server:

< var brotli = require('iltorb').compressStream;
< var filename = 'lorem_ipsum.txt';
---
> var filename = '';
< fs.createReadStream(filename).pipe(brotli()).pipe(res);
---
> fs.createReadStream(filename).pipe(res);


Like other HTTP compression mechanisms, using Brotli with HTTPS can make you vulnerable to BREACH attacks. If you want to use it, you should apply other BREACH mitigations.


For 5 paragraphs of lorem ipsum, Brotli beats gzip by 5%. If I run the same experiment with the front page of from 10/01/2015, Brotli beats gzip by 22%! Note that both measurements were using the compressors out of the box without any tweaking of configuration values.

Whether or not a significant portion of your userbase is using a browser that supports Brotli as a content encoding, whether the added latency and memory costs are worth it, and whether your HTTPS server or CDN support Brotli is another story. But if you’re looking for better than gzip performance, Brotli looks like a possible contender.

View full post on Mozilla Hacks – the Web developer blog


Launching Open Web Apps feedback channels – help us make the web better!

About three months ago we launched a feedback channel for the Firefox Developer Tools, and since it was a great success, we’re happy to announce a new one for Open Web Apps!

For Developer Tools, we have, and keep on getting, excellent suggestions, which has led to features based on ideas from the channel being implemented in both Firefox 32 & 33 – the first ideas shipped in Firefox only 6 weeks after we launched the feedback channel!

Your feedback as developers is crucial to building better products and a better web, so we want to take this one step further.

A channel for Open Web Apps

We have now opened another feedback channel on UserVoice for Open Web Apps.

It is a place for constructive feedback around Open Web Apps with ideas and feature suggestions for how to make them more powerful and a first-class citizen on all platforms; desktop, mobile and more.

What we cover in the feedback channel is collecting all your ideas and also updating you on the different areas we are working on. In many cases these features are non-standard, yet: we are striving to standardize Apps, APIs, and features through the W3C/WHATWG – so expect these features to change as they are transitioned to become part of the Web platform.

If you want to learn more about the current state, there’s lots of documentation for Open Web Apps and WebAPIs on MDN.

Contributing is very easy!

If you have an idea for how you believe Open Web Apps should work, just go to the feedback channel, enter a name and an e-mail address (no need to create an account!) and you’re good to go!

In addition to that, you have 10 votes assigned which you can use to vote for other existing ideas there.

Just make sure that your idea is constructive and has a limited scope, so it’s actionable; i.e. if you have a list of 10 things you are missing, enter them as separate ideas so we can follow up on them individually.

We don’t want to hear “the web sucks” – we want you to let us know how you believe it should be to be amazing.

What do you want the web to be like?

With all the discussions about web vs. native, developers choosing iOS, Android or the web as their main target, PhoneGap as the enabler and much more:

Let us, and other companies building for the web, know what the web needs to be your primary developer choice. We want the web to be accessible and fantastic on all platforms, by all providers.

Share your ideas and help us shape the future of the web!

View full post on Mozilla Hacks – the Web developer blog


How can we write better software? – Interview series, part 2 with Brian Warner

This is part 2 of a new Interview series here at Mozilla Hacks.

“How can we, as developers, write more superb software?”

A simple question without a simple answer. Writing good code is hard, even for developers with years of experience. Luckily, the Mozilla community is made up of some of the best development, QA and security folks in the industry.

This is part two in a series of interviews where I take on the role of an apprentice to learn from some of Mozilla’s finest.

Introducing Brian Warner

When my manager and I first discussed this project, Brian was the first person I wanted to interview. Brian probably doesn’t realize it, but he has been one of my unofficial mentors since I started at Mozilla. He is an exceptional teacher, and has the unique ability to make security approachable.

At Mozilla, Brian designed the pairing protocol for the “old” Firefox Sync, designed the protocol for the “new” Firefox Sync, and was instrumental in keeping Persona secure. Outside of Mozilla, Brian co-founded the Tahoe-LAFS project, and created Buildbot.

What do you do at Mozilla?

My title is staff security engineer in the Cloud Services group. I analyse and develop protocols for securely managing passwords and account data, and I implement those protocols in different fashions. I also review others’ code, I look at external projects to figure out whether it’s appropriate to incorporate them, and I try to stay on top of security failures like 0-days and problems in the wild that might affect us, as well as tools and algorithms that we might be able to use.

UX vs Security: Is it a false dichotomy? Some people have the impression that for security to be good, it must be difficult to use.

There are times when I think that it’s a three-way tradeoff. Instead of being x-axis, y-axis, and a diagonal line that doesn’t touch zero, sometimes I think it’s a three-way thing where the other axis is how much work you want to put into it or how clever you are or how much user research and experimentation you are willing to do. Stuff that engineers are typically not focused on, but that UX and psychologists are. I believe, maybe it’s more of a hope than a belief, that if you put enough effort into that, then you can actually find something that is secure and usable at the same time, but you have to do a lot more work.

The trick is to figure out what people want to do and find a way of expressing whatever security decisions they have to make into a normal part of their work flow. It’s like when you lend your house key to a neighbour so they can water your plants when you are away on vacation, you’ve got a pretty good idea of what power you are handing over.

There are some social constructs surrounding that like, “I don’t think you’re going to make a copy of that key and so when I get it back from you, you no longer have that power that I granted to you.” There are patterns in normal life with normal non-computer behaviours and objects that we developed some social practices around, I think part of the trick is to use that and assume that people are going to expect something that works like that and then find a way to make the computer stuff more like that.

Part of the problem is that we end up asking people to do very unnatural things because it is hard to imagine or hard to build something that’s better. Take passwords. Passwords are a lousy authentication technology for a lot of different reasons. One of them being that in most cases, to exercise the power, you have to give that power to whoever it is you are trying to prove to. It’s like, “let me prove to you I know a secret”…”ok, tell me the secret.” That introduces all these issues like knowing how to correctly identify who you are talking to, and making sure nobody else is listening.

In addition to that, the best passwords are going to be randomly generated by a computer and they are relatively long. It’s totally possible to memorize things like that but it takes a certain amount of exercise and practice and that is way more work than any one program deserves.

But, if you only have one such password and the only thing you use it on is your phone, then your phone is now your intermediary that manages all this stuff for you, and then it probably would be fair (to ask users to spend more energy managing that password). And it’s clear that your phone is sort of this extension of you, better at remembering things, and that the one password you need in this whole system is the bootstrap.

So some stuff like that, and other stuff like escalating effort in rare circumstances. There are a lot of cases where what you do on an everyday basis can be really easy and lightweight, and it’s only when you lose the phone that you have to go back to a more complicated thing. Just like you only carry so much cash in your wallet, and every once in a while you have to go to a bank and get more.

It’s stuff like that I think it’s totally possible to do, but it’s been really easy to fall into bad patterns like blaming the user or pushing a lot of decisions onto the user when they don’t really have enough information to make a good choice, and a lot of the choices you are giving them aren’t very meaningful.

Do you think many users don’t understand the decisions and tradeoffs they are being asked to make?

I think that’s very true, and I think most of the time it’s an inappropriate question to ask. It’s kind of unfair. Walking up to somebody and putting them in this uncomfortable situation – do you like X or do you like Y – is a little bit cruel.

Another thing that comes to mind is permission dialogs, especially on Windows boxes. They show up all the time, even just to do really basic operations. It’s not like you’re trying to do something exotic or crazy. These dialogs purport to ask the user for permission, but they don’t explain the context or reasons or consequences enough to make it a real question. They’re more like a demand or an ultimatum. If you say “no” then you can’t get your work done, but if you say “yes” then the system is telling you that bad things will happen and it’s all going to be your fault.

It’s intended to give the user an informed choice, but it is this kind of blame the user, blame the victim pattern, where it’s like “something bad happened, but you clicked on the OK button, you’ve taken responsibility for that.” The user didn’t have enough information to do something and the system wasn’t well enough designed that they could do what they wanted to do without becoming vulnerable.

Months before “new” Sync ever saw the light of day, the protocol was hashed out in extremely vocal and public forum. It was the exact opposite of security through obscurity. What did you hope to accomplish?

There were a couple of different things that I was hoping from that discussion. I pushed all that stuff to be described and discussed publicly because it’s the right thing to do, it’s the way we develop software, you know, it’s the open source way. And so I can’t really imagine doing it any other way.

The specific hopes that I had for publishing that stuff was to try to solicit feedback and get people to look for basic design flaws. I wanted to get people comfortable with the security properties, especially because new Sync changes some of them. We are switching away from pairing to something based on passwords. I wanted people to have time to feel they understood what those changes were and why we were making them. We put the design criteria and the constraints out there so people could see we kind of have to switch to a password to meet all of the other goals, and what’s the best we can do given security based on passwords.

Then the other part is that having that kind of public discussion and getting as many experienced people involved as possible is the only way that I know of to develop confidence that we’re building something that’s correct and not broken.

So it is really just more eyeballs…

Before a protocol or API designer ever sits down and writes a spec or line of code, what should they be thinking about?

I’d say think about what your users need. Boil down what they are trying to accomplish into something minimal and pretty basic. Figure out the smallest amount of code, the smallest amount of power, that you can provide that will meet those needs.

This is like the agile version of developing a protocol.

Yeah. Minimalism is definitely useful. Once you have the basic API that enables you to do what needs to be done, then think about all of the bad things that could be done with that API. Try and work out how to prevent them, or make them too expensive to be worthwhile.

A big problem with security is that sometimes you ask “what are the chances that problem X will happen?” Suppose you design something and there is a 1-in-1000 chance that a particular set of inputs will cause a particular problem. If the inputs really are random, then 1 in 1000 may be OK, and 1 in a million may be OK. But if an attacker gets to control the inputs, then it’s no longer 1 in 1000; it’s 1 in however many times the attacker chooses to make it 1.
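The difference between random and attacker-chosen inputs can be made concrete with a toy Python sketch (entirely hypothetical, not drawn from any real protocol): a weak checksum that almost never collides on random inputs, but that an adversary who controls the input defeats on the first try.

```python
def weak_checksum(data: bytes) -> int:
    # Sum of bytes mod 256: random inputs collide about 1 time in 256.
    return sum(data) % 256

original = b"pay alice $10"
forged = b"pay mallory $99"

# The attacker doesn't roll the dice: they compute the one padding byte
# that forces the checksums to match, every single time.
diff = (weak_checksum(original) - weak_checksum(forged)) % 256
forged += bytes([diff])

assert weak_checksum(forged) == weak_checksum(original)
```

Against random corruption, “1 in 256” might be an acceptable failure rate; against an adversary choosing the inputs, the effective probability is 1.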

It’s a game of who is cleverer and who is more thorough. It’s frustrating to have to do this case analysis to figure out every possible thing that could happen, every state it could get into, but if somebody else out there is determined to find a hole, that’s the kind of analysis they are going to do. And if they are more thorough than you are, then they’ll find a problem that you failed to cover.

Is this what is meant by threat modelling?

Yeah, different people use the term in different ways. I think of it as when you are laying out the system, you are setting up the ground rules. You are saying there is going to be this game. In this game, Alice is going to choose a password and Bob is trying to guess her password, and so on.

You are defining what the ground rules are. So sometimes the rules say things like … the attacker doesn’t get to run on the defending system, their only access is through this one API call, and that’s the API call that you provide for all of the good players as well, but you can’t tell the difference between the good guy and the bad guy, so they’re going to use that same API.

So then you figure out the security properties if the only thing the bad guy can do is make API calls, so maybe that means they are guessing passwords, or it means they are trying to overflow a buffer by giving you some input you didn’t expect.

Then you step back and say “OK, what assumptions are you making here, are those really valid assumptions?” You store passwords in the database with the assumption that the attacker won’t ever be able to see the database, and then some other part of the system fails, and whoops, now they can see the database. OK, roll back that assumption, now you assume that most attackers can’t see the database, but sometimes they can, how can you protect the stuff that’s in the database as best as possible?
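A minimal sketch of that rolled-back assumption, using only the Python standard library: hash passwords with a per-user random salt and a deliberately slow KDF, so that even if the database does leak, each guess costs the attacker real work. (The parameter choices here are illustrative, not a recommendation.)

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    # A fresh random salt per user defeats precomputed tables; the high
    # iteration count makes each offline guess expensive.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```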

Other stuff like, “what are all the different sorts of threats you are intending to defend against?” Sometimes you draw a line in the sand and say “we are willing to try and defend against everything up to this level, but beyond that you’re hosed.” Sometimes it’s a very practical distinction like “we could try to defend against that but it would cost us 5x as much.”

Sometimes what people do is try and estimate the value to the attacker versus the cost to the user, it’s kind of like insurance modelling with expected value. It will cost the attacker X to do something and they’ve got an expected gain of Y based on the risk they might get caught.

Sometimes the system can be rearranged so that incentives encourage them to do the good thing instead of the bad thing. Bitcoin was very carefully thought through in this space where there are these clear points where a bad guy, where somebody could try and do a double spend, try and do something that is counter to the system, but it is very clear for everybody including the attacker that their effort would be better spent doing the mainstream good thing. They will clearly make more money doing the good thing than the bad thing. So, any rational attacker will not be an attacker anymore, they will be a good participant.

How can a system designer maximise their chances of developing a reasonably secure system?

I’d say the biggest guideline is the Principle of Least Authority. POLA is sometimes how that is expressed. Any component should have as little power as necessary to do the specific job that it needs to do. That has a bunch of implications and one of them is that your system should be built out of separate components, and those components should actually be isolated so that if one of them goes crazy or gets compromised or just misbehaves, has a bug, then at least the damage it can do is limited.

The example I like to use is a decompression routine. Something like gzip, where you’ve got bytes coming in over the wire, and you are trying to expand them before you try and do other processing. As a software component, it should be this isolated little bundle with two wires. One side should have a wire coming in with compressed bytes, and the other side should have decompressed data coming out. It’s got to allocate memory and do all kinds of format processing and lookup tables and whatnot, but no matter how weird the input or how malicious the box, nothing it does can have any effect other than spitting bytes out the other side.

It’s a little bit like Unix process isolation, except that a process can do syscalls that can trash your entire disk, and do network traffic and do all kinds of stuff. This is just one pipe in and one pipe out, nothing else. It’s not always easy to write your code that way, but it’s usually better. It’s a really good engineering practice because it means when you are trying to figure out what could possibly be influencing a bit of code you only have to look at that one bit of code. It’s the reason we discourage the use of global variables, it’s the reason we like object-oriented design in which class instances can protect their internal state or at least there is a strong convention that you don’t go around poking at the internal state of other objects. The ability to have private state is like the ability to have private property where it means that you can plan what you are doing without potential interference from things you can’t predict. And so the tractability of analysing your software goes way up if things are isolated. It also implies that you need a memory safe language…
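A rough sketch of the two-wire idea (a toy under stated assumptions, not how any particular product does it): run the decompressor in a child process whose only channels are stdin and stdout. As Brian notes, an ordinary OS process can still make syscalls, so real confinement would layer something like seccomp on top of this.

```python
import subprocess
import sys
import zlib

def isolated_decompress(compressed: bytes) -> bytes:
    # The worker's whole world is one pipe in (compressed bytes) and one
    # pipe out (plain bytes); it shares no state with the caller.
    worker = (
        "import sys, zlib; "
        "sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))"
    )
    result = subprocess.run(
        [sys.executable, "-c", worker],
        input=compressed,
        capture_output=True,
        timeout=10,  # a hostile input can't wedge the caller forever
    )
    if result.returncode != 0:
        # Malformed or malicious input: the blast radius is one dead child.
        raise ValueError("decompression failed")
    return result.stdout

data = zlib.compress(b"hello POLA " * 100)
assert isolated_decompress(data) == b"hello POLA " * 100
```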

Big, monolithic programs in a non memory safe language are really hard to develop confidence in. That’s why I go for higher level languages that have memory safety to them, even if that means they are not as fast. Most of the time you don’t really need that speed. If you do, it’s usually possible to isolate the thing that you need, into a single process.

What common problems do you see out on the web that violate these principles?

Well in particular, the web is an interesting space. We tend to use memory safe languages for the receiver.

You mean like Python and JavaScript.

Yeah, and we tend to use more object-oriented stuff, more isolation. The big problems that I tend to see on the web are failures to validate and sanitize inputs, or failures to escape output, which leads to injection attacks.
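A small illustration of the injection problem (a generic example, not drawn from any Mozilla codebase): splicing untrusted input into a query string turns data into code, while a parameterized query keeps it data.

```python
import sqlite3
from html import escape

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

attacker = "alice' OR '1'='1"

# Unsafe: the input is spliced into the SQL text and becomes code.
rows_bad = db.execute(
    "SELECT name FROM users WHERE name = '%s'" % attacker
).fetchall()

# Safe: a parameterized query keeps the input as data.
rows_good = db.execute(
    "SELECT name FROM users WHERE name = ?", (attacker,)
).fetchall()

assert rows_bad == [("alice",)]   # the injection matched every row
assert rows_good == []            # the literal string matched nothing

# The same discipline applies to HTML output: escape at the boundary.
assert escape("<script>") == "&lt;script&gt;"
```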

You have a lot of experience reviewing already written implementations, Persona is one example. What common problems do you see on each of the front and back ends?

It tends to be escaping things, or making assumptions about where data comes from, and how much an attacker gets control over if that turns out to be faulty.

Is this why you advocated making it easy to trace how the data flows through the system?

Yeah, definitely. It’d be nice if you could kind of zoom out of the code and see a bunch of little connected components with little lines running between them, and say, “OK, how did this module come up with this name string? Oh, well it came from this one. Where did it come from there?” Then trace it back to the point where that name string actually comes from a user-submitted parameter: it’s coming from the browser, and the browser is generating it as the sending domain of the postMessage. OK, how much control does the attacker have over one of those? What could they do that would be surprising to us? And then work out at any given point what the type is, see where the transition is from one type to another, and notice if there are any points where you are failing to do that transformation or are getting the types confused. Definitely, simplicity and visibility and tractable analysis are the keys.

What can people do to make data flow auditing simpler?

I think, minimising interactions between different pieces of code is a really big thing. Isolate behaviour to specific small areas. Try and break up the overall functionality into pieces that make sense.
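One way to make those crossing points easy to audit (a sketch of a convention, not an established library) is a small “tainted” wrapper: untrusted data can only escape through an explicit validator, so tracing the data flow reduces to searching for one method name.

```python
class Tainted:
    """Marks a value as attacker-influenced until explicitly validated."""

    def __init__(self, value: str):
        self._value = value

    def sanitized(self, validator) -> str:
        # The only exit is through a validator, so every trust boundary
        # in the codebase can be found by grepping for ".sanitized(".
        if not validator(self._value):
            raise ValueError("input failed validation")
        return self._value

def looks_like_domain(s: str) -> bool:
    parts = s.split(".")
    return len(parts) >= 2 and all(p.isalnum() for p in parts)

# e.g. the sending domain of a postMessage, straight from the browser
raw = Tainted("example.com")
domain = raw.sanitized(looks_like_domain)
assert domain == "example.com"
```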

What is defence in depth and how can developers use it in a system?

“Belt and suspenders” is the classic phrase. If one thing goes wrong, the other thing will protect you. You look silly if you are wearing both a belt and suspenders because they are two independent tools that help you keep your pants on, but sometimes belts break, and sometimes suspenders break. Together they protect you from the embarrassment of having your pants fall off. So defence in depth usually means don’t depend upon perimeter security.

Does this mean you should be checking data throughout the system?

There is always a judgement call about performance cost, or, the complexity cost. If your code is filled with sanity checking, then that can distract the person who is reading your code from seeing what real functionality is taking place. That limits their ability to understand your code, which is important to be able to use it correctly and satisfy its needs. So, it’s always this kind of judgement call and tension between being too verbose and not being verbose enough, or having too much checking.
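A tiny sketch of that belt-and-suspenders judgement call (all names are made up for illustration): the boundary parses and rejects malformed input, and the inner function still re-checks the one invariant it actually depends on, nothing more.

```python
def parse_age(raw: str) -> int:
    # Perimeter check: reject malformed input at the boundary.
    if not raw.isdigit():
        raise ValueError("age must be decimal digits")
    return int(raw)

def ticket_class(age: int) -> str:
    # Inner check: the caller should have validated already, but this
    # function re-asserts the single invariant it relies on.
    if not 0 <= age < 150:
        raise ValueError("age out of range")
    return "child" if age < 18 else "adult"

assert ticket_class(parse_age("42")) == "adult"
assert ticket_class(parse_age("7")) == "child"
```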

The notion of perimeter security, it’s really easy to fall into this trap of drawing this dotted line around the outside of your program and saying “the bad guys are out there, and everyone inside is good” and then implementing whatever defences you are going to do at that boundary and nothing further inside. I was talking with some folks and their opinion was that there are evolutionary biology and sociology reasons for this. Humans developed in these tribes where basically you are related to everyone else in the tribe and there are maybe 100 people, and you live far away from the next tribe. The rule was basically if you are related to somebody then you trust them, and if you aren’t related, you kill them on sight.

That worked for a while, but you can’t build any social structure larger than 100 people that way. We still think that way when it comes to computers. We think that there are “bad guys” and “good guys”, and that I only have to defend against the bad guys. But we can’t distinguish between the two of them on the internet, and the good guys make mistakes too. So the principle of least authority, and the idea of having separate software components that are all very independent and have very limited access to each other, means that if a component breaks because somebody compromised it, or somebody tricked it into behaving differently than you expected, or it’s just buggy, then the damage it can do is limited because the next component is not going to be willing to do that much for it.

Do you have a snippet of code, from you or anybody else, that you think is particularly elegant that others could learn from?

I guess one thing to show off would be the core share-downloading loop I wrote for Tahoe-LAFS.

In Tahoe, files are uploaded into lots of partially-redundant “shares”, which are distributed to multiple servers. Later, when you want to download the file, you only need to get a subset of the shares, so you can tolerate some number of server failures.

The shares include a lot of integrity-protecting Merkle hash trees which help verify the data you’re downloading. The locations of these hashes aren’t always known ahead of time (we didn’t specify the layout precisely, so alternate implementations might arrange them differently). But we want a fast download with minimal round-trips, so we guess their location and fetch them speculatively: if it turns out we were wrong, we have to make a second pass and fetch more data.

This code tries very hard to fetch the bare minimum. It uses a set of compressed bitmaps that record which bytes we want to fetch (in the hopes that they’ll be the right ones), which ones we really need, and which ones we’ve already retrieved, and sends requests for just the right ones.
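A toy version of that bookkeeping (plain Python sets stand in for Tahoe’s compressed bitmaps, and the byte offsets are invented):

```python
# Three overlapping maps of the share: speculative wants, hard needs,
# and bytes already in hand.
wanted = set(range(0, 64))      # bytes we guess will be useful
needed = set(range(16, 32))     # bytes we know we must have
received = set(range(0, 24))    # bytes fetched so far

# Next request: everything outstanding, guesses and certainties alike.
to_fetch = (wanted | needed) - received
assert min(to_fetch) == 24 and len(to_fetch) == 40

# You can't always get what you want... but did you get what you need?
still_needed = needed - received
assert still_needed == set(range(24, 32))
```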

The thing that makes me giggle about this overly clever module is that the entire algorithm is designed around Rolling Stones lyrics. I think I started with “You can’t always get what you want, but sometimes … you get what you need”, and worked backwards from there.

The other educational thing about this algorithm is that it’s too clever: after we shipped it, we found out it was actually slower than the less-sophisticated code it had replaced. Turns out it’s faster to read a few large blocks (even if you fetch more data than you need) than a huge number of small chunks (with network and disk-IO overhead). I had to run a big set of performance tests to characterize the problem, and decided that next time, I’d find ways to measure the speed of a new algorithm before choosing which song lyrics to design it around. :).

What open source projects would you like to encourage people to get involved with?

Personally, I’m really interested in secure communication tools, so I’d encourage folks (especially designers and UI/UX people) to look into tools like Pond, TextSecure, and my own Petmail. I’m also excited about the variety of run-your-own-server-at-home systems like the GNU FreedomBox.

How can people keep up with what you are doing?

Following my commits on is probably a good approach, since most everything I publish winds up there.

Thank you Brian.


Brian and I covered far more material than I could include in a single post. The full transcript, available on GitHub, also covers memory safe languages, implicit type conversion when working with HTML, and the Python tools that Brian commonly uses.

Next up!

Both Yvan Boiley and Peter deHaan are featured in the next article. Yvan leads the Cloud Services Security Assurance team and continues the security theme by discussing his team’s approach to security audits and the tools developers can use to self-audit their sites for common problems.

Peter, one of Mozilla’s incredible Quality Assurance engineers, is responsible for ensuring that Firefox Accounts doesn’t fall over. Peter talks about the warning signs, processes and tools he uses to assess a project, and how to give the smack down while making people laugh.

View full post on Mozilla Hacks – the Web developer blog


How can we write better software? – Interview series, part 1 with Fernando Jimenez Moreno

Do you ever look at code and murmur a string of “WTFs”? Yeah, me too. As often as not, the code is my own.

I have spent my entire professional career trying to write software that I can be proud of. Writing software that “works” is difficult. Writing software that works while also being bug-free, readable, extensible, maintainable and secure is a Herculean task.

Luckily, I am part of a community that is made up of some of the best development, QA and security folks in the industry. Mozillians have proven themselves time and time again with projects like Webmaker, MDN, Firefox and Firefox OS. These projects are complex, huge in scope, developed over many years, and require the effort of hundreds.

Our community is amazing, flush with skill, and has a lot to teach.

Interviews and feedback

This is the first of a series of interviews where I take on the role of an apprentice and ask some of Mozilla’s finest:

“How can we, as developers, write more superb software?”

Since shipping software to millions of people involves more than writing code, I approach the question from many viewpoints: QA, Security and development have all taken part.

The target audience is anybody who writes software, regardless of preferred language or level of experience. If you are reading this, you are part of the target audience! Most questions are high level and can be applied to any language. A minority are about tooling or language-specific features.


Each interview has a distinct set of questions based around finding answers to:

  • How do other developers approach writing high quality, maintainable software?
  • What does “high quality, maintainable software” even mean?
  • What processes/standards/tools are being used?
  • How do others approach code review?
  • How can development/Security/QA work together to support each-other’s efforts?
  • What matters to Security? What do they look for when doing a security review/audit?
  • What matters to QA? What do they look for before signing off on a release?
  • What can I do, as a developer, to write great software and facilitate the entire process?

I present the highlights of one or two interviews per article. Each interview contains a short introduction to the person being interviewed followed by a series of questions and answers.

Where an interview’s audio was recorded, I will provide a link to the full transcript. If the interview was done over email, I will link to the contents of the original email.

Now, on to the first interview!

Introducing Fernando Jimenez Moreno

The first interview is with Fernando Jimenez Moreno, a Firefox OS developer from Telefonica. I had the opportunity to work with Fernando last autumn when we integrated Firefox Accounts into Firefox OS. I was impressed not only with Fernando’s technical prowess, but also his ability to bring together the employees of three companies in six countries on two continents to work towards a common goal.

Fernando talks about how Telefonica became involved in Firefox OS, how to bring a disparate group together, common standards, code reviews, and above all, being pragmatic.

What do you and your team at Telefonica do?

I’m part of what we call the platform team. We have different teams at Telefonica, one is focused on front end development in Gaia, and the other one is focused on the platform itself, like Gecko, Gonk and external services. We work in several parts of Firefox OS, from Gecko to Gaia, to services like the SimplePush server. I’ve personally worked on things like the Radio Interface Layer (RIL), payments, applications API and other Web APIs, and almost always jump from Gecko to Gaia and back. Most recently, I started working on a WebRTC service for Firefox OS.

How did Telefonica get involved working with Mozilla?

Well, that’s a longer story. We started working on a similar project to Firefox OS, but instead of being based on Gecko, we were working with WebKit. So we were creating this open web device platform based on WebKit. When we heard about Mozilla doing the same with Gecko, we decided to contact you and started working on the same thing. Our previous implementation was based on a closed source port of WebKit and it was really hard to work that way. Since then, my day to day work is just like any other member of Telefonica’s Firefox OS team, which I believe is pretty much the same as any other Mozilla engineer working on B2G.

You are known as a great architect, developer, and inter-company coordinator. For Firefox Accounts on Firefox OS, you brought together people from Telefonica, Telenor, and Mozilla. What challenges are present when you have to work across three different companies?

It was quite a challenge, especially during the first days of Firefox OS. We started working with Mozilla back in 2011, and it took some time for both companies to find a common work flow that fit both parties well. I mean, we were coming from a telco culture where many things were closed and confidential by default, as opposed to the openness and transparency of Mozilla. For some of us coming from other open source projects, it wasn’t that hard to start working in the open and to be ready to discuss and defend our work in public forums. But for other members of the team it took some time to get used to that new way of working and of presenting our work.

Also, because we were following agile methodologies at Telefonica while Mozilla wasn’t yet doing so, we had to find a common workflow that suits both parties. It took some time, a lot of management meetings, and a lot of discussion. Regarding working with other telco companies, the experience has also been quite good so far, especially with Telenor. We still have to be careful about the information that we share with them, because at the end of the day we are still competitors. But that doesn’t mean we cannot work with them towards a common goal, like what happened with Firefox Accounts.

When Mozilla and Telefonica started out on this process, both sides had to change. How did you decide what common practices to use and how did you establish a common culture?

I think for this agile methodology, we focused more on the front end parts because Gecko already had a very well known process and way of developing. It has its own six-week train mechanism. The ones making the biggest effort to find that common workflow were the front end team, because we started working on Gaia, and Gaia was a new project with no fixed methodologies.

I don’t know if we really found the workflow, the perfect workflow, but I think we are doing good. I mean we participate in agile methodologies, but when it turns out that we need to do Gecko development and we need to focus on that, we just do it their way.

In a multi-disciplinary, multi-company project, how important are common standards like style guides, tools, and processes?

Well, I believe when talking about software engineering, standards are very important in general. I don’t care if you call it SCRUM or KANBAN or SCRUMBAN or whatever, or if you use a Git workflow or a Mercurial workflow, or if you use Google’s or Mozilla’s JavaScript style guide. But you totally need some common processes and standards, especially in large engineering groups like open source, or Mozilla development in general. When talking about this, the lines are very thin. It’s quite easy to fail by spending too much time defining and defending the usage of these standards and common processes and losing focus on the real target. So I think we shouldn’t forget that these are only tools; they are important, but they are only tools to help us and help our managers, and we should be smart enough to be flexible about them when needed.

We do a lot of code reviews about code style, but in the end what you want is to land the patch and to fix the issue. If you have code style issues, I want you to fix them, but if you need to land the patch to make a train, land it and file a follow on bug to fix the issues, or maybe the reviewer can do it if they have the chance.

Firefox OS is made up of Gonk, Gecko and Gaia. Each system is large, complex, and intimidating to a newcomer. You regularly submit patches to Gecko and Gaia. Whenever you dive into an existing project, how do you learn about the system?

I’m afraid there is no magic technique. What works for me might not work for others for sure. What I try to do is to read as much documentation as possible inside and outside of the code, if it’s possible. I try to ask the owners of that code when needed, and also if that’s possible, because sometimes they just don’t work in the same code or they are not available. I try to avoid reading the code line by line at first and I always try to understand the big picture before digging into the specifics of the code. I think that along the years, you somehow develop this ability to identify patterns in the code and to identify common architectures that help you understand the software problems that you are facing.

When you start coding in unfamiliar territory, how do you ensure your changes don’t cause unintended side effects? Is testing a large part of this?

Yeah, basically tests, tests and more tests. You need tests, smoke tests, black box tests, tests in general. Also at first, you depend a lot on what the reviewer said, and you trust the reviewer, or you can ask QA or the reviewer to add tests to the patch.
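For instance, a regression test pins the fixed bug in place so it cannot quietly return. The helper below is hypothetical, loosely in the spirit of a Gaia utility, not actual Firefox OS code:

```python
def normalize_msisdn(number: str, default_prefix: str = "+34") -> str:
    """Hypothetical helper: canonicalize a dialled phone number."""
    digits = "".join(ch for ch in number if ch.isdigit() or ch == "+")
    return digits if digits.startswith("+") else default_prefix + digits

def test_does_not_double_prefix():
    # Regression: an earlier (imaginary) version prefixed "+34" twice.
    assert normalize_msisdn("+34 600 000 000") == "+34600000000"

def test_adds_missing_prefix():
    assert normalize_msisdn("600 000 000") == "+34600000000"

test_does_not_double_prefix()
test_adds_missing_prefix()
```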

Let’s flip this on its head and you are the reviewer and you are reviewing somebody’s code. Again, do you rely on the tests whenever you say “OK, this code adds this functionality. How do we make sure it doesn’t break something over there?”

I usually test the patches that I review if I think the patch can cause any regression. I also try to run the tests on the “try” server, or ask the developer to trigger a “try” run.

OK, so tests… a lot of tests.

Yeah, now that we are starting to have good tests in Firefox OS, we have to make use of them.

What do you look for when you are doing a review?

In general, where I look first is correctness. I mean, the patch should actually fix the issue it was written for, and of course it shouldn’t have collateral effects: it shouldn’t introduce any regressions. As I said, I try to test the patches myself if I have the time, or if the patch is critical enough, to see how they work and whether they introduce a regression. I also check that the code is performant and secure, and I always try to ask for tests if I think it is possible to write them for the patch. Finally, I look for things like the overall quality of the code, documentation, coding style, and contribution process correctness.

You reviewed one of my large patches to integrate Firefox Accounts into Firefox OS. You placed much more of an emphasis on consistency than any review I have had. By far.

Well, it certainly helps with overall code quality. When I do reviews, I mark these kinds of comments as “nit:”, which is quite common in Mozilla, meaning “I would really like to see this changed, but you still get my positive review if you don’t change it.”

Two part question. As a reviewer, how can you ensure that your comments are not taken too personally by the developer? The second part is, as a developer, how can you be sure that you don’t take it too personally?

For the record, I have received quite a few hard reviews myself. I never take them personally. I mean, I always try to take reviews as a positive learning experience. I know reviewers usually don’t have a lot of time in their lives to do reviews; they also have to write code. So they just quickly write “needs to be fixed” without spending too much time thinking about the nicest way to say it. Reviewers only comment on the things they consider incorrect in your code; they don’t usually mention the things that are correct, and I know that can be hard at first.

But once you start doing it, you understand why they don’t do that. I mean, you have your own work to do. This is actually especially hard for me, being a non-native English speaker, because sometimes I try to express things in the nicest way possible but the lack of words makes the review comments sound stronger than they were supposed to be. What I try to do is use a lot of smileys if possible. And I always try to avoid the “r-” flag; “r-” is really bad. I just clear it and use “feedback+” or whatever.

You already mentioned that you try to take it as a learning experience whenever you are developer. Do you use review as a potential teaching moment if you are the reviewer?

Yeah, for sure. I mean just the simple fact of reviewing a patch is a teaching experience. You are telling the coder what you think is more correct. Sometimes there is a lack of theory and reasons behind the comments, but we should all do that, we should explain the reasons and try to make the process as good as possible.

Do you have a snippet of code, from you or anybody else, that you think is particularly elegant that others could learn from?

I am pretty critical of my own code, so I can’t really think of a snippet of my own that I am proud enough of to show :). But if I have to choose a quick example, I was quite happy with the result of the last big refactor of the call log database for the Gaia Dialer app, and with the recent Mobile Identity API implementation.

What open source projects would you like to encourage people to participate in, and where can they go to get involved?

Firefox OS of course! No, seriously, I believe Firefox OS gives software engineers the chance to get involved in an amazing open source community with tons of technical challenges, from low level to front end code. Having the chance to dig into the guts of a web browser and a mobile operating system in such an open environment is quite a privilege. It may seem hard at first to get involved and jump into the process and the code, but there are already some very nice Firefox OS docs on MDN and a lot of nice people willing to help on IRC (#b2g and #gaia), the mailing lists (dev-b2g and dev-gaia) or

How can people keep up to date about what you are working on?

I don’t have a blog, but I have my public GitHub account and my Twitter account.


A huge thanks to Fernando for doing this interview.

The full transcript is available on GitHub.

Next article

In the next article, I interview Brian Warner from the Cloud Services team. Brian is a security expert who shares his thoughts on architecting for security, analyzing threats, “belts and suspenders”, and writing code that can be audited.

As a parting note, I have had a lot of fun doing these interviews and I would like your input on how to make this series useful. I am also looking for Mozillians to interview. If you would like to nominate someone, even yourself, please let me know! Email me at

