Developer Relations revelations: presenting at a conference is much more than your talk

This is part of a series of posts about the life as a DevRel person and how not all is unicorns and roses. You can read the introduction and the other parts of the series here.

So, today, let’s talk about giving presentations.

Chris and unicorn in oil
Artistic impression by Anke Mehlert (@larryandanke) of what it looks like when I present

As this is a “warts and all” series of posts, I’ll cover the following aspects:

  • Preparing your presentation for various audiences
  • Frustrations and annoyances to prepare for – things that always go wrong
  • Getting ready to be on stage
  • Things to avoid on stage
  • Planning the follow-up
  • The weirdness of making it measurable

Public speaking is tough

Me, presenting

Giving a presentation, or even having to speak in front of a group is the stuff of nightmares for a lot of people. There are so many things that can go wrong and you have nowhere to hide as you are the centre of attention.

It is not easy, and it shouldn’t be. I’ve been presenting at hundreds of meetings and events over the last decade. Every time I go up on stage, I am scared, worried and my tummy imitates the sounds of a dying whale. I also hope against hope that I won’t mess up. This is normal, this is healthy, and it keeps me humble and – hopefully – interesting. Sure, it gets a bit easier, but the voice in your head telling you that it is not normal that people care for what you have to say will never go away. So that’s something to prepare for.

Now, there is a lot of information out there about becoming a great presenter. A lot of it is about the right way to speak, breathe and move. You can even cheat yourself into appearing more confident using Power Poses.

I coach people on public speaking, and I found out pretty early that “being a great presenter” is a very personal thing. The techniques needed differ from person to person. There is no silver bullet to instantly become a great presenter. It is up to you to find the voice and way of presenting that makes you most confident. Your confidence and excitement, in roughly equal measure, are what make a presentation successful.

It is great to see presenters admitting not being experts but sharing what got them excited about the topic they cover. Much better than someone faking confidence or repeating tired old truths that can’t be disagreed with.

When you look around on the web for presentation tips, there is a lot of thinly veiled advertising for a course, workshop or book. Once you become a known presenter, you don’t even have to look. You get spam mail offering you magical products to give awesome presentations. Others offer to create materials for your talks and style your slides.

Here’s the thing: none of that is making that much of a difference. Your slides and their form are not that important. They should be wallpaper for your presentation. In the end, you have to carry the message and captivate the audience, not read a deck out to them.

Except, it does make a difference. Not for your talk, but for everyone else involved in it.

Players two, three and four have entered the game

Now, as I already hammered home in the last post, as a DevRel person you are not only representing yourself but also a group or company. That means you have a lot more work to do when preparing a presentation. You need to juggle the demands of:

  • Your company and colleagues
  • The conference organisers
  • The audience
  • People who later on will watch the video of the talk
  • Those peering over the social media fence to add their ideas to fragments of information regardless of context

Separating yourself from your company is tough but needed

Here’s the biggest issue. Technical audiences hate sales pitches. They also hate marketing. They go to a tech event to hear from peers and to listen to people they look up to.

As soon as you represent a company there is already a sense of dread that the audience will find a shill wasting their time. This gets worse the bigger and older your company is. People have a preconception of your company. This could be beneficial: “Cool, there is a NASA speaker at that event”. Or it could be a constant struggle of having to explain that things your company did, or other departments do, aren’t your fault. It could also be a struggle when all you have from your company is marketing messages. Messages that are groan-worthy for technical people but sound great in the press.

Do yourself a favour and be adamant that you own your presentations. You don’t want to get into the stress of presenting a peer-reviewed “reusable slide deck” of your company.

Be proactive in writing and owning your talks. Make sure your company knows what you do and why your talk is a great thing for them. You are on the clock and you are spending their money. That’s why you can’t use presenting only to further your personal brand – something needs to come out of it. That, of course, is tricky to get right, but there are some ways to make it worthwhile – more on that later.

Helping conference organisers

Conference organisers have a tough job. They not only need to wrangle the demands of the audience and locations, they are also responsible for everything that happens on stage. Thus it is understandable that they want to know as much as possible about your talk before it happens. This can get weird.

I often get asked for inspirational, cutting-edge talks. In the same sentence I’m asked to deliver the slides months in advance. This is impossible unless you keep the talk very meta. Blue-sky, meta talks don’t help your company or your product. They position you as a visionary and an important voice in the market, which is good for the company, but it is tough to explain to the product teams how that affects their work.

In any case, it is a great idea to make the lives of conference organisers easier by having things ready for them. These include:

  • A great title and description of your talk
  • A description of the skill level you expect from the audience
  • An up-to-date bio to add to their materials
  • A few good photos of you
  • Your name, job description, company
  • Ways to connect to you on social media
  • Things you published or resources you maintain

These make it easier for the conference to drum up excitement about your talk and yourself. This can mean the difference between speaking to an empty or full room at the event.

Many events will want slides in advance. I tend not to do that as it limits me and I often get inspiration on the flight there. The only exception is if the event offers translation, especially sign language interpretation. Then I provide slides, extensive notes that I will stick to, and a list of special terms and what they mean. It is not fun when you talk about databases and the audience looks confused because the translator talks about kitchen tables.

Making it easier for the audience

I am a firm believer that you should separate yourself from your slides to deliver a great talk. I also realised that this often is wishful thinking.

You will hear a lot of “best practices” about slides: don’t put much text on them but set the mood instead. That’s true, but there is also a benefit to words on slides. Of course, you shouldn’t read your slides, but having a few keywords to aid your story helps. They help you with your presentation flow. They also help an audience that doesn’t speak your language and misses out on some of the nuances you add to your talk.

If your slides make at least some sort of sense without your narration, you can reach more people. Most conferences will make the slides available. Every single time I present, the first question from the audience is whether they can get the slides. Most conferences record you and your slides. A sensible deck makes the video recording easier to understand.

Once you have considered all this, you can go on stage and give the audience what they are looking for. The next thing to tackle is stage technology.

Things that go wrong on stage

Ok, here comes trouble. Stage technology is still our enemy. Expect everything to go wrong.

  • Bring your own power adaptor, remote and connectors but don’t expect there to be a power plug.
  • Don’t expect dongles to work with long cables on stage
  • Don’t expect to be able to use the resolution of your computer
  • Learn about fixing resolution and display issues yourself – often the stage technicians don’t know your OS/device
  • Expect to show a 16:9 presentation in 4:3 and vice versa
  • Expect nothing to look on the projector like it does on your computer
  • Use slide decks and editors/terminals with large fonts and high contrast.
  • Don’t expect videos and audio to play; be prepared to explain them instead
  • Do not expect to be online without having a fallback solution available.
  • Expect your computer to do random annoying things as soon as you go on stage.
  • Reboot your machine before going up there
  • Turn off all notifications and ways for the audiences to hijack your screen
  • Make sure you have a profile only for presenting that has the least amount of apps installed.
  • Expect your microphone to stop working at any time or to fall off your head getting entangled in your hair/earrings/glasses/beard
  • Expect to not see the audience because of bright lights in your eyes
  • Expect to have terrible sound and hearing random things in the background and/or other presentations in adjacent rooms
  • Expect any of your demos to catch fire instead of doing what they are supposed to do
  • Have a lot of stuff on memory sticks – even when your machine dies you still have them. Be sure to format the stick to work across operating systems
  • Expect not to have the time you thought you had for your talk. I normally plan for a 10-minute difference in each direction

I’ve given up on trying high-tech presentations because too much goes wrong. I’ve tried Mac/Keynote, PC/PowerPoint, HTML slides and PDFs, and still things went wrong. I now prefer to have my slides shown on a conference computer instead of mine. At least then I have someone else to blame.

My main trick is to have my slides as PowerPoint, with all the fonts I used, on a memory stick. I also have them as a PDF with all animations as single slides in case even that doesn’t work out. Be prepared for everything to fail. And if it does, deal with it swiftly and honestly.

Getting ready to be on stage

Before going on stage there are a few things to do:

  • Announce on social media where and when you are presenting and that it is soon
  • Tell your colleagues/friends at the event where you present and that people may come and talk to them right after
  • Take some time out, go to a place where people don’t pester you and go through your talk in your head
  • Take a bio break, crossing your legs on stage is not a good plan
  • Check that your outfit has no malfunctions, drink some water, make sure your voice is clear (lozenges are a good idea)
  • Take some extra time to get to the room where you present. I normally sit in the talk before and use the break between presentations to set up
  • Stock up on swag/cards and other things to immediately give to people after the talk.
  • Breathe, calm down, you’re ready, you got this

DevRel stage etiquette

Congratulations, you made it. You’re on stage and the show must go on. There are a lot of things not to say on stage, and I wrote a whole post on the subject some time ago. In the current context of DevRel there are a few things that apply besides the obvious ones:

  • Don’t over-promise. Your job is to get people excited, but your technical integrity is your biggest asset
  • Don’t bad-mouth the competition. Nothing good comes from that
  • Don’t leave people wondering. Start with where the slides are available and how to contact you. Explain if you will be at the event and where to chat to you
  • Turn off all notifications on your phone and your computer. You don’t want any sensitive information to show up on screen or some prankster in the audience posting something offensive
  • Have a clean setup, people shouldn’t see personal files or weirdly named folders
  • Don’t have any slides that cause controversy without your explanation. It is a very tempting rhetorical device to show something out of context and describe the oddity of it. The problem is that these days people post a photo of your slide. People without the context but with a tendency to cause drama on social media then comment on it. You don’t want to get off stage, open your phone and drown in controversy. Not worth it.
  • Make it obvious who you work for. As mentioned earlier, this can be a problem, but you are there for this reason.
  • Show that you are part of the event by mentioning other, fitting talks on subjects you mention. This is a great way to help the organisers and help other presenters
  • Don’t get distracted when things go wrong. Admit the error, move on swiftly. It is annoying to witness several attempts at a tech demo.
  • If there is a video recording, make sure it makes sense. Don’t react to audience interjections without repeating what the context was. Don’t talk about things people on the video can’t see. When I spoke at TEDx the main message was that you talk more for the recording than the people in the room. And that applies to any multi-track conference.
  • Make sure to advocate the official communication channels of the products and teams you talk about. These are great ways to collect measurable impact information about your talk.

Things that happen after your talk

Once you’re done, there will be a lot of immediate requests from people, so make sure you have enough energy for that. I’m almost spent right after my talks and wish there were breaks, but you won’t get them.

Other things on your plate:

  • Use social media to thank people who went to your talk and to post a link to your slides using the conference hashtag.
  • Collect immediate social media reactions for your conference report
  • Tell people if and where you will be (probably your booth) for the rest of the conference
  • Find some calm and peace to re-charge. You’ve done good, and you should sit down, have a coffee or water and something to eat. I can’t eat before my talks, so I am famished right after

How do you know you were a success?

From a DevRel point of view it is hard to measure the success of presentations. Sure, you have impressive photos of rooms full of people. You also have some very positive tweets by attendees and influencers, as immediate feedback tends to be polarised. But what did you really achieve? How did your talk help the team and your product?

This is a real problem to answer. I always feel a high when on stage and at the event, but a day later I wonder if it mattered to anyone. Sometimes you get lucky. People contact you weeks later telling you how much you inspired them. Sometimes they even show what they did with the information you gave them. These are magical moments and they make it all worthwhile.

Feedback collected by the conference is to be taken with a grain of salt. Often there is a massive polarisation. As an example, I often have to deal with “Not technical enough” and “Too technical” at the same time. Tough to use this feedback.

One trick to make it worthwhile for your company and measurable is to have short URLs in your slides. The statistics on those, with the date attached, can give you an idea of whether you made a difference.

The fact is that somehow you need to make it measurable. Going to an event and presenting is a large investment – in time, in money and also in emotional effort. It is a great feeling to be on stage. But also remember how much time and effort you put into it. Time you could have spent on more reusable and measurable DevRel efforts.

View full post on Christian Heilmann


Web Truths: The web is better than any other platform as it is backwards compatible and fault tolerant

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of the web as a publication platform and how we keep repeating its virtues that may not apply to a publisher audience.

The web is better than any other platform as it is backwards compatible and fault tolerant

This has been the mantra of any web standards fan for a very long time. The web gets a lot of praise as it is to a degree the only platform that has future-proofing built in. This isn’t a grandiose statement. We have proof. Web sites older than many of today’s engineers still work in the newest browsers and devices. Many are still available, whilst those gone are often still available in cached form. Both search engines and the fabulous wayback machine take care of that – whether you want it or not. Betting on the web and standards means you have a product consumable now and in the future.

This longevity of the web stems from a few basic principles. Openness, standardisation, fault tolerance and backwards compatibility.


Openness is the thing that makes the web great. You publish in the open. How your product is consumed depends on what the user can afford – both on a technical and a physical level. You don’t expect your users to have a certain device or browser. You can’t expect your users to be able to see or to overcome other physical barriers. But as you published in an open format, they can, for example, translate your web site with an online system to read it. They can also zoom into it or even use a screenreader to hear it when they can’t see.

One person’s benefit can be another’s annoyance, though. Not everybody wants to allow others to access and change their content to their needs. Even worse – be able to see and use their code. Clients have always asked us to “protect their content”. But they also wanted to reap the rewards of an open platform. It is our job to make both possible and often this means we need to find a consensus. If you want to dive into a messy debate about this, follow what’s happening around DRM and online video.


Standardisation gave us predictability. Before browsers agreed on standards, web development was a mess. Standards allowed us to predict how something should display, so we knew whether it was the browser’s fault or ours when things went wrong. Strictly speaking, standards weren’t necessary for the web to work. Font tags, center tags, table layouts and all kinds of other horrible ideas did an OK job. What standards allow us to do is write quality code and make our lives easier. We don’t paint with HTML. Instead, we structure documents. We embed extra information and thus enable conversion into other formats. We use CSS to define the look and feel in one central location for thousands of documents.

The biggest beneficiaries of standards-driven development are developers. It is a matter of code quality. Standards-compliant code is easier to read, makes more sense and has a predictable outcome.

It also comes with lots of user benefits. A button element is keyboard, touch and mouse accessible and is available even to blind users. A DIV needs a lot of developer love to become an interactive element.

But that doesn’t mean we need to have everything follow standards. If we had enforced that, the web wouldn’t be where it is now. Again, for better or worse. XHTML died because it was too restrictive. HTML5 and lenient parsers were necessary to compete with Flash and to move the web forward.

Backwards compatibility

Backwards compatibility is another big part of the web platform. We subscribed to the idea of older products being available in the future. That means we need to cater for old technology in newer browsers. Table layouts from long ago need to render as intended. There are even sites these days publishing in that format, like Hacker News. For browser makers, this is a real problem as it means maintaining a lot of old code. Code that not only has diminishing use on the web, but often is a security or performance issue. Still, we can’t break the web. Anything that becomes a “de facto standard” of web usage becomes a maintenance item. For a horror story on that, just look at all the things that can go in the head of a document. Most of these are non-standard, but people do rely on them.

Fault tolerance

Fault tolerance is a big one, too. From the very beginning web standards like HTML and CSS allow for developer errors. In the design principles of the language the “Priority of Constituencies” states it as such:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity

This idea is there to protect the user. A mistake made by a developer, or a third-party piece of code like an ad causing a problem, should not block out users. The worrying part is that in a world where we’re asked to deliver more in a shorter amount of time, it makes developers sloppy.

The web is great, but not simple to measure or monetise

What we have with the web is an open, distributed platform that grants the users all the rights to convert content to their needs. It makes it easy to publish content as it is forgiving to developer and publisher errors. This is the reason why it grew so fast.

Does this make it better than any other platform or does it make it different? Is longevity always the goal? Do we have to publish everything in the open?

There is no doubt that the web is great and was good for us. But I am getting less and less excited about what’s happening to it right now. Banging on and on about how great the web as a platform is doesn’t help with its problems.

It is hard to monetise something on the web when people either don’t want to pay or block your ads. And the fact that highly intrusive ads and trackers exist is not an excuse for that but a result of it. The more we block, the more aggressive advertising gets. I don’t know anyone who enjoys interstitials and popups. But they must work – or people wouldn’t use them.

The web is not in a good way. Sure, there is an artisanal, indie movement that creates great new and open ways to use it. But the mainstream web is terrible. It is bloated, boringly predictable and seems to try very hard to stay relevant whilst publishers get excited about Snapchat and other, more ephemeral platforms.

Even the father of the WWW is worried: Tim Berners-Lee on the future of the web: The system is failing.

If we love the web the way we are so happy to proclaim all the time, we need to find a solution for that. We can’t pretend everything is great because the platform is sturdy and people could publish in an accessible way. We need to ensure that the output of any way to publish on the web results in a great user experience.

The web isn’t the main target for publishers any longer and not the cool kid on the block. Social media lives on the web, but locks people in a very cleverly woven web of addiction and deceit. We need to concentrate more on what people publish on the web and how publishers manipulate content and users.

Parimal Satyal’s excellent Against a User Hostile Web is a great example of how you can convey this message and think further.

In a world of big numbers and fast turnaround longevity isn’t a goal, it is a nice to have. We need to bring the web back to being the first publishing target, not a place to advertise your app or redirect to a social platform.


Why web accessibility matters now more than ever

If you design or develop websites for a living, more than likely you’ve heard about the importance of web sites which are accessible and usable for everyone. So, what’s new and newsworthy today, and why should you care?

Accessibility has a far reach

In a nutshell, today’s Web Accessibility and Usability best practices reach beyond the blind, the disabled and the hearing impaired to include today’s busy power users and a multitude of mobile devices. People want access to information. The web is the de facto “go to” location these days. This is why it is so important to make certain everyone has equal access.


Why does that matter? Accessibility is a civil right.


  • Monetization – if your site is not accessible, you may face a number of issues (from complaints to lawsuits). It is so much easier to incorporate accessibility into your site development process.
  • Differentiation – accessibility helps in other aspects (including helping with search engine rank and overall user experience).

Web Accessibility Summit findings

To better understand the value of what this means to today’s Web professionals, I participated in the Environment for Humans Web Accessibility Summit in early September, 2016. Here are some of the key take-aways:


  • Accessibility helps the overall user experience for many who do not have a disability (consider those working in bright sunlight/ experiencing screen glare).
  • It takes a team (know what aspects of accessibility you are good at and where you need help – and it is sometimes important to know when you need to ask for help).
  • Individual experiences vary significantly and the way we perceive a site often has to do with the context while we experience said site (for example, consider your willingness to tolerate page loading delays while you are trying to re-book a flight because you are at the airport and yours was just cancelled).
  • Some groups are working very hard to develop new technologies to assist those with disabilities.

For project managers

If you are reading this (and manage projects), it is important to champion accessibility because it improves the overall user experience at your site. One should not think of accessibility in terms of edge cases; think in terms of those who have temporary issues (whether holding an infant and trying to look up information about your product or suffering some motor impairment due to a stroke). As a project manager, you may need help developing a business case for accessibility. There are sites which can help (such as

For accessibility testing

If you are testing for accessibility, it is important to include screen captures in your report. Identify the exact problem (including the snippet of code). Also provide examples of how this problem may be repaired. Keep in mind that when multiple people report a problem, they will likely word it differently. This is why screen captures are important to include. It may also be helpful to include video of you interacting with the site using tools like VoiceOver (Mac), NVDA (Windows) or ChromeVox (for Chrome browser and ChromeOS).

Smart Charts Project

During the Summit, I learned about the Smart Charts project (a prototype data visualizer) from Doug Schepers. Surprisingly, if you are using a screen reader, you can gain more information from a chart than is presented visually. The site should be examined both visually and with a screen reader to experience the difference.


There are many resources which one can use to test for accessibility and to better understand how to code properly. At a minimum, you should be aware of the ADA site – We are developing a list of accessibility resources which will be available via our site for our members. A couple of short courses at our site will soon be offered covering the fundamentals of web accessibility.


We encourage you to strive to make your sites accessible, not just for legal reasons, but because it is the right thing to do.


Mark DuBois

Community Evangelist and Executive Director

The post Why web accessibility matters now more than ever appeared first on Web Professionals.

View full post on Web Professional Minute


Better than Gzip Compression with Brotli

HTTP Compression

Brotli is an open source data compression library formally specified by an IETF draft. It can be used to compress HTTPS responses sent to a browser, in place of gzip or deflate.

Support for Brotli content encoding has recently landed and is now testable in Firefox Developer Edition (Firefox 44). In this post, we’ll show you an example of how to set up a simple HTTPS server that takes advantage of Brotli when supported by the client.

When serving content over the web, an easy win or low-hanging fruit is turning on server-side compression. Somewhat unintuitively, doing extra work to compress an HTTP response server side and decompress the result client side is faster than not doing the additional work. This is due to bandwidth constraints over the wire. Adding compression improves transfer times when the content is large, isn’t already compressed (reapplying compression doesn’t buy you anything, unless you’re Pied Piper), and the cost to communicate is relatively large.

The way the User Agent, client, or Web browser signals to the server what kinds of compressed content it can decompress is with the `Accept-Encoding` header.  Let’s see what such a header might look like in Firefox 43 (prior to Brotli support) dev tools.

Accept-Encoding FF 41

And in Firefox 44 (with Brotli support):

Accept Encoding FF 44

Just because the client supports these encodings doesn’t mean that’s what they’ll get.  It’s up to the server to decide which encoding to choose.  The server might not even support any form of compression.

The server then responds with the `Content-Encoding` header specifying what form of compression was used, if any at all.

Content Encoding

While the client sends a list of encodings it supports, the server picks one to respond with. Responding with an unsupported content encoding, or with a header that doesn’t match the actual encoding of the content, can lead to decompression errors and the summoning of Z̸A̸L̸G̸O̸.

Zalgo Decompression Errors

Most browsers support gzip and deflate (as well as uncompressed content, of course). Gecko-based browsers such as Firefox 44+ now support “br” for Brotli. Opera beta 33 has support for lzma (note: lzma1, not lzma2) and sdch. Here’s the relevant Chromium bug for Brotli support.

Creating Our Server

Here’s a simple Node.js server that responds with 5 paragraphs of generated Lorem Ipsum text. Note: you’ll need Node.js installed; I’m using Node v0.12.7. You’ll also need a C++ compiler for installing the native addons I’m using:

npm install accepts iltorb lzma-native

Finally, you’ll need to generate some TLS certificates to hack on this since Firefox 44+ supports Brotli compression over HTTPS, but not HTTP.  If you’re following along at home, and aren’t seeing Accept-Encoding: “br”, make sure you’re connecting over HTTPS.

You can follow the tutorial here for generating self-signed certs. Note that you’ll need openssl installed, and that browsers will throw up scary warnings since you’re not recognized as being part of their Certificate Authority “cartel.” These warnings can be safely ignored when developing locally with certificates you generated yourself, but don’t go around ignoring certificate errors when browsing the web.
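If you want a non-interactive shortcut, one common openssl invocation (assuming a reasonably recent openssl; tweak to taste) generates a throwaway key and self-signed cert under the file names the server code expects:

```shell
# Generate a self-signed cert and key for local development only.
# -nodes skips key encryption; -subj avoids interactive prompts.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
  -subj '/CN=localhost' -days 365 \
  -keyout https-key.pem -out https-cert.pem
```

Again, certificates like this are fine for hacking on localhost, but nothing more.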

Here’s the code for our simple server.

#!/usr/bin/env node

var accepts = require('accepts');
var fs = require('fs');
var https = require('https');
var brotli = require('iltorb').compressStream;
var lzma = require('lzma-native').createStream.bind(null, 'aloneEncoder');
var gzip = require('zlib').createGzip;

var filename = 'lorem_ipsum.txt';

function onRequest (req, res) {
  res.setHeader('Content-Type', 'text/html');

  var encodings = new Set(accepts(req).encodings());

  // Pick the best encoding the client supports; fall back to uncompressed.
  if (encodings.has('br')) {
    res.setHeader('Content-Encoding', 'br');
    fs.createReadStream(filename).pipe(brotli()).pipe(res);
  } else if (encodings.has('lzma')) {
    res.setHeader('Content-Encoding', 'lzma');
    fs.createReadStream(filename).pipe(lzma()).pipe(res);
  } else if (encodings.has('gzip')) {
    res.setHeader('Content-Encoding', 'gzip');
    fs.createReadStream(filename).pipe(gzip()).pipe(res);
  } else {
    fs.createReadStream(filename).pipe(res);
  }
}

var certs = {
  key: fs.readFileSync('./https-key.pem'),
  cert: fs.readFileSync('./https-cert.pem')
};

https.createServer(certs, onRequest).listen(3000);
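Stripped of the streams, the decision the handler makes can be sketched as a pure function. This is a simplification: it ignores the quality values (q=…) that the accepts module handles for us:

```javascript
// Simplified sketch of the encoding choice made in onRequest above.
// Ignores q-values, which the real `accepts` module honours.
function pickEncoding(acceptEncodingHeader) {
  var offered = (acceptEncodingHeader || '')
    .split(',')
    .map(function (e) { return e.split(';')[0].trim().toLowerCase(); });
  var preferred = ['br', 'lzma', 'gzip'];
  for (var i = 0; i < preferred.length; i++) {
    if (offered.indexOf(preferred[i]) !== -1) {
      return preferred[i];
    }
  }
  return 'identity';
}

console.log(pickEncoding('gzip, deflate, br')); // "br"
console.log(pickEncoding('gzip, deflate'));     // "gzip"
console.log(pickEncoding(''));                  // "identity"
```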

Then we can navigate to https://localhost:3000 in our browser.  Let’s see what happens when I visit the server in various browsers.

Firefox 45 uses Brotli:

Firefox 45 Brotli

Opera Beta 33 uses lzma:

Opera 33 lzma

Safari 9 and Firefox 41 use gzip:

Safari 9 gzip

We can compare the size of the asset before and after compression in Firefox’s dev tools, under the network tab, by comparing the Transferred vs Size columns.  The transferred column shows the bytes of the compressed content transferred over the wire, and the size column shows the asset’s decompressed size.  For content sent without any form of compression, these two should be the same.

Transferred vs Size

We can also verify using the curl command line utility:

$ curl https://localhost:3000 --insecure -H 'Accept-Encoding: br' -w '%{size_download}' -so /dev/null

$ curl https://localhost:3000 --insecure -H 'Accept-Encoding: lzma' -w '%{size_download}' -so /dev/null

$ curl https://localhost:3000 --insecure -H 'Accept-Encoding: gzip' -w '%{size_download}' -so /dev/null

$ curl https://localhost:3000 --insecure -w '%{size_download}' -so /dev/null

Notes about compression vs performance

The choice of which compression scheme to use does have implications. Node.js ships with zlib, but including native Node add-ons for lzma and brotli will slightly increase distribution size. The time it takes the various compression engines to run can vary wildly, and memory usage while compressing content can hit physical limits when serving numerous requests.

In the previous example, you might have noticed that lzma did not beat gzip in compression out of the box, and brotli beat it only marginally. Note that all compression engines have numerous configuration options that can be tweaked to trade off things like performance against memory usage. Measuring the change in response time, memory usage, and Weissman score is something we’ll look at next.

The following numbers were gathered from running

$ /usr/bin/time -l node server.js &
$ wrk -c 100 -t 6 -d 30s -H 'Accept-Encoding: <either br lzma gzip or none>' https://localhost:3000
$ fg

Measurements were taken on an early-2013 Apple MacBook Pro (OS X 10.10.5, 16 GB 1600 MHz DDR3, 2.7 GHz Core i7, 4 cores with HyperThreading).

Compression Method | Requests/Second | Bytes Transferred (MB/s) | Max RSS (MB) | Avg. Latency (ms)
br-stream          |             203 |                     0.25 |      3485.54 |            462.57
lzma               |             233 |                     0.37 |       330.29 |            407.71
gzip               |            2276 |                     3.44 |       204.29 |             41.86
none               |            4061 |                    14.06 |       125.10 |             23.45
br-static          |            4087 |                     5.85 |       105.58 |             23.30

Some things to note looking at the numbers:

  • There’s a performance cliff for requests/second for compression other than gzip.
  • There’s significantly more memory usage for compression streams. The 3.4 GB peak RSS for brotli looks like a memory leak that’s been reported upstream (my monocle popped out when I saw that).
  • The latency measured is only from localhost; across the Internet it would be at least this high, and probably much more. This is the “Waiting” timing under Dev Tools > Network > Timings.
  • If we compress static assets ahead of time using brotli built from source, we get fantastic results. Note: we can only do this trick for static responses.
  • Serving statically-brotli-compressed responses performs as well as serving static uncompressed assets, while using slightly less memory. This makes sense, since there are fewer bytes to transfer; the lower bytes-transferred-per-second figure simply reflects the smaller compressed payload.

For compressing static assets ahead of time, we can build brotli from source, then run:

$ ./bro --input lorem_ipsum.txt --output

and modify our server:

< var brotli = require('iltorb').compressStream;

< var filename = 'lorem_ipsum.txt';
---
> var filename = '';

< fs.createReadStream(filename).pipe(brotli()).pipe(res);
---
>     fs.createReadStream(filename).pipe(res);


Like other HTTP compression mechanisms, using Brotli with HTTPS can make you vulnerable to BREACH attacks. If you want to use it, you should apply other BREACH mitigations.


For 5 paragraphs of lorem ipsum, Brotli beats gzip by 5%. If I run the same experiment with the front page of from 10/01/2015, Brotli beats gzip by 22%! Note that both measurements were using the compressors out of the box without any tweaking of configuration values.

Whether or not a significant portion of your userbase is using a browser that supports Brotli as a content encoding, whether the added latency and memory costs are worth it, and whether your HTTPS server or CDN support Brotli is another story. But if you’re looking for better than gzip performance, Brotli looks like a possible contender.

View full post on Mozilla Hacks – the Web developer blog


Shiva – More than a RESTful API to your music collection

Music for me is not only part of my daily life, it is an essential part. It helps me concentrate, improves my mood, distracts me and/or helps me relax. This is true for most (if not all) people. The lack of music, or the wrong selection of tunes, can have the complete opposite effect; it has a strong influence on how we feel. It also plays a key role in shaping our identity. Music, like most (if not all) types of culture, is not an accessory and not something we can choose to ignore: it is a need we have as human beings.

The Internet has become the most efficient medium ever for distributing culture. Today it’s easier than ever to have access to a huge diversity of culture from any place in the world. At the same time you can reach the whole world with your music, instantly, just by signing up at one of the many websites for music distribution. Just as “travel broadens the mind”, music sharing enriches culture, and thanks to the Internet, culture is nowadays more alive than ever.

Not too long ago, record labels were the judges of what was good music (by their standards) and what was not. They controlled the only global-scale distribution channel, so to make use of it you had to come to an agreement with them, which usually meant giving up most of the rights over your cultural pieces. Creating and maintaining such a channel was neither easy nor cheap; there was a need for the service they provided, and even though their goal was not to distribute culture but to be profitable (as with every company), both parties, industry and society, benefited from it.

Times have changed and this model is obsolete now; the king is dead, so there are companies fighting to occupy this vacancy. What also changed was the business model. Now it is not just the music – it is also about restricting access to it and collecting (and selling) private information about the listeners. In other words, DRM and privacy. Here is where Shiva comes into play.

What is Shiva?

Shiva is, technically speaking, a RESTful API to your music collection. It indexes your music and exposes an API with the metadata of your files so you can then perform queries on it and organize it as you wish.

On a higher level, however, Shiva aims to be a free (as in freedom and beer) alternative to popular music services. It was born with the goal of giving back the control over their music and privacy to the users, protecting them from the industry’s obsession with control.

It’s not intended to compete directly with online music services, but to be an alternative that you can install and modify to your needs. You will own the music on your server. Nobody but you (or whoever you give permission to) will be able to delete files or modify their metadata to correct it when it’s wrong. And of course, it will all be available to any device with an Internet connection.

You will also have a clean, RESTful API to your music without restrictions. You can grant access to your friends and let them use the service or, if they have their own Shiva instances, let both servers talk to each other and share the music transparently.

To sum up, Shiva is a distributed social network for sharing music.

Your own music server

Shiva-Server is the component that indexes your music and exposes a RESTful API. These are the available resources:

  • /artists
    • /artists/shows
  • /albums
  • /tracks
    • /tracks/lyrics

It’s built in Python, using SQLAlchemy as ORM and Flask for HTTP communication.

Indexing your music

The installation process is quite simple. There’s a very complete guide in the README file, but I’ll summarize it here:

  • Get the source
  • Install dependencies from the requirements.pip file
  • Copy /shiva/config/ to /shiva/config/
  • Edit it and configure the directories to scan
  • Create the database (sqlite by default)
  • Run the indexer
  • Run the development server

For details on any of the steps, check the documentation.

Once the music has been indexed, all the metadata is stored in the database and queried from it. Files are only accessed by the file server for streaming. Lyrics are scraped the first time they are requested and then cached. Given the changing nature of the shows resource, it is the only one that is not cached; instead, it is queried every time. At the time of this writing it uses only one source, the BandsInTown API.

Once the server is running you have all you need to start playing with Shiva. Point to a resource, like /artists, to see it in action.

Scraping lyrics

As mentioned, lyrics are scraped, and you can create your own scrapers for specific websites that have the lyrics you want. All you need to do is create a Python file with a class inheriting from LyricScraper in the /shiva/lyrics directory. The following template makes clear how easy it is. Let’s say we have a file /shiva/lyrics/

from shiva.lyrics import LyricScraper

class MyLyricsScraper(LyricScraper):
    """ Fetches lyrics from """
    def fetch(self, artist, title):
        # Magic happens here
        if not lyrics:
            return False
        self.lyrics = lyrics
        self.source = lyrics_url
        return True

After this you need to add your brand new scraper to the scrapers list in your config file:

    ‘lyrics’: (

Shiva will instantiate your scraper and call the fetch() method. If it returns True, it will then proceed to look for the lyrics in the lyrics attribute, and the URL from which they were scraped in the source attribute:

if scraper.fetch():
    lyrics = Lyrics(text=scraper.lyrics, source=scraper.source)
    return lyrics

Check the existing scrapers for real world examples.
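The dispatch over configured scrapers can be sketched like this. This is a hypothetical illustration of the contract described above, not Shiva’s actual code; the DummyScraper class and example.com source are stand-ins:

```python
# Hypothetical sketch of the first-scraper-wins dispatch described above.
class DummyScraper:
    """Stand-in scraper that always 'finds' lyrics, for illustration."""
    lyrics = None
    source = None

    def fetch(self, artist, title):
        self.lyrics = 'La la la'
        self.source = ''
        return True

def get_lyrics(scraper_classes, artist, title):
    """Return lyrics from the first scraper whose fetch() returns True."""
    for cls in scraper_classes:
        scraper = cls()
        if scraper.fetch(artist, title):
            # On success, lyrics and source attributes hold the result.
            return {'text': scraper.lyrics, 'source': scraper.source}
    return None

print(get_lyrics([DummyScraper], 'NOFX', 'Dinosaurs Will Die'))
```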

Lyrics will only be fetched when you request one specific track, not when retrieving more than one. The reason behind this is that each track’s lyrics may require two or more requests, and we don’t want to DoS the website when retrieving an artist’s discography. That would not be nice.

Setting up a file server

The development server, as its name clearly states, should not be used in production. In fact that is almost impossible, because it can only serve one request at a time, and the audio element will keep the connection open as long as the file is playing, completely blocking the API.

Shiva provides a way to delegate the file serving to a dedicated server. For this you have to edit your /shiva/config/ file, and edit the MEDIA_DIRS setting. This option expects a tuple of MediaDir objects, which provide the mechanism to define directories to scan and a socket to serve the files through:

MediaDir('/srv/music', url='http://localhost:8080')

This way it doesn’t matter on which socket your application runs; files in the /srv/music directory will be served through the URL defined in the url attribute. This object also allows you to define subdirectories to be scanned. For example:

MediaDir('/srv/music', dirs=('/pop', '/rock'), url='http://localhost:8080')

In this case only the directories /srv/music/pop and /srv/music/rock will be scanned. You can define as many MediaDir objects as you need. Suppose you have the file /srv/music/rock/nofx-dinosaurs_will_die.mp3; once this is in place, the track’s stream_uri attribute will be:

{
    "slug": "dinosaurs-will-die",
    "title": "Dinosaurs Will Die",
    "uri": "/track/510",
    "id": 510,
    "stream_uri": "http://localhost:8080/nofx-dinosaurs_will_die.mp3"
}
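The mapping from scanned file to served URL can be sketched as follows. This is an illustrative stand-in, not Shiva’s real MediaDir class; the name MediaDirSketch and its methods are assumptions:

```python
import os

# Illustrative stand-in for the MediaDir behaviour described above.
class MediaDirSketch:
    def __init__(self, root, url):
        self.root = root            # directory to scan
        self.url = url.rstrip('/')  # socket the files are served through

    def stream_uri(self, path):
        # Files under root are served flat through the configured URL.
        return '%s/%s' % (self.url, os.path.basename(path))

md = MediaDirSketch('/srv/music', url='http://localhost:8080')
print(md.stream_uri('/srv/music/rock/nofx-dinosaurs_will_die.mp3'))
# http://localhost:8080/nofx-dinosaurs_will_die.mp3
```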

Your own music player

Once you have your music scanned and the API running, you need a client that consumes those services and plays your music, like Shiva-Client. Built as a single page application with AngularJS and HTML5 technologies, like Audio and Drag and Drop, this client will allow you to browse through your catalog, add files to a playlist and play them.

Due to the Same-Origin-Policy you will need a server that acts as proxy between the web app and the API. For this you will find a file in the repo that will do this for you. The only dependency for this file is Flask, but I assume you have that installed already. Now just execute it:


This will run the server on http://localhost:9001/

Access that URI and check the server output; you will see not only the media needed by the app (like images and JavaScript files) but also an /api/artists call. That’s the proxy. Any call to /api/ will be redirected by the server to http://localhost:9002/

If you open a console, like Firebug, you will see a Shiva object in the global namespace. Inside it you will find two main attributes, Player and Playlist. Those objects encapsulate all the logic for queueing and playing music. The Player only holds the current track and acts as a wrapper around HTML5’s Audio element. What may not seem natural at first is that you normally won’t interact with the Player but with the Playlist, which acts as a façade: it knows all the tracks and instructs the Player which track to load and play next.

The source for those objects is in the js/controllers.js file. There you will also find the AngularJS controllers, which perform the actual calls to the API. It consists of just two calls: one to get the list of artists and another to get the discography for an artist. Check the code; it is quite simple.

So once tracks are added to the playlist, you can do things like play it:

Stop it:


Or skip the track:

Some performance optimizations were made to lower the processing load as much as possible. For example, the progress bar you see when playing music is only updated while it is actually shown. The events are removed when not needed, to avoid unnecessary DOM manipulation of non-visible elements:

'timeupdate',, false);

Present and future

At the time of this writing, Shiva is in a usable state and provides the core functionality, but it is still young and lacks some important features. That’s also why this article doesn’t dig too much into the code: it will change rapidly. To know the current status please check the documentation.

If you want to contribute, there are many ways you can help the project. First of all, fork both the server and the client, play with them and send your improvements back to upstream.

Request features. If you think “seems nice, but I wouldn’t use it” write down why and send those thoughts and ideally some ideas on how to tackle them. There is a long list of lacking features; your help is vital in order to prioritize them.

But most importantly; build your own client. You know what you like about your favourite music player and you know what sucks. Fork and modify the existing client or create your own from scratch, bring new ideas to the old world of music players. Give new and creative uses to the API.

There’s a lot of work to be done, in many different fronts. Some of the plans for the future are:

  • Shiva-jslib: An easy-to-use JavaScript library that encapsulates the API calls, so you can focus on building the GUI and forget about the protocol.
  • Shiva2Shiva communication: Let two (or more) Shiva instances talk to each other to allow for transparent sharing of music between servers.
  • Shiva-FXOS: A Shiva client for Firefox OS.

And anything else you can think of. Code, send ideas, code your clients.

Happy hacking!

View full post on Mozilla Hacks – the Web developer blog


Is browser and tech innovation assuming an audience rather than talking to one?

Web development is not what it used to be. It has undergone so many transformations and changes that it is pretty confusing to keep up with what is going on. My main problem right now is that as someone working for a player that provides the world with a browser and is involved in defining the future technologies of the open stack I wonder who our audience is.

There is no one “web developer”

I am a web developer. I build web products and I love how simple it is to create meaning with semantic HTML, add interactivity with JS and make things beautiful and more intuitive with CSS. I come to this from a developer angle, as I am a terrible designer. This makes me an endangered species.

In the last few years web development has gained a lot more players. Moving JS to the server side, and the advent of WebSockets, Node.js and technologies like PhoneGap and Emscripten (and yes, even GWT), allow a lot of people who never bothered with the web to build web apps. And this is damn good, as different knowledge can lead to better and more scalable solutions. It also means we can deliver faster to a market that is hungry for more and more products. And it also forces us to re-think some of our ways.

I’ve talked to people who are amazing in HTML/CSS/JS and feel the need to learn at least one server-side language or at least get into patterns to understand what their colleagues in the web development team are talking about. It seems the shift from web sites to apps means that we need to shift much more to traditional app development than we are ready to admit yet.

Staying in our comfort zones

However, I don’t see much mingling going on. The design-y conferences of this world talk about “mobile first” and how responsiveness will always beat strict native apps and the tech-y conferences get very excited about replacing old-school web development with MVC frameworks in JavaScript and how to use JS to replace other server-side architectures. We’re stuck in a world of demo sites and showcases and “hello world” examples that can “scale to thousands of users per second” but never get the chance to.

I know, there are a few outstanding examples that are not like that and I generalise, but look around and you will see that I have a point. We get excited about the possibilities and revel in academic exercises rather than getting real issues fixed and showing how to deliver real solutions. This goes as far as discussing for days whether to use semicolons in JS or not.

Who is the audience?

But let’s go back to browsers and standards. I really am at a loss as to who we are talking to when it comes to those. Personally I see a lot of that in the feedback I get. Say I just gave a talk about HTML5 and what it does for us. Audio, Video, richer semantics, JavaScript APIs that allow us to draw and store data locally, all that. I normally end with something like GamePad API, Pointer lock or WebRTC to show what else is brewing. The feedback I get is incredibly polarised:

  • Yeah, yeah, cool but why don’t you support the new experimental feature $x that browser $y has in the latest Nightly?
  • That’s cool but I don’t like using your browser (my favourite, as it has nothing to do with the talk 🙂 )
  • This is all fine but none of my clients will ever need that
  • Great, but I can not use this as all my clients use browser $shouldhavedied and will never upgrade

Now the luddite faction here has a point – a lot of what we show when we talk about “the bleeding edge” can only be used (for now) in Nightly releases of browsers or needs certain flags to be turned on. In some cases you even needed a special build of a certain browser (like the GamePad API in Firefox or Adobe’s first CSS regions proposal). This means we expect a lot of investment from our audience for something that might change or be discarded in the near future.

The “ZOMG YOU ARE SOOO BEHIND” faction has a point, too – but only if they put their money where their mouth is and really use these new technologies in products rather than just getting excited about something new and shiny every week. Otherwise this is just borderline trolling and doesn’t help anybody.

Getting the bleeding edge into the mainstream

The question then is how could we ever get the new technologies we talk about used and implemented? There is no doubt that we need them to make the web the high fidelity app platform we got promised when some company arrogantly proclaimed Flash to be dead. But who will be the people to use them? In a lot of cases this only happens inside the companies that drive these technologies or by partners these companies pay to build showcases to prove that things could be amazing if we just started using new tech.

To me, this is neither scalable nor sustainable – and it is sad. We should be innovating for the people who build things now, not for a future that is yet to come. This is less sexy and means a lot more work, but it means we build with our audience rather than trying to lure them into changing.

If you keep your eyes open then you see that actually a lot of what we consider amazing work is a very small percentage of the market. Tech press loves to hype them up and companies love to (pretend to) use bleeding edge technology to attract tech talent to work for them, but the larger part of the market wants one thing: getting the job done.

The majority of developers use libraries and frameworks

In the case of web development this means one thing: libraries and polyfills. Yes, the things we considered a necessary evil to be able to build things fast and still support outdated browsers are now the thing people use to build web products. These are also the things they tell others to use – try to find a question on Stack Overflow that doesn’t have “use jQuery” as at least one of the answers. Try to find a CSS example that supports various prefixes rather than pulling a “this works only in WebKit” or “use Less, no, use SASS, no, use SMACSS…”.

Abstracting away the need for basic knowledge

Talking to colleagues and peers in other companies I hear a lot of moaning and complaining that it is impossible to hire real JavaScript developers as 90% of applicants come in and only know jQuery. They have no clue what an event handler is, how to navigate the DOM or create a simple XHR call without the library. Ridiculous? Not really – we are actually to blame.

The “in-crowd” scene has a fetish for abstraction. Instead of building applications and solutions we build more libraries, micro-libraries and polyfills to abstract the evil away from implementers – and then we are surprised when implementers don’t know the basics any longer. Well, they used the precious time they had to learn what we built and started getting things done. And this learning time multiplies with the number of things we release. The hour spent learning Backbone, SASS, LESS, hammer.js or whatever is gone and should now be used to build things with it. It is all the more despicable when we, as the “cool kids”, just drop those libraries a few months later and build the next big thing.

Shouldn’t we innovate with existing libraries?

The question I am asking myself right now is this: when most of the market uses libraries to get their job done, why do we bother assuming that people would go back to writing “native” code for browsers – especially when we fail to produce standards that do not differ across browsers?

Wouldn’t the better way to get something done be to build jQuery plugins that use the new APIs we want people to play with in an unobtrusive way, and to see real applications built with them? Great examples are performance enhancements like requestAnimationFrame and the Page Visibility API. We can whine and complain that libraries are horrible and, especially on phones, drain the battery mercilessly – or we could just start playing where our audience hangs out and improve things where the errors happen rather than pointing them out.

Of course some things need us to find people to play with tech outside the libraries but a lot could be sneaked in without people knowing and then allow us to show real examples where a plugin that uses a new feature made an older implementation perform much better.
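As a sketch of this “meet developers where they are” idea, a plugin could consult the Page Visibility API before doing any expensive DOM work. This is a simplified illustration, not a real jQuery plugin; updateProgressBar is a hypothetical function:

```javascript
// Simplified illustration: skip expensive animation work while the
// page is hidden, using the Page Visibility API (document.hidden).
function shouldAnimate(doc) {
  // doc.hidden is true when the tab is in the background.
  return !doc.hidden;
}

// Inside a hypothetical plugin's animation loop:
// if (shouldAnimate(document)) { updateProgressBar(); }

console.log(shouldAnimate({ hidden: true }));  // false
console.log(shouldAnimate({ hidden: false })); // true
```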

I’ve tried to do this with my talk at jQuery UK earlier this year. I showed the JavaScript equivalents of jQuery solutions and that browsers now have those and how following their ideas and principles could lead people to write better jQuery. I got good feedback so far. Maybe I am on to something.

Drop me an answer on Google+ or Facebook’s HTML5 group.

View full post on Christian Heilmann
