Let’s make Machine Learning on the web and in the client happen. We need your input!

Machine learning and deep learning are the big breakthrough in computing and a well-hyped topic. How “intelligent” your systems are can be a significant factor in sales, and many a mobile device is sold as a “personal AI” for your convenience.

O hai, robot

There is no question that learning interfaces are superior to hard-wired ones. The biggest example to me is the virtual keyboard on my mobile. The more I use it, the better it gets. It not only fixes typos for me but also starts correctly guessing the word that follows the current one. And it doesn’t even matter if I swipe in English or German; the system is clever enough to recognize my needs based on my earlier actions.
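To make that feel a little less magical: at its very simplest, next-word prediction can be a frequency count of which word tends to follow which. Here is a toy, dependency-free JavaScript sketch of the idea – real keyboards use far more sophisticated models, and the training sentence here is made up for illustration:

```javascript
// Toy next-word predictor: count word pairs (a bigram model).
function buildModel(text) {
  const counts = {};
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i < words.length - 1; i++) {
    const cur = words[i];
    const next = words[i + 1];
    counts[cur] = counts[cur] || {};
    counts[cur][next] = (counts[cur][next] || 0) + 1;
  }
  return counts;
}

// Suggest the most frequently seen follower of a word.
function predict(model, word) {
  const followers = model[word.toLowerCase()];
  if (!followers) return null;
  return Object.keys(followers).sort((a, b) => followers[b] - followers[a])[0];
}

const model = buildModel('the cat sat on the mat and the cat slept');
console.log(predict(model, 'the')); // "cat" – seen twice, "mat" only once
```

The more text you feed it, the better its guesses get – which is the whole point of a learning interface.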

Machine Learning is everywhere – it is just not accessible to most developers

I love the idea of learning machines and I think it is an inevitable step in the evolution of human-machine interaction. What we saw years ago in Star Trek – a ubiquitous computer at our beck and call with a voice interface – is now common. And we use it mostly to ask it about the weather. It makes no sense that we need to buy a certain device to reap the rewards of a new technology – I want this to be available on the web and any device.

The problem I have with this new evolution in computing is that it is opaque and locked to a few vendors. I love the web for what it is and how it broke down closed development environments and vendor lock-in when it comes to tooling. Anyone can start developing on the web, as it is the one and only truly open and distributed platform we have. The difference is the skill you have, not what device you have access to.

Now, when it comes to Machine Learning on the web, things look a lot less open and mature than I’d like them to be. In order to train models or to even get insights from models you need to use a third party service or library. Out-of-the-box, you can’t do any training or even image or facial detection in the browser.

Enter the WebML Community Group

I think this needs to change, which is why I am happy that there is a new community group of the W3C that works on an API proposal for Machine Learning on the web. This work started with Intel and Microsoft and I am happy we’ve come quite far, but now we need your help to make this a reality.

Let’s quickly recap why Machine Learning on the web and on device is a good idea:

  • Enhanced performance – results from a trained model are returned immediately, without any network latency
  • Offline functionality – lookups running on device don’t need a connection to a cloud service
  • Enhanced privacy – it is great that many cloud services offer us pre-trained models to run our requests against, but what if I don’t want that image I just took to go to some server in some datacenter of some corporation?

As with every innovation, there are current limitations and things to consider. These are some of the ones we are currently working on in the discussion group:

  • File size – well-trained models tend to be on the large side, often hundreds of megabytes. Shipping files of that size to the client results in I/O delays and extensive RAM usage in browsers or your Node solution
  • Limited performance – browsers are still hindered by a single-threaded JavaScript engine and no access to the device’s multiple cores. Native code doesn’t have that issue, which is why we propose an API that gives access to the native ML capabilities of the different OSes instead of imitating them.

The current state of affairs

Currently you can use tensorflow.js and onnx.js to talk to models or do your own training on device. Whilst there are some impressive demos and ideas floating around using those, it still feels like we’re not taking the notion that seriously.
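To demystify what “training on device” means at its smallest scale, here is a dependency-free sketch that fits a straight line to a handful of points with gradient descent – plain JavaScript that runs in any browser or Node, with no library and no cloud round-trip (the data and hyperparameters are made up for illustration):

```javascript
// Fit y = w*x + b to data points via gradient descent on the
// mean squared error. Everything happens on the device.
function trainLinear(data, epochs = 2000, learningRate = 0.01) {
  let w = 0;
  let b = 0;
  for (let epoch = 0; epoch < epochs; epoch++) {
    let dw = 0;
    let db = 0;
    for (const [x, y] of data) {
      const error = w * x + b - y; // prediction error for this point
      dw += 2 * error * x;         // gradient of the squared error wrt w
      db += 2 * error;             // gradient wrt b
    }
    w -= (learningRate * dw) / data.length;
    b -= (learningRate * db) / data.length;
  }
  return { w, b };
}

// Toy data along y = 2x + 1
const fitted = trainLinear([[0, 1], [1, 3], [2, 5], [3, 7]]);
console.log(fitted.w.toFixed(2), fitted.b.toFixed(2)); // close to 2.00 and 1.00
```

Libraries like tensorflow.js do essentially this, just with tensors, GPUs via WebGL, and much bigger models.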

That said, it is pretty amazing what you can do with a few lines of code and the right idea. Charlie Gerard’s learning keyboard is a great example that uses tensorflow.js and mobilenet to teach your computer to recognize head movements and then control a keyboard with it.

Why not just offer this as browser functionality?

One of the requests we often heard was why browsers don’t have this functionality built-in. Why can’t a browser create alternative text for an image or recognize faces in a video? Well, it could, but once again this would mean your data is in the hands of some company, and said company would control the functionality. That would be no better than the offerings of native platforms in their SDKs. The web doesn’t work that way.

How can you help?

We’re right now in an experimentation and investigation phase. As rolling out a new standard in the W3C isn’t a matter taken lightly, we want to make sure that we deliver the right thing. Therefore we need real-life implementation examples where running ML on-device would make a massive difference to you.
So, please tell us what use cases aren’t covered in a satisfactory manner in the current web-talks-to-cloud-and-waits-for-data-to-come-back scenario.
We’re not looking for “I’d like to do facial recognition” but scenarios that state “If I had face recognition in JavaScript, I could…”. I’d be very interested in companies who do need this functionality to improve their current products, and I am already working with a few.
You can reach me on Twitter, you can fill out this form, or you can mail me at chris.heilmann@gmail.com with the subject “[WEBML scenario]”.

Thanks for your attention and all the work you do to keep the web rocking!

View full post on Christian Heilmann


Let’s all go to the pub – to learn about web development – Halfstack 2018 in London, England

Halfstack is a conference that is close to my heart. Because it is in London, because it is in a pub, and because it is run by a person who is lovely, ginger, and has done so, so much for the JavaScript community over decades without having a huge ego or being weird: Dylan Schiemann.

This, in addition to a few other factors, makes Halfstack incredibly affordable, relaxed and at the same time full of great content. That’s why I keep presenting there, even when this time – for the first time – I had to fly to London to participate.

This year had quite an amazing line-up and a lot more talks than past editions. The average talk length was a lot shorter than in earlier years. To me, that’s a good thing. Better to make one point really well than to treat an audience once again to the history of computing and how it relates to the brand new technology you actually wanted to talk about.

I shot a lot of photos, all of which are in this Google Photos album and here’s a quick recap of the talks.

The talks

Chris Heilmann, Microsoft: “Bringing best practices front and centre”

My opening keynote was about what we consider best practices and how they are often not applicable in context. We miss the opportunity of making them a starting point for new developers; instead they become something developers only learn to value after making the same mistakes we did. With open and extensible editors like Visual Studio Code, and tools like webhint that test the quality of our products while we deploy or even create them, we have a chance to embed our knowledge into the development flow instead of hoping people start caring.

My slides, resources and twitter reactions for ‘Bringing best practices front and centre’ are on notist.

Ada Rose Cannon, Samsung: “The present and future of VR on the Web”

Ada Rose Cannon and Alex Lakaitos

Ada Rose is chock-full of talent and knowledge and does a lot of good work to move the web into the third dimension and beyond. Working on Samsung’s Internet browser has its benefits, as you have access to a lot of hardware to test on. Ada showed examples from the history of VR/AR and XR and how it applies to web technologies. She ended with a call to action to support the Immersive Web Community Group of the W3C to push this work further along. It is fun to see someone who is so immersed in a topic explain it in an accessible manner rather than drowning in jargon.

Alex Lakatos, Nexmo: “Building Bots with JavaScript”

Alex Lakatos worked with me at Mozilla, back then as a community member, and was one of the first to benefit from their speaker training program. And it shows. In a few minutes he explained the benefits and pitfalls of bots as a platform and communication channel and showed in live demos how to train a bot in JavaScript to understand humans. Both his slides and his demo code are available.

Alex also runs the developer avocados newsletter, a great resource for developer advocacy, calls for papers and everything related to that.

Anna Migas, Lunar Logic: “Fast But Not Furious: Debugging User Interaction Performance Issues”

Anna Migas presenting at halfstackconf

Anna Migas not only has an incredibly easy-to-remember Twitter handle (@szynszyliszys), she has also done a lot of homework in the area of web performance when it comes to making interfaces react quickly to the user. There is a truckload of information on the topic out there, and Anna did her best to distill it into sensible, digestible chunks in this short talk. Well worth a watch and a share. Her slides are here to peruse.

Liliana Kastilio, Snyk: “npm install disaster-waiting-to-happen”

Liliana Kastilio presenting

Liliana Kastilio gave her first ever presentation and covered a lot of security ground about what not to do in your JavaScript. I expected a different talk considering the title, but I was not disappointed. A lot of sensible takeaways in a short amount of time.

Andrico Karoulla, Trint: “Enter ES2018”

Andrico Karoulla on ES6

Andrico Karoulla is heir to a Fish and Chip shop and thus should already be set for life. However, his passion is telling people about the cool new features of JavaScript, and he did so in a short talk. He not only told us about the features but also managed to explain why they are important and what real implementation problems they fix. Good show, even when he had a tough time speaking into the mic and coding at the same time. 🙂

Stephen Cook, Onfido: “100% CSS Mario Kart”

CSS trick used to fake interactivity

Stephen Cook delivered the first jaw-dropping talk of the day by creating a CSS Mario Kart game. He applied a few interesting tricks, like a negative delay on CSS animations and using the validity state of a hidden form field to read out keystrokes in CSS. I’ve seen a few demos like that before, but it is pretty impressive to see it done live in such a short amount of time, including explanations of why some of these tricks work.
Both Stephen’s slides with explanations of the hacks and the demo of the Mario Kart animation are available.

Sean McGee, Esri UK: “Buying a House with JavaScript”

Sean McGee presenting

Sean’s talk was a big let-down for anyone who thought they could learn how to afford buying a house in London with JavaScript as their only skill. If, however, you came to learn about creating a clever mash-up of house offers, crime and travel information, you had a great time. Sean explained not only how to scrape the data, but also how to mash it up and display it in an intelligent manner that allowed him to find an affordable place with all the trimmings he wanted. As a former pipes/YQL and maps person, I was very happy.

Jonathan Fielding, Snyk: “Home Automation with JavaScript”

Jonathan Fielding is another person who spoke at a few Halfstack events and this time he covered the topic of home automation. It is a great topic and a market that needs cracking open as there are not many standards available. Instead you need to do a lot of reverse engineering and tinkering and Jonathan explained in an accessible fashion how to do this. Amongst other things, Jonathan lit and changed the colour of light bulbs on stage and deactivated his home security system – as you do.

Rob Bateman, The Away Foundation: “Reanimating the Web”

Rob Bateman with his TypeScript joke

Rob gave a similar talk at the warm-up of Beyond Tellerrand Düsseldorf earlier this year, so you can see the high quality and the amount of work that went into it. He covered the history of animation on the web and went deep into the nitty-gritty of how we can ensure that animations are both buttery smooth and comparable in speed to native solutions doing the same things. A good reminder that we had a lot of innovation in the Flash space, and we now need to catch up again – both in tooling and in our approach to writing animations.

Carolyn Stransky, Blacklane: “The Most Important UI: You”

Carolyn on Self Care

Carolyn Stransky was the second “wow” moment for me this time. Her talk (slides are available here) was about self-care, how to be good to yourself, and how to ensure we are not creating a horrible work environment. I’ve seen a few of these talks, but often they are high-level, “why aren’t we all better at this” finger-pointing. Carolyn did a great job showing a truckload of resources you can use to make your life a bit easier and better, and explained how to use them.

If you’re a conference organizer, contact her. This was absolutely lovely.

Tom Dye, SitePen and Dylan Schiemann, SitePen: “Cats vs. Dogs”

Tom and Dylan mostly did this talk to play out their fetish of wearing rubber animal masks:

Rubber cat and dog masks

Other than that kinky interlude, the talk was about all the weird little discussions and endless threads we have as a community about pointless things like tabs vs. spaces.

Cats vs. Dogs

The really important part, though, was that they built a PWA that allowed the audience to vote for cats or dogs and control the speed of their tails wagging. You could also make them miaow or bark. ARE YOU NOT ENTERTAINED?

Cameron Diver and Theodor Gherzan of Balena: “JavaScript at the edge”

Controlling a board of LEDs in JavaScript

Cameron and Theodor showed how to control a board of LEDs in JavaScript with sound coming from the audience. They didn’t talk about the Edge browser, which – to me – was disappointing. If you like doing crazy hardware things in JavaScript, though, this was a lot of fun.

Jani Eväkallio, Formidable: “This Talk Is About You”

Jani did a poetry reading at the last Halfstack. This time he went further and did a visual-storytelling kind of presentation, reminding us that we’re not victims of the market we are in but should be much more in control of the quality of our code and the impact it has on the world. This is tough to explain; it may make more sense to wait for Halfstack to release the video, as it was thoroughly enjoyable.

Jani does a lot of performing and is a joy to watch present. Check it out. The keynote file of his talk is here. He also organises a technology comedy night called Component did Smoosh; the next one is on the 30th of November in Berlin.

Tony Edwards, Software Cornwall: “Beats, Rhymes & Unit Tests”

Tony Edwards is incredibly passionate about the web and the organiser of the FutureSync conference, where he was crazy (and nice) enough to invite me to speak. In this session he covered the experimental web speech-to-text API and tried it on different rap lyrics without much success. He then proceeded to do a live rapping session, expecting the (mostly) British audience to go wild like a rap battle in Detroit or LA. It worked to a degree, though, and his rap was converted much better by the API. All in all a thoroughly enjoyable talk by a multi-talented, nice bloke.

As a side note, using a full-fledged deep learning API would give you much better results. The big thing about speech recognition isn’t the interface in the browser, but the quality of the trained model. And those don’t come cheap, which is why Mozilla tries to open-source that idea with their Common Voice project.

Professional detection software also started mixing audio recognition with lip-reading, which is incredibly exciting and yields much better results.

Joe Hart, Blend Media: “Alpha, Beta, Gamer: Dev Mode”

Competitive Tetris

Joe Hart’s talk was a splendid end to the evening. He covered oddities in the history of computer gaming and played a lot of interactive games with the audience: a Flappy Bird clone that worked by shouting at it, a Tetris clone where one player painted impossible Tetrominoes and the other had to fit them in, and other cruel measures to make the audience have fun and participate. Joe Hart is a Fringe presenter, so there is no question about the quality. This was fun from start to end.

Summary

Pub Quiz

Yes, Halfstack is different, and the quality of the projector was questionable. The food was lovely, though, and having it in a pub means speakers are much more relaxed and lapses in their presentations are much more easily forgiven by the audience. Dylan and team are trying to take this concept on the road and, for the first time, plan to do Vienna and NYC editions of the conference. I am really looking forward to seeing this succeed. I’ll be back and I’ll be having a great time again. Halfstack is an easy-going, yet valuable and highly diverse event, and well worth the money.

View full post on Christian Heilmann


Let’s Write a Web Extension

You might have heard about Mozilla’s WebExtensions, our implementation of a new browser extension API for writing multiprocess-compatible add-ons. Maybe you’ve been wondering what it is about, and how you could use it. Well, I’m here to help! I think MDN’s WebExtensions docs are a pretty great place to start:

WebExtensions are a new way to write Firefox extensions.

The technology is developed for cross-browser compatibility: to a large extent the API is compatible with the extension API supported by Google Chrome and Opera. Extensions written for these browsers will in most cases run in Firefox with just a few changes. The API is also fully compatible with multiprocess Firefox.

The only thing I would add is that while Mozilla is implementing most of the API that Chrome and Opera support, we’re not restricting ourselves to only that API. Where it makes sense, we will be adding new functionality and talking with other browser makers about implementing it as well. Finally, since the WebExtension API is still under development, it’s probably best if you use Firefox Nightly for this tutorial, so that you get the most up-to-date, standards-compliant behaviour. But keep in mind, this is still experimental technology — things might break!

Starting off

Okay, let’s start with a reasonably simple add-on. We’ll add a button, and when you click it, it will open up one of my favourite sites in a new tab.

The first file we’ll need is a manifest.json, to tell Firefox about our add-on.

{
  "manifest_version": 2,
  "name": "Cat Gifs!",
  "version": "1.0",
  "applications": {
    "gecko": {
      "id": "catgifs@mozilla.org"
    }
  },

  "browser_action": {
    "default_title": "Cat Gifs!"
  }
}

Great! We’re done! Hopefully your code looks a little like this. Of course, we have no idea if it works yet, so let’s install it in Firefox (we’re using Firefox Nightly for the latest implementation). You could try to drag the manifest.json, or the whole directory, onto Firefox, but that really won’t give you what you want.

The directory listing

Installing

To make Firefox recognize your extension as an add-on, you need to give it a zip file which ends in .xpi, so let’s make one of those by first installing 7-Zip, and then typing 7z a catgifs.xpi manifest.json. (If you’re on Mac or Linux, the zip command should be built-in, so just type zip catgifs.xpi manifest.json.) Then you can drag the catgifs.xpi onto Firefox, and it will show you an error because our extension is unsigned.

The first error

We can work around this by going to about:config, typing xpinstall.signatures.required in the search box, double-clicking the entry to set it to false, and then closing that tab. After that, when we drop catgifs.xpi onto Firefox, we get the option to install our new add-on!

It’s important to note that beginning with Firefox 44 (later this year), add-ons will require a signature to be installed on Firefox Beta or Release versions of the browser, so even if you set the preference mentioned above, you will soon still need to run Firefox Nightly or Developer Edition to follow this tutorial.

Success!!!

Of course, our add-on doesn’t do a whole lot yet.

I click and click, but nothing happens.

So let’s fix that!

Adding features

First, we’ll add the following lines to manifest.json, above the line containing browser_action:

  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },

Now, of course, that’s pointing at a background.js file that doesn’t exist yet, so we should create that too. Let’s paste the following JavaScript into it:

'use strict';

/*global chrome:false */

chrome.browserAction.setBadgeText({text: '(?)'});
chrome.browserAction.setBadgeBackgroundColor({color: '#eae'});

chrome.browserAction.onClicked.addListener(function(aTab) {
  chrome.tabs.create({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true});
});

And you should get something that looks like this. Re-create the add-on by typing 7z a catgifs.xpi manifest.json background.js (or zip catgifs.xpi manifest.json background.js), and drop catgifs.xpi onto Firefox again, and now, when we click the button, we should get a new tab! 😄

Cat Gifs!

Automating the build

I don’t know about you, but I ended up typing 7z a catgifs.xpi manifest.json a disappointing number of times, and wondering why my background.js file wasn’t running. Since I know where this blog post is ending up, I know we’re going to be adding a bunch more files, so I think it’s time to add a build script. I hear that the go-to build tool these days is gulp, so I’ll wait here while you go install that, and c’mon back when you’re done. (I needed to install Node, and then gulp twice. I’m not sure why.)

So now that we have gulp installed, we should make a file named gulpfile.js to tell it how to build our add-on.

'use strict';

var gulp = require('gulp');

var files = ['manifest.json', 'background.js'];
var xpiName = 'catgifs.xpi';

gulp.task('default', function () {
  console.log(files, xpiName);
});

Once you have that file looking something like this, you can type gulp, and see output that looks something like this:

Just some command line stuff, nbd.

Now, you may notice that we didn’t actually build the add-on. To do that, we will need to install another package to zip things up. So, type npm install gulp-zip, and then change the gulpfile.js to contain the following:

'use strict';

var gulp = require('gulp');
var zip = require('gulp-zip');

var files = ['manifest.json', 'background.js'];
var xpiName = 'catgifs.xpi';

gulp.task('default', function () {
  // Return the stream so gulp knows when the task has finished.
  return gulp.src(files)
    .pipe(zip(xpiName))
    .pipe(gulp.dest('.'));
});

Once your gulpfile.js looks like this, when we run it, it will create the catgifs.xpi (as we can tell by looking at the timestamp, or by deleting it and seeing it get re-created).

Fixing a bug

Now, if you’re like me, you clicked the button a whole bunch of times, to test it out and make sure it’s working, and you might have ended up with a lot of tabs. While this will ensure you remain extra-chill, it would probably be nicer to only have one tab, either creating it, or switching to it if it exists, when we click the button. So let’s go ahead and add that.

Lots and lots of cats.

The first thing we want to do is see if the tab exists, so let’s edit the browserAction.onClicked listener in background.js to contain the following:

chrome.browserAction.onClicked.addListener(function(aTab) {
  chrome.tabs.query({'url': 'http://chilloutandwatchsomecatgifs.com/'}, (tabs) => {
    if (tabs.length === 0) {
      // There is no catgif tab!
      chrome.tabs.create({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true});
    } else {
      // Do something here…
    }
  });
});

Huh, that’s weird – it’s always creating a new tab, no matter how many catgifs tabs there are already… It turns out that our add-on doesn’t have permission to see the URLs of existing tabs yet, which is why it can’t find them. So let’s add that by inserting the following code above the browser_action:

  "permissions": [
    "tabs"
  ],

Once your code looks similar to this, re-run gulp to rebuild the add-on, and drag-and-drop to install it, and then when we test it out, ta-da! Only one catgif tab! Of course, if we’re on another tab it doesn’t do anything, so let’s fix that. We can change the else block to contain the following:

      // Do something here…
      chrome.tabs.query({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true}, (active) => {
        if (active.length === 0) {
          chrome.tabs.update(tabs[0].id, {'active': true});
        }
      });

Make sure it looks like this, rebuild, re-install, and shazam, it works!
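Branching like this is easy to get subtly wrong, so one option is to pull the decision out into a pure function that can be unit-tested without a browser. This is just a sketch of that idea – the function name is mine, and wiring it up to the `chrome.tabs` calls is left to the caller:

```javascript
// Given the catgif tabs that exist, and the subset that is also
// active, decide what the click handler should do.
// Pure logic: no chrome.* calls, so it runs anywhere.
function decideAction(catgifTabs, activeCatgifTabs) {
  if (catgifTabs.length === 0) {
    return 'create'; // no catgif tab yet – open one
  }
  if (activeCatgifTabs.length === 0) {
    return 'activate'; // a tab exists, but we're looking elsewhere
  }
  return 'nothing'; // already staring at cat gifs
}

console.log(decideAction([], []));                   // "create"
console.log(decideAction([{ id: 1 }], []));          // "activate"
console.log(decideAction([{ id: 1 }], [{ id: 1 }])); // "nothing"
```

The background script would then call `chrome.tabs.create` or `chrome.tabs.update` based on the returned string.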

Making it look nice

Well, it works, but it’s not really pretty. Let’s do a couple of things to fix that a bit.

First of all, we can add a custom icon so that our add-on doesn’t look like all the other add-ons that haven’t bothered to set their icons… To do that, we add the following to manifest.json after the manifest_version line:

  "icons": {
    "48": "icon.png",
    "128": "icon128.png"
  },

And, of course, we’ll need to download a pretty picture for our icon, so let’s save a copy of this picture as icon.png, and this one as icon128.png.

We should also have a prettier icon for the button, so going back to the manifest.json, let’s add the following lines in the browser_action block before the default_title:

    "default_icon": {
      "19": "button.png",
      "38": "button38.png"
    },

and save this image as button.png, and this image as button38.png.

Finally, we need to tell our build script about the new files, so change the files line of our gulpfile.js to:

var files = ['manifest.json', 'background.js', '*.png'];

Re-run the build, and re-install the add-on, and we’re done! 😀

New, prettier, icons.

One more thing…

Well, there is another thing we could try to do. I mean, we have an add-on that works beautifully in Firefox, but one of the advantages of the new WebExtension API is that you can run the same add-on (or an add-on with minimal changes) on both Firefox and Chrome. So let’s see what it will take to get this running in both browsers!

We’ll start by launching Chrome, and trying to load the add-on, and see what errors it gives us. To load our extension, we’ll need to go to chrome://extensions/, and check the Developer mode checkbox, as shown below:

Now we’re hackers!

Then we can click the “Load unpacked extension…” button, and choose our directory to load our add-on! Uh-oh, it looks like we’ve got an error.

Close, but not quite.

Since the applications key is required for Firefox, I think we can safely ignore this error. And anyways, the button shows up! And when we click it…

Cats!

So, I guess we’re done! (I used to have a section in here about how to load babel.js, because the version of Chrome I was using didn’t support ES6’s arrow functions, but apparently they’ve upgraded their JavaScript engine, and now everything is good. 😉)

Finally, if you have any questions, or run into any problems following this tutorial, please feel free to leave a comment here, or get in touch with me through email, or on twitter! If you have issues or constructive feedback developing WebExtensions, the team will be listening on the Discourse forum.

View full post on Mozilla Hacks – the Web developer blog


Have we lost our connection with the web? Let’s #webexcite

I love the web. I love building stuff in it using web standards. I learned the value of standards the hard way: building things when the browser choices were IE4 or Netscape 3. The days when connections were slow enough that omitting quotes around attributes made a real difference to end users instead of being just an opportunity to have another controversial discussion thread. The days when you did everything possible – no matter how dirty – to make things look and work right. The days when the basic functionality of a product was its most important part – not whether it looks shiny on a retina display.

Let's get excited

I am not alone. Many out there are card-carrying web developers who love doing what I do. And many have done it for a long, long time. Many of us don a blue beanie hat once a year to show our undying love for the standards work that made our lives much, much easier, more predictable and more testable, then and now.

Enough with the backpatting

However, it seems we live in a terrible bubble of self-affirmation about just how awesome and ever-winning the web is. We’re lacking proof. We build things to impress one another and seem to forget that what we do should, sooner rather than later, improve the experience of people surfing the web out there.

In places of perceived affluence (let’s not analyse how much of that is really covered-up recession and living on borrowed money) the web is very much losing mind-share.

Apps excite people

People don’t talk about “having been to a web site”; instead they talk about apps and are totally OK if the app is only available on one platform. Even worse, people consider themselves a better class than others when they have iOS over Android, which dares to also offer cheaper hardware.

The web has become mainstream and boring; it is the thing you use, and not where you get your Oooohhhs and Aaaahhhhs.

Why is that? We live in amazing times:

  • New input types allow for much richer forms
  • Video and audio in HTML5 have matured to a stage where you can embed a video without worrying about showing a broken grey box
  • Canvas allows us to create and manipulate graphics on the fly
  • WebRTC allows for Skype-like functionality straight in the browser
  • With Web Audio we can create and manipulate music in the browser
  • SVG can now be embedded directly in HTML and doesn’t need to be its own document, which gives us scalable vector graphics (something Flash was damn good at)
  • IndexedDB allows us to store data on the device
  • AppCache, despite all its flaws, allows for basic offline functionality
  • WebGL brings 3D environments to the web (again, let’s not forget VRML)
  • Web Components hint at finally having a full-fledged widget interface on the web
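A small, practical first step towards using any of these in real products is to feature-detect before relying on them. A rough sketch – existence checks only, and real code should also test actual behaviour, not just the presence of a global:

```javascript
// Rough capability report for a few of the APIs listed above.
function detectFeatures(globalObject) {
  return {
    webrtc: 'RTCPeerConnection' in globalObject,
    webaudio: 'AudioContext' in globalObject,
    indexeddb: 'indexedDB' in globalObject,
    webgl: 'WebGLRenderingContext' in globalObject,
    appcache: 'applicationCache' in globalObject
  };
}

const support = detectFeatures(typeof window !== 'undefined' ? window : globalThis);
console.log(support); // e.g. { webrtc: true, webaudio: true, ... } in a modern browser
```

That way a product can offer the rich experience where it works and degrade gracefully where it doesn’t – instead of showing a broken demo.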

Shown, but never told

The worry I have is that most of these technologies never really get applied in commercial, customer-facing products. Instead we build a lot of “technology demos” and “showcases” to inspire ourselves and prove that there is a “soon to come” future where all of this is mainstream.

This becomes even more frustrating when the showcases vanish or never get upgraded. Much of the stuff I showed people just two years ago only worked in WebKit and could easily be upgraded to work across all browsers, but we’re already bored with it and move on to the next demo that shows the amazing soon-to-be-real future.

I’m done with impressing other developers; I want the tech we put in browsers to be used for people out there. If we can’t do that, I think we have failed as passionate web developers. I think we have lost the connection to those we should serve. We don’t even experience the same web they do. We have fast Macs with lots of RAM and Adblock enabled. We get excited about parallax web sites that suck the battery of a phone empty in 5 seconds. We happily look at a loading bar for a minute to get to an amazing WebGL demo. Real people don’t do any of that. Let’s not kid ourselves.

Exciting, real products

I remember at the beginning of the standards movement we had showcase web sites that showed real, commercial, user-facing web sites and praised them for using standards. The first CSS layout driven sites, sites using clever roll-over techniques for zooming into product images, sites with very clean and semantic markup – that sort of thing. #HTML on ircnet had a “site of the day”, there was a “sightings” site explaining a weekly amazing web site, “snyggt” in Sweden showcased sites with tricky scripts and layout solutions.

I think it may be time to revisit this idea. Instead of impressing one another with CodePens, Dribbble shots and other in-crowd demos, let’s tell one another about great commercial products aimed not at web developers, using up-to-date technology in a very useful and beautiful way.

That way we have an arsenal of beautiful and real things to show to people when they are confused why we like the web so much. The plan is simple:

  • If you find a beautiful example of modern tech used in the wild, tweet or post about it using the #webexcite hashtag
  • We can also set up a repository somewhere on GitHub once we have a collection going

View full post on Christian Heilmann

Webconf Hungary – HTML5 Beyond the Hype: Let’s make it work

I am on my way back to England from Budapest, where I spent the last two days in an epic hotel and the day speaking at the 10th annual Webconf. I was invited by the Mozilla community in Hungary, who did a great job showing off Firefox OS to the attendees. My presentation (one of the very few in English) was about HTML5 and how Firefox OS helps make some of its promises a reality.

The slides are available on Slideshare:

And as always there is a screencast on YouTube.

The talk was filmed, and the video should be released soon. All in all, Webconf was a cozy one-day conference with a lot of very interested attendees. As most content was in Hungarian – a language that seems base64 encoded to me – I didn’t understand much, but the overall quality seemed very high.

Let’s explain the “why” instead of the “how”

One thing that bugs me a lot is that in the publishing world about the web we have a fetish for the “how” whereas we should strive for the “why” instead.

What do I mean by that? Well, first of all, I count everything that is published as important. This could be a comment, a tweet, a blog post, a presentation, a screencast – doesn’t matter. If it ends up on the web it will be linked to, it will be quoted, it will be taken as “best practice” or “common usage” and people will start arguing over it and adding it to what we call “common knowledge” (spoiler: there is no such thing).

Advice duck telling people to go to W3Schools and learn to be a web developer to make money to finance your real studies
A meme that went around the web some time ago: web development as a means to make a lot of money to finance your “real” studies, achievable by following the courses on W3Schools. To me, this cheapening of a whole profession is a direct result of giving people solutions instead of inviting them to understand what they are doing.

That is why publications that answer the “how” without also explaining the “why” are dangerous. We explain how something is done and we pride ourselves when this is as short and simple as possible. We do live coding on stage showing how complex things can be done with one small command and five different build systems. We show how simple things are when people use this editor or that development tool or this browser and everything is just a click away and we get amazing insight into the things we do.

Assumed stamina and interest

We expect people who learn the “how” to be sharp and interested enough to get to the “why” themselves. Sadly enough this is hardly ever the case. Instead, the quick “how” also known as the “here is how you do it” becomes an excuse not to even question practices and solutions any longer. “Awesome technology expert $person said and showed on stage that this is how it is done. Don’t waste your time on doing it differently” is becoming a mantra for a lot of new developers.

Moldy advice

The issue with this is that “best practices” are getting more and more short-lived and in many cases are very dependent on the environment they are applied in. What fixed performance issues in a web view on an iPhone 3 might be a terrible idea in Chrome on a desktop; what was a real issue in JavaScript 10 years ago might not even make a measurable difference in today’s engines (string concatenation, anyone?).
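To make the string concatenation example concrete, here is a small sketch of the advice in question. The function names are mine, purely for illustration: the array-and-join pattern was the recommended workaround when repeated `+=` created many intermediate strings; in today’s engines the naive version is no longer a problem and is often the faster one.

```javascript
// The old "best practice": collect parts in an array and join once,
// avoiding the intermediate strings that repeated += used to create.
function buildWithJoin(n) {
  const parts = [];
  for (let i = 0; i < n; i++) {
    parts.push("item" + i);
  }
  return parts.join("");
}

// The "naive" version. Modern engines optimize string concatenation
// internally (e.g. with rope-like representations), so this is fine now.
function buildWithConcat(n) {
  let result = "";
  for (let i = 0; i < n; i++) {
    result += "item" + i;
  }
  return result;
}

console.log(buildWithJoin(3));   // "item0item1item2"
console.log(buildWithConcat(3)); // "item0item1item2"
console.log(buildWithJoin(1000) === buildWithConcat(1000)); // true
```

Which version wins depends entirely on the engine and the workload – which is exactly the point: the “how” (use `join`) went stale, while the “why” (avoid creating needless intermediate values) is what was worth learning.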

What is “the why”?

The “why” can be a few different things:

  • Why does doing something in the way we do it work?
  • Why should you use a certain technology?
  • Why is it important to do this, but also understand the environment it is most effective in?
  • Why is using something simple and effective but also dangerous depending on outside factors?
  • Why is a new way of doing something more effective than an older way of doing it?
  • Why is it important to understand what you do, and how do you explain to other people that there is a reason for doing it?

Explaining the “why” is much, much harder than the “how”. Telling someone to do something in a certain way is giving orders, explaining a procedure. Explaining why it should be done this way means you teach the other person, and it also means you need to deeply understand what you do. The “how” can be repeated by someone who doesn’t really know how something works – and in many cases is – while the “why” means you have to put much more effort into understanding what you advocate. The “how” is what led to boring school books and terrible training folders. The “why” leads to interactive and memorable training experiences.

W3Schools – the kingdom of the how

Getting rid of the fetish of the how is an incredibly frustrating uphill battle. The biggest manifestation of the “how” is W3Schools.com. This site shows you how to do something – even interactively – and has thus become a force majeure in the web development world. It gives you a fast, quick answer to copy and paste without the pesky “understand what you are doing” part. This leads to people defending it tooth and nail every time some righteous people set out to kill it. All of these efforts are doomed to fail if they mean setting up yet another resource that will “do things better than W3Schools”. The reasons sites like W3Schools work are:

  • They give you a short answer and make you feel clever as you achieved something amazing without effort
  • They are easy to link to as an answer to a question without having to explain things
  • They are easy to embed into a tutorial or article as a quick citation to “prove a point”
  • People used them for years and they grew constantly which is something that Google loves

In other words, they are a useful reminder and lookup resource for people who already know the “why” and simply forgot the “how”. Thus, they look like a power tool the experts use and are very tempting for beginners to use as well. Much like buying the same shoes as Usain Bolt should make you an amazing runner…

The only way to “kill W3Schools” is to support resources that explain the how and the why, like MDN or WebPlatform.org – not to create more resources that have the right heart but are doomed to fail, as maintaining a documentation resource is an enormous amount of work. Instead of sending new developers to W3Schools or a Stack Overflow post that explains how something is done quickly, send them to a deep link on those. We cannot expect people we point to solutions to care about how those solutions came to be. We have to show them the way, not the destination. By sending them to the destination via a shortcut, we deprive them of their own, personal learning experience and we cheapen our job to something anyone can look up on demand.

The “how” gets outdated – and in many cases becomes dangerous practice – very, very quickly. The “why” remains, as it lights up the way to a solution, a solution that can change over time.

That’s why I’d love people to stop spouting quick answers and let new developers ponder the solution for themselves before telling them a way to do it quickly. We need to learn in order to understand and be empowered to create on our own. You only learn by asking why – let’s be supportive of that instead of feeling smug about pointing out an already existing solution. Web development got to where it is by continuously questioning how we do things and finding ways to make them work. If we stop doing that, we stagnate.

These apples don’t taste like oranges – let’s burn down the orchard

When I see comparisons of HTML5 to native apps I get the feeling that the way we measure failure and success could give statisticians a heart attack. Take Andrea Giammarchi’s The difficult road to vine via web as an example. In this piece Andrea, who really knows his stuff, tries to re-create the hot app of the month, Vine, using “HTML5 technologies” and comes to the conclusion – once again – that HTML5 is not ready to take on native technologies head-on. I agree. But I also want to point out that a 1:1 comparison is pretty much useless. Vine is only available on iOS and was purposely built for the iPhone. To prove that HTML5 is ready, all we’d need to do is find one single browser on one single OS – nay, even on one single piece of hardware – that matches the functionality of Vine.

sobbing mathematically

Instead we set the bar impossibly high for ourselves. Whenever we talk about HTML5 we praise its universality. We talk about “build once, run everywhere”, and we make it impossible for ourselves to deliver on that promise if we don’t also embrace the flexible nature of HTML5 and the web. In other words: HTML5 can and does deliver much more than any native app already. It doesn’t limit you to one environment or piece of hardware, and you can deliver your app on an existing delivery platform – the web – that doesn’t lock you into terms and conditions that could change under you at any time. Nowhere is it written, though, that the app needs to work and look the same everywhere. That would actually limit its reach, as many platforms just don’t allow HTML5 apps to reach deep into hardware or even to perform properly.

What needs to change is our stupid promise of HTML5 apps working the same everywhere and matching all the features of native apps. That can not happen as we can not deliver the same experience regardless of connectivity, hardware access or how busy the hardware is already. HTML5 apps, unless packaged, will always have to compete with other running processes on the hardware and thus need to be much cleverer in resourcing than native apps.

Instead of trying to copy what’s cool in native now and boring and forgotten a month later (remember Path?), if we really want HTML5 as our platform of choice we should play to its strengths. This means that our apps will not look and feel the same on every platform. It means that they use what the platform offers and allow less capable environments to at least consume and add data. And if we want to show off what HTML5 can do, then maybe showcasing on iOS is the last thing we want to do. You don’t put a runner onto a track full of quicksand, stumbling blocks and a massive headwind either, do you?

HTML5 needs to be allowed to convince people that it is a great opportunity because of its flexibility, not by being a pale carbon copy of native apps. This leads to companies considering native as the simpler choice to control everything and force users to download an app where it really is not needed. Tom Morris’ “No I’m not going to download your bullshit app” and the lesser sweary I’d like to use the web my way thank you very much, Quora by Scott Hanselman show that this is already becoming an anti-pattern.

Personally, I think the ball is firmly in Android’s court to kill the outdated and limiting stock browser and get an evergreen Chrome out for all devices. I also look to BlackBerry 10 to make a splash and to Windows phones and tablets to let us HTML5 enthusiasts kick arse. And then there is of course Firefox OS, but that goes without saying, as the OS itself is written in HTML5.

Linklib – lets film lovers and filmmakers send time synced links from videos to phones – a WebFWD project

Have you ever googled films and TV shows while you watch them? Do you think YouTube’s popup annotations in the middle of a video are distracting?

Having shot a documentary about the Pirate Bay over four years, I wanted to embed links directly into the film to give the audience more nuance in the complex and bizarre story of the Swedish file-sharing site. I constantly google videos I watch.

What problem is Linklib solving?

Linklib lets filmmakers, film fans, journalists and bloggers send time-synced links from a full-screen video directly to their audience’s phones. Instead of googling an actor, fact-checking an election video or feverishly trying to find the soundtrack to that TV series, just pick up the phone and the information is right there.

When I stumbled over popcorn.js in 2011 I realized the amazing potential of embedding time-synced links into films, but I was still looking for a way to keep the hyperlinks out of the full-screen video.

So we built Linklib, a system that sends time-synced links from streamed video to phones. That way you can read up on that actress, fact-check that election video or follow that rapper on Twitter while you’re watching the video – without obtrusive annotations in the frame.

Linklib is a free and open library of time-synced video commentary. Film directors, journalists and fan communities can add facts and comments to give films more depth and nuance. The system works just as well for feature films, documentaries, music videos, educational and commercial films. Linklib is an open source project that wants to tell better stories by using the open web.

How it works

The basic components of Linklib’s system are a remote, usually a smartphone or a tablet, and a video viewer, typically a computer. The remote shows synchronized links from the video viewer and can send and receive events such as play, pause, forward and rewind. To sync the phone with the video we show a QR code that you can scan with your phone. At the moment our video viewer can show YouTube, Vimeo and HTML5 video streams.
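The time-synced link idea above can be sketched in a few lines. This is not Linklib’s actual code (the real viewer builds on popcorn.js, and `createLinkFeed` and its methods are hypothetical names): it only models the core concept of a sorted link feed that releases each link to the remote as playback crosses its timestamp, and re-arms after a seek.

```javascript
// Minimal sketch of a time-synced link feed (hypothetical API, not Linklib's).
// A feed is a list of { time, url, title } entries; as playback advances,
// links whose timestamp has been crossed are handed over to push to the phone.
function createLinkFeed(links) {
  const sorted = [...links].sort((a, b) => a.time - b.time);
  let cursor = 0; // index of the next link not yet sent to the remote

  return {
    // Called on every timeupdate: returns the links that became due
    // since the last call, i.e. what to send to the remote.
    advanceTo(currentTime) {
      const due = [];
      while (cursor < sorted.length && sorted[cursor].time <= currentTime) {
        due.push(sorted[cursor]);
        cursor++;
      }
      return due;
    },
    // Called after a seek or rewind: reposition so links can fire again.
    seekTo(currentTime) {
      cursor = sorted.findIndex((l) => l.time > currentTime);
      if (cursor === -1) cursor = sorted.length;
    },
  };
}

const feed = createLinkFeed([
  { time: 12, url: "https://example.com/actor", title: "Who is this actor?" },
  { time: 45, url: "https://example.com/song", title: "The soundtrack" },
]);

console.log(feed.advanceTo(20).map((l) => l.title)); // [ 'Who is this actor?' ]
console.log(feed.advanceTo(50).map((l) => l.title)); // [ 'The soundtrack' ]
feed.seekTo(0);
console.log(feed.advanceTo(60).length); // 2
```

Keeping a cursor into a sorted list means each timeupdate does only as much work as there are newly due links, which matters when a feed holds hundreds of annotations.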

A lobby server handles the communication between the remote and the video. The remote uses socket.io to communicate with the lobby server and is written in JavaScript.

For users to be able to add videos and create link feeds of their own we’ve built a web-based authoring tool focused on simplicity. The tool is built using Twitter Bootstrap and jQuery. All links and account information are stored in a MySQL database on Amazon AWS.

Components

  • Remote – shows synchronized links from the Video Viewer and can send and receive events such as play, pause, forward and rewind.
  • Video Viewer – shows a YouTube or Vimeo stream.
  • Lobby Server – handles communication between the Remote and the Video Viewer.
  • Authoring Tool – web-based system that allows users to add videos and create link flows for them.

APIs and Libraries

  • The Remote uses socket.io to communicate with the Lobby Server and is written in JavaScript.
  • The Lobby Server is based on Node.js.
  • The Video Viewer uses popcorn.js to play videos.
  • All links and account information are stored in a MySQL database on Amazon AWS.
  • The Authoring Tool uses Twitter Bootstrap and jQuery.
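To illustrate how the Lobby Server ties the pieces together, here is a dependency-free sketch of its pairing logic. The class and method names are mine, invented for illustration – the real server runs on Node.js with socket.io, so callbacks here stand in for socket connections: the viewer registers under a room code (the one shown in the QR code), the remote that scans the code joins the same room, and events are relayed to the other side.

```javascript
// Sketch of lobby-style pairing between a Video Viewer and a Remote.
// Hypothetical names; socket.io connections are modeled as plain callbacks.
class Lobby {
  constructor() {
    this.rooms = new Map(); // roomCode -> { viewer, remote }
  }
  // The viewer creates a room and displays its code as a QR code.
  registerViewer(roomCode, onEvent) {
    this.rooms.set(roomCode, { viewer: onEvent, remote: null });
  }
  // The phone scans the QR code and joins the matching room.
  joinAsRemote(roomCode, onEvent) {
    const room = this.rooms.get(roomCode);
    if (!room) return false; // unknown or expired code
    room.remote = onEvent;
    return true;
  }
  // Relay an event (play, pause, link, ...) to the other side of the room.
  send(roomCode, from, event) {
    const room = this.rooms.get(roomCode);
    if (!room) return;
    const target = from === "remote" ? room.viewer : room.remote;
    if (target) target(event);
  }
}

const lobby = new Lobby();
const received = [];
lobby.registerViewer("ABC123", (e) => received.push("viewer got " + e.type));
lobby.joinAsRemote("ABC123", (e) => received.push("remote got " + e.type));
lobby.send("ABC123", "remote", { type: "pause" }); // remote pauses the video
lobby.send("ABC123", "viewer", { type: "link" });  // viewer pushes a link
console.log(received); // [ 'viewer got pause', 'remote got link' ]
```

With socket.io, the room code maps naturally onto its rooms feature, and `send` becomes an emit to the other socket in the room; the sketch only shows the shape of the message flow.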

Help us test out Linklib

Are you a filmmaker looking for a way to tie closer bonds with your audience? Or a Game of Thrones fan looking for a way to fill your favorite episode with geeky references? Maybe you want to throw in a bunch of research that you couldn’t explain in your last conference video? Or you’re a fashion designer who wants to reveal the details of your catwalk? If you’re a film producer, you could fill your YouTube trailer with your characters’ social media and reviews, like the great documentary Indie Game: The Movie below.

You can also add mashups and remixes to that banging new Outkast music video! And help activists worldwide spread information that doesn’t make the mainstream news! Linklib puts the web into videos without ruining the traditional viewing experience.

We just launched a beta and we’d love your feedback about bugs and features!

View full post on Mozilla Hacks – the Web developer blog
