
Codemotion Berlin – AI for good keynote and making people happier JavaScript developers

Audience at Codemotion Berlin

The day before yesterday I was honoured to open the Berlin edition of Codemotion. Codemotion touts itself as the biggest developer event in Europe and is a multi-track event in Amsterdam, Rome, Madrid, Milan and many other European locations. I spoke at their Rome event before, but I have to say the event has grown much bigger and they do a great job with the marketing around it.

Christian Heilmann presenting at Codemotion Berlin

My opening keynote covered the topic of ethics in AI and democratizing machine learning. I made sure to end on a positive note and invited everyone to start playing with and owning these technologies instead of just becoming consumers or victims of them.

In addition to the keynote, I was also interviewed by InfoQ on the same topic and you can read the interview and my answers here.

I collected the slides, resources and tweet reactions of the opening keynote on notist.

Christian Heilmann presenting at Codemotion Berlin

My second task was a more technical JavaScript talk about getting to grips with the changed world of JavaScript without feeling overwhelmed. Again, all the resources, slides and tweet reactions of the JavaScript talk are on notist.

I’d love to say more about the event, but with me being interviewed in between and generally having a bad cold, I didn’t watch too many other talks and stayed in the shadows.

That said, I managed to bring my partner and the web-famous Larry the dog to the speaker dinner, and he was a much bigger success than I could ever be.

I’m looking forward to the videos and the interviews done at Codemotion and thank everyone I met, as there were some interesting leads for me.

View full post on Christian Heilmann


Help making the fourth industrial revolution less scary

Last week I was in Germany at an event sponsored by the government agency for unemployment, covering the digitalisation of the job market and the subsequent loss of jobs.

me, giving a keynote on machine learning and work

When the agency approached me to give a keynote on the upcoming “fourth industrial revolution” and what machine learning and artificial intelligence mean for the job market, I was – to put it mildly – bricking it. All the other presenters at the event had several doctoral titles and were professors of this and that. And here I was, being asked to deliver the “future” to an audience of company owners, university professors and influential people who decide the employment fate of thousands of people.

Expert Panel

I went into hermit mode and watched, read and translated dozens of videos and articles on AI and the work environment. In the end, I took a more detailed look at the conference schedule and realised that most of the subject matter data would be covered by the presenter before me.

Thus I delivered a talk covering the current state of AI and what it means for us as job seekers and employers. The slides and screencast are in German, but I am looking forward to translating them and maybe delivering them in a European setting soon.

The slide deck is on Slideshare, and even without knowing German, you should get the gist:

The screencast is on YouTube:

The feedback was overwhelming and humbling. I got interviewed by the local TV station, where I mostly deflected the negative and defeatist attitudes towards artificial intelligence that the media loves to portray.

tv interview

I also got a half-page spread in the local newspaper where – to the amusement of my friends – I was touted as a “fascinating prophet”.

Newspaper article

During the expert panel on digital security I had a few interesting encounters. Whilst in general it felt tough to see how inflexible and outdated some of the attitudes of companies towards computers were, there is a lot of innovation happening even in rural areas. I was especially impressed with the state of robots in warehouses and the investment of the European Union in Blockchain solutions and security research.

One thing I am looking forward to is working with a cybersecurity centre in the area, giving workshops on social engineering and the security of IoT.

A few things I learned and I’d like you to also consider:

  • We are on the cusp of – if not in the middle of – a new digital revolution
  • Our job as people in the know is to reach out to those who are afraid of it and give out sensible information as a counterpoint to some of the fearmongering of the press
  • It is incredibly rewarding to go out of our comfort zone and echo chamber and talk to people with real business and social change issues. It humbles you and makes you wonder just how you ended up knowing all that we do.
  • The good social aspects of our jobs could be a blueprint for other companies to work and change to be resilient towards replacement by machines
  • German is hard 🙂

So, be brave, offer to present at places not talking about the latest flavour of JavaScript or CSS preprocessing. The world outside our echo chamber needs us.

Or as The Interrupters put it: What’s your plan for tomorrow?

View full post on Christian Heilmann


Making ES6 available to all with ChakraCore – A talk at JFokus2016

Today I gave two talks at JFokus in Stockholm, Sweden. This is the one about JavaScript and ChakraCore.

Presentation: Making ES6 available to all with ChakraCore
Christian Heilmann, Microsoft

2015 was a year of massive JavaScript innovation and change. Lots of great features were added to the language, but using them was harder than before, as not all features are backwards compatible with older browsers. Now browsers have caught up, and with the open sourcing of ChakraCore you have a JavaScript runtime to embed in your products and reliably get ECMAScript support. Chris Heilmann of Microsoft tells the story of the language and the evolution of the engine, and leaves you with a lot of tips and tricks on how to benefit from the new language features in a simple way.

I wrote the talk the night before and decided to structure it the following way:

  • Old issues
  • The learning process
  • The library/framework issue
  • The ES6 buffet
  • Standards and interop
  • Breaking monopolies
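The “ES6 buffet” idea – pick only the features you can use safely – can be sketched in code. This is my own example, not from the talk: probe for a feature before using it, so engines without support fall back to ES5.

```javascript
// Hedged sketch (not from the talk): probing for an ES6 feature
// before relying on it, with an ES5 fallback for older engines.
function supportsArrowFunctions() {
  try {
    // eval is used purely as a syntax probe; engines without
    // arrow function support throw a SyntaxError at parse time
    eval('() => {}');
    return true;
  } catch (e) {
    return false;
  }
}

// take the ES6 dish from the buffet only when the engine can digest it
var double = supportsArrowFunctions()
  ? eval('(x) => x * 2')
  : function (x) { return x * 2; };

console.log(double(21)); // 42
```

The same pattern works for any syntax-level feature; for new built-ins (e.g. `Array.from`) a simple `typeof` check is enough and no `eval` is needed.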

Slides

The Slide Deck is available on Slideshare.

Screencast

A screencast of the talk is on YouTube.


View full post on Christian Heilmann


Making and Breaking the Web With CSS Gradients

What is CSS prefixing and why do I care?

Straight from the source:

“Browser vendors sometimes add prefixes to experimental or nonstandard CSS properties, so developers can experiment but changes in browser behavior don’t break the code during the standards process. Developers should wait to include the unprefixed property until browser behavior is standardized.”

As a Web developer, users of your web sites will be affected if you use prefixed CSS properties which later have their prefixes removed—especially if syntax has changed between prefixed and unprefixed variants.

There are steps you can take in stewardship of an unbroken Web. Begin by checking your stylesheets for outdated gradient syntax and updating with an unprefixed modern equivalent. But first, let’s take a closer look at the issue.

What are CSS gradients?

CSS gradients are a type of CSS <image> function (expressed as a property value) that enable developers to style the background of block-level elements to have variations in color instead of just a solid color. The MDN documentation on gradients gives an overview of the various gradient types and how to use them. As always, CSS Tricks has top notch coverage on CSS3 gradients as well.

Screenshot of bloomberg.com's CSS with a CSS gradient

Removing (and then not removing) prefixed gradients from Firefox

In Bug 1176496, we tried to remove support for the old -moz- prefixed linear and radial gradients. Unfortunately, we soon realized that it broke the Web for enough sites ([1], [2], [3], [4], [5], [6]) that we had to add back support (for now).

Sin and syntax

Due to changes in the spec between the -moz- prefixed implementation and the modern, prefix-less version, it’s not possible to just remove prefixes and get working gradients.

Here’s a simple example of how the syntax has changed (for linear-gradient):

/* The old syntax, deprecated and prefixed, for old browsers */
background: -prefix-linear-gradient(top, blue, white); 
/* The new syntax needed by standard-compliant browsers (Opera 12.1,
   IE 10, Firefox 16, Chrome 26, Safari 6.1), without prefix */
background: linear-gradient(to bottom, blue, white);

In a nutshell, to and at keywords were added, contain and cover keywords were removed, and the angle coordinate system was changed to be more consistent with other parts of the platform.
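The angle change is the trickiest part when migrating by hand. Assuming the documented semantics (legacy prefixed angles: 0deg points right, increasing counterclockwise; standard angles: 0deg points up, increasing clockwise), a small helper – my own sketch, not from the article – can convert old angle values while you update a stylesheet:

```javascript
// Hedged sketch: convert a legacy prefixed gradient angle to the
// standard syntax. Legacy: 0deg = east, counterclockwise.
// Standard: 0deg = north ("to top"), clockwise.
function legacyToStandardAngle(oldDeg) {
  // new = 90 - old, normalized into [0, 360)
  return ((90 - oldDeg) % 360 + 360) % 360;
}

console.log(legacyToStandardAngle(0));   // 90  (left to right)
console.log(legacyToStandardAngle(90));  // 0   (bottom to top)
console.log(legacyToStandardAngle(270)); // 180 (top to bottom)
```

For gradients written with the old side keywords (e.g. `top`), no arithmetic is needed: the direction simply flips to the `to` form (`top` becomes `to bottom`), as in the example above.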

When IE10 came out with support for prefixless new gradients, IEBlog wrote an awesome post illustrating the differences between the prefixed (old) syntax and the new syntax; check that out for more in-depth coverage. The css-tricks.com article on CSS3 gradients also has a good overview on the history of CSS gradients and its syntaxes (see “Tweener” and “New” in the “Browser Support/Prefixes” section).

OK, so like, what should I do?

You can start checking your stylesheets for outdated gradient syntax and making sure to have an unprefixed modern equivalent.

Here are some tools and libraries that can help you maintain modern, up-to-date, prefixless CSS:

If you’re already using the PostCSS plugin Autoprefixer, you won’t have to do anything. If you’re not using it yet, consider adding it to your tool belt. And if you prefer a client-side solution, Lea Verou’s prefix-free.js is another great option.

In addition, the web app Colorzilla will allow you to enter your old CSS gradient syntax and get a quick conversion to the modern prefixless equivalent.

Masatoshi Kimura has added a preference that can be used to turn off support for the old -moz- prefixed gradients, giving developers an easy way to visually test for broken gradients. Set layout.css.prefixes.gradients to false (from about:config) in Nightly. This pref should ship in Firefox 42.

Modernizing your CSS

And as long as you’re in the middle of editing your stylesheets, now would be a good time to check the rest of them for overall freshness. Flexbox is an area that is particularly troublesome and in need of unbreaking, but good resources exist to ease the pain. CSS border-image is also an area that had changes between prefixed and unprefixed versions.

Thanks for your help in building and maintaining a Web that works.

View full post on Mozilla Hacks – the Web developer blog


Mozilla hits one million bugs – thanks for making the Web better with us

We passed a significant milestone on Wednesday. Mozilla’s installation of the Bugzilla bug-tracking software reached the landmark of bug number 1,000,000.

Our Bugzilla installation has been running since Mozilla started in 1998, and tracked bugs, issues, enhancement requests, work projects and almost any other kind of task, across the whole breadth of Mozilla. There are over a thousand other projects and companies that use Bugzilla, including Yahoo! and Red Hat. At Mozilla, anyone who gets an account can file a bug – that’s part of what it means to be an open, transparent and participatory project. Some of the people who filed the earliest bugs are still involved in the project today, and have amassed quite astounding bug-filing counts. Most of the bugs are now resolved, one way or another, and it’s probably fitting that the oldest open one is a request for an enhancement to Bugzilla itself.

So thanks to all those who have filed, triaged, processed or fixed bugs in our Bugzilla installation over the years, and to all those who have hacked on the software. (Bugzilla the project is very much alive and used widely across the industry; if you want to help, here’s how.) Bugzilla has been an essential tool in making our software as great as it is, and we couldn’t have done it without you.

Here’s to the next 1,000,000!

View full post on Mozilla Hacks – the Web developer blog


The Making of the Time Out Firefox OS app

A rash start into adventure

So we told our client that yes, of course, we would do their Firefox OS app. We didn’t know much about FFOS at the time. But, hey, we had just completed refactoring their native iOS and Android apps. Web applications were our core business all along. So what was to be feared?

More than we thought, it turned out. Some of the dragons along the way we fought and defeated ourselves. At times we feared that we wouldn’t be able to rescue the princess in time (i.e. before MWC 2013). But whenever we got really lost in detail forest, the brave knights from Mozilla came to our rescue. In the end, it all turned out well and the team lived happily ever after.

But here’s the full story:

Mission & challenge

Just like their iOS and Android apps, Time Out‘s new Firefox OS app was supposed to allow browsing their rich content on bars, restaurants, things to do and more by category, area, proximity or keyword search, patient zero being Barcelona. We would need to show results as illustrated lists as well as visually on a map and have a decent detail view, complete with ratings, access details, phone button and social tools.

But most importantly, and in addition to what the native apps did, this app was supposed to do all of that even when offline.

Oh, and there needed to be a presentable, working prototype in four weeks’ time.

Cross-platform reusability of the code as a mobile website or as the base of HTML5 apps on other mobile platforms was clearly priority 2, but still to be kept in mind.

The princess was clearly in danger. So we arrested everyone on the floor that could possibly be of help and locked them into a room to get the basics sorted out. It quickly emerged that the main architectural challenges were that

  • we had a lot of things to store on the phone, including the app itself, a full street-level map of Barcelona, and Time Out’s information on every venue in town (text, images, position & meta info),
  • at least some of this would need to be loaded from within the app; once initially and synchronizable later,
  • the app would need to remain interactively usable during these potentially lengthy downloads, so they’d need to be asynchronous,
  • whenever the browser location changed, all of this would be interrupted.

In effect, all the different functionalities would have to live within one single HTML document.

One document plus hash tags

For dynamically rendering, changing and moving content around as required in a one-page-does-all scenario, JavaScript alone didn’t seem like a wise choice. We’d been warned that Firefox OS was going to roll out on a mix of devices including the very low cost class, so it was clear that fancy transitions of entire full-screen contents couldn’t be orchestrated through JS loops if they were to happen smoothly.

On the plus side, there was no need for JS-based presentation mechanics. With Firefox OS not bringing any graveyard of half-dead legacy versions to cater to, we could (finally!) rely on HTML5 and CSS3 alone and without fallbacks. Even beyond FFOS, the quick update cycles in the mobile environment didn’t seem to block the path for taking a pure CSS3 approach further to more platforms later.

That much being clear, which better place to look for best practice examples than Mozilla Hacks? After some digging, Thomas found Hacking Firefox OS in which Luca Greco describes the use of fragment identifiers (aka hashtags) appended to the URL to switch and transition content via CSS alone, which we happily adopted.

Another valuable source of ideas was a list of GAIA building blocks on Mozilla’s website, which has since been replaced by the even more useful Building Firefox OS site.

In effect, we ended up thinking in terms of screens. Each is physically a <div> whose visibility and transitions are governed by :target CSS selectors that draw on the browser location’s hashtag. Luckily, there’s also the hashchange event that we could additionally listen to in order to handle the app-level aspects of such screen changes in JavaScript.

Our main HTML and CSS structure hence looked like this:
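The original code sample didn’t survive here, but based on the description above, a minimal sketch of the structure (my reconstruction, not the app’s actual markup) would look like this:

```html
<!-- one <div> per screen; the hashtag in the URL decides which one
     is :target and therefore visible -->
<section id="screens">
  <div id="home" class="screen">…</div>
  <div id="search" class="screen">…</div>
  <div id="detail" class="screen">…</div>
</section>

<style>
  .screen { display: none; }
  .screen:target { display: block; }
  /* real transitions would animate opacity/transform rather
     than toggling display */
</style>
```

Navigating to #search then shows the search screen with no JavaScript involved in the presentation at all.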

And a menu

We modeled the drawer menu very similarly, except that it sits in a <nav> element on the same level as the <section> container holding all the screens. Its activation and deactivation works by catching the menu icon clicks, then actively changing the screen container’s data-state attribute from JS, which triggers the corresponding CSS3 slide-in / slide-out transition (of the screen container, revealing the menu beneath).

This served as our “Hello, World!” test for CSS3-based UI performance on low-end devices, plus as a test case for combining presentation-level CSS3 automation with app-level explicit status handling. We took down a “yes” for both.
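The data-state flip itself can be boiled down to a tiny sketch (my own stand-in, not the app’s code; the element can be a real DOM node or any object with get/setAttribute):

```javascript
// Hedged sketch of the menu toggle described above: clicking the
// menu icon flips the screen container's data-state attribute, and
// CSS transitions on [data-state="menu-open"] do the actual sliding.
function toggleMenu(screenContainer) {
  var open = screenContainer.getAttribute('data-state') === 'menu-open';
  screenContainer.setAttribute('data-state', open ? 'menu-closed' : 'menu-open');
}
```

The JS stays purely app-level bookkeeping; the presentation remains entirely in CSS, which is exactly what made it fast enough on low-end devices.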

UI

By the time we had put together a dummy around these concepts, the first design mockups from Time Out came in so that we could start to implement the front end and think about connecting it to the data sources.

For presentation, we tried hard to keep the HTML and CSS to the absolute minimum. Mozilla’s GAIA examples were a very valuable source of ideas once more.

Again, targeting Firefox OS alone allowed us to break free of the backwards compatibility hell that we were still living in, desktop-wise. No one would ask us “Will it display well in IE8?” or worse things. We could finally use real <section>, <nav>, <header>, and <menu> tags instead of an army of different classes of <div>. What a relief!

The clear, rectangular, flat and minimalistic design we got from Time Out also did its part to keep the UI HTML simple and clean. After we were done with creating and styling the UI for 15 screens, our HTML had only ~250 lines. We later improved that to 150 while extending the functionality, but that’s a different story.

Speaking of styling, not everything that had looked good on desktop Firefox even in its responsive design view displayed equally well on actual mobile devices. Some things that we fought with and won:

Scale: The app looked quite different when viewed on the reference device (a TurkCell branded ZTE device that Mozilla had sent us for testing) and on our brand new Nexus 4s:

After a lot of experimenting, tearing some hair and looking around how others had addressed graceful, proportional scaling for a consistent look & feel across resolutions, we stumbled upon this magic incantation:

<meta name="viewport" content="user-scalable=no, initial-scale=1,
maximum-scale=1, width=device-width" />

What it does, to quote an article at Opera, is to tell the browser that there is “No scaling needed, thank you very much. Just make the viewport as many pixels wide as the device screen width”. It also prevents accidental scaling while the map is zoomed. There is more information on the topic at MDN.

Then there are things that necessarily get pixelated when scaled up to high resolutions, such as the API based venue images. Not a lot we could do about that. But we could at least make the icons and logo in the app’s chrome look nice in any resolution by transforming them to SVG.

Another issue on mobile devices was that users have to touch the content in order to scroll it, so we wanted to prevent the automatic highlighting that comes with that:

li, a, span, button, div
{
    outline:none;
    -moz-tap-highlight-color: transparent;
    -moz-user-select: none;
    -moz-user-focus: ignore;
}

We’ve since been warned that suppressing the default highlighting can be an issue in terms of accessibility, so you might want to consider this carefully.

Connecting to the live data sources

So now we had the app’s presentational base structure and the UI HTML / CSS in place. It all looked nice with dummy data, but it was still dead.

The trouble with bringing it to life was that Time Out was in the middle of a big project to replace its legacy API with a modern Graffiti-based service and thus had little bandwidth for catering to our project’s specific needs. The new scheme was still prototypical and quickly evolving, so we couldn’t build against it.

The legacy construct already comprised a proxy that wrapped the raw API into something more suitable for consumption by their iOS and Android apps, but after close examination we found that we’d better re-re-wrap that on the fly in PHP for a couple of purposes:

  • Adding CORS support to allow cross-origin requests, with the API and the app living in different subdomains of timeout.com,
  • stripping the API output down to what the FFOS app really needed, which we could see would reduce bandwidth and increase speed by an order of magnitude,
  • laying the foundation for harvesting API-based data for offline use, which we already knew we’d need to do later
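The first two points can be illustrated with a hedged sketch in JavaScript (the actual wrapper was PHP, and the field names here are illustrative, not Time Out’s real API):

```javascript
// Hedged sketch of the wrapper's two jobs: strip the API output down
// to what the app renders, and emit CORS headers for the app's origin.
// Field names (id, name, lat, lon) are illustrative placeholders.
function stripVenue(apiVenue) {
  return {
    id: apiVenue.id,
    name: apiVenue.name,
    lat: apiVenue.lat,
    lon: apiVenue.lon
  };
}

function corsHeaders(allowedOrigin) {
  return {
    'Access-Control-Allow-Origin': allowedOrigin,
    'Access-Control-Allow-Methods': 'GET, OPTIONS'
  };
}
```

Dropping everything the app never displays is what bought the bandwidth and speed win; the CORS headers are what let the app in one subdomain call the API in another at all.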

As an alternative to server-side CORS support, one could also think of using the SystemXHR API. It is a mighty and potentially dangerous tool however. We also wanted to avoid any needless dependency on FFOS-only APIs.

So while the approach wasn’t exactly future proof, it helped us a lot to get to results quickly, because the endpoints that the app was calling were entirely of our own choice and making, so that we could adapt them as needed without time loss in communication.

Populating content elements

For all things dynamic and API-driven, we used the same approach to making it visible in the app:

  • Have a simple, minimalistic, empty, hidden, singleton HTML template,
  • clone that template (N-fold for repeated elements),
  • ID and fill the clone(s) with API based content.
  • For super simple elements, such as <li>s, skip the cloning and whip up the HTML on the fly while filling.

As an example, let’s consider the filters for finding venues. Cuisine is a suitable filter for restaurants, but certainly not for museums. Same is true for filter values. There are vegetarian restaurants in Barcelona, but certainly no vegetarian bars. So the filter names and lists of possible values need to be asked of the API after the venue type is selected.

In the UI, the collapsible category filter for bars & pubs looks like this:

The template for one filter is a direct child of the one and only

<div id="templateContainer">

which serves as our central template repository for everything cloned and filled at runtime and whose only interesting property is being invisible. Inside it, the template for search filters is:

<div id="filterBoxTemplate">
  <span></span>
  <ul></ul>
</div>

So for each filter that we get for any given category, all we had to do was to clone, label, and then fill this template:

$('#filterBoxTemplate').clone().attr('id', filterItem.id).appendTo(
'#categoryResultScreen .filter-container');
...
$("#" + filterItem.id).children('.filter-button').html(
filterItem.name);

As you certainly guessed, we then had to call the API once again for each filter in order to learn about its possible values, which were then rendered into <li> elements within the filter’s <ul> on the fly:

$("#" + filterId).children('.filter_options').html(
'<li><span>Loading ...</span></li>');

apiClient.call(filterItem.api_method, function (filterOptions)
{
  ...
  $.each(filterOptions, function(key, option)
  {
    var entry = $('<li filterId="' + option.id + '"><span>'
      + option.name + '</span></li>');

    if (selectedOptionId && selectedOptionId == option.id)
    {
      entry.addClass('filter-selected');
    }

    $("#" + filterId).children('.filter_options').append(entry);
  });
...
});

DOM based caching

To save bandwidth and increase responsiveness in online use, we took this simple approach a little further and consciously stored more application-level information in the DOM than was needed for the current display, if that information was likely to be needed in the next step. This way, we’d have easy and quick local access to it without calling – and waiting for – the API again.

The technical way we did so was a funny hack. Let’s look at the transition from the search result list to the venue detail view to illustrate:


As for the filters above, the screen class for the detailView has an init() method that populates the DOM structure based on API input as encapsulated on the application level. The trick now is, while rendering the search result list, to register anonymous click handlers for each of its rows, which – through the magic of JavaScript closures – each hold on to the very venue object that was used to render their row:

renderItems: function (itemArray)
{
  ...

  $.each(itemArray, function(key, itemData)
  {        
    var item = screen.dom.resultRowTemplate.clone().attr('id', 
      itemData.uid).addClass('venueinfo').click(function()
    {
      $('#mapScreen').hide();
      screen.showDetails(itemData);
    });

    $('.result-name', item).text(itemData.name);
    $('.result-type-label', item).text(itemData.section);
    $('.result-type', item).text(itemData.subSection);

    ...

    listContainer.append(item);
  });
},

...

showDetails: function (venue)
{
  require(['screen/detailView'], function (detailView)
  {
    detailView.init(venue);
  });
},

In effect, the data for rendering each venue’s detail view is stored in the DOM – not in hidden elements or custom attributes of the node object, but conveniently in each of the closure-based click event handlers for the result list rows, with the added benefit that the data doesn’t need to be explicitly read again but actively feeds itself into the venue details screen as soon as a row receives a touch event.
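The capture mechanism can be boiled down to a tiny sketch (mine, not the app’s code):

```javascript
// Hedged sketch of the closure trick described above: each handler
// closes over its own itemData binding, so the data "travels" with
// the handler instead of being re-read from the DOM or the API.
function makeHandlers(items) {
  var handlers = [];
  items.forEach(function (itemData) {
    // each callback invocation gets its own itemData parameter,
    // so each handler captures a distinct binding
    handlers.push(function () { return itemData.name; });
  });
  return handlers;
}

var hs = makeHandlers([{ name: 'Bar A' }, { name: 'Cafe B' }]);
console.log(hs[0]()); // Bar A
console.log(hs[1]()); // Cafe B
```

Note that this only works because the callback introduces a fresh variable per iteration; a plain `for` loop with a shared `var` would make every handler see the last item.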

And dummy feeds

Finishing the app before MWC 2013 was pretty much a race against time, both for us and for Time Out’s API folks, who had an entirely different and equally – if not more so – sportive thing to do. Therefore they had very limited time for adding to the (legacy) API that we were building against. For one data feed, this meant that we had to resort to including static JSON files into the app’s manifest and distribution; then use relative, self-referencing URLs as fake API endpoints. The illustrated list of top venues on the app’s main screen was driven this way.

Not exactly nice, but much better than throwing static content into the HTML! Also, it kept the display code already fit for switching to the dynamic data source that eventually materialized later, and compatible with our offline data caching strategy.

As the lack of live data on top venues then extended right to their teaser images, we made the latter physically part of the JSON dummy feed. In Base64 :) But even the low-end reference device did a graceful job of handling this huge load of ASCII garbage.

State preservation

We had a whopping 5M of local storage to spam, and different plans already (as well as much higher needs) for storing the map and application data for offline use. So what to do with this liberal and easily accessed storage location? We thought we could at least preserve the current application state here, so you’d find the app exactly as you left it when you returned to it.
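A minimal sketch of such state preservation (my own, with the storage object injectable so that localStorage in the app or any stub in a test works the same way):

```javascript
// Hedged sketch: persist the current application state so the app
// reopens exactly where the user left it. `storage` is localStorage
// in the app; any object with getItem/setItem behaves identically.
function saveState(storage, state) {
  storage.setItem('appState', JSON.stringify(state));
}

function loadState(storage) {
  var raw = storage.getItem('appState');
  return raw ? JSON.parse(raw) : null;
}
```

Calling saveState on every screen change and loadState on startup is all it takes; JSON keeps the stored state within the string-only contract of Local Storage.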

Map

A city guide is the very showcase of an app that’s not only geo-aware but geo-centric. Maps fit for quick rendering and interaction in both online and offline use were naturally a paramount requirement.

After looking around what was available, we decided to go with Leaflet, a free, easy to integrate, mobile friendly JavaScript library. It proved to be really flexible with respect to both behaviour and map sources.

With its support for pinching, panning and graceful touch handling plus a clean and easy API, Leaflet made us arrive at a well-usable, decent-looking map with moderate effort and little pain:

For a different project, we later rendered the OSM vector data for most of Europe into terabytes of PNG tiles in cloud storage using on-demand cloud power. Which we’d recommend as an approach if there’s a good reason not to rely on third-party hosted maps, as long as you don’t try this at home; moving the tiles may well be slower and more costly than their generation.

But as time was tight before the initial release of this app, we just – legally and cautiously(!) – scraped ready-to-use OSM tiles off MapQuest.com.

The packaging of the tiles for offline use was rather easy for Barcelona, because about 1000 map tiles are sufficient to cover the whole city area up to street level (zoom level 16). So we could add each tile as a single line in the manifest.appcache file. The resulting, fully automatic, browser-based download on first use was only 10M.

This left us with a lot of lines like

/mobile/maps/barcelona/15/16575/12234.png
/mobile/maps/barcelona/15/16575/12235.png
...

in the manifest and wishing for a $GENERATE clause as for DNS zone files.
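Lacking that clause, such a generator is at least trivial to script. A hedged sketch (tile ranges here are illustrative, not the app’s real build step):

```javascript
// Hedged sketch of the generator we wished the manifest format had:
// emit one cache-manifest line per map tile in an x/y range at a
// given zoom level.
function tileLines(city, zoom, xRange, yRange) {
  var lines = [];
  for (var x = xRange[0]; x <= xRange[1]; x++) {
    for (var y = yRange[0]; y <= yRange[1]; y++) {
      lines.push('/mobile/maps/' + city + '/' + zoom + '/' + x + '/' + y + '.png');
    }
  }
  return lines;
}

console.log(tileLines('barcelona', 15, [16575, 16575], [12234, 12235]).join('\n'));
// /mobile/maps/barcelona/15/16575/12234.png
// /mobile/maps/barcelona/15/16575/12235.png
```

Piping the output into the manifest at build time keeps the tile list in sync with whatever zoom levels and city bounds are chosen.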

As convenient as it may seem to throw all your offline dependencies’ locations into a single file and just expect them to be available as a consequence, there are significant drawbacks to this approach. The article Application Cache is a Douchebag by Jake Archibald summarizes them, and some help is given at HTML5 Rocks by Eric Bidelman.

We found at the time that keeping control over the current download state – and resuming the app cache load when the time users initially spent in our app didn’t suffice for it to complete – was rather tiresome.

For Barcelona, we resorted to marking the cache state as dirty in Local Storage and clearing that flag only after we received the updateready event of the window.applicationCache object. In the later generalization to more cities, we moved the map away from the app cache altogether.

Offline storage

The first step towards offline-readiness was obviously to know if the device was online or offline, so we’d be able to switch the data source between live and local.

This sounds easier than it was. Even with cross-platform considerations aside, neither the online state property (window.navigator.onLine), the events fired on the <body> element for state changes (“online” and “offline”, again on the <body>), nor the navigator.connection object that was supposed to have the on/offline state plus bandwidth and more, turned out to be reliable enough.

Standardization is still ongoing around all of the above, and some implementations are labeled as experimental for a good reason :)

We ultimately ended up writing a NetworkStateService class that uses all of the above as hints, but ultimately and very pragmatically convinces itself with regular HEAD requests to a known live URL that no event went missing and the state is correct.
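A hedged sketch of that idea (my own simplification of the described NetworkStateService, with the HEAD-request probe injected as a callback so any transport – or a test stub – can supply it):

```javascript
// Hedged sketch: treat navigator.onLine and connection events only as
// hints, and confirm the "online" case with a HEAD request to a known
// live URL. `probe(cb)` performs that request and calls cb(true/false).
function checkOnline(hint, probe, callback) {
  if (hint === false) {
    // a definite "offline" hint is trusted immediately; no request
    // could succeed anyway
    callback(false);
    return;
  }
  // otherwise verify the optimistic hint against reality
  probe(function (reachable) { callback(reachable); });
}
```

In the app, `probe` would issue an XHR HEAD request against a known endpoint; polling it on a timer catches the state changes whose events went missing.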

That settled, we still needed to make the app work in offline mode. In terms of storage opportunities, we were looking at:

  • App / app cache: everything listed in the file that the value of appcache_path in the app’s webapp.manifest points to, and which is therefore downloaded onto the device when the app is installed. Capacity: <= 50M (on other platforms, e.g. iOS/Safari, user interaction is required from 10M+; the recommendation from Mozilla was to stay under 2M). Updates: hard – they require user interaction / consent, and only a wholesale update of the entire app is possible. Access: by (relative) path. Typical use: HTML, JS, CSS, static assets such as UI icons.
  • LocalStorage. Capacity: 5M on UTF-8 platforms such as FFOS, 2.5M on UTF-16 platforms, e.g. Chrome (details here). Updates: anytime from the app. Access: by name. Typical use: key-value storage of app status, user input, or the entire data of modest apps.
  • Device Storage (often an SD card). Capacity: limited only by hardware. Updates: anytime from the app (unless mounted as a USB drive while connected to a desktop computer). Access: by path, through the Device Storage API. Typical use: big things.
  • FileSystem API: bad idea.
  • Database. Capacity: unlimited on FFOS; mileage on other platforms varies. Updates: anytime from the app. Access: quick and by arbitrary properties. Typical use: databases :)

Some aspects of where to store the data for offline operation were decided upon easily, others not so much:

  • the app, i.e. the HTML, JS, CSS, and UI images would go into the app cache
  • state would be maintained in Local Storage
  • map tiles again in the app cache. Which was a rather dumb decision, as we learned later. Barcelona up to zoom level 16 was 10M, but later cities were different. London was >200M, and even reduced to a max. zoom of 15 it was still 61M. So we moved that to Device Storage and added an actively managed download process for later releases.
  • The venue information, i.e. all the names, locations, images, reviews, details, showtimes etc. of the places that Time Out shows in Barcelona. Seeing that we needed lots of space, efficient and arbitrary access plus dynamic updates, this had to go into the Database. But how?

The state of affairs across the different mobile HTML5 platforms was confusing at best, with Firefox OS already supporting IndexedDB, but Safari and Chrome (considering earlier versions up to Android 2.x) still relying on a swamp of similar but different sqlite / WebSQL variations.

So we cried for help and received it, as always when we reached out to the Mozilla team. This time it came in the form of a pointer to pouchDB, a JS-based DB layer that wraps away the different native DB storage engines behind a CouchDB-like interface and adds super easy on-demand synchronization to a remote CouchDB-hosted master DB out there.

Back then it was still in a pre-alpha state, but very usable already. There were some drawbacks, such as the need to add a shim for WebSQL-based platforms. That in turn meant we couldn’t rely on storage being 8-bit clean, so we had to base64 our binaries, most of all the venue images. Not exactly pouchDB’s fault, but it still blew up the size.

Harvesting

The DB platform being chosen, we next had to think how we’d harvest all the venue data from Time Out’s API into the DB. There were a couple of endpoints at our disposal. The most promising for this task was proximity search with no category or other restrictions applied, as we thought it would let us harvest a given city square by square.

The trouble with distance metrics, however, is that they produce circles rather than squares. So step 1 of our thinking would miss venues in the corners of our theoretical grid,

while extending the radius to half the grid’s diagonal would produce redundant hits and necessitate deduplication.
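The geometry behind this is simple – a sketch, with the grid cell side as the only parameter:

```javascript
// For a square grid cell of side s, a proximity search of radius s/2
// inscribes a circle inside the cell and misses the four corners.
// Covering the entire cell needs a radius of half the diagonal,
// (s/2)·√2 – which then overlaps neighbouring cells and yields
// duplicate venues that would have to be deduplicated.
function coveringRadius(side) {
  return (side / 2) * Math.SQRT2;
}
```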

In the end, we simply searched by proximity to a city center location, paginating through the result indefinitely, so that we could be sure to encounter every venue, and only once:

Technically, we built the harvester in PHP as an extension to the CORS-enabled, result-reducing API proxy already in place for live operation. It fed the venue information into the master CouchDB co-hosted there.
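The paging loop itself can be sketched like this (in JavaScript for illustration, although as mentioned the real harvester was PHP; fetchPage is a hypothetical stand-in for one page of the proximity-search endpoint, ordered by distance from the centre):

```javascript
// Harvest every venue exactly once by paginating through a single
// proximity search until an empty page signals the end of the results.
function harvestAll(fetchPage) {
  var venues = [];
  var page = 0;
  var batch;
  do {
    batch = fetchPage(page);      // one page of venues, nearest first
    venues = venues.concat(batch);
    page += 1;
  } while (batch.length > 0);     // stop at the first empty page
  return venues;
}
```

Because there is only one search centre, no venue can appear in two “cells”, so the deduplication problem disappears entirely.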

With the time left before MWC 2013 getting tight, we didn’t spend much time on a sophisticated data organization and just pushed the venue information into the DB as one table per category, one row per venue, indexed by location.

This allowed us to support category-based and area / proximity-based (map and list) browsing. We developed an idea of how offline keyword search might be made possible, but it never came to that. So the app simply removes the search icon when it goes offline and puts it back when it has live connectivity again.

Overall, the app now

  • supported live operation out of the box,
  • checked its synchronization state to the remote master DB on startup,
  • asked, if needed, permission to make the big (initial or update) download,
  • supported all use cases but keyword search when offline.

The involved components and their interactions are summarized in this diagram:

Organizing vs. Optimizing the code

For the development of the app, we maintained the code in a well-structured and extensive source tree, with e.g. each JavaScript class residing in a file of its own. Part of the source tree is shown below:

This was, however, not ideal for deploying the app, especially as a hosted Firefox OS app or mobile web site, where the fewer and smaller the files, the faster the download.

Here, Require.js came to our rescue.

It provides a very elegant way of smart and asynchronous requirement handling (AMD), but more importantly for our purpose, comes with an optimizer that minifies and combines the JS and CSS source into one file each:

To enable asynchronous dependency management, modules and their requirements must be made known to the AMD API through declarations, essentially of a function that returns the constructor for the class you’re defining.

Applied to the search result screen of our application, it looks like this:

define
(
  // new class being defined
  'screensSearchResultScreen',

  // its dependencies
  ['screens/abstractResultScreen', 'app/applicationController'],

  // its anonymous constructor
  function (AbstractResultScreen, ApplicationController)
  {
    var SearchResultScreen = $.extend(true, {}, AbstractResultScreen,
    {
      // properties and methods
      dom:
      {
        resultRowTemplate: $('#searchResultRowTemplate'),
        list: $('#search-result-screen-inner-list')
        // ... more cached elements
      }
      // ... more properties and methods
    });

    return SearchResultScreen;
  }
);

For executing the optimization step in the build & deployment process, we used Rhino, Mozilla’s Java-based JavaScript engine:

java -classpath ./lib/js.jar:./lib/compiler.jar   
  org.mozilla.javascript.tools.shell.Main ./lib/r.js -o /tmp/timeout-webapp/
  $1_config.js

CSS bundling and minification is supported, too, and requires just another call with a different config.
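For reference, a minimal r.js build profile of the kind referenced as $1_config.js might look like this – the module names and paths here are illustrative, not the project’s actual configuration:

```javascript
({
  baseUrl: "js",                     // root of the structured source tree
  name: "app/main",                  // entry module that pulls in all dependencies
  mainConfigFile: "js/app/main.js",  // reuse the app's own paths/shim config
  out: "build/app.min.js"            // single minified file for deployment
})
```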

Outcome

Four weeks had been a very tight timeline to start with, and we had completely underestimated the intricacies of taking HTML5 to a mobile and offline-enabled context, and wrapping up the result as a Marketplace-ready Firefox OS app.

Debugging capabilities in Firefox OS, especially on the devices themselves, were still at an early stage (compared to clicking about:app-manager today). So the lights in our Cologne office remained lit until pretty late then.

Having built the app with a clear separation between functionality and presentation also turned out to be a wise choice when, a week before T0, new mock-ups for most of the front end came in :)

But it was great and exciting fun, we learned a lot in the process, and ended up with some very useful shiny new tools in our box. Often based on pointers from the super helpful team at Mozilla.

Truth be told, we had started into the project with mixed expectations as to how close to the native app experience we could get. We came back fully convinced and eager for more.

In the end, we made the deadline, and as a fellow hacker you can probably imagine our relief. The app even received its 70 seconds of fame when Jay Sullivan briefly demoed it at Mozilla’s MWC 2013 press conference as a showcase for HTML5’s and Firefox OS’s offline readiness (Time Out piece at 7:50). We were so proud!

If you want to play with it, you can find the app in the marketplace or go ahead and try it online (no offline mode then).

Since then, the Time Out Firefox OS app has continued to evolve, and we as a team have used the chance to continue to play with and build apps for FFOS. To some degree, the reusable part of this has become a framework in the meantime, but that’s a story for another day.

We’d like to thank everyone who helped us along the way, especially Taylor Wescoatt, Sophie Lewis and Dave Cook from Time Out, Desigan Chinniah and Harald Kirschner from Mozilla, who were always there when we needed help, and of course Robert Nyman, who patiently coached us through writing this up.

View full post on Mozilla Hacks – the Web developer blog


The Making of Face to GIF

Face to gif is a simple web app that lets you record yourself and gives you an infinitely looping animated gif. In this post I will walk you through how it came to be and what I learned from building this small app.

image of the preview window in face to gif

It started with Chris Heilmann’s post about people losing expressiveness to internet memes. At least, that is what I took away from it. I thought it really came down to tooling, like most problems do.

It is the year 2000 and something, and we still haven’t found solutions to simple problems like sending large files, doing taxes automatically and reliably online, or recording an animated gif in your browser. Also, because memes are so popular and easily accessible, why would people even bother trying to create original content when they can make do with a cute kitten image? I thought some things should be easier.

I had already played around with downloading files generated on the client, so I knew text files were trivial and static images were not that hard. But I didn’t find anything about making gif files client side. I thought that I’d figure out the gif part later or even write it myself – how hard could it actually be, right?

The Humble Beginning

Since WebRTC is gaining traction, getUserMedia is becoming a somewhat viable API. Getting a stream from a webcam to be displayed on a video element was very easy.

navigator.getUserMedia({video: true, audio: false}, yes, no);
// … then, in the success callback ("yes"), with the stream it receives:
video.src = URL.createObjectURL(stream);

image of the getUserMedia request dialog in face to gif

I then needed to capture the images that would later make up the gif’s frames. This was not that hard, either. Luckily, you can paint a video element on a canvas context directly using

context.drawImage(video, 0,0, width,height);

This also allows you to scale the captured frames right there, to normalise the different webcam resolutions. Just make sure your canvas element has the correct width and height properties specified, and you should be fine. Also, you should either display: none; it or remove it from the DOM to avoid unnecessary paints.

<canvas width=320 height=240></canvas>

To capture frames, just set an interval at your desired frame rate and cache the frames in an array.

setInterval(function () {
  context.drawImage(video, 0,0, width,height);
  frames.push(context.getImageData(0,0, width,height));
}, 67);

Please note that there is no need to use requestAnimationFrame in this case. The video stream continues to play even when the page it’s on is not visible – so I guess capturing it also makes sense. More importantly, you will need a specific interval between frames that will most probably not end up being 60 frames per second.
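The 67 ms used above is simply the capture interval for roughly 15 frames per second:

```javascript
// Milliseconds between captures for a desired gif frame rate.
function intervalForFps(fps) {
  return Math.round(1000 / fps);
}
```

Using the same delay later when writing the frames into the gif keeps playback speed matching the recording.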

After stopping the interval – that is to stop “recording” – you are left with a lot of frames, each frame having a lot of pixel data from the video stream that comes from your webcam. And all that data never leaves the web page that’s being displayed on your computer.

At one point, I was considering adding a “download raw data” button so people could do other things than just make a gif of themselves. I decided to actually solve the gif part first, then think about bells and whistles.

The GIF Writer

After reading too much about GIF89a, dithering and the LZW algorithm, I cowardly decided to see whether I could find a ready-made library. I was lucky to find a demo that combined a series of images into an animated gif – all in JavaScript. I quickly retrofitted the library into my small app and things started moving again.

gifworker.postMessage({ images: frames, delay: 67 });
...
gifworker.onmessage = function(event) {
  var img = document.createElement('img');
  img.src = event.data.gifDataString;
  document.body.appendChild(img);
};

What needs to be done from there is as follows:

  1. write a binary header that describes the file as a GIF89a file.
  2. write a block describing the width, height and looping control.
  3. write each frame from the image data list.
  4. write a trailer byte, 0x3B (decimal 59) – aka a semicolon.
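As a toy illustration of the first and last of those steps – the byte values come from the GIF89a format itself, everything else here is just for demonstration:

```javascript
// The six ASCII bytes that open every GIF89a stream…
var SIGNATURE = [0x47, 0x49, 0x46, 0x38, 0x39, 0x61]; // "GIF89a"
// …and the single trailer byte that closes it: 0x3B, decimal 59, a ";".
var TRAILER = 0x3B;

// Render the signature bytes back as text, just to show what they spell.
function signatureAsText() {
  return String.fromCharCode.apply(null, SIGNATURE);
}
```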

Using Web Workers to do the heavy lifting in a separate thread, keeping the UI responsive, was a no-brainer. Once it is done processing, the library provides you with a base64-encoded string representing the gif file. That can be used as a data url for an image.

At one point in my life, I was using data urls so intensively that I would provide clients with mockups consisting of just one HTML file that had the images, CSS and JavaScript base64’d in and that wouldn’t require an internet connection to work. And that wouldn’t work in IE.

But I was about to face a different set of problems this time around…

Saving the files

Data urls can be saved if they’re small enough. If you want to save a gif that is too long and displayed via a data url, the browser will not even let you try to do that. Trying to be clever with the download attribute on links didn’t help either.

image of the generated gif and its options in face to gif

While data urls are really cool, there is a limit to how long they can be. I didn’t want to impose what seemed to be a legacy limit on this app.

I altered the library a little to provide me with the raw bytes instead of a base64 string and I used the raw bytes to create a Blob, then used URL.createObjectURL to make something I could set an image’s source attribute to.

var blob = new Blob([uint8array], {type: "image/gif"});
img.src = URL.createObjectURL(blob);

This method of using user generated resources as source attributes is much more reliable and scalable than the old data url method. This also allowed for easier saving of the image.

I use a trick for the download link you will find in my app: I place a simple anchor with an empty href attribute and attach a simple ‘click’ event handler. When the user clicks on it, my event handler function simply changes the href attribute to be the same as the source attribute of the image. The browser does the rest.

a.addEventListener('click', function (e) {
    a.href = img.src;
  // the real trick is to let the event bubble up
}, false);

We spend so much time as web developers hijacking control from the browser so we may do our own thing. The truth is, most of the time we can just tell the browser where to go, and it’ll do a much better job of getting there on its own than if we were involved.

Getting back to my app, though, I had gotten it to a place where it was doing what I hoped it would be doing: Recording my face with my webcam and serving me a gif of it.

The Speed

The app was rocking, but it was more like a ballad than a heavy metal song. It took 16 seconds of my life for each second of gif. This was partly because I was originally writing the gif files at 640×480, but also because it turns out that binary operations on pixel data can be quite slow if not optimised.

I was scrambling for solutions, looking into the library’s TypeScript source code and the generated JavaScript to find ways to improve it, considering asm.js, using TypedArrays more, anything – when I stumbled upon another JavaScript library for writing animated gifs.

gif.js was leaner, could use several web workers to process the frames and had what I thought was a better looking API. After retrofitting this new library, tweaking the settings and halving the size, I was able to produce gifs, right in the web app at blazing speeds.

The one downside was that what I had gained in speed, I had lost in compression. A mere 10 seconds of gif would produce about 30 MB worth of data. After some more heavy tweaking, I was able to get that down to about 5 MB per 10 seconds. Still a lot, but it is uncompressed, and aggressive compression via online tools can bring it down to as little as 600KB.
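A back-of-the-envelope check of those numbers – the assumptions (~15 fps and one byte of palette-index data per pixel, before any encoding) are mine, not from the libraries:

```javascript
// Raw frame data for an animated gif before any encoding: one palette
// index byte per pixel, per frame.
function rawGifBytes(seconds, fps, width, height) {
  return seconds * fps * width * height;
}
// rawGifBytes(10, 15, 320, 240) is 11,520,000 bytes (~11.5 MB), so a
// writer producing ~5 MB per 10 seconds is the right order of magnitude.
```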

The other cool thing about working with Blobs is that you can append them directly to FormData objects, which meant that doing a cross origin ajax call to imgur.com to upload the generated gif was a breeze, and a much welcomed addition to the web app.

What I’ve learned

  • URL.createObjectURL is a great API for client-generated media, solving many problems you’d otherwise have.
  • Using TypedArrays will boost your data intensive app’s performance a lot.
  • Dividing workload between multiple concurrent WebWorkers actually works and helps.
  • WebRTC is at a pretty stable point where you can use the media devices of about 40% of internet users.
  • It is easy to make an app that lets users generate content without involving your server.
  • People really like playing with their web cams. I think using them in a web app makes perfect sense.
  • It is easy to fill up 2MB, imgur’s file limit, with gif data.

thumbs up!

I would also like to thank Johan Nordberg and nobuoka for their hard work coming up with their JavaScript gif writing libraries.

You can take face to gif for a quick spin, or look at the source code on github and fork it, improve it and have lots of fun, just like I did.

I cannot wait until WebRTC becomes really available on mobile devices too!

View full post on Mozilla Hacks – the Web developer blog


Making your HTML5 efforts worthwhile – #sotb3 talk

Today I gave a talk at the State of the browser 3 event in London, England. The Slides are here, a screencast (with bad audio) is on YouTube and here are the notes.

Abstract:
When the web was defined as an idea it was based on the principle of independence of hardware, global location, prosperity or ability. This changed drastically when the mobile web came around and we got sucked into a world of software dependent on certain hardware and global location. HTML5 and the mobile web based on open technologies became something that needed conversion to native code to access the new hardware people use. This is against the main principle of the web and means we duplicate efforts all over the place. In this talk Chris Heilmann shows how Mozilla is battling this trend and how brushing up your HTML5 solutions allows you to reach millions of new users forgotten by native technology but nevertheless eager to be online.

When I was a kid, I had an uncle in America who sent us comic books. One of the things advertised in these comic books were sea monkeys – awesome pets that form a whole society, play with one another, and that anyone can look after. Turns out these things are just brine shrimp and don’t look at all like that. Actually, they are really ugly and boring.

This is a bit what I feel like when we look at what happened to HTML5 on mobile devices. When the iPhone came out, Steve Jobs announced that there was no need for an SDK and that Safari with web technologies was more than enough to deliver great experiences. When I tried out what worked, though, I quickly found myself hindered in a lot of respects, and Apple’s attitude towards web technologies on the phone vs. native apps changed drastically and very quickly.

And somehow this led to a terrible experience of the web on mobile devices. This is especially annoying when sites have been “optimised for mobile viewing” and still fail to deliver anything useful. Another big thing happening right now is web sites redirecting you to download a native app instead. This is not what I want when I am on the go on a limited connection and simply want to look something up. Brad Frost collects a lot of terrible mobile user experiences at wtfmobileweb.com.

What happened? When did we give up on the idea of nice and responsive web products that use what is available to them? It cannot be about the tools we have. Browsers have amazing developer tools built into them these days – all of them, really. Using these tools, we have very fine-grained control over what happens in a browser.

For example in this shot you can see the difference using requestAnimationFrame instead of setTimeout makes.

requestAnimationFrame vs. setTimeout

Let’s not forget how far browsers have come in recent years.
Browsers these days are nothing short of awesome. Whatever you can complain about, you can file as a bug, and if it is a real issue it can be fixed within weeks. Every few weeks there are browser updates, and security fixes happen overnight. All of them render HTML the same way.

And yet the web is full of sites that are plain broken on different devices. Simple things like forgetting to define a viewport size can make an interface unusable or really annoying to get around. Why?

Terrible mobile login interface

I think as a community we get far too excited about products. The whole mobile space thrives on hardware sales. So instead of building stable and good solutions, we continuously want the newest and coolest and support it exclusively. I remember when Retina displays came out and many voices in the web design community called out that we needed to fundamentally change what we were doing. This is fleeting, and in many cases we as web developers aren’t even allowed to access the new technology that makes a product what it is. You can look very silly, very quickly, when chasing the shiny.

One big part of this was people getting too excited about the iPhone and Android as the only platforms to support, prematurely calling WebKit the only browser engine worthy of our efforts (effectively repeating the same mistake we made in the late 90s, which gave us all the fun “IE only” web products). With the announcement of the Blink rendering engine powering Opera and Chrome from here on forward, the “WebKit only” argument went down the drain. Good riddance.

Firefox OS

How about we give web technologies a new platform on mobile devices? And how about not trying to compete with iOS and Android on high end devices while doing so? This is what Firefox OS is about – it brings the web to people who have mobiles as their main interaction with the web – based on web technologies and without the lock-out.

Here are the main differences that Firefox OS brings in comparison to Android or iOS:

  • Targeted at new, emerging markets
  • Very affordable hardware
  • No credit card needed – client billing
  • Web technologies through and through
  • 18 mobile partners, 4 hardware partners

Firefox OS was created to bring users of feature phones into the web-enabled mobile world. It is meant to cater to the markets not covered by iOS and Android. Yes, you can buy cheap Androids world-wide but the version of Android they support doesn’t have an out-of-the-box browser that allows you to do interesting things on the web. Much like Firefox and Opera for Android allow more users world-wide to have a great web experience without having the latest hardware, Firefox OS goes further. Its main goal is to bring millions of new users to the web on their mobile devices without getting a second-grade experience.

The search interface of Firefox OS

One huge differentiator of Firefox OS is that instead of solely relying on a marketplace to list apps, apps can be found by entering what you are looking for. This means that if you enter, for example, a band name like “u2”, you get music apps offered to you. For a movie title, it is apps that have to do with films. These are both apps listed in the marketplace and web-optimised sites. Say you looked for a band: you could click on the Songkick icon and get the mobile interface of Songkick. You can try the app before you download it and see if you like it. If you want to install it, you just tap it for longer and Firefox OS will install the app – including offline functionality, a full-screen interface and the extra hardware access Firefox OS offers. This means your mobile interface becomes the ad for your application, and users don’t need to download and install a huge app just to try it out. Everybody wins. We made app discovery as easy as surfing the web.

What turns your HTML5 site into an app for Firefox OS is the manifest file:

{
  "name": "My App",
  "description": "My elevator pitch goes here",
  "launch_path": "/",
  "icons": { "128": "/img/icon-128.png" },
  "developer": {
    "name": "Your name or organization",
    "url": "http://your-homepage-here.org"
  }
}

In it you define the name, describe the app, give us info about yourself and which icons to display. You also define the localisations that are available and what access you need to the hardware. Depending on how many things you want to access, you can host the app yourself or you have to get it hosted through our infrastructure. This is a crucial part of keeping the platform secure. We can not just allow any app to make phone calls for example without the user initiating them.
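For illustration, localisation and permission entries in the manifest look roughly like this – the field names follow the Open Web Apps manifest format, while the values here are made up:

```javascript
{
  "default_locale": "en",
  "locales": {
    "de": {
      "name": "Meine App",
      "description": "Mein Elevator-Pitch"
    }
  },
  "permissions": {
    "geolocation": {
      "description": "Needed to show results near you"
    }
  }
}
```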

These are the three levels of apps available in Firefox OS. For third party app developers, the first two are the most interesting.

  • Hosted apps – stored on your server, easy to upgrade, limited access
  • Privileged apps – reviewed by the App store, use a Content Security Policy, hosted on a trusted server
  • Certified apps – part of the OS, only by Mozilla and partners

Apps that run on your own server have all the access HTML5 gives them (local storage via IndexedDB, offline storage via AppCache) and the new WebAPIs defined by Mozilla and proposed as a standard. A few of these APIs are also available across other browsers, for example the Mouselock API or the Battery API.

  • Vibration API
  • Screen Orientation
  • Geolocation API
  • Mouse Lock API
  • Open WebApps
  • Network Information API
  • Battery Status API
  • Alarm API
  • Push Notifications API
  • WebFM API / FMRadio
  • WebPayment
  • IndexedDB
  • Ambient light sensor
  • Proximity sensor
  • Notification

One very important API in this stack is the Open Web Apps API. With these few lines of code you can turn any HTML5 web app into a Firefox OS app by offering a link or button to install it. No need to go through the marketplace at all – you can be in full control of your app.

var installapp = navigator.mozApps.install(manifestURL);
installapp.onsuccess = function(data) {
  // App is installed
};
installapp.onerror = function() {
  // App wasn't installed; info is in
  // installapp.error.name
};

All the APIs are kept simple: they have a few properties you can read out, and they fire events when their values change. If you have used jQuery, you should be very familiar with this approach. This code, showing the Battery API, should not be black magic.

var b = navigator.battery;
if (b) {
  var level = Math.round(b.level * 100) + "%",
      charging = (b.charging) ? "" : "not ",
      chargeTime = parseInt(b.chargingTime / 60, 10),
      dischargeTime = parseInt(b.dischargingTime / 60, 10);
  // show() (defined elsewhere) renders the values above into the page
  b.addEventListener("levelchange", show);
  b.addEventListener("chargingchange", show);
  b.addEventListener("chargingtimechange", show);
  b.addEventListener("dischargingtimechange", show);
}

If you host your app in the Mozilla Marketplace your app can do more than just the APIs listed earlier. You can for example access the address book, store data on the device’s SD card, connect via TCP Sockets or call third party APIs with XHR.

  • Device Storage API
  • Browser API
  • TCP Socket API
  • Contacts API
  • systemXHR

For a hosted, privileged app it is simple to create a new contact. That enables you, for example, to sync address books across services. As with all the other APIs, you get event handlers that fire on success or failure.

var contact = new mozContact();
contact.init({name: "Christian"});
var request = navigator.mozContacts.save(contact);
request.onsuccess = function() {
  // contact generated
};
request.onerror = function() {
  // contact generation failed
};

Certified apps – apps built by Mozilla and partners have full access to the hardware and can do everything on it, including calls and text messaging and reading and writing permissions as well as accessing the camera.

  • WebTelephony
  • WebSMS
  • Idle API
  • Settings API
  • Power Management API
  • Mobile Connection API
  • WiFi Information API
  • WebBluetooth
  • Permissions API
  • Network Stats API
  • Camera API
  • Time/Clock API
  • Attention screen
  • Voicemail

One question we get a lot is why hosted apps on your own server can’t get full access to the camera and the phone – something that is always an annoyance on iOS, and the reason we need something like PhoneGap to create native code from our HTML5 solutions. The answer is security. We cannot just allow random code outside our control to access these devices without the user knowingly allowing it every time the functionality is accessed.

If, however, you are fine with the user initiating the access, then there is a way, using Web Activities. This, for example, is the result of asking for a picture:

Pick activity

The user gets an interface that allows access to the gallery, the wallpaper or to the camera. Once a photo is picked from any of those, the data goes back to your app. In other words, web activities allow you to interact with the native apps built into the OS for the purpose of storing, creating and manipulating a certain type of data. Instead of just sending the user to the other app you have a full feedback loop once the activity was successfully done, or cancelled. This is similar to intents on Android or pseudo URL protocols on iOS, with the difference that the user gets back to your app automatically.

There are many predefined Web Activities allowing you to talk to native apps. All of these are also proposals for standardisation.

  • configure
  • costcontrol
  • dial
  • open
  • pick
  • record
  • save-bookmark
  • share
  • view
  • new, e.g. type: “websms/sms” or “webcontacts/contact”

For example, this is all the code needed to send a phone number to the hardware. For the user, it switches to the dialer app and they have to initiate the call. Once the call is hung up (or could not be connected), the user gets back to your app with information on the call (duration and the like).

var call = new MozActivity({
  name: "dial",
  data: {
    number: "+1804100100"
  }
});

To get a picture from the phone you initiate the pick activity and specify an image MIME type. This offers the user all the apps that store and manipulate images – including the camera – to choose from.

var getphoto = new MozActivity({
  name: "pick",
  data: {
    type: ["image/png", "image/jpg", "image/jpeg"]
  }
});

Again, a simple event handler gets you the image as a data blob and you can play with it in your app.

getphoto.onsuccess = function () {
  var img = document.createElement("img");
  if (this.result.blob.type.indexOf("image") != -1) {
    img.src = window.URL.createObjectURL(this.result.blob);
  }
};
getphoto.onerror = function () {
  // error
};

The great news is that if you have Firefox on Android, this functionality is also available outside of Firefox OS for you – any Android device will do.

I hope you are as excited as we are and you are ready to have a go at playing with these APIs and Activities. But where to start?

The Firefox OS Developer Hub is the one-stop shop for everything Firefox OS. There you can find information on what makes a good HTML5 app, play with and download example apps to turn into your own, and find information on how to submit your app to the marketplace or how to publish it yourself. You also get information about monetisation and how to set up a development environment (basically, installing the Simulator).

Simulator

The simplest way to test out Firefox OS is to install the Simulator, which is just an add-on for Firefox. Once it is installed, you can test your apps on your server or on your local hard drive in a Firefox OS instance running in its own thread and window. You get all kinds of feedback about your app working in Firefox OS from the developer console and the error logs.

alt=”Boilerplate App” height=”400” class=”middle shadow”>

Firefox OS Boilerplate is a demo app that has stub code for all the different Web Activities. You can try them out that way and just delete the ones you don’t need. It is a great demo app to get started with and base your efforts on.

Geeksphone

Sooner or later you’d want to test your app on a real device though. The easiest way to do that is to get a developer phone from Geeksphone.com. These have the same specifications as the Firefox OS phones sold in the markets our partners are targeting. These are now ready for pre-order and the shop should be live soon.

Even if you don’t care for Firefox OS or don’t want to build something for it, rest assured that it will have an impact on the current mobile web. A whole new group of users will emerge and the mobile version of your site will become a calling card for them to get interested in your offers. And if anything, HTML5 is supported by every player in the market, so now is a good time to brush up what is out there. This is what Firefox OS brings you – and soon:

  • A whole new audience
  • HTML5 without the lock-out
  • Your web site is your ad!
  • Minimal extra work, it works across platforms

View full post on Christian Heilmann

VN:F [1.9.22_1171]
Rating: 10.0/10 (1 vote cast)
VN:F [1.9.22_1171]
Rating: +2 (from 2 votes)

Making your HTML5 efforts worthwhile – notes of the #sotb3 talk

In a few hours I will be giving a talk at the State of the Browser 3 event in London, England. The slides are here and here are the notes:

Abstract:
When the web was defined as an idea it was based on the principle of independence of hardware, global location, prosperity or ability. This changed drastically when the mobile web came around and we got sucked into a world of software dependent on certain hardware and global location. HTML5 and the mobile web based on open technologies became something that needed conversion to native code to access the new hardware people use. This is against the main principle of the web and means we duplicate efforts all over the place. In this talk Chris Heilmann shows how Mozilla is battling this trend and how brushing up your HTML5 solutions allows you to reach millions of new users forgotten by native technology but nevertheless eager to be online.

When I was a kid, I had an uncle in America who sent us comic books. One of the things advertised in these comic books were Sea-Monkeys – awesome pets that form a whole society, play with one another and are easy to look after. It turns out these things are just brine shrimp and don’t look at all like the ads. Actually, they are really ugly and boring.

This is a bit what I feel like when I look at what happened to HTML5 on mobile devices. When the iPhone came out, Steve Jobs announced that there was no need for an SDK and that Safari with web technologies was more than enough to deliver great experiences. When I tried what actually worked, though, I quickly found myself hindered in a lot of respects, and Apple’s attitude towards web technologies on the phone versus native apps changed drastically and very quickly.

And somehow this led to a terrible experience of the web on mobile devices. It is especially annoying when sites have been “optimised for mobile viewing” and still fail to deliver anything useful. Another big thing happening right now is web sites redirecting you to download a native app instead. This is not what I want when I am on the go on a limited connection and simply want to look something up. Brad Frost collects a lot of these terrible mobile user experiences at wtfmobileweb.com.

What happened? When did we give up on the idea of nice, responsive web products that use what is available to them? It cannot be about the tools we have. Browsers have amazing developer tools built into them these days – all of them, really. Using these tools, we have very fine-grained control over what happens in a browser.

For example in this shot you can see the difference using requestAnimationFrame instead of setTimeout makes.

Requestanimationframe vs setTimeout
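As a rough sketch of the pattern behind that screenshot: with requestAnimationFrame the browser hands you a high-resolution timestamp and schedules the next frame itself, instead of you guessing an interval with setTimeout. The `progress` helper, the element and the 500ms duration below are my own illustration, not from the talk.

```javascript
// Pure helper: how far through an animation are we at timestamp "now"?
// Returns a value between 0 and 1, clamped at the end of the duration.
function progress(start, duration, now) {
  return Math.min((now - start) / duration, 1);
}

// In the browser you would drive an animation like this:
//
//   var start = performance.now();
//   function tick(now) {
//     box.style.left = (progress(start, 500, now) * 200) + "px";
//     if (progress(start, 500, now) < 1) {
//       requestAnimationFrame(tick); // the browser picks the next frame
//     }
//   }
//   requestAnimationFrame(tick);
```

Because the browser decides when the next frame happens, it can skip work for hidden tabs and sync your updates with the display refresh – something a fixed setTimeout interval can never do.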

Let’s not forget how far browsers have come in recent years.
Browsers these days are nothing short of awesome. Whatever you can complain about, you can file as a bug, and if it is a real issue it can be fixed within weeks. Every few weeks there are browser updates, and security fixes happen overnight. And they render HTML more consistently than ever before.

And yet the web is full of sites that are plain broken on different devices. Simple things like forgetting to define a viewport size can make an interface unusable or really annoying to get around. Why?

Terrible mobile login interface

I think as a community we get far too excited about products. The whole mobile space thrives on hardware sales, so instead of building stable and good solutions we continuously want the newest and coolest and support it exclusively. I remember when Retina displays came out and many voices in the web design community called for us to fundamentally change what we were doing. This is fleeting, and in many cases we as web developers aren’t even allowed to access the new technology that makes a product what it is. You can look very silly, very quickly, when chasing the shiny.

One big part of this was people getting too excited about the iPhone and Android as the only platforms to support, prematurely calling WebKit the only browser engine worthy of our efforts (effectively repeating the same mistakes we made in the late 90s, which gave us all the fun “IE only” web products). With the announcement of the Blink rendering engine powering Opera and Chrome from here on, the “WebKit only” argument went down the drain. Good riddance.

Firefox OS

How about we give web technologies a new platform on mobile devices? And how about not trying to compete with iOS and Android on high end devices while doing so? This is what Firefox OS is about – it brings the web to people who have mobiles as their main interaction with the web – based on web technologies and without the lock-out.

Here are the main differences Firefox OS brings in comparison to Android or iOS:

  • Targeted at new, emerging markets
  • Very affordable hardware
  • No credit card needed – client billing
  • Web technologies through and through
  • 18 mobile partners, 4 hardware partners

Firefox OS was created to bring users of feature phones into the web-enabled mobile world. It is meant to cater to the markets not covered by iOS and Android. Yes, you can buy cheap Androids world-wide, but the version of Android they run doesn’t come with an out-of-the-box browser that allows you to do interesting things on the web. Much like Firefox and Opera for Android allow more users world-wide to have a great web experience without the latest hardware, Firefox OS goes further. Its main goal is to bring millions of new users to the web on their mobile devices without giving them a second-grade experience.

The search interface of Firefox OS

One huge differentiator of Firefox OS is that instead of relying solely on a marketplace to list apps, apps can be found by entering what you are looking for. This means that if you enter, for example, a band name like “u2”, you get music apps offered to you. For a movie title, it is apps that have to do with films. These are both apps listed in the marketplace and web-optimised sites. Say you looked for a band: you could tap the Songkick icon and get the mobile interface of Songkick. You can try the app before you download it and see if you like it. If you want to install it, you just tap it for longer and Firefox OS will install the app – including offline functionality, a full-screen interface and the extra hardware access Firefox OS offers. This means your mobile interface becomes the ad for your application, and users don’t need to download and install a huge app just to try it out. Everybody wins. We made app discovery as easy as surfing the web.

What turns your HTML5 site into an app for Firefox OS is the manifest file:

{
  "name": "My App",
  "description": "My elevator pitch goes here",
  "launch_path": "/",
  "icons": { "128": "/img/icon-128.png" },
  "developer": {
    "name": "Your name or organization",
    "url": "http://your-homepage-here.org"
  }
}

In it you define the name, describe the app, give us information about yourself and state which icons to display. You also define the localisations that are available and what access to the hardware you need. Depending on how many things you want to access, you can host the app yourself or you have to get it hosted through our infrastructure. This is a crucial part of keeping the platform secure. We cannot just allow any app to make phone calls, for example, without the user initiating them.
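For example, a manifest that adds a German localisation and asks for geolocation access could look like this – a sketch based on the manifest fields of the time, with illustrative values:

```json
{
  "name": "My App",
  "default_locale": "en",
  "locales": {
    "de": {
      "name": "Meine App",
      "description": "Mein Elevator-Pitch"
    }
  },
  "permissions": {
    "geolocation": {
      "description": "Needed to find offers near you"
    }
  }
}
```

The description inside each permission is what the user gets shown when deciding whether to allow the access.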

These are the three levels of apps available in Firefox OS. For third party app developers, the first two are the most interesting.

  • Hosted apps – stored on your server, easy to upgrade, limited access.
  • Privileged apps – reviewed in the app store, use a Content Security Policy, hosted on a trusted server
  • Certified apps – part of the OS, only by Mozilla and partners

Apps that run on your own server have all the access HTML5 gives them (local storage via IndexedDB, offline storage via AppCache) plus the new WebAPIs defined by Mozilla and proposed as standards. A few of these APIs are also available in other browsers, for example the Mouse Lock API or the Battery API.

  • Vibration API
  • Screen Orientation
  • Geolocation API
  • Mouse Lock API
  • Open WebApps
  • Network Information API
  • Battery Status API
  • Alarm API
  • Push Notifications API
  • WebFM API / FMRadio
  • WebPayment
  • IndexedDB
  • Ambient light sensor
  • Proximity sensor
  • Notification
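Since not every API in this list is available in every browser – and some shipped behind vendor prefixes at first – it pays to feature-detect before relying on them. A minimal sketch; the mapping of labels to navigator property names here is my own illustration, not an official list:

```javascript
// Map human-readable labels to the navigator properties the APIs
// commonly hang off (some browsers exposed prefixed names instead).
var apis = {
  "Vibration": "vibrate",
  "Geolocation": "geolocation",
  "Battery Status": "battery",
  "Open WebApps": "mozApps"
};

// Return the labels of all APIs the given navigator object exposes.
function supportedApis(nav) {
  var found = [];
  for (var label in apis) {
    if (apis[label] in nav) {
      found.push(label);
    }
  }
  return found;
}

// In the browser: console.log(supportedApis(navigator));
```

Testing for the property rather than assuming support keeps your app working as a plain web site in browsers that lack an API.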

One very important API in this stack is the Open Web Apps API. With these few lines of code you can turn any HTML5 web app into a Firefox OS app by offering a link or button to install it. No need to go through the marketplace at all – you can be in full control of your app.

var installapp = navigator.mozApps.install(manifestURL);
installapp.onsuccess = function(data) {
  // App is installed
};
installapp.onerror = function() {
 // App wasn't installed, info is in 
 // installapp.error.name
};

All the APIs are kept simple: they have a few properties you can read out, and they fire events when their values change. If you have used jQuery, you should be very familiar with this approach. This code, showing the Battery API, should not be black magic.

var b = navigator.battery;
if (b) {
  var level = Math.round(b.level * 100) + "%",
      charging = (b.charging) ? "" : "not ",
      chargeTime = parseInt(b.chargingTime / 60, 10),
      dischargeTime = parseInt(b.dischargingTime / 60, 10);
  // show() is your own function that displays these values
  b.addEventListener("levelchange", show);
  b.addEventListener("chargingchange", show);
  b.addEventListener("chargingtimechange", show);
  b.addEventListener("dischargingtimechange", show);
}

If you host your app in the Mozilla Marketplace your app can do more than just the APIs listed earlier. You can for example access the address book, store data on the device’s SD card, connect via TCP Sockets or call third party APIs with XHR.

  • Device Storage API
  • Browser API
  • TCP Socket API
  • Contacts API
  • systemXHR

For a privileged app it is simple to create a new contact, which enables you, for example, to sync address books across services. As with all the other APIs, you get event handlers that fire on success or failure.

var contact = new mozContact();
contact.init({ name: "Christian" });
var request = navigator.mozContacts.save(contact);
request.onsuccess = function () {
  // contact generated
};
request.onerror = function () {
  // contact generation failed
};

Certified apps – apps built by Mozilla and partners – have full access to the hardware and can do everything on it, including calls, text messaging, reading and writing permissions, and accessing the camera.

  • WebTelephony
  • WebSMS
  • Idle API
  • Settings API
  • Power Management API
  • Mobile Connection API
  • WiFi Information API
  • WebBluetooth
  • Permissions API
  • Network Stats API
  • Camera API
  • Time/Clock API
  • Attention screen
  • Voicemail

One question we get a lot is why hosted apps on your own server cannot get full access to the camera and the phone – something that is always an annoyance on iOS and the reason we need tools like PhoneGap to create native code from our HTML5 solutions. The reason is security. We cannot just allow random code outside our control to access these devices without the user knowingly allowing it every time that functionality is used.

If, however, you are fine with the user initiating the access, then there is a way: Web Activities. This, for example, is the result of asking for a picture:

Pick activity

The user gets an interface that allows access to the gallery, the wallpapers or the camera. Once a photo is picked from any of those, the data goes back to your app. In other words, Web Activities allow you to interact with the native apps built into the OS for the purpose of storing, creating and manipulating a certain type of data. Instead of just sending the user to the other app, you get a full feedback loop once the activity has been completed or cancelled. This is similar to Intents on Android or pseudo URL protocols on iOS, with the difference that the user gets back to your app automatically.

There are many predefined Web Activities that allow you to talk to native apps. All of these are also proposals for standardisation.

  • configure
  • costcontrol
  • dial
  • open
  • pick
  • record
  • save-bookmark
  • share
  • view
  • new, e.g. type: “websms/sms” or “webcontacts/contact”

For example, this is all the code needed to send a phone number to the hardware. For the user, the phone switches to the dialer app and they have to initiate the call themselves. Once the call is hung up (or could not be connected), the user gets back to your app with information about the call (duration and the like).

var call = new MozActivity({
  name: "dial",
  data: {
    number: "+1804100100"
  }
});

To get a picture from the phone you initiate the pick activity and specify an image MIME type. This offers the user all the apps that store and manipulate images – including the camera – to choose from.

var getphoto = new MozActivity({
  name: "pick",
  data: {
    type: ["image/png", "image/jpg", "image/jpeg"]
  }
});

Again, a simple event handler gets you the image as a data blob and you can play with it in your app.

getphoto.onsuccess = function () {
  var img = document.createElement("img");
  if (this.result.blob.type.indexOf("image") !== -1) {
    img.src = window.URL.createObjectURL(this.result.blob);
  }
};
getphoto.onerror = function () {
  // error handling goes here
};

The great news is that if you have Firefox on Android, this functionality is also available outside of Firefox OS for you – any Android device will do.

I hope you are as excited as we are and you are ready to have a go at playing with these APIs and Activities. But where to start?

The Firefox OS Developer Hub is the one-stop shop for everything Firefox OS. There you can find information on what makes a good HTML5 app, play with and download example apps to turn into your own, and learn how to submit your app to the Marketplace or how to publish it yourself. You also get information about monetisation and how to set up a development environment (basically installing the Simulator).

Simulator

The simplest way to test out Firefox OS is to install the Simulator, which is just an add-on for Firefox. Once it is installed, you can test your apps – on your server or on your local hard drive – in a Firefox OS instance running in its own thread and its own window. You get all kinds of feedback about your app running in Firefox OS through the developer console and the error logs.

Boilerplate App

The Firefox OS Boilerplate is a demo app with stub code for all the different Web Activities. You can try them out that way and simply delete the ones you don’t need. It is a great demo app to get started with and base your efforts on.

Geeksphone

Sooner or later you’ll want to test your app on a real device, though. The easiest way to do that is to get a developer phone from Geeksphone.com. These have the same specifications as the Firefox OS phones sold in the markets our partners are targeting. They are now ready for pre-order, and the shop should be live soon.

Even if you don’t care about Firefox OS or don’t want to build something for it, rest assured that it will have an impact on the current mobile web. A whole new group of users will emerge, and the mobile version of your site will become a calling card that gets them interested in your offers. And if anything, HTML5 is supported by every player in the market, so now is a good time to brush up on what is out there. This is what Firefox OS brings you – and soon:

  • A whole new audience
  • HTML5 without the lock-out
  • Your web site is your ad!
  • Minimal extra work, it works across platforms

View full post on Christian Heilmann


Making WebRTC Simple with conversat.io

WebRTC is awesome, but it’s a bit unapproachable. Last week, my colleagues and I at &yet released a couple of tools we hope will help make it more tinkerable and pose a real risk of actually being useful.

As a demo of these tools, we very quickly built a simple product called conversat.io that lets you create free, multi-user video calls with no account and no plugins, just by going to a URL in a modern browser. Anyone who visits that same URL joins the call.

conversat.io

The purpose of conversat.io is twofold. First, it’s a useful communication tool. Our team uses And Bang for tasks and group chat, so being able to drop a link to a video conversation “room” into our team chat that people can join is super useful. Second, it’s a demo of the SimpleWebRTC.js library and the little signaling server that runs it, signalmaster.

(Both SimpleWebRTC and signalmaster are open sourced on GitHub and MIT licensed. Help us make them better!)

Quick note on browser support

WebRTC currently only works in Chrome stable and Firefox Nightly (with the media.peerconnection.enabled preference enabled in about:config).

Hopefully we’ll see much broader browser support soon. I’m particularly excited about having WebRTC available on smartphones and tablets.

Approachability and adoption

I firmly believe that widespread adoption of new web technologies is directly correlated to how easy they are to play with. When I was a new JS developer, it was jQuery’s approachability that made me feel empowered to build cool stuff.

My falling in love with JavaScript started with doing this with jQuery:

$('#demo').slideDown();

And then seeing the element move on my screen. I knew nothing. But as cheesy as it sounds, this simple thing left me feeling empowered to build more interesting things.

Socket.io did the same thing for people wanting to build apps that pushed data from the server to the client:

// server:
client.emit("something", {
    some: "data" 
});
// client:
socket = io.connect();
socket.on("something", function (data) {
    // here's my data!
    console.log(data);
});

Rather than having to figure out how to set up long-polling, BOSH, and XMPP to get data pushed out to the browser, I could now just send messages to it. In fact, I didn’t even have to think about serialising and de-serialising if I didn’t want to. I could just pass simple JavaScript objects seamlessly back and forth between the client and server.
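Under the hood, that seamless object passing is just JSON serialisation – a quick sketch of what socket.io spares you from writing by hand:

```javascript
// The object you emit travels as a JSON string and is revived as a
// plain object on the other side of the socket.
var message = { some: "data", count: 2 };

var wire = JSON.stringify(message);  // what actually goes over the wire
var revived = JSON.parse(wire);      // what your handler receives

console.log(revived.some);   // "data"
console.log(revived.count);  // 2
```

Not having to think about this wire format is exactly the kind of approachability that gets people building.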

I’ve heard some “hardcore” devs complain that tools like this lead to too many poorly made tools and too many “wannabe” developers who don’t know what they’re doing. That’s garbage.

Approachable tools that make developers feel empowered to build cool stuff are the reason the web is as successful and vibrant as it is.

Tools like this are the gateway drug for getting us hooked on building things on these types of technologies. They introduce the concept and help us think about what could be built. Whether or not we ultimately end up building the final app with the tool whose simplicity introduced it to us is irrelevant.

The potential of WebRTC

I’m convinced WebRTC has the potential to have a huge impact on how we communicate. It already has for our team at &yet. Sure, we already used stuff like Skype, Facetime, and Google Hangouts. But the simplicity and convenience of just opening a URL in a browser and instantly being in a conversation is powerful.

Once this technology is broadly available and on mobile devices, it’s nothing short of a game changer for communications.

Challenges

There are definitely quite a few hurdles that get in the way of just playing with WebRTC: complexity and browser differences in instantiating peer connections, generating and processing signaling messages, and attaching media streams to video elements.

Even at the point you have those things, you still need a way to let two users find each other and have a mechanism for each user to send the proper signaling messages directly to the other user or users that they want to connect to.

SimpleWebRTC.js is our answer to the clientside complexities. It abstracts away API differences between Firefox and Chrome.
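A lot of what it papers over is prefixing – at the time, Firefox and Chrome exposed the same constructors under different vendor-prefixed names. A sketch of the kind of lookup involved (illustrative, not SimpleWebRTC’s actual code):

```javascript
// Return whichever peer connection constructor the environment
// exposes, or undefined if WebRTC is not available at all.
function getPeerConnection(win) {
  return win.RTCPeerConnection ||
         win.mozRTCPeerConnection ||
         win.webkitRTCPeerConnection;
}

// In the browser: var PeerConnection = getPeerConnection(window);
```

The library does the same kind of normalisation for getUserMedia, session descriptions and attaching streams to video elements, so your code only deals with one API.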

Using SimpleWebRTC

At its simplest, you just need to include the SimpleWebRTC.js script and provide a container for your local video and one for the remote video(s), like this:

<!DOCTYPE html>
<html>
    <head>
        <script src="http://simplewebrtc.com/latest.js"></script> 
    </head>
    <body>
        <div id="localVideo"></div>
        <div id="remoteVideos"></div>
    </body>
</html>

Then you just init a webrtc object and tell it which containers to use:

var webrtc = new WebRTC({
    // the id of (or actual element) to hold "our" video
    localVideoEl: 'localVideo',
 
    // the id of or actual element that will hold remote videos
    remoteVideosEl: 'remoteVideos',
 
     // immediately ask for camera access
    autoRequestMedia: true
});

At this point, if you run the code above, you’ll see your video turn on and render in the container you gave it.

The next step is to actually specify who you want to connect to.

For simplicity and maximum “tinkerability”, we do this by asking both users who want to connect to join the same “room” – which basically means calling “join” with the same string.

So, for demonstration purposes we’ll just tell our webrtc to join a certain room once it’s ready (meaning it’s connected to the signaling server). We do this like so:

// we have to wait until it's ready
webrtc.on('readyToCall', function () {
    // you can name it anything
    webrtc.joinRoom('your awesome room name');
});

Once a user has done this, they are ready and waiting for someone to join.

If you want to test this locally, you can either open it in Firefox and Chrome or in two tabs within Chrome. (Firefox doesn’t yet let two tabs both access local media).

At this point, you should automatically be connected and be having a lively (probably very echo-y!) conversation with yourself.

If you happen to be me, it’d look like this:

henrik in conversat.io

The signaling server

The example above will connect to a sandbox signaling server we keep running to make it easy to mess around with this stuff.

We aim to keep it available for people to use to play with SimpleWebRTC, but it’s definitely not meant for production use and we may kill it or restart it at any time.

If you want to actually build an app that depends on it, you can either run one yourself or, if you’d rather not mess with it, we can host one for you, keep it up to date and help scale it. The code for that server is on GitHub.

You can point at a different signaling server by passing a “url” option when initiating your webrtc object.
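Concretely, that is the same init call as before with one extra option – the server address here is a placeholder for your own signalmaster instance:

```javascript
var webrtc = new WebRTC({
    localVideoEl: 'localVideo',
    remoteVideosEl: 'remoteVideos',
    autoRequestMedia: true,
    // your own signalmaster instance instead of the public sandbox
    url: 'http://your-signaling-server.example.com:8888'
});
```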

So, what’s it actually doing under the hood?

It’s not too bad, really. You can read the full source of the client library here: https://github.com/HenrikJoreteg/SimpleWebRTC/blob/master/simplewebrtc.js and the signaling server here: https://github.com/andyet/signalmaster/blob/master/server.js

The process of starting a video call in conversat.io looks something like this:

  1. Establish connection to the signaling server. It does this with socket.io and connects to our sandbox signaling server at: http://signaling.simplewebrtc.com:8888

  2. Request access to the local video camera by calling the browser-prefixed getUserMedia.

  3. Create or get the local video element and attach the stream we get back from getUserMedia to it.

    firefox:

    element.mozSrcObject = stream;
    element.play();

    webkit:

    element.autoplay = true;
    element.src = webkitURL.createObjectURL(stream);
  4. Call joinRoom, which sends a socket.io message to the signaling server telling it the name of the room we want to connect to. The signaling server will either create the room if it doesn’t exist or join it if it does. All I mean by “room” is that the particular socket.io session ID is grouped under that room name, so we can broadcast messages about people joining or leaving that room only to the clients connected to it.

  5. Now we play an awesome rocket lander game that @fritzy wrote while we wait for someone to join us.

  6. When someone else joins the same “room”, we broadcast that to the other connected users and create a Conversation object that we’ve defined, which wraps the browser’s peer connection. The peer connection represents, as you’d probably guess, the connection between you and another person.

  7. The signaling server broadcasts the new socket.io session ID to each user in the room and each user’s client creates a Conversation object for every other user in the room.

  8. At this point we have a mechanism of knowing who to connect to and how to send direct messages to each of their sessions.

  9. Now we use the peer connection to create an “offer”, which we set as the local description of our peer connection. It contains information about how another client can reach and talk to our browser.

    peerConnection.createOffer();

    We then send this over our socket.io connection to the other people in the room.

  10. When a client receives an offer we add it to our peer connection:

    var remote = new RTCSessionDescription(message.payload);
    peerConnection.setRemoteDescription(remote);

    and generate an answer by calling peerConnection.createAnswer() and send that back to the person we got the offer from.

  11. When the answer is received we set it as the remote description. Then we create and send ICE candidates in much the same way. This negotiates our connection and connects us.

  12. If that process is successful we’ll get an onaddstream event from our peer connection, and we can then create a video element and attach that stream to it. At this point the video call should be in progress.
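The room bookkeeping from steps 4 and 7 boils down to very little code. A toy sketch – not the actual signalmaster implementation – of grouping session IDs by room name so a join can be broadcast to everyone already there:

```javascript
// Session ids grouped by room name. Joining returns the list of
// sessions that should be notified about the newcomer.
var rooms = {};

function join(room, sessionId) {
  if (!rooms[room]) {
    rooms[room] = []; // create the room on first join
  }
  var toNotify = rooms[room].slice(); // everyone already in the room
  rooms[room].push(sessionId);
  return toNotify;
}

join("demo", "session-a");             // nobody to notify yet
var peers = join("demo", "session-b"); // ["session-a"]
```

In the real server, each entry in that list gets a socket.io message telling it to create a Conversation object for the new session.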

If you wish to dig into it further, send pull requests and file issues on the SimpleWebRTC project on github.

The road ahead

This is just a start. Help us make this stuff better!

There’s a lot more we’d like to see with this:

  1. Making the signaling piece more pluggable (so you can use whatever you want).
  2. Adding support for pausing and resuming video/audio.
  3. It’d be great to be able to figure out who’s talking and emit an event to other connected users when that changes.
  4. Better control over handling/rejecting incoming requests.
  5. Setting max connections, perhaps determined based on HTML5 connection APIs?

Hit me up on twitter (@henrikjoreteg) if you do something cool with this stuff or run into issues or just want to talk about it. I’d love to hear from you.

Keep building awesome stuff, you amazing web people! Go go gadget Internet!

View full post on Mozilla Hacks – the Web developer blog
