
Offline strategies come to the Service Worker Cookbook

serviceworke.rs is a compendium of common and uncommon Service Worker use cases including push examples, usage patterns, performance tips and caching strategies.

Service Worker Cookbook recipes are presented as playgrounds or labs, with fully functional client-server setups, where you can learn and experiment with results using in-browser developer tools.

Still, the cookbook is far from comprehensive, and we realised it lacked some basic materials and user feedback mechanisms. Today, I’m proud to announce some changes to the Service Worker Cookbook starting with a new section about caching strategies.

Caching Strategies

The caching strategies section includes recipes that demonstrate several ways of serving content from a service worker. The recipes share an identical layout in which two iframes are displayed side by side. Both show an image element pointing to the same online picture.

The first iframe is not under service worker interception, so the picture always displays fresh content from the server. In contrast, the second iframe is controlled by the service worker and the content is served according to the implemented cache strategy.

Layout for offline recipes: two iframes, the first uncontrolled and the second controlled by the service worker.

Picture content changes on the server every 10 seconds, and there is a button to refresh both iframes at the same time so you can compare what happens to the images.

Screenshot: the cache, update and refresh recipe with out-of-sync images.

Some of the caching strategies are taken from Jake Archibald’s inspiring article “The offline cookbook”, and others are homegrown.

Cache only

The most basic example: With cache only, requests will never reach the network. Instead, they will be served by the service worker from a local cache.

self.addEventListener('fetch', function(evt) {
  evt.respondWith(fromCache(evt.request));
});

function fromCache(request) {
  return caches.open(CACHE).then(function (cache) {
    return cache.match(request).then(function (matching) {
      return matching || Promise.reject('no-match');
    });
  });
}

In this implementation, cache-only assets are stored while installing the service worker and they will remain there until a new version of the worker is installed.

self.addEventListener('install', function(evt) {
  evt.waitUntil(precache());
});

function precache() {
  return caches.open(CACHE).then(function (cache) {
    return cache.addAll([
      './controlled.html',
      './asset'
    ]);
  });
}

You can use the cache-only strategy for your site’s UI related assets such as images, HTML, sprite sheets or CSS files.
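Since cache-only assets remain until a new worker version installs, a common companion to this strategy is to version the cache name and drop stale caches when the new worker activates. A minimal sketch (the cache name, version suffix, and the activate wiring below are assumptions for illustration, not part of the recipe):

```javascript
var CACHE = 'cache-only-v2'; // bump the version to invalidate old assets

// Delete every cache whose name differs from the current one.
function deleteOldCaches() {
  return caches.keys().then(function (keys) {
    return Promise.all(keys.filter(function (key) {
      return key !== CACHE;
    }).map(function (key) {
      return caches.delete(key);
    }));
  });
}

// In the worker, this would run during activation:
// self.addEventListener('activate', function (evt) {
//   evt.waitUntil(deleteOldCaches());
// });
```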

Cache and update

This slight variation on the cache-only strategy serves assets from a local cache, but it additionally sends a network request for an updated version of each asset. The new content then replaces the older asset in the local cache.

self.addEventListener('fetch', function(evt) {
  evt.respondWith(fromCache(evt.request));
  evt.waitUntil(update(evt.request));
});

function update(request) {
  return caches.open(CACHE).then(function (cache) {
    return fetch(request).then(function (response) {
      // Clone the response: its body can only be read once, and we both
      // store it in the cache and hand it back to the caller.
      return cache.put(request, response.clone()).then(function () {
        return response;
      });
    });
  });
}

With this cache and update strategy, there comes a point when your cached assets fall out of sync with those online, but they are brought back in sync upon the next request, which roughly translates to a second visit.

It is totally fine to use this strategy when delivering independent, non-critical content such as avatars or icons. Avoid relying on it for interdependent assets (such as a complete UI theme), since nothing ensures that all the assets will update together.

Cache, update and refresh

Another twist on the previous strategy, now with a refreshing ingredient.

With cache, update and refresh the client will be notified by the service worker once new content is available. This way your site can show content without waiting for the network responses, while providing the UI with the means to display up-to-date content in a controlled way.

self.addEventListener('fetch', function(evt) {
  evt.respondWith(fromCache(evt.request));
  evt.waitUntil(
    update(evt.request)
    .then(refresh)
  );
});

function refresh(response) {
  return self.clients.matchAll().then(function (clients) {
    clients.forEach(function (client) {
      var message = {
        type: 'refresh',
        url: response.url,
        eTag: response.headers.get('ETag')
      };
      client.postMessage(JSON.stringify(message));
    });
  });
}

This approach is useful for almost any kind of content. It differs from the previous strategy in that the user doesn’t need to refresh or visit the site a second time: because the client is aware of new content, the UI can update in smart, non-intrusive ways.
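On the page side, the controlled document can listen for that message and decide whether to act on it. A sketch of such a handler (the element id and the ETag bookkeeping are assumptions for illustration, not part of the recipe shown here):

```javascript
// Decide whether a worker message warrants a refresh: return the URL
// to reload when the content changed, or null otherwise.
function shouldRefresh(rawMessage, currentETag) {
  var message = JSON.parse(rawMessage);
  if (message.type === 'refresh' && message.eTag !== currentETag) {
    return message.url;
  }
  return null;
}

// In the page, this might be wired up as:
// navigator.serviceWorker.onmessage = function (evt) {
//   var url = shouldRefresh(evt.data, lastKnownETag);
//   if (url) { document.getElementById('image').src = url; }
// };
```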

Embedded fallback

There are situations in which you always want to display something to replace content that’s missing for whatever reason (network error, 404, no connection). It’s possible to guarantee always-available offline content by embedding that content into the service worker itself.

self.addEventListener('fetch', function(evt) {
  evt.respondWith(networkOrCache(evt.request).catch(function () {
    return useFallback();
  }));
});

// The inline SVG markup is omitted here; you can see the full
// source code in the recipe.
var FALLBACK = '…';

function useFallback() {
  return Promise.resolve(new Response(FALLBACK, { headers: {
    'Content-Type': 'image/svg+xml'
  }}));
}

In this recipe, the SVG which acts as a replacement for missing content is included in the worker. As soon as it is installed, fallbacks will be available without performing new network requests.
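The networkOrCache helper called in the fetch listener isn’t shown in this excerpt. A plausible sketch, reusing the fromCache helper from the cache-only recipe (an assumption about the recipe’s internals; note that a 404 or 500 is a valid response for fetch, so we check `ok` to make the fallback kick in for error statuses too):

```javascript
function networkOrCache(request) {
  return fetch(request).then(function (response) {
    // Error statuses (4xx/5xx) don't reject the fetch promise,
    // so treat them as misses and try the cache.
    return response.ok ? response : fromCache(request);
  }).catch(function () {
    // The network is unreachable: try the cache instead.
    return fromCache(request);
  });
}
```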

Network or cache

Service workers place themselves between the client and the Internet. To some extent, they allow the developer to model their ideal network behaviour. This strategy builds on that idea by imposing a time limit on network responses.

self.addEventListener('fetch', function(evt) {
  evt.respondWith(fromNetwork(evt.request, 400).catch(function () {
    return fromCache(evt.request);
  }));
});

function fromNetwork(request, timeout) {
  return new Promise(function (fulfill, reject) {
    var timeoutId = setTimeout(reject, timeout);
    fetch(request).then(function (response) {
      clearTimeout(timeoutId);
      fulfill(response);
    }, reject);
  });
}

With this recipe, requests are intercepted by the service worker and passed to the network. If the response takes too long, the process is interrupted and the content is served from a local cache instead.

Time limited network or cache can actually be combined with any other technique. The strategy simply gives the network a chance to answer quickly with fresh content.
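The timing trick in fromNetwork can also be expressed as a reusable helper with Promise.race, which makes it easy to bolt a time limit onto any of the other strategies. This is a sketch, not part of the recipe:

```javascript
// Reject if `promise` doesn't settle within `ms` milliseconds.
function withTimeout(promise, ms) {
  var timer = new Promise(function (resolve, reject) {
    setTimeout(function () {
      reject(new Error('timeout'));
    }, ms);
  });
  return Promise.race([promise, timer]);
}

// e.g. in a fetch listener:
// evt.respondWith(
//   withTimeout(fetch(evt.request), 400).catch(function () {
//     return fromCache(evt.request);
//   })
// );
```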

User feedback

We want to know if recipes are useful, and if you find them clear or confusing. Do they provide unique value or are they redundant? We've added Disqus comments to recipes so you can share your feedback. Log in with Facebook, Twitter, Google or Disqus, and tell us how this recipe has served you or participate in the discussion about recommended use cases.

And more to come

We won’t stop here. More recipes are coming, and new enhancements are on their way: an improved way to request recipes, an easier contribution pipeline, a visual refresh, and a renewed recipe layout are all on our radar. If you like serviceworke.rs, please share it with your friends and colleagues. Feel free to use these recipes in your talks or presentations, and, most importantly, help us by providing feedback in the form of on-site comments, filing GitHub issues, or tweeting me directly 😉

Your opinion is really appreciated!

View full post on Mozilla Hacks – the Web developer blog


Debugging Service Workers and Push with Firefox DevTools

Following the announcement of Web Push in Firefox 44, we’re now delivering the capability to develop and debug service worker and push notification code with DevTools, using Firefox Developer Edition 47.

Here’s a screencast that demonstrates the features described in this post:

Or if you prefer text, keep reading!

about:debugging

Service workers do not behave exactly like normal web workers, and their lifecycle is different, so we can’t show them alongside normal scripts in the Debugger tab of DevTools.

Instead, we’ve added a new dashboard that will collect all registered service workers and shared workers, amongst other debuggable items such as Add-ons.

Therefore, our debugging adventure starts by going to about:debugging in a new tab, and clicking on the Workers tab on the left.

about:debugging interface

Alternatively, you can access this dashboard by going to the Tools → Web Developer → Service Workers menu, or by clicking on the toolbar menu, then Developer, and finally Service Workers.

Accessing about:debugging using the application menu. Accessing about:debugging with the toolbar menu.

Dashboard instant updates

The first time we access the dashboard “nothing yet” will be displayed under the Service Workers and Shared Workers sections. These sections will be updated automatically as workers get registered. The displayed buttons will change accordingly, showing Push and Debug if the worker is running, or just a Start button if the worker is registered, but inactive.

Try it! Open about:debugging in one window, and navigate to this simple service worker demo in another window. The service worker will be registered and displayed under the Service Workers section. No need for you to reload the dashboard!

Debugging service workers

To debug a service worker, the worker must already be running. Click on the associated Debug button, or Start the worker if it’s not running yet (as long as it has been registered, and thus is in the about:debugging Dashboard).

This will pop up a new window with the code of the service worker. Here you can do all the usual debugging you would expect: setting breakpoints, step-by-step execution, inspecting variables, etc.

Service Worker debugger pop up window

Push notifications

Code that uses the Web Push API can now be debugged as well, by setting a breakpoint in the listener for the push event of the service worker. When the push notification is received, the debugger will stop at the breakpoint.

Debugger stopped at the push event listener

This is very handy, but sometimes notifications can be delayed for reasons outside of our control, or the network might be temporarily unreachable. Luckily, you can still test code that relies on push events, by pressing the Push button on the worker.

This will send a push payload, and in turn, it will trigger the push event pretty much instantly. You can reduce your development time as you won’t have to wait for the server to deliver the push.
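Inside the worker, the push listener such a breakpoint would live in typically looks something like this. A sketch only; the payload shape and the notification title are assumptions, not taken from the demo:

```javascript
// Show a notification for an incoming push message.
function handlePush(event) {
  var body = event.data ? event.data.text() : 'no payload';
  event.waitUntil(
    self.registration.showNotification('Demo push', { body: body })
  );
}

// In the worker: self.addEventListener('push', handlePush);
```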

Debugging shared workers

There’s also support for debugging shared workers. The most important difference is that they will show up in their own dedicated section in about:debugging.

Debugging requests (and cached requests)

You can also now distinguish normal network requests from requests cached by the worker. These cached requests are displayed as Service Worker in the Transferred column, instead of displaying the amount of transferred data.

Network panel showing cached requests

Requests initiated by service workers can be intercepted and debugged by setting a breakpoint on the fetch event listener.

Stopping at the fetch event

We can inspect data such as the requested URL, HTTP headers, etc., by looking at the event object in the variables list when the debugger stops at the breakpoint.

Wrap up

Hopefully, this provides a good overview of the new features we’re working on.

The reference documentation for about:debugging is on MDN. If you want to learn more about service workers, you should check out the guide to Using Service Workers, and, of course, the Service Workers cookbook, which is loaded with great demos and examples.


Offline Recipes for Service Workers

“Offline” is a big topic these days, especially as many web apps look to also function as mobile apps.  The original offline helper API, the Application Cache API (also known as “appcache”), has a host of problems, many of which can be found in Jake Archibald’s Application Cache is a Douchebag post.  Problems with appcache include:

  • Files are served from cache even when the user is online.
  • There’s no dynamism: the appcache file is simply a list of files to cache.
  • One is able to cache the .appcache file itself and that leads to update problems.
  • Other gotchas.

Today there’s a new API available to developers to ensure their web apps work properly:  the Service Worker API.  The Service Worker API allows developers to manage what does and doesn’t go into cache for offline use with JavaScript.

Introducing the Service Worker Cookbook

To introduce you to the Service Worker API we’ll be using examples from Mozilla’s new  Service Worker Cookbook!  The Cookbook is a collection of working, practical examples of service workers in use in modern web apps.  We’ll be introducing service workers within this three-part series:

  • Offline Recipes for Service Workers (today’s post)
  • At Your Service for More Than Just appcache
  • Web Push Updates to the Masses

Of course this API has advantages other than enabling offline capabilities (performance, for one), but I’d like to start by introducing basic service worker strategies for offline use.

What do we mean by offline?

Offline doesn’t just mean the user doesn’t have an internet connection — it can also mean that the user is on a flaky network connection.  Essentially “offline” means that the user doesn’t have a reliable connection, and we’ve all been there before!

Recipe:  Offline Status

The Offline Status recipe illustrates how to use a service worker to cache a known asset list and then notify the user that they may now go offline and use the app. The app itself is quite simple: show a random image when a button is clicked.  Let’s have a look at the components involved in making this happen.

The Service Worker

We’ll start by looking at the service-worker.js file to see what we’re caching. We’ll be caching the random images to display, as well as the display page and critical JavaScript resources, in a cache named dependencies-cache:


var CACHE_NAME = 'dependencies-cache';

// Files required to make this app work offline
var REQUIRED_FILES = [
  'random-1.png',
  'random-2.png',
  'random-3.png',
  'random-4.png',
  'random-5.png',
  'random-6.png',
  'style.css',
  'index.html',
  '/', // A separate URL from index.html!
  'index.js',
  'app.js'
];

The service worker’s install event will open the cache and use addAll to direct the service worker to cache our specified files:


self.addEventListener('install', function(event) {
  // Perform install step:  loading each required file into cache
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        // Add all offline dependencies to the cache
        return cache.addAll(REQUIRED_FILES);
      })
      .then(function() {
        // At this point everything has been cached
        return self.skipWaiting();
      })
  );
});

The fetch event of a service worker is fired for every single request the page makes. The fetch event also allows you to serve different content from what was actually requested. For the purposes of offline content, however, our fetch listener will be very simple: if the file is cached, return it from cache; if not, retrieve the file from the server:


self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return the response from the cached version
        if (response) {
          return response;
        }

        // Not in cache - return the result from the live server
        // `fetch` is essentially a "fallback"
        return fetch(event.request);
      }
    )
  );
});

The last part of this service-worker.js file is the activate event listener where we immediately claim the service worker so that the user doesn’t need to refresh the page to activate the service worker. The activate event fires when a previous version of a service worker (if any) has been replaced and the updated service worker takes control of the scope.


self.addEventListener('activate', function(event) {
  // Calling claim() to force a "controllerchange" event on navigator.serviceWorker
  event.waitUntil(self.clients.claim());
});

Essentially we don’t want to require the user to refresh the page for the service worker to begin — we want the service worker to activate upon initial page load.

Service worker registration

With the simple service worker created, it’s time to register the service worker:


// Register the ServiceWorker
navigator.serviceWorker.register('service-worker.js', {
  scope: '.'
}).then(function(registration) {
  // The service worker has been registered!
});

Remember that the goal of the recipe is to notify the user when required files have been cached.  To do that we’ll need to listen to the service worker’s state. When the state has become activated, we know that essential files have been cached, our app is ready to go offline, and we can notify our user:


// Listen for claiming of our ServiceWorker
navigator.serviceWorker.addEventListener('controllerchange', function(event) {
  // Listen for changes in the state of our ServiceWorker
  navigator.serviceWorker.controller.addEventListener('statechange', function() {
    // If the ServiceWorker becomes "activated", let the user know they can go offline!
    if (this.state === 'activated') {
      // Show the "You may now use offline" notification
      document.getElementById('offlineNotification').classList.remove('hidden');
    }
  });
});

Testing the registration and verifying that the app works offline simply requires using the recipe! This recipe provides a button to load a random image by changing the image’s src attribute:


// This file is required to make the "app" work offline
document.querySelector('#randomButton').addEventListener('click', function() {
  var image = document.querySelector('#logoImage');
  var currentIndex = Number(image.src.match('random-([0-9])')[1]);
  var newIndex = getRandomNumber();

  // Ensure that we receive a different image than the current
  while (newIndex === currentIndex) {
    newIndex = getRandomNumber();
  }

  image.src = 'random-' + newIndex + '.png';

  function getRandomNumber() {
    return Math.floor(Math.random() * 6) + 1;
  }
});

Changing the image’s src would trigger a network request for that image, but since we have the image cached by the service worker, there’s no need to make the network request.

This recipe covers probably the most simple of offline cases: caching required static files for offline use.

Recipe: Offline Fallback

This recipe follows another simple use case: fetch a page via AJAX but respond with another cached HTML resource (offline.html) if the request fails.

The service worker

The install step of the service worker fetches the offline.html file and places it into a cache called offline:


self.addEventListener('install', function(event) {
  // Put `offline.html` page into cache
  var offlineRequest = new Request('offline.html');
  event.waitUntil(
    fetch(offlineRequest).then(function(response) {
      return caches.open('offline').then(function(cache) {
        return cache.put(offlineRequest, response);
      });
    })
  );
});

If that request fails, the service worker won’t install, since nothing has been put into the cache.

The fetch listener listens for a request for the page and, upon failure, responds with the offline.html file we cached during the install step:


self.addEventListener('fetch', function(event) {
  // Only fall back for HTML documents.
  var request = event.request;
  // && request.headers.get('accept').includes('text/html')
  if (request.method === 'GET') {
    // `fetch()` will use the HTTP cache when possible, so this example
    // depends on a cache-busting URL parameter to avoid it.
    event.respondWith(
      fetch(request).catch(function(error) {
        // `fetch()` throws an exception when the server is unreachable but not
        // for valid HTTP responses, even `4xx` or `5xx` range.
        return caches.open('offline').then(function(cache) {
          return cache.match('offline.html');
        });
      })
    );
  }
  // Any other handlers come here. Without calls to `event.respondWith()` the
  // request will be handled without the ServiceWorker.
});

Notice we use catch to detect that the request has failed and that we should therefore respond with the cached offline.html content.

Service Worker Registration

A service worker needs to be registered only once. This example shows how to skip registration if it has already been done, by checking for the presence of the navigator.serviceWorker.controller property; if the controller doesn’t exist, we move on to registering the service worker.


if (navigator.serviceWorker.controller) {
  // A ServiceWorker controls the site on load and can therefore handle
  // offline fallbacks.
  console.log('DEBUG: serviceWorker.controller is truthy');
  debug(navigator.serviceWorker.controller.scriptURL + ' (onload)', 'controller');
}

else {
  // Register the ServiceWorker
  console.log('DEBUG: serviceWorker.controller is falsy');
  navigator.serviceWorker.register('service-worker.js', {
    scope: './'
  }).then(function(reg) {
    debug(reg.scope, 'register');
  });
}

With the service worker confirmed as registered, you can test the recipe (and trigger a new page request) by clicking the “refresh” link, which triggers a page reload with a cache-busting parameter:


// The refresh link needs a cache-busting URL parameter
document.querySelector('#refresh').search = Date.now();

Providing the user with an offline message, instead of letting the browser show its own (sometimes ugly) one, is an excellent way of keeping a dialog with the user about why the app isn’t available while they’re offline!

Go offline!

Service workers have moved offline experience and control into a powerful new space. Today you can use the Service Worker API in Chrome and Firefox Developer Edition. Many websites are already using service workers, as you can see for yourself by going to about:serviceworkers in Firefox Developer Edition; you’ll see a listing of installed service workers from websites you’ve visited!

about:serviceworkers

The Service Worker Cookbook is full of excellent, practical recipes and we continue to add more. Keep an eye out for the next post in this series, At Your Service for More than Just appcache, where you’ll learn about using the Service Worker API for more than just offline purposes.


Localize Your Node.js Service, part 1 of 3 – A Node.js holiday season, part 9

This is episode 9, out of a total of 12, in the A Node.js Holiday Season series from Mozilla’s Identity team. Now it’s time to delve into localization!

Did you know that Mozilla’s products and services are localized into as many as 90 languages?

The following are just a few examples of localization:

  • Text translated into regional language variations
  • A screen rendered right to left for a given language
  • Bulletproof designs that accommodate variable-length prose
  • Label, heading, and button text that resonates with a locale’s audience

In this series of posts, I’m going to cover some technical aspects of how to localize a Node.js service.

Before we start: I’ll be using the acronyms L10n (localization) and I18n (internationalization). I18n is the technical plumbing needed to make L10n possible.

Mozilla Persona is a Node.js-based service localized into X locales. Our team has very specific goals that prevent us from using existing Node L10n libraries.

Goals

We created these modules to meet the following goals:

  • Work well with the existing Mozilla L10n community
  • Let developers work with a pure JS toolkit

The resulting toolkit contains several new Node modules:

  • i18n-abide
  • jsxgettext
  • po2json.js
  • gobbledygook

i18n-abide is the main module you’ll use to integrate translations into your own service. Let’s walk through how to add it.

In these examples, we’ll assume your code uses Express and EJS templates.

Installation

npm install i18n-abide

Preparing your codebase

In your code

var i18n = require('i18n-abide');
 
app.use(i18n.abide({
  supported_languages: ['en-US', 'de', 'es', 'zh-TW'],
  default_lang: 'en-US',
  translation_directory: 'static/i18n'
}));

We will look at the configuration values in detail during the third installment of this L10n series.

The i18n-abide middleware sets up request processing and injects various functions which we’ll use for translation. Below, we will see that these are available in the request object and in the templating context.

Okay, your next step is to work through all of your code where you have user-visible prose.

Here is an example template file:

<html lang="<%= lang %>" dir="<%= lang_dir %>">
  <head>
    <title><%= gettext('Mozilla Persona') %></title>

The key thing abide does is inject references to the gettext function into the Node and Express framework.

Abide also provides other variables and functions, such as lang and lang_dir:

  • lang is the language code chosen based on the user’s browser and preferred language settings.
  • lang_dir is for bidirectional text support.
    It will be either ltr or rtl. English, for example, is rendered ltr, or left to right.
  • gettext is a JS function which takes an English string and returns a localized string, again based on the user’s preferred region and language.

When doing localization, we refer to strings or Gettext strings: these are pieces of prose, labels, buttons, etc. Any prose that is visible to the end user is a string.

Technically, we don’t mean JavaScript Strings, as you can have strings which are part of your program but are never shown to the user. “String” here is overloaded to mean anything that must get translated.

Here is an example JavaScript file:

app.get('/', function(req, res) {
    res.render('homepage.ejs', {
        title: req.gettext('Hello, World!')
    });
});

We can see that these variables and functions (like gettext) are placed in the req object.

So to set up our site for localization, we must look through all of our code and templates and wrap strings in calls to gettext.

Language Detection

How do we know what the user’s preferred language is?

At runtime, the middleware will detect the user’s preferred language.

The i18n-abide module looks at the Accept-Language HTTP header. This is sent by the browser and includes all of the user’s preferred languages with a preference order.

i18n-abide processes this value and compares it with your app’s supported_languages configuration. It will make the best match possible and serve up that language.

If a good match cannot be found, it will use the strings you’ve put into your code and templates, which are typically English strings.
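To make the matching concrete, here is a simplified sketch of the kind of negotiation i18n-abide performs. This is an illustration only, with assumptions: the real module also handles region fallbacks (e.g. treating es-MX as a match for es) and other quality-value edge cases.

```javascript
// Pick the best supported language from an Accept-Language header.
function bestLanguage(acceptHeader, supported, defaultLang) {
  var ranked = acceptHeader.split(',').map(function (part) {
    // Each entry looks like "de" or "de;q=0.8" (q defaults to 1.0).
    var bits = part.trim().split(';q=');
    return { lang: bits[0], q: bits[1] ? parseFloat(bits[1]) : 1.0 };
  }).sort(function (a, b) {
    return b.q - a.q; // highest user preference first
  });

  for (var i = 0; i < ranked.length; i++) {
    if (supported.indexOf(ranked[i].lang) !== -1) {
      return ranked[i].lang;
    }
  }
  return defaultLang; // no match: fall back to the default strings
}
```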

Wrapping Up

In our next post, we’ll look at how strings like “Hello, World!” are extracted, translated, and prepared for use.

In the third post, we’ll look more deeply at the middleware and configuration options. We’ll also test out our localization work.

Previous articles in the series

This was part nine in a series with a total of 12 posts about Node.js. The previous ones are:


Localize Your Node.js Service, part 1 of 3 – A Node.js holiday season, part 8

This is episode 9, out of a total 12, in the A Node.JS Holiday Season series from Mozilla’s Identity team. Now it’s time to delve into localization!

Did you know that Mozilla’s products and services are localized into into as many as 90 languages?

The following are just a few examples of localization:

  • Text translated into a regional language variations
  • A screen rendered right to left for a given language
  • Bulletproof designs that accommodate variable length prose
  • Label, heading, and button text that resonates with a locale audience

In this series of posts, I’m going to cover some technical aspects of how to localize a Node.js service.

Before we start, I’ll be using the acronyms L10n (Localization) and I18n Internationalization. I18n is the technical plumbing needed to make L10n possible.

Mozilla Persona is a Node.js based service localized into X locales. Our team has very specific goals that inhibit us from using existing Node L10n libraries.

Goals

We created these modules, to meet the following goals

  • Work well with existing Mozilla L10n community
  • Let developers work with a pure JS toolkit

The resulting toolkit contains several new Node modules:

  • i18n-abide
  • jsxgettext
  • po2json.js
  • gobbledygook

i18n-abide is the main module you’ll use to integrate translations into your own service. Let’s walk through how to add it.

In these examples, we’ll assume your code uses Express and EJS templates.

Installation

npm install i18n-abide

Preparing your codebase

In your code

var i18n = require('i18n-abide');
 
app.use(i18n.abide({
  supported_languages: ['en-US', 'de', 'es', 'zh-TW'],
  default_lang: 'en-US',
  translation_directory: 'static/i18n'
}));

We will look at the configuration values in detail during the third installment of this L10n series.

The i18n abide middleware sets up request processing and injects various functions which We’ll use for translation. Below, we will see these are available in the request object and in the templating context.

Okay, you next step is to work through all of your code where you have user visible prose.

Here is an example template file:

<html lang="<%= lang %>" dir="<%= lang_dir %>">
  <head>
    <title><%= gettext('Mozilla Persona') %></title>

The key thing abide does, is it injects into the Node and express framework references to the gettext function.

Abide also provides other variables and functions, such as lang, lang_dir.

  • lang is the language code based on the user’s browser and preferred language settings.
  • lang_dir is for bidirectional text support.
    It will be either ltr or rtl. The English language is rendered ltr or left to right.
  • gettext is a JS function which will take an English string and return a localize string, again based on the user’s preferred region and language.

When doing localization, we refer to strings or Gettext strings: these are pieces of prose, labels, button, etc. Any prose that is visible to the end user is a string.

Technically, we don’t mean JavaScript String, as you can have strings which are part of your program, but never shown to the user. String is overloaded to mean, stuff that must get translated.

Here is an example JavaScript file:

app.get('/', function(req, res) {
    res.render('homepage.ejs', {
        title: req.gettext('Hello, World!')
    });
});

We can see that these variables and functions (like gettext) are placed in the req object.

So to setup our site for localization, we must look through all of our code and templates and wrap strings in calls to gettext.

Language Detection

How do we know what the user’s preferred language is?

At runtime, the middleware will detect the user’s preferred language.

The i18n-abide module looks at the Accept-Language HTTP header. This is sent by the browser and includes all of the user’s preferred languages with a preference order.

i18n-abide processes this value and compares it with your app’s supported_languages configuration. It will make the best match possible and serve up that language.

If a good match cannot be found, it will use the strings you’ve put into your code and templates, which are typically English strings.
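The matching step can be sketched in a few lines. This is a simplified illustration in the spirit of what i18n-abide does, not its actual implementation; the function names are assumptions:

```javascript
// Parse an Accept-Language header such as "de-AT,de;q=0.8,en;q=0.5"
// into language/quality pairs, sorted by descending preference.
function parseAcceptLanguage(header) {
  return header.split(',').map(function (part) {
    var bits = part.trim().split(';q=');
    return { lang: bits[0], q: bits[1] ? parseFloat(bits[1]) : 1.0 };
  }).sort(function (a, b) { return b.q - a.q; });
}

// Pick the best supported language, falling back to the default
// (i.e. the source-language strings) when nothing matches.
function bestLanguage(header, supported, defaultLang) {
  var prefs = parseAcceptLanguage(header);
  for (var i = 0; i < prefs.length; i++) {
    if (supported.indexOf(prefs[i].lang) !== -1) return prefs[i].lang;
    // Try a base-language match, e.g. 'de-AT' -> 'de'.
    var base = prefs[i].lang.split('-')[0];
    if (supported.indexOf(base) !== -1) return base;
  }
  return defaultLang;
}
```

For example, with supported_languages of ['en-US', 'de'], a browser sending "de-AT,de;q=0.8,en;q=0.5" would be served 'de', while "fr;q=0.9" would fall through to the default.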

Wrapping Up

In our next post, we’ll look at how strings like “Hello, World!” are extracted, translated, and prepared for use.

In the third post, we’ll look more deeply at the middleware and configuration options. We’ll also test out our localization work.

Previous articles in the series

This was part nine in a series of 12 posts about Node.js. The previous ones are:

View full post on Mozilla Hacks – the Web developer blog


An open letter to Sony: your Ultraviolet Film service teaches people to trust malware

Online piracy is a terrible thing. It is illegal. It does kill jobs, it does prevent products from being released, and it does stop artists from becoming famous and making a living. This is the truth, although when you hear it from labels and the film industry, and see what is being promoted and sold, it does lose some of its credibility.

Nevertheless, online piracy is a civil matter rather than a criminal act (explanation) and should not be the norm. The best way to fight online piracy is to make it redundant. Purchasing media and being able to watch it when I want, as long as I want, and as often as I want should be dead simple. This is what happened in the past when I bought physical media in the time of CDs and vinyl and VHS tapes.

Even then the film industry made it hard for me to enjoy the things I bought. There were differences in TV formats (NTSC vs. PAL and the wonderful milky display of movies that were badly transferred) and different releases of vinyls in different countries had different tracks. Also, I was punished when I lived outside the US as I had to wait for half a year for a movie my friends on BBSes and later on in Newsgroups and IRC talked about.

I have not bought a CD in a while and I have not downloaded any pirated MP3 in years because of Spotify. I pay my monthly fee and I happily listen to as much music as I want. I download the music to play on my iPod in the gym offline and all is good. I pay, the artists get money, the labels get money, Spotify gets money and I can enjoy my stuff.

Now, on a flight lately I watched Total Recall, the remake (ironically released by “Original Film”), and I was almost ready to buy it on iTunes. As it is a cool CGI movie, I thought I’d get the HD version and – if possible – check it on my Retina MBP. Then I thought that £13.99 is a bit much, and as I want to see it next time I am in Sweden with my partner, I want to get it on the computer I take with me on travels. Google Play was out of the question as it doesn’t let me access my UK account when I am out of the country.

So today I went to the shop and saw the DVD of Total Recall for £15 so I thought, OK, let’s buy a physical DVD. I could do it ironically and be a hipster about it. My plan was to rip the DVD to my computer and watch it with my partner whilst keeping the physical thing at home as none of my laptops have drives any longer.

But, oh wonder! You thought of this and gave me the awesome “Ultra Violet” film collection option. So I could go and get a digital copy of the movie I just bought for my convenience. Amazing! I was ready to download the hell out of this MP4 you’d offer me in a simple download, and went online to get the movie.

Now, the first thing I was asked to do was to fill out a form to sign up for your library. This form didn’t understand my perfectly valid 5 digit UK postcode and told me I need a 6 digit one – how dare I have a working address? It also asked me to have a password in a certain format after I entered mine twice instead of telling me after I entered it once that this will not do in your world of security.

OK, I signed up, giving you a wrong postcode to get in and a wrong birthdate as it is none of your business when I was born.

I then got to the download page which asked me to install Silverlight. Why is this not on the DVD pack? A simple “requires Microsoft Silverlight” would have told me that there is pain ahead.

I downloaded the Silverlight linked from the Download page and installed it. I restarted my Firefox and went to the download page and was asked to install it again. What? OK, I went to Safari, logged in and the login page told me my Silverlight is the wrong version. I installed the one not linked from your “download silverlight” button and hooray, I could now install the Sony Pictures Download Manager which is a secure and trustworthy and wonderful way of downloading movies I paid for. That is if it were a verified program file. As it was my browser told me that the publisher of this file is not verified:

unverified app

Is it yours? Is it malware? Should I be concerned that you tell me as Mac user that I should double-click the icon of the download manager once it is on my Desktop which it never will be? Should I install the .app file that my operating system tells me I downloaded from the internet and could be anything?

unknown application

I did, this is how much I am happy to meet you halfway here. So I installed the download manager and started the download. And I felt the laptop giving off a warm glow when it started, seeing that your download manager sucks up 17% of this very, very beefy computer whilst downloading the movie.

activity detected

I can only imagine what watching the movie will be like.

So here is my advice: hire a few researchers to download and watch pirated movies. Learn from the way pirates distribute and make things available, and then make it easier. Today you lost me as a customer. This is the first and last movie I bought from Sony Pictures, as your interest is neither safety nor my enjoyment.

What you do right now is:

  • Make legal customers go through a broken sign-up process with strange rules
  • Make legal customers install strange software without verified publishers (with one download linking to the wrong version)
  • Slow down my computer unnecessarily with a heavy download client whilst I already have iTunes and Google Play

You know what that is? The same thing shady download locker sites do to lure people into downloading malware after entering a captcha most likely used to get into another site. Instead of making it easy for end users who just want to legally watch a movie you teach them that nothing on the web can be trusted, so we might as well install whatever promises us movies to watch. As a security conscious person, I consider this bordering on aiding the criminals you so loudly proclaim to fight.

Let me repeat: you only fight piracy by making it unnecessary. All the money you spend on building overly complex and ridiculously locked-in systems like that is what kills movies and hurts artists. Learn from the people who attack your business and you will come out a winner.

View full post on Christian Heilmann


Mobile App or a Mobile Website – Interview with Jason Summerfield, Human Service Solutions

In this twelve-minute interview with Jason Summerfield, Principal of Human Service Solutions, a web development and application development agency based in Framingham, MA, we learn his perspective on a recent article he wrote entitled “What’s the Difference Between a Mobile Website and an App?”

Specifically we learn Jason’s thoughts on:

Which is Better – an App or a Mobile Website?

According to Jason, when it comes to deciding whether to build a native app or a mobile website, the most appropriate choice really depends on your end goals. If you are developing an interactive game an app is probably going to be your best option. But if your goal is to offer mobile-friendly content to the widest possible audience then a mobile website is probably the way to go. In some cases you may decide you need both a mobile website and a mobile app, but it’s pretty safe to say that it rarely makes sense to build an app without already having a mobile website in place.

Advantages of a Mobile Website vs. Native Apps

If your goals are primarily related to marketing or public communications, a mobile website is almost always going to make sense as a practical first step in your mobile outreach strategy. This is because a mobile website has a number of inherent advantages over apps, including broader accessibility, compatibility and cost-effectiveness.

* Compatibility – Mobile Websites are Compatible Across Devices
* Upgradability – Mobile Websites Can Be Updated Instantly
* Findability – Mobile Websites Can be Found Easily
* Shareability – Mobile Websites Can be Shared Easily by Publishers, and Between Users
* Reach – Mobile Websites Have Broader Reach
* LifeCycle – Mobile Websites Can’t be Deleted
* A Mobile Website Can be an App!
* Time and Cost – Mobile Websites are Easier and Less Expensive
* Support and Sustainability
* When Does an App Make Sense?

For the complete article and more information on Jason’s organization and offerings, follow the links below.

Mobile Website vs. Mobile App: Which is best for Your Organization?

Qfuse Mobile Website Builder and QR Code Generator
Human Service Solutions Web Services
Jason Summerfield on LinkedIn

View full post on Web Professional Minute
