Compiling to WebAssembly: It’s Happening!

WebAssembly is a new binary format for compilation to the web. It is in the process of being designed and implemented as we speak, in collaboration among the major browser vendors. Things are moving quickly! In this post we’ll show some of our recent progress with a deep dive into the toolchain side of WebAssembly.

For WebAssembly to be usable, we need two major components: toolchains that compile code into WebAssembly, and browsers that can execute that output. Both of those components depend on progress in finishing the WebAssembly spec, but otherwise are largely separate engineering efforts. This separation is a good thing, as it will enable compilers to emit WebAssembly that runs in any browser, and browsers to run WebAssembly no matter which compiler generated it; in other words, it allows multiple toolchains and multiple browsers to work together, improving user choice. The separation also allows work on the two components to proceed in parallel right now.

A new project on the toolchain side of WebAssembly is Binaryen. Binaryen is a compiler infrastructure library for WebAssembly, written in C++. If you’re not working on a WebAssembly compiler yourself, you’ll probably never need to know anything about it, but if you use a WebAssembly compiler then it might use Binaryen for you under the hood; we’ll see examples of that later.

At Binaryen’s core is a modular set of classes that can parse and emit WebAssembly, as well as represent it in an AST designed for writing flexible transformation passes on. Built on top of that are several useful tools:

  • The Binaryen shell, which can load a WebAssembly module, transform it, execute it in an interpreter, print it, etc. Loading and printing use WebAssembly’s current temporary s-expression format, which has the suffix .wast (work is underway on designing the WebAssembly binary format, as well as the final text format, but they aren’t ready yet).
  • asm2wasm, which compiles asm.js into WebAssembly.
  • wasm2asm, which compiles WebAssembly into asm.js. (This is a work in progress.)
  • s2wasm, which compiles .s files, in the format emitted by the new WebAssembly backend being developed in LLVM, to WebAssembly.
  • wasm.js, a port of Binaryen itself to JavaScript. This lets us run all of the above components on a web page or in any other JavaScript environment.

For a general overview of Binaryen, you can see these slides from a talk I recently gave. Don’t skip slide #9 🙂

It’s important to note that WebAssembly is still in the design phase, and the formats that Binaryen can read and write (.wast, .s) are not final. Binaryen is constantly updated to track those changes; the rate of churn is decreasing, but expect breakage.

Let’s discuss some of the specific areas where Binaryen can be helpful.

Compiling to WebAssembly using Emscripten

Emscripten can compile C and C++ to asm.js, and Binaryen’s asm2wasm tool can compile asm.js to WebAssembly, so together Emscripten+Binaryen provide a complete way to compile C and C++ to WebAssembly. You can run asm2wasm on asm.js code directly (it can be run on the commandline), but it’s easiest to let Emscripten do it for you, using something like

emcc file.cpp -o file.js -s 'BINARYEN="path-to-binaryen"'

Emscripten will compile file.cpp, and emit a main JavaScript file and a separate file for the WebAssembly output, in .wast format. Under the hood, Emscripten compiles to asm.js, then runs asm2wasm on the asm.js file to produce the .wast file. For more details, see the Emscripten wiki page on WebAssembly.

But wait, what good is it to compile to WebAssembly when browsers don’t support it yet? Good question 🙂 It’s true that we don’t want to ship this code yet, since browsers can’t run it. But it is still very useful for testing purposes: we want to know as soon as possible that Emscripten can compile properly to WebAssembly, rather than wait on browser support.

But how can we check that Emscripten is in fact compiling properly to WebAssembly, if we can’t run it? For that, we can use wasm.js, which Emscripten integrated into our output .js file when we ran that emcc command before. wasm.js contains portions of Binaryen compiled to JavaScript, including the Binaryen interpreter. If you run file.js (in node.js, or on a web page), the interpreter executes that WebAssembly, which lets us verify that the compiled code does the right thing. You can see an example of such a compiled program here, and there are some more builds for testing purposes in the build suite repo.
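
To make that concrete, here is a minimal sketch of loading such a build on a web page, assuming Emscripten’s usual convention of a global Module object defined before the generated script runs (file.js being the output of the emcc command above):

// Sketch: define Module before file.js loads so we can capture output
// (Module.print is Emscripten's standard stdout hook).
var Module = {
  print: function (text) { console.log('stdout: ' + text); }
};
// Load the generated script; the wasm.js code inside it interprets
// the WebAssembly and runs the program's main().
var script = document.createElement('script');
script.src = 'file.js';
document.body.appendChild(script);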

Of course, we are not quite on as solid ground as we would like, given this weird testing environment: a C++ program compiled to WebAssembly, running in a WebAssembly interpreter itself compiled from C++ to JavaScript, and no other way to run the program yet. But we have a few reasons to be confident in the results:

  • This output passes the Emscripten test suite. That includes many real-world codebases (Python, zlib, SQLite, etc.) as well as lots of unit tests for corner cases in C and C++. Experience has shown that when that test suite is passed, it’s very likely that other code will work too.
  • The Binaryen interpreter passes the WebAssembly spec test suite, indicating that it is running WebAssembly properly. In other words, when browsers get native support, they should run it in the same way (except much faster! This code is running in a simple interpreter for testing purposes, so it’s very slow; but note that there is work in progress on fast ways to polyfill).
  • This output was generated using Emscripten, which is a stable compiler used in production, and a relatively small amount of code on top of that in Binaryen (just a few thousand lines). The less new code, the less risk of bugs.

Overall, this indicates that we are in good shape here, and can compile C and C++ to WebAssembly today using Emscripten + Binaryen, even if browsers can’t run it yet.

Note that aside from emitting WebAssembly, the builds that we emit in this mode use everything else from the Emscripten toolchain normally: Emscripten’s port of the musl libc and syscalls to access it, OpenGL/WebGL code, browser integration code, node.js integration code, and so forth. As a result, this supports everything Emscripten already does, and existing projects using Emscripten can switch to emitting WebAssembly with just the flip of a switch. This is a key part of letting existing C++ projects that compile to the web benefit from WebAssembly when it launches, with little or no effort on their part.

Using the new experimental LLVM WebAssembly backend with Emscripten

We just saw an important milestone for Emscripten, in that it can compile to WebAssembly and even test that we get valid output. But things don’t stop there: that was using Emscripten’s current asm.js compiler backend, together with asm2wasm. There is a new LLVM backend for WebAssembly in development directly in the upstream LLVM repository, and while it isn’t ready for general use yet, in the long term it will be very important. Binaryen has support for that too.

The LLVM backend, like most LLVM backends, emits assembly code, in this case in a specific .s format. That output is close to WebAssembly, but not identical – it looks more like the output of a C compiler (linear list of instructions, one instruction per line, etc.) rather than WebAssembly’s more structured AST. The .s file can be translated into WebAssembly in a fairly straightforward way, though, and Binaryen includes s2wasm, a tool that translates .s to WebAssembly. It can be run standalone on the commandline, but also has Emscripten integration support: Emscripten now has a WASM_BACKEND option, which you can use like this:

emcc file.cpp -o file.js -s 'BINARYEN="path-to-binaryen"' -s WASM_BACKEND=1

(Note that you also need the BINARYEN option, as s2wasm is part of Binaryen.) When that option is provided, Emscripten uses the new WebAssembly backend instead of the existing asm.js one. After calling the backend and receiving .s from it, Emscripten calls s2wasm to convert that to WebAssembly. Some examples of programs you can build with the new backend are on the Emscripten wiki.

There are, therefore, two ways to compile to WebAssembly using Binaryen: Emscripten + asm.js backend + asm2wasm, which works right now and should be fairly robust and reliable, and Emscripten + new WebAssembly backend + s2wasm, which is not yet fully functional, but as the WebAssembly backend matures it should become a powerful option, and hopefully will replace the asm.js backend in the future. The goal is to make that transition seamless: flipping between the two WebAssembly modes is just a matter of setting an option, as we saw.

The same is true of switching between asm.js and WebAssembly output in Emscripten: that too is just an option you set, and the transition should be seamless as well. In other words, there will be a straight and simple path from

  • using Emscripten to emit asm.js today, to
  • using it to emit WebAssembly using asm2wasm (possible today, but browsers can’t run it yet), to
  • using it to emit WebAssembly using the new LLVM backend (once the backend is ready).

Each step should provide substantial benefits, with no extra effort for developers.

In closing, note that while this post focused on using Binaryen with Emscripten, at its core Binaryen is designed to be a general-purpose WebAssembly library in C++: if you want to write something toolchain-related for WebAssembly, you probably need code to read and print WebAssembly, an AST to operate on, and so forth, all of which Binaryen provides. It was very useful in writing asm2wasm, s2wasm, etc., and hopefully other projects will find it useful as well.


It’s a wrap! “App Basics for FirefoxOS” is out and ready to get you started

A week ago we announced a series of video tutorials about creating HTML5 apps for Firefox OS. Now we have released all the videos, and you can watch the series in one go.

[Photo by Olliver Hallmann]

The series is aimed at web developers who want to build their first HTML5 application. Specifically, it is meant to be distributed in emerging markets, where Firefox OS is often the first affordable smartphone option and a chance to start selling apps to the audiences there.

Over the last week, we released the videos of the series, one each day.

Yesterday we announced the last video in the series. For all of you who asked for the whole series to watch in one go, you now got the chance to do so.

There are various resources you can use:

What’s next?

There will be more videos on similar topics coming in the future, and we are busy getting the current videos dubbed into other languages. If you want to help us get the word out, check the embedded versions of the videos on Codefirefox.com, where we use Amara to allow for subtitles.

Speaking of subtitles and transcripts, we are currently considering both, depending on demand. If you think this would be a very useful thing to have, please tell us in the comments.

Thanks

Many thanks to Sergi, Jan, Jakob, Ketil, Nathalie and Anne from Telenor, Brian Bondy from Khan Academy, and Paul Jarrat and Chris Heilmann of Mozilla for making all of this possible. Technologies used to make this happen were Screenflow, Amazon S3, Vid.ly by encoding.com and YouTube.


Detecting touch: it’s the ‘why’, not the ‘how’

One common aspect of making a website or application “mobile friendly” is the inclusion of tweaks, additional functionality or interface elements that are particularly aimed at touchscreens. A very common question from developers is now “How can I detect a touch-capable device?”

Feature detection for touch

Although there were a few incompatibilities and proprietary solutions in the past (such as Mozilla’s experimental, vendor-prefixed event model), almost all browsers now implement the same Touch Events model (based on a solution first introduced by Apple for iOS Safari, which was subsequently adopted by other browsers and retrospectively turned into a W3C draft specification).

As a result, being able to programmatically detect whether or not a particular browser supports touch interactions involves a very simple feature detection:

if ('ontouchstart' in window) {
  /* browser with Touch Events
     running on touch-capable device */
}

This snippet works reliably in modern browsers, but older versions notoriously had a few quirks and inconsistencies which required jumping through various detection-strategy hoops. If your application is targeting these older browsers, I’d recommend having a look at Modernizr – and in particular its various touch test approaches – which smooths over most of these issues.
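
For illustration, here is a minimal sketch assuming Modernizr is loaded with its touch detection enabled (exposed at the time as the Modernizr.touch property):

// Sketch, assuming Modernizr with its touch test enabled;
// Modernizr.touch mirrors the 'ontouchstart' in window check above.
if (Modernizr.touch) {
  /* browser with Touch Events
     running on touch-capable device */
}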

I noted above that “almost all browsers” support this touch event model. The big exception here is Internet Explorer. While up to IE9 there was no support for any low-level touch interaction, IE10 introduced support for Microsoft’s own Pointer Events. This event model – which has since been submitted for W3C standardisation – unifies “pointer” devices (mouse, stylus, touch, etc) under a single new class of events. As this model does not, by design, include any separate ‘touch’ events, the feature detection for ontouchstart will naturally not work. The suggested method of detecting whether a browser using Pointer Events is running on a touch-enabled device instead involves checking for the existence and return value of navigator.maxTouchPoints (note that Microsoft’s Pointer Events are currently still vendor-prefixed, so in practice we’ll be looking for navigator.msMaxTouchPoints). If the property exists and returns a value greater than 0, we have touch support.

if (navigator.msMaxTouchPoints > 0) {
  /* IE with pointer events running
     on touch-capable device */
}

Adding this to our previous feature detect – and also including the non-vendor-prefixed version of the Pointer Events one for future compatibility – we get a still reasonably compact code snippet:

if (('ontouchstart' in window) ||
     (navigator.maxTouchPoints > 0) ||
     (navigator.msMaxTouchPoints > 0)) {
      /* browser with either Touch Events or Pointer Events
         running on touch-capable device */
}

How touch detection is used

Now, there are already quite a few commonly-used techniques for “touch optimisation” which take advantage of these sorts of feature detects. The most common use case for detecting touch is to increase the responsiveness of an interface for touch users.

When using a touchscreen interface, browsers introduce an artificial delay (in the range of about 300ms) between a touch action – such as tapping a link or a button – and the moment the actual click event is fired.

More specifically, in browsers that support Touch Events the delay happens between touchend and the simulated mouse events that these browsers also fire for compatibility with mouse-centric scripts:

touchstart > [touchmove]+ > touchend > delay > mousemove > mousedown > mouseup > click

See the event listener test page to see the order in which events are being fired, code available on GitHub.

This delay has been introduced to allow users to double-tap (for instance, to zoom in/out of a page) without accidentally activating any page elements.

It’s interesting to note that Firefox and Chrome on Android have removed this delay for pages with a fixed, non-zoomable viewport.

<meta name="viewport" content="... user-scalable = no ...">

See the event listener with user-scalable=no test page, code available on GitHub.

There is some discussion of tweaking Chrome’s behavior further for other situations – see issue 169642 in the Chromium bug tracker.

Although this affordance is clearly necessary, it can make a web app feel slightly laggy and unresponsive. One common trick has been to check for touch support and, if present, react directly to a touch event (either touchstart – as soon as the user touches the screen – or touchend – after the user has lifted their finger) instead of the traditional click:

/* if touch supported, listen to 'touchend', otherwise 'click' */
var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');
blah.addEventListener(clickEvent, function() { ... });

Although this type of optimisation is now widely used, it is based on a logical fallacy which is now starting to become more apparent.

The artificial delay is also present in browsers that use Pointer Events.

pointerover > mouseover > pointerdown > mousedown > pointermove > mousemove > pointerup > mouseup > pointerout > mouseout > delay > click

Although it’s possible to extend the above optimisation approach to check navigator.maxTouchPoints and to then hook up our listener to pointerup rather than click, there is a much simpler way: setting the touch-action CSS property of our element to none eliminates the delay.

/* suppress default touch action like double-tap zoom */
a, button {
  -ms-touch-action: none;
      touch-action: none;
}

See the event listener with touch-action:none test page, code available on GitHub.

False assumptions

It’s important to note that these types of optimisations based on the availability of touch have a fundamental flaw: they make assumptions about user behavior based on device capabilities. More explicitly, the example above assumes that because a device is capable of touch input, a user will in fact use touch as the only way to interact with it.

This assumption probably held some truth a few years back, when the only devices that featured touch input were the classic “mobile” and “tablet”. Here, touchscreens were the only input method available. In recent months, though, we’ve seen a whole new class of devices which feature both a traditional laptop/desktop form factor (including a mouse, trackpad, keyboard) and a touchscreen, such as the various Windows 8 machines or Google’s Chromebook Pixel.

As an aside, even in the case of mobile phones or tablets, it was already possible – on some platforms – for users to add further input devices. While iOS only caters for pairing an additional bluetooth keyboard to an iPhone/iPad purely for text input, Android and Blackberry OS also let users add a mouse.

On Android, this mouse will act exactly like a “touch”, even firing the same sequence of touch events and simulated mouse events, including the dreaded delay in between – so optimisations like our example above will still work fine. Blackberry OS, however, purely fires mouse events, leading to the same sort of problem outlined below.

The implications of this change are slowly beginning to dawn on developers: that touch support does not necessarily mean “mobile” anymore, and more importantly that even if touch is available, it may not be the primary or exclusive input method that a user chooses. In fact, a user may even transition between any of their available input methods in the course of their interaction.

The innocent code snippets above can have quite annoying consequences on this new class of devices. In browsers that use Touch Events:

var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');

is basically saying “if the device supports touch, only listen to touchend and not click” – which, on a multi-input device, immediately shuts out any interaction via mouse, trackpad or keyboard.

Touch or mouse?

So what’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities. Specifically, instead of making the decision about whether to react to click or touchend/touchstart mutually exclusive, these should all be taken into consideration as complementary.

Certainly, this may involve a bit more code, but the end result will be that our application will work for the largest number of users. One approach, already familiar to developers who’ve strived to make their mouse-specific interfaces also work for keyboard users, would be to simply “double up” your event listeners (while taking care to prevent the functionality from firing twice by stopping the simulated mouse events that are fired following the touch events):

blah.addEventListener('touchend', function(e) {
  /* prevent delay and simulated mouse events */
  e.preventDefault();
  someFunction();
});
/* pass the function reference – invoking it here would register
   its return value, not the function itself */
blah.addEventListener('click', someFunction);

If this isn’t DRY enough for you, there are of course fancier approaches, such as only defining your functions for click and then bypassing the dreaded delay by explicitly firing that handler:

blah.addEventListener('touchend', function(e) {
  /* prevent delay and simulated mouse events */
  e.preventDefault();
  /* trigger the actual behavior we bound to the 'click' event */
  e.target.click();
});
blah.addEventListener('click', function() {
  /* actual functionality */
});

That last snippet does not cover all possible scenarios though. For a more robust implementation of the same principle, see the FastClick script from FT labs.

Being input-agnostic

Of course, battling with the delay on touch devices is not the only reason why developers want to check for touch capabilities. Current discussions – such as this issue in Modernizr about detecting a mouse user – now revolve around offering completely different interfaces to touch users, compared to mouse or keyboard, and whether or not a particular browser/device supports things like hovering. And even beyond JavaScript, similar concepts (pointer and hover media features) are being proposed for Media Queries Level 4. But the principle is still the same: as there are now common multi-input devices, it is no longer straightforward (and in many cases, impossible) to determine whether a user is on a device that exclusively supports touch.
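
As a rough illustration of where this is heading, here is a sketch that queries the proposed pointer media feature via matchMedia – note that '(pointer: coarse)' is still a Media Queries Level 4 proposal, so browser support is not guaranteed:

// Sketch only: in browsers without support for the proposed feature,
// the query simply doesn't match and we fall through harmlessly.
if (window.matchMedia && window.matchMedia('(pointer: coarse)').matches) {
  /* the primary pointer is imprecise (likely a touchscreen):
     enlarge hit targets, but keep mouse/keyboard paths working */
}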

The more generic approach taken in Microsoft’s Pointer Events specification – which is already being scheduled for implementation in other browsers such as Chrome – is a step in the right direction (though it still requires extra handling for keyboard users). In the meantime, developers should be careful not to draw the wrong conclusions from touch support detection and avoid unwittingly locking out a growing number of potential multi-input users.

Further links


Hello Chrome, it’s Firefox calling!

Mozilla is excited to announce that we’ve achieved a major milestone in WebRTC development: WebRTC RTCPeerConnection interoperability between Firefox and Chrome. This effort was made possible because of the close collaboration between the open Web community and engineers from both Mozilla and Google.

RTCPeerConnection (also known simply as PeerConnection or PC) interoperability means that developers can now create Firefox WebRTC applications that make direct audio/video calls to Chrome WebRTC applications without having to install a third-party plugin. Because the functionality is now baked into the browser, users can avoid problems with first-time installs and buggy plugins, and developers can deploy their apps much more easily and universally.

To help celebrate this momentous milestone, we thought it would be fun to call up our friends at Google to discuss it with them. Check out this Firefox-Chrome demonstration call between Mozilla’s Chief Innovation Officer, Todd Simpson, and Google’s Director of Product Management, Hugh Finnan, and read what Google had to say about this momentous occasion in their blog post.

This milestone builds on an earlier demo we showed late last year of WebRTC integrated with Social API. There we demonstrated an industry first with our implementation of DataChannels, a powerful component of WebRTC that can be combined with an audio/video chat to allow users to share almost anything on their computer or device. Send vacation photos, memorable videos, links, news stories, etc., simply by dragging the item into your video chat window. Look out for more on this to come.

The purpose of WebRTC, an open standard being defined jointly at the W3C and IETF standards organizations, is to provide a common platform for all user devices to communicate and share audio, video and data in real-time. This is a first step toward that vision of interoperability and true, open, real-time communication on the web.

Posted by:
Serge Lachapelle, Chrome Product Manager and Maire Reavy, Firefox Media Product Lead

Start Developing Using RTCPeerConnection in Firefox

For JavaScript developers who haven’t tried RTCPeerConnection in Firefox yet (since it is a brand new feature for us), you can try this out using the most recent Firefox Nightly by setting the media.peerconnection.enabled pref to “true” (browse to about:config and search for the media.peerconnection.enabled pref in the list of prefs). Here is a snippet of code from a sample app that shows off how to initiate, accept, and end a WebRTC call in Firefox using RTCPeerConnection:

function initiateCall(user) {
  document.getElementById("main").style.display = "none";
  document.getElementById("call").style.display = "block";
 
  // Here's where you ask user permission to access the camera and microphone streams
  navigator.mozGetUserMedia({video:true, audio:true}, function(stream) {
    document.getElementById("localvideo").mozSrcObject = stream;
    document.getElementById("localvideo").play();
    document.getElementById("localvideo").muted = true;
 
    // Here's where you set up a Firefox PeerConnection
    var pc = new mozRTCPeerConnection();
    pc.addStream(stream);
 
    pc.onaddstream = function(obj) {
      log("Got onaddstream of type " + obj.type);
      document.getElementById("remotevideo").mozSrcObject = obj.stream;
      document.getElementById("remotevideo").play();
      document.getElementById("dialing").style.display = "none";
      document.getElementById("hangup").style.display = "block";
    };
 
    pc.createOffer(function(offer) {
      log("Created offer" + JSON.stringify(offer));
      pc.setLocalDescription(offer, function() {
        // Send offer to remote end.
        log("setLocalDescription, sending to remote");
        peerc = pc;
        jQuery.post(
          "offer", {
            to: user,
            from: document.getElementById("user").innerHTML,
            offer: JSON.stringify(offer)
          },
          function() { console.log("Offer sent!"); }
        ).error(error);
      }, error);
    }, error);
  }, error);
}
 
function acceptCall(offer) {
  log("Incoming call with offer " + offer);
  document.getElementById("main").style.display = "none";
  document.getElementById("call").style.display = "block";
 
  // Here's where you ask user permission to access the camera and microphone streams
  navigator.mozGetUserMedia({video:true, audio:true}, function(stream) {
    document.getElementById("localvideo").mozSrcObject = stream;
    document.getElementById("localvideo").play();
    document.getElementById("localvideo").muted = true;
 
    // Here's where you set up a Firefox PeerConnection
    var pc = new mozRTCPeerConnection();
    pc.addStream(stream);
 
    pc.onaddstream = function(obj) {
      document.getElementById("remotevideo").mozSrcObject = obj.stream;
      document.getElementById("remotevideo").play();
      document.getElementById("dialing").style.display = "none";
      document.getElementById("hangup").style.display = "block";
    };
 
    pc.setRemoteDescription(JSON.parse(offer.offer), function() {
      log("setRemoteDescription, creating answer");
      pc.createAnswer(function(answer) {
        pc.setLocalDescription(answer, function() {
          // Send answer to remote end.
          log("created Answer and setLocalDescription " + JSON.stringify(answer));
          peerc = pc;
          jQuery.post(
            "answer", {
              to: offer.from,
              from: offer.to,
              answer: JSON.stringify(answer)
            },
            function() { console.log("Answer sent!"); }
          ).error(error);
        }, error);
      }, error);
    }, error);
  }, error);
}
 
function endCall() {
  log("Ending call");
  document.getElementById("call").style.display = "none";
  document.getElementById("main").style.display = "block";
 
  document.getElementById("localvideo").mozSrcObject.stop();
  document.getElementById("localvideo").mozSrcObject = null;
  document.getElementById("remotevideo").mozSrcObject = null;
 
  peerc.close();
  peerc = null;
}

You’ll notice that Firefox still prefixes the RTCPeerConnection API call as mozRTCPeerConnection because the standards committee is not yet done defining it. Chrome prefixes it as webkitRTCPeerConnection. Once the standards committee finishes its work, we will remove the prefixes and use the same API, but in the meantime, you’ll want to support both prefixes so that your app works in both browsers.
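
Until then, a small shim keeps code portable across both browsers – a sketch assuming only the two prefixed names mentioned above (and the matching getUserMedia prefixes):

// Sketch: pick whichever prefixed constructor the browser provides.
var PeerConnection = window.mozRTCPeerConnection ||
                     window.webkitRTCPeerConnection;
var getUserMedia = navigator.mozGetUserMedia ||
                   navigator.webkitGetUserMedia;

var pc = new PeerConnection(null); // null config for simplicity
getUserMedia.call(navigator, {video: true, audio: true},
  function(stream) { pc.addStream(stream); },
  function(err) { console.log('getUserMedia failed: ' + err); });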

Trying Interop Yourself

For those eager to give interop a try, here are instructions and information about “trying this at home”.

This is Firefox’s and Chrome’s first version of PeerConnection interoperability. As with most early releases, there are still bugs to fix, and interop isn’t supported yet in every network environment. But this is a major step forward for this new web feature and for the Web itself. We thank the standards groups and every contributor to the WebRTC community. While there’s more work to do, we hope you’ll agree that the Web is about to get a lot more awesome.


Firefox OS App Days: It’s a Wrap!

Over the last few weeks, Mozilla sponsored a worldwide series of hack days for developers to learn about creating apps for Firefox OS. Dubbed “Firefox OS App Days,” the events took place in more than 25 locations around the world – across Africa, Asia, Europe, New Zealand, and North and South America – starting on 19 January in Mountain View, California and ending on 2 February in Berlin, Germany. The events were organized with the support of our Mozilla Reps, the Mozilla community and Firefox OS partners Telefonica and Deutsche Telekom.

[Photo: Hacking at a Firefox OS App Day]

2,500 New Developers for Firefox OS

Our goals were to educate developers around the world about Firefox OS and open web apps and inspire them to start building apps for the Firefox Marketplace.

We engaged with over 2,500 developers worldwide. Hundreds of apps were demonstrated at the events, and many of them have already been submitted to the Marketplace. Some of the apps developed include:

  • Bessa – An image editor for Firefox OS, demonstrated in Berlin
  • Web Sliding Puzzle – a sliding puzzle game made from the Firefox OS App Days logo, demonstrated in Paris by Mathieu Pillard
  • Ash’s Rising – a strategy game, demonstrated in Toronto
  • Travel Saver – a local travel app, demonstrated in Warsaw
  • FoxKehCalc – a Fox-themed calculator, demonstrated in Tokyo

In addition to apps, we saw over two million impressions of the #firefoxosappdays hashtag on Twitter, and hundreds of photos from the events were posted on Flickr, Facebook and other social media sites.

[Image: Sample of apps developed at Firefox OS App Days]

Going Forward

Thanks to everyone who participated in the App Days, and if you haven’t submitted your app to the Marketplace yet, please do so as soon as you can. If you have a website or github repo hosting your app or a post about your App Day experience, please add your links to the comments below. We’d love to hear from you and check out your apps in progress. If you missed the events, or there wasn’t one in your area, stay tuned — our Mozilla Reps team plans to enable more in the near future.

And if you are just hearing about Firefox OS, and want to get started developing apps on your own, the Developer Hub, the Hacks Blog and the Mozilla Developer Network are excellent places to start. To stay in touch with upcoming App Days, developer phone releases, and app development news, subscribe to our monthly Firefox Apps & Hacks newsletter.


It’s Opus, it rocks and now it’s an audio codec standard!

In a great victory for open standards, the Internet Engineering Task Force (IETF) has just standardized Opus as RFC 6716.

Opus is the first state-of-the-art, free audio codec to be standardized. We think this will help us achieve wider adoption than prior royalty-free codecs like Speex and Vorbis. This spells the beginning of the end for proprietary formats, and we are now working on doing the same thing for video.

There was both skepticism and outright opposition to this work when it was first proposed in the IETF over 3 years ago. However, the results have shown that we can create a better codec through collaboration, rather than competition between patented technologies. Open standards benefit both open source organizations and proprietary companies, and we have been successful working together to create one. Opus is the result of a collaboration between many organizations, including the IETF, Mozilla, Microsoft (through Skype), Xiph.Org, Octasic, Broadcom, and Google.

A highly flexible codec

Unlike previous audio codecs, which have typically focused on a narrow set of applications (either voice or music, in a narrow range of bitrates, for either real-time or storage applications), Opus is highly flexible. It can adaptively switch among:

  • Bitrates from 6 kb/s to 512 kb/s
  • Voice and music
  • Mono and stereo
  • Narrowband (8 kHz) to Fullband (48 kHz)
  • Frame sizes from 2.5 ms to 60 ms

Most importantly, it can adapt seamlessly within these operating points. Doing all of this with proprietary codecs would require at least six different codecs. Opus replaces all of them, with better quality.

[Illustration: the quality of different codecs]

The specification is available in RFC 6716, which includes the reference implementation. Up-to-date software releases are also available.

Some audio standards define a normative encoder, which cannot be improved after it is standardized. Others allow for flexibility in the encoder, but release an intentionally hobbled reference implementation to force you to license their proprietary encoders. For Opus, we chose to allow flexibility for future encoders, but we also made the best one we knew how and released that as the reference implementation, so everyone could use it. We will continue to improve it, and keep releasing those improvements as open source.

Use cases

Opus is primarily designed for use in interactive applications on the Internet, including voice over IP (VoIP), teleconferencing, in-game chatting, and even live, distributed music performances. The IETF recently decided with “strong consensus” to adopt Opus as a mandatory-to-implement (MTI) codec for WebRTC, an upcoming standard for real-time communication on the web. Despite the focus on low latency, Opus also excels at streaming and storage applications, beating existing high-delay codecs like Vorbis and HE-AAC. It’s great for internet radio, adaptive streaming, game sound effects, and much more.

Although Opus is just out, it is already supported in many applications, such as Firefox, GStreamer, FFmpeg, foobar2000, K-Lite Codec Pack, and LAV Filters, with upcoming support in VLC, Rockbox and Mumble.
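
If you want to check for Opus support from a web page, here is a minimal sketch using the standard canPlayType test (the file name music.opus is hypothetical):

// Sketch: feature-test Opus playback in an audio element.
var audio = document.createElement('audio');
if (audio.canPlayType &&
    audio.canPlayType('audio/ogg; codecs="opus"') !== '') {
  audio.src = 'music.opus'; // hypothetical file
  audio.play();
}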

For more information, visit the Opus website.


It’s time: MDN relaunch on Kuma wiki on August 3

That’s right! We’re finally ready to throw the switch! Tomorrow (that is, Friday, August 3, 2012) we intend to switch from the current MindTouch-based wiki to our new Kuma platform for the Mozilla Developer Network wiki. The changeover should happen at about 10:00 AM Pacific Daylight Time.

At that time, there should be, at most, a few moments of downtime, then the site should be running on the new system.

A few things you might need to know:

  1. There’s lots more stuff we’re planning to do to make Kuma even better than it already is. You may even notice some stuff that isn’t done yet. However, weeks and weeks of testing have told us that it works very well, so we decided it was time to go ahead and launch.
  2. We have updated documentation for using the wiki that you might like to look over, as well as an updated Editor guide.
  3. Things are different! You will run into stuff that doesn’t work the way you’re used to. It should look pretty familiar, by and large, though, and most people won’t notice the changes unless they look closely.
  4. There’s a big “Report a bug” button at the top-right corner of the window. Please use it! Any time you have a problem, concern, see something that looks wrong, or have an idea for a brilliant way to improve the system, click it and follow the handy wizard that will help you file your bug. We want the MDN wiki to rock, and you can help make it so.

We will be sharing additional information about where we are and where things are going over the next week or two, and, of course, for the foreseeable future as we continue development. Indeed, the MDN development team is getting together for a week of meetings next week to hash out processes and sort out priorities for what to work on next.


Google To Webmasters: It’s No Longer 1996, Stop Using Marquees

For those of you looking for some light web development humor, you may want to check out this Google Webmaster Help thread, where Googler JohnMu suggested that a webmaster avoid using marquees. Marquees? Honestly, I forgot what they were and had to do some searches to remember. An HTML marquee basically lets a text box scroll left or right, up or down, or in other ways. It is very old fashioned …
