Porting Chrome Extensions to Firefox with WebExtensions

After reading last month’s “Let’s Write a Web Extension,” I was inspired to try and port a real-world add-on to a WebExtension. Specifically, I tried to port the Chrome version of the popular, open-source “Reddit Enhancement Suite” (RES) to Firefox. Here’s what I learned, and what you can do today to prepare your own add-ons for the transition.

Note: The authors of RES are excited about WebExtensions and plan to officially port their add-on, but this is not that. If you want to use RES, you should install the supported version from AMO.

First, I want to stress that WebExtensions are a long-term, multi-year project. Our first releases will be focused on building a foundation of basic, well-supported, cross-browser APIs. This means that it may take a while before we’re ready to support complex add-ons that rely on unique browser features, but we’ll get there eventually.

Because everything here is still very early and experimental, you’ll need to use a Nightly build of Firefox if you want to follow along. This is a sneak peek, not something you should plan on deploying.

That said, if you have a Chrome extension or a cross-browser add-on, now is a great time to experiment with WebExtensions and provide feedback. Your input will be crucial in helping Mozilla figure out which APIs to prioritize and initially support.

Preparing to Port

  1. Download and install a Nightly build of Firefox.
  2. Create a new profile for testing and development.
  3. Visit about:config and set xpinstall.signatures.required to false.

Declaring Firefox Compatibility

You must explicitly mark your add-on as compatible with Firefox by adding an applications key to your manifest.json. It looks like this:

"applications": {
  "gecko": {
    "id": "YOUR_ADDON_ID"

Set "YOUR_ADDON_ID" to a made-up string formatted like "ext@example.org". If you plan on directly upgrading your users from an existing Firefox add-on to a WebExtension version of the same, you should re-use the value found in the "id" field of your package.json.

Checking Manifest Support

The next step is to compare the keys in your manifest.json to the ones that Firefox supports. Unsupported keys are ignored, so you can leave them in your manifest until we get around to implementing them, at which point they should Just Work.

Looking at Reddit Enhancement Suite’s manifest, we’re in pretty good shape. The metadata attributes are all implemented, and there’s sufficient support for background, content_scripts, and web_accessible_resources to work with RES.

Let’s look at what’s missing, and what impact it has:

  1. options_page: We’re OK without this since RES also injects a link to its settings via content_scripts, rather than solely relying on the options_page property.
  2. page_action: We’re OK here, too. RES only uses the page action as a shortcut for toggling a checkbox that it injects into pages via content_scripts.
  3. permissions: All of the permissions that RES requests are supported except for history, which hasn’t been implemented yet. RES only uses the history API to mark links as visited when previewing images inline from an “expando” button. Missing this means a slight degradation in functionality, but nothing catastrophic.
  4. optional_permissions: We don’t yet support optional permissions, which for RES means we won’t support embedding inline previews from Twitter or OneDrive via expando buttons. Unfortunate, but not a showstopper.

At this point, I’m feeling pretty good about our prospects. Most of the APIs we need are supported, and we should be able to deliver most of RES’s functionality despite the handful of missing APIs.

To Bugzilla!

Since we’ve identified some gaps in Firefox’s API coverage relative to our needs, it’s time to head to Bugzilla. Filing and voting for bugs are two of the most important contributions you can make as an add-on developer. In addition to keeping you informed of progress, it helps us judge which APIs are the most important to implement.

Note: Bugzilla has a somewhat esoteric search syntax. To look for all open and closed WebExtension bugs that mentioned the history API, try searching for ALL Component:WebExtensions #history, which should turn up Bug 1208334: “Implement history API for open extension API.”

Since I’m writing this article, I’ve gone ahead and made sure bugs were filed for the above APIs. Feel free to CC yourself on these bugs if you want to be notified of their progress, or click the little “vote” link next to the “Importance” field if the bug is particularly important to you.

  • Bug 1212684: Implement options_page manifest property for open extension API
  • Bug 1197422: Implement pageAction API for open extension API
  • Bug 1208334: Implement history API for open extension API
  • Bug 1197420: Implement permissions API and optional_permissions manifest property for open extension API

If you need to file a WebExtension bug, please file it against the “WebExtensions” component in the “Toolkit” product, and tag it with the “dev-doc-needed” keyword. This link should pre-fill all the right fields: File a WebExtension Bug.

Grepping the Code

In addition to manifest properties, we also need to ensure that Firefox actually supports the APIs we need. We’ve set up a visual dashboard of API progress at AreWeWebExtensionsYet.com, but for specifics you have to go to MDN. Since Chrome’s extension APIs are exposed as properties on a global chrome object, we can run grep to find out what we use:

$ grep -r 'chrome\.' ./Chrome ./lib
./Chrome/background.js: chrome.tabs.sendMessage(event.id, { requestType: 'subredditStyle', action: 'toggle'  }, function(response) {
# and so on...

Of the APIs that RES depends on, only a few are unimplemented:

  • history.addUrl
  • pageAction.hide, onClicked, setIcon, and show
  • permissions.remove, request
  • tabs.getCurrent

Before diving into the code, let’s head back to Bugzilla and make sure bugs have been filed for these. The bugs mentioned above cover History, Page Actions, and Permissions, but they don’t cover tabs.getCurrent. I’ve filed Bug 1212890 for that.

Hacks and Workarounds

Now that we’ve identified our limitations, we need to work around them. In the short term, we can just insert guards that check for the existence of an API before calling methods on it. For example, let’s look at how history.addUrl is used in background.js:

case 'addURLToHistory':
    chrome.history.addUrl({url: request.url});

Since chrome.history.addUrl is undefined in Firefox, this call will throw a TypeError. Instead, let’s check for its existence before we use it:

case 'addURLToHistory':
    if (chrome.history && chrome.history.addUrl) {
        chrome.history.addUrl({url: request.url});
    }

This keeps the script from blowing up, but it means that addURLToHistory will silently fail until Bug 1208334 gets resolved. Under certain circumstances, like with RES, this might be acceptable. If it’s not, you’ll need to find a creative workaround or wait for the relevant bug to get resolved. Remember: file and vote on bugs! It’s how we know what we need to work on.
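This guard pattern repeats for every unimplemented API, so it may be worth factoring into a small helper. Here’s a sketch; the helper name is made up and isn’t part of RES or the WebExtension API:

```javascript
// Hypothetical helper (not part of RES or the WebExtension API): call a
// chrome.* method only if the browser implements it, otherwise run an
// optional fallback so the feature degrades gracefully.
function callIfSupported(namespace, method, args, fallback) {
  if (namespace && typeof namespace[method] === 'function') {
    namespace[method].apply(namespace, args);
    return true;
  }
  if (fallback) fallback();
  return false; // API not implemented in this browser yet
}

// Guarding the history call from above:
// callIfSupported(chrome.history, 'addUrl', [{ url: request.url }]);
```

The boolean return value also gives callers a cheap way to log which features silently degraded.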

Page actions are another great example: while it’s handy to have a button in the browser’s UI, you may also be able to provide the same functionality by using content scripts to inject custom UI into target pages until Bug 1197422 is fixed.
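As a sketch of that fallback, a content script could inject its own toggle control and message the background page, much like RES already does for subreddit styles. Everything here is illustrative, not RES’s actual code; the document is passed in explicitly only to keep the helper easy to test:

```javascript
// Hypothetical fallback for the missing pageAction API: inject a toggle
// button into the page from a content script instead of the browser UI.
function createToggleButton(doc, onToggle) {
  const btn = doc.createElement('button');
  btn.textContent = 'Toggle subreddit style'; // illustrative label
  btn.addEventListener('click', onToggle);
  doc.body.appendChild(btn);
  return btn;
}

// In a content script, onToggle might message the background page:
// createToggleButton(document, () =>
//   chrome.runtime.sendMessage({ requestType: 'subredditStyle', action: 'toggle' }));
```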

Lastly, we could get around the lack of permissions.request() by moving all of the optional_permissions from our manifest.json up into the normal permissions block. That would work, but it’s best not to require more permission than you need, and changing the permissions stanza generally results in your users being prompted to re-authorize your add-on. If possible, just wait for Bug 1197420.
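If you do want to keep the optional-permission code paths in place, you can feature-detect chrome.permissions the same way we guarded chrome.history. A sketch, with a made-up helper name (chrome.permissions.request itself is the real Chrome API):

```javascript
// Sketch: prefer the optional-permissions flow when the API exists;
// when it's absent, treat the feature as unavailable rather than
// promoting optional_permissions to required permissions.
function requestOptionalPermission(origin, onGranted, onUnavailable) {
  if (chrome.permissions && chrome.permissions.request) {
    chrome.permissions.request({ origins: [origin] }, function (granted) {
      if (granted) onGranted();
    });
  } else {
    onUnavailable(); // e.g., hide the Twitter/OneDrive expando buttons
  }
}
```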

Packaging your WebExtension

We’re working on a better workflow in Bug 1185460, but for now:

  1. Zip your files so that your manifest.json is at the root of the zip file.
  2. Rename it from .zip to .xpi.
  3. Navigate to about:addons.
  4. Drag and drop your XPI onto the page.
  5. Click “Install” in the prompt.

If anything goes wrong, check out the packaging and installation docs on MDN for troubleshooting tips.

Testing it Out

Despite WebExtensions being a brand new initiative at Mozilla, we’ve already implemented most of the building blocks needed to support the Reddit Enhancement Suite. Things should work, as long as we’ve properly routed around unsupported API calls.

Let’s load it up and see if reality matches our expectations…

A screenshot of the Reddit homepage with RES active.

Hey! That looks good! Maybe it’s working? Let’s try the feature that loads more content when you scroll to the bottom of a page…

Animation showing RES failing to load additional content when scrolling to the bottom of the page.

…no dice. 🙁 So, what went wrong?


To find out what failed, we need to open up the Browser Console. It’s a global log of everything that happens in the browser, and it’s where uncaught exceptions from WebExtensions show up. It’s in the Developer menu.

Note: Though they are related, the Browser Console is not the same thing as the Web Console in that menu.

Looking at the Browser Console, there’s an uncaught exception: “TypeError: window.Favico is not a constructor.”

A screenshot of the Browser Console showing a TypeError

This happens when orangered.js calls:

favicon = new window.Favico();

The root cause of the bug is that the Favico library exports itself as this.Favico in its content script, and RES assumes that it will then be available as window.Favico in other scripts. It turns out that Firefox doesn’t work the same way. Off to Bugzilla to file Bug 1208775!

Fortunately, there’s an easy workaround: just omit the window. part.

favicon = new Favico();

This gets us past that error and results in working infinite scrolling. Hooray! Also, kudos to RES for fixing this in pull request #2465!

Of course, we’re not done yet. There are many other fascinating and hilarious bugs to be found, like Bug 1208874, which prevents RES from saving any of your settings because WebExtension localStorage is getting nuked every time the browser restarts. Boo!

Remember: Keep your Browser Console open and file bugs when you find them!

Wrapping Up

As I mentioned at the beginning of the article, WebExtensions are still very early in their development, and things are rapidly changing. For example, PageAction support should land any day now. That said, WebExtensions are already astonishingly capable. For add-ons like RES that isolate and minimize browser-specific code, a port to WebExtensions is surprisingly close to being viable on Nightly builds of Firefox.

We’re still several months out from any of this landing in mainline Firefox, but it’s encouraging to see rapid progress. Each day we’re closer to a future in which a single add-on codebase can be fully re-used across many browsers, and where add-ons are written using the same technology as the Web itself.

If you want to follow along with the bugs that are blocking a port of RES to WebExtensions, CC yourself on the RES metabug at Bug 1208765 and check out my own attempt at porting RES on GitHub.

Lastly, consider contributing to Firefox! Everything we do is open source, and most WebExtension APIs are implemented in JavaScript. If you can hack JS, you can make a difference. Check out the open WebExtension bugs and drop by the #webextensions channel on irc.mozilla.org to get started.

Finally, a quick word of thanks to Steve Sobel, creator of the Reddit Enhancement Suite, who would like me to remind you that any port of RES to WebExtensions is unfinished, unofficial, and unsupported until he personally tells you otherwise. Don’t bug him about our bugs. 😉

View full post on Mozilla Hacks – the Web developer blog


Streaming media on demand with Media Source Extensions

Introducing MSE

Media Source Extensions (MSE) is a new addition to the Web APIs available in all major browsers.  This API allows for things like adaptive bitrate streaming of video directly in our browser, free of plugins. Where previously we may have used proprietary solutions like RTSP (Real Time Streaming Protocol) and Flash, we can now use simpler protocols like HTTP to fetch content, and MSE to smoothly stitch together video segments of varied quality.

All browsers that support HTMLMediaElements, such as audio and video tags, already make byte-range requests for subsequent segments of media assets.  One problem is that it’s up to each browser’s implementation of a media engine to decide when and how much to fetch.  It’s also tough to stitch together or deliver smooth playback of segments of different quality without pauses, gaps, flashes, clicks, or pops.  MSE gives us finer-grained control at the application level for fetching and playing back content.

In order to begin streaming, we need to figure out how to transcode our assets into a meaningful byte format for browsers’ media engines, determine what abstractions MSE provides, and figure out how to instruct the browser to play them back.

Having multiple resolutions of content allows us to switch between them while maintaining a constant viewport size.  This is known as upscaling, and it’s a common technique for real-time rendering in video games to meet a required frame time.  By switching to a lower quality video resolution, we can meet bandwidth limitations at the cost of fidelity.  The loss of fidelity causes such artifacts as aliasing, in which curves appear jagged and blocky.  This technique can often be seen by Netflix subscribers during peak viewing hours.

Rather than having an advanced protocol like RTSP handle bandwidth estimates, we can use a simpler network protocol like HTTP and move the advanced logic up one level into the application logic.


Transcoding

My recommended tools, ffmpeg and Bento4, are both free and open-source software (FOSS). ffmpeg is our Swiss army knife of transcoding, and Bento4 is a collection of great tools for working with mp4.  While I’m partial to non-licensed codecs like webm/vp8-9/opus, current browser support for those containers and codecs is rather poor, so in this post we’ll just be working with mp4/h.264/aac.  Both of the tools I’m working with are command line utilities; if you have nice GUI tools in your asset pipeline you’d like to recommend to our readers, let us know in the comments below.

We’ll start with a master of some file, and end up transcoding it into multiple files each of smaller resolutions, then segmenting the smaller-res whole files into a bunch of tiny files.  Once we have a bunch of small files (imagine splitting your video into a bunch of 10-second segments), the client can use more advanced heuristics for fetching the preferred next segment.

MSE multiple resolutions

Our smaller-res copies of the master asset

Proper fragmentation

When working with mp4 and MSE, it helps to know that the mp4 files must be structured so that metadata is interleaved in small chunks throughout the container, alongside the audio/video data it describes, instead of clustered together in one place.  This is specified in the ISO BMFF Byte Stream Format spec, section 3:

“An ISO BMFF initialization segment is defined in this specification as a single File Type Box (ftyp) followed by a single Movie Header Box (moov).”

This is really important: Simply transcoding to an mp4 container in ffmpeg does not have the expected format and thus fails when trying to play back in a browser with MSE.  To check and see if your mp4 is properly fragmented, you can run Bento4’s mp4dump on your mp4.

If you see something like:

  $ ./mp4dump ~/Movies/devtools.mp4 | head
  [ftyp] size=8+24
  [free] size=8+0
  [mdat] size=8+85038690
  [moov] size=8+599967

Then your mp4 won’t be playable since the [ftyp] “atom” is not followed immediately by a [moov] “atom.”  A properly fragmented mp4 looks something like this —

  $ ./mp4fragment ~/Movies/devtools.mp4 devtools_fragmented.mp4
  $ ./mp4dump devtools_fragmented.mp4 | head
  [ftyp] size=8+28
  [moov] size=8+1109
  [moof] size=8+600
  [mdat] size=8+138679
  [moof] size=8+536
  [mdat] size=8+24490

— where mp4fragment is another Bento4 utility.  The properly fragmented mp4 has the [ftyp] followed immediately by a [moov], then subsequent [moof]/[mdat] pairs.

It’s possible to skip the need for mp4fragment by using the -movflags frag_keyframe+empty_moov flags when transcoding to an mp4 container with ffmpeg, then checking with mp4dump:

  $ ffmpeg -i bunny.y4m -movflags frag_keyframe+empty_moov bunny.mp4

Creating multiple resolutions

If we want to switch resolutions, we can then run our fragmented mp4 through Bento4’s mp4-dash-encode.py script to get multiple resolutions of our video.  This script will fire up ffmpeg and other Bento4 tools, so make sure they are both available in your $PATH environment variable.

$ python2.7 mp4-dash-encode.py -b 5 bunny.mp4
$ ls
video_00500.mp4 video_00875.mp4 video_01250.mp4 video_01625.mp4 video_02000.mp4

We now have 5 different copies of our video with various bit rates and resolutions. To switch between them easily during playback, based on an effective bandwidth that changes constantly over time, we need to segment the copies and produce a manifest file to facilitate playback on the client.  We’ll create a Media Presentation Description (MPD)-style manifest file containing info about the segments, such as the threshold effective bandwidth required to fetch each one.
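To make the bandwidth-threshold idea concrete, here’s a toy picker that chooses the highest-bitrate representation at or below the current bandwidth estimate. It’s a hypothetical helper, not part of Bento4 or any real DASH client, which would do this far more carefully:

```javascript
// Hypothetical bitrate picker: given each representation's bandwidth
// threshold (as advertised in the MPD) and a current bandwidth estimate
// in bits/s, choose the highest representation we can likely sustain.
function pickRepresentation(bandwidths, estimate) {
  const sorted = bandwidths.slice().sort((a, b) => a - b);
  let chosen = sorted[0]; // always fall back to the lowest quality
  for (const b of sorted) {
    if (b <= estimate) chosen = b;
  }
  return chosen;
}

// pickRepresentation([500000, 875000, 2000000], 1000000) === 875000
```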

Bento4’s mp4-dash.py script can take multiple input files, perform the segmentation, and emit a MPD manifest that most DASH clients/libraries understand.

$ python2.7 mp4-dash.py --exec-dir=. video_0*
$ tree -L 1 output
├── audio
│   └── und
├── stream.mpd
└── video
    ├── 1
    ├── 2
    ├── 3
    ├── 4
    └── 5

8 directories, 1 file

We should now have a folder with segmented audio and segmented video of various resolutions.

MSE & Playback

With an HTMLMediaElement such as an audio or video tag, we simply assign a URL to the element’s src attribute and the browser handles fetching and playback.  With MSE, we will fetch the content ourselves with XMLHttpRequests (XHRs) treating the response as an ArrayBuffer (raw bytes), and assigning the src attribute of the media element to a URL that points to a MediaSource object.  We may then append SourceBuffer objects to the MediaSource.

Pseudocode for the MSE workflow, sketched as JavaScript, might look like:

const m = new MediaSource();
m.addEventListener('sourceopen', () => {
  const s = m.addSourceBuffer('codec'); // a MIME type + codecs string
  s.addEventListener('updateend', () => {
    if (numChunks === totalChunks) m.endOfStream(); // all segments appended
    else s.appendBuffer(nextChunk); // append the next fetched segment
  });
  s.appendBuffer(firstChunk);
});
video.src = URL.createObjectURL(m);

Here’s a trick to get the size of a file: make an XHR with the HTTP HEAD method.  A response to a HEAD request will have the content-length header specifying the body size of the response, but unlike a GET, it does not actually have a body.  You can use this to preview the size of a file without actually requesting the file contents.  We can naively subdivide the video and fetch the next segment of video when we’re 80% of the way through playback of the current segment.  Here’s a demo of this in action and a look at the code.
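That approach can be sketched as follows; the names are illustrative, not the demo’s actual code. getFileSize issues the HEAD request and reads content-length, and byteRange builds the Range header for the i-th of n roughly equal segments:

```javascript
// Sketch of the naive subdivision described above (illustrative names).
// A HEAD response carries headers such as content-length but no body.
function getFileSize(url, callback) {
  const xhr = new XMLHttpRequest();
  xhr.open('HEAD', url);
  xhr.onload = () => callback(Number(xhr.getResponseHeader('content-length')));
  xhr.send();
}

// Build the "Range" header value for the i-th of n equal segments.
function byteRange(totalBytes, i, n) {
  const size = Math.ceil(totalBytes / n);
  const start = i * size;
  const end = Math.min(start + size - 1, totalBytes - 1);
  return 'bytes=' + start + '-' + end;
}

// byteRange(100, 0, 4) === 'bytes=0-24'
```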

Note: You’ll need the latest Firefox Developer Edition browser to view the demo and test the code. More information below in the Compatibility section. The MSE primer from WebPlatform.org docs is another great resource to consult.

My demo is a little naive and has a few issues:

  • It doesn’t show how to properly handle seeking during playback.
  • It assumes bandwidth is constant (always fetching the next segment at 80% playback of the previous segment), which it isn’t.
  • It starts off by loading only one segment (it might be better to fetch the first few, then wait to fetch the rest).
  • It doesn’t switch between segments of varying resolution, instead only fetching segments of one quality.
  • It doesn’t remove segments (part of the MSE API), although this can be helpful on memory constrained devices. Unfortunately, this requires you to re-fetch content when seeking backwards.

These issues can all be solved with smarter logic on the client side with Dynamic Adaptive Streaming over HTTP (DASH).


Compatibility

Cross-browser codec support is a messy story right now; we can use MediaSource.isTypeSupported to detect codec support.  You pass isTypeSupported a string of the MIME type of the container you’re looking to play.  mp4 has the best compatibility currently. Apparently, for browsers that use the Blink rendering engine, MediaSource.isTypeSupported requires the full codec string to be specified.  To find this string, you can use Bento4’s mp4info utility:

  $ ./mp4info bunny.mp4 | grep Codec
    Codecs String: avc1.42E01E

Then in our JavaScript:

if (MediaSource.isTypeSupported('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')) {
    // we can play this
}

— where mp4a.40.2 is the codec string for low complexity AAC, the typical audio codec used in an mp4 container.
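Since support varies between browsers, it can help to probe a list of candidate type strings and use the first one the media engine accepts. A small sketch with a hypothetical helper:

```javascript
// Hypothetical helper: probe candidate MIME/codec strings and return the
// first one this browser's media engine accepts, or null if none play.
function firstSupportedType(candidates) {
  for (const type of candidates) {
    if (MediaSource.isTypeSupported(type)) return type;
  }
  return null;
}

// const type = firstSupportedType([
//   'video/webm; codecs="vp9, opus"',
//   'video/mp4; codecs="avc1.42E01E, mp4a.40.2"',
// ]);
```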

Some browsers also currently whitelist certain domains for testing MSE, or over-aggressively cache CORS content, which makes testing frustratingly difficult.  Consult your browser for how to disable the whitelist or CORS caching when testing.


DASH

Using the MPD file we created earlier, we can grab a high quality DASH client implemented in JavaScript such as Shaka Player or dash.js.  Both clients implement numerous features, but could use more testing, as there are some subtle differences between the media engines of various browsers.  Advanced clients like Shaka Player use an exponential moving average of three to ten samples to estimate the bandwidth, or even let you specify your own bandwidth estimator.

If we serve our output directory created earlier with Cross Origin Resource Sharing (CORS) enabled, and point either DASH client to http://localhost:<port>/output/stream.mpd, we should be able to see our content playing.  Enabling video cycling in Shaka, or clicking the +/- buttons in dash.js should allow us to watch the content quality changing.  For more drastic/noticeable changes in quality, try encoding fewer bitrates than the five we demonstrated.

Shaka Player in Firefox Dev Edition

Shaka Player in Firefox Developer Edition

dash.js running in Firefox Developer Edition

dash.js in Firefox Developer Edition

In conclusion

In this post, we looked at how to prep video assets for on-demand streaming by pre-processing and transcoding.  We also took a peek at the MSE API, and how to use more advanced DASH clients.  In an upcoming post, we’ll explore live content streaming using the MSE API, so keep an eye out.  I recommend you use Firefox Developer Edition to test out MSE; lots of hard work is going into our implementation.

Here are some additional resources for exploring MSE:

Top five web browser extensions

BANGALORE, INDIA: Browser extensions, as the name suggests, supplement your web browser with additional functionality, adding elements to the default user interface beyond those provided in the vanilla versions of commonly used browsers such as Google Chrome, Mozilla Firefox, Apple Safari and Microsoft Internet Explorer.

View full post on web development – Yahoo! News Search Results


Firefox and Google Chrome Extensions for Web Developers

The browser wars are heating up again. While Internet Explorer once held a near monopoly in the web browser market, other browsers like Mozilla Firefox and Google Chrome are slowly chipping away at Microsoft’s empire. One of the prominent features of these two alternative browsers is extensibility. Rather than being limited to the features that […] Check out the SEO Tools guide at Search …

View full post on Yahoo! News Search Results for web development
