
Let’s Write a Web Extension

You might have heard about Mozilla’s WebExtensions, our implementation of a new browser extension API for writing multiprocess-compatible add-ons. Maybe you’ve been wondering what it was about, and how you could use it. Well, I’m here to help! I think the MDN’s WebExtensions docs are a pretty great place to start:

WebExtensions are a new way to write Firefox extensions.

The technology is developed for cross-browser compatibility: to a large extent the API is compatible with the extension API supported by Google Chrome and Opera. Extensions written for these browsers will in most cases run in Firefox with just a few changes. The API is also fully compatible with multiprocess Firefox.

The only thing I would add is that while Mozilla is implementing most of the API that Chrome and Opera support, we’re not restricting ourselves to only that API. Where it makes sense, we will be adding new functionality and talking with other browser makers about implementing it as well. Finally, since the WebExtension API is still under development, it’s probably best if you use Firefox Nightly for this tutorial, so that you get the most up-to-date, standards-compliant behaviour. But keep in mind, this is still experimental technology — things might break!

Starting off

Okay, let’s start with a reasonably simple add-on. We’ll add a button, and when you click it, it will open up one of my favourite sites in a new tab.

The first file we’ll need is a manifest.json, to tell Firefox about our add-on.

{
  "manifest_version": 2,
  "name": "Cat Gifs!",
  "version": "1.0",
  "applications": {
    "gecko": {
      "id": "catgifs@mozilla.org"
    }
  },

  "browser_action": {
    "default_title": "Cat Gifs!"
  }
}

Great! We’re done! Hopefully your code looks a little like this. Of course, we have no idea if it works yet, so let’s install it in Firefox (we’re using Firefox Nightly for the latest implementation). You could try to drag the manifest.json, or the whole directory, onto Firefox, but that really won’t give you what you want.

The directory listing

Installing

To make Firefox recognize your extension as an add-on, you need to give it a zip file which ends in .xpi, so let’s make one of those by first installing 7-Zip, and then typing 7z a catgifs.xpi manifest.json. (If you’re on Mac or Linux, the zip command should be built-in, so just type zip catgifs.xpi manifest.json.) Then you can drag the catgifs.xpi onto Firefox, and it will show you an error because our extension is unsigned.

The first error

We can work around this by going to about:config, typing xpinstall.signatures.required in the search box, double-clicking the entry to set it to false, and then closing that tab. After that, when we drop catgifs.xpi onto Firefox, we get the option to install our new add-on!

It’s important to note that beginning with Firefox 44 (later this year), add-ons will require a signature to be installed on the Beta or Release versions of Firefox, so even if you set the preference mentioned above, you will soon still need to run Firefox Nightly or Developer Edition to follow this tutorial.

Success!!!

Of course, our add-on doesn’t do a whole lot yet.

I click and click, but nothing happens.

So let’s fix that!

Adding features

First, we’ll add the following lines to manifest.json, above the line containing browser_action:

  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },
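
If you’re following along, your manifest.json as a whole should now look something like this (the original file from earlier, with the new background block slotted in above browser_action):

{
  "manifest_version": 2,
  "name": "Cat Gifs!",
  "version": "1.0",
  "applications": {
    "gecko": {
      "id": "catgifs@mozilla.org"
    }
  },

  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },

  "browser_action": {
    "default_title": "Cat Gifs!"
  }
}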

Now, of course, that’s pointing at a background.js file that doesn’t exist yet, so we should create that, too. Let’s paste the following JavaScript into it:

'use strict';

/*global chrome:false */

chrome.browserAction.setBadgeText({text: '(?)'});
chrome.browserAction.setBadgeBackgroundColor({color: '#eae'});

chrome.browserAction.onClicked.addListener(function(aTab) {
  chrome.tabs.create({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true});
});

And you should get something that looks like this. Re-create the add-on by typing 7z a catgifs.xpi manifest.json background.js (or zip catgifs.xpi manifest.json background.js), and drop catgifs.xpi onto Firefox again, and now, when we click the button, we should get a new tab! 😄

Cat Gifs!

Automating the build

I don’t know about you, but I ended up typing 7z a catgifs.xpi manifest.json a disappointing number of times, and wondering why my background.js file wasn’t running. Since I know where this blog post is ending up, I know we’re going to be adding a bunch more files, so I think it’s time to add a build script. I hear that the go-to build tool these days is gulp, so I’ll wait here while you go install that, and c’mon back when you’re done. (I needed to install Node, and then gulp twice. I’m not sure why.)

So now that we have gulp installed, we should make a file named gulpfile.js to tell it how to build our add-on.

'use strict';

var gulp = require('gulp');

var files = ['manifest.json', 'background.js'];
var xpiName = 'catgifs.xpi';

gulp.task('default', function () {
  console.log(files, xpiName);
});

Once you have that file looking something like this, you can type gulp, and see output that looks something like this:

Just some command line stuff, nbd.

Now, you may notice that we didn’t actually build the add-on. To do that, we will need to install another package to zip things up. So, type npm install gulp-zip, and then change the gulpfile.js to contain the following:

'use strict';

var gulp = require('gulp');
var zip = require('gulp-zip');

var files = ['manifest.json', 'background.js'];
var xpiName = 'catgifs.xpi';

gulp.task('default', function () {
  return gulp.src(files)
    .pipe(zip(xpiName))
    .pipe(gulp.dest('.'));
});

Once your gulpfile.js looks like this, when we run it, it will create the catgifs.xpi (as we can tell by looking at the timestamp, or by deleting it and seeing it get re-created).
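
If you get tired of re-running gulp by hand every time you edit a file, gulp can also watch the source files and rebuild the XPI automatically. Here’s a small sketch of my own (the watch task isn’t part of the original walkthrough), added at the bottom of the same gulpfile.js:

// My own addition (not from the original post): run `gulp watch` to keep
// rebuilding catgifs.xpi whenever one of the source files changes.
gulp.task('watch', ['default'], function () {
  gulp.watch(files, ['default']);
});

Run gulp watch once, and it will build the add-on and then keep rebuilding it as you save changes.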

Fixing a bug

Now, if you’re like me, you clicked the button a whole bunch of times, to test it out and make sure it’s working, and you might have ended up with a lot of tabs. While this will ensure you remain extra-chill, it would probably be nicer to only have one tab, either creating it, or switching to it if it exists, when we click the button. So let’s go ahead and add that.

Lots and lots of cats.

The first thing we want to do is see if the tab exists, so let’s edit the browserAction.onClicked listener in background.js to contain the following:

chrome.browserAction.onClicked.addListener(function(aTab) {
  chrome.tabs.query({'url': 'http://chilloutandwatchsomecatgifs.com/'}, (tabs) => {
    if (tabs.length === 0) {
      // There is no catgif tab!
      chrome.tabs.create({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true});
    } else {
      // Do something here…
    }
  });
});

Huh, that’s weird, it’s always creating a new tab, no matter how many catgifs tabs there are already… It turns out that our add-on doesn’t yet have permission to see the URLs of existing tabs, which is why it can’t find them, so let’s go ahead and grant that by inserting the following code above the browser_action:

  "permissions": [
    "tabs"
  ],

Once your code looks similar to this, re-run gulp to rebuild the add-on, and drag-and-drop to install it, and then when we test it out, ta-da! Only one catgif tab! Of course, if we’re on another tab it doesn’t do anything, so let’s fix that. We can change the else block to contain the following:

      // Do something here…
      chrome.tabs.query({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true}, (active) => {
        if (active.length === 0) {
          chrome.tabs.update(tabs[0].id, {'active': true});
        }
      });

Make sure it looks like this, rebuild, re-install, and shazam, it works!
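
For reference, here is roughly what the whole background.js should look like at this point, with the pieces from above put together:

'use strict';

/*global chrome:false */

chrome.browserAction.setBadgeText({text: '(?)'});
chrome.browserAction.setBadgeBackgroundColor({color: '#eae'});

chrome.browserAction.onClicked.addListener(function(aTab) {
  chrome.tabs.query({'url': 'http://chilloutandwatchsomecatgifs.com/'}, (tabs) => {
    if (tabs.length === 0) {
      // There is no catgif tab, so open one.
      chrome.tabs.create({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true});
    } else {
      // There is already a catgif tab; switch to it unless it is already active.
      chrome.tabs.query({'url': 'http://chilloutandwatchsomecatgifs.com/', 'active': true}, (active) => {
        if (active.length === 0) {
          chrome.tabs.update(tabs[0].id, {'active': true});
        }
      });
    }
  });
});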

Making it look nice

Well, it works, but it’s not really pretty. Let’s do a couple of things to fix that a bit.

First of all, we can add a custom icon so that our add-on doesn’t look like all the other add-ons that haven’t bothered to set their icons… To do that, we add the following to manifest.json after the manifest_version line:

  "icons": {
    "48": "icon.png",
    "128": "icon128.png"
  },

And, of course, we’ll need to download a pretty picture for our icon, so let’s save a copy of this picture as icon.png, and this one as icon128.png.

We should also have a prettier icon for the button, so going back to the manifest.json, let’s add the following lines in the browser_action block before the default_title:

    "default_icon": {
      "19": "button.png",
      "38": "button38.png"
    },

and save this image as button.png, and this image as button38.png.
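
Putting all of those manifest changes together, the complete manifest.json should now look roughly like this:

{
  "manifest_version": 2,
  "icons": {
    "48": "icon.png",
    "128": "icon128.png"
  },
  "name": "Cat Gifs!",
  "version": "1.0",
  "applications": {
    "gecko": {
      "id": "catgifs@mozilla.org"
    }
  },

  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },

  "permissions": [
    "tabs"
  ],

  "browser_action": {
    "default_icon": {
      "19": "button.png",
      "38": "button38.png"
    },
    "default_title": "Cat Gifs!"
  }
}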

Finally, we need to tell our build script about the new files, so change the files line of our gulpfile.js to:

var files = ['manifest.json', 'background.js', '*.png'];

Re-run the build, and re-install the add-on, and we’re done! 😀

New, prettier, icons.

One more thing…

Well, there is another thing we could try to do. I mean, we have an add-on that works beautifully in Firefox, but one of the advantages of the new WebExtension API is that you can run the same add-on (or an add-on with minimal changes) on both Firefox and Chrome. So let’s see what it will take to get this running in both browsers!

We’ll start by launching Chrome, and trying to load the add-on, and see what errors it gives us. To load our extension, we’ll need to go to chrome://extensions/, and check the Developer mode checkbox, as shown below:

Now we’re hackers!

Then we can click the “Load unpacked extension…” button, and choose our directory to load our add-on! Uh-oh, it looks like we’ve got an error.

Close, but not quite.

Since the applications key is required for Firefox, I think we can safely ignore this error. And anyways, the button shows up! And when we click it…

Cats!

So, I guess we’re done! (I used to have a section in here about how to load babel.js, because the version of Chrome I was using didn’t support ES6’s arrow functions, but apparently they’ve upgraded their JavaScript engine, and now everything is good. 😉)
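
One last aside from me: this add-on sticks to the chrome.* namespace, which both browsers understand, so nothing had to change. If you later hit an API that behaves differently between Firefox and Chrome, one common pattern (purely hypothetical here, since this add-on doesn’t need it) is to pick the namespace at runtime:

// Not needed for this add-on: Firefox also exposes a promise-based
// `browser` namespace, so some cross-browser code picks whichever exists.
// Be aware that the two namespaces differ (callbacks vs. promises) for some APIs.
var api = (typeof browser !== 'undefined') ? browser : chrome;

api.browserAction.setBadgeText({text: '(?)'});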

Finally, if you have any questions, or run into any problems following this tutorial, please feel free to leave a comment here, or get in touch with me through email, or on twitter! If you have issues or constructive feedback developing WebExtensions, the team will be listening on the Discourse forum.

View full post on Mozilla Hacks – the Web developer blog


Got something to say? Write a post!


Here’s the thing: Twitter sucks for arguments:

  • It is almost impossible to follow conversation threads
  • People favouriting quite aggressive tweets leaves you puzzled as to the reasons
  • People retweeting parts of the conversation out of context leads to wrong messages and questionable quotes
  • 140 characters are great to throw out truisms but not to make a point.
  • People consistently copying you in on their arguments floods your notifications tab, even when you no longer want to weigh in

This morning was a great example: Peter Paul Koch wrote yet another incendiary post asking for a one year hiatus of browser innovation. I tweeted about the post saying it has some good points. Paul Kinlan of the Chrome team disagreed strongly with the post. I opted to agree with some of it, as a lot of features we created and thought we had discarded tend to linger on the web longer than we want them to.

A few of those back and forth conversations later, Alex Russell dropped the mic:

@Paul_Kinlan: good news is that @ppk has articulated clearly how attractive failure can be. @codepo8 seems to agree. Now we can call it out.

Now, I am annoyed about that. It is accusing, calls me reactive, and frames criticism of innovation as failure. It also very aggressively hints that Alex will now always quote that to show that PPK was wrong and keeps us from evolving. Maybe. Probably. Who knows, as it is only 140 characters. But I am keeping my mouth shut, as there is no point in this aggressive back and forth. It results in a lot of rushed arguments that can and will be quoted out of context. It results in assumed sub-context that can break good relationships. It – in essence – is not helpful.

If you truly disagree with something – make your point. Write a post, based on research and analysis. Don’t throw out a blanket approval or disapproval of the work of other people to spark a “conversation” that isn’t one.

Well-written thoughts lead to better quotes and deeper understanding. It takes more effort to read a whole post than to quote a tweet and add your sass.

In many cases, whilst writing the post you realise that you really don’t agree or disagree as much as you thought you did with the author. This leads to much less drama and more information.

And boy do we need more of that and less drama. We are blessed with jobs where people allow us to talk publicly, research and innovate and to question the current state. We should celebrate that and not use it for petty bickering and trench fights.

Photo Credit: acidpix

View full post on Christian Heilmann


How can we write better software? – Interview series, part 2 with Brian Warner

This is part 2 of a new Interview series here at Mozilla Hacks.

“How can we, as developers, write more superb software?”

A simple question without a simple answer. Writing good code is hard, even for developers with years of experience. Luckily, the Mozilla community is made up of some of the best development, QA and security folks in the industry.

This is part two in a series of interviews where I take on the role of an apprentice to learn from some of Mozilla’s finest.

Introducing Brian Warner

When my manager and I first discussed this project, Brian was the first person I wanted to interview. Brian probably doesn’t realize it, but he has been one of my unofficial mentors since I started at Mozilla. He is an exceptional teacher, and has the unique ability to make security approachable.

At Mozilla, Brian designed the pairing protocol for the “old” Firefox Sync, designed the protocol for the “new” Firefox Sync, and was instrumental in keeping Persona secure. Outside of Mozilla, Brian co-founded the Tahoe-LAFS project, and created Buildbot.

What do you do at Mozilla?

My title is staff security engineer in the Cloud Services group. I analyse and develop protocols for securely managing passwords and account data, and I implement those protocols in different fashions. I also review others’ code, I look at external projects to figure out whether it’s appropriate to incorporate them, and I try to stay on top of security failures like 0-days and problems in the wild that might affect us, and also tools and algorithms that we might be able to use.

UX vs Security: Is it a false dichotomy? Some people have the impression that for security to be good, it must be difficult to use.

There are times when I think that it’s a three-way tradeoff. Instead of being x-axis, y-axis, and a diagonal line that doesn’t touch zero, sometimes I think it’s a three-way thing where the other axis is how much work you want to put into it or how clever you are or how much user research and experimentation you are willing to do. Stuff that engineers are typically not focused on, but that UX and psychologists are. I believe, maybe it’s more of a hope than a belief, that if you put enough effort into that, then you can actually find something that is secure and usable at the same time, but you have to do a lot more work.

The trick is to figure out what people want to do and find a way of expressing whatever security decisions they have to make into a normal part of their work flow. It’s like when you lend your house key to a neighbour so they can water your plants when you are away on vacation, you’ve got a pretty good idea of what power you are handing over.

There are some social constructs surrounding that like, “I don’t think you’re going to make a copy of that key and so when I get it back from you, you no longer have that power that I granted to you.” There are patterns in normal life with normal non-computer behaviours and objects that we developed some social practices around, I think part of the trick is to use that and assume that people are going to expect something that works like that and then find a way to make the computer stuff more like that.

Part of the problem is that we end up asking people to do very unnatural things because it is hard to imagine or hard to build something that’s better. Take passwords. Passwords are a lousy authentication technology for a lot of different reasons. One of them being that in most cases, to exercise the power, you have to give that power to whoever it is you are trying to prove to. It’s like, “let me prove to you I know a secret”…”ok, tell me the secret.” That introduces all these issues like knowing how to correctly identify who you are talking to, and making sure nobody else is listening.

In addition to that, the best passwords are going to be randomly generated by a computer and they are relatively long. It’s totally possible to memorize things like that but it takes a certain amount of exercise and practice and that is way more work than any one program deserves.

But, if you only have one such password and the only thing you use it on is your phone, then your phone is now your intermediary that manages all this stuff for you, and then it probably would be fair (to ask users to spend more energy managing that password). And it’s clear that your phone is sort of this extension of you, better at remembering things, and that the one password you need in this whole system is the bootstrap.

So some stuff like that, and other stuff like escalating effort in rare circumstances. There are a lot of cases where what you do on an everyday basis can be really easy and lightweight, and it’s only when you lose the phone that you have to go back to a more complicated thing. Just like you only carry so much cash in your wallet, and every once in a while you have to go to a bank and get more.

It’s stuff like that I think it’s totally possible to do, but it’s been really easy to fall into bad patterns like blaming the user or pushing a lot of decisions onto the user when they don’t really have enough information to make a good choice, and a lot of the choices you are giving them aren’t very meaningful.

Do you think many users don’t understand the decisions and tradeoffs they are being asked to make?

I think that’s very true, and I think most of the time it’s an inappropriate question to ask. It’s kind of unfair. Walking up to somebody and putting them in this uncomfortable situation – do you like X or do you like Y – is a little bit cruel.

Another thing that comes to mind is permission dialogs, especially on Windows boxes. They show up all the time, even just to do really basic operations. It’s not like you’re trying to do something exotic or crazy. These dialogs purport to ask the user for permission, but they don’t explain the context or reasons or consequences enough to make it a real question. They’re more like a demand or an ultimatum. If you say “no” then you can’t get your work done, but if you say “yes” then the system is telling you that bad things will happen and it’s all going to be your fault.

It’s intended to give the user an informed choice, but it is this kind of blame the user, blame the victim pattern, where it’s like “something bad happened, but you clicked on the OK button, you’ve taken responsibility for that.” The user didn’t have enough information to do something and the system wasn’t well enough designed that they could do what they wanted to do without becoming vulnerable.

Months before “new” Sync ever saw the light of day, the protocol was hashed out in an extremely vocal and public forum. It was the exact opposite of security through obscurity. What did you hope to accomplish?

There were a couple of different things that I was hoping from that discussion. I pushed all that stuff to be described and discussed publicly because it’s the right thing to do, it’s the way we develop software, you know, it’s the open source way. And so I can’t really imagine doing it any other way.

The specific hopes that I had for publishing that stuff was to try to solicit feedback and get people to look for basic design flaws. I wanted to get people comfortable with the security properties, especially because new Sync changes some of them. We are switching away from pairing to something based on passwords. I wanted people to have time to feel they understood what those changes were and why we were making them. We put the design criteria and the constraints out there so people could see we kind of have to switch to a password to meet all of the other goals, and what’s the best we can do given security based on passwords.

Then the other part is that having that kind of public discussion and getting as many experienced people involved as possible is the only way that I know of to develop confidence that we’re building something that’s correct and not broken.

So it is really just more eyeballs…

Before a protocol or API designer ever sits down and writes a spec or line of code, what should they be thinking about?

I’d say think about what your users need. Boil down what they are trying to accomplish into something minimal and pretty basic. Figure out the smallest amount of code, the smallest amount of power, that you can provide that will meet those needs.

This is like the agile version of developing a protocol.

Yeah. Minimalism is definitely useful. Once you have the basic API that enables you to do what needs to be done, then think about all of the bad things that could be done with that API. Try and work out how to prevent them, or make them too expensive to be worthwhile.

A big problem with security is sometimes you ask “what are the chances that problem X would happen?” Say you design something and there is a 1/1000 chance that a particular set of inputs will cause this one particular problem to happen. If it really is random, then 1/1000 may be ok, 1/1M may be ok, but if it is a situation where an attacker gets to control the inputs, then it’s no longer 1/1000, it’s 1 in however many times the attacker chooses to make it 1.

It’s a game of who is cleverer and who is more thorough. It’s frustrating to have to do this case analysis to figure out every possible thing that could happen, every state it could get into, but if somebody else out there is determined to find a hole, that’s the kind of analysis they are going to do. And if they are more thorough than you are, then they’ll find a problem that you failed to cover.

Is this what is meant by threat modelling?

Yeah, different people use the term in different ways. I think of it as when you are laying out the system, you are setting up the ground rules. You are saying there is going to be this game: in this game, Alice is going to choose a password and Bob is trying to guess her password, and whatever.

You are defining what the ground rules are. So sometimes the rules say things like … the attacker doesn’t get to run on the defending system, their only access is through this one API call, and that’s the API call that you provide for all of the good players as well, but you can’t tell the difference between the good guy and the bad guy, so they’re going to use that same API.

So then you figure out the security properties if the only thing the bad guy can do is make API calls, so maybe that means they are guessing passwords, or it means they are trying to overflow a buffer by giving you some input you didn’t expect.

Then you step back and say “OK, what assumptions are you making here, are those really valid assumptions?” You store passwords in the database with the assumption that the attacker won’t ever be able to see the database, and then some other part of the system fails, and whoops, now they can see the database. OK, roll back that assumption, now you assume that most attackers can’t see the database, but sometimes they can, how can you protect the stuff that’s in the database as best as possible?

Other stuff like, “what are all the different sorts of threats you are intending to defend against?” Sometimes you draw a line in the sand and say “we are willing to try and defend against everything up to this level, but beyond that you’re hosed.” Sometimes it’s a very practical distinction like “we could try to defend against that but it would cost us 5x as much.”

Sometimes what people do is try and estimate the value to the attacker versus the cost to the user, it’s kind of like insurance modelling with expected value. It will cost the attacker X to do something and they’ve got an expected gain of Y based on the risk they might get caught.

Sometimes the system can be rearranged so that incentives encourage them to do the good thing instead of the bad thing. Bitcoin was very carefully thought through in this space where there are these clear points where a bad guy, where somebody could try and do a double spend, try and do something that is counter to the system, but it is very clear for everybody including the attacker that their effort would be better spent doing the mainstream good thing. They will clearly make more money doing the good thing than the bad thing. So, any rational attacker will not be an attacker anymore, they will be a good participant.

How can a system designer maximise their chances of developing a reasonably secure system?

I’d say the biggest guideline is the Principle of Least Authority. POLA is sometimes how that is expressed. Any component should have as little power as necessary to do the specific job that it needs to do. That has a bunch of implications and one of them is that your system should be built out of separate components, and those components should actually be isolated so that if one of them goes crazy or gets compromised or just misbehaves, has a bug, then at least the damage it can do is limited.

The example I like to use is a decompression routine. Something like gzip, where you’ve got bytes coming in over the wire, and you are trying to expand them before you try and do other processing. As a software component, it should be this isolated little bundle of 2 wires. One side should have a wire coming in with compressed bytes and the other side should have decompressed data coming out. It’s gotta allocate memory and do all kinds of format processing and lookup tables and whatnot, but nothing that box does, no matter how weird the input or how malicious the box, can do anything other than spit bytes out the other side.

It’s a little bit like Unix process isolation, except that a process can do syscalls that can trash your entire disk, and do network traffic and do all kinds of stuff. This is just one pipe in and one pipe out, nothing else. It’s not always easy to write your code that way, but it’s usually better. It’s a really good engineering practice because it means when you are trying to figure out what could possibly be influencing a bit of code you only have to look at that one bit of code. It’s the reason we discourage the use of global variables, it’s the reason we like object-oriented design in which class instances can protect their internal state or at least there is a strong convention that you don’t go around poking at the internal state of other objects. The ability to have private state is like the ability to have private property where it means that you can plan what you are doing without potential interference from things you can’t predict. And so the tractability of analysing your software goes way up if things are isolated. It also implies that you need a memory safe language…

Big, monolithic programs in a non memory safe language are really hard to develop confidence in. That’s why I go for higher level languages that have memory safety to them, even if that means they are not as fast. Most of the time you don’t really need that speed. If you do, it’s usually possible to isolate the thing that you need, into a single process.
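
To make Brian’s “two wires” picture concrete, here is a small sketch of my own (not Brian’s code, and only an interface illustration rather than real OS-level sandboxing) of a decompression component whose entire job is bytes in, bytes out, written with Node’s built-in zlib module:

'use strict';

var zlib = require('zlib');

// The component's entire surface: compressed bytes in, a callback with
// decompressed bytes (or an error) out. It is handed no file paths,
// no sockets, nothing beyond the buffer it is asked to expand.
function inflate(compressedBuffer, callback) {
  zlib.gunzip(compressedBuffer, function (err, decompressedBuffer) {
    callback(err, decompressedBuffer);
  });
}

// Example usage: the caller owns every other capability.
zlib.gzip(Buffer.from('hello'), function (err, compressed) {
  inflate(compressed, function (err2, data) {
    console.log(data.toString()); // "hello"
  });
});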

What common problems do you see out on the web that violate these principles?

Well in particular, the web is an interesting space. We tend to use memory safe languages for the receiver.

You mean like Python and JavaScript.

Yeah, and we tend to use more object-oriented stuff, more isolation. The big problems that I tend to see on the web are failure to validate and sanitize your inputs. Or, failing to escape things like injection attacks.

You have a lot of experience reviewing already written implementations, Persona is one example. What common problems do you see on each of the front and back ends?

It tends to be escaping things, or making assumptions about where data comes from, and how much an attacker gets control over if that turns out to be faulty.

Is this why you advocated making it easy to trace how the data flows through the system?

Yeah, definitely. It’d be nice if you could kind of zoom out of the code and see a bunch of little connected components with little lines running between them, and to say, “OK, how did this module come up with this name string? Oh, well it came from this one. Where did it come from there? Then trace it back to the point where, HERE, that name string actually comes from a user-submitted parameter. This is coming from the browser, and the browser is generating it as the sending domain of the postMessage. OK, how much control does the attacker have over one of those? What could they do that would be surprising to us?” And then, work out at any given point what the type is, see where the transition is from one type to another, and notice if there are any points where you are failing to do that transformation, or you are getting the type confused. Definitely, simplicity and visibility and tractable analysis are the keys.

What can people do to make data flow auditing simpler?

I think, minimising interactions between different pieces of code is a really big thing. Isolate behaviour to specific small areas. Try and break up the overall functionality into pieces that make sense.

What is defence in depth and how can developers use it in a system?

“Belt and suspenders” is the classic phrase. If one thing goes wrong, the other thing will protect you. You look silly if you are wearing both a belt and suspenders because they are two independent tools that help you keep your pants on, but sometimes belts break, and sometimes suspenders break. Together they protect you from the embarrassment of having your pants fall off. So defence in depth usually means don’t depend upon perimeter security.

Does this mean you should be checking data throughout the system?

There is always a judgement call about performance cost, or, the complexity cost. If your code is filled with sanity checking, then that can distract the person who is reading your code from seeing what real functionality is taking place. That limits their ability to understand your code, which is important to be able to use it correctly and satisfy its needs. So, it’s always this kind of judgement call and tension between being too verbose and not being verbose enough, or having too much checking.

The notion of perimeter security, it’s really easy to fall into this trap of drawing this dotted line around the outside of your program and saying “the bad guys are out there, and everyone inside is good” and then implementing whatever defences you are going to do at that boundary and nothing further inside. I was talking with some folks and their opinion was that there are evolutionary biology and sociology reasons for this. Humans developed in these tribes where basically you are related to everyone else in the tribe and there are maybe 100 people, and you live far away from the next tribe. The rule was basically if you are related to somebody then you trust them, and if you aren’t related, you kill them on sight.

That worked for a while, but you can’t build any social structure larger than 100 people. We still think that way when it comes to computers. We think that there are “bad guys” and “good guys”, and I only have to defend against the bad guys. But, we can’t distinguish between the two of them on the internet, and the good guys make mistakes too. So, the principle of least authority and the idea of having separate software components that are all very independent and have very limited access to each other means that, if a component breaks because somebody compromised it, or somebody tricked it into behaving differently than you expected, or it’s just buggy, then the damage that it can do is limited because the next component is not going to be willing to do that much for it.

Do you have a snippet of code, from you or anybody else, that you think is particularly elegant that others could learn from?

I guess one thing to show off would be the core share-downloading loop I wrote for Tahoe-LAFS.

In Tahoe, files are uploaded into lots of partially-redundant “shares”, which are distributed to multiple servers. Later, when you want to download the file, you only need to get a subset of the shares, so you can tolerate some number of server failures.

The shares include a lot of integrity-protecting Merkle hash trees which help verify the data you’re downloading. The locations of these hashes aren’t always known ahead of time (we didn’t specify the layout precisely, so alternate implementations might arrange them differently). But we want a fast download with minimal round-trips, so we guess their location and fetch them speculatively: if it turns out we were wrong, we have to make a second pass and fetch more data.

This code tries very hard to fetch the bare minimum. It uses a set of compressed bitmaps that record which bytes we want to fetch (in the hopes that they’ll be the right ones), which ones we really need, and which ones we’ve already retrieved, and sends requests for just the right ones.

The thing that makes me giggle about this overly clever module is that the entire algorithm is designed around Rolling Stone lyrics. I think I started with “You can’t always get what you want, but sometimes … you get what you need”, and worked backwards from there.

The other educational thing about this algorithm is that it’s too clever: after we shipped it, we found out it was actually slower than the less-sophisticated code it had replaced. Turns out it’s faster to read a few large blocks (even if you fetch more data than you need) than a huge number of small chunks (with network and disk-IO overhead). I had to run a big set of performance tests to characterize the problem, and decided that next time, I’d find ways to measure the speed of a new algorithm before choosing which song lyrics to design it around. :).
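
As a rough illustration of the bookkeeping Brian describes above (my own toy sketch, nothing like the real Tahoe-LAFS code, which works on compressed bitmaps of byte ranges): track the bytes you speculatively want, the bytes you genuinely need, and the bytes you already have, and only request the difference.

'use strict';

// Toy sketch: compute which byte offsets to request, given what we
// speculatively want, what we truly need, and what we already have.
function bytesToRequest(want, need, have) {
  var request = new Set();
  need.forEach(function (offset) {
    if (!have.has(offset)) { request.add(offset); }
  });
  want.forEach(function (offset) {
    if (!have.has(offset)) { request.add(offset); }
  });
  return request;
}

// Example: want offsets 0-3 speculatively, need 1 and 2, already have 2.
var toFetch = bytesToRequest(new Set([0, 1, 2, 3]), new Set([1, 2]), new Set([2]));
console.log(Array.from(toFetch)); // [ 1, 0, 3 ]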

What open source projects would you like to encourage people to get involved with?

Personally, I’m really interested in secure communication tools, so I’d encourage folks (especially designers and UI/UX people) to look into tools like Pond, TextSecure, and my own Petmail. I’m also excited about the variety of run-your-own-server-at-home systems like the GNU FreedomBox.

How can people keep up with what you are doing?

Following my commits on https://github.com/warner is probably a good approach, since most everything I publish winds up there.

Thank you Brian.

Transcript

Brian and I covered far more material than I could include in a single post. The full transcript, available on GitHub, also covers memory safe languages, implicit type conversion when working with HTML, and the Python tools that Brian commonly uses.

Next up!

Both Yvan Boiley and Peter deHaan are featured in the next article. Yvan leads the Cloud Services Security Assurance team and continues with the security theme by discussing his team’s approach to security audits and which tools developers can use to self-audit their sites for common problems.

Peter, one of Mozilla’s incredible Quality Assurance engineers, is responsible for ensuring that Firefox Accounts doesn’t fall over. Peter talks about the warning signs, processes and tools he uses to assess a project, and how to give the smack down while making people laugh.

View full post on Mozilla Hacks – the Web developer blog


How can we write better software? – Interview series, part 1 with Fernando Jimenez Moreno

Do you ever look at code and murmur a string of “WTFs”? Yeah, me too. As often as not, the code is my own.

I have spent my entire professional career trying to write software that I can be proud of. Writing software that “works” is difficult. Writing software that works while also being bug-free, readable, extensible, maintainable and secure is a Herculean task.

Luckily, I am part of a community that is made up of some of the best development, QA and security folks in the industry. Mozillians have proven themselves time and time again with projects like Webmaker, MDN, Firefox and Firefox OS. These projects are complex, huge in scope, developed over many years, and require the effort of hundreds.

Our community is amazing, flush with skill, and has a lot to teach.

Interviews and feedback

This is the first of a series of interviews where I take on the role of an apprentice and ask some of Mozilla’s finest

“How can we, as developers, write more superb software?”

Since shipping software to millions of people involves more than writing code, I approach the question from many viewpoints: QA, Security and development have all taken part.

The target audience is anybody who writes software, regardless of the preferred language or level of experience. If you are reading this, you are part of the target audience! Most questions are high level and can be applied to any language. A minority are about tooling or language specific features.

Questions

Each interview has a distinct set of questions based around finding answers to:

  • How do other developers approach writing high quality, maintainable software?
  • What does “high quality, maintainable software” even mean?
  • What processes/standards/tools are being used?
  • How do others approach code review?
  • How can development/Security/QA work together to support each-other’s efforts?
  • What matters to Security? What do they look for when doing a security review/audit?
  • What matters to QA? What do they look for before signing off on a release?
  • What can I do, as a developer, to write great software and facilitate the entire process?

I present the highlights of one or two interviews per article. Each interview contains a short introduction to the person being interviewed followed by a series of questions and answers.

Where an interview’s audio was recorded, I will provide a link to the full transcript. If the interview was done over email, I will link to the contents of the original email.

Now, on to the first interview!

Introducing Fernando Jimenez Moreno

The first interview is with Fernando Jimenez Moreno, a Firefox OS developer from Telefonica. I had the opportunity to work with Fernando last autumn when we integrated Firefox Accounts into Firefox OS. I was impressed not only with Fernando’s technical prowess, but also his ability to bring together the employees of three companies in six countries on two continents to work towards a common goal.

Fernando talks about how Telefonica became involved in Firefox OS, how to bring a disparate group together, common standards, code reviews, and above all, being pragmatic.

What do you and your team at Telefonica do?

I’m part of what we call the platform team. We have different teams at Telefonica, one is focused on front end development in Gaia, and the other one is focused on the platform itself, like Gecko, Gonk and external services. We work in several parts of Firefox OS, from Gecko to Gaia, to services like the SimplePush server. I’ve personally worked on things like the Radio Interface Layer (RIL), payments, applications API and other Web APIs, and almost always jump from Gecko to Gaia and back. Most recently, I started working on a WebRTC service for Firefox OS.

How did Telefonica get involved working with Mozilla?

Well, that’s a longer story. We started working on a similar project to Firefox OS, but instead of being based on Gecko, we were working with WebKit. So we were creating this open web device platform based on WebKit. When we heard about Mozilla doing the same with Gecko, we decided to contact you and started working on the same thing. Our previous implementation was based on a closed source port of WebKit and it was really hard to work that way. Since then, my day to day work is just like any other member of Telefonica’s Firefox OS team, which I believe is pretty much the same as any other Mozilla engineer working on B2G.

You are known as a great architect, developer, and inter-company coordinator. For Firefox Accounts on Firefox OS, you brought together people from Telefonica, Telenor, and Mozilla. What challenges are present when you have to work across three different companies?

It was quite a challenge, especially during the first days of Firefox OS. We started working with Mozilla back in 2011, and it took some time for both companies to find a common work flow that fit well for both parts. I mean, we were coming from a telco culture where many things were closed and confidential by default, as opposed to the openness and transparency of Mozilla. For some of us coming from other open source projects, it wasn’t that hard to start working in the open and to be ready to discuss and defend our work on public forums. But, for other members of the team it took some time to get used to that new way of working, and new way of presenting our work.

Also, because we were following agile methodologies in Telefonica while Mozilla wasn’t still doing it, we had to find this common workflow that suits both parts. It took some time to do it, a lot of management meetings, a lot of discussions about it. Regarding working with other telco companies, the experience has also been quite good so far, especially with Telenor. We still have to be careful about the information that we share with them, because at the end of the day, we are still competitors. But that doesn’t mean we cannot work with them in a common target like what happened with Firefox Accounts.

When Mozilla and Telefonica started out on this process, both sides had to change. How did you decide what common practices to use and how did you establish a common culture?

I think for this agile methodology, we focused more on the front end parts because Gecko already had a very well known process and a very well known way of developing. It has its own train mechanism of 6 weeks. The ones making the biggest effort to find that common workflow were the front end team, because we started working on Gaia and Gaia was a new project with no fixed methodologies.

I don’t know if we really found the workflow, the perfect workflow, but I think we are doing good. I mean we participate in agile methodologies, but when it turns out that we need to do Gecko development and we need to focus on that, we just do it their way.

In a multi-disciplinary, multi-company project, how important are common standards like style guides, tools, and processes?

Well, I believe when talking about software engineering, standards are very important in general. But, I don’t care if you call it SCRUM or KANBAN or SCRUMBAN or whatever, or if you use Git workflow or Mercurial workflow, or if you use Google or Mozilla’s Javascript style guide. But you totally need some common processes and standards, especially in large engineering groups like open source, or Mozilla development in general. When talking about this, the lines are very thin. It’s quite easy to fail by spending too much time defining and defending the usage of these standards and common processes, and losing focus on the real target. So, I think we shouldn’t forget these are only tools; they are important, but they are only tools to help us, and help our managers. We should be smart enough to be flexible about them when needed.

We do a lot of code reviews about code style, but in the end what you want is to land the patch and to fix the issue. If you have code style issues, I want you to fix them, but if you need to land the patch to make a train, land it and file a follow on bug to fix the issues, or maybe the reviewer can do it if they have the chance.

Firefox OS is made up of Gonk, Gecko and Gaia. Each system is large, complex, and intimidating to a newcomer. You regularly submit patches to Gecko and Gaia. Whenever you dive into an existing project, how do you learn about the system?

I’m afraid there is no magic technique. What works for me might not work for others for sure. What I try to do is to read as much documentation as possible inside and outside of the code, if it’s possible. I try to ask the owners of that code when needed, and also if that’s possible, because sometimes they just don’t work in the same code or they are not available. I try to avoid reading the code line by line at first and I always try to understand the big picture before digging into the specifics of the code. I think that along the years, you somehow develop this ability to identify patterns in the code and to identify common architectures that help you understand the software problems that you are facing.

When you start coding in unfamiliar territory, how do you ensure your changes don’t cause unintended side effects? Is testing a large part of this?

Yeah, basically tests, tests and more tests. You need tests, smoke tests, black box tests, tests in general. Also at first, you depend a lot on what the reviewer said, and you trust the reviewer, or you can ask QA or the reviewer to add tests to the patch.

Let’s flip this on its head and you are the reviewer and you are reviewing somebody’s code. Again, do you rely on the tests whenever you say “OK, this code adds this functionality. How do we make sure it doesn’t break something over there?”

I usually test the patches that I review if I think the patch can cause any regression. I also try and run the tests on the “try” server, or ask the developer to trigger a “try” run.

OK, so tests.. A lot of tests.

Yeah, now that we are starting to have good tests in Firefox OS, we have to make use of them.

What do you look for when you are doing a review?

In general, the first thing I look for is correctness. I mean, the patch should actually fix the issue it was written for. And of course it shouldn’t have collateral effects; it shouldn’t introduce any regressions. As I said, I try to test the patches myself if I have the time, or if the patch is critical enough, to see how it works and to see if it introduces a regression. I also check that the code is performant and secure, and I always try to ask for tests if I think they are possible to write for the patch. And finally I look for things like quality of the code in general, documentation, coding style, and contribution process correctness.

You reviewed one of my large patches to integrate Firefox Accounts into Firefox OS. You placed much more of an emphasis on consistency than any review I have had. By far.

Well it certainly helps with overall code quality. When I do reviews, I mark these kinds of comments as “nit:” which is quite common in Mozilla, meaning that “I would like to see that changed, but you still get my positive review if you don’t change it, but I would really like to see them changed.”

Two part question. As a reviewer, how can you ensure that your comments are not taken too personally by the developer? The second part is, as a developer, how can you be sure that you don’t take it too personally?

For the record, I have received quite a few hard revisions myself. I never take them personally. I mean, I always try to take the reviews as a positive learning experience. I know reviewers usually don’t have a lot of time in their lives to do reviews; they also have to write code. So, they just quickly write “needs to be fixed” without spending too much time thinking about the nicest way to say it. Reviewers only comment on the things in your code that they consider are not correct; they don’t usually mention the things that are correct, and I know that can be hard at first.

But once you start doing it, you understand why they don’t do that. I mean, you have your own work to do. This is actually especially hard for me, being a non-native English speaker, because sometimes I try to express things in the nicest way possible but the lack of words makes the review comments sound stronger than they were supposed to be. So what I try to do is use a lot of smileys if possible. And I always try to avoid the “r-” flag; I mean, the “r-” is really bad. I just clear it and use the “feedback+” or whatever.

You already mentioned that you try to take it as a learning experience whenever you are developer. Do you use review as a potential teaching moment if you are the reviewer?

Yeah, for sure. I mean just the simple fact of reviewing a patch is a teaching experience. You are telling the coder what you think is more correct. Sometimes there is a lack of theory and reasons behind the comments, but we should all do that, we should explain the reasons and try to make the process as good as possible.

Do you have a snippet of code, from you or anybody else, that you think is particularly elegant that others could learn from?

I am pretty critical with my own code so I can’t really think about a snippet of code of my own that I am particularly proud enough to show :). But if I have to choose a quick example I was quite happy with the result of the last big refactor of the call log database for the Gaia Dialer app or the recent Mobile Identity API implementation.

What open source projects would you like to encourage people to participate in, and where can they go to get involved?

Firefox OS of course! No, seriously, I believe Firefox OS gives software engineers the chance to get involved in an amazing open source community with tons of technical challenges from low level to front end code. Having the chance to dig into the guts of a web browser and a mobile operating system in such an open environment is quite a privilege. It may seem hard at first to get involved and jump into the process and the code, but there are already some very nice Firefox OS docs on MDN and a lot of nice people willing to help on IRC (#b2g and #gaia), the mailing lists (dev-b2g and dev-gaia) or ask.mozilla.org.

How can people keep up to date about what you are working on?

I don’t have a blog, but I have my public GitHub account and my Twitter account.

Transcript

A huge thanks to Fernando for doing this interview.

The full transcript is available on GitHub.

Next article

In the next article, I interview Brian Warner from the Cloud Services team. Brian is a security expert who shares his thoughts on architecting for security, analyzing threats, “belts and suspenders”, and writing code that can be audited.

As a parting note, I have had a lot of fun doing these interviews and I would like your input on how to make this series useful. I am also looking for Mozillians to interview. If you would like to nominate someone, even yourself, please let me know! Email me at stomlinson@mozilla.com.

View full post on Mozilla Hacks – the Web developer blog


How can we write better software? – Interview series, part 1

Do you ever look code and murmur a string of “WTFs?” Yeah, me too. As often as not, the code is my own.

I have spent my entire professional career trying to write software that I can be proud of. Writing software that “works” is difficult. Writing software that works while also being bug-free, readable, extensible, maintainable and secure is a Herculean task.

Luckily, I am part of a community that is made up of some of the best development, QA and security folks in the industry. Mozillians have proven themselves time and time again with projects like Webmaker, MDN, Firefox and Firefox OS. These projects are complex, huge in scope, developed over many years, and require the effort of hundreds.

Our community is amazing, flush with skill, and has a lot to teach.

Interviews and feedback

This is the first of a series of interviews where I take on the role of an apprentice and ask some of Mozilla’s finest

“How can we, as developers, write more superb software?”

Since shipping software to millions of people involves more than writing code, I approach the question from many viewpoints: QA, Security and development have all taken part.

The target audience is anybody who writes software, regardless of the preferred language or level of experience. If you are reading this, you are part of the part of the target audience! Most questions are high level and can be applied to any language. A minority are about tooling or language specific features.

Questions

Each interview has a distinct set of questions based around finding answers to:

  • How do other developers approach writing high quality, maintainable software?
  • What does “high quality, maintainable software” even mean?
  • What processes/standards/tools are being used?
  • How do others approach code review?
  • How can development/Security/QA work together to support each-other’s efforts?
  • What matters to Security? What do they look for when doing a security review/audit?
  • What matters to QA? What do they look for before signing off on a release?
  • What can I do, as a developer, to write great software and facilitate the entire process?

I present the highlights of one or two interviews per article. Each interview contains a short introduction to the person being interviewed followed by a series of questions and answers.

Where an interview’s audio was recorded, I will provide a link to the full transcript. If the interview was done over email, I will link to the contents of the original email.

Now, on to the first interview!

Introducing Fernando Jimenez Moreno

Fernando Jimenez MorenoThe first interview is with Fernando Jimenez Moreno, a Firefox OS developer from Telefonica. I had the opportunity to work with Fernando last autumn when we integrated Firefox Accounts into Firefox OS. I was impressed not only with Fernando’s technical prowess, but also his ability to bring together the employees of three companies in six countries on two continents to work towards a common goal.

Fernando talks about how Telefonica became involved in Firefox OS, how to bring a disparate group together, common standards, code reviews, and above all, being pragmatic.

What do you and your team at Telefonica do?

I’m part of what we call the platform team. We have different teams at Telefonica, one is focused on front end development in Gaia, and the other one is focused on the platform itself, like Gecko, Gonk and external services. We work in several parts of Firefox OS, from Gecko to Gaia, to services like the SimplePush server. I’ve personally worked on things like the Radio Interface Layer (RIL), payments, applications API and other Web APIs, and almost always jump from Gecko to Gaia and back. Most recently, I started working on a WebRTC service for Firefox OS.

How did Telefonica get involved working with Mozilla?

Well, that’s a longer story. We started working on a similar project to Firefox OS, but instead of being based on Gecko, we were working with WebKit. So we were creating this open web device platform based on WebKit. When we heard about Mozilla doing the same with Gecko, we decided to contact you and started working on the same thing. Our previous implementation was based on a closed source port of WebKit and it was really hard to work that way. Since then, my day to day work is just like any other member of Telefonica’s Firefox OS team, which I believe is pretty much the same as any other Mozilla engineer working on B2G.

You are known as a great architect, developer, and inter-company coordinator. For Firefox Accounts on Firefox OS, you brought together people from Telefonica, Telenor, and Mozilla. What challenges are present when you have to work across three different companies?

It was quite a challenge, especially during the first days of Firefox OS. We started working with Mozilla back in 2011, and it took some time for both companies to find a common work flow that fit well for both parts. I mean, we were coming from a telco culture where many things were closed and confidential by default, as opposed to the openness and transparency of Mozilla. For some of us coming from other open source projects, it wasn’t that hard to start working in the open and to be ready to discuss and defend our work on public forums. But, for other members of the team it took some time to get used to that new way of working, and new way of presenting our work.

Also, because we were following agile methodologies at Telefonica while Mozilla wasn’t yet doing so, we had to find a common workflow that suited both parties. It took some time, a lot of management meetings, and a lot of discussion. Regarding working with other telco companies, the experience has also been quite good so far, especially with Telenor. We still have to be careful about the information that we share with them, because at the end of the day, we are still competitors. But that doesn’t mean we cannot work with them towards a common target, like what happened with Firefox Accounts.

When Mozilla and Telefonica started out on this process, both sides had to change. How did you decide what common practices to use and how did you establish a common culture?

I think for this agile methodology, we focused more on the front end parts, because Gecko already had a very well-known process and a very well-known way of developing. It has its own six-week train mechanism. The front end team made the biggest effort to find that common workflow, because we started working on Gaia and Gaia was a new project with no fixed methodologies.

I don’t know if we really found the workflow, the perfect workflow, but I think we are doing good. I mean we participate in agile methodologies, but when it turns out that we need to do Gecko development and we need to focus on that, we just do it their way.

In a multi-disciplinary, multi-company project, how important are common standards like style guides, tools, and processes?

Well, I believe when talking about software engineering, standards are very important in general. But, I don’t care if you call it SCRUM or KANBAN or SCRUMBAN or whatever, or if you use Git workflow or Mercurial workflow, or if you use Google or Mozilla’s Javascript style guide. But you totally need some common processes and standards, especially in large engineering groups like open source, or Mozilla development in general. When talking about this, the lines are very thin. It’s quite easy to fail spending too much time defining and defending the usage of these standards and common processes and losing the focus on the real target. So, I think we shouldn’t forget these are only tools, they are important, but they are only tools to help us, and help our managers. We should be smart enough to be flexible about them when needed.

We do a lot of code reviews about code style, but in the end what you want is to land the patch and fix the issue. If you have code style issues, I want you to fix them, but if you need to land the patch to make a train, land it and file a follow-up bug to fix the issues, or maybe the reviewer can do it if they have the chance.

Firefox OS is made up of Gonk, Gecko and Gaia. Each system is large, complex, and intimidating to a newcomer. You regularly submit patches to Gecko and Gaia. Whenever you dive into an existing project, how do you learn about the system?

I’m afraid there is no magic technique. What works for me might not work for others for sure. What I try to do is to read as much documentation as possible inside and outside of the code, if it’s possible. I try to ask the owners of that code when needed, and also if that’s possible, because sometimes they just don’t work in the same code or they are not available. I try to avoid reading the code line by line at first and I always try to understand the big picture before digging into the specifics of the code. I think that along the years, you somehow develop this ability to identify patterns in the code and to identify common architectures that help you understand the software problems that you are facing.

When you start coding in unfamiliar territory, how do you ensure your changes don’t cause unintended side effects? Is testing a large part of this?

Yeah, basically tests, tests and more tests. You need tests, smoke tests, black box tests, tests in general. Also at first, you depend a lot on what the reviewer said, and you trust the reviewer, or you can ask QA or the reviewer to add tests to the patch.

Let’s flip this on its head and you are the reviewer and you are reviewing somebody’s code. Again, do you rely on the tests whenever you say “OK, this code adds this functionality. How do we make sure it doesn’t break something over there?”

I usually test the patches that I have to review if I think the patch can cause any regression. I also try to run the tests on the “try” server, or ask the developer to trigger a “try” run.

OK, so tests… A lot of tests.

Yeah, now that we are starting to have good tests in Firefox OS, we have to make use of them.

What do you look for when you are doing a review?

In general, where I look first is correctness. I mean, the patch should actually fix the issue it was written for. And of course it shouldn’t have collateral effects; it shouldn’t introduce any regressions. As I said, I try to test the patches myself if I have the time or if the patch is critical enough, to see how it works and to see if it introduces a regression. I also check that the code is performant and secure, and I always try to ask for tests if I think they are possible to write for the patch. Finally, I look for things like the general quality of the code, documentation, coding style, and correctness of the contribution process.

You reviewed one of my large patches to integrate Firefox Accounts into Firefox OS. You placed much more of an emphasis on consistency than any review I have had. By far.

Well, it certainly helps with overall code quality. When I do reviews, I mark these kinds of comments as “nit:”, which is quite common at Mozilla, meaning “I would like to see that changed, and although you still get my positive review if you don’t change it, I would really like to see it changed.”

Two part question. As a reviewer, how can you ensure that your comments are not taken too personally by the developer? The second part is, as a developer, how can you be sure that you don’t take it too personally?

For the record, I have received quite a few hard reviews myself. I never take them personally. I mean, I always try to take reviews as a positive learning experience. I know reviewers usually don’t have a lot of time in their lives to do reviews; they also have to write code. So they just quickly write “needs to be fixed” without spending too much time thinking about the nicest way to say it. Reviewers usually only comment on the things they consider incorrect in your code, and don’t usually mention the things that are correct, and I know that can be hard at first.

But once you start doing it, you understand why they don’t do that. I mean, you have your own work to do. This is actually especially hard for me, being a non-native English speaker, because sometimes I try to express things in the nicest way possible but the lack of words makes the review comments sound stronger than they were supposed to be. What I try to do is use a lot of smileys if possible. And I always try to avoid the “r-” flag, because “r-” is really bad; I just clear it and use “feedback+” or whatever.

You already mentioned that you try to take it as a learning experience whenever you are the developer. Do you use review as a potential teaching moment when you are the reviewer?

Yeah, for sure. I mean just the simple fact of reviewing a patch is a teaching experience. You are telling the coder what you think is more correct. Sometimes there is a lack of theory and reasons behind the comments, but we should all do that, we should explain the reasons and try to make the process as good as possible.

Do you have a snippet of code, from you or anybody else, that you think is particularly elegant that others could learn from?

I am pretty critical with my own code so I can’t really think about a snippet of code of my own that I am particularly proud enough to show :). But if I have to choose a quick example I was quite happy with the result of the last big refactor of the call log database for the Gaia Dialer app or the recent Mobile Identity API implementation.

What open source projects would you like to encourage people to participate in, and where can they go to get involved?

Firefox OS of course! No, seriously, I believe Firefox OS gives software engineers the chance to get involved in an amazing open source community with tons of technical challenges, from low level to front end code. Having the chance to dig into the guts of a web browser and a mobile operating system in such an open environment is quite a privilege. It may seem hard at first to get involved and jump into the process and the code, but there are already some very nice Firefox OS docs on MDN and a lot of nice people willing to help on IRC (#b2g and #gaia), the mailing lists (dev-b2g and dev-gaia) or ask.mozilla.org.

How can people keep up to date about what you are working on?

I don’t have a blog, but I have my public GitHub account and my Twitter account.

Transcript

A huge thanks to Fernando for doing this interview.

The full transcript is available on GitHub.

Next article

In the next article, I interview Brian Warner from the Cloud Services team. Brian is a security expert who shares his thoughts on architecting for security, analyzing threats, “belts and suspenders”, and writing code that can be audited.

As a parting note, I have had a lot of fun doing these interviews and I would like your input on how to make this series useful. I am also looking for Mozillians to interview. If you would like to nominate someone, even yourself, please let me know! Email me at stomlinson@mozilla.com.

View full post on Mozilla Hacks – the Web developer blog


Write less, achieve meh?

In my keynote at HTML5DevConf in San Francisco I talked about a pattern of repetition those of us who’ve been around for a while will have encountered, too: every few years development becomes “too hard” and “too fragmented” and we need “simpler solutions”.

Photo: Chris in a suit at HTML5DevConf

In the past, these were software packages, WYSIWYG editors and CMSs that promised to deliver “to all platforms without any code overhead”. Nowadays we don’t even wait for snake-oil salesmen to promise us the blue sky. Instead we do this ourselves. Almost every week we release new, magical scripts and workflows that solve all the problems we have for all the new browsers, with great fall-backs for older environments.

Most of these solutions stem from fixing a certain problem and – especially in the mobile space – far too many stem from trying to simulate an interaction pattern of native applications. They do a great job, they are amazing feats of coding skills and on first glance, they are superbly useful.

It gets tricky when problems come up and don’t get fixed. This – sadly enough – is becoming a pattern. If you look around GitHub you find a lot of solutions that promise utterly frictionless development with many an unanswered issue or un-merged pull request. Even worse, instead of filing bugs there is a pattern of creating yet another solution that fixes all the issues of the original one. People simply should replace the old one with the new one.

Who replaces problematic code?

All of this should not be an issue: as a developer, I am happy to discard and move on when a certain solution doesn’t deliver. I’ve changed my editor of choice a lot of times in my career.

The problem is that completely replacing a solution demands a lot of commitment from the implementer. All they want is something that works, and preferably something that fixes the current problem. Many requests on Stackoverflow and other help sites don’t ask for the why, but just want a how: what can I use to fix this right now, so that my boss shuts up? A terrible question that developers of every generation seem to repeat, and one that almost always results in unmaintainable code with lots of overhead.

That’s when “use this and it works” solutions become dangerous.

First of all, these tell developers that there is no need to ever understand what you do. Your job seems to be to get your boss off your back or to make that one thing in the project plan – that you know doesn’t make sense – work.

Secondly, if we find out about issues with a certain solution and consider it dangerous to use (cue all those “XYZ considered dangerous” posts), we should remove it and redirect people to better solutions.

This, however, doesn’t happen often. Instead we keep them around and just add a README that tells people they can use our old code and we are not responsible for results. Most likely the people who have gotten the answer they wanted on the Stackoverflows of this world will never hear how the solution they chose and implemented is broken.

The weakest link?

Another problem is that many solutions rely on yet more abstractions. This sounds like a good plan – after all we shouldn’t re-invent things.

However, it doesn’t really help an implementer on a very tight deadline if our CSS fix needs the person to learn all about Bower, node.js, npm, SASS, Ruby or whatever else first. We can not just assume that everybody who creates things on the web is as involved in its bleeding edge as we are. True, a lot of these tools make us much more efficient and are considered “professional development”, but they are also very much still in flux.

We can not assume that all of these dependencies work and make sense in the future. Neither can we expect implementers to remove parts of this magical chain and replace them with their newer versions – especially as many of them are not backwards compatible. A chain is as strong as its weakest link, remember? That also applies to tool chains.

If we promise magical solutions, they’d better be magical and get magically maintained. Otherwise, why do we create these solutions? Is it really about making things easier or is it about impressing one another? Much like entrepreneurs shouldn’t be in love with being an entrepreneur but instead love their product we should love both our code and the people who use it. This takes much more effort than just releasing code, but it means we will create a more robust web.

The old adage of “write less, achieve more” needs a re-vamp to “write less, achieve better”. Otherwise we’ll end up with a world where a few people write small, clever solutions for individual problems and others pack them all together just to make sure that really everything gets fixed.

The overweight web

This seems to be already the case. When you see that the average web site according to HTTParchive is 1.7MB in size (46% cacheable) with 93 resource requests on 16 hosts then something, somewhere is going terribly wrong. It is as if none of the performance practices we talked about in the last few years have ever reached those who really build things.

A lot of this is baggage of legacy browsers. Many times you see posts and solutions like “This new feature of $newestmobileOS is now possible in JavaScript and CSS – even on IE8!”. This scares me. We shouldn’t block out any user of the web. We also should not take bleeding edge, computationally heavy and form-factor dependent code and give it to outdated environments. The web is meant to work for all, not work the same for all, and certainly not to become slow and heavy for older environments based on some misunderstanding of what “support” means.

Redundancy denied

If there is one thing that this discouraging statistic shows then it is that future redundancy of solutions is a myth. Anything we create that “fixes problems with current browsers” and “should be removed once browsers get better” is much more likely to clog up the pipes forever than to be deleted. Is it – for example – really still necessary to fix alpha transparency in PNGs for IE5.5 and 6? Maybe, but I am pretty sure that of all these web sites in these statistics only a very small percentage really still have users locked into these browsers.

The reason for denied redundancy is that we solved the immediate problem with a magical solution – we can not expect implementers to re-visit their solutions later to see if now they are not needed any longer. Many developers don’t even have the chance to do so – projects in agencies get handed over to the client when they are done and the next project with a different client starts.

Repeating XHTML mistakes

One of the main things that HTML5 was invented for was to create a more robust web by being more lenient with markup. If you remember, XHTML sent as XML (as it should have been, but never was, because IE6 didn’t support that) had the problem that a single HTML syntax error or an un-encoded ampersand would result in an error message and nothing would get rendered.

This was deemed terrible, as our end users get punished for something they can’t control or change. That’s why the HTML5 parsing algorithm of newer browsers is much more lenient and does – for example – close tags for you.

Nowadays, the yellow screen of death showing an XML error message is hardly ever seen. Good, isn’t it? Well, yes, it would be – if we had learned from that mistake. Instead, we now make a lot of our work reliant on JavaScript, resource loaders and many libraries and frameworks.

This should not be an issue – the “JavaScript not available” use case is a very small one and mostly by users who either had JavaScript turned off by their sysadmins or people who prefer the web without it.

The “JavaScript caused an error” use case, on the other hand, is very much alive and will probably never go away. So many things can go wrong, from resources not being available, to network timeouts, mobile providers and proxies messing with your JavaScript up to simple syntax errors because of wrong HTTP headers. In essence, we are relying on a technology that is much less reliable than XML was and we feel very clever doing so. The more dependencies we have, the more likely it is that something can go wrong.

None of this is an issue, if we write our code in a paranoid fashion. But we don’t. Instead we also seem to fall for the siren song of abstractions telling us everything will be more stable, much better performing and cleaner if we rely on a certain framework, build-script or packaging solution.

Best of breed with basic flaws

One eye-opener for me was judging the Static Showdown Hackathon. I was very excited about the amazing entries and what people managed to achieve solely with HTML, CSS and JavaScript. What annoyed me though was the lack of any code that deals with possible failures. Now, I understand that this is hackathon code and people wanted to roll things out quickly, but I see a lot of similar basic mistakes in many live products:

  • Dependency on a certain environment – many examples only worked in Chrome, some only in Firefox. I didn’t even dare to test them on a Windows machine. These dependencies were in many cases not based on functional necessity – instead the code just assumed a certain browser-specific feature to be available and tried to access it. This is especially painful when the solution additionally loads lots of libraries that promise cross-browser functionality. Why use those if you’re not planning to support more than one browser?
  • Complete lack of error handling – many things can go wrong in our code. Simply doing nothing when, for example, loading some data failed, and presenting the user with an infinite loading spinner, is not a nice thing to do. Almost every technology we have has a success and an error return case. We seem to spend all our time in the success one, whilst it is much more likely that we’ll lose users and their faith in the error one. If an error case is not even reported, or is reported as the user’s fault, we’re not writing intelligent code. Thinking paranoid is a good idea. Telling users that something went wrong, what went wrong and what they can do to re-try is not a luxury – it means building a user interface. Any data loading that doesn’t refresh the view should have an error case and a timeout case – connections are the things most likely to fail (a minimal sketch of this follows the list).
  • A lack of very basic accessibility – many solutions I encountered relied on touch alone, and doing so provided incredibly small touch targets. Others showed results far away from the original action without changing the original button or link. On a mobile device this was incredibly frustrating.
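
To make the error-handling point concrete, here is a minimal sketch of paranoid data loading. The URL, the timeout value and the message wording are placeholders, not code from any of the hackathon entries:

function loadData(url, onSuccess, onError) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.timeout = 10000; // connections are the things most likely to fail

  xhr.onload = function () {
    if (xhr.status >= 200 && xhr.status < 300) {
      try {
        onSuccess(JSON.parse(xhr.responseText));
      } catch (e) {
        onError('The server sent data we could not read. Please try again.');
      }
    } else {
      onError('The server answered with an error (' + xhr.status + '). Please try again.');
    }
  };
  xhr.onerror = function () {
    onError('We could not reach the server. Check your connection and retry.');
  };
  xhr.ontimeout = function () {
    onError('This is taking longer than expected. Tap to retry.');
  };

  xhr.send();
}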

Massive web changes ahead

All of this worries me. Instead of following basic protective measures to make our code more flexible and deliver great results to all users (remember: not the same results to all users; this would limit the web) we became dependent on abstractions and we keep hiding more and more code in loaders and packaging formats. A lot of this code is redundant and fixes problems of the past.

The main reason for this is a lack of control on the web. And this is very much changing now. The flawed solutions we had for offline storage (AppCache) and widgets on the web (many, many libraries creating DOM elements) are getting new, exciting and above all control-driven replacements: ServiceWorker and WebComponents.

Both of these are the missing puzzle pieces to really go to town with creating applications on the web. With ServiceWorker we can not only create apps that work offline, but also deal with a lot of the issues we now solve with dependency loaders. WebComponents allow us to create reusable widgets that are either completely new or inherited from another or existing HTML elements. These widgets run in the rendering flow of the browser instead of trying to make our JavaScript and DOM rendering perform in it.
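
As a rough illustration of the ServiceWorker part (the 'sw.js' path is a placeholder, and the API was still in flux at the time of writing), registration is as simple as:

if ('serviceWorker' in navigator) {
  // Register a worker that can intercept requests and serve cached responses offline.
  navigator.serviceWorker.register('sw.js').then(function (registration) {
    console.log('ServiceWorker registered with scope:', registration.scope);
  }).catch(function (error) {
    console.log('ServiceWorker registration failed:', error);
  });
}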

The danger of WebComponents is that they allow us to hide a lot of functionality in a simple element. Instead of just shifting our DOM widget solutions to the new model, this is a great time to clean up what we do, find the best-of-breed solutions and create components from them.

I am confident that good things are happening there. Discussions sparked by the Edge Conference’s WebComponents and Accessibility panels have already resulted in some interesting guidelines for accessible WebComponents.

Welcome to the “Bring your own solution” platform

The web is and stays the “bring your own solution” platform. There are many solutions to the same problem, each with their own problems and benefits. We can work together to mix and match them and create a better, faster and more stable web. We can only do that, however, when we allow the bricks we build these solutions from to be detachable and reusable. Much like glueing Lego bricks together means using them wrong, we should stop creating “perfect solutions” and create sensible bricks instead.

Welcome to the future – it is in the browsers, not in abstractions. We don’t need to fix the problems for browser makers, but should lead them to give us the platform we deserve.

View full post on Christian Heilmann


Write Elsewhere, Run on Firefox OS

Over the last year we’ve been recruiting developers who’ve already built apps for various Open Web and HTML5 platforms: apps for native platforms built with PhoneGap, Appcelerator Titanium or hand-coded wrappers; HTML5 apps built for Amazon Appstore, Blackberry Webworks, Chrome Dev Store, Windows Phone, and WebOS; and C++ apps translated to JavaScript with Emscripten. We’ve supported hundreds of developers and shipped devices all over the world to test and demo Firefox Apps.

If you have a well-rated, successful HTML5 app on any platform we urge you to port it to Firefox OS: We’re betting it will be worth your while. In November, Firefox OS launched in Hungary, Serbia, Montenegro and Greece. Earlier this month we launched in Italy. In 2014, we launch in new locales with new operators, with new devices in new form factors. We anticipate more powerful cross-platform tools in 2014, and closer integration with PhoneGap and Firefox Developer Tools. It is the best of times for first-mover advantage.

Here’s a look at just a few of the successful ports we’re aware of. We spoke with skilled HTML5 app developers who’ve participated in “app port” programs and workshops and asked them to describe their Firefox app porting experience.

Pictured: Captain Rogers, by Andrzej Mazur; Reditr, by Dimitry Vinogradov & David Zorycht; Aquarium Plants, by Diego Lopez

Aquarium Plants (Android w/ hand-coded native wrapper)


App: Aquarium Plants
Developer: Diego Lopez
Original platform: Google Play Store

Time:

Aquarium Plants is written with our own JavaScript code and simple HTML5 code: we just used two common JavaScript frameworks (Zepto.js and TaffyDB.js). It took me about 3 hours or less. Most of the time was reading forums and documentation.

Recommendations:

It is extremely easy to build/port apps for Firefox OS. It is great that we can use web tools and frameworks to create native apps for Firefox OS. I recommend that other developers start developing Firefox Apps; it is almost effortless and a great experience.

Challenges:

The only challenge I had was finding out how to build the package. It was amazing when I realized that building for Firefox OS was as easy as creating a zip with the app files and manifest.

Calc (iOS w/ hand-coded native wrapper)


App: Calc
Developer: Markus Greve
Original platform: iOS w/ hand-coded native wrapper

Time:

It was really easy. That’s why I would sign the claim “Write Elsewhere, Run on Firefox OS.” I think it took about two evenings with night #1 getting the app to run and night #2 doing some fine-tuning for display size of the simulator and Keon.

Afterwards another evening for the app icon and deployment to the Marketplace 😉.

Recommendations:

I would recommend that developers give it a try to see how easy it really is. I think Mozilla.org is doing a great job and your support for developers is excellent.

I already made a short presentation about how to create FirefoxOS Apps as a documentation of my experiences. You can see it here: A Firefox OS app in five minutes (in English). There’s also a blog post called App Entwicklung für den Feuerfuchs that documents my experience in German.

Challenges:

1. I use OSX 10.9 Mavericks and the Simulator crashed with the old plugin and ran more stably with Firefox 1.26b and app-manager.

2. The need to have a subdomain per app — it seems there’s an open issue with this.

3. Data types in JSON Manifest (I tried “fullscreen”: true instead of “fullscreen”: “true”). My fault: Solved by sticking to the documentation.

4. I have the feeling that the icon sizes in the documentation differ from the sizes really needed for the Marketplace. Also, it’s not sensible to offer a Photoshop template for a 60×60 pixel icon when you need much bigger files elsewhere (256×256 for the store, I think). It would be better to provide a 1024×1024 template that users can then shrink.

5. Suggestion: The correlation between navigator.mozApps.checkInstalled() and .install() could be explained more comprehensively.
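
(Editor’s note: for readers with the same question, here is roughly how the two calls relate for a hosted app. The manifest URL is a placeholder and this is a sketch, not official documentation.)

var manifestURL = 'https://example.com/manifest.webapp'; // placeholder

var checkRequest = navigator.mozApps.checkInstalled(manifestURL);
checkRequest.onsuccess = function () {
  if (checkRequest.result) {
    // Already installed; checkRequest.result is the app object.
    console.log('Installed:', checkRequest.result.manifest.name);
  } else {
    // Not installed yet, so trigger the install prompt.
    var installRequest = navigator.mozApps.install(manifestURL);
    installRequest.onerror = function () {
      console.log('Install failed:', installRequest.error.name);
    };
  }
};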

Calcula Hipoteca (Amazon Appstore)


App: Calcula Hipoteca
Developer: Emilio Baena
Original platform: Amazon Appstore

Time:

It was easy. It took me a few hours.

Recommendations:

You are doing good work with this platform.

Challenges:

I had problems with manifest.webapp; I think I missed having more documentation about this type of file.
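
(Editor’s note: for anyone running into the same documentation gap, a minimal manifest.webapp is just a small JSON file. The values below are illustrative placeholders, not the actual manifest of Calcula Hipoteca.)

{
  "name": "Calcula Hipoteca",
  "version": "1.0",
  "description": "Mortgage calculator (placeholder description)",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Emilio Baena",
    "url": "http://example.com"
  },
  "default_locale": "es"
}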

Captain Rogers (HTML5 Desktop)

App: Captain Rogers
Developer: Andrzej Mazur
Original platform: HTML5 Desktop

Time:

The whole process of porting Captain Rogers for the Firefox OS platform took me only two weeks of development with an additional two weeks of testing and bug-fixing. In the core concept the game was supposed to be simple so I could focus on the process of polishing it for Firefox OS devices and publishing it in the Firefox Marketplace.

Challenges:

There weren’t any big problems, because basically building for Firefox OS is just building for the Web itself. Firefox OS devices are the long-awaited hardware for the mobile web — it is built using JavaScript and HTML5 after all.

My main focus on making the game playable on the Firefox OS device was put on the manifest file. I had some problems with getting the orientation to work properly, but during the Firefox OS App Workshop in Warsaw I received great help from Jason Weathersby and at the end of the day the first version of the game was working very smoothly and a few days later it was accepted into the Firefox Marketplace.

For optimizing the code I recommend this handy article by Louis Stowasser and Harald Kirschner — it was very helpful for me. Also, the Web Audio API is still a big problem on mobile, but it works perfectly in Captain Rogers, launched on the Firefox OS phone.
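
(Editor’s note: a common pattern for coping with patchy Web Audio support is to feature-detect before relying on it. The sketch below is illustrative, not code from Captain Rogers.)

// Feature-detect the Web Audio API, falling back gracefully when it is missing.
var AudioContextClass = window.AudioContext || window.webkitAudioContext;

if (AudioContextClass) {
  var audioCtx = new AudioContextClass();
  // ...decode and play sound buffers through audioCtx...
} else {
  // Fall back to <audio> elements, or run the game without sound.
}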

Recommendations:

The main message is simple: if you’re building HTML5 games for Firefox OS, you’re building them for the Open Web. Firefox OS is the hardware platform that the Web needs and in the long run everybody will benefit from it as the standards and APIs are battle-tested directly on the actual devices by millions of people. If you want to know more about HTML5 game development I can recommend my two creations: my Preparing for Firefox OS article and HTML5 Gamedev Starter list. You should also read this great introduction by Austin Hallock. The community is very helpful, so if you have any problems or issues, check out the HTML5GameDevs forums for help and you will definitely receive it. It’s a great time for HTML5 game development, so jump in and have fun!

Cartelera Panamá (Appcelerator Titanium)


App: Cartelera Panamá
Developer: Demóstenes Garcia
Original platform/wrapper: Appcelerator Titanium

Time:

It was pretty easy to get up to speed on the development flow for Cartelera. The development for Firefox OS is easy, since it’s well documented and HTML5 + JavaScript makes it *really* easy to make apps.

Challenges:

Our UX/UI team at Pixmat Studios is very centered on usability, and that’s why we chose Titanium over PhoneGap (better user experience), so our main challenge was to make it look like a real Firefox OS application.

In the end, we decided to go with real Firefox OS Building Blocks, instead of using Titanium for the UI. We decided to:

  • Extract all the business logic for the App (which handles the schedules, movies, listings, etc.) into its own Git repository/module.
    Then we reuse everything on Titanium and Firefox OS, using a native user experience for all three platforms we now support.
  • Another issue we had was to actually implement an OAuth client on our phone in a good way. However Jason Weathersby pointed out good resources on that matter for us.

Recommendations:

Firefox OS and the environment (debug, etc) is easy for a regular web developer, so we didn’t need extra knowledge for making real mobile apps.

  • Use Firefox OS Building Blocks for UI. It’s easy to get a native UI experience with no fuss.
  • Firefox OS at the Mozilla MDN will be your best friend :-).
  • GitHub has some good projects to learn from, so create an account (if you still don’t have one) and look for demo projects.

Fresh Food Finder (PhoneGap)

App: Fresh Food Finder
Developer: Andrew Trice
Original platform/wrapper: PhoneGap

Time:

PhoneGap support is coming for Firefox OS, and in preparation I wanted to become familiar with the Firefox OS development environment and platform ecosystem. So… I ported the Fresh Food Finder, minus the specific PhoneGap API calls. The best part (and this really shows the power of web-standards based development) is that I was able to take the existing PhoneGap codebase, turn it into a Firefox OS app AND submit it to the Firefox Marketplace in under 24 hours!

Recommendations:

Basically, I commented out the PhoneGap-specific API calls, added a few minor bug fixes, and added a few Firefox OS-specific layout/styling changes (just a few minor things so that my app looked right on the device). Then you put in a manifest.webapp configuration file, package it up, and submit it to the Marketplace.

If you’re interested, you can check out progress on Firefox OS support in the Cordova project, and it will be available on PhoneGap.com once it’s actually released. (Editor’s note: Look for release news in early 2014.)

Picross (WebOS)


App: Picross
Developer: Owen Swerkstrom
Original platform: WebOS

Time:

Picross was installed and running on FxOS within minutes of getting my hands on a device.

Challenges:

I did have to make some touch API tweaks, but that just meant some quick reading.

Recommendations:

My favorite quote was “You’re not porting to FirefoxOS, you’re porting to the Web!” Also, Firefox OS (and Firefox browser) Canvas support is sensational. My next game will use Canvas, for sure.

In my case, the “porting” path was really simple. My game already ran in a browser, and it had originally been designed to run on the Palm Pre, a device with the exact same resolution as the lowest-end FirefoxOS phone. But the concept of installing an app involves nothing more than visiting a website: that’s what really made it painless, and that is what’s really unique about FirefoxOS. It has an “official” app marketplace, but unlike every other smartphone ever made, it doesn’t _need_ one! As a user, this is liberating. I can install any web app, from any site I want. As a developer, the story’s even better. I don’t even need Mozilla’s permission to publish an app on this platform. I don’t need to put my device into a special developer mode, or buy any toolkit, or build on any specific platform. If I can put up a website, I’m good to go.

Even though I just called it “unnecessary”, the Firefox Marketplace has given me nothing but good experiences. I don’t need to bundle up, sign, and upload any package — I just enter a URL and type up some descriptive blurbs. Review times have always been short, and are even quicker now than when I started. When I submitted Halloween Artist, it was approved almost immediately, and the reviewer suggested that I write up a blog entry for Mozilla Hacks about how I’d put it together. This is both a traditional “marketplace” with the featured apps etc, and a community of geeks celebrating interesting code and novel ideas. That is absolutely the world I want my apps to live in.

Before I went to the Mozilla workshop, I was considering what approach to take for an action/adventure game that I’m still working on. I was torn between writing some C OpenGL code, waiting for the not-yet-released-at-the-time SDL 2.0, or, as a longshot, doing at least some prototyping in HTML5 canvas. After learning that Canvas operations on a FirefoxOS phone basically go right to the framebuffer, and after trying some experiments and seeing for myself the insane performance I could get out of these cheap little ARM devices, I knew I’d be ditching C for the project and writing the game as a web app. I’d highly recommend investigating Canvas for your next game, even if you’ve mostly done Flash or SDL etc. before.

Reditr (Chrome Dev Store)

App: Reditr
Developer: Dimitry Vinogradov & David Zorycht
Original platform: Chrome Dev Store

Time:

It took us about a week to port everything over from Chrome to Firefox OS.

Challenges:

One challenge we faced when creating Reditr for Firefox OS was the content security policy (CSP). One of the great things about web applications for users is that they can see an entire overview of the permissions that a given app requires. For developers, though, it can take some time to learn how to create a content security policy. For Reditr, it took us several days of trial and error to learn how the security policy affected our app.

Recommendations:

Make sure you understand how your app deals with the content security policy. Check to see if the APIs you use have OAuth support, since this can help remedy CSP issues.
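
(Editor’s note: one concrete effect of the CSP for privileged apps is that inline script and eval() are rejected, so handlers have to move into external script files. The element id below is a placeholder, not code from Reditr.)

// In an external script file, since inline handlers such as onclick="vote()" are blocked by the CSP.
document.addEventListener('DOMContentLoaded', function () {
  var voteButton = document.getElementById('vote'); // placeholder element id
  voteButton.addEventListener('click', function () {
    // ...do the work that used to live in the inline handler...
  });
});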

Speed Cube Timer (Blackberry Webworks)


App: Speed Cube Timer
Developer: Konstantinos Mavrodis
Original platform: Blackberry Webworks

Time:

It took about half an hour to make the exported web app compatible with Firefox OS. (Insanely fast, don’t you think? 🙂 )

Challenges:

No real challenges here that I can recall. Everything was almost a piece of cake!

Recommendations:

My recommendation to fellow devs would be to port their HTML5 apps to Firefox OS as soon as possible! Porting a Web app has never been easier. Firefox OS is a promising OS that shows the great power of Web Development!

Squarez (C++)

App: Squarez
Developer: Patrick Nicolas
Original platform: C++ with Emscripten

Emscripten and C++ usage:

I have always preferred statically typed and compiled languages as they fit better with my way of organizing software. There are several options with Emscripten, and I have decided to write the game logic in C++ and the user interface in HTML/CSS/JavaScript.

All the logic code is in C++, and for multiplayer mode the server can reuse everything, with only roughly 500 additional lines of code.

Time:

The game has always been a web project: the initial development, from scratch, took nearly 1 month. Porting it to be a Firefox application only took a couple of days, in order to make the manifests and icons. Performance tweaking has been a complex task and took nearly as much time as the rest of development.

Challenges:

Squarez is the first game I have ever written and it is the first time I have used Emscripten; that was the first challenge. Writing a web application is not really a challenge any more, because good quality documentation is available everywhere. The second challenge was tweaking performance: once I received the Geeksphone Keon, I saw that the game was so slow that it was almost unplayable.

I used performance tools on the natively compiled code and on both desktop Firefox and Chromium to find the bottlenecks. Because of Emscripten’s JavaScript generation, it was not very easy to track down the faulty functions; I had to deal with partially mangled names in the profile view. The following step was to optimize CSS, and I only found tools to check the time spent in selector processing. It is impossible to know which specific animation is taking time, so I had to use very manual methods: disable individual ones, guess what is taking time and change it blindly.

Recommendations:

My recommendation for other developers is to choose technologies they like; I wanted structured and statically typed code, so Emscripten/C++ was a great solution. I would also discourage creating an application that only works on Firefox OS; instead, take advantage of standards to make it available to any browser and use Web APIs to enhance the application.

The code repository is available at squarez.git. Everything is GPLv3+, fork it if you want.

Touch 12i (Windows Phone)


App: Touch 12i
Developer: Elvis Pfutzenreuter
Original platform: HTML5 for Windows Phone (and other platforms)

Time:

It took one day to do the bulk of the work, and another day for final details, including registration in Marketplace.

Later, I added payment receipt checking which took me the best part of a day since it needs to be well tested. But this effort could be reused readily for another app (Touch 11i).

Challenges:

In general, porting was very easy. I put it to work in one day. The biggest problem was the viewport, which works differently on the other platforms (all WebKit-based). For Firefox, I had to use a CSS transform-based approach to fit the calculator on the screen.
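
(Editor’s note: a transform-based fit typically measures the layout’s design width against the device width and scales the root element. The sketch below is a reconstruction with assumed values, not Elvis’s actual code.)

// Scale a fixed-width layout to the device width; the 320px design width
// and the #calculator id are assumptions for illustration.
function fitToScreen() {
  var designWidth = 320;
  var scale = window.innerWidth / designWidth;
  var root = document.getElementById('calculator');

  root.style.transformOrigin = '0 0';
  root.style.transform = 'scale(' + scale + ')';
}

window.addEventListener('resize', fitToScreen);
fitToScreen();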

Touch 12i is an HTML5 app that runs on Windows Phones and it works in the same way for Android and iOS too: a “shell” of native code with a Web component that cradles the main program. Currently the Firefox version is the only one that is “purely” HTML5. (I used to have a pure Web-based version for iOS as well, but this model has been toned down for iOS.)

I’ve written a detailed blog post called Playing With Firefox OS, about my experience participating in the Phones for App Ports program.

And that’s a wrap. Thanks for reading this far. If you’re a Firefox OS App porter, a web developer or simply a curious reader, we’d love to hear from you. Leave a comment or send us a note at appsdev@mozilla.com.

View full post on Mozilla Hacks – the Web developer blog


Help me write a Developer Evangelism/Advocacy guide

A few years ago now, I spent two afternoons writing down all I knew back then about Developer Evangelism/Advocacy and created the Developer Evangelism Handbook. It has been a massive success in terms of readers, and from what I hear it has helped a lot of people find a new role in their current company or a new job elsewhere. I know for a fact that the handbook is used in a few companies as training material and I am very happy about that. I have also been thanked by a lot of people who are not in a role like this but learned something from the handbook. This made me even happier.

Photo: Frontend United London 2013

With the role of developer evangelist/advocate now widespread, and no longer a fringe part of what we do in IT, I think it is time to give the handbook some love and brush it up into a larger “Guide to Developer Evangelism/Advocacy” by re-writing parts of it and adding new, more interactive features.

For this, I am considering starting a Kickstarter project, as I will have to do it in my free time and I see people making money with things they learned. Ads on the page do not cut it – at all (a common issue when you write content for people who use ad-blockers). That’s why I want to test the waters now to see what you’d want this guide to be like, to make it worth you supporting me.

In order to learn, I put together a small survey about the Guide and I’d very much appreciate literally 5 minutes of your time to fill it out. No, you can’t win an iPad or a holiday in the Caribbean; this is a legit survey.

Let’s get this started, I’d love to hear what you think.

Got comments? Please tell me on Google+ or Facebook or Twitter.

View full post on Christian Heilmann
