Building RTL-Aware Web Apps & Websites: Part 2

Pushing the Web forward means making it better for developers and users alike. It means tackling issues that our present Web faces; this is especially true for making responsive RTL design and development easier to achieve.

Before we move on to bleeding edge RTL topics, here’s a video tutorial with a practical example to refresh your mind on how to code RTL properly today:

 

Picking up where we left off in our last post, it’s time to dive into some advanced topics around Right-To-Left UI implementation. In Part 1 we covered the basics of RTL. In this second post we will cover how-tos for bidirectional (BIDI) user interfaces on the bleeding edge. This is the alpha state of the Web, and we can take a look at what the future of RTL holds in store.

RTL on the bleeding edge

The core of RTL UI design and development on the Web is found in CSS properties and, as with every other aspect of web development, these properties keep improving over time.

Here are the CSS properties you can start using today in alpha versions of several modern browsers (this information is likely to change as browser updates are released):

Text alignment: In the past, we explicitly set the horizontal alignment of text with text-align left or right (e.g. text-align: left). We now have start and end values that can be used instead. For instance, text-align: start will give a different result depending on whether the content is set to RTL or LTR: start means left in LTR and right in RTL, and vice versa for end.

Example using left/right:


#content {
  text-align: left;
}
html[dir="rtl"] #content {
  text-align: right;
}

Example using start/end:


#content {
  text-align: start;
}

With start/end, there’s no need for an RTL rule.
Browser support: Chrome 1.0+, Safari 3.1+, Firefox 3.6+, Android, iOS. Not supported by Internet Explorer at this time.
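Because support is still in flux, it is worth feature-detecting before relying on the logical values from script. A minimal sketch using CSS.supports (a browser API; the variable name is my own):

```javascript
// Detect whether the browser understands the logical `start` value
// before depending on it. CSS.supports only exists in browsers, so
// we guard for environments where it is missing.
var supportsLogicalAlign =
  typeof CSS !== 'undefined' &&
  typeof CSS.supports === 'function' &&
  CSS.supports('text-align', 'start');

if (!supportsLogicalAlign) {
  // fall back to explicit left/right rules, e.g. keyed off html[dir]
}
```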

Padding, margin, and border: there are a couple of improvements regarding these properties.
Each of them can now take an -inline-start or -inline-end suffix; for example, you can write padding-inline-end or margin-inline-start. These new keywords give the same result as start and end in text alignment: padding-inline-start equates to padding-left in LTR and to padding-right in RTL, while margin-inline-end gives the effect of margin-right in LTR and margin-left in RTL.

Example using left/right:


#content {
  padding-left: 12px;
  margin-right: 20px;
}
html[dir="rtl"] #content {
  padding-left: 0;
  padding-right: 12px;
  margin-left: 20px;
  margin-right: 0px;
}

Example using start/end:


#content {
  padding-inline-start: 12px;
  margin-inline-end: 20px;
}

Again, there’s no need for an RTL rule; margin-inline-end will act as margin-right in LTR and act as margin-left for RTL, for example.
Browser support: Firefox 41+ only.

Absolute positioning: left and right are CSS properties that have always been key to absolute positioning, but very soon you’ll be writing much smarter CSS for your users with the new offset-inline-start and offset-inline-end properties.

It is also worth mentioning that top and bottom can be substituted by offset-block-start and offset-block-end.

Example using left/right:


#content {
  position: absolute;
  left: 5rem;
}
html[dir="rtl"] #content {
  left: auto;
  right: 5rem;
}

Example using start/end:


#content {
  offset-inline-start: 5rem;
}

Once again, the start/end concept has saved us a rule in our CSS file.
Browser support: Firefox 41+ only.

Floats: float: left and float: right have been around and used for a long time, but very soon you will be able to use float: inline-start and float: inline-end in Firefox Nightly. inline-start will float your element left in a left-to-right (LTR) layout, and right in a right-to-left (RTL) layout.

Example using left/right:


#content {
  float: left;
}
html[dir="rtl"] #content {
  float: right;
}

Using start/end:


#content {
  float: inline-start;
}

Browser support: Firefox 44 (soon to land in Firefox Nightly).
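Until these logical properties land everywhere, you can approximate the mapping in JavaScript when applying styles programmatically. Here is a sketch covering the -inline- properties discussed above; the toPhysical helper is my own illustration, not part of any standard:

```javascript
// Map a logical CSS property name to its physical equivalent for a
// given text direction ('ltr' or 'rtl'). Covers the padding/margin/
// border -inline-* suffixes and the offset-inline-* positioning
// properties from the sections above. Hypothetical helper only.
function toPhysical(property, dir) {
  var start = dir === 'rtl' ? 'right' : 'left';
  var end = dir === 'rtl' ? 'left' : 'right';
  return property
    .replace(/-inline-start$/, '-' + start)
    .replace(/-inline-end$/, '-' + end)
    .replace(/^offset-/, ''); // offset-inline-start -> left/right
}

toPhysical('padding-inline-start', 'ltr'); // 'padding-left'
toPhysical('margin-inline-end', 'rtl');    // 'margin-left'
toPhysical('offset-inline-start', 'rtl');  // 'right'
```

Float values would need the same start/end swap applied to the value ('inline-start' → 'left' or 'right') rather than the property name.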

Firefox OS gives you a great opportunity to try some of the very latest CSS property implementations — there is a reference page on the Mozilla Wiki for all the new BIDI-relevant CSS properties and values.

Web components: Web components are reusable user interface widgets (see MDN for more information). If you’ve worked with web components you’ll know that CSS styles defined inside the shadow DOM are scoped, so by default you can’t get the direction of the overall page from the component’s scoped CSS using html[dir="rtl"]. However, the shadow DOM introduces a new selector called :host-context(), which can be used to select a shadow host element based on a matching parent element.

Let’s make this clearer with an example — say you have an element inside your shadow DOM tree with a class name of back. In a normal situation where there isn’t a shadow DOM you would just use html[dir="rtl"] .back {}; however, you can’t use this syntax from inside a web component. Scoped and encapsulated, remember? The equivalent selector when .back is inside a web component scope would look like this:


:host-context(html[dir="rtl"]) .back { }

The exact browser support status of this syntax is not clear currently, however you can use this polyfill to plug holes in support.

Want to use a polyfill for shadow CSS but still want to play around with RTL in web components in browsers that already support them, such as Firefox (currently hidden behind the dom.webcomponents.enabled flag) and Google Chrome? You can do this with a Mutation Observer that watches the dir attribute on document.documentElement. Code for the observer would look similar to this example:


var dirObserver = new MutationObserver(updateShadowDir);
dirObserver.observe(document.documentElement, {
  attributeFilter: ['dir'],
  attributes: true
});

Also don’t forget to initialize your function for the first time once the page loads, right after the observer logic:

updateShadowDir();

Your updateShadowDir() function would look something like this:


function updateShadowDir() {
  var internal = myInnerElement.shadowRoot.firstElementChild;
  if (document.documentElement.dir === 'rtl') {
    internal.setAttribute('dir', 'rtl');
  } else {
    internal.removeAttribute('dir');
  }
}

That’s all you should be doing on the JavaScript side — now, since the first child element of your web component has a changing dir attribute, your CSS will depend on it as a selector instead of the HTML element. Say you are using a span — your selector would look like this:

span[dir="rtl"] rest-of-the-selectors-chain { … }

This might look a little hacky, but fortunately the future is brighter, so let’s take a look at what the W3C has in store for us in the future regarding Right-To-Left.

The future of RTL on the Web

Informing the server of text direction: Currently in text inputs and text areas, when a form is submitted, the direction of the page/form elements is lost. Servers then have to come up with their own logic to estimate the direction of the text that was sent to them.

To solve this problem and be more informative about the text being sent as well as about its direction, W3C has added a new attribute to the HTML5 forms spec called dirname, which browser vendors will be implementing in the future.

As the spec states, including the dirname attribute “on a form control element enables the submission of the directionality of the element, and gives the name of the field that contains this value during form submission. If such an attribute is specified, its value must not be the empty string.”

Translation: Adding this attribute to your <input> or <textarea> makes its direction get passed along with the GET or POST string sent to the server. Let’s see a real-world example — first, say you have this form:

<form action="result.php" method="post">
<input name="comment" type="text" dirname="comment.dir"/>
<button type="submit">Send!</button>
</form>

After submitting the form, the POST string sent to the server will include a field called comment, and another called comment.dir. If the user types Hello into the text field and submits it, the result string will be:
comment=Hello&comment.dir=ltr

But if they type “مرحبا” (“Hello” in Arabic), it will look like this:

comment=%D9%85%D8%B1%D8%AD%D8%A8%D8%A7&comment.dir=rtl

Read the spec here.
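On the server, the extra field means you no longer need direction-guessing heuristics. A sketch of the receiving side in JavaScript (the readComment helper is my own; the field names match the form above):

```javascript
// Parse a submitted form body and pick up the direction that the
// dirname attribute submitted alongside the text. URLSearchParams
// decodes the percent-encoded UTF-8 for us.
function readComment(body) {
  var params = new URLSearchParams(body);
  return {
    text: params.get('comment'),
    dir: params.get('comment.dir') || 'ltr' // fall back if dirname was absent
  };
}

readComment('comment=Hello&comment.dir=ltr');
// { text: 'Hello', dir: 'ltr' }
```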

Note: So that Arabic characters survive in URLs, they are percent-encoded as UTF-8: each Arabic letter becomes two bytes, and each byte is written as a % sign followed by two hexadecimal digits (six characters per letter in total).

For example, if you have this link: http://ar.wikipedia.org/wiki/نجيب_محفوظ

When you visit it, the link in your browser will be transformed to

http://ar.wikipedia.org/wiki/%D9%86%D8%AC%D9%8A%D8%A8_%D9%85%D8%AD%D9%81%D9%88%D8%B8
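That transformation is exactly what JavaScript’s encodeURIComponent performs, so you can reproduce or reverse it yourself:

```javascript
// Percent-encode the Arabic path segment the same way the browser
// does when you visit the link (UTF-8 bytes, each written as %XX).
var encoded = encodeURIComponent('نجيب_محفوظ');
// encoded === '%D9%86%D8%AC%D9%8A%D8%A8_%D9%85%D8%AD%D9%81%D9%88%D8%B8'

// ...and decode it back for display:
var decoded = decodeURIComponent(encoded);
// decoded === 'نجيب_محفوظ'
```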

Better ways to manage backgrounds and images in RTL: Tired of using transform: scaleX(-1) to mirror your background images? The W3C is introducing a new CSS keyword called rtlflip, which is used like so:

background-image: url(backbutton.png) rtlflip;

This keyword saves you the hassle of manually mirroring your images through the transform: scaleX(-1) property.
It can also be used to style list item markers like this:

list-style-image:url('sprite.png#xywh=10,30,60,20') rtlflip;

The spec page also states that this keyword can be used along with all of the other possible ways that an image can be specified in CSS3 Images, e.g. url, sprite, image-list, linear-gradient, and radial-gradient.

Internationalization APIs

We covered the JavaScript Internationalization APIs before in an introductory post. Let’s recap the important lesser-known parts relevant in the context of working on right-to-left content.

Internationalization API support covers a wide variety of languages and their derivatives. For example it not only supports Arabic as a language, but also Arabic-Egypt, Arabic-Tunisia, and the whole list of other variations.

Using Eastern Arabic numerals instead of Western Arabic: In some Arabic variants, Eastern Arabic numerals (١٢٣) are used instead of Western Arabic ones (123). The Internationalization APIs allow us to specify when to show ١٢٣ and when to show 123.

console.log(new Intl.NumberFormat('ar-EG').format(1234567890));

This states that we want to format 1234567890 to Arabic-Egypt numbers.
Output: ١٬٢٣٤٬٥٦٧٬٨٩٠

console.log(new Intl.NumberFormat('ar-TN').format(1234567890));

Here we’re saying we want to output the same number in Arabic Tunisia format; since Tunisia uses Western Arabic, the output is: 1234567890
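If you need to force one numbering system regardless of the locale’s default, the -u-nu- Unicode locale extension lets you request it explicitly:

```javascript
// Force Western (Latin) digits even in the Arabic-Egypt locale:
new Intl.NumberFormat('ar-EG-u-nu-latn').format(1234567890);
// -> Western digits (0-9)

// Force Eastern Arabic digits in an otherwise Western locale:
new Intl.NumberFormat('en-US-u-nu-arab').format(1234567890);
// -> Eastern Arabic digits
```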

Showing dates in the Hijri/Islamic calendar instead of the Gregorian: During the month of Ramadan, Google launched google.com/ramadan to show content related to the occasion. One of the essential pieces of information shown was the day of the month in the Hijri calendar. Google used a polyfill to get this information; with the Internationalization APIs you can extract the same information with one line of JavaScript instead of a whole JavaScript library.
Here’s an example:

console.log(new Intl.DateTimeFormat("fr-FR-u-ca-islamicc").format(new Date()));

This line states that we want to format the current date to the Islamic calendar and show it in fr-FR numerals (Western Arabic). Output: 16/12/1436

console.log(new Intl.DateTimeFormat("ar-SA-u-ca-islamicc").format(new Date()));

This is doing the same thing, but formatted for display in the Arabic-Saudi-Arabia locale, which uses Eastern Arabic numerals. Output: ١٦/١٢/١٤٣٦

Final words

In this article, we wrap up our in-depth look at doing RTL on the Web the right way. We hope this has been helpful in encouraging you to get started adding RTL support to your websites and products! If you are a Firefox OS contributor, this will be your guide for adding RTL support to Gaia.

As always, let us know in the comments below if you have any questions!

View full post on Mozilla Hacks – the Web developer blog


Of impostor syndrome and running in circles (part 3)

These are the notes of my talk at SmartWebConf in Romania. Part 1 covered how Impostor Syndrome cripples us in using what we hear about at conferences. It covered how our training and onboarding focuses on coding instead of human traits. In Part 2 I showed how many great things browsers do for us that we don’t seem to appreciate. In this final part I’ll explain why this is and why it is a bad idea. This is a call to action to make yourself feel better, and to get more involved without feeling inferior to others.

This is part 3 of 3.

Part 2 of this series ended with the explanation that JavaScript is not fault tolerant, and yet we rely on it for almost everything we do. The reason is that we want to control the outcome of our work. It feels dangerous to rely on a browser to do our work for us. It feels great to be in full control. We feel powerful being able to tweak things to the tiniest detail.

JavaScript is the duct-tape of the web

There is no doubt that JavaScript is the duct-tape of the web: you can fix everything with it. It is a programming language and not a descriptive language or markup. We have all kinds of logical constructs to write our solutions in. This is important. We seem to crave programmatic access to the things we work with. That explains the rise of CSS preprocessors like Sass. These turn CSS into JavaScript. Lately, PostCSS even goes further in merging these languages and ideas. We like detailed access. At the same time we complain about complexity.

No matter what we do – the problem remains that on the client side JavaScript is unreliable, because it is fault intolerant. Any single error – even those not caused by you – results in our end users getting an empty screen instead of the solution they came for. There are many ways JavaScript can fail. Stuart Langridge maintains a great flow chart on that called “Everyone has JavaScript, right?”.
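One way to keep a single failing enhancement from blanking the whole experience is to treat every enhancement as optional: run it guarded, and leave the working HTML alone when it throws. A minimal sketch (the enhance helper is illustrative, not from any library):

```javascript
// Run an enhancement defensively: if it throws, warn and carry on,
// leaving the underlying HTML/CSS behaviour intact instead of
// killing the rest of the script.
function enhance(name, fn) {
  try {
    fn();
    return true;
  } catch (err) {
    console.warn('Enhancement "' + name + '" failed, falling back:', err.message);
    return false;
  }
}

enhance('fancy-scroll', function () {
  throw new Error('third-party script missing');
}); // returns false; the page keeps working
```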

There is a bigger issue with fixing browser issues with JavaScript. It makes you responsible and accountable for things browsers do. You put it onto yourself to fix the web; now it is your job to keep doing that – for ever, and ever and ever…

Taking over things like page rendering, CSS support and page loading with JavaScript feels good as it fixes issues. Instead of a slow page load we can show a long spinner. This makes us feel good, but doesn’t help our end users much. Especially when the spinner has no timeout error case – like browser loading has.

Fixing a problem with JavaScript is fun. It looks simple enough and it removes an unknown browser support issue. It allows us to concentrate on building bigger and better things. And we don’t have to worry about browser issues.

It is an inspiring feeling to be the person who solved a web-wide issue. It boosts our ego to see that people rely on our brains to solve issues for them. It is great to see them become more effective and faster and free to build the next Facebook.

It gets less amazing when you want to move on and do something else. And when people have outrageous demands or abuse your system. Remy Sharp recently released a series of honest and important blog posts on that matter. “The toxic side of free” is a great and terrifying read.

Publishing something new in JavaScript as “open source” is easy these days. GitHub made it more or less a one-step process. And we get a free wiki, issue tracker and contribution process with it to boot. That, of course, doesn’t mean we can’t make this much more complex if we want to. And we do, as Eric Douglas explains.

open source is free as in puppy

Releasing software or solutions as open source is not the same as making it available for free. It is the start of a long conversation with users and contributors. And that comes with all the drama and confusion that is human interaction. Open Source is free as in puppy. It comes with responsibilities. Doing it wrong results in a badly behaving product and community around it.

Help stop people falling off the bleeding edge

300 cliff

If you embrace the idea that open source and publishing on the web is a team effort, you realise that there is no need to be on the bleeding edge. On the contrary – any “out there” idea needs a group of peers to review and use it to get data on how sensible the idea really is. We tend to skip that part all too often. Instead of giving feedback or contributing to a solution we discard it and build our own. This means all we have to do is deal with code and not people. It also means we pile on to the already unloved and unused “amazing solutions” for problems of the past that litter the web.

The average web page is 2MB with over 100 HTTP requests. The bulk of this is images, but there are also a lot of magical JS and CSS solutions in the mix.

If we consider that the next growth of the internet is not in the countries we are in, but in emerging places with shaky connectivity, our course of action should be clear: clean up the web.

Of course we need to innovate and enhance our web technology stack. At the same time it is important to understand that the web is an unprecedented software environment. It is not only about what we put in, it is also about what we can’t afford to lose. And the biggest part of this is universal access. That also means it needs to remain easy to turn from consumer into creator on the web.

If you watch talks about internet usage in emerging countries, you’ll learn about amazing numbers and growth predictions.

You also learn about us not being able to control what end users see. Many of our JS solutions will get stripped out. Many of our beautiful, crafted pictures will be optimised into a blurry mess. And that’s great. It means the users of the web of tomorrow are as empowered as we were when we stood up and fought against browser monopolies.

So there you have it: you don’t have to be the inventor of the next NPM module to solve all our issues. You can be, but it shouldn’t make you feel bad that you’re not quite interested in doing so. As Bill Watterson of Calvin and Hobbes fame put it:

We all have different desires and needs, but if we don’t discover what we want from ourselves and what we stand for, we will live passively and unfulfilled.

So, be active. Don’t feel intimidated by how clever other people appear to be. Don’t be discouraged if you don’t get thousands of followers and GitHub stars. Find what you can do, how you can help the merging of bleeding edge technologies and what goes into our products. Above all – help the web get leaner and simpler again. This used to be a playground for us all – not only for the kids with the fancy toys.

You do that by talking to the messy kids. Those who build too complex and big solutions for simple problems. Those doing that because clever people told them they have to use all these tools to build them. The people on the bleeding edge are too busy to do that. You can. And I promise, by taking up teaching you end up learning.

View full post on Christian Heilmann


Of impostor syndrome and running in circles (part 2)

These are the notes of my talk at SmartWebConf in Romania. Part 1 covered how Impostor Syndrome cripples us in using what we hear about at conferences. It also covered how our training and onboarding focuses on coding. And how it lacks in social skills and individuality. This post talks about the current state of affairs: we have a lot of great stuff to play with, but instead of using it we always chase the next new thing.

This is part 2 of 3.

Lunch eaten by native: news at 11

When reading about the state of the web there is no lack of doom and gloom posts. Native development is often quoted as “eating our lunch”. Native-only interaction models are sold to us as things “people use these days”. Many of them are dependent on hardware or protected by patents. But they look amazing and in comparison the web seems to fall behind.

The web doesn’t need to compete everywhere

This is true, but it is also not surprising. Flash showed many things that were possible that HTML/CSS/JS couldn’t do. Most of these were interesting experiments. They looked like a grand idea at the time. And they went away without an outcry of users. Comparing what a native environment has with what we do on the web is a comparison the web can’t win. And it shouldn’t try to.

The web, by definition, is independent of hardware and interaction model. Native environments aren’t – on the contrary. Success on native is about strict control. You control the interaction, the distribution and what the user can and can’t see. You can lock out users and not let them get to the next level. Unless they pay for it or buy the next version of your app or OS. The web is a medium that puts the user in control. Native apps and environments do not. They give users an easy-to-digest experience. An experience controlled by commercial ideas and company goals. Yes, the experience is beautiful in a lot of cases. But all you get is a perishable good. The maintainer of the app controls what stays in older versions and when you have to pay for the next version. The maintainers of the OS dictate what an app can and cannot do. Any app can close down and take your data with it. This is much harder on the web as data gets archived and distributed.

The web’s not cool anymore – and that’s OK

Evolution happens and we are seeing this right now. Browsers on desktop machines are not the be-all and end-all of human-computer interaction. That is one way of consuming and contributing to the web. The web is ubiquitous now. That means it is not as exciting for people as it was for us when we discovered and formed it. It is plumbing. How much do you know about the electricity and water grid that feeds your house? You never cared to learn about this – and this is exactly how people feel about the web now.

This doesn’t mean the web is dead – it just means it is something people use. So our job should be to make that experience as easy as possible. We need to provide a good service people can trust and rely on. Our aim should be reliability, not flights of fancy.

It is interesting to go back to the promises HTML5 gave us. Back when it was the big hype and replacement for Flash/Flex. When you do this, you’ll find a lot of great things that we now have without realising it. We complained when they didn’t work, and now that we have them – nobody seems to use them.

Re-visiting forms

Take forms for example. You can see the demos I’m about to show here on GitHub.

When it comes down to it, most “apps” in their basic form are just this: forms. You enter data, you get data back. Games are the exception to this, but they are only a small part of what we use the web for.

When I started as a web developer forms meant you entered some data. Then you submitted the form and you got an error message telling you what fields you forgot and what you did wrong.

<form action="/cgi-bin/formmail.pl">
  <ul class="error">
    <li>There were some errors:
      <ul>
        <li><a href="#name">Name is required</a></li>
        <li><a href="#birthday">Birthday needs to 
          be in the format of DD/MM/YYYY</a></li>
        <li><a href="#phone">Phone can't have 
          any characters but 0-9</a></li>
        <li><a href="#age">Age needs to be 
          a number</a></li>
      </ul>
    </li>
  </ul>
  <p><label for="name">Contact Name *</label>
     <input type="text" id="name" name="name"></p>
  <p><label for="bday">Birthday</label>
     <input type="text" id="bday" name="bday"></p>
  <p><label for="lcolour">Label Colour</label>
     <input type="text" id="lcolour" name="lcolour"></p>
  <p><label for="phone">Phone</label>
     <input type="text" id="phone" name="phone"></p>
  <p><label for="age">Age</label>
     <input type="text" id="age" name="age"></p>
  <p class="sendoff">
    <input type="submit" value="add to contacts">
  </p>   
</form>

form with error links

This doesn’t look like much, but let’s just remember a few things here:

  • Using labels we make this form available to all kinds of users, independent of ability
  • You create a larger hit target for mobile users. A radio button with a label next to it means users can tap the word instead of trying to hit the small round interface element.
  • As you use IDs to link labels and elements (unless you nest one in the other), you also have a free target to link to in your error links
  • With a submit button you enable users to either hit the button or press Enter to send the form. If you use your keyboard, that’s a pretty natural way of ending the annoying data-entry part.

Nothing ground-breaking, I admit, but a lot of useful functionality. Functionality you’d have to simulate if you did it all with SPANs and DIVs. And all this without a single line of JavaScript.

Enter JavaScript

Then we got JavaScript. This enabled us to create higher-fidelity forms. Forms that tell the user when something went wrong before submitting. No more unnecessary page reloads. We started to build richer interaction models, like forms with optional fields depending on the content of others. In my 2006 book Beginning JavaScript with DOM Scripting and Ajax I had a whole chapter dedicated to forms (code examples are here). All of these enhancements had the problem that when JavaScript for some reason or another didn’t work, the form would still happily submit data to the server. That meant, and still means, that you have to validate on the server in addition to relying on client-side validation. Client-side validation is a nice-to-have, not a security measure.
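A stripped-down version of those client-side checks, mirroring the error messages in the form above, could look like this (the validators object is a sketch of my own, not code from the book):

```javascript
// Client-side checks mirroring the server's rules -- a convenience,
// never a substitute for validating again on the server.
var validators = {
  name: function (v) { return v.trim() !== '' || 'Name is required'; },
  bday: function (v) {
    return /^\d{2}\/\d{2}\/\d{4}$/.test(v) ||
      'Birthday needs to be in the format of DD/MM/YYYY';
  },
  phone: function (v) {
    return /^[0-9]*$/.test(v) ||
      "Phone can't have any characters but 0-9";
  },
  age: function (v) { return !isNaN(Number(v)) || 'Age needs to be a number'; }
};

// Collect all error messages for a submitted data object.
function validate(data) {
  var errors = [];
  Object.keys(validators).forEach(function (field) {
    var result = validators[field](data[field] || '');
    if (result !== true) { errors.push(result); }
  });
  return errors;
}
```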

Enter HTML5 and browser convenience features

HTML5 supercharged forms. One amazing thing is the required attribute we can put on any form field to make it mandatory and stop the form from submitting. We can define patterns for validation and we have higher fidelity form types that render as use-case specific widgets. If a browser doesn’t support those, all the end user gets is an input field. No harm done, as they can just type the content.

In addition to this, browsers added conveniences for users. Browsers remember content for aptly named and typed input elements so you don’t have to type in your telephone number repeatedly. This gives us quite an incredible user experience. A feature we fail to value as it appears so obvious.

Take this example.

<form action="/cgi-bin/formmail.pl">
  <p><label for="name">Contact Name *</label>
     <input type="text" required id="name" name="name"></p>
  <p><label for="bday">Birthday</label>
     <input type="date" id="bday" name="bday" 
     placeholder="DD/MM/YYYY"></p>
  <p><label for="lcolour">Label Colour</label>
     <input type="color" id="lcolour" name="lcolour"></p>
  <p><label for="phone">Phone</label>
     <input type="tel" id="phone" name="phone"></p>
  <p><label for="age">Age</label>
     <input type="number" id="age" name="age"></p>
  <p class="sendoff">
    <input type="submit" value="add to contacts">
  </p>   
</form>

animation showing different form elements and auto-filling by browsers

There’s a lot of cool stuff happening here:

  • I can’t send the form without entering a contact name. This is what the required attribute does. No JavaScript needed here. You can even change the error message or intercept it.
  • The birthday date field has a placeholder telling the user what format is expected. You can type a date in or use the up and down arrows to enter it. The form automatically realises that there is no 13th month and that some months have fewer than 31 days. Other browsers even give you a full calendar popup.
  • The colour picker is just that – a visual, high-fidelity colour picker (yes, I keep typing this “wrong”)
  • The tel and number types not only limit the allowed characters, but also switch to the appropriate on-screen keyboards on mobile devices.
  • Any erroneous field gets a red border around it – I can’t even remember how many times I had to code this in JavaScript. This is even style-able with the :invalid selector.

That’s a lot of great interaction we get for free. What about cutting down on the display of data to make the best of limited space we have?

Originally, this is what we had select boxes for, which render well, but are not fun to use. As someone living in England and having to wonder if it is “England”, “Great Britain” or “United Kingdom” in a massive list of countries, I know exactly how that feels. Especially on small touch/stylus devices they can be very annoying.

<form action="/cgi-bin/formmail.pl">
<p>
  <label for="lang">Language</label>
  <select id="lang" name="lang">
    <option>arabic</option>
    <option>bulgarian</option>
    <option>catalan</option>
    […]
    <option>kinyarwanda</option>
    <option>wolof</option>
    <option>dari</option>
    <option>scottish_gaelic</option>
  </select>
</p>  
<p class="sendoff">
  <input type="submit" value="add to contacts">
</p>   
</form>

scrolling through a select box

However, as someone who uses the keyboard to navigate through forms, I learned early enough that these days select boxes have become more intelligent. Instead of having to scroll through them by clicking the tiny arrows or using the arrow keys you can start typing the first letter of the option you want to choose. That way you can select much faster.

typing in a select box

This only works with words beginning with the letter sequence you type. A proper autocomplete should also match character sequences in the middle of an option. For this, HTML5 has a new element called datalist.

<form action="/cgi-bin/formmail.pl">
  <p>
    <label for="lang">Language</label>
    <input type="text" name="lang" id="lang" list="languages">
    <datalist id="languages">
      <option>arabic</option>
      <option>bulgarian</option>
      <option>catalan</option>
      […]
      <option>kinyarwanda</option>
      <option>wolof</option>
      <option>dari</option>
      <option>scottish_gaelic</option>
    </datalist>
  </p>  
  <p class="sendoff">
    <input type="submit" value="add to contacts">
  </p>   
</form>

This one extends an input element with a list attribute and works like you expect it to:

datalist autocomplete example

There is an interesting concept here. Instead of giving the select box the same feature and rolling both up into the combo box known from other UI libraries, the HTML5 working group chose to enhance an input element. This is consistent with the other new input types.

However, it feels odd that for browsers that don’t support the datalist element all this content in the page would be useless. Jeremy Keith thought the same and came up with a pattern that allows for a select element in older browsers and a datalist in newer ones:

<form action="/cgi-bin/formmail.pl">
  <p>
    <label for="lang">Language</label>
    <datalist id="languages">
    <select name="lang">
        <option>arabic</option>
        <option>bulgarian</option>
        <option>catalan</option>
        […]
        <option>kinyarwanda</option>
        <option>wolof</option>
        <option>dari</option>
        <option>scottish_gaelic</option>
      </select>
    </datalist>
    <div>or specify: </div>
    <input type="text" name="lang" id="lang" list="languages">
  </p>  
  <p class="sendoff">
    <input type="submit" value="add to contacts">
  </p>   
</form>

This works as a datalist in HTML5 compliant browsers.

datalist autocomplete example working

In older browsers, you get a sensible fallback, re-using all the option elements that are in the document.

datalist autocomplete example failing and falling back to a select box

This is not witchcraft, but is based on a firm understanding of how HTML and CSS work. Both these are fault tolerant. This means if a mistake happens, it gets skipped and the rest of the document or style sheet keeps getting applied.

In this case, older browsers don’t know what a datalist is. All they see is a select box and an input element as browsers render content of unknown elements. The unknown list attribute on the input element isn’t understood, so the browser skips that, too.

HTML5 browsers see a datalist element. Per the standard, all it may contain is option elements. That’s why neither the select nor the fallback text gets rendered. They are not valid there, so the browser ignores them. Everybody wins.
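If you also want to branch in script (for example, to load a polyfill), a Modernizr-style feature test for datalist support can be sketched like this. The helper below is an illustrative assumption, not part of Keith’s pattern:

```javascript
// Sketch: detect datalist support by checking for the input element's
// `list` IDL attribute (a common Modernizr-style test).
// Takes the document as a parameter so the sketch stays self-contained.
function supportsDatalist(doc) {
  return 'list' in doc.createElement('input');
}

// In a browser you would call: supportsDatalist(document)
```

In a supporting browser this returns true; in the older browsers discussed above it returns false, and you could then enhance the select element instead.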

A craving for control

Browsers and the standards they implement are full of clever and beautiful things like that these days. And we loudly and angrily demanded to have them when they got defined. We tested, we complained, we showed what needed to be done to make tomorrow work today, and then we forgot about it and moved on to chase the next innovation.

How come that happens repeatedly? Why don’t we stop at some point and see how many great toys we have to play with? The answer is pretty simple: control.

We love to be in control and we like to call the shots. That’s why we continuously try to style form elements and create our own sliders, scroll bars and dropdown boxes. That’s why we use a standard’s lack of support in a few outdated browsers as an excuse to discard it completely and write a JavaScript solution instead. We don’t have time to wait for browsers to get their act together, so it is up to us to save the day. Well, no, it isn’t.

Just because you can do everything with JavaScript doesn’t mean that you should. We invented HTML5 as the successor and replacement of XHTML because XHTML had one flaw: a single wrong encoding or nesting of elements and the document wouldn’t render. End users would be punished for our mistakes. That’s why HTML5 is fault tolerant.

JavaScript is not. Any mistake means the end user looks at an endless spinner or an empty page. This isn’t innovation, this is building on a flaky foundation. And whilst we do that, we forget to revisit the foundation we already have in browsers and standards support. This is a good time to do that. You’d be surprised how many cool things you’ll find that you only thought possible with a JavaScript UI framework.

But more on that in part 3 of this post.

View full post on Christian Heilmann


Building RTL-Aware Web Apps & Websites: Part 1

Making the web more accessible to more people, in more languages, is an ongoing effort and a mission we take very seriously at Mozilla.

This post is the first of a series of articles to explain one of the most neglected and least well-known corners of web development: RTL (right-to-left) development. In a web development context it means making your web content compatible with RTL languages like Arabic, Hebrew, Persian, and Urdu, which are all written from right to left.

This area is often neglected by developers because of the lack of teaching resources. In this series, we hope to provide enough information to allow developers to make content that’s accessible to a broader global audience using the RTL capabilities that most web browsers currently provide.

What actually is RTL?

To make things clear, there is a difference between a language and its script. A script is the written form of a language, while the language itself refers to the spoken form. So technically, right-to-left describes the script in which a language is written, as opposed to left-to-right scripts such as the Latin script used for English or French, which are more familiar to many Western readers.

For example, “Marhaban” – which means hello – is a word written in the Latin script but with Arabic as its language, while “مرحبا” is both Arabic script and Arabic language.

On the web, as we said earlier, the term Right-To-Left indicates more than just a script for writing. It stands for all facets of developing web apps and websites that do not break or become unreadable when used with Arabic, Urdu or any other RTL script.

Before continuing, let’s just clear up some misconceptions about RTL.

First, RTL is not about translating text to Arabic: it means more than translating your website into Arabic and calling it a day. It’s about making every aspect of the UI and layout RTL-friendly. And as I always say – and can’t stress it enough – do RTL right or don’t bother! Doing it halfway will lose your audience and credibility.

Second, RTL is more than just “flip all the things”: I’m not sure if this issue has been fixed yet or not, but setting your locale to Arabic in Gnome will cause the time to be shown as PM 33:12 instead of 12:33 PM.

Well, that is not how it works. I’m a native Arabic speaker but that doesn’t mean I tell time backwards. There are some exceptions and things to pay attention to about numbers, and we will cover them in this series.

Why should I care about RTL?

You might have the impression that RTL is hard, scary, and will add a great deal of work to your project! Really, it’s not that difficult. Once you comprehend the basics, the extra effort required is not that great.

You should care about adding right-to-left support to your web app or website for many reasons. There are over 410 million native RTL speakers around the world as of 2010 (that’s a lot of people! Note: this assessment is based on Wikipedia’s list of all languages). That’s millions of potential users and a huge potential market for your app or website. Most companies now add RTL support to their software (e.g. Windows, iOS, Android) and websites, so native RTL speakers have come to expect the same of your website. Without this support, visitors to your web application may not become repeat users.

In the case of Firefox OS, we only started shipping devices to the Middle East once we had RTL support fully implemented in the system, and thus our partner Orange was able to sell devices with the OS on them. This obviously opens up a whole new market of users to Firefox OS.

Best practices for RTL

Here are some general rules of thumb to keep in mind when developing RTL-aware websites and web apps:

  • Think from the right: In a region with an RTL script and language, books open from the right-hand side. Thus, most UI elements related to readability are going to be mirrored.
  • Hardware legacy UI is a thing: In MENA (Middle East and North Africa) and other regions where RTL languages are spoken, people use gadgets too. Audio hardware has the same control layout as anywhere else in the world, so buttons like Play, Fast Forward, Next, and Previous have always been the same. Because of that, it’s not a good idea to mirror any of those buttons. Please don’t RTL the media player buttons. Another example of hardware legacy is the physical feature phone keypad: before there were Blackberries and similar keyboards for handhelds, there was the simple physical numpad, and until very recently it was widely popular in MENA countries. For this reason, it is strongly advised *not* to mirror the number keys. A good example of this is Firefox OS: as its initial target was developing countries, we made sure not to mirror the dialer in RTL.
  • Thinking Right-To-Left is not thinking left-handed: Remember not to confuse developing RTL UIs with developing left-handed-first ones. In RTL regions of the world, right-handed people are the majority, just like in the rest of the world. So anything that is not related to how the user reads the actual content on the page should not be mirrored. An example of this is the A-Z scrollbar in the Firefox OS Contacts app (image to the right). It is placed on the right because it’s easier to scroll through with the right hand, so such a thing should not move to the other side when your page is in RTL mode.

Down to business: How to RTL content

The first step towards making a web page RTL is adding the code dir="rtl" to the <html> tag:

<html dir="rtl">

dir stands for direction and setting its value to rtl defaults the horizontal starting point of elements to the right instead of the left.

To target an element with CSS only when the page is RTL:

  1. Create a copy of any regular CSS rules that target it.
  2. Add an html[dir="rtl"] attribute selector onto the front of the selector chain.
  3. Whenever you find a property or a value in the RTL rule that represents horizontal positioning of some kind, use its opposite. For example, if you find float: left, change it to float: right.

As an example, if the original rule is as follows —

.someClass {
    text-align: left;
    padding: 0 10px 0 0;
    text-decoration: underline;
}

we would paste a copy of it after the original and update it as follows:

html[dir="rtl"] .someClass {
    text-align: right;
    padding: 0 0 0 10px;
}

Notice that the text-decoration: underline; declaration has been removed. We don’t need to override it for the RTL version because it doesn’t affect any direction-based layout elements. Thus, the value from the original rule will still be applied.

Here’s an even more detailed example you can hack on directly:

See the Pen dYGKQZ by Ahmed Nefzaoui (@anefzaoui) on CodePen.

Of course, real-life cases won’t be quite as simple as this. There are applications with a huge number of CSS rules and selectors, and there are multiple strategies you could adopt. Here is one recommended approach:

  1. Copy and clean up: First, copy the content of your stylesheet to another file, add html[dir="rtl"] to all the rules, then clear out any properties that do not relate to horizontal direction setting. You should end up with a more lightweight file that deals purely with RTL.
  2. Mirror the styles: Change all of the properties left in the new file to their opposites: e.g. padding-left becomes padding-right, float: right becomes float: left, etc. Note: if you originally had padding-left and you changed it to padding-right, remember that the original padding-left still exists in the original rule. You must add padding-left: unset alongside padding-right, otherwise the browser will apply both properties: the padding-left from the original rule and the new padding-right from the RTL rule. The same is true for any property that has multiple direction variants, such as margin-left|right and border-left|right.
  3. Paste back: After you’ve completed the previous steps, paste the newly created rules back into the original file, after the originals. I personally like to add a little comment afterwards such as: /* **** RTL Content **** */ for the sake of easier differentiation between the two parts of your style file.

A better way: isolating left/right styles completely

Sometimes you will find yourself facing edge cases that produce conflicting code. Properties inherit values from other rules, which means you won’t always be sure what to override and what to leave alone. Maybe you’ve added margin-right to the RTL rule but are not sure what value to set for the original margin-left? For real-world cases like this it is advisable to adopt another approach, which is better in general but more long-winded: isolate the direction-based properties completely from the rules, for both the LTR and the RTL cases. Here’s an example: say we have .wrapper .boxContainer .fancyBox listed like this:

.wrapper .boxContainer .fancyBox {
    text-align: left;
    padding-left: 10px;
    text-decoration: underline;
    color: #4A8CF7;
}

Instead of adding another property for RTL with both padding-left and padding-right, you can do this:

.wrapper .boxContainer .fancyBox {
    text-decoration: underline;
    color: #4A8CF7;
}

html[dir="ltr"] .wrapper .boxContainer .fancyBox {
    text-align: left;
    padding-left: 10px;
}

html[dir="rtl"] .wrapper .boxContainer .fancyBox {
    text-align: right;
    padding-right: 10px;
}

This solution consists of 3 parts:

  1. The original rule/selector with only non-direction based properties, because they are shared by the LTR and the RTL layout.
  2. The left to right case — html[dir="ltr"] — with only the direction-based CSS properties included, their values set to match your LTR layout.
  3. The right to left case — html[dir="rtl"] — with the same properties as the LTR case, only with their values set according to what you want in your RTL layout.

Notice that in the second rule we had padding-left only and in the third one we had padding-right only. That’s because each one of them is exclusive to the direction that was given to them in the dir attribute. This is a nice, clean approach that doesn’t require unsetting of properties. This is the exact approach we use when adding RTL support to Firefox OS. (Note: the unset keyword isn’t supported in all browsers. For the best compatibility you may need to explicitly re-declare the desired value for any properties you want to unset.)

How do we get and set the direction of the page using JavaScript?

It’s fairly easy, and requires only one line of code: document.documentElement.dir
You can assign this value to a variable and output it to the console to see for yourself:

var direction = document.documentElement.dir;
console.log(direction);

Or try this example:

See the Pen WQxWQQ by Ahmed Nefzaoui (@anefzaoui) on CodePen.

To set the direction of the page, just use the following:
document.documentElement.dir = "rtl"

And here’s an example of both getting and setting the direction value for a web page:

See the Pen gaMyPN by Ahmed Nefzaoui (@anefzaoui) on CodePen.
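Putting the getter and setter together, a direction toggle is only a few lines. The following is a minimal sketch; the plain object merely stands in for document.documentElement so the snippet is self-contained:

```javascript
// Sketch: flip a page between LTR and RTL.
function toggleDir(el) {
  el.dir = el.dir === 'rtl' ? 'ltr' : 'rtl';
  return el.dir;
}

// Stand-in for document.documentElement; in a browser, pass the real thing.
var root = { dir: 'ltr' };
toggleDir(root); // → 'rtl'
toggleDir(root); // → 'ltr'
```

Wiring this to a button gives you an easy way to test both layouts of your page while developing.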

Automatic RTL solutions

Enough talk about manual ways to do RTL. Now let’s take a look at some automated approaches.

css-flip

Twitter has developed a solution to automate the whole process and make it easier for bigger projects to implement RTL. They have open-sourced it and you can find it on GitHub.

This NPM plugin covers pretty much all the cases you could find yourself facing when working on RTL, including edge cases. Some of its features include:

  • no-flip: To indicate to the mirroring engine that you don’t want a certain property flipped, just add /* @noflip*/ at the beginning of the property. So, for example, if you write /* @noflip*/ float: left, it will stay as float: left after css-flip is run.
  • @replace: When you have background images that differ between LTR and RTL, you can specify a replacement image by including /*@replace: url(my/image/path) */ before the original property. Let’s say you have background-image: url(arrow-left.png). If you update the line to
    /*@replace: url(arrow-rightish.png) */ background-image: url(arrow-left.png);
    you will end up with background-image: url(arrow-rightish.png); in the RTL layout.

You can use css-flip through the command line by executing
css-flip path/to/file.css > path/to/file.rtl.css or by requiring css-flip in your code. More about that on the project’s GitHub repo. css-flip is actively used on Twitter properties.
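To get a feel for what such a tool automates, here is a toy version of the core mirroring step. This is an illustrative sketch only, not css-flip’s actual implementation or API:

```javascript
// Toy mirroring step: swap 'left' and 'right' in a declaration's
// property name and value, as an RTL flipping tool does for each rule.
function flipDeclaration(property, value) {
  var swap = function (s) {
    return s.replace(/left|right/g, function (m) {
      return m === 'left' ? 'right' : 'left';
    });
  };
  return swap(property) + ': ' + swap(value) + ';';
}

flipDeclaration('padding-left', '10px'); // → 'padding-right: 10px;'
flipDeclaration('float', 'left');        // → 'float: right;'
```

A real tool has to do much more: reorder four-value shorthands like margin: 1px 2px 3px 4px, handle background positions, and avoid mangling words that merely contain “left” or “right”. That is exactly why css-flip and rtlcss are worth using instead of a hand-rolled regex.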

rtlcss

Another option for automatic conversion is a tool from Mohammad Younes called rtlcss — this is also available on GitHub and provides some pretty nifty features. With this one engine you can:

  • Rename rule names (e.g. renaming #boxLeft to #boxRightSide).
  • Completely ignore rules from the mirroring process.
  • Ignore single properties.
  • Append/prepend new values.
  • Insert new property values in between other property values.

Basic usage is via the command line — you can create the RTL equivalent of your CSS by running the following in the directory where your original CSS lives:

rtlcss input.css output.rtl.css

And of course you can just require it in your code. More details are available on GitHub.

This project is very popular and is being used by several projects including WordPress.

Final words

While there is still a lot to cover, this article aims to provide you with enough knowledge to start exploring RTL confidently. In the next article we’ll cover more advanced topics around RTL.

Be sure to ask any questions you may have around RTL in the comments section and we will try to answer them here or in the next post!

View full post on Mozilla Hacks – the Web developer blog


Of impostor syndrome and running in circles (part 1)

John Steinbeck: “And now that you don’t have to be perfect, you can be good.”

I just got back from SmartWebConf in Romania, where I gave the closing talk. I wanted it to be slower-paced and more reflective than what I normally deliver. I messed that up as I got passionate about the subject again, but it was for the better. I am happy and humbled by the feedback I got: many people thanked me for putting their worries into words. That’s why I offer the recording and slides here, and write a longer post about the ideas covered in them.

This is part 1 of 3.

Part 1: never stop learning and do it your way
Part 2: you got what you asked for, time to use it
Part 3: give up on the idea of control and become active

The screencast recording is on YouTube (if you want to download it, I use Youtube-dl)

The Slides are on Slideshare and have been featured on their frontpage.

The end of a conference is always a wonderful time. You are inspired, fascinated, excited, hopeful and ready to apply and play with all the cool stuff you heard. You had interesting talks with people. You found out that other people have the same problems and worries you have. You’re driven to tackle them head on (the problems, not the other people). That’s what happens when the presenters and organisers did their jobs right.

The problem is that these fuzzy feelings tend to evaporate when you go back to work. Conferences are a cocoon of being with people who get you – as they have the same ideas and deliveries.

Going back to your daily work can make you feel not as good any more. You go online. You get overwhelmed by the avalanche of stuff you find on our industry’s social media channels. You feel you should know a lot more than you do and use all the clever solutions the cool kids use. Instead, you look at a huge list of bugs you need to plough through. You look at the product you worked on for a year. And you suffer the convoluted processes it has. And you get frustrated by the code you’d love to clean up but can’t touch.

The impostor syndrome rears its ugly head and brings those darker feelings.

  • That feeling that you don’t belong…
  • That feeling that you’re not good enough…
  • The feeling that you can’t contribute anything useful…
  • The feeling that you should apologise for not using the newest and coolest…

All these are bullshit and you should not spend a second of your precious time on them. You don’t have to impress anyone, you don’t have to beat others. All you need to worry about is being a better you tomorrow than you are today. You can do this in many ways. Be a kinder person. Learn something new. Question your biases. Try to understand why someone dislikes what you do. And many more.

In our little world, we started to put ridiculous pressure onto each other. And it is based on one thing: engineering and technology. OK, that’s two things but one tends to lead to the other.

We turned engineering into a religion, with all the dogma and holy wars that come with that.

This manifests itself in a few ways:

  • We value code over everything else. We made “learning to code” a proper fetish and the answer to everything. And we keep stretching the term “code” to painful proportions. Everything you type that follows some logic or instructs a runtime to do something is considered code. The thinking behind it becomes secondary, which can be terrible. Writing good HTML is not about instructing a machine to do something with it. Writing good JavaScript is not about writing out a lot of content.
  • We measure our worth by quantity and visibility, not by how we contribute. As Sarah Mei put it on Twitter

    Here’s my github contribution graph. I write code all fucking day. I want to do something different outside of work.

    Is she less of a coder as she strives to make a difference in her job rather than her free time? Is our creative work disconnected from our deliveries?

  • We consider automation the end goal of everything. Humans should do other things than menial tasks that computers can do better. We value libraries and scripts and methodologies that automate tasks more than we value repetition; repetition is considered failure. Yet repetition can be good for building up muscle memory. By automating all our tasks, we rob new developers of learning why these automation tools are so useful. We remove them from the original problem, and they could learn a lot by facing these problems instead of relying on a magical solution they don’t understand.

This is dangerous thinking. Being human, we are more than number crunching machines. We can’t work at the breakneck speed of machines. Taking time off doing nothing is an important part of our mental health. By creating machines to make our lives easier, we might eventually end up with a world that doesn’t need us. This can be good, but the economics of it are pretty screwed up. CPG Grey’s Humans need not apply is an interesting 15 minute show about this.

This unhealthy focus on technological solutions for all our problems leads to changes in our hiring practices that annoy me. It seems we celebrate social ineptitude in our hiring and training. We have ridiculous brain-teaser questions in interviews like the Fizz Buzz Test, which helps

filter out the 99.5% of programming job candidates who can’t seem to program their way out of a wet paper bag.

Arrogance like this doesn’t go unpunished. For example, Estelle Weyl showed that a few lines of CSS can solve the Fizz Buzz test. That she framed her tweet as a “CSS interview question” scares me. CSS is not about cryptic coding puzzles; it is about describing interfaces and interactions. Once more we bring something that has an important role to play back to “code”, to make it more palatable for the engineering crowd. A crowd that never had much respect for it in the first place.

Interview questions like these are scary, arrogant and short-sighted. We keep complaining about a lack of diversity in our market. Yet we put trip-wires like these in our onboarding process instead of celebrating diversity. And don’t get me started on the concept of hackdays to increase diversity. There is no silver bullet to force integration. If we want to be more diverse, we have to change our hiring practices. We also need to adapt our promotion and compensation paths. Right now our market values the “rockstar ninja coders” who are in constant competition with their peers. That’s not a good start to build teams.

Diversity isn’t a goal – it is a beginning. Your products will be more resilient, interesting and delivered faster with diverse teams than with a group of likeminded people. You add spices to food when you cook it. This is what diversity can bring to our market.

The way we teach shouldn’t be about hypothetical examples but about the medium we work in. Three years ago Bret Victor released Learnable Programming, a hands-on training course taking a different approach to learning to “code”. We should have more of these, and less of yet another resource for learning the coolest and newest.

There is no shortage of resources to learn our craft from these days. The problem with a lot of them is that they assume a lot of knowledge and expect a certain environment.

Learning is a personal thing. If you feel stupid trying to follow a learning resource, don’t use it. You’re not stupid – you’re using the wrong resource. We all are wired differently. If you don’t understand something – ask the person teaching the subject to explain it. If there is an aggressive answer or none at all, you know that the goal of the teaching resource was not to educate, but to show off. That’s fine, but it shouldn’t be something that frustrates you. Maybe you don’t need this new knowledge. Find something that makes you happy instead. Something that you are much better at than others are because you like doing it.

Continued in part 2, where I will look at what we have and why we don’t use it.

View full post on Christian Heilmann


Keyboard events in Firefox OS TV: Part 2

Implementation details for keyboard events

In our introductory post, Keyboard events in Firefox OS TV, we described four keyboard event scenarios triggered by the Info key on a Smart TV remote: SYSTEM-ONLY, SYSTEM-FIRST, APP-CANCELLED, and APP-FIRST. We explained how these keyboard events are activated, described the default sequence of events, and explored the iframe structure in Firefox OS and its embedded browser, in order to understand how developers can implement the four keyboard event scenarios.

Now we’ll take a closer look at each of the four scenarios, complete with example code for each event-handling scenario.

SYSTEM-ONLY

If a keyboard event is categorized as SYSTEM-ONLY, the desired response is defined in the mozbrowserbeforekey* event handler. Once the key is pressed, the system receives the mozbrowserbeforekey* event before the key* event is dispatched to an app, and the key* event dispatch must be cancelled once the system event handler is called. So we need a way to stop the event dispatch. Fig 5 in Keyboard events in Firefox OS TV shows that keyboard events are dispatched to the system process and then to the app process. To stop dispatching events to the embedded page, event.preventDefault() is a straightforward solution: it prevents the default action, and the defined default action of the mozbrowserbeforekey* event is to dispatch the key* event. For this reason, by calling event.preventDefault() in the mozbrowserbeforekey* event handler, key* events won’t be dispatched. The final result is shown in Fig 7.


[Fig 7]

SYSTEM-FIRST

This is very similar to the implementation of SYSTEM-ONLY. The only difference is that it’s not necessary to call event.preventDefault() in the mozbrowserbeforekey* event handler. Apps are able to handle the key* event after the system finishes processing it.


[Fig 8]

APP-CANCELLED

[Fig 9]

If specific keyboard events are designated for use by apps only, such as those assigned to the four colored keys (shown in Fig 9) on smart TV remotes, then event.preventDefault() will be called in the app’s key* event handler. event.preventDefault() cannot prevent the mozbrowserafterkey* event from being dispatched to the system, but the embeddedCancelled property of mozbrowserafterkey* will be set to true once the embedded app calls event.preventDefault(). The value of embeddedCancelled lets the system detect whether or not this event has already been handled. If the value is true, the system does nothing.


[Fig 10]

APP-FIRST

The difference between APP-FIRST and APP-CANCELLED is that with APP-FIRST event.preventDefault() will not be called in the app’s event handler. Therefore, the value of embeddedCancelled is false and the system can take over the keyboard event.


[Fig 11]

Sample code

Event handlers
function handleEvent(event) {
  dump("Receive event '" + event.type + "'.");
  // Handle event here.....
};

function handleEventAndPreventDefault(event) {
  dump("Receive event '" + event.type + "'.");
  // Handle event here.....

  // Call preventDefault() to stop the default action.
  // It means that the event is already handled.
  event.preventDefault();
};

function checkAttrAndHandleEvent(event) {
  dump("Receive event '" + event.type + 
       "' with embeddedCancelled equals to '" +
       event.embeddedCancelled + "'.");
  if (!event.embeddedCancelled) {
    // Do something if the event wasn't being handled before!
    // The following code should be executed in APP-FIRST scenario only!
  }
};
SYSTEM-ONLY
  • mozbrowser iframe host page
window.addEventListener('mozbrowserbeforekeydown', handleEventAndPreventDefault);
window.addEventListener('mozbrowserbeforekeyup', handleEventAndPreventDefault);
window.addEventListener('mozbrowserafterkeydown', function() { }); // no use
window.addEventListener('mozbrowserafterkeyup', function() { }); // no use
  • the embedded page
// These handlers will never be triggered because preventDefault() is called in the mozbrowserbeforekey* handlers.
window.addEventListener('keydown', handleEvent);
window.addEventListener('keyup', handleEvent);
  • Results of keydown-related events
    1. host page receives mozbrowserbeforekeydown. Output: Receive event ‘mozbrowserbeforekeydown’.
    2. host page receives mozbrowserafterkeydown. No output (the handler is empty).
  • Results of keyup-related events
    1. host page receives mozbrowserbeforekeyup. Output: Receive event ‘mozbrowserbeforekeyup’.
    2. host page receives mozbrowserafterkeyup. No output (the handler is empty).
SYSTEM-FIRST
  • mozbrowser iframe host page
window.addEventListener('mozbrowserbeforekeydown', handleEvent);
window.addEventListener('mozbrowserbeforekeyup', handleEvent);
window.addEventListener('mozbrowserafterkeydown', function() { }); // no use
window.addEventListener('mozbrowserafterkeyup', function() { }); // no use
  • the embedded page
window.addEventListener('keydown', handleEvent);
window.addEventListener('keyup', handleEvent);
  • Received results of keydown-related events
    1. host page receives mozbrowserbeforekeydown. Output: Receive event ‘mozbrowserbeforekeydown’.
    2. embedded page receives keydown. Output: Receive event ‘keydown’.
    3. host page receives mozbrowserafterkeydown. No output (the handler is empty).
  • Received results of keyup-related events
    1. host page receives mozbrowserbeforekeyup. Output: Receive event ‘mozbrowserbeforekeyup’.
    2. embedded page receives keyup. Output: Receive event ‘keyup’.
    3. host page receives mozbrowserafterkeyup. No output (the handler is empty).
APP-CANCELLED
  • mozbrowser iframe host page
window.addEventListener('mozbrowserbeforekeydown', function() { }); // no use
window.addEventListener('mozbrowserbeforekeyup', function() { }); // no use
window.addEventListener('mozbrowserafterkeydown', checkAttrAndHandleEvent);
window.addEventListener('mozbrowserafterkeyup', checkAttrAndHandleEvent);
  • mozbrowser iframe embedded page
window.addEventListener('keydown', handleEventAndPreventDefault);
window.addEventListener('keyup', handleEventAndPreventDefault);
  • Received results of keydown-related events
    1. host page receives mozbrowserbeforekeydown. No output (the handler is empty).
    2. embedded page receives keydown. Output: Receive event ‘keydown’.
    3. host page receives mozbrowserafterkeydown. Output: Receive event ‘mozbrowserafterkeydown’ with embeddedCancelled equals to ‘true’.
  • Received results of keyup-related events
    1. host page receives mozbrowserbeforekeyup. No output (the handler is empty).
    2. embedded page receives keyup. Output: Receive event ‘keyup’.
    3. host page receives mozbrowserafterkeyup. Output: Receive event ‘mozbrowserafterkeyup’ with embeddedCancelled equals to ‘true’.
APP-FIRST
  • mozbrowser iframe host page
window.addEventListener('mozbrowserbeforekeydown', function() { }); // no use
window.addEventListener('mozbrowserbeforekeyup', function() { }); // no use
// This will be triggered after the keydown event is
// dispatched to the mozbrowser iframe embedded page.
window.addEventListener('mozbrowserafterkeydown', checkAttrAndHandleEvent);
window.addEventListener('mozbrowserafterkeyup', checkAttrAndHandleEvent);
  • mozbrowser iframe embedded page
window.addEventListener('keydown', handleEvent);
window.addEventListener('keyup', handleEvent);
  • Received results of keydown-related events
    1. host page receives mozbrowserbeforekeydown. No output (the handler is empty).
    2. embedded page receives keydown. Output: Receive event ‘keydown’.
    3. host page receives mozbrowserafterkeydown. Output: Receive event ‘mozbrowserafterkeydown’ with embeddedCancelled equals to ‘false’.
  • Received results of keyup-related events
    1. host page receives mozbrowserbeforekeyup. No output (the handler is empty).
    2. embedded page receives keyup. Output: Receive event ‘keyup’.
    3. host page receives mozbrowserafterkeyup. Output: Receive event ‘mozbrowserafterkeyup’ with embeddedCancelled equals to ‘false’.


View full post on Mozilla Hacks – the Web developer blog


7 Reasons why EdgeConf rocks and why you should be part of it

Having just been there and seeing that the coverage is available today, I wanted to use this post to tell you just how amazing EdgeConf is as a conference, a concept and a learning resource. So here are seven reasons why you should care about EdgeConf:

Reason 1: It is a fully recorded think-tank

Unlike other conferences, where you hear great presentations and have meetings and chats of high significance but have to wait for weeks for them to come out, EdgeConf is live in its coverage. Everything that’s been discussed has a live comment backchannel (this year it was powered by Slack), there are dedicated note-takers and the video recordings are transcribed and published within a few days. The talks are searchable that way and you don’t need to sift through hours of footage to find the nugget of information you came for.

Reason 2: It is all about questions and answers, not about the delivery and showing off

The format of EdgeConf is a Q&A session with experts, moderated by another expert. There are a few chosen experts on stage, but everybody in the audience has the right to answer and be part of it. This happens in normal conference Q&A in any case; Edge makes sure it is natural instead of disruptive. There is no space for pathos and grandstanding in this event – it is all about facts.

Reason 3: The audience is a gold-mine of knowledge and experts to network with

Edge attracts the most dedicated people when it comes to the newest technology and ideas on the web. Not blue-sky “I know what will be next” thinkers, but people who want to make the current state work and point towards what’s next. This can be intimidating – and it is to me – but for networking and having knowledgeable people to bounce your ideas off, this is pure gold.

Reason 4: The conference is fully open about the money involved

Edge is a commercial conference, with a very affordable ticket price. At the end of the conference, you see a full disclosure of who paid for what and how much money came in. Whatever is left over gets donated right there and then to a good cause. This year, the conference generated a massive amount of money for codeclub. This means that your sponsorship is obvious and people see how much you put in. This is better than getting a random label like “platinum” or “silver”. People see how much things cost, and get to appreciate it more.

Reason 5: The location is always an in-the-trenches building

Instead of being in a hotel or convention centre that looks swanky but has no working WiFi, the organisers partner with tech companies to use their offices. That way you get up-close to Google, Facebook, or whoever they manage to partner with and meet local developers on their own turf. This is refreshingly simple and means you get to meet folk that don’t get time off to go to conferences, but can drop by for a coffee.

Reason 6: If you can’t be there, you still can be part of this

All the panels of this conference are live streamed, so even if you can’t make it, you can sit in and watch the action. You can even take part via Slack or Twitter, or have a dedicated screening in your office to watch it. This is a ridiculously expensive and hard-to-pull-off trick that many conferences wouldn’t even want to attempt. I think we should thank the organisers for going that extra step.

Reason 7: The organisers

The team behind Edge is extremely dedicated and professional. I rushed my part this year, as I was in between other conferences, and I feel sorry, and like a slacker, in comparison to what the organisers pulled off and how they herded presenters, moderators and audience. My hat is off to them, as they do not make any money with this event. If you get a chance to thank them, do so.

Just go already

When the next Edge is announced, don’t hesitate. Try to get your tickets or at least make sure you have time to watch the live feeds and take part in the conversations. As someone thinking of sponsoring events, this is a great one to get seen and there is no confusion as to where the money goes.

View full post on Christian Heilmann


Creating a mobile app from a simple HTML site: Part 4

How to polish your app and prepare it for market

In previous sections of this step-by-step series (Part 1, Part 2, and Part 3) we’ve created an app that loads multiple school plans from the server.

What we have so far is functional, but still has a number of issues, including two which are major: no offline mode and a hard-coded configuration. In this closing post, we’ll work on all of these issues.

If you haven’t built up the examples from the previous parts, use stage7 app and stage7 server as a starting point. To get started, follow the instructions on how to load any stage in this tutorial.

Refactoring

First, let’s do some refactoring. We’ll add a User object type to make it easier to choose which user’s plan to display. This will allow us to remove some more code from the app’s main logic flow, making the code cleaner and more modular. This object will load data from the server to create the plan UI. At the moment, rendering is done in app.onDeviceReady and app.renderData methods. Our new onDeviceReady method creates an app.user object and runs the methods needed to get and render plans. Replace the existing onDeviceReady method with the following:

onDeviceReady: function() {
    app.user = new User('johndoe');
    app.user.getPlans();

    app.activateFingerSwipe();
},

Next, delete the app.renderData method completely. We will store this functionality inside our new User object instead, as you’ll see below.

Next, at the beginning of the index.js file (before the definition of Plan), add the following code to define the User object:

function User(userid) {
    this.id = userid;
    this.plans = [];
}

User.prototype.getPlans = function() {
    var self = this;
    var request = new XMLHttpRequest({
        mozAnon: true,
        mozSystem: true});
    request.onload = function() {
        var plans = JSON.parse(this.responseText);
        for (var i = 0; i < plans.length; i++) {
            self.plans.push(new Plan(plans[i]));
        }
        self.displayPlans();
    };
    request.open("get", app.getPlansURL + this.id, true);
    request.send();
};

User.prototype.displayPlans = function() {
    var self = this;
    navigator.globalization.getDateNames(function(dayOfWeek){
        var deck = document.getElementById('plan-group');
        var tabbar = document.getElementById('plan-group-menu');
        for (var i = 0; i < self.plans.length; i++) {
            self.plans[i].createUI(deck, tabbar, dayOfWeek);
        }
    }, function() {}, {type: 'narrow', item: 'days'});
};

This contains functionality previously found in renderData and onDeviceReady. After the plan’s JSON is loaded from the server we iterate over the list and create a list of Plan objects in user.plans. We then call user.displayPlans, which creates the required DOM environment and invokes plan.createUI for each element of user.plans.

You may notice that the above code uses app.getPlansURL to allow the XHR call to find the JSON. Previously the URL was hardcoded into request.open("get", "http://127.0.0.1:8080/plans/johndoe", true);. Since it’s better to keep settings in one place we’ll add this as a parameter of the app object instead.

Add the following into your code:

var app = {
    getPlansURL: "http://127.0.0.1:8080/plans/",
    // ....
}

Now try preparing your app again with the cordova prepare terminal command, starting your server, and reloading the app in WebIDE (as we’ve done in previous articles). The app should work as before.

Offline mode

Let’s turn our attention to making the app work offline.

There are several technologies available to store data on the device. I’ve decided to use the localStorage API, as the data we need to store is basically just simple strings and doesn’t need any complex structuring. As this is a key/value store, objects need to be stringified (represented as a string). I will use only two functions — setItem() and getItem(). For example, storing a user id would require you to call localStorage.setItem('user_id', user.id);.
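Because localStorage stores only strings, objects have to make a `JSON.stringify`/`JSON.parse` round trip. A minimal sketch of that round trip, written against a tiny in-memory stand-in (an assumption for illustration, since `window.localStorage` is only available in a browser):

```javascript
// Stand-in for window.localStorage so the sketch runs anywhere;
// like the real API, it stores string values only.
const store = new Map();
const storage = {
  setItem: (key, value) => store.set(key, String(value)),
  getItem: (key) => (store.has(key) ? store.get(key) : null),
};

// Objects must be stringified before saving...
const plans = [{ day: 'Monday' }, { day: 'Tuesday' }];
storage.setItem('plans', JSON.stringify(plans));
storage.setItem('user_id', 'johndoe');

// ...and parsed back after loading. The result is a fresh copy,
// not the original object.
const restored = JSON.parse(storage.getItem('plans'));
```

Note that `getItem` returns `null` for a missing key — exactly the check we will rely on later to decide whether to fall back to loading from the server.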

There are two values we definitely need to store in the phone memory — the ids of the current user and the plans. To make the app fully open and allow users to use servers not provided by the app creators we also need to store the server address. We’ll start by storing the plans. The user id and server will be still hardcoded. Let’s add a new method — User.loadPlans — and call that from app.onDeviceReady, instead of calling both user.getPlans() and app.user.displayPlans. We will decide if the plans need to be loaded from the server and then display the plans by calling User.displayPlans() from User.loadPlans.

First update onDeviceReady again, to this:

onDeviceReady: function() {
    app.user = new User('johndoe');
    app.user.loadPlans();
    app.activateFingerSwipe();
},

Now add in the following definition for the loadPlans method, up near your other User object-defining code:

User.prototype.loadPlans = function() {
    var plans = localStorage.getItem('plans');
    if (plans === null) {
        return this.getPlans();
    }
    console.log('DEBUG: plans loaded from device');
    this.initiatePlans(plans);
    this.displayPlans();
};

The above method calls another new method — initiatePlans. This parses the JSON representation of the plans and initiates Plan objects inside the User.plans array. In the future we would like to be able to reload the plans from the server, but the existing code always adds new plans instead of replacing them.

Add the following definition just below where you added the loadPlans code:

User.prototype.initiatePlans = function(plansString) {
    var plans = JSON.parse(plansString);
    for (var i = 0; i < plans.length; i++) {
        this.plans.push(new Plan(plans[i]));
    }
}

We also need to store the plans on the client. The best point to do this is right after the plans have been loaded from the server. We will call localStorage.setItem('plans', this.responseText) inside the success response function of getPlans. We must also remember to call the initiatePlans method.

Update your getPlans definition to the following:

User.prototype.getPlans = function() {
    var self = this;
    var request = new XMLHttpRequest({
        mozAnon: true,
        mozSystem: true});
    request.onload = function() {
        console.log('DEBUG: plans loaded from server');
        self.initiatePlans(this.responseText);
        localStorage.setItem('plans', this.responseText);
        self.displayPlans();
    };
    request.onerror = function(error) {
        console.log('DEBUG: Failed to get plans from ' + app.getPlansURL, error);
    };
    request.open("get", app.getPlansURL + this.id, true);
    request.send();
};

At this moment, successful loading of the user’s plans happens only once. Since school plans change occasionally (usually twice a year) you should be able to reload the plans from the server any time you want. We can use User.getPlans to return updated plans, but first we need to remove existing plans from the UI, otherwise multiple copies will be displayed.

Let’s create User.reloadPlans — add the code below your existing User object-defining code:

User.prototype.reloadPlans = function() {
    // remove UI from plans
    for (var i = 0; i < this.plans.length; i++) {
        this.plans[i].removeUI();
    }
    // clean this.plans
    this.plans = [];
    // clean device storage
    localStorage.setItem('plans', '[]');
    // load plans from server
    this.getPlans();
};

There is a new method to add to the Plan object too — Plan.removeUI — which simply calls Element.remove() on a plan’s card and tab. Add this below your previous Plan object-defining code:

Plan.prototype.removeUI = function() {
    this.card.remove();
    this.tab.remove();
};

This is a good time to test whether reloading really works. Run cordova prepare again on your code, and reload it in WebIDE.

We’ve got two console.log statements inside loadPlans and getPlans to let us know whether the plans are loaded from the device’s storage or from the server. To initiate a reload, run this code in the Console:

app.user.reloadPlans()

Note: Brick reports an error at this point, as the link between each tab and card has been lost inside the internal system. Please ignore this error. Once again, our app still works.

reloadPlans from Console

Let’s make the reload functional by adding a UI element to reload the plans. In index.html, wrap a div#topbar around the brick-tabbar. First, add a reload button:

<div id="reload-button"></div>
<div id="topbar">
    <brick-tabbar id="plan-group-menu">
    </brick-tabbar>
</div>

Now we’ll make the button float to the left and use calc to set the size of the div#topbar. This makes Brick’s tabbar calculate the size when layout is changed (portrait or horizontal). In css/index.css, add the following at the bottom of the file:

#topbar {
	width: calc(100% - 45px);
}

#reload-button {
	float: left;
	/* height of the tabbar */
	width: 45px;
	height: 45px;
	background-image: url('../img/reload.svg');
	background-size: 25px 25px;
	background-repeat: no-repeat;
	background-position: center center;
}

We will download some Gaia icons to use within the application. At this stage, you should save the reload.svg file inside your img folder.

Last detail: Listen to the touchstart event on the #reload-button and call app.user.reloadPlans. Let’s add an app.assignButtons method:

assignButtons: function(){
    var reloadButton = document.getElementById('reload-button');
    reloadButton.addEventListener('touchstart', function() {
        app.user.reloadPlans();
    }, false);
},

We need to call it from app.onDeviceReady, before or after app.activateFingerSwipe().

onDeviceReady: function() {
    // ...
    app.activateFingerSwipe();
    app.assignButtons();
},

At this point, save your code, prepare your cordova app, and reload it in WebIDE. You should see something like this:

reload button

Here you can find current code for app and server.

Settings page

Currently all of the identity information (the address of the server and the user id) is hardcoded in the app. If the app is to be used by others, we need to provide functionality for users to set this data without editing the code. We will implement a settings page, using Brick’s flipbox component to switch between content and settings and vice versa.

First we must tell the app to load the component. Let’s change index.html to load the flipbox widget by adding the following line into the HTML <head> section:

<link rel="import" href="app/bower_components/brick-flipbox/dist/brick-flipbox.html">

Some changes must happen inside the HTML <body> as well. brick-topbar and brick-deck need to be placed inside switchable sections of the same flipbox, and the reload button also moves to inside the settings. Replace everything you’ve got in your <body> so far with the following (although you need to leave the <script> elements in place at the bottom):

<brick-flipbox>
    <section id="plans">
        <div id="settings-button"></div>
        <div id="topbar">
            <brick-tabbar id="plan-group-menu">
            </brick-tabbar>
        </div>
        <brick-deck id="plan-group">
        </brick-deck>
    </section>
    <section id="settings">
        <div id="settings-off-button"></div>
        <h2>Settings</h2>
    </section>
</brick-flipbox>

The idea is to switch to the “settings” view on pressing a settings-button, and then back to the “plans” view with the settings-off-button.

Here’s how to wire up this new functionality. First up, inside your app definition code add a toggleSettings() function, just before the definition of assignButtons():

toggleSettings: function() {
    app.flipbox.toggle();
},

Update assignButtons() itself to the following (we are leaving the reloadButton code here, as we’ll need it again soon enough):

assignButtons: function() {
    app.flipbox = document.querySelector('brick-flipbox');
    var settingsButton = document.getElementById('settings-button');
    var settingsOffButton = document.getElementById('settings-off-button');
    settingsButton.addEventListener('click', app.toggleSettings);
    settingsOffButton.addEventListener('click', app.toggleSettings);
        
    var reloadButton = document.getElementById('reload-button');
    reloadButton.addEventListener('touchstart', function() {
        app.user.reloadPlans();
    }, false);
},

Next: we need to retrieve some more icons for the app. Save the settings.svg and back.svg files inside your img folder.

Let’s update our css/index.css file too, to add the additional icons in and style our new features. Add the following at the bottom of that file:

#form-settings {
    font-size: 1.4rem;
    padding: 1rem;
}
#settings-button, #settings-off-button {
    float: left;
    width: 45px;
    height: 45px;
    background-size: 25px 25px;
    background-repeat: no-repeat;
    background-position: center center;
}
#settings-button {
    background-image: url('../img/settings.svg');
}
#settings-off-button {
    background-image: url('../img/back.svg');
    border-right: 1px solid #ccc;
    margin-right: 15px;
}
brick-flipbox {
    width: 100%;
    height: 100%;
}

If you now prepare your cordova app again with cordova prepare, and reload the app inside WebIDE, you should see something like this:

flipbox working

Adding a settings form

To add a form and some data, replace your current <section id="settings"></section> element with the following:

<section id="settings">
    <div id="settings-off-button"></div>
    <h2>Settings</h2>
    <form id="form-settings">
        <p><label for="input-server">Server</label></p>
        <p><input type="text" id="input-server" placeholder="http://example.com/"/></p>
        <p><label for="input-user" id="label-user">User ID</label></p>
        <p><input type="text" id="input-user" placeholder="johndoe"/></p>
        <p><button id="reload-button">RELOAD ALL</button></p>
    </form>
</section>

Making it look good is easy! We’ll use some ready-made CSS available in Firefox OS Building Blocks. Download input_areas.css and buttons.css and save them in your www/css directory.

Then add the following to your HTML <head>, to apply the new CSS to your markup.

<link rel="stylesheet" type="text/css" href="css/input_areas.css">
<link rel="stylesheet" type="text/css" href="css/buttons.css">

Go into the css/index.css file and delete the #reload-button { ... } rule to make the “RELOAD ALL” button display correctly.

Run cordova prepare again and reload the app in WebIDE, and you should see something like this:

settings

We’ve got the view — now to support it. When the app first installs there will be no settings, so the app will have no idea where to load the data from. Instead of showing empty plans, we immediately toggle the view to the settings. Update your onDeviceReady function as shown:

onDeviceReady: function() {
    app.plansServer = localStorage.getItem('plansServer');
    app.userID = localStorage.getItem('userID');
    app.activateFingerSwipe();
    app.assignButtons();

    if (app.plansServer && app.userID) {
        app.user = new User(app.userID);
        app.user.loadPlans();
    } else {
        app.toggleSettings();
    }
},

The app needs to read the data from our form inputs, save it into the phone’s storage, and reload the plans after a change. Let’s add this functionality at the end of the app.assignButtons function. First we will stop the form from submitting (otherwise our app would just reload index.html instead of redrawing the content). Add the following code to the end of the assignButtons function, just before the closing },:

document.getElementById('form-settings').addEventListener('submit', function(e) {
    e.preventDefault();
}, false);

We will read and set the input value for the server. I’ve decided to read the input value on the blur event, which is dispatched when the user touches any other DOM element on the page. When this happens, the server value is stored in the device’s storage. If app.plansServer exists on app launch, we set it as the default input value. The same has to happen for the userID, but with one extra step: we either update the existing app.user or create a new one if none exists. Add the following just above the previous block of code you added:

var serverInput = document.getElementById('input-server');
serverInput.addEventListener('blur', function() {
    app.plansServer = serverInput.value || null;
    localStorage.setItem('plansServer', app.plansServer);
});
if (app.plansServer) {
    serverInput.value = app.plansServer;    
}

var userInput = document.getElementById('input-user');
userInput.addEventListener('blur', function() {
    app.userID = userInput.value || null;
    if (app.userID) {
        if (app.user) {
            app.user.id = app.userID;
        } else {
            app.user = new User(app.userID);
        }
        localStorage.setItem('userID', app.userID);
    }
});
if (app.userID) {
    userInput.value = app.userID;    
}

After preparing your app and reloading it in WebIDE, you should now see the settings working as expected. The settings screen appears automatically if no plans are found in localStorage, allowing you to enter your server and userID details.

Test it with the following settings:

  • Server: http://127.0.0.1:8080
  • User ID: johndoe

The only inconvenience is that you need to switch back to the plans view manually once Settings are loaded. In addition, you might get an error if you don’t blur out of both form fields.

settings are working

Improving the settings UX

In fact, switching back to the plans view should be done automatically after plans are successfully loaded. We can implement this by adding a callback to the User.getPlans function, and including it in the call made from User.reloadPlans:

First, update your getPlans() function definition to the following:

User.prototype.getPlans = function(callback) {
    var self = this;
    var url = app.plansServer + '/plans/' + this.id;
    var request = new XMLHttpRequest({
        mozAnon: true,
        mozSystem: true});
    request.onload = function() {
        console.log('DEBUG: plans loaded from server');
        self.initiatePlans(this.responseText);
        localStorage.setItem('plans', this.responseText);
        self.displayPlans();
        if (callback) {
            callback();
        }
    };
    request.open("get", url, true);
    request.send();
};

Next, in User.reloadPlans add the callback function as an argument of the this.getPlans call:

this.getPlans(app.toggleSettings);

Again, try saving, preparing, and reloading your new code. You should now see that the app displays the plans view automatically when correct server and user information is entered and available. (The view is toggled back only if self.initiatePlans(this.responseText) completes without error.)

final

Where do we go from here

Congratulations! You’ve reached the end of this 4-part tutorial, and you should now have your own working prototype of a fun school plan application.

There is still plenty to do to improve the app. Here are some ideas to explore:

  • Writing JSON files is not a comfortable task. It would be better to add new functionality that lets you edit plans on the server. Assigning them to the user on the server opens new possibilities, and this would just require an action to sync the user account on both server and client.
  • Displaying hours would also be nice.
  • It would be cool to implement a reminder alarm for some school activities.

Try implementing these yourself, or build out other improvements for your own school scheduling app. Let us know how it goes.

View full post on Mozilla Hacks – the Web developer blog


How can we write better software? – Interview series, part 2 with Brian Warner

This is part 2 of a new Interview series here at Mozilla Hacks.

“How can we, as developers, write more superb software?”

A simple question without a simple answer. Writing good code is hard, even for developers with years of experience. Luckily, the Mozilla community is made up of some of the best development, QA and security folks in the industry.

This is part two in a series of interviews where I take on the role of an apprentice to learn from some of Mozilla’s finest.

Introducing Brian Warner

When my manager and I first discussed this project, Brian is the first person I wanted to interview. Brian probably doesn’t realize it, but he has been one of my unofficial mentors since I started at Mozilla. He is an exceptional teacher, and has the unique ability to make security approachable.

At Mozilla, Brian designed the pairing protocol for the “old” Firefox Sync, designed the protocol for the “new” Firefox Sync, and was instrumental in keeping Persona secure. Outside of Mozilla, Brian co-founded the Tahoe-LAFS project, and created Buildbot.

What do you do at Mozilla?

My title is staff security engineer in the Cloud Services group. I analyse and develop protocols for securely managing passwords and account data, and I implement those protocols in different fashions. I also review others’ code, look at external projects to figure out whether it’s appropriate to incorporate them, and try to stay on top of security failures like 0-days and problems in the wild that might affect us, as well as tools and algorithms that we might be able to use.

UX vs Security: Is it a false dichotomy? Some people have the impression that for security to be good, it must be difficult to use.

There are times when I think that it’s a three-way tradeoff. Instead of being x-axis, y-axis, and a diagonal line that doesn’t touch zero, sometimes I think it’s a three-way thing where the other axis is how much work you want to put into it or how clever you are or how much user research and experimentation you are willing to do. Stuff that engineers are typically not focused on, but that UX and psychologists are. I believe, maybe it’s more of a hope than a belief, that if you put enough effort into that, then you can actually find something that is secure and usable at the same time, but you have to do a lot more work.

The trick is to figure out what people want to do and find a way of expressing whatever security decisions they have to make into a normal part of their work flow. It’s like when you lend your house key to a neighbour so they can water your plants when you are away on vacation, you’ve got a pretty good idea of what power you are handing over.

There are some social constructs surrounding that like, “I don’t think you’re going to make a copy of that key and so when I get it back from you, you no longer have that power that I granted to you.” There are patterns in normal life with normal non-computer behaviours and objects that we developed some social practices around, I think part of the trick is to use that and assume that people are going to expect something that works like that and then find a way to make the computer stuff more like that.

Part of the problem is that we end up asking people to do very unnatural things because it is hard to imagine or hard to build something that’s better. Take passwords. Passwords are a lousy authentication technology for a lot of different reasons. One of them being that in most cases, to exercise the power, you have to give that power to whoever it is you are trying to prove to. It’s like, “let me prove to you I know a secret”…”ok, tell me the secret.” That introduces all these issues like knowing how to correctly identify who you are talking to, and making sure nobody else is listening.

In addition to that, the best passwords are going to be randomly generated by a computer and they are relatively long. It’s totally possible to memorize things like that but it takes a certain amount of exercise and practice and that is way more work than any one program deserves.

But, if you only have one such password and the only thing you use it on is your phone, then your phone is now your intermediary that manages all this stuff for you, and then it probably would be fair (to ask users to spend more energy managing that password). And it’s clear that your phone is sort of this extension of you, better at remembering things, and that the one password you need in this whole system is the bootstrap.

So some stuff like that, and other stuff like escalating effort in rare circumstances. There are a lot of cases where what you do on an everyday basis can be really easy and lightweight, and it’s only when you lose the phone that you have to go back to a more complicated thing. Just like you only carry so much cash in your wallet, and every once in a while you have to go to a bank and get more.

It’s stuff like that I think it’s totally possible to do, but it’s been really easy to fall into bad patterns like blaming the user or pushing a lot of decisions onto the user when they don’t really have enough information to make a good choice, and a lot of the choices you are giving them aren’t very meaningful.

Do you think many users don’t understand the decisions and tradeoffs they are being asked to make?

I think that’s very true, and I think most of the time it’s an inappropriate question to ask. It’s kind of unfair. Walking up to somebody and putting them in this uncomfortable situation – do you like X or do you like Y – is a little bit cruel.

Another thing that comes to mind is permission dialogs, especially on Windows boxes. They show up all the time, even just to do really basic operations. It’s not like you’re trying to do something exotic or crazy. These dialogs purport to ask the user for permission, but they don’t explain the context or reasons or consequences enough to make it a real question. They’re more like a demand or an ultimatum. If you say “no” then you can’t get your work done, but if you say “yes” then the system is telling you that bad things will happen and it’s all going to be your fault.

It’s intended to give the user an informed choice, but it is this kind of blame the user, blame the victim pattern, where it’s like “something bad happened, but you clicked on the OK button, you’ve taken responsibility for that.” The user didn’t have enough information to do something and the system wasn’t well enough designed that they could do what they wanted to do without becoming vulnerable.

Months before “new” Sync ever saw the light of day, the protocol was hashed out in an extremely vocal and public forum. It was the exact opposite of security through obscurity. What did you hope to accomplish?

There were a couple of different things that I was hoping from that discussion. I pushed all that stuff to be described and discussed publicly because it’s the right thing to do, it’s the way we develop software, you know, it’s the open source way. And so I can’t really imagine doing it any other way.

The specific hopes that I had for publishing that stuff were to solicit feedback and get people to look for basic design flaws. I wanted to get people comfortable with the security properties, especially because new Sync changes some of them. We are switching away from pairing to something based on passwords. I wanted people to have time to feel they understood what those changes were and why we were making them. We put the design criteria and the constraints out there so people could see we kind of have to switch to a password to meet all of the other goals, and what’s the best we can do given security based on passwords.

Then the other part is that having that kind of public discussion and getting as many experienced people involved as possible is the only way that I know of to develop confidence that we’re building something that’s correct and not broken.

So it is really just more eyeballs…

Before a protocol or API designer ever sits down and writes a spec or line of code, what should they be thinking about?

I’d say think about what your users need. Boil down what they are trying to accomplish into something minimal and pretty basic. Figure out the smallest amount of code, the smallest amount of power, that you can provide that will meet those needs.

This is like the agile version of developing a protocol.

Yeah. Minimalism is definitely useful. Once you have the basic API that enables you to do what needs to be done, then think about all of the bad things that could be done with that API. Try and work out how to prevent them, or make them too expensive to be worthwhile.

A big problem with security is that sometimes you ask “what are the chances that problem X would happen?” Say you design something and there is a 1/1000 chance that a particular set of inputs will cause this one particular problem to happen. If the inputs really are random, then 1/1000 may be ok, 1/1M may be ok. But if an attacker gets to control the inputs, then it’s no longer 1/1000, it’s 1 in however many times the attacker chooses to make it 1.

It’s a game of who is cleverer and who is more thorough. It’s frustrating to have to do this case analysis to figure out every possible thing that could happen, every state it could get into, but if somebody else out there is determined to find a hole, that’s the kind of analysis they are going to do. And if they are more thorough than you are, then they’ll find a problem that you failed to cover.

Is this what is meant by threat modelling?

Yeah, different people use the term in different ways. I think of it as when you are laying out the system, you are setting up the ground rules. You are saying there is going to be this game. In this game, Alice is going to choose a password and Bob is trying to guess her password, and whatever.

You are defining what the ground rules are. So sometimes the rules say things like … the attacker doesn’t get to run on the defending system, their only access is through this one API call, and that’s the API call that you provide for all of the good players as well, but you can’t tell the difference between the good guy and the bad guy, so they’re going to use that same API.

So then you figure out the security properties if the only thing the bad guy can do is make API calls, so maybe that means they are guessing passwords, or it means they are trying to overflow a buffer by giving you some input you didn’t expect.

Then you step back and say “OK, what assumptions are you making here, are those really valid assumptions?” You store passwords in the database with the assumption that the attacker won’t ever be able to see the database, and then some other part of the system fails, and whoops, now they can see the database. OK, roll back that assumption, now you assume that most attackers can’t see the database, but sometimes they can, how can you protect the stuff that’s in the database as best as possible?
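Protecting the stuff in the database “as best as possible” usually means storing only salted, slowly-hashed verifiers instead of the passwords themselves, so a leaked table doesn’t directly reveal anything. Here is a minimal sketch using only Python’s standard library (the function names and work factor are illustrative, not from any particular Mozilla service):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def make_verifier(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) to store in the database instead of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(candidate: str, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, ITERATIONS)
    # Constant-time comparison, so a mismatch doesn't leak timing information.
    return hmac.compare_digest(digest, stored)
```

Even under the rolled-back assumption, an attacker who does see the database now pays the full PBKDF2 cost for every guess.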

Other stuff like, “what are all the different sorts of threats you are intending to defend against?” Sometimes you draw a line in the sand and say “we are willing to try and defend against everything up to this level, but beyond that you’re hosed.” Sometimes it’s a very practical distinction like “we could try to defend against that but it would cost us 5x as much.”

Sometimes what people do is try and estimate the value to the attacker versus the cost to the user, it’s kind of like insurance modelling with expected value. It will cost the attacker X to do something and they’ve got an expected gain of Y based on the risk they might get caught.

Sometimes the system can be rearranged so that incentives encourage them to do the good thing instead of the bad thing. Bitcoin was very carefully thought through in this space: there are these clear points where somebody could try and do a double spend, try and do something that is counter to the system, but it is very clear for everybody, including the attacker, that their effort would be better spent doing the mainstream good thing. They will clearly make more money doing the good thing than the bad thing. So, any rational attacker will not be an attacker anymore, they will be a good participant.

How can a system designer maximise their chances of developing a reasonably secure system?

I’d say the biggest guideline is the Principle of Least Authority. POLA is sometimes how that is expressed. Any component should have as little power as necessary to do the specific job that it needs to do. That has a bunch of implications and one of them is that your system should be built out of separate components, and those components should actually be isolated so that if one of them goes crazy or gets compromised or just misbehaves, has a bug, then at least the damage it can do is limited.

The example I like to use is a decompression routine. Something like gzip, where you’ve got bytes coming in over the wire, and you are trying to expand them before you try and do other processing. As a software component, it should be this isolated little bundle with two wires. One side should have a wire coming in with compressed bytes and the other side should have decompressed data coming out. It’s gotta allocate memory and do all kinds of format processing and lookup tables and whatnot, but no matter how weird the input or how malicious the code in the box, it can do nothing other than spit bytes out the other side.
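That two-wire component can be sketched in Python by pushing the decompression into a child process whose only channels are stdin and stdout. This is a sketch only: as noted below, plain process isolation still permits syscalls, so a real sandbox would add something like seccomp on top.

```python
import gzip
import subprocess
import sys

# The worker is a tiny separate program: compressed bytes in on stdin,
# decompressed bytes out on stdout, and nothing else.
WORKER = "import sys, gzip; sys.stdout.buffer.write(gzip.decompress(sys.stdin.buffer.read()))"

def decompress_isolated(blob: bytes, timeout: float = 10.0) -> bytes:
    """Run decompression in a child process so a bug in it cannot touch our state."""
    result = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=blob,
        capture_output=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise ValueError("malformed compressed input")
    return result.stdout
```

Malformed or malicious input makes the child die, not the parent; the caller sees a clean error instead of corrupted state.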

It’s a little bit like Unix process isolation, except that a process can do syscalls that can trash your entire disk, and do network traffic and do all kinds of stuff. This is just one pipe in and one pipe out, nothing else. It’s not always easy to write your code that way, but it’s usually better. It’s a really good engineering practice because it means when you are trying to figure out what could possibly be influencing a bit of code you only have to look at that one bit of code.

It’s the reason we discourage the use of global variables, it’s the reason we like object-oriented design in which class instances can protect their internal state, or at least there is a strong convention that you don’t go around poking at the internal state of other objects. The ability to have private state is like the ability to have private property, where it means that you can plan what you are doing without potential interference from things you can’t predict. And so the tractability of analysing your software goes way up if things are isolated. It also implies that you need a memory safe language…

Big, monolithic programs in a non-memory-safe language are really hard to develop confidence in. That’s why I go for higher level languages that have memory safety, even if that means they are not as fast. Most of the time you don’t really need that speed. If you do, it’s usually possible to isolate the thing that needs it into a single process.

What common problems do you see out on the web that violate these principles?

Well in particular, the web is an interesting space. We tend to use memory safe languages for the receiver.

You mean like Python and JavaScript.

Yeah, and we tend to use more object-oriented stuff, more isolation. The big problems that I tend to see on the web are failing to validate and sanitize your inputs, or failing to escape your outputs, which opens the door to injection attacks.
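Both failure modes have standard antidotes: keep untrusted text out of the SQL grammar with parameterized queries, and escape at the point of output. A small illustrative sketch in Python (the schema and helper names are made up for the example):

```python
import html
import sqlite3

def save_comment(db: sqlite3.Connection, author: str, body: str) -> None:
    # Parameterized query: attacker-controlled text never enters the SQL grammar.
    db.execute("INSERT INTO comments (author, body) VALUES (?, ?)", (author, body))

def render_comment(author: str, body: str) -> str:
    # Escape at the point of output so any markup in user input stays inert.
    return "<p><b>%s</b>: %s</p>" % (html.escape(author), html.escape(body))
```

The same input can be stored verbatim and still rendered safely, because the defence lives at each boundary rather than in some one-time “cleaning” step.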

You have a lot of experience reviewing already written implementations, Persona is one example. What common problems do you see on each of the front and back ends?

It tends to be escaping things, or making assumptions about where data comes from, and how much an attacker gets control over if that turns out to be faulty.

Is this why you advocated making it easy to trace how the data flows through the system?

Yeah, definitely. It’d be nice if you could kind of zoom out of the code and see a bunch of little connected components with little lines running between them, and say, “OK, how did this module come up with this name string? Oh, it came from this one. Where did it come from there?” Then trace it back to the point where, HERE, that name string actually comes from a user submitted parameter. This is coming from the browser, and the browser is generating it as the sending domain of the postMessage. OK, how much control does the attacker have over one of those? What could they do that would be surprising to us? And then work out at any given point what the type is, see where the transition is from one type to another, and notice if there are any points where you are failing to do that transformation or you are getting the types confused. Definitely, simplicity and visibility and tractable analysis are the keys.
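One way to make those type transitions explicit is to give untrusted data its own type, so trusted code cannot accept it by accident. A hypothetical sketch (the class and function names are invented for illustration):

```python
class Untrusted:
    """Marks data that arrived from outside, e.g. a postMessage sending domain."""
    def __init__(self, value: str):
        self.value = value

class DomainName(str):
    """A validated domain name; constructing one is the only way across the boundary."""

ALLOWED = set("abcdefghijklmnopqrstuvwxyz0123456789.-")

def validate_domain(raw: Untrusted) -> DomainName:
    """The single choke point where Untrusted becomes DomainName."""
    candidate = raw.value.lower()
    if not candidate or not set(candidate) <= ALLOWED:
        raise ValueError("not a plausible domain name")
    return DomainName(candidate)

def grant_access(domain: DomainName) -> str:
    # Trusted code accepts only the validated type, never Untrusted directly.
    assert isinstance(domain, DomainName)
    return "access granted to " + domain
```

Auditing the data flow then reduces to finding every call site of the one validation function.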

What can people do to make data flow auditing simpler?

I think, minimising interactions between different pieces of code is a really big thing. Isolate behaviour to specific small areas. Try and break up the overall functionality into pieces that make sense.

What is defence in depth and how can developers use it in a system?

“Belt and suspenders” is the classic phrase. If one thing goes wrong, the other thing will protect you. You look silly if you are wearing both a belt and suspenders because they are two independent tools that help you keep your pants on, but sometimes belts break, and sometimes suspenders break. Together they protect you from the embarrassment of having your pants fall off. So defence in depth usually means don’t depend upon perimeter security.

Does this mean you should be checking data throughout the system?

There is always a judgement call about performance cost, or, the complexity cost. If your code is filled with sanity checking, then that can distract the person who is reading your code from seeing what real functionality is taking place. That limits their ability to understand your code, which is important to be able to use it correctly and satisfy its needs. So, it’s always this kind of judgement call and tension between being too verbose and not being verbose enough, or having too much checking.

The notion of perimeter security, it’s really easy to fall into this trap of drawing this dotted line around the outside of your program and saying “the bad guys are out there, and everyone inside is good” and then implementing whatever defences you are going to do at that boundary and nothing further inside. I was talking with some folks and their opinion was that there are evolutionary biology and sociology reasons for this. Humans developed in these tribes where basically you are related to everyone else in the tribe and there are maybe 100 people, and you live far away from the next tribe. The rule was basically if you are related to somebody then you trust them, and if you aren’t related, you kill them on sight.

That worked for a while, but you can’t build any social structure larger than 100 people that way. We still think that way when it comes to computers. We think that there are “bad guys” and “good guys”, and I only have to defend against the bad guys. But we can’t distinguish between the two of them on the internet, and the good guys make mistakes too. So, the principle of least authority and the idea of having separate software components that are all very independent and have very limited access to each other means that, if a component breaks because somebody compromised it, or somebody tricked it into behaving differently than you expected, or it’s just buggy, then the damage that it can do is limited because the next component is not going to be willing to do that much for it.

Do you have a snippet of code, from you or anybody else, that you think is particularly elegant that others could learn from?

I guess one thing to show off would be the core share-downloading loop I wrote for Tahoe-LAFS.

In Tahoe, files are uploaded into lots of partially-redundant “shares”, which are distributed to multiple servers. Later, when you want to download the file, you only need to get a subset of the shares, so you can tolerate some number of server failures.

The shares include a lot of integrity-protecting Merkle hash trees which help verify the data you’re downloading. The locations of these hashes aren’t always known ahead of time (we didn’t specify the layout precisely, so alternate implementations might arrange them differently). But we want a fast download with minimal round-trips, so we guess their location and fetch them speculatively: if it turns out we were wrong, we have to make a second pass and fetch more data.
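As a toy illustration of how a Merkle tree lets a downloader verify each block against a single trusted root hash, here is a minimal sketch in Python. The layout is invented for the example and is not Tahoe-LAFS’s actual share format:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _levels(leaves: list[bytes]) -> list[list[bytes]]:
    """All tree levels, bottom up; odd levels are padded by duplicating the last node."""
    level = [_h(leaf) for leaf in leaves]
    out = []
    while True:
        if len(level) > 1 and len(level) % 2:
            level = level + [level[-1]]
        out.append(level)
        if len(level) == 1:
            break
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return out

def merkle_root(leaves: list[bytes]) -> bytes:
    return _levels(leaves)[-1][0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """The sibling hashes needed to recompute the root from leaf `index`."""
    proof = []
    for level in _levels(leaves)[:-1]:
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, sibling-is-on-the-left)
        index //= 2
    return proof

def verify_block(block: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """True if `block` hashes up through `proof` to the trusted `root`."""
    node = _h(block)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root
```

A client holding only the root can verify any single block with a logarithmic number of extra hashes, which is why fetching those hashes cheaply matters so much.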

This code tries very hard to fetch the bare minimum. It uses a set of compressed bitmaps that record which bytes we want to fetch (in the hopes that they’ll be the right ones), which ones we really need, and which ones we’ve already retrieved, and sends requests for just the right ones.
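A much simplified sketch of that bookkeeping, with plain Python sets standing in for the compressed bitmaps (the real Tahoe-LAFS code is considerably more involved):

```python
def plan_requests(wanted: set[int], needed: set[int],
                  received: set[int]) -> list[tuple[int, int]]:
    """Coalesce the still-missing byte offsets into contiguous (offset, length) reads."""
    missing = sorted((wanted | needed) - received)
    spans: list[tuple[int, int]] = []
    for off in missing:
        if spans and off == spans[-1][0] + spans[-1][1]:
            # Extends the previous span: grow it instead of issuing a new request.
            spans[-1] = (spans[-1][0], spans[-1][1] + 1)
        else:
            spans.append((off, 1))
    return spans
```

Each round of the download recomputes the plan, so a wrong speculative guess just adds the newly-needed bytes to the next pass.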

The thing that makes me giggle about this overly clever module is that the entire algorithm is designed around Rolling Stones lyrics. I think I started with “You can’t always get what you want, but sometimes … you get what you need”, and worked backwards from there.

The other educational thing about this algorithm is that it’s too clever: after we shipped it, we found out it was actually slower than the less-sophisticated code it had replaced. Turns out it’s faster to read a few large blocks (even if you fetch more data than you need) than a huge number of small chunks (with network and disk-IO overhead). I had to run a big set of performance tests to characterize the problem, and decided that next time, I’d find ways to measure the speed of a new algorithm before choosing which song lyrics to design it around. :).

What open source projects would you like to encourage people to get involved with?

Personally, I’m really interested in secure communication tools, so I’d encourage folks (especially designers and UI/UX people) to look into tools like Pond, TextSecure, and my own Petmail. I’m also excited about the variety of run-your-own-server-at-home systems like the GNU FreedomBox.

How can people keep up with what you are doing?

Following my commits on https://github.com/warner is probably a good approach, since most everything I publish winds up there.

Thank you Brian.

Transcript

Brian and I covered far more material than I could include in a single post. The full transcript, available on GitHub, also covers memory safe languages, implicit type conversion when working with HTML, and the Python tools that Brian commonly uses.

Next up!

The next article presents both Yvan Boiley and Peter deHaan. Yvan leads the Cloud Services Security Assurance team and continues the security theme by discussing his team’s approach to security audits and the tools developers can use to self-audit their sites for common problems.

Peter, one of Mozilla’s incredible Quality Assurance engineers, is responsible for ensuring that Firefox Accounts doesn’t fall over. Peter talks about the warning signs, processes and tools he uses to assess a project, and how to give the smack down while making people laugh.

View full post on Mozilla Hacks – the Web developer blog


How can we write better software? – Interview series, part 1 with Fernando Jimenez Moreno

Do you ever look at code and murmur a string of “WTFs”? Yeah, me too. As often as not, the code is my own.

I have spent my entire professional career trying to write software that I can be proud of. Writing software that “works” is difficult. Writing software that works while also being bug-free, readable, extensible, maintainable and secure is a Herculean task.

Luckily, I am part of a community that is made up of some of the best development, QA and security folks in the industry. Mozillians have proven themselves time and time again with projects like Webmaker, MDN, Firefox and Firefox OS. These projects are complex, huge in scope, developed over many years, and require the effort of hundreds.

Our community is amazing, flush with skill, and has a lot to teach.

Interviews and feedback

This is the first of a series of interviews where I take on the role of an apprentice and ask some of Mozilla’s finest

“How can we, as developers, write more superb software?”

Since shipping software to millions of people involves more than writing code, I approach the question from many viewpoints: QA, Security and development have all taken part.

The target audience is anybody who writes software, regardless of preferred language or level of experience. If you are reading this, you are part of the target audience! Most questions are high level and can be applied to any language. A minority are about tooling or language specific features.

Questions

Each interview has a distinct set of questions based around finding answers to:

  • How do other developers approach writing high quality, maintainable software?
  • What does “high quality, maintainable software” even mean?
  • What processes/standards/tools are being used?
  • How do others approach code review?
  • How can development, security, and QA work together to support each other’s efforts?
  • What matters to Security? What do they look for when doing a security review/audit?
  • What matters to QA? What do they look for before signing off on a release?
  • What can I do, as a developer, to write great software and facilitate the entire process?

I present the highlights of one or two interviews per article. Each interview contains a short introduction to the person being interviewed followed by a series of questions and answers.

Where an interview’s audio was recorded, I will provide a link to the full transcript. If the interview was done over email, I will link to the contents of the original email.

Now, on to the first interview!

Introducing Fernando Jimenez Moreno

The first interview is with Fernando Jimenez Moreno, a Firefox OS developer from Telefonica. I had the opportunity to work with Fernando last autumn when we integrated Firefox Accounts into Firefox OS. I was impressed not only with Fernando’s technical prowess, but also his ability to bring together the employees of three companies in six countries on two continents to work towards a common goal.

Fernando talks about how Telefonica became involved in Firefox OS, how to bring a disparate group together, common standards, code reviews, and above all, being pragmatic.

What do you and your team at Telefonica do?

I’m part of what we call the platform team. We have different teams at Telefonica, one is focused on front end development in Gaia, and the other one is focused on the platform itself, like Gecko, Gonk and external services. We work in several parts of Firefox OS, from Gecko to Gaia, to services like the SimplePush server. I’ve personally worked on things like the Radio Interface Layer (RIL), payments, applications API and other Web APIs, and almost always jump from Gecko to Gaia and back. Most recently, I started working on a WebRTC service for Firefox OS.

How did Telefonica get involved working with Mozilla?

Well, that’s a longer story. We started working on a similar project to Firefox OS, but instead of being based on Gecko, we were working with WebKit. So we were creating this open web device platform based on WebKit. When we heard about Mozilla doing the same with Gecko, we decided to contact you and started working on the same thing. Our previous implementation was based on a closed source port of WebKit and it was really hard to work that way. Since then, my day to day work is just like any other member of Telefonica’s Firefox OS team, which I believe is pretty much the same as any other Mozilla engineer working on B2G.

You are known as a great architect, developer, and inter-company coordinator. For Firefox Accounts on Firefox OS, you brought together people from Telefonica, Telenor, and Mozilla. What challenges are present when you have to work across three different companies?

It was quite a challenge, especially during the first days of Firefox OS. We started working with Mozilla back in 2011, and it took some time for both companies to find a common work flow that fit well for both parts. I mean, we were coming from a telco culture where many things were closed and confidential by default, as opposed to the openness and transparency of Mozilla. For some of us coming from other open source projects, it wasn’t that hard to start working in the open and to be ready to discuss and defend our work on public forums. But, for other members of the team it took some time to get used to that new way of working, and new way of presenting our work.

Also, because we were following agile methodologies in Telefonica while Mozilla wasn’t still doing it, we had to find this common workflow that suits both parts. It took some time to do it, a lot of management meetings, a lot of discussions about it. Regarding working with other telco companies, the experience has also been quite good so far, especially with Telenor. We still have to be careful about the information that we share with them, because at the end of the day, we are still competitors. But that doesn’t mean we cannot work with them in a common target like what happened with Firefox Accounts.

When Mozilla and Telefonica started out on this process, both sides had to change. How did you decide what common practices to use and how did you establish a common culture?

I think for this agile methodology, we focused more on the front end parts, because Gecko already had a very well known process and way of developing. It has its own six-week train mechanism. The ones making the biggest effort to find that common workflow were the front end team, because we started working on Gaia and Gaia was a new project with no fixed methodologies.

I don’t know if we really found the workflow, the perfect workflow, but I think we are doing good. I mean we participate in agile methodologies, but when it turns out that we need to do Gecko development and we need to focus on that, we just do it their way.

In a multi-disciplinary, multi-company project, how important are common standards like style guides, tools, and processes?

Well, I believe when talking about software engineering, standards are very important in general. But I don’t care if you call it SCRUM or KANBAN or SCRUMBAN or whatever, or if you use a Git workflow or a Mercurial workflow, or if you use Google’s or Mozilla’s JavaScript style guide. You totally need some common processes and standards, especially in large engineering groups like open source, or Mozilla development in general. When talking about this, the lines are very thin. It’s quite easy to fail by spending too much time defining and defending the usage of these standards and common processes and losing focus on the real target. So, I think we shouldn’t forget these are only tools; they are important, but they are only tools to help us, and help our managers. We should be smart enough to be flexible about them when needed.

We do a lot of code reviews about code style, but in the end what you want is to land the patch and to fix the issue. If you have code style issues, I want you to fix them, but if you need to land the patch to make a train, land it and file a follow on bug to fix the issues, or maybe the reviewer can do it if they have the chance.

Firefox OS is made up of Gonk, Gecko and Gaia. Each system is large, complex, and intimidating to a newcomer. You regularly submit patches to Gecko and Gaia. Whenever you dive into an existing project, how do you learn about the system?

I’m afraid there is no magic technique. What works for me might not work for others, for sure. What I try to do is read as much documentation as possible, inside and outside of the code if possible. I try to ask the owners of that code when needed, and also if that’s possible, because sometimes they just don’t work on the same code any more or they are not available. I try to avoid reading the code line by line at first and I always try to understand the big picture before digging into the specifics. I think that over the years, you somehow develop this ability to identify patterns in the code and to identify common architectures that help you understand the software problems that you are facing.

When you start coding in unfamiliar territory, how do you ensure your changes don’t cause unintended side effects? Is testing a large part of this?

Yeah, basically tests, tests and more tests. You need tests, smoke tests, black box tests, tests in general. Also at first, you depend a lot on what the reviewer said, and you trust the reviewer, or you can ask QA or the reviewer to add tests to the patch.

Let’s flip this on its head and you are the reviewer and you are reviewing somebody’s code. Again, do you rely on the tests whenever you say “OK, this code adds this functionality. How do we make sure it doesn’t break something over there?”

I usually test the patches that I have to review if I think the patch can cause any regression. I also try and run the tests on the “try” server, or ask the developer to trigger a “try” run.

OK, so tests.. A lot of tests.

Yeah, now that we are starting to have good tests in Firefox OS, we have to make use of them.

What do you look for when you are doing a review?

In general, where I look first is correctness. The patch should actually fix the issue it was written for, and of course it shouldn’t have collateral effects or introduce any regressions. As I said, I try to test the patches myself if I have the time or if the patch is critical enough, to see how it works and whether it introduces a regression. I also check that the code is performant and secure, and I always try to ask for tests if I think it is possible to write them for the patch. Finally, I look for things like overall code quality, documentation, coding style, and contribution process correctness.

You reviewed one of my large patches to integrate Firefox Accounts into Firefox OS. You placed much more of an emphasis on consistency than any review I have had. By far.

Well it certainly helps with overall code quality. When I do reviews, I mark these kinds of comments as “nit:” which is quite common in Mozilla, meaning that “I would like to see that changed, but you still get my positive review if you don’t change it, but I would really like to see them changed.”

Two part question. As a reviewer, how can you ensure that your comments are not taken too personally by the developer? The second part is, as a developer, how can you be sure that you don’t take it too personally?

For the record, I have received quite a few hard reviews myself. I never take them personally. I mean, I always try to take the reviews as a positive learning experience. I know reviewers usually don’t have a lot of time in their lives to do reviews; they also have to write code. So, they just quickly write “needs to be fixed” without spending too much time thinking about the nicest way to say it. Reviewers only comment on the things that they consider are not correct in your code; they don’t usually mention the things that are correct, and I know that can be hard at first.

But once you start doing it, you understand why they don’t do that. I mean, you have your own work to do. This is actually especially hard for me, being a non-native English speaker, because sometimes I try to express things in the nicest way possible but the lack of words makes the review comments sound stronger than they were supposed to be. What I try to do is use a lot of smileys if possible. And I always try to avoid the “r-” flag; “r-” is really bad. I just clear it and use “feedback+” or whatever.

You already mentioned that you try to take it as a learning experience whenever you are developer. Do you use review as a potential teaching moment if you are the reviewer?

Yeah, for sure. I mean just the simple fact of reviewing a patch is a teaching experience. You are telling the coder what you think is more correct. Sometimes there is a lack of theory and reasons behind the comments, but we should all do that, we should explain the reasons and try to make the process as good as possible.

Do you have a snippet of code, from you or anybody else, that you think is particularly elegant that others could learn from?

I am pretty critical with my own code so I can’t really think about a snippet of code of my own that I am particularly proud enough to show :). But if I have to choose a quick example I was quite happy with the result of the last big refactor of the call log database for the Gaia Dialer app or the recent Mobile Identity API implementation.

What open source projects would you like to encourage people to participate in, and where can they go to get involved?

Firefox OS of course! No, seriously, I believe Firefox OS gives software engineers the chance to get involved in an amazing open source community with tons of technical challenges, from low level to front end code. Having the chance to dig into the guts of a web browser and a mobile operating system in such an open environment is quite a privilege. It may seem hard at first to get involved and jump into the process and the code, but there are already some very nice Firefox OS docs on the MDN and a lot of nice people willing to help on IRC (#b2g and #gaia), the mailing lists (dev-b2g and dev-gaia) or ask.mozilla.org.

How can people keep up to date about what you are working on?

I don’t have a blog, but I have my public GitHub account and my Twitter account.

Transcript

A huge thanks to Fernando for doing this interview.

The full transcript is available on GitHub.

Next article

In the next article, I interview Brian Warner from the Cloud Services team. Brian is a security expert who shares his thoughts on architecting for security, analyzing threats, “belts and suspenders”, and writing code that can be audited.

As a parting note, I have had a lot of fun doing these interviews and I would like your input on how to make this series useful. I am also looking for Mozillians to interview. If you would like to nominate someone, even yourself, please let me know! Email me at stomlinson@mozilla.com.

