
First impressions of my HoloLens

Chris Heilmann with his HoloLens

Also available on Medium.

I am now the proud owner of a HoloLens. I am not officially trained up on it yet as a Microsoft employee, but I wanted to share my first impressions of setting it up and using it.

These are my personal impressions and not an official stance by my company. I’m sharing my first excitement here, and I hope it can help some people understand what is happening.

This is also just a user POV as I haven’t started developing for it yet, but this will happen soon – promised.

HoloLens is unique

First of all, it is important to understand that HoloLens is something pretty unique. Every time I mention it, people start making comparisons to the Oculus or Vive, but those don’t work.

A high-powered, multi camera mobile on your head

HoloLens is a self-contained computer you wear on your head. You don’t need anything else. It is not a peripheral, there is no other computer or server necessary. This is important when considering the price. Many VR headsets are much cheaper, but they aren’t Mixed Reality and they need a hefty computer to run. It doesn’t even need an internet connection all the time. Just because you wear them on your head doesn’t mean you can compare these products on even ground.

You should plan coding for it like a mobile phone on your head in terms of CPU/GPU power. The specs are high, but the demands of the way it works are, too. If you build for HoloLens, be conservative with the resources you need – you’ll make me happy. Waiting isn’t fun, even when it is a floating animation in your room.

Your natural movement is an event

Anke calibrating HoloLens
When you’re calibrating your HoloLens and all the dog can think of is you holding a treat instead of using the “bloom”

HoloLens is a system that uses the natural motion of your head and body to explore an augmented space. This means you don’t lose connection to the real world – you still see it through the device. What you get is a constant analysis of your surroundings and Holograms overlaid on it. You open apps and either use them floating before you or dock them to a wall to use later when you look at said wall. You distribute your work space in your living space – without needing to go to IKEA to buy furniture.

This means the way you move and where you look become events software can interact with. The “gaze” gesture, which is “looking at something”, is akin to a hover with a mouse. The “air tap” gesture is a click or submit.

That way the relatively small size of your viewport compared to Oculus or Vive is less relevant. You’re not stuck in it, as your viewport follows your head movement. You’re not supposed to have a whole takeover. HoloLens is there to augment the world – not replace it.

Your whole body is now an event trigger. Instead of learning keyboard shortcuts, you learn gestures. Or you can use your voice.

Gestures vs. Voice Recognition

You can use your hands to select and interact with things. Or you can say “select” to interact and “next” to move on in menus. Voice recognition is always on and Cortana is just one “Hey Cortana” away. You can use it to open apps, search the web, do research, all kinds of things. It still feels odd to me to talk to my phone – I am on Android; maybe Siri is a better experience. It feels more natural to talk to a voice in a space of apps I distributed around my house.

Spatial sound

HoloLens has a lot of speakers built in which allows you to hear sounds from all directions. This is pretty amazing when it comes to games like RoboRaid:

And even more so in Fragments:

When using the speakers, there is not much privacy though. It is pretty easy to hear what HoloLens says when you are close to someone wearing it.

If you want a keyboard, you can have one

If you enter a lot of text into websites or something similar, you can also pair a Bluetooth keyboard. Or a clicker, or whatever. At first it annoyed me to enter my pretty secure passwords on a floating keyboard, but the more I got used to the HoloLens interaction, the easier it became.

A whole new way of interaction

I’m not a big fan of VR because I am prone to getting nauseous if the frame rate isn’t perfect. I also get car sick a lot, so it isn’t something to look forward to. I also feel confined by it – it fills me with dread to miss out on things around me whilst being in a virtual space. I don’t like blindfolds and earplugs either.

The only discomfort I felt from HoloLens was having something weighing close to a kilogram on my head. But you get used to it. At the beginning you will also feel your fingers cramp up during air tapping and your shoulders hurt. This means you are doing it wrong. The more naturally you move, the easier it is for HoloLens to understand you. An air tap doesn’t need a full movement. Consider lifting your finger and pointing at stuff – just like interrupting a meeting.

An outstanding onboarding experience

What made me go “wow” was the way you set up and start working with the HoloLens. The team did an incredible job there. The same way Apple did a great job getting people used to using a touch device back when the iPhone came out. Setting up a HoloLens is an experience of discovery.

You put on the device and a friendly “hello” appears with Cortana’s voice telling you what to do. You get to set up the device to your needs by calibrating it to your eyes. Cortana tells you step by step how to use the gestures you need to find your way around. Each step is full of friendly “well done” messages. When you get stuck, the system tells you flat out not to worry and come back to it later. It is an enjoyable learning experience.

How I use it

Putting cat Holograms on the dog

Right now, I have my kitchen cupboards as my work benches. Edge is on one of them and next to it is my task list of the day. I have a few games on the other side of the room. When it comes to Holograms, there is a cat on our dog’s bowl and a Unicorn above the bed to give us nice dreams. Because we can.

Skype is pretty amazing on HoloLens:

Some niggles I have

It is important to remember a few things about the HoloLens:

  • It isn’t a consumer device but, for now, a B2B tool. On the one hand there is the high price; on the other, a focus on working with it rather than playing games.
  • It is not an outside device. HoloLens scans your environment and turns it into meshes. Once it has created the meshes it stores them in “spaces”, avoiding the need to keep scanning. Outside, this would mean constantly re-evaluating the space, which is expensive and not worthwhile. So there is no danger of a re-emergence of the annoying Google Glass people in the street. It still is disconcerting to look at someone not knowing whether they are filming you or not.
  • I agree with a lot of other people that there should be a way to have several user accounts with stored calibration info on a single HoloLens. Whilst you can share experiences with other HoloLens users, it would be great to hand it over without recalibrating and without giving someone else access to my Windows account.
  • There should be a way to wipe all the Holograms in a space with a single command. When you let other people play with your device you end up with lots of tigers, spacemen and all kind of other things in your space that you need to delete by hand.
  • Whilst it is easy to shoot video and take pictures of your experience, the sharing experience is very basic. You can store it to OneDrive or Facebook, but there is no option to mail it or to add Twitter. That said, Skype helps with that.

This is truly some next level experience

I am sure that there are great things to come in the VR/AR/MR space. Many experiences might be much more detailed and hi-fi. Yet I am blown away by the usefulness of this device. I see partners and companies already using it to plan architectural projects. I see how people repair devices in the field with Skype instructions from the office. I get flashbacks to Star Trek’s Holodeck – something I loved as a teen.

It is pretty damn compelling to be able to use your physical space as a digital canvas. You don’t have to leave your flat. And you don’t run the danger of bumping into things while you are off into cyberspace. It is augmentation as it should be. In a few years I will probably chuckle at this post when my cyber contacts and ear piece do the same thing for me. But for now, I am happy I had the chance to try this out.


First Decoded Chat of the year: Paul Bakaus on AMP

Today on the Decoded Blog I published the first ever Decoded Chat I recorded, in which I grilled Paul Bakaus in detail about AMP.

This is an hour-long Skype call and different from the newer ones – I was still finding the format :). Quite a few changes have happened to AMP since then, and soon there will be an AMP Summit to look forward to. All in all, I do hope that this will give you some insight into what AMP is and what it could be if its focus were to move away from “Google only”.

These are the questions we covered:

  1. What is AMP to you?
  2. The main focus of AMP seems to be mobile, is that fair to say?
  3. Was AMP an answer to Facebook’s and Apple’s news formats? Does it rely on Google technology and – if so – will it be open to other providers?
  4. It seems that the cache infrastructure of AMP is big and expensive. How can we ensure it will not just go away as an open system as many other open APIs vanished?
  5. Do large corporations have a problem finding contributors to open source projects? Are they too intimidating?
  6. Is there a historical issue of large corporations re-inventing open source solutions to “production quality code”? Is this changing?
  7. Whilst it is easy to get an AMP version of your site with plugins to CMS, some of the content relies on JavaScript. Will this change?
  8. AMP isn’t forgiving. One mistake in the markup and the page won’t show up. Isn’t that XHTML reinvented – which we agreed was a mistake?
  9. AMP seems to be RSS based on best practices in mobile performance. How do we prevent publishers from exclusively creating AMP content instead of fixing their broken and slow main sites?
  10. It seems to me that AMP is a solution focused on CMS providers. Is that fair, and how do we reach those to allow people to create AMP without needing to code?
  11. A lot of “best practice” content shown at specialist events seems to be created for those. How can we tell others about this?
  12. AMP seems to be designed to be limiting. For example, images need a height and width, right?
  13. In terms of responsive design, does the AMP cache create differently sized versions of my images?
  14. Are most of the benefits of AMP limited to Chrome on Android or does it have benefits for other browsers, too?
  15. Do the polyfills needed for other browsers slow down AMP?
  16. How backwards compatible is AMP?
  17. One big worry about publishing in AMP is that people are afraid of being fully dependent on Google. Is that so?
  18. Are there any limitations to meta information in AMP pages? Can I add – for example – Twitter specific meta information?
  19. Do AMP compatible devices automatically load that version and – if not – can I force that?
  20. How can I invalidate the AMP cache? How can I quickly remove content that is wrong or dangerous?
  21. Right now you can’t use third party JavaScript in an AMP page. Are you considering white-listing commonly used libraries?
  22. It seems AMP is catered to documents, while most people talk about making everything an App. Is this separation really needed?
  23. What’s the sandbox of AMP and how is this now extended to the larger web as a standard proposal?


Decoded Chats – first edition live on the Decoded Blog

Over the last few weeks I was busy recording interviews with different exciting people of the web. Now I am happy to announce that the first edition of Decoded Chats is live on the new Decoded Blog.

Decoded Chats - Chris interviewing Rob Conery

In this first edition, I’m interviewing Rob Conery about his “Imposter Handbook”. We cover the issues of teaching development, how to deal with a constantly changing work environment and how to tackle diversity and integration.

We’ve got eight more interviews ready and more lined up. Amongst the people I talked to are Sarah Drasner, Monica Dinculescu, Ada-Rose Edwards, Una Kravets and Chris Wilson. The format of Decoded Chats is pretty open: interviews ranging from 15 minutes to 50 minutes about current topics on the web, trends and ideas with the people who came up with them.

Some are recorded in a studio (when I am in Seattle), others are Skype calls and yet others are off-the-cuff recordings at conferences.

Do you know anyone you’d like me to interview? Drop me a line on Twitter @codepo8 and I’ll see what I can do 🙂


Microsoft’s first web page and what it can teach us

Today Microsoft released a re-creation of their first web site from 20 years ago, complete with a readme.html explaining how it was done and why some things are the way they are.

microsoft's first web site

I found this very interesting. First of all because it took me back to my beginnings – I built my first page in 1996 or so. Secondly, it is an interesting reminder of how creating things for the web has changed over time whilst our mistakes and misconceptions stayed the same.

There are a few details worth mentioning in this page:

  • Notice that whilst it uses an image for the whole navigation, the text in the image is underlined. Back then the concept of “underlined text = clickable and taking you somewhere” was not quite ingrained in people. We needed to remind people of this new concept, which meant consistency was king – even in images.
  • The site is using the ISMAP attribute and a server-side CGI program to turn the x and y coordinates of the click into a redirect. I remember writing these in Perl and it is still a quite cool feature if you think about it. You get the same mouse tracking for free if you use input type=image, as that tells you where the image was clicked as form submission parameters (see the sketch after this list).
  • Client-side image maps came later and were a pain to create. I remember first using Coffeecup’s Image Mapper (and being super excited to meet Jay Cornelius, the creator, later at the Webmaster Jam Session when I was speaking there) and afterwards Adobe’s ImageReady (which turned each layer into an AREA element).
  • Table layouts came afterwards and, boy, this kind of layout would have been one hell of a complex table to create with spacer GIFs, colspan and rowspan.
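
To make the difference between those approaches concrete, here is a rough sketch – the file names and coordinates are made up, not taken from Microsoft’s page. A server-side ISMAP image sends the click coordinates to a CGI script that answers with a redirect, a client-side map resolves the click in the browser, and input type=image gives you the coordinates as form parameters:

<!-- server-side map: the browser appends ?x,y to the link, the CGI script redirects -->
<a href="/cgi-bin/navigate.pl"><img src="navigation.gif" ismap alt="Navigation"></a>

<!-- client-side map: the browser matches the click against AREA coordinates itself -->
<img src="navigation.gif" usemap="#nav" alt="Navigation">
<map name="nav">
  <area shape="rect" coords="0,0,120,40" href="products.htm" alt="Products">
  <area shape="rect" coords="120,0,240,40" href="support.htm" alt="Support">
</map>

<!-- input type=image: the click position is submitted as nav.x and nav.y -->
<form action="/cgi-bin/navigate.pl">
  <input type="image" name="nav" src="navigation.gif" alt="Navigation">
</form>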

And this, to me, is the most interesting part here: one of the first web sites created by a large corporation makes the most basic mistake in web design – starting with a fixed design created in a graphical tool and trying to create the HTML to make it work. In other words: putting print on the web.

The web was meant to be consumed on any device capable of HTTP and text display (or voice, or whatever you want to turn the incoming text into). Text browsers like Lynx were not uncommon back then. And here is Microsoft creating a web page that is a big image with no text alternative. Also worth mentioning is that the image is 767 × 513 pixels. Back then I had a computer capable of a 640 × 480 pixel resolution and browsers didn’t scale pictures automatically. This means that I would have had quite horrible scrollbars.

If you had a text browser, of course there is something for you:

If your browser doesn’t support images, we have a text menu as well.

This means that this page is also the first example of graceful degradation – years before JavaScript, Flash or DHTML. It means that the navigation menu of the page had to be maintained in two places (or with a CGI script on the server). Granted, the concept of progressive enhancement wasn’t even spoken of and, with the technology of the day, was almost impossible (could you detect if images are supported and then load the image version and hide the text menu? Probably with a beacon…).
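
To illustrate the beacon idea from that aside – purely hypothetical, this is not what the 1994 page did – you could load a tiny probe image and only swap in the graphical menu once it loads:

// Hypothetical image-support check: the element IDs and beacon file are made up.
var probe = new Image();
probe.onload = function () {
  // Images load fine, so hide the text menu and show the image map version.
  document.getElementById('textmenu').style.display = 'none';
  document.getElementById('imagemenu').style.display = 'block';
};
probe.src = 'beacon.gif';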

And this haunts us to this day: the first demos of web technology already tried to be pretty and shiny instead of embracing the unknown that is the web. Fixed layouts were a problem then and they still are. Trying to make them work means a lot of effort and maintainability debt. This gets worse the more technologies you rely on and the more steps you put in between what you code and what the browser is meant to display.

It is the right of the user to resize a font. It is completely impossible to make assumptions about the abilities, screen sizes, connection speeds or technical setups of the people we serve our content to. As Brad Frost put it, we have to Embrace the Squishiness of the web and leave breathing room in our designs.

One thing, however, is very cool: this page is 20 years old, the technology it is recreated in is the same age. Yet I can consume the page right now on the newest technology, in browsers Microsoft never dreamed of existing (not that they didn’t try to prevent that, mind you) and even on my shiny mobile device or TV set.

Let’s see if we can do the same with Apps created right now for iOS or Android.

This is the power of the web: what you build now in a sensible, thought-out and progressively enhanced manner will always stay consumable. Things you force into a more defined and controlled format will not. Something to think about. Nobody stops you from building an amazing app for one platform only. But don’t pretend what you did there is better or even comparable to a product based on open web technologies. They are different beasts with different goals. And they can exist together.


My first patch for Gaia, the UI in Firefox OS

There are many ways of contributing to the Firefox OS project and most of these ways do not involve writing code. However, if you are a developer, there is nothing as sweet and satisfying as getting your clean patch pulled into the project. With all the excitement and energy around the Firefox OS platform, I decided it was time for me to make the leap and learn how to hack Gaia — the Firefox OS interface.

There is already an excellent post about hacking Gaia for Firefox OS, and the MDN docs for hacking Gaia are the best starting point.

This post takes a slightly different path, and my focus is more on my experience getting started with hacking Gaia and testing my changes using both Firefox browser and the Geeksphone Keon. I cover the process I went through to choose a bug, write a proposed fix, and submit a pull request to Gaia.

Choosing a bug

Firefox OS uses Bugzilla to keep track of bugs. I had occasionally used Bugzilla before, but this was my first time actually using it to find a bug to work on. I understand that it is very robust, but the UI can be a bit intimidating for newcomers like me. However, once I spent a bit of time with the various search options, I felt pretty comfortable with it. I ended up picking a bug about a UX enhancement in the clock app. I chose this issue in particular because it is relatively simple and low profile, so I could work on it with little pressure. Here is the issue description that was provided in the comments:

The layout of the alarm list needs adjustment so the user can tell there are more alarms than just 4.

Let’s take a look at the clock app and add some alarms to understand what the issue is.

There is no way to see if there are more alarms above or below these four

There are more than four alarms in the above screenshot, but there is no way to know that visually before starting to scroll, and that is the issue. The bug did not provide instructions on how to fix the problem. I bounced a few ideas off another Mozillian about how to solve it and decided to use gradients on the top and bottom edges to indicate the presence of more content. I had seen this type of interaction implemented in other apps before, and it works well.

Proposed solution

I say proposed solution because I don’t know yet if what I am implementing is acceptable and appropriate. However, it is a lot less work to implement a proposed solution and let people try it than to track down all the parties involved and convince them that you have a good idea for solving a UX issue. Even if this solution does not get accepted, the PR will (hopefully) promote some discussion about what would work best. Anyhow, to fix this problem I propose adding two visual elements to the top and bottom of the alarm list. We will use those elements to give the user a visual clue that there is more content in that direction. Next we will open the clock app within Gaia in the Firefox browser to inspect it and figure out what it is made of.

Running Gaia in Firefox

Firefox dev tools are getting better by the day (or the night), and I really like some of the new features. I was very happy to learn that I could debug Firefox OS apps in Firefox.

Currently I know of four different ways to run Firefox OS apps:

  • in the Firefox browser, using a Gaia desktop profile
  • in the Firefox OS Simulator
  • in the B2G desktop client
  • on a physical device (in my case the Geeksphone Keon)

Each of these ways has its own pros and cons. At the moment, using the Firefox browser is the best way for modifying CSS, inspecting elements, and stepping through the JavaScript code. I know that there is work underway to bring remote debugging to the simulator, which will hopefully become available very soon. Until then, using Firefox to open a Gaia desktop profile is the next best thing. There are two important disadvantages to using this method. Firstly, we don’t have access to many of the phone APIs, such as the orientation sensor or notifications. Secondly, we are using the latest build of Firefox to get the best developer tools, but as a result we will be using a different version of Gecko than the one used by Firefox OS, which in some cases may be incompatible. That being said, as long as we can write up our patch using dev tools and then test the app on a physical device, we can be pretty sure that the fix is compatible.

Regardless of which method you choose to run Gaia apps, you need to build Gaia and generate a suitable profile. By default, the make command generates a profile directory for a physical phone. I wanted to start inspecting the clock app in the browser, so I used the DEBUG=1 option to generate a Gaia profile suitable for the desktop. By default the DEBUG profile will be created in the gaia/profile-debug folder. After a few minutes make will finish and the last line in the terminal will read something like:

    Profile Ready: please run [b2g|firefox] -profile /home/user_name/Projects/gaia/profile-debug
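
For completeness, the build step that produces this output is just make with the DEBUG flag set, run from the root of the Gaia checkout – a minimal sketch of the invocation:

gaia$ DEBUG=1 make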

On my system Firefox Nightly is installed in /usr/bin/firefox-trunk, so I will run the following command to get Gaia running in the browser:

/usr/bin/firefox-trunk -profile /home/user_name/Projects/gaia/profile-debug http://clock.gaiamobile.org:8080

The last argument http://clock.gaiamobile.org:8080 is optional and is used to specify the name of the app you want to run. Instead of `clock`, you can specify the name of any of the certified apps included in Gaia, or any other application that you may add to the `apps` folder in Gaia. For more information about how to do this, check out using Gaia in Firefox.

Making my changes

Before getting any further, let’s take care of one very important first step — creating and checking out a branch for the fix.

gaia$ git checkout -b bug-873574-alarm-scroll

Now it was time to open the clock app in Firefox Nightly. Right away I started to inspect elements in the app and look at their styles. Since I had a rough idea of the kind of visual effect I was looking for, I started playing around with CSS right in the browser. I used two pseudo-elements with a gradient background, one for the top of the alarm list and another for the bottom. Here is a snapshot from when I was trying to position the pseudo-elements properly.

Here is the CSS I ended up with for creating the gradient at the top of the alarm list. The code for the bottom gradient is almost the same, except that it uses the :after pseudo-element and slightly different margins (a sketch of that variant follows after the block below).

#alarms:before {
  /* invisible overlay at the top of the list; the gradient only shows via .scroll-up */
  content: '';
  pointer-events: none;
  background: none;
  position: fixed;
  width: 100%;
  margin-left: -1.5em;
  z-index: 10;
  height: 6em;
}
 
#alarms.scroll-up:before {
  background: -moz-linear-gradient(bottom, rgba(16, 17, 17, 0) 0%, rgba(16, 17, 17, 1) 100%);
  background: linear-gradient(to top, rgba(16, 17, 17, 0) 0%, rgba(16, 17, 17, 1) 100%);
}
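
For illustration, the bottom counterpart could look roughly like this – the margins and positioning are illustrative rather than copied from the patch, but it mirrors the block above using the :after pseudo-element and the scroll-down class that the JavaScript below toggles:

#alarms:after {
  /* invisible overlay at the bottom of the list; the gradient only shows via .scroll-down */
  content: '';
  pointer-events: none;
  background: none;
  position: fixed;
  bottom: 0;
  width: 100%;
  margin-left: -1.5em;
  z-index: 10;
  height: 6em;
}

#alarms.scroll-down:after {
  background: -moz-linear-gradient(top, rgba(16, 17, 17, 0) 0%, rgba(16, 17, 17, 1) 100%);
  background: linear-gradient(to bottom, rgba(16, 17, 17, 0) 0%, rgba(16, 17, 17, 1) 100%);
}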

Next, I started looking into the JavaScript code behind the clock app. Gaia apps are currently written using a bare-metal approach, without the use of any large libraries or frameworks. That is an interesting topic of discussion, and I would love to see a post looking at the pros and cons of this approach. As a frontend developer it does not worry me too much yet. It is too soon to say anything; I have to keep looking at the code and hope to identify some common programming patterns and figure out how abstraction is achieved.

Now that the visual elements are created, I just need to add a function to toggle the CSS classes on the alarm list element as appropriate. I hook this function up to the scroll event, so every time the user scrolls, it will re-evaluate the need for indicators. It will also need to run whenever an alarm is added to or removed from the list. You can see how the function is hooked up in the PR commit; a rough sketch of the wiring also follows after the function below.

showHideScrollIndicators: function al_showHideScrollIndicators() {
    var threshold = 10; // hide indicators when scroll is close enough to an end
    var element = this.alarms;
 
    if (element.scrollTop < threshold) {
      element.classList.remove('scroll-up');
    } else {
      element.classList.add('scroll-up');
    }
 
    if (element.scrollTop > element.scrollTopMax - threshold) {
      element.classList.remove('scroll-down');
    } else {
      element.classList.add('scroll-down');
    }
  },
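
As a rough sketch of that wiring – the element ID comes from the CSS above, but treating this as part of an init() method is my assumption; the actual hook-up is in the PR commit:

// Illustrative wiring, e.g. inside the alarm list's init() method.
// Re-check the indicators on every scroll and once right away;
// the same call is also needed after alarms are added or removed.
this.alarms = document.getElementById('alarms');
this.alarms.addEventListener('scroll',
  this.showHideScrollIndicators.bind(this));
this.showHideScrollIndicators();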

Here is an example screenshot showing the subtle gradient at the bottom of the list.

alarm_list_indicator_example

Validating JavaScript

Since I made some JavaScript changes, I wanted to validate them with lint. The lint script in Gaia currently relies on the Google Closure Linter. In order to install that on Debian-based systems we need easy_install, which is bundled inside Python setuptools. After that we can install gjslint following these instructions.

$ sudo apt-get install python-setuptools
$ cd /tmp
/tmp$ sudo easy_install http://closure-linter.googlecode.com/files/closure_linter-latest.tar.gz

Once you have gjslint installed, you are good to go. Just recently a pre-commit hook was added to the project, so that before each commit it will check the files you have changed with gjslint. If there are any errors, it will point them out to you and stop the commit.

Previewing the changes

You can preview your changes before committing, as well as before submitting your patch, to make sure you are not submitting any unintended changes. I like to use a little GTK-based app called gitg to preview my changes quickly in a nice interactive GUI. Just type gitg on the command line from the Gaia root directory. An even simpler way to view your changes is the git diff command.
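
For example, to see everything on your branch that differs from master (plain git, nothing Gaia-specific):

gaia$ git diff master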

Submitting the PR

Once you are happy with your commits you need to push them to your own fork so you can make a pull request. Unless the PR consists of a single commit, you will want to squash the commits into one by running:

gaia$ git rebase -i master

That will take you through an interactive rebase and let you specify a new commit message. Give the commit an appropriate message such as “Bug 873574 proposed fix r=person_to_review”. One thing to note is that when you update your PR, you can completely update the commit message. Use rebase to keep your PR to only one commit, so it is nice and clean for the reviewers. Keep your commit message short and relevant; it should have the bug number and a brief description of what you have done to fix it. Once the rebase is done, we can push the changes to our remote branch. The first time around, that branch is used to send a pull request to upstream Gaia; the same branch is used later on to send subsequent updates to the PR.

gaia$ git push origin bug-873574-alarm-scroll -f

Asking for review

The primary channel to request review for Gaia patches seems to be the Bugzilla issue. GitHub does not integrate smoothly with Bugzilla, and linking to the PR involves a bit of a manual process. From what I understand, it is a convention to attach a small HTML file to the bug report containing a link to your pull request. Make sure to set the file type to text/html. Here is what I used, based on examples I saw in other bug reports:

<!DOCTYPE html>
<meta charset="utf-8">
<meta http-equiv="refresh" content="1;https://github.com/mozilla-b2g/gaia/pull/10336/">
<title>Bugzilla Code Review</title>
<p>Redirecting to <a href="https://github.com/mozilla-b2g/gaia/pull/10336/">» pull request on github</a></p>

I learnt that you can specify someone to review your PR right from the commit message, but I don’t know if that is actually being used. As someone who is totally new to the project, it is a bit hard to guess who would be an appropriate person to ask for a review. One way to find out is to use git blame on the areas of the code that you have changed and find out who has worked on them recently. Another way that was suggested to me is to choose someone from the owners and peers for the Gaia module. When you are attaching the above HTML file, you get a chance to enter the name of that person in a review tag – make sure to take advantage of that.

I submitted my first PR to Gaia. Woot woot! 🙂

Updating the PR

Yup, it was not perfect ;). Chances are, you may also get a request to update your PR with some changes. A PR is often a starting point for a discussion about a fix. I received some comments from the UX team to change the fade color as well as some other comments about the code. I checked out my branch for that bug fix again and made those changes — making sure to rebase the commits after each update, so the PR remains a single commit.

Now I wait — patiently

One thing about trying to contribute to a very active project such as Gaia is that you may not always get quick feedback or a review. At the time I submitted my PR there were nearly 390 open PRs already in the project. I have been told that most of those are stalled, but I think it still shows that the core team has a lot to process. Since the bug I chose is not a high priority, the fix most likely will not make the cut for the next release, but hopefully it will be pulled in after v1 is shipped. Gaia is still a young project and it is a very exciting time to join in and contribute.

Thank You

Several people, both on the #gaia IRC channel and local Mozillians in Vancouver, have helped me with the various steps along the way. I especially want to thank James Burke, who guided me through flashing my phone with the latest version of B2G and gave me a lot of great advice about working with Gaia and JavaScript development in general.

Closing

As a frontend developer who is very excited about developing apps for the Firefox OS platform, I want to learn the secrets and successful patterns for developing apps that look sharp and perform well. Tutorials, guides and documentation are very useful, but why would I go there if I can get what I am looking for directly from the source — the Gaia project, that is. If you have not done so already, I encourage you to give it a shot and see if you can make a difference while you learn a few cool tips and tricks. I look forward to your comments and suggestions.


First video of a Firefox OS series is live

Over the last few weeks I have been busy scripting (and then improvising, as always) a series of videos explaining Firefox OS. These are now going live on a weekly basis.

be-the-future

Over on the Mozilla hacks blog, you can now find the first in a series of six videos explaining what Firefox OS is about. Under the description “Firefox OS – the platform HTML5 deserves” (a slogan I used in a few talks and interviews already) these videos are meant to explain a few things:

  • What Firefox OS is
  • How it is different to any other mobile platform
  • What it means for HTML5 as a movement
  • How you can be part of it
  • What its benefits are to you (a stable HTML5 platform with full hardware access aimed at a completely new and huge market of end users)
  • How to get started
  • Where to find documentation and file complaints and enhancement ideas

All in all, we thought a series of videos would be a good way to get the message out that scales better than talks and posts. Each of the videos will be about five minutes long and an interview/conversation between experts and me, in this case Daniel Appelquist (@torgo) from Telefónica Digital / W3C and Desigan Chinniah (@cyberdees) from Mozilla. The videos were shot over a period of two days in the London Mozilla office by Rainer Cvillink, who did an amazing job.

Get over to the hacks blog and see the first video now, and feel free to spread it as far and wide as you can. Cheers.



WebRTC Update: Our first implementation will be in release soon. Welcome to the Party! But Please Watch Your Head.

I want to share some useful and exciting updates on Firefox’s WebRTC implementation and provide a sneak peek at some of our plans for WebRTC moving forward. I’ll then ask Adam Roach, who has worked in the VoIP/SIP space on IETF standards for over a decade and who joined the Mozilla WebRTC team in November, to provide some historical background on the WebRTC feature itself and how things are progressing in general.

Getting ready to release our first implementation of WebRTC on Desktop

Firefox has made significant progress with our initial implementation of WebRTC on Desktop in terms of security and stability. For the last several weeks, PeerConnection and DataChannels have been pref’d on by default in Nightly and Aurora, and we expect to keep them pref’d on for all release phases of Firefox 22 (which means we expect WebRTC will go to Aurora, Beta and General Release).

We also got a chance to update the DataChannels implementation in Firefox 22 to match the recent spec changes (agreed to at IETF last month). Note: the changes we needed to make to comply with the spec changes are not backwards compatible with previous DataChannels implementations (in Firefox 21 or earlier). So please use Firefox 22 and later for testing your DataChannels apps.

TURN support has landed

And there’s more good news. We just added TURN support to Firefox Nightly and are in the process of testing it. This is a big deal since TURN will increase the likelihood that a call will successfully connect, regardless of the types of NATs that the end points are behind.

TURN (Traversal Using Relays around NAT) is a standard for managing (allocating, using, and destroying) a relay session on a remote external server. This relay session enables WebRTC to connect when the NATs at both ends would otherwise cause the call to fail. Because TURN can introduce delay, especially if the TURN server is remote to both endpoints, and because TURN servers can be expensive (they have to handle all the media flowing during a call), ICE typically uses TURN only when other methods (like STUN) to get media flowing during a call fail to work.
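
For illustration only – the server URLs and credentials below are placeholders, and the prefixed constructor reflects Firefox builds of that era – a TURN server is handed to the browser as part of the ICE server configuration when creating a peer connection:

// Placeholder STUN/TURN servers; not real endpoints.
var configuration = {
  iceServers: [
    { url: 'stun:stun.example.org' },
    { url: 'turn:turn.example.org', username: 'webrtc', credential: 'secret' }
  ]
};
// Prefixed constructor as used by Firefox at the time.
var pc = new mozRTCPeerConnection(configuration);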

WebRTC on Firefox for Android is Ready for Experimentation and Feedback

Back in February at Mobile World Congress we showed exciting new demos of WebRTC calls on Firefox for Android. We have just finished landing this code in Nightly. The code for both getUserMedia (gUM) and PeerConnection is behind a pref (as Desktop was initially), but you can enable it by setting both the media.navigator.enabled pref and the media.peerconnection.enabled pref to “true” (browse to about:config and search for them in the list of prefs).

In the same list of prefs, you can also set media.navigator.permission.disabled to “true” to automatically give permission to access the camera/microphone and bypass the permission/selection dialog when testing gUM and WebRTC.
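
Once those prefs are flipped, a quick smoke test is a plain getUserMedia call. A minimal sketch using the prefixed APIs of the time – the constraints and the video element are just an example, not code from our test pages:

// Request camera and microphone with the prefixed API Firefox used back then.
navigator.mozGetUserMedia(
  { video: true, audio: true },
  function (stream) {
    // Attach the stream to a <video> element on the test page.
    var video = document.querySelector('video');
    video.mozSrcObject = stream;
    video.play();
  },
  function (error) {
    console.log('getUserMedia failed: ' + error);
  }
);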

This code is still in the early phases of testing, but we encourage you to try it, report back problems, and ask questions and provide feedback. Please be sure to mention “Android” if you are reporting a problem or providing feedback since we are tracking both Android and Desktop issues now. Like with Desktop, we will be working to improve, stabilize and expand this code over the next year.

WebRTC on Firefox OS Coming Next

Mozilla is also working hard to get this code running on Firefox OS. WebRTC on Firefox OS is not yet as far along as WebRTC on Firefox for Android, but the Firefox OS team is working to close the gap. You can follow Bug 750011 to track this work.

Future WebRTC Features

Since all our products share the Gecko engine, improvements and new features made to core “platform” or Gecko code typically benefit all our products. Over the next year we plan to make the following improvements:

  • Complete error and state reporting (spec compliant)
  • Recording API support
  • Persona integration
  • Multiple audio and video flows per Peer Connection (beyond 1 video flow and 1 audio flow)
  • Persistent permissions support (in the UX/UI)
  • AEC improvements
  • Improved call quality (especially audio latency)

We hope to do even more, but these are on the top of our list. We also have a WebRTC documentation and help page that we’re working to fill out over the next few weeks and then keep up-to-date. That will have links to what we’ve implemented and what we’re working on, as well as ways for contributors to get involved.

Moving Forward

Although we can’t wait to release our first implementation of WebRTC on Desktop (which is a huge milestone for Mozilla and the WebRTC feature itself), I am still encouraging all developers experimenting with WebRTC to continue to use Nightly for the foreseeable future because that is where the latest and greatest features from the spec land and bugs get fixed first — and that is where your feedback will be most helpful.

And even more importantly, the WebRTC spec itself is still being standardized. For more details on that, as well as some behind-the-scenes history on WebRTC, I hand the rest of this article off to Adam.

-Maire Reavy, Product Manager, Mozilla’s Media Platform Team

WebRTC is the Real Future of Communications

This is Adam.

About three years ago, my dear friend and VoIP visionary Henry Sinnreich spent some time over lunch trying to convince me that the real future of communications lay in the ability to make voice and video calls directly from the ubiquitous web browser. I can still envision him enthusiastically waving his smartphone around, emphasizing how pervasive web browsers had become. My response was that his proposal would require unprecedented cooperation between the IETF and W3C to make happen, and that it would demand a huge effort and commitment from the major browser vendors. In short: it’s a beautiful vision, but Herculean in scope.

Then, something amazing happened.

WebRTC sees the light of day

Over the course of 2011, the groundwork for exactly such IETF/W3C collaboration was put in place, and a broad technical framework was designed. During 2012, Google and Mozilla began working in earnest to implement the developing standard.

Last November, San Francisco hosted the first WebRTC expo. The opening keynote was packed to capacity, standing room only, with people spilling out into the hallway. During the following two days, we saw countless demos of nascent WebRTC services, and saw dozens of companies committed to working with the WebRTC ecosystem. David Jodoin shared with us the staggering fact that half of the ten largest US banks are already planning their WebRTC strategy.

And in February, Mozilla and Google drove the golden spike into the WebRTC railroad by demonstrating a real time video call between Firefox and Chrome.

So Where Are We?

With that milestone, it’s tempting to view WebRTC as “almost done,” and easy to imagine that we’re just sanding down the rough edges right now. As much as I’d love that to be the case, there’s still a lot of work to be done.

Last February in Boston, we had a joint interim meeting of the various standards working groups who are contributing to the WebRTC effort. Topics included issues ranging from the calling conventions of the WebRTC JavaScript APIs to the structure of how to signal multiple video streams – things that will be important for wide adoption of the standard. I’m not saying that the WebRTC standards effort is struggling. Having spent the past 16 years working on standards, I can assure you that this pace of development is perfectly normal and expected for a technology this ambitious. What I am saying is that the specification of something this big, something this important, and something with this many stakeholders takes a long time.

Current state of standards

Even if the standards work were complete today, the magnitude of what WebRTC is doing will take a long time to get implemented, to get debugged, to get right. Our golden spike interop moment took substantial work on both sides, and revealed a lot of shortcomings in both implementations. Last February also marked the advent of SIPit 30, which included the first actual WebRTC interop testing event. This testing predictably turned up several new bugs (both in our implementation and in others’), on top of those limitations that we already knew about.

When you add in all the features that I know neither Mozilla nor Google has begun work on, all the features that aren’t even specified yet, there’s easily a year of work left before we can start putting the polish on WebRTC.

We’re furiously building the future of communications on the Internet, and it’s difficult not to be excited by the opportunities afforded by this technology. I couldn’t be more pleased by the warm reception that WebRTC has received. But we all need to keep in mind that this is still very much a work in progress.

Come and play! But please watch your head.

So, please, come in, look around, and play around with what we’re doing. But don’t expect everything to be sleek and finished yet. While we are doing our best to limit how the changing standards impact application developers and users, there will be inevitable changes as the specifications evolve and as we learn more about what works best. We’ll keep you up to date with those changes here on the Hacks blog and try to minimize their impact, but I fully expect application developers to have to make tweaks and adjustments as the platform evolves. Expect it to take us a few versions to get voice and video quality to a point that we’re all actually happy with. Most importantly, understand that no one’s implementation is going to completely match the rapidly evolving W3C specifications for quite a while.

I’m sure we all want 2013 to be “The Year of WebRTC,” as some have already crowned it. And for early adopters, this is absolutely the time to be playing around with what’s possible, figuring out what doesn’t quite work the way you expect, and — above all — providing feedback to us so we can improve our implementation and improve the developing standards.

As long as you’re in a position to deal with minor disruptions and changes; if you can handle things not quite working as described; if you are ready to roll up your sleeves and influence the direction WebRTC is going, then we’re ready for you. Bring your hard hat, and keep the lines of communication open.

For those of you looking to deploy paid services, reliable channels to manage your customer relationships, mission critical applications: we want your feedback too, but temper your launch plans. I expect that we’ll have a stable platform that’s well and truly open for business some time next year.

Credits: Original hardhat image from openclipart.org; Anthony Wing Kosner first applied the “golden spike” analogy to WebRTC interop.
