WebRTC

WebRTC: Sending DTMF in Firefox

One of the features defined in WebRTC is the ability to send DTMF tones (popularly known in some markets as “touch tones”). While this has basically no purpose in the browser-to-browser case, it is somewhat important when using WebRTC to initiate calls to the legacy telephone network: many companies still use voice menu systems that require callers to send DTMF digits to indicate why they are calling, input credit card numbers and passcodes, and perform similar tasks.

Until recently, developers had expressed very little interest in making use of this interface; as a consequence, it has been a relatively low priority for the Firefox WebRTC team. Over the past few weeks, though, there has been a surprising spike in queries about the availability of RTCDTMFSender. While there is no fixed milestone for implementing it, the feature does remain on our roadmap.

In the meantime, there is a reasonable stop-gap approach that will work in the vast majority of use cases. Through the use of WebAudio oscillators, it is possible to synthesize DTMF tones and mix them into an audio stream. It is worth noting that this results in behavior slightly different from that described in the specification: rather than using RFC 4733 to send DTMF, this approach will actually encode the tones using the audio codec in use (for telephone gateways, this is almost always G.711). In practice, this works fine in almost all cases.
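To make that concrete, here is a minimal sketch (separate from the full implementation at the end of this post) that synthesizes the digit “5” by playing its two DTMF frequencies, 1336 Hz and 770 Hz, into a stream for 100 ms:

var ctx = new AudioContext();
var dest = ctx.createMediaStreamDestination();

[1336, 770].forEach(function(freq) {
  var osc = ctx.createOscillator();
  osc.frequency.value = freq;      // one of the digit's two DTMF frequencies
  osc.connect(dest);
  osc.start(0);
  osc.stop(ctx.currentTime + 0.1); // stop after 100 ms
});

// dest.stream now carries the tone, and can be mixed with microphone
// input and/or added to an RTCPeerConnection.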

I have included an example implementation of this approach at the end of this post. For versions of Firefox prior to 44, applications will need to explicitly construct the DTMFSender with the stream they want to mix DTMF into, and then retrieve a new stream from the DTMFSender to add to the RTCPeerConnection (or wherever they want to send the DTMF tones). For example:

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function(micStream) {
    var sender = new DTMFSender(micStream);

    /* Now that we have a stream that represents microphone
       input mixed with the DTMF ("sender.outputStream"), we
       can do whatever we want with it. This example plays
       it locally, but you could just as easily add it to
       a PeerConnection. */

    var audio = document.createElement("audio");
    document.body.appendChild(audio);
    audio.mozSrcObject = sender.outputStream;
    audio.play();

    sender.ontonechange = function(e) {
      console.log(JSON.stringify(e));
    }
    sender.insertDTMF("2145551212,1");
  });

That’s admittedly a bit clunky. Fortunately, starting with Firefox 44, the addition of the ability to construct a MediaStream directly from a MediaStreamTrack gives us a way to transparently polyfill the DTMF sender: we intercept calls to addTrack(), create a DTMFSender, swap out the original track with the new one containing the DTMF generator, and attach the DTMFSender to the RTCRTPSender object where it belongs.

You can demonstrate this by including the DTMFSender object and then running through a basic local audio call:

/* Note: Requires Firefox 44 or later */
var pc1 = new RTCPeerConnection();
var pc2 = new RTCPeerConnection();

pc2.onaddtrack = function(e) {
  var stream = new MediaStream([e.track]);
  var audio = document.createElement("audio");
  document.body.appendChild(audio);
  audio.mozSrcObject = stream;
  audio.play();
};

pc1.onicecandidate = function(e) {
  if (e.candidate) {
    pc2.addIceCandidate(e.candidate);
  }
};

pc2.onicecandidate = function(e) {
  if (e.candidate) {
    pc1.addIceCandidate(e.candidate);
  };
};


navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function(stream) {
    var track = stream.getAudioTracks()[0];
    var sender = pc1.addTrack(track, stream);

    pc1.createOffer().then(function(offer) {
      pc1.setLocalDescription(offer).then(function() {
        pc2.setRemoteDescription(offer).then(function() {
          pc2.createAnswer().then(function(answer) {
            pc2.setLocalDescription(answer).then(function() {
              pc1.setRemoteDescription(answer).then(function() {
                sender.dtmf.ontonechange = function(e) {
                  console.log(JSON.stringify(e));
                }
                sender.dtmf.insertDTMF("2145551212,1");
              });
            });
          });
        });
      });
    });
  });

If you’d like to be notified when platform work to implement RTCDTMFSender natively begins, add yourself to the CC list on Bug 1012645. And we would love to hear from you in the comments about successes and challenges you encounter in applying the oscillator-based method we describe in this post, as well as any suggestions you might have for improving the example implementation.

Finally, here’s the source of the DTMFSender object:

/*
 * DTMFSender.js
 *
 * This serves as a polyfill that adds a DTMF sender interface to the
 * RTCRTPSender objects on RTCPeerConnections for Firefox 44 and later.
 * Implementations simply include this file, and then use the DTMF sender
 * as described in the WebRTC specification.
 *
 * For versions of Firefox prior to 44, implementations need to manually
 * instantiate a version of the DTMFSender object, pass it a stream, and
 * then retrieve "outputStream" from the sender object. Implementations
 * may also choose to attach the sender to the corresponding RTCRTPSender,
 * if they wish.
 *
 * This Source Code Form is subject to the terms of the Mozilla Public License,
 * v. 2.0. If a copy of the MPL was not distributed with this file, You can 
 * obtain one at https://mozilla.org/MPL/2.0/.
 */


// The MediaStream enhancements we need to make the polyfill work landed
// at the same time as the "addTrack" method was added to MediaStream.
// If that method is present, we monkeypatch ourselves into
// RTCPeerConnection.addTrack so that we attach a new DTMF sender to each
// RTP sender as it is created.
if ("addTrack" in MediaStream.prototype) {

  RTCPeerConnection.prototype.origAddTrack =
    RTCPeerConnection.prototype.addTrack;

  RTCPeerConnection.prototype.addTrack = function(track, stream) {
    var sender = this.origAddTrack(track, stream);
    new DTMFSender(sender);
    return(sender);
  }
}

function DTMFSender(senderOrStream) {
  var ctx = this._audioCtx = new AudioContext();
  this._outputStreamNode = ctx.createMediaStreamDestination();
  var outputStream = this._outputStreamNode.stream;

  var inputStream;
  var rtpSender = null;

  if ("track" in senderOrStream) {
    rtpSender = senderOrStream;
    inputStream = new MediaStream([rtpSender.track]);
  } else {
    inputStream = senderOrStream;
    this.outputStream = outputStream;
  }

  this._source = ctx.createMediaStreamSource(inputStream);
  this._source.connect(this._outputStreamNode);

  this._f1Oscillator = ctx.createOscillator();
  this._f1Oscillator.connect(this._outputStreamNode);
  this._f1Oscillator.frequency.value = 0;
  this._f1Oscillator.start(0);

  this._f2Oscillator = ctx.createOscillator();
  this._f2Oscillator.connect(this._outputStreamNode);
  this._f2Oscillator.frequency.value = 0;
  this._f2Oscillator.start(0);

  if (rtpSender) {
    rtpSender.replaceTrack(outputStream.getAudioTracks()[0])
      .then(function() {
        rtpSender.dtmf = this;
      }.bind(this));
  }
}

/* Implements the same interface as RTCDTMFSender */
DTMFSender.prototype = {

  ontonechange: undefined,

  get duration() {
    return this._duration;
  },

  get interToneGap() {
    return this._interToneGap;
  },

  get toneBuffer() {
    return this._toneBuffer;
  },

  insertDTMF: function(tones, duration, interToneGap) {
    if (/[^0-9a-d#\*,]/i.test(tones)) {
      throw(new Error("InvalidCharacterError"));
    }

    this._duration = Math.min(6000, Math.max(40, duration || 100));
    this._interToneGap = Math.max(40, interToneGap || 70);
    this._toneBuffer = tones;

    if (!this._playing) {
      setTimeout(this._playNextTone.bind(this), 0);
      this._playing = true;
    }
  },

  /* Private */
  _duration: 100,
  _interToneGap: 70,
  _toneBuffer: "",
  _f1Oscillator: null,
  _f2Oscillator: null,
  _playing: false,

  _freq: {
    "1": [ 1209, 697 ],
    "2": [ 1336, 697 ],
    "3": [ 1477, 697 ],
    "a": [ 1633, 697 ],
    "4": [ 1209, 770 ],
    "5": [ 1336, 770 ],
    "6": [ 1477, 770 ],
    "b": [ 1633, 770 ],
    "7": [ 1209, 852 ],
    "8": [ 1336, 852 ],
    "9": [ 1477, 852 ],
    "c": [ 1633, 852 ],
    "*": [ 1209, 941 ],
    "0": [ 1336, 941 ],
    "#": [ 1477, 941 ],
    "d": [ 1633, 941 ]
  },

  _playNextTone: function() {
    if (this._toneBuffer.length == 0) {
      this._playing = false;
      this._f1Oscillator.frequency.value = 0;
      this._f2Oscillator.frequency.value = 0;
      if (this.ontonechange) {
        this.ontonechange({tone: ""});
      }
      return;
    }

    var digit = this._toneBuffer.substr(0,1);
    this._toneBuffer = this._toneBuffer.substr(1);

    if (this.ontonechange) {
      this.ontonechange({tone: digit});
    }

    if (digit == ',') {
      setTimeout(this._playNextTone.bind(this), 2000);
      return;
    }

    var f = this._freq[digit.toLowerCase()];
    if (f) {
      this._f1Oscillator.frequency.value = f[0];
      this._f2Oscillator.frequency.value = f[1];
      setTimeout(this._stopTone.bind(this), this._duration);
    } else {
      // This shouldn't happen. If it does, just move on.
      setTimeout(this._playNextTone.bind(this), 0);
    }
  },

  _stopTone: function() {
    this._f1Oscillator.frequency.value = 0;
    this._f2Oscillator.frequency.value = 0;
    setTimeout(this._playNextTone.bind(this), this._interToneGap);
  }
};


Controlling WebRTC PeerConnections with an extension

Author’s note: Firefox recently added some features (in Firefox 42) to allow users to exercise added control over WebRTC RTCPeerConnections, IP address gathering used in connecting them, and what IP addresses are exposed to JS applications. For a detailed explanation of the issues this is addressing and why Firefox is addressing them, please see my (Maire’s) personal blog post about the issue. Discussion of the problems, risks, tradeoffs, and reasoning are best done there. This article is about the new features in the code and how to access them.

Maire Reavy
Engineering Manager, WebRTC

To control IP address exposure and RTCPeerConnection usage, we’ve provided methods to hook createOffer/createAnswer and added about:config prefs for controlling which candidates are available during ICE negotiation. Also, some controls already existed in about:config. You can learn more about the controls available on Mozilla’s wiki page about WebRTC privacy.

The createOffer/createAnswer hooks allow extensions to modify the behaviour of the PeerConnection and, for example, add a door-hanger very similar to the one you get when a site uses the getUserMedia API to access the camera and microphone. We have built a proof-of-concept extension, and this is how it looks for a website that only uses the DataChannel:

[Screenshot: the DataChannel permission door-hanger]

From a user interaction perspective, it’s important to ask for access permission in a non-scary way that an end user can understand. For getUserMedia, i.e., access to the user’s camera and microphone, the question asked is:

Would you like to share your camera and microphone with webrtc.github.io?

The implications of that are quite clear, as the website can record your voice and video and may send it to someone else.

The sample extension door-hanger pops up in two cases:

  • The site uses a receive-only connection, i.e., only receives video — you can test it here.
  • The site uses the datachannel without calling getUserMedia as shown with this sample.

For the case where the site has permission to access camera and microphone, e.g. Talky.io, no additional question is asked. This minimizes the number of questions the user has to answer and retains much of the current behaviour.

For the receive-only case, it is a more awkward question to ask. The use-case here is one-way streaming, e.g., for a webinar. Users don’t expect to be asked for permission here since you don’t need to grant similar permissions to watch a recorded video on YouTube.

For data channels, there are a number of different use cases, ranging from file transfer to gaming to peer-to-peer CDNs. For file transfer, the workflow is rather easy to explain to the user — they select a file, the door-hanger pops up, they allow it, and the file gets transferred. There is a direct connection between the user action and the popup. That applies to gaming as well.

The peer-to-peer CDN use case is harder. You start playing a video and the browser asks for something called DataChannel?! If you are a developer relying on this use-case, we recommend you try the sample extension, and use it to develop a good user experience around that use-case. We’d love to hear your real world feedback.

In the recently reported case of the New York Times’ somewhat surprising usage of WebRTC, the developer cited “fraud detection” as the use case. WebRTC was not built to solve this problem, and there are better tools and technologies for the job. We urge you to put those to use.

If you are an extension developer, you can take a look at the source code of the extension to see what is needed to implement this interaction; you basically need to override the handling of rtcpeer:Request in the webrtcUI.receiveMessage handler. Let us know if you have any questions, either in the comments or by opening an issue over at GitHub.
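As a rough illustration, the override could look something like this in extension chrome code. This is a hedged sketch, not the extension's actual source; the module path and message details may differ across Firefox releases, and showDoorhanger is a hypothetical helper you would implement:

const { webrtcUI } = Components.utils.import("resource:///modules/webrtcUI.jsm", {});
const origReceiveMessage = webrtcUI.receiveMessage.bind(webrtcUI);

webrtcUI.receiveMessage = function(aMessage) {
  if (aMessage.name === "rtcpeer:Request") {
    // Show our own door-hanger, and forward the message only once
    // the user has granted permission.
    showDoorhanger(aMessage, function() { origReceiveMessage(aMessage); });
    return;
  }
  return origReceiveMessage(aMessage);
};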


Introducing the Whiteboard Drum – WebRTC and Web Audio API magic

Browser functionality has expanded rapidly, way beyond merely “browsing” a document. Recently, web browsers finally gained audio processing abilities with the Web Audio API. It is powerful enough to build serious music applications.

Not only that, but it is also very interesting when combined with other APIs. One of these APIs is getUserMedia(), which allows us to capture audio and/or video from the local PC’s microphone / camera devices. Whiteboard Drum (code on GitHub) is a music application, and a great example of what can be achieved using Web Audio API and getUserMedia().

I demonstrated Whiteboard Drum at the Web Music Hackathon Tokyo in October. It was a very exciting event on the subject of the Web Audio API and Web MIDI API. Many instruments can now collaborate with the browser, which can also create new interfaces to the real world.

I believe this suggests further possibilities for web-based music applications, especially using the Web Audio API in conjunction with other APIs. Let me explain how the key features of Whiteboard Drum work, showing relevant code fragments as we go.

Overview

First of all, let me show you a picture from the Hackathon:

And a short video demo:

As you can see, Whiteboard Drum plays a rhythm according to the matrix pattern on the whiteboard. The whiteboard has no magic; it just needs to be pointed at by a webcam. Though I used magnets in the demo, you can draw the markers with a pen if you wish. Each row represents one of the instruments (Cymbal, Hi-Hat, Snare-Drum and Bass-Drum), and each column represents a timing step. In this implementation, the sequence has 8 steps. A blue marker activates a grid cell normally, and a red one activates it with an accent.

The processing flow is:

  1. The whiteboard image is captured by the WebCam
  2. The matrix pattern is analysed
  3. This pattern is fed to the drum sound generators to create the corresponding sound patterns

Although it uses nascent browser technologies, each process itself is not so complicated. Some key points are described below.

Image capture by getUserMedia()

getUserMedia() is a function for capturing video/audio from webcam/microphone devices. It is a part of WebRTC and a fairly new feature in web browsers. Note that the user’s permission is required to get the image from the WebCam. If we were just displaying the WebCam image on the screen, it would be trivially easy. However, we want to access the image’s raw pixel data in JavaScript for more processing, so we need to use canvas and the createImageData() function.

Because pixel-by-pixel processing is needed later in this application, the captured image’s resolution is reduced to 400 x 200px; that means one rhythm grid is 50 x 50 px in the rhythm pattern matrix.

Note: Though most recent laptops/notebooks have embedded webcams, you will get the best results on Whiteboard Drum from an external camera, because the camera needs to be precisely aimed at the picture on the whiteboard. Also, the selection of input from multiple available devices/cameras is not currently standardized and cannot be controlled from JavaScript. In Firefox, the device is selectable in the permission dialog when connecting; in Google Chrome, it can be configured from the content settings screen.

Get the WebCam video

We don’t want to show these parts of the processing on the screen, so first we hide the video:

<video id="video" style="display:none"></video>

Now to grab the video:

video = document.getElementById("video");
navigator.getUserMedia=navigator.getUserMedia||navigator.webkitGetUserMedia||navigator.mozGetUserMedia;
navigator.getUserMedia({"video":true},
    function(stream) {
        video.src= window.URL.createObjectURL(stream);
        video.play();
    },
    function(err) {
        alert("Camera Error");
    });

Capture it and get pixel values

We also hide the canvas:

<canvas id="capture" width=400 height=200 style="display:none"></canvas>

Then capture our video data on the canvas:

function Capture() {
    ctxcapture.drawImage(video,0,0,400,200);
    imgdatcapture=ctxcapture.getImageData(0,0,400,200);
}

The video from the WebCam will be drawn onto the canvas at periodic intervals.

Image analyzing

Next, we need to get the 400 x 200 pixel values with getImageData(). The analyzing phase analyses the 400 x 200 image data in an 8 x 4 matrix rhythm pattern, where a single matrix grid is 50 x 50 px. All necessary input data is stored in the imgdatcapture.data array in RGBA format, 4 elements per pixel.

var pixarray = imgdatcapture.data;
var step;
for(var x = 0; x < 8; ++x) {
    var px = x * 50;
    if(invert)
        step=7-x;
    else
        step=x;
    for(var y = 0; y < 4; ++y) {
        var py = y * 50;
        var lum = 0;
        var red = 0;
        for(var dx = 0; dx < 50; ++dx) {
            for(var dy = 0; dy < 50; ++dy) {
                var offset = ((py + dy) * 400 + px + dx)*4;
                lum += pixarray[offset] * 3 + pixarray[offset+1] * 6 + pixarray[offset+2];
                red += (pixarray[offset]-pixarray[offset+2]);
            }
        }
        if(lum < lumthresh) {
            if(red > redthresh)
                rhythmpat[step][y]=2;
            else
                rhythmpat[step][y]=1;
        }
        else
            rhythmpat[step][y]=0;
    }
}

This is a straightforward pixel-by-pixel analysis inside grid-by-grid loops. In this implementation, the analysis looks at luminance and redness. If a grid cell is “dark”, it is activated; if it is red, it should be accented.

The luminance calculation uses a simplified formula, R * 3 + G * 6 + B, which yields roughly ten times the luminance, i.e., a value in the range 0 to 2550 for each pixel. The redness measure, R - B, is an experimental value, because all that is required is a decision between red and blue. The result is stored in the rhythmpat array, with a value of 0 for nothing, 1 for blue, or 2 for red.
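For illustration, the per-cell decision can be factored out like this (a sketch, not code from Whiteboard Drum; lumthresh and redthresh are the thresholds used above, compared against values summed over a cell's 2500 pixels):

function classifyCell(lumSum, redSum, lumthresh, redthresh) {
    if(lumSum >= lumthresh)
        return 0;                          // bright cell: inactive
    return (redSum > redthresh) ? 2 : 1;   // dark cell: 2 = red (accent), 1 = blue
}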

Sound generation through the Web Audio API

Because the Web Audio API is a very cutting-edge technology, it is not yet supported by every web browser. Currently, Google Chrome, Safari, WebKit-based Opera, and Firefox (25 or later) support this API. Note: Firefox 25 is the latest version, released at the end of October.

For other web browsers, I have developed a polyfill that falls back to Flash: WAAPISim, available on GitHub. It provides almost all functions of the Web Audio API to unsupported browsers, for example Internet Explorer.

The Web Audio API is a large-scale specification, but in our case the sound generation part requires only a very simple use of it: load one sound for each instrument and trigger them at the right times. First we create an audio context, taking care of vendor prefixes in the process. The only prefix currently in use is webkit (or none at all).

audioctx = new (window.AudioContext||window.webkitAudioContext)();

Next we load sounds to buffers via XMLHttpRequest. In this case, different sounds for each instrument (bd.wav / sd.wav / hh.wav / cy.wav) are loaded into the buffers array:

var buffers = [];
var req = new XMLHttpRequest();
var loadidx = 0;
var files = [
    "samples/bd.wav",
    "samples/sd.wav",
    "samples/hh.wav",
    "samples/cy.wav"
];
function LoadBuffers() {
    req.open("GET", files[loadidx], true);
    req.responseType = "arraybuffer";
    req.onload = function() {
        if(req.response) {
            audioctx.decodeAudioData(req.response,function(b){
                buffers[loadidx]=b;
                if(++loadidx < files.length)
                    LoadBuffers();
            },function(){});
        }
    };
    req.send();
}

The Web Audio API generates sounds by routing graphs of nodes. Whiteboard Drum uses a simple graph built from AudioBufferSourceNode and GainNode. An AudioBufferSourceNode plays back an AudioBuffer and routes either directly to the destination (output), for a normal *blue* sound, or to the destination via the GainNode, for an accented *red* sound. Because an AudioBufferSourceNode can be used only once, a new one is created for each trigger.

Preparing the GainNode as the output point for accented sounds is done like this:

gain = audioctx.createGain();
gain.gain.value = 2;
gain.connect(audioctx.destination);

And the trigger function looks like so:

function Trigger(instrument,accent,when) {
    var src=audioctx.createBufferSource();
    src.buffer=buffers[instrument];
    if(accent)
        src.connect(gain);
    else
        src.connect(audioctx.destination);
    src.start(when);
}

All that is left to discuss is the accuracy of the playback timing, according to the rhythm pattern. Though it would be simple to keep creating the triggers with a setInterval() timer, it is not recommended. The timing can be easily messed up by any CPU load.

To get accurate timing, using the time management system embedded in the Web Audio API is recommended. It calculates the when argument of the Trigger() function above.

// console.log(nexttick-audioctx.currentTime);
while(nexttick - audioctx.currentTime < 0.3) {
    var p = rhythmpat[step];
    for(var i = 0; i < 4; ++i)
        Trigger(i, p[i], nexttick);
    if(++step >= 8)
        step = 0;
    nexttick += deltatick;
}

In Whiteboard Drum, this code is the core of the timing control. nexttick holds the accurate time (in seconds) of the next step, while audioctx.currentTime is the accurate current time (again, in seconds). This routine is called periodically and looks 300 ms into the future, triggering steps in advance as long as nexttick - audioctx.currentTime < 0.3. The commented-out console.log prints the timing margin; if this value ever goes negative, the timing has collapsed.
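For completeness, here is a sketch of how this loop might be driven; the tempo maths and the 25 ms polling interval are assumptions for illustration, not taken from the original code:

var bpm = 120;
var deltatick = 60 / bpm / 2;      // seconds per 8th-note step
var step = 0;
var nexttick = audioctx.currentTime;

setInterval(function() {
    while(nexttick - audioctx.currentTime < 0.3) {
        var p = rhythmpat[step];
        for(var i = 0; i < 4; ++i)
            Trigger(i, p[i], nexttick);
        if(++step >= 8)
            step = 0;
        nexttick += deltatick;
    }
}, 25);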

For more detail, here is a helpful document: A Tale of Two Clocks – Scheduling Web Audio with Precision

About the UI

Especially in music production software, like DAWs or VST plugins, the UI is important. Web applications do not have to emulate this exactly, but something similar is a good idea. Fortunately, the very handy WebComponents library webaudio-controls is available, allowing us to define knobs or sliders with just a single HTML tag.

NOTE: webaudio-controls uses Polymer.js, which sometimes has stability issues, causing unexpected behavior once in a while, especially when combining it with complex APIs.

Future work

This is already an interesting application, but it can be improved further. Obviously camera position adjustment is an issue. The analysis could be smarter with automatic position adjustment (using some kind of marker?) and adaptive color detection. Sound generation could also be improved, with more instruments, more steps and more sound effects.

How about a challenge?

Whiteboard Drum is available at http://www.g200kg.com/whiteboarddrum/, and the code is on GitHub.

Have a play with it and see what rhythms you can create!


WebRTC: Update and Workarounds

As you’ve probably noticed, we’ve been making lots of progress on our WebRTC implementation, and we expect additional improvements over the next few releases.

We have work in the pipeline to improve audio quality issues (yes, we know we still have some!) and to assist with troubleshooting NAT traversal issues (you can follow the progress in Bug 904622).

Existing limitations

But beyond these upcoming improvements, I’d like to take a moment to look at a couple of our existing limitations that you might have noticed, and offer some advice for writing apps that work within these limitations.

The first issue, described in Bug 857115, is that mozRTCPeerConnection does not currently support renegotiation of an ongoing session. Once a session is set up, its parameters are fixed. In practical terms, this means that you can’t, for example, start an audio-only call and then add video to that same PeerConnection later in the session. We have a similar limitation in that we don’t currently support more than one audio stream and one video stream on a single PeerConnection (see Bug 784517 and Bug 907339).

Solutions for now

We’re going to fix these limitations as soon as we can, but it’s going to take a few months for our code changes to ride the Firefox train out into release. Until that happens, I want to give you a couple of workarounds so you can continue to use Firefox to make awesome things.

Muting audio and video streams

Media renegotiation has two main use cases: muting and unmuting media in the middle of a session; and adding/removing video in the middle of a session. For muting and unmuting, the trick is to make judicious use of the “enabled” attribute on the MediaStreamTrack object: simply set a track’s enabled to “false” when you want to mute it.

var pc = new mozRTCPeerConnection();
 
navigator.mozGetUserMedia({video: true},
  function (mediaStream) {
 
    // Create a new self-view video element
    var video = document.createElement("video");
    video.setAttribute("width", 640);
    video.setAttribute("height", 480);
    video.setAttribute("style", "transform: scaleX(-1)");
    video.src = window.URL.createObjectURL(mediaStream);
    document.body.appendChild(video);
    video.play();
 
    // Add a button to hold/unhold video stream
    var button = document.createElement("button");
    button.appendChild(document.createTextNode("Toggle Hold"));
    button.onclick = function(){
      mediaStream.getVideoTracks()[0].enabled =
         !(mediaStream.getVideoTracks()[0].enabled);
    }
    document.body.appendChild(document.createElement("br"));
    document.body.appendChild(button);
 
    // Add the mediaStream to the peer connection
    pc.addStream(mediaStream);
 
    // At this point, you're ready to start the call with
    // pc.setRemoteDescription() or pc.createOffer()
  },
  function (err) { alert(err); }
);

Note that setting a MediaStreamTrack’s “enabled” attribute to “false” will not stop media from flowing, but it will change the media that’s being encoded into a black square (for video) and silence (for audio), both of which compress very well. Depending on your application, it may also make sense to use browser-to-browser signaling (for example, WebSockets or DataChannels) to let the other browser know that it should hide or show the video window when the corresponding video is muted.
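Here is a hedged sketch of that signaling idea, assuming an already-open DataChannel between the two browsers; the channel variable, the remoteVideo element, and the message shape are illustrative, not a fixed protocol:

// Caller side: mute the video track and tell the far end about it.
function setVideoMuted(mediaStream, channel, muted) {
  mediaStream.getVideoTracks()[0].enabled = !muted; // encode black frames while muted
  channel.send(JSON.stringify({ videoMuted: muted }));
}

// Far side: hide or show the remote video window accordingly.
channel.onmessage = function (e) {
  var msg = JSON.parse(e.data);
  if ("videoMuted" in msg) {
    remoteVideo.style.visibility = msg.videoMuted ? "hidden" : "visible";
  }
};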

Adding video mid-call

For adding video mid-call, the most user-friendly work-around is to destroy the audio-only PeerConnection, and create a new PeerConnection, with both audio and video. When you do this, it will prompt the user for both the camera and the microphone; but, since Firefox does this in a single dialog, the user experience is generally pretty good. Once video has been added, you can either remove it by performing this trick in reverse (thus releasing the camera), or you can simply perform the “mute video” trick I describe above (which will leave the camera going — this might upset some users).
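A hedged sketch of that workaround, using the callback-style API of this era (signalOffer stands in for whatever signaling mechanism you use):

function upgradeToVideo(oldPc) {
  oldPc.close(); // tear down the audio-only connection

  var pc = new mozRTCPeerConnection();
  navigator.mozGetUserMedia({audio: true, video: true},
    function (mediaStream) {
      pc.addStream(mediaStream);
      pc.createOffer(function (offer) {
        pc.setLocalDescription(offer, function () {
          signalOffer(offer); // send the new offer to the far side
        }, function (err) { alert(err); });
      }, function (err) { alert(err); });
    },
    function (err) { alert(err); });
  return pc;
}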

Send more than one audio or video stream

To send more than one audio or video stream, you can use multiple simultaneous peer connections between the browsers: one for each audio/video pair you wish to send. You can also use this technique as an alternate approach for adding and removing video mid-session: set up an initial audio-only call; and, if the user later decides to add video, you can create a new PeerConnection and negotiate a separate video-only connection.

One subtle downside to using the first approach for adding video is that it restarts the audio connection when you add video, which may lead to some noticeable glitches in the audio stream. However, once our audio and video synchronization work has landed, making sure that the audio and video tracks are in the same MediaStream will ensure that they remain in sync. This synchronization isn’t guaranteed for multiple MediaStreams or multiple PeerConnections.

Temporary workarounds and getting there

We recognize that these workarounds aren’t ideal, and we’re working towards spec compliance as quickly as we can. In the meanwhile, we hope that this information proves helpful in building out applications today. The good news is that these techniques should continue to work even after we’ve addressed the limitations described above, so you can migrate to a final solution at your leisure.

Finally, I would suggest that anyone interested in renegotiation and/or multiple media streams keep an eye on the bugs I mention above. Once we’ve implemented these features, they should appear in the released version of Firefox within about 18 weeks. After that happens, you’ll want to switch over to the “standard” way of doing things to ensure the best possible audio and video quality.

Thanks for your patience. Go out and make great things!


Firefox 24 for Android gets WebRTC support by default

WebRTC is now on Firefox for Android as well as Firefox Desktop! Firefox 24 for Android supports mozGetUserMedia, mozRTCPeerConnection, and DataChannels by default. mozGetUserMedia has been in desktop releases since Firefox 20, and mozRTCPeerConnection and DataChannels since Firefox 22, and we’re excited that Android is now joining Desktop releases in supporting these cool new features!

What you can do

With WebRTC enabled, developers can:

  • Capture camera or microphone streams directly from Firefox Android using only JavaScript (a feature we know developers have been wanting for a while!),
  • Make browser to browser calls (audio and/or video), which you can test with sites like apprtc.appspot.com, and
  • Share data (no server in the middle) to enable peer-to-peer apps (e.g. text chat, gaming, image sharing especially during calls)

We’re eager to see the ideas developers come up with!

For early adopters and feedback

Our support is still largely intended for developers and for early adopters at this stage to give us feedback. The working group specs are not complete, and we still have more features to implement and quality improvements to make. We are also primarily focused now on making 1:1 (person-to-person) calling solid — in contrast to multi-person calling, which we’ll focus on later. We welcome your testing and experimentation. Please give us feedback, file bug reports and start building new applications based on these new abilities.

If you’re not sure where to start, please start by reading some of the WebRTC articles on Hacks that have already been published. In particular, please check out WebRTC and the Early API, The Making of Face to GIF, and PeerSquared as well as An AR Game (which won our getUserMedia Dev Derby) and WebRTC Experiments & Demos.

An example of simple video frame capture (which will capture new images at approximately 15fps; it assumes a <video> element named video, a 2D canvas context named context, and width, height, and frames variables):

navigator.getUserMedia({video: true, audio: false}, function (stream) {
  video.src = URL.createObjectURL(stream);

  setInterval(function () {
    context.drawImage(video, 0,0, width,height);
    frames.push(context.getImageData(0,0, width,height));
  }, 67); // roughly 15fps
}, function (err) { console.error(err); });

Snippet of code taken from “Multi-person video chat” on nightly-gupshup (you can try it in the WebRTC Test Landing Page — full code is on GitHub)

function acceptCall(offer) {
    log("Incoming call with offer " + offer);
 
    navigator.mozGetUserMedia({video:true, audio:true}, function(stream) {
    document.getElementById("localvideo").mozSrcObject = stream;
    document.getElementById("localvideo").play();
    document.getElementById("localvideo").muted = true;
 
    var pc = new mozRTCPeerConnection();
    pc.addStream(stream);
 
    pc.onaddstream = function(obj) {
      document.getElementById("remotevideo").mozSrcObject = obj.stream;
      document.getElementById("remotevideo").play();
    };
 
    pc.setRemoteDescription(new mozRTCSessionDescription(JSON.parse(offer.offer)), function() {
      log("setRemoteDescription, creating answer");
      pc.createAnswer(function(answer) {
        pc.setLocalDescription(answer, function() {
          // Send answer to remote end.
          log("created Answer and setLocalDescription " + JSON.stringify(answer));
          peerc = pc;
          jQuery.post(
            "answer", {
              to: offer.from,
              from: offer.to,
              answer: JSON.stringify(answer)
            },
            function() { console.log("Answer sent!"); }
          ).error(error);
        }, error);
      }, error);
    }, error);
  }, error);
}
 
function initiateCall(user) {
    navigator.mozGetUserMedia({video:true, audio:true}, function(stream) {
    document.getElementById("localvideo").mozSrcObject = stream;
    document.getElementById("localvideo").play();
    document.getElementById("localvideo").muted = true;
 
    var pc = new mozRTCPeerConnection();
    pc.addStream(stream);
 
    pc.onaddstream = function(obj) {
      log("Got onaddstream of type " + obj.type);
      document.getElementById("remotevideo").mozSrcObject = obj.stream;
      document.getElementById("remotevideo").play();
    };
 
    pc.createOffer(function(offer) {
      log("Created offer" + JSON.stringify(offer));
      pc.setLocalDescription(offer, function() {
        // Send offer to remote end.
        log("setLocalDescription, sending to remote");
        peerc = pc;
        jQuery.post(
          "offer", {
            to: user,
            from: document.getElementById("user").innerHTML,
            offer: JSON.stringify(offer)
          },
          function() { console.log("Offer sent!"); }
        ).error(error);
      }, error);
    }, error);
  }, error);
}

Any code that runs on Desktop should run on Android. (Ah, the beauty of HTML5!) However, you may want to optimize for Android knowing that it could now be used on a smaller screen device and even rotated.
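For example, a small hedged sketch of reacting to device rotation (the "localvideo" element id follows the snippet above; the size cap is arbitrary):

window.addEventListener("orientationchange", function () {
  var v = document.getElementById("localvideo");
  // Shrink the self-view so it still fits after the device rotates.
  v.width = Math.min(window.innerWidth, 640);
});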

This is still a hard-hat area, especially for mobile. We’ve tested our Android support of 1:1 calling with a number of major WebRTC sites, including talky.io, apprtc.appspot.com, and codeshare.io.

Known issues

  • Echo cancellation needs improvement; for calls we suggest a headset (Bug 916331)
  • Occasionally there are audio/video sync issues or excessive audio delay. We already have a fix in Firefox 25 that will improve delay (Bug 884365).
  • On some devices there are intermittent video-capture crashes; we’re actively investigating (Bug 902431).
  • Lower-end devices or devices with poor connectivity may have problems decoding or sending higher-resolution video at good frame rates.

Please help us bring real-time communications to the web: build your apps, give us your feedback, report bugs, and help us test and develop. With your help, your ideas, and your enthusiasm, we will rock the web to a whole new level.


WebRTC and the Early API

In my last article, WebRTC and the Ocean of Acronyms, I went over the networking terminology behind WebRTC. In this sequel of sorts, I will go over the new WebRTC API in great laboring detail. By the end of it you should have working peer-to-peer DataChannels and Media.

Shims

As you can imagine, with such an early API, you must use the browser prefixes and shim it to a common variable.

var PeerConnection = window.mozRTCPeerConnection || window.webkitRTCPeerConnection;
var IceCandidate = window.mozRTCIceCandidate || window.RTCIceCandidate;
var SessionDescription = window.mozRTCSessionDescription || window.RTCSessionDescription;
navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;

PeerConnection

This is the starting point to creating a connection with a peer. It accepts information about which servers to use and options for the type of connection.

var pc = new PeerConnection(server, options);

server

The server object contains information about which TURN and/or STUN servers to use. This is required to ensure most users can actually create a connection by avoiding restrictions in NAT and firewalls.

var server = {
    iceServers: [
        {url: "stun:23.21.150.121"},
        {url: "stun:stun.l.google.com:19302"},
        {url: "turn:numb.viagenie.ca", credential: "webrtcdemo", username: "louis%40mozilla.com"}
    ]
}

Google runs a public STUN server that we can use. I also created an account at http://numb.viagenie.ca/ for a free TURN server to access. You may want to do the same and replace the credentials with your own.

options

Depending on the type of connection, you will need to pass some options.

var options = {
    optional: [
        {DtlsSrtpKeyAgreement: true},
        {RtpDataChannels: true}
    ]
}

DtlsSrtpKeyAgreement is required for Chrome and Firefox to interoperate.

RtpDataChannels is required if we want to make use of the DataChannels API on Chrome.

ICECandidate

After creating the PeerConnection and passing in the available STUN and TURN servers, an event will be fired once the ICE framework has found some “candidates” that will allow you to connect with a peer. This is known as an ICE Candidate and will execute a callback function on PeerConnection#onicecandidate.

pc.onicecandidate = function (e) {
    // candidate exists in e.candidate
    if (e.candidate == null) { return }
    send("icecandidate", JSON.stringify(e.candidate));
    pc.onicecandidate = null;
};

When the callback is executed, we must use the signal channel to send the candidate to the peer. On Chrome, multiple ICE candidates are usually found; we only need one, so I typically send the first one and then remove the handler. Firefox includes the candidate in the Offer SDP.

Signal Channel

Now that we have an ICE candidate, we need to send it to our peer so they know how to connect with us. However, this leaves us with a chicken-and-egg situation: we want PeerConnection to send data to a peer, but before that we need to send them metadata…

This is where the signal channel comes in. It’s any method of data transport that allows two peers to exchange information. In this article, we’re going to use Firebase because it’s incredibly easy to set up and doesn’t require any hosting or server-code.

For now just imagine two methods exist: send() will take a key and assign data to it and recv() will call a handler when a key has a value.

The structure of the database will look like this:

{
    "<roomId>": {
        "candidate:<peerType>": …
        "offer": …
        "answer": … 
    }
}

Connections are divided by a roomId and will store four pieces of information: the ICE candidate from the offerer, the ICE candidate from the answerer, the offer SDP, and the answer SDP.
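As a concrete (hedged) sketch, send() and recv() could be implemented on top of the Firebase API of that era roughly like this; the app URL and roomId are placeholders:

var db = new Firebase("https://your-app.firebaseio.com/" + roomId);

function send(key, data) {
    db.child(key).set(data); // assign data to the key
}

function recv(key, handler) {
    db.child(key).on("value", function (snapshot) {
        if (snapshot.val() !== null) { // fires once the key has a value
            handler(snapshot.val());
        }
    });
}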

Offer

An Offer SDP (Session Description Protocol) is metadata that describes to the other peer the format to expect (video, formats, codecs, encryption, resolution, size, etc.).

An exchange requires an offer from a peer, then the other peer must receive the offer and provide back an answer.

pc.createOffer(function (offer) {
    pc.setLocalDescription(offer);
 
    send("offer", JSON.stringify(offer));
}, errorHandler, constraints);

errorHandler

If there was an issue generating an offer, this method will be executed with error details as the first argument.

var errorHandler = function (err) {
    console.error(err);
};

constraints

Options for the offer SDP.

var constraints = {
    mandatory: {
        OfferToReceiveAudio: true,
        OfferToReceiveVideo: true
    }
};

OfferToReceiveAudio/Video tells the other peer that you would like to receive video or audio from them. This is not needed for DataChannels.

Once the offer has been generated we must set the local SDP to the new offer and send it through the signal channel to the other peer and await their Answer SDP.

Answer

An Answer SDP is just like an offer but a response; sort of like answering the phone. We can only generate an answer once we have received an offer.

recv("offer", function (offer) {
    offer = new SessionDescription(JSON.parse(offer))
    pc.setRemoteDescription(offer);
 
    pc.createAnswer(function (answer) {
        pc.setLocalDescription(answer);
 
        send("answer", JSON.stringify(answer));
    }, errorHandler, constraints);
});

DataChannel

I will first explain how to use PeerConnection for the DataChannels API and transferring arbitrary data between peers.

Note: At the time of this article, interoperability between Chrome and Firefox is not possible with DataChannels, due to Firefox using SCTP and Chrome using RTP.

var channel = pc.createDataChannel(channelName, channelOptions);

The offerer should be the peer who creates the channel. The answerer will receive the channel in the callback ondatachannel on PeerConnection.

channelName

This is a string that acts as a label for your channel name. Warning: Make sure your channel name has no spaces or Chrome will fail on createAnswer().

channelOptions

var channelOptions = {};

Currently these options are not well supported so you can leave this empty for now.

Channel Events and Methods

onopen

Executed when the connection is established.

onerror

Executed if there is an error creating the connection. First argument is an error object.

channel.onerror = function (err) {
    console.error("Channel Error:", err);
};

onmessage

channel.onmessage = function (e) {
    console.log("Got message:", e.data);
}

The heart of the connection. When you receive a message, this method will execute. The first argument is an event object which contains the data, time received and other information.

onclose

Executed if the other peer closes the connection.

Binding the Events

If you were the creator of the channel (meaning the offerer), you can bind events directly to the DataChannel you created with createDataChannel. If you are the answerer, you must use the ondatachannel callback on PeerConnection to access the same channel.

pc.ondatachannel = function (e) {
    e.channel.onmessage = function () {};
};

The channel is available in the event object passed into the handler as e.channel.

send()

channel.send("Hi Peer!");

This method allows you to send data directly to the peer! Amazing. You must send either a String, Blob, ArrayBuffer or ArrayBufferView, so be sure to stringify objects.

close()

Close the channel once the connection should end. It is recommended to do this on page unload.
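For example (a minimal sketch):

window.addEventListener("beforeunload", function () {
    channel.close();
});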

Media

Now we will cover transmitting media such as audio and video. To display the video and audio you must include a <video> tag on the document with the attribute autoplay.

Get User Media

<video id="preview" autoplay></video>
 
var video = document.getElementById("preview");
navigator.getUserMedia(mediaOptions, function (stream) {
    video.src = URL.createObjectURL(stream);
}, errorHandler);

mediaOptions

Constraints on what media types you want to return from the user.

var mediaOptions = {
    video: true,
    audio: true
};

If you just want an audio chat, remove the video key.

errorHandler

Executed if there is an error returning the requested media.

Media Events and Methods

addStream

Add the stream from getUserMedia to the PeerConnection.

pc.addStream(stream);

onaddstream

Executed when the connection has been setup and the other peer has added the stream to the peer connection with addStream. You need another <video> tag to display the other peer’s media.

<video id="otherPeer" autoplay></video>
 
var otherPeer = document.getElementById("otherPeer");
pc.onaddstream = function (e) {
    otherPeer.src = URL.createObjectURL(e.stream);
};

The first argument is an event object with the other peer’s media stream.

View the Source already

You can see the source built up from all the snippets in this article at my WebRTC repo.


WebRTC and the Ocean of Acronyms

My experience getting started with WebRTC can be summarised in a three-letter acronym, so I decided to write this article dedicated to answering my many questions. I’ve always said, if you don’t know an acronym, it’s probably a networking protocol.

What is ICE?

Interactive Connectivity Establishment (ICE) is a framework that allows your web browser to connect with peers. There are many reasons why a straight-up connection from Peer A to Peer B simply won’t work. It needs to bypass firewalls that would prevent opening connections, give you a unique address if (as in most situations) your device doesn’t have a public IP address, and relay data through a server if your router doesn’t allow you to connect directly with peers. ICE uses the techniques described below to achieve this:

What is STUN?

Session Traversal Utilities for NAT (STUN) (acronym within an acronym) is a protocol to discover your public address and determine any restrictions in your router that would prevent a direct connection with a peer.

The client will send a request to a STUN server on the internet, which will reply with the client’s public address and whether or not the client is accessible behind the router’s NAT.

What is NAT?

Network Address Translation (NAT) is used to give your device a public IP address. A router will have a public IP address and every device connected to the router will have a private IP address. Requests will be translated from the device’s private IP to the router’s public IP with a unique port. That way you don’t need a unique public IP for each device but can still be discovered on the internet.

Some routers have restrictions on who can connect to devices on the network. This can mean that even though we have the public IP address found by the STUN server, not just anyone can create a connection. In this situation we need to turn to TURN.

What is TURN?

Some routers using NAT employ a restriction called ‘Symmetric NAT’. This means the router will only accept connections from peers you’ve previously connected to.

Traversal Using Relays around NAT (TURN) is meant to bypass the Symmetric NAT restriction by opening a connection with a TURN server and relaying all information through that server. You would create a connection with a TURN server and tell all peers to send packets to the server which will then be forwarded to you. This obviously comes with some overhead so is only used if there are no other alternatives.
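In practice, this just means listing a TURN server alongside your STUN servers when creating the connection, so ICE can fall back to relaying. A hedged sketch, reusing the shimmed PeerConnection constructor from the API article above (the TURN URL and credentials are placeholders):

var pc = new PeerConnection({
    iceServers: [
        {url: "stun:stun.l.google.com:19302"},
        {url: "turn:turn.example.com", username: "user", credential: "pass"}
    ]
});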

What is SDP?

Session Description Protocol (SDP) is a standard for describing the multimedia content of the connection, such as resolution, formats, codecs, and encryption, so that both peers can understand each other once the data is transferring. This is not the media itself, but more the metadata.

What is an Offer/Answer and Signal Channel?

Unfortunately WebRTC can’t create connections without some sort of server in the middle. We call this the Signal Channel. It’s any sort of channel of communication to exchange information before setting up a connection, whether by email, post card or a carrier pigeon… it’s up to you.

The information we need to exchange is the Offer and Answer which just contains the SDP mentioned above.

Peer A who will be the initiator of the connection, will create an Offer. They will then send this offer to Peer B using the chosen signal channel. Peer B will receive the Offer from the signal channel and create an Answer. They will then send this back to Peer A along the signal channel.

What is an ICE candidate?

As well as exchanging information about the media (discussed above in Offer/Answer and SDP), peers must exchange information about the network connection. This is known as an ICE candidate, and it details the available methods by which the peer is able to communicate (directly or through a TURN server).

The entire exchange in a complicated diagram

[Diagram: the full offer/answer and ICE candidate exchange between two peers over the signal channel]


PeerSquared – one-on-one online teaching with WebRTC

It was somewhere in the midst of 2010 when I first learned that the people at Ericsson Labs were working on an ‘open standards’ browser implementation of P2P video chat. I was excited right away. The fact that you could only use video chat in your web browser through Flash or other plug-ins bothered me. Webcams had already been around for quite a few years, but their use was mainly limited to proprietary programs like MSN Messenger and Skype.

We’re now three years later, and it is all going to change. Just a couple of days ago P2P video chat made it into the final release of Firefox 22. That means, with Google Chrome also supporting this technology, there are now over a billion people able to use native webcam chat in their browser. I think that is truly awesome, and it will probably cause a new big wave of change on the Internet. Imagine that in just a few years from now the trusty old telephone line will be obsolete and we will all be making browser-based video calls.

PeerSquared

After reading that Google Chrome and Firefox were adding data channels to P2P connections I got even happier, since they offer loads of new possibilities. I am very interested in e-learning, and so I got the idea to build a whiteboard system for online tutoring, called PeerSquared.

The current version is a proof of concept, to see for myself what is really possible with the PeerConnection API and data channels in particular. To use PeerSquared, simply log in on two different screens, as a teacher and a student respectively, using the same unique room name. After logging in on both screens the P2P connection is established and the teacher can get creative with the whiteboard.

Actions performed by the teacher, like painting, writing and creating shapes, are also instantly visible on the student’s whiteboard, making it some sort of screen sharing. Most functions are self-explanatory but a less obvious feature is the ability to drop images from your file system onto the whiteboard to show them to the student, as you can see in the picture below (the earth and moon are drawn as dataURL images onto the canvas, which itself has an image of the universe as a background).

Note: PeerSquared does not work in Google Chrome yet, because Chrome does not implement reliable data channels at the moment.

Progressively uploading data

All data messages sent through a data channel are nicely queued. This means that when, for example, the teacher sends a big image to the student and just after that draws a line (a small amount of data to send), there is no risk that the student receives the line data first. In addition, data channels are also suitable for uploading larger data chunks. I have uploaded pictures of up to 6MB to the student’s whiteboard canvas without any problem.

However, for larger data it’s nice to be able to see the upload progress. So that made me wonder whether it would be possible for the teacher to reliably send data in chunks to the student. This turned out to be really simple. All that is required is reading a file into an ArrayBuffer, slicing it with the slice method, and then sending the chunks through the data channel:

// after obtaining an 'arrayBuffer' from a FileReader:
var chunkSize = 1000, byteLength = arrayBuffer.byteLength;
for(var i = 0; i < byteLength; i = i + chunkSize) {
	dataChannel.send(arrayBuffer.slice(i, i + chunkSize)); 
}

Of course it is also necessary to send meta information such as file name, size and type, in order to create a download link on the student’s side, but it’s easy to do so. Just send the array buffer data raw and the file meta data as a stringified JSON object. Then on the student’s side in the onmessage event handler you can differentiate between the two:

/*
1. The teacher sends meta information, for example: JSON.stringify({status : 'start', name: 'image.jpg', type: 'image/jpg', chunkCount : 20});
2. The teacher sends the file chunks, see code above
3. After the last chunk the teacher sends a message that the upload is complete, for example: JSON.stringify({status : 'complete'});
*/
var arrayBufferChunks = [], blob = null, meta = {}, container = document.getElementById('some_div');
 
dataChannel.onmessage = function(evt) {
	var data = evt.data;
	if(typeof data == 'object') {
		// step 2: put the chunks together again
		arrayBufferChunks.push(data);
		// note: arrayBufferChunks.length / meta.chunkCount would be a measure for the progress status
	}
	else if(typeof data == 'string') {
		data = JSON.parse(data); 
		if(data.status == 'start') {
			// step 1: store the meta data temporarily
			meta = data;
		}
		else if(data.status == 'complete') {
			// step 3: create an object URL for a download link
			blob = new Blob(arrayBufferChunks, { "type" : meta.type });
			container.innerHTML = '<a href="' + URL.createObjectURL(blob) + '" download="' + meta.name + '">' + meta.name + '</a> is completed';
		}		
	}
}

In this way I have been able to upload multiple files in a row, with sizes over 200 MB. With even bigger files the browser starts taking up loads of memory and may freeze (this seems to be due to reading the file, though, not the sending). Another issue was that when adding 8+ files from the file picker I sometimes experienced a browser crash. This may have been a consequence of instantiating independent data channels on the fly for each file being read, so it’s worth trying to queue all files in just one data channel.

I’ve also noticed a few times that a file upload froze. This could simply be due to a faltering Internet connection. It’s nice to know, then, that it shouldn’t be too hard to make the progressive uploads resumable as well. As long as the receiver keeps track of the last received data chunk, he can just send a message back to the sender after a paused or interrupted upload: ‘please send me file X, but starting from chunk Y’. And there you have a quite sophisticated P2P file sharing tool, made simple.
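A hedged sketch of that resume request; the message shape and the sendChunksFrom helper are illustrative, not PeerSquared's actual code:

// Receiver: after an interrupted upload, ask for the rest of the file.
dataChannel.send(JSON.stringify({
	status : 'resume',
	name : meta.name,
	fromChunk : arrayBufferChunks.length // the last chunk we actually received
}));

// Sender: re-send only the missing chunks, starting at fromChunk * chunkSize.
dataChannel.onmessage = function(evt) {
	var msg = JSON.parse(evt.data);
	if(msg.status == 'resume') {
		sendChunksFrom(msg.name, msg.fromChunk);
	}
};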

You can try the progressive file upload in PeerSquared by selecting one or more files on your system and dragging them to the chat input box on the teacher’s side, like in this screenshot:

Adding to and removing from a PeerConnection

Currently a drawback of the PeerConnection object in Firefox is that it isn’t possible (yet) to simply add and remove multiple data channels and video streams on a single PeerConnection object, because every addition or removal requires renegotiating the session. From http://www.w3.org/TR/webrtc/:

In particular, if a RTCPeerConnection object is consuming a MediaStream and a track is added to one of the stream’s MediaStreamTrackList objects, by, e.g., the add() method being invoked, the RTCPeerConnection object must fire the negotiationneeded event. Removal of media components must also trigger negotiationneeded.

The negotiationneeded event hasn’t arrived yet. As a workaround, PeerSquared uses several independent PeerConnection objects: one for all the data channels together and one per video stream. That way it works fine.
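
A minimal sketch of that setup, using the unprefixed API names for brevity (signaling omitted):

// One connection carries all the data channels...
var dataConnection = new RTCPeerConnection();
var chatChannel = dataConnection.createDataChannel('chat');
var fileChannel = dataConnection.createDataChannel('files');

// ...and each media stream gets its own connection, so adding
// or removing a stream never forces the data connection to
// renegotiate its session.
var videoConnection = new RTCPeerConnection();
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
	.then(function(stream) {
		videoConnection.addStream(stream);
		// negotiate videoConnection over your signaling channel here
	});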

The next step in P2P sharing

I believe PeerSquared’s whiteboard and webcam together are great tools for online one-on-one teaching, and there are plenty of options to build more interaction on top of the whiteboard. However, sometimes it is desirable to share video or even the whole desktop. How can that be done? One way to do that is to use a virtual webcam driver like ManyCam, which can capture a video or your desktop and present it as a webcam stream. The drawback is that you are depending on external proprietary software again.

Since version 26, Google Chrome experimentally allows getUserMedia() access to the screen, which can then be shared through a peer connection, as you can test in WebRTC Plugin-free Screen Sharing. I’m not sure if or when this will become a web standard, though. The last option I can think of, which only captures the current tab’s content, is using a library like html2canvas. I haven’t tried this myself yet, and I wonder whether it will be fast and reliable enough for a good ‘tab share experience’.
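
For reference, Chrome’s experimental screen capture at the time of writing looked roughly like the sketch below; it requires enabling the screen capture flag in chrome://flags, and the constraint names may change as the spec evolves:

// Experimental Chrome-only API; 'peerConnection' is assumed
// to be an existing connection to the other peer.
navigator.webkitGetUserMedia(
	{ video: { mandatory: { chromeMediaSource: 'screen' } } },
	function(screenStream) {
		// Share the screen like any other stream.
		peerConnection.addStream(screenStream);
	},
	function(err) {
		console.log('Screen capture failed:', err);
	}
);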

In conclusion

There have already been some great demonstrations of online multiplayer gaming and video conferencing with HTML5 here on Mozilla Hacks. I hope that my PeerSquared demo adds to those, gives you a good idea of the awesome possibilities for online collaboration and teaching, and inspires you to go P2P. Any questions or suggestions? Please don’t hesitate to contact me.


WebRTC comes to Firefox

As we mentioned in the Hacks blog back in April, WebRTC will be on by default in Firefox 22. getUserMedia (gUM) has been on by default since Firefox 20. PeerConnection and DataChannel, which enable video/audio calling and peer-to-peer data sharing, are what’s new in Firefox 22 (due to be released today).

WebRTC brings real-time communication to the web for the first time ever, and we’re excited to get this new technology into the hands of developers. We believe the industry has only scratched the surface of what’s possible with WebRTC, and only by getting it into the hands of developers and early adopters will we see this technology’s true potential.

Known issues/limitations

There are a few known issues/limitations in the early releases:

  • We are initially focused on getting 1:1 calling working well. We’ve done nothing to prevent conference or mesh calling, but depending on the capabilities of your device, video calls with multiple participants may be sluggish. Full multi-person/conference/mesh calling is on our roadmap, and we expect to improve that experience in future releases.
  • You may hear echo on calls when you or the party you’re talking to is playing sound over your computer speakers. We’re working on improving echo cancellation but for the time being, try wearing headphones if you experience this problem.
  • On some systems, you may experience audio delay relative to the video. We’ve isolated the problem and are working on a fix for a near-term Firefox release.
  • If you are behind a particularly restrictive NAT or firewall, you may have trouble connecting. We are adding support for media relaying (TURN) in Firefox 23, so you should find this improving soon.

Trying WebRTC support today

If you’d like to try out Firefox’s WebRTC support today, here are some sites that support WebRTC calling:

NOTE: most of these sites support 3 or more callers. We expect basic 1:1 (2-person) calling to perform well enough for developer and early adopter use. As mentioned above, your mileage may vary with 3-or-more person calling in the current release.

If you’re a developer interested in embedding WebRTC video chat into your website, please check out the article on that topic.

Testing DataChannels

You can also try out DataChannels in Firefox, the first browser to bring a spec-compliant implementation of DataChannels to market. Some sites and projects that use DataChannels:

Using Firefox Nightly to test the latest

I still encourage developers to use Firefox Nightly because it has the latest and greatest code and improvements; we will keep improving existing features and adding new ones as we get feedback from developers and users, and as the WebRTC standard itself evolves.

Rapid progress!

We expect new WebRTC sites, supporting PeerConnection and DataChannels, to come online rapidly over the next several months. We’ll keep you updated on our progress and on WebRTC’s progress here on Mozilla Hacks.


Embedding WebRTC Video Chat Right Into Your Website

Most of you remember the Hello Chrome, it’s Firefox calling! blog post right here on Mozilla Hacks demonstrating WebRTC video chat between Firefox and Chrome. It raised a lot of attention. Since then we here at Fresh Tilled Soil have seen a tremendous number of startups and companies spring up building products based on WebRTC video chat technology. Tsahi Levent-Levi, a WebRTC evangelist, has been interviewing most of these companies on his blog; the list is quite impressive!

WebRTC chat demo

Much like most early adopters, we have been playing around with WebRTC for quite a while now. We have of course created our own WebRTC video chat demo and have also very recently released WebRTC video chat widgets.

The widgets work very simply: anybody can take the following HTML embed code:

<!-- Begin Fresh Tilled Soil Video Chat Embed Code -->
<div id="freshtilledsoil_embed_widget" class="video-chat-widget"></div>
<script id="fts" src="http://freshtilledsoil.com/embed/webrtc-v5.js?r=FTS0316-CZ6NqG97"></script>
<!-- End Fresh Tilled Soil Video Chat Embed Code -->

and add it to any website or blog post. They’ll then see the following widget on their site:

From here it’s dead simple to start a WebRTC video chat: just make up a name for a room, type it in and click start chat. Tell the other person to do the same and you’re all set.

As always, make sure you’re giving this a try in Firefox Nightly or the latest stable build of Google Chrome. If you are on a tablet and using Google Chrome, make sure you are on Google Chrome Beta.

Something else to note is that in this first version our video chat is limited to just two participants per room. If a room name is occupied by two people, a third person who tries to connect to that room simply won’t be able to.

How It Works

Without getting too deep into the code, let’s briefly go over what actually happens behind the scenes when you click the start chat button. Here is a step-by-step timeline to give you a better idea:

A quick note about the step “Once remote media starts streaming, stop adding ICE candidates”: this is a temporary workaround which might result in suboptimal media routing for many network topologies. It should only be used until Chrome’s ICE support is fixed.

A quick and very important tip to remember when you are trying to get this to work: we used a ‘polyfill’-like technique, as shown in this article by Remy Sharp. As Remy describes, we wrote a piece of code to adapt to the Firefox syntax and get cross-browser functionality.
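
A minimal sketch of that kind of shim, normalizing the vendor-prefixed names that Firefox and Chrome use at the time of writing:

// Normalize the vendor-prefixed WebRTC constructors.
var PeerConnection = window.RTCPeerConnection ||
                     window.mozRTCPeerConnection ||
                     window.webkitRTCPeerConnection;
var SessionDescription = window.RTCSessionDescription ||
                         window.mozRTCSessionDescription;
var IceCandidate = window.RTCIceCandidate ||
                   window.mozRTCIceCandidate;
navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.mozGetUserMedia ||
                         navigator.webkitGetUserMedia;

// Attaching a stream to a <video> element also differs:
function attachStream(video, stream) {
	if ('mozSrcObject' in video) {
		video.mozSrcObject = stream;             // Firefox
	} else {
		video.src = URL.createObjectURL(stream); // Chrome
	}
	video.play();
}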

Issues We Ran Into and How We Solved Them

As you might expect, we ran into a number of problems trying to build this. WebRTC is evolving quickly, so we are working through new issues every day. Below are just some of the problems we encountered and how we solved them.

PeerConnection capability in Google Chrome

While working with the new PeerConnection capability in Chrome, we discovered that a strict order of operations is required for it to work; more specifically:

  • Peers must have local video streaming before sending the offer/answer SDP
  • The answerer must not add ICE candidates until it has generated the answer SDP
  • Once remote media starts streaming, stop adding ICE candidates
  • Never create the answerer’s peer connection until you receive the offer SDP

We fixed it by handling the connection in the order described above; the sketch below shows the answerer’s side. This was crucial to making the connection work reliably. Before we did that, it would only work every once in a while.
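
A minimal sketch of that order on the answerer’s side, assuming the shimmed PeerConnection, SessionDescription and IceCandidate names from the earlier snippet, plus placeholder signalingChannel and localStream objects:

var bufferedCandidates = [];
var pc = null;

function logError(err) { console.log(err); }

signalingChannel.onmessage = function(msg) {
	if (msg.type === 'offer') {
		// Rule 4: only create the answerer's connection once
		// the offer SDP has arrived...
		pc = new PeerConnection(null);
		// Rule 1: ...and attach local media before answering.
		pc.addStream(localStream);
		pc.setRemoteDescription(new SessionDescription(msg.sdp));
		pc.createAnswer(function(answer) {
			pc.setLocalDescription(answer);
			signalingChannel.send({ type: 'answer', sdp: answer });
			// Rule 2: flush candidates buffered while the
			// answer SDP was still pending.
			bufferedCandidates.forEach(function(c) {
				pc.addIceCandidate(new IceCandidate(c));
			});
			bufferedCandidates = [];
		}, logError);
	} else if (msg.type === 'candidate') {
		if (pc && pc.localDescription) {
			pc.addIceCandidate(new IceCandidate(msg.candidate));
		} else {
			bufferedCandidates.push(msg.candidate);
		}
	}
};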

Added latency due to lag

When streaming to a mobile device there is added latency due to the lag and limitations of surfing the net via a mobile phone.

We solved this by reducing the resolution of the streamed video via a hash tag at the end of the URL: the URL can optionally contain '#res=low' for a low-resolution stream or '#res=hd' for a high-definition stream. A quick note here that other configurable properties, such as frames per second, are now available and can be used for the same purpose.
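
A minimal sketch of how such a hash fragment could be mapped to getUserMedia constraints; the exact resolution values, and the gotStream and logError callbacks, are made up for illustration:

// Map '#res=low' / '#res=hd' in the URL to video constraints.
var res = (location.hash.match(/res=(\w+)/) || [])[1];
var video = true;
if (res === 'hd') {
	video = { mandatory: { minWidth: 1280, minHeight: 720 } };
} else if (res === 'low') {
	video = { mandatory: { maxWidth: 320, maxHeight: 240 } };
}
navigator.getUserMedia({ audio: true, video: video }, gotStream, logError);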

Recording the WebRTC demo

We’ve been dabbling with recording video in our WebRTC demo. When recording, we used the new JavaScript typed arrays to save the streaming data. We quickly discovered that it is only possible to record the video and audio separately.

We solved this by creating two recording instances, one for the audio and one for the video, that utilized typed arrays and recorded both streams simultaneously.
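
Their exact recording code isn’t shown here, but as a rough sketch of the audio half, one common approach captures raw samples into typed arrays via Web Audio (unprefixed names for brevity; stream is the MediaStream being recorded, and the video half would be captured separately):

// Capture raw audio samples into Float32Array chunks.
var audioContext = new AudioContext();
var source = audioContext.createMediaStreamSource(stream);
var processor = audioContext.createScriptProcessor(4096, 1, 1);
var audioChunks = [];

processor.onaudioprocess = function(e) {
	// The underlying buffer is reused between callbacks,
	// so clone each one into a new typed array.
	audioChunks.push(new Float32Array(e.inputBuffer.getChannelData(0)));
};

source.connect(processor);
processor.connect(audioContext.destination);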

Conclusion

It’s exciting to dabble in this stuff. We love WebRTC so much that we created an entire page dedicated to our experiments with this technology and others which we believe will transform the web in 2013. If you have any questions, please give us a shout.
