Taking my G-Ro for a spin…

Almost two years ago the G-Ro travel bag Kickstarter did the rounds and all of us travelers pricked up our ears. It sounded revolutionary: a really cool bag that is a mix of carry-on and laptop bag. Its unique physics and large wheels promise easy travel, and the built-in charger for phones and laptops sounds excellent.

As with many Kickstarter projects, this took ages to, well, kick in, and by the time mine arrived my address had already changed. They dealt with that easily though, and on this last trip I took the cool bag for its first spin.

Now, a caveat: if you use the bag the way it is intended, I am sure it performs admirably. The problem I find is that the use case shown in the videos isn’t really one that exists for an international traveler.

Let’s start with the great things about the G-Ro:

  • It looks awesome. Proper Star Trek stuff going on there.
  • It does feel a lot lighter to roll than other two-wheeled rollers. The larger wheels and the higher axle point make physical sense.
  • It comes with a lot of interior bags for folding shirts and jackets, and lots of clever features.
  • Once you spend the time to go through the instructions on the kickstarter page you’ll find more and more clever bits in it.
  • The handle is sturdy and the right length to pull. The bag is less of a danger to other travelers, as the angle you pull it at is steeper and you take up less space while walking. However, it is still worse than a four-wheeled bag you push at your side: people still manage to run into the G-Ro at airports.

Now, for a weekend trip with a few meetings and a conference, this thing surely is cool and does the job. However, on my four-day trip with two laptops and a camera it turned out to be just not big enough, and the laptop compartment is sized for only one laptop, without even sensible space for the chargers.

Here are the things that miffed me about the G-Ro:

  • Whilst it is advertised as the correct carry-on size for every airline, the G-Ro is big and there are no straps to make it thinner. This is what I like about my The North Face Overhead carry-on bag. It means that on an Airbus in Business Class, the G-Ro is a tight fit, both in height and length (photo: the G-Ro in the overhead bin on British Airways).
    As most airlines ask you to put your coat on top of your bag, this is a no-go.
  • The easy-access bag on the front for your liquids and gels is flat and wide, but liquids and deodorant/perfume bottles are bulkier and narrower than that. This easy-access bag would be much better as another laptop/tablet holder. With your liquids in that bag, the G-Ro looks bulky and you are sure to bump your liquids against the top of the overhead compartment, so there is a good chance of accidental spillage. A bag on the side or a wider one on the back would make more sense.
  • The bag on the back between the handle bars is supposed to be for your wallet and passport, and thus works as an advertisement for pickpockets. I used it for my laptop chargers instead, and that’s actually pretty convenient.
  • The G-Ro is very clever in the way it lets you pack a lot of cables and hardware into a very tight space. This is convenient, but it also ensures that every time the bag is X-rayed at the airport, it gets pulled aside and officers ask you to remove things. Instead of keeping cables, iPods and chargers in the compartments they are meant for, it would be better to have a removable pouch for them. I will use a cable organiser to avoid this from now on.
  • One thing that is not really a problem but freaked me out is that the G-Ro always stands slightly tilted, and I kept wondering if it would fall over. It won’t, and what is pretty cool is that you can fully open the front bag without it tipping over. But it is something to get used to (photos: the bag between the handle bars, the tilted stance, and the open G-Ro).
  • Now, I might have packed too much for a four-day trip, but here is the main issue with the G-Ro: for its size it is ridiculously heavy – you know, like the first two Black Sabbath albums heavy. With its big wheels it feels great to pull, but once you get to some stairs you get a rude awakening. No, you can’t roll it down most stairs, as it would bounce, and as with all two-wheel bags a slight angle going down a step makes the bag fishtail. The heaviness of the bag is exacerbated by the useless handle on the side, which doesn’t pull out at all and thus, for my fat fingers, is more a trap that removes fingernails than a way to carry the bag or pull it out of the overhead compartment (photo: the bad handle).

All in all, I am not kicking myself for backing this product, but it is only useful for a certain use case. In essence, it is a glorified backpack or laptop bag, not a full travel companion. I’m looking forward to using it for two-day weekend business trips, as it will force me not to buy things. But with all the hype and the plethora of useful features that the web site and the videos promise, I found it underwhelming, especially at this price.

View full post on Christian Heilmann

7 tricks to have very successful conference calls

Conference Call

I work remotely, with a team eight hours away from me. Many will be in the same boat, and often the problem is that your meetings are late at night your time but early for the others. Furthermore, the other team meets in a room early in the morning, which means they are either fresh and bushy-tailed or annoyed after being stuck in traffic. There are many different moods and agendas at play here. To keep this from becoming a frustrating experience, here are seven tips any team in the same situation should follow to ensure that everyone involved gets the most out of the conference call:

  • Be on time and stick to the duration – keep it professional – of course things go wrong, but there is no joy in being in a hotel room at 11pm listening to six people tell each other that others are still coming as they are “getting a quick coffee first”. It’s rude to waste people’s time. The meeting should be for information and discussions that apply to all, regardless of location and time zone. You can of course add a social part before or after the meeting for the locals.
  • Have a meeting agenda and stick to it – that way, people who have a hard time attending due to the time difference can decline the meeting, and it may also make the meeting shorter
  • Have the agenda editable by everyone during the meeting – this way people can edit it and note down things that have been said. This is beneficial as it acts as a transcript for those who couldn’t attend, and it also means you can ensure that people on the call remotely are on the ball and not watching TV
  • Introduce yourself when you speak and get close to the mic – for people dialing in, this is a feature of the conference call software, but when 10 people in a room speak, remote employees who dialed in have no idea who is talking or what’s going on.
  • Avoid unnecessary sounds – as someone dialing in, mute your microphone. Nobody needs your coughing, coffee sipping, or – at worst – typing sounds on the conference call. As someone in the room, don’t have conversations with others next to the microphone. Give the current presenters the stage they deserve.
  • Have a chat window open – this allows people to post extra info or give updates when something goes wrong. It is frustrating to speak when nobody hears you and you can’t even tell them that it doesn’t work. A text chat next to the conf call hardly ever fails to work and is a good feedback mechanism
  • Distribute presenter materials before the call – often presenting a slide deck or web product over Skype or similar tools fails for various reasons, or people dialing in are on a very bad connection. If they have the slide deck locally, they can follow it without blurs and delays

Using these tricks you end up with a call that results in a documented agenda you can send to those who couldn’t attend. You also build up an archive of all your conf calls for later reference. Of course, you could just record the sessions, but listening to a recording is much more annoying, and remote attendees on bad connections may find it hard to even download them. By separating the social part of the meeting from the official one you still have the joy of meeting in the mornings without annoying the people who can’t be part of it.

Photo Credit: quinn.anya Flickr cc

View full post on Christian Heilmann

Web bloat isn’t a knowledge problem

Fat hamster stuck

There is a consensus among all browser makers and lovers of the web: the current state of the web is a mess. The average web page is 2.4 megabytes in size and makes over 200 requests.

We spend more time waiting for trackers and ads to load than we spend reading content. Our high-powered computers and excellent mobile devices choke on megabytes of JavaScript and CSS. Code that often isn’t needed in the environments it gets delivered to. Code that fixes issues of the past that aren’t a problem these days.

A lot of the time, our reaction to this is to assume that people use libraries and frameworks because they don’t know better. Or that people use them to solve an issue they have that we don’t – like having to support legacy environments.

I’m getting the impression more and more that our approach is wrong. This is not about telling people that “you don’t need to use a framework” or that “using XYZ is considered harmful”. It isn’t even about “if you do this now, it will be a maintenance nightmare later”. These are our solutions to our problems.

We’ve done this. Over and over again.

Information is plentiful and tools aren’t a problem

We try to do our best to battle this. We make our browsers evergreen and even accessible as headless versions for automatic testing. We create tools to show you without a doubt what’s wrong about a certain web product. We have simulators that show you how long it takes for your product to become available and responsive to interaction. We allow you to simulate mobile devices on your desktop machine and get a glimpse of what your product will look like for your end users.

We create toolchains that undo the worst offenses when it comes to performance. We concatenate and minify scripts and CSS, we automatically optimise images and we flag up issues in a build process before they go live.
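
To make this concrete, here is a rough sketch of such a build step. It assumes the terser npm package, and the file names and output path are made up for illustration – a real project would more likely wire this into an existing bundler or task runner:

    // A minimal sketch of a "concatenate and minify" build step, assuming the
    // terser npm package. File names and paths are hypothetical examples.
    const { minify } = require('terser');
    const fs = require('fs');

    async function build() {
      // Concatenate the source scripts into one string.
      const source = ['src/app.js', 'src/utils.js']
        .map((file) => fs.readFileSync(file, 'utf8'))
        .join('\n');

      // Minify the bundle and report the savings before anything goes live.
      const { code } = await minify(source);
      fs.mkdirSync('dist', { recursive: true });
      fs.writeFileSync('dist/bundle.min.js', code);
      console.log(`Bundle: ${source.length} bytes before, ${code.length} bytes after minification.`);
    }

    build().catch((error) => {
      // Fail the build instead of shipping broken or unminified code.
      console.error(error);
      process.exit(1);
    });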

We give talks and record training videos on how to build a great, responsive product using modern browsers whilst supporting old browsers. We release all of this for free and openly available – even as handy collections and checklists.

And yet, the web is a mess. And I’m not even talking about products that bloat because of maintenance issues or age. Brand new, celebrated and beautiful products get even the basics of performance wrong. Why?

One reason for bloat – we don’t suffer the same problems

One of the biggest reasons is that developers in general don’t suffer the same issues end users do. We check our products on fast, well-equipped devices on fast and steady connections. We use ad blockers. We don’t test our products on a mobile device with a flaky connection. We don’t test them on outdated setups that may well still be in use out there. We don’t even test them on operating systems other than our own. If it isn’t broken on our machine, it is good enough to release.

This is not malice on our part; at worst it is indifference to other people’s needs. But most likely something else is the real problem.

The main reason for bloat

I think it is much more likely that the code quality of the end product and its performance isn’t even on the radar of many companies. We live in a market that moves at breakneck speed. Being first to market is paramount.

Companies that are funded to create hype and get millions of users posting random stuff, liking and adding emoji or commenting and sharing don’t care if they send a lot of unused JavaScript down the wire. They want to be the first to roll out a new feature that also can be turned off a month later when people don’t want it any more. They want to be the first to have the feature or be fast to add it when a competitor has success with it.

A lot of our messaging is the total opposite of that. With good reason. This is not a healthy way to work, this is not how you build a quality product. This is how you build an un-maintainable mess full of security holes and bloat. But this is how you make investors and prospective buyers happy.

We live in a market of hype and quick turnaround and we preach about longevity and quality thinking that takes time and effort and will yield results in the long run. We are right, we have proven over and over that we are, but the business model of our quick success poster child web companies is utterly and totally not about that.

I am not saying that this is good. At all. This constant push for innovation for the sake of showing something new and getting people even more addicted to using our products is killing the web. And it is killing our community and leads to an overly competitive work environment that favours 10x developers who have time and the drive to work 20 hours a day for a few months on a product that is destined to be scrapped a few weeks after the pivot.

Our job is to battle this.

Our job is to shine a bright light on all the bullshit Cinderella stories of the unknown startup that made millions in a month by moving fast and breaking things.

Our job is to push back as developers when project managers want us to deliver fast and fix later – that never happens.

Our job is to give sensible estimates as to what we can deliver in a certain time and be proud of what we delivered.

And our job is to work closely with library and framework creators to ensure that products based on things that promise “deliver more by writing less” result in quality code instead of overly generic bloat.

People don’t bloat products because they don’t know better. They do it because they want to be seen as incredibly productive and faster than others. And that comes at a cost that is not immediately evident and seems not important in comparison.

The biggest problem we need to solve is that we celebrate re-inventing our way of working every few months. Instead of dealing with our jobs asking us to deliver too much in too little time, we started falling into the same trap. We want to be seen using the newest and coolest, and to follow the patterns and ways of working of the fast companies out there. Quality and working with the platform isn’t sexy. Inventing new things and discarding them quickly is.

No, I am not against innovating and I’d be the last to pretend that the web stack is perfect. But I am also tired of seeing talented developers being burned out. We have a 1–2 year average retention span of developers in companies. This is not sustainable. This is not how we can have a career. This is not how we can become more diverse. The ugly brogrammer stereotype is only in part down to our own biases and faults. It is a result of an unhealthy work environment based on “release fast and break things”. We broke a lot. Let’s try to fix it by fixing what people use, not telling them what they should be using.

View full post on Christian Heilmann

Comparison of content management systems

We have previously mentioned this infographic in some of our social media feeds (Twitter and Facebook). We thought it might be helpful to provide it on our blog as well (with the author’s permission). In our opinion, it offers a good perspective on the three major content management systems. Clicking on the infographic will open the full article in a new browser window/tab.


Presented by Skilled.co (they have further information supporting this infographic on their site)

The post Comparison of content management systems appeared first on Web Professionals.

View full post on Web Professional Minute

First Decoded Chat of the year: Paul Bakaus on AMP

Today on the Decoded Blog I published the first ever Decoded Chat I recorded, where I grilled Paul Bakaus in detail about AMP.

This is an hour-long Skype call and different from the newer ones – I was still finding the format :). Quite a few changes have happened to AMP since then, and soon there will be an AMP Summit to look forward to. All in all, I do hope that this will give you some insight into what AMP is and what it could be if its focus were to move away from “Google only”.

These are the questions we covered:

  1. What is AMP to you?
  2. The main focus of AMP seems to be mobile, is that fair to say?
  3. Was AMP an answer to Facebook’s and Apple’s news formats? Does it rely on Google technology and – if so – will it be open to other providers?
  4. It seems that the cache infrastructure of AMP is big and expensive. How can we ensure it will not just go away as an open system as many other open APIs vanished?
  5. Do large corporations have a problem finding contributors to open source projects? Are they too intimidating?
  6. Is there a historical issue of large corporations re-inventing open source solutions to “production quality code”? Is this changing?
  7. Whilst it is easy to get an AMP version of your site with plugins to CMS, some of the content relies on JavaScript. Will this change?
  8. AMP isn’t forgiving. One mistake in the markup and the page won’t show up. Isn’t that XHTML reinvented – which we agreed was a mistake?
  9. AMP seems to be RSS based on best practices in mobile performance. How do we prevent publishers from exclusively creating AMP content instead of fixing their broken and slow main sites?
  10. It seems to me that AMP is a solution focused on CMS providers. Is that fair, and how do we reach those providers to allow people to create AMP without needing to code?
  11. A lot of “best practice” content shown at specialist events seems to be created for those. How can we tell others about this?
  12. AMP seems to be designed to be limiting. For example, images need a height and width, right?
  13. In terms of responsive design, does the AMP cache create differently sized versions of my images?
  14. Are most of the benefits of AMP limited to Chrome on Android or does it have benefits for other browsers, too?
  15. Do the polyfills needed for other browsers slow down AMP?
  16. How backwards compatible is AMP?
  17. One big worry about publishing in AMP is that people are afraid of being fully dependent on Google. Is that so?
  18. Are there any limitations to meta information in AMP pages? Can I add – for example – Twitter specific meta information?
  19. Do AMP compatible devices automatically load that version and – if not – can I force that?
  20. How can I invalidate the AMP cache? How can I quickly remove content that is wrong or dangerous?
  21. Right now you can’t use third party JavaScript in an AMP page. Are you considering white-listing commonly used libraries?
  22. It seems AMP is catered to documents, while most people talk about making everything an App. Is this separation really needed?
  23. What’s the sandbox of AMP and how is this now extended to the larger web as a standard proposal?

View full post on Christian Heilmann

Enjoying the silence…

It is a strange love I have for the web. Ever since I used the web for the first time it felt like taking a ride with a best friend.

I knew I had to start working on it to never let it down again. Many people who started with the web when I did have the same idea: I feel behind the wheel and I keep wanting to show the people the world in my eyes.

But sometimes I get the feeling that we’re flying high and we’re watching the world pass us by. Every new dress the web wears comes with technical challenges showing the limits of web technology. I don’t think that is broken, it is a pain I am used to. I show people how to work around limits, how stripped down solutions will perform better in the long run, and how walking in my shoes can give them a long lasting career with some great reward.

That said, I don’t think we get the balance right. It happens all the time that things don’t go as perfect as we want them to be. Of course we can tell people that they shouldn’t have done that, and follow a sacred policy of truth about the need for, for example, ubiquitous accessibility. But people are people and in many cases it is not always easy. When the boys in the higher up departments say go and it is a question of time in the project, shortcuts are taken. You stare down the barrel of a gun of delivery and you don’t have time for sweetest perfection. And – even worse – our whole market is constantly asked to innovate and create new things. We who are telling quality stories of old don’t suffer that well.

The landscape is changing all the time and the sinner in me is OK to spread some blasphemous rumours. The bottom line is that in your room you have a lot more insight than in a cut-throat delivery cycle. When products are faulty and become hard to maintain, we shouldn’t tell people “told you so”, that’s wrong.

We should be more flexible and whilst we have and hold the web we love, we should also remember that everything counts in large amounts. So our advice should yield fast and maybe dirty results, as much as the promise of sweetest perfection and a great reward in the future.

Instead of saying it’s no good, let’s say I feel you and try to fix the fragile tension between constant growth, innovation and quality.

View full post on Christian Heilmann

SSL Certificates as a Protection Tool for Web Professionals

As we all know, SSL certificates help protect the information transmitted between the web server and the client. Given the increase in cyber security attacks (and the associated media coverage), data breaches, and theft of payment processing data, we thought it appropriate to remind readers of the importance of SSL certificates.

Vivek Ram is a Technical Blog Writer from Comodo. He writes about information security, focusing on web application security. He provided the following information we wanted to share with readers of our blog. Many thanks to Vivek for providing this article.

“As a small business owner or even the owner of a larger company or ecommerce site, cyber security news about a data breach in payment processing on a website may be just one of the things that keep you up at night.
The good news is that cyber security consultants and professionals can develop plans, run a network security audit and even develop a network security policy that is designed to keep this type of data safe from hackers and breaches.
However, there are also some very simple security measures that can be put in place to provide encryption security between the server and the browser. This is known as SSL or Secure Sockets Layer and it has been the technology in place to protect data transmitted online since its introduction in 1994.
Today, the use of the new version of SSL, Transport Layer Security or TLS, is based on the early SSL technology. Even though TLS is the correct name, most Certificate Authorities and IT professionals still refer to it as SSL.
To protect your website and the information transmitted between the web server and the client, SSL certificates provide authentication and encryption. To understand how this provides both customer and user protection as well as protects the site itself, consider the following essential features, factors and functions. ”

The Trust Factor

“When existing customers or new prospective customers arrive at your website, the first thing they will look at is the quality of the landing page. However, once they start adding items to their cart and going through the checkout process, most customers will have taken a glance up at the address bar or to the sides of the page.
What they are looking for is the universal sign of online security. This is the padlock in the address bar that indicates they are on a site using SSL technology. Now, there is also the full green address bar which signifies the use of an EV SSL or Extended Validation certificate. This is the highest level of validation possible through any CA and for any type of website. Most customers aren’t certain about how the technology works, but they do recognize the need to have that padlock and perhaps the green bar present.
Glancing to the side or the bottom of the page will confirm the use of a specific Certificate Authority (CA). All of the major CAs will have their own site seal. This is a graphic that is used to designate the security of the website and the use of a particular product by a particular CA.
With these things in place, your website will see a decrease in the number of abandoned shopping carts, something that is common when the customer gets through the selection process and then realizes on the checkout page that the padlock or green bar isn’t present.
However, and even more importantly, it makes your website safe for your customers to use. This preserves the reputation of your website and your company.”

Full Encryption at 256 Bits

“The use of SSL/TLS certificates also provides full encryption at the industry standard 256 bits. This encryption and decryption are done through the use of a pair of keys. These two keys use Public Key Infrastructure or PKI to provide internet security protection for online data.
The public key is used to encrypt data between the browser and the server. The public key is available to all because it is public. However, it is only recognized by one unique private key.
The private key is located on the server that hosts the website. When data comes in encrypted by the public key, it is unreadable unless it is decrypted by the private key. This protects all data transmitted from the website including financial information, personal information or even general information.
The public and private keys are actually long strings of what look like random numbers. They are able to recognize each other through a complicated mathematical relationship that is never duplicated and is completely unique.
The 256 bit encryption is virtually impossible to hack or break, even with brute force types of hacking attempts. The level of encryption offered by SSL certificate technology has changed over time and will continue to evolve as computer systems advance.”
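
To make the mechanism described in the quoted section a bit more concrete, here is a small sketch using Node’s built-in crypto module: a public/private key pair protects a randomly generated 256-bit key, and that symmetric key encrypts the actual payload. This only illustrates the principle – it is not how a TLS library implements its handshake – and the payload and key parameters are example values:

    // Sketch only: public-key wrapping of a 256-bit session key, as described above.
    const crypto = require('crypto');

    // 1. The server owns a key pair; the public key is shared with everyone.
    const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
      modulusLength: 2048,
    });

    // 2. The sender creates a random 256-bit key and encrypts the payload with it.
    const sessionKey = crypto.randomBytes(32); // 256 bits
    const iv = crypto.randomBytes(12);
    const cipher = crypto.createCipheriv('aes-256-gcm', sessionKey, iv);
    const encrypted = Buffer.concat([
      cipher.update('card number 4111 1111 1111 1111', 'utf8'),
      cipher.final(),
    ]);
    const authTag = cipher.getAuthTag();

    // 3. Only the matching private key can unwrap the session key...
    const wrappedKey = crypto.publicEncrypt(publicKey, sessionKey);
    const unwrappedKey = crypto.privateDecrypt(privateKey, wrappedKey);

    // 4. ...and with it, recover the original data.
    const decipher = crypto.createDecipheriv('aes-256-gcm', unwrappedKey, iv);
    decipher.setAuthTag(authTag);
    const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
    console.log(decrypted.toString('utf8')); // "card number 4111 1111 1111 1111"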

Validation and Verification Process

To further provide complete protection for your website against spoof websites or fraudulent websites trying to look like your site, the CAs have to follow a rigid and very complex process to verify and validate the application for any type of SSL certificate.
This is based on the AICPA/CICA WebTrust for Certification Authorities Principles and Criteria and outlines what verifications must be completed for the various SSL certificates at the different validation levels.
Because hackers or spoofing sites have to provide the necessary information, and this has to match records on file with a wide variety of databases and trusted sources, it is effectively impossible for these criminals to obtain an SSL certificate for those fraudulent sites. This not only protects your website; with the SSL certificate in place, it also protects your customers.”

The post SSL Certificates as a Protection Tool for Web Professionals appeared first on Web Professionals.

View full post on Web Professional Minute

Firebug lives on in Firefox DevTools

As you might have heard already, Firebug has been discontinued as a separate Firefox add-on.

The reason for this huge change is Electrolysis, Mozilla’s project name for a redesign of Firefox architecture to improve responsiveness, stability, and security. Electrolysis’s multiprocess architecture makes it possible for Firefox to run its user interface (things like the address bar, the tabs and menus) in one process while the content (websites) runs in other processes. With multiprocess architecture, if a website crashes, it doesn’t also crash the whole browser.

Unfortunately, Firebug wasn’t designed with multiprocess in mind, and making it work in this new scenario would have required an extremely difficult and costly rewrite. The Firebug Working Group agreed they didn’t have enough resources to implement such a massive architectural change. Additionally, Firefox’s built-in developer tools have been gaining speed, so it made sense to base the next version of Firebug on these tools instead.

The decision was made that the next version of Firebug (codenamed Firebug.next) would build on top of Firefox DevTools, and Firebug would be merged into the built-in tools.

And perhaps most importantly, we joined forces to build the best developer tools together, rather than compete with each other. Many of Firebug’s core developers are on the DevTools team, including Jan ‘Honza’ Odvarko and Mike Ratcliffe. Other Firebug Working Group members like Sebastian Zartner and Florent Fayolle are also active DevTools contributors.

A huge thank you to them for bringing their expertise in browser developer tooling to the project!

In practical terms, what does it mean to merge Firebug into DevTools?

Several features have been absorbed: The DOM panel, the Firebug theme, Server-side log messages, the HTTP inspector (aka XHR Spy), and various popular add-ons like FireQuery, HAR export, and PixelPerfect. Also, over 40 bugs were fixed to close the gap between DevTools and Firebug.

For curious readers, a couple of articles on hacks.mozilla.org and on the Firebug blog go into more detail.

If you are switching now from Firebug to Firefox DevTools, you will of course notice differences. This migration guide can help you.

We understand that disruption is never really welcome, but we are working hard to ensure developers have the best possible tools, and sometimes this means we need to refocus and use resources wisely.

You can help: tell us which features you need that are still missing. There are a few ways you can do this:

We are already tracking missing features on this bug, and so far you have told us that the most important are these:

We thank you for your loyalty and hope you understand why we’ve made this difficult decision. The Firebug spirit lives on in all of the browser developer tools we build and use today.

The Firefox DevTools and Firebug teams

View full post on Mozilla Hacks – the Web developer blog

Level Up Your Cross-Browser Testing

Today we’re announcing a special opportunity for web developers to learn how to build and automate functional browser tests — we’ve partnered with Sauce Labs to offer a special extended trial of their excellent tools, and we’ve created a custom learning resource as part of this trial.

2016: The year of web compat

In 2016 we made a lot of noise about cross-browser compatibility:

Let there be no question: Cross-browser compatibility is essential for a robust, healthy internet, and we’ll continue urging developers to build web experiences that work for more people on more browsers and devices.

That is by no means the end of it, however — we also intend to make it easier to achieve cross-browser compatibility, with tools and information to help developers make the web as interoperable as possible.

Special access to Sauce Labs

Beginning today, we’re introducing an exceptional opportunity for web developers. Our partners — Sauce Labs — are giving web developers special access to a complete suite of cross-browser testing tools, including hundreds of OS/browser combinations and full access to automation features.

The offer includes a commitment-free extended trial access period for Sauce Labs tools, with extra features beyond what is available in the standard trial. To find out more and take advantage of the offer, visit our signup page now.

Access to Mozilla’s new testing tutorials

We’ve also brought together expert trainers from Sauce Labs and MDN to create a streamlined tutorial to help developers get up and running quickly with automated website testing, covering the most essential concepts and skills you’ll need: See Mozilla’s cross-browser testing tutorial.
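
If you are curious what such an automated functional test looks like before diving into the tutorial, here is a minimal sketch using the selenium-webdriver npm package against a remote WebDriver endpoint such as the Sauce Labs hub. The hub URL, the environment variables holding the credentials, and the browser list are assumptions for illustration only – the tutorial is the authoritative guide:

    // Minimal sketch of a remote cross-browser test with selenium-webdriver.
    // SAUCE_USERNAME / SAUCE_ACCESS_KEY are assumed environment variables.
    const { Builder, until } = require('selenium-webdriver');

    async function checkTitle(browserName) {
      const driver = await new Builder()
        .usingServer('https://ondemand.saucelabs.com/wd/hub') // assumed hub URL
        .withCapabilities({
          browserName,
          username: process.env.SAUCE_USERNAME,
          accessKey: process.env.SAUCE_ACCESS_KEY,
        })
        .build();

      try {
        await driver.get('https://developer.mozilla.org/');
        // Wait for the page title, then report it for this browser.
        await driver.wait(until.titleContains('MDN'), 10000);
        console.log(`${browserName}: title is "${await driver.getTitle()}"`);
      } finally {
        await driver.quit();
      }
    }

    // Run the same functional check against several browsers, one after the other.
    ['chrome', 'firefox', 'MicrosoftEdge'].reduce(
      (previous, browser) => previous.then(() => checkTitle(browser)),
      Promise.resolve()
    );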

What’s next

This is a time-limited offer from our friends at Sauce Labs. We want to understand whether access to tools like this can help developers deliver effective, performant, interoperable web experiences across today’s complex landscape of browsers and devices.

Sign up and let us know how it goes. Is the trial helpful to you? Do you have constructive feedback on the tooling? Please let us know in the comments below.

View full post on Mozilla Hacks – the Web developer blog

Firefox Hardware Report for Web Developers

Suppose you’re developing a sophisticated web game or application, and you’re wondering — will it actually be able to run? What hardware should I be targeting to get the widest possible audience? Existing hardware reports (such as those from Valve and Unity) are excellent, but represent a different group of hardware users than the majority of people who use the web.

We’ve been working on the next-generation web game platform for years now, and often receive questions from web developers about the hardware market. Today we’re releasing the Firefox Hardware Report to help answer those questions and inform your development decisions.

On this site you’ll find a variety of data points showing what hardware and operating systems Firefox users have, and trends over time. This includes CPU vendors, cores, and speeds; system memory; GPU vendors, models, and display resolutions; operating system architecture and market share; browser architecture share; and, finally, Flash plugin availability. Some charts (such as the GPU models) have an expanded view showing all of the models captured. You’ll also discover all sorts of interesting stats — like the fact that 30% of Firefox users have 4 GB of RAM, that Intel makes 86% of users’ CPUs and 63% of their GPUs, or that the most popular screen resolution (with 33% of users) is 1366×768.

Firefox Hardware Survey

The Firefox Hardware Report is a public report of the hardware used by a representative sample of the population from Firefox’s desktop release channel. The source data for the Hardware Report was collected through the Firefox Telemetry system, which automatically collects browser and platform information from machines using Firefox. We use this data to prioritize development efforts and to power this report.

Once data is collected, we aggregate and anonymize it. During the aggregation step, less common screen resolutions and OS versions are handled as special cases — resolutions are rounded to the nearest hundred and version numbers are collapsed together in a generic group. Any reported configurations that account for less than 1% of the data are grouped together in an “Other” bucket. At the end of the process, the aggregated, anonymized data is exported to a JSON file and published on the report website.
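
As a rough sketch of what those aggregation rules look like in code, the snippet below rounds hypothetical screen-resolution submissions to the nearest hundred and folds rare configurations into an “Other” bucket. It simplifies the real pipeline (which, for example, only rounds the less common resolutions), and the input data is made up – the actual code is linked below:

    // Sketch of the aggregation step described above, over made-up submissions.
    function aggregateResolutions(submissions) {
      const counts = {};
      for (const { width, height } of submissions) {
        // Round each dimension to the nearest hundred before counting.
        const key = `${Math.round(width / 100) * 100}x${Math.round(height / 100) * 100}`;
        counts[key] = (counts[key] || 0) + 1;
      }

      // Group configurations that account for less than 1% of the data into "Other".
      const total = submissions.length;
      const report = {};
      for (const [key, count] of Object.entries(counts)) {
        if (count / total < 0.01) {
          report.Other = (report.Other || 0) + count;
        } else {
          report[key] = count;
        }
      }
      return report;
    }

    // The aggregated, anonymized result can then be exported as JSON.
    console.log(JSON.stringify(aggregateResolutions([
      { width: 1366, height: 768 },
      { width: 1920, height: 1080 },
      { width: 1368, height: 766 },
    ]), null, 2));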

To learn more about how we created this report, take a look at our blog post on the Mozilla Tech Medium. The code used to generate the reported data and website is available on GitHub.

This is just our first step, and we hope to expand this data set over time. In the meantime, please let us know what you think — is this data useful? What other data points might be helpful in the future? Do you have visualization suggestions? We look forward to your email feedback, with thanks!

View full post on Mozilla Hacks – the Web developer blog
