6 tips to turn your slow loading website into a brisk browsing experience

Internet users want a speedy experience, and they’re not getting it, a fact that leaves them frustrated and website owners with less revenue. Don’t believe it? The numbers don’t lie: a full 53 percent of users expect any site they visit to load in three seconds or less. The largest ecommerce sites in the world recognize this necessity, and they load incredibly fast. Most of the rest of the internet leaves a gap that makes for a lot of gritted teeth and nervous toe-tapping. The good news is that speeding up a slow website is neither difficult nor time-consuming. The bad news is you might not choose to do it.

Are You Flirting with the Performance Poverty Line?

The performance poverty line is the point at which being slow no longer matters because you’ve already lost most of your traffic. That number sits at around eight seconds. The more pertinent question is, do you know your website’s speed? REALLY know your website’s speed?

No guessing because this is important stuff.

There’s an easy way to find out. Pay a visit to a website called Pingdom (it’ll probably load fast; speed is sort of their business) and enter your URL in the box. Select a test location from the dropdown menu and hit “Start.” Unless the service is exceptionally busy (it happens sometimes), you should get a performance summary in less than a minute.

Screen capture of Pingdom report

There’s a good chance what you see won’t impress anyone, but that’s okay. Few websites do. We’re here to provide you with a road map to get those numbers headed down, down, down and your visitors to start getting happy, happy, happy. Let’s call this…

A 6-Part Roadmap to Fast Websites and Happy Customers

Part 1: Magically Shrink Your Website

Actually, as far as we know, there’s no way to magically shrink your website, but you can get the same effect by applying a sweet little bit of technology called Gzip compression. When it’s implemented, some site owners have seen overall transfer sizes drop by as much as 70 percent. That’s huge. Actually, it’s tiny, and that’s the point. It works like this: when a request hits the server, the server compresses the files before sending them on to the requester’s browser, where they are unzipped and displayed.

Part 2: Fix Bad Design and Too Many HTTP Requests

Every element on your website — we’re talking about images, videos, scripts, and even text — generates an individual request to the server. The more “stuff” your website has, the more requests there are and the longer it takes to load. If ever there was an argument for a minimalistic approach to designing your website, this is it. Fewer requests mean a faster website. The tricky part is to not get distracted by all the things you could do and to stick to only what’s needed to accomplish the site’s mission.
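One quick way to get a feel for this is to count the resource references in your markup. A rough sketch (a real audit tool counts actual network requests, including ones added later by CSS and scripts, so treat this as a lower bound):

```javascript
// Roughly count the external resources an HTML page will request.
// Only sees what's in the markup; scripts and stylesheets can add more.
function countRequests(html) {
  const img = (html.match(/<img\b[^>]*\bsrc=/gi) || []).length;
  const script = (html.match(/<script\b[^>]*\bsrc=/gi) || []).length;
  const css = (html.match(/<link\b[^>]*rel=["']stylesheet["']/gi) || []).length;
  return { img, script, css, total: img + script + css + 1 }; // +1 for the page itself
}

const page = `
  <link rel="stylesheet" href="style.css">
  <script src="app.js"></script>
  <img src="hero.jpg"><img src="logo.png">`;
console.log(countRequests(page));
```

Every item you can cut from that total is a round trip the browser no longer has to make.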

Overview of HTTP requests and responses

Part 3: Put Hefty Images on a Diet

Images are huge. Optimized incorrectly (or not at all), they put a terrible strain on bandwidth and leave the server and browser gasping. While we could write a book on the topic, there is one step that fixes a lot of the issues: choose the correct format — PNG, GIF, and JPEG are all good — and make the files as small as you can stand BEFORE uploading them to your website. If you upload a full-size image, even one you scale down later, the server still has the original version, and that’s the one that clogs the pipeline.

Part 4: Upgrade Your Hosting

We love cheap stuff as much as the next person, but when it comes to choosing a web hosting plan, you need to understand the different types of plans and know when it’s time to upgrade. Inexpensive shared plans can run as low as a few dollars a month, and that’s okay for a hobby site or one that doesn’t have much traffic yet. Once you reach a certain level, though, the shared-resource approach of this kind of plan will almost certainly mean slow loading and downtime. While a dedicated server might not be worth the expense, a virtual private server, or VPS, can be a great compromise.

Schematic depicting VPS

Part 5: Turn on Browser Caching

Browser caching is an easy-to-implement tactic that most fast-loading websites use. The idea is simple. Rather than force the server to send over all the website files every time someone visits, static files (those that don’t change) are stored in the browser’s cache, and only dynamic files have to be retrieved. Obviously, this doesn’t help on a first visit, but with browser caching enabled, subsequent visits will be quicker. For WordPress websites, W3 Total Cache is a free plugin worth a look. Other platforms just require a simple code addition.
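That “simple code addition” usually boils down to sending a Cache-Control header for static files. A minimal sketch of the decision, server-side in Node (the header values are illustrative; tune max-age to how often your assets actually change):

```javascript
// Decide a Cache-Control header from the file extension.
// Static assets get a long lifetime; pages stay fresh so content updates show up.
const STATIC_EXTENSIONS = new Set(["css", "js", "png", "jpg", "jpeg", "gif", "svg", "woff2"]);

function cacheControlFor(urlPath) {
  const ext = urlPath.split(".").pop().toLowerCase();
  return STATIC_EXTENSIONS.has(ext)
    ? "public, max-age=31536000, immutable" // ~1 year for fingerprinted assets
    : "no-cache";                           // always revalidate HTML
}

console.log(cacheControlFor("/assets/logo.png")); // long-lived
console.log(cacheControlFor("/index.html"));      // revalidated on each visit
```

On Apache or nginx the same policy is expressed in the server config rather than in code, but the decision is identical: long lifetimes for files that never change, revalidation for everything else.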

Part 6: Resolve Plugin Conflicts

This WordPress-specific advice is based on the reality that a lot of site owners install plugins they never update or even use. Considering the third-party nature of these bits of software, it should be no surprise that they don’t always play nice together — they weren’t intended to. If your WordPress website is slow or buggy, one of the first actions to take is to uninstall any plugins you aren’t using. After that, deactivate what’s left one plugin at a time and check site speed after each. There’s a good chance you’ll find one of the culprits behind the slow loading.
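To make the “check site speed” step objective, time the page after each deactivation rather than eyeballing it. A sketch of a timing helper (the URL is hypothetical; this assumes Node 18+, where fetch is built in):

```javascript
// Time how long a URL takes to respond, averaged over a few runs.
// Run after deactivating each plugin and compare the numbers.
async function averageLoadMs(url, runs = 3) {
  let total = 0;
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fetch(url, { cache: "no-store" }); // skip caches so runs are comparable
    total += performance.now() - start;
  }
  return total / runs;
}

// averageLoadMs("https://example.com/").then((ms) =>
//   console.log(`average: ${ms.toFixed(0)} ms`));
```

Note this measures the initial HTML response, not the fully rendered page; a full waterfall tool like Pingdom remains the better instrument for the whole picture.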

Final Thoughts

The state of technology today is such that people expect (even if it’s not a reasonable standard) a website to load in three seconds or less. A clean, fast-loading experience will go a long way toward creating loyal customers and more revenue, which are both good things to shoot for as an online entrepreneur. Keep in mind that the process is iterative. There’s no magic wand that will turn your site into a speed demon. Small actions taken methodically, such as the ones described here, should, over time, move you incrementally closer to that three-second target. Good luck and thanks for reading.

Photo of Gary Stevens

Member author Gary Stevens is a front-end developer. He’s a full-time blockchain geek, a volunteer for the Ethereum Foundation, and an active GitHub contributor.

The post 6 tips to turn your slow loading website into a brisk browsing experience appeared first on Web Professionals.

View full post on Web Professional Minute


February UX update – User Experience is all about Users

User experience design (UX, UXD, UED or XD) is the process of enhancing user satisfaction with a product by improving the usability, accessibility, and pleasure provided in the interaction with the product. User experience design encompasses traditional human–computer interaction (HCI) design and extends it by addressing all aspects of a product or service as perceived by users. As aspiring and practicing web professionals, we should make every effort to enhance user satisfaction.

UX Term origin

The term is credited to user experience architect Donald Norman, who has said he invented it because he felt “human interface” and “usability” were too narrow: he wanted to cover all aspects of a person’s experience with a system, including industrial design, graphics, the interface, the physical interaction, and the manual. Since then the term has spread so widely that it is starting to lose its meaning, a point Norman has reflected on personally, as noted in his Wikipedia article.


Web Professional Trends for 2014 – User Experience with Tomer Sharon

In this 10-minute interview with Tomer Sharon, User Experience (UX) Researcher @Google, we talk about Web Professional Trends for 2014, including user experience and startups:

* Startups are increasingly interested in their own UX research
* Advice for UX research
* User Experience techniques
* UX research methods
* Links to UX resources
* Common UX pitfalls and how to avoid them
* UX case study


Web Professional Trends 2014 – User Experience with Dave Hogue

In this 10-minute interview with David M. Hogue, Ph.D., Design Manager @Google, we talk about Web Professional Trends for 2014, including web design, user experience, and:

* That design for the Web is a very broad field, ranging from visual design and interaction design to microinteractions and user experience
* Design at a higher level, including emotional design, service design, and how experiences cross channels
* The growing recognition of these facts, which will change the way people think about design
* How psychology is playing an even larger role today than ever before
* How psychology will affect digital environments and interaction trends
* The distinction between design and art
* That experienced designers are problem solvers
* That design goes way beyond the visual
* That design is playing a larger role in business today
* The need to improve overall design education
* The need for interdisciplinary approaches in education


Firefox OS Security: Part 2 – User Experience and Security Updates

When presenting Firefox OS to people, security is a big topic. Can an operating system built on web technologies be secure? What has Mozilla built in to avoid drive-by downloads and malware? How can a browser-based app be secure without making the UX suffer by asking the user to react to a lot of “do you want to allow this?” prompts? In this, the second of a two-part video series, Christian Heilmann (@codepo8), principal evangelist at Mozilla, talks to Michael Coates (@_mwc), chair of the @OWASP board, about the user experience of installing and using apps and the security update model for Firefox OS.

You can watch the video on YouTube.

Firefox OS was built on top of the technologies that power the Web. Following Mozilla’s security practices and knowledge from over 10 years of securing Firefox, Firefox OS is engineered as a multi-tiered system that protects users while delivering the power of the mobile web. The design ensures users are in control of their data and developers have APIs and technologies at their disposal to unlock the power of the Web.

Additional links for more information:

View full post on Mozilla Hacks – the Web developer blog


A Social-aware Dashboard Experience with Gecko in Walls

This article covers a Web project that rethinks how TV appliances can be used in public spaces for real-time content and interaction with social networks. Tela Social is a Powered-by-Mozilla system application that runs on Linux appliances and creates a visual experience presenting custom, interactive content with real-time data from the Web. It’s different from a standard kiosk and far from the personal browser experience because it isn’t tailored to an individual. Instead, it explores how online data can be used to engage users in common areas. In the following sections, you will read about the general system architecture, with a special focus on how CSS3 and JavaScript are used to support the user experience with engaging animated effects.

First, boot to the Web

The application starts after the Linux system boots. A monitor process is launched and checks whether the Gecko-based client is running. If it isn’t, the monitor starts the main process: a Gecko-based single-window browsing engine, full screen with no toolbars.

The following image shows an overview of the web stack infrastructure that is launched. The web application sits above the engine in a similar way to applications such as Firefox or other Powered by Mozilla apps using Gecko.

Gecko architecture overview

Online and Offline

Instead of loading a remote web page, most resources are loaded from a local web server written in JavaScript on NodeJS, the TelaSocial Mediator. This server is also monitored in order to reduce problems in the case of script execution failures.

Because the web app comes from a local web server, a main HTML layout may bring in inner HTML components loaded within iframes. This degree of separation helps developers create isolated components in a standard test environment: all the components are plain pages. As part of this project, an experimental script was created to help developers quickly deploy new grid layouts: Gridtype.JS takes strings as arguments and generates a tableless grid layout. The following example shows how to generate a grid of DIVs:
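Gridtype.JS’s exact syntax aside, the idea can be illustrated with a hypothetical generator that turns a “columns x rows” string into nested DIVs (this is not the library’s real API, just a sketch of the concept):

```javascript
// Hypothetical illustration: turn a spec like "3x2" into a tableless grid of divs.
function gridFromString(spec) {
  const [cols, rows] = spec.split("x").map(Number);
  let html = '<div class="grid">\n';
  for (let r = 0; r < rows; r++) {
    html += '  <div class="row">\n';
    for (let c = 0; c < cols; c++) {
      html += `    <div class="cell" id="cell-${r}-${c}"></div>\n`;
    }
    html += "  </div>\n";
  }
  return html + "</div>";
}

console.log(gridFromString("3x2")); // 2 rows of 3 cells each
```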


The above “string” specification will generate the following layout:

Layout design

Free of pop-ups or unwanted desktop interfaces

There are many systems that use online data to display information on TV appliances. Some of them are applications launched on top of the operating system’s graphical user interface. The following photos, taken at airports in Brazil in 2012, are a few of many examples of unwanted user interface elements displayed on top of systems that run on OS desktops. For this reason, and for simplicity and cost reduction, the chosen approach was to launch Gecko from the basic X Window infrastructure. This ensures that no other visual applications are running at the same time, creating a better-quality experience without those unwanted pop-ups.

Picture shows an Operating System update pop up which is common in airports in Brazil

Picture shows an Operating System update pop up which is common in airports in Brazil

Routing Gecko pop-ups into frames

While OS desktop pop-ups are out of the picture, there are still conditions in which Gecko would display pop-ups to alert the user. The solution was to use Gecko’s preferences API and ask Gecko to display network error messages inline. With this, all errors are displayed in the iframe, which is great because it is possible to track the errors and signal application logic outside the Gecko process. As an example, we can monitor the output console and change the layout depending on network error conditions: if a weather-channel frame hits a network error, the system can launch an alternate layout.
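Concretely, the switch involved is a Gecko preference that renders network errors as in-content error pages rather than dialogs. In a prefs file it looks roughly like this (the preference name is the one Gecko has historically used for this behavior, so treat the snippet as a sketch rather than the project’s actual configuration):

```js
// Ask Gecko to render network errors as in-content pages, not alert dialogs,
// so an embedded iframe shows (and the app can detect) the error inline.
pref("browser.xul.error_pages.enabled", true);
```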

Animation library with zoom and pan

The above elements are essential to maintain the level of quality expected in enterprise environments. With this functional base in place, things can get creative and the web developer mindset rules. HTML5 and modern CSS3 effects can be used to understand and improve the user experience in this scenario, which is neither desktop nor mobile.

To help with content creation, a small JavaScript library was also produced as part of this project. Its main goal is to help build visual narratives out of HTML pages. The TagVisor library reads a list of <li> elements that tell it how CSS3 and other transformation effects are applied, with time-referenced data:

<ul id="animation" style="display: none;">
   <li data-target="slidea" data-time="1s"
      data-effect="scalefit" data-duration="3s"></li>
   <li data-target="slideb" data-time="6s"
      data-effect="scalefit" data-duration="3s"></li>
   <li data-target="slidec" data-time="12s"
      data-effect="scalefit" data-duration="3s"></li>
</ul>

The markup above drives transformation functions that modify the document using JavaScript, with DOM and style changes taking care of the visual effects. Such effects are common in many systems, including well-known JS-based solutions such as Impress.JS. However, TagVisor comes with tailored functions that matter in digital signage scenarios. One difference is that the script allows modifications to be performed within iframes, which is good for cases where many separate HTML resources are loaded in a grid arrangement.
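TagVisor’s internals aren’t shown here, but a dispatcher along these lines would start by reading the time-referenced attributes into a schedule. A hypothetical sketch (not the library’s actual code):

```javascript
// Parse TagVisor-style <li> attributes into a schedule of animation steps.
// A real dispatcher would then apply each effect when its data-time arrives.
function parseSchedule(html) {
  const steps = [];
  const liRe = /<li\b([^>]*)>/gi;
  const attrRe = /data-(target|time|effect|duration)="([^"]*)"/gi;
  let li;
  while ((li = liRe.exec(html)) !== null) {
    const step = {};
    let attr;
    while ((attr = attrRe.exec(li[1])) !== null) step[attr[1]] = attr[2];
    steps.push(step);
  }
  return steps;
}

const markup = '<li data-target="slidea" data-time="1s" data-effect="scalefit" data-duration="3s">';
console.log(parseSchedule(markup));
// [{ target: "slidea", time: "1s", effect: "scalefit", duration: "3s" }]
```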

The zoom-and-pan metaphor does a lot of the heavy lifting to ensure the smoothness of animation that users expect in front of a TV panel. Effects can also be combined in time to produce engaging visual narratives, as shown in the TagVisor video demonstration.

Web-driven, for adaptation

User experience is of major importance and a main reason for using a web-based infrastructure. This model brings a level of dynamism that lets organisations tailor the system with custom data sources using web service formats such as JSON or RSS. It has allowed us to test the screens with a variety of online sources and to learn how the stream of web events can affect users in local organisations.

The following videos show examples of systems running Tela Social. The first shows the system at a major science park in Brazil (press release in Portuguese); it’s an example with a variety of content areas tied to a variety of sources, from journalists and editors to open channels on social networks. The second video shows an example where the TV appliance is mounted vertically.

People recognise that combining event data with local-interest content can build local participation. For most people, TV content is something far away, not tailored to their local community’s interests. To make the difference visible, we have even used TVs in the vertical position, making the point that the whole experience was flipped and that content and interaction are bottom-up.

The objective of the project is to develop web-based solutions that are useful in shared spaces and bring social-aware experiences that benefit local communities. While social features are available on mobile and desktop, we believe our approach can help communities because it extends the physical environment, creating a social reflection of it. It’s a chance for custom web experiences that build awareness of contributions made by local communities and of real-time data of interest. The approach also explores new user interface challenges, since it engages people to interact in new ways: there’s no direct touch as with standard kiosks. It’s the web first, and opportunities for interaction are explored based on live data.


Kuma: The updated editor experience

The editing experience on the new Kuma wiki, which we’ll begin deploying on July 5th, is not enormously different from what you’re used to, but there are some key differences I’d like to call out.

Getting into the editor

Let’s take a look, first, at differences in how you get into the editor. Once you’ve logged in using BrowserID, you’ll still see your old friend, the “Edit” button, at the top right corner as usual:

Screenshot of primary edit button

You can simply click that big blue “Edit” button to begin editing the entire page. Easy! But if you want, you can edit just a single section. Each header line has its own edit button off to the right, like this:

Screenshot of the section editing button.

Clicking that pretty blue “Edit” button to the right of the section heading will open an editor just for that section.

Changing page information

Once you’re in the editor, you can edit both content and page information. At the top left you see the title of the page:

Screen shot of the heading box.

Clicking on the “i” icon gives you access to edit page metadata:

Screen shot of the metadata editor.

You can then edit the page’s title (that is, the text displayed as the title of the individual page) and the slug (the URL component below the parent page’s). The parent page itself can’t be changed here; there’s a separate move feature for that.

The “TOC” checkbox lets you toggle whether or not the page displays a table of contents built from its headers.

Saving and previewing

Then, at the top right of the editor screen, you’ll see these buttons:

Screen shot of the bar of save/discard options.

These are pretty self-explanatory. The first gives you the option to save your changes without leaving the editor; this is a feature we’ve wanted for ages, but finally have. The second button saves your changes and closes the editor.

The “Preview Changes” button opens a new tab showing a preview of the page. We finally, finally, have the ability to double-check the use of scripted templates before saving edits. This is a huge deal for us!

Finally, the “Discard Changes” button lets you throw away your edits. Hopefully that’s pretty obvious.


The editor itself

The editor is essentially the same CKEditor we’ve always used on MDN, although it’s a newer version. Most of our keyboard shortcuts are the same as they were before. The most notable difference is that Ctrl-S no longer toggles source mode; instead, it performs a “Save Changes.”

One thing we’ve done is revamp the toolbar to be more useful for the types of work we do:

Screen shot of the MDN editor's toolbar.

This is a major reorganization of what we had before, with fewer unneeded buttons. Immediately below the toolbar is a block hierarchy bar; this shows you the hierarchy of elements that leads to your current cursor position. This is helpful, for example, to know what heading level you’re on, how deeply nested your list is, and so forth.

We also now have handy buttons for the heading levels, and a button for preformatted text. To the right of the <pre> button is a menu that, when opened while your cursor is in a <pre> block, presents a list of syntax highlighting language options:

Screen shot of the MDN syntax highlighting popup.

This list is much simpler than the old one, and is certainly easier to read!

The style drop-down menu is pretty similar to the old one, with an assortment of styles we use regularly:

Screen shot of the styles drop-down menu.

Tagging articles

Currently, the only way to tag articles is from within the editor screen. This will be changed at some point, but for now, you will find the tag editor at the bottom of the edit page:

Screen shot of the MDN tag editor.

You can delete tags by clicking the “X” in each tag’s box, or add new ones by simply clicking to the right of the tag list and starting to type.

There’s currently a bug that makes it impossible to enter tag names containing spaces. We hope to fix it before we deploy Kuma.

Requesting reviews

We’re in the process of building a new, formal review system. While not all of the support for tracking reviews is in place yet, you can establish review requests using the checkboxes at the bottom of the editor page:

Screen shot of the review checkboxes.

For new articles, both the technical and editorial review requests are enabled by default. You can set or clear these as appropriate based on the type and number of changes you’ve made (and, of course, your confidence in your work!).

The “Template” review request is used to indicate that a template has been changed and needs a code review. This won’t affect very many people, because template editing is now under a tighter set of permissions than most editing, for security reasons.

Onward and upward

We will continue to iterate on the editing experience going forward, to make it even better. There are lots of things we want to do to make Kuma amazing!

Sometime in the next couple of days, I’ll share a look at the new localization tools we provide in Kuma. They’re not finished, but they’re already much, much better than what we had with our old system (which is to say: none at all).
