Truths

Web Truths: The web is world-wide and needs to be more inclusive

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to talk about the notion of the web as being a world-wide publishing platform and having to support all environments no matter how basic.

The web is world-wide and needs to be more inclusive

Well, yes, sure, the world-wide-web is world-wide and anyone can be part of it. Hosting is not hard to find and creating a web site is pretty straightforward, too.

Web content needs to be as inclusive as possible, that’s common sense. If we enable more people to access our content, there is a higher chance we create something that is worthwhile.

There is no question at all that web content should not exclude people because of their ability. As web content started out as text, this didn’t take much extra effort: alternative text for images, descriptions for complex visuals and transcripts for videos. These, together with assistive technology and translation services, allowed for access. I won’t talk about accessibility here. This is about availability.

For those of us who started early on the web, these are basic principles you need to follow when you want to play. Which is why we always cry foul when someone violates them.

  • When someone creates a product that only works in a certain market
  • When something is only in one language or assumes people know how to work around that problem
  • When a product assumes a certain setup, operating system or browser
  • When a product relies on new technology without a fallback option for older browsers

These are all valid points and in a perfect world, not following them would be a problem for a publisher. We don’t live in a perfect world though, and the sad fact is that the web we defend with these ideas never made it. It died much earlier, when we forgot a basic question of publication: how do you make money with it?

If all you want is to publish something on the web, and your reward is that it will be available to people, you’re set. This attitude is either fierce altruism or stems from a position where you can afford to be generous.

For many, this isn’t what the web is about. Very early on, the web was sold as a gold mine. You create a web site and customers will come. Customers ready to pay for your services. Your web site is a 24/7 storefront that doesn’t have any overhead like a call centre would.

Amongst other problems, this marketing message led to the mess that the web has become. It isn’t about making your content available. It is about reaching people ready to pay for your product or at least cover your costs. Those aren’t hypothetical users all over the world. For most companies that rely on making money these are affluent audiences where the company itself is. There are, of course, exceptions to this and some companies like the NYC-based dating site Ignighter survived by realising their audience is coming from somewhere else. But these are scarce, and the safe bet or fast fail for most publishers was to reach people where they are.

You can’t make money if you spend too much, and thus you cut corners. Sure, it sounds interesting to have your product available world-wide. But it is more cost-efficient to stick to the markets you know and can bill people in. That’s why the world-wide-web we wanted to have is in essence a collection of smaller, local webs.

I’m moving from England to Germany. I’m amazed at how many people use local web products instead of those I use. Some of that is down to what’s on offer – the Netflix catalogue, for example, is much larger in the UK. But many are a success because of classic advertising on TV, in trains and newspapers.

As competition grew it wasn’t about creating valuable content and maintaining it any longer. It was about adding what you offer and finding shortcuts to get yourself found instead of your competition. And thus search engine optimisation came about and got funding and effort.

Of course it makes sense to have your product available in one form or another on all browsers and on enterprise and legacy environments. But is it worth the extra effort? It is when your starting aim is publishing content. But many web products now don’t start with content, they start as a platform and get products added in a CMS. And that’s when browser market share is the measure, not supporting everybody. A browser that works well on desktop and is also available on high-end smartphones is what people optimise for. Because that’s what every sales person tells you is the target audience, and they can show fancy numbers to prove it. That this seems short-sighted is understandable for us, but we also need to prove that it really is. In a measurable way, with real numbers of how much people lost by betting on one environment in the short term.

The sad fact we have to face now is that the web is not the main focus of people who want to offer content online any longer. Which seems strange, as apps are much harder to create and you are at the mercy of the store publishers.

Apple just announced that next year they will disallow templated and generated apps. This is a blow to a lot of smaller publishers who bet on iOS as the main channel to paying customers. No worry, though, they can easily create a web site and turn it into a PWA, right?

Well, maybe. But there is an interesting statistic in the Apple article:

According to 2017 data from Flurry, mobile browser usage dropped from 20 percent in 2013 to just 8 percent in 2016, with the rest of our time spent in apps, for example. They are our doorway to the web and the way we interact with services.

Of course, one stat can be debated, and the time we spend in apps doesn’t mean we also buy things in them or click on ads. A good web site may be used for 30 seconds and leave me a happy customer. However, new users are more likely to look for an app than enter or click a URL. The reason is that apps are advertised as more reliable and focused whereas the web is a mess.

On the surface for Apple, this isn’t about punishing smaller publishers. This is a move to clean up their store. A lot of these kinds of apps are either low quality or meant to look like successful ones. It reminds us of the time when SEO efforts did exactly the same thing. Companies created dozens of sites that looked great. They had custom-made content to attract Google and create inbound links to the site they got paid to promote.

The sad truth about the web and the fact that publishing is easy is that the cleaning step never happens. The best content hasn’t won on the web for years now. The most promoted content, the kind using the dirtiest tricks whilst skirting Google’s terms and conditions, did.

A lot of what is web content now is too heavy and too demanding for people coming to the web. People on mobile devices with very limited data traffic at their disposal.

That’s no problem though, right? We can slim down the web and make it available to these new markets. There is no shortage of talks at the moment about reaching the next billion users. By doing less, by using Service Workers to create offline-ready functionality. By concentrating on what browsers can do for us instead of simulating it with frameworks.

Maybe this is the solution. But the problem is that we often speak from a position of privilege that is much further removed from these users’ reality than we’d like to admit. Natalie Pistunovich has a great talk called Developing for the next Billion where she talks about her experiences releasing a product in Kenya, one of the big growth markets.

There is a lot of interesting information in there and some hard-to-swallow truths. One of them is that smartphones are not as available as we think they are – even low-end Androids. The second is that even when they are, people don’t have reliable data connections. Instead of going to web sites, people send apps to each other using Bluetooth. Or they offer and sell products in WhatsApp groups – as these tend to be exempt from the monthly data allowance. With Net Neutrality under attack in the US right now, this could be the same for us soon, too.

Of course these are extreme conditions. But the fact remains that closed systems like WhatsApp allow these people to sell online where the web failed. Not because of its nature and technologies, which are capable enough for that. But because of what it became over the years.

The web may be world-wide as a design idea, but the realities of connectivity and availability of web-ready hardware are a different story. Much like anything else, a few have both in abundance and squander it whilst others who could benefit the most don’t even know how to start.

That’s why the “reaching the next billion users with bleeding edge web technology” messages ring hollow. And it doesn’t help to tell us over and over again that it could be different. We have to make it happen.


Web Truths: Publishing on the web using web standards is easy and amazing

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to talk about the notion of the web as an open publication platform and HTML as the way to publish.

Publishing on the web using web standards is easy and amazing

Those of us who were lucky enough to be around when the web started growing cherish this concept. It was exciting to be able to open a text editor – any text editor – write some tags and see our text come to life in a browser. Adding an H1 here gave us a headline, adding a P described paragraphs and gave free margins. Adding an A allowed us to point to other resources or a target in the same page. Adding an HR gave us a horizontal ruler. The latter turned out to be a pain in the backside to style and was generally a bad idea. We learned by doing and we learned by trying things out.

We also smirked at the people who looked down on us as not being “real developers”. These people considered us crazy for relying on a browser we don’t control to render our output. Could be that we were, but at least we didn’t have to use a convoluted IDE and follow a slow build process to get a result. Our work was immediate and satisfaction was only a ctrl+s, shift-tab and ctrl-r (or F5) away.

We also loved the fact that all we needed was some hosting space and an FTP client to publish our work to the world.

If we disliked what our server companies did, we moved to the next one. After all, your domain is the thing you send to people, not your IP address. To get people to read what we wrote we published on mailing lists, in forums and on IRC. We linked to each other in webrings and banner exchanges. When Google came around we were the old guard that Google loved. Easy to index, with sensible TITLE and proper text content. Not like Flash pages that tried to trick Google with META keyword spamming or hidden text.

All the good things about HTML and publishing on your own server are still valid. It is a beautiful, free and open way to publish with nobody to answer to. You are your own marketing department and your server is your playground. That is, if you are vigilant and make sure that you have a lot of time to delete spam and defend yourself against attacks.

But here is the problem with telling this story over and over again. It doesn’t quite work any longer. And it is not as fascinating for people these days as it was for us. I remember the exhilaration of hearing a successful modem handshake and my HTML rendering in Netscape 3. I remember explaining to people that “when it sounds like everything is broken, then you are online”. People these days don’t expect to be offline. Often they only experience being offline when they are on the go and the mobile connection dies. Those lucky enough to have a home with a fat wireless connection never put any effort into reaching the content. They consume, much like we did when we watched TV.

The same happens to publication. It isn’t about writing the perfect article or blog post. It is about creating something fast that gets a lot of eyeballs. And if you want eyeballs, you keep publishing. Faster and faster, more and more. And as it is hard to create your own, you re-hash what other people have done instead and ride any success train of the day. There is no place for HTML or proper standards publication in this world.

I’m not saying at all that this is a good thing. But it is where we are. The web hasn’t won over Facebook, WhatsApp and other closed environments because it needs more effort to use. You still need to show interest and skill to build a web site. Writing three words and picking a GIF from a collection is easier.

The web we love and explain as the amazing publishing platform will survive. It will be a playground of enthusiasts and specialists. And old people. The disruptive platform of the past hasn’t become the mainstream. Everyone has the potential to be a creator and maker. But the marketing machine of the world wants us to be consumers instead. And the best way to keep people consuming is to lock them into a place that is ridiculously easy to publish in. More importantly, you need to give them the feeling of being part of a community of cool people.

The web – with its explosive growth and search algorithms tweaked to show the “new” instead of the “correct” – isn’t a group of cool people. It is hard work. There are no likes, kudos, claps or whatever on the web. Adding an immediate feedback channel is 90% removing horrible content and spam. The web isn’t a small group of cool people, and mainstream media makes pretty sure to tell us it is full of dangers and wrong information. Better stay where it is safe. In a controlled environment that has very enticing immediate feedback.

If you think this is dark, check out André Staltz’s “The Web began dying in 2014, here’s how”, where he paints a pretty bleak future for the mainstream web. And darn him, there is some pretty good evidence in there that, in the future, finding web content not published inside Google, Amazon or Facebook products will be close to impossible.

So, yes, publishing on the web is amazing. Nobody denies that. But we’re dealing with a new generation of people who grew up with the web and don’t care about it. It is there, like water is when you open the tap. You don’t think about how it gets there or what is involved until it stops coming out. And that might be the same for the web.

Instead of painting a romantic view of how the open web keeps prevailing, it may be time to tell people more about what their use of closed platforms does. How much they give away for the convenience of publishing something and harvesting some fake internet points. The format of the publication is no longer the point. We need to get people excited again about owning their data. For our sake and theirs.


Web Truths: the web is broken and backwards compatibility is holding us back

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of the web not moving fast enough for people and clinging on to seemingly terrible ideas from the past.

The web is broken and backwards compatibility is holding us back

This is the counter-argument to the one I discussed in the last post. Much like the exaggerated praise for the web and its distributed nature, it has been around for as long as I can remember online discussions.

There is no doubt that many things about the web are sub-optimal. It is also true that carrying the burden of never blocking out old content can slow us down. Yes, there are many features of CSS and JavaScript that in hindsight are terrible ideas. And it is true that by sticking to the bleeding edge, you have a lot more fun as a developer. First, there are more things to play with. And more modern environments also come with better tooling and deeper insights.

There is a problem with this though: as Calvin said it, the problem with the future is that it always turns into the present.

[Calvin and Hobbes strip]

So, whenever we embrace new and bleeding edge technology and damn the consequences, we create debt. Often arguments against backwards compatibility stem from actions like these. Some standards browsers have to keep supporting were rash decisions or based on the needs of one player in the W3C at the time.

Breaking changes in any new version of software aren’t ever fun for users and maintainers. This gets worse with how popular your software becomes and how many people use it. And I’d argue that the web is the most used piece of software out there.

Back in the day the argument against the web stack was always Flash. It seemed to be the right thing to use. There was 99% coverage in browsers. It had far more advanced tooling in comparison to Firebug (RIP). And there was a sort of built-in code protection. People couldn’t look at and steal your code without jumping over a few barriers first.

Turns out, Flash wasn’t the amazing platform these arguments made it out to be. In the Flash Games Post Mortem keynote at GDC, John Cooney of Kongregate talks about the story of indie gaming and Flash.

I love this talk. It shows that Flash and Web developers weren’t that different. Except Flash developers were more pragmatic about wanting to make money. And they had fewer delusions about their code lasting forever but knew that they had a short window of opportunity.

And this is what this argument boils down to. When it comes to betting on the web, there’s a lot of good faith and wanting to create something lasting involved. If that is your thing, it will make you more understanding of the failures of the web as a software platform. And it makes backwards compatibility a no-brainer, as this is what ensures the longevity of the web. When Flash changed and the support from the one company that owned it faded, a lot of developers felt forgotten. We now have the problem that a lot of creativity and a lot of work will go away as the platform to execute it is gone. Backwards compatibility ensures that isn’t the case for the web.

If your thing is to release something quick, make some money and know it will go away, the web isn’t as interesting. Even worse, those defending it can come across as evangelical or condescending. But there is nothing wrong with what you want to do. A lot of innovation stems from this approach, and the web can learn from its successes and failures. Much like HTML5 learned a lot from Flash. But that doesn’t mean your approach is better or that the web is broken – it just doesn’t fit your goals. Without the web, Flash wouldn’t have happened the way it has either. AIR proved that. The distribution model of the web works. And you can benefit from that without having to replace it.

There are of course some valid arguments for the abandonment of older ideas and non-support of broken platforms. Seeing how fast JavaScript moves, it seems detrimental to the cause to support older browsers. And some of the new ideas we have now solve important performance and security issues.

But all in all, the backwards compatibility of the web is what made it survive all the other platforms set out to replace it. And there will not be a time when we need to run the web in emulators because of it. That is what makes the argument that the web is broken and backwards compatibility is holding us back invalid. Of course we can do better, but are we also 100% sure that what we think is amazing now really stands the test of time?

There are more pressing matters to consider:

  • How can we ensure that despite backwards compatibility we get people to upgrade their environments? Seeing that malware targeting Windows XP is a huge success in 2017 is more than worrying.
  • How can we enhance older solutions to become better without breaking them? Chrome’s passive event listener extension to addEventListener seems to break backwards compatibility (see the feature-detection sketch after this list). Arrow functions are arguably only syntactic sugar (despite fixing “this”) but will always be just a syntax error for older browsers.
  • How can we make developers embrace newer solutions to old problems that have less side effects? It seems there is a certain point where we stop caring to keep up-to-date and use whatever worked in the past instead.
  • How can we make newer developers embrace the idea of the web as a platform without overloading them with borderline evangelical and philosophical messages? How can we make the web speak for itself?
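
Since the passive event listener point above is easy to misread, here is a minimal sketch (not from the original post) of how such a feature can be detected before it is relied on; the event and handler name are assumptions for illustration:

```js
// Feature-test support for passive event listeners. Older browsers expect a
// boolean as the third argument of addEventListener and would misread an object.
let supportsPassive = false;
try {
  const options = Object.defineProperty({}, 'passive', {
    get() {
      // This getter only runs in browsers that read the options object.
      supportsPassive = true;
      return true;
    }
  });
  window.addEventListener('test', null, options);
  window.removeEventListener('test', null, options);
} catch (err) {
  supportsPassive = false;
}

// Hypothetical handler – use whatever your app actually needs here.
const onTouchStart = () => console.log('touch started');

document.addEventListener(
  'touchstart',
  onTouchStart,
  supportsPassive ? { passive: true } : false
);
```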


Web Truths: The web is better than any other platform as it is backwards compatible and fault tolerant

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of the web as a publication platform and how we keep repeating its virtues that may not apply to a publisher audience.

The web is better than any other platform as it is backwards compatible and fault tolerant

This has been the mantra of any web standards fan for a very long time. The web gets a lot of praise as it is, to a degree, the only platform that has future-proofing built in. This isn’t a grandiose statement. We have proof. Web sites older than many of today’s engineers still work in the newest browsers and on the newest devices. Many are still online, whilst those that are gone are often still available in cached form. Both search engines and the fabulous Wayback Machine take care of that – whether you want it or not. Betting on the web and standards means you have a product consumable now and in the future.

This longevity of the web stems from a few basic principles. Openness, standardisation, fault tolerance and backwards compatibility.

Openness

Openness is the thing that makes the web great. You publish in the open. How your product is consumed depends on what the user can afford – both on a technical and a physical level. You don’t expect your users to have a certain device or browser. You can’t force your users to be able to see or overcome other physical barriers. But as you published in an open format, they can, for example, translate your web site with an online system to read it. They can also zoom into it or even use a screenreader to hear it when they can’t see.

One person’s benefit can be another’s annoyance, though. Not everybody wants to allow others to access and change their content to their needs. Even worse – be able to see and use their code. Clients have always asked us to “protect their content”. But they also wanted to reap the rewards of an open platform. It is our job to make both possible and often this means we need to find a consensus. If you want to dive into a messy debate about this, follow what’s happening around DRM and online video.

Standardisation

Standardisation gave us predictability. Before browsers agreed on standards, web development was a mess. Standards allowed us to predict how something should display. Thus we knew whether it was the browser’s fault or ours when things went wrong. Strictly speaking, standards weren’t necessary for the web to work. Font tags, center tags, table layouts and all kinds of other horrible ideas did an OK job. What standards allow us to do is to write quality code and make our lives easier. We don’t paint with HTML. Instead, we structure documents. We embed extra information and thus enable conversion into other formats. We use CSS to define the look and feel in one central location for thousands of documents.

The biggest beneficiaries of standards-driven development are developers. It is a matter of code quality. Standards-compliant code is easier to read, makes more sense and has a predictable outcome.

It also comes with lots of user benefits. A button element is keyboard, touch and mouse accessible and is available even to blind users. A DIV needs a lot of developer love to become an interactive element.
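
To make that “developer love” concrete, here is a hedged sketch of the extra work a DIV needs before it comes close to a native button; the selector and handler are assumptions for illustration:

```js
// A DIV pretending to be a button – everything below comes for free with <button>.
const fakeButton = document.querySelector('div.fake-button'); // assumed markup

fakeButton.setAttribute('role', 'button'); // announce it as a button to assistive tech
fakeButton.setAttribute('tabindex', '0');  // make it reachable with the keyboard

const activate = () => console.log('activated'); // placeholder action

fakeButton.addEventListener('click', activate);
fakeButton.addEventListener('keydown', (event) => {
  // A real <button> reacts to Enter and Space out of the box – a DIV does not.
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    activate();
  }
});
```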

But that doesn’t mean we need to have everything follow standards. If we had enforced that, the web wouldn’t be where it is now. Again, for better or worse. XHTML died because it was too restrictive. HTML5 and lenient parsers were necessary to compete with Flash and to move the web forward.

Backwards compatibility

Backwards compatibility is another big part of the web platform. We subscribed to the idea of older products being available in the future. That means we need to cater for old technology in newer browsers. Table layouts from long ago need to render as intended. There are even sites these days publishing in that format, like Hacker News. For browser makers, this is a real problem as it means maintaining a lot of old code. Code that not only has a diminishing use on the web, but is often even a security or performance issue. Still, we can’t break the web. Anything that becomes a “de facto standard” of web usage becomes a maintenance item. For a horror story on that, just look at all the things that can go in the head of a document. Most of these are non-standard, but people do rely on them.

Fault tolerance

Fault tolerance is a big one, too. From the very beginning, web standards like HTML and CSS have allowed for developer errors. In the HTML design principles, the “Priority of Constituencies” states it as such:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity

This idea is there to protect the user. A mistake made by a developer, or a third party piece of code like an ad causing a problem, should not block out users. The worrying part is that, in a world where we’re asked to deliver more in a shorter amount of time, this tolerance makes developers sloppy.

The web is great, but not simple to measure or monetise

What we have with the web is an open, distributed platform that grants the users all the rights to convert content to their needs. It makes it easy to publish content as it is forgiving to developer and publisher errors. This is the reason why it grew so fast.

Does this make it better than any other platform or does it make it different? Is longevity always the goal? Do we have to publish everything in the open?

There is no doubt that the web is great and was good for us. But I am getting less and less excited about what’s happening to it right now. Banging on and on about how great the web as a platform is doesn’t help with its problems.

It is hard to monetise something on the web when people either don’t want to pay or block your ads. And the fact that highly intrusive ads and trackers exist is not an excuse for that but a result of it. The more we block, the more aggressive advertising gets. I don’t know anyone who enjoys interstitials and popups. But they must work – or people wouldn’t use them.

The web is not in a good way. Sure, there is an artisanal, indie movement that creates great new and open ways to use it. But the mainstream web is terrible. It is bloated, boringly predictable and seems to try very hard to stay relevant whilst publishers get excited about Snapchat and other, more ephemeral platforms.

Even the father of the WWW is worried: Tim Berners-Lee on the future of the web: “The system is failing”.

If we love the web the way we are happy to say we do all the time, we need to find a solution for that. We can’t pretend everything is great because the platform is sturdy and people could publish in an accessible way. We need to ensure that the output of any way to publish on the web results in a great user experience.

The web isn’t the main target for publishers any longer and not the cool kid on the block. Social media lives on the web, but locks people in a very cleverly woven web of addiction and deceit. We need to concentrate more on what people publish on the web and how publishers manipulate content and users.

Parimal Satyal’s excellent Against a User Hostile Web is a great example of how you can convey this message and think further.

In a world of big numbers and fast turnaround, longevity isn’t a goal, it is a nice-to-have. We need to bring the web back to being the first publishing target, not a place to advertise your app or redirect to a social platform.


Web Truths: We need granular control over web APIs, not abstractions

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of offering new functionality to the web. Should we deliver low-level APIs to functionality to offer granular control? Or should we have abstractions that get people started faster? Or both?

In a perfect scenario, both is the obvious answer. We should have low-level APIs for those working “close to the metal”. And we should offer abstractions based on those APIs that allow for easier access and use.

In reality there is quite a disconnect between the two. There is no question that newer web standards learned a lot from abstractions. For example, jQuery influenced many additions to the DOM specification. When browsers finally got querySelector and classList we expected this to be the end of the need for abstractions. Except, it wasn’t and still isn’t. What abstractions also managed to do is to even out implementation bugs and offer terser syntax. Both of these things resonate well with developers. That’s why we have a whole group of developers that are happy to use an abstraction and trust it to do the right thing for them.
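
As a small illustration of that shift, here is roughly what a once-typical jQuery one-liner looks like with the standardised APIs, assuming a reasonably modern browser where NodeList supports forEach:

```js
// What once needed an abstraction – $('.menu li').addClass('active') –
// is now a one-liner with querySelectorAll and classList.
document.querySelectorAll('.menu li').forEach((item) => item.classList.add('active'));
```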

Before we had a standardised web, we had to develop to the whims of browser makers. With the emergence of standards this changed. Web standards were our safeguard. By following them we had a predictable way of debugging. We knew what browsers were supposed to do. Thus we knew when we made a mistake and when it was a bug in the platform. This worked well for a textual and forms-driven web. When HTML5 broke into the application space, web standards became much more complex. Add the larger browser and platform fragmentation, and working towards standards and on the web became much harder. It doesn’t help when some of the standards feel rushed. An API that returns an empty string, “maybe” or “probably” when asked if the current browser can play a video doesn’t fill you with confidence. For outsiders and beginners, web standards are no longer considered the “use this and it will work” approach. They seem convoluted in comparison with other offerings, and keeping up with all the changes seems like a lot of work for developers. Maybe too much work.
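
For reference, this is the API in question – a hedged snippet showing the kind of answer canPlayType gives (the codec string is just an example):

```js
// canPlayType answers with '', 'maybe' or 'probably' –
// hardly the confident yes/no developers hope for.
const video = document.createElement('video');
console.log(video.canPlayType('video/mp4; codecs="avc1.42E01E"')); // e.g. 'probably'
console.log(video.canPlayType('video/ogg'));                        // e.g. 'maybe' or ''
```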

Here’s what it boils down to:

  • Abstractions shield developers from a lot of implementation quirks and help them work on what they want to achieve instead
  • Low-level APIs allow for leaner solutions, but expect developers to know them, keep track of changes and to deal in a sensible way with non-supporting environments (see: Progressive Enhancement)

What do developers want?

As web developers in the know, we want to have granular control. We’ve been burnt too often by “magical” abstractions. We want to know what we use and see where it comes from. That way we can create a lot of different solutions and ensure that what we want to standardise works. We also want to be able to fix and replace parts of our solutions when a part becomes problematic. What we don’t want is to be unable to trace back where a certain issue comes from. We also want to ensure that new functionality of the web stays transparent and secure. We achieve this by creating smaller, specialised components that can get mixed and matched.

As new developers who haven’t gone through the pains of the browser wars or don’t need to know how browsers work, things look different. We want to code and reach our goal instead of learning about all the different parts along the way. We want to re-use what works and worry about our product instead. We’re not that bothered about the web as a platform and its future. For us, it is only one form factor to build against and to release products on. Much like iOS is, or gaming platforms are.

This is also where our market is going: we’re not paid to understand what we do – we’re expected to already know. We’re paid to create a viable product in the shortest amount of time and with the least effort.

The problem is that the track record of the web shows that we often have to start over whenever there is a new technology. And that instead of creating web-specific functionality we got caught up trying to emulate what other platforms did.

The best case in point here is offline functionality. When HTML5 became the thing and Flash was declared dead we needed a fast solution to offer offline content. AppCache was born, and it looked like a simple solution to the issue. As it turns out, once again what looked too good to be true wasn’t that great at all. A lot of functionality of AppCache was unreliable. In retrospect it also turned out to be more of a security issue than we anticipated.

There was too much “magic” going on that browsers did for us and we didn’t have enough insight as implementers. That’s how Service Workers came about. We wanted to do the right thing and offer a much more granular way of defining what browsers cache, where and when. And we wanted to give developers a chance to intercept network requests and act on them. This is a huge endeavour. In essence, we replicate the networking stack of a browser with an API. By now, Service Workers do much more than just offline functionality. They are also meant to deal with push notifications and app updates in the background.
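
As a rough idea of what that granularity looks like in practice, here is a minimal, hedged Service Worker sketch – the cache name and file list are assumptions, and a real worker would need more careful versioning and error handling:

```js
// sw.js – we decide explicitly what gets pre-cached and how requests are answered.
const CACHE = 'offline-v1'; // assumed cache name

self.addEventListener('install', (event) => {
  // Pre-cache a small set of assets while the worker installs.
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/index.html', '/styles.css']))
  );
});

self.addEventListener('fetch', (event) => {
  // Intercept every request: answer from the cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```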

This makes Service Workers tougher to work with, as they seem complex. Add to that the lack of support in Safari (which is now changing) and you lose a lot of developer enthusiasm.

There is more use in abstractions like Workbox, as they promise to keep you up to date whilst the changes in the spec are ironed out. Instead of a “here are all the lego bricks, build your own car” approach, they take a “so you want to build a car, here are some ways to do so” approach.

This is a good thing. Of course we need to define more granular and transparent standards and solutions to build the web on. But there is a reluctance in developers to take part in the definition phase and keep an eye on changes. We cannot expect everybody who wants to build for the web to care that much. That is not how the web grew – not everybody had to be a low-level engineer or know JavaScript. We should consider that the web has outgrown the time when everyone was deeply involved with the standards world.

We need to face the fact that the web has become much more complex than it used to be. We demand a lot from developers if we want them all to keep up to date with standards. Work that often isn’t as appreciated by employers or clients as shipping products is.

This isn’t good. This isn’t maintainable or future facing. And it shouldn’t have come to this. But it is a way of development we allowed to take over. Development has become a pretty exhausting and competitive environment. Deliver fast and in a short cadence. Move fast and break things. If you can re-use something, do it, don’t worry too much if you don’t know what it does or if it is secure to do so. If you don’t deliver it first to market someone else will.

This attitude is not healthy and we’re rubbing ourselves raw following it. It also ensures that diversity in our market is tough to achieve. It is an aggressive game that demands a lot of our time and an unhealthy amount of competitiveness.

We need to find a way to define what’s next on the web and make it available as soon as possible. Waiting for all players to support a new feature makes it hard for developers to use things in production.

Relying on abstractions seems to be the way things are going anyway. That means that as standards creators and browser makers we need to work more with abstraction developers. It seems less and less likely that people are ready to give up their time to follow specs as they change and work with functionality behind flags. Sure, at conferences and in our talks everyone gets excited. The hardware and OS configurations we have support all the cool, new things. But we need to get faster to market and reach those who aren’t already sold on our ideas.

So, the question isn’t about granular definition of specifications, small parts that work together or abstractions. It is about getting new and sensible, more secure and better performing solutions into production code quicker. And this means we need both. And abstractions should have a faster update cycle to incorporate new APIs under the hood. We should work on abstractions using standards, not patching them.


Web Truths: JavaScript can’t be trusted

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of JavaScript and how much we should rely on it.

Good vs. eval()

JavaScript is the love/hate topic of the “modern web”. People who have been around for a long time have learned the hard way not to blindly rely on it. People who just started don’t even know how you could turn it off or why it could be a problem.

This gives us an endless opportunity to rant in one direction or the other. The old guard points out that the new builds fragile products that demand too much of the users. The new guard points out that a web without JavaScript is neither fun for users nor easy to maintain. A lot of times, this argument is about developer convenience trumping sturdiness of the web. And we keep going in circles viewing it from both ends of the spectrum.

When we got JavaScript, it ran in the browser and made a lot of things possible that HTML and CSS alone could not do yet. We could react to things our users were doing without reloading the page. We could read out and react to the environment the script was running in. JavaScript was the interactive glue of the web. We could make less able browsers do things others could. We could create interaction models that HTML forms didn’t provide. And we could create a consistent experience across browsers and platforms.

And this is where the main issue came up: for many, the web isn’t meant to look and work the same everywhere. Instead it should give a working experience for all users and get fancier the more able the user agent is. By relying on JavaScript in our solutions we put up a barrier for the sake of control. This was especially bad when we didn’t deliver any functionality when JavaScript failed.

And JavaScript can fail in many ways. From non supporting environments to coding errors and connectivity issues – anything can break.

JavaScript isn’t like CSS or HTML. Both these building blocks of the web are fault-tolerant. This means when you write invalid HTML, the browser tries to fix it. If you use bleeding edge CSS in an old browser, it ignores it. Not so with JavaScript. Syntax errors or accessing an unknown object cause the interpreter to give up and say no. Even in the most capable environments, you can’t rely on JavaScript until the first script has loaded and run without a problem.
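
A tiny illustration of that difference – one typo (an assumption for this example) is enough to stop the whole script, where CSS would simply have skipped the offending line:

```js
// The CSS parser skips what it does not understand; the JavaScript engine stops
// the whole script at the first unrecoverable error.
document.getElementByID('app').textContent = 'Hello';
// TypeError: document.getElementByID is not a function
// (the correct name is getElementById) – nothing after this line runs.
```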

Stuart Langridge explains what can go wrong until your script does what you wanted it to do in his Everyone has JavaScript, right? site.

The main power of JavaScript was that it executes on the fly and in the browser. It didn’t need any compilation, it didn’t need any fancy development environment. That was a huge part of its success. Compared to other languages that have similar power, it is much more approachable. And if you’re not used to CSS syntax, a problem that seems complex in CSS looks easy to fix with JavaScript. JavaScript can do everything: it can load extra information, it can create HTML and it can change styles and images. It is the jack-of-all-trades of the web.

When something is easy to apply there is always a danger that people over-use it. It is tempting to see your well-connected development device as the world. And to expect the same speeds and computing power from your users. In this scenario, loading a few megabytes of JavaScript is not a high price to pay to allow for easy maintenance. When you’re on a metered, slow or unreliable connection or on a low-end device, this convenience can soon turn into frustration. This is an even bigger problem as it is hard – if not impossible – to detect these conditions.

So, yes, JavaScript is a fair-weather friend and can break in many ways. You may also block out a lot of users because you crave more control over things you aren’t meant to control.

There is a flip-side to this truth though. JavaScript has evolved from a scripting language in the browser to a development environment in its own right. The rise of Node and other server-side and embedded systems put JavaScript on the map as a key skill of our market.

JavaScript isn’t a client-side problem, it is a whole bigger set of offerings these days. I talked about this at the beginning of the year at ScriptConf in Austria.

Universal, isomorphic JavaScript – or whatever other buzzword we come up with – is the answer to the lack of fault tolerance of the language. We can run the JavaScript in a space we control, like the server or a build process, and render out plain HTML. We can use client-side JavaScript in a fair-weather situation. If that fails, we can rely on a JS-based API and routing mechanism to still give the user the content they came for.

The real, pragmatic approach to the flimsiness of JavaScript though is much easier: people use it anyways, let’s concentrate on keeping it safe and reliable.

Whilst we still complain that JavaScript breaks in the client, we have a huge group of developers who use JavaScript in everything. While we worry about support for a certain new browser feature, people rely on hundreds of package dependencies to build very basic functionality. Whilst we worry about DOM bugs, people use libraries with virtual DOMs and scripted routing instead of HTTP.

JavaScript is a given and a language that empowers and inspires hundreds of new developers each day. Our job as lovers of the web is not to tell people that they are wrong for using it in the first place. Our job is to allow these developers to be as creative with this new use of it as we were when a standardised DOM was still a dream to come.

We’re not here to call the shots. We’re here to embrace a new use of a valid technology and help with our knowledge to not repeat age-old mistakes. But we need to make sure that we learn in that process, too. It is far too easy to find glaring mistakes in new applications of old technology. It is much harder to help people solve new problems they face with the guidance of past experience. But it is much more rewarding, as it doesn’t create an “us old sages vs. those new cowboy-coders” world.

With more defined and controlled environments, JavaScript becomes much easier to trust. The thing we need to worry more about now is to ensure that it doesn’t get too complex.

Instead of worrying about the non-fault-tolerance of JavaScript, here are some other things to worry about:

  • How safe is it to rely on a loosely curated package repository for our projects? How can we make sure that in the dozens of NPM modules we use none of them is malware? How can we ensure people use packages safely, keep them up-to-date and not face disaster when one of them breaks?
  • How can we reap the rewards of abstractions without creating an unhealthy dependency? The vue.js of tomorrow may well be the jQuery UI of today. Yes, we create faster and more with an abstraction. But we miss out on understanding how what we create works. We don’t want to have a lot of developers and products that become ineffective once an abstraction is out of fashion.
  • How can we create a professional development environment for JavaScript without overwhelming new developers? Back in the day we needed a text editor and a browser. Now we need command-line knowledge, toolchains, unit tests, continuous integration and heavily customised editors. Each of these things makes sense, but can look daunting to a new developer.
  • How can we move the language itself ahead without relying on transpilation? JavaScript is finally standardised and new functionality should be used by anyone, not only in a compilation step.
  • How can we still reap the rewards of the just-in-time compilation of JavaScript when we use it like a compiled language?
  • How can our tooling help new and experienced developers without overwhelming one group and boring the other? Is linting the answer or is it expecting developers to be experts in browser tools?


Web truths that cause infinite loops

Every few months there is a drama happening in the blogging and social media scene. We seem to need that to keep ourselves occupied. It allows us to get distracted from the feeling that the people who pay us have no idea what we do. And keep praising us for things we are not proud of, cementing our impostor syndrome.

For an upcoming talk, I analysed the recurring themes that we get fired up about. I will post one on each of these over the next few weeks.

  • CSS is not real programming and broken
  • JavaScript can’t be trusted
  • We need granular control over web APIs, not abstractions
  • The web is better than any other platform as it is backwards compatible and fault tolerant
  • The web is broken and backwards compatibility is holding us back
  • Publishing on the web using web standards is easy and amazing
  • The web is world-wide and needs to be more inclusive
  • Mobile is the present and future

Each of these topics can spark thousands of reactions and dozens of blog posts. Many will get you a speaking slot at an event. They are all true, but none of them necessarily yields the amazing insight we expect it to. I’ll try to explain why they are endless loops and what we could do to get past discussing them over and over again. Maybe it is a good time to concentrate on solving other, new problems instead. And to recognise that a new generation of makers and developers may not be as excited about them as we are.

Yes, these will be my opinions and they may spark some discourse. That’s fine. You can disagree with me, and I promise to keep this to the point and civil. I’ve done this for a very long time, I’ve heard many people talk and discuss these. Hopefully my insights will hit a mark with some of you and make us reconsider rehashing the same old discussions over and over again.


Web truths: CSS is not real programming

This is part of the web truths series of posts.

Every few months there is an article claiming that CSS is not real programming. That CSS is too hard and broken. The language used in these can get quite creative.

There is truth to the fact that working with CSS is not traditional programming. There is also truth that CSS has its language faults and that some things are much harder than they should be. That is the case with any standardised language, though. CSS is a way to describe what an interface should be like. It is not the implementation of said interface in a programmatic fashion, like, for example, the Canvas API. That CSS is not like a traditional programming language is by design.

CSS is a great idea when it comes to creating an interface for something as complex and unknown as a user on the web. I talked about the difference between CSS and JavaScript in detail at GOTO Amsterdam, where I called it a decision between trust and control.

As a CSS developer, you trust the user agent (in most cases a browser) to do the right thing. You can’t control how it happens, but you also don’t need to sweat the details like performance, paint time and responsiveness. This ball is in the court of the user agent creators and the OS it runs on. And that is a great thing, as it allows those important things to be fixed in one place – where they get applied. If you create interfaces or animations using JavaScript, you have much more granular control. But you also need to ensure everything works. Using CSS means giving up control for the sake of having more time to build a responsive interface catering to the users’ needs. Users need to and can mess with your interface settings. CSS is OK with that. Pixel-perfect, defined interfaces are not.

CSS development isn’t programming in the traditional sense where you have loops, conditions and variables. CSS is going in that direction to a degree, and Sass paved the way. But the most needed skill in CSS is not syntax. It is understanding what interfaces you describe with it. And how to ensure that they are flexible enough that users can’t do things wrong and get locked out. You can avoid a lot of code when you understand HTML and use CSS to style it.

If you don’t consider an interface an agreement with your users, with various levels of fidelity depending on their technical platform, CSS isn’t for you. It is by design a forgiving language that doesn’t throw any errors when something can’t get applied. Thus it is amazing for progressive enhancement. You don’t even need to worry about adding a line of unsupported code, as the parser skips what it can’t apply. What causes a JavaScript parser to throw in the towel and give you an error message, the CSS parser shrugs off and moves on. That can feel odd for a developer – I for one like to know when something went wrong. But it frees you from needing to test on all possible user agents and put “if” statements around everything. Want to use a gradient on a button? Define a background color, then override it with a gradient in the next line. If the user agent can’t render gradients, you get a simpler, but still working button. And you didn’t need to worry about gradient support at all.
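
A sketch of that exact pattern – the class name and colours are assumptions, and older browsers simply keep the flat colour:

```css
/* Progressive enhancement in two lines: the fallback comes first,
   the gradient overrides it only where the browser understands it. */
.cta-button {
  background-color: #0b5fff;                            /* fallback for every browser */
  background-image: linear-gradient(#3d7bff, #0b5fff);  /* ignored where unsupported */
}
```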

A lot of “CSS is not real programming” arguments are a basic misunderstanding of what CSS is there to achieve. If you want full control over an interface and strive for pixel perfection – don’t use it. If you want to build an interface for an inclusive and diverse web, CSS is a great tool. Writing CSS is describing interfaces and needs empathy with the users. It is not about turning a Photoshop file into a web interface. It requires a different skillset and attitude of the maintainer and initial programmer than a backend language would.

In any case, belittling people who write CSS and considering them not real developers is arrogant nonsense. Especially when you don’t even spend the time to understand what CSS tries to achieve and how amazing it has become.

On the other hand, CSS is not and should not be the solution for everything. Yes, you can create pixels with drop shadows, but you also punish the browser rendering engine with this.

CSS is an integral part of the web to me and whilst the syntax is weird for someone coming from a different programming language, it has proven its worth over the years. It should not and will not go away for quite some time. So if you don’t like it, pair with someone who does. If your managers demand that you do it yourself, we don’t have a technical issue on our hands, but a project/staffing one.

Instead of having discussions if CSS is broken and needs to be replaced, it is healthier to have different discussions about CSS:

  • What CSS can do and where it isn’t enough
  • What CSS can do these days that needed other technology in the past and how to apply it
  • How to write CSS in a maintainable fashion
  • What can you do to make the life of the CSS developer easier?
  • What CSS hacks we used and why we should not use them anymore
  • What can we do to make CSS richer and better?
