
Parsing the difference between the Internet and the Web according to Alan Kay

Kay thinks the Internet was built better than the Web. Is he right?

Stack Exchange
This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

What did digital pioneer Alan Kay mean by, “The Internet was done so well, but the Web, in comparison, is a joke. It was done by amateurs”?

When Kay speaks, programmers listen. But like anyone who puts forward an opinion, he opens himself up to being misinterpreted. That was the case last July in an interview with the long-running *Dr. Dobb's Journal*. At one point, the object-oriented programming pioneer chimed in:

The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

Stack Exchange user kalaracey can't have been the only programmer confused about the meaning of Kay's quote, but he's the one who asked about it. Several devs answered.

See the original question here.


Karl Bielefeldt answers (41 votes):

Kay actually elaborates on that very topic on the second page of the interview. It's not the technical shortcomings of the protocols he's lamenting; it's the vision of the Web browser designers. As he put it:

You want it to be a mini-operating system, and the people who did the browser mistook it as an application.

He gives specific examples, like the Wikipedia page on a programming language being unable to execute any example programs in that language, and the lack of WYSIWYG editing, even though it was available in desktop applications long before the Web existed. Twenty-three years later, we're only just starting to work around the limitations imposed by those original browser design decisions.

Related: "What is JavaScript, really?"

Kay doesn’t get how messy lower-level protocols are

Chris Adams answers (8 votes):

I read this as Kay being unfamiliar enough with the lower-level protocols to assume they're significantly cleaner than the higher-level Web. The "designed by professionals" era he's talking about still had major problems with security (spoofing is still too easy), reliability, and performance, which is why new work is still being done to tune everything for high-speed or high-packet-loss links. Go back just a little further and hostnames were resolved by searching a text file that people had to distribute!

Both are complex, heterogeneous systems with significant backwards-compatibility challenges any time you want to fix a wart. It's easy to spot problems and hard to fix them, and as the array of failed competitors to both shows, it's surprisingly hard to design something equivalent without going through the same learning curve.

As a biologist might tell an intelligent design proponent, if you look at one and see genius design, you're not looking closely enough.

There’s some truth to the claim

Stephen C answers (62 votes):

In a sense, he was right. The original (pre-spec) versions of HTML, HTTP, and URLs were designed by amateurs, not standards people. And there are aspects of the respective designs, and of the subsequent (original) specs, that are (to put it politely) not as good as they could have been. For example:

  • HTML did not separate structure/content from presentation, and it has required a series of revisions—and extra specs (CSS)—to remedy this.
  • HTTP 1.0 was very inefficient, requiring a fresh TCP connection for each "document" fetched (a sketch after this list illustrates the per-request cost).
  • The URL spec was actually an attempt to reverse engineer a specification for something that was essentially ad hoc and inconsistent. There are still holes in the definition of schemes, and the syntax rules for URLs (e.g., what needs to be escaped where) are baroque (a second sketch after the list shows how escaping differs by component).
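
To make the HTTP 1.0 point concrete, here is a minimal sketch, not from the original answer, using only Python's standard library. It issues each request over a brand-new TCP connection, which is effectively what HTTP/1.0 forced clients to do; the host `example.com` and the paths are placeholders.

```python
import socket

def fetch(path: str, host: str = "example.com") -> bytes:
    """Fetch one document over its own TCP connection, HTTP/1.0 style."""
    with socket.create_connection((host, 80)) as sock:
        request = (
            f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))
        chunks = []
        # The server closes the connection after the response, so read to EOF.
        while data := sock.recv(4096):
            chunks.append(data)
        return b"".join(chunks)

# Every document pays for TCP setup and teardown again; HTTP/1.1
# persistent connections (and later HTTP/2 multiplexing) exist largely
# to eliminate this per-request overhead.
for path in ("/", "/index.html"):
    print(path, len(fetch(path)))
```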
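
And as a second sketch, again an illustration rather than anything from the answer, Python's `urllib.parse` shows how the escaping rules depend on which URL component a character lands in:

```python
from urllib.parse import quote, urlencode

raw = "a/b c?d&e"

# Inside a single path segment, '/', '?', and '&' must all be percent-encoded.
print(quote(raw, safe=""))    # a%2Fb%20c%3Fd%26e

# In a query string, spaces become '+' and the same characters are escaped
# under different rules.
print(urlencode({"q": raw}))  # q=a%2Fb+c%3Fd%26e

# quote() with its default safe='/' leaves the slash alone, which is right
# for whole paths but wrong for a single segment.
print(quote(raw))             # a/b%20c%3Fd%26e
```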

And if there had been more "professional" standards people involved earlier on, many of these missteps might not have been made. (Of course, we will never know.)

However, the Web has succeeded magnificently despite these things. And all credit should go to the people who made it happen. Whether or not they were "amateurs" at the time, they are definitely not amateurs now.

Find more answers or leave your own at the original post. See more Q&A like this at Programmers, a site for conceptual programming questions at Stack Exchange. And of course, feel free to log in and ask your own.
