
'Why is OSX better than Linux?'

In all other aspects, Linux and macOS are fully equivalent.



Someone who appeared to be a Linux fanboy recently asked a question.

'Why is OSX [sic] better than Linux?'

It didn't have to be an insincere question. There was no indication of the questioner's background or purpose in asking. It could have been a Linux fanboy - it sounded like it - but that's perhaps assuming too much. So let's just take the question seriously, which is probably the best approach anyway.

1. 'OS X' (or 'macOS' as it's called today) is built on a microkernel; Linux is not.

It's not known why Linus so detests the microkernel approach, but it sure is a shame.

What is 'microkernel'? It's an idea that gained acceptance at, amongst other places, Carnegie Mellon, where, amongst others, Avie Tevanian and Rick Rashid were studying and researching. Rashid went on to Microsoft, where he became a microkernel evangelist for Sir Bill, and Avie went to Redwood City, where he led development of NeXTSTEP for Steve Jobs.

Computers - at least the kind we're talking about - operate in at least two distinct modes. Let's call them 'user level' and 'privileged level' (or 'kernel level'). The 'user level' is, unsurprisingly, where most of the operations on behalf of end-users take place. This level is limited in its access to computer resources, specifically so that conflicts do not arise, and so the end-user cannot put the device in an unstable state (or worse). Code from one application, for example, cannot overwrite data belonging to another application. And so forth.

Neither can code from one application overwrite code from another application. In fact, no 'user land' application can ever 'see' the actual computer 'resources' - not the memory, not the disks, not the keyboard, not the mouse or trackpad. Not any of it.

User applications are written as if the 'client' (the application in question) actually does have access to these resources, but nothing could be further from the truth.

Using an intricate system of virtual memory - page tables, page faults, and thereby swapping - the system kernel takes access requests, translates what applications think are real memory addresses and sundry access points, and turns them into the 'real deal'. User-land code normally resides in one section of real memory, and that other code - the privileged code run by the 'kernel' - resides in another.
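A minimal sketch of what this protection means in practice, in Swift - the address below is made up, and the point is only that user-land code cannot simply reach for 'real' memory:

    // A made-up address that nothing has mapped for this process.
    // The attempted write triggers a page fault; the kernel finds no
    // legitimate mapping behind it and kills this one process
    // (EXC_BAD_ACCESS on macOS, SIGSEGV on Linux). The rest of the
    // system carries on as if nothing happened.
    let bogus = UnsafeMutablePointer<Int>(bitPattern: 0xdeadbeef)!
    bogus.pointee = 42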

The system is constructed this way to give the illusion of 'everything happening at once' - so-called preemptive multitasking, not the kind of 'cooperative' (ahem) multitasking found on earlier personal systems, where a single failure in any one application could bring the whole system down. It's the kernel of the system that now controls who does what - and it can kick out anything that doesn't behave, or simply ignore it.

'User mode' severely limits the possibilities. But, in 'privileged mode', there are no limitations. Code run in privileged mode can do anything. That code doesn't have to worry about fartsy 'virtual' addresses - everything is real, man! And whereas user land code can be preempted (interrupted) by privileged mode code if something seems fishy, nothing can really stop privileged mode code if something goes south.

The infamous BSODs ('blue screens of death') on Windows (and the equivalent on macOS, albeit rare) occur when the system quite literally no longer knows what to do and has to completely stop operations. Even IBM mainframes have been known to end up in similar situations (but perhaps only once per year). All you do in such a situation is - you guessed it - the 'three-finger salute': you just boot again.

BSODs (what the Linux people call 'kernel panics') also occur when device drivers run into trouble. Devices? Your computer is full of them. You have the mouse, or the trackpad, or, in the case of a mobile device, a screen for both input and output. You have 'secondary storage', which can be a hard disk drive (HDD) or a solid-state drive (SSD). You can have ports (and, with recent additions to the Apple hardware lineup, dongles - lots of them) which in turn offer access to yet more devices. (This is what David Maynor was referring to when he spoke of there being 'so many computers inside the computer'.) And what drives these devices? More code - code known as device drivers. Device drivers, like the kernel, must run in privileged mode.

But no man-made system is perfect, so the overall goal, for the sake of system stability, has to be reliability - that is: if errors occur, their effects must be limited as much as possible.



A lot of driver code doesn't actually deal in sensitive areas, yet it can still be flawed. Running some spurious extra routine that has nothing to do with those sensitive areas, such code may encounter an error (perhaps caused by another module) and bring the whole system to a screeching halt.

The microkernel approach to building systems is an attempt to use privileged mode only for code that truly must run in privileged mode. Unfettered access to the computer's memory, to its devices, its keyboard, its pointing devices, its ports? You need privileged mode. Anything else? Most likely you don't need it.

Driver writers also have a mechanism known as the deferred procedure call. As the drivers must heroically cooperate to make sure the host can maintain the illusion of everything happening at once, no driver call in privileged mode can take too long. There is no preemptive multitasking in privileged mode - you just run, dude! The drivers must always return control to the system scheduler - the part of the kernel that manages the multitasking - as soon as possible.

But perhaps not all driver tasks are completed before 'time runs out'? Enter the deferred procedure call. Additional tasks that do not require pausing the system's ordinary multitasking - do not require privileged mode access - can be 'deferred' until later and put in a queue much like any other user mode request.

[Remember: multitasking depends on the goodwill of the system's device drivers.]
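The real machinery lives inside the kernel, but the shape of the idea is simple enough to sketch in a few lines of Swift - a toy queue, not any actual driver API:

    // Toy 'DPC' queue: work that doesn't need privileged mode is parked here.
    var deferredWork: [() -> Void] = []

    // The time-critical part: grab the data, queue the rest, get out fast.
    func interruptHandler(_ byte: UInt8) {
        deferredWork.append {
            // The deferred part, run later at ordinary priority.
            print("processing byte \(byte)")
        }
    }

    // The scheduler drains the queue whenever there's time to spare.
    func drainDeferredWork() {
        while !deferredWork.isEmpty {
            deferredWork.removeFirst()()
        }
    }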

It's common sense - as plain as day - that the less time you spend in kernel mode, the smaller the risk that you'll crash the system.

Microsoft's NT began with a microkernel architecture, just as Apple's macOS has one today. David Cutler understood all too well the importance of system stability.

But Linus Torvalds, of Helsinki, Finland, has said he doesn't like the microkernel approach. It's not known if he ever explained why, beyond something like his trademark:

1. No.
2. No.
3. No.
4. See above.

Kernel panics on macOS are extremely rare because of the system's microkernel architecture - you take as few risks as possible. You don't go looking for trouble.

It doesn't matter if Linux gets more stable over time: ceteris paribus, the microkernel system will always be more stable.

Which is why Linus' opposition to the microkernel is so perplexing.

2. The user interface.

The original Macintosh user interface was inspired by the work of Alan Kay, who ran the Learning Research Group at the Xerox Palo Alto Research Center. It was Kay's vision that led to object orientation - he coined the term - and to the programming language Smalltalk, which he also invented. Brad Cox made a 'compiled' (very fast) version of Smalltalk which eventually became known as Objective-C. Steve Jobs bought the rights to Objective-C from Cox in 1995.

But again: the original Macintosh also used a similar paradigm.

And Microsoft, who were given early Macintosh prototypes to help them develop their spreadsheet application Multiplan, spent a lot of their time studying those prototype Macintoshes and eventually copied the Macintosh design for their own Windows - and yet, for some unknown reason, they missed what must be the most salient point: the object orientation.

This can be most easily seen in the menu bar (or lack thereof). On the Macintosh, the menu bar resides separately - it's not connected to an application window (nor should it be).

The eager Microsofties put the menu bar on the application window itself - and thereby crippled their own system, as early as December 1985.

And the Linux GUIs, GNOME and KDE? True to form, and in an effort to be as accommodating as possible to their Windows friends, they did the same - partly out of misplaced consideration, and undoubtedly partly out of sheer ignorance.

Having a menu bar separate from an application document window is not merely an aesthetic issue (although that's possibly what the Redmond thieves thought).

√ It means you can open more than one window at once
√ It means you can close all windows at once
√ It means you can save all documents at once

The very concept of the 'document window' is meaningless on Windows.

√ It means you can save gobs of memory. An application's code is reentrant: it can be used by any number of documents. On Windows, to edit two documents with the same application, you have to load the same application code twice.



Let's take it a step further. macOS inherits NeXT's brilliant 'document controller' class (NSDocumentController). This class is so smart that it can keep track of all open documents on behalf of an application, remember their names and paths, cycle through those documents to prompt the user to save changes, and so forth. Such a concept doesn't exist on Windows or Linux.
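A minimal sketch of what that looks like from Swift (AppKit, inside a running document-based application): the shared controller already knows every open document, and a single call asks it to close them all, prompting for unsaved changes along the way.

    import AppKit

    let controller = NSDocumentController.shared

    // The controller tracks every open document: name, location, edited state.
    for document in controller.documents {
        print(document.displayName ?? "(untitled)", document.fileURL?.path ?? "(unsaved)")
    }

    // Close everything at once; the controller walks the documents
    // and prompts the user to save changes where needed.
    controller.closeAllDocuments(withDelegate: nil, didCloseAllSelector: nil, contextInfo: nil)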

The macOS document controller - even now, without the underlying HFS+ filesystem - can keep track of document movements: it knows if the names or paths to the documents on disk have changed, and it follows along, seamlessly - something else that's alien to Windows and Linux.

Windows doesn't have a document controller. Linux doesn't either. They can't - they have no document classes inheriting from a base document class (NSDocument) for such a controller to interact with.

The Windows API is shambolic. It's easy to see which parts were written by Microsoft and which parts were written by IBM. The IBM code is sober and intelligent. The Microsoft code looks like it was designed by the Marx Brothers. Consider the prospect of creating a dialog - a type of window ordinarily subordinate to an ordinary window. (macOS calls this a 'panel' - NSPanel.)

Generally there are two types of dialogs.

√ Those that are 'modal' - they block input traffic to anything else in the application.
√ Those that 'float' - input can go anywhere.

How does the Windows API deal with these two distinct types of dialog? (Try to guess if it was Microsoft or IBM who designed this bit.)

√ Modal dialogs: call the API DialogBox.
√ Floaters: call the API CreateDialog.

Now on the macOS side: the dialog is a class called NSPanel. NSPanel inherits from NSWindow, meaning it borrows a lot of code from its base class. (No such thing is possible on Windows or Linux.) You want modal? No problem: it's an attribute of an existing class, not a separate entity unto itself.
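A short Swift sketch of that point (AppKit, assuming an ordinary application is running): one class, and whether it floats or runs modally is simply how you use it.

    import AppKit

    // One class for both cases; NSPanel inherits everything from NSWindow.
    let panel = NSPanel(contentRect: NSRect(x: 0, y: 0, width: 320, height: 160),
                        styleMask: [.titled, .closable],
                        backing: .buffered,
                        defer: true)

    // Want a floater? Set an attribute and show it.
    panel.isFloatingPanel = true
    panel.orderFront(nil)

    // Want it modal instead? Run the very same object modally.
    // NSApp.runModal(for: panel)   // ... and later: NSApp.stopModal()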

Or let's talk sheets. Sheets are crucial to macOS, but cannot even exist on Windows or Linux. A sheet is a document-modal panel that applies to a specific document window. (Windows and Linux don't even have document windows.) Visually, the sheet appears to come out of the document window - from under its title bar. It follows the document window around wherever it goes.
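A sketch of a sheet in Swift (AppKit), attached here to a hypothetical document window: only that one window is blocked while the sheet is up; the rest of the application keeps taking input.

    import AppKit

    func confirmDiscard(on documentWindow: NSWindow) {
        let alert = NSAlert()
        alert.messageText = "Discard unsaved changes?"
        alert.addButton(withTitle: "Discard")
        alert.addButton(withTitle: "Cancel")

        // The sheet slides out from under the document window's title bar,
        // follows that window around, and blocks input to it alone.
        alert.beginSheetModal(for: documentWindow) { response in
            if response == .alertFirstButtonReturn {
                // discard the changes for this document only
            }
        }
    }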

And let's talk as little about the actual APIs as possible. The Windows API can in many cases be better than the Linux GUI APIs. Try creating a tableview ('listview' in Windows parlance) on KDE, for example. KDE wants to use C++ as much as possible. Without getting too deeply into the steamy morass of that programming language, let's just accept the fact that the language requires startup code that precedes code run at the official 'entry point', that it uses so-called 'constructors' and 'destructors' which must be implemented even if they're not needed, and that KDE, when last visited, had dedicated startup code for tableviews with one, two, three, four, five, six, seven, eight, nine, and ten columns of data (but no more). You can't dynamically decide how many columns you want - you have to call special constructor code for each. And, if you want more than ten columns, you're shit out of luck.
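For contrast, a Swift sketch of the AppKit side: the number of columns in an NSTableView is decided at runtime, with no per-column-count constructor in sight.

    import AppKit

    let tableView = NSTableView(frame: .zero)

    let columnCount = 14   // any number, decided at runtime
    for i in 0..<columnCount {
        let column = NSTableColumn(identifier: NSUserInterfaceItemIdentifier("col\(i)"))
        column.title = "Column \(i + 1)"
        tableView.addTableColumn(column)
    }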

[This isn't even getting into NIBs, the invention that makes macOS break away from the field. A lot of the above isn't 'programmatic' in the traditional sense anyway: it's 'freeze-dried' (as effective code) into the NIB itself.]
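For what it's worth, a Swift sketch of 'thawing' those freeze-dried objects at runtime - the nib name here is hypothetical:

    import AppKit

    // 'MyDocument.nib' stands in for whatever nib the application ships.
    var topLevelObjects: NSArray?
    let loaded = Bundle.main.loadNibNamed("MyDocument", owner: nil, topLevelObjects: &topLevelObjects)
    print(loaded ? "nib loaded" : "nib not found")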

OK, let's talk graphics. Mark Shuttleworth, the South African cosmonaut who launched Ubuntu, once went public with his desire to make his own GUI as nice as Apple's - revealing that he really believed it was all about the attractiveness of the icons - that they had to be, to quote Steve Jobs, 'lick-able'. So check and compare: how are Linux icons doing?

But it's not down to the icons anyway. Beauty is in how things work, said the same Steve. On Windows and Linux, you can't open more than one document with the same application code image. You can't close all your document windows at once. You can't opt to close them all - even if you had more than one - and be prompted, document by document, to save changes; that's impossible on those platforms. You can't find a document on disk if it moves. You go around wasting precious memory all over the place, loading and reloading the same reentrant code for each document you want to view or edit. Your API is disorganised at best, batshit insane at worst.

And you have an OS kernel that will crash from time to time.

But yes, in all other aspects, Linux and macOS are fully equivalent.

About Rixstep

Stockholm/London-based Rixstep are a constellation of programmers and support staff from Radsoft Laboratories who tired of Windows vulnerabilities, Linux driver issues, and cursing x86 hardware all day long. Rixstep have many years of experience behind their efforts, with teaching and consulting credentials from the likes of British Aerospace, General Electric, Lockheed Martin, Lloyds TSB, SAAB Defence Systems, British Broadcasting Corporation, Barclays Bank, IBM, Microsoft, and Sony/Ericsson.

Rixstep and Radsoft products are or have been in use by Sweden's Royal Mail, Sony/Ericsson, the US Department of Defense, the offices of the US Supreme Court, the Government of Western Australia, the German Federal Police, Verizon Wireless, Los Alamos National Laboratory, Microsoft Corporation, the New York Times, Apple Inc, Oxford University, and hundreds of research institutes around the globe. See here.

All Content and Software Copyright © Rixstep. All Rights Reserved.
