If we can’t trust Kaspersky, should we trust Microsoft?

In the current environment of suspicion, what we need is a global convention on software safety.

Eugene Kaspersky, Chief Executive of Russia's Kaspersky Lab, founded the antivirus company in Moscow in 1997 [Reuters/Maxim Shemetov]

The past 10 years have made it clear that the internet – both the software that powers it and the software that runs on top of it – is fair game for attackers. The past five years have made it clear that nobody has internalised this message as well as the global intelligence community. The Snowden leaks pulled back the curtain on massive Five Eyes efforts in this regard, from quiet deals with internet behemoths to indiscriminate grab-all operations like harvesting still images from Yahoo webcam chats.

In response to these revelations, a bunch of us predicted a creeping fragmentation of the internet, as more people became acutely aware of their dependence on a single country for all their software and digital services. Two incidents in the last two months have caused these thoughts to resurface: the NotPetya worm, and the accusations against the Russian antivirus giant Kaspersky Lab.

To quickly recap NotPetya: a mundane accounting package called MeDoc, widely used in Ukraine, was abused to infect victims. Worms and viruses are a dime a dozen, but a few things made NotPetya stand out. For starters, it spread using an exploit repurposed from an NSA leak. It seemed to target Ukraine quite specifically, and it had tangible side effects in the real world (the shipping giant Maersk reported losses of up to $200m due to NotPetya).

What interested us most about NotPetya, however, was its infection vector. Having compromised MeDoc's wide-open servers, the attackers proceeded to build a malicious update for the accounting package. This update was then automatically downloaded and applied by thousands of clients. Auto-updates are common at this point and considered good security hygiene, so it's an interesting twist when the update mechanism itself becomes the attack vector.

The Kaspersky saga also touched on "evil updates", if tangentially. While many in the US intelligence community have long frowned on the Russian antivirus company's growing popularity in the US, Kaspersky has routinely performed well enough to gain considerable market share. Matters came to a head in September this year, when the US Department of Homeland Security (DHS) issued a directive ordering all US government departments to remove Kaspersky software from their computers. In the days that followed, a more intriguing narrative emerged.


According to various sources, an NSA employee who was working on exploitation-and-attack tooling took some of his work home, where his home computer (running Kaspersky software) proceeded to slurp up his “tagged” files.

Like most things infosec, this has kicked off a distracting sub-drama involving Israeli, Russian and US cyberspooks. Kaspersky's defenders have come out calling the claims outrageous; its detractors claim that the company's collusion with Russian intelligence is obvious; and some timid voices have remained non-committal while waiting for more proof. We are going to ignore this part of the drama completely.

What we do care about, though, is the possibility that updates can be abused to further nation-state interests. The US claim – that Russian intelligence had Kaspersky push updates selectively to some of its users, turning the software into a massive, distributed spying tool – is completely feasible from a technical standpoint. Kaspersky has responded by publishing a plan for improved transparency, which may or may not maintain its standing with the general public.

But that ignores the obvious fact that, as with any software operating at that level, a "non-malicious" system is just one update away from being "malicious". The anti-Kasperskians are quick to point out that even if Kaspersky has been innocent until now, it could well turn malicious tomorrow (under pressure from the GRU), and that any assurances given by the company depend on it being "good" rather than on any technical controls.

For us, as relative non-combatants in this war, the irony is biting. The same (mostly American) voices who are quick to float the idea of the GRU co-opting Russian companies claim that US-based companies would never succumb to pressure from the US intelligence community, because of the threat to their industry position should it come out. No technical control differs between the two cases; US defenders are simply betting that US intelligence agencies will do the "right thing", not only today but far into the future. This naturally leads to an important question: do the same rules apply if the US is officially (or unofficially) at war with another nation?

During World War II, Germany nationalised British assets located in Germany, and Great Britain did likewise. It makes perfect sense and will probably happen during future conflicts, too. But computers and the internet changed this. In a fictitious war between the US and Germany, the Germans could take over every Microsoft campus in the country, but it wouldn’t protect their Windows machines from a single malicious update propagated from the company headquarters in Redmond, Washington state.

The more you think about this, the scarier it gets. A single malicious update pushed from Microsoft could cripple almost every government worldwide. What could prevent this? Certainly not technical controls (unless you build a national operating system, as North Korea did).

This situation is without precedent. That a small number of vendors have the capacity to remotely shut down government infrastructure, or vacuum up secret documents, is almost too scary to wrap your head around. And that’s without pondering how likely they are to be pressured by their governments. In the face of future conflict, is the first step going to be disabling auto-updates for software from that country?

This bodes ill for us all. The internet is healthier when everyone auto-updates. When ecosystems delay patching, we are all worse off. When patching is painful, botnet malware like Mirai takes out innocent netizens with 620 Gbit/s of traffic. Even just the possibility leads us to a dark place. South Korea owns about 30 percent of the phone market in the US (and supplies components for almost all of them). Chinese factories build the hardware and ship the firmware in devices we rely on daily. Like it or not, we are all dependent on these countries behaving as good international citizens, but we have very little in the way of a carrot or a stick to encourage "good behaviour".


It gets even worse for smaller countries. A type of mutually assured technology destruction might exist between China and the US, but what happens when you are South Africa? You don’t have a dog in that fight. You shovel millions and millions of dollars to foreign corporations and you hope like hell that it’s never held against you. South Africa doesn’t have the bargaining power to enforce good behaviour, and neither does Argentina or Spain, but together, we may.

An agreement could be drawn up among all participating countries, in which each commits not to use its influence over local software companies to negatively affect other signatories. A country found violating this principle would risk having all software produced within its borders blocked by every member country. In this way, any intelligence agency that seeks to abuse its influence over a single company's software puts its country's entire software industry at risk, creating a shared stick that keeps everyone safer.

This clearly isn't a silver bullet. An intelligence agency may still break into software companies to backdoor their software, and probably will; it just can't do so with the company's cooperation. Member countries would rely on a central arbitrator (like the International Court of Justice) to field cases and determine whether intelligence machinations were carried out with or without the consent of the software company. Like the Geneva Conventions, the agreement would remain enforceable during times of conflict or war.

Software companies have grown rich by selling to countries all over the world. Software (and the internet) has become a massive shared resource on which countries the world over depend. Even if they do not produce enough globally distributed software to have a seat at the table, all countries deserve the comfort of knowing that the software they purchase won't be used against them. The case against Kaspersky makes it clear that the US acknowledges this as a credible threat and is taking steps to protect itself. A global agreement would protect the rest of us, too.

The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial policy.