Much is being made of this, and of its cousin, cyberwarfare.
Most experts are raising a stink about our (meaning the West's) vulnerabilities. Sophisticated attacks originating from Ukraine, China, and Iran/Syria are cited as proof that we have enemies willing to use technology against us. Various other experts point out serious technical and institutional weaknesses.
On the other hand, security blogger Bruce Schneier downplays the risks.
This is an ideal controversy for me. It is not of the manufactured cable news variety, and it is something I know a good deal about.
On the one hand are the folks whose job it is to protect us. Generally, they are tunneled into a specific threat, and from the outside their actions often seem unbalanced. The TSA (the guys who hassle grandmothers and children at airports) is a good example, but I have been through dozens of cases from the inside, from missile gaps to Shi'ite bioterrorism. It is clear to me that exaggeration is the only way to get Congressional funding, and so it is common, almost necessary.
On the other hand, Schneier's position is to poke fun at the imbalances, a worthwhile role. He is a responsible journalist, but that role really does color his views.
What I think is different from what either side presents.
Hacking in the sense of breaking in will never go away. The situation is asymmetric (meaning that it will always cost more to protect a system than to break in), but the costs of protecting are far less than the costs of theft or induced havoc. Everyone is at the same level of risk, more or less, so the market will adjust to an escalating effort of attack and response.
I do expect a 9-11 scale cyberattack of this type — from a breach — in the next few years, with some tragic loss. But let's be clear about the scale of 9-11: the direct toll was fewer than 3,500 lives and less than $10B in property damage. Compare that, for example, to nearly 50,000 deaths a year in the US from car accidents (or the roughly 440,000 largely voluntary deaths a year in the US from tobacco), and to as much as $12 trillion lost in a self-inflicted financial crisis.
The real damage was to our psyche, and lay in the profound costs of our inappropriate responses to 9-11. Supposing that we can manage our collective souls after an event, we will recover handily enough, and the grace with which we do so should be enough to deter a similar attack.
Destroying infrastructure by software is a different weakness. Bad guys could take down a power grid or one of the financial networks, and this is also likely to happen. Militaries around the world work on this all the time, and it has been used to supplement combat action from time to time. But I do not expect much of this because the aggressor can be easily traced and punished. Deterrence will work here.
That said, I think we do have two significant weaknesses, and I would like to see focused government action to address them. We have to be aware that the simple ‘lock it down’ approach won't cover us.
The first threat is simple to state: our defenses focus on computers, and on computers as they interact with networks. But most computing devices are not computers in the usual sense. Chips are in printers, phones, toasters, and cars; they are in almost everything. Even an RFID chip can be compromised, and there are some scary demonstrations of how easy this is and how hard it is to detect and prevent.
This should be simple: a job for the NSA (in the US) to provide expertise, and for NIST to provide security standards and validation for the hundreds of billions of these devices that will snow all over our lives.
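To make concrete the kind of validation such standards might require, here is a minimal sketch of an embedded device refusing a firmware update unless it verifies against a vendor-supplied authentication tag. Everything here is my own illustration, not any actual NIST scheme: real standards would use asymmetric signatures rather than a shared key, but the shape of the check is the same.

```python
import hashlib
import hmac

# Hypothetical shared key provisioned at manufacture (an assumption for
# this sketch; real schemes would use public-key signatures).
VENDOR_KEY = b"shared-secret-provisioned-at-manufacture"

def firmware_tag(image: bytes) -> bytes:
    """Compute the authentication tag the vendor ships with the image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def accept_update(image: bytes, tag: bytes) -> bool:
    """Install only images whose tag verifies (constant-time compare)."""
    return hmac.compare_digest(firmware_tag(image), tag)

good = b"\x01\x02\x03 legitimate firmware"
tampered = good + b"\x00backdoor"

tag = firmware_tag(good)
print(accept_update(good, tag))      # True
print(accept_update(tampered, tag))  # False
```

The point is not the cryptography, which is routine, but that the check has to live in every toaster-grade chip, which is exactly where expertise and validation programs are missing.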
A more pernicious threat can be described as an intruder subtly changing a working system so that it is not quite right. This 'breaking in for effect' can happen at a number of levels: after installation, of course, but systems can also be subverted when they are built, by programming tools that tweak things surreptitiously. We saw a simple example of this with Stuxnet, a worm that sought out specific industrial controllers and perturbed their performance.
I won't describe much about this, but there is some strong new mathematics on how to accomplish it, and some work (not in the intelligence sector, alas) on how to detect and autocorrect these functional drifts.
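To give a toy picture of what detecting "functional drift" might look like, here is a sketch of my own (not the work alluded to above): compare a stream of measurements from a running system against a known-good baseline and flag when the running mean strays beyond a statistical tolerance. A Stuxnet-style perturbation is subtle at each step but systematic, which is exactly what this kind of check catches.

```python
import statistics

def drifted(baseline: list[float], observed: list[float],
            tolerance_sigmas: float = 3.0) -> bool:
    """Flag drift when the observed mean strays more than
    tolerance_sigmas baseline standard deviations from the
    baseline mean. A deliberately simple illustration."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(observed) - mu) > tolerance_sigmas * sigma

# Hypothetical spin-rate readings: nominal jitter vs. a subtle,
# systematic shift an intruder might induce.
nominal = [1000.0, 1001.0, 999.0, 1000.5, 999.5, 1000.2, 999.8]
perturbed = [1004.0, 1005.0, 1004.5, 1005.5, 1004.2]

print(drifted(nominal, nominal[:5]))  # False: within normal jitter
print(drifted(nominal, perturbed))    # True: small but systematic shift
```

The real detection work is far more sophisticated, but even this crude version shows why the defender needs an independent, trusted baseline: if the monitoring itself runs through compromised tooling, the drift goes unseen.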
This is where our real risks come from. What if you could not trust any automated process? None at all?