Why We’re So Vulnerable
Article Courtesy of: MIT Technology Review
An expert in U.S. national cybersecurity research and policy says the next generation of technology must have data security built in from the very start.
In an age of continuing electronic breaches and rising geopolitical tensions over cyber-espionage, the White House is working on a national cybersecurity strategy that’s expected in early 2016.
Helping to draft that strategy is Greg Shannon, chief scientist at Carnegie Mellon University's Software Engineering Institute, now on leave to serve as assistant director for cybersecurity strategy at the White House Office of Science and Technology Policy.
In an interview with MIT Technology Review senior writer David Talbot, Shannon explained that dealing with today’s frequent breaches and espionage threats—which have affected federal agencies as well as businesses and individuals—requires fundamentally new approaches to creating all kinds of software. Fixing the infrastructure for good may take two decades.
Cybersecurity has long been a serious worry. Have recent events really changed the game?
Just consider the attack on Sony: it was a watershed event. The scale, scope, and cost were enormous. And it revealed how tightly cybersecurity and our economy are interrelated, and that the health of the economy is now potentially at stake.
Why are huge breaches like these happening? Are the billions of dollars spent on new security technologies in recent years not working?
It's more that the incentives to conduct malicious cyber activity keep skyrocketing. In the early years of the Internet, the improved efficiencies from networked IT infrastructure far outweighed the security risks that infrastructure created. Threats were always present, but patching after the fact was good enough. Today the amount and value of what's available online keep increasing exponentially, and so do the incentives to exploit systems and steal data. What we are seeing are the results: the threats and the attacks are absolutely bigger than they've ever been. And this hasn't been foremost in the mind-set of most companies producing software infrastructure or Internet services.
The emergence of an Internet of things—interconnecting billions of devices—provides an opportunity to do things correctly from the start.
What is the underlying technology problem?
The answer might sound abstract and dry, but it has to do with efficacy and efficiency. On efficacy, how do you know that installing a new security technology is better than doing nothing? You often don’t. And on efficiency, the usual approach is that you fix a newly discovered problem so the adversary doesn’t use that method anymore. But at the end of the day this doesn’t achieve much, because it doesn’t create a general, systemic solution. It’s not efficient.
We need to restructure how we build software, and develop security systems that have evidence that they actually add value. This requires rigor in how the billions of lines of code that run our networked infrastructure are actually written and updated.
The only places where software writing is truly rigorous are places like NASA—where they are building code that must work for years and from millions of miles away. They have highly formal methods and use well-controlled tools and special engineering to make absolutely sure that the software is reliable and bug-free.
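One ingredient of that rigor is design by contract: every function states its preconditions and postconditions explicitly and fails loudly when either is violated, rather than silently propagating bad state. The sketch below is purely illustrative, not NASA's actual tooling, and the function and actuator range are invented for the example.

```python
# Illustrative design-by-contract sketch (hypothetical function and values,
# not NASA's actual methods or tools).

def require(cond: bool, msg: str) -> None:
    """Precondition check: reject bad input before any work is done."""
    if not cond:
        raise ValueError(f"precondition violated: {msg}")

def ensure(cond: bool, msg: str) -> None:
    """Postcondition check: verify the function kept its promise."""
    if not cond:
        raise AssertionError(f"postcondition violated: {msg}")

def scale_thrust(level: float) -> float:
    """Scale a commanded thrust level (0.0-1.0) to a raw actuator value."""
    require(0.0 <= level <= 1.0, "thrust level must be in [0, 1]")
    raw = level * 1023  # hypothetical 10-bit actuator range
    ensure(0.0 <= raw <= 1023.0, "raw actuator value out of range")
    return raw
```

In safety-critical practice these checks are backed by static analysis and formal verification, so many violations are proved impossible before the code ever runs.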
How can we make all IT infrastructure as great as the code running a Martian probe?
Many colleagues and I are devoted to this question. First, it's important to understand that a number of nontechnical issues keep everyday software from being anywhere near that good. Software companies face no regulations or consequences if problems emerge down the road, with the exception of certain high-priority domains such as nuclear power plants or air traffic control.
So on the policy side you need to consider incentives for everybody to write better code, whether through liability, regulation, or market mechanisms. And on the technology side you need to make rigorous software-development methods, like the ones NASA uses, far more efficient and easier for everyone to use. Congress, in the Cybersecurity Enhancement Act of 2014, asked for a federal cybersecurity R&D strategic plan; that plan is being drafted for release by early 2016.
And while it will always be true that malicious insiders or human error can create problems, great software can to a large extent deal with that, too, by creating clear access rules and sending alerts when anything anomalous happens.
Meanwhile, what can companies do to protect themselves?