Time flies when you’re making history. Hard to believe so much time has passed, but 30 years ago in 1989, at the same time SC made its debut, Sir Tim Berners-Lee was inventing the World Wide Web at CERN, the European particle physics laboratory. Berners-Lee wrote the first web client and server a year later in 1990, and the URI and HTTP specifications he developed at CERN, along with HTML, were refined as web technology was deployed more widely.

For those in the industry and the press that covered it, the late 1980s and the 1990s were heady, exciting times. SC Magazine launched at the tail end of the ’80s, and throughout the internet boom of the 1990s it was common for multiple print technology magazines to publish well over 200 pages a week, although security looked much different than it does today.

When the Web exploded in the 1990s, it sometimes seemed to emerge out of thin air, but that was hardly the case. Here’s a quick timeline: The first successful communication over the Defense Department’s ARPAnet took place between UCLA and the Stanford Research Institute in October 1969. Then in 1973, Vint Cerf and Bob Kahn, widely considered the fathers of the internet, developed the TCP/IP protocol that became the common transport language of the internet. Cerf, Kahn and many others refined TCP/IP for several years until, on January 1, 1983, ARPAnet adopted it: the day widely considered the formal launch of the internet.

So, the $64 million question that everyone in the security field wants answered: If the founders of the internet were so smart, why didn’t they think of security? Look at the mess we are in today. Everything from our last presidential election to people’s personal credit information to the backgrounds of millions of federal workers has been exposed by threat actors working on behalf of nation-states, hacking for profit or making a political point as hacktivists.

The short answer: They did think of security, but they were so busy making the technology work that security took a backseat to the basic functions of the internet. The early teams were focused on sending messages over the system and getting the transmission protocols to work properly.

“Back in the 1960s and 1970s everybody was working with ARPA funding and there were just a limited number of us on the project, so if anybody was looking to abuse the system it wasn’t hard for us to track down who the person was,” says Steve Crocker, who worked on the original ARPAnet team and pioneered the Request for Comment (RFC) series of notes through which protocol designs are documented and shared. “There’s no question we traded speed-to-market for security. Had we been focused on security, usability would have slowed, deployment would have been slower and the explosion of the internet that happened in the 1990s would have been slower.”

David Clark, a senior research scientist at the MIT Computer Science and Artificial Intelligence Laboratory, who worked as chief protocol architect for the internet in the 1980s, adds that while they knew about many of the security issues, some key capabilities simply hadn’t been invented yet.

“It wasn’t that we ignored security, we thought about it a lot,” Clark explains. “But certain technologies, such as public-key encryption, weren’t invented yet. Cryptography came in a separate box, and the boxes were expensive, plus everyone had to agree on the algorithm because it was all in hardware.”
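Public-key encryption, introduced with Diffie–Hellman key exchange in 1976 and RSA in 1977, removed exactly the obstacle Clark describes: instead of everyone agreeing on a secret baked into expensive hardware, anyone can encrypt with a published key, and only the key’s owner can decrypt. A toy sketch of the RSA idea in Python, with deliberately tiny primes chosen for illustration (real keys run to thousands of bits and use padding schemes):

```python
# Toy RSA, purely to illustrate the public-key concept Clark references.
# The numbers are tiny on purpose; this is not usable cryptography.
p, q = 61, 53            # two secret primes
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)  # Euler's totient (3120)
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m):
    # Anyone who knows the public pair (e, n) can encrypt.
    return pow(m, e, n)

def decrypt(c):
    # Only the holder of the private exponent d can decrypt.
    return pow(c, d, n)

msg = 42
assert decrypt(encrypt(msg)) == msg
```

The asymmetry is the point: publishing (e, n) costs the key holder nothing, because recovering d requires factoring n, which is infeasible at real key sizes.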

In a recent Business Insider article, Vint Cerf says that, looking back on it, he doesn’t know whether it would have worked to incorporate public-key encryption in those early days.

“We might not have been able to get much traction to adopt and use the network, because it would have been too difficult,” Cerf says.

MIT’s Clark does admit that the internet’s founders didn’t think enough about application security.

“We decided that we had no control over what the application designers were going to do, so I think we underappreciated how important application design was to security.”

In fact, in Chapter 10 of his recent book, Designing an Internet, the portion of the book that focuses on security, Clark goes into detail about the importance of application developers building security directly into the application. Today, prevailing DevOps practice likewise encourages developers to build in security.

“When downloads of active code transmitted by a sender are untrustworthy, you can’t expect the network systems manager to fix it, the application guys have to fix it,” Clark told a group at Google earlier this year.

Trouble ahead  

The internet’s founders did think about security – and there were some early warning signs that the internet was vulnerable.

Dan Lynch, who was director of computing facilities at SRI International in the mid-to-late 1970s, where he performed initial development of the TCP/IP protocols in conjunction with Bolt, Beranek and Newman, says email was spoofable from the very beginning.

“I remember in the late 1970s there was a colonel who was gung-ho about the military using email as a tool, but I disabused him of that notion by showing him how spoofable it was,” Lynch says. “Email was spoofable from the very beginning. It’s never been secure, and never will be.”
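Lynch’s demonstration is easy to reproduce even today: the original SMTP design never authenticated the sender, so the From: header is simply whatever text the sending program supplies. A minimal Python sketch, using hypothetical addresses, showing that a forged message is assembled exactly like a legitimate one:

```python
from email.message import EmailMessage

# SMTP as originally specified (RFC 821) never verified who a message
# came from: the From: header is just text the client supplies.
# The addresses below are hypothetical, for illustration only.
msg = EmailMessage()
msg["From"] = "colonel@hq.example.mil"   # an arbitrary, unverified claim
msg["To"] = "staff@base.example.mil"
msg["Subject"] = "Orders"
msg.set_content("Report at 0600.")

# A receiving system of that era would display this sender as-is.
print(msg["From"])
```

Sender-verification mechanisms such as SPF, DKIM and DMARC were bolted on decades later, which is why Lynch can say the protocol itself was never secure.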

The first worm to be widely reported in the press came in 1988, when Robert Morris, then a graduate student at Cornell University, wrote code that he contended was intended to highlight security flaws. The worm, which was released from a computer system at MIT to disguise the fact that it was launched from Cornell, worked by exploiting known vulnerabilities in Unix sendmail, finger and rsh/rexec. It also took advantage of weak passwords.

Morris was convicted and sentenced to three years of probation, 400 hours of community service and a $10,050 fine. He maintained that the damage was an unintended consequence: the code could infect a computer multiple times, and each additional process slowed the machine down, ultimately to the point of being unusable. Morris went on to become a professor at MIT in 2006, and today it is generally accepted, at least in the security community, that he didn’t intend to cause harm and that the Morris worm was not purposefully malicious.

That’s in stark contrast to the Melissa and ILoveYou viruses, which arrived a decade later, in 1999 and 2000 respectively.

Panda Security reports that Melissa, launched in March 1999, spread through social engineering in email. Melissa came with the message “Here is the document you asked me for… do not show it to anyone.” In just a few days, Melissa became one of the most massive infections in history, causing more than $80 million in damage to American businesses. Companies such as Microsoft, Intel and Lucent had to block their internet connections because of the virus.

Then a year later, in May 2000, the ILoveYou virus struck from the Philippines, also as a social engineering attack delivered by email. People received a message with “ILoveYou” as its subject: “Kindly check the attached Loveletter coming from me.” The email included an attachment that looked like a text file named “Love-Letter-For-You.” In a retrospective done on the 15th anniversary of ILoveYou, Motherboard reports that the so-called “Love Bug” infected more than 45 million computers, all lured by the promise of a touching love letter.

Business as Usual?

Ronald Bushar, vice president and CTO of government solutions at FireEye, agrees that security issues were always known behind the scenes and that the industry largely opted for speed-to-market over security. Bushar says the industry tends to handle security issues the way the aviation industry does.

“We just respond to the last crash,” he says.

Bushar, who started using the internet as a high school student in the early 1990s, says he first saw the extent to which the internet was under attack during his time in the Air Force in the late 1990s.

“We thought we were pretty decent at cyber, but we were still getting beat all the time,” he says. “It really didn’t feel like winning to me at the time…and today, the stakes are so much higher because adversaries such as Russia are using cyber as a component of information warfare.”

In his book, MIT’s Clark discusses how so much of what the founders knew and thought about security came from the intelligence community, which was focused on confidentiality, typically between two trusting parties. Clark says this distracted the founders from the later insight that most of the traffic on the internet was between parties who were prepared to communicate, but weren’t sure if they could trust each other.

“When we were designing the internet and wanted to communicate with somebody, about the only people available were from the NSA,” Clark told the Google group. “While we were very good in the beginning at confidentiality and integrity, we have totally screwed up availability. People often ask why people just click through the warning boxes on a download; well, that’s because users just want to get their jobs done. We tend not to give them a Plan B.”

This focus on confidentiality and the priorities of the intelligence community makes sense, because until the late 1990s, and arguably until the Target hack in 2013 and the 2016 presidential election, security was not top-of-mind for business leaders or for most people. The military has been security-conscious from the very beginning and has talked about information warfare since at least the 1990s, as have IT leaders in the federal government. And while presidential administrations have supported e-commerce, top leadership, with the exception of the final years of the Obama administration, has not made cybersecurity a priority.

Today, following any number of major hacks, corporate heads have rolled, and the public’s awareness has increased, especially about phishing emails. Most security experts agree that corporate America has done a better job with security – and now small and midsized businesses have to catch up. Books like the one from MIT’s Clark have set the technology community asking questions about what a rearchitected internet would look like.

Policy makers and the public may pound their fists on the table and demand that computer scientists fix the internet right away. “The internet is broken,” they say. That may be the case, but it will take a great deal of mutual cooperation among governments and businesses, and some strong leadership, to get us there.

There’s plenty of brain power among the technical people to redesign the internet for the rest of the 21st century. The big question: In an era when cyber tools are weaponized against us regularly by our adversaries and mobility presents unprecedented security challenges, can the common good prevail when there can never be 100 percent mutual trust among all parties?

We can redesign the internet, but somebody still has to hold the keys. These are just some of the tough questions we collectively face as the security community works behind the scenes to develop a more functional internet for the next generation.