In Vint Cerf's Future, Internet Packets Fall From Sky
Vint Cerf invented the protocol that rules them all: TCP/IP. Most people have never heard of it. But it describes the fundamental architecture of the internet, and it made possible Wi-Fi, Ethernet, LANs, the World Wide Web, e-mail, FTP, 3G/4G ― as well as all of the inventions built upon those inventions.
Cerf did that in 1973. For most of you that’s probably 20 years before you even knew what the internet was. That’s why he’s known as the father of the internet and earned himself a Presidential Medal of Freedom. Cerf didn’t stop there ― he went on to co-found the Internet Society (ISOC) and served as president of ICANN, the organization that oversees the domain name system.
So it was pretty much a given that Cerf would be inducted, as he was on Monday, into ISOC’s Internet Hall of Fame in its inaugural year.
Just a few days beforehand Cerf talked with Wired about how the military brought the TCP/IP protocol into being, how he and his co-conspirators knew ― almost 40 years ago ― what they were unleashing on the world, the threats to the net today, and what he’d like to see next: a vision that includes internet packets raining down from the sky.
For those who’d like to know more about the internet’s birth beyond this interview, Cerf recommended reading an essay from ISOC, written by Barry Leiner, entitled “Brief History of the Internet.”
Wired: So how did you come to be the author of the TCP/IP protocol?
Vinton Cerf: Bob Kahn and I had worked together on the Arpanet project that was funded by ARPA, and it was an attempt at doing a national-scale packet-switching experiment to see whether computers could usefully be interconnected through this packet-switching medium. In 1970, there was a single telephone company in the United States called AT&T, and its technology was called circuit switching, and that was all any telecom engineer worried about.
We had a different idea, and I can’t claim any responsibility for having suggested the use of packet switching. That was really three other people, working independently, who suggested that idea simultaneously in the 1960s. By the time I got involved in all of this, I was a graduate student at UCLA, working with my colleague and very close friend Steve Crocker, who is now the chairman of ICANN, a position I held for about a year.
Part of our job was to figure out what the software should look like for computers connecting to each other through this Arpanet. It was very successful ― there was a big public demonstration in October 1972, which was organized by Kahn. After the October demo was done, Bob went to DARPA and I went to Stanford.
So in early 1973, Bob appears in my lab at Stanford and says, ‘I have a problem.’ My first question was, ‘What’s the problem?’ He said, ‘We now have the Arpanet working, and we are now thinking: how do we use computers in command and control?’
If we wanted to use a computer to organize our resources, a smaller group might defeat a larger one because it is managing its resources better with the help of computers. The problem is that if you are serious about using computers, you better be able to put them in mobile vehicles, ships at sea, and aircraft, as well as at fixed installations.
At that point, the only experience we had was with fixed installations of the Arpanet. So he had already begun thinking about what he called open networking, and he believed you might optimize a radio network differently than a satellite network for ships at sea, which might be different from what you do with dedicated telephone lines.
So we had multiple networks, in his formulation, all of them packet-switched, but with different characteristics. Some were larger, some went faster, some had packets that got lost, some didn’t. So the question was: how can you make all the computers on each of those various networks think they are part of one common network, despite all these variations and diversity?
That was the internet problem.
In September 1973 I presented a paper to a group that I chaired, called the International Network Working Group. We refined the paper and published it formally in May 1974 as a description of how the internet would work.
Wired: Did you have any idea back then what the internet would develop into?
Cerf: People often ask, ‘How could you possibly have imagined what’s happening today?’ And of course, you know, we didn’t. But it’s also not honest to leave the answer at that, as if we had no idea what we had done, or what the opportunity was.
You need to appreciate that by that time, mid-1973, we had two years of experience with e-mail. We had a substantial amount of experience with Doug Engelbart’s system at SRI, called the On-Line System. That system, for all practical purposes, was a one-computer World Wide Web. It had documents that pointed to each other using hyperlinks. Engelbart invented the mouse that pointed to things on the screen. [...] So we had those experiences, plus remote access through the net to the time-sharing machines, which is the Telnet protocol. So we had all that experience as we were thinking our way through the internet design.
The big deal about the internet design was that you could have an arbitrarily large number of networks and they would all work together. And the theory we had was that if we just specified what the protocols would look like and what software you needed to write, anybody who wanted to build a piece of the internet would do that and find somebody willing to connect to them. Then the system would grow organically because it didn’t have any central control.
And that’s exactly what happened.
The network has grown mostly organically. The closest thing to central control is the Internet Corporation for Assigned Names and Numbers (ICANN), whose job is to allocate internet address space and oversee the domain name system, which wasn’t invented until 1984.
So, we were in this early stage, struggling to make sure that the protocols were as robust as possible. We went through several implementations of them until finally we started implementing them on as many different operating systems as we could. And on January 1, 1983, we launched the internet.
That’s where it is dated as operational and that’s nearly 30 years ago, which is pretty incredible.
Wired: So how did the internet get beyond the technical and academic community?
Cerf: Xerox invented the Alto, a $50,000 personal computer given to every employee of Xerox PARC ― so they were living twenty years in the future for all practical purposes. They were even inventing their own internet; they had a whole suite of protocols. Some of the students who worked with me at Stanford went to work at Xerox PARC, so there was a lot of cross-fertilization.
It’s just that they decided to treat their protocol as proprietary, and Bob and I were desperate to have a non-proprietary protocol for the military to use. We said we’re not going to patent it, we’re not going to control it. We’re going to release it to the world as soon as it’s available, which we did.
So by 1988, I’m seeing this commercial phenomenon beginning to show up. Hardware makers are selling routers to universities so they can build up their campus networks. So I remember thinking, “Well, how are we going to get this in the hands of the general public?” There were no public internet services at that point.
And there was a rule the government had instituted that said you could not put commercial traffic on government-sponsored backbones: in this case, the Arpanet, run by or for ARPA, and the NSFNet, run for the National Science Foundation, among others. The Department of Energy had ESnet, and NASA had what was called the NASA Science Internet. The rule was no commercial traffic on any of them. So I thought, “Well, you know, we’re never going to get commercial networking until we have the business community seeing that commercial networking is actually a business possibility.”
So I went to the US government, specifically to a committee called the Federal Networking Council since they had the program managers from various agencies and they had been funding internet research. I said, ‘Would you give me permission to connect MCI Mail, a commercial e-mail service, to the internet as a test?’
Of course, my purpose was to break the rule that said you couldn’t have commercial traffic on the backbone.
And so they kind of grumbled for a while and they said, ‘Well, OK. Do it for a year.’ So we turned that link up. I had built MCI Mail for MCI a few years before in 1983, so I knew how that worked and, of course, I knew how the internet worked.
We build it, we hook it up, we start traffic flowing between MCI Mail and the internet, and we announce this. And, of course, there were a whole bunch of other commercial e-mail service providers that were disconnected from each other.
So they all said, ‘Well, those guys from MCI shouldn’t have this privilege. We want to be connected to the internet too,’ and the Federal Networking Council said, ‘Well, OK.’ So they all get hooked up and the next thing they discover is, because they were compatible with the internet’s e-mail protocols, all these isolated e-mail systems could now talk to each other. It was just pretty dramatic and it broke many different barriers.
Two years later ― well, it was ’88, ’89 ― three commercial internet service providers came into being in the wake of that demonstration.
Wired: So from the beginning, people, including yourself, had a vision of where the internet was going to go. Are you surprised, though, that at this point the IP protocol seems to beat almost anything it comes up against?
Cerf: I’m not surprised at all because we designed it to do that.
This was very conscious. Right at the very beginning, when we were writing the specifications, we wanted to make this a future-proof protocol. The tactic we used to achieve that was to say that the packets of the internet protocol layer did not know how they were being carried, and they didn’t care whether it was a satellite link or a mobile radio link or an optical fiber or something else.
We were very, very careful to isolate that protocol layer from any detailed knowledge of how it was being carried. Plainly, the software had to know how to inject it into a radio link, or inject it into an optical fiber, or inject it into a satellite connection. But the basic protocol didn’t know how that worked.
And the other thing that we did was to make sure that the network didn’t know what the packets had in them. We didn’t encrypt them to prevent it from knowing ― we just didn’t make it have to know anything. It’s just a bag of bits as far as the net was concerned.
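That separation is visible in the packet format itself: an IPv4 header carries addressing and bookkeeping fields, while the payload is just an opaque byte string the network forwards without parsing. Here is a minimal sketch in Python (addresses and field values are illustrative, not from the interview):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum over the header's 16-bit words (RFC 791 style)."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_packet(src: bytes, dst: bytes, payload: bytes) -> bytes:
    """Assemble a minimal IPv4 packet: 20-byte header plus an opaque payload."""
    ver_ihl = (4 << 4) | 5                   # version 4, 5 x 32-bit header words
    total_len = 20 + len(payload)
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, total_len,
                         0, 0,               # identification, flags/fragment offset
                         64, 17, 0,          # TTL, protocol (UDP), checksum placeholder
                         src, dst)
    checksum = ipv4_checksum(header)
    header = header[:10] + struct.pack("!H", checksum) + header[12:]
    return header + payload

pkt = build_packet(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
                   b"just a bag of bits")
assert ipv4_checksum(pkt[:20]) == 0          # a valid header checksums to zero
assert pkt[20:] == b"just a bag of bits"     # payload carried verbatim, never inspected
```

Nothing in the header describes what the payload *is* ― which is exactly why any link technology that can carry these bytes can carry the internet.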
We were very successful in these two design features, because every time a new kind of communications technology came along, like frame relay or asynchronous transfer mode or passive optical networking or mobile radio‚ all of these different ways of communicating could carry internet packets.
We would hear people saying, ‘The internet will be replaced by X.25,’ or ‘The internet will be replaced by frame relay,’ or ‘The internet will be replaced by ATM,’ or ‘The internet will be replaced by add-drop multiplexers.’
Of course, the answer is, ‘No, it won’t.’ It just runs on top of everything. And that was by design. I’m actually very proud of the fact that we thought of that and carefully designed that capability into the system.
Wired: Right. You mentioned TCP/IP not knowing what’s within the packets. Are you concerned with the growth of things like Deep Packet Inspection and telecoms interested in having more control over their networks?
Cerf: Yes, I am. I’ve been very noisy about that.
First of all, the DPI thing is easy to defeat. All you have to do is use end-to-end encryption. HTTPS is your friend in that case, or IPSEC is your friend. I don’t object to DPI when you’re trying to figure out what’s wrong with a network.
I am worried about two things. One is the network neutrality issue. That’s a business issue. It has to do with the lack of competition in broadband access, and therefore the lack of discipline that competition would bring to the market. There is no discipline in the American market right now because there isn’t enough facilities-based competition for broadband service.
And although the FCC has tried to introduce net neutrality rules to avoid abusive practices like favoring your own services over others, they have struggled because there has been more than one court case in which it was asserted the FCC didn’t have the authority to punish ISPs for abusing their control over the broadband channel. So, I think that’s a serious problem.
The other thing I worry about is the introduction of IPv6, because technically we have run out of internet addresses ― even though the original design called for a 32-bit address, which would have allowed for 4.3 billion terminations if it had been used efficiently.
And we are clearly over-subscribed at this point. But it was only last year that we ran out. So one thing that I am anticipating is that on June 6 this year, all of those who can are going to turn on IPv6 capability.
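The arithmetic behind that exhaustion is easy to check: a 32-bit address space holds about 4.3 billion addresses, while IPv6’s 128-bit space holds roughly 3.4 × 10³⁸. A quick sanity check in Python:

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4: {ipv4_total:,} addresses")    # 4,294,967,296 (~4.3 billion)
print(f"IPv6: {ipv6_total:.3e} addresses")  # on the order of 3.4e+38

# Each single IPv4 address corresponds to 2**96 IPv6 addresses.
print(ipv6_total // ipv4_total == 2 ** 96)
```

With roughly seven billion people in the world (Cerf’s own figure later in the interview), one address per person already exceeds what IPv4 can number.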
Wired: Do you think before then we will see IPv4 auctions on eBay?
Cerf: We sort of anticipated that there would be a very messy endgame for the IPv4 network, and there have been court cases and issues with bankruptcies. I think it’s actually very damaging, because if people try to monetize the remaining IPv4 address space, they will chop it up into small pieces. It may be impossible to incorporate these into the routing tables in the backbone of the internet.
If you have to know that this little piece is over in Beijing and another piece from a related neighboring space is in Paris, you have to increase the routing table entries to keep track of all those little details. It’s actually quite messy. This is part of the reason my colleagues and I are so vocal about IPv6 implementation: the IPv4 system has already run out of address space and may run out of routing table capacity, too.
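The routing-table concern Cerf describes is a prefix-aggregation problem: adjacent blocks held by one operator can be announced as a single route, while the same amount of space scattered across unrelated holders cannot be merged. A small illustration using Python’s ipaddress module (the prefixes below are documentation examples, not real allocations):

```python
import ipaddress

# Two adjacent /25s held by the same operator collapse into one /24 route.
adjacent = [ipaddress.ip_network("192.0.2.0/25"),
            ipaddress.ip_network("192.0.2.128/25")]
merged = list(ipaddress.collapse_addresses(adjacent))
print(merged)      # a single routing-table entry: [IPv4Network('192.0.2.0/24')]

# The same amount of space sold off in non-adjacent fragments cannot merge,
# so each fragment needs its own entry in the backbone tables.
scattered = [ipaddress.ip_network("192.0.2.0/25"),
             ipaddress.ip_network("198.51.100.0/25")]
fragments = list(ipaddress.collapse_addresses(scattered))
print(fragments)   # two separate entries
```

Every fragment that cannot be aggregated is one more entry every backbone router must carry, which is why monetizing IPv4 in small pieces is so damaging.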
Wired: When you look at the net now are there things that make you very happy?
Cerf: Actually, given that my title at Google is Chief Internet Evangelist, I feel like there is this great challenge before me because we have three billion users, and there are seven billion people in the world. Which means we have four billion people to convert. That’s a big challenge. And it’s turning out to be not so easy to get everybody to build the infrastructure that’s needed.
Look at very bold moves like the one in Australia, where they are building a nationwide network. That’s a very big national commitment, and I’m envious of what they’re doing because we don’t seem to be able to get our act together here in the U.S. Our friends in Oz are going to be getting 100-megabit-per-second connections.
Wired: Occasionally this pops up in the technology press (and as part of the tech press, I plead guilty): stories come out that say, ‘We need to replace the internet that we’ve got because the protocols aren’t good enough for security or for identifying users or for something…’ Is it time for internet 2.0 or 3.0?
Cerf: The honest answer is that although people like to use terms like, ‘Internet one-point-oh,’ ‘two-point-oh,’ ‘three-point-oh,’ these are misnomers because the internet is really an evolving thing. It’s still very organic.
There are things going on now to increase the security of the system. The domain name system has known flaws, potential threats and hazards, and something called DNSSEC, the Domain Name System Security Extensions, is being implemented literally as we speak.
The hypertext protocol has an encrypted mode which you can initiate in order to secure transmissions across the web. The same can be said for e-mail: you can use PGP or other kinds of digitally signed messages.
It’s my sense that there are weaknesses that can be dealt with, and in large measure they have been ― at least technologically. So I would not count the existing internet out in terms of improved security.
I think that we still have much that we can do to make it better. It might be the case, though, that over time we will need to introduce new features that will make the net more secure.
One thing that I can tell you we have not done very well is to build broadcast capability into the network; we don’t take advantage of broadcast radio. We don’t take advantage of the fact that when you transmit a packet on a broadcast channel, multiple people can hear it at the same time. So there are things that we could do, and should do, to make this a richer as well as a more secure environment.
Wired: How would that work? Would that be replacing the way broadcast radio currently works, so that your radio would actually be able to intercept IP packets?
Cerf: Well, in fact, this is sort of where my imagination has taken me.
I think that it’s perfectly reasonable to have packets raining down from satellites, IP packets just literally raining down from satellites and being picked up by hundreds, if not millions, of receivers at the same time. Or radio broadcasting that’s digital that would be delivering packets as you drive by. All those things, in my view, are reasonable to contemplate and could be readily done, so I’m hoping that we’ll see some motion in that direction.