To give an overview of my talk: I'll start by outlining the problem with today's internet, then go through the different generations of the internet, followed by what the future internet will look like, and finish off with what we're doing to solve this problem.

Back in the 60s and 70s, the Internet Protocol was designed to connect machines together. The floor above you contained a printer the size of a room that cost as much as a house, and it needed to be connected to a mainframe on the floor below, also the size of a room and also costing as much as a house. It was easier to pull a cable between the two machines, slot in a networking stack, and let the resource sharing commence.

TCP/IP, a conversation-style protocol, solved the problem of resource sharing. It did it so well that it has been the mainstay protocol for the past 50 years, a veritable success disaster. It solved the problem of connecting machines together so well that we've been deferring to this simpleton network for all our broadcast communication needs. The problem, though, has shifted. We're no longer interested in resource sharing; what we're interested in is data dissemination. Some 98% of Internet traffic today is content sharing such as videos, audio, images and documents. The nature of this communication is fundamentally different from a point-to-point communication system. How so, you might ask? Well, if I were to ask you for the time, the most sensible way of doing so is to broadcast the question to everyone in this room; someone, anyone who understands my question, may respond with the time of day. On the other hand, if I were to ask the same thing over a point-to-point communication system, I'd need to set up a channel with every single person in this room and repeat the exact same question on every channel.

As you may well guess, this is an expensive way of communicating. If the data on your node becomes popular, you'll have to hire expensive software and networking engineers to scale the node up to satisfy user traffic. Only companies that happen to win the venture capital lottery and have the connections to hire these expensive employees are able to scale up their node. Achieving independent, large-scale data dissemination over a point-to-point communication system is far out of reach for the average non-technical person.

There's a nasty side effect of doing data dissemination over a point-to-point communication system: data monopolies will inevitably arise. Warehouses of information will appear, purely as an emergent behaviour, and companies will demand that you query their hosts before they release the requested data. In other words, no matter which timeline you happen to choose, any timeline that does data dissemination over a point-to-point communication system will inevitably have a Google equivalent. Absolute power corrupts absolutely; "don't be evil" becomes "be evil". The data becomes too valuable and too concentrated in one place. Governments become very interested and start passing laws so that they may plug pipelines into these data sources, no doubt to make decisions in your best interest, or at least under that guise. The Four Horsemen of the Apocalypse ride ever harder.

Our Internet has undergone two previous evolutions, with each communication system bootstrapping the next generation. The Internet actually began back in 1885 with Alexander Graham Bell's American Telephone and Telegraph Company, which sank large-scale costs into pulling cables. The only way to amortize these costs was to charge for telephone calls, the only functionality available in the day. Operators connected lengths of wire to each other to form chains, and each physical connection used high-quality, gold-plated connectors to reduce the chance of a call dropping, because with every additional wire connection the probability of a dropped call goes up.

This problem was solved by the next generation of the Internet, V2, which let us set up more stable physical networks that didn't need these expensive connectors, nor operators physically reconfiguring the network to form a connection. All that was needed was a bit of extra metadata on each packet, giving the endpoints of the network enough information to reassemble packets and recover from errors. Forwarded packets don't care which path they take through the network; all they care about is the destination address, and the network itself forwards them towards that address. If one pathway fails, no big deal: the connection isn't dropped, and the network retries on another pathway. The arrival of TCP/IP via ARPANET in the early 70s brought about this change. Eventually, decades later, what was once TCP/IP over telephony reversed roles and became telephony over TCP/IP, exemplified by Skype and other Voice over IP applications.
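
To make that idea concrete, here's a small sketch of per-packet metadata. It's my own illustration, not real TCP/IP, and the field names are invented for the example:

```python
# A minimal sketch of the idea: each packet carries a little metadata, so
# endpoints can reassemble the stream no matter which path the packets took
# or in which order they arrived.
from dataclasses import dataclass

@dataclass
class Packet:
    dst: str        # destination address; the network only needs this to forward
    seq: int        # sequence number; the endpoint uses this to reassemble
    payload: bytes

def reassemble(packets: list[Packet]) -> bytes:
    """Reorder packets by sequence number and stitch the payloads back together."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

# Packets may arrive out of order after taking different paths:
arrived = [
    Packet("10.0.0.7", 2, b" world"),
    Packet("10.0.0.7", 1, b"hello"),
]
assert reassemble(arrived) == b"hello world"
```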

What's the solution to this tech-giant, government-surveillance, advertising-centric Internet problem, then? We need to move away from addressable hosts and instead make data itself directly addressable, removing the host indirection. When this happens, data essentially has no home, so to speak, and no permanent cache. Data roams to where there is demand for it, leaving a breadcrumb trail in least-recently-used buffers throughout the network, so that other interested parties can obtain it in a few hops instead of traversing the entire globe each and every time.
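
As a rough illustration of those breadcrumbs, here's a sketch of a least-recently-used content store that a router might keep. The class and names are invented for the example, not taken from any particular named data implementation:

```python
# A minimal sketch of the "breadcrumb" idea: each router keeps a small
# least-recently-used content store, so popular data can be answered a few
# hops away from the requester instead of at the original source.
from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.items = OrderedDict()            # name -> data, in recency order

    def get(self, name: str) -> bytes | None:
        if name not in self.items:
            return None                       # cache miss: forward the request upstream
        self.items.move_to_end(name)          # mark as recently used
        return self.items[name]

    def put(self, name: str, data: bytes) -> None:
        self.items[name] = data
        self.items.move_to_end(name)
        if len(self.items) > self.capacity:   # evict the least recently used entry
            self.items.popitem(last=False)

# As data flows back towards the requester, every router on the path drops a
# copy into its content store; the next request for the same name is satisfied
# locally instead of crossing the globe again.
router = ContentStore()
router.put("/videos/cat.mp4/segment/0", b"...")
assert router.get("/videos/cat.mp4/segment/0") is not None
```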

Why, we've learned this lesson before. When you write data to a disk, do you know the actual physical address of the data on the disk? No, the kernel gives us an indirection, a file handle, which converts a human-readable file name into an address. Then why, might I ask, do we suddenly lose our brains and insist on using raw IP addresses, something that is fickle, transitory, geographically fixed by design, and running out, when all we want to do is communicate with other machines? We need to elevate our communication systems the same way we stopped writing directly to memory addresses. Just as we added a bit of metadata to a TCP/IP packet so that endpoints could handle errors, we need to add a bit of metadata to data itself, mapping a human-readable name to the universally unique ID of the data being requested.
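
Here's a small sketch of that indirection, using a hash of the content as its universal ID. The names and functions are my own illustration rather than any specific protocol:

```python
# A minimal sketch: the universally unique ID of a piece of data can simply be
# a cryptographic hash of its bytes, and a human-readable name maps to that ID.
import hashlib

def content_id(data: bytes) -> str:
    """Derive a location-independent, universally unique ID from the data itself."""
    return hashlib.sha256(data).hexdigest()

# The "file system" layer: human-readable names -> content IDs.
name_table: dict[str, str] = {}
# The "disk" layer: content IDs -> bytes, wherever they happen to live.
blob_store: dict[str, bytes] = {}

def publish(name: str, data: bytes) -> None:
    cid = content_id(data)
    blob_store[cid] = data
    name_table[name] = cid

def fetch(name: str) -> bytes:
    data = blob_store[name_table[name]]
    assert content_id(data) == name_table[name]   # integrity check comes for free
    return data

publish("/alice/blog/hello.txt", b"hello world")
assert fetch("/alice/blog/hello.txt") == b"hello world"
```

Note how asking for data by its ID means the answer can come from anywhere: whoever holds the bytes can serve them, and the requester can check they got exactly what they asked for.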

Only when we divorce location from data are we able to offer everyone the same content dissemination capabilities as Google, Facebook, Amazon and Twitter. To put that concretely, a kid in Rwanda could disseminate their content to potentially billions of people across the world, free of charge.

If you'll permit a quick tangent, allow me to explain the concept of social scalability, a term coined by Nick Szabo. A socially scalable system is one where the original intent of the group is not lost as the group scales in size. This can be achieved in many ways: at the people-powered end of the spectrum by constructing a legal system, with judges, lawyers and an associated police force; at the purely technical end by implementing, say, a monetary system with a built-in Nash equilibrium, forgoing the need for enforcement through human laws.

One of the last remaining outposts is data dissemination. Unlike a cryptocurrency, where the protocol treats rich and poor exactly the same no matter how much one lobbies it, in data dissemination, a vital part of our society nowadays, tech giants disseminate data far more easily than a non-technical person can. This asymmetry in capability funnels users' ability to do business through dumbed-down user interfaces targeted at the lowest common denominator. Fake, polarizing and unverified news becomes prevalent as people game the algorithms in the hope of getting more clicks.

Generation 2 of the Internet doesn't have security built into it. Scientists designed it to share information with each other; the need for security just wasn't there. Generation 3, or Named Data Networking, must therefore have strong security built into the protocol. One should be able to determine, from the data alone, that its integrity and provenance are correct. We no longer need to care about securing channels, because the metadata carried with the data is enough to detect tampering even if it passes through the NSA's servers. The protocol should also allow data to be encrypted when needed.
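
As a rough sketch of what "security from the data alone" means, here's an illustration that signs the name and content together. It assumes the third-party Python `cryptography` package and is not NDN's actual packet format:

```python
# A minimal sketch of data-centric security: the producer signs the name and
# content together, so any consumer can verify integrity and provenance from
# the data packet alone, regardless of which channel or cache delivered it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

producer_key = Ed25519PrivateKey.generate()
producer_pub = producer_key.public_key()

def make_data_packet(name: str, content: bytes) -> dict:
    signed_portion = name.encode() + content
    return {"name": name, "content": content,
            "signature": producer_key.sign(signed_portion)}

def verify_data_packet(packet: dict) -> bool:
    signed_portion = packet["name"].encode() + packet["content"]
    try:
        producer_pub.verify(packet["signature"], signed_portion)
        return True
    except InvalidSignature:
        return False       # the data was tampered with somewhere along the way

packet = make_data_packet("/alice/blog/hello.txt", b"hello world")
assert verify_data_packet(packet)
```

The channel stops mattering: whether the packet came from the original producer, a cache three hops away, or a hostile middlebox, the signature either checks out or it doesn't.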

A generation 2 network is more than happy to send unsolicited data to your mailbox; in a generation 3 network, however, this is made significantly harder because it's a pull network, meaning global bandwidth use stays close to the theoretical minimum. This plays well with IoT devices, which today are burdened with the mandatory cloud server every device phones home to. As IoT doesn't do advertising, it becomes hard to pay for those servers when selling hardware is a one-off event. Hence companies like Fitbit ensure your device will fail so that you purchase a new one, keeping the wheels of capitalism turning.

One other aspect of IoT seems to be overlooked. The current Internet is bursting at the seams because of pushed data, and the IoT devices we envisage will generate a lot of typically low-value, noisy information that'll further clog up the pipes. With a pull network, data generated by IoT devices sits quietly on the devices and only moves when there is demand for it.
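
Here's a small sketch contrasting that with push; the device and names are invented purely for illustration:

```python
# A minimal sketch of the pull model: sensor readings stay on the device, and
# bytes cross the network only when somebody expresses interest in a name.
import time

class SensorDevice:
    def __init__(self):
        self.readings: dict[str, bytes] = {}   # data sits quietly on the device

    def record(self, value: float) -> None:
        name = f"/home/kitchen/temperature/{int(time.time())}"
        self.readings[name] = str(value).encode()

    def on_interest(self, name: str) -> bytes | None:
        # Only now does data move, and only the piece that was asked for.
        return self.readings.get(name)

device = SensorDevice()
device.record(21.5)                    # nothing is pushed anywhere
some_name = next(iter(device.readings))
print(device.on_interest(some_name))   # data moves only on demand
```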

So what are we doing about it? We're building a new browser aimed specifically at a data-centric, or named data, network. We'll also be tying a cryptocurrency into the browser to facilitate trade. Our browser thus combines three socially scalable systems that allow a kid in Rwanda to have a bank in their back pocket, a government-equivalent digital identity, and the data dissemination capabilities of the tech giants.