A Brief History of Infrastructure

We’re in the business of infrastructure, so we got to thinking: Where are we, and where did we come from? How did things work before AWS came along? Here’s our stab at mapping out the evolution of infrastructure.

The Problem of Infrastructure

As most of you probably realize, infrastructure is more than just computers.

Hardware, software, storage, networks, and many other complicated pieces need to work flawlessly together in real time, and coordinating that many moving parts is a problem in itself. Even as a developer, it’s incredible to me that infrastructure ever works at all. And often, it doesn’t.

There are too many configuration details for human beings to keep track of all at once. Pity the early programmers who had to do this work with mechanical configuration settings. Nothing was automated. There were no networks; data communication between machines didn’t exist yet. Everything simply had to be done by hand.

The First Infrastructure Problems

The first electro-mechanical computers, such as the Harvard Mark I that John von Neumann used in the Manhattan Project, filled entire rooms with patch cords and switches that had to be changed by hand. Talk about moving parts!

These computers generally didn’t do floating-point math yet. They stored their programs on lengthy punched-paper tape.

The whole concept of a program "loop" came from the practice of physically attaching the paper tape from its end back to its beginning, making a literal loop. Conditional branching was hand-coded using toggle switches. Constants were entered using manual rotating switches.

The Next Generation of Problems

The next generation of all-electronic computers replaced electromechanical relays with vacuum tubes, and those tubes often needed replacement.

Computing architecture largely shifted from analog to digital in this era, with machines such as ENIAC using punched cards or paper tape for input. That was definitely better than twisting switches for configuration, but it was hardly easy: producing an error-free program with an error-prone punching machine took ages.

Peripheral Problems

IBM began the next phase of configuration problems in the mid-1960s with the IBM S/360 mainframe series.

With big improvements in raw computing capability, and microcode used to implement a common instruction set on many models, the S/360 became the first family of fully compatible computers and one of the most commercially successful lines in history.

IBM expanded the usefulness of its S/360 line by adding peripherals such as printers, optical character recognition via large optical readers (not software), a standard interface for attaching peripherals from earlier machines, and other configuration-hungry devices. Each device, of course, required its own separate, manual configuration.

The IBM S/360 used transistors on removable SLT (Solid Logic Technology) circuit cards. In 1965, Gordon Moore made the observation, later called Moore’s Law, that the number of transistors on a chip doubles roughly every two years while the cost per transistor falls. The result is cheaper, faster computers every couple of years. This observation put computer design and development into high gear.

World Wide Woes

The ARPANET, the forerunner of the internet, launched in 1969, beginning the process of linking computers together worldwide. The problems grew.

Let’s skip forward a bit to the Personal Computer (PC) era.

In 1974, the Altair 8800 personal computer was released, sparking the imagination of engineers and hobbyists all over the planet. Out of the box it was programmed with front-panel switches and lights, but decked out with add-ons like floppy disk drives and a terminal, it was a real computer you could own yourself! Cool.

Altair BASIC, written in assembly language by the Micro-Soft company (sound familiar?), allowed programmers (who were beginning to be called developers) to write and store their programs at home or at their office desks. Configuring peripherals wasn’t easy, and hardware drivers still had to be written in machine code. But as personal computing gained momentum, peripherals became a valuable part of the equation.

By the 1980s, businesses wanted to connect all their isolated PCs and peripherals to share data and resources, and to save money. They took their cue from the ARPANET.

Client/Server Problems

Client/server technology enabled the networking solutions businesses wanted. LANs (local area networks) appeared first, and the momentum they built eventually led to connecting isolated LANs into WANs (wide area networks).

Computer technology advanced according to Moore’s Law, and software engineering improved along with it. The ability to control WANs, LANs, and individual PCs developed with the hardware. Remote printing, among other useful features, became common. Scientists and businesses could collaborate from a distance.

Dashboards to monitor the dizzying number of remote network services and configuration settings began to appear as early as the mid-1970s and flourished in the 1980s.

World Wide Woes on Steroids

In 1983, the Internet appeared when the ARPANET, the project begun in 1969, switched over to TCP/IP. Universities worldwide could now share email and collaborate on research through the Internet. It was a profound boon to science: it got people talking and thinking big. Yet despite the standardization on TCP/IP, configuration problems kept growing.

Then in 1990, Tim Berners-Lee served the first page of the World Wide Web from his NeXT computer at CERN. World-scale configuration problems were about to erupt, and everybody was excited.

Now all the remote WANs could communicate with each other, and individual computers could connect to anything else on the WWW without needing permission from any central controlling body. Passwords were implemented (poorly).

Developers (who, unlike software engineers, didn’t necessarily need CS degrees) and untrained individuals could now communicate worldwide from their own PCs without a huge and expensive infrastructure. Just a modem would do. The infrastructure itself had become remote.

Potential Solutions

Remote infrastructure is a powerful concept. How much time, effort, and money can be saved by putting almost everything online? That question started the movement to solve the configuration problems rampant in the industry. “Developer-friendly” became a crucial requirement for the ballooning software industry; without it, more and more errors crept into an already complicated system, on top of the time spent wrestling with infrastructure problems in the first place.

Ideas about how this remote infrastructure could evolve flew all over the web.

AWS Rises

Seizing on the solutions to its own early e-commerce problems, Amazon began organizing and systematizing its internal APIs. From that work grew the idea of offering Infrastructure as a Service (IaaS) to businesses on the web. AWS launched in 2006 and has become the world’s largest and most successful IaaS platform, for years outpacing its top three rivals combined.

AWS services automate the painfully slow configuration and deployment work of modern computing; in many ways, IaaS and AWS have defined the cloud. AWS solved a lot of problems, and the cloud-powered era of Big Tech began to take shape.
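
To get a feel for what “infrastructure as a service” means in practice, here is a minimal sketch, assuming Python and the boto3 AWS SDK: a single API call stands in for what used to be a hardware purchase and a manual configuration session. The AMI ID and region are placeholders, not values from this article.

```python
# A minimal sketch of "infrastructure via an API call" using boto3,
# the AWS SDK for Python. The AMI ID and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One API call requests a running virtual server, work that once meant
# buying, racking, and hand-configuring a physical machine.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```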

Of course, there is debate over who came up with the term "cloud computing." In 2006, Google’s CEO, Eric Schmidt, used the term at an industry conference, in one of its first high-profile appearances. Did he invent it? Who knows?

The upside of AWS, complete infrastructure control, comes at the cost of simplicity and developer-friendliness. The AWS dashboard is enormous and complex because AWS is constantly changing its offerings to improve and to compete with other services. Some developers do well with it, but most struggle.

What’s more, anytime you want to change something for your application, you must reconfigure AWS itself through the daunting dashboard.

Platform Problems

Platform as a Service (PaaS) evolved to hide AWS’s configuration complexity. In some cases, you don’t need to deal with AWS at all. Companies such as Heroku (2009) have been popular with developers up to now because of the simplicity and reliability of app deployment.

PaaS, in turn, has its own problems. The dashboard experience is great for small projects with modest scaling needs, but projects that need specialized AWS features or massive scale are limited to what the PaaS can handle. PaaS doesn’t expose every AWS detail, nor does it scale as well. AWS, not Heroku or any other PaaS, defines enterprise-ready.

This Brings Us to a Simpler Today

Infrastructure as Code (IaC) reduces the problem of configuring infrastructure to editable, repeatable, Git-compatible configuration files that tools such as Terraform apply to the cloud automatically.
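
The article points to Terraform; as a rough sketch of the same idea, here is what a tiny IaC program can look like using Pulumi’s Python SDK (a Terraform-style tool, swapped in here to keep the examples in Python). The resource name and tags are illustrative, not from the article.

```python
# A rough sketch of Infrastructure as Code using Pulumi's Python SDK
# (a Terraform-style tool). Resource names and tags are illustrative.
import pulumi
import pulumi_aws as aws

# Declare the infrastructure you want: an S3 bucket for static assets.
# This file lives in Git right next to your application code.
assets = aws.s3.Bucket(
    "site-assets",
    tags={"environment": "dev", "managed-by": "iac"},
)

# Export the generated bucket name so other tools (and humans) can find it.
pulumi.export("bucket_name", assets.id)
```

Running pulumi up (or terraform apply, in the Terraform world) compares the declaration against what actually exists in the cloud account and creates, updates, or deletes resources to match. Because the declaration is just text, an infrastructure change becomes a pull request: reviewable, revertible, and repeatable.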

The IaC experience is developer-friendly, and control over AWS is complete. Everything is massively scalable, secure, and runs on AWS itself, so startups and other tech-focused companies are not locked into a limiting platform. “Cloud jail” is a real thing, with both cash and opportunity costs, and IaC solves it.

Looking to the Future

I’m sure problems with IaC will surface, too. Our bet is that many companies will adopt another layer of UI abstraction that sits on top of IaC and makes you agnostic across providers (AWS, GCP, and Azure).

Another phase of infrastructure-as-something will undoubtedly arise, such as improving quantum entanglement through software configuration techniques.

We all have these problems to look forward to.