An Exception to AI Exceptionalism

by Jasper Gilley

It’s not often that as much is made of a nascent technology as has been made in recent years of artificial intelligence (AI.) From Elon Musk to Stephen Hawking to Bill Gates, all the big names of technology have publicly asserted its incomparable importance, with various levels of apocalyptic rhetoric. The argument usually goes something like this:

“AI will be the most important technology humans have ever built. It’s in an entirely different category from the rest of our inventions, and it may be our last invention.”

I usually love thinking about the future; I make a habit, for instance, of reminding anyone who will listen that in 50 years, the by-then-defunct petroleum industry will seem kind of funny and archaic, like the first bulky cell phones.

When I read quotes like the above, however, I feel kind of uncomfortable. In 1820, mightn’t it have seemed like the nascent automation of weaving would inevitably spread to other sectors, permanently depriving humans of work? When the internal combustion engine was invented in the early 20th century, mightn’t it have seemed like motorized automatons would outshine humans in all types of manual labor, not just transportation? We’re all familiar with the quaint-seeming futurism of H.G. Wells’ The War of the Worlds, Fritz Lang’s Metropolis, and Jules Verne’s From the Earth to the Moon. With the benefit of hindsight, it’s easy to spot the anthropomorphization that makes these works seem dated, but is it a foregone conclusion that we no longer anthropomorphize new technologies?

Moreover, I suspect that when we put AI in a different category than every other human invention, we’re doing so by kidnapping history and forcing it to be on our side. We know the story of the internal combustion engine (newsflash: it doesn’t lead to superior mechanical automatons), but we don’t yet know the story of AI – and therein lies the critical gap that allows us to put it in a different category than every other invention.

By no means do I endeavor to argue that AI will be unimportant, or useless. It’s already being used en masse in countless ways: self-driving cars, search engines, and social media, to name a few. The internal combustion engine, of course, revolutionized the way that people get around, and could probably be said to be the defining technology of the 20th century. But I will bitterly contest the exceptionalism currently placed on AI by many thinkers, even if one of them is His Holiness Elon Musk. (Fortunately for me, Elon Musk is famous and I’m not, so if he’s right and I’m wrong, I don’t really lose much, and if I’m right and he’s wrong, I get bragging rights for the next 1,000,000 years.) I will contest the exceptionalism of AI on three fronts: the economic, the technological, and the philosophical.

The Economic Front

[Graphs: world population and world GDP per capita over time]

As you can see on these graphs, human total population and GDP per capita have been growing exponentially since around 1850, a phenomenon that I termed in a previous post the ongoing Industrial Revolution. I further subdivided that exponentialism into outbreaks of individual technologies, which I termed micro-industrializations, since the development curve of each technology is directly analogous to the graph of GDP per capita since 1850.

Since micro-industrializations occur only in sequential bursts during exponential periods (such as the ongoing Industrial Revolution), it would be fair to infer that they have a common cause: in the case of the Industrial Revolution, that cause would be the genesis of what we would call science. Though the specific technologies that cause each micro-industrialization might be very different from one another (compare the internal combustion engine to the Internet), since they share a cause, they might be expected to produce macroeconomically similar results. Indeed, this has been the case during the Industrial Revolution. Each micro-industrialization replaces labor with capital in some form (capital is money invested to make more money, which is a shockingly new concept in mass application.) In the micro-industrialization of textiles, for instance, the capital invested in cotton mills (they were expensive at the time) replaced the labor of people sitting at home, knitting. This is absolutely an area in which AI is not exceptional. Right now, truly astonishing amounts of capital, invested by companies like Google, Tesla, Microsoft, and Facebook, threaten to replace the labor of people in a wide variety of jobs, from trucking to accounting.

Of course, if job losses were the only labor aspect of a micro-industrialization, the economy wouldn’t really grow. In the Industrial era, the jobs automated away have inevitably been more than accounted for by growth in adjacent areas. Haberdashers lost their jobs in the mid-1800s, but many more jobs were created in the textile industry (who wants to be a haberdasher anyway?) Courier services went bankrupt due to the Internet, but countless more companies were created by the Internet, more than absorbing the job losses. It’s too early to observe either job losses or job creation from AI, but there are definitely authoritative sources (such as Y Combinator and The Economist) that seem to think that AI will conform to this pattern. AI will have a big impact on the world economy – but the net effect will be growth, just like every other micro-industrialization. Economically, at least, AI seems to be only as exceptional as every other micro-industrialization.

The Technological Front

But Elon Musk isn’t necessarily saying that AI might be humanity’s last invention because it puts us all out of a job. He’s saying that AI might be humanity’s last invention because it might exterminate us after developing an intelligence far greater than our own (“superintelligence,” in philosopher Nick Bostrom’s term.) If this claim is true, it alone would justify AI exceptionalism. To examine the plausibility of superintelligence, we need to wade deeper into the fundamentals of machine learning, the actual algorithms behind the (probably misleading) term artificial intelligence.

There are three fundamental types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning. The first two generally deal with finding patterns in pre-existing data, while the third does something more akin to improvising its own data and taking action accordingly.

Supervised learning algorithms import “training data” that has been pre-categorized by a human. Based on that training data, the algorithm will tell you which category any additional data you feed it falls into. Examples include spam-checking algorithms (“give me enough spam and not-spam, and I’ll tell you if a new email is spam or not”) and image-recognition algorithms (“show me enough school buses and I’ll tell you if an arbitrary image contains a school bus.”)
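To make the flavor of this concrete, here is a minimal supervised-learning sketch in Python using scikit-learn; the tiny spam/not-spam dataset, its labels, and the test message are all invented for illustration.

    # Minimal supervised-learning sketch: spam vs. not-spam classification.
    # The training messages, labels, and test message are made up for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_messages = [
        "win a free prize now", "cheap pills limited offer",      # spam
        "lunch at noon tomorrow?", "here are the meeting notes",  # not spam
    ]
    train_labels = ["spam", "spam", "not spam", "not spam"]

    vectorizer = CountVectorizer()            # turn each message into word counts
    X_train = vectorizer.fit_transform(train_messages)

    model = MultinomialNB()                   # a simple probabilistic classifier
    model.fit(X_train, train_labels)          # learn from the pre-categorized data

    # Feed the algorithm new data; it reports which category the data falls into.
    X_new = vectorizer.transform(["claim your free prize offer now"])
    print(model.predict(X_new))               # expected output: ['spam']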

Unsupervised learning algorithms import raw, uncategorized data and categorize it independently. The most common type of unsupervised learning is clustering, in which an algorithm breaks data into groups of similar items (e.g., “give me a list of 1 billion Facebook interactions and I’ll output a list of distinct communities on Facebook.”)
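Again, a minimal sketch, assuming a toy two-dimensional dataset rather than a billion Facebook interactions: k-means clustering groups the points without ever being told what the groups are.

    # Minimal unsupervised-learning sketch: clustering unlabeled points.
    # The 2-D points are invented; real data might be features of user interactions.
    from sklearn.cluster import KMeans

    points = [
        [0.1, 0.2], [0.0, 0.1], [0.2, 0.0],   # one loose "community"
        [5.0, 5.1], [5.2, 4.9], [4.8, 5.0],   # another
    ]

    # No categories are supplied; the algorithm discovers the two groups on its own.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = kmeans.fit_predict(points)

    print(labels)   # e.g. [0 0 0 1 1 1]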

Reinforcement learning algorithms require a metric by which they can judge themselves. By randomly trying actions and discovering which ones improve their performance on that metric, they gradually eliminate the randomness and become “skilled” at whatever task they have been trained to do, where “skilled” simply means better performance on the given metric. Recently, Elon Musk’s OpenAI designed a reinforcement learning algorithm that beat the world’s best human players at Dota 2, a video game: “tell me that winning at the video game is what I am supposed to do, and I’ll find patterns in the game that allow me to win.”¹
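The flavor of this can be sketched without anything as elaborate as Dota 2. Below is a toy epsilon-greedy agent facing an invented two-armed slot machine: it judges itself on a single metric (average reward), starts out acting randomly, and gradually concentrates on whichever action has worked.

    # Toy reinforcement-learning sketch: an epsilon-greedy two-armed bandit.
    # The arms' payout probabilities are invented; the metric is average reward.
    import random

    true_payout = [0.3, 0.7]       # hidden from the agent
    value_estimate = [0.0, 0.0]    # the agent's learned estimate of each arm
    pulls = [0, 0]
    epsilon = 0.1                  # fraction of the time the agent still acts randomly

    for step in range(10_000):
        if random.random() < epsilon:
            arm = random.randrange(2)                        # explore: random action
        else:
            arm = value_estimate.index(max(value_estimate))  # exploit what has worked
        reward = 1.0 if random.random() < true_payout[arm] else 0.0
        pulls[arm] += 1
        # Nudge the estimate for this arm toward the reward just observed.
        value_estimate[arm] += (reward - value_estimate[arm]) / pulls[arm]

    print(value_estimate)   # approaches [0.3, 0.7]
    print(pulls)            # the better arm ends up with the vast majority of pulls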

The first two types of algorithms aren’t terribly mysterious, and rather obviously won’t lead, in and of themselves, to superintelligence. When superintelligence arguments are made, they most frequently invoke advanced forms of reinforcement learning. Tim Urban of the (incredibly awesome) blog Wait But Why tells an allegory about AI that goes something like this:

A small AI startup called Robotica has designed an AI system called Turry that writes handwritten notes. Turry is given the goal of writing as many test notes as possible, as quickly as possible, to improve her handwriting. One day, Turry asks to be connected to the internet in order to vastly improve her language skills, a request the Robotica team grants, but only for a short time. A few months later, everyone on Earth is dead, killed by Turry, who is simply doing what it takes to accomplish her goal of writing more notes. (Turry accomplished this by building an army of nanobots and manipulating humans into taking actions that would, unbeknownst to them, further her plan.) Turry subsequently converts the earth into a giant note-manufacturing facility and begins colonizing the galaxy to generate even more notes.

In the allegory, Turry is a reinforcement learning algorithm: “tell me that writing signatures is what I am supposed to do, and I’ll find patterns that allow me to do it better.”

Unfortunately, there are two technical problems with the notion of even an advanced reinforcement learning algorithm doing this. First, gathering training data at the huge scales necessary to train reinforcement learning algorithms in the real world is problematic. At first, before they can gather training data, reinforcement learning algorithms simply take random actions. They improve specifically by learning which random actions worked and which didn’t. It would take time, and deliberate effort on the part of the machine’s human overlords, for Turry to gather enough training data to determine with superhuman accuracy how to do things like manipulate humans, kill humans, and build nanobots. Second, as Morpheus tells Neo in The Matrix, “[the strength of machines] is based in a world that is built on rules.” Reinforcement learning algorithms become incredibly good at things like video games by learning the rules inherent in the game and proceeding to master them. Whether the real world is based on rules probably remains an open philosophical question, but to the extent that it isn’t (and it certainly isn’t to the extent that Dota 2 is), it would be extremely difficult for a reinforcement learning algorithm to achieve the sort of transcendence described in the Turry story.

That being said, these technical reasons probably don’t rule out advanced AI being used for nefarious purposes by terrorists or despots. “Superintelligence” as defined in the Turry allegory may be problematic, but insofar as it might be something akin to nuclear weapons, AI still might be relatively exceptional as micro-industrializations go.

The Philosophical Front

Unfortunately, a schism in the nature of the universe presents what is perhaps the biggest potential problem for superintelligence. The notion that superintelligence could gain complete and utter superiority over the universe and its humans relies on the axiom that the universe – and humans – are deterministic. As we shall see, to scientists’ best current understanding, the universe is not entirely deterministic. (The term deterministic, when applied to a scientific theory, describes whether that theory eliminates all randomness from the future development of the phenomenon it describes. Essentially, if a phenomenon is deterministic, then what it will do in the future is predictable, given the right theory.)

Right now, there are two mutually incompatible theories to explain the physical universe: the deterministic theory of general relativity (which explains the realm of the very large and very massive) and the non-deterministic theory of quantum mechanics (which explains the realm of the very small and very un-massive.) So, at least some of the physical universe is decidedly non-deterministic.

The affairs of humans are probably equally non-deterministic. Another interesting property of the universe is the almost uncanny analogy between the laws of physics and economics. Thermodynamics, for instance, is directly analogous to competition theory, with perfect competition and the heat death of the universe being exactly correspondent mathematically. If the physics-economics analogy is to be used as a guide, at least some of the laws governing human interactions (e.g., economics) are also non-deterministic. This stands to reason: physics becomes non-deterministic when you begin to examine the fundamental building blocks of the universe – that is, subatomic particles like electrons, positrons, quarks, and so on. Economics would therefore also become non-deterministic when you examine its fundamental building blocks – individuals. It would take a philosophical leap that I don’t think Musk and company are prepared to make to claim that all human actions are perfectly predictable, given the right theories.

If neither the universe nor its constituent humans are perfectly predictable, a massive wrench is thrown in the omnipotence of Turry. After all, how can you pull off an incredible stunt like Turry’s if you can’t guarantee humans’ actions?

The one caveat to this line of reasoning is that scientists are actively searching for a theory of quantum gravity (QG) that will (perhaps deterministically) reconcile general relativity and quantum mechanics. If a deterministic theory of quantum gravity is found, a deterministic theory of economics might also be found, which a sufficiently powerful reinforcement learning algorithm might be able to discern, using it for potentially harmful ends. That being said, if a theory of quantum gravity is found, we’ll probably be able to build the fabled warp drive from Star Trek, so we’ll be able to seek help from beings more advanced than Turry (plus, I’d be more than OK with a malign superintelligence if it meant we could have warp drives.)

So if AI is only as exceptional as every other micro-industrialization, where does that leave us? Considering that we’re in the middle of one of the handful of instances of exponential growth in human history, maybe not too poorly off.




1 – OpenAI

Featured image is of HAL 9000, from Stanley Kubrick’s 1968 film 2001: A Space Odyssey.

Highway Rest Stops, From Worst to Best

by Jasper Gilley

If you are a regular reader of this site and if you enjoy the sort of posts that generally populate it (those about technology, history, philosophy, etc.), your distress after reading the last post must have turned to outright alarm when you saw the title of this one. Here is a blog that has sunk, in three posts, from examining the lofty philosophical pronouncements of Nietzsche to speculating about petro-finance to dissecting the merits of highway rest stops. Worse yet, this blog is now proceeding to dissect highway rest stops in that most potentially sleazy of internet formats – the listicle.* As this blog’s author, however, I can assure you of two things: that a balanced intellectual diet (or, as the case may be, a balanced non-intellectual diet) does one’s mental digestive systems good, and that more weighty posts are en route. For now, however, you’ll just have to be content plumbing the depths of that most austere of topics: highway rest stops.

I have spent a non-negligible fraction of my childhood in the car on extensive road trips. As anyone who has been on a road trip in the United States can tell you, one of the best parts of a road trip can be the momentary respite from driving that occurs when one stops at a rest stop to eat, use the bathroom, and generally rejuvenate. Unfortunately, I was required to use the qualifying term can be in the previous sentence because not all rest stops are created equal. Hence, a comprehensive ranking of rest stops by quality is required. Unfortunately, I haven’t been to every US state, so the following rankings will be by no means comprehensive (though I may update them at a later date as I traverse more states.) Nonetheless, I feel it is my duty as a road traveler to inform my comrades-in-transit of the providential possibilities and potential perils of rest stops to the best of my ability. Thus, the following rankings, arranged from worst to best.

#12 – Montana

Montana rest stops can be abysmal, and potentially fatal, for a number of reasons. Most importantly, the restrooms give users the impression that they have been incarcerated in a correctional facility. For their part, Montana rest-stop-restroom-users would probably be better off using the restrooms in a correctional facility. The ones in the Montana rest stops are gloomy and dark, have only one temperature of sink water (cold, but not so cold as to be refreshing), and may lack soap. To complete the correctional-facility ambiance, the good people at the Montana Department of Transportation included a low-quality speaker blasting weather forecasts, which may as well be blasting harsh instructions to maximum-security prisoners.

I add that Montana rest stops may be fatal because one just north of Billings on Interstate 94 has a sign that reads “Rattlesnakes have been observed. Stay on the pavement.” Finally, Montana rest stops lack recycling depositories, which forced me to put plastic bottles in the trash and feel like a sub-par human being.

In the classic Cold War film The Hunt for Red October, the eponymous Soviet submarine’s second-in-command dies while lamenting “I would like to have seen Montana.” In actuality, you don’t want to see Montana – its rest stops, at least.

#11 – Wisconsin

After a stint in the clink (also known as Montana’s rest stops), Wisconsin’s seem palatial. That being said, by any other measure, Wisconsin’s rest stops are lame. After entering a building that architecturally could not have been built anytime other than the 1960s, one is greeted by nothing – there really isn’t anything in these rest stops other than restrooms and maps. It’s not that Wisconsin rest stops are bad, it’s just that they’re not good, with one exceptional instance of active badness. The XCELERATOR Hand Dryers Of The Future™ in the Wisconsin rest stop restrooms spontaneously cease operation when they feel like it, regardless of the potential presence of moisture on one’s hands, which is aggravating (though it may be the same with other dryers of the same model – I don’t know.) It is a plus, though, that the Wisconsin rest stops have, in stark contrast to the Montana ones, about eight different types of recycling bins. Rest stoppers may be forced to pause and deliberate which bin their bottle goes in, but at least they won’t be forced to doom the human race to climate change.

#10 – Wyoming

Wyoming rest stops aren’t spiffy and may run out of soap, but they were saved from a lower ranking on this list since they’re almost exclusively powered by the sun, apparently, and have cool diagrams inside explaining how. The Wyoming DoT was probably incentivized to do this because it was cheaper than running electricity out to the middle of nowhere (which is basically everywhere in Wyoming, the least-populous state.)

#9 – Indiana

There isn’t much to say about Indiana rest stops except that they are what would occur if Wisconsin rest stops were invaded by a swarm of low-quality eating establishments that one only finds in rural areas (such as Red Burrito.)

#8 – Iowa

If you took a Wyoming rest stop, made it not powered by the sun, cleaned it up a bit, and stuck it in the middle of a cornfield, you’d have an Iowan rest stop. Since Iowa is about as interesting a state as Indiana, no more ink will be spilled about its rest stops here.

#7 – Italy

Surprisingly, Italy isn’t a state of the US. However, I happen to know what Italian rest stops are like in some detail since my high school orchestra did a tour of Italy a few years ago. Unsurprisingly, Italian rest stops are interesting. They are as commercialized as any American rest stops, but not exceptionally clean, much like southern Italy (northern Italy, however, is pristine, oddly enough.) Amusingly for those of us well acquainted with American rest stops, some Italian rest stops sell condoms, which you’d probably be rather unlikely to find in an American rest stop.

#6 – Pennsylvania

Pennsylvania rest stops are an intensely mixed bag. On Interstate 80, the rest stops are much like the passing scenery of western Pennsylvania – boring, but at least clean. Interstates 76 and 476, on the other hand, have superb rest stops, for a variety of reasons. Most importantly, there is only one rest stop for both directions of travel. This means that on Interstate 476, for instance, northbound drivers must navigate a veritable thicket of ramps to obtain the delicious nectars which emanate from the rest stop’s resident Jamba Juice (the residency of which is the secondary reason for these rest stops’ pre-eminence.)

#5 – Massachusetts

Massachusetts’ rest stops, which aren’t particularly special or interesting, would not warrant such an elevated status in this listicle if it weren’t for the singularly distinguishing fact that they are small, essentially containing room for no more than a single convenience shop. I think they also have electric vehicle charging, but I’m not sure. On the one instance of my stopping at a Massachusetts rest stop, I purchased a package of peanuts. That is all I know or remember about Massachusetts rest stops.

#4 – Illinois

Much like the rest stops of Pennsylvania, those of Illinois are a mixed bag. In most of the state, the rest stops are comparable to those of Iowa, with the additional bonus that they hand out assorted stickers, which I collect (this is an entirely normal thing for people to do, by the way.) Heading south on Interstate 57 near Champaign, I obtained a sticker that reads, “TOURISM WORKS FOR AMERICA.” It now resides on my piano music notebook.

The greater Chicago area, however, is home to rest stops that are – not over-grandiosely – known as oases. These delightful havens (or, I should say, Oases) are essentially commercialized, indoor bridges that stretch over the highway and contain pedestrian entrances on both sides. Once inside, lucky travelers can eat Panda Express from a perch directly overlooking the passing traffic. If all of Illinois had oases, the state would most definitely command the top ranking in this listicle. Unfortunately, there exist only eight oases for the entire state, so it remains relegated to the #4 spot.

#3 – North Dakota

No, your eyes do not deceive you. I have indeed granted the #3 spot on this ranking of highway rest stops to North Dakota. It, however, is a spot well-deserved. Consider the fact that North Dakota’s rest stops are cleaned and the grass of their lawns mowed every two hours by an individual dedicated to each rest stop. Additionally, the good people of North Dakota pay for free Wifi at their rest stops. And all this from a state with a population 6% that of Illinois. In general, North Dakota is a vastly undervalued state, and you should make a point of visiting it at some point in your life.

#2 – New York

As with many of this list’s rankings, New York’s rest stops well reflect the state itself. On the southern part of Interstate 87, just north of New York City, the rest stops are densely populated, cosmopolitan, and capitalistic, just like the city itself. Most importantly, these rest stops are wholly unique among rest stops in that they contain two levels of parking and are themselves two-storied. Upstate, the rest stops become smaller, but retain the good-natured, bucolic character that permeates that part of the state. As an additional bonus, the rest stop on Interstate 90 south of Buffalo, at least, contains electric vehicle charging.

#1 – Ohio

But the bustling rest stops of New York still remain inferior to those of Ohio. In every way, Ohio rest stops are optimal for the passing traveler. They are frequent (every 50-60 miles), architecturally significant (most are shaped in a large O, subtly reminding travelers of the state they are in), and, critically, well-provisioned (Starbucks and other edible restaurants reside in each.) Much thought was clearly put into their design: a TV displaying real-time weather, traffic, safety announcements, and entertaining advertisements is handily posted outside the bathrooms, giving one something to do while one waits for relatives less prompt in certain departments. Nor do truckers remain unappeased – Ohio generously provides them with showers and a dedicated lounge. Given this, it is well worth driving through Ohio for the express purpose of experiencing its rest stops.

There is a moral to this story. As self-driving cars slowly terminate the need for state spending on highway patrol and accident response teams over the next 15 years, state governments should consider investing in their rest stops. A five-star restaurant in every rest stop? Definitely. A hotel? Certainly. A holodeck? By all means. For weary travelers seeking a Mecca of culture amidst the barren swathes of land that highways must inevitably traverse, every dime invested in rest stops means ten times as much.

Finally, if this listicle about rest stops reads something like the Federalist papers, know that that is due to the fact that I spent the cross-country road trip that inspired this post listening to an audiobook biography of Alexander Hamilton, and that I may have temporarily picked up some Hamiltonian affectations in my writing style. I’d like to think that the supremely eloquent founding father would be proud to know that his writing influenced this inspired analysis of highway rest stops.


*The word listicle is a portmanteau of the words list and article. I say potentially sleazy because every time you see a silly clickbait-y BuzzFeed headline saying something like 10 Reasons You Should Always Eat Tomatoes, it is assuredly a listicle.



Short $ARAM

by Jasper Gilley

Well, this is unexpected. Being a regular reader of this site, you expected this post to philosophize about the future of technology or something like that. Au contraire – what follows is a good old-fashioned quasi-clickbait-y article about finance!

If you haven’t been keeping up with the latest in Saudi Arabian geopolitics, word on the street is that Saudi Aramco (the state-owned oil producer and the world’s most valuable oil company) is going to list 5% of its business on the New York Stock Exchange in an IPO* in the near future.¹ Though we don’t yet know what its ticker will be, I’ve decided to pretend it will be $ARAM because it would be a good one and it isn’t taken on the NYSE. If it ends up being $ARAM and someone makes money from reading this article, please send me some.

Now, for the investment advice: short $ARAM.** There are two inextricably intertwined reasons for this, but both amount to a catch-22 ensuring Aramco’s profits will never be better than they are now.

The First Dilemma

Almost exactly three years ago, oil prices began a long descent from their previous equilibrium north of $100/barrel to a new equilibrium hovering in the vicinity of $50/barrel.² This was largely due to Saudi policymakers’ desire to keep the price of oil low so as to slow the rise of burgeoning shale and tar-sands producers in the US and Canada, who generally couldn’t extract oil nearly as cheaply as the Saudis. Unfortunately for the Saudis, shale producers proved able to scale production down quickly in response to lower oil prices, thus averting the bankruptcy Aramco was hoping for.

Of course, low oil prices mean less money for Saudi Arabia and the other petro-states***, which most of those other petro-states weren’t thrilled about. Thus, OPEC (the international body through which petro-states coordinate production and prices) recently agreed to cut back production, which should theoretically raise prices.³ However, when OPEC cuts back production, shale producers (who aren’t bound by the cut-production agreements) gain market share and, in so doing, help negate the price increases.

It’s a dilemma of the first order for Saudi Arabia and Aramco that essentially ensures oil prices (and profits) will stay low indefinitely. This dilemma alone, however, might not be enough to justify shorting the stock of what is almost certainly the world’s most valuable company (including privately held parts of Aramco.)

The Second Dilemma

Saudi Aramco in 2017 is essentially analogous to Kodak in 1990 – a very valuable business, the days of which are numbered. As Elon Musk has said, “We are going to exit the fossil fuels era. It is inevitable.”⁴ Someone will inevitably complain that I quoted the CEO of an electric-vehicle company on fossil fuels, so further: oil drilling is an old business, the efficiency gains in which have been almost entirely squeezed out. Solar power, by contrast, is a relatively new business. Even relatively small, incremental reductions in the cost of solar panels, compounded over a number of years, will eventually be enough to make solar significantly cheaper than oil energy. This is really just due to the fundamentals of the businesses: solar power is somewhat capital-intensive but comes with virtually no labor or shipping costs, whereas oil has incumbent capital but comes with high labor and shipping costs. Not to mention that a clean energy company recently pulled off the biggest product launch of all time.
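As an aside, a back-of-the-envelope sketch of that compounding, with entirely invented numbers (a 7% annual decline in solar cost and an oil cost that stays flat), just to show how quickly modest yearly gains stack up:

    # Back-of-the-envelope compounding sketch; every figure here is invented
    # for illustration, not a forecast.
    solar_cost = 100.0      # arbitrary starting cost per unit of energy
    oil_cost = 60.0         # assume oil starts cheaper and stays roughly flat
    annual_decline = 0.07   # a modest 7% yearly reduction in solar cost

    years = 0
    while solar_cost > oil_cost:
        solar_cost *= 1 - annual_decline
        years += 1

    print(years, round(solar_cost, 1))   # 8 56.0 -- under a decade at these made-up rates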

This may not seem like a dilemma so much as a pronouncement of doom, except that Saudi Aramco still has to decide how much oil to produce in the short term, and none of the options are conducive to much profit on Aramco’s part. They could:

  • Produce little, driving prices (and possibly profits) up for a short period of time but further accelerating the transition to clean energy.
  • Produce a lot, keeping prices low and delaying the transition to clean energy, but minimizing profits.

Given those options, I’d guess Aramco will choose the latter, but either way, the company is generally headed for an era of stagnating profits.

Oiler’s Method

“But wait,” you might say, upon reading this post. “If Saudi Aramco’s prospects are so dim, wouldn’t they be recognized as such by investors during the IPO, making $ARAM undervalued and shorts therefore less profitable?” You would be correct, O diligent investor, except for the fact that Aramco’s financials are excellent and that, by purely quantitative comparison, it blows many Western oil companies out of the water, as it were.¹ In today’s era of quantitative investing, that is likely enough for Wall Street to give it a juicy valuation at the IPO.

Being the informed investor you are, from reading this blog, you will thus be able to make a tidy sum when in 10 years, $ARAM trades at half its IPO price.

Since this was a financial post, the obligatory disclaimer: the author of this post does not have an interest in any of the aforementioned securities (duh). All risk is assumed by the investor when taking financial advice from an 18-year-old. Consult with a real financial advisor before trading securities.




*IPO stands for Initial Public Offering, and refers to the process by which a company (or part of a company) first becomes publicly held.

1 – The Economist

**Shorting a stock is the opposite of buying it: you sell it on credit, and buy the stock back at a later time. Thus, you make money if the price of the stock goes down.
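A toy numerical illustration of that mechanic, with made-up prices (and ignoring borrowing fees and margin requirements):

    # Toy short-sale arithmetic with invented prices.
    shares = 100
    sell_price = 50.0      # sell borrowed shares today
    buyback_price = 30.0   # buy them back later, hopefully cheaper

    profit = shares * (sell_price - buyback_price)
    print(profit)   # 2000.0 -- the short profits because the price fell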

***Informal term for a state that receives the majority of its revenue from oil.

2 – Bloomberg Energy

3 – Bloomberg

4 – Reuters

AI, Nietzsche, and What Humans are Good For

by Jasper Gilley

Note: this post is a sequel/conclusion to the prior post We’re Still in the Industrial Revolution. It would be helpful to read that post before this one.

I recently stumbled across this stressful cartoon:

[Cartoon: machines advancing on various fields of human ability]

The above image originally appeared in Ray Kurzweil’s 2005 book The Singularity is Near. Ray Kurzweil has been bashed on this site before, but despite the apocalyptic overtones of most of his work, we’d be kidding ourselves to deny that there remains something compelling about fundamental parts of his message. Machines are indeed making incredible advances in (most of) the fields described in the above drawing, with no signs of stopping.

To contextualize, let’s examine the specifics of the current instance of human-replacement. Virtually all recent machine advances have been fueled by the advent of machine learning/artificial intelligence (AI). We, as a species, feel particularly threatened by AI because it threatens to make obsolete what we think of as the defining characteristic of humanity – intelligence. In other words, if an alien visitor to Earth were to ask a human what humanity is good for, 9 times out of 10, he’d get the response, “humans are good for being intelligent.” That AI threatens to supersede humans in (at least some) areas of intelligence, our species’ defining characteristic, is precisely why Ray Kurzweil is so controversial. His fame comes from pouring salt in the wound that is humans’ collective AI identity crisis.

Yet AI is, as was mentioned in We’re Still in the Industrial Revolution, just another micro-industrialization, and micro-industrializations have been occurring for the past 200 years. Moreover, people were far more up-in-arms (literally) about the original micro-industrialization of textiles. Humans’ collective AI identity crisis isn’t new, and it hasn’t always been about AI. Rather, it’s a 200-year-old phenomenon, perhaps better known as the Industrial Identity Crisis, with sub-identity crises for each micro-industrialization.

This suggests, however, that humans’ collective identity hasn’t always been static. That is, “intelligence” wouldn’t necessarily be the answer given to the alien had he landed 200 years ago. Rather, he likely would have been told that humans’ capacity for manual labor was the thing that made them unique. After all, the vast majority of humans in 1817 still worked in the manual labor-intensive jobs that would be automated with the onset of the Industrial Revolution. It wasn’t as if any terrestrial animal could do agriculture like humans could.

Thus, still being in the Industrial Revolution, we have no reason to suppose that “intelligence”, broadly defined, is by any means a lasting definition of what humans are good for. In all probability, if our proverbial alien were to land in 50 years, he would not receive such an answer. Rather, he would be told that humans are good for a class of things not automatable by AI. To Ray Kurzweil, nothing isn’t automatable by AI, but of course, to certain 19th-century futurists, nothing wasn’t automatable by robots. I’d argue that the alien would, in 50 years, be told that creativity is what humans are good for. Yes, AI algorithms have “composed in the style of Bach”, but they did so in a very different way than a human would have, and furthermore, composing in the style of someone isn’t creativity – it’s mimicry, which deserves to be automated anyway.

Enough speculation. We’ve established that humans’ collective identity is constantly evolving in tandem with the new technologies being developed, but does that mean that the question “what are humans good for?” will always generate a different answer at different points in time? It is perhaps no coincidence that a philosopher born near the dawn of the Industrial Revolution provided, some 125 years ago, guidance that applies to humans at every stage of it. Friedrich Nietzsche wrote in his magnum opus Thus Spoke Zarathustra:

What is great in man is that he is a bridge and not a goal; what can be loved in man is that he is an over-going and an under-going.

Humans’ identity should not be defined by where we are. Where we are is constantly in flux, so it is impossible to conjure up a lasting identity based upon it. Rather, our identity should be defined by the flux itself – that where we are, we soon will not be.

So, if an alien lands in your backyard and asks you “what are humans good for?”, give him an answer that won’t be untrue in 50 years. As long as the Industrial Revolution lasts, humans are good for the constant reinvention of what it means to be human.




Thus Spoke Zarathustra translated from German by R.J. Hollingdale and Jasper Gilley.

Featured image is the painting Wanderer above the Sea of Fog by Caspar David Friedrich.

1900-2000

by Jasper Gilley

There were 1,944,139 people born in Germany in the year 1900.¹ Suppose you’re one of those people. Furthermore, suppose you live for a reasonably long time – you die at age 100, in the year 2000. I would venture to argue that you, in the course of your life, witnessed more change than any other individual that has ever existed.

When you were born, horses were the primary means of short- and medium-distance transportation. As a rule, getting from one place to another required a biological organism to do work – be that biological organism you or a horse. Railroads were certainly used for the occasional long journey, but most humans did not have large amounts of direct contact with them.

When you were born, Germany was also the ascendant hegemon of the world. The work of Otto von Bismarck and decades of industrialization had paid off, and Great Britain faced a serious threat to her global martial supremacy. The times were changing quickly. When you were 5, one of the five Great Powers of Europe (Austria-Hungary, Britain, France, Germany, and Russia) was defeated for the first time by a non-European nation. Newly-industrialized Japan sank the Russian Pacific fleet in the Russo-Japanese War of 1905, and when Russia sailed its Baltic fleet around the Cape of Africa and through the Indian Ocean to Japan to retaliate, Japan promptly sank that fleet as well.

Back in Europe, the peace that had existed since the Franco-Prussian War of 1871 was deteriorating quickly. As one historian put it, Germany was “too big for a balance-of-power, too small for total hegemony.” Late in his life, Bismarck remarked that the next major European war would come from “some damned affair in the Balkans,” and when you were 14, he was proven right: the assassination of the heir to the Austrian throne by a Serbian nationalist group triggered a diplomatic crisis that, in failing to be resolved, prompted the First World War.

Imagine living through this time: the first truly global war, accompanied by the application of horrific new weapons. Chemical weapons debuted en masse in the First World War. It was the twilight of the age of artillery – massive standalone guns blasted shells of unprecedented size – but the dawn of the age of airplanes. For the first time, humans could fly! And all this accompanied by claims that this would be “the war to end all wars.”

While most of the world was busy warring, the Russian Tsarist state was busy collapsing. Defeat at the hands of Japan, combined with heavy spending on the First World War and a weak Tsar, produced conditions that gave rise to an oft-forgotten republican government, which was subsequently overthrown by Lenin and the Communists (that should be a band name.) A new form of government was born, one that seemed to pose a viable challenge to all established governments worldwide. In this age of rapid, drastic change, the phrase workers of the world, unite! seemed to be credible.

When you were 18, the Great War came to an end. Along with airplanes, it had brought the genesis of ships that could go underneath the water. We take this for granted today, but for contemporary ocean-goers, the thought that there might be a hostile warship submerged just below you must have been incredibly frightening. Back on land, the borders of Europe were redrawn by the Treaty of Versailles. One day, you might have suddenly found out you were living in a new German republic, or Poland, or Czechoslovakia, or a territory administered by a new organization called the League of Nations.

Throughout the next decade of your life, you watched the liberal world order that had been created by the Treaty of Versailles slowly unravel. When you were 22, Benito Mussolini took power in Italy, abandoning all pretense of democracy when you were 25. Adolf Hitler staged a coup against the German republic when you were 23, and though it failed, he was nonetheless elected democratically a decade later.

Meanwhile, the established economic order was quickly deteriorating as well. Hyperinflation in the early 1920s had already rendered the German currency almost worthless, and the Great Depression in the United States caused global economic troubles a few years later, hitting the still-nascent German republic particularly hard.

This was the world in which the Second World War broke out, in your 39th year. Due to its “bigger explosions and better villains,” the Second World War is better known than the First, so I won’t go into a lot of detail here. Nonetheless, World War II inspired the development of such technologies as the atomic bomb. For the first time, one explosive device could level cities. As is well documented, the combination of the nuclear threat and the ascendancy of the Soviet Union under Stalin caused unprecedented mass hysteria in the West, particularly the United States.

Despite being defeated when you were 45, Germany remained the epicenter of global politics in the coming years. As was the case following the Treaty of Versailles, German citizens such as yourself might have woken up one day to find themselves living under either a republic (West Germany) or a Communist state (East Germany.)

World War II also gave impetus to the development of perhaps the most important technology of your lifetime. Starting when you were about 50, the Soviet Union (and slightly later, the United States) began passing milestones in spaceflight: first rocket to reach space, first artificial satellite (Sputnik), and first human in space (Yuri Gagarin.) The culmination of this era occurred when, in July 1969, humans travelled to an extraterrestrial body – the moon.

Let’s put this development in context. What might a biologist list as the five most important developments in the history of life? The transition from bacterial to eukaryotic life would certainly make the list, as would the evolution of a central nervous system. The differentiation between plants and animals would be on there, as would the rise of mammals. Yet all of these developments occurred on Earth. In 1969, life left Earth for another solar body for the first time – and that deserves a place on the list. That means that you were 69 when one of the five most important events in the history of life occurred. Given this, everything else that occurred during your life seems inconsequential.

When you were 91, you saw Communism – the ideology that viably threatened the entire world when you were 17 – come crashing down. In your late 90s, humans increasingly began to use a new technology called the internet to exchange information. When you died in the year 2000, the internet was fueling what was then the biggest speculative bubble in history, though of course the euphoria must have seemed very real at the time.

Perhaps, on your deathbed, you compared the world in which you were then living to the world into which you were born. It very well might have seemed like two entirely different worlds. When you were born, most people used horses to travel, Germany was the ascendant hegemon of the world, and humans communicated by sending letters via the post. When you died, most people used automobiles to travel, Silicon Valley was the ascendant hegemon of the world, and humans communicated via ones and zeroes carried by electromagnetic signals beamed invisibly, instantly, around the world. These really were two different worlds.

And yet, you lived through every moment of the continued metamorphosis from the former to the latter. Would it really have been so impossible to extrapolate the coming transitions?

I was born in 1998 – not a bad analog to 1900 for you. If I die in 2100, will it be such a different world than the one into which I was born? Or was the 20th century unique in its rapid change? More importantly, what nascent revolutions are present today, waiting for the right moment to burst forward?




1 – Official government figures

Industrial Demography

by Jasper Gilley

Note: this is effectively a sequel to the post We’re Still in the Industrial Revolution. You should read that post before reading this one.

We’re still in the Industrial Revolution, and we will be for the foreseeable future. In the first Industrial Revolution post, we looked at the industrial future through the lens of micro-industrializations, speculating as to which would come next. But one doesn’t have to look solely through the lens of micro-industrializations. This post will trace an aspect of the industrial future that will transcend any individual micro-industrialization: the Industrial demographic shift.

Industrial Demography

In the pre-Industrial world, there was a lot of death. Every year, for every 1,000 people alive, about 39 would die.¹ That means if you had 100 friends in January 1600, anywhere in the world, you could expect about 4 of them to die by December.

Fortunately, there was also a lot of procreation. Every year, for every 1,000 people alive, about 40 would be born. The end result was that populations remained roughly stable or grew slightly, except in times of war, disease, or famine, when populations shrank dramatically (30-60% of Europe’s population was wiped out by the Black Death.²)
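As a quick sanity check on those round numbers, a tiny sketch of the arithmetic:

    # Rough pre-industrial demographic arithmetic using the round figures above.
    death_rate = 39 / 1000    # deaths per person per year
    birth_rate = 40 / 1000    # births per person per year

    friends = 100
    print(friends * death_rate)        # 3.9: roughly 4 of 100 friends die in a year

    net_growth = birth_rate - death_rate
    print(f"{net_growth:.1%}")         # 0.1%: population roughly stable year to year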

Then the Industrial Revolution arrived in the West, and death rates began to fall as a result of dramatically increased food supply. Procreation, however, failed to adjust accordingly, and industrialized nations subsequently saw a massive population increase (Britain’s population doubled from 1700 to 1800, and again from 1800 to 1850.) Eventually, people caught on and stopped procreating so much, and population sizes began to level off again.

Many historians divide this demographic transition into phases. In Phase 1, birth and death rates are both high and the population is roughly stable; in Phase 2, the death rate falls while the birth rate stays high, and the population grows rapidly; in Phase 3, the birth rate begins to fall and growth slows; in Phase 4, birth and death rates are both low and the population levels off again.

In western Europe, where the Industrial Revolution originated, Phase 1 ended in the late 1700s, Phase 2 ended in the late 1800s, and Phase 3 ended around the 1970s. The United States is still arguably in Phase 3, China is in Phase 4, India is in Phase 2, and much of the developing world is in Phase 1.

Yet Phase 4 isn’t the end of the line. In some long-industrialized nations, such as Russia and Germany, the birth rate has fallen below the death rate, leading to declining population sizes if immigration is a negligible factor. Some define a fifth phase for this phenomenon.

The important thing, however, is that slowing population growth across the world is and will be the norm for the foreseeable future. All nations are moving inexorably towards Phases 4 or 5. So far, immigration to the developed world from the developing world has largely prevented population shrinkage. The population of Canada, for instance, is growing at the same rate as that of India, despite the fact that women in India give birth to nearly one more child than women in Canada, on average (this doesn’t sound like a lot, but it is.) As fertility in the developing world declines, however, immigration will become a much less significant factor, and European nations will see large-scale population shrinkage.

The New Automation

As outlined in the previous Industrial Revolution post, a constant of the Industrial era has been the automation of jobs performed by humans, leading to the creation of more jobs for humans than were originally lost. This automation has been driven largely by economic forces – why pay a human to do a task for $1 when you can pay a machine to do it for $0.10? Of course, as long as there are humans to innovate, this sort of automation will continue on.

But as humans become more scarce, at both a national and a global level, they’ll be decreasingly able to perform all the tasks required in the economy at large, especially if the economy keeps growing exponentially. A new bout of automation will be ushered in by demographic forces, with market forces as the vector: as people become a scarce commodity, they’ll need to be paid more, giving new teeth to the incentive to automate. Indeed, there may simply be no humans around to perform certain tasks that humans have always performed.

Make Birth Rates Great Again

As I see it, there are three possible outcomes to the industrial demographic transition. One is that across the world, birth rates stabilize in the general vicinity of death rates, the world population levels off at around 12 billion, and humans continue living on Earth as before. This outcome is really boring, but I had to mention it because it’s a distinct possibility.

The other two possibilities are much more interesting. To examine the more pleasant of the two, consider what causes population growth. Populations of any sort grow when they have a surplus of food and space, both of which have been extremely abundant for the past 300 years. Perhaps it’s no coincidence, then, that the human population is leveling off now, when most of the world’s habitable areas have been inhabited. For future humans, however, the Earth need not limit spatial expansion: it seems increasingly likely that humans will set foot on Mars and become a multi-planetary species before 2050. The forthcoming age of space exploration, therefore, may give rise to a second era of exponential population growth. Automation and procreation will coexist as they have for the past 300 years, and the driving force behind automation will once again be economics. In many ways, this would be most like the world as it is now.

Not With a Bang, But a Whimper

Yet if present trends continue into the future indefinitely, another outcome may occur. The population of a humanity confined forever to Earth, never to explore again, might stagnate permanently or even decline irreversibly. In the end, natural forces might prove superior to a humanity made weak by demographics, leading to what would be the end of sentient life in the universe, as far as we know.

Yet, of course, Industrial Revolution-era machines need not be subject to biological demographic forces. As machines become more intelligent, they may become capable of operating without human guidance of any sort. Thus, a sort of biological twilight would occur: humans, having lost Nietzsche’s Wille zur Macht, or will to power, would become obsolete, replaced by their synthetic successors. Personally, I find this possibility infinitely more compelling than Asimov-esque arguments that machines will out-compete humans directly.

There is also an interesting comparison to be made between the notion of biological twilight and the ideas of Thomas Malthus, who showed that, in a stagnant economy, occasional Malthusian catastrophes would put a limit on population growth. Biological twilight would be the ultimate Malthusian catastrophe insofar as it would put a final limit on population growth, except that it would be a total reversal of Malthusian concepts (in which populations always tend towards positive growth until they are decimated.) As T.S. Eliot’s poem The Hollow Men concludes, “this is the way the world ends: not with a bang, but a whimper.”

Demography and Geography

Which of the three outcomes (stagnation, continued acceleration, implosion) will occur? Looking at nations in advanced stages of the demographic transition would seem to suggest that implosion is the most likely. Most long-industrialized nations have birth rates well below the replacement rate of just over two births per woman, which means that without the effects of immigration, their populations would be shrinking (once again, immigration will become much less of a globally significant factor in coming decades.)

Yet predicting the future by this method assumes that the limits of human territory will be the same indefinitely. This is, of course, an assumption that will only be true for a limited time. Thus, humans’ demographic future and humans’ geographic future are intricately intertwined. Given some of the more recent developments in humans’ geographic future, however, I wouldn’t be quick to discount continued acceleration as the likely course of the future.

 


¹ – The Economist

² – New World Epidemics in Global Perspective by Suzanne Austin Alchon, p. 21

Is Humankind Significant?

by Jasper Gilley

I recently came across two extremely interesting and entirely contradictory quotes from two very different astronomers:

“Now nature holds no mysteries for us; we have surveyed it in its entirety and are masters of the conquered sky…The breed of man, who rules all things, is alone reared equal to the inquiry into nature, the power of speech, breadth of understanding, the acquisition of various skills: he has tamed the land to yield him its fruits, made the beasts his slaves, and laid a pathway on the sea; nor does he rest content with the outward appearance of the gods, but probes into heaven’s depths and, in his quest of a being akin to his own, seeks himself among the stars…reason is what triumphs over all. Be not slow to credit man with vision of the divine, for man himself is now creating gods and raising godhead to the stars, and beneath the dominion of Augustus will heaven grow mightier yet.”

 – Marcus Manilius, Astronomica, ~25 AD (translated by G.P. Goold)

“The human race is just a chemical scum on a moderate-sized planet, orbiting around a very average star in the outer suburb of one among a hundred billion galaxies. We are so insignificant that I can’t believe the whole universe exists for our benefit. That would be like saying that you would disappear if I closed my eyes.”

 – Stephen Hawking, 1995 AD

Firstly, I find an incredible amount of irony in these two quotes. Hawking uttered the above quote at a time when humans had effectively obtained control of the entirety of their planet, set foot on their planet’s moon, and made all of their information universally accessible, for free. Manilius wrote at a time when humans believed that the sun orbited the earth, that projectiles traveled in triangles, and that there were four elements (earth, water, air, and fire.) One would think each would be more qualified to spout the other’s opinion.

Yet there’s something very visceral about these quotes. Who is right?

To decide, we must define a reference point by which humanity is to be judged. Manilius seems to make his evaluation by comparing humanity to other animal species; this is fine, except that Hawking might point out that there very well may be alien species in the universe that perceive us to be as primitive as we perceive chickens. Hawking doesn’t argue this, however. He seems to believe that humans are insignificant simply on the basis of the enormity of the universe.

Hawking would do well to consider, therefore, that there are as many neurons in the average human brain as there are stars in the Milky Way galaxy.¹ If large numbers alone denote significance, then humans are certainly of great significance.

Since Hawking would seem to have no objection to judging humans by comparison with other animal species, one may easily conclude that humans are, at least technologically, more advanced than animals. The important question is, however, how did we get to the Moon? The critical factor enabling inter-planetary travel was industrialization, which in turn was enabled by the Enlightenment, a movement centered around scientific method, which is predicated on the assumption that humans do not know everything and that they can become more advanced.

The answer to the question “Is humanity significant?”, therefore, is paradoxical. Humans are significant, but our significance stems directly from our ability to admit that we are not consummately significant. To resolve the dispute between our warring astronomers: Manilius is correct because the essence of Hawking’s thesis is correct. Enlightenment philosophers, however, would object to the goal of each statement – to evaluate the current status of humans’ significance. In their eyes, the important thing would be to recognize that humans will always be able to become more advanced. As Nikola Tesla once said:

“It is paradoxical, yet true, to say, that the more we know, the more ignorant we become in the absolute sense, for it is only through enlightenment that we become conscious of our limitations. Precisely one of the most gratifying results of intellectual evolution is the continuous opening up of new and greater prospects.”

 – Nikola Tesla


¹ space.com

Featured image from Dr. Greg A. Dunn

We’re Still in the Industrial Revolution

by Jasper Gilley

A little under 200 years ago, one of the most important events in human history began:

The graph of human population over time is really boring until about 200 years ago, when it suddenly goes exponential. Clearly, something interesting is going on.

The Industrial Revolution

Historians generally think of the Industrial Revolution as something that happened in the late 1700s and early 1800s. If you ask a historian to tell you interesting things about the Industrial Revolution, they’ll probably mention the invention and subsequent adoption of power looms, steam engines, and gas lighting, along with advances in chemistry and metallurgy. Those were all interesting things, but saying that they defined the Industrial Revolution misses the forest for the trees. One might suspect that something bigger is going on by looking at the world’s historical GDP per capita:

For pretty much all of human history, the average human’s wealth was stagnant or declining (due to population growth.) But suddenly, around 1850, it went exponential, directly correlating with population growth. (If you’re unfamiliar with logarithmic graphs, a straight line on a logarithmic graph denotes exponential growth.)
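To spell out that parenthetical with one line of algebra (a generic illustration I’m adding, with y standing for any quantity – population, GDP per capita – and k for its constant growth rate): exponential growth becomes a straight line once you take logarithms.

```latex
y(t) = y_0\, e^{k t}
\quad\Longrightarrow\quad
\log y(t) = \log y_0 + k\, t
```

The right-hand side is linear in t with slope k, which is why a steady growth rate plots as a straight line when the vertical axis is logarithmic.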

The important thing about the Industrial Revolution is not so much that new technologies like power looms developed, but that those new technologies were used in entirely new ways. The power loom, for instance, in automating a task that previously required a semi-skilled laborer, allowed for the development of an entirely new form of manufacturing in factories and cotton mills. The key thing about the Industrial Revolution was not that humans gained new capabilities so much as that they gained a significantly cheaper and quicker and better way of doing things.

Yet the same could be said about any time in the past 200 years. The rise of railroads in the mid-1800s made long-distance transportation cheaper and quicker and better. The rise of automobiles 100 years ago made short-distance transportation cheaper and quicker and better. The rise of computers and the internet is currently making communication cheaper and quicker and better. The exponential gains in world GDP per capita seen 200 years ago have shown no signs of abating since. We’re still in the Industrial Revolution.

Exponential Economics

In the post-Industrial era, a number of distinctive economic trends have recurred. Perhaps the most significant of those trends is that of automation: the substitution of capital for labor.

Consider the pre- and post-Industrial methods of farming:

 

If you’re a pre-Industrial farmer, you’re not happy, because you have to do a lot of manual work, in addition to owning and feeding an ox. In short, you do a lot of laboring. The only upside to your situation is that you don’t have to raise much capital to get started in the farming business: since everyone needs them, oxen are relatively inexpensive, and they can be fed with the food you grow. If you’re a post-Industrial farmer, though, your work consists mainly of driving the tractor and arranging for repairs when necessary. You probably had to take out a sizable loan to buy the tractor and other equipment, however; you needed to raise a substantial amount of capital.

Prior to the Industrial Revolution, it would have been impossible for any non-state actor to raise the amount of capital required to farm on an industrial scale, even if it had been possible to buy a John Deere tractor in 1700. This points to a significant macroeconomic shift that occurred in parallel with the process of industrialization. Growth early in the Industrial Revolution created investment opportunities, which began a positive feedback loop that gave rise to the modern capitalist economic system:

During the Industrial Revolution, the economy began growing appreciably again for the first time in thousands of years, so it was newly possible to make money not by “doing anything,” but simply by funding others. Thus, the modern investor was born. The availability of investors’ capital enabled more capital-intensive projects to be undertaken, further driving growth. For the first time in human history, the factor limiting economic growth was the slowness of scientific progress, not the unavailability of capital.
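To caricature that feedback loop, here is a toy simulation I’m adding purely for illustration – the savings rate, productivity growth, depreciation, and exponent are made-up numbers, not estimates from any source. Part of each period’s output is reinvested as capital, and a larger capital stock raises the next period’s output.

```python
# Toy positive-feedback loop: output -> reinvestment -> capital -> more output.
# All parameter values are invented for illustration.
capital = 1.0
productivity = 1.0
savings_rate = 0.2       # share of output reinvested each period
depreciation = 0.02      # share of capital worn out each period

for year in range(1, 101):
    output = productivity * capital ** 0.7                      # more capital -> more output
    capital += savings_rate * output - depreciation * capital   # reinvest part of output
    productivity *= 1.01                                        # slow "scientific progress"
    if year % 25 == 0:
        print(f"year {year:3d}: output = {output:.2f}")
```

Notably, if you set the productivity growth to zero, output eventually levels off: capital accumulation alone runs into diminishing returns, which echoes the point above that scientific progress, not capital, became the binding constraint.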

Micro-Industrialization

One way to view the ongoing Industrial Revolution is as a series of micro-industrializations, the first being in textiles and the most recent being in information technology. Each micro-industrialization follows roughly the same synopsis:

  • Machines begin doing a task that until recently was done by humans, undercutting humans on price
  • Human jobs are lost, Luddites protest
  • In the long run, automation creates more jobs for humans in adjacent sectors

Take the micro-industrialization of ATMs in the 1970s. When ATMs began to be installed in banks, biological tellers seemed to become obsolete, and job losses led to protests from modern-day Luddites. Yet in the long run, ATMs significantly lowered the marginal cost of opening a new branch, allowing more branches to be opened, which increased the total number of humans who worked in bank branches.

Or textiles. The power loom wove cotton cloth far more cheaply and quickly than any human could. As a result, human weavers were no longer competitive and had to vacate the industry. But the power loom led to cheaper and better clothes for all, which increased the purchasing power of the poor in particular. Everyone therefore had more money to spend, and the economy grew, more than absorbing the job losses in textiles.

Our Industrial Future

GDP per capita is still rising exponentially. Therefore, we have no reason to suspect that the Industrial era is finished, or that society will not continue to see micro-industrializations that adhere to the above synopsis.

Likely one of the next major micro-industrializations will be in the operation of motor vehicles. About 3.5 million Americans currently drive a vehicle for a living¹, or about 1.1% of the total population. A number of very serious, well-funded startups and established companies will almost inevitably put virtually all of these laborers out of work. The adoption of autonomous vehicles will likely be very swift (once regulatory approval is given), and there will almost certainly be the strongest Luddite backlash since the original Luddites who protested the power loom (given the current political climate, this seems especially likely.)
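As a quick check on that percentage (assuming a US population of roughly 323 million in 2016 – my figure, not one taken from the footnoted source):

```python
# Back-of-the-envelope share of Americans who drive a vehicle for a living.
drivers = 3.5e6          # figure cited above
us_population = 323e6    # approximate 2016 US population (assumed)
print(f"{drivers / us_population:.1%}")   # about 1.1%
```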

Yet consumers will very quickly notice a funny thing happen. Since trucking is the dominant method of transporting goods in the United States, the price of almost every consumer product stands to fall appreciably. The economy will rally along with consumer spending, and the unemployed truckers will find something else to do. The micro-industrial cycle will be complete, and the next micro-industrialization will arrive.

AI

Another pending – and related – micro-industrialization will be that of artificial intelligence (AI.) This is a particularly difficult one to think about for several reasons:

  • We’re currently (early 2017) at the height of a speculative bubble surrounding AI.
  • Many quasi-legitimate thinkers (e.g., Ray Kurzweil) are really on board with the speculative bubble, possibly for dubious reasons (e.g., it makes them money.)
  • AI has been the subject of many apocalyptic science fiction movies and books (most of which are as unrealistic as all science fiction.)
  • The term artificial intelligence is extremely misleading. The things AI algorithms can do may seem “intelligent” to us today, but so did power looms to Luddites. In 50 years, AI will seem as obviously non-intelligent as standard computers do today: cool, but nothing to lose your mind over.

That being said, the implementation of AI (the term is misleading, but it’s the most commonly used, so let’s go with it) will be a very serious micro-industrialization, though it will still fall within the parameters of previous ones.

Before we analyze the synopsis of AI’s micro-industrialization, it will help to provide a clear definition of what an AI algorithm actually does. Since Wikipedia sets the world record for the worst possible definition (“Artificial Intelligence is intelligence exhibited by machines”), it won’t be difficult to do better. Broadly speaking, AI algorithms analyze patterns in data and use those patterns to make predictions about further data. For instance, one type of image-recognition algorithm takes labeled photos of an object as input and, as output, can recognize (imperfectly) that object in new images it is fed. Thus, AI algorithms act without being explicitly programmed to do so – hence the misconception that they are somehow “intelligent.”
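To make that definition concrete, here is a minimal sketch of the pattern-then-predict loop, using scikit-learn’s bundled handwritten-digit images as a stand-in for a real image-recognition task. The dataset, model, and split are illustrative choices of mine, not a description of how any particular company’s systems work.

```python
# A minimal "learn patterns in data, predict on further data" example.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels

# Hold some images back so the model is judged on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # learns statistical patterns in the pixels
model.fit(X_train, y_train)                # "analyze patterns in data"

# "...use those patterns to make predictions about further data"
print("accuracy on unseen images:", model.score(X_test, y_test))
```

Nothing in the script is told what any particular digit looks like; the model only fits statistical regularities in the labeled examples, which is the whole sense in which such algorithms are “intelligent.”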

The problem is that there are a lot of laborers whose jobs consist of analyzing patterns in data and using those patterns to make predictions about further data. (I use the term labor to denote a job that is automatable at a given point in time, and human capital to denote a job that is not automatable at that particular time.) In 10 years, there will be a large number of newly unemployed individuals who until recently held stable, white-collar jobs. Due to the upmarket nature of the jobs to be displaced, I’d expect less of a Luddite backlash against AI in general than against the specific application of AI to the operation of vehicles. Still, if Ray Kurzweil keeps up his rhetoric, anything is possible.

Yet just as with every other micro-industrialization, the end result of AI will be greater productivity across the global economy (as long as its gains are widely distributed, not concentrated in the hands of a particular set of corporations.) Data will be analyzed faster and better, effectively putting out of work every human who finds patterns for a living, but this automation will allow humans to hold more meaningful occupations than pattern-finding, growing the economy in the long run.

Beyond Industrialization

As alluded to above, the Industrial Revolution isn’t the first period of exponential gains in human history. The dawn of agriculture and urbanization around 10,000 BC produced an explosion in Earth’s human population that the humans of the time would no doubt have considered as important as we consider the Industrial Revolution to be. If the Industrial Revolution has a predecessor, we can also expect it to have a successor.

What might that successor be? It is likely as difficult for us to imagine the Industrial Revolution’s successor as it would have been for ancient humans to imagine the Industrial Revolution itself. But a key constant in all forms of exponential growth is a long period of stagnation before takeoff. Workers’ productivity was stagnant for thousands of years prior to Industrialization, at which point it rapidly increased. The next round of exponentialism will revolutionize an area that is stagnant now.

Though, of course, thinking about such deep-future events, while fun, is hardly relevant to those of us in the heart of the Industrial Revolution. We have our own exponentialism to discover.




¹ The American Trucking Associations

Featured image originally appeared in The Economist

I Am Not a Millennial

by Jasper Gilley

I am not a millennial. I was born in 1998, and people born prior to 2000 are supposed to be millennials, but I’m decidedly not one. For all the good press they get for being “the most educated generation yet,” “held back by their elders,” etc.,¹ millennials do some incredibly irritating stuff.

Millennials Wear Bell-Bottom Jeans

Every millennial has seen that old photo of their parents arrayed in a jean jacket, bell-bottom jeans, and a headband, having caught Saturday Night Fever in the 70s. “Thank goodness we enlightened millennials refrain from wearing such silly vestments,” most millennials think. Yet millennials’ baby-boomer parents thought the same of their parents back in the day, and millennials’ yet-to-be-born kids will one day think the same of millennials’ skinny jeans, beards, iPhones, bicycles, and penchant for vinyl records and typewriters. The hipster in 2050 will be analogous to the hippie in 2017: an aging symbol of our silly ancestors who thought they were the coolest cats ever to be on fleek (note the intentional blending of baby boomers’ and millennials’ lingo. Millennials’ lingo will sound as antiquated as their clothing will look in 40 years.)

Millennials Think Themselves Very Cool Because They Have Their Own Telephone

Millennials will recall how their parents thought they were so cool as teenagers because they had their own landline telephone (imagine that!) “Thank goodness we enlightened millennials refrain from getting excited over such primitive technological achievements!” think millennials. “While I’m on the subject of feeling superior to my ancestors, I’ll go update my Facebook status about it!” This, too, is one of the chief sins millennials commit: being far, far too thrilled with social media. It manifests itself in several ways, one of which is simply an egregious lack of self-awareness on such sites. Far more insidiously, however, millennials’ social media addiction has given rise to awful things like the selfie, memes, a renewed obsession with celebrities, and – worst of all – the selfie stick (there are no words to describe how awful the selfie stick is. All I can say is that it epitomizes the worst of contemporary human culture.)

At any rate, the humans of 2050 will have hours of fun deriding the humans of 2017 for their usage of things like the selfie stick: this is one of the primary reasons why I am publicly distancing myself from the generation of people who invented them. I am not a millennial.

Millennials Love The Past While Simultaneously Shunning It

But the worst crime millennials commit is duplicity in their feelings toward the past. On one level, millennials shun the past. The quotes attributed to naïve millennials in the paragraphs above exemplify this (albeit hyperbolically.) Millennial culture has a pervasive attitude of arrival, as in, “we’ve arrived as a culture at our final destination. Our stylistic choices will permanently be in good taste. Phooey on previous cultures that thought the same thing about themselves.” Yet there is a simultaneous glorification of the past. How else might one explain the penchant for vinyl records, typewriters, or even beards?

In the same vein, millennials seemingly revere agrarian society while eschewing the Industrial era. One sees this in millennials’ passion for boots, beards, flannel shirts, and, most painfully, faux-folk pop music like this and this. For a generation that embraces new technology as readily as any yet, one might expect less cultural regression. Or perhaps there is actually a causal relationship: millennials, surrounded by technology, are attempting to escape it, even in ways as trivial as growing beards.

I Am Not a Millennial

Since I espouse none of the aforementioned millennial idiosyncrasies, I intellectually distance myself from millennials themselves. Indeed, I’ve begun referring to myself as a Dot-com Boomer. Being born in the heyday of a major micro-industrialization – namely, the dot-com boom – I can’t help but be captivated by the ever-accelerating forward march of human scientific and technological progress. Having operated a computer since approximately the age of five, I’m less interested than most millennials in simply using the digital products created by Silicon Valley, and far more interested in integrating the IT gains of the dot-com boom with non-digital industries. Nor can I say that I don’t secretly sympathize with the exponential singularitarian rhetoric that permeated the late 1990s. The dot-com boom alone may not have delivered a permanent technological singularity, as some futurists of the era might have thought it would, but in the long run, Industrial society is doing just that. Thus, I discard the retrogressive nostalgia of millennials and look ahead to the next dot-com boom, the next micro-industrialization, the next singularity.

Everyone is a Millennial

A large part of my complaint about millennials is that their culture implies a sense of arrival – that subtle but intoxicating notion that history has reached its apex with the birth of a particular generation. To refute that claim, I’ve argued that every generation feels that way, so millennials must not occupy a place in history fundamentally different from that of any other generation. Yet making such an argument defeats the purpose of complaining about millennials, since one might make the same complaints about every generation.

Indeed, every generation will surely have its own traits that, as they become more apparent with the generation’s increasing age, infuriate subsequent generations (and possibly preceding ones.) I make no claim that dot-com boomers are the exception to this rule.

Of course, not every millennial is equally complicit in their generation’s annoying idiosyncrasies. And whether or not one is a millennial, knowing that one’s own generation must inevitably have idiosyncrasies of its own gives one the ability to remain aloof from them and, in many ways, to transcend generational fashions.


¹ The Economist

Enlightened Misobservation

by Jasper Gilley

In the movie Good Will Hunting, there’s a great moment where the titular math genius compares his talent for math to Beethoven’s or Mozart’s at the piano:

Will Hunting: I look at a piano, I see a bunch of keys, three pedals, and a box of wood. But Beethoven, Mozart, they saw it, they could just play. I couldn’t paint you a picture, I probably can’t hit the ball out of Fenway, and I can’t play the piano.
Skylar: But you can do my [organic chemistry] paper in under an hour.
Will: Right. Well, I mean when it came to stuff like that…I could always just play.

As a pianist myself, I know Will’s analogy to be true. Having spent a lot of time at the piano, I began to see it less as what it literally is and more as what it stands for – music.

This phenomenon isn’t limited to music or organic chemistry. Take literacy, for example. An illiterate person will see a word for what it literally is: dark smudges on a page or screen, and nothing more. But when you (I’m presuming you’re literate if you’re reading this) see the word mountain, you see a mountain, not dark smudges (though the dark smudges are the vector through which you picture a mountain.) The illiterate person might actually be said to see the word more literally, as might Will Hunting in the case of a piano.

Ever since the Enlightenment, Western society has valued literal observation as a natural companion to the pursuit of science and reason. The Enlightenment-era philosophy of empiricism, for instance, holds that scientific progress is the direct result of our observations of the universe, implying that accurate and literal observations are of paramount importance. Yet clearly, literal observations can also be a symptom of ignorance. An inaccurate perception of reality, it seems, can be more enlightened!

That being said, any enlightened misobservation will always be the result of countless hours of practice – nobody is born literate, or able to play the piano. And of course, for every enlightened misobservation, there are a thousand unenlightened ones. Yet the very existence of such enlightened instances makes a powerful statement about how one becomes enlightened in the first place: it is through the repeated and continual pursuit of an objective, to the point that the things associated with that objective begin to lose whatever meaning they had prior to the process of enlightenment.

When enlightenment is sought through the scientific method, literalness may indeed be beneficial; when it comes from elsewhere, it may not be. We would do well to consider the implications.