Thoughts on the potential impact of mass artificial intelligence implementation, beginning with a brief history of its rise, according to an open nerd (Part 1).

What is AI?

AI has many definitions and covers a broad field of application. It is therefore quite difficult to define it concisely without leaving out some important aspect. In fact, encapsulating it in a single definition is a challenge shared by its more fundamental predecessor, machine computing, which, over the past roughly 80 years, has gone through a number of definitions and use cases and is now so ubiquitous that there is not a single industry or endeavor on earth that is not somehow impacted by it. It evolved from a means of automating immensely long, repetitive and therefore error-prone calculations, to a powerful tool for science, to what is today essentially a substrate of human cognitive activity on a global scale.

In the case of AI, if we step back and look, we can see one general, overarching element common to its implementation regardless of specific use case or industry: the outsourcing of cognitive labor or intellectual tasks; a handing off of work from the human mind to what is essentially a “computer mind”, although one that is (currently) vastly more constrained and, essentially, “dumber” than its human counterpart in terms of real cognition. Using this concept, a good working definition for AI could be “an assortment of tools and technologies whose purpose is to ease the cognitive burden on the user and thereby leverage the ratio of mental input to creative output”. And, I would add, it does so to a degree that is at least an order of magnitude above what traditional machine computing does for the typical user in daily life (there are some exceptions in terms of raw processing power being put to work against tasks like sequencing the human genome, climate simulation, computer-aided design and so on, where the output is far more voluminous and useful purely as a ratio against the input).

Just as any physical tool, from a hammer to an airplane, functions as an extension of the user’s intention and alleviates the physical effort required to complete a physical task, so AI tools function as an extension of the user’s mental effort to accomplish a given mental task. Here, though, the term “mental” covers a wide range of what generally falls under the subjective human experience that is the background of the desires, needs and creative work we are constantly processing as sentient beings. Just as with machine computing, AI is no longer strictly limited to math and the sciences. From art to commerce, from medicine to education, there is no aspect of the human experience which is not, in some way, open to the implementation and, ideally, benefit of AI systems. And its implementation will likely take as many forms and display as many types of interface as the revolution of machine computing has, only exponentially faster. While this is currently a controversial and, for some, even scary thought to contemplate, the genie is out of the bottle and, if past human innovation is any metric to go by, it is never going back.

So, in this blog post, it is my intent, as a humble human and open nerd, to share my thoughts on what the future of this strange partnership between humanity and one of its most potentially powerful technologies may look like.

Think of the everyday cognitive tasks we face: work, shopping, deciding where to go, what to eat, what music to play, what movie to watch, what method to use, and so on.

All of this is generally work undertaken by the mind more so than the body, and so we may think of AI in similar terms: an extension of the human mind, but with a variety of interfaces that may range from extremely direct to almost unnoticeable. Whatever the case, the general purpose of AI has been, and will continue to be, to make easier those tasks requiring the use of cognitive energy.

Later in this series I also intend to enumerate the fears concerning the direction of this development, and to contest them with human creativity and the desire to engage and grow.

How did we get here?

The truth is that AI is an innovation which has long been in the making. Its path from conception to real-world implementation is one that covers not decades but centuries. The desire for a “thinking machine” is very old. The reason it took this long is that several large-scale, revolutionary technologies had to become fully established before any form of AI could become truly useful to humanity. On top of this, there was also the further development of the associated software that would run on the emergent hardware, and some means of “teaching” these thinking machines to function and behave in a way useful to their human creators. In short, there was a lot of work to be done, beginning first on a scale of centuries, then decades, then years and now, finally, oftentimes just months if that. Development at exponential rates of acceleration, it turns out, is just one of AI’s specialties.

To outline in brief where AI came from: there was, first, a long, slow period during which humanity developed the languages of mathematics and physics as a means of representing natural phenomena as information that regularly described and predicted real events. This supported an equally long and slow period of trial and error through which the rudimentary hardware for machine computing (along with other things like electric lighting, indoor plumbing, automobiles and Big Macs) was developed, with its first approximate success occurring nearly 200 years ago in the early 19th century. There were many attempts, many small successes and many failures leading up to and during this time, but the first widely recognized design for a general-purpose machine was the “Analytical Engine”, conceived by Charles Babbage, with Ada Lovelace writing what is often considered the first published program for it. Ada Lovelace was, incidentally, irrefutably influential as a female mathematician and visionary in a world dominated by men, and “…the first to recognise that the machine had applications beyond pure calculation.”; a realization the future implications of which are probably impossible to overstate. Even as basic as the “Analytical Engine” is by today’s standards, it was far, far ahead of its time. So far ahead, in fact, that most would-be investors saw no real need for it in the market of the day and more or less ignored it for another 100 years or so, until the end of WWII, when interest in such technologies rose again. This notion of revolutionary technologies being seen as having no practical use in the world would be a continuing trend with machine computing in general for many decades, and probably contributed to what is seen as the halting nature of the evolution of AI, where, at times, decades would pass with no notable developments. Nonetheless, the design of the “Analytical Engine” served as a milestone on the long path toward large-scale implementation of machine computing, the substrate on which all AI systems run today.

The story of the evolution of machine computing, from WWII through the 80’s and, more so, the 90’s and 2000’s, when “personal computers” became as ubiquitous as refrigerators and TVs in most western homes, is one that follows these trends: miniaturization, democratization, diversification, optimization, connection and increased accessibility. That last one happened on a scale probably rivaled only by technologies like the automobile. And, indeed, the evolution of the personal computer, from the monolithic, incredibly inefficient and expensive monstrosities that only the defense department or a university could afford to the sleek, affordable, hyper-fast and connected magic boxes they are today, has probably brought as much benefit to humanity in terms of the dissemination and democratization of information as the mass implementation of the combustion engine did for transport and commerce.

So what happened next? What happened in the roughly 30 to 40 years between the mass adoption of home computing machines and the rise of AI? DATA. In a word, data. Data was the missing ingredient, the dynamic resource that represented real human activity in digital form. This was absolutely key for AI to become ultimately useful for humanity.

Before the mass adoption of personal computers and the subsequent rise of the internet, the knowledge of humanity was confined solely to our books and our brains. While it was obviously useful and immensely influential in improving the quality of our lives, it was not uniformly accessible. It was distributed across hundreds of countries, thousands of languages and billions of individuals with no concrete way of collating it or parsing through it. It was essentially frozen in whatever form it had been imprinted on, imprisoned almost, isolated in a channel of biological, cultural and geographic evolution that may or may not survive to be useful to anyone. And then the internet came along.

What the internet did (particularly social media) was, for the first time in human history, make vast amounts of human thoughts, sentiments, discoveries, behavior, art, education and so on universally accessible in digital form: a form a computer could access just as easily as a human being, only in a way that was highly programmable and trillions of times faster. The implications of this were unprecedented, and we are only now seeing the beginning of what is possible from the interface between machine computing, human ingenuity and the vast oceans of data that our activity on these machines expands daily.
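To make that concrete, here is a toy illustration (a minimal sketch with an invented three-sentence “corpus”, not taken from any real system): once text exists in digital form, a few lines of Python can scan, count and rank it at a speed and scale no human reader could match.

```python
from collections import Counter
import re

# A tiny, invented "corpus" standing in for the oceans of digitized text online.
corpus = [
    "The internet made human knowledge accessible in digital form.",
    "Digital data is something a computer can read as easily as a human can.",
    "Computers can scan digital text millions of times faster than we can.",
]

# Lowercase each document, split it into words, and count every occurrence.
word_counts = Counter(
    word
    for document in corpus
    for word in re.findall(r"[a-z']+", document.lower())
)

# The most common words across the whole corpus, ranked in an instant.
print(word_counts.most_common(5))
```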

AI is essentially the next logical step in the evolution from agriculture, to science, to industry, to computing, to networked computing and, finally, to applications running on that network which actively “learn” from it and (more accurately) collect, organize and present the information available on it in a task-focused manner useful to human users. It is a refinement of our capabilities that draws directly from what we contribute to the world (our data) in a way that no technology has ever come close to doing until now.
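As a rough sketch of what that “learning” can mean in practice, the snippet below trains a tiny text classifier on a handful of invented, labeled example sentences, assuming scikit-learn is installed. Real systems use vastly more data and far richer models, but the principle is the same: patterns in human-generated data become a tool that can organize new inputs for us.

```python
# A minimal sketch of "learning" from human-generated data (assumes scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training examples: snippets of human-written text with topic labels.
texts = [
    "new study finds exercise improves heart health",
    "doctors recommend regular checkups and sleep",
    "local team wins the championship in overtime",
    "star striker scores twice in the season opener",
]
labels = ["health", "health", "sports", "sports"]

# Turn each sentence into word counts, then fit a simple Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The model now sorts unseen text into the categories it learned from our data.
print(model.predict(["the team scores in the season opener"]))    # expected: ['sports']
print(model.predict(["regular exercise improves heart health"]))  # expected: ['health']
```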

That’s all for part 1. This post was getting long so I’ll continue in part 2 by addressing the following questions:

What will be the potential benefits of AI?

What will be the potential difficulties or challenges? (See Terminator 2. Joke.)

What can we do to ensure the best outcome?

How can we support and prepare the younger generation?

Thanks for reading this far and stay tuned.
