No Magic Required

Author: Theodore Odeluga

Unless you’ve been living under a rock since the start of 2023, you’ve probably heard the term artificial intelligence more often than you’d care to count, as all kinds of opinion makers seem to have suddenly discovered this intriguing (but at least half a century old) field of computing with the unveiling of its latest high-profile example – ChatGPT.

The mainstream media is now more obsessed with AI and predicting the future than the people who actually work in predictive analytics, and from week to week, with one hyperbolic headline after another, it doesn't look as if there'll be any shortage of outlandish scenarios for the pundits to explore any time soon.

Over the last year, we've been entertained by a conveyor belt of sage, chin-stroking “experts” anticipating everything from human enslavement to nuclear war – and that’s just this side of the 21st century.

Anyone would think, at this point, that we’d be better off no longer evolving, as our puny struggles to remain the dominant species give our soon-to-be robot overlords ever more excuse to laugh in our faces.

Given how profoundly uninformed the conversation has become, and how difficult it now is to tell the difference between serious discussion and the plot of a James Cameron movie (even he’s jumped on the bandwagon), the subject of AI is in danger of becoming a silly sci-fi parody of itself.

Where do we begin the necessary demystification (to stop the rot)?

Let’s start with a definition. What is artificial intelligence?

AI is just the computerized imitation of intelligent human behaviour.

Simply put, AI is a computer imitating intelligent human behaviour in order to carry out a complicated task that would otherwise take a human much longer to complete (and likely less efficiently).

You might be wondering at this point how a computer can be “intelligent artificially”. We’ll get into that but first to capture your imagination further, let’s briefly explore the different types of AI.

Yes, AI isn’t “just AI”. In other words, there are different types of artificial intelligence.

Natural Language Processing – a branch of the field based on computers processing the elements of human language and being able to communicate with humans in, say, English or Spanish (think chatbots and smart assistants like Alexa).

Deep Learning – a sub-branch of Machine Learning, where a neural network (a collection of simple processing units working together to loosely imitate the brain’s structure of neurons) uses multiple “layers” in the network’s design to build up recognition and knowledge of complex and detailed information.

Artificial Life – Ok, this one’s a bit of an outlier, as (real) experts often make a distinction between the study of artificial intelligence and the study of artificial life, but there are of course crossovers in terms of the techniques used in both fields.

Alife concerns constructs such as cellular automata – essentially virtual representations of simple cellular organisms, programmed to behave in a similar way to their biological counterparts.

AI is massive so in addition to the above let’s not forget text analysis, theory of mind, reactive machines etc, etc…
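To make the artificial-life idea a little more concrete before we move on, here’s a minimal sketch in Python (purely illustrative – the rule number and grid size are arbitrary choices of mine) of an “elementary” cellular automaton. No single line of it is intelligent: each cell just follows one trivial rule based on its neighbours, yet surprisingly intricate patterns emerge when the rule is applied over and over.

# A minimal sketch of an elementary cellular automaton (Rule 30),
# one of the simplest constructs studied in artificial life.
# Each cell is just 1 or 0; the next state of a cell depends only on
# itself and its two neighbours.

RULE = 30  # the update rule, encoded as an 8-bit number

def next_generation(cells):
    """Apply the rule to every cell, treating the row as wrapping around."""
    new_cells = []
    for i in range(len(cells)):
        left = cells[i - 1]
        centre = cells[i]
        right = cells[(i + 1) % len(cells)]
        # The three neighbours form a 3-bit number (0-7); that bit of RULE
        # decides whether the new cell is "alive" (1) or "dead" (0).
        pattern = (left << 2) | (centre << 1) | right
        new_cells.append((RULE >> pattern) & 1)
    return new_cells

# Start with a single "live" cell in the middle and print 20 generations.
row = [0] * 31
row[15] = 1
for _ in range(20):
    print("".join("#" if cell else "." for cell in row))
    row = next_generation(row)

Run it and a pattern of apparently chaotic structure grows from a single “live” cell – complexity from almost nothing.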

From algorithms to source code

You hear a lot about “algorithms”. What is an algorithm? An algorithm is just a series of steps in a set of instructions.

Essentially, an AI algorithm is no different to any other type of computer algorithm.

Instructions for AI begin with human input – that is, they start out as an idea, which is then written down on paper or published electronically and, if approved by peer review, re-written as the source code of a computer program.

In this latter stage, the algorithm is converted by a human programmer (or a program written by a human to write programs like a human programmer) into the aforementioned code.
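As a purely illustrative example (the task and the code below are my own, not taken from any particular AI system), here’s a tiny algorithm – find the largest number in a list – written first as plain-language steps and then as Python source code (source code itself is explained just below).

# The algorithm, as a series of steps:
#   1. Assume the first number in the list is the largest so far.
#   2. Look at each remaining number in turn.
#   3. If a number is bigger than the largest so far, remember it instead.
#   4. When there are no numbers left, the remembered number is the answer.

def find_largest(numbers):
    largest = numbers[0]          # step 1
    for number in numbers[1:]:    # step 2
        if number > largest:      # step 3
            largest = number
    return largest                # step 4

print(find_largest([3, 17, 8, 42, 5]))  # prints 42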

What is source code?

Source code is just a set of instructions written by a computer programmer to create software.

The actual text of the instructions is a combination of human-readable words, special characters and numbers – all written according to the rules of a particular programming language (Python, for example) using the correct syntax, or structure, of that language.

Once written, the program must then be translated into "machine code" (also known as "machine language") before the computer can run it – sometimes via an intermediate form known as "bytecode". (More about this later).

So, to pick up on an earlier point – let’s revisit the question: how does a machine become “artificially intelligent?”

We can answer this through posing another question. What is human intelligence?

The Encyclopaedia Britannica states:

“Human Intelligence is a mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.”

Source: Britannica.com

A very thorough and compelling description I’m sure you’ll agree.

But I would put it more simply:

Intelligence is the ability to solve problems.

This is a simple statement with profound implications.

Computers are also good at solving problems – once they’ve been programmed by humans (or other computers).

Problems can be broken down into simple mathematical terms.

These terms can be further simplified by the only language machines “understand” – ones and zeros.

In binary terms, these two small but highly significant digits, taken at their most basic level, can’t do very much.

However, these two simple components can be woven into large logical patterns forming streams of instruction (similar to the way humans communicate with Morse code – using only simple dashes and dots to symbolise words).

“Machine code” (binary to us) works the same way as Morse code – eventually, when the pattern of ones and zeros is complete, the message spells out a long list of short instructions.

Because there are so many of these short instructions, the result will look complex.

However, those big, long binary numbers are just long lists, where each item contained in the list is a short, simple instruction.

Taken all together, and when carried out by the computer, this big collection of small bite-sized (or is that byte-sized?) instructional pointers creates the illusion of “intelligence”.
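You can see this “long list of short instructions” idea for yourself. Python, for instance, ships with a standard module called dis that prints the low-level instructions its interpreter will carry out for a given function – these are Python’s own bytecodes rather than a processor’s native machine code, but the principle of many small, simple steps is exactly the same.

import dis

def add_and_double(a, b):
    return (a + b) * 2

# Print the list of short, simple instructions this one-line
# function is translated into before it runs.
dis.dis(add_and_double)

Even that single line of arithmetic becomes a handful of tiny instructions: load a value, load another, add them, multiply, hand back the result.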

If the only characters in a language are binary digits, and the recipients of communications in this simple language don’t get bored and don’t forget what they’ve been told (computer storage is permanent once data is saved), you could get a recipient who only understands instructions in the simplest terms to do some pretty complex stuff.

So there is clearly power in simple binary numbers. But why binary? Why not decimal or hex?

Ones and zeros are a perfect numbering system for the basic "on"/"off" logic of electronic circuitry (the "veins and capillaries" of computers carrying the cells of their "lifeblood" - electrons), with one symbolising "on" and zero symbolising "off".

In this context, instructions can be broken down numerically, telling the machine to "stop" (off) or "go" (on) at different times and at different frequencies, where large combinations of this apparently "erratic" set of directions guide the machine through detailed routines.
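You can peek at these patterns from any Python prompt. The short snippet below (again, just an illustration) shows that an ordinary character and an ordinary number are stored as nothing more than rows of "on"/"off" switches.

# The letter "A" is stored as the number 65, which in turn is stored
# as the bit pattern 01000001 - eight tiny "on"/"off" switches.
print(ord("A"))                 # 65
print(format(ord("A"), "08b"))  # 01000001

# The same goes for numbers: 42 is just 00101010 to the machine.
print(format(42, "08b"))        # 00101010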

One question which may have occurred to you by now is how a computer, like a vehicle moving down a long, winding road, might simply run into difficulties if it’s been instructed to do one thing in a particular way but the situation changes.

Just as a vehicle can only safely adapt to a twisty road if there's someone driving it, computers must also be able to adapt to dynamic conditions.

The word "condition" is key in software terms and this concept is arguably at the heart of what makes AI often look as if it is actually sapient (able to think and make sound judgements (i.e. - demonstrate wisdom)) or sentient (able to demonstrate feelings and the ability to experience sensation).

Every software engineer knows what a "conditional statement" is.

These branches, built around the terms "if" and "else" in written code (among others), control the flow of a program toward one action or another. They reflect in words the schematic diagrams used to visually plan software, providing computers with a list of ways to respond when anticipated events change while the program is in operation.

Written in a certain way, conditional statements can make computers behave (or look as if they're behaving) adaptively to new situations.

Artificial "intelligence" is simply the logical extension of this on a grander scale.

While methodologies in AI differ according to the various specialisms, the essential principle is the same: Do something different if the situation changes. Here are the instructions.
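Here’s a deliberately simple sketch of that principle in Python. The scenario, the function name and the speed thresholds are all invented for illustration, but the point stands: a few conditional statements are enough to make a program appear to “adapt” to a changing situation – much like the driver on that winding road.

def choose_speed(road_condition, visibility_metres):
    """Pick a speed based on the current situation - plain if/else logic."""
    if road_condition == "icy":
        return 20          # slow right down
    elif road_condition == "wet" or visibility_metres < 100:
        return 40          # be cautious
    else:
        return 60          # conditions are fine

# The "situation" changes, and the program responds differently each time.
for situation in [("dry", 500), ("wet", 500), ("icy", 50)]:
    print(situation, "->", choose_speed(*situation), "mph")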

Let’s dig deeper into the nuts and bolts of programming. The foundation of all artificial intelligence is essentially software development and the more we can understand about this fascinating discipline, the better we can understand AI.

As mentioned earlier, the binary numbers of machine language are the result of translating instructions from the source code written by programmers.

The job of a programming language then is to form a bridge of communication between people and machines when the former direct the latter to create software.

Machines only understand ones and zeros but humans use words, other numbers and special characters.

Programming languages allow us to instruct computers using communication in a form we recognise before it’s translated (special software does the translation – more about that below).

There are essentially two types of computer programming language.

One type of programming language is the “interpreted” language.

A computer program called an interpreter translates the human-readable source into machine language. The interpreter is built into another piece of software which runs the finished program and acts as the “container” environment (like an internet browser running a web application, for example).

In interpreted languages, this translation happens while the program is actually running for the end user on the target device.

Interpreted languages are also called “scripting languages” because they’re typically used to write relatively shorter programs (known as “scripts”).

Examples of scripting languages (AKA interpreted languages) are JavaScript, PHP, Ruby and Python (as it happens, a lot of work in Machine Learning is done with Python).
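You can even watch an interpreted language do its translation on the fly. In the Python snippet below (just an illustration), a tiny program is handed to the interpreter as an ordinary string of text, which it translates into its internal instructions and runs immediately – no separate build step in sight.

# A tiny "program" held as ordinary, human-readable text.
source = """
for n in range(3):
    print("Hello from line", n)
"""

# The interpreter translates the text into its internal instructions...
program = compile(source, "<string>", "exec")

# ...and runs them straight away, while the outer program is still running.
exec(program)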

Scripting languages were initially used for building web-based applications but for a number of years now have also been used to build desktop and mobile software.

Indeed, the lines between the terms “web developer” and “software developer” have become so blurred as to make the roles almost interchangeable.

The other type of computer language is the “compiled” language.

A compiled language is one where a computer program called a compiler (found in IDEs – more about IDEs below) first translates all the human-readable source into binary before the finished program is run.

Programs written in compiled languages often run faster than those written in interpreted languages because, in the case of the former, the machine isn’t multitasking – translating the original source of the program while also running the program itself.

This is why software programs which need to run particularly fast (such as videogames and financial trading software – for example) are often written in compiled languages.

Compiled languages include older programming languages such as C and C++ and newer languages such as Rust, Julia and Go.

Interpreted languages are commonly understood among developers to be slower.

However, compiled languages take time to work with when used on large projects because the bigger and more complex the software, the longer it takes to compile the code.

Interpreted languages are often favoured by developers nowadays over compiled ones, as they are a great aid to productivity (no long compile times to wait for your code to translate) and, in many cases, the programmer can make changes to the program while it’s actually running.

Integrated Development Environments and Text Editors – or Microsoft Word for computer programmers

One aspect of AI which the “experts” (as they stroke their chins) often discuss in hushed tones – and which makes me laugh (sorry) - is machine “self-awareness”.

This alludes to what computer scientists have referred to as “the singularity” (don’t get this confused with the identical term used in physics to describe part of a black hole).

The singularity, according to various theorists, is the point at which technology becomes so advanced that, instead of us controlling it, it takes control of us.

One aspect of this idea is that smart machines will eventually rise up and subjugate us like Skynet in the Terminator movies.

But how exactly would a computer become self-aware? Even with the most sophisticated system based on an advanced analogue imitation of human behaviour – this system would be just that – an imitation. (You’ve guessed by now I’m not quite convinced about the possibility of self-awareness in machines).

Human consciousness is so complex we don’t yet fully understand it.

Without such understanding how could we ever hope to fake it inside a machine?

I imagine this complexity defies the simple paradigm of a model based on ones and zeros – no matter how big the binary number.

The self-awareness argument seems to be saying that the basic components of mechanical instruction can somehow become “alive”.

The characters typed into Word are no different to the characters typed into an Integrated Development Environment (I’ll explain what an IDE is in a second).

However, when a user types characters into Microsoft Word, these characters don’t suddenly become “conscious” or “self-aware”, taking on a life of their own.

(An IDE is software for writing software. A text editor is a cut-down version of an IDE, containing only the essential functions for writing code – without, for instance, an IDE’s debugging features.)

Integrated Development Environments are like “Microsoft Word” for computer programmers and web developers.

When the programmer types instructions into one of these tools to create their software, there is no voodoo involved.

Assuming the coder hasn’t made any mistakes (known as bugs in software terms), the program will run and do what it’s been built to do – nothing else.

It's within our control

Artificial intelligence can’t exist without human input.

Artificial intelligence is the product of source code.

All source code (even machine generated) requires human input at some stage of the process.

Some cite the idea that even the scientists and engineers who build these systems don’t fully understand them and that as a result, some AI behaviours are “unexpected”.

This reminds me of Arthur C. Clarke’s famous observation that any sufficiently advanced technology is indistinguishable from magic.

But AI isn’t magic, is it?

There’s nothing mysterious (or sinister) about artificial intelligence (remember, it’s just source code).

It’s true that many systems are so vast and complex, it would take a full-scale project to analyse them, but just because a system isn’t fully understood, this doesn’t mean it has supernatural powers.

There is nothing spontaneous about a machine’s behaviour.

If it can do something, this is because it’s been designed, built and programmed to do so.

Another misconception stoking heated discussion is the fear of AI overtaking human creativity.

Human creativity can’t be automated.

Creativity is spontaneous while AI is essentially a pre-recorded set of directions.

However, we do need to go beyond simply asking ChatGPT funny questions or getting it to do our homework.

This is a powerful tool and right now (creatively), we’re not even scratching the surface of what we can do with it.

Simply possessing a powerful new tool won’t give anyone the edge over someone else in a creative field.

It’s one thing to have power at your fingertips and quite another knowing how to use it.

You still need ideas.

Without ideas, all you have is the equivalent of a computer with no software on it – a useless piece of junk.

Innovation still matters and this is what will give the winners of the future their edge over lazy types who think they can replace hard intellectual work by simply pushing a button.

So, for example, while ChatGPT can assist you with writing code (or, if you’re lazy enough and not actually interested in programming, write all your code for you), what are you actually writing code for? Building a search engine? A triple-A 3D game? A Pacman clone?

It's not the tool, technology or even the code itself which makes the difference – it’s the idea which inspires you to pursue the endeavour in the first place.

In this sense, ChatGPT (and other Large Language Models) are just additional tools in an already well-populated toolbox of useful resources.

Creativity matters

People’s fears about artificial intelligence aren’t so much about the technology itself and its potentially destructive power as about the large, wealthy corporations who currently preside over its capability.

In other words, the threats people describe are not really posed by AI at all but by Big Tech – the extraordinarily wealthy and powerful organisations behind numerous applications of the technology.

Clearly, if unchallenged, Big Tech will, where it can, continue to flout international rules and laws protecting individuals’ rights in its relentless pursuit of profit.

For example, under the guise of supposedly being unable to control the “unpredictable” and “autonomous” power of AI, it will abuse and ignore the rules of copyright to steal intellectual property – as we’ve seen where artists have found parts of their work used, without permission, in the “spontaneous” responses generative AI systems produce to user queries.

These corporate entities – not the technology itself – are the real danger to our creative and civil freedoms, in that they will never stop pursuing their own interests and, where those interests demand it, will continue to do so at our expense.

Computer programs don’t spontaneously steal intellectual property by themselves or somehow independently attack our privacy by creating fake videos with our faces in them.

This will only happen (and is only happening) because the companies and organisations who develop such tools and products deliberately or negligently direct their resources in such a way as to either simply allow it to happen or make it happen themselves.

As stated before, an AI algorithm is essentially no different to any other type of computer algorithm and AI programs are simply computer programs – not some otherworldly power.

Unfortunately, in the harsh reality of free-market economics, there is no incentive to make tech companies behave differently.

And so we must end on a note of wisdom (not caution, wisdom (in other words, be bold but wise)).

Back in May, after all the mad claims we had to endure from the start of the year, the first observer I actually heard make an intelligent comment about intelligent machines was Stephanie Hare, a Technology Ethicist.

On the BBC news channel, she made the astute point that all the “science-fiction” warnings about AI quoted by “the experts” were distracting everyone from the real dangers of AI happening today, not in some distant sci-fi future – the present-day problem of disinformation, for example, or the injustices of negative bias, through deliberate or negligent programming, within facial-recognition systems.

Yet the focus of concern is still on that supposed distant (and hypothetical) existential peril. For instance, shortly after GPT-4 took the world by storm, a group of AI practitioners added their signatures to an open letter warning about the risks posed by the technology.

Despite issuing these dire warnings, these same groups, companies and entities continue to develop the technology, happily taking the funding required – without seeming to care about the implications of their own work.

Conclusion

In truth, when future historians look back on what has happened to and with AI between their time and now, they’ll identify two distinct cultural strands.

On the one hand, there is the anti-democratic, criminal version of AI: deepfakes, racist facial-recognition systems, intellectual property theft, lies and propaganda. On the other, a more human version, where practitioners, scientists, engineers and the creative community develop it for the benefit of humanity.

Those historians will also be reminded of the fact that if the majority of the world doesn’t want to succumb to the negative outcomes described above, then we simply won’t. We do have the choice. This tool is within our control.