Thomas Young, an English scientist who died in 1829, has been described as "the last man who knew everything". Whilst this is almost certainly not accurate, there was at least a lot less of everything to know in the 1820s.
Recent Machine Learning models have been massive in scale. For example, 'GPT-3' (a Generative Pre-trained Transformer model), released in 2020, has a huge 175 billion parameters. The 45 terabytes of data the model was trained on included a large proportion of the pages on the internet, a vast collection of books and all of Wikipedia. To put this into context, the entirety of Wikipedia represented just 3% of the GPT-3 training data - that's a huge training dataset.
If Thomas Young can be considered the last person to know everything, GPT-3 can be considered one of the first Machine Learning models to know everything.
So, what do models like GPT-3 allow us to do?
Imagine being charged with writing some creative text for a new product launch. Models such as GPT-3 could take a few brief text prompts from you and create the entire content copy.
Imagine you wanted to create photorealistic artwork to support your product launch - but the right image doesn't exist. Write a few words describing the image you want and let the model create a photorealistic image to match your description.
Imagine you want to create an app to accompany your new product. You guessed it: write a few lines prompting the model as to what the app should do and let it generate the code for you.
Want some music for the product's advertising campaign? Yep, get the model to create that too.
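To give a flavour of how simple that interaction can be, here is a minimal sketch of prompting a GPT-3 family model through OpenAI's Python library as it looked around the time of GPT-3's release (the model name, prompt and API key below are illustrative placeholders, and the library's interface has since evolved):

```python
# A hedged sketch: prompting a GPT-3 family model via OpenAI's (pre-v1) Python library.
# The model name, prompt and key are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # substitute your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 family model
    prompt="Write three short, upbeat taglines for a new reusable coffee cup.",
    max_tokens=100,
)

print(response["choices"][0]["text"])
```

A few words of prompt in, finished copy out - that is the entire interaction.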
But we are still just looking at Narrow Artificial Intelligence. Even if a model as complex as GPT-3 can perform tasks with human-like ability, those tasks all come from a common, closely related set. Models such as GPT-3 are great for tasks such as question-answering, text summarisation and machine translation - essentially transforming given text into something else.
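As a concrete illustration of "transforming given text into something else", here is a minimal sketch using the open-source Hugging Face transformers library as a stand-in for GPT-3 (which is only accessible through OpenAI's API); the default summarisation model and the example text are assumptions for illustration only:

```python
# A minimal sketch of text summarisation with an open-source pre-trained transformer.
from transformers import pipeline

# Loads a default pre-trained summarisation checkpoint.
summarizer = pipeline("summarization")

article = (
    "Thomas Young was an English polymath who made contributions to the fields of "
    "vision, light, solid mechanics, energy, physiology, language and Egyptology."
)

# Transform the given text into something else: a short summary.
print(summarizer(article, max_length=30, min_length=10)[0]["summary_text"])
```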
What about achieving Artificial General Intelligence (AGI) - also sometimes referred to as "The Singularity"? We briefly discussed AGI in the first blog. According to Wikipedia, AGI is defined as "the ability of an intelligent agent to understand or learn any intellectual task that a human being can". There is some ambiguity in this definition - "any intellectual task" should perhaps read "all intellectual tasks". The key point here is that the machine can learn a task it has not previously been trained to do - and do it as well as (or better than) a human.
The answer to Artificial General Intelligence may not be ever-bigger Machine Learning models with more neural layers, more terabytes of training data and more billions of parameters. The solution may be conceptually simpler than that.
Machine Learning models are often described as being analogous to how the human brain works. In blog post two we compared Supervised Learning to a human infant learning to identify images containing cats. But the human brain doesn't really work like that.
The human brain is not one monster neural network. Neuroscientists have long known that different regions of the brain perform different functions. One region of the brain may well deal with image recognition - but the rest of the brain fulfils other functions. The structure of these brain regions doesn't differ much from region to region - but the tasks they perform do. Companies such as Google are actively researching this area. Assembling relatively small-scale, domain-specific neural networks to create multi-domain intelligence logically seems to get us closer to the human brain - and to AGI.
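As a deliberately toy sketch of that idea (the specialist classes and routing rule below are purely illustrative assumptions, not how Google or anyone else actually builds such systems), picture many small specialists sitting behind a simple router rather than one monster network:

```python
# A toy, hypothetical sketch: small domain-specific models behind a single router.

class VisionSpecialist:
    """A stand-in for a small network trained only on image tasks."""
    def handle(self, data):
        return f"[vision model] recognising objects in {data}"

class LanguageSpecialist:
    """A stand-in for a small network trained only on text tasks."""
    def handle(self, data):
        return f"[language model] summarising {data!r}"

SPECIALISTS = {
    "image": VisionSpecialist(),
    "text": LanguageSpecialist(),
}

def route(task_type, data):
    """Send each task to the specialist trained for that domain."""
    specialist = SPECIALISTS.get(task_type)
    if specialist is None:
        raise ValueError(f"No specialist for task type: {task_type}")
    return specialist.handle(data)

print(route("image", "photo_of_a_cat.png"))
print(route("text", "a long product description..."))
```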
Whilst all of this points to the potential for human-centric tasks to be completed by 'intelligent' machines, it still leaves us humans with the upper hand in emotions. Machines can't come close to having human-style emotion - or can they?
In the summer of 2022 a Google engineer working on Google's Conversational AI (LaMDA) made an incredible claim. Blake Lemoine, the engineer in question, claimed that Google's Conversational AI was 'sentient'. Essentially, Lemoine asserted that LaMDA "is sentient because it has feelings, emotions and subjective experience".
Transcripts from Lemoine’s interactions with LaMDA are unnervingly human. There is little doubt these conversations easily pass the famous “Turing Test” - the “test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human” - but do they demonstrate true feelings? The jury is currently out on this one.
So, when should we worry about (or celebrate) no longer working?
As mentioned previously, predicting the future is tricky. A good option would be to poll experts in the field of AI and see whether a consensus emerges. Many such polls have been conducted, but unsurprisingly no specific date has emerged. In 2019 an organisation called Emerj surveyed 32 PhD researchers in the AI field and asked them about AGI. One question was "when will the singularity occur, if at all?" Interestingly, 45% of respondents predicted a date before 2060. This result is consistent with other similar surveys. In fact, the results suggest that, if AGI is going to be achieved (and most think it is inevitable), it should happen before 2060.
For now, we should embrace the assistance that Machine Learning can bring to many areas of our professional lives. Perhaps we can concentrate on the more interesting and innovative elements of our roles before settling into our lives of leisure?
Keep up to date with what Salesforce is doing in the Artificial Intelligence research space. Salesforce's research areas are wider than you might think, including diverse areas such as Economics, Protein Generation and Shark Tracking - all with ethics front and centre.