If you are part of an industry connected with Artificial Intelligence in any way (and there aren’t many that are not), you have probably heard or read about the concept of Singularity. Even if you are not ‘into it’ yet, you are not missing much. It is as intriguing and as science-fictional to folks in the industry as it may sound to you.
Influenced by the recent resurgence of the subject, I decided to share newly learnt concepts and ideas in this series of posts.
What is ‘Singularity’?
‘Singularity’, or more specifically ‘technological singularity’, is a hypothesis that at some point artificial intelligence will match the finesse of the human biological brain in all aspects, including its reasoning and emotional capabilities. Combined with the other information technologies (computing and communication) that are already advancing at an exponential rate, this will let artificially intelligent machines share knowledge and learning instantly with other machines, and build further on it.
The Singularity will be marked when this self-learning and sharing cycle rapidly increases overall intelligence to a level that surpasses human intelligence so drastically that it becomes impossible for a normal human biological brain to even understand the artificially super-intelligent machines.
This hypothesis has been discussed for a long time (since the 1950s), but in current times it appears much nearer, as we are already experiencing machine intelligence that is better than human intelligence in some very specific areas.
Singularity: Is it a Perpetual Motion Machine equivalent of the information age?
At some level, this concept of continuously-self-learning-artificially-intelligent-machines surpassing the capabilities of human intelligence seems much like the very old Perpetual Motion Machine (a hypothetical machine that can do work indefinitely without an energy source). It is like asking, hypothetically, whether we could run our air-conditioner at home forever on a finite electrical energy source.
We know that there is no such perpetual motion machine, even after the best brains spent decades trying to build one. But there is a crucial contrast between these two possibilities. What makes a perpetual motion machine ‘impossible’ is the absence of a ‘new’ energy source (a ‘constant’ source only keeps depleting). What could make artificially super-intelligent machines ‘possible’ is an ever-increasing, ever-improving cycle of data-knowledge-skill-sharing-feedback-data-knowledge.
Artificial General Intelligence (AGI): Is it a precursor to Singularity?
In current times, machine intelligence is far superior to human abilities in specific problems: faster and more accurate diagnosis, helping us navigate through traffic while we are on the move, automatically labelling our videos or pictures for better organisation, and so on. Applied Artificial Intelligence is at work behind all these solutions, but each solves a very specific problem with a limited scope. Researchers aptly term these “narrow AI”.
AGI, on the far other side of the AI spectrum, is an attempt to perform the full range of cognitive abilities that humans have. While the field itself is still in early research, the most common definition of AGI includes the abilities to:
- Sense (see, hear or touch)
- Reason out (assemble information from senses, represent it as knowledge, make judgements)
- Plan & Act (make decisions, formulate strategy, create plans and apply next set of actions)
- Learn & navigate (learn from the surrounding responses to the actions, navigate and alter plans & actions)
- Communicate (in natural language for human interface)
- Lastly, to Imagine (extrapolate and create hypotheses)
Integration of all these human-brain capabilities, at very high scale and speed, will be essential for artificial intelligence to spiral into marking the singularity.
Another aspect – machines’ ability to deal with varied problems (problems that are inherently different in their attributes) in a unified and coherent way – requires us to find one software solution that meets the expectations of AGI. If a software program is basically a set of algorithms encoded to do a job, then a piece of software that needed to do all of the above would need a Master Algorithm. A must-read on the topic, the brilliant book “The Master Algorithm” by Pedro Domingos, details possible approaches towards this.
The Master Algorithm: By Knowledge engineering or Machine Learning?
The question is, how can we create a software with such a Master Algorithm?
Let’s break it down & simplify.
An algorithm is a set of instructions or steps to do a specific job. These instructions are basically the application of knowledge about that specific job. Software is simply those instructions encoded so that computers can understand them and perform the job on their own.
Typically, this is done by someone who can write software code, taking inputs from someone who has knowledge about the job to be performed. Generally, these ‘instructions’ are coded such that they take some ‘inputs’ and produce some ‘outputs’ by applying the specific processes/steps.
This approach of creating software by encoding existing knowledge is called the Knowledge Engineering approach. Behind most of our day-to-day digital experiences (internet banking, online flight ticket booking, mobile games, using credit cards or mobile wallets to make payments, etc.) is massive ‘Knowledge Engineering’ at work.
Traditionally, most software has been created in this manner, and that has helped automate a lot of human-mental-effort-intensive processes into computer programs – achieving major efficiencies and scale.
That has been the workhorse powering the digital revolution until recent years.
But there is a problem with this approach when it comes to creating a Master Algorithm. It expects someone to know-all and someone to code-all. The complexity involved in even attempting to encode ‘how-the-universe-works’ is hard to imagine.
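To make the Knowledge Engineering approach concrete, here is a minimal sketch: a domain expert’s rules, hand-coded by a programmer. The loan scenario, function name, and thresholds are all illustrative assumptions, not a real system.

```python
# Knowledge Engineering sketch: an expert's knowledge encoded as fixed rules.
# The scenario and thresholds are purely illustrative assumptions.

def approve_loan(income: float, credit_score: int) -> bool:
    """Encoded 'knowledge' of a hypothetical loan officer."""
    if credit_score < 600:
        return False      # expert rule: poor credit -> reject
    if income < 30_000:
        return False      # expert rule: low income -> reject
    return True           # otherwise approve

print(approve_loan(50_000, 700))  # True
print(approve_loan(50_000, 550))  # False
```

The key point: every instruction here was written by a human who already knew the job. The program contains no ability to discover new rules on its own.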
Enter the old idea of – ‘Machine Learning’.
Conceptually, machine learning is about turning around the task of ‘encoding knowledge’. The approach here is:
- Provide the machine with all the observed ‘inputs’ and their respective ‘outputs’
- Let the machine figure out, on its own, the ‘instructions/steps’ by which the outputs would have been created from those inputs
- That way, we don’t have to encode every possible instruction on how things work
- After that, we can simply provide new inputs to the machine, and it will apply the instructions/steps it learnt previously to produce the necessary output
- That is like automating the automation itself – a smarter approach
This has been successfully implemented and is already mainstream in many areas. It is very good at very specific tasks, and at creating Artificial Intelligence around those specific tasks. In other words, the machine learning approach works well as long as the ‘universe’ is very small and very specific – such as the examples we saw earlier.
But what about our real world? Can it learn everything about everything in the universe? As of now, there is no Machine Learning approach that can learn how-the-universe-works with the efficiency of human cognitive abilities – when the universe is everything-about-everything.
But with Machine Learning, there is now at least an imaginable path to creating that Master Algorithm – and achieving AGI over a period of time.
Machine Learning | ‘No free lunch’
There is one important reason to be sceptical of this approach, though.
Fundamentally, Machine Learning needs enough ‘inputs’ and ‘outputs’ to infer instructions for all the possible ways the universe works – depending, of course, on the scope of that ‘universe’ (read: ‘specific’ intelligence versus ‘general’ intelligence).
Historically there has been a debate on this topic among scientists and philosophers – not specifically about Machine Learning, but generally about how the world works. The debate is: ‘Can we really predict what we have not seen, based on what we have seen?’
This is the origin of No Free Lunch Theorem!
It means different things in different fields: science, economics, finance, statistics, technology and even sports. Check out the meaning of No Free Lunch across these fields.
This No Free Lunch problem can be profound, especially when trying to create a Master Algorithm or AGI using traditional Machine Learning approaches.
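A tiny sketch makes the No Free Lunch problem tangible: two different ‘instructions’ can explain every observation we have perfectly, yet disagree on an input we have not seen. The data and the two candidate functions are illustrative assumptions.

```python
# No Free Lunch, sketched: two hypotheses fit the same observed data
# perfectly, yet disagree on an unseen input. Data is an illustrative
# assumption -- it looks like y = x**2.

observed = [(1, 1), (2, 4), (3, 9)]

def f(x):
    return x ** 2                                # hypothesis A

def g(x):
    return x ** 2 + (x - 1) * (x - 2) * (x - 3)  # hypothesis B

# Both agree with every observation we have seen:
assert all(f(x) == y and g(x) == y for x, y in observed)

# But they predict different outputs for what we have NOT seen:
print(f(4), g(4))  # 16 vs 22 -- the data alone cannot pick a winner
```

The observed data alone cannot tell us which hypothesis is ‘right’; any preference requires an assumption brought in from outside the data – which is exactly why this problem bites hardest when the ‘universe’ is everything-about-everything.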
So machines need to have more human-brain-like abilities: to reason based on what we have seen, to imagine, and to create hypotheses around it. Combining that with ways to mimic human evolutionary aspects – abilities that humans are born with, i.e. which come to us by means of evolution – could be the way forward.