Sundar Pichai, Google's CEO, together with a team of researchers at Google Brain, is working on a revolutionary method for developing more efficient, self-generating machine-learning programs, called AutoML (automated machine learning). A controller neural network, running on Google's own computers, proposes what is known as a 'child' model architecture: a candidate network that the computer itself builds, trains to complete certain tasks, and then evaluates.
This process can be repeated thousands of times.
What does this mean?
It means that computers can code themselves; in effect, replicate. The controller neural network creates the 'child' and sets it tasks, and from the feedback it evaluates how well the child has learnt. The controller can repeat this process thousands of times, improving as it goes.
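The generate-train-evaluate loop described above can be sketched in a few lines. This is a purely illustrative toy, not Google's actual AutoML: the real controller is itself a neural network trained with reinforcement learning, whereas here a random "controller" proposes candidate architectures and a stand-in scoring function takes the place of actually training each child. All function and parameter names are invented for the example.

```python
import random

def propose_child(rng):
    """Controller step: propose a candidate 'child' architecture.
    In real AutoML this proposal comes from a trained neural net;
    here it is simply random."""
    return {
        "layers": rng.randint(1, 8),
        "units": rng.choice([16, 32, 64, 128]),
    }

def evaluate(child):
    """Stand-in for training the child and measuring its accuracy.
    A real system would train the child on data; this made-up score
    simply peaks at 4 layers and 64 units."""
    return 1.0 - abs(child["layers"] - 4) / 8 - abs(child["units"] - 64) / 256

def search(trials=1000, seed=0):
    """Repeat the propose-and-evaluate cycle many times,
    keeping the best-scoring child found so far."""
    rng = random.Random(seed)
    best_child, best_score = None, float("-inf")
    for _ in range(trials):
        child = propose_child(rng)
        score = evaluate(child)
        if score > best_score:
            best_child, best_score = child, score
    return best_child, best_score

best, score = search()
print(best)
```

Even this crude version shows the shape of the idea: the search, not a human, decides what the model looks like, and running the loop thousands of times steadily surfaces better designs.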
It’s a little like a parent teaching a child everything they know and then testing them to see whether they have learnt it correctly. However, the speed and efficiency of computers far outweigh human capability, and of course the permutations for learning are endless. More worryingly, this self-learning technology could be made widely available.
Could this be a worry?
It could be. When all is said and done, computers are now self-learning. They are also far more efficient at coding than humans: they don’t sleep, they simply work constantly, efficiently, cost-effectively and at great speed.
Will we be able to control this technology?
Who knows? At the moment, there is talk of producing an AI politician on the basis that a computer cannot be influenced or bribed, so it will supposedly be impartial. My initial thought, however, is that it could be dangerous. I mean, what next? AI army commanders, senators or even presidents? All of whom could be regenerated and improved constantly by a self-learning algorithm, possibly beyond our control.
Are there any plus points?
The obvious plus points are the improvements to our lives. This learning technology might find breakthroughs for many of the medical conditions that blight our species. It might one day replicate the work of nurses and surgeons, working 24/7 without fatigue; there are plenty of potential plus points.
Worst case scenario?
Could it be feasible that robots will build other robots, robots over which we have no control? Is Google's AutoML just the beginning? Who knows?