Marc Primo

Are We Pushing Deep Learning Too Far?

Updated: Mar 19, 2020

The following is an article “Are We Pushing Deep Learning Too Far?” by Marc Primo.


In one episode of the hit American animated series Family Guy, baby Stewie invents a robot replica of the family dog Brian that mimics everything Brian does, based on data Stewie collected from him. Eventually, robot Brian becomes exactly like the real Brian, leading to a punchline where the real Brian ends up falling in love with and kissing robot Brian.

While Family Guy may seem closer to the truth today than ever, some argue that we might have pushed deep learning a bit farther than it can really go. Or have we?


Human-level AI in baby steps


During the recent RE•WORK Deep Learning Summit in Montreal, Canadian deep learning and artificial neural networks expert Yoshua Bengio explained the current progress of human-level AI capabilities and where they are heading in the near future. Neuroscience has long been married to AI, but their relationship currently looks more like mother and child than husband and wife. At least for the moment.


Bengio acknowledges that “there has already been a lot of inspiration in the design of current deep learning coming from living intelligence,” such as how AI can recognize basic things via the architecture of convolutional nets. Yet today’s algorithms are far from enough to teach intelligent machines to adopt human instincts for certain tasks. For that, Bengio sees that further neuroscientific research is needed for machines to define high-level concepts, and that increases in data are still necessary to attain human-level AI.
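As a quick aside for readers who haven’t seen one, the convolutional architecture Bengio alludes to can be sketched in a few lines. The layer sizes, input shape, and use of PyTorch below are illustrative assumptions on my part, not details from the talk.

```python
# A minimal sketch of a convolutional net for recognizing "basic things",
# e.g. 28x28 grayscale digits. Layer sizes and the PyTorch framework are
# illustrative assumptions, not anything from Bengio's presentation.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # small local filters scan the image
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: a batch of four random 28x28 "images" yields four class-score vectors.
logits = TinyConvNet()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```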


A deeper learning


Other experts may argue that we have already pushed deep learning as far as it can feasibly go, and that the Self-Organizing Tree Algorithm (SOTA), an unsupervised neural network with a binary tree topology, may soon grow inefficient in deep learning, which could result in counter-progress.


However, Bengio emphasizes that neuroscience’s role in developing deep learning remains more important today than ever before. More work has to be done on spiking neural networks, specifically on techniques such as dropout in cost-function studies, or on quantizing the activity of neurons.
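For readers unfamiliar with dropout, the idea is simply to zero out random units during training so the network cannot over-rely on any single neuron. The sketch below is a generic illustration with made-up layer sizes in PyTorch, not a reference to Bengio’s work on spiking networks.

```python
# A minimal illustration of dropout: during training, random activations are
# zeroed and the survivors rescaled; at evaluation time the layer is a no-op.
# Layer sizes and the PyTorch framework are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # each hidden unit is dropped with probability 0.5
    nn.Linear(50, 10),
)

x = torch.randn(8, 100)
model.train()
train_out = model(x)   # stochastic: different units are dropped on every call
model.eval()
eval_out = model(x)    # deterministic: dropout is disabled at inference
```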


“I think there are many of the puzzle pieces still missing,” laments Bengio, reiterating that AI is still far from human-level cognition in areas including computer vision, speech recognition, and speech synthesis.


Teach your children well


Bengio postulates that the lack of central complexity can be addressed simply by building more intelligent machines that know a whole lot more about the world, so they can perform particular tasks just as humans can. Efforts to devise systems for higher AI cognition that can handle reasoning, generalization, and causality via neural nets make human intelligence more necessary in industrial systems today than ever before. But if that is the ultimate goal, is everyone really on board with making robots as intelligent as humans?


Just as neuroscience currently nurtures deep learning like a mother would a child, we can perhaps assume that the use of human intelligence in developing deep learning aims to mirror a norm of efficient cognition and reaction, all for social and economic benefits in areas such as health care, security, and transportation. We can only push deep learning too far if it becomes a catalyst that instigates fear and anxiety towards AI. Where exactly we are on that front at the moment is not clear, but what matters most right now is that we teach our “children” well, in the hope that they won’t rebel in the future.
