This week in AI #10: Hello, HAL. Do you read me, HAL?
This week in AI, Andrew Ng, Geoffrey Hinton, and Yoshua Bengio give their take on AI plus two neat applications!
1 — Learning overflow
Nowadays, the term AI is thrown around with claims that it will completely revolutionize the world as we know it. The truth is, AI is not a neat little object you can touch or which has defined boundaries; AI is the union of a multitude of fields connected to create something bigger. The great thing about humankind is that we’re always learning collectively. As our knowledge of those different fields improves, as we discover new areas of interest, and as others become common knowledge, AI is pushed even further. As Arthur C. Clarke, author of the book 2001: A Space Odyssey, once wrote: “Any sufficiently advanced technology is indistinguishable from magic.” It’s hard to tell the exact moment when magic becomes technology, but for a good take on today’s landscape, you can watch Andrew Ng on what he sees as the most promising fields.
2 — Call it a day
So what does AI consist of? The most prominent area today is supervised deep-learning. At the root, it’s an approximation function: you teach it using examples, that’s the supervised part, and with sufficient data, it learns to generalize to objects it has never seen. Well, you should know that the functions used are often complex networks, the “deep” in deep-learning, but the overall process is a two-step game: the forward-pass is the prediction while the back-propagation is where the learning happens. It’s one of the fundamentals of deep-learning. And… **drum roll** the guy who got the idea 30 years ago now says that we should throw it all away and start again. According to Hinton, his technique won’t cut it if we want to attain the next holy grail: unsupervised learning, extracting knowledge from the environment without any special guidance. The real big question is: what should we replace it with? Don’t expect to get an answer here, but if you want a hint you should have a look at this theory on generalization as information compression, or as one of its authors puts it: “The most important part of learning is actually forgetting.”
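To make the two-step game concrete, here is a minimal toy sketch (ours, not Hinton’s and not any framework’s API): a one-hidden-layer network learning XOR by gradient descent on a squared error. The forward pass computes the prediction; back-propagation pushes the error backwards through each layer to update the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
# Four examples of XOR: the "teaching by examples" (supervised) part.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units: a (very shallow) "deep" network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: the prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)
    # Back-propagation: the learning. The gradient of the squared
    # error flows backwards, layer by layer, via the chain rule.
    d_pred = (pred - y) * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(pred, 2).ravel())
```

After training, the predictions typically land close to the XOR targets [0, 1, 1, 0]. Every deep-learning framework automates exactly this loop, just at a vastly larger scale.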
3 — Bring in the antitrust lawyers!
Speaking of godfathers, Yoshua Bengio also gave us a piece of his mind recently; mainly that we shouldn’t put all our AI eggs into one basket: “Concentration of wealth leads to concentration of power. That’s one reason why monopoly is dangerous. It’s dangerous for democracy.” Keep in mind that Bengio just started element.ai, his startup aimed at applying cutting-edge research to business applications, so he might have a hidden agenda. That being said, he truly has a point. Take Microsoft for instance: it’s going all in on AI. Last year marked the beginning of its AI and Research group: today more than 8,000 people are working there! That’s quite a lot of firepower, especially when you consider that they already have easy access to an enormous amount of data. And, as luck would have it, Microsoft just announced it’s setting up a new AI healthcare division…
4 — Afraid of snakes?
You shouldn’t be! Indeed, according to a recent survey by KDnuggets, Python is now officially the most used programming language in analytics, data science, and machine learning, overthrowing the long-term governing king, R, in the game of thrones. And now, you can play with it — for real! Try out Serpent.AI, a novel and addictive game agent framework written in Python. A fun way to play around with AI and do research at the same time!
5 — Mind the gap
While Microsoft is focusing on the bigger picture, smaller actors are delivering real value that is applicable right now. For instance, Plantix, a new app that lets you upload a picture of a plant and get a diagnosis, is already used by 200,000 farmers in India, Brazil, and North Africa! Quite a feat! But beware, it’s not always smooth sailing: a recent study shows that the gap between AI expectation and reality in companies is huge. 85% of interviewed executives believe in the AI success story, but only 20% of them have already implemented something. The two main hurdles are 1) designing an interface between humans and machines to make the most of the two worlds, and 2) getting your hands on data, especially data corresponding to failure cases. On top of that, the talent shortage is only going to get worse as Facebook, Google, Microsoft, etc. compete to get the best to join their teams, which will only make it harder for smaller companies!
A bit gloomy, isn’t it? Not at all! I don’t know if you’ve heard about it, but a company is doing just that: making AI accessible to all, whether you have a deep-learning team or not. Its name is deepomatic, and you should definitely check it out… and not just because we work there!
Originally published at blog.deepomatic.com on September 25, 2017.