World Summit A.I. Recap (2/2)
My first impression was that the World Summit AI was less technical than I expected. Still, with so many speakers from big companies (NASA, Facebook, Google Cloud, IBM Watson, Pinterest, Netflix, Alibaba), it was impossible not to be impressed. They shared many interesting insights about how they are using AI now and what their vision for its future is.
There were plenty of philosophical discussions about the ethics, threats, and risks of AI usage. As an antithesis, Gary Marcus gave a very interesting talk in which he claimed that contemporary advances in AI are not advanced enough to “seriously” discuss its long-term effects. He showed multiple examples demonstrating that modern AI can solve problems only partially (at a very basic level) and is still very far from general AI.
Another positive outcome of those non-technical discussions was the disambiguation of terms such as Artificial Intelligence, Machine Learning, and Deep Learning, which many attendees and presenters seemed not to fully grasp. AI is the general name of the discipline, which covers a wide range of topics, both technical and philosophical. Machine Learning is a set of techniques used to solve certain problems in AI, and Deep Learning is one of those Machine Learning techniques. Deep Learning is usually based on multilayer neural networks, trained with supervised or unsupervised learning.
From the talks I heard, I understood that Deep Learning techniques are heavily used for image/sound/text recognition and tagging. For example, Pinterest applies Deep Learning to assign multiple tags to an image based on its contents: “chair”, “lamp”, “retro style”. There is a very long Boolean vector in which each element corresponds to a single possible tag, and a large neural network maps an image to that tag vector. Such a large network must be very hard to train. An idea I would like to try: instead of one large neural network that produces all the tags, train multiple simple neural networks in parallel, each recognizing a single tag.
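To make the per-tag idea concrete, here is a minimal sketch in which each tag gets its own tiny, independently trained classifier. I use logistic regression as a stand-in for a “simple neural network”, and the features, data, and tag names are all invented for illustration — this is not Pinterest’s actual pipeline.

```python
import numpy as np

def train_tag_classifier(X, y, lr=1.0, epochs=1000):
    """Train one tiny logistic-regression 'network' for a single tag."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                            # gradient of cross-entropy w.r.t. logits
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_tags(X, models, tags, threshold=0.5):
    """Run every per-tag model on each sample; collect the tags that fire."""
    out = []
    for x in X:
        fired = [t for t, (w, b) in zip(tags, models)
                 if 1.0 / (1.0 + np.exp(-(x @ w + b))) > threshold]
        out.append(fired)
    return out

# Toy data: 2 features, 3 independent tags (one Boolean column per tag).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
Y = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 1], [0, 0, 0]])
tags = ["chair", "lamp", "retro style"]

# Each tag gets its own independently trained classifier; the loop
# could trivially run in parallel since the models share nothing.
models = [train_tag_classifier(X, Y[:, j]) for j in range(len(tags))]
predictions = predict_tags(X, models, tags)
```

Because the per-tag models are completely independent, they can be trained in parallel and retrained individually when a single tag’s data changes — the trade-off is that they cannot share features the way one multi-output network can.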
Another takeaway is that startups are using cloud-based Machine Learning solutions, including pre-trained models. Cloud Machine Learning seems to be a good option for obtaining quick results. Pre-trained cloud models are rather generic, and custom-trained domain-specific models usually outperform them; however, the quality of pre-trained models is often sufficient for a broad variety of applications.
I really liked the talk by Netflix, where their recommendation system was presented. It is built upon the N-armed (multi-armed) bandit method. I find this method very promising despite the fact that it is not widely used (it is not part of the Spark ML library, for example); I would definitely like to implement it on Spark.
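Netflix did not share their implementation in detail, so the following is only a generic epsilon-greedy sketch of the N-armed bandit idea: each arm could stand for a candidate recommendation, and the reward for a simulated click. The class name, arm count, and click-through rates are all made up.

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy N-armed bandit: with probability epsilon pick a
    random arm (explore), otherwise pick the best-looking arm (exploit)."""

    def __init__(self, n_arms, epsilon=0.1, seed=42):
        self.counts = [0] * n_arms     # pulls per arm
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select_arm(self):
        if self.rng.random() < self.epsilon:               # explore
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)),                # exploit
                   key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # Incremental mean: new_mean = old_mean + (reward - old_mean) / n
        self.values[arm] += (reward - self.values[arm]) / n

# Simulated environment: arm 2 has the highest click-through rate.
true_ctr = [0.2, 0.5, 0.8]
bandit = EpsilonGreedyBandit(n_arms=3, epsilon=0.1)
for _ in range(5000):
    arm = bandit.select_arm()
    reward = 1 if bandit.rng.random() < true_ctr[arm] else 0
    bandit.update(arm, reward)
# bandit.values now holds the estimated reward of each arm
```

A Spark version would mainly differ in where the counts and running means live (e.g. aggregated across a stream of user interactions); the select/update logic itself stays this small.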
Another highlight was the visit of the Afghan All-Girl Robotics Team. Theirs was a compelling story about the struggle of being a woman in Afghanistan who wants to study and pursue a career in engineering. Their attendance was an inspiring reminder that through perseverance you can achieve ambitious goals.
It was definitely an interesting experience and I’m looking forward to the World Summit AI 2018!