AI in addition to ML?

Well, the topic is techno-ish, isn't it?

I've been involved in training people within the company to apply deep learning in their work. It is truly delightful to see talented people quickly grasp the concepts and tools and proceed to create very nice applications soon afterwards. There is something special about deep learning that adds a spark to the eyes of the cognoscenti. Having done my Ph.D. in computational physics, I was at first surprised by the attitude towards this particular class of algorithms, both others' and my own. I remember feeling child-like joy on seeing a sequence model produce a good time series prediction for the first time. I also remember feeling rather less enthusiastic about some other algorithms during my Ph.D. research. Way back, if I may add.

There are theoretical reasons why deep learning is a special case. Most likely the general enthusiasm about the discipline also adds to one's excitement. Tooling such as the Spyder IDE and Jupyter notebooks for Python has greatly increased the attractiveness of practical machine learning. Is it nevertheless possible that we're getting overexcited about deep learning in particular and machine learning in general?

A trip down memory lane

Consulting the usual source of consensus, we learn that Machine Learning (ML) was born within Artificial Intelligence (AI) research as an alternative to knowledge-based approaches for dealing with data. Knowledge-Based Reasoning (KBR) allows software implementation to be partially replaced with knowledge models and automated reasoning. In order to be applied to data-intensive applications, this approach requires that the knowledge model have adequate constructs specific to the data at hand.

Machine learning, on the other hand, stresses learning from data as opposed to defining the relevant aspects externally to the data set. This has proven to be a powerful approach; witness the success of deep learning in areas ranging from Natural Language Processing (NLP) to image classification to - well, just about any field nowadays. A consequence of this development is that ML has moved away from its AI home and become a sort of advanced analytics toolset around which impressive applications can be built.
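The contrast with KBR can be shown with the simplest possible learner. In this sketch (the data set and numbers are made up for illustration), ordinary least squares recovers a linear relationship purely from observations; nowhere is the relationship itself written down by the programmer:

```python
# A minimal sketch of "learning from data": estimate the relationship
# from examples instead of hand-coding it.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Noisy observations of an (unstated) relationship y ≈ 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]

slope, intercept = fit_line(xs, ys)
print(round(slope, 1), round(intercept, 1))  # 2.0 1.1
```

A deep learning model is of course vastly more expressive than a fitted line, but the division of labour is the same: the structure comes from the data, not from an externally defined model of the domain.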

All-in with ML then?

The success of ML has led to the perception of it as a panacea for just about any computing problem dealing with data. There are some caveats, however. A machine learning algorithm is chosen according to the data at hand as well as the use case one is dealing with. This means that the ultimate application is "built around" the algorithm, so that applications also become use-case dependent as a consequence. Compared to KBR, the opacity inherent in a software-based implementation is replaced by the opacity of the ML algorithm. For example, a hot topic in deep learning right now is explainability: can we get the algorithm to tell us why it thinks that Pluto is a cat?

The use of bespoke algorithms is justified for point solutions where high performance is sought. Problems start to arise when the overall system includes multiple ML algorithms, each potentially with dedicated semantics. This is a challenge in my own research field, 5G management and orchestration, where ML is expected to be used near the data. My co-researchers and I believe that a combination of KBR and ML algorithms might be the right solution to this challenge. A conference paper about this was presented at NOMS 2018 this week. Stay tuned.

Finally, one should not forget that there are types of data which are already information-rich and thus lend themselves naturally to KBR-like treatment. It would be a waste of time to re-learn with ML the semantics already inherent in the metadata and context.
