I. Introduction: Overcoming the Fear of Semantics
There are two ways of approaching robotic intelligence and communication, as indeed there are two ways of dealing with artificial intelligence: the current way and the semantic way. Human intelligence is definitely meaning- and knowledge-based. Most of AI research, however, has been statistical for over two decades now. Why? It is largely the result of opportunity and also of demography. A significant contribution to contentful, descriptive, comprehensive semantics within AI is reasonably expected from linguistic semanticists. But very few researchers in AI and, for that matter, in cognitive science are linguists, and even fewer of those linguists are semanticists. Switch the purview to robotics, and these numbers approach zero.

Statisticians of various ilks have been competing very productively, improving various learning algorithms and achieving incremental improvements on a variety of metrics. There is no doubt of the increasing excellence of the tools and the elegance and dexterity of the increasingly diverse algorithms. At the same time, from inside the approach, there have been growing doubts about the precision of the methods and the accuracy of the applications [1]. More and more experts in machine learning are talking about knowledge as the necessary counterpart of learning [2]–[5]. There have been proposals to combine NLP (read: machine learning/statistics) with knowledge representation [6]–[7]. It all sounds reasonable, so why haven't there been significant results? Why has there been no significant progress in accessing natural language meaning? Why do routine references to semantics on start-up websites advertising their products turn out to be more statistics, limited lists of words with shallow interpretations or, occasionally, just parts of speech, all posing as semantics?