A combination of deep neural networks and vector-symbolic models

Researchers at IBM Research Zürich and ETH Zürich have recently developed a new architecture that combines two of the most well-known artificial intelligence approaches: deep neural networks and vector-symbolic models. Their architecture, described in Nature Machine Intelligence, has the potential to overcome the limitations of both approaches, solving Raven’s progressive matrices and other reasoning tasks more effectively.

“Our recent paper was based on our earlier research works aimed at augmenting and enhancing neural networks with the powerful machinery of vector-symbolic architectures (VSAs),” said Abbas Rahimi, one of the study’s authors. “This combination has previously been applied to few-shot and few-shot continual learning tasks, achieving state-of-the-art accuracy while requiring less computational complexity. Our recent paper extends this concept beyond perception.”

RPM

Raven’s progressive matrices (RPM) are nonverbal tests that are commonly used to assess people’s intelligence and abstract reasoning abilities. They are made up of a series of items presented in sets, one or more of which is missing. To solve Raven’s progressive matrices, respondents must correctly identify the missing items in given sets from a few possible choices. This demands advanced reasoning abilities, such as the ability to detect abstract relationships between objects, which may concern their shape, size, colour, or other characteristics.

The neuro-vector-symbolic architecture (NVSA) developed by Rahimi and his colleagues combines deep neural networks, which are known to perform well on perception tasks, with VSA machinery. VSAs are distinctive computational models that use distributed, high-dimensional vectors to perform symbolic computations.
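
To make the VSA idea concrete, the sketch below shows the standard binding-and-bundling style of computation on random bipolar vectors. It is a generic illustration, not the authors’ implementation; the dimensionality, the attribute names, and the use of element-wise multiplication as the binding operator are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes independent random vectors quasi-orthogonal

def random_symbol():
    """A symbol is a random bipolar vector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

# Illustrative codebooks for two attributes of an object in a Raven panel.
shapes  = {name: random_symbol() for name in ("triangle", "square", "circle")}
colours = {name: random_symbol() for name in ("red", "green", "blue")}
SHAPE, COLOUR = random_symbol(), random_symbol()  # role vectors

# Bind each role to its filler (element-wise multiply) and bundle (add)
# the pairs into a single distributed vector describing the object.
obj = SHAPE * shapes["square"] + COLOUR * colours["red"]

# Query: unbind the COLOUR role (bipolar binding is its own inverse),
# then recover the filler by nearest-neighbour search in the codebook.
query = COLOUR * obj
print(max(colours, key=lambda n: query @ colours[n]))  # -> "red"
```

Because binding and bundling distribute a whole structured description across one vector, queries like the one above reduce to simple algebra followed by a similarity lookup; this is the sense in which VSAs perform symbolic computation with vectors.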

“While our approach might sound a bit like neuro-symbolic AI approaches, neuro-symbolic AI has inherited the limitations of its individual deep learning and classical symbolic AI components,” Rahimi explained. “Our key objective is to address these limitations, namely the neural binding problem and exhaustive search, in NVSA by using a common language between the neural and symbolic components.”

The team’s combination of deep neural networks and VSAs rests on two main architectural design features: a new neural network training process and a method for performing VSA transformations.

“We developed two key enablers of our architecture,” Rahimi said. “The first is the use of a novel neural network training method as a flexible means of representation learning over VSA. The second is a method to attain proper VSA transformations such that exhaustive probability computations and searches can be substituted by simpler algebraic operations in the VSA vector space.”
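
One way to picture such a “common language” between the neural and symbolic components is sketched below, under stated assumptions: a perception network’s softmax scores over attribute values are embedded as a probability-weighted superposition of VSA codebook vectors, so the symbolic side can operate on the network’s uncertainty with plain linear algebra rather than testing each hypothesis one by one. The codebook, the scores, and the decoding step are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000
codebook = rng.choice([-1, 1], size=(3, D))  # symbols for, say, three shapes

# Hypothetical softmax output of a perception network over the three shapes.
p = np.array([0.7, 0.2, 0.1])

# Shared representation: a probability-weighted superposition of the symbols.
s = p @ codebook  # a single vector of shape (D,)

# The symbolic side reads the whole distribution back with one
# matrix-vector product instead of an exhaustive per-hypothesis check.
p_hat = (codebook @ s) / D
print(np.round(p_hat, 2))  # ≈ [0.7, 0.2, 0.1], up to small crosstalk noise
```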

AI Raven

In preliminary tests, the architecture developed by Rahimi and his colleagues yielded very promising results, solving Raven’s progressive matrices faster and more efficiently than previous architectures. It outperformed both state-of-the-art deep neural networks and neuro-symbolic AI approaches, setting new accuracy records of 87.7% on the RAVEN dataset and 88.1% on the I-RAVEN dataset.

“To solve a Raven test, something called probabilistic abduction is required,” Rahimi explained. “This is a process that involves searching for a solution in a space defined by prior background knowledge about the test. The prior knowledge is symbolically represented by describing all possible rule realisations that could govern the Raven tests.”
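
In its naive form, probabilistic abduction amounts to scoring every combination of rule realisation and candidate answer against the perceived panel attributes, which is exactly the exhaustive search NVSA is designed to avoid. The toy sketch below illustrates that baseline on a single attribute; the two rules, the helper names, and the integer encoding of panels are invented for the example and are not from the paper.

```python
# Schematic, exhaustive probabilistic abduction on one attribute
# (e.g., the number of objects per panel). Illustrative only.

def constant(seq):      # toy rule realisation: all values in a row are equal
    return len(set(seq)) == 1

def progression(seq):   # toy rule realisation: values increase by 1 along a row
    return all(b - a == 1 for a, b in zip(seq, seq[1:]))

RULES = [constant, progression]  # the prior background knowledge

def abduce(rows, last_row_partial, candidates):
    """Score every (rule, candidate) pair; return the candidate that
    completes the bottom row under a rule that also fits the full rows."""
    best, best_votes = None, -1
    for cand in candidates:
        votes = sum(
            rule(last_row_partial + [cand]) and all(rule(r) for r in rows)
            for rule in RULES
        )
        if votes > best_votes:
            best, best_votes = cand, votes
    return best

rows = [[1, 2, 3], [2, 3, 4]]           # two complete rows of a toy matrix
print(abduce(rows, [3, 4], [4, 5, 6]))  # -> 5, via the progression rule
```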

In contrast with existing architectures, NVSA can perform extensive probabilistic calculations in a single vector operation. This in turn allows it to solve abstract reasoning and analogy-related problems, such as Raven’s progressive matrices, faster and more accurately than other AI approaches based on deep neural networks or VSAs alone.
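
The sketch below conveys the flavour of that single-operation trick, again under illustrative assumptions rather than as the paper’s implementation. In a phasor-style VSA, where an integer value is encoded as an element-wise power of a random complex base vector, binding two probability-weighted superpositions computes the full convolution of the two distributions, i.e. the distribution of the sum of two attributes, in one element-wise multiplication, with no loop over value pairs.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000
base = np.exp(1j * rng.uniform(0, 2 * np.pi, D))  # random phasor base vector

def encode(k):
    """Encode integer value k as the element-wise k-th power of the base."""
    return base ** k

def superpose(pmf):
    """Probability-weighted superposition of value encodings."""
    return sum(p * encode(k) for k, p in pmf.items())

a = superpose({0: 0.5, 1: 0.5})  # distribution of attribute A
b = superpose({1: 0.3, 2: 0.7})  # distribution of attribute B

s = a * b  # ONE element-wise multiply = convolution of the two distributions

# Decode the probability that A + B == m, up to small crosstalk noise.
for m in range(4):
    print(m, round((s @ np.conj(encode(m))).real / D, 2))
# ≈ 0.00, 0.15, 0.50, 0.35 — exactly the convolved probabilities
```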

“Our approach also addresses the neural binding problem, enabling a single neural network to separately recognize distinct properties of multiple objects simultaneously in a scene,” Rahimi said. “Overall, NVSA offers transparent, fast and efficient reasoning; and it is the very first example showing how probabilistic reasoning (as an upgrade of pure logical reasoning) can be efficiently performed by distributed representations and operators of VSA. Compared to the symbolic reasoning of neuro-symbolic approaches, the probabilistic reasoning of NVSA is two orders of magnitude faster, with less expensive operations on the distributed representations.”

This team’s new architecture has so far proven highly promising for solving complex reasoning tasks quickly and efficiently. In the future it could be tested on and applied to a variety of other problems, potentially inspiring the development of similar approaches.

“NVSA is a significant step towards encapsulating various AI paradigms in a unified framework to address tasks involving both perception and higher-level reasoning,” Rahimi added. “NVSA, surprisingly, already demonstrated degrees of generalisation to previously unseen combinations of objects and object attributes. Further generalisation in this direction remains an open question.”
