
Introducing SymbolicAI: An Integration of Generative Models and Solvers in a Logic-Based Approach to Machine Learning Frameworks

The rise of Generative AI, particularly large language models (LLMs), has transformed various sectors, enhancing tools for search-based interactions, program synthesis, chat, and more. LLMs have also bridged different modalities, enabling transformations such as text-to-code and text-to-3D, and underscoring the central role language-based interaction is likely to play in future human-computer interfaces.

Despite these advancements, issues like value misalignment persist, and LLMs still leave room for improvement, especially concerning functional language competence. To address these limitations, researchers from Johannes Kepler University and the Austrian Academy of Sciences have introduced SymbolicAI, a compositional neuro-symbolic (NeSy) framework. The framework can represent and manipulate self-referential and multi-modal structures, thereby significantly extending the generative process of LLMs.

SymbolicAI uses functional zero- and few-shot learning operations for in-context learning, thereby enabling the development of flexible applications. It directs the generation process and provides a modular architecture that accommodates various solvers, such as mathematical expression evaluators, theorem provers, knowledge databases, and search engines.
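To make this concrete, the sketch below illustrates the general pattern of pairing prompt-based zero-shot operations with deterministic solvers. This is not the SymbolicAI API itself; the class names, the toy arithmetic evaluator, and the routing logic are illustrative assumptions only.

```python
# Illustrative sketch (not the actual SymbolicAI API): a zero-shot "operation"
# is an instruction rendered into a prompt and sent to an LLM backend, while
# well-formed symbolic sub-tasks are routed to dedicated solvers instead.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Operation:
    """A zero-shot operation: instruction plus input rendered into one prompt."""
    instruction: str
    backend: Callable[[str], str]  # e.g. a function that calls an LLM API

    def __call__(self, value: str) -> str:
        return self.backend(f"{self.instruction}\n\nInput: {value}")

def evaluate_math(expression: str) -> str:
    """Deterministic solver for arithmetic sub-tasks (toy stand-in for a CAS)."""
    return str(eval(expression, {"__builtins__": {}}))  # restricted toy evaluator

def route(task: str, payload: str, llm: Callable[[str], str]) -> str:
    """Dispatch symbolic tasks to solvers; fall back to the LLM for the rest."""
    solvers = {"math": evaluate_math}
    if task in solvers:
        return solvers[task](payload)
    return Operation("Answer concisely:", llm)(payload)

# Usage with a dummy LLM backend:
print(route("math", "2 * (3 + 4)", llm=lambda p: "unused"))  # -> 14, solved deterministically
print(route("chat", "What is SymbolicAI?", llm=lambda p: p)) # falls back to the LLM backend
```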

Aiming to create domain-invariant problem solvers, the researchers use these solvers as building blocks for composing functions into computational graphs. They envision language as a central processing unit, distinct from other cognitive processes such as memory, shaping a foundation for general AI.
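The composition idea can be pictured as chaining graph nodes, where each node wraps either an LLM call or a solver call. The following is a minimal sketch under that assumption; the `Node` class and the stand-in backends are hypothetical, not the framework's actual interface.

```python
# Illustrative sketch: each operation is a node in a computational graph, so
# LLM calls and solver calls compose like ordinary functions.
from typing import Callable

class Node:
    def __init__(self, fn: Callable[[str], str], name: str):
        self.fn, self.name = fn, name

    def then(self, other: "Node") -> "Node":
        """Compose two nodes: the output of self feeds the input of other."""
        return Node(lambda x: other.fn(self.fn(x)), f"{self.name} -> {other.name}")

    def __call__(self, x: str) -> str:
        return self.fn(x)

# Toy pipeline: "extract" an equation (in practice an LLM call), then hand it
# to a deterministic math solver node.
extract = Node(lambda q: "2*(3+4)", "extract_equation")             # LLM stand-in
solve = Node(lambda e: str(eval(e, {"__builtins__": {}})), "math_solver")
pipeline = extract.then(solve)
print(pipeline("What is twice the sum of 3 and 4?"))  # -> 14
```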

The SymbolicAI team has also tackled the challenge of evaluating multi-step NeSy generative processes by designing a benchmark, computing a quality measure, and estimating its empirical score. They analyzed the logic capabilities of various models, including GPT-3.5 Turbo, GPT-4 Turbo, Gemini-Pro, LLaMA 2 13B, Mistral 7B, and Zephyr 7B.
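As a rough intuition for what scoring a multi-step generative process can look like, the snippet below compares a generated trajectory to a reference one via embedding similarity. This is a simplified stand-in, not the paper's actual quality measure; the equal-length assumption and the toy embeddings are purely illustrative.

```python
# Simplified stand-in for a multi-step quality measure: score a generated
# trajectory against a reference by the mean cosine similarity of per-step
# embeddings (any sentence-embedding model could produce the vectors).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def trajectory_score(generated: list[np.ndarray], reference: list[np.ndarray]) -> float:
    """Average per-step similarity; assumes both trajectories have equal length."""
    return float(np.mean([cosine(g, r) for g, r in zip(generated, reference)]))

# Usage with toy 3-dimensional "embeddings":
gen = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
ref = [np.array([0.9, 0.1, 0.0]), np.array([0.1, 0.9, 0.0])]
print(round(trajectory_score(gen, ref), 3))  # -> 0.994
```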

The team ultimately aims to use SymbolicAI in future studies on self-referential systems, hierarchical computational graphs, and more intricate program synthesis. They plan to integrate probabilistic methods into the AI design process to support the development of autonomous agents, and, in the spirit of collaborative development, they are openly sharing their ideas and framework.

In conclusion, the introduction of SymbolicAI by a team from Johannes Kepler University and the Austrian Academy of Sciences marks a significant advancement for LLM-based systems. The framework extends the generative process of LLMs, fosters functional language competence, and supports the development of domain-invariant problem solvers. The integration of SymbolicAI into these areas of study points to a promising direction for AI research.
