
Machine learning

KnowHalu: A Novel AI Approach for Detecting Hallucinations in Text Generated by Large Language Models (LLMs)

Artificial intelligence models, in particular large language models (LLMs), have made significant strides in generating coherent and contextually appropriate language. However, they sometimes create content that seems correct but is actually inaccurate or irrelevant, a problem often referred to as "hallucination". This can pose a considerable issue in areas where high factual accuracy is critical,…

Read More

Researchers from the University of California, Berkeley have unveiled a new AI strategy named Learnable Latent Codes as Bridges (LCB). This innovative approach merges the abstract thinking abilities of large language models with low-level action strategies.

Robotics traditionally operates within two dominant architectures: modular hierarchical policies and end-to-end policies. The former uses rigid layers like symbolic planning, trajectory generation, and tracking, whereas the latter uses high-capacity neural networks to directly connect sensory input to actions. Large language models (LLMs) have rejuvenated the interest in hierarchical control architectures, with researchers using LLMs…

Read More

Reducing Computational Overhead in Reliable Deployment: A Hybrid CNN Approach to Redundancy in AI

Researchers from the Institute of Embedded Systems at the Zurich University of Applied Sciences in Winterthur have addressed the issue of reliability and safety in AI models. This is especially relevant for systems with safety instrumented functions (SIF), such as edge-AI devices. The team noted that while existing redundancy techniques are effective, they are often computationally…

Read More

Improving Graph Neural Network Training with DiskGNN: A Significant Advancement towards Effective Large-Scale Learning

Graph Neural Networks (GNNs) are essential for processing complex data structures in domains such as e-commerce and social networks. However, as graph data volume increases, existing systems struggle to efficiently handle data that exceed memory capacity. This warrants out-of-core solutions where data resides on disk. Yet, such systems have faced challenges balancing speed of data…

Read More
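The out-of-core setting described above, where node features are too large for RAM and must live on disk, can be made concrete with a small sketch. This is not DiskGNN's implementation; it is a minimal illustration using `numpy.memmap`, with made-up array names and sizes:

```python
import numpy as np
import os
import tempfile

# Hypothetical setup: a feature matrix for 100k nodes, 64 dims, stored on disk.
num_nodes, dim = 100_000, 64
path = os.path.join(tempfile.mkdtemp(), "features.bin")

# Write features to disk once (in a real system this file would be huge).
disk_feats = np.memmap(path, dtype=np.float32, mode="w+", shape=(num_nodes, dim))
disk_feats[:] = np.random.default_rng(0).standard_normal((num_nodes, dim))
disk_feats.flush()

# Training only maps the file; pages are read on demand, so resident memory
# stays proportional to the mini-batch, not the whole graph.
feats = np.memmap(path, dtype=np.float32, mode="r", shape=(num_nodes, dim))
batch_nodes = np.array([3, 17, 99_999])   # nodes sampled for one mini-batch
batch_feats = feats[batch_nodes]          # fancy indexing gathers from disk
print(batch_feats.shape)                  # (3, 64)
```

A real out-of-core system must batch and reorder these gathers to avoid many small random reads, which is exactly the speed-of-data-access trade-off the excerpt alludes to.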

COLLAGE: An Innovative Machine Learning Method for Handling Floating-Point Errors in Low-Precision Arithmetic for Accurate and Streamlined LLM Training

Large language models (LLMs) have introduced ground-breaking advancements to the field of natural language processing, such as improved machine translation, question-answering, and text generation. Yet, training these complex models poses significant challenges, including high resource requirements and lengthy training times. Earlier methods addressing these concerns involved loss-scaling and mixed-precision strategies, which aimed to improve training efficiency…

Read More
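Loss scaling, one of the earlier techniques the excerpt mentions, exists because small gradients underflow to zero in low-precision formats. A minimal illustration of the underlying numerics (the gradient value and the scale factor of 1024 are arbitrary choices for the demo):

```python
import numpy as np

grad = 1e-8  # a small gradient value, easily representable in float32

# Cast directly to float16: 1e-8 is below the smallest float16 subnormal
# (about 6e-8), so it underflows to zero and the weight update is lost.
print(np.float16(grad))            # 0.0

# Loss scaling: multiply the loss (and hence all gradients) by a constant
# before the low-precision backward pass, then divide it back out in float32.
scale = 1024.0
scaled = np.float16(grad * scale)  # ~1.024e-5, representable in float16
recovered = np.float32(scaled) / scale
print(scaled, recovered)           # nonzero, close to the original 1e-8
```

Mixed-precision frameworks automate exactly this pattern, dynamically adjusting the scale so gradients stay inside the representable range of the low-precision format.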

AnchorGT: An Innovative Attention Mechanism for Graph Transformers Providing a Versatile Component to Enhance Scalability Across Various Graph Transformer Models

The standard Transformer models in machine learning have encountered significant challenges when applied to graph data due to their quadratic computational complexity, which scales with the number of nodes in the graph. Past efforts to navigate these obstacles have tended to diminish the key advantage of self-attention, which is a global receptive field, or have…

Read More
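The quadratic complexity the excerpt refers to comes from the n-by-n attention-score matrix over graph nodes. A toy dense self-attention over node embeddings makes the scaling concrete (the function and sizes are illustrative, not AnchorGT's code):

```python
import numpy as np

def self_attention(x):
    """Full self-attention over n node embeddings: materializes O(n^2) scores."""
    n, d = x.shape
    q, k, v = x, x, x                        # untrained toy projections
    scores = q @ k.T / np.sqrt(d)            # shape (n, n): the quadratic term
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, scores.shape

x = np.random.default_rng(0).standard_normal((1_000, 16)).astype(np.float32)
out, score_shape = self_attention(x)
print(score_shape)   # (1000, 1000): doubling the nodes quadruples this matrix
```

Anchor-based schemes like the one described restrict most nodes to attend to a small set of anchor nodes, shrinking the score matrix from n-by-n to roughly n-by-k while trying to preserve a global receptive field through the anchors.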

An improved method for controlling shape-shifting soft robots

Scientists at MIT have been working on the design and control of a reconfigurable, squishy, soft robot, similar in nature to 'slime', that has potential applications in healthcare, wearable devices and industrial systems due to its ability to shape-shift to complete varying tasks. These soft robots currently only exist in labs and do not possess…

Read More

Examining the Impact of Flash Attention on Numeric Deviation and Training Stability in Large-Scale Machine Learning Systems

Training large-scale Generative AI models can be challenging due to the immense computational resources and time they require. This complexity gives rise to frequent instabilities, manifested as disruptive loss spikes during prolonged training periods. These instabilities can result in costly interruptions, requiring the training process to be paused and restarted. For example, LLaMA2's 70-billion…

Read More