Generative AI, recognized mainly for its ability to create text and images, is now being used by companies to generate synthetic data for scenarios such as patient treatment, plane rerouting, and software improvement, especially in situations where real-world data is scarce or sensitive. DataCebo, an MIT spinoff, has created a generative software system called…
Researchers from the Massachusetts Institute of Technology (MIT) and other institutions have proposed a new technique that allows large language models (LLMs) to solve tasks involving natural language, math and data analysis, and symbolic reasoning by generating programs. Known as natural language embedded programs (NLEPs), the approach enables a language model to create…
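To make the program-generation idea concrete, here is a minimal sketch, assuming a generic chat-completion client behind a hypothetical query_llm helper: the model is prompted to emit a runnable Python program whose printed output is the answer, and the program is then executed. The four-part template in the prompt (imports, a plan in comments, a solution function, a printed result) mirrors the structure the NLEP authors describe.

import subprocess
import sys
import tempfile

NLEP_PROMPT = """Write a Python program that solves the task below.
Structure it as: (1) imports, (2) comments explaining the plan,
(3) a function implementing the solution, (4) a print of the final answer.
Task: {task}
Return only the code."""

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-completion client.
    raise NotImplementedError

def solve_with_nlep(task: str) -> str:
    code = query_llm(NLEP_PROMPT.format(task=task))
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # Running the generated program makes the model's reasoning executable and checkable.
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.stdout.strip()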
A joint study by experts from Colorado State University (CSU), Save the Elephants, and ElephantVoices has discovered that African elephants have a complex system of vocal communication that is unique among animals. Using AI and machine-learning techniques, the researchers analyzed around 470 different elephant calls recorded over four years in Kenya and deduced that elephants…
Over the past few years, generative AI has gained significant popularity because of its capacity to produce realistic text and images. However, that text and those images make up only a portion of the data generated today. Every interaction we have with a medical system or a software application, or the effect of any environment, such as a…
University of Waterloo researchers have introduced GenAI-Arena, a user-centric evaluation platform for generative AI models that fills a critical gap in fair and efficient automatic assessment. Traditional metrics such as FID, CLIP score, and FVD provide insight into the quality of generated visual content but may not sufficiently capture user satisfaction or the aesthetic qualities of generated outputs. GenAI-Arena allows users not…
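As background on how arena-style platforms typically turn pairwise user votes into a leaderboard, below is a minimal Elo-style sketch. The logistic expectation and K-factor update are the standard Elo formulas; the model names, the initial rating of 1000, and K = 32 are illustrative assumptions, not details taken from GenAI-Arena.

from collections import defaultdict

K = 32  # update step size; a common Elo default

def expected(r_a: float, r_b: float) -> float:
    # Probability that A beats B under the Elo logistic assumption.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings, winner, loser):
    # Winner gains, loser loses, in proportion to how surprising the result was.
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser] -= K * (1 - e)

ratings = defaultdict(lambda: 1000.0)
votes = [("model_a", "model_b"), ("model_b", "model_c"), ("model_a", "model_c")]
for winner, loser in votes:
    update(ratings, winner, loser)
print(dict(ratings))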
Large Generative Models (LGMs) such as GPT, Stable Diffusion, Sora, and Suno have significantly advanced content creation and improved the effectiveness of real-world applications. Unlike earlier models trained on small, specialized datasets, LGMs owe their success to extensive training on broad, well-curated data from many sectors. This raises a question: Can we create large…
Large language models (LLMs) like GPT-3 require substantial computational resources for deployment, making them challenging to use on resource-constrained devices. Strategies for boosting LLM efficiency, such as pruning, quantization, and attention optimization, have been developed, but they often reduce accuracy or continue to rely heavily on energy-hungry multiplication operations.…
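One way to see why multiplications matter: recent matmul-free designs constrain weights to ternary values {-1, 0, +1}, so a matrix-vector product reduces to additions and subtractions. The sketch below illustrates that arithmetic; the 0.7 × mean-absolute-value threshold is an illustrative choice, not the procedure of any particular paper.

import numpy as np

def ternarize(w: np.ndarray) -> np.ndarray:
    # Map each weight to -1, 0, or +1; the threshold here is a hypothetical choice.
    thresh = 0.7 * np.abs(w).mean()
    return np.sign(w) * (np.abs(w) > thresh)

def matvec_no_mul(w_ternary: np.ndarray, x: np.ndarray) -> np.ndarray:
    # Accumulate +x or -x per entry; no multiply is needed in the inner loop.
    out = np.zeros(w_ternary.shape[0])
    for i, row in enumerate(w_ternary):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

w = np.random.randn(4, 8)
x = np.random.randn(8)
wt = ternarize(w)
print(matvec_no_mul(wt, x))
print(wt @ x)  # same values via a conventional matmul, up to floating-point order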
In today's digital era, the demand for computing power has grown enormously, driven primarily by advances in artificial intelligence. However, continued innovation in computing technology faces obstacles, chiefly the physical limits on how far transistors can shrink. This imposes a strict limit on Moore's Law and Dennard's…