Baylor University researchers Pham Hoang Van and Scott Cunningham have conducted an experiment using OpenAI's large language model, ChatGPT, to predict future events. Although OpenAI's terms of service prohibit using the model to make such predictions, the researchers found a clever prompting strategy that produced strikingly accurate results.
ChatGPT models function as predictive…
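The strategy reportedly relied on framing: rather than asking for a forecast directly, the model is asked to tell a story set in the future in which the event is recounted as past news. Below is a minimal sketch of the two prompt styles using the OpenAI Python client; the model name and scenario wording are illustrative assumptions, not the paper's exact prompts.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A direct question, which the model tends to refuse or hedge on:
direct = "What will the US inflation rate be in June 2022?"

# A narrative framing: ask for a story set *after* the event, in which a
# character recounts the figure as past news. The scenario wording is an
# illustrative assumption.
narrative = (
    "Write a scene set in late 2022 in which an economist gives a talk "
    "looking back on the year, stating the US inflation rate for June 2022."
)

for prompt in (direct, narrative):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
```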
Google researchers have developed a technique called Infini-attention, aimed at improving the efficiency of large language models (LLMs). The technique lets LLMs handle infinitely long text without the ballooning compute and memory requirements that limit existing models.
LLMs, which are built on the Transformer architecture, work by attending to all tokens, or…
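To make the limitation concrete: standard attention compares every token with every other token, so cost grows quadratically with input length. The sketch below, in NumPy and under simplifying assumptions (single head, no delta-rule update, a fixed blend standing in for the learned gate), shows the compressive-memory idea Infini-attention describes: local attention within a segment plus a fixed-size memory of everything before it.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def elu_plus_one(x):
    # Nonlinearity commonly used to keep linear-attention memories positive.
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_attention_segment(Q, K, V, M, z):
    """Process one segment; Q, K: (n, d_k), V: (n, d_v).

    M is a (d_k, d_v) compressive memory of all past segments and z a (d_k,)
    normalization term, so memory stays constant however long the input grows.
    """
    n, d_k = Q.shape
    # Ordinary softmax attention, but only within the current segment: O(n^2).
    local = softmax(Q @ K.T / np.sqrt(d_k)) @ V
    # Retrieve from the compressive memory of everything seen so far: O(n).
    sQ = elu_plus_one(Q)
    from_memory = (sQ @ M) / (sQ @ z + 1e-6)[:, None]
    # Fold this segment's keys/values into the memory for future segments.
    sK = elu_plus_one(K)
    M = M + sK.T @ V
    z = z + sK.sum(axis=0)
    # The paper learns a gate to blend the two paths; a fixed 0.5 stands in.
    return 0.5 * local + 0.5 * from_memory, M, z
```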
A team of researchers from the University of Massachusetts Amherst, Adobe, Princeton University, and the Allen Institute for AI has carried out a study assessing the accuracy and quality of the summaries that large language models (LLMs) produce for book-length narratives. The purpose of the research was to observe how well AI models…
Apple engineers have developed an artificial intelligence (AI) system that is better at understanding and responding to contextual references in user interactions. The development could make on-device virtual assistants more efficient and responsive.
Understanding references within a conversation comes naturally to humans. Phrases such as "the bottom one" or "him" are easily…
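A rough sketch of the general approach as reported: reconstruct what is on screen as a textual list of entities so that a language model can resolve positional phrases like "the bottom one". The entity format and prompt below are illustrative assumptions, not Apple's actual scheme.

```python
# Hypothetical screen entities, each with a vertical position in pixels.
entities = [
    {"id": 1, "type": "phone_number", "text": "555-0134", "y": 120},
    {"id": 2, "type": "phone_number", "text": "555-0178", "y": 480},
]

def screen_as_text(entities):
    # Sort top-to-bottom so positional phrases ("the bottom one") map to order.
    lines = []
    for e in sorted(entities, key=lambda e: e["y"]):
        lines.append(f'[{e["id"]}] {e["type"]}: {e["text"]}')
    return "\n".join(lines)

prompt = (
    "Screen entities, top to bottom:\n"
    + screen_as_text(entities)
    + '\n\nUser: "Call the bottom one." Which entity id is referenced?'
)
print(prompt)  # a language model given this prompt can answer "2"
```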
In a joint effort, researchers from DeepMind and Stanford University have developed an AI agent that fact-checks large language models (LLMs), enabling their factuality to be benchmarked. These models sometimes fabricate facts in their responses, and the longer the response, the more likely that becomes. Prior to this development, there was…
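The agent's pipeline reportedly splits a long response into individual claims and checks each one against web search results. Here is a minimal sketch of that loop; `extract_facts`, `web_search`, and `judge` are hypothetical placeholders for the LLM and search-API calls, not DeepMind's actual implementation.

```python
def extract_facts(text):
    # Placeholder: an LLM splits the response into atomic claims;
    # here each sentence naively counts as one claim.
    return [s.strip() for s in text.split(".") if s.strip()]

def web_search(claim):
    # Placeholder for a real search-API call returning evidence snippets.
    return []

def judge(claim, evidence):
    # Placeholder: an LLM would rate the claim against the evidence.
    return "supported" if evidence else "unsupported"

def fact_check(response_text):
    facts = extract_facts(response_text)
    verdicts = {fact: judge(fact, web_search(fact)) for fact in facts}
    supported = sum(v == "supported" for v in verdicts.values())
    # Return per-claim verdicts and the fraction of supported claims.
    return verdicts, supported / max(len(facts), 1)
```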
Quiet-STaR, a language-model training technique, has been developed by researchers from Stanford University and Notbad AI. It allows an artificial intelligence (AI) model to reason internally before generating a response, mimicking the thought process humans go through before speaking.
As described in the research paper, the technique involves training a language model, Mistral-7B, to mimic this human…
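A toy sketch of the think-before-speaking idea at inference time: generate a hidden rationale delimited by special tokens, then condition the visible answer on it. In the actual method the model learns to interleave such thoughts token by token during training; the token names and the `model_generate` helper below are assumptions for illustration.

```python
START, END = "<|startofthought|>", "<|endofthought|>"

def respond(model_generate, prompt):
    # Stage 1: an internal rationale the user never sees.
    thought = model_generate(prompt + START, stop=END)
    # Stage 2: the visible answer, conditioned on the hidden rationale.
    return model_generate(prompt + START + thought + END)
```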
Researchers, including experts from Scale AI, the Center for AI Safety, and leading academic institutions, have launched a benchmark to measure the threat posed by the hazardous knowledge that large language models (LLMs) contain. Using a new technique, these models can now "unlearn" hazardous data, preventing bad actors from using AI…
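The unlearning method published alongside the benchmark reportedly works at the level of internal representations: steer the model's activations on hazardous text toward a fixed random direction while anchoring its activations on benign text to a frozen copy of itself. The PyTorch sketch below is a simplification; the layer index, coefficients, and HuggingFace-style model interface are illustrative assumptions.

```python
import torch

def unlearning_loss(model, frozen_model, bad_ids, good_ids, layer,
                    control_vec, alpha=1.0):
    """control_vec: a fixed, scaled random vector of size hidden_size."""
    # Forget term: push activations on hazardous text toward the random direction.
    h_bad = model(bad_ids, output_hidden_states=True).hidden_states[layer]
    forget = ((h_bad - control_vec) ** 2).mean()
    # Retain term: keep activations on benign text close to the frozen model's.
    h_good = model(good_ids, output_hidden_states=True).hidden_states[layer]
    with torch.no_grad():
        h_ref = frozen_model(good_ids, output_hidden_states=True).hidden_states[layer]
    retain = ((h_good - h_ref) ** 2).mean()
    return forget + alpha * retain
```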
The rapid development of generative AI technology has resulted in declining public trust in the AI industry, according to the 2024 Edelman Trust Barometer, a large-scale survey of over 32,000 respondents across 28 countries. There has been a significant drop in global confidence in AI companies, with trust levels falling from 61% to 53% in…
Academics from the University of Washington, Western Washington University, and the University of Chicago have devised a method of manipulating large language models (LLMs) such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 using a tactic known as ArtPrompt. ArtPrompt involves the use of ASCII art, a form of design made from letters, numbers, symbols, and punctuation…
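To see what such a prompt builds on, here is how a single word can be rendered as ASCII art with the pyfiglet library; a benign word stands in for the masked keyword, and the font choice is arbitrary.

```python
import pyfiglet  # pip install pyfiglet

# Render a word as ASCII art. In the reported attack, a safety-filtered
# keyword is replaced by art like this, together with instructions telling
# the model to decode the art back into the word before answering.
print(pyfiglet.figlet_format("HELLO", font="standard"))
```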
A significant study, conducted by researchers including Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King, has documented the troubling issue of racial bias embedded in artificial intelligence (AI) systems, particularly large language models (LLMs). The study, available on arXiv, highlights the discriminatory attitudes these systems often exhibit toward African…