Scientists from the Swiss Federal Institute of Technology Lausanne (EPFL) have discovered a flaw in the refusal training of modern large language models (LLMs): it can be bypassed simply by rephrasing dangerous prompts in the past tense.
When interacting with artificial intelligence (AI) models such as ChatGPT, certain responses are programmed to be…
A group of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) recently conducted a series of tests to understand whether AI models like ChatGPT are actually capable of reasoning through problems, or if they are merely echoing correct answers from their training data.
The series of tests, which they referred to as "counterfactual tasks",…
SenseTime, a leading AI technology company from China, has unveiled its upgraded SenseNova 5.5 model, which boasts a 30% increase in overall performance over its predecessor, SenseNova 5. The company has touted the model as being on par with GPT-4 Turbo, releasing benchmark scores that show it outperforming GPT-4o and Anthropic's Claude 3.5 Sonnet…
Despite impressive advances in AI, the cognitive reasoning abilities of large language models (LLMs) like GPT-4o still fall short when it comes to solving basic problems most humans or even children could figure out. Discussions about the intellectual capacity of AI have been as varied as they are conflicted, with some experts like Geoffrey Hinton,…
A research paper from the University of Chicago suggests that GPT-4, a general-purpose AI model, could be more effective at predicting a company's future earnings than human experts. The researchers found that the AI model achieved 60% accuracy in its predictions, significantly higher than the 53% accuracy achieved by human analysts.
The experiment involved…
A new study from Georgia State University's Psychology Department has found that artificial intelligence (AI) can outperform humans in making moral judgments. The study, led by Associate Professor Eyal Aharoni and published in Scientific Reports, stemmed from the team's curiosity about how language models address ethical questions.
The research was conceived in the style of…
According to a report by The Information, Microsoft is developing a large language model (LLM) called MAI-1, featuring a staggering 500 billion parameters. If the claims hold, the model will be the largest that Microsoft has deployed, far surpassing the company's Phi-3 family of small language models, which range from 3.8B to 14B…
Researchers at the University of Illinois Urbana-Champaign have found that AI agents built on GPT-4, a powerful large language model (LLM), can effectively exploit documented cybersecurity vulnerabilities. AI agents of this kind are increasingly playing a role in cybercrime.
In particular, the researchers studied the ability of such AI agents to exploit "one-day" vulnerabilities, which are identified…
Researchers from the University of Illinois Urbana-Champaign (UIUC) have revealed that artificial intelligence (AI) agents powered by GPT-4 are capable of autonomously exploiting cybersecurity vulnerabilities. As AI models continue to progress, their dual-use nature makes them both useful and potentially dangerous. For example, Google expects AI to be heavily involved in both committing and preventing…