
LLMs

LLM refusal training can be easily bypassed with prompts phrased in the past tense.

Scientists from the Swiss Federal Institute of Technology Lausanne (EPFL) have discovered a flaw in the refusal training of modern large language models (LLMs): it is easily bypassed simply by phrasing dangerous prompts in the past tense. When interacting with artificial intelligence (AI) models such as ChatGPT, certain responses are programmed to be…

Read More

Performance of AI Models: Are they truly reasoning or merely parroting?

A group of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) recently conducted a series of tests to understand whether AI models like ChatGPT are actually capable of reasoning through problems, or if they are merely echoing correct answers from their training data. The series of tests, which they referred to as "counterfactual tasks",…

Read More

SenseNova 5.5, China’s first real-time language model, outperforms GPT-4o.

SenseTime, a leading AI technology company from China, has unveiled its upgraded SenseNova 5.5 model, which boasts a 30% increase in overall performance over its predecessor, SenseNova 5. The company has touted the model as on par with GPT-4 Turbo and released benchmark scores showing it outperforming GPT-4o and Anthropic's Claude 3.5 Sonnet…

Read More

LLMs struggle to solve basic river-crossing puzzles.

Despite impressive advances in AI, the cognitive reasoning abilities of large language models (LLMs) like GPT-4o still fall short when it comes to solving basic problems most humans or even children could figure out. Discussions about the intellectual capacity of AI have been as varied as they are conflicted, with some experts like Geoffrey Hinton,…

Read More

GPT-4 surpasses financial analysts in forecasting earnings.

A research paper by the University of Chicago suggests that GPT-4, a generalized AI model, could be more effective in predicting a company's future earnings than human experts. The researchers found that the AI model achieved an accuracy of 60% in its predictions, significantly higher than the 53% accuracy achieved by human analysts. The experiment involved…

Read More

Research from Georgia State University suggests that AI surpasses humans in making ethical decisions.

A new study from Georgia State University's Psychology Department has found that artificial intelligence (AI) can outperform humans in making moral judgments. The study, led by Associate Professor Eyal Aharoni, and published in Nature Scientific Reports, stemmed from the team's curiosity about how language models address ethical questions. The research was conceived in the style of…

Read More

LLM agents can autonomously exploit one-day vulnerabilities.

Researchers at the University of Illinois Urbana-Champaign have found that AI agents built on GPT-4, a powerful large language model (LLM), can effectively exploit documented cybersecurity vulnerabilities. These types of AI agents are increasingly playing a role in cybercrime. In particular, the researchers studied the aptitude of such AI agents to exploit "one-day" vulnerabilities, which are identified…

Read More

GPT-4 agents can exploit cybersecurity vulnerabilities without human guidance.

Researchers from the University of Illinois Urbana-Champaign (UIUC) have revealed that artificial intelligence (AI) agents powered by GPT-4 are capable of autonomously exploiting cybersecurity vulnerabilities. As AI models continue to progress, their dual-use nature makes them both useful and potentially dangerous. For example, Google expects AI to be heavily involved in both committing and preventing…

Read More