
Machine learning

MIT researchers studying the effects and usage of generative AI receive another round of seed funding.

In response to their call for papers last summer, MIT President Sally Kornbluth and Provost Cynthia Barnhart received overwhelming interest from the research community. The call for proposals asked researchers to "articulate effective roadmaps, policy recommendations, and calls for action across the broad domain of generative AI." The response far exceeded expectations, with 75…

Read More

A Concurrent Coding Framework for Assessing Efficiency Challenges in Handling Multiple Long-Context Requests under Limited GPU High-Bandwidth Memory (HBM)

Large language models (LLMs) are becoming progressively more powerful, with recent models exhibiting GPT-4 level performance. Nevertheless, using these models for applications requiring extensive context, such as understanding long-duration videos or coding at repository-scale, presents significant hurdles. Typically, these tasks require input contexts ranging from 100K to 10M tokens — a great leap from the…
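As a rough illustration of why such context lengths strain HBM, the sketch below estimates the KV-cache footprint of a single long-context request; the layer count, head count, head dimension, and fp16 precision are assumed values for illustration, not figures from the article.

```python
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Approximate KV-cache size for one request: keys and values across all layers (fp16)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Assumed model shape (32 layers, 8 KV heads of dim 128): illustrative only.
for tokens in (100_000, 1_000_000, 10_000_000):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>10,} tokens -> ~{gib:,.1f} GiB of KV cache")
```

Even under these modest assumptions, a single request at the upper end of that range needs far more KV-cache memory than any single GPU's HBM provides, which is why serving several such requests concurrently is the efficiency challenge at issue.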

Read More

This Stanford-authored paper introduces a novel set of data scaling laws for machine learning, describing how AI capabilities, and the value of individual data points, change as dataset size grows.

Researchers from Stanford University have developed a new model to investigate the contributions of individual data points to machine learning processes. This allows an understanding of how the value of each data point changes as the scale of the dataset grows, illustrating that some points are more useful in smaller datasets, while others become more…
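A minimal sketch of the underlying intuition, that an individual point's marginal contribution shrinks as the dataset grows, might look like the following; the synthetic regression task and the leave-one-out style measurement are illustrative assumptions, not the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])

def marginal_value(n, trials=100):
    """Average test-error reduction from adding one extra point to a training set of size n."""
    gains = []
    for _ in range(trials):
        X = rng.normal(size=(n + 1, 5))
        y = X @ true_w + rng.normal(scale=0.5, size=n + 1)
        X_test = rng.normal(size=(500, 5))
        y_test = X_test @ true_w
        err_without = np.mean((LinearRegression().fit(X[:n], y[:n]).predict(X_test) - y_test) ** 2)
        err_with = np.mean((LinearRegression().fit(X, y).predict(X_test) - y_test) ** 2)
        gains.append(err_without - err_with)
    return float(np.mean(gains))

for n in (10, 100, 1000):
    print(f"n={n:>5}: value of one extra point ≈ {marginal_value(n):.5f}")
```

The same point that noticeably improves a 10-example model barely moves the error of a 1,000-example model, which is the kind of size-dependent value the paper formalizes.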

Read More

MIT researchers examining the influence and uses of generative AI have received a second batch of seed funding.

MIT President Sally Kornbluth and Provost Cynthia Barnhart launched a call for papers last summer relating to generative AI, with the aim of collecting effective strategies, policy suggestions, and calls to action in this expansive field. The response was overwhelming: 75 proposals were submitted in total, of which 27 were chosen for seed…

Read More

Dropout: An Innovative Method for Minimizing Overfitting in Neural Networks

Overfitting is a prevalent problem when training large neural networks on limited data. It occurs when a model performs strongly on the training data but fails to perform comparably on unseen test data. This issue arises when the network’s feature detectors become overly specialized to the training data, building complex dependencies that do not apply…
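For reference, a minimal inverted-dropout layer, the standard formulation of the technique rather than code from the paper, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during training and scale
    the survivors by 1/(1-p), so expected activations match test-time behavior."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

activations = np.ones((2, 4))
print(dropout(activations, p=0.5))          # roughly half the units zeroed, the rest scaled to 2.0
print(dropout(activations, training=False)) # identity at test time
```

Randomly dropping units prevents feature detectors from relying on the exact presence of particular other units, directly attacking the overly specialized dependencies described above.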

Read More

A second round of seed funding has been granted to MIT researchers examining the effects and uses of generative AI.

Last year, a request from MIT President Sally Kornbluth and Provost Cynthia Barnhart for research proposals concerning generative AI initiatives resulted in 75 submissions. Out of these, 27 were selected for initial funding. Inspired by the impressive response, Kornbluth and Barnhart issued a second call for papers last fall. This resulted in 53 more proposals,…

Read More

Upcoming Major Innovations in Large Language Model (LLM) Research

The evolution of Large Language Models (LLMs) in artificial intelligence has spawned several sub-groups, including Multi-Modal LLMs, Open-Source LLMs, Domain-specific LLMs, LLM Agents, Smaller LLMs, and Non-Transformer LLMs. Multi-Modal LLMs, such as OpenAI's Sora, Google's Gemini, and LLaVA, combine multiple input types, such as images, videos, and text, to perform more sophisticated tasks. OpenAI's Sora…

Read More

Microsoft AI Unveils Skeleton Key: A Novel Generative AI Jailbreak Method

Generative AI jailbreaking is a technique that allows users to get artificial intelligence (AI) to create potentially harmful or unsafe content. Microsoft researchers recently discovered a new jailbreaking method they dubbed "Skeleton Key." This technique tricks AI into ignoring safety guidelines and Responsible AI (RAI) guardrails that help prevent it from producing offensive, illegal or…

Read More

A second phase of seed funding has been awarded to MIT researchers examining the effects and uses of generative artificial intelligence.

In recent months, the Massachusetts Institute of Technology (MIT) issued a call for papers on generative artificial intelligence (AI), aiming to construct effective roadmaps, policy recommendations, and action strategies across the field. The response was overwhelmingly positive, with 75 paper proposals submitted. Out of these, 27 were chosen for seed funding. Given the successful outcome…

Read More

Promoting Sustainability via Automation and Artificial Intelligence in Fungi-based Bioprocessing

The integration of automation and artificial intelligence (AI) in fungi-based bioprocesses is becoming instrumental in achieving sustainability through a circular economy model. These processes take advantage of the metabolic versatility of filamentous fungi, allowing the conversion of organic substances into bioproducts. Automation replaces manual procedures, enhancing efficiency, while AI improves decision-making and control based…

Read More

EvoAgent: An Innovative Approach to Automatically Extend Expert Agents into Multi-Agent Systems Using Evolutionary Algorithms

Large Language Models (LLMs) have achieved considerable success in various tasks related to language understanding, reasoning, and generation. Currently, researchers are focusing on creating LLM-based autonomous agents for more diverse and complex real-world applications. However, many situations in the real world pose challenges that cannot be overcome by a single agent. Hence, engineers are developing…
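To give a flavor of the idea, here is a schematic evolutionary loop over agent configurations; the mutation operator, fitness score, and population size are placeholders for illustration, not EvoAgent's actual operators.

```python
import random

random.seed(0)

def mutate(prompt: str) -> str:
    """Placeholder mutation: append a behavioral tweak to the agent's system prompt."""
    tweaks = [" Be concise.", " Think step by step.", " Verify answers before responding."]
    return prompt + random.choice(tweaks)

def fitness(prompt: str) -> float:
    """Placeholder fitness: stands in for evaluating the agent built from this prompt on real tasks."""
    words = prompt.split()
    return len(set(words)) / len(words)

# Evolve a small population of agent "genomes" via mutation and selection.
population = ["You are a helpful research assistant."]
for generation in range(5):
    offspring = [mutate(parent) for parent in population for _ in range(3)]
    population = sorted(population + offspring, key=fitness, reverse=True)[:4]

print("Best agent prompt after evolution:", population[0])
```

The appeal of this pattern is that new specialist agents emerge from iterative variation and selection rather than from hand-designing each member of the multi-agent system.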

Read More

Spectrum: An AI-Powered Technique that Boosts LLM Training by Selectively Targeting Layer Modules According to their Signal-to-Noise Ratio (SNR)

The development and deployment of large language models (LLMs) play a crucial role in natural language processing (NLP), but these models pose significant challenges due to their high computational cost and extensive memory requirements. This makes the training process laborious and inefficient and could inhibit broader application and research. As a result, developing efficient methods…
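The general idea, ranking modules by an SNR-like statistic and training only the highest-scoring fraction while freezing the rest, can be sketched as below; the SNR proxy and the 25% threshold are illustrative assumptions, not Spectrum's exact criterion.

```python
import torch
import torch.nn as nn

def snr_proxy(weight: torch.Tensor) -> float:
    """Crude signal-to-noise proxy: mean absolute weight over its standard deviation."""
    return (weight.abs().mean() / (weight.std() + 1e-8)).item()

# Toy model standing in for an LLM's stack of layer modules.
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(8)])

# Score every linear module and keep only the top 25% trainable; freeze the rest.
scores = {name: snr_proxy(m.weight) for name, m in model.named_modules() if isinstance(m, nn.Linear)}
keep = set(sorted(scores, key=scores.get, reverse=True)[: max(1, len(scores) // 4)])

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        trainable = name in keep
        for param in module.parameters():
            param.requires_grad = trainable

print("Modules left trainable:", sorted(keep))
```

Freezing the remaining modules means fewer gradients and optimizer states to compute and store, which is how this style of selective training addresses the compute and memory costs described above.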

Read More