
Machine learning

Advancing Precision Psychiatry: Using AI and Machine Learning for Personalized Diagnosis, Treatment, and Outcome Prediction.

Precision psychiatry combines psychiatry, precision medicine, and pharmacogenomics to devise personalized treatments for psychiatric disorders. The rise of Artificial Intelligence (AI) and machine learning technologies has made it possible to identify a multitude of biomarkers and genetic loci associated with these conditions. AI and machine learning show strong potential for predicting patients' responses to…
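As a rough illustration of the kind of prediction task described here, the sketch below fits a classifier that maps biomarker and genotype features to a binary treatment-response label. The cohort, the features, and the use of scikit-learn's LogisticRegression are all assumptions for illustration, not details from the work summarized above.

```python
# Minimal sketch: predicting treatment response from biomarker/genetic features.
# All data below is synthetic; a real precision-psychiatry model would use
# validated biomarkers and much more careful evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical cohort: rows are patients, columns are illustrative features
# (e.g. an inflammatory marker, a cortisol level, two genotype indicators).
X = rng.normal(size=(200, 4))
# Hypothetical binary outcome: 1 = responded to treatment, 0 = did not.
y = (X @ np.array([0.8, -0.5, 0.3, 0.0]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Standardize features, then fit a logistic-regression response predictor.
model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(scores.mean(), 3))
```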

Read More

To build a more effective AI assistant, start by modeling the unpredictable behavior of people.

Researchers at MIT and the University of Washington have developed a model that predicts the behavior of an agent (either human or machine) by accounting for unknown computational constraints that might hamper problem-solving abilities. This model, described as an agent's "inference budget", can infer these constraints from just a few prior actions and subsequently predict…
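The snippet below is a toy illustration of the inference-budget idea, not the MIT and University of Washington model itself: a simulated agent is assumed to evaluate only `budget` randomly chosen candidate actions before picking the best one it saw, and that budget is then inferred by maximum likelihood from a few observed choices.

```python
# Toy illustration of inferring an "inference budget" from observed behaviour.
# The agent model (best-of-k random candidates) is an assumption for the demo.
import math
import random

random.seed(0)
ACTIONS = list(range(10))            # candidate actions; action i has value i
VALUE = {a: a for a in ACTIONS}

def act(budget):
    """Agent examines `budget` candidate actions and picks the best one it saw."""
    considered = random.sample(ACTIONS, budget)
    return max(considered, key=VALUE.get)

def log_likelihood(action, budget, n_sim=2000):
    """Monte Carlo estimate of log P(observed action | budget)."""
    hits = sum(act(budget) == action for _ in range(n_sim))
    return math.log(max(hits / n_sim, 1e-6))

observed = [act(3) for _ in range(5)]          # behaviour generated with budget 3

scores = {k: sum(log_likelihood(a, k) for a in observed)
          for k in range(1, len(ACTIONS) + 1)}
inferred = max(scores, key=scores.get)

print("observed actions:", observed)
print("inferred budget:", inferred)
print("predicted next action:", act(inferred))
```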

Read More

This small microchip can protect user information while enabling efficient computing on a mobile phone.

A team of researchers from the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Lab has developed a machine-learning accelerator that is resistant to the most common types of cyberattacks. This development could help secure sensitive health records, financial information, and other private data while still allowing complicated artificial intelligence (AI) models…

Read More

This small microchip can protect user information while enabling efficient computation on a mobile phone.

Smartphone health-monitoring apps can be invaluable for managing chronic diseases or setting fitness goals. However, these applications often suffer from slowdowns and energy inefficiencies due to the large machine-learning models they use. These models are frequently swapped between a smartphone and a central memory server, hampering performance. One solution engineers have pursued is the use…
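A quick back-of-envelope sketch of why that shuttling hurts: with assumed, purely illustrative per-byte energy costs for remote versus local access, moving a modest model's weights for every inference dominates the energy budget. None of the figures below are measurements from the chip described above.

```python
# Illustrative data-movement estimate; all constants are assumptions.
MODEL_BYTES = 20e6            # a 20 MB on-device model (assumption)
PJ_PER_BYTE_REMOTE = 100.0    # assumed energy to fetch one byte from central memory (pJ)
PJ_PER_BYTE_LOCAL = 1.0       # assumed energy to access the same byte locally (pJ)
INFERENCES_PER_DAY = 10_000   # e.g. a continuously running health monitor

def joules(bytes_moved, pj_per_byte):
    """Convert bytes moved at a given per-byte cost into joules."""
    return bytes_moved * pj_per_byte * 1e-12

remote = joules(MODEL_BYTES * INFERENCES_PER_DAY, PJ_PER_BYTE_REMOTE)
local = joules(MODEL_BYTES * INFERENCES_PER_DAY, PJ_PER_BYTE_LOCAL)

print(f"remote weight traffic: {remote:.2f} J/day")
print(f"local access:          {local:.2f} J/day")
print(f"ratio: ~{remote / local:.0f}x")
```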

Read More

Progress and Obstacles in Predicting TCR Specificity: From Clustering to Protein Language Models

Researchers from IBM Research Europe, the Institute of Computational Life Sciences at Zürich University of Applied Sciences, and Yale School of Medicine have evaluated the progress of computational models which predict TCR (T cell receptor) binding specificity, identifying potential for improvement in immunotherapy development. TCR binding specificity is key to the adaptive immune system. T cells…
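As a minimal sketch of how TCR binding specificity can be framed computationally, the snippet below represents CDR3 sequences by k-mer counts and fits a linear classifier on a handful of synthetic sequence-label pairs. The sequences, labels, and choice of scikit-learn tools are illustrative assumptions, far simpler than the clustering and protein-language-model approaches the review covers.

```python
# Toy TCR-epitope binding classifier on synthetic CDR3 sequences.
from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def kmer_counts(seq, k=3):
    """Represent a CDR3 amino-acid sequence by its k-mer counts."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Hypothetical training pairs: (CDR3 sequence, binds the epitope of interest).
data = [
    ("CASSLGQAYEQYF", 1),
    ("CASSLGGTDTQYF", 1),
    ("CASSLGQGAYEQYF", 1),
    ("CASSPRDRGYTF", 0),
    ("CASRPGLAGGRPEQYF", 0),
    ("CASSQDRDTQYF", 0),
]

vec = DictVectorizer()
X = vec.fit_transform(kmer_counts(seq) for seq, _ in data)
y = [label for _, label in data]

clf = LogisticRegression().fit(X, y)

query = "CASSLGQSYEQYF"
prob = clf.predict_proba(vec.transform([kmer_counts(query)]))[0, 1]
print(f"P(binds) for {query}: {prob:.2f}")
```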

Read More

Research Scientists at Google DeepMind Unveil JumpReLU Sparse Autoencoders: Attaining State-of-the-Art Reconstruction Fidelity

Sparse Autoencoders (SAEs) are a type of neural network that learns efficient data representations by enforcing sparsity, capturing only the most essential characteristics of the data. This reduces dimensionality and improves generalization to unseen data. SAEs can approximate language model (LM) activations by sparsely decomposing them into linear components using…
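A minimal PyTorch sketch of the idea, under simplifying assumptions: features pass through only where the encoder pre-activation clears a learnable per-feature threshold (a JumpReLU-style gate), and sparsity is encouraged with a plain L1 term. The paper's straight-through estimators and L0 penalty are omitted here, so the threshold stays fixed during this toy training loop.

```python
# Simplified sparse autoencoder with a JumpReLU-style activation (toy version).
import torch
import torch.nn as nn

class JumpReLUSAE(nn.Module):
    def __init__(self, d_model, d_features):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)
        # Per-feature threshold; trained via straight-through estimators in the
        # paper, left untrained in this simplified sketch.
        self.log_threshold = nn.Parameter(torch.zeros(d_features))

    def forward(self, x):
        pre = self.encoder(x)
        theta = self.log_threshold.exp()
        # JumpReLU: pass the pre-activation through unchanged, but only where
        # it exceeds the threshold; everything else is zeroed out.
        feats = pre * (pre > theta).float()
        return self.decoder(feats), feats

# Toy usage on random vectors standing in for LM residual-stream activations.
sae = JumpReLUSAE(d_model=64, d_features=512)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
x = torch.randn(256, 64)
for _ in range(100):
    recon, feats = sae(x)
    loss = ((recon - x) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("active features per example:", (feats != 0).float().sum(dim=1).mean().item())
```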

Read More

To improve an AI assistant, start by modeling the unpredictable behavior of humans.

Researchers at MIT and the University of Washington have developed a model to estimate the computational limitations or "inference budget" of an individual or AI agent, with the ultimate objective of enhancing the collaboration between humans and AI. The project, spearheaded by graduate student Athul Paul Jacob, proposes that this model can greatly improve the…

Read More

This tiny microchip can protect its users' information while enabling efficient processing on a mobile phone.

Researchers from MIT and the MIT-IBM Watson AI Lab have designed a machine-learning accelerator that is impervious to the two most common types of cyberattacks. Currently, healthcare apps that monitor chronic diseases or fitness goals are relying on machine learning to operate. However, the voluminous machine-learning models utilized need to be transferred between a smartphone…

Read More

Study finds that AI models may fail if they are trained on data produced by other AI models.

A recent study in Nature has highlighted that artificial intelligence (AI) models, specifically large language models (LLMs), experience a significant drop in quality when trained on data created by prior AI models. This degradation over time, known as "model collapse", could undermine the quality of future AI models, especially given the growing prevalence of AI-generated…
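The toy simulation below illustrates the mechanism in miniature, not the study itself: each "generation" fits a Gaussian only to samples drawn from the previous generation's fit, and the fitted parameters drift away from the original data's statistics.

```python
# Toy "model collapse" loop: each generation trains only on the previous
# generation's synthetic output. Purely illustrative, not the Nature study.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=50)   # small "human" dataset

mu, sigma = real_data.mean(), real_data.std()
for generation in range(1, 21):
    synthetic = rng.normal(mu, sigma, size=50)     # data generated by the previous model
    mu, sigma = synthetic.mean(), synthetic.std()  # the next model is fit only on it
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```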

Read More

Researchers from Microsoft and Stanford University Present Trace: A New Python Framework for the Automatic Optimization of AI Systems.

Designing computation workflows for AI applications is complex, requiring the management of many parameters such as prompts and machine-learning hyperparameters. Improvements made after deployment are often manual, making the technology harder to update. Traditional optimization methods like Bayesian Optimization and Reinforcement Learning often fall short on efficiency given the intricate nature of these systems…
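The sketch below illustrates the underlying problem of jointly tuning heterogeneous workflow parameters (a prompt template plus a numeric setting) against a single score. It uses a plain grid search with a stand-in scoring function and is not Trace's API.

```python
# Generic workflow-tuning illustration; the scoring function is a placeholder.
import itertools
import random

random.seed(0)

PROMPTS = [
    "Summarize the text in one sentence.",
    "Summarize the text in one sentence, citing key figures.",
    "Give a one-sentence summary for a non-expert reader.",
]
TEMPERATURES = [0.0, 0.3, 0.7]

def evaluate(prompt, temperature):
    """Stand-in for running the workflow and scoring its output (hypothetical)."""
    return len(prompt) / 100 - abs(temperature - 0.3) + random.gauss(0, 0.01)

# Exhaustively score every (prompt, temperature) configuration and keep the best.
best = max(itertools.product(PROMPTS, TEMPERATURES), key=lambda cfg: evaluate(*cfg))
print("best configuration:", best)
```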

Read More

LAMBDA: A New Open-Source, No-Code Multi-Agent Data Analysis System Built to Connect Domain Experts with Advanced AI Models

In recent years, artificial intelligence advancements have occurred across multiple disciplines. However, a lack of communication between domain experts and complex AI systems has posed challenges, especially in fields like biology, healthcare, and business. Large language models (LLMs) such as GPT-3 and GPT-4 have made significant strides in understanding, generating, and utilizing natural language, powering…
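Purely as a hypothetical skeleton of the multi-agent pattern described here (not LAMBDA's actual interface), the snippet below loops a "programmer" agent that drafts analysis code with an "inspector" agent that reviews it; `call_llm` is a placeholder that returns canned text so the demo runs.

```python
# Hypothetical two-agent analysis loop; all names and conventions are assumptions.
def call_llm(role: str, message: str) -> str:
    """Placeholder for a role-conditioned chat-completion call; canned output for the demo."""
    if role == "inspector":
        return "LGTM"
    return "import pandas as pd  # generated analysis code would appear here"

def analyze(request: str, max_rounds: int = 3) -> str:
    """Programmer agent drafts code; inspector agent reviews it before it is returned."""
    code = call_llm("programmer", f"Write Python to answer: {request}")
    for _ in range(max_rounds):
        review = call_llm("inspector", f"Review this code for errors:\n{code}")
        if "LGTM" in review:          # assumed approval convention, not LAMBDA's
            return code
        code = call_llm("programmer", f"Revise the code given this review:\n{review}")
    return code

print(analyze("Which biomarker best separates responders from non-responders?"))
```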

Read More