
Machine learning

Developing custom programming languages for efficient visual artificial intelligence systems.

Jonathan Ragan-Kelley, an associate professor in the MIT Department of Electrical Engineering and Computer Science, is the creator of many of the technologies behind modern photographic image processing and editing. Ragan-Kelley has contributed to the visual effects industry and was instrumental in designing the Halide programming language, a tool widely used in photo editing. Ragan-Kelley,…

Read More

Google DeepMind Unveils Med-Gemini: A Pioneering Suite of AI Models Transforming Medical Diagnosis and Clinical Judgement

Artificial intelligence (AI) has increasingly become a pivotal tool in medicine, assisting clinicians with tasks such as diagnosing patients, planning treatments, and staying up to date with the latest research. Despite this, current AI models struggle to efficiently analyze the wide array of medical data, which includes images, videos, and electronic health records (EHRs).…

Read More

HPI-MIT’s joint design research effort fosters formidable teams.

The recent ransomware attack on Change Healthcare underscores the disruptive nature of supply chain attacks. Such attacks are becoming increasingly prominent and often target large corporations through the small and medium-sized vendors in their supply chains. Researchers from the Massachusetts Institute of Technology (MIT) and the Hasso Plattner Institute (HPI) in Potsdam, Germany, are investigating different organizational…

Read More

Interpretability and Precision in Deep Learning: A Fresh Phase with the Introduction of Kolmogorov-Arnold Networks (KANs)

Multi-layer perceptrons (MLPs), also known as fully connected feedforward neural networks, are foundational models in deep learning, used to approximate nonlinear functions. Despite their significance, they have drawbacks: in applications like transformers, MLPs consume most of the model's parameters, and they are less interpretable than attention layers.…
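
To ground the comparison, here is a minimal sketch (my own illustration in PyTorch, not code from the KAN paper) of the kind of fully connected MLP block being discussed: fixed ReLU activations on the nodes, learnable weights on the edges, fitting a toy nonlinear function. KANs instead place learnable univariate functions on the edges.

# Minimal sketch (illustration only, not from the KAN paper): a two-layer
# MLP fitting a toy nonlinear function. The nonlinearity (ReLU) is fixed on
# the nodes; only the edge weights are learned.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(1, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x)  # toy nonlinear target

opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")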

Read More

The team from Google AI presented the TeraHAC algorithm, showcasing its superior quality and scalability on graphs with as many as 8 trillion edges.

Google's Graph Mining team has unveiled TeraHAC, a clustering algorithm designed to process massive datasets with hundreds of billions of data points, a task that underpins applications such as prediction and information retrieval. The challenge with such massive datasets is the prohibitive computational cost and the limits of parallel processing. Traditional clustering algorithms have struggled…
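
TeraHAC itself is not shown in code here, but the classical hierarchical agglomerative clustering it approximates at scale can be sketched with off-the-shelf tools. The toy example below uses scikit-learn's AgglomerativeClustering on a small synthetic dataset; the data, cluster count, and average-linkage choice are illustrative assumptions, not part of Google's system.

# Toy illustration of hierarchical agglomerative clustering (HAC), the
# classical algorithm that TeraHAC approximates at trillion-edge scale.
# This runs scikit-learn on a tiny in-memory dataset; it is not TeraHAC.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Three well-separated blobs of 2D points
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.3, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.3, size=(50, 2)),
])

# Average-linkage HAC repeatedly merges the closest pair of clusters.
hac = AgglomerativeClustering(n_clusters=3, linkage="average")
labels = hac.fit_predict(points)
print(np.bincount(labels))  # roughly 50 points per cluster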

Read More

The brain’s language network has to exert more effort when dealing with complicated and unfamiliar sentences.

Researchers from MIT, led by neuroscience associate professor Evelina Fedorenko, have used an artificial language network to identify which types of sentences most effectively engage the brain’s language processing centers. The study showed that sentences of complex structure or unexpected meaning created strong responses, while straightforward or nonsensical sentences did little to engage these areas.…

Read More

A team led by Princeton raises concerns: AI threatens the reliability of scientific research.

A recent study released by an interdisciplinary team led by computer scientists Arvind Narayanan and Sayash Kapoor from Princeton University brings into sharp focus the potential harm that AI could do to scientific research. The researchers argue that the lack of properly outlined best practices in using machine learning within scientific fields is threatening the…

Read More

PyTorch Launches ExecuTorch Alpha: A Comprehensive Solution Focused on Deploying Large Language and Machine Learning Models to the Edge.

PyTorch recently launched the alpha version of ExecuTorch, a solution for deploying complex machine learning models on resource-limited edge devices such as smartphones and wearables. Limited computational power and memory have traditionally hindered deploying such models on edge devices. ExecuTorch Alpha aims to bridge this gap, optimizing model execution on…
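
As a rough sketch of the workflow described, the snippet below follows the ExecuTorch getting-started pattern: capture a small PyTorch module with torch.export, lower it to an ExecuTorch program, and serialize a .pte file for the on-device runtime. The module paths and method names reflect the alpha-era documentation and may differ in current releases, so treat this as illustrative rather than authoritative.

# Illustrative only: exporting a small PyTorch model for on-device execution,
# following the ExecuTorch alpha getting-started flow. API names may have
# changed in later releases.
import torch
from executorch.exir import to_edge  # assumed import path from the executorch package

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# 1) Capture the model graph with torch.export
exported_program = torch.export.export(model, example_inputs)

# 2) Lower to the edge dialect, then to an ExecuTorch program
edge_program = to_edge(exported_program)
et_program = edge_program.to_executorch()

# 3) Serialize the program for the on-device ExecuTorch runtime
with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)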

Read More

An AI research paper from Princeton and Stanford presents CRISPR-GPT, a groundbreaking AI enhancement for gene editing.

Gene editing, a vital aspect of modern biotechnology, allows scientists to precisely manipulate genetic material, which has potential applications in fields such as medicine and agriculture. The complexity of gene editing creates challenges in its design and execution process, necessitating deep scientific knowledge and careful planning to avoid adverse consequences. Existing gene editing research has…

Read More