Lawmakers express concerns about RAND’s influence on the AI Safety Institute

President Biden’s October executive order tasked the National Institute of Standards and Technology (NIST) with researching how to test and analyze the safety of AI models. Now the RAND Corporation’s influence on the NIST is coming under scrutiny: in a letter, the US House Committee on Science, Space, and Technology voiced concern over how the NIST is carrying out its mandate and how research from groups like RAND may influence its work.

The NIST has established the Artificial Intelligence Safety Institute (AISI) and will likely outsource much of the research it needs to do on AI safety. To whom? The NIST isn’t saying, but two groups have reportedly been engaged by the institute. The committee’s letter didn’t mention RAND by name, but a reference to one of the think tank’s reports makes it clear the committee is concerned that RAND may continue to exert influence on AI regulation.

The NIST’s work is crucial to the future of AI safety and regulation. The committee’s letter stated, “In implementing the AISI, we expect NIST to hold the recipients of federal research funding for AI safety research to the same rigorous guidelines of scientific and methodological quality that characterize the broader federal research enterprise.” In other words, AISI-funded research should be held to the same standards of accuracy and reliability as any other federally funded science.

The motivation behind the unsubtle AI fearmongering in RAND’s research becomes clearer when you follow the money. RAND received $15.5 million in grants from Open Philanthropy, a funding organization focused on supporting effective altruism causes. Proponents of effective altruism are among the loudest voices calling for a halt or slowdown in the development of AI. If RAND is one of the two organizations reportedly tasked with doing research for the AISI, then we can expect tighter AI regulation in the near future.

Will the NIST invite input from AI accelerationists like Meta’s VP and Chief AI Scientist Yann LeCun? In a recent interview with Wired, LeCun said, “Whenever technology progresses, you can’t stop the bad guys from having access to it. Then it’s my good AI against your bad AI. The way to stay ahead is to progress faster. The way to progress faster is to open the research, so the larger community contributes to it.”

How the NIST responds to the committee’s letter remains to be seen. If the AISI holds the recipients of federal research funding to rigorous scientific standards, and if it invites input from a broader range of perspectives, including accelerationists like LeCun, its research could shape AI regulation for the better. We’ll be watching for the results of the NIST’s work and its contributions to AI safety.
