
- Members of the House Science Committee have raised concerns over the National Institute of Standards and Technology’s (NIST) planned research partnership with the RAND Corp. in the field of artificial intelligence (AI).
- The bipartisan group of lawmakers criticized NIST for a lack of transparency and failure to announce a competitive process for research grants related to the newly established U.S. AI Safety Institute.
- Lawmakers, including House Science Chair Frank Lucas (R-Okla.) and ranking member Zoe Lofgren (D-Calif.), emphasized the importance of thorough and well-executed AI safety research, cautioning against rushing the process.
- NIST, central to President Joe Biden’s AI plans, has not publicly disclosed which groups will receive research grants through the AI Safety Institute, but RAND has been identified as one of the organizations involved.
- The concerns stem from RAND’s affiliation with tech billionaires, the AI industry, and the controversial “effective altruism” movement, raising questions about potential biases and influences on AI safety research.
- RAND’s previous work on biosecurity risks related to advanced AI models was cited in the House letter as an example of research lacking academic peer review.
- The House Science Committee urged NIST to prioritize scientific merit and transparency, emphasizing that recipients of federal research funding for AI safety research should adhere to rigorous guidelines.
- NIST responded by stating that it is exploring options for a competitive process for cooperative research opportunities related to the AI Safety Institute, emphasizing its commitment to scientific independence and transparency.
- The House Science Committee’s concerns highlight the growing awareness in Congress of the need for rigorous measurement science in AI regulation, distinguishing between AI hype and the actual governance challenges.
- Experts suggest that Congress is beginning to recognize the ideological and political perspectives embedded in the scientific language around AI governance, signaling a deeper understanding that AI governance should be defined by measurable and accountable criteria.