US NIST Launches GenAI Program: Challenge Competitions to Help Detect Whether Content Is AI-Generated

The National Institute of Standards and Technology (NIST) has announced the launch of its new program, NIST GenAI, which aims to assess generative AI technologies such as text and image generators. This initiative is part of NIST’s efforts to promote information integrity and combat the spread of fake or misleading AI-generated content.

The NIST GenAI program will release benchmarks and develop systems to detect deepfakes and identify the source of AI-generated information. It will also evaluate the capabilities and limitations of generative AI technologies through a series of challenge problems. The first project of NIST GenAI is a pilot study to differentiate between human-created and AI-generated media, starting with text.

To participate in the study, teams from academia, industry, and research labs can submit either “generators” (AI systems that produce content) or “discriminators” (systems designed to identify AI-generated content). Generators must produce 250-word summaries from a given topic and set of documents, while discriminators must determine whether a given summary was potentially written by AI.
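To make the discriminator side of the challenge concrete, here is a minimal sketch of what such a system's interface might look like. Everything here is hypothetical: NIST has not published this API, and the lexical-diversity heuristic used below is a toy stand-in, not a real detection method.

```python
def ai_likelihood(summary: str) -> float:
    """Return a score in [0, 1]; higher suggests the text is more likely AI-written.

    Toy heuristic only (hypothetical, not NIST's method): low lexical
    diversity (type-token ratio) is treated as weak evidence of machine
    generation. A real discriminator would use trained classifiers.
    """
    words = summary.lower().split()
    if not words:
        return 0.5  # empty input: no evidence either way
    diversity = len(set(words)) / len(words)  # type-token ratio in (0, 1]
    return 1.0 - diversity

# Example: highly repetitive text scores higher (more "AI-like" under this toy rule)
print(ai_likelihood("the cat sat on the mat"))
```

In practice, a submitted discriminator would expose some scoring function like this over each 250-word summary, with the challenge evaluating how well its scores separate human-written from AI-generated text.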

NIST GenAI will provide the data needed to test the generators, to ensure fairness. Systems built on publicly available data that does not comply with applicable laws and regulations will not be accepted. Registration for the pilot study opens on May 1, and results are expected to be published in February 2025.

The launch of NIST GenAI responds to President Joe Biden’s executive order on AI, which calls for greater transparency from AI companies and establishes new standards for labeling AI-generated content. It is also the agency’s first AI-related announcement since the appointment of Paul Christiano, a former OpenAI researcher, to NIST’s AI Safety Institute.

Christiano’s appointment, however, has drawn controversy over his pessimistic views on AI development. Some critics, including scientists within NIST, fear he may focus on “fantasy scenarios” at the expense of more immediate AI risks. Nevertheless, NIST says NIST GenAI will inform the work of the AI Safety Institute.

As the volume of AI-generated misinformation and disinformation continues to grow, initiatives like NIST GenAI are crucial to ensuring the safe and responsible use of AI-generated content. Through the development of detection systems and the promotion of information integrity, the spread of fake and misleading content in the digital age can be curbed.
