US NIST GenAI Program Launches: Standardizing AI-Generated Content and Detection Technology

The National Institute of Standards and Technology (NIST), a US government agency responsible for developing and testing technology, has launched a new program called NIST GenAI. This program aims to assess generative AI technologies, including text and image generation. NIST GenAI will release benchmarks, develop content authenticity detection systems, and promote the development of software to identify the source of fake or misleading AI-generated information.

According to NIST, the program will issue challenge problems to evaluate the capabilities and limitations of generative AI technologies. These evaluations will help identify strategies to promote information integrity and guide the responsible use of digital content. The first project of NIST GenAI is a pilot study to build systems that can differentiate between human-created and AI-generated media, starting with text. While there are many services claiming to detect deepfakes, studies have shown them to be unreliable, especially when it comes to text.

NIST GenAI is inviting teams from academia, industry, and research labs to submit AI systems that generate content or systems designed to identify AI-generated content. Generators in the study must produce summaries of 250 words or fewer from a given topic and set of documents, while discriminators must detect whether a given summary was potentially written by AI. To ensure fairness, NIST will supply the data needed to test the generators. Systems built with data that does not comply with applicable laws and regulations will not be accepted.
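
To make the two roles in the pilot concrete, here is a minimal Python sketch of how a generator and a discriminator might be shaped. NIST has not published a code interface in this announcement, so the function names, the truncation-based generator stub, and the lexical-repetition heuristic below are illustrative assumptions, not the program's actual API or evaluation method.

```python
# Hypothetical sketch of the two roles described in the NIST GenAI pilot.
# Names and heuristics are illustrative assumptions, not the official interface.

from dataclasses import dataclass


@dataclass
class Task:
    topic: str
    documents: list[str]  # source documents supplied for the summary


def generate_summary(task: Task, max_words: int = 250) -> str:
    """Generator role: produce a summary of at most `max_words` words
    from the topic and documents (stubbed here as simple truncation)."""
    text = " ".join(task.documents)
    return " ".join(text.split()[:max_words])


def discriminate(summary: str) -> float:
    """Discriminator role: return an estimated probability that the summary
    is AI-generated. A real submission would use a trained classifier;
    this placeholder scores lexical repetitiveness as a stand-in signal."""
    words = summary.lower().split()
    if not words:
        return 0.0
    type_token_ratio = len(set(words)) / len(words)
    return round(1.0 - type_token_ratio, 3)  # lower diversity -> higher score


if __name__ == "__main__":
    task = Task(
        topic="AI content provenance",
        documents=["NIST GenAI will evaluate generators and discriminators."],
    )
    summary = generate_summary(task)
    print(f"summary ({len(summary.split())} words): {summary}")
    print("estimated P(AI-generated):", discriminate(summary))
```

In the actual evaluation, the generator slot would be filled by a real generative model and the discriminator by a trained detector; the sketch only fixes the input and output shapes implied by the task description: a topic plus documents in, a bounded summary out, and a score for how likely a summary is to be AI-written.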

Registration for the pilot study opens on May 1, with the first round scheduled to close on August 2. Final results from the study are expected to be published in February 2025. The launch of NIST GenAI and its focus on deepfakes come as the volume of AI-generated misinformation and disinformation grows rapidly. According to data from Clarity, a deepfake detection firm, 900% more deepfakes have been created and published this year than in the same period last year. This trend has raised public concern, with 85% of Americans saying they worry about the spread of misleading deepfakes online.

The launch of NIST GenAI is a response to President Joe Biden’s executive order on AI, which called for greater transparency from AI companies and established new standards for labeling content generated by AI. This announcement also marks the first AI-related development since the appointment of Paul Christiano, a former OpenAI researcher, to NIST’s AI Safety Institute. Christiano’s appointment has been met with controversy due to his pessimistic views on AI development and concerns that the AI Safety Institute may focus on unrealistic scenarios rather than immediate risks.

NIST GenAI will inform the work of the AI Safety Institute, according to NIST. As the prevalence of AI-generated content continues to grow, it is crucial to address the challenges posed by deepfakes and ensure the integrity of digital information. The development of reliable detection systems and the responsible use of generative AI technologies are essential steps in achieving this goal.
