NIST Launches GenAI Program: Challenge Competitions to Evaluate AI Content Authenticity

The National Institute of Standards and Technology (NIST) has launched a new program called NIST GenAI to assess generative AI technologies, including text- and image-generating AI. NIST GenAI will release benchmarks, help develop content authenticity detection systems, and promote software that identifies fake or misleading AI-generated information.

According to NIST, the GenAI program will issue a series of challenge problems to evaluate the capabilities and limitations of generative AI technologies. The goal is to promote information integrity and guide the safe and responsible use of digital content. NIST GenAI's first project is a pilot study to build systems that can distinguish between human-created and AI-generated media, starting with text. The initiative is timely, since current methods for detecting AI-generated text have proven unreliable.

NIST GenAI is inviting teams from academia, industry, and research labs to participate in the pilot study. Teams can submit either “generators,” AI systems that produce content, or “discriminators,” systems designed to identify AI-generated content. Generators must produce summaries of 250 words or fewer based on a given topic and set of documents, while discriminators must determine whether a given summary is potentially AI-written.
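NIST has not published any reference implementation, but as a rough illustration of what a “discriminator” entry does, the sketch below trains a toy text classifier that estimates whether a short summary is human- or AI-written. The training examples, labels, and the scikit-learn pipeline are illustrative assumptions, not part of the NIST program.

```python
# Toy sketch of a "discriminator": a classifier that guesses whether a
# summary was written by a human (0) or generated by an AI model (1).
# The training data and model choice are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled summaries: 1 = AI-generated, 0 = human-written.
train_texts = [
    "Budget cuts forced the lab to shelve the project until next spring.",
    "In conclusion, the document provides a comprehensive overview of the topic.",
    "The report outlines several findings about regional water quality.",
    "Overall, the findings underscore the importance of further research.",
]
train_labels = [0, 1, 0, 1]

# Word n-gram TF-IDF features plus logistic regression: a common
# baseline for stylistic text classification.
discriminator = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
discriminator.fit(train_texts, train_labels)

# Score an unseen summary of 250 words or fewer; the output is the
# model's estimated probability that the text is AI-generated.
candidate = "This summary synthesizes the key points of the provided documents."
p_ai = discriminator.predict_proba([candidate])[0][1]
print(f"Estimated probability the summary is AI-generated: {p_ai:.2f}")
```

A real pilot-study entry would of course rely on far larger training sets and richer models, and would be evaluated against the data NIST provides rather than hand-picked examples like these.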

To ensure fairness, NIST GenAI will provide the data needed to test the generators. It will not, however, accept systems trained on publicly available data in ways that do not comply with applicable laws and regulations. Registration for the pilot study begins on May 1, the first round is scheduled to close on August 2, and the final results are expected to be published in February 2025.

The launch of NIST GenAI comes at a time when the volume of AI-generated misinformation and disinformation is growing rapidly. According to Clarity, a deepfake detection firm, there has been a 900% increase in the creation and publication of deepfakes compared to the same period last year. This alarming trend has raised concerns among Americans, with 85% expressing worry about misleading deepfakes spreading online, according to a recent poll by YouGov.

NIST’s GenAI program is part of the agency’s response to President Joe Biden’s executive order on AI. The order calls for greater transparency from AI companies about how their models work and sets new standards, including for labeling AI-generated content, a significant step towards ensuring the safety and security of AI technologies.

Notably, the launch of NIST GenAI is the agency’s first AI-related announcement since the appointment of Paul Christiano, a former OpenAI researcher, to NIST’s AI Safety Institute. Christiano’s appointment drew some controversy because of his pessimistic views on AI development, but NIST says NIST GenAI will contribute to the work of the AI Safety Institute, providing valuable insights and guidance.

Overall, NIST GenAI is poised to make a significant impact on the field of generative AI. By promoting information integrity and developing systems to detect fake or misleading AI-generated content, NIST is taking a proactive approach to the challenges posed by deepfakes. As AI technologies continue to grow rapidly, ensuring their responsible and ethical use is crucial, and NIST GenAI is a step in that direction.
