AI Labels Could Harm More Than Help, Study Finds

The Growing Use of AI in Scientific Content

The increasing use of artificial intelligence (AI) to generate scientific and science-related content, especially on social media, has raised significant concerns. Such texts may contain false yet highly persuasive information that users find difficult to detect, and they can influence public opinion and decision-making, making this a critical issue for scientists, policymakers, and the general public.

Several jurisdictions and platforms are taking steps to ensure clearer disclosure of AI-generated or AI-synthesized content to protect the public. However, a recent study published in JCOM warns that these labels might have the opposite effect of what regulators intend. Instead of helping people identify misinformation, they could reduce the credibility of true scientific information while increasing the perceived credibility of false claims.

Risks of AI-Generated Scientific Content

AI-generated content can be misleading for at least two reasons. First, language models may "hallucinate," producing statements that sound plausible but are factually incorrect. Second, users can deliberately prompt AI systems to create false yet credible messages. Because of these risks, several countries have introduced transparency obligations requiring online content generated by AI to be clearly labeled.

Teng Lin, a Ph.D. candidate at the School of Journalism and Communication, University of Chinese Academy of Social Sciences (UCASS), Beijing, and Yiqing Zhang, a Master's student at the same school, conducted a study to test whether these disclosure labels actually achieve their intended goal of protecting the public from misinformation.

Experimental Study on AI Labels

"We focused on science-related information shared on social media," explains Teng. The experimental study involved 433 participants recruited online through the Credamo platform between March and May 2024. The researchers created four types of social media posts: correct information with or without an AI label, and misinformation with or without an AI label.

The texts were adapted using GPT-4 from items published by China's Science Rumor Debunking Platform, creating both accurate and misleading Weibo-style versions. These were then independently checked by the researchers.

Participants were asked to rate the perceived credibility of each post on a scale from 1 to 5. The researchers also measured participants' negative attitudes toward AI and their level of involvement with the topic.
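For readers who think in code, the design amounts to a 2 × 2 experiment: message veracity (true vs. false) crossed with the presence of an AI label, with credibility rated on a 1-to-5 scale. The Python sketch below is purely illustrative; the condition names and simulated ratings are assumptions for demonstration, not the authors' materials or data.

```python
import itertools
import random
import statistics

# Illustrative sketch of the study's 2 x 2 design: veracity (true vs. false)
# crossed with AI-label presence. Condition names and ratings are invented.
CONDITIONS = list(itertools.product(["true_info", "misinfo"],
                                    ["labeled", "unlabeled"]))

random.seed(0)  # make the simulated ratings reproducible

def rate_credibility() -> int:
    """Placeholder for one participant's credibility rating on the 1-5 scale."""
    return random.randint(1, 5)

# Simulate a batch of ratings per condition and compare condition means.
ratings = {cond: [rate_credibility() for _ in range(30)] for cond in CONDITIONS}

for (veracity, label), scores in ratings.items():
    print(f"{veracity:>9} / {label:<9} mean credibility = "
          f"{statistics.mean(scores):.2f}")
```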

A Paradoxical Effect

The results revealed a counterintuitive pattern. "Our most important finding is what we call a 'truth-falsity crossover effect,'" says Teng. "The same AI label pushes credibility in opposite directions depending on whether the information is true or false: it reduces the credibility of true messages and increases the credibility of false ones."

He adds that this does not necessarily mean the effect would be identical across all platforms or formats, but in their experimental setting, the pattern was clear. In this context, AI disclosure does not help people distinguish between true and false information. Instead, it appears to redistribute credibility in a paradoxical way.
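To make the crossover concrete, consider a hypothetical set of mean credibility ratings on the study's 1-to-5 scale. The numbers below are invented for illustration and are not the study's results; they only show the shape of the interaction the authors describe, where the label's effect points in opposite directions for true and false content.

```python
# Invented mean credibility ratings (1-5 scale) illustrating the shape of a
# "truth-falsity crossover effect". Not the study's data.
means = {
    ("true_info", "unlabeled"): 3.8,
    ("true_info", "labeled"):   3.3,  # AI label lowers credibility of true info
    ("misinfo",   "unlabeled"): 2.6,
    ("misinfo",   "labeled"):   3.0,  # AI label raises credibility of misinfo
}

# The sign of the label effect flips with veracity: that flip is the crossover.
for veracity in ("true_info", "misinfo"):
    effect = means[(veracity, "labeled")] - means[(veracity, "unlabeled")]
    print(f"label effect on {veracity}: {effect:+.1f}")
```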

Teng and Zhang also found that individual attitudes toward AI play a role. Participants with more negative views of AI penalized correct information even more strongly when it was labeled as AI-generated. Yet even among these participants, the credibility boost for labeled misinformation did not vanish: it was only partially attenuated, and the attenuation was topic-dependent, weakening the effect for one topic without eliminating it overall.

This suggests that so-called "algorithm aversion" does not lead to a uniform rejection of AI-generated content, but rather to a more complex and asymmetric reaction.

The Need for Careful Policy Design

Research like this highlights the need for careful testing before implementing regulatory interventions, as well-intentioned transparency measures may produce unintended consequences. "In our paper we put forward some recommendations, although they need further research to be validated," Teng explains.

"One proposal is to implement a dual-labeling approach. Instead of simply stating that the content is AI-generated, the label could also include a disclaimer indicating that the information has not been independently verified, or add a risk warning." In short, simply informing audiences that a text was generated by AI may not be sufficient on its own.

"Another recommendation is to adopt a graded or categorical labeling system," Teng adds. "Different types of scientific information carry different levels of risk. For example, medical or health-related information may require a stronger warning, while information about new technologies may involve lower risk. So we suggest using different levels of disclosure depending on the type and risk level of the content."

Conclusion

As AI continues to shape the dissemination of scientific information, it is essential to consider the potential unintended consequences of transparency measures. The findings from this study emphasize the importance of designing policies that not only inform but also guide users in critically evaluating the content they encounter online. Future research and policy development must take into account the nuanced ways in which AI labels affect public perception and trust in scientific information.
