AI Summary • Published on Jan 13, 2026
The increasing integration of artificial intelligence (AI) tools into news production has led to widespread calls for transparency through AI-use disclosures. However, previous research has identified a "transparency dilemma": the act of disclosing AI involvement can paradoxically decrease readers' trust. Little is known about how the level of detail in these disclosures influences trust in a news context. This study examines whether detailed AI disclosures mitigate or exacerbate the transparency dilemma in news articles.
A 3×2×2 mixed factorial study with 34 participants examined the effects of AI disclosure on news readers' trust. The study manipulated three independent variables: AI disclosure level (none, one-line, or detailed), news content type (politics or lifestyle), and degree of AI involvement in article creation (low or high). Participants read a series of news articles, and their trust was measured with an adapted News Media Trust questionnaire at both the article and outlet levels. Two decision-making behaviors were also observed: source-checking (operationalized as token spending) and subscription decisions. Semi-structured interviews provided deeper insight into participants' reasoning. The study's articles were generated or edited with ChatGPT-4o, under human oversight, to represent low and high AI involvement. Disclosure statements were crafted to reflect these involvement levels, ranging from a simple one-line mention to a detailed explanation of AI's role, the human review process, and an error-reporting contact.
The study found that only detailed AI disclosures consistently led to lower questionnaire trust scores and reduced subscription rates, particularly for lifestyle articles. In contrast, one-line disclosures yielded trust and subscription outcomes similar to articles with no disclosure at all. Both one-line and detailed disclosures, however, increased source-checking behavior, with detailed disclosures prompting more frequent checks. Qualitative analysis of the interviews indicated that subscription decisions were driven primarily by trust, whereas source-checking was more often motivated by curiosity or interest in the topic. Approximately two-thirds of participants preferred detailed disclosures or a "detail-on-demand" format, valuing the transparency, though many noted that the length of detailed disclosures might deter reading.
The findings suggest that the transparency dilemma in news is not an inevitable outcome of all AI disclosures but rather a trade-off between readers' desire for transparency and their trust. News organizations may find concise, one-line disclosures more effective at maintaining trust while still fulfilling transparency obligations, since overly detailed explanations can be counterproductive. The study challenges the assumption that merely highlighting AI involvement reduces trust, demonstrating that the design and framing of disclosures are crucial. This offers practical guidance for news organizations seeking to be transparent about AI without necessarily sacrificing audience trust. Policy implications point toward differentiated disclosure requirements for different news domains (e.g., higher-stakes political news) and interactive "detail-on-demand" features that balance informativeness with accessibility.