AI Summary • Published on Mar 23, 2026
Financial institutions face growing cyber threats and strict regulations, making Cyber Threat Intelligence (CTI) crucial for security decisions. While Artificial Intelligence (AI) is widely seen as a way to enhance CTI with faster detection and deeper insights, its trustworthy production use in finance remains limited. Adoption hinges not only on predictive performance but also on governance, integration into existing security workflows, and analyst trust. Existing research often focuses on isolated task performance without addressing real-world deployment challenges like explainability, auditability, and resilience against adversarial attacks. This creates a significant gap between the promise of AI in CTI and its practical, trustworthy implementation within financial organizations.
To understand the current state and barriers to AI-driven CTI adoption in finance, the researchers employed a mixed-methods, user-centric approach. This involved three main components: a systematic literature review (SLR), semi-structured interviews, and an exploratory survey. The SLR covered 330 publications from 2019–2025, ultimately retaining 12 finance-relevant studies that involved AI or analyzed AI risks in CTI. Six semi-structured interviews were conducted with practitioners in senior cybersecurity roles from financial institutions and consulting firms. Finally, an exploratory survey collected 14 responses from similar organizations to triangulate adoption signals and barriers in current practice, with all evidence coded against specific research questions.
The study identified four recurrent socio-technical failure modes that hinder trustworthy AI-driven CTI deployment in finance. First, "shadow AI" describes employees using public AI tools outside institutional controls, creating unvetted data paths. Second, a "license-first trap" occurs when AI adoption is driven by vendor bundles rather than strategic needs, leading to unused or disabled features. Third, an "attacker-perception gap" means defensive teams often underestimate adversaries' sophisticated use of AI for offensive purposes, complicating threat modeling. Finally, "missing security for the AI itself" refers to limited monitoring, robustness evaluation, and audit-ready evidence for AI models. Survey results indicated strong future expectations for AI, with 71.4% of respondents expecting AI to be central within five years. However, current use remains infrequent for a majority of respondents (57.1%), owing to concerns about interpretability and assurance, and 28.6% reported direct encounters with adversarial risks.
Based on the identified barriers, the paper derives three security-oriented operational safeguards for AI-enabled CTI deployments. First, "controlled analyst-in-the-loop operation" is recommended to prevent misuse and ungoverned "shadow AI" by ensuring auditable rationales for AI outputs, defaulting low-confidence outputs to human review, and providing approved pathways for AI assistance. Second, "security assurance gates and continuous monitoring for adversarial and operational failures" suggests treating AI components as attackable systems, implementing robustness checks, continuous monitoring for drift, and maintaining audit-ready documentation. Third, "attack-surface reduction via capability-driven integration and 'disabled-by-default' vendor features" advocates for explicit definition of AI's role in workflows, disabling non-integrated modules by default, and mapping enabled AI capabilities to control objectives. These safeguards aim to minimize new attack surfaces, prevent ungoverned usage, and provide continuous assurance and auditability, emphasizing that trustworthy AI in CTI is fundamentally a security challenge.
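To make the first safeguard concrete, the sketch below shows how an AI-assisted triage step could default low-confidence outputs to human review while writing an auditable rationale for every decision. It is a minimal illustration, not the paper's implementation: the `gate_ai_output` helper, the field names, and the 0.85 threshold are assumptions chosen for the example.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

# Illustrative confidence threshold (assumption); in practice it would be set
# by the institution's risk appetite and model-validation process.
REVIEW_THRESHOLD = 0.85


@dataclass
class TriageDecision:
    """An AI-assisted triage verdict plus the audit fields the first
    safeguard calls for: rationale, confidence, and routing."""
    indicator: str                  # e.g. an IP, hash, or domain from a CTI feed
    verdict: str                    # model-proposed label, e.g. "malicious"
    confidence: float               # model-reported confidence in [0, 1]
    rationale: str                  # human-readable explanation of the output
    routed_to_analyst: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def gate_ai_output(indicator: str, verdict: str,
                   confidence: float, rationale: str) -> TriageDecision:
    """Apply the analyst-in-the-loop gate: low-confidence outputs default
    to human review instead of flowing straight into automated response."""
    decision = TriageDecision(indicator, verdict, confidence, rationale,
                              routed_to_analyst=confidence < REVIEW_THRESHOLD)
    # Append-only audit record so every AI-assisted decision is traceable.
    with open("cti_ai_audit.log", "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(decision)) + "\n")
    return decision


if __name__ == "__main__":
    d = gate_ai_output(
        indicator="198.51.100.7",
        verdict="likely C2 infrastructure",
        confidence=0.62,
        rationale="Matches TTPs of a tracked actor; weak passive-DNS overlap.",
    )
    print("Escalated to analyst queue" if d.routed_to_analyst
          else "Auto-approved with audit trail")
```

In a real deployment the audit record would go to a tamper-evident store rather than a local file, and the threshold would be governed by the institution's model-validation process rather than hard-coded.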
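The third safeguard can be illustrated in a similarly hedged way: a capability register that keeps vendor AI features disabled by default and flags any enabled capability that has not been mapped to a control objective and an accountable owner. The capability names and register structure below are hypothetical, not drawn from the study.

```python
# Illustrative capability register for an AI-enabled CTI platform.
# Capability names and control objectives are hypothetical examples; the
# point is the "disabled-by-default" posture and the capability-to-control
# mapping described in the third safeguard.
CAPABILITY_REGISTER = {
    "llm_report_summarisation": {
        "enabled": True,
        "control_objective": "Reduce analyst triage time for vendor reports",
        "owner": "CTI team lead",
    },
    "auto_ioc_enrichment": {
        "enabled": True,
        "control_objective": None,   # enabled but never mapped: a finding
        "owner": None,
    },
    "predictive_threat_scoring": {
        "enabled": False,            # shipped in the vendor bundle, kept off
        "control_objective": None,
        "owner": None,
    },
}


def audit_capabilities(register: dict) -> list[str]:
    """Flag enabled AI capabilities that lack a mapped control objective
    or an accountable owner."""
    findings = []
    for name, cfg in register.items():
        if cfg["enabled"] and not (cfg["control_objective"] and cfg["owner"]):
            findings.append(
                f"{name}: enabled without a control objective/owner; "
                f"disable or document before use."
            )
    return findings


if __name__ == "__main__":
    for finding in audit_capabilities(CAPABILITY_REGISTER):
        print(finding)
```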