Information Processing & Management 62(4), Article 104139.
ISSN: 0306-4573. DOI: 10.1016/j.ipm.2025.104139
Abstract: AI technologies such as the GPT series have garnered worldwide attention and raised concerns about their potential for misuse, owing to their groundbreaking ability to produce AI-generated text (AIGT). In response to the urgent need for effective detection, this study proposes BENATTEN, a novel approach that exploits differences in attention between human-generated text (HGT) and AIGT. We reveal that the way humans think and the probabilistic nature of AI algorithms lead to discrepancies in how they pay attention to tokens within the text they produce, with AIGT exhibiting higher adherence to Benford's law in its attention distribution than HGT. Extensive experiments on three general-domain datasets demonstrate the advantage of BENATTEN over existing methods. For instance, on the HC3 dataset, BENATTEN achieved 99.24% accuracy, 99.69% F1 and 99.47% AUC, surpassing the OpenAI detector by 3.05%, 3.48% and 2.39%, respectively. In addition, comprehensive evaluations on seven specialized application-domain datasets confirm BENATTEN's robustness and cross-platform applicability, demonstrating its continued efficacy as AI technology evolves. Further, the experiments show that BENATTEN exhibits remarkable resilience, effectively handling adversarial attacks and interference from other AI systems.
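The core idea reported in the abstract, that the attention weights produced over AIGT conform more closely to Benford's law than those over HGT, can be illustrated with a minimal sketch. The code below is not the authors' BENATTEN implementation: the squared-difference distance, the dummy weight values, and the suggestion of where attention values might come from are all assumptions made purely for illustration.

# Illustrative sketch (not the paper's method): measure how closely a set of
# attention weights follows Benford's law by comparing the observed
# leading-digit frequencies against the Benford probabilities log10(1 + 1/d).
import math
from collections import Counter

def leading_digit(x: float) -> int:
    """Return the most significant nonzero digit of a positive number."""
    exponent = math.floor(math.log10(x))  # shift decimal point into [1, 10)
    return int(x / (10 ** exponent))

def benford_distance(values) -> float:
    """Sum of squared differences between observed leading-digit frequencies
    and Benford's-law probabilities; smaller means closer to Benford's law."""
    benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    positives = [v for v in values if v > 0]
    counts = Counter(leading_digit(v) for v in positives)
    n = len(positives)
    return sum((counts.get(d, 0) / n - benford[d]) ** 2 for d in range(1, 10))

# Dummy attention weights; in practice these might be flattened from a
# transformer forward pass run with attention outputs enabled (an assumption,
# not a detail given in the abstract).
weights = [0.031, 0.12, 0.0042, 0.27, 0.018, 0.0009, 0.55, 0.0031]
print(f"Benford distance: {benford_distance(weights):.4f}")

Under the abstract's claim, such a conformity score would tend to be lower (closer to Benford's law) for attention over AIGT than over HGT, which is what a detector could threshold or feed into a classifier.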
Bibtex:
@article{wang2025benatten,
title = {Can attention detect AI-generated text? A novel Benford's law-based approach},
journal = {Information Processing & Management},
volume = {62},
number = {4},
pages = {104139},
year = {2025},
issn = {0306-4573},
doi = {10.1016/j.ipm.2025.104139},
url = {https://www.sciencedirect.com/science/article/pii/S0306457325000767},
author = {Zhenhua Wang and Guang Xu and Ming Ren},
}
Reference Type: Journal Article
Subject Area(s): Computer Science, Statistics