AI Assurance in Tax Compliance: A Systematic Review and Meta-Analysis of Risk-Based Frameworks for Enhancing Compliance Quality

Grace O. Ikudehinbu, Adeola A. Adeniyi

Abstract


Tax administrations globally continue to face persistent challenges related to taxpayer noncompliance, revenue leakage, and declining public trust. The rapid integration of artificial intelligence (AI), including machine learning, predictive analytics, and automated decision-support systems, has transformed tax compliance processes. However, concerns regarding automation bias, transparency, and governance underscore the need for robust AI assurance frameworks to ensure compliance quality. This study conducted a systematic review and meta-analysis in accordance with PRISMA 2020 guidelines. A comprehensive search of Scopus, Web of Science, IEEE Xplore, ScienceDirect, and Google Scholar identified relevant studies published through December 2025. A total of 24 studies met the inclusion criteria, encompassing empirical, experimental, and policy-oriented research across tax administration and public sector domains. Data extraction and quality assessment were performed independently by two reviewers using an adapted Joanna Briggs Institute framework. Pooled standardized mean differences (SMDs) were estimated using a random-effects model, and meta-regression and subgroup analyses were conducted to examine moderating factors. AI assurance frameworks demonstrated significant positive effects on compliance outcomes. Compliance accuracy showed a moderate improvement (SMD = 0.52; 95% CI: 0.38 to 0.66), while explainability and transparency (SMD = 0.44; 95% CI: 0.29 to 0.59) and taxpayer trust (SMD = 0.36; 95% CI: 0.18 to 0.54) also improved significantly. Automation bias was significantly reduced (SMD = −0.41; 95% CI: −0.57 to −0.25). Governance and risk outcomes, including risk detection efficiency (SMD = 0.56) and decision quality (SMD = 0.49), demonstrated moderate-to-large improvements. Meta-regression identified AI use intensity, governance frameworks, and explainability as significant predictors (p < 0.01). Subgroup analysis revealed that high-assurance AI systems produced the strongest effects (SMD = 0.68), while low-assurance systems showed no significant impact. AI assurance frameworks play a critical role in enhancing compliance quality, improving decision-making, and mitigating automation bias in tax administration. Comprehensive assurance mechanisms, incorporating explainability, governance, and human oversight, are essential for maximizing the benefits of AI while safeguarding fairness and accountability. Policymakers and tax authorities should prioritize the adoption of high-assurance AI frameworks to strengthen compliance systems, reduce risks, and enhance public trust.
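The pooled SMDs reported above are typical of a random-effects meta-analysis. As an illustration only (the abstract does not specify the estimator, and the per-study values below are hypothetical, not the 24 included studies), the widely used DerSimonian-Laird method can be sketched as:

```python
import math

# Hypothetical per-study standardized mean differences (SMDs) and
# sampling variances -- illustrative values, not the review's data.
smds = [0.45, 0.60, 0.30, 0.55, 0.70]
variances = [0.02, 0.03, 0.025, 0.04, 0.03]

def random_effects_smd(effects, variances):
    """DerSimonian-Laird random-effects pooled SMD with a 95% CI."""
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

pooled, ci_lo, ci_hi = random_effects_smd(smds, variances)
print(f"Pooled SMD = {pooled:.2f} (95% CI: {ci_lo:.2f} to {ci_hi:.2f})")
```

The tau-squared term widens the confidence interval to reflect between-study heterogeneity, which is why random-effects intervals such as those reported (e.g., 0.52; 95% CI: 0.38 to 0.66) are typically wider than their fixed-effect counterparts.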

KEYWORDS: Finance; Artificial intelligence (AI); Risk; Tax compliance; AI assurance; Automation bias; Explainable AI; Governance frameworks; Compliance quality; Accountability; Systematic review; Meta-analysis

DOI: 10.7176/RJFA/17-2-03

Publication date: April 30th 2026


ISSN (Paper): 2222-1697  ISSN (Online): 2222-2847

