Responsible AI in finance: 3 key actions to take now

Summary

  • AI is transforming finance by enhancing analysis and forecasting but requires strong governance for accuracy and compliance.
  • CFOs, CAOs, and controllers play a key role in assessing AI-driven risks, designing controls and engaging with external auditors and stakeholders.
  • To enhance AI’s benefits, finance leaders should embed governance, responsibility and reporting accuracy into its use.

10 minute read

March 17, 2025

AI is revolutionizing the finance function, offering leaders the ability to accelerate financial analysis and reporting, enhance forecasting accuracy and extract actionable insights from vast and complex data sets. While these tools offer an opportunity to unlock significant value, they can also introduce complexities for finance executives — particularly those in public companies. These complexities may require not only technical experience but also a structured framework to guide the strategic use of AI in a consistent, transparent and accountable manner.

Adopting Responsible AI practices helps organizations harness the technology’s transformative potential, mitigate inherent risks and earn stakeholder trust. Where financial reporting accuracy is crucial, AI governance and internal controls can allow companies to balance speed and safety. Critical steps include validating data sources, reviewing AI outputs and assessing the integration of AI within third-party systems to support accountability and help maintain trust. Establishing repeatable processes and practices to support AI calls for finance leaders to address these three core considerations.

Read on for practical insights and guidance on how key finance stakeholders can take action now.

One-third of CEOs say GenAI has increased revenue and profitability over the past year, and half expect their investments in the technology to increase profits in the year ahead.

PwC’s 28th Annual Global CEO Survey

1. AI’s foundation: Establishing data integrity

The adoption of AI is amplifying both opportunities and challenges associated with data in financial analysis, reporting processes and internal controls. While organizations have long grappled with data quality, the expanded use of diverse and complex data sets in AI-enabled processes underscores the importance of enhanced governance and controls to help manage evolving risks. Key considerations include:

  • Data sources and validation: Finance leaders should maintain clear oversight of data sources, including their origins, structures and reliability. This includes confirming appropriate rights to use the data and implementing procedures to address potential data inconsistencies.
  • Control enhancements: When new data sources are used to support financial reporting processes, such as forecasting or impairment assessments, they may become relevant to the company’s internal control over financial reporting (ICFR) environment. Individuals responsible for executing relevant internal controls should evaluate these new data sets for completeness and accuracy.

Example use case: Integrating external data for impairment assessments

A company integrates external market data into its AI-driven impairment models, which previously relied solely on internal historical financial data. Discrepancies between vendors lead to inconsistent impairment results. The company takes the following actions:

  • Validation protocols: The finance team develops a standardized process to validate external data, including reconciling differences between vendors and reviewing the methodologies used to generate the data (a minimal sketch of such a reconciliation check follows this list).
  • Data lineage and governance: The company implements a data lineage system to track the origin and transformations of external data, supporting transparency and auditability.
  • ICFR integration: Recognizing the materiality of this external data, the company expands its ICFR framework to include controls for data completeness, accuracy and validity.
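
To make the validation-protocol step concrete, below is a minimal sketch of a vendor-data reconciliation check in Python. It is illustrative only: the file names, column names and the 2% tolerance are assumptions for this sketch, not details from the use case.

```python
# Minimal sketch of a vendor-data reconciliation check (illustrative only).
# The file names, column names and the 2% tolerance are hypothetical assumptions,
# not details taken from the use case above.
import pandas as pd

TOLERANCE = 0.02  # flag relative differences greater than 2% for manual review

vendor_a = pd.read_csv("vendor_a_market_data.csv")  # assumed columns: asset_id, market_value
vendor_b = pd.read_csv("vendor_b_market_data.csv")  # assumed columns: asset_id, market_value

merged = vendor_a.merge(vendor_b, on="asset_id", suffixes=("_a", "_b"))

# Relative difference between the two sources for each asset
merged["rel_diff"] = (
    (merged["market_value_a"] - merged["market_value_b"]).abs()
    / merged[["market_value_a", "market_value_b"]].abs().max(axis=1)
)
exceptions = merged[merged["rel_diff"] > TOLERANCE]

# Completeness check: assets present in one vendor file but not the other
coverage = vendor_a.merge(vendor_b, on="asset_id", how="outer", indicator=True)
missing = coverage[coverage["_merge"] != "both"]

# Exceptions and gaps are routed to the finance team for manual review before
# the external data feeds the impairment model.
print(f"{len(exceptions)} value discrepancies, {len(missing)} assets missing from one source")
```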

Key takeaway

While external data can enhance decision-making by providing additional insights and supporting reliable AI-driven models, its integration highlights the importance of rigorous validation, governance and controls. These steps, however, are not unique to AI or external data. The same principles and processes should apply to any new or existing data sets incorporated into control frameworks to enhance reliability, consistency and informed decision-making.

2. Validating AI: Verifying reliable outputs

The probabilistic nature of AI tools calls for a structured, human-led review process to validate outputs and enhance their reliability for business purposes. Key considerations include:

  • Human oversight: AI-generated output should undergo detailed human review to validate its completeness, accuracy, reliability and alignment with business requirements. This includes not only verifying data sources but also cross-referencing outputs with other reliable information and consulting subject matter specialists when necessary.

  • Tailored review processes: Adjust the level of review based on the complexity and risk associated with the specific use case. High-stakes outputs, such as financial reporting or forecasting, may require more rigorous validation.

Example use case: AI-assisted analysis of a new revenue contract

A finance team uses an AI tool to analyze a complex, multi-element revenue contract under ASC 606. The AI model identifies two performance obligations but determines they should be combined, resulting in a revenue recognition schedule that is more aggressive than the terms of the contract and the applicable accounting standard support. The company takes the following actions:

  • Reviews AI output: The management team thoroughly reviews the AI tool's identification of performance obligations, comparing its conclusions to the specific terms and conditions of the contract. Based on accounting policies and their judgment, management determines that the two performance obligations should be recognized separately. The allocation of transaction price is adjusted to confirm it aligns with distinct obligations and appropriate timing of revenue recognition.

  • Cross-references: The preparer validates the AI tool's referenced guidance against the latest interpretations of ASC 606 and identifies discrepancies in the criteria used to evaluate distinct obligations. The team updates the AI model with more specific accounting guidance and verifies proper tagging of documents to identify and differentiate multi-element arrangements.

  • Undergoes an enhanced review by specialists: The revised memo and revenue recognition schedule are reviewed by a technical accounting specialist to confirm accuracy and completeness. The specialist recommends incorporating more effective validation steps in the AI-driven drafting process, such as keyword searches for multi-element deliverables and flagging ambiguous contract terms for additional review (a minimal sketch of such a screening step follows this list).

  • Refines process: The team establishes a refined protocol for AI-assisted contract analysis, which includes manual validation of key judgmental areas and periodic updates to confirm AI tools reference evolving interpretations of ASC 606.
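
As a simple illustration of the keyword-screening step the specialist recommends, the sketch below flags contract language that may indicate multi-element arrangements or ambiguous terms. The keyword lists and contract file name are hypothetical assumptions; an actual screen would be tailored to the company’s contracts and accounting policies, and flagged passages would still go to human reviewers.

```python
# Minimal sketch of a keyword screen for multi-element or ambiguous contract
# language (illustrative only). The keyword lists and file name are hypothetical
# assumptions; a real screen would reflect the company's contracts and policies.
import re

MULTI_ELEMENT_KEYWORDS = [
    "bundled", "performance obligation", "installation services",
    "post-contract support", "license and maintenance", "free of charge",
]
AMBIGUOUS_TERMS = ["as mutually agreed", "at the discretion of", "substantially complete"]

def flag_contract_sections(text: str) -> list[dict]:
    """Return sentences that may indicate multi-element arrangements or ambiguous terms."""
    flags = []
    # Naive sentence split; a production process would rely on document structure.
    for sentence in re.split(r"(?<=[.;])\s+", text):
        lowered = sentence.lower()
        hits = [term for term in MULTI_ELEMENT_KEYWORDS + AMBIGUOUS_TERMS if term in lowered]
        if hits:
            flags.append({"sentence": sentence.strip(), "matched_terms": hits})
    return flags

# Flagged sentences are routed to the preparer and technical accounting
# specialist for additional review rather than relied on automatically.
with open("revenue_contract.txt", encoding="utf-8") as f:
    for item in flag_contract_sections(f.read()):
        print(item["matched_terms"], "->", item["sentence"][:80])
```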

Key takeaway

The AI tool’s initial misclassification of performance obligations created a more aggressive revenue recognition schedule, but the structured, human-led review process identified and corrected the issue. While additional review and adjustments may seem to duplicate effort, this process is essential for maintaining compliance with accounting standards and refining the AI model’s performance. Over time, these refinements may help reduce the need for manual intervention, as the AI model becomes better at recognizing distinct performance obligations and applying accounting guidance. This iterative improvement in AI enablement, paired with effective controls and informed oversight, can enhance efficiency, reduce errors and support better decision-making in complex accounting scenarios.

3. The importance of third-party oversight: Evaluating AI dependencies

The integration of AI capabilities into software-as-a-service (SaaS) solutions and other third-party services is rapidly changing how finance functions operate. These services can range from AI-powered Enterprise Resource Planning (ERP) systems to specialized tools for lease accounting, stock-based compensation and record-to-report functions. These dependencies also include the use of AI by third parties providing services to the company, such as system implementation providers and inventory management providers. Key considerations include:

  • Evaluation of AI in third-party systems: Finance executives should evaluate how third-party providers incorporate AI into their services and understand the specific AI models and tools being used. This includes assessing how providers manage risks associated with these technologies.
  • Third-party controls reporting: While SOC 1 reports provide assurance over a service organization’s system of internal control relevant to financial reporting, they often do not cover AI-specific risks, such as those related to models, data, infrastructure, usage, or legal and compliance issues. Finance professionals should request additional information, including expanded external controls reporting that addresses these AI-specific risks and the service organization’s overall governance practices. Organizations should also evaluate whether their own ICFR frameworks need to be enhanced to address gaps.
  • Competence within the finance function: Confirm that the finance team has the necessary skills to critically evaluate AI-driven outputs and third-party reporting, identifying gaps in controls or risks to financial reporting.
  • Periodic monitoring: Integrate processes into broader risk-management frameworks to periodically review third-party AI systems and their associated controls. This monitoring should be commensurate with the risks identified with the service provider and AI-specific tools.

Example use case: AI-driven lease accounting tool

A finance department adopts an AI-driven lease accounting tool to streamline ASC 842 compliance. During implementation, errors are identified in the AI’s interpretation of variable lease terms and renewal options. The company takes the following actions:

  • Supplemental assurance: The company requests detailed documentation from the provider about the AI model's training data, validation processes and inherent limitations, working with the vendor to refine the model.
  • Control enhancements: The finance team implements an additional layer of internal control to manually validate lease terms extracted by the tool during the first reporting cycle.
  • Ongoing monitoring: A periodic review process is established to test the tool's accuracy against a sample of leases, confirming performance remains consistent over time (a minimal sampling sketch follows this list).
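
As a simple illustration of the ongoing-monitoring step, the sketch below compares tool-extracted lease terms against a manually validated baseline for a sample of leases. The file names, column names and sample size are hypothetical assumptions about how tool output and manual baselines might be stored.

```python
# Minimal sketch of a periodic sampling check for the lease accounting tool
# (illustrative only). The file names, column names and sample size are
# hypothetical assumptions about how tool output and manual baselines are stored.
import pandas as pd

SAMPLE_SIZE = 25  # leases re-performed manually each review cycle

tool_output = pd.read_csv("lease_tool_extract.csv")     # lease_id, term_months, renewal_option, monthly_payment
manual_baseline = pd.read_csv("manual_validation.csv")  # same columns, prepared by the finance team

sample_ids = tool_output["lease_id"].sample(n=SAMPLE_SIZE, random_state=42)
compare = tool_output[tool_output["lease_id"].isin(sample_ids)].merge(
    manual_baseline, on="lease_id", suffixes=("_tool", "_manual")
)

checks = {
    "term_months": compare["term_months_tool"] == compare["term_months_manual"],
    "renewal_option": compare["renewal_option_tool"] == compare["renewal_option_manual"],
    "monthly_payment": (compare["monthly_payment_tool"] - compare["monthly_payment_manual"]).abs() < 0.01,
}

# Agreement below an agreed threshold triggers escalation to the vendor and
# expanded manual validation, consistent with the monitoring protocol above.
for field, passed in checks.items():
    print(f"{field}: {passed.mean():.0%} agreement across {len(compare)} sampled leases")
```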

Key takeaway

While business functions and activities may be outsourced or supported by a third-party vendor, responsibility for identifying, understanding, managing and overseeing those risks remains with the organization outsourcing the activities. In these instances, additional oversight and monitoring are essential to confirm controls are in place to help mitigate risks and facilitate compliance.

Defining AI responsibilities: How key stakeholders can take action now

Engagement from various stakeholders within the organization is crucial for leveraging the opportunities that AI offers while managing the associated risks. Roles and responsibilities may evolve as organizations mature in their use of AI, but oversight of and input into new use cases, along with ongoing monitoring of AI models, will likely continue to be critical to realizing transformative results. Here are key actions stakeholders can focus on now.

Finance leaders: CFO, CAO and Controller

As leaders of a company’s finance function, CFOs, CAOs and controllers are responsible for evaluating the impact AI may have on the company’s ICFR, whether through the company’s direct use of AI or by relevant third-party service providers. Key actions include:

  • Understand the company’s AI strategy and communicate how it will likely impact the finance function.
  • Engage with relevant stakeholders to understand the company’s Responsible AI practices and champion consistent execution across the finance function.
  • Design and implement key controls over AI use cases to validate the completeness and accuracy of resulting outputs.
  • Engage in vendor management processes with users of third-party service providers to evaluate their use of AI.
  • Discuss the use of AI in the finance function with the external audit provider on a regular basis.

Sarbanes-Oxley (SOX) program owner

The SOX program owner should evaluate the impact of the company’s AI strategy on the company's ICFR program. Key actions include:

  • Educate key stakeholders responsible for the operation of internal controls on the implications and risks of using AI for the company’s ICFR. Equip the SOX team with skills to evaluate and monitor emerging technologies.
  • Identify areas where AI is being integrated into financial reporting processes and assess whether AI introduces new risks, such as inaccuracies, bias or overreliance on the technology.
  • Evaluate the design of existing controls and potential need for new controls or modifications to existing controls given AI-specific risks.
  • Update SOX-related policies and procedures to incorporate considerations for the use of AI, including detailed documentation of inputs, processing and outputs for transparency and auditability. 
  • Test the design and effectiveness of controls relying upon AI and develop response protocols for issues arising from AI, such as misstatements or control breaches.

Audit committee

As management seeks to improve productivity using AI, the system of internal controls within the company will likely be subject to changes and risks, with oversight by the audit committee. Audit committee members may have an increased need for digital upskilling to enable their understanding and ability to govern new and emerging risks from AI. The audit committee chair should own the conversation on the big picture strategy to enable trust at the board level — including responsible use, accuracy of outcomes, and data security and privacy. The audit committee should also engage with the external auditor to understand the possible impacts of the company’s use of AI on the audit. Key actions include:

  • Engage in ongoing discussions with members of management to understand the company’s AI strategy and implementation, including possible impact to financial reporting and relevant internal controls.
  • Develop and execute a training program for audit committee members to enhance necessary skills to support AI governance. 
  • Understand internal audit’s and the SOX program owner’s plans for testing the company’s AI governance framework, and obtain regular updates.
  • Discuss the company’s approach to Responsible AI practices with the external auditor.

Advancing Responsible AI in finance: A coordinated effort

The adoption of AI in finance functions can present both transformative opportunities and significant challenges. By proactively addressing data integrity, validation of AI outputs and the integration of AI in third-party services, companies can mitigate risks while unlocking the potential of these technologies. An effective governance framework — supported by skilled finance leaders, SOX program owners and audit committees — will likely be critical to navigating this complex and evolving landscape with confidence.

Jennifer Kosar

AI Assurance Leader, PwC United States

Keith Bovardi

Assurance Partner, PwC United States

Kelly VanCura

Assurance Partner, PwC United States
