Responsible Artificial Intelligence Framework in Accountancy

The advancement of AI has generated significant interest in its application across the accountancy industry. Despite the excitement around AI's potential to improve work effectiveness and efficiency, there are concerns about the risks of its development and deployment. This final report from the joint ISCA and Nanyang Technological University (NTU) study addresses these risks and concerns. It validates and revises our Responsible AI Framework using key insights from expert interviews. The report also features four practical AI use cases, showcasing the challenges encountered, the benefits derived, and the measures adopted for responsible deployment.

Key points from the quick guide shared in the report include:

  • Use the Responsible AI Framework to guide the design, development and deployment of AI technologies.  
  • Tailor AI solutions to align with organisational, regulatory, social and environmental goals. Ensure they integrate with legacy systems and processes, and pilot solutions with end-users. 
  • Maintain human-in-the-loop processes and independent verification of AI methods and outputs. 
  • Consistent with a shared responsibility framework, communicate and collaborate with AI developers, users and other stakeholders in the value chain to holistically address and manage AI risks.  
  • Stay sufficiently trained and updated on what AI can and cannot do, its risks and opportunities, and its evolving threats.