
Artificial Intelligence (AI) Transparency Statement

The Company is committed to the safe and responsible use of artificial intelligence (AI).

We believe AI offers significant opportunities to improve productivity and service delivery within our workplace.

We govern our AI in line with applicable laws and regulations, having regard to the Voluntary AI Safety Standard (VAISS) and current best practice.

Our approach to AI

The Company applies the guidance outlined in the VAISS and has chosen to limit its use of AI to low-risk use cases.

We assess each use case against the 10 guardrails of the VAISS. These are:

  • 1 - Accountability and Governance
  • 2 - Risk Management
  • 3 - Data and System Security
  • 4 - Testing and Monitoring
  • 5 - Human Oversight
  • 6 - Transparency to End-Users
  • 7 - Contestability
  • 8 - Supply Chain Transparency
  • 9 - Record Keeping
  • 10 - Stakeholder Engagement

This assessment determines whether a use case is low, medium, or high risk. A low-risk use case means AI does not:

  • directly interact with, or significantly impact, the public without human intervention
  • risk the security of the information or data we hold
  • harm the privacy of any individual, including our staff.

We reassess our use cases when:

  • a notable change is made to our approach to AI or use of AI
  • a use case progresses to another stage in its life cycle
  • an AI risk or harm is identified with a use case.

How we use AI

We allow our staff to use AI in their work with the objective of enhancing productivity and service delivery. This includes enterprise AI deployed in our closed internal information and communication technology (ICT) environment, as well as publicly available AI that is not deployed in our closed internal ICT environment.

The tasks our staff complete using AI fall into several usage patterns and domains, as outlined in the Digital Transformation Agency's (DTA) Classification system for AI use. These are:

  • the Analytics and insights usage pattern, primarily in the Scientific, and Policy and legal domains, where the sensitivity of the data is low
  • the Workplace productivity usage pattern, primarily in the Service delivery, and Corporate and enabling domains.

Our staff may use AI to:

  • assist in the creation of accessible versions of government documents
  • assist with research and analysis
  • summarise data across multiple sources
  • interrogate, analyse and obtain insights from datasets
  • review and communicate workplace policies, procedures and processes
  • assist in the analysis, creation or summarisation of documents, emails or other content
  • create and debug code used in data analysis, management and processing
  • assist in the creation of meeting minutes or interview transcripts
  • search information repositories and retrieve documents, information or data.

We do not use AI within the Decision making and administrative action or Image processing usage patterns, or the Compliance and fraud detection, and Law enforcement, intelligence and security domains.

How we govern our AI

The Company has a risk-based approach to the use of AI. This approach focuses on identifying, evaluating and monitoring the level of risk associated with implementing AI systems. Once implemented, we monitor the effectiveness of our AI systems by:

  • having robust governance arrangements, policies and processes; and
  • monitoring AI usage.

Our AI Governance Committee

We have an AI Governance Committee (the Committee) to oversee our adoption and use of AI within the Company. The functions of this Committee include:

  • ensuring AI is introduced and implemented safely
  • identifying, assessing and managing AI risks and opportunities
  • promoting a culture of safe and responsible use of AI
  • overseeing and implementing policies and advice.

The Committee's membership includes the following senior representatives:

  • Company Director
  • Manager and Associate
  • CAD Manager

Our internal policies and processes

The Company has policies and processes for the adoption and use of AI by our staff, including:

  • Acceptable use policy
  • Guidance on the use of generative artificial intelligence
  • Information security management policy framework
  • Information and data policy
  • Privacy policy
  • Risk management policy framework.

These are regularly reviewed to ensure they remain fit for purpose.

We provide our staff with guidance and training on the safe and responsible use of AI. Staff are required to complete this training prior to being granted access to our secure internal AI.

Who to contact regarding our statement

For any questions regarding this statement, or for more information about how the Company uses AI, please email: info@provenpm.com.au

This statement is authorised by the Company Board of Directors.