Navigating the AI revolution – considerations for boards of directors

Like it or not – AI is here. 

The accessibility of advanced generative AI tools to automate tasks, improve customer experience and strengthen competitive edge presents a significant opportunity to improve efficiencies. As AI continues to evolve, its ability to reshape the way business is done has the capacity to substantially disrupt industries. 

Why should boards care?

The reality is that AI is already commonplace in business functions – Google uses AI for search, and, like it or not, employees are using chatbots like ChatGPT.

With such large potential impacts, as well as intricacies around data privacy, ethics, transparency and accountability, AI use must be monitored by the board. 

Employees, swept up in enthusiasm for these new tools, are often operating unsupervised and without sufficient testing or review.

In 2023, Samsung employees infamously shared confidential information – including sensitive source code – while using ChatGPT for help at work. They learnt the hard way that anything shared with ChatGPT may be retained and used to further train the model. Oversight from the board of directors and management is essential if organisations are to harness the technology without compromising confidential information. 

It’s important to remember the saying – if you’re not paying for it, you are the product.


Currently, there is limited legislation on AI, let alone on directors' responsibilities when it comes to this technology. Directors should stay ahead of the curve and ensure AI use is governed.

We’ve put together three key considerations to help directors implement and oversee the governance of AI.

1. Get educated on AI

A good starting point is to build knowledge and understanding of AI: read articles, pursue external training and workshops, and bring in speakers and subject matter experts.

Although directors aren’t expected to learn how to code AI, they do have a duty to stay informed and keep up with developments.

2. Understand how AI operates, or will operate, in your organisation

Alongside education on AI, it is important for directors to understand the current and future use cases of AI in their organisation – specifically, the types of AI technology in use – so they can establish an effective structure to govern and mitigate AI risks. 

This may include:

  • Machine learning systems
  • Virtual assistants and chatbots
  • Recommendation systems for personalisation
  • Facial recognition systems 

Understanding how AI will operate in your organisation also means assessing your organisation’s data quality. As the saying goes – garbage in, garbage out. It’s essential to assess data quality and take steps to improve it where required. 

Part of understanding how AI will operate in your organisation is asking the right questions. Some points to consider:

  • What is required to confirm your company is AI ready?
  • Bias can have a major impact and is a concern in AI models. How can your organisation mitigate this?
  • How do you maintain ownership of IP? Who owns the IP of AI-generated work? 
  • How can you track progress in AI use?
  • Do you have suitable in-house skills, or need additional support?

3. Understand your compliance obligations

The increasing use of AI undoubtedly brings opportunities, but it also comes with potential risks – particularly concerning data privacy, discrimination and ethical considerations.   

With AI – particularly generative AI – developing so quickly, specific AI laws and regulations are still playing catch-up. However, legal obligations may arise from existing legislation: in New Zealand, for example, the Privacy Act, the Human Rights Act and the Fair Trading Act, among others. Directors should ensure they are familiar with how AI use within their organisation could fail to comply with these laws, and how to manage compliance. This understanding is crucial for defining acceptable AI practices and setting the standards for accountability in AI deployment.

By staying informed about legal developments and fostering a culture of responsible AI usage, boards can not only mitigate compliance risks but also position their organisations as leaders in ethical AI practices. A proactive approach is key to navigating the complex and evolving landscape of regulation while protecting organisational reputation and stakeholder trust.


As the AI revolution evolves, directors who educate themselves on AI fundamentals, understand how the technology will function in their organisation and keep a close eye on compliance obligations will be well placed to guide AI integration. The decisions made today will shape the future, and a well-informed board is crucial to harnessing the potential of AI while safeguarding against its risks. In this rapidly moving landscape, vigilance and adaptability will be the hallmarks of successful leadership.

Board management software like StellarBoard can simplify the administration and management of board processes so you can get back to focusing on running your organisation. Learn more here.
