Mitigating AI Risk

18 Apr 2023

With the growing popularity of OpenAI's ChatGPT, Google Bard, and other services built on generative pre-trained transformers (GPTs), RFA has been monitoring data leaks caused by the improper upload of sensitive corporate data to these platforms. Their terms and conditions state that data shared with them may be reused to train the underlying AI models, effectively making it available to all users of the service.

RFA has a multi-pronged approach to mitigating these risks. First and foremost, we recommend instructing all staff to opt out of data sharing so that their conversations are not used to train the AI models. Please follow this link to do so: OpenAI Opt-Out.

RFA additionally recommends:

  1. Implementing a private, corporate instance of a GPT client to extend corporate controls for security, data-leak prevention, and improved accuracy of output (a minimal sketch of such a client follows this list).
  2. Restricting access to ChatGPT and other public GPT clients.
  3. Updating corporate Information Security policies and procedures to ensure that all users are aware of the dangers of sharing data with public GPT chatbots. RFA IT Compliance recommends implementing a policy that blocks the main consumer website, https://chat.openai.com/, until your security strategy has been developed (see the second sketch below).
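To illustrate the first recommendation, here is a minimal sketch of what a private, corporate GPT client might look like. It is an assumption-laden example, not RFA's implementation: it assumes the pre-1.0 openai Python library, an OPENAI_API_KEY environment variable provisioned centrally, and a hypothetical set of data-leak-prevention patterns. At the time of writing, OpenAI states that data submitted through its API (unlike the consumer chat site) is not used to train its models by default.

    import os
    import re
    import sys

    import openai

    # Corporate API key is provisioned centrally, never typed by staff.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Hypothetical data-leak-prevention patterns; replace with your own DLP rules.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN-like numbers
        re.compile(r"\b\d{13,19}\b"),                         # card/account-number-length digit runs
        re.compile(r"(?i)\b(confidential|internal only)\b"),  # document classification markers
    ]

    def is_safe(prompt: str) -> bool:
        """Reject prompts that match any sensitive pattern before they leave the network."""
        return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

    def ask(prompt: str) -> str:
        """Send a screened prompt to the model and return its reply."""
        if not is_safe(prompt):
            return "Blocked: prompt appears to contain sensitive data."
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask(" ".join(sys.argv[1:])))

A production deployment would add logging, authentication, and centrally managed DLP rules, but the key design point stands: every prompt passes through a corporate checkpoint before reaching the model.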
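For the interim block in the third recommendation, most organizations would enforce the policy through their web filter, firewall, or DNS provider. Purely as an illustration of the intent, the sketch below sinkholes chat.openai.com on a single machine by appending a hosts-file entry; the HOSTS_FILE path shown is the Windows default and is an assumption, and the script requires administrator rights.

    from pathlib import Path

    # Assumed Windows path; on Linux or macOS the equivalent file is /etc/hosts.
    HOSTS_FILE = Path(r"C:\Windows\System32\drivers\etc\hosts")
    ENTRY = "0.0.0.0 chat.openai.com  # blocked per corporate policy"

    def block_chatgpt() -> None:
        """Point the consumer ChatGPT site at 0.0.0.0 so it cannot be reached."""
        contents = HOSTS_FILE.read_text()
        if "chat.openai.com" not in contents:
            with HOSTS_FILE.open("a") as f:
                f.write("\n" + ENTRY + "\n")

    if __name__ == "__main__":
        block_chatgpt()  # requires administrator/root privileges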

To configure a private, corporate GPT client or to request assistance with updates to policy language, please contact the RFA IT Compliance team at itcompliance@rfa.com.

