Google Warns Employees Never to Share Private Info with ChatGPT or Bard Chatbots

Google, maker of the popular AI chatbot Bard, has warned company employees and the general public against sharing private information with chatbots. On its privacy notice page, updated a few days ago, Google urged users: “Please do not include information that can be used to identify you or others in your Bard conversations.”

The warning makes it clear that AI chatbots such as OpenAI’s ChatGPT, which is backed by Microsoft, and Google’s Bard are not to be trusted with confidential information. Even though Google and Microsoft market distinct generative artificial intelligence tools to the entire world, both have made it clear that their products have limitations where user privacy and the accuracy of their output are concerned.

Google went further, warning its engineers against relying heavily on computer code generated by chatbots, since some code suggestions could be undesirable. For context, ChatGPT, Bard, and other similar AI chatbots can be used to write articles, software code, emails, adverts, and short stories.

But given that these AI tools can surface information entered or generated by one user to another user – and that user prompts can be used to further train the chatbots’ language models – there is always the risk of data leaks. In several tests, the chatbots have been found to reproduce excerpts from published novels and other copyrighted material for users who may not know the source of that material.

Moreover, AI companies store the details of chats that human users hold with these chatbots, and employees can access and review the content to improve the chatbots’ capabilities. With AI chatbots learning from the inputs provided by human users, there is always the risk that they will reveal private data provided by one user to others around the world.

Cloudflare, a company that defends websites against cyberattacks, said it is working on a product that will let people and businesses prevent certain private data from being entered into chatbots. With tags and alerts placed on that data, inputs going into ChatGPT and Bard would be restricted to only what the user wants to share, without the risk of privacy leaks.
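To make the tag-and-alert idea concrete, here is a minimal sketch of how such a pre-submission filter might work, assuming a simple regex-based approach. The function name, data classes, and patterns below are illustrative assumptions, not a description of Cloudflare’s actual product.

```python
import re

# Illustrative patterns for data classes a user might tag as private.
# These regexes are simplified examples, not production-grade detectors.
PRIVATE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact tagged data classes from a prompt before it leaves the network.

    Returns the scrubbed prompt plus a list of alerts describing what was
    redacted, mirroring the tag-and-alert approach described above.
    """
    alerts = []
    for label, pattern in PRIVATE_PATTERNS.items():
        if pattern.search(prompt):
            alerts.append(f"Redacted {label} before sending to chatbot")
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, alerts

if __name__ == "__main__":
    raw = "Summarize this: contact Jane at jane.doe@example.com or 555-123-4567."
    safe, alerts = scrub_prompt(raw)
    print(safe)    # private details replaced with placeholders
    print(alerts)  # audit trail for the security team
```

In practice, a tool like this would sit between the user and the chatbot’s API, so that only the scrubbed prompt ever reaches ChatGPT or Bard while the alerts give the user or their security team a record of what was withheld.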