This should serve as a warning against oversharing with AI chatbots. Here are a few things you should keep from AI programs until you can trust them with your privacy:
2. Personal data that can identify you. This includes your name, address, birthday, and CNIC number. OpenAI offers privacy controls that can keep your prompts out of ChatGPT's training data, but there is no guarantee your data will stay private. If a hacker gains access to it, they could use it to commit identity theft or other crimes.
2. Usernames and passwords. Hackers love to get their hands on login data, so don’t share your usernames and passwords with AI programs. There are plenty of password managers out there that can help you keep your passwords safe.
3. Financial information. There is no reason to give ChatGPT your personal banking information. OpenAI will never ask for this information, and ChatGPT doesn’t need it. If a program claiming to be a ChatGPT client asks for financial information, it’s probably malware.
4. Workplace secrets. In ChatGPT's early days, several Samsung employees pasted proprietary code into the chatbot, exposing confidential company information. In response, Samsung banned generative AI bots in the workplace. Google also restricts the use of generative AI at work, so it's best to keep workplace secrets to yourself.
5. Health information. You might be tempted to give ChatGPT prompts that include “what if” scenarios about someone with particular symptoms. However, I don’t recommend using ChatGPT to self-diagnose or diagnose others. Generative AI is not yet advanced enough to do this accurately.
ChatGPT and other chatbots don't yet offer a level of privacy you can trust. Your thoughts and conversations are sent to OpenAI, Google, and Microsoft servers, where they may be used to train the bots. While generative AI products may one day be able to serve as personal psychologists, we're not there yet. If you decide to talk to a generative AI chatbot, be careful about what data you share.