How to prevent (customer) data leaks when using ChatGPT?

Employees in many companies are using chatbots on a daily basis – with or without authorisation. However, ChatGPT and the like can cause major data breaches. How to protect against them?

Often unlawful

What customer profiles are buying in your shop? Where do deliveries still need to be made? How do you prepare a quote in Polish? These are all questions that AI chatbots like ChatGPT can answer in an instant: all you have to do is upload your documents and the digital assistant will carefully summarise them. So it is no surprise that many employees use such digital assistants in the workplace, either to save time or to make tedious tasks easier.

However, their use is often unlawful: as soon as personal data are entered, it counts as a data leak, because all the data such an online AI tool receives are stored and used to train its model. “A data leak involves unauthorised and/or unintentional access to personal data”, the Dutch Personal Data Authority explains.
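
The simplest safeguard is to keep personal data out of such tools altogether. As a purely illustrative sketch, assuming a company wanted to mask the most obvious identifiers before any text reaches an online chatbot, a small script could replace e-mail addresses and phone numbers with placeholders (the patterns and the `scrub` helper below are hypothetical examples, not a complete personal-data filter):

```python
import re

# Illustrative only: mask the most obvious personal identifiers before any
# text is pasted into or sent to an online chatbot. The patterns below are
# assumptions for this sketch, not an exhaustive personal-data filter.
EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Quote for Jan de Vries, jan.devries@example.com, +31 6 1234 5678."
    print(scrub(sample))
    # Quote for Jan de Vries, [EMAIL], [PHONE].
```

Note that a name such as “Jan de Vries” still slips through: simple pattern matching is no substitute for proper anonymisation, let alone for keeping customer data out of these tools in the first place.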
