Trade news

Employees reveal company secrets to artificial intelligence

With the growing popularity of AI tools, more and more employees use them for professional tasks. This is a ticking time bomb: most often, the accounts employees use are private ones that operate outside the control of the IT department. LLM (Large Language Model) tools work in such a way that information entered into them can be absorbed into the provider's data, and a large part of that information is sensitive or should never leave the organization.

Uncontrolled flow of information

Cyberhaven Labs analyzed the usage patterns of generative artificial intelligence tools among 3 million employees. Over the 12 months from March 2023 to March 2024, the amount of corporate data entered by employees into AI tools increased by 485%. Fully 96 percent of employees use tools from the technology giants, i.e. products from OpenAI, Google and Microsoft.

Employees who use AI tools are most often found at technology companies. In March this year, nearly one in four employees (23.6%) in the technology industry entered company data into an AI tool. These employees were also far more likely than others to put data into AI tools (over 2 million events per 100,000 employees) and to copy data out of them (1.7 million events per 100,000 employees). They were followed by employees in media and entertainment (5.2%), finance (4.7%) and pharmaceutical companies (2.8%).

Sensitive data

It turns out that as much as 27.4 percent of the information that employees enter into AI tools is sensitive data. A year earlier, this share was much lower, at 10.7%.
– AI tools have great potential to transform workplaces for the better, increasing employee efficiency and productivity. These tools make it faster and easier for employees to perform their tasks, but they also bring a number of challenges. It is therefore important to manage this technological change properly and to take care of issues such as security and ethics, explains Natalia Zajic of Safesqr, a company specializing in cybersecurity. – AI models work in such a way that the data they are fed becomes part of their training data. Entering information about the organization into such tools may create a risk of its misuse, the expert adds.

What data is entered?

The most popular type of sensitive data entered into AI tools is customer-service information (16.3 percent of sensitive data), which includes confidential content from customers reporting their problems. Other categories include source code (12.7%), research and development materials (10.8%), employee records (3.9%), and financial data (2.3%).
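A breakdown like the one above suggests the kind of screening an organization could apply before a prompt leaves its network. The sketch below is purely illustrative: the regular expressions, category names, and function names are assumptions for demonstration, not anything described in the article, and real data-loss-prevention tooling is far more sophisticated.

```python
import re

# Illustrative, simplified patterns for the data categories the survey mentions.
SENSITIVE_PATTERNS = {
    "customer_service": re.compile(r"\bticket\s*#?\d+\b|\bcustomer complaint\b", re.I),
    "source_code": re.compile(r"\bdef \w+\(|\bclass \w+[:({]|#include\s*<"),
    "financial": re.compile(r"\b(revenue|EBITDA|invoice)\b", re.I),
    "employee_record": re.compile(r"\b(salary|employee id)\b", re.I),
}

def classify_prompt(text: str) -> list[str]:
    """Return the sensitive-data categories that a prompt appears to contain."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def is_safe_to_send(text: str) -> bool:
    """Allow a prompt only if it matches no sensitive category."""
    return not classify_prompt(text)
```

Such a gate would sit between employees and external AI services; anything flagged could be blocked or routed for review instead of being sent to a private account.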

The biggest problem is employees feeding AI tools with data about the organization through private accounts. As many as 82.8 percent of the legal documents that employees place in AI tools get there this way. This could lead to company data being used in potentially dangerous ways. An example? 3.4 percent of the research and development materials created in March this year came from AI tools. What if a patent is granted on their basis? In the same period, 3.2 percent of source code was generated by AI tools, which may raise security concerns. AI tools are already responsible for 3% of graphics and designs, which in turn may involve copyright and trademark infringements.

Today, the average employee puts more corporate data into AI tools on a Saturday or Sunday than they did in the middle of the workweek a year ago.

News source
