Published: 3 weeks ago

A company can be at serious risk if it uses AI in an ill-considered way

In recent years, artificial intelligence-based services have increasingly captured the imagination of companies. Many have already adopted AI to support, speed up, or even streamline their operations and cut costs, but the expert warns that before introducing such applications, companies must first bring their data protection and data security practices up to date and make employees aware of them.


More and more workers are encountering AI in some form, most of them probably through the company they work for. But this can be risky in more ways than one, so it is worth being careful.



Dr. Krisztián Bölcskei has been working for many years on both data protection issues and the rise of artificial intelligence, its features and pitfalls.



He is the creator of DPO Store, an application that assists data protection officers, and his two main areas of expertise, data protection and security on the one hand and AI on the other, are now intertwined. This gives him good insight into the gaps and mistakes that can hinder businesses that want to implement, or are already using, AI.



"Data protection is an ongoing task that companies should have been complying with since 2018, and in fact even before that, but the fact is that there is still a significant backlog in this area. The introduction of artificial intelligence brings these gaps and errors to the surface, and that is where the real problems start," said Dr. Bölcskei, who believes it is worth companies taking a step back to put these processes in order and first secure the company's operations on the data protection and data security side.



Can AI see into trade secrets?



Data protection officers and privacy experts from dozens of Hungarian businesses have also pointed out that in many cases neither employers nor employees exercise due diligence about what data and information they share with AI tools. Because they are not properly trained, employees are neither aware nor cautious, and so they unwittingly share information with AI that could pose a risk later on.



"We need to be aware that whenever we ask AI a question, we have already disclosed something about ourselves or even about the business. AI learns and evolves, and it learns the most from us and about us. Even a simple language-learning app will casually bring up information about you that you mentioned in a previous conversation. This is both frightening and thought-provoking, because in the course of their work employees may upload documents containing trade secrets, for example for translation, or share know-how, strategies and the like with AI. In this way, AI gains insight into internal processes; sometimes it is those very processes it is asked to create. On top of this, the AI always processes personal data, about the user or even about people the user has entered into the system. In many cases, employees are simply instructed to use AI, but they are not properly informed and are given no rules about what may and may not be shared with it," he added.



"The solution would be for businesses first to get their data protection and data security compliance in order, then select the AI that best fits their purpose and guarantees adequate security, and only then introduce AI into their operations step by step, unit by unit and for targeted activities, continuously monitoring the rollout, developing and refining internal rules, and training colleagues. In the future there may also be a need for specialists, similar to data protection officers, to oversee, support or even monitor the implementation and use of AI in companies," the expert added.





Photo: Unsplash


© Copyright HRKnowledgehub.com - 2024