AI-based technologies are already a staple in one out of every two companies, with a further 33% of commercial organizations expected to adopt them within the next two years. In various forms, AI is rapidly becoming ubiquitous. Its economic advantages range from boosting customer satisfaction to driving direct revenue growth. As businesses gain a deeper understanding of AI systems’ capabilities and limitations, their implementation will only grow more effective. Nevertheless, it is evident that the risks associated with AI adoption must be tackled proactively.
Even the early stages of AI deployment have highlighted the potential for costly errors—affecting not just financial outcomes but also reputation, customer trust, patient well-being, and more. For cyber-physical systems such as autonomous vehicles, safety concerns take on heightened significance.
Attempting to implement safety measures after the fact, as was often the case with earlier generations of technology, can prove both costly and, in some cases, impossible. For context, global economic losses due to cybercrime were estimated at $8 trillion in 2023 alone. Against this backdrop, it’s unsurprising that nations vying for technological leadership in the 21st century are pushing to establish AI regulations. Examples include China’s AI Safety Governance Framework, the European Union’s AI Act, and the United States’ Executive Order on AI. However, regulatory frameworks rarely delve into technical specifics or offer practical recommendations; that isn’t their role. Bridging this gap, and putting requirements such as reliability, ethics, and accountability in AI decision-making into practice, calls for clear, actionable guidelines.
To assist practitioners in implementing AI today and ensuring a safer future, Kaspersky experts have developed a set of recommendations in collaboration with Allison Wylde, UN Internet Governance Forum Policy Network on AI team member; Dr. Melodena Stephens, Professor of Innovation & Technology Governance at the Mohammed Bin Rashid School of Government (UAE); and Sergio Mayo Macías, Innovation Programs Manager at the Technological Institute of Aragon (Spain). The document was presented during the panel “Cybersecurity in AI: Balancing Innovation and Risks” at the 19th Annual UN Internet Governance Forum (IGF) for discussion with the global community of AI policymakers.
Following the practices described in the document will help the engineers who develop and operate AI solutions, namely DevOps and MLOps specialists, achieve a high level of security and safety for AI systems at every stage of their lifecycle. The recommendations in the document need to be tailored for each AI implementation, as their applicability depends on the type of AI and the deployment model.
The diverse applications of AI force organizations to address a wide range of risks:
“Under the hood” of the last three risk groups lie all the typical cybersecurity threats and tasks associated with complex cloud infrastructure: access control, segmentation, vulnerability and patch management, building monitoring and response capabilities, and supply-chain security.
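To make one of these tasks concrete, below is a minimal, hypothetical sketch of a supply-chain control in a Python-based MLOps pipeline: verifying a model artifact against a pinned SHA-256 digest before it is loaded into an inference service. The file path and expected digest are illustrative placeholders, not values taken from the guidelines.

```python
# Hypothetical supply-chain check: refuse to load a model artifact whose
# SHA-256 digest does not match the value pinned at training/release time.
import hashlib
from pathlib import Path

# Illustrative placeholders; in practice the pinned digest would come from a
# signed manifest or a model registry, not a hard-coded constant.
MODEL_PATH = Path("models/classifier-v3.onnx")
EXPECTED_SHA256 = "replace-with-pinned-digest"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large models need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(MODEL_PATH)
    if actual != EXPECTED_SHA256:
        raise RuntimeError(
            f"Model artifact hash mismatch: expected {EXPECTED_SHA256}, got {actual}; "
            "refusing to load a potentially tampered model."
        )
    print("Model artifact integrity verified; safe to load.")
```

Streaming the file in chunks keeps memory use bounded even for multi-gigabyte models, and the same pattern extends naturally to checking training datasets and third-party dependencies.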
To implement AI safely, organizations will need to adopt both organizational and technical measures, ranging from staff training and periodic regulatory compliance audits to testing AI on sample data and systematically addressing software vulnerabilities. These measures can be grouped into eight major categories: