INTERVIEW | Russian expert of the UN group on AI: regulation should not hold back innovation


Artificial intelligence technology is developing faster than the legislative framework for AI.

The development of technology should not be an end in itself: the innovations worth introducing are those that help people. Artificial intelligence is precisely the kind of invention that can make life better for people all over the world. Andrey Neznamov, head of the Center for Human-Centered Artificial Intelligence at Sberbank, who was recently appointed to the UN Independent International Scientific Panel on AI, is confident of this.

In an interview with Evgenia Kleshcheva of the UN News Service, the lawyer and regulatory expert emphasized that participation in this body opens up the opportunity for a broader and balanced discussion on the future of AI.

“This is a very important step towards creating safe AI that will benefit everyone. It is important to do this at the international level,” Neznamov noted. He was included among 40 experts in the fields of machine learning, cybersecurity, healthcare, human rights, child development, data management and other related disciplines. All members of the group serve in their personal capacity, independent of any government agency, private company or organization.

The main task of the group is to summarize and analyze existing research on the opportunities and risks associated with artificial intelligence. The functioning of the new structure is closely connected with another specialized mechanism of the UN system, the Global Dialogue on AI Governance. Neznamov also pointed out that AI does not operate in a legal vacuum: many AI products are automatically subject to current industry legislation.

“We underestimate the power of already established norms. As a rule, I have noticed that in four out of five cases the introduction of AI products is covered, almost without problems, by existing rules and regulations,” Neznamov explained. “I really believe that many risks associated with artificial intelligence are overestimated because they are treated as if in a vacuum, as if no legislation existed in the countries of the world.” At the same time, he admitted that in some cases legislation does need to be adapted. Traffic rules, for example, do not currently provide for the absence of a driver at the wheel, so the emergence of driverless vehicles requires updating them.

Underestimated threats

Among the underestimated risks, the expert identified potential threats associated with the development of artificial general intelligence (AGI).

“When I started working on artificial intelligence, the conversation about the danger of AGI getting out of control belonged almost entirely to the realm of science fiction,” Neznamov noted. As a lawyer, he had to consider all possible scenarios, but many of his interlocutors did not take such risks seriously and were not ready to discuss them. Now the development of general AI is one of the main topics of discussion among both experts and the general public. At the same time, according to the specialist, “perhaps we still do not assess these risks seriously enough.”

General rules

Neznamov called the lack of a unified global approach to regulating AI another challenge, recalling that even with regard to the Internet, humanity has not yet developed universal rules. At the same time, countries of the Global South need to participate more actively in international discussions: the positions of developing countries, according to the Russian expert, often remain underrepresented. It is important to support a pluralism of opinions, including voices from all regions of the world and in different languages. The group's work, he suggested, could equally result in a recommendation on international regulation or in a conclusion that such regulation is unnecessary.

“Maybe, on the contrary, it will turn out that national legislation is already doing an excellent job,” he noted.

Regulation without slowing down innovation

One of the key challenges is finding a balance between control and development. The situation is complicated by the speed with which artificial intelligence is changing.

“While you are writing a law, while you are adopting it, while it is coming into force, the technology can change radically,” said Neznamov.

According to the expert, both excessive regulation and the complete absence of rules can be equally damaging to the development of AI.


Among the effective tools, he named experimental legal regimes (“digital sandboxes”), which allow regulation to be tested quickly, bypassing existing laws. A second example, which, according to the lawyer, works successfully in Russia, is “controlled self-regulation.”

“For example, there is the Russian National Code of Ethics for AI. It has more than a thousand signatories. Effectively, everyone who develops or deploys AI has signed this code, which sets an ethical framework for how the field should develop in the absence of strict laws,” Neznamov explained. “The state also signed this code. Within the document there is a certain dialogue. If we understand that some ethical provision is too strict, that it creates problems in society, or that developers have signed up but are not fulfilling their obligations, then the state will pass a law.”

The expert noted a consensus between the state and developers in Russia regarding ethical standards: the private sector is ready to voluntarily accept certain restrictions. “Technology for technology's sake is a very bad conversation, because AI is needed to make human life better. Human-centricity implies that at the center of any conversation about AI there is always a person,” he noted.


As part of his work at the Sberbank Center, Neznamov and his team evaluate the impact of technology on people’s lives, seeking to minimize negative consequences and enhance positive ones. The regulatory specialist brings to the work of the UN group an interdisciplinary approach that combines science, practice and public opinion, as well as “cautious techno-optimism.” “Let technology develop, let it be safe, let there be frameworks, but we must do everything to ensure that technology reveals its potential for the benefit of people,” he said.
