
On 8 November 2020, the Jean Monnet Centre of Excellence in EU Law (JMCE), part of the Jean Monnet Network entitled “European Union and the Challenges of Modern Society (Legal Issues of Digitalization, Robotization, Cyber Security and Prevention of Hybrid Threats)”, hosted a round table with the participation of Members of the European Parliament. The event was opened by Associate Professor Naděžda Šišková, Head of the JMCE and the main coordinator of the network. The network brings together five scientific institutions: the University of Heidelberg (Germany), Tallinn University of Technology (Estonia), Comenius University in Bratislava (Slovakia), the Taras Shevchenko National University of Kyiv (Ukraine) and the Jean Monnet Centre of Excellence in EU Law at Palacký University in Olomouc, Czech Republic (leading partner). Madam Šišková also introduced the Jean Monnet Network’s two honourable speakers: the Vice-President of the European Parliament, Dita Charanzová, and the former Chair of the European Parliament’s Committee on Legal Affairs (JURI), Pavel Svoboda, who works as Associate Professor of European Law at the Faculty of Law, Charles University in Prague.

The moderator, Naděžda Šišková, welcomed the participants and expressed her great appreciation of the fact that the round table featured representatives of the European Parliament, the institution which had started the debate on robotics. She also mentioned the inspiration drawn from Josep Borrell, High Representative of the EU for Foreign Affairs and Security Policy, who said: “It is important that we join forces and formulate common proposals in all sectors where there is no solid multilateral agreement: artificial intelligence, cyber, disinformation, or Internet data. In all these areas of the future, whether it be cyber or artificial intelligence, there is a regulatory vacuum and this vacuum has to be filled; otherwise, everyone will defend its narrow interests, imposing its standards.” With these words in mind, she asked the speakers whether a key document of the European Parliament of 16 February 2017 (a resolution with recommendations to the European Commission on civil law rules on robotics, 2015/2103(INL)) is sufficient in relation to the ethical and legal questions of robotics.

Dita Charanzová: I remember that day well because it was the first time I saw so many journalists arrive to debate a non-binding resolution – about 200 of them. It was a very important moment three or four years ago, because we presented our first ideas on how we see the future of robots and robotization in the EU. The initial draft scared many people. Together with my colleague Svoboda, we promoted the fact that the author of the word “robot” was Karel Čapek, and therefore any EU “robot agency” should be based in Prague. However, the first draft was too focused on what robots would take from us: changes in the labour market, lost jobs, the issue of taxing robots, whether a robot should have the right to a holiday… At the beginning, it sounded more like a sci-fi novel to me. Nonetheless, there were two or three important issues, including 1) ethical questions, 2) the liability of robots and 3) the issue of trust and trustworthiness. If people do not trust robots, then introducing robots into real life will be difficult. We already had this experience with ATM machines, which were invented long before people started to use them. This is partly because people were afraid to lose their rituals: drinking a cup of coffee with a banker and discussing money. Nonetheless, in Europe we already have the first cases involving autonomous vehicles and the issue of liability. It is necessary to answer this question in a way that also enables AI to develop. And it is also necessary to distinguish between various kinds of AI – for example, there is a difference between AI in transport and AI in medical care. In both cases, a “human-oriented” approach is required.

Pavel Svoboda: The European Parliament’s Committee on Legal Affairs was also responsible for addressing the ethical questions, and for all members of the committee it was a very interesting topic in relation to competences and procedure. I would like to commend the work of the European Parliament, which greatly contributed to the preparation of the document. Two aspects were especially important: the depth of the issue and the grounding of the document in reality. For example, all the documents refuse to acknowledge legal subjectivity for AI. It is also necessary to mention a shift in the debate in the USA. There is a proverb: “In the USA they invent it, in China they build it and in Europe they regulate it.” However, even the USA has acknowledged that regulation is necessary, although the first reaction in Silicon Valley was to rely on existing tools, which were later found to be insufficient. This also has an important dimension for the EU: regulation in the USA and the EU should be as compatible as possible, so that the technology can function without limits across the Euro-Atlantic area.

Naděžda Šišková: In the Resolution of the European Parliament with recommendations to the European Commission on Civil Law Rules on Robotics, there is an article which calls on the Commission, when carrying out an impact assessment of its future legislative instrument, to explore, analyse and consider the implications of all possible legal solutions. Particularly interesting is point 59 f), dealing with the creation of a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently. I would like to ask the honourable speakers what the purpose of this point was. Was it intended to open up the possibility of creating legal personality for electronic persons, who would obtain certain rights and duties, or merely to reduce the issue to liability for damages?

Pavel Svoboda: In its latest reports, the European Parliament recommends refusing legal subjectivity for AI; however, the question is still not fully off the table. For example, two months ago there was a debate over the question of the creation of an original work by AI. It is possible to distinguish between the situation when AI is given the task of creating a work and the situation when AI creates an author’s work on its own initiative – quite spontaneously. This will have to be addressed somehow in law. Fortunately, there is the institute of legal fiction, which might theoretically solve the issue. In the end, however, there must be some clarification setting out clear rights for the partners, owners or founders. So whatever the European Parliament creates, these end users must not be neglected.

Dita Charanzová: My comment also criticised the fact that the debate was not based in reality and was more in the realm of science fiction. It is hard to imagine that liability would pass to the robot and that the creators or owners would not be responsible. That is why I voted against.

Naděžda Šišková: The European Commission did not accept the concept of electronic personality, and in the subsequent White Paper the Commission did not even mention it. I therefore expect that the European Parliament may also have searched for ways of addressing the issue of liability other than through the concept of an electronic person. However, Pandora’s box had been opened, and scientists started to debate the granting of certain rights to robots related to the provision of their services: for example, the right of access to information, the right to remuneration, the right to integrity… Other authors go further – for example, Mark Goldfeder and Yosef Razin are thinking about personal rights, such as the right to marriage. Others suppose the right to a name, the right to citizenship or even the right to a gender. The involvement of the robot Sophia (Hanson Robotics) in different fields of social life, for example, has opened up many issues connected with humanoid robots. On the other side, some experts are warning against expanding rights for robots. Which way is correct? And which way is most probable?

Dita Charanzová: Even today there are complex issues in the proposal which need to be solved so that AI can develop in Europe. However, some ideas are clearly in the sci-fi category. For example, during elections in France there was the question of whether a robot should have the right to a holiday or whether the work done by robots should be subject to taxation. At least for the foreseeable future, robots will not be part of our society without ultimately being controlled and managed by humans. That is why I think it is perhaps too soon to go into such questions.

Pavel Svoboda: I would like to add that, for me, AI is like a machine and that is why it should be treated as a machine. Maybe some people would like to “act as God – the Creator” and create new persons, which will soon be at a level very similar to humans. However, for me this is the wrong way. It is just a machine, a tool. The rights of robots are not on the table now.

Naděžda Šišková: The relation between AI and human rights also has another aspect. The debate is not only about whether to give certain rights to robots or not. Robots already intervene in the rights of human beings. The Committee of experts of the Council of Europe published a study, “Algorithms and Human Rights”, which demonstrated how AI might interfere with human rights: for instance, the right to privacy, the right to free and fair elections, consumer protection, non-discrimination, etc., could all be affected. Also, the European Commission’s White Paper on Artificial Intelligence (2020) mentioned that AI might have an impact on the values on which the EU is based. The European Commission provided details of what this violation might look like but did not provide solutions. Instead, an open public consultation was launched by the Commission. In this respect, I would like to ask about the possible ways of proper legal regulation – is it necessary to update the existing Charter of Fundamental Rights of the European Union, or to create a new special catalogue of legal and ethical norms in this field? Should a new national or international monitoring body be created? How should a sanction regime be set up in order to be really effective? Because it is evident that these issues are already happening.

Pavel Svoboda: I think that this is only partly a legal question. It is necessary to set up good mechanisms against violations. Obviously, it should not be left only to the AI – even though we can all imagine some sort of AI execution (switching it off). This option must be protected. However, much more important are the ethical principles and rules reflecting human rights, and their incorporation into the creation of AI. Human rights must be considered in all actions of AI and thus be a necessary part of the algorithms. I think there is also one more element: AI should not be overconfident and should always ask for clarification. Here monitoring will be necessary and, again, we are opening up the issue of liability. Most probably it will be necessary to answer whether monitoring will be done by humans or with the help of AI.

Dita Charanzová: There are ongoing discussions about AI regulation in other states as well. What is clear is that we will have a very specific approach to AI, different to that of the USA or China. This is in large part because the issues of human rights, privacy and data protection are much more sensitive in Europe than in the USA. I think it will not be necessary to change the treaties because, so far, these issues are addressed by sector-based secondary legislative acts. And here we come back to an ethical framework for AI. What is very important to me is that every person should know when they are interacting with AI. Another issue is anonymized data collection for analysis, where it is necessary to ensure a high level of privacy and personal data protection. This is already the case for monitoring the application of the GDPR, where we have the European Data Protection Board, which is already guarding these principles.

Naděžda Šišková: On 20 October 2020, the European Parliament adopted a new resolution: the Resolution with recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)). The document introduces a new approach. What is your opinion of this output of the European Parliament?

Dita Charanzová: In this resolution we are putting on the table very specific topics we want the European Commission to develop and focus on. There is a call to create a European ethical framework, for instance, and to address the issue of liability, which may differ depending on the level of risk. The document distinguishes between low risk (e.g. advertisement) and high risk, where the person suffering damage will be compensated regardless of the degree of fault. We are asking the European Commission to specify what shall be included in the high-risk area. The debate is not so much about the principle of liability, but about what to place in the high-risk category.

Pavel Svoboda: I do welcome the principle of objective liability. It is very positive that it builds on existing provisions and also highlights the issue of specification and harmonization. Maybe we are at the beginning of some general harmonization in civil law – very similar to the issues solved within consumer protection.

Naděžda Šišková: Regarding cyber security, the relevant norm is Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union. This directive should have been transposed by 9 May 2018. Do you think that this norm is specific enough to ensure an effective solution, or is it rather general? And does it provide a sufficient response to cyber-threats at the EU level?

Dita Charanzová: From my perspective it is just a first step, and it will be necessary to adopt a revision next year. Cybersecurity is the basis for trust. The core of cybersecurity relates to sensitive data processed at the supranational level. Unfortunately, every state has had its own approach. At the EU level, the debate is moving in the direction of greater autonomy and sovereignty for Europe. The question is: what can we do together? The new legislation has proven that it is possible to create common cyber standards: what is certified in one country can be sold in another. Another very important topic is the protection of 5G networks, and here I really appreciate the Czech intelligence services, which are doing a good job of protecting this area.

Pavel Svoboda: I would like to highlight that robots and AI are used in the information war. For example, Estonia has provided information that up to 80% of disinformation might be robot-made, and people often do not know that they are chatting with a robot. Madam Vice-President Charanzová pointed out that Europe will probably go the way of autonomy. However, I think that in relation to the USA, standards should be aligned at the top level; otherwise there is a danger of a “digital colonization of Europe”. So it is necessary for Europe not only not to miss the train, but not to miss the whole railway station. For this reason, it is necessary to cooperate with the USA.

Naděžda Šišková: And the last question: the European Commission, in its programme “Digital Europe”, mentioned “advanced digital skills” as one of its priorities. To what extent are these skills, and in general the ability to keep learning new things, important for the future of the EU?

Dita Charanzová: Yes, it is vital, particularly for the Czech Republic. And here I would like to thank you for guiding students in this sense and drawing their attention to the issues we are facing. There are many programmes and centres which might help make sure that we do not miss the train or the railway station. Thank you very much for your invitation, and I wish the students good luck in their studies.

Pavel Svoboda: Yes, it is key. However, we cannot simply stop at this statement. The Americans have a proverb: “Put your money where your mouth is.” In the case of investment (including in AI) we have big reserves – not to mention the Czech Republic’s ability to obtain EU funding for digitalization. Here we have big debts! I would also like to thank you for the invitation, and good luck to all the students.

Transcript prepared by the editorial team.