
Robotics

Robotics can be thought of as a Grand Challenge in AI: it requires that we address virtually all aspects of AI, including:

  • motor control
  • computer vision
  • speech recognition and synthesis
  • natural language processing
  • information retrieval
  • reasoning (logical and probabilistic)
  • search
  • ...
And of course, machine learning, deep learning, and other techniques for developing any of the above are relevant in making better and more general-purpose robots.

Sorry to disappoint you...

We will not dive deeper into robotics in this course because we think that learning robotics actually requires doing robotics — just like learning programming without actually doing programming is pointless.

Perhaps just to point out a few things that we think are important about robotics: First of all, we already have plenty of robots in everyday use. Our favourite example is the dishwashing machine, the first models of which were invented around 1850. The dishwashing machine is a robot in the sense that it has sensors that help it sense its real-world environment (the water temperature and flow, the amount and dirtiness of the dishes, etc.) and actuators that enable it to affect its environment (heating the water, releasing the detergent, sloshing the water around, etc.). With this definition, even a radiator thermostat is a robot, which is fine by us!

Another thing to know about robotics is more difficult to explain in words, and it is precisely the reason why we don't even try diving deeper into robotics in this course. Briefly, it is the difficulty of operating in the real world. Sensors are never perfect and they malfunction all the time, which makes it really hard to keep track of the state of the system; for instance, the relative location of the robot and the objects around it. Likewise, the actuators, such as the motors that help the robot move around, are imperfect: when the poor robot tries to move its hand or roll forward on its wheels exactly 10 cm, it may in reality end up moving 9 or 11 cm.

The latter issue implies that developing robust robotics solutions often requires operating on a whole different level, often a "lower level" in some sense, than developing other kinds of AI applications that only have to deal with simulated environments or static data rather than the messy real world.
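To make the actuator problem concrete, here is a tiny, purely illustrative simulation sketch (our own toy example, not part of any course software): the robot commands its wheels to move exactly 10 cm per step, the actual movement is slightly noisy, and naive dead reckoning slowly drifts away from the robot's true position.

```python
import random

random.seed(42)

true_position = 0.0       # where the robot actually ends up (cm)
estimated_position = 0.0  # where the robot *thinks* it is (cm)

for step in range(20):
    commanded_move = 10.0                                  # "roll forward 10 cm"
    actual_move = commanded_move + random.gauss(0.0, 1.0)  # really moves roughly 9-11 cm
    true_position += actual_move
    estimated_position += commanded_move                   # naive dead reckoning

print(f"Estimated position:  {estimated_position:.1f} cm")
print(f"True position:       {true_position:.1f} cm")
print(f"Drift after 20 steps: {abs(true_position - estimated_position):.1f} cm")
```

This kind of drift is one reason why real robots constantly need sensor feedback (and techniques such as filtering) to keep correcting their estimate of where they are.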

If you have a chance to try working with actual robots, we encourage you to do it. We used to do robotics in previous editions of the course, using LEGO Mindstorms robots, but the robots we have in stock are becoming so technically outdated that programming them is a real pain in the butt. The latest LEGO models also don't seem to support sensible Python or even Java coding. (Let us know if you know better.) The Raspberry Pi looks like a promising alternative. If you get a chance to try any of these and build and program robots, you'll hopefully understand what we're trying to say here!

With that, let's move on to the last major topic on the course.

Ethics of AI

Our last main topic on this course is the Ethics of AI. Just to get one thing off our chest, we must emphasize that this is indeed "last but not least": the reason why ethics is discussed as the last topic is by no means that it is somehow a less fundamental, nice-to-know topic. This may have been what people (to be honest, us included) thought years ago, but as the impact of AI on society becomes more and more prominent, ethical concerns are also being brought to the surface.

On a related note, the very material you are now reading is a work in progress, which reflects how much the field is in motion as more and more people are talking about it.

Calm down, an engineer has arrived!

One thing that seems to happen repeatedly is that AI people or other technologists arrive at the scene, encounter a problem, and think they must be the first ever to have thought about it. This tends to lead to awkward situations in which the work of other people, who may have been studying the same issues for decades, is ignored.

When you deal with ethical issues related to technology, you should make an effort to look for earlier work on the topic. It may be difficult, since people may have used different terminology, but it's definitely worth at least trying. You're unlikely to be the first to come across the issue. Keep an eye out in particular for work in Science and Technology Studies (STS).

Ethical AI or Ethics of AI?

There are actually several ways in which AI and ethics meet. For instance, there is a topic called machine ethics, which is focused on building computational systems that behave in ethical ways or that have, in a very literal sense, a moral code. You have perhaps heard about the Three Laws of Robotics from the sci-fi author Isaac Asimov's 1942 short story Runaround. Machine ethics focuses on rules of this kind that could, in principle, be encoded into AI systems.

Machine ethics is a legitimate scientific field. However, it is disconnected from existing AI systems and their societal impact because, as you've probably noticed by now, modern AI systems don't operate on a level where rules like "A robot may not injure a human being or, through inaction, allow a human being to come to harm" (Asimov's First Law of Robotics) could be meaningfully applied.

What we will discuss in this part of the course is therefore not machine ethics but the ethics of AI, by which we mean the broader view of the societal and ethical issues related to the use of AI systems. This means shifting the focus from the technical aspects of the AI systems themselves to the entire socio-technical context, which involves not only the AI system(s) but also the humans who are involved in and affected by the use of such systems.

A Note on the Scope of AI Regulation

As we've discussed, there is no commonly agreed definition of AI. Therefore, any regulation that tries to govern the use of AI will be somewhat hard to define in an unambiguous way. In fact, it is our personal opinion that AI shouldn't be regulated as such, but that it should be regulated along with any other technology, or in many cases, not even as technology, but as human activity.

Why would it make a difference, from an ethical or legal point of view, whether something is done using AI or without AI? We can't think of a single case where the use of AI should make something legal or illegal. Any law that depends on whether something is AI or not invites loopholes based on the assumed definition of AI.

However, since the deployment of AI systems amplifies some potential harms to people (mass surveillance, deanonymization, loss of accountability, and complicated authorship questions, to name a few), various regulatory stakeholders, most notably the European Union, have decided to create AI regulation. So the milk is most likely already spilled. This will unavoidably lead to more work for lawyers figuring out when the AI regulation applies and when it does not, and how it will be consolidated with existing sector-specific regulation. But that's what lawyers do... 🤷

EU AI Act

It is very much worth studying the EU AI Act itself, for example using the online guide EU AI Act Explorer.

High-level summary. Below we provide a high-level summary of the AI Act. It is nevertheless important to consult the Act directly, for example using the EU AI Act Explorer, which lets you explore the full text; the same site also offers a Compliance Checker that helps you find the parts of the text most relevant to you.

The AI Act Classifies AI According to Its Risk

  • Unacceptable risk is prohibited (e.g., social scoring systems and manipulative AI).
  • Most of the text addresses high-risk AI systems, which are regulated.
  • A smaller section handles limited-risk AI systems, subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI (e.g., chatbots and deepfakes).
  • Minimal risk is unregulated (including the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters—at least in 2021; this is changing with generative AI).

The Majority of Obligations Fall on Providers (Developers) of High-Risk AI Systems

  • Those that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country.
  • Also, third-country providers where the high-risk AI system’s output is used in the EU.

Users Are Natural or Legal Persons That Deploy an AI System in a Professional Capacity

  • Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers).
  • This applies to users located in the EU and third-country users where the AI system’s output is used in the EU.

General Purpose AI (GPAI)

  • All GPAI model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training.
  • Free and open-license GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk.
  • All providers of GPAI models that present a systemic risk—open or closed—must also conduct model evaluations, adversarial testing, track and report serious incidents, and ensure cybersecurity protections.

Prohibited AI Systems (Chapter II, Art. 5)

  • Deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
  • Exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
  • Biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
  • Social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
  • Assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
  • Compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
  • ‘Real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:
    • Searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;
    • Preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
    • Identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime).

Notes on remote biometric identification:

Using AI-enabled real-time RBI is only allowed when not using the tool would cause considerable harm, and its use must account for affected persons' rights and freedoms. Before deployment, police must complete a fundamental rights impact assessment and register the system in the EU database, though, in duly justified cases of urgency, deployment can commence without registration, provided that it is registered later without undue delay. Before deployment, they must also obtain authorisation from a judicial authority or independent administrative authority, though, in duly justified cases of urgency, deployment can commence without authorisation, provided that authorisation is requested within 24 hours. If authorisation is rejected, deployment must cease immediately, and all data, results, and outputs must be deleted.

High-Risk AI Systems (Chapter III)

Some AI systems are considered ‘High risk’ under the AI Act. Providers of those systems will be subject to additional requirements.

Classification rules for high-risk AI systems (Art. 6)

  • Used as a safety component or a product covered by EU laws in Annex I AND required to undergo a third-party conformity assessment under those Annex I laws; OR
  • Those under Annex III use cases (below), except if:
    • The AI system performs a narrow procedural task;
    • Improves the result of a previously completed human activity;
    • Detects decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or
    • Performs a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.
  • An AI system is always considered high-risk if it profiles individuals, i.e., performs automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement.
  • Providers whose AI system falls under the use cases in Annex III but who believe it is not high-risk must document such an assessment before placing it on the market or putting it into service.
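Very loosely, the classification rules above can be read as a decision procedure. The following is a deliberately simplified, non-authoritative sketch in Python: the function name and boolean flags are our own invention for illustration, only one of the Annex III exceptions is shown, and the Act itself remains the only authoritative source.

```python
def is_high_risk(
    annex_i_safety_component: bool,
    needs_third_party_conformity_assessment: bool,
    annex_iii_use_case: bool,
    only_narrow_procedural_task: bool,   # one of several Annex III exceptions
    profiles_individuals: bool,
) -> bool:
    """Toy, simplified sketch of the Art. 6 classification logic (not legal advice)."""
    # Rule 1: safety components / products under Annex I laws that require a
    # third-party conformity assessment are high-risk.
    if annex_i_safety_component and needs_third_party_conformity_assessment:
        return True
    # Rule 2: Annex III use cases are high-risk...
    if annex_iii_use_case:
        # ...profiling individuals always keeps the system high-risk,
        if profiles_individuals:
            return True
        # ...otherwise an exception (here: narrow procedural task) may apply.
        if only_narrow_procedural_task:
            return False
        return True
    return False
```

Note how, in this toy version, profiling overrides the exceptions, mirroring the rule in the list above.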

The EU AI Act contains both requirements for high-risk AI systems and obligations for the providers of such systems.

Section 2, Requirements for High-Risk AI Systems, contains: Article 8 (Compliance with the Requirements), Article 9 (Risk Management System), Article 10 (Data and Data Governance), Article 11 (Technical Documentation), Article 12 (Record-Keeping), Article 13 (Transparency and Provision of Information to Deployers), Article 14 (Human Oversight), and Article 15 (Accuracy, Robustness and Cybersecurity).

Obligations for providers of high-risk AI systems (Art. 8-17)

  • Establish a risk management system throughout the high-risk AI system’s lifecycle;
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
  • Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance.
  • Design their high-risk AI system for record-keeping to enable it to automatically record events relevant for identifying national-level risks and substantial modifications throughout the system’s lifecycle.
  • Provide instructions for use to downstream deployers to enable the latter’s compliance.
  • Design their high-risk AI system to allow deployers to implement human oversight.
  • Design their high-risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
  • Establish a quality management system to ensure compliance.

Annex III Use Cases

Non-banned biometrics: Remote biometric identification systems, excluding biometric verification that confirms a person is who they claim to be. Biometric categorisation systems inferring sensitive or protected attributes or characteristics. Emotion recognition systems.
Critical infrastructure: Safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.
Education and vocational training: AI systems determining access, admission or assignment to educational and vocational training institutions at all levels. Evaluating learning outcomes, including those used to steer the student’s learning process. Assessing the appropriate level of education for an individual. Monitoring and detecting prohibited student behaviour during tests.
Employment, workers management and access to self-employment: AI systems used for recruitment or selection, particularly targeted job ads, analysing and filtering applications, and evaluating candidates. Promotion and termination of contracts, allocating tasks based on personality traits or characteristics and behaviour, and monitoring and evaluating performance.
Access to and enjoyment of essential public and private services: AI systems used by public authorities for assessing eligibility to benefits and services, including their allocation, reduction, revocation, or recovery. Evaluating creditworthiness, except when detecting financial fraud. Evaluating and classifying emergency calls, including dispatch prioritising of police, firefighters, medical aid and urgent patient triage services. Risk assessments and pricing in health and life insurance.
Law enforcement: AI systems used to assess an individual’s risk of becoming a crime victim. Polygraphs. Evaluating evidence reliability during criminal investigations or prosecutions. Assessing an individual’s risk of offending or re-offending not solely based on profiling or assessing personality traits or past criminal behaviour. Profiling during criminal detections, investigations or prosecutions.
Migration, asylum and border control management: Polygraphs. Assessments of irregular migration or health risks. Examination of applications for asylum, visa and residence permits, and associated complaints related to eligibility. Detecting, recognising or identifying individuals, except verifying travel documents.
Administration of justice and democratic processes: AI systems used in researching and interpreting facts and applying the law to concrete facts or used in alternative dispute resolution. Influencing elections and referenda outcomes or voting behaviour, excluding outputs that do not directly interact with people, like tools used to organise, optimise and structure political campaigns.

Ethics of Generative AI: Hallucinations, Bias, and Environmental Impact

Along with the benefits, generative AI raises concerns about misuse and errors. In several areas, legal frameworks have not yet caught up with technological developments. To mitigate these risks and ensure the technology benefits society, the OECD works with governments to enable policies that ensure the ethical and responsible use of generative AI. (This section is adapted and extended from the OECD Policy Observatory for AI.)

AI “hallucinations”, or convincing but inaccurate outputs

When large language models, or textual generative AI, create incorrect yet convincing outputs, it is called a hallucination. This is unintentional and can happen if a correct answer is not found in the training data. Beyond perpetuating inaccurate information, this can interfere with the model’s ability to learn new skills and even lead to a loss of skills.

Fake and misleading content

While generative AI brings efficiencies to content creation, it also poses risks that must be considered carefully. One major concern is the potential for generating fake or misleading content. For example, generative AI can be used to create realistic-looking but entirely fabricated images or videos, which can be used to spread disinformation or deceive people. This poses challenges for the detection and verification of digital media.

Intellectual property right infringement

Generative AI raises intellectual property rights issues, particularly concerning:

  • unlicensed content in training data,
  • potential copyright, patent, and trademark infringement of AI creations,
  • and ownership of AI-generated works.
Whether commercial entities can legally train ML models on copyrighted material is contested in Europe and the US. Several lawsuits have been filed in the US against companies that allegedly trained their models on copyrighted data without authorisation, making and storing copies of the works in the process. These decisions will set legal precedents and impact the generative AI industry, from start-ups to multinational tech companies.

Job and labour market transformations

Generative AI is likely to transform labour markets and jobs, but exactly how is still uncertain and being debated among experts. Generative AI could automate tasks traditionally performed by humans, leading to job displacement in some industries and professions.

While some jobs might be automated or eliminated, generative AI could transform existing jobs. This could lead to humans performing tasks more efficiently and generating new creative possibilities. This transformation would lead to a shift in required skills.

Addressing these risks will require combining technical solutions, policy frameworks, and responsible practices to ensure that generative AI benefits society while minimising potential harm.

Energy consumption and the environment

Generative AI requires tremendous computing power and consumes natural resources, leading to a significant ecological footprint. Poorly controlled use of generative AI in areas like climate modelling and environmental simulations could unintentionally exacerbate ecological challenges and undermine conservation efforts.

Bias, stereotype amplification and privacy concerns

AI can analyse large amounts of data to extract valuable information that humans could not otherwise see. The risk, however, is an amplification of existing biases present in the training data. If the training data contains biases, such as racial or gender stereotypes, the generative AI model may inadvertently produce biased outputs, such as misleading or inappropriate content. This can perpetuate and even amplify societal inequalities and discrimination.
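As a toy illustration of how bias in the training data ends up in the output (our own example, not taken from the OECD material), consider a "model" that merely learns word co-occurrence counts from a tiny, skewed corpus: its most likely continuation simply reproduces the skew.

```python
from collections import Counter

# A deliberately tiny and skewed "training corpus".
biased_corpus = [
    "the doctor said he would help",
    "the doctor said he was busy",
    "the doctor said he was tired",
    "the nurse said she would help",
    "the nurse said she was kind",
]

# Count which pronoun follows each profession ("... said he/she ...").
counts = {"doctor": Counter(), "nurse": Counter()}
for sentence in biased_corpus:
    words = sentence.split()
    for profession in counts:
        if profession in words:
            idx = words.index(profession)
            counts[profession][words[idx + 2]] += 1

# "Generating" the continuation by picking the most frequent pronoun
# faithfully reproduces the stereotype present in the data.
for profession, pronouns in counts.items():
    print(profession, "->", pronouns.most_common(1)[0][0])
# prints: doctor -> he, nurse -> she
```

Real generative models are of course vastly more complex, but the underlying mechanism is the same: they reflect whatever regularities, including biased ones, their training data contains.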

Generative AI also raises privacy concerns. By training on large amounts of data, these models may inadvertently capture and reproduce private or sensitive information. For example, a language model trained on text data may reproduce personal details or confidential information.

Potential future risks and concerns

In the near term, generative AI can exacerbate challenges as synthetic content with varying quality and accuracy proliferates in digital spaces and is then used to train subsequent generative AI models, triggering a vicious cycle. Over the longer term, emergent behaviours such as increased agency, power-seeking, and pursuing hidden sub-goals to achieve a core objective might not align with human values and intent. If manifested, such behaviours could lead to systemic harms and collective disempowerment. Given these risks, overreliance, trust and dependency on AI could cause deep, long-term harm to societies. And a concentration of AI resources in a few multinational tech companies and governments may lead to a global imbalance.

This part is to be continued. In the meantime, please check out the lecture slides and watch the lecture recording on Moodle in case you missed the lecture... Here are the exercises nonetheless.

In their blog post, Eighteen pitfalls to beware of in AI journalism, Sayash Kapoor and Arvind Narayanan present recurring problems in AI coverage in the press. You can find a list of the problems, or pitfalls, here.

Find three examples of news stories about AI. Try to find stories that fall into as many of the pitfalls as you can. Bonus points if the stories you find come from reputable sources like MIT Tech Review, the Atlantic, or (I dare you!) the University of Helsinki.

In each case, propose ways to fix the stories as best as you can.

Ethics can be thought of not only as a set of standards of right and wrong, but also as the process by which such standards can be deliberated and refined, usually through dialogue between the stakeholders (see here).

So let's "do ethics"!

Choose one of the following talking points:

  1. AI-generated art is currently a hot topic, with systems like DALL-E, Imagen, and Stable Diffusion being used to create amazing artwork. Who (or what) should be entitled to intellectual property rights in these cases?
  2. The real-world impact of AI is growing all the time. Until recently, AI researchers were mostly oblivious to such issues and weren't really expected to consider the societal implications of their work. Nowadays, some AI conferences (most prominently the NeurIPS conference, see this study) require a broader impact statement or a checklist that asks the authors of submitted research articles to think about the impact of their work. Some argue that this is a path towards censorship and political ideology interfering with academic freedom. Others argue that science can no longer pretend to be separate from the real world.

Your task is to have a debate about the topic of your choice (from the above two alternatives). You can have the debate with your alter ego, pretending to take two opposing views on the topic. Even better, you can ask your fellow student or someone else to assume the opponent's role.

Document your dialogue by summarizing at least ten (10) arguments and counter-arguments in an alternating order like

You: I think X.
Your Opponent: No, X is wrong because Y.
You: That may be, but even so, there's Z...

So both you and your opponent get to make at least five points. (And no, "you're a f**king idiot" does not count as an argument.)

Hint: Remember to remain polite and respectful.

Hint 2: In case you don't have an opponent available in real life, there's nothing wrong with trying social media like Twitter... Just remember the first hint.

Consider the case of the US pharmacy chain Rite Aid: “Rite Aid deployed artificial intelligence-based facial recognition technology in order to identify customers who may have been engaged in shoplifting or other problematic behavior. The complaint, however, charges that the company failed to take reasonable measures to prevent harm to consumers, who, as a result, were erroneously accused by employees of wrongdoing because facial recognition technology falsely flagged the consumers as matching someone who had previously been identified as a shoplifter or other troublemaker. The company did not inform consumers that it was using the technology in its stores and employees were discouraged from revealing such information. Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing, according to the complaint. In addition, the FTC says Rite Aid's actions disproportionately impacted people of color.” (FTC US, 2024)

The Guardian: "As part of its contract with two private, unnamed vendors, Rite Aid created or directed the companies to create a database of “persons of interest” that included images of the people and other personally identifying information. Those images were often low quality and were captured through Rite Aid's CCTV cameras, the facial recognition cameras or on the mobile phones of employees, according to the settlement."

How could the obligations that the EU AI Act places on high-risk systems have prevented this problem? Write a one-page essay describing the solution.

The development of this course relies on your feedback.

Submit the feedback anonymously through the university feedback system Norppa.
