Robotics

Robotics can be thought of as a Grand Challenge in AI: it requires that we address virtually all aspects of AI, including:

  • motor control
  • computer vision
  • speech recognition and synthesis
  • natural language processing
  • information retrieval
  • reasoning (logical and probabilistic)
  • search
  • ...
And of course, machine learning, deep learning, and other techniques for developing any of the above are relevant in making better and more general-purpose robots.

Sorry to disappoint you...

We will not dive deeper into robotics in this course because we think that learning robotics requires actually doing robotics, just as trying to learn programming without actually writing programs would be pointless.

Perhaps just to point out a few things that we think are important about robotics: First of all, we already have plenty of robots in everyday use. Our favourite example is the dishwashing machine, the first models of which were invented around 1850. The dishwashing machine is a robot in the sense that it has sensors that help it sense its real-world environment (the water temperature and its flow, the amount and dirtiness of the dishes, etc.) and actuators that enable it to affect its environment (heating the water, releasing the detergent, sloshing the water, etc.). With this definition, even a radiator thermostat is a robot, which is fine by us!
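To make the sense-act definition concrete, here is a minimal Python sketch of a thermostat-style control loop. Everything in it is invented for illustration (the read_temperature function, the temperatures, the noise level); it is not the code of any real device:

    import random

    # Hypothetical sensor read: a real thermostat would query hardware here.
    # We simulate a noisy reading around an (invented) true room temperature.
    def read_temperature():
        return 21.0 + random.gauss(0, 0.5)

    # One sense-act cycle: sense the environment, then act on it.
    def control_step(target=22.0):
        temperature = read_temperature()  # sense
        if temperature < target:
            return "heater on"            # act: heat the room
        return "heater off"               # act: let it cool

    for _ in range(3):
        print(control_step())

However simple, this loop has both ingredients of the definition above: a sensor reading and an actuation decision based on it.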

Another thing to know about robotics is more difficult to explain in words, and is precisely the reason why we don't even try diving deeper into robotics in this course. Briefly, it's the difficulty of operating in the real world. Sensors are never perfect and they malfunction all the time, which makes it really hard to keep track of the state of the system; for instance, the relative location of the robot and the objects around it. Likewise, the actuators, such as the motors that help the robot move itself around, are imperfect: whenever the poor robot tries to move its hand or roll forward on its wheels exactly 10 cm, it will in reality end up moving, say, 9 or 11 cm.
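Here is a purely illustrative Python sketch of what imperfect actuation does to a robot's belief about its own position. The try_to_move function and the 10% error range are made-up assumptions, not measurements from any real robot:

    import random

    # Hypothetical imperfect actuator: the robot commands a 10 cm move, but
    # the actual motion is off by up to 10%, as described above.
    def try_to_move(distance_cm):
        return distance_cm * random.uniform(0.9, 1.1)

    believed_position = 0.0  # where the robot *thinks* it is (dead reckoning)
    actual_position = 0.0    # where it really ends up

    for _ in range(10):
        believed_position += 10.0             # the robot assumes a perfect 10 cm step
        actual_position += try_to_move(10.0)  # reality disagrees a little each time

    print(f"believed: {believed_position:.1f} cm, actual: {actual_position:.1f} cm")

The gap between belief and reality tends to grow with every step, which is why real robots need sensing and state estimation on top of raw control.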

Such imperfections imply that developing robust robotics solutions often requires operating on a whole other level, often a "lower level" in some sense, than developing other kinds of AI applications that only have to deal with simulated environments or static data, not the messy real world.

If you have a chance to try working with actual robots, we encourage you to do it. We used to do robotics in previous editions of the course, using LEGO Mindstorms robots, but the robots we have in stock are becoming so technically outdated that programming them is a real pain in the butt. The latest LEGO models also don't seem to support sensible Python or even Java coding. (Let us know if you know better.) The Raspberry Pi looks like a promising alternative. If you get a chance to try any of these and build and program robots, you'll hopefully understand what we're trying to say here!

With that, let's move on to the last major topic on the course.

Ethics of AI

This section is still under construction. The most important topic for the exam is the EU AI Act, which you can study, for example, through an available online guide such as the EU AI Act Explorer.

Our last main topic on this course is the Ethics of AI. Just to get one thing off our chest, we must emphasize that this is indeed a case of "last but not least"; in other words, the reason ethics is discussed as the last topic is by no means that it is somehow a less fundamental, nice-to-know topic. This may have been what people (to be honest, even we) thought years ago, but as the impact of AI on society becomes more and more prominent, ethical concerns are also being brought to the surface.

On a related note, the fact that the very material you are now reading is a work in progress reflects how much the field is in motion as more and more people are talking about it.

Calm down, an engineer has arrived!

One thing that seems to happen repeatedly is that AI people or other technologists arrive on the scene, encounter a problem, and think they must be the first ever to have thought about it. This tends to lead to awkward situations where the work by other people, who may have been studying the same issues, sometimes for decades, is ignored.

When you deal with ethical issues related to technology, you should make an effort to look for earlier work on the topic. It may be difficult, since people may have used different terminology, but it's definitely worth at least trying. You're unlikely to be the first to come across the issue. Keep an eye out in particular for work in Science and Technology Studies (STS).

Ethical AI or Ethics of AI?

There are actually several ways in which AI and ethics meet. For instance, there is a topic called machine ethics, which is focused on building computational systems that behave in ethical ways or that have, in a very literal sense, a moral code. You have perhaps heard about the Three Laws of Robotics from the sci-fi author Isaac Asimov's 1942 short story Runaround. Machine ethics focuses on rules of this type that could, in principle, be encoded into AI systems.

Machine ethics is a legitimate scientific field. However, it is disconnected from existing AI systems and their societal impact because, as you've probably noticed by now, modern AI systems don't operate on a level where a rule like "A robot may not injure a human being or, through inaction, allow a human being to come to harm" (the First Law of Robotics according to Asimov) could be encoded.

What we will discuss in this part of the course is therefore not machine ethics, but the ethics of AI, by which we mean the broader viewpoint on the societal and ethical issues related to the use of AI systems. This entails shifting the focus from the technical aspects of the AI systems themselves to the entire socio-technical context, which involves not only the AI system(s) but also the humans who are involved in and affected by the use of such systems.

A Note on the Scope of AI Regulation

As we've discussed, there is no commonly agreed definition of AI. Therefore, any regulation that tries to govern the use of AI will be somewhat hard to define in an unambiguous way. In fact, it is our personal opinion that AI shouldn't be regulated as such, but that it should be regulated along with any other technology, or in many cases, not even as technology, but as human activity.

Why would it make a difference, from an ethical or legal point of view, whether something is done using AI or without AI? We can't think of a single case where the use of AI should make something legal or illegal. Any law that depends on whether something is AI or not is prone to loopholes in the assumed definition of AI.

However, since the deployment of AI systems amplifies some potential harms to people — mass surveillance, deanonymization, loss of accountability, complicated authorship questions, to name a few — various regulatory stakeholders, most notably the European Union, have decided to create AI regulation. So the milk is most likely already spilled. This will unavoidably lead to more work for lawyers figuring out when the AI regulation applies and when it doesn't, and how it will be consolidated with existing sector-specific regulation. But that's what lawyers do... 🤷

This part is to be continued; in the meantime, please check out the lecture slides and watch the lecture recording on Moodle in case you missed the lecture. Here are the exercises nonetheless.

In their blog post, Eighteen pitfalls to beware of in AI journalism, Sayash Kapoor and Arvind Narayanan present recurring problems in AI coverage in the press. You can find a list of the problems, or pitfalls, here.

Find three examples of news stories about AI. Try to find stories that fall into as many of the pitfalls as you can. Bonus points if the stories you find come from reputable sources like MIT Tech Review, the Atlantic, or (I dare you!) the University of Helsinki.

In each case, propose ways to fix the stories as best as you can.

Ethics can be thought of not only as a set of standards of right and wrong, but also as the process by which such standards can be deliberated and refined, usually through dialogue between the stakeholders (see here).

So let's "do ethics"!

Choose one of the following talking points:

  1. AI-generated art is currently a hot topic, with systems like DALL-E, Imagen, and Stable Diffusion being used to create amazing artwork. Who (or what) should be entitled to intellectual property rights in these cases?
  2. The real-world impact of AI is growing all the time. Until recently, AI researchers were mostly oblivious to such issues and weren't really expected to consider the societal implications of their work. Nowadays, some AI conferences (most prominently the NeurIPS conference, see this study) require a broader impact statement or a checklist that asks the authors of submitted research articles to think about the impact of their work. Some argue that this is a path towards censorship and political ideology interfering with academic freedom. Others argue that science can no longer pretend to be separate from the real world.

Your task is to have a debate about the topic of your choice (from the above two alternatives). You can have the debate with your alter ego, pretending to take two opposing views on the topic. Even better, you can ask your fellow student or someone else to assume the opponent's role.

Document your dialogue by summarizing at least ten (10) arguments and counter-arguments in an alternating order like

You: I think X
Your Opponent: No, X is wrong because Y
You: That may be, but even so, there's Z...

So both you and your opponent get to make at least five points. (And no, "you're a f**king idiot" does not count as an argument.)

Hint: Remember to remain polite and respectful.

Hint 2: In case you don't have an opponent available in real life, there's nothing wrong with trying social media like Twitter... Just remember the first hint.

The development of this course relies on your feedback. To give you a concrete incentive to give feedback, you'll even get exercise points for giving feedback!

The deadline for this exercise is November 2 (one week after the course exam, by which time you can also comment on the exam, even though you most definitely won't have received your grade yet).

Give feedback in both of the following ways:

  1. First, submit anonymously through the university feedback system Norppa.
  2. After you have submitted the anonymous feedback, send an email to the lecturer: giulio.jacucci@helsinki.fi. Important: Include the magic word IntroAI2023 in the subject line so that the lecturer will find your message in his inbox.

To get 1p for item 1, mention in your email that you have submitted the anonymous feedback through the feedback system — after actually doing so, of course.

Also give feedback in the email. You can summarize your anonymous feedback briefly. Don't worry if the content is overlapping.
