Marc Schuilenburg, professor of Digital Surveillance at Erasmus University, got into the Tesla and entered the address of the restaurant where they would eat that evening into Waze, the navigation app. The distance was more than twenty kilometers, and several routes appeared, marked green. The roads were quiet. He selected the shortest, fastest and most logical route, but before he started it, a message appeared on the screen suggesting he take a different route, the longest one, because it would be safer not to drive through a certain neighborhood. According to the app, that was a 'high-risk area'. The time difference would be eleven minutes. He decided to follow the advice anyway.
According to Schuilenburg, Waze is an example of a technology that subtly steers our behavior in a certain direction with the help of Artificial Intelligence (AI) and algorithms. Some areas are marked red, based on data from users and the police. You then drive around them, even if it means arriving at your destination later than you would like. The temptation of the object - the green 'safe' areas set against the 'dangerous' red ones - becomes stronger than the desire of the subject, namely to arrive somewhere on time.
Schuilenburg calls these psychological stimuli that influence our behavior and direct our emotions 'algorithmic psychopower'. In the same category he places, for example, the technology that was used to keep Stratumseind, a well-known and busy nightlife street in Eindhoven, safe. Experiments were carried out there with spreading the scent of oranges, which is said to have a calming and soothing effect and could therefore increase safety and reduce aggression - both in general and very locally, in 'real time': when sensors pick up a lot of shouting somewhere, the scent can be released there in a targeted manner.
These kinds of techniques are also used in football stadiums. Smart microphones hanging near fanatical supporters record chants and analyze them. When racist slogans are detected, positive chants are automatically shown on the billboards to influence the atmosphere in the stadium - this is also called 'mood detection'.
That's different from the fly in the urinal
'Mental guidance and influence are becoming increasingly important for tech companies, especially in the field of safety and health. Not only the body but also the psyche, the mind, must be guided. That is indeed different from classic nudging - the fly that makes men pee in the pot instead of next to it - because it is done with enormous amounts of data and can be continuously personalized. At a speed bump everyone has to slow down, while psychopower can be tailor-made, for example through micro-targeting, or by showing Facebook users certain messages that do not appear on others' timelines, or appear there in a different form. Or with an app like Waze.'
You place Waze in the same category as the Apple Watch and Amazon's digital doorbell
'I call those kinds of products luxury surveillance. Nowadays people want to monitor themselves and are willing to pay for it. That is interesting. Surveillance was always seen as something that came from the state. That's where all the dystopian stories - China, Orwell - come from. With luxury surveillance you see people paying a large sum of money to voluntarily surveil themselves. That illustrates the transition from disciplinary power to psychopower.'
'We know disciplinary power from the French philosopher Michel Foucault. For Foucault, surveillance was woven into architecture, for example in schools and prisons, or into something like a timetable. It was also physical: the body was drilled and disciplined to make it obedient and productive. In the 1990s, surveillance became networked: cameras on the street were connected to each other, and control was therefore everywhere. Now, in phase three, surveillance cameras are equipped with AI and algorithms, such as automatic facial recognition technology. We are no longer only watched by the police and large companies; it has also become self-evident that we constantly watch ourselves, especially in the field of health, for example via the Apple Watch, and in the field of safety, with smart doorbells and Tesla's camera system. We are thus increasingly changing from a bundle of muscles into a bundle of data. The Apple Watch still disciplines, but shifts the focus from body to mind.'
What does such a development tell us?
'That surveillance is commodifiable. That data has become valuable. It has also become softer, in the sense that we often no longer recognize it as surveillance. And it is a matter of normalization. Surveillance has become an integral part of our lives, and this will only increase.'
'You see that surveillance with AI and algorithms is no longer used only by the police, but also from above, by organizations such as Europol; from below, when citizens patrol the neighborhood and use apps such as Veiligebuurt or Citizen; and from alongside, by tech companies such as Amazon and Google, which increasingly cater to security concerns by developing smart devices with built-in cameras, microphones and tracking equipment.'
Schuilenburg started his career at the Public Prosecution Service, where he worked for six years and was trained as a prosecutor. He then switched to academia: he wanted to read more, do more research, go deeper. After more than ten years at the Vrije Universiteit, he has now been a special professor at Erasmus University for a few years.
His chair is funded for one day a week by the independent research organization TNO, which, among other things, develops tools for the police. TNO does this based on the idea that, as an organization, it has a lot of technical expertise but less ethical and legal expertise, while such knowledge is inextricably linked to the development of technology.
And that is exactly what Schuilenburg's new book, Making Surveillance Public: Why You Should Be More Woke About AI and Algorithms, is about.
He believes that because technology is still too often designed and deployed from a 'technological-economic' perspective, public values such as privacy, non-discrimination, transparency and accountability lose out to values such as safety and efficiency. When it turns out afterwards that something went wrong in the design, as with the Waze app, lawyers are called in. But by then it is already too late.
Can you elaborate a little more on what you mean by “making surveillance public”?
'It consists of three elements. The first is to show what surveillance is all about. Many people do not recognize the Apple Watch, Fitbit and Amazon doorbell as surveillance. You are constantly being watched, but people don't experience it that way.'
How bad is that?
'Then you arrive at the second aspect of making surveillance public: we must realize that surveillance is not only about guiding values such as safety and efficiency, values that fall under the dominant technological-economic vision. There are also anchored public-legal values, such as non-discrimination and privacy, and process values, such as transparency and accountability. There is always a battle between these three - guiding, anchored and process-based - in which the anchored and process-based values often lose out.'
What is the consequence of neglecting those values?
'Something like the Benefits Affair, for example, where the decision-making system contained a self-learning algorithm that focused on disadvantaged neighborhoods, single mothers, people of a certain nationality. The discriminatory side of AI and algorithms is of course also very much present in the Waze app. Significantly, a similar app was previously called 'Ghetto Tracker'. This illustrates how such technology builds on political and economic structures in which racism is deeply rooted. Racism and territorial stigmatization re-enter society through the back door of technological tools - via invisible data and algorithms. That is a form of data violence, of algoracism, which at the same time we rarely recognize as 'violence', let alone criminalize.'
'But accountability is also often lacking. As a government, you must be able to account for why and how you use certain AI tools. That is often not possible at the moment, even though public accountability is an essential characteristic of government.'
'And that will only become more difficult when AI starts making its own decisions to a certain extent. In that respect, this time is completely different from the time of Max Weber, with its classic bureaucracy of rules and laws in which - in principle - judgment is made without regard to persons. Back then, a government could clearly say: look, this rule is applied in this way. With AI tools, responsibility can no longer be pinned to one point. It is spread over at least three levels: in data collection, when it is unclear, for example, how the data was obtained; in data processing via algorithms, when it may be unclear how such an algorithm works; and finally in the output, when a decision is made. Is a human involved in that, or not? And can he deviate from the technology's judgment, or does he automatically follow the system?'
Is it really so difficult to deviate from such a technological judgment?
'I have often had discussions about this with judges. They say: it is a tool, an advice; I can definitely deviate from it. But we know from Foucault and Weber that bureaucracy, norms and rules discipline you. You can deviate once, but if that leads to a TBS patient - someone detained under a forensic psychiatric hospital order - being given leave and doing something terrible, you won't do it a second time. In addition, it is tempting to pass the buck to technology. Then you can blame the system when something goes wrong, as happened with the Benefits Affair.'
'That is why - and then you come to the third pillar of making surveillance public - it is so important that the discussion about AI and algorithms does not come last but at the very start, before the moment data is collected or tools are built. In that case you can literally gather an audience around the creation of new technology. This is about expanding beyond just that 'coding elite': the white, heterosexual, male group that makes all the decisions about the development of tools and is insufficiently aware of its power and of public values such as privacy, accountability and non-discrimination.'
'For example, you can gather ethicists, lawyers and people from the field. This can even go so far as to involve nature, because AI surveillance leads to enormous CO2 emissions and consumes a lot of energy. But it is also about collecting other forms of knowledge that often cannot be codified or captured in bits and bytes. The American sociologist James Scott calls this mētis, which refers to local and context-bound experiential knowledge that is often elusive, such as the farmer who knows when the time comes to harvest, or the police officer who senses that something is going on somewhere.'
'The idea is that you let that audience, and those different types of knowledge, think along and weigh in from the start. Because once the technology is there, nothing will change. Fundamental decisions are made in the preliminary phase. So that is where you have to be.'
Schuilenburg sits on the Police Ethical Advisory Committee. Twice a year, he and other scientists discuss a number of current developments within the police that raise ethical questions and dilemmas, or issues the police should be aware of. For example: nowadays, disturbances often start online. How far do the police's powers to monitor this extend? Through these kinds of questions and reflections, they advise the leadership of the National Police.
According to Schuilenburg - who emphatically tries to remain neutral - the police are trying to become more and more receptive to this. But even within the police it is still mainly about safety, and less importance is attached to values such as privacy and accountability. Yet police work is not only about detecting crime but also about trust and the legitimacy of their actions, and that legitimacy is damaged when you use technology that is inherently discriminatory, for example, says Schuilenburg.
There has always been a lot of criticism about something like 'predictive policing'
'That was the dominant story within the police for a long time: predicting where crime would take place. The Netherlands was the first country in the world to roll out such a system nationally, in which issues such as non-discrimination were taken into account - in any case much more than in America, where these systems are not developed by the police themselves but purchased from large tech companies such as Palantir.'
'In the Netherlands it is mainly about predicting where a crime will be committed, while in Chicago, for example, it was also about who would do it and who would become the victim. Such technologies have been heavily popularized by films such as Minority Report and by the police themselves, who claimed they could prevent 40 percent of home burglaries.'
'The interesting thing is that there is almost no empirical evidence that prediction works. There are currently five or six good studies, and they have repeatedly shown that there is no causal link between those 'prediction systems' and the decline in crime. So in the Netherlands, but also in other countries, you see that the police are increasingly moving away from it and focusing instead on algorithms that look back, or stay in the here and now. You see this in particular in the EncroChat case, in which the police have millions of criminal messages. Because everything and everyone is surveilled, the police now have so much data at their disposal that looking back, rather than looking forward, is becoming increasingly important and interesting.'
'That causes a reversal: in the past there was first a suspicion and then surveillance followed. Now there is surveillance first, and only then do suspicions follow. That is the paradigm shift that is taking place. From a constitutional perspective, this leads to many new questions. The legal reality is currently not tailored to this way of collecting and processing data. That is a real gap in the law, because it is not clear on what grounds the police may collect, process and use data for new investigations. Recently, facial recognition technology was in the news. The police currently decide for themselves when and how they use it, but the legal basis for this is lacking or unclear.'
In your book you paint a picture of criminology as a scientific discipline that does not seem to have really made the shift to digitalization and is lagging behind reality in its research. How is that possible?
'Criminology has traditionally focused on people. As a result, there is insufficient awareness of how AI itself will lead to completely new forms of crime. Crime has fallen sharply in all areas since 2001. The exception is cybercrime. Logically: society is digitalizing, and so is crime. Now you can already see the emergence of its successor: AI crime. That is fundamentally different from cybercrime. Previously, hacking required extensive technical knowledge. For AI crime, you can simply ask free chatbots such as ChatGPT to write the code for scam emails asking people to donate money.'
'In addition, the scale is much larger, because entire sectors are dependent on AI. Self-driving cars are already driving in San Francisco, for example. You can imagine that if the central AI system is hacked, terrorist attacks could be committed with it. It's really just a matter of time before something like that happens.'
How do you view the rapid developments within AI? Should this be stopped? Or, above all, be guided in the right direction?
'The latter. There is a certain 'innovation realism', that is, the idea that technological innovation is inevitable. There are good aspects to this, but I always try to make clear that you can sometimes question the implementation of certain new technologies. We don't do that enough now, while sometimes new technology is not necessary at all. A good example is the Outlet Center in Roermond, which had to deal with increasing shoplifting. It was therefore decided to scan the roads around Roermond with algorithms. When passing cars had an Eastern European license plate, there was a good chance they would be stopped.'
'Then you launch a so-called technological solution that you already know in advance is at odds with many public values. And at the same time, you pay too little attention to the fact that an analogue solution - three guards in the outlet center - may be easier, cheaper, more effective and less unethical than rolling out a new technology of which no one knows exactly how it works. The underlying view is that technology is neutral and can serve as the solution to almost any social problem - what Morozov has called 'solutionism'.'
'You see the same thing happening at the self-checkout in supermarkets. Technology is first deployed as a solution - in this case saving on cashiers and therefore increasing profits - but in practice it turns out to lead to more crime. The response is then not to remove the self-checkouts, but to make the technology even more pervasive, for example by installing more cameras with facial recognition.'
While people also need social contact
'For example, you want to see the police physically on the street, because you want to be able to talk to them and ask them for an explanation. Research also shows that most people consider this more important than effectively combating crime and increasing safety. When the police become invisible, the social legitimacy of the government is at stake.'
'But digitalization and AI actually reduce social cohesion. On a small scale you see this, for example, in the digital doorbell, which is of course not a bell but a surveillance camera, just as the Tesla has become a driving surveillance camera. In this way you delegate the responsibility of looking out for each other to technology, while we know that social control mechanisms are important in the fight against insecurity. That is ironic: tech companies claim that you can prevent home burglaries with these cameras, for example, but there is no evidence for this at all. As if that no longer matters.'
At times it gives quite a dystopian feeling. Where is this going?
'I have no interest in dystopian images. Surveillance is always presented in science fiction-like scenarios, both negative and positive. I find that distracting. What I wanted to show in my book is that it is already happening. It's already here, only not widely distributed. And that will increase further due to normalization and advancing technological developments, but we don't have to talk about far-away scenarios, because the effects are already there. Now! Only we don't see it.'
Author: Tom Grosfeld
Source: Vrij Nederland
Read more: M. Schuilenburg (2024), Making Surveillance Public: Why You Should Be More Woke About AI and Algorithms. The Hague: Eleven.