Sunday 2 October 2011

Clueless AI researcher

I listened to a 45-minute long drive-by talk by a guest (a German AI researcher) at our weekly meeting. It was depressing. I can bet 1000-to-1 that he hasn't read Joseph Weizenbaum's 1976 classic "Computer Power and Human Reason: From Judgment to Calculation".

Weizenbaum was one of the pioneers of AI (Artificial Intelligence), but he was shocked by the emotional attachment people exhibited even to early (simple, stupid) computer programs that haltingly chatted with users. He didn't turn against AI as such, but he did turn against the technocratic agenda of attempting to use (fancy, "exciting") computer tools for anything and everything without thinking about the resulting systems and their consequences in terms of (for example) human dignity. Should we replace psychiatrists with computer programs? Should we develop computer-enhanced, remote-controlled animal "tools" (weapons, spies)? Weizenbaum's emphatic answer was "NO!", and for that reason he became disliked by other AI researchers who were fattened by suckling grants from the generous purses of the US military (Edwards 1996).


So, here we have a researcher who lives by and for applying for money for pan-European EU projects, but who doesn't seem to care about (or think about, or notice) the larger agendas that are inextricable parts of his research, or the consequences of using his "solutions" in modern society. Anyone can apply for money to build a system, hallucinate and write down some potentially beneficial uses of said system, and then stand back and hope for the best. But computer systems and artifacts do have politics (Winner 1986, Friedman 1997), and some systems have a much higher potential for being used in destructive ways - for putting down, controlling or sometimes even killing other people (more "efficient" weapon systems, such as attack drones, being a prime example) - rather than for empowering users and benefiting humanity. In the first Amazon review of Weizenbaum's book (above), "A Customer" (Jan 1997) writes that:

"This remains one of the best books about the role of computers in our society, dealing with such topics as: [...]
(3) The social responsibility of technical workers, who generally are myopically focused on "efficiently" doing whatever they do, without being concerned about what should be done or whether what they are working on is something that should be done differently or not be done at all"


Some systems are just a waste of time and effort. Why not solve simple problems with simple solutions? Why attempt to solve simple problems by building complicated (AI) systems? Why build a worker's vest with lots of sensors and computing power and expert systems and a large research budget in order to monitor the order in which a worker fastens screws to an airplane hull - sending a warning/error message if this is done in the wrong order? Wouldn't it be a lot easier, a lot cheaper, and a lot more empowering and dignified for workers to solve this problem themselves by pairing up a more junior worker with a more senior one?

Some computer systems should not be built purely on the basis of the balance between system costs and system benefits. Other systems should not be built at any cost, for moral or other reasons. I have noticed that I am becoming more and more doubtful of, perhaps even hostile to, the concept of research for its own sake. Here we have a guy who belongs to a privileged class and who shuttles around Europe (sending the bill to the nowadays hard-pressed European taxpayers) in search of partners for the next "great" idea and the next research application - for projects that I personally hope will never come to fruition. I got the distinct feeling that what we had here was a loose cannon, an (amoral - not immoral) traveling salesman of a "solution" and a set of tools in search of a problem. I got the feeling that "anything goes" (as long as there is a juicy computational challenge involved, and as long as there's money to be found to support an inquiry into the topic in question). So I got an urge to ask him a question.

In my question I asked if there was not a contradiction between, on the one hand, the "participatory design" aspects of one project (the term was placed centrally on the slide, in the middle of a circle), and, on the other hand, some conspicuous aspects of the other systems presented (hinting at his systems stripping people of their knowledge and their agency, reducing them to generators of data for sensors, "smart" AI systems and the experts/doctors who will do all the interpretation of said data). The answer I got was incoherent, and it was embarrassingly obvious that this researcher had not thought about any of these aspects before. Being socially well-adjusted human beings, we all nodded and pretended he had said something that made sense (even though it was incomprehensible).

My first epithet for this guy was "clueless AI researcher", but with that non-answer of his I chose to change it to "clueless (accidentally misanthropic) AI researcher". That's a tough judgment, but I stand by it (writing about an anonymized researcher and communicating it to a for-the-most-part anonymous audience on the Internet makes me really brave :-)


PS. I thought one thing he said was interesting. He organized computer systems on an axis:
Implanted --- Wearable --- Mobile --- Tangible --- Ambient.

3 comments:

  1. I have unfortunately seen these kinds of projects way too much in European IT research since I became involved in one myself. Everything called "smart"-something that involves more computation and algorithms in a new area gets funded (if the proposal comes from the right kind of consortium). Smart cities, smart health. I have even seen smart communities. Sociologists like me get called in to find problems to which their preconceived solutions can be applied.

    The last thing in focus for EU tech research is the problems they are supposed to address with the technology. That's for the end of the project, when the rest is already developed. Frustrating!

    Thanks for a superb blog by the way. Always a pleasure to read!

  2. Also, good comment on the difference between participatory design and designing for participation. The former is often reduced to a couple of focus groups for finding out requirements, rather than actually designing with a community for the emancipation of that community, as was the original purpose.

  3. How about a smart smart project? A meta-study of "smart" projects, perhaps using some nifty computer algorithm (to justify the moniker "smart" in "smart smart")?

    What were "smart communities"? What was the idea that made them smart (as opposed to the apparently pre-computer-age "stupid communities")?


    On the blog: my aim is to publish at least one and at most two blog posts per week. For some reason, my blog posts have lately become pretty long. I'd like to rectify that.
