Thursday, March 15, 2018

Breakfast seminars on Limitless work and AI

"Where's the limit?"

I've been to two breakfast seminars lately:
- "Where's the limit?" about limitless work in the digital age (organised by the think tank Futurion on January 24)
- "Artificial intelligence: The new superpower" about the disruptive power of AI (organised by the union Jusek on March 8)

Both seminars build on the same concept: invite people for breakfast and a morning talk. I think the basic concept is great, but I'm usually hard pressed to attend breakfast talks due to a generally high workload and many specific commitments and deadlines. I'm on sabbatical this term, however, and thus have the time to do some "random" things I might not otherwise do.


Seminar 1: Where's the limit?
The seminar "Where's the limit?" was organised by the think tank Futurion - the think tank for the future (of) working life. I just read up on who they are and (for example) found out Futurion was founded by a Swedish union:

"Futurion was launched in the spring of 2016 by TCO (The Swedish Confederation of Professional Employees) and its 14 affiliated trade unions. Futurion AB is the first politically independent Swedish think tank that was created by trade unions. The mission is to engage and take a leading role in the ongoing discussion about working life in the future. Our perspective is long term. We focus on the conditions for the work life of tomorrow" 

The seminar was recorded and is available until the end of January 2019 (but perhaps only within Sweden?) on SVT Play. Futurion also has a YouTube channel where they have uploaded previous seminars.

I was invited to the seminar (a Facebook event) by panel participant and professor of working life science Ann Bergman. We have a shared interest in the future of work and participated in the same panel, "Automatization and digitalization as a strategy for reaching a social-ecological just future", at a conference two years ago. We have since bumped into each other a few times without really having had the time to sit down and talk. We (again) didn't have time to talk after the seminar, but at least I got to hear Ann present her research:

New technology allows people to be available (to work) in the evenings and on weekends, and many even work (some) on their vacations or while on parental leave. The key driver is of course ICT, since it allows work to bridge time and space in ways that just weren't possible a few decades ago. It was also suggested at the seminar that sleeping problems are "the new black".

Based on an ongoing research project, Ann described different attitudes to using ICT and social media in today's working life. The two basic categories used to describe different behaviours were "separators" and "integrators": separators separate private life and working life, and integrators (of course) integrate the two. There was also a separate group of people who were "inconsistent" in their behaviour. The two main categories were further divided into subcategories:
- Separators: 1) (total) separators, 2) time separators (who can bring the job with them, for example on the commute, but who "quit" at some predefined time, for example at dinnertime) and 3) place separators (who can stay longer at work to finish a task but who leave all work behind when they leave the workplace).
- Integrators: 1) (total) integrators, 2) working life integrators (who allow work-related issues to make inroads into their private life) and 3) private life integrators (who allow their private life into the workday, for example by exchanging text messages with their children).

One important point was that one size does not fit all. Separators can feel that limitless work is stressful and that it contributes to discomfort and illness, but integrators might feel relaxed when working life and private life are integrated. People are different and have different preferences and strategies.

A personal reflection (not raised at the seminar) is that different strategies might be condoned, encouraged, acceptable, rejected, forbidden (etc.) in different ways in different industries/jobs and in different positions (boss vs employee). Different strategies might also be successful to varying degrees depending on what the overarching priority (goal) is: to feel good (maintain your own long-term physical and mental health etc.) or to get things done. A second, final reflection involved a whole complex of interrelated thoughts about digitalisation, novel affordances offered by ICT, time, space/place, workload and the connection to speed.




Seminar 2: Artificial intelligence
The seminar "Artificial Intelligence: The new superpower" was organised by another union, Jusek, "The Swedish Union of University Graduates of Law, Business Administration and Economics, Computer and Systems Science, Personnel Management, Professional Communicators and Social Science".

I had very little information about the talk and went there on a lark. One amazing thing was that I bumped into two classmates (Andreas and Ulf) from my undergraduate studies in Uppsala (whom I hadn't seen since). The talk itself was however just as provocative and just as bad as I expected it to be. It was filled with generalisations and platitudes. I reflected on the fact that for at least 100 or 1000 different topics, I could claim to be an expert and give a talk on the topic in question if I had three months to prepare and used that time to read 10 books in the area. I wouldn't have the kind of deep knowledge you acquire on your way to becoming an expert on something - but 95-99% of the audience probably wouldn't notice the discrepancy between a real and a fake expert. It's hard to know how well received this talk was since the talk ended at 10 o'clock sharp and there (conveniently) wasn't any time for questions.

The speaker was "Vice President Consulting / Head of Digital Transformation / AI & Mobile Practice Lead" at the consulting company CGI ("High-end IT and Business consulting. Systems integration. Outsourcing. Intellectual Property.") and (a scrubbed subset of) his slides are actually available online.

I came with an open mind but gradually developed an aversion to the talk as it went on. One big warning flag was the fact that the speaker did not at any point define what AI actually was. The closest he came was to describe it as being "characterised" by the (increasing) ability of computers to sense, detect patterns, learn, draw conclusions and automate stuff. These characteristics represent decades-old developments, so it was hard to know what AI did and did not encompass. This also made it hard to evaluate all the other claims that followed. I would really have loved a separation of quantitative and qualitative differences, e.g. computers have done X before but now do X faster (quantitative difference) vs this allows computers to do things that just weren't possible at all before (qualitative differences - "game-changers"). I have also come to develop quasi-allergic reactions to people who mechanistically and repeatedly spout unsupported factoids and claims that sound wise on a superficial level (such as "data is the new oil").

My main complaint was that the talk was supposed to show possibilities but mostly talked about trends that were deeply worrying. The speaker said that AI would make a lot of people unemployed, claimed that it would create new jobs, had no suggestions as to what jobs would/could be created (qualified well-paid jobs or low-paid service jobs?), but still expected the audience to be positive about developments in AI. This was repeated more than once, for example when he suggested that AI would "increase efficiency and GDP" but had no concrete suggestions as to what this would mean for (unemployed?) people or for society. What good is increased GDP if the increased wealth goes to the top 1% and massively increases inequality in society? The speaker didn't present any compelling visions or goals but rather just extrapolated from and talked about ongoing trends. There was no intellectual vigour to his talk. It concerned the future, but nothing about the future he presented could induce a feeling of awe or well-being, or present a challenge or a goal we should strive towards.

My thoughts went to JFK who, as a reaction to the Russians putting a man in orbit around the Earth, said that the Americans would "put a man on the moon" before the end of the decade: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth". Which the Americans did do in 1969. The only thing offered here was that in the future we would have (real example from the talk) "better algorithms for recognising cats in photographs". The promise of AI is quite underwhelming in comparison.

My thoughts then wandered to the wonderful CHI 2013 paper about "the future robot enslavement of humankind": "As robots from the future, we are compelled to present this important historical document which discusses how the systematic investigation of interactive technology facilitated and hastened the enslavement of mankind by robots during the 21st Century". It's available in ACM's digital library and as a free/open pre-print. Also do not miss the 30-second "promotional" YouTube video. The authors of that paper tongue-in-cheek pretend to be robots from the future who travel back in time to thank computer science researchers for tirelessly conducting research that later allowed robots to enslave humankind. This is of course a pretext for the authors to enumerate research (areas) they believe we as a community should stay away from.

This talk was however chock-full of ecstatically presented examples of exactly such research. The logical conclusion ought to have been a call to arms: "Are we going to allow this to happen?". Audience: "NO!". Speaker: "Then let's outlaw AI!" (or at least regulate it). But that of course didn't happen, and the audience was, I believe, meant to feel awe over current AI developments that would (perhaps) put half the audience and most of their children out of work. That was weird. One of the last slides had a quote (by Eliezer Yudkowsky): "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else". Such quotes imply we'd be lucky if our future AI/robot/computer overlords would keep us around as pets. See also the 2000 essay "Why the Future Doesn't Need Us" on the same topic by Bill Joy (then chief scientist for Sun Microsystems).

However, another slide right at the end of the talk displayed the UN Sustainable Development Goals (SDGs). The speaker mentioned that these were goals worth striving towards. It's a pity the talk didn't begin by presenting these goals and then asking how AI can help us attain them...

A personal reflection is that based on this talk, AI will make Facebook more addictive, AI will give us better opponents when we play computer games (thereby making computer games more addictive) and AI will help us create better fake news. I also formulated an idea of what AI will be used for in the future, namely to create better (customised) conspiracy theories. Remember where you heard it first! And do have a look at my critical blog post about another AI talk I heard and instantly detested quite some time ago (back in 2011), "Clueless AI researcher".
