03:00:10 Larisa: so good to be here and to see you too, Pete! Day off for me, so I have control of my calendar.
03:15:32 Peter Sorenson: How are the principles you propose, Stu, different from the principles proposed by Albert Cherns and Eli Berniker?
03:17:11 Valentina Moldovan: Jacob, Bruce and Brian, would you like to add your questions to the chat? :)
03:17:15 Larisa: @Valentina Moldovan will the recording be available to us after? I would like to share this with some people working on these issues right now.
03:17:18 Jacob Styburski: Reacted to "Jacob, Bruce and Bri..." with 👍
03:17:39 Jacob Styburski: My question is how do we define what a work system is… thanks
03:17:42 Seymour Hersh: Suggest we start discussing EU in 2025!
03:17:46 Valentina Moldovan: Reacted to "My question is how..." with 👍
03:17:52 Peter Sorenson: Reacted to "My question is how d..." with 👍🏼
03:17:57 Valentina Moldovan: Reacted to "@Valentina Moldova..." with 👍
03:18:40 Bruce Mabee: My question: "Stakeholders." AI and other automation allows much wider reach to humans -- stakeholders. What examples are actively empowering and incorporating very diverse stakeholders? I.e., which humans -- all those affected by what the org does?
03:19:20 Brian Fisher (SFU): Trust is the problem. Human-centred AI is a fine concept, but based on their actions it is hard to believe that tech companies are serious about it. It looks more like "humanwashing" for symbolic capital.
03:20:31 Peter Sorenson: Reacted to "Trust is the problem..." with 👍🏼
03:20:48 Peter Sorenson: Reacted to "My question: "Stakeh..." with 👍🏼
03:21:03 Brian Fisher (SFU): If they were serious about this they would have more social scientists at the helm.
03:21:15 Larisa: Reacted to "Trust is the probl..." with 👍🏼
03:21:20 Peter Sorenson: Reacted to "If they were serious..." with 👍🏼
03:21:28 Larisa: Reacted to "If they were serio..." with 👍🏼
03:23:12 steve alter: So what do we mean by AI? Is AI something different from automation when we are talking about AI in organizations? For example, a new organization (e.g., a startup) can produce a product that uses AI capabilities but cannot use machine learning for running the organization because there is no meaningful history. Some AI is highly knowledge-intensive, e.g., AlphaFold, which won the Nobel Prize in Chemistry. Other AI is like ChatGPT, which occasionally hallucinates, etc. Some AI applications are highly focused on clearly defined data. Other applications are less controlled. Therefore... when we talk about AI and organizations, what do we really mean by AI (as opposed to complex automation)?
03:23:25 Peter Sorenson: Reacted to "So what do we mean b..." with ❤️
03:23:27 Brian Fisher (SFU): Reacted to "So what do we mean b..." with 👍🏼
03:26:14 Brian Fisher (SFU): Replying to "So what do we mean b..." In TechHypeLand, AI has become another way of saying "computer," but with the assumption that it can take actions that we should accept as intelligent. I did my dissertation and postdoc in AI working on focused applications like you mention. We wanted to augment human capabilities rather than to create "counterfeit humans."
03:27:05 Brian Fisher (SFU): Replying to "So what do we mean b..." One approach is to use graphical interfaces rather than linguistic ones.
03:28:16 Brian Fisher (SFU): Replying to "So what do we mean b..." https://www.dagstuhl.de/25511
03:31:03 William Zybach: Replying to "So what do we mean b..." This is one of the slides I use in my AI for Good work with organizations.
03:31:29 Brian Fisher (SFU): Reacted to "Screenshot2025_10_23_133724.jpg" with ❤️
03:31:49 Larisa: Reacted to "Screenshot2025_10_..." with 👍
03:33:05 William Zybach: Replying to "So what do we mean b..." And the "Use of Self" visual.
03:34:44 Larisa: most of my work centers on alignment, thank you Stu. It's a framing that even lagging orgs seem to understand and be willing to focus on.
03:34:58 Joe Norton: Reacted to "most of my work cent..." with 👍
03:35:06 William Zybach: Reacted to "most of my work cent..." with ❤️
03:35:13 Peter Sorenson: Reacted to "most of my work cent..." with 👍🏼
03:36:45 bianca indipendente: I wonder where shadow AI (the AI that people "unofficially" use at work every day) fits in the system?
03:38:28 Larisa: @bianca indipendente good question, I wonder if all AI use eventually acts as shadow AI
03:39:09 Annja Neumann: Reacted to "I wonder where the s..." with 👍
03:39:16 Brian Fisher (SFU): Reacted to "I wonder where the s..." with 👍
03:39:22 Brian Fisher (SFU): Reacted to "most of my work cent..." with 👍
03:40:13 bianca indipendente: Reacted to "@bianca indipendente..." with 👍
03:40:23 Joe Norton: @bianca indipendente I appreciate the idea of shadow AI to link it back to ensuring the incorporation of "informal" elements in the operating model for AI design processes
03:42:08 Peter Sorenson: Sounds like our Hyperloop organization that we visited at the 2018 STS RT Meeting in LA
03:42:08 Terri Adkisson: Shadow AI becomes part of a vibrant hybrid work system
03:42:16 Joe Norton: Reacted to "Shadow AI becomes pa..." with 👍
03:42:39 bianca indipendente: Reacted to "@bianca indipendente..." with 👍
03:42:57 Brian Fisher (SFU): Hybrid work systems design would seem to require a good deal of cognitive psychology
03:43:38 Terri Adkisson: Reacted to "Hybrid Work systems ..." with 👍
03:44:03 Brian Fisher (SFU): We are looking at broadly interdisciplinary design processes
03:44:14 Larisa: Reacted to "We are looking at ..." with 👍
03:44:15 Peter Sorenson: Reacted to "We are looking at br..." with 👍🏼
03:44:16 bianca indipendente: Replying to "@bianca indipendente..." Yes, and yet I feel there are a complex boundary crossing and primitive emotions which make it fundamental but challenging.
03:44:29 Joe Norton: Reacted to "Hybrid Work systems ..." with 👍
03:44:30 Joe Norton: Replying to "Hybrid Work systems ..." and developmental and... organizational psychology
03:44:38 Larisa: Reacted to "and developmental ..." with ❤️
03:44:38 Brian Fisher (SFU): Reacted to "and developmental an..." with 👍🏼
03:45:44 William Zybach: Replying to "Trust is the problem..." And Brian, I pay a little attention to what others are doing, as interesting. What Peter Block had to say at the Ghana summit is that "AI is an Interesting Tool" -- of course that is an understatement, but I love it because it gives us pause over the hype. I am totally excited about what each one of us does with it, wherever we are at in the AI dance, because the important thing -- beyond fretting, because that doesn't help anything -- is to create with AI, in the way we want. Now is the time of pioneering experimentation.
03:46:30 Brian Fisher (SFU): Replying to "Trust is the problem..." I agree, but we need to find a way to have our voices heard over the VC-directed hype.
03:46:35 Terri Adkisson: Reacted to "and developmental an..." with 👍🏼
03:47:13 Erik Nicholson (he, him, él): Replying to "Trust is the problem..." 👍🏼
03:47:32 Annja Neumann: Reacted to "I agree, but we need..." with 👍
03:47:39 William Zybach: @Brian Fisher (SFU) I'm glad for those who are in those places, their voices are needed, but I am not there -- I'm out experimenting and creating the future I want, because I can create that!
03:48:04 Joe Norton: Reacted to "I agree, but we need..." with 👍🏻
03:48:15 Peter Sorenson: Reacted to "I agree, but we need..." with 👍🏼
03:48:30 Brian Fisher (SFU): Replying to "Trust is the problem..." @William Zybach I teach in a tech design school, so the process is front-and-centre for me.
03:48:38 bianca indipendente: Reacted to "Trust is the problem..." with 👍
03:48:42 William Zybach: Reacted to "@William Zybach I te..." with ❤️
03:49:56 Larisa: What I find interesting and challenging is that often, too often, AI engineers assume the role of social system designers as well, which is why there seem to be issues in alignment and calibration -- at least where I am. The question becomes: how do we intervene when AI developers are not recognizing their boundaries?
03:50:41 Brian Fisher (SFU): Replying to "Trust is the problem..." One approach to cross-functional collaboration is to design interactive technologies to support building common ground and an effective collaboration process.
03:50:45 Annja Neumann: Replying to "Trust is the problem..." Agreed, we need interpretative AI that doesn't replace human interpretation but helps build alternative architectures that tolerate ambiguity, plurality and difference.
03:52:00 William Zybach: Replying to "What I find interest..." And yes, that is a phenomenon of siloed organizations, and why Stu talks about getting cross-functional teams -- but if a company doesn't have a culture of lateral mechanisms, then you have to try to find your way in.
03:52:05 William Zybach: Reacted to "What I find interest..." with ❤️
03:52:06 Brian Fisher (SFU): Reacted to "Agreed we need inter..." with 👍🏼
03:52:20 Brian Fisher (SFU): Reacted to "What I find interest..." with 👍🏼
03:52:21 Larisa: Reacted to "And yes, that is a..." with 👍
03:52:38 Larisa: Replying to "What I find intere..." yes, exactly. thank you
03:52:48 bianca indipendente: Reacted to "Agreed we need inter..." with 👍🏼
03:52:53 William Zybach: Reacted to "yes, exactly. thank ..." with 👍
03:53:04 bianca indipendente: Reacted to "What I find interest..." with ❤️
03:53:17 Joe Norton: Reacted to "Agreed we need inter..." with 👍🏻
03:53:44 Brian Fisher (SFU): Replying to "What I find interest..." I see the same thing in my conferences, where computer scientists take on social science and cognitive science research without the collaboration they need.
03:54:15 Peter Sorenson: Replying to "What I find interest..." There is also an issue of status: who is seen as having value to bring to the discussion. Too often the silos exist between people and their thought processes and what they see as valid and useful in making choices and decisions -- not just "mental models" but also technical models of how to handle data and problem solving and decision making.
03:54:25 William Zybach: Reacted to "I see the same thing..." with 👍
03:54:28 Brian Fisher (SFU): Reacted to "There is also an iss..." with 👍🏼
03:54:56 William Zybach: Reacted to "There is also an iss..." with ❤️
03:55:02 Joe Norton: Replying to "What I find interest..." And, the correlation between social skills & experiences and technical skills & experiences is quite drastically negative.
03:55:27 Terri Adkisson: Reacted to "There is also an iss..." with 👍🏼
03:55:37 William Zybach: Reacted to "And, the correlation..." with ❤️
03:55:48 Valentina Moldovan: Stu's email address: stu@winby.biz
03:55:51 Larisa: Reacted to "There is also an i..." with ❤️
03:55:52 Terri Adkisson: Reacted to "And, the correlation..." with 👍
03:55:54 Todd Christian: Reacted to "Stu's email address:..." with 👍
03:56:02 david clements: Reacted to "Stu's email address:..." with 👍
03:56:14 Bruce Mabee: AI is transforming our world, potentially in a broader way. It offers opportunities for design PARTICIPATION among stakeholders, including the humans who are NOT EMPLOYEES OR MEMBERS -- communities of those who eat or not, get access to good healthcare or not, etc. I hope that each of us helps shape the new way toward that, while things are disrupted. Traditional, unfolding design processes need to be transformed, to have ongoing adaptability.
03:56:42 david clements: Second that -- case studies.
03:58:01 bianca indipendente: Reacted to "And, the correlation..." with ❤️
03:58:20 William Zybach: There may be a feeling of déjà vu, because this is simply a change effort on steroids -- but this builds on what we have done in the past.
04:00:13 Joe Norton: I'd love to take a look at what our individual definitions are. Anyone interested in capturing the diversity of our community, please email your SHORT definition of AI and I'll send it back out to participants. docsnorton2@gmail.com
04:00:27 William Zybach: One of my slides about AI
04:01:36 Joe Norton: Replying to "Screenshot2025_10_23_140650.jpg" Quite an integral model, Bill, I can use it as a checklist :)
04:01:41 Terri Adkisson: Yes, we do exist. How do we connect with those willing to allow their systems to be redesigned at this level?
04:02:09 eric-hans: Reacted to "Screenshot2025_10_23_140650.jpg" with 👌
04:02:37 William Zybach: Reacted to "Quite an integral mo..." with 👍
04:02:42 Terri Adkisson: Replying to "Yes we do exist. How..." Most seem to be looking for rapid patches.
04:03:34 Bruce Mabee: Under stress, we all want to keep what we have and know. How do we find and convene those you are naming here?
04:03:50 Terri Adkisson: Reacted to "Under stress, we all..." with 👍
04:04:11 William Zybach: Replying to "Yes we do exist. How..." Except for the largest organizations, most middle and small organizations are not close to being able to do this -- and so this is the time to learn, not do!
04:05:20 Joe Norton: Replying to "Yes we do exist. How..." Squad HCAI taking over Squad Taylor [Swift]
04:05:31 William Zybach: Replying to "Yes we do exist. How..." @Bruce Mabee The biggest challenge with any change is to have our minds let go of the attachment to the As Is…
04:05:41 Terri Adkisson: Reacted to "Squad HCAI taking ov..." with 😀
04:06:19 William Zybach: Siloed functions -- professional or otherwise -- are mirroring what is happening in siloed organizations.
04:06:37 Bruce Mabee: To open our toughened skins and brains!
04:06:48 Larisa: we need our own design lab to work out these outstanding questions around language, application, integration
04:07:03 Peter Sorenson: Reacted to "we need our own desi..." with 👍🏼
04:07:15 William Zybach: Reacted to "we need our own desi..." with ❤️
04:08:22 Terri Adkisson: Reacted to "we need our own desi..." with ❤️
04:08:43 Valentina Moldovan: Reacted to "we need our own de..." with ❤️
04:09:31 Peter Sorenson: Perspective, Interests, Power
04:09:45 Joe Norton: I would suggest our last several roundtables and their deliberation processes have been design labs and case studies in the making.
04:10:11 Larisa: Reacted to "I would suggest ou..." with 👍
04:10:39 Peter Sorenson: Reacted to "I would suggest our ..." with 👍🏼
04:10:46 Terri Adkisson: Reacted to "I would suggest our ..." with 👍
04:12:33 Larisa: Reacted to "Perspective, Inter..." with ❤️
04:14:26 Larisa: Wonderful meta summary, Pete!!
04:14:43 Brian Fisher (SFU): Reacted to "Perspective, Interes..." with ❤️
04:14:48 Brian Fisher (SFU): Reacted to "we need our own desi..." with 👍🏼
04:14:50 Brian Fisher (SFU): Reacted to "we need our own desi..." with ❤️
04:15:10 Bruce Mabee: Said well, Pete! The span of what we take on (and with whom) is what we begin to define in these disrupted times.
04:15:23 Peter Sorenson: Interdisciplinary Knowledge Work!!
04:16:41 Brian Fisher (SFU): Reacted to "Interdisciplinary Kn..." with 👍🏼
04:17:45 Peter Sorenson: Design Labs as Interdisciplinary Knowledge Work that is a distinctive competency of operating models and organizations, networks of organizations, and ecosystems -- if you do not develop this competency and capability, you will be left behind!
04:17:57 Bruce Mabee: We can make the real situations into Design Labs as our clients deal with them. Action Learning in Current Reality.
04:18:18 ken Nishikawa: Here is a humble curiosity. If AI were intelligence made through an interaction with a non-human substance, could we see that intelligence as an existence independent of cultural influence?
04:18:31 Brian Fisher (SFU): Reacted to "We can make the real..." with 👍🏼
04:18:37 Brian Fisher (SFU): Reacted to "Design Labs as Inter..." with 👍🏼
04:18:48 Terri Adkisson: Reacted to "Design Labs as Inter..." with 👍
04:18:50 William Zybach: Reacted to "We can make the real..." with ❤️
04:19:55 Bruce Mabee: Rich session!
04:20:06 Christian Wandeler: Thank you all!
04:20:19 Joe Norton: Reacted to "Here is a humble cur..." with 👍🏻
04:20:29 Peter Sorenson: Reacted to "Thank you all!" with ❤️
04:20:51 ken Nishikawa: If we see AI as a partner, people in different cultures must see AI as a cultural product. How do you think about that?
04:21:15 Marcia Murphy: Big thank you to Stu and to everyone who put the session together.
04:21:21 Valentina Moldovan: https://stsroundtable.com/upcoming/2025-annual-general-meeting/
04:21:22 Wolfgang Kötter: Just to take advantage of the great audience: please save the date for the 2026 Annual Meeting of the Global STS Roundtable -- September 22nd through September 25th, 2026, in Berlin, Germany.
04:21:30 Valentina Moldovan: Reacted to "Just to take Advan..." with 👍
04:21:37 Joe Norton: Reacted to "Just to take Advanta..." with 👍🏻
04:21:42 Larisa: Reacted to "Just to take Advan..." with 👍
04:21:42 bianca indipendente: Thank you for this thought-provoking conversation, and nice to get to know this community. I would love to connect with anyone who fancies a virtual cafe.
04:21:51 david clements: Huge thanks, Stu -- very interesting and informative as always; and to all contributors. Plenty of food for thought and further discovery.
04:22:39 Larisa: excellent session - deeply grateful to Stu and the RT for this valuable conversation.