Detailed information about the course

Title

Human-AI Collaboration

Dates

27-31 January 2025

Organizer(s)

Denis Lalanne

Speakers

Prof. Simone Stumpf (University of Glasgow, UK), Dr. Pat Pataranutaporn (MIT, US), Prof. Ujwal Gadiraju (Delft University of Technology, NL), Prof. Joe Paradiso (MIT, US)

 

Description

This winter school will address the topic of Human-AI Collaboration and host several of the most influential researchers in the domain. Prof. Simone Stumpf from the University of Glasgow will provide insights on Responsible and Explainable AI through theories and hands-on exercises. Prof. Ujwal Gadiraju from Delft University of Technology will present an overview of the empirical pursuit of facilitating appropriate reliance in human-AI decision-making and show how to design effective conversational interfaces for human-AI collaboration. Dr. Pat Pataranutaporn from MIT will present the notion of Cyborg Psychology and current research methodologies for understanding how AI can influence human psychological processes. Finally, Prof. Joseph Paradiso, director of the MIT Responsive Environments group, will provide his insights on the notion of Human-AI Collaboration to conclude this Winter School.

Program

Monday 27.01

  • Afternoon session:

    • 15:30 - 16:00 - Welcome
    • 16:00 - 17:30 - Session A1 Responsible AI by Simone Stumpf
    • 17:30 - 17:45 - Coffee break
    • 17:45 - 19:00 - Session A1 Responsible AI by Simone Stumpf

 

Session details

Session A1 - Responsible AI: AI technologies are rapidly advancing and are transforming our work and lives. However, there are grave concerns that AI carries risks and might cause harm to individuals, groups, and society. There have been many calls to develop more responsible AI (RAI) systems. In this session, you will learn what RAI is and the fundamental aspects of developing RAI, and we will cover current research strands in RAI. Hands-on tasks will let you critically evaluate how 'responsible' current AI technologies are, explore possible ways forward in creating responsible AI systems, and reflect on the role of AI in your own research programme.

Tuesday 28.01

  • Morning session:

    • 08:30 - 10:00 - Session A2 Explainable AI by Simone Stumpf
    • 10:00 - 10:15 - Coffee break
    • 10:15 - 11:30 - Session A2 Explainable AI by Simone Stumpf
  • Afternoon session:

    • 16:00 - 17:30 - Session B1 - Trust and Reliance in Human-AI Decision-making by Ujwal Gadiraju
    • 17:30 - 17:45 - Coffee break
    • 17:45 - 19:00 - Session B1 - Trust and Reliance in Human-AI Decision-making by Ujwal Gadiraju

Session details

Session A2 - Explainable AI: Transparency is one of the cornerstones of Responsible AI, and Explainable AI has been seen as the solution to transparency issues. In this session, we will cover different ways of explaining AI systems and their pitfalls. We will center human interpretability as the main purpose of explaining AI systems and delve into the aspects that need to be considered when providing explanations of AI systems, as well as when measuring the effects of providing explanations. We will explore current research gaps and integrate exercises and activities to deepen your understanding of explanations.
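
For readers new to the topic, one widely used post-hoc explanation technique is permutation feature importance; the short sketch below (assuming scikit-learn, purely illustrative and not part of the course material) shows the basic idea of measuring how much a model's accuracy drops when each feature is shuffled:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a model, then shuffle each feature in turn and measure how much
    # test accuracy drops: a large drop suggests the model relies on that feature.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

Such feature-level attributions are only one way of explaining a model, and interpreting them correctly (one of the pitfalls the session addresses) still depends on the human reading them.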

Session B1 - Fostering Appropriate Trust and Reliance in Human-AI Decision-making: Advances in AI and machine learning technologies have accelerated the proliferation and adoption of AI systems across domains ranging from finance to health and education. Researchers and practitioners in different communities exploring the societal impact of integrating AI systems into our everyday lives have recognized the dangers of over-trust and blind reliance on AI systems. In equal measure, there has been a recognition of the potential benefits of collaborating with AI systems that can aid humans in domains and contexts beyond their expertise or otherwise complement human capabilities. Striving to foster appropriate reliance (i.e., simultaneously preventing over-reliance and under-reliance) on AI systems has been akin to walking a tightrope. Over the last decade, several methods and interventions have been proposed to this end, but with limited success. This lecture will present an overview of the empirical pursuit of facilitating appropriate reliance in human-AI decision-making and the lessons learned along the way, and will discuss the open opportunities and challenges that lie ahead in the imminent future.

 

Wednesday 29.01

  • Morning session:

    • 08:30 - 10:00 - Session B2 Effective Conversational Interfaces by Ujwal Gadiraju
    • 10:00 - 10:15 - Coffee break
    • 10:15 - 11:30 - Session B2 Effective Conversational Interfaces by Ujwal Gadiraju
  • Afternoon session:

    • 16:00 - 17:30 - Session C1.1 End-user Programming for AI by Emmanuel Senft
    • 16:00 - 17:30 - Session C1.2 Assistive Technology for Sign Language Learning by Sandrine Tornay
    • 17:30 - 17:45 - Coffee break
    • 17:45 - 19:00 - Session C1.3 Cyborg Psychology by Pat Pataranutaporn

 

Session details

Session B2 - Designing Effective Conversational Interfaces for Human-AI Collaboration: The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. People are increasingly familiar with conversational interactions mediated by technology due to the widespread use of LLM agents, mobile devices, and messaging services. Over half the population on our planet has access to the Internet, with ever-lowering barriers to accessibility. Though text is the dominant modality for implementing conversational user interfaces (CUIs) today, foundational AI models enable the implementation of multimodal CUIs using voice and visual modalities. Adopting visual and auditory cues in addition to text-based responses provides an engaging user experience, particularly in complex scenarios such as health guidance and job interviewing. This lecture will present a review of state-of-the-art research and best practices on building and deploying multimodal CUIs and synthesize the open research challenges in supporting such CUIs. It will also showcase the benefits of employing novel conversational interfaces in the domains of human-AI decision-making, health and well-being, information retrieval, and crowd computing, and discuss the potential of conversational interfaces in facilitating and mediating the interactions of people with AI systems.

Session C1.1 - End-user Programming for AI: End-user programming (EUP) or no-code programming tools aim to lower the barrier of entry for end-users to directly specify AI and robot programs. By transferring this capability from engineers to end-users, EUP enables users to customize their experiences according to their unique needs and preferences. This talk will discuss specific challenges of EUP and traditional modalities, with a particular focus on recent work in EUP for human-robot interaction.

Session C1.2 - Assistive Technology for Sign Language Learning: In language learning, learners need to develop comprehension and production skills, both of which are necessary for successful interaction. The use of digital technologies to support the acquisition of these skills has proven effective in spoken language learning and is emerging in sign language learning. Most of the existing tools for sign language learning have been developed for the acquisition of comprehension, while the production side involves only self-comparison. However, learning effective sign language production requires good proprioception, spatial reasoning, and observation skills, as sign language is a gestural mode of communication that uses multiple channels of information to convey meaning: hand gestures, body posture, facial expression, and mouthing. There is a need to develop applications that can guide the learner in different aspects of sign language production. This talk will present the methods behind an AI-driven, web-based sign language learning application that automatically assesses sign language production. The talk will provide insights into an assistive technology that deals with hand movement and handshape modeling, multi-channel modeling, and explainability requirements, in a low-resource framework.

Session C1.3 - Cyborg Psychology: Designing Human-AI Systems that Support Human Flourishing: As Artificial Intelligence (AI) becomes increasingly integrated into our daily lives, understanding the psychological implications of human-AI interaction is crucial for developing systems that truly support human capabilities. This talk introduces "Cyborg Psychology," an interdisciplinary, human-centered approach to understanding how AI systems influence human psychological processes. Cyborg Psychology emphasizes applying insights to design and develop AI systems that support human flourishing through the cultivation of Wisdom, Wonder, and Wellbeing. For example, the "Wearable Reasoner" seeks to enhance human rationality, "Personalized Virtual Characters" aims to support learning motivation, and "Future You" is designed to encourage long-term oriented thinking and behavior. The ultimate goal is to empower the development of AI systems that foster human flourishing by nurturing intellectual growth, cultivating motivation, stimulating critical thinking, and preserving individual autonomy in decision-making.

 

Thursday 30.01

  • Morning session:

    • 08:30 - 10:00 - Session C2 Teaming with Generative AI Agents by André Freitas
    • 10:00 - 10:15 - Coffee break
    • 10:15 - 11:30 - Session C2 Teaming with Generative AI Agents by André Freitas
  • Afternoon session:

    • 16:00 - 17:15 - Session D Workshop
    • 17:15 - 17:30 - Coffee break
    • 17:30 - 19:00 - Session D Workshop

 

Session details

Session C2 - Teaming with Generative AI Agents: This tutorial offers an in-depth exploration of human-expert and multi-agent collaboration in the context of Generative AI Agents, with a particular focus on domains requiring advanced analytical capacity, such as policymaking, biomedicine, and the physics of novel materials. As these specialized domains increasingly adopt Generative AI systems, we witness an unprecedented opportunity to realize a vision of augmented rationality, wherein experts and AI agents cooperate in a synergistic manner to address complex analytical challenges.

We approach this topic through the lens of neuro-symbolic AI, emphasizing the importance of reasoning mechanisms that can guide, justify, and refine the outputs of generative models. We introduce the notion of analytical Generative AI Agents - systems designed to tackle domain-specific complexities while upholding the principles of Responsible AI. The tutorial will dissect the architectural elements needed to implement such agents effectively, from model composition to interpretability and transparent decision-making workflows.
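To make the generate-and-verify idea concrete, here is a minimal sketch (hypothetical function names and constraints, not the architecture presented in the tutorial) in which a symbolic checker validates a generative model's output and feeds violations back for refinement before escalating to a human expert:

    import re
    from typing import Optional

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a generative model backend; here it "revises"
        # its answer when the prompt carries feedback from the symbolic checker.
        return "estimated_budget = 80" if "revise" in prompt else "estimated_budget = 120"

    def symbolic_check(output: str, lower: int, upper: int) -> bool:
        # A simple symbolic constraint: the proposed value must lie in a known range.
        match = re.search(r"estimated_budget\s*=\s*(\d+)", output)
        return match is not None and lower <= int(match.group(1)) <= upper

    def analytical_agent(task: str, max_rounds: int = 3) -> Optional[str]:
        prompt = task
        for _ in range(max_rounds):
            draft = generate(prompt)
            if symbolic_check(draft, lower=0, upper=100):
                return draft  # verified answer: passes the symbolic constraint
            prompt = task + "\nConstraint violated; revise so the value is within [0, 100]."
        return None  # unresolved: escalate to a human expert

    print(analytical_agent("Propose an estimated_budget within policy limits."))

The symbolic layer here is deliberately trivial; in the domains discussed in the tutorial it would stand in for richer reasoning mechanisms that guide, justify, and refine the generative model's outputs.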

In addition, participants will be introduced to emerging practices for evaluating these multi-agent systems in human-AI teaming contexts. By the end of the tutorial, attendees will have gained a deep understanding of how to design, deploy, and assess analytical Generative AI Agents, aiming to foster responsible and effective collaborations between humans and AI-driven multi-agent systems.

Session D - Workshop: A workshop on Human-AI Collaboration. The participants of the Winter School will be divided into groups and reflect on a specific topic.

 

Friday 31.01

  • Morning session:

    • 08:30 - 10:30 - Session E Approaching the Augmented Human by Joe Paradiso
    • 10:30 - 11:00 - Coffee break
    • 11:00 - 11:30 - Closing-out by Denis Lalanne

Session details

Session E - Approaching the Augmented Human: In so many ways, humanity and civilization are at a fascinating crossroads. Although we are far from many of the fanciful concepts of past decades like antigravity and flying cars, revolutions in information technology have held their promise, and despite the cyberpunk novelists having anticipated many things here early on, the ways in which these technologies impact humanity have often taken their inventors and many of the rest of us by surprise. In many ways, I view this process as a progressive blurring of physical identity, as our mind expands into networks that extend beyond the confines of our head - and connections made in virtual space deeply affect us, just as synapsing neurons do within our brains.

Throughout my life, there were several deep questions that motivated me. One was whether there is life elsewhere in the universe. We are on the verge of answering that one (or at least pushing the boundaries way back) through remote observation and planetary probes (my guess is no - the universe is here just to make us - but stay tuned!). The other was when real AI would become a reality. We know the answer to the latter question - essentially now. This will profoundly affect humanity. Rather than view AI as competition, I see it manifesting as extension - AI will essentially provide a medium through which we interact with and even perceive the world in hopefully richer ways. Although this vision is fraught with concern, it is also one of promise.

I grew up during the last space age. It was an era of technological optimism, although tempered and edged on by the dark luster of the Cold War. Space operations subsequently fell short of our visions then - we don't even have lunar colonies, not to mention Mars bases - the more we learned about space, the more hostile it became, and getting there stayed expensive and difficult. The latter issue is changing now with new players in the game. Although the former concerns persist, we try to think of ways of subverting them. Hence space operations will grow in coming years - but the big question is what will go: humans or machines? And if humans, will it be the augmented human that is evolving here on Earth, not the WWII mold of the space heroes of our ancestors?

Bringing our lens closer to the present, we are living in an era driven by ubiquitous sensing. The visions that many of us touted in the early days of ubiquitous/pervasive computing have largely come to pass in this age of IoT, and now sensors of all kinds are embedded in smart devices across our environments that draw very little power and connect seamlessly to widespread networking infrastructure. Where do we go next? The crux of much of this will be in how this information connects to people, and how our perception and cognition effectively expand beyond our corporeal confines. Illustrated by the above speculations, this talk will explore the augmented human as viewed through the lens of recent projects happening in my Responsive Environments research group that involve sensing at various scales in the physical world (wearables, smart buildings, connected landscapes, and space missions) and how this information connects to people in different ways. Examples will include viewing smart buildings as 'prosthetic' extensions of their inhabitants, manifesting sensed or inferred phenomena in virtual analog environments, and interfaces modulated by user attention and focus or augmented by real-time AI.

Closing-out: Denis Lalanne will say a few words before closing the CUSO Winter School.

 

Location

Champéry

Information

cuso-ah25.human-ist.ch. Please register before 20 December 2024.

Places

35

Deadline for registration 20.12.2024