Colloquium #14

January 2020

source: presentation by S. Schaffer

Description

Programme:

An introduction to assistance systems for unmanned aerial vehicles and cars, followed by a lecture on cognitive assistance systems based on machine learning technologies and their implementation in traffic, health, knowledge systems, and other areas.

We are going to draw on what we discussed in a previous colloquium on AI: what deep learning is, or is supposed to be, and which algorithms this concept subsumes. We will then talk about some of these technologies in more detail.

11:30 - 12:00 Coffee and hellos

12:00 - 12:20 Intro Auris-E. Lipinski

12:30 - 13:30 Interactive Workshop with Stefan Schaffer Part 1

13:30 - 14:00 Break: food, drinks, coffee

14:00 - 15:00 Interactive Workshop with Stefan Schaffer Part 2

16:00 - 18:00 Discussion

  1. Introduction

“Cognition, computational assistance and artificial performance improvement, through so called AI” - by Auris-E. Lipinski

(PhenCoCo, VIOM, HU Berlin)

Title: Cognition, computational assistance and artificial performance improvement, through so called AI

Summary:

  1. What are assistance systems? Assistance systems are computer processes that are supposed to ease the handling of various technologies, often technologies supporting mobility. They aim to make navigation and steering tasks easier and safer. To achieve these aims, however, human-machine interaction needs to go smoothly.

  2. How are they used? Depending on the technology in question, assistance systems come with varying usability and user experience designs. Depending on these designs, using a system can be extraordinarily better than before, or harmfully worse.

  3. Why are they being used? Machines are better than humans at some things, mostly quantitative things, to keep it general. These things or tasks, it is assumed, are better outsourced, so humans can unfold their full potential without having to deal with the uninteresting bits and unpleasant work in life. To achieve this goal in practice, however, someone needs to fill the databases. And those someones are not machines, at least not at first.

  4. What does that have to do with cognition, philosophy and/or machine learning? The aims people try to reach with certain technological developments are not always ethical, invented under fair circumstances, or aimed at such ends. Machine learning can be helpful for humans, but it has its limits; depending on the technology used, it has both practical and ethical strengths and weaknesses. Whether these can come close to human… let's call them inner workings, is highly controversial.

  5. When do assistance systems take over? Well, we shall see, won't we… As this is an introduction, the answer will be twofold, though there are of course many more aspects to it. Firstly, there is the question of autonomy versus the idea of one entity doing all the work while another makes all the creative, leading decisions. Secondly, autonomy in the technological sense differs greatly from what is understood as human autonomy, yet the differences blur. One way of fleshing out the question above is to ask what it means to understand one's environment and to form concepts of it on the basis of computational ideas, instead of anthropological, societal or biological ideas.

  2. “Cognitive Assistants” - An interactive lecture by Dr. Stefan Schaffer

(Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI)

The main part of this colloquium will be an interactive explanation of bike assistance systems and a discussion of their conceptualisation. The talk will be split into two parts:

Part 1: Multimodal conversational assistance systems

With the rise of chatbots and voice interaction, conversations with machines are becoming more human-like. But when people communicate, they use everything they have in addition to their voice, including gestures, emotions and facial expressions. Support for such multimodal communication will be an important requirement for human-computer interaction in the future. In the first part of the talk I will outline concepts and a basic architecture of multimodal systems. The advantages of multimodal interfaces will be illustrated through several technological demonstrators developed in research and industrial projects at DFKI.
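To make the idea of a multimodal architecture more concrete: one common pattern is late fusion, where each modality recogniser produces a scored hypothesis and a fusion step merges them into a single user intent. The sketch below is purely illustrative (the class names, intent labels, and fusion rule are assumptions for this example, not DFKI's actual system):

```python
# Minimal sketch of late fusion in a multimodal dialogue system.
# Each modality recogniser emits a scored hypothesis; fusion combines
# the best-scoring intent with a complementary slot from another
# modality (e.g. speech says "show me that" while a pointing gesture
# selects the object). All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    modality: str   # e.g. "speech", "gesture"
    intent: str     # recognised intent label
    slot: str       # referenced object, if any ("" if none)
    score: float    # recogniser confidence in [0, 1]

def fuse(hypotheses):
    """Pick the highest-scoring intent, then fill its slot from the
    best-scoring hypothesis that actually carries one."""
    best = max(hypotheses, key=lambda h: h.score)
    slot = best.slot or next(
        (h.slot for h in sorted(hypotheses, key=lambda h: -h.score) if h.slot),
        "")
    return {"intent": best.intent, "slot": slot}

speech = Hypothesis("speech", intent="show_details", slot="", score=0.9)
gesture = Hypothesis("gesture", intent="select", slot="bike_station_7", score=0.7)
print(fuse([speech, gesture]))  # {'intent': 'show_details', 'slot': 'bike_station_7'}
```

The point of the sketch is the division of labour: modality-specific recognition stays local, while reference resolution across modalities happens in one fusion step.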

Part 2: SmartBike – AI-based mobility scenarios for cyclists

The second part of the talk will focus on a more concrete scenario of AI-based mobility: a new traffic concept which we call a bicycle swarm. After a short introduction to how we understand AI-based mobility scenarios, participants are invited to take part in an interactive session elaborating the following challenges:

More fun to ride a bike – Which approaches can contribute to this? (Gamification, reward systems, challenges, CO2 awareness, …)

Mobility Mate Matching – How can I, as a cyclist, identify like-minded people better, faster, and more easily? (Riding style and speed, spatial overlap, intensity of use, racing bike enthusiasts, …)

Swarm Intelligence – How can wearables support collective cycling? How can safety and visibility be increased? (“Green Wave”; wearables or app functions for forming and recognising cycling groups, e.g. by shared destination or motivation to cycle)
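As a starting point for the Mobility Mate Matching challenge, one naive approach is to represent each cyclist as a small feature vector (for example: average speed, route overlap, rides per week, racing-bike affinity) and rank candidates by cosine similarity. The features, weights, and names below are assumptions made for illustration, not the project's actual matching criteria:

```python
# Hypothetical sketch: rank cyclists by similarity of their profiles.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_matches(me, others):
    """Return the names in `others` sorted by similarity to `me`, best first."""
    return sorted(others, key=lambda name: cosine(me, others[name]), reverse=True)

profiles = {
    # avg speed (km/h), commute overlap [0-1], rides/week, racing-bike affinity [0-1]
    "alex":  [28.0, 0.9, 5, 1.0],
    "berra": [15.0, 0.2, 2, 0.0],
    "chris": [26.0, 0.8, 4, 1.0],
}
me = [27.0, 0.85, 5, 1.0]
print(rank_matches(me, profiles))  # ['alex', 'chris', 'berra']
```

A real system would at least normalise the features first (here raw speed dominates the vector), and would add privacy constraints before exposing any matches; the sketch only shows the shape of the problem.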

In conclusion, a set of possible future scenarios for swarm interaction will be presented in order to lead into a discussion with the participants.

Speakers

source: https://www.unite.ai/de/Stefan-Schaffer-Senior-Researcher-Deutsches-Forschungszentrum-f%C3%BCr-KI-DFKI-Interviewreihe/

Stefan Schaffer: Stefan Schaffer is a senior researcher and project manager at the Cognitive Assistants Department of the German Research Center for Artificial Intelligence (DFKI). He holds an M.A. in Communication Science and Computer Science and a PhD in Computer Science from the Technical University Berlin. His main research interests include conversational, mobile, and multimodal human-computer interaction, as well as related techniques like multimodal intent detection, dialogue management, and multimodal output generation. His work has resulted in several assistance systems for different domains, such as mobility, automotive, tax information, and customer service. He has been involved in several research and industry-funded projects.

source: ael

Auris-E. Lipinski: Auris-E. Lipinski is a trained philosophy teacher with experience in the tech industry, providing one-on-one lessons and tech communications for companies and entrepreneurs, as well as language training and simultaneous translation. While studying Philosophy & English at Humboldt University, Berlin, she became a scientific assistant at VIOM GmbH. She founded PhenCoCo in the aftermath of university seminars such as “Konstruktion und Phänomenologie der Wahrnehmung”, “Phänomenologie und Kognition” (M. Thiering) and “Computation und Geist” (J. Bach). She has been involved in various research and development projects, guiding her academic interests towards wayfinding and the cognitive preconditions for navigation, both computational and phenomenological. This includes working on spatial concepts found in philosophy, psychology and robotics, subsuming Gestalt theory, embodiment theories, the importance of language and concepts, association and intuition. Her personal interests lie, among others, in current issues in philosophy, technology, and science, specifically navigation, optimisation, and telematics.

Colloquium Material