Session 1

Definitely not in a right-wrong or true-false style. The questions might not make sense or might require a four-year PhD project to answer decently. So time-box your effort, for example by giving the best possible answer you can in 10 minutes, rather than being exhaustive and accurate.

  1. In Social Robots from a human perspective an overview of the state of the art in Social Robotics is given, organized in topics of human perception and expectation (Part I), Interaction (Part II) and application domains (Part III). Read (a.o.) chapter 8. Which pointers does this chapter give in/to a design research approach?

The first pointer is to keep the design simple and intuitive. This makes interactions easier and more familiar. The second pointer concerns the pragmatics of human communication, which encompasses the notion that communication is always happening, verbally or non-verbally. The type of communication and the messages the robot should convey should be examined and made appropriate for users. The third pointer is facial prominence, which states that the level of prominence of the face can influence users' perception. The last pointer is to take the ‘uncanny valley’ into account, which shows that the more realistically human a robot tries to look, the more uncanny it can be perceived [1].

  1. Read chapter 9 as well. What general statements can be made regarding embodiment vs digital content? Apparently users are very capable of maintaining (and living and being assisted by) a boxy thing with a screen. Do we need actuators? Which part of embodiment turns a smartphone into a social robot?

There is a lot of information and data stored on our mobile phones, including much personal content. To most people this content is just random pictures or information, but to the owner it holds a lot of emotional value; it is part of their identity. The embedded emotion lifts it from digital content to embodied content. The fact that the smartphone holds this content and allows users to review and interact with it at all times binds users to it, and allows them to explore their own person [1]. No intermediary is needed, as the phone and its user become one. The knowledge that the phone contains all of this, together with the emotional bond, is what turns a smartphone into a social robot.

  1. Now restate your answer to the previous question after watching the movie 'her'. (the trailer is already a good hint if you didn't know the movie) Does this change your perspective?

It does not change my perspective. The movie shows a man slowly falling in love with his operating system, which confirms the answer to the previous question. The emotional bond and the personalised content engage the user on a deeper level than any other device.

  1. If you now compare the work (vincent2015) with the overview by Bar-Cohen et al. (with a more gripping title) The coming robot revolution, which differences in focus can you point out? This work is a decade old. Where did the real world deviate from the predicted revolution?

The work of Vincent [1] offers more laid-back, guiding content for designers than the work by Bar-Cohen [2]. Bar-Cohen [2] describes the upcoming robot development more as a threat and something to be warned about, as it cannot be stopped. In his book he also points out the possibility of robots equipped with consciousness, which could spark a debate on the slavery or suppression of such robots, or the fear that the robots will turn against us. This has not happened (yet), but the fear persists to this day. He also describes cyborgs, in which the only remaining human part of the body is the brain. Technology today is not advanced enough for this to be realised.

  1. Read about the robot hotel - and compare their practice against for example the heavily automated CitizenM hotel. Where did the robot hotel go wrong?

The robot hotel feels more like an attraction than a functional hotel. The robots are there because it is cool to have robots, but their functionality and durability do not match the job they are meant to do: the creators tried to fit the robots into the job instead of fitting the job to the robots. The CitizenM hotel is also automated, but not through the use of robots. It offers self-check-in and check-out to ensure an easier and more efficient experience for its customers. Instead of replacing all human employees, it kept them around for questions and other customer needs and wants.

  1. Consider the story of the Nabaztag rabbit. The 'internet made social' in 2009. A physical embodiment as a user interface for the internet of things. How does this fit definitions of social robot technology? Where does it deviate? Now, why was it not successful?

The Nabaztag rabbit is a little robot that connects to the internet and offers various services, among others setting an alarm, giving the weather forecast, sending and receiving MP3s, and holding conversations with users. It can be customised and interacted with, and it additionally allows different owners of Nabaztag rabbits to interact with each other. As it can converse with you, provide requested information and recognise objects, all while moving and looking cute, it fits the description of a social robot.
However, the creators tried to put too much into the rabbit, making it difficult to keep all aspects and functionalities at a high standard. This takes away from the experience and reduces the enjoyment a user gets from the interactions.

  1. Check out the work on cuddlebits paper - what are key insights in the role and application of design research in this project?

After creating and testing the CuddleBits it became apparent that the robot's level of arousal is easy for users to determine, but valence is more difficult to link. Without context, the level of arousal can be interpreted in different ways. Another insight is that the physical presence of the different pieces affects the user's emotional understanding of the robot; this also includes whether the robot is viewed live or on video [3].

  1. If you can, check out ‘Be Right Back’ - Black Mirror - https://www.netflix.com/watch/70279173 (or most of the Black Mirror scenarios dealing with robots). A very good recipe against naivety :)…

[1] J. Vincent, S. Taipale, B. Sapio, G. Lugano, and L. Fortunati, Social Robots from a Human Perspective. 2015. doi: 10.1007/978-3-319-15672-9.

[2] D. Hanson and Y. Bar-Cohen, The Coming Robot Revolution. 2009. doi: 10.1007/978-0-387-85349-9.

[3] P. Bucci et al., “Sketching CuddleBits,” CHI ’17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3681–3692, May 2017, doi: 10.1145/3025453.3025774.

Session 2

  1. Which categorisation can you come up with for Robot/AI stories? Use, for example, https://link.springer.com/article/10.1007/s00146-021-01299-6 

A category that could be used for robot/AI stories is the need for survival, where this could mean the survival of the ‘robot race’ or the survival of the earth and/or the human race. Hermann [1] mentions the movie I, Robot and how VIKI is willing to sacrifice part of the human population to ensure the survival of the species as a whole. Other examples would be stories about robot uprisings.

  1. Check out Johnson, B.D., Science Fiction Prototyping, Designing the future with SF (2011) or Johnson, B.D., 21st Century Robot, Maker Media (2014).  What can you find about his approach/recipe for sci-fi prototyping? What are the main differences between Science Fiction Prototyping and 'generic' scenario-based design as for example explained by Stanford http://ldt.stanford.edu/~gimiller/Scenario-Based/scenarioIndex2.htm? (sorry, link no longer working, check the following at archive.org)

Johnson [2] presents a five step process: 

    1. Pick your science and build your world
    2. Identify the scientific inflection point
    3. Consider ramifications of the science on the people
    4. Identify the human inflection point
    5. Reflect on: what did we learn?

In his process there is no explicit problem-scenario identification, which is the first step of Stanford's framework [3]. Johnson's process also ends with a reflection [2], whereas Stanford finalises the framework with both a summative and a formative evaluation of the usability specifications [3].

  1. Check the 21st century robot project http://www.21stcenturyrobot.com/. The outcome of the project is a (reasonably predictable) NAO-like robot personality. Which conditions contribute to that, and how could the story have evolved differently?

The project works with a kit that limits the variety of possible outcomes. The base of the robot will always be the same, and even though the children made all kinds of different designs and shapes, everything is redesigned to fit the kit. The shell can differ and can, to a small extent, have some unique features, but the feel will be the same. With a more versatile kit, for example with interchangeable limbs, the outcomes could differ more and the story could become more personal.

  1. How would you classify the stories being told to consumers accompanying the sold products? Where do they differ from reality?

Marketing stories always present the most perfect and ideal version of the actual product. It will always seem like the robot was made specifically for that purpose and is a perfect fit for you, when in reality it could be the third rebrand after the first two were not successful. Developing these products takes considerable time, and if the intended purpose does not sell well while a slightly different target audience is more willing to buy, the product is easily rebranded as something different.

  1. Which storytelling aspects (back stories?) should you (always) consider when designing social robots? Can, for example, the uncanny valley be seen as linked to (a) story?

What ‘sells’ a social robot is its ability to create a social or emotional connection with the user. When designing a social robot, one should understand which parts of the design allow users to form an emotional bond. Users should feel they can trust the robot, not that it is evil and will take over the world or enact another scenario described by Bar-Cohen [4]. The uncanny valley ties into this: uncanny robots do not tell a good story, unless you are going for horror. The designer should know the audience and what it wants to see, and build the story around that.

[1] I. Hermann, “Artificial intelligence in fiction: between narratives and metaphors,” AI & Society, vol. 38, no. 1, pp. 319–329, Oct. 2021, doi: 10.1007/s00146-021-01299-6.


[2] B. D. Johnson, Science Fiction Prototyping: Designing the Future with Science Fiction. 2011. doi: 10.1007/978-3-031-01796-4.

[3] “Scenario-based design overview.” https://web.archive.org/web/20180621145854/http://ldt.stanford.edu/~gimiller/Scenario-Based/scenarioIndex2.htm

[4] D. Hanson and Y. Bar-Cohen, The Coming Robot Revolution. 2009. doi: 10.1007/978-0-387-85349-9.

Session 3

  1. The paper by Ju & Hoffmann (Design with motion in mind) is a very good source on using motion in your design(process). Which lessons would apply to your case?

In their paper, Hoffman & Ju [1] describe the importance of motion in robot design. They explain that humans are quite sensitive to the motion of other humans, but also to that of objects. This is very important to keep in mind when designing for motion. Additionally, certain movements a robot makes can inform users about its intentions and how one could interact with it [1].
Other applicable lessons are techniques such as Wizard-of-Oz prototyping, video prototyping and interactive DoF exploration [1]. These techniques could all be used when designing ROSE for elderly care and would help to get a clear overview of the motion possible with ROSE.

  1. An often used tool for analysis of motion is the Laban Framework. Can you give some good sources where it is applied to HRI, but (more importantly) can you also find one or two alternatives? (So, the question here is why 'Robot Motion' + 'Theatre' = Laban seems to be the main route)

The Laban framework consists of four elements: Body, Effort, Shape and Space, or BESS for short [2], and focuses on movement. Emir & Burns [3] apply Laban Movement Analysis, mainly the Effort component, to motion design for robotic vacuum cleaners. Cui et al. [4] also use the Effort component, for motion generation on expressive aerial robots. Going in a different direction, Siregar et al. [5] use kinematics analysis, which focuses more on the geometry of motion and the movement of the robot's joints. The difference is that Laban takes a more qualitative approach, whereas kinematics analysis is quantitative.
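To make the qualitative/quantitative contrast concrete, here is a minimal sketch of the kind of calculation kinematics analysis builds on: the forward kinematics of a planar two-joint arm. The link lengths and angles are hypothetical illustration values, not taken from any robot discussed here.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Return the (x, y) position of the end effector for joint angles
    theta1, theta2 (radians) and link lengths l1, l2."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# A fully stretched arm pointing along the x-axis reaches l1 + l2.
print(forward_kinematics(0.0, 0.0))
```

A real analysis would extend this to the robot's actual joint chain and add velocities and accelerations, but even this small example shows why the approach yields numbers rather than movement qualities.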

  1. Reflect on the following statement: To design a social robot and its communication we might need to take into account the way people parse behaviours and not necessarily replicate one-on-one human-like behaviour. Every robot can be interpreted as social if it displays animacy, agency (intentional behavior); which method would you use to design movement to achieve intentionality without relying on behaviours modelled on human behaviour?

As previously mentioned, Hoffman & Ju [1] state that humans are sensitive to the motion of objects and shapes as well. These objects and shapes are not human-like, nor do they move the way a human does, yet despite their non-human look, humans will still give meaning to their motions and understand what they are trying to convey. It is therefore indeed not necessary to replicate human-like behaviour one-on-one, as humans will understand it either way. It is then far more interesting to delve into how humans perceive the motion of objects, or in this case, the motion of social robots. Through video prototyping, a designer can gain insight into how the social robot is perceived.

  1. So far we have focused mainly on motion as a means for expression. Can you find fundamental insights (such as the Laban framework for motion) that gives you a starting point for sound design? or haptics? or morphology?

Robinson et al. [6] present nine design principles for creating robot sound, divided into five themes: 1) Fiction, 2) Source, 3) Scope, 4) Interactivity, and 5) Content production, as shown in Figure 7 of their paper. These principles give an overview of what robot sound design involves and serve as a clear starting point. Haptics can be partly addressed with the Effort part of the Laban framework, which covers Flow, Space, Time and Weight [3]; these components are all relevant to haptics. This will not yield a full haptic design for social robots, but it can surely be a starting point. The Shape part of the Laban framework can serve as a base for morphology, as Shape concerns form and how it changes. Diving into this helps to understand morphology, and could also serve as a base for the morphological design of social robots.
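As one way to make such a starting point operational, the sketch below maps two Effort qualities onto low-level motion (or haptic vibration) parameters. The numeric scales and formulas are my own assumptions for illustration; they are not part of the Laban framework or of the cited papers.

```python
from dataclasses import dataclass

@dataclass
class EffortProfile:
    # Two of the four Laban Effort qualities, each on a -1..+1 scale:
    # time:   -1 = sustained, +1 = sudden
    # weight: -1 = light,     +1 = strong
    time: float
    weight: float

def motion_parameters(effort, base_speed=0.2):
    """Translate Effort qualities into low-level actuation parameters."""
    # Sudden movements get a higher acceleration; sustained ones stay near base.
    acceleration = base_speed * (1.0 + 2.0 * max(effort.time, 0.0))
    # Strong movements get a larger amplitude; light ones a smaller one.
    amplitude = 0.5 * (1.0 + effort.weight)
    return {"acceleration": acceleration, "amplitude": amplitude}

# A sudden, strong gesture versus a sustained, light one.
print(motion_parameters(EffortProfile(time=1.0, weight=1.0)))
print(motion_parameters(EffortProfile(time=-1.0, weight=-1.0)))
```

The point is not the specific formulas but the pattern: expressive qualities become tunable parameters that a designer can iterate on.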

  1. If a large part of the design work seems to evolve around making robots sociable, what would the design space of a (deliberately) anti-social robot look like?

Technically, the design space of an anti-social robot would be the opposite of that of a social robot. However, this does not apply to all design aspects: the same knowledge and technologies can still be used, but their applications will differ. For example, when designing a social robot it is important to understand motion and use that knowledge to ensure the robot moves in a social manner; in the design space of an anti-social robot, the same knowledge would be used to make the robot move in an anti-social manner. One could even go further and leave components out of the design, such as understanding what is said, or a neutral/comforting face, to maximise rudeness and unfriendliness.

[1] G. Hoffman and W. Ju, “Designing robots with movement in mind,” Journal of Human-Robot Interaction, vol. 3, no. 1, p. 89, Mar. 2014, doi: 10.5898/jhri.3.1.hoffman.

[2]  S. H. Hong and T. W. Kim, “A Study on the Use of Motion Graphics and Kinect in LMA (Laban Movement Analysis) Expression Activities for Children with Intellectual Disabilities,” in Communications in computer and information science, 2019, pp. 149–154. doi: 10.1007/978-3-030-30712-7_20


[3] E. Emir and C. M. Burns, “A Survey on Robotic Vacuum Cleaners: Evaluation of Expressive Robotic Motions based on the Framework of Laban Effort Features for Robot Personality Design,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 66, no. 1, pp. 182–186, Sep. 2022, doi: 10.1177/1071181322661040.


[4] H. Cui, C. Maguire, and A. LaViers, “Laban-Inspired Task-Constrained variable motion generation on expressive aerial robots,” Robotics, vol. 8, no. 2, p. 24, Mar. 2019, doi: 10.3390/robotics8020024.


[5] W. Siregar, P. Siagian, L. Rong-Shine, R. Am. Napitupulu, M. Tampubolon, and S. L. Simanjuntak, “Kinematics analyses for robot motion,” IOP Conference Series Materials Science and Engineering, vol. 852, no. 1, p. 012072, Jul. 2020, doi: 10.1088/1757-899x/852/1/012072.


[6] F. A. Robinson, O. Bown, and M. Velonaki, “Designing Sound for Social Robots: Candidate Design Principles,” International Journal of Social Robotics, vol. 14, no. 6, pp. 1507–1525, Jun. 2022, doi: 10.1007/s12369-022-00891-0.

Session 4

(copied from the reader, see referred literature in the reader)

  1. Is the discussion on embodied agents vs virtual agents still a relevant form? Can you find other (hybrid) shapes too?

It is still relevant, as agents in both forms are still being created. AI-based virtual agents are developing very fast, and physical forms that make use of AI are being created as embodied agents. The two serve different functions and address different needs and people. Virtual agents are also easier to deploy, being purely software, whereas embodied agents need more development time to create a fitting embodiment. An interesting hybrid is Alexa, a smart device you can put in your home. You can converse with it much like with current AI systems such as ChatGPT, and have it play music or read messages. But you do not have to be home or near an Alexa device to interact with it: you can always operate it from your phone, even from the other side of the world.

  1. Can you find insights or work on dealing in a structured way with the robotic capabilities and social aspects of the design? Are they the same? Could you formulate requirements for social interaction in a similar (structural) way as 'normal' engineering requirements are formulated?

Piirisild et al. [1] present a structured method for analysing requirements for Socially Assistive Robots (SAR). They developed their own SAR, the SemuBot, targeting children with speech and communication disorders. Alongside the technical specifications, their requirements also address emotional goals and user perceptions. Piirisild et al. extracted requirements from information gathered through interviews, observations and reviews, and put them into a Motivational Goal Model. The requirements are categorised by emotion, perception and quality goals, and after deduplication recategorised into functional and non-functional requirements.
This shows a slight difference in approach between social aspects and robotic capabilities. Capabilities are easier and more clearly formulated, as the robot either is or is not capable of doing something. The social aspect is far more refined and needs more attention, and sometimes more clarification, to ensure that the design conveys the intended emotions.
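The categorisation described above could be captured in a simple structured form. The sketch below is an illustrative assumption on my part: the goal types follow the text, but the class names and example requirements are invented, not Piirisild et al.'s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class GoalType(Enum):
    EMOTION = "emotion"        # how the robot should make the user feel
    PERCEPTION = "perception"  # how the robot should be perceived
    QUALITY = "quality"        # quality attributes of the interaction

@dataclass
class Requirement:
    description: str
    goal_type: GoalType
    functional: bool  # final recategorisation: functional vs non-functional

# Hypothetical example requirements, purely for illustration.
requirements = [
    Requirement("Recognise simple spoken commands", GoalType.QUALITY, functional=True),
    Requirement("Appear friendly and non-threatening", GoalType.PERCEPTION, functional=False),
    Requirement("Make the child feel encouraged", GoalType.EMOTION, functional=False),
]

non_functional = [r for r in requirements if not r.functional]
print([r.description for r in non_functional])
```

Writing social goals in the same structured form as engineering requirements makes it visible that most of them end up on the non-functional side, which matches the observation above.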

  1. Can you find examples where there is a clear conflict in design aspects that come from 'function defines form' and from social requirements? Can you come up with a design that has a very good match?

The RAMCIP robot [2] is a robotic assistant with a screen functioning as a face and an arm with a gripper at the end. It is targeted at people with Mild Cognitive Impairments, but its design also makes it look like a medical device. I could imagine that if a patient needs a robot to assist them due to an impairment, it would be nice if it did not look like a giant cardiac monitor from the hospital. Its function came first in the design.
A form that fits the actual social scenario well is Moxie. Even though there are some questionable moments and aspects in the video, the outer design of Moxie is well done: they knew who they were designing for and used a form and feel that children could accept and possibly befriend.

  1. Duffy (see below) formulated a nice reference list of aspects relevant for design of embodiment. Which of these are the most relevant for your case?

Duffy [3] describes several guidelines in his paper. The ones relevant to our case are 1) the use of natural motion, 2) facilitating the development of the robot's own identity, 3) the use of emotions, and 4) autonomy. These aspects focus more on the application of the robot than on its design. As we use an existing robot (ROSE) in our case, we cannot easily change the way it looks, but we can influence the way it moves and behaves towards the elderly and others in its environment.

[1] A. Piirisild et al., “Structuring social robot requirements: Integrating user perception and emotional goals in technical development,” in Springer proceedings in advanced robotics, 2025, pp. 128–135. doi: 10.1007/978-3-031-89471-8_20.


[2] I. Kostavelis et al., “RAMCIP Robot: a personal Robotic assistant; Demonstration of a complete framework,” in Lecture notes in computer science, 2019, pp. 96–111. doi: 10.1007/978-3-030-11024-6_7.


[3] B. R. Duffy, “Anthropomorphism and the social robot,” Robotics and Autonomous Systems, vol. 42, no. 3–4, pp. 177–190, Feb. 2003, doi: 10.1016/s0921-8890(02)00374-3.