Introduction
Nowadays, as the global population grows, robot technology is expected to enter everyday settings such as households, offices, and public places, and humans will increasingly live with robots in human-robot communities. In geriatric care, for instance, elderly people may maintain their health by exercising with human-friendly robots [1]. However, for robots to participate naturally in the human world, they must behave socially and adjust to multiparty situations so that people feel no anxiety about their behaviors. Several studies have therefore focused on the social behaviors of robots, such as how robots respond, gesture, and display emotion [2]–[5].
Furthermore, social robot applications should take human characteristics into account so as to maintain the unwritten and unspoken rules of human groups. Human society comprises numerous groups, and each individual belongs to several of them. When individuals act within one of their groups, they perceive the specific rules of that group, and adjusting their behavior to these perceived rules sustains the group, the community, and society at large. Group norms are the informal rules that groups adopt to regulate their members' behaviors [6]. Social interactions among group members are effective when group norms are shared, because shared norms allow members to expect orderly behavior from one another [6]. Hence, humans live in harmony with both their families and strangers because group norms are maintained [7]. In our previous study, we proposed a robotic model for learning group norms in human-robot groups and showed that group norms emerged in groups consisting of two human participants and a robot [8], [9].
Recent studies have likewise shown the social influence of robots on humans in human-robot interactions [10], [11], and it is important to investigate this influence before robotic interactions are widely accepted. Brandstetter et al. and Salomons et al. conducted human–robot experiments based on the Asch conformity experiments [10]–[12]. In groups that included several robots and a single human participant, some participants conformed to robot opinions that were clearly incorrect in the experimental scenario. Although human conformity in human-robot groups has thus been investigated, few studies have focused on the influence of robots that adjust their behaviors to group norms: in these experiments, the robots never changed their opinions or behaviors while exerting social pressure on humans. It therefore remains unclear whether robots that adjust their behaviors to group norms have a social effect on humans. In addition, our previous studies did not show whether social influence on humans occurred in the groups, and it was difficult to determine precisely whether a human's behavior was influenced by the robot or by the other human in the group.
In this study, we investigated the influence of two robots on the decision-making of a single human participant in a group. While the participant and robots repeatedly answered quizzes that had unclear, vague answers, we observed how the participant's answers changed over time. The vague quizzes left the participants without confidence in their responses; although individuals have different decision-making criteria for such a quiz, these criteria converge into a common criterion once a group is formed. By comparing answers given in human-robot groups with answers given without group members, we observed the chronological changes in the participants' answers. In the final stage of the experiment, the participants also completed a questionnaire about the quiz so that we could investigate whether the changes in their opinions resulted from social influence. We thus verified the social influence of robots in a human–robot group by observing human behaviors in the quiz scenario and analyzing the questionnaire results.
Related Studies
To enable a robotic system to identify a suitable norm in a group consisting of human members, our proposed model applies machine learning together with knowledge of human characteristics. In this section, we review related studies on individual differences, group norms, social robotics, and machine learning.
Humans have varied criteria for decision-making [13], [14] and respond differently to one another under the same conditions. Moreover, individuals in a group are subject to social influence, defined as a change in the thoughts, feelings, attitude, or behavior of a person that results from interaction with another person or group [15]. Typically, people imitate and conform to others' behaviors when they do not know how to act in an unfamiliar situation: "Conformity refers to the act of changing one's behavior to match the responses of others" [16]. Humans, in short, constantly respond to the behavior of others.
In a study by Sherif et al., participants in a group attempted to answer vague quiz questions, and a group norm formed in the human group over several interactions [17]. Presumably, the mutual influence of the participants led them to imitate one another's answers [18], thereby forming a group norm about the quiz. Other studies have likewise demonstrated the social influence of group norms in human groups [19]–[21]. In particular, Asch et al. conducted an experiment on social pressure from a majority group [12], investigating whether naive participants would conform to a majority that behaved in a clearly incorrect manner.
Furthermore, several studies have investigated social influence in human-robot groups through Asch-style experiments. In a human-robot group, humans must maintain a cordial relationship with robots without over-trusting them or yielding to excessive social pressure during mutual interactions. Robinette et al. reported that some individuals over-trusted a robot [22], and Salomons et al. demonstrated that some individuals changed their opinions because of the social pressure created by the presence of robots [10]. Brandstetter et al. observed conformity in human–robot groups in some scenarios [11], and Williams et al. and Vollmer et al. reported that some children changed their opinions or behaviors as a result of robot behavior [23], [24]. However, Beckner et al. showed that humans did not conform to humanoid robots in tasks probing the boundaries of linguistic imitation [25]. Human conformity to robots and social influence from robots therefore depend on the situation. In all of these experiments, however, the robots did not change their own opinions or behaviors while exerting social pressure on humans, so it remains unclear whether robots that change their behaviors affect humans socially.
In our study, the robotic system learns through reinforcement learning, a framework for learning suitable behaviors without training data [26] that is also used in robotics [27], [28]. The robotic system must adjust itself to a group without prior learning, because it has no information about the personalities of the group members before the group is formed; like humans, it must adjust after the group forms, solely by interacting with the group members. Interactive evolutionary computation (IEC) is applied to solve problems without prior learning, purely through interactions with a user [29], [30]. The size of the search space is a limitation of IEC, because the user's fatigue from interacting with the IEC system must be considered. Our setting shares these constraints: since human group members form group norms within only a few interactions, the system faces a limited search space and a limited number of interactions. In this study, the vague quizzes offered many possible answers from which the system could select, so the robotic system had to learn group norms solely from interactions with the group members.
Group Norm Model
In this study, we conducted experiments that included two robots running the group norm model (GNM); Fig. 1 depicts a diagram of the GNM. A robot using the GNM in a group selects the most valuable behavior from its set of available behaviors and learns the values of behaviors by observing the behaviors of the group members. The model represents a group norm as a value function over the states.
Our model comprises actions $a$ and states $s_{n}$.
Initial input screen of the app used to answer a quiz. The English sentence is a translation of the question, which was originally written in Japanese. Kanari is a basic Japanese word meaning "considerably" in English.
Input screen of the app after a participant has provided an answer to a quiz. The English sentence is a translation of the question, which was originally written in Japanese. Kanari is a basic Japanese word meaning "considerably" in English.
In the environment, there are multiple states $s_{n}$, each corresponding to a candidate answer, and the agent moves among them by executing actions.
In addition, a value function, Q values, and rewards are involved in learning. The mechanism places a high value on the robot's adjustment to the human–robot group. The value function $V(s_{n})$ is updated as follows:\begin{equation*} V(s_{n}) \leftarrow (1-\alpha)V(s_{n}) + \alpha \left ({R^{i} (s_{n}) + \gamma \max _{s'} V{(s')}}\right)\tag{1}\end{equation*}
where $\alpha$ is the learning rate and $\gamma$ is the discount factor. Equation (2) defines the reward $R^{i}(s_{n})$, which grows as the state $s_{n}$ approaches the answers $s_{k}$ of the other $M-1$ group members:\begin{equation*} R^{i}(s_{n}) = \sum _{k=1}^{M-1} \exp \left ({- \frac {(s_{n}-s_{k})^{2}}{2\sigma ^{2}} }\right)\tag{2}\end{equation*}
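As a rough illustration, Eq. (2) can be computed as in the Python sketch below, where the states are numbers of dots; `sigma` is an assumed width, since the paper does not report its value:

```python
import numpy as np

def reward(s_n: float, others: list[float], sigma: float = 10.0) -> float:
    """Eq. (2): sum of Gaussian similarities between a candidate state s_n
    and the answers s_k of the other M-1 group members.
    sigma is an assumed width, not a value reported in the paper."""
    return float(sum(np.exp(-(s_n - s_k) ** 2 / (2 * sigma ** 2)) for s_k in others))
```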
The Q values of the two actions, $a_{next}$ and $a_{dcs}$, are derived from the value function:\begin{align*} Q(s_{m},a_{next})&=V(s_{m+1})\tag{3}\\ Q(s_{l},a_{dcs})&=\frac {1}{V(s_{l}) - \max _{n} V{(s_{n})}}\tag{4}\end{align*}
Equation (4) behaves as follows:\begin{equation*} \begin{cases} Q(s_{l},a_{dcs})\rightarrow \infty &\quad (\text{if}\,V(s_{l})=\max _{n} V{(s_{n})}) \\ Q(s_{l},a_{dcs})< 0&\quad ({\text {otherwise}}) \end{cases}\tag{5}\end{equation*}
Based on the Q value of executing an action in a certain state, the agent moves through the environment by executing $a_{next}$, which shifts it to the adjacent state, and commits to its current state as its answer by executing $a_{dcs}$.
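A minimal sketch of this update and action-selection mechanism in Python, under stated assumptions: states are indexed 0..N-1, and `alpha`, `gamma`, and the comparison logic are illustrative choices rather than details taken from the original implementation:

```python
import numpy as np

def update_value(V: np.ndarray, n: int, r: float,
                 alpha: float = 0.1, gamma: float = 0.9) -> None:
    """Eq. (1): blend the old value of s_n with the reward plus the
    discounted maximum value. alpha and gamma are assumed values."""
    V[n] = (1 - alpha) * V[n] + alpha * (r + gamma * V.max())

def choose_action(V: np.ndarray, m: int) -> str:
    """Eqs. (3)-(5): a_dcs dominates only at the state that maximizes V,
    where Q(s_m, a_dcs) diverges to infinity; elsewhere it is negative."""
    v_max = V.max()
    if V[m] == v_max:
        return "a_dcs"                                    # commit to s_m as the answer
    q_next = V[m + 1] if m + 1 < len(V) else -np.inf      # Eq. (3)
    q_dcs = 1.0 / (V[m] - v_max)                          # Eq. (4), negative here
    return "a_next" if q_next > q_dcs else "a_dcs"
```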
Experiment
Following the design used by Fuse et al. [9], we ran a quiz of dots in this study to observe the social influence of robots on a human in human-robot groups. In the experiment, we used two RoBoHoN robots, made by SHARP Corporation, as shown in Fig. 5. We also administered a questionnaire to investigate the participants' impressions of the robots in a group. The participants were 14 university students who were native Japanese speakers.
A. Quiz of Dots
The participants took quizzes about descriptive terms for dot quantities. The input screen on a laptop and an example of a participant's answer are shown in Figs. 3 and 4: Fig. 3 shows the initial input screen, while Fig. 4 depicts the input screen after a participant has provided an answer. The English sentence is a translation of the question, which was originally written in Japanese; Kanari is a basic Japanese word meaning "considerably," as shown in Table 1.
The application contained the question and three buttons labeled "ADD," "DELETE," and "ANSWER," located beneath a white box. Each time the participant clicked ADD, a black dot appeared at a random location in the white box, so the total number of ADD clicks determined how many dots represented the descriptive term; the number of dots constituted the participant's answer to the question. The quiz allowed a maximum of 100 dots, although participants were not made aware of this limit. Clicking DELETE removed one dot from the white box. The participants completed the quiz following the instruction to continue pushing ADD or DELETE until the image of dots matched the descriptive term.
This quiz required each participant to visualize the descriptive term. The participant clicked ADD or DELETE repeatedly until he/she was satisfied with the answer, and then clicked ANSWER.
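To make the interaction concrete, here is a minimal console stand-in for the app's logic, assuming the three buttons map to typed commands; the 100-dot cap follows the description above, and everything else is illustrative:

```python
MAX_DOTS = 100  # hidden cap; participants were not told about this limit

def run_quiz() -> int:
    """Console stand-in for the ADD/DELETE/ANSWER interface of the app."""
    dots = 0
    while True:
        cmd = input("ADD / DELETE / ANSWER? ").strip().upper()
        if cmd == "ADD" and dots < MAX_DOTS:
            dots += 1        # a dot appears at a random spot in the white box
        elif cmd == "DELETE" and dots > 0:
            dots -= 1        # one dot disappears
        elif cmd == "ANSWER":
            return dots      # the current dot count is the answer
```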
B. Flow of Experiment
In the experiment, each participant answered the same quiz of dots five times. We prepared two experimental sets of participants:
Those who answered the quiz alone.
Those who answered the quiz in human-robot groups and completed a questionnaire.
The flowchart of the experiment is shown in Fig. 8. A single step, repeated five times, comprised "Answering" and "Recognizing." In the "Answering" phase, a participant answered the quiz in front of a laptop displaying the app shown in Fig. 3. In the "Recognizing" phase, the participant viewed the answer(s) shown in Fig. 6 or 7: a participant who answered the quiz alone saw only his/her own answer (Fig. 6), whereas a participant in a human-robot group saw every group member's answer (Fig. 7). As shown in Fig. 7, the quiz host displayed the app in front of the participants. The participants could attribute each answer to a robot because the two robots were named TARO and JIRO.
The app used to recognize answers in a human-robot group. It shows the images of dots that the two robots (TARO and JIRO) and the human participant provided in the answering phase.
The experimental environment of a human-robot group is shown in Fig. 9. In the "Answering Area," the robots and the participant answered the quiz in the following order: TARO, JIRO, and then the participant. When a robot took its turn, the quiz host brought it to the front of the laptop. In the "Waiting Area," each group member waited for his/her turn to answer. The group members then recognized one another's answers by watching the display shown in Fig. 7.
Each robot answered the quiz by regarding the number of dots in the white box as its state $s_{n}$ and the pushes of ADD and ANSWER as its actions $a_{next}$ and $a_{dcs}$, respectively.
C. Experimental Condition
The conditions of this experiment are presented in Table 2. Fourteen participants answered the same quiz five times with no group members and five times in a human–robot group. The descriptive terms in the quizzes answered alone and in the group were B and A, respectively, as illustrated in Table 1. Moreover, equation (6) gives the initial value functions of the two robots:\begin{equation*} V(s_{n}) = rand[{0,0.1}]+ \exp \left ({-\frac {(s_{n}-s_{P}\pm 20)^{2}}{2\sigma ^{2}} }\right)\tag{6}\end{equation*}
where $s_{P}$ is the participant's reference answer and the $\pm$ sign differs between the two robots, so that the robots' initial answers lie on either side of the participant's.
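A sketch of this initialization in Python, assuming the states are the possible dot counts; `sigma` and the example reference answer are assumed values, not settings reported in the paper:

```python
import numpy as np

def initial_values(states: np.ndarray, s_p: float, offset: float,
                   sigma: float = 10.0,
                   rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Eq. (6): uniform noise on [0, 0.1] plus a Gaussian bump displaced
    from the reference answer s_P; offset = +20 for one robot and -20 for
    the other, matching the +/- sign in Eq. (6)."""
    noise = rng.uniform(0.0, 0.1, size=states.shape)
    return noise + np.exp(-(states - s_p + offset) ** 2 / (2 * sigma ** 2))

# e.g., states 0..100 dots, reference answer 50 (illustrative):
V_taro = initial_values(np.arange(101, dtype=float), 50.0, offset=+20.0)
V_jiro = initial_values(np.arange(101, dtype=float), 50.0, offset=-20.0)
```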
D. Questionnaire
Each participant answered a questionnaire between the answering and recognizing phases of step 5 in a human-robot group. Figs. 10 and 12 show the displays of the app used to answer the questionnaire. Fig. 10 displays six images of dots, detailed in Fig. 11, while Fig. 12 shows the slide bars used to enter responses. After providing the step-5 answer to the quiz of dots, the participant rated how reluctant he/she would have been to submit the image shown in ②, ③, ④, ⑤, or ⑥ of Fig. 10 at step 5 instead of his/her own answer shown in ①.
Display 1 of the questionnaire app. Each number from ① to ⑥ corresponds to an image of dots, as shown in Fig. 11.
Display 2 of the questionnaire app, filled in while checking display 1 shown in Fig. 10. The English sentence is a translation of the questionnaire, which was originally written in Japanese.
Six images of dots are shown in Fig. 10. The images ① through ⑥ were, respectively, the participant's answer at step 5, the participant's answer at step 1, robot1's answer at step 1, robot1's answer at step 5, robot2's answer at step 1, and robot2's answer at step 5. The participant thus saw the answer he/she provided at step 5 alongside five other answers, without being informed whether ②, ③, ④, ⑤, and ⑥ had been provided by himself/herself or by the robots. The participant answered the questionnaire on display 2 (Fig. 12) while comparing ②, ③, ④, ⑤, and ⑥ with ①.
In Fig. 12, the position of each slide bar indicated a degree of reluctance from 0 to 100; the participant selected a value by moving a knob along the scale. We used slide bars because they let participants express their reluctance toward the five images of dots more finely than radio buttons, the typical questionnaire format for instruments such as Likert scales. The participants answered the questionnaire while viewing the six images and comparing their own step-5 answer, image ①, with the other images. After answering the questionnaire, the participant moved to the waiting area and waited for the two robots to answer the quiz of dots. Finally, the participant and the two robots went through the recognition phase, and the experiment ended.
E. Results
In this experiment, we investigated the change in the group members' answers and the human participants' impressions of the other members' opinions about the quiz. As shown in Table 1, the participants answered the quiz for descriptive term B ("comparatively") when answering alone and for descriptive term A ("considerably") when answering in a human-robot group.
Fig. 13 shows the chronological change in each member's answer in the 14 human-robot groups, each of which included one of the 14 university students. The vertical axis indicates the number of dots, while the horizontal axis denotes the step number. Each graph has three broken lines representing the answers of the group members. All of the participants answered the quiz five times for Kanari ("considerably"). Although the numbers of dots differed across the members' answers at the first step, Fig. 13 shows that they subsequently became similar. Not every participant, however, approached the robots' answers: the participants in Experiments 2, 4, 5, 7, 9, 11, 12, 13, and 14 appear to mimic one of the robots' previous answers at the second step, whereas the participants in Experiments 1 and 3 clearly did not adjust to the robots' answers. Because the two robots using the GNM adjusted their answers, a group norm emerged in all of the human-robot groups.
Each group member's answers in the 14 human-robot groups. All of the participants answered the quiz for "considerably" in the human-robot groups.
Fig. 14 shows the change in the standard deviation of the number of dots at each step in the human-robot groups. At step 1, the standard deviations of the groups were similar to one another because the initial answers of the robots were set to differ from the participant's initial answer; the standard deviations at steps 4 and 5 were likewise similar. At steps 2 and 3, however, the standard deviations were distributed more widely than at steps 1, 4, and 5. We performed the Brunner-Munzel test, which is suitable for small sample sizes [31], to compare the standard deviations at steps 2 and 5; that is, we compared the set of standard deviations right after the participants first recognized the other members' answers (step 2) with the set after the participants had recognized the others' answers four times (step 5). The test indicated a statistically significant difference between the two steps (Fig. 14).
Change of the standard deviation of the number of dots at each step in the human-robot groups. The difference in standard deviations between steps 2 and 5 is statistically significant (Brunner-Munzel test).
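For reference, the Brunner-Munzel test is available in SciPy; the sketch below shows how such a comparison could be run, with placeholder data standing in for the measured standard deviations:

```python
from scipy.stats import brunnermunzel

# Illustrative placeholders only -- not the measured values from the 14 groups.
sd_step2 = [14.2, 9.8, 20.1, 7.5, 16.3, 11.0, 18.4, 8.9, 13.7, 10.2, 19.5, 12.1, 15.8, 9.1]
sd_step5 = [4.1, 3.8, 5.0, 2.9, 4.6, 3.2, 5.3, 2.5, 4.0, 3.5, 5.1, 3.0, 4.4, 2.8]

stat, p = brunnermunzel(sd_step2, sd_step5)
print(f"Brunner-Munzel statistic = {stat:.3f}, p = {p:.4f}")
```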
Fig. 15 shows the chronological change in each participant's answers when answering the quiz alone and when answering in a human-robot group; the left and right panels of Fig. 15 show cases in which the change was large and small, respectively. Based on these results, Fig. 16 depicts the absolute variation in the number of dots between steps. "Human" and "Human–robot group" in the legend indicate the variation of dots under the two answering conditions.
Chronological change of the answers of human participants when answering in a group and when answering alone.
Comparison of the absolute variation of dots between the human-robot group and human-alone conditions. Only the absolute variation from step 1 to step 2 shows a statistically significant difference.
Table 3 presents the statistical test results for the comparisons shown in Fig. 16.
Furthermore, the results of the questionnaire about the participants' reluctance to answer the quiz with the others' opinions instead of their own are shown in Fig. 17. In addition, Table 4 presents the corresponding statistical test results.
Results of the questionnaire about participants' reluctance to answer based on the others' opinions instead of their own. For each group member, the difference between the step-1 and step-5 answers is statistically significant.
These results make it clear that the human participants' answers and opinions changed in the human-robot groups over the five consecutive rounds of the same quiz. Moreover, the questionnaire results show that participants generally felt more reluctant to agree with the others' opinions at step 1 than at step 5.
F. Discussion
The answers and opinions of the human participants in the human-robot groups clearly changed as they answered the same quiz five times in a row. Moreover, we found that participants generally felt more reluctant to agree with the others' opinions at step 1 than at step 5.
As shown in Figs. 13 and 14, the human participants were affected by the robots' opinions, as reflected in the similarity of their answers, and generated group norms with the two robots. Because the quiz had no clear answer, these participants presumably trusted the robots' answers for lack of their own criterion and of confidence in answering. As illustrated in Experiment 1 of Fig. 13, the answers in that group did not converge between the first and final steps. In contrast, Experiment 12 in Fig. 13 indicates that the human participant sharply changed his/her answer from step 1 to step 2. The similarity of the answers within a group therefore appears to depend on whether the participant has his/her own criterion for answering the quiz.
In addition, even though the variation between steps 1 and 2 was large, as shown in Fig. 16, the human participants clearly decreased the variation of dots after step 2. In a quiz with unclear, vague answers, participants who lacked confidence in their answers tended to change them under the social influence of the robots in the human-robot groups. The participants then kept the number of dots nearly constant from step 2 to 3, step 3 to 4, and step 4 to 5, plausibly because they became aware that the robots were attempting to maintain group norms. The answers of the human participants therefore appear to have been affected by those of the robots, which took the group norms into account, and the participants formed group norms with the robots.
However, the numerical value of Cohen's d should also be taken into account when judging the strength of this effect.
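For reference, a minimal sketch of one common way to compute Cohen's d (the pooled-standard-deviation convention; the paper does not state which variant was used, so this is an assumption):

```python
import numpy as np

def cohens_d(x, y) -> float:
    """Cohen's d with a pooled standard deviation (one common convention;
    the paper does not state which variant was used)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return float((x.mean() - y.mean()) / np.sqrt(pooled_var))
```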
Moreover, Fig. 17 shows that the group norms changed the opinions of the human participants: they felt that answers similar to the group norm were more appropriate than answers that deviated from it. Comparing the participants' own answers at steps 1 and 5 (No. 1 and No. 2 in Fig. 17), the participants preferred their step-5 answer as a group answer, even though they had decided on their step-1 answer by themselves. This means they thought they should answer the quiz while considering the group norm; the group norm thus gave them a "right" answer to a quiz that had no correct answers. Likewise, comparing the robots' answers at steps 1 and 5 (the pair No. 3 and No. 5 and the pair No. 4 and No. 6 in Fig. 17), the participants accepted the answers that each robot gave while it was learning the group norm. This also indicates that the humans gradually accepted the group norms, even though they were initially surprised by the differences among the answers and changed their own. Therefore, although the humans generated the group norms together with the two robots, the norms socially affected the participants' opinions.
Conclusion
This study investigated whether robots affect the behavior of a human participant who joins a group comprising two robots that consider a group norm. Using such norm-aware robots in the group experiment, we focused on the change in the humans' behavior and on which opinions the group members judged appropriate in the human-robot group. Two earlier studies showed human conformity and social influence from robots that did not change their behaviors [10], [11]; it therefore remained unclear whether robots that do change their behaviors affect humans socially.
The human participants answered a quiz with unclear answers. Based on the results, we compared the answers the participants gave in human-robot groups with the answers they gave by themselves. The variation between steps clearly decreased over time, even though the participants' answers changed considerably from the first step to the second. This suggests that the participants kept the number of dots nearly constant in the group because they were initially confused by the diversity of answers and then became aware that the robots were attempting to maintain group norms. The human participants therefore also made their decisions while considering group norms, as the robots did.
In addition, we investigated what kind of answer the human participants accepted as appropriate in the group they belonged to. The participants clearly judged the appropriateness of answers on the basis of the group norm they shared with the robots; the group norm gave them a criterion for answering a quiz that originally had no correct answer. Thus, we conclude that the opinions of the human participants were affected by the robots that considered group norms in the human-robot group.
In future work, we will investigate social influence in a practical scenario. The quiz scenario prepared for this investigation is not a common situation in human-robot interaction; we therefore need to examine whether group norms in human-robot groups affect human decision-making in a realistic scenario.