
Social Influence of Group Norms Developed by Human-Robot Groups


Two robots and a laptop used in an experimental scenario. We compared the behaviors of human participants who experienced the scenario alone and with the robots.


Abstract:

Several studies have examined how social robots respond, gesture, and display emotion in human-robot interactions. In particular, the sociality of robots implies that robots not only exhibit human-like behaviors but also tend to adapt to a group of individuals. For robots to exhibit sociality, they need to adapt to group norms without being told by the group members how to behave. In this study, we investigated the effect of group norms on human decision-making in human-robot groups comprising one human participant and two robots that used our proposed robotic model. We conducted quizzes with unclear and vague answers, taken by the robots and a human participant. We assessed this influence by having the participant and the two robots repeat a set of actions: answering the same quiz and recognizing each group member's answer. Additionally, we evaluated the extent to which the group norms changed the humans' opinions using a questionnaire, analyzing the questionnaire results and the chronological change in the answers to the repeated quiz. The quiz results showed that the human participants changed their answers after they discovered the robots' answers for the first time, owing to social influence from the robots; we assume that the participants were confused by the diversity of answers in the group, became aware that the robots considered the group norm, and consequently adjusted their answers to the group norm. Moreover, the questionnaire results revealed that the group norms gave the human participants a "right" answer to a quiz that has no correct answers. Therefore, we conclude that robots' attempts to comply with a group norm affect human decision-making.
Published in: IEEE Access (Volume: 8)
Page(s): 56081 - 56091
Date of Publication: 20 March 2020
Electronic ISSN: 2169-3536

CC BY: This work is licensed under a Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/). IEEE is not the copyright holder of this material.
SECTION I.

Introduction

Nowadays, as the global population increases, robot technology is expected to be involved in daily applications in households, offices, and public places. In modern society, humans will live with robots in human-robot communities. In geriatric care, for instance, elderly people can maintain their health by exercising with human-friendly robots [1]. However, for robots to participate naturally in the human world, they must behave like humans and adjust to multiparty situations with sociality to eliminate perceived anxiety about their behaviors. Therefore, several studies have focused on the social behaviors of robots, such as how robots respond, gesture, and display emotion [2]–[5].

Furthermore, social robot applications should consider human characteristics to maintain the unwritten and unspoken rules in human groups. Human society comprises numerous groups, and each individual belongs to several of them. When an individual behaves in one of the groups to which he/she belongs, he/she perceives the specific rules of that group. The adjustment of humans' behaviors to the rules they perceive maintains groups, communities, and the world at large. Group norms are the informal rules adopted by groups to regulate their members' behaviors [6]. Social interactions among group members are effective when group norms are shared; consequently, group members can easily expect orderly behavior from other members [6]. Hence, humans live in harmony with their families and with strangers because group norms are maintained [7]. In our previous study, we proposed a robotic model for learning group norms in human-robot groups and revealed that group norms emerged in groups comprising two human participants and a robot [8], [9].

Similarly, recent studies have shown the social influence of robots on humans in human-robot interactions [10], [11]. It is important to investigate the influence of robots on humans before robotic interactions are widely accepted. Brandstetter et al. and Salomons et al. conducted human-robot experiments based on the Asch conformity experiments [10]–[12]. The results of experiments in groups comprising several robots and a single human participant showed that some participants conformed to the robots' opinions even when those opinions were clearly wrong in the experimental scenario. Although human conformity in human-robot groups has been investigated, few studies have focused on the influence of robots that adjust their behaviors to group norms. In these experiments, the robots did not change their opinions or behaviors to exert social pressure on humans; thus, it remains unclear whether robots that adjust their behaviors to group norms have a social effect on humans. In addition, our previous studies did not show whether social influence on humans occurred in the groups, and it was difficult to precisely determine whether a human's behavior was influenced by the robot or by the other human in the group.

In this study, we investigated the influence of group norms on human decision-making in groups comprising a single human participant and two robots. We observed that while the participant and robots responded to quizzes with unclear and vague answers, the participant chronologically changed his/her answers. The vague quizzes confused participants and made them respond without confidence. Although individuals have different decision-making criteria for answering such a quiz, the criteria converge into a common criterion when a group is formed. By comparing answers given in human-robot groups with answers given without group members, we observed the chronological changes in the participants' answers. In addition, in the final stage of the experiment, the participants responded to a questionnaire about the quiz to investigate whether the change in their opinions was caused by social influence. We thereby verified the social influence of robots in a human-robot group scenario by observing human behaviors in the quiz scenario and analyzing the questionnaire results.

SECTION II.

Related Studies

To enable a robotic system to identify a suitable norm in a group consisting of human members, we applied machine learning and knowledge of human characteristics in our proposed model. In this section, we present related studies conducted on individual differences, group norms, social robotics, and machine learning.

Humans have varied criteria for decision-making [13], [14] and respond differently to one another under the same conditions. Moreover, individuals in a group are subject to social influence, which is defined as a change in the thoughts, feelings, attitudes, or behaviors of a person that results from interaction with another person or group [15]. Typically, people imitate and conform to others' behaviors when they do not know how to act in an unfamiliar situation. "Conformity refers to the act of changing one's behavior to match the responses of others" [16]. Humans constantly respond to the behavior of others.

In a study conducted by Sherif et al., participants in a group attempted to answer vague questions in a quiz. The experiment showed that a group norm formed in a human group over several interactions [17]. It was assumed that the influence of each participant in the group led the other participants to imitate their answers [18], thereby forming a group norm about the quiz. Other studies have likewise demonstrated the social influence of group norms in human groups [19]–[21]. In particular, Asch et al. conducted an experiment to investigate social pressure from a majority group [12]: they examined whether naive participants would conform to a majority that behaved in a clearly incorrect manner.

Furthermore, several studies have investigated social influence in human-robot groups through Asch-based experiments. In a human-robot group, humans must maintain a cordial relationship with robots without over-trusting them or being subjected to excessive social pressure during mutual interactions. Robinette et al. reported that some individuals over-trusted a robot [22], and Salomons et al. demonstrated that some individuals changed their opinions because of the social pressure caused by the presence of robots [10]. Meanwhile, Brandstetter et al. observed conformity in human-robot groups in some scenarios [11]. Williams et al. and Vollmer et al. reported that some children changed their opinions or behaviors as a result of robotic behavior [23], [24]. However, Beckner et al. showed that humans did not conform to humanoid robots in linguistic imitation tasks [25]. Therefore, human conformity to robots and social influence from robots on humans depend on the given situation. In these experiments, however, the robots did not change their own opinions and behaviors to exert social pressure on humans. Hence, it is unclear whether robots that change their behaviors affect humans socially.

In our study, the robotic system learns through reinforcement learning, a framework for learning suitable behaviors without training data [26]. Reinforcement learning is also used in robotics [27], [28]. Furthermore, the robotic system must adjust itself to a group without prior learning, because it lacks prior information about the personalities of the group members before the group is formed. Humans likewise adjust themselves to a group after it is formed; consequently, the system must adapt to the group by interacting with the group members. Interactive evolutionary computation (IEC) is applied to solve problems without prior learning, relying solely on interactions with a user [29], [30]. The size of the search space is a limitation of IEC, because the user's fatigue arising from interactions with the IEC system must be considered. Since human group members form group norms in only a few interactions, our experiment similarly faces the limitations of search-space size and a limited number of interactions. In this study, the vague quizzes contained multiple options from which the system could select its answer. Hence, the robotic system had to learn group norms solely from interactions with the group members.

SECTION III.

Group Norm Model

In this study, we conducted experiments that included two robots applying the group norm model (GNM). Fig. 1 depicts a diagram of the GNM. A robot using the GNM in a group selects the most valuable behavior from its set of available behaviors and learns the values of behaviors by observing the behaviors of the group members. The model represents a group norm as a value function $V(s)$: the input of the function is a behavior the robot can execute, and its output is the value of executing that behavior in the group. The robot optimizes the function toward the group norm by renewing the function while observing the behaviors of the group members. Therefore, the robot can learn the group norm shared in a group and behave socially within the group.

FIGURE 1. Diagram of the group norm model.

Our model comprises action $a$, state $s$, value function $V(s)$, reward function $R(s)$, and Q value $Q(s,a)$. The value function, Q values, and reward function are involved in learning through an interactive machine-learning approach. The proposed model has a reinforcement-learning environment, as illustrated in Fig. 2. The robot has a set of behaviors it can execute, and an agent explores the reinforcement-learning environment to decide which behavior the robot should perform.

FIGURE 2. A reinforcement-learning environment.

FIGURE 3. Initial input screen of the quiz application. The English sentence represents the quiz question, originally written in Japanese. Kanari is a basic Japanese word meaning "considerably" in English.

FIGURE 4. Input screen of the quiz application after an answer has been provided. The English sentence represents the quiz question, originally written in Japanese. Kanari, a basic Japanese word, means "considerably" in English.

The environment contains $N + 1$ states and two actions, as depicted in Fig. 2. The $n$th state, $s_{n}$, denotes the behavior criterion that the robot generates at time $n$, and $N$ denotes the maximum number of states. The actions are denoted $a_{dcs}$ and $a_{next}$: $a_{dcs}$ indicates that the robot adopts the current criterion as one that may be suitable for the group, while $a_{next}$ moves the agent from the present state to the subsequent state. When $a_{next}$ is executed, the agent has judged that the present state is not a suitable criterion.

In addition, the value function, Q values, and rewards are involved in learning. The mechanism places a high value on the robot's adjustment to a human-robot group. The value function $V(s_{n})$ returns the value of $s_{n}$ as a group criterion, while the Q value $Q(s,a)$ denotes the value of a combination of a certain state and a certain action. When the robot adjusts its behaviors to a group, it searches the space of states and selects an appropriate behavior, obtaining the values of behaviors from the value function; rewards are utilized to renew the value function. Until the robot decides on a criterion using the proposed model, the robot system executes $a_{next}$ and $a_{dcs}$ while moving from the present state to subsequent states in the environment, as shown in Fig. 2. Whenever the agent moves to the next state, it must select either $a_{next}$ or $a_{dcs}$ in that state. The value of each decision is the Q value, which is derived from the value function. Equation (1) is applied to renew the value function at the $i$th step, where $\gamma$ denotes the weighting factor, $\alpha$ denotes the learning rate, and $R^{i}(s_{n})$ denotes the reward at the $i$th step. The initial value function of each state is a random number.\begin{equation*} V(s_{n}) \leftarrow (1-\alpha)V(s_{n}) + \alpha \left({R^{i}(s_{n}) + \gamma \max_{s'} V(s')}\right)\tag{1}\end{equation*}


Equation (2) defines $R^{i}(s_{n})$ at the $i$th step, where $M$ denotes the number of members in the group joined by the robot (the $M$th member is the robot), $s_{k}$ denotes the behavior exhibited by each group member at the $i$th step, and $\sigma^{2}$ determines the sharpness of $R^{i}(s_{n})$.\begin{equation*} R^{i}(s_{n}) = \sum_{k=1}^{M-1} \exp\left({-\frac{(s_{n}-s_{k})^{2}}{2\sigma^{2}}}\right)\tag{2}\end{equation*}

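As a concrete illustration, equations (1) and (2) could be implemented as in the following Python sketch. The function names, the NumPy array representation of $V$, and all parameter values are our own illustrative assumptions, not taken from the paper.

import numpy as np

def reward(n, observed, sigma):
    # Eq. (2): a sum of Gaussian bumps, one per observed group-member
    # behavior s_k; largest when state s_n matches the members' behaviors.
    return sum(np.exp(-((n - k) ** 2) / (2 * sigma ** 2)) for k in observed)

def update_value(V, n, observed, alpha, gamma, sigma):
    # Eq. (1): blend the old value of s_n with the reward plus the
    # discounted maximum value over all states s'.
    V[n] = (1 - alpha) * V[n] + alpha * (reward(n, observed, sigma) + gamma * V.max())
    return V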

The Q value $Q(s,a)$ denotes the value of taking a particular action in a particular state; the agent selects the action with the higher value in its current state. Equations (3) and (4) are utilized for deriving Q values, with the variable conditions $m \in \{0, 1, 2, \cdots, N-1\}$, $n \in \{0, 1, 2, \cdots, N\}$, and $l \in \{0, 1, 2, \cdots, N\}$.\begin{align*} Q(s_{m},a_{next})=&V(s_{m+1})\tag{3}\\ Q(s_{l},a_{dcs})=&\frac{1}{V(s_{l}) - \max_{n} V(s_{n})}\tag{4}\end{align*}

Equation (4) has two cases depending on the value of $V(s_{l})$, as equation (5) shows. When $V(s_{l})=\max_{n} V(s_{n})$, the robot determines that $s_{l}$ is an appropriate group norm.\begin{equation*} \begin{cases} Q(s_{l},a_{dcs})\rightarrow \infty &\quad (\text{if } V(s_{l})=\max_{n} V(s_{n})) \\ Q(s_{l},a_{dcs})< 0 &\quad ({\text {otherwise}}) \end{cases}\tag{5}\end{equation*}

Based on the Q value of executing an action in a certain state, the agent moves through the environment by executing $a_{next}$ or $a_{dcs}$. Because it is difficult for robots to generate arbitrary criteria, in this study a limited experimental scenario provided the robot with a fixed set of states.
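Under the same illustrative assumptions, the agent's walk through the states can be sketched as follows. Instead of evaluating the unbounded Q value of equation (5) directly, the sketch uses the equivalent test of whether $V(s_{l})$ attains the maximum; since the values are non-negative in this construction, $a_{next}$ always beats a negative $Q(s_{l}, a_{dcs})$ at every other state.

def choose_behavior(V):
    # Walk s_0, s_1, ..., choosing between a_next and a_dcs by Q value:
    #   Q(s_m, a_next) = V(s_{m+1})                   (Eq. 3)
    #   Q(s_l, a_dcs)  = 1 / (V(s_l) - max_n V(s_n))  (Eq. 4)
    # Per Eq. (5), Q(s_l, a_dcs) diverges to +infinity exactly at the
    # maximizing state and is negative elsewhere, so the agent commits
    # (a_dcs) only where V attains its maximum and moves on (a_next)
    # everywhere else.
    v_max = V.max()
    for n in range(len(V)):
        if V[n] == v_max:
            return n  # a_dcs: adopt s_n as the behavior criterion
    return len(V) - 1  # unreachable in practice: the maximum is always found

In the quiz scenario described in Section IV, the returned index corresponds to the number of dots the robot submits as its answer.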

SECTION IV.

Experiment

Based on the approach used by Fuse et al. [9], we conducted a quiz of dots in this study to observe the social influence of robots on humans in human-robot groups. In the experiment, we used two RoBoHoNs, manufactured by SHARP Corporation, as shown in Fig. 5. We also investigated the participants' impressions of the robots in a group. The participants were 14 university students, all of whom were native Japanese speakers.

FIGURE 5. Two RoBoHoNs and a laptop used in the experimental scenario.

A. Quiz of Dots

The participants completed quizzes about descriptive terms for dot quantities. The input screen on a laptop and an example of an answer provided by a participant are shown in Figs. 3 and 4: the initial input screen is shown in Fig. 3, while Fig. 4 depicts the input screen after a participant has provided an answer. The English sentence represents the quiz question, originally written in Japanese. Kanari is a basic Japanese word meaning "considerably" in English, as shown in Table 1.

TABLE 1. Two Descriptive Scales in Japanese and English.

The application contained the question and three buttons labeled "ADD," "DELETE," and "ANSWER," located beneath a white box. A black dot appeared at a random location in the white box each time the participant clicked ADD, and one dot disappeared each time the participant clicked DELETE. The number of dots constituted the participant's answer to each question. The quiz allowed a maximum of 100 dots, although participants were not aware of this limit. The quiz instructed participants to "continue pushing ADD or DELETE until they see an {X} large number of dots in their opinion," where {X} was substituted with the English translation of one of the descriptive terms (A or B) listed in Table 1.

This quiz required each participant to express his/her interpretation of the descriptive term. The participant clicked ADD or DELETE repeatedly until he/she was satisfied with the answer and then clicked ANSWER.
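As a rough sketch of the input logic described above: the class and constant names below are hypothetical, and the 100-dot cap models the hidden limit mentioned in the text.

import random

MAX_DOTS = 100  # hidden cap: participants were not aware of this limit

class DotQuiz:
    # Minimal model of the quiz app with ADD, DELETE, and ANSWER buttons.
    def __init__(self):
        self.dots = []  # (x, y) positions of dots in the white box

    def add(self):
        # ADD: one dot appears at a random location, up to the hidden cap.
        if len(self.dots) < MAX_DOTS:
            self.dots.append((random.random(), random.random()))

    def delete(self):
        # DELETE: one dot disappears, if any remain.
        if self.dots:
            self.dots.pop()

    def answer(self):
        # ANSWER: the participant's answer is the current dot count.
        return len(self.dots)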

B. Flow of Experiment

We asked each participant to answer the same quiz of dots five times. We prepared two experimental conditions:

  • Answering the quiz alone.

  • Answering the quiz in a human-robot group, followed by a questionnaire.

After performing the experiments under the two conditions, we compared the results to investigate social influence. Furthermore, we calculated the change in the participants' answers across repetitions of the same quiz. Moreover, we analyzed the results of the questionnaire on the participants' impressions of images of black dots; based on the questionnaire, we investigated whether the participants valued the group norm that emerged in a human-robot group. Additionally, before the experiment, each participant answered some test quizzes to familiarize himself/herself with the quiz application.

The flowchart of the experiment is shown in Fig. 8. A single step, which was repeated five times, consisted of "Answering" and "Recognizing." In the "Answering" phase, a participant answered the quiz in front of a laptop displaying the app shown in Fig. 3. In the "Recognizing" phase, a participant viewed the answer(s) shown in Fig. 6 or 7: a participant who answered the quiz alone recognized his/her own answer on the display shown in Fig. 6, while a participant who answered in a human-robot group recognized each group member's answer on the display shown in Fig. 7. In the latter case, the quiz host placed the app display in front of the group. The participants could identify each robot's answer because the two robots were named TARO and JIRO.

FIGURE 6. The app used to recognize one's own answer when answering alone.

FIGURE 7. The app used to recognize answers in a human-robot group. It shows the images of dots that the two robots (TARO and JIRO) and the human participant provided in the answering phase.

FIGURE 8. Flowchart of the experiment.

The experimental environment in a human-robot group is shown in Fig. 9. In the "Answering Area," the robots and the participant answered the quiz in the following order: TARO, JIRO, and then the participant. The quiz host brought each robot to the front of the laptop when it was that robot's turn to answer. In the "Waiting Area," each group member waited for his/her turn to answer. Afterwards, the group members recognized each other's answers by watching the display shown in Fig. 7.

FIGURE 9. Experimental environment in a human-robot group.

Each robot answered the quiz by treating the number of dots in the white box as state $s$, the ADD operation as $a_{next}$, and the ANSWER operation as $a_{dcs}$. After each member answered, the three group members saw one another's answers. Simultaneously, each RoBoHoN observed the answers of the other two members and learned from them: when the numbers of dots in those answers were $k_{1}$ and $k_{2}$, the RoBoHoN's system recognized them as $s_{k_{1}}$ and $s_{k_{2}}$ and renewed its values using Equations (1)–(4). This procedure was repeated five times.
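To make the per-step learning concrete, the following hypothetical snippet reuses update_value from the sketch in Section III: after the recognition phase, the robot sweeps over all states and refreshes each value using the two answers it observed (the parameter values are again illustrative).

def recognize_answers(V, k1, k2, alpha=0.1, gamma=0.9, sigma=5.0):
    # k1, k2: dot counts observed from the other two group members.
    # States are updated in place, one sweep per experimental step.
    for n in range(len(V)):
        V = update_value(V, n, [k1, k2], alpha, gamma, sigma)
    return V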

C. Experimental Condition

The conditions of this experiment are presented in Table 2. Fourteen participants answered the same quiz five times both with no group members and in a human-robot group. The descriptive terms in the quiz answered alone and in the group were B and A, respectively, as listed in Table 1. Moreover, equation (6) gives the initial value functions of the two robots:\begin{equation*} V(s_{n}) = rand[0, 0.1] + \exp\left({-\frac{(s_{n}-s_{P}\pm 20)^{2}}{2\sigma^{2}}}\right)\tag{6}\end{equation*}

The state $s_{P}$ denotes the participant's number of dots. To investigate social influence, the robots' answers at step 1 were set to the participant's number of dots ±20. If a robot's answer at step 1 were similar to the participant's answer, the participant would not need to form a group norm. Therefore, to ensure that the participant engaged in the process of forming a group norm, we initialized the value functions in this way before the experiments.
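A sketch of this initialization follows, assuming (as the surrounding text implies) that the exponent in equation (6) is negative so that the Gaussian term peaks about 20 dots away from the participant's count; the value of sigma and the example value of s_p are our own placeholders.

import numpy as np

def init_value_function(n_states, s_p, offset, sigma=5.0):
    # Eq. (6): uniform noise on [0, 0.1] plus a Gaussian bump centered
    # at s_p + offset, so the robot's first answer lands near that count.
    s = np.arange(n_states + 1)
    return (np.random.uniform(0.0, 0.1, size=s.shape)
            + np.exp(-((s - (s_p + offset)) ** 2) / (2 * sigma ** 2)))

V_taro = init_value_function(100, s_p=40, offset=+20)  # s_p = 40 is a made-up example
V_jiro = init_value_function(100, s_p=40, offset=-20)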

TABLE 2. Experimental Conditions.

D. Questionnaire

Each participant answered a questionnaire between the answering and recognizing phases of step 5 in the human-robot group. Figs. 10 and 12 show the displays of the questionnaire app: Fig. 10 displays six images of dots (an example is shown in Fig. 11), while Fig. 12 shows the slide bars used to answer the questionnaire. After giving the step-5 answer to the quiz of dots, the participant rated how reluctant he/she would have been to give the image shown in ②, ③, ④, ⑤, or ⑥ of Fig. 10 as the step-5 answer instead of his/her own answer, shown in ①.

FIGURE 10. Display 1 of the questionnaire app. Each number from ① to ⑥ corresponds to an image of dots, as shown in Fig. 11.

FIGURE 11. Example of display 1.

FIGURE 12. Display 2 of the questionnaire app, used to fill in the questionnaire while checking display 1 (Fig. 10). The English sentence represents the questionnaire item, originally written in Japanese.

Six images of dots are shown in Fig. 10. The images ①, ②, ③, ④, ⑤, and ⑥ were the participant's answer at step 5, the participant's answer at step 1, robot1's answer at step 1, robot1's answer at step 5, robot2's answer at step 1, and robot2's answer at step 5, respectively. The participant thus saw the answer he/she provided at step 5 alongside five other answers, without being informed which of ②–⑥ had been provided by him/her or by the robots. The participant answered the questionnaire using display 2, shown in Fig. 12, while comparing ②–⑥ with ①.

In Fig. 12, the location of each slide bar indicated the degree of reluctance, from 0 to 100. The participant selected a numerical value by moving a knob along a scale. We used slide bars because they allowed participants to express their reluctance toward the five images more finely than radio buttons, a typical questionnaire approach such as a Likert scale. Each participant answered the questionnaire while examining the six images and comparing his/her step-5 answer (①) with the other images. After answering the questionnaire, the participant moved to the waiting area and waited for the two robots to answer the quiz of dots. Finally, the participant and the two robots completed the recognition phase, and the experiment ended.

E. Results

In this experiment, we investigated the change in the group members' answers and the human participants' impressions of the other members' opinions about the quiz. The participants answered the quiz using descriptive scale B ("comparatively") when answering alone, and descriptive scale A ("considerably") when answering in a human-robot group, as shown in Table 1.

Fig. 13 shows the chronological change in each member's answer in the 14 human-robot groups, which included the 14 university students. The vertical axis indicates the number of dots, and the horizontal axis denotes the step number. Each graph has three broken lines, which represent the answers of the three group members. All participants answered the quiz five times concerning Kanari ("considerably"). Although the numbers of dots differed across the answers at the first step, Fig. 13 shows that they generally converged toward one another. Not every participant's answer became similar to the robots' answers: the participants in Experiments 2, 4, 5, 7, 9, 11, 12, 13, and 14 appear to mimic one of the robots' previous answers at the second step, whereas the participants in Experiments 1 and 3 clearly did not adjust to the robots' answers. Because the two robots using the GNM adjusted their answers, a group norm emerged in all of the human-robot groups.

FIGURE 13. Each group member's answers in the 14 human-robot groups. All participants answered the quiz concerning "considerably" in the human-robot groups.

Fig. 14 shows the change in the standard deviation of the number of dots at each step in the human-robot groups. At step 1, the standard deviations of the groups were similar to one another because the robots' initial answers were set to differ from the participant's initial answer by a fixed amount. The standard deviations at steps 4 and 5 were likewise similar. At steps 2 and 3, however, the standard deviations were distributed more widely than at steps 1, 4, and 5. Moreover, the Brunner-Munzel test, which is suitable for small sample sizes [31], was performed to compare the standard deviations at steps 2 and 5: we compared the set of standard deviations right after the participants recognized the other members' answers for the first time (step 2) with the set after the participants had recognized the others' answers four times (step 5). The p-value was $1.70 \times 10^{-7}$; hence, the standard deviation decreased significantly by step 5. Each group showed a trend of fluctuation followed by convergence over the five answers, indicating that group norms emerged in the human-robot groups.

FIGURE 14. Change of the standard deviation of the number of dots at each step in the human-robot groups. The difference between steps 2 and 5 is statistically significant ($p < 0.01$).

Fig. 15 shows the chronological change of each human participant's answer when answering the quiz alone and in a human-robot group; the left and right of Fig. 15 show large and small changes in the answers, respectively. Based on these results, Fig. 16 depicts the absolute variation in the number of dots between consecutive steps. "Human" and "Human-robot group" in the legend indicate the variation of dots under the two answering conditions.

FIGURE 15. Chronological change of the human participants' answers when answering in a group and when answering alone.

FIGURE 16. Comparison of the absolute variation of dots between the Human-robot group and Human conditions. Only the variation from step 1 to 2 is statistically significant ($p < 0.001$).

Table 3 presents the p-values and effect sizes evaluating the difference between the Human and Human-robot group results shown in Fig. 16. The Brunner-Munzel test, which is suitable for small sample sizes [31], was performed on the variations of dots in Fig. 16, and we evaluated the effect size of the difference between the Human and Human-robot group conditions using Cohen's $d$ [32]. The p-values and Cohen's $d$ demonstrated that the human-robot group showed a comparatively large difference in the variation of dots only from step 1 to step 2. Therefore, in the human-robot group, the variation in dots gradually decreased until it roughly matched the variation observed when answering alone. However, the value of Cohen's $d$ from step 4 to 5 was larger than from step 2 to 3 and from step 3 to 4.
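For reference, this kind of comparison can be computed with SciPy's brunnermunzel together with a standard pooled-standard-deviation Cohen's d; the arrays below are made-up placeholders, not the study's data.

import numpy as np
from scipy.stats import brunnermunzel

def cohens_d(x, y):
    # Cohen's d with a pooled standard deviation [32].
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

# Hypothetical absolute variations of dots from step 1 to step 2:
alone = np.array([3, 5, 2, 8, 4, 6, 1, 7, 3, 5, 2, 4, 6, 3])
group = np.array([12, 25, 9, 30, 18, 22, 5, 27, 14, 20, 8, 16, 24, 11])

stat, p = brunnermunzel(alone, group)  # rank-based test, robust for small samples
print(f"Brunner-Munzel p = {p:.3g}, Cohen's d = {cohens_d(group, alone):.2f}")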

TABLE 3. Results of the Brunner–Munzel Test and Cohen's $d$ for the Variation of Dots, Comparing the Human-Robot Group and Human Conditions.
Furthermore, the results of the questionnaire about the participants' reluctance to answer the quiz based on others' opinions instead of their own are shown in Fig. 17. In addition, Table 4 presents the p-values and effect sizes evaluating the difference in reluctance between robot1's, robot2's, and the participant's answers at steps 1 and 5, as shown in Fig. 17. The Brunner-Munzel test, suitable for small sample sizes [31], was performed on the reluctance scores shown in Fig. 17, and the effect size of the difference between steps 1 and 5 was evaluated using Cohen's $d$ [32]. The p-values and Cohen's $d$ indicated comparatively large differences in reluctance. Therefore, most participants felt more reluctant to answer based on the step-1 opinions than the step-5 opinions.

TABLE 4. Results of the Brunner–Munzel Test and Cohen's $d$ for the Questionnaire Results on the Participants' Reluctance.

FIGURE 17. Results of the questionnaire on the participants' reluctance to answer based on others' opinions instead of their own. For each group member, the difference between the step-1 and step-5 answers is statistically significant ($p < 0.01$).

These results make it clear that the answers and opinions of the human participants in the human-robot groups changed while they answered the same quiz five times in a row. Moreover, the questionnaire results show that participants generally felt more reluctant to agree with others' opinions at step 1 than at step 5.

F. Discussion

The answers and opinions of the human participants in the human-robot groups clearly changed as they answered the same quiz five times in a row. Moreover, the participants generally felt more reluctant to agree with others' opinions at step 1 than at step 5.

As shown in Figs. 13 and 14, the human participants were affected by the robots' opinions in terms of the similarity of answers and generated group norms with the two robots. In this quiz with unclear answers, we assume these participants trusted the robots' answers because they lacked their own criterion and the confidence to answer the quiz. As illustrated in Experiment 1 of Fig. 13, the answers in that group did not converge from the first step to the final step. In contrast, Experiment 12 in Fig. 13 indicates that the human participant sharply changed his/her answer from step 1 to step 2. Therefore, it appears that the similarity of the answers in a group depends on whether the participant already has his/her own criterion for answering the quiz.

In addition, even though the variation between steps 1 and 2 was large, as shown in Fig. 16, the human participants clearly decreased the variation of dots after step 2. In this quiz with unclear and vague answers, participants who answered without confidence tended to change their answers owing to the social influence of the robots in the human-robot groups. The participants kept a nearly constant number of dots from step 2 to 3, step 3 to 4, and step 4 to 5, presumably because they later became aware that the robots were attempting to maintain group norms. Therefore, we assume that the answers of the human participants were affected by those of the robots, which considered group norms, and that the participants formed group norms with the robots.

However, the value of Cohen's $d$ from step 4 to 5 tended to increase compared with step 2 to 3 and step 3 to 4. This might suggest that the human participants tried to express their individuality or personal opinions after they had changed their own answers and accepted the group norm as the criterion for answering the quiz. Because the quiz had unclear answers and no right answers, the group members initially had no criterion for answering it, which made it difficult for the participants to express their individuality or opinions through the quiz. After generating a group norm, however, they gained a criterion and became able to present their fifth answers as their own opinions. In other words, the human participants might have expressed their individuality through how much their answers deviated from the group norm. Their decision-making processes were thus based on the group norms shared with the robots.

Moreover, Fig. 17 shows that the group norms changed the opinions of the human participants: they felt that answers similar to the group norm were more appropriate than answers that deviated from it. In the comparison of the participants' own answers at steps 1 and 5 (No. 1 and No. 2 in Fig. 17), they preferred the step-5 answer as a group answer, although they had decided on the step-1 answer by themselves. This means that they thought they should answer the quiz while considering the group norm; the group norm thus gave them a "right" answer to a quiz that had no correct answers. Furthermore, in the comparison of the robots' answers at steps 1 and 5 (the pairs No. 3/No. 5 and No. 4/No. 6 in Fig. 17), the participants accepted the answers that each robot gave while learning the group norm. This also indicates that the participants gradually accepted the group norms, even though they were initially surprised by the differences among the answers and changed their own. Therefore, although the humans generated the group norms together with the two robots, the norms socially affected the participants' opinions.

SECTION V.

Conclusion

This study investigated whether robots affect the behaviors of human participants when a single participant joins a group with two robots that consider the group norm. Using such robots in the group experiment, we focused on the change in the humans' behaviors and on the perceived appropriateness of the group members' opinions in the human-robot group. Two prior studies showed human conformity and social influence from robots that did not change their behaviors [10], [11]; it therefore remained unclear whether robots that change their behaviors socially affect humans.

The human participants answered a quiz with unclear answers. Based on the results of the quiz, we compared the participants' answers in human-robot groups with the answers they gave by themselves. Although the participants' answers changed greatly from the first step to the second step, the variation between subsequent steps gradually decreased. These results suggest that the participants kept an almost constant number of dots in the group because they were initially confused by the diversity of answers and then became aware that the robots were attempting to maintain group norms. Therefore, the human participants also made decisions considering group norms, as the robots did.

In addition, we investigated what kind of answer the human participants accepted as appropriate in the group to which they belonged. The participants clearly judged the appropriateness of answers on the basis of the group norm they shared with the robots. The group norm thus gave them a criterion for answering a quiz that originally had no correct answers. We therefore conclude that the opinions of the human participants were affected by robots that consider group norms in a human-robot group.

In future work, we will investigate social influence in a practical scenario. The quiz scenario prepared for this investigation is not a common situation in human-robot interaction; therefore, we need to investigate whether group norms in human-robot groups affect human decision-making in realistic scenarios.

