
Configuring user information by considering trust threatening factors associated with automated vehicles

Abstract

Background

The accelerated development of automated driving technology has raised the expectation that commercially available automated vehicles will become increasingly ubiquitous. It has been claimed that automated vehicles are safer than conventional manual vehicles, leading to the expectation of fewer accidents. However, people expect not only better but also near-perfect machines. Because accidents involving automated vehicles do occur and are highlighted by the media, negative reactions toward automated vehicles have increased. For this reason, it is critical to research human–machine interaction to develop suitable levels of trust between human users and newly introduced automated vehicle systems.

Method

We start this study by defining user distrust toward automated vehicles in terms of four types of trust-threatening factors (TTFs) along with trust-threatening situations. Next, with 30 volunteer participants, we conduct a survey and a human-in-the-loop experiment involving riding in a simulated automated vehicle and experiencing 21 distrust scenarios.

Result

In terms of the information configuration type suitable for alleviating the TTFs, the participants generally preferred to receive information on external object recognition for all TTFs, with an average necessity level score of 24.2, which was 8.0 points higher on average than the scores of the other information configuration types. Among the information configuration methods (visual, auditory, and haptic), the haptic modality was the least preferred.

Conclusion

In this study, we focused on participants’ subjective responses; together with complementary quantitative studies, these results are expected to serve as a foundation for designing a user interface that can induce trust in automated vehicles among users.

1 Introduction

Automated vehicle systems assist humans with driving, and they are being actively developed with expected outcomes such as crash elimination, productivity improvement for users, and improved energy efficiency. The Society of Automotive Engineers (SAE) automation standard level 3 vehicles (partially automated, but the driver is necessary) are expected to be mass produced soon, and SAE automation standard level 4 vehicles (highly automated, with optional control by the driver) are in the pipeline [20]. These automated vehicles are expected to be safer than conventional human-driven vehicles. For instance, 94% of traffic accidents are caused by human driver error, such as driving under the influence, drowsy driving, and reckless driving. By contrast, automated vehicles can monitor the environment continuously, thus compensating for lapses in user attention. For this reason, they are expected to help reduce the frequency of traffic accidents [4]. However, people not only expect machines to perform better than humans but are also extremely sensitive to minor mistakes made by machines. It has been reported that people’s attitudes toward automated vehicles have been adversely affected after such instances [12], for example, an accident involving an Uber self-driving car and a pedestrian in Arizona in March 2018 [1]. Even though many accidents involving human-driven vehicles occur every day, they draw less attention than the few accidents involving automatically driven vehicles. In the early days of the mass production of automated vehicles, such instances may hurt vehicle marketability and negatively affect market formation [15]. Moreover, such instances generate negative awareness among users even before they utilize the system, and this negative awareness may amplify users’ anxiety about not driving the car themselves while utilizing the system.

In general, from the perspective of an automated system, building trust between the system and human users largely influences whether the latter are willing to utilize the system [10]. For instance, regardless of how good a system is, if users do not trust it and therefore do not use it, its full functional benefits cannot be exploited [21]. Therefore, trust can arguably be viewed as a key factor for enhancing users’ acceptability of automated systems [6]. Against this backdrop, it is necessary to research and design vehicle-interface-based automated driving information configurations to relieve users’ anxiety toward automated vehicles.

The main goals of this study are original in terms of (1) identifying distrust situations based on actual cases of automated vehicles driving on the road and (2) examining information configuration in automated vehicles by analyzing the trust-threatening factors (TTFs) identified from the experimental data. The findings of this study are expected to serve as a foundation for developers of automated vehicle interfaces, including European transportation research groups, to design mass-produced interface structures that enhance user trust.

2 Literature review

Before we describe our work, a few selected previous studies on user trust in automated vehicle systems are introduced as follows. Barber [3] defined trust as “the expectation of technically competent role performance” and Johns [11] as “the willingness to place oneself in a relationship.” Meyer [19] defined trust as “a behavioral state of risk.” Based on these studies, we discuss the definition of trust and distrust in Sect. 3.

Lee and See [16] analyzed the factors influencing the trust of drivers and highlighted the importance of establishing an appropriate level of trust. They defined the factors influencing trust based on purpose, process, and performance, and presented the following seven points that should be considered to develop a suitable level of trust in drivers: design for appropriate trust, not greater trust; show the past performance of an automation system; show the process and algorithms of an automation by revealing intermediate results, such that they are comprehensible to the operators; simplify the algorithms and operation of an automation to make them more understandable; show the purpose, design basis, and range of applications of an automation in a manner that relates to the goals of the users; train operators on the expected reliability of an automation, the mechanisms governing its behavior, and its intended use; and carefully evaluate any anthropomorphization of an automation, such as the use of speech to create a synthetic conversational partner, to ensure appropriate trust.

Ekman et al. [7] presented a framework for human–machine interaction (HMI) design to create appropriate trust in automated vehicle systems. They defined the user flow of an automated vehicle in 11 sections and identified 11 factors influencing user trust in an automated vehicle system in terms of HMI. The researchers then associated the factors influencing user trust with each of the sections and the causes underlying these factors.

Körber et al. [14] studied the influence of trust-promoting and trust-lowering introductory information on the reported trust and takeover characteristics of drivers. Forty participants were presented with three takeover situations during a 17-min highway drive in a conditionally automated vehicle. The results of their experimental study indicated that an individual’s trust level determines the extent to which a driver monitors the environment when performing non-driving-related tasks in an automated vehicle.

As noted above, most previous studies have addressed user trust in automated vehicles and information configuration methods (ICMs) conceptually, while only a few have defined user distrust situations based on experiences of using automated vehicles and have proposed specific solutions for information configuration.

3 Definitions of distrust, TTFs, and scenarios

To achieve the study goals, we provide an operational definition of distrust in the context of automated vehicles. Then, we present TTFs and information configuration types (ICTs) and describe the development of automated driving scenarios.

3.1 Definition of distrust in automated vehicles

As presented in the Introduction, trust between humans and automated systems has been defined in terms of the following three aspects: an attitude or expectation, a behavioral state of risk, and the intention or willingness to rely on someone/something [3, 11, 16, 19]. We combined these definitions to arrive at the following definition of trust in automated vehicle systems in our study: the attitude that an automated vehicle will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability. Previous studies have defined trust and distrust as the two ends of a single continuum [17]. In this sense, we define distrust as the attitude of a user in response to a situation in which an automated vehicle does not help achieve the user’s goals in a scenario characterized by uncertainty and vulnerability. Our definition thus treats trust and distrust as a continuum, which can be expressed in terms of degree rather than as a binary value.

User trust in automated vehicles can be considered in diverse ways from a user’s perspective [7]. For example, there can be an “automated mode” section of normal driving without system failure, a “control transition” section where a user takes over driving controls from an automated vehicle, and so on. In the broad spectrum of studies on user trust in automated vehicles, we limit the scope of this study to fulfilling the following two conditions: (1) automated vehicle system in operation and (2) automated vehicle system in normal status.

3.2 Rationale underlying TTFs

We propose our version of TTFs by benchmarking the work of Lee et al. [17] and Schoettle and Sivak [23]. Lee et al. [17] investigated six human subjects who developed negative awareness after riding in a high-level automated driving system, that is, an SAE automation standard level 4 vehicle. After the subjects rode in the vehicle over a predetermined 10-km section in the automated mode, their opinions were collected through a semi-structured interview. By coding their opinions and behaviors into data, the researchers divided automated vehicle distrust factors into eight categories in total: fiduciary irresponsibility, value incongruence, lack of information, unpredictability, machine-like, functional incompetence, out of control, and lack of confidence. Schoettle and Sivak [23] surveyed 11 concerns that induce worry among users about using automated systems and analyzed the differences among these concerns. The 11 concerns were safety consequences of system failure, user’s legal liability, system security, vehicle security, data privacy, interacting with non-self-driving vehicles, interacting with pedestrians and bicyclists, learning to use self-driving vehicles, system performance in poor weather, self-driving vehicles getting confused by unexpected situations, and self-driving vehicles not driving as well as human drivers in general. We examined whether these 19 factors (8 categories and 11 concerns) fulfill the conditions set in our definition of distrust, that is, factors occurring during the operation of automated vehicles and those occurring in normal status (e.g., no system problems). Table 1 summarizes the rationale underlying the suggested TTFs: (1) Lack of information (TTF1), (2) Out of control (TTF2), (3) Unpredictability (TTF3), and (4) Value incongruence (TTF4).

Table 1 Rationale for TTF suggestions based on Lee et al. [17], Schoettle and Sivak [23]

3.3 Proposal of information types to mitigate TTF

Automated vehicle systems at SAE automation standard level 3 or 4 are equipped with numerous sensors (e.g., camera, radar, lidar, inertial measurement units, and GPS) to detect external objects and to determine the path of the vehicle itself. These sensors collect diverse data during driving (e.g., types of external objects, distance, speed, and expected path of the car) [18]. From the user perspective, these data can be displayed to users in certain ways through a vehicle interface. However, displaying all of this information at once on a crowded display can cognitively overload users. To develop an appropriate level of trust, a suitable amount of selected information that does not cause distrust should be displayed [13]. To study this aspect, we have defined the following types of information that can be presented to users. We have attempted to categorize the data related to the driving status of the ego car (the automated vehicle) and its surroundings, obtained using the sensors installed in the automated vehicle, into simple concepts that users can understand. Because we aim to investigate the scenarios of distrust that occur during normal functioning of an automated vehicle system, we do not consider information on system status. We propose the following information types that can be obtained from an automated vehicle.

3.3.1 Recognition status of external objects (ICT1)

This type provides information about whether the automated vehicle system recognizes nearby people, animals, or other vehicles. It can be displayed as simple icons (e.g., circle, square) or specific icons (e.g., car form, human form). For instance, the front pedestrian braking system of General Motors’ Cadillac CT6 shows an icon of a human crossing a road on the head-up display [8].

3.3.2 Location of external objects (ICT2)

This type provides information about the relative distance between people, animals, or other vehicles and the ego car. The relative distance to an external object can be abstractly expressed (e.g., “There is an object nearby.”) or a specific figure can be provided (e.g., “Car accident 50 m ahead”). For instance, UX Studio presented an interface concept that expresses the relative distance to a pedestrian or bicycle on an automated vehicle’s path by using a red-colored line [5].

3.3.3 User vehicle acceleration/deceleration (ICT3)

This type provides information about the longitudinal acceleration and deceleration of an automated vehicle in which a user is riding. Vehicle speed information can be provided in real time or, in the event of abrupt deceleration, by using a visual icon or auditory sound. For example, the Audi A8 shows a red-colored icon between the car ahead and the user’s car through a cluster when quick deceleration is necessary while using the adaptive cruise control function [2].

3.3.4 User vehicle location and projected path (ICT4)

This type provides information about the location of an automated vehicle in the driving environment and its projected path. Information about relative location can be related to whether the car is driving well in the middle of the lane, whether it will keep driving straight, whether it will change lanes, and so on. Absolute location and path can be expressed as the vehicle location on a map, as in navigation, along with the projected driving path. For instance, to change lanes for overtaking, the user interface in modern Volvo vehicles informs the user about the overtaking plan first and then presents the lane change path on a cluster screen [25].

3.3.5 Status of external objects (ICT5)

This type provides information about the movement of nearby people, animals, or other cars. Once an automated vehicle system detects a change in an external object’s speed, it can inform the user about this event by means of an abstract expression (e.g., “stop,” “accelerate,” “decelerate”) or specific figures (e.g., 60 km/h). For example, Tesla autopilot 8.0 presents users with information about the movements of nearby cars through a cluster screen in real time [24].
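For concreteness, the five ICTs can be thought of as a small vocabulary that an interface prototype could switch on. The sketch below is purely illustrative; the class and member names are ours and are not part of the paper or any vehicle API.

```python
from enum import Enum

class ICT(Enum):
    """Information configuration types proposed in Sect. 3.3 (illustrative labels)."""
    EXTERNAL_OBJECT_RECOGNITION = 1  # ICT1: which nearby objects the system has recognized
    EXTERNAL_OBJECT_LOCATION = 2     # ICT2: relative distance to recognized objects
    EGO_ACCELERATION = 3             # ICT3: longitudinal acceleration/deceleration of the ego car
    EGO_LOCATION_AND_PATH = 4        # ICT4: ego car location and projected path
    EXTERNAL_OBJECT_STATUS = 5       # ICT5: movement (speed changes) of nearby objects
```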

3.4 Suggestions for distrust-evoking scenarios in automated driving

To design scenarios that evoke distrust in automated driving, we investigated several cases to identify the types of scenarios that induce user anxiety. Thirty-eight sets of automated driving disengagement reports submitted to the Department of Motor Vehicles in California between 2016 and 2018 were investigated to examine the data related to disengagement of automated driving on actual roads. Seven manufacturers who submitted disengagement reports (i.e., Bosch, Delphi, Google, Nissan, Mercedes-Benz, Volkswagen, and Telenav) specified numerical data and causes of takeover in the reports, which made it possible for us to carry out further analysis [27]. Of the disengagement cases described in these reports, we investigated around 700 cases that involved users’ subjective sense of anxiety that led to the resumption of manual driving. Of the collected cases, those without a clear reason for user anxiety were excluded and similar situations were combined, resulting in 19 sets of distrust situations. In addition, after discussions with HMI specialists, we added several scenarios (large-scale buses moving out of the lane center and congestion due to lane reduction), which are the main cases that induce anxiety in drivers because of the associated high accident rates on Korean roads. As a result, we identified a total of 21 scenarios that evoke distrust during automated driving: (1) Jaywalker, (2) Pedestrian near the curb, (3) Lane sharing with bicycles, (4) Parked vehicles on the shoulder, (5) Rapid acceleration, (6) Two changes in direction, (7) Too close to the vehicle in front, (8) Vehicle accident, (9) Vehicle cutting in, (10) Large-scale bus moving out of the lane center, (11) High traffic at a corner, (12) Incorrectly parked vehicle, (13) Vehicle exiting a parking lot, (14) U-turn too sharp, (15) Too close to the curb, (16) Too close to another vehicle when turning, (17) Change in direction, (18) Traffic signal turning yellow, (19) Lane recognition when entering a tunnel, (20) Too many pedestrians, and (21) Congestion when merging with a highway.

4 Method

4.1 Study goal and hypotheses

The study goals are two-fold: (1) analyze users’ distrust characteristics and the causes thereof in the event of a distrust scenario during normal operation of an automated vehicle system and (2) study how to configure the information presented on the vehicle interface based on the causes of user distrust. To implement these goals, we constructed and tested the following study hypotheses:

H1

Differences exist in the causes that make users feel that their trust has been threatened depending on the distrust scenario.

H2

Differences exist in the ICTs and ICMs desired by users in relation to the vehicle interface depending on TTFs.

H1 was tested to determine the reasons why users felt anxiety and to categorize the distrust situations according to each reason. Through H2, the ICTs and ICMs suitable for mitigating each cause of distrust were analyzed.

4.2 Independent and dependent variables

Based on the data obtained from the experimental scenarios, independent variables (IVs) and dependent variables (DVs) were set, as reported in Table 2, to verify the study hypotheses. Among the drivers’ characteristics, the age variable ranged from 20 to 60 years in 10-year increments; ten-year interval clustering for age is conventional in the Republic of Korea. With respect to ADAS (advanced driver-assistance system) experience, those who had used a driving assistance function in a mass-produced vehicle at a level equivalent to SAE automation standard level 1 or 2 were regarded as having ADAS experience. Driving experience was categorized into three groups (less than 3 years, between 3 and 10 years, and 10 years or more) for the data analysis. Drivers with less than three years of experience are categorized as novices in the Republic of Korea and usually pay extra insurance charges.
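As a minimal sketch of how these driver-characteristic IVs could be coded for analysis (the function names and group labels below are ours, chosen for illustration only):

```python
def age_group(age: int) -> str:
    """Cluster age into the 10-year bins conventional in the Republic of Korea (e.g., 34 -> '30s')."""
    return f"{(age // 10) * 10}s"

def driving_experience_group(years: float) -> str:
    """Categorize driving experience into the three groups used in the analysis."""
    if years < 3:
        return "novice (<3 years)"  # novice drivers usually pay extra insurance charges in Korea
    if years < 10:
        return "3-10 years"
    return "10 years or more"

# Example: a 34-year-old participant with 9.3 years of driving experience
print(age_group(34), driving_experience_group(9.3))  # -> 30s 3-10 years
```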

Table 2 Description of independent variables (IVs) and dependent variables (DVs)

The independent and dependent variables of H1 are “types of distrust scenarios” and “trust-threatening factors,” respectively. Based on the test results of H1, the 21 distrust events were categorized under the four TTFs. The independent and dependent variables of H2 are “types of trust-threatening factors” and “ICTs and ICMs,” respectively. Based on the results, user responses to multiple-choice-question-based surveys were analyzed to determine the necessity of interface configuration in relation to each distrust cause, information type, and ICM. The ICTs comprised the five types described in Sect. 3.3: (1) recognition status of external objects, (2) location of external objects, (3) status of external objects, (4) user vehicle acceleration/deceleration, and (5) user vehicle location and projected path. Furthermore, preferable ICMs were collected as visual (ICM1), auditory (ICM2), and haptic (ICM3) modalities [20].

4.3 Experimental procedure

The study participants were informed about the purpose of the experiment and the procedures, and they wore an experimental device before participating in the experiment. The participants performed a driving exercise for approximately 15 min before the main experiment to familiarize themselves with the simulator environment. The experiment used a within-subject design, and all scenarios were introduced in a random order. The participants provided their opinions about the 21 distrust scenarios, as shown in Fig. 1. The scenarios in Group A in Fig. 1 were created using a virtual driving simulator so that the participants could experience them in the actual simulation, including four scenarios on downtown roads and six on a highway. On the virtual downtown roads, the car was driven in the automated driving mode at approximately 40 km/h for 6 min on average, and the distrust events occurred at intervals of 2.5 min on average from the start of driving. The eight-lane downtown road had sidewalks on both sides, and the vehicle crossed 10 intersections and made four right turns when driving in the automated mode. On the eight-lane virtual highway, the car drove in the automated mode at approximately 90 km/h for 7 min on average, and the distrust events occurred at intervals of 3 min on average from the start of driving. The scenarios were applied in a random order to minimize order and learning effects.

Fig. 1 The identified scenarios based on the distrust event investigation
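Because the experiment used a within-subject design, each participant received the scenario sets in a random order. A minimal sketch of such per-participant randomization is given below; the seeding scheme is our illustration, not the authors’ implementation.

```python
import random

def scenario_order(participant_id: int, scenarios: list[str]) -> list[str]:
    """Return a participant-specific random presentation order to reduce order and learning effects."""
    rng = random.Random(participant_id)  # fixed seed per participant keeps the presentation log reproducible
    order = list(scenarios)
    rng.shuffle(order)
    return order

# e.g., scenario_order(7, ["Jaywalker", "Vehicle cutting in", "Rapid acceleration"])
```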

The participants entered the virtual driving simulator and observed the driving situations while the vehicle operated in the automated driving mode as an SAE automation standard level 4 vehicle. To minimize confounding variables, we did not provide information through a cluster, head-up display, or central fascia; instead, we informed the participants about whether the car was in the automated driving mode through icons. We did not restrict the participants’ behavior; for example, some of the participants resumed manual control by operating the accelerator or brake pedal if they thought it was inappropriate to let the automated vehicle drive. The participants assessed which type of information should be provided through the vehicle interface in each distrust case and provided their opinions on the distrust cases in the survey each time they completed a scenario set. The data were gathered from the survey after the implementation of all of the scenario sets. Six questionnaires with detailed descriptions and figures of the corresponding scenarios were presented. The first three questions asked the participants to rate the levels of unexpectedness, risk, and willingness on a seven-point Likert scale. The last three multiple-choice questions investigated the reasons for feeling distrust (if any), the information types for reducing or removing distrust, and the information modality types (visual, auditory, and haptic).

For the remaining 11 scenarios in Group B, the participants were provided with explanatory texts in Korean with visualization images due to our constraints, such as simulator capabilities and project duration. The same survey questionnaire was presented after the implementation of all of the 21 scenarios. Finally, the participants were interviewed to provide their general opinions about the interface of the automated vehicle before completion of the experiment. The total experiment duration was 150 min.

4.4 Participants

The experimental participants (volunteers) were recruited through a public announcement. Thirty subjects participated in the experiment: 19 men and 11 women. The participants’ ages ranged between 20 and 60 years (M = 34.7, SD = 8.2 years), and their average driving experience was 9.3 years (SD = 7.5). Around 40% of the participants had utilized at least one driving assistance system provided in a conventional mass-produced vehicle. The experiment was approved by the Institutional Review Board (IRB; KMU-201808-HR-183) and complied with IRB regulations.

4.5 Apparatus

As shown in Fig. 2, the virtual driving environment was established using a full-scale driving simulator and a three-channel image projector. The virtual automated vehicle system and distrust scenarios were realized using SCANeR 1.7 software.

Fig. 2 Examples of (a) a front and rear view image, (b) an interior cluster view, and (c) an exterior view of the virtual driving simulator

5 Results

The hypotheses stated in Sect. 4.1 were tested statistically by considering the characteristics of the IVs and DVs of each hypothesis [22].

5.1 User distrust scenario analyses (H1)

A Chi-squared test was performed to check whether there were any significant differences in the total aggregated response frequencies of the participants’ answers to the questions about the TTFs that they had considered for all 21 distrust scenarios. For instance, one subject responding to the Jaywalker scenario considered it a trust-threatening scenario for automated driving because of its unpredictability (TTF3), another considered it trust-threatening because of its out-of-control nature (TTF2) and its unpredictability (TTF3), whereas a third stated that there was no threat to trust. We tested whether there were any differences in user TTFs depending on the distrust scenarios. The results indicated that there were significant differences in the causes of TTFs across the 21 distrust scenarios (χ2 = 135.24, DOF = 60, p < 0.001). This finding indicates that each distrust scenario had different underlying causes (i.e., H1 was accepted).
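For reference, the frequency comparison reported above can be reproduced in form with a standard chi-squared test of independence; the sketch below uses random placeholder counts rather than the study’s data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 21 x 4 contingency table: rows = distrust scenarios, columns = response counts for TTF1-TTF4.
# The counts here are placeholders; the paper's actual table yielded chi2 = 135.24 with dof = 60.
counts = np.random.default_rng(0).integers(1, 15, size=(21, 4))

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # dof = (21 - 1) * (4 - 1) = 60
```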

Based on these results, we selected the cause with the maximum response frequency among the four TTFs of each distrust scenario as the major cause of the corresponding scenario. As a result, 11 scenarios were categorized under TTF1, four under TTF2, four under TTF3, and five under TTF4 (see Table 3).

Table 3 TTF-specific scenario classification based on the experimental data

5.2 User TTF-specific information configuration type and method (H2)

Inferential statistical analysis was conducted on the necessity means of the ICTs and ICMs by using the survey data provided by the participants in relation to the individual distrust scenarios corresponding to the four TTFs identified during the H1 verification (see Table 4 and Fig. 3). A Kruskal–Wallis H test (α = 0.05) was performed to verify whether there were any significant differences in information type and configuration method depending on user TTF, and a Bonferroni test was conducted for post hoc pairwise comparison. For the necessary information to be provided, the significance level was set to α = 0.005 (= 0.05/5C2) to check for differences in the necessity levels deemed by users for the five types of information in each TTF. For the configuration method, α = 0.017 (= 0.05/3C2) was applied to check for differences between the three ICMs based on the visual, auditory, and haptic modalities. A non-parametric test was conducted to determine the necessity levels of the five TTF-specific ICTs (i.e., external object recognition status, external object location, external object status, user vehicle acceleration/deceleration, and user vehicle location and path) and the necessity levels of the three ICMs. Significant differences were observed in all cases (i.e., H2 was accepted). Accordingly, a pairwise comparison test was conducted for each sub-variable, and based on the outcomes of these tests, the information types and configuration methods preferred by the participants were ranked for each TTF (see Table 4). Tables 5 and 6 present the results of the Kruskal–Wallis and Bonferroni post hoc tests and the associated significance levels.

Table 4 Results related to information configuration types and methods
Fig. 3 TTF-specific (a) necessity level means according to the provided information configuration types and (b) necessity level means according to the information configuration methods

Table 5 Statistical result of information configuration types according to TTFs
Table 6 Statistical result of information configuration methods according to TTFs
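The analysis above can be approximated with standard non-parametric routines. The sketch below pairs a Kruskal–Wallis H test with Bonferroni-corrected pairwise comparisons; the paper reports a Bonferroni post hoc test but does not specify the pairwise statistic, so the use of Mann–Whitney U tests here is our assumption.

```python
from itertools import combinations
from math import comb
from scipy.stats import kruskal, mannwhitneyu

def kruskal_with_bonferroni(groups: dict[str, list[float]], family_alpha: float = 0.05):
    """Kruskal-Wallis H test across groups, then pairwise tests at a Bonferroni-corrected alpha.

    For the five ICTs the corrected level is 0.05 / C(5,2) = 0.005; for the three ICMs
    it is 0.05 / C(3,2) ~= 0.017, matching the thresholds used in the paper.
    """
    h_stat, p_overall = kruskal(*groups.values())
    alpha = family_alpha / comb(len(groups), 2)
    pairwise = {}
    for a, b in combinations(groups, 2):
        _, p_pair = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        pairwise[(a, b)] = (p_pair, p_pair < alpha)  # (p-value, significant at corrected alpha?)
    return h_stat, p_overall, alpha, pairwise
```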

5.2.1 Lack of information: TTF1

In terms of ICTs (H = 341.909, DOF = 4, p < 0.001), ICT1 and ICT2 exhibited significant differences compared to ICT4 and ICT5 (p < 0.001). The information presented in ICT3 and ICT5 exhibited significant differences (p < 0.001), with necessity level scores higher by 3.3 points on average compared to the information presented in ICT4. All types of information, excluding user vehicle location and path, had the maximum values across the four TTFs. In terms of ICM (H = 240.456, DOF = 2, p < 0.001), the visual and auditory modalities exhibited significant differences compared to the haptic modality (p < 0.001), with 15.3-point higher necessity level scores on average. The necessity levels of ICM2 and ICM3 were the highest across the four TTFs.

5.2.2 Out of control: TTF2

In terms of ICTs (H = 96.133, DOF = 4, p < 0.001), the information presented in ICT1, ICT2, and ICT4 exhibited significant differences compared to the information presented in the other two ICTs (p < 0.001), and the necessity level scores were higher by 4.8 points on average. The necessity level of ICT4 was the lowest across the four TTFs. In terms of ICM (H = 76.418, DOF = 2, p < 0.001), ICM1 and ICM2 exhibited significant differences compared to ICM3 (p < 0.001), with necessity level scores higher by 16.1 points on average. The necessity level of ICM1 was the lowest across the four TTFs.

5.2.3 Unpredictability: TTF3

In terms of ICTs (H = 146.724, DOF = 4, p < 0.001), ICT1 was found to have significant differences, with necessity level scores higher by 10.2 points on average than those of the other four information types: p = 0.004 for ICT2 and p < 0.001 for ICT3, ICT4, and ICT5. The information presented in ICT2 “user vehicle location and expected path” exhibited significant differences (p < 0.001), with a configuration necessity score higher by 6.9 points on average compared to those of ICT3 and ICT5. The necessity level of ICT3 was the lowest across the four TTFs. In terms of the ICMs (H = 80.723, DOF = 2, p < 0.001), ICM1 and ICM2 exhibited significant differences (p < 0.001) compared to ICM3, with necessity level scores higher by 17.1 points on average. ICM1 had the highest necessity level across the four TTFs.

5.2.4 Value incongruence: TTF4

In terms of ICTs (H = 80.845, DOF = 4, p < 0.001), the information presented in ICT1 and ICT2 exhibited significant differences (p < 0.001), with necessity level scores higher by 5.2 points on average compared to those of the other three information types. All types of information, excluding ICT3, were found to have the smallest necessity levels among the four TTFs. In terms of the ICMs (H = 133.758, DOF = 2, p < 0.001), ICM1 exhibited significant differences compared to ICM2 and ICM3 (p < 0.001), with necessity level scores higher by 11.2 points on average. Moreover, ICM2 exhibited significant differences compared to ICM3 (p < 0.001), with necessity level scores higher by 15.2 points on average. The necessity levels of ICM2 and ICM3 were the lowest across the four TTFs.

6 Discussion

Descriptive and inferential statistical analyses were performed on the experimental data to numerically examine the ICTs and ICMs needed by users, according to their characteristics, to enhance user trust in automated vehicles and to mitigate the TTFs. The ICTs and ICMs that exhibited significant differences across individual TTFs and constituted a first-priority information requirement in the test results of H2 can prevent the occurrence of the TTFs during operation of the automated system when they are provided to all users in the context of a trust threat (see Table 4). Furthermore, strongly distrusting users can receive additional information (i.e., ICTs and ICMs) corresponding to the second-priority information requirements. The manner in which such results can be utilized in a distrust scenario is summarized in Table 7.

Table 7 Information configuration according to TTFs
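The decision logic implied by Table 7 can be sketched as a lookup from TTF to prioritized ICTs and ICMs. The assignments below are placeholders loosely echoing Sect. 5.2, not a reproduction of Table 7; a production design would take the actual first- and second-priority entries from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class InfoConfig:
    first_priority: list[str]                                   # provided to all users when the TTF is anticipated
    second_priority: list[str] = field(default_factory=list)    # added for strongly distrusting users

# Placeholder mapping for illustration only (see Tables 4 and 7 for the reported assignments).
CONFIG_BY_TTF = {
    "TTF1 lack of information": InfoConfig(["ICT1", "ICT2"], ["ICT3"]),
    "TTF3 unpredictability": InfoConfig(["ICT1"], ["ICT2"]),
    # ... remaining TTFs omitted
}

def information_for(ttf: str, strongly_distrusting: bool) -> list[str]:
    """Return the ICTs to display for a given TTF, adding second-priority items for distrusting users."""
    cfg = CONFIG_BY_TTF[ttf]
    return cfg.first_priority + (cfg.second_priority if strongly_distrusting else [])
```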

In our study, user “trust” in the simulated automated driving environment was measured by administering a survey, as has been done in many studies in the literature (e.g., [9, 26]). In addition, the concept of “distrust” was considered in this study, and the TTFs associated with automated driving scenarios were identified. Moreover, we included the ICTs and ICMs that depend on the TTFs. The results of this study are mainly based on subjective responses and should be complemented with quantitative results. For example, the auditory and haptic modalities have been found to be more effective than the visual modality in terms of human behavioral, vehicle control, and physiological metrics in imminent control takeover scenarios involving partially automated vehicles [28]. However, in this study, users preferred the visual and auditory modalities over the haptic modality. This implies that although the haptic modality can be an effective method for eliciting swift user action in an emergency situation, it might be inappropriate in normal automated driving. Further in-depth studies should be conducted to concretize this idea, and diverse driving situations should be considered when designing automated vehicle interfaces for both safety and convenience.

The ICTs and ICMs were presented in relation to the ways in which the results of our necessity level data analysis could be applied to each distrust scenario. Inferential statistical analysis was performed for each TTF, and based on the results, the ICTs and ICMs with the highest necessity level (i.e., the types and methods under No. 1 in Table 4) could be used to provide the information suitable for each situation. The ICTs and ICMs with the second highest necessity level and significant differences (i.e., the types and methods under No. 2 in Table 4) could be adopted for users who exhibit relatively high levels of distrust toward automated vehicles. As mentioned in the Literature Review section, various methods that increase the trust levels of drivers of automated vehicles have been investigated. In our study, we referred to relevant studies performed by several European transportation research groups (e.g., [7, 14]) and extended their results using the results obtained from experiments conducted outside Europe. For example, Körber et al. [14] investigated the effect of individual trust level on the monitoring characteristics of drivers. Likewise, in the present study, we focused on the effect of the individual trust levels of drivers and provided information that can promote user trust in various automated driving scenarios. Rather than examining cultural differences, our Asia-based study provides a link to extend European research worldwide. Furthermore, automated vehicles represent a fast-growing market in Asia, and our research can generate useful ideas for examining human factors in the development of automated vehicles in a global context, which includes the European context. For example, do people have different reasons for distrusting automated vehicles? Do they prefer different information configurations to alleviate TTFs? What are the implications of these differences? Although it could be challenging to compare our research results directly with those of European studies because we did not perform a replication study, our results are generally compatible with those of the relevant European studies, as demonstrated in the Results section. Further opportunities to contribute to European journals will provide benefits to both the European and Asian transport research communities.

7 Conclusions

The main goals of our study are as follows: identify distrust situations based on actual cases of driving automated vehicles on the road, and examine the configuration of information in automated vehicles. The latter can be achieved by analyzing the trust-threatening factors identified from the experimental data. The main findings of this study are as follows: (1) The responses to distrust events depended on user characteristics, (2) user distrust was caused by different reasons depending on distrust events, and (3) the ICTs and ICMs provided in the interface to alleviate distrust factors could be designed differently. These findings can be applied to provide interface options that are appropriate considering automated vehicle system users’ characteristics and information configuration designs that are appropriate for mitigating the distrust factors in distrust events, as well as for promoting the formation of trust among users toward the system. Our study shows similarities to the studies conducted in Europe in terms of the measurement of trust with diverse metrics. For example, Gold et al. [9] measured user trust in automated driving takeover scenarios by conducting a survey, and Walker et al. [26] measured user trust in automated vehicles based on gaze characteristics. We comprehensively investigated diverse trust-threatening scenarios encountered in automated driving and proposed a product design framework for ICTs and ICMs depending on TTFs. In the future, researchers can verify the results of the present study by conducting experiments in which the ICMs identified in this study (i.e., interface ICMs to mitigate the TTFs in Sect. 4) are provided in a virtual simulation environment. Moreover, given the reality gap between actual cars and simulators, the results of this study should be verified with an actual car on real roads.

Availability of data and materials

The datasets generated or analyzed in the current study are available from the corresponding author on reasonable request.

References

  1. AAA NewsRoom, Heathrow. (2018). American trust in autonomous vehicles slips. Retrieved September 4, 2020, from https://newsroom.aaa.com/2018/05/aaa-american-trust-autonomous-vehicles-slips/.

  2. Audi Vorsprung durch Technik. (2017). Owner’s Manual 2018 A8, 4H0012721BF. Retrieved September 4, 2020, from https://ownersmanuals2.com/audi/a8-s8-2018-owners-manual-72492.

  3. Barber, B. (1983). The logic and limits of trust. Rutgers University Press.

  4. Barkenbus, J. (2018). Self-driving cars: How soon is soon enough? Issues in Science and Technology, 34(4), 23–26.

  5. David, P. (2016). How to build trust in self-driving cars? Case Study, uxstudio. Retrieved September 4, 2020, from https://uxstudioteam.com/ux-blog/how-to-build-trust-in-self-driving-cars/.

  6. Dixon, S. R., Wickens, C. D., & McCarley, J. S. (2007). On the independence of compliance and reliance: Are automation false alarms worse than misses? Human Factors, 49(4), 564–572. https://doi.org/10.1518/001872007X215656

  7. Ekman, F., Johansson, M., & Sochor, J. (2017). Creating appropriate trust in automated vehicle systems: A framework for HMI design. IEEE Transactions on Human-Machine Systems, 48(1), 95–101. https://doi.org/10.1109/THMS.2017.2776209

  8. General Motors LLC. (2018). Owner’s Manual, Part No. 84423351 C. Retrieved September 4, 2020, from https://my.gm.ca/content/dam/gmownercenter/gmna/GMCC/dynamic/2018/cadillac/CT6/en/18i_CAD_CT6_OM_en_US_U_84423351C_2018MAR01_3P.pdf.

  9. Gold, C., Körber, M., Hohenberger, C., Lechner, D., & Bengler, K. (2015). Trust in automation—Before and after the experience of take-over scenarios in a highly automated vehicle. Procedia Manufacturing, 3, 3025–3032.

  10. Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570

  11. Johns, J. L. (1996). A concept analysis of trust. Journal of Advanced Nursing, 24(1), 76–83. https://doi.org/10.1046/j.1365-2648.1996.16310.x

  12. Joseph, Y. (2018). Briton who drove tesla on autopilot from passenger seat is barred from road. The New York Times. Retrieved September 4, 2020, from https://www.nytimes.com/2018/04/29/world/europe/uk-autopilot-driver-no-hands.html.

  13. Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., & Nass, C. (2015). Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing (IJIDeM), 9(4), 269–275. https://doi.org/10.1007/s12008-014-0227-2

  14. Körber, M., Baseler, E., & Bengler, K. (2018). Introduction matters: Manipulating trust in automation and reliance in automated driving. Applied Ergonomics, 66, 18–31. https://doi.org/10.1016/j.apergo.2017.07.006

  15. Lim, C. (2016). An information representation study for automation level of driverless car based on trust theory of automation system. M.S. Thesis, Human–Computer Interaction, Yonsei Univ., Seoul, South Korea. Retrieved March 14, 2019, from http://dcollection.yonsei.ac.kr/public_resource/pdf/000000423849_20200904131522.pdf.

  16. Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

  17. Lee, J. I., Kim, N. E., & Kim, J. W. (2017). A study on driver experience in autonomous car based on trust and distrust model of automation system. Journal of Digital Contents Society, 18(4), 713–722. https://doi.org/10.9728/dcs.2017.18.4.713

  18. Meinel, H. H., & Dickmann, J. (2016). The application of automotive radar—The further development towards safety features. In Proceedings of the European radar conference (EuRAD).

  19. Meyer, J. (2001). Effects of warning validity and proximity on responses to warnings. Human Factors, 43(4), 563–572. https://doi.org/10.1518/001872001775870395

  20. NHTSA (National Highway Traffic Safety Administration). (2018). Preparing for the future of transportation: Automated Vehicles 3.0. Retrieved September 4, 2020, from https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf.

  21. Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/001872097778543886

  22. Richardson, A. (2010). Nonparametric statistics for non-statisticians: A step-by-step approach by Gregory W. Corder, Dale I. Foreman. International Statistical Review, 78(3), 451–452. https://doi.org/10.1111/j.1751-5823.2010.00122_6.x

  23. Schoettle, B., & Sivak, M. (2014). A survey of public opinion about autonomous and self-driving vehicles in the US, the UK, and Australia. University of Michigan.

  24. Tesla. (2018). Model S Owner’s Manual, 2108.48.12. Retrieved September 4, 2020, from https://www.tesla.com/sites/default/files/model_s_owners_manual_north_america_en_us.pdf.

  25. Volvo Car Group. (2015). Volvo Cars reveals safe and seamless user interface for self-driving cars. Retrieved September 4, 2020, from https://www.media.volvocars.com/global/en-gb/media/pressreleases/167739/volvo-cars-reveals-safe-and-seamless-user-interface-for-self-driving-cars.

  26. Walker, F., Verwey, W., & Martens, M. (2018). Gaze behaviour as a measure of trust in automated vehicles. In Proceedings of the 6th humanist conference, The Hague, Netherlands. Retrieved May 26, 2021 from http://www.humanist-vce.eu/fileadmin/contributeurs/humanist/TheHague2018/29-walker.pdf.

  27. Yun, H., Kim, S., Lee, J., & Yang, J. (2018). Analysis of cause of disengagement based on U.S. California DMV autonomous driving disengagement report. Transactions of the Korean Society of Automotive Engineers, 26(4), 464–475.

  28. Yun, H., & Yang, J. (2020). Multimodal warning design for take-over request in conditionally automated driving. European Transport Research Review, 12, 1–11. https://doi.org/10.1186/s12544-020-00427-5


Acknowledgements

This work was supported by the Hyundai Motor Group Academy Industry Research Collaboration. The corresponding author was partly supported by the Basic Science Research Program of the National Research Foundation of Korea, funded by the Ministry of Science, ICT, and Future Planning (2021R1A2B5B01005433), and by the BK21 Program (5199990814084) through the National Research Foundation of Korea (NRF) funded by the Ministry of Education. The authors thank Hwan Hwangbo, senior engineer at Hyundai Motor Company, for providing experimental ideas; Sujin Baek for collecting and analyzing the data; and Myeongkyu Lee for supporting the editing of the manuscript.

Funding

This work was supported by the Hyundai Motor Group Academy Industry Research Collaboration. The corresponding author was partly supported by the Basic Science Research Program of the National Research Foundation of Korea, funded by the Ministry of Science, ICT, and Future Planning (2021R1A2C1005433).

Author information

Authors and Affiliations

Authors

Contributions

H.Y., as the major contributor, performed the experiment and wrote the manuscript. J.Y. reviewed the experimental design and data analysis and contributed to the formulation and substantive revision of the initial draft. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Ji Hyun Yang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Yun, H., Yang, J.H. Configuring user information by considering trust threatening factors associated with automated vehicles. Eur. Transp. Res. Rev. 14, 9 (2022). https://doi.org/10.1186/s12544-022-00534-5
