Unexplored Opportunities

Allowing a collaborative driving experience between humans and autonomous vehicles.

Environment
Research
Concepting
Future Roadmap

How to Use This

This project was extremely exploratory, and throughout each phase we made research and design decisions (which can be seen in the full documentation) that took us down a path toward a curved screen interface. However, along the way, there were options that were left behind. The list below is not a set of well-vetted and realized ideas, but rather concepts that did not end up being incorporated yet are still worthy of exploration.

Please use this as a retrospective on unexplored opportunities, as well as a roadmap for the future.

Environment

We chose to focus on levels 3-4 of autonomy because they will be most people's first exposure to autonomy and open up conversations about building trust between humans and autonomy.

A rich area of additional exploration is the grey area of responsibility that can obscure who has control over particular functions of the car. This is especially true with advanced ADAS features, in which humans retain control over some features but not others (such as having the AI control speed and following distance, but not brake assist). This split can create cognitive dissonance with a user's mental model of autonomy.

Of course, exploring L5 autonomy can open up a wealth of opportunities with passenger interaction, but has a longer lead time for development and deployment.

The timeline for productization of this project is set at 3-5 years because of the constraints of creating a real product. We built our logic around technologies that would reach maturity in this timeframe, as well as around how ready and willing people's thinking and societal models would be.

If the timeline were extended to 5+ years out, we could assume that autonomous technology would be smoother, with fewer issues and takeovers, opening up the opportunity to focus on true user engagement without concentrating on a driver's ability to take over operation of the vehicle.

From the beginning, our project was heavily focused on private ownership models, with Harman selling our idea into production vehicles that would be in the hands of individual drivers. By switching the design and research constraint to a world of shared autonomy, many more options open up, including:

  • Designing autonomous white-label vehicles for fleets that can be branded for rideshare and taxi companies instead of the OEM
  • Customizable interiors that pull preferences from a rider's phone app settings so the rider feels that they are in a personalized space every time
  • Using rider-reported incidents to further tweak and curate the overall experience (Likert-scale ratings inside the car)

Research

With this context, we set out to create a list of questions to address; those in scope can be found here.

We also have a wealth of questions that we felt were out of scope and not feasible to pursue in our timeline and vision. The questions we left on the table include:

  • What kind of environment do people expect in an L5 AV?
  • What is the mental model closely associated with riding in an L5 AV?
  • What is the difference between shared autonomy and public transit?
  • How do public and private spaces in vehicles differ?
  • Does engaging in secondary tasks while being driven cause car sickness?
  • Why do people buy luxury cars? Why do people own vs lease cars?
  • Does exterior styling matter in shared autonomy?
  • What would people do with the ability to jailbreak their AV?
  • Do human drivers think they're above the mean, and does this impact their preference for control?
  • Will "manual" driving be seen as a special hobby? Will it be cheaper or more expensive?
  • Will there be stunted driving ability in those with dependencies on autonomous features?
  • How can AVs improve the world/society? Is there a pro-social aspect?
  • What accessibility markets can AVs serve?
  • Who are the underserved populations that can benefit?
  • What modality do people prefer when interacting with AVs (voice vs haptic vs traditional)?
  • What are the weaknesses of current infotainment systems?
  • How do mobile ecosystems change the way users perceive their experience?
  • Why do people use Apple CarPlay or Android Auto?
  • What is the opportunity in next level entertainment?
  • Why are backseat entertainment screens not that successful?
  • How can we make a car ride interactive when not driving?
  • Do people want to be entertained or engaged by an L5 AV, or left alone?
  • Is it acceptable to gamify the road?

Given the scoping of our project, we decided that the three types of people we needed to interview were Tesla Owners, Car Enthusiasts and Car Salespeople.

Car salespeople - insight into what our target demographic/early adopters look for in vehicles, and trends in what early adopters buy and why

Operators of autonomous vehicles under testing - have more experience with real autonomous driving than Tesla drivers, but confidentiality concerns limit what they can share

Pilots - we found out late in the project that they would be beneficial to hear from; future research could make good use of input from a few of them

Early on, we identified several Areas for Exploration, which were larger questions we would spend time looking into before narrowing in on a solution to develop in the AV domain.

Addressed:

The Moments that Build Trust

Users’ trust in the AV system increases based upon its performance in specific moments during the ride. What are these moments, and how can we facilitate them?

Establishing Driver Agency

Users need reassurance that they have control over the vehicle, even when it is in autonomous driving mode. How can we communicate that most effectively and make the user feel in control?

Unaddressed:

Learning my Driving Preferences and Incorporating Feedback

Users want the AV to drive like they would. How can users convey their driving preferences to the AV during the drive, and how should the vehicle acknowledge and incorporate this feedback?

Personalized Interior Experiences

Users want the car to recognize their desires and remember their preferences. What specifically do users want or not want the car to do for them?

What to Make Transparent to the Driver

Users need the right amount of transparency into the workings of the AV system. What is necessary to show, what is too much, and how should this information be displayed?

Enjoyable Control of Autonomous Driving

Users derive joy from controlling their vehicles. How can this be retained in an AV system?

Concepting

Here is the raw matrix from the March trip to Mountain View.

Themes:

  • communication to pedestrians and other vehicles
  • other ways to distinguish autopilot from manual driving mode/who's in control (collapsing steering wheel; different colors across dash or car for different mode)
  • personalize what the user wants to see (CV, maneuver options, non-driving related like changing interior based on mood)
  • NLP seamless interaction with vehicle (not really realistic though so maybe don't mention?)
    • AV understanding of natural language (makes a note for future driving when the person says "whoa, that was close")
    • use voice to ask what's happening in surrounding environment, or to figure out why the car did something (more NLP)
  • windshield displays (of CV, prompts within CV)
  • living room on wheels/office car
  • different ways for AV to get human's attention (tap on shoulder?)
  • communication between AI and human when learning or teaching something about preferences (sounds, highlighting situation, rewards)
  • methods of strategic distraction (games?)
  • gesture detection for communication
  • using biometrics to know when to present certain info to human (like show different amount of info when scared or when the AI needs to keep human awake)

Future Roadmap

One of the key concepts of our product is the curved shape. The slope and unique S-curve shape allows users to know where their hand is physically on the control, and which control options are available.

This creates a type of hybrid tangibility: the physical curve of the screen acts as an affordance for interaction, while the display provides digital feedback on your actions. However, there are additional techniques that should be included in the next version to further foster these benefits:

  • Haptic feedback to make thresholds and confirmations more salient, using vibration as resistance and to signal the magnitude of the speed or lane change being made
  • Audio prompt notifications to avoid change
  • Feedback tones for confirmation and understanding
  • Static textures to create different zones and incorporate hand-feel as a signifier
  • Refreshing textures and responsive glass to indicate zones of operations
  • Adjustable mounting allowing users to move the screen for comfort
  • Responsive slope: placing actuators behind the OLED screen so the slope of the curve responds to and maps onto the vehicle's speed. The steeper the slope, the faster the car is moving, and the harder it becomes to swipe up and gain speed, since the added friction reflects the smaller speed gain available (see the sketch after this list)
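As a rough illustration of the responsive-slope idea, the sketch below maps vehicle speed to a slope angle and a swipe-resistance value. The constants and the linear mapping are placeholder assumptions for discussion, not values from our prototype or any Harman hardware.

```python
# Minimal sketch of the "responsive slope" mapping, with made-up tuning constants.

MIN_ANGLE_DEG = 10.0   # shallow slope when the car is stopped
MAX_ANGLE_DEG = 40.0   # steepest slope at top speed
TOP_SPEED_KPH = 130.0

def slope_for_speed(speed_kph: float) -> float:
    """Map current vehicle speed to a slope angle for the curved screen."""
    ratio = max(0.0, min(speed_kph / TOP_SPEED_KPH, 1.0))
    return MIN_ANGLE_DEG + ratio * (MAX_ANGLE_DEG - MIN_ANGLE_DEG)

def swipe_resistance(speed_kph: float) -> float:
    """Higher speed -> steeper slope -> more perceived friction on an upward swipe,
    signalling that less speed gain is available."""
    return slope_for_speed(speed_kph) / MAX_ANGLE_DEG  # ranges from 0.25 to 1.0

if __name__ == "__main__":
    # At 65 km/h the slope sits at about 25 degrees, roughly 62% of max resistance.
    print(slope_for_speed(65.0), swipe_resistance(65.0))
```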

Our research indicated that the amount of info shown to the user is vital. Too much information (showing a user a direct video stream of the car's computer vision) leaves people feeling even more overwhelmed and confused. Too little information, and the user doesn't know if the AI has enough situational awareness and understanding of the environment and world around it.

Through testing we uncovered that it is vital to show that the AI sees a user's command (immediate action feedback) and that the car is then following through on that command and executing the intent. We also saw that some users wanted more specific information about the world around them, rather than simply trusting that the car knows the boundaries of a lane or what options are available to execute in the form of a prompt.

The AI also needs to explain to the user why it is not executing something. This is a level of visibility and transparency that may not quite be available with existing technology, but it is vital to the success of the human-AI relationship.
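A minimal sketch of this feedback loop follows: every command is immediately acknowledged, then either executed or declined with a human-readable reason. The status names, data structures, and example reason are illustrative assumptions, not part of our prototype or any real vehicle API.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class CommandStatus(Enum):
    RECEIVED = auto()    # immediate feedback: the AI sees your command
    EXECUTING = auto()   # the car is following through on the intent
    COMPLETED = auto()
    DECLINED = auto()    # not executed; must carry a human-readable reason

@dataclass
class Command:
    intent: str                    # e.g. "pass the bus ahead"
    status: CommandStatus = CommandStatus.RECEIVED
    reason: Optional[str] = None   # filled in only when DECLINED

def decline(cmd: Command, reason: str) -> Command:
    """The AI always says *why* it is not executing something."""
    cmd.status = CommandStatus.DECLINED
    cmd.reason = reason
    return cmd

cmd = Command(intent="pass the bus ahead")
decline(cmd, "oncoming traffic in the passing lane")
print(cmd.status.name, "-", cmd.reason)
```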

The limitations of our testing environment in terms of realism did not allow us to push the boundaries of how much or how little environmental information the user needs to see in order to feel safe, both when handing control over to the car and before making a decision.

We see the next steps in user testing happening after additional fidelity is gained in the design and it is placed inside the cabin of a real vehicle, either with a better-calibrated rear projector or by creating a prototype from a custom-bent OLED screen. This additional level of realism will better allow users to see themselves operating the vehicle through this control, providing better data and feedback and leading to more specific design decisions.

The amount of information a user would want to see will be impacted by real world scenarios and real world uncomfortable situations (like actually being behind a large vehicle, or feeling the frustration of the AV not moving by itself). Without this, it is difficult to measure realistic success of the product.

Test subjects: our user testing was focused on getting a representative sample of the general population, recruiting on Craigslist and Reddit. The only limitation was the ability to be a legal, active driver (due to the need for takeover in level 3/4). The last round of testing involved Tesla owners who frequently use the Autopilot functions on their cars - these were the closest drivers we could get to knowing what it's like to be in a level 3/4 scenario.

However, there is a population that could be even closer to that - test vehicle operators that put miles on the prototype vehicles from companies working on this technology (Samsung, Waymo, Uber, Lyft, Argo, Aurora, etc). Pending conflict of interest resolution, they have the potential to yield the best data around using these vehicles.

Note: in all future research it is important to separate out the novelty of being in an autonomous vehicle, from the novelty of a new interface, from the actual usability successes and challenges.

In addition, the interaction pattern of swipes creates certain mental models of usability that are important to understand. Our user testing showed that people equate this to swiping to unlock a phone, using a volume slider, or swiping outside of a zone to confirm (as in the Robinhood mobile app). In order to make the next set of design iterations, a Mental Model Analysis study should be run to note which models we want to foster and which ones to design against. Suggestions for methodologies include: a relatedness study (similarity rankings), a task analysis with attention paid to the interactions, and laddering interviews.
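As one possible starting point for the relatedness study, the sketch below clusters a dissimilarity matrix of interaction patterns using SciPy's hierarchical clustering to reveal which mental models group together. The labels and numbers are invented placeholders, not collected study data.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

interactions = ["swipe to unlock", "volume slider", "swipe-to-confirm", "curved-screen swipe"]

# Pairwise dissimilarity (0 = identical mental model, 1 = unrelated),
# e.g. averaged from participants' similarity rankings.
dissimilarity = np.array([
    [0.0, 0.6, 0.3, 0.2],
    [0.6, 0.0, 0.7, 0.5],
    [0.3, 0.7, 0.0, 0.4],
    [0.2, 0.5, 0.4, 0.0],
])

condensed = squareform(dissimilarity)        # condensed distance vector
tree = linkage(condensed, method="average")  # build the cluster hierarchy
dendrogram(tree, labels=interactions)        # plot which mental models cluster together
plt.show()
```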

A human factors study needs to be conducted in order to finalize the exact measurements of the curve. We were able to achieve some degree of certainty about the angle of the curvature, but it is not precise or conclusive enough to make it productizable at this stage.

We also need to gain conclusive information about the effectiveness of our product's claimed benefits, such as increased comfort, decreased anxiety, and trust built over time. This could be done with a multi-stage study that uses ECGs and EEGs to measure the body's response to the movement of the car.

Harman has the incredible opportunity to integrate their biometric sensing technology into this product. By measuring a human's internal state during the drive, and mapping that information with how they manipulate the self-driving car's operations, Harman would have the unique ability to create data sets for how humans want the car to operate in specific moments and how they want to influence autonomy. Every person is different, and this technology can then be incorporated into how level 5 cars adapt their driving style to a rider's internal state.
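To make that data set idea concrete, here is an illustrative record format pairing a biometric snapshot with the control action it coincided with. Field names, units, and the CSV layout are assumptions for discussion, not Harman's actual sensing schema.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class DriveMoment:
    timestamp_s: float
    heart_rate_bpm: float        # from in-cabin biometric sensing
    skin_conductance_us: float   # microsiemens, a common arousal proxy
    control_event: str           # e.g. "swipe_down_speed", "accept_pass_prompt"
    vehicle_speed_kph: float

moments = [
    DriveMoment(12.4, 92.0, 4.1, "swipe_down_speed", 68.0),
    DriveMoment(57.9, 71.0, 2.3, "accept_pass_prompt", 52.0),
]

# Persist the paired records so they can later be aggregated per person.
with open("drive_moments.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(moments[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(m) for m in moments)
```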

Directional, localized sound solutions can be used to create a further connection between the human and the AI inside the cabin. By piping in directional sound from the environment, the user can get an even richer understanding of the prompts being offered by the AI - for example, highlighting the noise of the bus that the AI is asking to pass, so the user experiences more of the world.

Considering the general idea of integrating the human into the autonomous driving experience through our control, we see several opportunities for expanding our gesture-based interface with new functionality in the future.

Knowing that Harman works in the infotainment space, we envision the AI portion of our solution (the secondary screen) to be integrated with infotainment into one, larger environment (such as in our design graphics). This strategically positions Harman to be able to better cater to OEMs as a "self-driving as a service" model by integrating their revered infotainment technology with the capacity to control an autonomous vehicle.

We also see opportunities for the future of our concept by allowing the AI to suggest local stops and other bits of information based on driving habits, or through biometric sensing of the human. Perhaps one day a vehicle could recommend a nearby restaurant stop, with the human able to simply accept through our gestural control before the car shifts course on its own.

Expanding on this, the general concept of "pushing something to the car" can be expanded to include pushing action items to the car. A human might find, on the lower curved control, a place to go, such as a grocery store, and push it to the AI for the car to start acting upon. This idea of submitting preferences to the car doesn't have to stop at simple autonomous controls; it can augment the overall motivations of what the human wants to do with, and inside of, their vehicle.
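A toy sketch of this "push to the car" flow might look like the following, where a pushed suggestion sits pending until a single gesture accepts it. All names (Suggestion, push_to_car, on_swipe_accept) and the route-planner hand-off are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Suggestion:
    label: str      # e.g. "Grocery store nearby"
    lat: float
    lon: float
    accepted: bool = False

pending: List[Suggestion] = []

def push_to_car(s: Suggestion) -> None:
    """Human (or the AI itself) pushes an action item to the vehicle."""
    pending.append(s)

def on_swipe_accept() -> None:
    """A single gesture on the curved control confirms the newest suggestion."""
    if pending:
        pending[-1].accepted = True
        # here the route planner would be asked to shift course

push_to_car(Suggestion("Grocery store nearby", 40.4443, -79.9436))
on_swipe_accept()
print(pending[-1])
```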

Furthermore, creating personas and brands for this human/AI relationship (much as people know Apple CarPlay and Android Auto) can help establish Harman's identity in the car space, or OEMs' as well, helping OEMs regain control of their interior experience from Apple's and Google's infotainment tie-ins.