Process: Concept Validation

To verify our assumptions, we fanned out across NASA to locate potential users who could not only give us feedback on our ideas but could also serve as participants in future user testing.

We located contacts at the nearby Moffett Federal Airfield (an Air Force base), the local Palo Alto Airport, and a nearby Aircraft Maintenance Training School, as well as within NASA Ames at the Arcjet complex, the Vertical Gun Range, and the wind tunnels. Each group presented a similar picture, consistent with our workflow model. Once we had identified and corroborated several important trends in our data, we brought concept validation to a close, deciding that the significant time required for additional contextual modeling would not be well spent.

Our inquiry centered on five major focus questions; the answers we found are summarized below.

People write on documents to flag specific parts, either by marking them on diagrams or by recording part numbers for future use.

Techs filter their problems through informal conversations with senior technicians and tech leads in their work area before submitting them to Quality for formal vetting; Quality then dismisses illegitimate problem reports and elaborates and forwards legitimate ones.

Old problem reports (PRs) are reused as templates for new ones. Urgent new problems must be designated as such.

Technicians list discrepancies by description and location, with just enough detail to locate them. Quality vets the listed discrepancies and either dismisses them or forwards them to engineering, which analyzes them and decides on corrective action (this lifecycle is sketched in code after these findings).

Techs and Quality personnel attach annotated design documents and rich media for context.
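
To make the reporting workflow these findings describe concrete, the sketch below models it as a small state machine. This is purely illustrative: the class, method, and status names (ProblemReport, copy_as_template, vet) are our own inventions under the assumptions above, not part of any existing NASA system.

```python
from dataclasses import dataclass, field, replace
from enum import Enum, auto
from typing import List


class Status(Enum):
    """Lifecycle stages of a problem report, per the findings above."""
    DRAFT = auto()       # tech writing it up, possibly copied from an old PR
    SUBMITTED = auto()   # sent to Quality for vetting
    DISMISSED = auto()   # Quality judged it illegitimate
    FORWARDED = auto()   # Quality elaborated it and sent it to engineering


@dataclass
class ProblemReport:
    description: str      # just enough detail to find the discrepancy
    location: str
    urgent: bool = False  # urgent problems must be designated as such
    attachments: List[str] = field(default_factory=list)  # annotated docs, media
    status: Status = Status.DRAFT

    def copy_as_template(self) -> "ProblemReport":
        """Old PRs are reused as templates for new ones."""
        return replace(self, status=Status.DRAFT, urgent=False,
                       attachments=list(self.attachments))

    def vet(self, legitimate: bool) -> None:
        """Quality either dismisses a report or forwards it to engineering."""
        assert self.status is Status.SUBMITTED, "only submitted reports are vetted"
        self.status = Status.FORWARDED if legitimate else Status.DISMISSED


# Example: a tech reuses an old PR, submits it, and Quality forwards it.
old = ProblemReport("hairline crack in nozzle mount", "Arcjet bay 2",
                    status=Status.DISMISSED)
new = old.copy_as_template()
new.status = Status.SUBMITTED
new.vet(legitimate=True)  # engineering now decides on corrective action
```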

The additional perspective also helped us flesh out personas of our imagined users. We created six archetypes: one each for the young tech, the older tech, the technical lead, Quality personnel, the engineer, and the manager. This ordering roughly matches how knowledge about a problem progresses over time. The personas helped us both identify users and describe them in greater depth.

For example, we heard it bluntly affirmed at the Palo Alto Airport that engineers delay reviewing problem reports mainly because young techs report many superficial problems without first checking informally with local experts.

We found that the Vertical Gun (used to test meteorite impacts on spacecraft hull tiling and atmospheric re-entry behavior on orbiter scale models) had no formal problem reporting system. This was initially surprising, but it fits the pattern we identified earlier with the Robotics Professor. Although this meant the Vertical Gun could supply neither users nor concept validation, we developed a characterization of the situations that call for formal problem reporting systems and kept it in mind for identifying future users at Ames.

At the Aircraft Maintenance Training School, we were also shown how recent redesigns of aircraft cockpits reduced the crew roles of pilot, copilot, flight engineer, and navigator to just pilot and copilot through automation. At the Air Force base at Moffett Field, Quality personnel once spent much of their time verifying that reports from an obsolete maintenance system were valid, which resembles some of the role requirements of Quality personnel elsewhere we visited; at KSC there were even entire roles devoted just to running the PR system. If we could automate such duties, we knew we could reduce overall staff-hours while freeing personnel to focus on the roles of technician, Quality personnel, and engineer, which cannot be automated.
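
As a rough, purely hypothetical illustration of what automating such duties could look like, the sketch below runs the kind of mechanical validity checks (required fields present, part numbers recognized) that might otherwise consume Quality staff-hours. The function and field names are our own assumptions, not part of any deployed system.

```python
from typing import Dict, List, Set

# Hypothetical routine checks a reporting system could run automatically on
# submission, standing in for vetting work Quality once performed by hand.
REQUIRED_FIELDS = ("description", "location")


def auto_vet(report: Dict[str, str], known_parts: Set[str]) -> List[str]:
    """Return a list of issues with the report itself; an empty list means
    it passes the mechanical checks and can enter Quality's queue."""
    issues: List[str] = []
    for name in REQUIRED_FIELDS:
        if not report.get(name, "").strip():
            issues.append(f"missing field: {name}")
    # "parts" is assumed to be a space-separated list of part numbers.
    for part in report.get("parts", "").split():
        if part not in known_parts:
            issues.append(f"unrecognized part number: {part}")
    return issues


print(auto_vet({"description": "chipped tile", "location": ""}, {"T-100"}))
# -> ['missing field: location']
```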