Interactive Gigapixel Prints: Large, Paper-Based Interfaces

Creative professionals in many domains employ
large paper artifacts in their work: their high resolution provides rich visual context,
they are easy to transport, and they facilitate co-located collaboration. We suggest that future workplaces will include
large multiuser interactive displays that are inexpensive, low-power, lightweight, and
mobile. Interactive gigapixel prints, or gigaprints,
are large paper interfaces augmented with digital pens and displays. Gigaprints both
pragmatically offer a solution today and epistemically serve as a prototype for these future displays. In this video, we explore four aspects of
the gigaprint user experience by examining a wide range of applications we have developed. We then briefly describe the design tools
we have built to support the prototyping process.

Large printed displays provide rich visual
context for graphics-heavy applications. This visual context aids in search and comparison
tasks.

Our first application supports the task of
monitoring the health of a large computer network. Each morning a visualization summarizing the
prior day’s traffic is printed. The amount of data is enormous: over half
a million data points are visualized on the three-foot by six-foot display. Due to paper’s higher resolution, all data
is immediately accessible without feeling cluttered. This enables the user to perform search and
comparison tasks naturally. No zooming or information hiding is necessary. When the paper visualization is augmented
with a digital pen and projector, these search and comparison tasks become richer. For example, the user can filter yesterday’s
data by specifying a geographic region of interest. All machines which communicate with that region
are illuminated. A user can then pivot on a particular machine
of interest and then the projector illuminates all autonomous systems with which that machine
communicated. This highlighting enhances search and exploration
tasks while retaining visual context. Likewise, a high-resolution display can provide
visual context for live information that is relatively low in spatial resolution. Here, the projector illuminates machines involved
in live traffic. A machine’s present activity can be easily
compared to the prior day’s data.

Our second application is “Twister,” a competitive
photo-browsing game that further explores the rich visual context afforded by paper. In this game, two users compete to find
photos on a large sheet. An auxiliary display presents four photos, two for one player and
two for the other. The first user who locates and touches one of his photos with the appropriate
left or right pen earns points. Because photos are always in the same visual context or location,
a user is able to develop skill at the game.

The size of the interface and the ability
to leverage multiple pens and devices supports collaboration, both co-located and remote,
and synchronous and asynchronous. Twister, for example, leverages co-located synchronous
collaboration as a gameplay element. Likewise, the network monitoring application introduced
earlier supports remote asynchronous collaboration by allowing users to report concerns to a
system administrator. A third application body sketch augments the
remote synchronous collaboration teleconferencing with sharing of sketches. The sketches of
one user appearing real time on secondary display of the other user. A fourth application
displays blog entries from an RSS feed in a public space. Users are able to comment
on these entries with a digital pen. The ink affords co-located asynchronous collaboration.
Additionally, immediately after a comment is left, it is made available digitally via
an RSS feed, enabling remote collaboration.

The robustness of pen and paper, the flexibility
of choosing between batched and real-time input methods, and the ability to use real-time
audio displays all aid mobility. A fifth application, Audio Guide, augments
a map with an audio display. The user is able to access descriptions of locations
on the map by selecting them with a digital pen. Similarly, a sixth application, targeted
at field biologists, enables straightforward retrieval of geotagged photos by augmenting
a map with a mobile display. This application fits into the work practice
of field biologists: printed maps, pens, and mobile phones are standard tools. This application
also allows digital storage of ink annotations on the map for later batch processing in the
lab.

Finally, pen and paper aligns with current
practices, and even if the technology fails, a gigaprint degrades gracefully: it is
never worse than paper-only tools. The field biology application, for example,
works identically to current practice should the technology fail. If a pen fails to record
information digitally, it is still captured in ink.

Our paper applications toolkit provides tools
for the rapid development of gigaprints. In fact, five of the six applications presented
here were built using this toolkit, each in under a day. Development with the toolkit consists of
two phases: user interface design and interaction specification. User interface design begins
with the designer employing industry-standard tools, such as Adobe Illustrator. Any application that supports PDF export can
be used to develop a paper UI. The PDF is opened in Adobe Acrobat, and an Acrobat plug-in
is used to specify regions of interaction. Finally, the interactions themselves are specified
using a Java library that closely mimics the event model used in the Java AWT and Swing
libraries. The paper applications toolkit has been made
available to the community as open source and is currently being used in an undergraduate
user interface design course.
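The AWT/Swing-style event model described above can be sketched in plain Java. The class and method names below (`Region`, `PenEvent`, `addPenListener`) are illustrative stand-ins, not the toolkit's actual API; in the real workflow, regions would be loaded from the annotations made in Acrobat rather than constructed by hand:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A pen event carrying the location of a pen-down on the printed sheet.
class PenEvent {
    final double x, y;
    PenEvent(double x, double y) { this.x = x; this.y = y; }
}

// A region of the paper UI; listeners are registered on it in the same
// style as AWT/Swing component listeners (hypothetical API).
class Region {
    private final String name;
    private final List<Consumer<PenEvent>> handlers = new ArrayList<>();
    Region(String name) { this.name = name; }
    void addPenListener(Consumer<PenEvent> handler) { handlers.add(handler); }
    void dispatch(PenEvent e) { handlers.forEach(h -> h.accept(e)); }
    String getName() { return name; }
}

public class GigaprintSketch {
    public static void main(String[] args) {
        // In the toolkit, this region would come from the Acrobat plug-in;
        // here we construct one directly for illustration.
        Region mapRegion = new Region("mapRegion");
        mapRegion.addPenListener(e ->
            System.out.println("Pen down in " + mapRegion.getName()
                + " at (" + e.x + ", " + e.y + ")"));
        // Simulate a pen stroke arriving from the digital pen.
        mapRegion.dispatch(new PenEvent(120.5, 88.0));
    }
}
```

The listener-registration pattern is what lets designers reuse their existing Swing intuitions: a paper region plays the role of a component, and pen strokes play the role of mouse events.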
