About Gierad


I'm a third-year PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University, where I am advised by Chris Harrison in the Future Interfaces Group.

My research explores novel sensing and interface technologies that make people's interactions with computers more expressive, powerful, and intuitive. My work is anchored in applied research, strongly influenced by the process of invention, and draws its value systems from computer science and technical HCI.


Contact Information
   CV / Resume

Outside of research, I enjoy cooking, running, and taking photos. Occasionally, I play drums for Disney Research Pittsburgh.

My name is pronounced as "Girard" with no second "r."


Nov 01: Crafted a series of sculptures in wood and steel
Oct 29: In NYC for Engadget LIVE
Jul 29: Gearing up for SIGGRAPH. Demos are ready!
Jul 14: Two papers accepted at UIST '15: one on sensing, another on fabrication
Jun 01: Working at Google Research for the summer
Apr 17: Flying to Seoul for CHI 2015
Apr 14: UIST rush is over. Now for CHI 2015.
Mar 22: Painting party at Nathan's house with fellow PhDs
Mar 15: Kendrick's To Pimp a Butterfly drops, world goes nuts
Mar 03: FIGLab drone is up and flying!
Mar 01: Acoustruments wins Best Paper Award at CHI 2015
Feb 17: First submission to SIGGRAPH ETech. Crossing fingers.
Feb 02: Apple visits FIGLab. Demos galore.

Latest Research


EM-Sense: Touch Recognition of Uninstrumented Electrical and Electromechanical Objects

Gierad Laput, Chouchang Yang, Robert Xiao, Alanson Sample, Chris Harrison (UIST 2015)    BEST TALK AWARD

Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. We can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment.
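
As a rough, simplified illustration of what on-touch classification might look like in code, the sketch below assumes FFT magnitude features over short windows of the sensed EM signal and an off-the-shelf SVM classifier. It is an approximation for illustration only, not the actual EM-Sense pipeline.

```python
# Illustrative sketch of on-touch object classification from an EM signal.
# Not the EM-Sense implementation: the feature choice (log-magnitude FFT)
# and classifier (linear SVM) are assumptions made for this example.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def em_features(window, n_bins=256):
    """Normalized log-magnitude spectrum of one window of sensed samples."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    spectrum = np.log1p(spectrum[:n_bins])
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)  # amplitude-invariant

def train(windows, labels):
    """windows: list of 1-D sample arrays recorded while touching labeled objects."""
    X = np.array([em_features(w) for w in windows])
    return make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True)).fit(X, labels)

def classify(model, window, threshold=0.6):
    """Return the most likely object label, or None if confidence is low (no touch / unknown)."""
    probs = model.predict_proba([em_features(window)])[0]
    best = int(np.argmax(probs))
    return model.classes_[best] if probs[best] >= threshold else None
```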


3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles

Gierad Laput, Xiang 'Anthony' Chen, Chris Harrison (UIST 2015)

We introduce a technique for furbricating 3D printed hair, fibers and bristles, by exploiting the stringing phenomena inherent in fused deposition modeling 3D printers. Our approach offers a range of design parameters for controlling the properties of single strands and also of hair bundles. Further, we detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We show several examples of output, demonstrating the feasibility of our approach on a low cost printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.


Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds

Gierad Laput, Walter Lasecki, Jason Wiese, Robert Xiao, Jeff Bigham, Chris Harrison (CHI 2015)

The promise of “smart” homes, workplaces, schools, and other environments has long been championed. Unattractive, however, has been the cost to run wires and install sensors. More critically, raw sensor data tends not to align with the types of questions humans wish to ask, e.g., do I need to restock my pantry? In response, we built Zensors, a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors. With Zensors, users can go from question to live sensor feed in less than 60 seconds. Through our API, Zensors can enable a variety of rich end-user applications and moves us closer to the vision of responsive, intelligent environments.
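
The crowd-to-classifier handoff at the heart of this idea can be sketched roughly as follows. The ask_crowd and featurize callables, label budget, and confidence threshold are hypothetical stand-ins; the real Zensors system and its API are considerably richer.

```python
# Toy sketch of fusing crowd answers with an automatic classifier:
# crowd workers answer a question about each frame until enough labels
# accumulate to train a model, which then answers when it is confident.
# All interfaces here are hypothetical stand-ins, not the Zensors API.
from sklearn.linear_model import LogisticRegression

class HybridSensor:
    def __init__(self, ask_crowd, featurize, min_labels=50, min_confidence=0.9):
        self.ask_crowd = ask_crowd      # callable: image -> label from a crowd worker
        self.featurize = featurize      # callable: image -> feature vector
        self.min_labels = min_labels
        self.min_confidence = min_confidence
        self.X, self.y = [], []
        self.model = None

    def sense(self, image):
        x = self.featurize(image)
        if self.model is not None:
            probs = self.model.predict_proba([x])[0]
            if probs.max() >= self.min_confidence:
                return self.model.classes_[int(probs.argmax())]  # automatic answer
        label = self.ask_crowd(image)                            # fall back to the crowd
        self.X.append(x)
        self.y.append(label)
        if len(self.y) >= self.min_labels and len(set(self.y)) > 1:
            self.model = LogisticRegression(max_iter=1000).fit(self.X, self.y)
        return label
```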


Acoustruments: Passive, Acoustically-Driven, Interactive Controls for Handheld Devices

Gierad Laput, Eric Brockmeyer, Scott E. Hudson, Chris Harrison (CHI 2015)    BEST PAPER AWARD

Acoustruments are low-cost, passive, and powerless mechanisms, made from plastic, that can bring rich, tangible functionality to handheld devices. Through a structured exploration, we identified an expansive vocabulary of design primitives, providing building blocks for the construction of tangible interfaces utilizing smartphones’ existing audio functionality. By combining design primitives, familiar physical mechanisms can all be constructed. On top of these, we can create end-user applications with rich, tangible interactive functionalities. Acoustruments adds a new method to the toolbox HCI practitioners and researchers can draw upon, while introducing a cheap and passive method for adding interactive controls to consumer products.
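
One plausible, much-simplified way to read such controls in software: play a known test sound through the speaker, record it at the microphone after it has passed through the plastic mechanism, and match the received spectrum against per-control templates. The band features and nearest-template matching below are assumptions for illustration, not the paper's actual signal chain.

```python
# Simplified sketch: identify which passive control (and state) is engaged by
# comparing the recorded acoustic response against stored reference spectra.
# Band count, normalization, and distance metric are illustrative assumptions.
import numpy as np

def band_energies(recording, n_bands=32):
    """Coarse, loudness-normalized spectral signature of a recorded response."""
    spectrum = np.abs(np.fft.rfft(recording * np.hanning(len(recording))))
    energies = np.array([band.sum() for band in np.array_split(spectrum, n_bands)])
    return energies / (energies.sum() + 1e-9)

def identify_control(recording, templates):
    """templates: dict mapping a control state name -> reference band-energy vector."""
    signature = band_energies(recording)
    return min(templates, key=lambda name: np.linalg.norm(signature - templates[name]))
```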


Skin Buttons: Cheap, Small, Low-Powered and Clickable Fixed-Icon Laser Projectors

Gierad Laput, Robert Xiao, Xiang ‘Anthony’ Chen, Scott Hudson, Chris Harrison (UIST 2014)

Smartwatches are a promising new platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches to allow human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size.


CommandSpace: Modeling the Relationships Between Tasks, Descriptions and Features

Eytan Adar, Mira Dontcheva, Gierad Laput (UIST 2014)

Users often describe what they want to accomplish with an application in a language that is very different from the application's domain language. To address this gap between system and human language, we propose modeling an application's domain language by mining a large corpus of Web documents about the application using deep learning techniques. A high dimensional vector space representation can model the relationships between user tasks, system commands, and natural language descriptions and supports mapping operations. We demonstrate the feasibility of this approach with a system, CommandSpace, for the popular photo editing application Adobe Photoshop.
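
A toy version of that idea: embed both free-form user queries and short command descriptions in the same vector space, then rank commands by similarity. The word vectors and command descriptions are assumed inputs, and simple averaging stands in for the learned representation described in the paper.

```python
# Toy sketch of mapping natural language to system commands via a shared
# vector space. Word vectors and command descriptions are assumed inputs;
# averaging word vectors is a crude stand-in for the learned CommandSpace model.
import numpy as np

def embed(text, word_vectors):
    """Average the word vectors of known tokens into one normalized vector."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    if not vecs:
        return np.zeros(next(iter(word_vectors.values())).shape)
    v = np.mean(vecs, axis=0)
    return v / (np.linalg.norm(v) + 1e-9)

def rank_commands(query, command_descriptions, word_vectors, k=3):
    """command_descriptions: dict mapping command name -> short textual description."""
    q = embed(query, word_vectors)
    scores = {cmd: float(embed(desc, word_vectors) @ q)
              for cmd, desc in command_descriptions.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# e.g., rank_commands("make the photo less dark", descriptions, vectors) might
# surface exposure- or brightness-related commands, depending on the inputs.
```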


Expanding the Input Expressivity of Smartwatches Using Mechanical Pan, Twist, Tilt, and Click

Robert Xiao, Gierad Laput, Chris Harrison (CHI 2014)

Smartwatches promise to bring enhanced convenience to common communication, creation and information retrieval tasks. Due to their prominent placement on the wrist, they must be small and otherwise unobtrusive, which limits the sophistication of interactions we can perform. We propose a complementary input approach: using the watch face as a multi-degree-of-freedom, mechanical interface. We developed a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. We developed a series of example applications, many of which are cumbersome, or even impossible, on today’s smartwatch devices.


Pixel-Based Methods for Widget State and Style in a Runtime Implementation of Sliding Widgets

Morgan Dixon, Gierad Laput, James Fogarty (CHI 2014)

We present new pixel-based methods for modifying existing interfaces at runtime, and we use our methods to explore Moscovich et al.’s Sliding Widgets in real-world interfaces. This work examines deeper pixel-level understanding of widgets and the resulting capabilities of pixel-based runtime enhancements. We present three new sets of methods: methods for pixel-based modeling of widgets in multiple states, methods for managing the combinatorial complexity that arises in creating a multitude of runtime enhancements, and methods for styling runtime enhancements to preserve consistency with the design of an existing interface.
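
As a toy illustration of the pixel-based flavor of this line of work, the sketch below locates a known widget screenshot inside a screen capture and reports which state template matched. It is only an illustration; the actual runtime system models widgets far more robustly than exact pixel comparison.

```python
# Toy sketch of pixel-based widget-state detection: one reference image per
# widget state, matched by exact pixel comparison against a screen capture.
# Real pixel-based runtime systems are far more robust than this.
import numpy as np

def find_template(screen, template):
    """Return (row, col) of the first exact match of template in screen, else None."""
    sh, sw = screen.shape[:2]
    th, tw = template.shape[:2]
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if np.array_equal(screen[r:r + th, c:c + tw], template):
                return r, c
    return None

def widget_state(screen, state_templates):
    """state_templates: dict mapping a state name ('normal', 'hover', ...) -> image array."""
    for state, template in state_templates.items():
        if find_template(screen, template) is not None:
            return state
    return None
```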


PixelTone: A Multimodal Interface for Image Editing

Gierad Laput, Mira Dontcheva, Gregg Wilensky, Walter Chang, Aseem Agarwala, Jason Linder, and Eytan Adar (CHI 2013) [Pre-PhD]

Photo editing can be a challenging task, and it becomes even more difficult on the small, portable screens of camera phones that are now frequently used to edit images. To address this problem we present PixelTone, a multimodal photo editing interface that combines speech and direct manipulation. We utilize semantic distance modeling, allowing users to express voice commands using their own words instead of application-enforced terms. Additionally, users can point to subjects in an image, tag them with names, and refer to those tags while simultaneously editing using voice commands.
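
A crude sketch of the multimodal interpretation step: resolve a tagged region mentioned in the spoken phrase, then pick the closest-matching edit operation. String similarity stands in here for the paper's semantic distance model, and the tags and operations are made up for illustration.

```python
# Crude sketch: interpret a spoken edit like "make Tim brighter" by resolving
# a tagged region plus the closest edit operation. String similarity is only a
# stand-in for semantic distance; tags and operations are illustrative.
import difflib

OPERATIONS = {
    "brighten": ["brighter", "lighter", "less dark"],
    "darken":   ["darker", "dimmer"],
    "sharpen":  ["sharper", "crisper", "less blurry"],
}

def interpret(command, region_tags):
    """Return (region, operation) for a spoken command and a list of user-defined tags."""
    words = command.lower().split()
    region = next((tag for tag in region_tags if tag.lower() in words), None)
    best_op, best_score = None, 0.0
    for op, phrases in OPERATIONS.items():
        for phrase in phrases + [op]:
            score = difflib.SequenceMatcher(None, phrase, command.lower()).ratio()
            if score > best_score:
                best_op, best_score = op, score
    return region, best_op

# interpret("make Tim brighter", ["Tim", "sky"]) -> ("Tim", "brighten")
```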


Tutorial-based Interfaces for Cloud-enabled Applications

Gierad Laput, Eytan Adar, Mira Dontcheva, and Wilmot Li (UIST 2012) [Pre-PhD]

Powerful image editing applications like Adobe Photoshop and GIMP have complex interfaces that can be hard to master. To help users perform image editing tasks, we introduce TappCloud, a system for authoring and running tutorial-based applications (Tapps) that retain the step-by-step structure and descriptive text of tutorials. Tapps can also automatically apply tutorial steps to new images. Additionally, Tapps can be used to batch process many images automatically, similar to traditional macros. Tapps also support interactive exploration of parameters, automatic variations, and direct manipulation (e.g., selection, brushing).