About Gierad


I am a second-year PhD candidate at the Human-Computer Interaction Institute at Carnegie Mellon University, where I am advised by Chris Harrison in the Future Interfaces Group.

My research explores novel sensing and interface technologies that make people's interactions with computers more expressive, powerful, and intuitive. My work is anchored in applied research, strongly influenced by the process of invention, and draws its value systems from computer science and technical HCI.


Contact Information
   CV / Resume

Outside of research, I enjoy cooking, running, and taking photos. Occasionally, I play drums for Disney Research Pittsburgh.

My name is pronounced like "Girard," but without the second "r."


Mar 22: Painting party at Nathan's house with fellow PhDs.
Mar 15: Kendrick's To Pimp a Butterfly drops; world goes nuts.
Mar 03: FIGLab drone is up and flying!
Mar 01: Acoustruments wins Best Paper Award at CHI 2015.
Feb 17: First submission to SIGGRAPH ETech. Crossing fingers.
Feb 02: Apple visits FIGLab. Demos galore.
Jan 22: Sending off Ishan: lab dinner at Udipi, bowling afterparty.
Jan 05: Back at the FIGLab. Projects for 2015 in full swing.
Dec 29: Gizmodo names Skin Buttons among its Top 7 UI/UX Innovations of 2014.
Dec 27: Ice skating at Campus Martius. I survived!
Dec 24: Holiday collaborative cooking. White Christmas in Michigan.
Dec 23: Detroit Zoo Wild Lights, then an after-after party.
Dec 16: Back in Berkeley. Catching up with Eric, Amy, Cesar, and Valkyrie.

Latest Research


Skin Buttons: Cheap, Small, Low-Power and Clickable Fixed-Icon Laser Projections

Gierad Laput, Robert Xiao, Xiang ‘Anthony’ Chen, Scott Hudson, Chris Harrison (UIST 2014)

Smartwatches are a promising new platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches to allow human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size.


CommandSpace: Modeling the Relationships Between Tasks, Descriptions and Features

Eytan Adar, Mira Dontcheva, Gierad Laput (UIST 2014)

Users often describe what they want to accomplish with an application in a language that is very different from the application's domain language. To address this gap between system and human language, we propose modeling an application's domain language by mining a large corpus of Web documents about the application using deep learning techniques. A high dimensional vector space representation can model the relationships between user tasks, system commands, and natural language descriptions and supports mapping operations. We demonstrate the feasibility of this approach with a system, CommandSpace, for the popular photo editing application Adobe Photoshop.
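The vector-space mapping CommandSpace describes can be illustrated with a toy sketch: embed both the user's task description and each command's description as vectors, then pick the command nearest the task by cosine similarity. The word vectors, vocabulary, and command names below are invented stand-ins, not the actual learned model or Photoshop's command set.

```python
import math

# Toy word embeddings standing in for vectors learned from a large Web
# corpus (the real system learns these with deep learning techniques).
EMBEDDINGS = {
    "remove":  [0.9, 0.1, 0.0],
    "erase":   [0.8, 0.2, 0.1],
    "blemish": [0.1, 0.9, 0.2],
    "spot":    [0.2, 0.8, 0.1],
    "blur":    [0.0, 0.1, 0.9],
}

# Hypothetical command descriptions; a real index would cover the
# application's full command vocabulary.
COMMANDS = {
    "Spot Healing Brush": ["remove", "blemish"],
    "Gaussian Blur":      ["blur"],
}

def embed(words):
    """Average the word vectors to get a single phrase vector."""
    vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def map_task_to_command(task_words):
    """Return the command whose description lies nearest the user's task."""
    q = embed(task_words)
    return max(COMMANDS, key=lambda c: cosine(q, embed(COMMANDS[c])))
```

For example, `map_task_to_command(["erase", "spot"])` maps a user's own words to "Spot Healing Brush" even though neither word appears in the command's name, which is the gap between human and system language the paper targets.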


Expanding the Input Expressivity of Smartwatches Using Mechanical Pan, Twist, Tilt, and Click

Robert Xiao, Gierad Laput, Chris Harrison (CHI 2014)

Smartwatches promise to bring enhanced convenience to common communication, creation and information retrieval tasks. Due to their prominent placement on the wrist, they must be small and otherwise unobtrusive, which limits the sophistication of interactions we can perform. We propose a complementary input approach: using the watch face as a multi-degree-of-freedom, mechanical interface. We developed a proof of concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. We developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices.


Pixel-Based Methods for Widget State and Style in a Runtime Implementation of Sliding Widgets

Morgan Dixon, Gierad Laput, James Fogarty (CHI 2014)

We present new pixel-based methods for modifying existing interfaces at runtime, and we use our methods to explore Moscovich et al.’s Sliding Widgets in real-world interfaces. This work examines deeper pixel-level understanding of widgets and the resulting capabilities of pixel-based runtime enhancements. We present three new sets of methods: methods for pixel-based modeling of widgets in multiple states, methods for managing the combinatorial complexity that arises in creating a multitude of runtime enhancements, and methods for styling runtime enhancements to preserve consistency with the design of an existing interface.
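The core idea of pixel-based widget modeling can be sketched at toy scale: store a small pixel prototype for each widget state, then classify an on-screen patch as its nearest prototype. The bitmaps and state names below are invented for illustration and are far simpler than the paper's actual methods.

```python
# Toy 1-bit "screenshots": each widget state is a tiny bitmap prototype.
# A real system models richer pixel features; these patterns are invented.
STATE_PROTOTYPES = {
    "checkbox_unchecked": [
        [1, 1, 1],
        [1, 0, 1],
        [1, 1, 1],
    ],
    "checkbox_checked": [
        [1, 1, 1],
        [1, 1, 1],
        [1, 1, 1],
    ],
}

def pixel_distance(a, b):
    """Count mismatched pixels between two same-sized bitmaps."""
    return sum(
        1
        for row_a, row_b in zip(a, b)
        for pa, pb in zip(row_a, row_b)
        if pa != pb
    )

def classify_state(patch):
    """Identify a widget's state as its nearest pixel prototype."""
    return min(
        STATE_PROTOTYPES,
        key=lambda s: pixel_distance(patch, STATE_PROTOTYPES[s]),
    )
```

A runtime enhancement built on such a model could, for instance, detect that a checkbox is in its unchecked state and overlay a restyled replacement at the same screen location, without any cooperation from the underlying application.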


PixelTone: A Multimodal Interface for Image Editing

G. Laput, M. Dontcheva, G. Wilensky, W. Chang, A. Agarwala, J. Linder, and E. Adar (CHI 2013)

Photo editing can be a challenging task, and it becomes even more difficult on the small screens of portable devices, such as camera phones, that are now frequently used to edit images. To address this problem we present PixelTone, a multimodal photo editing interface that combines speech and direct manipulation. We utilize semantic distance modeling, allowing users to express voice commands using their own words instead of application-enforced terms. Additionally, users can point to subjects in an image, tag them with names, and refer to those tags while simultaneously editing using voice commands.


Tutorial-based Interfaces for Cloud-enabled Applications

Gierad Laput, Eytan Adar, Mira Dontcheva, and Wilmot Li (UIST 2012)

Powerful image editing applications like Adobe Photoshop and GIMP have complex interfaces that can be hard to master. To help users perform image editing tasks, we introduce TappCloud, a system for authoring and running tutorial-based applications (Tapps) that retain the step-by-step structure and descriptive text of tutorials. Tapps can also automatically apply tutorial steps to new images. Additionally, Tapps can be used to batch process many images automatically, similar to traditional macros. Tapps also support interactive exploration of parameters, automatic variations, and direct manipulation (e.g., selection, brushing).
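The macro-like replay of tutorial steps can be sketched as an ordered list of (operation, parameters) pairs applied to an image. The operations, parameter names, and single-channel "images" below are invented toys, not TappCloud's actual representation.

```python
# Hypothetical image operations over a flat list of 0-255 pixel values.
def brighten(pixels, amount):
    """Increase every pixel value, clamped at white."""
    return [min(255, p + amount) for p in pixels]

def threshold(pixels, cutoff):
    """Binarize pixels to pure black or white around a cutoff."""
    return [255 if p >= cutoff else 0 for p in pixels]

OPS = {"brighten": brighten, "threshold": threshold}

def run_tapp(steps, image):
    """Replay tutorial steps on a new image, like a traditional macro."""
    for op, params in steps:
        image = OPS[op](image, **params)
    return image

def batch(steps, images):
    """Apply the same tapp to many images."""
    return [run_tapp(steps, img) for img in images]
```

For example, a two-step tapp `[("brighten", {"amount": 50}), ("threshold", {"cutoff": 128})]` can be replayed on any number of new images with `batch`, mirroring the paper's point that tutorial steps double as reusable batch macros.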