About Gierad


I am a 2nd year PhD Candidate at the Human-Computer Interaction Institute at Carnegie Mellon University, where I am advised by Chris Harrison at the Future Interfaces Group.

My research explores novel sensing and interface technologies that make people's interaction with computers more expressive, powerful, and intuitive. My work is anchored in applied research, strongly influenced by the process of invention, and grounded in the value systems of computer science and technical HCI.

 

Contact Information
   gierad.laput@cs
   CV / Resume
   @gierad

Outside of research, I enjoy cooking, running, and taking photos. Occasionally, I play drums for Disney Research Pittsburgh.

My name is pronounced like "Girard," but without the second "r."

Updates

Apr 17: Flying to Seoul for CHI 2015
Apr 14: Impressive number of projects submitted to UIST
Mar 22: Painting Party at Nathan's House with fellow PhDs
Mar 15: Kendrick's To Pimp a Butterfly drops, world goes nuts
Mar 03: FIGLab drone is up and flying!
Mar 01: Acoustruments wins Best Paper Award at CHI 2015
Feb 17: First submission to SIGGRAPH ETech. Crossing Fingers.
Feb 02: Apple visits FIGLab. Demos galore.
Jan 22: Sending off Ishan. Lab dinner at Udipi, bowling afterparty.
Jan 05: Back at the FIGLab. Projects for 2015 in full swing.
Dec 29: Gizmodo names Skin Buttons one of its Top 7 UI/UX Innovations for 2014
Dec 27: Ice skating at Campus Martius. I survived!
Dec 24: Holiday collaborative cooking. White Christmas in Michigan.

Latest Research


Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds

Gierad Laput, Walter Lasecki, Jason Wiese, Robert Xiao, Jeff Bigham, Chris Harrison (CHI 2015)

The promise of “smart” homes, workplaces, schools, and other environments has long been championed. However, the cost of running wires and installing sensors has made such environments unattractive in practice. More critically, raw sensor data tends not to align with the types of questions humans wish to ask, e.g., do I need to restock my pantry? In response, we built Zensors, a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors. With Zensors, users can go from question to live sensor feed in less than 60 seconds. Through our API, Zensors can enable a variety of rich end-user applications and moves us closer to the vision of responsive, intelligent environments. Published at CHI 2015.
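
To make the question-to-live-sensor-feed idea concrete, here is a minimal sketch of what a client of a Zensors-style service could look like. The endpoint URL, routes, and JSON fields below are hypothetical illustrations, not the actual Zensors API.

```python
# Hypothetical client sketch for a Zensors-style sensor feed.
# The URL, routes, and JSON fields are illustrative assumptions.
import time
import requests

API = "https://zensors.example.com/api"  # hypothetical endpoint

def create_sensor(question, camera_id, region):
    """Register a natural-language question over a cropped camera region."""
    resp = requests.post(f"{API}/sensors", json={
        "question": question,   # e.g. "Do I need to restock my pantry?"
        "camera": camera_id,
        "region": region,        # [x, y, width, height] crop, in pixels
    })
    return resp.json()["sensor_id"]

def latest_value(sensor_id):
    """Poll the most recent answer (crowd- or classifier-produced)."""
    resp = requests.get(f"{API}/sensors/{sensor_id}/latest")
    return resp.json()["value"]

if __name__ == "__main__":
    sensor = create_sensor("Do I need to restock my pantry?",
                           "kitchen-cam", [120, 80, 320, 240])
    while True:
        print("Pantry needs restocking:", latest_value(sensor))
        time.sleep(60)  # coarse-grained polling is enough for this question
```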


Acoustruments: Passive, Acoustically-Driven, Interactive Controls for Handheld Devices

Gierad Laput, Eric Brockmeyer, Scott E. Hudson, Chris Harrison (CHI 2015)

Acoustruments are low-cost, passive, and powerless mechanisms, made from plastic, that bring rich, tangible functionality to handheld devices. Through a structured exploration, we identified an expansive vocabulary of design primitives, providing building blocks for constructing tangible interfaces that utilize smartphones’ existing audio functionality. By combining these primitives, a wide range of familiar physical mechanisms can be constructed, and on top of these we can create end-user applications with rich, tangible interactive functionality. Acoustruments adds a new method to the toolbox that HCI practitioners and researchers can draw upon, while introducing a cheap and passive way to add interactive controls to consumer products. Published at CHI 2015.
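
As a rough illustration of the sensing side (not the paper's actual pipeline), a classifier could tell which passive control is active by comparing the microphone's frequency response against previously recorded templates:

```python
# Illustrative sketch only: classify which passive control is being
# manipulated by matching the mic's magnitude spectrum against stored,
# pre-recorded templates (dict of label -> unit-norm spectrum array).
import numpy as np

def spectrum(mic_buffer):
    """Unit-normalized magnitude spectrum of one microphone buffer."""
    mag = np.abs(np.fft.rfft(mic_buffer * np.hanning(len(mic_buffer))))
    return mag / (np.linalg.norm(mag) + 1e-9)

def classify(mic_buffer, templates):
    """Return the template label with the highest cosine similarity."""
    s = spectrum(mic_buffer)
    return max(templates, key=lambda label: float(np.dot(s, templates[label])))

# templates = {"button_down": spectrum(rec1), "slider_left": spectrum(rec2)}
# classify(current_buffer, templates)
```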


Skin Buttons: Cheap, Small, Low-Powered and Clickable Fixed-Icon Laser Projectors

Gierad Laput, Robert Xiao, Xiang ‘Anthony’ Chen, Scott Hudson, Chris Harrison (UIST 2014)

Smartwatches are a promising new platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches to allow human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size.
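
A minimal sketch of how the clickable part could work, assuming each projected icon is paired with a small proximity or light sensor whose reading rises when a finger covers it. The sensor model and threshold are illustrative assumptions, not the prototype's actual hardware or firmware.

```python
# Illustrative only: map per-icon sensor readings to touch events.
# ICONS and TOUCH_THRESHOLD are assumed values, not the prototype's.
ICONS = ["email", "clock", "music", "settings"]
TOUCH_THRESHOLD = 0.6  # assumed normalized sensor reading for "covered"

def touched_icons(sensor_readings):
    """Return the icons whose paired sensor currently reads 'covered'."""
    return [icon for icon, value in zip(ICONS, sensor_readings)
            if value > TOUCH_THRESHOLD]

# touched_icons([0.1, 0.8, 0.2, 0.05]) -> ["clock"]
```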


CommandSpace: Modeling the Relationships Between Tasks, Descriptions and Features

Eytan Adar, Mira Dontcheva, Gierad Laput (UIST 2014)

Users often describe what they want to accomplish with an application in a language that is very different from the application's domain language. To address this gap between system and human language, we propose modeling an application's domain language by mining a large corpus of Web documents about the application using deep learning techniques. A high dimensional vector space representation can model the relationships between user tasks, system commands, and natural language descriptions and supports mapping operations. We demonstrate the feasibility of this approach with a system, CommandSpace, for the popular photo editing application Adobe Photoshop.
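
The mapping operation can be sketched as a nearest-neighbor lookup in a shared embedding space. The snippet below assumes precomputed word vectors and unit-normalized command vectors; the actual CommandSpace model, trained on a large web corpus, is considerably richer.

```python
# Sketch of vector-space mapping from free-form text to a known command.
# word_vectors: {word: np.ndarray}; command_vectors: {command: unit vector}.
# Both are assumed precomputed inputs.
import numpy as np

def embed(text, word_vectors):
    """Average the vectors of the words we know, then unit-normalize."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    if not vecs:
        return None
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def nearest_command(query, word_vectors, command_vectors):
    """Return the command whose vector is most similar to the query."""
    q = embed(query, word_vectors)
    if q is None:
        return None
    return max(command_vectors,
               key=lambda cmd: float(np.dot(q, command_vectors[cmd])))

# nearest_command("make the photo less washed out", word_vectors, command_vectors)
# might map to a contrast or levels command, depending on the learned space.
```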


Expanding the Input Expressivity of Smartwatches Using Mechanical Pan, Twist, Tilt, and Click

Robert Xiao, Gierad Laput, Chris Harrison (CHI 2014)

Smartwatches promise to bring enhanced convenience to common communication, creation and information retrieval tasks. Due to their prominent placement on the wrist, they must be small and otherwise unobtrusive, which limits the sophistication of interactions we can perform. We propose a complementary input approach: using the watch face as a multi-degree-of-freedom, mechanical interface. We developed a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. We developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices.
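
A hypothetical sketch of how an application could consume these degrees of freedom; the event structure and handler are illustrative assumptions, not the prototype's actual software interface.

```python
# Illustrative only: route each mechanical degree of freedom to an app action.
# `app` is an assumed application object exposing select/peek/scroll/zoom.
from dataclasses import dataclass

@dataclass
class WatchFaceEvent:
    pan_dx: float = 0.0   # continuous 2D pan
    pan_dy: float = 0.0
    twist: float = 0.0    # continuous rotation
    tilt: bool = False    # binary tilt
    click: bool = False   # binary click

def handle(event: WatchFaceEvent, app):
    """Map pan/twist/tilt/click to distinct application actions."""
    if event.click:
        app.select()
    elif event.tilt:
        app.peek()
    else:
        app.scroll(event.pan_dx, event.pan_dy)  # pan moves the viewport
        app.zoom(event.twist)                   # twist adjusts zoom level
```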


Pixel-Based Methods for Widget State and Style in a Runtime Implementation of Sliding Widgets

Morgan Dixon, Gierad Laput, James Fogarty (CHI 2014)

We present new pixel-based methods for modifying existing interfaces at runtime, and we use our methods to explore Moscovich et al.’s Sliding Widgets in real-world interfaces. This work examines a deeper pixel-level understanding of widgets and the resulting capabilities of pixel-based runtime enhancements. We present three new sets of methods: methods for pixel-based modeling of widgets in multiple states, methods for managing the combinatorial complexity that arises in creating a multitude of runtime enhancements, and methods for styling runtime enhancements to preserve consistency with the design of an existing interface.
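
To convey the first set of methods (pixel-based modeling of widgets in multiple states), here is a toy sketch that classifies a cropped widget screenshot against one stored exemplar image per state; the real system's modeling is considerably more robust.

```python
# Toy sketch: pick the widget state whose stored exemplar image best
# matches the cropped screenshot (mean absolute pixel difference).
import numpy as np

def widget_state(crop, state_exemplars):
    """crop and exemplars: same-size uint8 arrays; returns the best label."""
    best_label, best_score = None, float("inf")
    for label, exemplar in state_exemplars.items():
        if exemplar.shape != crop.shape:
            continue  # exemplars are stored at the widget's native size
        score = float(np.mean(np.abs(crop.astype(float) - exemplar.astype(float))))
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# state_exemplars = {"normal": img_n, "hover": img_h, "pressed": img_p}
# widget_state(screenshot_crop, state_exemplars) -> "hover"
```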


PixelTone: A Multimodal Interface for Image Editing

Gierad Laput, Mira Dontcheva, Gregg Wilensky, Walter Chang, Aseem Agarwala, Jason Linder, and Eytan Adar (CHI 2013)

Photo editing can be a challenging task, and it becomes even more difficult on the small screens of portable devices, such as camera phones, that are now frequently used to edit images. To address this problem, we present PixelTone, a multimodal photo editing interface that combines speech and direct manipulation. We utilize semantic distance modeling, allowing users to express voice commands in their own words instead of application-enforced terms. Additionally, users can point to subjects in an image, tag them with names, and refer to those tags while simultaneously editing with voice commands.
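
A minimal sketch of the multimodal combination, assuming the user has already tagged an image region with a name; the keyword matching below is a toy stand-in for the paper's semantic distance model.

```python
# Toy sketch: apply a spoken command to a previously tagged region.
# image: HxWx3 float array in [0, 1]; masks: {tag_name: HxW boolean array}.
import numpy as np

def apply_command(image, masks, command):
    """Edit the region named in the command (or the whole image if none)."""
    words = command.lower().split()
    target = next((m for tag, m in masks.items() if tag in words), None)
    region = target if target is not None else np.ones(image.shape[:2], bool)
    out = image.copy()
    if "brighter" in words:
        out[region] = np.clip(out[region] * 1.2, 0.0, 1.0)  # simple gain up
    elif "darker" in words:
        out[region] = np.clip(out[region] * 0.8, 0.0, 1.0)  # simple gain down
    return out

# apply_command(photo, {"bill": bill_mask}, "make bill a little brighter")
```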


Tutorial-based Interfaces for Cloud-enabled Applications

Gierad Laput, Eytan Adar, Mira Dontcheva, and Wilmot Li (UIST 2012)

Powerful image editing applications like Adobe Photoshop and GIMP have complex interfaces that can be hard to master. To help users perform image editing tasks, we introduce TappCloud, a system for authoring and running tutorial-based applications (Tapps) that retain the step-by-step structure and descriptive text of tutorials. Tapps can automatically apply tutorial steps to new images and can batch process many images automatically, similar to traditional macros. Tapps also support interactive exploration of parameters, automatic variations, and direct manipulation (e.g., selection, brushing).
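
A hedged sketch of the tutorial-as-program idea: a tapp represented as an ordered list of parameterized steps that can be re-applied to a new image or batch-applied to many. The step names and parameters are illustrative, not TappCloud's actual representation.

```python
# Illustrative sketch: a "tapp" as an ordered list of (step, params) pairs.
# Uses Pillow for the image operations; step names are assumptions.
from PIL import Image, ImageEnhance, ImageOps

STEPS = {
    "grayscale": lambda img, p: ImageOps.grayscale(img).convert("RGB"),
    "contrast":  lambda img, p: ImageEnhance.Contrast(img).enhance(p.get("amount", 1.2)),
    "resize":    lambda img, p: img.resize((p["width"], p["height"])),
}

def run_tapp(steps, image):
    """Apply each step in order, as a reader following the tutorial would."""
    for name, params in steps:
        image = STEPS[name](image, params)
    return image

def batch(steps, paths):
    """Macro-style batch processing: apply one tapp to many images."""
    return [run_tapp(steps, Image.open(p)) for p in paths]

vintage = [("grayscale", {}), ("contrast", {"amount": 1.4})]
# batch(vintage, ["photo1.jpg", "photo2.jpg"])
```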