About Gierad

I'm a third-year PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University, where I am advised by Chris Harrison in the Future Interfaces Group.

My research explores novel sensing technologies for smart environments, the Internet of Things, and mobile or wearable devices. I am interested in sensing opportunities that do not require special-purpose hardware or invasive instrumentation of users or the environment. Most recently, I have started a new thread of research investigating unconventional uses of 3D printing.

 

Contact Information
   gierad.laput@cs
   CV / Resume
   @gierad

Outside of research, I enjoy cooking, running, and reading books. I also play drums for Disney Research.

Name pronunciation: "Girard" with no second "r." Last name: La-put.

Updates

05/10/2016    Tour of Google X. Rode self-driving car. Impressive.
05/09/2016    SkinTrack goes viral!
05/08/2016    In the Bay Area for CHI 2016. Escape Room w/ Lab and Friends.
04/20/2016    SkinTrack receives Best Paper Nomination.
04/13/2016    Multiple projects submitted to UIST 2016.
04/01/2016    Honored to receive the Adobe Research Fellowship.
01/15/2016    New FIGLab website is up!
12/21/2015    Accepted role as Associate Chair (AC) for CHI 2016 LBW
12/18/2015    Star Wars opening day with Disney gang; The Force is strong
12/14/2015    Two new papers accepted: IUI '16, CHI '16
11/14/2015    BBC, NPR, Post-Gazette interviews. The research is getting out there.
11/11/2015    EM-Sense wins Best Talk at UIST 2015
11/01/2015    Crafted a series of sculptures: wood and steel

Latest Research

SkinTrack: Using the Body as an Electrical Waveguide for Finger Tracking on the Skin

Yang Zhang, Junhan Zhuo, Gierad Laput and Chris Harrison (CHI 2016)    BEST PAPER NOMINATION

SkinTrack is a wearable system that enables continuous touch tracking on the skin. It consists of a ring, which emits a continuous high frequency AC signal, and a sensing wristband with multiple electrodes. Due to the phase delay inherent in a high-frequency AC signal propagating through the body, a phase difference can be observed between pairs of electrodes. SkinTrack measures these phase differences to compute a 2D finger touch coordinate. Our approach can segment touch events at 99% accuracy, and resolve the 2D location of touches with a mean error of 7.6mm. We envision the technology being integrated into future smartwatches, supporting rich interactions beyond the confines of the small screen.
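For a rough sense of how phase maps to position, here is a minimal Python sketch under stated assumptions (an 80 MHz signal, a fixed propagation speed through tissue, and one electrode pair per axis); the actual system calibrates per user and fuses multiple electrode pairs.

```python
import numpy as np

# Illustrative sketch, not the SkinTrack implementation: convert the phase
# difference seen by two wristband electrodes into a 1D offset along the
# axis between them. SIGNAL_FREQ and PROP_SPEED are assumptions.
SIGNAL_FREQ = 80e6            # Hz, assumed ring signal frequency
PROP_SPEED = 0.6 * 3e8        # m/s, assumed propagation speed through tissue
WAVELENGTH = PROP_SPEED / SIGNAL_FREQ

def offset_from_phase(phase_a_deg, phase_b_deg, electrode_gap=0.04):
    """Estimate the finger's offset (m) from the midpoint between two electrodes."""
    delta = (phase_a_deg - phase_b_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    path_diff = (delta / 360.0) * WAVELENGTH                      # phase -> path-length difference
    # Near the wrist, the offset is roughly half the path-length difference.
    return float(np.clip(path_diff / 2.0, -electrode_gap / 2, electrode_gap / 2))

def touch_point(phase_x_pair, phase_y_pair):
    """Combine one electrode pair per axis into a 2D touch coordinate (m)."""
    return (offset_from_phase(*phase_x_pair), offset_from_phase(*phase_y_pair))
```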

SweepSense: Ad-Hoc Configuration Sensing Using Swept-Frequency Ultrasonics

Gierad Laput, Xiang 'Anthony' Chen, Chris Harrison (IUI 2016)

Devices can be made more intelligent if they can sense their surroundings and physical configuration. However, adding special purpose sensors increases size, price and build complexity. Instead, we use speakers and microphones already present in a wide variety of devices to open new sensing opportunities. Our technique sweeps through a range of inaudible frequencies, allowing us to deduce information about the immediate environment. We offer several example uses, two of which we implemented as self-contained demos, and conclude with an evaluation that quantifies their performance and demonstrates high accuracy.
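A minimal sketch of the sensing loop, assuming a 48 kHz audio pipeline, an 18-22 kHz sweep, and nearest-template matching; these choices are illustrative, and audio playback and capture are omitted.

```python
import numpy as np
from scipy.signal import chirp

# Illustrative sketch, not the SweepSense pipeline: emit an inaudible
# frequency sweep, fingerprint the recorded spectrum in the swept band, and
# match it against templates captured for known configurations.
FS = 48000                                   # sample rate (Hz), assumed
SWEEP_SEC = 0.1
_t = np.linspace(0, SWEEP_SEC, int(FS * SWEEP_SEC), endpoint=False)
SWEEP = chirp(_t, f0=18000, t1=SWEEP_SEC, f1=22000)   # near-ultrasonic sweep to play

def fingerprint(recording):
    """Normalized magnitude spectrum of the recording within the swept band."""
    windowed = recording * np.hanning(len(recording))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(recording), 1.0 / FS)
    band = spectrum[(freqs >= 18000) & (freqs <= 22000)]
    return band / (np.linalg.norm(band) + 1e-9)

def classify(recording, templates):
    """templates: dict of configuration name -> fingerprint of equal length."""
    fp = fingerprint(recording)
    return max(templates, key=lambda name: float(np.dot(fp, templates[name])))
```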

EM-Sense: Touch Recognition of Uninstrumented Electrical and Electromechanical Objects

Gierad Laput, Chouchang Yang, Robert Xiao, Alanson Sample, Chris Harrison (UIST 2015)    BEST TALK AWARD

Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. We can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment.
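As a sketch of the on-touch classification step, here is a hedged example using spectral features and an off-the-shelf classifier; the sample rate, window size, and model choice are assumptions, not the EM-Sense pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch, not the EM-Sense implementation: featurize a window of
# the EM signal picked up through the body and classify which object is being
# touched.
FS = 1_000_000          # assumed ADC sample rate (Hz)
WINDOW = 4096           # assumed samples per classification window

def em_features(samples):
    """Log-magnitude spectrum of one window of raw EM samples."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.log1p(spectrum)

clf = RandomForestClassifier(n_estimators=100)

def train(windows, object_labels):
    clf.fit([em_features(w) for w in windows], object_labels)

def predict_touched_object(samples):
    return clf.predict([em_features(samples)])[0]
```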

3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles

Gierad Laput, Xiang 'Anthony' Chen, Chris Harrison (UIST 2015)

We introduce a technique for furbricating 3D printed hair, fibers, and bristles by exploiting the stringing phenomenon inherent in fused deposition modeling 3D printers. Our approach offers a range of design parameters for controlling the properties of single strands and also of hair bundles. Further, we detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We show several examples of output, demonstrating the feasibility of our approach on a low-cost printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.
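To illustrate the exploit, here is a hypothetical G-code generator that anchors a small blob of filament and then pulls the nozzle away quickly so the material strings into a strand; feed rates, extrusion amounts, and the relative-extrusion mode are assumptions that would need tuning per printer and material, and this is not the authors' toolchain.

```python
# Illustrative sketch: generate G-code that deposits an anchor of molten
# filament, then moves the nozzle away fast so the material strings.
def strand(x, y, z_base, length=10.0, travel_feed=6000, extrude_feed=300):
    return "\n".join([
        f"G1 X{x:.2f} Y{y:.2f} Z{z_base:.2f} F{travel_feed}",  # move to anchor point
        f"G1 E0.5 F{extrude_feed}",                            # extrude a small anchor blob
        f"G1 Z{z_base + length:.2f} F{travel_feed}",           # pull away fast -> stringing
    ])

def hair_patch(xs, ys, z_base=0.3):
    """G-code for a grid of strands (relative extrusion, M83, assumed)."""
    return "\n".join(["M83"] + [strand(x, y, z_base) for x in xs for y in ys])
```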

Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds

Gierad Laput, Walter Lasecki, Jason Wiese, Robert Xiao, Jeff Bigham, Chris Harrison (CHI 2015)

The promise of “smart” homes, workplaces, schools, and other environments has long been championed, but the cost of running wires and installing sensors has remained unattractive. More critically, raw sensor data tends not to align with the types of questions humans wish to ask, e.g., do I need to restock my pantry? In response, we built Zensors, a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors. With Zensors, users can go from question to live sensor feed in less than 60 seconds. Through our API, Zensors can enable a variety of rich end-user applications and moves us closer to the vision of responsive, intelligent environments.
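A minimal sketch of the crowd-to-classifier hand-off, with an assumed agreement threshold and hypothetical callables for crowd labeling and featurization; this is not the Zensors implementation.

```python
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: a sensor answers its question via crowd workers at
# first, trains a model on their labels, and switches to the model once it
# agrees with the crowd. Threshold, batch size, and callables are assumptions.
class HumanIntelligentSensor:
    def __init__(self, question, ask_crowd, featurize, agreement=0.95):
        self.question = question        # e.g. "Do I need to restock my pantry?"
        self.ask_crowd = ask_crowd      # callable: frame -> crowd label
        self.featurize = featurize      # callable: frame -> feature vector
        self.agreement = agreement
        self.X, self.y = [], []
        self.model = LogisticRegression(max_iter=1000)
        self.automated = False

    def read(self, frame):
        feats = self.featurize(frame)
        if self.automated:
            return self.model.predict([feats])[0]
        label = self.ask_crowd(frame)                   # real-time human intelligence
        self.X.append(feats)
        self.y.append(label)
        if len(self.y) >= 50 and len(set(self.y)) > 1:  # enough labels, both classes seen
            self.model.fit(self.X, self.y)
            if self.model.score(self.X, self.y) >= self.agreement:
                self.automated = True                   # hand off to the classifier
        return label
```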

Acoustruments: Passive, Acoustically-Driven, Interactive Controls for Handheld Devices

Gierad Laput, Eric Brockmeyer, Scott E. Hudson, Chris Harrison (CHI 2015)    BEST PAPER AWARD

Acoustruments are low-cost, passive, and powerless mechanisms, made from plastic, that can bring rich, tangible functionality to handheld devices. Through a structured exploration, we identified an expansive vocabulary of design primitives, providing building blocks for the construction of tangible interfaces utilizing smartphones’ existing audio functionality. By combining these primitives, a wide range of familiar physical mechanisms can be constructed, and on top of them we can build end-user applications with rich, tangible interactive functionality. Acoustruments add a new method to the toolbox HCI practitioners and researchers can draw upon, while introducing a cheap and passive way to add interactive controls to consumer products.

Skin Buttons: Cheap, Small, Low-Power and Clickable Fixed-Icon Laser Projections

Gierad Laput, Robert Xiao, Xiang 'Anthony' Chen, Scott Hudson, Chris Harrison (UIST 2014)

Smartwatches are a promising new platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches to allow human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size.

CommandSpace: Modeling the Relationships Between Tasks, Descriptions and Features

Eytan Adar, Mira Dontcheva, Gierad Laput (UIST 2014)

Users often describe what they want to accomplish with an application in a language that is very different from the application's domain language. To address this gap between system and human language, we propose modeling an application's domain language by mining a large corpus of Web documents about the application using deep learning techniques. A high dimensional vector space representation can model the relationships between user tasks, system commands, and natural language descriptions and supports mapping operations. We demonstrate the feasibility of this approach with a system, CommandSpace, for the popular photo editing application Adobe Photoshop.
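A minimal sketch of the shared vector-space mapping, with a hypothetical word-vector lexicon and command set standing in for the mined corpus and Photoshop commands; this is not the CommandSpace model itself.

```python
import numpy as np

# Illustrative sketch: embed free-form user phrases and system commands in one
# space and map a query to the nearest command by cosine similarity.
def _cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def embed(text, word_vectors):
    """Average the vectors of known tokens (word_vectors: word -> np.array)."""
    dim = len(next(iter(word_vectors.values())))
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def nearest_command(query, command_vectors, word_vectors):
    """command_vectors: command name -> embedding in the same space."""
    q = embed(query, word_vectors)
    return max(command_vectors, key=lambda cmd: _cosine(q, command_vectors[cmd]))
```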

Expanding the Input Expressivity of Smartwatches Using Mechanical Pan, Twist, Tilt, and Click

Robert Xiao, Gierad Laput, Chris Harrison (CHI 2014)

Smartwatches promise to bring enhanced convenience to common communication, creation and information retrieval tasks. Due to their prominent placement on the wrist, they must be small and otherwise unobtrusive, which limits the sophistication of interactions we can perform. We propose a complementary input approach: using the watch face as a multi-degree-of-freedom mechanical interface. We developed a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. We developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices.
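As a sketch of what the added degrees of freedom might look like to application code, here is a toy event structure and dispatcher; the field names, ranges, and thresholds are assumptions, not the prototype's API.

```python
from dataclasses import dataclass

# Illustrative sketch of an input event for watch-face pan, twist, tilt, click.
@dataclass
class WatchFaceEvent:
    pan_x: float     # continuous pan, normalized -1..1
    pan_y: float     # continuous pan, normalized -1..1
    twist: float     # continuous twist, degrees
    tilt: bool       # binary tilt
    click: bool      # binary click

def dispatch(event: WatchFaceEvent) -> str:
    if event.click:
        return "select"
    if event.tilt:
        return "back"
    if abs(event.twist) > 5.0:
        return f"rotate {event.twist:+.0f} deg"
    return f"pan ({event.pan_x:+.2f}, {event.pan_y:+.2f})"
```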

Pixel-Based Methods for Widget State and Style in Runtime Sliding Widgets

Morgan Dixon, Gierad Laput, James Fogarty (CHI 2014)

We present new pixel-based methods for modifying existing interfaces at runtime, and we use our methods to explore Moscovich et al.’s Sliding Widgets in real-world interfaces. This work examines deeper pixel-level understanding of widgets and the resulting capabilities of pixel-based runtime enhancements. We present three new sets of methods: methods for pixel-based modeling of widgets in multiple states, methods for managing the combinatorial complexity that arises in creating a multitude of runtime enhancements, and methods for styling runtime enhancements to preserve consistency with the design of an existing interface.
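A minimal sketch of pixel-based widget-state detection via template matching, with an assumed one-template-per-state setup and match threshold; the paper's methods are considerably richer than this.

```python
import cv2

# Illustrative sketch: decide which state a known widget is in by matching
# per-state template images against the current screen.
def detect_widget_state(screen_bgr, state_templates, threshold=0.95):
    """state_templates: state name -> template image (BGR numpy array).

    Returns (state, (x, y)) for the best match above threshold, else None.
    """
    best = None
    for state, template in state_templates.items():
        scores = cv2.matchTemplate(screen_bgr, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(scores)
        if score >= threshold and (best is None or score > best[0]):
            best = (score, state, top_left)
    return (best[1], best[2]) if best else None
```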

PixelTone: A Multimodal Interface for Image Editing

Gierad Laput, Mira Dontcheva, Gregg Wilensky, Walter Chang, Aseem Agarwala, Jason Linder, and Eytan Adar (CHI 2013) [Pre-PhD]

Photo editing can be a challenging task, and it becomes even more difficult on the small, portable screens of devices such as camera phones, which are now frequently used to edit images. To address this problem we present PixelTone, a multimodal photo editing interface that combines speech and direct manipulation. We utilize semantic distance modeling, allowing users to express voice commands using their own words instead of application-enforced terms. Additionally, users can point to subjects in an image, tag them with names, and refer to those tags while simultaneously editing using voice commands.
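A minimal sketch of combining region tags with spoken edits, using a toy command grammar and operation names that are assumptions rather than PixelTone's parser.

```python
import re

# Illustrative sketch: users name image regions, then reference those names in
# voice commands.
class TaggedEditor:
    def __init__(self):
        self.tags = {}                    # name -> region, e.g. (x, y, w, h)

    def tag(self, name, region):
        self.tags[name.lower()] = region

    def parse_command(self, utterance):
        # e.g. "brighten the sky" -> ("brighten", region tagged "sky")
        m = re.search(r"(brighten|darken|sharpen|blur)\s+(?:the\s+)?(\w+)", utterance.lower())
        if not m:
            return None
        op, name = m.groups()
        return op, self.tags.get(name)    # None region -> apply to the whole image
```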

Tutorial-Based Interfaces for Cloud-Enabled Applications

Gierad Laput, Eytan Adar, Mira Dontcheva, and Wilmot Li (UIST 2012) [Pre-PhD]

Powerful image editing applications like Adobe Photoshop and GIMP have complex interfaces that can be hard to master. To help users perform image editing tasks, we introduce TappCloud, a system for authoring and running tutorial-based applications (tapps) that retain the step-by-step structure and descriptive text of tutorials. Tapps can automatically apply tutorial steps to new images, and they can batch process many images at once, similar to traditional macros. Tapps also support interactive exploration of parameters, automatic variations, and direct manipulation (e.g., selection, brushing).
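A minimal sketch of the batch-macro idea, representing tutorial steps as parameterized operations and replaying them over a folder of images; the step vocabulary and file layout are assumptions, not TappCloud itself.

```python
from pathlib import Path
from PIL import Image, ImageEnhance

# Illustrative sketch: replay a recorded sequence of editing steps on every
# image in a directory.
STEPS = [("brightness", 1.2), ("contrast", 1.1)]   # example tutorial steps

def apply_steps(img, steps):
    for op, amount in steps:
        if op == "brightness":
            img = ImageEnhance.Brightness(img).enhance(amount)
        elif op == "contrast":
            img = ImageEnhance.Contrast(img).enhance(amount)
    return img

def batch_process(in_dir, out_dir, steps=STEPS):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(in_dir).glob("*.jpg")):
        apply_steps(Image.open(path), steps).save(Path(out_dir) / path.name)
```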