Zhuo Chen

PhD candidate
Computer Science Department
Carnegie Mellon University

Phone: +1 412 230 7066
Email: czxxdd at gmail dot com
Address: Carnegie Mellon University, GHC 9120
Pittsburgh, PA, 15213, USA

[BIO] [THESIS] [PAPERS] [POSTER & DEMOS] [RESEARCH] [CV]


BIOGRAPHY

I am a final-year PhD student in the Computer Science Department at Carnegie Mellon University. I received my B.E. from Tsinghua University in 2012. As an undergraduate, I also worked as a research intern in the Wireless and Networking Group at Microsoft Research Asia with Guobin (Jacky) Shen. Last summer, I worked at Nod Labs, an exciting wearable and virtual-reality start-up, with Anush Elangovan.

My main research interests lie in mobile computing, distributed systems, and the application of computer vision in that context. I am currently working on the Cloudlet project with my advisor, Professor Mahadev Satyanarayanan (Satya). Specifically, I explore how the effortless video capture of smart glasses, such as Google Glass, can benefit people when combined with cloudlet infrastructure.

THESIS

SELECTED PAPERS

  • Zhuo Chen, Wenlu Hu, Junjue Wang, Siyan Zhao, Brandon Amos, Guanhang Wu, Kiryong Ha, Khalid Elgazzar, Padmanabhan Pillai, Roberta Klatzky, Daniel Siewiorek, Mahadev Satyanarayanan. "An Empirical Study of Latency in an Emerging Class of Edge Computing Applications for Wearable Cognitive Assistance".
    SEC 2017. [PDF]
  • Zhuo Chen, Lu Jiang, Wenlu Hu, Kiryong Ha, Brandon Amos, Padmanabhan Pillai, Alex Hauptmann, Mahadev Satyanarayanan. "Early Implementation Experience with Wearable Cognitive Assistance Applications".
    WearSys 2015. [PDF]
  • Mahadev Satyanarayanan, Zhuo Chen, Kiryong Ha, Wenlu Hu, Wolfgang Richter, and Padmanabhan Pillai. "Cloudlets: at the Leading Edge of Mobile-Cloud Convergence (invited paper)".
    MobiCASE 2014. [PDF]
  • Kiryong Ha, Zhuo Chen, Wenlu Hu, Wolfgang Richter, Padmanabhan Pillai, and Mahadev Satyanarayanan. "Towards Wearable Cognitive Assistance".
    MobiSys 2014. [PDF] [slides]
  • Zhuo Chen, Wenlu Hu, Kiryong Ha, et al. "QuiltView: a Crowd-Sourced Video Response System".
    HotMobile 2014. [PDF] [slides]
  • Pieter Simoens, Yu Xiao, Padmanabhan Pillai, Zhuo Chen, Kiryong Ha, Mahadev Satyanarayanan. "Scalable Crowd-Sourcing of Video from Mobile Devices".
    MobiSys 2013. [PDF]
  • Guobin Shen, Zhuo Chen, Peichao Zhang, Thomas Moscibroda, and Yongguang Zhang. "Walkie-Markie: Indoor Pathway Mapping Made Easy".
    NSDI 2013. [PDF]

POSTERS & DEMOS

  • Zhuo Chen, Wenlu Hu, Kiryong Ha, et al. "QuiltView: a Crowd-Sourced Video Response System".
    Demo at HotMobile 2014. [PDF] (Best Demo Award!)
  • Zhuo Chen, Yang Chen, Cong Ding, Beixing Deng, Xing Li. "Pomelo: Accurate and Decentralized Shortest-path Distance Prediction in Social Graphs".
    Poster at SIGCOMM 2011. [PDF] [Poster]

RESEARCH PROJECTS

School of Computer Science, Carnegie Mellon University, Sept. 2012 - present
Graduate Research. Advisor: Prof. Mahadev Satyanarayanan (Satya)
    Project Gabriel
    • I build wearable cognitive assistance applications that help users complete daily tasks such as cooking, assembling, or exercising. The system captures a user's actions with wearable devices such as Google Glass, interprets the user's state with real-time computer vision, and gives appropriate feedback. Video demos can be found HERE.
    • At the core of this work, we have designed, implemented, and evaluated Gabriel, a platform that simplifies the creation of such applications. It offloads heavy computation to cloudlets to achieve fast system response, and uses an application-level flow-control mechanism to reduce end-to-end latency. Gabriel is capable of exploiting coarse-grained parallelism on cloudlets to improve system performance, and of conserving energy on mobile devices based on user context. Gabriel is an open-source project at HERE.
    • The client side of Gabriel can run with Google Glass, Vuzix M100, ODG R7, and HoloLens (hologram feedback).
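The application-level flow control mentioned above can be illustrated with a minimal sketch. This is not Gabriel's actual implementation; it is a simplified token-based scheme (class and field names are mine) showing the core idea: keeping at most a fixed number of frames in flight and dropping stale frames instead of queueing them, which bounds end-to-end latency.

```python
class TokenFlowControl:
    """Illustrative sketch of token-based flow control (not Gabriel's
    real API): the client may have at most `max_tokens` frames in
    flight; excess frames are dropped rather than queued, so the server
    always works on fresh input and latency stays bounded."""

    def __init__(self, max_tokens=1):
        self.tokens = max_tokens
        self.sent = 0
        self.dropped = 0

    def on_new_frame(self, frame):
        if self.tokens > 0:
            self.tokens -= 1
            self.sent += 1
            return frame   # transmit this frame to the cloudlet
        self.dropped += 1
        return None        # drop instead of queueing a stale frame

    def on_result(self):
        self.tokens += 1   # the server's reply returns the token
```

For example, with a 30 fps camera but a server that returns one result per three captured frames, two thirds of the frames are dropped at the client instead of accumulating in a queue.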
    Project QuiltView
    • QuiltView leverages the ability of wearable devices such as Google Glass to provide near effortless capture of first-person view-point video. The extreme simplicity of video capture can be used to create a new near-real-time social network. In this network, users can pose brief queries to other users in a specific geographic area and receive prompt video responses. The richness of video content provides much detail and context to the person posing the query, while consuming little attention from those who respond. The QuiltView architecture incorporates result caching, geolocation and query similarity detection to shield users from being overwhelmed by a flood of queries.
    Project GigaSight
    • GigaSight is a scalable Internet system for continuous collection of crowd-sourced video from devices such as Google Glass. Our hybrid cloud architecture for this system is effectively a CDN in reverse: it decentralizes the cloud computing infrastructure using VM-based cloudlets. Based on time, location, and content, privacy-sensitive information is automatically removed from the video, a process we refer to as denaturing. Users can perform index- or content-based searches on the total catalog of denatured videos. Our experiments reveal the bottlenecks of such a system and provide insight into how parameters such as frame rate and resolution impact its scalability.
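The metadata-based part of denaturing can be sketched in a few lines. This is only a toy policy filter under assumed rules (a nighttime blackout window and a geofence around a sensitive location; all names and thresholds are mine), not GigaSight's actual content-based pipeline, which also analyzes the video frames themselves.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Hypothetical video-segment metadata record (illustrative fields)."""
    t: float     # capture time, seconds since midnight-aligned epoch
    lat: float
    lon: float

def denature(segments,
             blackout=(22 * 3600, 6 * 3600),       # assumed 22:00-06:00 window
             sensitive=(40.4435, -79.9435),        # assumed sensitive location
             radius_deg=0.001):
    """Sketch of metadata-based denaturing: drop segments captured
    during a blackout window or near a sensitive location before they
    are indexed for search."""
    kept = []
    for s in segments:
        hour = s.t % 86400
        in_blackout = hour >= blackout[0] or hour < blackout[1]
        near_sensitive = (abs(s.lat - sensitive[0]) < radius_deg
                          and abs(s.lon - sensitive[1]) < radius_deg)
        if not (in_blackout or near_sensitive):
            kept.append(s)
    return kept
```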
Wireless and Networking Group, Microsoft Research Asia, Sept. 2011 - Mar. 2012
Research Intern. Advisor: Dr. Guobin (Jacky) Shen
    Project Walkie-Markie
    • We build and evaluate a crowdsourced indoor mapping and localization system based on IMU sensors and existing WiFi infrastructure. Walkie-Markie reconstructs the internal pathway maps of buildings without any a priori knowledge of the building, such as its floor plan or access-point locations. It uses tipping points of WiFi signal strength as landmarks to fuse crowdsourced user trajectories obtained from the inertial sensors of users' mobile phones. The maximum discrepancy between the inferred pathway map and the real one is around three meters.
    • We analyze various features of human walking patterns, and propose and verify new models for walking-direction detection, step counting, and stride-length estimation using mobile-phone sensors.
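The tipping-point landmark idea above can be sketched briefly. As a user walks past an access point, the AP's received signal strength (RSS) first rises and then falls; the trend reversal is a stable, reusable landmark. The smoothing window and peak test here are illustrative choices of mine, not Walkie-Markie's exact parameters.

```python
def find_wifi_marks(rss, window=3):
    """Sketch of WiFi-Mark detection (illustrative, not Walkie-Markie's
    exact algorithm): smooth the RSS sequence sampled along a walk,
    then report indices where the trend flips from rising to falling."""
    # moving-average smoothing to suppress per-sample RSS noise
    sm = []
    for i in range(len(rss)):
        lo = max(0, i - window + 1)
        sm.append(sum(rss[lo:i + 1]) / (i + 1 - lo))
    # a tipping point: strictly rising into i, non-rising out of i
    marks = []
    for i in range(1, len(sm) - 1):
        if sm[i - 1] < sm[i] >= sm[i + 1]:
            marks.append(i)
    return marks
```

For example, an RSS trace like [-80, -70, -60, -55, -60, -70, -80] dBm yields a single mark near the closest approach to the AP.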

[TOP]

Last Updated: