RESEARCH

Research Statement

I enjoy both algorithm development and system-level integration, and I like working with both software and hardware.  I prefer projects that require novel and creative solutions.  My focus has been primarily in control, robotics, and machine learning, though I have also dabbled in computer vision and augmented reality.

Robotics was my first love.  Since the age of 5, I have wanted to be a robotics engineer.  There are many varied applications of robotics that I find exciting, e.g., entertainment, agriculture, and medicine.  Robotics requires expertise in many fields of engineering, and it requires complex system integration.  Much of robotics draws on biological inspiration, but there are also robots in novel forms that don't resemble anything living.  I have worked on various robotics platforms, including an abstract Mars rover, a humanoid robot, and a self-driving robot.  I would like to explore soft robotics, and I would also like to work with novel flying and walking robot platforms.  A robotic system cannot be realized without control, and for this reason, I consider it the most important ingredient.

Control engineering was my primary focus in graduate school.  My Ph.D. thesis focused on solving generalized nonlinear optimal control and estimation problems using a high-dimensional manifold projection.  When I began this work, the emphasis was on finding optimal solutions offline through sophisticated mathematics.  However, the most successful control applications in robotics today that involve walking and flight use model predictive control (MPC), which re-solves the entire optimal control sequence at each time step.  This approach has only become realizable through powerful embedded hardware, such as GPU-accelerated MPC with ADMM and embedded deep learning.  One direction of research I would like to engage in is integrating these tools into interesting and novel systems that do not currently have control solutions.  I would also like to work on cloud-based control solutions that give small, lightweight robots access to scalable high-level planning.
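To make the receding-horizon idea concrete, here is a minimal MPC sketch in Python using cvxpy.  Everything in it is an illustrative assumption: a toy double-integrator model, a 20-step horizon, and made-up weights, not any controller I have shipped.

```python
import numpy as np
import cvxpy as cp

# Toy double-integrator model (position, velocity); dt, weights, and
# horizon are assumptions for this sketch, not a real robot model.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 20                      # prediction horizon
Q = np.diag([10.0, 1.0])    # state tracking cost
R = np.array([[0.1]])       # input effort cost

def mpc_step(x0, x_ref):
    """Re-solve the full horizon, then return only the first input."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= 1.0]   # actuator limit
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]

# Closed loop: apply the first input, step the plant, re-solve.
x = np.array([0.0, 0.0])
target = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + B @ mpc_step(x, target)
print(x)  # approaches the target state
```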

Machine learning is a more general set of tools than control engineering.  There is some overlap, and a common theme of optimization is shared by both.  Some of these tools can be employed in robotics, but they have a much wider scope of application.  In the last few years, there have been several very exciting developments that broadly fit under the umbrella of deep learning (architectures with many network layers).  The problem of image classification in machine vision has essentially been solved.  The most exciting developments have been in networks that are capable of generating new samples from a data set.  For example, generative adversarial networks have been used in NVIDIA's driving simulator, style-based generators, and GameGAN.  Variational autoencoders (with internal recurrent neural networks) have also produced similar results, e.g., WorldModels.  And end-to-end neural Turing machines appear to be able to learn and replicate arbitrary computational processes.  Most, if not all, of these solutions can also be deployed as cloud-based machine learning services.  It is difficult to say which aspect of the work still to be done is most exciting.  I am eager to see these technologies adopted by the game industry, and at some point in the not-too-distant future, we may see photo-realistic games generated from raw data.  This connects to my interest in VR/AR.  I would really like to work on 6DOF video (both synthesizing it from real-world data and generating it with sophisticated editing tools).  The possibilities are limitless.

BlueRiver (2021-present)

Vector - Researching MPC for wheeled systems with variational contact. 

DUNE

Clostra (2020-2021)

I worked on the Keymaker project.  We researched anomaly detection in time-series data using forecasting and reconstruction-based deep learning models.  We also developed system-level integration of a time-series database with MLOps that interfaced with a multi-user front-end.
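As a hedged illustration of the reconstruction-based approach (not the Keymaker models; the window size, architecture, and threshold below are assumptions), a small autoencoder can be trained on sliding windows of normal telemetry, and windows it reconstructs poorly are flagged as anomalous:

```python
import numpy as np
import tensorflow as tf

WINDOW = 32  # sliding-window length; an assumption for this sketch

def windows(series, w=WINDOW):
    """Stack overlapping windows of a 1-D series into an (n, w) matrix."""
    return np.stack([series[i:i + w] for i in range(len(series) - w)])

# Train a small dense autoencoder on (mostly normal) data.
train = windows(np.sin(np.linspace(0, 100, 2000)))  # toy "normal" signal
ae = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(WINDOW,)),
    tf.keras.layers.Dense(4, activation="relu"),    # bottleneck
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(WINDOW),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(train, train, epochs=10, batch_size=64, verbose=0)

# Score new data by reconstruction error per window.
test = np.sin(np.linspace(0, 10, 200))
test[100:110] += 3.0                      # inject an anomaly
err = np.mean((windows(test) - ae.predict(windows(test), verbose=0))**2, axis=1)
threshold = err.mean() + 3 * err.std()    # simple 3-sigma threshold
print(np.where(err > threshold)[0])       # window indices flagged
```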

Sententia (2018-2019)

I worked with Jeremy Adsitt to build a cloud document classification pipeline.

I built image-to-text services with Tesseract, and text and image classifiers using TensorFlow.
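A minimal sketch of how the two stages can be wired together, assuming the pytesseract wrapper and a toy two-class corpus (illustrative stand-ins, not the production services):

```python
import pytesseract                      # Python wrapper around the Tesseract CLI
from PIL import Image
import tensorflow as tf

# 1) Image-to-text: run Tesseract OCR on a scanned page.
text = pytesseract.image_to_string(Image.open("scan.png"))

# 2) Text classification: a small TensorFlow model over TF-IDF features.
#    The corpus, labels, and vocabulary size are placeholders.
docs = ["invoice total due", "meeting agenda minutes"]     # toy corpus
labels = [0, 1]                                            # invoice vs. memo
vec = tf.keras.layers.TextVectorization(max_tokens=1000, output_mode="tf_idf")
vec.adapt(docs)
model = tf.keras.Sequential([
    vec,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(tf.constant(docs), tf.constant(labels), epochs=5, verbose=0)
print(model.predict(tf.constant([text]), verbose=0))       # class probabilities
```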

Sprite Robotics (2017-2018)

Robotics and control research at a startup based in Champaign, IL.

Visit the Kickstarter campaign and product website for details.

While at Petronics (Sprite Robotics), I developed an AR control interface with integrated sensor fusion and map building; multi-session SLAM with a monocular camera and globally consistent occupancy; autonomous driving with surface-aware planning and dynamic obstacle avoidance; and motion stabilization for a mobile 360 camera.  I also co-wrote and won an SBIR grant for over $1 million.  I primarily worked with Dr. Erik Johnson, Dr. David Jun, and Dario Aranguiz.

Mousr

Machine Learning on the iCub (2010-2017)

Visit the Language Acquisition and Robotics Group for regular updates.

For more videos, visit our YouTube channel CogRoboticsUIUC.


Projects

VALVE Vortex Group 2012

I contracted for Valve over the summer of 2012, implementing adaptive tracking systems for AR and VR.  Our group consisted of Michael Abrash, Atman Binstock, Aaron Nicholls, Jason Jerald, Dean Dejong, Fabian Giesen, Ben Krasnow, Jeri Ellsworth, and Gordon Stoll.  We were experimenting with many different head-mounted displays.  My work focused on sensor fusion with noisy position measurements from fiducial markers and Sixense readings.  The HMDs had inertial sensors and gyros haphazardly taped on, and I combined these measurements to get high-accuracy, low-latency estimates of head position and orientation.  I built sensor fusion tools for HMDs with an arbitrary arrangement of completely uncalibrated sensors.  Simply shaking the HMD produced all of the necessary parameters, and then the filter gave extremely smooth, low-latency tracking.  Some members of the group joined the Oculus team, which is now owned by Facebook.  Others left Valve to start Technical Illusions and work on CastAR.  The members who stayed partnered with HTC on the Vive.
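The filters I built there were more general, but the core fusion idea can be sketched with a single-axis complementary filter: integrate the fast, drifting gyro for smoothness and bleed in the slow, noisy absolute measurement to cancel drift.  The rates, noise levels, and gain below are all illustrative:

```python
import numpy as np

def complementary_filter(gyro_rate, marker_angle, dt=0.001, alpha=0.98):
    """Fuse a fast, drifting gyro with noisy absolute angle measurements.

    gyro_rate:    angular rate samples (rad/s), low latency but biased
    marker_angle: absolute angle samples (rad), noisy (e.g., fiducials),
                  assumed here to arrive at the same rate for simplicity
    alpha:        trust in the integrated gyro vs. the absolute measurement
    """
    angle = marker_angle[0]
    out = []
    for w, m in zip(gyro_rate, marker_angle):
        # Propagate with the gyro (smooth, low latency) and correct
        # toward the absolute measurement (noisy, drift-free).
        angle = alpha * (angle + w * dt) + (1 - alpha) * m
        out.append(angle)
    return np.array(out)

# Toy data: true angle is a slow sine; gyro has bias, markers have noise.
t = np.arange(0, 5, 0.001)
true = 0.5 * np.sin(t)
gyro = np.gradient(true, t) + 0.05            # 0.05 rad/s bias -> drift
markers = true + np.random.normal(0, 0.05, t.shape)
est = complementary_filter(gyro, markers)
print(np.abs(est - true).mean())  # well below the raw marker noise
```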

Gordon Stoll

NASA Lunar Planetary Science Academy 2009

In the summer of 2009, Cynthia Cheung approached me with a team-lead opportunity for the LPSA at NASA's Goddard Space Flight Center. The members of my team were Aaron Silver, Cletis Nicklow, and Grant Moore. Each team within the academy had its own project. Ours involved designing and implementing a laser system that could be used for both ranging and communication. We designed a system capable of meeting all the requirements and implemented various portions of it on an FPGA platform. Aside from our primary project, there were many other exciting activities within the academy. We gave weekly reports and had many interesting guest speakers. We even spent a week at a meteor impact site in Canada, collecting data and enjoying the scenery.

Aaron Silver, Cletis Nicklow, Grant Moore

NASA Tetrahedral Robotics Research (2006-2007)

The tetrahedral system that we explored is classified as ART (Autonomous Reconfigurable Technology), meaning it uses modular, reconfigurable components to construct complex robotic structures. The individual components that comprise the system are designed to be as simple as possible. TET robots employ linear actuators in tetrahedral configurations; the coordinated extension and contraction of these actuators allows for a myriad of complex motions. In addition, the ability to easily replace broken parts and reconfigure for different tasks makes such technology desirable for space robotics. NASA is presently exploring this technology for rovers, but has a large array of other applications in mind.

The autonomy of these structures comes from the controller hierarchy. NASA wants to limit the number of commands it needs to send to the rover, and the lag time between transmission and reception is appreciable even at the speed of light. For this reason, it is ideal to be able to send a basic set of commands like "move to this location and perform a test." To do this, three control levels are required. The first is the command sent by NASA. The rover's central brain must then translate this command into the necessary length configurations for the structure, which includes both walking and avoiding obstacles. These length commands then proceed to each strut to be controlled independently by a PID loop or some type of decentralized adaptive control for precise movement. Coordination on the second control level is being explored in two main ways. The simplest approach involves constructing a library of basic movement commands (walking gaits) and executing them in a desired order, as sketched below. Once a base library is complete, neural-network-based control is sought to learn new motions through interaction with the environment.
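A hedged sketch of the second and third control levels: a high-level command selects a gait from a library of strut-length keyframes, and each strut tracks its commanded length with an independent PID loop. The gait data, gains, and toy actuator model are made up for illustration, not NASA's controller:

```python
import numpy as np

# Gait library: each gait is a sequence of keyframes, and each keyframe
# gives a target length (m) per controlled strut. Values are placeholders.
GAIT_LIBRARY = {
    "forward_step": [
        [1.0, 1.0, 1.0, 1.2, 1.0, 1.0],
        [1.0, 1.2, 1.0, 1.2, 1.0, 1.0],
        [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
    ],
}

class StrutPID:
    """Independent PID loop for one linear actuator (illustrative gains)."""
    def __init__(self, kp=5.0, ki=0.5, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: each strut is a velocity-controlled actuator.
lengths = np.full(6, 1.0)
struts = [StrutPID() for _ in range(6)]
dt = 0.01
for keyframe in GAIT_LIBRARY["forward_step"]:   # levels 1-2: command -> lengths
    for _ in range(200):                        # level 3: per-strut tracking
        for i, pid in enumerate(struts):
            lengths[i] += pid.update(keyframe[i], lengths[i], dt) * dt
print(lengths)  # struts settle at the final keyframe
```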

Summer 2006: NASA / ESMD Faculty Student Fellowship at GSFC

Our research work involved dynamic modeling and control of a 12TET.  Using inertial tensors and motion constraints, we constructed dynamic models from the Euler-Lagrange equations assembled within SimMechanics. This software made the assembly of modular components quick and easy. It operates within MATLAB using a graphical block interface just like Simulink; in fact, it can interact with Simulink and other packages, including custom MATLAB code. If you are not familiar with Simulink, it is very similar to LabVIEW, but in my opinion it offers considerably more modeling power. Using SimMechanics, a single strut was constructed with the desired frictional and motor behaviors, and then it was simply copied and pasted into nodal configurations. Once the model was obtained, Simulink and MATLAB code was used to implement the strut control and high-level walking-gait commands.
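SimMechanics assembled those equations automatically; as an illustration of the underlying Euler-Lagrange machinery, here is a symbolic derivation in sympy for a single degree of freedom (a pendulum stands in for a strut; the actual 12TET couples many such coordinates through node constraints):

```python
import sympy as sp

# Derive the equation of motion from the Euler-Lagrange equation
#   d/dt(dL/dq') - dL/dq = 0,  with  L = T - V,
# for a 1-DOF pendulum (illustrative stand-in, not the 12TET model).
t, m, l, g = sp.symbols("t m l g", positive=True)
q = sp.Function("q")(t)           # generalized coordinate (angle)
qd = sp.diff(q, t)

T = sp.Rational(1, 2) * m * l**2 * qd**2   # kinetic energy
V = -m * g * l * sp.cos(q)                 # potential energy
L = T - V

eom = sp.diff(sp.diff(L, qd), t) - sp.diff(L, q)
qdd = sp.solve(sp.Eq(eom, 0), sp.diff(q, t, 2))[0]
print(sp.simplify(qdd))   # -> -g*sin(q(t))/l
```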

During that summer, we were employed on site at Goddard Space Flight Center in Maryland for modeling and control work. The other two members of the Hope College team were Aaron Silver and our mentor, Dr. Miguel Abrahantes. We teamed up with another controls team from Georgia, and all of our work was coordinated. We worked alongside NASA employees and other student teams exploring the mechanical and electrical engineering aspects of the project. Each week we reported progress and discussed new ideas. Our models became invaluable for understanding the dynamics of the 12TET and for optimizing mechanical designs, such as minimizing the diameter and spread of the struts meeting at each node.

Miguel Abrahantes, Aaron Silver

Spring 2007: NASA/MSGC Hope College

Potential strut controllers were explored.  A decentralized adaptive controller was tested on a 1TET model.  The adaptive Seraji controller was determined to be unstable over long executions and under discrete inputs.  The struts responded well to PI control, which was ultimately chosen for the 4TET.

Summer 2007: NASA/ESMD Hope College

A 4TET prototype was designed, constructed, and tested.  The 4TET followed the construction of a 1TET assembled as an Engineering Design project. This was the first closed-loop controlled walking gait of an over-constrained tetrahedral robot ever! The summer team consisted of Dr. Miguel Abrahantes, myself, Aaron Silver, and Dan Lithio. Collectively, we developed the control scheme and hardware for the 4TET's autonomous control. The control system took 5 weeks to implement and cost $1k in hardware.  It used MATLAB, a USB DAQ, an IFI controller board, custom string potentiometers, ESCs, and relays.

We used an onboard processor from IFI that read in two analog signals from a reference passed through the laptop. The analog signals were multiplexed to correspond to each strut, letting the processor know what length each strut should be. The struts reported their lengths via spring-loaded string potentiometers I constructed from retractable key chains, and the struts themselves were made from powered car antennas. The onboard control was a simple PI loop in which a weighted sum of the error and the integrated error determined how much power to send to the motor, and the non-responsive dead zone of the motor was corrected for. The power was controlled with 7 PWM signals out of the processor. Alternate struts were controlled via relays because we never needed to control the 3 floor struts. Each PWM went to a Victor 883 electronic speed controller, which was capable of reversible polarity and was provided with a 12V power supply.  [Our implementation that summer required a wired tether, but Miguel implemented a wireless communication system the following summer.]
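A sketch of that per-strut loop: PI on the length error, a dead-zone correction so small commands still move the motor, and PWM saturation. The original ran in MATLAB on the IFI processor; the gains, dead-zone width, and actuator rate below are made-up constants:

```python
KP, KI = 4.0, 0.8        # PI gains (illustrative)
DEAD_ZONE = 0.12         # fraction of full power below which the motor stalls
PWM_MAX = 1.0

def strut_pi(target, measured, integral, dt):
    """One control step; returns (pwm_command, updated_integral)."""
    error = target - measured
    integral += error * dt
    u = KP * error + KI * integral

    # Dead-zone compensation: jump past the motor's non-responsive band.
    if u > 0:
        u = DEAD_ZONE + (1 - DEAD_ZONE) * min(u, 1.0)
    elif u < 0:
        u = -DEAD_ZONE + (1 - DEAD_ZONE) * max(u, -1.0)

    return max(-PWM_MAX, min(PWM_MAX, u)), integral

# Toy usage: length rate proportional to PWM (0.2 m/s at full power, assumed).
length, integral, dt = 0.30, 0.0, 0.01
for _ in range(300):
    pwm, integral = strut_pi(0.45, length, integral, dt)
    length += 0.2 * pwm * dt
print(round(length, 3))   # approaches the 0.45 m setpoint
```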

Senior Design Projects

Spring 2007: affordable linear actuator with controlled 5:1 extension

The purpose of this project was to obtain the necessary linkage to build and test a physical TET robot.  The project was completed ahead of schedule, leaving time to construct and test a 1TET.


Fall 2006: dual prop micro air vehicle with payload tilt control

Stable hover was achieved via two counter-rotating sets of blades that canceled torque.  The battery and electronics were connected via a servo joint.  Relative trimming between the two blades allowed the vehicle to rotate, and when the servo was driven, it caused the payload to tilt, pitching the blades in the desired direction of travel.


Lakeshore Vision & Robotics (2005-2007)

LVR was a small, privately owned company. I met the owner, Dr. Greg Caskey, while he was teaching Modern Physics as a part-time physics professor. This job called on a nice mix of my physics and engineering background. A lot of the work I did for LVR involved the construction of mathematical models to interpret scan data and determine part geometries. This involved post-processing of scan data and nonlinear regression for model comparisons. In addition, repeatability and fit confidence were tested with each system. I set up new hardware for data acquisition, primarily taking advantage of encoders and camera triggers. Some controls work was also done to eliminate vibrations and provide smooth motion. Most of the programming was done alongside my partner, Robert Van Ark, a 2003 Computer Science graduate from Grand Valley. We did most of the low-level hardware programming in C++ and constructed the graphical user interfaces with VB6.
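A hedged sketch of that scan-fitting workflow, with a circle standing in for a real part geometry and synthetic noisy data: fit the model to the scan points by nonlinear least squares, then use the residuals as a fit-confidence proxy.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic scan: points on a circle of radius 5 centered at (3, -2),
# corrupted by scanner noise. Real parts were more complex geometries.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
scan = np.column_stack([3.0 + 5.0 * np.cos(theta),
                        -2.0 + 5.0 * np.sin(theta)])
scan += rng.normal(0, 0.05, scan.shape)

def residuals(p, pts):
    """Signed radial distance of each point from the candidate circle."""
    cx, cy, r = p
    return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

fit = least_squares(residuals, x0=[0.0, 0.0, 1.0], args=(scan,))
cx, cy, r = fit.x
rms = np.sqrt(np.mean(fit.fun**2))   # residual RMS as a confidence measure
print(cx, cy, r, rms)
```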

Summer 2005

Fall 2007