Two Stevens Faculty Receive Prominent ONR YIP Awards

Mechanical Engineering Assistant Professors Nick Parziale and Brendan Englot receive prestigious young investigator honors

Nick Parziale and Brendan Englot

The faculty at Stevens Institute of Technology are an innovative group of professionals committed to exploring cutting-edge research that furthers the school’s mission of seeking solutions to the most challenging problems of our time. This year, the Office of Naval Research (ONR) bestowed Young Investigator Program awards on two Stevens mechanical engineering professors.

Assistant professor Nick Parziale received $469,000 to fund his project measuring transitional and turbulent hypervelocity air flows, and assistant professor Brendan Englot received $508,693 to fund his research leveraging a new variant of a classic artificial intelligence tool.

Tagging air flow of objects moving at hypersonic speeds

Parziale will use this award to build upon his previous research, which focused on developing measurement techniques to study hypersonic boundary layers, the thin layer of gas or liquid adjacent to the surface of an object moving through a fluid. His group will measure how the boundary layer transitions from a well-ordered state to a chaotic one.

Parziale will also study the structure of the chaotic state. A vehicle with a chaotic boundary layer experiences higher drag and heat transfer, slowing the vehicle and increasing the weight of the heat shield it requires. A vehicle whose boundary layer remains well-ordered can maintain a higher speed with less thermal protection.

“Once we are able to better understand how to measure and assess how and why the boundary layer transitions at hypersonic speeds, the possibilities are endless,” said Parziale. “That data can inform how we design aircraft and make it possible to take day trips across the world.”

Parziale will conduct this research by tagging the gas in a wind tunnel with laser beams and taking photos a millionth of a second apart to show how it moves. This strategy is broadly called tagging velocimetry. A specific form, krypton tagging velocimetry, was developed in a collaboration between the Arnold Engineering Development Complex and Parziale’s group through the Air Force Summer Faculty Fellowship Program.

“The concept is similar to taking multiple photos of a stick moving down a creek,” Parziale explained. “You will be able to calculate how the object has moved based on how it shifts between photos. To make the ‘stick’ in our hypersonic flows, we zap, or ‘tag,’ the gas with a specialized laser.”
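In code, the displacement-over-time idea behind tagging velocimetry reduces to simple arithmetic. The sketch below is purely illustrative, assuming hypothetical positions and timing rather than data from Parziale’s experiments: the tagged line of gas is photographed twice a known interval apart, and velocity follows from how far it moved.

```python
import numpy as np

# Illustrative sketch of the displacement-over-time idea behind tagging
# velocimetry. All numbers are hypothetical, not experimental data.

dt = 1e-6  # time between the two exposures, in seconds (about a microsecond)

# Streamwise positions (meters) of the laser-tagged line of gas at four
# heights above the surface, as read from the first and second photos.
x_first_photo = np.array([0.0100, 0.0102, 0.0105, 0.0110])
x_second_photo = np.array([0.0115, 0.0121, 0.0129, 0.0141])

# Velocity at each height: how far the "stick" moved, divided by how long it had.
velocity = (x_second_photo - x_first_photo) / dt  # meters per second

for i, u in enumerate(velocity):
    print(f"height {i}: u = {u:.0f} m/s")
```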

Using new AI to train robots to make safe, reliable decisions

Englot, an MIT-trained mechanical engineer at Stevens Institute of Technology, received a 2020 Young Investigator Award of $508,693 to leverage a new variant of a classic artificial intelligence tool that allows robots to predict the many possible outcomes of their actions, and how likely each is to occur. The framework will allow robots to figure out the best way to achieve a goal by understanding which options are the safest, most efficient, and least likely to fail.

“If the fastest way for a robot to complete a task is by walking on the edge of a cliff, that’s sacrificing safety for speed,” said Englot, who will be among the first to use the tool, distributional reinforcement learning, to train robots. “We don’t want the robot falling off the edge of that cliff, so we are giving them the tools to predict and manage the risks involved in completing the desired task.”

For years, reinforcement learning has been used to train robots to navigate autonomously on water, on land and in the air. But that AI tool has limitations: it makes decisions based on a single expected outcome for each available action, when in fact there are often many possible outcomes. Englot is using distributional reinforcement learning, an AI algorithm that a robot can use to evaluate all possible outcomes, predict the probability of each action succeeding, and choose the most expedient option that is likely to succeed while keeping the robot safe.
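A minimal sketch of that distinction, assuming nothing about Englot’s actual implementation: classic reinforcement learning keeps one expected return per action, while a distributional variant (the quantile-style update below is one common approach) keeps an approximation of the whole distribution of returns. The two actions, their payoffs, and the 10 percent failure rate are all invented for illustration.

```python
import numpy as np

# Toy illustration of distributional RL: instead of a single expected return
# per action, keep a set of quantile estimates approximating the full return
# distribution. The two "actions" and their outcomes are hypothetical.

rng = np.random.default_rng(0)
n_quantiles = 51
taus = (np.arange(n_quantiles) + 0.5) / n_quantiles  # target quantile levels
actions = ["cliff_edge", "long_detour"]
quantiles = {a: np.zeros(n_quantiles) for a in actions}

def sample_return(action):
    # The cliff-edge route is usually fast (+60) but occasionally
    # catastrophic (-100); the detour is slower but perfectly reliable.
    if action == "cliff_edge":
        return -100.0 if rng.random() < 0.1 else 60.0
    return 40.0

# Quantile-regression-style update: each estimate drifts toward the level of
# return it should track (low quantiles follow bad outcomes, high ones good).
lr = 0.1
for _ in range(20_000):
    for a in actions:
        g = sample_return(a)
        quantiles[a] += lr * (taus - (g < quantiles[a]))

for a in actions:
    q = quantiles[a]
    print(f"{a}: mean {q.mean():6.1f}, worst-10% tail {np.sort(q)[:5].mean():7.1f}")
```

The distribution is what exposes the risk: the cliff-edge route has the higher average return, but its worst-case tail sits near -100, while the detour’s tail is as safe as its average.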

Before putting the algorithm to use in an actual robot, Englot’s team must first perfect it. Englot and his team create a range of decision-making scenarios in which to test the algorithm, and they often turn to one of the field’s favorite proving grounds: Atari games. For example, when you play Pac-Man, you are the algorithm deciding how Pac-Man behaves. Your objective is to collect all of the dots in the maze and, if you can, some fruit. But ghosts float around the maze and can kill you. Every second you are forced to make a decision: do you go straight, left or right? Which path gets you the most dots, and points, while also keeping you away from the ghosts?

Englot’s AI algorithm, using distributional reinforcement learning, will take the place of a human player, simulating every possible move to safely navigate its landscape.
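To make the safety trade-off concrete, here is a self-contained toy in the same Pac-Man spirit; the moves, scores, and 10 percent ghost-encounter rate are invented, not from Englot’s project. A player who chooses by the average outcome takes the risky dash for fruit, while one who chooses by a conservative tail statistic such as CVaR stays in the safe corridor.

```python
import numpy as np

# Hypothetical outcome samples for two Pac-Man-style moves. Each array stands
# in for a learned distribution of returns; the numbers are invented.
outcomes = {
    "dash_for_fruit": np.array([80.0] * 9 + [-100.0]),  # high score, 10% ghost
    "safe_corridor":  np.array([60.0] * 10),            # steady, ghost-free
}

def cvar(samples, alpha=0.1):
    """Average of the worst `alpha` fraction of outcomes (a common risk measure)."""
    k = max(1, int(round(alpha * len(samples))))
    return float(np.sort(samples)[:k].mean())

# A mean-greedy player takes the risky dash; a CVaR-greedy player does not.
best_by_mean = max(outcomes, key=lambda m: outcomes[m].mean())
best_by_cvar = max(outcomes, key=lambda m: cvar(outcomes[m]))
print("choice by mean:", best_by_mean)   # dash_for_fruit (mean 62 vs. 60)
print("choice by CVaR:", best_by_cvar)   # safe_corridor (-100 vs. 60 in the tail)
```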

So how do you reward a robot? Englot and his team will assign points to different outcomes: if the robot falls off a cliff, it gets -100 points; if it takes a slower but safer route, it may receive -1 point for every step along the detour; and if it successfully reaches the goal, it may get +50 points.
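As a sketch, that reward scheme might look like the following. The point values are the ones quoted above; the episode lengths and everything around them are hypothetical.

```python
# Reward signal sketch using the point values quoted above; the surrounding
# grid world and episode lengths are hypothetical.
def step_reward(reached_goal=False, fell_off_cliff=False):
    if fell_off_cliff:
        return -100  # catastrophic failure dominates everything else
    if reached_goal:
        return +50   # payoff for completing the task
    return -1        # small per-step cost discourages needless wandering

# Tally two hypothetical episodes: a 20-step safe detour versus a fall on step 3.
detour_total = 19 * step_reward() + step_reward(reached_goal=True)    # -19 + 50 = +31
cliff_total  = 2 * step_reward() + step_reward(fell_off_cliff=True)   # -2 - 100 = -102
print(f"safe detour: {detour_total:+d}   cliff fall: {cliff_total:+d}")
```

Tallied over whole episodes, the slower route comes out well ahead, which is exactly the behavior the reward designer wants the robot to learn.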

“One of our secondary goals is to see how reward signals can be designed to positively impact how a robot makes decisions and can be trained,” said Englot. “We hope the techniques developed in this project could ultimately be used for even more complex AI, such as training underwater robots to navigate safely amidst varying tides, currents, and other complex environmental factors.”