Photonics Unfettered: Beam-Steering, Spatial-Light Modulators, and Superfast Microscopy

From aerospace odysseys to neuroscience applications, Boulder Nonlinear Systems research scientist Janelle Shane develops optics for industry — and explores machine learning with her playful blog
06 November 2019
by Daneet Steffens
By day, Janelle Shane is an optics-industry research scientist; by night, she moonlights as an artificial-intelligence explorer.

“It’s fun,” says research scientist Janelle Shane of her perpetual learning curve at Boulder Nonlinear Systems, a custom light-control manufacturing company. “This was my first job after my PhD. I knew I wanted to go into industry, and this merges post-doc-style research with business.”

With her colleagues, Shane works on projects that encompass a multitude of optics-related technologies, from nonmechanical beamsteering for planetary landers and self-driving cars, to ultrafast microscopy and spatial light modulators for neuroscientists. “We’re driven by cutting-edge science and pushed to build something new,” she says. “That’s part of what makes this job so cool. We’re targeting applications that need really high speed and/or really high pixel count. It’s not a huge market, but the people who do need those speeds, those pixel counts, that performance, they really need it: it’s essential for them to do their research.”

One recent focus for Shane was working with liquid crystal polarization gratings (LCPGs), or geometric phase plates, to make nonmechanical beamsteering more efficient. “As long as you’re able to switch the polarization of the incoming light, suddenly you have a way to send it one direction or the other direction with 99.5% efficiency. And that switching is entirely nonmechanical — it’s not going to spin and put inertia on your spacecraft, for example — so the aerospace industry and NASA are really interested.” One current project examines the feasibility of using LCPGs for planetary landers. Imagine a probe intended for a moon landing: on its way down, it has to watch out for obstacles and steer its way to a safe landing spot. The system has to be small and light for payload purposes, but also has to be able to steer that beam and look in different directions, similar to the lidar on a self-driving car: “We can use the same technology.”
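
To make the steering idea concrete, here is a minimal sketch, assuming a binary cascade of polarization gratings with made-up per-stage deflection angles: each stage sends the beam into its +1 or -1 diffraction order depending on the handedness of the incoming circular polarization, which a liquid-crystal switch flips nonmechanically, so N stages can address 2^N discrete look angles. The stage count and angles below are illustrative assumptions, not Boulder Nonlinear Systems' actual design.

```python
# Illustrative sketch only (assumed angles, not an actual BNS design):
# each LCPG stage deflects the beam by +theta or -theta depending on the
# handedness of the incoming circular polarization; a liquid-crystal
# half-wave switch flips that handedness nonmechanically, so a cascade
# of N binary stages addresses 2**N discrete look angles.
from itertools import product

stage_deflections_deg = [1.25, 2.5, 5.0, 10.0]  # hypothetical per-stage angles

def steering_angles(stages):
    """Total deflection for every combination of per-stage states (+1/-1)."""
    angles = {sum(sign * d for sign, d in zip(signs, stages))
              for signs in product((+1, -1), repeat=len(stages))}
    return sorted(angles)

if __name__ == "__main__":
    angles = steering_angles(stage_deflections_deg)
    print(f"{len(angles)} addressable look angles:", angles)
```

With four stages the sketch prints 16 evenly spaced angles spanning roughly plus or minus 18.75 degrees; in a real system the stage count is traded against size and throughput, since losses multiply through the cascade.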


In a NASA-funded project similar to the planetary-lander work, this one designed to help free-flying Astrobee robots navigate inside the International Space Station, LCPGs steer the field of view of a time-of-flight flash lidar camera to eight different look angles. The steering is video rate and nonmechanical. Lower right: a model space station stitched together from the eight separate flash-lidar viewing angles.

A separate pilot project for NASA entailed exploring the possibility of an add-on for a microscope on the space station: a device for studying particle suspension. It’s a big area of research for companies that make shampoos, hand creams, medicines, cleaning fluids, paints, and other substances that depend on particle suspension. One goal of developing particle suspensions for these products is to make the particles stay in suspension for as long as possible, thus increasing shelf life. That can be a challenge to study, says Shane, because of gravity-driven effects like turbulence and settling. “The idea of doing these studies in the space station is to try and figure out some of the math behind particle suspensions, to find out how the subtler, long-term effects work. If we do it in the space station where there’s no gravity, suddenly we get rid of turbulence as well as gravitational pull which impact settling effects.”

Though that project was never implemented, others thrive. For a “focus-change” microscopy project, Shane is working on a version of the LCPG beam-steering device that doesn’t just steer in terms of direction. “It changes the focus of your microscope. With our LCPGs, you’re looking in different planes but also able to do that rapidly: you don’t physically have to move your sample up and down or move your microscope objective up and down. Especially if you have big samples with big heavy microscope objectives, this is a nice way to change the focus of your microscope very, very quickly.”


Motivation for the focus-change microscopy project, which uses LCPG lenses for fast large-aperture focus change. Credit: Figures A-C are from Yang W, Miller JK, Carrillo-Reid L, Pnevmatikakis E, Paninski L, Yuste R, Peterka DS. Simultaneous Multi-plane Imaging of Neural Circuits. Neuron. 2016 Jan;89(2):269-284.

And for a neuroscience-related project, Shane created computer-programmable holograms powered by spatial light modulators (SLMs), a topic she presented on at SPIE Optics + Photonics in August. “We worked with neuroscientists at Stanford University’s Deisseroth lab who study neural circuits: all these neurons are connected together, firing together, interleaving with one another.”

Attempting to isolate which parts of the brain react to which stimuli, the scientists present stimuli to a mouse — show it something on a screen, let it smell something — and note which area of the brain lights up in response. Once they’ve got a hypothesis, they can use holograms to test it by directly causing those neurons to fire: they can observe whether the mouse behaves as if it has just seen, say, vertical stripes, or just smelled something, even if there’s nothing there. “Can we cause a hallucination by exciting those particular neurons?” says Shane. “It’s like some kind of mouse version of The Matrix.”

For this particular project, the neuroscientists wanted a high SLM pixel count, but they also wanted it to go faster than any previous SLM. “We’ve been working on that since 2015,” says Shane, “since before I joined the company, and we’ve been designing all the different parts of this system. Not just the array with the pixels, but also the electronics that have to get the data onto those pixels at speeds that are the equivalent of streaming an HD movie every second.”


NONE WILL LEAVE: A representative example of Shane's engaging — and endearing — AI cartoons. Credit: youlooklikeathing.com

And that’s just her day job: in her spare time, Shane writes AI Weirdness, an entertaining blog that recounts her adventures with machine-learning algorithms. Shane feeds data to neural networks and shares their attempts to imitate the data, including attention-catching lists of recipes (Artichoke Gelatin Dogs, Crockpot Cold Water), beers (La Cat Tas Oo Ma Ale, River Smush Hoppy Amber Ale), paint colors (Clear Paste, Sudden Pine), and crochet patterns (“My first indication that something was going wrong was when the hats kept exploding into hyperbolic super-surfaces…. Ruffles would turn to tight ruffles, and then to corals, and then to brains.”). The latest extension of the blog — whose merchandise page includes a tea towel with AI-generated cookie names such as Quitterbread Bars, Apricot Dream Moles, and Hand Buttersacks — is a charming, smart, delightfully illustrated book, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place. Channeling the clever-comedic spirits of both Douglas Adams and Randall Munroe, Shane addresses such questions as “What is AI?” “How does it actually learn?” “What are you really asking for?” and whether machine-learning programs are any good at understanding knock-knock jokes. (Spoiler alert: sometimes!)

“I saw a really cool presentation as a freshman in college,” says Shane, who has also transformed her fascination with AI into an engaging and informative TED Talk. “Professor Erik Goodman was presenting his work on genetic algorithms, evolutionary programming. It was a form of AI that imitates evolution and uses that to evolve answers to a problem. Usually, potential solutions are like an organism: they’re part of a population and some of them are getting killed off and some are mating or mutating to form the next generation. If you’ve picked your fitness function correctly — so you’re correctly selecting the ones that survive versus the ones that croak — then you can evolve toward an answer. And, just like biological evolution comes up with really weird examples of stuff that we would never have thought of, like, ‘Oh yeah, you can survive by looking like bird poop,’ or ‘You can survive by eating hydrogen sulfide,’ the artificial evolution comes up with some really weird stuff: it would come up with a car bumper that was some kind of weird organic mess, but it did crumple in just the right way when it was hit.”
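
For readers who want to see the shape of that loop, here is a minimal sketch of a genetic algorithm, with a toy fitness function (matching a short target string) standing in for the engineering problems Goodman's group actually tackled; the population size, mutation rate, and target are illustrative assumptions, not details from his work.

```python
# Toy genetic algorithm: a population of candidate strings is scored by a
# fitness function, the fittest survive, and the next generation is made
# by mutating the survivors. Everything here is illustrative.
import random
import string

TARGET = "bird poop"                      # hypothetical goal to evolve toward
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    """Higher is better: number of positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Each character has a small chance of being replaced at random."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, survivors=20, generations=500):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return gen, population[0]
        parents = population[:survivors]   # selection: the rest "croak"
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return generations, population[0]

if __name__ == "__main__":
    generation, best = evolve()
    print(f"best candidate after {generation} generations: {best!r}")
```

The weirdness Shane describes comes from the fitness function: the algorithm optimizes exactly what you score, not what you meant, which is how you can end up with a bumper that crumples correctly but looks like an organic mess.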

Compared to the rush to gush over AI and its enabling capabilities, Shane has a more studied approach. AI, she points out, is in its infancy; we’re still learning what it can and cannot do. Her writing is imbued with a palpable affection for her neural networks’ responses — their often-off-piste outcomes are part of a learning process, after all — and she gets a kick out of AI’s spontaneous and unexpected solutions. “It’s a kind of a different way of looking at our world through a completely different point of view,” she says. “And reflecting it back to us without some of the assumptions that we have about what’s connected to what, or what the context is. So, yeah, I like that when that happens. It’s like, ‘Oh yeah, technically you could solve a problem this way. We weren’t expecting that and maybe that’s not useful for us, but that is an interesting way of looking at it.’”

And there’s nothing artificial about that intelligence.
