Part 1: Purpose
The paradigm of space technology is shifting from large, single, expensive spacecraft towards groups of smaller, cheaper spacecraft which intelligently cooperate to achieve challenging objectives. This creates many potential benefits in terms of flexibility, coverage and performance of space systems, with applications as diverse as distributed science, communications, positioning, space domain awareness, remote sensing, manufacturing, and on-orbit servicing. The point is, there are lots of advantages! But also, distributed space systems are hard.
Operating multiple spacecraft in orbit brings with it a host of challenges. How should we model the behaviour of these systems? How should they communicate with each other? How can we accurately locate them? How can we safely control them? How do they make decisions?

StarFOX – or the “Starling Formation-flying Optical eXperiment” – aims to address one point in particular: distributed space system “navigation”. Navigation here refers to “state estimation”, or being able to use measurements (from some available source) to determine the physical state of a system (which might include its position, velocity, and other properties). If you want to achieve something in space, you’ve gotta know where you are… and space is a big, complicated place.
Furthermore, lots of common navigation solutions don’t work in the places we’re interested in. For example, GPS can’t be applied near Mars (…yet), and using radio measurements from Earth-based antennas isn’t scalable to big multi-spacecraft groups. Ideally, we’d like a navigation method that’s:
- Autonomous, i.e. can operate without human supervision or inputs
- Self-contained, i.e. can run using only on-board resources
- High-performing, i.e. is robust enough and accurate enough to enable useful missions
- Distributed, i.e. can be shared across a group of spacecraft of arbitrary size
- General, i.e. can operate in any orbit
A favourable solution for many scenarios is “angles-only” navigation, in which observer spacecraft take photographs of target space objects with cameras. Cameras are nice because they’re small, ubiquitous, robust sensors that can image non-cooperative objects (…like rocks. Even if a rock doesn’t have GPS, you can still take a picture of it with a camera.) Then, you can easily take the camera image and process it to get the “bearing angles” (or direction) to each target. Because these measurements come from within the system itself, angles-only methods are independent and self-contained, which is great!
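For the curious, here’s a minimal sketch of the image-to-angles conversion, using a basic pinhole camera model in Python. The intrinsics and pixel values are made-up placeholders rather than the actual Starling camera calibration, and the hard part in practice (finding the target centroid in the image to begin with) is not shown.

```python
import numpy as np

def pixel_to_bearing(u, v, fx, fy, cx, cy):
    """Convert a target's pixel centroid (u, v) into a unit line-of-sight
    vector and bearing angles in the camera frame, using a simple pinhole
    camera model. Intrinsics here are illustrative placeholders, not the
    real Starling camera parameters."""
    # Normalised image-plane coordinates (camera boresight along +z)
    x = (u - cx) / fx
    y = (v - cy) / fy
    los = np.array([x, y, 1.0])
    los /= np.linalg.norm(los)  # unit line-of-sight vector to the target
    azimuth = np.arctan2(los[0], los[2])                      # rad
    elevation = np.arctan2(los[1], np.hypot(los[0], los[2]))  # rad
    return los, azimuth, elevation

# Example: a target centroid detected on a (hypothetical) 1024 x 1024 detector
los, az, el = pixel_to_bearing(540.2, 497.8, fx=1200.0, fy=1200.0, cx=512.0, cy=512.0)
print(f"azimuth = {np.degrees(az):.3f} deg, elevation = {np.degrees(el):.3f} deg")
```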


The disadvantage (there’s always one) is that bearing angles are tricky for state estimation. Basically, they don’t include direct range information: an angle tells you the direction of your target but not how far away it is. Knowing that distance is pretty vital for, say, walking around your house, let alone flying through space, so we need to recover it somehow. Typically, we deduce target range by watching how the bearing angles evolve over time instead… which does work, because under real orbital dynamics the way the angles evolve depends (weakly) on how far away the target is, but extracting the complete state of the system can be mathematically complicated. In fact, if you model the relative motion with simple linearised dynamics, range isn’t observable from angles alone at all, which is why older methods had to lean on maneuvers.
Several prior tests have actually performed angles-only navigation in orbit, including ARGON (in 2012) and AVANTI (in 2016). Both experiments successfully navigated a single target satellite with respect to a single observer. However, ARGON and AVANTI didn’t quite fulfil all of those criteria above, and featured prominent deficiencies:
- Prior knowledge of the target state was required to begin navigation.
- Satellites had to perform maneuvers to help deduce target distances.
- Only one observer and one target could be handled by the system.
- It was not possible to estimate the absolute orbits of the observer and target using only bearing angles (only their relative motion).
- It was not possible to estimate auxiliary state variables.
This (finally) brings us to StarFOX! In essence, StarFOX intends to be the first in-flight demonstration of autonomous, distributed navigation for multi-spacecraft systems. Through new algorithms, it overcomes the above shortcomings with the goal of enabling new and exciting multi-spacecraft missions. StarFOX at its core is a technology demonstration: if we can prove these methods work in orbit, we can apply them with confidence to future missions.
Part 2: Concept
StarFOX was formally proposed in 2018 by the Stanford Space Rendezvous Laboratory (SLAB) as a core payload of the NASA Ames Starling mission. Starling is a swarm technology demonstration consisting of four small satellites in Earth orbit, aiming to demonstrate four vital capabilities:
- Autonomous and distributed swarm navigation
- Autonomous and distributed swarm communication
- Autonomous and distributed swarm decision making
- Autonomous and distributed swarm control
Starling will test whether the technologies work as expected, what their limitations are, and what developments are still needed for swarms of small spacecraft to be successful. Currently, Starling plans to launch in June 2022 and will conduct a six-month baseline mission. StarFOX will consist of fourteen 2-3 day experiment “blocks” spread throughout the mission, each designed to investigate a different navigation configuration. Initial tests will focus on simpler single-observer scenarios, while later tests will explore ambitious multi-observer, multi-target distributed navigation in several different swarm formations.

The specific software being evaluated by StarFOX is “ARTMS”, or the Absolute and Relative Trajectory Measurement System. ARTMS is a complete, autonomous architecture for navigation and timekeeping of single and distributed space systems. It applies angles-only navigation whereby cameras aboard cooperative observer satellites provide bearing angle measurements to target resident space objects. Angles are also exchanged between observers over an inter-satellite radio link. The figure below provides a notional illustration.

As mentioned, ARTMS overcomes five major shortcomings of previous angles-only flight demonstrations. It achieves this by leveraging three novel algorithms developed by SLAB. (Here’s where the pitch gets a bit technical.)
ARTMS is divided into three core modules: image processing (IMP), batch orbit determination (BOD), and sequential orbit determination (SOD).
- IMP robustly identifies multiple targets in 2D images from a single monocular camera, without requiring a priori relative orbit knowledge. It achieves this by applying multi-hypothesis tracking, aided by domain-specific, scale-independent kinematic modeling of orbital motion.
- BOD produces coarse initial orbit estimates for each target by exploiting the advantages of a Relative Orbit Element (ROE) state representation and using a one-dimensional sampling scheme to resolve the weakly-observable range to each target.
- SOD continually refines the global state estimate of the distributed system using a newly designed Unscented Kalman Filter with adaptive process noise estimation. SOD fuses measurements from multiple observers received over the inter-satellite link and exploits nonlinear dynamical effects to achieve complete maneuver-free state observability and time synchronization.
The figure below illustrates the architecture.
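For a flavour of what the ROE representation looks like, here’s a minimal Python sketch of the quasi-nonsingular relative orbit elements commonly used in the formation-flying literature. It’s purely illustrative: the exact state parameterisation, frames and units used inside ARTMS are not reproduced here.

```python
import numpy as np

def quasi_nonsingular_roe(oe_obs, oe_tgt):
    """Quasi-nonsingular relative orbit elements (ROE) of a target relative
    to an observer. Inputs are dicts of Keplerian elements:
    a [m], e [-], i [rad], raan [rad], argp [rad], M [rad] (mean anomaly).
    Textbook definition, shown for illustration only."""
    # Mean arguments of latitude u = argp + M
    u_o = oe_obs["argp"] + oe_obs["M"]
    u_t = oe_tgt["argp"] + oe_tgt["M"]

    d_a   = (oe_tgt["a"] - oe_obs["a"]) / oe_obs["a"]                              # relative semi-major axis
    d_lam = (u_t - u_o) + (oe_tgt["raan"] - oe_obs["raan"]) * np.cos(oe_obs["i"])  # relative mean longitude
    d_ex  = oe_tgt["e"] * np.cos(oe_tgt["argp"]) - oe_obs["e"] * np.cos(oe_obs["argp"])
    d_ey  = oe_tgt["e"] * np.sin(oe_tgt["argp"]) - oe_obs["e"] * np.sin(oe_obs["argp"])
    d_ix  = oe_tgt["i"] - oe_obs["i"]
    d_iy  = (oe_tgt["raan"] - oe_obs["raan"]) * np.sin(oe_obs["i"])
    return np.array([d_a, d_lam, d_ex, d_ey, d_ix, d_iy])

# Example: two near-circular low-Earth orbits a short distance apart (made-up numbers)
observer = dict(a=6878e3,   e=0.0010, i=np.radians(97.4), raan=0.0,  argp=0.0, M=0.0)
target   = dict(a=6878.2e3, e=0.0011, i=np.radians(97.4), raan=1e-5, argp=0.0, M=5e-5)
print(quasi_nonsingular_roe(observer, target))
```

Scaled by the observer’s semi-major axis, these elements map neatly onto the geometry of the relative motion, which is part of what makes the representation attractive for angles-only work.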

Overall, ARTMS is able to provide autonomous orbit and auxiliary state estimates (e.g. clock offsets or ballistic coefficients) for the host spacecraft and each detected target. It does not require GPS availability or external absolute orbit updates, and it is able to detect and track unidentified or noncooperative targets. The only hardware requirements posed by ARTMS are a camera on each observer (e.g. a star tracker) and an inter-satellite link between observers (e.g. a low-bandwidth radio).
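To give a sense of how lightweight that inter-satellite exchange can be, here’s an illustrative sketch of what a single shared angles-only measurement might contain. The field names and types are assumptions for illustration, not the actual ARTMS or Starling message format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SharedBearingMeasurement:
    """One angles-only measurement, as an observer might broadcast it over
    the inter-satellite link. All fields are hypothetical placeholders,
    not the real ARTMS telemetry definition."""
    observer_id: int       # which swarm member took the image
    target_id: int         # tracker label assigned to the observed object
    epoch: float           # measurement time tag, e.g. seconds past a reference epoch
    azimuth_rad: float     # bearing angles in the observer's camera frame
    elevation_rad: float
    q_camera_to_inertial: Tuple[float, float, float, float]  # attitude quaternion for frame rotation
    angle_std_rad: float = 1e-4  # assumed 1-sigma measurement noise

# Example usage with made-up numbers
msg = SharedBearingMeasurement(
    observer_id=1, target_id=3, epoch=1_234_567.0,
    azimuth_rad=0.0213, elevation_rad=-0.0098,
    q_camera_to_inertial=(0.0, 0.0, 0.0, 1.0),
)
```

A handful of numbers per target per image is all that needs to cross the link, which is why a low-bandwidth radio is enough.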
High-fidelity simulation and verification of ARTMS has been performed for a variety of scenarios, including navigation around Earth, the Moon and Mars; low- and high-altitude orbits; near-circular and eccentric orbits; and for variable inter-object separations ranging from several kilometers to several thousand kilometers. This includes hardware-in-the-loop simulations using CubeSat star tracker imagery and simulations running on CubeSat flight processors. ARTMS can therefore be flexibly generalized to navigation scenarios of interest and can be applied to spacecraft swarms and constellations of arbitrary size in a decentralized fashion.
To provide a more visual indication, here’s a quick video of ARTMS outputs during a StarFOX simulation. Key elements are the simulated camera measurements (bottom left); the absolute and relative orbits of the swarm (center); the SOD state estimates, which converge nicely (top right); and IMP and BOD outputs (bottom right).
Looks pretty good, doesn’t it? (I mean, at least the graphs are pretty.) Of course, ARTMS isn’t perfect and there’s a LOT more development and research we’d like to do at SLAB. Nevertheless, StarFOX is a crucial first step towards autonomous multi-spacecraft navigation in space and, as a bonus, it’s SLAB’s very first flight mission! That means I’m personally both extra-excited and extra-nervous.
Lastly, a quick acknowledgement of those at SLAB who’ve contributed vital work to StarFOX and ARTMS:
- Simone D’Amico (founder and director)
- Joshua Sullivan (SOD and ARTMS development)
- Adam Koenig (BOD and ARTMS development)
- Justin Kruger (IMP and ARTMS development)
- Toby Bell (ARTMS C++ development)
- Katie Wallace (ARTMS Mars development)
- Keidai Iiyama (ARTMS lunar development)
Thanks for reading! Feel free to poke me if you’ve got any questions.