There has been a push recently to develop technology to enable the use of UAVs in GPS-denied environments. As UAVs become smaller, there is a need to reduce the number and sizes of sensor systems on board. A video camera on a UAV can serve multiple purposes. It can return imagery for processing by human users. The highly accurate bearing information provided by video makes it a useful tool to be incorporated into a navigation and tracking system. Radars can provide information about the types of objects in a scene and can operate in adverse weather conditions. The range and velocity measurements provided by the radar make it a good tool for navigation.
FMCW radar and color video were fused to perform SLAM in an outdoor environment. A radar SLAM solution provided the basis for the fusion. Correlations between radar returns were used to estimate dead-reckoning parameters to obtain an estimate of the platform location. A new constraint was added in the radar detection process to prevent detecting poorly observable reflectors while maintaining a large number of measurements on highly observable reflectors. The radar measurements were mapped as landmarks, further improving the platform location estimates. As images were received from the video camera, changes in platform orientation were estimated, further improving the platform orientation estimates. The expected locations of radar measurements, whose uncertainty was modeled as Gaussian, were projected onto the images and used to estimate the location of the radar reflector in the image. The colors of the most likely reflector were saved and used to detect the reflector in subsequent images. The azimuth angles obtained from the image detections were used to improve the estimates of the landmarks in the SLAM map beyond what was possible with the radar alone.
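The projection step described above can be illustrated with a minimal sketch. Assuming a simple pinhole camera model with the camera axis aligned with the radar boresight, a landmark's bearing maps to a pixel column, and the Gaussian azimuth uncertainty can be propagated to first order into a pixel search window for the reflector. The function name, axis conventions, and parameter values here are illustrative assumptions, not the author's implementation.

```python
import math

def project_landmark(x, y, focal_px, cx, sigma_az):
    """Project a landmark in the camera frame (x forward, y left) onto the
    image and convert its azimuth standard deviation (rad) into a pixel
    search half-width. Pinhole model and frame conventions are assumptions.
    """
    az = math.atan2(y, x)                      # bearing to the landmark (rad)
    u = cx - focal_px * math.tan(az)           # pixel column of the projection
    # First-order propagation: |du/daz| = focal_px / cos(az)^2
    sigma_u = focal_px * sigma_az / math.cos(az) ** 2
    return u, 3.0 * sigma_u                    # column and +/-3-sigma half-width

# A landmark 10 m straight ahead projects to the image center; with a
# 500 px focal length and 0.01 rad azimuth sigma the search window is
# +/-15 px around that column.
u, half_width = project_landmark(10.0, 0.0, 500.0, 320.0, 0.01)
```

Pixels falling inside this window would then be scored (here, by the saved reflector colors) to localize the reflector in the image.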