Introduction
The probability of detecting and identifying targets in cluttered environments is governed by the clutter density and by a remote sensor's ability to see through holes in the clutter. When imagery is acquired from low-flying dynamic platforms such as small UAVs, or from ground-based sensors used by special forces, often only glimpses are possible through holes between trees or into steep canyons. In a given EO/IR image, only a limited number of ground patches may be visible. When a second image is coregistered with the first, the number and size of the visible patches increase. Given collection from enough viewpoints, it is in principle possible to piece together enough patches to view a significant portion of the scene behind the clutter and discover otherwise obscured targets.

Piecing the patches together accurately is a major problem without a fine-grained determination of the clutter geometry, i.e., the shape of the trees and the geometry of the intervening holes. Compiling this complex geometry through stereoscopic techniques is problematic when many points on the ground are viewable only through one hole from a single vantage point.

This paper presents an approach to extracting EO data from multiple holes in clutter with the assistance of a range sensor, such as LADAR, fused at the pixel level. The resulting 3D “patch data” provides an opportunity to characterize the viewability of targets in clutter and to compute the probability of target detection, given that a target is hiding in the clutter. An approach to handling this so-called negative information is also introduced, along with a way to treat the temporal evolution of target estimates and expectations given this type of data.
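To make the notion of negative information concrete, the following is a minimal sketch, not the paper's method, of how "no detection" over a partially viewed scene can still shift belief about target presence. It assumes a deliberately simple Bayesian model with three hypothetical quantities: a prior probability that a target is present, the fraction of candidate ground area visible through holes after fusing the available views, and the probability of detecting a target that lies inside a viewed patch.

```python
def posterior_after_no_detection(prior, visible_fraction, p_detect):
    """Bayesian update of P(target present) after one look that covered
    `visible_fraction` of the candidate ground area and produced no detection.

    prior            -- P(target present) before the look
    visible_fraction -- fraction of the ground seen through holes in clutter
    p_detect         -- P(detect | target lies in a viewed patch)
    """
    # If a target is present, "no detection" means it was either outside
    # the viewed patches or inside one but missed by the sensor.
    p_miss_given_present = 1.0 - visible_fraction * p_detect
    # If no target is present, "no detection" is certain (no false alarms
    # in this simplified model).
    numerator = prior * p_miss_given_present
    return numerator / (numerator + (1.0 - prior))

# Accumulating negative information over successive coregistered looks,
# with patch coverage growing as more viewpoints are fused:
p = 0.5
for coverage in (0.10, 0.25, 0.40):
    p = posterior_after_no_detection(p, coverage, p_detect=0.9)
```

Each look that fails to find a target lowers the presence probability, and the drop is larger when the fused patches cover more of the scene, which is why characterizing patch viewability matters for the temporal evolution of the target estimate.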