Virtual reality: New challenges for sensor simulation

"How to incorporate the real world into the virtual one? This is the challenge facing the military sensor simulation industry throughout the world. And the advancement of technology is only making things tougher…"

High-fidelity, real-time sensor simulations have always been constrained by the cost, capability and availability of technology. In the past, the ambitions of simulation engineers often exceeded the capability of widely-available equipment, resulting in hardware-based solutions that were costly to build and difficult to maintain. Thus, in the early days of simulation, the primary challenge faced by engineers was cost-effectively modelling a particular sensor.

With the advent of affordable PC computing technology, engineering professionals soon had the resources needed to implement complex models of various sensor systems. But while modelling a specific sensor was the difficulty in the past, advances in technology have shifted the challenge of simulation from modelling the sensor to modelling the environment.

Challenges of the Past: Modelling the Sensor
One of the earliest radar simulations was designed in the late 1950s by R. K. Moore at the University of Kansas. It was an acoustic simulation using a water tank and piezoelectric transducers to simulate the transmission, delay, reflection and reception of a radar signal. The device could generate a representative radar range trace and thus was useful for research and aided in radar system design.

By the mid-1960s, glass plates with flying spot scanners were routinely used to generate reasonably good radar displays. The glass plate was a photographic positive of a radar image; when backlit with a uniform light source, a photodiode could be rapidly scanned on the front side to produce a voltage proportional to the radar return. Unfortunately, this system had numerous shortcomings, including the inability to accurately represent angle resolution effects or terrain shadowing due to changes in aircraft altitude.

3D terrain boards were used in the 1970s. A vertical video probe was used along with a directional light source, positioned according to the encounter geometry. The light source illuminated a monochrome-coloured terrain model in a manner that created pseudo-radar imagery with fairly accurate shadowing. The system worked, but was cumbersome and inflexible. And while it could represent the occultation effects of altitude change, it lacked the ability to simulate complex effects such as angular resolution.

Another difficulty with early radar simulation was the lack of adequate source data from which to build an accurate landmass database. This problem was abated in the early 1970s with the introduction of the Defense Mapping Agency (DMA) Digital Landmass System (DLMS), the forerunner of Digital Terrain Elevation Data (DTED) and Digital Feature Analysis Data (DFAD). It was not long before these data sources provided the foundation for new simulation methods. By 1980, hardware-intensive radar simulators comprising 800-900 printed circuit boards stuffed with small-scale integrated circuits were common. While unprecedented in capability, they were big, expensive and nontrivial to maintain.

By the mid-1980s, several radar simulators were built with minicomputer front ends and array processor back ends. Unfortunately, these systems shared many of the undesirable characteristics of their predecessors. Hence, simulation pioneers like Dr George Bair of Merit Technologies (and later Camber Corporation) developed software-only solutions using high-speed, single-board computers on a VME bus. This approach proved to be successful, and as computing technology improved, so did the software sensor models.

By 1990, general-purpose computer workstations were powerful enough to replace VME systems, thus setting the stage for a steady migration across platform architectures – from large consoles such as Silicon Graphics machines to PC workstations. Over the years, this trend continued and now, in 2013, it is not uncommon to see high-fidelity sensor models running on tablet systems or single-board embedded systems.

Challenges of Today and Future: Modelling the Environment
While the simulation industry has dramatically increased the fidelity of software sensor modelling through faster multi-core processors, high-speed interfaces such as Gigabit Ethernet, and high-capacity memory and storage devices, it now faces a new set of challenges. Advances in hardware technology have essentially levelled the playing field for high-fidelity sensor simulations: low-cost hardware gives an ever-widening array of software developers a platform upon which complex software-based simulations can run.

While in the past the fidelity of the environment was secondary to modelling the sensor, the tables have now turned; modern complex sensor models are only as realistic as the fidelity of the simulated environment in which they operate. In the training and simulation world, this simulated environment generally includes terrain and cultural databases, atmospheric and weather phenomena, and independent target models.

Challenge 1: Acquisition and Incorporation of High-Resolution Satellite Imagery
Prior to the commercialisation of satellite imagery, which began with the launch of the IKONOS satellite in 1999, the incorporation of imagery in sensor simulation was largely limited to the use of geo-typical textures. These representative samples provided artificial detail across areas of interest, but were not accurate representations of any real region. These techniques worked for a while, but as the fidelity of radar and visual sensors such as Electro-Optical (EO) and Infrared (IR) systems increased, so did the need for high-resolution, geo-specific imagery. Simulating state-of-the-art sensors, such as EO/IR imaging systems that can achieve zoomed-in narrow fields of view or Synthetic Aperture Radar (SAR) systems that can render high-resolution ground images, requires accurate source imagery of comparable resolution. Unfortunately, satellite imagery at the needed resolutions is often cost-prohibitive over large areas and is either unavailable for many regions of the Earth or not ideal for simulation use.

Satellite imagery must be processed before incorporation into sensor databases. Typically, it is captured as a high-resolution, panchromatic image (grey-scale captured from all visible colours) and multispectral band images at lower resolutions. Since the multispectral images are obtained at resolutions lower than desired, they are merged with the panchromatic image to produce a single, high-resolution, colourised image. This colorisation process is known as pan-sharpening. Additionally, the satellite imagery must be orthorectified to warp the imagery so that it can be accurately applied or referenced within a map or coordinate system.
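The merge of a high-resolution panchromatic image with lower-resolution colour bands can be sketched with one of the simplest pan-sharpening methods, the Brovey transform. This is an illustrative sketch, not the specific algorithm any particular imagery vendor uses; it assumes the multispectral bands have already been upsampled onto the panchromatic pixel grid.

```python
import numpy as np

def brovey_pansharpen(pan, multispectral):
    """Pan-sharpen multispectral bands with a high-resolution
    panchromatic image using the Brovey transform.

    pan: 2-D array (H, W), high-resolution grey-scale intensity.
    multispectral: 3-D array (H, W, bands), already upsampled to
    the panchromatic grid (e.g. by bilinear interpolation).
    """
    # Per-pixel sum of the multispectral bands; guard against
    # division by zero in fully dark pixels.
    total = multispectral.sum(axis=2, keepdims=True)
    total = np.where(total == 0, 1e-6, total)
    # Each band keeps its share of the total colour, while the
    # overall brightness is taken from the panchromatic image.
    return multispectral * (pan[..., np.newaxis] / total)
```

The result preserves the colour ratios of the multispectral bands at the spatial resolution of the panchromatic image, which is the essence of any pan-sharpening scheme.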

Aside from technical issues related to obtaining useful satellite imagery, there are quality issues to consider as well. One common example is the possibility of clouds and weather patterns obscuring strategic points of interest. Another is imagery captured at off-nadir angles (that is, not from directly above the area of interest), which results in long shadows and lean-over effects from tall buildings and structures in urban environments. Since these effects imply the position of the viewer (the satellite), they confuse any simulated sensor perspective, which is almost certainly in a different position. Over larger coverage areas, care should be taken to ensure that the imagery is taken at the same time of day and year to provide consistency in appearance.
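The lean-over effect follows from simple geometry: the top of a structure is displaced horizontally from its base by an amount proportional to the tangent of the off-nadir angle. A minimal sketch (the function name and the hot-spot numbers are illustrative, not from the article):

```python
import math

def leanover_displacement(height_m, off_nadir_deg):
    """Horizontal displacement of a structure's top relative to its
    base in imagery captured at the given off-nadir angle.
    For example, a 100 m building imaged 20 degrees off-nadir
    appears to lean roughly 36 m away from the satellite's ground
    track, which is why near-nadir collection matters for urban
    sensor databases."""
    return height_m * math.tan(math.radians(off_nadir_deg))
```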

Each of the above deviations in image quality and technical format requires manual correction. Sometimes special tasking for specific flyovers of areas is required, which adds to the cost and the schedule of delivery. At other times, a quality issue can be addressed in labour-intensive post-processing, such as manually correcting or reconstructing portions of the image.

Obtaining high-resolution imagery over, for instance, a 500 sq km area is cost-prohibitive for most sensor simulation programmes, even before the labour associated with post-processing is accounted for. Thus, smaller areas of interest are usually identified for which high-resolution imagery would be required. This pre-set designation of interest areas represents a particular inflexibility in a training environment, in which an urgent, real-world need for training in a new area can quickly arise.

Challenge 2: Incorporation of Cultural Data and Material Classification
In order to accurately simulate the characteristics of various sensor systems, a more thorough representation of the environment is required than for a typical out-the-window scene. Material classification is the process of assigning a combination of materials, with their sensor attributes, to each pixel in imagery.

This can be achieved by maintaining an extensive library of material characteristics, and by accurately cross-referencing the materials present in the multispectral satellite imagery to those in the library. For each material classification, there is an attribute table that includes spectral reflectance and temperature throughout the diurnal cycle as a function of season, weather, and surface azimuth and slope. Through this classification, for example, an IR simulation can determine whether a particular kind of material should appear hot or cold.
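The attribute-table idea can be illustrated with a toy material library. All names, temperatures and the threshold below are hypothetical, purely to show how a per-material lookup would drive an IR simulation's hot/cold decision:

```python
# Hypothetical material library: the names, attribute values and
# diurnal phases are illustrative, not the article's data model.
MATERIAL_LIBRARY = {
    "asphalt": {"ir_reflectance": 0.05, "temp_c": {"day": 45.0, "night": 12.0}},
    "water":   {"ir_reflectance": 0.02, "temp_c": {"day": 18.0, "night": 16.0}},
    "grass":   {"ir_reflectance": 0.10, "temp_c": {"day": 28.0, "night": 10.0}},
}

def ir_classification(material, phase, hot_threshold_c=25.0):
    """Return 'hot' or 'cold' for a material at a point in the
    diurnal cycle, relative to a simple temperature threshold."""
    temp = MATERIAL_LIBRARY[material]["temp_c"][phase]
    return "hot" if temp >= hot_threshold_c else "cold"
```

A real library would parameterise these values by season, weather and surface orientation as well, as the text describes, but the lookup structure is the same.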

The first challenge of achieving accurate material classification is that it requires source data that includes multispectral imagery covering the spectrum of the sensor to which the database will be applied. For performing material classification of imagery to be incorporated into a radar sensor database, no such multispectral imagery is widely or commercially available. While some level of classification can be performed on colour (RGB) imagery, its resultant fidelity will be limited. Camber has developed database tools specifically for performing material classification for radar sensor databases.

The second challenge of achieving accurate material classification involves the resolution of the multispectral imagery or other commercially available source data. In general, commercially available multispectral imagery is provided at resolutions much coarser than the resolution of the visible imagery that would be rendered by the sensor simulation. Alternative sources of data, such as land use/land cover GIS data, are only available at even coarser resolutions than multispectral imagery. In addition, because land use/land cover data is collected with substantial human involvement, numerous errors exist in such data.

The third challenge of achieving accurate material classification involves the process itself. Commercial image mapping tools are available that utilise algorithms to automatically analyse satellite imagery and perform material classification based on a reference materials library. But, in practice, these tools have limited accuracy, and the resulting mapping must be carefully and arduously reviewed by human eyes and tuned manually. Alternatively, image mapper tools are available for man-in-the-loop classification on a per-pixel basis, which is a labour-intensive and time-consuming process.
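One common automatic approach of the kind these tools use is minimum-distance classification: each pixel is assigned the library material whose reference spectrum is nearest to the pixel's measured spectrum. A minimal sketch (the signatures and the Euclidean metric are illustrative assumptions; commercial tools use more sophisticated classifiers):

```python
import numpy as np

def classify_pixels(image, signatures):
    """Minimum-distance material classification.

    image: (H, W, bands) array of multispectral pixel values.
    signatures: dict mapping material name -> (bands,) reference
    spectrum from the materials library. Each pixel is labelled
    with the material whose spectrum is nearest in Euclidean
    distance.
    """
    names = list(signatures)
    refs = np.stack([signatures[n] for n in names])     # (M, bands)
    # Distance from every pixel to every reference spectrum.
    diffs = image[..., np.newaxis, :] - refs            # (H, W, M, bands)
    dist = np.linalg.norm(diffs, axis=-1)               # (H, W, M)
    idx = dist.argmin(axis=-1)                          # (H, W)
    return np.vectorize(names.__getitem__)(idx)
```

The manual review the text describes then amounts to inspecting and overriding these per-pixel labels where the nearest-spectrum assumption fails.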

Challenge 3: Accurate Correlation across Multiple Sensor Databases
High-fidelity landmass databases incorporating high-resolution terrain (that is, elevation) data, cultural and natural features, coastline and inland vector data, and high-resolution, multispectral satellite imagery are essential to the modern military readiness trainer. A further requirement is that these individual databases be cross-correlated across the sensor simulations, ensuring 100 per cent correlation between each sensor's rendered scene and operation. Furthermore, to support training operations for rapidly-changing scenarios, these databases, including high-resolution area-of-interest insets, must be generated and quickly made accessible.

To address cross-sensor correlation, a database toolset has been developed to generate run-time sensor databases directly from the visual polygonal representation. These tools use the industry-standard OpenFlight format as an input. Most visual-based databases are available in this standard polygonal format, thus ensuring that the non-visual and sensor simulation run-time databases are generated from a common source.
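The common-source idea can be sketched in miniature: the shared polygonal geometry carries material codes, and each sensor's database compiler attaches its own attributes keyed to those codes. The data structures and attribute values below are hypothetical, not the OpenFlight format or the toolset's actual design:

```python
# One shared polygon list: geometry plus a material code, standing
# in for the common polygonal source database.
SOURCE_POLYGONS = [
    {"vertices": [(0, 0), (10, 0), (10, 10)], "material": "concrete"},
    {"vertices": [(10, 0), (20, 0), (20, 5)], "material": "water"},
]

# Per-sensor attribute tables keyed by the same material codes.
SENSOR_ATTRIBUTES = {
    "radar": {"concrete": {"reflectivity": 0.70}, "water": {"reflectivity": 0.05}},
    "ir":    {"concrete": {"emissivity": 0.91},  "water": {"emissivity": 0.96}},
}

def compile_sensor_db(sensor):
    """Attach sensor-specific attributes to the shared geometry, so
    every run-time sensor database derives from the same source
    polygons and therefore stays correlated with the others."""
    table = SENSOR_ATTRIBUTES[sensor]
    return [dict(poly, **table[poly["material"]]) for poly in SOURCE_POLYGONS]
```

Because every compiled database references identical geometry, a feature that occults a radar return occupies exactly the same footprint in the IR and visual scenes.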

Camber has pioneered efforts to completely eliminate the schedule impact and labour costs associated with generating separate sensor databases for each sensor system application, by providing its simulation applications, beginning with its flagship Radar Toolkit® product, as plug-ins to major Image Generation (IG) application providers. Such a configuration allows the IG application to provide the sensor application at run-time with the radar-relevant data required to support accurate sensor simulation.

To eliminate the need to generate run-time versions of visual and non-visual sensor databases, the simulation industry is moving towards software products that can render visual and sensor scenes ‘on-the-fly’ in run-time. Labour costs required to generate separate databases are eliminated. And most importantly, the time required to provide a customer with the capability to train time-critical scenarios is greatly decreased. We have developed plug-ins allowing our sensor simulations to support the same rendering methodology.

Challenge 4: Modelling and Correlation of Simulated Live Actors
In modern training scenarios, the inclusion of unconventional forces is becoming more prevalent as the scope of military combat operations widens. Twenty or thirty years ago, the scope of actors required to be supported in a training scenario was generally limited to conventional weapon platforms such as surface and sub-surface targets, ground-based vehicles, and airborne fixed-wing and rotary-wing platforms. Physical modelling of these entities was not a time-consuming or costly process, as polygonal models supporting such a limited set of entities were widely available to the simulation industry. Additionally, simulation of the reactive behaviours of these entities due to entity-to-entity interaction and the resulting weapon modelling was limited in scope and usually well-defined within commercially available data.

With the incorporation of unconventional forces into today's training scenarios, the scope of entities to be supported has greatly widened to include individual human beings and a wide array of ground and surface targets and weapons. Polygonal models supporting such a wider scope of entities are not as commercially available and require more labour to generate. Additionally, the behaviours of such an array of entities often require the use of higher-fidelity and costlier simulations such as Computer Generated Forces and Special Automated Forces.

Cross-sensor correlation is also a challenge when generating target models used by various visual and non-visual sensor simulations. We utilise the same design approach as in our terrain database generation process in order to ensure correlation. This approach is based on the use of a common data source when generating individual target models to support each of our sensor simulation applications.

Conclusion
As we head into the future, the military community requires cost-effective, high-fidelity trainers that support simulation of sensors greatly enhanced in range, resolution and capacity; correlation and synthesis of track information from multiple sensors; simulation of unconventional forces exercising unconventional behaviours; and quick turnaround on the generation of training scenarios that reflect the rapid response required for modern operations. These needs present new challenges to the military sensor simulation industry. They will be met through the innovations of the computing industry, providing even faster processing with greater throughput, and through the entrepreneurship of various customers and the data collection community, providing larger and even more accurate data to a competitive international market. And these challenges will be overcome by the sensor simulation industry leveraging the advancements of these partner industries and implementing innovative solutions.