The sensor is a core structure and concept used by the Context component. It defines how agents observe their environment. Although it is listed under the components, it is actually implemented as a ScriptableObject.
The spatial sensor is suited to general 3D scenarios such as controlling aircraft, spacecraft, submarines, or superhero characters. Using this sensor allows a controller to handle all degrees of freedom in 3D space. However, this freedom comes at a price: whenever possible, you should prefer the Planar Sensor, since it generally needs fewer receptors and can use planar-exclusive optimizations such as Planar Interpolation.
Currently, two sphere shapes are available: the so-called UV sphere and the icosphere. Both have their advantages. The icosphere provides the most equidistant distribution of receptors that is possible on a sphere, which is beneficial for more complex algorithms such as a spatial interpolation approach. The UV sphere, in contrast, is more flexible in its configuration and has the neat property of being well structured along latitude and longitude, which is favorable when implementing a stable controller. Therefore, we recommend the UV sphere as the default.
Figure 1: Shows a selection of possible sensor configurations. The icosphere topology is on the top, while the bottom sensors represent UV spheres.
Creating a sensor asset is very easy because it is a serialized object. To create a sensor, navigate to Assets/Create/Polarith AI » Move/Sensors/AIM Spatial Sensor.
Figure 2: The popup for creating sensor assets.
Once you have created your sensor asset, the inspector shows the following information about it.
Property | Description |
---|---|
ReceptorCount | Indicates how many receptors are used for this sensor. |
Two sphere shapes can be created using the inspector: the UV sphere sensor and the icosphere sensor.
Property | Description |
---|---|
Radius | The distance from each receptor to the sphere's center. |
Rings | The number of receptor rings (longitude). |
Segments | The number of receptors in each ring (latitude). |
Subdivisions | The subdivision level of the sphere-shaped sensor. The higher the level, the more receptors are used. |
The Radius parameter applies to both shapes. Rings and Segments must be set for the UV sphere only, whereas Subdivisions is used by the icosphere. Once you are satisfied with the configuration, hit the corresponding Build New as ... button. You can then inspect the result by enabling the gizmo.
The subdivisions parameter is limited for a good reason. The receptor count grows exponentially with the number of subdivisions. The sequence goes like this: 12, 42, 162, 642, 2562, 10242, and so forth.
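The sequence above follows from the geometry of a subdivided icosahedron: it starts with 12 vertices, and each subdivision adds one new vertex per edge, which quadruples the face count. This yields the closed form 10 · 4ⁿ + 2 receptors. The following Python sketch (not part of the Polarith AI API, just an illustration of the math, assuming subdivision level 0 corresponds to the plain icosahedron) reproduces the sequence:

```python
def icosphere_receptor_count(subdivisions):
    """Vertex count of a subdivided icosahedron.

    An icosahedron has V=12 vertices, E=30 edges, F=20 faces. Each
    subdivision step maps V' = V + E, E' = 2E + 3F, F' = 4F, which
    resolves to the closed form 10 * 4**n + 2.
    """
    return 10 * 4 ** subdivisions + 2

# Reproduces the documented sequence: 12, 42, 162, 642, 2562, 10242, ...
counts = [icosphere_receptor_count(n) for n in range(6)]
print(counts)
```

Since each step multiplies the count by roughly four, even a few subdivision levels already produce thousands of receptors, which is why the parameter is capped.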
Note that the sphere poles are always aligned with the forward vector, not the up vector. This might be a bit confusing with respect to latitude and longitude orientation, but this alignment has far better properties for controllers.
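To make the ring/segment layout and the forward-aligned poles more concrete, here is a small Python sketch of how such a UV distribution of unit directions can be generated. This is not Polarith AI code; the exact receptor layout (e.g. whether the poles themselves carry receptors, or how rings are indexed) is an assumption for illustration only. The poles lie on the +Z/−Z axis, matching Unity's forward direction:

```python
import math

def uv_sphere_directions(rings, segments):
    """Generate rings * segments unit direction vectors on a sphere whose
    poles lie on the forward (+Z) axis. Illustrative layout only; the
    actual sensor may place receptors differently."""
    directions = []
    for r in range(1, rings + 1):
        # Polar angle measured from the forward (+Z) pole; the poles
        # themselves are skipped in this sketch.
        theta = math.pi * r / (rings + 1)
        for s in range(segments):
            phi = 2.0 * math.pi * s / segments  # angle around the forward axis
            x = math.sin(theta) * math.cos(phi)
            y = math.sin(theta) * math.sin(phi)
            z = math.cos(theta)  # constant per ring: rings are "latitude" circles
            directions.append((x, y, z))
    return directions
```

Note how every receptor in a ring shares the same z component: this regular latitude/longitude structure is exactly what makes the UV sphere convenient for stable controllers, at the cost of receptors bunching up near the poles.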
For visualization, you can use the sensor gizmo. This gizmo is always rendered at the center of the currently opened scene.
Property | Description |
---|---|
Enabled | Determines whether to display the gizmo. |
Scale | Visually scales the gizmo on screen. |
Color | Sets the color for the displayed receptors. |
A receptor is a single part of a sensor; together, the receptors form the actual sensor. What a pixel is to a camera sensor, a receptor is to a Polarith AI sensor. For each receptor, the associated sensor stores specific data, such as directional vectors, as well as neighbourhood information. Together, these form a certain sensor shape which determines how an agent can observe the world.
All receptor data (but not the objective values) are stored within the sensor, so the data can be shared between different agents using the same sensor. However, it is possible to automatically create copies (clones) at runtime if you want to modify sensors dynamically without affecting other agents; for this, see SensorShared in Context.
Data | Description |
---|---|
Position | Offset to the local position of the sensor. Note that behaviours using forEachPercept ignore this value. |
Direction | The normalized direction of this receptor. This is always of type Vector3 . |
MagnitudeMultiplier | The global multiplier for this receptor; it can be interpreted as a weight. |
Sensitivity | The maximum angle within which this receptor recognizes a behaviour's ResultDirection for writing objective values. |
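The sensitivity test described in the table amounts to a simple cone check: a receptor recognizes a result direction if the angle between the two vectors does not exceed the sensitivity angle. The following Python sketch illustrates this idea; it is not Polarith AI code, and the function name and degree-based convention are assumptions for illustration:

```python
import math

def receptor_recognizes(receptor_dir, result_dir, sensitivity_deg):
    """Return True if result_dir lies within the receptor's sensitivity
    cone, i.e. the angle between the two directions is at most
    sensitivity_deg degrees. Illustrative only."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    a = normalize(receptor_dir)
    b = normalize(result_dir)
    # Clamp the dot product to guard against floating-point drift
    # before taking the arccosine.
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    angle_deg = math.degrees(math.acos(dot))
    return angle_deg <= sensitivity_deg
```

For example, a receptor pointing along +Z with a sensitivity of 50 degrees would accept a direction 45 degrees off its axis but reject one at 90 degrees.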
Spatial sensors do not use neighbourhood definitions at all, so planar behaviours do not work with them. For spatial sensors, 3D movement can be achieved only through generalized (non-prefixed) behaviours or through behaviours prefixed with Spatial.