Polarith AI
1.8
Perception

An important issue when working with our system is how to get data from the scene to the AI agents and their behaviours. It is not possible to directly access game object references within our AI behaviours if these behaviours are supposed to be thread-safe, because Unity's classes are not thread-safe (only its structs are). Therefore, it is inevitable to extract the data from Unity's objects into a so-called percept. Currently, there is only one percept type used together with steering behaviours: the SteeringPercept.

A percept is a data container that provides the information a behaviour needs to work properly; most behaviours in our system execute their algorithms at least once for each percept they receive. A percept can be created and handed over to a behaviour in the following two ways, both of which are described in detail on the environment manual page.

  1. Using the perception pipeline
  2. Referencing a game object directly in a behaviour (more expensive, see the sketch below)
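
For the second method, the following is a minimal sketch of what direct referencing could look like at runtime, assuming the behaviour is an AIMSeek and that direct references are kept in its GameObjects list (both names should be verified against the API reference):

```csharp
// Hedged sketch: handing an object directly to a behaviour (method 2).
// 'AIMSeek' and its 'GameObjects' list are assumptions based on the
// inspector layout; check the API reference for the exact members.
using Polarith.AI.Move;
using UnityEngine;

public class DirectReferenceExample : MonoBehaviour
{
    public GameObject target; // assigned in the inspector

    private void Start()
    {
        AIMSeek seek = GetComponent<AIMSeek>();
        // Directly referenced objects bypass the perception pipeline,
        // so their data is extracted per behaviour (hence more expensive).
        seek.GameObjects.Add(target);
    }
}
```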

Pipeline

The perception pipeline works as follows and is illustrated in Figure 1.

  1. Game objects are referenced in an object list of an AIMEnvironment component, where each environment is identified by an individual label.
  2. Environments are grouped using Steering Perceiver components. Every object that is implicitly referenced in such a perceiver (through its environments) is unpacked into a percept. Note that if an object is referenced by two different Steering Perceiver components, it is unpacked twice.
  3. An agent that should perceive environments needs a Steering Filter component, which takes a reference to a Steering Perceiver. If this is set up properly, all behaviours of this agent gain access to every percept provided by the Steering Perceiver and its referenced environments.
  4. All behaviours derived from PerceptBehaviour can make use of the percept data.

Figure 1: Illustrates the perception pipeline.
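
In code, the four steps could be wired up roughly as follows. This is only a minimal sketch, assuming the members Label, GameObjects, Environments and Perceiver match the inspector fields of the respective components; normally, this wiring is done in the editor:

```csharp
// Hedged sketch of the pipeline wiring above. Member names (Label,
// GameObjects, Environments, Perceiver) are assumptions derived from
// the inspector fields; check the API reference for the exact members.
using Polarith.AI.Move;
using UnityEngine;

public class PipelineWiringExample : MonoBehaviour
{
    public AIMEnvironment environment;     // step 1: labelled object list
    public AIMSteeringPerceiver perceiver; // step 2: groups environments
    public AIMSteeringFilter filter;       // step 3: sits on the agent

    private void Start()
    {
        environment.Label = "Enemies";
        environment.GameObjects.Add(GameObject.Find("Enemy"));
        perceiver.Environments.Add(environment);
        filter.Perceiver = perceiver;
        // Step 4: every behaviour derived from PerceptBehaviour on this
        // agent now receives the percepts unpacked from "Enemies".
    }
}
```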

Percept Data

Besides the data which is unpacked for all game objects by default (e.g., the position, the collider data and so forth), you can specify additional data that should be extracted and packed into the corresponding percept by attaching a Steering Tag to an object. You can easily find out what default data the perception pipeline receives by looking at the API reference page of the SteeringPercept class.
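
For instance, to make an object's hit points travel with its percept, a Steering Tag could be attached at runtime. A minimal sketch, assuming the front-end component is called AIMSteeringTag and that its Values member is the float array mentioned in the remarks below:

```csharp
// Hedged sketch: tagging an object so additional data is packed into
// its percept. 'AIMSteeringTag' and the exact type of 'Values'
// (array vs. list) should be verified in the API reference.
using Polarith.AI.Move;
using UnityEngine;

public class SteeringTagExample : MonoBehaviour
{
    private void Start()
    {
        AIMSteeringTag tag = gameObject.AddComponent<AIMSteeringTag>();
        tag.Values = new[] { 100f }; // e.g., Values[0] holds hit points
    }
}
```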

Optimization

The whole process is automatically done in a way that optimizes the performance of your game or application: the data of a percept is extracted only when the percept is within range of at least one agent processing it. The range determining when percepts are relevant is defined in an agent's Steering Filter. Moreover, a Steering Perceiver can utilize optimized spatial structures which further improve the access time to relevant percepts.
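
Tuning this range per agent might look like the following. This is only a sketch under the assumption that the filter's back-end is exposed as a SteeringFilter property with a Range field; both names are assumptions to be checked against the API reference:

```csharp
// Hedged sketch: percepts outside this radius are not extracted for
// this agent. The 'SteeringFilter.Range' access path is an assumption.
using Polarith.AI.Move;
using UnityEngine;

public class FilterRangeExample : MonoBehaviour
{
    private void Start()
    {
        AIMSteeringFilter filter = GetComponent<AIMSteeringFilter>();
        filter.SteeringFilter.Range = 15f; // world units
    }
}
```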

Remarks

So far, we have talked about percepts only in general. For our system, we decided to use only the SteeringPercept class, but we made it possible to create a completely different percept type by implementing the IPercept interface. However, if you do that, you need to implement your own perceivers and filters as well.

So, before you go and re-implement all pipeline components: you have the following possibilities to inject your custom data, like health points etc., into the system so that behaviours can make use of it.

  1. You can use the float Values array of the Steering Tag.
  2. Instead of rewriting all components by inheriting from the upper base classes, you can derive your custom perception classes directly from SteeringPerceiver, SteeringFilter and SteeringPercept for both back-end and front-end classes. Then, you just need to call the base methods in the overridden methods of your custom classes and add your additional code (see the sketch below).
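
As a rough sketch of the second possibility, a custom percept extending SteeringPercept could look like this. It assumes that SteeringPercept's extraction method Receive(GameObject) is overridable, which should be checked in the API reference; HealthComponent is a hypothetical user-defined component:

```csharp
// Hedged sketch of possibility 2: a percept carrying extra data.
// Assumes SteeringPercept's Receive(GameObject) extraction method is
// virtual; 'HealthComponent' is a hypothetical user-defined component.
using Polarith.AI.Move;
using UnityEngine;

public class HealthPercept : SteeringPercept
{
    public float HealthPoints;

    public override void Receive(GameObject obj)
    {
        base.Receive(obj); // default extraction: position, collider, ...
        HealthComponent health = obj.GetComponent<HealthComponent>();
        if (health != null)
            HealthPoints = health.Points;
    }
}
```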