Sunday, 9 October 2016

Role of Artificial Intelligence for Computer Games (CL2 - A*)

Introduction

Elements of artificial intelligence used in computer games have come a long way. In the beginning, the systems were based on sets of rules written directly into the code of the game, or on behaviour scripts interpreted by that code, usually combined with a carefully weighted random factor in the process of choosing the next behaviour. That time witnessed the birth of such memorable games as the immortal River Raid, Donkey Kong, Boulder Dash, and many other objects of fascination for users of eight-bit machines, back in the early 1980s.
Another step in the development process was the introduction of simple computer science methods, such as the still popular and frequently used Finite State Machine, into describing the behaviour of computer-controlled enemies. However, as players' demands grew day by day, games became more and more complex, relying on increasingly advanced algorithms. The dawn of the era of RTS (Real Time Strategy) games caused a significant shift of interest (in terms of frequency of use) towards algorithms which determine the optimal path between two specified points on a map.
Fast technical progress and the rapid increase in the processing power of home computers were also a catalyst for the development of applications of artificial intelligence in computer games. The first games and artificial intelligence algorithms had to settle for the limited capabilities of the machines available at the time, with processor frequencies no higher than 2 MHz. The first PCs brought in new possibilities and new applications. After PCs with 386/486 processors became the standard home computer, programmers were given new possibilities, and that started a race between game development companies. For a long time, the foremost indicator of a computer game's quality was the quality of the three-dimensional graphics it featured; however, a realisation soon came that nice graphics, sound, and character animation are not everything. Recently, artificial intelligence has been identified as one of the most important elements of computer games – the primary factor behind the so-called playability of present-day video games.
The process of producing computer games has undergone significant changes as well. Programming the artificial intelligence of a game used to be treated as a secondary task, and its implementation tended to be pushed towards the end of the production of the game's engine; at present, planning the artificial intelligence modules and their co-operation with the other components of the game is one of the most important elements of the planning process.
More and more frequently, at least one member of the programming team is assigned, full-time and from the very beginning of the project, to designing and programming the artificial intelligence modules.
At present, when PC-class computers with Pentium 4 processors clocked at 3 to 4 GHz can be found in most homes, developers are considering letting computer games make use of the most advanced and sophisticated methods of artificial intelligence: neural networks, genetic algorithms, and fuzzy logic. In the age of the Internet and network games, artificial intelligence systems in games have been given a new task: a computer player should, in its behaviour and playing strategy, be indistinguishable from a real player on the other side of an Internet connection.

Milestones in the Development of Artificial Intelligence in Games

While discussing the evolution of artificial intelligence in computer games, one definitely should mention the games which have turned out to be milestones in the development of intelligent behaviour in games.
One of the most popular games of the 1990s was WarCraft – a game developed by the Blizzard studio. It was the first game to employ path-finding algorithms on such a grand scale, for hundreds of units engaged in massive battles. SimCity, created by the company Maxis, was the first game to prove the feasibility of using A-Life technologies in the field of computer games. Another milestone turned out to be the game Black & White, created in 2001 by Lionhead Studios, in which learning techniques for computer-controlled characters were used for the first time.

AI in FPS-type Games

FPS-type games usually implement the layered structure of the artificial intelligence system. Layers located at the very bottom handle the most elementary tasks, such as determining the optimal path to the target (determined by a layer higher up in the hierarchy) or playing appropriate sequences of character animation. The higher levels are responsible for tactical reasoning and selecting the behaviour which an AI agent should assume in accordance with its present strategy.
Path-finding systems are usually based on graphs describing the world. Each vertex of the graph represents a logical location (such as a room in a building, or a fragment of the battlefield). When ordered to travel to a given point, the AI agent uses the graph to acquire the subsequent navigation points it should head towards in order to reach the specified target location. When moving between navigation points, the AI system can also use local paths, which make it possible to determine an exact path between two navigation points and to avoid dynamically appearing obstacles.
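To make the idea a little more concrete, here is a minimal C++ sketch of how such a navigation graph could be represented; the names NavNode and NavGraph, and the fields they contain, are illustrative assumptions of mine rather than parts of any particular engine:

#include <vector>

struct Vec3{ float x, y, z; };

// One logical location in the level (a room, a corridor junction).
struct NavNode{
   Vec3             position;     // the navigation point the agent heads towards
   std::vector<int> neighbours;   // indices of directly reachable nodes
};

// The whole level as a graph: a path search over the nodes yields the sequence
// of navigation points to visit, while local paths handle dynamic obstacles
// between two consecutive points.
struct NavGraph{
   std::vector<NavNode> nodes;
};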
The animation system plays an appropriate animation sequence at the chosen speed. It should also be able to play different animation sequences for different body parts: for example, a soldier can run and aim at the enemy, or shoot and reload the weapon while still running. Games of this kind often employ an inverse kinematics (IK) system. An IK animation system can calculate the parameters of an arm-positioning animation so that the hand can grab an object located on, e.g., a table or a shelf. The task of modules from the higher layers is to choose the behaviour appropriate for the situation – for instance, whether the agent should patrol the area, enter combat, or run through the map in search of an opponent.
Once the AI system has decided which behaviour is the most appropriate for the given situation, a lower-level module has to select the best tactics for fulfilling that task. Having received information that the agent should, for instance, fight, it tries to determine the approach that is best at the moment – e.g., whether the agent should sneak up on the opponent, hide in a corner and wait for the opponent to expose itself, or simply charge at it, shooting blindly.

AI in RTS-type Games

In RTS-type games, it is possible to distinguish several modules of the artificial intelligence system and its layered structure. One of the basic modules is an effective path-finding system – sometimes it has to find a movement solution for hundreds of units on the map in a split second – and there is more to it than merely finding a path from point A to point B, as it is also important to detect collisions and to make units on the battlefield avoid each other. Such algorithms are typically based on the game map being represented by a rectangular grid, with each cell representing a fixed-size element of the area. On higher levels of the AI system's hierarchy, there are modules responsible for economy and development or, very importantly, a module which analyses the game map. It is that module which analyses the properties of the terrain and bases settlement decisions on this assessment – e.g., if the settlement is located on an island, greater emphasis should be put on building a navy. The terrain analyser decides where cities should be built and how fortifications should be placed.
Figure 1. Representation of the world in an RTS-type game
Figure 2. Representation of the world in an FPS-type game

AI in Sports Games

Basically, in the case of most sports games, we are dealing with large-scale cheating. Take car racing games, for instance. For the needs of the AI, only the polygons belonging to the track on which a computer-controlled opponent travels are extracted from the geometry of the game map. Two curves are then marked on that track: the first represents the optimal driving line, the second the line used when overtaking opponents. The whole track is split into appropriately small sectors and, with the parameters of the surface taken into account, the length of each element of the split track is calculated. Those fragments are then used to build a graph describing the track and to obtain the characteristics of the road in the vehicle's closest vicinity. In effect, the computer knows it should slow down because it is approaching a curve, or knows that it is approaching an intersection and can, e.g., take a shortcut. Two important attributes of artificial intelligence systems in such games are the ability to analyse the terrain in order to detect obstacles lying on the road, and close co-operation with the physics module. The physics module can provide information that the car is skidding, upon which the artificial intelligence system should react appropriately and try to bring the vehicle's traction back under control.
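As an illustration, a single sector of such a split track could be described by a structure along the following lines; the field names are assumptions made purely for the sake of the example:

// One fragment of the split track; the racing and overtaking lines mentioned
// above are stored as sample points lying inside this sector.
struct TrackSector{
   float length;           // length of this fragment, surface taken into account
   float surfaceGrip;      // parameter of the surface (tarmac, gravel, ice...)
   float racingLineX;      // point on the optimal driving line
   float racingLineY;
   float overtakingLineX;  // point on the line used when overtaking opponents
   float overtakingLineY;
   float maxEntrySpeed;    // speed the AI driver should not exceed when entering
   int   nextSector;       // index of the following sector in the track graph
};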
Figure 3. The method of presentation of reality in a car racing game (segmentation and optimisation of the track)
Figure 4. The method of presentation of reality in a car racing game
Similar cheating can also be found in other sports games. In most cases, a computer-controlled player has its complete behaviour determined even before its turn begins – that is, it will, e.g., fall over while landing (acrobatics, ski jumping etc.), carry the wrong speed, make a false start, and so on. Additionally, in games simulating sports scored by judges, the scores are generated according to the rules defined by the appropriate sports bodies.
The predefined scenario of a computer-controlled player is then acted out by the character animation system.

The Most Popular AI Algorithms in Computer Games

In the following part of the article, I would like to discuss the two most popular algorithms used in programming computer games. Possessing knowledge about them, one can successfully design a simple artificial intelligence system fulfilling the needs of simple FPS or RTS games. The first of the two is the A-Star algorithm, used in performing fast searches for the optimal path connecting two points on the map (graph) of a game. The other is the finite state machine, useful, e.g., in preparing behaviour scenarios for computer-controlled opponents, typically delegating its low-level tasks to a path-finding module.

The A* Algorithm

The problem of finding a way from point A to point B on a map is a key problem in almost any computer game (possibly not counting certain sports games and a few other types of games which can be counted on the fingers of one hand). At the same time, algorithms from this group belong to the lower level of a game's AI, serving as a base for constructing more complicated and more intelligent types of behaviour, such as strategic planning, moving in formations or groups, and many others. This issue has already been thoroughly explored in the world of computer games, with one algorithm – A* – having become a present-day standard.

Representation of the World of the Game

The world of almost any computer game can be represented with a graph, its form depending on the kind of game. In RTS-type games, the world is typically represented by a two-dimensional array, each of its elements corresponding to a fragment of the game world's rectangular map. Each element (except the boundary ones) has eight neighbours. Using such a representation of the RTS world, we can construct a graph in which every element of the 2D array corresponds to one vertex of the graph. The edges of the graph (typically present only between the nearest neighbours) indicate the possibility (or lack thereof) of moving from one element of the map to the neighbouring element. In real-time strategies, we usually assign one vertex of the graph to an area the smallest unit in the game can fit into.
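As a simple illustration, enumerating the neighbours of a map element in such a grid-based representation could look roughly as follows; MapCell and the function name are assumptions made for this sketch only:

#include <utility>
#include <vector>

struct MapCell{ bool walkable; };

// Collects the (up to eight) neighbouring cells a unit may move into from (x, y).
std::vector< std::pair<int,int> > GetNeighbours(
      const std::vector< std::vector<MapCell> >& map, int x, int y){
   std::vector< std::pair<int,int> > result;
   for(int dy = -1; dy <= 1; ++dy)
      for(int dx = -1; dx <= 1; ++dx){
         if(dx == 0 && dy == 0)
            continue;                               // skip the cell itself
         int nx = x + dx, ny = y + dy;
         if(ny < 0 || ny >= (int)map.size())
            continue;                               // outside the map
         if(nx < 0 || nx >= (int)map[ny].size())
            continue;
         if(!map[ny][nx].walkable)
            continue;                               // no edge into an obstacle
         result.push_back(std::make_pair(nx, ny));
      }
   return result;
}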
In FPS-type games, the vertices of the graph are typically locations or rooms, with the graph's edges denoting the existence of a direct connection between two rooms.

Choosing an Algorithm

There are many algorithms for finding the optimal path in a graph. The simplest of them, commonly called fire on the prairie, works by constructing consecutive circles around the starting point, each step of the algorithm building another, wider circle. Consecutive circles, and the elements belonging to them, are assigned larger and larger indices. As one can see in Figure 5, the circle with index 4 passes through our target point.
Figure 5. A simple path-finding algorithm
Now, heading in the opposite direction and following the rule that in each step we move to the nearest map point located on the circle with a smaller index, we reach the starting point; the elements of our map we have returned through make up the shortest path between the starting point and the destination.
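A minimal sketch of this fire on the prairie search – essentially a breadth-first flood fill over the grid – could look as follows; the function and parameter names are illustrative only:

#include <queue>
#include <utility>
#include <vector>

// Fills circleIndex with the index of the circle on which each walkable cell
// lies (0 for the starting point, -1 for cells that cannot be reached).
void FloodFill(const std::vector< std::vector<bool> >& walkable,
               int startX, int startY,
               std::vector< std::vector<int> >& circleIndex){
   int h = (int)walkable.size();
   int w = (int)walkable[0].size();
   circleIndex.assign(h, std::vector<int>(w, -1));

   std::queue< std::pair<int,int> > frontier;
   circleIndex[startY][startX] = 0;
   frontier.push(std::make_pair(startX, startY));

   const int dx[4] = { 1, -1, 0, 0 };
   const int dy[4] = { 0, 0, 1, -1 };
   while(!frontier.empty()){
      int x = frontier.front().first;
      int y = frontier.front().second;
      frontier.pop();
      for(int i = 0; i < 4; ++i){
         int nx = x + dx[i], ny = y + dy[i];
         if(nx < 0 || ny < 0 || nx >= w || ny >= h)
            continue;                                    // outside the map
         if(!walkable[ny][nx] || circleIndex[ny][nx] != -1)
            continue;                                    // blocked or already visited
         circleIndex[ny][nx] = circleIndex[y][x] + 1;    // one circle further out
         frontier.push(std::make_pair(nx, ny));
      }
   }
}
// The path is then read back by stepping from the target to any neighbour whose
// index is exactly one smaller, until the starting point (index 0) is reached.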
Examining the way this algorithm works, one can see that, in addition to its great advantage – simplicity – it also possesses a severe drawback. The path the algorithm has found in our example consists of only five elements of the game world, even though, in the worst-case scenario, all 81 fields of the map would have to be examined. In the case of a map consisting of 256x256 fields, that might mean having to examine 65,536 map elements!
Enter A* and its primary advantage – minimisation of the area being examined, achieved by consciously orienting the search towards the target. Put briefly: when calculating the cost of reaching a point on the map, the A* algorithm adds to it a heuristic estimate of the cost of reaching the destination; this heuristic is typically the distance from the point currently being examined to the destination.
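In code, that extra term could be sketched roughly as follows (assuming grid coordinates; the names mirror those used in Listing 1 but are otherwise illustrative):

#include <cmath>

// The heuristic h: straight-line distance from the examined point to the target.
float EstimateCostToEndNode(int x, int y, int endX, int endY){
   float dx = (float)(endX - x);
   float dy = (float)(endY - y);
   return std::sqrt(dx * dx + dy * dy);
}

// f = g + h: the cost already paid to reach the point plus the estimate above.
float TotalCost(float costSoFar, int x, int y, int endX, int endY){
   return costSoFar + EstimateCostToEndNode(x, y, endX, endY);
}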

Requirements for the Algorithm

Optimal path-finding systems have to meet many requirements. Optimal does not necessarily mean the shortest; the algorithm can take into account additional factors such as the type of terrain (for instance, a tank in an RTS game will get past a swamp faster by going around it than by driving through it), turning angle limitations, the number of enemies in the area, and many other elements depending on the particular game. The algorithm should avoid impassable areas of the map or, for example, maintain distance from friendly units. The foremost requirement is that the algorithm should always be able to find the optimal path, as long as a path between the two points exists. Listing 1 presents the pseudocode describing the A* algorithm.

Listing 1. The A* algorithm

PriorityQueue OpenList
List          ClosedList

startNode.g = 0
startNode.h = EstimateCostToEndNode(startNode)
startNode.f = startNode.g + startNode.h
startNode.parent = null

OpenList.Insert(startNode)

while(OpenList is not empty)
   //obtain the element with the lowest f from the priority queue
   Node node = OpenList.GetNode()
   if(node == endNode)
      return TRUE
   for(each neighbour _succ of the node node)
      newG = node.g + CalcCostFromNodeToNode(_succ, node)
      if(examined _succ is on OpenList or ClosedList
                  and the new cost is >= the previous one)
         analyse the next neighbour
      else
         _succ.parent = node
         _succ.g = newG
         _succ.h = EstimateCostToEndNode(_succ)
         _succ.f = _succ.g + _succ.h
         if(_succ is already on ClosedList)
            remove _succ from ClosedList
         if(_succ isn't on OpenList yet)
            add _succ to OpenList
   ClosedList.Insert(node)

return FALSE
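For readers who prefer compilable code, here is a minimal C++ sketch of the listing above, assuming a rectangular grid of walkable and blocked cells and a straight-line distance heuristic; the structure and function names are my own and not taken from any engine:

#include <cmath>
#include <queue>
#include <vector>

struct Cell{ int x, y; };

struct Node{
   int   x, y;
   float g, f;
};

// Larger f means lower priority, so the queue pops the cheapest node first.
struct NodeCompare{
   bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};

static Node MakeNode(int x, int y, float g, float f){
   Node n; n.x = x; n.y = y; n.g = g; n.f = f; return n;
}

// The heuristic h: straight-line distance to the end cell.
static float EstimateCostToEndNode(int x, int y, const Cell& end){
   float dx = (float)(x - end.x), dy = (float)(y - end.y);
   return std::sqrt(dx * dx + dy * dy);
}

// Returns true when a path exists; parent can then be walked back from the
// end cell to the start cell to recover the path itself.
bool FindPathAStar(const std::vector< std::vector<bool> >& walkable,
                   const Cell& start, const Cell& end,
                   std::vector< std::vector<Cell> >& parent){
   int h = (int)walkable.size();
   int w = (int)walkable[0].size();

   std::vector< std::vector<float> > bestG(h, std::vector<float>(w, 1e30f));
   std::vector< std::vector<bool> >  closed(h, std::vector<bool>(w, false));
   Cell none = { -1, -1 };
   parent.assign(h, std::vector<Cell>(w, none));

   std::priority_queue<Node, std::vector<Node>, NodeCompare> openList;
   bestG[start.y][start.x] = 0.0f;
   openList.push(MakeNode(start.x, start.y, 0.0f,
                          EstimateCostToEndNode(start.x, start.y, end)));

   const int dx[8] = { 1, -1, 0, 0, 1, 1, -1, -1 };
   const int dy[8] = { 0, 0, 1, -1, 1, -1, 1, -1 };

   while(!openList.empty()){
      Node node = openList.top();
      openList.pop();
      if(closed[node.y][node.x])
         continue;                               // stale entry left in the queue
      closed[node.y][node.x] = true;             // this plays the role of the ClosedList
      if(node.x == end.x && node.y == end.y)
         return true;

      for(int i = 0; i < 8; ++i){
         int nx = node.x + dx[i], ny = node.y + dy[i];
         if(nx < 0 || ny < 0 || nx >= w || ny >= h)
            continue;                            // outside the map
         if(!walkable[ny][nx] || closed[ny][nx])
            continue;                            // blocked or already expanded
         float step = (i < 4) ? 1.0f : 1.4142f;  // diagonal moves cost more
         float newG = node.g + step;
         if(newG >= bestG[ny][nx])
            continue;                            // no improvement over a known path
         bestG[ny][nx]  = newG;
         parent[ny][nx].x = node.x;
         parent[ny][nx].y = node.y;
         openList.push(MakeNode(nx, ny, newG,
                                newG + EstimateCostToEndNode(nx, ny, end)));
      }
   }
   return false;                                 // the open list ran dry: no path exists
}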

Optimisation

The algorithm applied directly may turn out to be ineffective because of how much time operations on the priority queue (the OpenList) and the ClosedList can take. Multiple programming methods exist which work around those imperfections. Optimisation issues can be approached in two ways:
  • optimising the search algorithm itself,
  • optimising the data structures.
In the first case, one often applies the method of dividing the whole world (map) into regions and splitting the algorithm into two stages: first, we search for the path by checking which regions we should go through; then, for each region, we move from its entry point to its exit. Within each region, we find the optimal path by running A* locally, only on the region we are in. That way, we significantly limit the search area, thus decreasing the amount of resources required for the calculations.
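A rough sketch of this two-level search could look as follows; RegionId, PortalBetween and the other helpers are assumptions introduced only to illustrate the idea:

#include <vector>

struct Cell{ int x, y; };
typedef int RegionId;

// Coarse search: the sequence of regions the unit should pass through.
std::vector<RegionId> FindRegionPath(RegionId start, RegionId goal);

// The cell where two neighbouring regions touch (the "door" between them).
Cell PortalBetween(RegionId from, RegionId to);

// Fine search: A* run only on the cells of a single region.
std::vector<Cell> FindLocalPath(RegionId region, Cell entry, Cell exit);

std::vector<Cell> FindHierarchicalPath(RegionId startRegion, Cell startCell,
                                       RegionId goalRegion,  Cell goalCell){
   std::vector<Cell> fullPath;
   std::vector<RegionId> regions = FindRegionPath(startRegion, goalRegion);
   Cell entry = startCell;
   for(size_t i = 0; i < regions.size(); ++i){
      // The exit of one region becomes the entry point into the next one;
      // in the last region we head straight for the actual target cell.
      Cell exit = (i + 1 < regions.size())
                     ? PortalBetween(regions[i], regions[i + 1])
                     : goalCell;
      std::vector<Cell> local = FindLocalPath(regions[i], entry, exit);
      fullPath.insert(fullPath.end(), local.begin(), local.end());
      entry = exit;
   }
   return fullPath;
}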
In fact, this method is strongly based on how a human looks for a way to a target – when travelling to the other end of a large city, a walker doesn't plan the whole route with equal precision; instead, they travel between known orientation points, planning precisely only the way between each two points, down to the particular street to walk along.
Another optimisation factor is the appropriate choice of the heuristic function and its parameters, as this is what decides how far the search area spreads over the game map.
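One common trick is to weight the heuristic; the snippet below is only an illustration of the trade-off involved:

// weight == 1.0 keeps A* optimal; weight > 1.0 makes the search greedier,
// examining fewer map elements at the risk of a slightly longer path.
float WeightedCost(float g, float h, float weight){
   return g + weight * h;
}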

Finite State Machines

Finite state machines are one of the least complicated and, at the same time, one of the most effective and most frequently used methods of programming artificial intelligence. For each object in a computer game, it is possible to discern a number of states it is in during its life. For example: a knight can be arming himself, patrolling, attacking, or resting after a battle; a peasant can be gathering wood, building a house, or defending himself against attacks. Depending on their states, in-game objects respond in different ways to (the finite set of) external stimuli or, should there be none, perform different activities. The finite state machine method lets us easily divide the implementation of each game object's behaviour into smaller fragments, which are easier to debug and extend. Each state possesses code responsible for the initialisation and deinitialisation of the object in that state (often referred to as the state transition code), code executed in each frame of the game (e.g., to fulfil the needs of artificial intelligence functions, or to set an appropriate frame of animation), and code for processing and interpreting messages coming from the environment.
Finite state machines are typically implemented using one of the two following methods:
  • finite state machine language – implemented in C as a set of preprocessor macros,
  • state design pattern – a special object-oriented design pattern.
In the age of object-oriented design and programming, the first method is being phased out by the second, i.e., machines implemented on the basis of the state design pattern. Here is an example of such an object-oriented machine, describing part of the possible behaviour of a knight; each state of the object is represented by an abstract base class:
class CGameObjectState{
public:
   virtual void OnEnter()                 { return; }
   virtual void OnLeave()                 { return; }
   virtual void OnUpdate()                { return; }
   virtual void OnMessage(CMsg* pMessage) { return; }
};
All classes deriving from this class define the behaviour in each state (Listings 2, 3, and 4). An in-game object possesses a pointer to the object representing its present state and a method handling the state transition (Listing 5). The knight class derives from the main game object and initialises its default state in the constructor:
class CKnightObject : public CGameObject{
public:
   CKnightObject(){
      SetState(new CKnightPatrolState());
   }
};

Listing 2. State: knight under attack

class CKnightAttackState : public CGameObjectState{
   void OnUpdate(){
      // If no enemy is seen, change to the “patrol” state and stop here
      if(CanSeeEnemy()==FALSE){
         SetState(new CKnightPatrolState());
         return;
      }
      // Here is the code handling the attack, e.g. get as close as needed
      // to the target if it is too far away, use the sword if it is close
      if(IsEnemyCloseEnough()==FALSE)
         MoveToEnemy(enemyID);
      else
         HitTheEnemy(enemyID);
   }
   void OnEnter(){
      // Alert allied troops that you're under attack
      CallForReinforcement();
   }
   void OnLeave(){
      // Notify command of received wounds
      CallForMedic();
   }
   void OnMessage(CMsg* pMsg){
      // Check the type of the order, take appropriate action if it is
      // a “fall back” order
      CMsgGetBack* pGetBackMsg = dynamic_cast<CMsgGetBack*>(pMsg);
      if(pGetBackMsg != NULL)
         SetState(new CKnightRunAwayState());
   }
};

Listing 3. State: knight on patrol

class CKnightPatrolState : public CGameObjectState {
   void OnUpdate(){
      // Here is the code to check if we can see the enemy
      if(CanSeeEnemy() == TRUE){
         SetState(new CKnightAttackState());
      }
      else{
         // Continue the march through subsequent way points of the patrol
         GoToNextPatrolPoint();
      }
   }
   void OnLeave(){
      // Notify we are not patrolling any more, a substitution may be needed
      NotifyOneLessInPatrol();
   }
};

Listing 4. State: knight running away from the enemy

class CKnightRunAwayState : public CGameObjectState{
   void OnUpdate(){
      // Here is the code to check if we can still see the enemy
      if(CanSeeEnemy() == FALSE){
         SetState(new CKnightPatrolState());
         return;
      }
      // Run as far away as possible from the nearest enemy
      RunFarAwayFromNearestEnemy();
   }
};

Listing 5. Present state and state transition of an object

class CGameObject{
   CGameObjectState* m_pMyState;
public:
   CGameObject() : m_pMyState(NULL) { }
   void SetState(CGameObjectState* pNewState){
      // Call the deinitialisation method and destroy the previous state (if any)
      if(m_pMyState != NULL){
         m_pMyState->OnLeave();
         delete m_pMyState;
      }
      // Assign the new state to the object
      // and call the method initialising it 
      m_pMyState = pNewState;
      m_pMyState->OnEnter();
   }
   void OnUpdate(){
      // perform tasks as defined for the present state
      m_pMyState->OnUpdate();
      // now perform other, per-frame tasks of the object
   }
   void OnMessage(CMsg* pMsg){
      // perform tasks as defined for the present state
      m_pMyState->OnMessage(pMsg);
      // now process the messages, regardless of the object's state
   }
};
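For completeness, here is a minimal usage sketch assuming the classes above; gameIsRunning stands in for whatever condition drives the real main loop:

CKnightObject* pKnight = new CKnightObject();   // starts in the patrol state

while(gameIsRunning){                           // the game's main loop
   pKnight->OnUpdate();                         // behaviour of the current state
   // ...deliver any queued messages here, e.g. a "fall back" order:
   // pKnight->OnMessage(pGetBackMsg);
}
delete pKnight;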

Artificial Neural Networks and Advanced Algorithms in Computer Games

The issue of artificial neural networks and their application in video games has become one of the trendiest topics in the field of computer games. Their potential applications in games have been discussed for years in many magazines and on many Web portals. The “neural networks in computer games” problem has also been discussed multiple times at the GDC (Game Developers Conference – an annual event taking place in London and San Jose). At the same time, we had to wait a long time for a game to enter the market whose engine made use, even minimally, of the potential of artificial neural network theory.
The game Colin McRae Rally 2.0 is one of the first applications of neural networks in computer games, and it turned out to be a total success. A trained artificial neural network is responsible for keeping the computer player's car on the track while letting it negotiate the track as quickly as possible. In that game, just as described in the AI in Sports Games section, each track is represented by a set of broken lines making up a graph. In a gross simplification, the neural network's input parameters are pieces of information such as: the curvature of the road's bend, the distance from the bend, the type of surface, the speed, or the vehicle's properties. It is up to the neural network to generate the output data to be passed on to the physics module, that data being selected in such a way that the car travels and negotiates obstacles or curves at a speed optimal for the given conditions. Thanks to this, the computer player's driving style appears, contrary to other games of this kind, highly natural. The computer can avoid small obstacles, cut bends, begin turning appropriately early on a slippery surface etc. The game uses the multi-layered perceptron model, a simplified form of which can be seen in Figure 6.
Figure 6. The multi-layered perceptron model
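To give a feel for what such a network computes, here is a minimal sketch of a forward pass through a multi-layered perceptron with one hidden layer; the layer structure, the names, and the choice of a sigmoid activation are assumptions made purely for illustration:

#include <cmath>
#include <vector>

typedef std::vector< std::vector<float> > Matrix;   // weights[neuron][input]

static float Sigmoid(float x){ return 1.0f / (1.0f + std::exp(-x)); }

// One layer: every output neuron is a weighted sum of all inputs plus a bias,
// passed through the activation function.
std::vector<float> Layer(const std::vector<float>& input,
                         const Matrix& weights,
                         const std::vector<float>& bias){
   std::vector<float> output(weights.size());
   for(size_t n = 0; n < weights.size(); ++n){
      float sum = bias[n];
      for(size_t i = 0; i < input.size(); ++i)
         sum += weights[n][i] * input[i];
      output[n] = Sigmoid(sum);
   }
   return output;
}

// Inputs such as bend curvature, distance from the bend, surface type and speed
// go in; steering and throttle values for the physics module come out.
std::vector<float> DriveNetwork(const std::vector<float>& input,
                                const Matrix& hiddenW, const std::vector<float>& hiddenB,
                                const Matrix& outputW, const std::vector<float>& outputB){
   return Layer(Layer(input, hiddenW, hiddenB), outputW, outputB);
}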
Artificial neural networks could, in theory, be applied to solving most tasks performed by AI in computer games. Unfortunately, in practice, a number of obstacles exist which limit the neural networks' application in games. These include:
  • problems with choosing the appropriate input for neural networks,
  • neural networks' sensitivity to changes in a game's action logic, and the need for re-training the network whenever such a situation occurs,
  • rather complicated theory, and difficulties with debugging in case of problems,
  • time-consuming and complicated process of training the network.
What steps do we need to undertake in order to take advantage of an artificial neural network in a simple computer game? Let us have a brief look:
To begin, we have to decide what kind of information the neural network should provide us with in order to help solve the given problem. For example, let us consider a game in which a neural network controls the flight of our opponent's fighter plane. The information we should be obtaining from the neural network would then be, e.g., the optimal velocity and acceleration vectors which, when provided to the physics module, will guide the enemy fighter towards our plane. Another example could be a neural network used to choose the best strategy in an RTS-type game. Based on an analysis of the situation, the network decides how strongly to concentrate on development, arms production, repairs after battles etc. All the parameters required by the game will be provided by the neural network on its output.
While defining the effect of the neural network's actions is quite easy (since we know exactly what we want to achieve), choosing the network's input parameters is a much more serious problem. The parameters should be chosen in such a way that their different combinations let the neural network learn to solve complicated situations which haven't appeared in the example set of signals. The general rule states that the input data (variables) should represent as much information about the game world as possible; this could be, for instance, the vectors of the relative positions of the nearest obstacle and the nearest opponent, the enemy's strength, or the present state of armaments and damage.
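As a small illustration, packing and scaling a handful of such game-state values into the network's input vector could look roughly like this; the chosen features and scaling factors are assumptions made for the example only:

#include <vector>

std::vector<float> BuildNetworkInput(float obstacleDX, float obstacleDY,
                                     float enemyDX, float enemyDY,
                                     float enemyStrength, float ownDamage){
   // Each value is scaled to a comparable range so that no single input
   // dominates the others during training.
   std::vector<float> input;
   input.push_back(obstacleDX / 100.0f);   // relative position of the nearest obstacle
   input.push_back(obstacleDY / 100.0f);
   input.push_back(enemyDX / 100.0f);      // relative position of the nearest opponent
   input.push_back(enemyDY / 100.0f);
   input.push_back(enemyStrength);         // assumed to be already in the 0..1 range
   input.push_back(ownDamage);             // present state of damage, also 0..1
   return input;
}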
The next step is to acquire a set of input data which will be used to train the network. The direct method could involve, e.g., recording from several to several hundred samples of successful attacks and actions of a human player, and providing the recorded data to the neural network. Typically, however, the process is automated, i.e., the samples themselves are computer-generated – which requires an additional, often quite significant, effort from the programmers.
The final step is training the neural network. Any training algorithm can be used here. The training process should be interwoven with simultaneous testing in order to make sure the game is not becoming too difficult or, on the contrary, is not still too easy and in need of further training and optimisation.
Applying neural networks in practice is not an easy task. It requires a lot of time, experience, and patience. In addition, neural networks are often used together with fuzzy logic, which makes it possible to convert the computer's traditional zero-one reasoning into something more closely resembling the way a human thinks. Fuzzy logic lets us decide if, and to what degree, a given statement is true. Although the simultaneous use of the two technologies is a difficult task, when it succeeds, the results are simply breath-taking, and incomparable with what we can achieve by using rules hard-coded with traditional algorithms and logic. Technologies such as neural networks, genetic algorithms, and fuzzy logic are the future of computer games – and a future that is not that distant any more.

AI Libraries

Developing an advanced artificial intelligence engine requires both time and an experienced team of programmers. If a development studio cannot allocate enough human resources to build an artificial intelligence system, it can purchase an existing AI system; many are available on the market. Here, I would like to describe one of the most popular libraries on the market – Renderware AI – as well as one of the newer libraries, AI.Implant, which could become a less expensive alternative to Renderware AI.

Renderware AI

Renderware is a commercial, multiplatform computer game engine. The Renderware engine consists of several modules; among them, the one of interest to us here – the Renderware AI artificial intelligence module.
The Renderware AI module can be used both in games wholly based on the Renderware engine, and in games which use their own or other engines and merely wish to use Renderware AI as the basis for an advanced artificial intelligence system.
The Renderware AI library follows the layered philosophy of building artificial intelligence systems. Renderware AI discerns three layers:
  • the perception layer, responsible for situation analysis – primarily for analysing the static (e.g., terrain) and dynamic (enemies, elevators etc.) surroundings,
  • the decision layer, responsible for making strategic decisions based on the information provided by the perception module,
  • the action layer, which executes low-level actions such as attacking, path-finding, evasions etc.
The most important element of the whole library is the representation of the perception of the world, as this is what the further layers of the game's AI build upon. In Renderware AI, this module is called PathData (a slightly misleading name, considering path analysis is only one of the perception module's functions), and it uses a tool called PathData Generator. The PathData module can successfully analyse the game world with respect to its topological properties, and the streaming method it features makes it possible to generate the information required by the AI module even for very large game maps. PathData conducts both a global analysis of the terrain's topology and an analysis of the unit's nearest surroundings. The results of the analysis can then be, if such a need arises, subjected to further manual processing.
Global analysis provides information such as the places on the map which are interesting from the point of view of their topological properties. This can include data about well-hidden locations on the map, locations from which large areas of the map can be seen well, or places where a camera could be positioned so that its view won't be obscured by a minor element of the scene. Local analysis can detect walls, obstacles which have to be walked around or jumped over, and a lot of other locally important elements.
Another important feature of Renderware AI is the module responsible for the broadly understood planning and execution of unit movement. Using the data provided by the world analysis module, an appropriate graph is built, which is then used by the A* algorithm to plan a preliminary optimal path from point A to point B. Other features include unit type-dependent paths, path smoothing, avoidance of dynamic objects getting in the unit's way, coordination with the animation system, and many others that are extremely important in practice.
The engine is available for many platforms, from the Sony PlayStation, through Nintendo and Xbox consoles, to the Sony PlayStation 2 and PCs. The libraries are optimised for each platform, and make it possible to create incredibly advanced AI systems. It is worth considering as an alternative to the time-consuming development of one's own artificial intelligence solutions.

AI.Implant

This engine, demonstrated for the first time in 2002 at the Game Developers Conference, immediately piqued the interest of computer game developers. The most important features of this system include advanced, hierarchical path-planning algorithms, a decision module based on binary decision trees, and a friendly user interface for editing them. In addition, one of its great advantages is close integration with programs such as 3DStudio Max and Maya, which allows intuitive manipulation of the data controlling object behaviour as early as the stage of their development in graphical packages. Among the many other properties of the AI.Implant package, one worth mentioning is an advanced group behaviour module, making it possible to simulate crowds very realistically. AI.Implant is a multiplatform package available for PC, GameCube, XBox, and Sony PlayStation architectures.

What Now?

Artificial intelligence is a very broad and, at the same time, fascinating part of computer science. In this article, I have introduced the reader to certain algorithms and methods of artificial intelligence used in programming computer games; however, it is only a small fragment of the knowledge any real computer game programmer must master. The most important issues not discussed here include genetic programming, fuzzy logic, the influence map method, flocking algorithms, and many, many others; getting familiar with these is well worth the effort.
