I shall start by looking at the creatures in Black & White, and then go on to speculate about what sorts of interesting agents we can expect in computer games in the next few years.
The creatures in Black & White had to fulfil two very different requirements:
- We wanted the user to feel he was dealing with a person. The creatures had to be plausible, malleable, and loveable.
- They had to be useful to the player in his many quests and goals. The creatures in Black & White aren’t just toys you experiment with, they can be trained to be invaluable helpers in the campaign.
To my knowledge, this combination of features has not been attempted before. There is some software (Creatures, The Sims) in which you feel you are dealing with passably plausible agents, but these packages, excellent as they are, are more like sand-boxes than games: they are pure goal-less simulations, in which the entertainment is to be gained from experimentation, not from progressing through a series of quests. There are some games (Daikatana) in which the player’s character is given helpers to aid him on his quest, but in these games the helpers are just state machines, hard-coded for the particular task at hand.
| | Daikatana | Creatures | Black & White |
| --- | --- | --- | --- |
| Useful, helpful agents? | Yes | No | Yes |
At first glance, there seems to be some sort of conflict between these requirements: the person-like requirement implies the creatures are autonomous, whereas the usefulness requirement seems to preclude too much autonomy. Later on we shall see how this conflict was “resolved”. But first we shall look at the first requirement: making persons out of creatures.
1. Making a Person: the Architecture of an Agent
In order for the player to see his creature as a person, the creatures had to be:
- Psychologically plausible
- Malleable
- Loveable
1.1 Psychologically Plausible Agents
To make agents who were psychologically plausible, we took the Belief-Desire-Intention architecture of an agent, fast becoming orthodoxy in the agent programming community, and developed it in a variety of ways. The underlying methodology was to avoid imposing a uniform structure on the representations used in the architecture, and instead to use a variety of different types of representation, so that we could pick the most suitable representation for each of the very different tasks (see Marvin Minsky's paper on causal diversity). So beliefs about individual objects were represented symbolically, as lists of attribute-value pairs; beliefs about types of objects were represented as decision trees; and desires were represented as perceptrons. There is something attractive about this division of representations: beliefs are symbolic structures, whereas desires are more fuzzy.
To make a plausible agent, there must be an explanation of why he is in that particular mental state. In particular, if an agent has a belief about an object, that belief must be grounded in his perception of that object: creatures in Black & White do not cheat about their beliefs – their beliefs are gathered from their perceptions, and there is no way a creature can have free access to information he has not gathered from his senses. I call this requirement Epistemic Verisimilitude.
Further, if a creature wants something, there must be an explanation of why he wants it. (For example: if the creature is angry, it might be because he has been watching you being destructive, and has decided to copy you; or, the creature might grow angry after getting hurt). Each desire has a number of different desire-sources; these jointly contribute to the current intensity of the desire.
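The way desire-sources jointly fix a desire's current intensity can be sketched as a simple perceptron. The class, the source names, and the weights below are illustrative assumptions, not the game's actual code:

```python
# Sketch of a desire whose intensity is a perceptron over its
# desire-sources. The source names and weights are invented for
# illustration; the game's real desires and weights will differ.

class Desire:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # one weight per desire-source

    def intensity(self, sources):
        # Weighted sum of the current source values, clamped to [0, 1].
        total = sum(w * sources.get(src, 0.0)
                    for src, w in self.weights.items())
        return max(0.0, min(1.0, total))

anger = Desire("Anger", {"watched_destruction": 0.5, "recently_hurt": 1.0})
print(anger.intensity({"watched_destruction": 1.0, "recently_hurt": 0.25}))
# → 0.75
```

Because each source contributes independently, the creature can be angry for several reasons at once, and the explanation of the desire is always recoverable from whichever sources are currently active.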
1.2 Malleable Agents
We wanted the creatures to be malleable in many different ways: we wanted them to learn many different types of thing, and we wanted there to be many different types of situation which would prompt learning.
“Learning” covers a variety of very different skills:
- Learning that (eg: learning that there is a town nearby with plenty of food)
- Learning how (eg: learning how to throw things, improving your skill over time)
- Learning how sensitive to be to different desires (eg: learning how low your energy must be before you should start to feel hungry)
- Learning which types of object you should be nice to, which types of object you should eat, etc. (Eg: learning to only be nice to big creatures who know spells).
- Learning which methods to apply in which situations (eg: if you want to attack somebody, should you use magic or a more straightforward approach?)
The architecture was designed to allow all these different types of learning.
Learning can be initiated in a number of very different ways:
- From player feedback, stroking or slapping the creature.
- From being given a command: when the creature is told to attack a town, the creature learns that that sort of town should be attacked.
- From the creature observing others: observing the player, other creatures, or villagers.
- From the creature reflecting on his experience: after performing an action to satisfy a motive, seeing how well that motive was satisfied, and adjusting the weights representing how sensible it is to use that action in that sort of situation.
The architecture was designed to allow all these different ways in which learning can be initiated.
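The last of these, reflection, can be sketched as a simple weight update: nudge the stored suitability of an action toward the satisfaction it actually produced. The weight table, the names, and the learning rate are illustrative assumptions:

```python
# Sketch of learning-by-reflection: after using an action to satisfy a
# motive, adjust the weight for that (motive, action) pair toward the
# observed satisfaction. All names and constants are invented for
# illustration.

LEARNING_RATE = 0.1
suitability = {("hunger", "eat_villager"): 0.5}  # prior weight

def reflect(motive, action, satisfaction):
    w = suitability.get((motive, action), 0.5)
    suitability[(motive, action)] = w + LEARNING_RATE * (satisfaction - w)

reflect("hunger", "eat_villager", 1.0)  # the action worked well this time
print(suitability[("hunger", "eat_villager")])  # weight moves toward 1.0
```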
All these different types of learning, and different types of occasions which prompt learning, coexist in one happy bundle. I will only go into detail about one of these types of learning: learning which types of object are most suitable for various different desires.
How does a creature learn what sorts of objects are good to eat? He looks back at his experience of eating different types of things, and the feedback he received in each case, how nice they tasted, and tries to “make sense” of all that data by building a decision tree. Suppose the creature has had the following experiences:
| What he ate | Feedback ("how nice it tasted") |
| --- | --- |
| A big rock | -1.0 |
| A small rock | -0.5 |
| A small rock | -0.4 |
He may build the following simple tree to explain this data:
A decision tree is built by looking at the attributes which best divide the learning episodes into groups with similar feedback values. The best decision tree is the one which minimises entropy, a measure of how disordered the feedbacks are.
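A candidate split can be scored like this. Because the feedbacks here are numeric rather than class labels, variance is used below as a stand-in for entropy; the game's actual disorder measure may differ in detail:

```python
# Sketch of scoring a candidate split: the lower the weighted disorder
# of the groups it produces, the better the split. Variance is used as
# a stand-in for entropy over numeric feedback (an assumption).

def variance(feedbacks):
    mean = sum(feedbacks) / len(feedbacks)
    return sum((f - mean) ** 2 for f in feedbacks) / len(feedbacks)

def split_disorder(groups):
    # Weighted average disorder of the groups a candidate split produces.
    n = sum(len(g) for g in groups)
    return sum(len(g) / n * variance(g) for g in groups)

# Splitting the eating episodes above by the size of the rock leaves
# almost no disorder inside each group:
print(split_disorder([[-1.0], [-0.5, -0.4]]))
```

The tree-builder simply tries each available attribute and keeps the split with the lowest disorder, then recurses on each group.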
To take a simplified example, if a creature was given the following feedback after attacking enemy towns:
| What he attacked | Feedback from player |
| --- | --- |
| Friendly town, weak defence, tribe Celtic | -1.0 |
| Enemy town, weak defence, tribe Celtic | +0.4 |
| Friendly town, strong defence, tribe Norse | -1.0 |
| Enemy town, strong defence, tribe Norse | -0.2 |
| Friendly town, medium defence, tribe Greek | -1.0 |
| Enemy town, medium defence, tribe Greek | +0.2 |
| Enemy town, strong defence, tribe Greek | -0.4 |
| Enemy town, medium defence, tribe Aztec | 0.0 |
| Friendly town, weak defence, tribe Aztec | -1.0 |
Then the creature would build a decision tree for Anger like this:
The algorithm used to dynamically construct decision-trees to minimise entropy is based on Quinlan’s ID3 system.
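A minimal sketch of such an ID3-style learner, run over the attack table above. It is adapted to numeric feedback by minimising weighted variance rather than classification entropy; the attribute names, the pruning threshold, and the variance measure are illustrative assumptions, not the game's code:

```python
# ID3-style tree building over the attack-feedback examples, using
# weighted variance of numeric feedback in place of entropy (assumed).

MIN_VARIANCE = 0.05  # stop splitting once feedbacks roughly agree (assumed)

def variance(fs):
    m = sum(fs) / len(fs)
    return sum((f - m) ** 2 for f in fs) / len(fs)

def split_score(examples, attr):
    # Weighted variance of the groups produced by splitting on attr.
    groups = {}
    for ex, f in examples:
        groups.setdefault(ex[attr], []).append(f)
    n = len(examples)
    return sum(len(g) / n * variance(g) for g in groups.values())

def build_tree(examples, attributes):
    feedbacks = [f for _, f in examples]
    if not attributes or variance(feedbacks) < MIN_VARIANCE:
        return round(sum(feedbacks) / len(feedbacks), 2)  # leaf: mean feedback
    attr = min(attributes, key=lambda a: split_score(examples, a))
    branches = {}
    for ex, f in examples:
        branches.setdefault(ex[attr], []).append((ex, f))
    rest = [a for a in attributes if a != attr]
    return (attr, {v: build_tree(sub, rest) for v, sub in branches.items()})

examples = [
    ({"allegiance": "friendly", "defence": "weak",   "tribe": "Celtic"}, -1.0),
    ({"allegiance": "enemy",    "defence": "weak",   "tribe": "Celtic"}, +0.4),
    ({"allegiance": "friendly", "defence": "strong", "tribe": "Norse"},  -1.0),
    ({"allegiance": "enemy",    "defence": "strong", "tribe": "Norse"},  -0.2),
    ({"allegiance": "friendly", "defence": "medium", "tribe": "Greek"},  -1.0),
    ({"allegiance": "enemy",    "defence": "medium", "tribe": "Greek"},  +0.2),
    ({"allegiance": "enemy",    "defence": "strong", "tribe": "Greek"},  -0.4),
    ({"allegiance": "enemy",    "defence": "medium", "tribe": "Aztec"},   0.0),
    ({"allegiance": "friendly", "defence": "weak",   "tribe": "Aztec"},  -1.0),
]
tree = build_tree(examples, ["allegiance", "defence", "tribe"])
print(tree)
# → ('allegiance', {'friendly': -1.0,
#        'enemy': ('defence', {'weak': 0.4, 'strong': -0.3, 'medium': 0.1})})
```

On this data the learner first splits on allegiance (friendly towns are uniformly punished), and only within the enemy branch does defence strength matter; the tribe turns out to carry no extra information, which is exactly the generalisation the player's feedback was trying to teach.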
1.3 Loveable Agents
We wanted the player to feel some sort of emotional attachment to his creature. We soon realised that empathetic attachment is intrinsically reciprocal: the reason it is inappropriate to feel emotionally attached to your TV remote is that your TV remote is not going to reciprocate. Conclusion: if you want the player to get attached to his creature, you must first ensure the creature is empathetically attached to you!
Agents in computer games are at best like severely autistic people: capable of perceiving and predicting the behaviour of objects in the world, but incapable of seeing other people as people – incapable of building a model of another agent’s mind which could be used, to great effect, to predict his actions.
In Black & White, the creature’s mind includes a simplified model of the player’s mind. He watches what actions the player is doing, and tries to make sense of those actions by ascribing goals to the player which would explain those actions. He stores a simple personality model of the player, which he uses in decision making. As well as a model of what he thinks the player is like, he also has goals which relate directly to his master: the desire to help his master, the desire to play with his master, and the desire for attention.
If we want to enable agents to build mental models of others (and we should want to do so very much), the first thing we must do is ensure our architecture is sufficiently clean that a useful (and short) description of the current mental state can be read off from the actual mental state of the creature. It is easy to read off a clear description of a part of a mental state if the architecture is organised around symbolic data-structures; but if the architecture is a net with merely numerical connections, it is holistic and opaque, making it doubly difficult to extract a short description which is useful for making predictions.
So, if we want the creatures to be capable of modelling other creatures' minds, we must design the architecture around symbolic data-structures. But this does not mean we should ignore all that non-symbolic learning has to offer: the creature architecture uses threshold functions to adjust desire tolerance, and entropy functions to estimate the amount of noise. But these soft, fuzzy functions are housed within the hard framework of a symbolic architecture.
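The point about reading off a short description can be made concrete with a toy symbolic state; the field names below are invented for illustration:

```python
# Toy illustration: a symbolic mental state supports a direct, short,
# predictive summary, where a flat vector of connection weights would
# not. The fields are invented; the game's state is far richer.

mental_state = {
    "strongest_desire": "Hunger",
    "current_intention": "eat",
    "target": "apple",
}

def describe(state):
    # Read the description straight off the structure, field by field.
    return (f"wants to satisfy {state['strongest_desire']} by doing "
            f"'{state['current_intention']}' to the {state['target']}")

print(describe(mental_state))
# → wants to satisfy Hunger by doing 'eat' to the apple
```

A second creature could store exactly this kind of compact summary as its model of the first, which is what makes symbolic architectures hospitable to mutual modelling.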
2. Making a Person Useful: Autonomy Can Go Too Far!
The creatures in Black & White had to be person-like, but they also had to be useful. The person-like requirement implies the creatures are autonomous, whereas the usefulness requirement seems to preclude too much autonomy. How can we resolve these conflicting requirements? The solution we arrived at was that creatures start off completely autonomous, but over time, through training, you can mould them so that they only do what you want them to do. This gives the player the enormous feeling of satisfaction that he has trained his creature to actually be useful in the game! The down-side is that your creature loses something of his charm the more you train him: he becomes more focussed on a few goals in a few situations on a few types of objects: as he becomes more useful, he becomes more “robotic”.
3. Future Directions: Extrapolating from Black & White
3.1 Person-Like Agents
What could we do to make more realistic agents?
- An infinite number of goals. Creatures in Black & White have a finite number of goals. People, by contrast, have an indefinite number of goals: you can want to write a story, you can want to write a detective story, you can want to write a detective story in which the detective is a woman, you can want to write a detective story in which the detective is an old, cantankerous woman, and so on. Desires have the same combinatorial structure as human language itself, and this needs to be captured explicitly.
- An infinite number of ways of satisfying goals: real-time planning. Creatures in Black & White use plan-libraries to work out what to do: the creatures know (at birth, if you like) that there are k ways of satisfying a particular goal. When they try to satisfy that goal, they just go through those k ways, seeing which is most suitable in the current situation. People, by contrast, sometimes use real-time planning, where they generate novel ways of satisfying a goal by trying out various options, and considering the results of those actions. This is computationally expensive, but gives the agent a freedom and a flexibility currently missing.
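The plan-library approach described in the second point can be sketched as scoring each of the k known plans against the current situation; the plan names and the hand-coded suitability rules below are invented for illustration:

```python
# Sketch of plan-library selection: a goal is born knowing k plans, and
# the creature simply scores each against the current situation. The
# plans and suitability rules are illustrative assumptions.

def suitability(plan, situation):
    if plan == "attack_with_magic":
        return situation["mana"]      # magic is best when mana is high
    if plan == "attack_by_hand":
        return situation["strength"]  # brawling is best when strong
    return 0.0

def choose_plan(plans, situation):
    return max(plans, key=lambda p: suitability(p, situation))

plans_for_attack = ["attack_with_magic", "attack_by_hand"]
print(choose_plan(plans_for_attack, {"mana": 0.2, "strength": 0.9}))
# → attack_by_hand
```

Real-time planning would replace the fixed list `plans_for_attack` with a search over action sequences, which is exactly where the extra computational cost, and the extra flexibility, comes from.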
3.2 Empathetic Agents
If we want to make more plausible agents, we enrich the mental model of the agent. If we want to make more empathetic agents, we enrich the mental model which the agent uses to model other agents. (These two are quite distinct: the latter is invariably going to be simpler than the former, for space-efficiency reasons – the agent is going to have models about lots of different agents, so these models should be small). The creatures in Black & White have simple models of other agents’ minds: they just model the desire part of the architecture. Wouldn’t it be nice to add more?
The trouble is that the more we enrich the agent’s model of other agents, the harder it is for the agent to figure out what the other agent is thinking. For instance, suppose our agent’s model of another agent includes data about the other agent’s beliefs as well as his desires. Then we have made the task of understanding the other agent considerably harder, because there will be more models which fit the data, and it will be harder to figure out which is best. Suppose, for instance, that an agent fails to eat the apple. This might be because he hasn’t seen the apple (and consequently has no belief about it), or because he doesn’t like apples, or because he just isn’t hungry. Which of these is the right explanation? We can’t tell until we have seen a lot of examples. (This problem just doesn’t arise if you keep an excessively simple model of other agents: if you just model them as a bunch of desires, then the only possible explanation is that he isn’t hungry). (There are proposed solutions to this in the philosophical literature: the Principle of Charity solves the apple problem by assuming the agent’s beliefs are correct, but if we are going to assume this across the board, then there is no point in modelling beliefs at all).
There are three features of Black & White which will become increasingly commonplace:
- The creature’s mind includes both symbolic and connectionist representations, happily coexisting in one unified architecture.
- Creatures are both person-like and useful in the game.
- Creatures are empathetic (this is clearly an aspect of being person-like, but is sufficiently important to be stated separately).