CHAPTER IV: SIMULATION AND INTERPRETATION
(This chapter is part of Gonzalo Frasca's thesis.)

1. Simulation and Semiotics

The fact that simulation theory may have taken for granted that simulations can have different interpretations reminds me of the difference between the Saussurean and Peircean models of the sign (Eco, 1976). While Saussure's model distinguished between two poles of the sign (the signifiant, understood as the material manifestation of the sign, and the signifié, which is the concept that it refers to), Peirce introduced a new element, the interpretant.

Figure 2 - Peirce's triadic model of sign

Peirce defines a sign as "something which stands to somebody for something in some respects or capacity". He proposed a model that included three categories (Figure 2): the sign (or representamen, which is equivalent to Saussure's signifiant), the object, and the interpretant, which he defines as the product of the interpretation of the sign in somebody's mind. Umberto Eco (1976) describes Peirce's interpretant as the "definition" of the representamen, "as another representation which is referred to the same 'object'". Peirce states that a sign is a sign because it is interpreted by somebody, and that interpretation creates a new sign, the interpretant, which would be an idea that the observer has about the original sign.

As we have seen in the example of the differences between my interpretation of Tetris and Murray's, the interpretation of simulations, like any semiotic interpretation, cannot escape the fact that different observers may interpret a simulation differently and, therefore, associate it with different source systems. However, as I will show, different interpretations of a simulation may not be caused simply by two observers having different concepts of the source system being simulated. I will explain this by using two examples. The first is Pong, the classic videogame, and the second is a toy.

In Pong, the first highly popular arcade videogame, the player controls a paddle and must use it to hit a ball. While its original name clearly refers to another game, ping-pong, it would make perfect sense to say that it is a tennis videogame. Obviously, ping-pong and tennis are structurally very similar games, but they are still different. The fact that the first is also called "table tennis" shows both its similarities and its main difference: the use of a table instead of a tennis court.

As a simulation, Pong represents a complex system through a less complex one. During the process of abstracting the source system, Pong kept only some of its characteristics: the ball and the paddles, plus an abstract delimited space. Since ping-pong and tennis are very similar systems, their abstract simulations can be very similar, and the final system could be interpreted as either of them by a player. Pong could have been marketed as either a tennis or a ping-pong simulation without any problem.

Let's suppose that the worst videogame player on Earth, who until that moment has never heard of Pong, decides to give it a try. This player does not understand that he has to hit the ball back. Instead, he thinks that the computer-controlled paddle throws the ball at him and that his goal consists of dodging it. This player would not recognize tennis as the simulation's source system. Instead, he might think that this is a weird soccer penalty-kick simulator, or that the thing does not make any sense at all. His interpretation would be different from that of somebody who is very experienced in Pong. The fact that they think of different source systems is not caused by their having different ideas of what penalties and tennis are, but by their particular perceptions of the model. While one saw a model with the basic rules of a penalty game, the other perceived a model with the rules of tennis. These different interpretations are caused by the particular experience that each player had with the model. Interpretation depends not only on the idea that the observer has of the source system, but also on the idea that the observer has of the model.

I will propose a second, simpler example to explain this situation. Imagine that we have a toy representing a robot. If we analyze it as a Peircean sign, we can say that we have a representamen (the toy), which represents something else (a robot, the object) according to the observer's concept of "robot", the interpretant ("if it is anthropomorphic but made of mechanical parts, it's a robot"). But there is a very particular kind of toy, known as a "Transformer". Based on a Japanese animated television series, the Transformers are robots that can transform themselves into different machines. When you first open a box containing a Transformer, you see a puppet with all the characteristics of a robot. After certain manipulations --which may be tricky and, in certain cases, puzzle-like-- the robot can be transformed into, let's say, a plane. The toy is articulated, made of connected moving parts, but at no point do you have to dismantle it into separate pieces: the transformation takes place without the toy losing any matter. Obviously, the toy has two different states: robot and plane. Each of them can be understood using the triadic sign (Figure 3). Our problem starts when we try to understand the Transformer as a whole. Is it a robot, a plane, or both at the same time?

Figure 3 - Is the Transformer a robot, a plane or both?

Imagine that we gave a Transformer to a child who has never watched the television series and is not familiar with the toy's ability to change. If the transformation is not easy to perform --actually, it is quite common that you have to use a lot of pressure to transform the toy-- the child will just use it as a robot and never discover that it could also become a plane. In order to fully appreciate the toy you need something more than the mere object: you need a rule of behavior. In this case, the rule is "if you perform certain movements, your toy will change its state". Without that rule, the toy is simply a robot; with it, it becomes a Transformer, a dual-state toy. Peirce's model of the sign does not take into account this inner mechanism that can modify the representamen and transform it into something else.
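The rule of behavior described above can be sketched as a minimal state machine. This is only an illustrative sketch; the state names and the `transform` operation are my own labels, not part of the toy's design:

```python
class Transformer:
    """A dual-state toy: its representamen changes when a rule is applied."""

    def __init__(self):
        # Out of the box, the toy presents itself as a robot.
        self.state = "robot"

    def transform(self):
        """The rule of behavior: certain manipulations flip the state."""
        self.state = "plane" if self.state == "robot" else "robot"
        return self.state


toy = Transformer()
print(toy.state)        # a child who never applies the rule only ever sees "robot"
print(toy.transform())  # applying the rule reveals the second state: "plane"
```

The point of the sketch is that the object alone does not reveal its second state: only an observer who knows (or discovers) the rule ever calls `transform` and perceives the toy as dual-state.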

Before going further, it is important to make sure that the reader is not thinking that the Transformer issue is nothing more than an interpretation problem. There is a classic example in semiotics about different interpretations: the color black is conventionally used by Westerners to show grief after somebody's death, while some cultures use the color white. Facing a woman dressed in black, a Frenchman's interpretation of her feelings will differ from a Chinese observer's. In this case, the representamen (the woman in black) remains the same. What varies is the interpretant. In the case of the Transformer, the representamen does change depending on the player's actions. Of course, the interpretation that the player makes of the resulting sign may differ. If we created a robot that could be transformed into a doll dressed in black, it would be open to different interpretations. Our Transformer toy is not just open to different interpretations; unlike most signs, it can change its representamen through the performance of the player applying a particular rule. Of course, it is possible to consider the Transformer not as one toy, but as two. To do this would be to avoid the issue without trying to understand the essence of the problem. The Transformer was designed to have two different states. There is a specific rule that transforms it from robot to plane and vice versa.

Since Peirce's triadic model of the sign does not take into account that the representamen could be dynamic, it seems that it would need to be expanded in order to explain simulations (for the sake of simplicity, we will keep using toys as examples of simulations; however, this could be applied to more complex simulations).

2. Mental Model and Simulation

Peirce suggested that there is no universal concept for an object. For example, different observers have different ideas (interpretants) of what a tree is. A botanist may think of it in more detail than somebody who lives in a desert and has had very little contact with trees. A similar thing happens with simulations. Based on our previous example, it would seem that, at least in the case of simulations, representamens are not fixed entities: they also depend on the observer's idea of what they are. Again, the idea that an observer has of the Transformer after playing with it for just two minutes is different from the idea held by somebody who has owned one for a year. Actually, there is a category in human-computer interaction (HCI) theory that describes exactly this missing category: the mental model. Philip Johnson-Laird introduced the concept of the mental model in his book Mental Models (1983) and, since then, it has become a crucial concept in HCI. In The Design of Everyday Things (1990), Donald Norman explains the concept of the mental model as the idea that a user has of a system based on her interactions with it.

People form mental models through experience, training, and instruction. The mental model of a device is formed largely by interpreting its perceived actions and its visible structure. (Norman, 1990)

However, HCI theorists' idea of the interpretation of simulations relies heavily on the designer's intention. They usually pay attention to what the author meant and not to what is interpreted by the observer, as this quote from an HCI manual shows:

Mental models are often partial: the person does not have a full understanding of the working of the whole system. They are unstable and are subject to change. They can be internally inconsistent, since the person may not have worked through the logical consequences of their beliefs. They are often unscientific and may be based on superstition rather than evidence. However, often they are based on an incorrect interpretation of the evidence. (Dix, et al., 1993)

The key to this quote lies in the words "incorrect interpretation". Semiotics only analyzes interpretations: it analyzes the sign as such, independently of the intentions of the entity that emitted it. HCI's goal, on the other hand, is to make sure that the designer's intentions match the user's interpretation; in other words, that the user's mental model is identical to the design model.

In the example of the Transformer, it is possible to say that the interpretation of an observer depends on her mental model of the toy, on the idea that she has of what the toy's behavioral rules are. Therefore, I propose to borrow the concept of the mental model from HCI and incorporate it as a new category in Peirce's model of the sign. By doing this, we will have an expanded model that is able to explain the Transformer in particular, and simulations in general, as signs (and that will therefore allow us to understand how the interpretation process of simulations works). To be coherent with Peirce's terminology, I propose to call this category the interpretamen (since the mental model is to the representamen what the interpretant is to the object), understood as the idea, or mental model, that an observer has of the representamen.

Figure 4 - Observer A views the Transformer as a toy plane.

Figure 5 - Observer B views the Transformer as a toy that can be transformed into either a plane or a robot.

Figures 4 and 5 show two different observers' interpretations of a Transformer. Observer A (Figure 4) was given a Transformer without knowing that it could be transformed into different states. Instead of considering the Transformer a multiform toy, observer A viewed it as a static toy. In this case, the representamen is the plastic toy object, the interpretamen is "articulated object with the shape of a plane", the interpretant is the particular idea that the observer has of planes (for example, that they are metallic and have wings), and the object is the ideal concept of a plane. In the second case (Figure 5), observer B was able to interact with the Transformer and changed its shape into a robot. Therefore, observer B had a different interpretamen of the representamen. In this case, the interpretamen could be described as "articulated object that can be transformed into two different objects: a robot or a plane". Observers A and B interpreted the Transformer differently: one recognized it as a plane, while the other interpreted it as a dual-state toy. These interpretations were a consequence of the different ideas, or interpretamens, that the observers had of the Transformer as a system (or as a sign, or model): one viewed it as a plane, while the other recognized it as both a robot and a plane.
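The contrast between the two observers can be restated as data: both share the same representamen, but carry different interpretamens, which in turn yield different interpretants. A minimal sketch, with the field names following the expanded model proposed above and the values paraphrased from Figures 4 and 5 (the record type itself is my own illustrative construction):

```python
from dataclasses import dataclass


@dataclass
class ExpandedSign:
    """Peirce's triad plus the interpretamen (the observer's mental model)."""
    representamen: str  # the material sign itself
    interpretamen: str  # the observer's mental model of the representamen
    interpretant: str   # the observer's interpretation of the sign
    obj: str            # the object the sign stands for


observer_a = ExpandedSign(
    representamen="plastic toy",
    interpretamen="articulated object with the shape of a plane",
    interpretant="a toy plane",
    obj="plane",
)

observer_b = ExpandedSign(
    representamen="plastic toy",
    interpretamen="articulated object that can become a robot or a plane",
    interpretant="a dual-state toy",
    obj="plane / robot",
)

# Same representamen, different interpretamens -> different interpretations.
assert observer_a.representamen == observer_b.representamen
assert observer_a.interpretamen != observer_b.interpretamen
```

The sketch simply makes the fourth category explicit: removing the `interpretamen` field collapses the two observers into the same triadic sign, which is exactly what the unexpanded Peircean model cannot distinguish.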

As Murray (1997) states, one of the main pleasures of digital artifacts is, precisely, transformation. By applying a rule of behavior (i.e., manipulating the toy in a certain way), the player discovers that the robot can become a plane. In other words, the player discovers the possibilities of the system through manipulation. As Aarseth (1997) explains, this manipulation is not trivial, like the flipping of pages in a book, but requires that the player engage in a process of decision-making that will affect his experience of the system. This process of manipulation and transformation is what makes possible the interpretation of the multiple facets of a simulation.

With the example of the Transformer, I have shown that simulations allow two different kinds of interpretation. One is the traditional kind, as described by semiotics. The second is the one that the observer makes of the representamen as a system, represented by the interpretamen, and it is based on the personal experience that she has had with it.

In my expanded version of Peirce's sign, the representamen is not static but works rather as a machine that produces different signs (interpretamens) for different users. This is exactly what Aarseth meant by cybertexts as machines, but in an expanded version that can be applied not only to texts but to any simulation.

3. Semiotics 1.0

Before moving on, I would like to describe in this chapter how my expanded Peircean concept of the sign can be applied not only to simulations, but to any kind of sign. What I have done is to separate Peirce's category of the representamen into two different ones: representamen and interpretamen. Traditional semiotics does not differentiate these two categories because, in general, signs have only one state: they remain unmodified for different observers. However, this does not apply to some particular examples, such as cybertexts, toys, or the works of art that Umberto Eco (1989) described as "works in movement". In The Open Work, Eco defines the "work in movement" (here "movement" must be understood in the same way as Murray's concept of "transformation" in computer software) as a category that includes, among others, Calder's mobiles. The concept of "works in movement" is clearly defined by David Robey when he states:

what such works have in common is the artist�s decision to leave the arrangement of some of their constituents either to the public or to chance, thus giving them not a single definitive order but a multiplicity of possible orders. (in Eco, 1989)

Calder's mobiles can be quite immobile if there is no wind. Therefore, the perception of the mobile itself, as a representamen, will vary depending on the amount of wind at a particular moment (or on the ability of the observer to produce wind or to push the structure to make it move). Again, an observer who sees the mobile without wind would consider it simply a statue, without learning of its ability to move. The interpretamen in this case will be different from that of an observer who can see it moving, whether their interpretations (interpretants) turn out to be similar or different.

This effect can also be found in more traditional works of art. The interpretamen of an observer who only sees a statue from a single angle is different from that of somebody who can walk around it. The second can perceive details that the first could not, such as the sculptor's signature, which could affect his interpretation of the work. The same happens in the process of skipping words, paragraphs, or even pages of a book, as described by Barthes (1973) in Le plaisir du texte. While the book itself will remain the same as a physical object, two readers will have a different exposure to the text if one of them systematically skips certain chunks. This is not the same as saying that the two readers will interpret the text differently. What happens is that the idea (interpretamen) that they have of the text is different. This becomes evident in works such as hypertexts or Cortázar's Hopscotch (1994). If somebody reads the novel starting on the first page in a linear way, the idea that she will have of the text will be significantly different from that of somebody who followed a random pattern. Thus, both share the same representamen (the book), but different interpretamens (the text as crafted by their different readings). The interpretants (their personal interpretations) could be the same or different, but it is likely that the more their interpretamens differ, the more different their interpretants will be.

In a more subtle way, it is possible to apply the concept to painting. The cathedral in the French city of Rouen is famous for having been painted by Monet at different times of the day and the year, each painting being different in color and shades. If somebody sees the cathedral in a single photograph, her interpretamen will be different, and narrower, than Monet's, who was able to perceive the same object under various lighting conditions. The cathedral (representamen) is still the same; what changes is the way light reflects on it and its perception by different observers.

In some cases, such as words or other graphic signs, the difference between interpretamen and representamen is very subtle or almost non-existent. In general, the concept might be most useful for analyzing certain particular cases, like the ones that we have previously described. By incorporating the interpretamen, we were able to integrate simulation with "traditional" semiotic representations. Simulations are not essentially different from other representational objects, since, as we have seen, most signs can produce different "readings" of their representamen. Still, some simulation representamens are far more complex than a painting and, even if they are related by the continuum between their interpretamens, it is advisable to classify them as different genres of signs.

My goal in this section was to understand how interpretation works in simulations, in order to later analyze videogames. I have suggested an expanded version of Peirce's sign by incorporating the interpretamen, defined as HCI's mental model of the sign. As we will see later, the incorporation of this fourth category will allow me to better understand how some of Augusto Boal's theatrical techniques work. While I do believe that this expanded explanation of signs could shed more light on the understanding of how simulations work as representations, I do not think that a semiotic analysis would be enough to fully explain simulations and videogames. When he analyzed previous attempts to explain computer software through semiotics, Aarseth (1997) affirmed that this approach "is not beneficial as a privileged method of investigation". While I agree in general with Aarseth's claim, I do think that an expanded semiotics that takes simulations into account would be helpful for understanding the basis of how videogames work as a representational medium. Other, non-semiotic tools that focus on the internal rules of simulations, like the concepts of ludus and paidea that I have proposed, are also needed to understand how videogames behave.
