Debating about Animal Signals & Communication
Letters to John Maynard Smith: July 18, 2000
Letters to JMS:
The following is my first letter in response to John Maynard Smith's request for clarification about terms I used for different signal types.
18 July, 2000
Thanks for your kind letter of May 31st. I received it only a few days ago (on July 13th), so I must apologize for not responding sooner.
I congratulate you on the effort. The trick, in my opinion, is not to make certain definitions “official”, but to clarify where the differences come from. This is not an easy task, because one should give the different schools of definition their due: each makes sense, given its own assumptions and ways of thinking. A good clarification of the issue will come from understanding the sources of the differences, rather than from an authoritative statement alone (i.e., unlike what most people have done, myself included; sometimes I find it hard even to get people to talk about their own assumptions and logic). I wish you luck, and shall do my best to clarify my assumptions, methods and reasoning (Table 1 in Hasson 1997 exposes most of the assumptions and definitions, though, regarding your questions, I suppose some verbal explanation and a few examples are still required). If you need any further help, please let me know.
I shall probably repeat below things we agree upon, so I apologize in advance. I just want to make sure I am perfectly clear, and that there is no further misunderstanding (though perhaps a few disagreements).
The definition of “signals”
I don’t think you and I have any conflict over what signals are. There are, however, some difficulties with current definitions, which I believe I have resolved in a recent paper. I’ll try to explain this point below, in this section.
My 1997 paper (toward a general model, following my 1994 paper on cheating signals) went a long way toward clarifying what signals are, as you and Harper pointed out (with regard to the 1994 paper), but the definition was still incomplete.
I am afraid your current definition, “an act or structure that alters the behavior of another organism and which evolved because of that effect”, is insufficient as well. It allows pushing, shoving and other coercive activities to count as signals, and then one has to exclude each of them by additional verbal explanations. We want a definition that needs no appendices specifically excluding certain activities, but one that is sufficient by itself. My 1997 definition advances toward eliminating non-signaling influences on recipients, but not all the way. I briefly talked to Alan Grafen about a year ago, and he obviously had a problem with the signal definition precisely because of this issue (coercion). Hence he and Amotz define signals more or less as strategic handicaps. Amotz has his own reasons; I often think I know how he thinks until he surprises me again by being so much more creative than I could previously have imagined. Grafen’s view, I think, comes mainly from his method of thinking, i.e., game theory: amplifiers, indices, camouflage, and cheating in general are, on that view, different enough from strategic signals to be excluded. However, defining signals as strategic handicaps is obviously much too restrictive. No wonder Amotz thinks all signals must be handicaps, and Grafen (1990) thinks that there is signalling and there is its exploitation (cheating), which isn’t signalling. I often get the impression that when I push him a little, Amotz prefers to avoid a definition of signals altogether. I know you and I believe it is best to include the different cheating types within the definition of signals.
In 1997 (following “Cheating Signals”, 1994) I defined signals as traits that impose “a non-negative cost on their bearers’ F-component, and a positive specific effect on their bearers’ S-component stemming from their potential to influence the behavior of other individuals.” The S-component is the complex of fitness features that are affected (for better or worse) by other individuals with whom signalers interact, and which therefore changes as a result of signalling. The F-component is the complex of all other features that affect fitness. To put it simply, signals are defined there as traits (acts, structures or chemicals, if you wish, but also touch [tactile communication] and certainly sound) whose sole benefit stems from changing the behavior of other individuals. This definition still does not exclude coercive activities.
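If it helps to see the bookkeeping concretely, the 1997 definition can be read as a simple classification rule over the two fitness components. The following is a toy sketch of my own construction (the function name and numbers are made up for illustration, not part of the formalism):

```python
def classify_trait(delta_f, delta_s):
    """Classify a trait by its fitness effects, following the verbal
    1997 definition: a signal imposes a non-negative cost on the
    F-component (so delta_f <= 0) and has a positive specific effect
    on the S-component (delta_s > 0) that stems from influencing the
    behavior of other individuals.

    delta_f: effect on the F-component (all fitness features not
             mediated by other individuals).
    delta_s: effect on the S-component (fitness mediated by the
             behavior of other individuals).
    """
    if delta_f <= 0 and delta_s > 0:
        return "signal"
    return "non-signal"

# A costly ornament that improves mating success:
print(classify_trait(-0.2, 0.5))  # signal
# A foraging improvement with no social effect:
print(classify_trait(0.3, 0.0))   # non-signal
```

Note that a trait with zero F-cost still qualifies (“non-negative cost”); what the rule cannot do, as I say above, is separate signals from coercion, since shoving a rival also trades an F-cost for an S-benefit.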
Recently I have published a paper that solves this problem (Hasson O. 2000, Knowledge, information, biases and signal assemblages. In: Espmark, Y., Amundsen, T. & Rosenqvist, G. (Eds.), Animal Signals. Signalling and Signal Design in Animal Communication, pp xxx-xxx. The Royal Norwegian Society of Sciences and Letters. The Foundation of Tapir Publishers, Trondheim, Norway.) This publication should be out very soon, if it is not already out. I have enclosed a hardcopy of this paper.
My first problem there was to make a distinction between “information” and “knowledge”. Buds of this view can be seen in my 1997 paper, where I used the “information state” of signal recipients (rather than just “information”), defined in the 2000 paper as “knowledge”. More specifically, in the 2000 paper I defined “knowledge” as the values (and their variances) an individual assigns to all variables that affect its decisions about future actions. Hence, knowledge refers to individuals, whereas information is assigned to cues and signals. Knowledge may change in response to signals, cues, and logical reasoning. By definition, knowledge increases if this change improves, on average, the individual’s performance (usually in units of fitness; an exception may be some purposeful human behaviors that reduce fitness [e.g., contests for status, self-sacrifice, religious activities etc., which may not increase fitness even on average; our reasons for employing them are unimportant for the argument]). Similarly, knowledge decreases, by definition, if performance is poorer (e.g., as a result of a cheating signal). This is discussed more fully in my 2000 paper.
Equipped with such a definition of “knowledge”, it is now easy to define signals as “characters that evolve because they change recipients’ knowledge, hence their behavior” (in the 2000 paper). This removes entirely the question of coercion, because it defines signals by their effect on the CHOICE of behavior, and excludes forced behaviors. It also avoids the sometimes problematic use of the word “choice”, which may imply, to some people, a higher order of intelligence. In addition, some people see it as a problem when they observe signals being given with no effect on recipients. By having their effect on “knowledge”, signals may be given with no apparent response. This may be the case, for example, if signals only reduce the estimated variances, but not the means, of certain variables that affect decisions. In the long run, sometimes the very long run, this may nevertheless change behavior. I don’t think we should have any disagreement here.
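The variance point can be made numerically. Here is a toy sketch (again entirely my own illustration; the decision rule, threshold and numbers are invented): a recipient that discounts uncertain estimates will change its behavior when a signal narrows its estimated variance, even though the estimated mean is untouched.

```python
def act(mean, var, threshold=0.5, risk_aversion=1.0):
    """A risk-sensitive recipient: approach the signaller only if the
    risk-discounted estimate of its quality clears a threshold.
    'mean' and 'var' are the recipient's current estimate of the
    relevant variable (its 'knowledge', in the sense defined above)."""
    return (mean - risk_aversion * var) > threshold

mean, var = 0.8, 0.5
print(act(mean, var))               # False: too uncertain to approach

# The signal leaves the mean unchanged but reduces the variance:
var_after_signal = 0.1
print(act(mean, var_after_signal))  # True: the recipient now approaches
```

The same signal, repeated, could shave the variance only slightly each time, which is why the behavioral response may appear only in the long run, as I argue above.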
Cues and Signals
It is easy now to make the distinction between cues and signals. Both cues and signals change the knowledge of individuals who perceive them, but signals evolved in organisms because they change recipients’ knowledge (i.e., this is the evolutionary force that gives them a selective advantage), whereas cues are simply there, or evolved as a result of other causes or constraints.
Your definition “A feature of the world, animate or inanimate, that can be used by an animal [should be “organism”] as a guide to future actions” does not exclude signals, but you surely mean here a change in “knowledge”.
In the following I shall try to explain things that may require some clarification, based on your letter. I shall not try to convince you mine is the only way to look at signals, because it is not. I shall try to explain why, for me, this seems the most intuitive, simple and thorough method of looking at signals, and why I have defined and used them the way I did.
My choice was to classify signals by their effect on their carrier’s fitness components, and therefore by evolutionary mechanism. The reason for this probably arose from my attempt to distinguish the handicap mechanism from that of amplifiers/attenuators. Handicaps work as a result of tradeoffs between costs and benefits in the F- and S-components, respectively, whereas amplifiers/attenuators are the result of effects that occur mostly in the S-component.
Amplifiers and indices
Here is the major difference between an index and an amplifier, according to my terminology, which you and Harper (1995) failed to see: for some pointing signals, both low-quality and high-quality individuals benefit by making the signal. A cat arching its back is a good example, because any cat that does not arch its back appears smaller than it could have appeared, or smaller relative to others that do arch, and will be at a disadvantage in its S-component. Hence, the evolution of such signals is really simple and straightforward to understand: all individuals benefit by employing the signal and, at equilibrium, all advertise their maximum size, and the signal is reliable (though not necessarily along the evolutionary path toward equilibrium).
However, assume a signal improves the perception of a quality, without changing the perceived quality itself (unlike erection of fur or arching the back, which change the perceived cue upon which recipients base their estimate of size). For example, fanning the tail improves perception of the tail’s quality (its dimensions, its condition, or both). Here, individuals of high quality obviously benefit, but individuals with poor tails lose by fanning the tail. Hence, assuming a fixed behavior (i.e., not condition dependent), the evolution of tail-fanning is not as easy to explain as the evolution of a cat arching its back, because some individuals (the poor-quality ones) actually lose by fanning the tail. Will such fixed traits evolve? My 1989 paper (JTB) and my joint paper with Cohen and Shmida (1992) explore this possibility.
It is easy to see that behavioral amplifiers can become condition dependent. For structural amplifiers (for a good example in a jumping spider, see Taylor, P.W., Hasson, O. and D.L. Clark (2000) Body postures and patterns as amplifiers of physical condition. Proceedings of the Royal Society, London, Series B 267: 917-922; I don’t have a reprint yet, so I can’t send you one, sorry), condition dependence may not be possible, if quality fluctuates after the signal has developed. For such cases, the mathematical model is relevant.
The distinction between amplifiers and indices is, therefore, real. It reflects two distinct mechanisms. Both, however, are signals that affect the perception of other cues or signals, such as size or condition. The perceived action of arching the back may or may not change a recipient’s knowledge by itself, but the change in perceived size does. Similarly, fanning the tail, or bars across feathers, may or may not change knowledge by themselves, but they do change knowledge through a change in the perception of the tail or of feather condition, respectively. This pure effect is common to both indices and amplifiers/attenuators, hence I called them “pointers”: both point at other cues or signals.
Hence, the way I see it, you call “indices” what I call “pointers”, but you have then failed to see the distinction between what I call an index and what I call an amplifier, by failing to look at their effect on poor-quality signallers. The distinction between amplifiers and indices can be summarized in three points:
1. If poor-quality individuals benefit from a reliable pointer, it is an index; if they lose by the improved perception, it is an amplifier, and my 1989 and 1992 models are appropriate to explain it. For this reason, for example, indices should not be condition dependent (unless small individuals benefit by avoiding confrontation altogether), whereas amplifiers, under some conditions, should be.
2. An amplifier is reliable in the first individual in which it is performed, whereas an index is reliable only at equilibrium. If an index first arises in a small cat that appears bigger than it really is because it arches its back, it is not reliable. However, since it is beneficial to all of its carriers, it should spread to all individuals (as long as it spreads fast enough, before a counter-response, a change in perception, evolves). Unlike an amplifier, it becomes reliable only at equilibrium.
3. Amplifiers have evolutionary counterparts, attenuators. Attenuators and amplifiers evolve by the same mechanism, only with different values assigned to the variables that affect them. Because their evolutionary mechanism is different, indices have no such evolutionary counterparts.
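The payoff contrast in points 1 and 2 above can be sketched in a toy model (my own illustrative construction; the payoff functions and quality values are invented, and stand for S-component effects only):

```python
def index_payoff(true_size):
    """Arching the back (an index): at equilibrium every cat displays
    its maximum apparent size, so relative to a non-archer every
    signaller gains, whatever its true size."""
    return 0.1  # a positive S-effect for all qualities

def amplifier_payoff(tail_quality, mean_quality=0.5):
    """Fanning the tail (an amplifier): it improves recipients'
    perception of the TRUE tail quality, so above-average tails gain
    and below-average tails lose by being seen more accurately."""
    return tail_quality - mean_quality

for q in (0.2, 0.8):  # a poor-quality and a high-quality signaller
    print(q, index_payoff(q) > 0, amplifier_payoff(q) > 0)
# The index pays for both; the amplifier pays only for the high-quality tail.
```

This is exactly why the index is easy to explain (everyone gains, reliability arrives only at equilibrium), while a fixed amplifier needs the 1989 and 1992 models: its poor-quality carriers lose from the improved perception.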
Pointers and Activators
The reason I made the distinction between pointers and activators was that I wanted to put indices and amplifiers under a single umbrella, distinct from handicaps and Fisherian characters. “Pointers” is, in my opinion, a good term, because both amplifiers/attenuators and indices point at other cues or signals that provide information about quality. If the response of a recipient is based on the expression or intensity of a signal itself, the signal is an “activator”. Handicaps and pure attractors or repellents are activators. Amplifiers, indices and attenuators may become activators (amplifiers/attenuators because they may become condition dependent, indices because they may show readiness for a confrontation, i.e., intentions rather than size differences), but this is not their pure effect. If they become activators, other evolutionary mechanisms come into action.
Another point that was, perhaps, unclear to you, was my use of “choice-based environment”. Clearly, in my 1997 paper, I did not use this as a classification of signals. I did in my 1994 paper (Cheating signals, to which you and Harper responded), an approach that I preferred to abandon. I used the term “environment” to indicate that certain forces operate on the communicating interactors (signaller and recipient). In such systems, interactions are based on the need of some participants to select among others, and it is often members of the selected party who signal in order to change knowledge of selectors, and change their choice to their own benefit. This is the case in “assessment signals”, but not in “recognition signals” where organisms that make choices may also signal (e.g. mimicry or camouflage).
Most signalling is, therefore, based on choice, but some are not (Hasson 1997). When they are not, different variables become relevant (see assumptions in Table 1 there).
As you can see, I have not classified signals according to the mechanisms that keep them reliable, but according to the forces that affect their evolution, i.e., according to evolutionary mechanism. Some mechanisms keep signals stable and reliable, others keep them stable but unreliable, and some signals are, in my opinion, evolutionarily unstable (e.g., Fisherian).
Hasson’s classification of signals:
A. Environment based on choice:
An evolutionary environment (i.e., a host to certain selective forces) in which some individuals select their interactors from among a group of other individuals. Both parties, those who select and those from whom selection is made, may signal. Assessment signals, however, are given only by the latter, in order to change selection to their own benefit, whereas mimicry and camouflage, and on rare occasions maybe even identity signals and attention signals, can be given by both.
i. Handicaps, Bluffs
Handicaps are reliable because they are costly… Bluffs are practically handicaps, but they are given by individuals that have somehow managed to avoid some of the constraints shared by other signallers.
[As to your question “how?”, I explained this in Hasson 1994, 1997. Two examples. One: if the environment changes (e.g., fewer predators), or the genetic background, habits or habitat change such that, say, new types of food become available, the cost of producing or maintaining the handicap is reduced (for all, or for a group of individuals within the population). Both cases are unstable. Two: when individuals do not have to consider constraints that most individuals do, such as reserving resources for another breeding season. Old individuals may not reserve resources and may make, during their last breeding season, a last, major effort, a “swan song” if you like, thereby producing a signal greater than that given by younger individuals of the very same quality. Here, bluffs are stable, as the population is ever replenished with old individuals, and recipients’ responses must take this into consideration. As reliability is in the eyes of the recipient, a bluff is an unreliable indicator of quality (it reduces the recipient’s knowledge). However, if age, or a last breeding season, can be recognized, and the signal is judged and compared only among peers (in their last breeding season), then the signal should be considered a regular handicap, because among individuals that share the same constraints it is a handicap, and obeys the same rules. Bluffs may not be an important category, but it is a valid, plausible one.]
ii. Pure activators
No disagreement here.
I have discussed this at length above.
Regarding “reliable by design”:
A handicap is reliable as a result of its cost. This cost may be general, or specifically in units of the quality advertised, depending on the type of handicap. Handicaps that advertise needs may have some general cost. Other handicaps, however, which advertise the signaller’s quality, are more reliable if they specialize on the quality sought by recipients. Here, Amotz would say, correctly, I think, that the design of a handicap is also important, because certain designs specialize better on particular qualities than other designs (Hasson 1997). However, although design is important for the specialization of handicaps, it is not design alone that makes them reliable. A particular design only assigns a certain cost to a certain quality, and it is the cost on each quality that determines how reliable a handicap is in advertising that particular quality.
Amplifiers, however, are reliable not because they are costly (or cheap) to produce, but because certain structures or behaviors reveal certain information (attained via cues or signals) better. An amplifier’s special design makes it an efficient pointer: a particular structure or behavior that improves perception of a certain quality. Hence, reliability is by design, not by tradeoffs. This is an empirical observation rather than a theoretical one, though it has theoretical implications.
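The tradeoff side of this contrast, handicaps reliable by cost rather than by design alone, can be sketched numerically. This is a hedged toy model of my own (the particular benefit and cost functions are invented for illustration): if a given signal intensity is cheaper per unit for high-quality individuals, while the S-benefit of that intensity is the same for everyone, then the optimal intensity rises with quality, and intensity honestly tracks quality at equilibrium.

```python
def net_fitness(x, q):
    """Net fitness of signalling at intensity x for an individual of
    quality q: the S-benefit grows with intensity, while the F-cost of
    a given intensity falls with quality (the handicap tradeoff)."""
    benefit = x           # S-component: recipients respond to intensity
    cost = x ** 2 / q     # F-component: cheaper per unit for high q
    return benefit - cost

def best_intensity(q):
    """Optimal intensity for quality q, found on a coarse grid."""
    grid = [i / 100 for i in range(101)]
    return max(grid, key=lambda x: net_fitness(x, q))

print(best_intensity(0.4), best_intensity(0.8))  # 0.2 0.4
```

The low-quality individual’s best intensity is lower than the high-quality individual’s, so intensity is a reliable indicator of quality: reliability here comes entirely from the cost schedule, with no “design” in the amplifier sense involved.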
I think the rest of the summary of my definitions is a good representation of what I said. Regarding examples of lies: well… is that not what we humans do so very often? Novels are full of them, and the movies too. In terms of definitions, lies are symbols and icons that are used in unreliable contexts (hence, they decrease knowledge). If you like, here is another way to define lies: a symbol (a true convention) may be regarded as a conditional stimulus that has no necessary link with its unconditional stimulus (reward). A symbol or an icon may be practically cost-free to the signaller’s F-component (why not? After all, in a cooperative system both sides benefit by being efficient at minimum cost). When a symbol is used in the absence of its unconditional stimulus, it becomes a lie.
How common is it among animals? Well, that depends on how frequently we find symbols and icons among animals. I don’t suppose we really know. They could be very rare, or they could be more common than we suspect, but I know of no studies in which people deliberately looked for symbols and icons while being fully aware of the distinction between them and other signal types. Since, however, symbols and icons are expected to evolve in cooperative systems (e.g., the bees’ dance and pheromones in the “superorganism” beehive, hormones in a multicellular organism), lies are expected to be rare in non-human organisms.
Many consider warning colors as conventional signals. I don’t think so, and it would be difficult to explain how they could have evolved as such. In my opinion, not unlike Mullerian mimicry, warning colors evolve mostly as identity signals of distasteful food (not of different taxa!). At best, they are glorified identity signals. More often, they are relatively inconspicuous.
Regarding your comments and queries, 1 –5:
When comparing my view to Enquist’s you wrote “…whereas H. means that receiver can choose a response depending on intensity of the signal”.
My first response was that this is not true. My second is that it is not entirely true. One way or another, it does not represent my view of choice-based environments. It does represent, a little imprecisely, my view of signalling in general.
I have already explained above my view of choice-based environment, so I won’t repeat it.
You surely agree that, for all signals, recipients make choices of actions based on the information received (which changes their knowledge). However, a recipient’s response is not necessarily a function of signal intensity alone (e.g., when signals are amplifiers or attenuators; probably, though not as dramatically, this is also true for indices [the S-component is always a positive function of x, but possibly also a function of quality]).
Nevertheless, I agree that the emphasis on receivers’ choice of action, with or without the presence of signals, is a major distinction between my view of signalling systems and that of a number of people in the field, probably including Enquist. People have tended to study signalling and responses together, but have overlooked other means by which recipients can gather information (I expand on this in Hasson 2000). In my opinion these other options have a major impact on the evolution of signals (in fact, the whole concept of amplifiers is based on this assumption).
Comment 2, regarding amplifiers and indices, is already discussed above.
Comment 3: Bluffs are already discussed above.
Comment 4: Yes, you make a good point. As long as a signal mimics another living organism and exploits it, this is a case of mimicry (but if it mimics an organism that is part of the background, i.e., produces the effect of a negative search image, it should be considered camouflage rather than mimicry; there is no exploitation of the model, and the signal negatively affects attention rather than identity). I agree that it does not matter whether the signal mimics a cue or a signal.
It is important, however, to maintain that there is a continuum between identity signals (reliable) and mimicry (cheating), where only the values of the variables that affect their evolution determine which of the two evolves. Therefore, anything that falls under this dichotomy, identity signals vs. mimicry, should be included here.
Deception: Perhaps, in my 1997 paper, I was wrong to include wing-feigning under the cue-reading environment. If a predator classifies birds not by their taxa (as I argued in 1994 and 1997), but by their health, then wing-feigning is certainly mimicry. It is not mimicry in our eyes, perhaps, because a bird feigning a broken wing does not affect our ability to identify it. In a sense, this is the opposite of Mullerian mimicry which, to our classifying eye, may look like mimicry, whereas to the eye of the predator, which attempts to recognize “food”, Mullerian mimicry is not mimicry at all. The latter is what determines an evolutionary mechanism different from that of Batesian mimicry (Hasson 1994, 1997). However, unlike in Batesian mimicry, models are not harmed in the wing-feigning case. If anything, wounded birds may benefit, because predators learn that “injured birds” may sometimes be hard to catch. One way or another, wing-feigning falls within the regime of the choice-based environment.
If I remember correctly, in “In the Shadow of Man” Jane Goodall describes a young chimpanzee who spots a banana. In order to keep it for himself, he starts walking, determinedly, into the forest, the rest of the chimpanzees slowly following. After a while, when there is no competition, he returns to retrieve the banana. You could argue this is mimicry of a certain behavior. Perhaps. It is not part of the choice-based environment, however (it was aimed at other chimpanzees, but not in order to affect their choice, nor to improve the signaller’s own access to choice [as, e.g., in predator camouflage]), and the selective forces that operate on it are different from those working on mimicry signals. I would therefore hesitate to include it there.
Is it a lie? Possibly, but then you have to make the definition of lies more inclusive, because the chimpanzee was faking a cue (readiness to move on) rather than a signal, and certainly not a symbol.
Let’s consider another example, of an animal using a pursuit-deterrent signal (normally aimed at a predator) to distract an enemy (I’ve seen this in bulbuls; Charnov & Krebs 1975 describe something of the sort as well). Here, it is a manipulation of others that eavesdrop on communicating individuals (using their signals to learn of the presence of predators). I would lump this example with the previous one as deception. What is the environment, then? Cue-reading? Eavesdropping? It is certainly a manipulation of the ability of animals to read their environment. Cue-reading still seems to me the more inclusive term, because from the recipient’s point of view, eavesdropping on signals is, in many ways, equivalent to reading cues from the environment.
Regarding your Definitions and Terminology
Signals and Cues: already discussed.
Ritualization: Although this is how ritualization has been used, it is not only cues themselves that become signals, but also structures and patterns that improve or attenuate their perception.
Efficacy cost: Cost required to make or maintain signals (and, therefore, let them do what signals do: change recipient’s knowledge).
Your definition, “Cost needed to convey information unambiguously”, is problematic. The assumption here is that signals are reliable, whereas they may or may not be. Camouflage and mimicry require cost, but increase ambiguity.
Strategic cost: OK.
Note that both of these costs are costs on the F-component. The first (efficacy) is required to make a signal, the second (strategic), to make a handicap reliable.
Note, however, that there is a third type of cost, on the S-component. This is the cost of reliability (paid by poor quality individuals for amplifiers) or of ambiguity (paid by high quality individuals for attenuators). It is this cost that confuses Amotz, at times, to think that amplifiers are handicaps after all. At other times he thinks they are not signals at all (these are parts of verbal discussions we’ve had, unpublished, so don’t quote me).
Handicap: I think a better definition is required, one that includes reliability by tradeoffs. I don’t like the distinction you make between handicaps and strategic signals. Is this really necessary? Doesn’t it take us back to times where handicaps were illegitimate?
Assessment signals: You define them as “Signals whose variation in intensity carries information, includes indices and handicaps.” I wonder. If by “carries information” you mean “changes recipients’ knowledge”, this is true of all signals, not only of assessment signals. If, on the other hand, you mean that the greater the signal’s intensity, the more likely the signaller is to be estimated as of high quality (and then preferred in attraction systems [e.g., mate choice], or avoided in deterrence systems [e.g., threat]), then there is some contradiction in your terminology.
At one point you say indices are pointers: “Thus an index is not the quality itself (e.g. size), but a signal (arch or structure) that reveal the quality” (your letter, Some Comments and Queries, Point 2). Here, however, you say an index is an assessment signal, i.e., that its intensity IS the signal (if this is what you mean by “assessment signals”).
In my terminology, I think you mean “activators”. Possibly you want to also include cues of quality, but I do not think you want to include indices.
My preference here is expressed in my 1997 paper, where I used “assessment signals” as the more inclusive category (for both activators and pointers), and activators as the more restricting one (excluding amplifiers, attenuators and indices). I chose to use the terms assessment signals and indices after considering Maynard Smith and Harper 1995, and feeling the need to make the classification the way I did.