by Gene Callahan
In C. Mantzavinos’s Philosophy of the Social Sciences there is a paper by Philip Pettit entitled “The Reality of Group Agents.” (He decides, by the way, that sometimes it makes perfect sense to attribute agency to a group, but that’s a topic for a different day.) What I wish to talk about today is the following passage, a preliminary to the issue of group agency, which discusses when it is sensible to posit agency for an individual creature such as, say, a wasp:
“Consider the Sphex wasp that Daniel Dennett describes. This wasp brings its eggs to the edge of a hole that it has found or dug, enters the hole to make sure it is still provisioned with the paralyzed prey that it has previously deposited there, then comes up and takes the eggs back into the hole. But it turns out that if the eggs are moved even a little bit away from the edge while the wasp is in the hole, then the wasp goes through the whole routine again and that it can be forced by this intervention to repeat the exercise an indefinite number of times. The failure here prompts us to recognize that the wasp is not displaying the pattern of ensuring that its eggs are placed in a suitable hole, as if it were focused in an agential way on that abstract purpose.” (pp. 73-74)
I have two reservations about this passage. First of all, no agent can fulfill an abstract purpose. As Michael Oakeshott said, “I cannot want to be happy; what I want is to idle in Avignon or hear Caruso sing” (On Human Conduct).
Secondly, why not model the wasp as being like, say, a character in Awakenings: The wasp has a purpose, but can only realize it in a rather obsessive way: it dives into the hole, decides “Everything is OK,” comes up to move the eggs into the hole, and discovers someone has been messing with them! “Hmm,” it thinks (and of course I am depicting an extremely anthropomorphized wasp here, but you’ll get the point), “if things have gone wrong up here, then they could be all out of whack in the hole as well!” So back down it plunges to check on its victim. Finding the victim in place, it comes back up, only to discover the eggs have been messed with again!
After all, what would a much more capable and flexible agent make of our behavior? Let’s say the agent were a perfect Bayesian reasoner; might she not decide that we really are not to be thought of as agents, since our actions do not display the requisite flexibility in adjusting to changed circumstances?
Now, I am not saying that the above is the correct explanation of the Sphex wasp’s behavior; nor am I even saying that we should attribute agency to the wasp. But I am skeptical of the reasoning that Pettit and Dennett use in denying it is an agent. What do you think?
I agree with you. Your description makes a great deal of sense. We have been told not to “anthropomorphize,” but too often that means ignoring the considerable overlap between human and animal behavior. I have discovered in my research that there are few things that are truly uniquely human. Take all the cultural universals of chimpanzees, add fire and language, and you get all the human cultural universals. Most behaviors are likely to have a common source. Is the wasp concerned in the way you suggest? I would venture to guess that it is. It’s not languaging its thoughts, but it doesn’t have to in order to have those thoughts.
Some skepticism about Dennett’s conclusion:
First, what good reductionist would ever say that any behavior of any organism is not ultimately reducible to physical biochemistry, up to and including human consciousness? On the hardest form of strict determinism, even human “free” agency is an illusion, though it will never seem that way to us. On this view, of course the Sphex wasp lacks free agency – it’s pure biochemistry all the way down, man!
Another problem: what about the “agential” intervention of the experimenter, the one who repeatedly moves the eggs? At the quantum level, it has been argued, intervention changes the outcome and affects the observer’s conclusions. Can we be sure this doesn’t manifest in the macro world? I’m not so sure.
Lastly, you can find evidence in biology (nature) to support all sorts of conclusions, which raises the question of confirmation bias. What about controls on this experiment? What do the other Sphex species (there are more than 100) do? Etc.
One doesn’t have to resort to quantum physics to understand that intervention changes the outcome. That’s the point of intervention. But purely quantum effects tend to cancel each other out at the macro level. That, too, is part of quantum physics.
Reductionism is fine and dandy, but one cannot ignore the corollary: emergence. There are system behaviors which cannot be merely reduced to the constituent parts. Complex systems have emergent properties. Economies don’t act like humans, humans don’t act like their cells, cells don’t act like chemical systems, and molecules don’t act like quantum physical particle-waves. Yet each system has common rules of complex systems running through it. We need to know the real similarities (without making false extrapolations) as well as the real differences.
“On this view, of course the Sphex wasp lacks free agency – it’s pure biochemistry all the way down, man!”
This is the reductionist leap of faith. For several hundred years now we’ve been promised that “real soon” this reductionism will be demonstrated. We’re still waiting. I guess the check is in the mail.
The theory of emergence has answered the problems with reductionism. Dennett is behind on his science; I recommend people like Stuart Kauffman as the corrective.
This may be a tangent, but my worries about attributing agency lead me to confine attributions to levels of nature and entities we have reason to believe exhibit consciousness. I agree with Searle and some recent work by Uriah Kriegel and Terry Horgan that a central sort of intentionality is phenomenal in character, which is to say that mental states are only intentional if they are in some sense potentially conscious.
This is a worry for folks like Dennett and Pettit, in my view. Dennett is happy with a highly implausible quasi-eliminativism about consciousness, but his view is, well, pretty bizarre. My understanding is that in conversation, Pettit has told Kriegel that he is happy with a kind of faux-intentionality (“schmintentionality,” if you will). So maybe Pettit thinks he can give an account of group agency that lacks important features of normal, conscious and intentional agency. For my part, I have no idea what that means.
Anyway, I’m taking as pre-theoretical the Ned Blockian intuition that the nation of China isn’t conscious no matter how it is functionally organized, and saying that collectives can’t have agency unless they’re capable of having intentional states, and that they can’t have paradigmatic intentionality without consciousness.
You’ll get no disagreement from me on the point of emergence; I’m an enthusiastic systems biologist. But I disagree a bit w/Troy: the emergent properties of a system often can be reduced to interactions among its multiple parts. Not the physical properties (or intrinsic information) of the parts themselves, but the interactions between the parts and, of import to biology, the environment.
I agree with the sentiment of Gene’s leap of faith statement, but whenever I’ve made similar comments people – usually reductionists – jump all over me, “What, you’re saying there’s a ghost in the machine?”
Funny enough, years ago I briefly worked at Kauffman’s start-up in Santa Fe. It fizzled in the dot-bomb crash but I enjoyed my time there. Lots of smart people.
What I meant was that there are qualities that are irreducible. A single water molecule is not wet; you have to have a large enough number of water molecules interacting for the property of wetness to emerge. But that doesn’t mean that you can’t understand that wetness emerges precisely because of the geometry and polar nature of water molecules. The same can be said of molecular motors in cells: their physical configuration has emergent properties, but you can understand those properties by looking at the geometry that comes about from the chemical bonds among the atoms.
Count me as jealous that you got to work for Stuart Kauffman! 🙂 I wish I had the computer abilities for much of this complexity work. I have ideas, but no technical abilities to realize them.
You may be interested in Peter Carruthers’s opinion on this matter. It’s in the chapter entitled “Animals and Rational Agency” in his book The Animals Issue:
http://www.philosophy.umd.edu/Faculty/pcarruthers/Blurb-AI.htm
KV: “This may be a tangent, but my worries about attributing agency lead me to confine attributions to levels of nature and entities we have reason to believe exhibit consciousness.”
Yes, Pettit seems to set aside the question of when something is “really” an agent and instead address the topic, “When does it make sense to talk about something as if it is an agent?”
And for some purposes, all we need is an answer to the second question.
Say hello to Uriah for me.
Troy said, “Count me as jealous that you got to work for Stuart Kauffman! I wish I had the computer abilities for much of this complexity work. I have ideas, but no technical abilities to realize them.”
At the time I had the opposite challenge.
I say don’t trust any philosopher on this topic.
What you always get at some point is language on holiday.
Philosophers can’t help it. It’s what they are trained to produce.
At bottom, what you are _always_ getting in one of these discussions is a stipulative definition of “agency.”
But, Greg… aren’t you a philosopher? And so doesn’t your statement mean that we can’t trust your statement?
My head hurts.
Philip Pettit on Group Agency:
http://philosophybites.com/2010/12/philip-pettit-on-group-agency.html