AI & Human Cognition

I enjoyed our brief discussion last Friday for many reasons. Our discussion ranged across a number of topics, including Molinism, the nature of religious language (God-talk), and more, but I wanted to limit what I write to you to the AI and human cognition issues we talked about.

Besides length considerations, my reason for doing that is that I'm interested in the general discussion comparing AI cognition and human cognition. It seems to have some important implications for what human persons are. That's why I was asking you whether AI computers have "beliefs." If you don't have time to reply to this, maybe we can talk at our next meeting if we both attend.

Currently I am engaged with a few friends on this very issue (friends who are not in our Sinklings group), and I appreciate the opportunity for some analysis outside of that other group. I value your opinion(s) especially because you are directly involved in AI work. :-)

It was difficult for me to pin down the potential problem that I think I see, but I thought that with a little reflection I might do a better job, and that it would help to write it out. Of course, email discourse is only a small step up in rigor from our discussion, but I thought it might help a bit, even though defining the problem is itself very challenging. So let me try again to state what I was getting at and see what you think.

It is sometimes asserted that AI cognition is approaching human cognition, will soon "replicate" human cognition in kind, or has reached and will surpass human cognition. It's obvious (or seems obvious to me) that computers already surpass human cognitive abilities in certain areas, especially certain computational sorts of tasks. As remarkable as that is when one considers some human computational abilities, it's not the problem I'm trying to get at.

The problem I'm trying to get at seems to arise when we say AI cognition and human cognition are examples of multiple realization—that is, even though they use different substrates (wet hardware/software vs. dry hardware/software), they are functionally equivalent. (So I'm not trying to define my worry/concern using psychoneural or "neural equivalent" identity theory.)

So here's the problem: if one takes a functionalist explanation of how AI and human cognition realize themselves in what we could call equivalent "brain states," then the functionalist seems to have to choose either reductionist explanations OR non-reductionist explanations for BOTH, and they would want to do that consistently to maintain functional equivalence.

That is, to be consistent about what they think "AI brain states" and human "brain states" are or must be, their explanations must be cashed out consistently in one of two ways:

Way #1: If one wishes to maintain that all AI brain states are materially caused (that is, no non-material causes—no emergent properties or emergent substances involved), then this seems inadequate to explain human cognition. Human cognition seems to "need" emergent properties or emergent substances in order to adequately explain consciousness, intentionality, qualia, and so forth in some brain states. These do not seem to be epiphenomenal.

Way #2: If one takes the other fork and concedes that these are real properties or substances involved in human cognition, then the same must be admitted for AI if it is to be a functional equivalent of human cognition. But then one must admit that AI has non-material causes for SOME of its beliefs (brain states) and so forth, just like human cognition. There are two sorts of problems/issues with that that I can think of:

a) AI folks would be admitting to non-material causes affecting material things so as to cause actions and "brain states"…that's not great for those AI folks who call themselves "scientists" and who claim that self-respecting scientists can ONLY look methodologically for material causes; thus, they might have to take the term 'science' off of the Computer Science label; and

b) AI folks admitting or conceding that there are non-material causes affecting material things so as to cause actions and "brain states" can open the door for supernaturalism to get its foot in.

If that better clarifies what I was trying to get at on Friday, would you have anything to say about the problem(s) I'm raising? Because if I'm right about this, then certain kinds of AI folks shouldn't be too excited about claiming that AI cognition and human cognition are functionally equivalent (or soon will be).



Jim Cook's Personal Website