Mind-Body Dualism
In discussions of mind-body dualism, one controversial issue has been whether the mind is part of the body or separate from it. On the one hand, some argue that the mind has an identity of its own, a view known as ‘substance dualism.’ On the other hand, others contend that body and mind are the same thing, a view known as ‘materialism.’ Still others maintain that the mind is not a separate substance but the result of the make-up of the body, something more than the sum of its parts. My view is that both mental and physical properties constitute the person.
If body and mind are the same, can we call pain a brain state? If we know the constituent elements of a substance, can we claim to understand it? And if we understand the mechanics by which our minds operate, does that mean we can also explain the body? Many maintain that a person's report of being in pain does not establish that the pain is a purely mental state; the implication is that the state could be changed simply by applying different physical stimuli. The standard way of thinking about this issue holds that whatever part of the body is experiencing pain is, quite simply, experiencing pain.
This line of thought leads us to an even more intriguing question: do machines have the ability to think? Though computers can perform creative tasks, imitating the very things we do in our day-to-day lives, does this mean they can make judgments from the same perspective we do? Or is the brain just as devoid of consciousness as the computer?
With rapid developments in the field of computing, digital computer programs can now challenge reigning chess champions. Alan Turing (1950), one of the pioneering theoreticians of the field, believed the answer to these questions is “Yes.” He proposed that if a computer's replies in a text conversation cannot be distinguished from a human's, we should count this as intelligence, a criterion now known as the ‘Turing Test.’
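To make the set-up concrete, here is a minimal, hypothetical sketch of Turing's imitation game in Python: an interrogator exchanges a question with two hidden respondents and must guess which one is the machine. The respondents and their canned replies are invented stand-ins for illustration, not any program Turing described.

```python
import random

def human_respondent(question: str) -> str:
    # Placeholder reply standing in for a human participant.
    return "I'd rather not say; ask me something else."

def machine_respondent(question: str) -> str:
    # A machine that merely imitates the human's style of answer.
    return "I'd rather not say; ask me something else."

def run_round(question: str) -> bool:
    """Return True if the interrogator fails to single out the machine."""
    respondents = {"A": human_respondent, "B": machine_respondent}
    answers = {label: fn(question) for label, fn in respondents.items()}
    # With indistinguishable answers, the interrogator can only guess.
    guess = random.choice(list(answers))
    return respondents[guess] is not machine_respondent

if __name__ == "__main__":
    print(run_round("Will you write me a sonnet on the subject of the Forth Bridge?"))
```

On Turing's criterion, a machine that reliably survives such rounds against a competent interrogator would count as intelligent.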
John Searle, a U.C. Berkeley philosopher, introduced a short and widely debated argument aimed at refuting the claim that digital computers can think or understand natural language. Searle believed that the best way to test such a theory of mind, for instance one holding that understanding is created by carrying out certain operations, is to simulate what the theory says would produce understanding. He summed up his position in the Chinese Room argument.
To make this point, Searle urges us to imagine a native English speaker who neither speaks nor understands Chinese locked in a room. In this room, he is provided with boxes full of Chinese symbols (typifying a digital computer's database). The English speaker is also given a book (the digital computer's program) containing instructions for manipulating the symbols. Suppose that people outside the room send in further Chinese symbols, unknown to the man inside (the computer's input), and that, with the aid of the instructions in the book, he is able to send back appropriate symbols as answers to their questions (the computer's output). We can then conclude that this person passed the ‘Turing Test’ for understanding Chinese, not through his own understanding, but through the instructions contained in the book.
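The procedure the man follows can be pictured as nothing more than rule-table lookup. The following is a minimal, hypothetical sketch (the symbol strings and the reduced "rule book" are invented for illustration, not Searle's actual example): the operator produces sensible-looking answers while manipulating symbols he does not understand.

```python
# Toy "rule book": incoming Chinese symbols mapped to outgoing Chinese symbols.
# The operator needs no knowledge of what either string means.
RULE_BOOK = {
    "你好吗": "我很好",           # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",    # "What is your name?" -> "My name is Xiaoming"
}

def room_operator(input_symbols: str) -> str:
    """Match the incoming symbols against the rule book and return the
    prescribed output symbols, purely by shape, never by meaning."""
    return RULE_BOOK.get(input_symbols, "对不起")  # default reply: "Sorry"

if __name__ == "__main__":
    # From outside the room, the answer looks like that of a Chinese speaker.
    print(room_operator("你好吗"))
```

A real program would of course need far richer rules, but Searle's point is unaffected: however large the rule book, the operator only ever trades uninterpreted symbols.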
Searle added that if the individual inside the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese, then neither does any digital computer on that basis. Thirty years later, in 2010, Searle restated the point: implementing a computer program does not, by itself, mean that the computer has consciousness or intentionality.
Searle holds that computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and semantics cannot be derived from syntactic operations alone. Put more technically, the notion of the ‘same implemented program’ picks out a class that is independent of physical realization, and such a specification leaves out the biological powers of the brain that cause cognitive processes. Therefore one cannot claim to understand Chinese merely by following the steps of a computer program that simulates the behavior of a native Chinese speaker.
Having presented the example and drawn his conclusion, Searle considered several replies from workers in artificial intelligence and offered a rejoinder to each. First is the Systems Reply. It suggests that Searle's Chinese Room experiment focuses on the wrong agent: the experiment mistakes the person in the room for what would be the subject possessing the mental states. On this reply, it is wrong to argue that no understanding of Chinese is present just because the person locked in the room lacks it; rather, the understanding belongs to the system as a whole, of which the individual is only a part.
Searle replied that we can let the person internalize the entire system, memorizing the rules and the script and performing all the operations in his head. Even then, Searle maintained, he would understand nothing of Chinese, and the same applies to the system, since there is nothing in the system that is not in him. He added that the Systems Reply is absurd in taking the whole system to be a mind.
The second is the Robot Reply. It suggests that what prevents the man inside the room from grasping the meaning of the Chinese symbols is their sensorimotor disconnection from the realities they are supposed to represent. Along a causal-theoretic line of thought, promoting the manipulation of Chinese symbols to genuine understanding requires grounding it in the outside world through the agent's causal relations to the things to which the symbols refer. The reply therefore concludes that if a computer were installed inside a robot and controlled it, causing it to walk, perceive, and move around, the robot could be described as having genuine understanding and other mental states.
In response to the Robot Reply, Searle maintained that his experiment still applies, requiring only minor modifications. Suppose Searle is put in the room inside the robot, some of the incoming symbols come from a television camera attached to the robot, and some of the symbols Searle sends out operate motors attached to the robot's joints, moving its arms and legs. Searle asserts that he still understands nothing except the rules for symbol manipulation; in instantiating the program he has no mental states of the relevant type, since all he does is follow instructions for manipulating symbols. He concluded that the Robot Reply tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since it supplements strong AI with a set of causal relations to the outside world.
Third is the Brain Simulator Reply. It asks us to suppose that the program running in the computer simulates the actual sequence of neuron firings at the synapses of a Chinese speaker as he understands Chinese questions and gives appropriate answers to them. This reply drops the assumption that the program represents the information we have about the world. On this view, we would have to say that the machine understood the stories, or else deny that native Chinese speakers understand them, since at the level of the synapses there would be no difference between the program running in the computer and the program running in the speaker's brain.
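To see what the reply asks for, here is a minimal, hypothetical sketch in the same spirit: a toy layer of simulated "neurons" whose 0/1 firing pattern, produced from an encoded question, is decoded into an answer. The weights, encodings, and answers are invented for illustration; the point at issue is that even a neuron-level simulation remains a formal mapping from input symbols to output symbols.

```python
WEIGHTS = [[1.0, -1.0], [0.5, 0.5]]                # made-up "synaptic" weights
ANSWERS = {(0, 1): "我很好", (1, 1): "我叫小明"}     # firing pattern -> output symbols

def encode(question: str) -> list[float]:
    # Stand-in for the sensory encoding of the incoming Chinese symbols.
    return [len(question) % 2, 1.0]

def fire(inputs: list[float]) -> tuple[int, ...]:
    # Each simulated neuron fires (1) if its weighted input exceeds the threshold 0.
    return tuple(int(sum(w * x for w, x in zip(row, inputs)) > 0) for row in WEIGHTS)

def simulate(question: str) -> str:
    return ANSWERS.get(fire(encode(question)), "对不起")  # default reply: "Sorry"

if __name__ == "__main__":
    print(simulate("你好吗"))  # prints "我很好", though nothing here understands it
```

Whether such a simulation, scaled up to the full pattern of a speaker's synapses, would amount to understanding is exactly what the reply and Searle dispute.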
Defending his view, Searle insisted that simulating the formal operations of the brain is not sufficient to produce understanding. Imagine a man operating an elaborate set of water pipes connected by valves. The Chinese symbols serve as input, and the program tells the man which valves to turn on or off; suppose this arrangement of pipes and valves mirrors the way the speaker's brain works to produce its output. Searle argued that the man does not understand Chinese, and neither do the pipes. Concluding his response to the Brain Simulator Reply, he said that it captures only the brain's formal structure, the sequence of neuron firings, whose shortcomings are laid out in the example he gave.
Fourth is the Combination Reply. It imagines a computer installed in a robot and running a brain simulation; taken together, the reply holds, we would have to ascribe intentionality to the system.
Searle asserted that none of these replies, taken singly, overthrows the result of his thought experiment, and neither do they do so combined, because three times zero is still zero. He granted that if we knew very little about the robot and it behaved sufficiently like us, it would be rational, even irresistible, to assume that it must have mental states like ours, caused by and expressed in its behavior. But once we know enough about the robot to account for its behavior without that assumption, as with an ordinary computer, we would not attribute intentionality to it, especially when we know that it is running a formal program.
The fifth is the Other Minds Reply. It reminds us that the only way we know whether other people understand Chinese is by observing their behavior. The implication is that if a computer can pass the same behavioral test as a person, then if we attribute cognitive states to the person, we must attribute them to the computer as well.
Searle responded that the issue is not how we know that other people have cognitive states, but what it is we attribute to them when we attribute cognitive states. It cannot be merely computational processes and their outputs, since computational processes and their outputs can exist without any cognitive state.
Last is the Many Mansions Reply. It suggests that even if Searle is right that programming alone cannot confer cognitive states on computers, other means besides programming could be employed to reach the same end. On this reply, digital devices might one day be endowed, by different methods, with whatever it is that suffices for intentionality.
Refuting the Many Mansions Reply, Searle answered that this move trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition, abandoning the original claim of artificial intelligence that mental processes are computational processes over formally defined elements. He summed up his point by stating that “if AI is not identified with the precise, well-defined thesis, his objections no longer apply because there is no longer a testable hypothesis for them to apply to” (1).
Since 1980, when Searle first put the Chinese Room experiment into print, the emphasis of his argument has shifted noticeably from machine understanding to intentionality and consciousness. Some scholars still criticize the argument, stating that running a computer program could create understanding without necessarily creating consciousness. Critics also argue that a robot could have knowledge, an attribute of creatures, without necessarily understanding natural language. Searle's position, however, insists on the close connection between understanding and consciousness.
Searle later simplified his argument further, stating that it was aimed only at refuting the functionalist approach commonly applied to understanding minds. Functionalism holds that the mind can be understood only by studying the causal roles its states play, not by studying the role-players themselves, such as neurons. Searle's argument also criticizes the computational theory of mind, which views the mind as an information-processing system. The Chinese Room argument continues to be discussed widely in philosophy and cognitive science; the cognitive psychologist Steven Pinker notes that over 100 articles have been published in response to Searle's Chinese Room experiment.
In conclusion, since its appearance in 1980, the Chinese Room argument has kindled lively discussion across various disciplines about whether machines have the ability to think. Despite extensive debate, scholars have not yet reached consensus on the soundness of the argument. On the one hand, Julian Baggini (2009) admitted that the Chinese Room argument is one intellectual punch that inflicted so much damage on the then-dominant theory of functionalism that many would argue it has never recovered (2). On the other hand, the philosopher Daniel Dennett (2013) concluded that Searle's argument was and is fallacious and misleading. Though I believe Searle's argument is irrefutable in the current age of technology, this is not a proof that limits the aspirations of artificial intelligence or research on computational accounts of the mind. In the future, I anticipate the coin may well land on the other side.
Searle, John (1980a). The Chinese Room Experiment.
The Chinese Room Argument, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/chinese-room