Can machines think?

The progressive development of technology to its present-day condition has always had to do with human needs. Most inventions were conceived to supply something that was missing, or as problem-solving tools. The more advances were made, the easier it became to create new artifacts. Therefore, as Heidegger states (btw, what’s with all those gerunds?), technology is a means to an end and a human activity at the same time. Personally, I understand these two statements as joined at the hip. Technology is not an independent entity; it is not self-governed. The means to a particular end have been created and configured by a person who, at the same time, is involved in dealing with that end once it is reached and, as a consequence, is responsible for the result.

Because technology is involved in most of our daily tasks, such as turning off the alarm clock, starting the car, or playing some music on the radio (I believe singing like crazy along to your favourite tune is still a human (re)action), we no longer spend time thinking about whether the digital is something good, bad, dangerous or profitable. Sixty years ago, though, things were different, for people were still building the digital computers we now use.

While reading Turing’s “Computing Machinery and Intelligence” (1950) and Heidegger’s “The Question Concerning Technology” (1953), I couldn’t help but wonder what they would think about Siri. And since I was having a hard time understanding what either of them was saying, I found it easy to amuse myself with the toy. I had asked it some weird things before, but today I focused on some of the concerns of “the father of artificial intelligence.”

Nope, I’m not Alexis. But I don’t own an iPhone yet, so I borrowed his.

According to Apple: “Siri lets you use your voice to send messages, schedule meetings, place phone calls, and more. Ask Siri to do things just by talking the way you talk.” Clearly, that makes it a tool and, as such, it won’t tell us if it is a machine; it cannot tell if it loves; it doesn’t know, it seems, whether it is beautiful; it won’t joke. Following one of the questions in Turing’s paper, it hasn’t tried strawberries (really?? Apple, you should give Siri some strawberries right now). Rather, it has been programmed, for commercial purposes, to answer who its teacher was: Apple, in California. Interesting, since Turing suggests teaching machines, to some extent, like children in school. Finally, I decided to ask the tool for an opinion on something controversial among humans: “What do you think about war?” Not surprisingly, Siri avoids giving an answer that would reveal the company’s political views. Nonetheless, the answer is quite striking: “I think, therefore I am.” Bweep, bip bip, bweep. Fire alarm. It can think. Those afraid of technology should do something about this, right now!!

It was Habermas who raised what is, in my opinion, the most relevant question, one still valid today: are we using technology, or is it using us? To answer it, we cannot escape politics and economics. “Capitalism is the first mode of production in world history to institutionalize self-sustaining economic growth,” Habermas explains. Economic growth is what runs the world, and as a result we have changed how we interact in society. The problem, then, lies not in how many electronic doodads we own but, rather, in how we have reshaped communities through the use of technology. According to the philosopher, the balance has been lost.

The real issue lies in the fact that it is we, humans, who teach machines what to do and when and how to act. It is we who manage them. We could choose to destroy every single digital artifact, but that would mean going back in time to those days when society was based on companionship rather than rampant competitiveness. As a result, some technology has become more a reflection of society than an aid. And that is the big issue here.

As a final note, and to connect this post back to our debates in class, I feel that some humanists, or people in general, are following what Turing calls “The Heads in the Sand Objection,” which prevents a real understanding of the problem. We have already talked about humanists being afraid of digital tools being applied to the study of human knowledge. Again, why? Why do we believe that “the consequences of machines thinking would be too dreadful” (444)? In the humanities, technology would help with research, as it does in medicine. Since technology is a means to an end and a human activity, we can still use it to find some answers.

I may be going too far here, but the truly dreadful consequence would be that machines, driven by humans, would “think” as we do, and therefore be biased, and even cruel to some extent (they have no soul, right?), creating still more trouble in an already fractured society.


3 thoughts on “Can machines think?”

  1. I do not think machines thinking and being too practical would be worse than people using the highest technology to get everything they want and, what is worse, being conniving about it. The danger lies in the ignorance that a large part of society is accumulating through its disconnection from technical advances. A large number of humans benefit from progress but are being crushed by the ones who control its pace.

  2. Great post! I really enjoyed your comments on why humanists are so terrified of computers learning to think. As you note in your discussion of Habermas, one of the relevant questions concerning technology is whether we are using it or it is using us. Even though people are fundamentally in charge of determining the tasks computers are given, I think the notion of control is still an important issue for humanists contemplating the ways that technology is shaping academia. Maybe humans control machines, but are the humanists the ones in charge? I think many humanists regard DH with some skepticism simply because they feel left out of the formation, management, and creation of technology. There is already a solid fear of academic institutions becoming more like businesses, and I think the fear of technology is very much related. As long as humanists feel that the technology they use is something they don’t have control over, something that comes from outside a humanistic (or even an academic) perspective, I think they are going to be worried.

    • Hi Gabi,
      Thank you for your comment. I see and agree with your idea of humanists being afraid of the digital because they lack the means to control it.
      Still, technology itself is not shaping academia, right? The control you are referring to is in the hands of those who, indeed, are making a business out of education. That’s why humanists, or any other group that might be worried about the digital, should learn how to use some of the tools available in order to have control of their own field. I feel that DH is trying to do that in the hope of changing things a bit.
      Cheers.
