The progressive development of technology to its present-day state has always had to do with human needs. Most inventions were conceived to supply a solution for some missing thingy, or as problem-solving tools. The more advances were made, the easier it became to create new artifacts. Therefore, as Heidegger states (btw, what's with all those gerunds?), technology is a means to an end and a human activity at the same time. Personally, I understand these two statements as joined at the hip. Technology is not an independent entity; it is not self-governed. The means to a particular end have been created and configured by a person who, at the same time, is involved in dealing with that end once it is reached and, as a consequence, is responsible for the result.
Because technology is involved in most of our daily tasks, such as turning off the alarm clock, starting the car, playing some music on the radio (I believe singing like crazy along to your favourite tune is still a human (re)action), etc., we no longer spend time thinking about the implications of the digital being something good, bad, dangerous, or profitable. Sixty years ago, though, things were different, for people were still creating the digital computers that we now use.
While reading Turing's "Computing Machinery and Intelligence" (1950) and Heidegger's "The Question Concerning Technology" (1953), I couldn't help but wonder what they would think about Siri. And since I was having a hard time understanding what either of them was saying, I found it easy to amuse myself with the toy. I had asked it some weird things before, but today I focused on some of the concerns of "the father of artificial intelligence."
According to Apple: "Siri lets you use your voice to send messages, schedule meetings, place phone calls, and more. Ask Siri to do things just by talking the way you talk." Clearly, that makes it a tool and, as such, it won't tell us if it is a machine; it cannot tell if it loves; it doesn't know, it seems, whether it is beautiful; it won't joke. Following one of the questions in Turing's paper, it hasn't tried strawberries (really?? Apple, you should give Siri some strawberries right now). Rather, it has been programmed to answer who its teacher was for commercial purposes: Apple, in California. Interesting, since Turing suggests teaching machines, to some extent, like children in school. Finally, I decided to ask the tool for a controversial opinion among humans: "What do you think about war?" Not surprisingly, Siri avoids giving an answer that would reveal the political views of the company. Nonetheless, the answer is quite striking: "I think, therefore I am." Bweep, bip bip, bweep. Fire alarm. It can think. Those afraid of technology should do something about this, right now!!
It was Habermas who raised, in my opinion, the most relevant question, one that is still valid today: are we using technology, or is it using us? In order to answer this question it is impossible to escape politics and economics. "Capitalism is the first mode of production in world history to institutionalize self-sustaining economic growth," explains Habermas. Economic growth is what is managing the world. As a result, we have changed how we interact in society. The problem, then, lies not in how many electronic doodads we own but, rather, in how we have reshaped communities through the use of technology. According to the philosopher, there is a lack of balance.
The real issue lies in the fact that it is us, humans, who teach machines what to do, when and how to act. It is us who can manage them. We could choose to destroy every single digital artifact. But that would mean going back in time to those days when society was based on companionship rather than rampant competitiveness. As a result, some technology has become more a reflection of society than an aid. And that is the big issue here.
As a final note, and connecting this post back to our debates in class, I feel that some humanists, or people in general, are following what Turing calls "The Heads in the Sand Objection," preventing a real understanding of the problem. We have already talked about humanists being afraid of digital tools being applied to the study of human knowledge. Again, why? Why do we believe that "the consequences of machines thinking would be too dreadful"? (444). In the humanities, such tools would help with research, as they do in medicine. Since technology is a means to an end and a human activity, we can still use it to find some responses.
I may be going too far here, but the truly dreadful consequence is that machines, as driven by humans, would "think" as we do, therefore being biased, and cruel to some extent (they have no soul, right?), creating still more trouble in an already too cracked a society.