Viva la ‘Smart Home’
One of the more interesting new developments in home-based commodities is, in fact, the home itself. For decades, the technology industry has created newer, more convenient machines and gadgets to assist with common tasks and the general upkeep of a household. Now, previously tedious and difficult tasks, such as actually opening a freezer door to remove ice, standing up to change the television channel, waiting three extra minutes to heat a can of soup on the stove, or, God forbid, vacuuming, are almost extinct. And thank God for that. Now, with just the touch of a button, we can spend more time sitting on the couch drinking soda and eating Spaghetti-O’s while a tiny box-like vacuum runs around the room, sucking up dirt until it hits a wall. We finally have time to do things that are really important. Like tell our house to raise our kids.
The new Smart Home system is becoming wildly popular as a convenient way to control in-house systems like lighting, power, and security. Voice-activated software can be set to turn lights on and off at different times while a family is out of town, giving the illusion that someone is home and warding off possible burglars. Although many of these features are simply luxuries, some, like security, genuinely can improve the quality of life, and the safety, of a home.
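As a rough illustration, the kind of “away mode” described above amounts to little more than a timed schedule with a bit of randomness added. The sketch below is purely hypothetical; the set_light function stands in for whatever a real controller would expose and is not taken from any actual Smarthome product.

import datetime
import random

def set_light(room, on):
    # Hypothetical stand-in for whatever a real home controller exposes.
    print(f"turning {room} lights {'on' if on else 'off'}")

# "Away mode": each room gets an on/off window, shifted by a few random
# minutes each day so the pattern does not look machine-perfect.
AWAY_SCHEDULE = [
    ("living room", datetime.time(18, 30), datetime.time(23, 0)),
    ("bedroom", datetime.time(22, 0), datetime.time(23, 30)),
]

def evaluate(now, jitter_minutes):
    shift = datetime.timedelta(minutes=jitter_minutes)
    for room, start, stop in AWAY_SCHEDULE:
        start_dt = datetime.datetime.combine(now.date(), start) + shift
        stop_dt = datetime.datetime.combine(now.date(), stop) + shift
        set_light(room, start_dt <= now < stop_dt)

if __name__ == "__main__":
    # A real controller would run this check on a timer; here we check once.
    evaluate(datetime.datetime.now(), random.randint(0, 10))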
What is becoming more popular, however, is software and technology that can perform some of the simplest tasks of parenting. The new ‘Power Cop’ can monitor and “Limit the amount of time your kids waste playing video games or watching TV with this programmable timer for electronic devices that can be set for different times each day” (Smarthome website). A parent doesn’t even need to go up to the bonus room to check on their eight-year-old; he may be up there alone, but at least they know he is not watching television. Developments like these are what attract so many buyers, but they also feed a broad sentiment of fear and resentment toward computers. If we give so much responsibility to a computer, what happens if it falters?
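To make the idea concrete, the “set for different times each day” part boils down to a per-device, per-day allowance. The snippet below is only a hypothetical software analogue of that logic; the real Power Cop is a standalone hardware timer, and none of these names come from its documentation.

import datetime

# Hypothetical daily allowances, in minutes, per device.
DAILY_LIMITS = {
    "game console": {"weekday": 60, "weekend": 120},
    "tv": {"weekday": 90, "weekend": 180},
}

def allowance(device, today):
    kind = "weekend" if today.weekday() >= 5 else "weekday"
    return DAILY_LIMITS[device][kind]

def power_allowed(device, minutes_used_today, today):
    # Cut power to the outlet once today's allowance is used up.
    return minutes_used_today < allowance(device, today)

if __name__ == "__main__":
    wednesday = datetime.date(2006, 10, 25)
    print(power_allowed("game console", 45, wednesday))  # True: under the 60-minute weekday limit
    print(power_allowed("game console", 75, wednesday))  # False: allowance used up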
Such a question brings to mind a very popular film about computer technology from 1968, “2001: A Space Odyssey”. In Stanley Kubrick’s film, which follows a mission to Jupiter launched after a mysterious object is found buried on the moon, the main character, Dr. David Bowman, more commonly known as Dave, is paired with a computerized companion, HAL. HAL is a voice-activated supervisor of the crew’s health, mental well-being, operations, and procedures. Like the Smart Home, HAL can open doors, control temperature and lighting, and provide entertainment with a simple voice command. In these ways the two systems are much alike. HAL is simply an exaggerated system built for, let’s say, an exaggerated and much more expensive home: a spaceship.
However, in the latter half of the film, HAL seems to turn on the crew of the ship. In simply performing tasks that preserve it, or rather himself, he kills off all of the other crew except Dave. I use the word kill, but I want to say murder. In the end, Dave has to shut HAL down in order to stay alive. This is one of the more emotionally charged scenes in the movie, because it seems almost sad when Dave pulls out HAL’s memory modules one by one. The machine begs and pleads for mercy, and something in us wants to respond to that.
So was HAL alive? Would we think the Smart Home was alive if the garage door crushed every member of our family? The first question is probably more difficult to answer than the second. In Life on the Screen, Sherry Turkle paraphrases Minsky’s idea of artificial life as “the discipline of building organisms and systems that would be considered alive if found in nature” (Turkle, 151). Both HAL and the Smart Home, although unlikely to be found in nature, would most commonly be compared simply to another human. The software would not be considered alive in the same way that the Blind Watchmaker program can be, as a growing, unpredictable organism. What would be hard to sort out, however, are our feelings toward such a human-like machine.
These machines help us and make us happy, but at the same time they can hurt us when they malfunction. I haven’t heard of anyone being killed by a Smart Home, but if it did happen, I think it would be spoken of as faulty technology. Turkle would probably explore the implications of this new technology for the way she perceives life, but then set it aside as not actual life itself. HAL’s situation would be harder to decipher. HAL is a self-perpetuating organism that is self-serving and mostly unpredictable. It is very clear that he is an object of technology, but he seems to have evolved real animal, and even human, characteristics.
Both cases, however, portray a certain way that humans begin to interact with computers. In our more realistic scenario, with the Smart Home, many people begin to rely on this technology and trust it as a huge part of their lives, almost like a butler, maid, and nanny combined. We may not yet see the technology as alive, but it plays such an active role in our newly commoditized, day-to-day living that we may value it as much as a person or a pet. The trouble with determining AI and aliveness is the set of emotions that exist within ourselves. We are capable of loving people who help and entertain us, so we attach the same emotions to the many technologies that do the same. The issue lies in our sense of what real value is. And in our world, with such a strong value placed on convenience, we love our technology like a close friend. To others, these things may not be alive, but we breathe life into them with our own emotions.