Artificial intrusion?
Let's think for a minute about some things a good AI system would be able to do. I'm not talking about having "human emotions" or any junk like that; I mean being useful for real, practical human tasks. Take, for instance, an away message a friend of mine had posted: it said something like "my new email address is the same as my old but with indiana.edu at the end." A good AI helper running in the background would notice that, know which person in my address book that AIM contact corresponds to, and update the entry (probably without getting rid of the old address, but making the new one the default).
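To make the scenario concrete, here's a minimal sketch of what such a background helper might do, in a deliberately simplified case where the away message states the new address outright (the real example, "same as my old but with indiana.edu at the end," would need actual language understanding). All names and data structures here are invented for illustration:

```python
import re

# Toy address book: AIM handle -> contact record.
# The first address in the list is treated as the default.
address_book = {
    "friend_handle": {
        "name": "A. Friend",
        "emails": ["afriend@cs.indiana.edu"],
    }
}

# Crude pattern for explicit "my new email address is ..." announcements.
NEW_EMAIL_RE = re.compile(r"new email address is\s+(\S+@\S+)", re.IGNORECASE)

def process_away_message(handle, message):
    """If the away message announces a new address, update the contact.

    The new address becomes the default (front of the list), but the
    old one is kept, as described above.
    """
    contact = address_book.get(handle)
    if contact is None:
        return False
    match = NEW_EMAIL_RE.search(message)
    if match is None:
        return False
    new_email = match.group(1).rstrip(".")
    if new_email in contact["emails"]:
        contact["emails"].remove(new_email)
    contact["emails"].insert(0, new_email)  # new default; old address kept
    return True

process_away_message(
    "friend_handle",
    "Away: my new email address is afriend@indiana.edu.",
)
print(address_book["friend_handle"]["emails"])
# → ['afriend@indiana.edu', 'afriend@cs.indiana.edu']
```

Even this toy version has to join two worlds: a screen name in one program and a contact record in another. The hard (and unnerving) part of the real task is doing that linking reliably.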
This kind of thing would be incredibly useful. (And, of course, hard... and a fascinating problem to study :-) ) But how did you feel reading that? Did it seem a bit invasive to have a program running with that kind of knowledge about you and the real-world correlations in your electronic data (or at least enough equivalence classes in the electronic world), and with access to make a change like that?
(An aside: in part I find this example interesting because it might not bother us for a human to do this - if you had a secretary doing things like that, it might not be so odd. Of course, the material secretaries usually handle for you isn't the kind you're emotionally protective of, the way you are of personal information. An electronic agent doing this task might not be troubling in a business setting either.)
The problem with good artificial intelligence is not only that it's hard to do, but that we're really very bothered by it, even without it supposedly having feelings or intentions or anything like free will. I mean, heck, we're bothered by some algorithm at Google scanning our Gmail to automatically insert ads (see earlier commentary). It doesn't matter that the program actually takes no action that could cause us any conceivable harm; we anthropomorphize it unconsciously.
Maybe this will be a mentality shift of the next few generations (and I mean a couple of decades; not long enough for a few generations in straight succession). As it is, we're bothered by machines doing things for which we'd be bothered if a human did them - and even some things we wouldn't. Maybe we'll come to register the difference on a more visceral level, and stop caring if a bot reads our diaries. I think it depends in part on the direction technology goes - if you buy or subscribe to some kind of electronic journal that makes comments on your writings ("you like Jimmy? What do you see in him?"), that's really not going to help the case of AI not being invasive or weird.