I just love this... No doubt there's a strong rationale behind it, and the technology is foolproof:
'Terrorist Facebook' – the new weapon against al-Qa'ida
but seeing as I'm unlikely to get to discuss this with anybody, I'm left to ponder my misgivings with myself... Let's say, for example, that I'm paranoid. Not just a bit, but a lot. Let's say that everything that I don't have complete control over represents a threat to me and my interests - if I can't be sure that a certain person is doing a certain thing, and that that thing is precisely what they have been told to do, by me, then this is a cause for concern.
Let's say that I am in a position of control and authority. Let's say that my view of control and authority is that I say "jump" and everybody else says "how high?" Let's say that this is what I use as an indicator of who and what is a threat to me: those who don't say "how high?" are very likely plotting something against me, in my world. Are you getting the picture?
Now, apart from being paranoid, this also looks vaguely autistic (according to my understanding of the word, anyway). That is, if things aren't happening in exactly the way that I had envisioned, I get nervous, and seek to exert/reassert control.
OK, so much for that. Now, can you see my dilemma? No? Well, it's this: I think that everything, and I mean everything, that I do will have traces of that paranoia and obsession about it. If I write a post, on this blog, it will reek of paranoia. If I have a conversation with somebody, then that's the direction I'll try to steer the conversation. Everything will be about control. And if I were to write, or have written, a computer program to plot the relationships between known and suspected terrorists...
Let's model this. Andrew is a terrorist - don't ask me how we know this, but he is. Now, Andrew regularly has coffee with Brian. We don't know that Brian is a terrorist, but the fact that he's meeting with Andrew suggests this to us. Andrew could have a life outside terrorism, of course, and Brian might be part of that non-terrorist life - so, how can we check that?
All of Brian's other associations (i.e., people he knows - Charlie, Dick, Edward, and so on) are not suspected terrorists - they have never had any links with any radical organizations, or anything of that nature. So, Brian's in the clear, right - we've satisfied ourselves that the relationship between Andrew and Brian is merely social? Well, that rather depends upon how paranoid we are. What if it occurs to us that Andrew is in contact with Brian precisely because of his clean record, as far as the intelligence services are concerned?
Brian might be covertly sympathetic to Andrew's activities, and may be recruiting for Andrew from his own circle of friends and acquaintances. How do we check that?
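To make that concrete, here's a minimal sketch, in Python, of the kind of association model I'm describing. The names, the links and the halving-per-hop rule are mine, invented purely for illustration - I have no idea what the real program does. The point it makes is the one I'm driving at: a rule that only ever propagates suspicion outward can dilute a score, but it can never clear anyone.

from collections import deque

# Who regularly has coffee with whom (undirected links); illustrative only.
contacts = {
    "Andrew": {"Brian"},
    "Brian": {"Andrew", "Charlie", "Dick", "Edward"},
    "Charlie": {"Brian"},
    "Dick": {"Brian"},
    "Edward": {"Brian"},
}

known_terrorists = {"Andrew"}

def suspicion_scores(decay=0.5):
    """Walk outward, breadth-first, from the known terrorists.

    Each hop away from Andrew halves the score, but nothing in this
    rule ever sets a score to zero - nobody is ever off the hook.
    """
    scores = {name: 1.0 for name in known_terrorists}
    queue = deque(known_terrorists)
    while queue:
        person = queue.popleft()
        for neighbour in contacts[person]:
            if neighbour not in scores:
                scores[neighbour] = scores[person] * decay
                queue.append(neighbour)
    return scores

print(suspicion_scores())
# e.g. {'Andrew': 1.0, 'Brian': 0.5, 'Charlie': 0.25, 'Dick': 0.25, 'Edward': 0.25}

Notice that Brian's clean circle of friends doesn't reduce his score at all; it only gives the suspicion somewhere else to flow.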
Hmmm. That's about as far into fantasy land as I'm inclined to go, to be honest. The point I'm trying to make is this: if one's mindset is geared a certain way, then that is the way that one will see the world. If one sees a threat in everything, then one's computer program will do so, too, because it was created by one. It cannot think differently to one, because a computer program has no independent learning capability - it will do what one instructs it to do, and if what one instructs it to do is look for potential threats, then that is precisely what it will do, and it will keep on looking, until it finds one. And nobody is ever off the hook.
I think that it is possible to write a program that can cope with that amount of information. I think it's possible to have a processor that's capable of dealing with that information in a timely manner. However, the success of the operation is dependent upon this, I think: has one accommodated the possibility that what a person is doing is completely innocent? In other words, does the computer/program have the licence to dismiss a certain piece of information, or else shelve it and re-evaluate it in the light of information that might be received in the future?
Alternatively, is the computer/program obliged to ignore innocent actions, as most people do when they've made their minds up? Let's say that 99.99% of what a person does is beyond reproach. But we don't care about that. We dismiss that, and look at the 0.01% that is, at least, dubious; aberrant to our eyes. But the whole of that 0.01% looks threatening, and because we're only looking at that 0.01%, it becomes the universe with respect to that individual, and that individual becomes dangerous in their entirety.
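The arithmetic of that is easy to sketch. Here's a toy example, with numbers I've made up: ten thousand observed actions, one of them dubious. A scorer that weighs everything rates the person as essentially harmless; a scorer that discards the innocent 99.99% and judges only what remains rates the same person as maximally dangerous.

# Toy numbers, purely illustrative: 10,000 actions, one of them dubious.
observations = [0.0] * 9_999 + [1.0]   # 0.0 = innocent, 1.0 = dubious

def balanced_view(actions):
    """Weigh the dubious action against everything else the person does."""
    return sum(actions) / len(actions)

def paranoid_view(actions):
    """Discard the innocent actions and judge only what remains."""
    dubious = [a for a in actions if a > 0.0]
    return sum(dubious) / len(dubious) if dubious else 0.0

print(balanced_view(observations))   # 0.0001 - effectively beyond reproach
print(paranoid_view(observations))   # 1.0    - dangerous "in their entirety"

Same person, same data; the only difference is whether the 99.99% is allowed to count.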
Because it seems to me that a computer doesn't care - it has no concept of master/servant, and it is not afraid of somebody pulling the plug on it. The computer is neutral - so how does one have it tell us that a certain person is dangerous, to us, when it doesn't care one way or the other? One skews its way of thinking. With a program.
Nope, this isn't going to work: it's founded on flawed reasoning and a false premise. The false premise is this: that any human being, no matter how powerful in the human world, has the right to say that they are right and somebody else is wrong, when one only has to view the scene from the opposite perspective to see how fallacious that argument is. You think right and wrong are objectively ascertainable? God help us.
Addendum: Perhaps a better way to phrase my summation is this: I think that there's a risk that we think the computer is infallible, when it's not. If we believe that it's infallible, then even if we perceive that its conclusions are incongruous, we will dismiss our own misgivings and proceed as it has instructed us.
But if the program that it's operating is based on a false premise (our own fear), and it is not required to weigh the positive along with the negative, then the results it arrives at will be flawed. Because, let's face it, at some level everybody, but everybody, is acting against our personal interests.
Depending upon how microscopic the dubious behaviour is that our computer will pick up on, I think there are, in extremis, two potential scenarios. First, the computer will find that everybody is engaged in terrorism, at some level. Second, it will conclude that nobody is. If the program is True, and not skewed in favour of those who have ordered that it be written, then the latter is more likely.
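One last toy sketch, again with invented numbers, of those two extremes: the verdict depends entirely on where the sensitivity threshold is set, not on the people being judged.

# Made-up "dubiousness" rates for three imaginary people.
people = {"Alice": 0.0003, "Bob": 0.002, "Carol": 0.04}

def verdicts(threshold):
    """Flag anyone whose rate of dubious behaviour meets the threshold."""
    return {name: rate >= threshold for name, rate in people.items()}

print(verdicts(threshold=0.0))   # everybody is a terrorist, at some level
print(verdicts(threshold=0.5))   # nobody is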
3 comments:
Hi!
Interesting take on the Facebook networking phenomenon, Matt.
I'll have to read this a few times to make any other comment worth reading!
Hi Matt,
Good thought provoking post, thanks.
Came by to say hello and wish you well.
Hope you are having a good day.
Love,
Herrad
Stephany: Yes, it's curious, isn't it? Some behaviour appears to be punishable on a strict liability basis - the reasoning behind the action is irrelevant; the mere action itself is sufficient to render a person liable.
Herrad: Hey. I can't see this working, to be honest, because the one thing that these people (the people who create these devices) seem incapable of doing is contemplating why others behave as they do (even if they're behaving murderously), and accepting that that is a logical consequence of the stimuli that they are acting under. If they don't ever acknowledge the validity of that kind of response, then they will most likely have to face this scenario, again and again. And if they find that the routine use of violence is effective as a solution, then they will have recourse to it all the more quickly. Too bad!