General Zod wrote: The problem is you seem to be using some sort of nebulous definition of "safety" that I don't get. It can't be "prevent people from getting killed" because people can still do whatever they want once they leave the airport tarmac under your system or any other system, including building bombs.
Fair enough. The simple answer is that no system of security can provide 100% safety. The goal of a security system should be to minimize the number of successfully executed terrorist attacks, and the amount of damage those attacks inflict, with a finite expenditure of resources (that is "safety"; a 100% safe system would guarantee that no attacks ever take place). Improvements in safety therefore require either an increased expenditure of resources or a more efficient use of the resources already put into the system to detect, stop, or deter attacks. This is a game theory problem.
Let's go over a simple hypothetical example to see the general idea of how this works. You have 5 people trying to board a plane to New York from Chicago on a Monday: a young black mother with her 4-year-old daughter, staying the week and the weekend in New York and flying back Sunday evening; a white man with a round-trip ticket returning late that night; a young Arab woman with a one-way ticket; and an elderly Hispanic man returning to Chicago on Friday. A real system would probably have much, much more information about all of these passengers (e.g., it could see how often they had flown in the past, detect simple patterns in their flights, see where they've traveled before, etc.). In any case, you can search 2 of them (limited resources). A system without any profiling whatsoever would simply pick two of them at random.
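As a toy sketch of that no-profiling baseline (the names and the search capacity of 2 are just the illustrative figures from the example, not real data):

[code]
import random

# The five hypothetical passengers from the example above.
passengers = [
    "mother", "daughter", "white man", "Arab woman", "elderly Hispanic man"
]

# No-profiling baseline: search capacity is 2, so pick 2 passengers
# uniformly at random. Every passenger has the same 2/5 = 40% chance
# of being searched, regardless of anything else we know about them.
searched = random.sample(passengers, k=2)
print(searched)
[/code]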
A more sophisticated system, like the one I think most advocates of profiling would argue in favor of, tries to take into account as many variables as possible to determine which people are most likely to be carrying bombs. In all likelihood, each person has some non-zero chance of carrying a bomb onto the airplane, but I think you'll agree that some are more likely than others to be carrying it. The four-year-old, for example, seems intuitively unlikely to be carrying a bomb. Let us say the system evaluates the chance that any one of the five is carrying a bomb as vanishingly small (say, .0000000000001%). You still get to search two of them, but how do you choose? A profiling system evaluates the independent likelihood that each person is carrying a bomb, based on the profile data, and then compares those likelihoods against each other.
Based on that probability, it then assigns each person a probability of being searched. Let's say that, given everything the system knows about these people, it determines that the white man and the Arab woman are equally likely to be carrying a bomb, the elderly Hispanic man is ~80% as likely as the white man, the young mother roughly 70% as likely, and her daughter 30% as likely. The system then gives the white man and the Arab woman each a 50% chance of being searched, the elderly Hispanic man a 40% chance, the young mother a 35% chance, and the daughter a 15% chance. The exact probabilities are basically irrelevant. The concept is important, though: because not everyone has an equal probability of carrying a bomb (if there is one), it is actually very inefficient to assign them all equal probabilities of being searched; doing so makes it much more likely that, if one of those people is carrying the bomb, they will not be searched and will be able to sneak it onto the plane. That means that, if the system has too few resources to search every person, it can be made safer by the use of profiling. If someone's profile suggested that, vis-a-vis his fellow passengers, he was 99 times as likely to be carrying a bomb, he would be searched 99 times as often as any one of them under the profiling system. Under the system without any profiling, he would be searched only as often as they are. This is clearly an "unsafe" result.
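Here's a rough sketch of that allocation, using the illustrative relative likelihoods from the paragraph above. The normalization here makes the search probabilities sum to exactly the capacity of 2, so the outputs differ slightly from the rounded 50/50/40/35/15 figures, but the idea is the same: search probability scales with relative risk.

[code]
# Relative likelihood of carrying a bomb, scaled so the two
# highest-risk passengers in the example (white man, Arab woman) are 1.0.
relative_risk = {
    "white man": 1.0,
    "Arab woman": 1.0,
    "elderly Hispanic man": 0.8,
    "mother": 0.7,
    "daughter": 0.3,
}

# With 2 searches available for 5 passengers, allocate search probability
# in proportion to relative risk, so the expected number of searches
# equals the capacity of 2.
capacity = 2
total_risk = sum(relative_risk.values())
search_prob = {name: capacity * r / total_risk for name, r in relative_risk.items()}

for name, p in search_prob.items():
    print(f"{name}: {p:.0%} chance of being searched")
[/code]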
In practice, this decision is much more complicated: instead of a binary "search" or "don't search," there are many more options (different layers of screening), so the real choice is to select an appropriate level of screening for each passenger, none of which is 100% likely to detect a bomb carrier (if they are carrying a bomb) but some of which are better than others. (So, for instance, the screening they make everyone go through doesn't seem terribly effective to my untrained eye, but they do it to everyone. Would it be better to let the 20% of people identified as most trustworthy pass with lighter screening, and subject the remaining 80% to more intensive searches? Not clear; it's an empirical question.)
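To make the "levels of screening" idea concrete, here is a purely illustrative sketch; the tiers, thresholds, and risk scores are assumptions for the sake of the example, not a description of any actual screening regime.

[code]
def screening_tier(risk_score: float) -> str:
    """Map a hypothetical risk score in [0, 1] to a screening level.
    The thresholds here are made up for illustration only."""
    if risk_score < 0.2:
        return "expedited"   # e.g. the ~20% judged most trustworthy
    elif risk_score < 0.8:
        return "standard"
    else:
        return "intensive"

for score in (0.1, 0.5, 0.9):
    print(score, screening_tier(score))
[/code]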
In addition, and this gets more to the point that several people have talked about, decisions on safety affect not only the likelihood of detecting an individual bomb carrier but also the probability that any of those individuals is carrying a bomb in the first place. Perhaps every white guy realizes that, under this system, he's likely to be searched, and rather than carrying bombs onto airplanes the crazy white guys of the world decide to shoot up their workplaces instead. That seems like a bad result, but depending on the relative costs of shooting up a workplace as opposed to blowing up an airplane, it may actually be a good one. If the costs of shooting up a workplace are high enough that, even if every white guy who would have blown up a plane decides to shoot up his boss instead, only 1/10 as many attacks occur and those attacks do on average only half as much damage, then that could actually be a very successful system. (As some people have suggested, the terrorists will recruit other types of people; that's actually a partial victory for the system, since it seems pretty easy for them to recruit 17-year-old Arab boys and much harder to recruit 75-year-old Asian women, so if they have to switch over entirely to that group it will reduce the number of attacks.)
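The back-of-the-envelope arithmetic for that substitution scenario looks like this; all the numbers are the hypothetical ones from the paragraph above, with an arbitrary baseline thrown in for scale.

[code]
# Hypothetical baseline: some number of plane attacks with some average damage.
baseline_attacks = 100            # arbitrary units
baseline_damage_per_attack = 1.0  # arbitrary units

# Substitution scenario from the paragraph above: would-be plane bombers
# switch to workplace attacks, but only 1/10 as many attacks occur and
# each does only half as much damage on average.
substituted_attacks = baseline_attacks * 0.1
substituted_damage_per_attack = baseline_damage_per_attack * 0.5

baseline_total = baseline_attacks * baseline_damage_per_attack
substituted_total = substituted_attacks * substituted_damage_per_attack

# 100.0 vs 5.0: a 95% reduction in total expected damage.
print(baseline_total, substituted_total)
[/code]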
There's some complicated math to be done here, depending on things like the substitution rate, the magnitude of the improvements in efficiency, etc., but it's basically a systems analysis problem, and systems analysis problems are solvable, even without perfect information about many of the variables. The point is that this is a better system if your goal is to reduce the number of terrorist attacks and the amount of damage they do (improve safety) with a finite expenditure. I don't think it's unreasonable to say that we should look into it.