Destructionator XIII wrote:Go to hell, you stupid piece of shit. Some of us have actual things to do or prefer to sleep than repeat the same thing to two different posters.
PS read my responses to Mike and see how they apply to what you said.
Your responses, both to me and to Mike, are so bizarre they're almost art. It's like an avant-garde argument: the things you say in response are so disjoint from what they're supposedly responding to that they don't really make any sense. Still, concession accepted; this is just a strange way of avoiding an argument on your part.
Destructionator XIII wrote:A model is like applying the laws of physics to the computer processor. The computer didn't choose to do that, physics made the selection.
Once again, the point goes sailing WAAAY over your head (not to mention the definition of the word 'model'). Let's just start from the top here.
The simplest computer programs are deterministic. That is, there is no randomness involved: the program will always produce the same output for any given input. However, simple computer programs are not a good model for human reasoning, so we use an arbitrarily complex computer program as the model instead. I am not an expert in computer science, and I don't see Starglider around here, so let's just use a very simple non-deterministic program. It is possible to embed "choice points" at certain locations in the program that control program flow, with the method of choice not specified by the programmer. The programmer specifies the set of possible alternatives, but the program itself chooses between them at run time, via some general method that is consistently applied (more sophisticated programs can change the method as well). There are all sorts of implications of this type of programming, and ways to make it more complex, but let's keep it simple for now.
The point is, the computer program makes a choice based on its inputs. The output will not always be the same for the same input, hence non-deterministic. This is possibly the most basic kind of non-determinism, by the way (unless you count random number generation). But the point remains: the program itself makes the choice; the programmer only sets the architecture and the parameters under which the choice can be made.
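To make the idea concrete, here is a minimal sketch in Python (not from the original post; I'm using `random.choice` as a stand-in for the unspecified "general method" applied at the choice point):

```python
import random

def run(x):
    # Deterministic part: same input always gives the same value here.
    y = x * 2
    # Choice point: the programmer lists the alternatives, but the
    # selection between them happens at run time, via one general
    # method (here, a uniform random pick) applied consistently.
    branch = random.choice(["add", "subtract"])
    if branch == "add":
        return y + 1
    return y - 1

# Same input, yet the output is not always the same:
outputs = {run(5) for _ in range(100)}
print(outputs)  # some subset of {9, 11}
```

The architecture (the two branches) and the parameters (what each branch does) are fixed by the programmer; which branch runs on any given call is not.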
Now, the human brain is a very complex non-deterministic model. The neuroanatomy of the brain (the architecture) and the methods by which it operates (the parameters) are set by evolution; we can't exceed the capacity of the brain that developed as a result of evolutionary pressures. Furthermore, there are specific areas of the brain associated with choosing between alternatives and weighing the options of each. Although situations differ, our brain uses the same general method for choosing. It all has to do with the weight of certain stimuli as your brain processes them; the type of choice might involve input from different parts of the brain or body, but the actual method of choice is general, not specific. In fact, because the method is general and predictable, it is possible to trick the brain in all sorts of ways. For example, right-handed people will usually choose their right hand if asked to randomly use one; however, through the introduction of certain stimuli that obfuscate the decision-making process, it is possible to "make" them choose their left hand, although they will still feel as if they made the choice freely (more details here). This isn't the only example. Since it is possible to predictably "trick" people into choosing a certain output by controlling the inputs, we know that there is a general method of choice, even if the model is still non-deterministic.
It is on you, now, to explain exactly what makes the latter instance, the human choice, "better" than the computer's choice. Why should one be classified as free will and the other not? In both cases the selection (output) is made as a predictable reaction to certain factors (inputs).