The Romulan Republic wrote: ↑2019-05-06 04:47am
I mean, Russian troll accounts could be just individual trolls. They could be more orchestrated. That claim certainly appears to be taking a leap beyond what the study proves, and it's sloppy journalism-
They are claiming that the study says something its author has specifically denied: that 50% of the negative tweets were Russian. Not true in the slightest. I've read the thing myself.
That's not sloppy journalism, that's dishonesty, pure and simple. It's misleading their audience into thinking they read it when they obviously didn't. And few if any news sources have retracted the inaccurate figures. CNet is the only source I have found that has definitely read it.
The Romulan Republic wrote:
but it is not a claim that I made in either my title or my posts. So if this is the basis for implying that I am a liar and demanding that my title be edited, then you can fuck right off.
The fact that you can't see the naked sampling bias in this study suggests you are either gullible enough to take news sources at face value rather than reading the primary source, or you just don't know what good and bad study design looks like. The latter is fine, given most people don't have that knowledge, but when someone tells you "this is shitty study design", listen to what they have to say.
Note that at no point have I called you a liar. I've called the media dishonest, but despite your hyperventilating, I haven't said anything about your honesty.
The Romulan Republic wrote:
The Russian government doesn't care about Star Wars. But it does care about stirring up divisions in Western nations. And will use any tool to that end.
And blaming "the media" (as if the media is a homogeneous monolith with a single view), and whining about how they're out to get the poor innocent Russians, is right out of the Trumpist propaganda playbook.
Please provide evidence for the claim that the Russian Government cares about stirring up divisions in Western Nations. And don't use this very study as evidence; it's a shit study, as I've already explained, and as I will further elaborate on since you need it explained.
And please, take your straw man and burn it. The media is not a monolith, but it is homogeneous in many cases due to many "journalists" being too lazy to read primary sources. If you don't read a primary source, what is left for you to read? Other journalists. This clearly happened in this case, because the same exact claims were repeated by so many news outlets, and anyone who has read the study knows the numbers aren't what they say they are. One does not have to be a Trump-supporting conservative to have legitimate criticisms of journalism as it's practiced in the Internet age.
The Romulan Republic wrote:
Source for that?
Also, I just love how you keep putting "manually" in sarcastic/scare-quotes while implying that the fact that it was reviewed by a person rather than entirely automatic (because automated algorithms are so reliable) means it can be dismissed out of hand and I'm a liar for posting it.
Source for what? The part where he said in the actual study that he used manual sorting, or the part where manual sorting is known to be prone to human bias? By the way, those aren't scare quotes; since manual sorting is a technical term not everyone will be familiar with, I thought it best to leave it in quotes. It's obvious that you, for instance, don't really know what it means either.
If you want me to source the claim that manual sorting (that is, having a trained rater go through the data by hand and rate whether each archived Twitter post is positive, negative, or neutral, or politically motivated or not) is an unreliable methodology, especially when there is only one rater sorting the posts, then try "Research in Psychology: Methods and Design", 7th edition:
Chapter 6, page 203 wrote:
Experimenter Bias
As well as illustrating falsification and parsimony, the Clever Hans case (Box 3.3 in Chapter 3) is often used to show the effects of experimenter bias on the outcome of some study. [...] Similarly, experimenters testing hypotheses sometimes may inadvertently do something that leads participants to behave in ways that confirm the hypothesis. Although the stereotype of the scientist is that of an objective, dispassionate, even mechanical person, the truth is that researchers can become emotionally involved in their research. It's not difficult to see how a desire to confirm a strongly held hypothesis might lead an unwary but emotionally involved experimenter to behave (without awareness) in such a way as to influence the outcome of the study.
I've omitted the details about Clever Hans, because you can look it up for yourself online. This goes on to talk about how an experimenter's behavior influences the participants' behavior in an experiment, so it might not seem immediately relevant. However, chapter 12 clarifies that it is relevant even to observational studies such as this:
Chapter 12, pages 409 and 410, on challenges facing observational methods wrote:
Observer Bias
A second problem for those doing observational research is experimenter bias. In Chapter 6, you learned that when experimenters expect certain outcomes to occur, they might act in ways that could bring about such a result. In observational research, observer bias means having preconceived ideas about what will be observed and having those ideas color one's observations. For example, consider what might happen if someone is studying aggression in preschoolers and believes from the outset that little boys will be more aggressive than little girls. For that observer, the exact same ambiguous behavior could be scored as aggressive if a boy did it but not aggressive if performed by a girl. [...] Bias can also occur because observational studies may collect huge amounts of information. Deciding which observations to report involves reducing this information to a manageable size, and the choices about what to select as relevant and what to omit can be affected by preconceived beliefs.
Biasing effects can be reduced by using good operational definitions and by training observers to identify precisely defined target behaviors. When actually making the observations, behavior checklists are normally used. These are lists of predefined behaviors that observers are trained to spot. [...]
In addition to defining behaviors with precision, another way to control for observer bias is to have several observers present and see if their records match. This is interobserver reliability, a concept you encountered in Chapter 7. This form of reliability is usually measured in terms of the percentage of times that observers agree. Of course, both observers could be biased in exactly the same way, but a combination of checklists, observer training, and agreement among several observers generally controls bias.
I've omitted the sections I deemed irrelevant (information on animal studies, detail on checklists you can figure out yourself, and videotaping observations). By now, you should get the point: manual sorting, i.e. having a (hopefully) trained observer sort through open-ended responses in a survey or archived posts in a "natural" online environment like Twitter and make judgements about the behaviors they see in the writing, is just as prone to experimenter and observer bias as any other study, and the way you correct for it is exactly the same as in any other study. Have multiple observers sort through the data with behavior checklists and good operational definitions for (in this case) political motivation. The checklist should not be applied after someone has given the data a pass with no checklist, but that's essentially what he did here. Moreover, this study appears to have only one researcher, a PhD student, doing all of the work, so by definition it basically has no inter-rater reliability. Hence why I do not consider it a good study.
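To make "inter-rater reliability" concrete, here is a quick toy sketch in Python (the labels below are ones I made up for illustration, not data from the study): two raters independently label the same tweets, you measure how often they agree, and you correct that raw agreement for chance agreement using Cohen's kappa. With only one rater, as in this study, there is nothing to compute at all.

from collections import Counter

# Hypothetical labels two independent raters might assign to the same ten tweets
# ("pos" = positive about the film, "neg" = negative, "neu" = neutral).
rater_a = ["neg", "neg", "pos", "neu", "neg", "pos", "neu", "neg", "neg", "pos"]
rater_b = ["neg", "pos", "pos", "neu", "neg", "pos", "neg", "neg", "neu", "pos"]

def percent_agreement(a, b):
    # Share of items on which the two raters assigned the same label.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # Raw agreement corrected for how often the raters would agree by chance alone.
    n = len(a)
    observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum((counts_a[label] / n) * (counts_b[label] / n)
                   for label in set(a) | set(b))
    return (observed - expected) / (1 - expected)

print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.2f}")  # 0.70
print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.2f}")       # ~0.53

A reported agreement figure like this is what lets a reader judge whether the manual sorting was reliable; a single-rater study simply can't report one.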
And plenty of professional researchers submit bad studies, so this is just something that you have to be wary of any time you read one. It's just a fact of life in the social sciences.
Satisfied, Rom?
The Romulan Republic wrote:
There are many things that would be very easy to judge on less than 245 characters. If someone posts "God hates F*gs", for example, it doesn't take a great deal of analysis to figure out their motives. In any case, you have to either back up your insinuation that the study's standard was "common experience among liberal leaning readers", and the implication that the author (much like reality) has a liberal bias, or else retract it.
Not my claim, Rom. Read again. I didn't say that the writer of the study has a liberal lean; I said the two of us lean liberal, and that biases us toward believing his methods correct. Except I have been trained not to rely on my intuitions, and instead to look at whether the study has anti-bias measures built in. It doesn't. That's a problem. And what's more, the study makes untenable claims of being representative. That's also a problem.
Though, if you want to know whether the writer of the study leans liberal himself, read it yourself. His central argument, that conservatives in this country are picking up on Russian propaganda tactics, and the negative tone he takes towards this development are quite telling, IMO.