Conversational Behavior: Politeness and Respect

"Language from Police Body Camera Footage Shows Racial Disparities in Officer Respect" by Voigt et al. investigates almost exactly what the title describes.

I am surprised that the authors mention conflicts between communities and their police forces only in Midwest and East Coast states, but then go on to study the police force in Oakland, California. If the study was meant to shape public perception of these conflicts, the most direct approach would be to study the areas where the conflicts took place. I am also not sure what the authors would have concluded if they had not found evidence supporting their argument. Would they claim that police forces in general do not treat black drivers differently, or that Oakland's police are more respectful than their counterparts elsewhere? Applying the same analysis to the cities named as sites of conflict and comparing the results could answer these questions readily. It would also provide a more impactful conclusion, since it can rule out alternative explanations.

An extension of the study would be very helpful to see whether this racial disparity is persistent or changeable. If the same analysis were applied to data collected before major news stories about police behavior, these questions could be explored. Follow-up studies using this analysis could also show how officers respond after a news event or change when adopting new tactics. High-profile police cases likely affect police behavior far from the incident, and this effect could be measured.
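As a rough sketch of what such a follow-up could look like (entirely my own illustration, not part of Voigt et al.'s method), one could compare the mean respect scores produced by a language model like theirs in windows before and after a high-profile event. All names, dates, and scores below are assumptions for illustration.

```python
# Hypothetical sketch: compare per-stop respect scores before and after a
# high-profile news event. 'respect_score' stands in for the output of a
# model like the one Voigt et al. describe; the data layout is assumed.
from datetime import date
from statistics import mean

def before_after_means(stops, event_date):
    """stops: iterable of (stop_date, respect_score) pairs."""
    before = [score for d, score in stops if d < event_date]
    after = [score for d, score in stops if d >= event_date]
    return mean(before), mean(after)

pre, post = before_after_means(
    [(date(2015, 3, 1), 0.62), (date(2015, 9, 1), 0.71)],
    event_date=date(2015, 6, 15),
)
print(f"mean respect before: {pre:.2f}, after: {post:.2f}")
```

A real analysis would of course need far more data and controls for officer, neighborhood, and time of day, but the before/after comparison is the core idea.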

Polarization and Selective Exposure

"Exposure to Ideologically Diverse News and Opinion on Facebook" by Bakshy, Messing, and Adamic explores how social media sites like Facebook deliver political information to users and how that information affects their political perspectives.

This research shows the influence social media can have in polarizing opinions on certain topics. Since social media algorithms are tuned to recommend content that the user 'likes', most recommended content will match the user's political leaning, polarizing the political landscape even more.

Perhaps this could be remedied by renaming the 'like' action to a word with less problematic connotations. A user study could both replicate this study and test some alternatives and their effects. A breakthrough here would let social media sites showcase more transparently what the new action name means. For instance, if users 'nod' at articles on Facebook instead of 'liking' them, they might read the action as 'I endorse this content as correct' rather than the current 'I like this content, want more of it, and want my friends to know I like it'.

This is also a good chance for social media sites to adjust their algorithms to take political leanings into account, and to keep tuning a recommendation engine that is as unbiased as possible. Doing so would greatly affect the political exposure and perspective that social media users experience, and could lead to more moderate views being expressed and more nuanced arguments being used as support.
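To make that idea concrete, here is a minimal sketch of what leaning-aware tuning might look like. This is my own illustration, not Facebook's actual ranking system; the field names, scores, and weights are all assumptions.

```python
# Hypothetical sketch: re-rank a candidate feed so it is not dominated by
# articles that match the user's own political leaning. Names and weights
# are invented for illustration, not any real platform's algorithm.

def rerank_feed(candidates, user_leaning, diversity_weight=0.3):
    """candidates: list of dicts with 'engagement_score' (0-1) and
    'leaning' (-1 liberal .. +1 conservative); user_leaning in [-1, 1]."""
    def score(item):
        # Base ranking signal: predicted engagement ("likes").
        base = item["engagement_score"]
        # Penalty when the item's leaning is close to the user's own,
        # nudging some cross-cutting content toward the top of the feed.
        alignment = 1.0 - abs(item["leaning"] - user_leaning) / 2.0
        return base - diversity_weight * alignment
    return sorted(candidates, key=score, reverse=True)

feed = rerank_feed(
    [{"engagement_score": 0.9, "leaning": -0.8},
     {"engagement_score": 0.7, "leaning": 0.6}],
    user_leaning=-0.8,
)
print(feed)
```

The diversity weight is the knob the platform would have to tune: too low and the feed stays an echo chamber, too high and engagement likely drops.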

In addition, I wonder whether there is a measurable difference in users' accuracy when self-reporting as members of a minority political party in countries where dissenting political opinion is actively repressed. Could this be used to determine whether other countries repress opinion in a way that is implicitly understood by the population, rather than through explicit and visible political threats?

"Echo Chambers Online?: Politically Motivated Selective Exposure Among Internet News Users" by Garrett investigates whether users prefer support for their own political stance over counterarguments to an opposing position.

I suspect this result might be caused by users having a stronger background in the information their own political party promotes, and therefore a better understanding of the supporting articles they appreciate. Participants in the study could be finding it difficult to parse an article that counters the view of a political opponent, since they are likely less familiar with that viewpoint.

Could the behavior of a user who only selects articles supporting their viewpoint be considered self-censorship? Forcing users out of this behavior would likely violate freedom of speech and freedom of the press. It seems more viable to incentivize users to read articles that present a conversation between opposing political perspectives, or pieces from a less biased news source.

Credibility and Misinformation

“The Promise and Peril of Real-Time Corrections to Political Misperceptions”
Garrett and Weeks investigate ways to respond to inaccurate political claims online.

“A Parsimonious Language Model of Social Media Credibility Across Disparate Events”
Mitra, Wright, and Gilbert use language analysis to predict the credibility of Twitter posts.

Garrett and Weeks rightly point out that a longer-term study is a priority for future work. People are naturally inclined to defend their worldview, so they resist changing their opinions within a short period. But repeated corrections over time might have more influence on a person. Participants might need more time to build trust in the corrections before accepting them. The added insight from the corrections might also lead them to consider that there is more nuance to many of their other views, and that those views are worth examining. There are many psychological elements to consider here in terms of persuasion, trust, participants' backgrounds, and social media.

I think the truth might align more closely with Garrett and Weeks's hypotheses than the results show. Self-reporting likely keeps some participants from reporting an actual change in their opinion. The study notes how participants are defensive of their original position before the experiment and resist change. If a correction does change a participant's view, they could be quite embarrassed at having been manipulated with misinformation and at not being as open-minded or unbiased as they believed. This is a version of the well-known psychological phenomenon of cognitive dissonance. People usually resolve cognitive dissonance over time, tuning their opinions slowly until they are supported by their own experience. Again, this could be investigated in a longer-term study of the corrections.

Mitra, Wright, and Gilbert treat credibility as having a direct connection to language and vocabulary. I am not sure this can correctly account for context and complexities such as sarcasm. The CREDBANK corpus may be quite useful for training on labeled event-related social media posts, but real-world data could still present these complications. Perhaps other studies offer ways of measuring the intent or underlying message of social media posts. Otherwise, humor or sarcasm in social media could introduce error, since they are not captured by the language variables.
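To make the concern concrete, here is a toy version of a surface-cue credibility score. It is my own simplification, not the authors' actual feature set or weights, and it shows exactly the kind of signal sarcasm can fool.

```python
# Hypothetical sketch of a credibility score built on surface language cues.
# The word lists and scoring rule are invented for illustration; the real
# model in the paper uses a much richer, data-driven feature set.
HEDGES = {"reportedly", "allegedly", "apparently", "rumor", "unconfirmed"}
BOOSTERS = {"confirmed", "officially", "definitely", "clearly"}

def credibility_cue_score(post):
    words = [w.strip(".,!?") for w in post.lower().split()]
    hedging = sum(w in HEDGES for w in words)
    boosting = sum(w in BOOSTERS for w in words)
    # Naive heuristic: boosters raise the score, hedges lower it. A sarcastic
    # "oh sure, definitely confirmed" would score high while meaning the
    # opposite, which is the limitation discussed above.
    return boosting - hedging

print(credibility_cue_score("Reportedly the bridge collapsed, still unconfirmed"))
```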

With both of these papers, we know how to identify dubious claims made online and how to present corrections to users in a non-harmful way. But I believe computers are likely not yet adept at crafting the corrections themselves. This is an opportune place for human-computer collaboration: the computer surfaces claims to an expert user, who checks each claim and crafts a correction, which the computer then distributes widely to others making the same claim. Such a system both adapts to new misinformation as it is reported and can be tuned to fit each expert's area of expertise.
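A minimal sketch of that loop might look like the following. Every function name, the similarity measure, and the threshold are my own assumptions, not something proposed in either paper; a real system would need far more careful claim matching.

```python
# Hypothetical sketch of a human-in-the-loop correction pipeline: the expert
# writes a correction once, and the system re-attaches it to later posts that
# closely match the original claim. Matching here is simple word-overlap
# (Jaccard) similarity, chosen only to keep the sketch self-contained.

def jaccard(a, b):
    sa = {w.strip(".,!?") for w in a.lower().split()}
    sb = {w.strip(".,!?") for w in b.lower().split()}
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def find_correction(post, reviewed_claims, threshold=0.5):
    """reviewed_claims: list of (claim_text, expert_correction) pairs."""
    best = max(reviewed_claims, key=lambda c: jaccard(post, c[0]), default=None)
    if best and jaccard(post, best[0]) >= threshold:
        return best[1]   # correction to show alongside the matching post
    return None          # no confident match: queue the post for an expert

reviewed = [("the vaccine contains microchips",
             "Experts have found no evidence of microchips in any vaccine.")]
print(find_correction("heard the vaccine contains microchips!", reviewed))
```

The key design choice is that the expensive step, judging the claim and writing the correction, stays with the human expert, while the computer handles the scale of matching and distribution.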
