Anti-Teaching / Mindful Learning

Anti-teaching is usually described as a rejection of standardized, test-centered education, but I would like to clarify the definition further. Without this clarification, I think that changing current education systems could be a great risk, since throwing out the old without a new plan would cause chaos. Schools could become isolationist, turning away students from other schools and not forming community-wide plans to improve education. One school could completely disregard the achievements of students from other schools if they didn’t meet a set of vague standards. Would the ‘educational heritage’ and background of a student become even more impactful than their achievements? Even Ken Robinson in his talk [2] mentioned that testing is useful as a diagnostic tool, and that the harm it causes today comes from our over-reliance on it. This is a future of education that is not pretty, even with standardized testing thrown out.

So what should standardized testing be replaced with? That question is answered by the week’s other readings, which have one thing in common: an inclusive and strong social connection with empowered students. A very real and practical example of this happening in classrooms at this very moment is a short “Good Times” moment at the beginning of class. In the Capturing Kids’ Hearts initiative, these moments are for teachers to simply ask the students to share the good things they experienced in the past couple of days. Several teachers [1] have talked about how this leads to students who are more inspired and focused on accomplishments. A single student’s goal becomes a shared victory and a moment to inspire their peers to extend the success. I think there is a great dual benefit, though, because it also lets teachers understand what is happening to the students outside of the classroom. This is a true connection that lets teachers know what the students care about and empathize with them, rather than a teacher-versus-students environment where the teacher is separated from the students. The role of a shepherd guiding a directionless flock should never describe what happens in classrooms. Teachers who view themselves as the students’ student, as part of the community, strengthen that meaningful connection to students.

[1] Quillen, Ian. “’Capturing Kids’ Hearts’ Initiative Focuses on Relationship Building.” Education Week, Edweek.org, 17 Oct. 2011, www.edweek.org/dd/articles/2011/10/19/01conversionside-relationships.h05.html.

[2] Robinson, Ken. “How to Escape Education’s Death Valley.” YouTube, TED, 10 May 2013, www.youtube.com/watch?v=wX78iKhInsc&feature=youtu.be.

Networked Learning

Networked learning and experiential learning are both teaching environments that I strive toward. The combination of socialization and intelligence is undoubtedly the main driver of our progress as a species, so focusing on one without the other cripples us severely. Networked learning blends together the experiences of many people so that each person sees how others learn and view the world. This is an amazing opportunity for students to find their own strengths, be inspired by another’s strengths, and work to put them toward a shared goal.

But is there a tradeoff to too much networking in a learning environment? Does a classroom reach a critical point and then become just a socializing group? Online groups can grow so large that they are hard to visualize, let alone understand. The focus of a large online community has to remain on learning in order to avoid the enveloping trap of becoming a social media platform. Moderation tools and community organization (see Wikipedia) can be quite effective, but they absolutely need to be backed up with empathy and honesty. Otherwise the students will know that they have been reduced to numbers in the learning community. Empathy and honesty are the cornerstones a teacher needs to establish in order to keep the space and mindset of learning sacred. Digital or analog, large lecture or mentorship, students need to know that they are valued before they invest in their own learning.

Michael Wesch showed how much he values and empowers his students during his animated talk [1]. He constantly demonstrated how much he valued his students and their progress, but I want to know more about his own experience and the school where he teaches. I am sure many modern education administrations would discourage or even outright prohibit his teaching methods. How did he manage administrative expectations? Was he under scrutiny for attempting something different? What was the size of his class, and how precisely did he implement ‘helping every person in the group creep up the hills’? I want to build a classroom experience like that myself, but that information is necessary to do so.

[1] Wesch, Michael. “What Baby George Taught Me About Learning | Dr. Michael Wesch | TEDxMHK.” YouTube, TEDxMHK, 15 Apr. 2016, www.youtube.com/watch?v=SP7dbl0rJS0&feature=youtu.be.

Revisiting Political Polarization

“The Hostile Audience: The Effect of Access to Broadband Internet on Partisan Affect” by Lelkes, Sood, and Iyengar.

The data behind this research covers 2004 to 2008, but recent changes to government regulation and ISP law could be coming due to the attacks on net neutrality. These changes could quickly overshadow or render useless many of the findings and conclusions made in this paper. Measuring polarization due to Internet access becomes much more convoluted when Internet traffic can be manipulated and treated so differently from household to household as to constitute censorship. And there would be no way to defend the claim that ISPs can be unbiased providers of news, information, and communications.

These kinds of issues make me conflicted over whether governments, laws, and societies should change rapidly or slowly. A ‘fast’ government would have such a short-term outlook on solving issues that its progress would interfere with ongoing research while also making previous research inapplicable. But a ‘slow’ government would make applying research findings laborious enough that it could never be reactive or effective. Perhaps it is research methodology that should change instead, allowing studies that are faster and less thorough to be admissible as strong, conclusive evidence. This could change the public outlook on reported scientific evidence, making it a quicker and more useful source of information while losing a (hopefully) small amount of trustworthiness.

I also doubt that the cost of building Internet infrastructure is the single factor determining broadband accessibility. There are many other factors that can interfere with this assumption. There are many known instances where broadband access is provided by only a single ISP in an area, giving that ISP complete control over Internet pricing. This means households on either extreme of financial stability can be misrepresented or wrongly categorized in studies that assume infrastructure costs are the driving factor of Internet access. I am glad the authors address this in later sections of the publication, discussing the other factors behind broadband demand and referencing Larcinese and Miner’s previous work when determining a proxy for broadband uptake.

Online Reviews

“Predicting Sales from the Language of Product Descriptions”
by Reid Pryzant, Young-joo Chung, and Dan Jurafsky

“Do Online Reviews Affect Product Sales? The Role of Reviewer Characteristics and Temporal Effects”
by Nan Hu, Ling Liu, and Jie Jennifer Zhang

Both papers here focus on how corporations are increasingly using social science to better connect with customers and conduct business. This trend has been gaining a large amount of traction recently, but it is now controversial given the privacy questions aimed at Facebook and Cambridge Analytica. These topics should be scrutinized closely in order to encourage positive societal growth while preventing manipulation and malfeasance against the public.

Pryzant et al. try to estimate buying behavior based on the textual features present in product descriptions. But was there any preliminary analysis to determine whether their complex and novel RNN+GF model was necessary? Could a simpler model be just as effective with less computational cost? Such a model would still have novelty just by approaching the textual analysis of product descriptions rather than the basic summary statistics used in previous studies.
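As a rough illustration of the kind of baseline I have in mind, here is a minimal sketch (my own, not the paper’s model): TF-IDF bag-of-words features plus ridge regression to predict log sales from description text. The descriptions and sales figures are hypothetical stand-ins for the paper’s dataset.

```python
# Minimal baseline sketch: TF-IDF n-grams + ridge regression to predict
# log sales from product descriptions. Data below is made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

descriptions = [
    "rich dark chocolate with low-fat cocoa",
    "healthy vitamin supplement for daily energy",
    "creamy milk chocolate gift box",
    "organic protein powder with low sugar",
]
log_sales = np.log([120, 340, 85, 410])   # hypothetical unit sales

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    Ridge(alpha=1.0),
)
baseline.fit(descriptions, log_sales)

# A linear model exposes which phrases it weights most, almost for free.
vectorizer = baseline.named_steps["tfidfvectorizer"]
ridge = baseline.named_steps["ridge"]
top = sorted(zip(vectorizer.get_feature_names_out(), ridge.coef_),
             key=lambda pair: -abs(pair[1]))[:5]
print(top)
```

A baseline like this is cheap to train and directly interpretable per phrase, which would make it a natural point of comparison before committing to a deep model.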

Pryzant’s research focuses on just the chocolate and health categories. In particular, ‘health’ must cover an incredibly broad range of products: fitness tools, weight-loss foods, vitamins, books, and medicine. I feel that sticking to just these two categories would bias the results toward phrases such as ‘healthy’ or ‘low-fat’.

I don’t see much information on what Pryzant et al. did to verify their training data. Some items may have been severely misrepresented in their descriptions. I also believe that the pictures on product pages have a large impact on buying behavior; they immediately convey information that the descriptions may not cover. There is surely a reason so many marketing campaigns focus on graphics and visuals.

Does Pryzant’s research translate well to other cultures? Politeness is fairly well known as a prominent characteristic of Japanese culture, and the research here identifies politeness as an influential word group that increases buying behavior in this Japanese market. Could the same work be applied to the USA or other cultures that may not place the same social importance on politeness?

I am glad Hu et al. preferred the simple yes/no recommendation review system. The popular 5-star review system is more subjective and can be unhelpful; in a 5-star system, a 4-star review is sometimes considered harmful to a product or service. This is partially due to the subjectivity of the raters, who can become biased or manipulated quite easily. So a review system built to give a finer degree of resolution can actually introduce more bias and noise into the dataset. Perhaps most review systems would be best served by a simple recommend / do-not-recommend setup, since there is almost no question where the author stands.

I am not sure about Hu et al.’s conclusion that the impact of online reviews on sales diminishes over time. Another explanation would be that interest in a product naturally decays over time, leading to lower sales. It would be difficult to measure this effect separately and compare it with how the impact of online reviews declines over time.
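A toy simulation (my own construction, not from the paper) shows how the confound could arise: if sales decay for reasons unrelated to reviews, a naive per-period estimate of the “review effect” shrinks over time even when the underlying influence of reviews is constant.

```python
# Toy confound demo: sales decay naturally while the multiplicative effect
# of review scores stays fixed, yet the naive per-month regression slope
# ("review impact") still shrinks month after month.
import numpy as np

rng = np.random.default_rng(1)
n_products = 500
review = rng.normal(0, 1, n_products)   # fixed review score per product

for month in range(6):
    base = 1000 * np.exp(-0.3 * month)  # interest decays regardless of reviews
    sales = base * (1 + 0.3 * review) + rng.normal(0, 20, n_products)
    slope = np.polyfit(review, sales, 1)[0]
    print(f"month {month}: naive review effect = {slope:.1f}")
# The slope falls every month even though the 0.3 multiplier never changed.
```

Disentangling the two stories would require modeling the decay of baseline interest explicitly rather than attributing all of the decline to reviews.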

Censorship in Chinese Social Media

“Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions” by Chaya Hiruncharoenvate et al.
“Reverse-Engineering Censorship in China: Randomized Experimentation and Participant Observation” by Gary King et al.

It seems obvious that as long as massive, automatic censorship is possible without incurring any major cost, the censor will remain powerful. However, if the only tool the censor can effectively employ is human actors, then it should eventually be defeated by any anti-censorship group (barring some extreme response). A larger anti-censor group with the same tools available can focus its efforts on overwhelming the censor, while a small anti-censor group can stay innocuous and unassuming, focusing on remaining undetected. A powerful and growing censor also faces increased chances that anti-censor groups will infiltrate it and sabotage its goals. However, a censor that can employ computational methods to judge content en masse and in great detail has an effective guard against all of these points.

This leads me to believe that the censorship seen in this research is not sustainable and is only kept alive through computational methods. The direct way of defeating such censorship is to defeat the machines currently driving it. I think this is the greatest implication of the research by both Hiruncharoenvate et al. and King et al.: by understanding and breaking down a censor’s computational tools in this manner, the censor is forced to revert to human methods. And when that censorship is unacceptable to people, they have the strategies I listed above to actually defeat the censor. This point matters because without an accompanying anti-censorship movement of people, defeating the computational tools of the censor is meaningless. So in this case, the computational adversaries are best defeated by computational approaches, and the human adversaries are best defeated by human approaches. I think special consideration should be taken for problems that match this description, because not tackling them in the right way proves to be an incredible waste of time and energy.
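To make the computational side concrete, here is a toy sketch of the homophone-substitution idea from Hiruncharoenvate et al.: a keyword filter blocks exact matches, while a nondeterministic swap from a homophone table slips past the filter yet stays readable to humans. The blocklist and homophone table below are tiny illustrative stand-ins, not the paper’s actual data.

```python
# Toy sketch: a keyword censor blocks exact matches, so a poster
# nondeterministically swaps blocked words for same-sounding characters.
# The one-entry homophone table is illustrative only.
import random

BLOCKLIST = {"和谐"}                 # "harmony", a commonly filtered term
HOMOPHONES = {"和谐": ["河蟹"]}      # "river crab" sounds like "harmony"

def censor(post: str) -> bool:
    """Return True if a keyword filter would delete the post."""
    return any(word in post for word in BLOCKLIST)

def evade(post: str) -> str:
    """Nondeterministically replace each blocked word with a homophone."""
    for word, subs in HOMOPHONES.items():
        if word in post:
            post = post.replace(word, random.choice(subs))
    return post

original = "我们的社会需要和谐"      # "our society needs harmony"
assert censor(original)             # the exact-match filter catches it
transformed = evade(original)
assert not censor(transformed)      # the homophone version passes
print(transformed)                  # human readers still recover the meaning
```

The nondeterminism is what makes this expensive for the censor: with many homophones per blocked word, the filter’s keyword list has to grow combinatorially while human readers decode the posts effortlessly.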

I also have trouble estimating whether the censorship is really working as intended. From King’s findings, if China is very concerned about calls for collective action, then it is surprising that it is less concerned with what could be the ‘seeds’ of outrage or activism. China may censor a movement’s calls for action, but it strangely allows the spread of criticism and information that could motivate a movement. This seems problematic because it does not address the underlying concerns of the people; it just makes it more difficult for them to act. Also, the censorship targets publicly viewed posts on social media but does not seem to focus on the private direct messages and communication being used as well. If a rebellious movement were forming, I think this kind of direct, more private communication would naturally arise once a large group shared a unifying criticism of the government.

Social Contagion, Influence, and Behavior

“A 61-million-person Experiment in Social Influence and Political Mobilization” by Bond et al. reports on the effects that political mobilization messages on social media have on elections and personal expression.

There were 61 million people in the social message group, but only 600 thousand in each of the other groups? The methods in this research seem fairly sound, but this kind of imbalance makes me consider how easily other company-sponsored research could become biased. This should concern many people, especially when news media have repeatedly reported hastily on published research or drawn unrelated conclusions from it. I feel the worlds of academic research, news media, and corporations are becoming so interconnected that people find it difficult to tell them apart.

I see one issue with the design of the experiment. The informational message was still on the Facebook website, where nearly all information and actions available to the user are shareable with their Facebook friends. Many people would assume that any message or action presented to them could be shared with their friends. So participants might have wrongly assumed that this was another common social media sharing ploy, not realizing that the self-reported “I voted” would be kept confidential. I think this design choice actually made it harder than necessary for the authors to reach their conclusions.

I think that in most instances, people should know when they are being studied. There can be exceptions when disclosure would obviously compromise the integrity of the research data, but participants might be more honest and accurate in their self-reporting if they knew it was being researched. They might also be more mindful that their answers could lead to research and social changes that are unfounded and unjustified. This question should be investigated in meta-analyses of research methods, how participants perceive them, and how they change a person’s behavior. I understand that there is plenty of previous work on studies like this, but I think the results and conclusions deserve to be widespread enough that people outside academia understand them. Its importance makes me surprised that this isn’t ‘common knowledge’ yet. Then again, maybe I shouldn’t be surprised, since the scientific method is another incredibly important process that many people brush aside.

“Experimental Evidence of Massive-scale Emotional Contagion through Social Networks” by Kramer, Guillory, and Hancock

If “Posts were determined to be positive or negative if they contained at least one positive or negative word…”, then how can mixed emotions in social media posts be measured? Simplifying emotions into simple, quantifiable categories can be helpful in many cases, but it should be justified. Emotions are much more complex than this, even in infants, who have the most basic desires. Even a one-dimensional scale, instead of binary categorization, would better capture the range of emotion someone can feel.
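To make the objection concrete, here is a toy sketch of the coding rule being questioned, with tiny made-up word lists standing in for LIWC-style lexicons. A mixed post gets labeled both positive and negative under the binary rule, while a simple one-dimensional score at least registers the ambivalence.

```python
# Toy illustration of "at least one positive/negative word" coding.
# The word lists are illustrative stand-ins for real sentiment lexicons.
POSITIVE = {"happy", "love", "great"}
NEGATIVE = {"sad", "angry", "terrible"}

def binary_labels(post: str):
    """The rule being questioned: presence of any lexicon word."""
    words = set(post.lower().split())
    return {"positive": bool(words & POSITIVE),
            "negative": bool(words & NEGATIVE)}

def scalar_score(post: str):
    """One-dimensional alternative: (pos - neg) / total words."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

mixed = "so happy about the wedding but sad grandma missed it"
print(binary_labels(mixed))   # counted as BOTH positive and negative
print(scalar_score(mixed))    # 0.0 -- at least reflects the ambivalence
```

Even this one-dimensional score is crude, but it avoids the contradiction of a single post counting fully toward both emotional conditions at once.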

The researchers also find that viewers who were exposed to more emotional posts went on to make more posts and be more engaged with social media later. I find this alarming, since Facebook and other social platforms are financially motivated to keep users online and engaged as much as possible. It contradicts recent claims by Facebook and other social media outlets that they wish to defend against purposefully outrageous and inflammatory posts. I see this as a major issue in current politics and the tech industry.

Conversational Behavior: Loyalty & Betrayal

“Linguistic Harbingers of Betrayal: A Case Study on an Online Strategy Game” by Niculae et al. investigates how language can be used to predict the future interactions and choices of users within an online game.

I wonder whether these results can go beyond online games to in-person, everyday interactions between friends. I think there are many people who are relatively unconcerned with a game’s outcome, whether due to a non-competitive nature or a separate motivation for playing, and these players would act quite differently when placed in a situation that demands more commitment. I feel that the authors hoped this research would extend beyond the game table, but I do not see a strong connection, and I think a new study of real-world relationships would be needed to find linguistic patterns that can be safely generalized.

There is also the question of whether these findings can be extended to other games. Many games do not have a “Prisoner’s Dilemma” setup and therefore would not entice players to betray one another. Even for games that do have a possible win-loss scenario, matches are not always played between strangers or anonymized. An interesting demonstration of how trust evolves naturally from simple systems over long periods of time can be seen in “The Evolution of Trust”, an interactive graphic by Nicky Case. It shows how players who cooperate actually fare better in the long run than players who do not. Perhaps the same kinds of small simulations that “The Evolution of Trust” uses could be applied to a multitude of Diplomacy games to see if the results match up to human behavior. These simulations could even account for miscommunication or overall player strategies. Perhaps this would give an alternative angle on the original paper’s findings by suggesting the type of communication and strategy that benefits Diplomacy players the most, and by verifying whether real Diplomacy games and players can be effectively simulated in order to find better modes of communication (whether by building more trust or betraying more). A minimal sketch of such a simulation follows below.
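Here is what the smallest version of that might look like (my own sketch in the spirit of “The Evolution of Trust”, not Case’s code or the paper’s data): a round-robin iterated Prisoner’s Dilemma in which a conditional cooperator outscores an unconditional defector over many rounds, even though defection wins any single exchange.

```python
# Minimal round-robin iterated Prisoner's Dilemma. Payoffs and strategies
# are illustrative; the point is that conditional cooperation wins the
# tournament overall.
from itertools import combinations_with_replacement

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (-1, 3),
          ("D", "C"): (3, -1), ("D", "D"): (0, 0)}

def always_defect(opponent_moves): return "D"
def always_cooperate(opponent_moves): return "C"
def tit_for_tat(opponent_moves):
    return opponent_moves[-1] if opponent_moves else "C"

def play(p1, p2, rounds=50):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)          # each sees the opponent's history
        r1, r2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += r1; s2 += r2
    return s1, s2

players = {"defector": always_defect,
           "cooperator": always_cooperate,
           "tit_for_tat": tit_for_tat}
totals = dict.fromkeys(players, 0)
for (n1, f1), (n2, f2) in combinations_with_replacement(players.items(), 2):
    s1, s2 = play(f1, f2)
    if n1 == n2:
        totals[n1] += s1                 # count one copy's score vs. its twin
    else:
        totals[n1] += s1; totals[n2] += s2
print(totals)  # tit_for_tat ends up ahead of the pure defector
```

Adding a small chance of a move flipping at random would recreate Case’s noisier miscommunication scenarios, and richer strategies could then be tuned against real Diplomacy logs.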

I highly recommend taking the time to view “The Evolution of Trust”, as it is a great demonstration of some core facets of sociology and communication, and it is applicable to everyone, not just computer scientists.

Conversational Behavior: Politeness and Respect

“Language from Police Body Camera Footage Shows Racial Disparities in Officer Respect” by Voigt et al. investigates almost exactly what the title describes.

I am surprised the authors mention cases of conflict between communities and their police forces only in Midwest and East Coast states, but then go on to study the police force in Oakland, California. If the study was looking to influence public perception of the conflicts between police forces and their communities, then the best approach would seem to be studying the areas where the conflicts took place. I am also not sure what the authors would have concluded if they had not found supporting evidence for their argument. Would they claim that police forces in general do not treat black drivers differently? Or would they claim that the Oakland police are more respectful than their counterparts in other areas? Applying this same analysis to the cities mentioned as conflicted and comparing the results could answer these questions readily, and it would provide a more impactful conclusion by ruling out alternative explanations.

An extension of the study would be very helpful to see whether this racial disparity is persistent or changeable. If the same analysis were applied to data from before major news stories on police behavior, these ideas could be explored. Follow-up studies using this analysis could also show how police respond after a news event or change when adopting new tactics. High-profile police cases likely affect police behavior far from the incident itself, and this effect could be measured.

Polarization and Selective Exposure

“Exposure to Ideologically Diverse News and Opinion on Facebook” by Bakshy, Messing, and Adamic explores how social media sites like Facebook deliver political information to users and how it affects their political perspective.

This research shows the influence that social media can have in polarizing opinions on certain topics. Since social media algorithms are tuned to recommend content that the user ‘likes’, most content will match the user’s political leaning, polarizing the political landscape even further.

Perhaps this could be remedied by changing the name ‘like’ to a different action word with less problematic connotations. A user study could both replicate this research and test some alternatives and their effects. A breakthrough here would allow social media sites to present a more transparent picture of what the action means. For instance, if users ‘nod’ at articles instead of ‘liking’ them, users might read this as ‘they endorse this content as correct’ instead of the current ‘they like the content, want more of it, and want their friends to know they like it’.

This is also a good chance for social media sites to adjust their algorithms to take political leanings into account, and to continually tune a recommendation engine that is as unbiased as possible. That would have a great impact on the political exposure and perspectives social media users experience, and it could lead to more moderate views being expressed and more nuanced arguments being used as support.
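One way such tuning could look (purely my speculation, with made-up items and scores) is a greedy re-ranking that trades off relevance against keeping the feed’s average ideological alignment near neutral.

```python
# Speculative sketch of a "de-biased" feed: greedily pick the next story
# by relevance minus a penalty for pushing the feed's average ideological
# alignment away from neutral. All items and scores are hypothetical.
candidates = [
    # (story id, relevance to user, alignment: -1 left ... +1 right)
    ("a", 0.9, +0.8), ("b", 0.8, +0.6), ("c", 0.7, -0.7),
    ("d", 0.6, +0.1), ("e", 0.5, -0.4),
]

def rerank(items, k=3, bias_weight=0.5):
    feed, total_alignment = [], 0.0
    pool = list(items)
    while pool and len(feed) < k:
        def score(item):
            _, relevance, alignment = item
            new_avg = (total_alignment + alignment) / (len(feed) + 1)
            return relevance - bias_weight * abs(new_avg)
        best = max(pool, key=score)
        pool.remove(best)
        feed.append(best[0])
        total_alignment += best[2]
    return feed

print(rerank(candidates))  # mixes leanings instead of pure top-relevance
```

The bias_weight knob makes the relevance-versus-balance tradeoff explicit and auditable, which is exactly the kind of transparency such an adjustment would need.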

In addition, I wonder if there is a measurable difference in users’ accuracy when self-reporting membership in a minority political party in countries where dissenting political opinion is actively repressed. Could this be used to determine whether a country represses opinion in a way that is implicitly known among the population, rather than through explicit and visible political threats?

“Echo Chambers Online?: Politically Motivated Selective Exposure Among Internet News Users” by Garrett investigates whether users prefer support for their own political stance over counterarguments to an opposing one.

I believe this result might be caused by users having a stronger background in the information their political party promotes, and thus a better understanding of the supportive articles they appreciate. Participants in the study could find it difficult to parse an article that counters the view of a political opponent, since they are less familiar with that viewpoint.

Could a user who only selects articles that support their viewpoint be considered to be self-censoring? Forcing users out of this behavior would likely qualify as a violation of freedom of speech and freedom of the press. Perhaps it is more viable to incentivize users to read articles that stage a conversation between opposing political perspectives, or that come from a less biased news source.
