Brave

I’m at the end of my second semester of teaching. I like the way class has gone this fall. No major snafus, and the students are clearly interested and seem happy. I’ve seen their work improve. They speak up in class.

I’m at the end of my third semester as a PhD student. I have seen a lot of school throughout my life. Before coming to VT I was a student and employee at a community college, a small liberal arts college and a big university in New York City. In K-12 I was in public schools in several districts, was homeschooled for a period of time, and went to an experimental, arts-focused private school for a few years.

At that school, we called teachers by their first names and nobody got grades. When I went to college for the first time, I had no idea what a GPA was.

Now my life is devoted to knowledge — the production, pursuit and dissemination of it. My students call me by my first name, but I haven’t had to explain to any of them what a GPA is. I think they have a better idea of what “knowledge” is than I do.

A lot of our readings in GEDI have emphasized a progressive, future-focused approach to teaching. Generally, they reflect a critical perspective on old-fashioned teaching methods, including any view of the academy as an intellectual silo — an ivory tower, protected from the exigencies of the real world. Hierarchies should be diminished; transparency and open-endedness should be encouraged. Although I’ve been critical of a lot of our readings about tech and education, in a wider sense, I support the vision of this course. My own pedagogical practice affirms these values, and it always will.

I have looked to this final week’s readings for a message I can carry forward, one that speaks to a state of affairs perhaps none of us could have predicted — the political climate that has evolved over the last year or so, which is impacting education profoundly. Seth Godin’s discourse on bravery is applicable here.

He writes:

“Can risk-taking be taught? Of course it can. It gets taught by mentors, by parents, by great music teachers, and by life. Why isn’t it being taught every day at that place we send our kids to?”

That last question is mostly rhetorical, but I’m going to answer it anyway.

There’s a difference between school and everything he lists above. A mentor, a parent, a music teacher, and life itself are all conceptualized as singular entities. Each establishes its own rules, and young people encounter those rules without any sense that they are being negotiated and undermined by competing ones. School rules reflect the broad, multilateral entity that is a school, which includes the multiple functions schools serve. School rules are subject to checks and balances by many different parties. Any bravery that gets condoned within a school setting cannot challenge its most fundamental premises, and thus does not reflect the quality of bravery it takes to truly make a difference out there in society.

School rules and standards change from class to class, year to year, department to department. A school is a collective. When you’re in school, the rules that govern you — whether explicit, as in the rules listed in a syllabus or university handbook, or implicit, as in social etiquette norms — are vulnerable to change.

In this sense, schools are more like societies than they are like a mom or dad or the person who teaches you piano after school and doesn’t have to answer to the rules of any higher administration, because they’re a private contractor. But in a more important sense, schools are different from societies. Even though both are governed by multiple purposes, requirements and contracts, schools generally have a more cohesive and comprehensible internal logic than anything which goes by the name “society.”

Schools can’t, ultimately, reward students who are brave in the sense of transgressing the very rules that define them. That’s paradoxical. This is why it’s hard (if not impossible) for schools to teach true, authentic bravery, bravery that isn’t just symbolic (like the “bravery” of playing devil’s advocate in debate club).

Heroic people in society can go to jail. Schools can’t support the development of that much audacity.

Or can they?

I think schools can teach bravery if they understand students first as citizens, as actors in society, before they see them as students. Only then will they support the character development necessary to be audacious, to meet the world where it is today — crazy, paradoxical, unjust, and — to get real here — extremely scary.

Seth Godin tells us that “school was invented to control students and give power to the state.” He gives us a sort of People’s History of the Current State of Schooling and offers ways that educators can change this.

Hot on the heels of a tax bill that, if it passes, will seriously damage the state of higher education — we must change this.

If the classroom has a natural kinship with the real world, we must learn how to instill bravery in our students. Not so they stand up here — not so that the terminal point of their brave intentions manifests as disrespect to fellow students or instructors while they’re still ensconced in the quasi-realism of undergrad life — but because the unique function of schools is to train students in the practices of being good citizens. That sounds rather old-fashioned, but “good citizen” is a really loose term. It recognizes that no matter what you do, you act to make and change society. It recognizes that adulthood is defined by service to others. Ut prosim; selflessness.

Schools train children to become adults (in this definition of adulthood).

I’m going to end with a question for everybody. How can we preserve what is unique about schools and teaching — what has made it so that we formalize education, rather than dissolving education into the flows and practices of “real” life — while reframing our teaching practices to cultivate brave citizens?

Because now more than ever, that’s what we need.


classroom tech and its discontents. (why am i always thinking about this?)

As a Master’s student, I read Nicholas Carr’s book The Shallows, and it resonated with me deeply. Skimming the GEDI syllabus a few weeks ago, I was intrigued to see his name pop up in my coursework once again. I didn’t realize then that our course’s emphasis on technology would take a self-reflexive turn. This is encouraging. I think we could go further in trying to reconcile the realities of our hyper-networked lives with the fact that teaching and learning demand deep presence. Perhaps we will in class. This is relevant to all of us as instructors and students alike, and I’m afraid that a lot of the articles here lend themselves to simplistic analyses and easy takeaways. The conversation can’t end here.

I have a unique advantage with respect to tech use in the classroom. Critiquing information technology from a humanistic perspective is what I do. Critically examining “digital culture” is part of the courses I teach, and as a doctoral student, it is pretty much my raison d’être right now. (In fact I am pretty sure my friends have been sick of me talking about the awfulness of social media since I started griping… when I was in high school). The syllabus from which I currently teach explicitly bans the use of Internet-connected devices in the classroom — including but not limited to laptops, smartphones and tablets — except in cases of disability (which is super important; I’ll come back to that). I have yet to come up with a more nuanced or flexible approach, in part because I simply can’t see any other way. To be clear, I don’t think Google is “making us stupid” (and I don’t think clickbait headlines foster any sort of useful conversation). But I absolutely believe that our engagement with information technology reflects addictive patterns of behavior, and that Internet use may be rewiring our brains. As with any addiction, it then becomes harder to speak of “choice” with regard to our tech use. This compulsive behavior may not be a choice, so explicit technology bans may be the best option.

But explicit technology bans are a major injustice to people with disabilities and students who are not completely fluent in the course’s language. Darren Rosenblum’s article addresses “medical exemptions,” but I fear the way he phrases it glosses over the issue. Many people have disabilities that affect their reading and writing and that can be mitigated by laptops and similar devices. These include vision impairment, dyslexia, dyscalculia, and motor impairment. Moreover, students who are not fluent in the language of the course may rely on translation software to help them in class. Making special exceptions for these students works, but it also forces them to “out” themselves in front of their peers. As an instructor, I’m uncomfortable with this. I’d be interested to hear others’ thoughts on it.

I found Clive Thompson’s article extremely reductive. Honestly, in a lot of these readings, the word “technology” did a lot of heavy lifting. It’s easy to appear to have an innovative idea about the use of technology when most of what you are doing is distinguishing between various forms and uses of tech. Suffice it to say that Thompson provided a very narrow account of human intelligence and our “cognitive past and future” (his article didn’t engage any major critiques of human cognition and the intellectual milieu after the Enlightenment, the invention of the printing press, and so on). Many techno-optimistic writings like this advance their arguments on cherry-picked sources, ranging from techno-hype that seems rather smart at first blush but is mostly clever rhetoric, to neuroscience, to arguments for the benefits of technology based mostly on how it optimizes our productivity. It’s just not that substantial.

In terms of pedagogy, however, that’s neither here nor there. Since we also have a focus on diversity this week, and ableism is a huge concern for engagement with diversity, I wonder if we can synthesize these two angles to develop a non-techno-optimist classroom tech policy that respects differences in ability.

My voice? My VOICE?

Before I started teaching, a friend shared with me some hard-won wisdom from his time as a student-slash-instructor, that peculiar situation in which many of the GEDI order now find themselves. “You learn a lot about yourself by teaching,” he said. This didn’t make me excited. I already know a lot about myself — I kind of wish I knew less, actually. And I definitely don’t want undergraduates to serve as a mirror to any self-knowledge of which I myself am unaware. Like Sarah Deel, I’ve had age and gender-based concerns about emphasizing too much of my “real” self in the classroom. Young female instructors already have to work harder than their male counterparts to gain respect, and it seemed to me like “learning about myself” would only come through an over-emphasis on me in the classroom.

At the end of my first semester of teaching, I was emotionally and intellectually depleted. Instructors field everything from frequent, unnecessary questions about assignments (how many times can you say: i t ‘ s  o n   t h e   s y l l a b u s) to potentially grave psychological issues among their students. This is all, of course, aside from the work of conveying the content of your course. It seemed to me then that people who take up teaching for the “soft” payoffs — the gratification of doing such meaningful work; a sense of connection to the rising generation — are in it for the wrong reasons. Teaching, I thought, should be about a love of the subject. Passion for knowledge, not people, is what makes a good teacher. Maybe it’s even okay to see it as “just” financial security while pursuing your own research.

Over time, my perspectives on this have become more nuanced. College kids have a sixth sense for BS — so the appearance of naturalness in the classroom is important, except you can’t be too natural if you yourself are obviously still in your twenties and of the gender that always has to fight to be taken seriously in intellectual professions. Authenticity as a measure of pedagogical success seems unfair when “realness” can discredit you. What a mess!

For these reasons, Dr. Fowler’s paper on the authentic teaching self is a bit of a godsend. I’ve sometimes found myself walking into the classroom while the mental tape in my head continually reminds me that what’s about to happen is a performance. Dr. Fowler’s focus on the similarities between teaching and acting — and especially on the physical component of the teaching-performance — really reinforces this. Good acting always includes some reality: actors are instructed to “think the thought,” to try to genuinely feel the emotion of a scene and get caught up in the story. This is why getting in and out of character is a practice, just like memorizing lines and stage directions. Likewise, I think there are shades and degrees of authenticity that you can exploit to bolster your teaching performance. I do care about my students, and I really love what I teach. Now I think of this positive regard as its own self-replenishing source of energy that can be channeled toward every element of teaching (including administration and grading).

Authenticity (or at the very least, its appearance) seems key to establishing yourself as a Yearner, too. Seymour Papert didn’t really investigate experience- and demographic-based obstacles to the kind of paradigm shift he’s interested in — perhaps he does that later in the book. So I think Dr. Fowler’s tools are a little more useful to me than Papert’s theory, although I certainly read the former as a vehicle for the latter. In truth, I’m still not totally at ease as an instructor. My teaching “voice” is still relatively untrained. Only with more time will I be able to gracefully navigate the space between absolute transparency and an overly stiff professional mask, both of which are hardly ideal as teaching personas. Perhaps the self that I’ll learn “a lot” about will be composed of those parts I feel comfortable showing in front of students. Those elements of ourselves that we draw from when we teach have got to be some of the most timeless, the most meaningful.

***

For those not in GEDI, the Virginia Tech graduate pedagogy course, here are the writings I’m responding to in this post:

http://cte.virginia.edu/wp-content/uploads/2014/02/Deel.pdf

http://amynelson.net/grad5114F15/wp-content/uploads/2015/08/The-Authentic-Teaching-Self-and-Communication-Skills.pdf (I really like this one)

https://ia801002.us.archive.org/28/items/pdfy-WeLwkqLL6w830OqF/Papert%20Seymour-The%20Children’s%20Machine.pdf

The Buddhist and The Hot Dog Vendor

One of my favorite songs by one of my favorite bands has this lyric as a refrain:

“Change is the thing that is what we do, change is the change that’s changing you…”

It’s surreal, not really logical, but that’s why I like it — it emphasizes the disorienting quality of change. Theorizing the causes and nature of change has been a big project for contemporary philosophers, and there’s no reason why educators shouldn’t incorporate some deep reflection on change as part of their teaching. I approached the Douglas Thomas and John Seely Brown reading from this perspective.

Okay, some of their musings on contemporary change struck me as glib, and my first reaction was to fall back on the same basic critique I’ve had with many of our readings: in their embrace of the new, they fail to acknowledge the unique character of older teaching methods that can’t simply be updated and made more accessible through the, uhhhh, magic of technology. (Although I hated it at the time, I’m actually glad my sophomore year Medieval Literature professor made me memorize the opening to the Canterbury Tales in the original Middle English). But the project of rendering change visible can not only equip students to deal with the world beyond the classroom (which is, for better or worse, in a state of constantly accelerating flux); it can also offer a good philosophical message about the status of knowledge: facts are constructed. That doesn’t mean they can’t be true, but they are the result of methods and inquiry which are themselves a product of human innovation. Bodies of knowledge change, presumptions are overhauled — and if you’ve read Thomas Kuhn, you know that sometimes entire scientific paradigms shift so dramatically that we can speak of qualitative breaks in our shared understanding of the world.

Focusing learning programs on this notion of change and inherent instability could (and maybe should) represent a break in pedagogy, one where students come to a deep awareness of their own agency in producing knowledge. Wikipedia is a great example of transparent knowledge production, and using Wikipedia edit records as a way to emphasize the constantly-changing, actively-generated nature of knowledge is an interesting idea. This awareness shouldn’t be limited to philosophy students with a focus on epistemology, the study of knowledge itself. The tenuous status of knowledge and informational authority is all too present in the real world right now, and I suspect that this ambiguity is only on the rise. (Unfortunately, I’m thinking of fake news).
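
To make that concrete: the public MediaWiki API exposes every page’s revision history, so a few lines of code can put the edit record in front of a class. Here is a minimal sketch in Python (the endpoint and parameters are the standard MediaWiki revisions query; the article title and limit are placeholder examples):

```python
# Minimal sketch: pull a Wikipedia page's recent edit history via the public
# MediaWiki API, so students can watch knowledge being actively revised.
# The title and limit below are arbitrary examples.
import requests

def recent_revisions(title, limit=10):
    """Return the latest revisions (timestamp, user, edit summary) of a page."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "timestamp|user|comment",
            "rvlimit": limit,
            "format": "json",
        },
        headers={"User-Agent": "classroom-demo/0.1"},  # Wikimedia asks clients to identify themselves
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    (page,) = pages.values()  # we queried a single title, so there is one page entry
    return page.get("revisions", [])

for rev in recent_revisions("Epistemology"):
    print(rev["timestamp"], rev["user"], "|", rev.get("comment", ""))
```

Each entry carries a timestamp, a username and an edit summary: precisely the trail of negotiation and correction I’d want students to notice.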

Meanwhile, I have no problems at all with Ellen J. Langer’s article — except that, maybe, its emphasis on presence and focus seems to challenge a lot of the technology-happy work we’ve done so far! I insist that my students put away their internet-connected devices at the beginning of class, not because I inherently dislike smartphones and laptops (that’s another issue), but because I want to create the conditions for mindfulness. Her observations on mindfulness and adaptability to change really hit home for me. Doing coursework in an interdisciplinary PhD program means constantly adapting not only to new content, but to new ways of thinking about things. I didn’t major in any of the departments that I take ASPECT courses in, so I often find myself sitting in history, political science or cultural studies classes, attempting to grasp the methodological / epistemological assumptions of historians, political scientists, and so on. (Stuff that some people picked up as undergrads and master’s students, to be sure). The only way I’ve accomplished this while maintaining a sense of clarity and consistency is by paying very close attention to context. I operate on the idea that I’m coming into new disciplines not just to learn the explicit content, but to grok the assumptions that professors and long-term students of a discipline take for granted.

This has been completely necessary whenever I’ve encountered quantitative methods… I’ve learned that those who see themselves as math and numbers people somehow appear to intuitively grasp contextual frameworks in ways that I don’t. That’s a bit unfair, but math becomes much easier for me when I try to explain those frameworks to myself before learning a new equation or concept. (It means I spend less time stuck on “why” we use a certain equation, who came up with these methods anyway, and how, and so on…). Hopefully that makes sense. The project of becoming aware of context is so, so important when trying to make sense of content in environments subject to rapid change. Mindfulness is a key component of this.

By the way, all this thinking about change and mindfulness reminds me of a dumb joke about a Buddhist monk and a hot dog vendor. It starts with a cheesy one-liner and then gets even worse:

A Buddhist goes up to a hot dog vendor and says “make me one with everything.” 

When he pays and asks for his change, the vendor replies: “change comes from within.”

And I’ll end this post here!

***

For anyone reading this not in GEDI class, here is the first article I’m responding to: http://www.newcultureoflearning.com/newcultureoflearning.pdf, pp. 39-49. The second is only available through the Virginia Tech network.

Infrastructures, Mental and Digital

In A New Culture of Learning: Cultivating the Imagination for A World of Constant Change, Douglas Thomas and John Seely Brown write: “the relentless pace of change that is responsible for our disequilibrium is also our greatest hope.” They move on from there to embrace fundamentally the same mindset that underpinned last week’s readings. Even though I want to engage this material with as much sincerity as my teaching practice deserves, reading it was a bit frustrating.

I say that after a lot of consideration… really, I don’t come at this from nowhere. My Master’s capstone project was on an organization devoted to independent video gaming; I’ve designed and coded video games; I’ve shown my students TED talks on the power of gaming; and pretty much all of my good friends are internet / game nerds. The phenomenon of gaming — and, more broadly, of play and creativity in a networked environment — is cultivated by a technologizing society. Manifestoes like this one tend to position themselves as coming from outside the mainstream. Often they begin with a reference to “traditional” education or societal conventions, the framework that would mark texts like this as ideological outliers — but this is not unconventional by any means.

A more radical move is to point out that the unscrutinized acceptance of these technologies necessarily precludes critical discourse about them. Facebook (which Thomas and Brown reference right at the beginning, lumping it in with the very different technology that is Wikipedia) has become a significant part of many people’s lives in the last decade. The sheer magnitude of its role in society means it needs as much constructive skepticism as social sciences and humanities thinkers have accorded to phenomena whose impact unfolded over centuries. But the newness of digital phenomena does not mean that their embrace is in any way unusual. That understanding of them, which is sort of intuitive, is wrong — it just conveniently fits the authors’ ethos. The truth is that in the twenty-first century, with the rate of technological change quickly accelerating, accepting what is new is much easier than thinking critically about it. That’s a can of worms that education of all sorts is unprepared to deal with, especially as “digital education” receives a lot of outside attention and funding. “Digital humanities” is more attractive to venture capital than the non-digital humanities, but I digress…

Right…

I don’t want to keep on this angle for too long. Really this is a recapitulation of last week’s blog. My soapbox isn’t strong enough to hold me up that long.

My favorite article from this week’s batch was Robert Talbert’s “Four Things Lecture Is Good For.” As a humanities instructor with under forty students in my class, I have the luxury of being able to deliver lectures where I encourage students to raise hands, ask questions, and (if they’re excited enough) even interrupt me as I speak. Nobody has to wait until the end of a twenty-minute diatribe about Virginia Woolf to make a remark about a very specific bit of information mentioned at the beginning. That sort of thing bothered me a lot in high school and during my undergrad years.

Lectures can be awfully boring and pointless, and I think Talbert hits the nail on the head when he notes that they’re bad at transferring information. If I want my students to absorb facts about a writer — and, although I tend to de-emphasize informational learning in general, sometimes I want them to know where Woolf lived when she wrote “A Room of One’s Own” — I’ll assign them to read certain pages and refer to slides I upload to Canvas. Lectures should contextualize and make lessons “come alive” (if I can use a cheesy cliché). The speaking style of an instructor can convey a sense of purpose and excitement around the course material that’s impossible to give through homework assignments alone.

Having said that, what he meant by “mental models” and “internal cognitive frameworks” was a bit confusing to me. I’ll just assume that my hunches about them are accurate: an “internal cognitive framework” sounds like one’s own way of making sense of information. I agree that it’s important to share this with students. In fact I wish my own instructors had been more transparent about how they as students had made sense of the same content they then went on to teach. The thing is — this requires a fair amount of self-knowledge and self-reflection. Most people (including professors) can’t articulate how they learn, at least not in a way that can easily be modeled by others.

Some of my own best learning has come from talking with fellow students. I’ll never forget when my best friend in college, a philosophy major, described to me how she made sense of some of the most notoriously abstruse writers (Derrida, Heidegger and Kant — oh my!). That advice has stayed with me for years. I think about it as I work on the earliest stages of a philosophical dissertation project today.

And — one of my favorite things about programmer culture (and hacker culture in particular) is the bootstrap, DIY approach to learning code. Despite my critique, something from techno-culture I can usually support is the moxie it takes to learn a new programming language. The fact that there are still few established conventions for teaching programming, at least outside academia (most of what I’ve learned about code is self- or friend-taught), means that learning how to learn is always part of the deal. Thus you can’t help but explicate your “internal cognitive frameworks.” I’ve come up with my own way of practicing coding skills that would be pretty easy to teach someone else, because I know exactly how it works.

Actually, I’d be interested to hear what folks with a more traditional academic background in computer science think about that.

***

Here are the readings I’m responding to:

Douglas Thomas and John Seely Brown, A New Culture of Learning (2011), pp. 17-38 (“Arc of Life Learning” and “A Tale of Two Cultures”)

Robert Talbert, “Four Things Lecture is Good For”

Mark C. Carnes, “Setting Students’ Minds on Fire”

Skepticism

I don’t want to begin my GEDI blogging journey on a negative note, but I couldn’t help but be critical of this week’s readings. In general, they smacked of techno-utopianism — hype that disguises as much truth about the networked world as it reveals. Here I have to disclose some bias — my PhD research is on the philosophical implications of technology. In particular, I spend a lot of time thinking about why end-users have so uncritically embraced digital networks and social norms that multiply and are amplified in a networked context. As an instructor, my research interests are echoed in the way I treat the use of tech in my classroom. As a student of pedagogy, it’s only honest to connect my views on these topics to my (admittedly very-much-in-development) teaching practice.

Despite my overarching critique, however, there’s interesting stuff to be explored here. All of this week’s thinkers have a sincere interest in leveraging innovation toward the best end possible. That’s great. Unfortunately, these pieces all demonstrated a very specific ideology: the ethos of Web 2.0. They would have been more interesting to me if they had revealed their bias, or at least the social trajectory by which this bias has come to support an ever more popular perspective on networked learning. Here are a few examples:

In his article, Gardner Campbell emphasizes the need for students to understand the Internet from a more practical perspective. I completely agree with this: at this point in history, many of us live cyborg lives — part machine, part human — and it helps to know a thing or two about how the web actually works. But “how the web actually works” includes more than just technical expertise on URLs, packet routing, HTTP and data infrastructures. There is an economic ideology at the core of Web 2.0. The constant clicking and “sharing” facilitates the business model of the Internet, which makes a profit from all of this online activity in ways that are generally unknown to users. The networked self, including the networked student, generates a profit for the platforms they use. Although Seth Godin states that “blogging is free,” it is only free in the sense that we do not have to pay directly for certain blogging services. In these cases, user data is extracted that is highly valuable to all sorts of entities, including surveillants and social media sites fine-tuning algorithms to serve more profitable ads to their users. The networked student becomes a source of income. No explanation of the Internet should fail to include observations like these, which are not political interpretations — they’re just facts.
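
As a side note, the technical half of that literacy is easy to demo in class. Here is a minimal sketch, using only Python’s standard library, of fetching a page with a raw HTTP/1.1 request over a TCP socket (example.com is a placeholder host reserved for exactly this kind of documentation):

```python
# Fetch a page "by hand" with a raw TCP socket: no browser, no HTTP library.
# example.com is a reserved demonstration domain that answers on port 80.
import socket

host = "example.com"
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):  # read until the server closes the connection
        response += chunk

# The status line and headers are human-readable text; the body follows a blank line.
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("ascii", errors="replace"))
```

My point above still stands, though: every request like this one is also the unit of activity that platforms measure and monetize.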

But my problems with this ideology aren’t entirely related to economic exploitation. They also have to do with the spirit of what it means to learn and produce intellectual and creative work.

Campbell tells us:

By forcing students to write ‘publicly’, their writing rapidly improves. 

This can be true in many cases. But it also reinforces the idea that the only meaningful behavior is that which we can display and promote. In research she did with teenage students, social critic Sarah Leonard found that a staggering number of millennial writers aspire to become marketers. Marketing is a safe job option for budding writers, at least in comparison to journalism, editorial work or (God help us) fiction writing. Public writing can be extraordinarily helpful insofar as it forces writers to emphasize clarity and conceptual legibility. But we also need to affirm the value of creative work for its own sake, not for its profit margin or the number of hits it garners (two metrics now deeply intertwined).

This issue is separate from the related concern addressed by Tim Hitchcock. In his piece, Hitchcock says “a lot of early career scholars, in particular, worry that exposing their research too early, in too public a manner, will either open them to ridicule, or allow someone else to ‘steal’ their ideas.” Those are fair concerns and should be left to the consideration of each individual scholar, particularly based on their field (although Hitchcock perfunctorily dismisses them with a vague reference to his own life). He does not consider the deeper process by which scholars come to feel ready and comfortable sharing their work. Incessant self-publication and self-publicization may lead to more Twitter followers and a “constant conversation,” but often one needs to work in isolation to hit on a truly singular finding. The short-term reward of “growing your network” and being marginally recognized with Internet fame is a poor trade for what might be achieved when time spent on social media is rerouted to one’s own paper.

I mention this, of course, in the context of graduate work. But I think we also need to share this viewpoint with undergraduates, because it’s increasingly dismissed as old-hat. Demonstrating that it’s possible to be both tech-savvy and critical of the Internet is sometimes interesting to students, I guess because they don’t get that perspective very often. In the Digital Culture unit of the course I teach, I try to offer exactly that.

Meanwhile, the “Working Openly On The Web Manifesto” has its roots in free and open software culture, which (again) is not inherently bad, but may have problematic implications for scholarship. The notion that work is primarily meaningful for its sharing value seems like a quick route to short-circuiting educational processes that take gestation, painstaking attention to detail, and long stretches of time to come to fruition. Academic work is not the same as a crowdsourced coding project. That is, until an uncritical embrace of the Internet makes us lose sight of the reasons why scholarly practice is distinguished from other types of work. (Also known as: why, for the most part, grad students don’t publish drafts of their dissertations until they’ve been through extensive review).

Okay, I think that’s all I have. Despite my polemic, my mind is still open, and I wouldn’t mind being challenged on this. (Hey, it’s good for my research).

***

For those who aren’t part of GEDI class, here are the articles I’m responding to:

Gardner Campbell, “Networked Learning as Experiential Learning” (2016)

Doug Belshaw “Working Openly On the Web” (2014)

Tim Hitchcock “Twitter and Blogs are Not Just Add-ons To Academic Research” (2014)

Seth Godin and Tom Peters on Blogging (2009)
