Four Things I Learned from Working with Students… but not with Grades
I have an odd job, if you can call it that; some days, “lifestyle” seems like a better descriptor. For my assistantship, I am a live-in employee of Housing and Residence Life at Virginia Tech. In that role, I supervise RAs, serve as a conduct officer, and handle daily operations of two all-male residence halls, which, as you might expect, brings a new adventure every week. The work is hard to predict and takes a great deal of time, but I love it.
My path to this line of work felt reflective of Dan Pink’s TED Talk on motivation. My undergraduate study was in chemical engineering, where I was driven purely by extrinsic motivation, primarily grades, to painfully grind through a curriculum that did not feel meaningful to me. However, my “side-project” of serving as a resident advisor (RA) provided me the sense of autonomy, mastery, and purpose mentioned by Dan Pink. As such, I ended up putting a lot of time and effort into that work, which didn’t feel like work, and decided to pursue work in student affairs, beginning my studies in higher education.
My work directly with students, both the RAs I supervise and the residents of the buildings I oversee, has provided insight into the struggles students face academically. Aside from one semester when I served as a TA for a course all RAs must take, my connections to the academic side of campus are few and far between. However, I often talk to students about their coursework, and I make a point of talking about academics with my RAs. The subject matter of those conversations is often totally foreign to me, but they have been productive. I certainly can’t tutor these students, but we still learn together as I learn about their struggles and challenge them to think about learning differently. Here are a few things I learned from my conversations:
Intrinsic motivation produces the best work.
With the student population of Virginia Tech, I work with a lot of engineering students and hear a lot of stories that bring me back to my own undergraduate education. Fortunately, I see a marked difference between myself as an undergrad and many of those students. Recently, I was talking with one of my RAs about a project he completed earlier in his engineering career, in an intro class. All groups in the class were tasked with building a machine that kept a ping-pong ball suspended in the air. He described his group’s process of buying electrical hardware that could integrate with code they had written, with the code controlling how high above the fan the ping-pong ball would be suspended. (To be clear, I am not doing this process justice in summarizing it.) He then said that other groups in the class seemed to do the bare minimum with Legos and a fan, probably something I would have done back in the day.
I imagine that both groups met the criteria for an A on that project, though my RA added that his professor specifically thanked his group for leaning into a creative process and going above and beyond. There might have been a difference in ability between those two groups, but the primary difference was motivation. My RA wanted to be creative and engineer something, not simply get a grade that could be achieved with a much lower level of effort. I see the same thing in my RAs within their role in the hall. Those who perform best do so because they are passionate about serving, not for a positive performance evaluation or from fear of being fired.
Our current method of grading does not expose students to open-ended problems.
I once had a professor who often repeated, “When you cannot measure what you value, you come to value what you can measure.” While he made this statement in the context of assessment for research in higher education, I found it holds true for our assessment in grading. Learning is not easily measurable; it’s difficult and time-consuming to make judgments about how much a person has learned or developed. It’s based on qualitative evidence, which can be interpreted differently from person to person. However, one can measure how many facts a person has retained or how well a project or composition adheres to a pre-defined rubric. That makes grading a lot easier, but then education becomes a series of boxes to be checked rather than true learning. Perhaps students can be extrinsically motivated to tackle these tasks, but it doesn’t seem to prepare them for open-ended problems.
I never doubt the intelligence of the students I work with. Since they’ve made it into Virginia Tech, I’m confident in their ability to learn, but I believe our use of “measurable learning” does not equip them to tackle ill-structured problems. Fortunately, the RA role is full of these sorts of problems. I often get questions from staff members like “How do I get more guys involved on my hall?” or “Is it okay to be friends with my residents?” I’ve found that some of my RAs expect to get a list of steps or a quick yes-or-no answer, and I’m occasionally met with frustration when I launch the conversation by admitting I don’t have a clear-cut answer. While I feel fortunate to facilitate conversations that help them generate their own solutions, I can only hope that other students are finding these ill-structured problems elsewhere, so they are prepared to face them beyond their time in college, where rubrics won’t define success.
“Exceeds Expectations” doesn’t mean much.
The one time in my work that I use quantitative measures to “grade” students is when it comes time for RA performance evaluations. The process by which RAs are evaluated includes a self-evaluation, an evaluation from a supervisor, and a discussion comparing those two evaluations to come to a consensus. I’ve quickly learned that a key part of this process is explaining the evaluation scale beforehand. The scale is descriptive and is made up of the following descriptors: exceeds expectations, strong performance, meets expectations, inconsistently meets expectations, and needs improvement. I’ve found that it is essential to explain that “meets expectations” will likely be the most common grade on any positive evaluation. If I don’t, RAs are inclined to give themselves “exceeds expectations,” or at least “strong performance,” in every category unless we’ve discussed performance issues in a particular area.
While this tendency made for a few frustrating evaluations early on in my role, the broader implications seem more problematic. It seems that with the way we assess, an A denotes adequacy. If there are no problems with an assignment and it matches the rubric criteria, it is deserving of an A. This leaves little room to formally note excellence and little reason for students to reach for it. How can we challenge students and get more out of them if the bar is set low?
Grades are not for the learner but for the outsider, and even they shouldn’t be using them.
In my work with students, I find myself reiterating that learning is about far more than a grade. In my academic work, I try to produce quality independent of a grade. However, at times as a professional, I find myself overly reliant on measures that I know to be problematic and imperfect. The process of hiring RAs is logistically difficult. In a typical year, a couple hundred applicants compete for what ends up being about one hundred spots. While the process includes a written application, individual interviews, and group interviews, it all eventually boils down to one number. Though I try my best to select RAs based on interactions I’ve had with students, recommendations from those who have worked with applicants, and the qualitative measures that accompany the scales, I still must prioritize a list of a hundred names to fill three positions, so numbers end up playing more of a role than I’d like them to. It’s just not possible for me to read every application or every note written by an interviewer.
I think if we are to ever step away from grades in working with students, we need to think more critically about how we compare and evaluate in daily life. While quantitative measures seem quick and easy, they are not always effective in dealing with complexity, particularly when people are involved. We cannot capture a person’s abilities in a single number, and a single number cannot provide someone with clear opportunities for improvement. It takes more time to engage in the complexity that a single measure attempts to evade, but it must be done if we truly wish to invest in student learning.