Episode 2: Advancing Assessment with AI in Academia [ChatGPT created our title]
Dear Listener/Reader: We recorded this episode before the Guiding Principles document, created by Institutional Lead for A.I. Strategy, Christian Blouin, was made available to the Dal community. Also, the CLT has a new quick-start resource on A.I. (that includes an editable PowerPoint introducing A.I. to students) that you might like to check out. Enjoy the show!
In the Kates Discuss podcast, we discuss various topics surrounding pedagogical practice in higher education. Today we are discussing generative artificial intelligence (like ChatGPT) and its implications for assessment and assessment design.
Timestamps:
02:06: Introduction
06:59: What is a large language model?
08:28: What excites and concerns us
13:47: Ethical issues
19:43: Challenges for educators
27:29: The big problem
33:45: The relationship between the writing and the learning
35:15: Ameliorating student fear of failing
38:15: An assignment idea
42:00: Reducing teaching labour
43:21: Ideas for altering assessments
51:15: Conclusion
Introduction
Kate T.: Hello and welcome to episode two of the Kates Discuss podcast. I'm Kate Thompson.
Kate C.: And I'm Kate Crane.
Kate T.: We are educational developers at the Centre for Learning and Teaching at Dalhousie University, which is located in Mi’kma’ki, the ancestral and unceded territory of the Mi'kmaq. We are all treaty people. In this podcast, we discuss various topics surrounding pedagogical practice in higher education. Today we are discussing advancing assessment with AI in academia. Before we get started, our producer Jake will take a moment to provide some timestamps that might be of interest to you. This will serve as an outline of the discussion and might also help you skip around to sections of particular interest to you. Take it away, Jake.
[Jake provides timestamps]
Kate T.: So, I don't know about you, Kate Crane, but my work life has been inundated with talk of A.I. lately. And by A.I., of course, I mean artificial intelligence. In particular, we’ve heard a lot about ChatGPT-3, which is a relatively new machine learning algorithm that is really great at generating texts, which is, I think, something fairly novel for A.I., and that's caused quite a stir.
Kate C.: Quite a stir.
Kate T.: There are a lot of different types of AI that have been embedded in our lives for quite some time. Grammarly, predictive text in your cell phone or in your email, all kinds of things like that we're fairly used to, but this is a jump, and I believe this ChatGPT-3 version came out late last year, I think November, and since then people have been realizing the power of this tool. So, you can essentially put in a prompt, whatever you want it to do for you, and it will generate some content for you. And it's quite good at doing this. In particular, what has caused some concern with people is that you could, say, tell it what kind of essay you would like for it to write and it will generate a pretty decent…
Kate C.: Pretty decent result, I would say. So, when I did finally manage to get in, I asked it: “what is the role of genuflecting for the constitution of the religious subject?” I wanted to throw something fancy at it, and it gave me what I thought was a solid first- or second-year university student response, and I was kind of blown away. Yeah, I was pretty blown away.
Kate T.: I've seen a few presentations and talks about AI, and one of the presenters, here internally at Dal, submitted, before the session, a bunch of essays, some of which were written by students and some of which were not, and we were to try and determine which was which.
Kate C.: And how successful were you?
Kate T.: Well, it was very tricky. Yeah, it was very tricky.
Kate C.: I've actually had trouble even playing with it. I haven't found that kind of playful practice with it. Yeah, it's kind of an awkward thing. I don't know how to talk to it yet.
Kate T.: Oh, definitely. And that is something that I've heard people talk about a lot, as well. So there's this notion of, oh, what are they calling it, prompt engineering; it's the skill of writing appropriate prompts for the AI to generate what it is that you're actually looking for.
Kate C.: And I think even more, I might ask it something and it gives me something either satisfying enough that I…maybe I don't know how to push it. It's both satisfying, and also, especially in the realm of—I gave it a creative fiction prompt the other day—satisfying and yet so trite that I don't even know how to get it to go further, or differently, or more creatively. It kind of stopped me dead in my tracks. It was a weird experience.
Kate T.: Yeah. I think you and I talked about this as well. I was reading an article about whether it will ever truly be able to write poetry, so I was trying to get it to write me a poem that I thought was interesting. I asked it to write me a poem about consciousness and it wrote me a pretty good poem about consciousness, but it was very rhyme-y and regular, like the AI “knows” how a poem should be.
Kate C.: And it actually didn't know a different type of poem you were asking it to do. It should have known this thing.
Kate T.: In iterations, I tried to get it to write a more free-flowing, naturalistic and rhyming type.
Kate C.: Specifically, non-rhyming…
Kate T.: It kept rhyming, because I said the word poem, and in whatever its definition of poem is, it's intrinsic that the words should rhyme.
What is a large language model?
Kate C.: And this maybe actually points to, if we want to get behind—I have no business talking about this—but if we wanted to talk about how it does the thing that it does, it relies on a large language model. Which is basically—if I can really lay-woman-term this, and I'm going to use such technical terms as “thingamajig” and “whatsit”—but it relies on really large and vast quantities of written text that's been smushed up into a sort of statistical aggregation, such that it knows that certain words will come after or before other words, certain percentages of the time. And it's going to rely on this kind of statistical average of the way that words relate to each other, mostly, in our text.
Kate T.: Yeah. That's essentially the way that I understand it; it has been given the text of the Internet as its data, and it is able, based on that, to predict the next word, the next most likely word. And that essentially is how it's doing everything that it's doing. It is immensely complex, obviously, but as a simplistic description, it serves.
Kate C.: Right, very simplistic. I think that we probably will pop some resources in the description box to people who can more fulsomely describe this, either the science or other things that we'll talk about separately.
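[Producer's note: for readers curious what that “statistical aggregation” might look like, here is a minimal sketch of a next-word predictor in Python. It's a toy bigram model, not how ChatGPT actually works (real large language models use neural networks trained on vastly more text), but it illustrates the same predict-the-next-most-likely-word principle the Kates describe; the tiny corpus is invented for the example.]

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "vast quantities of written text" a real model learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most likely to follow `word`, based on the counts."""
    counts = following.get(word)
    if not counts:
        return None  # we've never seen this word followed by anything
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

Scaled up from a dozen words to much of the Internet, and from simple counts to a neural network, this is the intuition behind “it knows that certain words will come after or before other words, certain percentages of the time.”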
What excites and concerns us
Kate T.: So yeah, so I guess that's sort of an introduction. It's something that's on everybody's mind, as far as I can tell, in higher education, especially. I think what we would like to do first is, just briefly, each of us take a chance to talk about one thing that excites us about A.I., generally or ChatGPT in particular, and then a concern that we have. So, do you want to start first?
Kate C.: Sure. I think maybe the word excited isn’t the right one for me…I'm a little cold-blooded on AI; it's not really a topic that I am drawn to, generally speaking, it's just not really part of my body of interest. So maybe not the word “excite”, but intrigued—I’m really intrigued by what ChatGPT can either do for, or detract from, the writing process. I'm really interested in ideas such as, if I allow ChatGPT to help me brainstorm story topics or draft an essay, am I cutting out a piece of the writing process that’s essential to the whole, or even essential to my practice as a writer? Am I allowing the bot to do something for me that I should do myself? Or, is it actually generative in a positive sense such that it could help me get over a writer's block? Or, will it help me actually think of things I hadn't thought about and sort of set me off in a different trajectory entirely? And that's sort of exciting too. So, I am intrigued by the implications for writing, either positive or negative. I’m interested in either writing classes or just those courses that depend heavily on writing assignments.
Kate T.: I think that's an interesting distinction, those two types of classes, and I think this is what we're going to focus on in this episode, assessment in general, but in particular probably writing assignments. So, we're going to get into that.
Kate C.: Oh, and then concerns. Yes. Oh gosh. I never really like hearing about stuff that the First World uses to its advantage after the majority world, I won’t call it Third World, is put to work for low wages to sort of screen or validate or have to be witness to all of the horrors of the underbelly of the human mind. So, there's this article, probably from Time, describing how Kenyan workers were paid less than two bucks an hour to make it less toxic. So, I'm always sort of uncomfortable using something when I know about such origins as this.
Kate T.: Yeah. So really good point for sure. Okay, so we'll flip back to positive. My excitement about A.I., and I do have concerns as well, so we will get back to that. But the thing that I am excited about with these advancements in AI and with chatbots in particular is to me what I see is just an extremely powerful tool that represents a bit of a leap from where we are now; so, that represents a lot of potential to revolutionize the way that we work on a daily basis. I'm excited by that. I mean, it's also one of the things that people are terrified of.
Kate C.: Yes.
Kate T.: Because it's like with great power comes great responsibility. Right. Is it Uncle Ben that said that in Spider-Man, maybe?
Kate C.: Maybe. Ask ChatGPT!
Kate T.: Yeah. And it will tell you. And it may or may not be right, but it will be very confident—if it can be confident. So, yes, I think it's a powerful tool and I think that that's exciting. And I'm excited to discover and learn about the new ways that we'll be able to use it to make our lives more efficient. But yes, I also have concerns, and they are related to ethical concerns. I think we're going to briefly cover the ethical concerns that we are aware of; that's not going to be the focus here. It is a huge topic, and very important, it's just that we're not going to discuss it super fulsomely since it's not really our particular area of expertise.
Ethical issues
Kate C.: Yeah, a lot of issues, security issues, proper use of the data; there’s so much.
Kate T.: Yeah. So the idea here is, like I mentioned with ChatGPT, for example, the data that is making up its database is just information from the Internet, all the information from the Internet, I'm not entirely sure, but it's whatever is publicly available, I'm pretty sure, that was scraped and made a part of the database that the AI learned from. There are two things related to that that are concerning. One is, consent for your data being a part of that database. With ChatGPT and the massive amount of data that that represents, I don't know how impactful that would be, but somebody should theoretically, from an ethical standpoint, be able to choose to remove their data from there, or to not agree to have it included.
And this is a discussion that I've heard a lot with respect to AI art in particular. There are some really impressive art applications now that are generating pretty compelling images for people. But the art styles that they're using are derivative of whatever artists’ art was put into the app, and that was done without consent. And the huge thing there, and this maybe is related to the academia worries, but the thing is that these artists, that is their livelihood, and the AI is poised to take over, because when you can use AI to generate art, it's much more efficient and cheaper. You know, you can spend five bucks and get 50 selfies or whatever.
So, their art is being used to create the AI, which is then taking over their livelihood. So that's a huge problem and so there needs to be policy around that, regulating what data goes into the AI, and based on that, what purpose the results of that can be used for, right?
And then the other issue with that data is, bias in the database is going to come out in the results.
Kate C.: It's scraping that which people have written, and the people who have written might also be an overrepresentation of certain groups of people. So, biases from what might possibly be an overwhelmingly white and northern hemisphere point of view will be overrepresented in the text that it spits back out at you.
Kate T.: Yeah. So I think ChatGPT is actively trying to counteract that by manually making judgements. It sounds like that Time article is about that.
Kate C.: Yes.
Kate T.: So maybe even their practice for doing that isn't the best. But that's the interesting thing with this—we can never take the human out of the picture entirely because we need to be doing stuff like this. I heard somebody else talk about, like, a sandwich where there's a human on either side and the AI in the middle.
Kate C.: Because it does need managing and tempering. So, the idea that AI is going to make everyone obsolete in some kind of way, it still needs the guidance of the human.
Kate T.: But my concern, too, about that particular issue is that, even then, we're only going to catch what we catch. Who is doing this monitoring, and what are they aware of, and what are they going to pick up on, and how does that happen? And what harm is done before something has been noticed, right?
Kate C.: Yeah.
Kate T.: One other ethical concern that we put down here is the sustainability question. This is something I've heard discussed a little bit, and this is just referring to the fact that it is highly resource intensive to run an AI of this magnitude. It has impact on the environment and costs money, like a lot of money, to run. So, you know, people have this idea now of implementing AI on a wide scale, like all over the place. Not sure how feasible that currently is.
Kate C.: That's a great question. Yeah. In terms of cost.
Kate T.: And that refers back to your concern about First World access. Right now, ChatGPT is free for anybody with an Internet connection to use. I think there's already a paid version of it, and I don't know how long it will remain available for free. I think right now it's free because it is actually using the information that people are inputting to continue to build up its database, and so that's the rationale for just allowing it to be used as much as possible. But it's totally possible that eventually it will be behind a paywall.
Challenges for educators
Kate C.: Let’s get down to the education bit now. So, challenges educators are facing now, in terms of how students are using it, how they might potentially be using it, what this means for assessment design and plagiarism and academic integrity and things like the writing process and, you know, being disciplined into a writing process has always meant a certain thing, now might mean a certain other thing. What other challenges generally educators might be thinking about?
Kate T.: I think that that summarizes it, or at least the concerns that I have. I would say maybe if I'm going to add something, I don't know if educators would consider this a challenge, but I think part of what is challenging right now is a lack of understanding about it because it's so new; what it can actually do, what it can't do, and understanding how and when to apply it.
Kate C.: Oh, yes.
Kate T.: Because just like any new tool, there's going to be cases where it's a great thing to use and will create efficiencies for you in your work. And it's just smart, like a calculator or a spreadsheet.
Kate C.: Right.
Kate T.: But there are some times when you need to be able to do those calculations yourself, right? What is it doing? How is it calculating? Like, when is it accurate? And then how do you apply that? So, educating instructors about that, but also instructors educating students about that so that they're aware.
Kate C.: Yes, in some of our pre-episode conversation, [we discussed] the importance for the students…as we kind of implied, it will be invading all aspects of the work we do, the kind of labor we work at, so anything from people who write code for software and websites, or who write press releases, even internal communications people; HR might be experiencing change from AI because it can write memos.
Kate T.: I've heard reference letters, patient letters…
Kate C.: Yes, in a medical setting. So, it is becoming clear that we should be preparing students in some way to face this, what might be a new companion in the workplace, and to know how to use it appropriately, which of course we're still learning how to do. There are some great resources about, like, this is what it can do, what it can't do, and this is how you might want to use it. Do we want to maybe run through a few of those things, or cite them at the end…
Kate T.: We’ll definitely cite them at the end; maybe we can briefly go over…
Kate C.: So, what it can do: maybe our context is just a typical undergraduate course, really in any discipline where you might set an essay. And so what it can do is, as we've said, generate texts about a topic. Yeah, it is sort of descriptive.
Kate T.: It might even be able to compare and contrast to some degree.
Kate C.: To some degree. But as I saw in this fake little essay prompt that I set it, it was very impressive. But—moving to what it can't do now—it generated a lot of really good statements but they didn't necessarily cohere, or sort of hold together as a piece of writing. There, of course, isn't that layer of either reflection or argument or critiquing anything, anything kind of “extra” that one will need for a university essay. It can't do that work. Yeah, not really.
Kate T.: It also doesn't have emotion, or a personal perspective, or history or background. Not every discipline will expect that kind of stuff in an essay or a written assignment, but many do. And it's not something that it can generate.
Kate C.: What it also can’t do—and for those educators who might be looking for, you know, possibly pretty apparent ways [to spot] a student who might be using ChatGPT—is that it's very bad at citing works; it doesn't really properly cite. It might actually make up books and articles entirely. It might assign an actual author to a fake article, or vice versa.
Kate T.: And with all of this stuff—with what it can and can't do—what we're talking about is, what it can currently do, which is useful to know. And that's something you're going to need to be up to speed on. But I would like to throw in the caveat like, currently. And so, for the references thing, for example, is something that I can see getting better. It's also something that I bet you could tweak with proper prompt engineering, because I have heard of people using ChatGPT to apply APA formatting to a reference list, for example. So, if you have the references that you wanted to use, you know, that might be a way to engineer your prompt to more focus the content of the writing and then have accurate references at the end as well.
Kate C.: You know, you're so right. I think a lot of the uncomfortability right now is precisely because we know that it's actually going to get better at what it does. I think the position that at least makes me feel not so tense about this is that we shouldn't try to outpace the machine; we shouldn't really sort of place ourselves in this sort of like competitive stance with the machine, because in many ways, it's just going to be faster and quicker at certain things.
Kate T.: We will lose.
Kate C.: We will lose, yeah, in many ways. And so that I don't know, just, just saying! For what it's worth.
Kate T.: That might be disheartening, but it's, yeah, I mean, I think you're a little more pessimistic about it. I am a little more optimistic. I think that is true, that we can't keep up with it, but because it's so powerful, we need to harness it for good.
Kate C.: Yeah. Like comic book heroes that harness power for good.
The big problem
Kate T.: Yeah, like Spider-Man! So, I think one of the critical concerns with academia and artificial intelligence right now, and academic integrity, is the problem that many typical assignments and essays, currently the way that they're being implemented, could be done by ChatGPT or some other AI. It’s not going to be an A-plus paper, maybe; and it's also probably not going to suffice for higher level, like third or fourth year, or graduate level work. But the problem is that, if the first- and second-year students are using this tool instead of building those skills, then they, too, will not be able to do the third and fourth year and graduate level work, and so it's sort of disadvantaging them. So, what do we do if we can't for sure tell if they're using that in an appropriate way or not? Which, there are tools out there that claim to currently be able to check for it*. But my argument is the AI is going to continue to develop, and it's not 100% accurate, and there's never going to be a way to just ban use of it, in the same way that academic integrity infringements of other kinds still happen. Even before this, a student who was assigned an essay could have gone online and paid somebody else to write the essay for them, and that’s been an issue this whole time. It's just more widely available and easy to get, and free now.
Kate C.: You're so right. What I am enjoying (maybe that's not the right word) about this moment in time, is that, because it is such a leap, as you said, what it actually does for me as I'm reading through opinion pieces and the critiques and the pros and the cons, “yes, let's use it”; “No, by no means use it”. It really highlights some of our deep-seated beliefs about the way that we teach. And what I'm actually really kind of finding is this, especially, for the people who might argue, “I don't want to change my assignment, you know, I shouldn't have to. What I want to do is maybe come down and police students a bit more,” and I think what I’m reading from that is—and I'm speaking from experience—is this overestimation of the quality of those pedagogical interactions. Do you know what I mean?
Kate T.: If an instructor came to me and said that, my approach would be to ask them, “why?” Because I actually do think there are some courses—and this I think we kind of hinted at in your start—there are some courses for which a main learning outcome for the course is for the student to develop writing skills. Like, that is an actual, explicit goal for that course, in which case it is important for the student to write something from scratch themselves and to develop that process.
So you are going to need to have writing assignments in those classes and we will need to keep them and figure out how to approach that. I have ideas for that. And then there are classes where we have been using writing as a way to demonstrate the desired learning outcomes for the course, but for which the actual writing part is not necessary. Like, it's just the method, or it's the medium that we've been having them use. And so for those courses, why not allow them to use ChatGPT and train them in that use? And this method applies to the other course, too, it's just that in the end you're going to have them actually write something on their own from scratch, but you know, the method is: first, here's an example of an essay that I wrote and here's why it's good. Here are the things about it that are great. Here are some skills and strategies. Okay. Now we're going to ask ChatGPT to generate a draft based on this prompt and we're going to assess that output and we're going to notice some things, like, this isn't actually correct, and, do I agree with this interpretation? Am I going to use that in my essay, or no; and, this seems pretty impersonal and sterile, so, I'm going to inject some voice here, but I'm going to use this [portion of the output]. And so the [assessment] submission is: here's what ChatGPT wrote, and here's how I took it and used it. Then you keep going like that, step-by-step, using ChatGPT less and less. So, if it's a course where you need to do the writing from scratch, eventually they're going to come in and write an in-person assessment where they write, because there's no other way to ensure that they didn't use ChatGPT. So, it's like if you need to have that proof, that's how you do it.
But then otherwise, when you use ChatGPT, recognizing and citing or referencing or saying that you used it, providing that output, showing how it's different from what you created.
The relationship between the writing and the learning
Kate C.: And I'm thinking about the kinds of courses that might exist on this spectrum, where the writing actually is not just the medium [for the end product], but it just might be germane to content; it might, you know, in some cases, be kind of important to be able to develop a reflexive voice. I was reading, across what's out there right now, this idea of even just examining the students’ need to jump to ChatGPT as a solution. And I was thinking on this—I think for many students the writing product and the learning that's happening in the course are somehow so divorced. The writing is just this task that needs to be done. They don't see the relevance of the writing to the course. That's a real shame. That should actually tell us a lot about maybe some of our typical pedagogical choices. Again, what I'm really enjoying about this moment is it's helping, I think for me, to really shine a light in those dark crevices that we just haven't really been maybe thinking about.
Kate T.: Topics that have been around for a long time, ideas that have been around for a long time whose importance is renewed. Yes, I totally agree. Why are students using ChatGPT when they shouldn't be? There's: “I don't see the point of this”; “it's hard, it's onerous, but it's not giving me anything”. So, the question is: well, is it? Because if it is, and they're not seeing that, then we need to bridge that gap for them, to show them why it's meaningful for them. Like, how is that going to be useful in your real-world job after school or whatever, and why do you care? And then if it isn't, like, do we need to keep [the assessment]?
Ameliorating student fear of failing
(Kate T.:) Another thing that this reminds me of: why students might use a tool like AI when they shouldn't, or other methods of academic integrity infringement, if you want to call it that. They also might do that because they are fearful of failure, and they don't feel prepared or confident in their ability to do it easily. We can also address that concern with students by being much more careful to adequately prepare them for the assessments that we're giving them.
Kate C.: Leave lots of that space in the beginning of an assignment to break through some of those barriers, of the blank page, or the brainstorming portion, or just the points at which we all know, or should know as instructors, that, like, this was really hard back in the day. [Drawing from] my undergraduate experiences, [those hard moments] are completely glossed over as a moment and a process that actually might require a little bit of resting so you can get the lay of the land; you get a hand, you need some help.
Kate T.: Yeah, it makes me think of when I was teaching lab courses in psych neuroscience, a thing that I really loved to do when I was, you know, given the freedom to do so. We would have, say, four lab reports throughout the course of the semester, and these are daunting, daunting documents for students to write, especially first- and second-year students. So instead of just saying, “we're going to write an entire lab report four times this semester,” it's like, the first lab report you just have to write the methods section, which is the easiest section. Right? “What did we do?” The intro and the results and the discussion are provided to you. Here's what those would look like for this experiment. And then next time it's the methods and the results sections, and the intro and discussion are provided for you, and you just scaffold from there. Scaffolding is a very common term and it can be applied in, I think, any disciplinary setting.
Kate C.: And might be a particularly important one to employ in this context.
An assignment idea
Kate T.: Yeah. And I think that ChatGPT can be used in scaffolding, as well. To get to that blank-page, “I'm going to start writing from scratch” state from “I can't do that at all; I have no idea where to start,” well, let's start with this. One idea that I've had, and this is just to help spark creativity, right; the listener, hopefully, is maybe able to apply some of these ideas. But even if that's not the case, this just gives some ideas for how you might be able to creatively embed it into your assessment. Something that I thought would be interesting: instead of having the student write a short-answer assessment or an essay, provide them with a prompt to send to the AI, and then provide them with the rubric that they should grade with, and then they should assess the output with respect to the rubric, give it a grade, and rationalize that.
Kate C.: Yeah, I love that. It also works to increase their metacognition about the learning process; I love those kinds of assessments.
Kate T.: Wouldn't it be great if that rubric was later going to be used for one of their own assessments, and then, because we always say you should write a rubric, first of all, you should provide it to your students, but I mean, leading a horse to water, etc., if you provide the rubric, how many students are actually looking at it? But if one of your assessments does require them to actually use the rubric, not only have they looked at it, but they've thought about it a lot more deeply than they probably ever would have.
Kate C.: Precisely. And they're internalizing some of those expectations and criteria that they're going to, then, probably more properly meet when it comes to writing an essay. I heard this idea—there are courses in which instructors are really keen to, you know, improve students’ critical thinking and argumentative skills and debate skills—having a dialog with the bot and turning in the dialog as an assessment. You know, having just a back and forth, playing Socrates.
Kate T.: Oh, that's actually really interesting, because I'm just remembering consultations that I've had with faculty who want to do debate activities in their classes, but they've got a class size that's not conducive to that. You could just, say, have a debate with [the bot]. You could even provide some text from ChatGPT that you got: “I’ve asked it to give me this side of the debate. Now, you need to counter that,” although it's possible that ChatGPT could generate that counter too, potentially. I'd be interested to play around and see…
Kate C.: Couldn't one turn in…ChatGPT saves your conversations, right? I don't know if it allows you to print it out. Maybe screenshot it.
Kate T.: I don't know. I'm not sure. But at the very least you could copy and paste the text. It would be nice if there was an export feature. We'll have to get them on that, right, if it doesn't [have one].
Kate C.: What else? Within an assessment?
Kate T.: I think with respect to having students use the AI themselves, that sort of exhausts my ideas. One other little caveat, and I think I do have some resources that we can include about this, but instructors can make use of AI in grading assessments or coming up with assessment questions and that sort of thing, which is a whole other topic, like, how can you use AI in your work?
But it's an important one to consider and think about and actually start doing because, I think, a lot of the recommendations that we want to make about how to change your assessment protocols require a lot of work, time, and effort on the part of the instructor. And instructors are already busy. They already don't have enough time to do all the stuff that they would love to be able to do for their pedagogical practice. So, saving time with AI may help them make space for doing that. But that's not the full topic of this episode, so I'll just say we'll have some ideas in the description.
Reducing teaching labour
Kate C.: Can you give us one idea right now?
Kate T.: Well, we talked about writing reference letters and things like that. Also, I just briefly mentioned having ChatGPT help you generate questions; if we're talking about building up a question bank, ChatGPT can help you do that. It can also summarize really well: if you have texts that you need summarized, it can do that. I saw somebody put all of their course feedback into ChatGPT and had ChatGPT summarize it.
Kate C.: So it could help in building out your teaching dossier?
Kate T.: And making changes to your course based on student feedback. And then she uploaded her syllabus and asked, “where can I incorporate more active learning into this?” And so yeah, I'll have links to that. So these things could help you save time.
Ideas for altering assessments
Kate C.: So maybe a couple of ways to alter assessments, or your courses in general, as opposed to using AI within an assessment. Such as an oral exam.
Kate T.: Oh, circumventing the use of AI in cases where you need students to be demonstrating a skill that ChatGPT could do for them.
Kate C.: Right. So, I don't think anyone really wants… I think some people are attracted by oral exams, but I don't know if it's a very popular alternative. But I wish it was! You know, when you're an undergraduate student, you don't want to do those things, but when you look back on your education, you know what kinds of interaction were really impactful, and I think oral exams are in that category.
Kate T.: I think the oral exam is one of those things that students will react to with that fear response, too. If you feel like that's something you could implement, it's important to prepare them adequately for it, and that involves scaffolding, little practice sessions, and one-on-one chats that are not summative, so that the atmosphere is comfortable for them, because there is this power dynamic between the instructor and student. And that's especially true, I think, the earlier in the degree program the student is. I can remember having to ask a supervisor to supervise my honours thesis, and I was more nervous than if I was asking somebody out on a date. It was daunting at the time. Now I talk to faculty every day, but at the time I was scared about that. So we have to be aware that that's there, and make efforts to show students that talking to you is going to be comfortable, fun, and easy.
Kate C.: I really like the idea of altering… some of the tips and tricks around this are, like, teach more obscure texts, because ChatGPT may not know about them. But I think, in a fuller sense, where you might direct your efforts is to actually create, with your students, a bigger emphasis on locally derived knowledge. So, what is said in class is actually a repository of knowledge they can draw on for their essays. What local people might they interview, speak to, or bring in for guest lectures? Those can be the sources that are either cited or worked with in some way. Turning to the local for your course is actually an exciting thought.
Kate T.: I agree with that. And then, when you think you've got something like that, something ChatGPT or the AI is not going to do well with, pop your assignment instructions into the AI and see what it comes out with, just to see whether it struggles or not. And if it struggles, that’s a good sign.
Kate C.: And then also you're going to know when students turn things in because, you know, if you want them to cite your lecture notes, you're going to know if they've done that or not easily.
Kate T.: Another sort of obvious one, I think, is just straight up in-class assessments.
Kate C.: Yes. And in-class time for formative assignments.
Kate T.: Yeah, absolutely. Kate, you and I are both on the online pedagogy team here at the CLT, so I'm all about online assessments. But the thing is, we had to change a lot about how we did assessments when we moved online, to do it in a way that preserves academic integrity, because when you're just asking for a regurgitation of facts, it's very easy to do that in a dishonest way. When we had to be online, people were struggling with that. But before that, we would just give the test in person and on paper, and that is actually still an option, and it doesn't require instructors to do a whole bunch of new, crazy stuff, designing all these different assessments. I would recommend some combination of things: if you're slowly going to be iterating your course, maybe you think about how you can alter one assessment to incorporate AI in a more innovative way and scaffold that, etc., but for all the stuff that you don't have time to change, have them do that in class.
Kate C.: What I love, too, about the in-class, either assessing in-class or giving lots of time in class to work on assessments; what that brings to mind is that building of the camaraderie in the work that I think is also missing from many undergraduate courses, to sit beside someone and really work through something either together or by yourself, but with people.
Kate T.: Yeah, I think group work is… I mean, everybody knows about group work, and we know that there are challenges to it, but there are real benefits to it. I think when students are working together, they're probably less likely to just have an AI do the whole thing.
Kate C.: I think as we've talked about over and over again, a lot of this rush to the tools is fear based. I think students, overall, want to learn.
Kate T.: Or if it seems onerous. I can also remember when it seemed like such a big deal to write an essay.
Kate C.: And like, you don't really know the reason why (as an undergraduate). Like, it can be onerous, but if you know why one might take on that burden, it's easier to bear. But if it's arbitrarily onerous, yeah, you're going to find ways to work around it.
Kate T.: Yeah. Well, so this is another good idea then, just really thinking about those reasons that students might not do it the way that you want them to do it and trying to counteract those. Maybe the essay isn't the same for every student. Maybe students are allowed to choose what they want to write their essay about. Maybe there are options that you thought of, or you can have students suggest topics. And I mean, those are things that have been around for a long time. I remember choosing my own essay topics in undergrad, right? And those were some of my favorite essays that I ever wrote, and I still remember them.
Just practices like that. Again, like I said, these are sort of strategies that we have been thinking about for a long time, have been great for many reasons, that also serve to circumvent this issue to some degree.
Conclusion
Kate C.: Yeah, so maybe that is a comforting thought: there are actually plenty of things that have been around for a long time, tested, even time-tested at this point, that we can draw on to navigate this very strange moment.
Kate T.: Yeah. Experiential learning, personal relevance, student motivation: all of those things are going to be related to this issue.
Kate C.: Yeah, well, it makes me feel better.
Kate T.: I’m glad!
Kate C.: Right. So those are our thoughts and we thank you for listening to them. We will post any great resources in our description box.
Kate T.: Yeah, it's just a fascinating topic. It's huge. We tried to focus it a little bit in this episode just on assessment, but I'm hoping you found it inspiring, or at least useful.
Kate C.: Or maybe that your blood has cooled about it. That's our hope. So, we hope to see you around for the next episode. We don't know what it is yet, but…
Kate T.: TBD, I'm sure it'll be great.
*Tools that claim to identify AI-generated work are unreliable at this point and produce false positives.
References and Resources
Read the evolving Guiding Principles written by Dalhousie’s Institutional Lead for Artificial Intelligence, Christian Blouin, in collaboration with other groups and units across campus.
Centre for Learning and Teaching’s Artificial Intelligence in the Higher Ed Classroom
Perrigo, B. (2023, January 18). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time Magazine. https://time.com/6247678/openai-chatgpt-kenya-workers/
_______________________________________________________________________________
Things the auto-transcriber thought we said when we said “ChatGPT”:
“Chad Djibouti”
“Chachi Beatty”
“Jackie Beatty”
“Chat shaped”
“Chalkbeat”
“Chelsea”
“Chad”
“Chatty Betty”