Developing an AI Pedagogy: Reluctant Steps into a Brave New World
Note: No AI beyond spellcheck was used to write this article.
Heather Cray
Senior Instructor
Faculty of Science, School for Resource & Environmental Studies

Where it started
The starting place for my AI pedagogy journey would be best described as “bury my head in the sand and hope it goes away,” followed by “ban it and explain its weaknesses.” Starting last spring, I began to explore chatbot technology more deeply, partly in recognition that it was not going away, but also in response to reports from colleagues and friends that, at Environment and Climate Change Canada and in consulting positions, they were expected to use chatbot AI to complete government reports, environmental assessment documents, and the like. As these are common employers of our students, both in co-op placements and in their careers, it seemed that the most responsible thing was to engage in, and explore, the discourse around generative AI so that I was better positioned to equip students with what they might need. In contrast to my initial approach of “catching” students using it (which proved less and less possible as the technology rapidly developed), my focus has shifted to a course and assessment design approach that acknowledges the existence of generative AI but puts guardrails around its use, while also designing elements for which generative AI cannot replace student learning (e.g., authentic assessments and in-person elements).
Resources along the way: Conferences, Modules, and the CLT Studio Course
During the summer of 2025, my journey was informed by a week-long “Teaching with AI” virtual conference hosted by the University of Guelph where I benefitted as much from the discussions in the chat as I did from the presentations themselves. There were diverse opinions among the faculty and students attending, but the general sense of unease was pervasive.
Coupled with readings and news stories, I completed the Dalhousie GenAI Orientation module on Brightspace and attended the July “Adapting Learning Outcomes to a GenA.I. World” session hosted by CLT. As a result of these explorations, before and after the Fall term lockout, I developed a “stream” system for a major assignment in one of my courses: Stream A was traditional research and writing, and Stream B was guided, reflective use of permitted generative AI. I also redesigned parts of my Environmental Informatics course to shift the elements most vulnerable to generative AI (reflection questions, for example) into an in-person quiz and in-person tutorial assignments. Looking to take this adventure further, I enrolled in the Fall 2025 “Developing an AI Pedagogy” CLT Studio Course led by Kate Crane and joined “The Opposite of Cheating” Book Club for the term.
The “Why” of Cheating
After reading The Opposite of Cheating: Teaching with Integrity in the Age of AI (Bertram Gallant & Rettinger 2025) and benefitting from the Dalhousie book club discussions about it, my view of cheating has shifted. Honestly, I had not considered all the various reasons why a student would cheat using generative AI, and I had not approached the issue from a sufficiently relational lens. During my time as a student, I didn’t know many students who were not engaged in their learning, and while I absolutely recognized that the (sometimes) seemingly transactional and outcome-focused school system in North America could make cheating seem like a rational choice, the idea that there was a fundamental disconnect between what students perceived as cheating and what I and other faculty considered cheating was an interesting revelation.
For example, a friend of mine, typically an engaged student, talked to me about using ChatGPT to write their reflections for class assignments. This was an interesting case study to examine through the lens of the book and of authentic assessment. To my friend, the reflection was a “throwaway assignment,” one they saw no value in, so they didn’t consider it an issue to cheat; they didn’t think it was worth their time to write it themselves, and they thought they would get a better grade using ChatGPT because their writing was not strong. In my course redesigns for last Fall term and going forward, I am keeping the “why” firmly in mind and have been even more dedicated to explaining the purpose of each assignment and ensuring that, as much as possible, it is in students’ interest to complete it themselves.
Memorable Media and Voices from the AI Frontier
The most memorable piece of media that I have found in this journey to date is Laura Preston’s essay “Human_Fallback” (Preston 2023); it has stayed with me for months now. I was aware from my investigations of generative AI models that there were, indeed, humans in the system, and that some of the work behind chatbot models is truly harrowing, horrific labour: sifting through the vilest parts of the internet for very poor wages, often performed by workers in the Global South. I was less aware, before this article, of the form of gig economy that has sprung up to prop up conversational AI bots that were pretending not to be conversational AI bots. The essay is written from the author’s perspective, based on her experience in this type of position in 2019 as a recent graduate of a creative writing program. In that role, she humanized generated responses from “Brenda” and took over in instances of emotional disclosure or other tricky situations, all while preserving the pretense that Brenda was not a bot. It was depressing, demoralizing work with an element of the surreal; this human cost, and the dystopia-framed-as-progress image promoted by the program’s proponents, has stuck with me. The surreal disconnect between hype and reality is also present in another of the author’s essays in the same magazine, “An Age of Hyperabundance” (Preston 2024), which further explores the role of conversational AI and chatbots in society and the biases inherent in their design and in their outputs. A particularly chilling quote from this second essay:
“Beneath this promised future, however, was a shadow future, one that suggested itself at every turn. This was a future of screens in every establishment and no way to get help, a future in which extractive algorithms yielded relentless advertising, a future of a crapified internet, too diluted with sponcon [sponsored content] and hallucinated facts to be of any use. In this future, if you wanted to use a product you would have to download an app and pay a monthly fee. It was a future of ultra-sophisticated scams and government surveillance, a future where anyone’s face could be spliced into porn. Our arrival in this future would be a gradual surrender, achieved through a slow creep of terms and conditions, and the capitulations had already begun.” (Preston 2024).
An Adaptive Approach
My own opinion of generative chatbot AI has not changed: I am still opposed to the current use of this technology. Arguments for improved accessibility are belied by the present for-profit, “one size fits none” approach; accessibility needs would be better served by software designed to actually meet the needs of users instead of further marginalizing them (UNRIC 2024; Begum 2025). An example of this is the Dragon line of transcription software, which is customizable, includes more technical vocabulary (i.e., recognizes “science-y” words as words), and, unlike the Microsoft 365 “Dictate” feature, does not need to be restarted every time you pause (this is not a sponsored ad, though, Dragon: call me). I look forward to conversations about this at this year’s DCUTL conference, “Opening Doors, Disciplines, and Minds: Embracing the potential of an accessible world.”
I recognize, however, that my opinion of chatbot technology is not universal and should not entirely dictate my course design approach. In addition to informal conversations with students throughout the fall term, I included the following question in my SLEQs: “How would you like generative AI to be handled in your courses?” Responses were mixed and included: full avoidance due to the environmental and social issues with the technology; permitting AI only if it is critically evaluated; and variations on “if we’re going to be expected to use it as soon as we graduate, why is it banned, and why aren’t we being taught to use it properly?” Leaving aside the question of whether “ethical use of AI” is even possible, given that models are built on stolen materials and involve deeply embedded socioeconomic and ecological inequalities, the practical arguments for teaching the critical, reflective use of this technology have led me to a place of uneasy compromise.
My current questions and challenges centre around in-person elements (e.g., assignments, oral exams, discussion) and inclusive design (e.g., students who write in the Accessibility Centre having to leave class, speakers of English as an additional language, and the hidden-curriculum elements of oral exams and discussions), as well as the practicalities (or lack thereof) of grading iterative, authentic assessments with rapid enough turnaround for them to be useful to our students, especially without the substantive support of teaching assistants, which budget restrictions render impossible. Experimenting with different adaptive approaches this Fall was informative, and I benefitted enormously from discussions with colleagues at every stage. As the technology continues to change at an eyewatering pace, learning from one another and listening to our students is the only way I can imagine keeping somewhat apace and helping our students do the same.
Positionality statement: I am a white cisgender woman, a settler in Kjipuktuk, and I live with disability. My experiences of techno-joy involve an enduring love for GIS and the potential of technology to enhance the human experience and our connections to and understanding of one another and the planet that we share.
References
Begum, Marufa. 2025. Chatbots and Web Accessibility: Addressing Usability Issues and Embracing Inclusive Design. Make Things Accessible. https://www.makethingsaccessible.com/guides/chatbots-and-web-accessibility-addressing-usability-issues-and-embracing-inclusive-design/
Bertram Gallant, Tricia & Rettinger, David A. 2025. The Opposite of Cheating: Teaching with Integrity in the Age of AI. University of Oklahoma Press, USA.
Preston, Laura. 2023. Human_Fallback. N Plus One magazine, Issue 44: Middlemen. https://www.nplusonemag.com/issue-44/essays/human_fallback/
Preston, Laura. 2024. An Age of Hyperabundance. N Plus One magazine, Issue 47: Passage. https://www.nplusonemag.com/issue-47/essays/an-age-of-hyperabundance/
United Nations Regional Information Centre for Western Europe (UNRIC). 2024. Building an accessible future for all: AI and the inclusion of Persons with Disabilities. https://unric.org/en/building-an-accessible-future-for-all-ai-and-the-inclusion-of-persons-with-disabilities/