Dr. Lauren Zentz on Pedagogic Implications for Bard and Other AI Platforms

In our UH AI/ChatGPT working group, Dr. Teresa Acosta and I recently gave a very rudimentary talk about how to use Google’s Bard application, how it differs from ChatGPT, how we might envision using these apps in our classrooms, and how we might envision our students using these apps, for better and for worse – as both learning aids and “homework bots”. 

My peers in the meeting listed an impressive number of ways in which they are already engaging with these AI platforms in their courses, under the premise that as educators we must critically embrace these new technologies. Some of these methods follow: 

  • having students write their own texts, asking the platform to write a similar text, and then comparing and contrasting the two; 
  • critically examining the references given by the platforms (these are generally quite terrible, sometimes even leading to dead links), then having students research actual reliable sources and compare what the platforms gave them with what they found in those sources – this can lead to broader conversations about the circulation of misinformation and about the roles and responsibilities of algorithms and media/tech companies in such circulation; 
  • polling students to see which platforms they use and what they use them for; 
  • inviting students to use these platforms as they write essays and answers, while making clear that a) the platforms must be cited and the students must explain how they used them in formulating their responses/texts; b) the students will still be held primarily responsible for the content of those texts and any erroneous information contained therein; and c) the students will still be graded on the quality of any writing they have adopted from these platforms (which is generally below expectations for college writing); 
  • inviting students to inquire into the kinds of labor that AI platforms can perform, the ways in which they can be integrated into learning and text production, and the ways in which we must still expand on and improve any information they provide; 
  • working with students to learn appropriate literacy skills needed for entering inquiries into these applications under the premise that our inquiries shape the responses we get back from any of them;
  • having an AI platform take a test in the subject at hand and then examining its answers.

While many faculty members agreed that we must “embrace”, or at least “stay on top of”, these technologies moving forward, the group as a whole agreed that we must do so critically – that our embrace must include critical use of and engagement with these platforms. Such criticality includes everything from understanding who makes these platforms, how each one’s language model works, and what their profit and privacy models are, to recognizing these platforms’ limitations and how frequently they get facts wrong – egregiously or subtly so. 

As a sociolinguist, I am deeply invested in learning how these language models are created; how they sift through, organize, and process linguistic data; and how they reassemble those data. As prominent scholars like Noam Chomsky and Brendan O’Connor have pointed out, computer models of language are quite a ways off from achieving the generativity and interdiscursivity that human language and thought have achieved. As cognitive linguists like Daniel Everett have pointed out, we have actually managed to learn very little about the human brain. Others have pointed out that results from these platforms are exceedingly unoriginal and often plagued with factual errors. So if we humans who are building the language models behind these platforms still barely know anything about how the mind works, then let’s meet these new platforms on the premise that they are new tools – tools that aren’t going anywhere, but that are still quite rudimentary. As users, then, we don’t need to trust them at all; but we can meet them where they are and critically come to terms with the ways in which we and future generations will integrate them into our lives as communicative and educational tools.