The rise of Artificial Intelligence presents an extraordinary opportunity for equity and innovation in education - and that opportunity isn’t solely technological.
The industry frenzy around the rapid development and availability of AI has been breathtaking. Since OpenAI’s release of ChatGPT in particular, public (and private) interest has exploded. School districts and institutions around the nation have responded across the full spectrum: from banning the use of ChatGPT outright as plagiarism to exploring AI-integrated platforms for tutoring or lesson planning. Some districts have rolled out policies or statements on the use of AI, describing it either as an untested, unregulated danger or as something ultimately innocuous - though fearfully received at first - like the invention of the calculator.
Students in post-secondary institutions using anti-plagiarism software have flooded the internet with frustration about their own writing being dismissed as ‘cheating’ or mistaken for ChatGPT output. Educator orientation toward AI ranges from outright mistrust - especially as the technology develops so quickly, with little understanding or regulation - to thoughtful adoption. Debates rage, and systems, foundations, technology companies, and stakeholders find themselves in a reactive position, struggling either to capitalize on or to mitigate the impacts of AI in education. In the midst of the exciting yet disorienting energy around AI, we at iF recommend specific ways of approaching the technology thoughtfully, ethically, and equitably.
Educators, rightly so, find themselves in a defensive posture regarding the future of their own profession, the authorship of their students’ work, and the biases inherent in machine learning trained on fallible human systems. This isn’t the wrong orientation, but it does miss the opportunity inherent in this moment: to let the advancement of AI drive better questions and greater rigor around teaching and learning. Perhaps the most important question for educators right now isn’t “how do we defend ourselves from AI in our classrooms?” or even “how do we capitalize on or use AI in our classrooms?” but rather “what are the underlying questions about the role of AI that will deepen the authenticity of teaching and learning for our students?”
AI, though unlike any previous technology, still requires us to be rigorous about the ways we integrate it into our classrooms, our pedagogy, and our larger narrative about teaching and learning. AI and ChatGPT are here - students are curious, innovative, and industrious, and they are using it with or without our blessing. Imagine: Instead of (reactively) engaging software to ‘catch’ the ChatGPT-produced work that we are afraid of students producing, we answer the call to provide assignments and assessments that can’t be replicated by ChatGPT in the first place. AI in education also gives us the opportunity to take a second look at the ways we think about teaching and learning.
Integrating AI into classrooms in ways that accelerate learning will require centering the ideas that students inherently want to learn, that teachers are essential to teaching and learning, and that truly great classrooms are built around both. So where does that leave AI? We need to shift from a reactive to a proactive role in the way we leverage AI - or any technology, for that matter - in our classrooms.
Earlier this year, a professor at the Wharton School took an open-minded stance on AI in his classroom: Rather than banning the use of ChatGPT, he actually required that his students use it. In his syllabus, he asked students to learn the technology, reflect on their experiences, and become familiar with its uses, its limitations, and its impact on their own learning. Together, they explored how to leverage AI to provide information, tutoring, and thought-partnership; they also explored how AI works, and demystified its capabilities and impacts. In his course, students became the experts, and were given the freedom - in fact, the mandate - to learn about the technology, rather than fear it (or more likely, use it despite their institution’s policies).
On the education team at Intentional Futures, centering students and exploring the ideal state of emerging technologies is core to our work. This means everything from conducting stakeholder interviews and focus groups while developing a strategic plan, as we are currently doing with Lakeside School, to structuring intentional opportunities for student input into institutional improvement, as we did with the Fellows Program. We support our partners in centering student innovation while, for example, developing next-generation courseware or institutional equity metrics for post-secondary educational experiences.
Much like the responsive, iterative nature of AI itself, we recommend that organizations and institutions work to develop their own orientation toward the use of AI. At this stage in the evolution of this new technology, having a stance is critical. Many institutions have employed metaphors to describe their relationship with developing AI/ML technologies. Some schools have framed the technology as simply a tool, much like the calculator, that expands opportunities for learning but still requires student thinking and instructor guidance. Other organizations have taken similar stances, describing AI as a ‘co-pilot’ in an airplane: able to fly the plane itself, but requiring an experienced pilot in the seat beside it for safety and leadership. Consider taking the lead on your intended relationship with AI in your educational institution, rather than reacting or passively relating to the technology.
Finally, the current energy around AI gives us the opportunity to deepen the narrative beyond reactive attempts to simply mitigate bias (which are, of course, necessary and urgent), and to advance the conversation toward how AI can be used to create potentially exponential equitable impact. AI technology is already being used to explore more intuitive tools and greater accessibility for individuals with disabilities; to provide Socratic tutoring and generative democratization for students in public schools across the country; and to disrupt systemic inequities in the criminal justice system. Recently, we engaged with Equal Opportunity Schools, a national non-profit working to create equitable access to advanced courses for high school students of color and students experiencing poverty. Together, we’ll engage in a cross-sector incubation and exploration of what is possible. The question we will continue to ask, at iF and beyond, is: how can AI be used to make equity exponential?
The rapid evolution of AI technology requires us to be rigorous about centering thoughtful, student-centered questions in order to amplify teaching and learning. On the iF education team, we are committed to strategic, forward-thinking engagement with technology. With our partners, we do this by developing landscape analyses of emerging tech products, creating deep and insightful intentional learning sessions on developing issues, and building frameworks for equitable systems. In exploring the impacts and potential of AI in education, engaging students and centering their experience will be key to identifying what is possible.