The Future Is Now: Discussing Artificial Intelligence in the ESL Classroom

Perhaps you recall Steven Spielberg's movie A.I., or an article in which Stephen Hawking or Elon Musk expressed concerns about AI 'taking over'. You might associate AI with image searching or self-driving cars, or envisage a chess grandmaster bowing in humiliation to a skilled computer. In both fact and fiction, AI is certainly here to stay, and software making use of artificial intelligence is one of the fastest-growing areas in all of technology.
But how can we discuss these complex technological concepts with our students? Like many ESL teachers, I initially worried that the required vocabulary would be too advanced, and that we'd get bogged down before we could fully explore these fascinating ideas. As it turned out, my students were way ahead of me (as usual) and brought their own experience and opinions to what was a thoroughly engaging class. So, I wanted to pass on some ideas to help you discuss AI with your students, and produce a ton of great language in the process.
Artificial Intelligence (or 'machine intelligence') is the capacity of some types of technology to mimic human cognition and communication. These machines - technically, of course, pieces of software running on high-end computers - apply the logic of their programming to specific tasks, and can sometimes appear equal (or, controversially, even superior) to their human creators. Examples include speech recognition software which learns your accent and cadence over time, driverless cars which communicate with each other to form a network of vehicles, machines that are gaming experts (e.g. IBM's seminal Watson, which trounced human champions at Jeopardy!), and independent on-board computers designed for use in deep-space probes, where communication delays render autonomy almost essential.
This is a great place to start. Even if your students aren't aware that their phones, tablets and computers make use of AI on a daily basis, they'll be familiar with fictional characters and concepts. Elicit as many fictional robots and AI devices as possible, perhaps including HAL 9000, the Terminator, the replicants from Blade Runner, the hosts of Westworld, and Ava from Ex Machina.
If there is time, hold a short quiz in which your students guess the names and contexts of these AI examples. Then pose this question to your students, in order to define the scope of the inquiry: What can an AI do that a conventional computer cannot?
We found that fictional AI can be prone to confusion (HAL 9000), tends to malfunction or go off-program (the hosts of Westworld, the replicants in Blade Runner, Murphy in RoboCop), may have aims of its own (David in Prometheus, Ash in Alien and Ava in Ex Machina) and may pose a direct threat to humanity (The Terminator, The Day the Earth Stood Still, Transformers, The Matrix). With these fictional experiences in mind, my students then began to figure out their feelings about AI, and how it might integrate into our lives.
I asked my students to picture a scene from twenty years in the future, and they brought in some fascinating potential uses of AI.
I’ve had some very successful lessons with my advanced group recently, but watching my students tear into these questions was sheer fun. With sufficient background (provided in large part by our fictional examples, but also a little reading on the topic) my students were inspired to produce lots of great language, including the target vocabulary for the week:
Consciousness, artificial psychology, mimicry, the Turing Test, emulation
Threats, dangers, paranoia, conspiracy, rumination
Processing, quantifying, analyzing, coding, designing, improving
Here are the questions I posed to groups of students. If they’re in the right mood, you’ll hardly need to intervene, and the discussion is theoretically endless:
Imagine that your brain could be mapped in every detail, and this information uploaded to an AI. Would you consent to your 'self' being replicated in this way? How would it feel to converse with a machine iteration of yourself? Which of these versions would be 'you'?
If an AI claims that it ‘fears death’ (i.e. being turned off or disconnected) then should we regard that AI as mortal? Is this sense of mortality just a quirk of programming, or can we say that this machine is ‘alive’ in some way?
If an AI is considered ‘alive’, what rights should it be granted? Can we expect the public to agree to provide an AI with all of the rights expressed in the US Constitution, for example?
If we take that route, and validate the humanity of our machines, bringing them into a closer union with us, what might then happen? Will we see the rise of implant technology which blends the human and the mechanical (part of what Ray Kurzweil has termed ‘The Singularity’)? Will we even see AIs who wish to marry people?
If an AI were given rights and regarded as sentient, would those rights permit the AI to stand for elected office? Can we imagine such a ‘digital president’? Would we be able to trust a machine with life-and-death decisions, or expect it to behave with humanity and compassion?
Should an AI ever be given responsibility for a defense network (as in The Terminator, and the Cold War AI thriller WarGames)? Fictional representations provide only the starkest warnings of such an idea, but some US nuclear planners insisted that post-catastrophe nuclear conflicts would necessarily be fought by machines operating automatically, as their human handlers would all have been killed.
How concerned should we be about AIs which are too pedantic? (Americans tend to say 'literal'; I teach both words.) This is the root of the 'paperclip problem', a thought experiment associated with the philosopher Nick Bostrom: an AI is given the task of producing paperclips, but decides that the main limiting factors are that humans keep using them and compete with it for resources (metal, factory space, etc.). The AI's solution is therefore to murder all of humanity, so that its paperclip manufacturing can achieve peak efficiency.
We're very used to AIs embodying human characteristics. In the 2004 film adaptation of Isaac Asimov's I, Robot starring Will Smith, the AIs are all bipedal humanoids with passive expressions and soothing voices. HAL 9000 sounds friendly and helpful (even when - spoiler alert - he's busy murdering the crew of the Discovery), as does JARVIS in the Iron Man movies. Will we always choose servile characters for our AIs, or might we prefer something else?
What would we do if (or, according to the skeptics, when) it all goes wrong? Should there be a fail-safe device to prevent AIs from asserting their right to protect themselves? How might an AI choose to attack us?
AI brings with it a host of social and economic issues. Chief among these, according to economists, is the likelihood of mass unemployment as AIs take over mid-level work which has so far escaped automation, rendering a whole class of workers expensively obsolete. Is this a reason to delay the development of AI until the economy is more robust and resilient to such an epochal change?
Would we permit AIs to self-replicate? How about self-reprogramming, or self-upgrading? Are your students comfortable with giving an AI responsibility for its own development?
Our students are going to live in a fascinating century, one in which AI is certain to play a major, even defining role. There is much to fear, but there are opportunities too; discovering your students’ views and helping them to understand and discuss these issues is rewarding because of the high level of language required, and also because of the increased confidence that comes from discussing complex topics in detail.