For all the talk about whether robots will take our jobs, a new worry is emerging, namely whether we should let robots teach our kids. As the capabilities of smart software and artificial intelligence advance, parents, teachers, teachers’ unions and the children themselves will all have stakes in the outcome.
I, for one, say bring on the robots, or at least let us proceed with the experiments. You can imagine robots in schools serving as pets, peers, teachers, tutors, monitors and therapists, among other functions. They can store and communicate vast troves of knowledge, or provide a virtually inexhaustible source of interactive exchange on any topic that can be programmed into software.
But perhaps more important in the longer run, robots could also bring many introverted, disabled or nonconforming children into greater classroom participation. They are less threatening and always available, and they never tire or lose patience.
Human teachers sometimes feel the need to bully or put down their students. That’s a way of maintaining classroom control, but it also harms children and discourages learning. A robot, in contrast, need not resort to tactics of psychological intimidation.
The pioneer in robot education so far is, not surprisingly, Singapore. The city-state has begun kindergarten-level experiments with robots serving mostly as instructors’ aides, reading stories and teaching social interactions. In the UK, researchers have developed a robot to help autistic children better learn how to interact with their peers.
I can imagine robots helping non-English-speaking children make the transition to bilingualism. Or how about using robots in Asian classrooms where the teachers themselves do not know enough English to teach the language effectively?
A big debate today is how we can teach ourselves to work with artificial intelligence, so as to prevent eventual widespread technological unemployment. Exposing children to robots early, and having them grow accustomed to human-machine interaction, is one path toward this important goal.
In a recent Financial Times interview, Sherry Turkle, a professor of social psychology at MIT and a leading expert on cyber interactions, criticized robot education. “The robot can never be in an authentic relationship,” she said. “Why should we normalize what is false and in the realm of pretend relationship from the start?” She’s opposed to robot companions more generally, again for their artificiality.
Yet K-12 education itself is a highly artificial creation, from the chalk to the schoolhouses to the standardized achievement tests, not to mention internet learning and classroom TV. Thinking back on my own experience, I didn’t especially care if my teachers were “authentic” (in fact, I suspected quite a few were running a kind of personality con), provided they communicated their knowledge and radiated some charisma.
You might think we should not proceed with robot education until it is thoroughly tested and shown to cause no harm to any child. Yet we did not apply comparable standards to, say, the use of textbooks.
In America, federalism means robots will be deployed in some school districts but not others, providing a diversity of approaches and some early results before their use is possibly extended. Those are healthy checks on a new technology that might disappoint us, as many other educational innovations have.
If we insist that robot education first be proven effective in every way, that is a recipe for allowing American education to lag behind. Not doing enough to keep up with a changing world is also a way to harm schoolchildren.
Keep in mind that robot instructors are going to come through toys and the commercial market in any case, whether schools approve or not. Is it so terrible an idea for some of those innovations to be supervised by, and combined with, the efforts of teachers and the educational establishment?
My biggest concern about robot education, by the way, involves humans. Children sometimes trust robots too much. Teachers and administrators could use robots to gather confidential information about children and their families; the children may think they are merely talking to a robot, not creating a database for future scrutiny. This could be addressed by comprehensive privacy standards, probably a good idea in any case.
In Isaac Asimov’s classic “I, Robot” stories, the very first tale, “Robbie,” published in 1940, concerned how much robots should be allowed to bring up and instruct children. The daughter and her father were on board with the concept, the mother skeptical.
Though the American unemployment rate is less than 5 percent, there is nonetheless a skills gap, and our schools are underperforming: Isn’t this the actual AI debate we should be having?