We are one species, and all we know is ourselves. To look outside our own zone of experience is to begin the story of creation again and build a model of existence from the ground up, from first principles. In fiction, models of alternate forms of intelligence follow the line of “us but different.” This rather reminds me of the attitudes that those in the West used to flaunt (and, arguably, still do) about those in the south and east of Europe, a viewpoint along the lines of “us but less so.” From this sort of description of other people, we speak of “governmental structures” that correspond to, but are always less complex and sophisticated than, those of the people doing the describing. For robots, for machines that can think, we always seem to reach for the same framework: “us but different,” or “us but less advanced.” This lets authors take a lens ordinarily used to examine human behavior and apply it instead to the metaphors for humanity they have constructed.
Whether this approach is good or not is a bit beside the point. We know, now, something of what literature might stand to gain from contemporary thought in the computer science community: ideas like computational ethics, and the problems inherent in defining things whose definitions we, as people, know intuitively, but which take more coding than they are worth to implement. Harder is wondering what computer science can learn from the nature of the intelligent machines in fiction. Because of the discrepancies we’ve talked about, fictional AI is rarely considered valid in circles composed of people who think they really know what they’re doing about real AI.
But here is an inconvenient truth: the people in those circles know no more about how we are going to create the kinds of machines we see in works like Star Wars than George Lucas, Isaac Asimov, or any of the rest of us. Indeed, these questions are so hard to answer that we might as well all insert ourselves into this great circle of AI wisdom.
And I think that we should. We all know, in an intuitive kind of way, certain things about how AI should work, if we were only to listen to ourselves. What has held us back so far is only a lack of imagination and drive. Maybe that’s the use of all these works of fiction. Maybe the nature of AI in books and movies, those incredibly, almost impossibly advanced machines that can think, form opinions, feel emotions, and interact with us, serves as motivation to reach for the stars and make rapid progress toward them. Sure, we might force our own reality onto the stories we create, but it might also be a good idea to let the realities we create echo back into ours.