Tuesday, December 15, 2015

Forcing Our Reality

We are one species, and all we know is ourselves. To look outside our own zone of experience is to begin the story of creation again and build a model of existence from the ground up, from first principles. In fiction, the models of alternate forms of intelligence follow the line of "us but different." This rather reminds me of the attitudes that those in the West used to hold (and, arguably, still do) about those in the south and east of Europe, a viewpoint that follows the line of "us but less so." From that sort of description of other people, we get talk of "governmental structures" that correspond with, but are always less complex and sophisticated than, those of the people doing the describing. For robots, for machines that can think, we always seem to fall back on the same framework: "us but different," or "us but less advanced." This means that authors can take a lens normally used to examine human behavior and apply it instead to the metaphors for humanity they have constructed.
Whether this approach is good or not is a bit beside the point of this discussion. We know, now, something of what literature might stand to gain from contemporary thought in the computer science community about such things as computational ethics, and about the problems inherent in defining things whose definitions we, as people, already know, but which take more coding than they are worth to implement. Harder is the question of what the field of computer science can learn from the nature of the intelligent machines in fiction. Because of the discrepancies we've talked about, fictional AI is rarely considered valid in circles composed of people who think they really know what they're doing about real AI.
But here is an inconvenient truth: the people in those circles know no more about how we are going to create the kinds of machines we see in works like Star Wars than George Lucas, Isaac Asimov, or any of the rest of us. Indeed, these questions are so hard to answer that we all might as well just insert ourselves into this great circle of AI wisdom.

And I think that we should. We all know, in an intuitive kind of way, certain things about how AI should work, if we would only listen to ourselves. The problem that has so far held us back is one of imagination and drive. Maybe that's the use of all of these works of fiction. Maybe the nature of AI in books and movies, those incredibly, almost impossibly, advanced machines that can think, form opinions, feel emotions, and interact with us, serves as motivation to reach for the stars and make rapid progress toward getting there. Sure, we force our own reality onto the stories we create, but it might also be a good idea to let the realities we create echo back into ours.

Are We Closer Than We Think? or "Great, kid, don't get cocky"


The book Artificial Intelligence: Approaches, Tools, and Applications has this to say about definitions: 

Artificial Intelligence may be defined as a collection of several analytic tools that collectively attempt to imitate life and has matured to a set of analytic tools that facilitate solving problems which were previously difficult or impossible to solve (Gordon vii).

While this sort of abstract formulation is interesting enough, its vagueness means that a lot of people can use such a definition to claim to have made advances that, unfortunately, often seem a little underwhelming. 
Meanwhile, the single biggest reason why we have yet to see something that truly looks like the artificial intelligences that appear in fiction is that there is simply a lot of work to do, and a lot of code to write, before we get to say, "Yes, this is one; here it is." While it is both natural and useful to speak of this problem specifically in terms of programming code, it is worth mentioning that there is a deeper version of this problem, described in information theory under the name of information entropy. I have included a video that I made that talks about this problem.
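The rough idea, for anyone who skips the video, is that the amount of information something contains puts a floor on how compactly it can be described. Here is a small sketch of my own (just the standard Shannon formula in a few lines of Python, not anything from the video) showing how that floor is measured:

import math
from collections import Counter

def shannon_entropy(text):
    # Average number of bits needed per character to encode this text,
    # using the standard Shannon formula: sum of p * log2(1/p).
    counts = Counter(text)
    total = len(text)
    return sum((n / total) * math.log2(total / n) for n in counts.values())

# A repetitive message needs almost nothing to describe it...
print(shannon_entropy("aaaaaaaaaa"))  # 0.0 bits per character
# ...while varied text needs several bits for every single character.
print(shannon_entropy("the quick brown fox jumps over the lazy dog"))

A mind that has to describe the behavior of the whole world around it is, in these terms, a very high-entropy object, which is one way of saying that there really is a lot of code to write.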




There are a few side effects of this most consistently underwhelming of fields. To begin with, AI is forever in the public eye, with the assurance that whenever true AI is made, every news station in the world will flock to whoever created it. This means that there is a more or less constant stream of hopefuls who flaunt their modest successes in pursuit of headlines and grant money.
I may make some enemies here, but I have yet to be convinced that building a robot that can play piano in a band has any use, or contributes to the great work of humanity even a little bit. Yet it still makes the news. If it takes the audacity of a college freshman to look at all of the decades of work put in by the MIT Media Lab (and others like it) and say, "Well, all of this is useless junk," then, as a college freshman, allow me to say it: nearly everything produced by the MIT Media Lab is useless junk that furthers the field in which it is nominally working exactly as much as does the carpet-shampoo industry.

Why do I say "nearly"? Because there is one thing I know of that really excites me, and it has come from that much-vaunted lab.
Works Cited
Breazeal, Cynthia. "The Rise of Personal Robots | Cynthia Breazeal | TED Talks." YouTube. YouTube, 8 Feb. 2011. Web. 15 Dec. 2015. <https://www.youtube.com/watch?v=eAnHjuTQF3M>.

Gordon, Brent M. Artificial Intelligence: Approaches, Tools, and Applications. New York: Nova Science, 2011. Print.

Asimov and Those Rules that Don't Work

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Seems legitimate enough, right? These laws would nicely ensure that the machine in question considers, in everything it does, the idea that it must put humans first. The problem lies in the definition of terms. To even hope to implement these three rules, a machine first needs to know the definition of injure, then of human, then of harm. How do we do this? The answer is that we can't.
If we begin with the word "human," then we have a few interesting things to consider. A video on the YouTube channel Computerphile calls them "edge cases." If I were to point at you, it is clear that you are a human, and most people in any given room with a robot are also easily called people. But do dead people count as human? I'm sure that, for most applications, they don't, but that means you, the developer, must define the word "dead" too. What about unborn people? It's hard enough to get a room full of people to agree on that one as it is. So we cannot make these rules, which at first looked so appealing, work as guidelines without first defining answers for almost every ethical question.
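To make the definition problem concrete, here is a hypothetical sketch of what a literal implementation of the First Law would have to look like. Every name in it (violates_first_law, is_human, would_harm, and so on) is my own invention, not any real robotics API; the point is how many undefined human concepts the rule quietly depends on.

# A deliberately incomplete sketch of the First Law. None of these names
# come from a real library; they exist to show where the definitions go.

INACTION = None  # stand-in for "the robot chooses to do nothing"

def violates_first_law(action, world):
    # "A robot may not injure a human being or, through inaction,
    #  allow a human being to come to harm."
    for entity in world:
        if is_human(entity) and would_harm(action, entity):
            return True
        if action is INACTION and is_human(entity) and harm_is_coming_to(entity):
            return True
    return False

def is_human(entity):
    # Living adults, clearly. The recently deceased? The unborn?
    # Every edge case from the ethics debate must be decided right here.
    raise NotImplementedError("nobody can write this function")

def would_harm(action, entity):
    # Physical injury only? Emotional harm? Financial harm? Risk of harm?
    raise NotImplementedError("nobody can write this one either")

def harm_is_coming_to(entity):
    # Requires predicting the future of the entire world around the entity.
    raise NotImplementedError

The top-level function is trivial; all of the real work hides in the three predicates that nobody knows how to fill in.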
Of course, these rules are never seriously considered by developers of artificial intelligence nowadays. The reason is that they are simply not useful. I am interested in them for a different reason than their applications: I am interested in their exigence. These rules were created, essentially, as a plot device: a means by which action could be driven along. Unfortunately for Mr. Asimov's reputation, they were not written from the point of view of a computer scientist, which is why they read like an elf-magic explanation of an idea this advanced.

So what are we left with from these rules? What I take away from them is that the ideas they present are just too advanced. People think that the intuitive understandings they have will somehow traverse the sort of blood-brain barrier, if you will, that separates the mind of an AI from their own. We like to think in poetry, but AI, among other things, tells us that we cannot make poetic assumptions about something this deeply rooted in logic and computer science. Everything, absolutely everything, must be specific in a computer program. If a concept is not specifically defined, then it simply does not exist in the mind of the machine. Sorry, Isaac; better luck next time.

Works Cited
"Do We Need Asimov's Laws? | MIT Technology Review." MIT Technology Review. MIT Technology Review, 16 May 2014. Web. 15 Dec. 2015. <http://www.technologyreview.com/view/527336/do-we-need-asimovs-laws/>.
"Isaac Asimov's "Three Laws of Robotics"" Isaac Asimov's "Three Laws of Robotics" N.p., n.d. Web. 15 Dec. 2015. <http://www.auburn.edu/~vestmon/robotics.html>.
Singer, Peter W. "Isaac Asimov's Laws of Robotics Are Wrong." The Brookings Institution. The Brookings Institution, 18 May 2009. Web. 15 Dec. 2015. <http://www.brookings.edu/research/opinions/2009/05/18-robots-singer>.

Star Wars and the Really-Crazily-Advanced Group

There are some members of the fictional artificial intelligence family that don't fit comfortably into any existing theories, my own or otherwise, as to how they work. In this category I would put the technological members of the Star Wars series. Compared to now, Star Wars is the product of another time. The plot lines and story arcs were dictated by the interests of the populace during the seventies and eighties. This was just past the civic revolutions of the post-war years, and while the screenplays were being written there was a sense of the integration of new values and new technology that made itself known, as a movement of thought, in the plots of the first three movies. The prequels (the last three that were made) are the product of what is, I think, a sufficiently different time to merit their own discussion in most cases. However, the intelligence and character roles created in the first three films remained a guide for the prequels, so if we stick to the topic of robots, we can talk about them together.
Robots? Anyone who's played around with AI knows that robots are not absolutely necessary to the creation of intelligence, or even to the research of it, so why must there always be a robot? This rule is entirely fictional. There is nothing in the development of computer science right now that suggests all AIs must be bound up in an entity with arms. However, not just in Star Wars but in other places too, this seems to be the situation that we see. Why is this? For, as we will see, it is truly a widespread, almost universal, phenomenon: we always bind AIs into things with arms. I think there are two things we might draw from this fact: first, that humans like things we can put in boxes and dislike things we can't; second, that it took the spreading-out of the technology, and a wider understanding of computers, before our representations of them in fiction began to become more true to fact.
But I say that Star Wars represents the really-crazily-advanced group. Why, and what we can learn from it, is what I am going to attempt to address. There is a significant discussion of the droid and related phenomena on the Star Wars fan wiki, but to take advantage of it would be to ignore the first rule of researching pop culture: everything on a fan wiki is utterly useless. So instead, let's have a look at one aspect of one droid. C-3PO is fluent in six million forms of communication? Really? That got past script editing?
How impossible is that, really? According to The Global Language Monitor, there are 1,025,109.8 words in the English language ("Number"). I don't know what they mean by 0.8 of a word, but let's assume they know what they're doing. If we say that each word is approximately four characters (it's probably more, but we're being conservative), then each word takes about four bytes. If we make the further assumption that every language amounts to roughly this same amount of data just to record all of its possible morphemes, then C-3PO needs only about 4.1 megabytes per language, or roughly 25 terabytes in total, in order to know every word in all six million of them. Of course, he also needs much more than that in architecture so that he can synthesize those words into whatever he is trying to say or understand, but I think he could probably fit all of it within a few dozen terabytes. Even with our present, and fairly low, level of technological advancement, it is easily possible to store that much information on a small stack of ordinary hard drives. Because the problem of storage is not so frightening as to be insurmountable, all that is stopping us from treating C-3PO as possible are the limits of the ingenuity of his creators. And his creator was Darth Vader, so that pretty neatly wraps that up, right? Well, I think we've shown that such a machine is at least possible; the only question is how hard, or how long, it would be to implement.
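For the skeptical, here is the same back-of-the-envelope arithmetic written out as a few lines of Python. The inputs are exactly the assumptions stated above, nothing more.

WORDS_PER_LANGUAGE = 1025109.8   # The Global Language Monitor's figure for English
BYTES_PER_WORD = 4               # conservative: four one-byte characters per word
LANGUAGES = 6000000              # C-3PO's claimed forms of communication

bytes_per_language = WORDS_PER_LANGUAGE * BYTES_PER_WORD
total_bytes = bytes_per_language * LANGUAGES

print(bytes_per_language / 1e6)  # about 4.1 megabytes of raw vocabulary per language
print(total_bytes / 1e12)        # about 24.6 terabytes for all six million languages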
Perhaps you find yourself wondering something that I've wondered a lot about Star Wars: namely, why don't the machines take over? In the movies, at least, they don't, but in many, many, many other stories and films that is just what happens. I wonder if we can figure out why…

Works Cited
Mallya, Vaibhav. "Why Is There No Powerful AI in the Star Wars Franchise?" Quora. Quora, 7 Sept. 2011. Web. 15 Dec. 2015. <https://www.quora.com/Why-is-there-no-powerful-AI-in-the-Star-Wars-franchise>.

"Number of Words in the English Language: 1,025,109.8." The Global Language Monitor. The Global Language Monitor, n.d. Web. 15 Dec. 2015. <http://www.languagemonitor.com/number-of-words/number-of-words-in-the-english-language-1008879/>.


Who Would Have Thought It? - AI in Fiction

Consciousness is like the ability to get a date. We all know, on some level, what it involves, and we've all definitely seen examples of it. We could probably all describe what it is, and we know it when we see it. However, trying to define what it means to be conscious (to think for oneself, to make decisions, to formulate opinions, even to feel happiness or sadness) is even more complex than getting a date. No, I don't get out that much. Why do you ask?
Because of this, computational intelligence has always been a matter of storytelling. This, by itself, doesn't make it terribly exceptional. There are many things, including several I've learned about in English class, that have been relegated to science fiction but have roots in some possible scientific fact. For thousands of years, people have wondered how to create a machine that retains, in some way, the thinking capacities of a human being. It is an interesting question for literature to attempt to answer, because, in describing the methods by which intelligence can be synthesized artificially, we must among other things create a set of definitions that describe all of ethics and morality. This has several implications for me, the most unfortunate being that I have to research the Greeks.
I would like to say that the Greeks' treatment of artificial intelligence discussed undecidability and Turing-complete problems, and had its roots firmly planted in computational theory. Of course, it didn't. In fact, unfortunately for me (and maybe fortunately for everyone else), it isn't necessary to speak very much about computational or programmatic paradigms in order to discuss the basics of artificial intelligence. Really, all we require are a few useful examples.
There are several examples that are commonly used, and indeed, I have a few that I like to refer to myself. First, there is the Everything Machine. This is a machine that has in its memory all the information in the universe, accessible like a dictionary, so that all you need to do to retrieve a piece of information is ask for it. And with all the information in the universe, it is trivial to predict any situation or reach any conclusion. Of course, we can't build an Everything Machine, but we have to remember that we're talking about fiction, and anything that provides useful context for the real world, or about which we can draw real-world conclusions, is useful and worth discussing.
The next step is the Little Baby Machine, as I call it, or the learning machine, as everyone else calls it. The learning machine has one job: it takes information from the world and puts it in a place where it can be used to make decisions in the future. This means that it effectively starts from zero and has no limit on how much it can learn.
This distinctly separates it from the third type of artificial intelligence, which I call the scripted AI. In many ways, the scripted AI is the polar opposite of the Everything Machine. The Everything Machine knows everything about the universe that it might need in order to make any decision or predict any outcome. The scripted AI, on the other hand, only knows what it has been told by the person who made it. This makes the scripted AI most similar to the learning machine, because the learning machine only knows what it has had access to through experience. However, we might also say that, even though it is impossible, the Everything Machine is most similar to the scripted AI, since everything in the universe has been scripted into it.
We cannot write an infinite number of lines of programming code, and even if we could, that infinity would merely be countable, meaning that we could not mathematically describe the nature of the entire universe even if we wrote an infinite number of lines every second of every hour, to the end of time.
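To make the three categories a little more concrete, here is a toy sketch of my own (not any real system, and the class names are invented for illustration): a scripted AI that answers only from a fixed table, a learning machine that starts empty and remembers whatever it is shown, and the Everything Machine as the impossible limiting case of a table that already contains every question in the universe.

class ScriptedAI:
    # Knows only what its maker wrote down ahead of time.
    def __init__(self, script):
        self.script = dict(script)

    def answer(self, question):
        return self.script.get(question, "I was never told about that.")


class LearningMachine:
    # Starts from zero and stores whatever experience it is given.
    def __init__(self):
        self.memory = {}

    def learn(self, question, answer):
        self.memory[question] = answer

    def answer(self, question):
        return self.memory.get(question, "I haven't experienced that yet.")


# The Everything Machine would just be a ScriptedAI whose script already
# contains every possible question in the universe: exactly the infinite,
# impossible-to-write table described above.

droid = ScriptedAI({"What is your purpose?": "Human-cyborg relations."})
baby = LearningMachine()
baby.learn("What is your purpose?", "Still figuring that out.")

print(droid.answer("What is your purpose?"))
print(baby.answer("Can machines think?"))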
It is possible to find examples of all three types in fiction from the last 3,000 years. My question is whether, or how much, we can learn about real artificial intelligence (that is, something we could really make and play with) from fictional, or even impossible, depictions in literature. Where should we start?
How about Star Wars?