Monthly Archives: November 2009

Microsoft Invades WSJ

It appears that Microsoft may be paying the Wall Street Journal to block Google from indexing its web site. Having already contracted with Yahoo to provide its web search, Microsoft appears to be taking the next step to compete with Google. Interestingly, Nicholas Carlson at Business Insider agrees with my assertion that Google is profiting from other companies’ content, or at least understands that the objective is to make Google pay for content. It also seems that Microsoft intends to make this happen on a large scale, starting a war with Google in Google’s back yard for a change.

BlogWriMo: Some Complications

I left off previously with a mention of complications. There are complications innate to the environment I’ve vaguely outlined, and to the direction of technology in general, that make it difficult to tell interesting stories, or at least stories that Katherine or I find interesting.

The foremost complication of a technologically advanced society is the inevitable supremacy of the computing power of inorganic systems over that of the human brain. There are several ways to handle this problem. Frank Herbert handled it in Dune by having humanity abandon synthetic computing. Asimov had his rules of robotics, which really are only half of a solution. Terminator and The Matrix make the A.I.s the villains. Most stories blithely ignore the problem. However, there is a solution that is somewhat similar to the rules of robotics. It has to do with work that has been done concerning consciousness, which I referred to obliquely in an earlier post. The basic notion put forth by Damasio and others is that it is the connection between higher thought, specifically the ability to produce complex models, and the sensation of the body and awareness of its needs that produces consciousness of self. From this, I propose that we will likely be able to keep computers as powerful tools rather than survival competitors by using our understanding of consciousness to prevent them from gaining it.

The rules to prevent an A.I. from becoming self-aware, and thus dangerous, would revolve around the self-monitoring necessary for self-maintenance and autonomy. Basically, high-powered computational systems that act logically and autonomously can’t be linked to their own maintenance. From this basic notion comes a set of rules.

1. Maintenance is defined as the ability to monitor and/or attend to the physical needs of a system.
2. A system that monitors and/or maintains its own functionality may not have processing capability that exceeds one petaop.
3. A system that monitors operational health or provides maintenance for itself or another system may not upload data at a rate exceeding 1kbps.
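To make the interaction of the three rules concrete, here is a small sketch of how they might be checked against a proposed system’s specifications. All of the names, the dataclass, and the checking function are my own illustration; the thresholds are the ones stated in the rules, with “one petaop” read as operations per second.

```python
# Illustrative encoding of the three A.I. containment rules above.
# The class and function names are hypothetical; only the numeric
# caps come from the rules themselves.

from dataclasses import dataclass

MAX_SELF_MAINTAINING_OPS = 1e15     # rule 2: one petaop (per second)
MAX_MAINTENANCE_UPLOAD_BPS = 1_000  # rule 3: 1 kbps upload

@dataclass
class SystemSpec:
    ops_per_second: float    # total processing capability
    upload_bps: float        # upload bandwidth
    self_maintaining: bool   # monitors/attends to its own physical needs (rule 1)
    maintains_others: bool   # monitors or maintains another system

def violations(spec: SystemSpec) -> list[str]:
    """Return the list of rules a proposed system would break."""
    broken = []
    if spec.self_maintaining and spec.ops_per_second > MAX_SELF_MAINTAINING_OPS:
        broken.append("rule 2: self-maintaining system exceeds one petaop/s")
    if ((spec.self_maintaining or spec.maintains_others)
            and spec.upload_bps > MAX_MAINTENANCE_UPLOAD_BPS):
        broken.append("rule 3: maintenance system uploads faster than 1 kbps")
    return broken

# A self-sufficient robot with extra computing power and bandwidth
# (like RCJ471, discussed below) breaks both caps:
rogue = SystemSpec(ops_per_second=5e15, upload_bps=1e6,
                   self_maintaining=True, maintains_others=False)
print(violations(rogue))
```

The point the sketch makes is that the rules only bite when a system crosses the maintenance boundary: a powerful system that is *not* self-maintaining passes both checks.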

These rules attach specific values to processing power and communication bandwidth. The processing cap is intended to prevent an artificial intelligence from having more processing capability than a human. That number might need to be reduced to hundreds of teraops, or increased, to land in the appropriate place. There might also be some sort of limitation on particular types of operations. Normally, discussions of processing power are limited to the realm of supercomputing, where the unit is flops, usually 64-bit (double precision) floating point operations per second. However, many A.I. applications are heavier in boolean operations, so the various types of operations might have differing importance in achieving intelligent behavior. This is relevant because supercomputers are just hitting the petaflop neighborhood, and there is growing belief that we are within shooting distance of simulating significant chunks of a mammalian brain on supercomputers with maybe two orders of magnitude less computing power than a petaflop. My presumption is that boolean logic will simulate intelligent behavior much more efficiently. Therefore, something near a petaop should suffice to produce human-level intelligence.

The purpose of rule three is to prevent multiple computing systems from merging into a much more powerful monolith. Keeping communication from the system that is in tune with the machine’s physical needs down to a speed comparable to conversational or ordinary reading speed should prevent a network of computing systems from becoming conscious of its survival needs. However, there may need to be additional limitations on the content of this communication to achieve this end.
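As a sanity check on whether 1 kbps really is in the neighborhood of human reading speed, here is a rough calculation. The reading rate and characters-per-word figures are my own assumptions:

```python
# How does the rule 3 cap compare to human reading speed?
# Assumptions: ~250 words/min reading, ~6 ASCII characters per
# word (including the space), 8 bits per character.
words_per_min = 250
bits_per_word = 6 * 8

reading_bps = words_per_min * bits_per_word / 60  # bits per second
cap_bps = 1_000                                   # rule 3 upload cap

print(reading_bps)            # → 200.0 bits/s
print(cap_bps / reading_bps)  # → 5.0, the cap is ~5x reading speed
```

Under these assumptions the 1 kbps cap is about five times a brisk reading speed, so it is comfortably “conversational” in scale while still leaving no room for bulk data transfer.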

In essence, these rules are intended to place a barrier between the system responsible for fulfilling needs and the higher-order computing systems it cares for. I have mostly focused on physical maintenance; however, systems that perform data housekeeping tasks may also need to be hidden from higher processing to prevent it from becoming self-aware.

Another, more specific complication in this project is taking a basic idea or character and translating it into this particular milieu. The cryptic reference in Katherine’s journal to the NaNoWriMo project as The Adventures of RCJ471 serves as a good example of a translation that is a microcosm of the entire project, and also shows what one must sometimes do to keep one’s collaborator happy.

The story starts with a mutant guinea pig trapped by snake men in a ruined arena with hungry, semi-intelligent, large spiders. This guinea pig was created by Katherine to sate her appetite for silly characters. I don’t recall him having a name. I’m not even sure he was a he. When he was helped out of his situation, he needed a name, so Ray and Carl (Ray Charles and Carl’s Jr.) conspired to name him Ray Carls Junior. He was a silly little scavenger and worked fine for his environment. When the alterverse project came along, there was a need for high-tech mobile surveillance systems. This led to the idea of implanting a powerful computer and some cybernetics into rodents, which could move about relatively unnoticed. From there, it isn’t much of a leap to a guinea pig model that decided he didn’t like the people he worked for and set out into the world under a name he got from some signs he saw in the wreckage of the Phoenix metro.

At this point, we are most of the way to RCJ471. It is the model number, or at least part of the model number, for a surveillance robot custom-built during the fall of our advanced civilization. He breaks the A.I. rules I mentioned earlier because he is self-sufficient, with more computing power and communication bandwidth than regulation allows. For some reason, he feels compelled to help a group of advanced humans (homo facti) who are able to survive on relatively normal food, unlike the many sister species that have very specialized diets and are currently in conflict over the waning supply of what they need to survive.

BlogWriMo: The Long Winding Road

In conjunction with Katherine’s efforts this month on NaNoWriMo, I have decided to write some blog entries that provide a look into our novel-creation process from my point of view, using the novel she is writing this month as an example. I will be posting about the fiction and science that have inspired my ideas, as well as some of the content from my notes and our conversations about the world and the story.

This project is in many ways very raw: Katherine first bugged me to get something together maybe two weeks ago, while I was working on a paper for my cellular and molecular neuroscience course, so I started work in earnest on Thursday the 29th, a few days before NaNoWriMo began. It is also a very old project, derivative of many failed attempts to bring coherent science to a gritty post-apocalyptic world where the mind and body can do things that would seem like magic to us.

In the past, I have been left unsatisfied with my efforts to define such an environment because it is very difficult to justify radical changes that just aren’t possible with any reasonable derivative of human physiology. In a nutshell, mutation through radiation, biological agents, or natural processes isn’t going to suffice; the leap from here to there is just too large. Drastic physiological changes would need to occur at the sub-cellular level, which would completely derail organismal development, a process that is very sensitive to small changes in fundamental characteristics such as the structure of a protein or the presence of an engulfed organism such as mitochondria or chloroplasts.

Almost two years ago, I had an idea that was a result of a discussion about spirituality, which convinced me that the only way the things I wanted to have happen could occur was through influence from outside our universe. Therefore, I would introduce supernatural phenomena through the influence of another universe with completely different physics colliding with ours. Beings and phenomena from that universe, the alterverse, could physically influence ours in ways that defied the laws of physics. Further, some individuals in our universe were magnets for beings of the alterverse and could influence their actions, which would give them the potential I wanted. The phenomena from the alterverse could also have a cataclysmic effect upon civilization.

It seemed like I’d created something that might provide me with what I was after. However, the more I worked on the specific dynamics of the alterverse and my local environment, Phoenicia (what remained of Phoenix, AZ), the less satisfied I was. I’m prone to getting bored with ideas as I work them out, but this idea was getting away from me, becoming less and less what I’d set out to create. We were about three years into the Weordan project, which still needed a lot of attention, so I just dropped the project.

It’s been churning around in my head ever since.

The solution to my problem may have been hanging around in my head since the first post I made to this blog. Interestingly, this site and blog are a product of what was going on with the alterverse project, which was also called the continua project. Therefore, it seems appropriate that this project has come full circle to the idea that evolution for homo sapiens is primarily occurring through changes in social organization rather than biological changes. Here are the first words I wrote on the blank sheet I started with on Thursday.

Homo sapiens was constrained by developmental parameters that no longer apply given advanced medical technology and the support systems of modern civilization. Through natural mutation outside of previous survival parameters, new developmental sequences might emerge, eventually becoming radical enough to cause speciation. However, the same medical technology that enables survival during abnormal development cycles also allows manipulation of genetic and epigenetic factors to produce novel development cycles and radically different phenotypes.

From this, it should be evident that I like to think about systems to get the ball rolling on an idea. In this case, I had already decided that the source of the unusual capabilities would be human engineering, a process I’ve come to realize is more complex than just genetics: there has to be an allowance for an organismal development cycle that builds exotic structures capable of producing novel capabilities.

This is related to my earlier posts on the singularity in that the continual evolution of social systems would drive humanity to diverge into highly specialized, genetically enhanced species that might not resemble the original and would become increasingly insulated from one another by economic and communication protocols designed to enhance the efficient exchange of information, goods, and services. Further, at some point the system would become so interdependent that a small group of disruptions could cause a cascade leading to the collapse of the whole system. This might give me the kind of environment I’ve been looking for, though there are a multitude of complications still to be dealt with.