
Why I Hate Science Fiction

I don’t have a very healthy relationship with works of fiction, and that goes double for science fiction. This broken relationship is at least as much a function of my personal defects as of poor effort on the part of writers. My internal critic has a rapacious appetite and is an insomniac: it is always there, weighing and judging, and it gets worse as I gain a more in-depth understanding of a subject or field. The critic engages not only when I notice something is wrong, but also when something merely appears to be wrong. It demands satisfaction, and, spoiled by the relatively easy answers of the internet, I find that an interruption and a short search usually bring enough information to verify or allay the suspicion. When it comes to technical subjects, the suspicion is almost always confirmed, and the critic wonders why the writer didn’t do a little research.


Groundhog Day

Today, tyrannies of the majority are facing off. The collective cultural consciousness is full to overflowing. Punxsutawney Phil saw his shadow, the Super Bowl will be played, and gifted actor Philip Seymour Hoffman died at a young age. Is there much room for anything else?

That is the question that runs through my mind a lot lately. As the world becomes smaller through communication, our collective culture shrinks. There are fewer writers per reader, fewer actors per movie/play-goer, fewer singers per listener, fewer artists per viewer. There is more and more overlap. There are fewer professionals per capita. Most artists are hobbyists, which is nothing new. The hobbyists just have more exposure now, so we might individually see more culture while the sum total, particularly of the professional variety, shrinks. The gap between huge successes and those toiling as glorified hobbyists is occupied by fewer and fewer individuals. The middle class is vanishing from art.

This is nothing new. Television and movies replaced stage performances. The decline of the stage reduced the need for artists, actors and producers, but it also eliminated repetition that might be viewed as wasteful. This can be viewed as a win for the collective: less talent is wasted on plays and performances that would reach a limited audience. Those talents are probably better used in an amateur capacity, at least from the perspective of the collective.

On the other side of the ledger, there is the cultural change in China as rapid urbanization occurs. China’s long-standing, massive agrarian culture is in a lot of ways similar to the Galapagos Islands made famous by Darwin’s studies. China has been a massive network of loosely connected communities that have developed their own local cultures, much like the Pacific island group was for varied species.

As the centralized government in China and the collective popular culture of the West dominate a greater portion of the lives and mind space of individuals, our total cultural capacity goes down in proportion to our population. This can be seen in writing, art and film as more chase after the latest hot topic, sound, look or genre. Everyone wants to be relevant, so they rush to what is popular.

This phenomenon isn’t new. Bigger brains have proportionally more white matter, the long-distance connective neurons. Computer processors likewise require more layers of wiring as they accumulate more transistors. The moral of the story is that a larger network must spend more of its resources on highways to facilitate the wide distribution of information. As much as this trend is inevitable, it would be nice to see the bumper crop of hobbyist artists on the internet spend less time chasing after the pack and more time producing something new and interesting.

Morality is an Elephant and We Are the Blind Men

I was re-watching the pilot of Low Winter Sun and was struck by Lennie James’ commentary on morality as the characters get ready to kill a fellow police officer. He explains that most people view morality as black and white, and that some will go to a cocktail party and call it gray. His character’s observation is that morality is a strobe, jumping all over the place. That made me think about the subject more deeply than I normally would. I don’t know exactly what the writers’ intentions were, but it was an excellent line for an actor who is superb at playing complex, morally questionable characters.

Further thought led me to a more global view of the subjective morality strobe. The analogy that comes to mind is the story of blind men inspecting an elephant. Each has a different idea of what the elephant is. Since they can’t see from one another’s perspective, they don’t completely understand the elephant or each other. Their disagreement leads to conflict, and none of them know the truth, but presume they do or find themselves confused by a barrage of differing opinions.

It’s a Sad State of Affairs When We Look to ‘Her’ to Understand Artificial Intelligence

I just came across a blog post at Popular Science that says the movie ‘Her’ is the smartest movie about AI in years. The writer of ‘Her’ had no idea what he was doing, which is par for the course, and unfortunate.

I say this is unfortunate because people look to media in all its forms to understand the world better. This takes us back to the idea that there are no big, sexy scientific achievements to inspire people to study STEM fields. That idea led ASU president Michael Crow to respond by starting Hieroglyph to get writers (namely Stephenson and Doctorow) and scientists collaborating. Unfortunately, the effort doesn’t seem to be taking off.

The title of this post is a result of my view of science fiction, which differs from Stephenson’s: I’m less concerned with inspiration than with education, which is a more peripheral concern for him. The fact is that many people get much of their scientific education from science fiction and news articles, which can lead to a lot of misinformation when the blind are leading the blind. Media about interactive AI should be about the foundations of consciousness, which I posted about four years ago. Instead, we get dull acceptance of self-aware AI or irrational fear.

We aren’t being served by the idea that the writer need only know more science than the average reader; we’re losing an important part of people’s scientific education. It is my hope that, just as historical fiction has become popular, a blending of popular science and science fiction can become popular once more. A significant chunk of what Katherine and I are trying to present in our science fiction is a view into the systems of science and engineering. Hopefully we are doing so in a way that is enjoyable too.

BlogWriMo: Some Complications

I left off previously with a mention of complications. There are complications innate to the environment I’ve vaguely outlined, and to the direction of technology in general, that make it difficult to tell interesting stories, or at least the kind that Katherine or I find interesting.

The foremost complication of a technologically advanced society is the inevitable supremacy of the computing power of inorganic systems over that of the human brain. There are several ways to handle this problem. Frank Herbert handled it in Dune by having humanity abandon synthetic computing. Asimov had his rules of robotics, which are really only half of a solution. Terminator and The Matrix make the A.I.s the villains. Most stories blithely ignore the problem. However, there is a solution somewhat similar to the rules of robotics. It has to do with work that has been done concerning consciousness, which I referred to obliquely in an earlier post. The basic notion put forth by Damasio and others is that consciousness of self arises from the connection between higher thought, specifically the ability to produce complex models, and the sensation of the body and awareness of its needs. From this, I propose that we will likely be able to keep computers as powerful tools rather than survival competitors by using our understanding of consciousness to prevent them from ever gaining it.

The rules to prevent an A.I. from becoming self-aware, and thus dangerous, revolve around the self-monitoring necessary for self-maintenance and autonomy. Basically, high-powered computational systems that act logically and autonomously can’t be linked to their own maintenance. From this basic notion comes a set of rules.

1. Maintenance is defined as the ability to monitor and/or attend to the physical needs of a system.
2. A system that monitors and/or maintains its own functionality may not have processing capability that exceeds one petaop.
3. A system that monitors operational health or provides maintenance for itself or another system may not upload data at a rate exceeding 1kbps.

These rules set specific values for processing power and communication bandwidth. The processing power cap is intended to prevent an artificial intelligence from having more processing capability than a human. This number might need to be reduced to hundreds of teraops, or increased, to land in the appropriate place. There might also need to be limits on particular types of operations. Discussions of processing power are usually confined to the realm of supercomputing, where the metric is flops: 64-bit (double precision) floating point operations per second. However, many A.I. applications lean more heavily on boolean operations, so the various types of operations might differ in importance for achieving intelligent behavior. This is relevant because supercomputers are just hitting the petaflop neighborhood, and there is growing belief that we are within shooting distance of simulating significant chunks of a mammalian brain on machines with maybe two orders of magnitude less computing power than a petaflop. My presumption is that boolean logic will simulate intelligent behavior much more efficiently, so something near a petaop should suffice to produce human-level intelligence.

The purpose of rule three is to prevent multiple computing systems from merging into a much more powerful monolith. Keeping the communication from the system that is in tune with the machine’s physical needs down to a speed comparable to conversation or ordinary reading should prevent a network of computing systems from becoming conscious of its survival needs. However, there may need to be additional limitations on the content of this communication to achieve this end.

In essence, these rules are intended to place a barrier between the system responsible for fulfilling needs and the higher-order computing systems it cares for. I have mostly focused on physical maintenance; however, systems that perform data housekeeping tasks may also need to be hidden from higher processing to prevent it from becoming self-aware.
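As an illustration only, the rules above can be thought of as a compliance check against a proposed system’s specifications. The following sketch is mine, not part of the fiction: the thresholds come from the rules as stated, while the `SystemSpec` fields and function names are invented for the example.

```python
# Hypothetical sketch of the A.I. containment rules as a compliance check.
# The numeric limits (one petaop, 1 kbps) come from rules 2 and 3 above;
# everything else here is an invented illustration.

from dataclasses import dataclass

PETAOP = 1e15             # rule 2: processing cap, operations per second
MAINT_UPLINK_BPS = 1_000  # rule 3: upload cap for maintenance systems, bits/s

@dataclass
class SystemSpec:
    processing_ops: float   # total operations per second
    self_maintaining: bool  # monitors and/or maintains its own functionality
    uplink_bps: float       # upload bandwidth of its maintenance channel

def violations(spec: SystemSpec) -> list[str]:
    """Return the rules a proposed system would break."""
    broken = []
    # Rule 2: a self-maintaining system may not exceed one petaop.
    if spec.self_maintaining and spec.processing_ops > PETAOP:
        broken.append("rule 2: self-maintaining system exceeds one petaop")
    # Rule 3: a maintenance system may not upload faster than 1 kbps.
    if spec.self_maintaining and spec.uplink_bps > MAINT_UPLINK_BPS:
        broken.append("rule 3: maintenance uplink exceeds 1 kbps")
    return broken

# A system like RCJ471, described below, breaks both limits:
rcj471 = SystemSpec(processing_ops=5e15, self_maintaining=True, uplink_bps=1e6)
print(violations(rcj471))
```

Note that a system that is not linked to its own maintenance passes regardless of raw power, which is the whole point of the barrier: capability is permitted, self-awareness of physical needs is not.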

Another, more specific complication in this project is taking a basic idea or character and translating it into this particular milieu. Katherine’s cryptic reference in her journal to the NaNoWriMo project as The Adventures of RCJ471 serves as a good example of a translation that is a microcosm of the entire project, and it also shows what one must sometimes do to keep one’s collaborator happy.

The story starts with a mutant guinea pig trapped by snake men in a ruined arena with hungry, semi-intelligent, large spiders. This guinea pig was created by Katherine to sate her appetite for silly characters. I don’t recall him having a name; I’m not even sure he was a he. When he was helped out of his situation, he needed a name, so Ray and Carl (Ray Charles and Carl’s Jr.) conspired to name him Ray Carls Junior. He was a silly little scavenger and worked fine for his environment. When the alterverse project came along, there was a need for high-tech mobile surveillance systems. This led to the idea of implanting a powerful computer and some cybernetics into rodents, which could move about relatively unnoticed. From there, it isn’t much of a leap to a guinea pig model that maybe decided he didn’t like the people he worked for and set out into the world under a name he took from some signs he saw in the wreckage of the Phoenix metro.

At this point, we are most of the way to RCJ471. It is the model number, or at least part of the model number, for a surveillance robot custom-built during the fall of our advanced civilization. He breaks the A.I. rules I mentioned earlier: he is self-sufficient, with more computing power and communication bandwidth than regulation allows. For some reason, he feels compelled to help a group of advanced humans (homo facti) who are able to survive on relatively normal food, unlike their many sister species that have very specialized diets and are currently in conflict over the waning supply of what they need to survive.

BlogWriMo: The Long Winding Road

In conjunction with Katherine’s efforts this month on NaNoWriMo, I have decided to write some blog entries that provide a look into our novel creation process from my point of view, using her current project as an example. I will be posting about the fiction and science that have inspired my ideas, as well as some of the content from my notes and from our conversations about the world and the story.

This project is in many ways very raw: Katherine first bugged me to get something together maybe two weeks ago, while I was working on a paper for my cellular and molecular neuroscience course, so I started work in earnest on Thursday the 29th, a few days before NaNoWriMo began. It is also a very old project, derivative of many failed attempts to bring coherent science to a gritty post-apocalyptic world where the mind and body can do things that would seem like magic to us.

In the past, I have been left unsatisfied with my efforts to define such an environment, because it is very difficult to justify radical changes that just aren’t possible with any reasonable derivative of human physiology. In a nutshell, mutation through radiation, biological agents, or natural processes isn’t going to suffice; the leap from here to there is just too large. Drastic physiological changes would need to occur at the sub-cellular level, which would completely derail the organismal development process, a process that is very sensitive to small changes in fundamental characteristics such as the structure of a protein or the presence of an engulfed organism such as mitochondria or chloroplasts.

Almost two years ago, I had an idea that was a result of a discussion about spirituality, which convinced me that the only way the things I wanted to have happen could occur was through influence from outside our universe. Therefore, I would introduce supernatural phenomena through the influence of another universe with completely different physics colliding with ours. Beings and phenomena from that universe, the alterverse, could physically influence ours in ways that defied the laws of physics. Further, some individuals in our universe were magnets for beings of the alterverse and could influence their actions, which would give them the potential I wanted. The phenomena from the alterverse could also have a cataclysmic effect upon civilization.

It seemed like I’d created something that might provide me with what I was after. However, the more I worked on the specific dynamics of the alterverse and my local environment, Phoenicia (what remained of Phoenix, AZ), the less satisfied I was. I’m prone to getting bored with ideas as I work them out, but this idea was getting away from me, becoming less and less what I’d set out to create. We were about three years into the Weordan project, which still needed a lot of attention, so I just dropped the project.

It’s been churning around in my head ever since.

The solution to my problem may have been hanging around in my head since the first post I made to this blog. Interestingly, this site and blog are a product of what was going on with the alterverse project, which was also called the continua project. Therefore, it seems appropriate that this project has come full circle to the idea that evolution for homo sapiens is primarily occurring through changes in social organization rather than biological changes. Here are the first words I wrote on the blank sheet I started with on Thursday.

Homo sapiens was constrained by developmental parameters that no longer apply given advanced medical technology and the support systems of modern civilization. Through natural mutation outside of previous survival parameters, new developmental sequences might emerge, eventually becoming radical enough to cause speciation. However, the same medical technology that enables survival through abnormal development cycles also allows manipulation of genetic and epigenetic factors to produce novel development cycles and radically different phenotypes.

From this, it should be evident that I like to think about systems to get the ball rolling on an idea. In this case, I had already decided that the source of my unusual capabilities would be a result of human engineering, a process that I’ve come to realize is more complex than just genetics. There has to be an allowance for an organismal development cycle to build exotic structures capable of producing novel capabilities.

This is related to my earlier posts on the singularity: the continual evolution of social systems would drive humanity to split into highly specialized, genetically enhanced species that might not resemble the original, each increasingly insulated from the others by economic and communication protocols designed to enhance the efficient exchange of information, goods and services. Further, at some point the system would become so interdependent that a small group of disruptions could cause a cascade leading to the collapse of the whole. This might give me the kind of environment I’ve been looking for, though there are a multitude of complications still to be dealt with.