Tag Archives: computing

BlogWriMo: Some Complications

I left off previously with mention of complications. There are complications innate to the environment I’ve vaguely outlined, and to the direction of technology in general, that make it difficult to tell interesting stories, or at least stories that Katherine or I find interesting.

The foremost complication of a technologically advanced society is the inevitable supremacy of the computing power of inorganic systems over that of the human brain. There are several ways to handle this problem. Frank Herbert handled it in Dune by having humanity abandon synthetic computing. Asimov had his rules of robotics, which are really only half of a solution. Terminator and The Matrix make the A.I.s the villains. Most stories blithely ignore the problem. However, there is a solution that is somewhat similar to the rules of robotics. It has to do with work that has been done on consciousness, which I referred to obliquely in an earlier post. The basic notion put forth by Damasio and others is that it is the connection between higher thought, specifically the ability to produce complex models, and the sensation of the body and awareness of its needs that produces consciousness of self. From this, I propose that we will likely be able to keep computers as powerful tools rather than survival competitors by using our understanding of consciousness to prevent them from gaining it.

The rules to prevent an A.I. from becoming self-aware, and thus dangerous, would revolve around the self-monitoring necessary for self-maintenance and autonomy. Basically, high-powered computational systems that act logically and autonomously can’t be linked to their own maintenance. From this basic notion comes a set of rules; a rough sketch of how a design might be checked against them follows the list.

1. Maintenance is defined as the ability to monitor and/or attend to the physical needs of a system.
2. A system that monitors and/or maintains its own functionality may not have processing capability that exceeds one petaop.
3. A system that monitors operational health or provides maintenance for itself or another system may not upload data at a rate exceeding 1kbps.
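
To make the limits concrete, here is a minimal sketch of what a compliance check might look like. The SystemSpec structure, its field names, and the example numbers are all invented for illustration; only the two caps come from the rules above.

    from dataclasses import dataclass

    PETAOP = 1e15          # rule 2 cap: operations per second
    MAX_UPLOAD_BPS = 1000  # rule 3 cap: 1 kbps

    @dataclass
    class SystemSpec:
        name: str
        self_maintaining: bool   # does it monitor/attend to its own physical needs? (rule 1)
        ops_per_second: float    # peak processing capability
        upload_bps: float        # outbound data rate

    def violations(spec: SystemSpec) -> list[str]:
        """Return the rules a proposed system would break."""
        problems = []
        if spec.self_maintaining and spec.ops_per_second > PETAOP:
            problems.append("Rule 2: a self-maintaining system may not exceed one petaop")
        if spec.self_maintaining and spec.upload_bps > MAX_UPLOAD_BPS:
            problems.append("Rule 3: a maintenance system may not upload faster than 1 kbps")
        return problems

    # A design like RCJ471, described below, would fail both checks.
    print(violations(SystemSpec("RCJ471", True, 5e15, 1e6)))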

These rules include specific values for processing power and communication bandwidth. The processing power cap is intended to prevent an artificial intelligence from having more processing capability than a human. This number might need to be reduced to hundreds of teraops, or increased, to land in the appropriate place. There might also need to be some limitation with respect to particular types of operations. Normally, discussions of processing power are limited to the realm of supercomputing, where the unit is flops, usually 64-bit (double precision) floating-point operations per second. However, many A.I. applications lean more heavily on Boolean operations, so the various types of operations might have differing importance in achieving intelligent behavior. This is relevant because supercomputers are just hitting the petaflop neighborhood, and there is growing belief that we are within shooting distance of simulating significant chunks of a mammalian brain on supercomputers with maybe two orders of magnitude less computing power than a petaflop. My presumption is that Boolean logic will simulate intelligent behavior much more efficiently. Therefore, something near a petaop should suffice to produce intelligence comparable to a human’s.

The purpose of rule three is to prevent multiple computing systems from combining into a much more powerful monolith. Keeping communication from the system that is in tune with the hardware’s physical needs down to a speed comparable to conversational or regular reading speed should prevent a network of computing systems from becoming conscious of its survival needs. However, there may need to be additional limitations on the content of this communication to achieve this end.

In essence, these rules are intended to place a barrier between the system that is responsible for fulfilling needs and the higher-order computing systems it cares for. I have mostly focused on physical maintenance; however, systems that perform data housekeeping tasks may also need to be hidden from higher processing to prevent it from becoming self-aware.

Another, more specific complication in this project is taking a basic idea or character and translating it into this particular milieu. Katherine’s cryptic reference in her journal to the NaNoWriMo project as The Adventures of RCJ471 serves as a good example of a translation that is a microcosm of the entire project, and it also shows what one must sometimes do to keep one’s collaborator happy.

The story starts with a mutant guinea pig trapped by snake men in a ruined arena with hungry, semi-intelligent, large spiders. This guinea pig was created by Katherine to sate her appetite for silly characters. I don’t recall him having a name. I’m not even sure he was a he. When he was helped out of his situation, he needed a name, so Ray and Carl (as in Ray Charles and Carl’s Jr.) conspired to name him Ray Carls Junior. He was a silly little scavenger and worked fine for his environment. When the alterverse project came along, there was a need for high-tech mobile surveillance systems. This led to the idea of implanting a powerful computer and some cybernetics into rodents, which could move about relatively unnoticed. From there, it isn’t much of a leap to a guinea pig model that maybe decided he didn’t like the people he worked for and set out into the world under a name he took from some signs he saw in the wreckage of the Phoenix metro.

At this point, we are most of the way to RCJ471. It is the model number, or at least part of the model number, for a surveillance robot custom-built during the fall of our advanced civilization. He breaks the A.I. rules I mentioned earlier because he is self-sufficient, with more computing power and communication bandwidth than regulation allows. For some reason, he feels compelled to help a group of advanced humans (homo facti) that are able to survive on relatively normal food, unlike the many other sister species that have very specialized diets and are currently in conflict over the waning supply of what they need to survive.

A Man With a Short Memory

In an interview with BusinessWeek, AMD CEO Dirk Meyer shows that he doesn’t understand where the computing market is going.

Yet that is one PC segment that’s a little healthier right now and AMD isn’t participating in it. Why is that?
We consciously focused our R&D dollars, which obviously given our size are smaller than Intel’s, on the big mainstream markets as they exist today. Knowing that this trend toward lower power consumption and more mobility is going to happen, we just decided to load that into the R&D pipeline for later. It’s not a big volume target and not a big dollar opportunity.…One of the saddest things about the PC industry right now is, since late last year, all anyone seems to want to ask about is netbooks. Good grief! It’s a low-cost limited-function device. There’s not much excitement or money in dollar volume there.

The writing was on the wall years ago when the first notebook became available for $1000. Processing is going mobile. Sales in both volume and value are higher for notebooks than for desktops. Not only did Intel see this first, but the CEO of AMD has firmly planted his head in the sand on the matter. He’s worried about winning a war in notebooks that AMD has already lost, and he has ceded the next frontier to Intel. The game isn’t going to remain notebooks or netbooks. It will go smaller. It will be cell phones and other personal communications devices that evolve into the next computing platform. While AMD is worried about the “big money”, Intel is setting the stage to make a run at the king of cell phone processing, ARM. Not only do Meyer’s words speak to his short memory with regard to how AMD got into its most recent trouble, but the company’s lack of preparedness in terms of products shows it as well.

Cloud Computing Isn’t New and Isn’t the Answer

Over the last couple of weeks, quite a bit has been written about cloud computing, most recently about the joint effort by Intel, HP, and Yahoo. The article I linked to makes a very important point: the notion of a dumb terminal that has services delivered to it by a server is as old as computing itself. Whenever there is an application that can only be performed in a timely manner by expensive hardware, centralized computing becomes the paradigm for those who want to run that application. When the PC proved up to the task of word processing and spreadsheets in the early ’80s, server sales went into a nosedive. There were still databases that needed to be managed centrally, so the server never went away, but its role was diminished. Currently, smart phones just don’t have the processing power to do a whole lot, but that will change rather rapidly, as I suggested a couple of weeks ago. We are getting to the point where many users don’t know what to do with the capabilities of a low-end PC or notebook. Why do we think inflicting the poor reliability of network access on computing would be better than the dominant decentralized paradigm?

Increasing Demands on Cell Phones Will Advantage x86

We are fast approaching the point where cell-phone-sized devices will have sufficient processing power to perform functions traditionally executed by a personal computer. There is some question as to whether this is necessary or desirable. With access to the internet, the processing can be performed by machines on the network. This goes back to the conflict between centralized and decentralized processing. Centralized processing is desirable when the cost of processing power is high; in this particular case, the expectation would be that processing power is cheaper in a server form factor than in a cell phone. But this solution has costs relative to the decentralized one. First, reliability is lower, because the reliability of the network affects every processing task. Second, network traffic increases, because communication must take place for every processing task in addition to ordinary communication. Third, privacy is compromised, to the extent that personal information is stored on a server that is likely not under the user’s control and is transferred frequently between the mobile device and the server. For these three reasons, decentralized processing is very desirable. Therefore, it will be desirable to have hand-held computing.
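
To put a rough number on the reliability point, here is a quick back-of-envelope sketch; the availability figures are made up purely for illustration.

    # With centralized processing, every task depends on the device, the network,
    # and the server all being up at once. The numbers below are invented.
    device_availability = 0.999
    network_availability = 0.98
    server_availability = 0.995

    decentralized = device_availability
    centralized = device_availability * network_availability * server_availability

    print(f"decentralized: {decentralized:.3f}")  # ~0.999
    print(f"centralized:   {centralized:.3f}")    # ~0.974, dominated by the network

Even with generous assumptions, the network term drags the whole chain down, which is the core of the reliability objection.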

Given that hand-held computing will happen, and it is just a matter of when, there is still the question of which platform will dominate. Intel and AMD are reducing the power consumption of their x86-based microprocessors and platforms, while ARM-based smart phones are becoming more powerful every year. On the software side, it will likely be Windows and Linux versus Nokia’s open-source Symbian. Apple seems flexible enough to operate on either x86 or ARM, as it currently does with the Mac and the iPhone, though its software development is much more extensive on x86. Presuming that processing migrates into mobile devices rather than becoming centralized, compatibility will remain important. This will lead to a natural advantage for x86 as functionality on cell phones comes to resemble regular computing.

There is also a question of performance. The challenge for x86 is to reduce power consumption to a level appropriate for a cell-phone-sized device. For ARM, the challenge is to scale up the capabilities of the architecture for general-purpose computing. Intel has made it halfway by scaling processor power consumption down to 0.5 W while retaining the ability to run Windows XP with Atom. That is low enough to be viable in a smart phone. The holdup is the rest of the platform, which has been rather disappointing: Intel has failed to sufficiently reduce the size and power consumption of the chipset, Poulsbo. This could take a couple more revisions to iron out, which may have motivated Apple to look for an alternative and possibly prompted the acquisition of P.A. Semi. It seems likely that the plan is to develop a high-performance cell phone platform on either the Power or the ARM architecture. It isn’t evident which, since P.A. Semi is rumored to be discontinuing support for its PWRficient processors, which are based on the Power architecture. That implies they are going in a different direction, possibly scaling ARM up rather than scaling Power down to meet cell-phone-sized computing needs. Needless to say, it will be interesting to see what develops, as there is considerable microprocessor design talent at P.A. Semi.

History Will Continue to Repeat Itself

We are on the cusp of a new era in computing. As microprocessors become more powerful and have more cores per die, there is less need for additional general-purpose computational power. On the desktop, the computational load is primarily graphics, image processing, or encoding/decoding of music and video. These tasks are computation-heavy and branch/logic-light, much like traditional supercomputing. As a result, the major microprocessor producers have been moving toward more floating-point computational power in their processors. IBM produced the Cell processor, a PowerPC core paired with eight simpler vector processing cores, which is the workhorse of the first petaflop computer, obviously the fastest in the world. The top two graphics processing companies, nVidia and AMD, are also becoming more concerned with developing programming tools that allow the computational power of their graphics processors to be used for purposes other than graphics. Finally, Intel will be extending x86 into vector processing when it produces an x86-based graphics accelerator codenamed Larrabee.
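
For a toy illustration of what “computation-heavy and branch/logic-light” means, consider adjusting the brightness of a video frame: the same multiply is applied to millions of pixels with essentially no decisions in between, which is exactly the shape of work that wide floating-point and vector hardware is built for. The array sizes and gain value here are arbitrary.

    import numpy as np

    frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # one HD frame of pixel data

    def brighten(pixels, gain=1.2):
        # One data-parallel multiply and clamp; no per-pixel branching.
        return np.clip(pixels * gain, 0.0, 1.0)

    brighter = brighten(frame)
    print(brighter.shape, brighter.dtype)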

Computational power has always been important in research. Simulating nuclear devices is computation-intensive, so the DOE has always had a top-notch system in New Mexico. However, a new field is opening up that requires much more: biology, specifically the task of understanding protein folding and interaction. Stanford’s Folding@Home program asks people to lend the processing power their computers aren’t using to protein folding calculations. From the beginning, the PS3, which is powered by IBM’s Cell processor, has been a strong contributor to the program. Recently, the project has also developed a client in CUDA, nVidia’s proprietary language, which promises to bring the substantially higher processing power of GPUs to bear on the protein folding problem.

The only problem with Folding@Home is that the processing power of individual contributors’ machines is so small that it is really not possible to simulate a significantly long folding sequence. At least, that is the claim made by D. E. Shaw. There is also an article in The New York Times, which is less technical.

More or less, Shaw’s argument is that a dedicated supercomputer is needed and that, in about five years, he can produce a specialized ASIC that will do the job 1000 times faster than the processors used in current supercomputers. Unfortunately, while there will be approximately a 10x shrink in that time, supercomputers will be in excess of 100 times more powerful, possibly 1000. This is because all but one of the top supercomputers are powered by Intel Xeon, AMD Opteron, or IBM Power processors. The emergence of the new Cell-based system IBM built for the DOE, and Intel’s deals with Cray and DreamWorks, suggest that mainstream supercomputing will no longer be driven by just general-purpose CPUs, which aren’t very efficient at raw computing. Larrabee will be a big part of this, as will Cell and CUDA. D. E. Shaw’s Anton is going to be yet another specialty chip that will be marginalized by higher-volume processors.
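
To make the arithmetic explicit, here is a quick sketch using only the numbers above: Shaw’s claimed 1000x and my 100x to 1000x estimate for supercomputer growth over the same five years.

    # How much of Anton's promised 1000x advantage survives if general-purpose
    # supercomputers also keep improving? Uses only the figures from the post.
    anton_speedup_today = 1000.0  # Shaw's claim vs. current supercomputer processors

    for supercomputer_growth in (100.0, 1000.0):  # low and high growth estimates
        remaining = anton_speedup_today / supercomputer_growth
        print(f"{supercomputer_growth:.0f}x supercomputer growth leaves Anton roughly {remaining:.0f}x ahead")

At the high end of that range, the advantage evaporates entirely, which is the point of the comparison.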