Monthly Archives: July 2008

Cloud Computing Isn’t New and Isn’t the Answer

Over the last couple of weeks, there has been quite a bit written about cloud computing, most recently about the effort by Intel, HP, and Yahoo. The article I linked to makes a very important point. The notion of a dumb terminal that has services delivered to it by a server is as old as computing itself. Whenever there is an application that can only be performed in a timely manner by expensive hardware, centralized computing becomes the computing paradigm for those who want to run that application. When the PC proved up to the task of word processing and spreadsheets in the early '80s, server sales went into a nose-dive. There were still databases that needed to be managed centrally, so the server never went away, but its role was diminished. Currently, smart phones just don't have the processing power to do a whole lot, but that will change rather rapidly, as I suggested a couple of weeks ago. We are getting to the point where many users don't know what to do with the capabilities of a low-end PC or notebook. Why do we think inflicting the poor reliability of network access on computing would be better than the dominant decentralized paradigm?

Realistic Portrayal Through Gender Tendencies

When Katherine and I work on a book, we try to make sure that characters are distinctive and realistic. One of the most important ways we do this is to differentiate the way they look at the world, other people, and the problems that arise. My approach is to have a model of these sorts of behaviors which is linked in a credible manner to people in the real world.

What I've learned from experience is well characterized by a specific encounter I had last fall in one of my classes. It was near the end of the semester, and a fellow grad student was worried about an assignment to write up a mock grant proposal. Her concern was that she hadn't been provided sufficient specific guidance to assure her that she would get a high grade on the assignment. I was less concerned, for two reasons. First, I'm going to do what I'm going to do regardless of the instructions. Second, it had become clear to me that this instructor was more concerned with our showing original thought in our proposals than with our conforming to some standard. I tried to convey to her that her concerns weren't warranted in this situation, but failed to satisfy her. To some degree, it seemed that she was looking for an opportunity to vent her frustration rather than for a solution. The differences between us were evident in more than just our approaches to the grant proposal assignment. Our behaviors were considerably different during the conversation. I have a tendency not to look at the face of the person I'm talking to because it is distracting. It takes considerable effort for me to translate my thoughts into words, and keeping track of the expression on someone's face while doing so is too much multi-tasking for me. This may cause me to miss some part of what is going on with other people. However, I can still get an idea of what they are feeling and whether they are being genuine from the tone of their voice and my general assessment of them as a person. Meanwhile, she was intently studying my facial expression to the point of it being a bit uncomfortable.

My proposition is that the aforementioned fellow student and I represent fairly polarized points on a continuum where there is a trade-off between two modes of thinking. She operates in a mode of mostly associating details directly with experience to reach a conclusion. In contrast, I look at the relationships between the details themselves to recognize a system from which to draw a conclusion. Her approach is more common amongst women while mine is more common amongst men. However, there is significant overlap between the genders, with some hybridization of the two strategies going on in many people.

This model is supported by gender tendencies with respect to navigation. In humans, males tend to prefer a visual map when given instructions to find a destination, while females tend to prefer explicit instructions. Similarly, female and feminized male rats tend to rely upon landmarks when navigating mazes, while males and masculinized females tend to rely upon room geometry. Instructions and landmarks are essentially directly related to the goal: getting to the next landmark or step in the directions. This is similar to the previously proposed female tendency to identify direct relationships. The male tendencies are also consistent with the aforementioned premise. Having a map in one's head or depending upon the shape of a room is working from the associations between multiple characteristics of the area being navigated.

This particular model best relates to writing in that it suggests what information is most important to a given character. It allows characters to be somewhat distinctive and internally consistent. When confronted by someone who is nervous, the more feminine character might be more inclined to notice specific behavioral abnormalities, such as tense muscles, that suggest something is amiss. Meanwhile, the more masculine character might be more focused on why the individual might be motivated to be deceitful and be watchful for behavior that confirms this. Depending upon the circumstances, one approach may be more effective than the other. In any case, they will not be paying attention to the same information and can't be written the same way. This is difficult to do when one is writing from the perspective of a character who thinks differently than oneself.

Increasing Demands on Cell Phones Will Advantage x86

We are fast approaching the point where cell phone sized devices will have sufficient processing power to perform functions traditionally executed by a personal computer. There is some question as to whether this is necessary or desirable. With access to the internet, the processing can be performed by machines on the network. This goes back to the conflict between centralized and decentralized processing. Centralized processing is desirable when the cost of processing power is high. In this particular case, the expectation would be that the cost of processing power is lower in a server form factor than in a cell phone. However, this solution carries costs relative to the decentralized one. First, reliability is lower because the reliability of the network affects all processing tasks. Second, network traffic is increased because communication must take place for all processing tasks as well as other communication. Third, privacy is compromised to the extent that personal information is stored on a server that is likely not under the control of the user and is transferred frequently between the mobile device and the server. For these three reasons, it is very desirable to have decentralized processing. Therefore, it will be desirable to have hand-held computing.

Given that hand-held computing will happen, and it is just a matter of when, there is still the question of what will become the dominant platform. Intel and AMD are reducing the power consumption of their x86-based microprocessors and platforms while ARM-based smart phones are becoming more powerful every year. On the software side, it will likely be Windows and Linux versus Nokia's open-source Symbian. Apple seems flexible enough to operate on either x86 or ARM, as they are currently doing with the Mac and the iPhone, though their software development is much more extensive on x86. Presuming that processing will migrate into mobile devices and will not become centralized, compatibility will remain important. This will lead to a natural advantage for x86 as functionality on cell phones comes to resemble regular computing.

There is also a question of performance. The challenge for x86 is to reduce power consumption down to a scale appropriate to a cell phone sized device. For ARM, the challenge is to scale up the capabilities of the architecture for general purpose computing. Intel has made it half-way by scaling processor power consumption down to 0.5W with Atom while retaining the ability to run Windows XP. This is low enough power consumption to be a viable smart phone chip. The holdup is with the rest of the platform, which has been rather disappointing. Intel has failed to sufficiently reduce the size and power consumption of the chipset, Poulsbo. This could take a couple more revisions to iron out, a fact which may have motivated Apple to look for an alternative, possibly prompting the acquisition of P.A. Semi. It seems likely that the plan is to develop a high-performance cell phone platform on either the Power or ARM architecture. It isn't evident which, since P.A. Semi is rumored to be discontinuing support of their PWRficient processors, which are based on the Power architecture. This implies that they are going in a different direction, possibly scaling ARM up instead of scaling Power down to meet cell phone sized computing needs. Needless to say, it shall be interesting to see what develops, as there is considerable microprocessor design talent at P.A. Semi.

History Will Continue to Repeat Itself

We are on the cusp of a new era in computing. As microprocessors become more powerful and have more cores per die, there is less need for additional general purpose computational power. On the desktop, the computational load is primarily graphics, image processing, or encoding/decoding of music and video. These tasks are computation heavy and branch/logic light, much like traditional supercomputing. As a result, the major microprocessor producers have been moving toward more floating point computational power in their processors. IBM produced the Cell processor, a PowerPC core with eight simpler vector processing cores, which is the workhorse for the first petaflop computer, obviously the fastest in the world. The top two graphics processing companies, nVidia and AMD, are also becoming more concerned with developing programming tools that allow the computational power of their graphics processors to be used for purposes other than graphics. Finally, Intel will be extending the x86 code base for vector processing when they produce an x86-based graphics accelerator codenamed Larrabee.
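To make the compute-heavy, branch-light character of these workloads concrete, here is a minimal sketch in Python with NumPy. It is purely my own toy illustration, not code from any of the products mentioned: a per-pixel brightness and contrast adjustment that applies the same arithmetic to every element with no data-dependent branching, which is exactly the shape of work that keeps vector units and GPUs busy.

```python
import numpy as np

def adjust_image(pixels, gain, bias):
    """Apply the same arithmetic to every pixel; no data-dependent branching."""
    out = gain * pixels + bias           # one multiply-add per element
    return np.clip(out, 0.0, 255.0)      # clamping vectorizes as well

# Millions of independent element-wise operations: the kind of work that
# saturates vector units and GPUs rather than branch predictors.
frame = np.random.rand(1080, 1920, 3) * 255.0   # stand-in for a video frame
result = adjust_image(frame, gain=1.2, bias=-10.0)
print(result.shape, result.min(), result.max())
```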

Computation power has always been important in research. Simulating nuclear devices is computation intensive, so the DOE has always had a top-notch system in New Mexico. However, a new field is opening up that requires much more: biology, specifically the task of understanding protein folding and interaction. Stanford's Folding@Home program asks people to donate the processing power of their computers that they're not using to do protein folding calculations. From the beginning, the PS3, which is powered by IBM's Cell processor, has been a strong contributor to the program. Recently, they have also developed a client in CUDA, nVidia's proprietary language, which promises to bring the substantially higher processing power of GPUs to help solve the protein folding problem.

The only problem with Folding@Home is that the processing power of individual machines is so small that it is really not possible to simulate a significantly long folding sequence. At least that is the claim made by D. E. Shaw. There is also an article in the New York Times, which is less technical.

More or less, Shaw's argument is that a dedicated supercomputer is needed and that he can produce, in about 5 years, a specialized ASIC that will do the job 1,000 times faster than the processors used in current supercomputers. Unfortunately, while there will be an approximately 10x shrink in that time, supercomputers will be in excess of 100 times more powerful, possibly 1,000. This is because all but one of the top supercomputers are powered by Intel Xeon, AMD Opteron, or IBM Power processors. The emergence of the new Cell-based system IBM built for the DOE and Intel's deals with Cray and DreamWorks suggest that mainstream supercomputing will no longer be driven by just general purpose CPUs, which aren't very efficient at raw computing. Larrabee will be a big part of this, as will Cell and CUDA. D. E. Shaw's Anton is going to be yet another specialty chip that will be marginalized by higher volume processors.
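As a rough back-of-envelope, using only the figures already stated above (a claimed one-time 1,000x ASIC advantage versus general-purpose machines expected to improve 100x to 1,000x over the same window), a tiny sketch in Python shows how quickly that fixed advantage erodes:

```python
# Figures below are the post's own claims, used only to show the arithmetic.
claimed_asic_speedup = 1000                   # Anton vs. today's supercomputer CPUs
projected_general_purpose_gain = (100, 1000)  # growth expected over the same ~5 years

for gain in projected_general_purpose_gain:
    residual = claimed_asic_speedup / gain
    print(f"If general-purpose machines get {gain}x faster, "
          f"the remaining ASIC advantage is only ~{residual:g}x")
# -> ~10x in the first case, and roughly parity in the second.
```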

MRI: the Particle Physics Laboratory for the Brain

Much like particle physics has particle accelerators such as the Large Hadron Collider at CERN and the Tevatron at Fermilab, neuroscience has MRI. All are high-energy systems designed to induce and detect sub-atomic interactions. They also all require significantly complex mathematical tricks to discern useful information.

Increasingly powerful magnets have enhanced the sensitivity of MRI and have called for new methods of analysis to learn about the structure of the brain. Voxel-based morphometry (VBM) is an analysis method which uses MRI to measure the volumes of gray and white matter structures in a living brain. I first encountered it while searching through the literature on the differences between typical male and female brain structure. It has also been used to recognize abnormalities in individuals with autism spectrum disorders and should prove useful in determining the relationship between mental performance and the structure of the brain.
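To illustrate just the volume-measuring idea at the heart of VBM, here is a toy sketch in Python with NumPy. This is not the real pipeline, which involves spatial normalization, tissue segmentation, smoothing, and statistics; the label codes and data below are purely hypothetical stand-ins.

```python
import numpy as np

GRAY, WHITE, CSF = 1, 2, 3                # hypothetical tissue label codes
voxel_volume_mm3 = 1.0                    # e.g. 1 mm isotropic voxels

# Stand-in for a segmented brain volume; a real one comes from an MRI pipeline.
labels = np.random.choice([0, GRAY, WHITE, CSF], size=(181, 217, 181))

gray_cm3 = np.count_nonzero(labels == GRAY) * voxel_volume_mm3 / 1000.0
white_cm3 = np.count_nonzero(labels == WHITE) * voxel_volume_mm3 / 1000.0
print(f"gray matter: {gray_cm3:.1f} cm^3, white matter: {white_cm3:.1f} cm^3")
```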

On Monday, it was announced that a newer method, diffusion spectrum imaging (DSI), had been used to track white matter connections between different regions of gray matter in the cortex in five right-handed male subjects. This finding supports work published two years ago in Brain and the theory of Antonio Damasio. A little-studied region of the brain, the precuneus, is implicated by Damasio as a key player in consciousness, and it was the most widely connected part of the cortex in this latest DSI experiment. The notion is that an area key to consciousness would have to be connected to a wide swathe of cortex.
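The "most widely connected" claim boils down to a simple graph measure. Here is a minimal sketch of that idea in Python with NumPy, using placeholder region names and random connection strengths rather than the study's actual data: build a symmetric region-by-region connectivity matrix and pick the region with the largest summed connection strength.

```python
import numpy as np

# Placeholder region names and random strengths, not the study's data.
regions = ["precuneus", "posterior cingulate", "superior parietal", "cuneus"]
rng = np.random.default_rng(0)
strengths = rng.random((len(regions), len(regions)))
strengths = (strengths + strengths.T) / 2.0    # connections are undirected
np.fill_diagonal(strengths, 0.0)               # ignore self-connections

weighted_degree = strengths.sum(axis=1)        # total connection strength per region
hub = regions[int(np.argmax(weighted_degree))]
print("most widely connected region in this toy matrix:", hub)
```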

While these first looks at what DSI can reveal are exciting, they are only a glimpse. The study was done on a group of five right-handed males who were instructed to stay alert with their eyes closed. The imaging was not functional, in that results were averaged over a period of time during which there might be considerable fluctuations. I look forward to the fruits of DSI analysis as MRI technology improves and more extensive studies are conducted. Not only does it have the potential to pinpoint important structures involved in consciousness, it will also be useful in understanding higher-level reasoning by allowing us to correlate mental performance with the strength of the relationships between regions of the cortex.