To wrap up my Intro to Information Systems class, I'm exploring future predictions of technological innovation.  There are many interesting predictions, but one that I find fascinating is the idea of the Singularity.

Futurist and inventor Ray Kurzweil has predicted that the Singularity will arrive in about 30 years.  In this Ubiquity interview, he states his premise:
I make the case that this exponential progression will lead us to an understanding of human intelligence. And by understanding I mean we will have detailed mathematical models and computer simulations of all of the regions of the brain by the mid 2020s. So by the end of the 2020s we'll be able to fully recreate human intelligence. You may wonder: "OK, what's the big deal with that? We already have human intelligence; in fact, we've got six billion human brains running around, so why do we need more?" One of the answers to that question is that it will be a very powerful combination to combine the subtle and supple powers of human pattern recognition with ways in which machines are already superior. 
While I do not doubt our ability to understand pattern recognition to a very great extent, I believe that his definition of intelligence rests primarily on pattern recognition and not concept formation.  It may be possible to simulate concept formation in computers; I'm just not convinced it will be achieved in 20 years.  Regardless of the timeline, the ultimate effect may be profound.

Ray goes on to say:
My second point is that nonbiological intelligence, once it achieves human levels, will double in power every year, whereas human intelligence—biological intelligence—is fixed. We have 10 to the 26th power calculations per second in the human species today, and that's not going to change, but ultimately the nonbiological side of our civilization's intelligence will become by the 2030s thousands of times more powerful than human intelligence and by the 2040s billions of times more powerful. And that will be a really profound transformation.
The difficulty with this statement is understanding how intelligence can be "more powerful."  Is it simply faster?  Can it remember more?  Undoubtedly true.  Can it think better?  That is a subject we cannot fathom at this point, but it sounds highly suspect.
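Whatever "more powerful" means, the timeline in Kurzweil's claim is at least consistent arithmetic: a quantity that doubles every year exceeds a thousandfold multiple in about 12 years and a billionfold in about 30.  A minimal sketch of that growth math (illustrative only, not a model of intelligence):

```python
# Sketch: the growth arithmetic behind Kurzweil's claim (illustrative only).
# If nonbiological intelligence doubles every year once it reaches human
# level, its ratio to fixed biological intelligence after n years is 2**n.

def ratio_after(years: int) -> int:
    """Multiple of the starting level after `years` annual doublings."""
    return 2 ** years

# Thousands-fold within about 12 years, billions-fold within about 30:
print(ratio_after(12))  # 4096, i.e. thousands of times
print(ratio_after(30))  # 1073741824, i.e. roughly a billion times
```

This is why "thousands of times more powerful by the 2030s" and "billions of times by the 2040s" follow mechanically from the doubling assumption; the contentious part is the assumption itself, not the exponent.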

My second difficulty is with the assumption that humans do not enhance their own cognitive capacities through integration with computing machines.  Simple electronic interfaces already exist in cochlear implants, prosthetics, and BrainGate.  In 20 years, we may be able to achieve fantastic integration of computers and brain functionality.  Eventually, we may be able to download our memories or learn new skills Matrix-style.

There is also the difficulty of moving this intelligence into economically feasible applications.  As another futurist, Max More, points out:
I also see a tendency in many projections to take a purely technical approach and to ignore possible economic, political, cultural, and psychological factors that could dampen the advances and their impact.
And with that I agree.


  1. When Kurzweil, Eliezer Yudkowsky, et al. discuss predictions of machine intelligence, the leap between the perceptual and conceptual levels of thinking is usually ignored. Intelligence is assumed to be linear, and their definitions correspond roughly to those of g as discussed by intelligence researchers such as Linda S. Gottfredson and Howard Gardner. The result is that definitions of intelligence tend to be shifting, contradictory, and contentious, and assumptions based on them are speculative.

    For intelligence to become as powerful as Kurzweil implies, the possibility of some further leap analogous to the one between perception and conceptualization would have to exist. Even though I realize that it is a contradiction to try at my present cognitive level to visualize what this would be, I try all the time, and I wonder what type of cognitive growth and/or enhancement will help us either find it or verify that it does not exist.

  2. Excellent points, Kelleyn. I agree completely.

    I also see a de-emphasis on the complex nature of the endocrine system's interaction with neural activity. I've heard some futurists dismiss it as just another input. However, I doubt it will be so simple to duplicate when describing how neural systems develop over time. Indeed, I cannot see a way to achieve a real value system within computers without pleasure and pain capabilities.

  3. William H Stoddard, 8:34 PM

    Your argument about the Singularity is interesting, but I'm not sure I accept your argument about the failure to address concept formation. What do you think is the difference between "pattern recognition" and "concept formation"? Doesn't recognizing a pattern mean developing the ability to say that A and B both have the same structure, despite differences of detail, and thus can both be assigned to category X? I find myself wondering if "pattern recognition" isn't simply the term computer scientists have come up with for concept formation. How do you see the two as different, if you do?

  4. Great question, William. The answer could, however, be a book unto itself, so my response will be brief.

    Similar to Ayn Rand, I see pattern recognition as the first step toward concept formation. It is necessary but not sufficient. Pattern recognition works well at the perceptual level. It is the labeling of patterns according to essential characteristics that differentiates concepts from mere pattern recognition. How can essentials be determined by a thinking being or machine? That is the question computer scientists need to answer.