Comments on Try Reason!: Singularity

Anonymous (2010-04-17):

Great question, William. The answer could, however, be a book unto itself, so my response will be brief.

Similar to Ayn Rand, I see pattern recognition as the first step toward concept formation. It is necessary but not sufficient. Pattern recognition works well at the perceptual level. It is the labeling of patterns according to essential characteristics that differentiates concepts from mere pattern recognition. How can <i>essentials</i> be determined by a thinking being or machine? That is the problem computer scientists need to solve.

William H Stoddard (2010-04-16):

Your argument about the Singularity is interesting, but I'm not sure I accept your argument about the failure to address concept formation. What do you think is the difference between "pattern recognition" and "concept formation"? Doesn't recognizing a pattern mean developing the ability to say that A and B both have the same structure, despite differences of detail, and thus can both be assigned to category X? I find myself wondering whether "pattern recognition" isn't simply the term computer scientists have come up with for concept formation. How do you see the two as different, if you do?

Anonymous (2010-04-12):

Excellent points, Kelleyn. I agree completely.

I also see a de-emphasis on the complex nature of the endocrine system's interaction with neural activity. I've heard some futurists dismiss it as just another input. However, I doubt it will be so simple to duplicate when describing how neural systems develop over time. Indeed, I cannot see a way to achieve a real value system within computers without pleasure and pain capabilities.

kelleyn (http://home.comcast.net/~shegeek) (2010-04-10):

When Kurzweil, Eliezer Yudkowsky, <i>et al.</i> discuss predictions of machine intelligence, the leap between the perceptual and conceptual levels of thinking is usually ignored. Intelligence is assumed to be linear, and their definitions correspond roughly to those of <i>g</i> as discussed by intelligence researchers such as Linda S. Gottfredson and Howard Gardner. The result is that definitions of intelligence tend to be shifting, contradictory, and contentious, and assumptions based on them are speculative.

For intelligence to become as powerful as Kurzweil implies, the possibility of some further leap analogous to the one between perception and conceptualization would have to exist. Even though I realize that it is a contradiction to try, at my present cognitive level, to visualize what this would be, I try all the time, and I wonder what type of cognitive growth and/or enhancement will help us either find it or verify that it does not exist.