But even despite the spectacular vagueness of the claim, things are hardly looking good. Graphs where the future has been conveniently 'filled in' according to the author's highly selective worldview do not count as evidence, and are nothing more than an embarrassment. Even Kurzweil's more apparently reasonable claim - that of exponential growth at a constant rate - rests on a pretty selective framing of the question and interpretation of existing data. Assuming that drug delivery via nanobots and the engineering of replacement tissues/organs will indeed at some point become reality, Kurzweil's estimate of the relevant timeframe is ludicrously optimistic. If one actually reads carefully what he's saying, and assumes that he is assigning the standard, agreed-upon meaning to the words he uses, then several possible reactions seem warranted:

* that sinking feeling that one inhabits a universe completely orthogonal to that of those who gave this a 5-star rating
* heightened skepticism and aversion to Kool-Aid
* bemusement at the gap between Kurzweil's perception of reality and one's own - in particular, the evident moral vacuum in which he "operates", as well as his apparent ignorance of, or indifference to, the lot of the vast majority of the planet's inhabitants
* wonder at the sheer monomaniacal gall of the man

Grandiose predictions of the future, the more outlandish the better, appear to have an undiminished appeal for Homo sapiens.
Then, dear friends, this is the book for you! If the current state of technology has you feeling a bit ambivalent, wait a decade or so: if half the shit in this book turns out to be correct, people will become freaking demigods.
This exponential improvement has been true for many measures, such as microprocessor feature size, the cost of mass-produced goods, etc. Kurzweil argues that since transistors are faster than neurons, they will make better brains - but no engineer would try to build a brain out of transistors. The better your artificial brain performs, the more it will look like a human brain. One important point made in the book is that our only problem need be to produce an intelligence greater than our own: Ray argues that a future AI will be produced with the ability to iteratively improve its own intelligence. Among his many skills, he's an accomplished software engineer, so he should know better - even if we "reverse-engineer the human brain" (whatever that means), that conclusion hardly follows. Don't get me wrong, the mirrors are interesting to look at in their own right: nanotechnology, genetic engineering, genetic algorithms, neural networks.
Possibility 1: Kurzweil Is Nuts

I used the term 'batshit crazy' more than a few times while speeding through this book (N.B. I think I ended up reading more than I skimmed, but many of the lauded 2005 advancements are less impressive 9 years later -- which is, of course, part of Kurzweil's point). ARE YOU FUCKING KIDDING ME? Here's the major problem with Kurzweil's argument -- ASIDE FROM several more niggling, specific concerns, e.g. the sort-of-embarrassing obsession with longevity & immortality (320... and so many more); the lack of appreciation for the human body as a full system (200, for example: "not everything is contained in the brain but who cares, we only need the brain!" uh, what?); the assumption that immediate gratification supersedes all else (313); the comparison of current computer viruses with hacking into a human body (you fucking kidding me, bro?); and more: He never takes into account that HUMANS FUCKING SUCK. a) You mention but never deal with the fact that "much of human thought is petty and derivative" -- well, if we have MORE time, MORE technology, how will that be abated in any way? While Kurzweil devotes fairly significant time to the philosophical issue of consciousness, he never mentions critical thought or reflection -- and he certainly never mentions the fact that the majority of people (I say this as a teacher, not a misanthrope) don't actually engage in it without a push. Problem is, if we are not UNIVERSALLY CRITICAL, and we are given all of the technological possibilities scattered through the book, we will not only waste them -- but probably destroy ourselves in the process. Kurzweil's faith in humanity is sort of humiliating. Though Kurzweil devotes time to the potential dangers of 'strong AI,' I'd think that the real dangers will come long before that BECAUSE PEOPLE SUCK. In Kurzweil's argument, human intelligence -- amplified by machines -- will essentially become god, and that's our right and privilege.
Possibility 2: Kurzweil Is a Smart Mofo

You know those scientists you see in TV shows who, by the power of fiction, have extensive knowledge of every scientific discipline even though they should really only understand problems relating to physics? Kurzweil comes across as that kind of polymath, but for real. I'm not inclined to blindly believe anything I read -- and yet I think Kurzweil got to me, with all his documented examples, at some point or another.

Possibility 3: This Is a Sci-fi Novel

I first found out about Ray Kurzweil because of a Canadian alt-rock band, Our Lady Peace, who put out a concept album based on Kurzweil's book The Age of Spiritual Machines in 2000.
Kurzweil aims to convince his reader that we are on the cusp of an exponential growth in genetics, nanotechnology, and robotics (GNR) that will fundamentally change humanity, creating humans who are fully integrated with machines, can live as long as they like, and frequently immerse themselves in virtual worlds.
The Singularity, if you've never heard the term, refers to a theoretical point in the future when our technology will have become so advanced (compared to today) that it becomes impossible to see beyond it or understand its ramifications. In Kurzweil's vision of the (not so far) future, we have transcended our mortal coils. Instead, we've shed our weak flesh to merge with (and become) immortal machine super-intelligences, spreading through the solar system, the galaxy, and then the universe in a quest to reach ever new heights of knowledge, art, beauty, and creation. And through it all, claims Kurzweil, we maintain our humanity. It is indisputable and unambiguous to say that the history of our planet is one of escalating intelligence/complexity, and that the history of human civilization is one of escalating technological and scientific advancement. Even the numerous ultimately FAILED endeavors or companies that Kurzweil cites don't render his general claims dubious. At one point, I estimated roughly 80% of the companies or technologies he was citing had since failed. Consider the solar power industry: I recently did some research into various solar power companies with the aim of investing in their stock. However, technology is advancing so rapidly now that before any such company can earn back its expensive investment, some other, better solar product reaches the marketplace. Point being, when looking at broad predictions of the future, we can't let day-to-day or even year-to-year chaos and setbacks obfuscate the overall path. The book explores these visions of the future without ever truly exploring how they will affect humanity at the visceral, emotional, dramatic level of individuals. Kurzweil's writing was less engaging than it might have been because it rarely afforded me the opportunity to hypothesize about what I personally might do when faced with future ethical questions.
In fact, the overall feel of the book is that it's less about communicating with me as a fellow human being than it is about Kurzweil organizing his own thoughts and evidence on the matters he wishes to write about. He finishes the prologue with a quote from Muriel Rukeyser: "The universe is made of stories, not of atoms," and the actual last line of the prologue is: "This book, then, is the story of the destiny of the human-machine civilization, a destiny we have come to refer to as the Singularity." If it was truly his intention to tell this story, then I would say Kurzweil singularly failed. To finish, I want to try to succeed, in at least one tiny area, where I feel Kurzweil failed: I want to talk about immortality and what it'll mean to you and to me, personally. Alex tunes in to the debates -- held in a virtual building, of course -- regarding the impending immortality treatment. Opponents raise many objections: overpopulation; that being immortal would be boring; that death gives us meaning or otherwise motivates us. I encourage them to try to think of immortality not as some idea far out in the future but as an imminent issue requiring real, practical decisions. What will HE think about these debates regarding immortality? I'm particularly interested to see how religious people will respond to the real possibility of immortality. Consider fundamentalists who WILL choose to die rather than become immortal. And in a broader sense, will you vote for politicians who run on anti-immortality, pro-death platforms, knowing that such platforms might deny us non-believers the chance to extend our lives and the lives of those we love? What if, say, becoming immortal meant becoming permanently sterile, either for biological reasons or as part of a government-enforced agreement to deal with potential overpopulation? Such conundrums, and the drama, humanity, bravery, hate, and love associated with them, constitute the real story of the topics that Kurzweil brings up.
Perhaps I am asking too much in wanting him to capture all that in his non-fiction book. Still, Kurzweil's book, and the man himself, for all their faults, are daring adventurers who do just that.
Perhaps I will revisit this book and its subject matter relatively soon. For now, let me just say that not long after reading parts of this work, I definitely count what is called "transhumanism" as the "World's Most Dangerous Idea".
The Singularity Is Near by Ray Kurzweil: dislike it (2/5) Too optimistic, too wacky, too wrong. What should I expect from futurist Ray Kurzweil other than futuristic foo from the future? After a rocky start predicting the near future, how are we expected to follow along as Kurzweil turns us into mind-linked cyborgs with nanobot blood who can merge and re-form our identities at will? By early in the 2nd decade of this century we will have fully immersive 3D environments beamed into our eyeballs, and shortly thereafter we will be plugged in using devices that directly stimulate our sense centers in the brain. There's a lot more way-out-there stuff I could bring up from the book, but that's really not its worst failing. Kurzweil's vision of the ultimate goal of humanity is a nanobot swarm with superintelligent AI flying at near the speed of light and colonizing wherever it lands without regard for existing life of any kind.
Can this book ever get to the point?
This exponential development of key technologies leads to dramatic changes in human history over relatively short periods of time. He makes a compelling argument that the singularity is not a matter of "if" but of "when," and that we should be proactive in pursuing these technologies, not just for the benefit of humanity, but to keep amoral people from exploiting them for unfair advantage.