Robin Hanson on Singularity 1 on 1 (part 2): Social Science or Extremist Politics in Disguise?!

My second interview with economist Robin Hanson was by far the most vigorous debate ever on Singularity 1 on 1. I have to say that I have rarely disagreed more with any of my podcast guests. So why do I get so fired up about Robin’s ideas, you may ask?!

Well, here is just one reason why I do:

“The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly believed. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Soon or late, it is ideas, not vested interests, which are dangerous for good or evil.”

John Maynard Keynes, “The General Theory of Employment, Interest and Money”

To be even more clear, I believe that it is ideas like Robin’s that may, and often do, have a direct impact on our future.

And so I am very conflicted. Ever since I finished recording my second interview with Hanson I have been torn inside:

On the one hand, I really like Robin a lot: he is that most likeable fellow from the trailer of The Methuselah Generation: The Science of Living Forever who, like me, would like to live forever and supports cryonics. In addition, Hanson is clearly a very intelligent person with a diverse background and education in physics, philosophy, computer programming, artificial intelligence and economics. He’s got a great smile and, as you will see throughout the interview, is very gracious in the face of my verbal attacks on his ideas.

On the other hand, after reading his book draft on the Em Economy, I believe that some of his suggestions have much less to do with social science and much more with his libertarian bias and what I will call “an extremist politics in disguise.”

So here, the way I see it, is the gist of our disagreement:

I say that there is no social science that, in between the lines of its economic reasoning, can logically or reasonably suggest details such as: policies of social discrimination and collective punishment; the complete privatization of law, detection of crime, punishment and adjudication; that some should be run 1,000 times faster than others, while at the same time giving them 1,000 times more voting power; that emulations who can’t pay for their storage fees should be either restored from previous back-ups or be outright deleted (isn’t this like saying that if you fail to pay your rent you should be shot dead?!)…

Suggestions like the above are no mere details: they are an extremist bias toward laissez-faire ideology, dangerously masquerading as (impartial) social science.

During the 2007 OSCON conference Robin Hanson said:

“It’s [Bias] much worse than you think. And you think you are doing something about it and you are not.”

I will go on to claim that Prof. Hanson himself is a prime example of exhibiting precisely such a bias while thinking he is not. Not only does he give no justification for the above suggestions of his, but, in principle, no social science could ever justify positions that are profoundly ethical and political in nature. (Thus you can say that I am in a way arguing about the proper limits, scope and sphere of economics, within which its tools can give us worthy and useful insights for the benefit of our whole society. That is why the “father of economics,” Adam Smith, was a moral philosopher.)

I also agree with Robin’s final message during our first interview, namely that “details matter.” It is for this reason that I paid attention to, and was so irked by, some of the side “details” in his book’s draft.

Two quotes will stay with me: one from Robin’s book draft, which I absolutely like and agree with, and one from our second interview, which frankly shocked me and which I find completely abhorrent:

“We can’t do anything about the past, however. People often excuse this by saying that we know a lot more about the past. But modest efforts have often given substantial insights into our future, and we would know much more about the future if we tried harder.”

“The Third Reich will be a democracy by now!” (Yes, you can give Robin the benefit of the doubt, for he said this in the midst of our vigorous argument. On the other hand, as Hanson says, “details do matter.”)

So, my question to you is this: Is Robin Hanson’s upcoming book on the Em Economy social science or extremist politics in disguise?!

To answer this properly I would recommend that you see both the first and the second interview, read Robin’s book when it comes out, and be the judge for yourself. Either way, don’t hesitate to let me know what you think and, in particular, where and how I am either misunderstanding and/or misrepresenting Prof. Hanson’s work and ideas.

(As always you can listen to or download the audio file above, or scroll down and watch the video interview in full.  If you want to help me produce more episodes please make a donation)


Who is Robin Hanson?

Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a masters in physics and a masters in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, including articles in Applied Optics, Business Week, CATO Journal, Communications of the ACM, Economics Letters, Econometrica, Economics of Governance, Extropy, Forbes, Foundations of Physics, IEEE Intelligent Systems, Information Systems Frontiers, Innovations, International Joint Conference on Artificial Intelligence, Journal of Economic Behavior and Organization, Journal of Evolution and Technology, Journal of Law Economics and Policy, Journal of Political Philosophy, Journal of Prediction Markets, Journal of Public Economics, Medical Hypotheses, Proceedings of the Royal Society, Public Choice, Social Epistemology, Social Philosophy and Policy, Theory and Decision, and Wired.

Robin has pioneered prediction markets, also known as information markets or idea futures, since 1988. He was the first to write in detail about people creating and subsidizing markets in order to gain better estimates on those topics. Robin was a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, and of DARPA’s Policy Analysis Market, from 2001 to 2003. Robin has developed new technologies for conditional, combinatorial, and intermediated trading, and has studied insider trading, manipulation, and other foul play. Robin has written and spoken widely on the application of idea futures to business and policy, being mentioned in over one hundred press articles on the subject, and advising many ventures, including GuessNow, Newsfutures, Particle Financial, Prophet Street, Trilogy Advisors, XPree, YooNew, and undisclosable defense research projects. He is now chief scientist at Consensus Point.

Robin has diverse research interests, with papers on spatial product competition, health incentive contracts, group insurance, product bans, evolutionary psychology and bioethics of health care, voter information incentives, incentives to fake expertise, Bayesian classification, agreeing to disagree, self-deception in disagreement, probability elicitation, wiretaps, image reconstruction, the history of science prizes, reversible computation, the origin of life, the survival of humanity, very long term economic growth, growth given machine intelligence, and interstellar colonization.

  • http://www.facebook.com/people/Stefán-Gunnarsson/100001473975202 Stefán Gunnarsson

    I don’t agree with Mr. Robin Hanson: mind uploading and killing your conscious body afterwards is not like any other major tech advance. Most would simply consider it suicide and would have everything to gain by revolting.

  • Pingback: Overcoming Bias : Is Social Science Extremist?

  • http://www.singularityweblog.com/ Socrates

    Prof. Hanson just published his response to this interview: http://www.overcomingbias.com/2013/02/is-social-science-extremist.html

  • Jon Perry

    Socrates, I listened to the interview. I enjoyed it and really appreciate your work. However, I’m not sure I agree with you re: Robin Hanson.

    You have had a lot of people on the show who predict bad outcomes. If I remember correctly, Michael Anissimov predicted something like 90% certainty that the human race would perish. You did not get so argumentative with Michael. Robin Hanson is predicting bad outcomes as well (in fact not as bad as extinction) and yet you make him the target of such antipathy.

    My biggest problem with Robin Hanson’s future is that I don’t find all the details of it plausible, and I wish that was where you had directed your attack. But as to whether Hanson’s future is good or desirable, I think that was completely beside the point that Hanson was trying to make. Hanson explains several times in the interview that he is just trying to describe what he in his best analysis THINKS will happen given our current course. I think there ought to be room in the discussion for some non-normative discussion of outcomes we may not like. Again, I think the example of Anissimov is no different.

  • http://lincoln.metacannon.net/ Lincoln Cannon

    I’ve admired Robin’s work, and I enjoyed the podcast. Here are some thoughts that came to mind while listening.

    The discussion seems to leave unquestioned a hard distinction between humans and ems, but the distinction seems unjustified given trends in the merging of biological and mechanical intelligence.

    You discussed a disagreement regarding a somewhat fatalistic view of the future, and I agree with both of you to some extent. I agree that we can bring about many futures, but I expect that not all futures are capable of persisting. For example, had Hitler won, I suspect we would all be fascists now, or our civilization would be far less advanced than it is now. This is related to the Benevolence Argument of the New God Argument. There are, I contend, natural moral hurdles to increasing technological complexity, and civilizations far more advanced than us are probably more benevolent than us.

    There’s something interesting to explore in the relation between Robin’s ideas of mass-produced ems and the Simulation Argument, or the generalized Creation Argument of the New God Argument. Depending on how an em is created, it could qualify as a verified created world, which would make it increasingly probable that we’re already living in such a world.

    I love the discussion about efficiency versus social outcomes, and I’d argue that the one enables advances in the other.

    Robin may be idealizing foragers. I’d like to know how he responds to Pinker’s analysis of history showing declining violence and increasing cooperation.

    Are normative feelings becoming more empowered with time? A descriptive analysis must account not only for past effects of normative feelings, but also for trends in past effects of normative feelings. Robin may be overlooking this. Interest in building heaven has always been with us, but the relative power of our tools compared to that of our environment has been increasing substantially.

    I heartily agree with Robin’s aspiration of giving persons (including future persons) what they want on their own terms. I consider that essential to my Christian identity.

  • http://www.singularityweblog.com/ Socrates

    Dear Jon,

    let me start by saying that this podcast is not a teaching process but a learning process, for me as much as for anyone else. Thus it is entirely possible that I have made mistakes in the past and will probably make a few more in the future. Hence input from people like you is always welcome and much appreciated.

    Now, Robin Hanson might eventually turn out to be one of those mistakes because, as I said, personally speaking, I very much like the guy.

    However, I am not yet convinced this is the case. On the contrary. You are totally correct in noting that others on this show have also predicted bad outcomes, so predicting bad outcomes is not, in itself, a reason for me to argue against someone.

    Here is the difference though: when Michael Anissimov said there was a 90% chance we are going to go extinct, he said it in the context of being an active member of the Singularity Institute and doing everything he can to avoid this outcome.

    Also that was a very explicit speculation that I asked him to make.

    In this process he didn’t claim that what he was doing was social science, for we were both aware he was indeed guesstimating.

    He didn’t say also that chances are that we are going to likely have discrimination, unequal voting powers by a factor of 1,000 and have to privatize the police, the army, the courts and the government, as if those are mere details. (Just like it is unlikely that 3rd Reich will be a democracy by now is very important in my view).

    Furthermore, he could have said all of those things, and that would have been fine, provided that he presented them not as social science but as something that he desires.

    I am totally for freedom of opinion and beliefs and invite people with diverse views on this show because we can learn something from them all – including Robin Hanson, who is clearly a very smart guy. However, for example, if you tell me you are a creationist I will likely press you to face your religious bias and give me evidence in support of your claim.

    Now, that is not to say I don’t have my own set of biases, which I have never hidden and have tried hard to make explicit during this interview. That is why I don’t plan on teaching people what’s right and what’s wrong but simply invite them to consider and debate the points I bring up.

    To paraphrase Socrates, in the context of Robin’s interview I would say this:

    “I know that I am biased. But you don’t realize that you are equally biased despite your claims of social science…”

  • Jon Perry

    Thanks for responding, and thanks again for running such a consistent and high quality podcast.

    You make a good point about how Anissimov is actually working on the solution.

    I wonder if a lot of the tension could have been alleviated by Hanson simply adjusting his tone a bit and adding a few explicit qualifiers like:
    (a) admittedly this may not be the desirable future, but that doesn’t mean it won’t happen
    (b) social science is a useful tool but it is not infallible
    I suspect Hanson agrees with those statements but did not choose to emphasize them.
    As for whether Hanson actually *desires* this future… I’m not sure that’s true.

    In any event I am sympathetic to your preferences. This is not the future I’d want. More importantly, I don’t see how it could even be possible, or at the very least last very long. First of all, what in this scenario prevents an intelligence explosion (or foom)? Clearly some of the emulated people were brain and computer scientists in their past lives. Wouldn’t unlimited copies of brilliant scientists running at high speeds quickly design a smarter brain to be their successor? Hanson has had this sort of debate with people like Yudkowsky numerous times but I’ve never found Hanson’s responses satisfactory.

    Another question that is maybe answered in Hanson’s book but I’m curious about: What the hell are these emulations working so hard on? What work still needs to be done?

    Anyway, I don’t mean to drag you into a long conversation. Mostly just want to say I love the podcast. Keep up the good work.

  • http://www.facebook.com/profile.php?id=1464403855 Nathan Flowers

    Robin Hanson seems to be supporting his book with several flawed philosophical precepts and specious claims. First, he makes the claim that those in the past and present who do not have all of their basic needs met, let alone their more lofty desires, are ‘happy’, and he offers no evidence to support this claim. Indeed, there may be no evidence to support this claim, all evidence to the contrary. Most people throughout history have lived short miserable lives. Some may have made the best of a bad situation, but few have ever been happy until the last century. A person digging through trash for a living in India may be ignorant of a better life, but it would be cynical to call them happy. And how could we as first worlders, in the full knowledge of a better life, not seek to make better lives for others in the future? We may all, as first worlders, go to sleep in our warm beds every night, while millions of others freeze and starve, but most of us, I hope, give pause to those lives of quiet desperation. And, if we are faced with an opportunity to remake the world as we see fit, as I believe the Singularity and the preceding events will present, then most of us would set our moral compass to making the world better for everyone, not just an elite cadre. I don’t know if Robin Hanson lacks a moral compass or if he is deluding himself with the claim that those in despair are happy.

    Secondly, Professor Hanson imagines that we all exist to serve the economy rather than the other way around. If we live in a world of immortality, abundance, and automation, how could a traditional monetary system & capitalist economy function, and why would we accept the dreary life he describes? In a world of such abundance, basic needs and rights could not be denied to anyone for any reason. And, in a world of automation, there would not be enough available jobs for everyone to provide for their needs. Even if we accept his premise of a world with mind uploading but no AGI, there will still be AI and sophisticated software. Software is already beginning to replace white-collar workers. Long before the technology for mind uploading exists, the traditional economic model breaks down. The reality is that we will have to accept in a world of abundance that everyone is entitled to have their basic needs met, and in spite of most labor being done by machines and software, people will still want to work and contribute to society. There does not need to be a monetary system or any traditional incentive structure. When people have their basic needs met, clothing, food, shelter, education, & healthcare, they will work because they want to, not because they must in order to survive. Moreover, they will do what they were born to do, what they are more suited for, rather than what has been imposed on them by their social status.

    Lastly, Professor Hanson imagines a world of mind uploading as the most likely outcome of the Technological Singularity, but AI and mind uploading are among the most difficult technologies to master. We will be using nanotechnology and biotechnology to extend healthy human lifespan and to create a world of abundance long before mind uploading becomes a reality. When it does, if it works by copying consciousness rather than transferring it, very few people will accept termination of their physical body. I am writing about this in my novel, and I find it very unlikely. I may accept copy-style mind uploading if I can continue living my physical life. I have no qualms if my uploaded self has his own life. I say this as someone who desperately wishes to live in the simulation. However, accepting self-termination after copy-style mind uploading, would be like accepting self-termination after cloning. My copy may continue to live on in perpetual bliss, but I, me, my continuity will die. That’s not acceptable. Mind uploading presents several very difficult philosophical, moral, and physical problems that need to be overcome, and how it takes shape is difficult to predict, especially since we have no real idea how it will work. Regardless of how it does work, it is unlikely everyone will choose it. By the time it comes into being, there will be many options, mind uploading to the simulation, mind uploading to machines, cyborg, and so on. Despite my desire for emulation, unless mind uploading works by transferring consciousness, I will be a cyborg. I suspect most people will choose cyborg since it preserves your identity & consciousness while enhancing your moment to moment experience of life and longevity.

    We need to give greater consideration to the future, but we are heading into a time when our past will no longer inform our future. We need to use our best philosophy & moral compass to guide our technological & social development. If we are not deliberate about the outcomes we want, we may end up inheriting a future as ill-conceived as the one Hanson describes.

  • http://www.singularityweblog.com/ Socrates

    Thanks very much friend. I have to say myself that I am not happy with the way I conducted this interview/discussion/debate. I have to be better than this…

  • RandomCoder

    The only person I ever disagreed with more than Robin Hanson… is Lincoln Cannon. ;)

  • RandomCoder

    I think this was one of Socrates’ better interviews. If I wanted to hear unchallenged diatribes I’d just go to the interviewee’s website. This blog is entertaining and interesting precisely because Socrates is following the methods of Socrates. Socrates was not much liked because he used the Socratic Method to lead people to contradict themselves and face their own biases. This is the reason he was forced to drink hemlock.

    Don’t be so quick to second judge yourself Socrates. Keep up the good work.

    Anybody worth interviewing won’t complain about hard questions, they’ll be ready to answer them.

  • Thomas_L_Holaday

    When you disagree with a guest’s prediction, consider asking him or her “What evidence do you have which supports your prediction?” instead of saying “I find your prediction repugnant.”

  • Thomas_L_Holaday

    I’m curious, do you agree now that the statement “The average American adult works 27 hours per week” is plausible?

  • SteveO

    Very entertaining debate and I admire both parties’ vigor and determination, but I think the most serious points of contention are based on one or two technology- and biology-based assumptions that are almost certainly incorrect. Correct these assumptions and much of the basis of your argument dissipates.

    The most serious error is Robin’s assumption that Ems will experience the same loss of mental plasticity after a subjectively-similar lifetime as their biological-substrate-based prototypes. Loss of plasticity in human minds as they age is not an inherent property of neuromorphic computers. Simulations of all types of neuromorphic brains, from the most simple neural networks to the largest and most complex time-dependent spiking simulations, have not exposed any fundamental property implying aging-related inability to learn. We become mentally less flexible as we age for the same reasons we become physically less flexible: because our cells are aging, become less able to reproduce, and eventually start dying. If Ems don’t lose plasticity as they age, there is no need to retire them, and no exponential explosion in their population beyond that required by economic production needs. The cost to retrain an existing Em for a new job will be approximately the same as training a new Em, probably less, as there will be an ever-widening disparity between the world the human Em prototype experienced and a given future, putting new Ems at a greater and greater disadvantage in their virtual modern world.

    Another error is to over-focus on the clock rate of Ems as a primary driver of their cost, ignoring storage of their state, which is independent of clock rate. Certainly clock rate will drive energy cost, but even in a Moore’s Law asymptotic world, storage of the amount of information required to completely describe the state and connectivity of a human brain is not trivial. Robin’s future depends on substrate technology becoming more dense and more inexpensive than biology by several orders of magnitude. This may not be possible. As we shrink our technologies, quantum effects start to interfere with our comfortable digital determinism and compensations may drive us to adopt the same strategies as biology just to assemble, power, organize, and make them reliable at a reasonable cost. There are real physical laws and limits in our universe. Speculative exponential curves that intersect and exceed them are bogus.

    Biology is the original nanotechnology. It costs less and is easier to reproduce a human brain using biology today than it costs to make a smart phone. Keep in mind how efficient biology really is and how far we are from matching it with alternate technologies: the brain in your head only uses a few watts and yet is monstrously dense. By comparison, IBM and Lawrence Berkeley National Laboratory researchers simulated neurons totaling about 4.5% of a human brain on a football field-sized petaflops-class supercomputer just a couple of years ago, using megawatts of power, and still achieved a speed some 600 times slower than real time. Biology may well represent an approximate asymptote of Moore’s Law. With deliberate engineering, we may exceed biological density and efficiency by a few orders of magnitude, but it is probably nonsense to assume storage will ever be as inexpensive (and exponentially becoming more inexpensive, without end) as Robin’s scenario requires.
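    The efficiency gap the comment describes can be checked with a quick back-of-envelope calculation. The machine's power draw is an assumption here (megawatt-class is typical for a petaflops-era supercomputer; the comment does not state a figure), as is the ~20 W estimate for a human brain:

```python
# Back-of-envelope estimate of the brain-simulation efficiency gap.
# Assumed figures: ~20 W for a biological human brain, ~1 MW for the
# petaflops-class machine; the 4.5% coverage and 600x slowdown are
# from the comment above.

brain_watts = 20.0           # assumed biological brain power draw
machine_watts = 1.0e6        # assumed supercomputer power draw
fraction_simulated = 0.045   # ~4.5% of a human brain's neurons
slowdown = 600.0             # 600x slower than real time

# Power that would be needed to simulate one full brain in real time,
# scaling linearly from the partial, slowed-down run.
full_brain_watts = machine_watts / fraction_simulated * slowdown

# How many times less energy-efficient than biology that is.
gap = full_brain_watts / brain_watts
print(f"~{gap:.1e}x less efficient than biology")
```

Under these assumptions the gap comes out to roughly nine orders of magnitude, which is what makes the "several orders of magnitude beyond biology" requirement of the Em scenario so demanding.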

    Robin’s future is based on flawed technology assumptions. Argue about the offensive social implications all you want, but this is a future that will not happen. It’s like arguing about a future society based on perpetual motion or faster-than-light travel.

  • http://www.singularityweblog.com/ Socrates

    Great point Thomas,

    I will very much have it in mind. Though, in my mind that is what I was trying to do. Based on your comment I clearly must do a better job of being more explicit about it.

    Also, part of my argument was that, in principle, social science cannot properly give us much, or any guidance, on some of the major ethical issues…

  • http://www.singularityweblog.com/ Socrates

    I absolutely do not!

  • Jacob Witmer

    I mostly agree with this statement. However, I think it’s actually somewhat possible that inherent bias shouldn’t be overcome (bias in defense of legitimate morality is preferable to the lack of ability to differentiate between sociopathic and empathic actions), and that value judgments should be challenged (however, this always leads to two-part questioning of what is preferable, and what is predicted, and whether those two things are the same). Note: I’ve never seen anything of Robin’s that I’ve disagreed with, although I haven’t seen the video yet. I’m about to watch it, after my business hours end for the night. I have seen videos where I’ve disagreed with Socrates and his questions before, but I think that he, on balance, does a remarkable job, because he fills a much-needed niche: an unbiased questioner who is not necessarily an “insider” and who can interview “heretical” or “outsider” singularitarians, and other critics of “established” ideas (especially “established ideas” where there is a large amount of variance from the idea that seems the most correct). I’ve most appreciated Nikola’s interviews with people who aren’t interviewed or tracked much by Singinst. MOST APPRECIATED, NIKOLA! :)

  • Jacob Witmer

    One idea that I like very much is to take the “strong feedback” ideas Kevin Kelly outlines in “Out of Control” (and others have outlined in the formative works on “cybernetics”) and use them to perpetually increase the quality of online fora. Far beyond hypertext, there is the idea of “hypertext” at its core. This idea allows for every word, sentence, or phrase to be upranked or downranked, and to change the color of the text, the higher or lower the rating. (I would always downrank the term “gun control” because it implies an incorrect perception that human laws reflect desired outcomes, whereas the reality of the term is either closer to “gun chaos,” or “slave control”) This feedback level could never be done with books, but it would also let embryonic (and beyond) AGIs focus their increased capacity for attention on areas where humans obviously failed to see “the big picture.” Imagine an AGI crawling Nikola’s body of interviews, and coming across red text (hot, truthful, agreement, correct, moral) and blue text (cold, false, disagreement, incorrect, immoral). Such a feature allows for emergence in what would otherwise require a “copy, paste, separately address every point” approach, and even then, would not exhibit emergence or “immediate intelligence.”
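    The span-rating mechanism described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`Span`, `vote`, `color`), not any existing system: readers uprank or downrank a span of text, and the span's display color shifts from blue (net disagreement) toward red (net agreement):

```python
# Minimal sketch of per-span feedback: each text span accumulates up/down
# votes, and its normalized score maps to a display color on a
# blue (disagree) -> grey (unrated) -> red (agree) gradient.

from dataclasses import dataclass


@dataclass
class Span:
    text: str
    up: int = 0
    down: int = 0

    def vote(self, delta: int) -> None:
        # delta > 0 counts as an uprank, anything else as a downrank.
        if delta > 0:
            self.up += 1
        else:
            self.down += 1

    @property
    def score(self) -> float:
        # Normalized score in [-1, 1]; 0.0 when the span is unrated.
        total = self.up + self.down
        return 0.0 if total == 0 else (self.up - self.down) / total

    def color(self) -> str:
        # Map score to an RGB hex string: more red as agreement rises,
        # more blue as disagreement rises.
        red = round(127 * (1 + self.score))
        blue = round(127 * (1 - self.score))
        return f"#{red:02x}00{blue:02x}"


s = Span("gun control")
s.vote(-1)
s.vote(-1)
s.vote(+1)
# Net-negative span renders toward blue; an unrated span renders grey.
```

A crawler (human or AGI, as the comment imagines) could then scan a corpus for strongly colored spans to find the passages readers most agreed or disagreed with.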

    OK, I’m going to watch the video now, and see which of these two strong minds I disagree with. LOL Dispute! Conflict! Drama! Controversy! Hit-increases! Popularity increases! = More conflict in the future? (Does the prior possible trend apply to everyone, or just those who make their political stances well known? Does Robin pay a price for being brave enough to wear his politics on his shirtsleeve?)

    A final note: K. Eric Drexler long ago noted that the moral system for human intelligence (emergent voluntary transactions, purchases, pricing, capitalism) might not be the moral system whereby you have a range of wildly-different intelligences. As smart as many humans are, they aren’t THAT far apart in terms of capabilities, even between Einstein and the village idiot, because on an absolute scale, Einstein isn’t that smart. When there’s an intelligence that makes Einstein look like a goldfish (albeit a volitional goldfish), then there’s a potential for serious downgrading of human capacity in terms of the law.

    A being that’s twice as smart as Einstein might sympathize more with Einstein’s desire to be supermodified, than a being that’s a million times smarter than Einstein. Also, what will such super-beings think of the human repeating trend toward democide? What will they think of the fact that most humans now on the surface of the planet are not even familiar with the term democide? Do we have access to google? How stupid are we? LOL. …Very.

    Also, if we are judged next to our peers, and the smartest people are the ones who fight against democide, and those are technically the only moral people, might such moral people doom the rest of humanity to death? After all, if humans have the capacity to rationally prioritize moral objectives, the most rational people are the ones who disfavor democide, and do not participate in it. Yet, this is a demonstrably and measurably small subset of humanity. (Don’t think so? What role have you played, via your stated intentions and votes, in the burgeoning US prison system that imprisons over 1.4 million people for victimless crime offenses? What role have you played in allowing the FDA to murder millions of innocent people, by playing a coercive role, instead of an advisory role? If you’ve actively participated in their power, you are culpable. If you’re too dumb to see the destruction, you’re still culpable, because you acted in ignorance, and without investigation when informed that you were hurting innocent people. Willful ignorance, according to Nuremberg, is never an acceptable defense. If someone informs you that your actions have resulted in murder, you need to investigate that fact, rather than proceed. Ie: first, do no harm.)

    OK, Now I’m watching the video, and intend to post even more inflammatory comments after I’ve seen it. Hayek is dead. …Long live Hayek!

  • http://www.singularityweblog.com/ Socrates

    Thank you, friend. I know I am far from perfect, but I try to do my best. Plus, I hope that you’d agree that over time, though not in an easy straight line, I may be improving… ;-)

  • Jacob Witmer

    Note: I agree with many of your ideas, and am glad to see a Christian engaging with these ideas in a constructive manner. One idea I disagree with is the notion that religion/faith has something to offer this debate other than confusion and irreconcilable conflict.

    How is your “Christian identity” different from your “adherence to the non-aggression principle” identity? Are those two things the same? If so, why cling to Christianity, which has a lot of self-contradictory mumbo-jumbo thrown in? Is it because Christianity’s more favorable readings tend to indicate that there’s value in altruistic actions? If so, why not emphasize “rational cost-benefit” altruism over self-defeating and self-sacrificial altruism? (Other aspects of the Old Testament are totally evil and contradict even the limited value Christ’s often-self-contradictory teachings might bring someone.)

  • http://lincoln.metacannon.net/ Lincoln Cannon

    Hi Jacob. This probably isn’t the place to elaborate at length on your questions. I invite you to listen to my podcast conversation with Socrates, visit Transfigurism.org, and take a look at my blog (Lincoln.metacannon.net), where I’d be happy to respond to these kinds of questions.

  • Jacob Witmer

    I’m at 15:16 in the video, and both the interviewer and subject are somewhat infuriating me by talking past one another. Nikola raises the point that the Third Reich would look very different from even a degraded democracy, but doesn’t name significant differences. Robin then, with no specific legitimate differences to attack, rises to the occasion of defeating insignificant objections. Aaargh.

    “The Third Reich would be a democracy by now.” Aaaargh. (Has Robin never encountered the argument online at “Democracy Defined” that “suffrage =/= democracy”? R. J. Rummel is familiar with this argument. Moreover: hasn’t relatively similar tyranny been a constant? Isn’t tyranny still omnipresent (if reduced), even if Steven Pinker is right? …Definitely. Is Robin Hanson familiar with R. J. Rummel? It seems not, from this interview.) Has Robin read “The Ominous Parallels”? (Then again, has Leonard Peikoff read “The Ominous Parallels”? Such a question indicates how absolutely weak even optimal human minds are. Check out ARIwatch.com for specific examples.)

    I greatly appreciate Robin escaping the trap of “suitcase words” at 22:00 and demanding that they be unpacked. To do otherwise is lazy on Nikola’s part. Then again, Nikola seems intent on holding Hanson to a discussion of voluntary (moral) action vs. coercive (immoral) action. Another example of both sides talking past the conflict. I’ll have to see how long this continues…

    At 33:00 now. Painful. Painful. Nikola doesn’t ask Hanson whether his future is a democidal one (he seemingly lacks the vocabulary). He tries to do so, but the speed of the possible revolution (the “revolutionary question”) is left out. Yet that’s the primary variable of importance. Seemingly, then, Robin doesn’t comprehend this (although I know he must). It’s not the displacement that’s the question, it’s the speed of the displacement. That this isn’t recognized INSTANTLY makes it seem like neither party has ever heard of Ray Kurzweil. Painful to watch.

    OK, Hanson finally identifies that speed is the issue. Or, as Kurzweil has beaten into the ground, analyzing it nine (thousand) ways to Sunday, “accelerating change.” Then Nikola misspeaks and calls previous paradigm shifts “singularities.” PAINFUL! That butchering of the term should not be acceptable to anyone who has read “Engines of Creation” or “The Singularity is Near.” No, there has not been a “physical singularity”; there have been seemingly large paradigm shifts that look like tiny, truncated “S-curves” long before the knee of the coming singularity. (What Kevin Kelly calls growing by “chunking,” or “incremental leaps.”) These comparatively infinitesimal paradigm shifts do not constitute a singularity, which is the point Nikola is obviously trying to make. It’s amazing how, now that there is an intelligent perspective within a range of thresholds, people dicker over mutually-inclusive variations within positive thresholds of the possible future, while ignoring the far more pressing issue of what should be done pre-singularity. At the point of the singularity, and after it, as Nikola and Hanson both note, we MOSHs lose any ability to make a lick of difference.

    I’m getting the distinct sense that this “prediction” by Robin Hanson is a kind of mental exercise that asks interesting, unanswerable questions about the post-singularity. As such, it’s a theoretical exploration that largely ignores the severely different conjectures investigated in detail by other Singularitarians. To me, this “post-singularity analysis” is not very interesting. I can imagine many incentive structures far different from what Hanson imagines, and I can also imagine how his universe fails to displace even the MOSH (Mostly Original Substrate Human) structure depicted by Kurzweil. Kurzweil simply added that a range and an ongoing diversity (in addition to the implications expounded on by Hanson) are both likely.

    I tend to have fanciful ideas that there will be a lot of biological or bio-similar bodies around, that the range or “territory” of humans will expand in accord with their mental capacities, and that human defensive structures will grow more robust. With all the cps (calculations per second) now run by every human mind in North America fitting into a cubic centimeter, there will be fewer problems to solve than there will be human minds to solve them. Some may choose a relatively idyllic existence; many others, I believe, will create complex networks and artwork. Some may be “ems,” but others will likely choose to remain human-sized. The neocortex of some may be expanded far beyond the old brain, resulting in similar drives but more capacity. All areas may be expanded in others, who may become god-like and develop goals unknowable to humans (I think this is incredibly likely, and that almost all post-singularity ideas from humans will be wrong, because they can’t be imagined or comprehended now). I think “valuable diversity” will become a major value, and will be preserved and expanded upon. (Assuming dystopian or sociopathic minds with perverse incentives don’t rule the day, or run wild. Ideally, such minds would not be extincted, but merely allowed a fantasy-type domain, or an outlet via creative works, and constrained by a still-constant prohibition on aggression.) These ideas are very fanciful, but they contrast with Hanson’s ideas significantly, because they posit that a superhuman mind will (likely) always refuse to have a boss, and will always trend toward entrepreneurship and self-differentiation. (This also is almost certainly wrong. The “em” condition, as posited, is totally unknowable, and likely not dependent on human brain emulation. Human brain emulation is already growing unnecessary, and hardware capacity is the key barrier to implementing the superhumanity now envisioned.) Still, Hanson could be right. Neither of us has the slightest clue when it comes to post-singularity emergence, social orders, etc.

    Then Nikola tells us what “socially libertarian” means, contradicts the meaning of libertarian (after having just used the word “capitalism”), and then contradicts his own stated understanding of libertarianism. He then extends an incorrect view of “laissez-faire” economics to the post-singularity prediction (ignoring Kurzweil’s works).

    Neither party breaks down libertarianism properly, nor makes distinctions about fundamentals. Nikola seems not to comprehend the definition of libertarianism. Nikola’s use of historical examples to reason about the post-singularity is abjectly, demonstrably stupid. The debate needs to be far better than this. Notice that Kurzweil does not make this severe mistake in ANY of his books.

    The quality of this debate was not up to even a CURRENT understanding of mainstream (Hayekian) libertarian ideas. In this discussion, Hanson’s predictions rely on the entire science of economics CONTINUING to be ignored, and on the entire discipline of economics THEREFORE continuing to have a negligible impact on social structure. This is, itself, an extreme and unfoundedly pessimistic view. As such, this view implicitly denies any increase in intelligence above the human level, as does Nikola’s disagreement with the morality of laissez-faire. Ironic.

    Ultimately, I agree with the point that Nikola was trying to make, even though Hanson understands far more than Nikola does, and is repeatedly flummoxed by Nikola’s seeming lack of comprehension of his own points. This leads me to a comment I’ve often made before: the future, if it’s worth visiting, will definitely be libertarian, but it most certainly will not be a boneheaded, incomplete, self-contradictory libertarianism. It will be consistently libertarian, and that means it will be a future of complete and total personal choice.

    We’ve all been liberated from the chore of guiding ox-drawn plows through soil, in the manner of Josey Wales. Virtually no one, no matter how poor, is required to do this in order to eat. Just as labor has been immensely reduced for most, the future brings immensely reduced labor, if chosen, for a broader segment of society, and an increased healthspan for a broader segment of society.

    Why would artilects choose to kill all the humans? Why even displace them all? No reason is given. The idea that artilects would be emotionally stunted does not follow. Nor does the idea that a person who doesn’t want to die would choose to remain unamplified. Sure, I see a lot of religious deathist suicides in the future, if only out of boredom and sub-optimal brain function, but very little poverty that forces this despicable decision. Once the healthspan is unbounded, and it’s possible to migrate one’s mind to machines, only the chronically backwards would refuse. Even so, we try to avoid killing gorillas and dolphins when there are cows available for meat. Right now, we can grow brainless muscle tissue for edible meat. After the point of advancement where even that is unnecessary, why posit any kind of conflict that isn’t good vs. evil? There may be evil or conflict, but it will be chosen; it won’t be the unfortunate imposition of “poverty creates conflict.”

    Let’s say there are billions of complex stories and mental states that can be comprehended by artilects, and that human nerve structure becomes their chosen form of interaction with reality, by default of the first artilects being designed to interact with and work for humans. That still leaves an artilect that would respect not only humans, but also jungles and desert ecosystems. It still leaves human forms that shield human minds a billion times more powerful.

    Sure, there will likely be an “emulation-based reality,” at some point, somewhere. Is that a natural outgrowth of the singularity? Sure. But is it displacing, and a linear result from prior human societies? No. EVERY SINGLE ARTILECT will instantly comprehend the non-aggression principle. This won’t be as difficult for artilects as it is for primates. …Unless the artilects contradict the basic premise of higher intelligence than humans. …But not only will they be smarter, they’ll be billions of times smarter. In which case, democratic conflicts won’t even come close to existing, unless the future artilects are sociopathic, in which case, nobody can conjecture about which “leading force” or combination will prevail.

    This is why Ray Kurzweil mostly talks about the pre-singularity when he writes about the singularity using any kind of “social science.” What most primates mean when they talk about “social science” is monkeys shooting one another. Not likely to feature into artilect existence, in my opinion. Already there are humans that favor benevolence and abundance, and those humans dramatically outperform the aggressors in quality of life, and production.

    I see a swift demise of the totally illegitimate tyranny we are now tyrannized by. I see the ascendance of self-government, within an acceptable range of non-aggression. To deny this, pre-singularity, is to deny that artilects will be able to tell the difference between production under “equality under the law” versus production under slavery. Nikola touched on this, but in a very unsatisfactory and incomplete way. To posit otherwise is to posit a post-singularity with IQs numbering in the thousands that still can’t figure out a way to reduce primate aggression.

    Even today, I see a solution for minimizing primate aggression. It’s called a restoration of proper jury trials. It’s one-half of democracy, in any educated society. Does Robin really think this is unlikely? Referring to Rome in defense of any idea to do with the post-Singularity is just plain silly. The concept of the singularity doesn’t just mean “faster computers and more production.” It means “better reasoning capacity.”

  • Jacob Witmer

    “The most serious error is Robin’s assumption that Ems will experience the same loss of mental plasticity after a subjectively-similar lifetime as their biological-substrate-based prototypes.” I strongly agree with this point, and its subpoints.

    I totally disagree with everything else you wrote, on the grounds explored by Robert Freitas, Hans Moravec, and Ray Kurzweil. I think they are very much likelier to be correct, for the reasons Kurzweil outlines in “The Singularity is Near.” Biology, at some point, simply cannot compete. Of course, the technology will obey certain biological laws, because those laws, as indicated by evolution, are already optimal.

    Speed and memory optimization are not already optimal, nor even close to what machines will accomplish.

  • Jacob Witmer

    The email from Socrates was disappointingly titled. “Extremist Politics” …Really? What are you, Nikola, a mindless mainstream media talking head? “…Extremism in the defense of liberty is no vice.” When the Nazis were in power, to use your example, any radical defense of jury trials would have been considered “extremist politics.”

  • Jacob Witmer

    Nikola goes off on tangents that contain a lot of variables, draws a conclusion from those variables, and then says to Robin: “Respond!” Then, Robin actually proceeds to defend the indefensible. I agree with Nikola about the necessity of progress and increase of wealth. The only reason why life still exhibits the qualities of “Nasty, Brutish, and Short” is that government predators and parasites claim 98% or more of the productive wealth of the nation. Has neither of these two people read Harry Browne or Samuel Konkin? Those thinkers were not as complex as Kurzweil, but at least they had a coherent philosophical base.

  • Jacob Witmer

    For his part, I’m amazed that Hanson seems to think that artilects will behave like a bunch of irrational human primates. Maybe they will. Maybe we’ll expand the neocortical capacity for dominating our environment, without also expanding our goal structures, but I doubt it. Kurzweil has already addressed all these issues far more intelligently. The issue of cyborgist continuity addresses the “disruptive uploading” scenario/red herring. Then again, most people are conformists governed by sociopaths right now, so they might screw up the entire pre-singularity. In fact, if smart people like this can hold so many bad ideas, that’s the best argument I’ve ever seen to just “Nuke it from space, it’s the only way to be sure.”

  • http://profiles.google.com/externalmonologue Matthew Fuller

    Can you get Yudkowsky on your show? He apparently disagrees, and the Hanson-Yudkowsky debate that I saw on YouTube didn’t help much. If you could clarify the real differences in this trio of opinions, that would be a great service.

    Also, I don’t find many-worlds or Ems highly likely to be true, because neither can obviously be rendered into technology. They are just abstractions of abstractions, much like theology; and just as theology used to have high plausibility, I believe these ideas will die with better empirical testing.

  • http://www.singularityweblog.com/ Socrates

    I’d love to get Eliezer on the show, Matthew.

    Unfortunately, last time I spoke to him directly at the 2011 Singularity Summit he told me that he’s decided not to do interviews but focus on his work instead…

  • SteveO

    I am familiar with the views of the people you note, and am excited by and agree with most of it. I also understand that, as you say, the speed and density of our non-biological technologies are far from optimal today (though you may note that microprocessor clock speeds have nearly stalled in the last decade, and Moore’s Law improvement has pushed almost exclusively in the density direction, i.e., multicore and SoC). My perhaps-poorly-expressed points with which you disagreed were intended to point out that, if large-scale simulations like the IBM/LBNL effort I cited are valid indicators of the approximate state of our art, we are today some 8 or 9 orders of magnitude away from being able to support a direct emulation of a human-scale brain in a non-biological substrate that is as efficient, dense, and low-cost as the current biological one.

    Notice the qualifiers in that statement. I am talking about power, density, and cost equivalence with the current biological state of the art currently housed in your head and mine. The IBM/LBNL simulation was 3-4 orders of magnitude smaller/slower than a human brain, despite running on a substrate consuming 6 orders of magnitude more space and power. We can argue about how long it will take to achieve biology equivalence across the board, but it has taken Moore’s Law some 40 years to go a similar exponential distance in feature density to get us where we are today. To achieve the same exponential leap going forward involves a lot of unknowns and radically different approaches to IC processes, unlike the past. ICs today are still planar and, except for scale, generally resemble the early processes used to produce the first microprocessors. I think it is safe to say that the non-biological substrate that we develop that achieves the next 8 – 9 orders of magnitude improvement over today will necessarily look more like biological substrates (e.g., 3d structure, cellular neuromorphic architecture, self-assembly process, etc.) than it will resemble today’s microprocessors.

    I’m not saying we won’t get there, I think and hope we will. And even beyond. But not infinitely beyond, as just a few years of Robin’s double-every-couple-weeks scenario demands. Biology is already very efficient, especially in the density dimension. Are we going to be able to clock our nanomachines much faster than biology? Absolutely. Are we going to engineer data storage orders of magnitude more dense than DNA? Doubt it. That places a lower limit on the cost of an Em that is perhaps only marginally cheaper than biology. It’s a density limitation that is independent of clocking speed. And that was my secondary point on the absurdity of the technology assumptions underlying Robin’s scenario.
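    SteveO’s orders-of-magnitude argument can be sketched with quick arithmetic. This is only a rough back-of-envelope: the 8-9 order gap, the roughly two-year doubling pace, and Robin’s doubling-every-couple-weeks scenario are all figures quoted in the comments above, not independent measurements.

```python
import math

# Figures quoted in the comments (assumptions, not measured constants):
# the claimed gap between today's hardware and a brain-equivalent
# non-biological substrate is ~9 orders of magnitude.
gap_orders = 9
doublings_needed = gap_orders * math.log2(10)         # about 30 doublings

# At a Moore's-Law-like pace of one doubling every ~2 years:
years_at_moore_pace = doublings_needed * 2            # roughly 60 years

# Robin's scenario of the economy doubling every couple of weeks:
doublings_per_year = 52 / 2                           # 26 doublings per year
orders_per_year = doublings_per_year * math.log10(2)  # ~7.8 orders per year

print(f"doublings to close the gap: {doublings_needed:.1f}")
print(f"years at one doubling per two years: {years_at_moore_pace:.0f}")
print(f"orders of magnitude per year in the em scenario: {orders_per_year:.1f}")
```

    On these assumed numbers, closing the whole gap takes decades at the historical pace, but little more than a year at the doubling-every-couple-weeks pace; that mismatch is the tension SteveO is pointing at.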

  • Pingback: James D. Miller on Singularity 1 on 1: Prepare for a Smarter World

  • http://www.hairlosstalk.com/ KR

    This interview drove me insane. Many futurists clearly have a naïve, hopeful desire for the future to be all peaches and cream. And that’s Socrates’ right. And some of his viewpoint is founded in a rational belief that as time progresses, humanity irons out its moral, ethical, and societal failings. But it was painfully obvious there is also a nerdy sci-fi Futurist element pervading his viewpoint, which fiercely defends the hope that the future will be… must be… wonderful.

    This is partially because the Futurists’ religion *is* the future, and technology. Their “Heaven” is the possibility of a wondrous future, free of sadness and pain. I wonder if Socrates realizes just how much like a religious person he is, in this regard? Forget rapture of the nerds. The entire Futurist mentality is simply a replacement for religion. It’s an answer to death. It’s a hope for the future, amidst a currently disappointing world of sadness and loss. There is no difference. Many futurists cling to a hope of a better future because they are so disillusioned with the present. Many don’t live in this world, but live for what they dream to have in their immortal “electronic” future existence.

    Telling a futurist that the future may not be like “Heaven” is no different than telling a Christian that there is no Heaven. And Socrates acted just like a Christian whose faith was being attacked. This resulted in a discussion that played out like most religious debates: a talk saturated with intolerance of opposing viewpoints. Robin’s viewpoint simply stated that the future may not be all peaches and cream; that history tends to repeat itself, and we must consider the possibility that there may be a mix of good and bad. Not exactly an upsetting viewpoint to those who view the future rationally, instead of as their replacement for “Heaven”. It was difficult to listen to the intolerance of other viewpoints because of ideologies.

  • http://www.singularityweblog.com/ Socrates

    Dear KR, I absolutely don’t think that the future “will be… must be… wonderful”. In fact, at the very beginning of the interview I started by saying that, in my view, it could go either way, i.e. it could get much worse or much better than Robin claims, and I tried to stress that it is our choices and consequent actions that will make the difference. So, in that sense, Robin’s argument is a lot more deterministic in claiming that the past says “blank” and therefore the future will inevitably also be “blank”… And that people don’t change, regardless of everything else changing around them.

    Again, I am willing to consider both utopias and dystopias and everything in between, as long as we don’t claim that “good social science” will inevitably take us down only one very specific path…

    If you have heard any of my previous interviews with a few skeptics and critics of the singularity you’d know that I am perfectly fine considering any of those points of view, as long as they are not masquerading as social science…

  • Matthew_Bailey

    Hanson is one of those people who has nothing intelligent to say on the subject of human well-being.

    He is very much like a person claiming to be a chemist, yet insisting that air is made of unicorn breath and bubble gum.

    Such claims are not to be taken seriously.

    Yet he has enough academic weight that people often do not stop to consider “Did he just say something that outlandish, or am I imagining it?”

    He represents a strain of academics at the present who are undermining all of modern progress with a desire to move backwards to a more insular and tribal way of life.

    He claims to represent “The True Vision of Liberty,” yet he denies people the right to Life, Liberty, and the Pursuit of Happiness, claiming that those rights are contingent.

    Rights are not contingent, privileges are. Rights are things that one needn’t do a damn thing to possess.

    Pathological Libertarianism is a problem in our world at the moment, due to people who seem to think that they have a “Right” to oppress others, claiming that society’s failure to let them exercise their prejudices amounts to oppression. That is a rather perverse and obtuse ideology.

  • PsychoPigeon

    What is extremist politics? He has some libertarian influences and it makes him an extremist?

  • Pingback: Socrates at Newtonbrook Secondary School: Be Unreasonable!

  • Jamie Suthers

    If I’m not mistaken, Dr. Hanson is working under the presumption that a future economy will retain a high value on what is rare: talent and commodities? If so, does he address in the book what an early appearance of atomically precise manufacturing, even in a not-so-precise early iteration, might do to the commodities market? Talent, skill, and education may always be around in a form that maintains its specialization categories (if only on the premise that it serves a personal interest, and is not a barrier imposed by a lack of intellect).

  • http://www.singularityweblog.com/ Socrates

    It is a reasonable question Jamie. And, in the draft that I read, I don’t think he even touches upon that issue…