
posthuman

Francesca Ferrando on Philosophical Posthumanism

January 20, 2021 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/969285499-singularity1on1-francesca-ferrando.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Though admittedly posthumanist, Francesca Ferrando’s Philosophical Posthumanism is the best book on transhumanism that I have read so far. I believe that it is a must-read for transhumanists and non-transhumanists alike. In fact, one can argue that Ferrando’s book ranks right up there with the very best not only on the transhuman, but also on the human and the posthuman. The reason for that is simple: Philosophical Posthumanism cracks open, deconstructs, and demystifies all the major historical -isms. Furthermore, it not only lays bare words such as technology but also shows us how all the puzzle pieces fit together in the historical, ideological, theological, philosophical, etymological, scientific, and decidedly political realms, like nothing else that I have read before. I hope you enjoy my conversation with Dr. Ferrando and invest the time and the effort to read her book.

During this 2-hour interview with Francesca Ferrando, we cover a variety of interesting topics such as: why I believe Philosophical Posthumanism is a must-read; why the etymological and other roots of a movement matter; child sociology and social mythology; our shared love for Ancient Greek mythology; the definitions of humanism, transhumanism, and posthumanism; why post-modernism is like the Quantum Mechanics of the humanities; the false distinction between human and transhuman; why the Hedonistic Imperative is merely a new version of the White Man’s Burden; theism and techno-solutionism; Martin Heidegger and the definition, poiesis and ontological power of technology.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

Who is Francesca Ferrando?

Francesca Ferrando teaches Philosophy at NYU-Liberal Studies, New York University. A leading voice in the field of Posthuman Studies and founder of the Global Posthuman Network, she has been the recipient of numerous honors and recognitions, including the Sainati prize with the Acknowledgement of the President of Italy.

Ferrando has published extensively on these topics, culminating with her latest book Philosophical Posthumanism (Bloomsbury 2019), and she was the first speaker in the history of TED talks to give a talk on the topic of the posthuman. Those are just some of the reasons why the US magazine “Origins” named Francesca Ferrando among the 100 people making a change in the world.

Filed Under: Podcasts, Profiles Tagged With: Humanism, posthuman, transhuman, transhumanism

X-Men First Class: Transhumanism for the Masses or Aren’t We All Mutants?

June 11, 2011 by Socrates

Have you ever felt lonely, struggled to fit in and be normal or been rejected just because you’re different?

Have you ever sought to find out who you really are?

Have you ever wondered if there are others who feel like you?

If you have, then you will probably like the new X-Men: First Class.

The fifth and, in my opinion, best installment of the series is not a movie about being a comic superhero endowed with amazing powers. It is a movie about being human and facing all the accompanying eternal questions of the human condition. A movie about feeling different and trying to find your place. About striving to fit in, to be accepted, to be normal. (Whatever that may mean?!) About finding out who you really are and embracing it all – the amazing as well as the imperfect parts of us.

We relate to the characters not because they have superhuman powers but because they have very human problems.

Transhuman is human. Just a more gifted one. But talent often undermines character, and the same age-old questions are not only still relevant but more acute than ever. Thus the film depicts the X-Men as a mirror image of our imperfect humanity, with all of its faults and failures.

Superheroes or (trans)humans?

In addition, X-Men: First Class weaves in and touches on a variety of other important topics such as: war and peace, genocide, transhumanism and bio-hacking, evolution and the birth of a new species, beauty, fear of what’s different and unknown, hatred.

The mutants have little in common with each other, other than the fact that they are all different. Yet, ironically, what connects them all (and us) is our humanity — that which is retained even after getting all the super-powers we can ever think of.

We are the human race and we can be or become as any one of these superheroes or supervillains. We get lonely, we struggle to fit in and find our place; we ask the same age-old questions. We are Dr. Jekyll and Mr. Hyde, Professor X, Magneto and Sebastian Shaw. We are black, white, brown, yellow, red and blue (and all the other colors too).

We are Good and Evil.

We are all different and unique, yet human.

So, aren’t we all mutants?

***

Socrates’ verdict: 10 out of 10 (must watch)

Related articles
  • Transhumanism and the Technological Singularity
  • A Transhumanist Manifesto
  • Who are the Heroes of Transhumanism?
  • Transhumanism for Children
  • Enough Is Not Enough: The Integration of Transhumanism into Pop Culture
  • The Charlie Sheen Guide to Predicting Our Transhuman Future

Filed Under: Op Ed, Reviews Tagged With: posthuman, transhumanism

Transhumanist Philosopher Max More: Question Everything

March 20, 2011 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/188125027-singularity1on1-max-more-transhumanism.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Today my guest on Singularity 1 on 1 is transhumanist strategic philosopher Max More. (As always you can listen to or download the audio file above or scroll down and watch the video interview in full.)

As the CEO of the Alcor Life Extension Foundation, Dr. More has a full schedule. Nevertheless, he generously managed to squeeze two 30-minute interview sessions into his busy day.

During our conversation we discuss issues such as: Max’s early life and childhood heroes; his interest in economics, political science, and philosophy; transhumanism and extropy; the proactionary and precautionary principles; cryonics and the Alcor Foundation; his Paleo diet and exercise regimen; why it is important to question everything (and especially yourself).

To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

To find more about Max More visit his website here.

My favorite quote from Max More:

No more gods, no more faith, no more timid holding back. Let us blast out of our old forms, our ignorance, our weakness, and our mortality. The future belongs to posthumanity.

My favorite short articles: A Letter to Mother Nature; The Proactionary Principle; Transhumanism: Towards A Futurist Philosophy

What others have said about Max:

Ray Kurzweil: “Max More’s ideas are very influential among other “big thinkers,” who in turn are influence leaders themselves. Max’s writings represent well-grounded science futurism, and reflect a sophisticated understanding of technology trends and how these trends are likely to develop during this coming century.”

Marvin Minsky, often called “the father of artificial intelligence,” said of Dr. More: “We have a dreadful shortage of people who know so much, can both think so boldly and clearly and can express themselves so articulately. Carl Sagan was another such one—and (partly by paying the price of his life) managed to capture the public eye. But Sagan is gone and has not been replaced. I see Max as my candidate for that post.”

 

Video Update:

A fantastic, straight-to-the-point interview with Max More discussing the singularity, transhumanism, technological progress, human enhancement, and more.

Related articles
  • Andy Zawacki Gives Us A Tour of the Cryonics Institute
  • My Video Tour of Alcor and Interview with CEO Max More
  • Cryonics: The Meaning and Story of Cryogenic Biostasis
  • Ken Hayworth on Singularity 1 on 1: Brain Preservation is the Logical Lifeboat
  • Natasha Vita-More on Singularity 1 on 1

Filed Under: Podcasts Tagged With: Max More, posthuman, singularity podcast, transhumanism

The Charlie Sheen Guide to Predicting Our Transhuman Future

March 19, 2011 by wpengine

As technology follows its Moore’s Law speedway toward exponentially increasing power and ubiquity, futurists are just as rapidly falling into two schools of thought on how humans will handle this new-found power.

Nanotechnology, artificial intelligence, immersive virtual reality, and dozens of other tools and technologies are poised to transform life in fundamental ways. Repetitive tasks and duties that most people think are odious could disappear. Robots will cook, clean, cut the grass, and perform dozens of other jobs that we — or at least, I — try to avoid. Virtual reality will become better than the real thing.

For those who have accepted this technocentric future, the real question is how humans will deal with this transformation. Not everyone thinks transhumanity is going to be better. In fact, some believe that the future will lead to lazy, over-indulged, shallow-thinking slugs who will probably end up starving themselves to death in a virtual reality environment.

And these are the optimistic ones. There’s always the chance that future technologies will destroy humanity entirely. Yay.

So what future will it be?

I think it’s possible to accurately model our future. We already have a group of people in society who have lots of time on their hands, employ robotic-like workers to satisfy every need, and can access incredible amounts of wealth. We call them celebrities.

 

Is this our guide to the future?!

The future pessimists out there only have to point to what I call the Charlie Sheen guide to predicting the future. Once humanity encounters runaway abundance, effortless attainment, and gobs of time on our hands, we’ll all end up in a semi-lucid stupor, spouting off about tiger blood and trolls while dipping into softball-sized mounds of cocaine — or whatever nootropic we’ll have on hand in the near future.

Case closed?!

Even though there is a seemingly inexhaustible supply of celebrities to assure us we all face a Charlie Sheen future — this could have easily been called the Lindsay Lohan scenario — there are other members of the rich and famous set who point to another future. Some celebrities who, granted, had more than their share of Charlie Sheen moments, grow bored with some of the baser human desires and struggle (without any pharmaceutical assistance) to achieve higher levels of consciousness. We could put the late George Harrison, of the Beatles, in this category: he grew disillusioned with his celebrity status and devoted himself to pursuing Eastern religions and philosophy. Cat Stevens converted to Islam and became Yusuf Islam.

Other celebrities — cushioned with time and money — devote themselves even more to their art, achieving higher forms of transcendence.

So, the answer to the question — “Will future technologies trap us or free us?” — appears, like all great questions, to be yes — and no.

Here’s the key: if human nature remains the same in the future, then the future will remain the same in human nature.

 

Strawberry Fields Forever Anyone?!

About the Author:

Matt Swayne is a blogger and science writer. He is particularly interested in quantum computing and the development of businesses around new technologies. He writes at Quantum Quant.

Filed Under: Op Ed, What if? Tagged With: Futurism, posthuman, transhumanism

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 5)

December 20, 2009 by Socrates

Part 5: The Future of (Military) AI — Singularity

Arming machines, while certainly dangerous for humans (especially those specifically targeted by the kill-bots), is not on its own a process that can threaten the reign of homo sapiens in general. What can is the fact that it is occurring within the larger confluent revolutions in Genetics, Nanotechnology and Robotics (GNR). Combined with exponential growth trends such as Moore’s law, we arguably get the right conditions for what is referred to as the Technological Singularity.
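A minimal sketch of the exponential arithmetic behind that claim (the two-year doubling period used below is the figure commonly associated with Moore’s law, not a number from this post):

```python
def growth_factor(years, doubling_period=2.0):
    """Capacity multiplier after `years`, assuming capacity doubles
    every `doubling_period` years (the usual Moore's-law cadence)."""
    return 2 ** (years / doubling_period)

# Twenty doublings over four decades compound to about a million-fold.
print(f"{growth_factor(40):,.0f}")  # prints 1,048,576
```

It is this compounding, not any single generation of hardware, that makes the GNR trajectory feel so abrupt when projected a few decades out.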

Alan Turing

In 1945 Alan Turing famously predicted that computers would one day play better chess than people. Fifty years later, a computer called Deep Blue defeated the reigning world champion Garry Kasparov. Today, whether it is a mouse with a Bluetooth brain implant that directs its movements via laptop, a monkey moving a joystick with its thoughts, humans talking through their thoughts via computers, or robots with rat-brain cells for a CPU, we have already accomplished technological feats that mere years ago were considered complete science fiction.

Isn’t it plausible, then, to consider that one day, not too many decades from now, machines may not only reach human levels of intelligence but even surpass them?

(Facing the pessimists, Arthur C. Clarke once famously said that “If a … scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.”)

Isn’t that the potential, if not actual direction towards which the multiple confluent and accelerating technological developments lead us?

It is this moment – the birth of AI (machine sapiens) – that is often referred to as the Technological Singularity.

Ray Kurzweil

So, let us look at the concept of the Singularity. For some it is an overblown myth or, at best, science fiction. For others, it is the next step in evolution and the greatest scientific watershed. According to Ray Kurzweil’s definition: “It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”

According to Kurzweil’s argument, the Singularity is nothing short of the next step in evolution (a position often referred to as Transhumanism). For many millions of years biology has indeed been (our) destiny. But if we consider our species to be a cosmological phenomenon, with its unique feature being its intelligence and not its structural makeup, then our biological past is highly unlikely to depict the nature of our future. So Kurzweil and other transhumanists see biology as nothing more than our past and technology as our future. To illuminate the radical implications of such a claim, it is worth quoting two whole paragraphs from Ray Kurzweil:

“The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but transcends our biological roots. […] if you wonder what will remain unequivocally human in such a world, it’s simply this quality: ours is the species that inherently seeks to extend its physical and mental reach beyond current limitations.”

“Some observers refer to this merger as creating a new “species.” But the whole idea of a species is a biological concept, and what we are doing is transcending biology. The transformation underlying the Singularity is not just another in a long line of steps in biological evolution. We are upending biological evolution altogether.”

So how does the Singularity relate to the process of arming AI?

Well, most singularitarians believe that the technological Singularity is a probable and even highly likely event, but most of them certainly do not believe that it is inevitable. Thus there are several potential developments that can either delay or altogether prevent the event itself, or any of its potential benefits for homo sapiens. Global war is, of course, at the top of the list, and it can lead in either of those directions. In the first case, a sufficiently large-scale non-conventional war could destroy much or all of humanity’s capacity for further technological progress. The Singularity would then be at least delayed or, should homo sapiens go extinct, become altogether impossible. In the second case, if at or around the Singularity there is a conflict between homo sapiens and AI (machine sapiens), then, given our complete dependence on the machines, there may be no merging between the two races (humans and machines) and humanity may forever remain trapped in biology. In turn, this may mean either our extinction or our becoming nothing more than an inferior, i.e. subservient, race to the ever-growing machine intelligence.

It is for reasons like those that some scientists believe Ray Kurzweil is dangerously naive about the Singularity, and especially about the benevolence of AI with respect to the human race. They argue that the post-Singularity artilects (artificial intellects) will take us not to immortality but at least to war, if not to complete oblivion. In a way this is a debate about the potential for either techno-salvation, as foreseen by Ray Kurzweil, or techno-holocaust, as predicted by his critics. Whatever the case, the more and the better the machines of the future are trained and armed, the more possible it becomes that one day they may have the capability, if not (yet) the intent, to destroy the whole of the human race.

The potential for conflict is likely to increase as the singularity approaches, and it need not necessarily be a war between man and machine; it can also be among humans. Looking at current global geopolitical realities, one may argue that a global non-conventional war is unlikely if not completely impossible. Yet, for the next several decades, the potential for such a war may indeed grow with the pace of technology.

First of all, it is very likely that there will be a large and accelerating proliferation of advanced weapons and of military, technological, and scientific capabilities throughout the twenty-first century. Thus many more state and non-state actors will be capable of waging or, at the very least, starting a war.

Secondly, as the singularity approaches the breakpoint and becomes a visible possibility, there are likely to be fundamental rifts within humanity as to whether we ought to continue or stop such developments. Thus many people may push for a global neo-luddite rebellion against the machines and all those who support the Singularity. This may lead to a realignment of the whole global geopolitical reality, with both overt and covert centers of resistance. For example, one possibility may be an alliance between radical Muslim, Christian, and Judaic fundamentalists. (It may currently seem impossible, but it is people such as former chief counter-terrorism adviser Richard A. Clarke who raise these as possibilities.)

It was in the rather low-tech mid-19th century that Samuel Butler wrote his Darwin among the Machines and argued that machines would eventually replace man as the next step in evolution. Butler concluded that:

“Our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude has commenced in good earnest, that we have raised a race of beings whom it is beyond our power to destroy, and that we are not only enslaved but are absolutely acquiescent in our bondage.”

Unabomber FBI Sketch

Another well-known modern neo-luddite is Ted Kaczynski, aka the Unabomber. Kaczynski not only called for resistance to the rise of the machines in his manifesto (see Industrial Society and Its Future) but even started a terrorist bombing campaign to support and popularize his cause. While Samuel Butler’s argument was largely unknown or ignored by the majority of his contemporaries, and the Unabomber was dismissed as a terrorist psycho, history may take a second look at them both. It is not impossible that, as the Singularity becomes more visible, if not to the whole of humanity then at least to the neo-luddites, Butler may come to be seen as a visionary and Kaczynski as a hero who stood up against the rise of the machines. Thus, if humanity gets divided into transhumanists and neo-luddites, or if the machines rebel against humanity, conflict may be impossible to avoid.

It may be ironic that Karel Čapek, who first used the term robot, ended his play R.U.R. with the demise of humanity and robots taking over the world. The good news, however, is that this possibility is brought about by our own ingenuity and at our own pace. Hence the technology we create does not have to be nihilistic like the Terminator; it may be our exterminator or our savior, our end or a new beginning…

This blog does not try to address the issue of arming AI exhaustively, or to provide solutions or policy recommendations. What it attempts to do is put forward an argument about the issues, the context, and the stakes within which the above process takes place. Thus, it has been successful if, after reading it, one is at least willing to consider the possibility that the crude and lightly armed robots currently tested in the conflicts in Iraq and Afghanistan are not simply among the latest tools in the large US military inventory, for what they are today is not what they may turn out to be tomorrow.

Today we are witnessing the dawn of the kill-bots. How high, and under what conditions, the robot star will rise tomorrow is up to us to consider…

the End (see Part 1; Part 2; Part 3; Part 4)

Related articles by Zemanta
  • SINGULARITY UPDATE: Machines could ultimately match human intelligence, says Intel CTO. “The notio… (pajamasmedia.com)
  • Video of Kurzweil’s Latest Talk at Google (singularityhub.com)
  • Exit Brain, Enter Computer (abcnews.go.com)
  • A school for changing the world (guardian.co.uk)
  • Singularity University, Day One: Infinite, In All Directions (wired.com)

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence, cyborg, Future, future technology, posthuman, Ray Kurzweil, Raymond Kurzweil, singularity, Technological Singularity, transhumanism

Do you want to live forever?

November 15, 2009 by Socrates

Humanity has achieved huge progress in life-extending and anti-aging technologies.

Just weeks ago the BBC reported that today half of the babies born in the advanced world are likely to live to 100.

A quick comparative review shows us the following life expectancy change in years:

Cro-Magnon Era: 18 years
Ancient Egypt: 25 years
1400 Europe: 30 years
1800 Europe and USA: 37 years
1900 USA: 48 years
2002 USA: 78 years

The trend is hard to miss: since our Cro-Magnon days we have more than quadrupled our longevity, and in the last two centuries alone we have more than doubled it. Both trends are important to note, for they reveal that we are not only living longer but that the change is happening at an accelerating pace.

For example, it took tens of thousands of years simply to double Cro-Magnon longevity from 18 years to the 37 years of 1800 Europe and America. Yet it took only the last century to add another 30 years, from 48 in 1900 to 78 in 2002 (and notably more by 2009). If life expectancy keeps being extended at an accelerating pace, so that each calendar year of progress eventually adds more than a year of remaining life, transhumanists such as Ray Kurzweil, Nick Bostrom, and Aubrey de Grey believe that we shall, in effect, live forever.
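The acceleration can be made concrete with a few lines of arithmetic. In the sketch below, only the life-expectancy values come from the list above; the two prehistoric dates are rough placeholders added purely for illustration:

```python
# Life expectancy at birth, in years, from the list above.
# The two prehistoric dates are rough, illustrative placeholders.
records = [
    (-40000, 18),  # Cro-Magnon era
    (-1500, 25),   # Ancient Egypt
    (1400, 30),    # Europe
    (1800, 37),    # Europe and USA
    (1900, 48),    # USA
    (2002, 78),    # USA
]

# How many calendar years did each added year of life expectancy take?
for (y0, e0), (y1, e1) in zip(records, records[1:]):
    pace = (y1 - y0) / (e1 - e0)
    print(f"{y0} -> {y1}: +{e1 - e0} years of life, "
          f"~{pace:,.1f} calendar years per added year")
```

The pace falls from thousands of calendar years per gained year in prehistory to roughly three in the twentieth century, which is exactly the acceleration the argument above rests on.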

But how will future anti-aging technologies impact human evolution?

Can our understanding of neuro-anatomy get to the point where neuro-technologies based on biogenetics, nanotechnology or brain uploading help us transcend biology and become post- or transhuman?

Can technology indeed discover the legendary and ever elusive Holy Grail of immortality?

Eternal Life Road Sign

Some people say that eventually it will.

Others say that it will not and, more importantly, that it should not.

How about you? Do you want to live forever?

Related articles by Zemanta
  • Aubrey de Grey’s Singularity Podcast: Longevity Escape Velocity Maybe Closer Than We Think (singularityblog.singularitysymposium.com)
  • Singularity Podcast: Barry Ptolemy on Transcendent Man (singularityblog.singularitysymposium.com)

Filed Under: Op Ed Tagged With: Aubrey de Grey, immortality, Life extension, posthuman, transhumanism


Ethos: “Technology is the How, not the Why or What. So you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.” Nikola Danaylov

Copyright © 2009-2025 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy