
Daniel Faggella

The 5 Ps that Will Drive Transhumanism: A Conversation with Dr. Nayef Al-Rodhan

September 8, 2015 by Daniel Faggella

“Most thinkers…have started out with a very specific view of human nature,” reflects Dr. Nayef Al-Rodhan, Center Director of Geopolitics and Global Futures at the Geneva Center for Security Policy. “My view of human nature is actually the foundation of my outlook – to me, man is an emotional, amoral, egoist…it turns out that our moral compass is governed primarily by our perceived emotional self interest, and the perception bit is just as important as reality.”

Our moral compass is malleable, shifting with the context of our situation and emotions. The only thing not malleable, says Nayef, is our “Predisposed Tabula Rasa” (PTR). Al-Rodhan rejects the idea that humans are born as a blank slate, free of innate predispositions. “I challenged that a few years ago and I called it PTR…we have an inbuilt biological microchip that is pro-survival, so when survival is at stake, all bets are off…the most moral creature will not behave as we predict.”

This leads into the “5 Ps,” the desires and needs that Al-Rodhan believes motivate you, me, and every other human being: “Power, Pride, Profit, Pleasure and Permanence (longevity).” “It’s ironic that our very human nature…is what will drive us to change our human nature,” remarks Al-Rodhan. Because we are driven by these needs and desires, resisting the urge for more of the 5 Ps is a genuine challenge.

Within transhumanism, where this inevitable innovation in human nature is defined as the “enhancement of physiological function beyond normal physiology,” these motivators are a philosophical must. Transhumanists pursue such enhancements not just to repair what has been lost or broken, but to go beyond what is physiologically designed by nature – the unmatched innovator.

Nayef reminds us that “physical enhancement has been with us for a long time,” citing a range of examples of corrective technology, from reading glasses, to makeup, to replacement knees and hips. These types of enhancements don’t seem to worry most human beings from an ethical standpoint. The intuitive concerns kick in when the topic veers toward the arena of cognitive enhancements. “What defines us…is our brain – if we mess with that, which we will, we will change what it means to be human,” says Al-Rodhan. In meddling with our inner gears, we raise all kinds of untouched ethical and equality issues, as well as geopolitical, moral, and existential ramifications.

Where and how are the 5 Ps tugging us in our adoption of new technologies today? The question can be taken in any number of directions. Nayef mentions synthetic biology, a field focused on synthesizing bits of DNA and protein that do not exist in nature. This differs from the increasingly publicized field of biotechnology, in which scientists work with existing entities. Al-Rodhan notes that synthetic biology is taking off in a major way: “It is really the marriage between synthetic biology, artificial intelligence, nanomaterial and material sciences, that’s enabling us to do some very fancy things, and very troubling things.”

In Europe, the approval of a three-parent embryo, for example, prods us to ask from an ethical point of view, “When and where do we erect boundaries?” Nayef emphasizes that these are precautionary lines of thought, not attempts to stifle innovation, which he deems central to our species’ future. We will undoubtedly encounter risks, however, and our global conversations should focus on how to encourage emerging technologies while mitigating their dangers.

This unprecedented and monumental field of thought will require cooperation at the individual, corporate, state, and transnational levels. “Corporate entities are the last people we would expect that would want to be regulated…I’m sure there are lots of responsible people, but it’s not their motivation, it’s not what they’re trained to do,” says Al-Rodhan. Solving tough problems requires taking into account all parties’ contextual objectives and concerns.

As far as the five Ps are concerned, Nayef believes all are active in different shades in every enterprise, and all are rooted in human emotions. “Emotions as a separate entity from rational beings is a misnomer,” remarks Al-Rodhan. “Research actually shows that they are part of our rational thinking.” We are complex entities, and the view that emotions are ‘extra baggage,’ a primordial residue, is not true. “It turns out that our most rational decisions have to have some positive emotional aspect.”

Using some of the 5 Ps as a frame, Nayef reflects on the three-parent embryo innovation: pride is related to the desire to develop a fully-functional human being, pleasure is linked to wanting to help avoid faulty DNA, and permanency is linked to a sense of longevity through generations. Each of these factors drives our innovative ambitions, and as we strive to fulfill our global and systemic responsibility in producing the best outcomes for the future of the human species, none of these perceptions should be left out of the rational decision-making process.

About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.

Filed Under: Op Ed Tagged With: transhumanism

The Future of AI Through a Historian’s Looking Glass: A Conversation with Dr. John MacCormick

September 2, 2015 by Daniel Faggella

Understanding the possibilities of a field’s future requires first cultivating a sense of its history. Dr. John MacCormick, professor of Computer Science at Dickinson College and author of Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today’s Computers, has waded through the historical underpinnings of the technology driving artificial intelligence (AI) today and into the near future.

I recently spoke with Dr. MacCormick about some possible future outcomes of AI, including self-driving cars and autonomous weapons. He offers a historian’s perspective as an informed and forward-thinking researcher in the field.

Q: Where will AI apply itself in the next 5 years?

A: New algorithms are coming out all the time. One area where we have seen a lot of improvement is the translation of human languages, with Google’s software being one example. The results today are not overly impressive, but we will continue to see increasingly high-quality translations between human languages in the medium term.

Another area that has rocketed is self-driving cars, which are starting to emerge and really seem like they could be a reality for everyday use in the medium term. Half a decade ago, many followers of the technology doubted this, arguing that we would need a big breakthrough; however, these views are starting to turn, based simply on the incremental improvements of the past few years.

Q: What about machine vision?

A: Machine vision is one subfield of AI in which we try to simulate human-like vision, like recognizing objects at rest and in motion.  It sounds simple, but this has been one of the toughest nuts to crack in the whole field of AI.  There have been amazing improvements in the last few decades, in terms of object recognition systems. They are good in comparison to what they were, but those systems are still far inferior to human capabilities.

Because this technology is so difficult to crack, current AI systems try not to rely on vision. In self-driving cars, for example, vision systems are present but the cars do not depend on them. Vision might be used for something relatively simple, like recognizing whether traffic lights are red or green. But for other features, such as lane markings or obstructions, the car relies on other sources, such as GPS for navigation and a built-in map that indicates where various objects are supposed to be, based on a pre-mapped location. Machine vision still poses a cumbersome challenge.
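The “relatively simple” vision task MacCormick mentions – telling a red light from a green one – can be illustrated with a toy color-statistics heuristic. This is purely a sketch of the idea, not any real automotive pipeline; the pixel values and thresholds are invented for illustration.

```python
# Toy sketch: decide a traffic light's state from color-channel averages
# over pixels sampled from the lamp region. Real systems use trained
# detectors; this only illustrates why the task is "relatively simple".

def classify_light(pixels):
    """pixels: list of (r, g, b) tuples sampled from the lamp region."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    # Whichever channel clearly dominates decides the label.
    if avg_r > avg_g * 1.5:
        return "red"
    if avg_g > avg_r * 1.5:
        return "green"
    return "unknown"

print(classify_light([(220, 30, 25), (240, 50, 40)]))  # red
print(classify_light([(30, 210, 90), (45, 230, 110)]))  # green
```

Note how little machinery this needs compared to recognizing arbitrary obstacles – which is exactly why the harder perception tasks fall back on GPS and pre-built maps.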

Q: High-profile names like Musk and Hawking have conveyed their AI fears – in your eyes, do you see these as unfounded?

A: I’m an unapologetic optimist on this question.  I do not think AI is going to get out of control and do evil things on its own.  As we get closer to systems that rival human capabilities, such as creativity and original thought, I think these will still be systems that humans have designed and have methods of controlling.  We’ll be able to continue building and making useful tools that are not the same as humans, but that have extraordinary capabilities and that are still able to be guided and controlled.  I think Musk and Hawking are technically correct in their hypothetical line of thought, that AI could turn ‘evil’ and out-of-control, but I also think this is an unlikely scenario.

Q: Should we research national and international protocols that guide AI?

A: Yes, this is an important point, and we need collaboration between many people, including social scientists, technologists, and many other relevant areas of society.

One area that is already starting to draw attention is that of military robotics.  We see multiple countries capable of building systems that have the ability to be autonomous and be used for lethal force.  This opens up an entirely new scenario for ethical debate and a discussion of the kinds of things that should and should not be done.  The United Nations (UN) and others are already looking at the implications of autonomous weapons, but the impact of this technology is certainly pressing and we need to formulate solutions now.


Filed Under: Profiles Tagged With: AI, Artificial Intelligence

AI is So Hot, We’ve Forgotten All About the AI Winter

August 25, 2015 by Daniel Faggella

The great influencers and contributors in the field of AI today can’t help but acknowledge that part of their success comes from ‘standing on the shoulders’ of the thinkers and doers who came before. Dr. Nils J. Nilsson, former Stanford researcher and author of The Quest for Artificial Intelligence, is such a pioneer in the field that he aptly recalls the ‘AI Winter’, a period in the late 1970s and early 1980s when funding dwindled and AI research went underground.

“Work was pretty rampant at first…but it stalled,” recalls Dr. Nilsson during a recent interview with TechEmergence. Before the AI Winter blew in, Nils was already hard at work as a Stanford researcher, involved in early work on pattern recognition, automatic planning systems, and robotics.

An AI Freeze

“There was lots of work being done to get machines to do the kinds of things that humans could do (in the 1950s and 1960s),” notes Nilsson, after giving credit to the foundational work of Alan Turing.  This replication research took off in the mid to late 1960s with the establishment of labs at MIT, Stanford, and SRI, where researchers – including Dr. Nilsson – tried to get machines to mimic humans through performance of activities such as solving theorems and algebraic problems, and playing strategic games like chess and checkers.  

A lot of progress was made before funding and research stalled in the late 1970s, in what came to be known mostly to the outside world as the ‘AI Winter’.  

This is not a story that’s often discussed, or even known, by today’s generations. Though it may have been more of a light freeze than a permafrost, the dormant funds and interest were still felt in academia. A redeeming takeaway is that, despite the lack of funds and interest, researchers kept at their work. “AI researchers weren’t disheartened at all – they kept at it, and many things happened that made it take off again.”

A Determined Thaw

AI researchers’ diligence in spite of lack of resources helped give rise to ‘expert systems’. The early software MYCIN, introduced in the 1970s, was able to diagnose certain kinds of bacterial infections based on symptom input (the precursor to today’s advanced medical diagnostic systems). “In those days you would sit down at a terminal – we didn’t have personal computers.  You’d type in and answer questions about tests that were being made, and the program would attempt to diagnose not only the disease, but a prescribed therapy.”
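The question-and-answer, rule-matching style of expert systems like MYCIN can be sketched in a few lines. The rules and certainty factors below are invented for illustration (MYCIN’s real knowledge base held hundreds of rules with a full certainty-factor calculus); this only shows the shape of the technique.

```python
# Toy sketch of a MYCIN-style rule system: each rule maps a set of
# observed findings to a conclusion with a certainty factor.
# Rules and numbers here are hypothetical, for illustration only.

RULES = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, ("bacteroides", 0.7)),
    ({"gram_positive", "clusters"}, ("staphylococcus", 0.8)),
]

def diagnose(findings):
    """Return (organism, certainty) pairs whose conditions all hold."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= findings]  # rule fires if all conditions present

print(diagnose({"gram_positive", "clusters", "aerobic"}))
# → [('staphylococcus', 0.8)]
```

In the terminal sessions Nilsson describes, the “findings” would be accumulated interactively by asking the physician about test results, then matched against the rule base in just this way.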

Another innovative program, one Nils was involved in directly, was “Prospector,” which functioned exactly as it sounds. Based on input knowledge of ore deposits, the software made one of its most dramatic finds in the 1970s when it uncovered a hidden porphyry molybdenum deposit (a form of ore deposit) at Mount Tolman in the state of Washington.

Researchers also redoubled their efforts on the development of neural networks, which allowed for changes in connection strength and the addition of multiple layers to AI systems.  These innovations led to work on programs in the late 1980s and early 1990s that allowed for the beginnings of today’s AI-steered automobiles.
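The two innovations mentioned above – adjustable connection strengths and multiple layers – can be shown in a minimal forward pass. The weights and inputs below are made up for illustration; real networks learn their weights by training (e.g. backpropagation) rather than having them written by hand.

```python
# Minimal sketch of a multilayer neural network: each layer is a weight
# matrix (the "connection strengths"), applied with a sigmoid squashing
# function. Weights here are illustrative, not trained.

import math

def forward(x, layers):
    """Propagate input vector x through a list of weight matrices."""
    for W in layers:
        x = [1 / (1 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
             for row in W]
    return x

layers = [
    [[0.5, -0.2], [0.3, 0.8]],   # input -> hidden (2 units)
    [[1.0, -1.0]],               # hidden -> output (1 unit)
]
out = forward([1.0, 0.5], layers)
print(out)  # a single activation between 0 and 1
```

Adjusting the numbers in `layers` changes the network’s behavior – which is precisely the “change in connection strength” that made these systems trainable, and that eventually fed into the vision work behind early self-driving vehicles.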

AI Heats Up

In the 1980s and 1990s, funding began to flow back into the field of AI.  Increased resources supported the development of much faster computers that had more memory, spearheading the creation of supercomputers like IBM’s Deep Blue, which ultimately triumphed over World Chess Champion Garry Kasparov in 1997.  

More recently, research has given way to key AI breakthroughs, including huge databases – i.e. big data – and the ability of computers to mine data, find information, and make inferences. This boom of work in the early 2000s yielded more advanced face recognition, speech recognition, and language translation software, and that progress is only accelerating.

Better AI techniques allowed rivals Stanford and Carnegie Mellon to refine and compete in the ongoing DARPA Grand Challenge autonomous vehicle contest (for the record, notes Nilsson, Stanford won over Carnegie Mellon in 2005).  Today, Google has charged into the autonomous automobile industry.  Elon Musk recently commented that these autos may eventually be so good that people will be forbidden to drive in the future.  

“Now one of the phrases that people use is: way back in the 1970s and 80s…AI wasn’t really good enough, it wasn’t achieving its promises – now, sometimes people are saying AI is achieving its promises, it’s too good,” Nilsson chuckles. In light of recent coverage of autonomous weapons, leading thinkers in the industry, including Hawking, Gates, Musk and others, would likely agree with Nils’ statement. Perhaps we should all be sweating a bit more about the future directions in which the steam-powered (we may have yet to see electric) AI train is headed.


Filed Under: Op Ed Tagged With: AI, Artificial Intelligence

Telepathic Technology is Here, But Are We Ready?

July 23, 2015 by Daniel Faggella

Brain-machine interface (BMI) technology has finally gotten a taste of mainstream attention in the form of movies like Transcendence and Edge of Tomorrow. The silver screen attention is mirrored by the burgeoning startup companies in the BMI space. The Muse headband is positioning itself as a kind of brain-training tool to encourage calmness and tranquility, while the versatile Emotiv BMI aims to allow telekinetic control of virtual environments with thoughts and facial expressions.

Despite a little YouTube fame and some article features from these technology up-and-comers, most people (even techies) probably don’t imagine technological mind control as something they’ll need to be prepared for anytime soon. After all, this futuristic technology will need oodles of time to develop.

Oddly enough, it’s been a long time coming. Like 3-D printing, brain-machine interfaces had been developing for decades before they ever “made waves” in mainstream media. The first 3-D printer was created (by Chuck Hull) in 1983… but researchers like Walter Hess were stimulating brain regions in animals in the 1920s. Oh, you didn’t know? Don’t worry, most people don’t, but it’s worth understanding what’s already been accomplished to get a sense of what might be possible in the future.

At BrainGate, a massive BMI research initiative that began at Brown University, breakthrough after breakthrough has made the lab world-renowned. Most notably, paralyzed patients have been able to move a mouse on a screen to check email, or move a robotic arm to drink coffee (see video here) – all thanks to a direct connection of dozens of thin, electrode-covered spikes implanted into their motor cortex. This “far out” breakthrough, by the way, happened eight long years ago, and the lab has continued to make amazing progress ever since.

It might seem unusual that any kind of real business could be built on a technology that extends human abilities by “hooking into” the brain itself. Needless to say, this technology won’t be limited to clinical trials forever. While interviewing Dr. Jason Perge – one of the many brilliant researchers at BrainGate – I asked about the business implications of such a technology once it becomes viable for the general public. “Once a technology is developed that has a clinical value as well as a business value, it’s relatively straightforward to develop a company around it,” said Dr. Perge.

BMI doesn’t have to be tremendously capable to be completely life-altering and viable as a business, either. Dr. Perge notes that among tetraplegic populations – like those the team at BrainGate works with – “someone might be perfectly happy with the ability to say ‘yes’ or ‘no.’” In the consumer world, we might imagine a BMI implant that allows quantitative traders or software engineers to control a dozen computer screens at once, leaving “un-enhanced” workers of the same kind at a massive disadvantage. However, BMI needn’t be that advanced to gain adoption in the consumer market. The simple capacity to answer one’s phone, unlock car doors, or find lost items by thought alone would likely make BMI more than economically viable – some people are already opting for RFID implants under the skin to attain these same objectives now.
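To see how modest a “yes/no” BMI channel really is, consider a toy decoder that thresholds a neuron’s firing rate. The threshold, window, and spike trains below are entirely hypothetical; real decoders (at BrainGate and elsewhere) use statistical models over many electrodes, but the binary output Dr. Perge describes is conceptually this simple.

```python
# Hypothetical sketch: decode a binary "yes"/"no" intent from the
# firing rate of a single recorded neuron. All numbers are invented
# for illustration, not drawn from any real BMI system.

def decode_intent(spike_times, window_s=1.0, threshold_hz=20.0):
    """Return 'yes' if the firing rate in the window exceeds threshold."""
    rate = len(spike_times) / window_s  # spikes per second
    return "yes" if rate > threshold_hz else "no"

print(decode_intent([0.01 * i for i in range(30)]))  # 30 Hz -> 'yes'
print(decode_intent([0.1 * i for i in range(5)]))    # 5 Hz  -> 'no'
```

Even a single reliable bit per second like this would, as Perge suggests, be transformative for someone who cannot otherwise communicate.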

While some companies are working directly on BMI, others are developing enhancement technologies around the senses, particularly in the world of augmented reality. The engineers at Innovega are already aiming for an even more immersive experience than the folks at Google Glass. Their headset combines with a pair of contact lenses to deliver an experience they intend to be far more immersive than anything else on the market (see a sample here). Innovega might be ahead of the curve now, but they’re nowhere near the only game in town, and there’s reason to believe that Google won’t stop with contacts that detect glucose alone.

We got rid of the rock and chisel, the quill, the typewriter – what makes anyone feel sure that interacting with computers through the “QWERTY” keyboard will last more than a few more years? BrainGate had patients checking email with their thoughts eight years ago (video here), and Samsung is working on a surprisingly capable non-invasive cousin of the same technology. If such a technology were to make workers more effective, it would seem nearly impossible to stop the sweeping adoption of these telekinetic technologies – just as the personal computer came to dominate the workplace in a matter of decades.

“Sci-fi” isn’t so far out anymore, and the coming years promise to extend human capacity by bypassing the usual tools and gadgets and tapping directly into our minds. With brain implants being referred to as this decade’s laser surgery, the future may be headed down a path that few of us are ready for.

We may not be able to predict which enhancement technologies will hit the market first, but it seems safe to assume that if a technology can increase human effectiveness, it will be pursued – regardless of inching outside the borders of what we consider “human” today. For technology entrepreneurs, this might imply creating software and applications that fit the immersive computing world of tomorrow. For policy makers, it certainly implies tackling new privacy concerns and the legal limits of “tinkering” at the crossroads of neurology and technology. For consumers everywhere, this shift might imply thinking through our own personal boundaries with technology. Hopefully, thinking ahead will help us navigate that near future when we face it head-on.


Filed Under: Op Ed Tagged With: brain machine interface, telepathic technology

Would Technological “Enhancement” Make Us More, or Less, Human?

August 13, 2013 by Daniel Faggella

Imagine you wake up in the morning after a refreshing 30 full minutes of sleep, pulling up into your retinal display your top priority tasks for that day, and manually adjusting your mood to something desirable before your colleagues have their holograms projected into your living room for your 7:00am Monday meeting.

With the advent of intelligence technologies being developed in retail, finance, healthcare, and beyond, we are entering the age where these “smart” technologies are integrated into human bodies for the repair and amelioration of medical conditions. Cochlear implants have been used for years to treat deafness (in both patients born without hearing and those who have lost it), and technologies are being created (some already successful) to aid blind individuals in seeing again. Bionic limbs are seen as relatively normal today, and the threshold for artificial senses might not be far off (some of the most exciting recent discoveries are to be found at Brown’s BrainGate).

A profound question looms: will we direct these repair and amelioration technologies towards augmentation and enhancement of our present human faculties?

Internal combustion engines began as a way to replace animal-powered farming equipment. Today, they power cars, chainsaws, helicopters and airplanes. Airplanes themselves were initially used to move people from one place to another – faster. Not long after we had aircraft for war, gliders for tracking weather – and then unmanned drones and spacecraft.

At present, eye tracking devices help paraplegic individuals and stroke patients communicate even when they are unable to talk, simply by looking at specific keys on a screen – triggering a computer to speak for them or convey basic messages. If such a technology eventually made handling email and organizing one’s desktop files twice as efficient, are we to believe that this “amelioration” technology wouldn’t find its way to the mainstream?

There are also exoskeletons constructed to help weak limbs function more effectively (for the upper body, lower body, or both). Imagine if such technologies became affordable and could cut a business’s warehouse crew in half by doubling the efficiency of manual workers. How many businesses would jump on that bandwagon?

Along these lines of thinking, anything that can improve effectiveness or efficiency is likely to be adopted. If nobody has an iPhone, then your old clunker isn’t all that bad. If everyone has an iPhone, then there’s a world of email, photos, contact sharing, GPS-ing, and web browsing that you’re missing out on. Once the internet was in place and in use, no business in its right mind would ignore its presence.

Once one company can answer email and sift through tasks without even using a keyboard, the others better jump onboard. Once it becomes the norm in one industry for workers to take a biotech pill that allows them to sleep only 30 minutes per night, other companies – and eventually other industries – will likely follow suit.

In this respect, the slope of human augmentation and emerging technology is a slippery one, and we’re unlikely to find simple answers to how these transformational technologies should be developed and implemented in our world. Rather than taking a dogmatic “for” or “against” stance on enhancement, I propose that it is important to consider the actual issues – and even opportunities – that these emerging technologies promise for human experience and human potential. Below we’ll explore two very common objections to the very notion of enhancement, and how they might be considered beyond the surface level.

“You Want to Turn Me Into a Vacuum Cleaner?”

It might be useful, first, to address our resistance to these potential “enhancements” of humanity using the intelligence of tomorrow. We naturally resist the idea of a transformation into something more rigid and limited – more “mechanistic” or “robotic” – into something like “R2D2” from Star Wars.

However, just as the concept of “ship” had more limited and simple connotations in Greek and Roman times than it does in the present age of space travel, the concepts of “robotic” or “mechanistic” have different connotations now than they likely will in twenty or thirty years. The “computer” was associated with a certain level of capacity in the 1980s which now seems utterly feeble compared with what “computers” can do today. We fear becoming the kinds of “machines” we use in everyday life – such as toasters, vacuum cleaners, or Honda Escorts.

Frankly, I wouldn’t want to be a Honda Escort, either, but these present notions of “machine” cast a light on all enhancement that keeps its present-day connotations. Admittedly, there are some viable reasons for questioning any transition of our “selves” into another “shell” (and in fact, not questioning this transition would seem neglectful). However, many of the “instinctual” responses to the thought of enhancement tend to come more from robot movies and less from a perspective on the increasing sophistication of robots and artificial intelligence.

Even more important than identifying our present cultural biases (and their tendency to be projected forward), it seems we should consider the ramifications of “enhancements” that could make us more artistically creative, more emotionally rich, or more mentally capable. Ask me (or you, or anyone we’ve ever met) if we’d want to be more like R2D2 or a Honda Escort, and the answer (even among the most hard-core Star Wars fans) will likely be “no.” However, if an “enhancement” could grant me the capacity to – say – never forget an important fact, idea, or skill, I might find that improvement hard to turn down (assuming there were no negative side effects).

Even the capacity of memory, though, tends to fit too closely with the mold of “robot” we know today. Let’s say I were able to enhance my creative thinking abilities, or my artistic capacities in writing or painting – possibly through stimulation of certain brain regions, or brain implants that provided new modes of connecting ideas, or more insight and attunement to beauty itself.

Ask me, as another example, if I’d consider a procedure that would allow me to learn multiple languages, or study twelve topics at once and improve in them all at faster rates than I can now (poetry, essay writing, martial arts, billiards, etc.), and I would not be so quick to turn that “upgrade” down. Imagine if it were possible, through an implant, to monitor and manage our emotional states deliberately (feeling happy, courageous, or focused at will). In these circumstances, the question of whether or not to “change” becomes less polarized, more grey. Unlike the question of whether to become more like R2D2, these enhancements would make most people think long and hard about the real possibility of moving beyond biology.

For the most part, the concept of machines or computers enhancing aspects of our emotional life seems like science fiction (much as space travel seemed like science fiction 80 years ago). However, research, theory, and even basic models for “emotional robots” are already being developed, evidenced – among other projects – by the European “Feelix Growing” project. By modeling the emotional behavior of infants and apes, its researchers are developing robots (one named “Nao”) with a basic ability to respond with fear, sadness, joy or excitement to interactions with humans. This includes a memory of faces and of specific experiences with the people behind those faces, allowing “Nao” and other robots of its kind to maintain a kind of relationship with their human caretakers.

Even with the explosion of robotics and emerging technologies in the last twenty years, “emotional robots” are still nowhere near the complexity or relational intelligence of human beings. This might bring us to the inevitable next question, which serves as another level of resistance to the notion of enhancement:

“But – How Could it Ever Be Done?”

If I’m claiming that emotional or creative life – in addition to just “rational” or “computational” life – could be enhanced, then where is the evidence that this is possible?

This same question could have been asked about putting a man on the moon less than a hundred years ago – a feat which at that time would have been almost more absurd than the ability to enhance the “human” aspects of life with machine intelligence. Heck, a hundred years ago, the Model T was a big deal, and right now we already have brain implants helping people move robotic limbs, and mice growing human ears on their backs. It might be said that “enhancement” technologies already do exist, but are – at present – being used for amelioration rather than augmentation.

With all that we’ve achieved in just the past 100 years, the “it hasn’t been done before, so it never will be” argument seems weaker than ever. Before we could fly, it seemed natural to pronounce flight impossible. Before we could travel to the moon, that too seemed impossible. Breaking the four-minute mile seemed “impossible” even to scientists in the 1940s. This natural human tendency to resist the possibility of drastic future change – or even relatively minor change, like the four-minute mile – doesn’t hold. I would argue that, while remaining rational, we need to clear away unfounded, “instinctual” resistance to change in order to make space for the conversations we should be having about the future of humanity and emerging technologies. Which brings us nicely to our next point:

“What Should We Be Asking?”

As mentioned before – with the advancement of technology not stopping anytime soon (or, more accurately, not ceasing to multiply at breathtaking speed anytime soon) – important questions still need important answers, though many of them lack adequate data for now. Though some agree more wholeheartedly than others, Kurzweil’s law of accelerating returns – despite many potential failings and what some believe to be an oversimplification – is rather convincing. The “LOAR” (as it is sometimes called) holds that the price/performance of information technology approximately doubles every year. From computers the size of rooms to computers in our phones, from ear trumpets to cochlear implants, this particularly convincing trend continues in myriad forms.
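The doubling claim above can be put in simple arithmetic form. This is a toy illustration of exponential growth under the stated assumption, not Kurzweil’s own model; the function name and parameters are mine:

```python
# Toy illustration of the law of accelerating returns: if price/performance
# doubles every period, growth over n years is 2 ** (n / period).

def price_performance(years, base=1.0, doubling_period_years=1.0):
    """Relative price/performance after `years`, assuming steady doubling."""
    return base * 2 ** (years / doubling_period_years)

# A single decade of annual doubling yields a ~1000x improvement:
print(price_performance(10))          # 1024.0
# Forty years of it yields roughly a trillionfold improvement:
print(f"{price_performance(40):.1e}")
```

The striking part is not any one year’s doubling but the compounding: ten doublings already separate the room-sized machine from the pocket-sized one.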

It’s my position that we ought to think seriously about why most people might instantly “turn down” ideas of enhancement as either “wrong” or “impossible” without more serious consideration. The debate – in the eyes of many (if not most) in the fields of technology and intelligence – is not a question of “possible” or “impossible.” The question may instead turn out to be one of “human” or “inhuman,” and of which elements of our biological nature we want to keep or surpass.

Whether we should or should not surpass biology is a continuing question that will inevitably lead to disagreement, but I believe a dogmatic “no” to the question of enhancement will do more harm than an open mind willing to consider the issues, opportunities, and options for our human future. In this respect, fascination seems a more appropriate response than repulsion, and it’s safe to say that fascination (tempered with practical wisdom and hard work) will get a lot more done in terms of channeling these developments in the ways that matter most to humanity and the world.

 

About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.

 

Related articles
  • Hamlet’s Transhumanist Dilemma: Will Technology Replace Biology?
  • A Transhumanist Manifesto

Filed Under: Op Ed, What if? Tagged With: human augmentation, human enhancement, transhumanism

Why Augmentation Isn’t So “Far Out”

May 7, 2013 by Daniel Faggella

The term “augmentation” – when referring to humans – has a tendency to call forth mental images of Terminator-like, red-eyed androids with steel limbs and laser guns. Tinkering with what is “human” may seem like a far-out concept, but from the vantage point of technology, it’s a process we’ve been engaged in since our earliest tools. For this same reason, many experts argue that “augmentation” is an inevitable result of our present technological efforts.

It serves us well to begin this article with definitions we can work with (from Dictionary.com):


Augment:

1. To make (something already developed or well under way) greater, as in size, extent, or quantity.

Enhance:

1. To make greater, as in value, beauty, or effectiveness; augment.

 

The first definitions of both terms are surprisingly similar, as they both relate to a kind of purposive betterment of something already in place. To “improve upon.” This desire for betterment and extending our capacities to achieve our objectives is the undercurrent of technology. It is also what will most likely make the transition to transhumanism inevitable.

Ray Kurzweil – in his book “How to Create a Mind” – explains how he feels that Google and Wikipedia are extensions of himself and of his own mental capacities. Just as the first mallet or spear helped extend man’s physical abilities, these tools help extend our mental abilities in the present age. Kurzweil recalls that when Google and Wikipedia went on strike against SOPA in January of 2012, he felt as though part of his own mind was missing. I can imagine that an early hunter-gatherer would have felt that part of his body was missing if he’d had to spend a day trying to catch a rabbit or boar with his bare hands.

 

Utility Wins – Why Wearable Computing is Taking Off

Google Glass has little chance of changing the world based on the “cool” factor alone (though we all know a few people who will grab a pair for that reason alone). Rather, if Google Glass can meaningfully enhance our cognition by doing what it promises (price-shopping online for products you’re looking at, pulling up directions and maps in real time, letting others “see” from your perspective), then it has a very good chance of being adopted.

But that’s it, right? Certainly, wearable computing is about as far as humanity will go without some kind of massive revolt. We aren’t just about to stand around and get turned into androids, are we?

Though we can’t be sure either way about acceptance of a more literal “augmentation” – an actual human-machine merger – the clues seem to hint: “Go.”

When computers were the size of buildings, or large rooms of buildings, there was an initial inkling that these “devices” would never catch on. In the following decades when computers were small enough to have in our homes, there certainly must have been the same inkling (“Who needs one of these computing devices in their homes?”). I’ll admit, when the next wave hit with cell phones, I was certain that the world wouldn’t adopt the ability to be annoyed by email at any time or any place. Four years later my instinct has changed and I look confused when presented with a phone without GPS and email capability.

Google Glass represents the further extension of “wearable computing,” another trend with its inevitable proponents and its critics. Its success, I posit, will be its utility to us – its ability to attain an end that we think we desire. Engadget.com put it well:

“That’s become clearer than ever with the advent of the personal computer, which in recent decades has drawn people away from the television, the radio, the calculator and countless other devices. More recently, we’ve seen that shift again with smartphones and tablets pulling people away from PCs, telephones, cameras and video game consoles. In each case, the new technology replacing the old has taken on a more central role in people’s lives. Whereas the personal computer became a hub in the home, the smartphone has become a source of ever-present connectivity and a near-constant accessory. Wearable computing promises to extend that always-on connection even further and, potentially, change the nature of what it means to be ‘connected.’”

That “end” might be checking email everywhere and at all times. Some people may enjoy that feature; others may not. However, it might also be a more pleasant and engaging trip to a museum, where real-time information about the pieces is presented. Some people may enjoy that feature; others may not. It could also mean less money spent shopping, where grocery or clothing prices could be compared in real time, online and offline. If there is enough of this added utility – and the “ends” are strong enough – then Glass will catch on.

If Glass does not, another company’s product likely will – and fast. Think of the chips already embedded in Nike shoes, the “Pebble” watch / phone / iPod, or the self-defense underwear that zaps would-be attackers. If the utility is there, then it’s coming, and thousands of companies are already battling to lead that pack.

 

Slippery Slope – “Cyborgs” as the Next Step?


The potentially “scary” next step is a literal merger with computing, or “computational substrates,” to enhance our experience or improve our functioning. Unlike other technological advancements (the bow and arrow, the printing press, the cotton gin, the calculator, the cell phone), this represents a genuine shift in the human condition and human experience – via the senses and capacities granted to us.

From one perspective, technology has already changed the human condition. Certainly my life now is drastically different from that of a hunter-gatherer in the year 2000 BC. However, if you took a human from even 50,000 years ago and raised them from birth in our environment, or took a human baby now and raised them in the African Sahara, it would be evident that our faculties, needs, and capacities are essentially identical.

Embedded enhancements to our memory, implants that improve sensory perception, and reality simulators capable of mentally transporting us to any time and place all represent potential steps that would bring us well beyond the plateau of “human” on which we’ve perched for the last 50 millennia.

There are lines of thought that either rule out this transition (i.e., neglect to take a technological merger into account in humanity’s future), or that believe humanity simply wouldn’t allow this kind of blasphemy against our human nature.

This is one of the reasons some experts believe it is ridiculous to imagine Homo sapiens in the cockpits of spacecraft in the year 3000 – one among a slew of other interesting predictions.

However, despite the drastic step forward that this transition would represent, its motivations would remain the same: attaining an end that we think we desire. Utility.

Hence, this slope is just as slippery as the slope of the phone and mobile computing, and the chasm of the “cyborg” is already being crossed. Initially, we will cure blindness and enable paralyzed people to walk, talk, or regain use of a mechanized body through their still-active brain channels. We’re “okay” with helping people “in need,” but the handicapped are not the only ones with needs, and as these technologies make desired ends attainable, enhancement – I believe – will become inevitable. (Here, for example, is an article about memory implants being used for people with memory problems – implants one can imagine might be very desirable for “normal” folks as well.)

 

Eternal Vigilance and the Importance of a Path Forward

The trends and ramifications above present us with a unique set of challenges relating to the future of our race, of sentient beings, and of consciousness itself. To point in any one direction as “the answer” seems misguided, naive and dangerous. The “progress” of greatest importance will be our effective collaboration of expertise around the very careful, very calibrated “roll-out” of these sentience-altering technologies.

In a very serious sense, “tinkering” with consciousness and conscious experience itself represents the ultimate moral precipice – the most ethically significant action conceivable. Creating human-level consciousness with circuitry alone, manufacturing an infinite number of virtual realities, expanding our senses and cognition to millions of times their present capacities, extending virtual life forever inside of computational substrates to house trillions of living consciousnesses… all of these transitions are potentially plausible – and their direction will ultimately be guided by how we release them into the world.

The “answers” are not to be found in any clear-cut fashion, but through collaborative mindfulness about the emergence and use of these technologies we can give ourselves the best chance of ensuring they are leveraged beneficially in the world of tomorrow (which isn’t that far out).

 

About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on the transition to transhumanism, and the imminent issues and opportunities therein. His articles and interviews with philosophers and experts can be found at www.SentientPotential.com.

Filed Under: Op Ed Tagged With: augmented humans, cyborg, human augmentation, transhumanism

Copyright © 2009-2022 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy