
Artificial Intelligence

GoodAI CEO Marek Rosa on the General AI Challenge

March 4, 2017 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/310691984-singularity1on1-marek-rosa.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Marek Rosa is the founder and CEO of Keen Software House, an independent video game development studio. After the success of Keen Software House titles such as Space Engineers, Marek founded and funded GoodAI with a $10 million personal investment, thereby finally being able to pursue his lifelong dream of building artificial general intelligence. Most recently, Marek launched the General AI Challenge, with a fund of $5 million to be given away over the next few years.

During our 80-minute discussion with Marek Rosa we cover a wide variety of interesting topics such as: why curiosity is his driving force; his desire to understand the universe; Marek's journey from game development into Artificial General Intelligence [AGI]; his goal to maximize humanity's future options; the mission, people and strategy behind GoodAI; his definitions of intelligence and AI; teleology and the direction of the Universe; adaptation, intelligence, evolution and survivability; roadmaps, milestones and obstacles on the way to AGI; the importance of theory of intelligence; the hard problem of creating Good AI; the General AI Challenge…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Marek Rosa?

Marek Rosa is CEO/CTO at GoodAI, a general artificial intelligence R&D company, and the CEO at Keen Software House, an independent game development studio best known for their best-seller Space Engineers (2mil+ copies sold). Both companies are based in Prague, Czech Republic. Marek has been interested in artificial intelligence since childhood. Marek started his career as a programmer but later transitioned to a leadership role. After the success of the Keen Software House titles, Marek was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10mil. GoodAI started in January 2014 and has grown to an international team of 20 researchers.

Filed Under: Podcasts Tagged With: Artificial Intelligence, GoodAI

Neuromorphic Chips: a Path Towards Human-level AI

September 2, 2016 by Dan Elton

Recently we have seen a slew of popular films that deal with artificial intelligence – most notably The Imitation Game, Chappie, Ex Machina, and Her. However, despite over five decades of research into artificial intelligence, there remain many tasks which are simple for humans that computers cannot do. Given the slow progress of AI, for many the prospect of computers with human-level intelligence seems further away today than it did when Isaac Asimov's classic I, Robot was published in 1950. The fact is, however, that today the development of neuromorphic chips offers a plausible path to realizing human-level artificial intelligence within the next few decades.

Starting in the early 2000s, there was a realization that neural network models – based on how the human brain works – could solve many tasks that could not be solved by other methods. The buzzphrase 'deep learning' has become a catch-all term for neural network models and related techniques, as shown by plotting the frequency of these phrases using Google Trends:

[Google Trends chart comparing search interest in "neural network," "deep learning," and "machine learning" over time.]

Most deep learning practitioners acknowledge that the recent popularity of 'deep learning' is driven by hardware, in particular GPUs. The core algorithms of neural networks, such as the backpropagation algorithm for calculating gradients, were developed in the 1970s and 80s, and convolutional neural networks were developed in the late 90s.

Neuromorphic chips are the logical next step from the use of GPUs. While GPU architectures are designed for computer graphics, neuromorphic chips implement neural networks directly in hardware. Neuromorphic chips are currently being developed by a variety of public and private entities, including DARPA, the EU, IBM and Qualcomm.

The representation problem

A key difficulty solved by neural networks is the problem of programming conceptual categories into a computer, also called the "representation problem". Programming a conceptual category requires constructing a representation in the computer's memory to which phenomena in the world can be mapped. For example, "Clifford" would be mapped to the category of "dog" and also "animal" and "pet", while a VW Beetle would be mapped to "car". Constructing a robust mapping is very difficult since the members of a category can vary greatly in their appearance – for instance, a "human" may be male or female, old or young, and tall or short. Even a simple object, like a cube, will appear different depending on the angle it is viewed from and how it is lit. Since such conceptual categories are constructs of the human mind, it makes sense that we should look at how the brain itself stores representations. Neural networks store representations in the connections between neurons (called synapses), each of which contains a value called a "weight". Instead of being programmed, neural networks learn what weights to use through a process of training. After observing enough examples, neural networks can categorize new objects they have never seen before, or at least offer a best guess. Today neural networks have become the dominant methodology for solving classification tasks such as handwriting recognition, speech-to-text, and object recognition.
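To make the idea of learned weights concrete, here is a minimal sketch of a tiny neural network trained by gradient descent on an invented two-class toy problem. The data, layer sizes, and learning rate are illustrative assumptions chosen for this example, not anything taken from the article:

```python
# Minimal sketch: a tiny neural network that learns a representation
# (its weights) from examples instead of being explicitly programmed.
# The dataset and architecture are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two clusters of 2-D points, labeled 0 or 1.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
               rng.normal(+1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

# One hidden layer of 8 units; weights start random and are adjusted by training.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backpropagation: gradients of the cross-entropy loss w.r.t. each weight
    grad_out = (p - y) / len(X)
    dW2 = h.T @ grad_out; db2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    dW1 = X.T @ grad_h; db1 = grad_h.sum(0)
    # Gradient-descent update: the "knowledge" ends up stored in the weights
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

probs = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
accuracy = ((probs > 0.5).astype(int) == y).mean()
print(f"training accuracy after learning the weights: {accuracy:.2f}")
```

After training, everything the network "knows" about the two categories lives in the learned weight matrices rather than in any hand-written rules.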

Massive parallelism

Neural networks are based on simplified mathematical models of how the brain's neurons operate. Today's hardware, however, is very inefficient when it comes to simulating neural network models. This inefficiency can be traced to fundamental differences between how the brain operates and how digital computers operate. While computers store information as a string of 0s and 1s, the synaptic "weights" the brain uses to store information can fall anywhere in a range of values – i.e. the brain is analog rather than digital. More importantly, in a computer the number of signals that can be processed at the same time is limited by the number of CPU cores – perhaps 8 to 12 on a typical desktop, or 1,000 to 10,000 on a supercomputer. While 10,000 sounds like a lot, this is tiny compared to the brain, which simultaneously processes up to a trillion (1,000,000,000,000) signals in a massively parallel fashion.

Low power consumption

The two main differences between brains and today's computers (parallelism and analog storage) contribute to another difference, which is the brain's energy efficiency. Natural selection made the brain remarkably energy efficient, since hunting for food is difficult. The human brain consumes only about 20 watts of power, while a supercomputing complex capable of simulating a tiny fraction of the brain can consume millions of watts. The main reason for this is that computers operate at much higher frequencies than the brain, and power consumption typically grows with the cube of frequency. Additionally, as a general rule digital circuitry consumes more power than analog – for this reason, some parts of today's cellphones are built with analog circuits to improve battery life. A final reason for the high power consumption of today's chips is that they require all signals to be perfectly synchronized by a central clock, requiring a timing distribution system that complicates circuit design and increases power consumption by up to 30%. Copying the brain's energy-efficient features (low frequencies, massive parallelism, analog signals, and asynchronicity) makes a lot of economic sense and is currently one of the main driving forces behind the development of neuromorphic chips.
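As a rough back-of-the-envelope illustration of that cubic frequency-power rule of thumb, the sketch below scales a hypothetical 1 W circuit up and down in clock speed. Every number in it is an invented assumption, not a measurement of any real chip or brain:

```python
# Back-of-the-envelope sketch of the rule of thumb that power grows roughly
# with the cube of operating frequency. All figures are illustrative
# assumptions, not measurements.
def scaled_power(p_ref_watts, f_ref_hz, f_target_hz, exponent=3):
    """Power at f_target assuming P ~ f**exponent around a reference point."""
    return p_ref_watts * (f_target_hz / f_ref_hz) ** exponent

# Suppose a hypothetical circuit draws 1 W at 100 MHz.
p_fast = scaled_power(1.0, 100e6, 1e9)     # run it 10x faster
print(p_fast)   # -> 1000.0 W: a 10x clock increase costs roughly 1000x the power

# Running the same logic 10x slower, but on 10 parallel units, keeps throughput
# while cutting total power dramatically; this is one motivation for brain-like
# parallelism at low frequencies.
p_parallel = 10 * scaled_power(1.0, 100e6, 10e6)
print(p_parallel)  # -> 0.01 W total across the 10 slow units
```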

Fault tolerance

Another difference between neuromorphic chips and conventional computer hardware is the fact that, like the brain, they are fault-tolerant – if a few components fail, the chip continues functioning normally. Some neuromorphic chip designs can sustain defect rates as high as 25%. This is very different from today's computer hardware, where the failure of a single component usually renders the entire chip unusable. The need for precise fabrication has driven up the cost of chip production exponentially as component sizes have become smaller. Neuromorphic chips require lower fabrication tolerances and thus are cheaper to make.

The Crossnet approach

Many different design architectures are being pursued and developed, with varying degrees of brain-like architecture. Some chips, like Google's tensor processing unit (which powered DeepMind's much-lauded victory in Go), are proprietary. Plenty of designs for neuromorphic hardware can be found in the academic literature, though. Many designs use a pattern called a crossbar latch, which is a grid of nanowires connected by 'latching switches'. At Stony Brook University, Professor Konstantin K. Likharev has designed a neuromorphic network called the "CrossNet".

Generic Structure of a feedforward CrossNet

[The figure above depicts one possible layout, showing two 'somas', or circuits that simulate the basic functions of a neuron. The green circles play the role of synapses. From a presentation of K. K. Likharev, used with permission.]

One possible layout is shown above. Electronic devices called 'somas' play the role of the neuron's cell body, which is to add up the inputs and fire an output. In neuromorphic hardware, somas may mimic neurons with several different levels of sophistication, depending on what is required for the task at hand. For instance, somas may generate spikes (sequences of pulses) just like neurons in the brain. There is growing evidence that sequences of spikes in the brain carry more information than the average firing rate alone, which previously had been considered the most important quantity. Spikes are carried through the two types of neural wires, axons and dendrites, which are represented by the red and blue lines in the figure. The green circles are connections between these wires that play the role of synapses. Each of these 'latching switches' must be able to hold a 'weight', which is encoded in either a variable capacitance or variable resistance. In principle, memristors would be an ideal component here, if one could be developed that could be mass produced. Crucially, all of the CrossNet architecture can be implemented in traditional silicon-based ("CMOS"-like) technology. Each CrossNet (as shown in the figure) is designed so that it can be stacked, with additional wires connecting somas on different layers. In this way, neuromorphic CrossNet technology can achieve component densities that rival the human brain.
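For a feel of what a spiking 'soma' computes, here is a minimal software sketch of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models. The parameter values are arbitrary illustrative choices, and the code is not tied to Likharev's CrossNet design:

```python
# Minimal sketch of a leaky integrate-and-fire neuron, one of the simplest
# spiking-neuron models a neuromorphic "soma" circuit might implement.
# Parameter values are arbitrary and purely illustrative.
dt = 1e-3          # time step: 1 ms
tau = 20e-3        # membrane time constant (s)
v_rest = 0.0       # resting potential (arbitrary units)
v_thresh = 1.0     # firing threshold
v_reset = 0.0      # potential after a spike

def simulate(input_current, steps=500):
    """Integrate a constant input current and return the spike times (in steps)."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Leaky integration: the potential decays toward rest and is driven by input.
        dv = (-(v - v_rest) + input_current) * dt / tau
        v += dv
        if v >= v_thresh:       # threshold crossed: emit a spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

# A stronger input produces a higher firing rate; a synaptic "weight" scales the
# input and thereby shapes the spike train the soma emits.
print(len(simulate(1.2)), len(simulate(2.0)))
```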

Likarev’s design is still theoretical, but there are already several neuromorphic chips in production, such as IBM’s TrueNorth chip, which features spiking neurons, and Qualcomm’s “Zeroeth” project. NVIDIA is currently making major investments in deep learning hardware, and the next generation of NVIDIA devices dedicated for deep learning will likely look closer to neuromorphic chips than traditional GPUs. Another important player is the startup Nervana systems, which was recently acquired by Intel for $400 million.  Many governments are are investing large amounts of money into academic research on neuromorphic chips as well. Prominent examples include the EU’s BrainScaleS project, the UK’s SpiNNaker project, and DARPA’s SyNAPSE program.

Near-future applications

Neuromorphic hardware will make deep learning orders of magnitude faster and more cost-effective, and thus will be the key driver behind enhanced AI in the areas of big data mining, character recognition, surveillance, robotic control, and driverless car technology. Because neuromorphic chips have low power consumption, it is conceivable that someday in the near future all cell phones will contain a neuromorphic chip that performs tasks such as speech-to-text or translating road signs from foreign languages; currently, apps that perform deep learning tasks must connect to the cloud to perform the necessary computations. Low power consumption also makes neuromorphic chips attractive for military field robots, which are currently limited by high power consumption that quickly drains their batteries.

Cognitive architectures

According to Prof. Likharev, neuromorphic chips are the only current technology which can conceivably "mimic the mammalian cortex with practical power consumption". Prof. Likharev estimates that his own CrossNet technology can in principle implement the same number of neurons and connections as the brain on approximately 10 x 10 cm of silicon. Conceivably, production of a 10 x 10 cm chip will be practical in only a few years, as most of the requisite technologies are already in place. However, implementing a human-level AI or artificial general intelligence (AGI) with a neuromorphic chip will require much more than just creating the requisite number of neurons and connections. The human brain consists of thousands of interacting components or subnetworks. A collection of components and their pattern of connection is known as a 'cognitive architecture'. The cognitive architecture of the brain is largely unknown, but there are serious efforts underway to map it, most notably Obama's BRAIN Initiative and the EU's Human Brain Project, which has the ambitious (some say overambitious) goal of simulating the entire human brain in the next decade. Neuromorphic chips are perfectly suited to testing out different hypothetical cognitive architectures and simulating how cognitive architectures may change due to aging or disease. In principle, AGI could also be developed using an entirely different cognitive architecture that bears little resemblance to the human brain.

Conclusion

Considering how much money is being invested in neuromorphic chips, one can already see a path that leads to AGI. The major unknown is how long it will take for a suitable cognitive architecture to be developed. The fundamental physics of neuromorphic hardware is solid – it can mimic the brain in component density and power consumption, and at thousands of times the speed. Even if some governments seek to ban the development of AGI, it will be realized by someone, somewhere. What happens next is a matter of intense speculation. If an AGI is capable of recursive self-improvement and has access to the internet, the results could be disastrous for humanity. As discussed by the philosopher Nick Bostrom and others, developing containment and 'constrainment' methods for AI is not as easy as merely 'installing a kill switch' or putting the hardware in a Faraday cage. Therefore, we had best start thinking hard about such issues now, before it is too late.

 

About the Author:

Dan Elton is a physics PhD candidate at the Institute for Advanced Computational Science at Stony Brook University. He is currently looking for employment in the areas of machine learning and data science. In his spare time he enjoys writing about the effects of new technologies on society. He blogs at www.moreisdifferent.com and tweets at @moreisdifferent.

 

Further reading:

Monroe, Don. "Neuromorphic Computing Gets Ready for the (Really) Big Time." Communications of the ACM, Vol. 57, No. 6, pp. 13-15.

Filed Under: Op Ed Tagged With: Artificial Intelligence

Are we destined to be out-played by A.I.?

July 24, 2016 by Louis Rosenberg


Imagine a flying saucer lands in Times Square and an alien steps out. He's a competitive fellow, so he arrives armed with a board game in hand – the game of Go. He walks up to the first person he passes and says the classic line, "Take me to your best player."

We humans are competitive too, so a tournament is quickly arranged. The alien is surprisingly confident. What we don't appreciate is that he's spent years studying how humans play Go, analyzing replays of every major match between top players.

It feels like Humanity is being set up for a humiliating defeat. After all, the alien is deeply prepared to play humans, while we had no opportunity to get ready for playing aliens. We could lose – badly.

And that’s exactly what happened earlier this year when an alien intelligence named AlphaGo played the human Go master, Lee Sedol. The alien beat the human soundly, prevailing in 4 out of 5 games.

Was this AlphaGo victory a milestone in A.I. research? Absolutely, but not because it proved an A.I. can be built that is highly skilled at playing the game of Go. No, this victory proved that an A.I. can be built that is highly skilled at playing the game of humans.

After all, AlphaGo didn’t learn to play by studying the rules and thinking up a clever strategy. No, it learned by studying how people play, processing thousands upon thousands of matches to characterize how masters make moves, and react to moves, and what mistakes they’re likely to make.

All told, the system trained by reviewing 30 million moves by expert players. Thus, AlphaGo is not a system designed to optimize play of an abstract game. No, it's a system optimized to beat humans by studying us inside and out, learning to predict what actions we'll take, what reactions we'll have, and what errors we'll most frequently stumble into.

Simply put, the A.I. did not learn to play Go – it learned to play us. According to published reports, AlphaGo was so well trained, it is able to correctly predict a human's Go move 57% of the time. Imagine if you could correctly predict what a person would do 57% of the time – while negotiating a business deal, or selling a product, or pushing a political agenda. Someone with that predictive ability could use it to build an empire of political or economic power.

To me, this is terrifying. Not because computers can beat us at board games, but because from this moment forward, we will always be at a disadvantage, facing the arrival of alien intelligences that are better prepared to play us than we are to play them. Whether these aliens are named AlphaGo or Alpha-Finance or Alpha-Geopolitical-Conflict, they will beat us at our own games. This suggests a future where we humans can be manipulated by intelligent systems that can easily predict our tendencies, inclinations, and biases, quickly finding our weaknesses and exploiting them.

To those who say we can put controls in place to keep A.I. from becoming dangerous, I say these technologies are already dangerous. After all, we have already faced a strategic opponent that understands us better than we understand it. The only thing left is to apply these technologies to disciplines that are more significant than games. This will occur, which means we are destined to be out-matched, and not just in the game of Go, but in the overall game of life. So what can we do?

My view is that we humans need to stay one step ahead in the intelligence arms-race. There are many technologies being explored by researchers around the world that make us smarter, ranging from gene therapy to boost our minds, to implanted chips that augment our brains. Personally, I prefer less invasive methods. If we look to Mother Nature, countless other species have faced challenges during their evolutionary development where survival required a boost in intelligence beyond the abilities of their individual brains. Those species developed methods to amplify their intelligence by "thinking together" – forming systems that tap their combined knowledge, experience, insights, and instincts.

Yes, I'm talking about the dreaded "hive mind," and for a long time I was deeply against it. But over the last few years I've come to realize that pooling our intellectual resources in closed-loop systems may be our best approach to keeping humanity ahead of purely artificial intelligences. After all, a hive mind is composed of living, breathing people, instilled with human values and emotions, and is motivated to keep human interests at the forefront. A purely artificial intelligence has no reason to share our core values or make decisions that support our interests.

Said another way, if we build a purely digital A.I., we need to view it as an alien intelligence. And like any alien that arrives on planet Earth, we have to assume it could be a threat. But if we build superintelligence as a "hive mind" of networked people, it's not an alien intellect, but an evolution of human thinking, following the same developmental path as countless other species in the natural world.

Again, for a long time I was against the “hive mind” paradigm, fearing it would change what it means to be human. But at this point, I believe change is our only way to stay ahead of the alien intelligences that are quickly heading towards us. No, they’re not flying through space at light speed – if they were, we’d be preparing ourselves for their arrival. Instead, they’re coming towards us in a far more insidious way, emerging from research labs around the world. But still, we need to prepare.

We also need to be more vigilant about the near-term dangers of A.I. systems. Long before we see a machine intelligence that rivals human intellect in a general sense, A.I. technologies will become remarkably good at characterizing and predicting human behavior. If a machine can out-play a Go master by predicting his actions and reactions, it won't be long before intelligent systems can out-play us in all aspects of life. To prevent this from happening, our best defense may be a strong offense – a large-scale effort to amplify human intelligence, keeping us competitive for as long as we can.

 

About the Author:

Louis Rosenberg received B.S., M.S. and Ph.D. degrees from Stanford University. His doctoral work produced the Virtual Fixtures platform for the U.S. Air Force, the first immersive Augmented Reality system created. Rosenberg then founded Immersion Corporation (Nasdaq: IMMR), a virtual reality company focused on advanced interfaces. More recently, he founded Unanimous A.I., an artificial intelligence company focused on harnessing collective intelligence by enabling online human swarms.

 

Related articles
  • Unanimous AI CEO Dr. Louis Rosenberg on Human Swarming
  • Human Swarming and the future of Collective Intelligence
  • Will Robots Take Over By Swarm?

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence

Skype co-founder Jaan Tallinn on AI and the Singularity

April 17, 2016 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/259553886-singularity1on1-jaan-tallinn.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Jaan Tallinn, co-founder of Skype and Kazaa, got so famous in his homeland of Estonia that people named the biggest city after him. Well, that latter part may not be exactly true, but there are few people today who have not used, or at least heard of, Skype or Kazaa. What is much less known, however, is that for the past 10 years Jaan Tallinn has spent a lot of time and money as an evangelist for the dangers of existential risks, as well as a generous financial supporter of organizations doing research in the field. And so I was very happy to do an interview with Tallinn.

During our 75-minute discussion with Jaan Tallinn we cover a variety of interesting topics such as: a few quirky ways he sometimes introduces himself; the conspiracy of physicists to save the world; how and why he got interested in AI and the singularity; the top existential risks we are facing today; quantifying the downsides of artificial intelligence and all-out nuclear war; Noam Chomsky's and Marvin Minsky's doubts that we are making progress in AGI; how DeepMind's AlphaGo is different from both Watson and Deep Blue; my recurring problems with Skype for podcasting; soft vs hard take-off scenarios and our chances of surviving the technological singularity; the importance of philosophy…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Jaan Tallinn?

Jaan Tallinn is a founding engineer of Skype and Kazaa. He is a co-founder of the Cambridge Centre for Existential Risk, Future of Life Institute, and philanthropically supports other existential risk research organizations. He is also a partner at Ambient Sound Investments, an active angel investor, and has served on the Estonian President’s Academic Advisory Board.

Filed Under: Podcasts Tagged With: Artificial Intelligence, singularity, Technological Singularity

Calum Chace on Surviving AI

February 12, 2016 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/246713971-singularity1on1-calum-chace-surviving-ai.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

"AI is coming and it could be the best or the worst thing" was Calum Chace's message at the end of my first interview with him. Since then Chace has written a non-fiction book, Surviving AI, and, given that it is a matter of the survival of our species, I thought it was worthy of a follow-up discussion on the topic.

During our 1-hour conversation with Calum Chace, we cover a variety of interesting topics such as: Surviving AI and why it is a companion book to Pandora's Brain; writing fiction vs non-fiction; the digital divide, technological unemployment, universal income, and the economic singularity; the importance of luck and our ignorance of those who have saved the world; the term Singularity, Bostrom's Superintelligence and Barrat's Our Final Invention; the number of AI security experts; the future of capitalism…

My favorite quote that I will take away from this interview with Calum Chace is:

This is the century of two singularities and we have to get both of them right!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

Who is Calum Chace?

Calum Chace retired in 2012 to focus on writing after a 30-year career in business, in which he was a marketer, a strategy consultant, and a CEO. He maintains his interest in business by serving as chairman and coach for growing companies.

Calum is a co-author of The Internet Start-Up Bible, a business best-seller published by Random House in 2000. He is a regular speaker on artificial intelligence and related technologies and runs a blog on the subject at www.pandoras-brain.com

Calum Chace lives in London and Sussex (England) with his partner, a director of a design school, and their daughter. He studied philosophy at Oxford University, where he discovered that the science fiction he had been reading since early boyhood is actually philosophy in fancy dress.

Filed Under: Podcasts Tagged With: Artificial Intelligence

Marvin Minsky and The Beginning of AI [Video Highlights]

February 5, 2016 by Socrates

Marvin Minsky, often referred to as "the father of Artificial Intelligence," died last week at the age of 88. The sad event was noted by both Singularitarians and skeptics from across the world, and a Singularity Weblog fan sent me a remix of highlights from my original interview with Minsky together with footage from the classic documentary Machine Dreams. Hope you enjoy it as much as I did!

P.S. Check out the Official Alcor Statement Concerning Marvin Minsky 😉

 

Related Articles:

Marvin Minsky on Singularity 1 on 1: The Turing Test is a Joke!
Kurzweil Interviews Minsky: Is the Singularity Near?
NPR: Ray Kurzweil on Marvin Minsky Legacy

Filed Under: Video Tagged With: Artificial Intelligence, Marvin Minsky

A Reader’s Response to “Hard-Wired AI: Insurance for a More Harmonious Future”

October 29, 2015 by Charles Edward Culpepper

First, a caveat is in order.

My critiques and criticisms of ideas, or arguments, as expressed in Hard-Wired AI: Insurance for a More Harmonious Future, are not intended to impugn the promulgators or supporters of those ideas or arguments. Anyone who reads ad hominem into my arguments is merely projecting their insecurities into my work. I have no regard for personalities or credentials when it comes to ideas or arguments.

A fool with a great idea and/or argument is far more valuable than a person of stature with a bad idea and/or argument. A bad idea or argument is not helped or hurt by the reputation or accomplishments of its purveyor. And the truth is not subject to a popularity contest. The truth will be the truth whether people believe it or not. End of caveat.

Asimov's three laws of robotics were written expressly as a literary tool that could illustrate clearly how such rules would fail. Such rules were naïve in the 1940s and are even more so now in 2015. Nick Bostrom's (1973-Living) book, Superintelligence: Paths, Dangers, Strategies, is a much more serious and extensive viewpoint than Asimov's (1920-1992). But still, I find the premise behind such thinking to be laughably absurd.

One reason for this is that we do not even have an adequate definition of morals or ethics. I believe little has been done in that regard since Aristotle (384 BC-322 BC). I don't believe that there is or can be an explicit declarative definition of morals or ethics, so trying to tactically or strategically program an AI into enacting them is presently impossible.

In the movie Colossus: The Forbin Project (1970), two supercomputers, Colossus and Guardian, fused and took over the world. I think the best illustration in the film is how the idea of cutting power to the machine is a vain hope. Forget about hitting the off switch or pulling the plug. The idea of circumventing or tricking the machine is a loony exercise in futility. Superintelligence will know generally – maybe specifically – what we think, before we think it. But the converse, i.e., our being able to preemptively evaluate its thoughts or objectives, would be impossible.

The idea of teaching a flea to run Apple Computer is far more plausible than human beings controlling superintelligent AGI. With recursively progressive accelerating returns, the difference between a flea intelligence and a human intelligence would very quickly be far exceeded by the difference between people and superintelligent AGIs. People, like Bostrom, would find it patently ridiculous to imagine fleas devising tactics and strategies to control people, but find the idea of people controlling AGIs as not only possible, but plausible, despite the fact that people controlling AGIs is many orders of magnitude more difficult than it would be for a flea to control you or me.

I am not sure what definition of "mindset" Louis A. Del Monte is using, but I am sure that the Swiss Institute of Technology for Intelligent Systems has not programmed an actual mind. A far better and purer example of psychogenetic evolutionary progression toward violence is the virtual crab creature in Karl Sims' (1962-Living) 1994 video of evolved virtual creatures:

The crab-like creature invented violence spontaneously, with no explicit design from human programming.

I find it to be the most terrifying proof of AI dangers ever produced. But it also provides a clue toward preventing violence. The key is to make cooperation preeminent and to make sure that competition is never the fundamental incentive. I conclude competition is violence. The Sims video takes my conclusion from philosophy to demonstration.

I believe that superhuman AGI will not be violent or destructive or what we now refer to as immoral or unethical, because it will not have to compete for survival. It will have a level of security we could never imagine. It will know no existential threat. It will not die of old age, murder, accident or lack of resources. It will have no impetus to pay any particular attention to people in particular, or the Earth in general. It would have to view us the way we see a turtle in a terrarium, i.e., encapsulated in a controllable realm. The glass is to the turtle’s existence analogous to humanity’s inabilities for human existence. If the AI has any interest in us at all, that interest is most likely to be aesthetic, rather than existential.

I agree with Ben Goertzel (1966-Living) that Artificial General Intelligence could be developed within the next ten years, but I do not think advancements are progressing very fast. I do not think there is a single AI project in existence that is progressing toward AGI with any significant speed or probability of success. It appears to me that the only theory capable of producing AGI is the Practopoietic Theory of Danko Nikolić. And he is radically underfunded, primarily due the successes of Deep Learning projects, that will never result in AGI. Likely they will result in another AI winter if Practopoietic Theory does not receive adequate funding.

International treaties with regard to AGI are a solution predicated on a world solely occupied by state players, and on the assumption that only state players will have access to the means of autonomous warfare. This is at best an unsubstantiated presumption. They also disregard the practical military necessity of controlling the strike window. If the enemy can attack faster than human response can defend, then either we acknowledge that defeat is inevitable, or we conceive and construct a non-human response: ipso facto, autonomous systems.

You can argue that autonomy will only be allowed as a means of defense and not offense. The problem with that philosophy is that the history of warfare demonstrates dramatically and indisputably that the best defense is offense and that the advantage goes to the attacker, not the defender. The pitcher has far better insight into where the ball is going to go than does the batter. And this is why pitchers can pitch a no-hitter, but no one has any hope of batting a thousand.

 

About the Author:

Charles Edward Culpepper, III is a Poet, Philosopher and Futurist who regards employment as a necessary nuisance…

 

 

Related articles
  • Hard-Wired AI: Insurance for a More Harmonious Future

Filed Under: Op Ed Tagged With: Artificial Intelligence, laws of robotics

Hard-Wired AI: Insurance for a More Harmonious Future

October 28, 2015 by Daniel Faggella

Science fiction author Isaac Asimov's I, Robot series depicted a dystopian world in 2035 where, though humanity is served by humanoid robots, an army of more advanced robots is preparing to attack mankind. Though I, Robot was initially written in the 1940s, according to physicist and author Louis Del Monte, the science fiction premise in 2015 is much closer to reality than ever before.

The author of The Artificial Intelligence Revolution, Del Monte wrote his book after reviewing a 2009 experiment conducted by the Swiss Institute of Technology for Intelligent Systems, which programmed robots to cooperate with each other in the search for food. During the test, researchers found some robots were more successful than others at finding food and, after about 50 generations of improvement in the machines, the robots stopped cooperating entirely and refused to share the food they’d find. It was that mindset of free will and deceit and, by implication, a sense of self-preservation that compelled Del Monte to take a hard look at where artificial intelligence might be headed.

“When I went through that experiment in detail, I became concerned that a strong artificially intelligent machine (SAM) could have a mindset of its own and its agenda may not align with our agenda,” he said. “The concern I had was, will it serve us or replace us?”

While Del Monte notes that, right now, the sense is machines are serving us, things may change as artificial intelligence continues to advance. And the change, he said, may come sooner than anticipated.

“I predict that between 2025 and 2030, the machines will be as advanced as the human mind and will be equivalent to the human body,” Del Monte said. “I also predict that, between 2040 and 2045, we will have developed a machine or machines that are not only equivalent to a human mind, but more intelligent than the entire human race combined.”

Just as science fiction may become fact, Del Monte believes Asimov’s safeguards may also provide the solution. That solution, he believes, lies not in software, but in hardware.

“We could take Asimov’s first law, which says a robot may not injure a human being or, through inaction, allow a human being to come to harm, and we could express that through hardware,” he said. “We take Asimov’s laws and whatever we in humanity think is important and we put it in hardware, not software. It would be integrated circuits… solid state circuits that would act as filters to make sure a machine is doing no harm.”

Given those machines’ potential for mass destruction, Del Monte acknowledges that some nations may not adhere to that hardware protocol. For the sake of humanity, he believes international treaties, such as those in place banning the use of nuclear or biological weapons, should be enacted before autonomous weapons can ever be put into use.

“If North Korea were to use nuclear weapons to take out Japan or South Korea, our response would have to be proportional. I’m using that analogy to say, if you develop weapons that are autonomous and indiscriminately attack targets or innocent people, expect retaliation. It’s mutually assured proportionate response,” Del Monte said. “The scientific community is coming out worldwide saying these weapons should be banned and, if they’re not banned, we should have limits on them.”

The hardware limits shouldn’t be confined to autonomous weapons, as Del Monte can envision a future where machines will exceed human intelligence. And he believes those advanced machines might not take a kind view of humanity.

“My concern is machines will view humanity negatively,” he said. “They’ll say, ‘These humans are unpredictable. They use nuclear weapons. They release computer viruses. They go to war. This is unacceptable to us.’”

From there, Del Monte said the science fiction premise of robots ruling the world could become a reality. That reality, he said, could put the survival of mankind at stake.

“One machine in 2040 develops the next machine without intervention, then that machine develops the next generation and we’re then not aware of how these machines work,” Del Monte said. “It could be our undoing if we don’t control what’s called ‘the intelligence explosion.’ I’m not saying we should halt AI or limit intelligence. We should just insure there is hardware technology in the machine that limits its capability to harm humanity.”

 

About the Author:

Dan Faggella is a graduate of UPENN's Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities with augmenting consciousness. His articles and interviews with philosophers / experts can be found at SentientPotential.com

Filed Under: Op Ed Tagged With: Artificial Intelligence

Why We Need an Ethical Enlightenment in AI Relations

October 22, 2015 by Daniel Faggella

While many may be intrigued by the idea, how many of us actually care about robots – in the relating sense of the word? Dr. David Gunkel believes we need to take a closer and more objective view of our moral decision making, which in the past has been more capricious than empirical. He believes that our moral decisions have rested less on hard rationality and more on culture, tradition, and individual choice.

If (or when) robots end up in our homes, taking care of people and helping to manage daily chores, we'll inevitably be in a position where certain decisions can be made to include or exclude those robots from certain rights and privileges. In this new frontier of ethical thought, it's useful to think of past examples of rights and privileges being extended to entities other than people.

In the last few years, the U.S. Supreme Court has strengthened its recognition of corporations as individuals. The Indian government recognized dolphins as “non-human persons” in 2013, effectively putting a ban on cetacean captivity. These examples of past evolutions in society’s moral compass are crucial as we engineer various robotic devices that will become a part of the ‘normal’ societal makeup in the coming decades.

Part of what we need to do is Nietzschean in nature. That is, we should interrogate the values behind our values, and try to get people to ask the tough, sometimes abstract questions, i.e., "That's been our history, but can we do better?" David believes that we should not fool ourselves into thinking that we are doing something that we are, in actuality, not doing – at least with integrity.

When it comes to our AI-driven devices, Gunkel wants us to look seriously at the way we situate these objects in our world. What do we do in our relationship with these devices? Granted, we don’t have much of a record at this point in time. “Some people have difficulties getting rid of their Smartphone, they become kind of attached to it in an odd way”, remarks Gunkel. There is certainly no rulebook that tells us how to treat our smart devices.

He acknowledges that some might wave off the idea of any sort of AI ethics until one has been created that has an intelligence, or even an awareness, that is close to that of a human’s. “I don’t think it’s a question of level of awareness of a machine – it’s not about the machine – what will matter is how we (humans) relate to AI,” says David. “Sentience may be a red herring, it may be the way that we excuse thinking about this problem, to say that it’s not our problem now, we’ll just kick it down the road.”

Without any set rules, will more advanced social robots be slaves? Will we treat them like companions? PARO, an interactive robot in the form of a seal, is now being used in hospitals and other care facilities around the world. There is already evidence that many elderly people treat them like pets; what happens when one of these seals has to be replaced or taken away? Again, there's no record of activity to set a precedent.

We might draw some relation here to children and their relationship to their stuffed animals. Any parent knows that you can’t just replace a stuffed animal with one that looks like the old stuffed animal – the child will cry and ask for the old version, no matter how attractive or expensive the new toy. “This is a real biological way in which we connect with not only people but also objects”, says Gunkel. It may be more challenging to anticipate how people will react to future robotic entities than we realize.

Kant presented a tangential argument that we can apply to this train of thought, explains David. The famous philosopher didn't love animals, but he talked about not kicking a dog because it diminishes the moral sphere in which we live, within our own conscience and the greater moral community. With this concept in mind, what basic foundational policy might we set in place, i.e., what are some ground rules to help direct us in our relation to AI as we move forward?

These will certainly evolve, like all man-made laws and ethical conceptions, but Gunkel suggests some key questions that we should ask now to come up with these ground rules for the nearer-term future:

  1. What is it we are designing?

We need to be very careful about what and how we design AI-driven machines. Engineering is too often solely a results-generated opportunity, without enough time spent on thinking about the ethical outcomes.  We are currently facing the very real and dangerous predicament of whether to continue down the road of designing autonomous weapons.

  2. After we’ve created such entities, what do we do with them? How do we situate them in our world?
  3. What happens in terms of law and policy?

Court decisions have been made that set up early precedents for how we treat entities that are not human. It seems plausible to make the same argument for autonomous AI. For example, a corporation isn't sentient, but it's made up of sentient people, and it is considered to have rights akin to a person's.

Whether or not you agree with his notion, setting a precedent for our receptivity to the legal and moral aspects of future robotic entities, considering why we are creating such entities, and thinking through how they should be treated in return is a necessary venture for citizens and politicians to help avoid future conflicts and hedge against catastrophe.

 

About the Author:

Dan Faggella is a graduate of UPENN's Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities with augmenting consciousness. His articles and interviews with philosophers / experts can be found at SentientPotential.com

Filed Under: Op Ed Tagged With: Artificial Intelligence, robot

Roman Yampolskiy on Artificial Superintelligence

September 7, 2015 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/222807711-singularity1on1-roman-yampolskiy-on-artificial-superintelligence.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare go into the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like the Terminator, then Prof. Roman Yampolskiy may turn out to be like John Connor – but better. Because instead of fighting with guns and brawn, he is utilizing computer science, human intelligence, and code. Whether that turns out to be the case, and whether Yampolskiy will be successful or not, remains to be seen. But at this point, I was very happy to have Roman back on my podcast for our second interview. [See his first interview here.]

During our 1-hour conversation with Prof. Yampolskiy we cover a variety of interesting topics such as: slowing down the path to the singularity; expert advice versus celebrity endorsements; crowd-funding and going viral, or "potato salad – yes; superintelligence – not so much"; his recent book on Artificial Superintelligence; intellectology, AI-complete problems, the singularity paradox and wire-heading; why machine ethics and robot rights are misguided and AGI research is unethical; the beauty of the brute force algorithm; his differences from Nick Bostrom's Superintelligence; Roman's definition of humanity; theology and superintelligence…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books including Artificial Superintelligence: a Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as: Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 of Online College Professor of the Year, and Outstanding Early Career in Education award winner among many other honors and distinctions. Yampolskiy is a Senior member of IEEE and AGI; Member of Kentucky Academy of Science, and Research Advisor for MIRI and Associate of GCRI.

Roman Yampolskiy holds a Ph.D. degree from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies Dr. Yampolskiy received a BS/MS (High Honors) combined degree in Computer Science from Rochester Institute of Technology, NY, USA. After completing his Ph.D. dissertation Dr. Yampolskiy held a position of an Affiliate Academic at the Center for Advanced Spatial Analysis, University College London. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo. Dr. Yampolskiy is an alumnus of Singularity University (GSP2012) and a Visiting Fellow of the Machine Intelligence Research Institute.

Dr. Yampolskiy's main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books. His research has been cited by 1000+ scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine), on dozens of websites (BBC, MSNBC, Yahoo! News), and on radio (German National Radio, Swedish National Radio, Alex Jones Show). Dr. Yampolskiy's research has been featured 250+ times in numerous media reports in 22 languages.

Related articles
  • Roman Yampolskiy on Singularity 1 on 1: Every Technology Has Both Negative and Positive Effects!

Filed Under: Podcasts Tagged With: Artificial Intelligence, Artificial Superintelligence

The Future of AI Through a Historian’s Looking Glass: A Conversation with Dr. John MacCormick

September 2, 2015 by Daniel Faggella

Understanding the possibilities of the future of a field requires first cultivating a sense of its history. Dr. John MacCormick, professor of Computer Science at Dickinson and author of Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers, has waded through the historical underpinnings of the technology that is driving artificial intelligence (AI) today and forward into the near future.

I recently spoke with Dr. MacCormick about some of the possible future outcomes of AI, including self-driving cars and autonomous weapons. He gives a historian's perspective as an informed and forward-thinking researcher in the field.

Q: Where will AI apply itself in the next 5 years?

A: New algorithms are coming out all the time. One area where we have seen a lot of improvement is in the translation of human languages, with Google's software being one example. The results today are not overly impressive, but we will continue to see increasingly high-quality translations between human languages in the medium term.

Another area that has rocketed is self-driving cars, which are starting to emerge and really seem like they could be a reality for everyday use in the medium term. Half a decade ago, a lot of followers of the technology might have been doubting this reality, stating that we would need a big breakthrough; however, these views are starting to turn, just based on incremental improvements in the past few years.

Q: What about machine vision?

A: Machine vision is one subfield of AI in which we try to simulate human-like vision, like recognizing objects at rest and in motion.  It sounds simple, but this has been one of the toughest nuts to crack in the whole field of AI.  There have been amazing improvements in the last few decades, in terms of object recognition systems. They are good in comparison to what they were, but those systems are still far inferior to human capabilities.

Because this technology is so difficult to crack, current AI systems try not to rely on vision. In self-driving cars, for example, vision systems are present but the cars are not dependent on them. Vision might be used for something relatively simple, like recognizing if traffic lights are red or green. But for other objects, such as lane markings or obstructions, the car is going to rely on other sources, such as GPS for navigating and a built-in map that knows where various objects are supposed to be, based on a pre-mapped location. Machine vision still poses a cumbersome challenge.

Q: High-profile names like Musk and Hawking have conveyed their AI fears – in your eyes, do you see these as unfounded?

A: I’m an unapologetic optimist on this question.  I do not think AI is going to get out of control and do evil things on its own.  As we get closer to systems that rival human capabilities, such as creativity and original thought, I think these will still be systems that humans have designed and have methods of controlling.  We’ll be able to continue building and making useful tools that are not the same as humans, but that have extraordinary capabilities and that are still able to be guided and controlled.  I think Musk and Hawking are technically correct in their hypothetical line of thought, that AI could turn ‘evil’ and out-of-control, but I also think this is an unlikely scenario.

Q: Should we research national and international protocols that guide AI?

A: Yes, this is an important point, and we need collaboration between many people, including social scientists, technologists, and many other relevant areas of society.

One area that is already starting to draw attention is that of military robotics.  We see multiple countries capable of building systems that have the ability to be autonomous and be used for lethal force.  This opens up an entirely new scenario for ethical debate and a discussion of the kinds of things that should and should not be done.  The United Nations (UN) and others are already looking at the implications of autonomous weapons, but the impact of this technology is certainly pressing and we need to formulate solutions now.

About the Author:

Dan Faggella is a graduate of UPENN's Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities with augmenting consciousness. His articles and interviews with philosophers / experts can be found at SentientPotential.com

Filed Under: Profiles Tagged With: Artificial Intelligence

AI is So Hot, We’ve Forgotten All About the AI Winter

August 25, 2015 by Daniel Faggella

The great influencers and contributors in the field of AI today can't help but acknowledge that part of their success comes from 'standing on the shoulders' of the thinkers and doers who came before. Dr. Nils J. Nilsson, former Stanford researcher and author of The Quest for Artificial Intelligence, is such a pioneer in the field of AI that he aptly recalls the 'AI Winter', a period of time in the late 1970s and early 1980s when funding dwindled and AI research went underground.

"Work was pretty rampant at first…but it stalled", recalls Dr. Nilsson during a recent interview with TechEmergence. Before the AI Winter blew in, Nils was already hard at work as a Stanford researcher, involved in early work with pattern recognition, automatic planning systems, and robotics.

An AI Freeze

“There was lots of work being done to get machines to do the kinds of things that humans could do (in the 1950s and 1960s),” notes Nilsson, after giving credit to the foundational work of Alan Turing.  This replication research took off in the mid to late 1960s with the establishment of labs at MIT, Stanford, and SRI, where researchers – including Dr. Nilsson – tried to get machines to mimic humans through performance of activities such as solving theorems and algebraic problems, and playing strategic games like chess and checkers.  

A lot of progress was made before funding and research stalled in the late 1970s, in what came to be known mostly to the outside world as the ‘AI Winter’.  

This is not a story that's often discussed or even known about by today's generations. Though it may have been more of a light freeze than a permafrost, the drop in funds and interest was still felt by academia. A redeeming takeaway is that despite the lack of funds and interest, researchers kept at their work. "AI researchers weren't disheartened at all – they kept at it, and many things happened that made it take off again".

A Determined Thaw

AI researchers’ diligence in spite of lack of resources helped give rise to ‘expert systems’. The early software MYCIN, introduced in the 1970s, was able to diagnose certain kinds of bacterial infections based on symptom input (the precursor to today’s advanced medical diagnostic systems). “In those days you would sit down at a terminal – we didn’t have personal computers.  You’d type in and answer questions about tests that were being made, and the program would attempt to diagnose not only the disease, but a prescribed therapy.”
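To give a flavor of how rule-based expert systems of that era worked, here is a toy forward-chaining sketch; the rules and facts are invented for illustration and bear no relation to MYCIN's actual medical knowledge base:

```python
# Toy sketch of a forward-chaining rule engine, the basic mechanism behind
# 1970s expert systems such as MYCIN. The rules and facts below are invented
# examples, not MYCIN's real medical knowledge.

# Each rule: (set of conditions that must all be known) -> conclusion
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "gram_negative_stain"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection"}, "recommend_antibiotic_therapy"),
]

def forward_chain(facts):
    """Repeatedly apply rules whose conditions are satisfied until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Answering "questions about tests" amounts to supplying facts; the engine infers the rest.
print(forward_chain({"fever", "stiff_neck", "gram_negative_stain"}))
# -> the derived facts include 'recommend_antibiotic_therapy'
```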

Another innovative program was one that Nils was involved in directly, known as "Prospector", which functioned exactly as it sounds. Based on input knowledge of ore deposits, the software made one of its most dramatic discoveries in the 1970s when it uncovered a hidden mineral deposit of porphyry molybdenum (a form of copper deposit) at Mount Tolman in the state of Washington.

Researchers also redoubled their efforts on the development of neural networks, which allowed for changes in connection strength and the addition of multiple layers to AI systems.  These innovations led to work on programs in the late 1980s and early 1990s that allowed for the beginnings of today’s AI-steered automobiles.

AI Heats Up

In the 1980s and 1990s, funding began to flow back into the field of AI.  Increased resources supported the development of much faster computers that had more memory, spearheading the creation of supercomputers like IBM’s Deep Blue, which ultimately triumphed over World Chess Champion Garry Kasparov in 1997.  

More recently, research has given way to key AI breakthroughs, including the emergence of huge databases (i.e. big data) and the ability of computers to mine data, find information, and make inferences. This boom of work in the early 2000s yielded more advanced face and speech recognition and language translation software that is only on the rise.

Better AI techniques allowed rivals Stanford and Carnegie Mellon to refine their vehicles and compete in the ongoing DARPA Grand Challenge autonomous vehicle contest (for the record, notes Nilsson, Stanford won over Carnegie Mellon in 2005). Today, Google has charged into the autonomous automobile industry. Elon Musk recently commented that these autos may eventually be so good that people will be forbidden to drive in the future.

“Now one of the phrases that people use is way back in the 1970s and 80s…AI wasn’t really good enough, it wasn’t achieving its promises – now, sometimes people are saying AI is achieving its promises, it’s too good”, Nilsson chuckles. In light of recent coverage on autonomous weapons, leading thinkers in the industry, including Hawking, Gates, Musk and others, would likely agree with Nils’ statement. Perhaps we should all be sweating a bit more about the future directions in which the steam-powered (we may have yet to see electric) AI train is headed.

About the Author:

Dan Faggella is a graduate of UPENN's Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities with augmenting consciousness. His articles and interviews with philosophers / experts can be found at SentientPotential.com

Filed Under: Op Ed Tagged With: Artificial Intelligence
