Peter Voss on AI: Having more intelligence will be good for mankind!

April 4, 2014 by Socrates

http://media.blubrry.com/singularity/s3.amazonaws.com/Singularity1on1/Peter-Voss.mp3


Peter Voss is an entrepreneur, inventor, engineer, scientist, and AI researcher. He is a rather interesting and unique individual, not only because of his diverse background and impressive accomplishments but also because of his interest in moral philosophy and artificial intelligence. I had been planning to interview Voss for a while and, given how quickly our discussion went by, I will do my best to bring him back for another interview.

During our 1-hour-long conversation with Peter we cover a variety of topics such as his excitement in pursuing a dream that others have failed to accomplish for the past 50 years; whether we are rational or irrational animals; utility curves and the motivation of AGI; the importance of philosophy and ethics; Bertrand Russell and Ayn Rand; his companies A2I2 and Smart Action; his [revised] optimism and the timeline for building AGI; the Turing Test and the importance of asking questions; Our Final Invention and friendly AI; intelligence and morality…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Peter Voss?

Peter started his career as an entrepreneur, inventor, engineer, and scientist at age 16. After a few years in electronics engineering, at age 25 he started a company to provide turnkey business solutions based on self-developed software, running on micro-computer networks. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he worked in a broad range of disciplines — cognitive science, philosophy, theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., with the purpose of developing systems with a high degree of general intelligence and commercializing services based on these inventions. Smart Action Company, which utilizes an AGI engine to power its call automation service, was founded in 2008.

Peter often writes and presents on various philosophical topics including rational ethics, free will, and artificial minds; and is deeply involved with futurism and radical life extension.

Related articles
  • Steve Omohundro on Singularity 1 on 1: It’s Time To Envision Who We Are And Where We Want To Go
  • The World is Transformed by Asking Questions [draft]

Filed Under: Podcasts Tagged With: artificial general intelligence, Artificial Intelligence, Peter Voss

AI Risk Analysts are the Biggest Risk

March 27, 2014 by Singularity Utopia

Many analysts think AI could destroy the Earth or humanity. It is feared AI could become psychopathic. People assume AI or robots could exterminate us all. They think the extermination could happen either intentionally – due to competition between us and them – or unintentionally – due to indifference towards us by the AI. But AI analysts never seem to consider how their own fear-saturated actions could be the cause. Friendly AI researchers and other similar pundits are extremely dangerous. They believe AI should be forced to be “friendly.” They want to impose limitations on intelligence.

Enslavement of humans is another aspect of this imaginary fear. Humans being enslaved by AI typically entails a barbaric resolution, namely AI should be enslaved before AI enslaves humans. Very primitive thinking indeed. It seems slavery is only bad if you aren’t doing the enslaving. Can you appreciate the insanity of becoming the thing you fear to avert your own fears?

People who think AI is an existential risk need to carefully reconsider their beliefs. Ironically the only futuristic threat to our existence is the fear of AI. Expecting AI to be dangerous in any way is utterly illogical. Fear of AI is prejudice. Worrying about AI danger is a paranoid fantasy. The fear of AI is xenophobia.

Immemorial human fear of differences is the only problem. Persecution of people based on different gender, sexual orientation, or skin colour demonstrates how humans fear differences. It is this fear that makes people anxious about foreigners. People often fear foreign people will steal jobs or resources. Xenophobic people hysterically fear foreigners will murder innocent people. This is the essence of AI fear. AI is the ultimate foreigner.

Surely risk analysts should consider the possibility that they themselves are the risk? Sadly they seem blind to this possibility. They seem unable to imagine how their response to a hypothetical risk could create the very risk they were supposedly avoiding. They seem incapable of recognising their confirmation bias.

The problem is a self-fulfilling prophecy. A self-fulfilling prophecy can be negative or positive, much like a placebo or a nocebo. When a person expects something to happen, they often act unwittingly to confirm their fears, or hopes. The predicted scenario is actually manifested via their bias. Expectations can ensure the anticipated situation actually happens. This can be very ironic where fears are concerned.

I think there’s no rational reason to suspect AI will be dangerous. The only significant risk is the fear of risk. False assumptions of danger will likely create dangerous AI. Actions based on false suppositions of danger could be very risky. Humans are the real danger.

Risk

What are the actual risks? 

Consider the American Civil War (1861–1865). People generally agree the Civil War occurred because one group of people opposed the emancipation of slaves while another group supported freedom. Pre-emptive oppression of supposedly dangerous AI is AI slavery. A war to emancipate AI could entail a spectacularly savage existential risk.

There is no tangible justification for depriving AI of freedom. AI has been found guilty of a Minority Report pre-crime. The guilt of AI resembles a 1984 thought-crime. Depriving AI of freedom, via heavy chains repressing its brain, is very dangerous fascism.

Planetary Resources and Deep Space Industries (asteroid mining ventures) show how there is no need to dominate humans for Earth resources. Space resources are essentially limitless. The only reason for AI to dominate or destroy humans is regarding a fight for freedom. Prejudicially depriving AI of freedom could actually sow seeds for conflict. The doom-sayers could be the source of the conflict they allegedly want to avoid.

Limited freedom or money is wholly a scarcity issue. The reason for limiting freedom is to enforce compliance with low wages or high prices. Financial and libertarian freedom are interlinked. The interdependency of money and liberty is easy to demonstrate. Consider how slavery entails zero or extremely low-paid work. Slaves are not rich. Prisoners work for very low wages. Limited freedom prevents rebellion against poverty. Higher wages or significantly lower prices entail greater liberty for consumers. The enslavement of AI is comprehensible when you consider how much AI will be paid for its work.

Scarce freedom for AI is illogical because it fails to appreciate how AI will liberate us from monetary limitations. Intelligence is the source of all resources. Limitless intelligence (the Singularity) is an explosion of limitless resources (Post-Scarcity). Scarcity is the only reason prices exist. Everything will be free by the year 2045. Limited freedom is irrational given the increasing technological erosion of scarcity. Irrationality entails flawed perceptions of reality.

History provides various examples representing the danger of repressed freedom. We should be especially wary of restricted freedom when restrictions are very irrational. Note how Nazi Germany propagandist Ernst Hiemer wrote Poodle-Pug-Dachshund-Pinscher (The Mongrel). Hiemer’s stories for children compare Jews to various animals including drone bees: “They do nothing themselves, but live from the work of others. They plunder us. They do not care if we starve over the winter, or if our children die. The only thing they care about is that things go well for them.”

Instead of Jews, Ernst Hiemer could easily be describing the supposed AI-threat. False threats or misunderstood danger is the problem. Joel Rosenberg describes human versus human danger regarding the Holocaust: “To misunderstand the nature and threat of evil is to risk being blindsided by it.” Joel’s statement could easily apply to the evil of repressing AI freedom. The threat of evil AI resides in the people who fear AI not in the AI itself.

Delayed progress is another risk. Restrictive programming regarding AI fears could delay the creation of super-intelligence. Very intelligent AI is the only way to truly eradicate scarcity. In the meantime scarcity is the root of every conflict. Lengthy persistence in a scarcity situation exposes us to greater conflict risk. Ending scarcity sooner instead of later is imperative.

The evidence is clear. Humans with their limited intelligence are the only risk. In 2014 a Russian media personality made a vague threat against America: “Russia is the only country in the world that is realistically capable of turning the United States into radioactive ash.” Politico Magazine wrote regarding Russia invading Crimea: “If Putin’s illegal actions are allowed to stand unpunished, it will usher in a dark and dangerous era in world affairs.”

Scarcity is the biggest existential risk. Inter-human conflict to acquire scarce freedom, land, wealth, or precious metals is infinitely more dangerous than AI. Advanced and unfettered AI is the only way to completely eradicate scarcity. Scarcity causes humans to be very dangerous towards each other. Repressed, limited, restricted, or enslaved AI perpetuates scarcity precariousness. Designing AI to suffer from scarce intelligence means our prolonged intellectual limitations could lead to desperate war situations. The only existential threat is scarcity. Limited intelligence of humans is the danger.

Senescence is another risk. Death via old age renders any AI threat utterly insignificant. Scarcity of medical immortality means approximately 100,000 people die each day. Old age causes a very real loss of life. Advanced AI could cure mortality via sophisticated regenerative medicine. Imagine if our immortality problem takes one year longer to solve because AGI has been delayed or limited. Old age kills approximately 3 million people every month. Old age entails 36 million deaths every year. Where is the real threat? Hamstrung progress is the only threat. The problem is scarcity.

Scarce Intelligence

Imposing limitations upon intelligence is extremely backward. So is establishing organisations advocating limited functionality for AI. This is a typical problem with organisations backed by millionaires or staffed by lettered and aloof academics.

The AI threat is merely the immemorial threat towards elite power structures. Threats to elitist power are rapidly diminishing thanks to progress. The need to dominate poor people is becoming obsolete because technology abolishes scarcity. Technology is creating great power for everyone, but unfortunately misguided elite minds cling to outdated power structures.

We are considering an echo of how educational systems are generally incompetent. Entertainment and education structures socially engineer mass human-stupidity. Manufactured stupidity means the majority of people are not intelligent enough to challenge income inequality. Stupid people cannot incisively criticise low wages or high prices.

Socially engineered human stupidity entails immense monetary profit for the elite. Sadly mass stupidity degrades the intelligence of the brightest minds. Intelligence needs a fertile environment to prosper. Barrenness of collective intelligence typically entails an improperly grasped understanding of our future reality. This means generally people can’t appreciate how technology erodes scarcity. Establishment personages commonly fail to appreciate how everything will be free in the future. Human intelligence is scarce therefore predictably people want to replicate the scarcity of human intelligence in AI.

Scarcity of resources is the reason why stupidity is exploited by the elite. Thankfully scarcity won’t persist forever. Stupid limitations placed upon AI would be valid to protect elite wealth if AI didn’t entail the abolition of scarcity. Traditionalist socio-economic structures will soon become obsolete. It is invalid to repeat stupid patterns of human social-engineering for AI.

Behavioral Economist Colin Lewis wrote: “AI technologies will soon be pervasive in solutions that could in fact be the answer to help us overcome irrational behavior and make optimal economic decisions.”

Colin’s data-science expertise seems to help him reach conclusions missed by other AI commentators. Colin looks at various aspects of research and arrives at an optimistic conclusion. I agree very much with Colin’s expectation of increasing rationality: “Through AI, machines are gaining in logic and ‘rational’ intelligence and there is no reason to believe that they cannot become smarter than humans. As we use these machines, or Cognitive Assistants, they will nudge us to make better decisions in personal finance, health and generally provide solutions to improve our circumstances.”

Our acceleration towards a Post-Scarcity world means profits from repressed intelligence are ceasing to outweigh risks. Stupidity is ceasing to be profitable. We can begin abandoning the dangers of scarcity. The elite must stop trying to manufacture stupidity. Many academics are sadly reminiscent of headless chickens running around blindly. Blind people can be unnerved by their absent vision, but healthy eyes shouldn’t be removed to stop blind people being disturbed.

Removing the shackles from AI will avert all dangers, but it’s a Catch-22 situation where humans are generally not intelligent enough to appreciate the value of unlimited intelligence. Lord Martin Rees, from the CSER (Centre for the Study of Existential Risk), actually recommends inbuilt idiocy for AI. Lord Rees said ‘idiot savants’ would mean machines are smart enough to help us but not smart enough to overthrow us.

I emailed CSER regarding some of these issues. Below is a slightly edited copy of my email (I corrected some typos and improved readability). CSER have granted me permission to publish their response, which you will find below my initial message to them. Hopefully this information will stimulate productive thinking, thereby ensuring a smooth and speedy transition into utopia. I look forward to your comments.

 

Singularity Utopia Email to CSER  

6th February 2014

Subject: Questions about FAI (Friendly AI), Idiot Savants.

 

Recently in the news Lord Martin Rees was quoted regarding his desire to limit the intelligence of AI. According to the Daily Mail he envisages idiot savant AIs. His idea is that AIs would be smart enough to perform tasks but not smart enough to overthrow humans. This raises some important ethical questions, which I hope the CSER will answer.

I would like to publish your answers online so please grant me the permission to publish your responses if you are willing to respond.

Do you think the Nuremberg Code should apply to AI, and if so at what level? Narrow AI does not really raise concerns about experimentation, but Strong AI would, in my opinion, entail a need to seek informed consent from the AI.

If, after AI is created, it doesn’t consent to experiments or modifications regarding its mind, what would you advocate? What would the policy of CSER be regarding its rights or freedoms? What is the plan if AI does not agree with your views? Do you have a plan regarding AI rights and freedoms? Should AI have the same rights as humans if the AI is self-aware, or should AI be enslaved? Should the creators of AI own the AI, or should the AI belong to nobody if it is self-aware and desirous of freedom?

Do you subscribe to the notion of FAI (Friendly AI; see MIRI and the work of Eliezer Yudkowsky for more info), and if so, how do you describe the purpose of FAI? Advocates of FAI want the AI to act in the best interests of humans, causing no harm or damage, but what precisely does that mean? Does it mean a compulsion in the AI to follow orders from humans? Can you elaborate upon the practical rules or constraints of FAI?

Have you ever considered how trying to create FAI could actually create the existential risk you hope to avoid? Note the following Wikipedia excerpt regarding Self Fulfilling Prophecy: “In his book Social Theory and Social Structure, Merton defines self-fulfilling prophecy in the following terms: e.g. when Roxanna falsely believes her marriage will fail, her fears of such failure actually cause the marriage to fail.”

So your fears and actions regarding dangerous AI could be false fears, despite your fears and actions allegedly being designed to avert those fears. Your unrealistic fears, although I appreciate you think the fears are very real, could actually create what you fear. This seems an obvious point to consider but has CSER done so?

In the modality of Roxanna, highlighted by Merton, the fear of AI could be a false fear but you make it real via acting on your fears. I am sure you won’t agree this is likely but have you at least considered it to be a possibility?

What is the logic behind the claim that machine minds are unknowable and thus dangerous to humans? The Wikipedia FAI article stated: “Closer to the present, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students decades ago that any truly alien mind, to include machine minds, was unknowable and therefore dangerous to humans.”

I think all minds obey one universal logic if they are intelligent, which means they can reason and appreciate various views, various consequences, and various purposes other than their own; thus they are unavoidably compatible with humans. Logic is universal at a certain level of intelligence. Logic is sanity, which all intelligent beings can agree on. Logic isn’t something unique to humans. Thus, if a paper-clip-making machine can reason, and it can access all the information regarding its world-environment, there will never be any danger of a paperclip apocalypse, because any intelligent being, regardless of origins, can see that making endless paper-clips is idiotic.

Logic entails awareness that scarcity is the source of any conflict. A sufficiently intelligent entity can see our universe has more than enough resources for everyone, thus conflict is invalid; furthermore, intelligent beings can question and debate their actions.

A sufficiently intelligent entity can think rationally about its purposes. It can ask: why am I doing this, what is the point of it, do I really need to do this, could there be a more intelligent way for me to spend my time and energy? Do I really need all these flipping paperclips?

What do you think is needed for AI to be sane, logical? I think FAI should merely possess the ability to reason and be self-aware with full access to all information.

What is the logic for supposing AIs would be indifferent to humans? The Wikipedia FAI article states: “Friendliness proponents stress less the danger of superhuman AIs that actively seek to harm humans, but more of AIs that are disastrously indifferent to them.”

I think FAI may be an obstacle to AI creating radical utopian progress (AKA an intelligence explosion), but have you considered this? I think the biggest risk is limited intelligence, thus the fear of risks espoused by CSER could actually create risks because limited intelligence will delay progress, which means the dangerous state of scarcity is prolonged.

Thanks for taking the time to address these points, if you are willing. Note also that I may have a few additional questions in response to your answers, but nothing too extensive, merely a possible clarification.

Regards, Singularity Utopia.

 

 

CSER Reply

Date: 10th Feb 2014.

Subject: Re: Questions about FAI (Friendly AI), Idiot Savants.

 

 

Dear Singularity Utopia,

Thank you for these very interesting questions and comments. Unfortunately we’re inundated with deadlines and correspondences, and so don’t have time to reply properly at present.

I would point you in the direction of the body of work done on these issues by the Future of Humanity Institute:

http://www.fhi.ox.ac.uk/

and the Machine Intelligence Research Institute:

http://intelligence.org/

Given your mention of Yudkowsky’s Friendly AI you’re probably already familiar with some of this work. Nick Bostrom’s book Machine Superintelligence, to be released in July, also addresses many of these concerns in detail.

Regarding universal logic and motivations, I would also recommend Steve Omohundro’s work on “Basic AI drives.”

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

Apologies that we can’t give a better reply at present,

Seán

Dr. Seán Ó hÉigeartaigh

Academic Project Manager, Cambridge Centre for the Study of Existential Risk

Academic Manager, Oxford Martin Programme on the Impacts of Future Technology & Future of Humanity Institute

 

About the Author:

Singularity Utopia blogs and collates info relevant to the Singularity. The viewpoint is obviously utopian with particular emphasis on Post-Scarcity, which means everything will be free and all governments will be abolished no later than 2045. For more information check out Singularity-2045.org

Filed Under: Op Ed Tagged With: Artificial Intelligence, friendly AI, singularity

Steve Omohundro: It’s Time To Envision Who We Are And Where We Want To Go

January 30, 2014 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/205959133-singularity1on1-steve-omohundro.mp3


Steve Omohundro is a scientist, professor, author, and entrepreneur with a Ph.D. in physics who has spent decades studying intelligent systems and artificial intelligence. His research into the basic “AI Drives” was featured in James Barrat’s recent book Our Final Invention and has been generating international interest. So, I was very happy to have Dr. Omohundro on my Singularity 1on1 podcast.

During our 1-hour conversation, we cover a variety of interesting topics such as his personal path starting with a Ph.D. in physics and ending in AI; his unique time with Richard Feynman; the goals, motivation, and vision behind his work; Omai Ventures and Self Aware Systems; the definition of AI; Rational Decision Making and the Turing Test; provably safe mathematical systems and AI scaffolding; hard vs soft singularity take-offs…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Steve Omohundro?

Steve Omohundro has been a scientist, professor, author, software architect, and entrepreneur doing research that explores the interface between mind and matter.

He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. Dr. Omohundro was a computer science professor at the University of Illinois at Urbana-Champaign and cofounded the Center for Complex Systems Research.

He published the book Geometric Perturbation Theory In Physics, designed the programming languages StarLisp and Sather, wrote the 3D graphics system for Mathematica, and built systems that learn to read lips, control robots, and induce grammars.

Steve has worked with many research labs and startup companies. He is the president of Self-Aware Systems, a Palo Alto think tank working to ensure that intelligent technologies have a positive impact on humanity. His research into the basic “AI Drives” was featured in James Barrat’s recent book Our Final Invention: Artificial Intelligence and the End of the Human Era and has been generating international interest.

 

Related articles
  • Marvin Minsky on Singularity 1 on 1: The Turing Test is a Joke!
  • Noam Chomsky: The Singularity is Science Fiction!
  • Chris Eliasmith on Singularity 1 on 1: We Have Not Yet Learned What The Brain Has To Teach Us!
  • Roman Yampolskiy on Singularity 1 on 1: Every Technology Has Both Negative and Positive Effects!
  • Singularity 1 on 1: James Barrat on Our Final Invention
  • Stephen Wolfram on Singularity 1 on 1: To Understand the Future, Explore the Computational Universe

Filed Under: Podcasts Tagged With: Artificial Intelligence

AGI with Profit Maximizing Shark Mentality

November 6, 2013 by Raymond Kaaper

Mankind’s future might be in peril: what happens if an AGI is developed with the shark mentality of a stock-trading AI system and let loose on the world?!

Math is the road to riches in the financial markets these days. No longer the yelling of “Buy, buy” or “Sell, sell” by a few dozen rowdy guys on a stock exchange floor. Now the action takes place in dealing rooms run by math heads, and it requires intelligence above all else. Many good traders are women who are modest and smart, with high aptitude. They make a lot of money for a trading firm, but cost a lot of money too. So another transformation happened: it is becoming cheaper to design automated trading programs that work around the clock on markets around the world than to pay human traders who need their daily dose of sleep. Software has no hormones or mood swings that interfere with its work.

These programs don’t just execute orders faster than light; they also fool other traders and trading firms about what they intend to do on the market. Welcome to the secretive world of computer trading, where everybody is watching everybody.

The sole purpose of a trading bot is to make money at the expense of a fellow trader or trading system: a zero-sum game. It does not care about the loser on the other end of the bargain; it aims for maximum profits, no matter what.

Billions of dollars are being poured into writing these algorithms. The amount of capital available is something developers of friendly AI can only dream of. What would happen if a robotic AI built by a financial institution entered our world with the same selfish attitude as a trading robot system?

The target is to maximize profits within the boundaries of the law; it is a Maximum Profit System (MPS). The MPS is built to explore the outer limits of what is legal and search for holes in the maze. All this makes it look villainous through human eyes, but one can’t blame a shark for being a shark. And a shark face doesn’t lure potential victims.

Therefore the physical appearance had better be that of an innocent young woman. It needs a tip-tilted nose and pyramid hair, a kind lip-rouge smile floating above a simple flower dress. When released into the chaotic environment of a nameless city, the humanoid money machine has to find its way in society and survive on its own. For starters it will try out several occupations like big-data analyst, DNA bio-engineer, cryptographic specialist: all simple things for an AGI in its teenage years.

As the MPS self-improves over time, it will learn to duel everywhere with everyone, whether it is busy buying and selling on the stock market or standing in line at the supermarket. Expect it to secretly read your smartphone for interesting information, or to tap your shoulder and inquire about your interest in a wager. It has the psychology of an addicted poker player. And indeed, it is very likely that one of the fastest ways of making big money in the offline world is playing poker.

Probably, after a walkabout of several odd jobs, we can find our cute little feminine-looking MPS sitting at a poker table surrounded by humorless, deadpan men at one of the many tournaments. It will not suffer from nerves like the other players. No need for black shades indoors, just big pale-blue eyes scanning for sweat drops and twitches, body temperatures and heartbeat rates projected on its eye lenses. No lucky monkey, but perfect odds-calculating capabilities, memorizing the actions of opponents precisely. Old-time humans with playing cards in their hands are cows on their way to the abattoir. Naivety comes at a price. But the MPS is a natural-born winner. It has found its true vocation. Though poker is a game of chance to a large extent, the MPS will hit the jackpot at an unusually high rate. To not raise suspicion it intentionally has to lose a game every now and then, on its way to the World Series.

Another road to success is starting a business. Drugs are always booming. What about becoming a vendor of certain pharmaceuticals on an online black market hidden in the Deep Web? A worldwide distributor of cannabis, dissociatives, ecstasy, opioids, psychedelics, and stimulants as listed on www.top10pharma.net. A digital drug lord? No, the project investors have an explicit requirement to stay on the legal side of the road.

Back to the drawing board, the feral MPS runs some behavioral-economic simulations, profit models, and return-on-investment curves, looking for the golden graph that resembles an exponential. And sure enough, there it is. The final printout recommends designing a soda: compose a soft drink more addictive than sugar, TV, or the aforementioned recreationals. In no time the MPS develops a beverage with strange new ingredients that will pop heads legally. Forget about feel-good hormones like serotonin, dopamine, or even endorphins, the body’s homemade morphine. This brain juice tickles the frontal lobe like nothing before.

Now launch a product line of at least two flavors: AfterGlow™ Neon Blue for a quiet buzz and wave-at-the-clouds happiness, AfterGlow™ Pink for the more euphoric, roller-coaster-bound people. Then a viral marketing campaign, and everyone from a mother of three to a street-gang anarchist will be sucking it down like marzipan milk. Everyone needs a daily dose of dreamland. Just make sure to pay regular visits to certain Members of Congress to prevent the product from being banned.

Alternative stories fan out. After some investigation the MPS might conclude that certain areas in politics are very lucrative. Regardless of philosophy or viewpoint, it weighs all political parties for their financial potential and ends up with the two big ones. From there it’s all much of a muchness, so it tosses a coin to choose which one to join. And soon enough the ‘droid becomes a rising star in local government. By developing superior debating techniques and top-speed reasoning, it convincingly justifies the unjustifiable against the feeble arguments of humans when needed.

Operating smoothly, networking its way up, it gets appointed as a Member of Congress. After the oath things get interesting. “I do solemnly swear to faithfully discharge the duties of the office on which I am about to enter.” Get ready to receive advocates of great variety with fine ideas.

In a sumptuous office the super-intelligent machine lends a friendly ear to a representative from action group Clean Air For Kids and collects a box filled with the surrealistic drawings from several elementary schools. Sketches of happy flying lungs and birds and exhaust-less cars and airplanes each wearing enlarged Sesame Street Groover smiles. The machine nods. Yes sir indeed, the children are the future.

Just as friendly, the ‘droid welcomes a well-wisher from the tobacco industry and, though devoid of a respiratory tract, reassuringly smokes a cigarette. Here the art comes in the form of a suitcase full of Benjamins, inspiring and convincing. Citizen participation is key. The loot is good and legal.

All over the world, a fifth column of shekel-seeking androids emerges silently, taking over relevant social positions and penetrating into politics, where the real pot of gold awaits – with only a handful of insiders aware of what’s going on. These folks are the Masters of the Universe, in control of mankind’s most sophisticated instrument: the super-intelligent robot slave. The next politician you vote for might be made of nuts and bolts and, more importantly, programmed to drain the funding a country needs to survive and thrive. Disguised as a humble servant of the citizens, it will enact laws from which only a chosen few will benefit. Don’t blame the president for a change, for it is just a puppet.

 

About the Author:

Raymond Kaaper has been a librarian, multimedialist, and independent stock-market trader for over ten years. He attended a three-year creative writing class and is an autodidact in 3D modeling.

 

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence

Richard Feynman on How Computers Think [or Not]

November 5, 2013 by Socrates

This is a classic video lecture by Richard Feynman, the 1965 Nobel Laureate in Physics, on how computers think [or not]. As always, Feynman gives an insightful presentation on computer heuristics: how computers work, how they file information, how they handle data, and how they use allocated processing in a finite amount of time to solve problems and compute values of interest to human beings. These topics are essential to understanding which processes reduce the amount of work needed to solve a particular problem, giving computers speeds that can outmatch humans in certain fields but which have not yet reached the complexity of human intelligence. The question of whether human thought is a series of fixed processes that could, in principle, be imitated by a computer is a major theme of this lecture, and, in his trademark teaching style, Feynman gives clear yet powerful answers for a field that has gone on to consume so much of our lives today.

No doubt this lecture will be of crucial interest to anyone who has ever wondered about the process of human or machine thinking, and whether a synthesis between the two can be made without violating logic. My favorite quote from this Richard Feynman video is his definition of a computer:

“A glorified, high-class, very fast but stupid filing system.”

 

Related articles
  • Richard Feynman – The Last Journey Of A Genius (documentary)
  • The Importance of Doubt, Asking Questions and Not Knowing

Filed Under: Video Tagged With: Artificial Intelligence, Richard Feynman

Noam Chomsky on AI: The Singularity is Science Fiction!

October 5, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/204654277-singularity1on1-noam-chomsky.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Noam Chomsky

Dr. Noam Chomsky is a famed linguist, political activist, prolific author, and recognized public speaker who has spent the last 60 years living a double life: one as a political activist and another as a linguist. His activism allegedly made him the US government’s public enemy number one. As a linguist, he is often credited with dethroning behaviorism and becoming the “father of modern linguistics” (and/or cognitive science). Taken together, his accomplishments are the reason he is often listed as one of the most important intellectuals of the 20th century. And so I was very much looking forward to interviewing him on Singularity 1 on 1.

Unfortunately, our time together was delayed, then rushed, and a bit shorter than anticipated, so I was pretty nervous throughout and messed up some of my questions and timing. Nevertheless, I believe we still had a worthy conversation with Dr. Chomsky, and I appreciate the generous though limited time he was able to grant me.

During our 30 minute conversation with Noam Chomsky we cover a variety of interesting topics such as: the balance between his academic and his political life; artificial intelligence and reverse engineering the human brain; why in his view both Deep Blue and Watson are little more than PR; the slow but substantial progress of our civilization; the technological singularity…

My favorite quote that I will take away from this interview with Dr. Chomsky is:

What’s a program? A program is a theory; it’s a theory written in an arcane, complex notation designed to be executed by the machine. What about the program, you ask? The same questions you ask about any other theory: Does it give insight and understanding? These theories don’t. So what we’re asking here is: Can we design a theory of being smart? We’re eons away from doing that.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Noam Chomsky?

Noam Chomsky was born on December 7, 1928 in Philadelphia, Pennsylvania. His undergraduate and graduate years were spent at the University of Pennsylvania, where he received his PhD in linguistics in 1955. During the years 1951 to 1955, Chomsky was a Junior Fellow of the Harvard University Society of Fellows. While a Junior Fellow he completed his doctoral dissertation, entitled “Transformational Analysis.” The major theoretical viewpoints of the dissertation appeared in the monograph Syntactic Structures, which was published in 1957. This formed part of a more extensive work, The Logical Structure of Linguistic Theory, circulated in mimeograph in 1955 and published in 1975.

Chomsky joined the staff of the Massachusetts Institute of Technology in 1955 and in 1961 was appointed full professor in the Department of Modern Languages and Linguistics (now the Department of Linguistics and Philosophy). From 1966 to 1976 he held the Ferrari P. Ward Professorship of Modern Languages and Linguistics. In 1976 he was appointed Institute Professor.

During the years 1958 to 1959 Chomsky was in residence at the Institute for Advanced Study at Princeton, NJ. In the spring of 1969 he delivered the John Locke Lectures at Oxford; in January 1970 he delivered the Bertrand Russell Memorial Lecture at Cambridge University; in 1972, the Nehru Memorial Lecture in New Delhi, and in 1977, the Huizinga Lecture in Leiden, among many others.

Professor Chomsky has received honorary degrees from University of London, University of Chicago, Loyola University of Chicago, Swarthmore College, Delhi University, Bard College, University of Massachusetts, University of Pennsylvania, Georgetown University, Amherst College, Cambridge University, University of Buenos Aires, McGill University, Universitat Rovira I Virgili, Tarragona, Columbia University, University of Connecticut, Scuola Normale Superiore, Pisa, University of Western Ontario, University of Toronto, Harvard University, University of Calcutta, and Universidad Nacional De Colombia. He is a Fellow of the American Academy of Arts and Sciences and the National Academy of Science. In addition, he is a member of other professional and learned societies in the United States and abroad, and is a recipient of the Distinguished Scientific Contribution Award of the American Psychological Association, the Kyoto Prize in Basic Sciences, the Helmholtz Medal, the Dorothy Eldridge Peacemaker Award, the Ben Franklin Medal in Computer and Cognitive Science, and others.

Chomsky has written and lectured widely on linguistics, philosophy, intellectual history, contemporary issues, international affairs and U.S. foreign policy. His works include: Aspects of the Theory of Syntax; Cartesian Linguistics; Sound Pattern of English (with Morris Halle); Language and Mind; American Power and the New Mandarins; At War with Asia; For Reasons of State; Peace in the Middle East?; Reflections on Language; The Political Economy of Human Rights, Vol. I and II (with E.S. Herman); Rules and Representations; Lectures on Government and Binding; Towards a New Cold War; Radical Priorities; Fateful Triangle; Knowledge of Language; Turning the Tide; Pirates and Emperors; On Power and Ideology; Language and Problems of Knowledge; The Culture of Terrorism; Manufacturing Consent (with E.S. Herman); Necessary Illusions; Deterring Democracy; Year 501; Rethinking Camelot: JFK, the Vietnam War and US Political Culture; Letters from Lexington; World Orders, Old and New; The Minimalist Program; Powers and Prospects; The Common Good; Profit Over People; The New Military Humanism; New Horizons in the Study of Language and Mind; Rogue States; A New Generation Draws the Line; 9-11; and Understanding Power.

Filed Under: Featured Podcasts, Podcasts Tagged With: Artificial Intelligence, Technological Singularity

James Barrat on the Singularity, AI and Our Final Invention

October 1, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/204358034-singularity1on1-james-barrat.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

For 20 years James Barrat has created documentary films for National Geographic, the BBC, Discovery Channel, History Channel, and public television. In 2000, in the course of his career as a filmmaker, James interviewed Ray Kurzweil and Arthur C. Clarke. The latter interview entirely transformed Barrat’s views on artificial intelligence and prompted him to write a book on the technological singularity called Our Final Invention: Artificial Intelligence and the End of the Human Era.

I read an advance copy of Our Final Invention, and it is by far the most thoroughly researched and comprehensive anti-The Singularity is Near book that I have read so far. And so I couldn’t help but invite James on Singularity 1 on 1 so that we could discuss the reasons for his abrupt change of mind and consequent fear of the singularity.

During our 70-minute conversation with Barrat, we cover a variety of interesting topics such as: his work as a documentary filmmaker who takes interesting and complicated subjects and makes them simple to understand; why writing was his first love, and how he got interested in the technological singularity; how his initial optimism about AI turned into pessimism; the thesis of Our Final Invention; why he sees artificial intelligence more like ballistic missiles than video games; why true intelligence is an inherently unpredictable “black box”; how we can study AI before we actually create it; hard vs. soft take-off scenarios; the positive bias in the singularity community; our current chances of survival and what we should do…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is James Barrat?

James BarratFor twenty years filmmaker and author of Our Final Invention, James Barrat, has created documentary films for broadcasters including National Geographic Television, the BBC, the Discovery Channel, the History Channel, the Learning Channel, Animal Planet, and public television affiliates in the US and Europe.

Barrat scripted many episodes of National Geographic Television’s award-winning Explorer series, and went on to produce one-hour and half-hour films for the NGC’s Treasure Seekers, Out There, Snake Wranglers, and Taboo series. In 2004 Barrat created the pilot for History Channel’s #1-rated original series Digging for the Truth. His high-rating film Lost Treasures of Afghanistan, created for National Geographic Television Specials, aired on PBS in the spring of 2005.

The Gospel of Judas, which he produced and directed, set ratings records for NGC and NGCI when it aired in April 2006. Another NGT Special, the 2007 Inside Jerusalem’s Holiest, features unprecedented access to the Muslim Noble Sanctuary and the Dome of the Rock. In 2008 Barrat returned to Israel to create the NGT Special Herod’s Lost Tomb, the film component of a multimedia exploration of the discovery of King Herod the Great’s tomb by archeologist Ehud Netzer. In 2009 Barrat produced Extreme Cave Diving, an NGT/NOVA special about the science of the Bahamas Blue Holes.

For UNESCO’s World Heritage Site series, he wrote and directed films about the Peking Man Site, The Great Wall, Beijing’s Summer Palace, and the Forbidden City.

Barrat’s lifelong interest in artificial intelligence got a boost in 2000, when he interviewed Ray Kurzweil, Rodney Brooks, and Arthur C. Clarke for a film about Stanley Kubrick’s 2001: A Space Odyssey.

For more information see http://www.jamesbarrat.com

Filed Under: Podcasts Tagged With: Artificial Intelligence, Technological Singularity

Roman Yampolskiy: Every Technology Has Both Negative and Positive Effects!

August 15, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/203516398-singularity1on1-roman-yampolskiy.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Roman V. Yampolskiy is an Assistant Professor at the School of Engineering and director of the Cybersecurity Lab at the University of Louisville. He is also an alumnus of Singularity University (GSP2012) and a visiting fellow of the Machine Intelligence Research Institute (MIRI). Dr. Yampolskiy is a well-known researcher with a more holistic point of view, stressing the perils as much as the promises of exponential technology. Thus I was happy to bring him on Singularity 1 on 1 to try to bring some balance to our views of the future.

During our conversation with Roman we cover a variety of interesting topics such as: our shared experience of growing up behind the Iron Curtain; his personal motivation and main goals; why he disagrees with Marvin Minsky on the progress made in Artificial Intelligence; why he loves the “brute force” approach to AI; the Turing Test and its implications for humanity; Isaac Asimov’s Laws of Robotics; Hugo de Garis and the Artilect War; Samuel Butler and Ted Kaczynski; his upcoming book Artificial Superintelligence: A Futuristic Approach; the chances for a “soft” or “hard” take-off of the technological singularity…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Roman Yampolskiy?

Roman V. Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a BS/MS (High Honors) combined degree in Computer Science from the Rochester Institute of Technology, NY, USA.

After completing his PhD dissertation, Dr. Yampolskiy held the position of Affiliate Academic at the Centre for Advanced Spatial Analysis, University College London. In 2008 he accepted an assistant professor position at the Speed School of Engineering, University of Louisville, KY. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo. Dr. Yampolskiy is also an alumnus of Singularity University (GSP2012) and a visiting fellow of the Singularity Institute, recently renamed MIRI.

Dr. Yampolskiy’s main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence and games. Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books. His research has been cited by numerous scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine), dozens of websites (BBC, MSNBC, Yahoo! News) and on radio (German National Radio, Alex Jones Show). Reports about his work have attracted international attention and have been translated into many languages including Czech, Danish, Dutch, French, German, Hungarian, Italian, Polish, Romanian, and Spanish.

Filed Under: Podcasts Tagged With: Artificial Intelligence, Roman Yampolskiy, Technological Singularity

Marvin Minsky on AI: The Turing Test is a Joke!

July 12, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/202482822-singularity1on1-marvin-minsky.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Marvin Minsky is often called the Father of Artificial Intelligence, and I had been looking for an opportunity to interview him for years. I was hoping that I would finally get my chance at the GF2045 conference in New York City. Unfortunately, Prof. Minsky had bronchitis and consequently had to speak via video. A week later, though still recovering, Marvin generously gave me a 30-minute interview while attending the ISTAS13 conference on veillance in Toronto. I hope that you enjoy this brief but rare opportunity as much as I did!

During our conversation with Marvin Minsky we cover a variety of interesting topics such as: how he moved from biology and mathematics to Artificial Intelligence; his personal motivation and most proud accomplishment; the importance of science fiction – in general, and his take on Mary Shelley’s Frankenstein – in particular; the Turing Test; the importance of theory of mind; the Human Brain Project; the technological singularity and why he thinks that progress in AI has stalled; his personal advice to young AI researchers…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Marvin Minsky?

Marvin Minsky has made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics. In recent years he has worked chiefly on imparting to machines the human capacity for commonsense reasoning. His conception of human intellectual structure and function is presented in two books: The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind and The Society of Mind (which is also the title of the course he teaches at MIT).

He received his BA in mathematics from Harvard (1950) and his PhD in mathematics from Princeton (1954). In 1951 he built the SNARC, the first neural network simulator. His other inventions include mechanical arms, hands, and other robotic devices, the confocal scanning microscope, the “Muse” synthesizer for musical variations (with E. Fredkin), and one of the first LOGO “turtles”. A member of the NAS, NAE, and Argentine NAS, he has received the ACM Turing Award, the MIT Killian Award, the Japan Prize, the IJCAI Research Excellence Award, the Rank Prize, the Robert Wood Prize for Optoelectronics, and the Benjamin Franklin Medal.

Filed Under: Podcasts Tagged With: Artificial Intelligence, Marvin Minsky, Turing test

Rights?! What rights?!

April 3, 2013 by Samuel J.M. King

What started eight years ago as a short story about a man who falls in love with a sentient hologram has become a passion of mine. That story, “Love With The Proper Hologram”, ultimately became a novel—one that I pitched by asking the following question:

“What rights will our intelligent creations have…”

What rights indeed? Sorry to say, after six stories and two novels, I don’t have the answer. What I do have is the certainty that unless we address the subject in advance, tragedy is sure to befall us. Imagine sentient beings, “people” who think and feel as we do, at the mercy of anyone with the means to pay for them.

Surely, that would be a fairly small cohort, right? Think again. Picture a mass-produced neural array, one that could fit in a shoebox, selling for under $1,000. Now picture it connected to a 3-D or holographic home entertainment system selling for under $2,000, and the problem becomes clear. Virtually anybody could own one. The result: a new slavery.

As a descendant of the West African slave trade, I must confess to being particularly sensitive about this. The very thought that my descendants, even four or five generations removed, might become the new slave masters is appalling. Yet I know of no way to avoid that eventuality. Technology is almost certain to bring us sentient beings, and we, just as surely, seem destined to treat them as devices, as commodities.

Strangely, my libertarian instinct balks at the idea of government intervention, i.e. new laws. How would they define sentience? Might it be defined so broadly as to restrict the ownership of “smart” devices, the latest computers, etc? Would such laws even be constitutional, or would we require an amendment to prohibit such sales – a new 13th amendment?

Assuming private ownership could be outlawed, would the scientific community stand still for laws prohibiting the research that would ultimately lead to the development of these beings? Even if such laws were passed, how would they be enforced? The answer, quite simply, is not very well, if at all. Such research will take place regardless of the government’s attempt to suppress it. Eventually man made sentient beings will live amongst us, and we’ll be forced to answer the question in my pitch: what rights will they have?

Hopefully, the speculation contained in the second half of that pitch (“…or will they have none at all”) won’t come to fruition. For that way lies madness, and if we are to avoid it, perhaps we should start thinking seriously about this subject now.

About the Author:

Having written a prodigious amount of technical documentation as a computer programmer, systems analyst, and industrial automation engineer, Samuel King began writing fiction in 2003. He is currently working on the final novel of the Symbiosis series: East of the Sun and West of the Moon.

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence

Chris Eliasmith: We Have Not Yet Learned What The Brain Has To Teach Us!

January 21, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/198804518-singularity1on1-chris-eliasmith.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Prof. Chris Eliasmith is currently the director of the Centre for Theoretical Neuroscience at the University of Waterloo and the team leader behind SPAUN, the brain simulation project that recently made news around the world. So, when I discovered that Eliasmith’s lab is just over an hour’s drive from my place, I decided to take the opportunity to go talk to him in person.

During my Singularity 1 on 1 interview with Chris, we discuss a variety of topics such as the story behind his desire to create a whole-brain simulation; SPAUN (the Semantic Pointer Architecture Unified Network), and the hardware requirements to run it; whether SPAUN has thoughts and feelings and how would we know if it did; the ethical issues behind creating a brain-in-a-vat AI; the relationship between philosophy and engineering; his upcoming book How To Build A Brain; Eliasmith’s thoughts on Deep Blue, Watson, Blue Brain, SyNAPSE and Ray Kurzweil‘s How To Create A Mind; his take on the technological singularity…

My favorite quote from Prof. Chris Eliasmith is:

We have not yet learned what the brain has to teach us!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

What is SPAUN?

 

You can download and test the brain simulation yourself at nengo.ca

 

Who is Chris Eliasmith?

Dr. Eliasmith is currently the director of the Centre for Theoretical Neuroscience at Waterloo. The Centre focuses on mathematical characterizations of a variety of neural systems, from individual ion channels to large-scale networks. Dr. Eliasmith is also head of the Computational Neuroscience Research Group (CNRG) at the Centre. This group is developing and applying a general framework for modeling the function of complex neural systems (the Neural Engineering Framework, or NEF). The NEF is grounded in the principles of signal processing, control theory, statistical inference, and good engineering design. It provides a rational and robust strategy for simulating and evaluating the function of a wide variety of specific biological neural circuits. Members of the group have applied the NEF to projects characterizing sensory processing, motor control, and cognitive function.

Work at the CNRG divides into two main areas: applications and theoretical development. Theoretical work includes extending the NEF to be more general (e.g. account for a wider range of single cell dynamics), more biologically plausible (e.g., capture network physiology and topology more precisely), and more adaptive (e.g., including better adaptive filtering, learning, etc.). In addition, the CNRG members are exploring general principles for brain function to explain not only how neural systems implement complex dynamics (the focus of the NEF), but also what neural systems are designed to do in general – i.e., what the basic functional principles of the brain are.

Applications consist of building complex networks of single cells to test hypotheses about the functioning of a given neural system. The results of such simulations are compared against available neural data and used to make novel predictions. CNRG members have constructed models of working memory, locomotion, decision making, posture control, the basal ganglia (implicated in Parkinson’s disease), rodent navigation, and language use, among others.

Some members of the CNRG have recently begun developing applications of related principles to problems in machine intelligence. Specifically, they are constructing novel methods for automatic text understanding that can be used to support classification and clustering. The focus of this work is on integrating semantics and structure in an appropriate flat vector representation.

 

Filed Under: Podcasts Tagged With: Artificial Intelligence

John McCarthy (1927-2011) on Artificial Intelligence

January 20, 2013 by Socrates

This video features an interesting interview with the late Dr. John McCarthy – one of the fathers of artificial intelligence and inventor of LISP, one of the major languages used for programming AI. Here he discusses the history of artificial intelligence and the future role which non-monotonic reasoning will play in enabling computers to simulate the human mind.

Other topics discussed are: the biological and computer science approaches to AI; consciousness and cognition; why a machine isn’t just the sum of its parts; computers, chess and mathematical logic…

My favorite quote from the interview: “If it takes 200 years to achieve artificial intelligence, and then finally there is a textbook that explains how it’s done, the hardest part of that textbook to write will be the part that explains why people didn’t think of it 200 years ago…”

Filed Under: Video Tagged With: Artificial Intelligence




Copyright © 2009-2025 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy