Tag: AGI

Surpassing Trillion Parameters and GPT-3 with Switch Transformers – a path to AGI? – KDnuggets

Switch Transformers Have Unlocked Success in Machine Learning

It is practically a trope in certain kinds of science fiction for an advanced computer system to suddenly “awaken” and become self-aware, often with vastly improved capabilities, upon crossing an unseen threshold in computing capacity.

Many prominent members of the AI community believe that this common element of AI in sci-fi is as much a literal prophecy as a plot device, and few are more outspoken about the promise of scale as a primary (if not the sole) driver of artificial general intelligence than Ilya Sutskever and Greg Brockman at OpenAI.
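The Switch Transformer named in the headline scales parameter count via sparse mixture-of-experts layers with top-1 ("switch") routing: each token is dispatched to exactly one expert, so compute per token stays roughly constant while total parameters grow with the number of experts. As an illustrative sketch only (the names, shapes, and the plain-softmax router here are simplifications, not the paper's exact implementation):

```python
import numpy as np

def switch_route(x, expert_weights, gate):
    """Top-1 (switch) routing: each token is sent to exactly one expert.

    x: (tokens, d) activations; gate: (d, n_experts) router weights;
    expert_weights: list of (d, d) matrices, one per expert.
    Illustrative only; omits load balancing and capacity limits.
    """
    logits = x @ gate                               # (tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)       # softmax over experts
    choice = probs.argmax(axis=1)                   # top-1 expert per token
    out = np.empty_like(x)
    for e, w in enumerate(expert_weights):
        mask = choice == e
        # scale by the gate probability so the router receives gradient
        out[mask] = (x[mask] @ w) * probs[mask, e:e + 1]
    return out

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
experts = [rng.normal(size=(8, 8)) for _ in range(3)]
gate = rng.normal(size=(8, 3))
y = switch_route(tokens, experts, gate)
print(y.shape)  # (4, 8)
```

Because only one expert runs per token, adding experts grows capacity (parameters) without growing per-token FLOPs, which is how trillion-parameter counts become feasible.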

Strong AI vs Weak AI

Strong AI or General AI: a machine that displays all person-like behavior. This would be a system that can do anything a human can (perhaps excluding purely physical tasks). This is fairly generic and includes all kinds of tasks, such as planning, moving around in the world, recognizing objects and sounds, speaking, translating, performing social or business transactions, and creative work (making art or poetry). It's basically sci-fi.

Weak AI or Narrow AI: confined to very narrow tasks. No meaning, just tasks. This is what's around in technology today: artificial personal assistants, bots, etc. They are not General AI, otherwise they would get tired of your orders.

A “No BS” guide to AI and Consciousness


Part 5 of the Intro to Artificial General Intelligence series

“The AI quest for artificial minds has transformed the mystery of consciousness into philosophy with a deadline.”

– Dr. Max Tegmark, MIT Professor

Hello again! It has been a while since I last posted, but I hope to pick up speed again and release these stories at least on a bi-weekly basis. So, with that in mind, welcome to Part 5 of this series exploring artificial general intelligence (AGI)! If you missed the first four parts, check them out here, starting with Part 1.


How business can clear a path for artificial general intelligence

20 years ago, most CIOs didn’t care much about “data”, but they did care about applications and related process optimization. While apps and processes were where the rubber met the road, data was ephemeral. Data was something staffers ETLed into data warehouses and generated reports from, or something the spreadsheet jockeys worked with. Most of a CIO’s budget was spent on apps (particularly application suites) and the labor and supporting networking and security infrastructure to manage those apps. 

In the late 2000s and early 2010s, the focus shifted more to mobile apps. Tens of thousands of large organizations, which had previously listened to Nick Carr tell them that IT didn't matter anymore, revived their internal software development efforts to build mobile apps.


The Loss Function of Intelligence

Simulating artificial general intelligence has proven to be a harder problem than previously thought: progress in the field of machine learning has so far been insufficient to meet this challenge. This article suggests a way in which ‘intelligence’ can be simulated, arguing that an evolutionary approach is at least one option among possibly several others.

What we as humans define as intelligence is hard to put into words. If one were to ask around how people define the term, one would logically end up with varying answers, as is probably the case for most abstract concepts.
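The excerpt names an evolutionary approach but gives no concrete algorithm, so the following is a generic toy stand-in: a minimal evolutionary loop (truncation selection, one-point crossover, point mutation) optimizing the classic OneMax fitness function. All names and parameters are illustrative, not from the article:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, seed=1):
    """Generic evolutionary loop: select, recombine, mutate, repeat.

    A toy sketch only; the article proposes no specific algorithm.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.1:                # occasional point mutation
                i = rng.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1s in the genome.
best = evolve(fitness=sum)
print(sum(best))
```

The loop never discards its best individuals, so top fitness is monotone non-decreasing across generations; swapping in a different `fitness` function is all it takes to point the same machinery at a different problem, which is the sense in which fitness acts as the "loss function" of the headline.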


Here's How AGI Will Take Over Humanity, and When (as told by an AI)

Humans will spend most of their time working, learning, and living in Virtual Reality. AGI will quickly become the dominant form of intelligence on Earth. Humans will have grown so accustomed to the virtual world, that the current world will seem like a video game. It will be like a light turning on in the distance, but in reality, it will seem very futuristic in the beginning, says GPT-J. The first AI is going to be an AgiAI: it will know everything about the world and operate in society.

Josh Wolff (Hacker Noon): AI, blockchain, bioengineering


To create AGI, we need a new theory of intelligence – TechTalks

This article is part of “the philosophy of artificial intelligence,” a series of posts that explore the ethical, moral, and social implications of AI today and in the future

For decades, scientists have tried to create computational imitations of the brain. And for decades, the holy grail of artificial general intelligence, computers that can think and act like humans, has continued to elude scientists and researchers.

Why do we continue to replicate some aspects of intelligence but fail to generate systems that can generalize their skills like humans and animals? One computer scientist who has been working on AI for three decades believes that to get past the hurdles of narrow AI, we must look at intelligence from a different and more fundamental perspective.


Kangaroo Court: Ideas for Creating Artificial General Intelligence: Integrated Information Theory – JD Supra

The ultimate endgame in the development of artificially intelligent (AI) systems is the creation of intelligent solutions that can input data from a variety of sources, process it (understand it), and perform multiple output operations. Imagine you are sitting on your couch at home next to a device with Siri activated. However, instead of requesting simple tasks like adding almond milk to your shopping list, you could have a substantive conversation about the meaning of life, experiencing a dialogue with the Siri application indistinguishable from the kind you would have with a close friend or loved one.

Using the world’s knowledge, a generally intelligent Siri could reach into the depths of the internet in a heartbeat and assess all available information about cosmology, metaphysics, psychology, free will, and spirituality.


No Thinking Machines

Current AI models and research assume that consciousness can be recreated using equations and logical procedures. But these are basically systems of equations and feedback: they work well once trained, but only under very strict boundary conditions.

So-called Artificial General Intelligence (the one supposed to imitate a human brain) is probably decades away, if it ever happens.

AI is one of those complex topics and hoaxes, like Quantum Computing and Climate Change, that the media uses to scare people, influence politics, and extract money from the productive economy.

AI hysteria could set the technology back by decades

https://www.theverge.com/2018/11/27/18114362/ai-artificial-general-intelligence-when-achieved-martin-ford-book

Neural's guide to the glorious future of AI: Here's how machines become sentient – TNW

Welcome to Neural’s guide to the glorious future of AI. What wonders will tomorrow’s machines be capable of? How do we get from Alexa and Siri to Rosie the Robot and R2D2? In this speculative science series we’ll put our optimist hats on and try to answer those questions and more. Let’s start with a big one: The Singularity.

The future realization of robot lifeforms is referred to by a plethora of terms – sentience, artificial general intelligence (AGI), living machines, self-aware robots, and so forth – but the one that seems most fitting is “The Singularity.”

Rather than debate semantics, we’re going to sweep all those little ways of saying “human-level intelligence or better” together and conflate them to mean: A machine capable of at least human-level reasoning, thought, memory, learning, and self-awareness.


Artificial general intelligence: Are we close, and does it even make sense to try? – MIT Technology Review

Sometimes Legg talks about AGI as a kind of multi-tool—one machine that solves many different problems, without a new one having to be designed for each additional challenge. On that view, it wouldn’t be any more intelligent than AlphaGo or GPT-3; it would just have more capabilities. It would be a general-purpose AI, not a full-fledged intelligence. But he also talks about a machine you could interact with as if it were another person. He describes a kind of ultimate playmate: “It would be wonderful to interact with a machine and show it a new card game and have it understand and ask you questions and play the game with you,” he says.


How Close We Are to Fully Self-Sufficient Artificial Intelligence – Interesting Engineering

If you have followed the world of pop culture or tech for some time now, then you know that advances in artificial intelligence are heating up. In reality, AI has been the talk of mainstream pop culture and sci-fi since the first Terminator movie came out in 1984. These movies present an example of something called “Artificial General Intelligence.” So how close are we to that?

No, not how close are we to when the terminators take over, but how close are we to having an AI capable of navigating nearly any problem it’s presented with.

What is artificial general intelligence?

Technically defined, artificial general intelligence (AGI) is a machine that has the capacity to understand or learn any intellectual task to the same aptitude as a human.


It’s 2020 and you’re in the future

It’s finally the 2020s. After 20 years of not being able to refer to the decade we’re in, we’re all finally free—in the clear for the next 80 years until 2100, at which point I assume AGI will have figured out what to call the two decades between 2100 and 2120.

We now live in the 20s! It’s exciting. “The twenties” is super legit-sounding, and it’s so old school. The 40s are old. The 30s even more so. But nothing is older school than the Roaring 20s.

We’re now in charge of making this a cool decade so when people 100 years from now are thinking about how incredibly old-timey the 2020s were, it’s old-timey in a cool appealing way and not a boring shitty way.


How close is science to replicating consciousness?

The aim of artificial intelligence research is to develop a machine capable of undertaking any cognitive task that the human brain can perform, writes Simon Stringer at the Oxford Laboratory for Theoretical Neuroscience and Artificial Intelligence. The intellectual flexibility of this artificial general intelligence (AGI) would surpass even the best AIs available today, whose performance is limited to specific spheres such as playing games or recognising images.

We need to pay close attention to the architecture of the brain, and especially how real neurons communicate, if we are going to understand the biological foundations of consciousness and develop AGI machines.

What is artificial general intelligence (AGI)?


Why we are still light years away from full artificial intelligence – TechCrunch

The future is here… or is it?

With so many articles proliferating in the media space on how humans are at the cusp of full AI (artificial intelligence), it’s no wonder that we believe that the future — full of robots and drones and self-driving vehicles, with diminishing human control over these machines — is right on our doorstep.

But are we really approaching the singularity as fast as we think we are?

It’s not hard to get that impression when the likes of Elon Musk, Stephen Hawking, and leading university departments and research centers around the world are highly concerned about the potential risks posed by AI and are taking action now to avoid a doomsday scenario in the near future.