When will computers be smarter than humans? Return asked top AI experts: roon

The 2020s have seen unprecedented acceleration in the sophistication of artificial intelligence, thanks to the rise of large language model technology. These machines can perform a wide range of tasks once thought to be solvable only by humans: write stories, create art from text descriptions, and solve complex tasks and problems they were not trained to handle.

We posed two questions to six AI experts: James Poulos, roon, Robin Hanson, Niklas Blanchard, Max Anton Brewer, and Anton Troynikov. —Eds.

1. What year do you predict, with 50% confidence, that a machine will have artificial general intelligence — that is, when will it match or exceed most humans in every learning, reasoning, or intellectual domain?

2. What changes to society will this effect within five years of occurring?

roon

AGI will be developed in 2027 for the digital world and in 2037 for the real world. Its creators will be the richest people in history. The question is hard because skills are obviously not at all evenly distributed.

If you want to match the median human at “English to French translation,” basically any large language model can get there. If, instead, you say we need a language model that beats the 99th percentile of human skill at “English to French translation,” that’s maybe a more interesting question. We’ve had that solved for four years or so, depending on which metrics you like. Our expectations have grown: These days we want a handful of genius models to surpass 99% of humans in all skills.

Language models appear to get better at all tasks in an incessant curve as we progress at making these models larger and more data-rich. But each unit of progress isn’t built equally. While a large gain in the beginning of training might mean that the network has just learned the concept of “sentiment” and made the next token prediction significantly easier, a small gain toward the end of training might mean that the model has just solved “quantitative reasoning” and can accomplish the small minority of tasks in the text corpus that involve math problems. We call these unpredictable advancements in capabilities “phase transitions.”

Language models construct internal representations of the real world. They’ll likely encounter a phase transition somewhere deep in the training process of a giant model within the next five to ten years that makes them virtually unbeatable at most feats of symbolic reasoning about the world. However, not all tasks can be solved by symbolic reasoning over a world model; no amount of reading about tennis will teach your muscle fibers to hit a 150 mph serve perfectly into the opposing court. These advanced feats of musculoskeletal coordination may be genuinely as hard as the most complex abstract reasoning – but I doubt it. Humans are not particularly gifted athletes in the animal kingdom; consider a peregrine falcon eyeing and devouring its moving prey in a dive from 200 feet. Perfectly adapted to its environment and with quite a small brain, the falcon benefits from the data scale of a billion years of massively parallel physical evolution.

My hope would be that a very strong language model can recursively write a learning algorithm that enables rapid iteration and learning in the real world such that we can work with much smaller data volumes than are available in the language universe. Since the methods by which language models become tool users are fuzzier, I’d put this “real-world-completeness” milestone farther off – perhaps 15 years. The greatest danger to any of these predictions is the availability of data: If we run out of all useful language data on the internet and find it impossible to generate more, all bets are off.

A machine matching human labor in performance is one thing, but performing it economically is another. Strangely enough, we’ll see high-dollar, high-skill, creative jobs being replaced more quickly than rote work. While it is absolutely worth it to light GPUs on fire to make software developers more efficient – a highly scaling, high-value profession that works on a mostly textual reasoning task, with only sporadic interaction with the real world – it may not be efficient any time soon to make AIs that perform something like undersea welding.

My guess is that in those first few years, humanity – and specifically nations whose economies are concentrated in advanced industries – will greatly prosper from high-value knowledge work sectors becoming vastly more efficient. I don’t think there would be massive job losses in that time. I would also guess the company in possession of the best models would hold massive and compounding technological leads, due to the complexity of engineering models on the scale of trillions of parameters and the fact that the smartest models seem likely to give suggestions that allow humans to study them and make them better. Their owners will become the wealthiest humans ever to exist.

roon is an anonymous internet commentator and a machine learning researcher.
