
One of the co-founders of DeepMind (Shane Legg) has proposed the following definition of intelligence: "Intelligence measures an agent’s ability to achieve goals in a wide range of environments" [1]. This definition has been pretty influential, and by this definition, AlphaGo is not AI. But it is a great step towards AI.

[1] S. Legg, M. Hutter, Universal Intelligence: A Definition of Machine Intelligence. https://arxiv.org/abs/0712.3329
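
For reference, the paper makes this precise by scoring an agent's expected performance across all computable environments, weighted by simplicity. As I read it, the formal definition looks like this (my transcription, so check it against [1]):

  \Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ (so simpler environments get more weight), and V_μ^π is the expected total reward agent π achieves in μ. Nothing is excluded from the sum, which is what makes the definition so demandingly general.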



This definition has the (unintentional?) effect of defining away human intelligence. By the time a machine can demonstrate "enough" achievement in different domains to be called intelligent, the same high bar would disqualify any actual human being. If "a machine has to master backgammon, chess and poker, not just be a world champion at one of them" is a prerequisite for intelligence, then I don't think that any one human can demonstrate intelligence either.

Consider AI as a newly discovered species. If you were trying to discern whether a previously unknown cetacean were intelligent, or whether life discovered on a distant planet were intelligent, would you only say "intelligence discovered!" after it equaled or surpassed human performance on many or most kinds of thinking historically valued by humans? I wouldn't. I think that AI is already here and that the people waiting for artificial general intelligence will keep raising the bar and shifting the goalposts long after "boring" narrow AI has economically out-competed half the human population.


I think the quote does a great job of explaining why a lot of people have been critical of labeling the recent breakthroughs in machine learning as real intelligence. Most people define intelligence in comparison to humans. Things like being the best Go player in the world are so specialized that they don't seem very human at all.

Most people will not be impressed by a machine that can master backgammon, chess and poker, despite it being a great technical feat. They would be impressed by one that can successfully teach a 5th grade math class, even though there are hundreds of thousands of people who can do this.

This would require more than teaching the kids math: the machine would also have to deal with the kid who loses a parent during the school year, with bullying in the class, with misbehaving students. None of that is "specialized knowledge" like playing Go. And we are nowhere even remotely close to this.


I used to think that the rarity of human mastery of games, and games' abstraction from the physical world, were what prevented most humans from perceiving machine intelligence as intelligence. Then the 2005 DARPA Grand Challenge for self-driving vehicles showed that machines could perform a task that most adult Americans can perform, that no non-human animals have ever been taught to perform, and that requires significant awareness of the physical world. But AFAICT it didn't cause a sea change in how most people think about intelligence, human and otherwise.

There has been an uptick in people pondering the economic implications of driverless vehicles and a more robotic future. That discussion seems oddly isolated from reconsidering the nature of intelligence, human and otherwise. It's as if, after the Industrial Revolution, people kept narrowly scoping the meaning of "power" to "muscle power" rather than acknowledging mechanical forms. Oh, yes, that coal-fired pump can remove water from the mine faster than I can... but it just uses clever tricks for faking power.


> a task that most adult Americans can perform, that no non-human animals have ever been taught to perform

Woah wait what? Non-human animals successfully navigate >7 miles of mountain terrain all the time.

Machine intelligence doesn't seem like "real" intelligence because it just doesn't seem as generalizable. Taking the engines and hydraulics used to great effect in water pumps and applying them to construction cranes required engineering work, sure, but no new physics. But you can't just take the convolutional neural nets that are breaking new ground in computer vision and apply them to natural language processing; you need new computer science research to develop long short-term memory networks.
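
To make the contrast concrete, here's a minimal PyTorch sketch (the layer sizes are invented for illustration): the conv layer assumes a fixed 2-D grid of pixels, while the LSTM threads hidden state through a variable-length sequence, which is why one doesn't just drop in for the other.

  import torch
  import torch.nn as nn

  # Vision: convolution assumes a fixed 2-D grid of pixels.
  conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
  image = torch.randn(1, 3, 32, 32)   # batch, channels, height, width
  feature_map = conv(image)           # shape: (1, 16, 32, 32)

  # Language: an LSTM carries hidden state across a token sequence.
  lstm = nn.LSTM(input_size=50, hidden_size=64, batch_first=True)
  sentence = torch.randn(1, 12, 50)   # batch, tokens, embedding dim
  outputs, (h, c) = lstm(sentence)    # h is the final summary state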

The cool thing about AlphaGo, from my understanding, was that it was able to train the deep learning-based heuristics for board evaluation by playing a ton of games against itself. This is especially awesome because those heuristics are (were?) our main edge over machines [1]. But in CV and NLP, playing against yourself isn't really a thing, so again, this work doesn't automatically generalize the way engines and hydraulics did.

[1]: https://en.wikipedia.org/wiki/Anti-computer_tactics
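
For what it's worth, the self-play loop is conceptually simple, even though AlphaGo's actual pipeline (supervised pre-training, policy gradients, Monte Carlo tree search) is far more involved. Here's a toy sketch of just the core idea; Game and ValueNet are hypothetical stand-ins, not real APIs:

  import random

  def self_play_training(value_net, num_games):
      """Toy sketch only: improve a board-evaluation net from its own games."""
      for _ in range(num_games):
          game = Game()  # hypothetical game environment
          positions = []
          while not game.over():
              # Greedily pick the move whose resulting position the
              # current net scores highest, with a little exploration noise.
              moves = game.legal_moves()
              move = max(moves, key=lambda m: value_net.evaluate(game.after(m)))
              if random.random() < 0.1:
                  move = random.choice(moves)
              game.play(move)
              positions.append(game.state())
          # The final outcome becomes the training target for every
          # position the net saw along the way.
          value_net.update(positions, target=game.winner())

The key property is that the training signal (who won) comes from the system's own play, so the net can keep improving without human game records.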


In a sense, defining away human intelligence is the whole point. We have no serviceable definition of intelligence other than a behaviourist black box embodied by humans. From the perspective of AI research, any progress will chip away bits from the intelligence black box, and still the resulting AI will be out of reach of "true" intelligence. Since we have little knowledge of the internal structure of "intelligence" as a phenomenon, we can't tell how much is left to discover about it.

From the other end, the merits of production AI will be measured against isolated properties pulled out of the intelligence black box, and since intelligence remains undefined (in a strict ontological sense), debate ensues.


> We have no serviceable definition of intelligence other than a behaviourist black box embodied by humans.

I believe cognitive scientists have a considerably better idea than that.


Yes, they each have a better idea, but the intersection of their proposals is pretty much just that. Why do you think they still make a big fuss over the Turing Test?


I have never heard cognitive scientists make a big fuss over the Turing Test. Who've you been reading/watching?


Good old Doug makes a little fuss here: https://books.google.no/books?id=qa85DgAAQBAJ&lpg=PT428&ots=...

When I said "make a big fuss" I didn't mean "herald it as the gold standard", but they do go on about it even though they might disagree about what it ultimately signifies.


> Intelligence measures an agent’s ability to achieve goals in a wide range of environments

A tardigrade achieves its goals in a wide range of environments; I don't think anyone would call it intelligent. I've known quite a few 15-year-old juvenile delinquents that I'm certain were able to achieve their goals in a much wider range of environments than Albert Einstein; were they all much more intelligent than Einstein?

> But it is a great step towards AI.

How do you know how big a step it is? None of our learning algorithms has yet achieved even an insect's level of intelligence (which would normally be considered zero or close to it). How do you know we're even on the right path? I mean, I have no real reason to doubt that we'll get AI sometime in the future, but the belief -- certainty even -- that AI is imminent has been with us for about sixty years now.


I would absolutely classify a tardigrade as an intelligent system. The idea of parts interacting in an intelligent way applies to more than just networks of nerve cells.


You've made a circular definition. And I wasn't claiming that only nerve cells can form intelligence. But if we expand the definition beyond what people normally call intelligent, then we should be more precise than the vague "ability to achieve goals in a wide range of environments", or else every complex adaptive system would be called intelligent, in which case we can drop that name because we already have one: complex adaptive systems (https://en.wikipedia.org/wiki/Complex_adaptive_system).


I wasn't trying to give a definition, just refuting your claim that nobody would consider a tardigrade intelligent.


But if a tardigrade is intelligent then so are bacteria and even viruses. While you can define intelligence how you like, I don't think this coincides with common usage. In fact, you've extended the definition so much that it just means "life" or even a complex adaptive system (https://en.wikipedia.org/wiki/Complex_adaptive_system). If by intelligent you mean "alive" or "complex adaptive system", why use the word intelligent?

Also, if intelligence means "achieving goals in a wide range of environments", then I think some computer viruses are far more intelligent -- by this definition -- than even the most advanced machine-learning software to date.



