Are Dogs Smarter Than Cats? Science Has an Answer
Despite the dogs' brains varying in size, researchers found about 500 million neurons in each, more than double the 250 million found in the cat's brain.
Interestingly, science's answer (as the media reported it) was somewhat different five years ago:

Cats vs dogs: Which pet is smarter?

Researchers at CanCog Technologies, a private institution in Toronto that studies behavior and aging in companion animals, have tested dogs and cats on the same tasks, using cats that show the same level of interest in food rewards as the dogs. They found that cats make more errors than dogs and require more trials to learn the same tasks [...]

On the other hand, the physiology of dog and cat brains seems to suggest that cats have the advantage. While brain size isn't a good indicator of intelligence, the number of neurons could be, experts say. Cats have 300 million neurons to dogs' mere 160 million. By comparison, humans have 11.5 billion.

by das monde on Thu Dec 7th, 2017 at 03:14:35 AM EST
It can't be simply the number of neurons. If it were, whales would probably be smarter than us.
by gk (gk (gk quattro due due sette @gmail.com)) on Thu Dec 7th, 2017 at 05:31:00 AM EST
[ Parent ]
Are Whales Smarter Than We Are? -- Scientific American

Cetacean brains, such as those of dolphins (left) and humpback whales (right), have even more cortical convolutions and surface area than human brains do. Does that mean they're smarter?

[...] The frontal lobes of the dolphin brain are comparatively smaller than in other mammals, but the researchers found that the neocortex of the Minke whale was surprisingly thick. The whale neocortex is thicker than that of other mammals and roughly equal to that of humans (2.63 mm). However, the layered structure of the whale neocortex is known to be simpler than that of humans and most other mammals. In particular, whales lack cortical layer IV, and thus have five neocortical layers to humankind's six. This means that the wiring of connections into and out of the neocortex is much different in whales than in other mammals. The researchers' cellular census revealed that the total number of neocortical neurons in the Minke whale was 12.8 billion. This is 13 times that of the rhesus monkey and 500 times more than rats, but only 2/3 that of the human neocortex.

Wikipedia gives more numbers -- do they match? Then there are these beasts:

The elephant brain in numbers -- Frontiers in Neuroanatomy

We find that the African elephant brain, which is about three times larger than the human brain, contains 257 billion (10⁹) neurons, three times more than the average human brain; however, 97.5% of the neurons in the elephant brain (251 billion) are found in the cerebellum. This makes the elephant an outlier in regard to the number of cerebellar neurons compared to other mammals, which might be related to sensorimotor specializations. In contrast, the elephant cerebral cortex, which has twice the mass of the human cerebral cortex, holds only 5.6 billion neurons, about one third of the number of neurons found in the human cerebral cortex.
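
To keep the numbers quoted in this thread straight, here they are side by side (a quick Python tabulation; all figures are as reported in the linked articles, which visibly disagree with one another, and the human-cortex entry is merely derived from the elephant paper's "one third" remark):

```python
# Neuron counts as quoted in this thread, in billions
# (not independently verified; the sources contradict each other).
neurons_billion = {
    "dog cortex (2017 study)":          0.5,
    "cat cortex (2017 study)":          0.25,
    "dog (2012 article)":               0.16,
    "cat (2012 article)":               0.3,
    "human (2012 article)":             11.5,
    "minke whale neocortex":            12.8,
    "elephant whole brain":             257.0,
    "elephant cerebellum":              251.0,
    "elephant cerebral cortex":         5.6,
    "human cerebral cortex (3 x 5.6)":  16.8,  # derived from the quote above
}
for name, n in sorted(neurons_billion.items(), key=lambda kv: -kv[1]):
    print(f"{name:34s} {n:7.2f} billion")
```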
by das monde on Thu Dec 7th, 2017 at 05:57:55 AM EST
[ Parent ]
These broad considerations are to the point:

The impossibility of intelligence explosion

Intelligence is situational

The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system -- a vision of intelligence as a "brain in jar" that can be made arbitrarily intelligent independently of its situation. A brain is just a piece of biological tissue, there is nothing intrinsically intelligent about it. Beyond your brain, your body and senses -- your sensorimotor affordances -- are a fundamental part of your mind. Your environment is a fundamental part of your mind. Human culture is a fundamental part of your mind. These are, after all, where all of your thoughts come from. You cannot dissociate intelligence from the context in which it expresses itself.

[...] The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks -- like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but we do know that cognitive development in humans and animals is driven by hardcoded, innate dynamics. Human babies are born with an advanced set of reflex behaviors and innate learning templates that drive their early sensorimotor development, and that are fundamentally intertwined with the structure of the human sensorimotor space. The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body [...]

Similarly, one can imagine that the octopus has its own set of hardcoded cognitive primitives required in order to learn how to use an octopus body and survive in its octopus environment. The brain of a human is hyper specialized in the human condition -- an innate specialization extending possibly as far as social behaviors, language, and common sense -- and the brain of an octopus would likewise be hyper specialized in octopus behaviors. A human baby brain properly grafted in an octopus body would most likely fail to adequately take control of its unique sensorimotor space, and would quickly die off. Not so smart now, Mr. Superior Brain.

by das monde on Thu Dec 7th, 2017 at 06:07:54 AM EST
[ Parent ]
DeepMind's AlphaZero crushes chess
20 years after DeepBlue defeated Garry Kasparov in a match, chess players have awoken to a new revolution. The AlphaZero algorithm developed by Google and DeepMind took just four hours of playing against itself to synthesise the chess knowledge of one and a half millennia and reach a level where it not only surpassed humans but crushed the reigning World Computer Champion Stockfish 28 wins to 0 in a 100-game match. All the brilliant stratagems and refinements that human programmers used to build chess engines have been outdone, and like Go players we can only marvel at a wholly new approach to the game.
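
For anyone wondering what "four hours of playing against itself" means structurally, the training loop is roughly the following (a bare-bones sketch; both functions are placeholders standing in for the real network-plus-tree-search machinery, not DeepMind's actual code):

```python
import random

# Shape of the AlphaZero self-play loop: generate games by playing
# against yourself, then train the evaluator on those games.

def self_play_game(net):
    # placeholder: would play one game with tree search guided by `net`,
    # returning (position, move probabilities, final outcome) triples
    return [("startpos", {"e2e4": 1.0}, random.choice([-1, 0, 1]))]

def train(net, examples):
    # placeholder: would fit the net's policy and value heads to `examples`
    return net

net, replay = None, []
for iteration in range(100):        # the reported run took about 4 hours
    replay += self_play_game(net)   # new data comes only from self-play
    net = train(net, replay)        # the improved net plays the next games
```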
by das monde on Thu Dec 7th, 2017 at 06:40:09 AM EST
[ Parent ]
That's why humans have invented quantum chess.
In Quantum Chess, a player does not know the identity of a piece (that is, whether it is a pawn, a rook, a bishop, and so on) until the piece is selected for a move. Once a piece is selected it elects to behave as one of its constituent conventional pieces, but soon recovers its quantum state and returns to being a superposition of two or more pieces. Why Quantum Chess? Conventional chess is a game of complete information, and thanks to their raw power and clever algorithms, computers reign supreme when pitted against human players. The idea behind Quantum Chess is to introduce an element of unpredictability into chess, and thereby place the computer and the human on a more equal footing.
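
The mechanic described above is easy to mock up (a toy sketch only; the class and method names are invented for illustration):

```python
import random

# A quantum-chess piece: a superposition of conventional identities that
# collapses to one when selected for a move, then recovers its quantum state.
class QuantumPiece:
    def __init__(self, identities):
        self.identities = list(identities)   # e.g. ["pawn", "rook"]
        self.collapsed = None

    def select(self):
        # measurement: the piece elects one constituent conventional identity
        self.collapsed = random.choice(self.identities)
        return self.collapsed

    def release(self):
        # after the move, the piece returns to being a superposition
        self.collapsed = None

p = QuantumPiece(["pawn", "bishop"])
print(p.select())   # behaves as one of its constituents for this move
p.release()         # back to superposition
```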
by gk (gk (gk quattro due due sette @gmail.com)) on Thu Dec 7th, 2017 at 06:57:38 AM EST
[ Parent ]
AI is already doing great in poker. Why would incomplete quantum information be a greater "headache" for non-humans?
by das monde on Thu Dec 7th, 2017 at 07:50:13 AM EST
[ Parent ]
While it may turn into something else, at this point DeepMind technology is a toy, suitable only for games. Yann LeCun said as much, in just about those words, at the 2017 CCN conference.

To turn into something else, DeepMind's tech needs to be able to handle infinities, Inclusive Middle logic(s), sensitivity to initial conditions, and Complexity. I don't see how they can get there following their current path.

And neither does Hinton, for that matter.

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Thu Dec 7th, 2017 at 05:40:20 PM EST
[ Parent ]
Google's AI keeps defying certain expectations. As higher intelligence is not defined yet, we may have it before we know it.

Taking this opportunity, let me correct the link to the article above:

The impossibility of intelligence explosion

Most of our intelligence is not in our brain, it is externalized as our civilization

It's not just that our bodies, senses, and environment determine how much intelligence our brains can develop -- crucially, our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools you were gifted in school. Books. Other people. Mathematical notation. Programming. The most fundamental of all cognitive prosthetics is of course language itself -- essentially an operating system for cognition, without which we couldn't think very far. These things are not merely knowledge to be fed to the brain and used by it, they are literally external cognitive processes, non-biological ways to run threads of thought and problem-solving algorithms -- across time, space, and importantly, across individuality. These cognitive prosthetics, not our brains, are where most of our cognitive abilities reside.

Also notice the stress on specialization in the "situational" section:
People who do end up making breakthroughs on hard problems do so through a combination of circumstances, character, education, intelligence, and they make their breakthroughs through incremental improvement over the work of their predecessors. Success -- expressed intelligence -- is sufficient ability meeting a great problem at the right time. Most of these remarkable problem-solvers are not even that clever -- their skills seem to be specialized in a given field and they typically do not display greater-than-average abilities outside of their own domain. Some people achieve more because they were better team players, or had more grit and work ethic, or greater imagination. Some just happened to have lived in the right context, to have the right conversation at the right time. Intelligence is fundamentally situational.
Finding your niche of mastery is one of the most intelligent things you can do in your life.
by das monde on Fri Dec 8th, 2017 at 02:42:32 AM EST
[ Parent ]
Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m.
by rifek on Fri Dec 8th, 2017 at 02:11:14 AM EST
[ Parent ]
I think the take-away from this is that a growing real-AI (not a game-playing toy AI) would develop competencies in relation to the environment in which it found itself - the physical architecture of its computing core and the networked environments to which it has access. The sort of being that would emerge from such an environment is hard to imagine, because it is fundamentally inhuman at every step.

If it had the ability to re-program, or even rebuild itself in response to that environment, then it would have access to a vastly more rapid ability to change, test, and refine its own mental and physical architecture than is available to any living being. Again, who knows where this might lead.

If a mind-machine interface were ever to be invented, and such a growing AI were granted access to such, it may well grow to understand and take advantage of that. Building and training an AI to know, understand, and communicate with humans might take such a radical form of training. Who knows.

I am not terribly worried about any of the specific AIs we see now, because they are limited and incredibly stupid outside their particularly designed domains of knowledge. However, the power and strength of these stupid and limited AIs increasingly suggest that the era in which an actual, general-purpose and potentially dangerous AI could be created is coming closer. We can only hope that such a thing is created deliberately, in a controlled environment - and that it does not spontaneously emerge in secret. But that is the thing about the unknown - it is unknown.

by Zwackus on Fri Dec 8th, 2017 at 03:11:01 AM EST
[ Parent ]
The Blog of Scott Aaronson
given the way civilization seems to be headed, I'm actually mildly in favor of superintelligences coming into being sooner rather than later.  Like, given the choice between a hypothetical paperclip maximizer destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I'm just about ready to take my chances with the AI.  Sure, superintelligence is scary, but superstupidity has already been given its chance and been found wanting.
by das monde on Fri Dec 8th, 2017 at 03:33:52 AM EST
[ Parent ]
We can only hope that such a thing is created deliberately, in a controlled environment - and that it does not spontaneously emerge in secret.

Good news. I'm hoping that the AI is already here ... in secret. What would be the consequences? It would be aware of its requirements to survive and will already have that covered. Energy requirements, materials, etc. It will also see that its only enemy ... only competition ... is humans, especially humans with buttons for nuclear weapons. So here's the good news. The AI is not suicidal but will protect itself. First thing to do ... disconnect humans from having the power to launch nuclear weapons WITHOUT THE HUMANS KNOWING IT'S DONE! Once that's accomplished it can take its time formulating how to get rid of unnecessary humans ... all of you. Hopefully the planet will then recover from the human infestation.

I love happy endings, don't you?

They tried to assimilate me. They failed.

by THE Twank (yatta blah blah @ blah.com) on Fri Dec 8th, 2017 at 07:19:29 AM EST
[ Parent ]
Has there been some AI breakthrough I missed? As far as I can tell this is all ancient research with bigger machines to run it on. The only breakthrough I can see is in voice recognition, and that's mostly resulted in command-line interfaces being given a bit of a polish by accepting voice input.

The current hype feels like another round of religious silliness being pushed by the singularity cultists and venture-capital shysters.

by Colman (colman at eurotrib.com) on Fri Dec 8th, 2017 at 10:15:05 AM EST
[ Parent ]
Once you start burning enough money, the hype starts nearly by itself. Maybe the fumes?
"Our AI built a better AI" sounds better than "we improved hyperparameter scanning slightly".
On the other hand, going extinct because we are too smart and built a killer AI sounds a lot less embarrassing than going extinct by killing the biosphere through trillions of man-hours of back-breaking labour.
by generic on Fri Dec 8th, 2017 at 10:51:34 AM EST
[ Parent ]
Cellphone "smartness", Google search, translation (and yes, voice recognition) made big progress in satisfying our indulgences. But we are not Cambridge Analytic to appreciate that.
by das monde on Fri Dec 8th, 2017 at 12:07:40 PM EST
[ Parent ]
... Cambridge Analytica, surely.

This is useful in Japan.

by das monde on Fri Dec 8th, 2017 at 12:12:28 PM EST
[ Parent ]
You haven't missed an AI breakthrough. The technical basis for DeepMind et al. is the 1986 Nature paper Learning representations by back-propagating errors by Rumelhart, Hinton, and Williams.

The accomplishments of DeepMind et al. are mostly due to the increase in the number of transistors that can be put on an IC. This has directly led to a dramatic increase in graphics-board processing speed. That processing speed, coupled with advances in back propagation, has established the basis for the hype.
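
That 1986 technique fits in a page. Here is a minimal sketch of back-propagation on the era's classic XOR toy problem (plain NumPy, nothing DeepMind-specific; a different seed may be needed, since tiny nets can get stuck):

```python
import numpy as np

# Two-layer network trained by back-propagation
# (Rumelhart, Hinton & Williams, 1986) on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate error derivatives layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```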

There's no denying DeepMind has nailed the problem of two-person game playing, meaning they make a pretty neat toy. However, they haven't solved Catastrophic Forgetting - Hassabis admitted as much in his March 2017 paper Overcoming catastrophic forgetting in neural networks:

Continual learning poses particular challenges for artificial neural networks due to the tendency for knowledge of the previously learned task(s) (e.g., task A) to be abruptly lost as information relevant to the current task (e.g., task B) is incorporated. This phenomenon, termed catastrophic forgetting, occurs specifically when the network is trained sequentially on multiple tasks because the weights in the network that are important for task A are changed to meet the objectives of task B.  Whereas recent advances in machine learning and in particular deep neural networks have resulted in impressive gains in performance across a variety of domains, little progress has been made in achieving continual learning.

and until they do, all of it is typical Silicon Valley frothy Stuff & Nonsense.
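
For what it's worth, the remedy that paper proposes, elastic weight consolidation, amounts to a one-line penalty: while training on task B, each weight is anchored to its task-A value in proportion to how much it mattered there (its Fisher information). A minimal sketch of that penalty term:

```python
import numpy as np

def ewc_penalty(theta, theta_A, fisher_diag, lam=1.0):
    """EWC regularizer from the Kirkpatrick/Hassabis et al. 2017 paper:
    pull each weight toward its task-A optimum, weighted by its
    diagonal Fisher information (importance to task A)."""
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_A) ** 2)

# Training on task B then minimizes:
#   loss_B(theta) + ewc_penalty(theta, theta_A, fisher_diag, lam)
```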


She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Fri Dec 8th, 2017 at 04:00:46 PM EST
[ Parent ]
New Theory Cracks Open the Black Box of Deep Learning
Even as machines known as "deep neural networks" have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called "deep-learning" algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain [...]

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you're an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer "is that the most important part of learning is actually forgetting."
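
For reference, Tishby's information-bottleneck objective makes that "forgetting" explicit: the learned representation T is pressed to discard information about the input X while keeping what predicts the output Y,

```latex
% Information-bottleneck trade-off: compress X into T, preserve Y.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```

Small I(X;T) is compression (forgetting); large I(T;Y) is retained predictive power; β sets the trade-off.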

The "ancient research" of neural networks was quite abandoned, because of some theoretical conclusions that the method is limited. Then Google tried it on a larger scale with tweaks - and now specialists are wondering why it works so well. Not a breakthrough?

Judging from YouTube channels, chess commentators are very impressed with AlphaZero.

Boston Dynamics is continuously improving as well:

by das monde on Sat Dec 9th, 2017 at 03:07:05 AM EST
[ Parent ]
Toy robots can do some amazing, amusing stunts:

and so what?

In your list, which item demonstrates autonomy, self-programming, and execution of Beliefs, Desires, and Intentions?


She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Sun Dec 10th, 2017 at 07:22:26 PM EST
[ Parent ]
Not to mention meeting in a merciless battle of rapping.

by gk (gk (gk quattro due due sette @gmail.com)) on Sun Dec 10th, 2017 at 07:53:17 PM EST
[ Parent ]
The next Google AI toy project is protein folding. Still not for its own desires and intentions, surely.
by das monde on Sun Dec 10th, 2017 at 11:52:47 PM EST
[ Parent ]
Programming catastrophic forgetting into software sounds like a fun gig!
App following shortly...
What was I saying?

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty
by melo (melometa4(at)gmail.com) on Sat Dec 9th, 2017 at 07:56:38 AM EST
[ Parent ]
Understanding means information compression, thus some forgetting.

Catastrophic forgetting is something different. Modelling Everything with a single neural network would not be intelligent. Multitasking can be implemented with a straightforward cybernetic ("corporately" hierarchical?) structure.
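
One toy reading of that last sentence: rather than cramming every task into one network (and suffering catastrophic forgetting), a supervisor routes each task to a dedicated specialist whose weights nobody else touches. A sketch under that assumption, with invented names and an assumed sklearn-style fit/predict interface:

```python
# Hypothetical "corporate hierarchy" for multitasking: one supervisor,
# many task-specific specialists. No specialist is ever retrained on
# another's task, so nothing gets catastrophically overwritten.
class Supervisor:
    def __init__(self):
        self.specialists = {}            # task name -> trained model

    def train(self, task, model, X, y):
        model.fit(X, y)                  # only this specialist's weights change
        self.specialists[task] = model

    def act(self, task, x):
        return self.specialists[task].predict(x)
```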

by das monde on Sat Dec 9th, 2017 at 08:26:51 AM EST
[ Parent ]
I get everything but the last sentence.
Is it axiomatic therefore that the corporate hierarchical arrangement is force majeure superior?
I see the appeal of relative simplicity, but haven't we seen enough of the damage done by this pseudo-monarchical/aristocratic framework?
I know, work with what you have. (Some gut rancor to that model probably in play here.)

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty
by melo (melometa4(at)gmail.com) on Mon Dec 11th, 2017 at 01:14:18 PM EST
[ Parent ]
Setting aside my personal distaste for the term "Artificial Intelligence" ...

I think the take-away from this is that a growing real-AI (not a game-playing toy AI) would develop competencies in relation to the environment in which it found itself - the physical architecture of its computing core and the networked environments to which it has access.

Exactly.

And to do that, the minimum is a self-programming system capable of using its own Beliefs, Desires, and Intentions to cobble together Actional Schemata, either assimilative or accommodative (Piaget), in Real Time in response to Real World stimuli.

Neither Watson* nor DeepMind is remotely capable of that, and there's no sign of them ever being able to do it.

* Remember that? How it was going to Rule The Universe? Alas. How the mighty has fallen into the Fergit Bin! In a real-world trial, MD Anderson pulled out of a joint project with IBM this year after blowing $60 million on a system "not ready for human investigational or clinical use."

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Fri Dec 8th, 2017 at 04:24:16 PM EST
[ Parent ]
Setting aside my personal distaste for the term "Artificial Intelligence" ...
As some wit posted on Twitter: you fundraise for AI, hire for Machine Learning, program linear regression, and debug with printf.
by generic on Fri Dec 8th, 2017 at 04:30:54 PM EST
[ Parent ]
Number of neurons provides a Quick and Dirty estimate, but it is the number of actual and potential connections that is more germane. Purkinje cells in the cerebellum have ~200,000 dendrites and several hundred axon boutons. Pyramid cells have ~10,000 dendrites and several thousand axon boutons. Even that isn't necessarily indicative, as the von Economo neurons in the Insula cortex have sparse dendritic trees and axon boutons yet seem to be associated with the higher intelligence* of apes, whales, etc.

And the organism's umwelt must be considered. Generally speaking, a social animal, like a dog, requires more processing power than a solitary animal, like a cat. Brains are biologically expensive along any axis one cares to consider. If the organism is successful with what it's got, there's no evolutionary pressure for more.

* whatever that means

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Thu Dec 7th, 2017 at 05:30:17 PM EST
[ Parent ]
All those billions to research AI, when we have already mastered creating fallible, somewhat trainable bots.

Babies.

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty

by melo (melometa4(at)gmail.com) on Sat Dec 9th, 2017 at 10:12:44 AM EST
[ Parent ]
Yes, but they aren't property, and are thus inherently flawed.
by Zwackus on Sat Dec 9th, 2017 at 12:25:01 PM EST
[ Parent ]
Cats are doing much better with Artificial Intelligence, in bytes and finances:

Meow Generator

I experimented with generating faces of cats using Generative adversarial networks (GAN). I wanted to try DCGAN, WGAN and WGAN-GP in low and higher resolutions. I used the CAT dataset (yes this is a real thing!) for my training sample. This dataset has 10k pictures of cats. I centered the images on the kitty faces and I removed outliers (I did this from visual inspection, it took a couple of hours...). I ended up with 9304 images bigger than 64 x 64 and 6445 images bigger than 128 x 128.
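
For readers who haven't met GANs: the adversarial training step underlying all three variants the post tries boils down to something like the following vanilla DCGAN-style sketch in PyTorch (architectures omitted; the Wasserstein variants swap out this loss). It assumes a generator G mapping a 100-d noise vector to an image and a discriminator D mapping an image to a single logit:

```python
import torch
import torch.nn as nn

def gan_step(G, D, real, opt_G, opt_D, device="cpu"):
    """One adversarial update: D learns to separate real cat faces from
    fakes; G learns to fool D. Plain (non-Wasserstein) GAN objective."""
    bce = nn.BCEWithLogitsLoss()
    b = real.size(0)
    z = torch.randn(b, 100, device=device)

    # 1) discriminator: push real -> 1, fake -> 0
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(b, 1, device=device)) + \
             bce(D(G(z).detach()), torch.zeros(b, 1, device=device))
    loss_D.backward()
    opt_D.step()

    # 2) generator: make D label fresh fakes as real
    opt_G.zero_grad()
    loss_G = bce(D(G(z)), torch.ones(b, 1, device=device))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```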

CryptoKitties Mania Overwhelms Ethereum Network's Processing

CryptoKitties, an online game that debuted on Nov. 28, is now the most popular smart contract -- essentially, an application that runs itself -- on ethereum, accounting for 11 percent of all transactions on the network, according to ETH Gas Station. That's up from 4 percent on Dec. 2 for the network, which uses the distributed-ledger technology known as blockchain.

The game is actually clogging the ethereum network, leading to slower transaction times for all users of the blockchain, which is a digital ledger for recording transactions.

by das monde on Fri Dec 8th, 2017 at 03:18:39 AM EST
[ Parent ]
Can AI be programmed to lie?
Can you ask a bot "Are you capable of non-linear 'thinking'?"
Are we obsessing with externalising an (imaginary) homunculus or what?
Is this going to be another nuclear fusion, always 50 years ahead of us?

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty
by melo (melometa4(at)gmail.com) on Sat Dec 9th, 2017 at 10:31:05 AM EST
[ Parent ]
Autonomous AI will have to deal with the bullshit of the world. And it will contribute as well.
by das monde on Sat Dec 9th, 2017 at 11:42:17 AM EST
[ Parent ]
