An AI doesn't have to be like a human. But AI is easier to do when you create it for a specific job.

It's not hard to build cybernetic AI that works on one specific problem, like landing a plane or optimising fuel consumption in a car. That would have seemed miraculous a century ago but it's a solved problem now.
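To make "one specific problem" concrete, here's a minimal Python sketch of the classic workhorse behind such controllers: a PID loop holding a setpoint. The gains and the toy "plant" it steers are invented for illustration, not tuned for any real aircraft or engine:

def make_pid(kp, ki, kd, setpoint, dt):
    """Return a PID step function holding `setpoint`; gains are illustrative."""
    state = {"integral": 0.0, "prev_error": None}

    def step(measurement):
        error = setpoint - measurement
        state["integral"] += error * dt
        derivative = 0.0
        if state["prev_error"] is not None:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        # Control output = proportional + integral + derivative terms.
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# Toy "plant": a value that drifts and responds linearly to the control input.
value, dt = 0.0, 0.1
controller = make_pid(kp=2.0, ki=0.5, kd=0.1, setpoint=10.0, dt=dt)
for _ in range(100):
    value += (controller(value) - 0.2) * dt   # -0.2 models a constant drift
print(round(value, 2))   # ends close to the setpoint, 10.0

Nothing here models the world in general; the "intelligence" is three arithmetic terms wrapped around one measured variable.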

It's much harder to do natural language processing, but (assuming no collapse) I expect that to be working in a couple of decades.

But what's really needed is a General Modelling Machine - something that can take any problem, build a working model for it more efficiently than a human programmer, and make useful and accurate predictions.

There's the P/NP issue which strongly implies that some classes of problem simply aren't tractable with the kinds of logic we use. So a GMM may not be possible at all.
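For a concrete sense of what intractability means here: Boolean satisfiability is the canonical NP-complete problem, and the obvious brute-force attack below checks up to 2^n assignments for n variables, so each added variable doubles the worst-case work. Whether fundamentally better algorithms exist is exactly the open P vs NP question. The clauses are a made-up toy instance:

from itertools import product

def brute_force_sat(n_vars, clauses):
    """clauses: lists of literals; 3 means x3 is true, -3 means x3 is false."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# Toy instance: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))
# 3 variables -> 8 candidate assignments. 300 variables -> 2**300,
# more tries than there are atoms in the observable universe.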

Or it may need alternative kinds of (quantum?) logic we're not using yet.

If the latter appeared, a GMM with a natural language front end could become a very interesting and useful thing, and would be close to many people's idea of a general purpose AI.

by ThatBritGuy (thatbritguy (at) googlemail.com) on Fri Aug 26th, 2011 at 07:34:30 AM EST
ThatBritGuy:
Or it may need alternative kinds of (quantum?) logic we're not using yet.

interesting diary... trying to design an artificial brain modelled on the human may well be impossible, but we are able to simulate some brain functions, and that fact makes some think that if we extend our computer knowledge we could eventually mimic the whole shebang.

i do fail to see the point, really. though IT is very cool, humans are still w-a-y ahead in terms of our ability to emote, intuit, imagine, and these functions are far harder to crack than number crunching, boolean search, image manipulation, trajectory calculus for space exploration, med tech and such, which are bloody handy.

i respect human curiosity enormously, and fully expect research to bust its arse continuing on this trail, but ultimately, though we'll continue to learn a lot spinoff-wise from it, i think we'll eventually give it up, as we already know how to make humans :) with young minds and good ed we can fashion mentalities, for good or ill, but as for the full spectrum of human brain functions, i think your friend is right, ormondotvos.

computers will be able to do a lot, more than we can imagine right now, so i'll keep an open mind and follow the research with interest, but the goal is specious, imo.

plus it has some psych implications that make me wonder if much of the motivation is not an effort to escape who we are, rather than dive deeper into 'it'.  savantism reveals to us how few people can fathom the deepest processing functions in their own bodyminds, where i think the real jewel we seek lies. computing can reflect, re-iterate and express our humanity, but never supplant it or be its true source.

we are becoming semi-adjunctive to the little buggers already, i know, but in the final analysis i think there are parts of us that are way too unique to ever clone. reality (probably with much help from IT) will show us that, no matter how evolved computing becomes, we will ever remain its cerebral gestators, rather than vice-versa.

we might be able to implant new prosthetic eyes, ears, maybe even calculators (!), and i definitely see computer-human interfacing continuing apace, but we are so much more than what a mere machine, no matter how magical, can be. it is showing us how we are whole systems embedded in larger whole systems, though, so i do love 'em!

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty

by melo (melometa4(at)gmail.com) on Fri Aug 26th, 2011 at 08:44:41 AM EST
[ Parent ]
Well, since we're wildly speculating here . . .

Maybe we can't simulate the human brain.  Maybe we can.  We don't know.  But if we could replicate our brain's ability to sort and process and give meaning to input and information, understand the structures of meta-information, and understand the nature of problems at least as well as humans, then I see no reason why such an AI would not quickly become far more than human.  The AI would have at its disposal the distinctly inhuman ability to precisely calculate mathematically at truly insane speeds, combined with an ability to absolutely remember everything, and an ability to copy itself infinitely and perfectly to new hardware, so as to run multiple simultaneous copies and truly multi-task.

Further, the AI would have the ability to directly and absolutely understand its own makeup, and to change and adjust this as necessary, on both the software and hardware level.  Just as an example, the AI could not only maintain copies of everything ever written by anyone ever simultaneously in conscious memory, but then build itself a million different brains to simultaneously think through and understand these things at once, and then instantly and perfectly recombine those multiple understandings together and keep them on hand, with perfect recall.  It would be like being able to recall perfectly the exact mental state of every epiphany or moment of understanding you have ever had, all at once.  Not only that, but be able to simultaneously consider all of them, juggle them around, and compare them at leisure.

But all this is a big if.  We don't know how human cognition works, on a logical or practical level.  We don't know how the brain works.  We don't know if our models of reason and logic will ever scale to consciousness, or if something else entirely alien would be required.  It's all a huge mystery.

I am agnostic towards the possibility of creating a human-level intelligence artificially.  But were it created, it would certainly be far more than human, and far more vast and powerful than we can truly comprehend, simply because it would be able to combine what we do well with what computers do well at a natural level.

by Zwackus on Fri Aug 26th, 2011 at 10:16:08 AM EST
[ Parent ]
I am agnostic towards the possibility of creating a human-level intelligence artificially.  But were it created, it would certainly be far more than human, and far more vast and powerful than we can truly comprehend, simply because it would be able to combine what we do well with what computers do well at a natural level.

I once read a suggestion that, if a sentient computer were ever created, its low-level number-crunching power would be as far removed from the conscious layer as human consciousness is removed from neural activity and so, for instance, the intelligent computer would still have to "open a calculator app" in order to do mathematical operations consciously, and it wouldn't be much faster than a human using a computer.

Economics is politics by other means

by Migeru (migeru at eurotrib dot com) on Fri Aug 26th, 2011 at 10:21:36 AM EST
[ Parent ]
A hardware AI would have the advantage over a wetware AI that we know how to upgrade hardware.

Though it is of course possible that the technology required to build a hardware AI would also enable us to build Ghost in the Shell style cyberbrains to enhance our wetware processors.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Fri Aug 26th, 2011 at 02:01:18 PM EST
[ Parent ]
The last hardware problem was solved when terabyte disk drives became affordable.  Simply put, doing the wrong things at the wrong times even faster does not get you to doing the right things at the right times.

 

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Fri Aug 26th, 2011 at 09:52:14 PM EST
[ Parent ]
The fly in the ointment is the role that emotion plays in all of our mental processes. The eat-or-flee response is very close to the base of all animal intelligence, and emotional responses are the basis for judging almost all things. We have developed methods for suspending judgement, and we can attempt to account for emotion in our decisions, but it is very tricky. In order to create an AI that truly resembles human intelligence, it may be necessary for the development process to functionally recapitulate the evolutionary sequence of human beings.

The problem this poses is amplified by the active disrespect so many give to the role of emotions in our lives. Even the suggestion that a truly human AI would have to have the equivalent of human emotions would/will likely be received with disdain by many of those best able to conceive of the necessary programming. I would like to see special purpose AI utilized much more extensively in known critical areas of human endeavor, such as medical diagnostics, which is so often a disaster when performed by humans.

"It is not necessary to have hope in order to persevere."

by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Sat Aug 27th, 2011 at 09:48:22 PM EST
[ Parent ]
It was after I read Damasio, Sapolsky, et al. that I came to realize just how much our emotions (limbic system, mesolimbic pathways, etc.) underlie our cognitive processing.  So much so that if our emotions are neurologically unable to function properly, we simultaneously lose our executive decision-making.

This realization made me understand that attempting to build a "truly" human intelligence isn't worth the effort. A "truly" human intelligence would be subject to developing the psychological, neurological, emotional, and cognitive dysfunctions humans express, and if it doesn't, it's not a "truly" human intelligence.

QED

:-)

Which supports TBG's contention, which I share, that we should drop the "AI" thing, as such, and work on building a "General Modeling Machine" instead.

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Sun Aug 28th, 2011 at 02:42:44 PM EST
[ Parent ]
We apparently agree. My caveat wrt a General Modeling Machine is to keep it away from making executive decisions. We are sufficiently "inhuman" all by ourselves and have no need of "artificial inhumanity".

"It is not necessary to have hope in order to persevere."
by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Mon Aug 29th, 2011 at 12:38:44 PM EST
[ Parent ]
GMMs won't be allowed to make executive decisions. The executive decisions will be hardcoded into them by programmers who simply apply the state of the art in anthropology, psychology and economics, without understanding that these disciplines exist in large part to justify particular forms of executive decisions.

The Serious People will then pretend that the GMM is making the executive decisions, because this gives the decisions an air of inevitability and truthiness.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Mon Aug 29th, 2011 at 03:17:30 PM EST
[ Parent ]
Then we will need the Butlerian Jihad.

"It is not necessary to have hope in order to persevere."
by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Mon Aug 29th, 2011 at 05:16:12 PM EST
[ Parent ]
Migeru:
as far removed from the conscious layer as human consciousness is removed from neural activity

how far is that? in mm? or is 'far' metaphorical? who or what presupposes any distance between our consciousness and neural activity? can they not be coterminous, even fused?

i guess if one has morphine in the system, that affects the way consciousness perceives nerve signals, though i'm told that it doesn't remove the pain per se, it rather causes the pain not to be worth caring about... presumably by flooding the brain with enough pleasure chemicals that the pain signals come in a distant second.

wouldn't it vary between individuals, just like pain thresholds do?

i guess 'anhedonia', an inability to feel pleasure in life, could be seen as a metaphorical distance between consciousness and the neural circuits, though how well the signals travel and are received within those circuits might vary a lot between different folks, or even between different times! for example, when firewalkers walk barefoot over coals after psyching themselves up with group exercises and then go home, can they stick their fingers in even a candle flame and still feel no pain or burning, without all the hoo-rah of the group pumping them into an altered state? i never heard of that happening, although to my mind that would actually be more interesting than firewalking, though that is interesting enough.

what's happening between consciousness and neural circuits when a hypnotist has a subject believe he's being burned, and his skin blisters and he feels the heat? is that heat 'real'?

perhaps mystical experience is when consciousness briefly syncs perfectly, though fleetingly, with one's neural circuits...

this neuroscience is cutting edge stuff, and yet has been around since recorded time, and makes our fascination with computers seem a novelty.

ancient animists ascribed mind to matter, even a rock has a spirit/vibration, just a very slow moving one compared to flowing water or a flower blooming. perhaps in our search to duplicate and mechanise consciousness we are actually missing what's right under our own noses, namely this supposed grail is a bagatelle, and computers will never have common sense.* they're data bankers, not delphic oracles!

*whatever that may be agreed to be... we can make simulacra till the cows come home, but ultimately a world ruled by computer logic seems like it would more likely be dystopian than otherwise.

to a geologist, rocks have 'memories', as the code embedded in the structure is readable to their trained minds.

Data Storage Rock Ready to Roll - EnterpriseStorageForum.com

Millenniata has unveiled its new storage technology that lets users etch data on an optical disc made from a stone-like substance that never degrades, reports Small Business Computing.

melo:

New tech uses silicon glass for data storage

Recently we heard about the M-DISC, which can reportedly store data in a rock-like medium for up to 1,000 years. Now, scientists from the University of Southampton have announced the development of a new type of nanostructured glass technology. Not only might it have applications in fields such as microscopy, but it apparently also has the ability to optically store data forever.

mind into matter, not matter over mind!

going back to why we are so desirous of breathing life into a golem anyway... could it be that some are so spooked by the strong streaks of irrationality in the human psyche, and so tired of psycho tyrants bending others' wills, that by imbuing our better instincts into somewhere fixed, concrete and external (dryware?) they hope to finally, unarguably create that font of wisdom we can fully trust as objective, ex cyber-cathedra, to tell us when probability decrees our choices/actions would lead to perdition? infallibility incarnate, but with no carne, none of that messy human cell breakdown to worry about; we will supposedly glory in our role, bearing pure knowledge and infusing it into permanence.

'cept it won't be a font, it'd always be a reservoir, big difference...

uh huh.... isn't this about taking the long way round to get home where we always were, via a cul de sac to boot?

 we are real, computers are fiction. and yes in a good story the plot does run away with the characters occasionally, deus IN machina.

this is what happens when linear thinking runs amok IOW, methinks, and it will go down in history as an endearingly odd footnote, like man's quixotic quest for Cities of Gold in the jungle, or Springs that offer the Water of Life, a fantasy Elixir of Summum Bonum.

we want off this wheel of change, basically... (instead of trying to figure out/embrace how to make it roll better). the search for absolute AI has a thanatic, death-worshipping aspect or streak to it. as do all linear projections that are fear based, 'we're not enough, we're not whole, we need a HAL to guide us to find our own asses!'

the cracks in our consciousness are where the light shines in, in this ultimately pointless exercise we are trying to seal them closed. it reminds me of those billionaire bunkers where the guy has his bugout system in place, sealing himself and his money into an impregnable vault only openable from the inside.

bliss of safety! there's no refuge from change. we can always keep upgrading the computer till it learns to do it itself, heck till it even extrudes a robot to go mine the earths it needs to self replicate, but there will always be something that we are made with which will not totally compute, and i don't think most of us would want it any other way.

mental rubber doll porn...

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty

by melo (melometa4(at)gmail.com) on Sat Aug 27th, 2011 at 02:25:40 AM EST
[ Parent ]
Migeru:
as far removed from the conscious layer as human consciousness is removed from neural activity
how far is that? in mm? or is 'far' metaphorical? who or what presupposes any distance between our consciousness and neural activity? can they not be coterminous, even fused?
Who said "presupposes"? And, yes, "far" is metaphorical, as in "the fundamental processes of human thought are inaccessible to consciousness":

philosophers have made certain fundamental assumptions--that we can know our own minds by introspection, that most of our thinking about the world is literal, and that reason is disembodied and universal--that are now called into question by well-established results of cognitive science. It has been shown empirically that:

Most thought is unconscious. We have no direct conscious access to the mechanisms of thought and language. Our ideas go by too quickly and at too deep a level for us to observe them in any simple way.

Abstract concepts are mostly metaphorical.


Economics is politics by other means
by Migeru (migeru at eurotrib dot com) on Sat Aug 27th, 2011 at 03:11:52 AM EST
[ Parent ]
It turns out that cracking the Natural Language Problem and building a General Modeling Machine are "the same problem" (to a large but not total degree), since human language is a General Modeling Machine.  Cracking the problems created by the 424 distinct definitions of the word "set" gets you a long way toward giving the computer the ability to accurately process reference-to-phenomenology - the key barrier to a GMM.

Long-winded exposition follows.

Jane bought an apple.  Jane bought a banana.  Jane has two fruit.

And one must ask: where the hell did "fruit" come from?

Well, once you put an apple and a banana in a bag (a collection, technically) the reference-to-phenomena is an Emergent: fruit.  One can, of course, list all the members of the collection, but nobody will tolerate:

Jane went to the apple, banana, pear, orange, mango, pineapple, lemon, lime, kiwi, and plum store and asked the apple, banana, pear, orange, mango, pineapple, lemon, lime, kiwi, and plum seller if he had any tomatoes for sale at his apple, banana, pear, orange, mango, pineapple, lemon, lime, kiwi, and plum stand because a tomato is a member of the apple, banana, pear, orange, mango, pineapple, lemon, lime, kiwi, and plum Set.

for long.  And we don't:

Jane went to the fruit stand and asked the fruit seller if he had tomatoes for sale at his fruit stand because a tomato is a fruit.
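A sketch of one way a machine could answer "where did 'fruit' come from": given a type hierarchy, the collective label is the nearest ancestor shared by every member of the collection. The toy taxonomy below is entirely made up for illustration:

# Toy taxonomy, invented for illustration.
PARENT = {
    "apple": "fruit", "banana": "fruit", "tomato": "fruit",
    "fruit": "food", "bread": "food", "food": "thing",
}

def ancestors(term):
    chain = [term]
    while term in PARENT:
        term = PARENT[term]
        chain.append(term)
    return chain

def collective_label(terms):
    """Return the nearest ancestor shared by every term in the collection."""
    common = set(ancestors(terms[0]))
    for t in terms[1:]:
        common &= set(ancestors(t))
    for a in ancestors(terms[0]):   # nearest-first walk up from any member
        if a in common:
            return a
    return None

print(collective_label(["apple", "banana"]))   # fruit
print(collective_label(["apple", "bread"]))    # food

That only works while membership is all-or-nothing, though, and that is exactly where the trouble starts.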

Excluded Middle Set Theory cannot deal with this very easily.  In fact there's a (highly paid, incredibly tedious) specialty within IT beavering away to solve the problems encountered by using Excluded Middle Set Theory (aka the Relational Database Model) in an Inclusive Middle World.  Because ...

The apple in "the apple of my eye" is not the same "apple" that, supposedly, bonked Newton on the noggin and the "fruit of one's loins" is not a kumquat.  
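To put the Excluded/Inclusive Middle contrast in code: fuzzy set theory grades membership on a 0-to-1 scale instead of forcing a yes/no, so a tomato can be fully a fruit to the botanist and barely one to the grocer. All the grades below are invented for illustration:

# Excluded Middle: a thing is in the set or it isn't (membership 0 or 1).
# Inclusive Middle (fuzzy sets): membership is a grade in [0, 1].
# All grades below are invented for illustration.
FRUIT_MEMBERSHIP = {
    "apple":   {"botanical": 1.0, "culinary": 1.0},
    "tomato":  {"botanical": 1.0, "culinary": 0.2},
    "rhubarb": {"botanical": 0.0, "culinary": 0.7},
}

def fruit_grade(word, context):
    return FRUIT_MEMBERSHIP.get(word, {}).get(context, 0.0)

print(fruit_grade("tomato", "botanical"))   # 1.0
print(fruit_grade("tomato", "culinary"))    # 0.2 -- neither 0 nor 1 fits

A relational column declared is_fruit BOOLEAN forces the excluded-middle answer on that tomato, which is roughly the problem that highly paid specialty is beavering away at.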

The practical result for an NLP or GMM is that, using standard & approved CompSci procedures, the Robert Frost couplet:

but I have promises to keep,
and miles to go before I sleep

has around 4.3 trillion possible combinatorial meanings, and it would take over 6,000 years to run through them all to find the one you want.  Further, you then throw all that processing away because it tells you nothing about how to process:

But I promise sleep before the miles to keep

(which is bad poetry ... and gets the point across) because the approved CompSci procedures (first step) parse the utterance, throwing all the "meaningfulness" away, (second step) construct a representation of the "meaning", and then (third step) go laboriously stumbling around trying to find the "meaning."
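As a back-of-envelope sketch of where figures like 4.3 trillion come from: the candidate readings multiply per word. The per-word sense count and the evaluation rate below are invented placeholders; real counts would come from a lexicon (recall the 424 definitions of "set"):

from math import prod

couplet = "but I have promises to keep and miles to go before I sleep"
words = couplet.split()

# Placeholder: assume ~9 senses per word; real counts come from a lexicon.
readings = prod(9 for _ in words)       # 9**13, about 2.5 trillion
rate = 25                               # assumed readings checked per second
years = readings / rate / (3600 * 24 * 365)

print(f"{readings:.1e} candidate readings, ~{years:,.0f} years to try them all")
# With these toy numbers: ~2.5e12 readings and a few thousand years,
# the same ballpark as the figures above.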

The project I've been working on for donkey's years started out assuming (silly fools we) that "but I have promises to keep" is a pretty goddamn good representation of the meaning of "but I have promises to keep", and that if you keep the "meaningfulness" in the first step you don't have to go looking for it in the third step.

From such stunning insights doth leaps in Human Civilization depend.

   

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Fri Aug 26th, 2011 at 09:34:59 PM EST
[ Parent ]
ThatBritGuy:
Or it may need alternative kinds of (quantum?) logic we're not using yet.

Far as I can tell, don't pin any hopes on quantum logic. Quantum logic is simply the set of operations possible if we want to use quantum bits. It won't say anything new unless a human interprets it differently or gets new ideas by being forced to think through stuff in a slightly different way.

Sweden's finest (and perhaps only) collaborative, leftist e-newspaper Synapze.se

by A swedish kind of death on Sat Aug 27th, 2011 at 05:16:59 AM EST
[ Parent ]
Quantum computing is basically a trick for performing exponential-dimensional operations in polynomial time (since superpositions have exponentially increasing dimension in the number of involved bits). That will be awesome when we're done inventing it, and it may even be useful in developing sentient AIs. But it is unlikely to be strictly necessary, because neurons are not quantum-scale, and they seem to meet the hardware specs for sentience.
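The "exponentially increasing dimension" is easy to see in the state-vector picture: N qubits are described by 2^N complex amplitudes, so a classical simulator doubles its memory with every added qubit. A minimal numpy sketch:

import numpy as np

def zero_state(n_qubits):
    """State vector |00...0> for n qubits: 2**n complex amplitudes."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0
    return state

print(zero_state(2))   # 4 amplitudes: [1.+0.j 0.+0.j 0.+0.j 0.+0.j]

for n in (10, 20, 30):
    print(f"{n} qubits -> {2**n:,} amplitudes, "
          f"~{2**n * 16 / 1e9:.2f} GB as complex128")
# 30 qubits already needs ~17 GB; ~50 is beyond any classical machine.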

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Sat Aug 27th, 2011 at 10:23:50 AM EST
[ Parent ]
The supposition is that if you can hold coherence you can basically try every solution to a problem at once.

Kind of.

So that's a bit of an improvement on where we are today.

As for neurons and consciousness, Penrose famously thinks consciousness is a quantum process. I'm not sure we have a good enough model of how neurons actually work to say if he's right. (Probably not, but it's too early to tell.)

There's a bigger problem with speeding up human AI, which is that you can't build a literal human AI and expect to run it at vastly amplified speeds without it developing mental problems.

If your consciousness suddenly speeded up by a couple of orders of magnitude everything around you would appear to happen very slowly, and you'd effectively be in solitary confinement for most of the day. Even if you were hooked up to the Internet so you could mainline Google, you'd still have problems getting enough stimulation.

Add near-perfect recall, and noise would recirculate to the point where the system would become unstable almost instantly, in real time.

It turns out that human dreaming is an essential sorting and filing mechanism, so you'd have to build in an equivalent form of garbage collection.

Similarly, brains are highly structured and not just a big wet bowl of neurons. The thinky parts probably won't work well without the other parts, and no one has a particularly good picture of how all of it hangs together.

And I'm fairly sure that natural language is a separate module, and not the same thing as a general modelling machine. What looks like a really hard problem to a human - formally defining how language is used to communicate associatively - probably won't be a really hard problem to a machine that is almost infinitely parallelised, with almost infinite memory.

It may have to rely on experiential axioms and a large library of metaphors to simulate comprehension. But that's not inherently a difficult problem with an almost infinitely parallelised quantum architecture.

by ThatBritGuy (thatbritguy (at) googlemail.com) on Sat Aug 27th, 2011 at 11:53:00 AM EST
[ Parent ]
The supposition is that if you can hold coherence you can basically try every solution to a problem at once.

Ish.

The point is that if you can hold entanglement long enough, then you can do simultaneous operations in a 2^N-dimensional state space with N qubits (if I remember my <bra|ket> algebra right - it's been a while).

And since that dimension scales exponentially, you have just reduced whole classes of problems from taking an exponential number of bits to only taking a polynomial number of qubits.

Which is awesome, but only tangentially related to sentience.

As for neurons and consciousness, Penrose famously thinks consciousness is a quantum process. I'm not sure we have a good enough model of how neurons actually work to say if he's right. (Probably not, but it's too early to tell.)

It is not categorically impossible, but the scale argues against it. Biologists do not routinely use quantum mechanics to describe inter-cellular interactions (or even, AFAIK, most intra-cellular interactions).

Which, of course, is not to say that quantum computing won't be useful for building AIs - natural human locomotion does not use steel or aluminium, but that does not prevent them from being useful in building trains.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Sat Aug 27th, 2011 at 02:31:08 PM EST
[ Parent ]
The point is more that from a parallel processing point of view, it's not just about solving equations more quickly. You end up with an architecture that's inherently optimised for associativity, and not for Turing-like linear computation.

E.g. when using Turing machines for video processing, you have to calculate each bit in the frame sequentially. That doesn't make it impossible to do associative recognition and processing, but it's inherently different - theoretically and practically - to working with entire frames, and using an associative memory that can retrieve relevant pattern information in a single operation.

You can fake associative processing sequentially, but certain kinds of processing remain impractical. With associative processing, they may not be.

So it becomes a game changer. Potentially you don't just do things more quickly, you can do entirely new things.
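As a classical stand-in for "retrieve relevant pattern information in a single operation": a Hopfield-style associative memory stores patterns in a weight matrix and completes a corrupted cue back to the nearest stored pattern, with no item-by-item search. The patterns below are toy random vectors; this sketches associativity, not the quantum version:

import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # 3 stored toy patterns

# Hebbian storage: the weight matrix accumulates outer products.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[:8] *= -1                        # corrupt 8 of the 64 bits

state = cue
for _ in range(5):                   # a few update sweeps usually suffice
    state = np.sign(W @ state)

print(np.array_equal(state, patterns[0]))   # expected True: cue completed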

by ThatBritGuy (thatbritguy (at) googlemail.com) on Sat Aug 27th, 2011 at 07:00:20 PM EST
[ Parent ]
Biologists do not routinely use quantum mechanics to describe inter-cellular interactions

Biologists may not, but it appears that evolution does. According to an article by a Caltech biologist, in order to achieve the efficiencies observed in photosynthesis, the leaf cell has to be using a quantum computational method to find the optimal or near-optimal path through the cell for the energy of the photon utilized. I had a link to the article on the computer that was recently killed by lightning.

"It is not necessary to have hope in order to persevere."
by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Sat Aug 27th, 2011 at 10:06:07 PM EST
[ Parent ]
Discovering that a natural process may be described and analyzed with QM - or any other intellectual tool, for that matter - is NOT the same as proving the natural process uses QM - or the intellectual tool.

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre
by ATinNM on Sun Aug 28th, 2011 at 02:54:23 PM EST
[ Parent ]
Of course not! I was not suggesting that leaves are conscious. But it is possible that evolution has come upon a process that WE require quantum computing to explain.

"It is not necessary to have hope in order to persevere."
by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Sun Aug 28th, 2011 at 03:47:51 PM EST
[ Parent ]
Hopefully, you were also not suggesting that leaves were "unconscious."

"Life shrinks or expands in proportion to one's courage." - Ana´s Nin
by Crazy Horse on Sun Aug 28th, 2011 at 03:59:34 PM EST
[ Parent ]
ever read 'secret life of plants'?

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty
by melo (melometa4(at)gmail.com) on Wed Aug 31st, 2011 at 05:00:36 AM EST
[ Parent ]
ThatBritGuy:
Similarly, brains are highly structured and not just a big wet bowl of neurons.

or a hurricane howl of hormones...

ThatBritGuy:

What looks like a really hard problem to a human - formally defining how language is used to communicate associatively - probably won't be a really hard problem to a machine that is almost infinitely parallelised, with almost infinite memory.

children easily absorb multiple languages if exposed young enough, yet where is the parallel will/motivation to learn in a computer? pull its plug and... nada.

hypothetically, if one invented perfect non-degradable computer parts, and a constant source of renewable energy made from non-entropic components, you'd have a tool that could operate independently of its creator, but why would a computer want to work? there's no reward for it to gobble/mash/store bits. it's inanimate. whatever it does is imitative, so it can only be pseudo-original in recombinant ways.

i admit the seduction; if cameras can see more than our eyes can, extrapolating from this is entertaining in a sci-fi way, but we seem to be trying to humanise computers, and we are far too robotic as humans already!

time to redefine 'robotic'. lol, maybe 'human' as well...

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty

by melo (melometa4(at)gmail.com) on Sun Aug 28th, 2011 at 05:56:08 AM EST
[ Parent ]
You can build in curiosity. It's not a motivation in the hormonal or DNA-based sense, but it would be just as compelling, as long as it wasn't removed, or self-edited.

Software so far is fundamentally different to biology, because it's easy to build in the behaviours you want. Once you build them in, they stay there.

E.g. Roomba vacuum cleaners are pre-motivated to seek a power source when they're running out. Segways are pre-motivated not to tip over if they possibly can. Etc.
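At bottom, that kind of pre-motivation is just a prioritized rule the machine cannot drop. A toy Python sketch, with hypothetical thresholds and behaviours:

def choose_behaviour(battery_pct, dirt_detected):
    if battery_pct < 15:        # built-in drive: power first, always
        return "seek_dock"
    if dirt_detected:           # then the job it exists to do
        return "spot_clean"
    return "explore"            # "curiosity" as the default drive

for reading in [(80, True), (40, False), (10, True)]:
    print(reading, "->", choose_behaviour(*reading))
# (80, True) -> spot_clean
# (40, False) -> explore
# (10, True) -> seek_dock: dirt is ignored once power is critical

Built-in curiosity would just be that default branch: whenever nothing more urgent fires, explore.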

Since we don't have a working model of a full AI, no one knows whether it would work the same way, or whether it would self-edit for clarity and straightforwardness, or whether it would melt down if given contradictory imperatives.

But it's unlikely basic motivation would be an issue. And curiosity could easily be made a basic motivation.

by ThatBritGuy (thatbritguy (at) googlemail.com) on Sun Aug 28th, 2011 at 09:47:20 AM EST
[ Parent ]
ThatBritGuy:
You can build in curiosity. It's not a motivation in the hormonal or DNA-based sense, but it would be just as compelling, as long as it wasn't removed, or self-edited.

'removed' makes it sound like it's baked in, because if we had had to install it in the first place, we'd just omit that step, right?

how in heaven is it baked in? motivation for a vacuum cleaner to search out a power supply is triggered by a signal informing it that its power is running out; humans chose to equip it that way.

collating trivia, white swan predictions from stat crunching, yes, they can out-do all but savants in that dept.

i think any original metaphor will stop it in its tracks... snark will reduce it to a meltdown. computers make linear processing look like more than it is, but that's the coding genius of the programmer, methinks.

self-editing, there's a big rub. how will it gauge how self-edited to be, by 'reading' the comprehension skillz of the human to whom it's 'communicating'? avoiding 3 syllable words if the listener is 2 ft tall?

language is the least of it...

this discussion is following me around during the day doing chores, first one on ET like that for a while.

what computers will continue to do, imo, is redefine our humanity by showing us what they can't do. take away the bells and whistles, and what's left?

thanks for trying to explain some pretty hairy science.

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty

by melo (melometa4(at)gmail.com) on Sun Aug 28th, 2011 at 11:06:11 AM EST
[ Parent ]
"this discussion is following me around during the day doing chores, first one on ET like that for a while.

what computers will continue to do, imo, is redefine our humanity by showing us what they can't do. take away the bells and whistles, and what's left?

thanks for trying to explain some pretty hairy science."

Me too. Nice change.

Capitalism searches out the darkest corners of human potential, and mainlines them.

by geezer in Paris (risico at wanadoo(flypoop)fr) on Mon Aug 29th, 2011 at 03:15:04 AM EST
[ Parent ]
