Welcome to European Tribune. It's gone a bit quiet around here these days, but it's still going.
ThatBritGuy:
Or it may need alternative kinds of (quantum?) logic we're not using yet.

interesting diary... trying to design an artificial brain modelled on the human one may well be impossible, but we are able to simulate some brain functions, and that fact makes some think that if we extend our computer knowledge we could eventually progress to mimicking the whole shebang.

i fail to see the point, really. though IT is very cool, humans are still w-a-y ahead in terms of our ability to emote, intuit, imagine, and these functions are far harder to crack than number crunching, boolean search, image manipulation, trajectory calculus for space exploration, med tech and such, which are bloody handy.

i respect human curiosity enormously, and fully expect research to bust its arse continuing on this trail. ultimately, though we'll continue to learn a lot spinoff-wise from it, i think we'll eventually give it up, as we already know how to make humans :) with young minds and good ed we can fashion mentalities, for good or ill, but as for the full spectrum of human brain functions, i think your friend is right, ormondotvos.

computers will be able to do a lot, more than we can imagine right now, so i'll keep an open mind and follow the research with interest, but the goal is specious, imo.

plus it has some psych implications that make me wonder if much of the motivation is not an effort to escape who we are, rather than dive deeper into 'it'. savantism reveals to us how few people can fathom the deepest processing functions in their own bodyminds, where i think the real jewel we seek lies. computing can reflect, reiterate and express our humanity, but never supplant it or be its true source. we are becoming semi-adjunctive to the little buggers already, i know, but in the final analysis i think there are parts of us that are way too unique to ever clone. reality (probably with much help from IT) will show us that, no matter how evolved computing becomes, we will ever remain its cerebral gestators, rather than vice-versa.

we might be able to implant new prosthetic eyes, ears, maybe even calculators (!), and i definitely see computer-human interfacing continuing apace, but we are so much more than what a mere machine, no matter how magical, can be. it is showing us how we are whole systems embedded in larger whole systems, though, so i do love 'em!

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty

by melo (melometa4(at)gmail.com) on Fri Aug 26th, 2011 at 08:44:41 AM EST
Well, since we're wildly speculating here . . .

Maybe we can't simulate the human brain.  Maybe we can.  We don't know.  But if we could replicate our brain's ability to sort and process and give meaning to input and information, understand the structures of meta-information, and understand the nature of problems at least as well as humans, then I see no reason why such an AI would not quickly become far more than human.  The AI would have at its disposal the distinctly inhuman ability to precisely calculate mathematically at truly insane speeds, combined with an ability to absolutely remember everything, and an ability to copy itself infinitely and perfectly to new hardware, so as to run multiple simultaneous copies and truly multitask.  Further, the AI would have the ability to directly and absolutely understand its own makeup, and to change and adjust this as necessary, on both the software and hardware level.  Just as an example, the AI could not only maintain copies of everything ever written by anyone simultaneously in conscious memory, but then build itself a million different brains to simultaneously think through and understand these things at once, and then instantly and perfectly recombine those multiple understandings together and keep them on hand, with perfect recall.  It would be like being able to recall perfectly the exact mental state of every epiphany or moment of understanding you have ever had, all at once.  Not only that, but being able to simultaneously consider all of them, juggle them around, and compare them at leisure.

But all this is a big if.  We don't know how human cognition works, on a logical or practical level.  We don't know how the brain works.  We don't know if our models of reason and logic will ever scale to consciousness, or if something else entirely alien would be required.  It's all a huge mystery.

I am agnostic towards the possibility of creating a human-level intelligence artificially.  But were it created, it would certainly be far more than human, and far more vast and powerful than we can truly comprehend, simply because it would be able to combine what we do well with what computers do well at a natural level.

by Zwackus on Fri Aug 26th, 2011 at 10:16:08 AM EST
I am agnostic towards the possibility of creating a human-level intelligence artificially.  But were it created, it would certainly be far more than human, and far more vast and powerful than we can truly comprehend, simply because it would be able to combine what we do well with what computers do well at a natural level.

I once read a suggestion that, if a sentient computer were ever created, its low-level number-crunching power would be as far removed from the conscious layer as human consciousness is removed from neural activity and so, for instance, the intelligent computer would still have to "open a calculator app" in order to do mathematical operations consciously, and it wouldn't be much faster than a human using a computer.

Economics is politics by other means

by Migeru (migeru at eurotrib dot com) on Fri Aug 26th, 2011 at 10:21:36 AM EST
A hardware AI would have the advantage over a wetware AI that we know how to upgrade hardware.

Though it is of course possible that the technology required to build a hardware AI would also enable us to build Ghost in the Shell style cyberbrains to enhance our wetware processors.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Fri Aug 26th, 2011 at 02:01:18 PM EST
The last hardware problem was solved when terabyte disk drives became affordable.  Simply put, doing the wrong things at the wrong times even faster does not get you to doing the right things at the right times.

 

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Fri Aug 26th, 2011 at 09:52:14 PM EST
The fly in the ointment is the role that emotion plays in all of our mental processes. The eat-or-flee response is very close to the base of all animal intelligence, and emotional responses are the basis for judging almost all things. We have developed methods for suspending judgement, and we can attempt to account for emotion in our decisions, but it is very tricky. In order to create an AI that truly resembles that of a human, it may be necessary for the development process to functionally recapitulate the evolutionary sequence of human beings.

The problem this poses is amplified by the active disrespect so many show toward the role of emotions in our lives. Even the suggestion that a truly human AI would have to have the equivalent of human emotions would/will likely be received with disdain by many of those best able to conceive of the necessary programming. I would like to see special-purpose AI utilized much more extensively in known critical areas of human endeavor, such as medical diagnostics, which is so often a disaster when performed by humans.

"It is not necessary to have hope in order to persevere."

by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Sat Aug 27th, 2011 at 09:48:22 PM EST
It was after I read Damasio, Sapolsky, et al. that I came to realize just how much our emotions (limbic system, mesolimbic pathways, etc.) underlie our cognitive processing.  So much so that if our emotions are neurologically unable to function properly, we simultaneously lose our executive decision-making.

This realization made me understand that attempting to build a "truly" human intelligence isn't worth the effort. A "truly" human intelligence would be subject to developing the psychological, neurological, emotional, and cognitive dysfunctions humans express, and if it doesn't, it's not a "truly" human intelligence.

QED

:-)

Which supports TBG's contention, which I share, that we should drop the "AI" thing, as such, in order to work on building a "General Modeling Machine."

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

by ATinNM on Sun Aug 28th, 2011 at 02:42:44 PM EST
We apparently agree. My caveat wrt a General Modeling Machine is to keep it away from making executive decisions. We are sufficiently "inhuman" all by ourselves and have no need of "artificial inhumanity".

"It is not necessary to have hope in order to persevere."
by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Mon Aug 29th, 2011 at 12:38:44 PM EST
GMMs won't be allowed to make executive decisions. The executive decisions will be hardcoded into them by programmers who simply apply the state of the art in anthropology, psychology and economics, without understanding that these disciplines exist in large part to justify particular forms of executive decisions.

The Serious People will then pretend that the GMM is making the executive decisions, because this gives the decisions an air of inevitability and truthiness.

- Jake

Friends come and go. Enemies accumulate.

by JakeS (JangoSierra 'at' gmail 'dot' com) on Mon Aug 29th, 2011 at 03:17:30 PM EST
Then we will need the Butlerian Jihad.

"It is not necessary to have hope in order to persevere."
by ARGeezer (ARGeezer a in a circle eurotrib daught com) on Mon Aug 29th, 2011 at 05:16:12 PM EST
Migeru:
as far removed from the conscious layer as human consciousness is removed from neural activity

how far is that? in mm? or is 'far' metaphorical? who or what presupposes any distance between our consciousness and neural activity, can they not be coterminous, even fused?

i guess if one has morphine in the system, that affects the way consciousness perceives nerve signals, though i'm told that it doesn't remove the pain per se, it rather causes the pain not to be worth caring about... presumably by flooding the brain with enough pleasure chemicals that the pain signals come in a distant second.

wouldn't it vary between individuals, just like pain thresholds do?

i guess 'anhedonia', an inability to feel pleasure in life, could be seen as a metaphorical distance between consciousness and the neural circuits, though how well the signals travel and are received within those circuits might vary a lot between different folks, or even between different times! for example, when firewalkers walk barefoot over coals after psyching themselves up with group exercises, then go home, can they stick their fingers in even a candle flame and still feel no pain or burning, without all the hoo-rah of the group pumping them into an altered state? i never heard of that happening, although to my mind that would actually be more interesting than firewalking, though that is interesting enough.

what's happening between consciousness and neural circuits when a hypnotist has a subject believe he's being burned, and his skin blisters and he feels the heat? is that heat 'real'?

perhaps mystical experience is when consciousness briefly syncs perfectly, though fleetingly, with one's neural circuits...

this neuroscience is cutting edge stuff, and yet has been around since recorded time, and makes our fascination with computers seem a novelty.

ancient animists ascribed mind to matter, even a rock has a spirit/vibration, just a very slow moving one compared to flowing water or a flower blooming. perhaps in our search to duplicate and mechanise consciousness we are actually missing what's right under our own noses, namely this supposed grail is a bagatelle, and computers will never have common sense.* they're data bankers, not delphic oracles!

*whatever that may be agreed to be... we can make simulacra till the cows come home, but ultimately a world ruled by computer logic seems like it would more likely be dystopian than otherwise.

to a geologist, rocks have 'memories', as the code embedded in the structure is readable to their trained minds.

Data Storage Rock Ready to Roll - EnterpriseStorageForum.com

Millenniata has unveiled its new storage technology that lets users etch data on an optical disc made from a stone-like substance that never degrades, reports Small Business Computing.

melo:

New tech uses silicon glass for data storage

Recently we heard about the M-DISC, which can reportedly store data in a rock-like medium for up to 1,000 years. Now, scientists from the University of Southampton have announced the development of a new type of nanostructured glass technology. Not only might it have applications in fields such as microscopy, but it apparently also has the ability to optically store data forever.

mind into matter, not matter over mind!

going back to why we are so desirous of breathing life into a golem anyway... could it be that some are so spooked by the strong streaks of irrationality in the human psyche, and so tired of psycho tyrants bending others' wills, that they hope, by imbuing our better instincts into somewhere fixed, concrete and external (dryware?), we will finally, unarguably create that font of wisdom we can fully trust as objective, ex cyber-cathedra, to tell us when probability decrees our choices/actions would lead to perdition? infallibility incarnate, but with no carne, none of that messy human cell breakdown to worry about. we will supposedly glory in our role, bearing pure knowledge and infusing it into permanence.

'cept it won't be a font, it'd always be a reservoir, big difference...

uh huh.... isn't this about taking the long way round to get home where we always were, via a cul de sac to boot?

 we are real, computers are fiction. and yes in a good story the plot does run away with the characters occasionally, deus IN machina.

this is what happens when linear thinking runs amok, IOW, methinks, and it will go down in history as an endearing odd footnote, like man's quixotic quest for Cities of Gold in the jungle, or Springs that offer the Water of Life, a fantasy Elixir of Summum Bonum.

we want off this wheel of change, basically... (instead of trying to figure out/embrace how to make it roll better). the search for absolute AI has a thanatic, death-worshipping aspect or streak to it. as do all linear projections that are fear based, 'we're not enough, we're not whole, we need a HAL to guide us to find our own asses!'

the cracks in our consciousness are where the light shines in, in this ultimately pointless exercise we are trying to seal them closed. it reminds me of those billionaire bunkers where the guy has his bugout system in place, sealing himself and his money into an impregnable vault only openable from the inside.

bliss of safety! there's no refuge from change. we can always keep upgrading the computer till it learns to do it itself, heck till it even extrudes a robot to go mine the rare earths it needs to self-replicate, but there will always be something we are made with which will not totally compute, and i don't think most of us would want it any other way.

mental rubber doll porn...

'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty

by melo (melometa4(at)gmail.com) on Sat Aug 27th, 2011 at 02:25:40 AM EST
Migeru:
as far removed from the conscious layer as human consciousness is removed from neural activity
how far is that? in mm? or is 'far' metaphorical? who or what presupposes any distance between our consciousness and neural activity, can they not be coterminous, even fused?
Who said "presupposes"? And, yes, "far" is metaphorical, as in "the fundamental processes of human thought are inaccessible to consciousness":
philosophers have made certain fundamental assumptions--that we can know our own minds by introspection, that most of our thinking about the world is literal, and that reason is disembodied and universal--that are now called into question by well-established results of cognitive science. It has been shown empirically that:

Most thought is unconscious. We have no direct conscious access to the mechanisms of thought and language. Our ideas go by too quickly and at too deep a level for us to observe them in any simple way.

Abstract concepts are mostly metaphorical.


Economics is politics by other means
by Migeru (migeru at eurotrib dot com) on Sat Aug 27th, 2011 at 03:11:52 AM EST
