Guile3D Studio Forum

AI and the Brain Body Question
12 March 2011, 1:37
rootaq (TN) | Platinum Member | Forum Posts: 101 | Member Since: 23 November 2010 | Offline

Has anyone tried rebuilding an AI system based on the human body and brain? I
was just poking around on the net, but nothing really stuck out to me.

 

The idea I am trying to describe is much like the modules Denise will be
using, except with hardware as well.

 

The brain and its separate areas would control the software, and the hardware
would be the body:

 

Eyes = video, cameras, and motion, using a combination of hardware and software
like monitors, home and office CCTV systems, and webcams for part of the chat
and motion detection.

 

Mouth = voice, Skype, calls, email – microphones, chat and calling software,
written communication.

 

Nose = possible home automation with smoke detectors, chemical identification,
weather changes – labs using the software/hardware to identify scents
(possible CSI uses), weather station monitoring.

 

You can then upgrade the software and hardware as it becomes available or
needed. Of course a hard drive will symbolize the brain, and within that, the
ability to develop awareness of the surroundings and the logic of the computing
system itself.
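
A rough code sketch of that "body parts as modules" idea follows, purely as an illustration (the class names and the central `Brain` registry are hypothetical; this is not how Denise is actually built):

```python
# Hypothetical sketch of the "body parts as modules" idea described above.
# None of these classes exist in Denise; they only show how sensing hardware
# could be wrapped as swappable software modules behind one central "brain".

class SenseModule:
    """Base class: each body part wraps some hardware and reports observations."""
    name = "generic"

    def poll(self):
        """Return a list of observations read from the underlying hardware."""
        return []

class EyesModule(SenseModule):
    name = "eyes"
    def poll(self):
        # e.g. read frames from webcams / CCTV and run motion detection
        return ["motion detected in hallway"]

class NoseModule(SenseModule):
    name = "nose"
    def poll(self):
        # e.g. read smoke detectors, chemical sensors, or a weather station
        return ["smoke level normal"]

class Brain:
    """Central controller: owns the modules and reacts to what they report."""
    def __init__(self, modules):
        self.modules = {m.name: m for m in modules}

    def tick(self):
        for module in self.modules.values():
            for observation in module.poll():
                print(f"[{module.name}] {observation}")

if __name__ == "__main__":
    Brain([EyesModule(), NoseModule()]).tick()
```

Upgrading a sense would then just mean swapping one module for a better one, which is roughly what the post above is proposing.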

 

Sorry if none of this makes sense. I have had little sleep this past week and a
half. I had the flu and some infections, and they have me on meds including
antibiotics, cough syrups, and steroids. So as tired as I am, I cannot sleep,
so I ramble on…

~~~Faith Hope Love Honor Kinship~~~ Microsoft Windows 7 Home Premium, AMD Phenom(tm) II X6 1055T Processor, Memory: 16,383.18 MB, ATI Radeon HD 5670, ASUSTeK Computer INC. M4A88T-V EVO/USB3 C: Crucial 128 GB m4 2.5-Inch Solid State Drive SATA 6Gb/s D: WL1000GSA872 ATA Device 1 TB Denise resides on : E: Crucial 64 GB m4 2.5-Inch Solid State Drive SATA 6Gb/s F: WDC WD2002FAEX-007BA0 ATA Device 2 TB G: Hitachi HDS723020BLA642 USB Device 3 TB HL-DT-ST BD-RE WH10LS30 ATA Device Blu-ray Writer FV TouchCam™ N1, UPEK TouchChip Fingerprint Coprocessor, Blue Snowball Mic, Wireless Mouse & Keyboard, HP Deskjet F2400, Wacom Intuos Tablet 4 Medium, X10 USB Wireless, NaturalReader10, Dragon NaturallySpeaking 11, IVONA ControlCenter
12 March 2011, 11:39
captquazar (USA East coast) | Moderator | Forum Posts: 2085 | Member Since: 24 November 2010 | Offline

There are huge amounts of research going on in different disciplines: engineers, programmers, doctors (e.g. of the brain), and even cognitive psychologists. But a big problem is that they do not talk to people outside of their speciality. Each group of scientists looks at a small piece of the problem. For example, visual processing: what is important in the picture vs. what is noise / useless data. How to walk and keep balance: see 'BigDog'. How to make the best mechanical hand: see 'Luke's hand'. How to make the best 'sociable' robot interface: see 'Geminoid'. A few have tried to make a humanoid robot: see 'HRP-4C'. For the past 50 years, it has been perfectly reasonable to break problems down into small pieces and work toward solutions to modest goals; if for no other reason, adequate computer power was just not available.

Now, however, the situation is changing. We know now how to build many of the needed pieces and parts, e.g. artificial skin with sensors, speech-to-text (STT) and text-to-speech (TTS), even singing (see 'Vocaloid'). Computer power, while still minimal, is good enough to start building a 'system of systems', e.g. quad-core processors that allow some fast multitasking. Guile3D is the very first to try to provide a commercial AI that is a system of systems, with the PC being the 'robot's' body. What is holding back a lot of systems integration is that most of these pieces and parts are exclusively held by different companies who are not sharing or cooperating; and again, different scientists in different fields of research don't even know about each other and what could possibly be joined together.

Now AI forums like Kurzweil AI net can play a vital role in bringing together different people who each have a cool 'part' to contribute to a systems solution. As Guile just mentioned yesterday, it took two years of negotiations with Nuance, makers of 'Dragon NaturallySpeaking', to get permission to integrate Dragon (STT) into Denise. Very soon Denise will be able to listen to everything you say. The next step is to build on the AI's understanding of context so Denise can act on what you say. Laugh

18 August 2011, 17:53
captquazar (USA East coast) | Moderator | Forum Posts: 2085 | Member Since: 24 November 2010 | Offline

URL: http://futureoftech.msnbc.msn.com/_news/2011/08/18/7408116-ibm-unveils-brain-like-chip?GT1=43001

Below is a better, more detailed article than the one I posted yesterday.

IBM unveils cognitive computing chips, combining digital ‘neurons’ and ‘synapses’

August 18, 2011 by Editor

Cognitive computing chip (credit: IBM Research)

IBM researchers unveiled today a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition.

In a sharp departure from traditional von Neumann computing concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry.

The technology could yield many orders of magnitude less power consumption and space than used in today’s computers, the researchers say. Its first two prototype chips have already been fabricated and are currently undergoing testing.

Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember — and learn from — the outcomes, mimicking the brain’s structural and synaptic plasticity.

“This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century,” said Dharmendra Modha, project leader for IBM Research.

“Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.”

Neurosynaptic chips

IBM is combining principles from nanoscience, neuroscience, and supercomputing as part of a multi-year cognitive computing initiative. IBM’s long-term goal is to build a chip system with ten billion neurons and a hundred trillion synapses, while consuming merely one kilowatt of power and occupying less than two liters of volume.

While they contain no biological elements, IBM’s first cognitive computing prototype chips use digital silicon circuits inspired by neurobiology to make up a “neurosynaptic core” with integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons).

IBM has two working prototype designs. Both cores were fabricated in 45nm SOI-CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification.
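
Those synapse counts line up with simple crossbar dimensions. The 1024-axon figure comes from the core description quoted later in this thread; treating the learning core as a square 256 × 256 array is my own assumption:

```python
# Rough check of the synapse counts quoted above against crossbar dimensions.
# 1024 axons x 256 neurons comes from the core footnote later in this thread;
# the 256 x 256 learning array is an assumption on my part.
neurons = 256
axons = 1024
print(axons * neurons)    # 262144 programmable synapses
print(neurons * neurons)  # 65536 learning synapses
```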

IBM’s overarching cognitive computing architecture is an on-chip network of lightweight cores, creating a single integrated system of hardware and software. It represents a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain’s event-driven, distributed and parallel processing.

Visualization of the long distance network of a monkey brain (credit: IBM Research)

SyNAPSE

The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment — all while rivaling the brain’s compact size and low power usage.

For Phase 2 of SyNAPSE, IBM has assembled a world-class multi-dimensional team of researchers and collaborators to achieve these ambitious goals. The team includes Columbia University; Cornell University; University of California, Merced; and University of Wisconsin, Madison.

Why Cognitive Computing

Future chips will be able to ingest information from complex, real-world environments through multiple sensory modes and act through multiple motor modes in a coordinated, context-dependent manner.

For example, a cognitive computing system monitoring the world’s water supply could contain a network of sensors and actuators that constantly record and report metrics such as temperature, pressure, wave height, acoustics and ocean tide, and issue tsunami warnings based on its decision making.

Similarly, a grocer stocking shelves could use an instrumented glove that monitors sights, smells, texture and temperature to flag bad or contaminated produce. Making sense of real-time input flowing at an ever-dizzying rate would be a Herculean task for today’s computers, but would be natural for a brain-inspired system.

“Imagine traffic lights that can integrate sights, sounds and smells and flag unsafe intersections before disaster happens or imagine cognitive co-processors that turn servers, laptops, tablets, and phones into machines that can interact better with their environments,” said Dr. Modha.

IBM has a rich history in the area of artificial intelligence research going all the way back to 1956 when IBM performed the world’s first large-scale (512 neuron) cortical simulation. Most recently, IBM Research scientists created Watson, an analytical computing system that specializes in understanding natural human language and provides specific answers to complex questions at rapid speeds.

More information

Also see:

IBM scientists create most comprehensive map of the brain’s network

Dharmendra S. Modha et al., “Cognitive Computing,” Communications of the ACM, Vol. 54, No. 8, pp. 62–71, doi:10.1145/1978542.1978559, August 2011 (open access)


4 May 2012, 12:29
captquazar (USA East coast) | Moderator | Forum Posts: 2085 | Member Since: 24 November 2010 | Offline

Person or computer: could you pass the Turing Test?

May 3, 2012   URL: http://www.kurzweilai.net/person-or-computer-could-you-pass-the-turing-test?

Alan Turing (credit: National Portrait Gallery)

In a 1950 article, Alan Turing defined what is now known as the Turing Test.

In it, he proposed a test in which a human “converses” with two entities — one human and one computer program — over a text-only channel (such as a computer keyboard and display), and then attempts to determine which is the human and which is the computer.

If after, say, five minutes of testing, the majority of human interrogators are unable to determine which is which, Turing said that we could claim the computer system has achieved a certain level of intelligence.
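
The protocol Turing describes is simple enough to sketch in code. The harness below is only a toy (function names like `run_turing_trial` are invented, and real Loebner-style contests add timing, transcripts, and ranking rules):

```python
import random

def run_turing_trial(human_reply, machine_reply, judge, questions):
    """One trial of the imitation game over a text-only channel.

    human_reply / machine_reply: functions mapping a question to an answer string.
    judge: function that sees two anonymous transcripts and returns the index
           (0 or 1) of the one it believes is the machine.
    Returns True if the judge identified the machine correctly.
    """
    entities = [human_reply, machine_reply]   # index 1 is the machine
    order = [0, 1]
    random.shuffle(order)                     # hide which side is which
    transcripts = [[(q, entities[i](q)) for q in questions] for i in order]
    guess = judge(transcripts)
    return order[guess] == 1

def machine_passes(trials, correct_guesses):
    """Turing's criterion, loosely: the machine 'passes' if a majority of
    judges cannot pick it out."""
    return correct_guesses <= trials / 2

if __name__ == "__main__":
    questions = ["What is 2 + 2?", "What did you dream about last night?"]
    human = lambda q: "four" if "2 + 2" in q else "I rarely remember my dreams."
    bot = lambda q: "4" if "2 + 2" in q else "I do not dream."
    naive_judge = lambda transcripts: 0       # always suspects the first transcript
    correct = sum(run_turing_trial(human, bot, naive_judge, questions) for _ in range(10))
    print("machine passes:", machine_passes(10, correct))
```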

Two recent advances have dramatically enhanced interest: the ready availability of many terabytes of data, from technical documents on every conceivable topic to the growing personal databases of “lifeloggers”; and sophisticated statistical (computational and mathematical) techniques for organizing and classifying this data.

So far no computer system has passed the Turing test, according to the strict rules of the Loebner Prize competition, but they are getting close. The 2010 and 2011 competitions were won by a chat-bot computer system known as “CHAT-L,” by artificial-intelligence programmer Bruce Wilcox. In 2010 this program actually fooled one of the four human judges into thinking it was human.

All this raises the question of whether a computer system that finally passes the Turing test is really “conscious” or “human” in any sense.

These issues were summarized by the University of Bourgogne’s Robert M. French in a recent Science article: “All of this brings us squarely back to the question first posed by Turing at the dawn of the computer age, one that has generated a flood of philosophical and scientific commentary ever since.

“No-one would argue that computer-simulated chess playing, regardless of how it is achieved, is not chess playing. Is there something fundamentally different about computer-simulated intelligence?”

French is among the more pessimistic observers. Others, such as the American futurist Ray Kurzweil are much more expansive.

He predicts that in roughly the year 2045, machine intelligence will match then transcend human intelligence, resulting in a dizzying advance of technology that we can only dimly foresee at the present time — a vision outlined in his book The Singularity Is Near.

Only time will tell when Turing’s vision will be achieved. But civilization will never be the same once it is.

 

[ The Conversation ]

12 October 2012, 8:33
captquazar (USA East coast) | Moderator | Forum Posts: 2085 | Member Since: 24 November 2010 | Offline

The real reasons we don’t have AGI yet
A response to David Deutsch’s recent article on AGI

October 8, 2012 by Ben Goertzel   URL: http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet?

 

As we noted in a recent post, physicist David Deutsch said the field of “artificial general intelligence” or AGI has made “no progress whatever during the entire six decades of its existence.” We asked Dr. Ben Goertzel, who introduced the term AGI and founded the AGI conference series, to respond. — Ed.

Like so many others, I’ve been extremely impressed and fascinated by physicist David Deutsch’s work on quantum computation — a field that he helped found and shape.

I also encountered Deutsch’s thinking once in a totally different context — while researching approaches to home schooling my children, I noticed his major role in the Taking Children Seriously movement, which advocates radical unschooling, and generally rates all coercion used against children as immoral.

In short, I have frequently admired Deutsch as a creative, gutsy, rational and intriguing thinker. So when I saw he had written an article entitled “Creative blocks: The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?,” I was eager to read it and get his thoughts on my own main area of specialty, artificial general intelligence.

Oops.

I was curious what Deutsch would have to say about AGI and quantum computing. But he quickly dismisses Penrose and others who think human intelligence relies on neural quantum computing, quantum gravity computing, and what-not. Instead, his article begins with a long, detailed review of the well-known early history of computing, and then argues that the “long record of failure” of the AI field AGI-wise can only be remedied via a breakthrough in epistemology following on from the work of Karl Popper.

This bold, eccentric view of AGI is clearly presented in the article, but is not really argued for. This is understandable since we’re talking about a journalistic opinion piece here rather than a journal article or a monograph. But it makes it difficult to respond to Deutsch’s opinions other than by saying “Well, er, no” and then pointing out the stronger arguments that exist in favor of alternative perspectives more commonly held within the AGI research community.

I salute David Deutsch’s boldness, in writing and thinking about a field where he obviously doesn’t have much practical grounding. Sometimes the views of outsiders with very different backgrounds can yield surprising insights. But I don’t think this is one of those times. In fact, I think Deutsch’s perspective on AGI is badly mistaken, and if widely adopted, would slow down progress toward AGI dramatically.

The real reasons we don’t have AGI yet, I believe, have nothing to do with Popperian philosophy, and everything to do with:

  • The weakness of current computer hardware (rapidly being remedied via exponential technological growth!)
  • The relatively minimal funding allocated to AGI research (which, I agree with Deutsch, should be distinguished from “narrow AI” research on highly purpose-specific AI systems like IBM’s Jeopardy!-playing AI or Google’s self-driving cars).
  • The integration bottleneck: the difficulty of integrating multiple complex components together to make a complex dynamical software system, in cases where the behavior of the integrated system depends sensitively on every one of the components.

Assorted nitpicks, quibbles and major criticisms

I’ll begin here by pointing out some of the odd and/or erroneous positions that Deutsch maintains in his article. After that, I’ll briefly summarize my own alternative perspective on why we don’t have human-level AGI yet, as alluded to in the above three bullet points.

Deutsch begins by bemoaning the AI field’s “long record of failure” at creating AGI — without seriously considering the common counterargument that this record of failure isn’t very surprising, given the weakness of current computers relative to the human brain, and the far greater weakness of the computers available to earlier AI researchers. I actually agree with his statement that the AI field has generally misunderstood the nature of general intelligence. But I don’t think the rate of progress in the AI field, so far, is a very good argument in favor of this statement. There are too many other factors underlying this rate of progress, such as the nature of the available hardware.

He also makes a rather strange statement regarding the recent emergence of the AGI movement:

The field used to be called “AI” — artificial intelligence. But “AI” was gradually appropriated to describe all sorts of unrelated computer programs such as game players, search engines and chatbots, until the G for ‘general’ was added to make it possible to refer to the real thing again, but now with the implication that an AGI is just a smarter species of chatbot.

As the one who introduced the term AGI and founded the AGI conference series, I am perplexed by the reference to chatbots here. In a recent paper in AAAI magazine, resulting from the 2009 AGI Roadmap Workshop, a number of coauthors (including me) presented a host of different scenarios, tasks, and tests for assessing humanlike AGI systems.

The paper is titled “Mapping the Landscape of Human-Level General Intelligence,” and chatbots play a quite minor role in it. Deutsch is referring to the classical Turing test for measuring human-level AI (a test involving fooling human judges into believing a computer’s humanity, in a chat-room context). But the contemporary AGI community, like the mainstream AI community, tends to consider the Turing Test a poor guide for research.

But perhaps he considers the other practical tests presented in our paper — like controlling a robot that attends and graduates from a human college — as basically the same thing as a “chatbot.” I suspect this might be the case, because he avers that

AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy. …

This is a variant of John Searle’s Chinese Room argument [video]. In his classic 1980 paper “Minds, Brains and Programs,” Searle considered the case of a person who knows only English, sitting alone in a room following English instructions for manipulating strings of Chinese characters. Does the person really understand Chinese?

To someone outside the room, it may appear so. But clearly, there is no real “understanding” going on. Searle takes this as an argument that intelligence cannot be defined using formal syntactic or programmatic terms, and that conversely, a computer program (which he views as “just following instructions”) cannot be said to be intelligent in the same sense as people.

Deutsch’s argument is sort of the reverse of Searle’s. In Deutsch’s brain-in-a-vat version, the intelligence is qualitatively there, even though there are no intelligent behaviors to observe. In Searle’s version, the intelligent behaviors can be observed, but there is no intelligence qualitatively there.

Everyone in the AI field has heard the Chinese Room argument and its variations many times before, and there is an endless literature on the topic. In 1991, computer scientist Pat Hayes half-seriously defined cognitive science as the ongoing research project of refuting Searle’s argument.

Deutsch attempts to use his variant of the Chinese Room argument to bolster his view that we can’t build an AGI without fully solving the philosophical problem of the nature of mind. But this seems just as problematic as Searle’s original argument. Searle tried to argue that computer programs can’t be intelligent in the same sense as people; Deutsch on the other hand, thinks computer programs can be intelligent in the same sense as people, but that his Chinese room variant shows we need new philosophy to tell us how to do so.

I classify this argument of Deutsch’s right up there with the idea that nobody can paint a beautiful painting without fully solving the philosophical problem of the nature of beauty. Somebody with no clear theory of beauty could make a very beautiful painting — they just couldn’t necessarily convince a skeptic that it was actually beautiful. Similarly, a complete theory of general intelligence is not necessary to create an AGI — though it might be necessary to convince a skeptic with a non-pragmatic philosophy of mind that one’s AGI is actually generally intelligent, rather than just “behaving generally intelligent.”

Of course, to the extent we theoretically understand general intelligence, the job of creating AGI is likely to be easier. But exactly what mix of formal theory, experiment, and informal qualitative understanding is going to guide the first successful creation of AGI, nobody now knows.

What Deutsch leads up to with this call for philosophical inquiry is even more perplexing:

Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable.

This assertion seems a bit strange to me. Indeed, AGI researchers tend not to be terribly interested in Popperian epistemology. However, nor do they tend to be tied to the Aristotelian notion of knowledge as “justified true belief.” Actually, AGI researchers’ views of knowledge and belief are all over the map. Many AGI researchers prefer to avoid any explicit role for notions like theory, truth, or probability in their AGI systems.

He follows this with a Popperian argument against the view of intelligence as fundamentally about prediction, which seems to me not to get at the heart of the matter. Deutsch asserts that “in reality, only a tiny component of thinking is about prediction at all … the truth is that knowledge consists of conjectured explanations.”

But of course, those who view intelligence in terms of prediction would just counter-argue that the reason these conjectured explanations are useful is because they enable a system to better make predictions about what actions will let it achieve its goals in what contexts. What’s missing is an explanation of why Deutsch sees a contradiction between the “conjectured explanations” view of intelligence and the “predictions” view. Or is it merely a difference of emphasis?

In the end, Deutsch presents a view of AGI that comes very close to my own, and to the standard view in the AGI community:

An AGI is qualitatively, not quantitatively, different from all other computer programs. Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose “thinking” is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely, creativity.

Yes. This is not a novel suggestion, it’s what basically everyone in the AGI community thinks; but it’s a point worth emphasizing.

But where he differs from nearly all AGI researchers is that he thinks what we need to create AGI is probably a single philosophical insight:

I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.

The real reasons why we don’t have AGI yet

Deutsch thinks the reason we don’t have human-level AGI yet is the lack of an adequate philosophy of mind: one that would sufficiently, definitively refute puzzles like the Chinese Room or his brain-in-a-vat scenario, and lead us to a theoretical understanding of why brains are intelligent and how to make programs that emulate the key relevant properties of brains.

While I think that better, more fully-fleshed-out theories of mind would be helpful, I don’t think he has correctly identified the core reasons why we don’t have human-level AGI yet.

The main reason, I think, is simply that our hardware is far weaker than the human brain. It may actually be possible to create human-level AGI on current computer hardware, or even the hardware of five or ten years ago. But the process of experimenting with various proto-AGI approaches on current hardware is very slow, not just because proto-AGI programs run slowly, but because current software tools, engineered to handle the limitations of current hardware, are complex to use.

With faster hardware, we could have much easier-to-use software tools, and could explore AGI ideas much faster. Fortunately, this particular drag on progress toward advanced AGI is rapidly diminishing as computer hardware exponentially progresses.

Another reason is an AGI funding situation that’s slowly rising from poor to sub-mediocre. Look at the amount of resources society puts into, say, computer chip design, cancer research, or battery development. AGI gets a teeny tiny fraction of this. Software companies devote hundreds of man-years to creating products like word processors, video games, or operating systems; an AGI is much more complicated than any of these things, yet no AGI project has ever been given nearly the staff and funding level of projects like OS X, Microsoft Word, or World of Warcraft.

I have conjectured before that once some proto-AGI reaches a sufficient level of sophistication in its behavior, we will see an “AGI Sputnik” dynamic — where various countries and corporations compete to put more and more money and attention into AGI, trying to get there first. The question is, just how good does a proto-AGI have to be to reach the AGI Sputnik level?

The integration bottleneck

Weak hardware and poor funding would certainly be a good enough reason for not having achieved human-level AGI yet. But I don’t think they’re the only reasons. I do think there is also a conceptual reason, which boils down to the following three points:

  • Intelligence depends on the emergence of certain high-level structures and dynamics across a system’s whole knowledge base;
  • We have not discovered any one algorithm or approach capable of yielding the emergence of these structures;
  • Achieving the emergence of these structures within a system formed by integrating a number of different AI algorithms and structures is tricky. It requires careful attention to the manner in which these algorithms and structures are integrated; and so far, the integration has not been done in the correct way.

One might call this the “integration bottleneck.” This is not a consensus in the AGI community by any means — though it’s a common view among the sub-community concerned with “integrative AGI.” I’m not going to try to give a full, convincing argument for this perspective in this article. But I do want to point out that it’s a quite concrete alternative to Deutsch’s explanation, and has a lot more resonance with the work going on in the AGI field.

This “integration bottleneck” perspective also has some resonance with neuroscience. The human brain appears to be an integration of an assemblage of diverse structures and dynamics, built using common components and arranged according to a sensible cognitive architecture. However, its algorithms and structures have been honed by evolution to work closely together — they are very tightly inter-adapted, in somewhat the same way that the different organs of the body are adapted to work together. Due to their close interoperation, they give rise to the overall systemic behaviors that characterize human-like general intelligence.

So in this view, the main missing ingredient in AGI so far is “cognitive synergy”: the fitting-together of different intelligent components into an appropriate cognitive architecture, in such a way that the components richly and dynamically support and assist each other, interrelating very closely in a similar manner to the components of the brain or body and thus giving rise to appropriate emergent structures and dynamics.
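
One way to picture “cognitive synergy” concretely is a blackboard-style loop in which every component reads from and writes to the same knowledge store, so each component’s partial results feed the others. The sketch below is only illustrative; the component names are invented, and OpenCog’s actual AtomSpace and scheduling are far richer:

```python
# Toy blackboard sketch of "cognitive synergy": independent components cooperate
# only through a shared knowledge store, so each one's output becomes another's
# input. The component names and contents here are invented for illustration.

def perception(kb):
    kb["percepts"] = ["red ball on table"]

def reasoning(kb):
    if "percepts" in kb:
        kb["inferences"] = [f"graspable object: {p}" for p in kb["percepts"]]

def action_selection(kb):
    if "inferences" in kb:
        kb["action"] = "reach for the red ball"

def cognitive_cycle(components, steps=3):
    kb = {}                          # the shared knowledge store ("blackboard")
    for _ in range(steps):
        for component in components:
            component(kb)            # every component sees everyone else's results
    return kb

print(cognitive_cycle([perception, reasoning, action_selection]).get("action"))
```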

The reason this sort of intimate integration has not yet been explored much is that it’s difficult on multiple levels, requiring the design of an architecture and its component algorithms with a view toward the structures and dynamics that will arise in the system once it is coupled with an appropriate environment. Typically, the AI algorithms and structures corresponding to different cognitive functions have been developed based on divergent theoretical principles, by disparate communities of researchers, and have been tuned for effective performance on different tasks in different environments.

Making such diverse components work together in a truly synergetic and cooperative way is a tall order, yet my own suspicion is that this — rather than some particular algorithm, structure or architectural principle — is the “secret sauce” needed to create human-level AGI based on technologies available today.

Achieving this sort of cognitive-synergetic integration of AGI components is the focus of the OpenCog AGI project that I co-founded several years ago. We’re a long way from human adult level AGI yet, but we have a detailed design and codebase and roadmap for getting there. Wish us luck!

Where to focus: engineering and computer science, or philosophy?

The difference between Deutsch’s perspective and my own is not a purely abstract matter; it does have practical consequence. If Deutsch’s perspective is correct, the best way for society to work toward AGI would be to give lots of funding to philosophers of mind. If my view is correct, on the other hand, most AGI funding should go to folks designing and building large-scale integrated AGI systems.

Until sufficiently advanced AGI has been achieved, it will be difficult to refute perspectives like Deutsch’s in a fully definitive way. But in the end, Deutsch has not made a strong case that the AGI field is helpless without a philosophical revolution.

I do think philosophy is important, and I look forward to the philosophy of mind and general intelligence evolving along with the development of better and better AGI systems.

But I think the best way to advance both philosophy of mind and AGI is to focus the bulk of our AGI-oriented efforts on actually building and experimenting with a variety of proto-AGI systems — using the tools and ideas we have now to explore concrete concepts, such as the integration bottleneck I’ve mentioned above. Fortunately, this is indeed the focus of a significant subset of the AGI research community.

And if you’re curious to learn more about what is going on in the AGI field today, I’d encourage you to come to the AGI-12 conference at Oxford, December 8–11, 2012.

 

27 November 2012, 14:49
captquazar (USA East coast) | Moderator | Forum Posts: 2085 | Member Since: 24 November 2010 | Offline

IBM simulates 530 billion neurons, 100 trillion synapses on supercomputer
November 19, 2012  

URL: http://www.kurzweilai.net/ibm-simulates-530-billon-neurons-100-trillion-synapses-on-worlds-fastest-supercomputer?

A network of neurosynaptic cores derived from long-distance wiring in the monkey brain: Neuro-synaptic cores are locally clustered into brain-inspired regions, and each core is represented as an individual point along the ring. Arcs are drawn from a source core to a destination core with an edge color defined by the color assigned to the source core. (Credit: IBM)

IBM Research – Almaden presented at Supercomputing 2012 last week the next milestone toward fulfilling the ultimate vision of the DARPA’s cognitive computing program, called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE), according to Dr. Dharmendra S. Modha, Manager, Cognitive Computing, IBM Research – Almaden.

Announced in 2008, DARPA’s SyNAPSE program calls for developing electronic neuromorphic (brain-simulation) machine technology that scales to biological levels, using a cognitive computing architecture with 10^10 neurons (10 billion) and 10^14 synapses (100 trillion, based on estimates of the number of synapses in the human brain).

Simulating 10 billion neurons and 100 trillion synapses on most powerful supercomputer

IBM says it has now accomplished this milestone with its new “TrueNorth” system running on the world’s second-fastest operating supercomputer, the Lawrence Livermore National Laboratory (LLNL) Blue Gene/Q Sequoia, using 96 racks (1,572,864 processor cores, 1.5 PB memory, 98,304 MPI processes, and 6,291,456 threads).

IBM and LLNL achieved an unprecedented scale of 2.084 billion neurosynaptic cores* containing 53 × 10^10 (530 billion) neurons and 1.37 × 10^14 (100 trillion) synapses, running only 1,542 times slower than real time.
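
As a quick sanity check on that scale (my own back-of-envelope arithmetic, assuming each simulated core carries 256 neurons and 65,536 synapses; the SC12 paper gives the exact per-core model):

```python
# Back-of-envelope check of the totals quoted above. My arithmetic, assuming
# 256 neurons and 65,536 synapses per simulated core; the paper has the exact model.
cores = 2.084e9
print(f"neurons:  {cores * 256:.2e}")    # ~5.3e11, i.e. roughly 530 billion
print(f"synapses: {cores * 65536:.2e}")  # ~1.37e14
```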

“We have not built a biologically realistic simulation of the complete human brain,” explains an abstract of the Supercomputing 2012 (SC12) paper (open-access PDF), selected from the 100 SC12 papers as one of the six finalists for the Best Paper Award. “Computation (‘neurons’), memory (‘synapses’), and communication (‘axons,’ ‘dendrites’) are mathematically abstracted away from biological detail toward engineering goals of maximizing function (utility, applications) and minimizing cost (power, area, delay) and design complexity of hardware implementation.”

Neurosynaptic core (credit: IBM)

Two billion neurosynaptic cores

“Previously, we have demonstrated a neurosynaptic core* and some of its applications,” continues the abstract. “We have also compiled the largest long-distance wiring diagram of the monkey brain. Now, imagine a network with over 2 billion of these neurosynaptic cores that are divided into 77 brain-inspired regions with probabilistic intra-region (“gray matter”) connectivity and monkey-brain-inspired inter-region (“white matter”) connectivity.

“This fulfills a core vision of the DARPA SyNAPSE project to bring together nanotechnology, neuroscience, and supercomputing to lay the foundation of a novel cognitive computing architecture that complements today’s von Neumann machines.”

To support TrueNorth, IBM has developed Compass, a multi-threaded, massively parallel functional simulator and a parallel compiler that maps a network of long-distance pathways in the macaque monkey brain to TrueNorth.

* The IBM-Cornell neurosynaptic core is a key building block of a modular neuromorphic architecture, according to Modha. The core incorporates central elements from neuroscience, including 256 leaky integrate-and-fire neurons, 1024 axons, and 256×1024 synapses using an SRAM crossbar memory. It fits in a 4.2mm square area, using a 45nm SOI process.
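
The core’s behaviour can be sketched in a few lines: a binary crossbar routes each incoming axon spike onto a set of target neurons, the neurons integrate that input, leak a little each tick, and fire when they cross a threshold. The toy below follows that description only loosely; the leak, threshold, and random connectivity are made up, and IBM’s actual neuron model and learning rules are not reproduced here:

```python
import numpy as np

# Toy tick of a neurosynaptic core in the spirit of the footnote above:
# 1024 axons x 256 leaky integrate-and-fire neurons joined by a binary crossbar.
# All parameters (leak, threshold, connection density) are invented for illustration.
AXONS, NEURONS = 1024, 256
rng = np.random.default_rng(0)

crossbar = rng.random((AXONS, NEURONS)) < 0.05   # binary synapse matrix (the "SRAM")
potential = np.zeros(NEURONS)                    # membrane potential of each neuron
LEAK, THRESHOLD = 0.5, 8.0

def tick(axon_spikes):
    """Advance the core one time step given a boolean vector of axon spikes."""
    global potential
    drive = crossbar[axon_spikes].sum(axis=0)    # integrate input routed by the crossbar
    potential = np.maximum(potential + drive - LEAK, 0.0)   # integrate and leak
    fired = potential >= THRESHOLD               # threshold crossing
    potential[fired] = 0.0                       # reset neurons that fired
    return fired

spikes_in = rng.random(AXONS) < 0.2              # random input spikes for this tick
print(int(tick(spikes_in).sum()), "neurons fired")
```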

PAST IBM PRESS RELEASES:

DARPA SyNAPSE Phase 0
DARPA SyNAPSE Phase 1
DARPA SyNAPSE Phase 2

27 November 2012, 19:12
calemus | Platinum Member | Forum Posts: 126 | Member Since: 7 October 2012 | Offline

It is interesting how I never hear of products IBM actually sells, but the very rare times I do hear of them, it's how mind-blowingly awesome they are.

A very unusual and odd combination.

learning is good .....understanding is better .....

please teach with wisdom..........................................................calemus

28 November 2012, 9:09
captquazar (USA East coast) | Moderator | Forum Posts: 2085 | Member Since: 24 November 2010 | Offline

IBM has abandoned the field of consumer electronics; they don't make 'personal' anything anymore. Profits were too low, the competition too voracious. IBM makes industrial systems and corporate software only. The "Watson" AI, for example, is being applied to medical diagnosis as a search aid for doctors: customers with lots of $$$$ to spend. They do the cool demos for proof of concept and publicity only. Take the Honda ASIMO robot, for example. At a unit cost of nearly 3 million dollars and millions more in R&D, it can barely do the chores of a deaf-mute maid, and slowly at that. Battery life is a big problem too; the thing can run on its own for less than an hour.

Home robots have a long distance to travel software-wise and need huge cost reductions in the hardware. Virtual avatars and personal-assistant AI can be cost-effective and practical in the short term, if the AI is good enough.

21 February 2013, 11:26
captquazar (USA East coast) | Moderator | Forum Posts: 2085 | Member Since: 24 November 2010 | Offline

Just to make my position clear, I'm firmly in the camp of Ray Kurzweil.  - The Capt.

 

Miguel Nicolelis, a top neuroscientist at Duke University, says computers will never replicate the human brain and that the technological Singularity is “a bunch of hot air.”

“The brain is not computable and no engineering can reproduce it,” says Nicolelis, author of several pioneering papers on brain-machine interfaces.

The Singularity, of course, is that moment when a computer super-intelligence emerges and changes the world in ways beyond our comprehension.

Among the idea’s promoters are futurist Ray Kurzweil, recently hired on at Google as a director of engineering, who has been predicting that not only will machine intelligence exceed our own, but people will be able to download their thoughts and memories into computers (see “Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You”).

Nicolelis calls that idea sheer bunk. “Downloads will never happen,” he said during remarks made at the annual meeting of the American Association for the Advancement of Science in Boston on Sunday. “There are a lot of people selling the idea that you can mimic the brain with a computer.”

The debate over whether the brain is a kind of computer has been running for decades. Many scientists think it’s possible, in theory, for a computer to equal the brain given sufficient computer power and an understanding of how the brain works.

Kurzweil delves into the idea of “reverse-engineering” the brain in his latest book, How to Create a Mind: The Secret of Human Thought Revealed, in which he says even though the brain may be immensely complex, “the fact that it contains many billions of cells and trillions of connections does not necessarily make its primary method complex.”

But Nicolelis is in a camp that thinks that human consciousness (and if you believe in it, the soul) simply can’t be replicated in silicon. That’s because its most important features are the result of unpredictable, nonlinear interactions among billions of cells, Nicolelis says.

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

The neuroscientist, originally from Brazil, instead thinks that humans will increasingly subsume machines (an idea, incidentally, that’s also part of Kurzweil’s predictions).

In a study published last week, for instance, Nicolelis’s group at Duke used brain implants to allow mice to sense infrared light, something mammals can’t normally perceive. They did it by wiring a head-mounted infrared sensor to electrodes implanted into a part of the brain called the somatosensory cortex.

The experiment, in which several mice were able to follow sensory cues from the infrared detector to obtain a reward, was the first ever to use a neural implant to add a new sense to an animal, Nicolelis says.

That’s important because the human brain has evolved to take the external world—our surroundings and the tools we use—and create representations of them in our neural pathways. As a result, a talented basketball player perceives the ball “as just an extension of himself” says Nicolelis.

Similarly, Nicolelis thinks in the future humans with brain implants might be able to sense x-rays, operate distant machines, or navigate in virtual space with their thoughts, since the brain will accommodate foreign objects including computers as part of itself.

Recently, Nicolelis’s Duke lab has been looking to put an exclamation point on these ideas. In one recent experiment, they used a brain implant so that a monkey could control a full-body computer avatar, explore a virtual world, and even physically sense it.

In other words, the human brain creates models of tools and machines all the time, and brain implants will just extend that capability. Nicolelis jokes that if he ever opened a retail store for brain implants, he’d call it Machines “R” Us.

But if he’s right, us ain’t machines, and never will be. 
