Monday, December 7, 2009

Modeling the Mind: the singularity series

We're on a path toward artificial intelligence surpassing human intelligence within roughly 20 years, but scientists are rethinking how to get to the singularity. Reconstructing the thought process requires a lot of different pieces, and researchers are now trying to find a way to integrate them all.

Although it's pretty nerdy, I've always been fascinated with figuring out how to get a computer to think, solve problems, and function like a human mind. The two paragraphs below are from Re-Thinking Artificial Intelligence, an MIT article about how scientists are re-approaching a problem that, once solved, will bring us one step closer to passing the Turing test, and ultimately to the singularity.

"After modeling the thought process, the second area of focus is memory. Much work in AI has tried to impose an artificial consistency of systems and rules on the messy, complex nature of human thought and memory. “It’s now possible to accumulate the whole life experience of a person, and then reason using these data sets which are full of ambiguities and inconsistencies. That’s how we function — we don’t reason with precise truths,” he says. Computers need to learn “ways to reason that work with, rather than avoid, ambiguity and inconsistency."

"And the third focus of the new research has to do with what they describe as “body”: “Computer science and physical science diverged decades ago,” Gershenfeld says. Computers are programmed by writing a sequence of lines of code, but “the mind doesn’t work that way. In the mind, everything happens everywhere all the time.” A new approach to programming, called RALA (for reconfigurable asynchronous logic automata) attempts to “re-implement all of computer science on a base that looks like physics,” he says, representing computations “in a way that has physical units of time and space, so the description of the system aligns with the system it represents.” This could lead to making computers that “run with the fine-grained parallelism the brain uses,” he says."
