search results matching tag: neural networks


The Memristor Will Replace RAM and the Hard Drive

westy says...

Memristors /memˈrɪstɚ/ ("memory resistors") are a class of passive two-terminal circuit elements that maintain a functional relationship between the time integrals of current and voltage. This results in resistance varying according to the device's memristance function. Specifically engineered memristors provide controllable resistance useful for switching current. The memristor is a special case in so-called "memristive systems", a class of mathematical models useful for certain empirically observed phenomena, such as the firing of neurons.[3] The definition of the memristor is based solely on fundamental circuit variables, similar to the resistor, capacitor, and inductor. Unlike those more familiar elements, the necessarily nonlinear memristors may be described by any of a variety of time-varying functions. As a result, memristors do not belong to linear time-invariant (LTI) circuit models. A linear time-invariant memristor is simply a conventional resistor.[4]
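The charge-dependent resistance described above can be sketched numerically. This is a minimal, illustrative take on a linear ion-drift memristor model (the parameter values and helper names are invented for illustration, not HP's actual device data): resistance is a function of the total charge that has flowed through the device.

```python
# Illustrative sketch of a linear ion-drift memristor model.
# Parameters are invented, not measured device data.
R_ON, R_OFF = 100.0, 16000.0   # fully-doped / undoped resistances (ohms)
K = 1e4                        # lumped mobility/geometry factor (assumed)

def memristance(q):
    """Resistance as a function of accumulated charge, clipped to [R_ON, R_OFF]."""
    m = R_OFF - (R_OFF - R_ON) * K * q
    return max(R_ON, min(R_OFF, m))

def simulate(current, dt, steps):
    """Drive the device with a constant current; v = M(q) * i."""
    q, trace = 0.0, []
    for _ in range(steps):
        trace.append((q, memristance(q) * current))
        q += current * dt          # q is the time integral of current
    return trace

trace = simulate(current=1e-3, dt=1e-3, steps=5)
# The voltage needed to push the same current falls as charge accumulates:
# the device "remembers" how much charge has passed through it.
```

Cut the current to zero and q stops changing, so the resistance is retained, which is the non-volatile memory property mentioned above.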

Memristor theory was formulated and named by Leon Chua in a 1971 paper. Chua strongly believed that a fourth device existed to provide conceptual symmetry with the resistor, inductor, and capacitor. This symmetry follows from the description of basic passive circuit elements as defined by a relation between two of the four fundamental circuit variables, namely voltage, current, charge and flux.[5] A device linking charge and flux (themselves defined as time integrals of current and voltage), which would be the memristor, was still hypothetical at the time. He did acknowledge that other scientists had already used fixed nonlinear flux-charge relationships.[6] However, it would not be until thirty-seven years later, on April 30, 2008, that a team at HP Labs led by the scientist R. Stanley Williams would announce the discovery of a switching memristor. Based on a thin film of titanium dioxide, it has been presented as an approximately ideal device.[7][8][9] Being much simpler than currently popular MOSFET switches and also able to implement one bit of non-volatile memory in a single device, memristors integrated with transistors may enable nanoscale computer technology. Chua also speculates that they may be useful in the construction of artificial neural networks.[10]

Intelligent Design - Where is AI going? (Videogames Talk Post)

gwiz665 says...

Most "weak AI" or narrow AI is used in games or for very specific purposes. Very few projects have grappled with the complexity of a general, strong AI. The main reason, as far as I can see, is that it is really, really, really complex and therefore hard to do. As soon as you up the complexity, the human error margin rises a bunch. Just look at games, which are nowhere near as complex: there are always bugs.

Commercial physics engines (Havok, PhysX, etc.) are not based on the actual laws of physics; they use simplified rules that give similar behavior with much better performance.
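That trade-off can be illustrated with semi-implicit Euler integration, the kind of cheap per-frame stepping scheme game engines commonly use instead of solving the equations of motion exactly. This is a generic sketch, not how Havok or PhysX are actually implemented:

```python
# Semi-implicit (symplectic) Euler: a cheap per-frame update that only
# approximates the true equations of motion but is stable enough for games.
def step(pos, vel, accel, dt):
    vel = vel + accel * dt      # update velocity first...
    pos = pos + vel * dt        # ...then position with the *new* velocity
    return pos, vel

# Drop a ball from 100 m under gravity, 60 updates per second.
pos, vel, dt = 100.0, 0.0, 1.0 / 60.0
for _ in range(60):             # simulate one second
    pos, vel = step(pos, vel, accel=-9.81, dt=dt)
# Exact answer after 1 s is 100 - 0.5*9.81 = 95.095 m; the stepped
# result is close but not exact -- similar behavior, far cheaper.
```

One multiply-add per quantity per frame is the whole cost, which is why this family of integrators shows up in real-time simulation.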

The argument that programming languages don't know any more than we do is a faulty one; that's like saying that our natural language is limited by our knowledge, or that we can only express the math we know. There have been plenty of mathematical discoveries which are expressed just fine in the mathematical language, so there is no reason why we should not be able to express an algorithmic solution in any Turing-complete language.

All the AIs that are claimed to be "human emulators", usually chatbots, are in fact not. They have a very narrow application: processing language and making a "reasonable" response.

I agree that we need to research the human mind, but I'm not sure we're going to find any magic there that is inherent to the way the brain is built. I think it is far more likely that we will find that it builds on similar principles to what we now know as weak AI, but that the complexity is insanely higher.

Neural networks can run on a Turing machine as well, and most scientists agree that we do indeed have neural networks in our brain, so there's no reason to dismiss the idea that we can run a brain in a computer as such.

Intelligent Design - Where is AI going? (Videogames Talk Post)

gwiz665 says...

"So the question is, can code contain all types of information?"
I think so.

There are some definite problems with computability, such as the halting problem, which can never be solved by a Turing machine. This can be accounted for with checks and balances: the computer doesn't have to give the right answer, just reply that it can't calculate it. A human couldn't calculate it either.
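The "reply that it can't calculate it" idea can be sketched as a step budget: run a computation for at most N steps and report "unknown" if it hasn't halted. This is not a solution to the halting problem, just a way to answer instead of hanging; all helper names here are hypothetical.

```python
# A step budget sidesteps (does not solve) the halting problem: the machine
# answers "unknown" instead of running forever. Helper names are invented.
def run_with_budget(gen, max_steps=1000):
    """gen is a generator that yields once per computation step."""
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration as done:
            return ("halted", done.value)
    return ("unknown", None)

def collatz(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        yield
    return steps

def forever():
    while True:
        yield

print(run_with_budget(collatz(27)))   # halts well inside the budget
print(run_with_budget(forever()))     # budget exhausted -> ("unknown", None)
```

The budget makes "unknown" an honest third answer alongside halting with a result, which is exactly the checks-and-balances behavior described above.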

I am inclined to think that we work much like a computer, but obviously not in exactly the same way; we don't have 1s and 0s in our heads. But our heads do have much lower-level atomic parts that our consciousness has no connection to, much like a high-level language does not have direct access to hardware or the intricacies of register addressing.

My whole point is that it may be prudent to consider that the term Strong AI may be a faulty one, and that there are only different degrees of complexity in "Weak" AI. I may be mistaken, of course, and there may indeed be some "magical" step that needs to be taken before something is actually thinking, but I doubt it.

I think that the more we learn about the human brain, the more we demystify the I in AI. Thoughts and consciousness will likely not be that special when broken down, but because the brain is so complex, a mystical answer seems easier for now.

The term "weak AI" does not only mean the classical pre-programmed AI used in the games above; it also covers neural networks and artificial life (swarm AI). A better word is probably "narrow AI", because it typically has only a narrow application. The term "strong AI" could easily be exchanged for "general AI", because it has to be able to do basically everything. My point is that there is no threshold where one becomes the other; it is a sliding scale of complexity.

US'08: chaos, possible 1st female president and glasses (Election Talk Post)

NordlichReiter says...

There are also plenty of ways for the electoral college to say "screw the people, let's pick the president ourselves."

The voting system should be a system of databases, like a web, not a conglomerate of paper, electronic, and chad ballots.

Each small polling place gets a data table, then the voting precinct gets a database, and so on. It would be a many-to-many relationship. There would be several backup databases, so that if one network of databases came under scrutiny, they could go to one of the other hidden networks and find the truth.
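A hypothetical sketch of that precinct/ballot layout, with every ballot written to several replica databases so the tallies can be cross-checked. All table and column names here are invented for illustration, using SQLite in-memory databases as stand-ins for the real backups:

```python
import sqlite3

# Hypothetical schema for the precinct -> ballot idea (names invented).
SCHEMA = """
CREATE TABLE precinct (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ballot   (id INTEGER PRIMARY KEY,
                       precinct_id INTEGER REFERENCES precinct(id),
                       candidate TEXT);
"""

def new_replica():
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    return db

def record_vote(replicas, precinct_id, candidate):
    # Write the same ballot to every backup database.
    for db in replicas:
        db.execute("INSERT INTO ballot (precinct_id, candidate) VALUES (?, ?)",
                   (precinct_id, candidate))

replicas = [new_replica() for _ in range(3)]
for db in replicas:
    db.execute("INSERT INTO precinct (id, name) VALUES (1, 'Ward 1')")
record_vote(replicas, 1, "Alice")
tallies = [db.execute("SELECT COUNT(*) FROM ballot WHERE candidate='Alice'")
             .fetchone()[0] for db in replicas]
# If one replica is tampered with, its count disagrees with the others.
```

A disagreement between replicas is the signal to go audit the paper trail.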

There would need to be honeypots all over the system, but that's what it's all about.

Then use the computers to count the votes... and print them out. Simple and clean.

Could also use the system to print out each and every ballot, and then have the whole election on paper somewhere, with multiple copies to be sure that nothing is forged.

It sounds crazy, but it's achievable, and it's a sure way to put the voting power back in the hands of the people.

Frankly I don't trust the college.

EDIT: This is a plot for me to get a job! I'm a C#/Java/SQL kind of guy!

Evolution of the Eye Made Easy

Intels 80 core processor

rychan says...

I disagree with anyone who says chips like this won't be useful for solving AI. AI is all about data and machine learning, and most of the operations that deal with that data are trivially parallelizable. I'm perfectly happy with this trend as long as it keeps Moore's law marching along.

I agree with gwiz665 except for the part about (computational) neural networks. Neural networks are just one of a million function approximators. Nearest neighbor is my favorite. Support vector machines are the best for many applications. People from brain and cognitive sciences tend to employ neural networks because they find them to be a useful analog to the brain, even if they don't actually work well and aren't actually similar to the brain.
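To make the "function approximator" point concrete, here is nearest neighbor in a few lines: predict the label of the closest training point. The data is made up purely for illustration.

```python
# 1-nearest-neighbor as a function approximator: answer with the label of
# the closest training sample. Made-up data, illustrative only.
def nearest_neighbor(train, x):
    """train: list of (input, label) pairs; x: query point."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Approximate f(x) = x^2 from a handful of samples.
train = [(x, x * x) for x in range(-5, 6)]
print(nearest_neighbor(train, 2.2))   # closest sample is x=2, so predicts 4
print(nearest_neighbor(train, 3.6))   # closest sample is x=4, so predicts 16
```

No training phase at all: the "model" is the data, which is part of why it makes such a handy baseline to compare neural networks against.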

Intels 80 core processor

gwiz665 says...

Classic AI is in principle flawed, because everything has to be pre-programmed. Modern AI (or Learning AI) is a far more promising idea.

jwray: I don't think we can "evolve" an AI, even though that sounds quite interesting, because AIs don't have a life span as such. Biological evolution happens because we adapt to our environment over the span of generations. An AI is a single generation, and as we do, it has to learn how to act and behave.

Modern AI is based on a neural network (like the brain), which is a huge analog state-based machine. If you have to simulate something like that, you require an almost infinitely parallel computer, because the brain is exactly as parallel as the number of neurons in it.
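The "as parallel as the number of neurons" point comes from the fact that each neuron in a layer reads only the previous layer's outputs, so every neuron in the layer can fire at once. A minimal sketch, mapping the neurons of one layer across a thread pool (the weights are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
import math

# Each neuron depends only on the previous layer's outputs, so all the
# neurons in a layer are independent and can be computed concurrently.
def neuron(weights, inputs):
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

def layer(weight_rows, inputs):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda w: neuron(w, inputs), weight_rows))

inputs = [0.5, -1.0, 0.25]
weights = [[0.1, 0.4, -0.2],   # one row of weights per neuron (made up)
           [0.9, 0.0, 0.3],
           [-0.5, 0.2, 0.8]]
outputs = layer(weights, inputs)
# Three independent neuron activations, one per row of weights.
```

On real hardware this same independence is what lets GPUs and many-core chips evaluate huge layers in one sweep.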

Evolved Virtual Creatures (1994)

8756 says...

Aaah... genetic algorithms. Really nice. These works by Karl Sims are very famous in the Artificial Intelligence community. The great thing about this is that it mixes genetic algorithms and artificial neural networks.
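The genetic-algorithm half of that mix fits in a few lines. This is a toy in the spirit of Sims' work, vastly simplified: instead of evolving a creature's neural controller, it evolves a two-number genome toward a trivial made-up target, using the same select-and-mutate loop.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Toy genetic algorithm (vastly simplified): evolve a 2-number genome so
# that its sum lands near 10. Selection keeps the best, mutation explores.
def fitness(genome):
    return -abs(sum(genome) - 10.0)    # higher is better

def mutate(genome):
    return [g + random.gauss(0, 0.5) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # selection: keep the top 5
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
# After 100 generations of selection + mutation, sum(best) sits near 10.
```

In Sims' creatures the genome encodes body morphology and network weights rather than two numbers, but the generational loop is the same shape.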

Quantum Consciousness (Stuart Hameroff)

gwiz665 says...

God of the gaps: we don't understand quantum theory fully, we don't understand consciousness fully, thus there must be some "higher" connection between the two.

I'm studying AI at the moment, and up until the '80s people thought that the brain worked like a computer, or that the brain could be simulated in a computer with a classical AI. This cannot be, because a pre-programmed AI must have had all inputs predicted, so that it has some rules and laws with which to process them. Newer theories use a neural network, which is able to process previously unknown inputs and is a highly parallel and analog type of processing. A neuron is not necessarily simply a switch; it can hold a whole array of values. Furthermore, the connections between neurons (synapses) have "weights", which determine how much influence a given neuron has on the network as a whole.
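The "not simply a switch" and "weights" points can be sketched together: a model neuron sums weighted inputs and passes them through a smooth activation, so its output is graded rather than on/off, and the synapse weights set how strongly each input counts. The weights below are invented for illustration.

```python
import math

# A neuron as a graded unit rather than a switch: weighted inputs are
# summed and squashed through a sigmoid, giving a continuous output.
def neuron(inputs, weights, bias=0.0):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation in (0, 1)

inputs = [1.0, 0.5]
print(neuron(inputs, [0.1, 0.1]))   # weak synapses: output barely above 0.5
print(neuron(inputs, [4.0, 4.0]))   # strong synapses: output close to 1.0
```

Same inputs, different weights, very different influence on the rest of the network, which is exactly what "weights" buy over a plain switch.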

There are different approaches to what intelligence is, and that is the basis for any theory of AI. Some say, Turing for instance, that an entity is intelligent when input and output, so to say, "fit": how the computation is done is irrelevant, but if the result is right, it must be intelligent. This is called engineering AI.

Overlapping that is psychological AI, which is divided into two groups. Group A says that the cognitive processes must be computational in nature, such that they can be copied. Group B (philosophical AI) believes the emotional, intentional, and conscious can be copied computationally as well.

I heard no real argument for quantum consciousness here, just that they both are "uncertain" and have an element of randomness.

Extraordinary brain tricks

jonny says...

Ironic - with the exception of the initial demonstration of top-down processing in text comprehension, none of those have much to do with the human 'mind'. They are mostly functions of subcortical structures, primarily the retina, and some processing in the lateral geniculate nucleus (part of the thalamus). [edit] Motion detection is definitely associated with area V5 of the occipital cortex, but the data has been heavily processed by that point. The neurons in V5 are almost certainly activated by the motion illusions, but the illusion itself is caused by the wiring of the retina and LGN.

As for the text demonstration, it's worth noting that your ability to read text like that is severely hampered, though not completely degraded. This has been shown by reaction time studies of all sorts. There are even neural network models that have been developed to demonstrate exactly that effect. Still pretty neat, though.

You want fascinating - try figuring out how your brain matches up the color of those dots with the shape of them, despite the fact that the information travels in two completely separate neural pathways. It's a low-level variation of the 'binding problem' and is one of the central issues in cognitive neuroscience. Figure that out and you should pack your bags for a trip to Stockholm.


