Intel's 80-core processor

Lethin says...

We're here: gigahertz (GHz), 10^9 or 1,000,000,000 calculations/s.
That is there: terahertz (THz), 10^12 or 1,000,000,000,000 calculations/s.

A teraflop is jargon for 1 trillion floating-point operations (computations) per second.
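To put rough numbers on those units, here's a quick, purely illustrative check in Python (the 3 GHz clock below is just an example figure, not a spec for this chip):

    # Illustrative unit check -- the clock speed is an example, not a spec.
    GIGA = 10**9     # giga: one billion per second
    TERA = 10**12    # tera: one trillion per second

    clock_hz = 3 * GIGA
    print(f"a 3 GHz clock ticks {clock_hz:,} times per second")           # 3,000,000,000

    flops = 1 * TERA  # one teraflop = 10^12 floating-point ops per second
    print(f"a 1-teraflop chip does {flops:,} floating-point ops per second")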

http://en.wikipedia.org/wiki/Hertz has all the other info to better understand what he's talking about.

Basically, it means that within a five-year window, normal people's computers will be as powerful as today's supercomputers, and more or less as efficient...

dag says...


Wow - I would think that bandwidth between the other PC components would be the new performance bottleneck with this. Nice to see that Moore's Law is still ticking away. Bring on the singularity.

dgandhi says...

This is not useful from a general user's perspective, only from a heavy-processing one. Some special applications, like massively parallel graphics rendering (photo-realism) or serious math, will no doubt find uses for them, but adding speed will not solve more fundamental problems, like AI, which require conceptual breakthroughs more than processing-speed increases.

The massive multi-core makes me think of SEAForth, which has a 24-core (soon 40-core) chip that runs not on a clock speed (like 3 GHz) but at the speed of electrons. Asynchronous processing like that seems much more interesting than just packing more of the same old onto smaller and smaller dies à la Intel.

Seriously, if you don't use your machine for games (get a dedicated machine like an Xbox 360 or PS3, for goodness' sake) and you don't try to run Windows post-2K Pro, then you CAN'T USE more than 1 GHz; this is mostly about creating a market for things people don't really need.

jwray says...

I disagree, dgandhi:

Our best bet for creating strong AI is to imitate the only known process that has ever created truly intelligent agents: biological evolution. Evolution is a massively parallel process that can take advantage of any number of cores.

9642 says...

80 cores, and a power-usage advantage from putting idle cores into sleep mode? Sounds like another useless thing for home PCs. On a laptop doing, say, heavy video rendering or 3D rendering I can see the use (for those of you that don't do 3D rendering: a simple multi-light scene with a few carefully chosen reflections can take up to hours to render, not to mention a whole movie whose effects and scenes push the total render time on current "home" PCs to a week or more).

And as for games, I didn't notice any "leap" in games when they introduced dual cores, so I'm not expecting to see any change from adding more cores in the future.

Alak says...

We're going from 2 cores, to 80. Am I missing something here?

As for the sleep mode thing:

Compare it to Dodge's MDS system in their V8 engines. It shuts off 4 cylinders, but the motor still has to rotate their mass and expend energy because of the parasitic loss of having 4 'sleeping' cylinders attached to the whole.

gwiz665 says...

Classic AI is in principle flawed, because everything has to be pre-programmed. Modern AI (or Learning AI) is a far more promising idea.

jwray: I don't think we can "evolve" an AI, even though that sounds quite interesting, because AIs don't have a life span as such. Biological evolution happens because we adapt to our environment over the span of generations. An AI is a single generation, and, as we do, it has to learn how to act and behave.

Modern AI is based on a neural network (like the brain), which is a huge analog state-based machine. If you have to simulate something like that, you require an almost infinitely parallel computer, because the brain is exactly as parallel as the number of neurons in it.
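To make the parallelism point concrete, here's a minimal sketch in Python/NumPy (the layer sizes are made up for illustration) of why one layer of a neural net is trivially parallel: each neuron's output depends only on the previous layer's activations and its own weights, so every neuron could in principle go to its own core.

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.standard_normal(4)        # activations coming from the previous layer
    W = rng.standard_normal((80, 4))  # one weight row per neuron (80 "neurons")
    b = rng.standard_normal(80)

    # Serial view: compute each neuron on its own.
    serial = np.array([np.tanh(W[i] @ x + b[i]) for i in range(80)])

    # Parallel view: the exact same result as one matrix operation, which can
    # be split across cores with no dependencies between neurons.
    parallel = np.tanh(W @ x + b)

    assert np.allclose(serial, parallel)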

xxovercastxx says...

We're going from 2 cores, to 80. Am I missing something here?

I've found that the answer to that question is always a resounding yes. Nobody asks it unless they are, in fact, missing something.

Compare it to Dodge's MDS system in their V8 engines.

Compare it to the four-compartment stomach of a cow. Cows regurgitate their food ("cud"), chew, and swallow it again. Each time they do so, the cud enters a separate compartment and goes through a different digestion stage. This allows them to utilize food that would otherwise be indigestible.

What does this have in common with an 80-core processor that can put unneeded cores to sleep to conserve power? Not a god damn thing. But then, comparing a microchip architecture to an engine which attempts to simultaneously compensate for a small penis and a thin wallet is completely pointless as well.

rychan says...

I disagree with anyone that says chips like this won't be useful for solving AI. AI is all about data and machine learning. And most of the operations that deal with that data are trivially parallelizable. I'm perfectly happy with this trend as long as it keeps Moore's law marching along.

I agree with gwiz665 except for the part about (computational) Neural Networks. Neural networks are just one of a million function approximators. Nearest neighbor is my favorite. Support vector machines are the best for many applications. People from brain and cognitive sciences tend to employ neural networks because they find it to be a useful analog to the brain, even if it doesn't actually work well and isn't actually similar to the brain.
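For anyone wondering what "nearest neighbor" means as a function approximator, here's a minimal hand-rolled sketch in Python/NumPy on toy data (not any particular library's API):

    import numpy as np

    def nearest_neighbor_predict(X_train, y_train, X_query):
        """1-nearest-neighbor: give each query the label of its closest
        training point (Euclidean distance). There is no training step."""
        X_train = np.asarray(X_train, dtype=float)
        X_query = np.asarray(X_query, dtype=float)
        dists = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
        return np.asarray(y_train)[dists.argmin(axis=1)]

    # Two loose clusters in 2-D.
    X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
    y = [0, 0, 1, 1]
    print(nearest_neighbor_predict(X, y, [[0.1, 0.0], [1.05, 0.9]]))  # -> [0 1]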

dgandhi says...

rychan:

While I see that throwing 80 cores at a problem satisfies (or creates) geek penis envy, I would like to point out that we have been adding processing power to AI problems for decades, with absurdly little to show for it.

The belief that if we just keep doing the same type of thing faster we will eventually get to intelligence does not hold much water. While adding speed may be a necessary aspect of moving to functional AI, it very well may not.

Until we have a functional AI model which allows us to do some of the things any mammalian brain can do, like reliable visual object identification, we really have no idea what it will take to make that happen.

We could all just sit around and throw out geeky ideas (quantum computing, fuzzy logic, more power, massive parallelism, fractal-chaos-based decision making, etc.), but until they work, we really don't know which of them will solve this problem. I'm not saying it should not be built, I'm just saying it's not revolutionary.

rychan says...

We are making tremendous progress in computer vision with the help of lots of data and lots of computing power. I'm a computer vision scientist, and at every conference people are reporting more and more remarkable performance on a host of different recognition / segmentation / classification / whatever tasks. Things like face detection are so well solved that inexpensive cameras do it automatically. What is your expertise to say that the computer vision field hasn't shown any progress? I completely disagree.

We have good benchmarks now based on some really difficult, extensive test sets and every year people are making significant gains in performance. We're not yet at human performance for most tasks but the gap is quantifiably closing. What are your credentials to dismiss the thousands of top notch computer vision publications in the past decade which clearly demonstrate our progress?

I wasn't saying that this processor is anything revolutionary either, I'm just hoping it keeps Moore's law going. That + lots of data + lots of clever engineering will solve computer vision, which is an AI hard problem.
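As a small illustration of how routine face detection has become, here's a sketch using the Haar-cascade (Viola-Jones style) detector bundled with OpenCV; the image filename is a placeholder and the parameters are just typical values, not tuned for anything in particular:

    import cv2  # OpenCV (pip install opencv-python)

    # Load the frontal-face Haar cascade that ships with OpenCV.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread("photo.jpg")  # placeholder path
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Returns one (x, y, w, h) rectangle per detected face.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("faces_out.jpg", image)
    print(f"found {len(faces)} face(s)")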

dag says...


^ Computer vision is a fascinating area.

From a layman's perspective- the most exciting fruits of better computer vision for me are something like Google being able to index images and video based on their content and of course the self-driving car.

Thanks for your input rychan.

dgandhi says...

rychan:

Given that we each have a very slow, massively parallel, biological computer that easily passes our test for intelligence it is not unreasonable to argue that speed is not necessarily at issue.

We're not yet at human performance for most tasks but the gap is quantifiably closing. What are your credentials to dismiss the thousands of top notch computer vision publications in the past decade which clearly demonstrate our progress?

I've got a little 20 MHz chip on my desk that I use for embedded systems; it is Turing complete. Any solution you can come up with for an AI problem you can run (slowly) on my chip, but if you have no such solution, the 80-core monster won't help you. Or perhaps no usable solutions exist for a Turing machine at all, and we need something different, which would require basically starting from scratch; that is my point.

I wasn't saying that this processor is anything revolutionary either, I'm just hoping it keeps Moore's law going. That + lots of data + lots of clever engineering will solve computer vision, which is an AI hard problem.

Parallelism will not overcome the basic problem of reaching the physical minimum gate size, and you still have to get data into the thing, which becomes much more top-heavy when edge cores have to hand all the code and data to the internal cores. You will of course get more gigaFLOPS (advertised on the box in big font), but you won't be able to use anywhere near all of them. This is an 80·n progression vs. Moore's exponential doubling; if this is the new strategy, Moore's law drops away pretty fast.
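A back-of-the-envelope comparison of the two growth shapes being contrasted here (the specific numbers are made up; the point is linear growth versus doubling roughly every two years, one common reading of Moore's law):

    # Toy comparison: a one-time 80x jump that then grows linearly,
    # vs. transistor-count doubling roughly every two years.
    for years in range(0, 31, 3):
        more_cores = 80 * (1 + years)   # linear: keep adding the same cores
        doubling = 2 ** (years / 2)     # exponential: double every 2 years
        print(f"year {years:2d}: linear ~{more_cores:5d}x vs doubling ~{doubling:7.0f}x")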

jwray says...

As I see it, the trouble with AI research now is that it is restricted to creating a variety of algorithms that solve very narrowly defined problems, rather than to developing a complete intelligent being. Cobbling together some narrow algorithms will not create a person. Rather, we should simulate an entire ecosystem of virtual beings living, reproducing, and dying in a virtual environment and promote their evolution through occasional intervention. Given enough processing power to run the simulation at fine enough resolution, and quickly enough, this would surely produce strong AI.
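As a toy version of what that evolutionary loop might look like in Python (nothing like a real ecosystem simulation: the "genome" here is just a bit string and the fitness function is a stand-in):

    import random

    random.seed(42)
    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 100, 60, 0.02

    def fitness(genome):
        # Stand-in "survival" score (count of 1-bits). A real simulation would
        # score behaviour inside a virtual environment instead.
        return sum(genome)

    def mutate(genome):
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Evaluating each individual is independent work -- the part that
        # would spread trivially across many cores.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

    print("best fitness:", max(fitness(g) for g in population))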

dgandhi says...

jwray:

Alright, write an intelligence-survival simulation for BOINC and convince a few thousand people to run it on spare processor cycles. Given that abstract cognition appears to have evolved only once on this planet over a few billion years, I think you will find that your simulation will produce very little in your lifetime, unless you strongly bound your selection criteria, which is what AI researchers do.

Cognition is the consequence of a particular grouping of solutions to smaller problems; the human brain is a composite organ in which each part does its own thing, and working together they solve problems. Modeling one part gets us closer to a working human-brain simulator. Perhaps this simulator is not possible with the technology we are using, and perhaps it is not a very efficient way to solve the problem (AIs would have little need of most of our biological heritage).

While I do agree that artificial selection is useful, and is used for fine-tuning algorithms, the solution you propose may never be computationally feasible, and it is not efficient. We have brains that can find patterns faster than brute force (which is how evolution does it); it makes little sense to use a computer to do something it does poorly when we have brains that do these things well. That argument can also be made against AI in general, but if we are ever going to build functional AIs, the human brain still has the best shot at success.

jwray says...

Computational feasibility can be achieved by abstracting the biological components into larger blocks, of course. I'm not talking about simulating every cell in an organism. Besides, there's no evidence that any sentient being is a better designer than evolution.

dgandhi says...

Besides, there's no evidence that any sentient being is a better designer than evolution.

Umm... Okay: our eyes are wired backwards, our spines and our knees are not efficient for bipedalism, our feet are terribly fragile, and our brains can't process numerical information as quickly or reliably as a $0.05 microchip.

While I agree with you in a meta sense (our brains are a product of evolution, and so the products of our brains are as well), I think you underestimate the waste inherent in blind modification/selection. If we have sufficient understanding of a problem to solve it, or even to minimize the problem set to the point that we can use artificial selection, then brains have gotten us there, because that is what brains do well. Look at how quickly technology is evolving: semiconductor logic was nowhere to be seen 100 years ago, and evolution can't touch that kind of development speed.
