search results matching tag: Neural


Multi-Agent Hide and Seek

L0cky says...

This isn't really true, though, and it greatly understates how amazing this demo, and current AI in general, actually are.

Saying the agents are obeying a set of human-defined rules / freedoms / constraints and objective functions would lead one to imagine something more like video game AI.

Typically video game AI works on a set of weighted decisions and actions, where the weights, decisions and actions are defined by the developer; a more complex variation of:

if my health is low, move towards the health pack,
otherwise, move towards the opponent

In this demo, no such rules exist. The agent isn't given any weights (health), rules (if health is low), or instructions (move towards the health pack). I guess you could apply neural networks to traditional game AI to determine the weights for decision making (which are typically hard-coded by the developer), but that would be far less interesting than what's actually happening here.
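Roughly, that hand-coded approach looks like the sketch below. It's a minimal illustration only: the Entity class, the scores and the weights are invented and come from no real game engine.

    from dataclasses import dataclass

    @dataclass
    class Entity:
        position: tuple
        health: float = 100.0

    def choose_action(agent, health_pack, opponent):
        # Every rule and weight below is authored by a developer.
        scores = {
            ("move_towards", health_pack.position): (100 - agent.health) * 2.0,  # prefer healing when hurt
            ("move_towards", opponent.position): agent.health * 1.0,             # prefer attacking when healthy
        }
        return max(scores, key=scores.get)

    hurt_agent = Entity(position=(0, 0), health=25.0)
    print(choose_action(hurt_agent, Entity((5, 5)), Entity((9, 1))))  # -> ('move_towards', (5, 5))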

Instead, the agent is given a set of inputs, a set of available outputs, and a goal.

4 Inputs:
- Position of the agent itself
- Position and type (other agent, box, ramp) of objects within a limited forward facing conical view
- Position (but not type) of objects within a small radius around the agent
- Reward: Whether they are doing a good job or not

Note the agent is given no information about each type of object, or what they mean, or how they behave. You may as well call them A, B, C rather than agent, box, ramp.

3 Outputs:
- Move
- Grab
- Lock

Again, the agent knows nothing about what these mean, only that it can enable and disable each of them at any time. A good analogy is someone giving you a game controller for a game you've never played. The controller has a stick and two buttons, and you figure out what they do by using them. It'd be just as accurate to call the outputs stick, A, and B rather than move, grab, lock.

Goal:
- Do a good job.

The goal is simply for the reward input to be maximised. A good analogy is saying 'good girl' or giving a treat to a dog that you are training when they do the right thing. It's up to the dog to figure out what it is that they're doing that's good.

The reward is entirely separate from the agent, and agent behaviour can be completely changed just by changing when the reward is given. The demo is about hide and seek, where the agents are rewarded for not being seen / seeing their opponent (and not leaving the play area). The agents also succeeded at other games, where the only difference to the agent was when the reward was given.
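In code, the whole interface the agent sees is about this narrow. This is a hypothetical sketch, not OpenAI's actual implementation; the environment, observation fields and placeholder policy are stand-ins.

    import random

    def hide_and_seek_reward(hider_visible):
        # The "rules of the game" live entirely in the reward signal: the hider
        # is rewarded for staying unseen. Swap this one function and the same
        # agent will learn a completely different game.
        return 1.0 if not hider_visible else -1.0

    def policy(observation):
        # Stand-in for the learned neural-network policy mapping raw
        # observations to the three outputs (move, grab, lock).
        return {"move": random.choice(["forward", "left", "right"]),
                "grab": random.random() < 0.5,
                "lock": random.random() < 0.5}

    # One step of the loop: observe -> act -> receive reward. Training just
    # nudges the policy so the accumulated reward goes up; nothing else is
    # ever explained to the agent.
    observation = {"self_pos": (0, 0), "cone_objects": [], "nearby_objects": []}
    action = policy(observation)
    reward = hide_and_seek_reward(hider_visible=False)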

It isn't really different from physically building the same play space, dropping some rats in it, and rewarding them with cheese when they are hidden from their opponents - except rats are unlikely to figure out how to maximise their reward in such a 'complex' game.

Given this description of how the AI actually works, the fact that they came up with complex strategies like blocking doors, ramp surfing, taking the ramp to stop their opponents from ramp surfing, and general cooperation with other agents - without any code describing any of those things - is pretty amazing.

You can find out more about how the agents were trained, and other exercises they performed here:

https://openai.com/blog/emergent-tool-use/

bremnet said:

Another entrant in the incredibly long line of adaptation / adaptive learning / intelligent systems / artificial intelligence demonstrations that aren't. The agents act based on a set of rules / freedoms / constraints prescribed by a human. The agents "learn" based on the objective functions defined by the human. With enough iterations (how many times did the narrator say "millions" in the video?)... Sure, it is a good demonstration of how adaptive learning works, but the hype-fog is getting a bit thick and sickening, folks. This is a very complex optimization problem being solved with impressive and current technologies, but it is certainly not behavioural intelligence.

Infinite Tucker Takes a Dive in a televised race.

newtboy says...

*dryheave*
You should hang your head in shame for spreading that twaddle, even in jest. I cannot believe that person has a job in the sciences.

First, claiming that a 20-watt charge of energy is our consciousness is absolute nonsense; that's simply the power it takes to run a consciousness. Your neural pathways in operation are that consciousness. It's like claiming the electricity from the socket is your computer's operating system, and that when your computer crashes all your programs and data are safe somewhere in the socket. *facepalm

Second, he seems ignorant of entropy, the process through which most energy eventually degrades to heat. No distinctive or recognizable patterns are retained in that metamorphosis... heat is heat.

As to these "near death experiences": studies were done because people kept claiming to come out of their bodies, hover above them, and watch themselves being saved before returning. But when a scrolling lighted message was placed on top of the cabinets in multiple emergency and operating rooms, visible only from above, not a single person who claimed to have been out of body ever saw the messages blinking at them, because it's a delusion.

Made me throw up a little there, thanks.

Who Is America? (2018) | First Look | Sacha Baron Cohen SHOW

ChaosEngine says...

Ok, I gave it another go.

Jesus fucking christ.... you're right. It's still not that funny, but it is terrifying. That asshole laughing at the "it's not rape if it's your wife" gag? I think I could be pro-gun just in the hope that someone shoots that motherfucker.

To be honest, at this stage, I just found the accent and makeup distracting. It's unnecessary. You could probably just talk to these guys and they would say the same shit. No false persona required.

Ok, the bit about the kids being able to see in slow motion and the "cardi b neural pathway to the wiz khalifa" did make me laugh.

bcglorf said:

I agree with you, I reluctantly watched this because my opinion of Sacha's other stuff has been the same. You fish long enough and edit enough and you can get a lot of stuff together making people look stupid. I've found I really don't enjoy most of his previous schticks because of this.

This bit is different if you watch the last half. It isn't comedy-style funny, but rather the laugh-instead-of-cry funny of the Daily Show in its prime. I like this BECAUSE it has the potential to ruin people's careers, and normally that's what I've hated about Sacha's schticks before.

However, getting lobbyists, and then elected congressmen, to advocate on camera for arming school children as young as 4-5 helps society. If we can identify people this monstrous, compromised, or willing to pander, and get some of them out of office, that's good.

NVIDIA Research - AI Reconstructs Photos

bremnet says...

As hamsteralliance says, ContentAware uses proximity matching and relative area matching. If you tried to fill in the white space with ContentAware, it'd be full of everything except eyes. The nVidia folks used thousands of images to train the neural net (i.e. generate the model using training data), which captures more discrete sequential or spatial relationships between features (i.e. eyes go on either side of the nose, below the eyebrows, level, with a plausible interpupillary distance, etc.). The neural approach ALWAYS needs training data sets - it doesn't appear (from reading the paper) to use any adaptive or learning algorithm outside of the neural framework (so it's not AI in the sense that it learns from environmental stimulus and alters its response... that I can see, anyway. The paper doesn't get into the minutiae). But I'd still date her, if only she'd have me.
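In rough terms, the training step described above boils down to something like the PyTorch sketch below. This is not NVIDIA's partial-convolution model; the tiny network, toy data and square mask are invented purely for illustration. The idea is just: punch holes in training images and penalise the net for reconstructing the hole badly, so it has to learn where features like eyes belong.

    import torch
    import torch.nn as nn

    net = nn.Sequential(                            # tiny stand-in for the real network
        nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(100):                         # the real thing trains on thousands of images
        imgs = torch.rand(8, 3, 64, 64)             # toy batch standing in for face photos
        mask = torch.ones(8, 1, 64, 64)
        mask[:, :, 16:48, 16:48] = 0                # the hole to be filled in
        inp = torch.cat([imgs * mask, mask], dim=1) # masked image plus the mask itself
        out = net(inp)
        loss = ((out - imgs) ** 2 * (1 - mask)).mean()  # only score the hole
        opt.zero_grad(); loss.backward(); opt.step()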

hamsteralliance said:

I think one of the key things is that it was filling in the eyes with eyes. It was using completely different color eyes even and it knew where they needed to go. Content Aware only uses what's in the image, so it would just fill in that area with flesh and random bits of hair and mouth. This seems to pull from a neural network database thingymajigger.

How Do Machines Learn? - CGP Grey

NVIDIA Research - AI Reconstructs Photos

hamsteralliance says...

I think one of the key things is that it was filling in the eyes with eyes. It was using completely different color eyes even and it knew where they needed to go. Content Aware only uses what's in the image, so it would just fill in that area with flesh and random bits of hair and mouth. This seems to pull from a neural network database thingymajigger.

ChaosEngine said:

That's cool, but how is it different from Photoshop's "content aware fill" that debuted 8 years ago?

Ashenkase (Member Profile)

A.I. Is Progressing Faster Than You Think

Ickster says...

For the time being, I don't think that's a concern; with all of the incredible progress being shown in using neural networks to replicate (and yes, improve on) human skills, I'm not aware of any real advances in any field related to giving AI any sort of actual will.

What's more concerning to me is how this sort of technology will be put to use to categorize and control people. I'm not talking about shadowy cabals, or even evil corporations--I'm talking about the unintentional consequences of being able to accurately reduce people to a set of metrics and predictable behaviors, and how that may push culture to a bland algorithmic mush rather than the chaotic but vibrant human mess it's always been. Time will tell, I guess.

ChaosEngine said:

That is really cool and very scary.

What are the chances of us being able to control an AI? Next to zero, IMO.

Neuroscientist Explains 1 Concept in 5 Levels of Difficulty

dubious says...

I'm a bit surprised the grad student or expert didn't discuss neuromodulators more. The fact is we already have the full connectome of a much simpler system, a worm (C. elegans). And this full mapping is considered insufficient to fully understand even the worm's simple behavior, because it doesn't capture the diversity of different neuromodulators and how they affect processing in neurons. It matters whether the neuron is releasing dopamine, serotonin, glutamate, etc. There are ways to approximate these from EM images by analyzing synapse properties, but ultimately it points to a much larger problem in understanding neural processing.

In a similar vein, the connectome project does not do a good job of capturing synaptic strength. We don't really know, just from the electron microscopy, how strong the connections are. We can try to approximate it by looking at the size and formation of the synapse, but ultimately this falls short.

For instance, my memory is that thalamocortical projections (thalamic nuclei to L4 of the cortex) do not make up the majority of inputs to L4 at the structural-connectivity level, but those connections are much stronger than the more numerous cortico-cortical connections. I don't think the connectome from EM images will be able to pull that out.
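A toy illustration of why counts alone can mislead (the numbers are invented, not measurements): a neuron receiving many weak cortico-cortical inputs and a few strong thalamic inputs is dominated by the few strong ones, yet an EM-derived connectome mostly records who connects to whom.

    cortical_inputs = [0.1] * 40   # numerous but weak synapses
    thalamic_inputs = [2.0] * 5    # few but strong synapses

    print(sum(cortical_inputs))    # 4.0  -> what the connection count alone suggests dominates
    print(sum(thalamic_inputs))    # 10.0 -> what actually drives the neuron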

The connectome is important, the same way knowing the human genome is important. However, it's really not going to tell us how to simulate a person. It's an important step, to be sure, and one we are still a good way from finishing, last I checked (which was three years ago...).

eric3579 (Member Profile)

oritteropo says...

That's a shame; I thought it was quite an interesting explanation of all those Google DeepDream images.

There was another computerphile video just on the neural networks used, linked in the description, but I didn't watch it.

The most basic idea is reasonably straightforward - the neural networks are being used to classify images, so there is a low-level categoriser for low-level things like edges and corners, and then a higher-level one that looks for how edges and corners are arranged to make, say, ears... and then a top-level one to look for how ears and noses are arranged to make cats.

The complicated bit is that they then run the network backwards: instead of using it to assign a probability that something is an ear, they adjust the image itself until the ear detector responds strongly, so an ear appears even though it wasn't really there to start with.
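Concretely, that "backwards" step is gradient ascent on the image rather than on the network's weights. A rough sketch, assuming a generic pretrained torchvision classifier (not the exact network Google used):

    import torch
    from torchvision import models

    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from any image

    for _ in range(20):
        act = img
        for i, layer in enumerate(model):
            act = layer(act)
            if i == 20:                  # stop at a mid-level layer ("ears", say)
                break
        act.norm().backward()            # how strongly that layer responds
        with torch.no_grad():
            img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)  # amplify whatever it saw
            img.grad.zero_()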

Since I'm not really saying anything that they didn't, I'm assuming that didn't help?

eric3579 said:

I was so overwhelmed by this one. So very lost.

Tesla Model S driver sleeping at the wheel on Autopilot

bremnet says...

The inherently chaotic event that exists in the otherwise predictable / trainable environment of driving a car is the unplanned / unmeasured disturbance. In control systems that are adaptive or self-learning, the unplanned disturbance is the killer - a short-duration, unpredictable event to which the system is unable to respond within the control limits that have been defined through training, programming and/or adaptation. The response to an unplanned disturbance is often to default to an instruction that is very much human-derived (i.e. stop, exit gracefully, terminate the instruction, wait until conditions return to controllable boundary conditions, or freeze in place), which, depending on the disturbance, can be catastrophic. In our world, with humans behind the wheel, let's call the unplanned disturbance the "mistake". A tire blows, a load comes undone, an object falls out of or off of another vehicle (human, dog, watermelon, gas cylinder), etc.
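The fallback pattern being described is roughly this (a minimal sketch with invented thresholds and names, not any vendor's actual code): the adaptive controller handles what it was trained on, and anything outside those bounds trips a hard-coded, human-derived default.

    TRAINED_RANGE = (-1.0, 1.0)     # disturbance magnitudes seen during training/adaptation

    def adaptive_controller(measurement):
        # Stand-in for the learned model's normal control action.
        return -0.5 * measurement

    def control_step(measurement):
        lo, hi = TRAINED_RANGE
        if lo <= measurement <= hi:
            return adaptive_controller(measurement)
        # Unplanned disturbance: outside anything the system adapted to,
        # so fall back to the human-derived default.
        return "EMERGENCY_STOP"

    print(control_step(0.3))        # normal operation              -> -0.15
    print(control_step(7.5))        # blown tire, fallen load, ...  -> EMERGENCY_STOP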

The concern from my perspective (and I work directly with adaptive / learning control systems every day - fundamental models, adaptive neural-type predictors, genetic algorithms, etc.) is the response to these short-duration / short-response-time unplanned disturbances. The videos I've seen and the examples I have reviewed don't deal with these very short-timescale events and how to manage the response, which in many cases is event-dependent. I would guess that the first death that results from the actions or inaction of self-driving vehicles will put a major dent in the program, if not halt it. Humans may be fallible, but we are remarkably (infinitely?) more adaptive in our combined conscious / subconscious responses than any computer is or will be in the near future, in both the appropriateness of the response and the time scale of generating it.

In the partially controlled environment (i.e. there is no such thing as 100%) of an automated warehouse and distribution center, self-driving works. In a partially controlled environment where ONLY self-driving vehicles are present on the roadways, then again, this technology will likely succeed. The mixed environment, with self-driving co-mingled with humans (see "fallible" above), is not presently viable, and I don't think it will be in the next decade or two, partially due to safety risk and partially due to the management of these short-timescale unplanned disturbances that can call for vastly different responses depending upon the specific situation at hand. In the flow of traffic we encounter the majority of the time, I would agree that this may not seem like an issue to some (in 44 years of driving, I've been in 2 accidents, so I'll leave the risk assessment to the actuaries). But one death, and we'll see how high the knees jerk. And it will happen.

My 2 cents.
TB

ChaosEngine said:

Actually, I would say I have a pretty good understanding of machine learning. I'm a software developer and while I don't work on machine learning day-to-day, I've certainly read a good deal about it.

As I've already said, Tesla's solution is not autonomous driving, completely agree on that (which is why I said the video is probably fake or the driver was just messing with people).

A stock market simulator is a different problem. It's trying to predict trends in an inherently chaotic system.

A self-driving car doesn't have to have perfect prediction, it can be reactive as well as predictive. Again, the point is not whether self-driving cars can be perfect. They don't have to be, they just have to be as good or better than the average human driver and frankly, that's a pretty low bar.

That said, I don't believe the first wave of self-driving vehicles will be passenger cars. It's far more likely to be freight (specifically small freight, i.e. courier vans).

I guess we'll see what happens.

Why are there dangerous ingredients in vaccines?

worthwords says...

Wrong: 100% bioavailability is when a substance is introduced *intravenously*, not intramuscularly or subcutaneously.

>> occasional inadvertent ingestion and inhalation.
This is the most common route - the skin is a major part of the immune system, keeping pathogens out. We are exposed each day to thousands of compounds which trigger an immune response and antibody creation via the respiratory system.

>> These damaging elements have perfect access to the brain
There is something called the blood-brain barrier, and in any case the pathogen is injected locally, as mentioned, not systemically.

>> Did you know autism is a known neural disruption?
This is a nonsense statement. The truth is we know very little about autism; while there are associations, the cause is not clear, and the association with vaccines was initiated by a dishonest and discredited 'researcher'.

I understand your basic premise, but this is cargo cult science at its worst. Very sad.
If you would like to learn more about bioavailability and how it's measured, there are some good basic books on pharmacodynamics which are quite easy to read.
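For anyone curious what measuring bioavailability actually involves: absolute bioavailability is the ratio of drug exposure (area under the plasma-concentration curve, AUC) for the test route to the exposure after an intravenous dose, which is 100% by definition. A small worked example with invented numbers:

    auc_iv = 120.0      # AUC after an IV dose (mg·h/L) - the 100% reference
    auc_im = 90.0       # AUC after the same drug given intramuscularly
    dose_iv = 10.0      # mg
    dose_im = 10.0      # mg

    F = (auc_im / auc_iv) * (dose_iv / dose_im)
    print(f"absolute bioavailability = {F:.0%}")   # 75% - an injection is not automatically 100%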

Sniper007 said:

Our bodies are best at responding to pathogens that enter our system normally - over mucus membranes, through skin contact, and via occasional inadvertent ingestion and inhalation.

Directly injecting pathogens (and a whole host of other known toxins) straight into the bloodstream puts their bioavailability at 100%, instantly. These damaging elements have perfect access to the brain, and all other internal organs, giving the body almost no chance whatsoever to deal with the invading harmful elements. You can expect to see symptoms manifest in minutes, hours, or days - and this is exactly what you do see in vaccine-related injuries.

Aluminum, formaldehyde, cyanide, and other elements we do eat, and they are harmless when found embedded in their naturally occurring places. Injecting those refined elements (mixed together with all kinds of other poisons) directly into the bloodstream is nowhere close to eating unrefined foods that have the same elements bonded to other molecules which render them inert or beneficial.

What is the bioavailability of aluminum found in a banana when eaten?

What is the bioavailability of that same quantity of aluminum when the banana is pulverized and injected into the bloodstream?

What is the bioavailability of that same quantity of aluminum when it's refined, and no part of the banana except the aluminum is injected directly into the bloodstream?

Their description of the actual effect of the aluminum in particular is incomplete. Aluminum is a known neural disruptor. If it reaches the brain directly (remember, bioavailability is at 100%) the aluminum will disrupt neurons. This may, in some cases, result in a neural disruption. Did you know autism is a known neural disruption?


Jurassic World - Official Super Bowl Spot

kceaton1 says...

Jurassic Park when it came out was simply: a phenomenon. I've never seen movie theaters packed for two weeks straight--no matter the time--for the same show. Everyone had seen the show over and over again. It was simply too amazing--it was the first show to PERFECTLY nail CGI--and it picked one of the best topics for CGI that you could... Who can ever forget the first time you saw and heard that T-Rex step out into the clearing and roar. It was mesmerizing (I do feel bad for those of you that hated it; there will always be haters, for any movie, or any book...but I think those of us that liked it all got the same sense of wonderment from that show...those scenes; which IS why we kept going back). It reminded me of the similar feeling you get from amusement park rides (pick your ride that fits what I'm describing).

The first time I saw that, I had to do a double take. Nothing, EVER, had been even remotely close to being that good. I mean nothing. Seeing the "gigantic" Brachiosaurus was just amazing (though the "brachi", at roughly 26m in length, is utterly dwarfed by sauropods found since, like Amphicoelias fragillimus, which could be as long as 60m). This IS the movie that made CGI a reality for movies and mainstreamed it.

It helped that I saw the movie on a screen that was as big as an IMAX: one of those old-fashioned theaters with a balcony and decorations. It was later torn down and replaced by a screen half its size that still fit just as many people (ah, what greed does to us)...

It was the T-Rex scene that left us awestruck and electrified--it truly felt like a dinosaur had come back to life... and yes, it was a bit terrifying. Add in the great music, the well-done sound (who can forget our *THX* openings), and something so well executed it was essentially new--the CGI--and it was a hit that people saw many times over.

Jurassic Park did for CGI what Star Wars did for extended special effects and the company(s) that created it. Both jump-started a new generation of movies. Avatar tried to bring us into the 3D realm (which I DO like, and I would say it "worked" as well as it possibly could... I have a 3D HDTV and quite a collection of shows... but...), yet 3D has too many issues left for it to "change" things *yet*. Sound is another place that can change things (along with many other aspects and ideas that add to the sensory perception of a movie; maybe we just have to wait until we can connect almost directly, neurally).

I hope this movie will be worth watching (I hope it can end up being much more than that), but it merely looks like a huge money-grabbing scheme (plus, Jurassic Park was at least based on a pretty good book, which BTW is worth reading even if you saw the movie). The fact that the new huge "T-Rex/Velociraptor" seems impervious to a 30mm machine gun makes me want to just... laugh; then add in the swarm of flying dinosaur people-snatchers.

The Daily Show: Glass Half Empty

ChaosEngine says...

It's pretty easy to laugh at glass users as inconsiderate dickheads with stupid looking technology. Ya know, the same way everyone did with cell phone users back in the 80s.

I don't particularly like glass or the concept of everyone recording all the time, but it is going to happen. And what's more, it's going to be impossible to tell.

What happens when the camera/display aspect of glass becomes small enough that it's just a contact lens? Or, projecting a bit further, when we have neural interfaces that can directly record vision? Yeah, it all sounds a bit sci-fi, but then so would a smartphone back in the 80s.

History has shown that almost every outright dismissal of new technology as a fad has been wrong.

@newtboy, by the way, I believe glass does have a visible recording indicator.


