search results matching tag: machine learning

» channel: learn


    Videos (25)     Sift Talk (0)     Blogs (1)     Comments (19)   

Algorithm Removes Water From Underwater Pictures

SFOGuy says...

And she's specific: for AI and machine-learning visual processing of images taken of coral reefs (for example, for population counts), it could be very useful indeed.

newtboy said:

For research purposes, I bet it's invaluable.
For instance, accurately knowing coral colors makes identification possible, and accurately measuring the vibrancy of those colors could allow better estimates of reef health.

Privacy is NO LONGER a Social Norm

ChaosEngine says...

"Only 3% of people who use google have actually read the terms and conditions that they agreed to. "

3%?? I would have been amazed if it was as high as 0.3%.
3% would be (conservatively) over 10 million people. I doubt it's anywhere close to that.
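The arithmetic behind that "over 10 million" figure checks out even under a deliberately low assumption. A back-of-the-envelope sketch, assuming a hypothetical lower bound of 1 billion Google users (the real figure is higher):

```python
# Assumption: a conservative lower bound of 1 billion Google users.
users = 1_000_000_000
agreed_without_reading = int(users * 0.97 * 0.03)  # 3% of those who read nothing vs. did
read_the_terms = int(users * 0.03)

# Even the plain 3% figure is 30 million people, well over 10 million.
print(read_the_terms)  # 30000000
```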

I am not sure that privacy as a concept is even possible in a world with machine learning algorithms and big data. That's not a value judgment; I don't think privacy is worthless, I just find it increasingly untenable.

Machine learning has gotten so good that even if you anonymise data, it's now pretty easy to tell a lot about you. Your digital fingerprint is there, and an AI will be 99% correct about your age, gender, politics, sexual orientation, etc., even without you giving up that data.

Neural Network Prototyping On the Go


psycop says...

There are quite a few different languages which can all be compiled down to something that can run on a system like this. On the whole, though, the system isn't writing code itself.

Imagine you have a magical problem-solving machine with a fixed input, all sorts of switches and levers, and an output. You choose the shape and the size of the machine, but you don't choose where to set the levers or which switches to flip.

The machine learning process works by having lots and lots of examples of good inputs and matching correct outputs, known as a training set. It starts with all the levers and switches set randomly. Each time it makes a guess, it looks at how far it was from the right answer given in the training set and sees how it could change its settings to get a little closer next time.

Given enough time and computing power it can fine tune the settings to get to the point where it's very accurate for a wide variety of complex problems.
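The lever-and-switch analogy maps directly onto gradient-style training. A minimal sketch, assuming a toy one-lever "machine" (output = lever × input) and an invented training set generated by a true lever value of 3; real systems tune millions of such settings the same way:

```python
import random

# Toy "machine": output = lever * x. The training set pairs inputs with
# correct outputs produced by a true lever setting of 3.0.
training_set = [(x, 3.0 * x) for x in range(1, 6)]

lever = random.uniform(-10, 10)  # start with the lever set randomly
learning_rate = 0.01

for _ in range(1000):
    for x, correct in training_set:
        guess = lever * x
        error = guess - correct             # how far from the right answer
        lever -= learning_rate * error * x  # nudge the lever a little closer

print(round(lever, 2))  # converges to ~3.0
```

Each pass nudges the lever in whichever direction shrinks the error, which is exactly the "change its settings to get a little closer next time" loop described above.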

AR app lets you paint in 3D in mid-air for others to find

ChaosEngine says...

I can just imagine the poor machine learning algorithm that has to detect dicks.

Some poor bastard will have to spend hours looking at dick photos to help the system learn.

moonsammy said:

I'd hope that they put some time into automated dick-detection technology. Or perhaps just a tagging system to alert about rogue dicks for the admins to delete. I mean, you can't make this and not address that issue in some manner.

Testing Robustness

ChaosEngine says...

So here’s the thing. I’m willing to bet large amounts of money that the robot is using some kind of machine learning algorithm.

Which means that all we can do is set it a task, not tell it HOW to achieve the desired result. As a corollary, we also don’t understand the robot’s “reasoning”.

Where am I going with this?

Well, we’ve told the robot to go through the door. It’s figured out how to do this. And now, we’re fucking with it to force it to come up with new solutions. Well, the most obvious solution is to eliminate the pesky human preventing it from achieving its goal.

I’m not actually joking here.

AUTOMATICA 4k - Robots Vs. Music - Nigel Stanford

ChaosEngine says...

Awwww, it's just a clever music video, with preprogrammed actions.

disappointing.

I mean, ok, there's some reasonably smart control stuff going on, but I was hoping it was going to be some kind of interesting machine learning that could collaborate on the fly.

RetroReport - Nuclear Winter

RedSky says...

Correlation and causation are distinguished by controlling for variables directly where the list of possible covariates or confounders is known and limited. Where it is not, you can use, say, machine learning techniques to infer a model from the data and repeatedly cross-validate it with different test and training samples to ensure that it is rigorous. Read:

https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation

There is nothing about repeatability.

Also: https://en.wikipedia.org/wiki/Cross-validation_(statistics)
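The cross-validation procedure described here (repeatedly holding out part of the data as a test sample) can be sketched in a few lines. A hypothetical k-fold example; the `fit`/`score` functions and toy data are invented purely for illustration:

```python
import random

def k_fold_cross_validate(data, k, fit, score):
    """Split data into k folds; train on k-1 folds, score on the held-out one."""
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    results = []
    for i in range(k):
        held_out = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        model = fit(train)
        results.append(score(model, held_out))
    return sum(results) / k  # average score across all held-out folds

# Toy example: the "model" is just the mean of the training targets,
# scored by mean absolute error on the held-out fold.
data = [(x, 2.0 * x) for x in range(20)]
fit = lambda train: sum(y for _, y in train) / len(train)
score = lambda m, held_out: sum(abs(y - m) for _, y in held_out) / len(held_out)
print(k_fold_cross_validate(data, k=5, fit=fit, score=score))
```

A model that only memorised its training folds would score badly on the held-out folds, which is what makes the validation "rigorous" in the sense used above.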

Repeatability has nothing to do with testing for correlation / causation. Okay, you repeat an experiment. It looks like X causes Y, like in the first test. But it turns out that Z (that you didn't consider or can't measure) is acting on X & Y at the same time, creating the appearance of a relationship between X & Y where none exists. Read:

https://en.wikipedia.org/wiki/Confounding
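The confounding scenario above is easy to simulate: a hidden Z drives both X and Y, and X and Y come out strongly correlated despite having no causal link to each other. A toy sketch with invented numbers:

```python
import random

# Z is a hidden confounder driving both X and Y; X has no direct effect on Y.
random.seed(42)
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.1) for zi in z]
y = [zi + random.gauss(0, 0.1) for zi in z]

def correlation(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(correlation(x, y))  # ~0.99: strong correlation, zero direct causation
```

Repeating this "experiment" any number of times gives the same spurious X-Y relationship, which is exactly why repeatability alone doesn't establish causation.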

If anything the political hype is underblown. Politics deals with the immediate, tangible and the "what directly helps me now." With the financial crisis, politics in the US has decreed that any action on climate change that might marginally impact wages or living standards is out of bounds.

If we assume the risks are real - polluting has specific benefits (cost reduction to polluters) and incredibly dispersed costs which are almost imperceptible for decades while the damage is being done. It requires global coordination for a cost on carbon to be politically feasible. And the effects are seen at least 40 years into the future:

http://www.skepticalscience.com/Climate-Change-The-40-Year-Delay-Between-Cause-and-Effect.html

That's the problem, by the time the effects are obvious, it will be too late to react. In the meantime, you have massive amounts of money, interest groups, politics and delayed effect all acting against any action being taken.

vil said:

No I am not. Science totally relies on cause & effect.

Science has methods to distinguish correlation from causality. Causality means repeatable results, possibility of practical use and my hypocritical benefit. Correlation means randomness and no reason to invest.

I'm not against the notion of global warming or nuclear winter.

As far as nuclear winter is concerned, I don't think there is much difference between a frozen planet and one that is merely a "few" degrees colder than normal for a couple of years. In either case humans are done for. So while the hype was overdone, reality is just as frightening.

Global warming is a projection into the future, and the future is one of the hardest things to predict. I am happy to agree that we are f*cking up our planet and need to stop ASAP. There are measurable indicators that are clearly out of bounds, conclusively because of human activity.

The political hype (of climate change) is a big risk - if the climate straightens out because of external factors humans might be tempted to not stop f*cking up their environment.

Let's stick to facts and not overemphasize various projections.

Morgan | IBM Creates First Movie Trailer by AI [HD]

RedSky says...

The explanation afterwards typifies my skepticism about machine learning and the kind of magical thinking that leads people to believe limitless tasks can be automated beyond set domains.

Of course, algorithms with enough data are going to be effective at determining scary, tender or action segments from movies. But just like how they admit, a human touch is required to then piece it together in a way that resonates on an emotional level.

Trailers ultimately are pretty formulaic so they may be automatable but there are bound to be a whole host of areas where either a deterministic result is not practical or the noise of the algorithm response will be high enough to render the prediction meaningless.

Also too bad the movie's getting panned by reviews, I was kind of excited about watching this.

Introducing FarmBot Genesis

eoe says...

All you haters should maybe learn to code and help them out. It's open source, after all.

I think this is a great idea that will only get better as people start tinkering with it. I see great ways that you could use machine-learning or even just expert advice to know exactly what food you can grow in your conditions (including gophers and chipmunks).

All technology starts out impractical because it's new, expensive, and buggy. Give it a few years, and there will be cheap prefab ones that do tons of cool things.

Buncha haters (except @dag and @siftbot).

Tesla Model S driver sleeping at the wheel on Autopilot

ChaosEngine says...

Actually, I would say I have a pretty good understanding of machine learning. I'm a software developer and while I don't work on machine learning day-to-day, I've certainly read a good deal about it.

As I've already said, Tesla's solution is not autonomous driving, completely agree on that (which is why I said the video is probably fake or the driver was just messing with people).

A stock market simulator is a different problem. It's trying to predict trends in an inherently chaotic system.

A self-driving car doesn't have to have perfect prediction, it can be reactive as well as predictive. Again, the point is not whether self-driving cars can be perfect. They don't have to be, they just have to be as good or better than the average human driver and frankly, that's a pretty low bar.

That said, I don't believe the first wave of self-driving vehicles will be passenger cars. It's far more likely to be freight (specifically small freight, i.e. courier vans).

I guess we'll see what happens.

RedSky said:

@ChaosEngine

I'm not sure you understand what machine learning is. As I said, the trigger for your child.runsInFront() is based on numerical inputs from sensors that are fed into a formula with certain parameters and coefficients. This has been optimized from many hours of driving data, but ultimately it's not able to predict novel events, as it can only optimize off existing data. There is a base level of error from the bias-variance tradeoff to any model that you cannot avoid. It's not simply a matter of logging enough hours of driving. If that base error level is not low enough, then autonomous cars may never be deemed reliable enough to be unsupervised.

See: https://en.wikipedia.org/wiki/Bias-variance_tradeoff
Or specifically: http://scott.fortmann-roe.com/docs/docs/BiasVariance/biasvariance.png

It's the same reason that a stock market simulator using the same method (but different inputs) is not accurate. The difference would be that while 55% correct may be sufficiently accurate to be profitable in the stock market, a driving algorithm needs to be near perfect. It's true that a sensor's reaction time to someone braking unexpectedly may be much better than a human's and prevent a crash, so yes, in certain cases autonomous driving will be safer. But because of exceptional cases, it may never be truly hands-off, and you may always need to be ready to intervene, just like how Tesla works today (and why, on a regulatory level, it passed muster).

The combination of Google hyping its project and poor understanding of math or machine learning is why news reports just parrot Google's reliability numbers. Tesla, too, has managed to convince many people that it already offers autonomous driving, but auto-steer, auto-cruise, and lane-changing tech has existed for around a decade. Volvo, Mercedes and Audi all have similar features. There is a tendency to treat this technology as magical or inevitable when there are unavoidable limitations behind it that may never be surmounted.
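The bias-variance tradeoff linked above can be illustrated numerically. A Monte-Carlo sketch using a deliberately simple, invented estimator (a shrunken sample mean; nothing to do with any real driving model): pushing the shrink factor down trades variance for bias, and neither end of the dial eliminates error:

```python
import random

# Monte-Carlo illustration of bias vs. variance for a simple estimator:
# predict shrink * sample_mean for data drawn from N(mu, sigma).
random.seed(1)
mu, sigma, trials, n = 5.0, 2.0, 2000, 10

def estimate(sample, shrink):
    # shrink=0: always predict 0 (high bias, zero variance)
    # shrink=1: predict the sample mean (unbiased, highest variance)
    return shrink * (sum(sample) / len(sample))

for shrink in (0.0, 0.5, 1.0):
    preds = []
    for _ in range(trials):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        preds.append(estimate(sample, shrink))
    mean_pred = sum(preds) / trials
    bias_sq = (mean_pred - mu) ** 2
    variance = sum((p - mean_pred) ** 2 for p in preds) / trials
    print(f"shrink={shrink}: bias^2={bias_sq:.2f}, variance={variance:.2f}")
```

The middle setting shows the tradeoff directly: some bias, some variance, and a floor of total error that more data shrinks but never removes entirely, which is the "base level of error" referred to above.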


Tesla Model S driver sleeping at the wheel on Autopilot

ChaosEngine says...

I wasn't talking about Tesla, but the technology in general. Google's self-driving cars have driven over 1.5 million miles in real-world traffic conditions. Right now, they're limited to inner city driving, but the tech is fundamentally usable.

There is no algorithm for driving. It's not
if (road.isClear())
    keepDriving()
else if (child.runsInFront())
    brakeLikeHell()

It's based on machine learning and pattern recognition.

This guy built one in his garage.

Is it perfect yet? Nope. But it's already better than humans and that's good enough. The technology is a lot closer than you think.
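The contrast being drawn, learned pattern recognition rather than hand-written rules, can be sketched with a toy nearest-neighbour classifier. Everything here (feature names, numbers, action labels) is invented for illustration; real driving systems use far richer sensor inputs and models:

```python
# Instead of hand-coded if/else rules, a learned system generalises from
# labelled examples. A hypothetical 1-nearest-neighbour sketch:

training = [
    # (distance_ahead_m, obstacle_speed_mps) -> action
    ((50.0, 0.0), "keep_driving"),
    ((40.0, 1.0), "keep_driving"),
    ((5.0, 2.0), "brake_hard"),
    ((3.0, 1.5), "brake_hard"),
    ((15.0, 0.5), "slow_down"),
]

def classify(sensors):
    """Return the action of the closest training example (squared distance)."""
    def dist(example):
        features, _ = example
        return sum((a - b) ** 2 for a, b in zip(features, sensors))
    _, action = min(training, key=dist)
    return action

print(classify((4.0, 1.8)))   # -> brake_hard
print(classify((45.0, 0.2)))  # -> keep_driving
```

The system never saw (4.0, 1.8) during "training", yet it picks a sensible action by similarity to past examples; that generalisation, rather than an enumerated rulebook, is the point.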

RedSky said:

Woah, woah, you're way overstating it. The tech is nowhere near ready for full hands-off driving in non-ideal driving scenarios. For basic navigation Google relies on maps and GPS, but the crux of autonomous navigation is machine learning algorithms. Through many hours of data-logged driving, the algorithm associates certain sensor inputs with certain hazards more and more accurately via equation selection and coefficients. The assumption is that at some point the algorithm would be able to accurately and reliably identify and react to pedestrians, pot holes, construction areas, temporary traffic lights and police stops, among an almost endless litany of possible hazards.

They're nowhere near there though and there's simply no guarantee that it will ever be sufficiently reliable to be truly hands-off. As mentioned, the algorithm is just an equation with certain coefficients. Our brains don't work that way when we drive. An algorithm may never have the necessary complexity or flexibility to capture the possibility of novel and unexpected events in all driving scenarios. The numbers Google quotes on reliability from its test driving are on well mapped, simple to navigate roads like highways with few of these types of challenges but real life is not like that. In practice, the algorithm may be safer than humans for something like 99% of scenarios (which I agree could in itself make driving safer) but those exceptional 1% of scenarios that our brains are uniquely able to process will still require us to be ready to take over.

As for Tesla, all it has is basically auto-cruise, auto-steer and lane changing on request. The first two are just the car keeping in lane based on lane-marker input from sensors, and slowing down and speeding up based on the follow distance you give it. The most advanced part is changing lanes when you indicate, which will effectively avoid other cars and merge. It doesn't navigate, it's basically just for highways, and even on those it won't take your exit for you (and apparently will sometimes dive into exits you didn't want, based on lane-marker confusion, from what I've read). So basically this is either staged or this guy is an idiot.



