search results matching tag: Rendering


exurb1a - You (Probably) Don't Exist

L0cky says...

There is a generally held belief that consciousness is a mystery of science or a miracle of faith; that consciousness was attained instantly (or granted by god), and that one has either attained self awareness or has not.

I don't believe any of that. I believe like all things in biology, consciousness evolved to maximise a benefit, and occurred gradually, without any magic or mystery. The closest exurb1a gets to that is when he says at 6:28:

"Maybe evolution accidentally made some higher mammals on Earth self-aware because it's better for problem solving or something"

We need to know what other people are thinking and this is the problem that consciousness solves. If a neighbouring tribe enters your territory then predicting whether they come to trade, mate, steal or attack is beneficial to survival.

Initially this may be done through simulation - imagining the future based on past experience. A flood approaching your cave is bad news. Being surrounded by lions is not good. Surrounding a lone bison is dinner. Being charged by a screaming tribe is an upcoming fight.

We could only simulate another person's actions; we had no experience that allowed us to simulate another person's thoughts. You might predict that giving your hungry neighbour a meal would suppress their urge to raid your supplies, but you still can't simply open their head and see what they are thinking.

Then for the benefit of cooperation and coordination, we started to talk, and everything changed.

Communication not only allows us to speak our mind, but allows us to model the minds of others. We can gain an understanding of another person's motivations long before they act upon them. The need to simulate another person's thoughts becomes more nuanced and complex. Do they want to trade, or do they want to cheat?

Yet still we cannot look into the minds of others and verify our models of them. If we had access to an actual working brain we could gradually strengthen that model with reference to how an actual brain works, and we happen to have access to such a brain, our own!

If we monitored ourselves then we could validate a general model of thought against real urges, real experiences, real problem solving and real motivations. Once we apply our own selves to a model of thought we become much better at modelling the thoughts of others.

And what better way to render that model than with speech itself? To use all of our existing cognitive skills and simply simulate others sharing their thoughts with us.

At 3:15 exurb1a references a famous experiment showing that we make decisions before we become aware of them. This lends support to the idea that our consciousness is not the driver of our thoughts but a monitor: an interpretation of our subconscious that feeds our model of how people think.

Not everybody is the same. We all have different temperaments. Some of us are less predictable than others, and we tend to avoid such people. Some are more amenable to co-operation, others are stubborn. To understand the temperament of one we must compare them to another. If we are to compare the model of another's mind to our own, and we simulate their mind as speech, then we must also simulate our own mind as speech. Then not only are we conscious, we are self-aware.

Add in a feedback loop of social norms, etiquette, acceptable behaviour, expected behaviour, cooperation and co-dependence, game theory and sustainable societies and this conscious model eventually becomes a lot more nuanced than it first started - allowing for abstract concepts such as empathy, shame, guilt, remorse, resentment, contempt, kinship, friendship, nurture, pride, and love.

Consciousness is magical, but not magic.

Machine separates colors

Cuffed Without Cause

00Scud00 says...

Well, looking it up on Google, the "sobriety test" strictly speaking involves three tests that don't include the breathalyzer, which usually comes after those first tests. He does say breathalyzer at 5:33, but if it really was an open-and-shut case because he refused it, then why did he get off?
From the sounds of it, the cop had no reason to suspect he was drunk in the first place, which renders the tests moot, because he probably wasn't drunk and they knew it. As for why waste time and annoy? From his perspective they were wasting his time and annoying him, so why the hell not.

newtboy said:

4:26... at the station, what he's calling a "sobriety test" is, in most states, a breathalyzer (or blood) test that you must agree to, and not saying yes and taking it is considered refusal, because people do waste time arguing in an attempt to score lower, and ain't nobody got time for that. They told him clearly that he must answer yes or no or it would be considered refusal, which is absolutely normal procedure from what I've seen. He answered "Listen, I was a US Marine, ...bla bla bla... let's take a minute... bla bla bla... explain my rights... bla bla." and never took it, which is refusal under the law.
5:33 confirms this, breathalyzer.

They must have claimed he failed the field test or why cuff him and require more tests at the station, something he omits, which makes sense since he said he joked around while taking it, marching left right instead of heel toeing. At first he insisted on making numerous phone calls first, like that's a right....he knows his rights....Then he wants to stop to set up his camera to record the stop...Then argues more about the test itself. The cops were clearly annoyed with him arguing and not complying before he got out of the car, but he persisted right into jail.

I wouldn't trust his biased recollection to include all the facts, especially since he is "conducting a study on racial profiling". Sounded to me like a case of arguing himself into a charge he was lucky to get out of because the cops stupidly didn't record the stop. From his own descriptions, in California at least, he's totally guilty....you have no right to discussions, and only an idiot would believe the cops will tell you your rights honestly anyway, so why keep asking except to waste time and annoy?

"Number 13" Sci-Fi Short Film - DUST Exclusive Premiere

jmd says...

Just... so... bad. Why is it so hard to write a good script? A storyboard? A director who has seen a movie or two? Let's CinemaSins this bitch:

1. Opening shot is two shots at very wrong focal lengths, or that hole is actually very small.

2. One would think pre-rendered special effects would not have issues with limited fill rates, but this comet clearly looks like it's using a smoke trail from a video game on minimum graphics settings. You can count the number of particles on one hand.

3. For a desert nomad in a sandstorm, she has an amazingly clean face. Also, hoods that pull forward?

4. The nomad is pointing at the clear-as-day impact site of the meteor as if it NEEDED to be pointed out.

5. A fairly large amount of simulated camera shake, despite flames so thin they don't smoke.

6. A horribly done transition shot where the boy is surrounded by smoke, fire, and lava, all except in the direction the camera is pointing.

7. Large tank army that no one notices until it passes them.

8. Physics, or the lack thereof, in the entire scene. Those two bipeds look like they were motion-captured by a two-year-old playing with his toys.

9. The boy's expression of surprise makes no sense for a robot of some sort who has crashed onto the surface of a planet he fully intended to kick ass on. The scowl afterwards makes it even more awkward.

10. What then follows can best be described as live gameplay from a random indie game on the Steam store that uses a mostly black color palette to hide the fact that nothing is texture-mapped, the models are low-polygon, and the physics engine only barely passes as one.

QUAKE: Forefather of the Online Deathmatch-LORE in a Minute

Mekanikal says...

I got a 4MB Voodoo 1 passthrough card when they first came out, and to this day I still think it was the most "holy shit!" game-changer I have ever seen. 800x600 using Glide was unreal. I also had a GF 256, and while it certainly smoked the Voodoos in performance, the difference between software rendering and the Glide API was mind-blowing.

deathcow said:

I had 3DFX then dual Voodoo-2, then geforce 256

Quick D: Invisible Box Challenge

kir_mokum says...

Man, he worked way harder than he needed to. The green on the box was insufficient, rendering it useless, and the tracking markers were also unnecessary.

Green screen special effects are amazing to me

GregTSL says...

This is basically how James Cameron shot Avatar. The backgrounds weren't fully rendered, but had enough detail that he could get a feel for the final result. Also, the actors would be represented in real time in his field of view as their 12-foot-tall counterparts... pretty amazing.

Dashcam Video Of Alabama Cop Who Shot Man Holding His Wallet

lv_hunter says...

Good grief, he's on the ground, probably close to passing out from shock, and they're like "Sir, are you armed?!" "Sir, don't move!" I mean, if he kept moving, would they have shot again?

I can understand it was a quick moment. It was tense when all you saw was something blank in his hands, but damn, go render aid! Hate to say it, but yes, he should have stayed in his car and waited; in his rush to get out, it looked suspicious.

Dashcam Video Of Alabama Cop Who Shot Man Holding His Wallet

bremnet says...

Hmmm... serve and protect. Made a mistake, guy on the pavement bleeding. Three buddies show up. 5+ minutes elapse, and nobody bothers to render any aid; they just kneel and watch the guy bleed. I don't think I've ever seen a more pathetically sad response to another human being's plight and suffering than this.

Thomas Train Stunts

A Computer Vision System's Walk Through Times Square

RFlagg says...

That is a legit question. Ignoring that they used a segment of another video, the question is how quickly it used that video segment to render this. The keyword in the paper (https://arxiv.org/abs/1506.01497) is "Towards Real-Time", which I guess means it may have been delayed a bit from the original video, but I didn't read past the abstract, and that paper is just the basis for the video.
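As a rough illustration of what "towards real time" means in practice: the detector's per-frame latency has to fit inside the stream's frame budget. A minimal sketch, where `is_real_time` and the stand-in detector are hypothetical names for illustration, not anything from the paper or the video:

```python
import time

def is_real_time(detect, frames, fps=30.0):
    """Return True if `detect` keeps pace with a stream running at `fps`.

    `detect` is any per-frame detector callable; we only time it here,
    without assuming a particular model.
    """
    budget = 1.0 / fps                      # seconds available per frame
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    elapsed = time.perf_counter() - start
    return elapsed / len(frames) <= budget  # average latency vs. budget

# A trivially fast stand-in detector easily meets a 30 fps budget.
print(is_real_time(lambda frame: None, frames=[None] * 10))  # prints True
```

A real detector that averages, say, 100 ms per frame would fail a 30 fps budget but could still be useful on a delayed or downsampled stream.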

To be of any real value, say in a car helping it self-drive, the question is how fast it can do it, and whether it can read street signs.

I'm going to guess the reason it doesn't do billboards is that it was trained, for whatever reason, to only look so high. Then again, they took another video and sent it through the processor, so why ignore them? Also, can it recognize "storefront", "billboard", etc.? Reading those and translating in real time would probably be a bit advanced for now, but still...

TRRazor said:

This is awesome.
I wonder if the detection as it is shown in the video is actually real-time, or if some of the information was added later in post.

Steve Jobs Foretold the Downfall of Apple!

Mordhaus says...

As a former employee under both Jobs and Cook, I can tell you exactly what is wrong with Apple.

When I started with Apple, everything we were concerned with was innovating. What could we come up with next? Sure, there were plenty of misses, but when we hit, we hit big. It was ingrained in the culture of the company. Managers wanted creative people, people who might not have been the best worker bee, but who could come up with new concepts easily. Sometimes corporate rules were broken, but if you could show that you were actively working towards something new, then you were OK.

Fast forward to when Cook started running the show: Steve was still alive, but had really taken a back seat. Metrics became a thing. Performance became a watchword. Managers didn't want creative thought; they wanted people who would put their nose to the grindstone and work only on things that headquarters suggested. Apple was no longer worried about innovating; it was concerned with 'maintaining'.

Two examples which might help illustrate further:

1. One of the guys I was working with was constantly screwing around in any free moment with iMovie. He was annoyed at how slow rendering was, which at the time was done entirely on the CPU. Did some of his regular work suffer? Yeah. But he was praised because his concepts helped shift some of the processing to the GPU and allow real-time effects. This functionality made iMovie HD 6 amazing to work with.

2. In a different section of the company, the support side, a new manager improved call times, customer service stats, customer satisfaction, and drastically cut down on escalations. However, his team was considered to be:

a. making the other teams look bad

and

b. abusing the use of customer satisfaction tools, like giving a free iPod shuffle (which literally costs a few dollars to make) to extremely upset customers.

Now they were allowed to do all of these things, no rules were being broken. But Cook was mostly in charge by that point and he was more concerned with every damn penny. So, soon after this team blew all the other teams away for the 3rd month in a row, the new manager was demoted and the team was broken up, to be integrated into other teams willy-nilly.

Doing smart things was no longer the 'thing'. Toeing the line was. Until that changes, nothing is going to get better for Apple. I know I personally left due to stress and health issues from the extreme pressure that Cook kept sending downstream on us worker bees. My job, which I had loved, literally destroyed my health over a year.

Unreal Engine's Human CGI is So Real it's Unreal

ravioli says...

What this company (snapperstech.com) did is put together an upgraded control "rig" to manipulate facial expressions, taking into account muscle limits and interactions, skin elasticity, etc.

A little more info from the video's YT description:
-Adaptive rig: allows combining any number of expressions using an optimized list of blendshapes.
-Real facial muscle constraints: the advanced rig logic simulates real facial muscle constraints.
-Advanced skin shader (for Maya and Unreal): holds up to 16 wrinkle maps and 16 dynamic diffuse maps with micro details and pore stretching.
-Easy to manipulate using facial controllers and/or GUI.
-Compatible with all game engines and animation packages.
-Smooth transition between all the expressions.
-Adjustment layer: freeform manipulation of multiple regions of the face to create unlimited variations of the same expression.

The real-time rendering part is achieved by the Unreal engine itself. The final rendering performance still relies mainly on the hardware used.

ChaosEngine said:

Yeah, the real-time aspect of it is insanely good, although I'd still like to know how much of the rendering budget it takes up, i.e. is this usable in a game or just a research project at the moment?

What do you mean by "only one modifier is being applied"? That's my other criticism of the video: a voiceover explaining the tech would have been more interesting than the music.

I don't believe that "multiple modifiers" would make this look better, for the simple reason that if you're demoing a technology like this, you end with ALL the bells and whistles to make it look as good as possible.

Unreal Engine's Human CGI is So Real it's Unreal

Khufu says...

What you saw was a mesh with a skin shader rendering in real time, so that's how fast it renders. It didn't look terribly hi-res; the real advancement here is the quality of the skin shader (for real time) and the fidelity of the facial rig. Having proper face target shapes all blending together to get complex movements, with skin compression/stretching/wrinkling at this level, has historically been out of reach for anything but pre-rendered CGI.
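The "target shapes all blending together" part is, at its core, the classic blendshape sum: final vertices are the neutral face plus a weighted mix of per-expression offsets. A toy sketch with made-up shapes and names (nothing here is taken from Snappers' actual rig):

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Blendshape mix: neutral is (V, 3) vertices, deltas is (E, V, 3)
    per-expression offsets from the neutral pose, weights is (E,) sliders."""
    weights = np.clip(weights, 0.0, 1.0)  # crude stand-in for muscle limits
    # Weighted sum over the E expression axis, added to the neutral mesh.
    return neutral + np.tensordot(weights, deltas, axes=1)

neutral = np.zeros((4, 3))              # 4-vertex toy "face"
smile = np.full((4, 3), 0.5)            # one expression target
deltas = np.stack([smile - neutral])    # offsets from the neutral pose
mesh = blend(neutral, deltas, np.array([0.8]))
print(mesh[0])                          # → [0.4 0.4 0.4]
```

A production rig layers far more on top of this (corrective shapes, constraint logic, skin sliding), but the weighted-offset sum is the piece that lets many expressions combine smoothly.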

They can probably drop libraries of mocap data onto this, with face markers that match the manipulation points you see in the video, and animators can use them to animate or to clean up/change the motion-capture data.

The skin textures/pore detail/face model are not a technological achievement so much as the work of a skilled artist, and the deformations are the result of someone who really knows their anatomy.

Since there is no animation in this video, no performance, it's hard to judge how realistic it feels. The real trick is always seeing it animated.

ChaosEngine said:

Sorry, not quite there yet. There is no way anyone would actually look at that and think "oh, it's a video of a human".

The uncanny valley is one of those instances where the closer you get to perfection, the more obvious the flaws are.

But in terms of a video game character, this is very, very good.

I would love to know a few more details about it:
- how expensive is the rendering? We're just seeing a face on its own. If we drop it into an actual scene, will it still run?

- how well does it animate/lip sync?

Unreal Engine's Human CGI is So Real it's Unreal

ChaosEngine says...

Yeah, the real-time aspect of it is insanely good, although I'd still like to know how much of the rendering budget it takes up, i.e. is this usable in a game or just a research project at the moment?

What do you mean by "only one modifier is being applied"? That's my other criticism of the video: a voiceover explaining the tech would have been more interesting than the music.

I don't believe that "multiple modifiers" would make this look better, for the simple reason that if you're demoing a technology like this, you end with ALL the bells and whistles to make it look as good as possible.

ravioli said:

The reason it doesn't look quite there is because only one modifier is applied at a time, for the purpose of the demo. You must imagine the possibilities if multiple modifiers are put in altogether. Also, the rendering is done in real-time, so this is in itself pretty amazing.



