search results matching tag: bandwidth

» channel: learn


Why Being Honest about Ghostbusters is Important

artician says...

I couldn't watch this. Stop wasting my bandwidth with your stupid exposition, youtubers! No one cares about your fucking intros and style!
Long live information!

Go home robots, you're drunk!

MilkmanDan says...

Late to the comment party here, but Slashdot had an interesting explanation of *why* so many of the robots had trouble like this:
"DARPA deliberately degraded communications (low bandwidth, high latency, intermittent connection) during the challenge to truly see how a human-robot team could collaborate in a Fukushima-type disaster. And there was no standard set for how a human-robot interface would work. So, some worked better than others. The winning DRC-Hubo robot used custom software designed by Team KAIST that was engineered to perform in an environment with low bandwidth. It also used the Xenomai real-time operating system for Linux and a customized motion control framework. The second-place finisher, Team IHMC, used a sliding scale of autonomy that allowed a human operator to take control when the robot seemed stumped or if the robot knew it would run into problems."

http://hardware.slashdot.org/story/15/06/10/038224/why-so-many-robots-struggled-with-the-darpa-challenge

Graphics card woes

RFlagg says...

I agree it seems shady to advertise 4 GB if only 3.5 GB is full bandwidth... there should at least have been an asterisk on it. I'm not sure the complaints about the ROPs are as big a deal... if I purchased just off specs I would perhaps be bothered, but really, I look at performance for the dollar. The two articles linked still say they stand by the recommendation due to performance for the dollar, which is the main metric I look at. Then again, if I was running more than 1080p I'd probably be a bit more concerned... That, or if the few games that require 4 GB to run in ultra wouldn't run in ultra on my card that supposedly has 4 GB...

Surprise - Check out the display in the latest Oculus Rift

deathcow says...

EMPIRE maybe they can get some ridiculously high bandwidth link up and going wirelessly?

OK... well at least have it use a single dinky fiber optic cable to do it.

Hey, as long as the resolution and frame rate is there, I don't care if they pull their displays out of old Tempest arcade machines.

EMPIRE said:

You know nothing, billpayer Snow.

the fact that facebook bought the company is what will give them enough funds to actually manufacture their own parts, as they need them to be, and not buy cellphone displays (and, I'm sure other off-the-shelf parts). It will actually help them make a better product.

The cables are needed because of lag. A wireless device wouldn't be able to keep input and movement lag as low as they are trying to make it. And that is a very important point, because of motion sickness.

Collegehumor Breaks Down Net Neutrality

RFlagg says...

Ummm... I'm confused. Does Trancecoach and others like him think that Netflix doesn't pay to access the Internet? That Google, Amazon, Netflix and the like all have a free access pass to the Internet? Or when they say "In other words, people who stream video should pay for it, and not the people who don't." are they talking about end users and not the companies paying millions to access the Internet already? Or are they confused on other aspects?

Perhaps some aspects of this video confused them...

Right now if a person pays $45 a month for 15 Mbps they should expect all that content delivered to them at 15 Mbps. The way the ISPs want to rig it is they want to go to Netflix/YouTube/Google/Amazon and other services and make them pay extra to be delivered at that 15 Mbps. If Netflix doesn't pay, then the ISP slows that content down to 10 Mbps, even though the end user is paying for 15 Mbps access. They aren't coming to the end user, yet, and having them pay extra for streaming access as shown in this video, though I'm sure they'll triple dip that too eventually. (Another problem I have with the video, beyond suggesting they'll just charge the end user extra, is that it portrays Netflix and others as willing partners in this scam, when Google, Microsoft, Amazon, Netflix and all the others have been the biggest supporters of Net Neutrality and are fighting against the cable companies, while the video seems to suggest they'd hand the money over willingly.)

And if they mean the end user... then a person not streaming and only needing access to basic text and web stuff can get the basic 3 Mbps option for only $30, or the 2 Mbps option for $15. Streaming users do pay extra already. They pay for the extra bandwidth... if all you do is browse Facebook and tweet and the like and are using the 15 Mbps or higher plans, then you are an idiot. The end users do pay. As do the content companies like Netflix and YouTube/Google, Amazon and the rest... to the tune of millions a year. Yes, the content itself is far more expensive. For Netflix, streaming a movie is cheaper than sending the DVD; postage is semi-cheap, but the people cost a lot. Still, they pay to access the Internet just like everyone else. Nobody is getting a free ride. This is just the ISPs trying to double, and potentially triple, dip fees, and Net Neutrality seeks to stop them from double and potentially triple dipping. Bad enough we have to put up with banks double dipping ATM fees...

Big companies like Google, Netflix, Amazon and the like can potentially pay the fees if they have to. The question then becomes: can sites like videosift pay whatever ComcastWarner, Verizon, and AT&T want? I know my little blog couldn't pay extra... not that my site's users would need more than the 3 Mbps plan, if that, to access most of the content... save of course when I embed a YouTube video I made.

TLDR: The end user already pays extra if they stream, above and beyond what an end user who doesn't stream pays. Also Netflix, Amazon, Google and the like all pay millions to access the Internet; they don't get their access for free. What the ISPs want to do is tell Netflix: if they want to reach that customer who's paying $40 for 15 Mbps access at the full speed that consumer is already paying for, then Netflix has to pay that consumer's ISP in ADDITION to the costs they are already paying. If they don't pay, then the consumer is given that content at a slower speed than what they are paying their ISP to get it at. The ISPs are trying to double dip, and someday may triple dip. Net Neutrality would stop the ISPs from doing that.

Collegehumor Breaks Down Net Neutrality

MilkmanDan says...

Mostly this, although I also don't think ISPs should charge for download/upload volume or "bandwidth", since it really isn't a limited resource. They already charge you for a given max speed (which often isn't actually delivered). BUT that's another issue. About the whole "free market" angle, think about it like this:

Would a free market have so many industry powerholders embedded into the FCC or as lobbyists? I see lobbying as the antithesis of "free market". Lobbying is a corporation basically giving up on providing a product/service that people actually choose to use because it is legitimately the best choice for them, and just saying "meh, let's rig the game instead of actually trying".

Not to mention that in many places in the US there is NO competition for available ISPs in a given area. You've got option A or, well, no ... that's all there is. And even if you consider the US as a collective entity, the number of "competing" ISPs across the country is pretty small in terms of who actually owns the pipes, and getting smaller all the time with more mega-mergers etc.

Fairbs said:

I would agree with some of your logic if there really were free markets. The Republican party's message is that free markets take care of themselves and are best for the people. This is a big lie, since they don't exist (or rarely do). Democrats also take advantage of this, although they don't focus on it as much. Also, ending NN would decrease competition, as companies with deep pockets could go to the front of the line and force out the small guy.
On a purely bandwidth argument I would mostly agree with you. The more resources you use, the more you should pay. This is already in place, at least for cell phones.
I don't understand your point about the government.

Collegehumor Breaks Down Net Neutrality

Fairbs says...

I would agree with some of your logic if there really were free markets. The Republican party's message is that free markets take care of themselves and are best for the people. This is a big lie, since they don't exist (or rarely do). Democrats also take advantage of this, although they don't focus on it as much. Also, ending NN would decrease competition, as companies with deep pockets could go to the front of the line and force out the small guy.
On a purely bandwidth argument I would mostly agree with you. The more resources you use, the more you should pay. This is already in place, at least for cell phones.
I don't understand your point about the government.

Trancecoach said:

Seems like another non-issue. In other words, people who stream video should pay for it, and not the people who don't. Right now, people who don't are subsidizing some of the costs for those who do. I don't really get the "problem," but I haven't put a lot of time looking into it.

In other words, what's the issue with NN? That they won't let you access porn sites or whatever? I think freeing it up for ISP competition would take care of access and cost issues. Like if Verizon was to introduce "static" onto your calls, then AT&T would take a larger chunk from them by not doing so. In a free market, businesses have to compete for your business. In a free market, you cannot really introduce a false scarcity. Only if there is a cartel or monopoly can that happen (which, in this case -- and in every case -- is ultimately the government).

In a competitive environment, no sane provider would want a reputation as a bad provider who intentionally messes with their own quality of service. That makes no sense. The restriction of ISP competition seems to be more of a problem and it is for this reason that the whole NN issue strikes me as another unnecessary freakout.

The Man Who Turned Paper Into Pixels

Hank vs. Hank: The Net Neutrality Debate in 3 Minutes

scheherazade says...

People miss the point with net neutrality.

The internet is a packet delivery system.
You are literally paying your ISP for a packets-per-second delivery rate across their network.

That literally means, that the ISP is obligated to make an honest best effort to route your packets at the rate you subscribed to.

Any action to deliberately throttle your packets down to below your subscribed rate is deliberately not providing a paid-for service, i.e. fraud/stealing/whatever.

Net neutrality is the concept that they deliver all packets without prejudice.

That they don't inspect your packets, and decide to treat them differently based on their content.

Kind of how the postal service charges the same to send a letter from point A to B, regardless of what you wrote in that letter.
The postal service doesn't say things like :
"This letter describes a picture. We only allow 3 'letters describing a picture' per month, and you already sent 3, so this one will have to wait.".



So for example, comcast v netflix.

Reports such as this build a case for deliberate throttling : http://www.itworld.com/consumerization-it/416871/get-around-netflix-throttling-vpn

We know comcast wanted netflix to pay for network integration/improvement.
One way to do that is by twisting their arm: deliberately throttle netflix traffic to netflix customers until netflix pays up (and, along the way, selectively not deliver the bandwidth comcast customers have paid for).

That would be singling out netflix packets - a non-neutral action.

(blah blah, I changed ISPs because my own experience suggested netflix throttling.)

-scheherazade

Living with Lag - An Oculus Rift experiment

Jinx says...

1000Mbit doesn't necessarily equate to lower latency anyway, and up/download speed isn't even that important for online gaming, unless you want to stream/torrent/upload selfies while playing. It is, however, a nice number that's easy to stick on marketing material. I had a huge fight with an ISP a few years ago after they pretty much flat-out refused to do anything about crippling packet loss of 10ish% at peak times, because it had a "negligible effect on bandwidth". Pretty sad that I get much more reliable net over ADSL copper lines at 10ish Mbit than I had on fibre optic.

teebeenz said:

Perhaps they should have actually done an ad about lag, instead of one about latency. If an ISP can't get that right, probably best to look for another ISP.

Living With Lag - An Oculus Rift Experiment

MilkmanDan says...

Funny, but then the sales line at the back makes the mistake of suggesting that throughput / bandwidth has anything to do with lag.

Sixty Symbols -- What is the maximum Bandwidth?

charliem says...

You are thinking of QAM, Quadrature Amplitude Modulation. That's an interesting question, because QAM essentially produces the results the prof talks about in this video. By modulating the amplitude and phase of a single carrier, a single pulse can represent a whole array of numbers, rather than just a 1 or a 0.

Let's use the easiest QAM example, QPSK (4QAM), where there are 4 possible constellation points for any given 'carrier' signal at a known frequency.

By shifting phase, you can get a 0, 1, 2 or 3, so each symbol carries two bits. Denser schemes shift both phase and amplitude: 16QAM has 16 unique constellation points, so each symbol carries four bits.

Rather than just a 0 or a 1 per symbol, you can have 0 through 15. However, doing this requires both a timeslot and a known carrier window.

The fastest the QAM transmitter can encode symbols onto a carrier is limited by the Nyquist rate: less than half the frequency at which the receiver on the remote end can sample. As you increase the speed of the encoding, you also increase the error rate and introduce more noise into the base carrier signal, in turn reducing your effective available bandwidth.

So it then becomes a balancing act, do I want to encode faster, or do I want to increase my constellation density? The obvious answer is the one we went with, increase in constellation density.

There are much denser variants; I think the highest I've heard of is 1024QAM, where a single 8 MHz-wide carrier carries 10 bits per symbol (1024 unique values for a given 'pulse' within a carrier).

I actually had a lot more typed out here, but the maths that goes into this gets very ugly, and you have to account for noise products that are introduced as you increase both your transmission speed and your receiver sensitivity, thus lowering your SNR and reducing your effective bandwidth for a given QAM scheme.

So rather than bore you with the details: the Shannon-Hartley theorem is the hard-wired physical limitation.

Think of it as an asymptote, and QAM as one method of milking the available space under it.

You can send encoded pulses very fast, but you are limited by Nyquist, and by your receiver's ability to distinguish noise from signal.

The faster you encode, the more noise, the less effective bandwidth... and so begins the ritual of increasing constellation density, building receivers that can decode it, etc.

There is also the aspect of carriers sitting too close to one another. If you do not have enough of a dead band between your receiver's top-end cutoff and the NEXT carrier's low-end cutoff, you can induce what is known as a heterodyne. These are nasty, especially so on fibre, as the wavelengths used can cause a WIDE BAND noise product that makes your effective RF noise floor jump SUBSTANTIALLY, destroying your entire network in the process.

So not only can you not have a contiguous RF band of carriers, one directly after another; if you try to get them too close, you end up ruining everyone's day.

I am sure there will be newer, fancier ways to fill that spectrum with usable numbers, but I seriously doubt they will ever go faster than the limit I proposed earlier (unless they can get better SNR; again, that was just a stab in the dark).

It gives you a good idea of how it works though.

If you want to read more on this, I suggest checking wikipedia for the following;

Shannon Hartley theorem.
Nyquist Rate
Quadrature Phase Shift Keying
Quadrature Amplitude Modulation
Fibre Optic Communication Wavelengths
Stimulated Brillouin Scattering
Erbium Doped Fibre Amplifiers
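If you want to play with these numbers, here's a quick Python sketch of the two relationships above: bits per symbol for an M-point constellation, and the Shannon-Hartley capacity limit. (The 8 MHz carrier and 40 dB SNR below are illustrative figures of my own, not ones from the video.)

```python
import math

def bits_per_symbol(m):
    """Bits carried by one symbol of an M-point constellation (e.g. M-QAM)."""
    return int(math.log2(m))

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), with S/N as a linear ratio."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# QPSK (4QAM) carries 2 bits per symbol; 16QAM carries 4; 1024QAM carries 10.
for m in (4, 16, 1024):
    print(f"{m}-QAM: {bits_per_symbol(m)} bits per symbol")

# Hypothetical 8 MHz carrier at 40 dB SNR: about a 106 Mbit/s ceiling.
print(f"Shannon limit: {shannon_capacity_bps(8e6, 40) / 1e6:.1f} Mbit/s")
```

Note how the channel's Shannon limit doesn't care which constellation you pick; denser QAM just gets you closer to the same ceiling until noise stops you.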

Sixty Symbols -- What is the maximum Bandwidth?

charliem says...

Fibre can go a pretty long distance before it affects the signal though...

Fibre is composed mainly of silica; the purer the fibre, the fewer dispersion issues occur at or around 1550nm (one of the main wavelengths used for long distance transmission, as we can easily and cheaply amplify it using erbium doped segments and some pumps!)

Any impurities in the fibre will absorb the 1550 at a greater rate than other wavelengths, causing linear distortions in the received carrier over greater distances.

In the context of the above video, consider a parallel cable sending data over 100m. If one of those lines is 98m, then every bit sent down that line will arrive out of order.

Stimulated Brillouin scattering is a similar problem at the optical level, and it's one of the main issues we've got to deal with at distance. However, it only ever occurs at or around 1550nm, and only when you are driving that carrier at high powers (i.e. launching into the fibre directly from an erbium doped amplifier at +15 dBm).

There are some fancy ways of getting around that, but they're not cheap.

Anywhere from say around 1260 to 1675nm is the typical bandwidth window we use today.

So, say 415nm of available wavelength window.
If we want that in frequency to figure out the theoretical bits/sec value from the Shannon-Hartley theorem, we take the difference between the band-edge frequencies: c/1260nm minus c/1675nm.

That works out to about 5.9e+13 Hz of available spectrum.

...that's about 59 terahertz.

Assume a typical signal to noise on a fibre carrier of +6dB (haha, not a chance in hell it would be this good across this much bandwidth, but whatever..)

For a single fibre you would then be looking at a Shannon limit of roughly 59 THz x log2(1 + 4), which comes out around 137 Tbit/s.

That's about 137 Terabits per second, or roughly 17 Terabytes per second.

You can fudge that up or down a fair bit based on what the actual SNR would be, but that's your ballpark figure... that's a lot of Terabits.

On one fibre.

Source: I'm a telecoms engineer
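To sanity-check that arithmetic, here's the same sum in Python. The window's frequency span is the difference between the band-edge frequencies (c/1260nm minus c/1675nm), with the same pessimistic +6dB SNR assumed above:

```python
import math

C_LIGHT = 3e8  # speed of light in m/s (approximate)

def window_bandwidth_hz(short_nm, long_nm):
    """Frequency span of a wavelength window: c/lambda_short - c/lambda_long."""
    return C_LIGHT / (short_nm * 1e-9) - C_LIGHT / (long_nm * 1e-9)

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

bw = window_bandwidth_hz(1260, 1675)   # roughly 59 THz
cap = shannon_capacity_bps(bw, 6)      # +6 dB SNR across the whole window
print(f"window: {bw / 1e12:.0f} THz, Shannon limit: {cap / 1e12:.0f} Tbit/s")
```

The trap is treating the 415nm width itself as spectrum; converting each band edge to frequency first is what gives the ~59 THz figure.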

Sixty Symbols -- What is the maximum Bandwidth?

ChaosEngine says...

Lol, I remember studying the theoretical maximum bandwidth of a telephone modem at university. I can barely remember the maths/physics of it, but I do remember the result, which, interestingly, turns out to be less than 56k!

Christ, I feel old now.
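The same Shannon-Hartley sum works for the phone line case. The ~3.1 kHz voice channel and ~35 dB SNR below are my assumed figures for a decent POTS line, not numbers from the original exercise:

```python
import math

# A POTS voice channel passes roughly 300-3400 Hz, so ~3.1 kHz of bandwidth;
# a good analogue line might manage around 35 dB SNR (both assumed figures).
bandwidth_hz = 3100
snr_db = 35

# Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a linear power ratio.
capacity = bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))
print(f"Shannon limit: {capacity / 1000:.1f} kbit/s")  # comfortably under 56k
```

As I recall, 56k modems only managed 56k downstream because the ISP end sat directly on the digital phone network, skipping one analogue conversion; a fully analogue path tops out in the 30s.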

Sixty Symbols -- What is the maximum Bandwidth?

heathen says...

Very interesting video, but a pity he went into all that detail only to end with a figure as vague as "1 terabyte in a few hundredths of a second".

If "a few" is 5 we'd get 20 Tbytes/s, if a few is 50 we'd get 2 Tbytes/s.

From Wikipedia it seems optical fibre bandwidth is also limited by distance, due to dispersion, but current experimental research results are approaching 100 Terabit/s over a single channel.
(26Tbit/s in 2011, 73.7 Tbit/s in 2013)
http://en.wikipedia.org/wiki/Fiber-optic_communication#Parameters

100 Terabit/s is 12.5 Terabyte/s, which is 1 Terabyte in 0.08 seconds.

So while it still doesn't tell us what the theoretical limit is, for our currently achievable maximum speeds "a few" is 8.
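That arithmetic, spelled out as a tiny Python check:

```python
terabits_per_s = 100                     # experimental rate cited above
terabytes_per_s = terabits_per_s / 8     # 8 bits per byte: 12.5 TByte/s
seconds_per_terabyte = 1 / terabytes_per_s
print(f"{terabytes_per_s} TB/s, so 1 TB takes {seconds_per_terabyte:.2f} s")
```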


