search results matching tag: gases


enoch (Member Profile)

radx says...

Have you read any super-depressing articles lately? If not, try this one: http://www.truth-out.org/news/item/22790-the-vanishing-arctic-ice-cap

Nothing new, really. Just another reminder that we're thoroughly bummed.

"The 40-year lag between emissions of greenhouse gases and consequent rise in global-average temperature suggests our planetary fate was sealed decades ago."

It would have ruined my day, but a coworker brought some delicious cheese cake along, so everything's in balance.

newtboy (Member Profile)

BoneRemake says...

For reference because this confuses the shit out of me.


Nonflammable -

non·flam·ma·ble
adjective \-ˈfla-mə-bəl\

: not burning or not burning easily : not easily set on fire
Full Definition of NONFLAMMABLE
: not flammable; specifically : not easily ignited and not burning rapidly if ignited

Flammable -



flam·ma·ble
[flam-uh-buhl]
adjective
easily set on fire; combustible; inflammable.
Origin:
1805–15; < Latin flammā(re) to set on fire + -ble
inflammable -

in·flam·ma·ble
/inˈflaməbəl/
adjective
adjective: inflammable

1.
easily set on fire.
"inflammable and poisonous gases"
synonyms: flammable, combustible, incendiary, ignitable;
volatile, unstable
"inflammable fabrics"
antonyms: fireproof


WHY NOTS THE ON FIRES OR NOT ON FIRES-ABLE ??

The Phone Call

bobknight33 says...

True, but the Atheist also holds the "belief" that there is no GOD. So which belief is more correct? For me to get into a biblical debate with you and the atheist sift community would be pointless. It's like the saying: you can lead a horse to water, but you can't make him drink. So this makes me search the web for other ways to argue the point. Here is 1 of them.

Mathematically speaking, evolution falls flat on its face.
Lifted from site: http://www.freewebs.com/proofofgod/whataretheodds.htm



Suppose you take ten pennies and mark them from 1 to 10. Put them in your pocket and give them a good shake. Now try to draw them out in sequence from 1 to 10, putting each coin back in your pocket after each draw.

Your chance of drawing number 1 is 1 in 10.
Your chance of drawing 1 & 2 in succession is 1 in 100.
Your chance of drawing 1, 2 & 3 in succession would be one in a thousand.
Your chance of drawing 1, 2, 3 & 4 in succession would be one in 10,000.

And so on, until your chance of drawing from number 1 to number 10 in succession would reach the unbelievable figure of one chance in 10 billion. The object in dealing with so simple a problem is to show how enormously figures multiply against chance.
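The arithmetic in the penny example is easy to check: each draw (with replacement) is an independent 1-in-10 event, so the odds multiply. A minimal sketch (the function name is mine):

```python
# Each draw with replacement has a 1-in-10 chance of matching the next
# number in the sequence, so the probabilities multiply.
def odds_against(n: int) -> int:
    """Denominator of the 1-in-X chance of drawing pennies 1..n in order."""
    return 10 ** n

assert odds_against(1) == 10          # 1 in 10
assert odds_against(4) == 10_000      # 1 in 10,000
print(odds_against(10))               # 10,000,000,000 -- "one chance in 10 billion"
```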

Sir Fred Hoyle similarly dismisses the notion that life could have started by random processes:

Imagine a blindfolded person trying to solve a Rubik’s cube. The chance against achieving perfect colour matching is about 50,000,000,000,000,000,000 to 1. These odds are roughly the same as those against just one of our body's 200,000 proteins having evolved randomly, by chance.

Now, just imagine, if life as we know it had come into existence by a stroke of chance, how much time would it have taken? To quote the biophysicist, Frank Allen:

Proteins are the essential constituents of all living cells, and they consist of the five elements, carbon, hydrogen, nitrogen, oxygen and sulphur, with possibly 40,000 atoms in the ponderous molecule. As there are 92 chemical elements in nature, all distributed at random, the chance that these five elements may come together to form the molecule, the quantity of matter that must be continually shaken up, and the length of time necessary to finish the task, can all be calculated. A Swiss mathematician, Charles Eugene Guye, has made the computation and finds that the odds against such an occurrence are 10^160, that is 10 multiplied by itself 160 times, a number far too large to be expressed in words. The amount of matter to be shaken together to produce a single molecule of protein would be millions of times greater than the whole universe. For it to occur on the earth alone would require many, almost endless billions (10^243) of years.

Proteins are made from long chains called amino-acids. The way those are put together matters enormously. If in the wrong way, they will not sustain life and may be poisons. Professor J.B. Leathes (England) has calculated that the links in the chain of quite a simple protein could be put together in millions of ways (10^48). It is impossible for all these chances to have coincided to build one molecule of protein.

But proteins, as chemicals, are without life. It is only when the mysterious life comes into them that they live. Only the infinite mind of God could have foreseen that such a molecule could be the abode of life, could have constructed it, and made it live.

Science, in an attempt to calculate the age of the whole universe, has placed the figure at 50 billion years. Even such a prolonged duration is too short for the necessary proteinous molecule to have come into existence in a random fashion. When one applies the laws of chance to the probability of an event occurring in nature, such as the formation of a single protein molecule from the elements, even if we allow three billion years or more for the age of the Earth, there isn't enough time for the event to occur.

There are several ways in which the age of the Earth may be calculated from the point in time which at which it solidified. The best of all these methods is based on the physical changes in radioactive elements. Because of the steady emission or decay of their electric particles, they are gradually transformed into radio-inactive elements, the transformation of uranium into lead being of special interest to us. It has been established that this rate of transformation remains constant irrespective of extremely high temperatures or intense pressures. In this way we can calculate for how long the process of uranium disintegration has been at work beneath any given rock by examining the lead formed from it. And since uranium has existed beneath the layers of rock on the Earth's surface right from the time of its solidification, we can calculate from its disintegration rate the exact point in time the rock solidified.

In his book, Human Destiny, Lecomte du Noüy has made an excellent, detailed analysis of this problem:

It is impossible because of the tremendous complexity of the question to lay down the basis for a calculation which would enable one to establish the probability of the spontaneous appearance of life on Earth.

The volume of the substance necessary for such a probability to take place is beyond all imagination. It would be that of a sphere with a radius so great that light would take 10^82 years to cover this distance. The volume is incomparably greater than that of the whole universe including the farthest galaxies, whose light takes only 2x10^6 (two million) years to reach us. In brief, we would have to imagine a volume more than one sextillion, sextillion, sextillion times greater than the Einsteinian universe.

The probability for a single molecule of high dissymmetry to be formed by the action of chance and normal thermic agitation remains practically nil. Indeed, if we suppose 500 trillion shakings per second (5x10^14), which corresponds to the order of magnitude of light frequency (wavelengths comprised between 0.4 and 0.8 microns), we find that the time needed to form, on an average, one such molecule (degree of dissymmetry 0.9) in a material volume equal to that of our terrestrial globe (Earth) is about 10^243 billions of years (1 followed by 243 zeros)

But we must not forget that the Earth has only existed for two billion years and that life appeared about one billion years ago, as soon as the Earth had cooled.

Life itself is not even in question but merely one of the substances which constitute living beings. Now, one molecule is of no use. Hundreds of millions of identical ones are necessary. We would need much greater figures to "explain" the appearance of a series of similar molecules, the improbability increasing considerably, as we have seen for each new molecule (compound probability), and for each series of identical throws.

If the probability of appearance of a living cell could be expressed mathematically the previous figures would seem negligible. The problem was deliberately simplified in order to increase the probabilities.

Events which, even when we admit very numerous experiments, reactions or shakings per second, need an almost-infinitely longer time than the estimated duration of the Earth in order to have one chance, on an average to manifest themselves can, it would seem, be considered as impossible in the human sense.

It is totally impossible to account scientifically for all phenomena pertaining to life, its development and progressive evolution, and that, unless the foundations of modern science are overthrown, they are unexplainable.

We are faced by a hiatus in our knowledge. There is a gap between living and non-living matter which we have not been able to bridge.

The laws of chance cannot take into account or explain the fact that the properties of a cell are born out of the coordination of complexity and not out of the chaotic complexity of a mixture of gases. This transmissible, hereditary, continuous coordination entirely escapes our laws of chance.

Rare fluctuations do not explain qualitative facts; they only enable us to conceive that they are not impossible qualitatively.

Evolution is mathematically impossible

It would be impossible for chance to produce enough beneficial mutations—and just the right ones—to accomplish anything worthwhile.

"Based on probability factors . . any viable DNA strand having over 84 nucleotides cannot be the result of haphazard mutations. At that stage, the probabilities are 1 in 4.80 x 10^50. Such a number, if written out, would read 480,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000."
"Mathematicians agree that any requisite number beyond 10^50 has, statistically, a zero probability of occurrence."
I.L. Cohen, Darwin Was Wrong (1984), p. 205.

Grimm said:

You are wrong...you are confusing something that you "believe" and stating it as a "fact".

How Turbo-Charger's are made

jonny says...

A supercharger is driven directly by the motor, whereas a turbocharger is driven by exhaust gases. So yes, a turbo requires higher revs to produce enough exhaust gas to spin the turbine - thus "boost lag".

EvilDeathBee said:

What's the difference between a turbocharger and a supercharger? Isn't it something like that the supercharger boosts power at lower revs, while a Turbo requires higher revs?

In Australia we have the Euro-spec VWs, and my Golf had a 4-cylinder, 1.4-litre engine with a turbocharger and a supercharger putting out 118 kilowatts (around 160 hp). Here in North America they have that 5-cylinder, 2.5-litre engine that produces 170 hp. Only a 10 hp difference between 1.4 and 2.5 litres!
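The quoted kilowatt figure can be sanity-checked with a one-line conversion (1 mechanical hp = 745.7 W); the helper below is just an illustration:

```python
# Mechanical horsepower: 1 hp = 745.7 W
def kw_to_hp(kw: float) -> float:
    return kw * 1000 / 745.7

print(round(kw_to_hp(118)))  # 158 -- the "around 160 hp" quoted above
```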

How a Turbocharger Works

coolhund says...

Not quite right. In normal operation a turbocharger saves fuel, but under wide-open throttle and generally fast driving it uses much more fuel than a naturally aspirated engine of similar power.

Also, a supercharger works similarly, but it's not propelled by exhaust gases; it's driven directly by the engine's crankshaft (via a belt).

Fireball!

kceaton1 says...

A lot of that light comes from the fact that the strike is not only instantly VERY hot, but ALSO creates ionized gases, a plasma that doesn't last very long (due to the speed of energy and heat absorption). It lights up really well, with the color depending on the material hit (carbon-based stuff, as said above), so the photons you see come from the energy released into the atmospheric gases and the tree/pole/whatever was hit.

Perpetual Motion Machine

GeeSussFreeK says...

>> ^Kalle:

One serious question that bothers me is: why isn't it possible to use gravity as an energy source?
Would such a machine be a perpetual motion machine?


Gravity is REALLY weak. Like 36 orders of magnitude weaker than the electromagnetic force. 36 orders of magnitude is massive, larger than the total number of stars in the known universe. For instance, a fridge magnet is defeating the ENTIRE gravitational force of the earth AND the sun. Gravity makes for a great way to bind the macro-universe together, but it is shit as an energy source.
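The "36 orders of magnitude" claim can be sanity-checked by comparing the electrostatic and gravitational forces between two protons; the separation distance cancels out of the ratio. The constants are standard SI values, and the snippet is just an illustration:

```python
# Ratio of electrostatic to gravitational attraction between two protons.
k_e = 8.9875e9      # Coulomb constant, N*m^2/C^2
G   = 6.6743e-11    # gravitational constant, N*m^2/kg^2
e   = 1.6022e-19    # elementary charge, C
m_p = 1.6726e-27    # proton mass, kg

ratio = (k_e * e**2) / (G * m_p**2)   # the r**2 terms cancel
print(f"{ratio:.2e}")                 # ~1.24e36, i.e. ~36 orders of magnitude
```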

Also, gravity has only one polarity... and it doesn't turn off. For the EM force, we have 2 poles that can be switched around via electrical current to make lots of different energy-related things. But for gravity, you just have one ground state, and once you are there you need to input energy to get away from it; no way around that. However, what has been done in certain areas is to have a closed system where you apply energy at one time and store it for later. The example most commonly used is in dams, where they will pump a large volume of water back upstream (potential energy), store it (a gravity battery, if you will), and release it at a later time when demand is high. This is always a lossy way to make energy; you're going to spend more pumping the water back up (heat loss and other losses, including evaporation) than you will get back when you release it, so it is just a way to shift demand towards other hours, with additional entropy.
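The pumped-storage "gravity battery" described above can be sketched with E = mgh. The efficiency figures below are assumed round numbers for illustration, not data from any real plant:

```python
# Round-trip energy for pumped-storage hydro: store E = m*g*h, get back less.
g = 9.81                 # m/s^2
m = 1_000_000            # kg of water pumped (about 1000 m^3)
h = 100.0                # metres of head

pump_eff, turbine_eff = 0.85, 0.90   # assumed efficiencies, not plant data

energy_in  = m * g * h / pump_eff    # J drawn from the grid to pump uphill
energy_out = m * g * h * turbine_eff # J recovered on release

# Roughly three-quarters of the input comes back; the rest is the loss
# the comment above describes.
print(f"round-trip efficiency: {energy_out / energy_in:.1%}")
```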

You have 4 fundamental forces to draw energy from, and only 3 of those are practical: the strong (nuclear) force, the EM force, and the gravitational force. (The weak force is actually the force that powers the Earth's core, but it isn't useful for power generation, for a similar reason gravity isn't.)

The EM force is what we use in internal combustion engines and electrical motors. Chemical reactions are rearrangements of the electron structures of molecules, which makes gasoline engines possible via liquid-to-gas expansion pressures. Generators deal with EM fields, polarity, and current, which is what drives thermal reactors like coal, or can drive a car with a motor via conversion of stored electrical energy (just a backwards generator). Nuclear reactors deal with the strong (nuclear) force, and combine that with kinetic/thermodynamic processes of the same flavor as coal and other thermal plants.

Even gravity isn't perpetual; the orbits of ALL celestial bodies are unstable. Gravity is thought, with good theoretical support, to travel in waves. These waves cause turbulence in what would seem like calm orbits, slowly breaking them down over time, drawing bodies closer and closer together. Eventually, all orbits end in ejection or collision.


As to what energy is best, I personally believe in the power of the strong force, as does the sun. When you are talking about the 4 forces and their ability to make energy for us, the strong force is 6 orders of magnitude greater than the chemical reactions we can make. The EM force is not too much weaker than the strong force, but the practical application of chemical reactions limits us to the electron cloud, making chemical fuels a million to a billion times less energetic than strong-force fuels. Now, only fission has been shown to work for energy production currently, but I doubt that will be true forever. If you want LOTS of energy without much waste, you want strong-force energy, period. That and the weak force are the 2 prime movers of sustained life on this planet. While chemistry is what is hard at work DOING life, the strong and weak forces provide the energy to sustain that chemistry. Without them, there are no winds, there is no heat in the sky nor from the core, no EM shield from that core. Just a cold, lifeless hunk of metals and gases floating in the weak gravitational force.
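The "6 orders of magnitude" comparison can be roughed out per kilogram of fuel. The 200 MeV per U-235 fission and ~46 MJ/kg for gasoline are typical textbook values used here as assumptions:

```python
# Energy per kilogram: U-235 fission vs. a chemical fuel (gasoline).
MEV = 1.6022e-13            # joules per MeV
AMU = 1.6605e-27            # atomic mass unit, kg

fission_energy   = 200 * MEV                     # ~200 MeV per U-235 fission
u235_atom_mass   = 235 * AMU                     # kg per U-235 atom
fission_j_per_kg = fission_energy / u235_atom_mass   # ~8.2e13 J/kg

gasoline_j_per_kg = 46e6                         # ~46 MJ/kg, typical value

# ~1.8e6: strong-force fuel beats chemical fuel by about 6 orders of magnitude
print(f"{fission_j_per_kg / gasoline_j_per_kg:.1e}")
```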

Sorry for the rant, energy is my most favorite current subject



(edit, corrected some typos and bad grammar)

Peroxide (Member Profile)

bcglorf says...

It's not even that I am 'doubting' the proxy measures. I am directly observing that the proxy measures DO NOT register the last 100 years as particularly unusual or abnormal. In fact, the more accurate and improved the proxy reconstructions have become, the more normal the last 100 years appears in those reconstructions.

The proxy reconstructions are as much as 0.6 degrees cooler than instrumental records at the exact same point in time. I am objecting to a layman like the one in the video coming along and saying the instrumental record's warmth is unprecedented over the last 2k years. Sure, the proxy records don't show temperatures as high as the instrumental record in the last 2k years. The proxy records don't even show temperatures as high as the instrumental record in the last 10 years. The proxy records fail to recreate the temperatures observed in the instrumental record.

Does that make sense, or clarify what I am getting at?


In reply to this comment by Peroxide:
hmmm, I was aware that we only have thermometric readings of temperature for the last 100-150+ years. So basically you are doubting the ability of tree rings, pollen identification in sediments, and other methods of temperature reconstruction.

If I may reiterate my point, which I made rudely in the video post: you might be interested to know that if you go back further than 2k years, as in more than 10k, there are temperature changes that were even greater than 1 degree; however, Homo sapiens was not around to endure them. Regardless of previous temperature deviation, science tells us that our "freeing-up" of carbon dioxide, and creation of methane, are the culprits of the current temperature increase.

How does our ability to measure the last 2k years change that? or change the fact that we are heading for a 6 degree increase (which would not be uniform, for instance the poles have already warmed more than by 0.8 degrees, while the tropics may have warmed by less than 0.8 degrees)?

I fail to see that you have any point outside of that our estimations going back past 150+ years may be slightly off.

"It is obviously true that past climate change was caused by natural forcings. However, to argue that this means we can’t cause climate change is like arguing that humans can’t start bushfires because in the past they’ve happened naturally. Greenhouse gas increases have caused climate change many times in Earth’s history, and we are now adding greenhouse gases to the atmosphere at a increasingly rapid rate." -s.s.


bcglorf (Member Profile)

Peroxide says...

hmmm, I was aware that we only have thermometric readings of temperature for the last 100-150+ years. So basically you are doubting the ability of tree rings, pollen identification in sediments, and other methods of temperature reconstruction.

If I may reiterate my point, which I made rudely in the video post: you might be interested to know that if you go back further than 2k years, as in more than 10k, there are temperature changes that were even greater than 1 degree; however, Homo sapiens was not around to endure them. Regardless of previous temperature deviation, science tells us that our "freeing-up" of carbon dioxide, and creation of methane, are the culprits of the current temperature increase.

How does our ability to measure the last 2k years change that? or change the fact that we are heading for a 6 degree increase (which would not be uniform, for instance the poles have already warmed more than by 0.8 degrees, while the tropics may have warmed by less than 0.8 degrees)?

I fail to see that you have any point outside of that our estimations going back past 150+ years may be slightly off.

"It is obviously true that past climate change was caused by natural forcings. However, to argue that this means we can’t cause climate change is like arguing that humans can’t start bushfires because in the past they’ve happened naturally. Greenhouse gas increases have caused climate change many times in Earth’s history, and we are now adding greenhouse gases to the atmosphere at a increasingly rapid rate." -s.s.

In reply to this comment by bcglorf:
You are exactly right, the red line is directly measured temperature, and it shows that we are indeed breaking records. The trick is it only shows that we have hit the record for the last 100 years for which we actually have a measured record. The temperature over the last 2k years is not directly measured, but derived from proxies like tree rings. Those temperature reconstructions never hit the heights of the measured record. The speaker in the video then declares that current temperature is then a record over 2k years. The trick is to look closer. When we state that the reconstruction of the last 2k years never hits the current measured records it doesn't only mean it never hit them before today, it means that even where the measured temperature hits the record today, the reconstruction STILL doesn't come close to hitting the measured record. Seems very strong evidence that the reconstruction might have missed a record like today that happened over the last 2k years, as clearly they missed THIS ONE.


In reply to this comment by Peroxide:
I can't understand how you are interpreting Mann's graph, you do realize the red line at the end is not a projection, but in fact current measurements...

So how are you coming to the conclusion that we are not (already, if not soon) breaking records?

In reply to this comment by bcglorf:
>> ^alcom:

By your own admission, the composite readings are now at record highs. Nowhere in my previous post did I mention that these measurements were exactly equal to the 0.8 instrumental record. I only said that they followed the same directional trend. Read it and weep, a 2000 year-old record is a 2000 year-old record.
To argue that 'Mann's statement that the EIV construction is the most accurate,' is to willfully ignore all the other data sources. You completely miss the point that instrumental data is by far the most accurate. Proxy reconstructions are relied on in the absence of instrumental data.
The lack of widespread instrumental climate records before the mid 19th century, however, necessitates the use of natural climate archives or “proxy” data such as tree-rings, corals, and ice cores and historical documentary records to reconstruct climate in past centuries.
The curve of this warming trend is gradually accelerating according to all data sources, even if you ignore the two instrumental lines. And what a coincidence that it matches the timeline of the industrial revolution. Ignore science at your own peril!
>> ^bcglorf:
What graph are you reading?
CPS Land with uncertainties: peaked at 0.05 in 600, but yes, a new peak in the late 1990s at 0.15 (not 0.8); recent temp is the internal record by only 0.1.
Mann and Jones 2003: current peak at 0.15 (not 0.8), but current is the record by 0.25.
Esper et al. 2002: peaks once in 990 and again in 1990 at negative 0.05; not positive 0.8, nor is current warming a record.
CPS land+ocn with uncertainties: peaks at 0.2 (not 0.8) and only starts at 1500; not sure how much the record would've been set by if it included the year 600, where land alone hit 0.05.
Briffa et al.: series begins in 1400, but again peaks at 0.15 (not 0.8). Can't tell from the graph how much of a record, but by Briffa et al.'s original 2001 paper it's by 0.2.
Crowley and Lowery (2000): peaks at 0.15 (not 0.8); granted, current warming sets the record within the series, by 0.25 higher than 1100.
Jones et al. (1999): peaks at 0.05 (not 0.8); current is the record by 0.1.
Oerlemans (2005) and both borehole samples go back less than 500 years. The boreholes show a smooth curve throughout, with warming starting 500, not 100, years ago. They all peak at 0.2 or lower, again not 0.8.

If I repeat my main point, I think it is reinforced by each of the series above. Instrumentally measured warming is completely anomalous compared to the proxy reconstructions. The instrumental record peaks fully 0.6 degrees higher than any of the proxy series. How can anyone look at that and NOT object to the declaration that the last 2k years as shown by proxies prove temperatures have been far cooler and more stable than the last 100 years as shown in the instrumental record? If you instead compare like to like, and compare the last 100 years as projected by each proxy rather than by the instrumental record, you clearly see that the last 100 years is anything but a radical anomaly.
If you accept Mann's statement that the EIV construction is the most accurate, it can be easily said that the last 100 years, as appears in proxy reconstructions, isn't much of an anomaly at all.

>> ^alcom:
Ah, now I see your point, bcglorf. Of the various methodologies, the 2 instrumental record sets of the last 100 years are the only ones that show the extreme spike of temperature. The composite reconstructions have not yet shown data that is above previously held records in the last 2 millennia, with the exception of the following:
CPS Land with uncertainties
Mann and Jones (2003)
Esper et al. (2002)
CPS land+ocn with uncertainties
Briffa et al. (2005)
Crowley and Lowery (2000)
Jones et al. (1999)
Oerlemans (2005)
Mann et al. Optimal Borehole (2003)
Huang et al. Borehole (2000)
If you closely follow these lines, you will see that each plot above has indeed set 2000-year-old records in the last 25 years within its own recorded plot, even if not as pronounced as the instrumental record highlighted by the red line. I'm not sure why the EIV lines stop at 1850, but I'm also not a climatologist. The instrumental record has more or less agreed with the EIV record since its existence, including an extended cooling trend midway through this century. The sharp divergence is not fully understood perhaps, but I still think it foolish to ignore the provable, measurable and pronounced upward trend in all calculated measurements.
>> ^bcglorf:
>> ^alcom:
After a cursory reading of Mann's Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia, I can't see how climate change deniers can both read about his methodology AND at the same time gripe about a bias towards measurements that support his argument while ignoring conflicting measurements through other means.
These measurements AGREE. The regression measurement data seems to have a wider variance as you go backwards, but they all trend in the same directions, both up and down, over the last 2000 years (I'm looking at the graph here: http://www.pnas.org/content/105/36/13252/F3.expansion.html ), and they converge and climb at the same incredible rate at the end (the last 25 years or so). Show me ANY scientific data that reports a measurement of +0.8°C over such a short period of time.
As for the polynomial curve objection, the variation in measurements over such a limited data set may not yet reveal the true nature of the curve. And Earth's feedback mechanisms may already be at work to counteract the difference in atmospheric composition. For example, trees will grow faster in slightly warmer temperatures with elevated CO2 levels, and naturally counteract the effects of fossil fuel burning by simply converting more of it into O2. There are undoubtedly many, many more factors at play. I'm suggesting perhaps that the apparent "straight line" graphing is currently the fastest rate of increase possible based on the feedback systems that are at work.
The point is that it is a losing battle, according to the current trend. At some point, these feedback systems will fail (eg., there will come a point when it is so hot in a region that no type of tree will continue to grow and absorb CO2) and worst still, there are things like the methane in permafrost that will exacerbate the problem further. This isn't like a religious doomsday scenario, the alarm bells are not coming from a loony prophet but from real, measurable evidence that so many people continue to ignore. I'd rather be wrong and relieved that there is no climate crisis and clean energy initiatives end up being a waste of time and money than wrong that there IS in fact cause to make serious changes. The doubt that has driven so much misinformation will at some point be exposed for the stupidity that it truly is.

Look closer at the graph in http://www.pnas.org/content/105/36/13252/F3.expansion.html for me. It is the graph I was talking about. NONE of the reconstructions from proxy sources spike up to 0.8 at the end. They all taper off short of 0.2; the bold red (instrumental) line makes them very hard to see precisely. The closest curve to the red that can be seen is the grey one, which is in fact the other instrumental record they include, and even it stops below 0.6. What is more, the green EIV reconstruction peaks past 0.2 twice over the last 2k years. It also spikes much more quickly around 900 and 1300. Most noteworthy of all, if you read further up in Mann's report, the big reason for re-releasing this version of his paper was to evaluate the EIV method, because statisticians recommended it as far more appropriate. Mann notes as much in his results, stating that of all the methods, 'we place the greatest confidence in the EIV reconstructions'.
My key point is glaringly obvious when looking at Mann's data, even on the graph. The instrumental record of the last 100 years spikes in an unprecedented fashion. The proxy reconstruction of that same time frame does not. Two different methodologies yielding 2 different results. The speaker in this video points at that and declares it's because of human emissions 100 years ago, but we must look at the fact that the methodology changed at that exact point too. The EIV reconstruction was the latest attempt to bridge the gap between the proxy and instrumental records, and although it more closely matches the instrumental, it still doesn't spike 0.8 degrees over the last 100 years, and more interestingly it also shows much greater variation over the last 2k years. Enough variation, in fact, that if you look at just the green EIV line, the last 100 years isn't particularly noteworthy or anomalous.





The speaker in the video makes a very big deal, though, about the current 0.8 of the instrumental record, and similarly a very big deal about it being unprecedented in the last 10k years, which you have to admit is plainly an apples-to-oranges comparison.

Yes, most current proxies show a 2k year old record, except the EIV which is the most recent and deemed most accurate record. If you can accept comparing reconstructions using the same proxy data but different analysis, the EIV proxy reconstruction shows that current temperatures are not record breaking at all.

If you want to say that "the curve of this warming trend is gradually accelerating according to all data sources", I'd say you need to also observe warming trends in the reconstructions over the last 2k years. Again, the rate of change over the last century is not unprecedented in the proxy records over the last 2k years. You can plainly see similarly severe or more severe warming and/or cooling within the last 2k years in the proxy reconstructions.
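The "compare like to like" argument above can be sketched numerically. The values below are toy numbers loosely echoing the peaks quoted in this thread (proxy peaks near 0.15-0.2, instrumental near 0.8); they are illustrative only, not real reconstruction data:

```python
# Toy anomaly peaks (deg C) loosely echoing numbers quoted in this thread;
# NOT real reconstruction data.
proxy_past      = [0.05, 0.15, 0.10, -0.05]  # proxy peaks, earlier centuries
proxy_recent    = 0.15                        # proxy value for the last century
instrument_peak = 0.8                         # instrumental record, last century

# Apples to oranges: instrumental peak vs. proxy history
print(instrument_peak > max(proxy_past))      # True -- looks unprecedented

# Apples to apples: proxy last-century value vs. proxy history
print(proxy_recent > max(proxy_past))         # False -- not clearly anomalous
```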



ReverendTed (Member Profile)

GeeSussFreeK says...

Safe nuclear refers to many different new gen4 reactor units that rely on passive safety instead of engineered safety. The real difference comes with a slight bit of understanding of how nuclear tech works now, and why that isn't optimal.

Let us first consider this: even with current nuclear technology, the number of people who have died as a direct or indirect result of nuclear power is very low per unit of energy produced. The only rival is big hydro; even wind and solar carry a great deal of risk compared to nuclear as we do it and have done it for years. The main difference is that when a nuclear plant fails, everyone hears about it, but when an oil pipeline explodes and kills dozens, or solar panel installers fall off roofs or are electrocuted and die, it just isn't as interesting.

Pound for pound, nuclear is already statistically very safe, but that isn't really what we are talking about; we are talking about what makes current reactors less safe than new nuclear tech. Well, that has to do with how normal nukes work. Firstly, normal reactor tech uses solid fuel rods. The fuel isn't a "metal" either; it's uranium dioxide, which has the same physical characteristics as the ceramic pots you buy in a store. When the fuel fissions, the uranium is transmuted into other, lighter elements, some of which are gases. Over time, these non-fissile elements damage the fuel rod to the point where it can no longer sustain fission and needs to be replaced. At that point, only about 4% of the uranium content has been burned, but the rod is all "used up". So while there are some highly radioactive fission products contained in the fuel rods, the vast majority is just normal uranium, and that isn't very radioactive (you could eat it and not really suffer any radiation effects; chemical toxicity is a different matter). Burning uranium this way generates huge volumes of "waste" that isn't really waste at all, just normal uranium.

But this isn't what makes light water reactors unsafe compared to other designs. It is all about the water. Normal reactors use water to cool the core, extract the heat, and moderate the neutrons to sustain the fission reaction. Water boils at 100 C, which is far too low a temperature to run a thermal reactor on; you need much higher temps to get power. As a result, nuclear reactors use highly pressurized water to keep it liquid. The pressure is an amazingly high 2200 psi or so! This is where the real problem comes in. If pressure is lost catastrophically, the chance of releasing radioactivity into the environment increases. This is further complicated by the loss of water cooling the core. Without water, the fission chain reaction that generates the main source of heat in the reactor shuts down; however, the radioactive fission products contained in the fuel rods are very unstable and generate lots of heat on their own. So much heat, over time, that the rods end up melting if they aren't supplied with water. This is the "melt down" you always hear about. If you then start spraying water on the rods after they melt down, it carries away some of those highly radioactive fission products with the steam. This is what happened at Chernobyl; there was also a human element that overrode all their safety equipment, but that just goes to show you the worst case.
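For a feel of how much heat those fission products keep putting out after shutdown, here's a sketch using the Wigner-Way approximation, a standard textbook empirical fit (my addition, not something from the comment above):

```python
# Decay heat after shutdown via the Wigner-Way approximation:
#   P(t)/P0 = 0.0622 * (t^-0.2 - (t + T)^-0.2)
# where t is seconds since shutdown and T is seconds spent operating at power P0.

def decay_heat_fraction(t_shutdown_s, t_operating_s=3.15e7):  # ~1 year at power
    """Fraction of full thermal power still produced as decay heat."""
    return 0.0622 * (t_shutdown_s ** -0.2
                     - (t_shutdown_s + t_operating_s) ** -0.2)

for t, label in [(1, "1 s"), (3600, "1 h"), (86400, "1 day"), (2.6e6, "1 month")]:
    print(f"{label:>8}: {decay_heat_fraction(t) * 100:.2f}% of full power")
```

The point: a reactor putting out gigawatts of heat still produces several percent of that right after shutdown, which is why a fuel rod with no coolant melts even though fission has stopped.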

The same thing didn't happen at Fukushima. What happened at Fukushima is that coolant was lost to the core and the rods started to melt down. The tubes that contain the uranium are made from zirconium. At high temps, water and zirconium react to form hydrogen gas. Modern reactor buildings are designed to trap gases, usually steam, in the event of a reactor breach. In the case of hydrogen, the gas builds up until a spark of some kind sets off an explosion. These are the explosions that occurred at Fukushima. Both of the major failure modes of current reactors come down to the high-pressure water; but water isn't needed to make a reactor run, just this type of reactor.
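As a rough illustration of why that hydrogen matters, here's a back-of-the-envelope yield from the zirconium-steam reaction (Zr + 2 H2O -> ZrO2 + 2 H2). The per-kg figure is illustrative only; actual cladding masses and oxidation fractions vary by reactor design.

```python
# Hydrogen yield from fully oxidizing zirconium cladding in steam:
#   Zr + 2 H2O -> ZrO2 + 2 H2   (2 mol H2 per mol Zr)

M_ZR = 91.224          # g/mol, molar mass of zirconium
MOLAR_VOLUME = 22.414  # L/mol for an ideal gas at 0 C and 1 atm

def hydrogen_litres_per_kg_zr(kg_zr=1.0):
    mol_zr = kg_zr * 1000 / M_ZR
    mol_h2 = 2 * mol_zr
    return mol_h2 * MOLAR_VOLUME

print(f"{hydrogen_litres_per_kg_zr():.0f} L of H2 per kg of oxidized zirconium")
```

Nearly half a cubic metre of hydrogen per kilogram of cladding, and a core holds tons of it, so a sealed building fills with explosive gas quickly.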

The fact that reactors have radioactive materials in them isn't really unsafe in itself. What is unsafe is reactor designs that create a pressure to push that radioactivity into other areas. An electroplating plant, for example, uses concentrated acids along with high-voltage electricity in its fabrication processes. It "sounds" dangerous, and it is in a certain sense, but it is a manageable danger that will most likely have only very localized effects in the event of a catastrophic failure. This is due mainly to the fact that there are no forces driving those toxic chemicals into the surrounding areas... they are just acid baths. The same goes for nuclear materials: they aren't more or less dangerous than gasoline (gas go boom!) if handled properly.

I think one of the best reactor designs in terms of both safety and efficiency is the molten salt reactor. It doesn't use water as a coolant, and as a result operates at normal pressures. The fuel and coolant is a liquid lithium, fluoride, and beryllium salt instead of water, and the initial fuel is thorium instead of uranium. Since it is a liquid instead of a solid, you can do all sorts of neat things with it; most notably, in case of an emergency, you can just dump all the fuel into a passively cooled storage tank, then pump it back to the reactor once the issue is resolved. It is a safety feature that doesn't require much engineering; you are just using the ever-constant force of gravity. This is what is known as passive safety: it isn't something you have to do, it is something that happens automatically. So in many designs, there is a freeze plug that is actively cooled. If that cooling fails for any reason, or you want a shutdown, the freeze plug melts and the entire contents of the reactor drain into the tanks and fission stops (fission needs a certain geometry to happen).

So while the reactor will still be as dangerous as any other industrial machine, like a blast furnace, it wouldn't pose any threat to the surrounding area. This is boosted by the fact that even if you lost containment AND had a ruptured emergency storage tank, these salts solidify at temps below 400 C; so while they are liquid in the reactor, they quickly solidify outside of it. Another great benefit is that they are remarkably stable. Air and water don't really leach anything from them; fluoride and lithium are just so happy binding with things, they don't let go!

The fuel burn-up is also really great. You burn 90% of what you put in, and if you try hard, you can burn up to 99%. So comparing new reactor tech to "clean coal" doesn't really give it a fair shake. The tech we use now was actually more or less denounced by the person who created it, Alvin Weinberg, who advocated the molten salt reactor instead. I could babble on about this for ages, but I think Kirk Sorensen explains it better than I could... hell, most likely the bulk of what I said is said better by him:



http://www.youtube.com/watch?v=N2vzotsvvkw

But the real question is why. Why use nuclear and not solar, for instance?

http://en.wikipedia.org/wiki/Energy_density

This is the answer. The power of the atom is a MILLION times more dense than fossil fuels... a million! It is a number beyond what we can normally grasp as people. Right now, current reactors harness less than 1% of that power because of their reactor design and fuel choice.
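You can sanity-check the "million times" figure with rounded textbook numbers (these are my own approximations, not from the linked page):

```python
# Order-of-magnitude check: energy density of complete U-235 fission vs coal.
AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19

energy_per_fission_j = 200e6 * EV_TO_J            # ~200 MeV released per fission
atoms_per_kg_u235 = AVOGADRO * 1000 / 235
fission_j_per_kg = energy_per_fission_j * atoms_per_kg_u235   # ~8e13 J/kg

coal_j_per_kg = 30e6                              # ~30 MJ/kg for good coal

ratio = fission_j_per_kg / coal_j_per_kg
print(f"complete fission is ~{ratio:.1e}x more energy-dense than coal")
```

The ratio lands in the millions, which is consistent with the claim; and since today's reactors only burn a few percent of the fuel, the gap current plants actually realize is smaller.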

And unfortunately, renewables just cost too darn much for how much energy they contribute. They also use WAY more resources per unit of energy produced. Wind, for example, uses 10x more steel per unit of energy contributed than other technologies. That's because renewables are more like energy farming.

http://videosift.com/video/TEDxWarwick-Physics-Constrain-Sustainable-Energy-Options


This is a really great video on the maths behind what makes renewables less than attractive for many countries. But to wrap it up, finally, the real benefit is that cheap, clean power is what helps make nations great. There is an inexorable link between access to energy and financial well-being. Poor nations burn coal to try to bridge that gap, but that has a huge health toll. Renewables are way too costly for them per unit of energy; they really need other answers. New nuclear could be just that, because it can be made nearly completely safe, very cheap to operate, and easier to manufacture (this means very cheap compared to today's reactors, which are basically huge pressure vessels). If you watch a couple of videos from Kirk and have more questions or problems, let me know; as you can see, I love talking about this stuff. Sorry if I gabbed your ear off, but this is the stuff I am going back to school for, because I do believe it will change the world. It is the closest thing to free energy we are going to get in the next 20 years.

In reply to this comment by ReverendTed:
Just stumbled onto your profile page and noticed an exchange you had with dag a few months back.
What constitutes "safe nuclear"? Is that a specific type or category of nuclear power?
Without context (which I'm sure I could obtain elsewise with a simple Google search, but I'd rather just ask), it sounds like "clean coal".

Star Trek - "Gaseous Anomalies"

Starter Fluid Tire Inflation [MythBusters]

rottenseed says...

That's very true. Since the pressure inside the tire, once it's off the rim, equalizes with atmospheric pressure, even if you seated the tire back on manually you'd have a zero pressure delta between outside the tire and inside. For a tire to run properly, you usually need a difference of approximately 30 psi. So yeah, case closed. THANKS!>> ^messenger:

That might work better, but to pressurize a tire, you need significantly more gas than just the little bit that comes out of the spray can, and it must be at ambient temperature too to count. Clearly, the only reason this inflates the tire at all is the heat. Here's a thought experiment that disproves the "stays inflated" myth:
Consider that you need to have much more gas inside the tire than outside for it to stay inflated at ambient temperature. Before lighting the gas, there's the same pressure inside and outside the tire because the two areas are contiguous. When she lights the fire, the gas inside expands rapidly, and lots of it escapes, so now, while there's a greater volume of gas inside the tire than before, this is due to a greatly reduced density, so there's actually less gas inside than before, which is why there was a vacuum. Using heat, there will always be less gas inside than before. It's not even worth experimenting. The only way something like this could work is with compressed gas, which you certainly wouldn't want to do because that would blow the tire up if it were lit on fire.
I think there are self-inflating tires already on the market that have compressed gas cartridges built in, triggered by a sensor that fixes flats by spraying a sealant inside, then lots of pressurized gas, probably CO2. They don't use fire.>> ^rottenseed:
So the limiting reactant would be the starter fluid and the air in the tire. Oxygen to be more exact. Because you want enough forces to seat the tire, but not so much it removes all of the gases from the tire, maybe they should have tried less starter fluid. If that's depleted in the reaction quickly leaving enough energy to seat the tire, but also enough oxygen left over from the reaction, you might end up with a working tire.
Somebody please double check my thought process, but I think it's definitely worth more experimentation.


Starter Fluid Tire Inflation [MythBusters]

messenger says...

That might work better, but to pressurize a tire, you need significantly more gas than just the little bit that comes out of the spray can, and it must be at ambient temperature too to count. Clearly, the only reason this inflates the tire at all is the heat. Here's a thought experiment that disproves the "stays inflated" myth:

Consider that you need to have much more gas inside the tire than outside for it to stay inflated at ambient temperature. Before lighting the gas, there's the same pressure inside and outside the tire because the two areas are contiguous. When she lights the fire, the gas inside expands rapidly, and lots of it escapes, so now, while there's a greater volume of gas inside the tire than before, this is due to a greatly reduced density, so there's actually less gas inside than before, which is why there was a vacuum. Using heat, there will always be less gas inside than before. It's not even worth experimenting. The only way something like this could work is with compressed gas, which you certainly wouldn't want to do because that would blow the tire up if it were lit on fire.

I think there are self-inflating tires already on the market that have compressed gas cartridges built in, triggered by a sensor that fixes flats by spraying a sealant inside, then lots of pressurized gas, probably CO2. They don't use fire.>> ^rottenseed:

So the limiting reactant would be the starter fluid and the air in the tire. Oxygen to be more exact. Because you want enough forces to seat the tire, but not so much it removes all of the gases from the tire, maybe they should have tried less starter fluid. If that's depleted in the reaction quickly leaving enough energy to seat the tire, but also enough oxygen left over from the reaction, you might end up with a working tire.
Somebody please double check my thought process, but I think it's definitely worth more experimentation.
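messenger's cooling argument can be put in numbers with the ideal gas law. A sketch, assuming (hypothetically) the gas is around 100 C and roughly atmospheric pressure at the moment the tire seats, then cools at constant volume:

```python
# Gay-Lussac's law at constant volume: P2 = P1 * T2 / T1.
# If the tire seats full of hot combustion gas at about atmospheric pressure,
# the pressure once the gas cools to ambient is below atmospheric: a partial vacuum.

P_ATM_KPA = 101.3

def pressure_after_cooling(p_hot_kpa, t_hot_k, t_cold_k):
    """Absolute pressure after isochoric cooling, temperatures in kelvin."""
    return p_hot_kpa * t_cold_k / t_hot_k

p_cold = pressure_after_cooling(P_ATM_KPA, t_hot_k=373.0, t_cold_k=293.0)
gauge = p_cold - P_ATM_KPA
print(f"absolute: {p_cold:.1f} kPa, gauge: {gauge:.1f} kPa (a partial vacuum)")
```

So even in the best case the cooled tire ends up around 20 kPa below atmospheric, nowhere near the ~30 psi (~200 kPa) of positive gauge pressure a running tire needs.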

Starter Fluid Tire Inflation [MythBusters]

Jinx says...

>> ^rottenseed:

So the limiting reactant would be the starter fluid and the air in the tire. Oxygen to be more exact. Because you want enough forces to seat the tire, but not so much it removes all of the gases from the tire, maybe they should have tried less starter fluid. If that's depleted in the reaction quickly leaving enough energy to seat the tire, but also enough oxygen left over from the reaction, you might end up with a working tire.
Somebody please double check my thought process, but I think it's definitely worth more experimentation.

Would be one to test, but I think you'd end up with the same problem. At the end of the day, to pop the tyre back onto the rim you're going to be filling quite a lot of the inside of that tyre with hot combustion gases. You might get less deflation with less accelerant, but I'd think you'd start with a lower pressure within the tyre even before the gases cool, so you wouldn't gain much. That's my guess anyway.


Now maybe if you did this, and at the same time dropped just the right amount of dry ice in with it you could get the tyre back on the rim AND get it pressurised without the need for some sort of pump. Ofc, dry ice is not exactly something you tend to have stored in the glove compartment...

Starter Fluid Tire Inflation [MythBusters]

rottenseed says...

So the limiting reactant would be the starter fluid and the air in the tire. Oxygen to be more exact. Because you want enough forces to seat the tire, but not so much it removes all of the gases from the tire, maybe they should have tried less starter fluid. If that's depleted in the reaction quickly leaving enough energy to seat the tire, but also enough oxygen left over from the reaction, you might end up with a working tire.

Somebody please double check my thought process, but I think it's definitely worth more experimentation.




Beggar's Canyon