Friday, February 29, 2008

Geothermal Energy In Australia

Author: Tobi Nagy

Geothermal energy may emerge as an alternative source of energy to drive our turbines in the race to produce "cleaner" electricity.

What is Geothermal Energy?

Geothermal energy is a natural "clean" source of energy, tapped via holes drilled several kilometres below the earth's surface from which hot water and steam are extracted. It requires no fossil fuel inputs (other than for the drilling machinery) and may provide a lasting source of energy for decades, possibly even centuries.

Testing by Geodynamics

In South Australia, Geodynamics - a company established to find a way to extract geothermal energy economically from beneath the barren Cooper Basin - is conducting drilling trials into a 90-million-year-old granite bed, one of the hottest spots on earth, to extract hot water that will provide steam to power turbines for "clean", fossil-fuel-free electricity.

A $40 million exploration and testing program is currently underway in the hope of finding an economic form of extraction.

There is a hope that a full scale commercial operation will eventually commence that will provide hundreds of Megawatts of power to the state's electricity grid, in the race to reduce Greenhouse gases.

The major investor, Origin Energy, has already committed to taking 50% of the output for its electricity grid once commercial operation commences in the near future.

About the author: Tobi Nagy is a small business development consultant and a specialist in developing sustainable systems. His website can be found at http://www.sustainable-development.net

Thursday, February 28, 2008

Telescopes: Principle of Operation and Factors That Affect Their Properties

Author: David Chandler

Telescopes are devices used to view distant objects, and they find use in astronomy and physics. A telescope enables you to view distant objects by magnifying them. There are many types of telescopes, and their prices vary according to the specifications. Many accessories are also available that can be used in conjunction with telescopes. Even small telescopes used as toys are capable of viewing objects around 50 meters away.

The principle on which the telescope works

The principle on which the telescope works is very simple. Two lenses together perform the task of viewing objects at a distance. One lens picks up the light from the object viewed and brings it to a focal point. The other lens picks up the bright light from the focal point and spreads it out onto your retina so that you can view the image. The lens that picks up the light from the object is called the objective lens (or, in a reflector, the primary mirror). The lens that picks up the light from the focal point is called the eyepiece lens.

Factors that affect the viewing of the object

The capability of the telescope to collect light from the object being viewed and its capability to enlarge the image are the factors that determine the efficiency of the telescope. The capability to collect light depends on the diameter of the lens or mirror, otherwise called the aperture. The larger the aperture, the more light it can collect. Enlargement of the image depends on the combination of lenses used; the eyepiece performs the magnification.
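
To make these two factors concrete, here is a minimal sketch in Python (the instrument values are invented for illustration, not taken from this article): light grasp grows with the square of the aperture, and magnification is the ratio of the objective's focal length to the eyepiece's.

```python
# A minimal sketch of the two factors above (illustrative numbers only).

def light_grasp_ratio(aperture_mm, pupil_mm=7.0):
    """How much more light the objective collects than a dark-adapted eye."""
    return (aperture_mm / pupil_mm) ** 2

def magnification(objective_focal_mm, eyepiece_focal_mm):
    """Magnification = objective focal length / eyepiece focal length."""
    return objective_focal_mm / eyepiece_focal_mm

# Example: a 200 mm aperture, 1000 mm focal length telescope with a 10 mm eyepiece.
print(light_grasp_ratio(200))    # ~816x more light than the naked eye
print(magnification(1000, 10))   # 100x magnification
```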

Some of the world's largest optical telescopes in operation

A telescope is said to be large based on its aperture size. On this basis, Keck and Keck II are the largest telescopes in operation, each with an aperture of 10 meters in diameter. Each Keck telescope is composed of 36 mirror segments. They are located at Mauna Kea, Hawaii. The next largest is the Hobby-Eberly telescope, located at Mt. Fowlkes, Texas, which has an aperture of 9.2 meters. You can get a list of the largest optical telescopes at http://astro.nineplanets.org/bigeyes.html.

Choosing your telescope

The choice of telescope largely depends on what you want to observe. Compound telescopes and refractors are good choices for viewing through urban skies. For rural skies, compound telescopes and reflectors serve better than refractors. Each type has its own advantages and disadvantages; hence, many people keep different telescopes for different purposes.

For more information visit http://www.TelescopeInfoCenter.com

About the author: None

Wednesday, February 27, 2008

Building a foundation, starting with physics

Author: Kyle Watts

When you break it down, all sciences are derived from physics. Think about it: psychology is the biology of the brain. Biology is the study of biochemical reactions. Biochemistry (or organic chemistry) is simply complex chemical reactions. Chemistry is molecular physics summed up into a table (the periodic table). Finally, molecular physics is governed by the fundamental forces of physics. Even the four fundamental forces may one day be unified into a single Grand Unified Theory of physics. Basically, the universe can be defined by a few basic principles.

This is why I have a new approach to understanding any field of research. To study a specific field we must break it down into its components. If I wanted to pursue research in the field of biology, I would first grasp a significant understanding of physics. Once I had a firm grasp on physics I would study chemistry and then organic chemistry. Only when I had a firm understanding of these gateway sciences would I start to study biology. Some of you who are reading this are thinking that educational programs are designed with this in mind. But the reality is that branched-out scientific fields receive minimal or no education in these other important sciences. A person working on a psychology degree only has to take a few courses in biology and usually no courses in chemistry or physics. This is a fundamental flaw in specialized scientific university programs: in many cases, much of the important science is overlooked.

There are a few drawbacks to this method of learning. One is that it would take a much greater amount of time to learn everything from the ground up. Another is the fact that many people can't grasp all of these concepts. For example, I know a lot of people who flunked high school physics yet eventually went on to become biologists. And a third problem is that many people may not have the patience to learn with this method.

Despite the problems with this method of learning, I believe it is in the best interest of our society that we start creating programs that involve this type of learning. Maybe this isn't for everyone, but even if only a small number of people benefit from this method, they would become innovators and inventors. They would be the people who push the boundary of scientific discovery to a new level. It is a difficult path but a necessary one if we wish to push science to its limits.

for more articles visit: www.unifyscience.batcave.net/

About the author: None

Tuesday, February 26, 2008

Essential Parts of a Microscope

Author: Peter Emerson

The basic design of the microscope has not changed that much over time. They have evolved, but the basic concept is still the same. There are several key parts that many types of microscopes have in common. All of the parts of a microscope must function properly for the microscope to work well. If one part is substandard, it can render the microscope useless. The major parts of a microscope are the lenses, the arm, the tube, the illuminator, the stage, and the adjustment knobs.

There are two kinds of lenses on a microscope. The eyepiece lens, also known as the ocular lens, is at the top of the microscope. This is the part that people look through. The ocular lens is not adjustable on most models. The objective lens provides much of the microscope's magnification. A microscope usually has a few different objective lenses that vary in strength, mounted on a rotating circular turret placed between the eyepiece and the stage. When someone wants to use a different strength of objective lens, they turn the turret to put another lens over the stage.
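
As a rough illustration of how the two lenses combine (a minimal sketch with typical catalogue values, not figures from this article): total magnification is the product of the ocular and objective strengths.

```python
# A minimal sketch: total magnification of a compound microscope is the
# product of the eyepiece (ocular) strength and the selected objective.

OCULAR = 10                      # a typical 10x eyepiece
OBJECTIVES = [4, 10, 40, 100]    # lenses on the rotating turret

for obj in OBJECTIVES:
    print(f"{obj}x objective -> {OCULAR * obj}x total magnification")
# 4x -> 40x, 10x -> 100x, 40x -> 400x, 100x -> 1000x
```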

Other than the lenses, the other parts of a microscope are the tube, the arm, the stage, the illuminator and the adjustment knobs. The tube connects the ocular lens and the objective lens. People look through the ocular lens and tube and see out of the objective lens at the bottom. The arm connects the lenses and the stage. It protrudes to the side and provides a handle to carry the microscope as well. The stage is where the object is placed for examination. Stage clamps secure the microscope slides to the stage. The microscope slides contain specimens such as blood or other liquids. The illuminator is below the stage. This part provides light to make the specimen easier to see. The illuminator is either an actual light or a mirror.

Most microscopes feature two adjustment knobs to help focus the lenses. The coarse adjustment knob is the larger of the two and brings the lens and the stage closer together. The fine adjustment knob is smaller and is used after the coarse adjustment knob to provide any small adjustments to bring the item into sharp focus.

These parts of a microscope are common to nearly all models. Some microscopes use slightly different parts. For example, electron microscopes use electron beams instead of illuminators.

About the author: Microscopes Info provides detailed information about electron, compound, stereo, digital, video, and scanning tunneling microscopes, as well as an explanation of the different parts of a microscope, and more. Microscopes Info is affiliated with Business Plans by Growthink.

Monday, February 25, 2008

General Morphological Analysis: A general method for non-quantified modelling

Author: Tom Ritchey

Fritz Zwicky pioneered the development of morphological analysis (MA) as a method for investigating the totality of relationships contained in multi-dimensional, usually non-quantifiable problem complexes. During the past two decades, MA has been extended and applied in the area of futures studies and for structuring and analysing complex policy spaces. This article outlines the fundamentals of the morphological approach and describes recent applications in policy analysis.

""... within the final and true world image everything is related to everything, and nothing can be discarded a priori as being unimportant."" (Fritz Zwicky: Discovery, Invention, Research through the Morphological Approach.)

Note: The original article contained diagrams and pictures of morphological fields, which are not available in this text format. The original article can be downloaded from the Swedish Morphological Society at: www.swemorph.com/ma.html.

INTRODUCTION

General morphological analysis (MA) was developed by Fritz Zwicky - the Swiss astrophysicist and aerospace scientist based at the California Institute of Technology (Caltech) - as a method for structuring and investigating the total set of relationships contained in multi-dimensional, non-quantifiable problem complexes (Zwicky 1966, 1969).

Zwicky applied this method to such diverse fields as the classification of astrophysical objects, the development of jet and rocket propulsion systems, and the legal aspects of space travel and colonization. He founded the Society for Morphological Research and advanced the "morphological approach" for some 40 years, from the early 1930s until his death in 1974.

More recently, morphological analysis has been extended and applied by a number of researchers in the U.S.A. and Europe in the field of policy analysis and futures studies (Rhyne 1981, 1995a, 1995b; Coyle 1994, 1995, 1996; Ritchey 1997, 1998; Ritchey, Stenström & Eriksson 2002). The method is presently experiencing something of a renaissance, not least because of the development of small, fast computers and flexible graphic interfaces.

This article will begin with a discussion of some of the methodological problems confronting complex, non-quantified modelling, especially as applied to policy analysis and futures studies. This is followed by a presentation of the fundamentals of the morphological approach along with a recent application to policy analysis.

METHODOLOGICAL BACKGROUND

Analysing complex policy fields and developing futures scenarios presents us with a number of difficult methodological problems. Firstly, many, if not all, of the factors involved are non-quantifiable, since they contain strong social-political dimensions and conscious self-reference among actors. This means that traditional quantitative methods, causal modelling and simulation are relatively useless.

Secondly, the uncertainties inherent in such problem complexes are in principle non-reducible, and often cannot be fully described or delineated. This represents an even greater blow to the idea of causal modelling and simulation.

Finally, the actual process by which conclusions are drawn in such studies is often difficult to trace - i.e. we seldom have an adequate "audit trail" describing the process of getting from initial problem formulation to specific solutions or conclusions. Without some form of traceability we have little possibility of scientific control over results, let alone reproducibility.

An alternative to formal (mathematical) methods and causal modelling is a form of non-quantified modelling relying on judgmental processes and internal consistency, rather than causality. Causal modelling, when applicable, can - and should - be used as an aid to judgement. However, at a certain level of complexity (e.g. at the social, political and cognitive level), judgement must often be used - and worked with - more or less directly. The question is: how can judgmental processes be put on a sound methodological basis?

Historically, scientific knowledge develops through cycles of analysis and synthesis: every synthesis is built upon the results of a preceding analysis, and every analysis requires a subsequent synthesis in order to verify and correct its results (Ritchey, 1991). However, analysis and synthesis - as basic scientific methods - say nothing about a problem having to be quantifiable.

Complex social-political problem fields can be analysed into any number of non-quantified variables and ranges of conditions. Similarly, sets of non-quantified conditions can be synthesised into well-defined relationships or configurations, which represent "solution spaces". In this context, there is no fundamental difference between quantified and non-quantified modelling.

Morphological analysis - extended by the technique of cross consistency assessment (CCA, see below) - is a method for rigorously structuring and investigating the internal properties of inherently non-quantifiable problem complexes, which contain any number of disparate parameters. It encourages the investigation of boundary conditions and it virtually compels practitioners to examine numbers of contrasting configurations and policy solutions. Finally, although judgmental processes may never be fully traceable in the way, for example, a mathematician formally derives a proof, MA does go a long way in providing as good an audit trail as one can hope for.

THE MORPHOLOGICAL APPROACH

The term morphology comes from ancient Greek (morphe) and means shape or form. The general definition of morphology is "the study of form or pattern", i.e. the shape and arrangement of parts of an object, and how these "conform" to create a whole or Gestalt. The "objects" in question can be physical objects (e.g. an organism, an anatomy, a geography or an ecology) or mental objects (e.g. linguistic forms, concepts or systems of ideas).

Fritz Zwicky proposed a generalised form of morphological research:

""Attention has been called to the fact that the term morphology has long been used in many fields of science to designate research on structural interrelations - for instance in anatomy, geology, botany and biology. ... I have proposed to generalize and systematize the concept of morphological research and include not only the study of the shapes of geometrical, geological, biological, and generally material structures, but also to study the more abstract structural interrelations among phenomena, concepts, and ideas, whatever their character might be."" (Zwicky, 1966, p. 34)

Essentially, general morphological analysis is a method for identifying and investigating the total set of possible relationships or "configurations" contained in a given problem complex. In this sense, it is closely related to typology construction (Bailey 1994), although it is more generalised in form and conceptual range.

The approach begins by identifying and defining the parameters (or dimensions) of the problem complex to be investigated, and assigning each parameter a range of relevant "values" or conditions. A morphological box - also fittingly known as a "Zwicky box" - is constructed by setting the parameters against each other in an n-dimensional matrix (see Figure 1 in the original article). Each cell of the n-dimensional box contains one particular "value" or condition from each of the parameters, and thus marks out a particular state or configuration of the problem complex.

Ideally, one would examine all of the configurations in the field in order to establish which of them are possible, viable, practical, interesting, etc., and which are not. In doing so, we mark out a relevant "solution space" within the field. The solution space of a Zwickian morphological field consists of the subset of configurations which satisfy some set of criteria - one of which is internal consistency.

However, a typical morphological field of 6-10 variables can contain between 50,000 and 5,000,000 formal configurations, far too many to inspect by hand. Thus, the next step in the analysis-synthesis process is to examine the internal relationships between the field parameters and reduce the field by identifying, and weeding out, all mutually contradictory conditions.
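
To make the combinatorics concrete, here is a minimal sketch in Python; the parameters and conditions are invented for illustration and are not drawn from any actual study. A morphological field is the Cartesian product of each parameter's range, so the number of formal configurations is the product of the range sizes.

```python
# A small sketch of a morphological field (made-up parameters).
from itertools import product
from math import prod

field = {
    "Funding":   ["none", "partial", "full"],
    "Actor":     ["state", "private", "mixed"],
    "Timescale": ["short", "medium", "long"],
}

# Total number of formal configurations = product of the range sizes.
total = prod(len(values) for values in field.values())
print(total)  # 27 here; 6-10 realistic parameters explode into the millions

configurations = list(product(*field.values()))
```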

This is achieved by a process of cross-consistency assessment (CCA). All of the parameter values in the morphological field are compared with one another, pair-wise, in the manner of a cross-impact matrix. As each pair of conditions is examined, a judgment is made as to whether - or to what extent - the pair can coexist, i.e. represent a consistent relationship. If a particular pair of conditions is a blatant contradiction, then all configurations containing this pair of conditions are also internally inconsistent. Using this technique, a typical morphological field can be reduced by up to 90 or even 99%, depending on the problem structure.
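
Continuing the sketch above (the inconsistency pairs below are likewise invented, purely for illustration), cross-consistency assessment can be expressed as a pair-wise filter over the configurations:

```python
# Continuing the sketch: CCA flags pairs of conditions that cannot coexist,
# then discards every configuration containing a flagged pair.
inconsistent_pairs = {
    (("Funding", "none"), ("Timescale", "long")),   # logically untenable
    (("Actor", "state"), ("Funding", "partial")),   # judged empirically implausible
}

names = list(field.keys())

def is_consistent(config):
    """A configuration survives if none of its condition pairs is flagged."""
    cells = list(zip(names, config))
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            pair = (cells[i], cells[j])
            if pair in inconsistent_pairs or pair[::-1] in inconsistent_pairs:
                return False
    return True

solution_space = [c for c in configurations if is_consistent(c)]
print(len(solution_space), "of", total, "configurations remain")  # 21 of 27
```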

There are three types of inconsistencies involved here: purely logical contradictions (i.e. those based on the nature of the concepts involved); empirical constraints (i.e. relationships judged to be highly improbable or implausible on empirical grounds); and normative constraints (e.g. relationships ruled out on ethical or political grounds). Normative constraints must be used with great care, and clearly designated as such. We must first discover what we judge to be possible, before we make judgements about what is desirable.

The reduction of the field to a solution space allows us to concentrate on a manageable number of internally consistent configurations. These can then be examined as elements of scenarios or specific solutions in a complex policy space. With computer support, the morphological field can be treated as an inference model. (For this purpose, FOA has developed a Windows-based software package which supports the entire analysis-synthesis process which general morphology entails. The program is called MA/Casper: Computer Aided Scenario and Problem Evaluation Routine.)

The morphological approach has several advantages over less structured approaches. Zwicky calls MA "totality research" which, in an "unbiased way attempts to derive all the solutions of any given problem". It may help us to discover new relationships or configurations which may not be so evident, or which we might have overlooked by other - less structured - methods. Importantly, it encourages the identification and investigation of boundary conditions, i.e. the limits and extremes of different contexts and factors.

It also has definite advantages for scientific communication and - notably - for group work. As a process, the method demands that parameters, conditions and the issues underlying these be clearly defined. Poorly defined parameters become immediately (and embarrassingly) evident when they are cross-referenced and assessed for internal consistency.

REFERENCES

Bailey, K.: Typologies and Taxonomies - An Introduction to Classification Techniques, Sage University Papers, Sage Publications, Thousand Oaks (1994).

Coyle, R. G., Crawshay, R. and Sutton, L.: "Futures Assessments by Field Anomaly Relaxation", Futures 26(1), 25-43 (1994).

Coyle, R. G. and McGlone, G. R.: "Projection Scenarios for South-east Asia and the South-west Pacific", Futures 27(1), 65-79 (1995).

Coyle, R. G. and Yong, Y. C.: "A Scenario Projection for the South China Sea", Futures 28(3), 269-283 (1996).

Doty, D. H. & Glick, W.: "Typologies as a Unique Form of Theory Building", Academy of Management Review 19(2) (1994).

Rhyne, R.: "Whole-Pattern Futures Projection, Using Field Anomaly Relaxation", Technological Forecasting and Social Change 19, 331-360 (1981).

Rhyne, R.: "Field Anomaly Relaxation - The Arts of Usage", Futures 27(6), 657-674 (1995a).

Rhyne, R.: "Evaluating Alternative Indonesian Sea-Sovereignty Systems", Informs: Institute for Operations Research and the Management Sciences (1995b).

Ritchey, T.: "Analysis and Synthesis - On Scientific Method Based on a Study by Bernhard Riemann", Systems Research 8(4), 21-41 (1991). (Available as a reprint at: www.swemorph.com/downloads.html.)

Ritchey, T.: "Scenario Development and Risk Management Using Morphological Field Analysis", Proceedings of the 5th European Conference on Information Systems, Vol. 3, 1053-1059, Cork: Cork Publishing Company (1997).

Ritchey, T.: "Fritz Zwicky, 'Morphologie' and Policy Analysis", presented at the 16th Euro Conference on Operational Analysis, Brussels (1998).

Ritchey, T., Stenström, M. & Eriksson, H.: "Using Morphological Analysis to Evaluate Preparedness for Accidents Involving Hazardous Materials", Proceedings of the 4th LACDE Conference, Shanghai (2002). (Available as a reprint at: www.swemorph.com/downloads.html.)

Zwicky, F.: Discovery, Invention, Research - Through the Morphological Approach, Toronto: The Macmillan Company (1969).

Zwicky, F. & Wilson, A. (eds.): New Methods of Thought and Procedure: Contributions to the Symposium on Methodologies, Berlin: Springer (1967).

About the author: Dr. Tom Ritchey is a Research Director at the Swedish Defence Research Agency. He maintains the website of the Swedish Morphological Society, where the original article - including diagrams - can be downloaded: www.swemorph.com/ma.html. Tom Ritchey can be reached at: ritchey@swemorph.com.

Sunday, February 24, 2008

Digital Camcorders and Cheap Memory Cards Canada

Author: Martin

How to Buy Digital SLR Cameras

Digital SLR cameras are certainly more expensive than other digital cameras, so the questions that arise are whether you need one, whether it is worth making the switch, and how to buy one. If you have made up your mind to own one, refer to inkango.com, where orders for digital SLR cameras, Canadian digital cameras, camcorders and other Canon digital camera accessories may be placed using MasterCard, Visa, American Express or C.O.D. payment upon delivery. The versatility and color consistency of the models mentioned above are some of the advantages of digital SLR cameras to look for. In addition, digital SLR cameras are lighter and easier to use. Before purchasing, consider what brands of digital SLR cameras are available. You can select any of the following:

* Fujifilm Digital SLR Camera
* Konica Minolta Digital SLR Camera
* Olympus Digital SLR Camera
* Pentax Digital SLR Camera
* Samsung Digital SLR Camera
* Casio Digital SLR Camera

About the author: I am Jhon, and I have been writing articles for a long time; this time it's about digital cameras. I hope you will like this if you have an interest in or need for an SLR digital camera. There is a really nice selection on this site: http://www.inkango.com/fr/index.asp

Saturday, February 23, 2008

In the Wake of Katrina: The Wrath of Mother Nature

Author: Jon Bischke

When a natural disaster hits, there are usually more questions than answers. Why did this happen? Can something like this be prevented from happening again? What does this all mean? In the wake of Hurricane Katrina, people are asking these and many other questions. As people come to grips with what has occurred, it is natural for there to be curiosity about previous natural disasters in our planet's past.

There are a number of audio books that deal with the subject of natural disasters and can help give people context and understanding during tragic times. Often the best way to prevent future disaster is to understand what happened in the past and take action to prevent mistakes that might have led to the event or increased its impact. Here are some resources that you may want to consider listening to.

""Krakatoa: The Day the World Exploded"" is the incredible story of the 1883 eruption of the volcano and the subsequent tsunami that killed almost 40,000 people. Simon Winchester narrates this tale of disaster and the ramifications of it on the surrounding area. On the other side of the world, a less catastrophic but more recent disaster is detailed in ""Fire on the Mountain"", the tale of a forest fire in Colorado on July 3, 1994. This fire claimed the lives of 14 firefighters and ranks as one of the deadliest days in the history of firefighting.

Blizzards and snowstorms are often tragic causes of death. "Blizzard! The Storm That Changed America" recounts the blizzard of 1888 that hit the eastern coast of the United States. This blizzard resulted in the deaths of 400 people, the sinking of 200 ships, and snowdrifts that reached 50 feet in height. Climbers who challenge the world's highest peaks often come face to face with Mother Nature as well. Iconic climber Anatoli Boukreev's "The Climb" and Jon Krakauer's "Into Thin Air" both tell the haunting story of the 1996 attempts to scale Everest, during which weather conditions contributed to the deaths of eight climbers.

There are even podcasts related to natural disasters. The Disaster News Network (DNN) puts out a regular podcast which has covered recent events such as the Indonesian earthquake and of course Hurricane Katrina. Another podcast that has covered the hurricane from a scientific perspective is the Science Friday podcast which is a production of NPR. Both of these podcasts are free to listen to and provide an alternative view of recent events.

Listening to audio books about natural disasters can't take the sting of these disasters away but it can help to give us a better historical perspective and show us the remarkable resilience of human beings even when the worst possible scenario has unfolded.

About the author: Jon Bischke is the Founder of LearnOutLoud.com and is passionate about helping you improve your life. He invites you to check out the complete selection of educational and self-development audio and video material at http://www.learnoutloud.com For the HTML version of this article complete with links to the titles that were mentioned, please visit http://www.learnoutloud.com/katrina01

Friday, February 22, 2008

Features of the Scanning Tunneling Microscope

Author: George Anderson

The scanning tunneling microscope (STM), invented by Heinrich Rohrer and Gerd Binnig in the 1980s, still manages to do a great job today and competes with more advanced microscope types.

The scanning tunneling microscope is used for studying the surface atoms that are found on various materials. The device is based on a complex process of "tunneling" electrons between the material and the tip of a probe. The tip of the probe is sharp and extremely small and it allows for great precision. However, in order to get the best results, the distance between the tip and the studied material has to be precisely calculated. While the tip is moving on the surface of the material, a constant flow of electrons must be kept so as to get accurate readings. After the scanning tunneling microscope does its job, the researcher is left with a precise bump map of the surface material.

Classified as a scanning probe microscopy instrument, the STM is a close relative of the atomic force microscope. The scanning tunneling microscope brings higher accuracy and better individual-atom resolution, providing researchers with high resolution images. Since the scale at which experiments can be done is very small (about 0.2 nm), the scanning tunneling microscope offers a lot of versatility in usage. By making the most out of the high resolution images, researchers can manipulate individual atoms on the material surface. This allows for precise chemical and physical reactions to be performed, as well as electron manipulation.

So how does the scanning tunneling microscope work? STMs work by following the rules of quantum mechanics, where the flow of electrons between the surface of the studied material and the tip of the probe is the essence of the experiment. The quantum mechanical effect is the tunneling of electrons: in other words, a transfer of electrons between the surface and the tip of the probe. The back-and-forth jumping motion of the electrons creates a weak electrical current (which only occurs if the studied surface is a conductor). Precise control of the distance between probe and surface is accomplished by using converse piezoelectricity.
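
The practical power of the tunneling effect lies in its exponential sensitivity to distance. Here is a minimal sketch of that relationship (the work function and gap values are typical textbook numbers, not measurements from this article): the tunneling current falls off roughly as exp(-2*kappa*d) with gap width d.

```python
# A minimal sketch of why the STM is so sensitive: tunneling current decays
# exponentially with the tip-surface gap. Illustrative textbook values only.
import math

PHI_EV = 4.5             # assumed work function of the surface, eV (typical metal)
HBAR = 1.0545718e-34     # J*s
M_E = 9.10938e-31        # electron mass, kg
EV = 1.602176e-19        # J per eV

kappa = math.sqrt(2 * M_E * PHI_EV * EV) / HBAR   # decay constant, ~1.1e10 1/m

for d_nm in (0.4, 0.5, 0.6):
    d = d_nm * 1e-9
    relative_current = math.exp(-2 * kappa * d)
    print(f"gap {d_nm} nm -> relative current {relative_current:.2e}")
# Each extra 0.1 nm of gap cuts the current by roughly an order of magnitude,
# which is why sub-atomic height differences show up clearly in the bump map.
```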

There are many fields of study where a scanning tunneling microscope can come in handy. Researchers use it to get a better understanding of the conductivity mechanisms found in different molecules. Because it allows for such great precision and individual atom manipulation, the scanning tunneling microscope is often used in labs dealing with nanotechnology. Other applications include conductivity research as well as analysis of the structural surface of various materials. Electronic device manufacturers use the scanning tunneling microscope as a tool for verifying surface conductivity and shrinking the size of their electronic devices, and there are numerous other fields where the STM performs accurately.

About the author: George Anderson loves the details that can be learned from microscopic study, especially scanning tunneling microscopes.

Thursday, February 21, 2008

EU's resistance to GMOs hurts the poor

Author: James Wachai

The bitter dispute between the U.S., Canada, and Argentina, on one hand, and the European Union (EU), on the other, over the latter's restrictive policies towards genetically modified foods reaches what is likely to be an acrimonious peak this week, when the World Trade Organization (WTO) rules on whether the EU has violated trade rules by blocking foods produced using modern biotechnology techniques. Acrimonious, because the EU is preemptively threatening to dishonor the verdict if it favors the U.S., Canada and Argentina. The EU is keen on blocking genetically modified foods without scientific justification.

The dispute dates back to the spring of 1998, when five EU member states - Denmark, France, Greece, Italy and Luxembourg - issued a declaration to block GMO approvals unless the European Commission (EC) proposed legislation for the traceability and labeling of GMOs. A year later, in June 1999, EU environment ministers imposed a six-year de facto moratorium on all GMOs. The official moratorium has since lapsed, but the EU's recalcitrance and obstruction towards GMOs remain.

The EU's ban on GMOs so exasperated the U.S., Canada and Argentina - leading growers of GMO-enhanced crops - that they initiated a WTO dispute settlement process against the EU in May 2003, arguing that the moratorium harmed farmers and their export markets, particularly for corn and soybeans, which are critical sources of revenue for farmers.

Now the WTO's verdict is due today (February 7, 2006). It is already reported to be the longest ruling of its kind, which suggests that EU political pandering may have seeped into the WTO process, complicating what should be a simple trade dispute resolution. This is unfortunate for more than just the two parties involved.

The stakes are high, not only for the parties in dispute, but for the entire world, and especially the developing world. The dispute is not just another transatlantic trade skirmish. At stake are consumers' rights to have real choices with regard to their food, and farmers' freedom to use approved tools and technologies to safely produce those food choices.

The EU has never justified its restrictive policies towards GMOs, which makes everybody question the motive behind the ban. When it slapped a moratorium on GMOs, the EU cited undefined safety concerns as the reason for the drastic action. Its own scientists and regulators have repeatedly addressed and dismissed the safety issues for these GMO crops. Were similar undefined, precautionary-principle standards applied to other growing practices - such as organic - Europe would have to similarly ban all foodstuffs.

In the absence of verifiable scientific justification to block GMOs from its territories, the EU is guilty of violating the Agreement on Technical Barriers to Trade (TBT) and the Agreement on the Application of Sanitary and Phytosanitary Measures (SPS), to which it is a signatory. The SPS, in particular, recognizes that countries are entitled to regulate crops and food products to protect health and the environment. The agreement requires, however, "sufficient scientific evidence" to support trade-restrictive regulations on crops and food products.

The EU's argument in the WTO dispute is greatly eroded by the fact that various scientific bodies have repeatedly vindicated GMOs. For example, the United Kingdom-based Institute of Food Science and Technology (IFST) - an independent body for food scientists and technologists - has declared that "genetic modification has the potential to offer very significant improvements in the quantity, quality and acceptability of the world's food supply."

In 2004, the U.S. National Research Council (NRC), a division of the National Academy of Sciences (NAS), issued a report in which it found that genetic engineering is "not an inherently hazardous process," calling fears of the anti-biotech crowd "scientifically unjustified."

In June 2005, the World Health Organization (WHO) released a report that acknowledged the potential of genetically modified foods to enhance human health and development. The report, Modern Food Biotechnology, Human Health and Development, noted that pre-market assessments done so far have not found any negative health effects from consuming GM foods. Surely, no respectable scientific body would endorse a flawed innovation.

These findings may help to explain why agricultural biotech innovators and product developers continue to thrive. Cropnosis - a leading provider of market research and consultancy services in the crop protection and biotechnology sectors - estimates that the global value of biotech crops stands at $5.25 billion, representing 15 percent of the $34.02 billion crop protection market in 2005 and 18 percent of the $30 billion 2005 global commercial seed market.

The International Service for the Acquisition of Agri-biotech Applications (ISAAA), in a report released early this year, reveals that since the commercialization of the first GM crop a decade ago, a cumulative 1 billion acres of land, in 21 countries, has been planted with biotech crops. In 2005 alone, the global area of approved biotech crops was 222 million acres, up from 200 million acres in 2004. This translates to an annual growth rate of 11 percent.

The lucrative nature of GM crops - they yield more and require fewer pesticides and herbicides - is driving many developing countries to embrace them. However, many, especially in Africa, where agriculture constitutes 30 percent of the continent's Gross Domestic Product (GDP), have been reluctant to cultivate GMOs for fear of losing their European agricultural markets. This is why Europe's acceptance of GMOs remains critical to Africa's adoption of them. The EU, by default, is preventing many poor countries from benefiting from GMOs.

If Europe opens its doors to GMOs, many poor countries stand to gain from this technology and both the economic and life-saving benefits it has to offer. Most people in poor countries live on agriculture. They must be given a chance to benefit from modern agricultural technologies such as biotechnology. Denying poor countries an opportunity to reap the rewards of crop biotechnology, which has proved so successful in other parts of the world, amounts to condemning billions of people who live in poor countries to a slow and painful death.

About the author: Go to www.gmoafrica.org to read more about James Wachai

Wednesday, February 20, 2008

Mars Global Surveyor

Author: David Craig

While Spirit and Opportunity, Nasa's Mars rovers, are getting all the attention, largely overlooked bonanzas of information about the Red Planet are being reaped by the Mars Global Surveyor.

Launched in November 1996 and successfully placed in Mars orbit on September 12, 1997, the Mars Global Surveyor has given Nasa much more than it expected by living beyond its primary mission, which was intended to end in January of 2002. With the spacecraft remaining in good condition, Nasa has extended its mission for a third time through 2006 and believes that, if funding is allocated, the Surveyor could remain in orbit around Mars for another five to ten years. On September twelfth of this year, the Global Surveyor passed Viking 1 as the longest-lived spacecraft in Mars mission history.

Among the discoveries made by the Surveyor, the most dramatic since its mission began was the discovery of a fossilized river delta in a crater known as "Eberswalde". This delta proved that water once flowed on Mars, resulting in the production of sedimentary rock of the kind found by Spirit and Opportunity.

The most exciting recent discovery has been that of the formation of new gullies on Mars. This evidence has changed estimates of the age of Mars' surface features. In addition, the Mars Global Surveyor discovered that the southern polar ice cap is shrinking by three feet a year. This proved to Nasa that Mars is undergoing more frequent changes than previously believed. The Surveyor has also gathered data on the well-known dust storms of Mars, showing them to be seasonal, varying, and covering only part of the planet at a time. The dust storms were found to be higher in the atmosphere than previously suspected, which means the surface of Mars is calmer than previously believed during these interludes.

New technology has allowed Nasa to increase its utilization of the Surveyor in ways never dreamed of at the onset of its mission. The resolution of its cameras made it possible to determine that boulders no larger than one to two meters exist in ripples caused by a catastrophic flood. This technique is known as "compensated pitch and roll targeted observation". In May of this year, the Surveyor again made history as the first spacecraft ever to take images of other spacecraft in orbit, imaging the European Space Agency's Mars Express and Nasa's Mars Odyssey.

Sources: Nasa - National Mars Exploration Program - http://www.nasa.gov/home/

1) One Mars Orbiter Takes First Photos of Other Orbiters

2) Mars Orbiter Sees Rover Tracks Among Thousands of New Images

3) Nasa Press Releases, September 20, 2005

4) Recent Changes on Mars Seen by Mars Global Surveyor, Michael C. Malin and Kenneth S. Edgett, Malin Space Science Systems, September 2005

About the author: David Craig, Nasa and General Astronomy Information. M.S. Physics - University of Minnesota. B.S. Computer Science - University of Oregon.

Tuesday, February 19, 2008

Lagrangian Points and Nasa's Plan to Explore Space

Author: David Craig

October 3, 2005

Nasa is relying on its ability to determine the Lagrangian points between every set of planets, moons, asteroids, etcetera it intends to explore in order to implement its plan of successful interplanetary space exploration. Although this at first may seem to be a vague and mystical concept, foreign to all but the most overeducated of astrophysicists, in fact it is really quite simple to understand.

The Lagrangian in physics is nothing more than an alternative formulation of Newton's second law: force equals mass times acceleration. A Lagrangian point between two bodies is therefore a point at which the competing forces on a third body (including the centrifugal effect of its orbital motion) balance out to zero. According to Newton's first law, if the net force on a body is zero, it will stay at rest if at rest, and if in motion it will stay in motion.

In mathematical terms, visualize a graph of a big bowl. The Lagrangian point is the point at the very bottom of the bowl. The energy from the very bottom of the bowl to the top represents the maximum energy required to kick a body at the bottom out of the bowl and keep it from rolling back to its state of minimum energy. In this case of a mass under the influence of two competing gravitational forces, the Lagrangian points are therefore the locations where the mass in question can best withstand changes in the net force upon it without being disturbed into an unstable orbit.
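
As a concrete illustration (a back-of-the-envelope sketch, not Nasa's actual tooling), the Earth-Moon L1 point can be found by locating where gravity and the centrifugal term balance in the rotating frame:

```python
# A minimal sketch: locating Earth-Moon L1 by balancing Earth gravity,
# Moon gravity, and the centrifugal term in the rotating frame.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
M_MOON = 7.342e22      # kg
D = 3.844e8            # mean Earth-Moon distance, m

omega2 = G * (M_EARTH + M_MOON) / D**3          # orbital angular rate squared
r_bary = D * M_MOON / (M_EARTH + M_MOON)        # barycenter distance from Earth

def net_force(r):
    """Net radial force per unit mass at distance r from Earth toward the Moon."""
    return (G * M_EARTH / r**2         # Earth pulls back toward Earth
            - G * M_MOON / (D - r)**2  # Moon pulls ahead toward the Moon
            - omega2 * (r - r_bary))   # centrifugal term in the rotating frame

# Bisection: the net force changes sign exactly once between Earth and Moon.
lo, hi = 0.5 * D, 0.999 * D
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if net_force(lo) * net_force(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(f"Earth-Moon L1 is roughly {mid/1e3:.0f} km from Earth")  # ~326,000 km
```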

How this relates to Nasa and its plans for future space travel is that it can solve these equations to determine the Lagrangian points lying between adjacent planetary bodies along a proposed route of space travel, and it is planning to put space stations at these locations. This will make it possible to create stepping stones extending space exploration outwards as far as we want to go. Since it would be unrealistic to expect any spacecraft to return to earth from deep space in case of emergency or the need for repairs, this approach makes limitless space travel hypothetically feasible in the future.

Sources: 1) NASA Reveals New Plan for the Moon, Mars & Outward, by Leonard David, Senior Space Writer, Space.com

About the author: M.S. Physics - University of Minnesota

B.S. Computer Science - University of Oregon

Visit Nasa and General Astronomy Information for more pertinent Nasa and astronomy articles.

Monday, February 18, 2008

Temperature monitoring systems

Author: Rick Kaestner

Monitoring temperature is a critical element in many different segments of industry and business today. There are several means of measuring temperature, each of which has its own pluses and minuses. In the past you had to use a manual method, where an employee used a thermometer to determine the temperature and a piece of paper and a pencil to record it. This was time-consuming, expensive and of questionable accuracy.

When chart recorders were invented, they were used for monitoring temperature twenty-four hours a day. However, a chart recorder still required an employee to change the chart every day or week, and because it was mechanical it often broke down, requiring even more maintenance.

The data logger appeared in the late '80s. Data loggers were not mechanical, which eliminated the ongoing maintenance and made monitoring temperature easier and less expensive. They recorded temperature in RAM and could do their work unattended. They were also rugged, so they could be put in places that were inhospitable to humans.
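
In outline, a data logger's job can be sketched in a few lines of Python; the read_temperature function below is hypothetical, standing in for real sensor hardware:

```python
# A minimal sketch of what a data logger does: sample on a schedule, keep
# timestamped readings in memory, and hand over the history when downloaded.
import random
import time

def read_temperature():
    """Hypothetical stand-in for a real sensor read; returns degrees Celsius."""
    return round(random.uniform(2.0, 8.0), 1)   # e.g. a vaccine refrigerator

log = []                                        # the logger's RAM buffer

def sample():
    log.append((time.time(), read_temperature()))

def download():
    """What you would see only after connecting the logger to a computer."""
    return list(log)

for _ in range(3):      # a real device runs this loop unattended for weeks
    sample()
    time.sleep(1)

print(download())
```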

Many businesses began using the data logger for monitoring temperature. This worked fairly well as long as the temperature being monitored didn't change frequently or require a response to certain events. The big drawback of the data logger is that the temperature can't be seen until it is downloaded into a computer; until recently, the data logger didn't come with a display.

There is now a new type of data logger available which does have a display. This class of instrument, called a data viewer, collects and stores temperature history just like a data logger, but it also shows the temperature on an LCD display. This improves the utility of the device immensely. The most useful and low-cost data viewer is the ThermaViewer, manufactured by Two Dimensional Instruments, LLC.

This very useful instrument can be installed in minutes and easily used by every employee. It doesn't require an IT professional to set up or interpret. Once in place, it draws a chart on the large LCD display that is very easy to read. It is being used in laboratories and hospitals for measuring temperature of refrigerators and freezers where drugs and vaccines are stored. It is definitely easy enough for nurses, orderlies, and maintenance personnel to use.

About the author: Rick Kaestner is the President and CEO of Two Dimensional Instruments; the worldwide leader in providing technology to monitor, measure, record and document temperature and humidity. For more information please visit their website at http://www.e2di.com

Sunday, February 17, 2008

Top 10 Tips for Safely Handling and Using Gas Cylinders

Author: R.L. Fielding

Not everyone needs to know that fluorine will violently ignite many substances, that silane burns on contact with air, or that ammonia will decompose thermally into twice its volume. But if you work with specialty gases, this information is essential. Safety must always be a primary goal when working with specialty gases - safety and knowledge go hand-in-hand.

To improve your chances of preventing hazardous accidents, follow these Top 10 Tips for safely handling and using gas cylinders:

1. Appropriate firefighting, personnel safety and first aid equipment should always be available in case of emergencies. Ensure adequate personnel are trained in the use of this equipment.

2. Obtain a copy of the MSDS for the gases being used. Read the MSDS thoroughly and become familiar with the gas properties and hazards prior to use.

3. Follow all federal, state and local regulations concerning the storage of compressed gas cylinders. Store gas cylinders in a ventilated and well-lit area away from combustible materials. Separate gases by type and store in assigned locations that can be readily identified. Store cylinders containing flammable gases separately from oxygen cylinders and other oxidants by a fire-resistant barrier (having a fire-resistance rating of at least 30 minutes) or locate them at least 20 feet apart from each other. Store poison, cryogenic and inert gases separately. If a cylinder's contents are not clearly identified by the proper cylinder markings or labels, do NOT accept it for use.

4. Storage areas should be located away from sources of excess heat, open flame or ignition, and not located in closed or sub-surface areas. The area should be dry, cool and well ventilated. Outdoor storage should be above grade, dry and protected from the extremes of weather. While in storage, cylinder valve protection caps MUST be firmly in place.

5. Arrange the cylinder storage area so that old stock is used first. Empty cylinders should be stored separately and identified with clear markings. Return empty cylinders promptly. Some pressure should be left in a depleted cylinder to prevent air suck-back, which would allow moisture and contaminants to enter the cylinder.

6. Do not apply any heating device that will heat any part of a cylinder above 125°F (52°C). Overheating can cause the cylinder to rupture. Neither steel nor aluminum cylinder temperatures should be permitted to exceed 125°F (52°C).

7. Safety glasses, gloves and safety shoes should be worn at all times when handling cylinders. Always move cylinders by hand trucks or carts that are designed for this purpose. During transportation, keep both hands on the cylinder cart and secure cylinders properly to prevent them from falling, dropping or striking each other. Never use a cylinder cart without a chain or transport a gas cylinder without its valve protection cap firmly in place.

8. To begin service from a cylinder, first secure the cylinder and then remove the valve protection cap. Inspect the cylinder valve for damaged threads, dirt, oil or grease. Remove any dust or dirt with a clean cloth. If oil or grease is present on the valve of a cylinder which contains oxygen or another oxidant, do NOT attempt to use it. Such combustible substances in contact with an oxidant are explosive. Always disconnect equipment from the cylinder when not in use and return the cylinder valve protection cap to the cylinder.

9. Be sure all fittings and connection threads meet properly - never force them. Dedicate your regulator to a single valve connection even if it is designed for different gases. NEVER cross-thread or use adapters between non-mating equipment and cylinders. Use washers only if indicated. Never use pipe dope on pipe threads, turn the threads the wrong way, or use Teflon® tape on the valve threads in an attempt to prevent leaking.

10. When a cylinder is in use, it must be secured with some form of fastener. Floor or wall brackets are ideal for stationary use. Portable bench brackets are recommended for when a cylinder must be moved around. Smaller stands function well for lecture bottle use.

For more information on gas handling and safety, and to download a comprehensive free Design & Safety Handbook, visit http://www.scottgas.com. Scott Specialty Gases (http://www.scottgas.com) is an international producer and supplier of specialty gas products and equipment for all types of scientific, industrial and medical applications.

This article is provided by Scott Specialty Gases, a leading global manufacturer of specialty gases located in Plumsteadville, PA. More information on the company can be found at http://www.scottgas.com.

This article is copyrighted by Scott Gases. It may not be reproduced in whole or in part and may not be posted on other websites, without the express written permission of the author who may be contacted via email at scottgas@digitalbrandexpressions.com.

About the author: About the Author R.L. Fielding has been a freelance writer for 10 years, offering her expertise and skills to a variety of major organizations in the education, pharmaceuticals and healthcare, financial services, and manufacturing industries. She lives in New Jersey with her dog and two cats and enjoys rock climbing and ornamental gardening.

Saturday, February 16, 2008

Unlocking the Mystery of Life

Author: epicidiot.com

Review of the Intelligent Design video

""Unlocking the Mystery of Life""

Do molecular machines such as the incredible flagellar motor prove an intelligent designer?

How could DNA evolve?

Are the claims of this video valid or just another form of pseudoscience?

This review investigates the claims of this popular video and puts them to the test.

Often called the most researched and documented case for Intelligent Design, "Unlocking the Mystery of Life" features state-of-the-art computer animation to question the origins of life. The speakers are a who's who of the Intelligent Design movement: Phillip Johnson, Paul Nelson, Dean H. Kenyon, Michael J. Behe, Stephen C. Meyer, William Dembski, and Jonathan Wells.

Read the full review at

http://www.epicidiot.com/evo_cre/vr_unlocking_the_mystery_of_life.htm

The conclusions might surprise you.

About the author: None

Friday, February 15, 2008

Nasa's Vomit Comet

Author: David Craig

September 29, 2005

The Vomit Comet is the nickname for Nasa's C-9 airplane used to simulate weightlessness for astronaut training. The C-9 replaced two KC-135s previously used for this function. The Vomit Comet engages in a flight lasting almost three hours, entailing 30-40 parabolic loops in which the felt gravity varies from earth's normal pull to near weightlessness for a period of 25 seconds. The aircraft flies horizontally for a period of time, then rises in a steep climb followed by the 25-second freefall.
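
A back-of-the-envelope kinematics sketch (derived only from the 25-second figure quoted above, not from Nasa flight data) shows what that freefall implies about the maneuver: during the parabola the plane is effectively in projectile motion, so the weightless time is set by the vertical speed entering the arc.

```python
# A rough sketch of the parabola kinematics behind the 25-second freefall.
g = 9.81                  # m/s^2

t_weightless = 25.0       # seconds of near weightlessness, as quoted
v_vertical = g * t_weightless / 2        # vertical speed entering the parabola
height_gain = v_vertical**2 / (2 * g)    # climb from entry to the top of the arc

print(f"vertical speed at pull-up: {v_vertical:.0f} m/s")    # ~123 m/s
print(f"altitude gained over the arc: {height_gain:.0f} m")  # ~770 m
```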

The Vomit Comet received its name from the percentage of its passengers who throw up on its flights. According to John Yaniec, lead test director for NASA's Reduced Gravity Program, roughly one third of its passengers vomit, one third get sick but don't vomit, and the rest don't get sick at all. According to Yaniec, most airsickness is caused by anxiety over the upcoming flight.

The Vomit Comet is used to train future astronauts as well as to carry out microgravity experiments. Many high school and college science experiments have been carried out over the years on the Vomit Comet. One of the original KC-135 Vomit Comets was used to film scenes of the 1995 movie Apollo 13 starring Tom Hanks.

About the author: M.S. Physics - University of Minnesota B.S. Computer Science - University of Oregon Owner of Space Stuff - Home of Nasa and General Astronomy Information

Please feel free to visit.

Thursday, February 14, 2008

Do Planets Communicate with Living Organisms?

Author: Thomas Herold

Do you feel any difference when the moon is full? A lot of people, including myself, report that their sleep is different and that even during the day they feel a shift in their mood that sometimes lasts a few days.

The moon is responsible for making the tides and therefore has a physical influence on the earth. But what about the other 9 major planets?

Astrology is based on the belief that time has quality. It is important to mention that astrology does not say the planets create these qualities or energy patterns. The planets are an indicator of these qualities, and if you are familiar with them you can see them manifesting in your daily life.

Our world and our beliefs are still very much indoctrinated by the old paradigm that everything is mechanical - including human nature. This old belief will soon be replaced by new beliefs and new concepts. Quantum physics is already making such a big shift in our view of the world that the public will soon realize that our mechanical concepts of the world need to be replaced in order to integrate new findings and experiments.

What we will learn sooner or later is that information takes no time at all to get from one place to another, and therefore we cannot even say anymore that information travels. On a quantum level, information could be a singularity, meaning that everything is happening at the same time everywhere.

Does Life Have Principles? A study done by a Swiss scientist some 50 years ago revealed the principles of life itself. These principles manifest themselves in every living form as well as in any other material way.

The amazing result of his study was a set of nearly 10 principles that are almost identical to the qualities of the planets. I would like to mention that this study was not influenced in any way by astrology.

Each planet is associated with a different energy pattern. The names vary slightly as each astrologer interprets them differently, but overall they represent the same energy patterns. The difference is simply caused by language: when I mention the color red we all agree on it, yet there are hundreds of variations.

What are these 10 Qualities?

Moon - Feeling
Sun - Identity
Mercury - Thinking
Venus - Harmony
Mars - Energy
Jupiter - Expansion
Saturn - Integration
Uranus - Transition
Neptune - Mystery
Pluto - Metamorphosis

A Short Explanation of these Qualities:

Moon - Feeling: our emotions and our senses. There are days you may be more sensitive to light or sound than others.

Sun - What we identify with, our life force.

Mercury - Our capacity to understand, logic, language, talking.

Venus - Harmony means to reach an optimum, a balance.

Mars - Power, the strength to initiate or do something.

Jupiter - Exploring new areas in your life, growing.

Saturn - Learning something new about yourself.

Uranus - Shifting your work area or your life purpose.

Neptune - Quantum physics, beyond what you see and understand.

Pluto - Transformation, changing your way of life.

The names of these qualities are adapted and modified from Thomas Ring, one of the most popular German astrologers. He lived from 1892 to 1983, and Astrodienst Zurich has dedicated a special website to him.

There are also other bodies, such as Chiron, but I'd like to concentrate on the major ten. And for the scientists, I will add that the moon is of course a satellite and the sun is a star, but in astrological terms they are counted as planets as well.

What can we do with these 10 Energy Patterns? We can create a chart from our date of birth and see our unique energy pattern in it. Go to a good astrologer and you will be amazed at how precisely your birth chart represents your unique abilities and talents.

Now here comes the interesting part. Because we know how the planets move, we can look up their positions in an ephemeris, a table that tells us the position of each planet at a given time. We combine these positions with our birth chart, and what we get is a unique energy pattern for each day. We can even look up planet positions in the future, and can therefore find energy patterns for the future.

The comparison of the energy patterns in your birth chart with the energy patterns of the planets on a given day is called transits. These calculations have been done for thousands of years, but today we have fast, inexpensive computers that can do them in a fraction of a second.

What you will get from this calculation is a long list of relationships between the positions of the planets in your birth chart and the positions of the planets on a certain day.

The result of this calculation can be shown as a graphic with two circles. The inner circle shows your birth chart and the outer circle the chart for a certain day. Between the two circles you then see lines representing the transits. For someone who understands astrology this graphic is meaningful; for the rest of us it is meaningless.

What would it look like if we took one quality from our birth chart and combined it with the 10 qualities from a certain date? It would show us all the influences on that quality at once. For example, I can look at my energy (Mars) pattern and decide whether or not to base my decisions on it. If my pattern also shows expansion (Jupiter) or energy (Mars), I know this would be a good time for action and for starting new projects.
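For readers curious about the mechanics, here is a minimal sketch in Python of the kind of comparison described above. It assumes you already have each planet's ecliptic longitude in degrees (from the birth chart for natal positions, from an ephemeris for a given day); the planet positions, the aspect angles used, and the 6-degree orb are illustrative choices, not a standard:

# Minimal sketch: compare natal planet longitudes with transit longitudes.
# All longitudes are ecliptic degrees (0-360); the sample values are invented.

ASPECTS = {0: "conjunction", 60: "sextile", 90: "square",
           120: "trine", 180: "opposition"}
ORB = 6.0  # tolerance in degrees; astrologers differ on the exact value

def angular_separation(a, b):
    # smallest angle between two longitudes on the 360-degree circle
    d = abs(a - b) % 360
    return min(d, 360 - d)

def find_transits(natal, transit):
    hits = []
    for natal_planet, nlon in natal.items():
        for transit_planet, tlon in transit.items():
            sep = angular_separation(nlon, tlon)
            for angle, name in ASPECTS.items():
                if abs(sep - angle) <= ORB:
                    hits.append(f"transit {transit_planet} {name} natal {natal_planet}")
    return hits

natal = {"Mars": 102.5, "Jupiter": 233.0}    # invented birth-chart positions
transit = {"Mars": 100.0, "Jupiter": 12.8}   # invented positions for one day

for hit in find_transits(natal, transit):
    print(hit)

Each printed line is one transit; a real application would repeat this for all ten qualities and every day of interest.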

If you pay attention to your feelings and your daily qualities (transits) you may automatically adapt your work or life flow accordingly and you will find yourself having to deal with less resistance.

I am currently working on an application for the Internet, which will be available for free in a few weeks. With this application you will be able to calculate the positions of your planets at your birth time and watch a graphical radar chart of your current transits.

You can see some of the test graphics on my website at:

Quantum Biocommunication Technology

About the author: Thomas Herold is the founder of Quantum Biocommunication Technology, a website dedicated to the exploration of biocommunication.

Wednesday, February 13, 2008

What are Compound Microscopes?

Author: Peter Emerson

Most of the microscopes used today are compound. A compound microscope features two or more lenses. A hollow cylinder called the tube connects the two lenses. The top lens, the one people look through, is called the eyepiece. The bottom lens is known as the objective lens. Below the two lenses is the stage, with the illuminator below that.

Compound microscopes were among the first magnifying instruments invented. Two Dutch eyeglass makers, Zaccharias and Hans Janssen, are credited with making the first compound microscope in 1590 by putting one lens at the top of a tube and another at the bottom. Their idea was fleshed out by other scientists over the next several centuries, but the basic design remained very similar.

The eyepiece, also known as the ocular lens, is at the top of the compound microscope. It is not adjustable; that is, it has only one strength. Most ocular lenses are 10x, meaning that they magnify objects to ten times their normal size. The viewer looks into the eyepiece, down through the tube, toward the objective lens.

A compound microscope normally contains several objective lenses. The objective lenses are different lengths, with the longer ones being the strongest. The lenses are situated on a round disk below the tube. Viewers choose which strength lens they want and place it below the tube by turning the disk until the desired lens is in place.
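As a quick worked example, total magnification is simply the product of the eyepiece power and the selected objective power. The 10x eyepiece matches the typical value mentioned earlier; the objective powers below are a common but hypothetical set:

# Total magnification = eyepiece power x objective power.
EYEPIECE = 10                   # a typical 10x ocular lens
OBJECTIVES = [4, 10, 40, 100]   # hypothetical powers on the nosepiece disk

for objective in OBJECTIVES:
    print(f"{EYEPIECE}x eyepiece with {objective}x objective = "
          f"{EYEPIECE * objective}x total magnification")

So turning the disk from the 4x to the 100x objective takes the same instrument from 40x to 1000x.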

The stage and illuminator are below the objective lens. Specimens are placed over a translucent part of the stage. Light provided by the illuminator shines through the clear part of the stage, making it easier for the viewer to see the magnified details of the specimen. Two adjustment knobs help focus the object on the stage by bringing the lenses and the stage closer together.

Compound microscopes have been around for hundreds of years and are still very useful. A number of scientific disciplines use compound microscopes to discover the wonders of the microscopic world.

About the author: Microscopes Info provides detailed information about electron, compound, stereo, digital, video, and scanning tunneling microscopes, as well as an explanation of the different parts of a microscope, and more. Microscopes Info is affiliated with Business Plans by Growthink .

Tuesday, February 12, 2008

Genetic Genealogy Research

Author: Garon Yoakum

One of the first genetic genealogy studies was conducted in the late 1980s by scientists with the Department of Biochemistry at the University of California, Berkeley. These scientists, Rebecca L. Cann, Mark Stoneking and Allan C. Wilson, studied a newly characterized kind of DNA. Mitochondrial DNA (mtDNA) is contained not in the nucleus of our cells, but in the mitochondria, organelles of our cells. The scientists chose to study mtDNA because of three unique properties, which they explain as:

First, mtDNA gives a magnified view of the diversity present in the human gene pool, because mutations accumulate in this DNA several times faster than in the nucleus. Second, because mtDNA is inherited maternally and does not recombine, it is a tool for relating individuals to one another. Third, there are about 10^16 mtDNA molecules within a typical human and they are usually identical to one another (Cann 31).

They extracted and compared mtDNA from "147 people, drawn from five geographic populations" (Cann 31). The researchers discovered that "All these mitochondrial DNAs stem from one woman who is postulated to have lived about 200,000 years ago, probably in Africa" (Cann 31). Their findings also agree with the archaeological record, as Cann explains: "Studies of mtDNA suggest a view of how, where and when modern humans arose that fits with one interpretation of evidence from ancient human bones and tools" (36).

Swedish researchers Max Ingman, Henrik Kaessmann, Svante Paabo and Ulf Gyllensten, critical of these findings, conducted their own study in 2000. They claimed that "almost all studies of human evolution based on mtDNA sequencing have been confined to the control region, which constitutes less than 7% of the mitochondrial genome" (Ingman 708). Further, they argued that the prior methods of analysis were "providing data that are ill suited to estimations of mutation rate and therefore the timing of evolutionary events" (Ingman 708). So they decided to study the complete mtDNA sequence from 53 people of various races.

Surprisingly, their attempt to discredit the previous research failed, as they also came to roughly the same conclusions. They conceded the likelihood of a common ancestor shared by all the subjects despite their being "geographically unrelated" (Ingman 712). They estimated "The age of the most recent common ancestor (MRCA) for mtDNA, on the basis of the maximum distance between two humans...to be 171,500" years (Ingman 712), instead of the earlier estimate of 200,000 years ago. But they declined to align their findings with those of archaeologists, stating "Whether the ancestors of these six extant lineages originally came from a specific geographic region is not possible to determine" (Ingman 712). Lastly, they agreed on the potential of genetic genealogy by summarizing:

Our results indicate that the field of mitochondrial population genomics will provide a rich source of genetic information for evolutionary studies. Nevertheless, mtDNA is only one locus and only reflects the genetic history of females. For a balanced view, a combination of genetic systems is required. With the human genome project reaching fruition, the ease by which such data may be generated will increase, providing us with an evermore detailed understanding of our genetic history (Ingman 712).

Their call for a more balanced view was answered shortly thereafter: in 2000 a team of researchers from the Department of Genetics at Stanford University, led by Peter A. Underhill, published the results of their study of Y-chromosome DNA. Only males have the Y chromosome, which has unique properties, as explained by Underhill:

Binary polymorphisms associated with the non-recombining region of the human Y chromosome (NRY) preserve the paternal genetic legacy of our species that has persisted to the present, permitting inference of human evolution, population affinity and demographic history (358).

Their report was based upon "the analysis of 1062 globally representative individuals" (Underhill 358). They concluded that the subjects "represent the descendants of the most ancestral patrilineages of anatomically modern humans that left Africa between 35,000 and 89,000 years ago" (Underhill 358).

So far, genetic genealogy research has focused on these two kinds of DNA. As mentioned previously, mtDNA is passed along the maternal line and Y-chromosome DNA is passed along the paternal line. These two kinds of DNA effectively bracket all of our ancestors, yet they provide no information about the ancestors between those two lines. For example, our maternal grandfather (our mother's father) couldn't contribute any mtDNA or Y-chromosome DNA to our mother. Yet he did contribute a third type of DNA, called autosomal DNA. This type of DNA has only begun to be studied for genetic genealogy purposes because of its inherent difficulties.

The main reason autosomal DNA is only now being studied is that scientists aren't sure how to determine which autosomal DNA came from mom and which came from dad without testing one or both of our parents. The situation is illustrated by the equation X = Xm/2 + Xd/2, where our autosomal DNA (X) is half of our mom's (Xm/2) and half of our dad's (Xd/2). By testing ourselves we identify our autosomal DNA but can't determine which part came from which parent; testing at least one parent is necessary to do that. This type of testing is currently used for paternity and near-relationship testing, but it quickly becomes impractical after a few generations because of the difficulty of obtaining DNA samples from long-deceased ancestors.
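To see why this matters over generations, note that the halving in X = Xm/2 + Xd/2 compounds: the expected share of autosomal DNA from any single ancestor n generations back is (1/2)^n. The short sketch below prints those expected values; actual shares vary around them because recombination is random:

# Expected autosomal contribution of one ancestor n generations back.
# Parents contribute 1/2 each, grandparents 1/4, and so on; real shares
# scatter around these expectations because recombination is random.

for n in range(1, 7):
    print(f"generation {n}: expected share = 1/{2 ** n} ({0.5 ** n:.2%})")

By the sixth generation the expected share is under 2%, which is part of why autosomal testing becomes impractical for distant ancestors.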

Conclusion

Genetic Genealogy is the science of analyzing DNA for genealogical purposes. Studies have shown that we all stem from a common female and a common male ancestor. Because this emerging science is so new, the benefits of this research are still being identified. Currently I believe Genetic Genealogy offers three categories of benefits. First is entertainment value. Finding out you're related to famous people like George Washington, Julius Caesar or Genghis Khan is just plain fun. Imagine the bragging rights and small-talk fodder this provides at social gatherings. Second is scientific value. Current studies have corroborated other scientific findings, such as the human archaeological record, and medical science will benefit from correlating DNA studies with family genealogies to isolate hereditary diseases. Third is relatedness value. Finding out you're related to a wealthy individual like Bill Gates may entail a financial windfall. Most important of all is the ability to reunite families: millions of displaced, war-torn families and adopted children can now turn to Genetic Genealogy to find their relatives.

Sources

Cann, Rebecca L. et al. "Mitochondrial DNA and human evolution." Nature 325 (1987): 31-36.

Carmichael, Terrence and Alexander Kuklin. How to DNA Test Our Family Relationships? California: AceN Press, 2000.

Cavalli-Sforza, L. Luca et al. The History and Geography of Human Genes. New Jersey: Princeton University Press, 1994.

Ingman, Max et al. "Mitochondrial genome variation and the origin of modern humans." Nature 408 (2000): 708-713.

Tooker, Elisabeth. An Ethnography of the Huron Indians, 1615-1649. New York: Syracuse University Press, 1991.

Underhill, Peter A. et al. "Y chromosome sequence variation and the history of human populations." Nature Genetics 26 (2000): 358-361.

Walsh, Bruce. "Estimating the Time to the Most Recent Common Ancestor for the Y chromosome or Mitochondrial DNA for a Pair of Individuals." Genetics 158 (2001): 897-912.

Zimmer, Carl. "After You, Eve." Natural History 3 (2001): 32-35.

About the author: Garon Yoakum is a representative for Relative Genetics. For more information on genetic genealogy, contact us toll free at (800) 956-9362.

Monday, February 11, 2008

How Specialty Gases Differ from Industrial Gases

Author: Bob Jefferys

When it comes to compressed gases, there is often confusion over the difference between industrial gases (sometimes referred to as commodity or bulk gases) and specialty gases (sometimes referred to as cylinder gases, although industrial gases can also be supplied in cylinders). The Compressed Gas Association (CGA), which sets the standards to which suppliers of all types of compressed gases conform, defines its mission as being "dedicated to the development and promotion of safety standards and safe practices in the industrial gas industry." In a broad sense, since most compressed gases are used for some sort of industrial application, all could be considered industrial gases. So to define the true difference between industrial gases and specialty gases, one must look beyond the application to other factors such as complexity, level of purity and certainty of composition.

According to the CGA, compressed gases are often grouped into five loosely defined families: atmospheric; fuel; refrigerant; poisonous; and those having no obvious ties to any of the other families. Assignment to these families is somewhat arbitrary and typically based on the origin, use or chemical structure of a gas. Specialty gases can belong to any of these five families; essentially, they are industrial gases taken to a higher level. One dictionary definition of the word specialty is: an unusual, distinctive, or superior mark or quality. Specialty gases, then, can be defined as high-quality gases for specific applications that are prepared using laboratory analysis and other preparation methods in order to quantify, minimize or eliminate unknown or undesirable characteristics within the gas. For specialty gas mixtures, precise blending is also necessary to achieve very specific concentration values for the components contained within the mixture.

Specialty pure gases

Pure gases are considered to be specialty gases when they are used as support gases for laboratory instruments such as chromatographs, mass spectrometers and various other types of analyzers and detectors. Manufacturers of these highly sensitive instruments normally specify the purity level of the pure gases to be used with them. For example, high-purity, moisture-free helium is often used as a carrier gas in these instruments. When unwanted impurities are present, performance of a laboratory instrument may be compromised, or the instrument itself may be damaged. A good rule of thumb: when purity (sometimes as high as 99.9999%) and/or quantification of trace impurities is an issue, a pure gas is considered a specialty pure gas.

Specialty pure gases are used in the manufacturing of semiconductors and other closely controlled applications as well. They may also be used to assess and monitor the integrity of a bulk pure gas. Carbon dioxide is a good example. Beverage-quality CO2, as used in the manufacture of soft drinks, can be classified as being more of a bulk-type gas because it is used in large quantities. However, because purity is a health concern, a specialty pure CO2, in which all trace impurities have been carefully quantified, is needed to calibrate instruments used to monitor the purity of the bulk CO2.

Specialty gas mixtures

Many specialty gases are actually gas mixtures containing two or more individual components. They are frequently used with various types of analyzers for process control and regulatory compliance. Some specialty mixtures are fairly "standard" and may contain only three or four components, such as the nitric oxide and sulfur dioxide mixtures used by utility companies to calibrate Continuous Emissions Monitors (CEMs). Others may be quite complex, containing as many as 30 or more components. Usually, a specialty gas mixture is prepared using a Standard Reference Material (SRM) in order to validate accurate measurement of the mixture's components. This provides what is known as traceability to a known measurement standard from a recognized metrology institution such as the National Institute of Standards and Technology (NIST). Specialty mixtures typically have components measured in percentages, parts-per-million and parts-per-billion.
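As a rough illustration of what such precise blending involves, one common approach is blending by partial pressures: treating the gases as ideal, each component is filled to a partial pressure proportional to its target mole fraction, then the cylinder is topped up with balance gas. The sketch below is a simplified model only; the component list, pressures and concentrations are invented, real blending corrects for non-ideal gas behavior and temperature, and the result is always verified against a reference standard by laboratory analysis:

# Simplified partial-pressure blending under an ideal-gas assumption.
# Gauge-vs-absolute pressure corrections and gas non-ideality are ignored;
# a real blend is verified analytically against a reference standard.

FINAL_PRESSURE_PSI = 2000.0  # hypothetical cylinder fill pressure

targets_ppm = {"nitric oxide": 50.0, "sulfur dioxide": 75.0}  # invented mix

cumulative = 0.0
for gas, ppm in targets_ppm.items():
    partial = ppm * 1e-6 * FINAL_PRESSURE_PSI  # ppm -> mole fraction -> psi
    cumulative += partial
    print(f"fill {gas} to {partial:.3f} psi (cumulative {cumulative:.3f} psi)")

print(f"top up with balance gas to {FINAL_PRESSURE_PSI:.0f} psi")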

Laboratory analysis to quantify all components and impurities in a specialty mixture is nearly always critical. A formal document known as a Certificate of Accuracy or Certificate of Analysis is provided for each cylinder containing a specialty mixture, and also for some specialty pure gases. This certificate specifies the concentration values for all contents, as well as other important information such as the method of blending, the type of laboratory analysis, the reference standard used to prepare the mixture and the expiration date. The expiration date refers to the length of time the components of a mixture remain at their certified concentrations within the specified tolerances. Depending on the stability of the components, shelf life can vary from as little as six months to two years or more. Special cylinder preparation processes, such as Scott's Aculife cylinder inerting treatments, can be used to condition cylinder interior walls in order to extend a mixture's shelf life.

Specialty gases are typically not used in nearly as large a quantity as industrial gases and are supplied in steel or aluminum high-pressure cylinders filled to as much as 3,000 pounds per square inch gauge (psig). Hence, they are sometimes referred to as cylinder gases or bottled gases. The cylinder itself is typically not included in the price of the specialty gas it contains and must be returned to the gas supplier when the gas has been depleted; a nominal monthly cylinder rental is usually charged until the cylinder is returned. Many specialty gases are also available in small, portable and non-returnable cylinders such as Scott's SCOTTY Transportables. Other specialized containers include lecture bottles, often used in laboratories, and floating piston-type cylinders used to contain volatile liquid-phase mixtures.

The cost of specialization

Due to the blending technology, cylinder preparation, laboratory analysis and statistical quality control necessary to produce specialty gases, their cost is much higher than that of lower-grade industrial gases. An A-size cylinder containing 218 cubic feet of a low grade of helium suitable for filling party balloons might cost little more than $50. The same cylinder containing 99.9999% pure research-grade helium, with total impurities of less than one part-per-million (1 ppm), would cost about $500. That's still a bargain considering that 144 cubic feet of a three-component EPA Protocol mixture with an analytical accuracy of 1% may cost as much as $1,500. As with any other specialized product, the end cost of a particular specialty pure gas or gas mixture is largely determined by the degree of difficulty and complexity involved in its preparation.
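Using the article's own figures, the per-unit arithmetic looks like this (a sanity check on the comparison, not a price list):

# Unit-cost comparison built from the figures quoted above.
grades = [
    ("balloon-grade helium", 50.0, 218),
    ("99.9999% research-grade helium", 500.0, 218),
    ("3-component EPA Protocol mixture", 1500.0, 144),
]

for name, price_usd, cubic_feet in grades:
    print(f"{name}: ${price_usd / cubic_feet:.2f} per cubic foot")

The jump from roughly $0.23 to $2.29 to over $10 per cubic foot tracks the increasing preparation and analysis effort, exactly as the paragraph above describes.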

Considerations when purchasing specialty gases

Purchasing specialty gases can be a daunting task. In today's bottom line-oriented business climate, one might consider selecting a specialty gas product based strictly on price. Be careful! While in some cases organizations such as the EPA may dictate minimum accuracy and manufacturing processes for certain gas mixtures, there are few industry-wide standards for specialty gas quality. Blending, analytical and cylinder preparation procedures vary between suppliers of specialty gases. Moreover, suppliers do not always use common nomenclature when describing their products; even when product names are the same, the characteristics of the gases can be quite different. The best advice is to carefully evaluate your application needs before purchasing, then talk with a specialty gas expert to be sure you fully understand how the characteristics of a particular pure gas or gas mixture will either meet or possibly compromise your application. Remember also that most specialty gases require the use of specialized delivery equipment constructed of materials that will protect gas purity and integrity.

This article is copyrighted by Scott Gases. It may not be reproduced in whole or in part and may not be posted on other websites, without the express written permission of the author, who may be contacted via email at scottgas@digitalbrandexpressions.com

About the author: Bob Jefferys is the Senior Corporate Communications Manager at Scott Specialty Gases.

Sunday, February 10, 2008

EPA Regulations Raise the Bar for Industrial Air Quality Testing

Author: Kenneth Eichleman

Far-reaching environmental legislation continues to change the way Americans live, work, and run their businesses. For the past decade and a half, companies have worked toward meeting the latest air quality standards set by the Environmental Protection Agency (EPA).

In 2005, regulations introduced by the Clean Air Act of 1990 came into full effect with the goal of reducing harmful emissions by 57 billion pounds per year. The act continues to have a huge impact, both economically and environmentally, as it targets the sources of urban air pollution, acid rain, and stratospheric ozone depletion.

Air pollution is not a new problem in the United States. During the 1940s, a series of pollution-related disasters forced Americans to acknowledge the need for clean air standards. The worst of those incidents took place during a five-day period in 1948, when smog caused by industrial emissions and coal-burning furnaces killed 20 people and sickened nearly 7,000 others in the small town of Donora, Pennsylvania.

The tragedy spurred the federal government to take control of air quality management. In 1955, the Air Pollution Control Act was introduced to mandate the national investigation of air pollution. More stringent air quality controls were later established with the creation of the Clean Air Act of 1970 and the formation of the EPA. In 1990, the Clean Air Act was revised to include the following amendments:

* Title I - strengthens measures for attaining national air quality standards

* Title II - sets forth provisions relating to mobile sources

* Title III - expands the regulation of hazardous air pollutants

* Title IV - requires substantial reductions in emissions for control of acid rain

* Title V - establishes operating permits for all major sources of air pollution

* Title VI - establishes provisions for stratospheric ozone protection

* Title VII - expands enforcement powers and penalties

The legislation not only provides the EPA with innovative regulatory procedures, but allows for a variety of supportive research and enforcement measures. Individuals may face fines up to $250,000 and imprisonment up to 15 years, with each day of violation counted as a separate offense. Businesses may face fines of up to $500,000 for each negligent violation and up to $1 million per day for knowing endangerment. Many corporations must apply for national operating permits because of the emissions released by their processes.

Current industrial air quality testing is driven by the latest amendments. A major focus for manufacturers under the new provisions can be found in Title III, which identifies and lists 189 HAPs (Hazardous Air Pollutants) to be reduced within a ten-year period. This is a tremendous increase, since the EPA had previously established standards for only seven of the eight HAPs it had listed. These pollutants can cause serious health effects such as cancer and birth defects, and in extreme cases immediate death or catastrophic accidents.

Among the air pollutants the act pinpoints for monitoring are VOCs (volatile organic compounds). These chemicals are classified as organic because of the carbon they contain, but many are synthetically created. VOCs include gasoline, industrial chemicals such as benzene, solvents such as toluene and xylene, and tetrachloroethylene (perchloroethylene, the principal dry-cleaning solvent). Many VOCs, such as benzene, are on the HAP list because of the threat they pose to human health: these pollutants may cause death, disease, or birth defects in organisms that ingest or absorb them.

There are a variety of methods for the determination of TO (toxic organic) compounds in ambient air at parts-per-million (ppm) and parts-per-billion (ppb) concentration levels. Following the EPA's TO-14, TO-14A, or TO-15 Methods, VOCs in air are collected in specially prepared canisters and analyzed by gas chromatography/mass spectrometry (GC/MS) instruments.
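For context on those units: a volume-based concentration such as ppb relates to a mass concentration through the compound's molecular weight and the molar volume of air, about 24.45 liters per mole at 25 degrees C and 1 atmosphere. A small sketch of the standard conversion, using benzene at a hypothetical 5 ppb:

# Convert a volume mixing ratio (ppb) to mass concentration (ug/m3)
# at 25 C and 1 atm, where one mole of gas occupies about 24.45 L:
#   ug/m3 = ppb * molecular_weight / 24.45

MOLAR_VOLUME_L = 24.45

def ppb_to_ug_per_m3(ppb, molecular_weight):
    return ppb * molecular_weight / MOLAR_VOLUME_L

# Benzene (C6H6) has a molecular weight of about 78.11 g/mol.
print(f"5 ppb benzene = {ppb_to_ug_per_m3(5, 78.11):.1f} ug/m3")  # ~16.0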

To test air quality using these methods, a sample of ambient air is drawn from the source into a pre-evacuated, specially prepared canister. After the sample is collected, the canister valve is closed, an identification tag is attached to the canister, a chain-of-custody (COC) form is completed, and the canister is transported to a laboratory for analysis.

Upon receipt at the lab, the proper documentation is completed and the canister is attached to the analytical system. Water vapor is reduced in the gas stream by a dryer (if applicable), and the VOCs are then concentrated by collection in a cryogenically cooled trap. The refrigerant, typically liquid nitrogen or liquid argon, is then removed and the temperature of the trap is raised. The VOCs originally collected in the trap are revolatilized, separated on a GC column, and then run through one or more detectors to identify the components and concentrations in each sample. Findings are thoroughly documented in a written report which is presented to the client.

The qualitative and quantitative accuracy of these analyses is of the utmost importance. Difficulty arises in part because of the wide variety of TO substances and the lack of standardized sampling and analysis procedures.

To facilitate the improvement of laboratory air quality testing and analysis, one proactive company, Scott Specialty Gases, offers a cross-reference program for labs. Laboratories can evaluate their own proficiency by comparing their results against Scott Specialty Gases' results as well as the blind results of other participating labs. By employing the highly accurate and stable gas mixtures manufactured by Scott Specialty Gases, laboratories can also calibrate their GC/MS instruments to achieve more precise readings of samples.

Chemical manufacturing plants, oil refineries, toxic waste sites and landfills, and solid waste incinerators are just a few of the many sources of hazardous air pollutants. The financial cost of installing state-of-the-art controls is great.

Thanks to the services offered by companies like Scott Specialty Gases and to the more stringent requirements of the Clean Air Act of 1990, the environment is on the mend. The impact of industry compliance with the act has been astounding: careful testing has already shown a significant improvement in national air quality thanks to anti-pollution efforts. According to studies conducted by the Foundation for Clean Air Progress, exposure levels for ozone and particulates have decreased, and four of the six most serious pollutants identified by the Clean Air Act of 1970 are no longer being released into the air at unhealthy levels. These improvements have come despite continued population growth and increased energy usage in the United States. Regulatory vigilance and technological advances in environmental monitoring have made cleaner air a reality.

This article is provided by Scott Specialty Gases, a leading global manufacturer of specialty gases located in Plumsteadville, PA. More information on the company can be found at http://www.scottgas.com .

This article is copyrighted by Scott Gases. It may not be reproduced in whole or in part and may not be posted on other websites, without the express written permission of the author who may be contacted via email at scottgas@digitalbrandexpressions.com.

Sources:

""Clean Air Act."" Jan. 25, 1996. DOE Environmental Policy and Guidance. US Department of Energy. http://www.eh.doe.gov/oepa/laws/caa.html

Faletto, John S. ""1990 Clean Air Act Amendments - Impact on Small Businesses."" March 1994. Illinois Municipal Review. Illinois Periodicals Online (IPO). http://www.lib.niu.edu/ipo/im940311.html

""History of the Clean Air Act."" Environmental Resources for Teachers. Foundation for Clean Air Progress. 2002-2004. http://www.cleanairprogress.org/classroom/cleanairact_text.asp

McIntosh, Hugh. ""Catching Up on the Clean Air Act."" August 1993. Environmental Health Perspectives, Vol. 101, No. 3. Sept. 11, 1998. http://ehp.niehs.nih.gov/docs/1993/101-3/focus1.html

""Compendium of Methods for the Determination of Toxic Organic Chemicals in Ambient Air."" Cincinnati, OH: 1999. US Environmental Protection Agency. http://www.epa.gov/ttn/amtic/files/ambient/airtox/tocomp99.pdf ""The Plain English Guide to the Clean Air Act."" April 1993. Air Quality Planning and Standards. Updated: May 13, 2002. US Environmental Protection Agency. http://www.epa.gov/oar/oaqps/peg_caa/pegcaain.html

Scott Specialty Gases. ""Toxic Organic mixtures come in returnable cylander."" Feb. 12, 2004. Managing Automation. 2004. http://news.managingautomation.com/fullstory/30553

About the author: Ken Eichelmann earned his BS in Commerce & Engineering in 1977 from Drexel University. Ken joined Scott Specialty Gases in September 2001 as the SCOTTY Product Manager, bringing a wealth of knowledge and experience in marketing, product management, sales, management, and the process industries.

Saturday, February 09, 2008

Several types of hearing aids

Author: Michael Sanford

A hearing aid is an electronic, battery-operated device that amplifies and changes sound to allow for improved communication. Hearing aids receive sound through a microphone, which converts the sound waves to electrical signals. The amplifier increases the loudness of the signals and then sends the sound to the ear through a speaker.

Different kinds of hearing aids

There are several types of hearing aids. Each type offers different advantages, depending on its design, levels of amplification, and size. Before purchasing any hearing aid, ask whether it has a warranty that will allow you to try it out. Most manufacturers allow a 30- to 60-day trial period during which aids can be returned for a refund. There are four basic styles of hearing aids for people with sensorineural hearing loss:

In-the-Ear (ITE) hearing aids fit completely in the outer ear and are used for mild to severe hearing loss. The case, which holds the components, is made of hard plastic. ITE aids can accommodate added technical mechanisms such as a telecoil, a small magnetic coil contained in the hearing aid that improves sound transmission during telephone calls. ITE aids can be damaged by earwax and ear drainage, and their small size can cause adjustment problems and feedback. They are not usually worn by children because the casings need to be replaced as the ear grows.

Behind-the-Ear (BTE) hearing aids are worn behind the ear and are connected to a plastic earmold that fits inside the outer ear. The components are held in a case behind the ear. Sound travels through the earmold into the ear. BTE aids are used by people of all ages for mild to profound hearing loss. Poorly fitting BTE earmolds may cause feedback, a whistle sound caused by the fit of the hearing aid or by buildup of earwax or fluid.

Canal Aids fit into the ear canal and are available in two sizes. The In-the-Canal (ITC) hearing aid is customized to fit the size and shape of the ear canal and is used for mild or moderately severe hearing loss. A Completely-in-Canal (CIC) hearing aid is largely concealed in the ear canal and is used for mild to moderately severe hearing loss. Because of their small size, canal aids may be difficult for the user to adjust and remove, and may not be able to hold additional devices, such as a telecoil. Canal aids can also be damaged by earwax and ear drainage. They are not typically recommended for children.

Body Aids are used by people with profound hearing loss. The aid is attached to a belt or a pocket and connected to the ear by a wire. Because of its large size, it is able to incorporate many signal processing options, but it is usually used only when other types of aids cannot be used.

On the basis of the hearing test results, the audiologist can determine whether hearing aids will help. Hearing aids are particularly useful in improving the hearing and speech comprehension of people with sensorineural hearing loss. When choosing a hearing aid, the audiologist will consider your hearing ability, work and home activities, physical limitations, medical conditions, and cosmetic preferences. For many people, cost is also an important factor. You and your audiologist must decide whether one or two hearing aids will be best for you. Wearing two hearing aids may help balance sounds, improve your understanding of words in noisy situations, and make it easier to locate the source of sounds.

Problems while adjusting to hearing aids

Become familiar with your hearing aid. Your audiologist will teach you to use and care for your hearing aids. Also, be sure to practice putting in and taking out the aids, adjusting the volume control, cleaning, identifying right and left aids, and replacing the batteries with the audiologist present.

The hearing aids may be uncomfortable. Ask the audiologist how long you should wear your hearing aids during the adjustment period. Also ask how to test them in situations where you have problems hearing, and how to adjust the volume and/or program for sounds that are too loud or too soft.

Your own voice may sound too loud. This is called the occlusion effect and is very common for new hearing aid users. Your audiologist may or may not be able to correct this problem; however, most people get used to it over time.

Your hearing aid may "whistle." When this happens, you are experiencing feedback, which is caused by the fit of the hearing aid or by the buildup of earwax or fluid. See your audiologist for adjustments.

You may hear background noise. Keep in mind that a hearing aid does not completely separate the sounds you want to hear from the ones you do not want to hear, but there may also be a problem with the hearing aid. Discuss this with your audiologist.

For more information on hearing aids, please visit the Hearing aids resource center.

About the author: None

Friday, February 08, 2008

What's In Your Beverage? How to Ensure Quality Control with CO2 Analytical Support

Author: Leanne Merz

Calibration standards, performance audits, and the FDA's never-ending safety, labeling, and inspection requirements are just the tip of the iceberg when it comes to dealing with the increasingly stringent quality control standards of the beverage industry. As these quality standards become stricter, beverage producers are increasingly called upon to get products to market faster using fewer resources, while simultaneously managing ingredient quality, and ultimately, risk.

Mix rigorous regulations and mounting market challenges with exploding competition and the opportunity for enormous economic reward, and it becomes obvious that products must be perfect the first time around to fulfill production requirements, comply with distribution standards, and ultimately provide each consumer with the exact same exceptional product every time.

All of which makes quality control more necessary than ever.

Quality Assurance in the beverage industry starts by ensuring that top quality gases are used to perform the carbonation process and continues through the bottling and distributing process with a high-tech quality control examination.

At the top of the list of gases regulated in the world of drink is carbon dioxide (CO2), one of the main components of many of the beverages produced today, including soda, beer, sparkling water, and sports drinks. CO2 has also become a major constituent of orange juice through supercritical CO2 processing during pasteurization, and has even entered the world of dairy with the addition of "Refreshing Power Milk," a new carbonated milk hybrid, to the refreshments market.

Leading beverage manufacturers in this $700 billion industry are taking the critical step to ensure purity of beverage-grade CO2 by using analytical support gases and quality assurance services. Since ensuring purity of CO2 is such a crucial factor in the beverage production process, choosing a specialty gas company to provide purification, calibration, and cross-reference services for your products should be a priority.

Keep in mind that specialty gas companies outside of the beverage industry hold a uniquely favorable position as authoritative and neutral third-party qualifiers. These companies provide experience in developing trace contaminant calibration standards as well as independence from the supply and certification of beverage grade CO2, which helps to ensure unbiased statistical and graphical reporting.

Regardless of the industry from which the service company originates, it is vital that it provides specialized service in the CO2 industry and adheres to industry standards on commercial quality with regard to CO2.

Some more guidelines to consider when choosing a Quality Control Specialty Gas Service:

* Your CO2 supplier should provide certification and analysis indicating compliance with commercial quality standards, such as those of ISBT, the International Society of Beverage Technologists.

* Your quality assurance service company should have the resources available to create custom gas mixtures for CO2 ingredient quality control. Typical components include (but are not limited to) the following:

Methane, ethane, ethanol, dimethyl ether, ethyl acetate, methanol, ammonia, nitric oxide, nitrogen dioxide, carbonyl sulfide, acetaldehyde, benzene, cyclohexane, ethylbenzene, diethyl ether, toluene, m-xylene, p-xylene, and o-xylene.

* Preparing two sets of gas mixtures should be standard procedure for your chosen service company, with double analysis of each set to check for minor-component stability and to guarantee a shelf life for the components.

* To further assure accurate results, your service company should identify inaccuracies and verify analytical processes by having participant labs analyze blind internal audit standards.

* Your service company should furnish a report to your company's quality control department detailing analytical results, including a statistical representation of the performance of each participant laboratory (a sketch of one such statistic appears after this list).

* Membership in the International Society of Beverage Technologists (ISBT) Quality Committee, Carbon Dioxide Subcommittee, should be maintained in order to keep abreast of emerging analytical methods and technologies within the beverage industry.

* Top-of-the-line service companies will provide CO2 Cross-Referencing Services to confirm the accuracy of critical analytical processes. These programs provide beverage manufacturers with a reliable and objective method of monitoring the performance of the multiple laboratories that qualify carbon dioxide used in carbonated beverages and confirm ingredient quality. A Cross-Referencing Service should be considered in order to:

o Achieve the highest degree of confidence in the accuracy of analyses;

o Confidentially identify inconsistencies or other problems in analytical processes; and

o Maintain reliable and accurate intra-company quality assurance.

* Most importantly, make sure the service company has top rate Internal CO2 Audit Standards to meet the most demanding accuracy requirements for virtually any type of customized mixture and that a Certificate of Accuracy is provided for each cylinder.
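The article does not say which statistic such performance reports use, but a common choice in laboratory cross-reference and proficiency programs is a z-score, which expresses each lab's result as a distance from the reference value in units of the program's standard deviation. A hedged sketch of that idea, with entirely invented numbers:

# Hypothetical z-score check for a lab cross-reference program.
# z = (lab result - reference value) / program standard deviation;
# |z| <= 2 is a conventional "satisfactory" threshold in proficiency testing.

REFERENCE_PPM = 10.0   # certified value of the audit standard (invented)
PROGRAM_SD_PPM = 0.25  # allowed spread for the program (invented)

lab_results_ppm = {"Lab A": 10.1, "Lab B": 9.4, "Lab C": 10.6}  # invented

for lab, result in lab_results_ppm.items():
    z = (result - REFERENCE_PPM) / PROGRAM_SD_PPM
    verdict = "satisfactory" if abs(z) <= 2 else "review analytical process"
    print(f"{lab}: z = {z:+.1f} ({verdict})")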

By choosing a Quality Control Specialty Gas Service carefully, your company can be sure to keep pace with the ever-expanding list of regulations -- and quite possibly gain an even larger piece of this multi-billion dollar pie.

This article is copyrighted by Scott Gases. It may not be reproduced in whole or in part and may not be posted on other websites, without the express written permission of the author who may be contacted via email at scottgas@digitalbrandexpressions.com

About the author: Leanne Merz is Director of e-Commerce and Technical Services of Scott Specialty Gases, a leading global manufacturer of specialty gases located in Plumsteadville, PA. More information on the company can be found at http://www.scottgas.com .