Tuesday, July 31, 2007

Biotechnology Rather Than Aid Can Alleviate Poverty

Author: James Wachai

G8 leaders have agreed to boost aid to Africa by $25 billion by 2010. The G8 countries include the USA, Canada, Britain, France, Japan, Russia, Germany and Italy. As expected, Africa is in a celebratory mood. To many, this announcement heralds the demise of poverty in Africa: no more hunger, no more deaths from easily preventable diseases. Africa will be saved from all manner of misery, and the doubling of aid will emancipate this desolate continent from the yoke of destitution and hopelessness. Such expectations are understandable in a continent where more than 75 per cent of the population lives on less than a dollar a day.

The million-dollar question, however, is: will doubling aid to Africa, on its own, bring sustainable development? The answer is no. Africa has, in the past, refused to embrace poverty-alleviation initiatives introduced by the very countries it is begging aid from. Take the case of biotechnology. G8 countries continue to mint billions of dollars from genetically modified food. The latest report by the International Service for the Acquisition of Agri-biotech Applications (ISAAA) puts the 2005 global market value of biotech crops at US$5 billion. Unfortunately, Africa will derive negligible benefits from the sale of biotech products. The continent is still dilly-dallying over whether to embrace biotechnology. While other countries are scrambling to increase the acreage of GM crops, Africa is still procrastinating, worrying about environmental and health impacts of GM crops that science has already addressed.

Isn't it time for rich countries to demand that Africa show willingness to embrace modern farming technologies so as to reduce its reliance on foreign aid? There is certainly no better way to become self-sufficient in food production than to swim with the tide!

The US and Canada, for instance, are reaping huge economic benefits from genetically modified crops, and they happen to be among the most sympathetic to the African cause. It is ironic that Africa expects them to be generous with money accrued from a technology it despises. Africa cannot eat its cake and have it. If it will not borrow a leaf from these biotech giants, then it makes no sense to beg them for aid!

Biotech has already boosted the economies of India, Argentina, Brazil, Uruguay, Romania, Mexico, the Philippines, Australia and Spain. What is Africa waiting for? Africa, the Green Revolution bypassed you. India and Pakistan embraced the Green Revolution, and it transformed their economies. They are now basking in glory, with plenty to eat and export. These and other Asian countries no longer rely on relief food. It is time for Africa to follow suit.

South Africa, to its credit, is the only African country growing genetically modified crops for commercial purposes. The country already has 0.5 million hectares of land under GM cultivation. This is a drop in the ocean considering that the global area of approved biotech crops currently stands at 81 million hectares, but it is a step in the right direction. South Africa no longer experiences food deficits. In fact, it is a major food provider to famine-stricken countries such as Zimbabwe, Zambia and Mozambique, all of which are yet to embrace biotechnology.

This is the path the rest of Africa should follow. Instead of begging the West for aid, Africa should strive to share in the spoils of technologies such as biotechnology. This is the surest way of alleviating poverty.

About the author: James Wachai is a communication specialist who uses his expertise to increase public understanding of science and technology, specifically biotechnology. Read more from James at http://www.gmoafrica.org.

Sunday, July 29, 2007

How To Make Lighter and Thinner Magnesium Components?

Author: Ken Yap

Magnesium is the lightest structural metal, offering very good damping characteristics, weldability and excellent shielding against electro-magnetic interference, and it is abundant in supply. It is an excellent material for portable electronic and telecommunication devices and for automotive and aerospace equipment: MD player casings, chassis for cell phones, video cameras and notebook computers, automotive gear housings, car wheels and engine blocks.

The most common methods of producing magnesium parts are die casting and thixomolding. However, these runner-and-gating processes give a low material yield of only about 30% for thin-wall casting, and they can only produce walls of between 0.7mm and 1.2mm.

If magnesium parts could be formed from sheet metal, just like the stamping of steel and aluminum parts, material yield could reach about 80%, with possibly safer operation thanks to the lower processing temperature. However, magnesium is generally regarded as non-formable at room temperature: its hexagonal close-packed structure makes it very resistant to deformation. The answer is warm forming, since deforming magnesium above 225 degrees Celsius causes additional slip planes to become operative.
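As a rough sketch, the yield figures quoted above imply how much raw material each route consumes per finished part. The part mass below is an invented illustrative value, not a figure from the article:

```python
# Material yield comparison between die casting (~30%) and warm sheet
# forming (~80%), using the yield percentages quoted in the article.
# The part mass is a hypothetical illustrative value.

def raw_material_needed(part_mass_g: float, yield_fraction: float) -> float:
    """Raw material consumed per finished part at a given material yield."""
    return part_mass_g / yield_fraction

part = 12.0  # hypothetical mass of a thin-wall magnesium chassis, in grams
die_cast = raw_material_needed(part, 0.30)     # about 40 g of input per part
warm_formed = raw_material_needed(part, 0.80)  # about 15 g of input per part

print(f"die casting: {die_cast:.1f} g, warm forming: {warm_formed:.1f} g")
```

At the same part mass, the higher-yield route consumes well under half the input material, which is the economic argument the paragraph makes.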

Extensive process research in this area has resulted in a few warm-forming hydraulic presses on the market for draw forming. Recently, research in warm draw forming of magnesium cell phone chassis has shown that 0.4mm walls can be achieved consistently. Metallographic tests of the chassis have also demonstrated zero porosity and increased rigidity.

Current warm-forming press systems are complicated to operate, as they require the preliminary building of stroke and force profiles for each product using data acquisition modules and forming simulation software. Even so, the growing replacement of aluminum and plastics with magnesium in handheld electronic devices may well accelerate adoption. Progressive early adopters of this technology would have a first-mover advantage in the competitive global manufacturing industry.

About the author: Ken Yap is a director of Suwa Precision Engineering Pte Ltd in Singapore, which represents metal stamping, precision machining, miniature precision ball and PCB manufacturers from Suwa, Japan. He is also a director of Attisse Pte Ltd, a business and market research consultancy for Japanese investors.

Saturday, July 28, 2007

What's the MATTER?

Author: Charles Douglas Wehner

Einstein showed that energy can be converted into matter? That's a nice trick!

Matter consists of the elements hydrogen, helium, lithium, beryllium, boron, carbon, nitrogen, oxygen, fluorine, neon and so on. That is the periodic table of the elements, from Mendeleev, in which the components of matter are laid out in order of increasing weight, for ease of study.

So I invite you to take some energy and convert it into matter. I will then take that matter into the chemical laboratory, and tell you whether it was hydrogen, helium, beryllium or some other element that you have made for me.

If you do not accept the challenge, you will never know what you would have made. You might have transmuted base energy into GOLD - the 79th element of the Periodic Table. This has been the quest of alchemists throughout the ages. You cannot know what wonders you have missed.

Still not tempted? Then I will tell you a secret. Even nature cannot transmute energy into matter. Some might say that even God cannot.

But Einstein was a GENIUS. He could do things where Nature and God had failed. Or so we are given to believe.

So Einstein was wrong. How often I have heard this phrase. I was accosted on the street for years by a young man who had heard that I was clever. He would routinely deliver his latest critique of Einstein. It was based, as is usual, upon semi-science, and eventually I knew why. Those who run the "Einstein was wrong" campaign have two things in common. Firstly, they are not scientists, and secondly, they have a history of mental illness. Theirs is a campaign, by finding something deep and profound, to silence their own critics who say that they are mad.

Einstein was not wrong.

The problem is compounded by the disinformation campaign run by governments. Einstein had, with his famous equation, quantified the energy that might be obtained from a bomb. He had provided the mathematical basis for nuclear warfare, and the tyrants who deign to rule us were quite thrilled. They made Einstein into the greatest genius ever known, even though there have been several "Zweisteins". Marie Curie and Linus Pauling, for example, both got two Nobel prizes where Einstein got only one.

In 1955, Einstein started with Bertrand Russell the "Ban the Bomb" campaign. They jointly wrote the Russell-Einstein Manifesto. Einstein died - probably murdered - just before the American "Ban the Bomb" campaign was to start. The Manifesto was issued, and Russell soldiered on alone in England.

In his quest for peace, Einstein was not wrong either.

So the disinformation campaign takes many forms. Firstly, it spreads dirty rumours about the private lives of Russell and Einstein. Secondly, it plays down Einstein's deep commitment to peace. Thirdly, it keeps the secrets of how to build the bomb hidden, in case an enemy might discover them.

Science is simple. Fake scientists are bombastic. So, in the concealment of the few true facts that go into the making of the nuclear industries, a vast array of complicated and fanciful "theories" is generated, which leads to nothing useful. One of those myths is that energy turns into matter, and matter into energy. There is no such nuclear alchemy.

To understand what Einstein said, we must first consider physics. For our equations, we have only pounds, feet and seconds to work with. These might be converted into centimetres, grammes and seconds, or into metres, kilogrammes and seconds - or into something involving minutes or hours, but they remain for all time MASS, DISTANCE and TIME.

So mass, distance and time are the three physical "elements" - or physical "dimensions". They have nothing to do with chemistry.

Mass is not to be confused with weight. Consider what happens if you have a bag of apples and take it into orbit. On Earth, it weighs a pound. In orbit it weighs nothing. We speak of the orbital condition as "weightlessness".

Now let us throw the apples against a wall. Let us do this twice, firstly at one foot per second and then at a thousand. On Earth, the apples bounce off the wall after an impact at one foot per second. They are smashed to pulp at a thousand feet per second.

In orbit it is exactly the same. Weightlessness is not masslessness.

What causes the impact is momentum - the product of mass and speed. In the first case, we have a single pound-foot-per-second, in the second we have a thousand. The second momentum is a thousand times the first.

The energy needed is one half times the mass times the speed squared. In the first case we have half a foot-poundal; in the second, half a million foot-poundals.
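The two throws can be checked with a few lines of arithmetic, here sketched in Python in the same foot-pound-second units the text uses:

```python
# Momentum (m * v) and kinetic energy (0.5 * m * v**2) of the
# one-pound bag of apples at the two speeds used in the text.

mass_lb = 1.0

for speed_fps in (1.0, 1000.0):
    momentum = mass_lb * speed_fps            # pound-feet per second
    energy = 0.5 * mass_lb * speed_fps ** 2   # foot-poundals
    print(f"{speed_fps:6.0f} ft/s: p = {momentum:g} lb*ft/s, E = {energy:g} ft-poundal")
```

The momentum grows linearly (a factor of a thousand), while the energy grows with the square of the speed (a factor of a million), exactly as the two paragraphs above state.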

Scientists prefer to use other units, however. In the metre-kilogramme-second system (MKS), the energy will be in JOULES. This is a convenient unit, because it is WATT SECONDS. It makes sense to the scientist, because he can visualise for how long a one-Watt torch would shine, and so get the "feel" of the energy.

Light travels at three hundred million metres per second. So one foot per second is about one third of a metre per second, or about a billionth of the speed of light.

According to Einstein's equation, something travelling at a billionth of the speed of light becomes heavier by half times a billionth times a billionth of its mass.

So, the apples have a rest mass of 1.0000000000000000000 pounds. In motion, their mass becomes 1.0000000000000000005 pounds.

There is very little difference, and Einstein's modification to the laws of motion barely disturbs calculations that are based on Newton.

At a thousand feet per second, however, the mass is 1.0000000000005 pounds.

The increment in mass has grown by a factor of a million, although it is still trivial.
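The low-speed approximation being used here can be written out explicitly: the fractional mass increase is roughly half the square of the speed ratio v/c. The speed ratios below follow the text's rounding of one foot per second to a billionth of the speed of light:

```python
# Low-speed approximation of the relativistic mass increase,
# delta ~= 0.5 * (v/c)**2, as used in the text.

def mass_increment(beta: float) -> float:
    """Fractional mass increase for speed ratio beta = v/c, valid for beta << 1."""
    return 0.5 * beta ** 2

slow = mass_increment(1e-9)   # ~1 foot per second:    5e-19
fast = mass_increment(1e-6)   # ~1000 feet per second: 5e-13

print(slow, fast, fast / slow)  # the increment grows by a factor of a million
```

Squaring the speed ratio is what turns a thousandfold increase in speed into a millionfold increase in the mass increment.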

It is useful at this point to divide the number into two parts. We will call the 1 the "rest mass" and the .0000000000005 the "relativistic mass". Nothing has happened to the apples other than their motion and their change in mass. A bag of five apples does not become a bag of six apples for me to take into the chemical laboratory and discover the added carbon, hydrogen, oxygen and other elements. In short, there is no chemical change.

As the mass increases with increasing speed, we find that for the next relativistic calculation we must calculate the relativistic increment of the 1, and also the relativistic increase in the 0.0000000000005. Thus, the relativistic mass grows by compound interest, not by simple interest. As the speed climbs towards that of light, the apples become infinitely heavy.

When the apples are almost infinitely heavy, we need almost infinite energy to accelerate them at all. So close to the speed of light, they become impossible to shift.

It seems that nothing can reach the speed of light, for if it did it would contain infinite energy.
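The runaway growth described above comes from the full relativistic factor gamma = 1 / sqrt(1 - (v/c)^2), which replaces the compound-interest picture once the speed is no longer tiny. A short sketch shows how it blows up near the speed of light:

```python
import math

# Full relativistic factor: the moving mass is the rest mass
# multiplied by gamma, which diverges as v approaches c.

def gamma(beta: float) -> float:
    """Lorentz factor for speed ratio beta = v/c, with 0 <= beta < 1."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print(f"v = {beta}c -> mass multiplied by {gamma(beta):.1f}")
```

Each extra nine in the speed ratio multiplies the mass severalfold, so the energy needed for further acceleration grows without bound, which is why nothing with rest mass reaches the speed of light.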

But what of light itself?

How did light reach the speed of light? It didn't. It was BORN THERE.

What happens when you slow light down to rest? What is its rest mass?

If you slowed light down, it would not be light. All light travels at the speed of light.

If you could slow it down, it would have zero rest mass - because it is not matter.

All things, like bagsful of apples, require energy to move them. The apples have their own intrinsic mass - the "rest mass" - and the energy has its own mass - the "relativistic mass".

Light requires no energy to move it because it is born on the move. It consists purely of energy, and has "relativistic mass" without having "rest mass".

Regarding chemistry, we can consider hydrogen. It consists of an electron held by its negative charge to a proton, which is positive. However, the electric charge only works up to a point. It takes enormous force to drive the electron into the proton because the short-range forces repel it.

If we persevere, we can indeed crush hydrogen together - and the electric charges cancel. We have created a neutron.

In the SI system of physical units (MKS), the electron weighs 9.10956 times ten to the power MINUS THIRTY-ONE kilogrammes. It would take about 1.1 MILLION MILLION MILLION MILLION MILLION electrons (that is, 1.1 times a million to the power five) to make a kilogramme, which is about 2.2 pounds. A million to the power five electrons therefore weigh about two pounds.

Similarly, the proton weighs 1.67261 times ten to the power minus 27 kilogrammes, and the neutron weighs 1.67492 times ten to the minus 27 kilos.

We can see that when we force an electron and a proton together, the neutron weighs as much as 2.536 electrons plus one proton.

Here, the rest mass within the neutron is the sum of the rest masses of the electron and of the proton, whilst the surplus of 1.536 electron masses is the relativistic mass due to something moving inside the neutron. We don't know what it is.
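The bookkeeping above can be checked directly from the particle masses quoted two paragraphs earlier:

```python
# Check of the arithmetic with the particle masses quoted in the
# text, in kilogrammes: the neutron's mass exceeds that of one
# proton plus one electron, and the surplus, measured in electron
# masses, is what the text calls the relativistic mass.

m_electron = 9.10956e-31
m_proton = 1.67261e-27
m_neutron = 1.67492e-27

# Neutron mass expressed as one proton plus this many electron masses:
electrons_worth = (m_neutron - m_proton) / m_electron
surplus = electrons_worth - 1.0  # beyond the one "real" electron

print(f"neutron = proton + {electrons_worth:.3f} electron masses")
print(f"surplus (relativistic) mass: {surplus:.3f} electron masses")
```

The quotient comes out at about 2.536 electron masses, leaving the surplus of roughly 1.536 electron masses that the text attributes to internal motion.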

We might conjecture that the electron has been shifted from its outer orbit into some kind of internal orbit within the proton. By whizzing about inside the proton, the electron is everywhere at once, and effectively cancels the positive charge of the proton in all places.

So the neutron is a hydrogen atom with stored energy - as if the electron within it were some kind of flywheel having energy-storage due to its spin.

If we throw the neutron with enough speed at an atom of uranium 238, it forms uranium 239. After a variable period averaging about half an hour, an electron flies out: a neutron inside the uranium 239 has turned back into a proton, and the atom is now neptunium 239, which over a few days sheds a second electron to become plutonium 239. However, the plutonium together with the ejected electrons has the same mass as uranium 238 plus an electron plus a proton. The only difference is the binding energy by which the protons are held in the plutonium. This binding energy has relativistic mass.

What has happened to the surplus 1.536 electron masses? They have turned back into the energy that we used to crush the hydrogen at the beginning of the experiment. A gamma ray - light of even higher energy than an X-ray - has also been emitted, and light is not matter.

So everything has been accounted for. The matter has always stayed as matter and the energy has always stayed as energy. It is true that we could not see the energy inside the neutron, but we conjectured at some flywheel motion we cannot see - and we could detect its relativistic mass.

When people say that "Einstein was wrong", they mean that the REPORTERS who report Einstein are wrong. Bad reporters confuse MATTER with MASS.

Einstein said that energy can transmute into relativistic mass, and relativistic mass can transmute into energy.

In short, energy has mass, and takes it everywhere it goes.

The laws of the conservation of matter, and of the conservation of energy, have been preserved.

Charles Douglas Wehner

About the author: Charles Douglas Wehner, born 1944 in the Isle of Man, was a technical author in nucleonics and radar as well as a design engineer and factory manager in photoelectrics and other electronics and a computer programmer. He has a website http://wehner.org devoted to special science.

Friday, July 27, 2007

DNA Genealogy

Author: Curt Whitesides

The next time you are watching your favorite CSI TV show or movie and stumble into the fascinating world of DNA, you might be surprised to learn that our DNA can do more than identify a suspect or victim at a crime scene. In fact, DNA is now being used to identify ancestors in the new and exciting field of DNA Genealogy.

DNA Genealogy takes traditional genealogy and applies genetics to it. It involves the use of genealogical DNA testing to determine the level of genetic relationship between two individuals (Genealogical 2005). DNA, deoxyribonucleic acid, is used in the process because of its unique nature and the fact that it is passed down from one generation to the next. As it is passed down, some parts of the DNA remain almost completely unchanged, while other parts change dramatically. This property makes it possible to identify certain consistencies between generations and thus to identify genetic relationships.

There are two types of DNA tests available for DNA Genealogy: Mitochondrial DNA (mtDNA) tests and Y-chromosome DNA tests.

Mitochondrial DNA (mtDNA) is found in the cytoplasm of the cell instead of in the nucleus, as Y-chromosome DNA is (Tracing 2003). mtDNA is passed by a mother to both her male and female children without any additions or mixing from the father; therefore, your mtDNA is the same as your mother's. mtDNA differs in nature from Y-DNA: it changes slowly, making it more difficult to determine close relationships but easier to determine distant relatedness. If two people have the same mtDNA, there is a very good chance they share a common maternal ancestor. Unfortunately, it is difficult to determine whether that common maternal ancestor was recent or instead lived hundreds of years ago.

Y-chromosome tests have been used more and more recently in DNA Genealogy. Y-DNA tests are only available for males, because the Y-chromosome is passed down only along the paternal line, from father to son. Tiny chemical markers on the Y-chromosome create a unique pattern, called a haplotype, which distinguishes one male lineage from another. This type of testing is often used to determine whether two individuals who share a surname also share a common ancestor.

One of the early landmarks of DNA Genealogy was a study published by Bryan Sykes in 2000 (Sykes and Irven 2000) that used Y-chromosome markers along with surname studies to determine relatedness. The study compared 48 men with the surname Sykes from England and analyzed four Short Tandem Repeats (STRs) on their Y-chromosomes: DYS19, DYS390, DYS391 and DYS393. Of the 48 men tested, 21 had the same core haplotype, and many others were only one mutational step away from it. Sykes interpreted these results as revealing a common origin from an ancestor who lived some 700 years ago (Butler 2005).
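As a minimal sketch of the haplotype idea, the comparison can be expressed as counting one-repeat mutation steps across the four markers named in the Sykes study. The repeat counts below are invented for illustration, not taken from the study:

```python
# Sketch of Y-STR haplotype comparison over the four markers from
# the Sykes study. The repeat counts are hypothetical illustrative
# values; real ones come from a genealogical testing laboratory.

MARKERS = ("DYS19", "DYS390", "DYS391", "DYS393")

def mutational_steps(hap_a: dict, hap_b: dict) -> int:
    """Total number of one-repeat mutation steps separating two haplotypes."""
    return sum(abs(hap_a[m] - hap_b[m]) for m in MARKERS)

core = {"DYS19": 14, "DYS390": 23, "DYS391": 10, "DYS393": 13}    # hypothetical core haplotype
person = {"DYS19": 14, "DYS390": 24, "DYS391": 10, "DYS393": 13}  # one step away

print(mutational_steps(core, person))  # -> 1
```

A distance of zero suggests a shared paternal lineage; small distances, like the one-step matches in the study, are consistent with a common ancestor plus a few mutations over the centuries.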

Since its early beginnings, DNA Genealogy has come a long way and has grown rapidly. It continues to increase in popularity as tests become more affordable and as the number of markers and the resolution of the tests grow. Additionally, modern DNA collection techniques make testing a simple and pain-free process.

Sources

Butler, J. (2005) Forensic DNA Typing: Biology, Technology, and Genetics of STR Markers, pp. 74, 231-232.

Genealogical DNA test. (2005, December 7). Wikipedia, The Free Encyclopedia. Retrieved 21:52, December 8, 2005 from http://en.wikipedia.org/w/index.php?title=Genealogical_DNA_test&oldid=30489865.

Sykes, B. and Irven, C. (2000) American Journal of Human Genetics, 66, 1417-1419.

Tracing Your Ancestry Through DNA (2003). About.com Genealogy. http://genealogy.about.com/cs/geneticgenealogy/a/dna_tests.htm

About the author: Relative Genetics, a leading provider of DNA Genealogy services, specializes in testing on both the paternal and maternal lines, extended and nuclear family relationships, and Ancestral Origins(TM) analysis, including both deep ancestry and ethnic heritage analysis.

Thursday, July 26, 2007

A Brief History of Creation - Part One

Author: Clara Szalai

What is the loop of Creation? How is there something from nothing?

In spite of the fact that it is impossible to prove that anything exists beyond one's perception, since any such proof would involve one's perception (I observed it, I heard it, I thought about it, I calculated it, etc.), science deals with a so-called objective reality "out there," beyond one's perception, professing to describe Nature objectively (as if there were a Nature or reality external to one's perception). The shocking impact of The Matrix was precisely the valid possibility that what we believed to be reality was but our perception; however, this was presented by showing a real reality wherein the perceived reality was a computer simulation. Many who toy with the idea that perhaps, indeed, we are computer simulations drift towards questions such as who could create such software and what kind of hardware would be needed for such a feat. Although such questions assume that reality is our perception, they also axiomatically presuppose the existence of an objective deterministic world "out there" that nevertheless must be responsible for how we perceive our reality. This is a major mistake, emphasizing technology and algorithms instead of trying to discover the nature of reality and the structure of creation. As will be shown in the following, the required paradigm shift from "perception is our reality fixed within an objective world" to "perception is reality without the need of an objective world 'out there'" is provided by a dynamic logical structure. The Holophanic loop logic is responsible for a consistent and complete worldview that not only describes, but also creates, whatever can be perceived or experienced.

Stating that it is impossible to prove the existence of anything beyond one's perception is not saying there is nothing beyond perception, only that if there is anything, then whatever that is, is indefinite. It could be argued that the existence of physical laws, the universal perception that the apple falls to the ground is proof of an objective reality. However, this universal agreement is also our perception. It could be argued that if we cannot decide what to perceive, and everybody perceives the same physical reality, then there must be some lawfulness that dictates how we perceive and therefore, this lawfulness could be external to our perception. However, this lawfulness, as we shall see later on, is the precise lawfulness that creates perception, the process of definition, which is not external to perception (this process creates the perceived and the perceiver, which then gives meaning to this process - a loop - but about that, later). It could be argued, that hitting our knee on the table - whether we believe in the table or not - will hurt. The table is external to our body, but not to our perception. What then is perception? It is relating, a process of definition, defining and thereby rendering meaningful what has been perceived.

What then is this process of definition? It is creating borders within which one's perception gains meaning. The word "definition" comes from the Latin de finire, meaning to make finite or limited. In Hebrew, definition is HAGDARA (הגדרה), meaning to border. Any definition necessarily implies what the definition is not; stated differently, to have meaning, whatever is defined explicitly includes the meaning by implicitly excluding everything else. Consequently, to define means to place the defined object within borders that by default create something beyond the borders of the definition. What is this something beyond the defined? The implicitly excluded everything else, or in other words, the indefinite. The paramount importance of incorporating the indefinite within a consistent logical structure cannot be overemphasized. The indefinite itself is a paradox, and incorporating it within the Holophanic logical structure engenders the loop of Creation, where the dynamic structure of paradoxes is both the creative force of existence and also the proof of the necessity of existence.

To better grasp the impetus of Creation, let's look at the indefinite and paradoxes. What does "indefinite" mean? Anything, as long as it is not specified (not defined); anything that appears both within and beyond the borders of the definition, thereby rendering the border superfluous, which means no border, no definition. If we nevertheless attempted to define the notion "indefinite," that would be a paradox, because if we succeeded, then it would be defined, which contradicts its meaning - its indefiniteness - since the word "indefinite" means that it cannot be defined. This is an example of a paradox, which in essence means: if it is what it is, then it is not what it is, yet if it is not what it is, then it is what it is. A paradox is a creature that consists of a structure (how it is defined, the dynamic process on its way to stabilization) that contradicts its significance (what it is, the stabilized entity). What characterizes a paradox is the motion between its structure and significance, where the structure implies that its significance contradicts its structure, and vice versa.

Another example of a paradox would be "wholeness." Wholeness (totality, the infinite, the boundless) can only be wholeness if we can find a way to define it so that it includes everything and there is nothing beyond it. However, if we define wholeness, then to have meaning it must be bordered within the walls of the definition, which implies that there is something beyond this border, in which case it is not wholeness. In more formal language, wholeness is only wholeness if it is not wholeness, which is an inconsistency. If we are satisfied with that, then we have completed the definition of wholeness. However, if we try to include the beyond created by our earlier definition within the borders of our next attempt at defining wholeness, then we gain a new definition of wholeness, which by the sheer structure of the process of defining creates a new beyond. In this case, the process of defining wholeness will be consistent but incomplete, and wholeness will remain indefinite.

Contemplating the paradox of Creation, the ancient Egyptian myth of Creation springs to mind, the myth of the self-creating god, Amun (or Amon). Amun masturbated and swallowed his semen, after which he spit it out in the form of a ball, thereby impregnating his mother, the sky. And only then, was he born. Thus Amun was his own father. Those pious who discovered the illustrated version of this myth in Karnak covered up the erect phallus of Amun, and with it, this story of Creation was laid into obscurity. The Holophanic model of Creation could regard this Egyptian myth as Amun retromorphously creating himself. I have coined the word retromorphous to mean, defining in retrospect, turning non-being into the potential of whatever the observation is made from, or in other words, creating the past from the present, creating the source from its outcome, which is the basis of complexity in the context of the loop logic. That is, only after Amun was born can he give meaning to his mother, the potential from which he emanated and to the process that created him (as represented by masturbation and incest) whereby he was born. Of course, neither the sky nor the masturbating Amun have meaning until Creation takes place de facto and Amun emerges. I find this an enticing illustration of the basic paradox of existence.

So how can there be something from nothing?

To be continued... © Clara Szalai

About the author: Clara Szalai is a philosopher, author, speaker and consultant. Holophany is Clara Szalai's revolutionary philosophy, a consistent and complete worldview that is awakening growing interest among scientists and laymen. Clara Szalai is also the author of the book "Holophany, the Loop of Creation." Complete information on Ms. Szalai's work is available from her web site, http://www.holophany.com

Wednesday, July 25, 2007

A Brief History of Creation - Part Two

Author: Clara Szalai

So how can there be something from nothing? What is "nothing?" Nothing is what didn't turn into the potential of something. If there were something from nothing, then that nothing would have turned into the potential of something, because when we ask how there is something from nothing, we ask this question from something, when something already exists. If we take a deeper look at "nothing," we'll discover that "nothing" is a paradox. Any definition is something, so if we defined "nothing," then it would become something, which contradicts its essence of being "nothing." Another way of looking at "nothing" would be as something that is meaningless. That is, "nothing" could be something that does not relate and that no thing or no one relates to. If there were something totally alone in the universe, then that would be nothing, but it would be meaningless. If such a thing existed, its existence would be external to our perception, and as such, this "nothing" would be indefinite.

We said that the indefinite could be anything, as long as it is not specified (not defined). However, if we nevertheless tried to define "nothing" (the indefinite), what would we get? Since "nothing" is non-definable, it is transparent as the object of our inquiry. So when we attempt to define it, all we have is what we put into it, which is the process of definition. "Nothing" stayed nothing; we didn't define it, only made the process of definition explicit. "Nothing" gains meaning when we fail to define it; but having tried, we are left with a bonus, a something, which is our process of defining "nothing." Creation of something from nothing is not a function of defining something, but a function of attempting to define "nothing." And then, if that process of definition - which already is an existence - looks back at its origins, if this process of defining investigates its own genesis, then what does it see? It sees itself. It sees the process of definition - self-reference.

If there is nothing external to perception, then this process of definition is the overall wholeness, the creator of meaning when it can relate to itself. However, to have meaning, the process of definition has to be defined; this definition would be a self-referential quasi-infinite and continuous process of establishing borders that create the indefinite beyond that establishes borders creating the indefinite beyond that establishes borders... which means, wholeness would continuously and forever fail to define itself while succeeding to define something - anything but itself.

Of course, both the totally defined and the totally indefinite are idealized notions that are inconsistent with the Holophanic loop logic; neither can be found in nature. The totally indefinite would be the total meaningless nothing, the kind of non-being that cannot be fathomed, because if we thought about it, it would already be something. On the other hand, there can be no total definition either. I have used the term uncertainty of sameness to describe the logical impossibility of total definition. A defined entity can be said to have reached sameness -- it is the same as itself -- which means that it is, it exists as something definite, no matter which parameters defined it. However, no sooner does our object achieve sameness than the uncertainty of sameness raises its ugly head. Could it have been defined differently? Yes, of course. Could it have additional parameters? Yes, of course. Could it have been defined more precisely? Yes, of course. This uncertainty of sameness is the indefinite included in the definition, which is the result of including the tools of definition in the definition. Since 'a' can only be defined as 'a' with meaning if it implies 'not-a' (the indefinite beyond the borders of the definition), and since 'a' can only have meaning as 'a' because it is different from everything else (that everything else is the indefinite beyond the borders, which actually gives meaning to 'a'), the meaning of 'a' depends on 'not-a.'

When the meaning of something depends on the indefinite, on what our defined object is not, then this indefinite is necessarily included in the process of definition. This logical implication, that perception of meaning is possible if and only if the indefinite is included within the perception, is the reason why the 19th century dream of a consistent and complete axiomatic system with only well defined (explicit) empty signs had to fail (see more about that in my article, The Loop Logic). In spite of the fact that logic is the fundament of algorithms and computer science, it had neither the aspiration nor the ability to be connected to the real world, precisely because its propositions were so anemic regarding meaning. In the effort to exclude any hint of the indefinite, logical inference was confined to a binary world of true and false, lacking any correlation with life and experiencing. However, including the indefinite in the process of definition not only makes the loop logic the fundament of existence, but determines the necessity of existence. With the birth of Holophany, Heidegger's question, "Why is there anything at all, rather than nothing?" becomes irrelevant. When existence is relations, and relating is the act of perceiving, and perceiving is the process of definition, then existence is the overall lawfulness, the isomorphous lawfulness of the process of definition - the loop of Creation. What is being perceived, what is being stabilized, which significance is brought to the foreground from the amorphous background of the indefinite, depends on the non-linear rules of complex interactions. Thus the loop logic emphasizes the creation of essents rather than their interactions.

Is there a lawfulness responsible for any and every existence? An electron and a dog are very different creatures; so what invisible lawfulness is responsible for the existence of both? What kind of lawfulness would fulfill such demands? The answer is isomorphism -- the same logical inner structure in entirely different representations. Whether an electron, a dog or the weather, each could be a different realization of the same inner logical structure. Creation of anything is the creation of meaning, which is an act of definition. The act of definition attempting to define itself is consciousness. So consciousness, or the soul if you wish, is not some invisible copy of our body carrying our identity, but the lawfulness of Creation expressed as our individual qualitative essence. Of course, it has been endlessly stated that we are God, that we are parts of God, and similar phrases. This is true, but true in the sense that God is the lawfulness that unfolds Creation, and this lawfulness is inherent in all creation including the creatures therein. It could be argued that a soul, a person, is more than mere definitions and intellect. If this logic is the logic of anything and everything, then it should be able to delineate the logical structure of experience as well. Indeed.

Anything that has meaning has to be defined, which places it somewhere on the scale between the continuous and the discrete, between the indefinite and the definite. The indefinite, continuous, infinite tends in the direction of the meaningless, whereas the meaningful is at best imprecise. Experience is the process of attempting to define the indefinite. When we try to capture an experience in a description, we are actually defining our attempt at defining the indefinite. The experience is continuous whereas its description, the definition is discrete. Just as we can never define wholeness, we can never define experience. Any description, any definition, is by nature discrete, whereas the net experience is continuous. So when we have an experience or perception and we become aware of having that experience, then we give it meaning by defining what it is. By doing this we create a discrete replica of the experience, yet the experience remains continuous and non-definable, non-discretizable. Experience is connected to learning. The person encounters something new. How do we know that something is new? Because it is inconsistent with our system. So when we interact with it, we have to integrate it, to assimilate it into our system. If we met something that was not new to the system, then our system would recognize it as part of itself. When that recognition does not occur, the system is interacting with something new. That is the impact. The system adjusts to include the new - that is the change. One's selfhood is the path of changes following one's experiences.

Our knowledge of the experience - whatever it might be that we experience - makes it exist for us. We could say, one only experiences when one is aware of experiencing. How do we know that we are aware of experiencing something? By experiencing it, we experience the awareness of experiencing. In this sense, experience and awareness of the experience, experiencing the awareness of the experience, being aware of experiencing the awareness of the experience, etc. is an infinitely continuous chain, which is what defines what experience is (not the interpretation of a specific experience, but experience in its general sense). And that's the definition of experience: an infinite loop of the process of becoming aware.

When ""nothing"" is the limit of both the totally indefinite and the totally defined, then that's like a circle of going from something to nothing to something to nothing, etc. The 'going' here means perception. ""Nothing"" is only a notion that has meaning if it has been perceived, in fact, a paradox. If it really is ""nothing,"" then it cannot be defined, and hence, it has no meaning. Yet if I relate to it, then it is something. So whenever I relate to ""nothing,"" whenever I say, Creation of something from nothing, that ""nothing"" has meaning for me, and hence, it is significance -- it is something just like any other something. That is, the structure of ""nothing"" is the same structure as that of something. Essentially, something from nothing is formation , not Creation, since nothing is also something. Then what is Creation? Creation is rather the creation of nothing from something, because Creation is the process of definition, and when we define, we create the indefinite beyond the definition, which at its limit is nothing, and only then can we have something from nothing... Oh yes, the loop. A true loop is only such if it contains its own source. If nothing can be proven to exist external to perception, then logic must be a loop, and existence is a logical necessity inferred by the loop.

Including the indefinite in the process of definition has far reaching consequences. It means that the tools of the definition are necessarily included in the definition. It means that meaning can only occur when there is both definition and also experience. It means that consciousness (whether it succeeds to define or not) must be part of science or any so-called objective endeavor. It means that any and all perception includes experience. The interaction with the indefinite, the experience, is what gives meaning to the defined. Perception, meaningful definition, can only occur in a highly flexible complex system that can learn and change. That's the difference between us and an electron, which only has fixed relations, and consequently, limited interactions. An electron always succeeds in defining, or it would be more correct to say, it can only interact with what it succeeds in defining. If it encounters the indefinite, it assumes a state of superposition.

Where is God in the loop of Creation? If we wanted to define God, the totality, we could not define God, because by the act of definition we would create the beyond, what is beyond God, which contradicts God's totality. Therefore, no definition of God would do justice to God, and every such definition would truncate God's wholeness. If God is indefinable, then God is indefinite. If God is indefinite, then I create God by the implication of the act of definition - any definition, because every definition creates the beyond, the indefinite beyond the borders of the definition. In that sense, this is consistent with the statement that I create God by my perception (definition). This does not say that I perceive God, but that my perception implies the existence of the indefinite (God). This means that if I perceive a dog, this perception implies the existence of God. If I perceive that I perceive, then that implies the existence of God. If I perceive dust, a table, an idea, whatever, then that implies the existence of God. If I experience, then that implies the existence of God. That's because any existence implies the existence of God. And that's because any existence is such if it relates or is related to, if it has meaning, if even partially it has been defined, which means, its mere definition implies the indefinite beyond the borders of the definition, it implies God, the indefinable. So one cannot directly perceive God (perhaps that is why it was stated in the Bible that no one could see God's face and live = exist - "no man shall see me, and live..." - Exodus 33:20), but only know about God by implication, which means, the implication of the indefinite - God - is what attributes meaning to any existence.

However, ""God"" does not equal ""indefinite,"" but the process that implies the existence of the indefinite is what could be said to be God, since that's the process of Creation. This is the process of Creation that both creates something, existence, and also nothing, the indefinite. This is why this logic is a loop. © Clara Szalai

About the author: Clara Szalai is a philosopher, author, speaker and consultant. Holophany is Clara Szalai's revolutionary philosophy, a consistent and complete worldview that is awakening growing interest among scientists and laymen. Clara Szalai is also the author of the book, "Holophany, the Loop of Creation." Complete information on Ms. Szalai's work is available from her web site, http://www.holophany.com

Tuesday, July 24, 2007

The Spirit of Soul

Author: Sam Oliver

When was the last time you closed your eyes and simply paid attention to the inner world in you? As you close your eyes and pay attention to your inner self, insight is awakened. You become conscious of what infuses your external world.

Each of us has individual awareness, or ways we interpret the world around us. Because of our unique experiences of the world, we take in multiple images across the span of a lifetime. These experiences are imprinted in our psyche and our soul. The act of recalling these memories creates expressions or feelings in our heart. These past expressions are pondered in the inner vision of our mind and our heart as though we are re-living them in the present. In so doing, we are retrieving our soul at various points of interest that have lodged a sense of importance in our individual awareness. The movement from the world around us to the world within us is a shift in attention. This shift in our attention is a conscious choice. Here, we realize that the world around us has hidden aspects to it, reminding us just how privileged we are to be aware of our awareness. This realization alone gives us identification with who we are.

You and I are conscious beings who live in a body, but this isn't our real home. We inhabit space and time, but our real self, our authentic self, our individual awareness (soul) is a unique expression of spirit infusing our lives from an infinite number of possible correlations. We make choices every day on how our life will be lived. Each choice creates a pattern. These patterns become a statement of our character. Our character becomes a living testimony on the inner processes of our thoughts incarnating into the world we live in.

As such, the life of our soul is revealed. Every moment, we are given the opportunity to experience our soul in a variety of ways. These experiences are facets of our inner world that manifest themselves from a single expression we call spirit. The intent to focus on our past, our present, or our future desires leads us to a path. This path is a revelation of our soul seeking out an opportunity to live out our purpose. Purpose gives us meaning and hope beyond our present circumstances. It is a path into what can no longer be seen, moving our lives and our soul into SPIRIT.

I hope that reading the above information was both enjoyable and educational for you. Your learning process should be ongoing--the more you understand about any subject, the more you will be able to share with others.

Samuel Oliver, author of "What the Dying Teach Us: Lessons on Living". For more on this author: http://www.soulandspirit.org

About the author: Sam Oliver worked with the dying for over 15 years. During that time, he wrote 4 books on grief. Website: http://www.soulandspirit.org

Monday, July 23, 2007

Sodium Vapour Lamp

Author: KD Dasappan

A sodium vapour lamp consists of a discharge tube made from heat-resistant glass, containing a small amount of metallic sodium, neon gas and two electrodes. Neon gas is added to start the discharge and to develop enough heat to vaporize the sodium. Because of the low pressure inside the tube, a sufficiently long tube is required to obtain more light. To reduce the overall dimensions of the lamp, this tube is generally bent into a U-shape.

The light produced by this lamp is yellowish and is produced at its optimum pressure of about 0.004 mm of mercury. This pressure is obtained at a temperature of about 280° C, so it becomes necessary to maintain this temperature. For this purpose the U-tube is enclosed in a double-walled flask to prevent loss of heat. The double-walled flask is interchangeable and can be fitted onto another U-tube. While replacing the inner U-tube one must be very careful, because if it is broken and the sodium comes in contact with moisture, it may result in fire.

All electric discharge lamps require a higher voltage at the time of starting and a lower voltage during operation. Generally, sodium vapour lamps are operated by a high-leakage-reactance transformer. At starting, a high voltage of about 450 volts is applied across the lamp, which is sufficient to start the discharge. When the lamp is fully operative, after 10 - 15 minutes, the voltage across it falls to about 150 volts. Because of the high reactance of the circuit, the power factor is low, and hence a power-factor improvement capacitor is connected.

The efficiency of a low-pressure sodium vapour lamp is very high (about 40 - 50 lumens/watt), and it produces light of a particular wavelength having a yellow color. Sodium lamps are mainly employed for street, highway and airfield lighting, where color distinction is not so necessary.
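As a quick illustration of the efficacy figure above, light output can be estimated as electrical power times luminous efficacy. A minimal sketch; the 90 W rating below is a hypothetical example, not a specific lamp:

```python
# Rough luminous-flux estimate for a low-pressure sodium lamp.
# Efficacy figure from the article: roughly 40-50 lumens per watt.

def luminous_flux(power_w: float, efficacy_lm_per_w: float = 45.0) -> float:
    """Approximate light output in lumens (flux = power x efficacy)."""
    return power_w * efficacy_lm_per_w

# A hypothetical 90 W lamp at the mid-range efficacy of 45 lm/W:
flux = luminous_flux(90, 45)
print(f"{flux:.0f} lm")  # 4050 lm
```

The same relation run in reverse (flux divided by efficacy) gives the wattage needed for a target light output.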

About the author: Dasan writes about Distance Education, Science Articles and Business Hosting topics. This article is free to re-print as long as nothing is changed, all links remain intact, the bio remains in full and the rel="nofollow" tag

Sunday, July 22, 2007

Hydroponics - A Novel Blessing of Science

Author: Paul MacIver

The term hydroponics stands for the technique of cultivating plants in a nutrient solution rather than in soil. It's a novel technique of growing plants in water which contains dissolved nutrients. This technique is also known as indoor gardening, aquiculture and tank farming.

Studies have shown that plant roots are able to absorb nutrients from water even without soil. Hydroponics is based on the concept that plants can be grown without any soil at all.

Professor Gericke of the University of California, Berkeley, is considered the father of hydroponics. In 1929, Professor Gericke demonstrated his invention by growing tomato plants in water to a quite remarkable size. He coined the name hydroponics for the culture of plants in water.

Almost any plant can be made to grow through hydroponics. Today, the new techniques of hydroponics gardening and hydroponics farming are becoming popular.

Benefits of Hydroponics:

Hydroponics is a very useful technique when there is a scarcity of land, and it is proving extremely beneficial and profitable to farmers. The positive aspects of hydroponics are listed below.

Hydroponics --

* Gets rid of soil-borne diseases and weeds.

* Requires no soil tilling or ploughing.

* Helpful in land scarcity; plants can be placed very close to one another.

* Can be done in small spaces.

* Highly productive; high yield, large amount of food can be produced from small spaces.

* Requires only a small amount of water compared to traditional farming.

* Allows the production of quality plants under controlled environmental conditions.

* Makes it possible to grow plants all year round.

Future of Hydroponics:

The future of hydroponics seems to be quite bright. As plants are grown indoors, they can be made to grow almost anywhere, in any condition and any weather.

It may make it possible to grow plants in Antarctica. Techniques such as hydroponics or aeroponics may make it possible to grow vegetables and fruits in space in the near future.

About the author: Paul MacIver writes articles on many topics including gardening . For further info on Hydroponics Gardening check out this Hydroponics website. You may freely reprint this article as long as nothing is changed, bio is included and all links are intact.

Saturday, July 21, 2007

Types of Chip Formation During Machining

Author: Ken Yap

As everyone knows, chips are formed during the machining of workpieces. The side of the chip in contact with the cutting tool is normally shiny, flat and smooth, while the other side, which is the free workpiece surface, is jagged due to shear.

It is important to study the formation of chips during the machining process, as chip formation affects the surface finish, cutting forces, temperature, tool life and dimensional tolerance. Understanding chip formation for specific materials allows us to determine the machining speeds, feed rates and depths of cut for efficient machining and increased tool life in the actual machining operation. During the machining process, three basic types of chips are formed: discontinuous chips, continuous chips, and continuous chips with built-up edge.

Discontinuous chip formation normally occurs during machining of brittle work material. This type of chip also occurs in machining operations with small rake angles on cutting tools, coarse machining feeds and low cutting speeds. Discontinuous chip formation results in a poor workpiece surface finish.

During continuous chip formation, a continuous "ribbon" of metal flows up the chip-tool zone. This is considered to be the ideal condition for efficient cutting action.

Continuous chip with built-up edge formation is basically the same process as continuous chip formation, except that as the metal flows up the chip-tool zone, small particles of the metal begin to adhere or weld themselves to the edge of the cutting tool. As the particles continue to weld to the tool, they affect the cutting action of the tool.

This type of chip formation is common in machining of softer non-ferrous metals and low carbon steels. Common problems are the built-up edges breaking off and being embedded in the workpiece during machining, decrease in tool-life and final poor surface finish of the workpiece.
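The conditions described above can be condensed into a rough rule-of-thumb classifier. This is only an illustrative sketch; the rake-angle threshold and the input categories are assumptions for illustration, not standard values:

```python
# Hedged rule-of-thumb classifier for the three chip types described
# above. Thresholds are illustrative assumptions, not handbook values.

def chip_type(brittle: bool, rake_angle_deg: float,
              cutting_speed: str, soft_nonferrous_or_low_carbon: bool) -> str:
    """Guess the likely chip type from coarse machining conditions."""
    if brittle or (rake_angle_deg < 5 and cutting_speed == "low"):
        return "discontinuous"          # brittle work, small rake, low speed
    if soft_nonferrous_or_low_carbon:
        return "continuous with built-up edge"
    return "continuous"                 # the ideal cutting condition

print(chip_type(brittle=True, rake_angle_deg=10, cutting_speed="high",
                soft_nonferrous_or_low_carbon=False))  # discontinuous
```

A real process-planning decision would of course use quantitative feed, speed and material data rather than these coarse flags.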

Studies on the built-up edges have shown that the chip material is welded, deformed and then deposited onto the rake face of the tool layer by layer. It is thus possible to observe the presence of built-up edges by studying the back face of the chip during the machining process. This is normally used in micro or ultra precision machining operation.

To reduce built-up edges, improve the lubrication conditions, use sharp tools with a better surface finish, and apply ultrasonic vibration during the machining process.

About the author: Author Ken Yap is a director of Suwa Precision Engineering Pte Ltd in Singapore and represents

precision metal stamping , swiss screw machining , miniature precision balls and printed circuit boards manufacturers from Suwa, Japan.

Friday, July 20, 2007

Loupe, Monocle Or Magnifier? How To Choose The Right Unit

Author: Ettore del Pozzo

If you need a visual aid and are unsure whether to choose an eye loupe, a monocle or a standard magnifying glass, read this article before ordering your next magnifier.

When customers call in to order a magnifier and are unsure of the type or model that they need, our first question to them is:

What is the intended use?

As with all other tools, magnifying lenses come in many sizes, types, shapes and strengths and (as with tools) bigger and stronger isn't always the best choice for the job.

Let's put it this way; if you are a size 9, would you buy a pair of size 11 shoes ... just because bigger is better? Of course not! So, deciding on the type and strength of magnifier that's right for the intended purpose is more important than going for the biggest and most powerful unit that one can find.

Reading Magnifiers, Inspection Loupes, Hands-Free Monocles

Even though there are hundreds, maybe thousands, of different models on the open market, they can all be categorized in one of the above types and each type has a specific use.

Reading Magnifiers:

Definitely the most popular, reading magnifiers come in the widest variety of sizes, shapes and strengths. The most common, also known as the Sherlock Holmes Magnifier, is the classic handheld round magnifying glass. Do not, however, limit your choice to the old classic style (unless you want to), as they now come with many options and features: some with lights, some with legs, some attached to a necklace, some that can be worn like a hat or visor and some that can be attached to your prescription glasses.

Their sizes vary between 1 inch and 6 inches, and they are between 2x and 5x in strength (some stronger units are also available, but keep in mind that, usually, the stronger the power, the smaller the magnifier and the shorter the focal distance).
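The trade-off in the parenthesis above - stronger power, shorter focal distance - follows from the usual simple-magnifier convention, where power is taken as 250 mm (the standard near-point distance) divided by the focal length. A sketch under that assumption:

```python
# Inverse relation between magnifying power and focal distance,
# using the simple-magnifier convention M = 250 mm / f.
# An illustrative approximation; real working distances vary by design.

NEAR_POINT_MM = 250.0  # standard least distance of distinct vision

def focal_distance_mm(magnification: float) -> float:
    """Approximate focal distance in mm for a given power (e.g. 5 for 5x)."""
    return NEAR_POINT_MM / magnification

for power in (2, 5, 10, 15):
    print(f"{power}x -> about {focal_distance_mm(power):.0f} mm")
```

Running this shows why a 15x loupe forces the reading material to within an inch or so of the lens, while a 2x reader can be held comfortably away.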

Unless you are severely vision impaired, do not choose a 15x or stronger loupe to read the newspaper or your favorite book because, besides the discomfort of having to keep the loupe right under your eye and the reading material one to three inches from your eyes, you'd be able to read only a few letters at a time!

Some vision impaired customers, especially those affected by macular degeneration, tend to prefer units that are between 5x and 10x in strength and that have a light source built-in. Bright light, in their experience, is as important as strong magnification.

Of course, this type of magnifier is also widely used in a variety of applications; to inspect plants, minerals, art ... and by anyone who just wants to see something larger.

Inspection Loupes:

Widely used in the jewelry environment, this type of magnifying glass is usually very small (between 12 mm and 30 mm) and very powerful (10x to 30x). Its main purpose is to inspect gems, settings and other small items. It's not recommended for reading as it has a very short focusing distance.

To properly use an eye loupe, one side of the lens must be placed right under one eye and the object to be examined directly under the other side. Do not attempt to use a jewelry loupe like a regular magnifier as you'd get upside down images and plenty of distortion.

The object to be examined must be smaller than the circumference of the lens, to allow light to seep through. Placing printed pages or large objects under the glass would cover the lens and obscure the view. Some newer models come with a built-in LED light, to improve vision in a dark environment, but ... if the lens is fully covered, the light won't be of much use!

Hands-Free Monocles:

Also known as ""watchmaker loupes"", these magnifiers are designed to allow handsfree operation. Somewhere between a reading magnifier and a jewelers' loupe, handsfree monocles are ideal for crafting work as, besides the obvious benefit of having both hands free for the job, they usually have a medium focal distance (2-4 inches), various levels of magnification (2x-10x) and aren't as restrictive to use as jewelry loupes.

Customers with very serious vision impairment tend to favor one of these units to read 2-4 words at a time. Not much but, for extreme cases, they are more suitable than jewelry loupes.

One big drawback of the old classic monocles was the fact that, in order to keep them firmly in the eye, one had to scrunch the face and train the eye muscles to stay positioned in a certain way for extended periods of time ... or they would pop out and jump away. When I was a child (I grew up in the business) I thought that in order to be a proficient watchmaker one had to have a deformed face!

Modern models come with headbands or clips, to be attached to prescription glasses, so they can be used for extended periods of time without straining facial muscles.

About the author: Ettore del Pozzo is owner and operator of delpozzo.com and SeeLarger.com .

Both websites offer a vast selection of Loupes, Magnifiers and other visual aids.

Thursday, July 19, 2007

Paternal Line Research

Author: Curt Whitesides

Have you ever looked into the mirror and wondered, "Where did I get that hair?" yet at the same time realized that the older you get, the more you look like your mother or father? The DNA that a son receives from his father is not only influential in determining eye color, hair style and height, but also in identifying who the father was. More specifically, Paternal Line Research helps define who we are by determining where we came from.

Paternal Line Research uses Y chromosome testing to trace the paternal line. Y-chromosome tests are only available for males, because the Y-chromosome is passed down only the paternal line, from father to son. There are tiny chemical markers on the Y-chromosome that create a unique pattern. This pattern is used to distinguish male lineages from each other. This type of testing is often used to determine if two individuals who have the same surname share a common ancestor. Furthermore, this test is often used to provide additional details in paternity cases where the alleged father is not present for testing.

The Y-chromosome is passed from father to son and has the property of remaining unchanged for several generations. Y-chromosome mutations generally occur once every 500 generations. Because of this consistency in the Y-DNA, it is very accurate in assessing relatedness and even more accurate in assessing un-relatedness (Paternal 2005). Additional information that can be gathered from Paternal Line Research is the approximation of a common ancestor or most recent common ancestor (MRCA) and the most likely estimate (MLE) to a common ancestor--an estimate of when the most recent common ancestor between two relatives lived (presented in generations).
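The most likely estimate (MLE) mentioned above can be illustrated with a deliberately simplified model. The per-marker, per-transmission mutation rate below is a hypothetical illustrative figure, and every mutation is assumed to produce a visible mismatch; real TMRCA estimators are considerably more involved:

```python
# Much-simplified MLE sketch for generations back to the most recent
# common ancestor (MRCA) of two tested males. Assumes an illustrative
# mutation rate and that every mutation yields a visible mismatch.

def mle_generations(mismatches: int, markers: int,
                    mu: float = 0.002) -> float:
    """Estimated generations back to the MRCA.

    Each of the two lineages contributes one transmission per
    generation, so expected mismatches = markers * 2 * generations * mu.
    Solving for generations gives the estimate below.
    """
    return mismatches / (2 * markers * mu)

# Two men mismatching on 1 of 25 tested markers (hypothetical figures):
print(round(mle_generations(1, 25)))  # 10
```

Doubling the number of markers tested halves the estimate for the same mismatch count, which is why larger marker panels give tighter MRCA estimates.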

The field of Paternal Line Research has rapidly improved in recent years because Y-chromosome analysis has improved. New markers have been discovered and population groups are being characterized (Kayser et al. 2004). Various tests have been conducted as well as validation studies, and both have demonstrated that Y chromosome testing is in fact reliable (Butler 2005). Many examples indicate the value that Paternal Line Research testing has in forensic DNA casework. In addition, internet-accessible databases house thousands of Y-DNA haplotypes, making Paternal Line Research an increasingly popular and accessible field.

The Easy Y-Match and Exact Y-Match search engines of the Relative Genetics database allow searches of Y-chromosome paternal line test results. These two search functions allow clients to identify other individuals with whom they may have a close genealogical connection. Web site visitors may also search for possible relatives using a basic surname search. The recently improved flexibility of the Web site allows individuals to create new projects, participate as members in multiple projects, and accept project members who have been tested by organizations other than Relative Genetics. In addition, members of Group Projects will find that the color coding and sorting features of the group data table make it easy to quickly identify relatives within their group. Web site visitors are also granted convenient, effective access to the Sorenson Molecular Genealogy Foundation database.

Paternal Line Research is still largely untapped, and it will be very interesting to see what the future holds. For example, forensic DNA casework has yet to accept Y chromosome testing as a standard and instead still sees it as a specialized technique only to be used in unique situations (Butler 2005). Databases will need to expand in size and power in order to strengthen the statistical information regarding a match. In addition, the many different markers that are available need to be further characterized to better define where they fit in analyzing haplotypes and the strength of matches. At any rate, Paternal Line Research has great potential in addition to the great success that it has already produced.

Sources

Butler, J. (2005) Forensic DNA Typing: Biology, Technology, and Genetics of STR Markers, 74, 231-232.

Kayser, M., Kittler, R., Erler, A., Hedman, M., Lee, A.C., Mohyuddin, A., Mehdi, S.Q., Rosser, Z., Stoneking, M., Jobling, M.A., Sajantila, A. and Tyler-Smith, C. (2004) American Journal of Human Genetics, 74, 1183-1197.

Paternal Lineage. (2005). DNA Diagnostics Center. http://www.dnacenter.com/dna-testing/paternal-lineage.html.

About the author: Relative Genetics is a genealogical company specializing in DNA testing. Click here to learn more about Y chromosome testing of the paternal line.

Wednesday, July 18, 2007

Works of Late Professor Raj Shankar

Author: Abhinav Shankar

Late Professor Raj Shankar

Dr. Raj Shankar (1947-2000) was a Professor in the Department of Biochemistry, Institute of Medical Sciences, Banaras Hindu University, Varanasi, Uttar Pradesh 221005, India.

His main fields of specialization were neurobiochemistry and clinical biochemistry. His contributions to neurochemistry are well recognized, and he was invited to deliver lectures at various prestigious conferences. Professor Raj Shankar's main contribution is in developmental neurobiology, with special reference to undernutrition during the brain growth spurt period. His work clearly established that undernutrition during brain development causes some irreversible changes. In 1991, work carried out in Texas and at Yale with magnetic resonance imaging by other workers confirmed some of the conclusions of Prof. Shankar's work. Work done during his last few years on the developing brain showed that signal transduction mechanisms are affected by nutritional stress during brain development. Professor Shankar's other work involved biochemical aspects of the mode of action of drugs on the C.N.S. Apart from work on reserpine done earlier and published in NATURE and BIOCHEMICAL PHARMACOLOGY, in 1987 he established that the barbiturate pentobarbitone affects protein phosphorylation in the brain. This work is significant from the point of view of signal transduction. His work was also concerned with the mode of action of drugs like haloperidol and trifluoperazine. Professor Shankar's work in clinical biochemistry was mainly concerned with lipoprotein metabolism. Professor Shankar had over 50 publications in international and national journals of repute.

A brief history of the research work done by him and his contributions to neurochemistry and biochemistry:

He worked for one year (1966-67) at the Vallabhbhai Patel Chest Institute, University of Delhi, India, on lipid metabolism in mycobacteria. In 1967 he went to the Kinsmen Laboratory of Neurological Research, University of British Columbia, Canada, to work under Prof. J.H. Quastel, F.R.S., for a PhD degree in biochemistry. There he worked on "Cerebral Metabolism during Anoxia and the Effect of Some Neurotropic Drugs". This work clearly showed that tetrodotoxin strongly stimulates anaerobic glycolysis, and these findings led to the conclusion that at the onset of anoxia, and in the absence of tetrodotoxin, action potentials are generated. The above work has been cited in various publications by a number of workers. In 1971 he returned from abroad and joined the Department of Biochemistry, Institute of Medical Sciences, Banaras Hindu University, Varanasi, India, where he initiated work on undernutrition and brain development. His early work was concerned with developing a suitable model for studying the effect of undernutrition on cerebral metabolism during the brain growth spurt period. Ultimately the method of restricting the feeding time of suckling rats was adopted, and he found that the transport properties of isolated cerebral tissue are altered in developing rats. This was perhaps the first demonstration that undernourishment during the brain growth spurt period causes changes in membrane organization resulting in altered transport properties. It was also found that a key membrane enzyme, Na+/K+ ATPase, shows decreased activity, in addition to changes in the pattern of ouabain inhibition, which was reversible on adequate rehabilitation. This work, along with some of his other findings, showed that these changes are presumably due to free-radical-induced damage to the developing brain.

Studies on the incorporation of 14C-acetate into the brain lipids of undernourished rats in vivo, carried out by him, showed that in spite of a lipid deficit, 14C-acetate incorporation is slightly higher in experimental animals. This finding ultimately led to the observation that the operation of the pentose phosphate cycle is altered in the brains of undernourished animals. Glutathione and ascorbic acid were also studied in the undernourished brain. From 1980-84 he worked on a project, "Drug Action, Brain Development and Behavioural Changes in the Mammalian System", financed by CSIR (India). This work mainly led to the finding that the actions of some CNS-acting drugs are potentiated in undernourished animals. His group also observed that the drug reserpine is a strong inhibitor of lipid peroxidation and protein phosphorylation in the brain. This project was mainly concerned with the interaction between drug action, monoamines and membrane phenomena, which could later be extrapolated to behaviour. Since various gangliosides are involved in the binding of cerebral amines, the contents of different species of gangliosides and neuraminidase activity were established. He established that the drug reserpine affects the cationic content of the rat brain, and proposed that changes in cationic content may play a part in the release of monoamines at the synapse. It was further shown that the transport properties of isolated cerebral tissue are altered in chronically reserpinized animals, which further pointed to the membrane action of reserpine. In 1980 he established that there is a relationship between high-density lipoproteins and premature atherosclerosis in patients with renal failure. The work on lipoproteins was also carried out with patients suffering from anxiety neurosis. He was one of the co-investigators in an international study on protein-energy requirements in individuals of this country, supported by the United Nations University. In addition he was involved in a number of collaborative studies with various groups in the institute, including biochemical and immunological studies in chyluria, placental oxidative enzymes in pregnancy anaemia, and the lecithin-sphingomyelin ratio in fetal lung maturity. Until August 2000 he was involved in the study of protein phosphorylation in the brain. He had shown that protein phosphorylation is adversely affected, with changes on rehabilitation. Attempts were in progress to characterize specific proteins by SDS-PAGE and radioautography. He had shown that the effect of pentobarbitone on cerebral protein phosphorylation is not due to its action on the electron transport chain. Work was also in progress in his laboratory until August 2000 to study the effect of a number of CNS drugs on protein phosphorylation in the brain and to examine whether specific proteins are affected. This would have helped to establish the exact role of cerebral phosphoproteins in neurotransmission. Work on the mechanism of action of some antiepileptic drugs on brain metabolism was also in progress. In 1999 he established that in conditions like Alzheimer's disease there is a phosphorylation-related folding problem of proteins, which is responsible for cognitive and other defects. A similar situation exists under severe undernutrition during the brain growth spurt period, and a similar mechanism involving protein phosphorylation may be involved there also. In 2000, along with his student Kalyan Goswami, he showed that sites in proteins damaged by free radicals can be accurately determined by carbonylation studies, which may be developed into an accurate method to relate chronological age and biological age after free-radical-induced damage (Kalyan Goswami, B.D. Bhatia, Raj Shankar, 2000).

Source: http://users.vectorstar.net/~abhinav/raj/

About the author: Abhinav Shankar, Student, Dr. B.R. Ambedkar University, Agra, UP, India

Tuesday, July 17, 2007

About Time

Author: Ren Withnell

Relativity. It really upset the whole scientific applecart for a while, but now it is generally accepted. I am told that time slows down the faster I go, relative to the people on Earth: the closer I get to the speed of light, the slower time moves. A particle of light travelling from the nearest star system, Alpha Centauri, takes over 4 of our years to get here. But the particle of light is travelling at the speed of light, and as such does not experience time. If the light could think, it would pop into existence as a result of the nuclear reactions in the star, then crash into the eye of the scientist looking at it through a telescope. It never even got time to think "Oh... I'm off... wahey... what's that?... CRUNCH!" The particle of light in our world existed for 4 years. The particle of light itself never had time to realise it existed.
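The claim above can be put in numbers with the Lorentz factor. The sketch below is a minimal illustration, not a derivation: it assumes the standard 4.37 light-year distance to Alpha Centauri and imagines a hypothetical massive traveller (a photon itself has an infinite Lorentz factor, so its own elapsed time goes to zero, as the paragraph says).

```python
import math

def time_dilation_factor(v_fraction_of_c):
    """Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# Earth-frame trip time to Alpha Centauri (~4.37 light years) is 4.37 / v years;
# the traveller's own clock reads that time divided by gamma.
for v in (0.5, 0.9, 0.99, 0.999):
    onboard_years = (4.37 / v) / time_dilation_factor(v)
    print(f"v = {v}c: traveller ages {onboard_years:.2f} years")
```

The closer v gets to 1, the smaller the onboard time, approaching zero in the light-speed limit.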

So if travelling at the speed of light slows time to a standstill, what happens if I completely stop? Surely time would speed up? Well, here I am, there you are, totally at a stop. But my watch is telling me time is marching on steadily. And I am moving. I am sitting on a planet's surface that is spinning, so I am moving relative to the core of the Earth. This planet is moving in an orbit around the Sun, so I am moving very quickly relative to the Sun. Our sun sits towards the outer edge of the Milky Way, a vast galaxy of stars about a hundred thousand light years across. This galaxy is spinning around its centre, and we are moving incredibly fast relative to that centre. And our galaxy is moving very quickly away from countless other galaxies in our universe. This moving away is so fast it causes light to change colour: the "red shift" Doppler effect.

Our current belief is that at the start of our universe there was a big bang. This explosion sent the energy and matter of our universe outwards, like a balloon being inflated, and the universe is still spreading out at an alarming rate. But if we were to achieve a complete stop relative to the universe, we would need to come to rest at the centre of the universe, the point where this big bang happened. If we did, would time not speed up, infinitely?

Conceptually, the time we know and understand is the time we experience here on our planet. We measure everything else by the time here; even in relativity we measure changes in time relative to the time that is fixed here. So, if we came to rest at the centre of the universe and time sped up infinitely, we would not be able to witness the birth, growth, spread and final demise of our universe; it would happen infinitely quickly. This reminds me of the particle of light: it existed for 4 years relative to our time, but for no time at all relative to itself. The time the universe exists is zero, relative to the universe itself. So perhaps movement is what creates time. If you are not moving at all, time passes infinitely quickly. If you are moving at the speed of light, time passes infinitely slowly.

Movement, I believe, is also responsible for another unexplained force: gravity. We can create artificial gravity in the weightlessness of space by spinning whatever object an astronaut is in. This gravity is created by constantly changing the direction in which the astronaut is moving. Imagine the traditional space ring, such as the one in the James Bond film Moonraker. By spinning the ring, everyone inside is centrifuged to the outside of the ring, creating an artificial gravity. Imagine a person standing in that ring. The rotation is trying to fling the person off into space; the force holding the ring together stops that from happening. As the person stands stationary inside the ring, relative to space they are constantly changing direction as the ring rotates.
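The spin-gravity idea is standard circular-motion physics: centripetal acceleration is omega squared times radius. As a rough sketch (the ring radii chosen below are arbitrary examples, not taken from any real station design), this computes how fast such a ring would have to turn to mimic Earth gravity:

```python
import math

G_EARTH = 9.81  # m/s^2, the surface gravity we want to mimic

def rpm_for_one_g(radius_m):
    """Spin rate at which centripetal acceleration omega^2 * r equals 1 g,
    returned in revolutions per minute."""
    omega = math.sqrt(G_EARTH / radius_m)   # rad/s
    return omega * 60.0 / (2.0 * math.pi)

for r in (10, 100, 500):
    print(f"ring radius {r:>3} m -> {rpm_for_one_g(r):.2f} rpm for 1 g")
```

Larger rings need slower spins, which is why fictional stations like the Moonraker ring are usually drawn very large.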

Given that changes in direction create gravity, and that without movement we have no time, are these things not tied together? I believe time is a force, a real, measurable force. The movement of time is in the spinning of the universal bodies, and spinning creates gravitational forces. Don't quote Newtonian physics; Einstein rewrote all that. Don't think, as we always do, of our world as the centre of everything; it is not. There is an answer here, waiting to be found: an answer to understanding time, gravity, force, energy, mass and everything else.

I am no scientist, mathematician, genius or wizard. But I can see a pattern here, I just don't have the tools to understand it.

About the author: Just have a look at TechSolus

Monday, July 16, 2007

The Invention of the Atomic Clock

Author: Steve Gink

Louis Essen was born in 1908 in the English city of Nottingham. His childhood was typical of the time, and he pursued his education with enjoyment and dedication. At the age of 20 Louis graduated from the University of Nottingham, where he had been studying. It was at this time that his career started to take off, as he was invited to join the NPL, the National Physical Laboratory.

It was during Louis's time at the NPL that he began working to develop a quartz crystal oscillator, as he believed quartz was capable of measuring time more accurately than a pendulum-based clock. Ten years after joining the NPL, Louis had invented the Essen ring, an eponymous invention which took its name from the shape of the quartz used in his latest clock, and which was three times more accurate than previous versions.

Louis soon moved on to newer areas of research and began to study ways to measure the speed of light. During World War II he began to work on high frequency radar and used his technical ability to develop the cavity resonance wavemeter. From 1946 it was this wavemeter which he used, along with a colleague by the name of Albert Gordon-Smith, to make his lightspeed measurements. It has been acknowledged recently that Louis's measurements were by far the most accurate to have been recorded up until that time.

During the early 1950s Louis began to take an interest in research being carried out at the National Bureau of Standards (NBS) in the United States. He learnt that work was under way to invent a clock more accurate than any other. The American scientists were maintaining a clock's accuracy by using the radiation emitted or absorbed by atoms. At that time the Americans were using the ammonia molecule, but Louis felt that better results could be obtained from different atoms, such as hydrogen or caesium, and so he began working on his own clock using these materials instead.

In 1953, Louis and a colleague, Jack Parry, received permission to develop an atomic clock at the NPL, based on Louis's existing knowledge of quartz crystal oscillators and other relevant techniques he had learned from the cavity resonance wavemeter he had previously designed. Only two years later Louis's first atomic clock, Caesium I, designed by the UK scientists, was running. Development in the United States had all but stopped due to political difficulties.

Louis continued to work on his atomic clock, and by 1964 he had managed to increase its accuracy from one second in 300 years to one second in 2,000 years! The continued success of Louis's work resulted in the definition of a second being changed from 1/86,400 of a mean solar day to the time taken for 9,192,631,770 cycles of the caesium radiation in an atomic clock.
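To put those accuracy figures in perspective, a small back-of-the-envelope calculation (using a 365.25-day year; the figures themselves come from the paragraph above) converts "one second lost in so many years" into a fractional error, and shows the duration of a single caesium cycle implied by the new definition of the second:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 3.16e7 seconds

def fractional_error(seconds_drift, years):
    """Fractional timekeeping error of a clock that drifts by
    `seconds_drift` seconds over `years` years."""
    return seconds_drift / (years * SECONDS_PER_YEAR)

print(f"1 s in  300 years: {fractional_error(1, 300):.1e}")
print(f"1 s in 2000 years: {fractional_error(1, 2000):.1e}")

# One SI second is defined as 9,192,631,770 caesium cycles, so one cycle lasts:
print(f"one caesium cycle: {1 / 9_192_631_770:.3e} s")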

Louis Essen died in 1997, having been honoured with, amongst other awards, an OBE and the Tompion Gold Medal of the Clockmakers' Company.

About the author: For more information and samples of atomic clocks, visit www.atomic-clocks.org; the site contains information about atomic clocks and some images.

Saturday, July 14, 2007

The Invisible Ether and Michelson Morley

Author: Michael Strauss

Modern scientists adopted the ancient concept of the ether to explain the fundamental nature of the universe. Einstein allegedly dethroned the ether concept with his space-time continuum. In spite of this, we might have to revisit the ether concept.

The concept of the invisible ether or 'aether' is an old one, dating to the time of the ancient Greeks. They considered the ether to be the medium which permeated all of the universe, and even believed it to be another element. Along with Earth, Wind, Fire and Water, Aristotle proposed that the ether should be treated as the fifth element or quintessence; this term, which literally means 'fifth element', has even survived to the present day to describe an exotic form of 'dark energy' which is crucial in some cosmological models. These ideas spread throughout the world until the advent of a new springtime in scientific thought. The first person in the modern era to conceive of an underlying ether supporting the movement of light waves was the seventeenth-century Dutch scientist Christiaan Huygens.

Many others followed in expressing their opinions on the ether concept. Whilst Isaac Newton disagreed with Huygens's wave theory, he also wrote about the 'aethereal medium', although he expressed his consternation at not knowing what the aether was. Newton later renounced the ether theory because, in his mind, an infinite stationary ether would interrupt the motions of the enormous masses (the stars and planets) as they moved through space. This rejection was reinforced by some other problematic wave properties which were not explicable at the time; most notably, the production of a double image when light passes through certain translucent materials. This property of matter, known as 'birefringence', was an important hurdle to be overcome for a proper understanding of the wave nature of light.

Some time later, in the 1720s, whilst working on other astronomical issues related to light and the cosmos, the English scientist James Bradley made observations in the hope of quantifying a parallax, the apparent motion of foreground objects relative to those in the background. Whilst he was unable to discern this parallax effect, he happened upon another effect which is prevalent in cosmological observations, known as stellar aberration. Bradley was able to describe this aberration easily in terms of Newton's particle theory of light. To do so in terms of the wave or undulatory theory, however, was difficult at best, since it would have required a 'motionless' medium; the static nature of this ether concept was of course the very property which had originally caused Newton to deny the idea.

But Newton's acolytes found themselves in a difficult position when it was shown that birefringence could be explained through another interpretation of the nature of light. If light was treated as a side-to-side or 'transverse' motion, then birefringence could be attributed to a light wave rather than to the particle or corpuscular theory of Newton. This, along with Thomas Young's detection of an interference effect for light in 1801, renewed the ascendancy of the wave theory of light. These findings, however, carried with them all the preconceived notions prevalent in the scientific mind. Since it was assumed that waves, like water and sound waves, required a medium of propagation, it was similarly assumed that light still needed a medium or ether for its waves to be transmitted across the universe.

However, further problems would afflict the ether theory. Because of the unique properties of a transverse wave, it became apparent that this hypothetical explanation required the ether to be a solid. In response, Cauchy, Green and Stokes contributed theoretical and mathematical work to an 'entrainment' hypothesis which later came to be known as the 'ether drag' concept. But nothing gave more impetus to these ideas than James Clerk Maxwell's equations (1870s), which required the constancy of the speed of light (c). When physicists worked out the implications of Maxwell's equations, it was understood that, because of the need for a constant speed of light, only one reference frame could meet this requirement under Galilean-Newtonian relativity. Scientists therefore expected that there existed a unique absolute reference frame which would comply with this need; as a result, the ether would again be stationary.

As a consequence, by the late nineteenth century the aether was assumed to be an immovable, rigid medium. However, earlier theories as to the nature of the aether existed. One of the most famous of these is the 'aether drag' hypothesis. In this concept, the aether is a special environment within which light moves; it is attached to all material objects and moves along with them. Measuring the speed of light in such a system would yield a constant velocity for light no matter where one tested it. This 'aether drag' idea originated in the aftermath of François Arago's experiment, which appeared to show the constancy of the speed of light. Arago believed that refractive indices would change when measured at different times of the day or year, as a result of stellar and earthly motion. In spite of his efforts, he did not notice any change in the refractive indices so measured.

Many other experiments followed, performed in order to find evidence of the aether in its many different abstractions. The most important of these, however, was conducted by the American scientists Michelson and Morley. Their experiment considered an alleged effect of a different aether theory, which came to be known as the aether wind. Since the aether permeated the entire universe, the Earth would move through it as it spun on its axis and orbited the Sun. This movement of the Earth with respect to the aether gave rise to the idea that it should be possible to detect an 'ether wind'. Thus, their experiment was essentially an attempt to detect this so-called ether wind. The mysterious zephyr would be nearly impossible to detect, because the aether only infinitesimally affected the surrounding material world. Michelson first experimented in 1881 with a primitive version of his interferometer, a mechanism designed to measure the wave-like properties of light. He followed this by combining forces with Morley in the most famous 'null' experiment of physics.

In this investigation, Michelson utilized an improved version of his interferometer. Michelson's apparatus would help him win the Nobel Prize for his optical precision instruments and the investigations carried out with them, his most important study being what became known as the Michelson-Morley experiment of 1887. Michelson and Morley used a beam splitter made of a partially transparent mirror, and two other mirrors arranged horizontally and vertically from a light source. When a beam of light travelled from a source of coherent light to the half-silvered (semi-transparent) mirror, it was transmitted to either the horizontal or the vertical mirror. When the light returned to the eyepiece of an observer, the separately returning light waves would combine destructively or constructively. This phenomenon is known as the interference effect for light. It was hoped that a shift of the interference fringes away from what was normally predicted would establish the existence of the aether wind.

To detect this effect, the Michelson interferometer was prepared in such a manner as to minimize any and all extraneous sources of experimental error. It was located in a lower level of a stone building to eliminate thermal and vibrational effects which might compromise the experimental results. Additionally, the interferometer was mounted atop a marble slab floated in a basin of mercury, so that the apparatus could be moved through a variety of positions with respect to the invisible ether. But despite their many preparations, the experiment did not yield the expected fringe patterns, and Michelson and Morley concluded that there was no evidence for the existence of the ether. Others replicated the experiment in different incarnations which modified its premise, each and every one returning a similar negative result. Modern theorists have taken these results, and those of many other experiments, as indicative of the non-existence of the aether. However, even the negative result of Michelson-Morley has been called into question, as far back as 1933.
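The size of the fringe shift the experimenters were looking for can be estimated from the classical ether-wind prediction, N = (2L/lambda)(v/c)^2. The sketch below uses the commonly quoted textbook figures (an effective arm length of about 11 m, visible light of roughly 500 nm, and Earth's orbital speed of about 30 km/s), which are assumptions here rather than numbers taken from this article:

```python
def expected_fringe_shift(arm_length_m, wavelength_m, v_m_s, c=3.0e8):
    """Classical ether-theory prediction for the fringe shift seen when the
    interferometer is rotated through 90 degrees: N = (2 * L / lambda) * (v/c)^2."""
    return (2.0 * arm_length_m / wavelength_m) * (v_m_s / c) ** 2

# Effective arm length ~11 m, visible light ~500 nm, orbital speed ~30 km/s:
print(f"expected shift: {expected_fringe_shift(11.0, 500e-9, 30e3):.2f} fringes")
```

A shift of a few tenths of a fringe was well within the instrument's resolution, which is why the null result carried such weight.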

In that year, Dayton Miller demonstrated that even though the duo's experiment had not found the expected range of interference patterns, it had found an interesting, little-noticed effect. Miller went on to suggest that Michelson and Morley had found an experimental sine-wave-like set of data that correlated well with the predicted pattern. He also described how thermal and directional assumptions inherent in the experimental arrangement may have adversely affected the fringe interference data. Thus, the test may have been performed in an imperfectly conceived experimental setup, with a built-in mathematical bias against the detection of an appropriate outcome. In the future, then, the aether theory in some form or another may still be sustainable as a foundational theory of physics.

Perhaps it is best to leave with these ideas as expressed in 1920 by Einstein who stated that he believed the ether theory to still be relevant to his ideas on space and time:

""More careful reflection teaches us, however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether""

he continued:

""Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether""

and finally:

""According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.""

About the author: Michael Strauss is an engineer and author of Requiem for Relativity: the Collapse of Special Relativity, a serious critique of the fallacies of Special Relativity. To contact the author visit: www.relativitycollapse.net or www.relativitycollapse.com

Friday, July 13, 2007

Metaphysical experiences

Author: Anupam Mishra

Gautam Buddha said "Aapo Deepo Bhav", meaning: do not search for light outside; instead, be a light unto yourself. Along the same lines, Vivekananda said, "Stand upon your own feet". This applies completely to my own life. In 1998 I became interested in astrology and meditation. I tried hard to find a Guru in both fields, but somehow I never did: either they did not fit my mindset, or at the last hour some immovable hurdle came in my way. In the course of time I understood well that God did not want me to have any Guru. He wanted me to search for Him inside myself.

As soon as I took that truth deep into my heart, a sea change started taking place in my life. I still remember the early morning hours of a day in April 2002 when I woke just after a kind of prophetic dream. The dream implied great auspiciousness, and the sheer excitement made me unable to sleep. Finding it difficult to sleep, I simply lay down on my bed and riveted my whole concentration on the point between my eyebrows. I was also pondering over who it was that had always guided me since my childhood days whenever I found myself in drooping spirits.

In a few seconds I attained one-pointed concentration. Suddenly I saw a light-grey sky filled with bright stars. I decided that today I would not reason about anything; I would only witness whatever might come before me. Then I noticed faded blue and white sparkles of light in the sky, like firecrackers. In a few moments I felt as if my mind was gripped by an unknown source. In that state my mind started asking questions, and the answers came in a flash. However, I must admit that there was no voice of anybody that I was hearing; in all actuality they were only thought waves. No sooner had the mind asked a question than the answer came, without a moment's delay.

I got to know of many upcoming events, such as a world mission that would make me famous around the globe. However, the voice warned me that I must detach myself from the hunger for fame. I was told about my four debts, namely my sister, mother, son and wife, to whom I had to fulfil all my obligations without any attachment at all.

At last I asked to know who it was telling me all those things. The voice asked why I wanted to know. I said: just in order to see who had been guiding me all this time since my childhood, in one way or another.

All of a sudden I saw a human form appearing before me. I was not able to recognize him, and I wondered in amazement who he was. Then a flash came to my mind and I recollected Yogananda's famous book, 'Autobiography of a Yogi'. I recognized that he was nobody else but 'Yukteshwara Giri', the Guru (preceptor) of Yogananda. Ramakrishna Paramhansa was sitting just in front of Yukteshwara Giri. After a while Yukteshwara asked me to leave, and promised me that whenever I might need him he would come and aid me.

I opened my eyes. My hands were trembling and my hair was standing on end. My whole body was sweating profusely. I tried to explain all this to my wife, but I found myself out of breath. It seemed as if all my energy had gone. I was literally puffing, and tears were flowing from my eyes. Then I saw a bowl of water filled with white flowers, and was flabbergasted as to who had put them there. They looked fresh and gave off a bewitching aroma. As if awaiting another surprise, I remembered the photo of Yukteshwara Giri in which he stands with Yogananda wearing a garland of white flowers around his neck. I also recollected that he used to wear one all the time. I felt spellbound, thinking: was it Yukteshwara himself who had put those flowers there, just in order to substantiate his own presence in the room?

That was my first notable spiritual experience. After that I had no visions at all for a long time. However, a lot of hand movements started taking place as soon as I began my meditation. My hands moved automatically to and fro and made many gestures which were not familiar to me. Later on I got to know that they were Mudras (gestures) of the kind made while reciting the Gayatri Mantra, addressed to a female deity of the Vedas. There were lots of mudras, which I do not remember right now. However, after a while I read about the mudras in a holy book and, to my amazement, learned their names. Who was doing all that? In the course of time I understood well that it was the play of 'Mother Kundalini' when she becomes activated.

Occasionally a question arose in my mind during meditation, and my head started nodding yes or no, as if someone were answering the question. As time rolled on, things became more apparent, and consequently more obvious answers started coming to me in the form of gestures. For example, when I asked a question mentally, my index finger started moving in the air with great force, as if indicating yes or no. Many a time the positive answers came in the form of auspicious mudras, like the mudra of a lotus flower. On the contrary, the negative responses came in the form of inauspicious mudras, like strangling someone with my hands.

According to history, the famous war of the Mahabharata happened 5,300 years ago, but I was informed it occurred 3 lakh (300,000) years ago. I had always regarded Indian mythology as beyond any logic, containing only wild imaginations. However, I got to know that there is nothing illogical in Indian mythology. The thing is, we try to judge things according to our present mindset; that is why they seem so far from reality.

I was told about my last birth. It was unbelievable to me. To my astonishment, I was told that I was nothing but the rebirth of the very person to whom I had been paying my obeisance all the time. I had mentally regarded that person as my Guru since 1999. He is no more in this world, having left his mortal frame at the beginning of the 20th century. I went on searching for the truth on the Internet and through books, and pondered over the matter in great detail. I was quite reluctant to accept that truth. I was shocked when I found an interview in which he told a foreign lady that he would be born once again on this earth. The American lady asked him why he had to take birth again. "You will not understand, since it is the order of my Guru" was precisely his answer.

And last but not least, in September 2005 I went into a strange Samadhi. It was peculiar in the sense that it did not match the known details as given by others. I simply became unconscious for some time, feeling as if it had lasted two or three minutes. Subsequently I was told it was almost one hour. I was quite suspicious about that state of samadhi, but to my surprise, in the morning I felt as if something like energy was oozing out of my fingers. I passed that energy to my sister at her Ajna Chakra. She started seeing visions of divine beings, and gained the ability to see the future.

I transferred the energy to a heart patient and confidently told him to stop all his medicines except one tonic. He was cured within one week. His blood pressure became normal and his hypertension was brought under control. Now he is like a normal person. The patient was a retired news editor of the famous daily Navbharat Times.

I also cured an asthmatic patient who had been suffering from the disease since his childhood days without any cure at all. Once I tried distant healing of my sister-in-law. She lives far away from my home, so it was very difficult for her to visit frequently. I called her astral body and cured it. Her liver was enlarged by around 2 centimetres, yet within three days it returned to its normal size. Her hp factor also came under control.

So, friends, don't go by the dogmatism of science. Don't measure everything in crude scientific terms. Science does not mean rejecting everything, but searching out the reality behind everything. Of course, the instruments are different: you can't understand astronomy using the apparatus of chemistry. In the same way, if you want to put spirituality to the acid test, go through the process propounded by our Saints. Only then will you realize that our Saints were not fools; they were far more intelligent than common human beings.

At last, I bow at the feet of 'Parsheshwara Mahadev' and am grateful to him for showering his abundant grace upon me.

About the author: Anupam Mishra, aged 37, is a Hindu Brahmin by birth. He has been involved neck-deep in his divine search for ultimate reality since 1998. He is a professional astrologer, spiritual guide and spiritual healer.