Evolution Theory and Human Nature

We all learn the theory of evolution in school. Well, most of us do. However, what's taught in school is just the basics. What most people, especially religious fundamentalists, do not want you to know is that evolutionary theory can very naturally explain human nature. That part is not taught in school, so I'll tell you here. I'll start with the least shocking conclusions and work up to the most politically incorrect ones, the ones people have been trying to hide from you.

Why do cheetahs run fast? Simple. In ancient times, some cheetahs ran fast and some ran slow. The cheetahs that ran fast caught more food and lived. Those cheetahs then mated and left offspring. The slow cheetahs died. Got it so far? Here we see that evolution fine-tuned cheetah traits, namely by promoting cheetahs that run fast.

It turns out that evolution does not govern only physical traits, like how fast you can run. Evolution also fine-tunes preferences. Preferences that worked out in the gene pool are preferences that became hard-wired in our genes. Those are preferences we don't even have to think about; we just feel like acting on them. For example, most of us have a strong preference for sex with the opposite sex. Why? Because those who had it left descendants, and those who didn't went extinct. Nothing strange, nothing bizarre.

Now, here we go. Say one male makes 1,000 kids and another male makes 1. Which one survives better in the gene pool? The one making 1,000 kids. You see, gene-pool survival is not a boolean value. Survival is not for the fit but for the fittest. Preferences that worked in the gene pool in the past are preferences that are common today.
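To see why gene-pool survival is a matter of degree rather than a yes/no outcome, here is a minimal sketch in Python. All the numbers are invented for illustration: two heritable variants, one of which simply leaves more offspring per generation.

```python
import random

# Toy gene-pool simulation: two heritable variants, A and B.
# Carriers of A leave 3 offspring each; carriers of B leave 1.
# All numbers are invented for illustration.
OFFSPRING = {"A": 3, "B": 1}
POP_CAP = 1000  # carrying capacity of the environment

def next_generation(pop):
    """Everyone breeds true; the pool is then trimmed back at random."""
    children = [v for v in pop for _ in range(OFFSPRING[v])]
    random.shuffle(children)
    return children[:POP_CAP]

pop = ["A"] * 100 + ["B"] * 900  # variant A starts rare, at 10%
for gen in range(8):
    pop = next_generation(pop)
    print(f"generation {gen + 1}: variant A = {pop.count('A') / len(pop):.0%}")
```

Within a handful of generations variant A goes from 10% to essentially the whole pool. Nobody dies out in a single dramatic stroke; the more fertile variant just crowds the other one out.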
Ugh, I can sense the conservatives reaching for their Bibles already. Not yet. Here's more. One obvious way to make genetic copies of ourselves is by making kids. Now, if you're a male, how would you maximize the number of kids you make? By mating with as many females as possible. Males that mated with more women and produced more kids, like Genghis Khan, survived better in the gene pool. In fact, genetic testing shows that a Y-chromosome lineage traced back to Genghis Khan is carried by roughly 8 percent of men across a large region of Asia. Now that's success.

Let me repeat: preferences that worked in the past are preferences that are common today. So what do typical males want nowadays? To mate with as many females as possible, not necessarily to make kids. Our preferences were set up in the past, when sex and kids were inseparable and there was no contraception whatsoever. So males want as many females as possible; it is normal to want as many females as possible. In fact, the "normality" of those who are homosexual is not far different from the normality of males who are monogamous. Successful males are males that can make a lot of money, gain huge political power, and mate with many females. That's what males want. What do women want? Women want the best genes. Women who picked the best genes produced more successful sons. How do women measure the quality of a male's genetic material? By success. Got it so far?

Now we have an issue. There are about the same number of males and females, so if one male is very successful, the others don't get any. And that is one of the main sources of conflict all over the world. When we're not at war, we're in a race; and when we're in a race, those who are not competitive will want to knock down those running fast. Such a preference is called envy.

Different societies then have different ways of balancing tolerance toward success with some socialism to appease those who are not successful. Conservatives, for example, allow economic success but demand socialism through lifelong monogamous relationships. Liberals, for example, allow sexual success but demand socialism in the economy. Neither is optimal, in my opinion. I have written plenty of articles suggesting how better social contracts can benefit both the rich and the poor. For example, taxing kids rather than income, and paying a dividend to all citizens, would let the poor postpone making kids and accumulate enough capital to get rich.

Now, those are the basics of evolutionary theory as applied to human nature and preferences. I guess that's enough for one article. Properly understood, evolution theory can be very useful. We can understand why there is so much criminalization of consensual acts; we'll see that those laws are there to protect disgruntled competitors. We'll see why there are so many wars over religious doctrines; that happens because, to be successful in countries heavily influenced by envy, the wise need to keep pitting people against each other. Many more things are like this. Properly understood, evolution theory lets us predict the outcomes of our choices more accurately, and then come up with strategies that deliver more of what we want. On the other hand, those who are blind will be eaten by those who see. It is in one's best interest to learn and understand evolutionary theory. We ignore it at our own peril.


MALARIA, the silent killer… A simple guide for travellers

What is malaria? Malaria is a very serious disease caused by protozoan parasites of the genus Plasmodium. Four species of the parasite produce the disease, which is transmitted by the female anopheline mosquito. The most dangerous is P. falciparum; if untreated, it can lead to fatal cerebral malaria.

What are the symptoms? Flu-like symptoms: headaches, muscle aches, confusion, dizziness, vomiting (lasting several hours), sweating and tiredness, but most of all, fever. Anaemia and jaundice can also occur. Symptoms generally appear from 7 days to a few weeks after being bitten, but may not appear for up to one year.

How is it prevented? One of the following drugs should be taken before embarking on a trip to a country where malaria is prevalent:

- Atovaquone/proguanil
- Doxycycline
- Mefloquine
- Primaquine (in special circumstances)

Visit your doctor or health clinic several weeks before travelling, as these drugs need to be started in advance. A good insect repellent should also be applied to exposed skin whilst abroad, preferably one containing DEET (N,N-diethyl-meta-toluamide), which is the only ingredient guaranteed to work and is long lasting. There are other repellents on the market for those not wishing to use DEET, but they need to be applied frequently.

What countries are at risk? Afghanistan, Angola, Benin, Brazil, Burkina Faso, Burundi, Cambodia, Cameroon, Central African Republic, Chad, China, Comoros, Congo, Djibouti, Equatorial Guinea, Eritrea, Gabon, Ghana, Guinea, Guinea-Bissau, Indonesia, Ivory Coast, Kenya, Liberia, Madagascar, Malawi, Mali, Mozambique, Niger, Nigeria, Rwanda, São Tomé and Príncipe, Senegal, Sierra Leone, Somalia, Sri Lanka, Sudan, Swaziland, Tanzania, Thailand, Togo, Uganda, Vietnam, Zaire, Zambia.

BE SAFE!


The Invisible Ether and Michelson Morley

The concept of the invisible ether or 'aether' is an old one, dating to the time of the ancient Greeks. They considered the ether to be the medium that permeated all of the universe, and even believed it to be another element. Along with earth, water, air and fire, Aristotle proposed that the ether be treated as the fifth element or quintessence; this term, which literally means 'fifth element', has survived down to the present day to describe an exotic form of 'dark energy' which is crucial in some cosmological models. These ideas spread throughout the world until the advent of a new springtime in scientific thought.

The first person in the modern era to conceive of an underlying ether to support the movement of light waves was the seventeenth-century Dutch scientist Christiaan Huygens. Many others followed in expressing their opinions on the ether concept. Whilst Isaac Newton disagreed with Huygens' wave theory, he also wrote about the 'aethereal medium', although he expressed his consternation at not knowing what the aether was. Newton later renounced the ether theory because, in his mind, an infinite stationary ether would interrupt the motions of the enormous masses (the stars and planets) as they moved through space. This rejection was reinforced by other problematic wave properties which were not explicable at the time; most notably, the production of a double image when light passes through certain translucent materials. This property of matter, known as 'birefringence', was an important hurdle to be overcome for a proper understanding of the wave nature of light.

Some time later (in the 1720s), whilst working on other astronomical issues related to light and the cosmos, the English scientist James Bradley made observations in hopes of quantifying a parallax, the apparent motion of foreground objects relative to those in the background. Whilst he was unable to discern this parallax effect, he happened to reveal another effect which is prevalent in cosmological observations: stellar aberration. Bradley could describe this aberration easily in terms of Newton's particle theory of light. To do so with the wave or undulatory theory, however, was difficult at best, since it would have required a 'motionless' medium; the static nature of this ether concept was of course the property which had originally caused Newton to reject the idea.
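The aberration Bradley found is, incidentally, easy to quantify. Here is a back-of-the-envelope sketch using standard modern values (the figures below are assumptions, not taken from the article): the telescope must be tilted forward by an angle of roughly v/c, because the Earth moves while the light travels down the tube.

```python
import math

# Stellar aberration: to first order the telescope's forward tilt is v/c.
v = 29.8e3   # Earth's orbital speed, m/s
c = 2.998e8  # speed of light, m/s

theta = math.atan(v / c)             # tilt angle in radians
arcsec = math.degrees(theta) * 3600  # converted to arcseconds
print(f"aberration angle = {arcsec:.1f} arcseconds")  # about 20.5
```

The roughly 20-arcsecond tilt agrees with what Bradley measured; the difficulty was never the size of the effect but reconciling it with a motionless wave medium.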
But Newton's acolytes would find themselves in a difficult position when it was shown that birefringence could be explained through another interpretation of the nature of light. If light was treated as a side-to-side action, a 'transverse motion', then birefringence could be attributed to a light wave rather than to Newton's particle or corpuscular theory. This, along with Thomas Young's detection of an interference effect for light in 1801, renewed the ascendancy of the wave theory of light. These findings, however, carried with them all the preconceived notions prevalent in the scientific mind: since waves such as water and sound waves require a medium of propagation, it was assumed that light likewise needed a medium, an ether, for its waves to be transmitted across the universe. Further problems, however, would afflict the ether theory. Because of the unique properties of a transverse wave, it became apparent that this hypothetical explanation required the ether to be a solid.

In response, Cauchy, Green and Stokes contributed theoretical and mathematical work to an 'entrainment' hypothesis which later came to be known as the 'ether drag' concept. But nothing would give more impetus to these ideas than James Clerk Maxwell's equations (1870s), which required the constancy of the speed of light, c. When the implications of Maxwell's equations were worked out by physicists, it was understood that, as a result of the need for a constant speed of light, only one reference frame could meet this requirement under Galilean-Newtonian relativity. Scientists therefore expected that there existed a unique absolute reference frame which would comply with this need; as a result, the ether would again be stationary. As a consequence, by the late nineteenth century the aether was assumed to be an immovable, rigid medium.

However, earlier theories existed as to the nature of the aether. One of the most famous of these is the 'aether drag' hypothesis. In this concept, the aether is a special environment within which light moves; it would be attached to all material objects and would move along with them. Measuring the speed of light in such a system would yield a constant velocity for light no matter where one tested for it. The 'aether drag' idea originated in the aftermath of François Arago's experiment, which appeared to show the constancy of the speed of light. Arago believed that refractive indices would change when measured at different times of the day or year as a result of stellar and earthly motion. In spite of his efforts, he did not notice any change in the refractive indices he measured.

Many other experiments would follow, performed in order to find evidence of the aether in its many different abstractions. The most important of these, however, was conducted by the American scientists Michelson and Morley. Their experiment considered another alleged effect of a different aether theory, which came to be known as the aether wind. Since the aether permeated the entire universe, the earth would move within it as it spun on its axis and orbited the sun. This movement of the earth with respect to the aether gave rise to the idea that it should be possible to detect an 'ether wind'. Thus, their experiment was essentially an attempt to detect this so-called ether wind; a mysterious zephyr that would be nearly impossible to detect, because the aether only infinitesimally affected the surrounding material world.

Michelson first experimented in 1881 with a primitive version of his interferometer, a mechanism designed to measure the wave-like properties of light. He then combined forces with Morley in the most famous 'null' experiment of physics, in which he utilized an improved version of his interferometer. Michelson's apparatus would help him win the Nobel prize for his optical precision instruments and the investigations carried out with them, the most important study being what became known as the Michelson-Morley experiment of 1887. Michelson and Morley used a beam splitter made of a partially transparent mirror, with two other mirrors arranged horizontally and vertically from the light source.
When a beam of light traveled from a source of coherent light to the half-silvered mirror, it was split and sent along both the horizontal and vertical arms. When the light returned to the eyepiece of an observer, the separately returning light waves would combine destructively or constructively; this phenomenon is known as the interference effect for light. It was hoped that a shift of the interference fringes away from what was normally predicted would establish the existence of the aether wind.

To detect this effect, the Michelson interferometer was prepared in such a manner as to minimize all extraneous sources of experimental error. It was located in a lower level of a stone building to eliminate thermal and vibrational effects which might compromise the experimental results. Additionally, the interferometer was mounted atop a marble slab floated in a basin of mercury, so that the apparatus could be rotated through a variety of positions with respect to the invisible ether. But despite these many preparations, the experiment did not yield the expected fringe shift. Thus, Michelson and Morley concluded that there was no evidence for the existence of the ether.
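For a sense of the scale they were chasing, here is the standard first-order estimate of the fringe shift the 1887 apparatus was expected to show. This is a sketch under common textbook assumptions: the usually quoted effective arm length of 11 m, a wavelength of about 500 nm, and the Earth's orbital speed taken as the ether-wind velocity.

```python
# Rotating the interferometer by 90 degrees swaps the roles of the two
# arms, so the predicted first-order fringe shift is 2*L*v**2 / (lam*c**2).
L = 11.0      # effective arm length, m (commonly quoted figure)
lam = 500e-9  # wavelength of the light, m (assumed)
v = 3.0e4     # Earth's orbital speed, m/s
c = 3.0e8     # speed of light, m/s

shift = 2 * L * v**2 / (lam * c**2)
print(f"predicted shift = {shift:.2f} fringe")  # about 0.44 fringe
```

A shift of roughly four tenths of a fringe was predicted, and the instrument could resolve far smaller displacements than that; nothing close to it was observed.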

Others would replicate the experiment in different incarnations that modified its premise, each returning a similar negative result. Modern theorists have taken these results, and those of many other experiments, as indicating the non-existence of the aether. However, even the negative result of Michelson-Morley has been called into question, as far back as 1933. In that year, Dayton Miller argued that even though the duo's experiment had not found the expected range of interference patterns, it had found an interesting, little-noticed effect. Miller suggested that Michelson and Morley had recorded a sine-wave-like set of data that correlated well with the predicted pattern, and he described how thermal and directional assumptions inherent in the experimental arrangement may have biased the fringe-interference data. Thus, the test may have been performed in an imperfectly conceived experimental setup, with a built-in mathematical bias against detection of the expected outcome. In some form or another, then, the aether theory may yet prove sustainable as a foundational theory of physics.

Perhaps it is best to close with these ideas as expressed in 1920 by Einstein, who stated that he believed the ether concept to still be relevant to his ideas on space and time: "More careful reflection teaches us, however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether." He continued: "Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether." And finally: "According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it."


Finite Element Analysis: Post-processing

The following four-article series was published in a newsletter of the American Society of Mechanical Engineers (ASME). It serves as an introduction to the recent analysis discipline known as the finite element method. The author is an engineering consultant and expert witness specializing in finite element analysis.

FINITE ELEMENT ANALYSIS: Post-processing
by Steve Roensch, President, Roensch & Associates
Last in a four-part series

After a finite element model has been prepared and checked, boundary conditions have been applied, and the model has been solved, it is time to investigate the results of the analysis. This activity is known as the post-processing phase of the finite element method.

Post-processing begins with a thorough check for problems that may have occurred during solution. Most solvers provide a log file, which should be searched for warnings or errors, and which will also provide a quantitative measure of how well-behaved the numerical procedures were during solution. Next, reaction loads at restrained nodes should be summed and examined as a "sanity check". Reaction loads that do not closely balance the applied load resultant for a linear static analysis should cast doubt on the validity of other results. Error norms such as strain energy density and stress deviation among adjacent elements might be looked at next, but for h-code analyses these quantities are best used to target subsequent adaptive remeshing.

Once the solution is verified to be free of numerical problems, the quantities of interest may be examined. Many display options are available, the choice of which depends on the mathematical form of the quantity as well as its physical meaning. For example, the displacement of a solid linear brick element's node is a 3-component spatial vector, and the model's overall displacement is often displayed by superposing the deformed shape over the undeformed shape. Dynamic viewing and animation capabilities aid greatly in obtaining an understanding of the deformation pattern. Stresses, being tensor quantities, currently lack a good single visualization technique, and thus derived stress quantities are extracted and displayed. Principal stress vectors may be displayed as color-coded arrows, indicating both direction and magnitude. The magnitude of principal stresses or of a scalar failure stress such as the von Mises stress may be displayed on the model as colored bands. When this type of display is treated as a 3D object subjected to light sources, the resulting image is known as a shaded image stress plot. Displacement magnitude may also be displayed by colored bands, but this can lead to misinterpretation as a stress plot.
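Since the von Mises stress appears above as the scalar most often painted onto the model, a minimal sketch of how it is derived from the six independent components of the stress tensor may be helpful (illustrative code, not from the original series; any consistent unit system works):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six independent components
    of the symmetric Cauchy stress tensor."""
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

# Sanity check: a uniaxial stress of 200 MPa should give 200 MPa back.
print(von_mises(200.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 200.0
```

A contour plot is just this formula evaluated at every node or integration point and mapped to colored bands.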
An area of post-processing that is rapidly gaining popularity is that of adaptive remeshing. Error norms such as strain energy density are used to remesh the model, placing a denser mesh in regions needing improvement and a coarser mesh in areas of overkill. Adaptivity requires an associative link between the model and the underlying CAD geometry, and works best if boundary conditions may be applied directly to the geometry as well. Adaptive remeshing is a recent demonstration of the iterative nature of h-code analysis.

Optimization is another area enjoying recent advancement. Based on the values of various results, the model is modified automatically in an attempt to satisfy certain performance criteria and is solved again. The process iterates until some convergence criterion is met. In its scalar form, optimization modifies beam cross-sectional properties, thin shell thicknesses and/or material properties in an attempt to meet maximum stress constraints, maximum deflection constraints, and/or vibrational frequency constraints (a toy numeric sketch of such a sizing loop appears at the end of this article). Shape optimization is more complex, with the actual 3D model boundaries being modified. This is best accomplished by using the driving dimensions as optimization parameters, but mesh quality at each iteration can be a concern.

Another direction clearly visible in the finite element field is the integration of FEA packages with so-called "mechanism" packages, which analyze the motion and forces of large-displacement multi-body systems. A long-term goal would be real-time computation and display of displacements and stresses in a multi-body system undergoing large-displacement motion, with frictional effects and fluid flow taken into account when necessary. It is difficult to estimate the increase in computing power necessary to accomplish this feat, but 2 or 3 orders of magnitude is probably close. Algorithms to integrate these fields of analysis may be expected to follow the computing power increases.

In summary, the finite element method is a relatively recent discipline that has quickly become a mature method, especially for structural and thermal analysis. The costs of applying this technology to everyday design tasks have been dropping, while the capabilities delivered by the method expand constantly. With education in the technique and in the commercial software packages becoming more and more available, the question has moved from "Why apply FEA?" to "Why not?". The method is fully capable of delivering higher-quality products in a shorter design cycle with a reduced chance of field failure, provided it is applied by a capable analyst. It is also a valid indication of thorough design practices, should unexpected litigation crop up. The time is now for industry to make greater use of this and other analysis techniques.
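As promised above, here is a toy rendition of a scalar sizing loop. Every number is invented for illustration, and the closed-form cantilever formula stands in for what would really be an FEA solve at each iteration:

```python
# Resize a rectangular cantilever cross-section until the peak bending
# stress meets an allowable value. Peak stress for a tip-loaded
# rectangular cantilever is 6*P*L / (b*h**2).
P, L = 2000.0, 1.0  # tip load (N) and beam length (m), invented values
allow = 150e6       # allowable stress, Pa
b, h = 0.02, 0.02   # initial section width and depth, m

for it in range(50):
    stress = 6 * P * L / (b * h**2)  # current peak bending stress, Pa
    if stress <= allow:
        break
    h *= (stress / allow) ** 0.5     # stress scales as 1/h**2, so scale h

print(f"iterations: {it}, final h = {h * 1000:.1f} mm, "
      f"stress = {stress / 1e6:.0f} MPa")
```

Real optimizers juggle several constraints at once (stress, deflection, frequency), but the iterate, resize, re-solve skeleton is the same.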
