A federal program to rebuild the ecosystems of the Louisiana Delta at the mouth of the Mississippi River took a hit last month when Hurricane Katrina roared through the gulf. The Golden Meadow Plant Materials Center, which is charged with rebuilding an ever-eroding Louisiana coastline, lost about one-third of its 50,000 to 80,000 plants, which strengthen marshes and barrier islands. The plants were located in a greenhouse at the 90-acre facility in Galliano, La., southwest of New Orleans. Relatively speaking, the plant materials center took a small hit. Coastal Mississippi, however, did not fare so well. Nor did southeastern Louisiana. "It was a catastrophic event," said Gary Fine, manager of the U.S. Department of Agriculture's Golden Meadow center, one of 26 plant materials centers across the country. Fine and a group of mainly volunteer workers, such as high-school students, may make a trek to the beaches of Mississippi to "try and help them replenish," Fine said. Before coming to Louisiana, Fine worked in Kansas rebuilding the plains. "I grew prairie grass in Kansas and I'm growing wetland grass here," Fine said, explaining that the program was initially developed by the government in the 1930s to replant the Dust Bowl area of the central plains. It's up to Fine's crew to discover which plants, for instance, can tolerate a great deal of salt. Saltwater from the ocean, combined with rising sea levels, erodes and sinks the coastline. The stabilization of the Mississippi River's channel through the construction of levees has cut off the sediment-laden overflow that once nourished adjacent wetland areas.
Hurricanes and tropical storms ravage the coast as well. By working with plant geneticists to cross and breed species that can help strengthen the coastline, Fine is using nature's bounty to protect these ecosystems from its wrath, reinforcing marshes and barrier islands with hardy vegetation. One of the projects Fine is working on involves rebuilding ridges and maritime forests. Using bulldozers to shape the ridgeline and recreating the maritime forests with plants grown in containers at the facility, the group is able to stabilize the coastal region, which matters because with each hurricane season the Gulf of Mexico coastal regions become more vulnerable. Such plants include 'Vermilion' smooth cordgrass, 'Fourchon' bitter panicum, and 'Brazoria' seashore paspalum. When the coastline recedes, migratory birds from South America can no longer make the trip across the gulf. By rebuilding the maritime forests and coastline, Fine's group is recreating the habitat that sustains the wildlife. It's one reason Fine likes working in the delicate coastal region. "It's exciting and rewarding," he said, adding, "It's a fragile environment but it's a dynamic environment."

By Allison Cooper, Copyright 2005 PhysOrg.com

Citation: Oceanic ecosystem in the wake of hurricanes (2005, October 2) retrieved 18 August 2019 from https://phys.org/news/2005-10-oceanic-ecosystem-hurricanes.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
How close? The answer is a bit hard to swallow, even to a disinterested physicist! A difference of no more than one part in a million billion (10^15) would allow galaxies to form before the expansion of the universe pulls everything too far apart for new structures to form. This is known as a fine-tuning problem: to explain the observed properties of the universe under the Big Bang model, physicists had to assume a very specific value for its initial density. If the universe were actually at the critical density, which has a clear physical significance, the fine-tuning problem wouldn't be so bad. A universe starting at the critical density remains at the critical density forever, which sounds like a clue to some deeper physical law. One might claim that an unknown physical process makes this the only possible value. But once they knew that the initial density was some other number, physicists had to admit that any initial density was possible. Although we live in a universe capable of supporting life, the probability that such a universe came into existence randomly seemed to be infinitesimal. The fine-tuning problem was eventually solved by borrowing ideas from quantum field theory, the branch of physics dealing with fundamental particles and their interactions. During the eighties and nineties, most physicists were content with the Big Bang model and believed that a quantum mechanical process called inflation pushed the density of the early universe very close to its critical value in a brief period of runaway expansion. During inflation, the universe was dominated by a field of energy not unlike the dark energy being discussed today. In this scenario, the initial density of the universe was no longer relevant: inflation would drive any initial value towards the critical value in the blink of an eye. At the turn of the millennium, however, this tidy theory began to fail.
Large-scale surveys discovered distant supernovae by the dozen, allowing astronomers to determine how fast the universe was expanding billions of years ago. The cosmology du jour predicted that the expansion of the universe was slowing down, but these and subsequent observations have shown that the expansion is actually speeding up! To explain this result, Einstein's cosmological constant had to be brought back into the picture. This parameter corresponds to the energy density of a vacuum (the 'dark energy'), and, just like the matter density, the cosmological 'constant' evolves along with the universe. The fine-tuning problem has therefore returned, in a different form. The initial density of vacuum energy had to be very close to zero at the Big Bang, or else an accelerating expansion would have driven apart all the matter before stars could form. Inflation can't solve the problem this time; technically speaking, the cosmological constant is itself one cause of inflation. Once again, cosmologists find themselves debating the initial conditions of the universe. One common explanation, which has been used for decades to solve fine-tuning problems, is called the anthropic principle. In essence, this is the statement that we must live in a universe that can support life because we are here to observe it. This statement isn't very satisfying, however, since it doesn't offer any new insight into the nature of the universe. In modern times, physicists such as Alexander Vilenkin (Tufts University) have begun to suggest that our universe is only one of many. They envision an eternally expanding field of fundamental energy, effervescent with an infinity of universes. Each one has a Big Bang of its own, popping into existence wherever quantum fluctuations cool the fundamental field sufficiently. If there are an infinite number of universes, then it is certainly much less surprising that some would be habitable.
Our particular combination of cosmological parameters, however, remains a highly improbable event in its own right. Advances in string theory and our understanding of higher dimensional spaces have made possible an even more astonishing solution to the coincidence problem. Quantum mechanical models have been proposed that allow the cosmological constant to decay from any initial value to almost zero. Such models, however, have two problems: first, the process typically requires trillions of years; and second, while the cosmological constant is large, the density of matter in the universe drops to zero very quickly. But what if the universe is much older than it appears? Professors Paul Steinhardt (Princeton University) and Neil Turok (Cambridge University) have come up with a novel solution that gives the cosmological constant time to decay to its required value. Resurrecting a ghost of the cyclical universe, they propose that our universe is one of two embedded in the eleven-dimensional space of string theory. The two universes are linked with a spring-like attraction, and so pass through each other (moving along one of the higher dimensions) periodically. Every time they interact, enormous energies are released and both universes fill with hot plasma: a new Big Bang. There is no Big Crunch, as both universes are constantly expanding. A trillion years or so after one Big Bang, when the universe is practically empty, another Big Bang occurs and the stars and galaxies can form once more. The underlying cosmological constant, however, is unaffected by this process and has all the time it needs to decay to a small value. Eventually stars and galaxies will have time to form, and the same will be true of every subsequent cycle. In this modern version of the old cyclical model, the coincidence is resolved because only a few cycles are required for the cosmological constant to decay.
The number of star-producing cycles following the decay, however, is practically infinite. Either way, it is clear that our perspective has changed. A single universe is no longer satisfying, given the most unlikely nature of our own. To explain our existence, it seems we must imagine others.

References: Paul Steinhardt and Neil Turok, "Why the Cosmological Constant is Small and Positive", Science, 4 May 2006, http://xxx.lanl.gov/astro-ph/0605173; Alexander Vilenkin, "The Vacuum Energy Crisis", Science, 4 May 2006, http://xxx.lanl.gov/astro-ph/0605242. Articles from Science magazine are also available at http://www.sciencemag.org/

[1] As the universe expands, its density decreases. The critical density is therefore actually a function of time, and had a much higher value in the early universe than it does today.

By Ben Mathiesen, Copyright 2006 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

During the decades following common acceptance of the Big Bang model, physicists and astronomers tried very hard to measure the composition of the universe. According to theory, the average density of the universe would determine its ultimate fate. A universe with too little matter would expand forever, and its average density would eventually drop to zero. A universe with too much matter, on the other hand, would one day collapse under its own gravity (the 'Big Crunch'). Only one special value, the critical density, could prevent both a Big Crunch and the unchecked expansion of the universe. Those with philosophical objections to a dying universe had only three alternatives. One idea was that we actually lived in a steady state universe. In this model, the universe expands at a constant rate but produces an occasional atom out of the void to maintain its average density. A steady state universe is infinite, and need not have had a Big Bang at all.
Another way out was to have a cyclical universe, whose every Big Crunch is followed by another Big Bang. The cyclical universe model didn't improve our own long-term prospects, but it at least preserved the universe itself from extinction. Unfortunately, neither of these models survived under the pressure of improving astronomical observations. By the 1970s, a critical density Big Bang model was the only viable solution for a stable universe. Unfortunately, even the most generous accounting of matter in the universe added up to only about half of the required density. Cosmologists were stuck with an unstable universe, doomed to end in cold and darkness. A universe that expands forever is not so bad, if the data require it; the future history of the universe might be disappointing to aesthetes, but a scientist will just shrug and accept the result. The Big Bang model, however, still had a big problem: our low-density universe could only arise from a highly unlikely coincidence of initial conditions. An expanding universe is fine in principle, but it mustn't expand too quickly! For galaxies, stars, and planets to form, the average density of matter has to stay relatively high for at least a few billion years. To satisfy even this one vague constraint, it turns out that the initial density of the universe would have had to be very close to the critical value [1].

Over the past five years or so, scientists have finally converged on a model of the universe that explains (or at least permits) all of its characteristics. The new cosmological model has one very surprising feature, however, which is supported by several robust and unrelated observations. In addition to matter and radiation, it seems that the vacuum of space is filled with a mysterious 'dark energy' that pushes the universe apart. While the dark energy helps us explain a great many things, it also resurrects an old problem once thought buried: the idea that our universe is the product of a highly unlikely cosmic coincidence.

Citation: A Cosmic Coincidence Resurrects the Cyclical Universe (2006, June 5) retrieved 18 August 2019 from https://phys.org/news/2006-06-cosmic-coincidence-resurrects-cyclical-universe.html
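As an aside to this article, the critical density that drives the whole fine-tuning story has a simple closed form. The short derivation below is standard textbook cosmology, not something worked out in the article itself:

```latex
% Friedmann equation for a homogeneous, isotropic universe
% (H = expansion rate, \rho = average density, k = spatial curvature,
%  a = scale factor):
%   H^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2}
% A universe balanced exactly between eternal expansion and a Big
% Crunch is spatially flat (k = 0); solving for \rho at k = 0 gives
% the critical density:
\rho_c = \frac{3 H^2}{8 \pi G}
```

With today's expansion rate of roughly 70 km/s/Mpc, this works out to about 10^-26 kg/m^3, only a few hydrogen atoms per cubic meter; and because H changes as the universe expands, the critical density is itself a function of time.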
Citation: Scientists design Maglev car with greater stability (2006, June 2) retrieved 18 August 2019 from https://phys.org/news/2006-06-scientists-maglev-car-greater-stability.html

Since the late '60s, scientists have been designing, building and operating "flying trains," or magnetically levitated ("Maglev") systems. The sci-fi-like technology still faces challenges in stability, controllability and cost-effectiveness, but scientists are making steady improvements. Maglev trains, which use the electromagnetic force to overcome gravity, have intrigued scientists for the past several decades. Levitating several centimeters above the track, the train cars have no physical contact with the track and hence no friction, enabling speeds of more than 400 mph. Implementing Maglevs in urban areas, or even across the country, could make a five-hour drive a smooth 40-minute ride, as well as reinvent the world's infrastructure. The first operating low-speed Maglev systems, built in England and Germany in the mid-'80s, are no longer in existence. However, the newest 300-mph system, built in 2002 in Shanghai, China, has revived political and consumer interest in the technology, despite large construction costs. Although the city plans to extend the 18-mile track to 100 miles for the World Expo in 2010, the technology must become more economically viable, with proven safety, for mainstream adoption. As part of this effort, a team of scientists (W. Yang et al.) from China has recently designed and built a new model for a Maglev car that could offer significant stability advantages over current technology. The scientists used specially fabricated high temperature superconductors (HTSC) for the cars and permanent magnets for the tracks, a combination that demonstrates a higher levitation force and greater stability than when permanent magnets are used on both the track and car.
Although the car models are only about 12 cm long and 4 cm wide, they demonstrated frictionless, stable movement across the 10-meter-long track. "The arrangement of the magnets made it easy to get a uniform magnetic field distribution along the length of the track," wrote W. Yang et al. in a recent issue of Superconductor Science and Technology. "The model can be used [to demonstrate] a fast transportation system to students and adults." In the HTSC model, the cars are propelled by a combination of linear motors on the tracks and aluminum rotors on the cars. Photoelectric switches near the linear motors save energy by ensuring that the motors work only when a car is traveling through them. The HTSC model's large magnetic fields across the track also provide a strong guidance force and a magnetic field distribution that prevents the cars from escaping the track. Because this magnetic configuration provides a large restoring force, it also eliminates the need for any external stability control. Using liquid nitrogen to cool the HTSCs to -196 degrees C, the scientists investigated different cooling methods to optimize the levitation and guidance forces. The team found that a field-cooled process, which cools the superconductors after the car is already on the track, provides significant stability but also requires further investigation.

Scientists have fabricated Maglev car models using high temperature superconducting technology, which could increase stability and inspire further developments to confront Maglev challenges. Photo credit: Wanmin Yang et al.

Citation: Yang, W. M., et al. A small Maglev car model using YBCO bulk superconductors. Superconductor Science and Technology 19 (2006) S537-S539.

By Lisa Zyga, Copyright 2006 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
"Before our study, it was clear that there is a component of imitation that influences facial expressions, but there was no study that compared the gestalt of facial movements of relatives across several emotions," Peleg told PhysOrg.com. Peleg is a PhD student supervised by Professors Eviatar Nevo and Gadi Katzir at the International Graduate Center of Evolution at the Institute of Evolution, part of the University of Haifa in Israel. In the 1970s, contrary to some views of the time but in accordance with Darwin, researchers Paul Ekman and Eibl-Eibesfeldt showed that facial expressions are universal: people from different parts of the world smile when happy and frown when sad, and so on. Scientists also know that individuals have unique facial expression signatures. Owing to differences in nerves and muscles, some people will have, for example, dimples, "Duchenne" smiles (with wrinkles around the eyes) or the ability to lift one eyebrow. Wanting to know whether there might be a heritable basis for these individual signatures, Peleg et al. studied the gestalt of facial movements, seen in details such as the intensity and frequency of expressions. "Facial expressions are non-verbal communication phenotypes, meaning they are composed from genetics and environmental conditions," said Peleg. "We decided to investigate a population of born-blind persons in order to eliminate the social influence and the effects of imitation." In the study, the scientists videotaped 51 subjects (21 who were blind, and a total of 30 of their family members) as they were provoked to exhibit six emotional states: concentration, sadness, anger, disgust, joy and surprise. Next, the researchers used a classification tool to assign values (e.g. for types of movements and frequencies) to each subject's expressions. After defining the values, another classification tool determined which subjects were family members.
Quite convincingly, 80% of the classifications correctly identified family members when all six emotional expressions were taken into account. The single emotion that yielded correct classification of family members when tested alone was anger, at 75%. In a test comparing family members with each other, the scientists also found that related subjects showed similar frequencies of facial expressions for concentration, sadness and anger, but not for the other emotions. Scientists have found that family members share a facial expression "signature": a unique form of the universal facial expressions encountered worldwide. In a rare study taking blind subjects into account, Gili Peleg et al. discovered that family members could be identified by their facial expressions 80% of the time, giving scientific support to the observation that a child "has her Daddy's smile."

Citation: Study finds facial expressions are inherited (2006, November 7) retrieved 18 August 2019 from https://phys.org/news/2006-11-facial-inherited.html

"The hereditary influence that appeared in think-concentrate, sadness, and anger may relate to the induction of the high diversity of facial movements by these emotions, as we found in a previous study," said Peleg. "We believe that if our study population was larger, we could get significant results even in the other three emotional states: disgust, joy and surprise." Peleg et al. hope that finding a heritable basis for facial expression signatures may lead to discovering genes responsible for facial expressions. If so, it might be possible to develop repair mechanisms for people lacking facial expressions, such as people with autism.
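The two-step classification the researchers describe (turn each face into a feature vector, then group subjects by similarity) can be illustrated with a toy nearest-centroid sketch. The feature values, the family structure, and the nearest-centroid rule below are invented for illustration; the study used its own classification tools:

```python
import math

# Toy expression "signatures": per-subject frequencies of a few facial
# movement types (all values invented for illustration).
subjects = {
    "family_A_mother": [0.9, 0.2, 0.5],
    "family_A_child":  [0.8, 0.3, 0.6],
    "family_B_mother": [0.2, 0.9, 0.1],
    "family_B_child":  [0.3, 0.8, 0.2],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(vector, centroids):
    """Assign a subject to the family whose centroid is nearest."""
    return min(centroids, key=lambda fam: distance(vector, centroids[fam]))

# Build family centroids from the "known" members (the mothers here),
# then check whether each child lands in the right family.
centroids = {
    "A": centroid([subjects["family_A_mother"]]),
    "B": centroid([subjects["family_B_mother"]]),
}
for child, fam in [("family_A_child", "A"), ("family_B_child", "B")]:
    assert classify(subjects[child], centroids) == fam
```

A real version would use many more movement features and cross-validation, but the principle is the same: if expression signatures are heritable, a subject's feature vector should sit closer to relatives than to strangers.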
Much information can be communicated through a person's facial expressions, and the scientists also wonder about their evolutionary significance. "Communication abilities have an evolutionary advantage; therefore facial expression phenotypes should be conserved," said Peleg. "Facial expressions are important in inter-individual and hierarchical interactions of people within our own species; between different human races; between different tribes; and in animals between different species. The relationships of mother-babies, bonding of pairs, aggression interactions between individuals and so on should be very important in hierarchical situations in human and animal societies. Likewise, facial expressions should be of great importance as pre-mating isolating mechanisms between species. "The genetic basis of facial expressions is probably composed of an array of genes coding for muscle structure, bone structure and muscle innervations," Peleg continued. "However, our results also demonstrate kinship sequences of facial expression. This could indicate genetic conservation and the existence of brain regions that control facial expressions."

Citation: Peleg, Gili; Katzir, Gadi; Peleg, Ofer; Kamara, Michal; Brodsky, Leonid; Hel-Or, Hagit; Keren, Daniel; and Nevo, Eviatar. "Hereditary family signature of facial expression." Proceedings of the National Academy of Sciences, October 24, 2006, Vol. 103, No. 43, 15921-15926.

By Lisa Zyga, Copyright 2006 PhysOrg.com
"I remember waiting in line to scan my ticket inside the terminal, I believe it was at the Seattle airport," Steffen told PhysOrg.com. "I remember being quite disappointed when I saw how long the second line was – the one at the entrance to the airplane – and how slowly it moved. . . . That's when I thought that there had to be a better way to get people onto the airplane than the one that was being employed. I didn't have the time to work on it right then, so I brooded over it for almost 18 months. Last year, I decided that I either needed to solve the problem or stop thinking about it." In his analysis, Steffen found that the worst method for boarding a plane is boarding from the front to the back, since passengers have to wait and step over each other to get to their seats. As he explains in a paper submitted to the Journal of Air Transport Management, conventional wisdom suggests that boarding in the manner opposite to the slowest method should be the fastest. Quite unexpectedly, then, Steffen found that the common back-to-front boarding method is actually the second worst method possible, only slightly better than boarding front to back. "I was certain that the worst way to load the airplane was from front to back, so I ran my simulation in that configuration first to set a baseline," Steffen said. "I was also somewhat convinced that the optimal way would be from back to front or something like it. I half expected to find that back-to-front loading is several times faster than front-to-back. Had that been true, I was prepared to run the two simulations, see how much faster it was, be satisfied, and put it aside. When the results were almost identical, I first thought that there was a bug in my code.
Once I was convinced that my code worked properly, I realized that the problem was more interesting than I had anticipated, and I got more serious about it." Using a combination of a Monte Carlo optimization algorithm and intuition, Steffen determined an optimal boarding method, which could make boarding go 4 to 10 times faster than the worst method, depending on the size of the plane. In the optimal method, passengers would board 10 at a time in every other row (since a passenger loading luggage takes up about two rows of aisle space). This way, passengers could always be either loading luggage or sitting in their seats, rather than waiting in the aisle, as in the two previous methods.

Citation: The Best Way to Board a Plane (2008, February 14) retrieved 18 August 2019 from https://phys.org/news/2008-02-board-plane.html

A Boeing 747 passenger plane. Most airlines board passengers the same way, first filling the seats in the back of the plane, and then moving to the front. After a recent experience boarding a plane in this manner, Fermilab physicist Jason Steffen wondered if there might be a better way. So, in the midst of studying gravitation and axion-like particles, Steffen took a short break to investigate an optimal boarding method for airline passengers. However, Steffen also acknowledged that the optimal method might not be practical, since passengers who sit next to each other often travel together, and prefer to board together. He proposed a modified version, in which passengers board in blocks of three consecutive seats on one side of the plane in every other row. In this strategy, there would be four boarding groups, with passengers in the same row on the same side boarding together.
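Steffen's own simulation isn't reproduced in the article, but the aisle dynamic it studies can be sketched in a toy model. Everything below (one aisle cell per passenger, a fixed number of "stow" ticks, two passengers per row, the row counts) is an illustrative assumption, not Steffen's code:

```python
def boarding_time(order, stow=3):
    """Tick-based toy model of an airplane aisle. `order` lists each
    passenger's target row in boarding sequence. A passenger occupies one
    aisle cell, pauses `stow` ticks at their own row to load luggage
    (blocking everyone behind), then leaves the aisle. Returns the number
    of ticks until everyone is seated."""
    aisle = {}           # aisle cell -> [target_row, stow_ticks_remaining]
    queue = list(order)  # passengers still waiting at the door
    t = 0
    while queue or aisle:
        t += 1
        for pos in sorted(aisle, reverse=True):  # advance front passengers first
            target, left = aisle[pos]
            if pos == target:                    # at own row: stow, then sit down
                if left == 1:
                    del aisle[pos]
                else:
                    aisle[pos][1] = left - 1
            elif pos + 1 not in aisle:           # step forward if next cell is free
                del aisle[pos]
                aisle[pos + 1] = [target, left]
        if queue and 0 not in aisle:             # next passenger steps aboard
            aisle[0] = [queue.pop(0), stow]
    return t

rows = 12
# Two passengers per row: front-to-back makes each stowing passenger block
# the whole aisle behind them, while back-to-front lets passengers bound
# for different rows stow in parallel.
back_to_front = [r for r in range(rows - 1, -1, -1) for _ in range(2)]
front_to_back = list(reversed(back_to_front))
assert boarding_time(back_to_front) < boarding_time(front_to_back)
```

In this toy model, spreading the stowing passengers along the aisle (which Steffen's every-other-row waves do even more aggressively) is what shortens boarding, and clustering them near the entrance serializes everything. The movement rules and constants are guesses, so only the qualitative ranking should be read off, not the timing ratios.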
This method provided a decent middle ground, as it was twice as slow as the optimal method, but twice as fast as the conventional method. Although getting passengers to line up in their correct groups might sound challenging, Steffen noted that Southwest Airlines has been experimenting with having its passengers line up in numerical order – so the logistics wouldn't be inconceivable. Steffen also identified several other boarding strategies with results superior to the conventional method. Contrary to our tendency for order, even completely random boarding proved to be a good alternative. In fact, random boarding was nearly as fast as the modified optimal method. Plus, by its very nature, it has the advantage of not requiring airline attendants to organize boarding passengers in any way. And the random result also shows that, when passengers board out of order in the other strategies, the results will still be better than the conventional boarding method. The main advantage of the alternative boarding methods is that they allow several passengers to load their overhead luggage simultaneously, which Steffen identified as the largest factor in determining boarding time. By spreading the passengers throughout the airplane instead of concentrating them together, more passengers could load their luggage at once. Steffen noted that, although he has recently heard of other boarding optimization studies, his analysis uses a unique method and is the first to generate this specific optimization strategy. "I think that the biggest challenge to implementing one of these methods is cracking into the industry," he said. "Right now, I have a model where the parameters need to be calibrated with data. But that would require an investment from an airline company or manufacturer.
While I could be wrong, I doubt that when an airline company needs to study an issue like this one, their first thought is, 'Let's go talk to a physicist' (followed by, 'Look, here's one that's studied axions and extrasolar planets. He's our man.'). The two fields just don't talk to each other enough to have that kind of understanding." Still, Steffen thinks that reducing boarding time could benefit airlines in a number of ways, especially for short flights between nearby cities. In such cases, quicker boarding might allow an additional daily flight to be scheduled, or it could reduce the number of gates an airline requires, since each gate could be cleared more rapidly. With thousands of flights taking off around the globe every day, a few minutes could save a lot of people a lot of time.

More information: Steffen, Jason H. "Optimal boarding method for airline passengers." arXiv:0802.0733v1 [physics.soc-ph], 6 Feb 2008.

Copyright 2008 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
(Left) The BioModule will carry 30 samples, and have a mass of 100 grams. Credit: Bruce Betts/The Planetary Society. (Right) Water bears have already shown that they can survive vacuum conditions and intense radiation. Credit: Bob Goldstein.

The LIFE experiment is being developed by The Planetary Society, a publicly supported organization, founded in part by Carl Sagan, that now has members in 125 countries. The researchers will send 10 different organisms (three specimens of each, for a total of 30 samples) drawn from all three domains of life – bacteria, eukaryota, and archaea – along with some native soil samples, to Mars' largest moon on the three-year mission. According to the scientists, the experiment will test part of the theory of transpermia, specifically investigating life's ability to move between planets. In an earlier experiment, in 2007, water bears flew on a spacecraft and survived the major hardships of radiation and vacuum. In 2011, the life forms will be packed inside a puck-like container called a BioModule, with a total mass of 100 grams, which is designed to resemble the kind of meteorite that may have carried early life forms between planets. After the 10-month journey to Phobos, the specimens will undergo a 4,000-g impact on the moon's surface, spend a few weeks there in their sealed containers, and then return to Earth on board a robotic interplanetary lander that would crash-land in Kazakhstan. Scientists would then open the containers and see what was still alive. "If no microbes survive, this does not necessarily rule out the possibility of transpermia, but it certainly calls it into question more," according to The Planetary Society's website.
"But if some of the organisms do make it alive to Phobos and back, then at least we would know that some life could indeed survive an interplanetary journey over a three-year period inside a rock." The experiment would mark the longest time that biological samples have spent in deep space; the Biostack 1 and 2 experiments, flown during the Apollo 16 and 17 missions to the moon, traveled outside the Earth's magnetosphere for about two weeks. To prepare for the upcoming launch, the scientists had to overcome several challenges. They tested the BioModule's durability by violently vibrating the container on a shake table, and by shooting it out of an air cannon, to mimic the conditions it would undergo.

(PhysOrg.com) — Tiny microscopic creatures commonly known as water bears (also called tardigrades), along with a few other life forms, will be sent to the Martian moon Phobos to test whether organisms can survive for long periods of time in deep space. The mission, called the Living Interplanetary Flight Experiment (LIFE), was originally going to be launched earlier this month, but it has been delayed due to safety and technical issues. Currently, the scientists hope to launch the specimens on the Russian Phobos-Grunt spacecraft in 2011, the next time that the orbits of Earth and Mars offer a launch window.

More information: The Planetary Society: LIFE Experiment and FAQ. Via: Wired. © 2009 PhysOrg.com

Citation: Water Bears to Travel to Martian Moon, Test Theory of Transpermia (2009, October 13) retrieved 18 August 2019 from https://phys.org/news/2009-10-martian-moon-theory-transpermia.html
(PhysOrg.com) — Some chemical elements appear much more abundantly in nature than others, which is partly due to how the elements originally formed. Scientists know that the light elements (hydrogen, deuterium, helium, and traces of lithium) were produced by fusion in the early Universe. Today, lithium, beryllium, and boron are constantly being produced in cosmic rays, while the heavier elements (up to iron) are formed by fusion in stars. Elements heavier than iron are formed in supernovae.

Physicists Maxim Pospelov of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, and the University of Victoria in Victoria, British Columbia, along with Josef Pradler, also of the Perimeter Institute, explain in a new study that investigating how chemical elements are produced can lead to a better understanding of what happened during the early Universe. The physicists have specifically investigated how beryllium could be used as a "Big Bang calorimeter" to probe the energy released in the early Universe, and also to serve as a constraint on new physics models. Their study is published in a recent issue of Physical Review Letters.

In their analysis, Pospelov and Pradler investigated what may have happened during Big Bang nucleosynthesis (BBN), a period that started about 3 minutes after the Big Bang and lasted for about 20 minutes. It was during this time that the first elements were produced, with the lightest elements in the greatest abundance. For instance, at that time only one lithium nucleus existed for every 10 billion hydrogen nuclei. After BBN ended, the Universe became too cool to allow any further nuclear fusion reactions to take place. Until now, researchers have thought that beryllium could not have been produced under generic BBN conditions. But here, Pospelov and Pradler have shown that, when an unknown particle X decays under the conditions of BBN, it can release a large amount of energy that can lead to the production of 9Be, the only stable isotope of beryllium.

"Looking at the abundance pattern of the light elements allows us to gain insight into the dynamics of the early Universe, when it was a billion times hotter than it is today and only a few hundred seconds old," Pradler told PhysOrg.com. "In our work we show that any process, such as the decay or annihilation of a relic particle species X, that dumps hadronic energy into this primordial mix sets off a chain of non-thermal nuclear reactions which culminates in the fusion of beryllium – an element otherwise out of reach by primordial standards."

Beryllium, along with lithium, can be observed in metal-deficient stars, which formed from nearly pristine interstellar gas. Scientists can identify the elements by using stellar spectroscopy to detect each element's individual atomic resonance lines. Previous research has found that, in contrast to lithium, the beryllium in these stars is not of primordial origin. Whereas lithium's abundance as a function of stellar metallicity plateaus at low metallicities, there is no plateau for beryllium. Instead, beryllium seems to decrease to smaller and smaller values as stellar metallicity decreases, and thus to more pristine mixtures of the interstellar gas from which the stars formed.

As the scientists explain, what makes beryllium so powerful in these stars is that, unlike lithium, it is not really affected by any stellar dynamics. Whereas lithium is fragile and may have been destroyed in the stars, beryllium is much more robust. For this reason, beryllium could be more useful for constraining nonstandard BBN models.

"Many new particle physics models, including those which are currently searched for at the Large Hadron Collider (LHC) at CERN, predict long-lived massive states X," Pradler said. "As the LHC is pushing the terrestrial energy frontier to search for new physics, these X particles could have copiously been produced in the Big Bang. The conversion of X's rest mass into hadronic energy during its decay can be detected in an elevated beryllium abundance. The more energy is dumped, the higher the Be abundance will be. The isotope acts as a calorimeter."

The formation of 9Be occurs at the end of a chain of transformations, passing through a few light-element isotopes, including 6He, before arriving at the beryllium isotope. When the physicists calculated the efficiency of this chain of transformations, they found that the process could produce a beryllium/hydrogen abundance ratio of 10^-14 (or 1 gram of beryllium per 10 million tonnes of hydrogen).

The scientists hope that future observations of metal-deficient stars may further tighten the limit on beryllium's primordial abundance, and help to strengthen beryllium as a constraint on models of new physics.

"One class of models our study targets are supersymmetric extensions of the Standard Model, in which each ordinary particle gets assigned a 'doppelgänger' state," Pradler said. "These states are typically heavy, and it may well be that one of them has a lifetime such that it decays during or shortly after BBN. Indeed, it is even conceivable that the dark matter itself was produced in such decays. BBN can act as a powerful probe to test new physics beyond the Standard Model, and every model has to pass this cosmological consistency check."

Scientists have proposed a method by which beryllium could have been produced in the first few minutes of the Universe; beryllium is not generally thought to have been produced until much later. Image credit: Alchemist-hp. CC BY-SA, Wikimedia.

More information: Maxim Pospelov and Josef Pradler. "Primordial Beryllium as a Big Bang Calorimeter." Physical Review Letters 106, 121305 (2011). DOI: 10.1103/PhysRevLett.106.121305

Citation: Primordial beryllium could reveal insights into the Big Bang (2011, April 21) retrieved 18 August 2019 from https://phys.org/news/2011-04-primordial-beryllium-reveal-insights-big.html

Copyright 2010 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
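The two abundance figures quoted above are consistent with each other, which is easy to verify: a Be/H ratio of 10^-14 by number, weighted by the approximate atomic masses (9 u for 9Be, 1 u for H), corresponds to about 1 gram of beryllium per 10 million tonnes of hydrogen. A minimal sketch in Python, assuming the quoted ratio counts nuclei rather than mass:

```python
# Cross-check the quoted Be/H abundance figures.
# Assumption: the 1e-14 ratio is a ratio by number of nuclei.
number_ratio = 1e-14      # Be nuclei per H nucleus
m_be, m_h = 9.0, 1.0      # approximate atomic masses in u (9Be, 1H)

mass_ratio = number_ratio * m_be / m_h             # grams of Be per gram of H
grams_h_per_gram_be = 1.0 / mass_ratio             # grams of H per gram of Be
tonnes_h_per_gram_be = grams_h_per_gram_be / 1e6   # 1 tonne = 1e6 grams

print(f"{tonnes_h_per_gram_be:.1e} tonnes of hydrogen per gram of beryllium")
# on the order of 1e7 tonnes, i.e. the article's "10 million tonnes"
```

The mass weighting matters here: without it, a naive reading of 10^-14 would give 10 billion tonnes of hydrogen per gram of beryllium rather than the quoted 10 million.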
(PhysOrg.com) — Most of the invisibility cloaks that have been demonstrated to date conceal objects at frequencies that are not detectable by the human eye. Designing invisibility cloaks that can conceal objects from visible light has been more challenging due to the strict material requirements. But in a new study, researchers have fabricated a carpet cloak that can make objects undetectable across the full visible spectrum.

The researchers, led by Prof. Xiang Zhang at the University of California, Berkeley, and Lawrence Berkeley National Laboratory, have published their study in a recent issue of Nano Letters.

As the researchers explain, most previous invisibility cloaks have used metallic metamaterials for cloaking at microwave frequencies. But at optical frequencies the metal absorbs too much light, leading to significant metallic loss, so the Berkeley group and others have had to design dielectric cloaks for infrared frequencies. More recently, researchers at the University of Birmingham (UK) have experimented with uniaxial crystals as the cloak material, which can enable cloaking at visible frequencies, but only for a certain polarization of light.

In the current study, the researchers used a technique called quasi-conformal mapping (QCM) to conceal an object with a height of 300 nm and a width of 6 µm underneath a reflective "carpet cloak." The carpet itself has the appearance of a smooth optical mirror, so that the object, and the bump the object makes underneath the carpet, are undetectable by visible light.

"The carpet cloak means that you conceal the object under a layer, which we call the carpet, but you see the carpet like a normal mirror, as if it were flat with no bump caused by putting the object underneath," Zhang told PhysOrg.com. "This way, the observer won't recognize that something is concealed underneath."

In order to guide visible light around the concealed object, the researchers had to make light travel at different speeds as it approaches the bump. They achieved this by designing the materials to have a spatially varying refractive index, turning them into metamaterials, whose properties do not appear in nature. The researchers placed a silicon nitride waveguide on a transparent nanoporous silicon oxide substrate that they specially developed to have a much lower refractive index than that of the waveguide. Using nanofabrication techniques, the researchers etched tiny holes into the nitride in a desired pattern, giving the waveguide the cloaking refractive-index profile.

"The concept of the carpet cloak was originally suggested so that you can design a certain pattern for a given size of the bump, and hide an object of arbitrary shape under that," Zhang said. "If you need to make a bigger bump to hide a bigger object, a new hole pattern will be required."

With this refractive-index profile, along with the transparency of both the waveguide and the substrate, the cloak could completely conceal an object by producing a light-beam profile identical to a beam reflected from a flat carpet with no object underneath.

"This device is among the first cloak devices that operate at visible frequencies; the other very recent visible-light cloaks operate on a principle that relies on a certain polarization of light, whereas the quasi-conformal-based principle does not rely on the polarization," Zhang said. "Of course, the waveguide geometry entails different operation for different polarizations, but that is extrinsic to the QCM design."

In addition to cloaking, the new technique provides an important step toward implementing optical transformation structures in the visible range. Using transformation optics (TO), researchers can manipulate light for applications such as powerful microscopes and computers.

"The carpet cloak is an example of a wide family of devices that can be made based on transformation optics," Zhang said. "Besides invisibility, all kinds of optical illusion schemes can be made based on the concept, where the observer receives a different impression when looking at an object. The capability to manipulate light propagation can be used in energy devices, optical computing devices, and beyond, wherever it is desired to have full control of the light path; TO lets us redirect light and re-route it."

When an input beam (black arrow) reflects off (a) a bump without a cloak, the bump causes a perturbation. When the beam reflects off (b) a bump covered by a cloak, the cloak masks the bump, and the reflected beam is reconstructed as if the bump did not exist. (c) Light after reflection from a flat mirror, a bump without a cloak, and a cloaked bump, at three different wavelengths. Image credit: Majid Gharghi, et al. ©2011 American Chemical Society

More information: Majid Gharghi, et al. "A Carpet Cloak for Visible Light." Nano Letters. DOI: 10.1021/nl201189z

Citation: Invisibility carpet cloak can hide objects from visible light (2011, June 15) retrieved 18 August 2019 from https://phys.org/news/2011-06-invisibility-carpet-cloak-visible.html

Copyright 2011 PhysOrg.com. All rights reserved.
(Phys.org) — Researchers studying the cosmos have been stumped by an observation first made by Monique and François Spite of the Paris Observatory some thirty years ago: while studying the halos of older stars, they noted that there should be more lithium-7 in the universe than there appeared to be. Since that time many studies have tried to explain this apparent anomaly, but so far no one has come up with a reasonable explanation. Now, new research has deepened the mystery further by finding that the amount of lithium-7 in the path between us and a very young star aligns with what would have been expected shortly after the Big Bang, without accounting for the new lithium that should have been created since that time. In their paper published in the journal Nature, Christopher Howk and colleagues suggest the discrepancy is troubling because it cannot be explained with normal astrophysics models.

What's really bothering the scientists working on the lithium problem is that it is the only element that doesn't fit with models of how things should have come to exist right after the Big Bang. All known elements occur in the amounts predicted, except for lithium-7; there's just a third as much as theorists think there should be. In trying to understand why, researchers have looked at old stars that surround the Milky Way galaxy, at low-mass bosons called axions, and more recently at binary stars believed to harbor black holes. Unfortunately, such studies have only made the problem worse by suggesting that even more lithium-7 ought to be hanging around somewhere than was predicted earlier.

In this new research the team looked at a single huge young star in the Small Magellanic Cloud, or more precisely, at the measured spectrum of the gas and dust through which light must travel to get from there to here. They found that the amount of lithium-7 is consistent with theories of how much of the element there should have been shortly after the Big Bang, which is unsettling, because scientists know that more of it should have been created between then and now. Thus, these new results only add to the mystery of where all the rest of it is, or worse, why it wasn't created in the first place as models suggest.

Estimates of the lithium abundance in the SMC interstellar medium and in other environments. Credit: Nature, 489, 121–123.

More information: Observation of interstellar lithium in the low-metallicity Small Magellanic Cloud, Nature 489, 121–123 (6 September 2012). DOI: 10.1038/nature11407

Abstract
The primordial abundances of light elements produced in the standard theory of Big Bang nucleosynthesis (BBN) depend only on the cosmic ratio of baryons to photons, a quantity inferred from observations of the microwave background. The predicted primordial 7Li abundance is four times that measured in the atmospheres of Galactic halo stars. This discrepancy could be caused by modification of surface lithium abundances during the stars' lifetimes or by physics beyond the Standard Model that affects early nucleosynthesis. The lithium abundance of low-metallicity gas provides an alternative constraint on the primordial abundance and cosmic evolution of lithium that is not susceptible to the in situ modifications that may affect stellar atmospheres. Here we report observations of interstellar 7Li in the low-metallicity gas of the Small Magellanic Cloud, a nearby galaxy with a quarter the Sun's metallicity. The present-day 7Li abundance of the Small Magellanic Cloud is nearly equal to the BBN predictions, severely constraining the amount of possible subsequent enrichment of the gas by stellar and cosmic-ray nucleosynthesis. Our measurements can be reconciled with standard BBN with an extremely fine-tuned depletion of stellar Li with metallicity. They are also consistent with non-standard BBN.

Citation: Mystery over apparent dearth of lithium 7 in universe deepens (2012, September 6) retrieved 18 August 2019 from https://phys.org/news/2012-09-mystery-apparent-dearth-lithium-universe.html

© 2012 Phys.org
(Phys.org) — A research team made up of members from France, Canada and the US has been studying the manuscript submission and rejection process for biologically based papers submitted to journals for publication. As they describe in their paper published in the journal Science, they have found that papers resubmitted to and accepted by one journal after being rejected by another receive more citations than other papers.

To find out more about the submission, rejection and resubmission process, the team looked at biologically oriented submissions to 923 science journals during the years 2006 to 2008. Data were obtained by sending emails to more than 200,000 people listed as corresponding authors on published papers. Each was asked whether the paper they'd published had been resubmitted to the journal that published it, and if so, which journal had previously rejected it. The team received 80,000 response emails.

Calculating statistics from the respondents' answers, the team found that approximately 75 percent of published papers had not been resubmitted, a sign that work also goes into targeting submissions appropriately. For the 25 percent of papers that had been resubmitted after rejection, the researchers found that most followed a step-down process, whereby authors first submitted to high-impact journals and then to lower-impact ones, where they met with eventual success. It was in this group that the team found what they described as the biggest surprise: rejected papers, once published elsewhere, received more citations relative to others in the same journal than did those accepted on first submission.

The researchers suggest that the higher citation rate for resubmitted papers is likely due to the reviews offered by peers, referees and editors, which together result in a better paper. This indicates, they say, that perhaps the quality of papers published in scientific journals might be improved overall if more papers from first-time submitters were rejected. Another possibility is that papers that go against the status quo tend to get rejected more often than those that toe the line; such papers would cause a bigger stir, and hence draw more citations, when eventually published.

Submission network picture. Credit: (c) Science, DOI: 10.1126/science.1227833

More information: Flows of Research Manuscripts Among Scientific Journals Reveal Hidden Submission Patterns, Science, DOI: 10.1126/science.1227833

Abstract
The study of science-making is a growing discipline that builds largely on online publication and citation databases, while prepublication processes remain hidden. Here, we report results from a large-scale survey of the submission process, covering 923 scientific journals from the biological sciences in years 2006–2008. Manuscript flows among journals revealed a modular submission network, with high-impact journals preferentially attracting submissions. However, about 75% of published articles were submitted first to the journal that would publish them, and high-impact journals published proportionally more articles that had been resubmitted from another journal. Submission history affected postpublication impact: Resubmissions from other journals received significantly more citations than first-intent submissions, and resubmissions between different journal communities received significantly fewer citations.

Citation: Researchers reveal hidden patterns in flow of manuscript submissions (2012, October 12) retrieved 18 August 2019 from https://phys.org/news/2012-10-reveal-hidden-patterns-manuscript-submissions.html

© 2012 Phys.org
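The survey's headline numbers above are easy to reproduce from the reported figures (80,000 responses out of more than 200,000 authors contacted, with roughly a 75%/25% split between first-intent and resubmitted papers). A minimal sketch in Python, treating the reported percentages as exact:

```python
# Reproduce the headline counts from the submission survey,
# using only figures reported in the article.
responses = 80_000           # survey responses received
contacted = 200_000          # "more than 200,000" authors emailed (a floor)
first_intent_share = 0.75    # published where first submitted

first_intent = int(responses * first_intent_share)
resubmitted = responses - first_intent   # rejected elsewhere before acceptance
response_rate = responses / contacted    # an upper bound, since 200k is a floor

print(f"first-intent: {first_intent}, resubmitted: {resubmitted}")
print(f"response rate: at most {response_rate:.0%}")
```

So about 60,000 papers went straight to the journal that published them, about 20,000 took the step-down route, and the survey's response rate was at most around 40 percent.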