Tuesday, February 19, 2013

Meat and germs: Cook meat properly, as bacteria in meat show growing drug resistance

Bacteria in meat show growing drug resistance, FDA says

By Robert Roos
Feb 7, 2013 (CIDRAP News) – An annual report released by the Food and Drug Administration (FDA) this week shows that antibiotic resistance in bacteria found in retail meat and poultry samples is continuing to increase, though not uniformly.

For example, almost 45% of Salmonella isolates found on retail chicken samples were resistant to multiple antimicrobial classes, up slightly from the 2010 level, says the 2011 Retail Meat Report of the National Antimicrobial Resistance Monitoring System (NARMS). Also, close to half of Campylobacter isolates in chicken were resistant to tetracyclines.

The report also shows that Campylobacter contamination in general (both susceptible and resistant isolates) increased in chicken and ground turkey samples in 2011, while Salmonella detections were down slightly for both items.

The NARMS retail meat surveillance program is a joint effort of the FDA, the Centers for Disease Control and Prevention (CDC), and health departments in 11 states: California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New Mexico, New York, Oregon, Tennessee, and Pennsylvania. Its goals include providing information to promote steps for reducing resistance in foodborne bacteria.

In 2011, each health department bought about 40 retail samples each month—10 each of chicken, ground turkey, ground beef, and pork chops. All the state labs cultured meat and poultry samples for Salmonella, but only poultry samples were cultured for Campylobacter. Four of the states also cultured samples for Enterococcus and Escherichia coli.

The states sent their bacterial isolates to the FDA Center for Veterinary Medicine for identification of serotypes, antimicrobial susceptibility testing, and genetic analysis, the report says.

Resistant Salmonella
The testing revealed that 44.9% of Salmonella isolates in chicken were resistant to at least three antimicrobial classes in 2011, compared with 43.3% in 2010. In ground turkey, 50.3% of isolates showed this level of resistance, up from 33.7% the year before. In addition, 27% of chicken isolates showed resistance to at least five drug classes, which was down from 29% in 2010.

The report also says the percentage of Salmonella isolates with no detected resistance declined in 2011.

The researchers found continuing increases in Salmonella resistance to two specific drug classes. Between 2002 and 2011, resistance to third-generation cephalosporins in chicken isolates climbed from 10% to 33.5%, while such resistance in ground turkey rose from 8.1% to 22.4%. Both increases were significant (P <.05).

Significant increases over that same period were seen for Salmonella resistance to ampicillin: chicken isolates, 16.7% to 40.5%; ground turkey isolates, 16.2% to 58.4%.

On the other hand, all Salmonella isolates were susceptible to nalidixic acid, a member of the quinolone class, the report says.

Campylobacter resistance
More than 90% of Campylobacter isolates come from chicken samples each year, with the rest from ground turkey, the report notes. It says macrolide and fluoroquinolone drugs are used to treat Campylobacter infections. Fluoroquinolone use in poultry production was banned in 2005.

Macrolide resistance in chicken samples remained low in 2011, at 4.3% for Campylobacter coli and 0.5% for Campylobacter jejuni, the testing showed.

C coli resistance to ciprofloxacin, a fluoroquinolone, peaked at 29.1% in 2005 and has dropped since then, reaching 18.1% in 2011, the report says. However, C jejuni resistance to the drug has continued an upward trend, from 15.2% in 2002 to 22.4% in 2011.

In addition, tetracycline resistance in both Campylobacter species jumped from 2010 to 2011: from 36.3% to 48.4% for C jejuni and from 39.2% to 49.1% for C coli.

Further, gentamicin resistance in C coli reached 18.1% in 2011, a big increase from the 0.7% level seen in 2007 when it was first detected.

On the brighter side, the report says multidrug resistance is rare in Campylobacter: Only 9 of 634 isolates from poultry were resistant to three or more drug classes in 2011.

The report also profiles resistance in Enterococcus species and Escherichia coli found in meat and poultry samples. Among other things, it notes that no Enterococcus isolates were resistant to vancomycin or linezolid, two drugs that "are critically important in human medicine but are not used in food animal production."

General prevalence
As for the overall prevalence of contamination (susceptible and resistant strains), the project showed that 45.7% of chicken samples in 2011 contained Campylobacter, up from 38.3% in 2010. For ground turkey, the 2011 figure was 2.3%, up from 1.0% in 2010.

For Salmonella, the general prevalence in chicken was 12.0%, down from 13.0% the year before, while the ground turkey figure was 12.3%, down from 15.3% a year earlier. Salmonella was found in less than 1% of ground beef samples both years. For pork chops, the 2011 number was 2.1%, up from 1.5% in 2010.

For E coli (most strains of which are nonpathogenic), prevalence numbers were lower in 2011 than 2010 but remained fairly high: chicken, 71.0%; ground turkey, 76.7%; ground beef, 44.8%; and pork chops, 30.4%.

Congress member reacts
Rep. Louise Slaughter, D-N.Y., a food safety advocate, called the FDA's findings on resistance in meat samples "alarming." In a statement, she cited the high level of ampicillin resistance found in bacteria in ground turkey and the tetracycline resistance seen in Campylobacter from poultry samples.

"The threat of antibiotic-resistant disease is real, it is growing and those most at risk are our seniors and children," Slaughter said. "We can help stop this threat by drastically reducing the overuse of antibiotics in our food supply, and Congress should act swiftly to do so today."

New guidelines for management of thyroid dysfunction in pregnancy launched by the Indian Thyroid Society


Bangalore, Feb 18: The Indian Thyroid Society (ITS) today launched three guidelines, for the management of thyroid dysfunction in pregnancy, dyslipidemia, and depression, at its 10th Annual Conference, ‘ITSCON – 2013’. The pregnancy guideline aims to safeguard the health of mother and child, while the depression and dyslipidemia guidelines aim to reduce the co-morbidities associated with thyroid disorders. Thyroid disorders in India are characterized by high prevalence (approximately 11% of the adult population), minimal diagnosis, low awareness and low involvement of doctors in treatment.

The guidelines were developed by Elsevier, a global provider of scientific, technical and medical information, and endorsed by the Indian Thyroid Society, Endocrine Society of India [ESI], Federation of Obstetric and Gynaecological Societies of India [FOGSI] and The Association of Physicians of India [API]. Abbott provided financial assistance for the development of these guidelines.

On the launch of the ITS guidelines, Dr. R. V. Jayakumar, President, Indian Thyroid Society [ITS], and Professor of Endocrinology, AIMS, Cochin, said, “Conditions such as depression, cardiovascular disorders, high cholesterol, obesity, osteoporosis, infertility and miscarriages are linked to thyroid disorders, and these are on the rise in India. The three independent guidelines for the screening and management of thyroid dysfunction will support the medical fraternity in diagnosis and treatment. Timely diagnosis of thyroid disorders in pregnant women is important for a healthy pregnancy and a healthy child. In addition, the guidelines for dyslipidemia and depression offer recommendations to minimize the risk of complications.”
Dr. Rakesh Sahay, Professor of Endocrinology, Osmania Medical College, Hyderabad, said, “Thyroid disorders are among the most under-diagnosed medical conditions and are often referred to as the hidden disease. The new guidelines for thyroid disorders associated with pregnancy, dyslipidemia and depression will aid doctors like us in educating the masses about the importance of undergoing a TSH test for correct diagnosis. Timely treatment of thyroid disorders is the key to preventing health problems.”
The Thyroid Dysfunction and Pregnancy Guidelines recommend screening pregnant women for hypothyroidism at the first antenatal visit by measuring TSH levels. If overt hypothyroidism is diagnosed, expectant mothers should be treated with a full replacement dose of thyroxine to normalize thyroid function as rapidly as possible. It is also important to understand that thyroid dysfunction by itself is not an indication for termination of pregnancy.

As per the Thyroid Dysfunction and Dyslipidemia Guidelines, overt hypothyroidism is associated with a risk of cardiovascular disease because it causes increased LDL cholesterol levels and hypertension. Doctors are therefore advised to screen patients with dyslipidemia for abnormal thyroid levels and prescribe treatment accordingly.

According to the Thyroid Dysfunction and Depression Guidelines, one of the co-morbidities associated with thyroid disorders is depression. Doctors treating patients for depression should refer them for a TSH test to detect hypothyroidism. This will help ensure correct diagnosis and treatment and prevent further damage.
A thyroid disorder is a medical condition that impairs the normal functioning of the thyroid gland, causing abnormal hormone production that leads to hyperthyroidism or hypothyroidism. Multiple factors, such as heredity, environment and diet, can trigger thyroid dysfunction. Thyroid disorders are commonly diagnosed between the ages of 20 and 40 years, and research has shown that they are detected more often in women than in men.

Speaking of thyroid disorders in women during pregnancy, Dr. Hema Divakar, President, Federation of Obstetric and Gynaecological Societies of India [FOGSI], Bangalore, said, “Hypothyroidism is emerging as one of the most common endocrine problems during pregnancy and often goes undetected.

It increases the risk of miscarriage, stillbirth, premature birth and placental abnormalities that adversely affect the overall development of the foetus. In the best interest of the mother and baby, we encourage regular screening for thyroid disorders amongst pregnant women. The guidelines recommend screening at the first antenatal visit by measuring TSH levels.”

Symptoms such as anxiety, mood swings and poor concentration are often dismissed as signs of stress, but they can be triggered by abnormal levels of thyroid hormone, which can push people into depression.

According to Dr Sarita Bajaj, President, Endocrine Society of India (ESI), Allahabad, “Thyroid hormones have a tremendous effect on body processes and can even impact cognitive function. There is little awareness that depression is a co-morbidity associated with hypothyroidism. All patients with depression should preferably be screened with thyroid function tests and be appropriately treated with thyroxine as judged by the physician.”
Dr. Shashank Joshi, President-Elect, The Association of Physicians of India [API], Mumbai, says, "Many hypothyroid patients have underlying lipid abnormalities which can be controlled by simple thyroxine therapy. Hypothyroidism needs lifelong thyroxine therapy, and if controlled well, patients can lead a normal life."

Hypothyroidism also leads to a co-morbid condition called dyslipidemia, marked by increases in serum total cholesterol, low-density lipoprotein (LDL), apolipoprotein B and lipoprotein(a) levels, and possibly triglyceride levels. Dyslipidemia puts a patient at increased risk of developing cardiovascular disease, atherosclerosis and coronary artery disease.

The Chairman of the Organising Committee, Dr. K. M. Prasanna Kumar, confirmed that close to 500 eminent speakers and key opinion leaders from across India were present at the ITSCON-2013 conference. The speakers highlighted various disorders arising from thyroid dysfunction, the importance of timely screening, and recommended treatment to prevent further complications. Till 2012, ITS had screened close to 12 lakh women for thyroid disorders at various diagnostic and education camps throughout India.

Campylobacter infections in Alaska linked to drinking of raw milk

Raw milk suspected in Campylobacter infections in Alaska

At least four people in Alaska's Kenai Peninsula recently suffered Campylobacter infections after drinking raw milk, the Alaska Division of Public Health (DPH) said in a Feb 15 health advisory.

The four people were infected with Campylobacter isolates that were matched by pulsed-field gel electrophoresis. In addition, at least one person with a probable infection also reported drinking raw milk, and an infant in close contact with a confirmed case-patient has a suspected case, the statement said.

The strain identified in the cases has not been seen in Alaska before, it said.
Feb 15 Alaska DPH notice

Assocham study reveals the impact of trade union strike on India's GDP

GDP to take Rs 15,000-20,000 crore hit from strike: ASSOCHAM

While sharing some of their concerns, such as rising prices, ASSOCHAM today appealed to the central trade unions to call off their two-day strike, as the country’s economy will take a big hit of Rs 15,000-20,000 crore from the nationwide disruption in economic activity.
“The national economy, battling a slowdown, can ill afford this situation. In fact, the strike would aggravate the price situation because of disruption in the supply line of essential commodities,” said Mr. Rajkumar Dhoot, President, ASSOCHAM.
Mr. Dhoot further said the strike would mostly cripple the services sector, such as banking, insurance and transport, besides industrial production. Even agriculture would be affected, as the movement of vegetables and other highly perishable items would be disrupted.
ASSOCHAM has estimated the national loss based on an erosion of about 30-40 per cent of the country’s daily Gross Domestic Product (GDP) over the two days. As per the advance estimates of the CSO, the national GDP for the current financial year is projected to be about Rs 95 lakh crore, which works out to Rs 26,000 crore per day and Rs 52,000 crore for two days. Of this, the strike would take its toll on at least 30-40 per cent, or Rs 15,000-20,000 crore.
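The chamber's arithmetic can be checked with a short back-of-the-envelope calculation (figures as reported; dividing the annual GDP by 365 calendar days is an assumption about ASSOCHAM's method):

```python
# Back-of-the-envelope check of ASSOCHAM's strike-loss estimate.
# All figures in Rs crore; 1 lakh crore = 100,000 crore.
annual_gdp = 95 * 100_000          # projected GDP, Rs 95 lakh crore
daily_gdp = annual_gdp / 365       # ~Rs 26,000 crore per day
two_day_gdp = 2 * daily_gdp        # ~Rs 52,000 crore for two days

# The strike is assumed to erase 30-40 per cent of output on strike days.
loss_low = 0.30 * two_day_gdp      # ~Rs 15,600 crore
loss_high = 0.40 * two_day_gdp     # ~Rs 20,800 crore

print(round(daily_gdp), round(loss_low), round(loss_high))
```

The exact products (about Rs 15,600-20,800 crore) round to the Rs 15,000-20,000 crore range quoted in the statement.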
“Given the nature of the strike and the involvement of all five major central trade unions, it is going to affect largely the services sector, including banking, financial services, tourism, transportation etc., which are the major contributors to the country’s GDP,” added Mr. Dhoot.

States like West Bengal, Kerala, Maharashtra, Gujarat, Tamil Nadu, Delhi, Haryana, Karnataka and parts of Uttar Pradesh are likely to be affected significantly. Besides, banking operations including the cheque clearances and some segments of the financial markets would take a hit. Moreover, disruption in railways and other public transportation in major cities would hit the movement of the workforce and the cargo operations at the ports.
Cargo operations at both airports and ports are likely to be affected, the chamber apprehends.
“Our conservative estimates show that at least 30-40 per cent of the daily GDP would take a hit. For two days, it would be something like Rs 15,000-20,000 crore,” reveals the ASSOCHAM Economic Research (AER) department.
Expressing concerns over the impending strike, Mr. Dhoot said it would not be in the interest of the country’s economy to stop work in crucial sectors. “While we share some of the concerns, like rising prices, the solution lies in working together to ensure that the situation is brought under control by raising production and pumping up supply. The strike, in fact, would put further pressure on the price situation, as the prices of vegetables and the like would immediately go up because of the disruption.”

He said GDP growth is projected to be at a decade low of about five per cent, several sectors like manufacturing are operating at a much lower scale, and work disruption would make a big dent in economic activity.
“Besides, the services sector, which has remained the backbone of the economy, has also started slowing down,” he said.

As per the chamber’s estimates, despite the global slowdown and difficult domestic conditions, Indian industry has not really resorted to job trimming and has generally worked in partnership with the labour force.
“The labour force is a very important stakeholder in the national activity. In fact, it is the human resource which is India’s advantage vis-à-vis several other high cost economies. Thus, welfare of the workforce is on top of the priorities of the industry and ASSOCHAM is fully committed to ensuring their welfare,” added Mr Dhoot.
He appealed to the leaders of all the central unions, including CITU, AITUC, INTUC and BMS, to engage with the government and find amicable solutions to the issues they have raised. The industry shares some of the concerns, like rising prices, but then “we need to work together to resolve the issue and ensure better supplies, which is possible through higher investment and production. The workers’ role in this area too is of paramount importance,” he said.
Mr. Dhoot also appealed to the government to immediately engage with the labour unions to find amicable solutions to the issues raised by them.

Tuesday, February 5, 2013

More about the breast pump and hygiene

Breast Pump Basics

  • Breast shield: Cone-shaped cup that fits over the nipple and surrounding area.
  • Pump: Creates the gentle vacuum that expresses milk. The pump may be attached to the breast shield or connected to it by plastic tubing.
  • Milk container: Detachable container that fits below the breast shield and collects milk as it is pumped.

These days, many new mothers return to the workplace with a briefcase in one hand—and a breast pump kit in the other.

For those moms working outside the home who are breastfeeding their babies (and those who travel or for other reasons can’t be with their child throughout the day), using a breast pump to “express” (extract) their milk is a must.

The Food and Drug Administration (FDA) oversees the safety and effectiveness of these medical devices.

New mothers may have a host of questions about choosing a breast pump. What type of breast pump should they get? How do they decide ahead of time which pump will fit in best with their daily routines? Are pumps sold “used” safe?
Choosing the Right Pump for You

Kathryn S. Daws-Kopp, an electrical engineer at FDA, explains that all breast pumps consist of a few basic parts: a breast shield that fits over the nipple, a pump that creates a vacuum to express the milk, and a detachable container for collecting the milk.

There are three basic kinds of pump: manual, battery-powered and electric. Mothers can opt for double pumps, which extract milk from both breasts at the same time, or single, which extract milk from one breast at a time.

Daws-Kopp, who reviews breast pumps and other devices for quality and safety, suggests that mothers talk to a lactation consultant, whose expertise is in breastfeeding, or other health care professional about the type of breast pump that will best fit their needs. Questions for new moms to keep in mind include:
  • How do I plan to use the pump? Will I pump in addition to breastfeeding? Or will I just pump and store the milk?
  • Where will I use the pump? At work? When I’m traveling?
  • Do I need a pump that’s easy to transport? If it’s electric, will I have access to an outlet?
  • Does the breast shield fit me? If not, will the manufacturer let me exchange it?

Should You Buy or Rent?

There’s also the decision of whether to buy or rent a breast pump. Many hospitals, lactation consultants and specialty medical supply stores rent breast pumps for use by multiple users, Daws-Kopp notes.

These pumps are designed to decrease the risk of spreading contamination from one user to the next, she says, and each renter needs to buy a new accessories kit that includes breast shields and tubing.

“Sometimes these pumps are labeled ‘hospital grade,’” says Daws-Kopp. “But that term is not one FDA recognizes, and there is no consistent definition. Consumers need to know it doesn’t mean the pump is safe or hygienic.”

Daws-Kopp adds that different companies may mean different things when they label a pump with this term, and that FDA encourages manufacturers to instead use the terms “multiple user” and “single user” in their labeling. “If you don’t know for sure whether a pump is meant for a single user or multiple users, it’s safer to just not get it,” she says.

The same precaution should be taken for “used” or second-hand pumps.

Even if a used pump looks really clean, says Michael Cummings, M.D., an obstetrician-gynecologist at FDA, potentially infectious particles may survive in the breast pump and/or its accessories for a surprisingly long time and cause disease in the next baby.
Keeping It Clean

According to FDA’s recently released website on breast pumps, the first place to look for information on keeping the pump clean is in the instructions for use. In general, though, the steps for cleaning include:
  • Rinse each piece that comes into contact with breast milk in cool water as soon as possible after pumping.
  • Wash each piece separately using liquid dishwashing soap and plenty of warm water.
  • Rinse each piece thoroughly with hot water for 10-15 seconds.
  • Place the pieces on a clean paper towel or in a clean drying rack and allow them to air dry.

If you are renting a multiple user device, ask the person providing the pump to make sure that all components, such as internal tubing, have been cleaned, disinfected, and sterilized according to the manufacturer’s specifications.

Cummings notes that there are many benefits to both child and mother from breastfeeding. “Human milk is recommended as the best and exclusive nutrient source for feeding infants for the first six months, and should be continued with the addition of solid foods after six months, ideally until the child is a year of age,” he says.

The benefits are both short- and long-term. In the short-term, babies can benefit from improved gastrointestinal function and development, and fewer respiratory and urinary tract infections. In the long-term, children who have been breast fed may be less obese and, as adults, have less cardiovascular disease, diabetes, inflammatory bowel disease, allergies, and even some cancers.

Cummings adds that moms and their families benefit by the bonding experience and economically as well, since a reduction in acute and chronic diseases in the baby saves money.

For women considering this option, FDA’s website offers resources and information on breast pumps and breastfeeding. These include information on the selection and care of the pumps, in addition to describing signs of an infection or injury related to their use.

(This article appears on FDA's Consumer Updates page, which features the latest on all FDA-regulated products. January 14, 2013)

All about the Kyasanur Forest Disease

Kyasanur Forest Disease, India, 2011–2012

By Gudadappa S. Kasabi and others

To determine the cause of the recent upsurge in Kyasanur Forest disease, we investigated the outbreak that occurred during December 2011–March 2012 in India. Male patients >14 years of age were most commonly affected. Although vaccination is the key strategy for preventing disease, vaccine for boosters was unavailable during 2011, which might be a reason for the increased cases.

Kyasanur Forest disease (KFD), a tick-borne viral disease, was first recognized in 1957 in Shimoga District, India, when an outbreak in monkeys in Kyasanur Forest was followed by an outbreak of hemorrhagic febrile illness in humans. KFD is unique to 5 districts (Shimoga, Chikkamagalore, Uttara Kannada, Dakshina Kannada, and Udupi) of Karnataka State and occurs as seasonal outbreaks during January–June.

Since 1990, vaccination campaigns using formalin-inactivated tissue-culture vaccine have been conducted in the districts to which KFD is endemic (Directorate of Health and Family Welfare Services, Government of Karnataka, Manual on Kyasanur Forest disease. 2005, unpub. data). Earlier studies showed vaccine efficacy of 79.3% with 1 dose and 93.5% with 2 doses. The vaccination program identifies villages reporting KFD activity (laboratory-confirmed cases in monkeys and/or humans, or infected ticks), and all villages within 5 km of the affected location are targeted for vaccination. Two doses are administered to persons 7–65 years of age at 1-month intervals. Because the immunity conferred by vaccination is short-lived, booster doses are administered at 6–9-month intervals consecutively for 5 years after the last reported KFD activity in the area (Directorate of Health and Family Welfare Services, Government of Karnataka, Manual on Kyasanur Forest disease. 2005, unpub. data). If KFD activity is reported where vaccination has been administered during pretransmission seasons, additional vaccination campaigns are conducted.
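As a reminder of how efficacy figures like the 79.3% and 93.5% above are derived, vaccine efficacy compares attack rates between unvaccinated and vaccinated groups. The attack rates in this sketch are hypothetical, chosen only to illustrate the formula, not taken from the cited studies:

```python
def vaccine_efficacy(ar_unvaccinated: float, ar_vaccinated: float) -> float:
    """Vaccine efficacy (%) = (ARU - ARV) / ARU * 100, where ARU and ARV
    are the attack rates among unvaccinated and vaccinated persons."""
    return (ar_unvaccinated - ar_vaccinated) / ar_unvaccinated * 100

# Hypothetical attack rates: 10% among the unvaccinated versus
# 2.07% after one dose and 0.65% after two doses.
print(round(vaccine_efficacy(0.10, 0.0207), 1))  # 79.3
print(round(vaccine_efficacy(0.10, 0.0065), 1))  # 93.5
```

The formula is simply the proportional reduction in disease risk attributable to vaccination.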

Thirthahalli Taluka in the Shimoga District, where vaccination campaigns were ongoing, reported 0 cases of KFD during 2007–2010. A vaccination campaign was conducted in the area during October 2010. Because 11 cases were reported from the Thirthahalli Taluka in March 2011, vaccination campaigns were conducted during April–May 2011; however, no booster doses were administered in the affected areas during October–November 2011 because the vaccine was not available. Suspected KFD cases were reported in the area again in December 2011. We investigated this cluster to 1) confirm the etiology, 2) identify risk factors, and 3) propose recommendations for control.

The Study

We defined a suspected KFD case as sudden onset of fever, headache, and myalgia among residents of Shimoga during December 2011–March 2012 (Directorate of Health and Family Welfare Services, Government of Karnataka, Manual on Kyasanur Forest disease, 2005, unpub. data). Health workers conducted door-to-door searches to identify suspected case-patients within 5 km of villages that reported monkey deaths or laboratory-confirmed KFD cases in humans since December 2011. We established stimulated passive surveillance in health facilities in the district to identify suspected case-patients. Health workers collected information about sociodemographic profile, date of onset, and place of residence from all suspected case-patients. We recorded clinical history and vaccination details of laboratory-confirmed case-patients. We analyzed the data to describe the disease by time, place, and person. The investigation was exempted from ethical committee clearance because it was part of the state-level public health response to the outbreak.

Blood specimens were collected from all suspected case-patients. We tested for KFD virus by using nested reverse transcription PCR (RT-PCR) and Taqman-based RT-PCR at the National Institute of Virology (Pune, India) and/or intracerebral injection of the serum into suckling mice at the Virus Diagnostic Laboratory, Shimoga.

We conducted a matched case–control study to identify risk factors for the illness. Persons with laboratory-confirmed infection who were admitted to health facilities were considered case-patients, and healthy persons were used as controls. We recruited 2 controls per case-patient (total 51 cases, 102 controls). Case-patients and controls were matched by age group (±5 years), sex, and locality. We interviewed participants to collect information about any recent exposure to the forest and the number of doses of KFD vaccine received in 2011. We conducted conditional logistic regression analysis by using Epi Info software (Centers for Disease Control and Prevention, Atlanta, GA, USA) to identify risk factors. All risk factors evaluated were included in the logistic regression model.

During December 2011–March 2012, we identified 215 suspected case-patients from 80 villages (total population 22,201) in Shimoga (attack rate 9.7 cases/1,000 persons). Of these, 61 (28%) cases were laboratory confirmed (57 by RT-PCR; 4 by suckling mice intracerebral inoculation). Most (92%) laboratory-confirmed case-patients were >14 years of age, and 70% were male. The cases began occurring in the last week of December 2011, peaked during the first 2 weeks of February, and then declined gradually. Of the 215 suspected cases, 166 (77%) occurred in 4 primary health center areas of Thirthahalli Taluka.
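The surveillance figures above are internally consistent, as a quick check of the arithmetic shows (counts taken directly from the report):

```python
suspected = 215          # suspected case-patients identified
population = 22_201      # total population of the 80 affected villages
confirmed = 57 + 4       # RT-PCR plus suckling-mouse confirmations

attack_rate = suspected / population * 1000   # cases per 1,000 persons
confirmed_pct = confirmed / suspected * 100   # share laboratory confirmed

print(round(attack_rate, 1), round(confirmed_pct))  # 9.7 28
```

Both values match the attack rate (9.7/1,000) and the confirmation proportion (28%) reported in the text.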

Besides fever and myalgia, common clinical manifestations among the 61 laboratory-confirmed case-patients included bleeding (38 [63%] persons), vomiting (28 [46%]), and abdominal pain (26 [42%]). The hemorrhagic manifestations included conjunctival congestion (30 [49%]), hematemesis (5 [8%]), epistaxis (1 [2%]), hematuria (1 [2%]), and rectal bleeding (1 [2%]). One patient died (case-fatality rate 0.5%). Of the 61 laboratory-confirmed case-patients, 20 (33%) had received 2 doses of KFD vaccine, and 2 (3%) received 1 dose; 39 (64%) did not receive any vaccination during April–May 2011. Twelve case-patients were housewives or students; the rest reported multiple occupations requiring frequent visits to the forest, such as cultivator, dry leaf gatherer, agriculture laborer, and cattle grazer.

Behavioral factors such as handling cattle (adjusted odds ratio [aOR] 5.1, 95% CI 1.3–20.4) and frequent visits to the forest for livelihood (aOR 4.8, 95% CI 1.2–20.3), as well as piles of dry leaves within the compound of the house (aOR 4.1, 95% CI 1.3–12.3), were independently associated with illness. Of the 51 case-patients, 20 had received 2 doses of vaccine and 2 had received 1 dose. The odds of developing illness did not differ significantly between nonvaccinated case-patients and case-patients who received 2 doses.


Vaccination is the key strategy for preventing KFD in Karnataka. However, during 2011, a booster vaccination campaign was not conducted in the district because of vaccine unavailability, which might be a reason for the upsurge of KFD cases during 2012. Two doses of the vaccine given during April–May 2011 did not confer adequate protection against the disease during December 2011–March 2012, suggesting the possibility of short-lived immunity conferred by 2 doses of vaccine and the need for periodic boosters.

In the affected areas, local villagers stay in and around the forest area, frequently visit the forest for their livelihood, and get infected through tick bites. We identified certain risk factors for the illness, including frequent visits to the forest, handling of cattle, and piles of dry leaves within the compounds. The higher attack rates for male case-patients aged >14 years during this outbreak are consistent with their frequent exposure to the forest. Health authorities advise use of tick repellent; however, it was infrequently used in the area. Educating the community to wear long-sleeved clothing might help reduce exposure to ticks.

Although the transmission cycle of KFD virus is well documented, its control remains challenging. Measures to minimize the human–tick interface are less likely to succeed considering the forest ecosystem and the dependence of local villagers on it. Control of ticks in the forest is far from easy, but health authorities need to continue educating villagers about using tick repellent before visiting the forest, especially during spring and summer, and ensure distribution of tick repellents to them. Health authorities must ensure that vaccination campaigns are initiated on time and completed before November every year. More epidemiologic studies are needed to evaluate the long-term protection offered by booster doses of vaccine. Molecular studies also are needed to understand the phylogenetic relationships of the past and contemporary strains of the virus and to identify possible sources and origins of outbreak strains.

(Dr Kasabi is a senior medical officer with the Department of Health and Family Welfare, Government of Karnataka, in Shimoga District. He conducted this outbreak investigation as a part of a Master of Public Health (Epidemiology and Health Systems) course at the National Institute of Epidemiology, Chennai. His research interests include health system research and reemerging infectious diseases.)

History of deadly pandemic infections - plague, influenza A

Lessons from the History of Quarantine, from Plague to Influenza A

By Prof Eugenia Tognotti

In the new millennium, the centuries-old strategy of quarantine is becoming a powerful component of the public health response to emerging and reemerging infectious diseases. During the 2003 pandemic of severe acute respiratory syndrome, the use of quarantine, border controls, contact tracing, and surveillance proved effective in containing the global threat in just over 3 months. For centuries, these practices have been the cornerstone of organized responses to infectious disease outbreaks. However, the use of quarantine and other measures for controlling epidemic diseases has always been controversial because such strategies raise political, ethical, and socioeconomic issues and require a careful balance between public interest and individual rights. In a globalized world that is becoming ever more vulnerable to communicable diseases, a historical perspective can help clarify the use and implications of a still-valid public health strategy.
The risk for deadly infectious diseases with pandemic potential (e.g., severe acute respiratory syndrome [SARS]) is increasing worldwide, as is the risk for resurgence of long-standing infectious diseases (e.g., tuberculosis) and for acts of biological terrorism. To lessen the risk from these new and resurging threats to public health, authorities are again using quarantine as a strategy for limiting the spread of communicable diseases. The history of quarantine—not in its narrower sense, but in the larger sense of restraining the movement of persons or goods on land or sea because of a contagious disease—has not been given much attention by historians of public health. Yet, a historical perspective of quarantine can contribute to a better understanding of its applications and can help trace the long roots of stigma and prejudice from the time of the Black Death and early outbreaks of cholera to the 1918 influenza pandemic and to the first influenza pandemic of the twenty-first century, the 2009 influenza A(H1N1)pdm09 outbreak.

Quarantine (from the Italian “quaranta,” meaning 40) was adopted as an obligatory means of separating persons, animals, and goods that may have been exposed to a contagious disease. Since the fourteenth century, quarantine has been the cornerstone of a coordinated disease-control strategy, including isolation, sanitary cordons, bills of health issued to ships, fumigation, disinfection, and regulation of groups of persons who were believed to be responsible for spreading the infection.


Organized institutional responses to disease control began during the plague epidemic of 1347–1352. The plague was initially spread by sailors, rats, and cargo arriving in Sicily from the eastern Mediterranean; it quickly spread throughout Italy, decimating the populations of powerful city-states like Florence, Venice, and Genoa. The pestilence then moved from ports in Italy to ports in France and Spain. From northeastern Italy, the plague crossed the Alps and affected populations in Austria and central Europe. Toward the end of the fourteenth century, the epidemic had abated but not disappeared; outbreaks of pneumonic and septicemic plague occurred in different cities during the next 350 years.

Medicine was impotent against plague; the only way to escape infection was to avoid contact with infected persons and contaminated objects. Thus, some city-states prevented strangers from entering their cities, particularly, merchants and minority groups, such as Jews and persons with leprosy. A sanitary cordon—not to be broken on pain of death—was imposed by armed guards along transit routes and at access points to cities. Implementation of these measures required rapid, firm action by authorities, including prompt mobilization of repressive police forces. A rigid separation between healthy and infected persons was initially accomplished through the use of makeshift camps.

Quarantine was first introduced in 1377 in Dubrovnik on Croatia’s Dalmatian Coast, and the first permanent plague hospital (lazaretto) was opened by the Republic of Venice in 1423 on the small island of Santa Maria di Nazareth. The lazaretto was commonly referred to as Nazarethum or Lazarethum because of the resemblance of the word lazaretto to the biblical name Lazarus. In 1467, Genoa adopted the Venetian system, and in 1476 in Marseille, France, a hospital for persons with leprosy was converted into a lazaretto. Lazarettos were located far enough away from centers of habitation to restrict the spread of disease but close enough to transport the sick. Where possible, lazarettos were located so that a natural barrier, such as the sea or a river, separated them from the city; when natural barriers were not available, separation was achieved by encircling the lazaretto with a moat or ditch. In ports, lazarettos consisted of buildings used to isolate ship passengers and crew who had or were suspected of having plague. Merchandise from ships was unloaded to designated buildings. Procedures for so-called “purgation” of the various products were prescribed minutely; wool, yarn, cloth, leather, wigs, and blankets were considered the products most likely to transmit disease. Treatment of the goods consisted of continuous ventilation; wax and sponges were immersed in running water for 48 hours.

It is not known why 40 days was chosen as the length of isolation time needed to avoid contamination, but it may have derived from Hippocrates’ theories regarding acute illnesses. Another theory is that the number of days was connected to the Pythagorean theory of numbers, in which the number 4 had particular significance. Forty days was the period of the biblical travail of Jesus in the desert. Forty days were believed to represent the time necessary for dissipating the pestilential miasma from bodies and goods through the system of isolation, fumigation, and disinfection. In the centuries that followed, the system of isolation was improved.

In connection with the Levantine trade, the next step taken to reduce the spread of disease was to establish bills of health that detailed the sanitary status of a ship’s port of origin. After notification of a fresh outbreak of plague along the eastern Mediterranean Sea, port cities to the west were closed to ships arriving from plague-infected areas. The first city to perfect a system of maritime cordons was Venice, which because of its particular geographic configuration and its prominence as a commercial center, was dangerously exposed. The arrival of boats suspected of carrying plague was signaled with a flag that would be seen by lookouts on the church tower of San Marco. The captain was taken in a lifeboat to the health magistrate’s office and was kept in an enclosure where he spoke through a window; thus, conversation took place at a safe distance. This precaution was based on a mistaken hypothesis (i.e., that “pestilential air” transmitted all communicable diseases), but the precaution did prevent direct person-to-person transmission through inhalation of contaminated aerosolized droplets. The captain had to show proof of the health of the sailors and passengers and provide information on the origin of merchandise on board. If there was suspicion of disease on the ship, the captain was ordered to proceed to the quarantine station, where passengers and crew were isolated and the vessel was thoroughly fumigated and retained for 40 days. This system, which was used by Italian cities, was later adopted by other European countries.

The first English quarantine regulations, drawn up in 1663, provided for the confinement (in the Thames estuary) of ships with suspected plague-infected passengers or crew. In 1683 in Marseille, new laws required that all persons suspected of having plague be quarantined and disinfected. In ports in North America, quarantine was introduced during the same decade that attempts were being made to control yellow fever, which first appeared in New York and Boston in 1688 and 1691, respectively. In some colonies, the fear of smallpox outbreaks, which coincided with the arrival of ships, induced health authorities to order mandatory home isolation of persons with smallpox, even though another controversial strategy, inoculation, was being used to protect against the disease. In the United States, quarantine legislation, which until 1796 was the responsibility of states, was implemented in port cities threatened by yellow fever from the West Indies. In 1720, quarantine measures were prescribed during an epidemic of plague that broke out in Marseille and ravaged the Mediterranean seaboard of France and caused great apprehension in England. In England, the Quarantine Act of 1710 was renewed in 1721 and 1733 and again in 1743 during the disastrous epidemic at Messina, Sicily. A system of active surveillance was established in the major Levantine cities. The network, formed by consuls of various countries, connected the great Mediterranean ports of western Europe.


By the eighteenth century, the appearance of yellow fever in Mediterranean ports of France, Spain, and Italy forced governments to introduce rules involving the use of quarantine. But in the nineteenth century, another, even more frightening scourge, cholera, was approaching. Cholera emerged during a period of increasing globalization caused by technological changes in transportation, a drastic decrease in travel time by steamships and railways, and a rise in trade. Cholera, the “Asiatic disease,” reached Europe in 1830 and the United States in 1832, terrifying the populations. Despite progress regarding the cause and transmission of cholera, there was no effective medical response.

During the first wave of cholera outbreaks, the strategies adopted by health officials were essentially those that had been used against plague. New lazarettos were planned at western ports, and an extensive structure was established near Bordeaux, France. At European ports, ships were barred entry if they had “unclean licenses” (i.e., ships arriving from regions where cholera was present). In cities, authorities adopted social interventions and the traditional health tools. For example, travelers who had contact with infected persons or who came from a place where cholera was present were quarantined, and sick persons were forced into lazarettos. In general, local authorities tried to keep marginalized members of the population away from the cities. In 1836 in Naples, health officials hindered the free movement of prostitutes and beggars, who were considered carriers of contagion and, thus, a danger to the healthy urban population. This response involved powers of intervention unknown during normal times, and the actions generated widespread fear and resentment.

In some countries, the suspension of personal liberty provided the opportunity—using special laws—to stop political opposition. However, the cultural and social context differed from that in previous centuries. For example, the increasing use of quarantine and isolation conflicted with the affirmation of citizens’ rights and growing sentiments of personal freedom fostered by the French Revolution of 1789. In England, liberal reformers contested both quarantine and compulsory vaccination against smallpox. Social and political tensions created an explosive mixture, culminating in popular rebellions and uprisings, a phenomenon that affected numerous European countries. In the Italian states, in which revolutionary groups had taken the cause of unification and republicanism, cholera epidemics provided a justification (i.e., the enforcement of sanitary measures) for increasing police power.

By the middle of the nineteenth century, an increasing number of scientists and health administrators began to allege the impotence of sanitary cordons and maritime quarantine against cholera. These old measures depended on the idea that contagion was spread through the interpersonal transmission of germs or by contaminated clothing and objects. This theory justified the severity of measures used against cholera; after all, it had worked well against the plague. The length of quarantine (40 days) exceeded the incubation period for the plague bacillus, providing sufficient time for the death of the infected fleas needed to transmit the disease and of the biological agent, Yersinia pestis. However, quarantine was almost irrelevant as a primary method for preventing yellow fever or cholera. A rigid maritime cordon could only be effective in protecting small islands. During the terrifying cholera epidemic of 1835–1836, the island of Sardinia was the only Italian region to escape cholera, thanks to surveillance by armed men who had orders to prevent, by force, any ship that attempted to disembark persons or cargo on the coast.
Anticontagionists, who disbelieved the communicability of cholera, contested quarantine and alleged that the practice was a relic of the past, useless, and damaging to commerce. They complained that the free movement of travelers was hindered by sanitary cordons and by controls at border crossings, which included fumigation and disinfection of clothes. In addition, quarantine inspired a false sense of security, which was dangerous to public health because it diverted persons from taking the correct precautions. International cooperation and coordination was stymied by the lack of agreement regarding the use of quarantine. The discussion among scientists, health administrators, diplomatic bureaucracies, and governments dragged on for decades, as demonstrated in the debates in the International Sanitary Conferences, particularly after the opening, in 1869, of the Suez Canal, which was perceived as a gate for the diseases of the Orient. Despite pervasive doubts regarding the effectiveness of quarantine, local authorities were reluctant to abandon the protection of the traditional strategies that provided an antidote to population panic, which, during a serious epidemic, could produce chaos and disrupt public order.

A turning point in the history of quarantine came after the pathogenic agents of the most feared epidemic diseases were identified between the nineteenth and twentieth centuries. International prophylaxis against cholera, plague, and yellow fever began to be considered separately. In light of the newer knowledge, a restructuring of the international regulations was approved in 1903 by the 11th Sanitary Conference, at which the famed convention of 184 articles was signed.


In 1911, the eleventh edition of Encyclopedia Britannica emphasized that “the old sanitary preventive system of detention of ships and men” was “a thing of the past”. At the time, the battle against infectious diseases seemed about to be won, and the old health practices would only be remembered as an archaic scientific fallacy. No one expected that within a few years, nations would again be forced to implement emergency measures in response to a tremendous health challenge, the 1918 influenza pandemic, which struck the world in 3 waves during 1918–1919. At the time, the etiology of the disease was unknown. Most scientists thought that the pathogenic agent was a bacterium, Haemophilus influenzae, identified in 1892 by German bacteriologist Richard Pfeiffer.

During 1918–1919, in a world divided by war, the multilateral health surveillance systems, which had been laboriously built during the previous decades in Europe and the United States, were not helpful in controlling the influenza pandemic. The ancestor of the World Health Organization, the Office International d’Hygiène Publique, located in Paris, could not play any role during the outbreak. At the beginning of the pandemic, the medical officers of the army isolated soldiers with signs or symptoms, but the disease, which was extremely contagious, quickly spread, infecting persons in nearly every country. Various responses to the pandemic were tried. Health authorities in major cities of the Western world implemented a range of disease-containment strategies, including the closure of schools, churches, and theaters and the suspension of public gatherings. In Paris, a sporting event, in which 10,000 youths were to participate, was postponed. Yale University canceled all on-campus public meetings, and some churches in Italy suspended confessions and funeral ceremonies. Physicians encouraged the use of measures like respiratory hygiene and social distancing. However, the measures were implemented too late and in an uncoordinated manner, especially in war-torn areas where interventions (e.g., travel restrictions, border controls) were impractical, during a time when the movement of troops was facilitating the spread of the virus.

In Italy, which along with Portugal had the highest mortality rate in Europe, schools were closed after the first case of the unusually severe hemorrhagic pneumonia; however, the decision to close schools was not simultaneously accepted by health and scholastic authorities. Decisions made by health authorities often seemed focused more on reassuring the public about efforts being made to stop transmission of the virus rather than on actually stopping transmission of the virus. Measures adopted in many countries disproportionately affected ethnic and marginalized groups. In colonial possessions (e.g., New Caledonia), restrictions on travel affected the local populations. The role that the media would play in influencing public opinion in the future began to take shape. Newspapers took conflicting positions on health measures and contributed to the spread of panic. The largest and most influential newspaper in Italy, Corriere della Sera, was forced by civil authorities to stop reporting the number of deaths (150–180 deaths/day) in Milan because the reports caused great anxiety among the citizenry. In war-torn nations, censorship caused a lack of communication and transparency regarding the decision-making process, leading to confusion and misunderstanding of disease-control measures and devices, such as face masks (ironically named “muzzles” in Italian).

During the second influenza pandemic of the twentieth century, the “Asian flu” pandemic of 1957–1958, some countries implemented measures to control spread of the disease. The illness was generally milder than that caused by the 1918 influenza, and the global situation differed. Understanding of influenza had advanced greatly: the pathogenic agent had been identified in 1933, vaccines for seasonal epidemics were available, and antimicrobial drugs were available to treat complications. In addition, the World Health Organization had implemented a global influenza surveillance network that provided early warning when the novel influenza A(H2N2) virus began spreading in China in February 1957 and worldwide later that year. Vaccines had been developed in Western countries but were not yet available when the pandemic began to spread simultaneously with the opening of schools in several countries. Control measures (e.g., closure of asylums and nurseries, bans on public gatherings) varied from country to country but, at best, merely postponed the onset of disease for a few weeks. This scenario was repeated during the influenza A(H3N2) pandemic of 1968–1969, the third and mildest influenza pandemic of the twentieth century. The virus was first detected in Hong Kong in early 1968 and was introduced into the United States in September 1968 by US Marines returning from Vietnam. In the winter of 1968–1969, the virus spread around the world; its effect was limited, and there were no specific containment measures.

A new chapter in the history of quarantine opened in the early twenty-first century as traditional intervention measures were resurrected in response to the global crisis precipitated by the emergence of SARS, an especially challenging threat to public health worldwide. SARS, which originated in Guangdong Province, China, in 2003, spread along air-travel routes and quickly became a global threat because of its rapid transmission and high mortality rate and because protective immunity in the general population, effective antiviral drugs, and vaccines were lacking. However, compared with influenza, SARS had lower infectivity and a longer incubation period, providing time for instituting a series of containment measures that worked well. The strategies varied among the countries hardest hit by SARS (People’s Republic of China and Hong Kong Special Administrative Region; Singapore; and Canada). In Canada, public health authorities asked persons who might have been exposed to SARS to voluntarily quarantine themselves. In China, police cordoned off buildings, organized checkpoints on roads, and even installed Web cameras in private homes. There was stronger control of persons in the lower social strata (village-level governments were empowered to isolate workers from SARS-affected areas). Public health officials in some areas resorted to repressive police measures, using laws with extremely severe punishments (including the death penalty), against those who violated quarantine. As had occurred in the past, the strategies adopted in some countries during this public health emergency contributed to the discrimination and stigmatization of persons and communities and raised protests and complaints against limitations and travel restrictions.


More than half a millennium since quarantine became the core of a multicomponent strategy for controlling communicable disease outbreaks, traditional public health tools are being adapted to the nature of individual diseases and to the degree of risk for transmission and are being effectively used to contain outbreaks, such as the 2003 SARS outbreak and the 2009 influenza A(H1N1)pdm09 pandemic. The history of quarantine—how it began, how it was used in the past, and how it is used in the modern era—is a fascinating topic in history of sanitation. Over the centuries, from the time of the Black Death to the first pandemics of the twenty-first century, public health control measures have been an essential way to reduce contact between persons sick with a disease and persons susceptible to the disease. In the absence of pharmaceutical interventions, such measures helped contain infection, delay the spread of disease, avert terror and death, and maintain the infrastructure of society.

Quarantine and other public health practices are effective and valuable ways to control communicable disease outbreaks and public anxiety, but these strategies have always been much debated, perceived as intrusive, and accompanied in every age and under all political regimes by an undercurrent of suspicion, distrust, and riots. These strategic measures have raised (and continue to raise) a variety of political, economic, social, and ethical issues. In the face of a dramatic health crisis, individual rights have often been trampled in the name of public good. The use of segregation or isolation to separate persons suspected of being infected has frequently violated the liberty of outwardly healthy persons, most often from lower classes, and ethnic and marginalized minority groups have been stigmatized and have faced discrimination. This feature, almost inherent in quarantine, traces a line of continuity from the time of plague to the 2009 influenza A(H1N1)pdm09 pandemic.

The historical perspective helps with understanding the extent to which panic, connected with social stigma and prejudice, frustrated public health efforts to control the spread of disease. During outbreaks of plague and cholera, the fear of discrimination and mandatory quarantine and isolation led the weakest social groups and minorities to escape affected areas and, thus, contribute to spreading the disease farther and faster, as occurred regularly in towns affected by deadly disease outbreaks. But in the globalized world, fear, alarm, and panic, augmented by global media, can spread farther and faster and, thus, play a larger role than in the past. Furthermore, in this setting, entire populations or segments of populations, not just persons or minority groups, are at risk of being stigmatized. In the face of new challenges posed in the twenty-first century by the increasing risk for the emergence and rapid spread of infectious diseases, quarantine and other public health tools remain central to public health preparedness. But these measures, by their nature, require vigilant attention to avoid causing prejudice and intolerance. Public trust must be gained through regular, transparent, and comprehensive communications that balance the risks and benefits of public health interventions. Successful responses to public health emergencies must heed the valuable lessons of the past.

(Prof Tognotti is a professor of the history of medicine and human sciences at the University of Sassari. Her primary research interest is the history of epidemic and pandemic disease in the modern era.)

Treating the influenza or flu

Flu and its management

 What should you do if you have the flu?

Follow these basic tips for treating the flu.

If you have been diagnosed with the flu, you should stay home and follow your health care provider’s recommendations.

You can treat the flu with or without medication. When treating without medication, be sure to get plenty of rest and fluids.

Talk to your health care provider or pharmacist about over-the-counter and prescription medications to ease flu symptoms and help you feel better faster. Over-the-counter medications may relieve some flu symptoms, but they will not make you less contagious.

Your health care provider may prescribe antiviral medications to make your illness milder and prevent serious complications.

Antiviral medications are approved for adults and children one year and older. On December 21, 2012, the U.S. Food and Drug Administration expanded the approved use of Tamiflu to treat children as young as two weeks old who have shown symptoms of flu for no longer than two days.

All about pumpers: The harmful effects of Anabolic Steroids

Anabolic Steroids

Most anabolic steroids are synthetic substances similar to the male sex hormone testosterone. They are taken orally or are injected.

Some people, especially athletes, abuse anabolic steroids to build muscle and enhance performance. Abuse of anabolic steroids can lead to serious health problems, some of which are irreversible.
Street names

Juice, gym candy, pumpers, stackers

Major effects of steroid abuse can include liver damage, jaundice, fluid retention, high blood pressure, and increases in "bad" cholesterol. Males also risk shrinking of the testicles, baldness, breast development, and infertility.

Females risk growth of facial hair, menstrual changes, male-pattern baldness, and deepened voice. Teens risk permanently stunted height, accelerated puberty changes, and severe acne. All users, but particularly those who inject the drug, risk infectious diseases such as HIV/AIDS and hepatitis.
Statistics and Trends

The NIDA-funded 2010 Monitoring the Future Study showed that 0.5% of 8th graders, 1.0% of 10th graders, and 1.5% of 12th graders had abused anabolic steroids at least once in the year prior to being surveyed.

Steroids and the brain: Use of steroids may damage your memory

2013 February

So where did you leave the steroids? For long-term users of these muscle-building drugs, this could be a real question.

A lab study finds that long-term users may lose some ability to remember shapes, such as faces, and locations, such as directions and where objects were placed.

Researcher Harrison Pope of Harvard-affiliated McLean Hospital tested 44 steroid users, 31 of whom had used steroids for an average of seven years.

If you think of visuospatial memory test scores like an IQ score with 100 being normal, users lost about 1.5 points a year:

“If you’d had 10 years of exposure to steroids, this would drop your score on this test to an 85.”
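The arithmetic behind that quote can be sketched in a few lines. This is only an illustration of the reported average (about 1.5 points lost per year against a baseline of 100); the linear model and the function name are assumptions for the example, not the study's actual analysis.

```python
# Illustrative sketch: project a visuospatial memory test score,
# assuming a baseline of 100 and the reported average loss of
# ~1.5 points per year of steroid exposure (linear approximation).

def projected_score(years_of_use, baseline=100.0, loss_per_year=1.5):
    """Linear projection of a visuospatial memory test score."""
    return baseline - loss_per_year * years_of_use

print(projected_score(10))  # 10 years of exposure -> 85.0
```

Ten years of exposure under this simple model reproduces the score of 85 mentioned in the quote.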

The study in the journal Drug and Alcohol Dependence was supported by the National Institutes of Health.

Loud music is dangerous to health

Loud noise can corrupt data in sensitive experiments, and it can also lead to hearing loss in musicians. Even child musicians, aged eight to twelve, show signs of it, while most adult musicians suffer from it to some degree.
Musicians' hearing loss stems from their constant exposure to loud noise. Prolonged exposure to sounds above 85 dB, a level many musical instruments reach, can cause gradual hearing loss. Moreover, modern music genres, such as rock and hip hop, tend to push the decibel level until it crosses the ear's pain threshold.
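The 85 dB figure lines up with a widely used occupational-safety rule of thumb (the NIOSH recommendation): 85 dB(A) is the limit for an 8-hour day, and each 3 dB increase halves the permissible exposure time. The formula below comes from that guidance, not from this article, and is a hedged sketch rather than medical advice.

```python
# Sketch of the NIOSH-style noise dose rule: 85 dB(A) for 8 hours,
# with a 3 dB exchange rate (each +3 dB halves the safe duration).

def permissible_hours(level_db, reference_db=85.0,
                      reference_hours=8.0, exchange_rate=3.0):
    """Permissible daily exposure time (hours) at a given sound level."""
    return reference_hours / 2 ** ((level_db - reference_db) / exchange_rate)

print(permissible_hours(85))   # 8.0 hours
print(permissible_hours(100))  # 0.25 hours (15 minutes)
```

At 100 dB, a plausible level for an amplified concert, the same rule allows only about 15 minutes per day, which is why sustained rehearsal at such levels is damaging.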
To prevent or reduce hearing loss, musicians are advised to keep their exposure to loud sounds to a minimum and to wear soundproof earmuffs or noise-cancelling headphones when possible. Like Herzan's acoustic enclosures and vibration-cancellation technology, these devices use the same principles to reduce noise to a tolerable level: sound-absorbing or deflecting materials muffle noise, while the transmission of a noise-cancelling signal eliminates sound or vibrations over a broad spectrum.
(courtesy: www.herzan.com)

50 million cosmic rays over the frozen icy continent of Antarctica

2013 February 5


WASHINGTON -- A large NASA science balloon has broken two flight duration records while flying over Antarctica carrying an instrument that detected 50 million cosmic rays.

The Super Trans-Iron Galactic Element Recorder (Super-TIGER) balloon launched at 3:45 p.m. EST, Dec. 8, from the Long Duration Balloon site near McMurdo Station. It spent 55 days, 1 hour, and 34 minutes aloft at 127,000 feet, more than four times the altitude of most commercial airliners, and was brought down to end the mission on Friday. Washington University in St. Louis managed the mission.

On Jan. 24, the Super-TIGER team broke the record for the longest flight by a balloon of its size, flying for 46 days. After landing on Friday, the team claimed another record: the longest flight of any heavy-lift scientific balloon, including NASA's Long Duration Balloons. The previous record was set in 2009 by NASA's Super Pressure Balloon test flight, at 54 days, 1 hour, and 29 minutes.
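The margin over the previous record is easy to check from the two durations reported here; the short sketch below uses only those figures.

```python
from datetime import timedelta

# Flight durations as reported for the two record flights.
super_tiger = timedelta(days=55, hours=1, minutes=34)
previous = timedelta(days=54, hours=1, minutes=29)  # 2009 Super Pressure Balloon test

margin = super_tiger - previous
print(margin)  # 1 day, 0:05:00
```

Super-TIGER thus beat the old heavy-lift record by one day and five minutes.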

"Scientific balloons give scientists the ability to gather critical science data for a long duration at a very low relative cost," said Vernon Jones, NASA's Balloon Program Scientist.

Super-TIGER flew a new instrument for measuring rare elements heavier than iron among the flux of high-energy cosmic rays bombarding Earth from elsewhere in our Milky Way galaxy. The information retrieved from this mission will be used to understand where these energetic atomic nuclei are produced and how they achieve their very high energies.

The balloon gathered so much data it will take scientists about two years to analyze it fully.

"This has been a very successful flight because of the long duration, which allowed us to detect large numbers of cosmic rays," said Dr. Bob Binns, principal investigator of the Super-TIGER mission. "The instrument functioned very well."

The balloon was able to stay aloft as long as it did because of prevailing wind patterns at the South Pole. The launch site takes advantage of anticyclonic, or counter-clockwise, winds circulating from east to west in the stratosphere there. This circulation and the sparse population work together to enable long-duration balloon flights at altitudes above 100,000 feet.

The National Science Foundation (NSF) Office of Polar Programs manages
the U.S. Antarctic Program and provides logistic support for all U.S.
scientific operations in Antarctica. NSF's Antarctic support
contractor supports the launch and recovery operations for NASA's
Balloon Program in Antarctica. Mission data were downloaded using
NASA's Tracking and Data Relay Satellite System.

Monday, February 4, 2013

Assocham says Chinese CCTVs are technologically more sophisticated and affordable

Demand for Chinese CCTVs soars exponentially across metros: Survey
Chinese CCTVs are technologically more sophisticated & affordable

By Syed Akbar

In the aftermath of the gang-rape and murder of a medical student in Delhi last December, both the demand for and the import of Chinese closed-circuit television (CCTV) and surveillance cameras in the metro cities have gone through the roof, according to a just-concluded ASSOCHAM survey.
The Associated Chambers of Commerce and Industry of India (ASSOCHAM) interacted with about 200 stakeholders in the security products domain, including traders, manufacturers and others operating in the CCTV camera market, given that the gory incident of December 16, 2012 has made surveillance imperative.

ASSOCHAM carried out the survey between December 20, 2012 and January 20, 2013 in the metro cities of Bangalore, Chennai, Delhi, Hyderabad, Kolkata, Kanpur, Lucknow, Mumbai, the National Capital Region (NCR: Ghaziabad, Gurgaon, Faridabad and Noida) and Pune. These state capitals and cities draw men and women from tier II and III cities, districts and rural areas in search of job opportunities, which also makes them prone to crime.

Over half of the traders said there is negligible manufacturing of CCTV cameras in the country, so they import them from countries like China, Taiwan, Malaysia and Israel, as well as from the US and Europe. Being leading hardware manufacturers, these countries offer products that are affordable and based on the latest technologies, and customers therefore prefer them over domestically manufactured CCTVs.

A majority of respondents said that even most indigenous enterprises import all their components from abroad, assemble them and sell them under their own brand names.

In terms of sales, Chinese CCTV cameras are selling like hot cakes; respondents said their sales have increased by over 60-70 per cent over the last month alone.

The imported tag on basic fixed-focus cameras and sophisticated infrared cameras is proving lucrative for those in the business. Respondents said importing is the feasible route, since domestic production would require huge investment while volumes remain low, which does not justify the cost of production.

Lack of government support, absence of a regulatory framework, large investments and outdated technology are key reasons holding back domestic electronics companies from venturing into the CCTV domain, leading to increased dependence upon imports, highlights the ASSOCHAM survey.
“The need for safety and security in almost every walk of life has fuelled an overwhelming demand for CCTV cameras and more so after the Munirka gang-rape incident as hostels, paying guest accommodations, hotels and places alike in cosmopolitan cities are installing surveillance gadgets to keep a check on the movements of both the inhabitants and stalkers,” said Mr D.S. Rawat, secretary general of ASSOCHAM while releasing the chamber survey.

According to an ASSOCHAM analysis, the video surveillance and CCTV market in India is growing at a compound annual growth rate (CAGR) of about 30 per cent and is likely to cross Rs 2,200 crore by 2015.
The Indian CCTV camera market is currently valued at about Rs 1,300 crore and accounts for about 40 per cent of the Rs 3,250 crore total electronic security market in India, according to an analysis of the CCTV/video surveillance market.

The global CCTV and video surveillance market is growing at a CAGR of about 25 per cent; currently valued at about Rs one lakh crore, it is likely to cross the Rs 1.5 lakh crore mark by 2015, according to the ASSOCHAM study. Asia accounts for nearly 35 per cent of the global CCTV market, with a share of over Rs 27,000 crore.

The CCTV camera industry is set to emerge as a huge market in the next few years in the wake of rising demand from sectors like hospitality, services, healthcare, retail and transportation.

The ease of interconnecting all monitoring systems, traffic systems and various marketplaces with police stations and defence headquarters in real time makes CCTV surveillance a prominent and feasible security solution.

Currently, parts of northern India account for the maximum number of security installations, followed by west, south and east India.

Deployment of CCTVs significantly helps in post-attack investigation; continuous monitoring of the video surveillance system also plays a vital role in combating security breaches and terror threats at sensitive places like railway stations, airports, hospitals and busy marketplaces.

“Rapid economic growth and rising industrial activity, amid security threats and fear of potential terrorist attacks, have evidently fuelled the demand for CCTV cameras, as government authorities and even the private sector are investing huge amounts of money in installing CCTVs to secure their offices and public places across the country,” said Mr D.S. Rawat, secretary general, ASSOCHAM, while releasing the findings of the study.
CCTVs are the most sought-after security systems, and apart from the government, at both the central and state levels, the private sector is also set to increase its expenditure on security surveillance; as a result, the cost of CCTVs is going to head south, highlights the study.

“Tier II and tier III cities, currently having a small proportion of security system installations are going to emerge as the real growth drivers of this technology driven industry in the long run,” said Mr Rawat. “Economic liberalisation will create jobs and income opportunities, attract migrants and foster a cosmopolitan culture in these cities making them prone to security threats.”

“Public private partnerships (PPP) is a feasible solution to develop homeland security solutions to ensure safe, secure and smart cities, ports and highways,” said Mr Rawat.

Wednesday, January 9, 2013

All about densest matter of the Universe: Chandra X-ray Observatory gives insight into fast moving jet of particles from a rotating neutron star


WASHINGTON -- Unlike with some blockbuster films, the sequel to a
movie from NASA's Chandra X-ray Observatory is better than the first.
This latest movie features a deeper look at a fast moving jet of
particles produced by a rapidly rotating neutron star, and may
provide new insight into the nature of some of the densest matter in
the universe.

The hero of this Chandra movie is the Vela pulsar, a neutron star that
was formed when a massive star collapsed. The Vela pulsar is about
1,000 light-years from Earth, about 12 miles in diameter, and makes a
complete rotation in 89 milliseconds, faster than a helicopter rotor.
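As a quick sanity check on that comparison, the 89-millisecond period converts to revolutions per minute with simple arithmetic (the helicopter rotor speed below is an assumed typical value, not a figure from the article):

```python
# Illustrative arithmetic (not from the article): convert the Vela
# pulsar's 89 ms rotation period into revolutions per minute and
# compare with a typical helicopter main rotor (~400 rpm, assumed).
period_s = 0.089                 # rotation period in seconds
pulsar_rpm = 60.0 / period_s     # revolutions per minute
helicopter_rpm = 400.0           # assumed typical main-rotor speed
print(f"Vela pulsar: {pulsar_rpm:.0f} rpm")                 # ~674 rpm
print(f"Ratio to rotor: {pulsar_rpm / helicopter_rpm:.1f}x")  # ~1.7x
```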

As the pulsar whips around, it spews out a jet of charged particles
that race along the pulsar's rotation axis at about 70 percent of the
speed of light. The new Chandra data, which were obtained from June
to September 2010, suggest the pulsar may be slowly wobbling, or
precessing, as it spins. The period of the precession, which is
analogous to the slow wobble of a spinning top, is estimated to be
about 120 days.

"We think the Vela pulsar is like a rotating garden sprinkler --
except with the water blasting out at over half the speed of light,"
said Martin Durant of the University of Toronto in Canada, who is the
first author of the paper describing these results.

One possible cause of precession for a spinning neutron star is that
it has become slightly distorted and is no longer a perfect sphere. This
distortion might be caused by the combined action of the fast
rotation and "glitches," sudden increases of the pulsar's rotational
speed due to the interaction of the superfluid core of the neutron
star with its crust.

"The deviation from a perfect sphere may only be equivalent to about
one part in 100 million," said co-author Oleg Kargaltsev of The
George Washington University in Washington, who presented these
results Monday at the 221st American Astronomical Society meeting in
Long Beach, Calif. "Neutron stars are so dense that even a tiny
distortion like this would have a big effect."

If the evidence for precession of the Vela pulsar is confirmed, it
would be the first time a neutron star has been found to be precessing.
The shape and the motion of the Vela jet look strikingly like a
rotating helix, a shape that is naturally explained by precession.
Another possibility is that the strong magnetic fields around the
pulsar are influencing the shape of the jet. For example, if the jet
develops a small bend caused by precession, the magnetic field lines
on the inside of the bend will become more closely spaced. This
pushes particles toward the outside of the bend, increasing the bend.

"It's like having an unsecured fire hose and a flow of water at high
pressure," said co-author George Pavlov, principal investigator of
the Chandra proposal at Pennsylvania State University in University
Park. "All you need is a small bend in the hose and violent motion
can result."

This is the second Chandra movie of the Vela pulsar. The original was
released in 2003 by Pavlov and co-authors. The first Vela movie
contained shorter, unevenly spaced observations, so the changes
in the jet were less pronounced and the researchers did not argue
that precession was occurring. However, based on the same data,
Avinash Deshpande of Arecibo Observatory in Puerto Rico and the Raman
Research Institute in Bangalore, India, and the late Venkatraman
Radhakrishnan, argued in a 2007 paper that the Vela pulsar might be
precessing.

Astronomers have returned to observing Vela because it offers an
excellent chance to study how a pulsar and its jet work. The 0.7
light-year-long jet in Vela is similar to those produced by accreting
supermassive black holes in other galaxies, but on a much smaller
scale. Because Vela's jet changes dramatically over a period of
months and is relatively close, it can be studied in great detail,
unlike jets from black holes, which change over much longer timescales.

If precession is confirmed and the Vela pulsar is indeed a distorted
neutron star, it should be a persistent source of gravitational
waves, and would be a prime target for the next generation of
gravitational wave detectors designed to test Einstein's theory of
general relativity.

A paper describing these results will be published Thursday in The
Astrophysical Journal. Other co-authors of the paper were Julia
Kropotina and Kseniya Levenfish from St. Petersburg State
Polytechnical University in St. Petersburg, Russia.

NASA's Marshall Space Flight Center in Huntsville, Ala., manages the
Chandra program for NASA's Science Mission Directorate in Washington.
The Smithsonian Astrophysical Observatory controls Chandra's science
and flight operations from Cambridge, Mass.

Peanut allergy: New methods to treat allergy to peanuts

Peanut Therapy Shows Promise in Treating Peanut Allergy

NIH-Funded Clinical Study is One of the First to Evaluate Sublingual Immunotherapy as a Peanut Allergy Treatment

A new study supported by the National Institutes of Health (NIH) suggests that sublingual immunotherapy (SLIT) can reduce the allergic response to peanut in adolescents and adults. SLIT is a treatment approach in which, under medical supervision, people place a small amount of allergen under the tongue to decrease their sensitivity to the allergen. This is one of the first randomized, placebo-controlled studies to test the efficacy and safety of SLIT to treat peanut allergy and is one of several federally funded trials investigating immune-based approaches to preventing and treating food allergy. The results appear online in the January issue of the Journal of Allergy and Clinical Immunology.

The study enrolled 40 people aged 12 to 37 years with peanut allergy who were on a peanut-free diet. After an initial food challenge to measure how much peanut powder they could eat without having an allergic reaction, participants received 44 weeks of daily therapy, followed by a second food challenge. Fourteen of the 20 participants (70 percent) given peanut SLIT were able to consume at least 10 times more peanut powder than they could at the beginning of the study, compared with only 3 of the 20 participants (15 percent) given placebo. After 68 weeks on peanut SLIT, on average, participants could consume significantly more peanut powder without having an allergic reaction. Study investigators also observed that SLIT caused only minor side effects, such as itching in the mouth, suggesting that daily therapy is safe.

Although more work is needed, the investigators hope that SLIT could one day help protect people with peanut allergy from experiencing severe allergic reactions in cases of accidental exposure. The researchers caution that people should not try peanut SLIT on their own because any form of immunotherapy carries a significant risk for allergic reactions. The therapy should be administered only under the guidance of trained clinicians.

The multicenter study was supported by the NIH’s National Institute of Allergy and Infectious Diseases (NIAID) and conducted by the Consortium of Food Allergy Research (CoFAR) at clinical sites in Baltimore; Chapel Hill, N.C.; Denver; Little Rock, Ark.; and New York City. CoFAR investigators David Fleischer, M.D., associate professor of pediatrics in the Division of Pediatric Allergy and Immunology at National Jewish Health in Denver, and A. Wesley Burks, M.D., chair of the Department of Pediatrics at the University of North Carolina, Chapel Hill, led the trial.

461 new planet candidates found, bringing the total of potential planets to 2,740 orbiting 2,036 stars


WASHINGTON -- NASA's Kepler mission Monday announced the discovery of
461 new planet candidates. Four of the potential new planets are less
than twice the size of Earth and orbit in their sun's "habitable
zone," the region in the planetary system where liquid water might
exist on the surface of a planet.

Based on observations conducted from May 2009 to March 2011, the
findings show a steady increase in the number of smaller-size planet
candidates and the number of stars with more than one candidate.

"There is no better way to kick off the start of the Kepler extended
mission than to discover more possible outposts on the frontier of
potentially life bearing worlds," said Christopher Burke, Kepler
scientist at the SETI Institute in Mountain View, Calif., who is
leading the analysis.

Since the last Kepler catalog was released in February 2012, the
number of candidates discovered in the Kepler data has increased by
20 percent and now totals 2,740 potential planets orbiting 2,036
stars. The most dramatic increases are seen in the number of
Earth-size and super Earth-size candidates discovered, which grew by
43 and 21 percent respectively.

The new data increases the number of stars discovered to have more
than one planet candidate from 365 to 467. Today, 43 percent of
Kepler's planet candidates are observed to have neighbor planets.

"The large number of multi-candidate systems being found by Kepler
implies that a substantial fraction of exoplanets reside in flat
multi-planet systems," said Jack Lissauer, planetary scientist at
NASA's Ames Research Center in Moffett Field, Calif. "This is
consistent with what we know about our own planetary neighborhood."

The Kepler space telescope identifies planet candidates by repeatedly
measuring the change in brightness of more than 150,000 stars in
search of planets that pass in front, or "transit," their host star.
At least three transits are required to verify a signal as a
potential planet.
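The transit method described above can be illustrated with the standard depth approximation: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio (a textbook relation, not taken from the Kepler release, and it ignores limb darkening):

```python
# Back-of-the-envelope sketch (assumption: uniform stellar disk,
# no limb darkening): transit depth ~ (planet radius / star radius)**2.
R_EARTH_KM = 6371.0
R_SUN_KM = 695_700.0

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fractional brightness dip caused by a transiting planet."""
    return (planet_radius_km / star_radius_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"Earth transiting the Sun dims it by {depth * 1e6:.0f} ppm")  # ~84 ppm
```

Dips this shallow are why at least three consistent transits, plus follow-up, are needed before a signal is promoted from candidate to confirmed planet.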

Scientists analyzed more than 13,000 transit-like signals to eliminate
known spacecraft instrumentation and astrophysical false positives,
phenomena that masquerade as planetary candidates, to identify the
potential new planets.

Candidates require additional follow-up observations and analyses to
be confirmed as planets. At the beginning of 2012, 33 candidates in
the Kepler data had been confirmed as planets. Today, there are 105.

"The analysis of increasingly longer time periods of Kepler data
uncovers smaller planets in longer-period orbits -- orbital periods
similar to Earth's," said Steve Howell, Kepler mission project
scientist at Ames. "It is no longer a question of will we find a true
Earth analogue, but a question of when."

The complete list of Kepler planet candidates is available in an
interactive table at the NASA Exoplanet Archive. The archive is
funded by NASA's Exoplanet Exploration Program to collect and make
public data to support the search for and characterization of
exoplanets and their host stars.

Ames manages Kepler's ground system development, mission operations
and science data analysis. NASA's Jet Propulsion Laboratory (JPL) in
Pasadena, Calif., managed Kepler mission development. Ball Aerospace
and Technologies Corp. in Boulder, Colo., developed the Kepler flight
system and supports mission operations with JPL at the Laboratory for
Atmospheric and Space Physics at the University of Colorado in
Boulder.

The Space Telescope Science Institute in Baltimore archives, hosts and
distributes the Kepler science data. Kepler is NASA's 10th Discovery
Mission and is funded by NASA's Science Mission Directorate at the
agency's headquarters in Washington.

JPL manages NASA's Exoplanet Exploration Program. The NASA Exoplanet
Archive is hosted at the Infrared Processing and Analysis Center at
the California Institute of Technology.

Tracing history of a black hole 11 billion years ago: Scientists find the source of ancient outburst in the farther space


WASHINGTON -- In 2011, a months-long blast of energy launched by an
enormous black hole almost 11 billion years ago swept past Earth.
Using a combination of data from NASA's Fermi Gamma-ray Space
Telescope and the National Science Foundation's Very Long Baseline
Array (VLBA), the world's largest radio telescope, astronomers have
zeroed in on the source of this ancient outburst.

Theorists expect gamma-ray outbursts to occur only in close proximity
to a galaxy's central black hole, the powerhouse ultimately
responsible for the activity. A few rare observations suggested this
is not the case.

The 2011 flares from a galaxy known as 4C +71.07 now give astronomers
the clearest and most distant evidence that the theory still needs
some work. The gamma-ray emission originated about 70 light-years
away from the galaxy's central black hole.

The 4C +71.07 galaxy was discovered as a source of strong radio
emission in the 1960s. NASA's Compton Gamma-Ray Observatory, which
operated in the 1990s, detected high-energy flares, but the galaxy
was quiet during Fermi's first two and a half years in orbit.

In early November 2011, at the height of the outburst, the galaxy was
more than 10,000 times brighter than the combined luminosity of all
of the stars in our Milky Way galaxy.

"This renewed activity came after a long slumber, and that's important
because it allows us to explicitly link the gamma-ray flares to the
rising emission observed by radio telescopes," said David Thompson, a
Fermi deputy project scientist at NASA's Goddard Space Flight Center
in Greenbelt, Md.

Located in the constellation Ursa Major, 4C +71.07 is so far away that
its light takes 10.6 billion years to reach Earth. Astronomers are
seeing this galaxy as it existed when the universe was less than
one-fourth of its present age.

At the galaxy's core lies a supersized black hole weighing 2.6 billion
times the sun's mass. Some of the matter falling toward the black
hole becomes accelerated outward at almost the speed of light,
creating dual particle jets blasting in opposite directions. One jet
happens to point almost directly toward Earth. This characteristic
makes 4C +71.07 a blazar, a classification that includes some of the
brightest gamma-ray sources in the sky.

Boston University astronomers Alan Marscher and Svetlana Jorstad
routinely monitor 4C +71.07 along with dozens of other blazars using
several facilities, including the VLBA.

The instrument's 10 radio telescopes span North America, from Hawaii
to St. Croix in the U.S. Virgin Islands, and possess the resolving
power of a single radio dish more than 5,300 miles across when their
signals are combined. As a result, the VLBA resolves detail about a
million times smaller than Fermi's Large Area Telescope (LAT) and
1,000 times smaller than NASA's Hubble Space Telescope.
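That resolving power follows from the standard diffraction limit, roughly the observing wavelength divided by the baseline. A rough sketch, assuming a 7 mm wavelength (the VLBA's actual resolution depends on observing frequency and array coverage):

```python
# Diffraction-limit sketch (assumptions: 7 mm observing wavelength,
# the article's ~5,300-mile maximum baseline).
import math

wavelength_m = 0.007                    # 7 mm (43 GHz), assumed
baseline_m = 5300 * 1609.34             # ~5,300 miles in metres
theta_rad = wavelength_m / baseline_m   # angular resolution in radians
mas = math.degrees(theta_rad) * 3600 * 1000  # radians -> milliarcseconds
print(f"Angular resolution: {mas:.2f} mas")  # ~0.17 mas
```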

In autumn 2011, the VLBA images revealed a bright knot that appeared
to move outward at a speed 20 times faster than light.

"Although this apparent speed was an illusion caused by actual motion
almost directly toward us at 99.87 percent of the speed of light, this
knot was the key to determining the location where the gamma-rays
were produced in the black hole's jet," said Marscher, who presented
the findings Monday at the American Astronomical Society meeting in
Long Beach, Calif.
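The "illusion" Marscher describes follows from the standard special-relativistic formula for apparent transverse motion; a small sketch using the article's 99.87-percent figure:

```python
# Standard apparent-superluminal-motion formula (textbook special
# relativity, not code from the study): a knot moving at true speed
# beta (in units of c) at angle theta to the line of sight appears
# to move across the sky at beta*sin(theta) / (1 - beta*cos(theta)).
import math

def apparent_speed(beta: float, theta_rad: float) -> float:
    """Apparent transverse speed in units of c."""
    return beta * math.sin(theta_rad) / (1 - beta * math.cos(theta_rad))

beta = 0.9987                 # 99.87% of light speed, from the article
theta_max = math.acos(beta)   # angle at which the apparent speed peaks
print(f"Peak apparent speed: {apparent_speed(beta, theta_max):.1f}c")  # ~19.6c
```

The peak value of roughly 20c matches the apparent motion the VLBA images revealed, even though nothing actually exceeds the speed of light.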

The knot passed through a bright stationary feature of the jet, which
the astronomers refer to as its radio "core," on April 9, 2011. This
occurred within days of Fermi's detection of renewed gamma-ray
flaring in the blazar. Marscher and Jorstad noted that the blazar
brightened at visible wavelengths in step with the higher-energy
emission.

During the most intense period of flaring, from October 2011 to
January 2012, the scientists found the polarization direction of the
blazar's visible light rotated in the same manner as radio emissions
from the knot. They concluded the knot was responsible for the
visible and the gamma-ray light, which varied in sync.

This association allowed the researchers to pinpoint the location of
the gamma-ray outburst to about 70 light-years from the black hole.

The astronomers think that the gamma rays were produced when electrons
moving near the speed of light within the jet collided with visible
and infrared light originating outside of the jet. Such a collision
can kick the light up to much higher energies, a process known as
inverse-Compton scattering.
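The energy gain from inverse-Compton scattering can be sketched with the textbook scaling relation (the electron Lorentz factor below is an assumed illustrative value, not a number from the study): a relativistic electron boosts a seed photon's energy by roughly a factor of its Lorentz factor squared.

```python
# Order-of-magnitude inverse-Compton sketch (textbook scaling, with
# an assumed electron Lorentz factor): boosted energy ~ gamma**2
# times the seed photon energy.
gamma = 1e4                      # assumed electron Lorentz factor
seed_ev = 1.0                    # ~1 eV, a visible-light photon
boosted_ev = gamma ** 2 * seed_ev
print(f"Boosted photon energy: ~{boosted_ev / 1e6:.0f} MeV")  # ~100 MeV
```

Energies of this order fall squarely in the band that Fermi's LAT detects, consistent with visible and infrared seed light being upscattered into gamma rays.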

The source of the lower-energy light is unclear at the moment. The
researchers speculate the source may be an outer, slow-moving sheath
that surrounds the jet. Nicholas MacDonald, a graduate student at
Boston University, is investigating how the gamma-ray brightness
should change in this scenario to compare with observations.

"The VLBA is the only instrument that can bring us images from so near
the edge of a young supermassive black hole, and Fermi's LAT is the
only instrument that can see the highest-energy light from the
galaxy's jet," said Jorstad.

NASA's Fermi Gamma-ray Space Telescope is an astrophysics and particle
physics partnership. Fermi is managed by NASA's Goddard Space Flight
Center. It was developed in collaboration with the U.S. Department of
Energy, with contributions from academic institutions and partners in
France, Germany, Italy, Japan, Sweden and the United States.