Friday, October 21, 2011

 

folic acid

October 17, 2011 — Folic acid supplements taken from 4 weeks before to 8 weeks after conception have been linked to a significantly lower prevalence of severe language delay in children, according to a study published in the October 12 issue of JAMA.

Dr. Christine Roth

"Severe language delay is associated with a range of childhood neuropsychiatric disorders, such as autism, and is also associated with difficulties in achieving literacy," lead author Christine Roth, ClinPsyD, from the Norwegian Institute of Public Health in Oslo, told Medscape Medical News.

Even though half of the children rated as having language delay at age 3 years grow out of it by the time they reach school age, particularly when the delay is in the moderate range, many continue to struggle with language difficulties, said Dr. Roth, who is also a visiting researcher at the Mailman School of Public Health at Columbia University in New York City.

Studies have shown that periconceptional folic acid supplements reduce the risk for neural tube defects, but none of the trials has followed up participants to investigate whether the supplements have effects on neurodevelopment that emerge only after birth, she said.

"Unlike the United States, Norway does not fortify foods with folic acid, increasing the contrast in relative folate status between women who do and do not take folic acid supplements," she noted.

With this in mind, Dr. Roth set out to specifically study periconceptional folic acid use and language delay.

The analysis included 19,956 boys and 18,998 girls born to mothers participating in the Norwegian Mother and Child Cohort Study between 1999 and 2008. The researchers used data on children born before 2008 whose mothers returned the 3-year follow-up questionnaire by June 2010.

The investigators found that 204 children (0.5%) had severe language delay, defined as minimal expressive language (only 1-word or unintelligible utterances). Of the 9052 children whose mothers took no folic acid supplements, severe language delay was reported in 81 children (0.9%), but among the 7127 children whose mothers did take folic acid supplements, severe language delay was reported in 28 children (0.4%).
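For readers who want to check the crude contrast, the unadjusted figures can be reproduced directly from the counts quoted above. The short Python sketch below is illustrative only; the published JAMA analysis reported adjusted estimates, not this back-of-the-envelope calculation.

```python
# Crude (unadjusted) comparison of severe language delay rates, using the
# counts quoted above. Illustrative only; the published JAMA analysis
# reported adjusted estimates.
cases_no_folate, n_no_folate = 81, 9052   # mothers took no folic acid supplements
cases_folate, n_folate = 28, 7127         # mothers took folic acid supplements

risk_no_folate = cases_no_folate / n_no_folate
risk_folate = cases_folate / n_folate
crude_odds_ratio = (cases_folate / (n_folate - cases_folate)) / \
                   (cases_no_folate / (n_no_folate - cases_no_folate))

print(f"Severe delay without folic acid: {risk_no_folate:.2%}")    # ~0.9%
print(f"Severe delay with folic acid:    {risk_folate:.2%}")       # ~0.4%
print(f"Crude odds ratio:                {crude_odds_ratio:.2f}")  # ~0.44
```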

"If in future research this relationship were shown to be causal, it would have important implications for understanding the biological processes underlying disrupted neurodevelopment, for the prevention of neurodevelopmental disorders, and for policies of folic acid supplementation for women of reproductive age," senior author Ezra Susser, MD, DrPh, from the Mailman School of Public Health and the New York State Psychiatric Institute, New York City, said in a statement.

Dr. Roth added that Norwegian women should follow the current recommendations of starting to take a folic acid supplement of 0.4 mg daily 1 month before becoming pregnant and continuing through the second to third month of pregnancy.

"This means, of course, that women who might become pregnant should take supplements, so that they will be taking supplements if pregnancy does occur."

However, she stops short of saying that the findings should be used in creating formal policy recommendations.

"Our study does offer some further evidence in favor of the current recommendations, but we caution that it is premature to use it as a basis for formulating policy recommendations," she said.

The study was supported by the Norwegian Ministry of Health and the Ministry of Education and Research, the National Institutes of Health/National Institute of Environmental Health Sciences, and the Norwegian Research Council. Dr. Roth and Dr. Susser have disclosed no relevant financial relationships.

JAMA. 2011; 306:1566-1573. Abstract


Thursday, October 13, 2011

 

depression

Commentary

Contemporary psychiatry has been surprised and chagrined by recent meta-analyses demonstrating that antidepressant drugs have statistically weak benefits, or perhaps no benefit at all, in people with mild or moderate depression. The benefits of antidepressant drugs have been consistently demonstrable only for the most severe depressions, leaving clinicians without a mainstay for treating mild-to-moderate depression.

The study of Lieverse and colleagues is therefore extremely welcome, for it demonstrates that bright light can benefit outpatients older than 60 years with mild major depression. The bright light benefits were confirmed by a range of rating scales. Moreover, no adverse effects were significantly more common in the bright light group.

Although bright light showed some superiority of response during the 3 weeks of treatment, it was peculiar that the greater relative benefit in the bright light group was observed at 6 weeks, 3 weeks after the bright light had been withdrawn. This was unexpected on the basis of previous light-treatment studies. One would like to know what results would be obtained with longer treatment periods, as well as with longer post-treatment follow-up.

A number of previous studies of bright light treatment (BLT) have suggested promising antidepressant benefits, particularly the inpatient studies of Martiny,[1] which combined bright light with antidepressants. The current study provides the strongest evidence to date for bright light benefits in outpatients, particularly in the older age group. More trials, and longer-term trials, are needed.

As the costs and adverse effects of BLT seem minimal, and bright light has proven its worth in treating seasonal depression, clinicians are now beginning to offer BLT to patients with non-seasonal depression. This study will encourage bright light treatment for patients older than 60 years.

Daniel F Kripke
Professor of Psychiatry Emeritus, University of California San Diego, La Jolla, California, USA


 

dementia AF

Discussion

In this population-based study of older adults, AF was associated with a 40% to 50% higher risk of AD and all-cause dementia, independent of stroke. This higher risk persisted after adjustment for many cardiovascular risk factors and diseases and in numerous sensitivity analyses. Because participants were followed prospectively and screened routinely for cognitive impairment, and dementia was ascertained using sensitive and valid methods, the findings provide more-rigorous information than many prior studies of this question. Considerable efforts were made to identify and account for clinically recognized stroke, so the estimates reflect the associations between AF and dementia beyond its known association with clinical stroke.

Several biological mechanisms may underlie the association between AF and dementia and AD. First, AF leads to incomplete atrial emptying, which may lead to thrombus formation in the left atrial appendage. This can result in systemic embolization, including to the brain. In addition to clinically recognized stroke, people with AF experience silent cerebral emboli.[15] It has previously been reported that cerebral microinfarcts are an important neuropathological predictor of clinical dementia.[32] It is not known whether AF increases the risk of cerebral microinfarcts. Second, AF is associated with greater beat-to-beat heart rate variability, which may lead to cerebral hypoperfusion.[13] Either or both of these mechanisms may be associated with other neuropathological entities associated with dementia, such as neurofibrillary tangles, Lewy bodies, and hippocampal sclerosis. These findings are consistent with Fotuhi's "dynamic polygon" hypothesis,[33] which posits that multiple processes and risk factors act together to produce late-life dementia. People with late-life dementia and AD commonly have multiple neuropathological findings (e.g., vascular pathology or Lewy bodies in addition to AD lesions).[34] It may be that, in people with levels of AD pathology insufficient to produce dementia on their own, additional insults from AF decrease cognitive reserves and hasten the onset of dementia or AD.

In addition to the mechanisms described above, there are other possible explanations for the association observed between AF and dementia. First, AF and dementia may share underlying risk factors or pathophysiological processes, such as inflammation. It may be that despite efforts to control for cardiovascular risk factors and clinical cardiovascular disease, AF serves as a marker for the overall burden of cardiovascular disease. Second, subtle changes in the brain may affect autonomic input to the heart, changing cardiac conduction and leading to AF. However, the outcome of incident dementia was studied, so autonomic changes should have been minimal in these cases with early AD.

These findings are consistent with those of prior cross-sectional studies[4,35] and with some[6–8] but not all[16–20] longitudinal studies. (For more information, please see summary table provided as Appendix S1.) Three of eight longitudinal studies reported that participants with AF were at higher risk of dementia than those without,[6–8] although one study found this association only in participants with mild cognitive impairment,[7] and another found an association at 5 years of follow-up but not 1 or 10 years.[8] Five longitudinal studies found no association.[16–20] In general, the studies that found an association examined a younger population than those that found no association, but there were other important differences. The methods used to detect AF and dementia varied widely. Only one prior study[6] presented data about AF diagnosed during follow-up; other studies ascertained AF only at baseline. Because the current study identified AF diagnosed during follow-up, it is likely that it had more-complete ascertainment than most prior studies, improving its accuracy and power. Some studies used measures of dementia that have poor sensitivity or specificity, such as ICD-9 codes from administrative data[6] or Mini-Mental State Examination scores less than 24.[16] The current study included the largest number of dementia cases identified using sensitive and rigorous methods of any study to date. Only one prior study included more dementia cases,[6] but it relied purely on administrative data to identify dementia, which likely led to substantial misclassification. This probably led to bias, the direction of which cannot be predicted.

This study has limitations. Diagnoses of dementia and AD were based on clinical criteria, and although neuroimaging studies were available for many dementia cases, research-quality neuroimaging was not performed. Cases of vascular or mixed dementia may have been misclassified as AD. It is not likely that this detracts from the findings, because prior work has established that in late-life dementia, people who meet neuropathological criteria for AD commonly have additional coexisting neuropathological findings, often vascular lesions.[34] The association between AF and AD remained present when the outcome was limited to participants with probable AD, a group that is likely to meet neuropathological criteria for AD.[34]

It is likely that some cases of AF that did not come to clinical attention were missed, particularly those that were transitory or asymptomatic. It is difficult to predict how this misclassification would have affected the results. If this misclassification was nondifferential, the true association between AF and dementia may be stronger than was observed. Information about AF duration and persistence was lacking, so these aspects of AF could not be examined in relation to dementia. The timing of dementia onset and the onset of self-reported CHD, stroke, and CHF were subject to error because the exact date of onset was not known, so it was estimated as occurring halfway between ACT visits. Covariates including CHD and CHF were measured from self-report, which may be inaccurate. Information about valvular heart disease or echocardiographic findings such as left atrial dilation and impaired systolic function, which are associated with development of AF, was not available.[10] All of these limitations could have led to residual confounding. The study population was predominantly white and well educated, which may limit generalizability. Participants who died or withdrew from ACT may have differed from those who remained in the study in terms of their likelihood of having AF and of developing dementia, which may have led to bias.

The findings have important clinical implications. AF is common, and its prevalence is increasing.[36] It is not known whether specific treatments for AF could modify the greater risk of dementia observed. If such treatment could delay the onset of dementia by even a few years, this could have a substantial effect on the burden of dementia in the population. Although one recent observational study reported that participants with AF who underwent catheter ablation had a lower risk of dementia than those who did not,[37] these results may have been biased because treatment was not randomly assigned. Additional research is needed to examine the relationship between various treatments for AF and cognitive outcomes. Clinical trials and comparative effectiveness studies examining AF treatments will surely continue to study stroke and mortality but should also use sensitive and rigorous measures to ascertain cognitive decline and incident dementia so that these important outcomes can be evaluated. Future research seeking ways to avert the increasing burden of dementia should aim to determine the extent to which AF might be a modifiable risk factor.


 

AF atrial fibrillation

Can Atrial Fibrillation Cause MR?

In patients with AF and normal-leaflet-motion MR, those who were in sinus rhythm after AF ablation had less MR than those with recurrent AF.

Functional mitral regurgitation (MR) with normal leaflet motion is typically caused by annular dilation associated with left ventricular (LV) enlargement. In the present study, investigators postulated that normal-leaflet-motion MR can also result from annular dilation associated with atrial fibrillation (AF), a condition they termed "atrial functional MR." They retrospectively screened 828 patients undergoing first AF ablation to identify 53 (6.4%) with moderate or greater MR and normal leaflet motion. All patients had LV ejection fractions ≥50%.

Compared with a randomly selected reference cohort with mild or no MR, subjects were older and more likely to have persistent AF and hypertension. Of the patient characteristics included in multivariate analysis, large mitral annular dimension was associated with the largest odds ratio for MR (8.39). Thirty-two subjects underwent 1-year echocardiographic follow-up. Compared with patients with recurrent AF, those in sinus rhythm had significantly less MR (mean MR jet-to-left-atrial [LA] ratio, 0.16 vs. 0.28) and a significantly smaller mean LA volume index (24 mL/m2 vs. 31 mL/m2), as well as a smaller mean annular dimension (3.2 cm vs. 3.5 cm; P=0.06).

Comment: This is the largest study to date to demonstrate an association between atrial fibrillation and secondary, normal-leaflet-motion mitral regurgitation and the first to demonstrate early resolution of MR with restoration of sinus rhythm by ablation, suggesting a causal relationship. If these findings are confirmed in a prospective study, a strategy of rhythm control may be preferable to one of rate control in these patients.

Howard C. Herrmann, MD

Published in Journal Watch Cardiology October 12, 2011


Wednesday, October 12, 2011

 

polio

In 2000, the inactivated poliovirus vaccine (IPV) replaced the oral poliovirus vaccine (OPV) for routine immunization in the United States to prevent vaccine-associated paralytic polio, as reported by the American Academy of Pediatrics (AAP) in the December 1999 issue of Pediatrics. In the January 24, 1997, issue of MMWR. Recommendations and Reports, the US Centers for Disease Control and Prevention found that approximately 8 cases of OPV-associated paralytic polio occurred per year. Three combination vaccines that include IPV are licensed for use in the United States, according to the Centers for Disease Control and Prevention in the August 7, 2009, issue of MMWR. Morbidity and Mortality Weekly Report.

This policy statement from the AAP addresses the recommendations for poliovirus vaccination.

Study Synopsis and Perspective

The AAP has updated its recommendation for the administration of poliovirus vaccines, clarifying the standard schedule for immunization, as well as the minimal ages and minimal intervals between doses, according to a policy statement published online September 26 in Pediatrics.

Although the use of the OPV beginning in the early 1960s led to the elimination of polio in the United States, with the last reported outbreak seen in 1979, wild polioviruses still occur naturally in 4 countries: Afghanistan, India, Nigeria, and Pakistan. The fact that these 4 countries exported the virus to other countries that reported polio cases in 2009 points to the potential for the virus to be brought into the United States, the AAP policy statement says.

Twenty countries reported 1349 cases of polio in 2010, and 14 countries have reported 333 polio cases through August 23 of this year.

The IPV replaced the OPV as the vaccine of choice in the United States in 2000 in an effort to prevent rare but serious vaccine-associated paralytic polio. The current vaccination schedule, designed to produce immunity early in life, calls for 3 doses of IPV at 2, 4, and 6 through 18 months of age, and a fourth dose at 4 through 6 years of age. The AAP recommends that if risk for exposure is imminent, such as when a person travels to 1 of the 4 countries with wild polioviruses, then the doses should be administered at the minimum ages and intervals.

Within the United States, pockets of underimmunized children could lead to an outbreak if the wild viruses migrate to where those children are living, the AAP says.

The AAP statement says that after an individual receives the IPV series of doses, immunity is "long-term, possibly lifelong." However, the statement also recommends that even adults who completed immunization with OPV or IPV early in life receive a single dose of IPV if they are at increased risk for exposure to wild poliovirus in 1 of those countries.

Three combination vaccines and 1 stand-alone vaccine are licensed in the United States. Diphtheria and tetanus toxoids and acellular pertussis adsorbed, hepatitis B, and inactivated poliovirus vaccine (DTaP-HepB-IPV; Pediarix, GlaxoSmithKline) is licensed for the first 3 doses and through 6 years of age. DTaP, IPV, and Haemophilus influenzae type b (DTaP-IPV/Hib; Pentacel, Sanofi Pasteur) is licensed for all 4 doses through 4 years of age. DTaP-IPV (Kinrix, GlaxoSmithKline) is licensed for the last dose at ages 4 through 6 years. IPV (Poliovax, Sanofi Pasteur), the stand-alone vaccine, is licensed for all doses in infants, children, and adults.

The World Health Assembly set a goal in 1988 of eradicating polio worldwide. At that time, an estimated 350,000 cases of polio existed in 125 countries. That number decreased to 1604 cases in 2009.

Pediatrics. Published online September 26, 2011. Full text

Related Link
The Centers for Disease Control and Prevention provides extensive information on Polio Vaccination for healthcare professionals and patients, including downloadable patient teaching tools available in English and Spanish.



Thursday, October 06, 2011

 

warfarin dabigatran rivaroxaban

On September 8, 2011, the Cardiovascular and Renal Drugs Advisory Committee of the Food and Drug Administration (FDA) discussed data submitted in support of the new drug application for rivaroxaban for preventing stroke and non–central nervous system systemic embolic events in patients with nonvalvular atrial fibrillation. Supportive evidence came primarily from ROCKET-AF (Rivaroxaban Once Daily Oral Direct Factor Xa Inhibition Compared with Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation; ClinicalTrials.gov number, NCT00403767),1 in which more than 14,000 patients were randomly assigned in a double-blind fashion to either 20 mg of rivaroxaban once daily or warfarin therapy targeting an international normalized ratio (INR) of 2 to 3. The primary aim was to assess whether rivaroxaban was noninferior to warfarin, with a secondary aim of assessing superiority.

In ROCKET-AF, a noninferiority margin of 1.38 for the relative risk of stroke or systemic embolism was based on an approval criterion that rivaroxaban be superior to placebo by at least 50% of the margin by which warfarin is superior to placebo, as estimated from a meta-analysis of six placebo-controlled reference studies. Per-protocol “on-treatment” analyses were prespecified because of concerns in noninferiority trials that events occurring with equal probabilities after patients discontinue randomized treatments might dilute the trials' sensitivity to true treatment differences and thus increase the risk of falsely declaring a treatment noninferior.2 In the primary analysis, the relative risk of stroke or systemic embolism with rivaroxaban as compared with warfarin was 0.79, with a 95% confidence interval that excluded the prespecified noninferiority margin. The risk of major bleeding events was somewhat higher with rivaroxaban, especially when double counting is avoided by excluding hemorrhagic strokes that were included in the efficacy end point of stroke or systemic embolism.
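The 1.38 margin follows from the usual "50% retention" construction for noninferiority trials: the margin is set so that the new drug preserves at least half of the active control's estimated benefit over placebo, with the calculation carried out on the log scale. The Python sketch below is a hedged illustration; the placebo-versus-warfarin risk ratio of 1.90 is an assumed value chosen to reproduce the published margin, not a figure taken from the article.

```python
import math

# 50%-retention noninferiority margin, computed on the log-risk-ratio scale.
# M1 is an ASSUMED illustrative value for warfarin's advantage over placebo
# (expressed as a placebo-vs-warfarin risk ratio); the article only says the
# margin came from a meta-analysis of six placebo-controlled reference studies.
M1 = 1.90
fraction_retained = 0.50

# Margin that preserves at least `fraction_retained` of warfarin's benefit.
margin = math.exp((1 - fraction_retained) * math.log(M1))
print(f"Noninferiority margin: {margin:.2f}")  # ~1.38

# Noninferiority is declared when the upper 95% CI bound of the
# rivaroxaban-vs-warfarin relative risk lies below the margin.
upper_ci_bound = 0.96  # hypothetical value for illustration
print("Noninferior" if upper_ci_bound < margin else "Noninferiority not shown")
```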

ROCKET-AF had important strengths, including its double-blind design and the favorable efficacy results noted above. However, thorough analyses provided by the FDA identified important issues affecting interpretation of these results.

In noninferiority trials, the “constancy assumption” must be satisfied: the control treatment, as administered in the new trial, must have the same magnitude of benefit relative to placebo as it had in the reference trials used to estimate its effect. The noninferiority margin might need to be modified if the results in the control group are for a different patient population, intensity of treatment, or measure of outcome than was used in the reference trials. Concerns about nonconstancy in ROCKET-AF were related to the higher-risk patients enrolled in the study, the more than 5% of patients who discontinued follow-up due to “withdrawal of consent,” and the fact that the INR for patients in the warfarin group was in the therapeutic range (between 2 and 3) only 55% of the time — considerably less than the 62 to 73% seen in other recent clinical trials. When the FDA analyzed data only from ROCKET-AF sites whose patients' average time in the therapeutic range was above specified thresholds, they found that the relative risk of stroke or systemic embolism with rivaroxaban was considerably higher (near unity) if the threshold was 67%, whereas with a threshold near 55% (corresponding to sites with an average time in the therapeutic range of about 65%), the relative risk was closer to that observed in the study as a whole.

Even in noninferiority trials, per-randomization analyses should be conducted. These analyses avoid the bias that occurs with per-protocol on-treatment analyses when patients discontinue their randomized treatment for reasons related to the treatment itself and the patients who do so have a different risk profile from those who don't. The importance of per-randomization analyses is very apparent in ROCKET-AF. The on-treatment analysis was based on observations that were truncated at 2 days after discontinuation of randomized treatment — a time frame likely to miss events related to inadequate coagulation during the transition to alternative treatment. There was greater risk of such events in the group receiving rivaroxaban, with its 5-to-9-hour half-life, than in the group receiving warfarin, with its 40-hour half-life. There was a much higher rate of stroke or systemic embolism in the rivaroxaban group than in the warfarin group (31 vs. 12 detected events) between day 2 and day 7 after discontinuation of randomized treatment. In the per-randomization analysis that captured these events, the relative risk of stroke or systemic embolism with rivaroxaban was 0.88, with a 95% confidence interval of 0.78 to 1.03, so superiority was not established. A positive trend seen in the per-protocol analysis of myocardial infarctions was similarly attenuated. A striking increase in death rates after the discontinuation of randomized treatment further complicates the noninferiority assessment in ROCKET-AF.

The Randomized Evaluation of Long-Term Anticoagulation Therapy (RE-LY) (NCT00262600), which provided the pivotal data in support of dabigatran's approval, compared open-label use of dabigatran with warfarin in more than 18,000 patients. The estimated relative risk of stroke or systemic embolism in a per-randomization analysis was 0.66 (95% confidence interval, 0.53 to 0.82) in a setting in which the average time in therapeutic range with warfarin was 64%.3 Although the trial was not blinded and dabigatran's effect on the risk of myocardial infarction was slightly unfavorable, the results robustly support the superiority of dabigatran over warfarin. RE-LY is also relevant to deliberations regarding rivaroxaban's approval. According to FDA policy, “It is essential that a new therapy must be as effective as alternatives that are already approved for marketing when the disease to be treated is life-threatening or capable of causing irreversible morbidity (e.g., stroke or heart attack).”2,4 Does rivaroxaban satisfy this criterion? In particular, are additional data needed to evaluate whether rivaroxaban is noninferior to dabigatran?

The RE-LY results and uncertainty about the validity of the constancy assumption in ROCKET-AF raise concerns that rivaroxaban could be inferior to either dabigatran or warfarin, particularly when the latter is “used skillfully.” The apparent nonconstancy of warfarin treatment between the two trials is problematic, although it's unclear whether the lower average time in therapeutic range in ROCKET-AF reflects greater difficulty in caring for higher-risk patients or is an artifact of the protocol design and trial conduct, including the mandated blinding of INR monitoring. Further concerns relate to a trend toward higher event rates in the rivaroxaban group than in the warfarin group as patients were transitioned to usual care — excess events that weren't captured in the primary efficacy analyses. The FDA also noted that ROCKET-AF's once-daily dosing of rivaroxaban wasn't really supported by the available pharmacokinetic and pharmacodynamic data. If the apparent noninferiority of once-daily rivaroxaban to warfarin was due primarily to a low time in therapeutic range in the warfarin group and the exclusion of excess events after randomized treatment was discontinued, then that dosing strategy might be unacceptably inferior to dabigatran. These circumstances could lead to an unproven treatment displacing an effective treatment on the basis of overzealous promotion of more convenient once-daily dosing.

The majority of the advisory committee judged that ROCKET-AF's results supported approval of rivaroxaban for stroke prevention in patients with atrial fibrillation. Justifications included the strength of evidence for noninferiority relative to warfarin in a high-risk population, the expectation that evidence can be obtained to establish that risk will be reduced by short-term continuation of rivaroxaban when transitioning to other anticoagulant therapy, the belief that postmarketing studies can address FDA concerns that a twice-daily dosing regimen is more appropriate, and the interest in having an additional option that (some are convinced) adequately preserves the efficacy of existing treatments. It was suggested that rivaroxaban might be used in patients who have an inadequate response to or cannot take dabigatran or warfarin, although data are not available to directly address rivaroxaban's efficacy and risks in such settings. The FDA will take the advisory committee's discussion and other insights under consideration; the target date for FDA action, according to the agency's Web site, is November 5, 2011.

Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.

This article (10.1056/NEJMp1110639) was published on October 5, 2011, at NEJM.org.


Wednesday, October 05, 2011

 

simvastatine

From Medscape Internal Medicine > Medicine Matters

Clarifying Simvastatin Warnings -- It's Not Just 80 mg

Sandra A. Fryhofer, MD

Posted: 09/28/2011


Hello. I'm Dr. Sandra Fryhofer. Welcome to Medicine Matters. The topic is the simvastatin saga, our review of the FDA's latest warnings about this popular statin. Here's why it matters.

When my patients needed statins, I always started with simvastatin. It was generic. It was on all the pharmacy plans. The price point was right. Prescribing was hassle free. There were no complex forms to fill out and explain to my patients. But on June 8, 2011, the FDA released new warnings about the dangers of high-dose simvastatin, a change that affects many people: in 2010, an estimated 2.1 million US patients were prescribed a product containing 80 mg of simvastatin. The FDA also issued new warnings about dosing and drug interactions. Here are the highlights:

Restrictions for 80-mg simvastatin

Restrictions for intermediate- and low-dose simvastatin

Restrictions for simvastatin at any dose

The information on adverse effects with simvastatin is not new. In 2004, when the A to Z trial was published in JAMA,[1] an editorial expressed concerns about increased rates of myopathy in patients on simvastatin. The most recent study to cast aspersions on simvastatin was SEARCH, published in The Lancet in 2010.[2] It linked simvastatin 80-mg doses to increased risk for myopathy.

So why didn't the FDA act sooner? That's not clear, but the word is out now. Simvastatin problems are public property. Patients know about them and depend on us to clear up their medication regimens. Not all statins are created equal and the least expensive drug may not necessarily be the best for your patient. Please also check "Switching From Simvastatin 80 mg: How to Shop for Statins" from my column Staying Well. For Medicine Matters, I'm Dr. Sandra Fryhofer.


Tuesday, October 04, 2011

 

PTSD Dr. Brunner post-traumatic stress disorder

Why Don't All Traumatized People Develop PTSD?

Two genetic studies raise the possibility of identifying specific aspects of the stress response system that could be treated in trauma-exposed patients.

Only a minority of people exposed to severe trauma develop post-traumatic stress disorder (PTSD), which suggests that certain factors (some of them possibly genetic) may be involved in vulnerability to PTSD. Two studies look at this question.

Mehta and colleagues studied 209 people recruited from an inner-city hospital (90% black; mostly low-income) who had experienced at least one adult trauma; 70% had developed PTSD. The researchers focused on links among PTSD, sensitivity of the glucocorticoid receptor (GR), and a single nucleotide polymorphism (SNP) of FKBP5, a GR co-chaperone involved in negative feedback regulation of the hypothalamic–pituitary–adrenal (HPA) axis. Among PTSD patients, after adjustment for childhood and adult traumas and depression, greater GR sensitivity (measured by dexamethasone suppression) was associated with being an A allele carrier rather than with GG homozygosity, which was associated with lower baseline serum cortisol levels. Neither PTSD nor genotype alone predicted dexamethasone suppression. However, researchers identified another 41 genes (32 previously unreported) that participate in GR regulation; 19 appeared to form a network related to FKBP5 regulation of HPA axis activation in PTSD.

In Mercer and colleagues' study, 204 undergraduate women (mean age, 20), who had participated in a study of traumatic experiences, social supports, and sexual victimization, were reassessed twice after a lone shooter killed or wounded 26 people on campus (mean follow-ups, 3.2 weeks and 8.4 months). Genotyping of three loci in the serotonin transporter genetic region was performed. Severity of postshooting PTSD symptoms was predicted by the number of exposure events (e.g., hearing gunfire or being hurt), but not by previous traumas or social support. After statistical corrections and adjustment for the shooting exposure, higher PTSD symptom scores (especially avoidance) and acute stress disorder after the shooting were associated with a multimarker genotype that decreases expression of the serotonin transporter.

Comment: Studies of gene interactions are subject to false positive results (see JW Psychiatry Sep 12 2011) even after traditional corrections for multiple statistical tests. Still, these studies show that vulnerability to an abnormal stress response is genetically heterogeneous. People with a specific allele in a gene network that regulates the HPA axis may be vulnerable to suppression of cortisol production resulting from burnout of the stress response. Genetic interactions that reduce resilience of serotonergic systems that moderate arousal may increase susceptibility to excessive stress responses. As genotyping becomes cheaper and more reliably tied to clinical phenotypes, clinicians may be able to identify trauma-exposed people who would respond to interventions aimed at a specific dimension of stress response systems.

Steven Dubovsky, MD

Published in Journal Watch Psychiatry October 3, 2011


Sunday, October 02, 2011

 

stroke

Medical Management Is Superior to Stenting for Intracranial Arterial Stenosis

The rate of postprocedural strokes was unacceptably high in patients who received stents.

Stenting is now available for patients with intracranial arterial stenosis, an important cause of stroke. This procedure finally has been examined in a randomized trial, funded by the NIH and a device manufacturer. Patients with recent transient ischemic attacks or nondisabling strokes — attributed to intracranial arterial stenoses of 70% to 99% — received either "aggressive medical management" alone or the same management plus angioplasty and stenting. The qualifying artery was the middle cerebral in 44% of patients, internal carotid in 21%, basilar in 22%, and vertebral in 13%.

After 451 patients were randomized, the trial was stopped because of adverse outcomes in the stent group: At 30 days after enrollment, the primary endpoint (stroke or death) had occurred in 14.7% of stented patients and in 5.8% of medical-management patients — a highly significant difference; most strokes in stented patients occurred immediately after their procedures. Beyond 30 days, the ipsilateral stroke rate was 6% in both groups during average follow-up of 1 year.
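For a rough sense of the size of that 30-day difference, the absolute risk increase and the corresponding number needed to harm can be computed from the rates quoted above. This is a back-of-the-envelope Python sketch, illustrative only and not an analysis reported by the trial.

```python
# Back-of-the-envelope arithmetic from the 30-day rates quoted above
# (illustrative only; not an analysis reported by the trial).
rate_stent = 0.147    # stroke or death at 30 days, stenting group
rate_medical = 0.058  # stroke or death at 30 days, medical-management group

absolute_risk_increase = rate_stent - rate_medical
number_needed_to_harm = 1 / absolute_risk_increase

print(f"Absolute risk increase: {absolute_risk_increase:.1%}")  # ~8.9 percentage points
print(f"Number needed to harm:  {number_needed_to_harm:.0f}")   # ~11 patients
```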

Comment: In this study of patients with recently symptomatic intracranial arterial stenosis, medical management — consisting of aspirin, blood pressure treatment, lipid treatment, lifestyle modification, and a 3-month course of clopidogrel — clearly was superior to angioplasty plus stenting. Even discounting periprocedural strokes, stenting conferred no advantage during 1 year of follow-up.

Allan S. Brett, MD

Published in Journal Watch General Medicine September 29, 2011

Citation(s):

Chimowitz MI et al. Stenting versus aggressive medical therapy for intracranial arterial stenosis. N Engl J Med 2011 Sep 15; 365:993. (http://dx.doi.org/10.1056/NEJMoa1105335)

Medline abstract (Free)


