Chronic low back pain is one of the most common complaints addressed by primary care physicians. When conservative approaches fail, minimally invasive interventional procedures such as epidural steroid injections are often used. These injections are generally well tolerated; however, serious adverse effects can occur. The 2012 outbreak of fungal meningitis, linked to contaminated methylprednisolone acetate vials distributed by the New England Compounding Center and used for epidural injections, highlights the potential for serious complications.1 There is a need to assess how frequently, and for what indications, interventional spinal procedures are performed. Previous studies determined utilization patterns in the Medicare population only.2,3 The aim of the present study is to determine the annual number of visits associated with spinal steroid injections in the United States and to characterize these visits.

Methods

The National Ambulatory Medical Care Survey (NAMCS) is conducted annually by the National Center for Health Statistics (NCHS) to estimate utilization of ambulatory medical care services in the United States.4 The National Hospital Ambulatory Medical Care Survey (NHAMCS) collects data on the utilization of ambulatory care services in hospital emergency and outpatient departments.4 We queried the NAMCS and the NHAMCS from 1993 to 2010 for data regarding outpatient visits with the International Classification of Diseases, Ninth Revision procedure code for “injection of other agent into spinal canal” (V03.92). The estimated number of outpatient office visits in which a corticosteroid was associated with the procedure, the associated diagnoses, and the specialties performing these procedures were determined. Analyses were performed using SAS version 9.2 software (SAS Institute, Cary, NC).
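The estimation step described above can be illustrated with a small sketch. This is not the authors' SAS code; it is a hypothetical Python example in which the record layout, field names, and weight values are all invented. It shows only the general idea behind survey-based national estimates: each sampled visit carries a patient visit weight (PATWT in the NAMCS/NHAMCS public-use files) that is summed over qualifying visits.

```python
# Hypothetical sketch of a survey-weighted estimate. NAMCS/NHAMCS public-use
# files supply a patient visit weight (PATWT) that inflates each sampled visit
# to a national count. The record fields and values below are illustrative.

SPINAL_INJECTION_CODE = "V03.92"  # "injection of other agent into spinal canal"

# Toy records standing in for survey visit rows (procedure codes + visit weight).
visits = [
    {"proc_codes": ["V03.92"], "drug_class": "corticosteroid", "patwt": 3500.0},
    {"proc_codes": ["V03.92"], "drug_class": "unspecified",    "patwt": 2100.0},
    {"proc_codes": ["87.44"],  "drug_class": "none",           "patwt": 5000.0},
]

def national_estimate(rows, require_steroid=False):
    """Sum visit weights over visits matching the spinal injection code."""
    total = 0.0
    for row in rows:
        if SPINAL_INJECTION_CODE not in row["proc_codes"]:
            continue
        if require_steroid and row["drug_class"] != "corticosteroid":
            continue
        total += row["patwt"]
    return total

print(national_estimate(visits))                        # all spinal injections
print(national_estimate(visits, require_steroid=True))  # steroid-associated only
```

In the real analysis, the same weighted sums would be tabulated by year, setting, specialty, and diagnosis to produce the figures reported in the Results.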
Results

An estimated 12.1 million outpatient visits in the United States between 1993 and 2010 were associated with spinal injections, with 2.2 million more visits from 2008 to 2010 compared with 1993–1995. Of these visits, 1.9 million were associated with injection of a corticosteroid, with 721,000 more visits from 2008 to 2010 compared with 1993–1995, an increase of 915% (Fig. 1). Of these visits, 52.7% were to office-based physicians and 47.3% were to hospital outpatient departments. More female (66%) than male patients (34%) received steroid injections. The leading corticosteroids injected were methylprednisolone (65.7%); glucocorticoids, unspecified (5.6%); and triamcinolone (5.4%). According to NAMCS data, injections were most commonly performed by the “all other specialties” category, which includes pain management specialties (anesthesiology, pain medicine, and physical medicine and rehabilitation; 40.1%), followed by orthopedic surgeons (35%) and neurologists (14.3%). The leading diagnoses associated with these visits were thoracic or lumbosacral neuritis or radiculitis, unspecified (18.7%); degeneration of intervertebral disc, site unspecified (10.5%); spinal stenosis other than cervical (10.5%); degeneration of lumbar or lumbosacral intervertebral disc (8.4%); and lumbago (8.2%).

Fig. 1 The number of US outpatient visits associated with spinal steroid injections, with 95% confidence intervals.

Discussion

According to NAMCS and NHAMCS data, there has been a significant increase in the number of outpatient visits associated with a spinal steroid injection.
Manchikanti et al reported a similar trend in the Medicare population, with a 130% increase in the number of patients receiving epidural steroid injections from 2000 to 2011.2 In addition, the Centers for Disease Control and Prevention reported 749 cases of fungal infections linked to steroid injections across 21 states as a result of contaminated steroid vials, with 63 deaths.1 With an increasing number of these procedures being performed, the risk–benefit profile of spinal steroid injections should be weighed in light of the recent fungal outbreak, especially given that the efficacy of these procedures is debatable. A meta-analysis of epidural spinal injections for low back pain, which reviewed 29 randomized clinical trials, did not find a statistically significant benefit of these interventions at 6 months or longer when considering pain, disability, and patients undergoing subsequent surgery.5 Selection bias was identified in the majority of the studies. Additional high-quality randomized clinical trials are needed to provide evidence of efficacy for the use of epidural steroid injections in the treatment of back pain. The limitations of our study include an underestimation of the annual number of spinal steroid injections, because 46.7% of the 12.1 million patient visits with a spinal injection had either no associated specified drug code or were coded as a miscellaneous uncategorized agent. The procedure code V03.92 is not specific to epidural injections; it also includes other types of spinal injections. This code likewise does not capture data on the route of administration of epidural injections, which would have allowed for subgroup analyses.
Clarity in identification is a hallmark of medicine, in reference to the identification of patients, laboratory results, medications, and so forth. So, why, in the critical environment in which health care is delivered, are the roles and authority of healthcare providers not identifiable? There are two identifiers of position and authority. One is attire and the other is title. In earlier times, physicians wore a long white coat or a business suit, often with a stethoscope prominently displayed around their neck or dangling, jauntily, from their coat pocket. Nurses wore white uniforms and caps that indicated the school from which they had graduated and whether they were registered nurses or licensed vocational nurses. Surgical personnel wore scrubs. Laboratory personnel also wore long white coats, but they rarely ventured onto patient floors. Ancillary staff usually wore business attire and chaplains were either in clerical garb or wore suits or dresses with an appropriate religious symbol clearly visible. Today, medical students, in their short white coats, are probably the last vestige of a specific category of healthcare worker wearing specific attire. It is now not uncommon for physicians to make rounds wearing casual street clothes—even jeans or golfing attire. Nurses wear scrubs, as do occupational therapists, respiratory therapists, physical therapists, radiologic technologists, and phlebotomists; even ward clerks are wearing scrubs. More healthcare workers wear long white coats, including social workers, physician assistants, and nurse practitioners. The ability of a patient, patient’s family member, or even another healthcare provider to identify the appropriate person to answer a specific question or perform a certain task has become difficult. 
Even when a person’s functional capacity is embroidered on his or her lab coat (if he or she is wearing a lab coat), the name, degree, and profession are frequently obscured by a pocketful of pens, a notebook, or papers. Some healthcare facilities require specific attire (eg, different colored scrubs) for different allied health workers. Requiring specific attire based on one’s function is not universal, however, and when it is used, the types or colors of attire are not standardized from one facility to another. Regarding title, it is confusing for patients and their families when a healthcare worker introduces himself or herself as “doctor.” Most nonmedical people presume that the person is a physician or surgeon and may wonder why this doctor, who isn’t “their” doctor, is even talking to them. Thinking that this “doctor” is a medical doctor, they may inquire about their diagnosis, treatment plan, or prognosis when, in fact, the doctor is an allied health professional who has earned a doctoral degree. This identification problem has become more common as a result of two factors: the increased number of nursing and allied health professions that offer doctoral degrees and the increased number of instances in which allied health professionals interact directly with patients. What benefit is there in this loss of distinction? More important, what harm, risk, or inefficiency does it produce? There are no benefits other than promoting the individual’s right to dress as he or she chooses and to be addressed by a well-earned honorific, but the lack of distinction creates confusion. As noted above, the title “doctor” has lost its specificity.
At one time, the title referred to a licensed practitioner of the medical arts, either an MD or a DO and, even today, 94% of people surveyed randomly equate the title “doctor” with “physician,” even though 77% of people who use the title are not physicians.1 There are Doctors of Nursing, Doctors of Physical Therapy, Doctors of Occupational Therapy, Doctors of Laboratory Science, and so forth. These doctors, who have earned their degrees through advanced study, are justifiably proud of their accomplishments and refer to themselves, and expect to be addressed, by this title. There should be no objection to this, but within the healthcare environment, when it obscures rather than clarifies the situation, is it helpful? Some argue that the word “doctor” derives from the Latin “docere,” which means “to teach.”1 In Latin, the word used to indicate a medical doctor is “medicus.”2 As such, the word “doctor” should not be the exclusive property of physicians and surgeons; however, the word “medicus” has never caught on and the word “physician,” although useful in describing what one is, is not useful as a term of direct address. “Physician Johnson, may I speak with you regarding Mrs. Smith?” doesn’t roll easily off the tongue. 
Others argue that traditional and common usage should be considered.3 Based on common usage, the American Medical Association (AMA) passed a resolution resolving “that the title of ‘Doctor,’ in a medical setting, apply only to physicians licensed to practice medicine in all its branches, dentists and podiatrists.”4 The American Nurses Association rejected the AMA’s resolution, suggesting the creation of the term “medical doctor” as a differentiating title, stating, “Those who have earned a doctorate degree may be called a ‘doctor.’ There is no legitimate reason to exclude nurses from this practice … .”5 The American Society of Health System Pharmacists also objected to the AMA resolution, stating, “Therefore, the need for legislation as called for in the resolution to protect the titles resident, residency, and doctor seem [sic] unnecessary and unproductive.”6 The American College of Clinicians, although recognizing the traditional usage of the title, also recognized its legitimate use, even within the healthcare environment, by others who have earned a doctoral degree, stating, “We also believe that no one profession owns an educational degree (be it clinical or non-clinical, degree or title) especially at the doctoral level.”7 We present a number of suggestions to address this problem and list the advantages and disadvantages of each:

The use of nametags, with one’s area of specialty prominently indicated in a large, easy-to-read font.
Advantage:
▪ Clear identification of one’s specialty
Disadvantages:
▪ May be difficult to read at a distance or by someone who has poor eyesight
▪ May easily be obscured by overlying clothing or by being turned around, although a nametag with the information duplicated on the opposite side would solve the latter problem
▪ May not be the first thing one thinks to look for when a healthcare worker identifies himself or herself as “Doctor ________”

Prohibit the use of the title “doctor” within the confines of a healthcare facility, or in the context of healthcare delivery, by anyone other than an MD or DO.
Advantages:
▪ There is precedent for this. A number of states have passed laws, or have considered passing laws, preventing nonmedical doctors from using the title “doctor” in healthcare-related situations or in advertising.8–11
▪ Medical doctors would be immediately recognizable by their title.
▪ This restriction would not prevent nonmedical doctors from using their title in social or other nonhealthcare delivery environments.
Disadvantages:
▪ A prohibition against the use of the title “doctor” within the healthcare environment by anyone other than medical doctors is deemed excessively restrictive by a number of allied health professionals and their professional societies.
▪ A prohibition would restrict allied health professionals who have earned a doctoral degree from distinguishing themselves within the healthcare environment from those who have not earned that distinction.

Proper and clear communication of one’s special area of expertise.
Advantages:
▪ Would prevent misidentification
▪ Would have the positive effect of educating patients and their families about the level of education and training that one has obtained in his or her specialty
Disadvantage:
▪ Would be difficult to enforce

An enforced dress code, universally standardized.
Advantage:
▪ Would eliminate any confusion, once the “standard” is understood
Disadvantage:
▪ May be seen as imposing restrictions on people’s “individual rights”

Conclusions

The ambiguity and confusion regarding the roles of healthcare providers have increased as traditional dress codes have been eliminated and as more providers are earning doctoral degrees. Rather than considering the workplace as a venue for displaying one’s sartorial individuality or arguing about who has the right to use the title “doctor” and where they should be allowed to use it, we should recognize and address the issue of identity confusion in the healthcare environment and adopt some rational solutions to the problem. Studies need to be conducted with the goal of finding ways to improve the identification of physicians and the roles of all healthcare professionals.
Key Points

▪ Obese children experience abnormal sleep quality and increased diurnal sleepiness.
▪ Obese children have reduced pediatric quality of life scores.
▪ Longer sleep times in obese children older than 12 years are associated with decreased body mass indexes.
▪ Weight gain increases significantly after age 12, and intervention studies should begin at an early age, when possible.

Short sleep duration is linked to obesity and all components of the metabolic syndrome.1–6 Short sleep periods and sleep restriction can increase the risk for weight gain through multiple pathways, including hormonal and behavioral changes.1 Adipocytes secrete leptin, which suppresses appetite, and the gastric mucosa secretes ghrelin, which stimulates appetite. Sleep deprivation decreases leptin levels, increases ghrelin levels, and increases hunger and appetite ratings.7 These hormonal changes have been confirmed in larger epidemiologic studies and are related to an increased risk for obesity.8 This correlation between short sleep duration and an increased risk for obesity has been observed in multiple studies in both children and adults.1,9–12 Children with shorter sleep duration have a 58% increased risk for obesity; this risk was reduced by 9% for every 1-hour increase in sleep duration.13 This relation between sleep duration and obesity offers the possibility of therapeutic interventions, and primary care physicians need information about sleep-related interventions that may modify behavior and help limit weight gain or produce weight loss in children and their families. We designed this retrospective study to examine which factors, including sleep and behavioral patterns, are associated with a change in body mass index (BMI) in obese children referred to a dietitian for weight reduction.

Methods

Subjects

The Department of Pediatrics at Texas Tech University Health Sciences Center operates both general and subspecialty pediatric clinics.
Ages of the patients range from 4 weeks to 18 years; 55% of them are girls. The ethnicity distribution is black (8%), Hispanic (47.6%), white (38.1%), and other (6.3%). The payer sources include commercial insurance (27.7%), Medicaid (57.5%), self-pay (5.7%), government programs (8.6%), and miscellaneous (0.5%). In these clinics, pediatric patients whose BMIs exceed the 90th percentile for their age and sex are referred to the Healthy Kids Program in the same clinic for further evaluation and intervention by a licensed dietitian. Siblings at risk for obesity and children whose growth curve indicates that they are likely to meet the definition of obesity within the next 2 years also are referred to this program. The dietitian’s initial evaluation includes an interview with the children and parents and distribution of questionnaires related to dietary habits and other behavioral patterns. These children were counseled on healthy dietary and lifestyle activities related to obesity at the initial visit and at follow-up visits every 2 to 4 weeks. We obtained the charts of 77 children enrolled in the Healthy Kids Program. The primary goal of this pilot study was to identify possible relations between sleep habits and obesity in this population of obese and at-risk children before intervention; we therefore analyzed information only from the children’s initial visits to the dietitian. This study was approved by the institutional review board at Texas Tech University Health Sciences Center.

Measures

The outcome of interest was the child’s BMI at the time of the initial dietitian visit. Pediatric obesity usually is determined by whether a child’s BMI exceeds the 95th percentile for his or her age and sex. Because our study population is either already obese or at risk for obesity, this definition is not useful in this group of patients, and we modeled BMI as a continuous response variable in subsequent regression analysis rather than as an indicator variable for pediatric obesity.
The focus of the analysis was to identify factors associated with significant increases/decreases in BMI in this population and not which factors are associated with pediatric obesity. Our primary predictor variables of interest were sleep duration (hours), quality of life, and sleep habits. Sleep duration was measured for both weekdays and weekends and calculated from the self-reported typical bedtimes and wake times. Quality of life was measured using the Pediatric Quality of Life Inventory (PQLI; PedsQL version 4.0).14,15 The inventory includes physical functioning, emotional functioning, social functioning, and school functioning. A five-point response scale from 0 to 4 is used; higher scores correspond to a poorer quality of life. Other sleep-related instruments included the Pediatric Sleep Questionnaire (PSQ), the Pediatric Daytime Sleepiness Scale (PDSS), and a generic sleep questionnaire. The PSQ is a validated instrument to assess the presence of childhood sleep-related breathing disorders and prominent symptom complexes, including snoring, daytime sleepiness, and related behavioral disturbances in children.16 It consists of 22 questions in 2 parts (part 1, sleep; part 2, behavior), with a score of 0 or 1 allotted to a negative and positive response, respectively. A higher score corresponds to the presence of these symptom constructs. Sleepiness was assessed using the PDSS, which consists of 8 questions scored on a scale of 0 to 4. 
The 0-to-4 scale corresponds to a response of “never” to “always” to questions assessing symptoms related to sleepiness in children.17 A higher score is consistent with increased sleepiness and correlates with low academic achievement, shorter total sleep time, and a significantly higher level of diurnal sleepiness.17 An additional generic sleep questionnaire included questions about sleep habits (sleep and wake-up times on weekdays and weekends), naps (and their length), whether family members share the room or bed with the child, and the child’s self-assessment of how he or she feels upon waking (still tired vs rested). In addition, children were asked to rate their body figure using a well-described body figure scale for pediatric patients.18 The scale (from 1 to 9) consists of separate images of boys and girls ranging from emaciated to obese. Depending on the child’s age, parents also helped children complete the questionnaires.

Statistical Analyses

The control variables in this study included age and sex. Age was calculated in years and months. All of the statistical analyses were conducted using the statistical software package R version 2.8.1 (R Foundation for Statistical Computing, Vienna, Austria). General demographics and physical characteristics were summarized with means and standard deviations, medians and ranges, or categorical counts and percentages. For the univariate analysis, t tests were used to test for significant subgroup differences in both the average BMI and average weekday sleep duration and to identify possible confounding factors in the underlying relation between sleep habits and BMI. Given our small sample size, we reduced the number of predictor variables to achieve a more robust, stable final model. In addition, we were interested in characterizing the children by their sleep behavior given the battery of validated surveys and questionnaires.
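As a concrete illustration of one derived measure, calculating sleep duration from the self-reported bedtimes and wake times in the generic sleep questionnaire must handle bedtimes that fall before or after midnight. The sketch below is hypothetical (not the study's code) and assumes times are given as 24-hour "HH:MM" strings:

```python
# Minimal sketch of deriving nightly sleep duration from self-reported
# bedtime and wake time, handling the midnight crossover that weekday
# and weekend schedules introduce. Illustrative only.
from datetime import datetime, timedelta

def sleep_duration_hours(bedtime: str, wake_time: str) -> float:
    """Hours between bedtime and the next wake time (both 'HH:MM', 24-hour)."""
    fmt = "%H:%M"
    bed = datetime.strptime(bedtime, fmt)
    wake = datetime.strptime(wake_time, fmt)
    if wake <= bed:            # wake clock time is "earlier" -> next day
        wake += timedelta(days=1)
    return (wake - bed).total_seconds() / 3600.0

print(sleep_duration_hours("21:30", "06:30"))  # 9.0 (typical weekday schedule)
print(sleep_duration_hours("23:00", "08:00"))  # 9.0 (later weekend schedule)
print(sleep_duration_hours("01:00", "09:30"))  # 8.5 (bedtime after midnight)
```

Weekday and weekend durations would be computed separately, matching the questionnaire's separate weekday/weekend items.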
We clustered the children by all of their survey scores (PQLI Physical, PQLI Emotional, PQLI Social, PQLI School, PSQ1, PSQ2, and PDSS) using complete linkage hierarchical clustering.19 In hierarchical clustering, the distance between the sets of scores is used to measure the similarity of the children (small distance, similar survey scores, similar children). Complete linkage simply defines the distance between two groups as the maximum distance between children in the two groups. We chose the number of clusters/groups by looking at a tree of distances between the children (ie, a dendrogram) and by looking for obvious separation between groups of subjects (groups connected at high heights/large distances). This approach assumes no hypothesis about the number or type of groups. It determines the group structure solely using the distances between children and can be viewed as an exploratory technique. Before the multivariate analysis, we explored the univariate relation between weekday sleep duration and BMI depending on age. Children subsequently were categorized into three subgroups: age ≤8 years (young), 8 < age ≤12 (preteens), and age >12 (teenagers); we used linear regression to model the relation in each subgroup. These results led us to build a multivariate linear regression model that allowed for different sleep duration effects for children 12 years or younger and children older than 12 years. This model also included sex, ethnicity (white vs nonwhite), our survey score groups, self-assessment of current body figure, naps on weekends, and sharing a bedroom.

Results

Our final convenience sample comprised 77 children (62.3% girls, 37.7% boys) with a mean age of 10.42 years and a range of 2 years 8 months to 16 years 10 months. The mean height was 144.9 ± 18.7 cm; the mean weight was 73.02 ± 30.71 kg. The mean BMI was 33.08 ± 7.37 kg/m2. General characteristics, including quality of life and body figure self-assessment, are summarized in Table 1.
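The complete linkage agglomerative clustering described in the Methods can be sketched in pure Python. The score vectors below are invented, and the actual analysis used R; this toy version repeatedly merges the two clusters whose farthest members are closest until the chosen number of groups remains:

```python
# Illustrative pure-Python sketch of complete-linkage agglomerative
# clustering on per-child survey score vectors (scores are invented).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def complete_linkage(scores, n_clusters):
    """Merge clusters until n_clusters remain, using complete linkage."""
    clusters = [[i] for i in range(len(scores))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Complete linkage: inter-cluster distance is the LARGEST
                # distance between any pair of members, one from each cluster.
                d = max(euclidean(scores[a], scores[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Toy score vectors (e.g. PQLI subscales + PSQ1 + PSQ2 + PDSS per child).
scores = [
    [1, 1, 0, 1, 2, 0, 5], [2, 1, 1, 1, 3, 1, 6],   # two "normal"-looking children
    [9, 4, 3, 4, 8, 5, 7], [8, 5, 4, 3, 9, 4, 8],   # elevated scores
    [3, 9, 9, 9, 4, 6, 3],                           # high emotional/social/school
]
print(complete_linkage(scores, 3))  # [[0, 1], [2, 3], [4]]
```

In practice the number of groups is read from the dendrogram, as the authors describe, rather than fixed in advance.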
Univariate analyses (Table 3) identified several statistically significant subgroup mean BMI differences.

Table 1 General characteristics of study sample

Sleep characteristics of the study sample are reported in Table 2. The mean sleep duration was 9.09 ± 1.09 hours. Thirty-one children (61.1%) reported a difference between their weekday and weekend bedtimes of >1 hour. The distributions of PSQ1 and PDSS scores were fairly normal (means 5.27–12.55); PSQ2 scores were skewed to the right. Twenty-nine children (55.8%) reported feeling rested upon waking. Sixteen children (26.7%) indicated that they took naps during the weekdays (mean length 1.35 hours). A total of 20% fell asleep in school. Thirty-five children shared a bedroom (58.3%); 17 shared a bed (28.3%). Univariate analyses identified several statistically significant factors associated with sleep duration (Table 3).

Table 2 Sleep characteristics of study sample

Table 3 BMI and weekday sleep duration by patient characteristics

Figure 1 illustrates the survey clustering results. Three branches seem well separated from one another (ie, connected at larger distances); the three groups included 26 children (group 1), 22 children (group 2), and 5 children (group 3). The summary characteristics of these groups are enumerated in Table 4. Analysis of variance indicated that the three groups have different means for every survey score except the PDSS. The results in Table 4 indicate that group 1 had, on average, lower scores than the other two groups, and we refer to these children as the “normal” group. Group 2 had elevated scores as compared with group 1, indicating lower quality of life with increased sleep behavior issues. In particular, group 2 had an average PQLI Physical score that was three times higher than those of the other groups. Group 3 was smaller (n = 5) but was distinguished by its higher PQLI Emotional, Social, and School scores, with slightly higher PSQ2 and slightly lower PDSS scores.
Table 4 Characteristics of groups clustered by sleep behavior surveys

Fig. 1 Three groups of students clustered by sleep behavior surveys. Groups are represented by the three branches below the red line; subjects are labeled by their ID numbers.

After categorizing the children into three age subgroups as previously described (2–8, 8–12, and 12–17 years), we modeled the univariate relation between BMI and weekday sleep duration for each of the subgroups (Fig. 2, Table 5). The slopes for these relations vary widely. For young children, an increase of 1 hour of weekday sleep corresponded to an average decrease of 0.16 in BMI. For preteens, the same increase in weekday sleep corresponded to an average increase of 0.58 in BMI. For teenagers, the average BMI decreased by 2.24 for each 1-hour increase in weekday sleep duration, a markedly different effect. Given this disparity in effect size, we chose to include an indicator variable for whether the child was older than 12 years and a corresponding interaction term with weekday sleep duration to allow for different sleep duration effects for different ages in the model. The corresponding linear regression model (here without other confounders) is then BMI = β0 + β1(sleep) + β2·I(age > 12) + β3·sleep·I(age > 12) + ε.

Fig. 2 Unadjusted relationships between body mass index and weekday sleep duration by age subgroup. Age ≤8: black circles; 8 < age ≤12: red triangles; 12 < age: blue stars. The effect on mean body mass index associated with a 1-hour increase in weekday sleep duration is −0.16 (age ≤8), 0.58 (8 < age ≤12), and −2.24 (12 < age), with respective P values of 0.92, 0.63, and 0.16.

Table 5 Unadjusted subgroup linear regression results for predicting BMI with sleep duration

Table 6 includes both the unadjusted analyses for the selected variables and the final multivariate model (including our sleep behavior group covariates).
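The way an interaction term combines with the main effect in a model of this form can be checked with a few lines. The coefficient values below are those this study reports for the adjusted model (Table 6); the helper function itself is illustrative, not the authors' code:

```python
# Worked check of how the age-by-sleep interaction combines with the main
# effect, using the adjusted-model coefficients reported in Table 6.
beta_sleep       =  1.028   # weekday sleep effect, children <= 12 years
beta_interaction = -2.291   # additional effect, children > 12 years

def sleep_effect(age_years):
    """Model-implied change in mean BMI per extra hour of weekday sleep."""
    effect = beta_sleep
    if age_years > 12:
        effect += beta_interaction  # interaction term switches on
    return round(effect, 3)

print(sleep_effect(10))  # 1.028  (younger group: main effect only)
print(sleep_effect(15))  # -1.263 (older group: 1.028 - 2.291)
```

This reproduces the overall effect of −1.263 for the older age group quoted in the Results.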
In the unadjusted models, age, race (white), body figure, and sleep behavior group 2 were associated with an increase in BMI, and sharing a bedroom was associated with a decrease in BMI. In the adjusted model, the following effects were associated with a significant change in average BMI. For all of the children, a 1-year increase in age was associated with an average increase of 0.824 in BMI (P < 0.05). Older children’s BMI values were, on average, 19.594 higher than those of younger children (0.05 < P < 0.10). For children 12 years or younger, an increase of 1 hour in weekday sleep duration was associated with an insignificant BMI increase of 1.028; however, for children older than 12 years, this effect was augmented by a marginally significant decrease in BMI of 2.291, implying that the overall associated effect for this age group was −1.263. The average male BMI was insignificantly 1.561 lower than the average female BMI. Being white was associated with an insignificant average BMI increase of 1.956. Each unit increase on the self-assessment current body figure scale was associated with a significant average BMI increase of 1.781. Both taking naps on weekends and sharing a room were insignificant in the presence of our other variables. Although our sleep behavior groups showed a significant univariate relation with BMI (specifically, group 2 vs group 1), in the multivariate model these effects were attenuated and insignificant. The overall model was significant (adjusted R2 = 0.705; F = 8.703 on 11 and 40 df; P < 0.001).

Table 6 Multivariate linear regression results for predicting BMI

Discussion

This study involved a select group of children who were obese or at risk for obesity and referred to a dietitian for counseling; the primary objective was to determine whether there was a relation between sleep duration and BMI in these children.
The secondary objectives were to determine, using standardized questionnaires, whether there were associations between BMI or sleep time and quality of life indexes, sleep disturbance, diurnal behavioral problems, or diurnal sleepiness. Our analysis reveals potentially important associations that may have therapeutic implications, but it cannot determine causal relations. The mean BMI in these children was 33 kg/m2 and was substantially higher, on average, in older children (older than 12 years). In the univariate analyses, children with high PQLI (Physical), PSQ1, PSQ2, and PDSS scores were heavier, implying that children with high BMIs have more difficulty with physical activity at school and are more likely to experience sleep-related problems, diurnal sleepiness, and behavioral problems. These results may be explained, in part, by undiagnosed sleep-related breathing disorders, which can adversely affect quality of life in children.20–23 Moreover, although the average sleep duration of these children was 9 hours, approximately one-fourth took naps on both weekdays and weekends, and 20% fell asleep during school, indicating sleep deprivation. It is possible that the cumulative effect of even small sleep deficits has important consequences during child development.24 Our univariate analysis indicated that multiple variables were associated with increased BMIs and/or decreased sleep duration; however, our sample size did not support the use of all of these variables in the multivariate model. As such, we used cluster analysis to identify groups of children with similar sets of scores on the PedsQL, PSQ, and PDSS questionnaires and used the sleep behavior and quality of life group designation as a predictor variable with a normal referent group. This choice allowed inclusion of information from all questionnaires in the multivariate model, but did not measure the effects associated with individual questionnaires.
In the final multivariate model, there was a statistically significant interaction between age older than 12 years and sleep duration: in these older children, the average BMI fell 1.263 for each 1-hour increase in sleep duration, suggesting that sleep time is still important in obese older children and that studies of their sleep hygiene are warranted. The pronounced increase in the rate of change in BMI after age 12 in this cohort suggests that any intervention should begin at an early age to enhance its success. Children who shared a bedroom had decreased BMIs and longer sleep durations (univariate analysis). Older children who share a bedroom with a younger child may have their bedtime anchored to the younger child’s sleep schedule, which is likely longer just based on age.25,26 An alternative explanation may involve the family and its environment. Children who sleep in the same bedroom may have healthier intrafamily dynamics, including less stress and more physical activity. For example, it is possible that outdoor play with regular sun exposure modulates obesity through increased vitamin D levels.27 In addition, a better understanding of social networks and social activities in obese children should improve future studies or interventions.28 The survey instruments used in this study can help identify children with detrimental social and school difficulties at their current body weight and could provide the basis for counseling at schools. This study has some limitations. By design, the study focus group was a relatively homogeneous group of obese children. We did not compare these subjects with control subjects, and the results would not necessarily be applicable to nonobese children. We used several well-established survey instruments, but they may have overlapping underlying constructs. We are not certain whether one survey instrument performed better than the others.
In addition, there are likely confounding factors that were not addressed with the surveys and questionnaires used in this clinic; however, this study focused on characterizing and better understanding the subgroup of obese and at-risk children, who represent an ongoing, important pediatric problem. Conclusions This study provides useful information about the associations among BMI, sleep, and quality of life indexes in obese children. Clearly, BMI is much higher in older children, and these children have significant social and school problems based on validated questionnaires. There is a protective association between sleep duration and BMI in older children, even though they are already obese. This study suggests that early intervention with behavioral strategies may have potential benefit in obese teens; however, the modern adolescent lifestyle could complicate or confound any intervention.29 For example, electronic entertainment and communication devices can negatively influence sleep duration. Limiting their use is one potential approach to improving sleep hygiene; at a minimum, carefully controlling for and/or modeling their effect should be part of any larger population-based study.30,31
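The protective slope reported in the multivariate model above (a decrease of 1.263 BMI units per additional hour of sleep in children older than 12 years) can be illustrated with a minimal sketch; the patient scenario below is hypothetical, not study data.

```python
# Illustrative sketch only (not the authors' model): applying the reported
# slope of -1.263 BMI units per additional hour of nightly sleep.

def predicted_bmi_change(extra_sleep_hours, slope=-1.263):
    """Expected BMI change (kg/m2) for a given increase in nightly sleep,
    using the slope reported in the multivariate model."""
    return slope * extra_sleep_hours

# A hypothetical 14-year-old who extends sleep from 7 to 9 hours per night:
print(predicted_bmi_change(2))  # -2.526
```

The sketch simply restates the reported coefficient; it makes no claim about causality, consistent with the discussion above.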
Key Points Weight management in perimenopause is critical to prevent excess cardiovascular risk. Effectiveness of behavioral strategies depends on intervention intensity, adherence to physical activity and dietary recommendations, length of time, and ongoing maintenance. The primary care setting is ideal for identification of perimenopausal women who are overweight and obese, for implementation of behavioral lifestyle strategies to prevent the development and progression of cardiovascular disease, and for the promotion of overall health and functioning. Obesity has become a global health problem and has reached an epidemic level. In the United States and Europe, obesity, generally accepted as a body mass index (BMI) >30 kg/m2, affects approximately one-third of the total population.1 Obesity is a cause of many comorbid conditions, including depression, low self-esteem, sleep apnea, osteoarthritis, certain forms of cancer, and type 2 diabetes mellitus (DM2),2 and is a major contributing factor to cardiovascular disease (CVD), a leading cause of death and disability among women.3 CVD claims the lives of almost 500,000 women each year.4 In 2007, CVD caused, on average, one death per minute among women in the United States.5 Obesity raises the risk of CVD partly through effects on blood pressure, blood sugar, and blood cholesterol; it also contributes to insulin resistance and to elevations in thrombotic markers, such as fibrinogen, and inflammatory markers, such as interleukin-6 and C-reactive protein (CRP).6 The Heart Disease Prevention Guidelines for Women7 recommend maintaining a BMI of <25 kg/m2; quitting smoking; performing 150 minutes of moderate exercise or 75 minutes of vigorous exercise per week; eating a diet of fruits and vegetables, whole grains, and high-fiber foods; and limiting intake of saturated fat, trans fats, cholesterol, alcohol, sodium, and sugar. 
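The BMI cut-points cited above (<25 kg/m2 recommended, >30 kg/m2 obese) can be expressed as a minimal sketch; the example weight and height are hypothetical.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def category(b):
    # Cut-points cited in the text: <25 recommended, >30 obese.
    if b < 25:
        return "within guideline target"
    if b <= 30:
        return "overweight"
    return "obese"

# Hypothetical patient, 88 kg and 1.65 m tall:
print(round(bmi(88, 1.65), 1), category(bmi(88, 1.65)))  # 32.3 obese
```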
According to the 2014 Evidence-Based Guideline for the Management of High Blood Pressure in Adults,8 people with hypertension aged 60 years or older should aim to achieve a blood pressure (BP) of <150/90 mm Hg, whereas people younger than 60 years and/or with DM or nondiabetic chronic kidney disease should aim for a BP of <140/90 mm Hg.8 Weight management is now considered an essential intervention in addressing the epidemic of obesity and decreasing the risks of CVD and all-cause mortality.6,9 Evidence from randomized controlled trials (RCTs) indicates that exercise and dietary modifications decrease blood pressure and improve the lipid profile by raising high-density lipoprotein (HDL) and lowering triglycerides (TG) and low-density lipoprotein (LDL) in overweight and obese adults with metabolic abnormalities9 and should be considered an essential part of behavioral or therapeutic lifestyle changes. Perimenopausal women are at a higher risk of CVD compared with their premenopausal counterparts. Contributors to weight gain at menopause include declining estrogen levels, age-related loss of bone and muscle tissue, and lifestyle factors such as diet and a decrease in energy expenditure.10 The change in fat accumulation around the abdomen also has been implicated as the primary cause of the CVD seen in women.10,11 Theoretical frameworks underlying obesity interventions often address motivation as a facilitating factor. There are several theories explaining behavioral changes related to obesity interventions. 
Bandura’s self-efficacy theory emphasizes how cognitive, behavioral, personal, and environmental factors interact to determine a person’s motivation and behavior.12 Prochaska’s transtheoretical model of behavior change has been the basis for developing effective interventions to promote change in health behaviors.13 Self-determination theory addresses personality development, the relation of culture to motivation, and the impact that the social environment has on motivation, affect, behavior, and well-being.14 Encouraging behavior change to improve dietary habits and increase physical activity is essential to managing weight and promoting the heart health of perimenopausal women in primary care; doing so, however, can be challenging. As such, the purpose of this review was to examine research focused on modification of diet and physical activity behaviors to identify short-term outcome measures that would be appropriate, feasible, and achievable for weight-loss intervention in primary care for women in midlife to reduce risks and improve their cardiovascular health. Methods Computerized and manual searches were performed using the electronic databases PubMed, MEDLINE, CINAHL, Scopus, PsycINFO, and Google Scholar. The key words “menopause,” “obesity,” and “cardiovascular disease” were entered to retrieve the literature for the period 2003–2013. A manual search of the reference lists of retrieved articles also was completed. Inclusion criteria were RCTs of exercise, diet, or a combination of exercise and diet; English language; and women 45 to 75 years old at risk for CVD. Results Study Characteristics Of the 50 articles identified through the search, 13 met the inclusion criteria. The main reasons for exclusion of the other articles were lack of relevance to the research question (16), duplicate studies (4), studies conducted and published before 2003 (9), and non-RCT design (8). The characteristics of the included studies are presented in Tables 1 to 3. 
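As a quick arithmetic check, the exclusion counts reported above reconcile with the 13 included trials:

```python
# Study-selection counts as reported in the Results above.
identified = 50
excluded = {"not relevant": 16, "duplicate": 4, "pre-2003": 9, "not an RCT": 8}

included = identified - sum(excluded.values())
print(included)  # 13, matching the number of trials meeting inclusion criteria
```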
Articles identified as relevant were selected, followed by extraction of the needed information, including authors; place of the study; trial characteristics, including design, randomization, and duration; characteristics of participants, such as age group; inclusion and exclusion criteria; treatment interventions; attrition; outcome measures; and key findings. Table 1 Trials using exercise as a weight-loss intervention to minimize risk factors for CVD Table 2 Trials using dietary weight-loss interventions to minimize risk factors for CVD Table 3 Trials using dietary and exercise weight-loss interventions to minimize risk factors for CVD The quality of the selected RCTs was assessed by blinding, randomization, and dropouts, as described for each of the studies. All of the studies reported no significant differences in the main characteristics of the participants at baseline. Dropouts in the intervention and control groups ranged from 0%15 to 15%.18 Reasons for dropouts were the inability to continue training, dissatisfaction with randomization, loss to follow-up, relocation, time constraints, voluntary withdrawal, adverse effects, and equipment failure. The participants were people at risk for CVD, and the therapeutic interventions used were behavioral changes; dietary interventions alone; and different-intensity exercise, both alone and in combination with diet. Principal outcome measures were weight loss and biomarkers of CVD such as abnormal blood pressure, cholesterol, glucose, insulin, hemoglobin A1c (Hgb A1c), inflammatory markers, and BMI. In this review, the focus was on changes in these outcome measures that could be used in primary care to monitor intervention effectiveness. Sample Characteristics Participants included women and men 45 to 75 years old. The majority of the studies we selected were conducted using female subjects. 
The literature search revealed that few trials recruited only female subjects.16–18,21 The data from these trials (Tables 1–3) were extracted from the female subgroup only. Intervention Strategies The behavioral intervention strategies for weight loss were categorized and are presented by type of intervention: exercise only, diet only, and diet and exercise combined. Exercise-Only Intervention All of the exercise-only studies reviewed included physical activities: high-amount/vigorous-intensity exercise (65%–80% peak maximum oxygen consumption), low-amount/vigorous-intensity exercise, low-amount/moderate-intensity exercise (40%–55% peak maximum oxygen consumption),16 and moderate-intensity exercise (45 min 5 days/week).19 One study had four groups with moderate-intensity and vigorous-intensity exercise and the same calorie restriction for all groups (approximately 400 kcal/day deficit).20 All exercise sessions were supervised by a certified physiologist or physician. The value of exercise alone was based on changes in the following outcome measures: body weight, Hgb A1c, endothelial function, insulin resistance, adipocytokines, total cholesterol (TC), HDL, LDL, CRP, insulin, glucose, TG, and leptin. A study by Stensvold et al16 found that exercise alone did not contribute to weight loss or statistically significant improvement of CVD biomarkers, even though it exerted some beneficial effect on physiological abnormalities. Another study19 found no statistically significant differences between exercisers and controls in fasting or postprandial blood sugar levels, triglyceride levels, or lipid profile at 3 and 12 months. Results of one RCT20 revealed that both exercise groups (various-intensity exercise plus calorie restriction) and the control group (calorie restriction only) lost weight, but there were no statistically significant differences between groups. Changes in lipids, fasting glucose, insulin, and postprandial blood sugar levels were similar across all of the groups. 
Because most of these outcomes improved in each group, the effect of weight loss itself appears to be more favorable for improving cardiovascular status than improved fitness.20 A study by Okada et al21 revealed that HDL and adiponectin improved in both the exercise and control groups (P < 0.01); however, CRP was not significantly changed in either group. The contradictory results of these studies suggest that exercise alone may not be effective in CVD risk reduction, and other factors such as diet modification, improved sleeping habits, stress and depression management, and smoking cessation may account for a substantial decrease in CVD risk factors. The articles reviewed16,19–21 clearly demonstrated that there were no statistically significant changes in cardiac biomarkers between exercise and control groups, apart from one study21 that showed a slight decrease in the LDL level in both groups (P < 0.05). Primary care providers should be aware, however, that although exercise by itself may not affect all CVD risk factors, weight loss as a result of exercise appears to significantly improve the health and well-being of women who are overweight. Diet-Only Intervention Strategies for weight control generally recommend the adoption of a low-fat diet, which is associated with cardiovascular risk reduction.22 Noakes et al,22 Howard et al,23 Shai et al,18 and Azadbakht et al15 conducted studies to investigate the effects of different diets on CVD risk factors, and these yielded consistent findings: improvement in outcome measures (P < 0.001) was attributable to weight loss rather than to dietary composition. The study by Howard et al23 revealed that a diet consisting of low-fat food plus behavioral modifications had a modest effect on weight loss and other outcome measures. 
Similar results were achieved by Shai et al,18 who compared the effects of low-fat diets (30%), moderately low-fat diets (35%), and the Mediterranean diet with the same amount of calories on weight loss and other CVD risk factors. Women in the Mediterranean diet group lost more weight (−6.2 kg) than women in the low-fat diet group (−0.1 kg; P < 0.001). Improvements in BMI, Hgb A1c, CRP, and cholesterol were noted; however, LDL did not change significantly within or among groups. In the study conducted by Azadbakht et al,15 subjects in the intervention group following a Dietary Approaches to Stop Hypertension diet experienced statistically significant improvement (P < 0.01) in weight loss and other outcome measures when red meat was replaced by soy nut protein, compared with controls who ate red meat as the only source of protein. The results of the reviewed studies15,18,22,23 suggest that better reduction of CVD risk factors was achieved with the Mediterranean diet18 and that modest improvements in some outcome measures such as CRP, HDL, TC, and fasting blood sugar were the result of weight loss rather than dietary composition (P < 0.05). None of the reviewed studies showed significant changes in LDL. Primary care providers should focus more on behavioral counseling and modified dietary approaches that yield better reduction of CVD risk factors than the generally adopted low-fat-diet approach. Diet and Exercise Combination Intervention An RCT demonstrated that a diet and exercise combination intervention is more effective than either diet or exercise alone.24 In this study, postmenopausal women were randomized to four groups: a dietary intervention (1200–2000 kcal diet) group, a moderate-to-vigorous exercise group, a diet and exercise group, and a no-lifestyle-change group. 
Results revealed that subjects in the diet and exercise group lost more weight than those in either the diet-alone or exercise-alone group (−8.9 kg [−10.8%], P < 0.0001 vs −7.2 kg [−8.5%], P < 0.0001, and −2.0 kg [−2.4%], P = 0.034, respectively). The control group experienced a nonsignificant decrease in weight (−0.8%) and BMI. Two RCTs provided data about changes in the levels of inflammatory cytokines in response to a combination of diet and exercise.25,26 The intervention consisted of a calorie-restricted diet (−300 kcal) with a goal of 10% weight reduction and three sessions of supervised different-intensity exercise. Participants received health education regarding behavioral modification. Results of the first study showed that subjects in the diet and exercise group lost more weight than their counterparts (P < 0.01). Inflammatory biomarkers responded in a similar way: there was a significantly greater decrease in CRP and interleukin-6 levels in the diet and exercise group compared with the diet-only group or the controls (−41.7%, P < 0.001 vs −36.1%, P < 0.001, and −24.3%, P < 0.001 vs −23.1%, P < 0.001, respectively). In the second study, more weight loss also was achieved in the diet and exercise group than in the control group (−4.7 kg [7.3%] vs −2.0 kg [3.1%]), along with better reduction of the inflammatory biomarker monocyte chemoattractant protein-1 (−16 pg/mL [5.17%], P = 0.038 vs 71 pg/mL [23.7%]) and insulin resistance (−0.508 [28.6%], P = 0.006 vs 0.222 [12.4%]). There was a nonsignificant decrease in fasting insulin in both groups (P = 0.335). The findings of these studies show that a combination of moderate-to-vigorous exercise and a modified diet with moderate caloric restriction is the most effective strategy for achieving clinically meaningful weight loss and has the biggest impact on the reduction of CVD risk factors, especially CRP and other inflammatory biomarkers. Few studies provide long-term follow-up with participants of aggressive behavior modification trials. 
Okada et al21 conducted a 2-year follow-up during which it was shown that the control group developed cardiovascular events more frequently than did the exercise group (P < 0.05). The beneficial effect of exercise in reducing cardiovascular events persisted up to 24 months. Shai et al18 concluded that at the 24-month mark, the overall weight changes among the 45 women were −0.1 kg (95% confidence interval [CI] −2.2 to 1.9) for the low-fat group, −6.2 kg (95% CI −10.2 to −1.9) for the Mediterranean diet group, and −2.4 kg (95% CI −6.9 to 2.2) for the low-carbohydrate group. Longer-term follow-up (>12 months) also may be helpful. Additional RCTs involving obese minority populations at risk for CVD are needed. Clinical manifestations of CVD risk factors among different ethnic groups may vary. Healthcare professionals can play a pivotal role in managing obesity by diagnosing patients at risk and providing psychosocial support to encourage therapeutic adherence.32 Although some aspects of CVD risk such as age, sex, and family history are nonmodifiable, others are a result of lifestyle, which can be influenced by appropriate changes in diet and activity, as well as early pharmacologic interventions.33 Awareness of heart disease as a leading cause of death among women is suboptimal, and a gap in awareness exists between whites and racial/ethnic minorities.33 The US Preventive Services Task Force34 notes that weight loss is associated with a lower incidence of health problems and death and recommends that clinicians screen all of their patients for obesity and offer counseling and behavioral interventions to promote sustained weight loss. Despite the guidelines, physician-provided obesity care is inadequate.35,36 Conclusions A combination of vigorous exercise and a modified diet appears to be the best obesity-management strategy. If confirmed in larger studies, it may be an effective nonpharmacologic approach for the reduction of risk factors in the prevention and treatment of CVD. 
Prompt recognition, diagnosis, and intervention in cases of overweight and obesity may prevent progression to DM2 and major coronary events, which are leading causes of morbidity and mortality worldwide, and may significantly improve the quality of life of perimenopausal women while reducing medical costs and the worldwide economic burden. Little is known about genetic influences on obesity-related diseases among minority populations. Research is needed to determine whether public health messages aimed at reducing obesity and its consequences in racially and ethnically diverse populations may benefit from incorporating an acknowledgment of the role of genetics and epigenetics in these conditions.
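As a back-of-envelope check, the weight changes reported above as both kilograms and percentages can be reconciled to recover an approximate baseline weight (not stated directly in the text); the sketch below uses the diet and exercise group's −8.9 kg (−10.8%) figures.

```python
def percent_change(delta_kg, baseline_kg):
    """Weight change expressed as a percentage of baseline weight."""
    return 100 * delta_kg / baseline_kg

# A -8.9 kg loss reported as -10.8% implies a baseline of roughly 82 kg:
baseline = -8.9 / (-10.8 / 100)
print(round(baseline, 1))                        # 82.4
print(round(percent_change(-8.9, baseline), 1))  # -10.8
```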
Key Points In the setting of a well-structured training environment, ultrasound-guided percutaneous kidney biopsy (PKB) remains a safe and effective procedure. PKB training can be performed effectively when included in hands-on training of bedside renal ultrasound examinations. Indications for PKB in our cohort remained robust, with lupus nephritis and focal segmental glomerulosclerosis predominating in the recovered specimens. PKB training should remain an essential element of competent nephrology training. The kidneys are highly vascular organs, and any trauma or surgery poses a risk of severe bleeding. Patients with advanced kidney disease or uremia are believed to have an additional predisposition to bleeding.1 This predisposition becomes particularly problematic when these patients undergo invasive procedures, such as a biopsy of renal tissue.2 PKB remains an essential tool in the diagnosis and treatment of patients with renal diseases.3,4 The technique has improved significantly during the last few decades, with ultrasound (US)-guided needle placement5 and spring-loaded biopsy actuation facilitating the acquisition of tissue samples4,6–8; however, the indications for a procedure may evolve over time9 and may demonstrate a regional pattern of variation.10 The safety and efficacy of PKB with newer techniques and improved technologies have received relatively limited attention in exclusively training settings,11,12 and anecdotal experience suggests that certain procedure-oriented aspects of nephrology training may be endangered.13,14 Accordingly, there is a gap in our knowledge and understanding regarding how well the experience of larger-volume, experienced operators translates into structured training scenarios. We reviewed our experience with bedside kidney biopsies at our institution. 
Methods Study Population We performed a retrospective cohort review of our consecutive 2.5-year renal biopsy training experience (May 2007–November 2009) at the University of Mississippi Renal Fellowship Program. The basis of data recovery was the procedure teaching log of the first author, which included all inpatient renal biopsies obtained through the Division of Nephrology’s renal biopsy training during the index period. All of the biopsies were performed exclusively by renal fellows under real-time US visualization within the framework of a structured US-PKB training course. A certified sonography technician was present the majority of the time to assist during the visualization of the renal tissues. On days when there was no renal biopsy, the same US technician assisted the renal biopsy trainees in performing practice kidney USs, which were free of charge to the patients. This hands-on practical training was performed 1 half-day every 2 weeks during the first year of renal fellowship for all three first-year nephrology trainees (postgraduate year 4 [PGY4]). According to the US training log maintained by the first author, we performed these training USs on 50 to 60 patients annually, including obtaining renal sonograms for renal biopsies. General eligibility criteria for PKB at the University of Mississippi Medical Center included a Modification of Diet in Renal Disease (MDRD) formula–based estimated glomerular filtration rate >10 mL/minute/1.73 m2 for exclusively chronic-appearing renal processes; hematocrit >25%; platelet count >100,000/mm3; and normal results on standard tests of blood coagulation with prothrombin and activated partial thromboplastin times (PT/aPTT). 
Exclusion criteria included abnormal laboratory tests (hematocrit, platelets, or PT/aPTT, as above); excessive obesity with a body habitus precluding safe biopsy; known bleeding disorder, including excessive bleeding with menstruation or a history of bleeding with invasive procedures; poorly controlled hypertension (supine blood pressure >160/90 mm Hg); hydronephrosis, small kidney(s) (<8.0 cm), or multiple bilateral renal cysts or masses on renal US; nonsteroidal anti-inflammatory drug use during the 7 days before the biopsy; active urinary tract infection; or patient refusal. Of note, this cohort did not overlap with our former report describing in vitro platelet function testing results in renal biopsy patients.15 This retrospective study was reviewed and approved by the University of Mississippi Human Research Office. Measurements We collected data on age, sex, race, prebiopsy blood pressures, serum creatinine and blood urea nitrogen, random urine protein/creatinine ratio, urine sediments, electrolytes, PT/aPTT, and complete blood count (CBC) with pre- and postbiopsy hemoglobin values at various time points before the renal biopsy (Table 1). For the CBC, we recorded both baseline values and the lowest documented value occurring within 24 hours after PKB. Procedure-related variables, including procedure indications, number of passes, self-perceived difficulty, sample sufficiency, and immediate complications, were collected from the teaching log and the patients’ medical records. The medical records also were reviewed for procedure indications, baseline medical conditions, kidney size, number of recovered glomeruli, final pathological diagnoses, and any additional information regarding potential complications. We did not tally data on minor complications (local pain, local subcutaneous hematoma) or hematuria. 
Table 1 Baseline cohort characteristics and baseline medical conditions Procedure Description Patients presented for the procedure either with appropriate prebiopsy instructions from the outpatient nephrology clinic or were referred from the inpatient nephrology consult team. All of the patients fasted after midnight before the procedure, except for their medications. If the initial blood pressure was higher than 140/90 mm Hg on the day of the procedure, additional medical therapy was given to achieve a blood pressure ≤140 mm Hg systolic and ≤90 mm Hg diastolic. Selected patients with an estimated glomerular filtration rate <30 mL/min/1.73 m2 received intravenous desmopressin acetate 0.3 μg/kg on the day of the biopsy at the discretion of the consulting renal team. Before renal biopsy, the clinical team obtained or verified the presence of all necessary laboratory and clinical data and reviewed the scenario for potential contraindications (eg, aspirin exposure within 7 days). Intravenous access was obtained in the patients referred from the outpatient clinic and, if needed, prebiopsy laboratory studies were performed on the morning of the procedure. Immediately before the procedure, a prebiopsy renal US was performed by the renal biopsy team at the patient’s bedside to reassess kidney size, shape, and suitability of visualization for kidney biopsy, as well as to determine the presence or absence of interfering structural abnormalities (cysts, masses, stones, or hydronephrosis). This preprocedure US also helped the team identify the optimal biopsy site. The target site for biopsy was usually the lower pole of either kidney or the lower or upper pole of a renal allograft, depending on the anatomic relation of the graft to the surrounding structures. 
Biopsies were routinely performed with real-time US guidance using the ATL Philips HDI 5000 (Philips Healthcare, Andover, MA) in general abdominal scanning mode by two renal subspecialty trainees (PGY4 and PGY5 nephrology fellows who had completed 3 years of general internal medicine training). As a routine, the second-year nephrology fellow manipulated the US probe and assisted with the biopsy, and the first-year nephrology fellow performed the actual renal biopsy and needle passes. All of the procedures were performed under the direct supervision of a dedicated attending nephrologist (T.F.), who as a rule provided only guidance and advice and rarely physically intervened during the procedure. During the procedure, the skin was marked in the appropriate area, prepared, draped in a sterile fashion, and injected with local anesthetic. The US probe, covered with US gel, was fitted with a sterile sleeve, and the biopsy guide was attached. Subsequently, deep and, when possible, perinephric analgesia was provided using an 18-gauge spinal needle (9-cm length) under direct US visualization, with the needle guide channel fitted to the US probe. Thereafter, a superficial skin incision (approximately 3–4 mm) was made at the puncture point of the spinal needle to enable easy passage of the renal biopsy needle through the skin. At that point, a 16-gauge spring-loaded biopsy needle gun (Bard MaxCore, Bard Biopsy Systems, Tempe, AZ) was advanced to the renal capsule and actuated to obtain renal tissue under real-time US guidance. Passes were made until two to three acceptable cores were obtained or a maximum of six to eight total passes had been reached. Before the sterile field was broken, the cores were viewed under a microscope in the pathology department, and specimen quality and the need for further passes were considered depending on the adequacy of the specimens. The pathology faculty divided the specimens for electron microscopy and immunofluorescence. 
In the interim, moderate pressure was applied manually to the biopsy site. After the biopsy, a repeated renal US was performed to screen for postprocedure bleeding, and a dressing was applied to the biopsy site. The formal renal procedure note on the chart documented the type (native vs transplant) and site of the biopsy, the number of passes, and any possible complications. After the procedure, all of the patients were placed on strict bed rest and observed in a floor-level nursing care unit. Vital signs, including blood pressure, were monitored every 30 minutes for the first 4 hours and frequently thereafter. CBCs were obtained 4 to 6 hours after the procedure and the next morning. In the absence of complications or ongoing hospitalization, observation care discharge usually ensued by mid-morning, with appropriate postbiopsy and follow-up instructions. Data Analysis Upon review of both electronic and paper-based medical records, predefined information as approved by the Human Research Office was collected in an Excel data sheet (Microsoft Corporation, Redmond, WA). The estimated glomerular filtration rate was calculated with the abbreviated Modification of Diet in Renal Disease (MDRD) formula: 186 × (Scr)^−1.154 × (age)^−0.203 × 0.742 (if the subject was female) × 1.212 (if the subject was African American). Data were analyzed using IBM SPSS Statistics 19 (IBM SPSS, Armonk, NY) and reported as means ± standard deviations or medians with 25% to 75% interquartile ranges (IQR) for descriptive data; Pearson correlations and paired-sample t tests were used for statistical comparisons. Results Results from 64 PKBs (78.1% native, 21.9% deceased-donor transplant) were analyzed; the majority (70%) of native PKBs were performed on the left side. The main indications in our cohort were impaired renal function in 37 (52.9%) patients and proteinuria in 33 (47.1%) patients. A complete description of baseline cohort characteristics is shown in Table 1. 
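The abbreviated MDRD computation described in the Data Analysis section can be sketched in a few lines; the patient values in the example are hypothetical.

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, african_american=False):
    """Abbreviated MDRD estimate (mL/min/1.73 m2):
    186 x Scr^-1.154 x age^-0.203, x 0.742 if female, x 1.212 if African American.
    """
    egfr = 186 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if african_american:
        egfr *= 1.212
    return egfr

# Hypothetical example: Scr 2.0 mg/dL in a 50-year-old African American woman
print(round(egfr_mdrd(2.0, 50, female=True, african_american=True), 1))  # ~34
```

In the eligibility screening described above, a value of this estimate >10 mL/minute/1.73 m2 was required for exclusively chronic-appearing renal processes.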
The overall age and race composition of the study population reflected the patient characteristics of our institution, which serves a predominantly inner-city or otherwise economically disadvantaged population in the southeastern United States. A total of 14 renal fellows participated in our series, either as operators or as operator/observers. For the majority of the time (69%), two nephrology fellows were present during the procedure. A qualified US technician was present approximately half of the time (54.7%) during the biopsies. Further details regarding indications for kidney biopsy, procedure-associated variables, and recovered histology results are shown in Table 2. Direct attending involvement became necessary in three cases; however, two (3.1%) of these biopsy attempts remained unsuccessful. Self-perceived difficulty during the procedure was rated as “none” 68.8%, “mild” 15.6%, “moderate” 9.4%, and “large” 6.3% of the time. Specimens appeared sufficient in 58 (90.6%) and borderline in 4 (6.3%) bedside inspections. We recovered a mean of 18.8 (±11.5) glomeruli (median 18; 25%–75% IQR 12–24); notably, the four specimens characterized as “borderline” at the bedside were not meaningfully inferior to the rest of the samples (glomeruli counts of 12, 18, 19, and 21, respectively). Major recovered histological diagnoses also are shown in Table 2; only three specimens returned with no diagnostic changes. Recovered diagnoses of diabetic nephropathy and lupus nephritis on biopsy closely correlated with the preceding history (r = 0.605 and r = 0.842, respectively; P < 0.0001 for both). There was some positive correlation between an acute rise in serum creatinine and a histologic diagnosis of acute tubular necrosis on biopsy (r = 0.289; P = 0.02). Baseline hemoglobin correlated inversely with serum creatinine (r = −0.350; P = 0.005). Hemoglobin of 10.8 (±1.8) g/dL decreased to 10.2 (±1.9) g/dL within 24 hours after the biopsy, with a mean change of 0.55 (±0.73) g/dL (P < 0.0001). 
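The paired-sample t test used above for the pre- versus postbiopsy hemoglobin comparison can be sketched directly from its definition; the hemoglobin values below are hypothetical, not the study data.

```python
import math

def paired_t(before, after):
    """Paired-sample t statistic: mean within-patient difference divided by
    its standard error, as used for pre/post hemoglobin comparisons."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

# Hypothetical pre- and postbiopsy hemoglobin values (g/dL):
pre = [11.0, 10.5, 9.8, 12.1, 10.2]
post = [10.4, 10.0, 9.1, 11.6, 9.9]
print(round(paired_t(pre, post), 2))
```

A consistent drop across patients yields a large t statistic even when the mean change is modest, which is why the 0.55 g/dL decrease above reached P < 0.0001.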
For the five patients who received packed red blood cell transfusions (2 U each), initial hemoglobin values were 7.1, 11, 9.7, 8.1, and 8.4 g/dL, respectively, with the lowest values 24 hours after the PKB being 6.2, 8.8, 6.9, 6.5, and 8.4 g/dL. Blood pressures on the morning after the PKB were well controlled at 130/75 ± 16/13 mm Hg. On immediate postprocedure bedside US, we observed hematomas in three (4.7%) patients; one patient experienced persistent urine leakage at the biopsy site. No patients died or needed surgical or radiological intervention; five (7.8%) patients received packed red blood cell transfusions. Table 2 Indications for renal biopsy, procedure-associated variables, and recovered results (N = 64) Discussion We demonstrated excellent PKB success rates with reasonable complication rates in an exclusively training setting. The relatively recent emergence of real-time US monitoring5 and the use of spring-loaded biopsy needles4,6–8 may have contributed to the success of PKB in our series and likely shortened the learning process for our trainees. An earlier series by Whittier and Korbet, with heavy participation of trainees and spanning 2 decades from the early 1980s, noted a 5% transfusion rate; 0.7% of patients required invasive interventions to stop bleeding, and only one death (0.1%) occurred. 
A comprehensive meta-analysis of 34 studies and 9474 biopsies performed with automated spring-loaded biopsy devices and direct US visualization reaffirmed low overall complication rates, with macroscopic hematuria in 3.5% and red blood cell transfusion in 0.9% of cases.16 An additional large series from Norway, spanning two decades and including 8573 adult biopsies, documented an overall need for surgical or radiologic intervention of <0.2%.17 Of note, small center size (<30 PKBs/year) was associated with an increased complication rate.17 Some publications also have reported a decrease in hemoglobin of approximately 1 g/dL, versus our observed decrease of 0.55 (±0.72) g/dL. Although our blood transfusion rate (7.8%) was clearly larger than most in the reported literature (<0.5%7,18,19 or ≤1%6,17,20,21), this may partly reflect a lower baseline hemoglobin than was reported in some articles,18,20 with approximately one-third (35.9%) of our patients having hemoglobin levels below 10 g/dL. Our rate of specimens containing at least 10 glomeruli compares favorably with the published literature (70%).19 Undoubtedly, the clinical scenario and degree of overall illness also affect complication rates during kidney biopsies. Taking critically ill patients as an example, a series of 71 biopsies performed in intensive care unit patients documented a larger-than-usual complication rate, including the need for embolization in 2.6% of patients.23 The median number of recovered glomeruli was 21 (25%–75% IQR 12–28), and one biopsy failed to recover any renal tissue. This particular cohort also had an overall mortality rate of 22%, commensurate with their degree of illness in the ICU.23 In our experience, first-year nephrology trainees observed and participated in approximately five to eight renal biopsies to acquire the basic skills needed to proceed with supervised biopsies on their own. 
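The median and 25%–75% IQR summaries quoted above (and used for our own glomerular counts) are standard quartile computations; a minimal sketch with hypothetical counts, not data from either series:

```python
from statistics import median, quantiles

# Hypothetical per-biopsy glomerular counts (illustrative only)
counts = [12, 24, 18, 19, 21, 12, 18, 28]
med = median(counts)
# quantiles(n=4) returns the 25th/50th/75th percentiles ("exclusive" method)
q1, _, q3 = quantiles(counts, n=4)
```

Note that different quantile methods (exclusive vs. inclusive) interpolate slightly differently in small samples, which can matter when comparing IQRs across small biopsy series.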
Our trainees, however, also received regular biweekly, half-day bedside US training throughout the first year of fellowship and benefited from close and repeated interactions with certified US technicians. Such an arrangement required a significant commitment from the leadership of the renal fellowship to dedicate trainees’ time to the biweekly course, along with a vigorous assumption of responsibility for the training process by one dedicated attending physician. The optimal length of postbiopsy monitoring for native kidney biopsies also remains debatable. Although some authors still advocate 24-hour monitoring,3,24 this practice has been challenged as possibly no longer necessary in the era of direct US visualization, and monitoring of 6 to 8 hours may be sufficient.12,25 Of note, in our practice we learned to pay close attention to relative changes in blood pressure in the first 4 to 6 hours after renal biopsy. In addition, subjective symptoms of abdominal pain, rather than back pain, also proved helpful in predicting retroperitoneal hematoma.15 As expected for our population, the majority of our biopsies recovered diagnoses of lupus nephritis or focal sclerosis. Chronic, nonspecific scarring also was frequent, commensurate with the escalating burden of chronic kidney disease in the southeastern United States. In addition, an acute increase of serum creatinine bore some association with acute tubular necrosis, as reported previously.21,26 The correlations between preexisting lupus/diabetes and the recovered histological diagnoses of these conditions were highly statistically significant, in keeping with the growing incidence of chronic kidney disease among these patients. Analyses of larger patient databases would be beneficial to fully explore the correlation between clinical and histological findings. 
Several limitations of our study should be noted, including the single-center design, the relatively small number of enrollees, the retrospective nature of data collection, the lack of a control or comparator group, and the subjective nature of rating several aspects of the study, such as self-perceived difficulty and specimen adequacy. Neither conventional bleeding time27 nor in vitro platelet function testing15 was performed as part of our protocol. As a routine, we did not perform formal follow-up ultrasonography studies via the radiology department as part of this series; notably, in past studies, postbiopsy perinephric hematoma on formal US was not found to be particularly helpful in predicting subsequent adverse events.15,28 In our experience, relatively poor visualization of the kidney with major loss of anatomic detail on bedside US after biopsy proved to be an ominous sign predicting subsequent major bleeds and transfusion needs.15 Furthermore, we did not document body mass index (BMI) or body weight for our patients, nor did we record the depth of tissue sampling during the procedure. However, reliance on BMI in triaging patients for renal biopsy has declined in our practice during the last decade; there appears to be no replacement for being able to effectively visualize the kidney at the bedside to determine whether the patient is a candidate for renal biopsy. Some of the literature also has challenged the significance of high BMI in predicting complications of renal biopsy.21 Proper patient compliance with instructions is key to safe bedside biopsies. We carefully avoided attempts at biopsy in patients who were confused or demonstrated questionable compliance because of cognitive impairment. Similar to published experience,23 we were able to perform biopsies without difficulty on sedated patients undergoing mechanical ventilation in the lateral decubitus position. Our study also has some relative strengths. 
The practice of renal biopsy and the complication rate were appropriate for the skills of our trainees and closely reflected the real-world practice of an active training program. Visualization of renal hematoma did not reflect high-sensitivity formal radiologic evaluation, but rather the real-world skills of practicing nephrologists. Indeed, the fact that multiple primary physicians performed both the biopsies and the US examinations, and that we were still able to achieve the results we did, lends credence to the effectiveness of our teaching paradigm. Conclusions In this cohort of patients from the southeastern United States, indications for renal biopsy remained broad and varied. A large array of significant diagnoses was recovered, with lupus nephritis and focal segmental glomerulosclerosis predominating in this largely African American cohort. There was a close correlation between preceding clinical diagnoses and recovered histologies for systemic lupus erythematosus and diabetes mellitus. Finally, our results suggest that percutaneous renal biopsy, under proper US visualization and in a well-structured training environment, is a reasonably safe and effective procedure, even when performed by relatively inexperienced physicians-in-training. Accordingly, it should remain an essential element of competent nephrology training.