by Amy Haddlesey
Editor’s Note: Lifebeat Newspaper is happy to present the first installment of our Spotlight on a Researcher series, which will feature articles based on interviews we conducted with researchers across Canada and here in Kingston about their work.
I worked with Dr. Rolando Del Maestro, a researcher at the Neurosurgical Simulation and Training Centre within the Montreal Neurological Institute. Dr. Del Maestro has made a lasting impact on the field of brain tumour treatment and care as a practicing neurosurgeon and a co-founder of the Brain Tumour Foundation of Canada, as well as through his contributions to brain tumour research. Currently, Dr. Del Maestro is working on research involving a neurosurgical simulator called the NeuroVR. Earlier in the year, I had the opportunity to interview Dr. Del Maestro to discuss what led him to pursue research, his current work on the NeuroVR, and what starting the Brain Tumour Foundation of Canada meant to him.
At the start of his career, Dr. Del Maestro attended the University of Western Ontario from 1967 to 1973, where he received his M.D. Over the next five years, he completed a rotating internship and his residency in neurosurgery, with little exposure to research. During his time at Western, Dr. Del Maestro met a researcher from Sweden who was very influential in his decision to pursue a Ph.D. It was during his Ph.D. in Sweden that Dr. Del Maestro developed a new appreciation for what he calls the “complete arc” in medicine and science, a term he uses to describe the deep-rooted connection between research and clinical practice. He mentioned in particular that knowing the chemistry behind what he works on in his clinical practice made all the difference. From this experience forward, it became clear to him that to make a difference in his field, he would need to combine surgical experience with research.
The combination of surgical experience and research has led Dr. Del Maestro to his current research focus on neurosurgical simulation. The NeuroVR, the central tool of his studies, is a neurosurgical training simulator developed with the National Research Council of Canada (NRC). The simulator uses haptic feedback (the sense of touch) and virtual reality to create simulations that resemble neurosurgery, especially the resection of brain tumours. The possible use of simulation in the neurosurgical community became apparent to Dr. Del Maestro as he noticed during his career that there was a near standstill in how operations and training procedures were being conducted around the world. This observation caused him to ask himself, "how do I make the field of neurosurgery better globally?" To answer this, he looked to the already established global use of simulation in the aviation industry. At the time of the NeuroVR’s inception, the aviation industry had three key practices in place to reduce the occurrence of accidents: 1) early warning signs (warning lights, control callouts, etc.), 2) flight simulators to train pilots before they ever fly their first craft, and 3) group training, in which pilots learn from each other in a proper training environment. In the field of neurosurgery, the only practice present was the monitoring of early warning signs (blood pressure, breathing rate, etc.), and so Dr. Del Maestro concluded that the field was decades behind where it could be. Instead of seeing this delay as an obstacle, he saw an opportunity for rapid improvement by incorporating simulation into neurosurgical training, using the aviation industry’s practices as a model. As a result, his current projects focus on validating the NeuroVR as an accurate training tool for neurosurgery. The goal is to improve the training of young neurosurgeons and reduce neurosurgical disasters related to human error, modeled after the reduction in fatal aircraft accidents following the introduction of simulators in aviation training. Introducing an idea as novel as neurosurgical simulation training is not without its challenges. In our interview, Dr. Del Maestro described the disruption to current practices that new models cause as a necessary step in innovation, rather than a hindrance or reason to turn back.
Beyond his research efforts involving brain tumours, Dr. Del Maestro has also had a deep social impact on the field. During his career as a neurosurgeon and his wife Pamela Del Maestro’s career as a nurse, both came to see a need to help people with brain tumours, and those affected by them, beyond the scope of clinical care. This passion to do more for their patients led Dr. Del Maestro, along with Pam Del Maestro and Steve Northey, to create the Brain Research Fund Foundation of Canada in 1982 (now known as the Brain Tumour Foundation of Canada, or BTFC). The foundation acts as both a support system and a research fund. It organizes support groups and events geared towards empowerment, cultivating hope, and emotional support for people with brain tumours and their family members. The BTFC has also funded extensive research into potential treatments and cures for brain tumours. Dr. Del Maestro considers the BTFC one of his most important contributions, since it has allowed him to help those affected in a way that will continue after him.
Through the foundation, his neurosurgery career, and his research, Dr. Del Maestro has helped the field of brain tumours from nearly every angle. The same passion that fuels his research also started the Brain Tumour Foundation of Canada, and it allows Dr. Del Maestro to withstand adversity and skepticism in the emerging field of neurosurgical simulation. He mentioned that enacting change is always difficult and gradual, especially in the world of medicine. Instead of being overwhelmed or discouraged by this difficulty, Dr. Del Maestro is fascinated by complexity and challenge, a trait that seems to be a key ingredient to his success.
by Wara Lounsbury
It’s the start of a new semester and a new year, and the return of students to Queen’s has been heralded by fresh white snow blanketing the campus. This crystalline transformation is a physical manifestation of the blank slate that a new year represents: an opportunity to resolve to make a change for the better. Some of these resolutions may have fallen by the wayside already (I for one have not held to my resolution to exercise more, but who was I kidding, really?), but some of us still hold steadfast. After all, with the ushering in of a new year comes newfound motivation to be more productive, exercise more, or lose some of that weight gained after feasting during the holidays. One of the most common New Year’s resolutions is to lose weight, whether through exercise or dieting. Unfortunately, there are many dangers associated with diets, some widely publicized and some less known. In fact, dieting can sometimes even be fatal, as has been argued in the Terri Schiavo case.
by Angela Chen
In the past decade, specifically in North America, the gluten-free (GF) diet has come to be regarded as the new “healthy” way of eating. Many restaurants now serve GF meal options, and gluten is sometimes treated as an unhealthy component of our diets, even for people who do not have celiac disease (a medical condition involving an immune reaction to eating gluten, which can damage the small intestine). Even though many individuals promote the GF diet, this widespread idea seems to be based more on single case studies and personal experience than on systematic research. Before eliminating gluten from one’s diet, it is worth considering the potential harm doing so could cause.
A few recent studies have indicated that a low-gluten diet may compromise overall health. A study carried out by a collaboration between Columbia University and Harvard Medical School found that a low-gluten diet was related to increased heart attack risk. After following the diet and overall health of 100,000 people between 1982 and 2012, the researchers found a correlation between having little to no gluten in the diet and experiencing, or being more prone to, heart attacks. Other research on the effects of a GF diet on gut microbiota and immune function is still under analysis, but some findings seem to indicate that gluten is important for certain components of immune function.
Like many other health fads, the GF diet is difficult to assess with conclusive, long-term data on health benefits and risks because of the speed at which such fads gain popularity. For example, juice boxes were once thought to carry many health benefits, an idea later countered by research showing that most juices contain excessive sugar. Similarly, the negative health effects associated with using artificial sweeteners as sugar substitutes show that health fads which balloon in popularity are often not critically studied or analyzed first. This lack of critical analysis may reflect the general public’s desire to see immediate results rather than taking the time to make an informed decision.
Individuals who follow a GF diet often eat GF substitutes. A 2015 systematic analysis conducted by Missbach and colleagues revealed that GF food substitutes have no clear health benefits. Furthermore, replacing gluten-containing foods with GF substitutes comes at a significant cost: price increases range from 200% to 300% relative to the gluten-containing version of the food, meaning a GF product can cost two to three times as much as its conventional counterpart. GF substitutes are also high in synthetic hydrocolloids and gums. Therefore, individuals who do not have celiac disease do not seem to benefit from a GF diet. Additionally, some have suggested that by not giving the body gluten to metabolize, a GF diet weakens the mechanism for digesting gluten, which they claim can result in celiac disease in individuals who previously could digest it. On this view, such an outcome is preventable simply by allowing the body to keep processing gluten if it is capable of doing so.
A final consideration is one of public health. In her response to the gluten-free fad and the danger it poses for celiac individuals, Grabowski commented on her distrust of restaurants to provide truly GF meals. She writes that although many restaurants have GF options, they may not be training and educating their servers on how to handle GF foods. It is not uncommon for servers to neglect to change gloves or serving plates, which often results in cross-contamination. McIntosh et al. (2011) looked at 260 foods claimed to be gluten-free and found that approximately 10% of the samples contained gluten. For individuals who do not have celiac disease, this cross-contamination will go unnoticed; for celiac individuals, however, it is a real risk.
In conclusion, while the GF diet may produce immediate physical results and has been popularized by celebrities, research shows that it may cause more harm than good in the long run. It is therefore important for individuals to learn about the effects of the GF diet before excluding gluten. Rather than cutting gluten entirely, it may be better to eat it in moderation; as with most foods, too much or too little is rarely good. Individuals should turn to research and credible physician recommendations before making a decision about their diet that will ultimately impact their overall health.
by Haley Richardson
Climate change: it’s something we hear about nearly every day, whether in class, on the news, or in casual conversation. While most people know that climate change is not a good thing, many are not aware of the specific implications of changes in the earth’s temperature and atmosphere. One major yet less discussed effect of climate change is how rising global temperatures can shift the habitable range of a species. More specifically, infectious agents, as well as the organisms that carry them, can begin to populate areas of the world that were formerly too cold for them to survive, spreading disease to new regions.
Infectious agents, like all other biological entities, have specific environmental requirements that determine where they can grow and reproduce at the fastest rate. Often, these environments are hot areas with abundant water or a humid atmosphere. As a result, the areas of the world with the most infectious diseases tend to be the tropics, particularly after the monsoon season brings large amounts of rain. However, as global temperatures rise, the formerly hot, humid areas of the tropics may become too dry for many disease-carrying organisms, causing them to migrate to more temperate zones. By contrast, regions formerly too cold for these organisms will become warmer and more humid, allowing species ranges to expand further north. Furthermore, large-scale extreme weather events such as El Niño and La Niña are already increasing in frequency, bringing with them new organisms and new diseases to the areas they affect.
For example, the spread of malaria is primarily attributed to the organisms that carry it: mosquitoes. Although many parts of the world contain mosquitoes, only certain species in the Anopheles genus can carry the malaria parasite. Malaria-carrying Anopheles primarily live in sub-Saharan Africa, where the temperature and humidity of the region are ideal for the mosquito’s life cycle. However, as climate change causes global temperatures to rise, sub-Saharan Africa is expected to become too arid for the mosquitoes to thrive, and scientists predict that they will migrate as far north as southern Europe.
The consequences of the spread of diseases like malaria could be enormous, especially in less developed countries where access to health services and vaccination is limited. Additionally, by migrating into regions it has never occupied, the Anopheles mosquito would spread the malaria parasite to populations that have had no chance to build up immunity to the disease, amplifying its effects. Other vector-borne diseases such as yellow fever, sleeping sickness, and dengue fever, as well as bacterial infections such as cholera and Lyme disease, are also predicted to spread faster and into previously unaffected areas as global temperatures rise. All of these diseases, in their newly acquired ranges, could spell disaster for many people worldwide, and in some areas they already have.
However, lest this article be all doom and gloom, there are several initiatives the international community has proposed to manage the spread of infectious diseases under climate change. Besides the obvious solution of slowing carbon emissions to decrease the rate of warming, scientists and lawmakers alike have devised systems to monitor the spread of infectious diseases, ensuring that responses to outbreaks will be faster and more efficient than before. Institutions such as the Pacific ENSO Application Centre have developed early-warning systems to detect extreme weather events likely to lead to disease outbreaks, allowing them to warn governments to prepare relief in advance and educate the public on prevention. Other institutions, such as the World Health Organization, actively provide relief to disease-ravaged areas and lobby governments to provide funds and support to affected regions. Furthermore, scientists around the globe are working to develop vaccines against the planet’s deadliest diseases, as well as ways to mass-produce them cost-effectively. Thus, as climate change alters the environment (perhaps irreparably) at a frighteningly fast pace, we can at least hope that human progress will move even faster.
by Amy Haddlesey
As students, we are no strangers to staying up late doing schoolwork or the hardship of getting up early for an 8:30 class. A good portion of the population finds one of these activities easier than the other, which means people tend to fall into one of two camps: night owls or early birds. The population is said to follow a normal distribution in how we organize our behaviour within the 24-hour day, with most of us between the two extremes, but there are still individuals who lean towards one or the other. The term ‘chronotype’ describes your preference for the timing of sleep and wakefulness, and is divided into late chronotypes (LCs), intermediate chronotypes (ICs), and early chronotypes (ECs). ECs are characterized by difficulty staying up late, and LCs by difficulty getting up early. Chronotype is both age- and sex-dependent; interestingly, a higher percentage of females are ECs.
Chronotype-specificity is dictated by the interplay between neural circadian rhythms and homeostatic oscillators. Both regulate the cellular processes that tell our bodies when to sleep, wake, and eat. While circadian rhythms, or the “body clock”, can respond to external factors (e.g., sunlight), homeostatic oscillators are considered internal, independent regulators. It is the interplay between the two that governs the overall fluctuation between sleep and wakefulness over the course of each day. Recent research into the genetic basis of our inner clocks has shown that circadian rhythms are important time-reference systems that interact with the environment, and understanding what most affects circadian clock function could lead to interventions that mediate clock dysfunction and improve human health and welfare. In one such study, more than 80 different genes were shown to be expressed differently between late and early chronotypes in fruit flies. Furthermore, it wasn’t expression alone that separated the groups; there were also different genetic variants present in late versus early chronotypes. Overall, many factors beyond your control are at play in determining your preference for how you organize your day.
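For readers who want a more concrete picture of this interplay, the classic “two-process” model of sleep regulation offers a standard textbook formalization (it is a general sketch, not the model used in the fruit-fly study above, and the symbols here are illustrative): homeostatic sleep pressure $S$ builds during wake and dissipates during sleep, while a circadian process modulates the thresholds at which sleep begins and ends.

$$S_{\text{wake}}(t) = U - \bigl(U - S_w\bigr)\,e^{-t/\tau_r}, \qquad S_{\text{sleep}}(t) = S_s\,e^{-t/\tau_d}$$

$$H^{\pm}(t) = H^{\pm}_0 + a\,\sin\!\left(\frac{2\pi t}{24\ \text{h}} + \phi\right)$$

Here $U$ is the upper limit of sleep pressure, $S_w$ and $S_s$ are the pressure at waking and at sleep onset, $\tau_r$ and $\tau_d$ are rise and decay time constants, and $H^{+}$ and $H^{-}$ are circadian-modulated upper and lower thresholds: sleep begins when $S$ climbs past $H^{+}$ and ends when it falls to $H^{-}$. In this picture, a chronotype corresponds roughly to a shift in the circadian phase $\phi$, which is one way to see how the same homeostatic machinery can produce owls and larks.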
Studies have found that the differences between chronotypes extend beyond sleeping preference. Different chronotypes are also associated with differences in cognitive performance, gene expression, endocrinology, and lifestyle. Most notably, LCs tend to suffer from a conflict between internal and external time (‘social jetlag’) that may cause them more mental stress. In other words, a night owl’s tendency to stay up late and sleep through the morning comes into conflict with the typical hours of the social day, which may cause LCs to experience jet lag-like symptoms as they try to adjust. Similarly, LCs may have to adjust to working or school hours. One paper suggested that with an increased understanding of chronotype-specificity, work schedules could ideally be designed to fit your wake/sleep schedule. It is important to note, however, that most workplaces have to operate within regular business hours, so it is unclear how many workplaces could support this “customizable” workday and how large a range could be accommodated. With that said, some businesses already set formalized flex hours, with certain core hours when you must be in the office but the freedom to shift the surrounding hours to your own preference (come in late and leave later, or come in early and leave earlier).
As much as we try, we can’t always control our schedule, which means we are sometimes forced to conform to a routine at odds with our chronotype. A recent study involving adolescents has shown evidence linking chronotype and academic performance. During adolescence, our chronotypes are typically at their latest, as might be expected given the tendency of teenagers to stay up later and sleep in later than other age groups. As discussed, a late chronotype can mean a mismatch between the circadian clock and the early school clock. Indeed, the study found that late chronotypes generally earn lower grades. This finding is especially interesting given that there is no agreed-upon relationship between chronotype and IQ. Instead of a difference in IQ, the lower grades could be due to the ‘social jetlag’ discussed earlier causing sleep deprivation in LCs. The effect of chronotype seems to be strongest in the morning and to disappear in the afternoon, in line with the view that LCs struggle to adjust to an earlier schedule.
So what can we do with this information? There are many strategies recommended to improve sleep quality, including keeping a consistent sleep schedule and reducing exposure to blue light before bed. Beyond these, the findings may indicate a need for schedules better suited to our natural bodily rhythms, which could lead to positive outcomes for public health and productivity. It’s hard to say what tangible programs or policies could be built on this information, but it’s an interesting subject that may be worth our attention.
by Tayyaba Bhatti and Wara Lounsbury
With the recent chill of November comes the ever-looming threat of exams. During these gloomy times we can find solace in thoughts of the upcoming holidays. For some of us, that means a warm and cozy time spent with family; for others, a trip to warmer climates. As pleasant and comforting as vacations are supposed to be, there is one unwelcome side effect that often casts a pall on the joyous times: jet lag. But what exactly is jet lag? Previously believed to be a state of mind, jet lag was later discovered to be a physiological phenomenon: the sensation of fatigue during daylight hours that arises from the slow adjustment of our circadian clocks to new time zones. Circadian clocks are made up of the molecular pathways that dictate when we sleep, move, and eat, and are present in almost every cell in our body. Having to adjust these clocks to a new time zone and then back to the original one within a couple of weeks can take a toll on our bodies. However, according to a recent publication in the journal Cell Metabolism, there may be hope for frequent flyers.
The research started with an initial question: how do all of the body’s circadian clocks show the same time? Universal clock-resetting cues such as feeding-fasting and temperature cycles had already been discovered. Dr. Gad Asher, the lead researcher on the study, decided to investigate oxygen as a potential cue, because all cells in our body consume it. In the study, mice were placed in special cages to measure oxygen consumption and monitor blood oxygen levels. To simulate the jet lag induced by traveling to a different time zone, the researchers shifted the mice’s light-dark cycle six hours ahead. They found that decreasing oxygen levels 12 hours before or 2 hours after the shift helped the mice adapt to their new light-dark cycle more easily and recover from jet lag faster.
Furthermore, the researchers investigated the cellular route by which oxygen modulates circadian clocks by removing a protein (HIF1α) that tells cells how and when to use oxygen. They discovered that without this protein, the mice no longer adapted faster to the shifted light-dark cycle upon exposure to decreased oxygen levels. Thus, they determined that HIF1α mediates the effect of oxygen on the circadian clocks.
This study raises many new research questions. Most of us would want to know whether these findings apply to humans as well, or whether it's just mice that now have a cure for their jet lag. Another question is whether changing oxygen levels would work best before, during, or after a flight. Also, would raising oxygen levels have the same preventative effect as lowering them? The most important question, however, is how this research can be relevant to us.
Most of us experience chaotic sleep schedules during midterm and finals seasons. Some of us even suffer chronic fatigue from circadian clocks that shift throughout the year to meet deadlines or to socialize. Others struggle to balance heavy coursework with part-time jobs and night shifts. Since HIF1α also plays a role in various other cellular processes, this research may eventually help explain the mechanisms underlying these kinds of fatigue. However, that will require further study of the protein, its interactions within the body, and whether altering oxygen consumption works in humans. For now, it appears that only Stuart Little and his friends can rejoice over jet lag-free vacations.
by Greg Eriksen
Editor’s Note: Lifebeat previously published an article called The Science of a Hangover: Homecoming Edition, but here is a more in-depth explanation of the theories behind why hangovers happen and why different alcohols lead to different hangovers.
The hangover: the price many of us pay to have a night out that we probably won’t remember. After a week of working hard, we celebrate on the weekend only to wake up on Sunday feeling miserable. We then proceed to struggle through the headaches, dizziness, and nausea with the help of coffee and Advil. Although the science behind hangovers is not completely understood, there are many contributing factors that help us understand why those awful Sundays occur. Furthermore, we may be able to reduce the effects of the hangover based on the alcohol we choose to drink!
Often the headaches and dizziness associated with a hangover come from dehydration. Alcohol is a natural diuretic that works by decreasing the body’s antidiuretic hormone, the hormone responsible for signalling the kidneys to reabsorb water. Therefore, when high levels of alcohol are consumed, we reabsorb less water and have an increasing urge to pee. This effect explains why we all make so many untimely trips to the bathroom in one night! The resulting fluid loss can give you painful headaches the next day.
Another contributor to hangover symptoms is the metabolism of ethanol (the alcohol in our drinks). Upon ethanol consumption, the enzyme alcohol dehydrogenase converts ethanol and NAD+ to acetaldehyde and NADH. The resulting high concentration of NADH inhibits gluconeogenesis: the generation of new glucose. Essentially, the first step of gluconeogenesis from lactate also converts NAD+ to NADH; with NADH already abundant from ethanol metabolism, this reaction becomes unfavourable, and gluconeogenesis stalls. The outcome can be low blood sugar, known as hypoglycemia. As a result, when an individual wakes up with a hangover, they may be hypoglycemic, which causes dizziness and nausea. Sound familiar?
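For those who want the chemistry spelled out, here is a minimal sketch of the two reactions at play (standard biochemistry rather than anything specific to this article; the lactate pathway shown is just one entry point into gluconeogenesis):

$$\text{CH}_3\text{CH}_2\text{OH} + \text{NAD}^+ \;\xrightarrow{\text{alcohol dehydrogenase}}\; \text{CH}_3\text{CHO} + \text{NADH} + \text{H}^+$$

$$\text{lactate} + \text{NAD}^+ \;\rightleftharpoons\; \text{pyruvate} + \text{NADH} + \text{H}^+$$

Because the first reaction floods the cell with NADH, the second equilibrium is pushed back towards lactate, starving gluconeogenesis of the pyruvate it needs and depressing blood sugar.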
As I mentioned before, the metabolism of ethanol produces both NADH and acetaldehyde. The latter molecule is hypothesized to be the primary reason we experience hangovers. Research has shown that acetaldehyde is 10-30 times as toxic as alcohol itself; in effect, we are introducing toxins into our bodies for fun. The toxic effects of acetaldehyde may include sweating, dizziness, and even memory impairment. Because of acetaldehyde’s high toxicity, our bodies are very efficient at converting it to a more stable compound known as acetate. However, drinking in large quantities (much like what is done on Homecoming) produces acetaldehyde faster than our bodies can clear it, allowing its adverse effects to take hold.
Although waking up with a hangover can be a common occurrence, the alcohol we choose to drink may influence the hangover’s intensity. Congeners are substances formed during the fermentation process and present in alcoholic beverages. They often contribute to a beverage’s flavour, but they are also linked to hangovers. In one study, researchers looked into the effects of different alcohols on people. They gathered 95 healthy alcohol users and served them beverages on three different nights. The first night, all 95 individuals were given the same undisclosed alcohol to “acclimate” to drinking. On night two, they were given vodka or whisky in amounts that put them three times above the legal limit for driving. Finally, on the third night, they were given a placebo drink with no alcohol in it. When the study was done, the participants who drank whisky reported worse hangover symptoms than those who drank vodka; notably, whisky contains considerably more congeners than vodka. Furthermore, research published in the British Medical Journal found a connection between hangover severity and congener content. The congener content of common drinks has been recorded, with brandy, red wine, and whisky containing considerably more congeners than vodka, gin, and white wine. Interestingly enough, the drinks that give the worst headaches tend to be darker and more flavourful. Carbonated drinks are also absorbed faster than non-carbonated ones, and ultimately get you drunk faster. Therefore, it seems that the best drink to buy in the club is also the cheapest one: vodka water.
Overall, no matter the facts, many of us will still choose to drink on a night out regardless of the effects the next morning. In a sense, drinking on Saturday night is just borrowing happiness from Sunday. However, we can help ourselves survive the hangover by choosing the right kind of alcohol! There is nothing wrong with saving a little money and drinking vodka water instead of another Jäger bomb!
by Lauren Lin
Cell phones today have many uses, like texting, taking photos, scrolling through social media, playing games, and of course, actually calling other people. Therefore, it’s not surprising that the number of people who use cell phones rises every year and that cell phones are becoming increasingly important to our day-to-day lives. As a student, it’s not uncommon to see your classmates checking their phones in the middle of a lecture, and many of us would probably find it difficult to go a full day without using our phones.
Despite the frequent use of the term “cell phone addiction”, it is critical to note that while some researchers believe cell phone addiction is a valid mental disorder that should have diagnostic criteria and treatments, others are hesitant to classify problematic phone use as an addiction. The DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition), published in May 2013 by the American Psychiatric Association, does not include cell phone addiction in its list of mental disorders. In fact, the addictions in the DSM-5 are mostly substance use disorders (e.g. alcohol use disorder), with gambling disorder being the only behavioural addiction. According to the DSM-5, behaviours like compulsive buying or compulsive sex are considered impulse control disorders rather than addictive disorders. Accordingly, some researchers suggest that the next edition of the DSM should include a new addictive disorder for cell phone use, while others believe it should be classified as a new impulse control disorder instead.
The criteria for substance use disorder and gambling disorder described in the DSM-5 include (but are not limited to) the following, paraphrased: tolerance (needing more of the substance or behaviour to get the same effect); withdrawal symptoms, such as restlessness or irritability when cutting back; repeated unsuccessful attempts to cut down or stop; craving or preoccupation; and continued use despite negative consequences for relationships, work, or school.
A 2016 review by Gutierrez, Fonseca, and Rubio found that many studies done on cell phone abuse reported behaviours that have parallels with the DSM-5 criteria for addiction. Therefore, cell phone addiction may align with our current understanding of addictive disorders. However, some researchers theorize that the distress, anxiety, and inability to stop cell phone abuse could be caused by the social aspect of cell phones (i.e. the communication cell phones facilitate) rather than the cell phone use itself. Additionally, other researchers believe that cell phone abuse could be a consequence of another psychological issue or variable, such as social anxiety or the desire for approval due to low self-esteem, instead of being its own disorder. Some researchers even acknowledge that what seems like cell phone abuse could be adaptive and typical for certain lifestyles or professions.
Although there isn’t a consensus on whether cell phone addiction exists, many studies have tried to investigate cell phone addiction and abuse. These studies seem to indicate a high prevalence rate among young people (especially adolescents who had their first cell phones before age 13) and that the prevalence rate varies across cultures. Interestingly, men and women also seem to differ in how they use their phones: females generally spend more time on their cell phones than males and use them mainly for communication, while males tend to use them for communication and gaming equally. In terms of effects, cell phone abuse seems to decrease the amount and quality of sleep an individual gets and is associated with substance use disorders, anxiety, depression, stress, and loneliness. However, the relationship between mental health issues and cell phone abuse could be bidirectional, with each affecting the other. Currently, treatments used for gambling disorder, such as therapy and self-help groups, seem to be the most promising ways to help people struggling with cell phone abuse, but more research needs to be done.
Therefore, the good news is that you may not have a cell phone addiction even if you find yourself seemingly glued to your phone. However, problematic cell phone use can still cause issues, so it’s important to be aware of whether your phone use is negatively impacting your life.
by Amy Haddlesey
As this weekend approaches, along with Queen’s Homecoming, it will not be uncommon to see engineers purpled from head to toe flocking to Richardson Stadium to watch the football game. This tradition, albeit messy, is one aspect of what makes Queen’s Homecoming so special and school spirit so unmistakable. Although it’s not always referred to by name, Gentian Violet, or Crystal Violet, is at the centre of this tradition. As an important part of not only Homecoming but other campus traditions as well, Gentian Violet deserves a closer look.
Although our engineering students use this purple dye for aesthetic purposes, it actually has many notable medicinal properties. In the first half of the 20th century, Gentian Violet was predominantly used to treat trench mouth, thrush, impetigo, burns, pinworm, and cutaneous and systemic fungal infections. It has since been noted to have numerous other applications, including anti-fungal, anti-bacterial, anti-helminthic, anti-trypanosomal, anti-angiogenic, and even anti-tumour properties, some of which were proposed as recently as 2013. Today, Health Canada describes Gentian Violet as an herbal medicine that helps to relieve digestive disturbances, stimulate appetite, prevent nausea, and increase bile flow where appropriate. Overall, Gentian Violet has an extensive and perhaps still-growing medicinal record as researchers continue to investigate its possible applications.
Beyond the scope of medicine, Queen’s and other universities use Gentian Violet for its brilliant, intense colour, named after the similarly coloured petals of the gentian flower. The intensity of the colour comes from the stabilization of the molecule through resonance: the molecule is symmetrical, with three amino groups (each carrying two methyl groups) attached to a system of alternating double bonds that runs throughout the structure. This same intensity is also why the compound is an extremely effective biological stain.
One of the best-known uses of Gentian Violet is as a stain for visualization in the lab. In 1884, Hans Christian Gram was the first to notice the importance of the irreversible fixation of Gentian Violet by Gram-positive bacteria, a discovery that became the basis of the Gram stain for categorizing bacteria: cells are stained with the violet dye, fixed with iodine, and rinsed with a decolourizer, and Gram-positive bacteria retain the purple colour in their thick cell walls while Gram-negative bacteria do not. Gentian Violet is also used as a histological stain to study cells and tissues in plants and animals. The same properties that make Gentian Violet an effective stain in the lab also make it a great stain for when you want to be entirely purple. Thankfully, it comes off easily enough with a bleach-water mixture.
Another thing worth noting is the history of Gentian Violet and the reason Queen’s and other university engineering students chose the colour purple. It has been proposed that purpling is a tribute to World War II British naval engineers, who wore purple armbands that would stain their skin after many days of working in the boiler room. It has also been suggested that it references the purple jackets that engineering corps of the British army and navy wore instead of the customary red. Another possible explanation is that the engineers aboard the Titanic wore purple overalls, and that purple became a symbol of bravery for their efforts to keep the smoke signal going while the ship was sinking. Whatever the exact origin, there’s no question that purple is strongly associated with engineering on our campus and on others as well.
Overall, it seems Gentian Violet has a lot more uses than just covering our FRECs and frosh here at Queen’s. It is another prime example of how much history (and science!) goes into the fun, uniting traditions at Queen’s. With the university having just celebrated 175 years, it’s no mystery how Queen’s has become a treasure trove of new and old traditions, each with its own story. You never know what you may learn looking into some of them.
by Haley Richardson
While genetic engineering of the human genome may seem like a concept exclusive to works of science fiction à la Gattaca or Ender’s Game, the reality could be much closer than you think. Recent technologies, such as CRISPR/Cas9 techniques, have made genome editing better, faster, and above all, cheaper. The accessibility and relative ease of these technologies have caused a boom in genome research unlike anything since the Human Genome Project. Although researchers initially limited themselves to editing the genomes of the simplest organisms, scientists have recently been upping their game with more complex organisms, including humans. Ethical concerns with this practice have been raised across the political spectrum and in both public and private life, with many questioning if, and how, this research should be regulated.
There are two categories of human genome editing: somatic cell editing and germline editing. Somatic cells are any non-reproductive cells, and changes made to the somatic genome of an individual cannot be passed down to their children. Changes to germline cells, on the other hand, are heritable and can be passed on to future descendants. As a result, germline editing research is often more controversial than somatic cell research, and is illegal in several countries, including Canada, Australia, and much of Europe. While not outright illegal in places such as the United States, China, and Japan, germline editing research is highly restricted there. Despite these restrictions, in 2015 researchers in China reported their first attempts to edit the germline cells of human embryos, sparking ethical debates and calls worldwide for tougher regulation of the research.
In response to these concerns, several organizations within the scientific and medical community have released their own findings and recommendations. The National Academy of Sciences and the National Academy of Medicine in the US published a report in 2017 titled Human Genome Editing: Science, Ethics, and Governance. The NAS/NAM report analyzed the research currently being done in this area and provided recommendations on how to manage it. While the report favours somatic cell research, it recommends that germline research be allowed as well, so long as the public is notified and in favour. Furthermore, it states that genome editing should be used strictly for “the treatment of disease and disability”, and not for aesthetic or performance-enhancing purposes. In other words, the designer babies of science fiction should not be on anyone’s agenda, unless the aim is to rid the gene pool of a serious disease or disorder.
While some scientists have applauded the report, others are not so sure. Those in favour often point to how restrictive many of the recommendations are, noting that the report allows germline editing only in extreme circumstances where no alternatives are available. Critics are quick to point out, however, that while the report recommends these restrictions now, it leaves the future open to debate once the technology has improved. As a result, many researchers have argued that germline editing should not be allowed under any circumstances, since once the door is open, it will be harder to control how the technology is used.
Even scientists who have worked with genome editing research for years, such as Edward Lanphier, are opposed to the transition to germline editing. An article in Nature co-authored by Lanphier, who has been involved in somatic cell editing research and clinical trials, claims that germline editing could have “an unpredictable effect on future generations”, making it “dangerous and unacceptable”. Furthermore, Lanphier argues that somatic cell research has the potential to cure genetic disorders and save many lives, and allowing controversial germline research to occur jeopardizes the already tenuous acceptance of somatic therapies by politicians and citizens.
After all, while some reports have shown that the public is becoming more comfortable with genome editing, many people are still strongly opposed to or mistrustful of the idea. A study conducted by the Pew Research Center indicated that nearly half of the American citizens surveyed said that genome editing to produce healthier babies was “crossing a line” and “meddling with nature”. This suggests that even if the scientific community manages to reach a consensus on germline editing, it would likely take a significant amount of outreach to get the public (and therefore politicians and lawmakers) on board. So while you can’t expect to wake up tomorrow living in an Aldous Huxley novel, you would not be mistaken in feeling that a brave new world is just around the corner.