by Greg Eriksen
What is Artificial Intelligence?
Have you ever wondered how self-driving cars work or how Google Home replies to you as if it were human? The answer is artificial intelligence (AI). AI refers to the ability of a machine to process information and simulate cognitive functions in response. Through mechanisms such as neural networks and deep learning, a machine can be trained to learn from experience. Consequently, AI machines are able to adjust to new inputs and come up with an appropriate response or action. This ability to accommodate new information helps to explain why self-driving cars don’t shoot off to the side with every bend in the road!
What is a Neural Network?
One of the primary systems used by artificial intelligence machines is the neural network. A neural network is a computer system inspired by the inner workings of the human brain. These networks are composed of a series of layers, starting at what is known as the input layer. For each input, the system analyzes the data, which then fires down different pathways of the neural network. The input passes down through the layers of the network, with each layer extracting more detailed features. Through this filtering process, the system eventually recognizes the input and produces an appropriate output or response.
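To make the idea of layered processing concrete, here is a minimal sketch of a tiny feedforward network in Python. The inputs, weights, and two-layer shape are invented purely for illustration - real networks learn their weights from training data and have vastly more neurons.

```python
def relu(x):
    # Activation function: pass positive signals along, block negative ones
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs, adds a bias,
    # then applies the activation function
    return [relu(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Input layer: two made-up features (e.g. pixel intensities)
x = [0.5, 0.8]

# Hidden layer: two neurons, each with its own weights and bias
h = layer(x, weights=[[0.4, 0.6], [0.9, -0.2]], biases=[0.1, 0.0])

# Output layer: one neuron combining the hidden-layer signals
y = layer(h, weights=[[1.0, 0.5]], biases=[0.0])

print(round(y[0], 3))  # → 0.925
```

Each call to `layer` is one step of the filtering process described above: the raw input is transformed, layer by layer, into a single output signal.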
What does AI have to do with medicine?
Other than letting you sleep while a self-driving car takes you to work, there are plenty of other potential applications for artificial intelligence that are likely more useful! A major area that AI can improve is the analysis of big data, especially in the context of medicine. Big data is data so voluminous that traditional processing mechanisms cannot efficiently analyze it. Using AI technology, big data analytics can examine large amounts of data to uncover hidden patterns and insights in an extremely time-efficient manner. Medicine contains many fields with large amounts of data stored deep within online files, ranging from the millions of images in radiology to the millions of chemical compounds created in drug analysis.
With the ever-growing databases in medicine, AI machines can access this information and run problem-solving algorithms in a fraction of the time it would take humans. In a recent development, an artificial intelligence company known as Atomwise has built a supercomputer that can analyze big data more efficiently than ever before. The supercomputer, known as AtomNet, is used primarily for pharmaceutical analytics. AtomNet is the first structure-based AI system that can predict the biological activity of small molecules for drug discovery applications. Consequently, the company is able to screen millions of theoretical molecules without wasting any materials. One of Atomwise’s biggest discoveries came from its research on the Ebola virus. Once the structure of the Ebola virus was determined, it was modelled on the AtomNet supercomputer. Millions of simulations then analyzed the effects of different molecules on the virus. In what would have taken traditional analytical processes months, the AtomNet AI system found two potential Ebola-fighting treatments in less than one day!
The fundamental principle in biology that AtomNet exploits is that structure is largely associated with function. The ability to determine where chemical bonding can take place is therefore essential for the discovery of new drugs, so AtomNet uses a convolutional neural network that incorporates structural information in its analysis. By doing so, the system can assess how different molecular structures chemically fit together. In research similar to its Ebola work, AtomNet screened almost 82 million molecules and eventually discovered a protein-protein interaction inhibitor as a potential treatment for the autoimmune disease multiple sclerosis!
Pharmaceutical analytics is not the only medical field that AI can improve. An AI platform known as Arterys has been developed to assist radiologists in analyzing various medical images! Furthermore, another company, 3Scan, has created a system to efficiently analyze tissue pathology. Perhaps the most exciting partnership with AI technology is with the gene-editing CRISPR-Cas9 system. In short, this system is derived from a bacterial immune response against viruses. The CRISPR-Cas9 complex uses the genetic information of a virus to target and destroy that specific virus. With new advances in genome editing, the CRISPR system can potentially edit almost any DNA sequence. One of the only barriers holding it back is the problem of off-target effects. To test these potential effects without stepping over ethical boundaries, Microsoft wants to turn to AI technology! The partnership between AI and genome editing may soon revolutionize disease prevention.
Artificial Intelligence is making a very strong case for its influence in the medical world. With its major advances in pharmaceuticals, radiology, and genome editing it is paving a very promising future. Although self-driving cars may be awesome, a disease-free world sounds a whole lot better.
by Lauren Lin
Huntington’s disease (HD) is a fatal neurodegenerative disease that has symptoms such as chorea (jerky, involuntary movements), loss of coordination, and difficulties with walking, talking, swallowing, focusing, recalling memories, and making decisions. People with HD may also experience increased anxiety, depression, aggression, and impulse control issues. As a neurodegenerative disease, the symptoms begin with subtle issues associated with the previously mentioned symptoms and become more severe over time.
HD is one of the few neurodegenerative diseases with a clear genetic component. In 1993, it was identified that HD is caused by a mutation in a gene on chromosome 4 that codes for the huntingtin protein (the Htt gene): an expansion of a repeating three-letter DNA sequence, CAG. Although CAG repeats are found in healthy individuals, individuals with HD have very high numbers of CAG repeats in the Htt gene, and more repeats are associated with more serious manifestations of the disease. The mutation is dominant, meaning that individuals need only one copy of the mutated gene from either their mother or father to have HD. Therefore, children of a parent with HD have a 50% chance of inheriting the disease. Symptoms usually appear between the ages of 35 and 55, but they may start before age 20 (called Juvenile HD) or in late adulthood (Late Onset HD).
The function of the huntingtin protein in healthy individuals is still unclear, but the protein seems to play a role in the function of nerve cells since huntingtin appears to interact with proteins that only exist in the brain. The mutated huntingtin gene leads to abnormal aggregates of huntingtin protein fragments in the brain called neuronal inclusions. The basal ganglia, a brain area that is involved in movement coordination, seems to be the most affected by neuronal inclusions. However, the cerebral cortex, which plays a role in cognitive processes like attention, is also vulnerable to the effects of the huntingtin protein. The symptoms related to the cerebral cortex (i.e. cognitive difficulties) show up later than motor difficulties, which are associated with effects of the abnormal huntingtin protein on the basal ganglia.
Currently, there are no drugs that can prevent or slow down the progression of Huntington's disease, but drugs are given to people with HD to help manage their symptoms. For example, some antipsychotic drugs such as haloperidol may be given to patients with HD to help with hallucinations (which sometimes individuals with HD experience), violent outbursts, and chorea. Antidepressant and anxiolytic (anti-anxiety) drugs are also sometimes given to help with the psychiatric symptoms that individuals with HD may have.
However, many researchers are investigating possible new treatments for Huntington’s disease, and a gene-silencing treatment has recently shown potential. A new drug called IONIS-HTTRx is an antisense drug: a short strand of synthetic oligonucleotides that selectively binds to the huntingtin messenger RNA (mRNA) to block translation of the huntingtin protein. The drug is injected into the fluid around the spinal cord and is then carried to the brain in the cerebrospinal fluid.
This month, Ionis Pharmaceuticals reported findings from a phase 1 trial that included 46 patients aged 25 to 65 from Canada and Europe. The study was 13 weeks long, during which participants were randomly assigned to be injected with one of five possible dosages of IONIS-HTTRx or a placebo. One injection was given each month, and at the end of the study, the participants who received the two highest doses of IONIS-HTTRx had about a 40% reduction in mutant huntingtin (mHTT) levels in their cerebrospinal fluid. The researchers predict that these decreases correspond to a 55-85% reduction of mHTT levels in the brain cortex, which may lead to clinically significant results. However, trials with more patients and longer durations are needed to establish whether the drug is truly effective in reducing the levels of huntingtin protein in the brain and helping with HD symptoms. More trials are planned to begin in late 2018 or early 2019.
Since only one trial of IONIS-HTTRx has been completed, with a very small sample over a short time, researchers and healthcare professionals do not yet have enough evidence to support the effectiveness of the drug. That being said, the potential for a drug to decrease the amount of harmful huntingtin protein fragments in the brain brings with it a lot of optimism and hope that we may be able to better treat Huntington’s disease.
by Amy Haddlesey
Depression is a mental disorder estimated to affect more than 300 million people worldwide. Major Depressive Disorder (MDD) is one of the most commonly diagnosed depressive disorders. MDD is characterized by having at least 5 of 9 symptoms specified in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), with at least one of the symptoms being depressed mood or the loss of interest or pleasure. Other symptoms may include, but are not limited to: sleep difficulties, fatigue or loss of energy, reduced ability to concentrate, feelings of worthlessness, and psychomotor agitation or retardation. As depression is such a highly prevalent mental illness, it is becoming increasingly important to create new, innovative ways to detect and treat it.
In a study done by Microsoft Research, researchers looked into using social media behaviour as a way to infer the behaviours related to depression, with the intention of providing an accessible framework for early detection and diagnosis. The social media platform used in this particular study was Twitter. As a setting for the expression of many significant aspects of behaviour such as a person’s thoughts, mood, activities, and socialization patterns, Twitter provides a wealth of easily accessible knowledge about a person’s emotional condition over time without being intrusive into participants’ lives. In the past, web activity patterns and online behaviour on Facebook have also been studied in relation to mental disorders. For example, researchers have examined trends in Facebook status updates that are associated with depressive symptoms.
Using crowdsourcing, the study recruited several hundred Twitter users who participated by completing a CES-D (Center for Epidemiologic Studies Depression Scale) screening test, the Beck Depression Inventory, and an additional survey aimed at gathering depression history and demographic information. The CES-D is a 20-item self-report scale used to measure depressive symptoms. After completing the questionnaires, users with a public profile could opt in to share their Twitter usernames, with the understanding that their profiles would then be mined and analyzed anonymously by computerized programs. Twitter posts were collected over a yearlong period from consenting individuals who had either been previously diagnosed with depression or had no history of depression. For those with depression, Twitter data from the year leading up to their diagnosis was collected; for those without, data was collected for the year ending on the date the survey was completed.
In total, 476 users were included in the study. Data from individuals with depression were used to create a gold standard for the changes in Twitter activity preceding diagnosis. The behavioural patterns on Twitter of individuals with depression included:
· Posting patterns shifting towards later at night
· Decrease in engagement
· Higher expression of negative affect
· Lower activation
· Higher presence of first-person pronouns
· Higher use of depression terms
· Higher use of words associated with symptoms (Figure 1)
· Higher disclosure of feelings and seeking social support
· Larger discussion of therapy and treatment
Overall, there was a marked increase in certain behaviours and a decrease in others before diagnosis, suggesting a shift in behaviour on Twitter leading up to the onset of a depressive episode. By developing this gold standard from the depressed group’s data, the study was able to build a statistical classifier that estimates an individual’s risk of depression. In making this prediction, both the trend of behavioural change and the degree of behavioural change over the one-year period were important in identifying behaviour related to depressive symptoms.
The study reported that, of the models they developed, the best performing had a 70% prediction accuracy and a precision of 0.74. The model therefore seems able to estimate the risk of depression at an above-chance level, and to make these estimates relatively consistently. The hope for this study, and for this area moving forward, is that the prediction process could aid in identifying behaviour associated with depressive episodes, increase early detection, and lead to a better support system being in place by the time a mental health-related issue presents.
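To see what those two numbers mean in practice, here is a short Python sketch using a hypothetical confusion matrix. The counts below are invented so that the arithmetic matches the reported 70% accuracy and 0.74 precision - they are not the study’s actual data.

```python
# Hypothetical counts (NOT from the study), chosen to illustrate
# how 70% accuracy and 0.74 precision could arise:
tp = 37  # depressed users the model correctly flagged
fp = 13  # non-depressed users the model incorrectly flagged
tn = 33  # non-depressed users the model correctly cleared
fn = 17  # depressed users the model missed

# Accuracy: fraction of all predictions that were correct
accuracy = (tp + tn) / (tp + fp + tn + fn)

# Precision: of the users flagged as at risk, the fraction who truly were
precision = tp / (tp + fp)

print(f"accuracy = {accuracy:.2f}, precision = {precision:.2f}")
# → accuracy = 0.70, precision = 0.74
```

The distinction matters for a screening tool: accuracy measures overall correctness, while precision tells you how much to trust a positive flag.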
Figure 1. Categorization of words into those related to symptoms, disclosure, and treatment.
by Jenna Finley
We’ve all heard someone say at some point: “I think I’m getting sick so I’ll just take a ton of vitamin C and it’ll be fine.” The idea that a dose of vitamin C will keep you from getting the flu (or at least stop the illness from lingering) is one of the most common home remedies nowadays, and the reason vitamin C tablets fly off the shelves right around the beginning of cold and flu season. However, does this home remedy actually help?
Short answer: we’re not sure.
Vitamin C began to be thought of as an important guardian of health in the 1970s when prominent doctors began recommending daily doses as a way for people to lead longer, healthier lives. But it wasn’t until the 1990s that vitamin C began to be more widely touted as a common cold prevention method. Drugs containing vitamin C began popping up on shelves claiming to be common cold cures, the most prominent of which was called Airborne. Since its release, the drug has been the subject of multiple lawsuits over the unsubstantiated claims made involving the “cold busting” power of vitamin C and yet has still inspired dozens of new ‘cold preventing’ vitamin C supplements.
As far as research goes, very little support has been found for the idea that taking vitamin C will help prevent an illness, at least for the general public. If you’re an extremely active person who takes a dose of 250-1000mg of vitamin C every single day, then you could reduce your cold incidence by half! Great news for Olympic athletes and marathon runners, but for the rest of us, washing our hands regularly would be more helpful.
The possibility of shortening the length of a cold and reducing its symptoms is where the research gets more interesting, though not in the way you might expect. The findings in this area are also a lot more conflicted. While some studies suggest that vitamin C can reduce symptoms by as much as 85%, others say supplementation makes no difference. The most widely cited study says that vitamin C can make a difference, but only if a 200 mg supplement is taken every single day - not just the days you’re feeling sick or the days leading up to a cold. I don’t think many of us can say we meet that condition, but even if we did, the benefits aren’t too exciting. On average, this regimen leads to only one less day of illness.
Taking a massive amount of vitamin C at once (megadosing) is another common method people use in the hopes that they’ll finally be free of that persistent cold. While some research seems to support this treatment, there is yet another caveat. The dose necessary to have a chance at relieving your illness would need to be as high as 8000 mg/day, which can cause a whole host of problems. In the end, the 1000 mg tablets your roommate is eating like candy around exam season might be doing them more harm than good, as too much vitamin C can make you a lot sicker, resulting in symptoms like vomiting, abdominal pain, and diarrhea.
Therefore, we can’t conclusively say that vitamin C supplementation helps, but we do know that it can hurt. Most nutritionists recommend getting your daily vitamin C from your meals and forgoing a supplement altogether. Doses over 400 mg are excreted from the body, resulting in you (literally) flushing your money down the toilet.
At the end of the day, if you’re still convinced a few vitamin C tablets will help you stave off the dreaded common cold for another day, go ahead and take them, but be careful. No one wants to suffer any more than they have to during exam season.
by Rosalin Dubois
It is one of the worst times of the year - everyone, from your best friend to your professor, is getting sick. Sooner or later you probably will too, and when that time comes you’ll be struck by the same distressing experience: no matter how well you feel during the day, by the time you are ready to go to bed, you feel so miserable that you never want to leave your room again. I’ve always been told, “You’ll feel better in the morning; colds always feel worse at night!” But why is that?
There are a few plausible explanations for this phenomenon. Some are less scientific than others but may still provide insight into why sleeping while sick can be so difficult. Consider everything that distracts you during the day. While you are focused on getting to class, meeting an important deadline, or even just socializing with your friends, you may pay less attention to the signals your body is sending to indicate that you are sick. When you try to sleep, however, you have fewer distractions, and so you may notice more of these signals and feel much sicker. Additionally, when you lie down, gravity affects your body differently than when you are standing. Even just sitting up may help to clear your stuffy airways and help you sleep better (essential for that 8:30 lecture!).
However, what if you are working late and sitting up, and still feel worse than during the day? You may be experiencing the circadian rhythms of your immune system. Circadian rhythms are physical, mental, and behavioural changes that follow a daily cycle - think of your “internal clock” telling you when to get up in the morning. Our immune system follows a similar pattern: researchers have found that the immune response varies throughout the day. During the day, the part of the immune system called cell-mediated immunity (or just cellular immunity) is responsible for defending us from infection. This form of defence is very effective against viruses, bacteria, fungi, and other invaders. Most important to note is that we don’t typically feel the strain of this type of immunity at work.
At night, inflammation replaces cellular immunity. Inflammation is the type of immune response we normally experience when our tissues are damaged by trauma, bacteria, heat, or other causes. When this happens, the damaged cells release chemicals (including histamine and prostaglandins) that cause blood vessels to expand and allow more blood to reach the damaged area. Additionally, inflammatory mediators increase the permeability of blood vessels to defence cells, which carry fluid into the tissue and cause swelling. By surrounding the damaging substance with a barrier of this released fluid, inflammation aims to isolate the invader from our tissues and prevent it from doing more damage. It is when our system is inflamed and we feel the effects of this swelling that we experience the worst of our sickness symptoms, like fever, increased mucus, and fatigue.
Studies over the last decade indicate that this transition between cellular immunity and inflammation occurs due to a change in the activity of a type of white blood cell called the T-cell. These cells are important in cell-mediated immunity because they attack and kill cells carrying antigens (foreign substances). It turns out that T-cells actually become less active against antigens during times when the body would normally be resting, especially at night.
This seems bizarre; why would the body turn off such important defenders when we need them most? A study done by a team of German researchers just last year may hold the answer. The researchers observed changes in the cell populations of the lymph nodes of mice during their active times and during their rest times. As expected, they saw more T-cells present in the lymph nodes while the mice were resting. However, they were surprised to find high levels of dendritic cells also present at this time. Dendritic cells process information about antigens and communicate it to T-cells so that cellular immunity can effectively target the threat.
This research seems to indicate that during the day, T-cells and dendritic cells move normally throughout the body, gathering information and dealing with threats. When T-cells randomly come into contact with dendritic cells, they receive information that lets them adapt their immune response to better eliminate antigens. At night, both types of cells move to the lymph nodes, and their high concentration there makes interactions much more likely. Through these interactions, T-cells receive the information they need from dendritic cells to develop a functional immune response to the threat - meaning you could potentially heal faster! Think of it as these cells meeting up to share intelligence on any invaders so they can fight them off more effectively. Meanwhile, the inflammatory part of immunity does its best to prevent any infection from progressing further.
At the end of the day (pun intended), it seems likely that it is a combination of these factors (distraction, position, and dynamics of the immune system) that cause us to feel worse at night. Unfortunately, there still isn’t much to do to prevent this phenomenon other than what you should already be doing to treat a cold. Get well soon, Queen’s!
by Lauren Lin
“Locked-in syndrome” is used to describe a medical condition in which there is complete paralysis of all voluntary muscles in the body including most facial muscles. Individuals who have locked-in syndrome are conscious, have cognitive function, and are aware of their environment, but they cannot produce movements or speak. This condition is often caused by damage to the pons, a part of the brainstem that relays information to different parts of the brain. The damage can result from strokes, infections of the brain, or bleeding. Certain disorders like Amyotrophic Lateral Sclerosis (ALS), a motor neuron disease, can also cause total motor paralysis. Many people with locked-in syndrome can communicate through moving their eyes and/or blinking, but individuals with locked-in syndrome may eventually lose their ability to move their eyes, and so communication becomes extremely difficult.
Chaudhary, Xia, Silvoni, Cohen, and Birbaumer (2017) report on the potential for brain-computer interfaces (BCIs) to offer a way for paralyzed patients with ALS to communicate. BCI research may involve invasive procedures, like implanting electrodes in the brain, or noninvasive technologies, like functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS), to record brain activity. The recorded brain activity can then be interpreted to determine what the user is communicating. Chaudhary et al. (2017) used fNIRS to measure changes in blood flow by assessing oxygenated hemoglobin (O2Hb), and used electroencephalography (EEG) to measure the brain waves of four patients who had no motor movement. The relative changes in oxygenated hemoglobin when patients responded to “true/yes” and “false/no” statements were significantly different from each other, so fNIRS measurements were used to recognize whether the patient answered “yes” or “no.” EEG measurements, however, were not able to reliably discriminate between the two answers.
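As a rough illustration of how a yes/no answer might be read from fNIRS data, the Python sketch below classifies a trial by comparing its average O2Hb change to a threshold learned from labelled training trials. The numbers and the simple threshold rule are our own invention - the study’s actual classifier was more sophisticated - but the idea of learning from known-answer statements and then decoding new responses is the same.

```python
# Toy sketch (not Chaudhary et al.'s actual method): decode a "yes" or
# "no" from a trial's average change in oxygenated hemoglobin (O2Hb).

def train_threshold(yes_trials, no_trials):
    # Use the midpoint between the two class means as the decision
    # boundary, and record which class tends to sit above it
    mean_yes = sum(yes_trials) / len(yes_trials)
    mean_no = sum(no_trials) / len(no_trials)
    return (mean_yes + mean_no) / 2, mean_yes > mean_no

def classify(o2hb_change, threshold, yes_is_higher):
    # Assign the trial to whichever side of the boundary it falls on
    if yes_is_higher:
        return "yes" if o2hb_change > threshold else "no"
    return "yes" if o2hb_change < threshold else "no"

# Made-up O2Hb changes recorded while answering known-answer statements
yes_trials = [0.8, 1.1, 0.9, 1.0]   # responses to true statements
no_trials = [0.2, 0.4, 0.3, 0.1]    # responses to false statements

threshold, yes_is_higher = train_threshold(yes_trials, no_trials)

# Decode two new, unlabelled trials
print(classify(0.95, threshold, yes_is_higher))  # → yes
print(classify(0.15, threshold, yes_is_higher))  # → no
```

The training phase described next in the article plays exactly this role: statements with known answers give the system labelled examples from which to learn each patient’s “yes” and “no” signatures.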
To train the patients to be able to answer questions using BCI, the researchers asked the patients to respond “yes” or “no” to personal statements with known answers like “Your husband’s name is Joachim” or “You were born in Berlin.” For each known statement with a clear “yes” answer, a similar statement with a clear “no” answer was also given. For example, if the statement “You were born in Berlin” was true, it could be paired with “You were born in Paris,” a false statement. The reverse was done for statements with clear “no” answers. The patients were explicitly told to think of “yes” or “no” answers but not to imagine the answer visually or auditorily so that the BCI would only be picking up on signals that correspond with “yes” or “no” sentiments rather than the look or sound of the words. The patients also received feedback on what their answer was interpreted as (e.g. “Your answer was recognized as ‘yes’”) during training.
The patients were asked at least 200 statements with known answers and 40 open questions or statements about the person’s quality of life, or questions from caretakers that only the patient could answer (e.g. “You have back pain.”). The four patients communicated using BCI with a correct response rate of 70% over the course of several weeks, which is above the level of chance (50%). Three of the four patients were asked open questions about their quality of life, such as “Are you happy?” and “I love to live.” These questions were asked repeatedly to ensure the validity of the responses. All three patients answered “yes,” indicating an overall positive attitude towards their current situation and towards life.
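For intuition on why 70% over roughly 200 questions is convincingly above chance, here is a quick back-of-the-envelope check in Python (our own illustration, not the researchers’ statistical analysis): it computes how often a pure guesser, flipping a fair coin for every answer, would score at least that well.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    # Probability of getting k or more correct out of n questions
    # if each answer were an independent 50/50 coin flip
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Illustrative numbers: 200 known-answer questions, 70% (140) correct.
# How likely is a score that high from guessing alone?
p_chance = binom_tail(200, 140)
print(f"chance of guessing this well: {p_chance:.2e}")
```

The probability comes out vanishingly small (well under one in a million), which is why a 70% correct-response rate over many trials is taken as evidence of genuine communication rather than luck.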
BCI seems like a promising way for patients with paralysis of almost all voluntary muscles to communicate, since it does not require any motor movements. However, the interpretations of the responses are not always correct, so it is extremely important to take precautions like asking a single question multiple times. Additionally, BCI may not be accessible in all healthcare settings, since it requires both the equipment needed to measure and interpret brain signals and the training for the patient to use it. Despite these limitations, BCI still has a lot of potential to give locked-in patients who may not have been able to communicate previously a way to convey their thoughts, especially considering that one of the patients in the study had not been able to communicate for four years. The researchers are hopeful that this technology could be a stepping stone towards improving the quality of life of patients in a locked-in state; they even write that family members all “experienced substantial relief” when they were able to communicate with the patients, and that they still use the system.
by Amy Haddlesey
Editor’s Note: Lifebeat Newspaper is happy to present the first installment of our Spotlight on a Researcher series, which will feature interviews we conducted with researchers across Canada and here in Kingston about their work.
I worked with Dr. Rolando Del Maestro, a researcher at the Montreal Neurological Institute within the Neurosurgical Simulation and Training Centre. Dr. Del Maestro has made a lasting impact on the field of brain tumour treatment and care as a practicing neurosurgeon and a co-founder of the Brain Tumour Foundation of Canada, as well as through his contributions to brain tumour research. Currently, Dr. Del Maestro is working on research surrounding a neurosurgical simulator called the NeuroVR. Earlier in the year, I had the opportunity to interview Dr. Del Maestro to discuss what led him to pursue research, his current work on the NeuroVR, and what starting the Brain Tumour Foundation of Canada meant to him.
At the start of his career, Dr. Del Maestro attended the University of Western Ontario from 1967 to 1973, where he received his M.D. Over the next 5 years, he completed a rotating internship and his residency in neurosurgery, without much exposure to research. During his time at Western, Dr. Del Maestro met a researcher from Sweden who was very influential in his decision to pursue a Ph.D. It was during his Ph.D. in Sweden that Dr. Del Maestro developed a new appreciation for what he calls the “complete arc” in medicine and science - a term he uses to describe the deep-rooted connection between research and clinical practice. He mentioned in particular that knowing the chemistry behind what he works on in his clinical practice made all the difference. From this experience forward, it became clear to him that to make a difference in his field, he would need to combine his surgical experience with research.
The combination of surgical experience and research has led Dr. Del Maestro to his current research focus on neurosurgical simulation. The "NeuroVR," the central tool of his studies, is a neurosurgical training simulator developed with the National Research Council of Canada (NRC). The simulator involves the use of haptic feedback (sense of touch) and virtual reality to create simulations that resemble neurosurgery, especially the resection of brain tumours. The possible use of simulation in the neurosurgical community became apparent to Dr. Del Maestro as he noticed during his career that there was a near standstill in how operations and training procedures were being conducted around the world. This observation caused him to ask himself "how do I make the field of neurosurgery better globally?" To do this, he looked to the already established global use of simulation in the aviation industry. At the time of the inception of the NeuroVR, the aviation industry had three key facets that were used to reduce the occurrence of accidents: 1) early warning signs (warning lights, control callouts, etc.), 2) flight simulators to train pilots before ever flying their first craft, and 3) group training, in which pilots learn from each other in a proper training environment. In the field of neurosurgery, the only facet that was present was the monitoring of early warning signs (blood pressure, breathing rate, etc.), and so Dr. Del Maestro concluded that the field of neurosurgery was decades behind where it could be. Instead of seeing this delay as an obstacle, Rolando saw an opportunity for rapid improvement by incorporating simulation into neurosurgical training using the aviation industry’s current practices as a model. As a result, his current projects focus on validating the NeuroVR as an accurate training tool for neurosurgery. The goal is to improve the training of young neurosurgeons and reduce neurosurgical disasters related to human error. 
These goals are modeled after the reduction in fatal aircraft accidents following the introduction of simulators in aviation training. Introducing an idea as novel as neurosurgical simulation training is not without its challenges. In our interview, Rolando described the disruption to current practices that new models cause as a necessary step in innovation, rather than a hindrance or reason to turn back.
Beyond his research efforts involving brain tumours, Dr. Del Maestro has also had a deep social impact on the field. Over the course of his career as a neurosurgeon and his wife Pamela Del Maestro’s career as a nurse, both saw a need to help people with brain tumours, and those affected by them, beyond the scope of clinical care. This passion to do more for their patients led Dr. Del Maestro, along with Pam Del Maestro and Steve Northey, to create the Brain Research Fund Foundation of Canada in 1982 (now known as the Brain Tumour Foundation of Canada, or BTFC). The foundation acts as both a support system and a research fund. It organizes support groups and events geared towards empowerment, cultivating hope, and providing emotional support for people with brain tumours and their family members. The BTFC has also funded extensive research into potential treatments and cures for brain tumours. Dr. Del Maestro considers the BTFC one of his most important contributions, since it has allowed him to help those affected in a way that will continue after him.
Through the foundation, his neurosurgery career, and his research, Dr. Del Maestro has helped the field of brain tumours from nearly every angle. The same passion that started the Brain Tumour Foundation of Canada fuels his research and allows him to withstand adversity and skepticism in the emerging field of neurosurgical simulation. He mentioned that enacting change is always difficult and gradual, especially in the world of medicine. Instead of being overwhelmed or discouraged by this difficulty, Dr. Del Maestro is fascinated by complexity and challenge, a trait that seems to be a key ingredient of his success.
by Wara Lounsbury
It’s the start of a new semester and a new year, and the return of students to Queen’s has been heralded by fresh white snow blanketing the campus. This crystalline transformation is a physical manifestation of the blank slate that a new year represents: an opportunity to resolve to make a change for the better. Some of these resolutions may have fallen by the wayside already (I, for one, have not held to my resolution to exercise more, but who was I kidding really?), but some of us still hold steadfast. After all, with the ushering in of a new year comes newfound motivation to be more productive, exercise more, or lose some of the weight gained after feasting during the holidays. One of the most common New Year’s resolutions is to lose weight, whether through exercise or dieting. Unfortunately, there are many dangers associated with diets, some widely publicized and some less known. In fact, dieting can sometimes even be fatal, as in the Terri Schiavo case.
by Angela Chen
In the past decade, particularly in North America, the gluten-free (GF) diet has come to be regarded as the new “healthy” way of eating. Many restaurants now serve GF meal options, and gluten is sometimes treated as an unhealthy component of our diets even for people who do not have celiac disease (a medical condition in which eating gluten triggers an immune reaction that can damage the small intestine). Even though many individuals promote the GF diet, this widespread idea seems to be based more on single case studies and personal experience than on systematic research. Before eliminating gluten from one’s diet, it is prudent to consider the potential harm doing so could cause.
A few recent studies have indicated that a low-gluten diet may compromise overall health. A study carried out by a collaboration between Columbia University and Harvard Medical School found that a low-gluten diet was related to increased heart attack risk. After following the diet and overall health of 100,000 people between 1982 and 2012, the researchers found that people who had little to no gluten in their diet were more prone to heart attacks. Other research on the effects of a GF diet on gut microbiota and immune function is still under way, but some findings seem to indicate that gluten is important for certain components of immune function.
Like many other health fads, the GF diet gained popularity so quickly that conclusive, long-term data on its benefits and risks are hard to find. For example, juice boxes were once thought to carry many health benefits, an idea later countered by research showing that most juices contain excessive sugar. Similarly, the negative health effects eventually linked to artificial sweeteners used as sugar substitutes show that fads which grow in popularity exponentially are often not critically studied or analyzed before being adopted. This lack of critical analysis may reflect the general public’s desire to see immediate results rather than take the time to make an informed decision.
Individuals who follow a GF diet often eat GF substitutes. A 2015 systematic analysis conducted by Missbach and colleagues revealed that GF food substitutes have no predominant health benefits. Furthermore, replacing gluten-containing foods with GF substitutes carries a significant cost: prices are 200% to 300% higher than those of the gluten-containing versions. GF substitutes are also high in synthetic hydrocolloids and gums. Individuals who do not have celiac disease therefore do not seem to benefit from a GF diet. Additionally, it has been suggested that when the body is no longer allowed to metabolize gluten, the mechanism for digesting it weakens, which can result in celiac disease in individuals who could previously digest gluten; this outcome is preventable simply by continuing to eat gluten if the body is capable of processing it.
A final consideration concerns public health. In her response to the gluten-free fad and the danger it poses for celiac individuals, Grabowski commented on her distrust of restaurants to provide truly GF meals. She writes that although many restaurants offer GF options, they may not be training and educating their servers on how to handle GF foods; it is not uncommon for servers to neglect to change gloves or serving plates, which often results in cross-contamination. McIntosh et al. (2011) examined 260 foods that were claimed to be gluten-free and found that approximately 10% of the samples contained gluten. For individuals who are not celiac, this contamination will go unnoticed; however, it places celiac individuals at risk.
In conclusion, while the GF diet may produce immediate physical results and has been popularized by celebrities, research shows that it may cause more harm than good in the long run. It is therefore important for individuals to learn about the effects of the GF diet before excluding gluten from their meals. Rather than cutting gluten entirely, it may be beneficial to eat it in moderation; as with any food, too much or too little is never good. Individuals should turn to research and credible physician recommendations before making a decision about their diet that will ultimately impact their overall health.
by Haley Richardson
Climate change: it’s something we hear about nearly every day, whether in class, on the news, or in casual conversation. While most people know that climate change is not a good thing, many are not aware of the specific implications of changes in the earth’s temperature and atmosphere. One of the most significant yet least discussed effects of climate change is how rising global temperatures can shift the habitable range of a species. More specifically, infectious agents, as well as the organisms that carry them, can begin to populate parts of the world that were formerly too cold for them to survive, spreading disease to new regions.
Infectious agents, like all other biological entities, depend on specific environmental conditions that determine where they can grow and reproduce fastest. Often, these environments are hot areas with abundant water or a humid atmosphere. As a result, the parts of the world with the most infectious disease tend to be the tropics, particularly after the monsoon season brings large amounts of rain. However, as global temperatures rise, the formerly hot, humid areas of the tropics may become too dry for many disease-carrying organisms, causing them to migrate to more temperate zones. By contrast, regions formerly too cold for these organisms will become warmer and more humid, allowing species ranges to expand further north. Furthermore, large-scale extreme weather events such as El Niño and La Niña are already increasing in frequency, bringing with them new organisms, and new diseases, to the areas they affect.
For example, the infection rate of malaria is primarily attributed to the organism that carries it: the mosquito. Although many parts of the world contain mosquitoes, only certain species in the Anopheles genus can carry the malaria parasite. Malaria-carrying Anopheles mosquitoes primarily live in sub-Saharan Africa, where the temperature and humidity of the region are ideal for the mosquito’s life cycle. However, as climate change causes global temperatures to rise, sub-Saharan Africa is expected to become too arid for these mosquitoes to thrive, and scientists predict that they will migrate as far north as Southern Europe.
The consequences of the spread of diseases like malaria could be enormous, especially in less developed countries where access to health services and vaccination is limited. Additionally, by migrating to previously uncolonized regions, Anopheles mosquitoes would spread the malaria parasite to populations that have not had a chance to build up immunity to the disease, amplifying its effects. Other similarly transmitted diseases such as yellow fever, sleeping sickness, and dengue fever, as well as bacterial infections such as cholera and Lyme disease, are also predicted to spread faster and into previously unaffected areas as global temperatures rise. All of these diseases in their newly acquired ranges could spell disaster for many people worldwide, and in some areas they already have.
However, lest this article be all doom and gloom, there are several initiatives the international community has proposed to manage the spread of infectious diseases driven by climate change. Besides the obvious solution of cutting carbon emissions to slow the rise in global temperatures, scientists and lawmakers alike have developed systems to monitor the spread of infectious diseases so that responses to outbreaks will be faster and more efficient than before. Institutions such as the Pacific ENSO Application Centre have developed early-warning systems that can detect extreme weather events likely to lead to disease outbreaks, allowing them to inform governments to prepare relief in advance and educate the public on disease prevention. Other institutions, such as the World Health Organization, actively provide relief to disease-ravaged areas and lobby governments to provide funds and support to affected regions. Furthermore, scientists around the globe are working to develop vaccines for the planet’s most deadly diseases, as well as ways to mass-produce them cost-effectively. Thus, as climate change alters the environment (perhaps irreparably) at a frighteningly fast pace, we can at least hope that human progress will move even faster.