by Amy Haddlesey
As students, we are no strangers to staying up late to do schoolwork or the hardships of getting up early for an 8:30 class. A good portion of the population will find one of these activities easier than the other, which means that people can fall into one of two camps: night owls or early birds. The population is said to have a normal distribution when it comes to how we organize our behaviour within the 24-hour day, with most of us in the middle of the two extremes, but there are still individuals who find themselves leaning towards one or the other. The term ‘chronotype’ describes your preference for wakefulness; chronotypes are divided into late chronotypes (LCs), intermediate chronotypes (ICs), and early chronotypes (ECs). ECs are characterized by their difficulty with staying up late, and LCs are characterized by their difficulty with getting up early. Chronotype is both age- and sex-dependent. Interestingly, a higher percentage of females are ECs.
Chronotype-specificity is dictated by the interplay between neural circadian rhythms and homeostatic oscillators. Both help regulate the cellular processes that tell our bodies when to sleep, wake, and eat. While circadian rhythms, or the “body clock”, can respond to or be affected by external factors (e.g. sunlight), the homeostatic oscillators are considered internal, independent regulators. However, it is the interplay between the two that regulates the overall fluctuation between sleep and wakefulness over the course of each day. Recent research into the genetic basis of our inner clocks has revealed that our circadian rhythms are important time reference systems that interact with the environment. Understanding which aspects of circadian clock function are most affected could lead to helpful interventions that mediate clock dysfunction and improve human health and welfare. In one such study, more than 80 different genes were shown to be expressed differently between late and early chronotypes in fruit flies. Furthermore, it wasn’t expression alone that separated the groups; there were also different genetic variants present between late and early chronotypes. Overall, there are many factors at play in determining your preference for how you organize your day, and many of them may be beyond your control.
Studies have found that the differences between chronotypes extend beyond sleeping preference. Different chronotypes are also associated with differences in cognitive performance, gene expression, endocrinology, and lifestyle. Most notably, LCs tend to experience a conflict between internal and external time (‘social jetlag’) that may cause them more mental stress. In other words, a night owl’s tendency to sleep through the day and stay up late comes into conflict with the typical hours of the social day, which may cause LCs to experience jet lag-like symptoms as they try to adjust. Similarly, LCs may have to adjust to work or school hours. One paper suggested that with an increased understanding of chronotype-specificity, work schedules could ideally be designed to fit your wake/sleep schedule. It is important to note, however, that most workplaces have to fit into regular business hours, so it’s unclear how many workplaces could be suited to this “customizable” workday and how large a range could be accommodated. With that said, some businesses set formalized flex hours, where there are certain hours you must be in the office, but you can shift the surrounding hours to your own preference (coming in late and leaving later versus coming in early and leaving earlier).
As much as we try, we can’t always control our schedule, and that means we are sometimes forced to conform to a routine at odds with our chronotype. A recent study involving adolescents has shown evidence linking chronotype and academic performance. During adolescence, our chronotypes are typically at their latest, as might be expected given the typical tendency of teenagers to stay up later and sleep in later than other age groups. As discussed, a late chronotype can mean a mismatch between our circadian clock and the early school clock. Accordingly, the study found that late chronotypes generally have lower grades. This finding is especially interesting given that there is no agreed-upon relationship between chronotype and IQ. Instead of a difference in IQ, these lower grades could be due to the circumstantial ‘social jet lag’ discussed earlier causing sleep deprivation in LCs. The effect of chronotype seems to be strongest in the morning and to disappear in the afternoon, which is in line with the view that LCs struggle to adjust to an earlier schedule.
So what can we do with this information? There are many strategies recommended to improve sleep quality, including keeping a consistent sleep schedule and reducing your exposure to blue light before bed. However, these findings may also indicate a need for schedules better suited to our natural bodily rhythms, which could lead to positive outcomes for public health and productivity. It’s hard to imagine what tangible programs or policies could be established with this information, but it’s an interesting subject that may be worth our attention.
by Tayyaba Bhatti and Wara Lounsbury
With the recent chill of November comes the ever-looming threat of exams. During these gloomy times we can find solace in thoughts of the upcoming holidays. For some of us, that entails a warm and cozy time spent with family, and for others it means a trip to warmer climates. As pleasant and comforting as vacations are supposed to be, there is one unwelcome side effect: jet lag, which often casts a pall on the joyous times. But what exactly is jet lag? Previously believed to be a state of mind, jet lag was later discovered to be a physiological phenomenon. Jet lag is the sensation of fatigue during daylight hours that arises due to the slow adjustment of our circadian clocks to new time zones. Circadian clocks are made up of the molecular pathways that dictate when we sleep, move, and eat, and are present in almost every cell in our body. Having to adjust these circadian clocks to a new time zone and then back to the original one within the span of a couple of weeks can take a toll on our bodies. However, there may be hope for frequent flyers in the future, according to a recent publication in the journal Cell Metabolism.
The research started with an initial question: how do all circadian clocks show the same time? Universal clock-resetting cues like feeding-fasting and temperature cycles have already been discovered. Dr. Gad Asher, the lead researcher on the study, decided to study oxygen consumption as a potential cue because all cells in our body use oxygen. In the experiments, mice were placed in special cages to measure oxygen consumption and monitor blood oxygen levels. To simulate jet lag induced by traveling to a different time zone, the researchers shifted the light-dark cycles of the mice 6 hours ahead. They found that decreasing oxygen levels 12 hours before or 2 hours after the light-dark cycle shift helped the mice adapt to their new light-dark cycle more easily and recover from jet lag faster.
Furthermore, the researchers investigated the cellular route by which oxygen modulates the circadian clocks by removing a protein (HIF1α) that tells cells how and when to use oxygen. They discovered that when this protein was removed, the mice no longer adapted faster to the shifted light-dark cycle when exposed to decreased oxygen levels. Thus, they determined that HIF1α mediates the effect of oxygen on the circadian clocks.
This study raises a lot of new research questions. Most of us would want to know whether these findings can be applied to humans as well, or whether it's just mice that now have a cure for their jet lag. Another question that comes to mind is whether changing the oxygen level would work best before, during, or after a flight. Also, would raising oxygen levels have the same preventative effect as lowering them? However, the most important question is how this research can be relevant to us.
Most of us experience chaotic sleep schedules during midterm and finals seasons. Some of us even suffer chronic fatigue due to ever-shifting circadian clocks throughout the year as we try to meet deadlines or to socialize within our networks. Others struggle to balance a heavy course load with part-time jobs and night shifts. Since HIF1α also plays a role in various other cellular processes, this research may be extended to help explain the underlying effects of these phenomena surrounding fatigue. However, this requires further study of the protein, its interactions within the body, and the application of oxygen consumption alteration to humans. For now, it appears that only Stuart Little and his friends can rejoice over their jet lag-free vacations.
by Greg Eriksen
Editor’s Note: Lifebeat previously published an article called The Science of a Hangover: Homecoming Edition, but here is a more in-depth explanation of the theories behind why hangovers happen and why different alcohols lead to different hangovers.
The hangover: the price many of us pay to have a night out that we probably won’t remember. After a week of working hard, we celebrate on the weekend only to wake up on Sunday feeling miserable. We then proceed to struggle through the headaches, dizziness, and nausea with the help of coffee and Advil. Although the science behind hangovers is not completely understood, there are many contributing factors that help us understand why those awful Sundays occur. Furthermore, we may be able to reduce the effects of the hangover based on the alcohol we choose to drink!
Often the headaches and dizziness associated with a hangover come from being very dehydrated. Alcohol is a natural diuretic, which works by decreasing the body’s levels of anti-diuretic hormone. This hormone is responsible for the reabsorption of water. Therefore, when high levels of alcohol are consumed, we reabsorb less water and have an increasing urge to pee. This effect explains why we all make so many untimely trips to the bathroom in one night! Consequently, the total fluid loss can give you painful headaches the next day.
Another contributing factor to the symptoms of a hangover is the result of the metabolism of ethanol (the alcohol in our drinks). Upon ethanol consumption, the enzyme alcohol dehydrogenase converts ethanol and NAD+ to NADH and acetaldehyde. The high concentration of NADH inhibits gluconeogenesis: the generation of new glucose. Essentially, the first step in gluconeogenesis involves the conversion of NAD+ to NADH. However, with a high level of NADH already present due to ethanol consumption, this reaction is unfavourable, and therefore gluconeogenesis will not occur. The outcome can be low blood sugar, known as hypoglycemia. As a result, when an individual wakes up with a hangover, they may be hypoglycemic, which causes dizziness and nausea. Sound familiar?
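For the chemically inclined, the key reaction can be written out (this is standard textbook biochemistry, not a detail taken from the article itself):

ethanol (CH3CH2OH) + NAD+ → acetaldehyde (CH3CHO) + NADH + H+   [alcohol dehydrogenase]

One textbook way to see the link to gluconeogenesis is through the lactate step that feeds it: lactate + NAD+ ⇌ pyruvate + NADH + H+. With NADH already abundant from ethanol metabolism, this equilibrium sits on the lactate side, leaving less pyruvate available for making new glucose.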
As I mentioned before, the metabolism of ethanol produces NADH as well as acetaldehyde. The latter of the two molecules is hypothesized to be the primary reason we all experience hangovers. Research has shown that acetaldehyde is 10-30 times as toxic as alcohol. In effect, we are introducing toxins into our bodies for fun. The toxic effects of acetaldehyde may include sweating, dizziness, and even memory impairment. Due to acetaldehyde’s high toxicity, our bodies are very efficient at converting it to a more stable compound known as acetate. However, drinking in large quantities (much like what is done on Homecoming) produces a lot of acetaldehyde, which allows its adverse effects to take hold.
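To complete the picture (again, textbook chemistry rather than something from the studies above), a second enzyme, aldehyde dehydrogenase, carries out that detoxifying step:

acetaldehyde (CH3CHO) + NAD+ + H2O → acetic acid (CH3COOH) + NADH + H+   [aldehyde dehydrogenase]

The faster this second step runs, the less time toxic acetaldehyde spends circulating in the body; heavy drinking outpaces the enzyme, which is part of why the next morning feels so much worse.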
Although waking up with a hangover can be a common occurrence, the alcohol we choose to drink may influence the hangover’s intensity. Congeners are substances in alcoholic beverages that are formed during the fermentation process. They often contribute to the flavour of the beverage we choose to drink. However, they are also linked to the effects of hangovers. In one study, researchers decided to look into the effects of different alcohols on people. They gathered 95 healthy alcohol users and served them 3 different beverages on 3 different nights. The first night, all 95 individuals were given the same undisclosed alcohol to “acclimate” to drinking. On night two, they were given vodka or whisky in amounts that put them 3 times above the legal limit for driving. Finally, on the third night, they were given a placebo drink with no alcohol in it. Once the study was done, the participants who drank whisky reported worse hangover symptoms than those who drank vodka. Whisky also contains considerably more congeners than vodka. Furthermore, research published in the British Medical Journal found a connection between hangover severity and the amount of congeners. Accordingly, the amount of congeners in each alcohol has been recorded, with brandy, red wine, and whisky having considerably more congeners than vodka, gin, and white wine. Interestingly enough, the drinks that give the worst headaches tend to be darker and more flavourful. Furthermore, drinks that include carbonation tend to be absorbed faster than still drinks, ultimately getting you drunk more quickly. Therefore, it seems that the best drink to buy in the club is also the cheapest one: vodka water.
Overall, no matter the facts, many of us will still choose to drink on a night out regardless of the effects in the morning. In a sense, drinking on that Saturday night is just borrowing happiness from Sunday. However, we can help ourselves survive the hangover by choosing the right kinds of alcohol! There is nothing wrong with saving a little bit of money and drinking vodka water instead of another jäger bomb!
by Lauren Lin
Cell phones today have many uses, like texting, taking photos, scrolling through social media, playing games, and of course, actually calling other people. Therefore, it’s not surprising that the number of people who use cell phones rises every year and that cell phones are becoming increasingly important to our day-to-day lives. As a student, it’s not uncommon to see your classmates checking their phones in the middle of a lecture, and many of us would probably find it difficult to go a full day without using our phones.
Despite the frequent use of the term “cell phone addiction”, it is critical to note that while some researchers believe that cell phone addiction is a valid mental disorder that should have diagnostic criteria and treatments, others are hesitant to classify problematic phone use as an addiction. The DSM-5 (Diagnostic and Statistical Manual of Mental Disorders Fifth Edition), which was published in May 2013 by the American Psychiatric Association, does not include cell phone addiction in its list of mental disorders. In fact, addictions in the DSM-5 are mostly substance use disorders (e.g. alcohol use disorder), with gambling disorder being the only behavioural addiction in the DSM-5. According to the DSM-5, behaviours like compulsive buying or compulsive sex are considered to be impulse control disorders rather than addictive disorders. Therefore, there are researchers who suggest that the next edition of the DSM should include a new addictive disorder for cell phone use, as well as researchers who believe that it should be a new impulse control disorder instead.
The DSM-5 criteria for substance use disorder and gambling disorder include (but are not limited to) tolerance, withdrawal, repeated unsuccessful attempts to cut back, and continued engagement in the behaviour despite the problems it causes.
A 2016 review by Gutierrez, Fonseca, and Rubio found that many studies done on cell phone abuse reported behaviours that have parallels with the DSM-5 criteria for addiction. Therefore, cell phone addiction may align with our current understanding of addictive disorders. However, some researchers theorize that the distress, anxiety, and inability to stop cell phone abuse could be caused by the social aspect of cell phones (i.e. the communication cell phones facilitate) rather than the cell phone use itself. Additionally, other researchers believe that cell phone abuse could be a consequence of another psychological issue or variable, such as social anxiety or the desire for approval due to low self-esteem, instead of being its own disorder. Some researchers even acknowledge that what seems like cell phone abuse could be adaptive and typical for certain lifestyles or professions.
Although there isn’t a consensus on whether cell phone addiction exists, there are many studies that try to investigate cell phone addiction and abuse. These studies seem to indicate that the prevalence of problematic use is high among young people (especially adolescents who had their first cell phones before age 13) and that the prevalence rate varies across different cultures. Interestingly, men and women also seem to differ in how they use their cell phones. Females generally spend more time on their cell phones than males, mainly for communication purposes, while males tend to use cell phones for communication and gaming equally. In terms of effects, cell phone abuse seems to decrease the amount and quality of sleep an individual gets and is associated with substance use disorders, anxiety, depression, stress, and loneliness. However, the relationship between mental health issues and cell phone abuse could be bidirectional, in that they affect each other. Currently, treatments used for gambling disorder, such as therapy and self-help groups, seem to be the most promising ways to help people who struggle with cell phone abuse, but more research needs to be done.
Therefore, the good news is that you may not have a cell phone addiction even if you find yourself seemingly glued to your phone. However, problematic cell phone use can still cause issues, so it’s important to be aware of whether your phone use is negatively impacting your life.
by Amy Haddlesey
As this weekend approaches, along with Queen’s Homecoming, it will not be uncommon to see engineers purpled from head to toe flocking to Richardson Stadium to watch the football game. This tradition, albeit messy, is one aspect of what makes Queen’s Homecoming so special and school spirit so unmistakable. Although it’s not always explicitly referred to, Gentian Violet, or Crystal Violet, is at the centre of this tradition. As an important part of not only Homecoming but other traditions on campus as well, it seems that Gentian Violet deserves a closer look.
Although our engineering students use this purple dye for aesthetic purposes, the dye actually has many notable medicinal properties. In the first half of the 20th century, Gentian Violet was predominantly used to treat trench mouth, thrush, impetigo, burns, pinworm, and cutaneous and systemic fungal infections. With that said, it has been noted to have numerous other applications, including anti-fungal, anti-bacterial, anti-helminthic, anti-trypanosomal, anti-angiogenic, and even anti-tumour properties. Some of these effects have been proposed as recently as 2013. Today, Health Canada describes Gentian Violet as an herbal medicine that helps to relieve digestive disturbances, stimulate appetite, prevent nausea, and increase bile flow when this is advantageous. Overall, Gentian Violet has an extensive and perhaps growing medicinal background as researchers continue to investigate its possible applications.
Beyond the scope of medicine, Queen’s and other universities use Gentian Violet for its brilliant and intense colour; the dye is named after the gentian flower, which has similarly coloured petals. The stabilization of the molecule through resonance is what gives the compound its intense colour. The molecule itself is symmetrical, with three amino groups that each carry two methyl groups, and it has many alternating double bonds throughout its structure. The intensity of the purple is also why the compound is an extremely effective biological stain.
One of the best-known uses of Gentian Violet is as a stain for visualization purposes in the lab. In 1884, Hans Gram was the first to notice the importance of the irreversible fixation of Gentian Violet by Gram-positive bacteria. This discovery was the basis of the Gram stain for categorizing bacteria. Gentian Violet is also used as a histological stain to study cells and tissues in plants and animals. The same properties that make Gentian Violet an effective stain in the lab also make it a great stain when you want to be entirely purple. Thankfully, it comes off easily enough with a bleach-water mixture.
Another important thing to note is the history of Gentian Violet and the reason why Queen’s and other university engineering students chose the colour purple. It has been proposed that purpling is a tribute to World War II British Naval Engineers. These individuals wore purple armbands, which would stain their skin after many days of working in the boiler room. It has also been suggested that it is a reference to the purple jackets that the engineering corps of the British army and navy wore instead of the customary red jackets. Another possible explanation is that the engineers aboard the Titanic wore purple overalls, and that purple became a symbol of bravery for their efforts to keep the smoke signal going while the ship was sinking. No matter the exact origin, there’s no question that purple is strongly associated with engineering on our campus and on others as well.
Overall, it seems Gentian Violet has a lot more uses than just covering our FRECs and frosh here at Queen’s. It is another prime example of how much history (and science!) goes into the fun, uniting traditions at Queen’s. Having just celebrated 175 years, Queen’s has unsurprisingly become a treasure trove of new and old traditions, each with their own stories. You never know what you may learn looking into some of them.
by Haley Richardson
While genetic engineering of the human genome may seem like a concept exclusive to works of science fiction a la Gattaca or Ender’s Game, the reality could be much closer than you think. Recent technologies, such as the development of CRISPR/Cas9 techniques, have made genome editing better, faster, and above all, cheaper. The accessibility and relative ease of these technologies have caused a boom in genome research unlike anything since the Human Genome Project. Although researchers initially limited themselves to editing genomes of the most basic organisms, recently scientists have been upping their game with more complex organisms, including humans. Ethical concerns with this practice have been raised across the political spectrum and in both public and private life with many questioning if, and how, this research should be regulated.
There are two categories of human genome editing: somatic cell editing and germline editing. Somatic cells are any non-reproductive cells, and any changes made to the genome of an individual’s somatic cells cannot be passed down to his or her children. Changes to germline cells, on the other hand, are heritable and can be passed on to future descendants. As a result, germline editing research is often more controversial than somatic cell research, and is illegal in several countries including Canada, Australia, and most of Europe. While not outright illegal in places such as the United States, China, and Japan, germline editing research is highly restricted. Despite these restrictions, in 2015, researchers in China reported their first attempts to edit human germline cells in embryos, sparking ethical debates and calls worldwide for tougher regulation of the research.
In response to these concerns, several organizations within the scientific and medical community have reported their own findings and recommendations. The National Academy of Sciences and the National Academy of Medicine in the US released a report in 2017 titled Human Genome Editing: Science, Ethics, and Governance. The NAS/NAM report analyzed the research that is currently being done in this area and provided recommendations on how to manage it. While the report is more in favour of somatic cell research, its authors recommend that germline research should be allowed as well, so long as the public is notified and in favour. Furthermore, the report states that genome editing should be used strictly for “the treatment of disease and disability”, and not for aesthetic or performance-enhancing purposes. In other words, the designer babies of science fiction should not be on anyone’s agenda, unless it is to rid the gene pool of a serious disease or disorder.
While some scientists have applauded the report, others are not so sure. Those in favour often cite how restrictive many of the recommendations are, saying that the report only allows germline editing in extreme circumstances with no available alternatives. Critics are quick to point out, however, that while the report recommends these restrictions now, they leave the future open to debate once the technology has improved. As a result, many researchers have argued that germline editing should not be allowed under any circumstances, as once the door is open, it will be harder to control how the technology is used.
Even scientists who have worked with genome editing research for years, such as Edward Lanphier, are opposed to the transition to germline editing. An article in Nature co-authored by Lanphier, who has been involved in somatic cell editing research and clinical trials, claims that germline editing could have “an unpredictable effect on future generations”, making it “dangerous and unacceptable”. Furthermore, Lanphier argues that somatic cell research has the potential to cure genetic disorders and save many lives, and allowing controversial germline research to occur jeopardizes the already tenuous acceptance of somatic therapies by politicians and citizens.
After all, while some reports have shown that the public is becoming more comfortable with genome editing, many people are still strongly opposed to or mistrustful of the idea. A study conducted by the Pew Research Center indicated that nearly half of the American citizens questioned said that genome editing to produce healthier babies was “crossing a line” and “meddling with nature”. This indicates that even if the scientific community manages to come to a consensus on germline editing, it would likely require a significant amount of outreach to get the public (and therefore politicians and lawmakers) on board. So while you can’t expect to wake up tomorrow living in an Aldous Huxley novel, you might not be mistaken in feeling as though a brave new world is just around the corner.
by Alana Duffy
We’ve all met someone who acts a certain way in everyday life, then seemingly becomes a completely different person when he or she enters a group setting. This switch can be a good thing or a bad thing, but either way, it’s due to a centuries-old phenomenon known as mob mentality. Defined by the Oxford Dictionary as “the tendency for people's behaviour or beliefs to conform to those of the group to which they belong,” mob mentality has been a prevalent part of human behaviour since people began to form tribes and migrate in groups.
The most obvious example is probably violent rioting in the name of nationalism, however, mob mentality exists in day-to-day life as well. Roasting someone’s post on Insta with your friends, singing and yelling at a hockey game, or smashing a beer bottle on University can all be attributed to mob mentality. Most people would never participate in these activities on their own, and may even disapprove of them. However, if we become part of a group that encourages such behaviour, it can change how we act.
Recently, scientists have been wondering more about the neurological aspect of mob mentality. Though there is no concrete evidence to date, it is suspected that mirror neurons may play a role. These are brain cells that fire when we watch someone perform an action and when we perform the same action, suggesting that some parts of our brain may be specialized for imitating others.
Let’s not forget to consider the role of our dear old friend dopamine, the neurotransmitter responsible for signalling in reward pathways. This chemical is released whenever we do something enjoyable, such as having sex, taking drugs, or even eating a really good pizza (shout-out to Maxx from Dominos). Dopamine encourages us to repeat actions that have been previously pleasurable and is strongly linked to the formation of addictive behaviours. Unsurprisingly, it has also been implicated in how social influence shapes decision-making. Laboratory studies have revealed that changing one’s opinion due to social influence triggers a large dopamine release in the brain.
A study conducted at the University of Basel in Switzerland examined the relationship between dopamine levels and the likelihood that a person would change his or her answer after discovering that other participants held differing opinions. More specifically, they used transcranial magnetic stimulation (TMS) to reduce dopamine release in the medial-prefrontal cortex. This area of the brain produces an error signal when we make what is perceived as an incorrect decision. Following a decrease in dopamine in this area, subjects were 40% less likely to conform to the group by changing their opinions. On the flip side, researchers in Denmark gave participants a pill to increase the amount of dopamine in the brain and found that people changed their opinions much more readily to align with the majority.
Interestingly enough, our physical anatomy may also affect the likelihood of conforming to groups. A Japanese study found that a subject’s desire to be unique and have their own opinion was related to the size of their medial-prefrontal cortex (MPFC). The thinner their MPFCs were, the less likely they were to alter their opinions to fit the group majority. Further research is certainly required, but this is an intriguing result nonetheless. I wonder what would happen if we attempted to reproduce this experiment in self-proclaimed hipsters?
This leads us to the question of whether or not people can be held responsible for their actions when they’re in a large group. If an automatic neurobiological response is part of the reason why people change their behaviour, can they really be held 100% accountable? To me, the obvious answer is yes – they can and they should be. However, many people do not share this sentiment – how often have we heard “but everyone was doing it!” as an excuse for poor behaviour? This is known as diffusion of responsibility, where a person is less likely to take responsibility for his or her own action or inaction when others are present. This way of thinking holds no legal or moral merit; it is simply what we tell ourselves to feel better about something we know deep down to be wrong. So the next time you’re in a group that is acting like they were all raised by wild animals, take a moment to reflect on your behaviour as an individual, rather than one of many.
by Lauren Lin
Do you ever wonder why you seem to start the school year with the best intentions but find yourself losing motivation soon after? Now that the second week of school is coming to an end, some of us may be feeling less motivated to follow through with the goals we set for ourselves, like not getting behind on readings or going to that Monday 8:30 lecture every week.
The phenomenon that people are more likely to work towards their goals right after a temporal boundary, like the end of a week or year, is called the fresh start effect. In 2014, Dai and colleagues investigated the fresh start effect in three studies. In the first study, they found that the general public searched for the term “diet” on Google more frequently at the beginning of the week and after federal holidays. In the second study, Dai et al. analyzed data documenting the daily gym attendance of undergraduate students to see how temporal boundaries affected engagement in goal-related behaviours. The students were more likely to exercise following the beginning of a new week, month, year, or semester, as well as after school breaks and their own birthdays. However, the beginning of a semester and the beginning of a week resulted in the greatest increases in gym visits.
Since the data Dai et al. used for the first two studies were health-related, they realized that an alternative explanation for these results could be that people usually consume larger amounts of food on some holidays and weekends, and so they go to the gym more after these periods of time. Therefore, they removed a few holidays that often involve large meals, such as Thanksgiving Day and Christmas, from their analyses and found that the students still exercised more after federal holidays or school breaks than they did on typical days. They then conducted a third study that included goals that weren’t health-related to better separate the effects of overeating during holidays from the fresh start effect. In this third study, the researchers used data from stickK, a website that allows customers to choose a personal goal and to decide on how much money they will need to pay a person or a charity if they don’t achieve their goals. Even with the wide range of goals, Dai et al. still found that people were more likely to adhere to their targets immediately after temporal boundaries, with the increase in commitment most noticeable at the start of the year, the start of the week, and after federal holidays.
To explain the fresh start effect, Dai et al. hypothesized that a temporal boundary can help us psychologically distance our current self from “past imperfections,” allowing us to behave in a way that better reflects our new, more positive self-image. Another reason behind the fresh start effect could be that temporal boundaries redirect your focus from the details of day-to-day life to a broader view of your life, so you end up being able to think about achieving your long-term goals. However, Dai et al.’s studies did not provide evidence for these mechanisms.
Hennecke and Converse also conducted four studies to look at how the perception of temporal boundaries can influence expectations. In their first study, participants answered a series of questions about their expectations for improving their diet over the next 6 days, which covered Thursday, February 27th to Tuesday, March 4th. The participants either saw calendar dates (i.e. February 27th to March 4th) or weekdays (i.e. Thursday to next Tuesday). The expectations of the participants who saw calendar dates increased more for March 1st compared to the participants who saw the weekdays. However, both groups had jumps in expectations for Monday, March 3rd, with the participants who saw weekdays having a slightly higher jump than the group that saw calendar dates. In study 2, Hennecke and Converse looked at self-reported constraints on and means of eating more healthily on the next day (Saturday, August 1st), with the next day presented either as a Saturday or as August 1st. In study 3, they asked for the circumstances, constraints, obstacles, and expectations for adopting a healthier diet over the next 4 days (presented as weekdays or calendar dates), which included the start of a new week. In studies 2 and 3, participants still preferred to initiate a change in their behaviour after a temporal boundary if they were presented the dates in a way that clearly indicated a temporal boundary. Additionally, the information collected in these studies allowed the researchers to determine that the desire to start after a temporal boundary could be due to how the goal is represented before and after that boundary. It seems that people think less about the constraints and inconvenience of eating healthily on the day after a temporal boundary, and hence prefer to start working on their goals after one. This difference in representation could be caused by people thinking that it will be easier to start with a clean record after a temporal boundary or that they will have renewed resources in a new period. To demonstrate the effect of perceived temporal boundaries in a real-life setting, Hennecke and Converse analyzed data from prospective dieters who were interested in an expensive dieting program in a fourth study and found that people were willing to sacrifice an entire week of access to the program in order to start their plan after a perceived temporal boundary.
These studies show that we tend to feel motivated and want to start making changes at the beginning of a new time period, which explains why we have such high expectations at the start of a new school year. However, can the fresh start effect help us actually achieve our academic goals? Although more research is needed to determine whether thinking in terms of new beginnings does more good by motivating you or more harm by giving you a reason to postpone working on your goals, perceiving temporal boundaries does seem to increase the likelihood that you will stick to your goals. Therefore, rather than thinking “new school year, new me,” you might want to try thinking “new day, new me” instead.
*If you are feeling too overwhelmed in the new school year or would like to learn some strategies that will help with managing school, there are many resources on campus that you can use.
Queen's Counselling Services
AMS Peer Support Centre
Student Academic Success Services
by Omri Nachmani
The rate of technological advancement has been accelerating at a breakneck pace. Every day, a new breakthrough threatens to bring us closer to something out of a Huxley novel. Aside from the unsettling (and sometimes frightening) implications of mass unemployment due to automation, or the potential of a new genetic caste system, some scientific and technological achievements are paving the way for a revolution in medicine and healthcare. Here are five technologies that may revolutionize healthcare over the next five years.
Artificial Intelligence (AI)
Deep learning, a concept central to AI, refers to computer algorithms that learn from data and feedback, continuously improving their performance. So far, deep learning algorithms have defeated the world’s top players at chess, Go, and even Jeopardy. However, this technology also has other applications. Samsung Medison uses deep learning to analyze ultrasound images and detect breast tumours with superhuman precision and accuracy. Though initially unreliable, the ability to continuously improve performance gives such systems a considerable advantage over their human-doctor counterparts. Another AI company, EMERGENT, uses the same idea to tease out complex drug interactions by analyzing millions of records in pharmaceutical databases and individual patient care files, making medicine even more personalized. Perhaps most impressively, IBM’s Watson is capable of reading millions of published clinical articles within seconds, extracting the key information, and making an unbiased, evidence-based decision regarding treatment. What potential benefits may we reap if IBM’s Watson is available on every physician’s phone? We will certainly find out.
Gene Sequencing and Editing
The Human Genome Project, completed in 2003, cost approximately $2.7 billion at the time. Today, one can sequence their genome for less than $1000. This incredible price drop opens many doors for scientists and healthcare. Deep Genomics, founded by University of Toronto researchers, uses genetic sequencing data along with AI to predict how genetic mutations may affect cells and impact the human body. The power to know one’s genes allows researchers to predict which treatments may or may not work. Such is the nature of precision medicine, where the one-size-fits-all doctrine is thrown out of the window and the focus is you. But why stop there? If a genetic mutation is known, why not eliminate it? In 2015, as a last-resort treatment, researchers used gene-editing tools to cure an 11-month-old infant, Layla, of severe leukemia. By re-engineering her immune cells, researchers induced them to attack her cancerous cells. Layla has been cancer-free for over 18 months. However, with great power comes great responsibility. While genetic engineering lends hope in curing the incurable, it may also spark fears of designer babies and biological warfare. It goes without saying: we need to tread lightly.
Labs-on-a-Chip
In developing countries and rural locations with few health resources or limited funding, labs-on-a-chip may offer a quick and inexpensive way to diagnose and monitor infectious diseases. These circuits, roughly the size of a USB stick, integrate numerous pathology laboratory functions and can detect bloodborne pathogens with high accuracy. US-based DNA Electronics has developed a chip that can detect and monitor the levels of HIV in a patient’s blood, making the management of symptoms more effective. Normally, blood samples have to be sent to a laboratory and take three days to be analyzed and returned to a clinician, costing time, money, and resources. In places where those are lacking, labs-on-a-chip offer an effective solution that may just prevent the next global epidemic.
Drones
Yes, drones (why not?). More often than not, effectively responding to global disasters such as tsunamis and earthquakes is a problem of logistics rather than will or funding. Sending in medical first responders or search-and-rescue teams is risky in these hazardous scenarios. Drones offer a way to bypass this challenge. Drones can be used to deliver life-saving medications to remote areas or countries in states of emergency, avoiding the issues of distribution logistics or a lack of responders. Companies such as Matternet are already testing the feasibility of a drone delivery network in Haiti to speed up the delivery of medicines. Drones may also be used to search for survivors in disasters and other seemingly inaccessible locations. Once survivors are identified, the same drones can drop survival packages to sustain them until further help arrives. Maybe the next first responder you encounter will have the Amazon logo on its propellers.
3D Printing
The same technology that lets you print off your own salt shaker may also one day print a kidney if you’re in need. Take a 3D printer, replace the ink with cells and some suspension fluid, and you have yourself a bioprinter capable of printing skin for burn victims or even a new pancreas. Okay, this is more than five years down the road (although it is being researched extensively), but some applications are right around the corner: 3D-printed casts, personalized bone replacement parts, pills, and much more. Some companies have even printed brain tumour models using MRI scans, giving surgeons a chance to practice a risky surgery beforehand. With 3D printing potentially solving the organ donor crisis and reducing fatal surgical errors, it’s hard not to be excited.
It is important to acknowledge that many of these technologies are in experimental stages. However, it appears that they will pave the way for better, faster, and more precise healthcare. But that’s just an educated guess.
by Lauren Lin
Induced pluripotent stem cells (or iPSCs) are stem cells that are made by reprogramming already specialized adult cells into a pluripotent state that is similar to that of an embryonic stem cell. As a result, these induced pluripotent stem cells are able to differentiate into any cell type in the body. This technology was developed by Shinya Yamanaka and Kazutoshi Takahashi at Kyoto University in 2006, when they infected adult skin cells from mice with viruses to introduce 24 genes that they believed to be necessary for cells to behave like embryonic stem cells. Later on, they identified just four genes, each encoding a transcription factor, that were needed to reprogram adult cells into pluripotent cells. These genes were Oct4, Sox2, Myc, and Klf4.
iPS cells are now widely used within many areas of research, but the iPS cell lines cultured in different labs don’t seem to be consistent, as many papers have reported results that other researchers haven’t been able to replicate. Additionally, there is still research being done to investigate which types of adult cells can be reprogrammed, which cell types iPSCs can differentiate into, and whether or not reprogramming can take place without Myc, since it has the potential to turn cells cancerous. It has also been discovered that iPSCs aren’t exactly the same as embryonic stem cells, since they hold onto a certain kind of “epigenetic memory.” This means that the cells retain the chemical changes associated with whichever adult cell type they originated from. However, some scientists don’t believe that this epigenetic memory will end up interfering with the results of research conducted with iPSCs.
When iPSC techniques were first developed, researchers thought that they would primarily be used for regenerative medicine. The idea was that they could reprogram a person’s skin or blood cells into iPSCs, then differentiate these cells into whichever cell type was needed to treat a disease. For example, people with neurological disorders could have new neurons made to treat their condition. The prospect of using iPSCs for regenerative medicine was especially exciting because it seemed to avoid both the problem of immune rejection and the ethical issues that arise from using stem cells from embryos in therapy.
In 2013, Masayo Takahashi worked with Yamanaka to develop stem cell treatments for retinal diseases. Takahashi and her team reprogrammed skin cells from patients with an eye condition called age-related macular degeneration (AMD) into iPSCs, which were then used to make retinal pigment epithelium (RPE) cells. Sheets of RPE cells were implanted into a patient in 2014, and the progression of the patient’s macular degeneration seemed to stop. However, before a second trial was started, they found a few genetic changes in the second patient’s iPS cells and RPE cells. It wasn’t conclusive whether or not the mutations were cancerous, but the trial was halted and only resumed in June 2016. When Takahashi’s work was stopped, other researchers developing iPS-cell-based therapies also put their projects on hold. Now that the clinical trials for AMD are set to begin again, researchers hope to start new clinical trials that examine the use of iPSCs for other diseases, such as Parkinson’s disease.
Currently, iPS cells are heavily used in research, especially for testing newly developed drugs and studying the progression of human diseases. The ability to culture iPS cells and to differentiate these cells into any body cell has allowed researchers to culture human tissues that would have previously been difficult to access. Additionally, while clinical trials have experienced setbacks, iPSC technology has been continuously refined and used for other studies. For example, some researchers have been using CRISPR-Cas9, a gene-editing tool, to introduce mutations into iPS cells to look at the effects of the mutation and to compare the mutated cell lines with control cell lines. The research that studied the link between the Zika virus and microcephaly, a condition in which an infant is born with an abnormally small head, used the ability of iPSCs to model early human development. Dr. Guo-li Ming used cortical neural progenitor cells, iPSCs, and immature neurons to discover that the cells that go on to form the brain’s cortex are “potentially susceptible to the virus, and their growth could be disrupted by the virus.”
Although it will still take a very long time to understand certain diseases and to develop new drugs or cell therapies, iPSCs are valuable tools that researchers can use, either as models for human tissue or as treatments themselves. iPSCs have already contributed to many scientific advances, and there may be potential uses for them that have yet to be explored.