Tuesday, February 24, 2009

“How’s medical school going?” That’s a common question, and I don’t think I have the most cheerful answer. Maybe my bitterly honest sentiments were strengthened by living in Ukraine, where one never responds to an inquiry with “Great!” or “Good.” I can’t bring myself to respond that medical school is “cool” or “amazing.” The best word that comes to mind lately is “ok.”
Yes, just ok. Why just ok? Well, has anyone ever told you that medical school was the best time of their life? I doubt it. That’s because it’s really pretty hard. Medical school is hard; that’s no surprise, right? It seems that many medical students must be a special breed of people who thrive on academic torture as much as on helping others. Indeed, students in this odd regime need a high degree of self-discipline, a willingness to sacrifice, and a capacity for delayed gratification, and it’s no cheerful matter.
The US medical education system is undoubtedly an “odd” thing to take part in. First of all, there are the prerequisites and admissions—basic science classes, the dreaded $250+ MCAT, the expense of submitting an AMCAS online application, and the drawn-out selection process, which can last from June of one year until August of the following year. It’s a self-selecting process that fills applicants with self-doubt and, often, a competitive mindset. Is this what it takes to become a recognized, practicing healer in our country? Being a basic science whiz, having a lot of money, and knowing how to get ahead of others? What about kindness or compassion?
I recently read an article, “Most Likely to Succeed,” in the December 2008 New Yorker, in which Malcolm Gladwell explores how difficult and unpredictable the NFL quarterback selection process has been. He writes, “There are certain jobs where almost nothing you can learn about candidates before they start predicts how they’ll do once they’re hired.” He goes on to make a connection to medical admissions, saying, “We now realize that being a good doctor requires the ability to communicate, listen, and empathize—and so there is increasing pressure on medical schools to pay attention to interpersonal skills as well as to test scores. We can have better physicians if we’re just smarter about how we choose medical-school students.”
It’s true—there is a new movement in medical education to select people who have done community service and who show dedication to humanitarianism in their essays and interviews. In fact, I think this is probably the reason I was accepted to medical school. Yet most schools continue to value test scores and the basic sciences just as highly as humanistic skills.
The result is that I am surrounded by amazing people, people I deeply admire, who also happen to have trouble breaking out of the competitive, over-achieving mindset. I am occasionally drowned by this sentiment myself. For example, after a recent exam, I found myself unable to hold back tears. Then I made myself feel worse, because logically I knew how stupid it was for me to be crying over my grades. I know that, as life challenges go, this one is not so bad.
For most people I study with, it seems that being in medical school is the most important part of their lives. What’s the problem with that? Well, maybe nothing, if you think that Dr. House is the best doctor ever. Personally, I’d prefer a well-balanced doctor who cared about me—even if it meant delaying the right diagnosis. But I think many Americans, given the choice, might pick the cynical, distant, and brilliant physician.
This past month of rigorous anatomy, physiology, development, physical diagnosis, and pathology has not given me much time to ponder solutions for choosing good future physicians. I briefly considered using “shoe selection” as a qualification (those wearing practical but professional shoes during their interview would receive high marks in this category). Ultimately, though, changing future physicians would require a change in curriculum as well as in admissions.
I would cover less basic science material and add more classes on public health, social work, and practical skills. I would also use less multiple-choice testing, and offer more options to extend medical school to five or six years instead of just four. I would try to admit some regular achievers along with the high achievers, in the hope that more of them might be satisfied with general medicine rather than the most competitive specialty possible (although some argue that students choose specialties for the salary, maybe it also has to do with personalities). Medical students currently are not the type to be satisfied with anything less than the best, most, and highest they can be, which is causing a huge shortage of primary care and family doctors.
My medical school is supposedly non-competitive, and it emphasizes listening and personal connection more than most schools do. I am especially honored to be surrounded by many great physician role models who break with typical medical norms. My recent family practice mentor challenged my notions of American doctors by being extremely personal—making jokes and hugging patients when appropriate. She was so gifted, and she developed a deep bond with her patients—one that I dream of having someday too.
Yet my days as a first-year are normally far removed from this type of experience. This semester has been more fulfilling than the last, with anatomy and physical diagnosis being more hands-on and interactive than my previous courses. Still, I am bogged down by the exams, especially tonight. And I must remind myself constantly—like a mantra—that two years of classroom time is nothing compared to a career full of clinical joy and the opportunity to get deeply involved in people’s health—special and sacred work. And maybe… just maybe… I will hold on to some sense of happiness, balance, and idealism, even if medical school is just ok.
Tuesday, November 25, 2008
A Blood Thirsty Biochemical Explanation
During this summer, I was preparing myself for medical school by doing foolish things, like reading a popular teen vampire novel called Twilight. This week the Twilight movie premiered, although I haven’t had time to go see it yet. It was during this same week, at one of my final biochemistry lectures, that I learned something very interesting and unexpected… vampires are real!
What?! Why didn’t I hear about this disease before? Oh yes, that’s probably because people who actually have this genetic disease would rather not be associated with vampires. Given that vampires are considered evil by many people and were tortured outcasts of society, this is a touchy subject. The legend of vampires exists nevertheless, and there is a real medical explanation for how it may have originated. Therefore, I feel compelled to share this intriguing knowledge.
It makes sense that before science was able to explain strange genetic diseases and symptoms, people assumed they were connected with the devil or were the result of certain behaviors or diet. In fact, when I lived in rural Ukraine, pregnant women refused to cut their hair, thinking it would cause birth defects. Yet now we know that genetic defects are often the result of a tiny, unlikely mistake in the duplication and division of DNA. A single tiny rearrangement or deletion can cause a lot of strange things to happen; it’s amazing that most of us come out looking “normal.”
The possible explanation of the vampire legend is one of the seven types of porphyria, which are all diseases of heme synthesis (heme being the compound in blood that carries oxygen to keep us alive). Heme is synthesized in the liver and bone marrow. There are eight steps, four of which take place inside mitochondria and four in the cytoplasm. Therefore, there are multiple opportunities for defective enzymes and a buildup of intermediate products.
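The buildup logic is easy to see in a toy simulation. The sketch below is purely illustrative (hypothetical, dimensionless rate constants, nothing like real enzyme kinetics): a linear eight-step pathway in which the fifth enzyme runs at 10% of normal activity, so the intermediate feeding it piles up about tenfold while the rest of the pathway looks normal.

```python
# Toy model of a linear 8-step pathway (loosely inspired by heme
# synthesis). Rates are made-up, dimensionless numbers; the only point
# is that slowing one step makes its substrate accumulate upstream.

def simulate(rates, influx=1.0, steps=20000, dt=0.01):
    """Euler-integrate the pool size of each intermediate."""
    pools = [0.0] * len(rates)              # pools[i] feeds enzyme i
    for _ in range(steps):
        flux = [r * p for r, p in zip(rates, pools)]
        for i in range(len(pools)):
            inflow = influx if i == 0 else flux[i - 1]
            pools[i] += (inflow - flux[i]) * dt
    return pools

healthy   = simulate([1.0] * 8)
deficient = simulate([1.0, 1.0, 1.0, 1.0, 0.1, 1.0, 1.0, 1.0])  # enzyme 5 at 10%

for i, (h, d) in enumerate(zip(healthy, deficient), start=1):
    print(f"intermediate {i}: healthy {h:.2f} vs. deficient {d:.2f}")
# Intermediate 5 ends up ~10x higher; the other pools are unchanged.
```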
The interesting thing about the intermediates in heme synthesis is that they contain many conjugated rings (pyrrole rings, strictly speaking, rather than benzene). When these rings are oxidized, they can absorb light and appear colored (think of the purple-yellow-brown-greenish bruising when heme is broken down right under the skin). When intermediates build up, the result is purple urine, red and fluorescent teeth, and extreme sensitivity to light.
The type of porphyria believed to have started the legend of vampires is called porphyria cutanea tarda. It leads to a buildup of intermediates that causes a variety of problems. First, people suffer from extreme anemia, so they are very pale. They also have red, fluorescent teeth, which can look pretty strange. The intermediate products in their blood can oxidize and become insoluble when exposed to sunlight; this causes pain and blistering of the skin, so sufferers would try never to go outside in daylight. Also, as somebody may have figured out, drinking fresh blood could relieve the neurological symptoms. Because the defect was genetic, it stayed within families. And since these people were probably outcasts from society, they may have married cousins, causing the disease to propagate even faster.
None of this is proven, because nobody knows where any supposed “vampires” are buried; otherwise, it would be possible to dig them up and perform a genetic test. The explanation also doesn’t account for the fear of garlic, or why you have to put a stake through a vampire’s heart to kill it. And the disease is extremely rare. Still, it’s remarkable that we now understand the body well enough to go back in time and solve medical mysteries.
Sunday, November 23, 2008
Random observations
One of the best parts of higher education is having someone explain something to you in such a way that you exhale the quintessential "OH!" One such moment came when a computer science instructor stated the now-obvious "Random generation is a human construct." It is tempting to perceive the chaotic structure of the universe as random, but such a perception would be incorrect. The universe, like everything in it, is bound by what we understand as physical laws. Under given conditions, all objects behave according to their composition. Our ability to perceive the complex structure of the universe may be limited, but a larger perspective reveals that all things within it follow a specific order. In that sense I believe, to a certain degree, in pre-destiny. The universe is massive, but if you trace any event you will find that it was always meant to occur as it occurred, because every event is the effect of an infinite number of causes, each of which is itself an effect.
Anyway, randomness does not exist as a physical model, only as a conceptual model we have invented. Even so, there are certain logical flaws to randomness we need to observe. Ideally, we use the concept of randomness to imply chaos or surprise. The best model we have to demonstrate randomness is the coin toss. Heads or tails seems random enough, and statistical analysis shows that the results hold very close to 50-50. Such analysis is ignorant of all the conditions present, however: elevation in relation to air pressure, speed of rotation based on the kinetic energy of the flick, the shape of the thumb and the finger upon which the coin rests, the coin's initial resting state (heads or tails facing upward), the atmospheric conditions, the temperature of the air, and so on. If humans were capable of ascertaining every condition, we could perfectly predict the outcome of a coin toss. As such, our fallibility is what grants us a fair method for deciding the initial state of a football game.

Music is another realm where I encounter randomness, and often. In electronic composition we use random number generation regularly. It is a very effective tool for avoiding the predictability that years of tonal composition have instilled in us. Still, even the most elaborate randomness algorithm is never really random. The other day I was playing my iPod through my stereo. I had created a playlist that, given the limited technology of the iPod's "On-the-Go" listing, was not linked in a way any DJ would be proud of. There was no consideration of feel, tempo, key, or splice points, only a list of songs I enjoyed. Instead of putting forth the effort to make cognitive and meaningful transitions between songs, I employed the "shuffle" feature. As evidenced by my prior statement, I am obviously lazy, and as such I decided to leave "shuffle" on when listening to Hot Chip's "The Warning" album. I expected a random distribution of songs out of the album order; what gave me the greatest apprehension was that the shuffle played the songs in order: 1, 2, 3, 5, and then selected from the end of the album. The anticipation of unknown segues and an altered energy flow was further disturbed when my usual sense of order was inserted into a chaotic environment. In that sense, with the anticipation of chaos, the most chaotic occurrence was order.
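As a concrete aside, a computer's "shuffle" is a deterministic algorithm driven by a pseudorandom number generator: seed the generator with the same value twice and the "random" order repeats exactly. Here is a minimal Python sketch (the track names are placeholders, and whatever generator the iPod actually uses is unknown to me):

```python
import random

# Two generators seeded identically produce identical "random" shuffles:
# the surprise lives entirely in our ignorance of the seed and algorithm.
tracks = ["track 1", "track 2", "track 3", "track 4",
          "track 5", "track 6", "track 7", "track 8"]

rng_a = random.Random(2008)
rng_b = random.Random(2008)

shuffle_a = rng_a.sample(tracks, k=len(tracks))
shuffle_b = rng_b.sample(tracks, k=len(tracks))

print(shuffle_a)
print(shuffle_a == shuffle_b)   # True -- not random at all, just opaque
```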
It reminded me of a cartoon I watched as a kid, a Sesame Street video telling stories about all the characters. Oscar the Grouch was talking about how cold it was outside and said, "This hot chocolate is going to taste good and yucky." Oscar is a character with a fine appreciation for negative things. Hot chocolate is a foodstuff that the majority of its consumers find quite enjoyable. A person with a negative disposition should find an enjoyable beverage disgusting, yet that runs counter to the fact that a negative person thrives on negative experiences: any "yuckiness" present is a sought-after experience, and as such would be considered good. This began an inner dialogue about good and evil, and how these supposed polar opposites can be meaningless. In a sense, randomness is the same way. The creation of chaos is internal, in which case the superimposition of order upon your given state is the ultimate chaos. Or so my perceptions have led me to believe.
Wednesday, July 30, 2008
Insightfulness
For many regular M&M readers, the blog is a healthy way to avoid work. Situated on our desktops, only a mouse click away, it provides us with a welcome distraction from what we get paid (or get degrees) to do. However, despite the hours that I've probably dumped into reading and writing on here in short spurts while the work clock is ticking over the last 6 months (yes, we've been online that long!), I've always felt that it actually improves my productivity to have a little online sanctuary like this. That's right. Being able to get away for a few minutes every hour or so is refreshing and allows me - and I hope all of you - to return to the regularly scheduled program with new energy, focus, and insight.
A fascinating article in this week's The New Yorker is such a wonderful validation of the above premise that I have to share the insight on these pages, the very source of my productive distraction. "The Eureka Hunt" examines how and why epiphanies strike us at the neurological level, and it reveals some very interesting things about the way we process "ah-ha!" moments. In a nutshell: epiphanies are a different breed of idea than those arrived at through concerted effort. We don't consciously produce them; they just seem to arrive like lightning bolts, and when they do, there is a sense of metaphysical certitude that they are correct. They don't need to be fact-checked - we already know they are right, even as they are just emerging from the electrical storm of our brains.
Most of our rational, linear thought functions come from the left hemisphere of the brain. (I hope we get some comments on this from our resident neuroscientist Ben - I'm getting all this from one article!) Since much of rational thought and language processing originates there, people with damage to the right hemisphere can often seemingly function just fine. But when researchers look closer, it becomes clear that, despite functional use of language, reasoning skills, etc., those with right-hemisphere damage suffer from an inability to read nuance into words and into abstract problems. To really get a metaphor (or a subtle joke), you have to have communication across the corpus callosum. And this abstract interpretive ability is precisely the engine behind insight. In fact, if people verbalize what they are thinking while trying to solve a word puzzle - the kind whose answer arrives as a minor epiphany - they do significantly worse at solving it (a phenomenon called "verbal overshadowing"). In other words, focusing entirely with your logical mind on a problem stifles the insight that can crack the code.
A number of recent papers, drawing on experimental research using EEG and fMRI scans of subjects' brains as they attempt to solve riddles, show that very specific cortical areas light up when a "Eureka!" moment is involved. The basic finding is this: the left brain will process a problem as it does - linearly, logically - but the remote association that provides insight originates in the right hemisphere. Thus, if you concentrate really hard on a problem without relaxing your mind a bit to allow the quiet right hemisphere to have its occasional say, your chances of genuine creative insight are considerably diminished. This finding, researchers say, is a problem for the stimulant-addicted academic community (stimulants encourage left-brain concentration). Relaxation, then, is the key to epiphany.
I've always done my best cerebration at three times: while walking, while in the shower, and early in the morning, just after waking up. Amazingly, researchers have identified both the early morning and hot showers as common triggers for insight. (The positive mental effects of late night neighborhood strolls will certainly be discovered someday.) So next time you're stuck on something, put down the book and hop in the shower. (A coincidence that Archimedes was in the bath at the time of his insight?)
Einstein came up with his most brilliant ideas in moments of insight, not in front of the chalkboard doing equations. Richard Feynman, the Nobel Prize-winning physicist, came up with many of his greatest insights at the topless bar, where he would begin to shape his new ideas mathematically on a napkin. Henri Poincaré's seminal insight into non-Euclidean geometry came while he was stepping onto a bus. Of course, these stories aren't to say that these famous people hadn't been deeply contemplating their problems well before the koan was answered; in fact, this is why scientists suggest that epiphanies come with a sense of certitude - the left hemisphere has already done the math, it just couldn't come up with the precise solution.
So, good M&M readers, a bit of validation for the day dreaming that has brought you here today. Next time you get distracted and your mind wanders, let it.
Friday, July 18, 2008
Singing Fish
The field of zoological musicology, touched upon intermittently here in the annals of M&M, has had a new breakthrough: the singing toadfish. Follow the link to read the story and see a video.
Wednesday, June 11, 2008
Arctic Sea Ice
In response to Ruxton's challenge, I'd like to post about a topic I have become pretty interested in over the last couple of years - arctic sea ice.
I know what you are thinking - jeez, Chris, could you pick a more boring topic to be interested in? Yes, I could, so don't tempt me. But seriously, major changes are afoot up in the north, and we are now entering the summer melt season. Last summer I read the weekly updates from the National Snow and Ice Data Center during the melt period, and the data is fascinating. Records kept being broken by the week, and a note of awe crept into the normally emotionless tone of the scientists' reports.
How bad was last year's melt? Here is a summary from the National Snow and Ice Data Center's October 1, 2007 press release:
"Arctic sea ice during the 2007 melt season plummeted to the lowest levels since satellite measurements began in 1979. The average sea ice extent for the month of September was 4.28 million square kilometers (1.65 million square miles), the lowest September on record, shattering the previous record for the month, set in 2005, by 23 percent (see Figure 1). At the end of the melt season, September 2007 sea ice was 39 percent below the long-term average from 1979 to 2000 (see Figure 2). If ship and aircraft records from before the satellite era are taken into account, sea ice may have fallen by as much as 50 percent from the 1950s. The September rate of sea ice decline since 1979 is now approximately 10 percent per decade, or 72,000 square kilometers (28,000 square miles) per year (see Figure 3)."How big is the difference between the 2005 record low ice extent and 2007?
(NSIDC Oct. 1, 2007 Press Release)
"The minimum for 2007 shatters the previous five-day minimum set on September 20–21, 2005, by 1.19 million square kilometers (460,000 square miles), roughly the size of Texas and California combined, or nearly five United Kingdoms."
(NSIDC, September 20th, 2007)
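Those percentages are worth a quick sanity check. The short script below uses only the numbers quoted above plus simple arithmetic (NSIDC's own methodology is of course more careful than this back-of-the-envelope version), and the figures do hang together:

```python
# Back-of-the-envelope check of the NSIDC numbers quoted above.
sep_2007 = 4.28e6          # September 2007 mean extent, km^2 (quoted)
below_2005_record = 0.23   # 23% below the 2005 record (quoted)
below_longterm = 0.39      # 39% below the 1979-2000 average (quoted)

sep_2005 = sep_2007 / (1 - below_2005_record)
longterm_mean = sep_2007 / (1 - below_longterm)
print(f"implied 2005 record:    {sep_2005 / 1e6:.2f} million km^2")   # ~5.56
print(f"implied 1979-2000 mean: {longterm_mean / 1e6:.2f} million km^2")  # ~7.02

# The stated trend, ~10% per decade, measured against that mean:
per_year = longterm_mean * 0.10 / 10
print(f"10%/decade of the mean = {per_year:,.0f} km^2/year")
# ~70,000, close to the quoted 72,000 km^2/year figure.
```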
This year looks likely to break last year's melt records. One major factor in play is that, because of last year's record melt, the new ice that formed over the winter is especially thin. "NIC scientist Todd Arbetter suggests that much of the first-year ice is likely to melt by the end of summer, saying that despite the total ice extent appearing normal, the relative amount of multi-year ice going into this summer is very low when compared to climatological averages" (NSIDC News, June 3, 2008).
So why should we care? Well, besides scientific curiosity, scientists agree that arctic sea ice is a potent indicator of climate change. As in other areas of climate research, things have been progressing in the arctic much faster than anyone predicted. A quick search shows this trend:
Nov. 4th, 2004: "Global warming is causing the Arctic ice-cap to melt at such an unprecedented rate that by the summer of 2070 it may have no ice at all . . ."
Dec. 12th, 2006: "The recent retreat of Arctic sea ice is likely to accelerate so rapidly that the Arctic Ocean could become nearly devoid of ice during summertime as early as 2040 . . ."
Dec. 12th, 2007: "Arctic summers ice free by 2013."
Some reports suggest that the arctic could be ice-free this summer. The good news is that since arctic ice floats on the water, the melting does not directly contribute to a rise in sea levels. The bad news is that the effects of the melting are not limited to the ice on the water. New science suggests that "[p]ermafrost as far as 900 miles inland melts at more than three times the usual rate when the sea ice melts rapidly, as it did last summer . . ." The article goes on to state that:
"Melting permafrost – frozen soil – would release massive amounts of carbon. Arctic soils hold 30% of the carbon currently stored in the world's soils. The result of melting: carbon dioxide and methane would enter the atmosphere at a rate to rival thousands of factories and power plants running at full steam. Global warming would increase, causing additional melting, which would result in additional emissions, additional warming, additional melting .... You get the point."I feel that within the next few years we will experience some sort of natural event that brings home the realities of climate change to people on a visceral, rather than purely intellectual, level. An ice-free arctic would be a powerful symbol and could perhaps be the wake-up call we need to actually treat climate change as the existential threat it is.
I'll be following this topic over the next several months, but if you are interested check out the NSIDC's RSS feed.