[222004090010] |You're Wrong
[222004090020] |John Ioannidis has been getting a lot of press lately.
[222004090030] |He reached the cover of the last issue of The Atlantic Monthly.
[222004090040] |David Dobbs wrote about him here (and a few years ago, here).
[222004090050] |This is the doctor known for his claim that around half of medical studies are false -- that is, about 80% of non-randomized trials and even 25% of randomized trials.
[222004090060] |These are not just dinky findings published in dinky journals; of 49 of the most highly-regarded papers published over a 13-year period, 34 of the 45 that claimed to have found effective treatments had been retested, and 41% of those retests failed to replicate the original result.
[222004090070] |Surprised?
[222004090080] |Quoting the Atlantic Monthly:
[222004090090] |Ioannidis initially thought the community might come out fighting.
[222004090100] |Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle...
[222004090110] |Well, it's not surprising.
[222004090120] |The appropriate analog in psychology is the randomized trial, 25% of which (in medicine) turn out to be false according to this research (which hopefully isn't itself false).
[222004090130] |As Ioannidis has detailed, the system is set up to reward false positives.
[222004090140] |Journals -- particularly glamour mags like Science -- preferentially accept surprising results, and the best way to have a surprising result is to have one that is wrong.
[222004090150] |Incorrect results happen: "statistically significant" means "has only a 5% probability of happening by random chance."
[222004090160] |This means (in theory) that 5% of all experiments published in journals should reach the wrong conclusions.
[222004090170] |If journals are biased in favor of accepting exactly those 5%, then the proportion should be higher.
[222004090180] |There are other factors at work.
[222004090190] |Some scientists are sloppier than others, and many of the ways in which one can be sloppy lead to significant and/or surprising results.
[222004090200] |For instance, 5% of experiments have false positives.
[222004090210] |There are labs that will run the same experiment 6 times with minor tweaks.
[222004090220] |There is a (1-.95^6) * 100 = 26.5% chance that one of those will have a significant result.
[222004090230] |The lab may then publish only that final experiment and not report the others.
[222004090240] |If sloppy results lead to high-impact publications, survival of the fittest dictates that sloppy labs will reap the accolades, get the grant money, tenure, etc.
[222004090250] |Keep in mind that often many different labs are trying to do the same thing.
[222004090260] |For instance, in developmental psychology, one of the deep questions is what is innate?
[222004090270] |So many labs are testing younger and younger infants, trying to find evidence that these younger infants can do X, Y or Z. If 10 labs all run the same experiment, there's a (1-.95^10) * 100 = 40.1% chance of one of the labs finding a significant result.
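To make that arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original argument) of the chance of at least one false positive when a true null effect is tested k independent times at the .05 threshold:

```python
# Chance of at least one false positive across k independent tests
# of a true null hypothesis, each run at alpha = 0.05.
def family_wise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 6, 10, 14):
    print(f"{k:2d} attempts -> {family_wise_error(k):.1%} chance of a 'significant' result")

# 6 attempts  -> 26.5%  (one lab re-running its study with minor tweaks)
# 10 attempts -> 40.1%  (ten labs racing to test the same hypothesis)
```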
[222004090280] |Countervailing Forces
[222004090290] |Thus, there are many incentives to publish incorrect data.
[222004090300] |Meanwhile, there are very few disincentives to doing so.
[222004090310] |If you publish something that turns out not to replicate, it is very unlikely that anyone will publish a failure to replicate -- simply because it is very difficult to publish a failure to replicate.
[222004090320] |If someone does manage to publish such a paper, it will certainly be in a lower-profile journal (which is, incidentally, a disincentive to publishing such work to begin with).
[222004090330] |Similarly, consider what happens when you run a study and get a surprising result.
[222004090340] |You could replicate it yourself to make sure you trust the result.
[222004090350] |That takes time, and there's a decent chance it won't replicate.
[222004090360] |If you do replicate it, you can't publish the replication (I tried to in a recent paper submission, and a reviewer insisted that I remove reference to the replication on account of it being "unnecessary").
[222004090370] |If the replication works, you'll gain nothing.
[222004090380] |If it fails, you won't get to publish the paper.
[222004090390] |Either way, you'll have spent valuable time you could have spent working on a different study leading to a different paper.
[222004090400] |In short, there are good reasons to expect that 25% of studies -- particularly in the high-profile journals -- are un-replicable.
[222004090410] |What to do?
[222004090420] |Typically, solutions proposed involve changing attitudes.
[222004090430] |The Atlantic Monthly suggests:
[222004090440] |We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right.
[222004090450] |That's because being wrong in science is fine, and even necessary ...
[222004090460] |But as long as careers remain contingent on producing a stream of research that's dressed up to seem more right than it is, scientists will keep delivering exactly that.
[222004090470] |I've heard this idea expressed elsewhere.
[222004090480] |In the aftermath of Hausergate, a number of people suggested that a factor was the pressure-cooker that is the Harvard tenure process, and that Harvard needs to stop putting so much pressure on people to publish exciting results.
[222004090490] |So the idea is that we should stop rewarding scientists for having interesting results, and instead reward the ones who have uninteresting results?
[222004090500] |Journals should publish only the most staid research, and universities should award tenure not based on the number of highly-cited papers you have written, but based on how many papers you've written which have never been cited?
[222004090510] |I like that idea.
[222004090520] |I can run a boring study in a few hours and write it up in the afternoon: "Language Abilities in Cambridge Toddlers are Unaffected by Presence or Absence of Snow in Patagonia."
[222004090530] |That's boring and almost certainly true.
[222004090540] |And no one will ever cite it.
[222004090550] |Seriously, though, public awareness campaigns telling people to be more responsible are great, and sometimes they even help, but I don't know how much can be done without changing the incentive structure itself.
[222004090560] |Reputation
[222004090570] |I don't have a solution, but I think Ioannidis again points us towards one.
[222004090580] |He found that papers continue to be cited long after they have been convincingly and publicly refuted.
[222004090590] |I was discussing this issue with a colleague some time back and mentioned a well-known memory paper that nobody can replicate.
[222004090600] |Multiple failures-to-replicate have been published.
[222004090610] |Yet I still see it cited all the time.
[222004090620] |The colleague said, "Wow!
[222004090630] |I wish you had told me earlier.
[222004090640] |We just had a student spend two years trying to follow up that paper, and the student just couldn't get the method to work."
[222004090650] |Never mind that researchers rarely bother to replicate published work -- even if they did, we have no mechanism for tracking which papers have been successfully replicated and which papers can't be replicated.
[222004090660] |Tenure is awarded partly on how often your work has been cited, and we have many nice, accessible databases that will tell you how often a paper has been cited.
[222004090670] |Journals are ranked by how often their papers are cited.
[222004090680] |What if we rewarded researchers and journals based on how well their papers hold up to replication?
[222004090690] |Maybe it would help, maybe it wouldn't, but without a mechanism for tracking this information, this is at best an intellectual enterprise.
[222004090700] |Even if such a database wasn't ultimately useful in decreasing the number of wrong papers, at least we'd know which papers were wrong.
[222004100010] |Words and non-words
[222004100020] |"...the modern non-word 'blogger'..." -- Dr. Royce Murray, editor of the journal Analytic Chemistry.
[222004100030] |"209,000,000 results (0.21 seconds)" -- Google search for the "non-word" blogger.
[222004100040] |------------ There has been a lot of discussion about Royce Murray's bizarre attack on blogging in the latest JAC editorial (the key sentence: I believe that the current phenomenon of "bloggers" should be of serious concern to scientists).
[222004100050] |Dr. Isis has posted a nice take-down of the piece focusing on the age-old testy relationship between scientists and journalists.
[222004100060] |My bigger concern with the editorial is that it is clear that Murray has no idea what a blog is, yet feels justified in writing an article about blogging.
[222004100070] |Here's a telling sentence:
[222004100080] |Bloggers are entrepreneurs who sell “news” (more properly, opinion) to mass media: internet, radio, TV, and to some extent print news.
[222004100090] |In former days, these individuals would be referred to as “freelance writers”, which they still are; the creation of the modern non-word “blogger” does not change the purveyor.
[222004100100] |Wrong!
[222004100110] |Wrong!
[222004100120] |Wrong!
[222004100130] |A freelance writer does sell articles to established media entities.
[222004100140] |Bloggers mostly write for their own blog (hence the "non-word" blog-ger).
[222004100150] |There are of course those who are hired to blog for major media outlets like Scientific American or Wired, but then they are essentially columnists (in fact, many of the columnists at The New York Times have NYTimes blogs at the request of the newspaper).
[222004100160] |This magnifies, for the lay reader, the dual problems in assessing credibility: a) not having a single stable employer (like a newspaper, which can insist on credentials and/or education background) frees the blogger from the requirement of consistent information reliability ...
[222004100170] |Who are the fact-checkers now?
[222004100180] |Wait, newspapers don't insist on credentials and don't fact-check the stories they get from freelancers?
[222004100190] |Why is Murray complaining about bloggers, then?
[222004100200] |In any case, it's not like journals like Analytic Chemistry do a good job of fact-checking what they publish or that they stop publishing papers by people whose results never replicate.
[222004100210] |Journal editors living in glass houses...
[222004100220] |This focus on credentials is a bit odd -- I thought truth was the only credential a scientist needed -- and in any case seriously misplaced.
[222004100230] |I challenge Murray to find a popular science blog written by someone who is neither a fully-credentialed scientist writing about his/her area of expertise, nor a well-established science journalist working for a major media outlet.
[222004100240] |Are there crack-pot bloggers out there?
[222004100250] |Sure.
[222004100260] |But most don't have much of an audience (certainly, their audience is smaller than the fact-checked, establishment media-approved Glenn Beck).
[222004100270] |Instead, we have a network of scientists and science enthusiasts discussing, analyzing and presenting science.
[222004100280] |What's to hate about that?
[222004110010] |A Frog at the Bottom of a Well
[222004110020] |My college had a graduate admissions counselor, with whom I consulted about applying to graduate school.
[222004110030] |Unfortunately, different fields (math, chemistry, literature, psychology) use completely different methods of selecting graduate students (and, in some sense graduate school itself is a very different beast depending on the field).
[222004110040] |My counselor didn't know anything about psychology, so much of the information I was given was dead wrong.
[222004110050] |My graduate school also provides a lot of support for applying for jobs.
[222004110060] |This week, there is a panel on "The View from the Search Committee," which includes as panelists professors from Sociology, Romance Languages & Literatures, and Organismic and Evolutionary Biology.
[222004110070] |That is, none of them are from Psychology.
[222004110080] |I do know that different fields recruit junior faculty in very different ways (for instance, linguistics practices a form of speed-dating at conferences as a first round of interviews, while psychology has no such system).
[222004110090] |So...do I go?
[222004110100] |Keep in mind that I get lots of advice from faculty in my own department (and also from friends at other psych departments who have recently gone through the process).
[222004110110] |That is, how likely is it that the experience of these three professors will map on to the process I will actually go through?
[222004110120] |How likely is it that a one-hour panel can cover all the different variants of the process?
[222004110130] |How likely is it that there is information that would be relevant to anyone applying to any department that isn't obvious or something I am likely to already know?
[222004110140] |Thoughts?
[222004110150] |-------- The title of this post comes from an old proverb about a frog sitting at the bottom of a well, thinking that the patch of blue above is the whole world.
[222004110160] |Often (always?) we don't realize just how limited our own range of experience is. photo: e_monk
[222004120010] |Question: What are sisters good for?
[222004120020] |Answer: increasing your score on a 13-question test of happiness by 1 unit on one of the 13 questions.
[222004120030] |A recent study of the effect of sisters on happiness has been getting a lot of press since it was featured at the New York Times.
[222004120040] |It's just started hitting my corner of the blogosphere, with Mark Liberman filing an entry at Language Log early in the evening.
[222004120050] |On the whole, he was unimpressed.
[222004120060] |The paper didn't report data in sufficient detail to really get a sense of what was going on, so he tried to extrapolate based on what was in fact reported.
[222004120070] |His best estimate was that having a sister accounted for 0.4% of the variance in people's happiness.
[222004120080] |This is a long way from the statement that "Adolescents with sisters feel less lonely, unloved, guilty, self-conscious and fearful", which is how ABC News characterized the study's findings, or "Statistical analyses showed that having a sister protected adolescents from feeling lonely, unloved, guilty, self-conscious and fearful", which is what the BYU press release said ...
[222004120090] |Such statements are true if you take "A's are X-er than B's" to mean simply that a statistical analysis showed that the mean value of a sample of A's was higher than the mean value of a sample of B's, by an amount that was unlikely to be the result of sampling error.
[222004120100] |Only an hour later, the ever wide-eyed Jonah Lehrer wrote
[222004120110] |There's a surprisingly robust literature on the emotional benefits of having sisters.
[222004120120] |It turns out that having at least one female sibling makes us happier and less prone to depression...
[222004120130] |I think this demonstrates nicely the added value of blogging, particularly science blogging.
[222004120140] |Journalists (like Lehrer) are rarely in a position to pick apart the methods of a study, whereas scientist bloggers can.
[222004120150] |I know many people miss the old media world, but the new one is exciting.
[222004120160] |------ For more thoughts on science blogging, check this and this.
[222004130010] |Did your genes make you liberal?
[222004130020] |"The new issue of the Journal of Politics, published by Cambridge University, carries the study that says political ideology may be caused by genetic predisposition." --- RightPundits.com
[222004130030] |"Scientists find 'liberal gene.'" --- NBC San Diego
[222004130040] |"Liberals may owe their political outlook partly to their genetic make-up, according to new research from the University of California, San Diego, and Harvard University.
[222004130050] |Ideology is affected not just by social factors, but also by a dopamine receptor gene called DRD4." -- University press release
[222004130060] |As in the case yesterday of the study about sisters making you happy, these statements are all technically true (ish -- read below) but deeply misleading.
[222004130070] |The study in question looks at the effects of number of friends and the DRD4 gene on political ideology.
[222004130080] |Specifically, they asked people to self-rate on a 5-point scale from very conservative to very liberal.
[222004130090] |They tested for the DRD4 gene.
[222004130100] |They also asked people to list up to 5 close friends.
[222004130110] |The number of friends one listed did not significantly predict political ideology, nor did the presence or absence of the DRD4 gene.
[222004130120] |However, there was a significant (p=.02) interaction ... significant, but apparently tiny.
[222004130130] |The authors do not discuss effect size, but we can try to piece together the information by looking at the regression coefficients.
[222004130140] |An estimated coefficient means that if you increase the value of the predictor by 1, the outcome variable increases by the size of the coefficient.
[222004130150] |So imagine the coefficient between the presence of the gene and political orientation was 2.
[222004130160] |That would mean that, on average, people with the gene score 2 points higher (more liberal) on the 5-point political orientation scale.
[222004130170] |The authors seem to be reporting standardized coefficients, which means that we're looking at increasing values by one standard deviation rather than by one point.
[222004130180] |The coefficient of the significant interaction is 0.04.
[222004130190] |This means, roughly, that as the number of friends and the presence of the gene increase by one standard deviation, political orientation scores increase by 0.04 standard deviations.
[222004130200] |The information we'd need to correctly interpret that isn't given in the paper, but a reasonable estimate is that someone with one extra friend and the gene would score anywhere from .01 to .2 points higher on the scale (remember, 1=very conservative, 2=conservative, 3=moderate, 4=liberal, 5=very liberal).
[222004130210] |The authors give a little more information:
[222004130220] |For people who have two copies of the [gene], an increase in number of friendships from 0 to 10 friends is associated with increasing ideology in the liberal direction by about 40% of a category on our five-category scale.
[222004130230] |People with no copies of the gene were unaffected by the number of friends they had.
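For readers who want the back-of-the-envelope math, here is a small Python sketch of how one converts a standardized coefficient into scale points; the standard deviation used below is an assumption for illustration, not a value reported in the paper:

```python
# Rough interpretation of a standardized regression coefficient.
# The SD of the ideology scale is an ASSUMPTION for illustration;
# the paper does not report it here.
beta_std = 0.04        # standardized interaction coefficient reported in the paper
sd_ideology = 1.0      # assumed SD of the 1-5 ideology scale

# A one-SD increase in the predictor shifts the outcome by beta_std SDs:
shift_in_points = beta_std * sd_ideology
print(f"Predicted shift: {shift_in_points:.2f} points on the 1-5 scale")
# Under this assumption, that's 0.04 of a point -- a sliver of the distance
# between "moderate" and "liberal".
```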
[222004130240] |None of what I wrote above detracts from the theoretical importance of the paper.
[222004130250] |Identifying genes that influence behavior, even just a tiny bit, is important as it opens windows into the underlying mechanisms.
[222004130260] |And to their credit, the authors are very guarded and cautious in their discussion of the results.
[222004130270] |The media reports -- fed, no doubt, by the university press release -- have focused on the role of the gene in predicting behavior.
[222004130280] |It should be clear that the gene is next to useless in predicting, for instance, who somebody is going to vote for.
[222004130290] |Does that make it a gene for liberalism?
[222004130300] |Maybe.
[222004130310] |I would point out one other worry about the study, which even the authors point out.
[222004130320] |They tested a number of different possible predictors.
[222004130330] |The chance of getting a false positive increases with every statistical test you run, and they do not appear to have corrected for multiple comparisons.
[222004130340] |Even with 2,000 participants (which is a large sample), the p-value for the significant interaction was only p=.02, which is significant but not very strong, so the risk that this will not replicate is real.
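For illustration, here is the simplest possible multiple-comparisons adjustment (Bonferroni) in Python; the number of tests is an assumption on my part, since the paper doesn't report a corrected analysis:

```python
# Bonferroni correction: divide the alpha threshold by the number of tests run.
# The number of tests below is an ASSUMPTION for illustration only.
p_observed = 0.02   # p-value of the reported interaction
m_tests = 5         # assumed number of predictors/tests examined
alpha = 0.05

threshold = alpha / m_tests
verdict = "still significant" if p_observed < threshold else "no longer significant"
print(f"Corrected threshold: {threshold:.3f}; p = {p_observed} is {verdict}")
# With five tests, the threshold drops to 0.01, and p = .02 no longer clears it.
```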
[222004130350] |As the authors say, "the way forward is to seek replication in different populations and age groups."
[222004140010] |Does Global Warming Exist, and Other Questions We Want Answered
[222004140020] |This week, I asked 101 people on Amazon Mechanical Turk both whether global temperatures have been increasing due to human activity AND what percentage of other people on Amazon Mechanical Turk would say yes to the first question.
[222004140030] |78% answered yes to the first question.
[222004140040] |Here are the answers to the second, broken down by whether the respondent did or did not believe in man-made global warming:
[222004140050] |Question: How many other people on Amazon Mechanical Turk believe global temperatures have been increasing due to human activity?
[222004140060] |             Average    1st Quartile-3rd Quartile
Believers      72%        60%-84%
Denialists     58%        50%-74%
Correct        78%

Notice that those who believe global warming is caused by human activity are much better at estimating how many other people will agree than are those who do not.
[222004140070] |Interestingly, the denialists' estimate is much closer to the rate of belief among all Americans than to the rate among Turkers (who are mostly but not exclusively American, and are certainly a non-random sample).
[222004140080] |So what?
[222004140090] |Why should we care?
[222004140100] |More importantly, why did I do this experiment?
[222004140110] |A major problem in science/life/everything is that people disagree about the answers to questions, and we have to decide who to believe.
[222004140120] |A common-sense strategy is to go with whatever the majority of experts says.
[222004140130] |There are two problems, though: first, it's not always easy to identify an expert, and second, the majority of experts can be wrong.
[222004140140] |For instance, you might ask a group of Americans what the capital of Illinois or New York is.
[222004140150] |Although in theory, Americans should be experts in such matters (it's usually part of the high school curriculum), in fact the majority answer in both cases is likely to be incorrect (Chicago and New York City, rather than Springfield and Albany).
[222004140160] |This was true even in a recent study of MIT and Princeton undergraduates, who in theory are smart and well-educated.
[222004140180] |So how should we decide which experts to listen to, if we can't just go with "majority rules"?
[222004140190] |A long chain of research suggests an option: ask each of the experts to predict what the other experts would say.
[222004140200] |It turns out that the people who are best at estimating what other people's answers will be are also most likely to be correct.
[222004140210] |(I'd love to cite papers here, but the introduction here is coming from a talk I attended earlier in the week, and I don't have the citations in my notes.)
[222004140220] |In essence, this is an old trick: ask people two questions, one of which you know the answer to and one of which you don't.
[222004140230] |Then trust the answers on the second question that come from the people who got the first question right.
[222004140240] |This method has been tested on a number of questions and works well.
[222004140250] |It was actually tested on the state capital problem described above, and it does much better than a simple "majority rules" approach.
[222004140260] |The speaker at the talk I went to argued that this is because people who are better able to estimate the average answer simply know more and are thus more reliable.
[222004140270] |Another way of looking at it though (which the speaker mentioned) is that someone who thinks Chicago is the capital of Illinois likely isn't considering any other possibilities, so when asked what other people will say guesses "Chicago."
[222004140280] |The person who knows that in fact Springfield is the capital probably nonetheless knows that many people will be tricked by the fact that Chicago is the best-known city in Illinois and thus will correctly guess lots of people will say Chicago but that some people will also say Springfield.
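As a toy illustration of the two-question trick (this is my own minimal sketch with made-up responses, not the algorithm from the papers mentioned in the talk): trust the answers to the unknown question that come from respondents who got the known question right.

```python
# Toy version of the two-question trick: each (hypothetical) respondent
# answers a check question with a known answer and a target question
# whose answer we want. We compare a simple majority vote against a
# vote restricted to respondents who got the check question right.
from collections import Counter

# (check_answer, target_answer) -- made-up data for illustration
responses = [
    ("Springfield", "yes"),
    ("Springfield", "yes"),
    ("Chicago",     "no"),
    ("Chicago",     "no"),
    ("Chicago",     "no"),
]
known_answer = "Springfield"   # the capital of Illinois

trusted = [target for check, target in responses if check == known_answer]
majority_all = Counter(t for _, t in responses).most_common(1)[0][0]
majority_trusted = Counter(trusted).most_common(1)[0][0]

print("Majority of everyone:      ", majority_all)      # "no"
print("Majority of trusted subset:", majority_trusted)  # "yes"
```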
[222004140290] |Harder Questions
I wondered, then, how well it would work for a question where everybody knows that there are two possible answers.
[222004140300] |So I surveyed Turkers about Global Warming.
[222004140310] |Believers were much better at estimating how many believers there are on Turk than were denialists.
[222004140320] |Obviously, there are a few ways of interpreting this.
[222004140330] |Perhaps denialists underestimate both the proportion of climate scientists who believe in global warming (~100%) and the percentage of normal people who believe in global warming, and thus they think the evidence is weaker than it is. Alternatively, denialists don't believe in global warming themselves, have trouble accepting that other people do, and thus lower their estimates.
[222004140340] |The latter proposal, though, would suggest that believers should over-estimate the percentage of people who believe in global warming, which is not in fact the case.
[222004140350] |Will this method work in general?
[222004140360] |In some cases, it won't.
[222004140370] |If you asked expert physicists in 1530 about quantum mechanics, presumably none of them would believe it and all would correctly predict that none of the others would believe it.
[222004140380] |In other cases, it's irrelevant (near 100% of climatologists believe in man-made global warming, and I expect they all know that they all believe in it).
[222004140390] |More importantly, the method may work well for some types of questions and not others.
[222004140400] |I heard in this talk that researchers have started using the method to predict product sales and outcomes of sports matches, and it actually does quite well.
[222004140410] |I haven't seen any of the data yet, though.
[222004140420] |------For more posts on science and politics, click here and here.
[222004150010] |Vote!
[222004150020] |The best thing I can say about the last two years is that Democrats have made real investments in science.
[222004150030] |After eight years of stagnant or falling funding, it was like a breath of fresh air.
[222004150040] |Luckily, Republicans are back to suck the air (and life) out of us again.
[222004150050] |After the complete clusterfuck that was the Bush administration, I don't know why anyone would be willing to call themselves a Republican, much less vote for one.
[222004150060] |But if I knew everything about human nature, I wouldn't have to run experiments.
[222004150070] |I wish Obama and the Dems had been doing more to fix up the wreckage left behind by Bush, but at least they don't seem hell-bent on destroying the economy.
[222004150080] |I hope you all enjoyed the respite.
[222004150090] |In the meantime, vote.
[222004150100] |Just in case. ----For previous posts and more details on Republican and Democratic science policies, read this, this, this and this, among others.
[222004160010] |Seriously ambiguous pronouns
[222004160020] |The intro to Terminator: The Sarah Connor Chronicles goes something like:
[222004160030] |In the future, my son will lead humanity in the war against Skynet, a computer system programmed to destroy the world.
[222004160040] |It has sent machines back through time, some to kill him, one to protect him.
[222004160050] |The only reading I get on this is that "it" refers to Skynet, and thus Skynet has sent machines back to kill John Connor as well as protect him.
[222004160060] |Now, I'm only a few episodes into Season 1 on Netflix Instant, so perhaps I'm about to find out that Skynet is playing some weird kind of Robert Jordan game, but I suspect rather that the writers wanted "it" to refer to "the war".
[222004160070] |I can get that reading if I squint, but it seems incredibly unnatural.
[222004170010] |Boston University Conference on Language Development: Day 1
[222004170020] |BUCLD is one of my favorite conferences, not least because it takes place every year just across the river.
[222004170030] |This year has been shaping up to be a particularly good year, if the first day is any indication.
[222004170040] |Ben Ambridge (w/Julian Pine & Caroline Rowland) gave an excellent talk on learning semantic restrictions on verb alternations.
[222004170050] |Of all the work Steve Pinker has done, I think his verb alternation work is the least well-known, but it's also probably my favorite. It's nice to see someone systematically revisiting these issues, and I think Ambridge is making some important contributions.
[222004170060] |Kenny Smith (w/Elizabeth Wonnacott) presented a really neat proof-of-concept involving language evolution, showing that you can get robust regularization of linguistic systems in a community of speakers even if none of the individual learners/speakers have strong biases to regularize the input.
[222004170070] |This was a really fun talk; one of those talks that makes one reconsider one's life choices ("should I be studying language evolution?").
[222004170080] |Dea Hunsicker (w/Susan Goldin-Meadow) presented new analyses of an old home-sign corpus, looking at evidence that this particular home sign had noun phrases.
[222004170090] |Home-sign, for those who don't know it, is an ad-hoc mini sign language often developed by deaf children who don't have exposure to a developed sign language.
[222004170100] |If I had to pick a best talk, I'd pick Erin Conwell's talk (w/Tim O'Donnell & Jesse Snedeker) on the dative alternation, in which she sketched an explanation of why, although double-object constructions are overall more frequent than prepositional-object constructions, the latter seem to be more productive in early child language.
[222004170110] |But I may be biased here in that Erin is a post-doc in the same lab as me.
[222004170120] |There were a number of other good talks today that I saw -- and many that I didn't -- which deserve mention.
[222004170130] |I'd write more, but it's late, and there's another full day coming up tomorrow.
[222004180010] |New Language Experiment for Bilinguals
[222004180020] |I'm not sure I've ever blogged about a conference past the first day.
[222004180030] |I'm usually too tired by the second day.
[222004180040] |BUCLD is particularly grueling, running over 12 hours on the first day and near 12 hours on the second.
[222004180050] |Plus the parties.
[222004180060] |I do want to point folks to one thing: Thomas Roeper, Barbara Zurer Pearson and Margaret Grace, all of the University of Massachusetts, are running an interesting study on quantifiers (words like all, some, each, and most).
[222004180070] |One interesting thing about this study is that while language researchers very often exclude non-native speakers and bilinguals, the researchers are very interested in comparing results from native and non-native speakers of English.
[222004180080] |Right now, they're looking for people who learned some language other than English prior to learning English.
[222004180090] |The study is here.
[222004180100] |They are particularly interested right now in getting data from non-native English speakers.
[222004180110] |There is a raffle that participants can win (details are on the site).
[222004190010] |Bad News for Science Funding
[222004190020] |NIH expects to have to cut the percentage of grant applications that are funded from 20% to a historic low of 10%.
[222004190030] |Let's point out for the moment that 20% was not very high to begin with, and 10% is rough.
[222004190040] |The expected outcome is that some labs will close, and those that don't will have to do less research, if for no other reason than that they will spend more time writing grants and less time doing real work (guess who pays the researchers' salaries while they write grants: NIH.
[222004190050] |So this also means that less of the money in the remaining grants will go to actual research).
[222004190060] |The reason for the expected cutback is the Republican vow to cut discretionary civilian spending to 2008 levels.
[222004190070] |I understand living within one's means.
[222004190080] |I have a fairly frugal household (in graduate school, my wife attended a university-sponsored seminar on how to manage on a graduate student budget, only to discover that the recommended "austerity" budget was considerably more lavish than ours; we promptly started eating out more).
[222004190090] |But focusing on discretionary spending seems like someone $100,000 in debt clipping coupons: it's maybe good PR but as a solution to the problem, it's hopeless.
[222004190100] |This graph says it all:
[222004190110] |Go ahead and cut all discretionary spending: you get a 16% reduction in the budget (which is in the neighborhood of our current deficit) at considerable cost.
[222004190120] |So maybe the coupon example isn't the right one.
[222004190130] |This is someone who, with a $100,000 debt, lets his teeth rot in his mouth because he's saving money on toothpaste.
[222004200010] |Understanding and Curing Myopic Voting
[222004200020] |The abstract from a recent talk by Gabriel Lenz of MIT:
[222004200030] |Retrospective voting is central to theorizing about democracy.
[222004200040] |Given voters’ ignorance about politics and public policy, some argue that it is democracy's best defense.
[222004200050] |This defense, however, assumes citizens are competent evaluators of incumbent politicians' performance.
[222004200060] |Although little research has investigated this assumption, voters' retrospective assessments in a key domain, the economy, appear flawed.
[222004200070] |They overweight election-year income growth in presidential elections, ignoring cumulative growth under the incumbent.
[222004200080] |In this paper, I present evidence that this myopia arises from a more general “end bias” in retrospective assessments.
[222004200090] |Using a three-year panel survey, I show that citizens' memories of the past economy are inconsistent with their actual experience of the economy as they reported it in earlier interviews.
[222004200100] |They fail to remember the past correctly in part because the present shapes their perceptions of the past.
[222004200110] |I then show similar behavior in the lab.
[222004200120] |When participants evaluate economic and crime data, I again find that election-year performance shapes perceptions of overall performance, even under conditions where the election year should not be more informative.
[222004200130] |Finally, I search for and appear to find a cure.
[222004200140] |Presenting participants with cumulative information on performance (e.g., total income growth or total rise in murders during incumbents’ terms) cures this myopia.
[222004200150] |On one hand, these results are troubling for democracy because they confirm citizens’ incompetence at retrospection.
[222004200160] |On the other hand, they point to a remedy, one that candidates and the news media could adopt.
[222004200170] |That's a remedy as long as the candidates and news media don't simply lie about the facts.
[222004200180] |Good luck with that one.
[222004220010] |Broken but not yet Dead
[222004220020] |I became fairly ill on my last trip to Russia in August.
[222004220030] |The disease itself was fairly nasty if generally treatable, though it came with a not insignificant chance of developing fatal complications.
[222004220040] |Meanwhile, it took me a day to convince any of my friends that I was sick enough that I needed to see a doctor (they all wanted me to take various berries or herbs instead).
[222004220050] |Having gotten one friend on board, it took him a day to find a hospital that was open (one was closed because of a power outage, and several were open but all the doctors were on vacation).
[222004220060] |I eventually got to a doctor who gave me the necessary meds.
[222004220070] |Within a few days my fever was low enough I could get around reasonably well, and though I still felt like shit for a few weeks after that, I was able to fly home on schedule.
[222004220080] |I was reminded of this story by Dr. Isis's harrowing account of her recent, nasty bout of mosquito-borne infection.
[222004220090] |Her story is much more compelling than mine (one reason I didn't have a full post on mine before) and worth reading in its own right.
[222004220100] |What I picked up on in particular was the following:
[222004220110] |Health care in the United States might be broken, but at least we have health care.
[222004220120] |I spent the last two weeks teaching medical school in a country where much of the population doesn't have access to running water and access to fresh food is limited.
[222004220130] |41% of children under four are iron deficient.
[222004220140] |There are 60 times more low birth weight infants per capita than in the United States.
[222004220150] |There is a hospital in the capital city, but no CT, MRI, or dialysis.
[222004220160] |It has two intensive care beds.
[222004220170] |Nine ambulances service the entire country.
[222004220180] |Medical record keeping is problematic and there is a shortage of technicians, doctors, and nurses.
[222004220190] |That's absolutely true.
[222004220200] |It's also a reminder, though, that things broken -- if left without repairs too long -- eventually decay away.
[222004220210] |Right now it is nice that our (American) health care system is still better than that in the developing world ... but it's worrisome that it's not as good as that in the rest of the developed world.
[222004220220] |If we wait long enough without fixing it we may wake up one day and find that we are no longer in the developed world.
[222004220230] |If this seems far-fetched, consider that among developed nations, we're in the middle or back of the pack in health care, primary education, income equality and especially Internet infrastructure.
[222004220240] |In most of these areas (perhaps not primary education) we've been steadily losing ground for decades (we're also losing ground in fields where we're still technically ahead, like science).
[222004220250] |If that continues, we will eventually be left behind.
[222004230010] |Google Translate Fail
[222004230020] |Google Translate's blog:
[222004230030] |There are some things we still can't translate.
[222004230040] |A baby babbling, for example.
[222004230050] |For the week of November 15th we are releasing five videos of things Google can’t translate (at least not yet)!
[222004230060] |Check out the videos and share them with your friends.
[222004230070] |If you can think of other things you wish Google translated (like your calculus homework or your pet hamster), tweet them with the tag #GoogleTranslate.
[222004230080] |We’ll be making a video of at least one of the suggestions and adding it to our page.
[222004230090] |What do I wish Google Translate could translate?
[222004230100] |I'll bite.
[222004230110] |How about Russian?
[222004230120] |Or Japanese?
[222004230130] |I mean, have the folks over at GT ever actually used their product?
[222004230140] |It's not very good.
[222004230150] |I'll admit that machine translation has improved a lot in recent years, but I doubt it's as good as a second-year Spanish student armed with a pocket dictionary.
[222004230160] |Nothing against the fine engineers working at Google.
[222004230170] |GT is an achievement to be proud of, but when they go around claiming to have solved machine translation, it makes those of us still working on the problems of language look bad.
[222004230180] |It's hard enough to convince my parents that I'm doing something of value without Google claiming to have already solved all the problems.
[222004240010] |More Politics
[222004240020] |As expected, the President appears to be caving on massive tax-breaks for the mega-wealthy while cutting back on services for everyone else and key investments for the future (in the name of fiscal responsibility).
[222004240030] |The fact that this is a Republican proposal doesn't make him any less responsible.
[222004240040] |I'll be voting for someone else in two years.
[222004250010] |Priest, Altars and Peer Review
[222004250020] |David Dobbs at Neuron Culture is complaining about NASA and peer review:
[222004250030] |A NASA spokesperson has dismissed a major critique of the Science arsenic bug paper based not on the criticism's merits, but on its venue -- it appeared in a blog rather than a peer-reviewed journal.
[222004250040] |Apparently ideas are valid (or not) based not on their content, or even the reputation of the author, but on where they're published.
[222004250050] |I'm not known for my strong endorsement of the fetishism of peer review, but even so I think Dobbs is being somewhat unfair.
[222004250060] |My reading of history is that scientists have been plugging the peer-review mantra because they're tired of having to respond to ignorant assholes who appear on Oprah spouting nonsense.
[222004250070] |I mean, yes, you can address wacko claims about vaccines causing autism or the lack of global warming on their merits (they have none), but it gets tiresome to repeat.
[222004250080] |In any case, relatively few members of the public can follow the actual arguments, so it becomes an issue of who you believe.
[222004250090] |And that's a hard game to win, since saying "so-and-so doesn't know what they're talking about" sounds elitist even when it's true, and "elitism" (read: "meritocracy") is for some reason unpopular.
[222004250100] |Focusing on peer review as a mechanism for establishing authority is convenient, because the public (thinks it) understands the mechanisms.
[222004250110] |You're not saying, "Believe me because I am a wise scientist," but "Believe the documented record."
[222004250120] |And since Jenny McCarthy doesn't publish in peer-reviewed journals, you can (try to) exclude her and other nuttos from the conversation.
[222004250130] |So I think there are good reasons for a NASA spokesman, when speaking with a reporter, to dismiss blogs.
[222004250140] |Taking a critique in a blog seriously in public is only going to open the floodgates.
[222004250150] |I mean, there are a *lot* of blogs out there.
[222004250160] |That doesn't mean that the scientists involved aren't taking a serious critique by a serious scientist seriously just because the criticism appeared in a blog.
[222004250170] |I hope that they are, and we don't want to read too much into NASA's official statement.
[222004250180] |All that said, I'm not sure focusing on peer-reviewed science has been helping very much.
[222004250190] |I mean, McCarthy still gets booked on Oprah anyway.
[222004260010] |And for my next trick, I'll make this effect disappear!
[222004260020] |In this week's New Yorker, Jonah Lehrer shows once again just how hard it is to do good science journalism if you are not yourself a scientist.
[222004260030] |His target is the strange phenomenon that many high profile papers are failing to replicate.
[222004260040] |This has been very much a cause celebre lately, and Lehrer follows a series of scientific papers on the topic as well as an excellent Atlantic article by David Freedman.
[222004260050] |At this point, many of the basic facts are well-known: anecdotally, many scientists report repeated failures to replicate published findings.
[222004260060] |The higher-profile the paper, the less likely it is to replicate, with around 50% of the highest-impact papers in medicine failing to replicate.
[222004260070] |As Lehrer points out, this isn't just scientists failing to replicate each other's work, but scientists failing to replicate their own work: a thread running through the article is the story of Jonathan Schooler, a professor at UC-Santa Barbara who has been unable to replicate his own seminal graduate student work on memory.
[222004260080] |Lehrer's focus in this article is shrinking effects.
[222004260100] |Some experimental effects seem to shrink steadily over time:
[222004260110] |In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze "temporal trends" across a wide range of subjects in ecology and evolutionary biology.
[222004260120] |He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
[222004260130] |As described, that's weird.
[222004260140] |But there is a good explanation for such effects, and Lehrer brings it up.
[222004260150] |Some results are spurious.
[222004260160] |It's just one of those things.
[222004260170] |Unfortunately, spurious results are also likely to be exciting.
[222004260180] |Let's say I run a study looking for a relationship between fruit-eating habits and IQ.
[222004260190] |I look at the effects of 20 different fruits.
[222004260200] |By chance, one of them will likely show a significant -- but spurious -- effect.
[222004260210] |So let's say I find that eating an apple every day leads to a 5-point increase in IQ.
[222004260220] |That's really exciting because it's surprising -- and the fact that it's not true is integral to what makes it surprising.
[222004260230] |So I get it published in a top journal (top journals prefer surprising results).
[222004260240] |Now, other people try replicating my finding.
[222004260250] |Many, many people.
[222004260260] |Most will fail to replicate, but some -- again by chance -- will replicate.
[222004260270] |It is extremely difficult to get a failure to replicate published, so only the successful replications get published.
[222004260280] |After time, the "genius apple hypothesis" becomes part of established dogma.
[222004260290] |Remember that anything that challenges established dogma is exciting and surprising and thus easier to publish.
[222004260300] |So now failures to replicate are surprising and exciting and get published.
[222004260310] |When you look at effect-sizes in published papers over time, you will see a gradual but steady decrease in the "effect" of apples -- from 5 points to 4 points down to 0.
[222004260320] |Where I get off the Bus
[222004260330] |So far so good, except here's Lehrer again:
[222004260340] |While the publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation.
[222004260350] |For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals.
[222004260360] |It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
[222004260370] |Huh?
[222004260380] |Lehrer seems to be suggesting that it is publication that makes a result spurious.
[222004260390] |But that can't be right.
[222004260400] |Rather, there are just lots of spurious results out there.
[222004260410] |It happens that journals preferentially publish spurious results, leading to biases in the published record, and eventually the decline effect.
[222004260420] |Some years ago, I had a bad habit of getting excited about my labmate's results and trying to follow them up.
[222004260430] |Just like a journal, I was interested in the most exciting results.
[222004260440] |Not surprisingly, most of these failed to replicate.
[222004260450] |The result was that none of them got published.
[222004260460] |Again, this was just a consequence of some results being spurious -- disproportionately, the best ones.
[222004260470] |(Surprisingly, this labmate is still a friend of mine; personally, I'd hate me.)
[222004260480] |The Magic of Point O Five
[222004260490] |Some readers at this point might be wondering: wait -- people do statistics on their data and only accept results that are extremely unlikely to have happened by chance.
[222004260500] |The cut-off is usually 0.05 -- a 5% chance of having a false positive.
[222004260510] |And many studies that turn out later to have been wrong pass even stricter statistical tests.
[222004260520] |Notes Lehrer:
[222004260530] |And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid -- that is, they contain enough data that any regression to the mean shouldn't be dramatic.
[222004260540] |"These are the results that pass all the tests," he says.
[222004260550] |"The odds of them being random are typically quite remote, like one in a million.
[222004260560] |This means that the decline effect should almost never happen.
[222004260570] |But it happens all the time!"
[222004260580] |So there's got to be something making these results look less likely to be due to chance than they really are.
[222004260590] |Lehrer suspects unconscious bias:
[222004260600] |Theodore Sterling, in 1959 ... noticed that ninety-seven percent of all published psychological studies with statistically significant data found the effect they were looking for ...
[222004260610] |Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments
[222004260620] |and again:
[222004260630] |The problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.
[222004260640] |I expect that unconscious bias is a serious problem (I illustrate some reasons below), but this is pretty unsatisfactory, as he doesn't explain how unconscious bias would affect results, and the Schooler effect is a complete red herring.
[222004260650] |I wasn't around in 1959, so I can't speak to that time, but I suspect the numbers are similar today. In fact, Sterling was measuring the wrong thing.
[222004260660] |Nobody cares what our hypotheses were.
[222004260670] |They don't care what order the experiments were actually run in.
[222004260680] |They care about the truth, and they have very limited time to read papers (most papers are never read, only skimmed).
[222004260690] |Good scientific writing is clear and concise.
[222004260700] |The mantra is: Tell them what you're going to tell them.
[222004260710] |Tell them.
[222004260720] |And then tell them what you told them.
[222004260730] |No fishing excursions, no detours.
[222004260740] |When we write scientific papers, we're writing science, not history.
[222004260750] |And this means we usually claim to have expected to find whatever it is that we found.
[222004260760] |It just makes for a more readable paper.
[222004260770] |So when a scientist reads the line, "We predicted X," we know that really means "We found X" -- what the author actually predicted is beside the point.
[222004260780] |Messing with that Point O Five
[222004260790] |So where do all the false positives come from, if they should be less than 5% of conducted studies?
[222004260800] |There seem to be a number of issues.
[222004260810] |First, it should be pointed out that the purpose of statistical tests (and the magic .05 threshold for significance) is to make a prediction as to how likely it is that a particular result will replicate.
[222004260820] |A p-value of .05 means roughly that there is a 95% chance that the basic result will replicate (sort of; this is not technically true but is a good approximation for present purposes).
[222004260830] |But statistics are estimates, not facts.
[222004260840] |They are based on a large number of idealizations (for instance, many require that measurement error be distributed normally -- a normal distribution meaning that the bulk of measurements are very close to the true value, and that a measurement is as likely to be larger than the true number as it is to be smaller).
[222004260860] |In fact, most data are heavily skewed, with measurements more likely to be too large than too small (or vice versa).
[222004260870] |For instance, give someone an IQ test.
[222004260880] |IQ tests have some measurement error -- people will score higher or lower than their "true" score due to random factors such as guessing answers correctly (or incorrectly), being sleepy (or not), etc.
[222004260890] |But it's a lot harder to get an IQ score higher than your true score than lower, because getting a higher score requires a lot of good luck (unlikely) whereas there are all sorts of ways to get a low score (brain freeze, etc.).
[222004260900] |Most statistical tests make a number of assumptions (like normally distributed error) that are not true of actual data.
[222004260910] |That leads to incorrect estimates of how likely a particular result is to replicate.
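As a concrete (and purely illustrative) example, here is how one might check whether the nominal 5% false-positive rate survives a skewed error distribution; how far the empirical rate drifts from 5% depends on the distribution and the sample size:

```python
# Simulate two groups drawn from the SAME heavily skewed (lognormal)
# distribution, so any "significant" t-test is a false positive by
# construction, and compare the empirical rate to the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, alpha = 10_000, 10, 0.05

false_positives = 0
for _ in range(n_sims):
    a = rng.lognormal(mean=0.0, sigma=1.0, size=n_per_group)
    b = rng.lognormal(mean=0.0, sigma=1.0, size=n_per_group)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(f"Empirical false-positive rate: {false_positives / n_sims:.3f} "
      f"(nominal rate: {alpha})")
```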
[222004260920] |The truth is most scientists -- at the very least, most psychologists -- aren't experts in statistics, and so statistical tests are misapplied all the time.
[222004260930] |I don't actually think that issues like the ones I just discussed lead to most of the difficulties (though I admit I have no data one way or another).
[222004260940] |I bring these issues up mainly to point out at that statistical tests are tools that are either used or misused according to the skill of the experimenter.
[222004260950] |And there are lots of nasty ways to misuse statistical tests.
[222004260960] |I discuss a few of them below.
Run enough experiments and...
[222004260970] |Let's go back to my genius fruit experiment.
[222004260980] |I ask a group of people to eat an apple and then give them an IQ test.
[222004260990] |I compare their IQ scores with scores from a control group that didn't eat an apple.
[222004261000] |Now let's say in fact eating apples doesn't affect IQ scores.
[222004261010] |Assuming I do my statistics correctly and all the assumptions of the statistical tests are met, I should have only a 5% chance of finding a "significant" effect of apple-eating.
[222004261020] |Now let's say I'm disappointed in my result.
[222004261030] |So I try the same experiment with kiwis.
[222004261040] |Again, I have only a 5% chance of getting a significant result for kiwis.
[222004261050] |So that's not very likely to happen either.
[222004261060] |Next I try oranges....
[222004261070] |Hopefully you see where this is going.
[222004261080] |If I try only one fruit, I have a 5% chance of getting a significant result.
[222004261090] |If I try 2 fruits, I have a 1 - .95*.95 = 9.8% chance of getting a significant result for at least one of the fruits.
[222004261100] |If I try 4 fruits, now I'm up to a 1 - .95*.95*.95*.95 = 18.5% chance that I'll "discover" that one of these fruits significantly affects IQ.
[222004261110] |By the time I've tried 14 fruits, I've got a better than 50% chance of an amazing discovery.
[222004261120] |But my p-value for that one experiment -- that is, my estimate that these results won't replicate -- is less than 5%, suggesting there is only a 5% chance the results were due to chance.
[222004261130] |While there are ways of statistically correcting for this increased likelihood of false positives, my experience suggests that it's relatively rare for anyone to do so.
[222004261140] |And it's not always possible.
[222004261150] |Consider the fact that there may be 14 different labs all testing the genius fruit hypothesis (it's suddenly very fashionable for some reason).
[222004261160] |There's a better than 50% chance that one of these labs will get a significant result, even though from the perspective of an individual lab, they only ran one experiment.
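Here's the fruit example as a quick simulation (a sketch with arbitrary sample sizes, just to check the arithmetic empirically): fourteen fruits that do nothing, and we count how often at least one of them comes out "significant".

```python
# Fourteen null "fruit" experiments per simulated research program:
# how often does at least one of them cross p < .05 by chance?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_fruits, n_per_group, alpha = 2_000, 14, 20, 0.05

at_least_one_hit = 0
for _ in range(n_sims):
    for _fruit in range(n_fruits):
        treated = rng.normal(100, 15, n_per_group)  # IQ scores of fruit-eaters
        control = rng.normal(100, 15, n_per_group)  # IQ scores of controls
        if stats.ttest_ind(treated, control).pvalue < alpha:
            at_least_one_hit += 1
            break  # one "genius fruit" is enough for a paper

print(f"Chance of at least one 'genius fruit': {at_least_one_hit / n_sims:.1%}")
# Analytically, 1 - 0.95**14 is about 51%.
```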
[222004261170] |Data peeking
[222004261180] |Many researchers peek at their data.
[222004261190] |There are good reasons for doing this.
[222004261200] |One is curiosity (we do experiments because we really want to know the outcome).
[222004261210] |Another is to make sure all your equipment is working (don't want to waste time collecting useless data).
[222004261220] |Another reason -- and this is the problematic one -- is to see if you can stop collecting data.
[222004261230] |Time is finite.
[222004261240] |Nobody wants to spend longer on an experiment than necessary.
[222004261250] |Let's say you have a study where you expect to need -- based on intuition and past experience -- around 20 subjects.
[222004261260] |You might check your data after you've run 12, just in case that's enough.
[222004261270] |What usually happens is that if the results are significant, you stop running the study and move on.
[222004261280] |If they aren't, you run more subjects.
[222004261290] |Now maybe after you've got 20 subjects, you check your data.
[222004261300] |If it's significant, you stop the study; if not, you run some more.
[222004261310] |And you keep on doing this until either you get a significant result or you give up.
[222004261320] |It's a little harder to do back-of-the-envelope calculations on the importance of this effect, but it should be clear that this habit increases the relative likelihood of a false positive: false positives lead you to declare victory and end the experiment, whereas false negatives are likely to be corrected (since you keep collecting more subjects until the false negative is overcome).
[222004261330] |I read a nice paper on this issue that actually crunched the numbers a while back (for some reason I can't find it at the moment), and I remember the result was a pretty significant increase in the expected number of false positives.
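Since I can't point you to that paper, here is a minimal simulation of the idea. Nothing in it comes from that paper: the first-look size, the batch size, and the stopping rule are my own illustrative choices. It just shows that peeking after every batch of subjects inflates the false positive rate even when there is no real effect.

```python
# Simulate "data peeking" under the null hypothesis (no real group difference).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_experiment(first_look=20, batch=10, max_n=60, alpha=0.05):
    """Run one two-group experiment under the null, testing after every batch.

    Returns True if any interim t-test reaches p < alpha -- i.e., the
    optional-stopping researcher declares victory at some point.
    """
    a = list(rng.standard_normal(first_look))
    b = list(rng.standard_normal(first_look))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            return True           # stop the study and declare victory
        if len(a) >= max_n:
            return False          # give up
        a.extend(rng.standard_normal(batch))
        b.extend(rng.standard_normal(batch))

n_sims = 5000
hits = sum(peeking_experiment() for _ in range(n_sims))
print(f"False positive rate with peeking: {hits / n_sims:.3f}")
# A single fixed-n test would sit at about .05; peeking after every batch
# pushes the rate noticeably higher (roughly .10-.13 with these settings).
```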
[222004261340] |Data massaging
[222004261350] |The issues I've discussed so far are real problems, but they are pretty common and not generally regarded as ethical violations.
[222004261360] |Data massaging is at the borderline.
[222004261370] |Any dataset can be analyzed in a number of ways.
[222004261380] |Once again, if people get the result they were expecting with the first analysis they run, they're generally going to declare victory and start writing the paper.
[222004261390] |If you don't get the results you expect, you try different analysis methods.
[222004261400] |There are different statistical tests that can be used.
[222004261410] |There are different covariates that could be factored out.
[222004261420] |You can throw out "bad" subjects or items.
[222004261430] |This is going to significantly increase the rate of false positives.
[222004261440] |It should be pointed out that interrogating your statistical model is a good thing.
[222004261450] |Ideally, researchers should check to see if there are bad subjects or items, check whether there are covariates to be controlled for, check whether different analysis techniques give different results.
[222004261460] |But doing this affects the interpretation of your p-value (the estimate of how likely it is that your results will replicate), and most people don't know how to appropriately control for that.
[222004261470] |And some are frankly more concerned with getting the results they want than with doing the statistics properly (this is where the "borderline" comes in).
[222004261480] |Better estimates
[222004261490] |The problem, at least from where I stand, is one of statistics.
[222004261500] |We want our statistical tests to tell us how likely it is that our results will replicate.
[222004261510] |We have statistical tests which, if used properly, will give us just such an estimate.
[222004261520] |However, there are lots and lots of ways to use them incorrectly.
[222004261530] |So what should we do?
[222004261540] |One possibility is to train people to use statistics better.
[222004261550] |And there are occasional revisions in standard practice that do result in better use of statistics.
[222004261560] |Another possibility is to lower the p-value that is considered significant.
[222004261570] |The choice of p=0.05 as a cutoff was, as Lehrer notes, arbitrary.
[222004261580] |Picking a smaller number would decrease the number of false positives.
[222004261590] |Unfortunately, it also decreases the number of real positives by a lot.
[222004261600] |People who don't math can skip this next section.
[222004261610] |Let's assume we're running studies with a single dependent variable and one manipulation, and that we're going to test for significance with a t-test.
[222004261620] |Let's say the manipulation really should work -- that is, it really does have an effect on our dependent measure.
[222004261630] |Let's say that the effect size is large-ish (Cohen's d of .8, which is large by psychology standards) and that we run 50 subjects.
[222004261640] |The chance of actually finding a significant effect at the p=.05 level is 79%.
[222004261650] |For people who haven't done power analyses before, this might seem low, but actually an 80% chance of finding an effect is pretty good.
[222004261660] |Dropping our significant threshold to p=.01 drops the chance of finding the effect to 56%.
[222004261670] |To put this in perspective, if we ran 20 such studies, we'd find 16 significant effects at the p=.05 level but only 11 at the p=.01 level.
[222004261680] |(If you want to play around with these numbers yourself, try this free statistical power calculator.)
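Here is a rough sketch of that power calculation done by hand with the noncentral t distribution, so no special power package is needed. I'm assuming the 50 subjects are split 25 per group, which is what reproduces the 79% figure; the function is just the standard two-sample power formula, not anything specific to the calculator linked above.

```python
# Power of a two-sample t-test for Cohen's d = 0.8 with 25 subjects per
# group (50 total), at alpha = .05 and alpha = .01.
import numpy as np
from scipy import stats

def two_sample_power(d, n_per_group, alpha):
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    # Probability that |t| exceeds the critical value when the effect is real
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(two_sample_power(0.8, 25, 0.05))   # roughly 0.79
print(two_sample_power(0.8, 25, 0.01))   # roughly 0.56
```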
[222004261690] |Now consider what happens if we're running studies where the manipulation shouldn't have an effect.
[222004261700] |If we run 20 such studies, 1 of them will nonetheless give us a false positive at the p=.05 level, whereas we probably won't get any at the p=.01 level.
[222004261710] |So we've eliminated one false positive, but at the cost of nearly 1/3 of our true positives.
[222004261720] |No better prediction of replication than replication
[222004261730] |Perhaps the easiest method is to just replicate studies before publishing them.
[222004261740] |The chances of getting the same spurious result twice in a row are vanishingly small.
[222004261750] |Most of the issues I outlined above -- I'll get to data massaging in a moment -- won't help a spurious result survive a replication attempt.
[222004261760] |Test 14 different fruits to see if any of them increase IQ scores, and you have over a 50% chance that one of them will spuriously do so.
[222004261770] |Test that same fruit again, and you've only got a 5% chance of repeating the effect.
[222004261780] |So replication decreases your false positive rate 20-fold.
[222004261790] |Similarly, data massaging may get you that coveted p<.05, but the chances of the same massages producing the same result again are very, very low.
[222004261800] |True positives aren't nearly so affected.
[222004261810] |Again, a typical power level is 1 - β = 0.80 -- 80% of the time that an effect is really there, you'll be able to find it.
[222004261820] |So when you try to replicate a true positive, you'll succeed 80% of the time.
[222004261830] |So replication decreases your true positives by only 20%.
[222004261840] |So let's say the literature has a 30% false positive rate (which, based on current estimates, seems quite reasonable).
[222004261850] |Attempting to replicate every positive result prior to publication -- and note that it's extremely rare to publish a null result (no effect), so almost all published results are positive results -- should decrease the false positives 20-fold and the true positives by 20%, leaving us with a 2.6% false positive rate.
[222004261860] |That's a huge improvement.
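The arithmetic behind that 2.6% figure, spelled out, using the same assumed rates as above (30% of published positives are false, false positives replicate 5% of the time, true positives 80% of the time):

```python
# Post-replication false positive rate, given the rates assumed above.
false_rate = 0.30                      # share of published positives that are false
true_rate = 1 - false_rate
false_surviving = false_rate * 0.05    # false positives that also replicate
true_surviving = true_rate * 0.80      # true positives that also replicate
print(false_surviving / (false_surviving + true_surviving))   # about 0.026
```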
[222004261870] |So why not replicate more?
[222004261880] |So why don't people replicate before publishing?
[222004261890] |If 30% of your own publishable results are false positives, and you eliminate them, you've just lost 30% of your potential publications.
[222004261900] |You've also lost 20% of your true positives, btw, which means overall you've decreased your productivity by 43%.
[222004261910] |And that's without counting the time it takes to run the replication.
[222004261920] |Yes, it's nice that you've eliminated your false positives, but you also may have eliminated your own career!
[222004261930] |When scientists are ranked, they're largely ranked on (a) number of publications, (b) number of times a publication is cited, and (c) quality of journal that the publications are in. Notice that you can improve your score on all of these metrics by publishing more false positives.
[222004261940] |Taking the time to replicate decreases your number of publications and eliminates many of the most exciting and surprising results (decreasing both citations and quality of journal).
[222004261950] |Perversely, even if someone publishes a failure to replicate your false positive, that's a citation and another feather in your cap.
[222004261960] |I'm not saying that people are cynically increasing their numbers of bogus results.
[222004261970] |Most of us got into science because we actually want to know the answers to stuff.
[222004261980] |We care about science.
[222004261990] |But there is limited time in the day, and all the methods of eliminating false positives take time.
[222004262000] |And we're always under incredible pressure to pick up the pace of research, not slow it down.
[222004262010] |I'm not sure how to solve this problem, but any solution I can think of involves some way of tracking not just how often a researcher publishes or how many citations those publications get, but how often those publications are replicated.
[222004262020] |Without having a way of tracking which publications replicate and which don't, there is no way to reward meticulous researchers or hold sloppy researchers to account.
[222004262030] |Also, I think a lot of people just don't believe that false positives are that big a problem.
[222004262040] |If you think that only 2-3% of published papers contain bogus results, there's not a lot of incentive to put in a lot of hard work learning better statistical techniques, replicating everything, etc.
[222004262050] |If you think the rate is closer to 100%, you'd question the meaning of your own existence.
[222004262060] |As long as we aren't keeping track of replication rates, nobody really knows for sure where we are on this continuum.
[222004262070] |That's my conclusion.
[222004262080] |Here's Lehrer's:
[222004262090] |The decline effect is troubling because it reminds us how difficult it is to prove anything.
[222004262100] |We like to pretend that our experiments define the truth for us.
[222004262110] |But that's often not the case.
[222004262120] |Just because an idea is true doesn't mean it can be proved.
[222004262130] |And just because an idea can be proved doesn't mean it's true.
[222004262140] |When the experiments are done, we still have to choose what to believe.
[222004262150] |I say it again: huh?
[222004270010] |Slate calls for More Republican Scientists
[222004270020] |Daniel Sarewitz of Slate worries that there aren't enough Republican scientists.
[222004270030] |Is it any wonder that Republicans don't trust science, if it's all coming from the laboratories of Democrats?
[222004270040] |I don't know.
[222004270050] |Is it?
[222004270060] |What would a Republican scientist look like?
[222004270070] |Would she accept the reality of man-made global warming?
[222004270080] |Evolution?
[222004270090] |Would she be aware that gay couples are at least as good parents as straight couples?
[222004270100] |Or that states that allow gay marriage and civil unions have lower divorce rates than those that don't?
[222004270110] |That true Keynesian economic stimulus works pretty well, and that tax cuts for the rich have little effect on the economy?
[222004270120] |That the health care system in Western Europe is cheaper and more effective than the American health care system?
[222004270130] |As Colbert once noted, reality has a well-known liberal bias.
[222004270140] |I strongly believe that communities are stronger when they are made up of people with diverse viewpoints.
[222004270150] |There is no benefit to a community made up of people with diverse facts.
[222004270160] |Yes, we should be worried by the paucity of Republican scientists.
[222004270170] |But what does Sarewitz mean by saying we need to make the scientific community "more welcoming" to Republicans?
[222004270180] |If that means wearing elephant pins, I'll go along with it.
[222004270190] |If it means abandoning facts for fiction...
[222004280010] |Poll: Do You Care about Effect Size?
[222004280020] |My recent post on false positives has generated a long thread, with a large number of informative comments from Tal, who has convinced me to think a lot more about power analyses.
[222004280030] |I recommend reading the comments.
[222004280040] |One issue that has come up is if and when we actually care about the null hypothesis.
[222004280050] |I argue that a fair amount of the time we really are deeply interested in knowing whether an effect exists or not.
[222004280060] |I don't entirely understand Tal's argument -- I'm sure he'll help out in the comments -- but I think he is saying that in any given experiment, there are always confounds such that if you have enough power, you'll find a significant result.
[222004280070] |So whether or not the manipulation has its intended effect, the confounds will ensure that the null hypothesis is false.
[222004280080] |Perhaps.
[222004280090] |Having run studies with thousands of participants and no significant effect, I'm skeptical that this is always true, but obviously the data we'd need to test his claim does not and never will exist.
[222004280100] |In any case, this is why we use converging methods: the undetected confounds in one method will (hopefully) not appear in the others, and across studies the truth will emerge.
[222004280110] |Still, this discussion has led me to wonder: across fields, how often are people deeply interested in the existence or absence of an effect (as opposed to the size of the effect).
[222004280120] |Please leave a comment with your field and how often you really are interested in the presence or absence of an effect.
[222004280130] |Examples are encouraged but are unnecessary.
[222004280140] |I'm already on the record saying I am often interested in the existence of an effect and rarely care about its size.
[222004280150] |Below I give my examples.
[222004280160] |Why I rarely care about effect size
[222004280170] |Priming: Priming is expected to occur whenever two mental constructs share underlying structure or recruit the same underlying processes.
[222004280180] |There is a lot of interest in the underlying representations of certain verb forms.
[222004280190] |Verbs of transfer can be used two ways.
[222004280200] |Compare: John gave the book to Sally vs. John gave Sally the book.
[222004280210] |The order of the words changes and there either is or isn't a preposition.
[222004280220] |In a number of experiments, Thothathiri and Snedeker asked whether hearing give in one form would make it easier for people to understand other verbs of transfer in the same form (e.g., send).
[222004280230] |On some theories, it should (due to shared structure between verbs).
[222004280240] |On some theories, it shouldn't (due to verbs not sharing structure).
[222004280250] |So the existence of the effect mattered.
[222004280260] |But what about effect size: how much of an effect should priming have?
[222004280270] |It's an interesting question, but irrelevant to the hypotheses this study was testing, and frankly currently nobody has any hypotheses one way or another.
[222004280280] |Development: Thothathiri and Snedeker found the priming effect in adults.
[222004280290] |They also tested children.
[222004280300] |For any adult behavior, there is always the question of at what point in development the behavior should appear.
[222004280310] |This is a deep, interesting question, since some behaviors are (roughly-speaking) innate and some are learned and you'd expect the former to appear earlier than the latter.
[222004280320] |Again, there are theories that very strongly predict that young children should or should not show the same effect as adults.
[222004280330] |Once again, the existence of the effect matters.
[222004280340] |What about the size?
[222004280350] |Again, nobody has any predictions, and effect size cannot be used to tease apart theories.
[222004280360] |Even if the effect were much smaller in children, that wouldn't really matter, since in general children are difficult participants to work with and their effects are often smaller because a certain number simply didn't understand the task.
[222004280370] |Eyetracking: Many of my experiments use the Visual World Paradigm.
[222004280380] |The basic idea is that if you show people a picture and start talking about it, they will look at the parts you are talking about as you are talking about them.
[222004280390] |If there is a picture of a cat, a dog and a horse, and I say "dog," participants will look at the part of the picture with a dog.
[222004280400] |We can then use their eye movements to see how quickly people understood the word.
[222004280410] |So we're looking for the first point in time at which more people are looking at "dog" than you'd expect by chance.
[222004280420] |At any given time point, either there is an effect or there isn't -- and there had better be a point at which there isn't, such as before I said the word "dog"!
[222004280430] |As far as effect size, though, it's not going to be the case that everyone is looking at the dog at any given time point (these effects are probabilistic).
[222004280440] |You'd expect something between 50% and 80% of people to be looking at the dog.
[222004280450] |But as long as you have more than 33% looking at the dog (remember, there are 3 things to look at: the cat, the dog and the horse), that's an effect.
[222004280460] |As far as size...you can measure it, but it won't help you distinguish between existing theories, which is what a good experiment is supposed to do.
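To make that concrete, here is a toy sketch of the kind of time-course analysis I'm describing. The numbers and the bin structure are made up purely for illustration, and a real Visual World analysis would also have to worry about corrections for testing many time bins, which this ignores.

```python
# Find the first time bin in which looks to the target exceed the
# 1-in-3 chance level (three objects on the screen).
from scipy import stats

n_participants = 30
# Hypothetical counts of participants fixating the target in successive time bins
looks_to_target = [9, 10, 11, 14, 18, 22, 24, 25]

for i, k in enumerate(looks_to_target):
    # One-sided binomial test against chance (p = 1/3)
    result = stats.binomtest(k, n_participants, p=1/3, alternative='greater')
    if result.pvalue < 0.05:
        print(f"First bin above chance: bin {i} "
              f"({k}/{n_participants} looking, p = {result.pvalue:.3f})")
        break
```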
[222004280470] |Etc.: It's easy to generate more examples.
[222004280480] |I'm pretty sure every study I've ever run has been of this sort, as are most of the studies I have read.
[222004280490] |Sometimes we're interested in knowing more than just whether an effect exists.
[222004280500] |Sometimes we also care about the direction.
[222004280510] |But existence in and of itself is a real question.
[222004290010] |When should an effect be called significant?
[222004290020] |note: This post originally contained an error in the mathematics, which Tal of Citation Needed kindly pointed out.
[222004290030] |This error has been corrected.
[222004290040] |In the thread following my earlier post on false positives, Tal made the observation that a typical study that is significant at the p=.05 level has a 50% chance of being replicated.
[222004290050] |It turns out that this depends heavily on what you mean by replicate.
[222004290060] |I'm going to work through some numbers below.
[222004290070] |Again, stats isn't my specialty, so please anyone jump in to correct errors.
[222004290080] |But I think I've got the general shape of the issues correct.
[222004290090] |I got a significant result!
[222004290100] |Can I get it again?
[222004290110] |Let's say you ran an experiment comparing the IQ scores of 15 people who prefer kiwi with the IQ scores of 15 people who prefer rambutan.
[222004290120] |You find that people who prefer rambutan have IQs 11.2 points higher than those who prefer kiwi.
[222004290130] |Assuming the standard deviation is 15 (which is how IQ tests are normalized), that should give you a t-value of 11.2 / (15 * (2/15)^.5) = 2.04 and a p-value of about .05.
[222004290140] |So you've now got a significant result!
[222004290150] |You tell all your friends, and they try running the same experiment.
[222004290160] |What are the chances they'll get the same results, significant at the p=.05 level?
[222004290170] |The chances are not great.
[222004290180] |Even assuming that the underlying effect is real (rambutan-eaters really are smarter), your friends will only replicate your result about 51% of the time, assuming they use exactly the same methods (according to a nifty power calculator found online here).
[222004290190] |Define "get it"
[222004290200] |Of course, we were assuming above that rambutan-eaters really are 11.2 IQ points smarter than kiwi-eaters (BTW I like both kiwi and rambutan, so nothing is meant by this example).
[222004290210] |In which case, your friends might not have gotten results significant at the p=.05 level, but they very likely found higher average IQs for their samples of rambutan-eaters relative to kiwi-eaters.
[222004290220] |And of course, what we really care about is how easy it will be to replicate the rambutan/kiwi difference, not how easy it will be to get the significant p-value again.
[222004290230] |The point of science is not to be able to predict statistically-significant differences but simply to predict differences.
[222004290240] |It's well beyond my statistical abilities to say how often this would happen, but hopefully someone will step up in the comments and let us know.
[222004290250] |In practice, though, other people are only going to follow up on your effect if they can replicate it at the standard p=.05 level.
[222004290260] |What can we do to improve the chances of replicability?
[222004290270] |Lower alphas
[222004290280] |Let's suppose your effect had been significant at the p=.01 level.
[222004290290] |We can manage that while keeping the effect size the same (11.2 IQ points) if we increase our sample to 26 kiwi-eaters and 26 rambutan-eaters (t = 11.2/(15 * (2/26)^.5) = 2.7).
[222004290300] |Now our chance of getting another significant result at the p=.01 level is ...
[222004290310] |52%.
[222004290320] |But we don't really care about getting a p=.01 again; we want to get the result again at the p=.05 level, which should happen around 76% of the time.
[222004290330] |Now, what if we had a result significant at the p=.001 level the first time around?
[222004290340] |We'd have needed about 42 subjects per condition.
[222004290350] |The chance of replicating that at the p=.05 level is 92%.
[222004290360] |p-value   #subjects/condition   Likelihood of repeating at p=.05 level
.05        15                     51%
.01        26                     76%
.001       42                     92%
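For anyone who wants to reproduce those numbers, here is a rough sketch. It treats the observed effect (11.2 IQ points, SD 15) as the true effect size and asks how often a new study of the same size would come out significant at p=.05. The noncentral-t formula is standard, but the table above came from the online calculator, so small rounding differences are possible.

```python
# Probability that an exact replication reaches p < .05, assuming the
# observed effect (11.2 IQ points, SD 15) is the true effect size.
import numpy as np
from scipy import stats

def replication_prob(d, n_per_group, alpha=0.05):
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

d = 11.2 / 15   # observed effect in standard-deviation units
for n in (15, 26, 42):
    print(f"n = {n:2d} per group: {replication_prob(d, n):.0%}")
# -> roughly 51%, 76%, 92%
```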
[222004290370] |Replication
[222004290380] |Of course, there are things that I'm not considering here, like the possibility that your original experiment underestimated the effect size.
[222004290390] |For instance, let's say that the true effect size is 15 IQ points (which is a lot!).
[222004290400] |Now, your chances of finding an effect significant at the p=.05 level with only 15 participants per condition is 75%.
[222004290410] |That's a lot better than what we started with, though not perfect.
[222004290420] |To have an effect large enough to see it 95% of the time at the p=.05 level, it would need to be over 20 IQ points, which is a monstrously large effect.
[222004290430] |Incidentally, if you ran this study with 15 rambutan-eaters and 15 kiwi-eaters and found a 20 IQ point effect, that would be significant below the p=.001 level.
[222004290440] |What I get from all this is that if you want a result that you and others will be able to replicate, you're going to want the p-value in your original experiment to have been lower than .05.
[222004300010] |Do you speak Japanese?
[222004300020] |Do you speak Japanese?
[222004300030] |If so, I've got an experiment for you.
[222004300040] |A while back I presented some results from a project comparing pronoun processing in English, Spanish, Mandarin and Russian.
[222004300050] |We're also testing Japanese.
[222004300060] |So if you speak Japanese and have a few minutes, please follow this link.
[222004300070] |Better yet, if you know someone who is a fluent Japanese speaker -- or, even better, a native Japanese speaker -- please send him/her the link.
[222004300080] |If you speak English -- and you probably do if you're reading this post -- and have never participated in any of my English pronoun experiments, you can follow this link.
[222004300090] |These experiments usually take less than 5 minutes.
[222004300100] |Huh?
[222004300110] |Pronoun processing?
[222004300120] |For those of you wondering what I could possibly be studying, the interesting thing about pronouns is that their meaning changes wildly depending on context.
[222004300130] |Given the right context, she can refer to any female (and some things that aren't actually female, like ships).
[222004300140] |That isn't true of proper names (Jane Austen can only be used to refer to one person).
[222004300150] |Some theories state that we learn language-specific cues that help us figure out what a given pronoun in a given context means.
[222004300160] |Other theories state we use general intelligence to pull off the feat.
[222004300170] |On the second theory, if you use the same contexts in different languages, people should interpret pronouns the same way.
[222004300180] |On the first theory, that isn't necessarily the case.
[222004300190] |(Obviously I'm being cagey here in terms of how exactly we're manipulating context in the experiment, since I don't want to bias any potential participants.)
[222004300200] |More posts on pronouns: here, here and here.
[222004310010] |missing 2
[222004310020] |One of the formulas in the last post was missing a 2.
[222004310030] |Everything has now been recalculated.
[222004310040] |Some numbers changed.
[222004310050] |The basic result is that some of the numbers are not quite as dire as I had stated: the original example experiment, which had 15 participants per condition and an effect significant at p=.05 has a 51% chance of replicating (in the sense of producing another significant p-value when re-run exactly), again assuming the effect was real and the effect size is as measured in the first experiment.
[222004320010] |Paper submitted
[222004320020] |I just submitted a new paper on pronoun resolution ("Do inferred causes really drive pronoun resolution"), in which I argue that a widely-studied phenomenon called "implicit causality" has been misanalyzed and is in fact at least two different phenomena (as described in this previous post).
[222004320030] |You can find the paper on my publications page.
[222004320040] |Comments are welcome.
[222004320050] |I always find writing up methods and results relatively easy.
[222004320060] |The trick is fitting the research into the literature in a way that will make sense and be useful to readers.
[222004320070] |That is, while the narrow implications are often clear, it's not always obvious which broader implications are most relevant.
[222004320080] |In other words, the paper has clear implications for the few dozen people who study implicit causality, but one would like people beyond that small group to also find the results relevant.
[222004320090] |I tried a few different approaches before ultimately settling on a particular introduction and conclusion.
[222004320100] |I was curious how much the paper had changed from the first draft to the last.
[222004320110] |Here's the first draft, according to Wordle:
[222004320120] |Here's draft 2:
[222004320130] |The most obvious difference is that I hyphenated a lot more in the final draft (I was trying to make the word limit).
[222004320140] |But it doesn't appear that the changes in theme -- as measured by Wordle -- were all that drastic.
[222004320150] |That's either a good sign (my paper didn't lose its soul in the process of editing) or a bad sign (I didn't edit it enough).
[222004320160] |I guess we'll see when the reviews come back in.
[222004330010] |Crowdsourcing My Data Analysis
[222004330020] |I just finished collecting data for a study.
[222004330030] |Do you want to help analyze it?
[222004330040] |Puns
[222004330050] |What makes a pun funny?
[222004330060] |If you said "nothing," then you should probably skip this post.
[222004330070] |But even admirers of puns recognize that while some are sublime, others are ... well, not.
[222004330080] |Over the last year, I've been asking people to rate the funniness of just over 2300 different puns.
[222004330090] |(Where did I get 2300 puns?
[222004330100] |The user-submitted site PunoftheDay.
[222004330110] |PunoftheDay also has funniness ratings, but I wanted a bit more control over how the puns were rated and who rated them.)
[222004330120] |Why care what makes puns funny?
[222004330130] |There are three reasons I ran this experiment.
[222004330140] |I do mostly basic research, and while I believe in its importance and think it's fun, the idea of doing a project I could actually explain to relatives was appealing.
[222004330150] |I was partly inspired by Zenzi Griffin's 2009 CUNY talk reporting a study she ran on why parents call their kids by the wrong names (typically, calling younger children by elder children's names), work which has now been published in a book chapter.
[222004330160] |Plus, I was just interested.
[222004330170] |I mean: puns!
[222004330180] |Finally, I was beginning a line of work on the interpretation of homophones.
[222004330190] |One of the best-established facts about homophones is that we very rapidly suppress context-irrelevant meanings of words -- in fact, so rapidly that we rarely even notice.
[222004330200] |If your friend said, "I'm out of money, so I'm going to stop by the bank," would you really even notice considering that bank might mean the side of a river?
[222004330210] |A river bank. (photo: Istvan, creative commons)
A successful pun, on the other hand, requires that at least two meanings be accessed and remain active.
[222004330220] |In some sense, a pun is homophone processing gone bad.
[222004330230] |By better understanding puns, I thought I might get some insight into language processing.
[222004330240] |Puntastic
[222004330250] |As already mentioned, my first step down this road was to collect funniness ratings for a whole bunch of puns.
[222004330260] |I popped them into a Flash survey, called it Puntastic, and put it on the Games With Words website.
[222004330270] |The idea was to mine the data and try to find patterns which could then be systematically manipulated in subsequent experiments.
[222004330280] |It turns out that there are a lot of ways that 2300 puns can be measured and categorized.
[222004330290] |So while I have a few ideas I want to try out, no doubt many of the best ones have not occurred to me.
[222004330300] |Data collection was crowdsourced, and I see no reason why the analyses shouldn't be as well.
[222004330310] |I have posted the data on my website.
[222004330320] |If you have some ideas about what might make one pun funnier than another -- or just want to play around with the data -- you are welcome to it.
[222004330330] |Please post your findings here.
[222004330340] |If you are a researcher and might use the data in an official publication, please contact me directly before beginning analysis (gameswithwords$at*gmail.com) just so there aren't misunderstandings down the line.
[222004330350] |Failure to get permission to publish analyses of these data may be punished by extremely bad karma and/or nasty looks cast your way at conferences.
[222004330360] |The results so far...
[222004330370] |Unfortunately for the crowd, I've already done the easiest analyses.
[222004330380] |The following are based on nearly 800 participants over the age of 13 who listed English as both their native and primary languages (there weren't enough non-native English speakers to conduct meaningful analyses on their responses).
[222004330390] |The average was 2.6 stars out of 7 (participants could choose anywhere from 1 to 7 stars, as well as "I don't get it," which was scored as -1 for these analyses), which says something either about the puns I used or the people who rated them.
[222004330400] |First I looked at differences between participants to see if I could find types of people who like puns more than others.
[222004330410] |There was no significant difference in overall ratings by men or women.
[222004330420] |I also asked participants if they thought they had good or poor social skills.
[222004330430] |There was no significant difference there, either.
[222004330440] |I also asked them if they had difficulty reading or if they had ever been diagnosed with any psychiatric illnesses, but neither of those factors had any significant effect either (got tired of making graphs, so just trust me on this one).
[222004330450] |The effect of age was unclear.
[222004330460] |It was the case that the youngest participants produced lower ratings than the older participants (p=.0029), which was significant even after a conservative Bonferroni correction for 15 possible pairwise comparisons (alpha=.0033).
[222004330470] |However, the 10-19 year-olds' ratings were also significantly lower than the 20-29 year-olds' (p=.0014) and the 30-39 year-olds' (p=.0008), but obviously this was not true of the 40-49 year-olds' or 50-59 year-olds' ratings.
[222004330480] |So it's not clear what to make of that.
[222004330490] |Given that the overall effect size was small and that this is an exploratory analysis, I wouldn't make much of the effect without corroboration from an independent data set.
[222004330500] |The funniest puns
[222004330510] |The only factor I've looked at so far that might explain pun funniness is the length of the joke.
[222004330520] |I considered only the 2238 puns for which I had at least 5 ratings (which was most of them).
[222004330530] |I asked whether there might be a relationship between the length of the pun and how funny it was.
[222004330540] |I could imagine this going either way, with concise jokes being favored (short and sweet) or long jokes having a better lead-up (the shaggy dog effect).
[222004330550] |In fact, the correlations between pun ratings and length in terms of number of characters (r=.05) and in terms of number of words (r=.05) were both so small I didn't bother to do significance tests.
[222004330560] |I broke up the puns into five groups according to length to see if maybe there was a bimodal effect (shortest and longest jokes are funniest) or a Goldilocks effect (average-length jokes are best).
[222004330570] |There wasn't.
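If you want to poke at the length question yourself, here is a rough sketch of the analysis in Python. The filename and column names are guesses on my part -- check the posted data file for its actual format before running anything.

```python
# Correlate pun funniness with pun length, then bin by length to look for
# non-linear (bimodal or Goldilocks) patterns. Filename and column names
# are hypothetical.
import pandas as pd
from scipy import stats

puns = pd.read_csv("puntastic_ratings.csv")
puns = puns[puns["n_ratings"] >= 5]          # keep puns with at least 5 ratings

for measure in ("n_characters", "n_words"):
    r, p = stats.pearsonr(puns[measure], puns["mean_rating"])
    print(f"{measure}: r = {r:.2f}, p = {p:.3f}")

# Five length bins, to check for bimodal or "just right" effects
puns["length_bin"] = pd.qcut(puns["n_characters"], 5)
print(puns.groupby("length_bin", observed=True)["mean_rating"].mean())
```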
[222004330580] |In short, I can't tell you anything about what makes some people like puns more than others, or why people like some puns more than others.
[222004330590] |What I can tell you is which puns people did or didn't like.
[222004330600] |Here are the top 5 and bottom 5 puns:
[222004330610] |1. He didn't tell his mother that he ate some glue.
[222004330620] |His lips were sealed.
[222004330630] |2. Cartoonist found dead in home.
[222004330640] |Details are sketchy.
[222004330650] |3. Biologists have recently produced immortal frogs by removing their vocal cords.
[222004330660] |They can't croak.
[222004330670] |4. The frustrated cannibal threw up his hands.
[222004330680] |5. Can Napoleon return to his place of birth?
[222004330690] |Of Corsican.
...
2234. The Egyptian cinema usherette sold religious icons in the daytime. Sometimes she got confused and called out, 'Get your choc isis here!'
2235. Polly the senator's parrot swallowed a watch.
2236. Two pilgrims were left behind after their diagnostic test came back positive.
2237. In a baseball season, a pitcher is worth a thousands blurs.
2238. He said, "Hones', that is the truth', but I knew elide.
[222004330770] |Ten points to anyone who can even figure out what those five puns are about.
[222004330780] |Mostly participants rated this as "I don't get it."
[222004330790] |---------------------- BTW Please don't take from this discussion that there haven't been any serious studies of puns.
[222004330800] |There have been a number, going back at least as far as Sapir of the Sapir-Whorf hypothesis, who wrote a paper on "Two Navaho Puns."
[222004330810] |There is a well-known linguistics paper by Zwicky & Zwicky and at least one computer model that generates its own puns.
[222004330820] |However, I know a lot less about this literature than I would like to, so if there are any experts in the audience, please feel free to send me links.
[222004340010] |Zipcar
[222004340020] |I've been an advocate for and member of Zipcar since my wife and I moved to Boston four and a half years ago.
[222004340030] |For that time period, I thought Zipcar was in every way superior to owning a car.
[222004340040] |Until last week, anyway.
[222004340050] |Now I'm reconsidering the car ownership issue.
[222004340060] |Own or rent?
[222004340070] |That's the question.
[222004340080] |It begins
[222004340090] |I was already unhappy early in the week, having discovered Zipcar had overcharged us $375 over the past few months.
[222004340100] |At the beginning of August, I had added my wife as a driver on our account (we'd always had just one driver to save on the yearly membership fee) and also upgraded us to a fixed $75/month plan (which has some added benefits), having noticed that we'd spent more than $75 pretty much every month for the last year.
[222004340110] |I confirmed carefully with the representative that we would only be billed $75 combined, not $75 each.
[222004340120] |It turns out, the hapless representative, rather than simply putting two of us on one account, made two accounts and put us each on both, and then charged us each every month.
[222004340130] |I didn't notice earlier as the charges appeared on different accounts, and I thought we only had one account, but the credit card charges were looking suspiciously large.
[222004340140] |It took a series of emails and a phone call to get that straightened out.
[222004340150] |They eventually agreed to refund us the bogus charges "as a one-time courtesy."
[222004340160] |That's a direct quote.
[222004340170] |Thursday
[222004340180] |We had an overnight trip to our favorite New England B&B for Christmas weekend (seriously, this place is fantastic and has one of the best restaurants I've been to anywhere in the world).
[222004340190] |Our room was even more charming than it looks.
[222004340200] |Yes, that's a working fireplace.
[222004340210] |As usual, I booked a Zipcar for the purpose.
[222004340220] |I believe it was a Nissan Sentra, helpfully parked in our apartment building's garage.
[222004340230] |On Thursday, I got an email from Zipcar saying that due to an unforeseen circumstance, they were bumping us to a Civic in the Government Center garage.
[222004340240] |The exterior of the building is distinctive.
[222004340250] |I've rented cars once or twice from that location.
[222004340260] |During the day, it's ok.
[222004340270] |At night, it's spooky as hell.
[222004340280] |I'd say it's deserted, but there are occasionally roving bands of teens doing who knows what.
[222004340290] |I couldn't find a good picture of the interior. This illustration is true to the spirit of the place.
[222004340300] |So I emailed Zipcar, explaining that I didn't really like returning cars to that location at night, so was there maybe another car nearby I could use.
[222004340310] |Or could the Civic be relocated somewhere more pleasant for a couple days?
[222004340320] |Friday
[222004340330] |I don't know if anybody read that email, since I never got a reply.
[222004340340] |I did get another email on Friday, though, saying that due to another unforeseen circumstance, my reservation had been moved to a Smart Car parked in Somerville (a 15-20 minute drive from where we live).
[222004340350] |Did I mention that the B&B was a 3 hour drive away?
[222004340360] |I didn't really want to drive a Smart Car on the highway for 3 hours, and I wasn't sure our stuff would fit (I had planned on bringing skis).
[222004340370] |So I emailed, saying if it was a choice between a Civic parked in the Dungeon of Despair or a Smart Car, I'd take my chances with the dungeon.
[222004340380] |This time, I got a quick email saying there were no other cars available.
[222004340390] |So I called and explained my situation.
[222004340400] |A very polite representative explained that there really were no other cars nearby, but would I take a Mazda 3 in Arlington (3 suburbs out from Boston, where I live)?
[222004340410] |They'd pick up the cab fare.
[222004340420] |Either that, or I could have $200 towards some other form of transportation.
[222004340430] |Or I could cancel my reservation. $200 wasn't going to cover a standard car rental (I checked), and the B&B reservation was nonrefundable (plus we'd been looking forward to it), so we went with the car in Arlington.
[222004340440] |Saturday
[222004340450] |We took a cab out to Arlington in the morning.
[222004340460] |It took maybe 20 minutes (we helpfully live next to I-93, making getting out of the city easy -- Thank you, Tip O'Neill and Ted Kennedy) and cost $33.75.
[222004340470] |We'd only just gotten back onto I-93 in the direction of Vermont when there was a loud pop under the car and it sounded like something was dragging.
[222004340480] |On inspection (I took the next exit and pulled over), there was some piece of plastic hanging loose.
[222004340490] |The plastic itself didn't look problematic, but I wasn't sure what it had previously been holding in place.
[222004340500] |So I called Zipcar.
[222004340510] |The representative agreed that the car was not safe to drive and asked us to return the car to its original location, and could we perhaps take a Prius from Wellesley College instead?
[222004340520] |If we needed a cab ride, they'd cover the fare.
[222004340530] |I pointed out that (a) we were already running pretty late, and (b) a cab fare to Wellesley was going to be pretty serious, esp. on top of our cab fare to Arlington, so could I just drive the car to Wellesley, drop it off there and take the Prius.
[222004340540] |She said that wasn't possible, since when the mechanics came out to service the Mazda, they wouldn't know where it was.
[222004340550] |I said if that was the problem, I was happy to tell the mechanics that the car was in Wellesley.
[222004340560] |She put me on hold.
[222004340570] |After a brief wait, she came back on the line to apologize, saying she hadn't gotten permission to drop the car off in Wellesley.
[222004340580] |Did I want to make the switch anyway?
[222004340590] |Or there was another car in Salem, MA, if we wanted.
[222004340600] |Witch trials: popular entertainment in Salem, MA.
I've actually wanted to see Wellesley for a while (I like college towns), so we went with Wellesley.
[222004340610] |We called yet another cab (have I mentioned this was Christmas?
[222004340620] |Not a lot of cabs wanting to go to a deserted college town) and went to Wellesley.
[222004340630] |That fare was $66.05, including a Christmas-appropriate tip.
[222004340640] |Finally, we got the Prius and set off.
[222004340650] |We had been traveling for 2 1/2 hours and were now farther from our destination than when we started.
[222004340670] |Our itinerary: A: Home, B: Mazda in Arlington, C: Roughly where the car broke, D: Back in Arlington, E: Wellesley College.
[222004340680] |Vermont is in the North.
[222004340690] |Sunday
[222004340700] |The rest of Saturday was pretty good, and the B&B was everything we remembered (dinner, which was, as always, excellent, included what may be the perfect bread, from Orchard Hill Breadworks).
[222004340710] |Sunday morning we heard rumors that a serious blizzard was heading our way, though it wasn't expected to be bad until evening (before we had left, the weather report had put the chance of snow at only 30%).
[222004340720] |We got a slightly earlier start than we had planned, stopped at a few places on the way back.
[222004340730] |Somewhere around 3:30 or 4:00, we entered Massachusetts and it began to snow.
[222004340740] |The state had put up blizzard warnings on the roads, requesting everyone to get off the road and go home.
[222004340750] |If we had been going straight home -- as we would have were we driving that much-mourned Sentra -- that wouldn't have been a problem.
[222004340760] |But we were going to Wellesley.
[222004340770] |At least, we tried.
[222004340780] |As we neared Wellesley, the snow got very bad, and I frankly wasn't that comfortable driving, particularly once we left the highway and the streets weren't as well-plowed.
[222004340790] |My wife called a cab company to make sure we could get a ride back to Boston.
[222004340800] |They agreed to take us, but then called back shortly thereafter, saying that nobody was driving anywhere, didn't we know there was a blizzard going on?
[222004340810] |I think they made the right decision.
[222004340820] |I did know there was a commuter rail station in Wellesley.
[222004340830] |We called the MBTA to see if the trains were still running.
[222004340840] |They said the trains would most likely run, but with significant delays.
[222004340850] |The next one wouldn't be for 3 hours.
[222004340860] |Oh, and the station we'd be waiting at is outside.
[222004340870] |In a blizzard.
[222004340880] |As a backup plan, we checked to see if there was anywhere we could stay the night in Wellesley.
[222004340890] |However, as Wellesley doesn't have any hotels, there appeared to be only one option:
[222004340900] |The only available room in Wellesley last night.
[222004340910] |We called up Zipcar to consult.
[222004340920] |They agreed to let us leave the car in Boston at no charge, as long as we told them where the car was.
[222004340930] |I'll give them credit for that decision, at least.
[222004340940] |We drove home, very slowly.
[222004340950] |Monday
[222004340960] |I wish I could say the story was over, but this morning I checked my email and saw that we were billed, not only for the Prius, but also for the Mazda 3 (the one that broke down).
[222004340970] |Plus, there was a late fee for returning the Mazda late.
[222004340980] |It seems that when the representative switched our rental from the Mazda to the Prius, she did so before we actually got back to Arlington.
[222004340990] |I sent in another email this morning.
[222004341000] |We can hope that they'll remove those charges "as a one-time courtesy."
[222004341010] |I realize that owning a car has its own hassles.
[222004341020] |I don't expect Zipcar to be perfect, either.
[222004341030] |Everyone's allowed a bad week.
[222004341040] |As long as it's just one week.
[222004341050] |And as long as they give me my money back.
[222004341060] |What did I learn from this experience?
[222004341070] |What I learned -- and what you should take home from this as well -- is to get your bread from Orchard Hill.
[222004341080] |Because it is fucking awesome bread.
[222004341090] |*Update: Tuesday*
[222004341100] |This morning I got a call from someone higher up in customer service at Zipcar, who listened to the whole story.
[222004341110] |She took the numbers for the cab rides in order to reimburse us directly, rather than my having to send in the receipts, which was nice.
[222004341120] |She also comped the entire weekend trip and added a $50 driving credit, which I also appreciate.
[222004341130] |The part I cared about more was that she at least seemed very interested in improving the service such that such problems would not be repeated or would be mitigated more quickly when they do.
[222004341140] |If this reflects a real commitment to efficient service, then hopefully this last week is an aberration, and we'll be able to go back to trusting and relying on Zipcar, as we have in the past.
[222004341150] |*Another Update: Tuesday*
[222004341160] |Now the vice president for member services has called to apologize in addition.
[222004341170] |It's great that they take this stuff seriously.
[222004341180] |I was going to take a temporary break from Zipcar and use a regular rental car company for some upcoming stuff, but now I think I'll give them another shot.
[222004350010] |Qing Wen!
[222004350020] |In the process of encouraging more Americans to study Spanish rather than Mandarin, Nicholas Kristoff notes that in Mandarin
[222004350030] |there are thousands of characters to memorize as well as the landmines of any tonal language.
[222004350040] |How true!
[222004350050] |How true!
[222004350060] |Kristoff shortly proves the latter point in more ways than one:
[222004350070] |The standard way to ask somebody a question in Chinese is “qing wen,” with the “wen” in a falling tone.
[222004350080] |That means roughly: May I ask something?
[222004350090] |But ask the same “qing wen” with the “wen” first falling and then rising, and it means roughly: May I have a kiss?
[222004350100] |Just one possible reaction if you use the wrong tone.
[222004350110] |Kristoff is right, so long as you don't mind sounding like a speech synthesizer.
[222004350120] |The classic description of third tone is a falling tone followed by a rising tone, but in practice it is relatively rare to pronounce the second half (the rising tone), particularly in fluent speech (in Taiwan, anyway; China has a lot of regional variation in Mandarin, so I don't know whether this holds everywhere).
[222004350130] |Figuring out when to pronounce the full tone and when not to is just one of many issues L2 Mandarin speakers run into.
[222004350140] |Actually, third tone is worse than I just suggested.
[222004350150] |Qing wen is actually a good example, because the qing is also in third tone.
[222004350160] |When there are two third tones in a row -- as there are in the qing3 wen3 that means "may I have a kiss?"
[222004350170] |(I'm writing in the tones with numbers here) -- the first one is pronounced as if it were second tone (start low and rise high).
[222004350180] |So even though qing technically doesn't change, its pronunciation depends on which wen you are using.
[222004350190] |If you have three or more third tones in a row (e.g., ni3 you3 hao3 gou3 gou3 ma0?), deciding which syllables will be pronounced as if they were second tone is a complicated issue.
[222004350200] |I'd explain it to you, but I don't actually know myself.
[222004350210] |I've been told you actually have some flexibility in what you do, but I'm not sure that wasn't just another way of saying, "Sorry, I can't really explain it to you."
[222004360010] |So maybe reading *should* be harder
[222004360020] |Some weeks back I chided Jonah Lehrer for his assertion that he'd
[222004360030] |love [e-readers] to include a feature that allows us to undo their ease, to make the act of reading just a little bit more difficult.
[222004360040] |Perhaps we need to alter the fonts, or reduce the contrast, or invert the monochrome color scheme.
[222004360050] |Our eyes will need to struggle, and we’ll certainly read slower, but that’s the point: Only then will we process the text a little less unconsciously, with less reliance on the ventral pathway.
[222004360060] |We won't just scan the words -- we will contemplate their meaning.
[222004360070] |This sounded like a bunch of neuro-babble to me, partly because the research he cited seemed to be about something else entirely.
[222004360080] |Obviously, the ventral pathway is the problem.
[222004360090] |Spoke too soon
[222004360100] |To the rescue come Diemand-Yauman, Oppenheimer & Vaughan, who just published a new paper in my favorite journal, Cognition.
[222004360110] |The abstract says it all:
[222004360120] |Previous research has shown that disfluency -- the subjective experience of difficulty associated with cognitive operations -- leads to deeper processing.
[222004360130] |Two studies explore the extent to which this deeper processing engendered by disfluency interventions can lead to improved memory performance.
[222004360140] |Study 1 found that information in hard-to-read fonts was better remembered than easier to read information in a controlled laboratory setting.
[222004360150] |Study 2 extended this finding to high school classrooms.
[222004360160] |The results suggest that superficial changes to learning materials could yield significant improvements in educational outcomes.
[222004360170] |The first experiment involved remembering 21 pieces of information over a 15-minute interval, which, while promising, has its limitations.
[222004360180] |Here are the authors:
[222004360190] |There are a number of reasons why this result might not generalize to actual classroom environments.
[222004360200] |First, while the effects persisted for 15 min, the time between learning and testing is typically much longer in school settings.
[222004360210] |Moreover, there are a large number of other substantive differences between the lab and actual classrooms, including the nature of materials, the learning strategies adopted, and the presence of distractions in the environment...
[222004360220] |Another concern is that because disfluent reading is, by definition, perceived as more difficult, less motivated students may become frustrated.
[222004360230] |While paid laboratory participants are willing to persist in the face of challenging fonts for 90 s, the increase in perceived difficulty may provide motivational barriers for actual students.
[222004360240] |Or it could just make the students bored.
[222004360250] |In a second, truly heroic study, the researchers talked a bunch of teachers at a public high school into sending them all their classroom worksheets and powerpoint slides.
[222004360260] |The researchers recreated two versions of these materials: one in an easy-to-read font and one in a difficult-to-read font.
[222004360270] |Each of the teachers taught at least two sections of the same course, so they were able to use one set of materials with one group of students and the other set with the other group.
[222004360280] |The classes included English, Physics, Chemistry and History.
[222004360290] |Once again, the researchers found better learning with the hard-to-read fonts.
[222004360300] |Notes and Caveats
[222004360310] |The researchers seem open to a number of possibilities as to why hard-to-read fonts would lead to better learning:
[222004360320] |It is worth noting that it is not the difficulty, per se, that leads to improvements in learning but rather the fact that the intervention engages processes that support learning.
[222004360330] |Moreover, unlike Lehrer, they don't recommend making everything harder to read, learn or do:
[222004360340] |Not all difficulties are desirable, and presumably interventions that engage more elaborative processes without also increasing difficulty would be even more effective at improving educational outcomes.
[222004360350] |There is one obvious concern one might have about their Experiment 2: the teachers were blind to hypothesis, but not to condition.
[222004360360] |The authors attempt to wave this away by asserting that the teachers would likely make the wrong hypothesis (that learning should be worse when the font is hard to read), and thus any "experimenter" bias would be in the wrong direction.
[222004360370] |However, we have no way of knowing whether the teachers attempted to compensate for the hard-to-read materials by explaining things better.
[222004360380] |In fact, the authors had no way of testing whether the teachers behaved similarly in both conditions.
[222004360390] |That's not at all saying I think it was a bad study or shouldn't have been published.
[222004360400] |I think it's a fantastic study.
[222004360410] |I don't know how they roped those teachers into the project, but this is the kind of go-get-it science people should be practicing.
[222004360420] |The study isn't perfect or conclusive, but no studies are.
[222004360430] |The goal is simply to have results that are clear enough that they generate more research and new hypotheses.
[222004360440] |-------
Connor Diemand-Yauman, Daniel M. Oppenheimer, and Erikka B. Vaughan (2011). Fortune favors the bold (and the italicized): Effects of disfluency on educational outcomes. Cognition, 118, 111-115. doi:10.1016/j.cognition.2010.09.012