Specifically, this study seeks to explore, “Do men who report greater comfort with receptive penetrative anal eroticism also report less transphobia, less obedience to masculine gender norms, greater partner sensitivity, and greater awareness about rape?” This study uses semi-structured interviews with thirteen men to explore this question, analyzed with a naturalist and constructivist grounded theory approach in the context of sexualities research, and introduces transhysteria as a parallel concept to Anderson’s homohysteria. This analysis recognizes potential socially remedial value for encouraging male anal eroticism with sex toys.
What about the seven papers that were accepted for publication? One was a collection of poetry for a journal called Poetry Therapy. Let’s be clear: This was bad poetry. (“Love is my name/ And yours a sweet death.”) But I’m not sure its acceptance sustains the claim that entire fields of academic inquiry have been infiltrated by social constructivism and a lack of scientific rigor.
Another three plants were scholarly essays. Two were boring and confusing; I think it’s fair to call them dreck. That dreck got published in academic journals, a fact worth noting to be sure. The third, a self-referential piece on the ethics of academic hoaxes, makes what strikes me as a somewhat plausible argument about the nature of satire. The fact that its authors secretly disagreed with the paper’s central claim—that they were parroting the sorts of arguments that had been made against them in the past, and with which they’ve strongly disagreed—doesn’t make those arguments a priori ridiculous.
That leaves us with three more examples of the hoax. These were touted as the most revealing ones—the headline grabbers, the real slam dunks: the dog-rape paper, the dildo paper, the breastaurant research. They also share a common trait: Each was presented as a product of empirical research, based on original data. The dog-rape study is supposed to have resulted from nearly 1,000 hours of observation at three dog parks in southeast Portland. The dildo paper pretends to draw from multihour interviews with 13 men—eight straight, two bisexual, three gay—about their sexual behaviors. And the breastaurant research claims to have its basis in a two-year-long project carried out in northern Florida, involving men whose educational backgrounds, ages, and marital statuses were duly recorded and reported.
How absurd was it for such work to get an airing? It may sound silly to investigate the rates at which dog owners intervene in public humping incidents, but that doesn’t mean it’s a total waste of time (as psychologist Daniel Lakens pointed out on Twitter). If the findings had been real, they would have had some value irrespective of the pablum that surrounds them in the paper’s introduction and discussion sections.
One Duke University surgeon called it a “new frontier” in cancer treatment. Another said it could save “10,000 lives a year” or more. A researcher at Mass General Hospital called it “a very, very exciting tool” in the fight against lung cancer. As news spread in 2006 and 2007 of the work of Anil Potti, a star cancer researcher at Duke, the excitement grew.
What he had claimed to achieve, in leading medical journals, was a genomic technology that could predict with up to 90 percent accuracy which early stage lung cancer patients were likely to have a recurrence and therefore benefit from chemotherapy.
He had developed, Potti said in interviews at the time, a genomic “fingerprint unique to the individual patient” that would predict the chances of survival of early stage lung cancer patients.
It was considered a breakthrough because, as the Economist explained at the time, chemotherapy is “a blunt instrument … In most cases a patient’s survival depends on whether he dies from the side effects of chemotherapy before the chemotherapy kills the cancer, or vice versa. A way to pick the right type of chemotherapy would make a big difference. Anil Potti and colleagues, of Duke University in North Carolina, have proven — in principle, at least — that they can do exactly that. Instead of prescribing chemotherapies according to a doctor’s best guess, they propose a genetic analysis to predict which type of chemotherapy would stand the greatest chance of zapping cancerous cells.”
And they had ample reason for their praise. After all, the revolutionary findings by Anil Potti and his team were first published in Nature Medicine, one of the most prestigious peer-reviewed journals in the field, and later in a host of other prestigious journals.
Now, the Office of Research Integrity (ORI), the agency that investigates fraud in federally funded medical research, has officially declared that the data generated by Potti were not only flawed but “false.”
The data were “altered,” it said in a report published Monday in the Federal Register, to produce the results the researchers desired. False data were also submitted to obtain further research grants, it concluded, citing a claim by Potti that 6 of 33 patients had responded favorably to a test when only 4 patients were enrolled in the trial, none of whom responded positively.
Harvard Medical School and Brigham and Women’s Hospital have recommended that 31 papers from a former lab director be retracted from medical journals.
The papers from the lab of Dr. Piero Anversa, who studied cardiac stem cells, “included falsified and/or fabricated data,” according to a statement to Retraction Watch and STAT from the two institutions.
Last year, the hospital agreed to a $10 million settlement with the U.S. government over allegations Anversa and two colleagues’ work had been used to fraudulently obtain federal funding. Anversa and Dr. Annarosa Leri — who have had at least one paper already retracted, and one subject to an expression of concern — had at one point sued Harvard and the Brigham unsuccessfully for alerting journals to problems in their work back in 2014. Anversa’s lab closed in 2015; Anversa, Leri, and their colleague Dr. Jan Kajstura no longer work at the hospital.
Dozens of recent clinical trials contain suspicious statistical patterns that could indicate incorrect or falsified data, according to a review of thousands of papers published in leading medical journals.
The study, which used statistical tools to identify anomalies hidden in the data, has prompted investigations into some of the trials identified as suspect and raises new concerns about the reliability of some papers published in medical journals.
The analysis was carried out by John Carlisle, a consultant anaesthetist at Torbay Hospital, who previously used similar statistical tools to expose one of the most egregious cases of scientific fraud on record, involving Yoshitaka Fujii, a Japanese anaesthesiologist who was found to have fabricated data in many of his 183 retracted scientific papers.
In the latest study, Carlisle reviewed data from 5,087 clinical trials published during the past 15 years in two prestigious medical journals, JAMA and the New England Journal of Medicine, and six anaesthesia journals. In total, 90 published trials had underlying statistical patterns that were unlikely to appear by chance in a credible dataset, the review concluded.
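To see why such patterns are detectable at all, here is a minimal sketch of the general idea (my own illustration in Python with made-up data, not Carlisle’s actual tooling): in a genuinely randomized trial, p-values comparing baseline variables between the two arms should be roughly uniform between 0 and 1, so a dataset whose baseline p-values bunch up, for instance because one arm was quietly copied from the other, departs measurably from that expectation.

```python
# A hedged illustration of a Carlisle-style baseline check, not his code.
# Idea: under genuine randomization, baseline p-values are ~Uniform(0, 1);
# fabricated arms that are "too similar" push the p-values toward 1.
import numpy as np
from scipy import stats

def baseline_pvalues(arm_a, arm_b):
    """t-test p-value for each baseline variable (one per column)."""
    return [stats.ttest_ind(arm_a[:, j], arm_b[:, j]).pvalue
            for j in range(arm_a.shape[1])]

def uniformity_flag(pvalues):
    """Kolmogorov-Smirnov test of the p-values against Uniform(0, 1).
    A tiny result is a red flag worth investigating, not proof of fraud."""
    return stats.kstest(pvalues, "uniform").pvalue

rng = np.random.default_rng(0)
honest_a = rng.normal(size=(200, 20))  # 200 patients, 20 baseline variables
honest_b = rng.normal(size=(200, 20))  # second arm, same population
faked_b = honest_a + rng.normal(scale=0.01, size=(200, 20))  # near-copy of arm A

print("honest trial:", uniformity_flag(baseline_pvalues(honest_a, honest_b)))
print("suspect trial:", uniformity_flag(baseline_pvalues(honest_a, faked_b)))
# The honest trial's KS p-value is unremarkable; the near-copied arm's
# baseline p-values all sit near 1, so the KS test flags the anomaly.
```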
Consequently, I examine the following questions, which are underdeveloped within intersectional animal/feminist literature: (1) How do human discourses of rape culture get mapped onto dogs’ sexual encounters at dog parks; particularly, how do companions manage, contribute, and respond to ‘dog rape culture’? (2) What issues surround queer performativity and human reaction to homosexual sex between and among dogs? and (3) Do dogs suffer oppression based upon (perceived) gender?
These feminist journals were willing to accept, without question, studies that were patently absurd, as long as they followed a basic ideological template.
The research supposedly involved sitting in the park watching dogs humping and judging whether the dogs enjoyed it, as a measure of consent. It is patently ridiculous. It was accepted because it fit the ideological template.
The peer review process in these prominent journals failed to detect the humorously shoddy work, ridiculous conclusions, unethical practices and failures in basic logic that were in these papers. They set out to prove that all that mattered was flattering the ideological biases of this academic community... and they succeeded.
That the journals failed is conceded by almost everyone involved. They failed, and failed miserably. And I do get the dark humor in dwelling on that failure.