
42 J.C. & U.L. 311

Journal of College and University Law

2016

Article

Gary S. Marx

Copyright © 2016 by National Association of College and University Attorneys; Gary S. Marx

AN OVERVIEW OF THE RESEARCH MISCONDUCT PROCESS AND AN ANALYSIS OF THE APPROPRIATE BURDEN OF PROOF

The number of research misconduct cases faced by institutions has increased substantially over recent years.1 The proffered explanations for this increase range from greater pressure on scientists to publish quickly to there simply being more emphasis on identifying research misconduct.2

“Research misconduct” is broadly defined to mean fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. For purposes of that definition: (a) “fabrication” is making up data or results and recording or reporting them;3 (b) “falsification” is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record;4 and (c) “plagiarism” is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit.5 Research misconduct does not include honest error or differences of opinion.6

 

This article discusses the administrative process in research misconduct cases pursuant to regulations adopted by the Department of Health and Human Services (HHS) and by the National Science Foundation (NSF). It also analyzes key legal terms and discusses the burden of proof applied in research misconduct cases with a focus on those instances where HHS or NSF seek to debar the researcher from future government contracts or grants.

 

Consider the following simplified example.7

Dr. White was the principal investigator on Project X. Dr. Black was a post-doctoral researcher working with Dr. White. Dr. White's team ran three complex and expensive experiments to test a particular hypothesis--Experiment 1, Experiment 2 and Experiment 3. Experiments 1 and 3 were consistent with the hypothesis although the results of the experiments were not identical. The results of Experiment 2 were inconsistent with the hypothesis. Dr. White determined that Experiment 2 was flawed in some undetermined way. He decided not to repeat Experiment 2 because he felt it would be an unnecessary cost and unduly delay the publication of his report. Dr. Black, on the other hand, felt that Dr. White's decision not to repeat Experiment 2 was a mistake and he expressed his opinion to Dr. White. During the course of the project, Dr. White required Dr. Black to change statistical assumptions relating to certain tests and, as a result of such changes, the results more strongly supported Dr. White's hypothesis than would otherwise have been the case. Dr. Black expressed his view to Dr. White that the manipulation of the assumptions could cause the report to not accurately represent the research record. Dr. White explained to Dr. Black why he felt the modifications were statistically justified based upon his experience. Dr. White determined that it was not worth the time and expense to retain a statistical expert to validate his decision. Eventually Dr. White published his report without reference to Experiment 2 or a discussion of the statistical assumptions challenged by Dr. Black. In Dr. Black's view, Dr. White's decisions were a significant departure from accepted practices.

The fact pattern here would seem to be one where the objective evidence is not completely clear as to whether Dr. White's decisions were appropriate. In the past, Dr. Black may have simply kept quiet as to Dr. White's report, accepting the dispute as merely an academic disagreement and one in which he should defer to Dr. White as the principal investigator. But today, with the greater emphasis being placed on research misconduct, Dr. Black may very well feel justified in filing a complaint with his institution asserting that Dr. White acted inappropriately.

Assuming Dr. Black filed a complaint against Dr. White, there would potentially begin a long and expensive process whereby the institution would investigate Dr. White's conduct and decision-making. Ultimately, the institution would have to make a judgment as to whether Dr. White acted inappropriately in excluding Experiment 2.8 It would also have to determine if Dr. White's changes in his statistical assumptions constituted the falsification of data. To some degree, the institution's decision would depend upon the investigating committee's view of the credibility of Dr. White and Dr. Black and the communications between them.

 

As described below, after the institution completed its investigation and made its decision, its report would then be evaluated by the appropriate agency (typically HHS or NSF), which might undertake its own investigation and would make its own determination as to whether Dr. White's decisions constituted research misconduct. If Dr. White were found to have engaged in research misconduct, he could be debarred from receiving future government grants or contracts.

 

Under current regulations of HHS and NSF, the “preponderance of the evidence” standard would be applied to Dr. White's case. In other words, whether Dr. White would be found to have engaged in research misconduct would depend upon whether the factfinders determined that it was more likely than not that his decisions constituted research misconduct. As some courts have held, preponderance of the evidence means 50% of the evidence and “a feather.”9 Thus, in the foregoing hypothetical, Dr. White's future career may rest on that “feather.”10 If, on the other hand, the standard of proof were “clear and convincing” evidence--the traditional common law standard in fraud cases--the factfinders would be required to have a much greater degree of certainty in their conclusion before finding that Dr. White engaged in research misconduct.11

 

This article acknowledges the strong public interest in research integrity. But it suggests that there are constitutional arguments supporting the contention that the clear and convincing standard of proof (rather than the preponderance standard) is required in cases such as Dr. White's, at least when the agencies seek to debar a researcher. And while the article concludes that the application of the preponderance standard is likely constitutional, it argues that the HHS and NSF regulations may nevertheless be invalid under the Administrative Procedure Act (“APA”).12 It further suggests that, regardless of the legality of the current regulations, HHS and NSF should undertake rulemaking to evaluate whether the clear and convincing standard should be applied in research misconduct cases, especially where debarment is the proposed remedy.13

Marx and Lieberman, PLLC, Washington, D.C.; A.B. 1975, Washington University; J.D. 1979, Washington and Lee University.

1

The Office of Research Integrity (ORI) is a component of the Office of the Assistant Secretary for Health in the Office of the Secretary, within the U.S. Department of Health and Human Services (HHS). The ORI's mission includes research misconduct investigations. The ORI's Annual Report for 2012 states as follows:

In 2012, the 6,714 funded institutions reported 323 allegations, inquiries, or investigations. The count in year 2012 is a record of what institutions submitted in their 2011 Annual Report, which is submitted to ORI in 2012 .... From all sources, ORI received 423 allegations in 2012, an increase of 56 percent over the 240 allegations handled in 2011, and well above the 1992-2007 average of 198. [The Division of Investigative Oversight's] review process involved opening 41 new cases, closing 35, and carrying 45 cases into 2013. The number of open cases was the highest number in 16 years .... In 2012, ORI made findings of research misconduct in 40 percent of the cases (14/29). In contrast, the historical average of this finding is 36 percent. Administrative actions imposed on those who committed research misconduct included: debarred 6 respondents for a varying number of years, prohibited 14 from working as advisors, and required 9 to be supervised in any PHS-supported research activity. Office of Research Integrity, 2012 Annual Report.

See also, Dr. Jim Kroll, Director, Research Integrity and Administrative Investigations Unit, National Science Foundation Office of Inspector General, NSF OIG: Stories from the Case Files (“Kroll Presentation”), available at http://www.slideserve.com/poppy/nsf-oig-stories-from-the-case-files-national-science-foundation-office-of-the-inspector-general (contains statistics on NSF's research misconduct investigations). To assist the reader, there is an appendix setting forth the most common abbreviations used in this article.

2

A 2015 article in Science News noted that researchers are facing unprecedented funding challenges that put “scientist[s] under extreme pressure to publish quickly and often.” According to the article, “[t]hose pressures may lead researchers to publish results before proper vetting or to keep hush about experiments that didn't pan out.” Tina Hesman Saey, Repeat Performance: Too Many Studies, When Replicated, Fail to Pass Muster, 187 Science News 21 (Jan. 24, 2015). In a PowerPoint presentation at the INORMS 2014 Concurrent Sessions, the presenters answered the question of why there is an increase in research misconduct cases at NSF by setting forth the following: “We have become better at catching it. Increased competition for limited resources; Technology makes it easier to cheat and to catch a cheat. High profile cases increase awareness. RCR training increases awareness. Government interaction with research communities raise awareness of our role in handling RM allegations.” National Science Foundation, Office of the Inspector General Research Integrity and Administrative Investigations Division, Navigating the Research Misconduct Process: Observations from the U.S. National Science Foundation OIG (2016), available at tmcstrategies.net/wp-content/uploads/.../4-NSF-OIG-Presentation.pptx.

3

See, e.g., Case Summary: Chen, Li, Office of Research Integrity, http://ori.hhs.gov/chenli (last visited May 29, 2016).

4

See, e.g., Case Summary: Ahvazi, Bijan, Office of Research Integrity, http://ori.hhs.gov/content/case-summary-ahvazi-bijan (last visited May 29, 2016).

5

See, e.g., 20 Office of Research Integrity Newsletter 1, 7 (2011), available at https://ori.hhs.gov/images/ddblock/dec_vol20_no1.pdf. More information on plagiarism can be found at 26 Guidelines at a Glance on Avoiding Plagiarism, Office of Research Integrity, http://ori.hhs.gov/plagiarism-0 (last visited May 29, 2016).

6

42 CFR § 93.103 (2015); 45 CFR § 689.1(a) (2015).

7

Use of the hypothetical is not intended to suggest that HHS or NSF would seek debarment in such a case. To the contrary, a review of HHS and NSF debarment cases indicates that the agencies seek debarment only when the evidence of misconduct is significantly stronger. Nevertheless, under the current regulations, nothing would preclude the agencies from seeking debarment even under the facts of the hypothetical.

8

See Dov Greenbaum, Research Fraud: Methods For Dealing With An Issue That Negatively Impacts Society's View Of Science, 10 Colum. Sci. & Tech. L. Rev. 61 (2009). Dr. Greenbaum noted the following:

Additionally, experienced scientists might drop outliers in their data or add in fudge factors, relying not on scientific rigor but on honed hunches, justifying the disposal of those points as spurious. Again, dropping data points without scientific justification may border on falsification of data, or not. The gut reaction, acceptable in many other areas of life, might be necessary when researching uncharted corners of science.

Id. See also, Raymond De Vries, Melissa S. Anderson, & Brian C. Martinson, Normal Misbehavior: Scientists Talk about the Ethics of Research, 1 J. Empirical Res. on Hum. Res. Ethics 43, 45 (2006) (cited by Dr. Greenbaum) which quotes a researcher as follows:

One gray area that I am fascinated by ... is culling data based on your ‘experience’ ... [T]here was one real famous episode in our field ... [where] it was clear that some of the results had just been thrown out .... [When] queried [the researchers] ... said, ‘Well we have been doing this for 20 years, we know when we've got a spurious result ....’ [When that happens] ... [d]o you go back and double check it or do you just throw it out ... [and] do you tell everybody you threw it out? I wonder how much of that goes on?

Id. See also, Dan L. Burk, Research Misconduct: Deviance, Due Process, and the Disestablishment of Science, 3 Geo. Mason Indep. L. Rev. 305, 333-34 (1995). Professor Burk stated:

The discord between the scientific and legal approaches to misconduct is well illustrated by the efforts of federal agencies to settle upon a proper definition of “misconduct.” ... The division between misconduct and legitimate science may be difficult to distinguish, and not even a mens rea requirement such as “deliberate falsification” is sufficient to adequately distinguish the two. For example, consider the problem of selective reporting of data. The scientific report is by no means a stenographic or historical description of the research completed, nor is it meant to be. The scientist chooses carefully and deliberately what aspects of his research deserve to be reported. In doing so, he exercises the creativity that lies at the heart of science, ... The essence of scientific genius is the ability to choose what ought to be left out.

Id.

9

Colon v. Sec. Dept. Health and Human Services, 2007 WL 268781 (Fed. Cl. 2007) (the preponderance of the evidence means “50% and a feather.”). See also, United States v. Restrepo, 946 F.2d 654, 661 (9th Cir. 1991) (en banc) (Norris, J., dissenting), cert. denied, 503 U.S. 961 (1992) (noting that preponderance standard “allows a fact to be considered true if the factfinder is convinced that the fact is more probably true than not, or to put it differently, if the factfinder decides there is a 50%-plus chance that it is true”); Comment Note, Instructions Defining Term “Preponderance or Weight of Evidence,” 93 A.L.R. 155 (originally published in 1934).

10

Roger Wood, Scientific Misconduct - The High Cost of Competition, InfoEdge (Sept. 8, 2014), http://researchadministrationdigest.com/high-cost-competition-scientific-misconduct/ (“The impact on individual researcher's careers is more significant, with most - but not all - researchers found to have engaged in misconduct by the DHHS Office of Research Integrity experiencing a ‘severe decline in research productivity.”’). See also, Andrew M. Stern et al., Financial Costs and Personal Consequences of Research Misconduct Resulting in Retracted Publications, eLife (Aug. 14, 2014), https://elifesciences.org/content/3/e02956 (“We found that in most cases, authors experienced a significant fall in productivity following a finding of misconduct”).

11

Speiser v. Randall, 357 U.S. 513, 526 (1958) (“the possibility of mistaken factfinding [is] inherent in all litigation”); Addington v. Texas, 441 U.S. 418, 423 (1979) (because of the possibility of mistakes, the standard of proof “serves to allocate the risk of error between the litigants and to indicate the relative importance attached to the ultimate decision”).

12

5 U.S.C. § 706(2)(B) (2012).

13

Id.


 

