What are the legal implications of using social media and mobile applications in clinical trials and the recent developments impacting research fraud investigations?
In this recap of our fourth quarter presentation, which includes video and an accompanying transcript, panelists – health care partner Mark Barnes and associate David Peloquin – address such topics as:
- Use of social media for research subject recruitment
- Privacy considerations related to the use of social media and mobile applications
- Institutional review board and research ethics committee views of social media
- Obligations of research sponsors to monitor adverse events posted on social media
- International considerations in use of social media and mobile applications
- Strategies for conducting research fraud investigations in industry
This presentation is part of Life Sciences Quarterly, a quarterly seminar series that delivers insights from Ropes & Gray attorneys, speakers from government and industry and other professionals as they examine key developments, issues and trends affecting the life sciences sector.
Marc Rubenstein: Good morning. My name is Marc Rubenstein. I'm in the life sciences group at Ropes & Gray. We're thrilled you all could join us this morning and hope you all had a good holiday. I heard a lot of people say they're very excited about the topic today, which is for good reason. Obviously, we're going to talk about… I'm giving myself too much credit. The topic today is social media and mobile applications in clinical trials and recent developments in research fraud. And we're all in luck because we have two experts in that field with us, Mark Barnes and David Peloquin from our Boston office. And I will turn it over to them.
David Peloquin: Hey. Thanks, Marc. So we're talking about these two very different topics today, social media in clinical trials and research fraud developments. And so we'll start a little bit with social media. And we have the slide deck here so we can go through these topics, but since we're such a small group, let's make this conversational. Please interrupt with questions at any time. It's a lot more fun if we can have a discussion instead of just going through the slide deck necessarily. And so, turning to social media and clinical trials, there's really two spaces where we get a lot of questions on this topic. One is really the recruitment space. How can we use social media to attract subjects or patients into our clinical trials? And the second is how can we actually use social media as a tool during the clinical trial to facilitate communication between subjects and investigators in the trial, or as a tool to facilitate even communication amongst subjects in the study? And so, when we're talking about recruitment and the use of social media in recruitment, increasingly, our life sciences companies that are sponsoring, our clients that are sponsoring research, are asking a lot of questions about, “How can we harness social media? And if we do, what are the regulations that apply to our use of social media in the trials?” And I think this is becoming increasingly important to life sciences companies because traditionally recruitment was something that happened with the investigator talking to his or her patients, or we see advertisements here in Boston on the T a lot of the time advertising different research studies. And now, increasingly, because there is more competition if you will for research subjects, with more trials and scarcer subjects, life sciences companies are realizing that if they're going to get enough subjects into their trials, they need to be part of the recruitment process as well. 
And they're engaging recruitment vendors, and their CROs are engaged in the recruitment process. And those vendors are trying to figure out, “How do we use social media to attract subjects to our clinical trials?” And so a lot of times, we get asked, “What are the regulations? What is the guidance that applies to the use of social media in recruitment activities?” And the guidances that are out there on the recruitment of clinical trial subjects are fairly dated at this point.
David Peloquin: They come from the late 1990s, from the FDA's guidance, and the mid-2000s with the Office for Human Research Protections, OHRP's, guidance. And so, they're really from a pre-social-media world. They talk a little bit about recruitment on the internet, but they're thinking about, like, lists of clinical trials, like ClinicalTrials.gov, and not actually anything that would be interactive, where people might collect information over social media. However, the principles of recruitment that are announced in these guidance documents can be applied generally. So rather than being format specific, we can think about how to take these general principles and apply them in a social media context. So if we're recruiting via Facebook instead of doing, like, a flyer in the hospital. And so the general principles that are announced in these guidances I think are instructive in helping us think about how we use social media to recruit a subject. And so the first thing to think about is that the FDA and OHRP always say that recruitment is really the first step of the informed consent process. And so, any materials that are used to recruit subjects, just like your informed consent form in your trial, need to be approved by the institutional review board, the IRB, before anybody's consented. The recruitment materials should be approved by the IRB before they are used to roll out and try to attract subjects into the study.
Audience Member: So is that specifically if you are trying to focus on patient communications, or if you're just trying to create awareness about the trial that could be more HCP? Is there a difference there between IRB submission and approval or not?
David Peloquin: And so the question asked was, “What if we're directing the communication about a specific trial to subjects, versus just creating awareness that we have trials out there? In which case would the IRB need to be alerted or be given the materials to review and approve in advance?” And that's a great question. What this guidance we're talking about now addresses is subject-directed recruitment materials. So when you're actually targeting the subject and you're trying to make them aware. And that goes back to the whole idea that this is the beginning of the informed consent process, that we are starting to tell subjects what the risks and benefits of the study are, why you might want to participate in the study. And whenever that's the case, we want to have the IRB overseeing that.
Mark Barnes: But, you know, David, one thing about that though is we often get the question about something that's in between kind of awareness of trials and one particular trial because there can be, like, a cluster of trials, for example. And so, you know, it's really a case by case analysis. But when you're, for example, featuring, like, three or four trials and saying, “These three or four trials are available,” that's specific enough that probably the FDA and also IRBs would expect to be able to review those materials, those recruitment materials, those online materials. But, you know…
Audience Member: Eve, Brandon and I work at… we work in the rare disease space for an advocacy community, and often stand at the interface between the sponsor and potential trial subjects. So to what extent is the sponsor accountable for something that the advocacy organization may take upon itself to disseminate, and that may not necessarily follow the guidances that, you know, we would hold ourselves accountable to as a sponsor? So can you talk about that in terms of the liability of the sponsor if there is this kind of in-between where, you know, we send the email blast about potential upcoming trials, and may not use the appropriate language that kind of threads the needle in terms of how you want to position your study?
Audience Member: And is there an expectation that the sponsor would review those, even if, you know, we're not asked to review ahead of time? But, you know, you come across a posting on a website, and it's not factually correct. Is there an obligation on the sponsor to proactively reach out to the advocacy organization and ask them to amend or to correct or update that information? And how diligent do you need to be in surveying that information? Because it happens every day, right, where you're seeing, “Oh, it's kind of right, but not completely correct.” And what's my obligation to reach out and send an email and ask for corrections or for updates?
David Peloquin: And so again, Mark, you may have other thoughts. I think if it's something that you have put on there and people are posting comments in response to that, then there is an obligation to curate, to look for whether people are posting, for example, maybe adverse events that they've had in the trial, which is not what the page is intended for, but sometimes there are people that are using it in that way. As far as proactively looking for things about your trial, I think there's less of an obligation. But certainly, if you're noticing something and it's incorrect, I mean, it would be prudent I think to email that organization, especially if it has a large sway with your patient population.
Mark Barnes: And, I mean, in a way, it's-- you can't control what's on the internet. I mean, lots of people post lots of things, right? But if you are a rare disease drug company and you have close relationships with a patient advocacy group, and especially if you fund the patient advocacy group for outreach, you know, they almost-- I mean, from the point of view of the FDA or even, like, a tort lawsuit, they become almost an extension of you. So I would be more careful about that relationship and about what's posted than I would about, you know, what may be on a chatroom. You have no control over that. But, you know, if you do fund them, you have a bit of control over them. So it's not one-to-one, it's not like your own posting. But you have to calibrate really the degree of attention to the closeness of the relationship.
Audience Member: And can you describe one more thing? So when you mentioned the IRB approval, if you don't go through a centralized IRB, you know, in the U.S., the internet is very expansive. So it would be different if you were working directly with an institution or a hospital and wanting to create awareness about the trial on that particular website. But if you're doing something, you know, on one of the platforms that you mentioned earlier, you know, Pinterest, Facebook, Twitter, are there certain percentages of your trial sites that you need IRB approval from that are active? Or can you just describe a little bit more about that? Because there could be very different perspectives, and if it's not centralized, then what is the process before you're enabled to go ahead and present that digitally?
David Peloquin: And so the question is, “How do we do this if there's not a central IRB and there's local ones at each site, if we have a social media campaign for the trial in general?” And unfortunately, I think the answer is that if you have a central IRB, this is much easier because you're submitting it to WIRB or Copernicus, whoever your central IRB is, and they're approving it. But if you have the local IRBs, generally, if this is a trial that they are hosting and it's possible that their patients are seeing this advertisement, then those local IRBs would have to approve that material as well, which may require disseminating it to multiple IRBs. So this is definitely something to consider when you're rolling out your recruitment strategy. Is this something that's going to have to go through a lot of local IRBs, or is there one central one we're using in the study?
Audience Member: So with that, how do you draw the line between what's just informational and what's for recruitment?
David Peloquin: I think you would look at are we talking about enrollment criteria for this trial? Are we naming the specific trial? Are we providing contact information where people could go and contact particular sites to enroll in the study? So we'd have to look at what's the intent of this. Is it to drive people toward enrolling in the specific study? Or is it just talking generally, that we have a number of different studies available, without putting specific information about the given trial?
Mark Barnes: If it's naming one trial or any specific trial, you know, then it is directed at potential subjects from the point of view of the FDA, and you would have to have the IRB approval. This is a good reason to use a central IRB, right?
David Peloquin: Yes. This is much easier. And we have clients now that have-- you know, WIRB does most of their stuff, so they're submitting it there. But then they do have a few sites where they're having to submit this as a site-specific IRB submission. And so the other elements on the slide are, I think, more traditional. Just that the materials that you're using should not be coercive. They should not overemphasize remuneration in the study, which is something to think about when you're doing, like, Craigslist or something where there's just an opportunity for a small link that someone can click on. Generally, you wouldn't want to put, like, “Opportunity to earn $500” as, like, the catch phrase there, emphasizing what people are getting paid.
Mark Barnes: You laugh, but we've seen that.
David Peloquin: And so then the next slide is something we've really talked about already. And this has to do with the curation, about what the duty is to look at comments that are posted on social media about your study. And so really, I think when you have a Facebook page, for example, and there's comments that are being posted there, to the extent that you're not able to disable that feature and you allow people to comment there, there is the possibility of adverse events that might need to be monitored and reported potentially to a regulatory agency. And then people may also be posting things about the product's effectiveness. They may… subjects who are in the trial may use that page as a vehicle to speculate about which arm of a blinded trial they're in or speculate about how the drug or device is affecting them. And then, of course, if other people are reading that, that could influence their own perception of the trial. Especially if you have, like, patient-reported outcomes that you're trying to obtain. That may sway how other people are perceiving it and sway the results. And so that would be an incentive to monitor what people are posting. And usually, the owner of the site can then delete comments that are inappropriate. So watch for that. And then, because social media's international, we might have trial sites all over the world. Or we… people in other countries might be viewing that information. And so, we get questions a lot about, “Can we use social media for our international trial that has sites in Europe or sites in Asia?” And generally, like in the U.S., the guidance on recruitment is not specific to social media. It's not medium-specific; it's just about the activity in general of what you're trying to do.
And so in most jurisdictions where Mark and I have had to consult with local counsel or some of our colleagues in our Asian offices that work on this, generally the local ethics committee in each of these jurisdictions needs to review the material, just like the IRB does here in the United States. And sometimes they will have more stringent requirements applied locally. And so that's something else to consider. And sometimes CROs that operate internationally can help think about what the considerations are in each of the jurisdictions.
David Peloquin: That's recruitment, using social media. Social media also is often used in clinical trials as either a means to communicate between investigators and subjects, or even providing a forum for subjects to discuss with one another their disease or condition. And so there's a few things to think about when one is using social media in this context. And one is what is actually the medium that's being used? Is it, like, a private messenger function where it's just the investigator or his or her staff communicating with the subject, much as they might over an email communication otherwise? Or is it a page that we're creating, where everybody in the trial can see what other people are posting? And when one is doing this, you want to think about who can see the information. Is it just the person who's posted it? Is it other people in the trial? Is it possible that all of their Facebook friends can see information that's posted in this forum? And that oftentimes will depend on what the user's settings are. And so I think when one is doing the informed consent process, you'd want to alert people that, depending on the settings in your Facebook, this information may be visible to others, and consider that at the outset of the study. And the area where I see questions come up about this sometimes is that oftentimes in the informed consent process and in the research protocols, traditionally, we have language like what I have in italics here, saying that all your study information's being kept private, it's going to be on secure computers or in locked cabinets. And that's thinking about the study site controlling all this information and keeping it private. But when we have a social media scenario, information maybe is-- the whole point of social media is to share things and make it available to others. And so it may be the case that other people in the study are viewing information that's being put on social media.
And so, making sure that the statements you have in your documents, like your informed consent forms, are consistent with the reality of what's actually happening in the study, and questioning them when they're not. And so then… oh yes, question?
Audience Member: Sorry, that seems like a really big gap, though. I mean, if the investigator elects to use a social media tool, the company, the sponsor, isn't reviewing the consent for that eventuality. How do you bridge that gap?
David Peloquin: No, that's a great question. And so the question is, “What if the sponsor has a study and the investigator's decided to use social media?” And so I think that the social media should be accounted for in the protocol, to the extent it's going to be used, that in the sponsor's protocol they would describe what's being used and how it's being used. And then in the sponsor's contracts with the clinical trial sites, they're requiring them to follow the protocol. And so the investigator shouldn't really be introducing social media of his or her own volition without having a protocol amendment from the sponsor in sponsored research. And so I think it's having that type of control and anticipating that in advance.
Mark Barnes: It is a problem, though, because you will often get, you know, in company-sponsored protocols, but even more often in investigator-initiated studies, you will get investigators who will say one thing, and then they will do something else that actually is quite a bit different. For example, the use of social media that's not been contemplated. And so, you know, all I can say is actually in both categories, both the company-sponsored as well as the investigator-initiated grants or investigator-initiated research, different companies call it different things, it's very important to have that contract clause or funding agreement clause which says that if they're going to alter the protocol in any material way, they have an affirmative obligation to get prior written approval from the funder or the sponsor. And then you actually have to have your clinical operations team or your medical team, medical affairs team, whoever is monitoring this stuff, you know, actually be attuned to make sure that they're getting… that they know what's going on in these protocols. But we've often seen deviations in this way that are actually significant.
David Peloquin: So in addition to social media, we wanted to talk a little bit about the use of mobile applications to collect data in clinical trials or to be used as a communication tool between the clinical trial subject and the investigator. And so I'm seeing these used often in two ways in studies. A lot of times, reminder messages are being sent out automatically to subjects on their mobile phone. This could either be a bring-your-own-device arrangement, where an app is installed on the person's own iPhone, or they might be provisioned, like, a special iPad or phone to use solely for the purposes of the study. And then the other way I see them used a lot is to have surveys on the phone, so people can report what symptoms they've experienced in the last 24 hours, or their adherence to the protocol. And so those information collection tools are sending information back to the sponsor. This slide, which we don't have a ton of time to go through right now, just shows kind of the data flows in the clinical trials. And I like to show this just because it shows how many different parties are involved when one is using something like a mobile application in a clinical trial. Because if we have sponsored research, for example, in our upper left-hand corner we'd have, like, the pharmaceutical or device company that's designing the protocol, deciding if these mobile applications are going to be used, and engaging a vendor to supply them, perhaps. And their protocol and informed consent forms should reflect the use of all of these. And that's some… an area of slippage I see sometimes, is that when these forms are drafted, they don't contemplate the use of these mobile devices. Or it's not described clearly. And then when one moves along in our flow and we get to the clinical trial sites or the IRBs that are reviewing these materials, questions start to arise.
Audience Member: Is that different from, or is one also to include, consent to getting electronic notifications otherwise, like by email?
David Peloquin: So that's a good question. And so for email, there's a law called CAN-SPAM that's more about sending out commercial messages, and generally in the clinical trial context it would not be directly applicable. But as a best practice, you'd want consent, knowing what somebody's email address is and that you're sending it to the right address. For text messaging, things are more explicit as far as needing the actual consent. And so having in your consent form that we're sending you text messages is more important. There's also push notifications. This gets very techie and technical. But within the app, people can sometimes receive a notification through the app itself. And those technically aren't covered by the TCPA, the Telephone Consumer Protection Act, but generally, you want to tell people that as well if you're going to be sending them messages through the app. I mean, the whole policy rationale behind the TCPA is that people get charged for text messages a lot. And so we want people to be aware if they're getting them so that they know to talk to their mobile carrier about, “What am I going to get charged if I'm getting, like, ten messages a day from this trial, and now I have this huge cell phone bill?” A few practical considerations that we've seen come up when advising clients on the use of apps in clinical trials: one, think about what the content of the messages is. And this is important. The IRBs actually see these. Oftentimes, there are Excel spreadsheets presented to them that show what the messages sent to the participants are, because a lot of times when you have your phone out, your iPhone, and a message comes through, it just pops up on the screen unless you've disabled that setting to have the messages come to your screen. And so if you're sending people sensitive information about, “Remember to take XYZ drug,” that information may be visible to other people around them.
So, for example, we had a sponsor client that was testing a smoking cessation drug in a minor population in a clinical trial. And they were sending out all these text messages, like, “Remember to fill out your smoking diary. How many cigarettes did you use yesterday?” And since this was a minor population, if the kids are in school or something, they technically may be doing an illegal activity by having cigarettes. And so, thinking sensitively, just saying, like, “Well, remember to fill out your diary today,” or something. And then the last thing, which could be a whole discussion in and of itself, but we only have time to briefly mention, is that when people are using these mobile applications and you have an international trial, a lot of times, the information is collected in Europe, for example, or in Asia.
David Peloquin: And it's sent to a data coordinating center back here in the United States, where a lot of the sponsors are, or the lead sites. And thinking about what the ramifications of that cross-border data flow are, because we have laws in Europe, the General Data Protection Regulation that's coming into effect next year, and even the current regime, that restrict those cross-border flows of personal data. And so it's necessary to think about what the mechanism is to legitimize that. Is it obtaining the consent of people to send that information? There are certain contractual clauses that can be used. But think about that whenever one is putting in place a mobile application transmitting information across borders in a clinical research study. And I think… question?
Audience Member: I know this is probably the subject of another one of these, but on the GDPR, is the general sense that you could still use consent to get information out of your clinical trial? Or is there… is it a little bit trickier?
David Peloquin: The short answer is that yes, the law still allows consent to be a basis to transfer data from Europe to the United States. The Article 29 Working Party, which is the advisory body in Europe, has always frowned on consent being used as a basis for transfer of data. But one thing I think to keep in mind in the clinical trials context, because the GDPR is for all sectors of the economy, is that there is a much more robust process taking place in a clinical trial, oftentimes face-to-face consent with people. And so I think you have a better opportunity in this setting than almost anyone else in the economy to actually obtain an appropriate consent for transfer of information. So the last thing I would leave with is just that when you're using these mobile apps, again, it's subject to the IRB review and approval process. And so make sure that the IRB is aware of what's being used and how it's being used. And I think that institutions were not as involved in this in the IRB process. But recently, I know that, like, AAHRPP, which is the organization that accredits most American IRBs, is asking questions now when they're doing these accreditations that I've been part of recently, like, “How are mobile applications being used in your studies? What is your process for reviewing that?” And IRBs are starting to set up specific policies for the use of mobile applications in studies. And so with sponsors that are designing the protocol, it's important to just make sure that these things are laid out clearly for them to prevent the questions from coming at the time that they're reviewing those applications or those IRB submissions. And so I think we'll pivot now to the research fraud discussion unless there's other specific questions here.
Mark Barnes: I would just add a couple of things to this discussion. One is that in terms of what David was just saying about the IRBs adopting social media-specific policies, if you are a sponsor and you intend to use a central IRB and you intend to have heavy use of social media for either recruitment or for data collection during a study, you actually may want to ask for and look at the various policies of the central IRBs that you may decide to use, because those that are more sophisticated will probably give you a better review and better protection in case those mechanisms are questioned. So I don't know, you know, what Western or Quorum or others have about this, but that is something worth looking at when you go through the selection process for the IRB. The other thing about the GDPR that I did want to say is that it also applies to recruitment, because your online recruitment efforts may be targeted at patients in Europe, and that may include, according to the E.U., even putting the recruitment materials or the postings in one of the, whatever it is, 27 languages of the European Union. If you do that, the E.U. will actually construe that as… just the putting of it in that language, in Italian, for example, the E.U. will construe that as targeting Italians, and therefore residents of the E.U., which means that, according to the GDPR, GDPR jurisdiction attaches to data that you are drawing out of Europe in response to that posting. So, you know, it's quite fine-grained. And we have… the way that the GDPR works in regard to clinical trials is much more aggressive than we've ever seen these data laws actually work around the world. So it's something to be attuned to. In any case, we are going to move on just very briefly to research fraud, if that's okay. There is this term of art, which is research misconduct.
Research misconduct, to the pedestrian, to the layperson, sounds as though it means, like, you know, abusing humans or giving them the wrong drug, or something like that. But that is actually not what research misconduct is. It has a very specific definition in the federal regulations, which is fabrication of data, falsification of data, or plagiarism. Plagiarism is usually not our problem, either for sponsors in industry or in academia. It's much more the problem of falsification or fabrication of data.
Mark Barnes: Unfortunately, you know, this is… I have to say, this surprised me because I come from, you know, from an academic background. I was, like, sort of in charge of this process at Harvard University for a number of years, of kind of ferreting out research fraud and investigating it. I had people who worked for me who did this. And what I have learned over the years in law practice is that these issues are as prevalent in industry, in internal, intra-company labs, as they are in academic labs. And in fact, in some ways, industry sometimes treats its investigators, the heads of its labs, with even more deference than a place like Harvard does. And so there are often very few controls on the way that data are gathered, are stored, are recorded, are analyzed and ultimately are published. And it's kind of… this is sort of an unfortunate specialty to have because it's sad. You know, usually, there are allegations. In more cases than not, the allegations usually have some truth in them. But, you know, you can't assume that from the beginning. But we have done a number of investigations, you know, like, in the past and now and going forward, in which industry as well as academia will call us in to do an investigation of these allegations, sometimes because the sponsor, the industry player, does not have a well-defined internal policy or internal procedure for how to accept complaints like this and investigate complaints like this and resolve complaints like this. And so we get called in because, you know, we've done it so many times. And as I said, it's a sad process because you end up interviewing lots of people and looking at lots of research records and scientific records, trying to figure out what happened.
Mark Barnes: It's also the case that it's an iterative process, because I've never had a situation where someone confessed on the first interview that either they or their lab leader or someone in their lab or some colleague at another institution had falsified or fabricated data. Usually, you're going in a stepwise fashion. You're trying to figure out: what are the chinks in this story? What are the inconsistencies? What data are missing from the research record, and why are those data not available? Why, for example, are lab PowerPoint summaries available, but the raw data can never be found? What does that mean? It means that someone has extracted data, thrown the data away and just left a lab PowerPoint. So you ask all these questions. And little by little, and it usually takes a while, whether you did these investigations in academia or in industry, you begin to discern the inconsistencies in the stories. And at some point, people begin to turn on each other. Then you will have the lab members turn in the lab director, or the lab director turn in the lab members, as having cut corners, falsified data, fabricated data, or taken data from two different experiments, combined them and presented them as though they were one experiment. All of these things are, unfortunately, quite common. No one knows how common, because there's no way you could actually do a study and figure it out. All we know is that both inside industry and inside academia, there are allegations of research fraud with increasing frequency. And they have to be investigated. There are specific federal regulations, but the federal regulations apply to federally funded research. Now, some companies are doing cooperative research with NIH, and they actually are accepting a little bit of NIH funding.
And so therefore, they accept the jurisdiction of this office in Washington called the Office of Research Integrity, which oversees the research integrity process.
Mark Barnes: The academic institutions are fully familiar with ORI because they all receive huge amounts of money from the NIH, the National Science Foundation, the Department of Defense, the Department of Energy, the Department of Education, etc. Many companies, even though they may not receive any federal funding, when they have been hit by one or more of these allegations and investigations, will often develop an internal policy that largely follows the federal policy, because that's what everybody tends to be familiar with. And in fact, as you know, lab directors within a company often come from academia, so to the extent that they are familiar with any policy, it's going to be the federal policy. The federal policy is complex because it's incredibly protective of the accused, more protective than any criminal procedure that you've ever seen. In order to declare somebody guilty of having intentionally or recklessly falsified or fabricated data, which is the standard under the federal rules, it requires essentially three proceedings. First, an assessment of the allegation to see whether it's credible. Second, what's called an inquiry, meaning an initial factual review to see whether there is some evidence of falsification or fabrication. And lastly, a full investigation with transcripts, with a court transcriptionist and everything else, and a full record presented.
Mark Barnes: These proceedings usually involve a peer review committee of people who were not involved in the research, not from that lab or even from that department, but who know enough science to understand what's going on. Usually there's a compliance person or a lawyer, either in-house or external, who is driving the process, because most of these peer review scientists, luckily for them, have never been through a process like this before and have no knowledge of how to run one. And at each stage, the accused, or the set of accused, because often it's more than one person within a lab who's alleged to have engaged in this, have the ability to respond, to call their own witnesses, to file objections to findings and things like that. So it's not uncommon in academia for this process to take two to three years from the initial allegation to the resolution. And those of you here from academic institutions will know that two to three years is sometimes conservative in terms of how long it takes, largely because of the procedural protections. In companies, it doesn't take that long. In my experience, it usually takes somewhere between three and six months to spin the whole thing out and come to a conclusion. And you're not required to adhere to the federal law unless it's federally funded research or federal collaborative research. Why should you care about this? Aside from the legal issues, which I'll talk about in a moment, there's a huge reputational issue. We have people who sit in Germany and in the U.S. and in the U.K. especially who do nothing but troll scientific articles on the web, looking at the western blots and the data tables to try to understand whether there are any apparent inconsistencies in the way the data are presented. And they have their own little software programs.
And they look, for example, at data tables to understand whether the distribution of the numerals matches what you'd expect from a random distribution. So, I mean, this is how sophisticated this stuff is. And then they will publish on various websites, including one very famous one, I think it's monitored in Germany, called PubPeer.com, whenever they think that there is something suspicious. And they will name the investigators, they'll name the institution, they'll name the company. And in many cases, the compliance office or the general counsel's office or medical affairs at the company first hears of an allegation because it's been flagged on PubPeer.com.
Mark Barnes: In addition to that, people in the business who are scientists, like, for example, a disaffected post-doc who's furious that he or she didn't get appropriate credit for research, and who therefore has an incentive to complain, know that PubPeer.com and other similar posting websites exist. They will make back-channel, anonymous complaints, or even non-anonymous complaints, to PubPeer.com, which then triggers a posting saying, “We hear from a lab member of Dr. X at Y Company that the western blots have been substituted and not accurately categorized in the following publication.” And then they'll put the publication down. So you at a company are receiving this information from PubPeer.com, probably because your communications staff has been alerted, in their trolling of the internet, to see when your company name comes up. They alert you, and then you're off to the races. Once the investigation has come to a conclusion, then usually, if there has been fabrication or falsification of data, or just grossly inaccurate data, you will have to tell the journals. And that's its own process. Do you do it directly? Do you do it through the authors? What if the authors disagree among themselves as to whether something should be corrected or retracted? But ultimately, there is a retraction or correction published in the journal, which leads to a second round of publicity, because there are other websites, especially one called Retraction Watch, that troll the journals and the scientific society presentation pages to find whenever there's been a retraction or a correction, and they will present this.
Mark Barnes: They will highlight it and write essentially a news story, which sometimes is picked up by the more popular press, about what labs have had the most retractions and corrections. Most often, a retraction or correction does not say, “Oh, this is being retracted because there was fraud.” It will be much more cryptic than that. It will say, “This is being retracted because the data are thought no longer to be reliable.” But Retraction Watch will read between the lines, and they will say, “Oh, isn't it interesting that Dr. X's lab had four different corrections or retractions, none of which was explained.” And then Retraction Watch will call in their reporters. They will try to identify members of the lab and some of the junior authors, call them, try to interview them either by name or on deep background, and report what they find. So there's a real reputational issue. There's also a set of legal issues. If there has been fabrication or falsification of data and it's been published, and those data went into a patent application, you have potential fraud on the U.S. Patent Office. If the data had been submitted to support an IND or an IDE, you've got a potential false representation made to a federal agency, which is a crime under the federal criminal code. And then you've also got the data about clinical efficacy or adverse events that may factor into the Medicare compendia, which are used, as you know, to determine what drugs and devices are paid for off-label by Medicare or by some state Medicaid programs. And so that has False Claims Act implications. So this stuff, if it's fraud about an abandoned compound or an abandoned device or an abandoned riff on an established device, it's bad because it's a reputational issue.
But if these data have supported a compound or a device that's been approved and is being marketed, there can be huge problems. In addition, if it relates to an important asset, whether it's being marketed or not, there are SEC disclosure obligations on the part of a company. So we end up, as I said, it's a sad kind of practice to have, but we end up counseling a number of companies when this comes up about what has to be disclosed and when, and what level of certainty you have to have that it's actually a material effect on the business of the company. So anyway, all of these things happen.
Mark Barnes: It has proliferated. Here's an example of something these bloggers find. They actually look at the western blots very carefully. They try to see whether there's been splicing in a western blot, or whether the western blots in one table look as though they've been done on the same machine. And if they see something that's odd, they will flag it. They won't flag it as fraud; they will flag it as something that needs to be investigated. And that's when it will get picked up. Another issue: here's an example of a retraction that is really messy, which does happen sometimes after an investigation. This is in Science, so this is big news, right? You've got a Nobel Prize winner, Bruce Beutler. Someone called to his attention a paper that came out of his lab, and so he writes to the Science editor-in-chief saying, “I no longer have confidence in the results of this paper, of which I was the senior and corresponding author.” So what is Science going to do? Well, it contacts the other coauthors. In this case, some coauthors said, “We agree with the senior author that this is a problem.” Other coauthors said, “We stand by these data.” So what is Science going to do at that point, when the coauthors differ on what they should do? This happens in academic research, and it happens in industry-sponsored research as well. The important point here is to manage an investigation so that, by the end of it, all the coauthors either agree or have been compelled to agree by the weight of the evidence. And I have to tell you that one of the tricks in doing these investigations is to manage them in such a way that you end up with unanimity among the coauthors, because they are either convinced or cowed by the evidence that you've been able to marshal.
What happened here, I will bet, is that the evidence was equivocal about whether there had been fraud or not. So some people who engaged in fraud perhaps continued to defend the data, while others, who had not engaged in the fraud and thought they were victims of it too, wanted the correction or retraction. But this is an example of how messy this can get.
Mark Barnes: Records retention is a huge issue, because usually when there is fraud, you will find a lack of raw data. The raw images, the raw western blots, the raw lab values, et cetera: you won't find them. This can happen, by the way, not just in human research, but in animal research and bench science as well. And I think the takeaway message is that it's important to take prophylactic steps: to have policies about data integrity, about retention of raw data from any lab, and about retention of all records that go into publications or presentations, into patent applications, or into submissions to the FDA. Have those as policies for which you can hold people in the labs accountable when they have not retained the records. Then you can have a presumption that, if the records have not been retained and there's a question about the records, the burden shifts to the person who did the research to prove that the data are accurate. The other thing is to have a policy, before it happens to you, about how to report, investigate and resolve allegations of fabrication and falsification of data. In many cases, especially startup companies or small companies, they're unfamiliar with this. The big companies, unfortunately, are familiar with it. And over the years, in my experience, as things have happened (I'm not telling you anything out of school, because you can find all this on Retraction Watch), they have had to develop internal policies about record retention, about curation of research records, and about review of publications before they are allowed to go out from the company and be submitted to a journal like Science or any other peer-reviewed journal. And they've also developed policies about how to do these investigations and handle these complaints.
But many of the smaller companies or the startup companies have never been through this before. So when it happens to them, they end up scrambling, because there is no playbook. It's much better to have a playbook before it happens rather than after. And I'll stop there because we're almost out of time. Yeah?
Audience Member: You mentioned the scenario where an inquiry arises after the drug has been approved. Just from your familiarity with the trolling entities, what is typically the timeline for review of research results and articles and finding these problems that they're going to dig into? Does that typically happen immediately after a publication, or is it many years later, when the drug is on the market?
Mark Barnes: It can be many years. Sometimes it can be very quick, if the flaw in the research is obvious. What you find a lot of are duplications of western blot images. And then, of course, when you find it, the investigator will say, “Oh, that was a mistake. It was a mislabeling.” But when you have 15 mislabelings in the same article, then you wonder. Stuff like that can be identified very quickly after publication. Usually, the publication of the pre-clinical data occurs before the compound or device has been approved for marketing. But you can have a situation in which the drug was approved, say, last year, the pre-clinical publications came out two years ago, and now they're being questioned. So the drug is already being marketed, but you're in the middle of an investigation about the pre-clinical data that were used for the IND and in the investigator's brochure for the phase one study. And it goes on and on. We've also had situations in which the final clinical studies are published a couple of years after the drug has already been approved and marketed, and then the problems are detected, like, five years later. And so you're going back ten years to look at research records. If you don't have a record retention policy, the investigators will say, “Who told me I had to keep these records?” So it's very important to have expectations about retention of records, whether you're at an academic institution or at a company, and about how and where they are maintained. They should be maintained only on company or academic servers and not on people's personal laptops.
God knows I've had terrible problems with an investigator saying, “Oh, it was on my personal laptop and that was lost,” and then, “Where was it written down that I couldn't keep this data on my personal laptop?” So you've got to establish baseline expectations. Otherwise, you'll be stymied in trying to resolve one of these things. Well, have a good day. Nice to see all of you. Thanks for coming.