A novel study insight analytics (SIA) methodology improves PANSS data quality and signal detection in a global clinical trial in schizophrenia

Drug development in neuroscience is an extraordinarily challenging process. Many late-stage CNS trials fail, even when testing drugs previously proven effective, because of high placebo response and poor signal-to-noise ratios. Such failures lead to the cancellation of promising development programs and represent a tremendous loss of commercial and scientific investment. Robust, practical, proactive methods are badly needed to improve the likelihood of trial success.

Various quality assurance methods have been proposed and implemented over the last few decades, including WCG’s Study Insight Analytics (SIA) service. SIA uses predefined algorithms to monitor all aspects of data health throughout the conduct of a clinical trial and to detect atypical data patterns within and across endpoints. While the algorithms themselves have been studied and documented in the scientific literature, their actual impact on study outcomes has not been well investigated. In our recent publication*, we presented an example of the application of SIA in a Phase 3 global schizophrenia trial in which prospective, real-time evaluation of data quality was conducted; we described the detection, analysis, and intervention techniques implemented during the blinded phase of the study; and we reviewed the results of our post-hoc analysis of unblinded data as they apply to the primary study outcome. Additionally, we showed how these types of data signals relate to placebo response and signal detection at the group level, and how they can significantly contribute to clinical trial success.


The dataset used for the analysis was taken from a Phase 3 global clinical study that assessed the efficacy and safety of HP-3070, an asenapine transdermal system (patch), in adults with acutely exacerbated schizophrenia. Fifty-nine study sites from the United States, Russia, Ukraine, Bulgaria, and Serbia participated in the study. During the course of the trial, a blinded data analytics program was implemented using electronic data capture and evaluated daily to identify anomalies in the data. Algorithms incorporating data quality markers proposed by the International Society for CNS Clinical Trials and Methodology (ISCTM), together with periodic review of other in-study performance indicators such as the number of raters per patient and the correlation between the primary and secondary endpoints (PANSS and CGI-S), were implemented to monitor for anomalous data patterns.
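To make the algorithmic flagging concrete, here is a minimal sketch of how quality rules can be counted against each captured assessment. The two rules below are simplified stand-ins for ISCTM-style markers (a visit identical to the previous one; zero variance across items); the actual SIA algorithms, item lists, and thresholds are not described in the text and are assumed here for illustration only.

```python
# Simplified stand-ins for ISCTM-style data quality markers.
# Each assessment is a dict of PANSS item scores (1-7); a real
# assessment would carry all 30 items.

def identical_to_previous(curr, prev):
    """Flag when every item score is unchanged from the prior visit."""
    return prev is not None and curr == prev

def zero_item_variance(curr):
    """Flag when every item carries the same score (no internal spread)."""
    return len(set(curr.values())) == 1

RULES = [
    lambda curr, prev: identical_to_previous(curr, prev),
    lambda curr, prev: zero_item_variance(curr),
]

def flag_counts(visits):
    """Number of rules tripped by each visit in one patient's series."""
    counts, prev = [], None
    for curr in visits:
        counts.append(sum(rule(curr, prev) for rule in RULES))
        prev = curr
    return counts
```

For example, `flag_counts([{"P1": 4, "P2": 4}, {"P1": 4, "P2": 4}])` returns `[1, 2]`: the first visit trips the zero-variance rule, and the second additionally repeats the prior visit verbatim. In practice, such counts would feed the daily review by clinical scientists rather than trigger automatic action.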


Any assessments flagged by these algorithms or other quality indicators for data abnormalities resulted in queries to site raters, along with consensus discussions and retraining. The level of intervention was determined by clinical scientists who interpreted the severity of the anomalous data. Any rater who failed to improve in response to retraining was asked to discontinue rating. If overall site performance was deemed unsatisfactory and did not improve after early intervention, the site was not permitted to screen additional subjects, curtailing its overall contribution to the trial. Implementing an actionable intervention strategy is as important as selecting appropriate algorithms for maintaining the integrity of the data.


Our post-hoc analysis of unblinded data revealed that greater placebo response was observed for a) sites with moderate to low PANSS–CGI-S correlation (r < 0.80), b) suboptimal sites with at least one assessment triggering two or more algorithms, and c) regions/countries with higher flag rates. In addition, rater inconsistency was associated with erratic visit-to-visit score changes in the PANSS. These results suggest that quality markers such as higher rates of rater inconsistency in PANSS assessments, higher rates of flags within PANSS assessments indicating low internal concordance, and divergence between PANSS total and CGI-S ratings are indicative of greater placebo response. Addressing these concerns with real-time oversight and rapid intervention to prevent further measurement errors should improve signal detection.


Findings from this study emphasize the importance of holistically evaluating and remediating each clinical study site as an interventional unit. While individual raters have a strong impact on the quality of the data produced, other factors, such as regional differences, resource constraints, and higher rater turnover, also contribute. It is noteworthy that while identification of anomalous data can be automated by programming algorithms, it is imperative that clinical experts evaluate the nature and severity of potential data issues and select the appropriate level of intervention. Each study site plays a crucial role in communicating with study subjects during trials and may encounter numerous challenges. Any intervention should be carried out in a collegial manner, with the aim of supporting the site’s efforts and mitigating its challenges.

The review of site performance is ultimately a team effort, in which each member of the study team (e.g., sponsor and CRO) brings available information about the site in question to the table and determines the course of action. Active monitoring methodologies that encompass scientifically sound algorithms, clinical review, an actionable intervention strategy, and team effort will improve and maintain the accuracy and quality of the data, which, in turn, decreases placebo response.

Learn more about WCG and our solutions here or contact us here.

*Use of a novel study insight analytics (SIA) methodology to improve PANSS data quality and signal detection in a global clinical trial in schizophrenia. Schizophrenia Research, 267 (2024), 239–246.


Kazunori Tatsumi

Senior Clinical Scientist, Clinical Endpoint Solutions, WCG