Department Of Biostatistics

UNMC Cancer Center

The Biostatistics Department houses the Biostatistics Shared Resource (BSR) of the UNMC Fred & Pamela Buffet Cancer Center. The BSR, under the direction of Jane Meza, PhD, provides Cancer Center members with expertise in the design and analysis of basic science, clinical, and population-based research.

Statistical Methods

A number of statistical methods are of key interest to members of the Biostatistics Department:

Interim Monitoring

Most clinical trials are presently designed with plans for interim monitoring of outcomes, so that if convincing differences in the rates of adverse events (toxicity) or study outcomes (treatment failures) are seen, studies can be recommended for closure or amendment. Most often, this interim monitoring is done not by the study committee but by a group independent of it, a Data and Safety Monitoring Committee (DSMC). The study committee is kept ‘blinded’ to the emerging outcome results, and the DSMC takes responsibility for assuring that patient safety is adequately monitored. Specific monitoring rules are built into the protocol documents that appropriately control the chance that a study will be declared positive when the treatment outcomes are in fact the same (the type-1 error rate). In addition, studies are sometimes designed with futility monitoring rules, which monitor the likelihood that the study will be positive if the alternative hypothesis is assumed to be true. These monitoring rules may allow early termination of a study, and release of the study data back to the study committee, when it is clear that the hypothesized improvement in outcome with a new therapy will not be observed.
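Why interim looks require special boundaries can be seen in a small simulation: repeatedly testing accumulating data at the conventional |z| > 1.96 threshold inflates the type-1 error well above the nominal 5%, while a stricter interim boundary keeps it near the nominal level. The sketch below assumes normally distributed outcomes under the null, five equally spaced looks, and a Haybittle-Peto-style rule (interim cutoff 3.0, final cutoff 1.96) chosen purely for illustration:

```python
import math
import random

random.seed(42)

def simulate_type1(n_trials=2000, n_per_look=50, looks=5,
                   bound=lambda k: 1.96):
    """Fraction of null trials declared positive at any interim look."""
    rejections = 0
    for _ in range(n_trials):
        total, count = 0.0, 0
        for k in range(1, looks + 1):
            for _ in range(n_per_look):
                total += random.gauss(0, 1)  # outcome under H0: mean 0
                count += 1
            z = total / math.sqrt(count)     # z-statistic at look k
            if abs(z) > bound(k):            # crossing the monitoring boundary
                rejections += 1
                break
    return rejections / n_trials

naive = simulate_type1()                                        # 1.96 at every look
peto = simulate_type1(bound=lambda k: 3.0 if k < 5 else 1.96)   # stricter interim bound
print(f"naive repeated testing: {naive:.3f}")
print(f"Haybittle-Peto-style:   {peto:.3f}")
```

With five unadjusted looks, the simulated type-1 error comes out near the well-known figure of roughly 14%, while the stricter interim boundary stays close to 5%.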

Department faculty have an interest in the development and application of these interim monitoring boundaries, and also in the way Data and Safety Monitoring Committees apply them in practice.

Stochastic Modeling

A stochastic process is a random process changing over time. It is well-suited for modeling dynamic and complex biomedical processes. Combined with other statistical theories and methodologies, stochastic modeling can bring unique insight into these processes. An interesting application is the modeling of patient compliance (adherence) data. By modeling the variability of patient compliance behavior (for each patient over time and across different patients), we can systematically study the statistical properties of various compliance indices. Incorporating pharmacokinetic information about the medication can further help us find optimal ways of measuring and adjusting for compliance in clinical studies.
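As a toy illustration of this kind of modeling, daily dosing behavior can be sketched as a two-state Markov chain in which the chance of taking today's dose depends on whether yesterday's dose was taken. The transition probabilities, follow-up window, and cohort size below are assumed values for illustration, not estimates from data:

```python
import random

random.seed(1)

# Hypothetical two-state Markov model of daily dosing: state 1 = dose taken.
P_STAY_ADHERENT = 0.9  # P(take today | took yesterday) -- assumed value
P_RESUME = 0.4         # P(take today | missed yesterday) -- assumed value

def simulate_patient(days=90):
    """One patient's daily dosing history over `days` days."""
    state, history = 1, []
    for _ in range(days):
        p = P_STAY_ADHERENT if state == 1 else P_RESUME
        state = 1 if random.random() < p else 0
        history.append(state)
    return history

# A simple compliance index: fraction of days on which a dose was taken,
# averaged across a simulated cohort of patients.
indices = [sum(simulate_patient()) / 90 for _ in range(500)]
mean_index = sum(indices) / len(indices)
print(f"mean compliance index: {mean_index:.2f}")
```

Under these assumed transition probabilities the chain's long-run fraction of dosing days is 0.4 / (0.1 + 0.4) = 0.8, and the simulated cohort mean lands near that value; the spread of the per-patient indices is what a compliance index study would then characterize.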

High-Dimensional Data

The past ten years have witnessed the emergence of microarray technology and its vast applicability in various biomedical areas. Arguably, analyzing microarray data has become one of the most active research areas in statistics and bioinformatics. As in classical statistical design, the determination of sample size and power plays a critical role in microarray research. The false discovery rate (FDR) is widely used to adjust for multiple comparisons in microarray research. It remains challenging, however, to incorporate the concept of FDR effectively into the determination of sample size and power for a microarray experiment.
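The FDR adjustment itself is simple to state; the open problem above concerns carrying it into design. For reference, a minimal sketch of the standard Benjamini-Hochberg step-up procedure, applied to a made-up list of p-values:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Return indices of hypotheses rejected at FDR level q (BH step-up).

    Sort p-values, find the largest rank k with p_(k) <= k*q/m, and
    reject the k hypotheses with the smallest p-values.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])

# Illustrative p-values, e.g. from per-gene tests of differential expression.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals))
```

Here only the two smallest p-values fall under their BH thresholds, so genes 0 and 1 are declared discoveries at FDR 5%. A design-stage question is then what sample size makes the expected number of such discoveries, at a controlled FDR, large enough to be useful.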

The Affymetrix Oligonucleotide GeneChip is a major microarray platform and is becoming a gold standard for studying gene expression. Analyzing Affymetrix data at the probe level (e.g., studying the relationship between perfect-match and mismatch probes) can help us better understand the mechanism of hybridization and increase the efficiency of data summarization.

Other research areas include the meta-analysis of microarray data, motivated by the fact that different microarray platforms have been used to address the same or similar biological problems, and the derivation of time-dependent weight functions to improve clustering performance in the analysis of temporal gene-expression patterns.

Survey Methodology

Surveys are often designed to produce reliable estimates at the national or state level, but survey users are often interested in using the survey data to produce estimates for subgroups of the target population. The sample size, however, is usually not large enough to produce reliable estimates for these geographic or demographic “small areas” at the sub-national or sub-state level. Small-area estimation addresses this with alternative approaches, including synthetic estimation, composite estimation, and model-based methods, to produce reliable small-area estimates. These methods can be extended to disease mapping and to combining national and state data to estimate the probability of a rare event, such as uninsurance or drug use.
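The simplest of these approaches to sketch is a composite estimator, which shrinks an unstable direct survey estimate toward a more stable synthetic estimate, with weights based on their variances. The variance-based weight below is one common simple form (it ignores any covariance between the two estimators), and all numbers are invented for illustration:

```python
def composite_estimate(direct, synthetic, var_direct, var_synthetic):
    """Composite estimator: variance-weighted average of a direct survey
    estimate and a synthetic (model- or group-based) estimate."""
    w = var_synthetic / (var_direct + var_synthetic)  # weight on the direct estimate
    return w * direct + (1 - w) * synthetic

# Hypothetical small-area uninsurance rate: a noisy direct estimate of 0.30
# from a small sample, and a stable synthetic estimate of 0.22 borrowed from
# a larger geographic area.
est = composite_estimate(0.30, 0.22, var_direct=0.004, var_synthetic=0.001)
print(f"composite estimate: {est:.3f}")
```

Because the direct estimate is four times as variable as the synthetic one, it receives only 20% of the weight and the composite (0.236) sits much closer to the synthetic value; as the area's sample size grows, the weight shifts back toward the direct estimate.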

Design and Analysis of Correlated Data Studies

Correlated data arise in longitudinal follow-up studies, where multiple measurements are made on the same subject over time, and in cluster-sampling studies, where multiple measurements are made on the same sampling unit, such as children within a common classroom, patients treated by a common physician, or teeth within a mouth. The correlation among observations made on the same subject or sampling unit must be accounted for in both the design and the analysis of such studies, to ensure adequate power and unbiased estimation of the variability of parameter estimates. Common analysis methods include Generalized Estimating Equations (GEE) methodology to fit population-average regression models, where estimation focuses on average effects such as overall treatment effects, and random- or mixed-effects models to estimate subject-specific parameters, where interest focuses on within-subject changes such as growth curves over time.
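On the design side, the cost of ignoring within-cluster correlation is easy to quantify through the design effect, which inflates the variance (and hence the required sample size) relative to independent sampling. A small sketch, with the cluster size and intraclass correlation (ICC) chosen purely for illustration:

```python
def design_effect(cluster_size, icc):
    """Variance inflation for equal-sized clusters: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(total_n, cluster_size, icc):
    """Number of independent observations the clustered sample is worth."""
    return total_n / design_effect(cluster_size, icc)

# Hypothetical study: 20 classrooms of 25 children each, with ICC = 0.05.
deff = design_effect(25, 0.05)
n_eff = effective_sample_size(500, 25, 0.05)
print(f"design effect: {deff:.2f}, effective n: {n_eff:.0f}")
```

Under these assumed values the design effect is 2.2, so a 500-child sample drawn as 20 classrooms of 25 carries roughly the information of 227 independent children, which is what the power calculation must reflect.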