Conference Schedule

8:15 – 9:00
Registration and Continental Breakfast

9:00 – 9:10
Introductory Remarks
Boris Iglewicz, Temple University

9:10 – 9:45
Inaction on Missings: Why Do So Many People Ignore Missing Data in Randomized Clinical Trials?
Janet Wittes, Statistics Collaborative, Inc.

9:45 – 10:20
Should the Analysis of Multi-Center Trials be Guided by the Trial Design: The role of randomization?
Marvin Zelen, Harvard University

10:20 – 10:40
Break

10:40 – 10:50
Discussant
Brenda Gillespie, University of Michigan

10:50 – 11:50
Discussion

11:50 – 1:00
Lunch

1:00 – 1:35
A Framework for Generalization in Meta-Analysis
Betsy Becker, Florida State University

1:35 – 2:10
Meta-Analysis: Statistical Methods For Combining The Results of Independent Studies 
Ingram Olkin, Stanford University

2:10 – 2:20
Discussant
Lawrence Gould, Merck Research Laboratories

2:20 – 3:20
Discussion

3:20 – 3:25
Closing Remarks


Abstracts

A Framework for Generalization in Meta-Analysis

Betsy Becker, Florida State University (along with Ariel M. Aloe)

This presentation introduces a systematic approach to examining sources of heterogeneity in meta-analysis, based on a framework originated by Cronbach (1982) and applied to meta-analysis by Becker (1996). Our framework adds one more component to Cronbach’s (1982) UTOS framework: we add methods (M) to Cronbach’s units (U), treatments (T), observing operations (O), and settings (S). We argue that considering each component of the framework as a source of potential variation provides a way to organize analyses of between-study differences. Our approach also allows the reviewer to clarify across which components the results of a meta-analysis can be generalized, versus where more contingent generalizations are needed. We examine the use of several measures of heterogeneity as indices to assist in making judgments about the generalizability of results. We apply our approach to a synthesis of studies of the relationship of teacher content knowledge in science to student science-achievement outcomes (Becker & Aloe, 2008).
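For readers who want a concrete sense of the heterogeneity indices mentioned above, the short sketch below computes two standard ones, Cochran's Q and I-squared, from a set of study effect sizes and their variances. It is a minimal illustration prepared for this program, not code from the authors, and the effect sizes shown are hypothetical.

import numpy as np

def heterogeneity_indices(effects, variances):
    """Cochran's Q and the I^2 statistic for a set of study effect sizes."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)     # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate
    q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q
    df = len(effects) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0  # I^2 as a percentage
    return q, i2

# Hypothetical effect sizes and sampling variances, for illustration only
q, i2 = heterogeneity_indices([0.30, 0.45, 0.10, 0.52], [0.02, 0.03, 0.01, 0.04])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")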


Meta-Analysis: Statistical Methods For Combining The Results of Independent Studies

Ingram Olkin, Stanford University

Meta-analysis enables researchers to synthesize the results of a number of independent studies designed to determine the effect of an experimental protocol, such as an intervention, so that the combined weight of evidence can be considered and applied. Increasingly, meta-analysis is being used in the health sciences, education, and economics to augment traditional methods of narrative research by systematically aggregating and quantifying the research literature. Two meta-analytic examples are the effectiveness of mammography in the detection of breast cancer and an evaluation of gender differences in mathematics education. The information explosion in almost every field, coupled with the movement toward evidence-based decision making and cost-effectiveness analysis, has served as a catalyst for the development of procedures to synthesize the results of independent studies.

In this talk we provide a historical perspective on meta-analysis and discuss some issues, such as bias. We also give a brief review of the statistical methods used in combining results.
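As a concrete point of reference for the combining methods the talk reviews, the sketch below shows the most familiar one, fixed-effect inverse-variance weighting of independent study estimates. It is a minimal illustration prepared for this program, not material from the talk, and the study estimates are hypothetical.

import math

def fixed_effect_combine(effects, variances):
    """Pool independent study estimates by inverse-variance weighting.

    Returns the pooled estimate, its standard error, and a 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical log odds ratios and variances from four independent studies
pooled, se, ci = fixed_effect_combine([-0.22, -0.35, -0.10, -0.28], [0.04, 0.09, 0.02, 0.06])
print(f"pooled = {pooled:.3f}, SE = {se:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")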


Inaction on Missings: Why Do So Many People Ignore Missing Data in Randomized Clinical Trials?

Janet Wittes, Statistics Collaborative, Inc.

This talk starts from the premise that most of the audience understands the deleterious effect of missing data on inference in experiments in general and in randomized clinical trials in particular. I also assume that the participants in the conference are familiar with a variety of techniques for dealing with missing data, some simple and some quite sophisticated. The talk will start with a brief review of these methods and will present some calculations showing the uncertainty in inference that missing data induce. Rather than focusing on approaches for handling missing data, however, most of the talk will address a different set of questions stemming from my observation that many investigators do not feel angst when they see even substantial amounts of missing data in their trial. The talk will summarize the types of missing data we often encounter in trials. Examples include missing outcome data in trials of long-term outcomes; missing partial outcome data in the same type of trials; missing data on symptoms and measurements in trials that study outcomes like pain or blood pressure; missing items when the outcome is a score from a questionnaire with several parts; and structured missing data that arise in vaccine trials where outcomes are not counted until several months after the last immunization. I will hazard some guesses about the reasons for the apparent lack of concern among many experienced investigators and sponsors. The talk will then discuss suggestions for communicating to sponsors, investigators, study participants, and IRBs the importance of collecting full data even when a participant stops active study medication.


Should the Analysis of Multi-Center Trials be Guided by the Trial Design: The role of randomization?

Marvin Zelen, Harvard University

Consider a multi-center randomized clinical trial. Should the analysis be guided by the design of the trial? Most investigators would answer in the affirmative. Yet in practice, the design and many important features of most trials are ignored in the analysis. Most analyses of clinical trials assume that the trial has a random sample of patients from some well-defined population. This is the basic assumption of most statistical methods employed to analyze trials, in which the inference is targeted at drawing conclusions about a well-defined population. In truth, there is no random sample of patients, nor is there a well-defined population of patients. The patients in a trial can best be described as a “collection,” which is defined as the complement of a random sample. The conclusion is that most randomized clinical trials are analyzed incorrectly, as the basic assumption of a random sample is not true. However, the basis of the inference can rely on the randomization process. Analytical techniques can be derived that depend only on the randomization process. The resulting inference will, however, be a “local” inference in that it applies only to the patients who have entered the study.

Another basic tenet in any analysis is to take into account factors that affect the outcome. Many multi-center trials have large institutional variation. This is especially true in drug trials, where the institution’s patient management and support may influence the observed toxicity. However, efficiently accounting for institutional variation may be difficult, as many multi-center trials have large numbers of centers, each of which typically enters a small number of patients. In this lecture, methods will be described for making inferences that rely only on the randomization process but also account for institutional variation. These methods have been adapted to account for the permuted blocks typically used to design the randomization allocation in many trials. They generally result in greater power when compared with statistical methods that ignore both institutional variation and permuted blocks. The methods have also been adapted to group sequential trials.
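As a generic illustration of the randomization-based inference the abstract describes, the sketch below computes a re-randomization p-value for a treatment effect by reshuffling treatment labels only within permuted blocks nested within centers. It is a hypothetical, minimal example written for this program, not the methodology developed in the lecture; the function name and data layout are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def rerandomization_pvalue(outcomes, treatments, blocks, n_rep=5000):
    """Randomization-test p-value for a difference in means, respecting permuted blocks.

    outcomes   : outcome value for each patient
    treatments : 0/1 treatment assignment for each patient
    blocks     : label of each patient's center-by-block stratum; labels are
                 reshuffled only within their own block, mimicking a
                 permuted-block, multi-center randomization
    """
    outcomes = np.asarray(outcomes, dtype=float)
    treatments = np.asarray(treatments)
    blocks = np.asarray(blocks)

    def effect(t):
        return outcomes[t == 1].mean() - outcomes[t == 0].mean()

    observed = effect(treatments)
    count = 0
    for _ in range(n_rep):
        t = treatments.copy()
        for b in np.unique(blocks):
            idx = np.where(blocks == b)[0]
            t[idx] = rng.permutation(t[idx])   # reshuffle only within this block
        if abs(effect(t)) >= abs(observed):
            count += 1
    return (count + 1) / (n_rep + 1)           # add-one correction for a valid p-value

With real trial data one would pass the observed outcomes, the actual 0/1 assignments, and a label identifying each center-by-block stratum; because labels are permuted only within blocks, every re-randomization is one the trial's own allocation scheme could have produced.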


Speakers

Betsy Jane Becker is a professor and coordinator of the program in Measurement and Statistics in the College of Education at Florida State University, where she has been on the faculty since Fall 2004.

Becker earned both B.A. and M.A. degrees in Psychology from The Johns Hopkins University in 1978, and her Ph.D. in Education from The University of Chicago in 1985, where she worked with Larry Hedges and completed a dissertation on combined probability methods for meta-analysis. The dissertation won the American Educational Research Association’s Outstanding Dissertation Award in 1985.

Becker’s current research involves methods for synthesizing correlation matrices and regression slopes, and she is engaged in two synthesis projects on teacher knowledge and teacher qualifications. Becker and her collaborators currently hold two grants from the National Science Foundation to support this work.

She is a founding member and past president of the Society for Research Synthesis Methodology, and serves as co-convener of the Methods Training Group for the Campbell Collaboration, an organization whose goals include the promotion of evidence-based analysis for policy making in the social sciences. Becker is a Fellow of the American Statistical Association and has previously served as an associate editor of Psychological Methods and on the editorial boards of the Journal of Educational and Behavioral Statistics and the Journal of the American Statistical Association (Applications and Case Studies section).


Brenda Gillespie is Associate Director of the Center for Statistical Consultation and Research at The University of Michigan and Assistant Professor in the Department of Biostatistics. She received her Ph.D. from Temple University in 1989. She has taught a variety of graduate-level courses, including survival analysis, clinical trials, regression, and design of experiments, as well as numerous short courses for both statistical and non-statistical audiences. She has collaborated and published widely on clinical trials (glaucoma), longitudinal cohort studies (chronic kidney disease, dialysis, and transplant), case-control studies (breast implants), and environmental exposure studies (dioxins). She is a contributing author of the recent book Linear Mixed Models: A Practical Guide Using Statistical Software, by West, Welch, and Galecki.


A. Lawrence Gould is Senior Director, Scientific Staff, Biostatistics and Research Data Systems, Merck Research Laboratories. He received his Ph.D. in Biometry from Case Western Reserve University. Dr. Gould is a Fellow of the American Statistical Association, has served on a number of grant review panels, and has held a variety of positions with the Biopharmaceutical Section of the ASA and the Biometric Society (ENAR). He served as Editor of the Journal of Biopharmaceutical Statistics, received the 1991 Donald Francke Award for the most outstanding article published in the DIA Journal that year, and received the 1994 best presentation award from the Biopharmaceutical Section of the ASA. His research interests tend to be driven by problems arising in drug development, and include blinded sample size re-estimation, Bayesian methods, meta-analysis, bioequivalence, analysis of safety data, data mining, clinical trial simulation, and management science.


Ingram Olkin received a Ph.D. in mathematical statistics from the University of North Carolina and is professor of statistics and education at Stanford University. Before moving to Stanford he was on the faculties of Michigan State University and the University of Minnesota, where he served as chair of the Department of Statistics. His academic background consists of a bachelor’s degree in mathematics from The City College of New York, a master’s degree from Columbia University, and a doctorate from the University of North Carolina. He has coauthored and coedited over fifteen books and has published over 200 papers. He served as editor of the Annals of Mathematical Statistics and the Annals of Statistics, as an associate editor of Psychometrika, the Journal of Educational Statistics, and the Journal of the American Statistical Association, and on the editorial boards of several mathematical journals. He served as chair of the National Research Council’s Committee on Applied and Theoretical Statistics and as president of the Institute of Mathematical Statistics, and has been a member of many governmental panels. Among his honors are a Lifetime Contribution Award from the American Psychological Association, the Wilks Medal and the Founders Award from the American Statistical Association, a Guggenheim Fellowship, a Fulbright Fellowship, a Lady Davis Fellowship, an honorary D.Sci. from De Montfort University, and Fellowship in the American Statistical Association and the Institute of Mathematical Statistics. He was elected to the National Academy of Education. His current research relates to combining the results of independent studies (meta-analysis) and to models for survival analysis and reliability.


Janet Wittes, Ph.D., is the President of Statistics Collaborative, Inc., which she founded in 1990. From 1974 to 1983 she was on the faculty of the Department of Mathematical Sciences at Hunter College. In 1983 she joined the Biostatistics Research Branch of the National Heart, Lung, and Blood Institute as its Chief. Her research has focused on the design and analysis of randomized clinical trials. She is a Fellow of the American Statistical Association, the American Association for the Advancement of Science, and the Society for Clinical Trials. She is a past President of the International Biometric Society – Eastern North American Region (1995) and the Society for Clinical Trials (2001). From 1990 through 1995 she served as Editor-in-Chief of Controlled Clinical Trials. She is a member of many Data and Safety Monitoring Boards (DSMBs) for randomized clinical trials sponsored by both industry and government. Her recently published (2006) book, “Monitoring Clinical Trials: A Unified Approach,” coauthored with Drs. Michael Proschan and Gordon Lan, deals with methods for interim analysis of data. She received her A.B. in Mathematics from Radcliffe College (1964) and her Ph.D. from the Department of Statistics of Harvard University (1970).


Marvin Zelen received a B.S. (City College of New York), an M.S. (University of North Carolina, Chapel Hill), and a Ph.D. (American University). Of some note, he was a night-school student at American University. He presently serves as the Lemuel Shattuck Research Professor of Statistical Science, Harvard School of Public Health, and has been a professor at Harvard University since 1977. He is also Chairman of the Board and President of the Frontier Science and Technology Research Foundation, a not-for-profit foundation he formed in 1975. Previous positions were at the National Bureau of Standards, the National Cancer Institute, and the State University of New York at Buffalo. Some of his appointments include: Head, Division of Biostatistics, Dana-Farber Cancer Institute, 1977-1996; Chairman, Department of Biostatistics, Harvard University, 1980-1990; Chairman, Committee of Presidents of Statistical Societies, 1998-2000; Head, Mathematical Statistics and Applied Mathematics Section, National Cancer Institute, NIH, 1963-1967; and Senior Fulbright Scholar, United Kingdom, 1965-1966. He has been a visiting faculty member at the University of California (Berkeley), the University of Wisconsin, and Hebrew University. For his research and service to the profession, Professor Zelen has received a number of awards and recognitions, including Fellow of the American Statistical Association, the Institute of Mathematical Statistics, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences, and elected Member of the International Statistical Institute. He received the 2006 Samuel S. Wilks Award (Committee of Presidents of Statistical Societies) and was chosen to deliver the R. A. Fisher Memorial Lecture (Committee of Presidents of Statistical Societies) in 2007. He was twice invited to give the President’s Special Invited Address for the Biometric Society (ENAR). In 2003 he was awarded an honorary doctorate by the University of Victor Segalen in Bordeaux.


Registration

Charges:
General – $90
Merck – $40
Bristol-Myers Squibb – $60
Wyeth – $60
Full time graduate students – $25

Registration includes continental breakfast, lunch, and break refreshments. Parking is free.

Registration: 8:15 AM – 9:00 AM
Meeting: 9:00 AM – 3:30 PM

Seating is limited. Please make checks payable to Temple University (Biostatistics) and send to:

Boris Iglewicz, Director,
Biostatistics Research Center,
Department of Statistics, Temple University,
1810 N. 13th Street,
Philadelphia, PA 19122-6083

Please include your name, the name of your company, and your email address. We must receive checks by Wednesday, October 15, 2008. We cannot accept cash or credit card payments.

For additional information, contact Boris Iglewicz, Director, email: borisi@temple.edu or telephone (215) 204-8637.


Directions

DOUBLE TREE GUEST SUITES, PLYMOUTH MEETING
640 W. Germantown Pike, Plymouth Meeting PA 19462
(610) 834-8300

From the Airport: Take I-95 South to I-476 North to the last exit, #20 (Germantown Pike West). Merge onto Germantown Pike and follow it for three lights. Make a right onto Hickory Rd. at the third light. The hotel is the third building on the left.

From the New York/New Jersey Turnpike: Take the New Jersey Turnpike to exit #6 (PA Turnpike). Go west to exit #333 (Norristown). Follow signs to Plymouth Rd. Go to the first light and make a left. Go to the next light and make a right onto Germantown Pike. Go to the second light and make a right onto Hickory Rd. The hotel is the second driveway on the left.

From Washington, D.C., Wilmington, and Delaware: Take I-95 North to Route 476 North. Take Route 476 to the Germantown Pike West exit #20. Go to the third light, Hickory Rd., and make a right. The hotel is the second driveway on the left.

From Route 476: Take 476 to the Germantown Pike West exit #20. Go to the third light, Hickory Rd., and make a right. The hotel is the second driveway on the left.

From downtown Philadelphia: Take I-76 West to the Plymouth Meeting exit #331B (Route 476). Take Route 476 North to the Germantown Pike exit. Go to the third light, Hickory Rd., and make a right. The hotel is the second driveway on the left.