This FAES course covers advanced SAS coding concepts, including the SAS Macro facility and SAS SQL, as well as their combined use. The course also introduces students to SAS/STAT coding for common statistical tests (such as the t-test, ANOVA, and linear regression). Students have the opportunity to practice in class using sample datasets. Homework and project assignments are provided as well.
This FAES course gives a broad and conceptual overview of the most popular machine learning algorithms, followed by examples of how and when to apply them to real data. Best practices in designing machine learning analyses will be emphasized and reviewed, along with how to avoid common pitfalls and how to interpret analysis results.
In this Methods: Mind the Gap webinar, Dr. David MacKinnon describes mediation analysis methods with attention to solutions for some of the limitations of these methods. He also discusses future directions in mediation theory and statistical analysis.
Applying Models and Frameworks to Dissemination and Implementation (D&I) Research: An Overview & Analysis
In the first part of this joint presentation, Dr. Rachel Tabak presents a review that uses snowball sampling to develop an inventory of models, synthesizes this information, and provides guidance on how to select a model. Dr. Ted Albert Skolarus then discusses an examination of the citation frequency and impact of D&I models using citation analysis.
Approaches to Evidence Synthesis in Systematic Reviews of Public Health Interventions: Methods and Experiences of the Community Preventive Services Task Force
In this Methods: Mind the Gap webinar, Dr. David Hopkins discusses the conceptual decisions to emphasize a broad consideration of available evidence for reviews of public health interventions; the methods required to ensure a balanced assessment of mixed bodies of evidence; and factors weighed by the CPSTF in translating evidence into conclusions on effectiveness and recommendations regarding use.
In this Methods: Mind the Gap webinar, Dr. Jason Moore reviews the new discipline of automated machine learning (AutoML). The goal of AutoML is to simplify the process of combining different types of algorithms and methods in an analytical pipeline and to make machine learning more accessible.
Balancing Fidelity & Adaptation: If We Want More Evidence-Based Practice, We Need More Practice-Based Evidence
In this webinar, Drs. Larry Green and Rachel Gold deliver a joint presentation on fidelity and adaptation, which relate to the manner in which evidence from a research study is brought to practice. There is fidelity if the program is implemented in a way that is very similar to how it was originally designed, and there is adaptation when changes are made to the process and content of the program to fit a particular context. In most cases, contextual factors influence both the ability to maintain fidelity and the need for adaptation.
A collection of online chapters that provide an introduction to selected behavioral and social science research approaches, including theory development and testing, survey methods, measurement, and study design. eSource was developed in 2010, and these chapters have not been updated to reflect advances in the past decade. However, they can still be used as supplementary teaching materials.
Big Data and the Promise and Pitfalls When Applied to Disease Prevention and Promoting Better Health
How disruptive will Big Data be in the long run to biomedical research and health care? In his Methods: Mind the Gap webinar, Dr. Philip Bourne addresses this question in light of the Big Data to Knowledge (BD2K) initiative and other trans-NIH data science programs.
This FAES Graduate School course introduces students to the theory and practice of cancer screening in the United States. Students learn about the methodology used to assess cancer screening tests; how to interpret cancer screening data; and how to identify potential benefits and harms of cancer screening. They also become familiar with the evidence in favor of and against population-based screening for breast, colorectal, lung, cervical, and prostate cancer, as well as the controversies that surround mass screening for these diseases.
This archive provides a collection of webinars on methodology. The topics include HIV prevention, implementation methods, personalized medicine, complexity, and longitudinal data. In 2017, the Office of Disease Prevention (ODP) provided co-funding to the Center for Prevention Implementation Methodology to help create this archive.
In this Methods: Mind the Gap webinar, Dr. Evan Mayo-Wilson discusses the consequences of “multiplicity” for clinical investigators, systematic reviews and guideline developers, and clinical decision-makers. He highlights some potential solutions to these challenges, including prospective registration and core outcome sets.
A collection of training modules that came out of the NIH's initiative to enhance rigor and reproducibility in the research endeavor. The modules were developed by the NIH or NIH-funded grantees and focus on a variety of topics, including integrating sex and gender into research, the design and analysis of group-randomized trials, and computational analyses.
This training is geared towards raising comprehension of fundamental data science processes and concepts across ten technical data science competencies: research design, programming and scripting, computer science, advanced math, database science, data mining and integration, statistical modeling, machine learning, operations research, and data visualization.
This FAES course demonstrates and provides hands-on practice with R for creating and presenting data visualizations. After a short introduction to R tools, especially the tidyverse packages, the course covers principles of data visualization, examples of good and bad visualizations, and the use of ggplot2 to create static publication-quality graphs. Students also have the chance to learn about modern web-based interactive graphics using the htmlwidgets packages, as well as dynamic graphics and dashboards that can be created with flexdashboard and Shiny. The course explores ways in which bioinformatics data can be presented using static and dynamic visualizations. Finally, R Markdown and other packages are used to develop webpages that present data visualizations as self-explanatory and possibly interactive storyboards.
In this Methods: Mind the Gap webinar, Dr. David M. Murray reviews the options available to evaluate multilevel interventions, including group- or cluster-randomized trials, and discusses their strengths and weaknesses.
In this Methods: Mind the Gap webinar, participants learn what the field of dissemination and implementation (D&I) is, why it is important, what it is trying to achieve, and how it is relevant to research and practice. Dr. Fernandez discusses the major components of a D&I study, D&I theories, models and frameworks, and design considerations. She also teaches participants how to tailor their own research to better enhance its value for dissemination and implementation.
The purpose of this Methods: Mind the Gap webinar is to equip public health researchers and practitioners with awareness and confidence in approaching and conducting qualitative research projects, and to familiarize participants with qualitative data collection and data analysis techniques and tools.
In this Methods: Mind the Gap presentation, Dr. Jeffrey Sparks illustrates how epidemiologic and patient-oriented research studies can further the understanding of etiology and outcomes of rheumatoid arthritis (RA), a common chronic disease. Different study designs are needed to investigate different types of exposures and outcomes. This presentation discusses studies related to lifestyle factors, genetics, biomarkers, comorbid conditions for RA risk, and outcomes, focusing in particular on how inflammation in the lung may be a nidus for both RA onset and worsened clinical outcomes.
The objective of this FAES Graduate School course is to provide a deeper understanding of epidemiologic research methodology that can be used to critically interpret the results of epidemiologic research. Students develop this understanding by investigating conceptual models for study designs, disease frequency, measures of association and impact, imprecision, bias, and effect modification. The course emphasizes the interpretation of research, even when the design or execution of that research is less than ideal.