Fostering Student Success with Access to Practice

By Vanessa Woods, Maggie Safronova, and Linda Adler-Kassner


As we develop courses that introduce students to the practices and content of our respective disciplines, we must recognize that the students who enter our classrooms come with different social and academic experiences that inform their learning. In large introductory courses, where most participants are disciplinary novices, uneven access to disciplinary practices can lead to a disconnect that discourages students and frustrates professors. As educators, we strive to design courses that give students access to our disciplines regardless of how their prior academic experiences have shaped their learning. Access to Practice (ATP) investigates how the development and implementation of highly structured peer review activities contribute to students' academic experience.

Through ATP, UC Santa Barbara instructors work with the Center for Innovative Teaching, Research, and Learning (CITRAL) to design highly structured writing and peer review prompts focused on challenging course concepts or challenging ways of writing. Over the more than two years CITRAL has sponsored ATP, faculty from Classics to Psychological and Brain Sciences, and from Political Science to Ecology, Evolution, and Marine Biology, have developed ATP prompts and peer reviews, often in very large classes. Students have written and provided feedback on subjects as wide-ranging as the calculations involved in stoichiometry and the present-day applications of the Oedipus myth.

Structured peer review activities provide students with an environment in which they can develop ways of practicing within a discipline. As students apply disciplinary concepts several times in a guided environment, they have the opportunity to develop an approach of their own that can be transferred to different contexts. This process helps students figure out disciplinary norms, including what types of questions to ask and how to answer them, writing conventions, and appropriate types of evidence. As students actively engage with disciplinary practices, we expect to see an increase in their metacognitive strategies and in their sense of being true members of their discipline. Preliminary data from UCSB suggest that by writing and providing feedback aimed at practicing disciplinary norms, students gain a deeper understanding of challenging concepts. These promising data are the basis for further exploration of the mechanisms by which peer review activities affect elements that contribute to student success; the candidate mechanisms being explored are metacognitive strategies, a sense of disciplinary membership, and disciplinary practices. We are currently looking for collaborators. Contact Vanessa Woods for more information.


Vanessa Woods, Ph.D.

Vanessa Woods, Ph.D., is a Lecturer with Potential Security of Employment (LPSOE), a tenure-track lecturer position (teaching professor), at the University of California, Santa Barbara. She is an expert in effective teaching practices and student success research. Vanessa is active in the SEISMIC collaboration and leads the Access to Practice Project (Experiments Working Group, Project 4).


Maggie Safronova, Ph.D.

Maggie Safronova, Ph.D., is the Associate Director of the Center for Innovative Teaching, Research, and Learning at the University of California, Santa Barbara. Maggie is involved in projects that explore the role of pedagogical innovations in students' sense of belonging at large universities. Maggie is also the project director for UCSB's ECoach project.


Linda Adler-Kassner, Ph.D.

Linda Adler-Kassner, Ph.D., is the Faculty Director of the Center for Innovative Teaching, Research, and Learning at the University of California, Santa Barbara, as well as a Professor of Writing Studies and the Associate Dean of Undergraduate Education. Linda is a member of the SEISMIC Collaboration Council and co-leads the Access to Practice project with Vanessa.

Scaling Up Analysis in a SEISMIC Measurement Project

By Eben B. Witherspoon


The AP Project in the Measurement Working Group has been moving quickly through its milestones. After starting strong last fall and developing shared analyses for the initial three SEISMIC institutions involved, the team has started "scaling up" the project to the other seven SEISMIC institutions. While the three initial institutions iron out the details of their final models, the scale-up institutions are in various stages of cleaning their data and getting it ready to process. In this scale-up process, our AP Project team has learned much about what it takes to run a project across SEISMIC and what key challenges can come up. We hope this post provides some support for our fellow Measurement projects as they prepare to scale up their own efforts!

1. Set Common Variable Names

The first thing to consider when scaling up a Measurement project is setting common definitions and variable names for key concepts in the project. It was surprising to us how much variation there was by institution in seemingly straightforward terms such as "cohort." Taking the time to clarify these not only facilitates discussion during meetings but also ensures that variables are defined and generated in the same way, and that, eventually, shared analysis code can be run on each institution's dataset.

Variable definitions and names from initial discussions in the AP and Demographics projects were merged into a working document that categorizes variables across the Working Group. This could be a good jumping-off point for developing SEISMIC-wide institutional data variable names.

An example of a method for generating project-wide variable definitions.
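A small sketch of what harmonizing to shared variable names might look like in practice. This is illustrative only: the column names and the `LOCAL_TO_SHARED` mapping below are hypothetical stand-ins, not the project's actual naming conventions, and the project's shared code is in R rather than Python.

```python
import pandas as pd

# Hypothetical mapping from one institution's local column names to the
# shared variable names agreed on in a working document. (All names here
# are invented for illustration.)
LOCAL_TO_SHARED = {
    "entry_term": "cohort",       # "cohort" meant different things by campus
    "sex": "gender",
    "ap_score_bio": "apscore_bio",
}

def harmonize_columns(df: pd.DataFrame, mapping: dict) -> pd.DataFrame:
    """Rename local columns to shared names, failing loudly if any are absent."""
    missing = [col for col in mapping if col not in df.columns]
    if missing:
        raise KeyError(f"Expected local columns not found: {missing}")
    return df.rename(columns=mapping)

local = pd.DataFrame({"entry_term": [2016], "sex": ["F"], "ap_score_bio": [4]})
shared = harmonize_columns(local, LOCAL_TO_SHARED)
print(list(shared.columns))  # → ['cohort', 'gender', 'apscore_bio']
```

Failing loudly on missing columns is the point: it surfaces naming mismatches at the first step, before any shared analysis code runs.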


2. Attend to Dataset Formatting

Second, it is essential to decide up front on the format of the dataset for analysis (i.e., long vs. wide). Unless this is explicitly discussed, it's easy to assume everyone is doing it the same way (but there are many ways people store and think about time-series data!), and this has big implications for running shared analyses later. For example, in the AP Project we eventually landed on a combination similar to "panel data": our data are wide by student (i.e., each row is a single student) and stacked long by discipline (i.e., all observations for students who took BIO are "stacked" on top of all observations for PHYSICS), with every student unique within each discipline but able to repeat across disciplines (i.e., if a student took both BIO and PHYSICS). This made the most sense for our project, as it let us easily subset our analyses by discipline. It might be overkill, but creating a mock-up dataset can even help to visually represent how the data look, which variables are time-invariant, and so on.
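A toy mock-up of the layout described above, with invented column names and values; the real datasets and shared scripts differ, but the shape is the same: wide by student within a discipline, stacked long across disciplines, so a student who took both BIO and PHYSICS appears once per discipline.

```python
import pandas as pd

# One block of rows per discipline; each row is a single student.
bio = pd.DataFrame({
    "student_id": [1, 2],
    "discipline": "BIO",
    "grade": [3.7, 3.0],
})
physics = pd.DataFrame({
    "student_id": [2, 3],
    "discipline": "PHYSICS",
    "grade": [3.3, 4.0],
})

# "Stack" the discipline blocks long, panel-data style.
panel = pd.concat([bio, physics], ignore_index=True)

# Every student is unique within a discipline...
assert not panel.duplicated(["student_id", "discipline"]).any()
# ...but student 2 repeats across disciplines (took both BIO and PHYSICS).

# Subsetting an analysis by discipline is then a simple filter:
bio_only = panel[panel["discipline"] == "BIO"]
```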

3. Share Model Specifications and Basic Descriptive Stats

Last but certainly not least, once you have settled on your research questions (RQs), it is very helpful to share clear model specifications, including the DV, the IVs, and the analytic sample. Even when the same model is being run, discrepancies across institutions can easily be missed when looking only at regression tables if there are different understandings of which sample is being analyzed. Sometimes something as simple as comparing sample sizes can catch these discrepancies early on; for example, if two schools of about the same size have vastly different Ns, something may be up. One way our project addressed this issue was by moving the part of the analysis that defines the sample (previously done in each institution's own data cleaning) into the shared script, so that each institution was literally running the same code to subset the data and generate the sample for each analysis. Of course, doing this first required common variable names and similarly formatted datasets…hence parts 1 and 2 🙂 As an added "bonus," these checks and balances worked together: if our shared code couldn't run or gave us weird results, that led us to uncover previously undiscovered issues in our variable or sample definitions!
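A minimal sketch of moving the sample definition into shared code: every institution runs the same function on its harmonized dataset instead of applying its own filters during local cleaning. The filter criteria and column names below are placeholders, not the AP Project's actual inclusion rules, and the project's shared scripts are in R rather than Python.

```python
import pandas as pd

def define_analytic_sample(df: pd.DataFrame, discipline: str) -> pd.DataFrame:
    """Return the analytic sample for one discipline, reporting its size.

    Illustrative inclusion rule: keep rows in the requested discipline
    that have a non-missing grade. Printing N alongside the model output
    makes cross-institution discrepancies easy to spot early.
    """
    sample = df[(df["discipline"] == discipline) & df["grade"].notna()]
    print(f"{discipline}: N = {len(sample)}")
    return sample

# Hypothetical harmonized data from one institution.
demo = pd.DataFrame({
    "discipline": ["BIO", "BIO", "PHYSICS"],
    "grade": [3.0, None, 3.5],
})
bio_sample = define_analytic_sample(demo, "BIO")  # prints "BIO: N = 1"
```

Because the same function runs at every institution, a surprising N points to a data or definition problem rather than a divergence in locally written filters.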

Additional Recommendations

We recommend each project save its analysis code in the SEISMIC-wide GitHub repository (email your GitHub username to join). This is a great way to share code and track changes without making overlapping edits. Our project also used R/RStudio and Google Colab with Jupyter notebooks to share, run, and comment on each other's code as we developed it. Then we saved the agreed-upon code in our AP Project (WG1-P4) GitHub repository.

We have also found it helpful to use R Markdown to create an "Analysis Workflow" file, which acts as a guide for AP Project participants in understanding the overall analysis process, including how to create a dataset that will work with our shared analysis code. It captures much of our thinking on streamlining and simplifying the scale-up process. It also serves as a single location that links to the other documents relevant to running analyses (e.g., variable naming conventions, model specifications). In addition, the document itself is shared and editable, which allows institutions to add notes as more specific issues pop up that might be useful to others (e.g., Pitt didn't have a variable for the year AP was taken, so we developed and noted a work-around).

Example of the Analysis Workflow document for AP Project.


Calling All Demographics Project Analysts

We would love feedback from Demographics Project (WG1-P1) analysts about our process and what you have been doing to coordinate. For example, have you found less complicated workarounds for the same issues? Are we missing key parts of scaling up that you've experienced? Let us know!

Interested in joining the AP Project?

Overall, the process for onboarding is:

  1. Join the Project GitHub repository (by emailing a GitHub username)
  2. Read the “Analysis Workflow” file (available on GitHub)
  3. Preview the SEISMIC variable definitions doc (also linked to in the Analysis Workflow file)
  4. Create an institution-specific folder in the WG1-P4 GitHub repository that contains the data cleaning files for that institution's data. These will all be slightly different, but it may be useful to see how others have done it, as there will be some overlap.
  5. Once the data is in the same format, run the shared analysis file (available on GitHub). 
  6. Join one of our meetings and share your findings!




Eben B. Witherspoon, Ph.D.

Eben Witherspoon is a post-doctoral researcher in the Learning Research and Development Center (LRDC) at the University of Pittsburgh. His main line of research examines attitudinal and environmental factors during the transition to college that influence retention in STEM career pathways for underrepresented students. Currently, he is working on a project looking at the factors influencing gendered attrition in the pre-med course sequence. Eben is an active SEISMIC member and works on the AP Project (Measurement Working Group, Project 4).