I am a Researcher at the Center for Research on Computation and Society at Harvard University. I work on machine learning and statistical methods for broad problems in quantitative social science, such as missing data, measurement error, privacy preservation and human-guided machine learning. I also build software tools to implement these methodological solutions.
This prototype system will allow researchers with sensitive datasets to make differentially private statistics about their data available through data repositories, such as the Dataverse platform. The system will allow researchers to upload private data to a secured Dataverse archive, decide which statistics they would like to release about that data, and release privacy-preserving versions of those statistics to the repository, where they can be explored through a curator interface, including via interactive queries, without releasing the raw data. This system was created by the Privacy Tools for Sharing Research Data project. Differential privacy is a mathematical framework for enabling statistical analysis of sensitive datasets while ensuring that individual-level information cannot be leaked.
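The system's internals are not shown here, but the core idea of differential privacy can be illustrated with the classic Laplace mechanism: add noise calibrated to a query's sensitivity (the most any one individual's record can change the result) and to the privacy parameter epsilon. A minimal sketch in Python, with a toy dataset and a count query (all names and values here are illustrative assumptions, not part of the actual system):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw one sample from Laplace(0, scale).

    A Laplace variate is an exponentially distributed magnitude
    with a random sign.
    """
    magnitude = rng.expovariate(1.0 / scale)
    return magnitude if rng.random() < 0.5 else -magnitude

def private_count(data, predicate, epsilon):
    """Epsilon-differentially-private count query.

    A count has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace(1/epsilon) noise suffices. Smaller
    epsilon means stronger privacy and more noise.
    """
    true_count = sum(1 for record in data if predicate(record))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy sensitive dataset: respondent ages. The released statistic is a
# noisy count of respondents over 40; the raw data never leaves the server.
ages = [23, 45, 31, 62, 58, 37, 41, 29]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
```

Averaged over many hypothetical releases the noisy count centers on the true value (4 here), but any single release reveals little about any one individual.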
TwoRavens is a graphical user interface for quantitative analysis that allows users at all levels of statistical expertise to explore their data, describe their substantive understanding of the data, and appropriately construct and interpret statistical models. The interface is a browser-based thin client, with the data remaining in an online repository and the statistical modeling occurring on a remote server. In our implementation, we integrate with tens of thousands of datasets from the Dataverse repository and with the large library of statistical models available in the Zelig package for the R statistical language. Our interface is entirely gesture-driven, and so easily used on tablets and phones. This, in combination with being browser-based, makes data exploration and quantitative reasoning easily portable to the classroom with minimal infrastructure or technology overhead.
Amelia II "multiply imputes" missing data from a single cross-section (such as a survey), from a time series (such as variables collected for each year in a country), or from a time-series-cross-sectional data set (such as one collected by year for each of several countries). The program also generalizes existing approaches by allowing for trends in time series across observations within a cross-sectional unit, as well as priors that allow experts to incorporate beliefs they have about the values of missing cells in their data. Amelia II also includes useful diagnostics of the fit of multiple imputation models. The program works from the R command line or via a graphical user interface that does not require users to know R.
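Amelia II's EM-with-bootstrapping algorithm is not reproduced here, but the general multiple-imputation idea it implements can be sketched: fill in the missing cells several times with draws from a model of the data, analyze each completed dataset, and pool the results with Rubin's combining rules. A deliberately crude Python sketch (the simple mean-plus-noise imputer is a stand-in for Amelia's model-based draws, and all names are illustrative):

```python
import random
import statistics

def impute_once(values, rng):
    """Fill each missing cell (None) with a draw from a normal fit to
    the observed values -- a crude stand-in for model-based imputation."""
    observed = [v for v in values if v is not None]
    mu, sd = statistics.mean(observed), statistics.stdev(observed)
    return [v if v is not None else rng.gauss(mu, sd) for v in values]

def multiply_impute_mean(values, m=5, seed=0):
    """Estimate the mean and its variance with Rubin's rules over m imputations."""
    rng = random.Random(seed)
    estimates, variances = [], []
    for _ in range(m):
        completed = impute_once(values, rng)
        estimates.append(statistics.mean(completed))
        variances.append(statistics.variance(completed) / len(completed))
    q_bar = statistics.mean(estimates)            # pooled point estimate
    within = statistics.mean(variances)           # avg. within-imputation variance
    between = statistics.variance(estimates)      # between-imputation variance
    total_var = within + (1 + 1 / m) * between    # Rubin's total variance
    return q_bar, total_var

data = [2.1, None, 3.4, 2.8, None, 3.0, 2.5, 3.7]
estimate, variance = multiply_impute_mean(data, m=20)
```

The between-imputation term is what single imputation misses: it propagates the uncertainty about the missing cells into the final standard error.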
Zelig is a framework that brings together an abundance of common statistical models found across R packages into a unified interface, providing a common architecture for estimation and interpretation, as well as bridging functions to absorb ever more models into the collective library. Zelig allows each individual package, and each statistical model, to be accessed through a common, uniformly structured call and set of arguments. Moreover, Zelig automates all the surrounding building blocks of a statistical workflow: procedures and algorithms that may be essential to one user's application but that the original package developer did not use in their own research and might not themselves support. These include bootstrapping, jackknifing, and reweighting of data; in particular, Zelig automatically generates predicted and simulated quantities of interest (such as relative risk ratios, average treatment effects, first differences, and predicted and expected values) to help users interpret and visualize complex models.
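The simulation approach behind these quantities of interest can be sketched independently of Zelig's own API. Assume a fitted logit model summarized by (hypothetical) coefficient estimates and standard errors; drawing parameter vectors from their approximate sampling distribution and averaging the change in predicted probability between two covariate settings yields a "first difference" with an uncertainty interval. A minimal Python sketch, treating the coefficients as independent normals for simplicity (a real implementation would draw from the full estimated covariance matrix):

```python
import math
import random

# Hypothetical logit estimates: (point estimate, standard error) for the
# intercept and one slope. In Zelig these would come from a fitted model.
beta_hat = {"intercept": (-1.0, 0.30), "x": (0.8, 0.25)}

def inv_logit(eta):
    """Map a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

def simulate_first_difference(x_low, x_high, n_sims=1000, seed=0):
    """Average change in Pr(y=1) as x moves from x_low to x_high,
    with a 95% interval from the simulated sampling distribution."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_sims):
        b0 = rng.gauss(*beta_hat["intercept"])
        b1 = rng.gauss(*beta_hat["x"])
        diffs.append(inv_logit(b0 + b1 * x_high) - inv_logit(b0 + b1 * x_low))
    diffs.sort()
    point = sum(diffs) / n_sims
    interval = (diffs[int(0.025 * n_sims)], diffs[int(0.975 * n_sims)])
    return point, interval

fd, (lo, hi) = simulate_first_difference(x_low=0.0, x_high=1.0)
```

The same machinery, applied to other functions of the simulated parameters, produces expected values, relative risks, and the other quantities listed above.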