Home on Christopher Dishop
/
Recent content in Home on Christopher Dishop
Sat, 05 May 2018 00:00:00 +0000

Recommended Reading
/recommended_reading/rec_reading/
Sat, 05 May 2018 00:00:00 +0000

Quantifying Life: A Symbiosis of Computation, Mathematics, and Biology
Dmitry A. Kondrashov
How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life
Thomas Gilovich
Complexity: A Guided Tour
Melanie Mitchell
Statistical Rethinking: A Bayesian Course with Examples in R and Stan
Richard McElreath
The Drunkard’s Walk: How Randomness Rules Our Lives

Spline Modeling
/computational_notes/spline/
Sat, 05 May 2018 00:00:00 +0000

A few spline models (also known as piecewise models). As in previous posts, ‘affect’ is the name given to values of \(y\) throughout.
1) Growth and Even More Growth
A model that captures a process that increases initially and then increases at an even greater rate once it reaches time point 5. The data generating process:
\[\begin{equation} y_{it} = \begin{cases} 4 + 0.3t + error_{t}, & \text{if time < 5}\\ 8 + 0.\dots \end{cases} \end{equation}\]

Latent Growth Curves
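A minimal sketch of the piecewise process from the Spline Modeling note. The summary cuts the second segment off at "8 + 0.", so the post-knot slope used below (0.8) is an assumed value chosen only to show a steeper increase after time point 5.

```r
# One simulated trajectory: slope 0.3 before the knot at time 5, then a
# steeper (assumed) slope of 0.8 afterward, with the post-knot intercept 8.
set.seed(1)
time <- 1:10
y <- ifelse(time < 5,
            4 + 0.3 * time + rnorm(length(time), 0, 1),
            8 + 0.8 * time + rnorm(length(time), 0, 1))
plot(time, y, type = "b")
```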
/computational_notes/latent_growth/
Sun, 15 Apr 2018 00:00:00 +0000

Latent Growth Curves
I will progress through three models: linear, quadratic, and latent basis growth. In every example I use a sample of 400, 6 time points, and ‘affect’ as the variable of interest.
1) Linear
The data generating process:
\[\begin{equation} y_{it} = 4 - 0.6t + e_{t} \end{equation}\]

library(tidyverse)
library(ggplot2)
library(MASS)

N <- 400
time <- 6
intercept_mu <- 2
linear_growth_parameter_mu <- -0.6
sigma <- matrix(c(1.0, 0.3,
                  0.3, 1.0), 2, 2, byrow = T)
df_matrix <- matrix(, nrow = N*time, ncol = 3)
count <- 0

for(i in 1:400){
  unob_het_affect <- rnorm(1,0,3)
  parameter_space <- mvrnorm(1, c(intercept_mu, linear_growth_parameter_mu), sigma)
  intercept <- parameter_space[1]
  linear_growth <- parameter_space[2]
  for(j in 1:6){
    count <- count + 1
    if(j == 1){
      df_matrix[count, 1] <- i
      df_matrix[count, 2] <- j
      df_matrix[count, 3] <- intercept + unob_het_affect + rnorm(1,0,1)
    }else{
      df_matrix[count, 1] <- i
      df_matrix[count, 2] <- j
      df_matrix[count, 3] <- intercept + linear_growth*j + unob_het_affect + rnorm(1,0,1)
    }
  }
}
df <- data.…

Social Trait Development Computational Model
/computational_notes/social_trait_comp_model/
Fri, 30 Mar 2018 00:00:00 +0000

I built the following simple computational model for an individual differences class in the Spring of 2018 to demonstrate how to incorporate explanatory elements for trait development into a computational framework. This model assumes that an individual’s trait development depends on 1) the environment and 2) interactions with others inside and outside of the individual’s social group. Moreover, the model assumes traits are somewhat stable and exhibit self-similarity across time. The main properties I am trying to capture, therefore, include:

Numerical Integration and Optimization
/computational_notes/integration_optimization/
Fri, 16 Feb 2018 00:00:00 +0000

Integration: Trapezoid Rule
To find the area under a curve we can generate a sequence of trapezoids that follow the rules of the curve (i.e., the data generating function for the curve) along the \(x\)-axis and then add all of the trapezoids together. To create a trapezoid we use the following equation:
let \(w\) equal the width of the trapezoid (along the \(x\)-axis), then
Area = \((w/2)\big(f(x_i) + f(x_{i+1})\big)\) for a single trapezoid.

Random Walks
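As a worked example of the rule from the Numerical Integration note, a small R function that builds the trapezoids over a grid and sums them; the integrand \(x^2\) and the interval are illustrative choices, not from the post.

```r
# Composite trapezoid rule: for each pair of neighboring grid points,
# Area = (w/2) * (f(x_i) + f(x_{i+1})), then sum the pieces.
trapezoid <- function(f, a, b, n = 100){
  x <- seq(a, b, length.out = n + 1)
  w <- (b - a) / n
  sum((w / 2) * (f(x[-(n + 1)]) + f(x[-1])))
}
trapezoid(function(x) x^2, 0, 1)  # close to the exact area 1/3
```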
/computational_notes/random_walks/
Thu, 11 Jan 2018 00:00:00 +0000

Some random walk fun. I use 400 steps in each example.
One-Dimensional Random Walk
A random walk using a recursive equation.
# Empty vector to store the walk
rw_1 <- numeric(400)

# Initial value
rw_1[1] <- 7

# The Random Walk equation in a for-loop
for(i in 2:400){
  rw_1[i] <- 1*rw_1[i - 1] + rnorm(1,0,2)
}
plot(rw_1)

A random walk using R’s “cumsum” command. Here, I will generate a vector of randomly selected 1’s and -1’s.

Combining CSV Files
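A minimal sketch of the cumsum approach described in the Random Walks note: draw the ±1 steps, then accumulate them.

```r
# Draw 400 random steps of -1 or +1, then take the cumulative sum
# to turn the steps into a walk.
set.seed(1)
steps <- sample(c(-1, 1), 400, replace = TRUE)
rw_2 <- cumsum(steps)
plot(rw_2)
```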
/computational_notes/load_csv/
Wed, 03 Jan 2018 00:00:00 +0000

A couple of quick pieces of code to assist any time I need to work with many CSV files.
Into List
This first code chunk loads all of the CSV files in a folder, makes each into a data frame, and stores each separately in a list.

setwd("enter path")
# A character vector of every file name
files <- Sys.glob("*.csv")
# A list of all CSV files in the respective folder as data.…

Formatting Qualtrics Responses
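A self-contained sketch of the list idea from the Combining CSV Files note ("enter path" stays a placeholder for the actual folder):

```r
# setwd("enter path")  # placeholder for the folder holding the CSV files
files <- Sys.glob("*.csv")            # every CSV file name in the folder
csv_list <- lapply(files, read.csv)   # read each file into a data frame
names(csv_list) <- files              # name each list element by its file
```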
/computational_notes/formatting_qualtrics/
Tue, 02 Jan 2018 00:00:00 +0000

When you download data from Qualtrics it will populate using strings (e.g., “Strongly Agree, Agree, Neutral”) rather than values (e.g., 4, 3, 2). Here is a quick piece of code to create numeric response scores for analysis.
library(tidyverse)
library(plyr)

df <- read.csv("path")

labels_to_values1 <- function(x){
  mapvalues(x,
            from = c("Strongly Agree", "Agree", "Slightly Agree",
                     "Slightly Disagree", "Disagree", "Strongly Disagree"),
            to = c(6,5,4,3,2,1))
}

recode_df <- df %>%
  select(column_to_modify1, column_to_modify2, etc) %>%
  apply(2, FUN = labels_to_values1) %>%
  data.…

Why Detecting Interactions is Easier in the Lab
/computational_notes/interactions_fve/
Wed, 15 Nov 2017 00:00:00 +0000

A fun simulation by McClelland and Judd (1993) in Psychological Bulletin that demonstrates why detecting interactions outside the lab (i.e., in field studies) is difficult. In experiments, scores on the independent variables are located at the extremes of their respective distributions because we manipulate conditions. The distribution of scores across all of the independent variables in field studies, conversely, is typically assumed to be normal. By creating “extreme groups” in experiments, therefore, it becomes easier to detect interactions.

Higher Order CFA
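One way to see the McClelland and Judd point in miniature: on the same bounded scale, scores pushed to the endpoints give the product term \(xz\) far more variance than scores spread across the scale, and more variance in \(xz\) means a smaller standard error for the interaction estimate. Uniform scores stand in for field data here purely for simplicity; this is an illustration, not their simulation.

```r
# Variance of the product term x*z on the same [-1, 1] scale:
# field-like scores spread across the scale vs. experimental scores
# manipulated to the endpoints.
set.seed(42)
n <- 10000
x_field <- runif(n, -1, 1); z_field <- runif(n, -1, 1)
x_exp <- sample(c(-1, 1), n, replace = TRUE)
z_exp <- sample(c(-1, 1), n, replace = TRUE)
var(x_field * z_field)  # about 1/9
var(x_exp * z_exp)      # about 1
```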
/computational_notes/higher_order_cfa/
Wed, 01 Nov 2017 00:00:00 +0000

The system of variables:

digraph SEM{
  node [shape = rectangle]
  y1; y2; y3; y4; y5; y6;
  node [shape = oval]
  x1; x2; G;
  x1 -> y1
  x1 -> y2
  x1 -> y3
  x2 -> y4
  x2 -> y5
  x2 -> y6
  G -> x1
  G -> x2
}

Data generation:

G = rnorm(200, 90, 25)
x1 = G*0.40 + rnorm(200,0,5)
x2 = G*0.18 + rnorm(200,0,10)
y1 = x1*0.75 + rnorm(200,0,2)
y2 = x1*0.35 + rnorm(200,0,4)
y3 = x1*0.15 + rnorm(200,0,10)
y4 = x2*0.88 + rnorm(200,0,7)
y5 = x2*0.…

SEM Path Analysis
/computational_notes/path_analysis/
Sun, 01 Oct 2017 00:00:00 +0000

The system of variables:

digraph SEM{
  node [shape = rectangle]
  x1; x2; x3; y1; y2;
  x1 -> y1
  x2 -> y1
  x3 -> y1
  x2 -> y2
  x3 -> y2
  y1 -> y2
}

Generate data to represent the system above:

cov_matrix = matrix(c(1.0, 0.02, 0.02, 0.04, 0.01,
                      0.02, 1.0, 0.08, 0.16, 0.28,
                      0.02, 0.08, 1.0, 0.35, 0.47,
                      0.04, 0.16, 0.35, 1.0, 0.02,
                      0.01, 0.28, 0.47, 0.02, 1.0), 5, 5)
library(MASS)
N = 800
Mu = c(0,0,0,0,0)
data_f = data.…

Dynamic Theory of Reasoned Action
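A self-contained sketch of the multivariate-normal draw set up in the SEM Path Analysis note (MASS ships with R); assigning the labels x1, x2, x3, y1, y2 to the five columns is an assumption made here.

```r
# Draw N = 800 observations from the 5-variable covariance matrix,
# then store them as a data frame. Column labels are assumed.
library(MASS)
cov_matrix <- matrix(c(1.0, 0.02, 0.02, 0.04, 0.01,
                       0.02, 1.0, 0.08, 0.16, 0.28,
                       0.02, 0.08, 1.0, 0.35, 0.47,
                       0.04, 0.16, 0.35, 1.0, 0.02,
                       0.01, 0.28, 0.47, 0.02, 1.0), 5, 5)
N <- 800
Mu <- c(0, 0, 0, 0, 0)
data_f <- data.frame(mvrnorm(N, Mu, cov_matrix))
names(data_f) <- c("x1", "x2", "x3", "y1", "y2")
```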
/computational_notes/theory_of_reasoned_action/
Tue, 05 Sep 2017 00:00:00 +0000

A replicated dynamic theory of reasoned action, inspired by Boster, Shaw, Carpenter, and Lindsey (2015; link HERE).
The Theory

digraph SEM{
  node [shape = rectangle]
  Intention; Attitudes; Norms;
  Attitudes -> Intention
  Norms -> Intention
}

The Theory In A Difference Equation:

I(t) = b1*Norms(t-1) + b2*Attitudes(t-1) + b3*Intention(t-1)

Simulation

b1 = 0.15
b2 = 0.25
b3 = 0.60
Initial Distributions:
1,600 cases with a mean of 3 and a standard deviation of 1.

Workforce Dynamics
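A minimal sketch of the reasoned-action simulation described above: 1,600 cases starting near mean 3, sd 1, iterated under the difference equation. Holding norms and attitudes constant over time is an assumption made for this sketch.

```r
# Iterate I(t) = b1*Norms + b2*Attitudes + b3*I(t-1) to equilibrium.
set.seed(7)
n <- 1600
b1 <- 0.15; b2 <- 0.25; b3 <- 0.60
norms <- rnorm(n, 3, 1)
attitudes <- rnorm(n, 3, 1)
intention <- rnorm(n, 3, 1)
for(t in 2:50){
  intention <- b1*norms + b2*attitudes + b3*intention
}
mean(intention)  # settles near (b1 + b2)*3 / (1 - b3) = 3
```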
/computational_notes/role_dynamics/
Tue, 22 Aug 2017 00:00:00 +0000

We can model the states of a system by applying a transition matrix to values represented in an initial distribution and repeating it until we reach an equilibrium.
Suppose we want to model how job roles in a given company change over time. Let us assume the following:
There are three (hierarchical) positions in the company:
Analyst
Project Coordinator
Manager
30 new workers enter the company each year, and they all begin as analysts.

Convert Text File
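The Workforce Dynamics assumptions above can be sketched with a transition matrix. The summary states only the three roles and the 30 yearly hires, so the movement and attrition probabilities below are assumed for illustration.

```r
roles <- c("Analyst", "Project Coordinator", "Manager")
# Rows: current role; columns: role next year. Row sums below 1 mean
# some workers leave the company. All probabilities are assumed.
P <- matrix(c(0.70, 0.25, 0.00,
              0.00, 0.75, 0.20,
              0.00, 0.00, 0.85), 3, 3, byrow = TRUE,
            dimnames = list(roles, roles))
state <- c(30, 0, 0)  # year one: 30 new analysts
for(year in 1:200){
  state <- state %*% P + c(30, 0, 0)  # apply transitions, add 30 hires
}
round(state)  # equilibrium head count per role
```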
/computational_notes/convert_text/
Sun, 09 Apr 2017 00:00:00 +0000

A quick piece of code that reads a text file, changes something, saves a new text file, and iterates that process for every text file in that folder.
setwd("path to the text files")
library(readr)

all_files = Sys.glob("*.txt")
for(i in 1:length(all_files)){
  data = all_files[i]
  mystring = read_file(data)
  new_data = gsub("old piece of text", "new piece of text", mystring)
  write_file(new_data, path = paste("something", i, ".txt", sep = ""))
}

Art With Monte Carlo
/computational_notes/art_montecarlo/
Sat, 18 Mar 2017 00:00:00 +0000

I like to think of Monte Carlo as a counting method. If a condition is satisfied we make a note (e.g., 1), and if the condition is not satisfied we make a different note (e.g., 0). We then iterate and evaluate the pattern of 1’s and 0’s to learn about our process. Art can be described in a similar way: if a condition is satisfied we use a color, and if a condition is not satisfied we use a different color.

The Binomial Effect Size Display
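The counting-and-coloring idea from the Art With Monte Carlo note can be sketched with the classic quarter-circle example (not from the post): each point gets a note of 1 or 0, the mean of the notes estimates a quantity, and the notes double as colors.

```r
# Monte Carlo as counting: drop random points in the unit square and note
# a 1 when the point lands inside the quarter circle, a 0 otherwise.
set.seed(123)
n <- 100000
x <- runif(n); y <- runif(n)
inside <- as.numeric(x^2 + y^2 <= 1)  # the "notes": 1 or 0
4 * mean(inside)                      # roughly pi
# The art angle: color each point by its note
plot(x, y, col = ifelse(inside == 1, "steelblue", "tomato"), pch = ".")
```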
/computational_notes/besd/
Sun, 01 Jan 2017 00:00:00 +0000

Effect sizes provide information about the magnitude of an effect. Unfortunately, they can be difficult to interpret or appear “small” to anyone unfamiliar with the typical effect sizes in a given research field. Rosenthal and Rubin (1992) provide an intuitive effect size, called the Binomial Effect Size Display, that captures the change in success rate due to a treatment.
The calculation is simple:
Treatment BESD = 0.50 + (r / 2)
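A quick worked example of the calculation for a correlation of r = 0.32; pairing the treatment rate with a control rate of 0.50 − r/2 follows Rosenthal and Rubin's formulation.

```r
# BESD success rates implied by a correlation r
besd <- function(r){
  c(treatment = 0.50 + r/2, control = 0.50 - r/2)
}
besd(0.32)  # treatment 0.66, control 0.34
```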