Author Archives: EEPuckett

Welcome to Heather Clendenin

Very excited to welcome Heather Clendenin to the Puckett Lab!  Heather recently finished her MS at the University of Idaho where she investigated sibling relatedness in gray wolves (Canis lupus).  For her PhD, she will estimate genetic load in black bear (Ursus americanus) populations with varying demographic histories.

Puckett Lab Hosts Emily Latch for Seminar

The Puckett Lab hosted Dr. Emily Latch for seminar. Dr. Latch is an Associate Professor at the University of Wisconsin-Milwaukee. She studies phylogeography and landscape genetics of several mammal species, with an emphasis on using this information to inform management. Dr. Latch met with the students in the Urban Ecology & Wildlife Management class, toured Meeman Biological Field Station, and gave her talk, “Wild bison, hidden deer: Conservation Genetics for a changing world.”

Puckett Lab Opening Fall 2018 at the University of Memphis

I am ecstatic to join the faculty at the University of Memphis as an Assistant Professor in the Biology Department.  The Puckett Lab will open Fall 2018 and focus on phylogeography and evolutionary genomics within the bear family.

If you are interested in joining the lab, please see the “Positions in the Lab” page for current information on positions.

Making conStruct Input Files

As part of my postdoc with Gideon Bradburd, I’m using his new software package conStruct (bioRxiv; GitHub) to analyze dozens of genomic datasets.  conStruct requires three input files: 1) genetic data, 2) coordinate data (longitude in the first column, latitude in the second), and 3) a pairwise distance matrix with the same number of sites as in the coordinate data.  Files 2 and 3 are straightforward, but it took me a little time to go from a regular STRUCTURE file to a conStruct file.  So below is the R code I’m using to do this conversion.
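If you are building files 2 and 3 from scratch, the distance matrix can be computed directly from the coordinates. Here is a minimal sketch (in Python rather than R, with hypothetical sampling sites; the haversine formula is one reasonable choice of great-circle distance, though conStruct does not dictate how you compute the matrix):

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km between two (longitude, latitude) points."""
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

# Hypothetical sampling sites: (longitude, latitude), matching conStruct's column order
sites = [(-90.0, 35.1), (-89.9, 35.7), (-84.3, 33.7)]

# Pairwise distance matrix with one row per site, same order as the coordinate file
dist = [[haversine_km(x1, y1, x2, y2) for (x2, y2) in sites] for (x1, y1) in sites]
```

Writing `sites` and `dist` out as plain tab-delimited tables gives files 2 and 3.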

Now if your data are not already in STRUCTURE two-row format (i.e., two rows per sample), then you’ll need to get there as a starting place. I used PGDSpider to make the STRUCTURE files WITH a header row and a column denoting sampling site.
(PLINK, bless its heart, makes one-row-per-sample STRUCTURE files, and I’m not coding out of that.) Remember that you want to denote sampling sites for conStruct, not putative populations. I then replaced the two blank headers for columns one and two with “SampleID” and “PopID.”

conStruct can take data as counts or frequencies. The code below makes a table of frequencies for one allele (doesn’t matter major or minor, derived or ancestral) for each sampling site for each locus.
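To make that calculation concrete, here is the per-sample logic sketched in Python with hypothetical genotypes (the R code below does this for every sample, locus, and file at once): the chosen allele appears 0, 1, or 2 times across a sample’s two STRUCTURE rows, and dividing by 2 gives a frequency.

```python
# -9 is the STRUCTURE missing-data code used in the R script below
MISSING = -9

def allele_freq(row1, row2, allele):
    """Frequency of `allele` for one sample at one locus, or None if all data are missing.
    row1/row2 are the sample's two allele calls from two-row STRUCTURE format."""
    calls = [a for a in (row1, row2) if a != MISSING]
    if not calls:
        return None  # no genotype at this locus; becomes NA in the R code
    return sum(a == allele for a in calls) / 2

# A sample heterozygous for allele 2 (genotype 1/2)
print(allele_freq(1, 2, allele=2))        # 0.5
# A sample with no data at this locus
print(allele_freq(MISSING, MISSING, 2))   # None
```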

I have written this as a loop to process multiple input files at once. You can remove the for loop and start at “str <- read.table()” if you only have one file.

setwd("Enter the path to your working directory")
files <- list.files(pattern = "\\.str$",full.names=T)
newnames <- paste(sep="",sub('\\.str$', '',files),"-Processed.str")

#Loop over all files and make the processed files needed for conStruct
for(i in 1:length(files)){

#Read data file and convert missing data to NA
str <- read.table(files[i],header=T)
str[str == "-9"] <- NA                          
str <- str[ order(str$PopID,str$SampleID),]

#Get the list of unique sample IDs
SampleID <- as.character(unique(str$SampleID))

#Looping over all loci, create a frequency table of alleles (0,1,2)
#Jacob Burkhart wrote this loop
count <- data.frame(SampleID)
for(loci in 3:dim(str)[2]){   
  temp <- table(str$SampleID, str[,loci])           
  colnames(temp) <- paste0(colnames(str)[loci], "-", colnames(temp)) 
  temp <- data.frame(unclass(temp)) 

#Reorder the rows of temp to match the sample order in count
#(table() sorts rows alphabetically, which may differ from the PopID sort order)
  temp <- temp[SampleID, , drop = FALSE]

#If a sample has no called alleles at this locus, recode its row as NA
  for(j in 1:dim(temp)[1]){
    if(sum(temp[j,], na.rm = TRUE) == 0) {temp[j,] <- NA} 
  }
#A biallelic locus gives temp two columns, one per allele
#Column bind one allele's counts to the sample IDs and any previous loci
#If a monomorphic locus slipped through your data processing, temp has a
#single column, so keep that one; otherwise keep the 2nd allele
#(which of the two alleles you keep does not matter for conStruct)
  count <- as.matrix(cbind(count,if(length(temp)==1){temp[,1]} else{temp[,2]}))
}

#Create a vector of the sampling site information for each sample
pop.vec <- as.vector(str[,2])
pop.vec <- pop.vec[c(seq(from=1, to=nrow(str), by=2))]

#Make variables to utilize below
n.pops <- length(unique(pop.vec))
table.pops <- data.frame(table(pop.vec))

#Make a file of individual sample allele frequencies
#If you only have one sample per sampling site, then you could stop here
freq <- matrix(as.numeric(count[,-1])/2,nrow(count),ncol(count)-1)
f <- matrix(as.numeric(freq),nrow(freq),ncol(freq))

#Empty matrix for sampling site level calculations
admix.props <- matrix(NA, n.pops,ncol(f))

#Calculate frequency (of 2nd allele) per sampling site
#The last line tests if there is a sampling site with n=1
#If so, prints vector because frequency has already been calculated (0, 0.5, or 1)
#If not, then calculates mean across samples from that site
for(m in 1:length(table.pops$pop.vec)){
  site <- as.factor(unique(pop.vec))[m]
  admix.props[m,] <- if(table.pops[table.pops$pop.vec == site,2] == 1){
    f[which(pop.vec == site),]
  } else{
    colMeans(f[which(pop.vec == site),], na.rm=T)
  }
}

#Export conStruct file and save in working directory
write.table(admix.props, newnames[i],quote=F,sep="\t",row.names=F,col.names=F)
}

As I noted in the code, my friend Jake Burkhart wrote the internal for loop that makes the frequency table. He originally wrote it to make pseudo-SNP datasets out of microsatellite data, which means that if you want to run conStruct on a microsatellite dataset, you can keep all of the allele columns (instead of just one column per biallelic SNP) and then process the frequencies at each sampling site the same way.  Note that conStruct will throw an error if there are fewer loci than samples, which shows up more readily when using pseudo-SNP data from (even highly polymorphic) microsatellites.
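As a sketch of that pseudo-SNP idea (in Python, with a hypothetical microsatellite locus; the function name is my own): each observed allele at a locus gets its own frequency column, rather than keeping a single column as for a biallelic SNP.

```python
def pseudo_snp_freqs(genotype, alleles):
    """genotype: one sample's two allele calls at a microsatellite locus.
    Returns a per-allele frequency (0, 0.5, or 1) for each allele observed
    at the locus, i.e. one pseudo-SNP column per allele."""
    return {a: sum(g == a for g in genotype) / 2 for a in alleles}

# A locus with three alleles (120, 124, 128) yields three columns per sample
alleles = [120, 124, 128]
print(pseudo_snp_freqs((120, 128), alleles))  # {120: 0.5, 124: 0.0, 128: 0.5}
```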

BayesAss for RADseq Data

I want to use BayesAss on a large SNP dataset generated with RADseq.  But I found out when I went to convert the data into the .immaq format that my favorite converter, PGDSpider, would only convert the first 40 loci.  I didn’t get 10,000s of loci for nothing, so that wasn’t going to work.  A second problem was that BayesAss 3.0.3 only allows 240 SNPs anyway.

Obviously I’m not the only person with this problem.  And thanks to Steve Mussmann there’s a solution.  Steve rewrote BayesAss (version 3.0.4) to handle large SNP datasets, and also wrote a program to convert the STRUCTURE files from pyRAD into .immaq input files for BayesAss.

Since I have STRUCTURE files from STACKS and not pyRAD, I had to do a little conversion.  My messy code is below, but leave a comment if you’re a whiz with an elegant solution.

Turning STRUCTURE Output from STACKS into STRUCTURE Output from pyRAD
First, output data from STACKS in STRUCTURE format (.str).  Remove the first two rows from this output (the STACKS header and the loci identifiers).  Then print the first column (sample names) and insert five empty columns to match pyRAD.  Do not print the second column from the STACKS STRUCTURE output, because that is the population code from the Population Map you input into STACKS.

awk '{print $1 "\t" "\t" "\t" "\t" "\t" "\t"}' data.str > test.out

Next, print out the remainder of your data, starting at column 3 (i.e., the first locus) in the original dataset (awk code from here).  Then use paste to concatenate the two files into the .str file you will convert.

awk '{for(i=3;i<NF;i++)printf"%s",$i OFS;if(NF)printf"%s",$NF;printf ORS}' data.str | tr ' ' '\t' > test2.out

paste test.out test2.out > data.str
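The two awk calls and the paste can also be collapsed into a single pass per line; here is a sketch in Python (the function name and the example row are my own, and I assume a tab-delimited STACKS row with the sample name in column 1 and the population code in column 2):

```python
def stacks_to_pyrad_line(line):
    """Reshape one STACKS .str row: keep the sample name, drop the
    population code (column 2), and insert five empty columns before
    the loci, mimicking the pyRAD layout."""
    fields = line.split()
    return "\t".join([fields[0]] + [""] * 5 + fields[2:])

# A made-up one-locus, two-row-format STACKS row: name, pop code, then alleles
row = "bear01\t3\t1\t2"
print(stacks_to_pyrad_line(row))  # bear01, five empty columns, then 1 and 2
```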

Convert .str to .immaq Using a Custom Perl Script
If you already had data from pyRAD and did not have to do the steps above, you can just convert your .str file to an .immaq using Steve’s script: str2immaq.pl. However, if you have data from STACKS, then you need to modify lines 39-52 in the original script to the following:

# convert structure format to immanc format, and push data into hash
for(my $i = 0; $i < @strlines; $i++){
	my @ima;
	my @temp = split(/\s+/, $strlines[$i]);
	my $name = shift(@temp);
	foreach my $allele(@temp){
		push( @ima, $allele);
	}

Since STACKS exports missing data as 0 (whereas pyRAD exports missing data as -9), this change removes the step that converts missing data from -9 to 0. Steve also wrote this change.  Save the modified Perl script, then use it to convert your data from .str to .immaq.  Now you’ve got an input file for BayesAss.

Not on the Postdoc Market- Round 2

In 2008 I graduated with my MS from Larry Smart’s lab at SUNY-ESF.  Larry gives his students a personalized graduation gift, something that reflects the rapport he had with each student.  Mine included a hunter green sweatshirt, a hunter green picnic blanket, and a green water bottle because, as he said, “she needs more green stuff.”  So yes, SUNY-ESF’s school colors are green and gold, but I’m pretty sure he had Michigan State University in mind with my hunter graduation gifts.  Larry went to MSU for his PhD, and I went to NC State for undergrad, whose school colors are red and white.  During my 2nd year in his lab, NCSU and MSU played each other in the ACC-Big Ten Challenge, and Larry and I bet on our respective teams: loser bakes the winner a dessert in school-spirit colors.  I made cupcakes with bright green frosting.  But apparently all that hunter apparel was just getting me ready for 2017…

I started a postdoctoral position in the Bradburd Lab at MSU.  I will work on spatial and temporal population genomics.  I’m really looking forward to learning new modeling skills.