Know Your Dataset: Specifying colClasses to load up an ffdf

When I finally figured out how to use the ff package to load data into R, I was apparently working with data that was relatively painless to load through read.csv.ffdf (see my previous post).  Just this past Sunday, I naively followed my own post to load a completely new dataset (over 400,000 rows and about 180 columns) for analysis.  Unfortunately for me, the data file was a bit messier, so read.csv.ffdf wasn't able to finalize the column classes by itself.  It would chug along until certain columns, which it at first took to be one data type, turned out to be a different data type, and then it would throw an error, essentially telling me it didn't want to adapt its assumptions about which data type each column represented.

So, I set out to learn how to use the colClasses argument of read.csv.ffdf to set the data types for each column manually.  I adapted the following solution from a Stack Overflow thread about specifying colClasses in the regular read.csv function.

First, load up a sample of the big dataset using the read.csv command. (Reading only the first rows is obviously non-random; if you can figure out how to read the sample in randomly, I think it would work much better. A rough sketch of one workaround appears after the command below.)

fname = "C:/big/data/loc.csv"   # path to the full dataset
headset = read.csv(fname, header = TRUE, nrows = 5000)
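If you want something a little less biased than just the first 5,000 rows, one option (my own suggestion rather than part of the original recipe, and the skip value of 100000 is an arbitrary guess at your file's length) is to also pull a chunk from deeper in the file:

# Grab the column names from the header row, then read 5,000 rows starting
# roughly 100,000 lines into the file; adjust skip to suit your dataset.
col.names = names(read.csv(fname, header = TRUE, nrows = 1))
midset = read.csv(fname, header = FALSE, skip = 100000, nrows = 5000,
                  col.names = col.names)

For the rest of this post I'll keep working with headset.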

The next command gives you a named vector of all the variable names in your dataset, along with the classes R inferred for each one based on the rows you imported:

headclasses = sapply(headset, class)
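A couple of quick ways to eyeball what came back (my own habit, not from the Stack Overflow thread):

table(headclasses)                      # how many columns were inferred as each type
headclasses[headclasses == "factor"]    # show only the columns inferred as factors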

Now comes the fairly manual part. Look at the list of variables and classes (data types) that you generated, and look for obvious mismatches. Examples could be a numeric variable that got coded as a factor or logical, or a factor that got coded as a numeric. When you find such a mismatch, the following syntax suffices for changing a class one at a time:

headclasses["variable.name"] = "numeric"

Obviously, the “variable.name” should be replaced by the actual variable name you’re reclassifying, and the “numeric” string can also be “factor”, “ordered”, “Date”, “POSIXct” (the last two being date/time data types). Finally, let’s say you want to change every variable that got coded as “logical” into “numeric”. Here’s some syntax you can use:

headclasses[grep("logical", headclasses)] = "numeric"

Once you are certain that all the classes represented in the list you just generated and modified are accurate to the dataset, you can load up the data with confidence, using the headclasses list:

bigdataset = read.csv.ffdf(file="C:/big/data/loc.csv", first.rows=5000, colClasses=headclasses)
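As an aside, my understanding is that read.csv.ffdf also takes a next.rows argument controlling the size of every chunk after the first, which can speed things up if you have the RAM to spare; treat the call below as a hedged variant of the one above and check the ff documentation before relying on it:

library(ff)
bigdataset = read.csv.ffdf(file = "C:/big/data/loc.csv", first.rows = 5000,
                           next.rows = 50000, colClasses = headclasses)
dim(bigdataset)   # sanity check: number of rows and columns actually loaded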

This was certainly not easy, but I must say that I seem to be willing to jump through many hoops for R!!

Big data analysis, for free, in R (or “How I learned to load, manipulate, and save data using the ff package”)

Before choosing to support the purchase of Statistica at my workplace, I came across the ff package as an option for working with really big datasets (with special attention paid to ff data frames, or ffdf). It looked like a good option, supporting data frames with multiple data types and far more rows than would fit into RAM, which is where R normally keeps its data. The one big problem I had was that every time I tried to use the ffsave function to save my work from one R session to the next, it told me that it could not find an external zip utility on my Windows machine. I guess because I just had so much else going on, I didn't have the patience to research a solution to this problem.

This weekend I finally found some time to revisit this problem, and managed to find a solution! From what I can tell, R expects, in cases like the ffsave function, that you have command-line utilities such as a zip program installed and recognizable by R. Although I haven't tested the ff package on either of my Linux laptops at home, I suspect R simply finds the utilities that come pre-installed there. On Windows, however, the solution seems to be to install a supplementary group of command-line programs called Rtools.  When you visit the Rtools page, be sure to download the version that corresponds to your R version.

When you go through the installation process, you will see a screen like the one below. Be sure to check the same boxes as in the screenshot, in particular the option that adds Rtools to your system PATH, so that R knows where the zip utility lives.
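A quick check I'd suggest (it wasn't part of my original troubleshooting): once Rtools is installed and you've restarted R, you can ask R whether it can see the zip utility.

Sys.which("zip")   # should print the full path to zip.exe; "" means R still can't find it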

Once you have it installed, that's when the fun finally begins. As in the smaller-data case, I like reading in CSV files, and ff provides read.csv.ffdf for importing external data into R. Let's say you have a data file named bigdata.csv; here is a command for loading it up:

bigdata = read.csv.ffdf(file="c:/fileloc/bigdata.csv", first.rows=5000, colClasses=NA)

The first part of the command, directing R to your file, should look straightforward. The first.rows argument tells it how big the first chunk of data it reads in should be (ff reads your data in chunks to save RAM; correct me if I'm wrong).  Finally, and importantly, the colClasses=NA argument tells R not to assume the data types of each of your columns from the first chunk alone.

Now that you’ve loaded your big dataset, you can manipulate it at will.  If you look at the ff and ffbase documentation, a lot of the standard R functions for working with and summarizing data have been optimized for use with ff dataframes and vectors.  The upshot of this is that working with data stored in ffdf format seems to be a pretty similar experience compared to working with normal data frames.  Importantly, when you want to subset your data frame to create a test sample, the ffbase package replaces the subset command so that the resultant subset is also an ffdf, and doesn’t take up more of your RAM.
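To make that concrete, here's a small sketch; the column name income is made up, so substitute one from your own data:

library(ffbase)
# subset() from ffbase returns another ffdf, so the filtered rows stay on disk
# instead of being copied into RAM.
high.income = subset(bigdata, income > 50000)
class(high.income)   # should still be an ffdf
nrow(high.income)    # how many rows passed the filter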

I noticed that you can use the glm() and lm() functions on an ffdf, but I think you have to be careful because they are not optimized for use with ffdfs and therefore will take up the usual amount of memory if you save them to your workspace.  So if you build models using these functions, be sure to select a sample from your ffdf that isn’t overly big!
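Here's a rough sketch of what I mean; the column names y and x1 are hypothetical stand-ins for whatever is in your own data:

# Pull a modest random sample of rows out of the ffdf into an ordinary data
# frame, so that lm() only ever sees the sample in RAM.
set.seed(123)
sample.rows = sample(nrow(bigdata), 10000)
model.data = as.data.frame(bigdata[sample.rows, ])
fit = lm(y ~ x1, data = model.data)
summary(fit)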

Next comes the step of saving your work.  The syntax is simple enough:

ffsave(bigdata, file="C:/fileloc/Rwork/bigdata")

This saves a .ffData file and a .RData file to the directory of your choice with “bigdata” as the filenames.

Then, when you want to load up your data in a new R session at some later time, you use the simple ffload command:

ffload(file="C:/fileloc/Rwork/bigdata")

It gives you some warning messages, but as far as I can tell they do not get in the way of accessing your data.  That covers the basics of working with big data using the ff package.    Have fun analyzing your data using less RAM! 🙂

Functions ddply and melt make plotting summary stats in R more tolerable

The main reason I have usually chosen to use Excel to make my plots at work is that I had difficulty feeding summary stats in R into a plotting function.  One thing I learned this week is how to turn summary stats into a data frame suitable for plotting, which makes the whole process of plotting in R more tolerable for me.  Below I show the process using the ever-popular iris dataset.  I use the functions ddply (from the plyr package) and melt (from reshape2) to both summarize and restructure the data into a form amenable to plotting.

library(plyr)
library(reshape2)

length.by.species = ddply(iris, "Species", function(x) quantile(x$Sepal.Length, c(.25, .5, .75)))
length.by.species
     Species   25% 50% 75%
1     setosa 4.800 5.0 5.2
2 versicolor 5.600 5.9 6.3
3  virginica 6.225 6.5 6.9
length.by.species = melt(length.by.species, variable.name="Quantile", value.name="Sepal.Length")
length.by.species
     Species Quantile Sepal.Length
1     setosa      25%        4.800
2 versicolor      25%        5.600
3  virginica      25%        6.225
4     setosa      50%        5.000
5 versicolor      50%        5.900
6  virginica      50%        6.500
7     setosa      75%        5.200
8 versicolor      75%        6.300
9  virginica      75%        6.900

One thing you can see in my call to ddply is that the main qualitative variable, whose values are used to subset your data frame, is referred to using quotes.  Somehow I find that a bit weird (I’m used to referring to variables without quotes, I suppose!).  Other than that, the syntax for the ddply command is similar enough to the apply family of functions, so no more complaints here.  You can also see that once I call the function, it gives me a nice neat data frame where the quantiles I asked for are columns, and the values of the Species variable represent different rows (or subsets of the data frame).

The melt command is easy enough, simply wanting to know what to call the column that will hold the old column titles (Quantile!) and what to call the numeric measure those values came from (Sepal.Length).
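(One aside that isn't part of the original recipe: if you want to be explicit about which column is the identifier, the melt call above could equivalently be written with an id.vars argument, which also silences the "Using Species as id variables" message.)

length.by.species = melt(length.by.species, id.vars = "Species",
                         variable.name = "Quantile", value.name = "Sepal.Length")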

Now that the summary stats are in a "long" form data frame, with one column containing the numbers and two columns containing text, it's just a simple one-liner to create a graph (here done with ggplot2).  Below I show one line to create a dodged bar graph, and another line to create a dot plot, both showing the 1st to 3rd quartiles of Sepal.Length by Species.

library(ggplot2)
ggplot(length.by.species, aes(y=Sepal.Length, x=Species, fill=Quantile)) + geom_bar(stat="identity", position="dodge")
ggplot(length.by.species, aes(x=Sepal.Length, y=Species, colour=Quantile)) + geom_point(size=4)

Thank you ddply and melt!

An embarrassing admission: Copy-pasting tables with text containing spaces from Excel to R

I can't believe I didn't learn how to do this earlier, but I never knew how to copy tables from Excel containing text with spaces, and paste them into a data frame in R, without the spaces being mistaken for separators between variables.

Say you have a column title in a table in Excel like "Group Size".  You then copy the table and try to load it into R using my.table = read.table("clipboard", header=TRUE).  R then gives you an error like this:

Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,  : 
  line 1 did not have 2 elements

That's where I stopped most times and made sure that my column titles had no spaces in them (replacing spaces with dots), or just did something else entirely.  Finally, I learned that if you add one teeny tiny argument to the read.table command, read.table("clipboard", header=TRUE, sep="\t"), then all text values with spaces in them actually get imported into R without a fuss!  Spaces in column names get converted into dots automatically, but actual text values in the columns keep their spaces and are accessible as they should be in R.  Yay!
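In case it helps, here is the whole thing as a little snippet (the "Group Size" column is from my made-up example; on Windows, "clipboard" reads whatever range you last copied in Excel):

# Copy a range in Excel (including the header row), then run this.
# sep = "\t" matches the tab-delimited format Excel places on the clipboard.
my.table = read.table("clipboard", header = TRUE, sep = "\t")
names(my.table)    # a header like "Group Size" shows up as "Group.Size"
head(my.table)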