Introduction

Using devices such as the Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement – a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. People regularly quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, our goal is to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants who were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).

The project can be found on GitHub: http://benakiva.github.io/practicalML/ and https://github.com/benakiva/practicalML

Data Source

The training data for this project are available here:

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv

The test data are available here:

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv

The data for this project come from this source: http://groupware.les.inf.puc-rio.br/har. If you use the document you create for this class for any purpose, please cite the authors, as they have been very generous in allowing their data to be used for this kind of assignment.

Loading the Dataset

In this section, we download the data files from the Internet and load them into two data frames. This yields a training dataset and a 20-observation testing dataset whose predictions will be submitted to Coursera.

library(caret)
## Loading required package: lattice
## Loading required package: ggplot2
library(rpart)
library(rpart.plot)
library(RColorBrewer)
library(rattle)
## Rattle: A free graphical interface for data mining with R.
## Version 4.1.0 Copyright (c) 2006-2015 Togaware Pty Ltd.
## Type 'rattle()' to shake, rattle, and roll your data.
library(randomForest)
## randomForest 4.6-12
## Type rfNews() to see new features/changes/bug fixes.
## 
## Attaching package: 'randomForest'
## The following object is masked from 'package:ggplot2':
## 
##     margin
library(gbm)
## Loading required package: survival
## 
## Attaching package: 'survival'
## The following object is masked from 'package:caret':
## 
##     cluster
## Loading required package: splines
## Loading required package: parallel
## Loaded gbm 2.1.1
library(plyr)
# Download the training data
download.file(url = "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv", 
              destfile = "./pml-training.csv", method = "curl")

# Load the training dataset
dt_training <- read.csv("./pml-training.csv", na.strings=c("NA","#DIV/0!",""))

# Download the testing data
download.file(url = "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv", 
              destfile = "./pml-testing.csv", method = "curl")

# Load the testing dataset
dt_testing <- read.csv("./pml-testing.csv", na.strings=c("NA","#DIV/0!",""))
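
On repeated runs, the downloads can be skipped when the files are already on disk. A small convenience sketch (not part of the original workflow; the same guard applies to pml-testing.csv):

# Only download when the file is not already present locally
if (!file.exists("./pml-training.csv")) {
  download.file("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv",
                destfile = "./pml-training.csv", method = "curl")
}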

Cleaning the Data

In this section, we remove all columns that contain NA values and drop features that are not present in the testing dataset. The features containing NA values are the variance, mean, and standard deviation (SD) computed within each time window for each sensor feature. Since the testing dataset contains no time windows, these summary values are of no use and can be disregarded. We also remove the first 7 features, since they relate to the time series or are not numeric.

features <- names(dt_testing[,colSums(is.na(dt_testing)) == 0])[8:59]

# Only use features used in testing cases.
dt_training <- dt_training[,c(features,"classe")]
dt_testing <- dt_testing[,c(features,"problem_id")]

dim(dt_training); dim(dt_testing);
## [1] 19622    53
## [1] 20 53
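
As a quick sanity check (a minimal sketch, not in the original workflow), we can confirm that no NA values remain after the filtering:

# Both sums should be zero after the cleaning step above
stopifnot(sum(is.na(dt_training)) == 0, sum(is.na(dt_testing)) == 0)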

Partitioning the Dataset

Following the recommendation in the Practical Machine Learning course, we split our data into a training set (60% of the cases) and a testing set (40% of the cases; the latter should not be confused with the data in the pml-testing.csv file). This will allow us to estimate the out-of-sample error of our predictors.

set.seed(12345)

inTrain <- createDataPartition(dt_training$classe, p=0.6, list=FALSE)
training <- dt_training[inTrain,]
testing <- dt_training[-inTrain,]

dim(training); dim(testing);
## [1] 11776    53
## [1] 7846   53
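
Because createDataPartition samples within each level of classe, the class proportions should be nearly identical in both splits. A quick check (a minimal sketch):

# Stratified sampling should keep class proportions aligned across the splits
round(prop.table(table(training$classe)), 3)
round(prop.table(table(testing$classe)), 3)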

Building the Decision Tree Model

Using a decision tree, we should not expect the accuracy to be high; anything around 80% would be acceptable.

set.seed(12345)
# xval = 10 requests rpart's internal 10-fold cross-validation for pruning
modFitDT <- rpart(classe ~ ., data = training, method = "class",
                  control = rpart.control(xval = 10))
fancyRpartPlot(modFitDT)

Predicting with the Decision Tree Model

set.seed(12345)

prediction <- predict(modFitDT, testing, type = "class")
confusionMatrix(prediction, testing$classe)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1879  260   30   69   66
##          B   56  759   88   34   54
##          C  105  340 1226  354  234
##          D  155  132   23  807   57
##          E   37   27    1   22 1031
## 
## Overall Statistics
##                                           
##                Accuracy : 0.7267          
##                  95% CI : (0.7167, 0.7366)
##     No Information Rate : 0.2845          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.6546          
##  Mcnemar's Test P-Value : < 2.2e-16       
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.8418  0.50000   0.8962   0.6275   0.7150
## Specificity            0.9243  0.96334   0.8405   0.9441   0.9864
## Pos Pred Value         0.8155  0.76589   0.5427   0.6874   0.9222
## Neg Pred Value         0.9363  0.88928   0.9746   0.9282   0.9389
## Prevalence             0.2845  0.19347   0.1744   0.1639   0.1838
## Detection Rate         0.2395  0.09674   0.1563   0.1029   0.1314
## Detection Prevalence   0.2937  0.12631   0.2879   0.1496   0.1425
## Balanced Accuracy      0.8831  0.73167   0.8684   0.7858   0.8507
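
At about 72.7% accuracy (kappa 0.65) on the held-out partition, the decision tree falls short of even the modest 80% target, which motivates the ensemble models below.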

Building the Random Forest Model

With a random forest, the out-of-sample error should be small. We will estimate it using the 40% testing sample and expect an error of less than 3%.

set.seed(12345)

# randomForest estimates its generalization error internally via out-of-bag
# (OOB) sampling, so no explicit cross-validation setup is needed here
modFitRF <- randomForest(classe ~ ., data = training, importance = TRUE)

plot(modFitRF)
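
The out-of-bag (OOB) error computed while the forest grows provides an internal estimate of the out-of-sample error. It can be read off the fitted object (a minimal sketch using the err.rate matrix stored by randomForest):

# OOB error rate after the final tree has been added
modFitRF$err.rate[modFitRF$ntree, "OOB"]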

Building the Boosting Model

modFitBoost <- train(classe ~ ., method = "gbm", data = training,
                    verbose = F,
                    trControl = trainControl(method = "cv", number = 10))

modFitBoost
## Stochastic Gradient Boosting 
## 
## 11776 samples
##    52 predictor
##     5 classes: 'A', 'B', 'C', 'D', 'E' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 10597, 10598, 10597, 10600, 10598, 10598, ... 
## Resampling results across tuning parameters:
## 
##   interaction.depth  n.trees  Accuracy   Kappa      Accuracy SD  Kappa SD   
##   1                   50      0.7512747  0.6848767  0.011603101  0.014912249
##   1                  100      0.8214149  0.7739198  0.011167472  0.014234663
##   1                  150      0.8531755  0.8141765  0.008960435  0.011409532
##   2                   50      0.8513943  0.8116228  0.016334281  0.020843049
##   2                  100      0.9053162  0.8801392  0.011106669  0.014081010
##   2                  150      0.9309611  0.9126161  0.008237664  0.010438578
##   3                   50      0.8951270  0.8671789  0.009583081  0.012189374
##   3                  100      0.9399654  0.9240308  0.009966005  0.012630218
##   3                  150      0.9606002  0.9501563  0.007081194  0.008961333
## 
## Tuning parameter 'shrinkage' was held constant at a value of 0.1
## 
## Tuning parameter 'n.minobsinnode' was held constant at a value of 10
## Accuracy was used to select the optimal model using  the largest value.
## The final values used for the model were n.trees = 150,
##  interaction.depth = 3, shrinkage = 0.1 and n.minobsinnode = 10.
plot(modFitBoost)
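
The selected tuning combination can also be extracted programmatically from the fitted caret object (a small convenience sketch, restating what the printout above reports):

# Tuning parameters chosen by cross-validated accuracy
modFitBoost$bestTune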

Predicting with the Random Forest Model

prediction <- predict(modFitRF, testing, type = "class")
confusionMatrix(prediction, testing$classe)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 2229    7    0    0    0
##          B    3 1506    7    0    0
##          C    0    5 1360   15    2
##          D    0    0    1 1269    3
##          E    0    0    0    2 1437
## 
## Overall Statistics
##                                           
##                Accuracy : 0.9943          
##                  95% CI : (0.9923, 0.9958)
##     No Information Rate : 0.2845          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.9927          
##  Mcnemar's Test P-Value : NA              
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.9987   0.9921   0.9942   0.9868   0.9965
## Specificity            0.9988   0.9984   0.9966   0.9994   0.9997
## Pos Pred Value         0.9969   0.9934   0.9841   0.9969   0.9986
## Neg Pred Value         0.9995   0.9981   0.9988   0.9974   0.9992
## Prevalence             0.2845   0.1935   0.1744   0.1639   0.1838
## Detection Rate         0.2841   0.1919   0.1733   0.1617   0.1832
## Detection Prevalence   0.2850   0.1932   0.1761   0.1622   0.1834
## Balanced Accuracy      0.9987   0.9953   0.9954   0.9931   0.9981

The random forest model performed very well on the held-out testing set, with about 99.4% accuracy, i.e. an estimated out-of-sample error of under 1%.

Predicting with the Boosting Model

prediction <- predict(modFitBoost, testing)
confusionMatrix(prediction, testing$classe)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 2205   43    0    4    0
##          B   19 1424   32    3   15
##          C    3   44 1315   50   14
##          D    3    2   21 1219   22
##          E    2    5    0   10 1391
## 
## Overall Statistics
##                                           
##                Accuracy : 0.9628          
##                  95% CI : (0.9584, 0.9669)
##     No Information Rate : 0.2845          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.9529          
##  Mcnemar's Test P-Value : 1.205e-07       
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.9879   0.9381   0.9613   0.9479   0.9646
## Specificity            0.9916   0.9891   0.9829   0.9927   0.9973
## Pos Pred Value         0.9791   0.9538   0.9222   0.9621   0.9879
## Neg Pred Value         0.9952   0.9852   0.9917   0.9898   0.9921
## Prevalence             0.2845   0.1935   0.1744   0.1639   0.1838
## Detection Rate         0.2810   0.1815   0.1676   0.1554   0.1773
## Detection Prevalence   0.2870   0.1903   0.1817   0.1615   0.1795
## Balanced Accuracy      0.9898   0.9636   0.9721   0.9703   0.9810
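
At about 96.3% accuracy, boosting performs well but remains behind the random forest's 99.4%, so the random forest will be used for the final submission.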

Predicting with the Testing Data (pml-testing.csv)

Decision Tree Prediction

# Without type = "class", predict.rpart returns class probabilities
predictionDT <- predict(modFitDT, dt_testing)
predictionDT
##             A           B           C          D          E
## 1  0.02992958 0.058098592 0.593309859 0.24295775 0.07570423
## 2  0.42523364 0.308411215 0.093457944 0.04439252 0.12850467
## 3  0.04610131 0.138303927 0.564598748 0.13431986 0.11667615
## 4  0.97005988 0.029940120 0.000000000 0.00000000 0.00000000
## 5  0.72211350 0.211350294 0.015655577 0.01369863 0.03718200
## 6  0.02996255 0.018726592 0.000000000 0.00000000 0.95131086
## 7  0.14067914 0.131670132 0.018711019 0.64656965 0.06237006
## 8  0.14067914 0.131670132 0.018711019 0.64656965 0.06237006
## 9  0.99153439 0.008465608 0.000000000 0.00000000 0.00000000
## 10 0.77419355 0.145161290 0.007444169 0.04218362 0.03101737
## 11 0.42523364 0.308411215 0.093457944 0.04439252 0.12850467
## 12 0.04610131 0.138303927 0.564598748 0.13431986 0.11667615
## 13 0.72211350 0.211350294 0.015655577 0.01369863 0.03718200
## 14 0.99153439 0.008465608 0.000000000 0.00000000 0.00000000
## 15 0.04610131 0.138303927 0.564598748 0.13431986 0.11667615
## 16 0.09090909 0.057575758 0.000000000 0.08484848 0.76666667
## 17 0.72211350 0.211350294 0.015655577 0.01369863 0.03718200
## 18 0.14067914 0.131670132 0.018711019 0.64656965 0.06237006
## 19 0.02992958 0.058098592 0.593309859 0.24295775 0.07570423
## 20 0.05384615 0.930769231 0.000000000 0.00000000 0.01538462
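
If hard class labels are preferred over the probability matrix above, predict.rpart accepts type = "class" (a one-line sketch):

# Return the predicted class label for each test case
predict(modFitDT, dt_testing, type = "class")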

Random Forest Prediction

predictionRF <- predict(modFitRF, dt_testing)
predictionRF
##  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 
##  B  A  B  A  A  E  D  B  A  A  B  C  B  A  E  E  A  B  B  B 
## Levels: A B C D E

Boosting Prediction

predictionBoost <- predict(modFitBoost, dt_testing)
predictionBoost
##  [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
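
The random forest and boosting models agree on all 20 test cases, which adds confidence to the submitted predictions.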

Submission file

As can be seen from the confusion matrix, the random forest model is very accurate, at about 99.4%. Because of that, we expected nearly all of the submitted test cases to be correct. It turned out they were all correct.

Prepare the submission.

# Write each prediction to its own text file, as required by the
# Coursera submission page
pml_write_files <- function(x) {
  n <- length(x)
  for (i in 1:n) {
    filename <- paste0("problem_id_", i, ".txt")
    write.table(x[i], file = filename, quote = FALSE,
                row.names = FALSE, col.names = FALSE)
  }
}

pml_write_files(predictionRF)