A. Defined distance metric
B. Number of clusters
C. Initial guess as to cluster centroids
D. All of the mentioned
Answer: D) All of the mentioned
Explanation: K-means is a partitioning clustering algorithm; to run it you must supply a distance metric, the number of clusters k, and an initial guess for the cluster centroids.
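As a minimal sketch (assuming scikit-learn and a small made-up dataset), all three requirements show up directly when calling K-means: the distance metric is Euclidean by construction, n_clusters sets the number of clusters, and init supplies the initial centroids.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D points, purely for illustration.
X = np.array([[1.0, 2.0], [1.5, 1.8], [0.9, 2.2],
              [8.0, 8.0], [8.2, 7.9], [7.8, 8.3]])

# K-means needs: a distance metric (Euclidean, built into the algorithm),
# the number of clusters, and initial centroids (here given explicitly).
initial_centroids = np.array([[1.0, 2.0], [8.0, 8.0]])
km = KMeans(n_clusters=2, init=initial_centroids, n_init=1, random_state=0)
labels = km.fit_predict(X)
print(labels, km.cluster_centers_)
```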
A. A regression model predicts a numeric value instead of a category.
B. A regression model organizes similar items in your dataset into groups.
C. A regression model comes up with a set of rules to capture associations between items or events.
D. None of the mentioned
Answer: A) A regression model predicts a numeric value instead of a category.
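A minimal sketch of the distinction (assuming scikit-learn; the training data is made up): a regression model outputs a continuous number rather than a class label.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: hours studied -> exam score (a numeric target, not a category).
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([52.0, 60.5, 68.0, 77.5, 84.0])

model = LinearRegression().fit(X, y)
print(model.predict([[6]]))  # predicts a numeric value (about 92.7 here)
```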
A. Predicting the amount of rainfall based on various cues
B. Training a robot to solve a maze
C. Detecting fraudulent credit card transactions
D. All of the mentioned
Answer: C) Detecting fraudulent credit card transactions
Explanation: Fraudulent credit card transactions can be detected with unsupervised learning, for example by clustering transactions and flagging the ones that fall outside the normal clusters as anomalies.
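One hedged illustration (assuming scikit-learn and toy transaction features): an unsupervised anomaly detector is fit on unlabeled transactions and flags outliers as potential fraud, without ever seeing fraud labels.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical, unlabeled transaction features: [amount, hour of day].
transactions = np.array([
    [12.5, 10], [9.9, 12], [15.0, 14], [11.2, 9],
    [13.7, 13], [950.0, 3],   # the last row is an unusual transaction
])

# No fraud labels are used: the model learns what "normal" looks like
# and marks points far from that as anomalies (-1 = outlier, 1 = inlier).
detector = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
print(detector.predict(transactions))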
A. Dimensionality reduction
B. Elbow method
C. Both Dimensionality reduction and Elbow method
D. Data partitioning
Answer: C) Both Dimensionality reduction and Elbow method
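As a small sketch of how the two techniques are used together (assuming scikit-learn; the data is made up): run K-means for several values of k and look for the "elbow" in the within-cluster sum of squares, and apply dimensionality reduction such as PCA so the clustered data can be inspected in two dimensions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical 5-D data with two loose groups.
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])

# Dimensionality reduction: project to 2-D for plotting/inspection.
X_2d = PCA(n_components=2).fit_transform(X)

# Elbow method: inertia (within-cluster sum of squares) for each k.
for k in range(1, 7):
    inertia = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    print(k, round(inertia, 1))  # look for the k where the drop levels off
```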
Statement I: The idea of pre-pruning is to stop tree induction before a fully grown tree that perfectly fits the training data is built.
Statement II: The idea of post-pruning is to grow a tree to its maximum size and then remove nodes using a top-down approach.
A. Only statement I is true
B. Only statement II is true
C. Both statements are true
D. Both statements are false
Answer: A) Only statement I is true
Explanation: With pre-pruning (early stopping), the idea is to stop tree induction before a fully grown tree that perfectly fits the training data is built.
In post-pruning, the tree is grown to its maximum size and then pruned by removing nodes using a bottom-up approach.
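As a hedged sketch (assuming scikit-learn and its built-in iris dataset), pre-pruning corresponds to stopping criteria passed at training time, while post-pruning corresponds to growing the full tree and then cutting it back, for example with cost-complexity pruning.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Pre-pruning: stop induction early with limits on depth / leaf size.
pre_pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5, random_state=0).fit(X, y)

# Post-pruning: grow the full tree, then prune it back bottom-up
# using cost-complexity pruning (a larger ccp_alpha removes more nodes).
full_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
path = full_tree.cost_complexity_pruning_path(X, y)
post_pruned = DecisionTreeClassifier(ccp_alpha=path.ccp_alphas[-2], random_state=0).fit(X, y)

print(pre_pruned.get_depth(), full_tree.get_depth(), post_pruned.get_depth())
```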
1. An increase in K will result in more time being required to cross-validate the result.
2. Higher values of K will result in higher confidence in the cross-validation result as compared to lower values of K.
3. If K=N, then it is called Leave one out cross validation, where N is the number of observations.
A. 1 and 2
B. 2 and 3
C. 1 and 3
D. 1, 2 and 3
Answer: D) 1, 2 and 3
Explanation: A larger K means less bias in the estimate of the true expected error (because each training fold is closer to the full dataset) but a higher runtime, with Leave-One-Out CV as the edge case. The variance between the accuracies of the K folds should also be considered when selecting K.
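A minimal sketch of these points (assuming scikit-learn and its built-in iris data): increasing K increases the number of model fits, and K = N reduces to Leave-One-Out cross-validation.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, KFold, LeaveOneOut

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# More folds -> more fits (higher runtime) but a less biased error estimate.
for k in (5, 10):
    scores = cross_val_score(model, X, y, cv=KFold(n_splits=k, shuffle=True, random_state=0))
    print(k, scores.mean().round(3))

# K = N: Leave-One-Out cross-validation (one fit per observation).
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(len(X), loo_scores.mean().round(3))
```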
1. Accuracy is ~0.91
2. Misclassification rate is ~0.91
3. False positive rate is ~0.95
4. True positive rate is ~0.95
A. 1 and 3
B. 2 and 4
C. 2 and 3
D. 1 and 4
Answer: D) 1 and 4
Explanation: The true positive rate is the proportion of actual positives that the model predicts correctly, so here it is 100/105 ≈ 0.95; it is also known as "Sensitivity" or "Recall".
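A short sketch of the formulas involved (pure Python; only the TP and FN counts of 100 and 5 come from the explanation above, while the TN and FP values are hypothetical fillers so the arithmetic runs):

```python
# Hypothetical confusion-matrix counts; only TP and FN come from the explanation above.
TP, FN = 100, 5
TN, FP = 900, 100

total = TP + FN + TN + FP
accuracy = (TP + TN) / total                  # fraction of all predictions that are correct
misclassification_rate = 1 - accuracy         # complement of accuracy
true_positive_rate = TP / (TP + FN)           # sensitivity / recall = 100/105 ~ 0.95
false_positive_rate = FP / (FP + TN)          # fraction of actual negatives predicted positive

print(round(accuracy, 2), round(misclassification_rate, 2),
      round(true_positive_rate, 2), round(false_positive_rate, 2))
```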
Statement I: In supervised approaches, the target that the model is predicting is unknown or unavailable. This means that you have unlabeled data.
Statement II: In unsupervised approaches the target, which is what the model is predicting, is provided. This is referred to as having labeled data because the target is labeled for every sample that you have in your data set.
A. Only Statement I is true
B. Only Statement II is true
C. Both Statements are false
D. Both Statements are true
Answer: C) Both Statements are false
Explanation: The correct statements are:
Statement I: In the supervised approach, the target that the model predicts is provided. This is referred to as having labeled data, because the target is labeled for every sample in your data set.
Statement II: In the unsupervised approach, the target that the model is predicting is unknown or unavailable. This means you have unlabeled data.
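A compact way to see the difference (a sketch assuming scikit-learn; the arrays are made up): a supervised estimator is fit on features together with a labeled target, while an unsupervised estimator is fit on the features alone.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1, 1.0], [0.2, 0.9], [0.9, 0.1], [1.0, 0.2]])  # hypothetical features
y = np.array([0, 0, 1, 1])                                      # labels, used only in the supervised case

supervised = LogisticRegression().fit(X, y)                            # target y is provided (labeled data)
unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # no target: unlabeled data

print(supervised.predict(X), unsupervised.labels_)
```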