
Friday, April 22, 2022

Data Science applications

 If you are an experienced data scientist, you have probably encountered some of these problems before. If you are a beginner, these use cases will introduce data science ideas that apply across the industry. Most organizations face a similar, slowly changing set of data science challenges, and the specific use cases you take on will depend on your planning needs and priorities. It is worth understanding established use cases well, because the same patterns can be condensed and adapted to new problems; you will still occasionally run into scenarios that no article or course has covered. The appeal of data science is that the same core techniques scale to many different problems with relatively little extra effort.

 1. Credit Card Fraud Detection

In this scenario, we would build a supervised model that classifies each transaction as either fraud or not fraud. In an ideal world, you would have many labeled examples of what fraudulent transactions look like in your data.
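
A minimal sketch of such a classifier with scikit-learn, assuming a labeled table of numeric transaction features with an is_fraud column (the file and column names are illustrative, not from any particular dataset):

# Minimal sketch of a supervised fraud classifier (file and column names are illustrative).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

transactions = pd.read_csv("transactions.csv")    # hypothetical file of numeric features
X = transactions.drop(columns=["is_fraud"])       # transaction features
y = transactions["is_fraud"]                      # 1 = fraud, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# class_weight="balanced" helps because fraud examples are usually rare.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))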

 2. Customer Segmentation

In this scenario, clustering (unsupervised learning) is preferable to classification. K-Means is the classic clustering algorithm. The task is unsupervised because you have no labels and do not know in advance which groups exist; instead, you want to discover natural customer segments based on the characteristics they share.
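
A minimal K-Means sketch with scikit-learn, assuming a customer table with a few numeric behavioural features (the feature names are illustrative):

# Minimal K-Means segmentation sketch (file and feature names are illustrative).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.read_csv("customers.csv")          # hypothetical file
features = customers[["annual_spend", "visits_per_month", "avg_basket_size"]]

# K-Means uses Euclidean distance, so scale the features first.
scaled = StandardScaler().fit_transform(features)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)   # k chosen up front
customers["segment"] = kmeans.fit_predict(scaled)
print(customers.groupby("segment").mean(numeric_only=True)) # profile each segment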

 3. Customer Churn Prediction

Classification techniques can help with this problem, which is similar in structure to credit card fraud detection: we collect customer information together with a label for each customer, such as churn or no-churn, and train a model to predict that label.
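
A minimal sketch, assuming a table of numeric customer features with a churned label (column names are illustrative); predict_proba is used to rank customers by churn risk rather than only producing a hard label:

# Minimal churn-prediction sketch (file and column names are illustrative).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

customers = pd.read_csv("customer_history.csv")   # hypothetical file of numeric features
X = customers.drop(columns=["churned"])           # usage and account features
y = customers["churned"]                          # 1 = churn, 0 = no churn

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank customers by predicted churn probability.
churn_risk = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, churn_risk))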

 4. Sales Forecasting

Sales forecasting differs the most from the three use cases discussed so far. Here we can apply deep learning to anticipate future purchases, for example with an LSTM (Long Short-Term Memory) network, a recurrent architecture designed for sequential data such as sales over time.
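
A minimal Keras sketch of the idea, using a randomly generated series as a stand-in for a real sales history and an illustrative window size of 12 time steps:

# Minimal LSTM forecasting sketch (window size and layer sizes are illustrative).
import numpy as np
from tensorflow import keras

def make_windows(series, window=12):
    # Turn a 1-D series into (samples, timesteps, 1) windows and next-step targets.
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

sales = np.random.rand(200).astype("float32")     # placeholder for a real monthly sales series
X, y = make_windows(sales)

model = keras.Sequential([
    keras.layers.Input(shape=(12, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=16, verbose=0)

next_value = model.predict(X[-1:], verbose=0)      # forecast the next step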


Friday, October 15, 2021

Big Data Computing: Quiz Assignment-VI Solutions (Week-6)

1. Which of the following is required by K-means clustering?
A. Defined distance metric
B. Number of clusters
C. Initial guess as to cluster centroids
D. All of the mentioned
Answer: D) All of the mentioned
Explanation: K-means follows a partitioning approach: it needs a distance metric, the number of clusters k, and an initial guess for the cluster centroids.
 
 
2. Identify the correct statement in context of Regressive model of Machine Learning.
A. Regressive model predicts a numeric value instead of category.
B. Regressive model organizes similar item in your dataset into groups.
C. Regressive model comes up with a set of rules to capture associations between items or events.
D. None of the Mentioned
Answer: A) Regressive model predicts a numeric value instead of category.
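
A tiny illustration of the difference: a regression model returns a continuous number rather than a class label (the numbers below are made up):

# Minimal regression sketch: the model outputs a numeric value, not a category.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # e.g. years of experience (illustrative)
y = np.array([30.0, 35.0, 41.0, 45.0])       # e.g. salary in thousands (illustrative)

reg = LinearRegression().fit(X, y)
print(reg.predict([[5.0]]))                  # a continuous prediction, not a class label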
 
 
3. Which of the following tasks can be best solved using Clustering?
A. Predicting the amount of rainfall based on various cues
B. Training a robot to solve a maze
C. Detecting fraudulent credit card transactions
D. All of the mentioned
Answer: C) Detecting fraudulent credit card transactions
Explanation: Unusual credit card transactions can be grouped and flagged as potential fraud by clustering transactions with unsupervised learning.
 
 
4. Identify the correct method for choosing the value of ‘k’ in the k-means algorithm?
A. Dimensionality reduction
B. Elbow method
C. Both Dimensionality reduction and Elbow method
D. Data partitioning
Answer: C) Both Dimensionality reduction and Elbow method
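
A short sketch of the elbow method with scikit-learn: fit K-Means for several values of k on synthetic data and look for the point where the inertia stops dropping sharply:

# Minimal elbow-method sketch: track inertia as k grows and look for the "elbow".
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)   # synthetic data for illustration

inertias = []
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    inertias.append(km.inertia_)   # within-cluster sum of squared distances

for k, val in zip(range(1, 10), inertias):
    print(k, round(val, 1))        # the k where the drop flattens out is the elbow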
 
 
5. Identify the correct statement(s) in context of overfitting in decision trees:
Statement I: The idea of Pre-pruning is to stop tree induction before a fully grown tree is built, that perfectly fits the training data.
Statement II: The idea of Post-pruning is to grow a tree to its maximum size and then remove the nodes using a top-bottom approach.
A. Only statement I is true
B. Only statement II is true
C. Both statements are true
D. Both statements are false
Answer: A) Only statement I is true
Explanation: With pre-pruning, the idea is to stop tree induction before a fully grown tree that perfectly fits the training data is built.
In post-pruning, the tree is grown to its maximum size and then pruned by removing nodes using a bottom-up (not top-bottom) approach, which is why Statement II is false.
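
A short scikit-learn sketch contrasting the two ideas on a standard toy dataset; the depth, leaf-size, and alpha values are illustrative:

# Sketch of pre-pruning vs. post-pruning with scikit-learn decision trees.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Pre-pruning: stop tree induction early with depth / leaf-size limits.
pre = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=42)
pre.fit(X_train, y_train)

# Post-pruning: grow a full tree, then prune bottom-up via cost-complexity pruning.
path = DecisionTreeClassifier(random_state=42).cost_complexity_pruning_path(X_train, y_train)
post = DecisionTreeClassifier(ccp_alpha=path.ccp_alphas[len(path.ccp_alphas) // 2],
                              random_state=42)
post.fit(X_train, y_train)

print("pre-pruned :", pre.score(X_test, y_test))
print("post-pruned:", post.score(X_test, y_test))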
 
 
6. Which of the following options is/are true for K-fold cross-validation?
1. Increase in K will result in higher time required to cross validate the result.
2. Higher values of K will result in higher confidence on the cross-validation result as compared to lower value of K.
3. If K=N, then it is called Leave one out cross validation, where N is the number of observations.
A. 1 and 2
B. 2 and 3
C. 1 and 3
D. 1, 2 and 3
Answer: D) 1, 2 and 3
Explanation: A larger k value means less bias towards the true expected error estimate (because the training fold will be closer to the total dataset) and higher runtime (when you approach the edge case: Leave-One-Out CV). We must also consider the variance between the accuracy of k folds when selecting k.
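
A short scikit-learn sketch comparing 5-fold cross-validation with the K = N (leave-one-out) case on a toy dataset:

# Sketch of K-fold cross-validation; K = N gives leave-one-out CV.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores_5 = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=42))
print("5-fold mean accuracy:", scores_5.mean())

# Larger K costs more compute; K = N (leave-one-out) is the extreme case.
scores_loo = cross_val_score(model, X, y, cv=LeaveOneOut())
print("Leave-one-out mean accuracy:", scores_loo.mean())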
 
 
7. Imagine you are working on a project which is a binary classification problem. You trained a model on the training dataset and obtained the confusion matrix below on the validation dataset.

                  Predicted: Negative   Predicted: Positive
Actual: Negative        TN = 50               FP = 10
Actual: Positive        FN = 5                TP = 100

Based on the above confusion matrix, which of the options below are correct?
1. Accuracy is ~0.91
2. Misclassification rate is ~ 0.91
3. False positive rate is ~0.95
4. True positive rate is ~0.95
A. 1 and 3
B. 2 and 4
C. 2 and 3
D. 1 and 4 
Answer: D) 1 and 4 
Explanation:
The accuracy (rate of correct classification) is (50 + 100)/165, which is approximately 0.91.
The true positive rate measures how often the positive class is predicted correctly, so it is 100/105 ≈ 0.95; it is also known as “Sensitivity” or “Recall”.
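
A small sketch that recomputes these numbers from the four confusion-matrix counts used in the explanation:

# Recomputing the quoted metrics from the confusion-matrix counts.
TP, TN, FP, FN = 100, 50, 10, 5                      # counts from the matrix above

accuracy = (TP + TN) / (TP + TN + FP + FN)           # ~0.91
misclassification_rate = 1 - accuracy                # ~0.09, not 0.91
true_positive_rate = TP / (TP + FN)                  # ~0.95 (sensitivity / recall)
false_positive_rate = FP / (FP + TN)                 # ~0.17, not 0.95

print(round(accuracy, 2), round(true_positive_rate, 2), round(false_positive_rate, 2))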
 
 
8. Identify the correct statement(s) in context of machine learning approaches:
Statement I: In supervised approaches, the target that the model is predicting is unknown or unavailable. This means that you have unlabeled data.
Statement II: In unsupervised approaches the target, which is what the model is predicting, is provided. This is referred to as having labeled data because the target is labeled for every sample that you have in your data set.
A. Only Statement I is true
B. Only Statement II is true
C. Both Statements are false
D. Both Statements are true
Answer: C) Both Statements are false
Explanation: The correct statements are:
Statement I: In the supervised approach, the target that the model predicts is provided. This is called having labeled data, because the target is labeled for every sample in your data set.
Statement II: In the unsupervised approach, the target that the model would predict is unknown or unavailable. This means you have unlabeled data.
