Machine Learning with Python Exam Answers 2021

Entropy is a measurable physical property that is most often associated with disorder, randomness, or uncertainty. The term and concept are used in a wide range of fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, to the principles of information theory. It has wide-ranging applications in chemistry and physics, in biological systems and their relation to life, and in cosmology, economics, sociology, weather science, climate change, and information systems, including telecommunications.

1. You can define Jaccard as the size of the intersection divided by the size of the union of two label sets.

  1. True
  2. False
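
The definition in question 1 (the answer is True) can be made concrete with a few lines of plain Python; the function name below is illustrative, not from the course:

```python
def jaccard(y_true, y_pred):
    """Jaccard index: size of the intersection divided by the size of the union."""
    a, b = set(y_true), set(y_pred)
    return len(a & b) / len(a | b)

print(jaccard({1, 2, 3}, {2, 3, 4}))  # 2 shared labels, 4 distinct -> 0.5
```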

2. When building a decision tree, we want to split the nodes in a way that increases entropy and decreases information gain.

  1. True
  2. False

3. Which of the following statements are true? (Select all that apply.)

  1. K needs to be initialized in K-Nearest Neighbor.
  2. Supervised learning works on labelled data.
  3. A high value of K in KNN creates a model that is over-fit
  4. KNN takes a bunch of unlabelled points and uses them to predict unknown points.
  5. Unsupervised learning works on unlabelled data.

4. To calculate a model’s accuracy using the test set, you pass the test set to your model to predict the class labels, and then compare the predicted values with actual values.

  1. True
  2. False
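
The evaluation described in question 4 (the answer is True) is easy to sketch in plain Python; the names here are ours:

```python
def accuracy(y_true, y_pred):
    # fraction of predicted class labels that match the actual labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 3 of 4 correct -> 0.75
```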

5. Which is the definition of entropy?

  1. The purity of each node in a decision tree.
  2. Information collected that can increase the level of certainty in a particular prediction.
  3. The information that is used to randomly select a subset of data.
  4. The amount of information disorder in the data.
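
Option 4 is the correct definition; it can be illustrated with a small Python sketch of Shannon entropy (helper names are ours):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of the class distribution, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

print(entropy(["yes", "yes", "no", "no"]))  # maximum disorder -> 1.0
```

A node containing only one class has entropy 0, which is why low entropy signals a pure node.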

6. Which of the following is true about hierarchical linkages?

  1. Average linkage is the average distance of each point in one cluster to every point in another cluster
  2. Complete linkage is the shortest distance between a point in two clusters
  3. Centroid linkage is the distance between two randomly generated centroids in two clusters
  4. Single linkage is the distance between any points in two clusters
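
For question 6, average linkage (the correct option) can be sketched for one-dimensional points; a real implementation would use a full distance metric over feature vectors, so treat this as a simplified illustration:

```python
def average_linkage(cluster_a, cluster_b):
    """Average distance of each point in one cluster to every point in the other."""
    dists = [abs(a - b) for a in cluster_a for b in cluster_b]
    return sum(dists) / len(dists)

print(average_linkage([0, 2], [10, 12]))  # mean of 10, 12, 8, 10 -> 10.0
```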

7. The goal of regression is to build a model to accurately predict the continuous value of a dependent variable for an unknown case.

  1. True
  2. False
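
The idea in question 7 (the answer is True) can be illustrated with a minimal ordinary-least-squares fit in plain Python; the function name and data are ours:

```python
def fit_line(xs, ys):
    """Ordinary least squares for the straight line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lying exactly on y = 1 + 2x
print(a, b)  # 1.0 2.0
```

Once a and b are fit, the continuous value for an unknown case x is predicted as a + b * x.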

8. Which of the following statements are true about linear regression? (Select all that apply)

  1. With linear regression, you can fit a line through the data.
  2. y = a + b·x₁ is the equation for a straight line, which can be used to predict the continuous value y.
  3. In y=θ^T.X, θ is the feature set and X is the “weight vector” or “confidences of the equation”, with both of these terms used interchangeably.

9. The Sigmoid function is the main part of logistic regression, where the Sigmoid of θ^T·X gives us the probability of a point belonging to a class, instead of the value of y directly.

  1. True
  2. False
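
The Sigmoid function from question 9 (the answer is True) is a one-liner in Python:

```python
from math import exp

def sigmoid(z):
    # squashes any real-valued score into a probability in (0, 1)
    return 1 / (1 + exp(-z))

# sigmoid(theta^T . x) is read as the probability the point is in the positive class
print(sigmoid(0))  # a score of 0 -> probability 0.5
```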

10. In comparison to supervised learning, unsupervised learning has:

  1. Less tests (evaluation approaches)
  2. More models
  3. A better controlled environment
  4. More tests (evaluation approaches), but less models

11. The points that are classified by Density-Based Clustering and do not belong to any cluster are outliers.

  1. True
  2. False

12. Which of the following is false about Simple Linear Regression?

  1. It does not require tuning parameters
  2. It is highly interpretable
  3. It is fast
  4. It is used for finding outliers

13. Which one of the following statements is the most accurate?

  1. Machine Learning is the branch of AI that covers the statistical and learning part of artificial intelligence.
  2. Deep Learning is a branch of Artificial Intelligence where computers learn by being explicitly programmed.
  3. Artificial Intelligence is a branch of Machine Learning that covers the statistical part of Deep Learning.
  4. Artificial Intelligence is the branch of Deep Learning that allows us to create models.

14. Which of the following are types of supervised learning?

  1. Classification
  2. Regression
  3. KNN
  4. K-Means
  5. Clustering

15. A Bottom-Up version of hierarchical clustering is known as Divisive clustering. It is a more popular method than the Agglomerative method.

  1. True
  2. False

16. Select all the true statements related to Hierarchical clustering and K-Means.

  1. Hierarchical clustering does not require the number of clusters to be specified.
  2. Hierarchical clustering always generates different clusters, whereas k-Means returns the same clusters each time it is run.
  3. K-Means is more efficient than Hierarchical clustering for large datasets.

17. What is a content-based recommendation system?

  1. A content-based recommendation system tries to recommend items to users based on a profile built from their preferences and tastes.
  2. A content-based recommendation system tries to recommend items based on similarity among items.
  3. A content-based recommendation system tries to recommend items based on the similarity of users when buying, watching, or enjoying something.

18. Before running Agglomerative clustering, you need to compute a distance/proximity matrix, which is an n by n table of the distances between every pair of data points in your dataset.

  1. True
  2. False

19. Which of the following statements are true about DBSCAN? (Select all that apply)

  1. DBSCAN can be used when examining spatial data.
  2. DBSCAN can be applied to tasks with arbitrary shaped clusters, or clusters within clusters.
  3. DBSCAN is a hierarchical algorithm that finds core and border points.
  4. DBSCAN can find any arbitrary shaped cluster without getting affected by noise.

20. In recommender systems, “cold start” happens when you have a large dataset of users who have rated only a limited number of items.

  1. True
  2. False

21. Machine Learning uses algorithms that can learn from data without relying on explicitly programmed methods.

  1. True
  2. False

22. Which are the two types of Supervised learning techniques?

  1. Classification and Clustering
  2. Classification and K-Means
  3. Regression and Clustering
  4. Regression and Partitioning
  5. Classification and Regression

23. Which of the following statements best describes the Python scikit library?

  1. A library for scientific and high-performance computation.
  2. A collection of algorithms and tools for machine learning.
  3. A popular plotting package that provides 2D plotting as well as 3D plotting.
  4. A library that provides high-performance, easy to use data structures.
  5. A collection of numerical algorithms and domain-specific toolboxes.

24. Train and Test on the Same Dataset might have a high training accuracy, but its out-of-sample accuracy can be low.

  1. True
  2. False

25. Which of the following matrices can be used to show the results of model accuracy evaluation or the model’s ability to correctly predict or separate the classes?

  1. Confusion matrix
  2. Evaluation matrix
  3. Accuracy matrix
  4. Error matrix
  5. Identity matrix
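
The confusion matrix (the correct answer to question 25) can be built in a few lines of plain Python; the helper name is ours:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows are actual classes, columns are predicted classes."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

m = confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1], labels=[0, 1])
print(m)  # [[1, 1], [0, 2]]: one class-0 case was misclassified as class 1
```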

26. When should we use Multiple Linear Regression?

  1. When we would like to identify the strength of the effect that the independent variables have on a dependent variable.
  2. When there are multiple dependent variables.

27. In K-Nearest Neighbors, which of the following is true:

  1. A very high value of K (ex. K = 100) produces an overly generalized model, while a very low value of K (ex. K = 1) produces a highly complex model.
  2. A very high value of K (ex. K = 100) produces a model that is better than a very low value of K (ex. K = 1)
  3. A very high value of k (ex. k = 100) produces a highly complex model, while a very low value of K (ex. K = 1) produces an overly generalized model.
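
The effect of K described in option 1 (the correct one) can be seen with a tiny one-dimensional KNN sketch; the data and names are made up for illustration:

```python
from collections import Counter

def knn_predict(train, query, k):
    """train is a list of (x, label) pairs with scalar x; majority vote of the k nearest."""
    nearest = sorted(train, key=lambda pt: abs(pt[0] - query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [(1, "a"), (2, "a"), (3, "b"), (10, "b"), (11, "b"), (12, "b")]
print(knn_predict(train, 2.4, k=1))  # only the single nearest point votes -> "a"
print(knn_predict(train, 2.4, k=5))  # a larger K lets the overall "b" majority dominate -> "b"
```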

28. A classifier with lower log loss has better accuracy.

  1. True
  2. False

29. When building a decision tree, we want to split the nodes in a way that decreases entropy and increases information gain.

  1. True
  2. False
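
The split criterion in question 29 (the answer is True) can be made concrete with a small Python sketch; the helper names are ours:

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """Parent-node entropy minus the size-weighted entropy of the child nodes."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

parent = ["yes", "yes", "no", "no"]
print(information_gain(parent, [["yes", "yes"], ["no", "no"]]))  # perfect split -> 1.0
```

A split that leaves pure children drives entropy to zero and so maximizes information gain, which is exactly what a decision tree looks for.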

30. Which one is NOT TRUE about k-means clustering?

  1. k-means divides the data into non-overlapping clusters without any cluster-internal structure.
  2. The objective of k-means is to form clusters in such a way that similar samples go into a cluster, and dissimilar samples fall into different clusters.
  3. As k-means is an iterative algorithm, it guarantees that it will always converge to the global optimum.

31. Customer Segmentation is a supervised way of clustering data, based on the similarity of customers to each other.

  1. True
  2. False

32. How is a center point (centroid) picked for each cluster in k-means?

  1. We can randomly choose some observations out of the data set and use these observations as the initial means.
  2. We can select the centroid through correlation analysis.
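
Option 1 (the correct one) can be sketched directly in plain Python; the seed and names are illustrative:

```python
import random

def init_centroids(points, k, seed=0):
    """Randomly pick k observations from the data set as the initial means."""
    return random.Random(seed).sample(points, k)

points = [(1.0, 2.0), (1.5, 1.8), (8.0, 8.0), (9.0, 9.5), (0.5, 1.0)]
centroids = init_centroids(points, k=2)
print(centroids)  # two of the original observations, used as starting centroids
```

Because the start is random, different runs can converge to different clusterings, which is why k-means does not guarantee the global optimum.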

33. Collaborative filtering is based on relationships between products and people’s rating patterns.

  1. True
  2. False

34. Which one is TRUE about Content-based recommendation systems?

  1. Content-based recommendation system tries to recommend items to the users based on their profile.
  2. In content-based approach, the recommendation process is based on similarity of users.
  3. In content-based recommender systems, similarity of users should be measured based on the similarity of the actions of users.

35. Which one is correct about user-based and item-based collaborative filtering?

  1. In the item-based approach, the recommendation is based on the profile of a user that shows the user's interest in specific items.
  2. In the user-based approach, the recommendation is based on users in the same neighborhood, with whom he/she shares common preferences.

Recommender systems use a technique called collaborative filtering (CF). Collaborative filtering has two senses: a narrow one and a broader one. In the narrower sense, collaborative filtering is a method of making automatic predictions (filtering) about a user’s interests by collecting preference or taste information from many users (collaborating). The collaborative filtering approach rests on the premise that if two people share an opinion on one issue, A is more likely to share B’s opinion on a different issue than a randomly chosen person.
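
A minimal sketch of that user-similarity premise, in plain Python with made-up ratings (names and the choice of cosine similarity are ours):

```python
def similarity(ratings_a, ratings_b):
    """Cosine similarity over the items both users have rated."""
    shared = set(ratings_a) & set(ratings_b)
    if not shared:
        return 0.0
    dot = sum(ratings_a[i] * ratings_b[i] for i in shared)
    norm_a = sum(ratings_a[i] ** 2 for i in shared) ** 0.5
    norm_b = sum(ratings_b[i] ** 2 for i in shared) ** 0.5
    return dot / (norm_a * norm_b)

alice = {"movie1": 5, "movie2": 4, "movie3": 1}
bob   = {"movie1": 4, "movie2": 5, "movie3": 1}
eve   = {"movie1": 1, "movie2": 1, "movie3": 5}

# Alice's ratings agree with Bob's, so items Bob liked are better candidates for Alice
print(similarity(alice, bob) > similarity(alice, eve))  # True
```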
