
Free Practice Questions for CertNexus AIP-210 Exam

Pass4Future also provides interactive practice exam software for preparing effectively for the CertNexus Certified Artificial Intelligence Practitioner (AIP-210) exam. You are welcome to explore the free sample CertNexus AIP-210 exam questions below and to try the CertNexus AIP-210 practice test software.

Page:    1 / 14   
Total 92 questions

Question 1

Which of the following is NOT a valid cross-validation method?



Answer : D

Stratification is not a valid cross-validation method, but a technique to ensure that each subset of data has the same proportion of classes or labels as the original data. Stratification can be used in conjunction with cross-validation methods such as k-fold or leave-one-out to preserve the class distribution and reduce bias or variance in the validation results. Bootstrapping, k-fold, and leave-one-out are all valid cross-validation methods that use different ways of splitting and resampling the data to estimate the performance of a machine learning model.
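To illustrate the distinction, here is a minimal sketch (using scikit-learn, with toy data made up for this example) of stratification applied *within* k-fold cross-validation: `StratifiedKFold` is not a different validation method, it is k-fold with the class ratio preserved in every split.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Imbalanced toy labels: 8 positives, 2 negatives (80/20 ratio)
y = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
X = np.arange(len(y)).reshape(-1, 1)

skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # Each test fold preserves the 80/20 class ratio of the full dataset:
    # 4 positives and 1 negative per fold of 5 samples.
    print(fold, np.bincount(y[test_idx]))
```

A plain `KFold` on the same data could easily produce a test fold with no negatives at all, which is exactly the bias stratification is meant to prevent.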


Question 2

An organization sells home security cameras and has asked its data scientists to implement a model that detects human faces, as distinguished from animals, so that customers are alerted only when a human gets close to their house.

Which of the following algorithms is an appropriate option with a correct reason?



Answer : D

Neural network models are suitable for classification problems with a large number of features, because they can learn complex and non-linear patterns from high-dimensional data. They can also handle image data, which is likely to be the input for the human face detection problem. Neural networks can also be trained using transfer learning, which can leverage pre-trained models on similar tasks and improve the accuracy and efficiency of the model. Reference: [Neural network - Wikipedia], [Transfer Learning - Machine Learning's Next Frontier]
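As a rough illustration of the point about high-dimensional classification, the following sketch trains a small feed-forward network with scikit-learn's `MLPClassifier` on synthetic data standing in for image features. (A production face detector would use a convolutional network on raw pixels; this toy setup and its parameters are assumptions made for the example.)

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a high-dimensional, non-linear classification task
X, y = make_classification(n_samples=400, n_features=64, n_informative=16,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two hidden layers let the model learn non-linear decision boundaries
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```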


Question 3

You create a prediction model with 96% accuracy. While the model's true positive rate (TPR) is performing well at 99%, the true negative rate (TNR) is only 50%. Your supervisor tells you that the TNR needs to be higher, even if it decreases the TPR. Upon further inspection, you notice that the vast majority of your data is truly positive.

What method could help address your issue?



Answer : B

Oversampling is a method that can help address the issue of imbalanced data, which is when one class is much more frequent than the other in the dataset. This can cause the model to be biased towards the majority class and have a low true negative rate. Oversampling involves creating synthetic samples of the minority class or replicating existing samples to balance the class distribution. This can help the model learn more from the minority class and improve the true negative rate. Reference: [Handling imbalanced datasets in machine learning], [Oversampling and undersampling in data analysis - Wikipedia]
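A minimal sketch of oversampling by replication, using scikit-learn's `resample` on a made-up imbalanced dataset (the 90/10 split and array shapes here are assumptions for illustration):

```python
import numpy as np
from sklearn.utils import resample

# Imbalanced dataset: 90 positives, 10 negatives
X = np.random.RandomState(0).randn(100, 3)
y = np.array([1] * 90 + [0] * 10)

X_min, y_min = X[y == 0], y[y == 0]
# Replicate minority samples (sampling with replacement) up to the majority count
X_up, y_up = resample(X_min, y_min, replace=True, n_samples=90, random_state=0)

X_bal = np.vstack([X[y == 1], X_up])
y_bal = np.concatenate([y[y == 1], y_up])
print(np.bincount(y_bal))  # → [90 90]
```

Techniques such as SMOTE go one step further and synthesize new minority samples by interpolation rather than replication.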


Question 4

Normalization is the transformation of features:



Answer : C

Normalization is the transformation of features so that they are on a similar scale, usually between 0 and 1 or -1 and 1. This can help reduce the influence of outliers and improve the performance of some machine learning algorithms that are sensitive to the scale of the features, such as gradient descent, k-means, or k-nearest neighbors. Reference: [Feature scaling - Wikipedia], [Normalization vs Standardization --- Quantitative analysis]
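The effect can be shown with scikit-learn's `MinMaxScaler`, which rescales each feature column to the [0, 1] range (the toy values below are invented for the example):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two features on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

scaler = MinMaxScaler()           # maps each column to [0, 1]
X_norm = scaler.fit_transform(X)  # both columns become [0, 0.5, 1]
print(X_norm)
```

After scaling, distance-based algorithms such as k-nearest neighbors no longer let the 200–600 feature dominate the 1–3 feature.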


Question 5

You are implementing a support-vector machine on your data, and a colleague suggests you use a polynomial kernel. In what situation might this help improve the prediction of your model?



Answer : B

A support-vector machine (SVM) is a supervised learning algorithm that can be used for classification or regression problems. An SVM tries to find an optimal hyperplane that separates the data into different categories or classes. However, sometimes the data is not linearly separable, meaning there is no straight line or plane that can separate them. In such cases, a polynomial kernel can help improve the prediction of the SVM by transforming the data into a higher-dimensional space where it becomes linearly separable. A polynomial kernel is a function that computes the similarity between two data points using a polynomial function of their features.
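A short sketch of the situation described above, using scikit-learn: concentric circles cannot be separated by a linear SVM, but a degree-2 polynomial kernel implicitly maps the data into a space where a separating hyperplane exists. (The dataset and kernel degree are choices made for this illustration.)

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not separable by any straight line in 2-D
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
poly = SVC(kernel="poly", degree=2).fit(X, y)  # implicit quadratic feature map

# The linear kernel hovers near chance; the polynomial kernel separates
# the rings, since x1^2 + x2^2 is a degree-2 feature of the inputs.
print(round(linear.score(X, y), 2), round(poly.score(X, y), 2))
```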

