Question

These models are memory efficient because the Lagrange multipliers in their objective function are zero for most of the points in the training set. For 10 points each:
[10m] Name these supervised learning models that solve an optimization problem to find the maximum-margin hyperplane that divides a linearly separable dataset.
ANSWER: support vector machines [or SVMs; reject “(vector) machines” or “support vector”]
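The leadin's memory claim can be checked directly: after fitting, only the points with nonzero Lagrange multipliers (the support vectors) must be stored. A minimal sketch using scikit-learn's `SVC` on a made-up, well-separated toy dataset (the data and parameters are illustrative, not from the question):

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable dataset (illustrative only).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) - 3, rng.randn(50, 2) + 3])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Lagrange multipliers are nonzero only for the support vectors,
# so only those few points are kept after training.
print(len(clf.support_), "support vectors out of", len(X), "points")
```

With the two clusters this far apart, only a handful of boundary points end up as support vectors.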
[10e] The support vector machine, or SVM, “kernel trick” can be used because the dual problem’s objective function is written in terms of this operation between two vectors. This operation generalizes the dot product.
ANSWER: inner product
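The kernel trick works because a kernel function computes an inner product in some feature space without forming that space explicitly. A small numeric sketch with the degree-2 polynomial kernel on 2-D vectors (the feature map `phi` is the standard explicit expansion, written out here for illustration):

```python
import numpy as np

def phi(v):
    # Explicit degree-2 feature map for a 2-D vector (a, b):
    # phi(v) = (a^2, b^2, sqrt(2)*a*b)
    a, b = v
    return np.array([a * a, b * b, np.sqrt(2) * a * b])

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# k(x, y) = (x . y)^2 equals the inner product <phi(x), phi(y)>,
# so the dual objective never needs phi explicitly.
k = np.dot(x, y) ** 2
print(k, np.dot(phi(x), phi(y)))  # both 121.0
```

Because the dual objective depends on the data only through such inner products, swapping in a kernel lets the SVM operate in the feature space implicitly.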
[10h] The misclassification penalty and the spread of the kernel function for an SVM can be tuned using this method. This hyperparameter tuning method uses a cross-validation score to select the best combination of explicitly enumerated values.
ANSWER: grid search [accept GridSearchCV]
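The tuning described in the part maps directly onto scikit-learn's `GridSearchCV`: enumerate candidate values for the penalty `C` and the RBF spread `gamma`, then score every combination by cross-validation. A sketch on a synthetic dataset (the grid values and dataset are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic classification data (illustrative only).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Explicitly enumerated candidates for the misclassification penalty C
# and the kernel spread gamma; each combination is cross-validated.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print(search.best_params_)
```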


Data

Team            Opponent           Part 1  Part 2  Part 3  Total
Brown A         Florida A               0      10       0     10
Chicago B       Chicago A              10      10       0     20
Georgia Tech A  Stanford A             10      10      10     30
UC Berkeley A   Columbia A             10      10      10     30
Vanderbilt A    Johns Hopkins A         0       0       0      0