You have recently developed a proof-of-concept (POC) deep learning model. You are satisfied with the overall architecture, but you need to fine-tune two hyperparameters: the embedding dimension of a categorical feature and the learning rate. You plan to run hyperparameter tuning on Vertex AI with the following configuration:

- Embedding dimension: type INTEGER, with a minimum value of 16 and a maximum value of 64.
- Learning rate: type DOUBLE, with a minimum value of 10e-05 and a maximum value of 10e-02.

You are using the default Bayesian optimization tuning algorithm, and your primary goal is to maximize model accuracy. Training time is not a significant concern. In this context, how should you configure the hyperparameter scaling for each hyperparameter, and what should be the setting for maxParallelTrials?
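For reference, the settings described in the question could be sketched as a Vertex AI `studySpec`, here written as a plain Python dict using the Vertex AI REST API field names. This is an assumed mapping of the question's settings onto the API; the parameter IDs are hypothetical, and the `scaleType` values are deliberately left unspecified because choosing them is exactly what the question asks.

```python
# Sketch of the studySpec implied by the question (REST API field names;
# parameter IDs "embedding_dim" and "learning_rate" are hypothetical).
study_spec = {
    "metrics": [
        # Goal from the question: maximize model accuracy.
        {"metricId": "accuracy", "goal": "MAXIMIZE"},
    ],
    "parameters": [
        {
            "parameterId": "embedding_dim",
            # INTEGER range from the question; the REST API encodes
            # int64 bounds as strings.
            "integerValueSpec": {"minValue": "16", "maxValue": "64"},
            # The open decision point: UNIT_LINEAR_SCALE, UNIT_LOG_SCALE,
            # or UNIT_REVERSE_LOG_SCALE.
            "scaleType": "SCALE_TYPE_UNSPECIFIED",
        },
        {
            "parameterId": "learning_rate",
            # DOUBLE range exactly as stated in the question.
            "doubleValueSpec": {"minValue": 10e-05, "maxValue": 10e-02},
            # Also to be chosen as part of the answer.
            "scaleType": "SCALE_TYPE_UNSPECIFIED",
        },
    ],
    # Omitting "algorithm" selects the default Bayesian optimization.
}

print(study_spec)
```

Note that the trial-parallelism setting is not part of `studySpec` itself: in the Vertex AI REST API it appears on the tuning job as `parallelTrialCount` (the question uses the older AI Platform name, `maxParallelTrials`), alongside `maxTrialCount`.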