A machine learning engineer uses the following code block to scale the inference of a single-node model on a Spark DataFrame with one million records:

@pandas_udf("double")
def predict(iterator: Iterator[Tuple[pd.Series, ...]]) -> Iterator[pd.Series]:
    model_path = f"runs:/{run.info.run_id}/model"
    model = mlflow.sklearn.load_model(model_path)
    for features in iterator:
        pdf = pd.concat(features, axis=1)
        yield pd.Series(model.predict(pdf))

Assuming the default Spark configuration is in place, what is the advantage of using an Iterator?

Choose only ONE best answer.
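To see the advantage the question is driving at, here is a minimal pure-Python sketch of the Iterator pattern, with no Spark or MLflow dependency. The names `load_model` and `predict` are hypothetical stand-ins: `load_model` simulates the expensive `mlflow.sklearn.load_model` call, and the batches simulate the Arrow record batches Spark streams to one task. The point is that the model is loaded once per task, before the loop, rather than once per batch.

```python
from typing import Iterator, List

load_count = 0  # tracks how many times the expensive model load runs


def load_model():
    """Hypothetical stub for mlflow.sklearn.load_model: expensive to call."""
    global load_count
    load_count += 1
    return lambda batch: [x * 2.0 for x in batch]  # dummy "predictions"


def predict(batches: Iterator[List[float]]) -> Iterator[List[float]]:
    # Iterator-UDF pattern: the model is loaded ONCE before the loop,
    # then reused for every batch streamed through this task.
    model = load_model()
    for features in batches:
        yield model(features)


# Simulate Spark feeding three record batches to a single task:
batches = iter([[1.0, 2.0], [3.0], [4.0, 5.0]])
results = list(predict(batches))
print(load_count)  # → 1: the model was loaded once, not once per batch
```

With a non-iterator (scalar) pandas UDF, the per-batch function body would have to repeat the load, so `load_count` would equal the number of batches; the Iterator variant amortizes that setup cost across all batches in the task.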