A data scientist is transitioning their pandas DataFrame code to make use of the pandas API on Spark. They're working with the following incomplete code:

________BLANK_________
df = ps.read_parquet(path)
df["category"].value_counts()

Which line of code should they use to complete the refactoring successfully with the pandas API on Spark?
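For reference, a minimal runnable sketch of the completed refactoring is shown below. The pandas API on Spark is provided by the pyspark.pandas module, which is conventionally imported under the alias ps; the path value here is a hypothetical placeholder, since the original snippet does not define it.

    # The blank is filled by importing the pandas API on Spark under the alias `ps`.
    import pyspark.pandas as ps

    # Hypothetical Parquet location used only for illustration.
    path = "/data/sales.parquet"

    # Read the Parquet data into a pandas-on-Spark DataFrame and count
    # occurrences of each category value, using the familiar pandas syntax.
    df = ps.read_parquet(path)
    print(df["category"].value_counts())

With that import in place, the rest of the code runs unchanged because pyspark.pandas mirrors the pandas DataFrame API while executing on Spark.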