Scenario: You migrated your on-premises Apache Hadoop Distributed File System (HDFS) data lake to Cloud Storage. The data science team needs to process the data using Apache Spark and SQL. Security policies must be applied at the column level, and the solution must be cost-effective and able to scale into a data mesh.

Question: How should you process the data and apply column-level security while keeping the solution scalable and cost-effective?
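
For context on the workload the scenario describes, here is a minimal PySpark sketch of the data science team reading the migrated files from Cloud Storage and querying them with Spark SQL. The bucket name, path, file format (Parquet), and column names are all hypothetical assumptions, and the sketch deliberately does not enforce column-level security, since deciding where that control should live is the crux of the question.

    # Hedged sketch: assumes the migrated lake lives at gs://example-datalake
    # (hypothetical bucket) in Parquet format with a hypothetical event_date column.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("datalake-analysis")
        .getOrCreate()
    )

    # Read directly from Cloud Storage; on Dataproc clusters the gs://
    # Cloud Storage connector is preinstalled.
    events = spark.read.parquet("gs://example-datalake/events/")
    events.createOrReplaceTempView("events")

    # The same data can then be processed with Spark SQL.
    daily_counts = spark.sql("""
        SELECT event_date, COUNT(*) AS n
        FROM events
        GROUP BY event_date
        ORDER BY event_date
    """)
    daily_counts.show()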