A data analyst at your company is building a pipeline that loads JSON files into BigQuery from Cloud Storage. These files have a nested and repeated structure, and some fields may be missing in certain records. The analyst wants to ensure schema flexibility while avoiding frequent pipeline failures due to minor schema mismatches. Which approach should the analyst use when configuring the BigQuery load job to best handle this scenario?
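For context, below is a minimal sketch of how such a load job might be configured with the Python client library. The bucket path and table name are hypothetical, and the particular options shown (newline-delimited JSON, schema autodetection, ignoring unknown values, schema-update relaxation on append) are one plausible combination for this scenario, not necessarily the intended answer to the question.

```python
from google.cloud import bigquery

client = bigquery.Client()

# One possible configuration for loading nested/repeated JSON with some
# tolerance for schema drift: autodetect the schema, skip values that do
# not match any column, and allow new or relaxed fields on append.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    ignore_unknown_values=True,
    schema_update_options=[
        bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION,
        bigquery.SchemaUpdateOption.ALLOW_FIELD_RELAXATION,
    ],
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/events/*.json",   # hypothetical Cloud Storage path
    "my-project.analytics.events",         # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # block until the load job completes
```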