In PySpark you can define a schema and read data sources with that pre-defined schema, e.g.:

from pyspark.sql.types import StructType, StructField, StringType
schema = StructType([StructField("name", StringType(), True)])
The code below extracts the schema definition of an existing DataFrame as a list of fields. This is quite useful when a DataFrame has a large number of columns and writing the schema out by hand is cumbersome: you can apply the extracted schema to a new DataFrame, hand-editing any fields you want to change along the way.
from pyspark.sql.types import StructType
schema = [i for i in df.schema]  # list of StructField objects
And from there you can build your new schema:
NewSchema = StructType(schema)