I would like to set up a Dataflow pipeline that reads from a file in a GCS bucket and writes to a BigQuery table. The caveat being, the table to write to should be decided based on the content.
On your first question: see Writing different values to different BigQuery tables in Apache Beam
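As a minimal sketch of that approach with the Beam Python SDK: `WriteToBigQuery` accepts a callable for `table`, which Beam invokes per element to pick the destination. The record field (`event_type`) and the table names below are assumptions for illustration.

```python
def route_to_table(record):
    """Return the destination BigQuery table spec for one parsed record."""
    # Route on a field in the record; unknown types land in a catch-all table.
    # The field name and table specs here are placeholders.
    table_by_type = {
        "click": "my_project:my_dataset.clicks",
        "purchase": "my_project:my_dataset.purchases",
    }
    return table_by_type.get(record.get("event_type"),
                             "my_project:my_dataset.other_events")

# In the pipeline, the callable is passed directly to WriteToBigQuery
# (requires apache_beam[gcp]):
#
#   import json
#   import apache_beam as beam
#
#   with beam.Pipeline(options=options) as p:
#       (p
#        | beam.io.ReadFromText("gs://my-bucket/input.json")
#        | beam.Map(json.loads)
#        | beam.io.WriteToBigQuery(
#              table=route_to_table,
#              write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```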
On your second question: one way to accomplish that would be to have your App Engine app publish every change notification to Cloud Pub/Sub, and have a constantly running streaming Dataflow pipeline watching the Pub/Sub topic and writing to BigQuery.
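A minimal sketch of the notification-to-row step in that streaming path; the message schema (JSON with `entity_id`, `change`, and `timestamp` fields) is an assumption, and the Beam wiring is shown in comments since it needs a live Pub/Sub topic to run.

```python
import json

def notification_to_row(message_bytes):
    """Parse one Pub/Sub message payload into a BigQuery row dict.

    Assumes the App Engine app publishes UTF-8 JSON payloads; the
    field names here are placeholders for your actual change schema.
    """
    payload = json.loads(message_bytes.decode("utf-8"))
    return {
        "entity_id": payload["entity_id"],
        "change": payload["change"],
        "timestamp": payload["timestamp"],
    }

# The streaming pipeline would watch the topic continuously
# (requires apache_beam[gcp] and the --streaming option):
#
#   import apache_beam as beam
#
#   with beam.Pipeline(options=options) as p:
#       (p
#        | beam.io.ReadFromPubSub(topic="projects/my-project/topics/changes")
#        | beam.Map(notification_to_row)
#        | beam.io.WriteToBigQuery("my_project:my_dataset.changes"))
```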
On your third question: yes, assuming your data representation on GCS is fixed, the rest seems like a reasonable ingestion architecture to me :)