Question
I'm trying to dump a relation into an Avro file, but I'm getting a strange error:
org.apache.pig.data.DataByteArray cannot be cast to java.lang.CharSequence
I don't use DataByteArray (bytearray) anywhere; see the description of the relation below.
sensitiveSet: {rank_ID: long,name: chararray,customerId: long,VIN: chararray,birth_date: chararray,fuel_mileage: chararray,fuel_consumption: chararray}
Even when I cast explicitly, I get the same error:
sensitiveSet = foreach sensitiveSet generate (long) $0, (chararray) $1, (long) $2, (chararray) $3, (chararray) $4, (chararray) $5, (chararray) $6;
STORE sensitiveSet INTO 'testOut2222.avro'
USING org.apache.pig.piggybank.storage.avro.AvroStorage('no_schema_check', 'schema', '{"type":"record","name":"xxxx","namespace":"","fields":[{"name":"rank_ID","type":"long"},{"name":"name","type":"string","store":"no","sensitive":"na"},{"name":"customerId","type":"string","store":"yes","sensitive":"yes"},{"name":"VIN","type":"string","store":"yes","sensitive":"yes"},{"name":"birth_date","type":"string","store":"yes","sensitive":"no"},{"name":"fuel_mileage","type":"string","store":"yes","sensitive":"no"},{"name":"fuel_consumption","type":"string","store":"yes","sensitive":"no"}]}');
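For context on where the cast happens: AvroStorage hands each value to Avro's generic datum writer, which casts anything destined for a "string" field to java.lang.CharSequence. Below is a minimal standalone sketch of that failure mode; it assumes the Avro and Pig jars are on the classpath, and the class name and field value are made up:

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.pig.data.DataByteArray;

// Hypothetical repro: give the generic writer a DataByteArray for a
// field declared "string" and it fails with the same ClassCastException.
public class AvroCastRepro {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"xxxx\",\"fields\":"
            + "[{\"name\":\"name\",\"type\":\"string\"}]}");
        GenericData.Record record = new GenericData.Record(schema);
        record.put("name", new DataByteArray("John")); // not a CharSequence
        GenericDatumWriter<GenericData.Record> writer = new GenericDatumWriter<>(schema);
        writer.write(record, EncoderFactory.get()
            .binaryEncoder(new ByteArrayOutputStream(), null));
        // -> java.lang.ClassCastException: org.apache.pig.data.DataByteArray
        //    cannot be cast to java.lang.CharSequence
    }
}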
EDIT:
I'm trying to define an output schema which should be a tuple containing two other tuples, i.e. stats:tuple(c:tuple(),d:tuple()).
The code below doesn't work as intended. It produces a structure like this instead:
stats:tuple(b:tuple(c:tuple(),d:tuple()))
Below is the output produced by describe:
sourceData: {com.mortardata.pig.dataspliter_36: (stats: ((name: chararray,customerId: chararray,VIN: chararray,birth_date: chararray,fuel_mileage: chararray,fuel_consumption: chararray),(name: chararray,customerId: chararray,VIN: chararray,birth_date: chararray,fuel_mileage: chararray,fuel_consumption: chararray)))}
Is it possible to create the structure below? That would mean removing tuple b from the previous example.
grunt> describe sourceData;
sourceData: {t: (s: (name: chararray,customerId: chararray,VIN: chararray,birth_date: chararray,fuel_mileage: chararray,fuel_consumption: chararray),n: (name: chararray,customerId: chararray,VIN: chararray,birth_date: chararray,fuel_mileage: chararray,fuel_consumption: chararray))}
The code below doesn't work as expected.
public Schema outputSchema(Schema input) {
    // Tuple of sensitive fields.
    Schema sensTuple = new Schema();
    sensTuple.add(new Schema.FieldSchema("name", DataType.CHARARRAY));
    sensTuple.add(new Schema.FieldSchema("customerId", DataType.CHARARRAY));
    sensTuple.add(new Schema.FieldSchema("VIN", DataType.CHARARRAY));
    sensTuple.add(new Schema.FieldSchema("birth_date", DataType.CHARARRAY));
    sensTuple.add(new Schema.FieldSchema("fuel_mileage", DataType.CHARARRAY));
    sensTuple.add(new Schema.FieldSchema("fuel_consumption", DataType.CHARARRAY));

    // Tuple of non-sensitive fields.
    Schema nonSensTuple = new Schema();
    nonSensTuple.add(new Schema.FieldSchema("name", DataType.CHARARRAY));
    nonSensTuple.add(new Schema.FieldSchema("customerId", DataType.CHARARRAY));
    nonSensTuple.add(new Schema.FieldSchema("VIN", DataType.CHARARRAY));
    nonSensTuple.add(new Schema.FieldSchema("birth_date", DataType.CHARARRAY));
    nonSensTuple.add(new Schema.FieldSchema("fuel_mileage", DataType.CHARARRAY));
    nonSensTuple.add(new Schema.FieldSchema("fuel_consumption", DataType.CHARARRAY));

    // Parent tuple wrapping the two inner tuples.
    Schema parentTuple = new Schema();
    parentTuple.add(new Schema.FieldSchema(null, sensTuple, DataType.TUPLE));
    parentTuple.add(new Schema.FieldSchema(null, nonSensTuple, DataType.TUPLE));

    Schema outputSchema = new Schema();
    outputSchema.add(new Schema.FieldSchema("stats", parentTuple, DataType.TUPLE));

    // Wrapping outputSchema in yet another tuple here is what adds the extra level.
    return new Schema(new Schema.FieldSchema(getSchemaName(this.getClass().getName().toLowerCase(), input), outputSchema, DataType.TUPLE));
}
The UDF's exec method builds and returns the parent tuple:
public Tuple exec(Tuple tuple) throws IOException {
    // ... construction of tuple1 and tuple2 omitted ...
    Tuple parentTuple = mTupleFactory.newTuple();
    parentTuple.append(tuple1);
    parentTuple.append(tuple2);
    return parentTuple;
}
EDIT2 (FIXED):
...
    Schema outputSchema = new Schema();
    outputSchema.add(new Schema.FieldSchema("stats", parentTuple, DataType.TUPLE));
    return outputSchema;
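This works because the original outputSchema wrapped the "stats" schema in a second FieldSchema in its return statement (named via getSchemaName), which is exactly the extra enclosing tuple b seen in describe; returning outputSchema directly drops that level of nesting.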
Now I return the proper schema from the UDF, where all items are chararray, but when I try to store those items into an Avro file as type string, I get the same error:
java.lang.Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.ClassCastException: org.apache.pig.data.DataByteArray cannot be cast to java.lang.CharSequence
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
SOLVED: OK, the issue was that the data wasn't cast to the proper type inside the UDF body, i.e. the exec() method. It works now!
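For anyone hitting the same error, here is a minimal sketch of what that fix looks like inside exec(); the class and helper names are hypothetical, and it assumes the untyped fields arrive as DataByteArray (Pig's fallback type), which is what the exception shows:

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataByteArray;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

// Hypothetical sketch: coerce each field to the type declared in
// outputSchema() before appending it, so AvroStorage never receives a
// raw DataByteArray where the Avro schema says "string".
public class DataSpliter extends EvalFunc<Tuple> {
    private static final TupleFactory mTupleFactory = TupleFactory.getInstance();

    // Untyped fields arrive as DataByteArray; turn them into real Strings.
    private static String asChararray(Object field) {
        if (field == null) return null;
        if (field instanceof DataByteArray) {
            return ((DataByteArray) field).toString(); // bytes -> String
        }
        return field.toString();
    }

    @Override
    public Tuple exec(Tuple input) throws IOException {
        Tuple sensTuple = mTupleFactory.newTuple();
        sensTuple.append(asChararray(input.get(0))); // name
        sensTuple.append(asChararray(input.get(1))); // customerId
        // ... remaining fields are converted the same way

        Tuple parentTuple = mTupleFactory.newTuple();
        parentTuple.append(sensTuple);
        // a nonSensTuple built the same way would be appended here too
        return parentTuple;
    }
}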
Answer 1:
Usually this means you are using a UDF that isn't preserving the schema, or the schema is getting lost somewhere along the way. I believe DataByteArray is the fallback type when the real type isn't known. You may need to add a cast to work around this, but a better solution is to fix whatever UDF is dropping the schema.
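To make "preserving the schema" concrete: if a UDF never overrides EvalFunc.outputSchema, Pig treats its result as bytearray, and downstream consumers such as AvroStorage receive DataByteArray objects. A minimal sketch of a UDF that declares its real output type; the UDF itself is made up for illustration:

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.logicalLayer.schema.Schema;

// Hypothetical pass-through UDF that declares its output as chararray
// so Pig does not fall back to bytearray.
public class UpperName extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.get(0) == null) return null;
        return input.get(0).toString().toUpperCase();
    }

    @Override
    public Schema outputSchema(Schema input) {
        // Declaring the type here is what keeps the schema intact downstream.
        return new Schema(new Schema.FieldSchema("name", DataType.CHARARRAY));
    }
}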
Source: https://stackoverflow.com/questions/34727136/pig-casting-datatypes