hortonworks-data-platform

HiveMetaStoreClient fails to connect to a Kerberized cluster

Question: Kerberized HDP-2.6.3.0. I have test code running on my local Windows 7 machine. Note that the commented-out lines make no difference either way.

    private static void connectHiveMetastore() throws MetaException, MalformedURLException {
        System.setProperty("hadoop.home.dir", "E:\\Development\\Software\\Virtualization");
        /*Start : Commented or un-commented, immaterial ...*/
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("java.security.auth.login.config
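
For comparison, a minimal, self-contained sketch of a Kerberos-authenticated metastore connection (not the poster's code): the principal, keytab path and metastore URI below are placeholders, and the exact values must come from the cluster's hive-site.xml and krb5.conf.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberizedMetastoreSketch {
        public static void main(String[] args) throws Exception {
            // Hadoop-level security settings; values must match the cluster's configuration.
            Configuration hadoopConf = new Configuration();
            hadoopConf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(hadoopConf);
            // Log in from a keytab before touching the metastore (placeholder principal/keytab).
            UserGroupInformation.loginUserFromKeytab("user@EXAMPLE.COM", "/path/to/user.keytab");

            // Point the client at the secured metastore and enable SASL.
            HiveConf hiveConf = new HiveConf();
            hiveConf.set("hive.metastore.uris", "thrift://metastore-host.example.com:9083");
            hiveConf.set("hive.metastore.sasl.enabled", "true");
            hiveConf.set("hive.metastore.kerberos.principal", "hive/_HOST@EXAMPLE.COM");

            HiveMetaStoreClient client = new HiveMetaStoreClient(hiveConf);
            System.out.println(client.getAllDatabases());
            client.close();
        }
    }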

Permission exception for Sqoop

Question: Stack: HDP-2.3.2.0-2950 installed using Ambari 2.1. The installation was automated, since the machines (9 nodes in total) had Internet connectivity, and was done with root credentials. An ls command output for reference (the sqoop user IS missing):

    [root@l1031lab ~]# hadoop fs -ls /user
    Found 7 items
    drwx------   - accumulo  hdfs  0 2015-11-05 14:03 /user/accumulo
    drwxrwx---   - ambari-qa hdfs  0 2015-10-30 16:08 /user/ambari-qa
    drwxr-xr-x   - hcat      hdfs  0 2015-10-30 16:17 /user/hcat
    drwxr-xr-x   - hdfs      hdfs  0 2015-11
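
The usual remedy for a missing user home directory on HDFS is to create /user/sqoop as the HDFS superuser and hand it to the sqoop user (normally done with hdfs dfs -mkdir/-chown). A programmatic equivalent, sketched with the Hadoop FileSystem API; the ownership and permissions shown are common defaults, not taken from the question:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class CreateSqoopHomeDir {
        public static void main(String[] args) throws Exception {
            // Must run as a user allowed to create and chown under /user (typically hdfs).
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path home = new Path("/user/sqoop");
            if (!fs.exists(home)) {
                fs.mkdirs(home);
            }
            // Assumption: sqoop:hdfs ownership and mode 755, the usual HDP convention.
            fs.setOwner(home, "sqoop", "hdfs");
            fs.setPermission(home, new FsPermission("755"));
            fs.close();
        }
    }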

Accessing HBase table data from Hive based on Time Stamp

Question: I created an HBase table with the maximum number of versions set to 10:

    create 'tablename', {NAME => 'cf', VERSIONS => 10}

and inserted two rows (row1 and row2):

    put 'tablename','row1','cf:id','row1id'
    put 'tablename','row1','cf:name','row1name'
    put 'tablename','row2','cf:id','row2id'
    put 'tablename','row2','cf:name','row2name'
    put 'tablename','row2','cf:name','row2nameupdate'
    put 'tablename','row2','cf:name','row2nameupdateagain'
    put 'tablename','row2','cf:name','row2nameupdateonemoretime'

Tried to
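
Whatever was tried on the Hive side, a useful first check is that the extra versions really are stored. A sketch with the plain HBase 1.x Java client that reads all kept versions of cf:name for row2, with their timestamps; default connection settings are assumed:

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellUtil;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReadAllVersions {
        public static void main(String[] args) throws Exception {
            // Assumes hbase-site.xml (ZooKeeper quorum etc.) is on the classpath.
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("tablename"))) {
                Get get = new Get(Bytes.toBytes("row2"));
                get.setMaxVersions(10);   // ask for every kept version, not just the latest
                // Optionally restrict to a timestamp window instead:
                // get.setTimeRange(minTs, maxTs);
                Result result = table.get(get);
                for (Cell cell : result.getColumnCells(Bytes.toBytes("cf"), Bytes.toBytes("name"))) {
                    System.out.println(cell.getTimestamp() + " -> " + Bytes.toString(CellUtil.cloneValue(cell)));
                }
            }
        }
    }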

Accessing kafka in sandbox from Host OS (after trying every solution)

Question: Consider me a noob. I have read all the related questions on Stack Overflow and tried for a day, but the solution just doesn't click for me. PLEASE help me with my specific SETTINGS and CODE (because I have tried all the possibilities from the same questions on Stack Overflow). This is my producer.properties file. This is my server.properties file. This is my code:

    Properties props = new Properties();
    props.put("metadata.broker.list", "sandbox.hortonworks.com:9093");
    //props.put("zk.connect", "sandbox.hortonworks
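
For reference, a minimal producer sketch using the newer Java client rather than the old metadata.broker.list API. The hostname and port are assumptions: HDP's Kafka broker commonly listens on 6667, sandbox.hortonworks.com must resolve to the sandbox IP from the host OS (hosts-file entry), and bootstrap.servers must match whatever advertised listener the broker's server.properties actually exposes.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SandboxProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Assumption: the broker's advertised listener, as seen from the host OS.
            props.put("bootstrap.servers", "sandbox.hortonworks.com:6667");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // "test-topic" is a placeholder topic name.
                producer.send(new ProducerRecord<>("test-topic", "key1", "hello from the host OS"));
                producer.flush();
            }
        }
    }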

Read HBase table with where clause using Spark

Question: I am trying to read an HBase table using the Spark Scala API. Sample code:

    conf.set("hbase.master", "localhost:60000")
    conf.set("hbase.zookeeper.quorum", "localhost")
    conf.set(TableInputFormat.INPUT_TABLE, tableName)
    val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat], classOf[ImmutableBytesWritable], classOf[Result])
    println("Number of Records found : " + hBaseRDD.count())

How do I add a where clause if I use newAPIHadoopRDD? Or do we need to use a Spark HBase connector to achieve this?
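
One common approach, shown here only as a sketch, is to serialize a filtered Scan into the configuration that newAPIHadoopRDD reads, so the "where clause" is evaluated by the HBase region servers rather than in Spark. The column family, qualifier and value are placeholders, and the snippet uses the Java Spark API for consistency with the other examples on this page:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.CompareFilter;
    import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class HBaseScanWithFilter {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "localhost");
            conf.set(TableInputFormat.INPUT_TABLE, "tablename");

            // Server-side "where clause": cf:id = 'row2id' (placeholder column and value).
            Scan scan = new Scan();
            scan.setFilter(new SingleColumnValueFilter(
                    Bytes.toBytes("cf"), Bytes.toBytes("id"),
                    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("row2id")));
            conf.set(TableInputFormat.SCAN, TableMapReduceUtil.convertScanToString(scan));

            JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("hbase-filter-sketch"));
            JavaPairRDD<ImmutableBytesWritable, Result> rdd = sc.newAPIHadoopRDD(
                    conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);
            System.out.println("Number of matching records: " + rdd.count());
            sc.close();
        }
    }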

Kafka Connect - File Source Connector error

Question: I am playing with the Confluent Platform/Kafka Connect and similar things, and I wanted to run a few examples. I followed the quickstart from here. That means:
1. Install Confluent Platform (v3.2.1)
2. Run ZooKeeper, the Kafka broker and the Schema Registry
3. Run the example for reading file data (with Kafka Connect)
I ran this command (number 3):

    [root@sandbox confluent-3.2.1]# ./bin/connect-standalone ./etc/schema-registry/connect-avro-standalone.properties ./etc/kafka/connect-file-source.properties

but got this result:
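
Whatever the reported error was, the connector being loaded there is defined by the second argument, ./etc/kafka/connect-file-source.properties. For context, the stock quickstart version of that file typically contains the following; these are the quickstart defaults, shown for orientation rather than taken from this question:

    # File-source connector: tails a local file and publishes each line to a topic.
    name=local-file-source
    connector.class=FileStreamSource
    tasks.max=1
    file=test.txt
    topic=connect-test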

HDP 2.5: Spark History Server UI won't show incomplete applications

Question: I set up a new Hadoop cluster with Hortonworks Data Platform 2.5. In the "old" cluster (HDP 2.4) I was able to see information about running Spark jobs via the History Server UI by clicking the link "show incomplete applications". In the new installation this link opens the page, but it always says "No incomplete applications found!" (even when an application is still running). I just noticed that the YARN ResourceManager UI shows two different kinds of links in the "Tracking UI"
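
The "show incomplete applications" view only lists running applications whose event logs land in the directory the history server reads, so both sides have to point at the same location. As a hedged example, the relevant spark-defaults.conf entries look like this; the property names are standard Spark settings, while the HDFS path is a placeholder that must match the cluster's actual history directory:

    # Written by running applications:
    spark.eventLog.enabled           true
    spark.eventLog.dir               hdfs:///spark-history
    # Read by the Spark History Server (must be the same location):
    spark.history.fs.logDirectory    hdfs:///spark-history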

'DBCPConnectionPool' Service Not accepting values stored in attributes

Question: The following is the combination of processors I am using: GetFile + SplitText + ExtractText + UpdateAttribute + ExecuteSQL + ConvertAvroToJson + PutFile. Basically, I have a properties file containing 5 comma-separated values that the 'DBCPConnectionPool' controller service needs to establish a connection with the database. Here is the content of my properties file:

    jdbc:mysql://localhost:3306/test,com.mysql.jdbc.Driver,C:\Program Files\MySQL\mysql-connector.jar,root,root

Now,
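
For comparison, outside NiFi the same five values would configure a plain connection pool. A minimal sketch with Apache Commons DBCP2, using the URL, driver class, user and password from the properties file above; the driver jar is assumed to be on the classpath here, whereas the NiFi controller service is given the jar's location as one of its own properties:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import org.apache.commons.dbcp2.BasicDataSource;

    public class DbcpPoolSketch {
        public static void main(String[] args) throws Exception {
            // Values taken from the question's properties file (driver jar handled via classpath).
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:mysql://localhost:3306/test");
            ds.setDriverClassName("com.mysql.jdbc.Driver");
            ds.setUsername("root");
            ds.setPassword("root");

            try (Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
            }
            ds.close();
        }
    }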

grant permissions in hive does not work on hdp2.2

Question: I'm experimenting with an HDP 2.2 cluster set up with Ambari on CentOS 6.5, and I have problems running Hive GRANT queries. For example, the query

    grant select on Tbl1 to user root;

gives me an exception that looks like this:

    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
    Failed to retrieve roles for hdfs: Metastore Authorization api invocation for remote metastore is disabled in this configuration.

What's going on here, and could you explain the meaning of
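
This error typically appears when GRANT statements are issued while the legacy authorization provider is pointed at a remote metastore; the setup most often documented for making GRANT/REVOKE work through HiveServer2 is SQL-standard-based authorization. A hedged sketch of the relevant hive-site.xml properties follows; the property names and classes are the standard ones from the Hive documentation, while the admin user is a placeholder and this is not presented as the definitive fix for this particular cluster:

    hive.security.authorization.enabled   = true
    hive.security.authorization.manager   = org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
    hive.security.authenticator.manager   = org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
    hive.users.in.admin.role              = root    (placeholder admin user)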

SparkAction for yarn-cluster

Question: Using the Hortonworks HDP 2.3 preview sandbox (Oozie 4.2.0.2.3.0.0-2130, Spark 1.3 and Hadoop 2.7.1.2.3.0.0-2130), I am trying to invoke the Oozie Spark action with "yarn-cluster" as the master. The example provided in the Oozie Spark Action documentation runs the Spark action with the "local" master. The same page also says that, to be able to run on YARN, the Spark assembly jar should be available to the Spark action. I have two questions: How do we make the Spark assembly jar available to the Spark action?
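
As for the assembly jar, the usually documented approach is to place spark-assembly-*.jar either in the workflow application's lib/ directory on HDFS or in the Oozie Spark sharelib. A hedged sketch of what the workflow.xml action can look like with yarn-cluster as the master; the action name, application name, class, jar path and spark-opts are placeholders, not taken from the question:

    <action name="spark-node">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn-cluster</master>
            <name>MySparkApp</name>
            <!-- Placeholder driver class and application jar -->
            <class>com.example.MySparkMain</class>
            <jar>${nameNode}/user/${wf:user()}/apps/spark/lib/my-spark-app.jar</jar>
            <spark-opts>--executor-memory 1G --num-executors 2</spark-opts>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>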