hortonworks-data-platform

Error connecting Hortonworks Hive ODBC in Excel 2013

Question: I am trying to query Hortonworks Hive via the ODBC driver in Excel 2013. I downloaded the 32-bit driver from http://hortonworks.com/downloads/ (Hortonworks 2.5, Hive 2.5.0.0-1245), then added the configuration in the ODBC Data Source Administrator (32-bit). Everything seems fine, but when I go into Excel 2013 to build the query, I get an error. Does anyone know why?

Answer 1: The problem is the Hive driver from Hortonworks. For some reason it is not compatible with Excel or Power BI. I downloaded the Microsoft Hive ODBC driver instead…
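
If it helps, a DSN-less connection string for the Microsoft driver looks roughly like the sketch below. The host, port, and credentials are placeholders, and the key names (HiveServerType, AuthMech) follow the Simba-based Hive ODBC drivers, so treat them as an assumption to verify against the driver's own documentation:

    Driver={Microsoft Hive ODBC Driver};Host=sandbox.hortonworks.com;Port=10000;HiveServerType=2;AuthMech=3;UID=hive;PWD=yourpassword

Here HiveServerType=2 selects HiveServer2 and AuthMech=3 selects username/password authentication.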

Apache NiFi - OutOfMemory Error: GC overhead limit exceeded on SplitText processor

Question: I am trying to use NiFi to process large CSV files (potentially billions of records each) on HDF 1.2. I have implemented my flow, and everything works fine for small files. The problem is that if I push the file size up to 100 MB (1M records), I get java.lang.OutOfMemoryError: GC overhead limit exceeded from the SplitText processor responsible for splitting the file into single records. I searched for that error, and it basically means that the garbage collector is running for too long…
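
Two mitigations are commonly suggested for this. First, chain two SplitText processors instead of one, so no single split has to produce a million flowfiles at once: the first splits into chunks (e.g. Line Split Count = 10000), the second splits those chunks into single lines (Line Split Count = 1). Second, raise NiFi's JVM heap in conf/bootstrap.conf; the 4 GB figure below is illustrative, not a value from the question:

    # conf/bootstrap.conf -- the stock sandbox default is 512 MB
    java.arg.2=-Xms4g
    java.arg.3=-Xmx4g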

HBase on Hortonworks HDP Sandbox: Can't get master address from ZooKeeper

Question: I downloaded HDP 2.1 from Hortonworks for VirtualBox. I get the following error when using the HBase shell for a simple command such as create 't1', {NAME => 'f1', VERSIONS => 5}: "ERROR: Can't get master address from ZooKeeper; znode data == null". What do I need to do to get HBase working in this sandbox environment?

Answer 1: In the Hortonworks sandbox you have to start HBase manually. Try running the following commands (as the root user):

    su hbase - -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master; sleep 20"
    su hbase - -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf …
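
Once the master is up, a quick way to confirm it from the command line is to pipe a status command into the HBase shell; a sketch, assuming you are on the sandbox itself:

    # should report the master plus the number of live region servers
    echo "status" | hbase shell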

How to load SQL data into Hortonworks?

Question: I have installed the Hortonworks Sandbox on my PC. I have also tried it with a CSV file, and the data comes through in a structured table (Hive + Hadoop), which works fine. Now I want to migrate my current SQL database (MS SQL 2008 R2) into the Sandbox. How do I do this? I also want to connect it to my project (VS 2010, C#); is it possible to connect through ODBC? I have heard that Sqoop is used for transferring data from SQL to Hadoop, so how can I do this migration with Sqoop?

Answer 1: You could write your own job to migrate the data, but Sqoop…
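
A typical Sqoop import from SQL Server into Hive looks like the sketch below. The hostname, database, table, and credentials are placeholders, and the Microsoft SQL Server JDBC driver jar must already be on Sqoop's classpath:

    sqoop import \
      --connect "jdbc:sqlserver://sqlhost:1433;databaseName=mydb" \
      --username sqoop_user \
      --password '***' \
      --table Customers \
      --hive-import \
      --hive-table customers \
      -m 1

-m 1 runs a single mapper, which avoids needing a --split-by column for tables without a numeric primary key.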

Cannot retrieve repository metadata (repomd.xml) for repository: sandbox. Please verify its path and try again

I have HDP 2.6.1 installed on VirtualBox and am attempting to run yum install python-pip. However, the error below appears:

    http://dev2.hortonworks.com.s3.amazonaws.com/repo/dev/master/utils/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
    Trying other mirror.
    To address this issue please refer to the below knowledge base article
    https://access.redhat.com/solutions/69319
    If above article doesn't help to resolve this issue please open a ticket with Red Hat Support.
    Error: Cannot retrieve repository metadata (repomd.xml) for repository: sandbox.
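
The 403 comes from the sandbox repo definition pointing at a Hortonworks dev URL that is no longer served, so yum fails before it ever reaches the package. A common workaround is simply to skip that repo; the repo id "sandbox" is taken from the error message, and yum-config-manager assumes the yum-utils package is installed:

    # one-off: ignore the broken repo for this install
    yum --disablerepo=sandbox install python-pip

    # or disable it permanently
    yum-config-manager --disable sandbox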

kinit: Client's credentials have been revoked while getting initial credentials

I have an HDP cluster configured with Kerberos against AD. All HDP service accounts have principals and keytabs generated, including spark. I know the service accounts have no passwords and are set to never expire. Now, when I run kinit -kt spark.keytab -p spark-PRINCIPAL, I get the error in the title. I read on the MIT website that this happens because of too many unsuccessful login attempts or an account expiry set in the KDC's default policy, and that the account can be unlocked using kadmin commands such as kadmin: modprinc spark/principal. But I have cross-checked with the AD admin, and he says we don't use a KDC server to execute kadmin…
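
Two things are worth checking here. MIT kinit takes the principal as a positional argument (its -p flag actually requests proxiable tickets, not a principal), and with an AD-backed KDC the lockout lives in AD, so it is cleared there rather than with kadmin. A sketch, with the principal name, realm, and account name as placeholders:

    # see exactly which principals the keytab contains
    klist -kt spark.keytab

    # kinit with the principal name copied verbatim from the keytab
    kinit -kt spark.keytab spark/host.example.com@EXAMPLE.COM

    # on the AD side, an admin can clear the lockout
    # (PowerShell with the RSAT ActiveDirectory module):
    #   Unlock-ADAccount -Identity spark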

How to delete files from the HDFS?

Question: I just downloaded the Hortonworks sandbox VM; inside it Hadoop is at version 2.7.1. I add some files using the hadoop fs -put /hw1/* /hw1 command. Afterwards I delete the added files with the hadoop fs -rm /hw1/* command, and then empty the recycle bin with the hadoop fs -expunge command. But the DFS Remaining space does not change after the recycle bin is cleaned, even though I can see that the data was truly deleted from /hw1/ and from the recycle bin. I have the fs.trash…
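
Two commands are useful here. Deleting with -skipTrash bypasses the .Trash directory entirely, and dfsadmin reports cluster capacity; note that DFS Remaining is updated asynchronously, only after the DataNodes have physically removed the blocks, so the number can lag the delete:

    # delete immediately, without moving anything to .Trash
    hdfs dfs -rm -r -skipTrash /hw1

    # check reported capacity and remaining space
    hdfs dfsadmin -report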

Passing HBase credentials in oozie Java Action

Question: I need to schedule an Oozie Java action which interacts with secured HBase, so I need to provide HBase credentials to the Java action. I am using a secured Hortonworks 2.2 environment; my workflow XML is as below:

    <workflow-app xmlns="uri:oozie:workflow:0.4" name="solr-wf">
      <credentials>
        <credential name="hbase" type="hbase">
        </credential>
      </credentials>
      <start to="java-node"/>
      <action name="java-node" cred="hbase">
        <java>
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name…
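
A pattern that often makes the hbase credential type work is to spell out the cluster's ZooKeeper and Kerberos settings inside the credential element. The property names below are the usual ones from hbase-site.xml, but the values are placeholders, not taken from the question:

    <credential name="hbase" type="hbase">
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
      </property>
      <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
      </property>
      <property>
        <name>hbase.security.authentication</name>
        <value>kerberos</value>
      </property>
      <property>
        <name>hbase.master.kerberos.principal</name>
        <value>hbase/_HOST@EXAMPLE.COM</value>
      </property>
      <property>
        <name>hbase.regionserver.kerberos.principal</name>
        <value>hbase/_HOST@EXAMPLE.COM</value>
      </property>
    </credential>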

NiFi ConvertRecord JSON to CSV getting only a single record?

Question: I have the flow below set up to read JSON data and convert it to CSV using the ConvertRecord processor. However, the output flowfile is populated with only a single record (I am assuming only the first one) instead of all the records. Can someone help with the correct configuration?

Source JSON data:

    {"creation_Date": "2018-08-19", "Hour_of_day": 7, "log_count": 2136}
    {"creation_Date": "2018-08-19", "Hour_of_day": 17, "log_count": 606}
    {"creation_Date": "2018-08-19", "Hour_of_day": 14…
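
Note that the sample is newline-delimited JSON (one object per line) rather than a JSON array, which is a frequent source of "only the first record" behaviour with record readers. One thing that usually helps is giving both the JsonTreeReader and the CSVRecordSetWriter an explicit schema via the Schema Access Strategy "Use 'Schema Text' Property" instead of relying on inference. A minimal Avro schema matching the sample (the field types are inferred from the sample data and are an assumption):

    {
      "type": "record",
      "name": "log_entry",
      "fields": [
        { "name": "creation_Date", "type": "string" },
        { "name": "Hour_of_day", "type": "int" },
        { "name": "log_count", "type": "int" }
      ]
    }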
