ambari

Spark wordcount assertion failed: unsafe symbol Unstable

Posted by 妖精的绣舞 on 2019-12-11 21:15:09
Question: I have installed HDFS, YARN, and Spark using Hortonworks Ambari. I've written simple programs to read/write to HDFS and a Map-Reduce wordcount, and all worked fine. I then tried to test Spark. I copied the word count program from the official Spark examples:

public final class JavaWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }
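The snippet is cut off above, but for reference, a typical way to submit this example to the cluster is with spark-submit (a sketch; the jar path, input file, and fully qualified class name are placeholders to adapt to your build):

# build the application jar, then submit it to YARN
spark-submit --class JavaWordCount \
  --master yarn --deploy-mode client \
  /path/to/wordcount.jar hdfs:///tmp/input.txt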

How to check spark config for an application in Ambari UI, posted with livy

Posted by 五迷三道 on 2019-12-11 17:00:23
Question: I am posting jobs to a Spark cluster using the Livy APIs. I want to increase the spark.network.timeout value, and I am passing that value (600s) in the conf field of the Livy POST call. How can I verify that it is correctly honoured and applied to the posted jobs? Source: https://stackoverflow.com/questions/55690915/how-to-check-spark-config-for-an-application-in-ambari-ui-posted-with-livy
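One way to check is the Environment tab of the running application's Spark UI, which lists the effective Spark properties. The same data is exposed over HTTP; a sketch (host, port, and application id are placeholders, and the /environment endpoint assumes a Spark version whose monitoring REST API includes it):

# find the YARN application id of the Livy-submitted job
yarn application -list
# read the effective Spark properties back from the driver's REST API;
# spark.network.timeout should appear under "sparkProperties" as 600s
curl http://<driver-host>:4040/api/v1/applications/<application-id>/environment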

Apache Metrics Collector install failed while deploying Apache Ambari 2.5.1

Posted by 别说谁变了你拦得住时间么 on 2019-12-11 15:17:37
Question: I tried to deploy Apache Ambari 2.5.1, and the Metrics Collector install failed. I have researched this issue and cannot find the same issue on the Internet. Can you help me solve this problem? Thanks!

stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 86, in <module>
    AmsCollector().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script

GRANT permissions in Hive does not work on HDP 2.2

Posted by 你说的曾经没有我的故事 on 2019-12-11 04:19:18
Question: I'm experimenting with an HDP 2.2 cluster set up with Ambari on CentOS 6.5, and I have problems running Hive GRANT queries. For example, the query

grant select on Tbl1 to user root;

gives me an exception that looks like this:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to retrieve roles for hdfs: Metastore Authorization api invocation for remote metastore is disabled in this configuration.

What's going on here, and could you explain the meaning of
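The message suggests the metastore-side authorization path is being invoked instead of SQL-standard authorization. A sketch of the hive-site.xml settings usually involved in enabling SQL-standard authorization (illustrative values; on an Ambari-managed cluster change them under Hive > Configs, and verify against the HDP 2.2 documentation):

# enable SQL-standard authorization in HiveServer2
hive.security.authorization.enabled=true
hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
hive.security.authenticator.manager=org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
# SQL-standard authorization is typically used with impersonation off
hive.server2.enable.doAs=false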

Rapidly deploying a Hadoop big data environment with Ambari

Posted by 北城余情 on 2019-12-09 14:15:40
Having worked in big-data backend development for over a year, and with the Hadoop community constantly evolving, I keep trying new things. This article focuses on Ambari, a newer Apache project that aims to let you configure and deploy the components of the Hadoop ecosystem quickly and conveniently, and that also provides maintenance and monitoring features.

As a beginner, let me describe my own learning path. When I first started, I naturally just Googled Hadoop, downloaded the relevant packages, and installed a single-node Hadoop on my own virtual machine (CentOS 6.3) for testing: I wrote a few test classes, did some CRUD tests, and ran some Map/Reduce jobs. At that point I did not understand Hadoop well; I kept reading other people's articles to get a sense of the overall architecture, while all I actually did myself was edit a few configuration files under conf to get Hadoop running properly. After that stage I moved on to HBase, another product in the Hadoop ecosystem; again it was mostly editing configuration, running start-all.sh and start-hbase.sh to bring the services up, and then adapting my own programs and testing. Along the way with HBase I also learned Zookeeper and Hive. Once past that hands-on stage, I began studying Hadoop 2.0; after reading the related articles, and many posts by experts on CSDN, I finally had some overall understanding of the Hadoop ecosystem, although the technologies involved in my development work at the company were limited to just these

An introduction to commonly used Ambari REST APIs

Posted by 最后都变了- on 2019-12-07 06:41:26
Ambari borrows API designs from many mature distributed software projects, and its REST API is a good example of this. Through the Ambari REST API you can maintain the whole cluster from scripts via curl, and you can even perform some operations that are not possible in the Ambari GUI. Below are some examples.

Query cluster information:

[root@hadron ~]# curl -u admin:admin http://192.168.1.25:8080/api/v1/clusters
{
  "href" : "http://192.168.1.25:8080/api/v1/clusters",
  "items" : [
    {
      "href" : "http://192.168.1.25:8080/api/v1/clusters/cc",
      "Clusters" : {
        "cluster_name" : "cc",
        "version" : "HDP-2.5"
      }
    }
  ]
}

This is equivalent to:

[root@hadron ~]# curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://192.168.1.25:8080/api/v1/clusters
{
  "href" : "http://192.168.1.25:8080/api/v1/clusters"
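Beyond GET requests, the same API can change cluster state. For example, a PUT like the following asks Ambari to stop the HDFS service (the cluster name cc is taken from the output above; the exact payload may vary slightly between Ambari versions):

curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop HDFS via REST"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://192.168.1.25:8080/api/v1/clusters/cc/services/HDFS

Setting the state back to STARTED starts the service again.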

Hue 3.9 on HDP 2.3.4: installation notes

Posted by 邮差的信 on 2019-12-06 19:40:41
Installation steps (on CentOS, using the root account).

Prepare the environment:
1. Install HDP 2.3.4 with Ambari.
2. Configure HDP for Hue following http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/configure_hdp_hue.html
3. Restart HDP.
4. Download Maven and Ant, then add a few environment variables to the profile:

export JAVA_HOME=/usr/jdk64/jdk1.8.0_60
export ANT_HOME=/hadoop/program/apache-ant-1.9.6
export MAVEN_HOME=/hadoop/program/apache-maven-3.3.9
export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
source /etc/profile

5. Install the build dependencies (see the build sketch after this list):

yum -y install gcc-c++ asciidoc cyrus-sasl-devel cyrus-sasl-gssapi krb5-devel libxml2-devel libxslt-devel mysql-devel openldap-devel python-devel sqlite-devel
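With the prerequisites in place, a typical Hue 3.x source build looks like this (a sketch; the source directory and install prefix are illustrative, and the build assumes Hue's Makefile, which honours PREFIX):

# assuming the Hue 3.9.0 source tarball has already been downloaded and unpacked
cd /hadoop/program/hue-3.9.0
PREFIX=/usr/local make install
# the built application then lives under /usr/local/hue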

Installing Impala on HDP

Posted by 别来无恙 on 2019-12-06 03:02:13
Impala is a query system developed primarily by Cloudera. It offers SQL semantics and can query PB-scale data stored in Hadoop's HDFS and in HBase. Impala provides faster queries, with claimed performance 3 to 10 times that of Hive. Impala is open source, but it is normally installed through Cloudera Manager or on a CDH distribution; this article covers installation on HDP.

Versions

Impala has strict requirements on the Hadoop version, so here is the version information for this installation:
Impala 2.5
HDP 2.2.8.0 (based on Hadoop 2.6)

Installation steps

1. Create impala.repo in /etc/yum.repos.d (see the install sketch after it):

[cloudera-cdh5]
# Packages for Cloudera's Distribution for Hadoop, Version 5, on RedHat or CentOS 6 x86_64
name=Cloudera's Distribution for Hadoop, Version 5
baseurl=https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/5/
gpgkey=https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
gpgcheck
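With the repo in place, the next step would typically be installing the Impala packages with yum (package names as shipped in the CDH 5 repository; the exact set to install depends on which roles each host runs):

# state store and catalog services on one management node
yum -y install impala-state-store impala-catalog
# the Impala daemon and shell on each worker/data node
yum -y install impala-server impala-shell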

URI to access a file in HDFS

Posted by 走远了吗. on 2019-12-05 17:08:08
I have set up a cluster using Ambari that includes 3 nodes. Now I want to access a file in HDFS from my client application. I can find all node URIs under Data Nodes in Ambari. What is the URI + port I need to use to access a file? I used the default installation process.

The default port is 8020. You can access "hdfs" paths in 3 different ways.

Simply use "/" as the root path, for example:

E:\HadoopTests\target>hadoop fs -ls /
Found 6 items
drwxrwxrwt   - hadoop  hdfs          0 2015-08-17 18:43 /app-logs
drwxr-xr-x   - mballur hdfs          0 2015-11-24 15:36 /tmp
drwxrwxr-x   - mballur hdfs          0 2015-10-20 15
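The answer is cut off above, but another common form is the fully qualified URI, which is what a remote client application usually needs (the host name below is a placeholder; the authority should match fs.defaultFS in the cluster's core-site.xml):

# fully qualified HDFS URI: hdfs://<namenode-host>:<port>/<path>
hadoop fs -ls hdfs://<namenode-host>:8020/tmp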

Failed to get D-Bus connection: Operation not permitted

Posted by 依然范特西╮ on 2019-12-04 16:16:33
Question: I'm trying to install Ambari 2.6 on a Docker CentOS 7 image, but during the Ambari setup step, exactly while initializing the PostgreSQL DB, I receive this error:

Failed to get D-Bus connection: Operation not permitted

I get this error every time I try to run a service on my Docker image. I have tried every solution on the net but nothing has worked yet. Does anyone have an idea how to resolve this? Thank you in advance.

Answer 1: Use this command: docker run -d -it --privileged ContainerId /usr/sbin/init
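The answer works because systemd (and with it D-Bus) must run as PID 1 with extra privileges inside the container. A slightly fuller sketch (the image and container names are placeholders, not from the original answer):

# start the CentOS 7 image with systemd as PID 1
docker run -d -it --privileged --name ambari-host centos:7 /usr/sbin/init
# get a shell inside the running container, then rerun ambari-server setup there
docker exec -it ambari-host /bin/bash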