apache-kudu

Apache Kudu slow insert, high queuing time

两盒软妹~` submitted on 2021-02-07 10:15:38
Question: I have been using the Spark Data Source to write Parquet data to Kudu, and the write performance is terrible: about 12,000 rows/second, each row roughly 160 bytes. We have 7 Kudu nodes, each with 24 cores, 64 GB RAM, and 12 SATA disks. None of the resources seems to be the bottleneck: tserver CPU usage is ~3-4 cores, RAM 10 GB, no disk congestion. Still, I see that most of the time write requests were stuck queuing. Any ideas are appreciated. W0811 12:34:03.526340 7753 rpcz_store.cc:251] Call kudu.tserver
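When writes queue at the tserver despite idle hardware, one common first step is to raise the tserver's background-work and RPC-queue settings. The sketch below is only illustrative: the flag names (`--maintenance_manager_num_threads`, `--rpc_service_queue_length`, `--fs_wal_dir`, `--fs_data_dirs`) exist in recent Kudu releases, but the values and directory paths shown are made-up assumptions, and whether they help depends on the actual bottleneck (e.g. table partitioning).

```shell
# Sketch, not a recommendation: restart the tserver with more maintenance
# threads and a deeper RPC service queue. Paths and values are hypothetical.
kudu-tserver \
  --fs_wal_dir=/data0/kudu/wal \
  --fs_data_dirs=/data1/kudu,/data2/kudu \
  --maintenance_manager_num_threads=4 \
  --rpc_service_queue_length=100
```

A table hash-partitioned into too few tablets can also serialize writes onto a handful of tservers, which shows up as queuing; checking the table's partition scheme is worth doing before touching flags.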

How to get current kudu master or tserver flag value?

时光毁灭记忆、已成空白 submitted on 2019-12-12 04:21:32
Question: Master and tserver flags can be accessed from the Kudu web interfaces (by default http://127.0.0.1:8051/varz and http://127.0.0.1:8050/varz), but I couldn't find a way to get them from the command line. For example, how do I get tserver_master_addrs from a running kudu-tserver instance? Something like: kudu-tserver show tserver_master_addrs Answer 1: The command kudu master list will show you the master addresses, however you still need to know one of the master addresses already (I know, seems strange to me too). $ kudu
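Two command-line routes are worth sketching here. Scraping the same /varz endpoint the web UI serves works on any version; the `kudu tserver get_flags` subcommand exists in newer releases (its exact option spelling and minimum version are assumptions here). Ports below are the Kudu defaults (8050 web UI, 7050 RPC for a tserver).

```shell
# 1. Scrape the /varz endpoint the web UI already exposes:
curl -s http://127.0.0.1:8050/varz | grep tserver_master_addrs

# 2. On newer Kudu releases, ask the running tserver via the CLI
#    (subcommand and -flags option assumed from recent versions):
kudu tserver get_flags 127.0.0.1:7050 -flags=tserver_master_addrs
```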

NonRecoverableException: Not enough live tablet servers to create a table with the requested replication factor 3. 1 tablet servers are alive

孤街浪徒 submitted on 2019-12-11 15:19:27
Question: I am trying to create a Kudu table using impala-shell. Query: CREATE TABLE lol ( uname STRING, age INTEGER, PRIMARY KEY(uname) ) STORED AS KUDU TBLPROPERTIES ( 'kudu.master_addresses' = '127.0.0.1' ); CREATE TABLE t (k INT PRIMARY KEY) STORED AS KUDU TBLPROPERTIES ( 'kudu.master_addresses' = '127.0.0.1' ); But I am getting the error: ERROR: ImpalaRuntimeException: Error creating Kudu table 'impala::default.t' CAUSED BY: NonRecoverableException: Not enough live tablet servers to create a table
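The error says Kudu defaults to replication factor 3 but only 1 tablet server is alive. On a single-node test setup, one way out is to request replication factor 1 explicitly via the `kudu.num_tablet_replicas` table property (a documented Impala/Kudu property); a rough sketch:

```shell
# Sketch for a single-tserver dev cluster: set the replication factor to 1
# so table creation does not require 3 live tablet servers.
impala-shell -q "
CREATE TABLE t (k INT PRIMARY KEY)
STORED AS KUDU
TBLPROPERTIES (
  'kudu.master_addresses' = '127.0.0.1',
  'kudu.num_tablet_replicas' = '1'
);"
```

On a real cluster the better fix is to bring the missing tablet servers back up rather than lower the replication factor.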

Load a text file into Apache Kudu table?

China☆狼群 submitted on 2019-12-11 04:39:59
Question: How do you load a text file into an Apache Kudu table? Does the source file need to be in HDFS first? If Kudu doesn't share the same HDFS space as other Hadoop ecosystem programs (i.e. Hive, Impala), is there an Apache Kudu equivalent of: hdfs dfs -put /path/to/file before I try to load the file? Answer 1: The file need not be in HDFS first. It can be taken from an edge node / local machine. Kudu is similar to HBase: it is a real-time store that supports key-indexed record lookup and mutation but can't
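One common recipe is to stage the file as a text-format table in HDFS and then `INSERT ... SELECT` into the Kudu table through Impala. The sketch below assumes a CSV file and made-up table names and paths; the Kudu table (`kudu_users` here) is assumed to exist already with a matching schema.

```shell
# Stage the local file in HDFS (hypothetical paths):
hdfs dfs -mkdir -p /staging/users
hdfs dfs -put users.csv /staging/users/

# Expose it as a text table, then copy rows into the Kudu table:
impala-shell -q "
CREATE EXTERNAL TABLE users_staging (uname STRING, age INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE LOCATION '/staging/users';
INSERT INTO kudu_users SELECT uname, age FROM users_staging;"
```

Since Kudu itself has no `hdfs dfs -put` equivalent, the staging step is done in HDFS (or the rows are written directly via a client API such as Spark or the Java/Python clients).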