phoenix

phoenix / testing dates in controllers

守給你的承諾、 submitted on 2019-12-11 18:55:57
Question: I have the following basic test (using ex_machina):

# factory
def item_factory do
  %Api.Content.Item{
    title: "Some title",
    content: "Some content",
    published_at: NaiveDateTime.utc_now
  }
end

# test
test "lists all items", %{conn: conn} do
  item = insert(:item)
  conn = get conn, item_path(conn, :index)

  assert json_response(conn, 200)["data"] == [
    %{
      "content" => item.content,
      "published_at" => item.published_at,
      "title" => item.title,
      "id" => item.id
    }
  ]
end

I am getting an error on the date: left
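The excerpt is cut off at the ExUnit diff ("left: ..."), but the mismatch it describes is the usual struct-versus-string one: the JSON response carries published_at as an ISO 8601 string, while item.published_at is still a NaiveDateTime struct, so the equality check fails. A minimal sketch of one way to make the comparison symmetrical, assuming the view encodes the timestamp with the default ISO 8601 formatting:

# Hypothetical adjustment to the assertion: encode the struct the same way the
# JSON layer does, so both sides of the comparison are strings.
assert json_response(conn, 200)["data"] == [
  %{
    "content" => item.content,
    "published_at" => NaiveDateTime.to_iso8601(item.published_at),
    "title" => item.title,
    "id" => item.id
  }
]

Comparing only stable fields such as item.id, or truncating microseconds in the factory, are equally common ways to sidestep the encoding difference.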

Using Apache Phoenix and Spark to save a CSV in HBase #spark2.2 #intelliJIdea

五迷三道 submitted on 2019-12-11 17:10:31
Question: I have been trying to load data from a CSV using Spark and write it to HBase. I can do this easily in Spark 1.6, but not in Spark 2.2. I have tried multiple approaches, and ultimately everything leads to the same error with Spark 2.2:

Exception in thread "main" java.lang.IllegalArgumentException: Can not create a Path from an empty string

Any idea why this is happening? Sharing a code snippet:

def main(args: Array[String]) {
  val spark = SparkSession.builder
    .appName("PhoenixSpark
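The snippet is truncated just after the SparkSession builder, so the failing write call itself is not visible. For orientation, a sketch of the usual Spark 2.x write path through the phoenix-spark DataSource; the table name, ZooKeeper quorum, and CSV path below are placeholders, not values from the question:

import org.apache.spark.sql.{SaveMode, SparkSession}

object PhoenixSparkCsv {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("PhoenixSparkCsv")
      .getOrCreate()

    // Read the CSV with a header row; schema inference keeps the sketch short.
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/tmp/input.csv")

    // Write through the phoenix-spark DataSource.
    df.write
      .format("org.apache.phoenix.spark")
      .mode(SaveMode.Overwrite)
      .option("table", "OUTPUT_TABLE")
      .option("zkUrl", "localhost:2181")
      .save()

    spark.stop()
  }
}

The error text itself comes from Hadoop's Path constructor being handed an empty string, so a reasonable first step is checking every path- or URL-valued option in the job for a value that resolves to "".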

phoenix api options call shows empty params

自作多情 submitted on 2019-12-11 15:46:09
Question: I am building a Phoenix API for my React frontend and sending a POST request with an object of objects. The POST on its own just failed, so I figured I needed to handle OPTIONS in the API as well, but now the params are empty. Why is that?

What I send:

axios.post(PAYMENT_SERVER_URL, {
  description,
  email: token.email,
  source: token.id,
  subscriptionID
})

API router:

pipeline :api do
  plug(:accepts, ["json"])
end

scope "/api", MyApiWeb do
  pipe_through(:api)
  options("/users",
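The OPTIONS request the browser sends before a cross-origin POST is a CORS preflight; it never carries the JSON body, so routing it into a controller will always show empty params. The usual approach is to answer the preflight before the router and let the original POST deliver the payload. A sketch assuming the third-party cors_plug package, with the origin and controller names as placeholders (none of this is shown in the question):

# endpoint.ex (sketch) - cors_plug answers the OPTIONS preflight with the
# appropriate CORS headers before the request reaches the router.
plug CORSPlug, origin: ["http://localhost:3000"]

# router.ex - the JSON payload arrives with the POST, not the preflight.
scope "/api", MyApiWeb do
  pipe_through(:api)
  post("/users", UserController, :create)
end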

hbase-indexer solr numFound different from hbase table rows size

99封情书 submitted on 2019-12-11 13:40:00
Question: My team has recently been using hbase-indexer on CDH to index an HBase table column into Solr. After we deployed the hbase-indexer server (the Key-Value Store Indexer) and began testing, we found that the row counts differ between the HBase table and the Solr index. We used Phoenix to count the HBase table rows:

0: jdbc:phoenix:slave1,slave2,slave3:2181> SELECT /*+ NO_INDEX */ COUNT(1) FROM C_PICRECORD;
+------------------------------------------+
|                 COUNT(1)                 |
+--------------------------

Exception: Unread Block Data with PySpark, Phoenix and Hbase

我的未来我决定 submitted on 2019-12-11 12:44:42
Question: I am quite new to Python 2.6.x, PySpark, Spark 1.6, and HBase 1.1, and I am trying to read data from a table using the Apache Phoenix Spark plugin.

Read data:

dfRows = sparkConfig.getSqlContext().read \
    .format('org.apache.phoenix.spark') \
    .option('table', 'TableA') \
    .option('zkUrl', 'xxx:2181:/hbase-secure') \
    .load()

I also run the Python file with spark-submit, using the args and jars below:

spark-submit --master yarn-client --executor-memory 24G --driver-memory 20G --num-executors 10 --queue aQueue --jars
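The --jars list is cut off, which is unfortunate, because "unread block data" failures in this setup are commonly traced to the driver and the executors seeing different Phoenix/HBase jars rather than to the read itself. A sketch of the same read, with comments noting the classpath settings that matter; the jar path and app name are placeholders:

# Sketch only: Spark 1.6-era PySpark read through the phoenix-spark DataSource.
# The same phoenix-client jar should be visible to both driver and executors,
# e.g. via spark-submit flags such as:
#   --jars /opt/phoenix/phoenix-client.jar
#   --conf spark.driver.extraClassPath=/opt/phoenix/phoenix-client.jar
#   --conf spark.executor.extraClassPath=/opt/phoenix/phoenix-client.jar
# (the jar path above is a placeholder)
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="PhoenixRead")
sqlContext = SQLContext(sc)

dfRows = (sqlContext.read
          .format("org.apache.phoenix.spark")
          .option("table", "TableA")
          .option("zkUrl", "xxx:2181:/hbase-secure")
          .load())

print(dfRows.count())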

Phoenix udf not working

烂漫一生 submitted on 2019-12-11 06:17:25
Question: I am trying to run a custom UDF in Apache Phoenix but I am getting an error. Please help me figure out the issue. The following is my function class:

package co.abc.phoenix.customudfs;

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.phoenix.expression.Expression;
import org.apache.phoenix.expression.function.ScalarFunction;
import org.apache.phoenix.parse.FunctionParseNode.Argument;
import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
import org.apache.phoenix
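The class body is cut off right after the imports, so the actual error cannot be reconstructed from the excerpt. For reference, a minimal sketch of the shape a Phoenix scalar UDF usually takes; MY_UDF and MyUdfFunction are placeholder names, and the function simply echoes its single VARCHAR argument:

package co.abc.phoenix.customudfs;

import java.util.List;

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.phoenix.expression.Expression;
import org.apache.phoenix.expression.function.ScalarFunction;
import org.apache.phoenix.parse.FunctionParseNode.Argument;
import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
import org.apache.phoenix.schema.tuple.Tuple;
import org.apache.phoenix.schema.types.PDataType;
import org.apache.phoenix.schema.types.PVarchar;

// Placeholder UDF: registered as MY_UDF(VARCHAR), it passes its argument through.
@BuiltInFunction(name = MyUdfFunction.NAME,
        args = {@Argument(allowedTypes = {PVarchar.class})})
public class MyUdfFunction extends ScalarFunction {

    public static final String NAME = "MY_UDF";

    public MyUdfFunction() {
    }

    public MyUdfFunction(List<Expression> children) {
        super(children);
    }

    @Override
    public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
        // Evaluate the single child expression; its bytes end up in ptr.
        return getChildren().get(0).evaluate(tuple, ptr);
    }

    @Override
    public PDataType getDataType() {
        return PVarchar.INSTANCE;
    }

    @Override
    public String getName() {
        return NAME;
    }
}

Such a class is then registered from SQL along the lines of CREATE FUNCTION MY_UDF(varchar) RETURNS varchar AS 'co.abc.phoenix.customudfs.MyUdfFunction' USING JAR 'hdfs:/path/to/udf.jar', with the jar location depending on the cluster's dynamic-jar configuration.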

why does hbase KeyValueSortReducer need to sort all KeyValues

Deadly submitted on 2019-12-10 18:44:43
Question: I have recently been learning Phoenix CSV Bulk Load, and I found that the source code of org.apache.phoenix.mapreduce.CsvToKeyValueReducer can cause an OOM (Java heap out of memory) when a row has many columns (in my case, 44 columns per row with an average row size of 4 KB). What's more, this class is similar to the HBase bulk load reducer class, KeyValueSortReducer, which means the OOM may also happen with KeyValueSortReducer in my case. So I have a question about KeyValueSortReducer -
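The question is cut off, but the heap pressure it describes follows from what the reducer has to do: HFileOutputFormat needs the cells of each reduce key in sorted order, so the reducer buffers every KeyValue for that key in memory before emitting anything. A rough, paraphrased sketch of that reduce step (not the exact HBase/Phoenix source):

import java.io.IOException;
import java.util.TreeSet;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch of a KeyValueSortReducer-style reduce step. Every KeyValue that shares
// the reduce key is buffered and sorted in memory, which is why very wide rows
// or heavily fanned-out keys strain the heap.
public class SketchKeyValueSortReducer
        extends Reducer<ImmutableBytesWritable, KeyValue, ImmutableBytesWritable, KeyValue> {

    @Override
    protected void reduce(ImmutableBytesWritable row, Iterable<KeyValue> kvs, Context context)
            throws IOException, InterruptedException {
        TreeSet<KeyValue> sorted = new TreeSet<>(KeyValue.COMPARATOR);
        for (KeyValue kv : kvs) {
            try {
                sorted.add(kv.clone());   // holds every cell of the key in memory
            } catch (CloneNotSupportedException e) {
                throw new IOException(e);
            }
        }
        // Emit in sorted order, which is what HFileOutputFormat expects.
        for (KeyValue kv : sorted) {
            context.write(row, kv);
        }
    }
}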

ODBC driver for HBase/Phoenix

不打扰是莪最后的温柔 submitted on 2019-12-10 18:06:20
Question: I need to connect Tableau to HBase or Phoenix, and Tableau does not support JDBC. Bummer! I've read about the proprietary Simba driver but haven't seen any reports of people using it. I don't feel like forking over money when it's not ideal, and my employer feels the same way. Is there another way to connect Tableau to HBase or Phoenix? How are other people doing it? I don't like the idea of using Hive to connect to HBase, because one of the main reasons to move away from Hive is its atrocious

Apache Phoenix + Pentaho Mondrian wrong join order

試著忘記壹切 submitted on 2019-12-10 12:27:36
Question: I am using Apache Phoenix 4.5.2 from the Cloudera Labs distribution, installed on a CDH 5.4 cluster. Now I am trying to use it from a Pentaho BA 5.4 server with embedded Mondrian and the Saiku plugin installed. I plan to use it as the aggregator for the Pentaho Mondrian ROLAP engine. I have imported about 65 million facts into the fact table via a slightly customized Pentaho Data Integration (if anyone is interested: I added UPSERT to the Table Output step, set Commit size to -1, set the thin driver

How do we place a Phoenix Singleton at the same address? C++

谁说胖子不能爱 submitted on 2019-12-10 11:47:53
Question: The code below illustrates the Phoenix Singleton described in Andrei Alexandrescu's Modern C++ Design.

Singleton& Instance()
{
    if (!pInstance_)
    {
        // Check for dead reference
        if (destroyed_)
        {
            OnDeadReference();
        }
        else
        {
            // First call - initialize
            Create();
        }
    }
    return *pInstance_;
}

void Singleton::OnDeadReference()
{
    // Obtain the shell of the destroyed singleton
    Create();
    // Now pInstance_ points to the "ashes" of the singleton
    // - the raw memory that the singleton was seated in.
    // Create a
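The excerpt stops right where the book answers the title question. In the book's version the instance lives in function-local static storage, so Create() always hands back the same address, and OnDeadReference() rebuilds the object in that exact storage with placement new. A compilable sketch of that shape, simplified relative to the book, so the details here are illustrative rather than a quote:

// Sketch of the Phoenix Singleton idea: the instance occupies fixed static
// storage, and a "dead" singleton is resurrected in place at the same address.
#include <cstdlib>   // std::atexit
#include <new>       // placement new

class Singleton {
public:
    static Singleton& Instance() {
        if (!pInstance_) {
            if (destroyed_) {
                OnDeadReference();   // resurrect at the old address
            } else {
                Create();            // first call - initialize
            }
        }
        return *pInstance_;
    }

private:
    Singleton() = default;
    ~Singleton() {
        pInstance_ = nullptr;
        destroyed_ = true;
    }

    static void Create() {
        // The local static lives in fixed storage; its address never changes.
        static Singleton theInstance;
        pInstance_ = &theInstance;
    }

    static void OnDeadReference() {
        Create();                        // pInstance_ now points at the dead shell
        new (pInstance_) Singleton;      // placement new: rebuild in the same storage
        std::atexit(KillPhoenixSingleton);
        destroyed_ = false;
    }

    static void KillPhoenixSingleton() {
        pInstance_->~Singleton();        // destroy the resurrected instance manually
    }

    static Singleton* pInstance_;
    static bool destroyed_;
};

Singleton* Singleton::pInstance_ = nullptr;
bool Singleton::destroyed_ = false;

The address stays the same because Create() never allocates: it only re-points pInstance_ at the same static block, and placement new reconstructs the object inside that block.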