fiware-cosmos

How to configure Cygnus in relation to Orion and Cosmos

Submitted by 孤人 on 2019-12-24 05:28:10
Question: We have Orion, Cygnus, and Cosmos installed, and are trying to get the connections between them working: notifications from the Orion broker are to be forwarded to Cygnus, which in turn writes them to the Cosmos database. We know that Orion is working properly (it has been tested and used before), and we have tested Cygnus with the test Python script (as explained in https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/quick_start_guide.md). Currently we are trying to configure Cygnus
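A minimal sketch of the Orion-to-Cygnus link described in this excerpt, using the legacy NGSIv1 subscribeContext API that Cygnus consumed at the time. The hostnames, ports, entity id/type, attribute name and service headers are placeholders, not values from the question:

# Sketch: subscribe Cygnus to Orion notifications (NGSIv1 subscribeContext).
# Hostnames, ports, entity id/type and attribute names are assumptions.
import json
import requests

ORION_URL = "http://localhost:1026/v1/subscribeContext"    # assumed Orion endpoint
CYGNUS_NOTIFY = "http://localhost:5050/notify"              # assumed Cygnus HTTP source

subscription = {
    "entities": [{"type": "Room", "isPattern": "false", "id": "Room1"}],
    "attributes": ["temperature"],
    "reference": CYGNUS_NOTIFY,              # where Orion will send notifications
    "duration": "P1M",                       # subscription valid for one month
    "notifyConditions": [{"type": "ONCHANGE", "condValues": ["temperature"]}],
    "throttling": "PT5S"
}

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Fiware-Service": "def_serv",            # should match the Cygnus service settings
    "Fiware-ServicePath": "/def_servpath"
}

resp = requests.post(ORION_URL, headers=headers, data=json.dumps(subscription))
print(resp.status_code, resp.text)           # expect a subscribeResponse with a subscriptionId

Once this subscription exists, any change to the watched attribute should produce a notification on the Cygnus port, which is the piece the Cygnus configuration then routes to Cosmos.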

SSH access for the headnode of FIWARE-Cosmos

Submitted by 北城以北 on 2019-12-24 05:07:06
Question: I am following this guide on Hadoop/FIWARE-Cosmos and I have a question about the Hive part. I can access the old cluster's (cosmos.lab.fiware.org) headnode through SSH, but I cannot do the same for the new cluster. I tried both storage.cosmos.lab.fiware.org and computing.cosmos.lab.fiware.org and failed to connect. My intention in connecting via SSH was to test Hive queries on our data through the Hive CLI. After failing to do so, I checked and was able to connect to port 10000 of
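As a quick way to narrow down the problem described above, the following sketch (my own diagnostic, not part of the guide) checks plain TCP reachability of the SSH and HiveServer2 ports on the two hostnames mentioned; ports 22 and 10000 are the usual defaults and are assumptions here:

# Sketch: check TCP reachability of SSH (22) and HiveServer2 (10000)
# on the two Cosmos headnodes mentioned in the question.
import socket

HOSTS = ["storage.cosmos.lab.fiware.org", "computing.cosmos.lab.fiware.org"]
PORTS = {22: "SSH", 10000: "HiveServer2"}

for host in HOSTS:
    for port, name in PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} ({name}) -> reachable")
        except OSError as exc:
            print(f"{host}:{port} ({name}) -> NOT reachable ({exc})")

If port 10000 answers but port 22 does not, SSH is simply closed on the new cluster and the Hive tests have to go through HiveServer2 rather than the CLI on the headnode.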

Cygnus can not persist data on Cosmos global instance

Submitted by 徘徊边缘 on 2019-12-20 02:48:07
Question: When trying to persist an entity from Cygnus to the Cosmos global instance, it fails. Looking at the log file I see something like this:
12 Nov 2015 15:31:50,006 DEBUG [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.http.impl.conn.DefaultClientConnection.sendRequestHeader:273) - >> GET /webhdfs/v1/user/ms/def_serv/def_servpath/6_registervalues/6_registervalues.txt?op=getfilestatus&user.name=ms HTTP/1.1
12 Nov 2015 15:31:50,006 DEBUG [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache
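To see what the WebHDFS endpoint actually returns for the request in that log line, a sketch like the one below reproduces the GETFILESTATUS call outside Cygnus. The host name and port are assumptions (use whatever your Cygnus HDFS sink is configured with), while the path and user come from the log excerpt:

# Sketch: reproduce the WebHDFS GETFILESTATUS request seen in the Cygnus log.
# Host and port are assumptions; adjust to your Cygnus hdfs sink configuration.
import requests

HOST = "cosmos.lab.fiware.org"   # assumed Cosmos WebHDFS/HttpFS endpoint
PORT = 14000                     # assumed HttpFS port; plain WebHDFS commonly uses 50070
HDFS_PATH = "/user/ms/def_serv/def_servpath/6_registervalues/6_registervalues.txt"
USER = "ms"

url = f"http://{HOST}:{PORT}/webhdfs/v1{HDFS_PATH}"
resp = requests.get(url, params={"op": "GETFILESTATUS", "user.name": USER}, timeout=10)

# 200 -> the file exists; 404 -> Cygnus will try to create it; 401/403 -> credentials/permissions.
print(resp.status_code)
print(resp.text)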

How to scale Orion GE?

Submitted by ╄→гoц情女王★ on 2019-12-19 08:53:59
Question: I have deployed an Orion instance in FILAB and I have configured the Cygnus injector in order to store information in Cosmos. But let us imagine a scenario in which the number of entities increases drastically. In this hypothetical scenario, one instance of Orion GE wouldn't be enough, so it would be necessary to deploy more instances. What would the scaling procedure be? Taking into account that the maximum quotas are 5 VM instances, 10 VCPUs, 100 GB of hard disk, 10240 MB of memory and 1 public IP, I
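Purely as an illustration, and not FIWARE's documented procedure: with a single public IP, additional Orion instances would sit on the internal network and a front component would spread update traffic across them. The sketch below shows one such idea, partitioning updates by entity id across assumed internal Orion endpoints:

# Sketch: spread NGSIv1 updates across several Orion instances by hashing the entity id.
# The internal IP addresses, entity type and attribute are placeholders.
import hashlib
import requests

ORION_BACKENDS = [
    "http://10.0.0.11:1026",   # assumed internal Orion instance #1
    "http://10.0.0.12:1026",   # assumed internal Orion instance #2
]

def backend_for(entity_id: str) -> str:
    """Pick a backend deterministically so the same entity always hits the same Orion."""
    digest = hashlib.sha1(entity_id.encode()).digest()
    return ORION_BACKENDS[digest[0] % len(ORION_BACKENDS)]

def update_entity(entity_id: str, temperature: float) -> int:
    payload = {
        "contextElements": [{
            "type": "Room", "isPattern": "false", "id": entity_id,
            "attributes": [{"name": "temperature", "type": "float", "value": str(temperature)}],
        }],
        "updateAction": "APPEND",
    }
    url = backend_for(entity_id) + "/v1/updateContext"
    resp = requests.post(url, json=payload, headers={"Accept": "application/json"})
    return resp.status_code

print(update_entity("Room1", 23.5))

Note that each Orion instance keeps its own database in this layout, so queries and subscriptions must follow the same partitioning rule; a shared backend or an off-the-shelf load balancer on the single public IP are the other obvious directions.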

Connecting SpagoBI to Cosmos

Submitted by 和自甴很熟 on 2019-12-14 03:59:40
Question: I'm trying to connect SpagoBI to Cosmos via the Hive JDBC driver. The connection works, but I need to add a JAR (json-serde-1.3.1-SNAPSHOT-jar-with-dependencies.jar) to be able to execute MapReduce when querying. The problem is that SpagoBI doesn't support multiple queries in the definition of a dataset, so I cannot add the JAR before executing the actual SELECT (the semicolon is interpreted as part of the path of the JAR). What can I do? Is there a way to permanently add the JAR so I don
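Outside SpagoBI, the usual pattern is to run ADD JAR and the SELECT as two separate statements on the same Hive session, rather than as one semicolon-separated string. A sketch of that pattern using PyHive (an assumption; SpagoBI itself goes through the Java JDBC driver), with placeholder host, credentials, jar path and table name:

# Sketch: issue ADD JAR and the actual query on the same Hive session.
# Host, port, user, jar path and table name are placeholders.
from pyhive import hive   # assumes the PyHive package is installed

conn = hive.Connection(host="cosmos.lab.fiware.org", port=10000,
                       username="myuser", database="default")
cur = conn.cursor()

# First statement: register the SerDe jar for this session only.
cur.execute("ADD JAR /path/to/json-serde-1.3.1-SNAPSHOT-jar-with-dependencies.jar")

# Second statement: the query that needs the SerDe for its MapReduce job.
cur.execute("SELECT * FROM my_json_table LIMIT 10")
for row in cur.fetchall():
    print(row)

conn.close()

The session-scoped ADD JAR is what a single-statement dataset definition cannot express; the server-side alternative is having the jar placed on HiveServer2's own classpath, which on a shared instance needs the administrator.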

Fiware Cosmos Hive Authorization Issue

Submitted by Deadly on 2019-12-11 03:57:35
Question: I'm using a shared instance of FIWARE Cosmos (meaning I don't have root privileges). Until today I have successfully accessed and managed tables in Hive, both remotely using JDBC and through the Hive CLI. But now I'm getting this error when starting the Hive CLI:
log4j:ERROR Could not instantiate class [org.apache.hadoop.hive.shims.HiveEventCounter]. java.lang.RuntimeException: Could not load shims in class org.apache.hadoop.log.metrics.EventCounter at org.apache.hadoop.hive.shims.ShimLoader.createShim
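Not a fix, but a way to see where the classpath problem in that trace comes from: the shims error usually means the class org.apache.hadoop.log.metrics.EventCounter is missing from, or shadowed on, the classpath the CLI is using. The sketch below (my own diagnostic; the jar directories are assumptions) scans a set of jars and reports which ones contain that class:

# Sketch: find which jars on the (assumed) Hadoop/Hive lib paths contain
# org.apache.hadoop.log.metrics.EventCounter, to spot missing or duplicate copies.
import glob
import zipfile

CLASS_ENTRY = "org/apache/hadoop/log/metrics/EventCounter.class"
JAR_DIRS = ["/usr/lib/hadoop/*.jar", "/usr/lib/hive/lib/*.jar"]  # assumed locations

hits = []
for pattern in JAR_DIRS:
    for jar_path in glob.glob(pattern):
        try:
            with zipfile.ZipFile(jar_path) as jar:
                if CLASS_ENTRY in jar.namelist():
                    hits.append(jar_path)
        except zipfile.BadZipFile:
            pass

print("Jars providing EventCounter:" if hits else "EventCounter not found on these paths")
for path in hits:
    print(" ", path)

Zero hits or several conflicting hits (for example a stray Hadoop jar in the user's home directory or HIVE_AUX_JARS_PATH) is the kind of evidence worth sending to the instance administrator, since without root privileges the server-side jars cannot be changed by the user.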

Connectivity problems between FILAB VMs and Cosmos global instance

Submitted by 眉间皱痕 on 2019-12-10 19:39:10
Question: I have the same kind of connectivity problem discussed in the question "Cygnus can not persist data on Cosmos global instance". However, I have found no solution after reading it. I have recently deployed two virtual machines in FILAB (both VMs contain Orion ContextBroker 0.26.1 and Cygnus 0.11.0). When I try to persist data on Cosmos via Cygnus, I get the following error message (the same in both VMs):
2015-12-17 19:03:00,221 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR -
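To tell whether this is a network/firewall problem between the FILAB VMs and Cosmos or an HTTP-level error, a sketch like the following separates "cannot connect at all" from "connected but got an error response". The host and port are assumptions, as in the earlier WebHDFS example; use the values from your Cygnus HDFS sink configuration:

# Sketch: distinguish a blocked/unreachable Cosmos endpoint from an HTTP-level error.
# Host, port and path are assumptions.
import requests

URL = "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user"  # assumed endpoint

try:
    resp = requests.get(URL, params={"op": "LISTSTATUS"}, timeout=10)
    print("Reached the endpoint, HTTP", resp.status_code)   # auth/path problems show up here
except requests.exceptions.ConnectTimeout:
    print("Connection timed out: likely a firewall/security-group issue from the FILAB VM")
except requests.exceptions.ConnectionError as exc:
    print("Connection refused or DNS failure:", exc)

A timeout from inside the VM but not from outside FILAB points at the VM's outbound rules or security group rather than at the Cygnus configuration itself.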

Start Cosmos-GUI

Submitted by 断了今生、忘了曾经 on 2019-12-10 17:56:00
Question: I want to install Cosmos. I have installed Apache Hadoop 2.6 on a single node, and my next step was to install cosmos-gui. So I followed the official installation guide (https://github.com/telefonicaid/fiware-cosmos/blob/develop/cosmos-gui/README.md#installation), but the npm start command doesn't work. Error:
fs.js:432 return binding.open(pathModule._makeLong(path), stringToFlags(flags), mode); ^ Error: ENOENT, no such file or directory '' at Object.fs.openSync (fs.js:432:18) at Object.fs
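The ENOENT with an empty filename ('') in that trace suggests the GUI is trying to open a file whose path is set to an empty string somewhere in its configuration. A small sketch (my own diagnostic; the configuration file name and location are assumptions) that flags empty-looking values before running npm start again:

# Sketch: flag configuration entries whose value is an empty string, since
# opening '' fails with exactly this ENOENT error. The file name is assumed.
CONFIG_FILE = "cosmos-gui/conf/cosmos-gui.json"   # assumed location; adjust to your checkout

with open(CONFIG_FILE, encoding="utf-8") as handle:
    for number, line in enumerate(handle, start=1):
        value_part = line.rstrip().rstrip(",")
        if (":" in value_part or "=" in value_part) and value_part.endswith(('""', "''")):
            print(f"line {number}: empty value -> {value_part.strip()}")

Any key it reports (typically a certificate, key or log file path) is a candidate for the empty path that fs.openSync is being handed.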

PEP proxy config file for integration of IDM GE, PEP proxy and Cosmos big data

Submitted by 本秂侑毒 on 2019-12-04 04:13:37
Question: I have a question regarding the PEP proxy config file. My Keystone service is running on 192.168.4.33:5000. My Horizon service is running on 192.168.4.33:443. My WebHDFS service is running on 192.168.4.180:50070, and I intend to run the PEP proxy on 192.168.4.180:80. But what I don't get is what I should put in place of config.account_host. Inside the MySQL database for the KeyRock manager there is an "idm" user with the "idm" password, and every request I make via curl against the Identity Manager works. But with this config: config
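One way to sanity-check which host and credentials belong in the PEP proxy settings is to request a token directly from the Keystone endpoint mentioned above. The sketch assumes the service at 192.168.4.33:5000 speaks the standard Keystone v2.0 tokens API and that the "idm"/"idm" credentials from the question are the ones the proxy should use; both are assumptions to verify, not documented values:

# Sketch: verify that the Keystone/IdM endpoint and the proxy credentials work,
# before wiring them into the PEP proxy configuration.
import requests

KEYSTONE = "http://192.168.4.33:5000"        # the Keystone service from the question
CREDENTIALS = {
    "auth": {
        "passwordCredentials": {"username": "idm", "password": "idm"}
    }
}

resp = requests.post(f"{KEYSTONE}/v2.0/tokens", json=CREDENTIALS, timeout=10)
print("HTTP", resp.status_code)
if resp.ok:
    token = resp.json()["access"]["token"]["id"]
    print("Got a token:", token[:12], "...")  # this host/credential pair is what the proxy needs

If this call succeeds, the host portion of that URL is the account/Keystone host the proxy configuration should point at, and the credentials are the ones the proxy should authenticate with; if it fails, the problem is in the IdM setup rather than in the proxy config file.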