mnesia

Is it possible to develop a powerful web search engine using Erlang, Mnesia & Yaws?

99封情书 submitted on 2019-12-02 19:47:19
I am thinking of developing a web search engine using Erlang, Mnesia & Yaws. Is it possible to build a powerful and fast web search engine with this software? What would it take to accomplish this, and what should I start with? Erlang can build the most powerful web crawlers today. Let me take you through my simple crawler. Step 1. I create a simple parallelism module, which I call mapreduce:

-module(mapreduce).
-export([compute/2]).
%%=====================================================================
%% usage example
%% Module = string
%% Function = tokens
%% List_of_arg_lists = [["file
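The excerpt above is cut off mid-listing. A minimal sketch of how such a mapreduce module could look, assuming compute/2 takes a {Module, Function} pair and a list of argument lists; the spawn/collect logic is my reconstruction, not the original author's:

```erlang
%% Parallel map sketch: spawn one worker per argument list, apply
%% Module:Function to each, and collect results in the original order.
-module(mapreduce).
-export([compute/2]).

%% usage example:
%%   mapreduce:compute({string, tokens}, [["a b", " "], ["c,d", ","]])
compute({Module, Function}, List_of_arg_lists) ->
    Parent = self(),
    %% one lightweight process per work item
    Pids = [spawn(fun() ->
                      Parent ! {self(), apply(Module, Function, Args)}
                  end) || Args <- List_of_arg_lists],
    %% selective receive by Pid preserves input order in the result list
    [receive {Pid, Result} -> Result end || Pid <- Pids].
```

For a crawler, each argument list would carry a URL to fetch and parse; the per-item isolation means one failed fetch does not take down the batch.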

Online mnesia recovery from network partition [closed]

我与影子孤独终老i submitted on 2019-12-02 18:15:29
Is it possible to recover from a network partition in an mnesia cluster without restarting any of the nodes involved? If so, how does one go about it? I'm interested specifically in knowing: how this can be done with the standard OTP mnesia (v4.4.7); what custom code, if any, one needs to write to make this happen (e.g. subscribe to mnesia running_partitioned_network events, determine a new master, merge records from non-master to master, force-load tables from the new master, clear the running partitioned network event -- example code would be greatly appreciated). Or, that mnesia categorically does
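The event-driven flow the question outlines could look roughly like this against stock OTP mnesia. This is a hedged sketch: the table name, the master-election rule, and the merge step are placeholders for application-specific logic, not part of any standard API beyond the mnesia calls shown.

```erlang
%% Sketch: detect a healed partition via mnesia system events, declare
%% one side's copy authoritative, and force-load from it.
-module(partition_recovery).
-export([init/0, handle_event/1]).

init() ->
    %% delivers {mnesia_system_event, ...} messages to this process
    mnesia:subscribe(system).

handle_event({mnesia_system_event,
              {inconsistent_database, running_partitioned_network, Node}}) ->
    Master = pick_master(node(), Node),
    %% declare the chosen copy authoritative for this table
    ok = mnesia:set_master_nodes(my_table, [Master]),
    yes = mnesia:force_load_table(my_table),
    merge_from(Node, my_table);       %% application-specific record merge
handle_event(_Other) ->
    ok.

pick_master(A, B) -> erlang:min(A, B).  %% placeholder: lowest name wins
merge_from(_Node, _Tab) -> ok.          %% stub: copy non-master records over
```

The hard part mnesia does not do for you is the merge itself: deciding, record by record, which side's writes survive.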

Unintentionally intercepting Mnesia's transactional retries with try/catch results in all kinds of weirdness

…衆ロ難τιáo~ submitted on 2019-12-01 21:01:31
So, I was having all kinds of trouble with CRUD operations on sets of records in one transaction. It led me to post 2 questions here, Trouble and MoreTrouble. However, I think both of those issues were created by the following: within my transactions, I enclosed my mnesia:writes, reads, etc. in try/catch blocks that caught everything, including the aborts mnesia issues as part of its deadlock-avoidance algorithm. I.e.:

insert(Key, Value) ->
    F = fun() ->
            case sc_store:lookup(Key) of
                {ok, _Value} -> sc_store:replace(Key, Value);
                {error, not_found} -> sc_store:insert(Key, Value)
            end
        end,
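The usual fix is to keep try/catch out of the transaction fun entirely, so mnesia's retry machinery is not intercepted. A hedged sketch reusing the question's sc_store names (their implementations are assumed to wrap mnesia:read/mnesia:write):

```erlang
%% Let mnesia:transaction/1 own retries: an aborted transaction inside
%% the fun must propagate, because mnesia restarts the fun itself when
%% its deadlock-avoidance algorithm kills a transaction.
insert(Key, Value) ->
    F = fun() ->
            %% no try/catch here -- aborts must reach mnesia
            case sc_store:lookup(Key) of
                {ok, _Value}       -> sc_store:replace(Key, Value);
                {error, not_found} -> sc_store:insert(Key, Value)
            end
        end,
    %% inspect the outcome only after the transaction has finished
    case mnesia:transaction(F) of
        {atomic, Result}  -> {ok, Result};
        {aborted, Reason} -> {error, Reason}
    end.
```

If you need cleanup on failure, branch on the {aborted, Reason} return value rather than catching inside F.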

Setting up a multi-instance RabbitMQ cluster on a single machine

耗尽温柔 submitted on 2019-12-01 08:46:29
1. Install the single-node version.
2. To build the cluster, first wipe the single-node history: delete everything under /var/lib/rabbitmq/mnesia.
3. Start 3 instances. # Because the web management plugin is enabled, each node must also be given its own plugin port; otherwise additional nodes cannot start, since the port is already in use.
RABBITMQ_NODE_PORT=5672 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15672}]" RABBITMQ_NODENAME=rabbit rabbitmq-server -detached
RABBITMQ_NODE_PORT=5673 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15673}]" RABBITMQ_NODENAME=rabbit2 rabbitmq-server -detached
RABBITMQ_NODE_PORT=5674 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15674}]" RABBITMQ_NODENAME=rabbit3 rabbitmq-server -detached
4
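The listing is cut off at step 4. The conventional next step is joining the extra nodes to the first one; a sketch, assuming the hostname is `localhost` and the node names from step 3:

```shell
# Join rabbit2 and rabbit3 to rabbit (app must be stopped while joining).
rabbitmqctl -n rabbit2 stop_app
rabbitmqctl -n rabbit2 join_cluster rabbit@localhost
rabbitmqctl -n rabbit2 start_app

rabbitmqctl -n rabbit3 stop_app
rabbitmqctl -n rabbit3 join_cluster rabbit@localhost
rabbitmqctl -n rabbit3 start_app

# Verify all three nodes appear in the cluster
rabbitmqctl -n rabbit cluster_status
```

All three nodes must share the same /var/lib/rabbitmq/.erlang.cookie for the join to succeed.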

ejabberd clustering, Slave doesn't work when master goes down

断了今生、忘了曾经 submitted on 2019-12-01 07:17:15
I have set up ejabberd clustering, one master and one slave, as described here. I have copied .erlang.cookie and the database files from master to slave. Everything is working fine. The issue is when I stop the master node: no requests get routed to the slave, and when I try to restart the slave node, it does not start once it is down. I am stuck here, please help me out. Thanks. This is the standard behaviour of Mnesia. If the node you start was not the last one stopped in the cluster, then it has no way of knowing whether it holds the latest, most up-to-date data. The process to start a
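One way the stuck slave is commonly brought up alone is to tell its mnesia to trust its own replicas. This is a hedged sketch, not an ejabberd-blessed procedure; it risks losing writes the dead master had that the slave never saw, so verify against your ejabberd and mnesia versions first.

```erlang
%% Sketch: start the surviving node on its own copies even though it
%% was not the last node stopped in the cluster.
recover_slave() ->
    mnesia:start(),
    %% declare this node master for all tables it holds a copy of
    ok = mnesia:set_master_nodes([node()]),
    %% force-load each local table instead of waiting for the dead node
    [mnesia:force_load_table(T)
     || T <- mnesia:system_info(local_tables), T =/= schema].
```

When the old master comes back, it should be restarted and allowed to sync from the slave, which is now the authoritative copy.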

Remove not_exist_already node from mnesia cluster(scheme)

自作多情 submitted on 2019-11-30 06:04:20
I have a bad node (it doesn't exist) in the mnesia cluster data when I run:

> mnesia:system_info(db_nodes)
[bad@node, ...]

How do I remove it from the cluster? I tried:

> mnesia:del_table_copy(scheme, bad@node).
{aborted,{not_active,"All replicas on diskfull nodes are not active yet"...

What does this mean? How can I fix it? Update: before removing a node from the schema, we need to stop mnesia on it. I had a similar problem years ago. What you are trying to do is remove an offline node, which, as far as I am aware, was impossible in earlier versions of mnesia. You can, however, connect to the cluster using
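A hedged sketch of the usual removal recipe, run from any live cluster node. Note that the real table name is `schema`, not `scheme` as in the attempt above, and mnesia must not be running on the node being removed (a node that no longer exists already satisfies that):

```erlang
%% Remove a dead node from the mnesia schema so it stops appearing
%% in mnesia:system_info(db_nodes).
remove_dead_node(Bad) ->
    %% harmless if Bad is unreachable; ensures mnesia is down there
    rpc:call(Bad, mnesia, stop, []),
    {atomic, ok} = mnesia:del_table_copy(schema, Bad).
```

The {not_active, ...} abort in the question indicates some disc-copy table replicas were not yet loaded when the call was made; waiting for (or force-loading) those tables before retrying is the typical workaround.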

Very Large Mnesia Tables in Production

自闭症网瘾萝莉.ら submitted on 2019-11-29 21:48:52
We are using Mnesia as the primary database for a very large system. Mnesia fragmented tables have behaved very well over the testing period. The system has about 15 tables, each replicated across 2 sites (nodes), and each table is highly fragmented. During the testing phase (which focused on availability, efficiency and load tests), we accepted that Mnesia, with its many advantages for complex structures, will do for us, given that all the applications running on top of the service are Erlang/OTP apps. We are running Yaws 1.91 as the main web server. To efficiently configure the fragmented tables, we
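Fragmented-table setup of the kind described is driven by the frag_properties option to mnesia:create_table/2. A minimal sketch with assumed names and counts (the record, fragment count, and replica count here are illustrative, not the poster's actual configuration):

```erlang
%% Sketch: a fragmented table replicated on disc across a node pool,
%% e.g. 32 fragments with 2 disc copies of each fragment.
create_frag_table(Nodes) ->
    mnesia:create_table(my_table,
        [{attributes, [key, value]},
         {frag_properties,
          [{node_pool, Nodes},      %% nodes fragments may be placed on
           {n_fragments, 32},       %% total number of fragments
           {n_disc_copies, 2}]}]).  %% disc replicas per fragment
```

Reads and writes against such a table go through mnesia:activity/4 with the mnesia_frag access module, which hashes each key to its fragment transparently.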
