mnesia

Safe, Sequential And Scalable Counters In Mnesia

Submitted by 橙三吉。 on 2019-12-04 14:25:29
Question: I am writing an application in Erlang/OTP and want to use sequential counters in a version-recording system. I first implemented them with mnesia:dirty_update_counter, but the experience of using it drove out these hard requirements. The counters must have the following properties: be strictly sequential (1 followed by 2 followed by 3, etc.); the sequence is shared across a distributed set of systems, and if I have you down as a '3' and you come in as a '5', I need to know we have lost some comms and should resync; and be safe with a distributed database. mnesia:dirty_update_counter meets neither of these
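One way to get the gap-free behaviour the question asks for is to move the increment inside a transaction instead of using the dirty API. The sketch below is an assumption, not code from the thread; it presumes a counter table created as `mnesia:create_table(seq, [{attributes, [key, value]}])`. Concurrent callers serialize on the write lock, so every caller sees a strictly sequential value, at the cost of dirty-counter throughput:

```erlang
%% Minimal sketch of a transactional counter (table name `seq` is an
%% assumption). The read-increment-write runs inside one transaction,
%% so the sequence has no gaps or duplicates among committed calls.
-module(seq_counter).
-export([next/1]).

next(Key) ->
    {atomic, N} = mnesia:transaction(fun() ->
        New = case mnesia:read(seq, Key, write) of
                  []                -> 1;
                  [{seq, Key, Cur}] -> Cur + 1
              end,
        mnesia:write({seq, Key, New}),
        New
    end),
    N.
```

This trades speed for safety: each call pays full transaction overhead, which is exactly the opposite trade-off to mnesia:dirty_update_counter.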

Mnesia: unexpectedly getting aborted, cyclic transactions

Submitted by 匆匆过客 on 2019-12-04 11:57:41
I have 5 processes that insert/update the same 3 records in a mnesia table. Each of these processes does its inserts/updates within a single transaction. I have 5 other processes that read these very same 3 records, also within a single transaction. Unless I lock the entire table as part of the multi-record transaction, I get an {aborted, {cyclic, node....}} error. My intuition is that my use-case is ordinary and should not, in and of itself, result in an aborted transaction. Can someone help me with my bone-headed thinking? All I am doing is inserting (or reading) multiple rows in a cache
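A common source of cyclic aborts is upgrading a read lock to a write lock mid-transaction. The sketch below is an assumption, not the poster's code (table name `cache` and the counter-style records are hypothetical): it reads each row it intends to update with mnesia:wread/1, which takes the write lock up front, and touches keys in a fixed sorted order so concurrent writers acquire locks consistently:

```erlang
%% Sketch: take write locks up front with mnesia:wread/1 and lock keys
%% in a fixed order, two habits that reduce {aborted, {cyclic, ...}}
%% deadlock retries between concurrent multi-row transactions.
-module(cache_upd).
-export([bump/1]).

bump(Keys) ->
    mnesia:transaction(fun() ->
        lists:foreach(fun(K) ->
            case mnesia:wread({cache, K}) of   %% read + write lock
                []              -> mnesia:write({cache, K, 1});
                [{cache, K, N}] -> mnesia:write({cache, K, N + 1})
            end
        end, lists:sort(Keys))                 %% fixed lock order
    end).
```

Note that mnesia:transaction/1 normally restarts a transaction that is aborted by the deadlock detector, so cyclic aborts surfacing to the caller usually indicate something else in play, such as mixing dirty and transactional access.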

Best way to print out Mnesia table

Submitted by 我的未来我决定 on 2019-12-04 09:27:23
Question: I tried this code snippet:

print_next(Current) ->
    case mnesia:dirty_next(muppet, Current) of
        '$end_of_table' ->
            io:format("~n", []),
            ok;
        Next ->
            [Muppet] = mnesia:dirty_read({muppet, Next}),
            io:format("~p~n", [Muppet]),
            print_next(Next),
            ok
    end.

print() ->
    case mnesia:dirty_first(muppet) of
        '$end_of_table' ->
            ok;
        First ->
            [Muppet] = mnesia:dirty_read({muppet, First}),
            io:format("~p~n", [Muppet]),
            print_next(First),
            ok
    end.

But it is so long. I could also use dirty_all_keys and then iterate
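A shorter variant can be sketched with mnesia:foldl/3, which visits every record of a table in one pass inside a transaction (this is an illustration using the `muppet` table from the snippet, not an answer from the thread):

```erlang
%% Sketch: print every record in the muppet table with one fold,
%% replacing the manual dirty_first/dirty_next key walk above.
-module(muppet_print).
-export([print/0]).

print() ->
    {atomic, ok} = mnesia:transaction(fun() ->
        mnesia:foldl(fun(Muppet, ok) ->
                         io:format("~p~n", [Muppet]),
                         ok
                     end, ok, muppet)
    end),
    ok.
```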

how do we efficiently handle time related constraints on mnesia records?

Submitted by 点点圈 on 2019-12-03 20:51:24
I am writing records into mnesia which should be kept there only for an allowed time (24 hours). After 24 hours, before a user modifies part of them, the system should remove them automatically. For example, a user is given free airtime (for voice calls) which they should use within a given time. If they do not use it within 24 hours, the system should remove the resource reservation from the user's record. Now, this has brought in timers. An example of a record structure is:

-record(free_airtime, {
    reference_no,
    timer_object,   %% value returned by timer:apply_after/4
    amount
}).

The timer object in
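The record layout above suggests arming the expiry timer when the grant is written and storing the returned timer reference so a later top-up could cancel and re-arm it. The module below is a sketch under that assumption (function names `grant/2` and `expire/1` are hypothetical, not from the question):

```erlang
%% Sketch: arm a 24-hour expiry with timer:apply_after/4 when writing
%% the record, keeping the {ok, TRef} handle in timer_object so the
%% timer can be cancelled if the airtime is used or topped up.
-module(airtime).
-export([grant/2, expire/1]).

-define(TTL, 24 * 60 * 60 * 1000).   %% 24 hours in milliseconds

-record(free_airtime, {reference_no, timer_object, amount}).

grant(Ref, Amount) ->
    {ok, TRef} = timer:apply_after(?TTL, ?MODULE, expire, [Ref]),
    mnesia:transaction(fun() ->
        mnesia:write(#free_airtime{reference_no = Ref,
                                   timer_object = TRef,
                                   amount = Amount})
    end).

expire(Ref) ->
    mnesia:transaction(fun() -> mnesia:delete({free_airtime, Ref}) end).
```

One caveat with per-record timers is that they live only as long as the node; a periodic sweep comparing a stored creation timestamp against the TTL survives restarts, at the cost of coarser expiry granularity.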

Managing incremental counters in mnesia DBMS?

Submitted by 徘徊边缘 on 2019-12-03 17:32:53
I have realised that mnesia does not support an auto-increment feature as MySQL and other RDBMSs do. The counters discussed in the mnesia documentation are not really well explained. For example, I have so far found only one function in the entire documentation which manipulates counters: mnesia:dirty_update_counter({Tab::atom(), Key::any()}, Val::positive_integer()). This has disturbed me for a while because it works with records of type {TabName, Key, Integer}. This is also unclear, possibly because no Erlang book or mnesia documentation provides an example to explain it. This has forced me to implement
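The {TabName, Key, Integer} shape just means a two-attribute table whose value column is an integer. A minimal illustration (table and key names are hypothetical):

```erlang
%% Sketch: a counter table for dirty_update_counter. Each call
%% atomically adds Val to the stored integer -- a missing row counts
%% as 0 -- and returns the new value.
ok = mnesia:start(),
{atomic, ok} = mnesia:create_table(id_seq,
                                   [{attributes, [name, next_id]}]),
1 = mnesia:dirty_update_counter(id_seq, user_id, 1),
2 = mnesia:dirty_update_counter(id_seq, user_id, 1),
7 = mnesia:dirty_update_counter(id_seq, user_id, 5).
```

The call is atomic per counter even without a transaction, which makes it suitable for ID generation on a single node; the sequential-counters question above discusses why it is not sufficient across a partitioned cluster.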

Is it possible to develop a powerful web search engine using Erlang, Mnesia & Yaws?

Submitted by 岁酱吖の on 2019-12-03 06:25:28
Question: I am thinking of developing a web search engine using Erlang, Mnesia & Yaws. Is it possible to make a powerful and fast web search engine using these technologies? What will it need to accomplish this, and what do I start with?

Answer 1: Erlang can make the most powerful web crawler today. Let me take you through my simple crawler.

Step 1. I create a simple parallelism module, which I call mapreduce:

-module(mapreduce).
-export([compute/2]).
%%=================================================
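The answer's module is cut off at the comment banner. A common shape for a minimal parallel map of this kind, spawning one worker per input and collecting results in order, is sketched below; this `compute/2` is modeled on the exported name in the excerpt and is an assumption, not the author's actual code:

```erlang
%% Sketch: spawn one process per input, tag each reply with a unique
%% reference, and gather replies in the original input order.
-module(mapreduce).
-export([compute/2]).

compute(F, Inputs) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(I)} end),
                Ref
            end || I <- Inputs],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```

For a crawler, F would be a fetch-and-parse function, so a batch of URLs is downloaded concurrently while the caller still gets results back in a predictable order.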

Online mnesia recovery from network partition [closed]

Submitted by 邮差的信 on 2019-12-03 05:46:49
Question: Is it possible to recover from a network partition in an mnesia cluster without restarting any of the nodes involved? If so, how does one go about it? I'm interested specifically in knowing: how this can be done with the standard OTP mnesia (v4.4.7), and what custom code, if any, one needs to write to make this happen
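For context, a commonly cited recovery path (a hedged sketch, not an answer from this thread) restarts only the mnesia application, not the Erlang nodes: after mnesia reports {inconsistent_database, running_partitioned_network, Node}, pick a winning side, declare it authoritative on each losing node, and bounce mnesia there:

```erlang
%% Sketch, to be run on each node of the losing partition.
%% WinnerNode and the table name are placeholders; set_master_nodes/2
%% records which replica wins at the next mnesia startup, losing any
%% updates the partitioned side made to that table.
mnesia:set_master_nodes(my_table, [WinnerNode]),
mnesia:stop(),
mnesia:start().
```

The trade-off is that the losing side's writes during the partition are discarded; merging both sides' updates requires application-level reconciliation code.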

what is the proper way to backup/restore a mnesia database?

Submitted by 喜你入骨 on 2019-12-03 02:09:06
WARNING: the background info is pretty long. Skip to the bottom if you think you need the question before the background info. I appreciate the time this is going to take! I've been all over the web (read: Google) and I have not found a good answer. Yes, there are plenty of links and references to the Mnesia documentation on the erlang.org site, but even those links suffer from version-itis. So in the simplest case, where the node() you are currently connected to is the same as the owner of the table set, the backup/restore is going to work. For example:

$ erl -sname mydatabase
> mnesia:start().
>
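In that simplest single-node case, a round trip with the documented API looks roughly like the sketch below (the file name and table `t` are placeholders): mnesia:backup/1 snapshots the tables via a checkpoint, and mnesia:restore/2 reads the snapshot back, here recreating any table found in the backup:

```erlang
%% Sketch: single-node backup and restore round trip.
ok = mnesia:start(),
{atomic, ok} = mnesia:create_table(t, [{attributes, [k, v]}]),
ok = mnesia:dirty_write({t, a, 1}),
ok = mnesia:backup("/tmp/mydb.bak"),
ok = mnesia:dirty_delete({t, a}),
{atomic, [t]} = mnesia:restore("/tmp/mydb.bak",
                               [{default_op, recreate_tables}]),
[{t, a, 1}] = mnesia:dirty_read({t, a}).
```

The complications the question goes on to describe arise when the restoring node is not the original owner of the tables; restore options such as skip_tables, clear_tables, and keep_tables control how conflicts with existing tables are handled.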

how do I remove an extra node

Submitted by 社会主义新天地 on 2019-12-03 01:08:45
I have a group of Erlang nodes that are replicating their data through Mnesia's "extra_db_nodes"... I need to upgrade hardware and software, so I have to detach some nodes as I make my way from node to node. How does one remove a node and still preserve the data that was inserted? [update] Removing nodes is as important as adding them. Over time, as your cluster grows, it must also contract. If not, then Mnesia is going to be busy trying to send data to nonexistent nodes, filling up queues and keeping the network busy. [final update] After poring through the erlang/mnesia source code I was able
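The usual detach recipe (a hedged sketch, not the poster's final solution; `LeavingNode` is a placeholder) drops the leaving node's replicas table by table, stops mnesia there, and then deletes its schema copy so the cluster forgets it entirely:

```erlang
%% Sketch, run from a surviving node. del_table_copy/2 only succeeds
%% for tables the leaving node actually replicates, hence the catch.
[catch mnesia:del_table_copy(Tab, LeavingNode)
    || Tab <- mnesia:system_info(tables), Tab =/= schema],

%% Stop mnesia on the leaving node (it must be down before its schema
%% copy can be removed):
rpc:call(LeavingNode, mnesia, stop, []),

%% Finally remove the leaving node's schema copy, detaching it from
%% the cluster while the surviving replicas keep all the data:
{atomic, ok} = mnesia:del_table_copy(schema, LeavingNode).
```

Data is preserved as long as every table still has at least one replica on a surviving node before its copy on the leaving node is dropped.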