large-data-volumes

JavaScript to find memory available

Submitted on 2019-12-18 04:52:41
Question: Let's make it immediately clear: this is not a question about a memory leak! I have a page which allows the user to enter some data and a JavaScript routine that handles this data and produces a result. The JavaScript produces incremental output in a DIV, something like this:

    (function() {
        var newdiv = document.createElement("div");
        newdiv.innerHTML = produceAnswer();
        result.appendChild(newdiv);
        if (done) {
            return;
        } else {
            setTimeout(arguments.callee, 0);
        }
    })();

Under certain circumstances the

How do I efficiently search a potentially large database?

Submitted on 2019-12-13 01:37:14
Question: This is more of a discussion. We have a multi-tenant system that will have tables with millions of rows. Our UI allows users to perform searches against these tables with many different search criteria, in any combination. It is not practical to index every one of these search columns in the database, or to load the full tables into memory and then filter. Can anybody point me in the right direction for patterns/designs that tackle this issue? Answer 1:
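
One widely used approach to "any combination of criteria" searches is to expose only the predicates the user actually supplied to the optimizer, so it can use whichever selective index applies to that particular search. A minimal sketch of the pattern, assuming SQL Server syntax and hypothetical table/column names:

    -- Sketch of the optional-predicate pattern (table, columns and parameters
    -- are hypothetical). OPTION (RECOMPILE) makes the optimizer plan for the
    -- parameters that are non-NULL on this execution instead of one generic plan.
    SELECT o.OrderID, o.CustomerName, o.CreatedAt
    FROM Orders AS o
    WHERE o.TenantID = @TenantID
      AND (@CustomerName IS NULL OR o.CustomerName = @CustomerName)
      AND (@Status IS NULL OR o.Status = @Status)
      AND (@CreatedFrom IS NULL OR o.CreatedAt >= @CreatedFrom)
    OPTION (RECOMPILE);

For truly free-form text criteria, a dedicated full-text or external search index is the usual complement to this pattern.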

Update column from another table in large mysql db (7 million rows)

Submitted on 2019-12-12 10:09:18
Question: Description: I have 2 tables with the following structure (irrelevant columns removed):

    mysql> explain parts;
    +-------+--------------+------+-----+---------+-------+
    | Field | Type         | Null | Key | Default | Extra |
    +-------+--------------+------+-----+---------+-------+
    | code  | varchar(32)  | NO   | PRI | NULL    |       |
    | slug  | varchar(255) | YES  |     | NULL    |       |
    | title | varchar(64)  | YES  |     | NULL    |       |
    +-------+--------------+------+-----+---------+-------+
    4 rows in set (0.00 sec)

and
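
The excerpt cuts off before the second table is shown, but the usual shape of this kind of update in MySQL is a multi-table UPDATE joined on the shared key. A minimal sketch, with a hypothetical table part_slugs(code, slug) standing in for the one not shown:

    -- Sketch only: part_slugs is a hypothetical stand-in for the second table.
    UPDATE parts p
    JOIN part_slugs s ON s.code = p.code
    SET p.slug = s.slug;

    -- On ~7 million rows it is common to run this in key ranges
    -- (e.g. adding AND p.code BETWEEN ? AND ?) so each transaction stays small.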

Serialize list of huge composite graphs using protobuf-net causing out-of-memory exception

Submitted on 2019-12-11 02:22:28
Question: I am trying to serialize an object containing a list of very large composite object graphs (~200,000 nodes or more) using protobuf-net. Basically, what I want to achieve is to save the complete object into a single file as fast and as compactly as possible. My problem is that I get an out-of-memory exception while trying to serialize the object. On my machine the exception is thrown when the file size is around 1.5 GB. I am running a 64-bit process and using a StreamWriter as input to protobuf-net

How do I design a table which will store very large data?

Submitted on 2019-12-10 10:43:52
Question: I need to design a table in Oracle which will store 2-5 TB of data a day. It can grow to 200 TB, and records will be purged when it crosses 200 TB. Is it feasible to keep it in OLTP, or do I need to shift it to a data warehouse DB? Please advise on considerations I should keep in mind when designing the schema of this table, or the database. Also, please advise whether SQL Server would be a better fit, as I can use either database. Answer 1: That size puts you in VLDB territory (very large databases). Things
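
One point worth anchoring for the purge requirement: at this scale old data is normally removed by dropping partitions, not by deleting rows. A minimal sketch, assuming Oracle 11g or later interval partitioning and hypothetical table/column names:

    -- Sketch: one partition per day; names and storage options are hypothetical.
    CREATE TABLE readings (
        reading_ts  TIMESTAMP      NOT NULL,
        source_id   NUMBER         NOT NULL,
        payload     VARCHAR2(4000)
    )
    PARTITION BY RANGE (reading_ts)
    INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
    ( PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2020-01-01 00:00:00') );

    -- Purging is then a metadata operation rather than a multi-terabyte DELETE:
    -- ALTER TABLE readings DROP PARTITION <oldest_partition> UPDATE INDEXES;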

Alternatives to huge drop-down lists (24,000+ items)

Submitted on 2019-12-10 04:19:00
Question: In my admin section, when I edit items, I have to attach each item to a parent item. I have a list of over 24,000 parent items, which are listed alphabetically in a drop-down list (a list of music artists). The edit page that lists all these items in a drop-down menu is 2 MB, and it lags badly for people with old machines, especially in Internet Explorer. What's a good alternative that replicates the same function, where I would need to select 1 of these 24,000 artists, without actually

Parallel.ForEach throws exception when processing extremely large sets of data

Submitted on 2019-12-08 14:00:33
My question centers on some Parallel.ForEach code that used to work without fail, and now that our database has grown to 5 times its previous size, it breaks almost regularly.

    Parallel.ForEach<Stock_ListAllResult>(
        lbStockList.SelectedItems.Cast<Stock_ListAllResult>(),
        SelectedStock =>
        {
            ComputeTipDown( SelectedStock.Symbol );
        } );

The ComputeTipDown() method gets all daily stock tick data for the symbol and iterates through each day: it gets the previous day's data, does a few calculations, and then inserts the results into the database for each day. We use this rarely, to recalculate static data values when a

Select Count(*) over a large amount of data

Submitted on 2019-12-08 05:20:29
I want to do this for a report, but I have 20,000,000 records in my table and it causes a timeout in my application.

    SELECT T.transactionStatusID, TS.shortName AS TransactionStatusDefShortName, count(*) AS qtyTransactions
    FROM Transactions T
    INNER JOIN TransactionTypesCurrencies TTC
        ON T.id_Ent = TTC.id_Ent AND T.trnTypeCurrencyID = TTC.trnTypeCurrencyID
    INNER JOIN TransactionStatusDef TS
        ON T.id_Ent = TS.ent_Ent AND T.transactionStatusID = TS.ID
    WHERE T.id_Ent = @id_Ent
    GROUP BY T.transactionStatusID, TS.shortName

As far as I know, COUNT(*) causes a full table scan and it makes my query to
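
One common mitigation, sketched on the assumption that this is SQL Server (suggested by the @id_Ent parameter syntax) and that the column names above match the real schema, is a covering index so the grouped count is answered from the tenant's index rows rather than a full table scan:

    -- Sketch: covering nonclustered index for the per-tenant grouped COUNT(*).
    -- Adjust the key order to put the most selective columns first.
    CREATE NONCLUSTERED INDEX IX_Transactions_Ent_Status
        ON Transactions (id_Ent, transactionStatusID, trnTypeCurrencyID);

If the report can tolerate slightly stale figures, a pre-aggregated summary table refreshed on a schedule is another common route.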

Can I break down a large-scale correlation matrix?

Submitted on 2019-12-07 21:50:53
Question: The correlation matrix is so large (50,000 by 50,000) that it is not efficient for calculating what I want. What I want to do is break it down into groups and treat each as a separate correlation matrix. However, how do I deal with the dependence between those smaller correlation matrices? I have been researching online all day but nothing comes up. There should be some algorithm out there related to the approximation of large correlation matrices like this, right? Answer 1: Even a 4 x 4
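
For context on what "break it down into groups" implies: if the variables are ordered so each group is contiguous, the full matrix has a block form, and analysing the groups separately amounts to assuming the between-group blocks are approximately zero. A sketch of that assumption in LaTeX notation (the blocks R_ij are hypothetical group labels):

    % Within-group blocks on the diagonal, between-group blocks off the diagonal.
    R =
    \begin{pmatrix}
      R_{11} & R_{12} & \cdots & R_{1k} \\
      R_{21} & R_{22} & \cdots & R_{2k} \\
      \vdots & \vdots & \ddots & \vdots \\
      R_{k1} & R_{k2} & \cdots & R_{kk}
    \end{pmatrix}
    \approx
    \begin{pmatrix}
      R_{11} & 0      & \cdots & 0      \\
      0      & R_{22} & \cdots & 0      \\
      \vdots & \vdots & \ddots & \vdots \\
      0      & 0      & \cdots & R_{kk}
    \end{pmatrix}

Whether that approximation is acceptable is exactly the dependence question the poster raises; the off-diagonal blocks quantify what is being discarded.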

How do I design a table which will store very large data?

Submitted on 2019-12-06 11:01:01
I need to design a table in Oracle which will store 2-5 TB of data a day. It can grow to 200 TB, and records will be purged when it crosses 200 TB. Is it feasible to keep it in OLTP, or do I need to shift it to a data warehouse DB? Please advise on considerations I should keep in mind when designing the schema of this table, or the database. Also, please advise whether SQL Server would be a better fit, as I can use either database.

That size puts you in VLDB territory (very large databases). Things are fundamentally different at that altitude. Your question cannot be answered without the full