purge

Purge Kafka Topic

て烟熏妆下的殇ゞ submitted on 2019-12-17 05:16:51
Question: I pushed a message that was too big into a Kafka topic on my local machine, and now I'm getting an error: kafka.common.InvalidMessageSizeException: invalid message size. Increasing fetch.size is not ideal here, because I don't actually want to accept messages that big. Is there a way to purge the topic in Kafka? Answer 1: Temporarily update the retention time on the topic to one second: kafka-topics.sh --zookeeper <zkhost>:2181 --alter --topic <topic name> --config retention.ms=1000 And in
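The excerpt cuts off mid-answer; the usual complete workaround is to shrink retention, let the broker's cleaner delete the segments, then restore the original retention. A sketch assuming `kafka-topics.sh` is on PATH, ZooKeeper is at localhost:2181, and the original retention was the 7-day default (topic name is a placeholder):

```shell
TOPIC=my-topic
ZK=localhost:2181

# 1. Shrink retention to 1 second so all existing segments become
#    eligible for deletion.
kafka-topics.sh --zookeeper "$ZK" --alter --topic "$TOPIC" \
  --config retention.ms=1000

# 2. Wait for the broker's log cleaner to delete the old segments; it runs
#    every log.retention.check.interval.ms (5 minutes by default).

# 3. Restore the original retention (7 days here) so new messages are kept.
kafka-topics.sh --zookeeper "$ZK" --alter --topic "$TOPIC" \
  --config retention.ms=604800000
```

On newer Kafka versions, `kafka-configs.sh --bootstrap-server` is the preferred way to alter topic configs, since direct ZooKeeper access has been deprecated.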

GIT - Remove old reflog entries

本小妞迷上赌 submitted on 2019-12-14 01:26:38
Question: After a lot of rebasing to fit our latest needs, our repository's reflog is full of commits and orphan branches. We have reached the final state of our reorganization, but branches and commits holding a lot of binary data were left behind and the repository grew to several times its original size, so we decided to purge all the old reflog entries and data. I was digging through the manual but didn't get much smarter experimenting with git reflog expire. This is an example of the log (shortened): -> <sha1> [development] ..
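The usual combination for this is to expire every reflog entry immediately and then garbage-collect the objects nothing references any more. A self-contained demo in a scratch repo (run the two key commands on a clone of your real repository first; they are destructive):

```shell
# Demo in a scratch repo: create some history, rewrite it, then expire the
# reflog and garbage-collect so the old objects really go away.
repo=$(mktemp -d)
git init -q "$repo" && cd "$repo"
git config user.email you@example.com && git config user.name you

echo one > file && git add file && git commit -qm "first"
echo two > file && git commit -aqm "second"
git commit --amend -qm "second (rewritten)"   # old commit survives only in the reflog

# Expire every reflog entry right now...
git reflog expire --expire=now --expire-unreachable=now --all
# ...then prune the objects that are no longer reachable.
git gc -q --prune=now --aggressive

git reflog | wc -l    # prints 0: the reflog is empty
```

After this, `git count-objects -v` should show the binary blobs gone, provided no ref still points at them.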

why does CouchDBs _dbs.couch keep growing when purging/compacting DBs?

岁酱吖の submitted on 2019-12-13 07:37:43
Question: The setup: a CouchDB 2.0 instance running in Docker on a Raspberry Pi 3, and a Node application using PouchDB, also in Docker on the same Pi 3. The scenario: at any given moment CouchDB has at most 4 databases with a total of about 60 documents; the Node application purges (using PouchDB's destroy) and recreates these databases periodically (some of them every two seconds, others every 15 minutes); the databases are always recreated with the newest entries. The reason for purging the databases, instead

Purge with PowerShell a big log file by deleting line by line and stopping it when my date comparison is true

巧了我就是萌 submitted on 2019-12-11 15:22:46
Question: These days I'm working on different scripts to delete lines in various log files, but there is still one file I can't handle because its structure is a bit more complex than the other files I have purged. To give you an idea, here are some example lines from that log file: [ 30-10-2017 16:38:07.62 | INFO | Some text [ 30-10-2017 16:38:11.07 | INFO | Some text [1];Erreur XXXX non-gérée : Some text. Merci de communiquer some text : - Some text again - Identifiant : XXXXXXXX-1789
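Since the excerpt cuts off before any answer, here is the core idea sketched in portable shell/awk rather than PowerShell: parse the dd-MM-yyyy timestamp on bracketed lines, and let undated continuation lines inherit the keep/delete decision of the last dated line. The file name, cutoff date, and sample text are assumptions:

```shell
# Build a small sample log shaped like the one in the question.
cat > app.log <<'EOF'
[ 29-10-2017 16:38:07.62 | INFO | old entry to delete
continuation of the old entry
[ 30-10-2017 16:38:07.62 | INFO | entry to keep
[1];Erreur XXXX non-gérée : continuation of the kept entry
EOF

# Keep only entries dated on/after CUTOFF (yyyymmdd). Lines that do not
# start with "[ dd-MM-yyyy" follow the decision made for the previous
# dated line, which handles the multi-line records shown above.
CUTOFF=20171030
awk -v cutoff="$CUTOFF" '
  /^\[ [0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]/ {
    split($2, d, "-")                  # $2 is dd-MM-yyyy
    keep = (d[3] d[2] d[1] >= cutoff)  # compare as yyyymmdd strings
  }
  keep
' app.log > app.purged.log
```

Because the rewritten file only ever contains lines at or after the cutoff, this also "stops" naturally once the comparison becomes true, without scanning logic of its own.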

clearing cloudflare cache programmatically

邮差的信 submitted on 2019-12-11 06:17:20
Question: I am trying to clear the Cloudflare cache for single URLs programmatically after PUT requests to a Node.js API. I am using the https://github.com/cloudflare/node-cloudflare library, but I can't figure out how to log a callback from Cloudflare. According to the test file in the same repo, the syntax should be something like this: //client declaration: t.context.cf = new CF({ key: 'deadbeef', email: 'cloudflare@example.com', h2: false }); //invoke clearCache: t.context.cf.deleteCache('1', {
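Independent of the Node library, the underlying operation is a single HTTP request to Cloudflare's v4 API, so a curl sketch can help verify what the library should be sending. The zone ID, token, and URL below are placeholders; the `purge_cache` endpoint and `files` payload are from Cloudflare's public API:

```shell
# Purge one exact URL from the Cloudflare cache via the v4 REST API.
# ZONE_ID and API_TOKEN are placeholders you must supply.
ZONE_ID=your_zone_id
API_TOKEN=your_api_token

curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"files":["https://www.example.com/image/123/photo-100-150.jpg"]}'
```

The JSON response includes a `success` field, which is the equivalent of the callback result the question is trying to log.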

Node.js global variable property is purged

妖精的绣舞 submitted on 2019-12-11 03:29:51
Question: My problem is not about a "memory leak" but about a "memory purge" in my Node.js (Express) app. The app has to keep some objects in memory for fast lookups during service. For a day or two after starting the app everything seemed fine, until suddenly my web client failed to look up an object because it had been purged (undefined). I suspect the JavaScript GC (garbage collection). However, as you can see in the pseudo-code, I assigned the objects to the node.js

Backing up, Deleting, Restoring Elasticsearch Indexes By Index Folder

与世无争的帅哥 submitted on 2019-12-08 21:47:52
Question: Most of the Elasticsearch documentation discusses working with indexes through the REST API; is there any reason I can't simply move or delete index folders on disk? Answer 1: You can move data around on disk, up to a point. If Elasticsearch is running, it is never a good idea to move or delete the index folders, because Elasticsearch will not know what happened to the data; you will get all kinds of FileNotFoundExceptions in the logs, as well as indices that stay red until you manually
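The supported way to back up, delete, and restore whole indices is the snapshot API rather than copying folders. A minimal sketch, assuming Elasticsearch on localhost:9200, a directory /mnt/backups that is listed under `path.repo` in elasticsearch.yml, and an index called my-index:

```shell
# Register a filesystem snapshot repository (the location must be listed
# in path.repo on every node, or registration fails).
curl -X PUT "localhost:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/backups"}}'

# Take a snapshot of one index and block until it finishes.
curl -X PUT "localhost:9200/_snapshot/my_backup/snap_1?wait_for_completion=true" \
  -H 'Content-Type: application/json' \
  -d '{"indices": "my-index"}'

# Later: delete the live index, then restore it from the snapshot.
curl -X DELETE "localhost:9200/my-index"
curl -X POST "localhost:9200/_snapshot/my_backup/snap_1/_restore" \
  -H 'Content-Type: application/json' \
  -d '{"indices": "my-index"}'
```

Unlike moving folders, snapshots keep the cluster state consistent, so no red indices or FileNotFoundExceptions result.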

delete partitions folders in hdfs older than N days

隐身守侯 submitted on 2019-12-04 06:10:01
Question: I want to delete the partition folders which are older than N days. The command below gives only the folder from exactly 50 days ago; I want the list of all folders older than 50 days: hadoop fs -ls /data/publish/DMPD/VMCP/staging/tvmcpr_usr_prof/chgdt=`date --date '50 days ago' +\%Y-\%m-\%d` Answer 1: You can try the Solr hdfsfindtool: hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-job.jar org.apache.solr.hadoop.HdfsFindTool -find /data/publish/DMPD/VMCP/staging
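If hdfsfindtool is unavailable, the zero-padded chgdt=YYYY-MM-DD layout means plain string comparison against a cutoff date is equivalent to chronological comparison. A sketch using only `hadoop fs` and shell, with the path and 50-day window taken from the question:

```shell
# Delete partition folders whose chgdt=YYYY-MM-DD value is older than 50 days.
# Zero-padded ISO dates sort lexicographically in date order, so a string
# comparison against the cutoff is all we need.
BASE=/data/publish/DMPD/VMCP/staging/tvmcpr_usr_prof
CUTOFF=$(date --date='50 days ago' +%Y-%m-%d)

hadoop fs -ls "$BASE" 2>/dev/null | awk '{print $NF}' | grep 'chgdt=' |
while read -r dir; do
  part_date=${dir##*chgdt=}          # extract the YYYY-MM-DD suffix
  if [ "$part_date" \< "$CUTOFF" ]; then
    echo "deleting $dir"
    hadoop fs -rm -r -skipTrash "$dir"
  fi
done
```

Drop `-skipTrash` if you want the deleted partitions to land in HDFS trash first.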

Removing history from git - git command fails

和自甴很熟 submitted on 2019-12-04 05:09:01
I'm trying to purge a project's bin directory from Git history. I have already added 'bin' to .gitignore and successfully run git rm --cached -r bin. Now I have tried the command recommended in the GitHub help pages to purge the history: git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch bin' --prune-empty --tag-name-filter cat -- --all But this results in errors: Rewrite <hash> (1/164) fatal: not removing 'bin' recursively without -r index filter failed: git rm --cached --ignore-unmatch bin rm: cannot remove 'c:/pathToMyLocalRepo/.git-rewrite/revs':
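The `fatal: not removing 'bin' recursively without -r` line names the fix: the inner `git rm` needs `-r` because bin is a directory. A self-contained demo in a scratch repo (on a real repository, run this on a fresh clone, since filter-branch rewrites history):

```shell
# Demo in a scratch repo: commit a bin/ directory, then strip it from history.
repo=$(mktemp -d)
git init -q "$repo" && cd "$repo"
git config user.email you@example.com && git config user.name you
mkdir bin && echo build-output > bin/app.exe
echo 'int main(void){return 0;}' > main.c
git add . && git commit -qm "initial"

# The inner 'git rm' gets -r because bin is a directory; that is exactly
# what the "not removing 'bin' recursively without -r" error complains about.
export FILTER_BRANCH_SQUELCH_WARNING=1   # newer git warns before filter-branch
git filter-branch --force --index-filter \
  'git rm -r --cached --ignore-unmatch bin' \
  --prune-empty --tag-name-filter cat -- --all

git ls-tree -r HEAD --name-only   # bin/ is gone; only main.c remains
```

Current GitHub documentation recommends `git filter-repo` (a separate tool) over filter-branch for this job; it is faster and avoids the .git-rewrite quirks seen in the error above.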

Varnish purge using HTTP and REGEX

淺唱寂寞╮ submitted on 2019-12-03 07:13:51
Question: I want to purge elements of my Varnish cache over HTTP. The HTTP call is triggered from a backend server behind Varnish itself, so that backend server has no access other than HTTP. I have implemented the following purging rules with the corresponding ACL, which work fine for curl -X PURGE http://www.example.com/image/123/photo-100-150.jpg but I want to be able to purge a URL via HTTP using a regex: curl -X PURGE http://www.example.com/image/123/*.jpg That way I want to clear all scaled versions of
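A plain PURGE invalidates only one exact URL; pattern-based invalidation in Varnish is normally done with bans. A hedged VCL 4.0 sketch (the ACL name, network range, and BAN method name are assumptions, not from the question) that turns the request URL, sent as a regex, into a ban expression:

```vcl
acl purge_acl {
    "localhost";
    "10.0.0.0"/8;   # assumption: the backend server's network
}

sub vcl_recv {
    if (req.method == "BAN") {
        if (!client.ip ~ purge_acl) {
            return (synth(405, "Not allowed"));
        }
        # Treat the request URL as a regex and ban every cached object
        # whose URL matches it, e.g. /image/123/.*\.jpg
        ban("req.url ~ " + req.url);
        return (synth(200, "Banned"));
    }
}
```

The backend can then invalidate all scaled versions with a proper regex instead of a shell glob: curl -X BAN 'http://www.example.com/image/123/.*\.jpg'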