entropy

entropy estimation using histogram of normal data vs direct formula (matlab)

こ雲淡風輕ζ submitted on 2019-12-22 09:09:53

Question: Let's assume we have drawn n = 10000 samples from the standard normal distribution. Now I want to calculate its entropy, using histograms to estimate the probabilities.

1) Calculate the probabilities (for example, using MATLAB):

    [p,x] = hist(samples, binnumbers);
    area = (x(2)-x(1)) * sum(p);
    p = p/area;

(binnumbers is determined according to some rule.)

2) Estimate the entropy:

    H = -sum(p.*log2(p))

which gives 58.6488. Now when I use the direct formula to calculate the entropy of normal data, H = 0.5*log2(2*pi*exp(1))
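The question is cut off, but the usual pitfall is already visible: for a continuous distribution the histogram estimates a *density*, so the sum -Σ p·log2(p) must be multiplied by the bin width to approximate the differential entropy; otherwise the result grows with the number of bins (hence 58.6488). A minimal NumPy sketch of the corrected estimate (the choice of 50 bins is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(10000)

# counts / (n * width) approximates the density p(x); the discrete
# sum must then be weighted by the bin width to estimate the
# *differential* entropy, otherwise the answer depends on the bins.
counts, edges = np.histogram(samples, bins=50)
width = edges[1] - edges[0]
density = counts / (counts.sum() * width)
nonzero = density > 0
H_hist = -np.sum(density[nonzero] * np.log2(density[nonzero])) * width

H_exact = 0.5 * np.log2(2 * np.pi * np.e)   # differential entropy of N(0,1)
print(H_hist, H_exact)
```

With the bin-width factor included, the histogram estimate lands close to the closed-form value of about 2.047 bits.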

How is the gradient and hessian of logarithmic loss computed in the custom objective function example script in xgboost's github repository?

[亡魂溺海] submitted on 2019-12-22 05:48:18

Question: I would like to understand how the gradient and Hessian of the log-loss function are computed in an xgboost sample script. I've simplified the function to take NumPy arrays, and generated y_hat and y_true, which are a sample of the values used in the script. Here is a simplified example:

    import numpy as np

    def loglikelihoodloss(y_hat, y_true):
        prob = 1.0 / (1.0 + np.exp(-y_hat))
        grad = prob - y_true
        hess = prob * (1.0 - prob)
        return grad, hess

    y_hat = np.array([1.80087972, -1.82414818, -1
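The derivatives follow from the per-sample loss L = -[y·log(p) + (1-y)·log(1-p)] with p = σ(ŷ): differentiating with respect to the raw score gives ∂L/∂ŷ = p - y and ∂²L/∂ŷ² = p·(1-p). A quick NumPy check of both against finite differences (the sample values 0.7 and 1.0 are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logloss(y_hat, y_true):
    # L = -[y*log(p) + (1-y)*log(1-p)] with p = sigmoid(y_hat)
    p = sigmoid(y_hat)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_hat, y_true = 0.7, 1.0
eps = 1e-4

# Analytic derivatives, as used in xgboost's custom-objective example:
p = sigmoid(y_hat)
grad = p - y_true          # dL/dy_hat
hess = p * (1 - p)         # d^2L/dy_hat^2

# Central finite differences for comparison.
num_grad = (logloss(y_hat + eps, y_true) - logloss(y_hat - eps, y_true)) / (2 * eps)
num_hess = (logloss(y_hat + eps, y_true) - 2 * logloss(y_hat, y_true)
            + logloss(y_hat - eps, y_true)) / eps**2
print(grad, num_grad, hess, num_hess)
```

The analytic and numerical values agree, which is why the script never needs to form the loss explicitly: the gradient and Hessian alone are enough for xgboost's second-order updates.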

CFHTTP: first request fast, following slow

妖精的绣舞 submitted on 2019-12-22 04:56:07

Question: I'm having a lot of trouble with CF10's CFHTTP at the moment. First, my test script:

    <CFSET results = arraynew(1) />
    <CFLOOP from="1" to="10" index="idx">
        <CFSET timer_start = getTickCount() />
        <CFHTTP url="https://www.google.de" method="get" result="test" />
        <CFSET arrayappend(results, (getTickCount()-timer_start)/1000 & " s") />
    </CFLOOP>
    <CFDUMP var="#results#" />

10 CFHTTP calls in a row, and the time each one takes is pushed to an array; that's all. Results of our CF9 server: Results of our

Weird output while finding entropy of frames of a video in opencv

夙愿已清 submitted on 2019-12-21 15:40:22

Question:

    #include <cv.h>
    #include <highgui.h>
    #include <iostream>
    #include <cmath>
    #include <cstdlib>
    #include <fstream>

    using namespace std;

    typedef struct histBundle {
        double rCh[256];
        double gCh[256];
        double bCh[256];
    } bundleForHist;

    bundleForHist getHistFromImage (IplImage* img, int numBins) {
        float range[] = { 0, numBins };
        float *ranges[] = { range };
        bundleForHist bfh;
        CvHistogram *hist = cvCreateHist (1, &numBins, CV_HIST_ARRAY, ranges, 1);
        cvClearHist (hist);
        IplImage* imgRed = cvCreateImage
</gr-replace>
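The body above is truncated, but "weird" entropy values from image histograms usually come from one of two bugs: not normalizing the histogram to probabilities, or taking the log of empty bins (giving NaN or -inf). A NumPy sketch of per-channel histogram entropy for an 8-bit frame, with both pitfalls handled (the synthetic frame stands in for a decoded video frame):

```python
import numpy as np

def channel_entropy(channel):
    """Shannon entropy (bits) of one 8-bit image channel."""
    counts = np.bincount(channel.ravel(), minlength=256)
    p = counts / counts.sum()        # normalize counts to probabilities
    p = p[p > 0]                     # drop empty bins: log2(0) is undefined
    return -np.sum(p * np.log2(p))

# Synthetic "frame": uniform noise should come out close to 8 bits/pixel,
# while a constant channel has exactly 0 bits.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
for name, ch in zip("BGR", np.moveaxis(frame, 2, 0)):
    print(name, channel_entropy(ch))
```

A constant region giving 0 and uniform noise giving nearly 8 bits are handy sanity checks before pointing the function at real frames.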

Is there an algorithm for “perfect” compression?

佐手、 submitted on 2019-12-20 23:32:14

Question: Let me clarify: I'm not talking about perfect compression in the sense of an algorithm that is able to compress any given source material; I realize that is impossible. What I'm trying to get at is an algorithm that is able to encode any source string of bits to its absolute maximum compressed state, as determined by its Shannon entropy. I believe I have heard some things about Huffman coding being in some sense optimal, so I believe that this encoding scheme might be based on that, but
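Huffman coding is optimal only among codes that assign a whole number of bits per symbol, so its average length can exceed the zero-order Shannon entropy by up to one bit per symbol (arithmetic coding closes most of that gap). A sketch comparing the entropy floor with the length a Huffman code actually achieves, using a minimal heap-based construction written for this illustration:

```python
import heapq
from collections import Counter
from math import log2

def shannon_bound(data):
    """Zero-order Shannon entropy in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())

def huffman_lengths(data):
    """Code length per symbol from a standard Huffman merge."""
    counts = Counter(data)
    if len(counts) == 1:
        return {next(iter(counts)): 1}
    # Heap entries: (weight, tiebreak, {symbol: depth-so-far}).
    heap = [(c, i, {s: 0}) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)
        c2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return heap[0][2]

text = "abracadabra"
counts = Counter(text)
H = shannon_bound(text)
lengths = huffman_lengths(text)
avg = sum(counts[s] * L for s, L in lengths.items()) / len(text)
print(H, avg)
```

For "abracadabra" the entropy floor is about 2.04 bits/symbol and the Huffman code averages 23/11 ≈ 2.09, illustrating the H ≤ avg < H + 1 guarantee; note this bound is only for a memoryless (zero-order) model, not the true Kolmogorov-style limit the question gestures at.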

Calculating entropy from GLCM of an image

和自甴很熟 submitted on 2019-12-20 10:39:39

Question: I am using the skimage library for most of my image analysis work. I have an RGB image and I intend to extract texture features like entropy, energy, homogeneity and contrast from the image. Below are the steps that I am performing:

    from skimage import io, color, feature
    from skimage.filters import rank

    rgbImg = io.imread(imgFlNm)
    grayImg = color.rgb2gray(rgbImg)
    print(grayImg.shape)  # (667, 1000), a 2-dimensional grayscale image

    glcm = feature.greycomatrix(grayImg, [1], [0, np.pi/4, np.pi/2, 3*np
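Two things commonly go wrong with this pipeline: greycomatrix expects an integer image, while rgb2gray returns floats in [0, 1] (so the image must be quantized, e.g. with img_as_ubyte), and greycoprops does not expose an entropy property, so entropy has to be computed from the normalized GLCM as -Σ P·log2(P). A NumPy-only sketch of that computation for a single horizontal offset, written as an illustration rather than the skimage internals:

```python
import numpy as np

def glcm_entropy(gray_uint8, levels=256):
    """Entropy of the (dx=1, dy=0) gray-level co-occurrence matrix."""
    a = gray_uint8[:, :-1].ravel()          # left pixel of each pair
    b = gray_uint8[:, 1:].ravel()           # its right neighbour
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)              # count co-occurrences
    p = glcm / glcm.sum()                   # normalize to probabilities
    p = p[p > 0]                            # skip empty cells before log
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
flat = np.full((64, 64), 128, dtype=np.uint8)
print(glcm_entropy(noisy), glcm_entropy(flat))  # high vs. zero
```

A flat image has a single occupied GLCM cell and therefore zero entropy, while noise spreads mass across many cells; the same -Σ P·log2(P) can be applied to the normed output of feature.greycomatrix.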

Shuffling a poker deck in JavaScript with window.crypto.getRandomValues

☆樱花仙子☆ submitted on 2019-12-20 10:25:48

Question: A poker deck has 52 cards and thus 52!, or roughly 2^226, possible permutations. Now I want to shuffle such a deck of cards perfectly, with truly random results and a uniform distribution, so that you can reach every single one of those possible permutations and each is equally likely to appear. Why is this actually necessary? For games, perhaps, you don't really need perfect randomness, unless there's money to be won. Apart from that, humans probably won't even perceive the "differences" in
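A uniform shuffle needs two ingredients: a CSPRNG with at least log2(52!) ≈ 226 bits of internal entropy, and an index-drawing step free of modulo bias. The standard construction is Fisher-Yates with rejection sampling; a Python sketch using the secrets module as a stand-in for window.crypto.getRandomValues:

```python
import secrets

def unbiased_index(n):
    """Uniform integer in [0, n) with no modulo bias (rejection sampling)."""
    bits = n.bit_length()
    while True:
        r = secrets.randbits(bits)   # raw CSPRNG bits, like getRandomValues
        if r < n:                    # reject out-of-range draws instead of
            return r                 # folding them back with %, which biases

def shuffle(deck):
    # Fisher-Yates: swap each position with a uniform choice from the
    # remaining prefix, so every permutation is equally likely.
    for i in range(len(deck) - 1, 0, -1):
        j = unbiased_index(i + 1)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = shuffle(list(range(52)))
print(deck)
```

The same structure translates directly to JavaScript: fill a typed array with crypto.getRandomValues and apply the same rejection test, since `value % n` alone is only unbiased when n divides the generator's range.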

How good is SecRandomCopyBytes?

淺唱寂寞╮ submitted on 2019-12-20 10:24:05

Question: I'm principally interested in the implementation of SecRandomCopyBytes on iOS, if it differs from the OS X implementation. (I would presume that it does, since a mobile device has more, and more readily available, sources of entropy than a desktop computer.) Does anyone have information on:

- Where does SecRandomCopyBytes get its entropy from?
- At what rate can it generate good random numbers?
- Will it block, or fail immediately, if not enough entropy is available?
- Is it FIPS 140-2 compliant, or has it been

Quality of PostgreSQL's random() function?

霸气de小男生 submitted on 2019-12-19 19:57:27

Question: Let's say I'm creating a table foo with a column bar that should be a very large random integer.

    CREATE TABLE foo (
        bar bigint DEFAULT round(((9223372036854775807::bigint)::double precision * random())) NOT NULL,
        baz text
    );

Is this the best way to do this? Can anyone speak to the quality of PostgreSQL's random() function? Is the multiplication here masking the entropy? Note that I do have good hardware entropy feeding into /dev/random.

Answer 1: PostgreSQL's random is based on their own portable
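Independent of random()'s internal quality, the route through double precision caps the result at 53 significant bits: a 63-bit bigint built this way can only take a sparse subset of values, with neighbouring possible outputs roughly 2^10 apart. A Python sketch of the gap size, plus the full-entropy alternative of drawing 63 bits directly from an OS CSPRNG (illustrative only, not how Postgres implements anything):

```python
import math
import secrets

MAX_BIGINT = 9223372036854775807          # 2**63 - 1

# A double in [0.5, 1) has an ulp of 2**-53, so the smallest possible
# change in (x * MAX_BIGINT) near the top of the range is about
# 2**-53 * 2**63 = 1024: most bigint values are simply unreachable.
gap = math.ulp(0.5) * MAX_BIGINT
print(gap)                                # ~1024

# Full-width alternative: 63 independent random bits.
full = secrets.randbits(63)
print(full)
```

In SQL the equivalent fix is to assemble the value from independent 32-bit draws (or use pgcrypto's gen_random_bytes) rather than scaling a single double.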

How to calculate clustering entropy? A working example or software code [closed]

六眼飞鱼酱① submitted on 2019-12-18 12:02:41

Question (closed as off-topic 3 years ago): I would like to calculate the entropy of this example scheme: http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html Can anybody please explain step by step with real values? I know there are an unlimited number of formulas, but I am really bad at understanding formulas :) For example in the
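The entropy of a clustering is the size-weighted average of each cluster's class entropy: H = Σ_k (n_k/n) · H(cluster k), with H(cluster) = -Σ_c p(c)·log2 p(c) over the class proportions inside that cluster. A worked sketch, assuming the class counts from the example figure in the linked IR-book chapter (cluster 1: 5 x, 1 o; cluster 2: 1 x, 4 o, 1 diamond; cluster 3: 2 x, 3 diamonds; check the counts against the figure before relying on them):

```python
from math import log2

# Rows = clusters, columns = class counts (x, o, diamond),
# taken from the linked chapter's example figure.
clusters = [
    [5, 1, 0],   # cluster 1: mostly x
    [1, 4, 1],   # cluster 2: mostly o
    [2, 0, 3],   # cluster 3: mostly diamond
]

def cluster_entropy(counts):
    """H = -sum_c p(c) log2 p(c) over the classes in one cluster."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

n_total = sum(sum(c) for c in clusters)
# Total entropy: average of per-cluster entropies, weighted by size.
H = sum(sum(c) / n_total * cluster_entropy(c) for c in clusters)
for i, c in enumerate(clusters, 1):
    print(f"cluster {i}: H = {cluster_entropy(c):.3f} bits")
print(f"weighted total: H = {H:.3f} bits")
```

A perfectly pure clustering would score 0; the mixed clusters here land near 1 bit each, so the weighted total of roughly 0.96 bits quantifies how impure the clustering is.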