Quantization

Why we need a coarse quantizer?

大憨熊 submitted on 2019-12-07 16:13:27
In Product Quantization for Nearest Neighbor Search, when it comes to section IV.A, it says they will use a coarse quantizer too (which, the way I understand it, is just a much smaller quantizer, smaller w.r.t. k, the number of centroids). I don't really get why this helps the search procedure; the cause might be that I don't get the way they use it. Any ideas, please? As mentioned in the NON EXHAUSTIVE SEARCH section, approximate nearest neighbor search with product quantizers is fast and significantly reduces the memory requirements for storing the descriptors.
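To make the coarse quantizer's role concrete, here is a toy sketch (mine, with made-up centroids and codebooks, not the paper's code): the coarse quantizer assigns a vector to one of a few cells, so search only scans that cell's inverted list, and the product quantizer then encodes only the residual (vector minus coarse centroid) rather than the raw vector.

```python
# Toy IVFADC-style sketch (hypothetical data and codebooks): a coarse
# quantizer narrows the search to one cell, and a product quantizer
# encodes the residual (vector minus coarse centroid) inside that cell.

def nearest(point, centroids):
    """Index of the centroid closest to `point` (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(point, centroids[i])))

# Coarse quantizer: a few centroids over the full 4-D space.
coarse = [[0.0, 0.0, 0.0, 0.0], [10.0, 10.0, 10.0, 10.0]]

# Product quantizer: one tiny codebook per 2-D subvector of the residual.
sub_codebooks = [
    [[-1.0, -1.0], [1.0, 1.0]],   # codebook for dims 0-1
    [[-1.0, -1.0], [1.0, 1.0]],   # codebook for dims 2-3
]

def encode(vec):
    c = nearest(vec, coarse)
    residual = [v - cv for v, cv in zip(vec, coarse[c])]
    codes = [nearest(residual[2 * m:2 * m + 2], sub_codebooks[m]) for m in range(2)]
    return c, codes  # (coarse cell id, PQ codes for the residual)

print(encode([9.2, 9.1, 10.8, 11.0]))
```

Because residuals are concentrated around the origin no matter which cell a vector falls in, the same small PQ codebooks work for every cell; that is the win over product-quantizing raw vectors, and scanning only one (or a few) inverted lists is what makes the search non-exhaustive.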

Graph transform gives error in Tensorflow

匆匆过客 submitted on 2019-12-06 15:23:13
I am using TensorFlow version 1.1. I want to quantize the inception_resnet_v2 model. The quantization method

bazel build tensorflow/tools/quantization/tools:quantize_graph
bazel-bin/tensorflow/tools/quantization/tools/quantize_graph \
  --input=/tmp/classify_image_graph_def.pb \
  --output_node_names="softmax" \
  --output=/tmp/quantized_graph.pb \
  --mode=eightbit

doesn't give accurate results. For inception_v3 the results are okay, but for inception_resnet_v2 it doesn't work (0% accuracy for the predicted class labels). I learned that I can instead use graph_transform in my case to quantize.
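The graph_transforms route the question ends on looks roughly like this; the paths and input/output node names are placeholders carried over from the question, and the transform list is a common choice rather than a verified one, so check the Graph Transform Tool's README for your model:

```shell
# Hypothetical invocation of TensorFlow's Graph Transform Tool
# (paths, node names, and transforms are placeholders, not verified output):
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=/tmp/classify_image_graph_def.pb \
  --out_graph=/tmp/quantized_graph.pb \
  --inputs='input' \
  --outputs='softmax' \
  --transforms='quantize_weights quantize_nodes'
```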

How to quantize surface normals

∥☆過路亽.° submitted on 2019-12-06 14:54:55
I am trying to quantize surface normals into, let's say, 8 bins. For example, when computing features like HOG, to quantize 2D gradients [x,y] into 8 bins we just take the angle in the x-y plane, i.e. arctan(y/x), which gives an angle between 0 and 360 degrees. My question is: given a 3D direction [x,y,z], a surface normal in this case, how can we histogram it in a similar way? Do we just project onto one plane and use that angle, i.e. the dot product of [x,y,z] and [0,1,0], for example? Thanks. EDIT: I also read a paper recently where they quantized surface normals by measuring angles between the normal and…
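One common scheme (my sketch; the question does not settle on one) is to bin the normal by its two spherical angles rather than a single projection: azimuth in the x-y plane via atan2, and elevation out of that plane via asin(z), each quantized independently.

```python
import math

# Bin a unit normal [x, y, z] by azimuth (angle in the x-y plane) and
# elevation (angle out of that plane), quantized independently.
# This is one common scheme, not the one from any specific paper.

def normal_bin(n, az_bins=8, el_bins=4):
    x, y, z = n
    azimuth = math.atan2(y, x) % (2 * math.pi)       # range 0 .. 2*pi
    elevation = math.asin(max(-1.0, min(1.0, z)))    # range -pi/2 .. pi/2
    a = min(int(azimuth / (2 * math.pi) * az_bins), az_bins - 1)
    e = min(int((elevation + math.pi / 2) / math.pi * el_bins), el_bins - 1)
    return a * el_bins + e                           # flat bin index

# A normal pointing along +x, lying in the x-y plane:
print(normal_bin([1.0, 0.0, 0.0]))
```

With 8 azimuth bins and 4 elevation bins this gives a 32-bin histogram; setting el_bins=1 recovers the pure 2D-style azimuth binning the question starts from.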

TensorFlow Lite quantization fails to improve inference latency

ぐ巨炮叔叔 submitted on 2019-12-06 14:06:17
The TensorFlow website claims that quantization provides up to 3x lower latency on mobile devices: https://www.tensorflow.org/lite/performance/post_training_quantization I tried to verify this claim and found that quantized models are 45%-75% SLOWER than float models, in spite of being almost 4 times smaller in size. Needless to say, this is very disappointing and conflicts with Google's claims. My test uses Google's official MnasNet model: https://storage.googleapis.com/mnasnet/checkpoints/mnasnet-a1.tar.gz Here is the average latency based on 100 inference operations on a freshly rebooted phone:
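A latency comparison like this stands or falls on the measurement harness. Here is a minimal sketch (mine, with a stand-in workload; the question's actual benchmark code is not shown, and in a real test the lambda would be one TFLite interpreter invoke() call):

```python
import time

# Average wall-clock latency of `run` over `n` invocations, with a few
# warm-up calls first so one-time initialization doesn't skew the mean.

def mean_latency_ms(run, n=100, warmup=5):
    for _ in range(warmup):
        run()
    start = time.perf_counter()
    for _ in range(n):
        run()
    return (time.perf_counter() - start) / n * 1000.0

# Stand-in workload; the real benchmark would call the model here.
latency = mean_latency_ms(lambda: sum(i * i for i in range(1000)))
print(f"{latency:.3f} ms/inference")
```

Warm-up matters especially on phones, where the first few inferences pay for delegate setup and memory allocation; a freshly rebooted device, as used above, also avoids thermal throttling from earlier runs.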

How to use ColorQuantizerDescriptor?

江枫思渺然 submitted on 2019-12-06 13:23:15
Following the idea of @PhiLho's answer to How to convert a BufferedImage to 8 bit?, I want to use ColorQuantizerDescriptor to convert a BufferedImage of imageType TYPE_INT_RGB, but RenderedOp#getColorModel() throws the following exception:

java.lang.IllegalArgumentException: The specified ColorModel is incompatible with the image SampleModel.
    at javax.media.jai.PlanarImage.setImageLayout(PlanarImage.java:541)
    at javax.media.jai.RenderedOp.createRendering(RenderedOp.java:878)
    at javax.media.jai.RenderedOp.getColorModel(RenderedOp.java:2253)

This is the code that I am attempting to use:

Tensorflow build quantization tool - bazel build error

谁都会走 submitted on 2019-12-05 14:14:29
I am trying to compile the quantization script as described in Pete Warden's blog. However, I get the following error message after running this bazel build:

bazel build tensorflow/contrib/quantization/tools:quantize_graph
ERROR: no such package 'tensorflow/contrib/quantization/tools': BUILD file not found on package path.
INFO: Elapsed time: 0.277s

I think what happened is that this quantization tool got moved out of contrib and into TensorFlow core. You should be able to use this instead:

bazel build tensorflow/tools/quantization:quantize_graph

Source: https://stackoverflow.com

Fastest dithering / halftoning library in C

假如想象 submitted on 2019-12-05 10:08:04
I'm developing a custom thin-client server that serves rendered webpages to its clients. The server runs on a multicore Linux box, with WebKit providing the HTML rendering engine. The only problem is that the clients' displays are limited to a 4-bit (16-color) grayscale palette. I'm currently using GraphicsMagick to dither images (RGB to 4-bit grayscale), which is an apparent bottleneck in server performance. Profiling shows that more than 70% of the time is spent running GraphicsMagick dithering functions. I've explored Stack Overflow and the web for a good high-performance solution,…
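As a point of comparison with error-diffusion dithering, here is a sketch (mine, not GraphicsMagick code) of ordered (Bayer) dithering to 16 gray levels. Each pixel depends only on its own value and position, with no error carried between pixels, so rows parallelize trivially across cores, which matches the multicore setup above:

```python
# Ordered (Bayer) dithering of 8-bit grayscale down to 16 levels.
# Each pixel gets a position-dependent threshold bias from a 4x4 Bayer
# matrix before truncation, so no inter-pixel data dependencies exist.

BAYER4 = [  # classic 4x4 Bayer matrix, values 0..15
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_4bit(gray, width, height):
    """Map 8-bit grayscale pixels (flat row-major list) to 4-bit levels 0..15."""
    out = []
    for y in range(height):
        for x in range(width):
            v = gray[y * width + x]
            bias = (BAYER4[y % 4][x % 4] + 0.5) / 16.0  # in (0, 1)
            out.append(min(15, int(v / 255.0 * 15 + bias)))
    return out

pixels = [i * 255 // 63 for i in range(64)]  # 8x8 ramp from 0 to 255
print(dither_4bit(pixels, 8, 8))
```

The trade-off versus Floyd-Steinberg is a visible crosshatch pattern instead of diffused noise, but the per-pixel cost is a table lookup and a multiply, which is typically much cheaper than error diffusion's serial dependency chain.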

Generate the Dominant Colors for an RGB image with XMLHttpRequest

好久不见. submitted on 2019-12-03 14:40:12
A note for readers: this is a long question, but it needs background to understand what is being asked. Color quantization is a technique commonly used to get the dominant colors of an image. One of the well-known libraries that does color quantization is Leptonica, through Modified Median Cut Quantization (MMCQ) and octree quantization (OQ). GitHub's Color Thief by @lokesh is a very simple implementation in JavaScript of the MMCQ algorithm:

var colorThief = new ColorThief();
colorThief.getColor(sourceImage);

Technically, the image on an <img/> HTML element is backed by a <canvas/> element:
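To show the core idea behind MMCQ, here is a bare-bones median-cut sketch (mine, not Color Thief's code; real MMCQ adds population and volume heuristics when choosing which box to split):

```python
# Plain median cut: repeatedly split the pixel box with the widest channel
# range at the median of that channel, then average each box into a palette
# entry. Pixels are (r, g, b) tuples with 0..255 components.

def median_cut(pixels, n_colors):
    boxes = [list(pixels)]

    def spread(b):  # widest channel range within a box
        return max(max(p[c] for p in b) - min(p[c] for p in b) for c in range(3))

    while len(boxes) < n_colors:
        splittable = [b for b in boxes if len(b) > 1]
        if not splittable:
            break
        box = max(splittable, key=spread)
        ch = max(range(3),
                 key=lambda c: max(p[c] for p in box) - min(p[c] for p in box))
        box.sort(key=lambda p: p[ch])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]

    # Average color of each box becomes one palette entry.
    return [tuple(sum(p[c] for p in b) // len(b) for c in range(3)) for b in boxes]

# Two obvious clusters yield two palette entries, near-red and near-blue:
print(median_cut([(250, 0, 0), (255, 5, 5), (0, 0, 250), (5, 5, 255)], 2))
```

The dominant color is then simply the palette entry whose box holds the most pixels, which is essentially what getColor() returns.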

NeuQuant.js (JavaScript color quantization) hidden bug in JS conversion

空扰寡人 submitted on 2019-12-03 11:19:45
Question: NeuQuant.js works well when the image width and height are a multiple of 100: 300x300. Otherwise, there is obviously a bug: 299x300. (These were made with this web app.) I'm 90% sure that the bug is in NeuQuant.js. I have made tests using it with jsgif and omggif, and both encoders show the same bug. It is only obvious with photographic images (quantized to 256 colors) when the image size is anything other than a multiple of 100. If you understand neural networks, color quantization, and/or…