residual

Review of Image Super-resolution Reconstruction Based on Deep Learning

Submitted by 与世无争的帅哥 on 2019-12-16 13:04:57
Abstract: With deep learning methods being applied to image super-resolution (SR), SR methods based on deep learning have achieved better reconstruction results than traditional SR methods. This paper briefly summarizes SR methods based on deep learning, analyzes the characteristics and deficiencies of different network models, and compares various deep learning network models on mainstream datasets.

Keywords: image super-resolution reconstruction; deep learning; convolutional neural network

1. Introduction

Image super-resolution reconstruction is to recover a corresponding high-resolution

Classic Networks -- Image Classification (03 ResNet v1-v2)

Submitted by 大城市里の小女人 on 2019-12-12 00:10:16
Recently, our lab group decided to study the classic network models on a regular schedule, so I am writing this blog, continuously updated, to record our study and my own understanding of the various classic networks. If anything here is lacking or misunderstood, readers are welcome to question and criticize it, and I will gladly improve. I hope we can discuss, learn, and progress together.

Series contents:
Classic Networks -- Image Classification (01 AlexNet / VGG)
Classic Networks -- Image Classification (02 Inception v1-v4) (in progress)
Classic Networks -- Image Classification (03 ResNet v1-v2)

Classic Networks -- Image Classification (03 ResNet v1-v2)

This part covers ResNet, ResNet v2, and ResNeXt.

ResNet
[paper] Deep Residual Learning for Image Recognition
[github] https://github.com/KaimingHe/deep-residual-networks
[pytorch] https://pytorch.org/hub/pytorch_vision_resnet/

Introduction
We explicitly reformulate the layers as learning residual functions with reference to the layer inputs,
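To make the residual reformulation concrete: instead of learning a desired mapping H(x) directly, each block learns the residual F(x) = H(x) - x and outputs F(x) + x, so the block can fall back to the identity by driving F toward zero. A minimal NumPy sketch of this idea (my own illustration with made-up weight shapes, not the paper's convolutional block):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """y = F(x; w1, w2) + x, where F is a small two-layer transform.

    If w1 and w2 are zero, the block reduces exactly to the identity
    mapping -- the property that motivates residual learning.
    """
    return relu(x @ w1) @ w2 + x

# example: with zero weights, the block passes its input through unchanged
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
y_identity = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```

The same skip-connection structure is what ResNet stacks, with F implemented as convolutions and batch normalization rather than dense layers.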

An interesting Google interview algorithm I found online that requires linear time [closed]

Submitted by 匿名 (unverified) on 2019-12-03 02:45:02
Question: So I found this Google interview algorithm question online. It's really interesting, and I still haven't come up with a good solution. Please have a look and give me a hint or a solution; it would be great if you could write the code in Java :). "Design an algorithm that, given a list of n elements in an array, finds all the elements that appear more than n/3 times in the list. The algorithm should run in linear time (n >= 0). You are expected to use comparisons and achieve linear time. No hashing/excessive space/ and don't use
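The standard linear-time answer (not included in the truncated question above) generalizes the Boyer-Moore majority vote: at most two distinct values can each appear more than n/3 times, so track two candidates with two counters, then verify both in a second pass. The question asks for Java; a Python sketch of the same idea:

```python
def elements_over_n_over_3(a):
    """Return all values appearing more than len(a)/3 times, in O(n) time."""
    cand1 = cand2 = None
    cnt1 = cnt2 = 0
    # Pass 1: maintain at most two candidates (generalized majority vote).
    for x in a:
        if cand1 is not None and x == cand1:
            cnt1 += 1
        elif cand2 is not None and x == cand2:
            cnt2 += 1
        elif cnt1 == 0:
            cand1, cnt1 = x, 1
        elif cnt2 == 0:
            cand2, cnt2 = x, 1
        else:
            # x matches neither candidate: cancel one vote from each.
            cnt1 -= 1
            cnt2 -= 1
    # Pass 2: verify the surviving candidates against the real counts.
    n = len(a)
    return [c for c in (cand1, cand2)
            if c is not None and sum(1 for x in a if x == c) > n / 3]
```

Both passes are single scans using only comparisons, so the whole algorithm is O(n) time and O(1) extra space, meeting the stated constraints.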

R nls singular gradient

Submitted by 匿名 (unverified) on 2019-12-03 02:06:01
Question: I've tried searching the other threads on this topic, but none of the fixes work for me. I have the results of a natural experiment, and I want to show that the number of consecutive occurrences of an event fits an exponential distribution. My R shell is pasted below:

> x
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
[26] 26 27
> y
 [1] 1880  813  376  161  100   61   31    9    8    2    7    4    3    2    0
[16]    1    0    0    0    0    0    1    0    0    0    0    1
> dat2
    x    y
1   1 1880
2   2  813
3   3  376
4   4  161
5   5  100
6   6   61
7   7   31
8   8    9
9   9    8
10 10    2
11 11    7
12 12    4
13 13    3
14 14
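A common cause of nls's "singular gradient" error is poor starting values. For an exponential model y ≈ a·exp(b·x), the usual remedy is to fit the log-linearized model log(y) = log(a) + b·x on the positive observations first and pass the estimates as `start` values; the zero counts in the tail must be excluded from the log fit. An illustrative sketch in Python with synthetic data (the same idea carries over to R via lm followed by nls; the numbers below are made up to mimic the shape of the data above):

```python
import numpy as np

# Hypothetical data following y = a * exp(b * x) exactly, for illustration.
x = np.arange(1, 11, dtype=float)
y = 1880.0 * np.exp(-0.7 * x)

# Fit log(y) = log(a) + b*x by ordinary least squares to get start values.
mask = y > 0                      # log is undefined at zero counts
b, log_a = np.polyfit(x[mask], np.log(y[mask]), 1)
a = np.exp(log_a)
# a, b are now sensible starting values for a full nonlinear fit.
```

With real (noisy) data these log-linear estimates are biased but close enough that the nonlinear solver converges instead of failing with a singular gradient.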

Learning Attentions: Residual Attentional Siamese Network for High Performance (paper reading notes)

Submitted by 匿名 (unverified) on 2019-12-03 00:19:01
Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking

Tracking results from the benchmark_results repository maintained by Wang Qiang: https://github.com/foolwood/benchmark_results
Paper: http://www.dcs.bbk.ac.uk/~sjmaybank/CVPR18RASTrackCameraV3.3.pdf
Code: https://github.com/foolwood/RASNet

1. Abstract: Offline-trained trackers strike a good balance between accuracy and tracking speed, but adapting an offline-trained model to the object being tracked online remains a challenge. This paper reformulates the correlation filter inside a Siamese network and introduces three kinds of attention mechanisms. The method alleviates the overfitting problem common in deep learning, and separates representation learning from discriminative learning to strengthen the tracker's discriminative power and adaptability. It achieves strong results on the OTB2015 and VOT2017 benchmarks while running at up to 80 fps.
2. The three main contributions of the paper:
3. The tracking procedure:
4. The attention mechanisms
5. Weighted correlation filter: The authors argue that the blue box represents the tracked target better than the green box, so a weighted correlation filter is used to express this preference: find the bounding box with the maximal response (the blue box in the figure).
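As a rough illustration of the weighted-correlation idea (a toy NumPy sketch of the concept, not the paper's implementation; the function and variable names here are my own), one can weight the template features by an attention map before cross-correlating them with the search region, then take the location of the maximal response as the predicted target position:

```python
import numpy as np

def weighted_correlation_response(template, search, weight):
    """Cross-correlate an attention-weighted template over a search region.

    template, weight: 2-D arrays of the same shape (weight plays the role
    of a spatial attention map over the template).
    search: 2-D array at least as large as the template.
    Returns the dense response map over all valid offsets.
    """
    t = template * weight                      # attention-weighted template
    th, tw = t.shape
    sh, sw = search.shape
    resp = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            resp[i, j] = np.sum(t * search[i:i + th, j:j + tw])
    return resp

# Example: a bright 3x3 patch hidden at offset (2, 3) in an 8x8 search region.
search = np.zeros((8, 8))
search[2:5, 3:6] = 1.0
resp = weighted_correlation_response(np.ones((3, 3)), search, np.ones((3, 3)))
peak = np.unravel_index(resp.argmax(), resp.shape)   # location of max response
```

In the actual tracker the inputs are deep Siamese features rather than raw pixels, and the attention maps are learned, but the "weight, correlate, take the argmax" structure is the same.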