gcc

C++ std::vector optimization: large performance gains in common cases

泄露秘密 submitted on 2021-02-13 20:24:23
std::vector is one of the simplest and most commonly used containers in C++, and many people assume it is so simple that there is little left to optimize. I recently answered a question about vector optimization, so this is a good opportunity to discuss it: an optimization scheme for the majority of element types T. For most T, a home-grown vector can be faster than std::vector and use less memory. The main idea: insertions and removals in std::vector are relatively expensive because of memory allocation/deallocation and the moving of T objects, which leaves room for optimization for most T. The key is to reduce the cost of moving T objects. Whether optimization is possible is recorded in a static bool flag, which checks whether T internally contains pointers to itself or pointers that depend on its own address. If no such pointers exist, and a default-constructed T() is all-zero bytes (i.e. can be memset), full optimization is possible. The analysis runs the first time a vector&lt;T&gt; is created and sets the flag; later constructions of the same type skip the detection code, so the detection cost is negligible. (1) When the check allows optimization, the following optimizations are applied: 1) Avoiding moves of T objects: the home-grown library grows its buffer with realloc instead of the usual malloc, so in many cases growth requires no data copy, which improves performance substantially. The actual speedup depends on how many consecutive times the allocation can be extended in place; as long as the extension succeeds at the original address, no T objects need to be moved, which saves a great deal of overhead. 2) Reducing the cost of constructing and destroying T objects
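A minimal sketch of the realloc-based growth idea, not the author's actual library: it restricts itself to trivially copyable T (a conservative stand-in for the post's "no self-referential pointers" check, expressible with a standard type trait), so a byte-wise move performed by realloc is a valid move.

```cpp
#include <cassert>
#include <cstdlib>
#include <new>
#include <type_traits>

// Sketch: a vector that grows with realloc. For trivially copyable T,
// realloc's byte-wise relocation (when it cannot extend in place) is a
// valid move, so no per-element move constructor calls are needed.
template <typename T>
class relocating_vector {
    static_assert(std::is_trivially_copyable<T>::value,
                  "this sketch only supports trivially copyable T");
    T*          data_ = nullptr;
    std::size_t size_ = 0, cap_ = 0;

public:
    ~relocating_vector() { std::free(data_); }

    void push_back(const T& v) {
        if (size_ == cap_) {
            cap_ = cap_ ? cap_ * 2 : 8;
            // realloc may extend the block in place; if so, nothing moves.
            T* p = static_cast<T*>(std::realloc(data_, cap_ * sizeof(T)));
            if (!p) throw std::bad_alloc();
            data_ = p;
        }
        data_[size_++] = v;
    }

    std::size_t size() const { return size_; }
    const T& operator[](std::size_t i) const { return data_[i]; }
};
```

The static_assert plays the role of the post's one-time "can we optimize T?" flag, except that it is resolved entirely at compile time.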

python3 and python2 coexisting

房东的猫 submitted on 2021-02-13 14:01:49
The server I currently use runs CentOS 7.x, whose bundled Python is 2.x. If you want to learn Python, 3.x is the obvious choice, so the question is: how do you install a Python 3 environment, and how do you install a matching pip3 for it? More importantly, some of the system's bundled tools still depend on Python 2.x, so python3 and python2 must coexist, as must pip2 and pip3. I had this question for a while and asked several ops engineers online how to make python2 and python3 coexist. The usual advice was simply to invoke python3 and pip3 explicitly, but I found a better approach: since version 3.3, Python natively supports virtual environments through the venv module, which can replace the older virtualenv. The venv module creates lightweight "virtual environments" isolated from the system Python. Each virtual environment has its own Python binary (environments can even be created with different Python versions) and its own independent set of Python packages. Its biggest benefit is that every Python project can use its own environment without affecting the system Python environment or other projects' environments. Advantages: development environments for different applications stay independent; upgrading one environment affects neither other applications nor the global Python environment; it prevents package-management chaos and version conflicts in the system. Creating a virtual environment on CentOS 7: 1.1) Install dependency packages [root
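The venv workflow described above can be sketched as follows (assuming python3 is already installed; the directory name is an example, not from the post):

```shell
# Create an isolated environment with the stdlib venv module.
# --without-pip keeps the sketch self-contained; normally you omit it
# and get a per-environment pip3 automatically.
VENV_DIR="$(mktemp -d)/myproject-env"
python3 -m venv --without-pip "$VENV_DIR"

# The environment has its own interpreter; the system python2 is untouched.
"$VENV_DIR/bin/python" --version
```

Activating with `source "$VENV_DIR/bin/activate"` puts the environment's python and pip first on PATH for the current shell only, which is what lets python2 and python3 coexist cleanly.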

Building a load-balancing cluster with Nginx

送分小仙女 submitted on 2021-02-13 07:18:55
(1) Lab environment: youxi1 192.168.5.101 load balancer; youxi2 192.168.5.102 backend host 1; youxi3 192.168.5.103 backend host 2. (2) Nginx load-balancing strategies. Nginx load balancing selects one server from the backend server list defined in an upstream block to receive each user request. A basic upstream block looks like: upstream [server group name] { server [IP address]:[port]; server [IP address]:[port]; .... } Once the upstream block is configured, reverse-proxy the relevant requests to the server group, in this format: location ~ .*$ { index index.jsp index.html; proxy_pass http://[server group name]; } For the details of nginx location matching rules, see: http://outofmemory.cn/code-snippet/742/nginx-location-configuration-xiangxi-explain That completes the most basic load balancing, but it rarely satisfies real-world needs. Nginx's upstream module currently supports six load-balancing strategies (algorithms): round-robin (the default), weight, ip_hash (assignment by client IP), least_conn
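Putting the two fragments above together for the lab environment gives a minimal working configuration (IP addresses are from the post; the port and weight values are illustrative assumptions):

```nginx
# Backend server group: youxi2 and youxi3 from the lab environment.
upstream youxi_backend {
    server 192.168.5.102:80 weight=1;
    server 192.168.5.103:80 weight=1;
}

# On the load balancer (youxi1): reverse-proxy all requests to the group.
server {
    listen 80;
    location ~ .*$ {
        index index.jsp index.html;
        proxy_pass http://youxi_backend;
    }
}
```

Swapping the default round-robin for another strategy is one line inside the upstream block, e.g. `ip_hash;` or `least_conn;`.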

Linux notes: SIGHUP and daemons

青春壹個敷衍的年華 submitted on 2021-02-12 06:46:14
Reference: understanding the Linux signal and sigaction functions, http://blog.csdn.net/beginning1126/article/details/8680757 signal() is the simpler of the two: given a signal, you just supply a handler function. Of course, a simple function has correspondingly simple capabilities. A minimal example:

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

void ouch(int sig)
{
    printf("I got signal %d\n", sig);
    // (void) signal(SIGINT, SIG_DFL);
    // (void) signal(SIGINT, ouch);
}

int main()
{
    (void) signal(SIGINT, ouch);
    while (1) {
        printf("hello world...\n");
        sleep(1);
    }
}
```

In practice you set a different handler for each signal you care about. The macros SIG_IGN (ignore) and SIG_DFL (default) can also be passed as handlers. SIGSTOP and SIGKILL can be neither caught nor ignored. Note that, as experiment shows, signal() also blocks the signal currently being handled

GCC ARM Performance drop

你离开我真会死。 submitted on 2021-02-11 18:19:25
Question: I stumbled upon a very strange issue with GCC: a 25% drop in performance. Here is the story. I have a piece of software that is fp32 compute-intensive (neural networks compiled with TVM). I compile it for ARM (an rk3399 device); here is the info: gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/arm-linux-gnueabihf/5/lto-wrapper Target: arm-linux-gnueabihf Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.12' --with-bugurl

Eclipse will not run C programs

倖福魔咒の submitted on 2021-02-11 17:29:45
Question: I just recently installed the CDT plugin for Eclipse on Windows 8 and I'm getting the error: "Launch failed. Binary not found." I've looked into this, and I have installed Cygwin with gcc and set that up in the Eclipse settings. I went to Window>Preferences>New C/C++ Project Wizard>Makefile Project and checked Cygwin PE Parser (and, just in case, I checked PE Windows Parser as well). Then I went to Window>Preferences>Build>Environment and added my PATH variable there. I made sure to add C:

Makefile compiles all the files every time

隐身守侯 submitted on 2021-02-11 17:01:08
Question: My Makefile recompiles all the files every time I run it, even though the files have not changed. I know this question has been asked several times, but none of the provided solutions seem to work for me. I am new to Makefiles, and most of the time I do not understand the jargon used in the solutions. Also, I want to save all the generated .o files under the folder 'obj'. Here is my folder structure: project (-) gen (-) display (-) .c and .h files logic (-) .c and .h files lib (-) include (-) .h
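For the folder layout described above, a sketch of a Makefile that only rebuilds changed files and puts every .o under obj/ (directory and target names are assumptions based on the excerpt, not the asker's actual Makefile):

```makefile
# Hypothetical sketch for the gen/display + logic layout in the question.
CC     := gcc
CFLAGS := -Igen/display -Ilogic -Ilib/include -MMD -MP
SRCS   := $(wildcard gen/display/*.c) $(wildcard logic/*.c)
OBJS   := $(patsubst %.c,obj/%.o,$(notdir $(SRCS)))
VPATH  := gen/display:logic

app: $(OBJS)
	$(CC) $^ -o $@

# One pattern rule per object file: make compares the .o timestamp against
# its .c (and, via the generated .d files, its headers) and recompiles only
# what is out of date -- the usual fix for "it recompiles everything".
obj/%.o: %.c | obj
	$(CC) $(CFLAGS) -c $< -o $@

obj:
	mkdir -p obj

# Header dependencies produced by -MMD -MP.
-include $(OBJS:.o=.d)
```

The common cause of "compiles everything every run" is a catch-all rule (or a phony default target) with no real file prerequisites; per-file rules with correct prerequisites restore incremental builds.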
