scipy

How to find the periodic interval and periodic mean through the von Mises distribution?

扶醉桌前 submitted on 2020-12-07 04:46:00
Question: I have some data of time (hours of the day). I would like to fit a von Mises distribution to this data and find the periodic mean. How do I do this using scipy in Python? For example:

from scipy.stats import vonmises
data = [1, 2, 22, 23]
A = vonmises.fit(data)

I am not sure how I get the distribution (interval, probably) and the periodic mean of this data using the fit, mean, or interval methods.

Answer 1: Good job on finding the VM distribution. That's half of the battle. But unless I'm mistaken
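A minimal sketch of one way to do this (not from the truncated answer above): map the hours onto the circle, fit scipy.stats.vonmises with the scale fixed, and compute the circular mean directly. The example data and the 24-hour mapping are illustrative assumptions.

import numpy as np
from scipy.stats import vonmises

# Hypothetical data: hours of the day clustered around midnight.
data = np.array([1, 2, 22, 23])

# Map hours (0-24) onto angles in radians so the 24 h cycle wraps around.
angles = 2 * np.pi * data / 24.0

# Fit the von Mises distribution; fixing scale=1 keeps kappa identifiable.
kappa, loc, scale = vonmises.fit(angles, fscale=1)

# The periodic (circular) mean can also be computed directly.
circ_mean = np.arctan2(np.sin(angles).sum(), np.cos(angles).sum())

# Convert back to hours on [0, 24).
mean_hour = (circ_mean * 24 / (2 * np.pi)) % 24
print(kappa, mean_hour)  # mean_hour is ~0.0, i.e. around midnight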

Built-in functions available in OpenCV (Python) to find the distance between two images

落花浮王杯 submitted on 2020-12-06 19:17:47
Question: I want a faster normalized cross-correlation with which I can compute the similarity between two images. I want to know whether there are any built-in functions that can find the correlation between two images other than scipy.signal.correlate2d() and matplotlib xcorr(). If these two functions work, can anyone show me an example of finding the correlation between two images?

path1 = 'D:/image/cat1.jpg'
path2 = 'D:/image/cat2.jpg'
corrCoefft = computeCorrelationCoefft(path1, path2)

Answer 1: OpenCV does
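The answer above is cut off. As a sketch only: computeCorrelationCoefft is the asker's own undefined helper; one way it could be written with OpenCV is cv2.matchTemplate with the TM_CCOEFF_NORMED method, assuming both images exist at the given paths and have the same size.

import cv2

def compute_correlation_coefft(path1, path2):
    """Normalized cross-correlation between two equally-sized grayscale images."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    # matchTemplate with TM_CCOEFF_NORMED returns a 1x1 result map when the
    # "template" is the same size as the image; that single value is the score.
    result = cv2.matchTemplate(img1, img2, cv2.TM_CCOEFF_NORMED)
    return float(result[0, 0])

# Hypothetical usage with the paths from the question:
# score = compute_correlation_coefft('D:/image/cat1.jpg', 'D:/image/cat2.jpg')
# print(score)  # close to 1.0 for very similar images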

Evaluate the goodness of distributional fits

自闭症网瘾萝莉.ら submitted on 2020-12-06 07:34:45
Question: I have fitted some distributions to sample data with the following code:

import numpy as np
import pylab
import matplotlib.pyplot as plt
from scipy.stats import norm

samp = norm.rvs(loc=0, scale=1, size=150)  # (example) sample values

figprops = dict(figsize=(8., 7. / 1.618), dpi=128)
adjustprops = dict(left=0.1, bottom=0.1, right=0.97, top=0.93, wspace=0.2, hspace=0.2)
fig = pylab.figure(**figprops)
fig.subplots_adjust(**adjustprops)
ax = fig.add_subplot(1, 1, 1)
ax.hist(samp
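The code and any answer are truncated above. As a hedged sketch of one common way to score such fits numerically, the Kolmogorov-Smirnov test from scipy.stats can compare several fitted candidate distributions against the same kind of sample (note that using parameters estimated from the sample itself makes the p-values optimistic):

import numpy as np
from scipy import stats

np.random.seed(0)
samp = stats.norm.rvs(loc=0, scale=1, size=150)  # (example) sample values

# Fit candidate distributions and compare them with the K-S statistic:
# a smaller statistic (and larger p-value) indicates a better fit.
for dist_name in ['norm', 'laplace', 'cauchy']:
    dist = getattr(stats, dist_name)
    params = dist.fit(samp)
    ks_stat, p_value = stats.kstest(samp, dist_name, args=params)
    print(f'{dist_name:8s}  KS={ks_stat:.4f}  p={p_value:.4f}')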

Python spectrogram in 3D (like MATLAB's spectrogram function)

与世无争的帅哥 submitted on 2020-12-06 06:57:31
Question: My question is the following: I have all the values that I need for a spectrogram (scipy.fftpack.fft). I would like to create a 3D spectrogram in Python. In MATLAB this is a very simple task, while in Python it seems much more complicated. I tried mayavi and 3D plotting with matplotlib, but I have not managed to do this. Thanks. My code:

import numpy as np
import pandas as pd
from scipy import signal
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from
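The original code is cut off. Below is a minimal sketch, not from the post, of one way to get a MATLAB-like 3D spectrogram: compute it with scipy.signal.spectrogram and draw the surface with matplotlib's plot_surface. The chirp test signal and all parameters are illustrative assumptions.

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

# Synthetic test signal: a chirp sweeping from 100 Hz to 1 kHz over 2 s.
fs = 8000
t = np.arange(0, 2.0, 1.0 / fs)
x = signal.chirp(t, f0=100, f1=1000, t1=2.0, method='linear')

f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)

# Build a surface: time on x, frequency on y, power (dB) on z.
T, F = np.meshgrid(tt, f)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(T, F, 10 * np.log10(Sxx + 1e-12), cmap='viridis')
ax.set_xlabel('Time [s]')
ax.set_ylabel('Frequency [Hz]')
ax.set_zlabel('Power [dB]')
plt.show()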

Solving the Colebrook (nonlinear) equation in Python

爱⌒轻易说出口 submitted on 2020-12-06 04:21:10
Question: I want to do in Python what this guy did in MATLAB. I have installed Anaconda, so I have the numpy and sympy libraries. So far I have tried numpy's nsolve, but that doesn't work. I should say I'm new to Python, and also that I know how to do it in MATLAB :P. The equation:

-2*log(( 2.51/(331428*sqrt(x)) ) + ( 0.0002 /(3.71*0.26)) ) = 1/sqrt(x)

Normally, I would solve this iteratively, simply guessing x on the left and then solving for the x on the right. Put the solution back on the left, solve again.
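A minimal sketch of solving this particular equation numerically with scipy.optimize.brentq (not the linked MATLAB approach), assuming the log in the question is base-10, as in the standard Colebrook correlation, and that the root lies in the usual friction-factor range used for the bracket:

import numpy as np
from scipy.optimize import brentq

def residual(x):
    # Difference between the two sides of the question's equation.
    # log10 is assumed here, as in the standard Colebrook correlation.
    lhs = -2.0 * np.log10(2.51 / (331428.0 * np.sqrt(x)) + 0.0002 / (3.71 * 0.26))
    rhs = 1.0 / np.sqrt(x)
    return lhs - rhs

# Turbulent friction factors are typically small and positive, so a wide
# bracket such as (1e-4, 1.0) safely contains the sign change.
x = brentq(residual, 1e-4, 1.0)
print(x)  # roughly 0.02 for these coefficients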

wordcloud + jieba: generating word clouds

北战南征 submitted on 2020-12-04 08:52:48
Use the jieba library and wordcloud to generate Chinese word clouds.

jieba library: a third-party library for Chinese word segmentation
    Segmentation principle: using a Chinese lexicon, it determines the association probability between characters; characters with a high association probability are grouped into words.
    Three segmentation modes:
        1. Precise mode: splits the text exactly, with no redundant words
        2. Full mode: scans out every word that could possibly be formed from the text, with redundancy
        3. Search-engine mode: on the basis of precise mode, long words are segmented again
    Common functions:
        jieba.lcut(s)  # precise mode; returns the segmentation result as a list
        jieba.lcut(s, cut_all=True)  # full mode; returns the segmentation result as a list
        jieba.lcut_for_search(s)  # search-engine mode (precise mode, then over-long words are segmented again); returns the segmentation result as a list
        jieba.add_word(w)  # add a custom word to the reference Chinese lexicon, e.g. jieba.add_word("产生式系统"); no return value
        jieba.del_word(w)  # delete a word from the reference Chinese lexicon
        jieba.analyse.extract_tags(sentence, topK=10)  # keyword extraction; returns the 10 highest-weighted words as a list; note: import jieba.analyse
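A minimal sketch tying the two libraries together (the input file name and the font path are assumptions, not from the original post): segment the text with jieba in precise mode, join the words with spaces, and hand the result to WordCloud with a font that contains Chinese glyphs.

import jieba
from wordcloud import WordCloud

# Hypothetical input file containing Chinese text.
with open('article.txt', encoding='utf-8') as f:
    text = f.read()

# Precise-mode segmentation, then join with spaces so WordCloud can tokenize.
words = ' '.join(jieba.lcut(text))

# font_path must point to a font with Chinese glyphs (the path is an assumption).
wc = WordCloud(font_path='msyh.ttc', width=800, height=600,
               background_color='white').generate(words)
wc.to_file('wordcloud.png')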

scikit-learn

感情迁移 submitted on 2020-12-02 08:21:30
scikit-learn is a machine learning toolkit for the Python language.
    Simple and efficient tools for data mining and data analysis
    Reusable in a variety of settings
    Built on NumPy, SciPy, and matplotlib
    Open source and usable commercially - BSD license

Machine learning problems:
    Supervised learning: the data comes with additional attributes that we want to predict (each attribute is known).
        1. Classification: samples belong to two or more classes; we train on labeled data and predict the category of unlabeled data. Another way to put it: the data is discrete, and we want to label it with the correct category.
        2. Regression: when the desired output consists of one or more continuous variables, regression methods are used, for example a function predicting a person's height and weight.

Source: oschina  Link: https://my.oschina.net/u/3955849/blog/2997421
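A small self-contained sketch of the two supervised-learning problem types described above, using scikit-learn's bundled datasets (the model choices are illustrative, not from the original post):

from sklearn.datasets import load_iris, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: discrete labels (iris species).
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print('classification accuracy:', clf.score(X_te, y_te))

# Regression: a continuous target (diabetes disease progression).
X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print('regression R^2:', reg.score(X_te, y_te))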

Fast fusing of close points in a 2D numpy array (vectorized)

早过忘川 submitted on 2020-12-01 10:55:41
Question: I have a question similar to the one asked here: simple way of fusing a few close points. I want to replace points that are located close to each other with the average of their coordinates. The closeness (in cells) is specified by the user (I am talking about Euclidean distance). In my case I have a lot of points (about 1 million). The method works, but is time-consuming because it uses a double for loop. Is there a faster way to detect and fuse close points in a 2D numpy array? To be
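The answer is not shown above. One common vectorized approach, sketched here under the assumption that "fusing" means replacing each connected cluster of close points with its centroid, uses scipy.spatial.cKDTree to find all pairs within the user-given radius and scipy.sparse.csgraph.connected_components to group them:

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def fuse_close_points(points, radius):
    # Points whose chain of pairwise distances stays within `radius` end up in
    # the same cluster (connected components of the proximity graph); each
    # cluster is replaced by the mean of its coordinates.
    tree = cKDTree(points)
    pairs = np.array(list(tree.query_pairs(r=radius)))  # index pairs within radius
    n = len(points)
    if len(pairs) == 0:
        return points.copy()
    graph = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                       shape=(n, n))
    n_clusters, labels = connected_components(graph, directed=False)
    fused = np.empty((n_clusters, points.shape[1]))
    for dim in range(points.shape[1]):
        fused[:, dim] = np.bincount(labels, weights=points[:, dim]) / np.bincount(labels)
    return fused

# Hypothetical usage: the first two and the middle two points get merged.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.05, 5.0], [9.0, 1.0]])
print(fuse_close_points(pts, radius=0.2))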
