non-linear-regression

Simple Regression Prediction Algorithm in JavaScript

Submitted by 无人久伴 on 2019-12-02 10:04:06
I am trying to do a simple forecast of an organization's future profit from past records using regression. I am following this link. For testing purposes I changed the sample data, and it produced these results. My actual data will be dates and profits, and they will go up and down rather than increase steadily. I realized that the method above works for sample data that keeps increasing, as the prediction is then quite accurate. However, when I changed the data to the one in the screenshot, which swings up and down sharply, the prediction is no longer accurate.
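The linked article is not reproduced above, so purely as an illustration: simple (straight-line) regression fits a single trend line, which is why it tracks steadily increasing data well but smooths over data that fluctuates. A minimal Python sketch of the least-squares formulas such tutorials implement by hand (all names and numbers below are mine, not from the question):

```python
import numpy as np

# Ordinary least squares for y = a + b*x, the usual "simple regression".
def fit_line(x, y):
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # slope
    a = y.mean() - b * x.mean()                    # intercept
    return a, b

# Made-up monthly profits that fluctuate rather than steadily rise.
months = np.arange(1, 13, dtype=float)
profit = np.array([10, 14, 9, 15, 11, 18, 12, 17, 13, 19, 14, 20], dtype=float)

a, b = fit_line(months, profit)
forecast = a + b * 13  # month-13 forecast: trend only, the swings are ignored
```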

`nls` fails to estimate parameters of my model

Submitted by 守給你的承諾、 on 2019-12-02 06:15:19
I am trying to estimate the constants of Heaps' law. I have the following dataset, novels_colection:

```
  Number of novels DistinctWords WordOccurrences
1                1         13575          117795
2                1         34224          947652
3                1         40353         1146953
4                1         55392         1661664
5                1         60656         1968274
```

Then I build the following function:

```r
# Function for Heaps' law
heaps <- function(K, n, B){ K*n^B }
heaps(2, 117795, .7)  # Just to test it works
```

So n = WordOccurrences, and K and B are the constants to estimate in order to predict DistinctWords. I tried this, but it gives me an error:

```r
fitHeaps <- nls(DistinctWords ~ heaps(K, WordOccurrences, B), data =
```
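The question itself is in R (nls, cut off above), so purely as an illustration of the same power-law fit, here is a minimal Python sketch with scipy.optimize.curve_fit using the data from the question. The starting values p0 are my own guess; power-law fits commonly fail to converge without reasonable starting values, which is also a frequent reason nls errors out when no start argument is supplied:

```python
import numpy as np
from scipy.optimize import curve_fit

# Heaps' law: DistinctWords = K * WordOccurrences^B
def heaps(n, K, B):
    return K * n**B

word_occurrences = np.array([117795, 947652, 1146953, 1661664, 1968274], dtype=float)
distinct_words = np.array([13575, 34224, 40353, 55392, 60656], dtype=float)

# p0 is a rough guess (B is typically reported in the 0.4-0.7 range).
popt, pcov = curve_fit(heaps, word_occurrences, distinct_words, p0=[1.0, 0.7])
K_hat, B_hat = popt
print(K_hat, B_hat)
```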

Using Scipy curve_fit with variable number of parameters to optimize

Submitted by 血红的双手。 on 2019-12-02 03:54:08
Assuming we have the function below to optimize over 4 parameters, we have to write it as below; but if we want the same function with a larger number of parameters, we have to rewrite the function definition:

```python
def radius(z, a0, a1, k0, k1):
    k = np.array([k0, k1])
    a = np.array([a0, a1])
    w = 1.0
    phi = 0.0
    rs = r0 + np.sum(a * np.sin(k*z + w*t + phi), axis=1)
    return rs
```

The question is whether this can be done more easily, in a more automatic and intuitive way than this suggests. An example that currently has to be written out by hand:

```python
def radius(z, a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, k0, k1, k2
```
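One common pattern (a sketch of mine, not from the original thread): define the model with *params and let curve_fit infer the parameter count from the length of p0. The names r0, w, phi, t mirror the globals the snippet above assumes:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical globals matching the question's snippet.
r0, w, phi, t = 1.0, 1.0, 0.0, 0.5

def radius(z, *params):
    # First half of params are the a_i, second half the k_i.
    n = len(params) // 2
    a = np.asarray(params[:n])
    k = np.asarray(params[n:])
    return r0 + np.sum(a * np.sin(np.outer(z, k) + w*t + phi), axis=1)

# Synthetic data for demonstration.
z = np.linspace(0.0, 10.0, 50)
y = radius(z, 0.5, 0.3, 1.0, 2.0) + 0.01 * np.random.randn(z.size)

# curve_fit determines the number of parameters from p0,
# so the same model works for any (even) parameter count.
p0 = np.ones(4)
popt, pcov = curve_fit(radius, z, y, p0=p0)
```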

R Warning: 'newdata' had 15 rows but variables found have 22 rows [duplicate]

Submitted by 喜欢而已 on 2019-12-02 03:42:58
This question already has an answer here: Predict() - Maybe I'm not understanding it (4 answers). I have read a few answers on this here, but I am afraid I have not been able to figure it out. My R code is:

```r
colors <- bmw[bmw$Channel == "Colors" & bmw$Hour == 20, ]
colors_test <- tail(colors, 89)
colors_train <- head(colors, 810)
colors_train_agg <- aggregate(colors_train$Impressions,
                              list(colors_train$`Position of Ad in Break`),
                              FUN = mean, na.rm = TRUE)
colnames(colors_train_agg) <- c("ad_position", "avg_impressions")
lm_colors <- lm(colors_train_agg$avg_impressions ~ poly(colors_train_agg$ad_position
```
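Not stated in the excerpt, but a likely cause worth noting: when an lm formula references the training vectors directly (df$y ~ df$x), predict() cannot substitute newdata and falls back to one fitted value per training row, producing exactly this row-count warning; the usual fix is to use bare column names with a data argument and pass newdata containing those same columns. As a language-neutral illustration of the contract, a minimal Python sketch (made-up numbers): predicting on new rows is fine as long as the new data supplies the same feature columns the model was trained on.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit on 22 aggregated training rows (made-up numbers).
X_train = np.arange(22, dtype=float).reshape(-1, 1)  # ad positions
y_train = 100.0 - 3.0 * X_train.ravel()              # avg impressions

model = LinearRegression().fit(X_train, y_train)

# 15 new rows, same feature column: 15 predictions, no row-count mismatch.
X_new = np.arange(15, dtype=float).reshape(-1, 1)
predictions = model.predict(X_new)
```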

Multinomial logit in R: mlogit versus nnet

Submitted by ≡放荡痞女 on 2019-11-29 08:34:49
Question: I want to run a multinomial logit in R and have used two libraries, nnet and mlogit, which produce different results and report different kinds of statistics. My questions are: What is the source of the discrepancy between the coefficients and standard errors reported by nnet and those reported by mlogit? I would like to export my results to a LaTeX file using stargazer. When doing so, there is a problematic tradeoff: if I use the results from mlogit, then I get the statistics I want, such as
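For reference (not part of the original question), the model both packages estimate is below. Identification requires fixing the coefficient vector of one base outcome to zero, so a different choice of base category alone can make the two packages' coefficients look different even when the fitted probabilities agree:

```latex
P(y_i = j \mid x_i) = \frac{\exp(x_i^{\top}\beta_j)}{\sum_{k=1}^{J}\exp(x_i^{\top}\beta_k)},
\qquad \beta_{\text{base}} = 0 .
```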

Is deep learning bad at fitting simple non-linear functions outside the training scope (extrapolating)?

Submitted by 柔情痞子 on 2019-11-27 14:52:28
I am trying to create a simple deep-learning model to predict y = x**2, but it looks like deep learning is not able to learn the general function outside the scope of its training set. Intuitively, I can see that a neural network might not be able to fit y = x**2, as there is no multiplication involved between the inputs. Please note I am not asking how to create a model to fit x**2; I have already achieved that. I want to know the answers to the following questions: 1. Is my analysis correct? 2. If the answer to 1 is yes, then isn't the prediction scope of deep learning very limited? 3. Is there a better
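A minimal sketch of my own that reproduces the effect, assuming a small ReLU MLP in Keras: a ReLU network is piecewise linear, so outside the training interval its output grows at most linearly and cannot keep bending upward like x**2.

```python
import numpy as np
import tensorflow as tf

# Train on x in [-5, 5]; inside this range the net interpolates well.
x_train = np.linspace(-5, 5, 1000).reshape(-1, 1)
y_train = x_train**2

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=200, verbose=0)

# 3.0 is inside the training range; 10.0 and 20.0 are outside,
# where the piecewise-linear fit drifts far from x**2.
print(model.predict(np.array([[3.0], [10.0], [20.0]])))
```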