minimization

Knight's Shortest Path on Chessboard

Submitted by 我们两清 on 2019-12-17 15:18:28
Question: I've been practicing for an upcoming programming competition and have stumbled across a question that completely bewilders me. However, I feel it's a concept I should learn now rather than cross my fingers and hope it never comes up. Basically, it deals with a knight piece on a chessboard. You are given two inputs: a starting location and an ending location. The goal is to calculate and print the shortest path the knight can take to reach the target location. I've
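A breadth-first search over board squares is the standard way to answer this kind of shortest-path question. The sketch below is not from the thread; the (row, col) square representation and the 8x8 board size are my assumptions.

```python
from collections import deque

# All eight knight moves as (row delta, col delta) pairs.
KNIGHT_MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]

def knight_distance(start, goal, size=8):
    """Return the minimum number of knight moves from start to goal (BFS)."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in KNIGHT_MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1  # unreachable (cannot happen on a standard 8x8 board)

print(knight_distance((0, 0), (7, 7)))  # 6
```

Recording each square's predecessor during the search lets you reconstruct and print the actual sequence of moves, which is what the question ultimately asks for.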

When I minimize my HTML/CSS page it squishes all the pictures and text and ruins the formatting

Submitted by 空扰寡人 on 2019-12-12 02:13:47
Question: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Strict//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta name="generator" content="HTML-Kit Tools HTML Tidy plugin"> <link href="layout.css" rel="stylesheet" type="text/css"> <title></title> </head> <body> <div class="square" font=10px> <p></p> </div> <div id="wrapper"> <img class="A" src="download.jpg" alt="Solar Panels"/> <img class="B" src= "wind.jpg" alt="Windmills" /> <img class="C" src= "biomass.jpg" alt="Biomass" /> <img

Why does my scipy.optimize.minimize fail?

Submitted by 只愿长相守 on 2019-12-11 12:48:32
Question: I am trying to use fmin_bfgs to find the local minimum of the absolute value function abs(x). The initial point is set to 100.0; the expected answer is 0.0. However, I get: In [184]: op.fmin_bfgs(lambda x: np.abs(x), 100.0) Warning: Desired error not necessarily achieved due to precision loss. Current function value: 100.000000 Iterations: 0 Function evaluations: 64 Gradient evaluations: 20 Out[184]: array([100.0]) Why? Answer 1: Methods like fmin_bfgs and fmin_slsqp require smooth (continuous derivative)
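As the answer notes, BFGS assumes a smooth objective, and abs(x) is not differentiable at its minimum, so the line search stalls. A derivative-free method such as Nelder-Mead handles this case; a minimal sketch, not the thread's code:

```python
import numpy as np
from scipy.optimize import minimize

# Nelder-Mead only compares function values, so the kink in |x| at 0
# does not break it the way it breaks gradient-based BFGS.
res = minimize(lambda x: np.abs(x[0]), x0=[100.0], method="Nelder-Mead")
print(res.x)  # close to [0.]
```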

R: Isotonic regression Minimisation

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-11 12:05:51
Question: I want to minimize the following function: F = sum_{u=1..20} sum_{w=1..10} Q_{uw} (r_{uw} - y_{uw}) subject to the constraints: y_{u,w} >= y_{u,w+1}, y_{u,w} >= y_{u-1,w}, y_{20,0} >= 100, y_{0,10} >= 0. I have a 20x10 r_{uw} matrix and a 20x10 Q_{uw} matrix, and I now need to generate a y_{uw} matrix which adheres to the constraints. I am coding in R and am familiar with the lpSolve and optimx packages, but don't know how to use them for this particular question. Answer 1: Because Q_{uw} and r_{uw} are both data, all constraints as well as the objective are linear
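Following the answer's observation that both the objective and the constraints are linear in y, the non-obvious step is turning the monotonicity conditions into an inequality matrix. Below is a rough Python sketch using scipy.optimize.linprog (the thread itself is about R's lpSolve; the small toy dimensions and the upper bound on y are assumptions added only to keep the toy LP bounded, and should be replaced by the real constraints):

```python
import numpy as np
from scipy.optimize import linprog

U, W = 4, 3                        # toy sizes; the thread uses 20 x 10
rng = np.random.default_rng(0)
Q = rng.random((U, W))             # stand-ins for the Q_{uw} / r_{uw} data
r = rng.random((U, W))

def idx(u, w):                     # flatten y[u, w] into one decision vector
    return u * W + w

# Objective: minimize sum Q*(r - y)  <=>  minimize -sum Q*y (constant dropped).
c = -Q.ravel()

# Monotonicity constraints rewritten as A_ub @ y <= 0:
#   y[u, w] >= y[u, w+1]  ->  y[u, w+1] - y[u, w] <= 0
#   y[u, w] >= y[u-1, w]  ->  y[u-1, w] - y[u, w] <= 0
rows = []
for u in range(U):
    for w in range(W - 1):
        row = np.zeros(U * W); row[idx(u, w + 1)] = 1; row[idx(u, w)] = -1
        rows.append(row)
for u in range(1, U):
    for w in range(W):
        row = np.zeros(U * W); row[idx(u - 1, w)] = 1; row[idx(u, w)] = -1
        rows.append(row)
A_ub = np.vstack(rows)
b_ub = np.zeros(len(rows))

# The (0, r.max()) bounds are an added assumption so the toy problem is bounded.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, r.max()))
y = res.x.reshape(U, W)
```

The same constraint matrix, transposed into lpSolve's row-wise form, is what the R packages mentioned in the question expect.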

How do I pass through arguments to other functions (generally and via scipy)?

Submitted by 泄露秘密 on 2019-12-10 22:26:11
Question: I am trying to minimize a function that outputs chi-square via scipy and find the mu, sigma, normc that provide the best fit for a Gaussian overlay. from math import exp from math import pi from scipy.integrate import quad from scipy.optimize import minimize from scipy.stats import chisquare import numpy as np # guess initial values for minimized chi-square mu, sigma = np.mean(mydata), np.std(mydata) # mydata is my data points normc = 1/(sigma * (2*pi)**(1/2)) gauss = lambda x: normc * exp( (
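The usual way to forward extra data into the objective is the args keyword of scipy.optimize.minimize: everything in args is passed to each call of the objective after the parameter vector. A minimal sketch, not the thread's code; the placeholder data and the simpler negative log-likelihood stand in for the binned chi-square, but the args mechanism is identical either way:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, data):
    """Objective: the parameter vector comes first, extra arguments follow."""
    mu, sigma = params
    sigma = abs(sigma)  # keep sigma positive in this toy example
    return np.sum(0.5 * ((data - mu) / sigma) ** 2 + np.log(sigma))

data = np.random.default_rng(1).normal(5.0, 2.0, size=500)  # placeholder data
x0 = [np.mean(data), np.std(data)]

# `args` is forwarded to every call of neg_log_like after the parameters.
res = minimize(neg_log_like, x0, args=(data,), method="Nelder-Mead")
print(res.x)  # roughly [5., 2.]
```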

Fitting an ellipse through orbital data

Submitted by 二次信任 on 2019-12-10 16:25:46
Question: I've generated a bunch of data for the (x, y, z) coordinates of a planet as it orbits the Sun. Now I want to fit an ellipse through this data. What I tried to do: I created a dummy ellipse based on five parameters: the semi-major axis and eccentricity, which define the size and shape, and the three Euler angles, which rotate the ellipse. Since my data is not always centered at the origin, I also need to translate the ellipse, requiring three additional variables (dx, dy, dz). Once I initialise
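One common way to phrase this as a least-squares problem is to map the data into the candidate ellipse's own frame (undo the translation and the Euler rotation) and penalise both the in-plane ellipse equation and the out-of-plane distance. A rough sketch, not the poster's code; the parameter ordering, the "zxz" Euler convention, and the use of scipy.spatial.transform.Rotation are my choices:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, pts):
    a, e, ang1, ang2, ang3, dx, dy, dz = params
    b = a * np.sqrt(max(1.0 - e**2, 1e-12))             # semi-minor axis
    R = Rotation.from_euler("zxz", [ang1, ang2, ang3])
    local = R.inv().apply(pts - np.array([dx, dy, dz]))  # data in the ellipse frame
    in_plane = (local[:, 0] / a) ** 2 + (local[:, 1] / b) ** 2 - 1.0
    out_of_plane = local[:, 2]
    return np.concatenate([in_plane, out_of_plane])

# pts would be the (N, 3) array of orbital positions; p0 is a placeholder guess.
# p0 = [a0, e0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# fit = least_squares(residuals, p0, args=(pts,))
```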

Constrained minimization in MATLAB

Submitted by 喜你入骨 on 2019-12-08 13:54:14
Question: I want to solve a constrained minimization problem and am asking for some help on how to structure the code. I understand that fmincon is what I should use, by playing with the @mycon argument, but I am struggling to adapt it to my case. Any suggestion would be greatly appreciated. These are my files (a and b are predefined parameters): f1.m function [y1, y2, y3] = f1(x1, x2, a) ... end f2.m function w1 = f2(x1, x2, y2, y3, b) ... end Problem that I want to code: min y1 w.r.t. x1, x2 such
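The question is cut off before the actual constraints, so here is only the general shape of the problem, written with Python/scipy rather than MATLAB: the objective returns y1 from f1, and a separate constraint function wraps f2, which is exactly the role @mycon plays for fmincon. The bodies of f1/f2 and the constraint w1 >= 0 are hypothetical stand-ins:

```python
from scipy.optimize import minimize

a, b = 1.0, 2.0                    # placeholders for the predefined parameters

def f1(x1, x2, a):                 # stand-in for the poster's f1.m
    y1 = (x1 - 1) ** 2 + (x2 - 2) ** 2
    y2, y3 = x1 + x2, x1 * x2
    return y1, y2, y3

def f2(x1, x2, y2, y3, b):         # stand-in for the poster's f2.m
    return y2 + y3 - b

def objective(x):                  # min y1 with respect to x1, x2
    y1, _, _ = f1(x[0], x[1], a)
    return y1

def constraint(x):                 # plays the role of @mycon
    _, y2, y3 = f1(x[0], x[1], a)
    return f2(x[0], x[1], y2, y3, b)

cons = [{"type": "ineq", "fun": constraint}]   # "ineq" means constraint(x) >= 0
res = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=cons)
```

The same split carries over to fmincon: one function handle for the objective and one for the nonlinear constraints, both calling f1/f2 internally.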

Utilizing scipy.optimize.minimize with multiple variables of different shapes

Submitted by 点点圈 on 2019-12-06 13:19:43
Question: I am curious whether there is a straightforward method for using scipy.optimize.minimize with multiple variables that take different shapes. For example, let's take a look at a matrix decomposition problem. I apologize, but I will be using LaTeX here in the hope that one day SO will implement it. We can decompose the matrix $A_{n \times m}$ into two matrices $W_{k \times n}$ and $H_{k \times m}$ such that $A \approx W^T H$. There are numerous methods for solving for W and H, but let this just
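scipy.optimize.minimize only accepts a single 1-D parameter vector, so the usual workaround is to flatten W and H into one array and reshape them back inside the objective. A minimal sketch under that convention; the toy shapes and the squared-error loss are my choices, not the poster's:

```python
import numpy as np
from scipy.optimize import minimize

n, m, k = 6, 5, 2
A = np.random.default_rng(0).random((n, m))   # placeholder data matrix

def unpack(v):
    """Recover W (k x n) and H (k x m) from one flat parameter vector."""
    W = v[: k * n].reshape(k, n)
    H = v[k * n :].reshape(k, m)
    return W, H

def loss(v):
    W, H = unpack(v)
    return np.linalg.norm(A - W.T @ H) ** 2

v0 = np.random.default_rng(1).random(k * (n + m))
res = minimize(loss, v0, method="L-BFGS-B")
W, H = unpack(res.x)
```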

Optimization algorithm (dog-leg trust-region) in Matlab and Python

Submitted by China☆狼群 on 2019-12-06 11:46:11
Question: I'm trying to solve a set of nonlinear equations using the dog-leg trust-region algorithm in MATLAB and Python. In MATLAB there is fsolve, where this algorithm is the default, whereas for Python we specify 'dogleg' in scipy.optimize.minimize. I don't need to specify a Jacobian or Hessian for MATLAB, whereas Python needs one or the other to solve the problem. I don't have the Jacobian/Hessian, so is there a way around this issue for Python? Or is there another function that performs the equivalent
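For a square system of nonlinear equations, scipy.optimize.root is the closer counterpart to MATLAB's fsolve than minimize is, and its default 'hybr' method (Powell's hybrid method from MINPACK) builds its own finite-difference Jacobian, so none has to be supplied. A small sketch with a made-up system, not the thread's equations:

```python
import numpy as np
from scipy.optimize import root

def equations(v):
    """Toy 2x2 nonlinear system (placeholder for the real equations)."""
    x, y = v
    return [x**2 + y**2 - 4.0,
            np.exp(x) + y - 1.0]

# method='hybr' (the default) approximates the Jacobian internally,
# much like fsolve's default algorithm does in MATLAB.
sol = root(equations, x0=[1.0, 1.0], method="hybr")
print(sol.x)
```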

How to properly round numpy float arrays

Submitted by *爱你&永不变心* on 2019-12-06 09:22:06
Question: I have some sort of rounding issue when rounding floats. x = np.array([[1.234793487329877,2.37432987432],[1.348732847,8437.328737874]]) np.round(x,2) array([[ 1.23000000e+00, 2.37000000e+00], [ 1.35000000e+00, 8.43733000e+03]]) Is there a way to display these numbers without the trailing zeros? Answer 1: Rounding floating-point numbers is almost never needed (unless you want to bucket them, in which case your code will work just fine). If you only want to print them with less precision, use this: print(np
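Where the truncated answer says to print with less precision, numpy's print options do exactly that without changing the stored values; a quick sketch along those lines:

```python
import numpy as np

x = np.array([[1.234793487329877, 2.37432987432],
              [1.348732847, 8437.328737874]])

# Affects display only, not the stored values; suppress avoids scientific notation.
np.set_printoptions(precision=2, suppress=True)
print(x)
# roughly: [[   1.23    2.37]
#           [   1.35 8437.33]]

# Or format a single array without touching the global print options:
print(np.array2string(x, precision=2, suppress_small=True))
```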