Replies: 10 comments 4 replies
-
The problem here is that the minimizer w.r.t. x will depend on scalar, so if you change the value of scalar, you will have to recompute the minimizer in order to evaluate your objective?
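That is, writing it out (notation mine, not from the original post): with

$$x^*(s) = \operatorname*{arg\,min}_x f(x, s), \qquad v(s) = f(x^*(s), s),$$

evaluating $v$ at a new value of $s$ requires re-solving the minimisation to get $x^*(s)$ first.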
-
Thanks, that's OK. I appreciate it's very expensive, but it lets me solve the problem without worrying about whether I've chosen an eps for finite differences that overly affects the answer. Would you be able to show me how to do it?
-
I am still not sure I understand your problem. I think that you want to solve the problem

$$\min_x f(x) \quad \text{subject to} \quad L \le x \le U$$

and then compute the partial of f(x) w.r.t. x6. This will just be the positive (negative) of the Lagrange multiplier for x6 at the optimal solution if L6 (U6) is active at the solution.
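Spelling out why (my notation, standard KKT reasoning): at a KKT point of the bound-constrained problem above, stationarity of the Lagrangian gives

$$\frac{\partial f}{\partial x_6} = z_{L,6} - z_{U,6},$$

where $z_{L,6}, z_{U,6} \ge 0$ are the multipliers for the lower and upper bounds on $x_6$ (Ipopt reports these as z_L and z_U). At most the active bound's multiplier is nonzero, which gives the positive/negative sign pattern described above.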
-
I think what I'm trying to do is the following. Once I've solved the minimisation of

Given:

Compute:

I've computed the first partial derivative using CppAD and compared this to a known result; it's exact. The code to do so is as follows:

```cpp
// Assumptions from the surrounding context: ADNumber is CppAD::AD<double>,
// R is a known constant scalar, size is the problem dimension, and x is the
// solution vector returned by ipopt (a plain std::vector<double>).
const std::size_t n = 1;
std::vector<ADNumber> ax(n);
ax[0] = R; // constant known scalar as noted above
CppAD::Independent(ax);

const std::size_t m = 1;
std::vector<ADNumber> B(m);
B[0] = 0.0;
for (std::size_t s = 0; s < size; s++)
{
    ADNumber b = 0.0;
    // Compute_b_term takes ax[0] as a const&, and assigns b through a non-const&.
    Compute_b_term(ax[0], b);
    // Note! x comes from solution.x (after the ipopt call completes) and is
    // simply a std::vector<double>, not an AD type.
    B[0] += x[s] * b;
}
CppAD::ADFun<double> f(ax, B);

// Compute the derivative using the operation sequence stored in f.
std::vector<double> val(n); // domain space vector
val[0] = R;                 // argument value at which to evaluate the derivative
// Jacobian of f (an m-by-n matrix) for the recorded operation sequence.
std::vector<double> first_partial_derivative = f.Jacobian(val);
```

I've no idea how to compute the second partial derivative above... could you provide any advice? I assume it requires me to use similar code to the above, but to use
-
Suppose we are given the unconstrained problem

$$\min_x f(x, R)$$

Define $x(R)$ as the minimizer, so that

$$f_x(x(R), R) = 0$$

because the derivative is zero at an unconstrained optimum. The problem here is computing the derivatives of $x(R)$ w.r.t. $R$. I suggest you see
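Filling in the standard step this points at (my derivation, not from the thread): differentiating the first-order condition $f_x(x(R), R) = 0$ with respect to $R$ gives

$$f_{xx}(x(R), R)\, x'(R) + f_{xR}(x(R), R) = 0 \quad\Longrightarrow\quad x'(R) = -\,f_{xx}(x(R), R)^{-1}\, f_{xR}(x(R), R),$$

i.e. the implicit function theorem applied at the optimum; both second partials can be evaluated with CppAD once $x(R)$ is available from the solver.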
-
Looks like you're missing a double-$ at the end of the line? Many thanks for this; it's going to take some time to digest. I should have added (I omitted it because I was unaware of where this was going) that my problem is a constrained optimisation problem (equality and inequality constraints). Can I ask another related question? It came up while setting up the finite-difference version of this, which I should add seems to work and produces answers that are reported elsewhere for my problem. Using my nomenclature above, I am doing:

To get this to work I used this example as a starting point, and since I need to call the optimiser twice per derivative that I need, and I need those derivatives at a lot of

Many thanks,
-
Yes
-
Are you using ipopt_solve for your optimization?
-
I am going to move this issue to a discussion.
-
cppad_ipopt_nlp was deprecated about 10 years ago; see Still, it is a long time since I worked on ipopt::solve and it could use improvement.
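For context, a minimal sketch of the ipopt::solve interface being referred to, in the style of CppAD's get_started example for the hs071 problem used earlier in this thread (bounds, starting point, and options are abbreviated and would need to match the real problem):

```cpp
#include <cppad/ipopt/solve.hpp>
#include <vector>

// fg[0] is the objective; fg[1], fg[2], ... are the constraint functions.
class FG_eval {
public:
    typedef CPPAD_TESTVECTOR(CppAD::AD<double>) ADvector;
    void operator()(ADvector& fg, const ADvector& x)
    {
        fg[0] = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];           // objective
        fg[1] = x[0] * x[1] * x[2] * x[3];                           // g_1(x)
        fg[2] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];       // g_2(x)
    }
};

// Later, with Dvector = std::vector<double> and xi (start point),
// xl, xu (variable bounds), gl, gu (constraint bounds) filled in:
//   std::string options = "Integer print_level 0\n";
//   CppAD::ipopt::solve_result<Dvector> solution;
//   CppAD::ipopt::solve<Dvector, FG_eval>(
//       options, xi, xl, xu, gl, gu, fg_eval, solution);
//   // solution.x is the minimizer; solution.obj_value the objective value.
```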
-
Hi, I'm new to CppAD, but not new to Ipopt or the concept of AD. In trying to explain my question, I was going to use this example. My question (sorry if the title is not quite right): let's assume that the objective also depends on another scalar:
```cpp
fg[0] = scalar * x1 * x4 * (x1 + x2 + x3) + x3;
```
I'd like to understand how to obtain the Jacobian representing d(xi)/d(scalar), but here the xi are only obtained after minimising via ipopt. Is that possible? Does my question make sense? scalar can be any type as needed. So I guess, in finite-difference terms, I'd compute xi(scalar+eps) via one ipopt call and xi(scalar) via another, then compute the FD Jacobian as (xi(scalar+eps) - xi(scalar))/eps. I'm hoping instead to do this by AD...
Many thanks,
Andy
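As a baseline, a minimal sketch of the finite-difference fallback described above; `solve_for_x` is a hypothetical helper (not part of CppAD or Ipopt) that wraps the full ipopt minimisation and returns solution.x for a given value of scalar:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical: runs the ipopt minimisation with the objective's scalar
// set to the given value and returns the optimal x (i.e. solution.x).
std::vector<double> solve_for_x(double scalar);

// Forward-difference approximation of d(xi)/d(scalar): one extra ipopt
// solve per perturbation, exactly as described in the question.
std::vector<double> fd_jacobian(double scalar, double eps)
{
    std::vector<double> x0 = solve_for_x(scalar);       // xi(scalar)
    std::vector<double> x1 = solve_for_x(scalar + eps); // xi(scalar + eps)
    std::vector<double> jac(x0.size());
    for (std::size_t i = 0; i < x0.size(); ++i)
        jac[i] = (x1[i] - x0[i]) / eps;
    return jac;
}
```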