Does dccp solve convex (but non-DCP) problems to global optimality?
Hi there - apologies if this is a naive question or the wrong forum. Of course, DCCP (and CCP in general) is a local optimization method. However, what about the case where the problem is convex, but not DCP? For instance, consider the trivial example: minimize J(x) = x^2 - 0.1*x^2. If this is given to DCCP as J(x) = f(x) - g(x) with f(x) = x^2 and g(x) = 0.1*x^2, is there any proof or expectation that the algorithm converges to a global optimum? More generally, I am trying to understand which algorithms are best suited to a problem that is known to be convex but is not DCP, i.e. one expressed as a difference of two convex functions. Thanks!
Hello,
$$ J(x) = x^2 - 0.1 x^2 $$
written that way is neither DCP nor DCCP. You can confirm that with `obj.is_dcp()` and `dccp.is_dccp(prob)`:
>>> import cvxpy as cp
>>> import dccp
>>>
>>> x = cp.Variable(1)
>>> obj = cp.Minimize(cp.square(x) - 0.1*cp.square(x))
>>> obj.is_dcp()
False
>>> prob = cp.Problem(obj)
>>> dccp.is_dccp(prob)
False
If the objective is known to be convex but is not expressed in a DCP-compliant way, it will not pass the DCCP checks either.
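On the convergence question itself: for this particular toy problem, the CCP subproblems can be worked out by hand. A minimal sketch, assuming the splitting f(x) = x^2 and g(x) = 0.1*x^2 from the question (the function name `ccp_iterate` is hypothetical, not part of DCCP):

```python
# Hand-rolled CCP iteration for minimize x^2 - 0.1*x^2.
# Each CCP step linearizes g(x) = 0.1*x^2 at the current iterate x_k:
#   g(x) ~ g(x_k) + g'(x_k) * (x - x_k),  with g'(x_k) = 0.2 * x_k,
# and minimizes the convex surrogate x^2 - 0.2*x_k*x (+ const).
# Setting the derivative 2*x - 0.2*x_k to zero gives x = 0.1 * x_k.
def ccp_iterate(x0, num_iters=20):
    x = x0
    for _ in range(num_iters):
        x = 0.1 * x  # closed-form minimizer of the convexified subproblem
    return x

print(ccp_iterate(5.0))  # shrinks geometrically toward 0
```

So on this example the iterates contract toward x = 0, the global minimizer, from any starting point - but that is a property of this specific problem, not a general guarantee for CCP on convex-but-non-DCP problems.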
But it depends on how you write it, right? For instance:
x = cp.Variable(1)
t = cp.Variable(1)
obj = cp.Minimize(t)
print("Is the objective function DCP?", obj.is_dcp()) # True
cons = []
cons.append(t >= 0)
cons.append(cp.square(x) <= t + 0.1 * cp.square(x))
prob = cp.Problem(obj, cons)
print("Is the problem DCCP?", dccp.is_dccp(prob)) # True
Yes, absolutely. Whether a problem passes the rules depends on how you express it; that applies to both DCP and DCCP.