safe-control-gym
Understanding BoundedConstraint class
I'm not sure if this is a bug or if I'm using the functionality wrong, so please let me know accordingly.
Here is how I'm trying to use `BoundedConstraint`, specified in my task_config yaml file:
```yaml
constraints:
  - constraint_form: bounded_constraint
    lower_bounds: [0, 0, 0]  # should match state dim
    upper_bounds: [2.6, 0, 0]
    constrained_variable: state
    active_dims: 0  # only position
```
When I use `BoundedConstraint` to constrain the state (I have 3 states but want to constrain only 1), I found I needed to supply `lower_bounds` and `upper_bounds` with shapes equal to the number of states (e.g. 3 if I have 3 states), because `self.dim` is defined as `self.dim = env.state_dim`, which is then used in `self.constraint_filter = np.eye(self.dim)[active_dims]` here, where the filter is supposed to extract only the `active_dims` entries of the state to be constrained.
But when I do so, with `lower_bounds` matching the shape of `env.state_dim`, the assertion here, `assert A.shape[1] == self.dim, '[ERROR] A has the wrong dimension!'`, fails.
This seems to happen because in `Constraint`, inside the code chunk below, `self.dim` is overwritten by `len(active_dims)` after `constraint_filter` is defined, so it ends up being 1 in my case, while `A` already has shape (6, 3) because `self.dim` was set to `env.state_dim` (which was 3) earlier.
```python
if self.constrained_variable == ConstrainedVariableType.STATE:
    self.dim = env.state_dim
elif self.constrained_variable == ConstrainedVariableType.INPUT:
    self.dim = env.action_dim
elif self.constrained_variable == ConstrainedVariableType.INPUT_AND_STATE:
    self.dim = env.state_dim + env.action_dim
else:
    raise NotImplementedError('[ERROR] invalid constrained_variable (use STATE, INPUT or INPUT_AND_STATE).')
# Save the strictness attribute
self.strict = strict
# Only want to select specific dimensions, implemented via a filter matrix.
if active_dims is not None:
    if isinstance(active_dims, int):
        active_dims = [active_dims]
    assert isinstance(active_dims, (list, np.ndarray)), '[ERROR] active_dims is not a list/array.'
    assert (len(active_dims) <= self.dim), '[ERROR] more active_dim than constrainable self.dim'
    assert all(isinstance(n, int) for n in active_dims), '[ERROR] non-integer active_dim.'
    assert all((n < self.dim) for n in active_dims), '[ERROR] active_dim not stricly smaller than self.dim.'
    assert (len(active_dims) == len(set(active_dims))), '[ERROR] duplicates in active_dim'
    self.constraint_filter = np.eye(self.dim)[active_dims]
    self.dim = len(active_dims)
```
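For reference, here is a minimal numpy-only reconstruction of the shape mismatch as I understand it (the variable names mirror `BoundedConstraint`/`LinearConstraint`, but this is just my own sketch, not the library code):

```python
import numpy as np

state_dim = 3                                        # env.state_dim in my environment
active_dims = [0]                                    # only constrain position

# BoundedConstraint builds A and b from the length of the bounds:
lower_bounds = np.array([0.0, 0.0, 0.0])
upper_bounds = np.array([2.6, 0.0, 0.0])
dim = lower_bounds.shape[0]                          # 3
A = np.vstack((-np.eye(dim), np.eye(dim)))           # shape (6, 3)
b = np.hstack((-lower_bounds, upper_bounds))         # shape (6,)

# The parent Constraint then builds the filter and shrinks dim:
constraint_filter = np.eye(state_dim)[active_dims]   # shape (1, 3)
dim = len(active_dims)                               # 1

# This is the check that fails, since A.shape[1] == 3 but dim == 1:
assert A.shape[1] == dim, '[ERROR] A has the wrong dimension!'
```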
Could you please check if this was the issue?
Another attempt I made was to set `lower_bounds` to have the same shape as `active_dims`, e.g. 1 (I only want to constrain 1 state). That doesn't work either, because the matmul fails here in `LinearConstraint`: `self.sym_func = lambda x: self.A @ self.constraint_filter @ x - self.b`.
Full error: `ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 3)`
Summary:
What is the right shape for `lower_bounds` and `upper_bounds`?
- When I set it equal to the shape of `env.state_dim`, I get an error from `assert A.shape[1] == self.dim, '[ERROR] A has the wrong dimension!'` because `self.dim = len(active_dims)`.
- When I set it equal to `len(active_dims)`, it throws `ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 3)` because of `self.sym_func = lambda x: self.A @ self.constraint_filter @ x - self.b`.
Would love to know if this is indeed a bug or if I'm using it wrong. I'll also keep trying in case I missed anything.
Thank you!
@adamhall @JacopoPan @Justin-Yuan
Hi Nicholas,
Thanks for the comment. You actually have to set `lower_bounds` and `upper_bounds` to have the same dimension as `active_dims`. For example, the following works as expected for me:
```yaml
constraints:
  - constraint_form: bounded_constraint
    lower_bounds: [ 0 ]  # should match active_dims
    upper_bounds: [ 2.6 ]
    constrained_variable: state
    active_dims: 0  # only position
```
as `self.dim = len(active_dims)` is set here. Can you try this and see if it works? If not, can you post a minimal working example of your bug so I can recreate it? An alternative option is to set the dimensions you do not want to constrain to large values:
```yaml
constraints:
  - constraint_form: bounded_constraint
    lower_bounds: [ 0, -100, -100 ]
    upper_bounds: [ 2.6, 100, 100 ]
    constrained_variable: state
```
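(For what it's worth, here is a rough shape check of why this variant lines up; it is just my own numpy sketch under the assumption that omitting `active_dims` constrains the full state, not the library code.)

```python
import numpy as np

# Full-state bounds: every dimension agrees with env.state_dim = 3.
lower_bounds = np.array([0.0, -100.0, -100.0])
upper_bounds = np.array([2.6, 100.0, 100.0])
dim = lower_bounds.shape[0]                      # 3 == env.state_dim
A = np.vstack((-np.eye(dim), np.eye(dim)))       # shape (6, 3)
b = np.hstack((-lower_bounds, upper_bounds))     # shape (6,)

x = np.array([1.0, 0.0, 0.0])                    # an example state
print(A @ x - b <= 0)                            # all True, i.e. constraint satisfied
```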
On another note, why do you only have 3 states? cartpole has 4, and quadrotor has either 2, 6, or 12, depending on which quad you run.
@adamhall
If missing, can you then create a patch/PR with the dimension check on `lower_bounds`, `upper_bounds`, and `active_dims`, raising an exception with an error message?
It could also be something to mention in the docstrings of `BoundedConstraint`'s constructor (and any other similar class).
https://github.com/utiasDSL/safe-control-gym/blob/c031b74ea3b05d4c91ca13d4f2fd9cb410d23a70/safe_control_gym/envs/constraints.py#L285-L292
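Something along these lines might do it (just a rough sketch for illustration, not code for the actual patch; `check_bound_dims` is a made-up helper name):

```python
import numpy as np

def check_bound_dims(lower_bounds, upper_bounds, active_dims=None):
    """Sketch of the requested dimension check (hypothetical, not the real patch)."""
    lower_bounds = np.array(lower_bounds, ndmin=1)
    upper_bounds = np.array(upper_bounds, ndmin=1)
    if lower_bounds.shape != upper_bounds.shape:
        raise ValueError('[ERROR] lower_bounds and upper_bounds must have the same shape.')
    if active_dims is not None:
        n_active = 1 if isinstance(active_dims, int) else len(active_dims)
        if lower_bounds.shape[0] != n_active:
            raise ValueError('[ERROR] bounds must have the same length as active_dims.')
```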
(I think Nicholas has a different state vector length because he wants to work on creating a new environment.)
@JacopoPan Yes for sure, but I just want to make sure this is actually the issue first, so I'll wait until Nicholas gets it working.
Hi @adamhall
Thanks a lot for the detailed response and explanation, as well as for helping me check whether it works on your end.
Yes, I actually tried that out as well; as I mentioned in the post above, if I set `lower_bounds` to have the same dimension as `active_dims`, I get the following error:
`ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 3)`
However, I did a sanity check with the following code and it does work:

```python
import numpy as np

lower_bounds = np.array([0.2])
upper_bounds = np.array([5])
dim = lower_bounds.shape[0]
A = np.vstack((-np.eye(dim), np.eye(dim)))           # shape (2, 1)
b = np.hstack((-lower_bounds, upper_bounds))         # shape (2,)

active_dims = [0]
x = np.array([3, 0, 0])
dim_state = x.shape[0]
constraint_filter = np.eye(dim_state)[active_dims]   # shape (1, 3)

print(A @ constraint_filter @ x)                     # [-3.  3.]
```
After further debugging, I realized that since I've been experimenting with classical control on my own environment, I hadn't been paying attention to my `_set_observation_space()` function, which I was mainly using when testing RL methods.
I just found out that `env.state_dim` is set based on `observation_space` (and not on my `env.state`):
https://github.com/utiasDSL/safe-control-gym/blob/d7a59cc71af589a9203effbbc11391592c7f6559/safe_control_gym/envs/benchmark_env.py#L182-L187
so when I forgot to update my `observation_space` accordingly, `env.state_dim` became incorrect.
I fixed the `observation_space`, and now the constraints are returned properly.
Thank you so much for your clarification.
It was a silly mistake on my part to forget to maintain my `observation_space` as I experimented with different state dimensions 😄, but perhaps this realization will come in handy for other users implementing their own environments (based on `BenchmarkEnv`) and using `BoundedConstraint`: make sure `lower_bounds` has the same shape as `active_dims`.
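For anyone else building a custom environment on top of `BenchmarkEnv`, this is roughly the shape of the fix on my side (the class and state layout below are made up for illustration, not my actual environment):

```python
import numpy as np
from gym import spaces
from safe_control_gym.envs.benchmark_env import BenchmarkEnv


class MyCustomEnv(BenchmarkEnv):
    """Hypothetical environment, only to illustrate keeping observation_space in sync."""

    def _set_observation_space(self):
        # env.state_dim is derived from observation_space, so the Box shape here
        # must track the true length of env.state (3 in my case).
        state_dim = 3  # illustrative only
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(state_dim,), dtype=np.float32)
```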
> can you then create a patch/PR with the dimension check on `lower_bounds`, `upper_bounds`, and `active_dims`, raising an exception with an error message?
@adamhall PR patch to main and/or dev-experiment-class?
Some warnings and checks for the dimensions of the bounds and `active_dims` were added as part of PR #88.