"NamespaceResolutionFailedException: unresolved namespace" with fabric8 Reactive Discovery Client
We have a bug that I found while refactoring integration tests.
Here is the set-up for it. We need to look at two properties:
spring.cloud.discovery.blocking.enabled=false
spring.cloud.discovery.reactive.enabled=true
that is: disable blocking, enable reactive. Then, we need to inject the reactive client, for example:
@Autowired
private ReactiveDiscoveryClient discoveryClient;
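For context, here is a minimal sketch of how the reactive client might be exercised (the component and the service name my-service are illustrative, not from the original report):

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.ReactiveDiscoveryClient;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Flux;

@Component
public class ServiceLookup {

    private final ReactiveDiscoveryClient discoveryClient;

    public ServiceLookup(ReactiveDiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public Flux<ServiceInstance> instances() {
        // with the bug described below, this call fails with
        // NamespaceResolutionFailedException
        return discoveryClient.getInstances("my-service");
    }
}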
Since we have disabled blocking discovery, no beans from the blocking implementation will be created, because:
@ConditionalOnSpringCloudKubernetesBlockingDiscovery
public class KubernetesDiscoveryClientAutoConfiguration {
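For reference, the relevant gating can be approximated like this (a sketch, since the real meta-annotation composes several conditions; the one that matters here is the blocking flag, which defaults to enabled when the property is absent):

// approximation of the condition, not the actual library source
@ConditionalOnProperty(value = "spring.cloud.discovery.blocking.enabled", matchIfMissing = true)
public class KubernetesDiscoveryClientAutoConfiguration {
    // ...
}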
On the other hand, our reactive client is based on the blocking one, so we have:
public KubernetesReactiveDiscoveryClient(KubernetesClient client, KubernetesDiscoveryProperties properties,
        KubernetesClientServicesFunction kubernetesClientServicesFunction) {
    // the blocking client is instantiated directly here, outside the Spring container
    this.kubernetesDiscoveryClient = new KubernetesDiscoveryClient(client, properties,
            kubernetesClientServicesFunction);
}
So, even if the blocking implementation is disabled, we still create a blocking client under the hood.
The problem arises because the blocking client has:
@Deprecated(forRemoval = true)
@Override
public final void setEnvironment(Environment environment) {
    namespaceProvider = new KubernetesNamespaceProvider(environment);
}
which of course no one will call: setEnvironment is a Spring EnvironmentAware callback, and the container only invokes it on beans it manages, never on an instance created manually. As a result, namespaceProvider stays null, and namespace resolution ultimately fails with a NamespaceResolutionFailedException, even if you provide an explicit namespace property.
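To make the failure path concrete, here is a rough sketch (a fragment reusing the names from the snippets above; the internals are simplified):

KubernetesDiscoveryClient delegate = new KubernetesDiscoveryClient(client, properties,
        kubernetesClientServicesFunction);
// Spring calls setEnvironment(...) only on container-managed beans; this
// instance was created manually, so namespaceProvider is never initialized
// and any lookup that needs to resolve the namespace fails:
delegate.getInstances("my-service"); // -> NamespaceResolutionFailedException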
The work-around is simple: just use the blocking discovery client until we fix this.
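Concretely, that means flipping the two properties back and injecting the blocking client instead:

spring.cloud.discovery.blocking.enabled=true
spring.cloud.discovery.reactive.enabled=false

@Autowired
private DiscoveryClient discoveryClient;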
The fix I am proposing is to not fix this here, but to wait for the PR linked above, where I rework the discovery client a bit and where this issue is already fixed.
I wanted to create this issue so that we can keep track of it when closing that PR, and maybe for future reference.
I am fine with fixing it in the PR you linked to above, but that will only address the issue in the next major release. We should fix it in the other releases as well.
If this were really a problem, someone would have reported it by now, I guess. The thing is, I can probably fix it (I have already tried), but it gets ugly fast, because there are public APIs we would need to break... I'll leave this one open, and if we get a report from someone that this is indeed a problem, I'll take another look at it. Especially since the workaround is pretty simple...
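For illustration only, one shape such a fix could take is to let the container hand the reactive client an already-initialized namespace provider instead of relying on the EnvironmentAware callback (a sketch with a hypothetical constructor parameter and setter, and exactly the kind of public API break mentioned above):

public KubernetesReactiveDiscoveryClient(KubernetesClient client, KubernetesDiscoveryProperties properties,
        KubernetesClientServicesFunction kubernetesClientServicesFunction,
        KubernetesNamespaceProvider namespaceProvider) {
    this.kubernetesDiscoveryClient = new KubernetesDiscoveryClient(client, properties,
            kubernetesClientServicesFunction);
    // hypothetical setter: propagate the provider so namespace resolution
    // works without the EnvironmentAware callback ever firing
    this.kubernetesDiscoveryClient.setNamespaceProvider(namespaceProvider);
}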
@ryanjbaxter this can now be closed