
Support for middle-manager-less Druid: tasks launch as K8s jobs

Open churromorales opened this issue 2 years ago • 10 comments

Description

Add an extension to allow tasks to be run as k8s jobs from the overlord, eliminating the need for a middle manager.

The core changes are as follows:

  1. Refactored arguments to CliPeon to be more generic
  2. Added setup and cleanup methods to AbstractTask.
    a. Because tasks run on separate pods, each task needs to set up its own filesystem directories.
    b. Again, because tasks run on separate pods, the task itself pushes the task logs and task reports in the cleanup method (see the sketch after this list).
  3. A few other small changes to core were required for tasks to run independently.
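
Roughly, those hooks do the following (a simplified, illustrative sketch, not the actual AbstractTask code):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Illustrative sketch of the new task lifecycle hooks described above.
// Only setup() and cleanup() correspond to the change; everything else is
// simplified scaffolding.
public abstract class AbstractTaskSketch
{
  // Called before run(): the pod starts with a fresh filesystem, so the task
  // creates its own working directories instead of relying on a middle
  // manager having prepared them.
  protected void setup(File taskBaseDir) throws IOException
  {
    Files.createDirectories(taskBaseDir.toPath().resolve("work"));
    Files.createDirectories(taskBaseDir.toPath().resolve("temp"));
  }

  // Called after run(): with no middle manager around, the task pushes its
  // own logs and task reports before the pod exits.
  protected void cleanup(File logFile, File reportFile) throws IOException
  {
    pushToDeepStorage(logFile);    // hypothetical helper
    pushToDeepStorage(reportFile); // hypothetical helper
  }

  protected abstract void pushToDeepStorage(File file) throws IOException;
}
```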

How it works

The KubernetesTaskRunner runs in the overlord process. When it receives a request to launch a task, it calls the K8s API and grabs the overlord's own PodSpec, modifies the necessary attributes (e.g. command, labels, env variables), compresses and base64-encodes the task.json, and then launches a K8s Job.

On startup, the K8s Job unwraps the task.json env variable, writes it to the appropriate directory, and runs the task.

The KubernetesTaskRunner monitors the lifecycle of the task, just as the ForkingTaskRunner does, and returns the TaskStatus.
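
Roughly, the launch path looks like the sketch below. This is illustrative only, assuming the fabric8 Kubernetes client (6.x); the namespace, job name, and env var name are placeholders rather than the exact K8sTaskAdapter code:

```java
import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.EnvVar;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodSpec;
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Base64;
import java.util.zip.GZIPOutputStream;

public class LaunchSketch
{
  // Gzip and base64-encode the serialized task so it can travel as an env var.
  static String encodeTaskJson(String taskJson) throws Exception
  {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
      gzip.write(taskJson.getBytes(StandardCharsets.UTF_8));
    }
    return Base64.getEncoder().encodeToString(bytes.toByteArray());
  }

  public static void main(String[] args) throws Exception
  {
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      // Grab the overlord's own pod spec to use as the template.
      Pod overlordPod = client.pods()
                              .inNamespace("druid")
                              .withName(System.getenv("HOSTNAME"))
                              .get();
      PodSpec peonSpec = overlordPod.getSpec();

      // The adapter also rewrites the command, labels and resources; this
      // sketch only injects the encoded task spec and fixes the restart policy.
      peonSpec.setRestartPolicy("Never");
      Container main = peonSpec.getContainers().get(0);
      if (main.getEnv() == null) {
        main.setEnv(new ArrayList<>());
      }
      main.getEnv().add(new EnvVar("TASK_JSON", encodeTaskJson("{\"type\":\"noop\"}"), null));

      Job job = new JobBuilder()
          .withNewMetadata().withName("example-peon-task").endMetadata()
          .withNewSpec()
            .withBackoffLimit(0)      // don't let k8s retry failed peons
            .withNewTemplate()
              .withSpec(peonSpec)     // the modified copy of the overlord's spec
            .endTemplate()
          .endSpec()
          .build();

      client.batch().v1().jobs().inNamespace("druid").createOrReplace(job);
    }
  }
}
```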

What if you are running Sidecars?

The config option druid.indexer.runner.sidecarSupport enables launching sidecars. I use kubexit (https://github.com/karlkfi/kubexit) to set up the spec so that when the main container completes, it terminates the sidecars. Sidecars keeping a Job from completing is a known issue with k8s, and this is how I work around it.

Another nice side-effect

Because the launching of tasks has been decoupled from the service itself, tasks run independently regardless of the state of the overlord process. You can shut down the overlord, and when it comes back it will query the K8s API for the status of all peon jobs regardless of phase (in flight, completed, failed, pending), do the proper bookkeeping for completed tasks, and resume monitoring tasks in flight.
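
Conceptually, recovery after a restart is just a labelled listing of the peon jobs, along the lines of this sketch (illustrative only, again assuming the fabric8 client; the label selector is a placeholder):

```java
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

import java.util.List;

public class RecoverySketch
{
  public static void main(String[] args)
  {
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      // The runner tags every peon job it creates so they can be found again
      // after a restart; the label used here is a placeholder.
      List<Job> peonJobs = client.batch().v1().jobs()
                                 .inNamespace("druid")
                                 .withLabel("druid.task", "true")
                                 .list()
                                 .getItems();

      for (Job job : peonJobs) {
        Integer succeeded = job.getStatus() == null ? null : job.getStatus().getSucceeded();
        Integer failed = job.getStatus() == null ? null : job.getStatus().getFailed();
        if (succeeded != null && succeeded > 0) {
          // completed successfully: record the result and delete the job
        } else if (failed != null && failed > 0) {
          // failed: record the failure and delete the job
        } else {
          // pending or in flight: resume monitoring it
        }
      }
    }
  }
}
```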

To run a middle-manager-less Druid, simply omit the middle manager from your deployment.

Make sure you also set druid.processing.intermediaryData.storage.type=deepStorage

In your overlord config:

  1. Add the druid-kubernetes-overlord-extensions to your extensions load list.
  2. druid.indexer.runner.type=k8s
  3. druid.indexer.runner.namespace=<currentNamespace>
  4. druid.indexer.queue.maxSize controls the maximum number of concurrent tasks; you must set it to a value less than its default of Integer.MAX_VALUE.
  5. druid.indexer.task.enableTaskLevelLogPush=true (optional but recommended).

This PR has:

  • [x] been self-reviewed.
    • [x] using the concurrency checklist (Remove this item if the PR doesn't have any relation to concurrency.)
  • [x] added documentation for new or modified features or behaviors.
  • [x] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • [x] added or updated version, license, or notice information in licenses.yaml
  • [x] added comments explaining the "why" and the intent of the code wherever it would not be obvious for an unfamiliar reader.
  • [x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • [ ] added integration tests. (These have been added, but the k8s integration tests only work on a Linux machine as they use conntrack with minikube, so I will have to let Travis run and figure things out from there.)
  • [x] been tested in a test Druid cluster.

churromorales avatar Sep 29 '22 21:09 churromorales

This pull request introduces 5 alerts when merging 190f8dd0c0132ee55c4e9581efac4a305403f37f into ce5f55e5ce00d876277424ea0724b70a330315b3 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Sep 29 '22 23:09 lgtm-com[bot]

This pull request introduces 5 alerts when merging afff50921b672cf90ceef9ce1c5e4c5610dd6362 into ce5f55e5ce00d876277424ea0724b70a330315b3 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Sep 30 '22 17:09 lgtm-com[bot]

This pull request introduces 5 alerts when merging 2f06ec11211608548546c28b3b4d0384f5d5b018 into ebfe1c0c90d86e4d188617fe840dafb2c9b7e5b0 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Sep 30 '22 21:09 lgtm-com[bot]

This pull request introduces 5 alerts when merging 2e239529c514cf539ac508ebbe62eeb882c9b7ba into 92d2633ae6808116c52d5b813403fd1f5b84309c - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 04 '22 01:10 lgtm-com[bot]

This looks great @churromorales, I've started looking through some of the Kubernetes-related code today; will hopefully finish up by tomorrow.

georgew5656 avatar Oct 06 '22 17:10 georgew5656

This pull request introduces 5 alerts when merging 6c13bf0a72fab4d6f66a502ba35920afcc0d6fbb into 45dfd679e92e668172c470c37615e7447b601af1 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 13 '22 21:10 lgtm-com[bot]

I noticed a slight issue when testing on heavily loaded k8s clusters, i.e. those that take a while to spin pods up and down.

Currently the way things work:

  1. The overlord launches a task and waits for the pod to come up; it not only waits for a pod IP to be available, but also opens a socket connection to the peon process's web server. Once it has a socket connection, the task is considered started.
  2. It then monitors the job until completion (a blocking call); after the job completes, it pushes the task reports and logs and then deletes the peon K8s job.

What was problematic here is that the process was complete and the task itself was finished, but the k8s operations were slow on my cluster. The task itself took less than a minute to complete, but the entire k8s lifecycle was taking much longer, around 10 minutes (a very heavily loaded cluster).

Because the task status only updates in the finally block of K8sTaskRunner.run(), the task lock is only released when the task finishes and the k8s job is deleted. That is technically correct, but in reality the task JVM process itself could have completed long before this. I have another commit that I will test on our clusters; a brief description of the patch is this:

  1. The overlord launches a task and waits for the pod to come up. There is no more opening a socket to the web server; instead I added two TaskActions: one to update the TaskStatus and one to update the TaskLocation (roughly sketched after this list).
  2. Now in AbstractTask, the setup method updates its own location before the run method is called. If the overlord goes down for whatever reason, that is fine; we don't really lose anything, the call fails and the task itself dies.
  3. In the cleanup method of AbstractTask we also update the TaskLocation to unknown and update the status to success or failure. So when the process exits, we can give up the lock and things won't be blocked. In case the pod goes away during cleanup, that is okay; the overlord will still monitor the job and report the correct results, albeit a bit slower.
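
The shape of the change is roughly the sketch below; the interface and method names are illustrative, not necessarily what will land in the commit:

```java
// Illustrative only: the two new task actions described above, in simplified
// form. The real Druid types and signatures will differ.
public class PeonAnnouncementSketch
{
  // Stand-in for the overlord-side task action client.
  public interface TaskActionClient
  {
    void updateLocation(String taskId, String host, int port, int tlsPort);
    void updateStatus(String taskId, boolean success);
  }

  // Called from AbstractTask.setup(), before run(): the peon announces where
  // it is actually listening instead of the runner probing the pod.
  static void announceLocation(TaskActionClient client, String taskId)
  {
    String podIp = System.getenv("POD_IP"); // assumed to be injected via the downward API
    client.updateLocation(taskId, podIp, 8100, -1); // 8100 is the usual first peon port
  }

  // Called from AbstractTask.cleanup(): reset the location to unknown and push
  // the final status so the overlord can release the task lock right away.
  static void reportCompletion(TaskActionClient client, String taskId, boolean success)
  {
    client.updateLocation(taskId, null, -1, -1);
    client.updateStatus(taskId, success);
  }
}
```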

I thought about this approach originally but didn't want to make too many changes to core; after running this patch on our clusters, though, I do think it is the best approach. I will call it out in a separate commit so you can review this work. It doesn't seem too invasive to me, and we should focus on correctness here and not hold the locks longer than necessary, as we were doing in the forking task runner.

I will test this out over the next day or so and then add the commit to this PR.

churromorales avatar Oct 13 '22 23:10 churromorales

Hey, this is an awesome PR, thanks for all the work. I am concerned about this part, though:

The KubernetesTaskRunner runs in the overlord process. When it receives a request to launch a task, it calls the K8s API and grabs the overlord's own PodSpec, modifies the necessary attributes (e.g. command, labels, env variables), compresses and base64-encodes the task.json, and then launches a K8s Job.

The resource use for overlords and tasks is vastly different, using the same podSpec for those would require reserving much more resources for the overlords and the tasks than necessary.

One idea is that instead of reading from the overlord pod itself, it would be better to provide the name of a config map containing the podSpec.

Another idea: the middle-managers have tiers, which we can use to assign tasks to middle-managers with different resource allocations. Those tiers could be used to define different podSpecs for different tasks without changing the tasks API.

Those changes can be done on a later PR. All in all, this PR is a huge step forward in making Druid more scalable for cloud environments.

Fryuni avatar Oct 14 '22 00:10 Fryuni

Thank you for your kind words. I hope this patch will be useful to folks upstream.

The resource use for overlords and tasks is vastly different, using the same podSpec for those would require reserving much more resources for the overlords and the tasks than necessary.

This doesn't use the resources of the overlord; we overwrite the resources in the podSpec. The memory is derived from your task's JAVA_OPTS, basically 1.2 * (Xmx + direct memory) for the memory limit, and the CPU is always 1. You get a core per task, just like we did before. We only grab certain items from the parent podSpec; you can see the logic in the K8sTaskAdapter.
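
To make the math concrete, here is an illustrative sketch of that derivation (not the actual K8sTaskAdapter parsing; the JAVA_OPTS value is just an example):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Memory limit = 1.2 * (heap + direct memory) taken from the task's JAVA_OPTS,
// CPU pinned to one core. The parsing here is deliberately simplified.
public class ResourceSketch
{
  static long parseBytes(String javaOpts, String flagPrefix)
  {
    Matcher m = Pattern.compile(Pattern.quote(flagPrefix) + "(\\d+)([gGmMkK]?)").matcher(javaOpts);
    if (!m.find()) {
      return 0L;
    }
    long value = Long.parseLong(m.group(1));
    switch (m.group(2).toLowerCase()) {
      case "g": return value * 1024L * 1024L * 1024L;
      case "m": return value * 1024L * 1024L;
      case "k": return value * 1024L;
      default:  return value;
    }
  }

  public static void main(String[] args)
  {
    String javaOpts = "-Xmx1g -XX:MaxDirectMemorySize=512m";
    long heap = parseBytes(javaOpts, "-Xmx");
    long direct = parseBytes(javaOpts, "-XX:MaxDirectMemorySize=");
    long memoryLimitBytes = (long) (1.2 * (heap + direct));
    System.out.println("memory limit bytes: " + memoryLimitBytes); // ~1.8 GiB for this example
    System.out.println("cpu: 1");
  }
}
```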

I am open to whatever changes you guys think are necessary. I have a few more ideas for subsequent PRs and I do like the configmap pod spec for users that want more control. That might be a configurable option we can provide in another PR.

churromorales avatar Oct 14 '22 17:10 churromorales

This doesn't use the resources of the overlord; we overwrite the resources in the podSpec. The memory is derived from your task's JAVA_OPTS, basically 1.2 * (Xmx + direct memory) for the memory limit, and the CPU is always 1.

That is super great news on the memory side, but not so great on the CPU side for us. We have 23 continuously running ingestion tasks and they only use a tenth of a core (100m) each, so fixing the resource request to one core is a 9x overprovision totaling 21 cores. A full core works for our compaction tasks, but those run for about 10 minutes every 4 hours.

We'll explore some things with Druid in the near future, so we might use this PR before it lands and give some feedback.

Fryuni avatar Oct 14 '22 23:10 Fryuni

This doesn't use the resources of the overlord; we overwrite the resources in the podSpec. The memory is derived from your task's JAVA_OPTS, basically 1.2 * (Xmx + direct memory) for the memory limit, and the CPU is always 1.

That is super great news on the memory side, but not so great on the CPU side for us. We have 23 continuously running ingestion tasks and they only use a tenth of a core (100m) each, so fixing the resource request to one core is a 9x overprovision totaling 21 cores. A full core works for our compaction tasks, but those run for about 10 minutes every 4 hours.

We'll explore some things with Druid in the near future, so we might use this PR before it lands and give some feedback.

I agree, we need some extra configuration; I think that can come in a subsequent PR, as the more folks use this, the more we will find that needs to be configurable. I like the idea of having the current model be the default, and then for users that want something more customized, perhaps we can have a configmap-based podSpec they could load as a base template. I think that could please most folks, while still providing an easy configuration for users that just want to have jobs run in k8s.

I just pushed up a commit; I had it run on a few clusters for 4-5 days and everything seems good. If you want to test this patch, I would recommend using the latest commit from this branch, as it dramatically reduces the time spent holding locks. Let me know if you have any other concerns about this patch. If you have already done a review, you can look at the last commit and see what changed. Thanks again for the reviews.

churromorales avatar Oct 17 '22 17:10 churromorales

This pull request introduces 5 alerts when merging e491abc2bee159adeff1d2b1d79b03f21b621b54 into fc262dfbaf434614a2ef550775b0c6b736856e28 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 20 '22 22:10 lgtm-com[bot]

This pull request introduces 5 alerts when merging 27d29fb77b1bf8203b1a91a8c05e52f240948a8d into 86e6e61e884230e7a82e726f5f5c33602b242caf - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 21 '22 22:10 lgtm-com[bot]

@gianm I was testing the MM-less patch against the MSQ work you did. I ran a test ingestion and the tasks just hang forever. After a bit of debugging, here is what is happening when I launch a controller with one worker:

I get this exception:

2022-10-25T20:32:25,413 ERROR [ServiceClientFactory-0] com.google.common.util.concurrent.ExecutionList - RuntimeException while executing runnable com.google.common.util.concurrent.Futures$4@7f14c4aa with executor com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService@138e191f
java.lang.NullPointerException: host
	at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:229) ~[guava-16.0.1.jar:?]
	at org.apache.druid.rpc.ServiceLocation.<init>(ServiceLocation.java:39) ~[druid-server-24.0.0-6.jar:24.0.0-6]
	at org.apache.druid.rpc.indexing.SpecificTaskServiceLocator$1.onSuccess(SpecificTaskServiceLocator.java:137) ~[druid-server-24.0.0-6.jar:24.0.0-6]
	at org.apache.druid.rpc.indexing.SpecificTaskServiceLocator$1.onSuccess(SpecificTaskServiceLocator.java:113) ~[druid-server-24.0.0-6.jar:24.0.0-6]
	at com.google.common.util.concurrent.Futures$4.run(Futures.java:1181) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:185) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.Futures$ChainingListenableFuture$1.run(Futures.java:872) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:102) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:868) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:185) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:53) ~[guava-16.0.1.jar:?]
	at org.apache.druid.rpc.ServiceClientImpl$1.onSuccess(ServiceClientImpl.java:194) ~[druid-server-24.0.0-6.jar:24.0.0-6]
	at org.apache.druid.rpc.ServiceClientImpl$1.onSuccess(ServiceClientImpl.java:168) ~[druid-server-24.0.0-6.jar:24.0.0-6]
	at com.google.common.util.concurrent.Futures$4.run(Futures.java:1181) ~[guava-16.0.1.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:829) ~[?:?]

For the middle-manager-less patch in k8s: when I launch a task, the setup and teardown functions I added mean that before every AbstractTask runs it announces its own location, and on teardown it updates the status, etc.

From what I see here, we do have a taskStatus (the task gets launched), but the location has not yet been announced; in k8s we don't know the location until the pod comes up and the service is available to take a request. The MSQ code doesn't wait for the location, it assumes it already knows it, but we need that wait for the MM-less patch.

TL;DR: it's a race; we try to get the location for the controller before it announces it. The TaskLocation is unknown until the task's runTask() method is invoked, and the precondition on a null host in the ServiceLocation constructor causes everything to die.

Do you have any advice on how we can make these two coexist? This is the only blocker I see; everything else works as it did before. I can't figure out a clean way, and since I don't fully understand the MSQ patch, I thought you might have a solution for this.

Here is some relevant information:

From the worker:

2022-10-25T21:20:20,454 DEBUG [task-runner-0-priority-0] org.apache.druid.indexing.common.task.AbstractTask - Task setup complete

This means the task location has been announced at this point.

From the controller:

2022-10-25T21:20:06,469 DEBUG [task-runner-0-priority-0] org.apache.druid.indexing.common.task.AbstractTask - Task setup complete

The controller's task location has been announced here.

Now in the controller we get this exception:

2022-10-25T21:20:10,127 ERROR [ServiceClientFactory-0] com.google.common.util.concurrent.ExecutionList - RuntimeException while executing runnable com.google.common.util.concurrent.Futures$4@407b1b94 with executor com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService@47a1188b
java.lang.NullPointerException: host

Does this mean we assume that the worker and controller start at the same time? Is the controller looking for the worker here? If so, the failure would make sense, because in a k8s world you can't guarantee things get launched immediately; that is why we had the tasks themselves announce the location when the process actually starts running the task. Also, when I tested the Kafka indexing I believe it worked because the supervisor is launched first and then launches sub-tasks, which preserves the order of task dependencies.

Thank you

churromorales avatar Oct 25 '22 21:10 churromorales

@abhishekagarwal87

may not be an issue but is there a limit in k8s on how large the env variable value can be? The task JSON could be large.

I just checked the defaults for k3s, microk8s, GKE, and EKS:

  • A configMap value is limited to 1MiB
  • Similarly, an inline env value is limited to 1MiB
  • The entire final command passed to the container (command + arguments + ENVs) is limited to the ARG_MAX configuration of the host node, which defaults to 2MiB on Linux.

None of our tasks reach close to a mebibyte, but if that could be a problem, then the best would be retrieving the task from the coordinator.

Mounting a config map instead of passing as an env var is cleaner IMHO, but would only make a difference regarding limits if there were multiple envs smaller than 1MiB that combined go over 2MiB.
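
To put numbers on it, here is a quick, illustrative way to check how close a spec gets to that 1 MiB per-value limit after gzip + base64 (the file path is just an example):

```java
import java.io.ByteArrayOutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.zip.GZIPOutputStream;

public class TaskSizeCheck
{
  public static void main(String[] args) throws Exception
  {
    // Point this at a real task spec to measure yours.
    byte[] raw = Files.readAllBytes(Paths.get("task.json"));

    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
      gzip.write(raw);
    }
    int encodedLength = Base64.getEncoder().encodeToString(compressed.toByteArray()).length();

    // ~1 MiB is the per-value limit discussed above; ARG_MAX (~2 MiB) caps the whole env.
    System.out.printf("raw=%d bytes, gzip+base64=%d bytes (limit ~%d)%n",
                      raw.length, encodedLength, 1024 * 1024);
  }
}
```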

Fryuni avatar Oct 26 '22 21:10 Fryuni

@abhishekagarwal87

may not be an issue but is there a limit in k8s on how large the env variable value can be? The task JSON could be large.

I just checked the defaults for k3s, microk8s, GKE, and EKS:

  • A configMap value is limited to 1MiB
  • Similarly, an inline env value is limited to 1MiB
  • The entire final command passed to the container (command + arguments + ENVs) is limited to the ARG_MAX configuration of the host node, which defaults to 2MiB on Linux.

None of our tasks reach close to a mebibyte, but if that could be a problem, then the best would be retrieving the task from the coordinator.

Mounting a config map instead of passing as an env var is cleaner IMHO, but would only make a difference regarding limits if there were multiple envs smaller than 1MiB that combined go over 2MiB.

I didn't want to waste quotas for no reason: in k8s you can have a quota for configmaps, and I was attempting not to eat up quota items for this work. A config map is cleaner, but an env var doesn't eat up quota. Also, we compress and base64-encode the task.json, so I doubt it would ever get large enough to cause a problem. We have some pretty large task.json files running with this patch and they have not come close to hitting the limits.

churromorales avatar Oct 26 '22 21:10 churromorales

We have some pretty large task.json files running with this patch and they have not come close to hitting the limits.

Yeah, our largest task specs are only about 4KiB. I don't see any task getting close to those limits.

If and when someone has a use case for a spec that is at the scale of 1MiB, there are alternatives.

Fryuni avatar Oct 26 '22 21:10 Fryuni

This pull request introduces 5 alerts when merging 5a0c734d9972c7f9319a53044ec1a43d29d88871 into affc522b9f8227a42631e55bc9ad833ee57fc3a0 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 27 '22 20:10 lgtm-com[bot]

This pull request introduces 5 alerts when merging a23bba92d722562a7342b8283978a187679a7a1f into affc522b9f8227a42631e55bc9ad833ee57fc3a0 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 27 '22 23:10 lgtm-com[bot]

@gianm I was testing the MM-less patch against the MSQ work you did. I ran a test ingestion and the tasks just hang forever. After a bit of debugging, here is what is happening when I launch a controller with one worker:

Thanks for the details! Looking into it, I think the issue is that SpecificTaskServiceLocator is able to handle tasks with an unknown location, but it does this via a logic branch involving TaskLocation.unknown().equals(statusPlus.getLocation()). Probably this branch doesn't get used because the location has a null host, yet port and tlsPort are set. So it does not equal TaskLocation.unknown(), and therefore we end up treating it as valid in the service locator.

Couple ways to fix it:

  • Avoid creating TaskLocations with a null host and port or tlsPort values other than -1 (see the sketch after this list). This may be happening in K8sWorkItem if mainPod.getStatus().getPodIP() is null. Do you think this is possible? To keep things coherent I suggest enforcing it in the TaskLocation constructor, which could throw an exception if host is null and port and tlsPort are anything other than -1. (This would ensure that a null host always compares equal to TaskLocation.unknown().)
  • Or, an alternate approach: in SpecificTaskServiceLocator, use statusPlus.getLocation().getHost() == null instead of comparing to TaskLocation.unknown(). (With this approach, note that there are a bunch of other places that use equality checks on TaskLocation.unknown(). They should be updated too.)
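
For the first option, the constructor check could look something like this sketch (simplified, not the exact Druid class):

```java
// Sketch of option 1: enforce in the constructor that a null host always
// looks like TaskLocation.unknown().
public class TaskLocation
{
  private final String host;
  private final int port;
  private final int tlsPort;

  public TaskLocation(String host, int port, int tlsPort)
  {
    if (host == null && (port != -1 || tlsPort != -1)) {
      throw new IllegalArgumentException(
          "host is null but port/tlsPort are set; use TaskLocation.unknown() instead"
      );
    }
    this.host = host;
    this.port = port;
    this.tlsPort = tlsPort;
  }

  public static TaskLocation unknown()
  {
    return new TaskLocation(null, -1, -1);
  }
}
```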

I'm happy if we do one of these fixes as a follow-up to your patch. You don't need to do it as part of this initial PR.

gianm avatar Oct 28 '22 07:10 gianm

This pull request introduces 5 alerts when merging b12de57c02930116aa72fccb86e5e8f7aa94e373 into 4f0145fb8506257f6531cce04b5dee8354a1a1a1 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 28 '22 20:10 lgtm-com[bot]

• Avoid creating TaskLocations with a null host and port or tlsPort values other than -1. This may be happening in K8sWorkItem if mainPod.getStatus().getPodIP() is null. Do you think this is possible? To keep things coherent I suggest enforcing it in the TaskLocation constructor, which could throw an exception if host is null and port and tlsPort are anything other than -1. (This would ensure that a null host always compares equal to TaskLocation.unknown().)

@gianm that's absolutely the problem, I believe; thank you for pointing me in the right direction, I couldn't tell from the trace where the issue was. I'll fix that in a follow-up PR so we can get these two features working together. Thank you again for getting back to me about this.

churromorales avatar Oct 31 '22 22:10 churromorales

This pull request introduces 5 alerts when merging fb6609e189e8f041219821ef349e401b6ba28cdb into 675fd982fb5ca274b057495a90563ecc248ad823 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Oct 31 '22 23:10 lgtm-com[bot]

This pull request introduces 5 alerts when merging a4b4cdb1b021a097627963b74cccbf7adb431d91 into fd7864ae33ba8afe56d660c2b6d84ec5b498d321 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Nov 01 '22 18:11 lgtm-com[bot]

This pull request introduces 5 alerts when merging ab612d83fb190f98a106e4ca70059f95f470d986 into 176934e849d50a2360f50db7771c1775110308f3 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Nov 01 '22 21:11 lgtm-com[bot]

This pull request introduces 5 alerts when merging 9ebadf429b3aef83feaac7eccfe42f3c7fd97408 into 176934e849d50a2360f50db7771c1775110308f3 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Nov 02 '22 17:11 lgtm-com[bot]

This pull request introduces 5 alerts when merging d40db8f8226b4d0da7325d55d7ad8f3367de64a4 into 176934e849d50a2360f50db7771c1775110308f3 - view on LGTM.com

new alerts:

  • 3 for Uncontrolled data used in path expression
  • 2 for Unused format argument

lgtm-com[bot] avatar Nov 02 '22 22:11 lgtm-com[bot]

Thanks for the contribution @churromorales ! Looking forward to the followup PRs to handle the outstanding items identified from this PR.

a2l007 avatar Nov 03 '22 02:11 a2l007

Thank you for the contribution @churromorales. This will be a major feature in the upcoming 25.0.0 release.

abhishekagarwal87 avatar Nov 03 '22 06:11 abhishekagarwal87