Add dynamic process deployment API endpoint
Existing Functionality
At present, deployment of a new process requires:
- committing/pushing the process BPMN and JSON to a git repo
- ensuring the commit makes it through a CI/CD pipeline
- creating a new Activiti cluster deployment for the committed BPMN/JSON
Unsupported Use Case
We wish to allow clients to run their arbitrary processes at will, within our own Activiti cluster.
For example, suppose we have a customer who wants to run a process in our cluster. They use our modeller to generate a constrained subset of BPMN XML, and then ask our system to start it. Our system passes the BPMN off to our Activiti cluster for execution.
This case provides good reasons for maintaining a fixed number of Activiti clusters that are able to dynamically deploy and start processes:
- Our clients' use cases are time-sensitive, so we would like to avoid making them wait for the CI/CD pipeline to build and for the cluster to spin up and become available
- Since there might be a large number of processes starting and completing at a rapid rate, we would like to reduce the overhead incurred by each process
Additional Considerations
Since we use a custom modeller (a modification of bpmn.io), we don't generate JSON files for variable data like with Activiti's modeller. We just create a single XML file.
It would be preferable for us (and anyone else who generates BPMN from non-Activiti modellers) if we could submit raw BPMN XML to the engine for deployment and starting, rather than having to create an Activiti-style application zip file, complete with its directory structure and internal file layout.
Failing that, a specification (and backwards-compatible support of that spec) for the required format would be crucial for using the solution to this unsupported use case. Such a specification may be desirable in any case, in order to make use of functionality offered by the JSON files in the application zip (e.g. the work in this PR).
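For illustration, the kind of packaging step we would like to avoid can be sketched in plain Java. The entry name used here is a placeholder; the real required internal layout is exactly what the format specification requested above would define:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class AppZipSketch {

    // Packages a single BPMN XML string into an application-style zip.
    // The entry name is a placeholder: the actual required layout is what
    // the spec requested above would need to pin down.
    public static byte[] packageBpmn(String entryName, String bpmnXml) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
            zip.putNextEntry(new ZipEntry(entryName));
            zip.write(bpmnXml.getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    // Lists the entry names in the zip, i.e. what the engine would see on deploy.
    public static List<String> listEntries(byte[] zipBytes) {
        List<String> names = new ArrayList<>();
        try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            for (ZipEntry e; (e = zip.getNextEntry()) != null; ) {
                names.add(e.getName());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return names;
    }

    public static void main(String[] args) {
        byte[] zip = packageBpmn("processes/my-process.bpmn20.xml", "<definitions/>");
        System.out.println(listEntries(zip));
    }
}
```

Being able to POST the raw XML directly would remove this entire packaging step from the client's side.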
Please let me know if I can answer any questions or clarify anything above, @salaboy and @igdianov
@sheac great, thanks for reporting this; it is worth a discussion and might require some extension points to be added. We are currently rushing to ship the first GA release that we want to use as the initial base layer for this new cloud native approach. As soon as we wrap up that work I will add a more complete and detailed answer to this feature request. I am hoping to be able to do this before the end of this week.
Thanks for the response, @salaboy!
For the time being, best of luck with the GA release.
@sheac by the way.. we merged this one: https://github.com/Activiti/Activiti/issues/2213 it is the initial work around variables mapping.. but it should solve some of the problems that you were facing with connectors.
Thanks, @salaboy.
Let me know if I can help answer any questions about this feature request.
@sheac thanks for the patience.
Here is a somewhat detailed answer to your questions and description about how things are currently working. We understand that this is a big change compared with Activiti 5.x/6.x and even other BPM frameworks, but we believe that these changes are fundamental to succeed in our Cloud Native journey.
So, let's clarify this first:
At present, deployment of a new process requires:
- committing/pushing the process BPMN and JSON to a git repo
- ensuring the commit makes it through a CI/CD pipeline
- creating a new Activiti cluster deployment for the committed BPMN/JSON
None of this is strictly true for all cases: we do not force you to use Git to store your files, we don't force you to use Docker, and we don't force you to deploy to a cluster. We definitely recommend those things, but you are free not to do them. You can run a Spring Boot application, stop it, change the BPMN file, run all the tests for the changes, and start it again.
The Runtime Bundle must be immutable in order to guarantee that, after testing, the behaviour of the Runtime Bundle as a unit doesn't change. This is extremely important for providing guarantees and checks that are impossible to provide if the content of the unit is mutable.
Now.. jumping to your "Unsupported Use Case" section and based on:
- creating a new Activiti cluster deployment for the committed BPMN/JSON
You don't really need to do a completely new deployment, just an upgrade of the Runtime Bundle that contains the change; in general that shouldn't take long. We also believe that you need this extra step to make sure that you test the new behaviour before deploying it and having it live.
It is also important to note that our Modeler currently provides more functionality while keeping the BPMN XML file as standard as possible. This will expand in the future, and we recommend using this extra information generated in JSON along with the concepts of Runtime Bundle, Cloud Connector, Query, Audit and the Applications Service (work in progress).
Proposed Solution:
To keep the Runtime Bundle immutable, we cannot provide this functionality on the Runtime Bundle Starter endpoints, but we can:
- create an extension mechanism: a Spring Boot starter that adds the functionality you need, based on your contributions
- keep these extensions as a separate, community-led effort, so we can run all the tests and make sure they don't get outdated as we move the Runtime Bundle Starter forward
- review together your "customer" requirements around time, to see how different tools can be used to speed up the immutable approach proposed by the Runtime Bundle
Remember that we will work hard to make this approach work, not only with the Runtime Bundle but with the services around it. We know that the industry is building tools to make these processes smooth for cloud deployments, so we are very active in integrating with tools in that space. We need to make sure that we do not reinvent the wheel on something that is not even related to BPM/Activiti.
Having said that, take this as our position right now; we are more than open to discussing alternatives if you have them. We are also open to giving more reasons why we advise against these changes of behaviour at runtime.
@igdianov @ryandawsonuk @erdemedeiros ^ please feel free to add your views about this as well
@salaboy This makes a lot of sense to me. This approach will make sure that any approved extension will go through CI/CD pipelines and release process to maintain managed dependencies, so that folks can add them into their runtime-bundle starters and build deployments to use them in their environments.
I also like that these extensions will benefit from community contributions and reviews.
@salaboy sorry for the delay in responding.
We're meeting with @igdianov this week to understand the details of what this would look like for us. After that meeting, we'll respond here.
@sheac please share the details after you discuss with @igdianov so we can think about the next steps.
Thanks for your patience while we continue to consider your response above and research our options, @salaboy.
/CC @igdianov
@salaboy and @igdianov
After a good deal of discussion and consideration, we decided we'd like to try the route of adding a Spring Boot starter extension to the Runtime Bundle:
We can create an extension mechanism, a spring boot starter that adds the functionality that you need based on your contributions.
Can you tell us about the timeline for the creation of such an extension, and what you expect in terms of "your contributions"?
Thank you.
@sheac more than timelines for the creation of the extensions, it is about how you want to contribute that back. From our side it is more about creating a repository and the pipelines to build those extensions, to make sure that they are kept in sync. If you already have a Spring Boot starter that provides the extension code, we can start working out where to host it and how to build it. Hopefully that makes sense.
@salaboy that makes sense.
After writing my above message, I realized that a workable solution for this needs (at least) two components:
- (already discussed) an HTTP API endpoint that accepts BPMN and registers them as process definitions
- (new) a way to allow updating of Runtime Bundle components (e.g. Process Runtime) without losing the Process Definitions.
(We certainly understand why immutable Runtime Bundles make sense for the Activiti deployments (lowercase 'd', not Kube Deployments) that you mostly consider. For us, for the time being, mutability is what we need to pursue. The purpose of mentioning this is to let you know that we have seriously considered the benefits of immutability, and that we're not just ignoring prescribed best practices out of laziness.)
In our situation, there will be new Process Definitions arriving in the Runtime Bundle that are not a part of any Docker image. Thus we cannot simply create a new chart for that resource and deploy an upgrade. We would lose state (I know how that must sound 😛).
What I'm considering as a solution is attaching a volume to the Runtime Bundle resource and ensuring the Process Definitions are stored there. That way, when we redeploy a new Runtime Bundle resource with new images, they still find the same Process Definitions that were dynamically installed on the volume attached to the old Runtime Bundle resource.
Someone who understands the operational justification for immutability might even feel slightly relieved at this suggestion. While it still prevents CI/CD unit testing of the Process-Definitions-plus-Process-Runtime package, we've revived a certain amount of the fungibility and statelessness that makes immutability so valuable. (For our case, unit testing in a CI/CD pipeline is actually not advantageous, so we don't lose much.)
My questions are:
- Is this consideration about upgrading components of the Runtime Bundle (e.g. Process Runtime) valid?
- Is my solution a suitable one?
- Would the two parts (dynamic uploading + storing Process Definitions in an attached volume) be part of a single extension, or would they be part of two closely-related extensions?
Thank you!
@salaboy after further investigation, I believe a (Kube) volume of any type is not suitable.
The goal is to be able to add or remove pods at any time for scaling or upgrade purposes. But since a volume can only be attached (with read-write access) to one pod at a time, volumes will not allow us to add additional pods that can access the files on the volumes used by existing pods.
Instead, it looks as if the only option is for Process Runtimes to maintain their deployments in an external datastore like Redis.
Implicit in this plan is our goal that any newly added Process Runtime in the cluster will be able to begin working on shared Process Definitions as soon as it is up and running. That is to say, the Process Runtimes in the cluster all share Process Definitions and can work on them together.
Is this many-to-many mapping from Process Runtime to Process Definition sensible in Activiti, or does a Runtime Bundle depend on a one-to-many mapping from Process Runtime to Process Definition?
/CC @igdianov
@sheac a kubernetes configmap could be accessed from multiple pods. It would allow you to mount files into a directory on the pod to be read by the runtime bundle. So you could load from the filesystem much like the runtime bundle already does from src/main/resources/processes/. We actually did something similar in the past with external resource loading to load the definitions from a /processes/ directory on disk rather than in the jar.
To change a configmap you do have to go through the kubernetes API. But you could either do this through ‘kubectl apply’ or by writing code to call the API, depending on your use case.
FWIW I think there would also be other ways to mount a volume into more than one pod as it sounds to me like you’re thinking of specifically emptyDir volumes. But configmap seems like it could be a fit for you as it is not specific to a cloud provider, is versioned through the API and is intended for mounting config files into pods.
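A minimal sketch of the ConfigMap approach described above, assuming a runtime bundle image that loads definitions from a mounted directory (all names, paths, and the image reference here are illustrative, not from the Activiti charts):

```yaml
# Hypothetical ConfigMap holding a dynamically deployed process definition
apiVersion: v1
kind: ConfigMap
metadata:
  name: process-definitions
data:
  my-process.bpmn20.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
      <!-- constrained BPMN generated by the custom modeller -->
    </definitions>
---
# Mounting it into every runtime bundle pod under /processes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: runtime-bundle
spec:
  replicas: 2
  selector:
    matchLabels:
      app: runtime-bundle
  template:
    metadata:
      labels:
        app: runtime-bundle
    spec:
      containers:
        - name: runtime-bundle
          image: example/runtime-bundle:latest  # illustrative image name
          volumeMounts:
            - name: definitions
              mountPath: /processes
              readOnly: true
      volumes:
        - name: definitions
          configMap:
            name: process-definitions
```

Because a ConfigMap is read-only from the pods' point of view, every replica can mount it simultaneously, which sidesteps the single-writer volume limitation discussed above.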
Re: “when we redeploy a new Runtime Bundle resource with new images, they still find the same Process Definitions that were dynamically installed on the volume attached to the old Runtime Bundle resource” - when Activiti reads process definitions at startup it deploys them to the database. This is engine behaviour and is the same as v5. So as long as you have the database where the definition was deployed then you have the definition. So now that’s a runtime bundle database but it’s still just an Activiti engine database.
Runtime bundles can actually be set to share a database. If you do that, then you have a setup similar to running Activiti v5 or v6 in high availability or clustered mode. The key difference for you then is just that the v7 runtime bundle REST API doesn't have a POST method to deploy new process definitions, whereas previous Activiti REST APIs did have this.
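In Spring Boot terms, the shared-database setup is just every instance pointing its datasource at the same database. A sketch (the URL and credentials are placeholders, and the schema-update property should be checked against the Activiti starter version in use):

```properties
# application.properties for each runtime bundle instance (values are illustrative)
spring.datasource.url=jdbc:postgresql://shared-db:5432/activiti
spring.datasource.username=activiti
spring.datasource.password=changeme
# let the engine create/validate its schema in the shared database
spring.activiti.database-schema-update=true
```

With this in place, a definition deployed by one instance is visible to all the others, since the engine stores deployed definitions in the database.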
I’m not meaning to suggest one option over another here. I'm just trying to help you help you decide by clarifying how it works and what it is possible to do.
@ryandawsonuk thanks for your helpful comments!
For our particular use case, I believe that the following setup would be best:
1. The Runtime Bundles all share the same database, so they can share a set of Process Definitions and Instances. This will allow us to ensure high availability (fault tolerance and rolling upgrades) and to scale our cluster of workers to accommodate load.
2. The Runtime Bundles all use the Spring Boot starter that we are discussing in this thread, which allows any of them to serve an HTTP POST method to deploy new Process Definitions.
3. The Runtime Bundles are all behind something like a Kube Service that acts as a load balancer.
My questions are:
- Have I described a feasible design? Based on my use cases described at the top of this feature request, are there any obvious problems?
- How can I accomplish point 1 in the setup description? There are plenty of examples of Activiti Helm charts, but I'm unable to determine on which lines in which files I'd be making changes from those examples.
- Do we get the load-balancer-like Kube Service behaviour for free, by copying existing Activiti Helm chart examples?
Thanks again!
@sheac good thread! I am just catching up after my holidays.. so sorry for the delayed response.
There are several things that I would like to point out based on your comments and proposed design; some of them align quite well with the Activiti Cloud direction and some go in the opposite direction. As long as this is clear to you, you can make your own choices; we will try to suggest rather than enforce these design/architecture approaches:
- Regarding "The Runtime Bundles all share the same database": this can be achieved today; all the replicas (of the same runtime bundle) will share the same database, so there is nothing to do there for fault tolerance or high availability. One thing that you need to notice about "rolling upgrades" is that if you share the same database across multiple versions of the runtime bundle, you lose rollbacks to previous versions. We can have a separate thread for that.
- Regarding "HTTP POST method to deploy new Process Definitions": yes, that is your extension. Again, this will impact several other aspects, such as how things get versioned and the possibility of running versions in isolation, but you can do it. We recommend against it, but if you need to, you can. You also need to be aware of new descriptor files that need to be submitted as well for a cloud deployment; we currently have some JSON files for variable definitions/mappings and connectors.
- Regarding "something like a Kube Service that acts as a load balancer": yes, that is already the case; nothing needs to be done for that.
Regarding your questions:
- Yes, there are problems, but you can tackle them later, once we provide some tools to deal with versioning that allow you to do rolling upgrades and downgrades.
- You just have your extended runtime bundle with multiple replicas; that will create different pods, all pointing to the same database.
- You will get that for free; that is the default way of working for all our services.
Hopefully this helps.. sorry again for the delay in the response.
@salaboy thanks for your responses!
Right now I'm working on understanding what the extension would look like. This is my first time working with Spring, so when the Activiti team uses terms like "starter", it's difficult for me to make the conceptual leap from what's demonstrated in the online tutorials to what's required in this particular scenario.
I'll put something together soon, but I suspect you'll need to provide some guidance in order to massage it into a form that's suitable for sharing with the wider community.
@sheac what we are asking is exactly that: "a Spring Boot starter is essentially a bundle of dependencies and configuration". It is a Maven module (jar) that you can attach to an existing Spring Boot application. If you look at the shape of the repos activiti-cloud-audit-service, activiti-cloud-query-service and activiti-cloud-runtime-bundle-service, you will notice that they are not Spring Boot apps, just starters (the starter modules inside those repos). Hopefully that helps.
Here's my first crack at the extension: https://github.com/parsable/rb-definition-deployer-extension
As far as I can tell there are a couple places I'll need to update based on your feedback:
1. module naming and namespacing conventions
2. making it more difficult to use the starter incorrectly
What I mean by 2 is that, the way things are written now, the user needs to know how to decorate the deployBpmn() method on their @RestController-annotated class. I'm not experienced enough with annotations to know how this could be made more user-friendly.
I look forward to your feedback.
/CC @salaboy @igdianov @ryandawsonuk
@sheac awesome stuff! let me take a look and comment more in detail
@sheac ok.. so yeah.. about 2 and the @RestController: you need to provide the endpoints themselves, so that when your starter is on the classpath the endpoints can be loaded. So, for example:
https://github.com/parsable/rb-definition-deployer-extension/blob/master/definition-deployer-starter/src/main/java/org/activiti/cloud/definitiondeployer/behavior/DefinitionDeployer.java#L20
That, instead of being a Deployer, can be a ProcessDefinitionDeployerController: a @RestController which exposes a single @PostMapping endpoint to deploy new process definitions. What we have done for the other starters is to create separate modules for the core logic and for the controllers' APIs, but for the first iteration of your extension (which looks to be very simple) I would keep it in a single module for now. Look at the RB services project for controllers:
https://github.com/Activiti/activiti-cloud-runtime-bundle-service/blob/develop/activiti-cloud-services-runtime-bundle/activiti-cloud-services-rest-api/src/main/java/org/activiti/cloud/services/rest/api/ProcessDefinitionController.java
Your extension should only have that, plus the configuration.
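For example, a concrete controller shipped by the starter itself might look roughly like this (the endpoint path, the DefinitionDeployer API and the wiring are assumptions based on this discussion, not code from the actual repo):

```java
import java.io.ByteArrayInputStream;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller provided by the starter: when the starter is on the
// classpath (and registered via configuration), this endpoint becomes available
// in the host application without the host writing any controller code.
@RestController
public class ProcessDefinitionDeployerController {

    private final DefinitionDeployer definitionDeployer;  // the extension's deployer logic

    public ProcessDefinitionDeployerController(DefinitionDeployer definitionDeployer) {
        this.definitionDeployer = definitionDeployer;
    }

    @PostMapping(value = "/v1/deployments/{deploymentName}/definitions/{resourceName}",
                 consumes = "application/xml")
    public String deployDefinition(@PathVariable String deploymentName,
                                   @PathVariable String resourceName,
                                   @RequestBody String bpmnModelXml) {
        return definitionDeployer.deployProcessDefinition(
                deploymentName, resourceName,
                new ByteArrayInputStream(bpmnModelXml.getBytes()));
    }
}
```

The key difference from an interface-only approach is that the starter ships the concrete, annotated class, so users of the starter don't have to write any endpoint code themselves.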
Hopefully that makes sense.
Thanks very much, @salaboy. I agree that the way things are right now with line 20 of DefinitionDeployer.java is not acceptable.
Unfortunately, I'm still not clear on what the solution looks like. I'm sorry that my understanding of Spring Boot isn't up to snuff. I'm more than happy to read any tutorials or documentation that apply directly to my questions below: I don't see it as your job to be my Spring Boot tutor :)
The file you linked me to shows an interface. Can you tell me how that would be utilized by child applications that include the starter?
For example, would child applications need to implement the interface by creating a method that calls the behaviour I created? That is, would the user of the starter write:
import java.io.ByteArrayInputStream;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyDefinitionDeployerController implements DefinitionDeployerController {

    @Autowired
    private DefinitionDeployer definitionDeployer;

    @Override
    @PostMapping(value = DefinitionDeployerEndpoint.POST_MAPPING_VALUE,
                 consumes = DefinitionDeployerEndpoint.POST_MAPPING_CONSUMES)
    public String deployDefinition(@PathVariable String deploymentName,
                                   @PathVariable String resourceName,
                                   @RequestBody String bpmnModelXml) {
        return definitionDeployer.deployProcessDefinition(
                deploymentName, resourceName,
                new ByteArrayInputStream(bpmnModelXml.getBytes()));
    }
}
Where:
- DefinitionDeployerController is an interface I would hypothetically create, in the vein of the one you linked me to (https://github.com/Activiti/activiti-cloud-runtime-bundle-service/blob/develop/activiti-cloud-services-runtime-bundle/activiti-cloud-services-rest-api/src/main/java/org/activiti/cloud/services/rest/api/ProcessDefinitionController.java)
- deployProcessDefinition() is a method on the interface that already has all the relevant annotations on it (@PostMapping(...))
- DefinitionDeployer is a hypothetical class that I probably should have written, which has the RepositoryService autowired into it.
If this isn't how it's done, I fail to see how the interface example you showed me (to define a controller and its endpoints) gets connected up with the implementation logic I wrote to actually do the definition deployment.
Thanks again for working with me on this. Please let me know if you have specific Google search terms, or articles that bear directly on this specific topic.
@sheac don't worry about the questions.. we are all here learning.. My gut feeling is that you are getting confused by the "starter" term, which is basically a subset of code that can be loaded as part of a Spring Boot app. When you create a Spring Boot app you will usually:
1. create your core logic (Spring @Service/@Component), which in your case uses the Spring beans of Activiti Core
2. create your REST endpoints with @RestController and the @XXXMapping annotations, as I showed you in the interface, to expose the core logic via REST
3. have a main class with the @SpringBootApplication annotation
4. add configuration to wire up all the beans together (most of the time just configuration specific to your services/controllers)
5. write tests
Usually, for (1) and (2), people don't have an interface and just put the annotations on a concrete class. You can start by just having the concrete class there, with the controller and the logic inside it interacting with the internal Activiti Core beans. But in your case, because you are writing a "starter", you omit (3), meaning that someone else will need to provide that main class. (4) is optional, depending on how much configuration you want to provide for your services. About (5): in our starter projects you will find tests that add an Application class with the @SpringBootApplication annotation, to simulate an external app using our starter. In such tests the app context is started.
You can take a look at the Spring Boot documentation, which explains how and why people might want to create "starters", and how the concept is associated with third-party library providers building extensions to Spring Boot. They usually mention the term when talking about configurations, because configurations are the main entry point for extensions to Spring Boot apps: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-developing-auto-configuration.html
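Concretely, that "main entry point" is typically a registration file shipped inside the starter jar; for Spring Boot 2.x it would look something like this (the package and class name below are placeholders for whatever the extension's configuration class ends up being called):

```properties
# src/main/resources/META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  org.example.definitiondeployer.DefinitionDeployerAutoConfiguration
```

When a host application has the starter on its classpath, Spring Boot reads this file and loads the listed configuration class automatically, with no changes to the host's main class.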
Hopefully it makes more sense now.. this is one of those things that sounds really complicated, but when you see the final solution you will be surprised at how simple it looks.
@salaboy that does make sense, thanks. I think I may have gotten it right with this commit: https://github.com/parsable/rb-definition-deployer-extension/commit/f601df6c8d2a0bd2dc97029e227529faf2078da9
Just bumping my previous comment, @salaboy and @igdianov
@sheac It is a good starter but needs a few tweaks.
1. The DefinitionDeployerController needs to be added to DefinitionDeployerAutoConfiguration; otherwise it will not be added into the Spring context by auto-configuration. It is also a good idea to add @Conditional properties for disabling the auto-configuration.
2. The starter module should include activiti-cloud-starter-runtime-bundle with <optional>true</optional>, so it will not bring in any transitive dependencies when included in an RB application: https://github.com/parsable/rb-definition-deployer-extension/blob/f601df6c8d2a0bd2dc97029e227529faf2078da9/definition-deployer-starter/pom.xml#L58
3. The IT test should actually test deploying BPMN via the REST controller.
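A sketch of what point 1 could look like (the property name and the bean wiring here are assumptions for illustration, not code taken from the actual repository):

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical auto-configuration: registers the controller as a bean so it is
// picked up when the starter is on the classpath, and can be switched off via
// an assumed "definition-deployer.enabled" property.
@Configuration
@ConditionalOnProperty(name = "definition-deployer.enabled",
                       havingValue = "true", matchIfMissing = true)
public class DefinitionDeployerAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean
    public DefinitionDeployer definitionDeployer() {
        return new DefinitionDeployer();
    }

    @Bean
    @ConditionalOnMissingBean
    public DefinitionDeployerController definitionDeployerController(DefinitionDeployer deployer) {
        return new DefinitionDeployerController(deployer);
    }
}
```

The @ConditionalOnMissingBean annotations let a host application override either bean with its own definition, which is the conventional courtesy for starters.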
I will clone and run this on my machine to check for other things that may pop up.
Thanks for the feedback, @igdianov. I created a commit to address points 1 and 2. I'm still working on addressing point 3.