Test data setup for test suite
I would like to know if there is a common way of setting up the input data required by a test case to test the package/procedure I have written.
For example:
- If my procedure functionality is to check the balance of a bank account.
- Bank account number is input to the procedure.
For this functionality to work:
I need this bank account number to already be present in the database. So, prior to calling my procedure, I need to call a few other APIs/packages/procedures to create the account, the corresponding data, etc. This is not quite simple, as it involves calling multiple functionalities to set up the account, and it is a time-consuming process that impacts the test suite execution times.
So I have come up with an idea:
1. Create the account setup manually the first time I develop my test case, probably reusing existing data.
2. I know all the tables this data goes into, and I have a list of those tables.
3. For each table, I have created an XML file named <table_name>.xml in a Unix directory. This directory is the directory_path of the Oracle directory object, which is created automatically by a script the first time the test case is developed.
4. From then on, whenever I run my test case, a procedure called from the --%beforeeach annotation dumps the data into the tables (using DBMS_XMLSTORE.insertXML); see the sketch below.
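For reference, a minimal sketch of such a loader (the procedure name and charset handling are assumptions; the XML files must be in the canonical ROWSET/ROW format that DBMS_XMLSTORE expects, e.g. as produced by DBMS_XMLGEN):

```sql
-- Hypothetical loader: reads <table_name>.xml from an Oracle directory
-- object and inserts its rows into the table via DBMS_XMLSTORE.
CREATE OR REPLACE PROCEDURE load_table_xml (
  p_dir_name   IN VARCHAR2,  -- Oracle directory object name
  p_table_name IN VARCHAR2   -- target table; file is <table_name>.xml
) IS
  l_bfile BFILE := BFILENAME(p_dir_name, LOWER(p_table_name) || '.xml');
  l_xml   CLOB;
  l_dest  INTEGER := 1;
  l_src   INTEGER := 1;
  l_lang  INTEGER := DBMS_LOB.default_lang_ctx;
  l_warn  INTEGER;
  l_ctx   DBMS_XMLSTORE.ctxType;
  l_rows  NUMBER;
BEGIN
  -- load the file contents into a temporary CLOB
  DBMS_LOB.createtemporary(l_xml, TRUE);
  DBMS_LOB.open(l_bfile, DBMS_LOB.lob_readonly);
  DBMS_LOB.loadclobfromfile(l_xml, l_bfile, DBMS_LOB.lobmaxsize,
                            l_dest, l_src, DBMS_LOB.default_csid,
                            l_lang, l_warn);
  DBMS_LOB.close(l_bfile);

  -- insert the ROWSET/ROW document into the target table
  l_ctx  := DBMS_XMLSTORE.newContext(p_table_name);
  l_rows := DBMS_XMLSTORE.insertXML(l_ctx, l_xml);
  DBMS_XMLSTORE.closeContext(l_ctx);

  DBMS_LOB.freetemporary(l_xml);
END load_table_xml;
/
```

Called from a --%beforeeach procedure, the inserts stay inside the test's transaction, so they are never committed and no special clean-up is needed.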
This was working fine until we upgraded to 12c. In 12c, as a security compliance measure, the Oracle directory creation facility (CREATE DIRECTORY) was removed for individual schemas/users. My DBA is strict on this, as it is a security compliance issue.
I do not want to create a one-time directory with the help of the DBA and use it for all the test cases, as I want to maintain the data in a better way, per test case.
I am maintaining my data on a Unix box under /app/utPL/<suit_name>/
So, at a high level:
- I want to avoid calling multiple complex, time-consuming APIs/processes each time I set up the data.
- I want to use pre-stored data (a kind of dump created by expdp, or XML files) which can be inserted into the tables before each test case. This data will never be committed, so no special clean-up instructions are needed.
Currently I am using version v3.1.2.
I have searched to see whether the same question was already raised, but only found this: https://github.com/utPLSQL/utPLSQL/issues/333
I understand that this is not something the framework itself could provide a generic solution for. But what are the common practices, if you are aware of any, that are followed in cases like mine?
Also, the grants required by the test schema are causing a clash between the DBA and dev teams. Is there a suggestion on this?
Thanks!!
Hi there. Since the details of any data setup required are application specific, this isn’t something that utPLSQL is likely to enable.
To share my experience: we set up all data programmatically, typically with a one-line call to set up the data prerequisites (business data, not reference data). We have built a DSL following the Builder Pattern, using PL/SQL objects, which allows us to do something equivalent to the following for data setup:
customer := customer.withName(...).withAddress(...).withAccount(...).withBalance(...).build();
The build() call triggers the various inserts to persist the required data, and might delegate work to objects passed along the chain (above, there might be a customer object that knows how to persist itself and an account object that knows how to persist itself).
The return value encapsulates a means to obtain identifiers for the items created, e.g. a customer id, an account number, etc. These can then be used in expectations.
With good default values the chain does not become too long, and it's only necessary to define the exceptional configuration.
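To make the idea concrete (this is an illustration, not the actual implementation described above), a minimal sketch of such a builder as a PL/SQL object type; the type name, the customers table, and its columns are all hypothetical:

```sql
-- Hypothetical builder as a PL/SQL object type; the customers table and
-- its columns (customer_id, name, address, balance) are assumed.
CREATE OR REPLACE TYPE customer_builder AS OBJECT (
  name    VARCHAR2(100),
  address VARCHAR2(200),
  balance NUMBER,
  -- each withX() returns a modified copy, so the calls can be chained
  MEMBER FUNCTION withName    (p_name    VARCHAR2) RETURN customer_builder,
  MEMBER FUNCTION withAddress (p_address VARCHAR2) RETURN customer_builder,
  MEMBER FUNCTION withBalance (p_balance NUMBER)   RETURN customer_builder,
  -- build() persists the configured data and returns the new identifier
  MEMBER FUNCTION build RETURN NUMBER
);
/
CREATE OR REPLACE TYPE BODY customer_builder AS
  MEMBER FUNCTION withName (p_name VARCHAR2) RETURN customer_builder IS
    l_copy customer_builder := SELF;
  BEGIN
    l_copy.name := p_name;
    RETURN l_copy;
  END;

  MEMBER FUNCTION withAddress (p_address VARCHAR2) RETURN customer_builder IS
    l_copy customer_builder := SELF;
  BEGIN
    l_copy.address := p_address;
    RETURN l_copy;
  END;

  MEMBER FUNCTION withBalance (p_balance NUMBER) RETURN customer_builder IS
    l_copy customer_builder := SELF;
  BEGIN
    l_copy.balance := p_balance;
    RETURN l_copy;
  END;

  MEMBER FUNCTION build RETURN NUMBER IS
    l_id NUMBER;
  BEGIN
    INSERT INTO customers (name, address, balance)
    VALUES (SELF.name, SELF.address, SELF.balance)
    RETURNING customer_id INTO l_id;  -- assumes an identity column
    RETURN l_id;
  END;
END;
/

-- usage: only the exceptional configuration needs spelling out
DECLARE
  customer customer_builder := customer_builder(NULL, NULL, NULL);
  l_id     NUMBER;
BEGIN
  l_id := customer.withName('Jane Doe').withBalance(1000).build();
END;
/
```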
We also then set up template objects for common cases and these are used as aliases. So, we might set up customers in specific states, or accounts in specific states. This then becomes a point of reuse, so that all developers refer to, and use the same templates.
Over time we've found that we can cover a lot of data edge cases like this, and we've extended the capability to cover more data areas.
In terms of interacting with the DBA, it's always best to have lots of conversations so they understand what you are trying to achieve and why. Usually they want to help, and everyone benefits from better software through automated testing.
One way to handle these kinds of DBA requests is to agree on the controls appropriate for your environment and have them codified in a stored function/procedure that carries out the appropriate validations. The DBAs can deliver this as an API to, for example, create the directory definitions needed, or whatever else is required. You might then be granted the privilege to execute this API.
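As an illustration of such a codified control (the procedure name and the approved path prefix are assumptions, borrowing the /app/utPL root from above), the DBA-owned API might look something like this:

```sql
-- Hypothetical DBA-owned procedure: validates the requested path against
-- the agreed control, then creates the directory object. The owning schema
-- needs the CREATE ANY DIRECTORY privilege; devs get EXECUTE on this API.
CREATE OR REPLACE PROCEDURE create_test_directory (
  p_dir_name IN VARCHAR2,
  p_path     IN VARCHAR2
) AUTHID DEFINER IS
BEGIN
  -- codified control: only allow paths under the approved test-data root
  IF p_path NOT LIKE '/app/utPL/%' THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Path is outside the approved test-data root');
  END IF;
  EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY '
    || DBMS_ASSERT.simple_sql_name(p_dir_name)
    || ' AS ''' || REPLACE(p_path, '''', '''''') || '''';
END create_test_directory;
/
-- the DBA then grants access to the dev/test schemas:
-- GRANT EXECUTE ON create_test_directory TO <test_schema>;
```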
Hope this helps.
Just to add: between the DSL and the tables we use an auto-generated API that encapsulates insert/update/delete against our tables. This has per-row and bulk capability; the bulk variant accepts PL/SQL collections. It should return identifiers for the created/modified data, but we don't do this reliably for all tables currently.
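The shape of one such generated API might be something like the following spec (names are hypothetical; the generator would emit one package per table):

```sql
-- Hypothetical generated table API for a customers table.
CREATE OR REPLACE PACKAGE customers_api AS
  TYPE row_tab IS TABLE OF customers%ROWTYPE;
  TYPE id_tab  IS TABLE OF customers.customer_id%TYPE;

  -- per-row insert, returning the generated identifier
  FUNCTION ins (p_row IN customers%ROWTYPE)
    RETURN customers.customer_id%TYPE;

  -- bulk insert accepting a PL/SQL collection (FORALL in the body),
  -- returning the identifiers of the created rows
  FUNCTION ins (p_rows IN row_tab) RETURN id_tab;
END customers_api;
/
```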
I've experimented with this table API generator for preparing test data.
It is not ideal as it was created to satisfy different needs. But it is a starting point.
What you describe, @mathewbutler, is a dream come true for many developers, I suppose. It looks quite mature and very robust.
It would be great to have such API generator open-sourced.
I would definitely use it!
The older I get, the more I appreciate the ability to reuse what was already invented, battle-proven, and mature 😁
Thanks @jgebal and @mathewbutler !!
I'm re-opening this issue, as it's something that most engineers will struggle with. The topic is definitely worth attention, and maybe we could come up with some sort of open-source solution. I agree with @mathewbutler that it wouldn't be part of utPLSQL itself, not the core anyway.
I really love the customer := customer.withName(...).withAddress(...).withAccount(...).withBalance(...).build(); example by @mathewbutler. I can imagine implementing something like it manually using object types.
Maybe we could start with an advanced demo app where this concept is applied, as a good example. In the first step, everything would be done manually.
In the second step, we could think about how to automate parts of it and extract those parts into a utPLSQL utilities repo, to be reused by this advanced demo app and, of course, other projects.
On our current project we've used object types for this; it's called the builder pattern. We then have a wrapper package that uses the object types to define references to standard data setups built with the builder. Using the package, you can essentially say "create an onshore account with no transaction history" by referencing a constant of the required type.
It's also useful to have known default settings for data attributes, so that the user has the option of obtaining a "default" data configuration without needing to make every "withX()" call.
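A sketch of what such a wrapper might look like, building on the hypothetical customer_builder type sketched earlier in this thread (the package and function names are again assumptions):

```sql
-- Hypothetical wrapper exposing named, reusable data setups.
CREATE OR REPLACE PACKAGE test_data AS
  -- a customer in the agreed default state
  FUNCTION default_customer RETURN customer_builder;
  -- a named template for a common case
  FUNCTION zero_balance_customer RETURN customer_builder;
END test_data;
/
CREATE OR REPLACE PACKAGE BODY test_data AS
  FUNCTION default_customer RETURN customer_builder IS
  BEGIN
    -- sensible defaults: tests override only what matters to them
    RETURN customer_builder('Default Name', 'Default Address', 0);
  END;

  FUNCTION zero_balance_customer RETURN customer_builder IS
    l_builder customer_builder := default_customer();
  BEGIN
    RETURN l_builder.withBalance(0);
  END;
END test_data;
/
```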
One challenge in earlier DB versions (where identifiers were limited to 30 characters, before 12.2) is keeping the method names short. In some cases I've shortened "with" to "w" in the method names.
There are likely some core capabilities that could be reused across projects, leaving users to write their application-specific wrappers...