Farmer sample data generation error
Describe the bug: The error "Farm must have a head member." occurs when we try to generate sample data.
To Reproduce
- Go to the Registry
- Go to the Configuration menu
- Click Generate Sample Farmer Data
- Give the sample size as 1000 and generate
Expected behavior: Should be able to generate farmer sample data.
🔔 Note: This ticket should address common considerations without including country-specific content. Please ensure all references are generic and applicable across various contexts.
Re-opened because of invalid automation rules.
Findings: Returned to dev
Generating sample data displays the head member validation error.
Findings: Returned to dev
Verified in PRJ10 instance:
- Able to generate farm group - Pass
- Able to generate farms within farm groups - Pass
- Able to generate members within farms - as agreed internally, this must work as well
- Able to generate 1000 records - Fail
I can provide a video recording upon request.
Findings: Unable to test in the Prj 10 instance as the latest PR for this ticket is not merged yet.
cc @anjclarise @celinenilla
Findings: In progress Tested in a runboat instance b3985ca4d-011f-430f-b5a9-882ca2dba06b
- Data generated for groups should have members linked - Pass
- Individuals should be generated along with groups - Pass
Able to generate sample data with the following sizes:
- 5 - Pass
- 900 - Pass
- 1005 - Pass
- 2010 - Pass
- 9999 - Fail, permanent 504 Gateway Time-out despite multiple page refreshes
There are issues found during testing:
When attempting to generate more than 1000 sample records, the warning message does not appear immediately; it took roughly 40 seconds to appear.
When generating a large sample (900+), if generation finishes in the background and I then click "refresh page" from the warning message, an nginx error is displayed (it can be resolved by refreshing the page).
The instance gets flakier as larger data sets are generated: pages become unresponsive, with 503 Service Temporarily Unavailable nginx errors.
@anthonymarkQA did you get any messages for the queue job itself? I.e. "MemoryError"?
What has been suggested for other parts of OpenSPP, is that all batch sizes should be configurable through environment settings, so that operations/DevOps can adjust according to resources available on the platform.
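As a minimal sketch of that suggestion, a batch size could be read from an environment variable with a safe fallback. Note that `OPENSPP_SAMPLE_BATCH_SIZE` and the default value are hypothetical names used for illustration, not existing OpenSPP settings:

```python
import os

# NOTE: OPENSPP_SAMPLE_BATCH_SIZE is a hypothetical variable name used for
# illustration; it is not an existing OpenSPP/queue_job setting.
DEFAULT_BATCH_SIZE = 500

def get_batch_size(env=None):
    """Read the generation batch size from environment settings, falling
    back to a safe default when the value is missing or invalid."""
    env = os.environ if env is None else env
    raw = env.get("OPENSPP_SAMPLE_BATCH_SIZE", "")
    try:
        size = int(raw)
    except ValueError:
        return DEFAULT_BATCH_SIZE
    # Reject zero/negative values an operator might set by mistake.
    return size if size > 0 else DEFAULT_BATCH_SIZE
```

This lets operations/DevOps tune the batch size per deployment without a code change, as suggested above.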
@anthonymarkQA I tried the instance I could infer from your screenshots – http://openspp-openspp-modules-penn-632-41fdf5be3093.runboatk8.newlogic-demo.com/web#action=143&model=queue.job&view_type=list&cids=1&menu_id=104 –, and I could not see any failures.
What you are observing in terms of gateway timeout (HTTP 504)/Service Temporarily Unavailable (HTTP 503) is simply an effect of the system being overloaded and that there are no available UI workers for you while the sample data is being generated. Looking at the execution on the queue over time shows that the later jobs took a little bit longer, which could be an effect of the database growth or just that there was another load on the system simultaneously.
Looking at the configuration, each Runboat instance is configured with two worker nodes but also two channels usable by the queues. Thus, there is a risk that a long-running queue job can make the UI unresponsive as both workers can be occupied by the queue. If this is a problem for your testing, we can increase the number of worker nodes to 3 or maybe 4. However, that will allow a single Runboat instance to consume more of the shared resources.
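For reference, the worker/channel setup described above corresponds to configuration along these lines; the values are illustrative, not the actual Runboat settings:

```ini
[options]
; number of HTTP worker processes serving the UI
workers = 2

[queue_job]
; OCA queue_job channel capacity: root:2 allows two concurrent queue jobs,
; which can occupy both workers and leave none free for the UI
channels = root:2
```

Raising `workers` (or lowering the `root` channel capacity) would keep a worker free for the UI, at the cost of the trade-offs noted above.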
The PR above should address the issue reported by @anthonymarkQA where the instance becomes unavailable. Do note that slowness (and severe slowness) should be expected when generating sample data using the UI function.
FYI - new instances on Runboat will use the change from the PR above.
Findings: QA Passed Tested in a runboat instance be02d47f4-1731-4c43-9193-5e2a88158fd9
- Data generated for groups should have members linked - Pass
- Individuals should be generated along with groups - Pass
Able to generate sample data with the following sizes:
- 5 - Pass
- 900 - Pass
- 1005 - Pass
- 2010 - Pass
- 9999 - Pass (it seems to generate the data in batches, at roughly 1000 records per 5 minutes, giving the instance a chance to function; other pages remain accessible while it generates)
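The batch-wise behavior observed here (9999 records processed roughly 1000 at a time) can be sketched as follows; the chunking helper is illustrative and not the actual OpenSPP generation code:

```python
def chunked(total, batch_size):
    """Yield (offset, count) pairs covering `total` records in batches,
    so each batch can be enqueued as a separate background job."""
    offset = 0
    while offset < total:
        count = min(batch_size, total - offset)
        yield offset, count
        offset += count

# e.g. 9999 records in batches of 1000 -> ten jobs, the last with 999 records
jobs = list(chunked(9999, 1000))
```

Splitting the work this way keeps each job short, which is why the UI stays responsive between batches instead of timing out.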
The screenshot below appears to show the queue of data-generation jobs.