In InstructorFeedbackResultsPageScalabilityTest, the entire set of test data is currently refreshed for each test case. Since the content of the test data does not need to be unique for each test case (we only need to increase the amount of data), we should explore adding test data for each subsequent test case instead of replacing the entire dataset.
This is important because the scalability tests are meant to be run on the live/staging server, and write operations are not free there.
@samsontmr Are we talking about calling the @BeforeMethod public void refreshTestData() before every test?
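Roughly this kind of pattern, as a simplified sketch? (The JSON file name and the load/restore helpers below are placeholders for illustration, not the exact test code.)

```java
// Assumed shape of the current per-test refresh: the full DataBundle is
// reloaded from JSON and rewritten before every test case.
@BeforeMethod
public void refreshTestData() {
    DataBundle data = loadDataBundle("scalabilityTestData.json"); // placeholder file name
    removeAndRestoreDataBundle(data); // wipes and re-persists the whole dataset each time
}
```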
That's right. Ideally it should "top up" the data to the amount required for the next test case or remove excess data.
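For illustration, a minimal sketch of that "top up" idea; the counter and the persistStudent helper are hypothetical placeholders, not existing test methods:

```java
// Track how many students have already been written so that the next test case
// only writes the difference instead of refreshing everything.
private int studentsPersistedSoFar = 0;

private void topUpTestData(List<StudentAttributes> studentsForNextCase) {
    for (int i = studentsPersistedSoFar; i < studentsForNextCase.size(); i++) {
        persistStudent(studentsForNextCase.get(i)); // placeholder for the actual write call
    }
    studentsPersistedSoFar = Math.max(studentsPersistedSoFar, studentsForNextCase.size());
}
```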
Hi, this issue is interesting; I would like to try solving it.
Could you please help me check that I understand the task correctly: the method refreshTestData(String filename) loads a DataBundle from a JSON file created by the class InstructorFeedbackResultsPageDataGenerator. We have 6 such files (10 or 20 students with 1, 5 and 10 questions), and new Student objects are generated for each file, so we get 3 * 10 + 3 * 20 = 90 randomly generated students for the test.
Should I change this to generate only 20 random students, then use 10 of them for the first 3 tests and all 20 for the remaining 3 tests?
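Something like this, as a rough sketch (generateRandomStudents stands in for whatever InstructorFeedbackResultsPageDataGenerator does; the names are assumptions):

```java
// One shared pool of 20 generated students instead of six independent bundles.
List<StudentAttributes> pool = generateRandomStudents(20); // hypothetical generator call

// The 10-student test cases read from the first half of the pool...
List<StudentAttributes> tenStudents = pool.subList(0, 10);
// ...and the 20-student test cases use the whole pool, so the same students
// are reused rather than regenerating 90 of them.
List<StudentAttributes> twentyStudents = pool;
```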
Yes, that sounds about right. However, take note of the transition from 10 students+10 questions to 20 students+1 question.
Thanks, I'll be careful with that transition!
Please advise on one more question so I can better understand the goal: you wrote that "write operations are not free" on the live/staging server. Which operations specifically are chargeable? I mean, which result is better:
I believe it should be that the number of files does not matter, only their total volume in bytes?
@whipermr5 can confirm?
It should be the number of calls to the datastore.
we should explore adding test data for each subsequent test case instead of replacing the entire dataset.
Apologies for not flagging this up earlier, but this does not reduce the number of write operations. The test is about adding x amount of test data (with increasing values of x) and measuring the time taken, right? Adding x amount of test data will always incur x amount of Datastore writes. And if you're thinking of adding data incrementally, i.e. incrementing the amount of data by x each time and measuring the time taken to write the additional data, each increment will always take the same amount of time (which defeats the purpose of the test).
From the name of the test, I assume this is supposed to measure the scalability of reading response objects, not writing them. Therefore, objects can be written once and read every time the test is run.
Yes, that is the idea.
Thanks, I'll try to update my pull request correspondingly.
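Just to confirm I read this right, a rough sketch of the "write once, read on every run" approach; the existence check and persistence helpers are placeholders, not existing TEAMMATES methods:

```java
// Seed the datastore only if the scalability data is not already there;
// the timed part of the test then only performs reads against it.
private void ensureScalabilityDataExists(DataBundle bundle) {
    if (!scalabilityDataAlreadyInDatastore(bundle)) { // placeholder lookup
        persistDataBundle(bundle); // one-time write, skipped on later runs
    }
}
```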
@wkurniawan07 are the scalability tests still a concern? If not, we can close this issue.
This is the script written by @samsontmr roughly a year ago, not the outdated ones I deleted in #8369.
May I know if this issue is still relevant? I notice that this script hasn't been updated for a while, and the test currently fails after the recent java-time-migration because the JSON deserialization gives an error.
If it is still relevant, I can start working on it.
I don't think this will be used in near future. Putting on hold.