Pytest: Repeatedly run class instead of tests with parametrize

Created on 28 May 2020 · 6 Comments · Source: pytest-dev/pytest

Quick question:

Given a class A that has test_1, test_2, test_3 and a parametrize mark applied at the class level with datasets (xa, ya) and (xb, yb), I would like the following execution pattern:

Class A:
test_1 with xa, ya
test_2 with xa, ya
test_3 with xa, ya

Class A:
test_1 with xb, yb
test_2 with xb, yb
test_3 with xb, yb

instead I am getting:
Class A:
test_1 with xa, ya
test_1 with xb, yb
test_2 with xa, ya
test_2 with xb, yb
test_3 with xa, ya
test_3 with xb, yb
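
To make it concrete, a minimal sketch of the layout I mean (names made up):

```python
import pytest


@pytest.mark.parametrize("x, y", [("xa", "ya"), ("xb", "yb")])
class TestA:
    def test_1(self, x, y):
        ...

    def test_2(self, x, y):
        ...

    def test_3(self, x, y):
        ...
```

Collected like this, pytest runs test_1[xa-ya], test_1[xb-yb], test_2[xa-ya], test_2[xb-yb], and so on, i.e. grouped by test function rather than by parameter set.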

I hope this makes sense; I'll post some of the code in the morning.

Any help is appreciated.

parametrize question

All 6 comments

Are you creating dependencies between tests? I don't think this is a good idea... but I may be wrong.

I think you are right, I am doing that and it does sound like a bad idea... but this is my rationale:

1) test_1: user1 logs in and gets to the home page
2) test_2: we check the correctness of data on the page
3) test_3: another browser is opened and user2 logs in and does some interaction
3a) this interaction is then tested to see if it appears on user1's browser

I guess there is a strong dependency between test_1 and test_3? But the current way it's executing doesn't respect the browser session either: if test_1 is executed twice it will start up the browser twice. I think my actual problem is that I'm keeping too much state in the test case, right?

Here is the code.

```python
import logging
import re

import pytest
from selenium.common.exceptions import TimeoutException

# BaseTest, wait_on_el, find_el, gen_driver, set_odriver and get_totp_token
# are helpers from my own framework.
logger = logging.getLogger(__name__)


@pytest.mark.parametrize("user, target", [
    pytest.param("user1", "glb", marks=pytest.mark.glb),
    pytest.param("user1", "site1", marks=pytest.mark.site1),
    pytest.param("user1", "site2", marks=pytest.mark.site2),
    pytest.param("user1", "site3", marks=pytest.mark.site3),
], ids=['glb', 'site1', 'site2', 'site3'])
class TestClient(BaseTest):
    def test_do_user_login(self, user, target):
        uname = self.cfg['store']['client'][user]['creds']['username']
        passwd = self.cfg['store']['client'][user]['creds']['password']

        logger.info('Client: {}'.format(user))
        logger.info('Site: {}'.format(target))

        # Default to None so the 2FA step below is skipped cleanly when no
        # secret is configured (otherwise the check raises NameError).
        secret = None
        if 'secret' in self.cfg['store']['client'][user]['creds']:
            secret = self.cfg['store']['client'][user]['creds']['secret']
        else:
            logger.info('No 2fa secret found')

        url = self.cfg['store']['client']['urls'][target]

        self.go_to_url(url)
        wait_on_el('name', 'email').send_keys(uname)
        find_el('name', 'password').send_keys(passwd)

        find_el('css_selector', '.0-lock-submit').click()

        if secret is not None:
            try:
                wait_on_el('name', 'code').send_keys(get_totp_token(secret))
                find_el('id', 'ok-button').click()
            except TimeoutException:
                pass

        wait_on_el('xpath', '//*[starts-with(@class, "actionButtonStyles_actionButton")]')
        self.screenshot()

    def test_check_ratio(self, user, target):
        assert len(wait_on_el(
            'xpath',
            '//*[starts-with(@class, "actionButtonStyles_actionButton_")]',
            cond='visibility_of_all_elements_located')) == 3

        currx = re.compile(r'(?<!,)\b(\d{1,3}(?:,\d{3})*)\b(?!,)\.\d{2}')

        tot_bal = find_el(
            'xpath',
            '//*[starts-with(@class, "storeDetailsStyles_totalstoreRatio_")]'
        )

        assert currx.match(tot_bal.text)

    def prep_user2(self, target):
        # Open a second browser for user2, log them in with the same flow,
        # then switch back to user1's driver.
        self.browser = gen_driver()
        self.test_do_user_login('user2', target)
        self.browser_t2 = self.browser
        self.browser = set_odriver()

    def test_user1_user2_transaction(self, user, target):
        self.prep_user2(target)
```

Basically I was aiming to run the same code against a few different URLs.


I have a feeling I probably need to return the driver (browser) instance from a fixture. The driver instance should be unique to the 'target' value, and the fixture should return the same driver instance for a given target value.

I might have to redesign my selenium driver abstraction, as it is too tied up in state (the driver instance). The quickest approach is to pass the driver in via the function args.
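
Something like this is what I have in mind (just a sketch; gen_driver is my helper from the code above, and the target list is lifted out of the mark):

```python
import pytest

TARGETS = ["glb", "site1", "site2", "site3"]


@pytest.fixture(scope="class", params=TARGETS)
def target(request):
    # One target per class run.
    return request.param


@pytest.fixture(scope="class")
def browser(target):
    # One driver per target, shared by every test in the class.
    driver = gen_driver()
    yield driver
    driver.quit()


class TestClient:
    def test_do_user_login(self, browser, target):
        ...

    def test_check_ratio(self, browser, target):
        ...
```

If I read the fixture docs right, pytest groups tests by the instances of higher-scoped parametrized fixtures, so this should also give the "run the whole class per target" order I was after originally, and the parametrize mark (plus the user/target arguments) could come off the class.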

hope I'm making sense - any help/suggestions are welcome!

Thank you.

OK, so I've been thinking that I should hide the driver in a page object and return the page object as a fixture to the test. Then:

1) it will be one or two method calls on the page object to get to the right state (unless it's already there)
2) the page object can return the driver instance, which I can use to grab things and assert

Lastly, this fixture should live for the duration of the class. Is that possible? I.e. it would be the same instance for all the tests in the class.
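
What I'm picturing is roughly the following (just a sketch; HomePage and its methods are placeholders for the page object I'd write, and browser is the class-scoped driver fixture from above). As far as I can tell, scope="class" gives exactly that:

```python
import pytest


@pytest.fixture(scope="class")
def home_page(browser):
    # HomePage is a placeholder page object wrapping the driver.
    # With scope="class" this runs once per class; every test in the
    # class gets the same HomePage instance.
    page = HomePage(browser)
    page.log_in()  # one or two calls to get into the right state
    return page


class TestHomePage:
    def test_action_buttons_present(self, home_page):
        assert len(home_page.action_buttons()) == 3

    def test_total_balance_is_currency(self, home_page):
        assert home_page.total_balance_matches_currency()
```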

The reason dependencies between tests are bad is that the depending test isn't designed around the specific behavior it's meant to test and instead picks up confounding variables. If test 2 picks up where test 1 left off, using test 1's stuff, then a failure in test 1 can be completely unrelated to the behavior test 2 is targeting, and yet test 2 would still have problems. It threatens both the validity of the test results and the practicality of automation.

In this case, test_3 is tightly coupled to test_1, and this is because more than three behaviors are being tested at once.

A solution is to break things down further. Leverage the backend's web API to establish multiple sorts of checkpoints of confidence.

Going with just test_3 for a moment, you can have it broken down into 3 tests:

  1. Test the frontend's implementation of the backend's web API by launching the browser and performing that interaction. That interaction will send some web request to the backend's web API, creating a record (or multiple records) in the DB that will theoretically be sent to the browser session from test_1. But this test won't be using that browser session. Instead, you can use an API client to request that data from the backend's web API to make sure the records were created appropriately. That's all that's needed for this test.
  2. Test that the same request the browser from test_3 would have sent, this time sent through an API client, can be retrieved by an API client. This way, a failure in the frontend's implementation of the backend's API doesn't prevent you from knowing whether the API itself is working.
  3. Test a different part of the frontend's implementation of the backend's API by launching the browser, going to the home page, sending that interaction request from an API client, and then making sure it appears on the home page in the browser.

This allows you to test these behaviors in isolation from each other, and as a result, the ones that fail will tell you where the bugs are. They can each fail individually and tell you about a different bug.
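
To make the first of those checkpoints concrete, a rough sketch (the endpoint, locator, and the base_url/auth_headers fixtures are made up for illustration; browser is whatever driver fixture you end up with):

```python
import requests


def test_interaction_creates_records(browser, base_url, auth_headers):
    # Act through the frontend: perform the interaction in the browser.
    browser.get(f"{base_url}/home")
    browser.find_element("id", "send-interaction").click()  # illustrative locator

    # Assert through the backend's web API instead of a second browser session.
    resp = requests.get(f"{base_url}/api/interactions", headers=auth_headers)
    resp.raise_for_status()
    assert any(item["sender"] == "user2" for item in resp.json()), \
        "interaction was not recorded by the backend"
```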

As for your test structure, I can see some other ways to improve it to make things much easier to maintain and add on to.

A lot of my recommendations can be found here and here.

I also recommend, as you said, using a page object framework. [Here's the one I made](https://pypcom.readthedocs.io/en/latest/), which you might find useful.

I would also recommend not using unittest.TestCase, or self when working with pytest. Fixtures are incredibly powerful in pytest, and make organizing tests and their dependencies incredibly easy. They become how you manage and reference state. You can use as many as you like for a single test, and even structure them in ways that apply to multiple tests, but I recommend limiting them to providing one resource and/or performing one state-changing action each.

Regarding your parametrization, I recommend only using it when differing input should trigger the same behavior and result in the same output. For example, using Firefox vs Chrome should trigger the exact same behavior and yield the exact same result if the same actions are performed.

If the result is different, then it likely means you are testing different behavior, and if it's different behavior, another test should be defined to cover it. This way you can engineer a test around a specific behavior and not sacrifice anything either in terms of the complexity of your test logic/structure, the validity of the test results, or even just how readable the test names are.

I touch on it a little in the links I pasted above, but a single action can result in multiple behaviors being triggered. That's fine, and you can use larger scopes, like a class, to house multiple tests to assess the resulting state from those various behaviors. The important bit is to follow "arrange, act, assert", and not "arrange, act, assert, act, assert, ...".
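
For example, roughly (everything here is illustrative; home_page stands in for whatever class-scoped page object fixture you end up with):

```python
import pytest


@pytest.fixture(scope="class")
def submitted_form(home_page):
    # Arrange + act exactly once for the whole class.
    home_page.fill_form(name="Ada", amount="1,234.56")  # illustrative data
    home_page.submit()
    return home_page


class TestFormSubmission:
    # Each test asserts one behavior triggered by that single action.
    def test_shows_confirmation(self, submitted_form):
        assert submitted_form.confirmation_visible()

    def test_clears_the_form(self, submitted_form):
        assert submitted_form.form_is_empty()
```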

Closing this as I think @SalmonMode's advice is a nice and idiomatic way to write the tests you need :slightly_smiling_face:
