TestCafe - easier and faster end-to-end testing

Authors: Iva Stolnik (Junior Software Engineer) & Mario Galović (Junior QA Engineer)

When deciding on the best testing tool for our company, we considered three options - Selenium, Cypress, and TestCafe.
After comparing their features and discussing the pros & cons, we concluded that TestCafe is the best option for us. In this blog, we will share a few thoughts on why we chose TestCafe and some practical examples of our daily TestCafe use.

  1. Why TestCafe?
  2. Starting with TestCafe
  3. Running Tests Locally, Minimizing Bugs, and Constant Testing
  4. Randomizing Every Action
  5. End-to-End Service with Slack Messages
  6. TestCafe Runner
  7. Load Testing/User Simulation Using TestCafe

Why TestCafe?

TestCafe provided us with all the features we needed and surprised us with its speed and ease of use.

It is an open-source Node.js End-to-End automation tool that provides cross-browser testing, fast execution, and an extensive set of features for writing tests in JavaScript. TestCafe supports browsers in headless mode and does not rely on WebDriver or browser plugins, which makes it very practical and fast.

It can be used on any platform - Windows, Linux, or macOS.

TestCafe is designed to support most modern browsers; the officially supported browsers are listed in the TestCafe documentation.

What we find particularly important is that it can concurrently run multiple tests in one browser or just one test in multiple browsers. It has a built-in waiting mechanism and doesn't require external libraries or plugins. On top of that, we have successfully used it to simulate load tests with dockerized browsers.

Of course, TestCafe has some downsides too:

  • only supports JavaScript/TypeScript
  • can't create unit tests
  • unpopular (but growing in popularity)

The problem with its unpopularity is that there are not many users online who can help when you run into an issue.

Fortunately, TestCafe has its own forum for reporting bugs, and the team is very efficient when it comes to fixing them.

COMPETITION

TestCafe's current competitors are Selenium and Cypress. Each tool has some advantages and disadvantages.

The fundamental difference between Selenium and TestCafe is that Selenium automates the browser from the outside through WebDriver, whereas the TestCafe core is a proxy server that runs behind the scenes and transforms HTML and JavaScript files to include the code needed for test automation.

On the other hand, Cypress can't access iframes directly like TestCafe can, and it doesn't support concurrent test runs the way TestCafe does. When it comes to multiple tabs and switching between windows, TestCafe wins out.

NOTE: TestCafe Studio is a commercial version with GUI and it doesn't require coding knowledge.

Starting with TestCafe

Setting up TestCafe is pretty straightforward. After installing it with a single command (“npm install -g testcafe”), we created a test directory inside our project which contains fixtures (groups of tests with the same starting URL), page-models (the directory that contains all test logic), and selectors (or any other directories and files, depending on testing needs).
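
For illustration, here is a minimal sketch of what a selectors file and a page model could look like in that structure - the file names, selectors, and the LoginPage class are hypothetical examples rather than our actual code:

                // selectors/login.js - hypothetical selectors file
                import { Selector } from 'testcafe';

                export const usernameInput = Selector('#username');
                export const passwordInput = Selector('#password');
                export const loginButton = Selector('button[type="submit"]');

                // page-models/login-page.js - hypothetical page model that holds the logic
                import { t } from 'testcafe';
                import { usernameInput, passwordInput, loginButton } from '../selectors/login';

                export class LoginPage {
                    async login(username, password) {
                        await t
                            .typeText(usernameInput, username)
                            .typeText(passwordInput, password)
                            .click(loginButton);
                    }
                }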

TestCafe does not require any external plugins to run tests on different browsers; you only need to have the browser installed on your computer.

Running Tests Locally, Minimizing Bugs, and Constant Testing

For this solution to be useful, we had two requirements: the ability to run all tests locally in a development environment, to minimize bugs in production releases, and the ability to run specific tests as part of our End-to-End service that constantly tests production.

With this in mind, we created a git pre-push hook which, on the git push command, starts the TestCafe script and runs all tests. With the help of the .beforeEach() and .afterEach() fixture hooks, we control the behavior of local runs before changes are pushed to the remote repository, and if any test fails, the git push is aborted.
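
The script that the hook calls can be as simple as a programmatic TestCafe run whose exit code decides whether the push goes through. Below is a minimal sketch under that assumption; the file name, test path, and browser are placeholders rather than our exact setup:

                // run-local-tests.js - called from the .git/hooks/pre-push hook (hypothetical file name)
                const createTestCafe = require('testcafe');

                (async () => {
                    const testcafe = await createTestCafe('localhost');
                    try {
                        const failedCount = await testcafe
                            .createRunner()
                            .src(['tests/fixtures'])            // assumed test directory
                            .browsers(['chrome:headless'])
                            .run();

                        // a non-zero exit code makes the pre-push hook abort the git push
                        process.exitCode = failedCount ? 1 : 0;
                    } finally {
                        await testcafe.close();
                    }
                })();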

For End-to-End testing, we use the test.meta() method to tag tests eligible for continuous runs on production, and our End-to-End service filters them accordingly - this is demonstrated later on. These features let us write less code and, essentially, our tests have become reusable and easier to control.

We use it as follows:

                    // Selector import shown as an example
                    import { Selector } from 'testcafe';

                    fixture(`Fixture name`)
                        .page('https://www.leapbit.com')
                        .meta({ e2e: true })                  // example meta tag
                        .beforeEach(async (t) => {
                            // prepare the test context, e.g. log in and set t.ctx defaults
                        })
                        .afterEach((t) => {
                            if (!t.ctx.passed) {
                                // handle a failed run, e.g. collect data for the reporter
                            }
                        });

                    test('Test name', async (t) => {
                        // test steps go here
                    })
                    .meta({ e2e: true });                     // tests tagged like this run on the End-to-End service
                

Randomizing Every Action

Another key demand we had was the ability to randomize every action.

TestCafe turned out to be great at element.child() control - it effectively mimics user input and, with a little help from JavaScript, loops until a given condition is met or until it reaches the timeout we define in the configuration as the “exitAfter” parameter.
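
To illustrate the idea, here is a minimal sketch of a randomized action; the selectors, the condition, and the timeout value are placeholders, not our production code:

                import { Selector } from 'testcafe';

                const menuItems = Selector('#main-menu').child('li');   // hypothetical list of candidate elements
                const targetWidget = Selector('#target-widget');        // hypothetical condition to reach
                const exitAfterMinutes = 5;                              // taken from the "exitAfter" configuration value

                test('Click random menu items until the target appears', async (t) => {
                    const deadline = Date.now() + exitAfterMinutes * 60 * 1000;

                    // keep picking a random child until the condition is met or we run out of time
                    while (!(await targetWidget.exists) && Date.now() < deadline) {
                        const count = await menuItems.count;
                        await t.click(menuItems.nth(Math.floor(Math.random() * count)));
                    }

                    await t.expect(targetWidget.exists).ok();
                });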

End-to-End Service with Slack Messages

The most important part of our End-to-End testing is our “flex-test-endtoend” service, which runs tests on our servers around the clock and reports the results via Slack channels.

Our service is primarily focused on production, but when we run tests locally, we always run them on the develop branch. If needed, we can change the branch in the configuration from production to develop, or to any other existing branch.

After our service is deployed on the server, it retrieves the configuration defined in etcd. The configuration contains the data necessary for the operation of the service, such as users, user passwords, test repositories, tokens, etc.
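
As a rough sketch, loading such a configuration could look like the snippet below; it assumes the etcd3 Node.js client and a hypothetical key name, which may differ from what we actually use:

                const { Etcd3 } = require('etcd3');

                async function loadConfig() {
                    // assumed etcd endpoint and key - replace with your own
                    const etcd = new Etcd3({ hosts: 'http://etcd:2379' });
                    return etcd.get('flex-test-endtoend/config').json();
                }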

For the service to work normally on the server, it is necessary to prepare the server and install some additional packages such as git, FFmpeg (for videos of failed tests), the browsers of your choice, zip/unzip, Node.js, etc. The service works by cloning the defined repositories and loading the configuration it needs, after which TestCafe filters the tests tagged with the e2e meta argument and runs only them (such as tests for payments, withdrawals, etc.).

After the tests finish, a message is sent to Slack. If a test passes, we receive a message on the status indicator channel saying the test passed successfully; if a test fails, a message goes to a different channel and the colleagues on call are notified about the problem.

For passed tests, we receive a short confirmation message on the status indicator channel.

When a test fails, the Slack message contains the section of code with an indication of the exact line where the error occurred, together with a video of the whole test.
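
Posting these notifications is straightforward with any Slack client. Below is a minimal sketch assuming the official @slack/web-api package and that the service configuration is already loaded into a config object; the channel names and message texts are placeholders:

                const { WebClient } = require('@slack/web-api');

                const slack = new WebClient(config.slackToken);

                async function reportResult(testName, passed) {
                    const channel = passed
                        ? config.slackRunningIndicatorChannel   // status indicator channel
                        : '#e2e-failures';                      // hypothetical on-call alert channel

                    await slack.chat.postMessage({
                        channel,
                        text: passed
                            ? `:white_check_mark: ${testName} passed`
                            : `:x: ${testName} failed - code section and video follow`,
                    });
                }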

CONFIGURATION

Below is a sanitized example of our configuration file, which defines some of the basic settings necessary for the tests to run. In addition to the filter that selects tests on the End-to-End service, we added a "skipTest" option to which we can pass the name of a fixture or a test we want to skip for any number of reasons, so we can skip it without pushing new changes to the branch. In the configuration file we can also define which branch the service takes the code from, so we can test everything thoroughly before pushing it into production.

                {
                    "url": "PAGE_URL",
                    "repositories": [],
                    "slackToken": "SLACK_TOKEN",
                    "slackRunningIndicatorChannel": "#INDICATOR_CHANNEL_NAME",
                    "exitAfter": "EXIT_AFTER_MINUTES",
                    "defaultUsername": "USERNAME",
                    "defaultPassword": "PASSWORD",
                    "adminDefaultUsername": "ADMIN_USERNAME",
                    "adminDefaultPassword": "ADMIN_PASSWORD",
                    "skipTest": { "fixtureName": [], "testName": [] }
                }
                

TestCafe Runner

Below is the TestCafe runner that filters and runs all the tests. In the runner, we define a browser and a browser mode (headless), a video path used when a test fails, and a filter that skips tests without the e2e meta tag. We also add an option to skip JavaScript errors and an assertion timeout of, for example, 30 seconds, so the test does not fail instantly when the page loads longer than expected.

                // assumed to be at the top of the service module
                const createTestCafe = require('testcafe');
                const path = require('path');

                const testcafe = await createTestCafe('localhost');
                const runner = await testcafe.createRunner();

                runner.browsers(['firefox:headless']);

                const pwd = path.dirname(require.main.filename);
                // adjust the artifacts directory and path pattern to your project layout
                runner.video(`${pwd}/tests/artifacts`, {
                    failedOnly: true,
                    pathPattern: '${DATE}_${TIME}/${TEST}.mp4',
                });

                const { skipTest } = this.config;
                await runner.reporter(['spec', this.customReporter]);

                runner.filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
                {
                    let runTest = true;

                    if ((skipTest.fixtureName && skipTest.fixtureName.length && skipTest.fixtureName.includes(fixtureName))
                        || (skipTest.testName && skipTest.testName.length && skipTest.testName.includes(testName)))
                    {
                        runTest = false;            // skipped explicitly via the configuration
                    }
                    else if (testMeta.e2e !== true)
                    {
                        runTest = false;            // only tests tagged with the e2e meta run on the service
                    }

                    return runTest;
                });

                await runner.run({
                    skipJsErrors: true,             // ignore client-side JavaScript errors
                    assertionTimeout: 30000,        // 30 s, in case the page loads longer than expected
                });
                

NOTE: We had some problems with multiple browsers opening on the server while running tests with the command “npm run test”. Because of that, we added the TestCafe runner to our service, and now it works as expected. Also, don't forget to use absolute paths for videos/screenshots.

TestCafe Reporter

Here you can see how we use our custom TestCafe reporter to report failed tests.

It is a basic TestCafe reporter that we adapted, and this solution works best for us.

                this.customReporter = () => ({
                    async reportTaskStart() {},
                    async reportFixtureStart() {},

                    // defined as an arrow function so `this` still refers to our service
                    // (readLines and failed are helpers/state on the service, not part of TestCafe)
                    reportTestDone: async (name, testRunInfo) =>
                    {
                        if (testRunInfo.errs.length)
                        {
                            const err = testRunInfo.errs[0];
                            if (err.callsite !== undefined)
                            {
                                const { filename, lineNum } = err.callsite;
                                const lines = await this.readLines(filename, lineNum);
                                // assumed message format: the error text followed by the failing source lines
                                this.failed.push({ testName: name, errMsg: `${err.errMsg || err.code}\n${lines}\n` });
                            }
                        }
                    },

                    async reportTaskDone(result) {},
                })

                await runner.reporter(['spec', this.customReporter]);
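
The readLines helper referenced above is our own code rather than part of TestCafe. A possible implementation, assuming a 1-based line number, could look like this:

                const fs = require('fs').promises;

                // returns the failing line with a little surrounding context,
                // so the Slack message points to the exact place in the test
                async function readLines(filename, lineNum, context = 2) {
                    const source = await fs.readFile(filename, 'utf8');
                    const start = Math.max(lineNum - 1 - context, 0);

                    return source
                        .split('\n')
                        .slice(start, lineNum + context)
                        .map((line, i) => {
                            const current = start + i + 1;
                            const marker = current === lineNum ? '>' : ' ';
                            return `${marker} ${current} | ${line}`;
                        })
                        .join('\n');
                }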
                

Sending Video to Slack

Here we will show you how we send videos to Slack when tests fail. If artifacts (videos) exist, we change the directory and zip the failed test video, after which the service sends the zipped file to the Slack channel defined in the configuration.

                // execSync comes from Node's child_process module;
                // testDir and zipFileName are assumed variable names
                if (artifactsExists)
                {
                    execSync(`cd ${testDir} && zip -r ${zipFileName}.zip artifacts`);

                    const zipData = {
                        title: 'Artifacts for test',
                        file: {
                            path: `${testDir}/${zipFileName}.zip`,
                            fileName: zipFileName,
                        },
                    };

                    // slackClient is our own wrapper around the Slack API
                    await slackClient.sendFile(zipData);
                }

Load Testing/User Simulation Using TestCafe

We had an opportunity to create a user simulation test that served as load testing for our platform. It was successful and we learned a lot from it. First, we created a test that mimics user input on our web application. After that, we created many browser instances using Docker and controlled each instance with Nomad. Each “user” generated network requests, so we could track how many requests our services could withstand.

Although there are standard load tests where you can just spam requests until everything breaks, we wanted to use dockerized tests because we wanted to simulate real users. With that, we got real application behavior under heavy load and, most importantly, we found our breaking points, which we later improved.
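
As a rough illustration, one such dockerized “user” could be a plain TestCafe test with a RequestLogger attached, so every instance can report how much load it generated; the selectors and the /api/ pattern below are placeholders:

                import { Selector, RequestLogger } from 'testcafe';

                // count every request the simulated user sends to our API
                const apiLogger = RequestLogger(/\/api\//);

                fixture('User simulation')
                    .page('https://www.leapbit.com')
                    .requestHooks(apiLogger);

                test('Simulated user journey', async (t) => {
                    await t
                        .click(Selector('#login-button'))            // hypothetical selectors
                        .typeText(Selector('#username'), 'loadtest-user')
                        .typeText(Selector('#password'), 'secret')
                        .click(Selector('#submit'));

                    // each dockerized instance reports how much load it produced
                    console.log(`API requests generated: ${apiLogger.requests.length}`);
                });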

Conclusion

TestCafe is a fast, robust, and practical End-to-End testing tool that has helped us eliminate some problems and even foresee other potential issues.

Despite its relative unpopularity, we believe that TestCafe is reliable and that its popularity will grow over time.

We use TestCafe for different purposes. We have created a pre-push hook that prevents developers from pushing changes if something doesn't work the way it should. Besides that, we have created tests and services which run constantly to let us know if a service is down. We have also created load tests that helped us develop our services and get the most out of our application.

TestCafe has really helped us bring our testing to the next level.