Testing with the Command Runner - AWK v2 style!

Be sure to test using the latest Command Runner to accommodate the latest IKL directory structure changes and the new AWK parser. As a rule of thumb, always run with the latest version.

You can use the command-runner to either run or create tests. It will automatically create and modify all the aforementioned files: the test file, the test-case folders, the input files and the output files.

For the specific command-runner arguments needed to run or create tests, refer to the command-runner usage (--help).


Running the tests

Running a test only requires the path to the command (the .ind file) and an existing test in the command's test directory. It is also possible to run only a specific test-case of the command's test.

Here is an example of the command-runner output when a test runs successfully:

2017-10-31 19:31:18,900 INFO -- Starting command runner
2017-10-31 19:31:18,904 INFO -- Running test for command 'C:\indeni-knowledge\parsers\src\checkpoint\clish\clish-config-unsaved.ind'
2017-10-31 19:31:21,063 INFO -- Running test case '0'
2017-10-31 19:31:22,405 INFO -- Running test case '1'
2017-10-31 19:31:22,413 INFO -- Running test case '2'
2017-10-31 19:31:22,421 INFO -- Running test case '3'
2017-10-31 19:31:22,428 INFO -- Test of command 'C:\indeni-knowledge\parsers\src\checkpoint\clish\clish-config-unsaved.ind' has been completed successfully
2017-10-31 19:31:22,428 INFO -- Exiting

As you can see, the test contained 4 different test-cases and all of them passed.

In this example, the command produced a different parsing result than expected:

2017-10-31 19:33:35,635 INFO -- Starting command runner
2017-10-31 19:33:35,644 INFO -- Running test for command 'C:\indeni-knowledge\parsers\src\checkpoint\clish\clish-config-unsaved.ind'
2017-10-31 19:33:38,198 INFO -- Running test case '0'
2017-10-31 19:33:39,571 INFO -- Running test case '1'
2017-10-31 19:33:39,578 INFO -- Running test case '2'
2017-10-31 19:33:39,587 ERROR -- Critical failure running command runner
java.lang.AssertionError: Result doesn't have the same metrics as expected. Expected: Set(DoubleMetric(Map(im.dstype -> gauge, im.dstype.displaytype -> boolean, im.name -> config-unsaved, live-config -> true, display-name -> Configuration Unsaved?),1.0,0)), but got: Set(DoubleMetric(Map(im.dstype -> gauge, im.dstype.displaytype -> boolean, im.name -> config-unsaved, live-config -> true, display-name -> Configuration Unsaved?),0.0,0))
          at indeni.collector.commandrunner.testing.CommandParsingTester.indeni$collector$commandrunner$testing$CommandParsingTester$$assertResult(CommandParsingTester.scala:170)
          at indeni.collector.commandrunner.testing.CommandParsingTester$$anonfun$runTest$2$$anonfun$apply$1.apply$mcV$sp(CommandParsingTester.scala:100)


Here, an AssertionError occurred while running the test case named 2: the parser produced a different result than expected. The message describes what was expected (Expected:) and what the parser actually produced (but got:). There is currently no built-in way to pinpoint the differences between the results other than copy-pasting them into a third-party diff tool.
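One quick workaround, sketched below, is to paste the two strings into files and split them on commas so a plain `diff` shows only the differing token. The file names and the shortened metric strings here are illustrative, not actual command-runner output:

```shell
# Paste the "Expected:" string into expected.txt and the "but got:"
# string into actual.txt (shortened here for readability).
printf '%s' 'Set(DoubleMetric(Map(im.name -> config-unsaved),1.0,0))' > expected.txt
printf '%s' 'Set(DoubleMetric(Map(im.name -> config-unsaved),0.0,0))' > actual.txt

# Put each comma-separated field on its own line, then diff line-by-line.
tr ',' '\n' < expected.txt > expected.lines
tr ',' '\n' < actual.txt   > actual.lines
diff expected.lines actual.lines || true   # shows only '< 1.0' vs '> 0.0'
```

This makes the single differing metric value stand out even in the long real-world strings.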

Of course, the test might fail if the command has caused an error:

2017-10-31 19:37:26,789 INFO -- Starting command runner
2017-10-31 19:37:26,795 INFO -- Running test for command 'C:\indeni-knowledge\parsers\src\checkpoint\clish\clish-config-unsaved.ind'
2017-10-31 19:37:29,311 INFO -- Running test case '0'
2017-10-31 19:37:30,650 INFO -- Running test case '1'
2017-10-31 19:37:30,666 ERROR -- failed to parse results of command: chkp-clish-config-unsaved, Failure(indeni.collector.ParsingFailure: Header = Parse Error,Description = Command [] chkp-clish-config-unsaved parser failed with input:unsaved1505144920installer:last_sent_da_info 1505144921,Message = Header = Execution Error,Description = Failed to execute AWK code,Message = For input string: "0,",,)
2017-10-31 19:37:30,669 ERROR -- Critical failure running command runner
java.lang.AssertionError: Parsing failed
          at indeni.collector.commandrunner.testing.CommandParsingTester$$anonfun$runTest$2$$anonfun$apply$1.apply$mcV$sp(CommandParsingTester.scala:97)
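The `For input string: "0,"` message suggests a field was used numerically while it still carried a trailing comma. A common defensive fix in the AWK code is to strip such characters before arithmetic; here is a minimal sketch (the field layout is illustrative, not the actual clish output):

```shell
# Illustrative input: a value with a trailing comma, like the "0," in the error.
printf 'last_sent_da_info 1505144921,\n' |
awk '{
    gsub(/,/, "", $2)    # remove stray commas before treating $2 as a number
    print $2 + 0         # numeric use now succeeds
}'
```

With the comma stripped, the numeric conversion no longer fails.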

These are the same type of errors that you may encounter when running the command-runner with either the parse-only or the full-command actions.

Creating the tests

Creating a new test-case for a command requires the path to the command, the path to an input file and a name for the test-case. If the given input file is not in the new format (described previously in this document), the command-runner will automatically convert it, treating it as the raw / plain-text input of the first step of the command.

Note that creating a test-case with the same name as an existing test-case will overwrite it, and there is currently no mechanism that raises a warning when this happens.

It is highly recommended to give test-cases meaningful, concise names, so anyone looking at a test can intuitively see which different cases the command has to handle.

There are no strict rules about which test-cases should be created for a command when designing its test. However, you might want to consider the following guidelines:

  • Choose quality over quantity; don't add test-cases that don't represent a case significantly different from the others.
  • Gradually add test-cases while writing the command; whenever you encounter a new type of data that your command has to handle, consider adding it as a new test-case.
  • If a bug is found in a command after it has been committed or released, it is a good idea to first create a test-case that reproduces the bug, and only then start fixing it.

There are several ways to run an awk program when testing. If the program is short, it is easiest to include it inline in the command that runs awk, like this:

```
awk 'program' input-file1 input-file2
```
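For instance, a concrete run of the inline form above (the program and input file here are samples, not taken from the knowledge base):

```shell
# Create a small sample input file, then run a one-line awk program inline.
printf 'a 1\nb 2\nc 3\n' > input-file1
awk '{ sum += $2 } END { print sum }' input-file1   # prints 6
```

This inline style is handy for quickly checking an AWK snippet against sample input before wiring it into a full .ind parser and its test-cases.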