CS 272 Software Development

CS 272-01, CS 272-02 • Spring 2022

Project Testing

You must verify your project passes the functionality tests to earn the project tests grade, as well as before every code review required to earn your project final release grade. This guide walks through that testing process.

This guide assumes you have already set up your project in Eclipse.

It is important to work on and test the projects iteratively, and to pass the tests locally before attempting to run them remotely.

Running JUnit Tests

You must use the JUnit 5 tests provided with the project-tests repository to determine if your project is meeting the required functionality. The suite of tests for each project is given by the Project#Test.java files in the src subdirectory. For example, the tests for Project 1 are provided by the Project1Test.java file.
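While the exact setup and helper methods differ, each of these tests essentially runs your Driver class with a specific set of arguments and then compares the output file your code produces against an expected file. Below is a simplified, hypothetical sketch of that idea; the class name, tag, structure, and assertion are illustrative only, not the actual test code:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    public class ExampleOutputTest {
        @Test
        @Tag("test1") // the provided tests group related tests using tags like this
        public void testSimpleHello() throws Exception {
            // run the project the same way a user would from the command line
            String[] args = {
                "-text", "input/text/simple/hello.txt",
                "-index", "actual/index-simple-hello.json"
            };
            Driver.main(args);

            // compare the generated output to the expected output
            List<String> actual = Files.readAllLines(Path.of("actual/index-simple-hello.json"));
            List<String> expected = Files.readAllLines(Path.of("expected/index/index-simple/index-simple-hello.json"));
            Assertions.assertEquals(expected, actual, "Output did not match the expected file.");
        }
    }

Reading through the actual Project#Test.java file is worth your time, since it shows exactly which arguments and files each test uses.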

While you are initially working on the project, focus on individual tests or individual groups of tests. There is no need to run all of the tests every time; in fact this can make debugging harder! See the Homework Testing guide for different ways you can run individual JUnit tests. Here is an example run configuration:

Screenshot

Pay attention to the test output. The output tells you where to find the actual and expected output files and what caused the test to fail. It also tells you the arguments that were passed to your Driver class, which is helpful for debugging. The full output, which can be copied and pasted using the copy button, includes the following text:

Actual File:
    actual/index-simple-hello.json
Expected File:
    expected/index/index-simple/index-simple-hello.json
Arguments:
    -text input/text/simple/hello.txt -index actual/index-simple-hello.json
Message:
    Difference detected on line: 2.

This gives the actual arguments passed to Driver by the test, the actual and expected files being compared, and where the first difference was detected. You can use the Compare Editor in Eclipse to compare the files side-by-side for debugging, or use the Run Configurations in Eclipse to enter the same arguments manually.
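If the difference is not obvious from the Compare Editor, you can also reproduce the comparison yourself. The following is a hypothetical standalone helper (not part of the provided tests) that reports the first line where two files differ, using the file paths from the example above. Run it with the working directory set to the SearchEngineTests project so the relative paths resolve:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class FirstDifference {
        public static void main(String[] args) throws IOException {
            List<String> actual = Files.readAllLines(Path.of("actual/index-simple-hello.json"));
            List<String> expected = Files.readAllLines(Path.of("expected/index/index-simple/index-simple-hello.json"));

            int lines = Math.max(actual.size(), expected.size());
            for (int i = 0; i < lines; i++) {
                String a = i < actual.size() ? actual.get(i) : "<missing>";
                String e = i < expected.size() ? expected.get(i) : "<missing>";
                if (!a.equals(e)) {
                    // report 1-based line numbers to match the test output
                    System.out.println("Difference detected on line: " + (i + 1));
                    System.out.println("  actual:   " + a);
                    System.out.println("  expected: " + e);
                    return;
                }
            }
            System.out.println("Files match.");
        }
    }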

To save space, the tests automatically delete your output files if they match the expected output. Only output files for failing tests will be kept.

Running Driver

When you first start, very little template code is provided, and the provided tests only check the final output of your project (unlike the homework, which has tests for individual methods). Depending on your approach, you may want to pass your own arguments to the Driver class using a “Run Configuration” and manually inspect the output.
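When you do run Driver directly, keep in mind the arguments are simple flag/value pairs. A bare-bones sketch of how a main method might read them is shown below; it only illustrates how the flags map to paths and is not a suggested design for your actual Driver:

    import java.nio.file.Path;

    public class Driver {
        public static void main(String[] args) {
            Path text = null;
            Path index = null;

            // look for each flag and treat the next argument as its value
            for (int i = 0; i < args.length - 1; i++) {
                if (args[i].equals("-text")) {
                    text = Path.of(args[i + 1]);
                }
                else if (args[i].equals("-index")) {
                    index = Path.of(args[i + 1]);
                }
            }

            System.out.println("-text  -> " + text);
            System.out.println("-index -> " + index);
        }
    }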


  1. Go to the “Run” menu in the menubar at the top of Eclipse and select “Run Configurations…” from the dropdown.

  2. Click on the “Java Application” category in the left view pane and click the “New Configuration” button (looks like a blank file with a +).

  3. Enter any name you’d like in the “Name:” text box.

  4. Next to the “Project:” text box, click the “Browse” button and select the “SearchEngine” project.

  5. Next to the “Main class:” text box, click the “Search…” button and select “Driver” from the list. At this point, your setup should look like this:

    Screenshot

  6. Click on the “Arguments” tab. Enter the following into the “Program arguments” text area:

     -text input/text/simple/hello.txt -index actual/index-simple-hello.json
    

    …or if you are on Windows:

     -text input\text\simple\hello.txt -index actual\index-simple-hello.json
    

    This will run Driver with the same arguments as the first Project 1 test.

  7. This next part is really important! Remember, all of the test files are in a SEPARATE repository. Change the “Working directory:” setting to the “Other:” choice.

  8. Click the “Workspace…” button and select the “SearchEngineTests” project. At this point, your setup should look like this:

    Screenshot

  9. Click the “Apply” and “Run” buttons.


You can keep running this same “Run Configuration” while debugging. If you are failing a specific JUnit test, you can copy/paste the arguments used by the test (provided in the failure output) to debug your code.
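If you ever want to reproduce the same run outside of Eclipse, a rough command-line equivalent is shown below. This assumes Eclipse’s default bin output folder, the project locations described above, and a Driver class in the default package (add the package prefix if yours is not):

    cd SearchEngineTests
    java -cp ../SearchEngine/bin Driver -text input/text/simple/hello.txt -index actual/index-simple-hello.json

The important detail is the same as step 7: the command runs from the SearchEngineTests directory so the relative input and actual paths resolve correctly.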

Running Maven

Most of the time, if you are passing the tests locally you will also pass them remotely. However, there are slight differences between how the tests are run by Eclipse and how they are run by Github Actions.

If you want to test out the same approach used by Github Actions, you have to use Maven to run the tests instead of the JUnit interface in Eclipse.


Here is a screenshot of the run configuration you must create:

Screenshot

Follow these steps to create this run configuration:

  1. Go to the “Run” menu in the menubar at the top of Eclipse and select “Run Configurations…” from the dropdown.

  2. Click on the “Maven Build” category in the left view pane and click the “New Configuration” button (looks like a blank file with a +).

  3. Enter any name you’d like in the “Name:” text box.

  4. Under the “Base directory:” text box, click the “Workspace” button and select the “SearchEngine” project.

  5. Enter the following into the “Goals:” text box:

     clean test
    

    This combination will remove all previously-compiled class files, re-compile the main code and test code, and run the JUnit tests. (A rough command-line equivalent of this run configuration is sketched after these steps.)

  6. Next to the “Parameter Name” list, click the “Add…” button to add a parameter. Enter test for the “Name:” text box and Project#Test* for the “Value:” text box, where # is the project number. For example, enter Project1Test* for Project 1. Click the “OK” button when done.

  7. [OPTIONAL] Click the “Add…” button to add another parameter. Enter groups and test# for the name and value, where # is the project (e.g. test1). This tells Maven to run just the tests that Github Actions focuses on for this project. This can be skipped if you want to run all of the tests regardless.

  8. [OPTIONAL] Click the “Add…” button to add another parameter. Enter compileOptionFail and true for the name and value. This tells Maven to fail if any compile warnings are detected. If there are warnings, it will report those warnings and exit (it will not run the tests). This is not required to earn project test credit, but will be required to sign up for a code review.
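For reference, Eclipse passes those parameters to Maven as -D properties, so a roughly equivalent terminal command (run from the SearchEngine project directory, with the optional parameters included) is:

    mvn clean test -Dtest="Project1Test*" -Dgroups="test1" -DcompileOptionFail=true

Here test and groups are standard Maven Surefire properties, while compileOptionFail is specific to the course pom.xml as described in step 8.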


These steps are really only required for fine-grained debugging and can be skipped most of the time.

You must test your code remotely before you can earn credit for the functionality of your project and before you can sign up for a code review appointment.

This process begins by creating a release on Github. This will trigger the Github Action that verifies your project functionality.

Creating Releases

Creating releases will familiarize you with versioning your code.

  1. After passing all of the tests locally and pushing your latest commits to Github, follow the Creating Releases steps to draft a new release of your project code on Github.

  2. Choose a “tag”, or version number, to assign to the code at this stage. Out in the “real world” you will likely use semantic versioning, which we will roughly mimic in class.

    Specifically, you must name your release v#.#.# where the first # is the project number (1, 2, 3, or 4), the second # is the number of code reviews you’ve had for that project, and the last # is the number of patches/fixes you released in between code reviews for that project.

    For example, your first release should be v1.0.0 because it is for project 1, you have not had any code reviews yet, and you have not had any other releases yet. If your code does not pass the tests remotely, then you have to fix your code and re-release your project as v1.0.1 since you now have 1 prior release. After your first code review, the next release will be v1.1.0 (notice how the last number reset to 0).

    The release v2.3.4 means this release is for project 2, you have had 3 code reviews for project 2 so far, and this is the 4th release since your last code review of project 2. (It also means there must be a prior v2.3.3 release made before this one.)

  3. Enter the tag number in the “Choose a tag” dropdown and click the “Create new tag on publish” option. If you are working ahead and have the project code in a different branch, select that branch in the “Target” dropdown.

    Screenshot

  4. Click the “Publish release” button. You can leave the title and description blank. You should click the “This is a pre-release” checkbox unless you have passed code review, but it is not something we enforce currently.

You can see a sample release on the template repository.

Generally, you should not delete releases even if they are not passing tests.

Check Release Action

Creating or editing a release automatically triggers the “Check project release” action in the “Actions” tab. The run will have the same title as the release tag:

Screenshot

To view details, click on the run title. The yellow circle icon indicates the run or step is still in progress. The red circle icon indicates the run or step failed (either because of setup issues or failing tests). The green circle icon indicates the run or step completed successfully (i.e. all tests passed).

You can see an example run in the template repository.

Debugging Failures

Pending

Walkthrough Video

The following 33 minute video gives an overview of the projects, how to test projects locally (starting at 13:53), how to test projects remotely (starting at 24:33), and how to get your first functionality grade (starting at 29:38).

Keep in mind the version of Java, appearance of Eclipse, and appearance of the Github Action runs may be slightly different between semesters.


https://drive.google.com/file/d/13JeruWvyf6d2wXbXxGoXFGbM10uL8UD1/view?usp=sharing