introduce back go-junit-report for parsing go test output #82

Closed

Conversation

marcuscaisey (Contributor) commented Feb 21, 2023

I tried to stack this PR on top of #81, but I think I need to be able to push that branch to this repo to use it as a base for this PR. I'll still leave both open; if both get merged, then we may want to just revert this one (as has happened before). I've commented on the change which is accounted for in the other PR.


The test output from go_test is currently quite hard to read and can also be inaccurate.

For example, given this test file:

package foo_test

import (
	"fmt"
	"os"
	"testing"

	"github.com/stretchr/testify/assert"
)

func Test1(t *testing.T) {
	fmt.Fprintln(os.Stdout, "Test1 stdout")
	fmt.Fprintln(os.Stderr, "Test1 stderr")
	assert.Equal(t, 1, 2, "Test1 failure")

	t.Run("Subtest1", func(t *testing.T) {
		fmt.Fprintln(os.Stdout, "Test1/Subtest1 stdout")
		fmt.Fprintln(os.Stderr, "Test1/Subtest1 stderr")
		assert.Equal(t, 1, 2, "Test1/Subtest1 failure")

		t.Run("NestedSubtest", func(t *testing.T) {
			fmt.Fprintln(os.Stdout, "Test1/Subtest1/NestedSubtest stdout")
			fmt.Fprintln(os.Stderr, "Test1/Subtest1/NestedSubtest stderr")
			assert.Equal(t, 1, 2, "Test1/Subtest1/NestedSubtest failure")
		})
	})

	t.Run("Subtest2", func(t *testing.T) {
		fmt.Fprintln(os.Stdout, "Test1/Subtest2 stdout")
		fmt.Fprintln(os.Stderr, "Test1/Subtest2 stderr")
		assert.Equal(t, 1, 2, "Test1/Subtest2 failure")
	})
}

func Test2(t *testing.T) {
	fmt.Fprintln(os.Stdout, "Test2 stdout")
	fmt.Fprintln(os.Stderr, "Test2 stderr")
	assert.Equal(t, 1, 2, "Test2 failure")
}

we get the following output:

12:34:05.092   ERROR: //foo:test failed
Fail: //foo:test   0 passed   0 skipped   5 failed   0 errored Took 10ms
Failure:  in Test1
Test1/Subtest2 stdout
Test1/Subtest2 stderr
    foo_test.go:31: 
        	Error Trace:	foo_test.go:31
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest2
        	Messages:   	Test1/Subtest2 failure
Standard error:
Test1 stdout
Test1 stderr
    foo_test.go:14: 
        	Error Trace:	foo_test.go:14
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1
        	Messages:   	Test1 failure
Test1/Subtest1 stdout
Test1/Subtest1 stderr
    foo_test.go:19: 
        	Error Trace:	foo_test.go:19
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest1
        	Messages:   	Test1/Subtest1 failure
Test1/Subtest1/NestedSubtest stdout
Test1/Subtest1/NestedSubtest stderr
    foo_test.go:24: 
        	Error Trace:	foo_test.go:24
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest1/NestedSubtest
        	Messages:   	Test1/Subtest1/NestedSubtest failure
Test1/Subtest2 stdout
Test1/Subtest2 stderr
    foo_test.go:31: 
        	Error Trace:	foo_test.go:31
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest2
        	Messages:   	Test1/Subtest2 failure

Failure:  in Test1/Subtest1
Test1/Subtest2 stdout
Test1/Subtest2 stderr
    foo_test.go:31: 
        	Error Trace:	foo_test.go:31
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest2
        	Messages:   	Test1/Subtest2 failure
--- FAIL: Test1 (0.00s)
Failure:  in Test1/Subtest1/NestedSubtest
Test1/Subtest2 stdout
Test1/Subtest2 stderr
    foo_test.go:31: 
        	Error Trace:	foo_test.go:31
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest2
        	Messages:   	Test1/Subtest2 failure
--- FAIL: Test1 (0.00s)
    --- FAIL: Test1/Subtest1 (0.00s)
Failure:  in Test1/Subtest2
Test1/Subtest2 stdout
Test1/Subtest2 stderr
    foo_test.go:31: 
        	Error Trace:	foo_test.go:31
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest2
        	Messages:   	Test1/Subtest2 failure
--- FAIL: Test1 (0.00s)
    --- FAIL: Test1/Subtest1 (0.00s)
        --- FAIL: Test1/Subtest1/NestedSubtest (0.00s)
Failure:  in Test2
Test2 stdout
Test2 stderr
    foo_test.go:38: 
        	Error Trace:	foo_test.go:38
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test2
        	Messages:   	Test2 failure
Standard error:
Test2 stdout
Test2 stderr
    foo_test.go:38: 
        	Error Trace:	foo_test.go:38
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test2
        	Messages:   	Test2 failure

//foo:test 5 tests run in 6ms; 0 passed, 5 failed
    Test1                         FAIL  0s
    Test1/Subtest1                FAIL  0s
    Test1/Subtest1/NestedSubtest  FAIL  0s
    Test1/Subtest2                FAIL  0s
    Test2                         FAIL  0s
1 test target and 5 tests run; 0 passed, 5 failed.
Total time: 50ms real, 10ms compute.

There are 5 assertions which fail; however, 10 assertion failures are shown. All expected assertion failures are output under the Standard error header, and then an assertion failure is also output for each failing test/subtest. However, the assertion failures under each Failure: in XXX header are not always the correct ones. For Test1 and its subtests, the failure from Test1/Subtest2 is output under each Failure: in XXX header.

I can see that go-junit-report v0.9.0 (technically a couple of commits after this tag) was introduced to the Please repo in 2020 and then reverted due to it not being able to handle more complex input (thought-machine/please#995). This PR introduces it back, now that the tool seems to have matured a bit. Between v0.9.0 and v2.0.0, 81 test cases have been added and it seems to pass the eye test on the above test file:

12:44:52.798   ERROR: //foo:test failed: Failed
Fail: //foo:test   0 passed   0 skipped   5 failed   0 errored Took 10ms
Failure:  in Test1
Failed
Test1 stdout
Test1 stderr
    foo_test.go:14: 
        	Error Trace:	foo_test.go:14
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1
        	Messages:   	Test1 failure
Failure:  in Test1/Subtest1
Failed
Test1/Subtest1 stdout
Test1/Subtest1 stderr
    foo_test.go:19: 
        	Error Trace:	foo_test.go:19
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest1
        	Messages:   	Test1/Subtest1 failure
Failure:  in Test1/Subtest1/NestedSubtest
Failed
Test1/Subtest1/NestedSubtest stdout
Test1/Subtest1/NestedSubtest stderr
    foo_test.go:24: 
        	Error Trace:	foo_test.go:24
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest1/NestedSubtest
        	Messages:   	Test1/Subtest1/NestedSubtest failure
Failure:  in Test1/Subtest2
Failed
Test1/Subtest2 stdout
Test1/Subtest2 stderr
    foo_test.go:31: 
        	Error Trace:	foo_test.go:31
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test1/Subtest2
        	Messages:   	Test1/Subtest2 failure
Failure:  in Test2
Failed
Test2 stdout
Test2 stderr
    foo_test.go:38: 
        	Error Trace:	foo_test.go:38
        	Error:      	Not equal: 
        	            	expected: 1
        	            	actual  : 2
        	Test:       	Test2
        	Messages:   	Test2 failure
//foo:test 5 tests run in 10ms; 0 passed, 5 failed
    Test1                         FAIL  0s
    Test1/Subtest1                FAIL  0s
    Test1/Subtest1/NestedSubtest  FAIL  0s
    Test1/Subtest2                FAIL  0s
    Test2                         FAIL  0s
1 test target and 5 tests run; 0 passed, 5 failed.
Total time: 50ms real, 10ms compute.

Now, only 5 assertion failures are output and each one is under the correct Failure: in XXX heading.

The version that I've added to this repo is my fork, where I've set the message attribute of the <skipped> node correctly so that the skip reason is output correctly by Please (without this change, the skipped test output looks like Reason: Skipped vs Reason: external_test.go:32: Failing on Alpine currently). I've made a PR to upstream this: jstemmer/go-junit-report#158.
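
For illustration, the change boils down to what ends up in the message attribute of the <skipped> element in the JUnit XML. A minimal sketch using encoding/xml (the struct below is purely illustrative, not go-junit-report's actual type):

package main

import (
	"encoding/xml"
	"fmt"
)

// skipped is a stand-in for the <skipped> element of a JUnit XML test case;
// go-junit-report's real types differ, this only shows the message attribute.
type skipped struct {
	XMLName xml.Name `xml:"skipped"`
	Message string   `xml:"message,attr"`
}

func main() {
	before := skipped{Message: "Skipped"}                                         // what Please reports today
	after := skipped{Message: "external_test.go:32: Failing on Alpine currently"} // with the fork's change
	for _, s := range []skipped{before, after} {
		b, _ := xml.Marshal(s)
		fmt.Println(string(b))
	}
}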

I'm happy to throw any other test files you can think of at go-junit-report to make sure it handles them properly 🙏

@marcuscaisey marcuscaisey marked this pull request as ready for review February 21, 2023 01:04
@@ -912,7 +915,12 @@ def _go_install_module(name:str, module:str, install:list, src:str, outs:list, d
    ]

    if binary:
        outs = [f'pkg/{CONFIG.OS}_{CONFIG.ARCH}/bin/{name}']
        # This decouples the name of the target from the name of the installed binary when it's unambiguous what the
marcuscaisey (Contributor Author) commented:

this is the change from #81

marcuscaisey (Contributor Author) commented:

After talking with @Tatskaari, a better option looks to be parsing the output of https://pkg.go.dev/cmd/test2json.
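
For reference, a rough sketch of what that approach could look like: go test -json emits one JSON event per line, and the field names below follow the TestEvent struct documented for cmd/test2json. The grouping logic here is just illustrative, not what Please would necessarily do:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// testEvent mirrors the fields of cmd/test2json's TestEvent that we care about.
type testEvent struct {
	Action  string  // "run", "output", "pass", "fail", "skip", ...
	Package string
	Test    string  // empty for package-level events
	Elapsed float64 // seconds, set on terminal events
	Output  string  // set on "output" events
}

func main() {
	outputs := map[string][]string{} // test name -> captured output lines
	results := map[string]string{}   // test name -> final action

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev testEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil || ev.Test == "" {
			continue // ignore malformed lines and package-level events
		}
		switch ev.Action {
		case "output":
			outputs[ev.Test] = append(outputs[ev.Test], ev.Output)
		case "pass", "fail", "skip":
			results[ev.Test] = ev.Action
		}
	}

	// Print each test's result, attaching its own output only to failures.
	for test, result := range results {
		fmt.Printf("--- %s: %s\n", result, test)
		if result == "fail" {
			for _, line := range outputs[test] {
				fmt.Print(line)
			}
		}
	}
}

Something like go test -json ./foo | go run parse.go (parse.go being the hypothetical sketch above) then groups each test's output under the right test name, which is essentially what we need here.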
