PRs not running CI


Hi, I just noticed that some PRs don’t appear to be running CI checks such as continuous-integration/jenkins/pr-merge. For example, #4077 was merged but doesn’t appear to have run that check.

Why are some PRs landing without these CI checks? What am I missing?


#2 did pass the CI checks.

We do require CI to pass before merging. Sometimes we merge while tests are still pending if the change has nothing to do with them (e.g., a wording change in the docs).


Thanks for bringing this up. Perhaps we should send a PR to the contributor and review guide to clarify this?


Ah, OK, I know what you are talking about. Because we use a customized Jenkins, it does not appear on the Checks tab in GitHub. It does report all the statuses, though, and open PRs need to pass all of those checks; see some pending PRs that show the Jenkins status check (still not visible on the Checks tab).


PR #4083 shows that 4 CI pipelines, all prefixed with the name
windows_mac_build…, ran.

Likewise PR #4077.

Other PRs show that 5 CI pipelines are run; the additional one is:

PR #4077 has been merged to main; now when we run the CI checks equivalent to continuous-integration/jenkins/pr-merge, those tests fail because of PR #4077.

My question is: why did PR #4077 bypass the CI pipeline called continuous-integration/jenkins/pr-merge, which would have caught the failure before the PR was merged?


OK, let me try to clarify again: the Jenkins status checks did appear and were required to merge PR #4083, but they are not in the Checks tab (perhaps because Jenkins is not installed as a GitHub App the way Azure is).
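To spell out the distinction: a classic Jenkins integration reports through the commit status API, not the check-runs API that backs the Checks tab, so its results are visible even when the tab is empty. Here is a minimal sketch that filters Jenkins contexts out of a hypothetical payload shaped like GitHub's combined-status response (the sample data is made up for illustration):

```python
# Hypothetical example payload, shaped like the response of
# GET /repos/{owner}/{repo}/commits/{ref}/status (the combined status API).
combined_status = {
    "state": "success",
    "statuses": [
        {"context": "continuous-integration/jenkins/pr-merge", "state": "success"},
        {"context": "windows_mac_build", "state": "success"},
    ],
}

def jenkins_statuses(combined):
    # Keep only the per-context statuses reported by Jenkins.
    return [s for s in combined["statuses"]
            if s["context"].startswith("continuous-integration/jenkins")]

for s in jenkins_statuses(combined_status):
    print(s["context"], s["state"])
```

This is why the check is enforced at merge time even though the Checks tab, which only lists check runs created by GitHub Apps, never shows it.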



OK, I understand now. Digging through the conversation in PR #4077, I can see the results of all 5 stages. Now I just need to figure out why our environment gives different results from the upstream CI… thanks for the explanation.


Here is the CI log for PR #4077 from when it was merged, so it did indeed pass CI.


There could be two possible reasons:

  • Differences in the Docker environment: unfortunately the Docker build can change as dependencies change; we have tried to pin most of them. Our current solution is to always use a fixed version tag, which you can test against.
  • Flaky test cases: some tests can fail due to randomness in the test data. It is good to have those tests; we just need to identify and fix the flaky ones. The failure might disappear if you rerun the CI.
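On the second point, the usual fix for randomness-driven flakiness is to pin the RNG seed so the test data is identical on every CI run. A minimal sketch (the helper name is hypothetical, not from this repo):

```python
import random

def generate_test_data(n, seed=None):
    # Hypothetical test-data helper: without a fixed seed, every CI run
    # draws different samples, so a borderline assertion can flake.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Unseeded: two calls produce different data -- a common source of flaky tests.
run_a = generate_test_data(5)
run_b = generate_test_data(5)

# Seeded: the data is identical on every run, so the test is deterministic
# and a CI rerun cannot silently change the outcome.
fixed_a = generate_test_data(5, seed=42)
fixed_b = generate_test_data(5, seed=42)
assert fixed_a == fixed_b
```

If a test genuinely needs random coverage, logging the seed on failure lets you reproduce the exact run instead of rerunning CI and hoping it passes.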