https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2259/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html

Starting subtest: common-hpd-after-suspend
Subtest common-hpd-after-suspend: SUCCESS (10.860s)
(kms_chamelium:3002) igt_chamelium-CRITICAL: Test assertion failure function chamelium_rpc, file ../lib/igt_chamelium.c:303:
(kms_chamelium:3002) igt_chamelium-CRITICAL: Failed assertion: !chamelium->env.fault_occurred
(kms_chamelium:3002) igt_chamelium-CRITICAL: Last errno: 113, No route to host
(kms_chamelium:3002) igt_chamelium-CRITICAL: Chamelium RPC call failed: libcurl failed to execute the HTTP POST transaction, explaining: Failed to connect to 192.168.1.224 port 9992: No route to host
Moving to IGT as this is something the tests need to learn how to deal with.
The CI Bug Log issue associated with this bug has been updated.

### New filters associated

* CHAMELIUM: igt@kms_chamelium@*suspend* - warn - Last errno: 113, No route to host
  - https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_11299/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2246/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_11973/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2256/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2259/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html
A CI Bug Log filter associated with this bug has been updated:

{- CHAMELIUM: igt@kms_chamelium@*suspend* - warn - Last errno: 113, No route to host -}
{+ CHAMELIUM: igt@kms_chamelium@* - warn/fail - Last errno: 113, No route to host +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_284/fi-kbl-7500u/igt@kms_chamelium@hdmi-cmp-planes-random.html
* https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_284/fi-kbl-7567u/igt@kms_chamelium@hdmi-cmp-planes-random.html
* https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_285/fi-kbl-7500u/igt@kms_chamelium@dp-audio.html
* https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_285/fi-kbl-7567u/igt@kms_chamelium@dp-audio.html
Other than occasional flukes here and there, the biggest offender seems to be the *after-suspend* family of subtests. The most representative scenario looks like this:

1. schedule something on the chamelium
2. go to sleep
3. the scheduled thing triggers
4. wake up
5. check whether we see the changes reflected through DRM, and pass
6. exit, triggering the chamelium cleanup

During step 6 the RPC fails because the network is not up yet after waking up, which overwrites the test result with a WARN.

Solution: introduce chamelium_wait_online(int timeout), which pings the device and bails out after the timeout, and always call it after waking up (see the sketch below).

Everything else looks like sporadic network issues, but we have seen those only a few times in the last months. We can investigate them further once we get rid of the main source of the noise.

Impact on users: none, it's a CI/test issue.

Impact on testing: potentially huge, as some of the tests may leave us with chamelium ports unplugged, adding to the flip-flopping of the 2x tests.

Bumping priority to high, as it is easy to solve and important for keeping the CI noise down.
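For illustration, a minimal sketch of what that flow could look like. chamelium_wait_online() and the chamelium_ping() probe are hypothetical names made up here, not the actual implementation that may land; chamelium_schedule_hpd_toggle() and igt_system_suspend_autoresume() are existing IGT library calls.

```c
#include <time.h>
#include <unistd.h>

#include "igt.h"
#include "igt_chamelium.h"

/* Hypothetical probe: any cheap RPC that only succeeds once the
 * device's XML-RPC server is reachable again. */
bool chamelium_ping(struct chamelium *chamelium);

/* Proposed helper (name assumed): poll the device and bail out after
 * @timeout seconds, so a dead network fails loudly here instead of
 * clobbering the test result during cleanup. */
static void chamelium_wait_online(struct chamelium *chamelium, int timeout)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	while (!chamelium_ping(chamelium)) {
		clock_gettime(CLOCK_MONOTONIC, &now);
		igt_assert_f(now.tv_sec - start.tv_sec < timeout,
			     "Chamelium unreachable %ds after resume\n",
			     timeout);
		usleep(500 * 1000); /* retry every 500 ms */
	}
}

static void test_hpd_after_suspend(struct chamelium *chamelium,
				   struct chamelium_port *port)
{
	/* 1. schedule an HPD toggle to fire while we are asleep */
	chamelium_schedule_hpd_toggle(chamelium, port, 15000, false);

	/* 2.-4. suspend; the scheduled toggle fires; we resume */
	igt_system_suspend_autoresume(SUSPEND_STATE_MEM, SUSPEND_TEST_NONE);

	/* always wait for the device before the next RPC, so the
	 * cleanup in step 6 cannot hit "No route to host" */
	chamelium_wait_online(chamelium, 20);

	/* 5. check that the hotplug is reflected through DRM, then pass */
}
```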
https://patchwork.freedesktop.org/patch/324290/
Related bug about Chamelium not having all ports plugged in: https://bugs.freedesktop.org/show_bug.cgi?id=110940
https://gitlab.freedesktop.org/drm/igt-gpu-tools/commit/ce130a078c85ce3f2bdb02047cba5b72702a79c3

And from pre-merge CI:

#### Possible fixes ####

* igt@kms_chamelium@common-hpd-after-suspend:
  - fi-kbl-7567u: [WARN][15] ([fdo#109380]) -> [PASS][16]

[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6734/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_3363/fi-kbl-7567u/igt@kms_chamelium@common-hpd-after-suspend.html
Merged and fixed, issue not seen in 2 weeks :-)