In bug 101204, an example file is detected as MAC-CENTRALEUROPE even though it is actually UTF-8 (entirely ASCII except for a single non-ASCII character). The point is that the file is technically valid in both encodings.
With the current code, the confidence for UTF-8 (without language awareness) is 0.505, whereas it is 0.535104 for MAC-CENTRALEUROPE. That is quite a low confidence for both, and the choice of one over the other mostly comes down to chance.
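To illustrate why both detections are technically valid, here is a small Python demonstration (not uchardet code; Python's `mac_latin2` codec corresponds to Mac Central European). Legacy single-byte encodings assign a character to every byte value, so a file like this decodes without error under either encoding, and the detector has to fall back on statistics:

```python
text = "Hello, caf\u00e9!"             # all ASCII except one character, as in the bug
data = text.encode("utf-8")            # the é becomes the two bytes 0xC3 0xA9

as_utf8 = data.decode("utf-8")         # round-trips to the original text
as_mac_ce = data.decode("mac_latin2")  # also succeeds: every byte value is mapped

print(repr(as_utf8))
print(repr(as_mac_ce))
```

Both decodes succeed; they just disagree on what the two non-ASCII bytes mean, which is exactly why the confidence scores end up so close.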
IMO the tie should be broken by language detection, as is already the case for single-byte encodings.
The attached file is code, but that is still close enough to natural English that I believe the confidence for the pair (UTF-8, English) should rise above a generic UTF-8 detection.
I have other source code files which are, I think, all ASCII except for my name in the copyright line of the license header: Sébastien. Those should be detected as UTF-8, but they are instead recognized as other encodings, for example IBM852.
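For example (an illustration in Python; `cp852` is Python's codec name for IBM852), the UTF-8 bytes of that name decode without error as IBM852 too — the é simply comes out as two mojibake characters — so nothing structural forces the detector toward UTF-8:

```python
name = "S\u00e9bastien"
data = name.encode("utf-8")      # é -> the two bytes 0xC3 0xA9

# IBM852 maps all 256 byte values, so this decode cannot fail;
# the é just turns into two unrelated single-byte characters.
as_ibm852 = data.decode("cp852")

print(repr(as_ibm852))
```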
UTF-8 is the encoding of my locale. For local files, I think it makes sense to prioritize the encoding of the current locale. In GtkSourceView, the file loader (which is not based on uchardet) takes a list of encodings to try one by one, sorted in decreasing order of priority, and that list depends on the current locale (the list can be different for each language).
Maybe uchardet could take such a list of encodings as input, so that when there is no clear winner it chooses the one with the highest priority.
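A rough sketch of the proposed behavior (this is NOT uchardet's actual API; `pick_encoding`, the `margin` threshold, and the candidate dictionary are hypothetical): when the top confidences are within a small margin of each other, defer to a locale-dependent priority list, the way GtkSourceView's loader does:

```python
def pick_encoding(candidates, priority_list, margin=0.05):
    """candidates: dict mapping encoding name -> detection confidence.
    priority_list: encodings in decreasing locale-dependent priority."""
    best = max(candidates, key=candidates.get)
    # Candidates scoring within `margin` of the best are a near-tie:
    # picking among them by raw score is essentially a coin toss.
    close = [enc for enc, conf in candidates.items()
             if candidates[best] - conf <= margin]
    for preferred in priority_list:
        if preferred in close:
            return preferred
    return best

# With the scores from this report and a UTF-8 locale:
print(pick_encoding({"UTF-8": 0.505, "MAC-CENTRALEUROPE": 0.535104},
                    ["UTF-8", "ISO-8859-15"]))  # -> UTF-8
```

With a clear winner (say 0.9 vs 0.5), the priority list would never be consulted; it only matters in ambiguous cases like this one.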
-- GitLab Migration Automatic Message --
This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.
You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/uchardet/uchardet/issues/2.