
Support truncating from the left, width padding/truncation incorrect with some unicode input

Open · moh-eulith opened this issue 2 years ago · 12 comments

Left truncation is the default in the inspiration for this crate (Log4j). It usually produces more relevant results, e.g. for filenames, logger names, etc.

Both styles of truncation should be supported, with right truncation (current implementation) as the default for backward compatibility.

Log4j uses a minus sign on the length (a negative length) to flip the truncation. So it might look like this for log4rs: {f:>-15.15} (left-truncate or left-pad the filename field to exactly 15 chars)

moh-eulith — Oct 13 '22 03:10

Sounds like a worthy enhancement, happy to mentor if you'd like to take a stab at it.

estk — Oct 13 '22 15:10

I had a look at this. Unicode encoding makes this diabolically difficult. The existing code already runs into this issue, so maybe fixing the Unicode handling is the first step. Here is a failing test case inspired by this Stack Overflow question:

    #[test]
    #[cfg(feature = "simple_writer")]
    fn right_align_formatter_hard_unicode() {
        let pw = PatternEncoder::new("{({l} {m}):>15}");

        let mut buf = vec![];
        pw.encode(
            &mut SimpleWriter(&mut buf),
            &Record::builder()
                .level(Level::Info)
                .args(format_args!("\u{01f5}\u{0067}\u{0301}"))
                .build(),
        )
        .unwrap();
        assert_eq!(buf.len(), 18);
    }

a simple println!("\u{01f5}\u{0067}\u{0301}"); produces ǵǵ (two visible characters; not sure the pasting is working right), but the padding thinks it's 3 characters.

moh-eulith — Oct 17 '22 02:10

@moh-eulith can you open a Draft pr with the above failing test?

estk — Oct 17 '22 17:10

Ok so I've kinda gotten to the bottom of this. There are in fact 3 Unicode "characters", i.e. codepoints, here, but they result in only 2 graphemes, and therefore the message has a Unicode width of 2. I have a fix for the test thanks to the unicode_segmentation crate. But I think I need to think on this for a minute to decide how to proceed.

What do you think?

use unicode_segmentation::UnicodeSegmentation;

// A byte starts a new UTF-8 character unless it is a continuation byte (0b10xxxxxx).
fn is_char_boundary(b: u8) -> bool {
    (b as i8) >= -0x40
}

fn char_starts(buf: &[u8]) -> usize {
    if let Ok(s) = std::str::from_utf8(buf) {
        s.graphemes(true).count()
    } else {
        buf.iter().filter(|&&b| is_char_boundary(b)).count()
    }
}
....
    #[test]
    fn char_starts_gs() {
        let buf = "\u{01f5}\u{0067}\u{0301}";
        let starts = char_starts(buf.as_bytes());
        assert_eq!(2, starts);
    }

estk — Oct 17 '22 18:10

I'm not clear on what the buf contents are that get passed to the write function. Is the content guaranteed to land on a byte, Unicode code point, or grapheme boundary?

moh-eulith — Oct 17 '22 18:10

Negative: an io::Write just takes bytes and writes them; there is no guarantee of any particular segmentation, which is why this is really only a best-effort check.

estk — Oct 17 '22 18:10

So a full fix would require some manner of buffering. The max encoding length for UTF-8 is 4 bytes, plus another 2 for a combining mark, so that's a 6-byte buffer. I'm by no means a Unicode expert, but it appears combining is not limited to a single mark per grapheme, so theoretically we have to look ahead past all combining codepoints.

Additionally, codepoints U+035C–U+0362 are "double diacritics, diacritic signs placed across two letters."

A test case for this insanity might look like: println!("a\u{0308}\u{032C}\u{0362}e\u{0301}\u{0331}") which prints: ͏ä̬͢é̱

Honestly, I don't know how far this should go. I think enough buffering for 1 codepoint + 1 combining mark is practical enough for real text.

moh-eulith — Oct 17 '22 18:10

Counting graphemes seems to be enough.

    #[test]
    fn char_starts_gs() {
        let buf = "\u{01f5}\u{0067}\u{0301}";
        let starts = char_starts(buf.as_bytes());
        assert_eq!(2, starts);
        let buf = "a\u{0308}\u{032C}\u{0362}e\u{0301}\u{0331}";
        let starts = char_starts(buf.as_bytes());
        assert_eq!(2, starts);
    }

estk — Oct 17 '22 20:10

Agreed that grapheme counting is the right way to go. However, because we have no guarantee on how buf is segmented, it leads to failures:

    #[test]
    fn char_starts_gs() {
        let buf = "\u{01f5}\u{0067}\u{0301}";
        test_all_splits(buf, 2);
        let buf = "a\u{0308}\u{032C}\u{0362}e\u{0301}\u{0331}";
        test_all_splits(buf, 2);
    }

    fn test_all_splits(buf: &str, len: usize) {
        let b = buf.as_bytes();
        for i in 1..b.len() {
            assert_eq!(len, char_starts(&b[..i]) + char_starts(&b[i..]));
        }
    }

moh-eulith — Oct 17 '22 20:10

Sorry, should've made the example simpler. I think the minimal test case that fails test_all_splits is:

        let buf = "e\u{0331}";
        test_all_splits(buf, 1);

moh-eulith — Oct 17 '22 20:10

No worries, yep, I agree there will be miscalculations when a write spans a Unicode sequence; we just don't have enough information to do the right thing.

estk — Oct 17 '22 20:10

Is it rude to just ignore Unicode and put a note in the docs saying "Unicode support is undefined and may result in unexpected trimmings"? :sweat_smile:

DavidZemon — Mar 28 '24 15:03