Search for character classes but not replace them
-
@guy038 said in Search for character classes but not replace them:
( Note : \x{2028} is the LS char and \x{2029} is the PS char ). Don’t know why the tiny difference of two characters ?
LS and PS are among the characters classified as “end of line” characters. LS and PS will get matched by things such as \R and \v. If you don’t have dot matches newline enabled, then dot will not match either LS or PS. Searching for [[:unicode:]] will match both LS and PS, but a search for . does not match either of those. All of the other characters matched by \R and \v have character values less than \xFF. The LS and PS characters are the exception. I don’t know if that detail explains why @guy038 needed to special-case them.
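As a rough cross-check, here is a Python sketch; Python’s idea of a line boundary is not exactly the set Boost Regex uses for \R and \v, but it makes the same point that LS and PS are the only line-break-like code points above \xFF:

    # Ask Python which single code points split a string into two lines.
    line_breaks = [cp for cp in range(0x110000)
                   if len(('a' + chr(cp) + 'b').splitlines()) > 1]
    print([hex(cp) for cp in line_breaks])
    # ['0xa', '0xb', '0xc', '0xd', '0x1c', '0x1d', '0x1e', '0x85', '0x2028', '0x2029']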
-
@guy038 said:
for such a goal, I would simply use this regex
(?s).
Now, you don’t think I’d bother you, or revive this old thread, if I were finding things that simple, do you? I can easily show that it doesn’t work, on just a small bit of text:
💙☀🡢⮃🠧🠉…👍👌👎
I see and count 10 characters there.
If I do a Find All in Current Document, it yields 11 hits, but I only see 3 characters highlighted as matches:
Worse, if I put my caret at the start of line 1 and repeatedly press Find Next, I have to press it 18 times before it runs out of matches (Wrap around not enabled) – many of these matches are “zero-length”, not one character at a time.
I have yet to try some of the other suggestions…but I will.
-
Hello, @alan-kilborn, @mkupper and All,
@mkupper, a BIG thanks to you : your assumption was exact !
Indeed, my complicated regex

(?![\x00-\xFF]).[\x{D800}-\x{DFFF}]?

must be rewritten as

(?s-i)(?![\x00-\xFF]).[\x{D800}-\x{DFFF}]?

And, against my Total_Chars.txt file, this new formulation does give the same number of chars (325,334) as the [[:unicode:]] or the [^\x00-\xFF] regexes !
BTW, the nice thing about my Total_Chars.txt file is that it does not care whether a Unicode code point is assigned or unassigned to a character ! Probably, depending on your current font, a lot of glyphs will not be displayed correctly, but we don’t care about that. We just want to be able to search for any character from its code point \x{####} if inside the BMP, or from its surrogate pair \x{D###}\x{D###} if outside the BMP.
Presently, it just lists, one character after another, all valid characters from U+0000 to U+EFFFD, as described below ( as long as the Unicode Consortium does not decide to use planes 4 to 13 ) :

•------------------•--------------------•----------•-----------•-----------•---------•-----------•
| Range            | Description        | Status   |  Excluded |  Included | UTF-8   |     Bytes |
•------------------•--------------------•----------•-----------•-----------•---------•-----------•
| 0000 - 007F      | PLANE 0 - BMP      | Included |           |       128 | 1 byte  |       128 |
| 0080 - 07FF      | PLANE 0 - BMP      | Included |           |   + 1,920 | 2 bytes |     3,840 |
| 0800 - D7FF      | PLANE 0 - BMP      | Included |           |  + 53,248 | 3 bytes |   159,744 |
| D800 - DFFF      | SURROGATES zone    | EXCLUDED |   - 2,048 |           |         |           |
| E000 - F8FF      | PLANE 0 - PUA      | Included |           |   + 6,400 | 3 bytes |    19,200 |
| F900 - FDCF      | PLANE 0 - BMP      | Included |           |   + 1,232 | 3 bytes |     3,696 |
| FDD0 - FDEF      | NON-characters     | EXCLUDED |      - 32 |           |         |           |
| FDF0 - FFFD      | PLANE 0 - BMP      | Included |           |     + 526 | 3 bytes |     1,578 |
| FFFE - FFFF      | NON-characters     | EXCLUDED |       - 2 |           |         |           |
•------------------•--------------------•----------•-----------•-----------•---------•-----------•
| Plane 0 - BMP    | SUB-TOTALS         |          |   - 2,082 |  + 63,454 |         |   188,186 |
•------------------•--------------------•----------•-----------•-----------•---------•-----------•
| 10000 - 1FFFD    | PLANE 1 - SMP      | Included |           |  + 65,534 | 4 bytes |   262,136 |
| 1FFFE - 1FFFF    | NON-characters     | EXCLUDED |       - 2 |           |         |           |
| 20000 - 2FFFD    | PLANE 2 - SIP      | Included |           |  + 65,534 | 4 bytes |   262,136 |
| 2FFFE - 2FFFF    | NON-characters     | EXCLUDED |       - 2 |           |         |           |
| 30000 - 3FFFD    | PLANE 3 - TIP      | Included |           |  + 65,534 | 4 bytes |   262,136 |
| 3FFFE - 3FFFF    | NON-characters     | EXCLUDED |       - 2 |           |         |           |
| 40000 - DFFFF    | PLANES 4 to 13     | NOT USED | - 655,360 |           |         |           |
| E0000 - EFFFD    | PLANE 14 - SSP     | Included |           |  + 65,534 | 4 bytes |   262,136 |
| EFFFE - EFFFF    | NON-characters     | EXCLUDED |       - 2 |           |         |           |
| F0000 - FFFFD    | PLANE 15 - SPUA-A  | NOT USED |  - 65,534 |           |         |           |
| FFFFE - FFFFF    | NON-characters     | EXCLUDED |       - 2 |           |         |           |
| 100000 - 10FFFD  | PLANE 16 - SPUA-B  | NOT USED |  - 65,534 |           |         |           |
| 10FFFE - 10FFFF  | NON-characters     | EXCLUDED |       - 2 |           |         |           |
•------------------•--------------------•----------•-----------•-----------•---------•-----------•
| GRAND Totals     |                    |          | - 788,522 | + 325,590 |         | 1,236,730 |
•------------------•--------------------•----------•-----------•-----------•---------•-----------•

Adding the 3-byte UTF-8 BOM : 1,236,730 + 3 = 1,236,733 bytes in all, for 788,522 excluded + 325,590 included = 1,114,112 Unicode code points.
Of course, due to the line breaks produced by the LF and CR characters, this file contains three physical lines :

- A first line, from \x00 to \x0A, so 11 chars
- A second line, from \x0B to \x0D, so 3 chars
- A third long line, from \x0E to \x{EFFFD}, so 325,576 chars
If anyone is interested in this file, I could send it by e-mail. Just tell me ! But I suppose that it could be easily implemented with a Python script. Simply list, in a UTF-8-BOM file, all the ranges of characters defined as Included in the Status column of the above table ! You should get a file containing 325,590 characters, for an exact size of 1,236,733 bytes.
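For what it’s worth, a minimal Python sketch of that idea could look like this (the range list is simply read off the Included rows of the table above; the output name reuses the file name from this thread):

    # Write every "Included" code point, in order, to a UTF-8 file with a BOM.
    included = [
        (0x00000, 0x0D7FF),   # BMP, up to the surrogate zone
        (0x0E000, 0x0FDCF),   # PUA and compatibility area, up to the FDD0 non-characters
        (0x0FDF0, 0x0FFFD),   # rest of the BMP, without FFFE / FFFF
        (0x10000, 0x1FFFD),   # Plane 1  - SMP
        (0x20000, 0x2FFFD),   # Plane 2  - SIP
        (0x30000, 0x3FFFD),   # Plane 3  - TIP
        (0xE0000, 0xEFFFD),   # Plane 14 - SSP
    ]
    # newline='' stops Python from translating the raw \x0A / \x0D characters.
    with open('Total_Chars.txt', 'w', encoding='utf-8-sig', newline='') as f:
        for first, last in included:
            f.write(''.join(chr(cp) for cp in range(first, last + 1)))
    # Expected result: 325,590 characters and 1,236,733 bytes, BOM included.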
Now, if you decide to include all the NOT USED areas too, you’ll get a Total_UNICODE_Chars.txt file of 1,111,998 chars, for a size of 4,372,765 bytes, which would be exact for… eternity ;-))
Alan, I"ve just seen your last post ! Give me some time to study your example and I’ll answer you very soon !
Best Regards,
guy038
P.S. : I created a macro which changes any selected regex syntax \x{#####} into its corresponding surrogate pair \x{D###}\x{D###} :

<Macro name="Surrogates Pairs in Selection" Ctrl="no" Alt="no" Shift="no" Key="0">
    <Action type="3" message="1700" wParam="0" lParam="0" sParam="" />
    <Action type="3" message="1601" wParam="0" lParam="0" sParam="(?-i)\\x\{(10|[[:xdigit:]])[[:xdigit:]]{4}" />
    <Action type="3" message="1625" wParam="0" lParam="2" sParam="" />
    <Action type="3" message="1602" wParam="0" lParam="0" sParam="$0\x1F" />
    <Action type="3" message="1702" wParam="0" lParam="640" sParam="" />
    <Action type="3" message="1701" wParam="0" lParam="1609" sParam="" />
    <Action type="3" message="1700" wParam="0" lParam="0" sParam="" />
    <Action type="3" message="1601" wParam="0" lParam="0" sParam="(?i)(?:(1)|(2)|(3)|(4)|(5)|(6)|(7)|(8)|(9)|(A)|(B)|(C)|(D)|(E)|(F)|(10))(?=[[:xdigit:]]{4}\x1F\})|(?:(0)|(1)|(2)|(3)|(4)|(5)|(6)|(7)|(8)|(9)|(A)|(B)|(C)|(D)|(E)|(F))(?=[[:xdigit:]]{0,3}\x1F\})" />
    <Action type="3" message="1625" wParam="0" lParam="2" sParam="" />
    <Action type="3" message="1602" wParam="0" lParam="0" sParam="(?{1}0000)(?{2}0001)(?{3}0010)(?{4}0011)(?{5}0100)(?{6}0101)(?{7}0110)(?{8}0111)(?{9}1000)(?{10}1001)(?{11}1010)(?{12}1011)(?{13}1100)(?{14}1101)(?{15}1110)(?{16}1111)(?{17}0000)(?{18}0001)(?{19}0010)(?{20}0011)(?{21}0100)(?{22}0101)(?{23}0110)(?{24}0111)(?{25}1000)(?{26}1001)(?{27}1010)(?{28}1011)(?{29}1100)(?{30}1101)(?{31}1110)(?{32}1111)" />
    <Action type="3" message="1702" wParam="0" lParam="640" sParam="" />
    <Action type="3" message="1701" wParam="0" lParam="1609" sParam="" />
    <Action type="3" message="1700" wParam="0" lParam="0" sParam="" />
    <Action type="3" message="1601" wParam="0" lParam="0" sParam="([01]{10})([01]{10})(?=\x1F)" />
    <Action type="3" message="1625" wParam="0" lParam="2" sParam="" />
    <Action type="3" message="1602" wParam="0" lParam="0" sParam="110110\1\x1F}\\x{110111\2" />
    <Action type="3" message="1702" wParam="0" lParam="640" sParam="" />
    <Action type="3" message="1701" wParam="0" lParam="1609" sParam="" />
    <Action type="3" message="1700" wParam="0" lParam="0" sParam="" />
    <Action type="3" message="1601" wParam="0" lParam="0" sParam="(?:(0000)|(0001)|(0010)|(0011)|(0100)|(0101)|(0110)|(0111)|(1000)|(1001)|(1010)|(1011)|(1100)|(1101)|(1110)|(1111))(?=[[:xdigit:]]*\x1F\})|\x1F" />
    <Action type="3" message="1625" wParam="0" lParam="2" sParam="" />
    <Action type="3" message="1602" wParam="0" lParam="0" sParam="(?{1}0)(?{2}1)(?{3}2)(?{4}3)(?{5}4)(?{6}5)(?{7}6)(?{8}7)(?{9}8)(?{10}9)(?11A)(?12B)(?13C)(?14D)(?15E)(?16F)" />
    <Action type="3" message="1702" wParam="0" lParam="640" sParam="" />
    <Action type="3" message="1701" wParam="0" lParam="1609" sParam="" />
</Macro>
For instance, if you select the regex

\x{10000}\x72\x{27}\x0\x{EFFFD}

it is changed, with this macro, into

\x{D800}\x{DC00}\x72\x{27}\x0\x{DB7F}\x{DFFD}

which correctly matches the 𐀀R’ string !
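For readers who would rather not decipher the macro, here is a small Python sketch of the same transformation (an illustration only, not the macro itself; the function name is mine):

    import re

    def to_surrogate_pairs(pattern):
        r"""Rewrite \x{#####} tokens above the BMP as \x{D###}\x{D###} pairs."""
        def repl(m):
            cp = int(m.group(1), 16)
            if cp <= 0xFFFF:                 # BMP code points are left untouched
                return m.group(0)
            cp -= 0x10000
            hi = 0xD800 + (cp >> 10)         # high (lead) surrogate
            lo = 0xDC00 + (cp & 0x3FF)       # low (trail) surrogate
            return r'\x{%04X}\x{%04X}' % (hi, lo)
        return re.sub(r'\\x\{([0-9A-Fa-f]{1,6})\}', repl, pattern)

    print(to_surrogate_pairs(r'\x{10000}\x72\x{27}\x0\x{EFFFD}'))
    # \x{D800}\x{DC00}\x72\x{27}\x0\x{DB7F}\x{DFFD}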
-
Hi, @alan-kilborn, @mkupper and All,
Ah…, indeed, the (?s). regex seems to give incoherent results, and the total number of hits is erroneous, too :-(( However, note that the Count operation remains correct !
But, luckily, the [[:unicode:]] regex does work nicely ! Thus, I extended your example with three other characters which lie in the [\x00-\xFF] range, giving this string : Aé💙☀🡢⮃🠧🠉…👍👌👎. And, if we use the [\x00-\xFF]|[[:unicode:]] regex, it correctly matches 13 characters, as shown in the snapshot below :
Regarding my macro, I’m going to ask Don Ho to add a C++ equivalent ! A nice improvement would be to analyse the Search and Replace fields and replace all the \x{#####} regex syntaxes with their surrogate equivalents \x{D###}\x{D###}, for correct searches and replacements in all circumstances. What’s your feeling about it ?

BR
guy038
-
@guy038 said:
I’m going to ask Don Ho to add a C++ equivalent ! A nice improvement would be to analyse the Search and Replace fields and replace all the \x{#####} regex syntaxes with their surrogate equivalents \x{D###}\x{D###}, for correct searches and replacements in all circumstances. What’s your feeling about it ?
It doesn’t sound like something Don Ho would be interested in, but…go for it.
-
@guy038 said in Search for character classes but not replace them:
If anyone is interested in this file, I could send it by e-mail. Just tell me !
Is this the same Total_Chars.txt that you uploaded to a Google drive as part of this forum post?
-
@Alan-Kilborn said in Search for character classes but not replace them:
If I do a Find All in Current Document, it yields 11 hits, but I only see 3 characters highlighted as matches:
The three you see are “☀⮃…”, which can be searched using \x{2600}, \x{2B83}, and \x{2026}. All three are Basic Multilingual Plane (BMP) characters.

The other 7 characters, or “💙🡢🠧🠉👍👌👎”, are all extended Unicode. Here’s how to search for them using surrogate pairs:
- 💙 U+1F499 : \x{D83D}\x{DC99}
- 🡢 U+1F862 : \x{D83E}\x{DC62}
- 🠧 U+1F827 : \x{D83E}\x{DC27}
- 🠉 U+1F809 : \x{D83E}\x{DC09}
- 👍 U+1F44D : \x{D83D}\x{DC4D}
- 👌 U+1F44C : \x{D83D}\x{DC4C}
- 👎 U+1F44E : \x{D83D}\x{DC4E}
While Notepad++ and Scintilla seem to store text as UTF-8 the search function has the appearance of converting what we search for into UTF-16 strings and seems to convert the text from UTF-8 into UTF-16 on the fly when searching it. This seems like a lot of overhead. I have never dug hard into what happens under the hood. My guess is that the search computes the surrogate pairs and then extracts the lower 10 bits from each word and spreads the 20 bits out into where they would appear in UTF-8 encoded data. I think that would work and be fast for scanning UTF-8 encoded data.
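If it helps, the arithmetic behind those pairs is simple enough to show in a few lines of Python (just the standard UTF-16 formula, not a claim about what Scintilla actually does internally):

    def utf16_surrogates(cp):
        """Split a supplementary code point into its UTF-16 surrogate pair."""
        assert 0x10000 <= cp <= 0x10FFFF
        cp -= 0x10000
        return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

    hi, lo = utf16_surrogates(0x1F499)            # 💙
    print(hex(hi), hex(lo))                       # 0xd83d 0xdc99
    print(chr(0x1F499).encode('utf-8'))           # b'\xf0\x9f\x92\x99'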
-
@guy038 said in Search for character classes but not replace them:
correct searches and replacements in all circumstances. What’s your feeling about it ?
I can add this note, for what (if anything) it’s worth.
At some point in the development of Columns++, I realized that to get around some limitations in the Scintilla search interface I’d need to use Boost Regex directly. I really wanted, as part of that, to handle Unicode properly, as Unicode characters instead of as UTF-16 bytes. Boost Regex includes support for Unicode, but to do that it depends on ICU.
I could not figure out how to include the necessary dependencies (whatever they are) from ICU as part of a DLL compilation. All instructions discussed installing it at the operating system level. I didn’t want to tell users they had to install something separate system-wide. I gave up on that approach.
So then I thought I could at least write a proper iterator for UTF-32 instead of wchar_t. And ran into character traits. I thought seriously of trying to leverage the traits for wchar_t and “guess” at what to do outside the BMP. (Looking into this made it clear why Boost relies on ICU instead of doing it themselves.) I eventually gave up and implemented UTF-16/wchar_t, essentially what Notepad++ does. It works reasonably well with Windows (which is also UTF-16 as wchar_t) when searching for specific character sequences and/or working with characters in the BMP.
Full and proper Unicode support, as best I can figure out, involves a large amount of detail, which is continuously being updated. (For those who don’t know: not every Unicode character is a single Unicode code point. And unlike the UTF-8/16/32 relationship, there’s no fixed algorithm to tell you which code points combine with others. Then there’s knowing what’s a capital letter, what’s a lower case letter, which letters are equal when case ignored… none of it follows a formula.) If there’s a more compact, contained implementation than ICU, that would be great, but I couldn’t find one. (The C++ standards committee has punted and deprecated the little bit of Unicode support C++ ever had. There are types defined, but nothing that does anything useful with them.)
I did, however, discover after reading this thread that my search doesn’t handle [[:unicode:]] the way Notepad++ does. There must be something clever hidden in the Notepad++ implementation that I missed which lets it “understand” characters outside the basic multilingual plane.
-
@mkupper said:
which can be searched using…
Here’s how to search for them using surrogate pairs
Clearly you see why this isn’t a good answer to the original query?
I don’t want to search specifically, I want to search generically.
I started with (?s). as the simplest thing from this thread, as it was stated earlier that it “works”. I showed (using some specific characters) that this generic search didn’t work.
Sure, I can try [[:unicode:]] for what I’m trying to do, and see what else – problemwise – I run into.
-
@Alan-Kilborn said in Search for character classes but not replace them:
Sure, I can try [[:unicode:]] for what I’m trying to do, and see what else – problemwise – I run into.
I did an experiment with searching for [[:unicode:]] on @guy038’s Total_Chars.txt file and learned the following:
- It does not match \x{0000} to \x{00FF}
- It matches \x{0100} to \x{0177}
- It does not match Ÿ, which is \x{0178}
- It matches \x{0179} to \x{FFFF}
Starting at U+10000 it gets weird. I made a UTF-8 encoded test file that has 78343 lines where each line starts with a Unicode character starting at U+10000 and running up to U+10FFFF. Each character is followed by a tab and then notes about the character. For example line 15125 has:
🌵 U+1F335 \x{D83C}\x{DF35} \xF0\x9F\x8C\xB5
It lets me know the Unicode code point, the surrogate pairs, and the UTF-8 encoding for that character.
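In case anyone wants to reproduce such a file, a quick Python sketch along these lines should work (the file name and the filtering to assigned characters are my own assumptions; the exact line count will depend on the Unicode data used, while the file described above has 78343 lines):

    import unicodedata

    with open('supplementary_chars.txt', 'w', encoding='utf-8') as f:
        for cp in range(0x10000, 0x110000):
            name = unicodedata.name(chr(cp), '')
            if not name:                              # skip unassigned code points
                continue
            hi = 0xD800 + ((cp - 0x10000) >> 10)      # lead surrogate
            lo = 0xDC00 + ((cp - 0x10000) & 0x3FF)    # trail surrogate
            utf8 = ''.join('\\x%02X' % b for b in chr(cp).encode('utf-8'))
            f.write('%s\tU+%05X \\x{%04X}\\x{%04X} %s %s\n'
                    % (chr(cp), cp, hi, lo, utf8, name))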
- A count for [[:unicode:]] says 78343, which is the number of lines.
- A search for ^[[:unicode:]] or \R[[:unicode:]] gets zero hits.
- A search for [[:unicode:]]\t gets 78343 hits.
It seems that [[:unicode:]] is matching the second word of the surrogate pair but not the first. The first word of the pairs ranges from \x{D800} to \x{DBFF}, while the second word is always in the range \x{DC00} to \x{DFFF}. The weird thing is that [[:unicode:]] matches orphan words in the range \x{D800} to \x{DBFF} and also matches orphans in the range \x{DC00} to \x{DFFF}. It’s possible that Notepad++ does something special with those orphans, as you are not supposed to have them as orphans, plus there are intentional gaps in the UTF-8 encoding system so they can’t be encoded as UTF-8 … if you follow the rules.
-
Hello, @alan-kilborn, @mkupper, @coises and All,
First, @mkupper, you made the same mistake that I did when we spoke about the LS and PS characters, and for which you had given me the solution !

- Indeed, the regex (?i)[[:unicode:]] does not match the \x{0178}
character -
Luckily, the regexes
(?-i)[[:unicode:]]
, even(?s-i)[[:unicode:]]
, do match the\x{0178}
character as well as any character over\x{00FF}
Oh…, My God : regarding the
Total_Chars.txt
file, I’m really confused because I’ve completely forgotten that this file was accessible, among some others, on my google drive account ! So, for people interested, simply click on the link below :https://drive.google.com/file/d/1kYtbIGPRLdypY7hNMI-vAJXoE7ilRMOC/view?usp=sharing
As a security, once the
Total_Chars.txt
file loaded in Notepad++, you can right-click on its tab and choose theRead-Only
optionThank you, @mkupper, for refreshing my memory ;-))
Best Regards
guy038
-
-
@mkupper said in Search for character classes but not replace them:
It’s possible that Notepad++ does something special with those orphans, as you are not supposed to have them as orphans, plus there are intentional gaps in the UTF-8 encoding system so they can’t be encoded as UTF-8 … if you follow the rules.
I’ve been making attempts to follow this under debug in Visual Studio, but so far… I’m lost in the murky depths of Boost regex.
The iterator for UTF-8 documents is implemented in these files:
UTF8DocumentIterator.h
UTF8DocumentIterator.cxx

and you can see here how UTF-8 sequences are mapped to wchar_t/UTF-16.
But why . (dot) matches one of a surrogate pair but [[:unicode:]] matches both escapes me. (In my search in Columns++, both only match a single wchar_t. I don’t use the same iterator code, but I don’t know what I do that would produce different results, other than handling invalid UTF-8 differently.)

To make sense of invalid UTF-16, we’d have to look at the process by which Notepad++ loads UTF-16 and transforms it into UTF-8. I think there is some method of encoding wchar_t sequences that don’t represent valid UTF-16 as invalid, but still round-trip-able, UTF-8.
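For what it’s worth, here is a small Python illustration of that kind of mechanism (an analogy only, not what Notepad++ actually does): strict UTF-8 refuses lone surrogates, but a relaxed codec can still write them out as three-byte sequences that round-trip.

    # A lone lead surrogate is not legal UTF-8 ...
    try:
        '\ud83d'.encode('utf-8')
    except UnicodeEncodeError as e:
        print(e)        # 'utf-8' codec can't encode character '\ud83d' ...

    # ... but it can be smuggled through and decoded back unchanged.
    raw = '\ud83d\udc99'.encode('utf-8', 'surrogatepass')
    print(raw)                                               # b'\xed\xa0\xbd\xed\xb2\x99'
    print(raw.decode('utf-8', 'surrogatepass') == '\ud83d\udc99')   # True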
If you uncover a clue, I would welcome one.
-
@guy038 said in Search for character classes but not replace them:
(?-i)[[:unicode:]]
Thank you for doing that test, as I was thinking about doing something similar. I had seen that Ÿ - \x{0178} was the upper case form of ÿ - \x{00FF} and wondered if the failure to match was a one-off edge error. The failure to match still seems like a bug to me, unless the rule for (?-i)[[:unicode:]] is that it only matches if both the upper and lower case forms of a letter have a character code of \x{0100} or higher. FWIW, Notepad++'s convert case functions work on ÿŸ.

I did a search for other letters where one case of the letter was \x{0000} to \x{00FF} and the other was \x{0100} or higher and found
ß \x{00DF} \xC3\x9F LATIN SMALL LETTER SHARP S
ẞ \x{1E9E} \xE1\xBA\x9E LATIN CAPITAL LETTER SHARP S
(?i)[[:unicode:]] matches ẞ (U+1E9E) as expected. However, I also see that Notepad++'s case conversion functions fail to convert that letter to its upper or lower case version. A search using (?-i)ß or (?-i)ẞ also fails to match both cases of that letter. According to U+00DF and U+1E9E on fileformat.info, that pair should be case-convertible.
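A small Python aside on why that pair is awkward for simple case conversion (Python’s Unicode tables, shown only for illustration): the default uppercase mapping of ß is the two-letter SS, so a strict one-character-to-one-character conversion cannot produce ẞ.

    print('ß'.upper())                            # 'SS'  -- no single-character uppercase
    print('\u1e9e'.lower())                       # 'ß'   -- U+1E9E lowercases to U+00DF
    print('ß'.casefold() == '\u1e9e'.casefold())  # True  -- equal under case folding

-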
@Coises, the UTF8DocumentIterator code seems straightforward and does a more or less mindless conversion. It barely cares about invalid codes, etc. The logic silently allows overlong encodings where, for example, a 3-byte UTF-8 sequence is used to encode a value from 0x00 to 0x7F, which is normally a 1-byte sequence, or from 0x0080 to 0x07FF, which is normally a 2-byte sequence.

The logic also silently allows 4-byte UTF-8 sequences that encode 0x110000 to 0x1FFFFF, which is beyond the range assigned to Unicode. It will attempt to convert those values into surrogate pairs. The first word of the pair will overflow the 0xD800 to 0xDBFF range assigned to the first word. The second word is OK and will be a value in the range 0xDC00 to 0xDFFF, which is correct for the second word of the pair. I’d have to trace a bit more carefully, but the code also seems to silently allow 5- and 6-byte encodings that either encode values too small for their length (overlong) or will overflow the first word of the surrogate pair. Overall, it’s not a huge issue: it results in garbage in, garbage out, but it should not crash the editor unless something is unhappy about orphan parts of surrogate pairs.
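A tiny Python illustration of those two corner cases (the decoding helper is mine and mirrors the usual UTF-8/UTF-16 formulas, not the iterator’s actual code):

    def naive_decode_3(b0, b1, b2):
        """Decode a 3-byte UTF-8-style sequence with no overlong or range checks."""
        return ((b0 & 0x0F) << 12) | ((b1 & 0x3F) << 6) | (b2 & 0x3F)

    # Overlong: 'A' (U+0041) wrapped in a 3-byte sequence still decodes to 0x41.
    print(hex(naive_decode_3(0xE0, 0x81, 0x81)))                  # 0x41

    # Out of range: 0x110000 run through the surrogate-pair formula overflows the
    # lead-surrogate range (0xD800-0xDBFF) while the trail word still looks valid.
    cp = 0x110000 - 0x10000
    print(hex(0xD800 + (cp >> 10)), hex(0xDC00 + (cp & 0x3FF)))   # 0xdc00 0xdc00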
I’m now wondering if the internal storage is UTF-16. That would explain some of the search behavior.