Notepad++ URL processing of Cyrillic symbols
-
Hello,
I did some further tests to identify which Unicode chars would “break” the regex,
where “break” means the regex only matches up to the point where the char appears. The obvious ones:
0x0 = Null
0x9 = Tab
0xA = Line Feed
0xB = Vertical Tab
0xC = Form Feed
0xD = Carriage Return
0x20 = Space
0x85 = Next Line
0xA0 = No-Break Space
0x1680 = Ogham Space Mark
0x2000 = En Quad
0x2001 = Em Quad
0x2002 = En Space
0x2003 = Em Space
0x2004 = Three-Per-Em Space
0x2005 = Four-Per-Em Space
0x2006 = Six-Per-Em Space
0x2007 = Figure Space
0x2008 = Punctuation Space
0x2009 = Thin Space
0x200A = Hair Space
0x200C = Zero Width Non-Joiner
0x200D = Zero Width Joiner
0x200E = Left-To-Right Mark
0x200F = Right-To-Left Mark
0x2028 = Line Separator
0x2029 = Paragraph Separator
0x202F = Narrow No-Break Space
0x205F = Medium Mathematical Space
0x3000 = Ideographic Space
and some unusual ones.
To be more precise, a strange pattern emerged: every 0x?0085, 0x?2028 and 0x?2029 would “break” the regex.
0x10085 = Linear B Ideogram B105M Stallion
0x12028 = Cuneiform Sign Al Times Ush
0x12029 = Cuneiform Sign Alan
0x20085 = CJK Unified Ideograph 𠂅
0x22028 = CJK Unified Ideograph 𠂅
0x22029 = CJK Unified Ideograph 𢀨
The following are reported as Unknown - Unknown Script by unicode.org
(I guess that means they are valid values but reserved for future use):
0x30085, 0x32028, 0x32029, 0x40085, 0x42028, 0x42029, 0x50085, 0x52028, 0x52029, 0x60085, 0x62028, 0x62029, 0x70085, 0x72028, 0x72029, 0x80085, 0x82028, 0x82029, 0x90085, 0x92028, 0x92029, 0xa0085, 0xa2028, 0xa2029, 0xb0085, 0xb2028, 0xb2029, 0xc0085, 0xc2028, 0xc2029, 0xd0085, 0xd2028, 0xd2029, 0xe0085, 0xe2028, 0xe2029, 0xf0085, 0xf2028, 0xf2029, 0x100085, 0x102028, 0x102029
I can’t really explain why this happens.
Maybe someone has an idea or insight?
Also, are these three Han script symbols valid in the sense of actually being used in text?
Nevertheless, keeping in mind that currently no “unicode” URL gets formatted as a link,
I would still ask to replace the currently used regex with this one:
(?-s)[A-Za-z][A-Za-z0-9+.-]+://.*?(?=\s|$)
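As a rough sketch of what this proposal buys us: Notepad++ uses the Boost regex engine, but the behaviour can be approximated in Python's re module (an approximation, not the real engine; Python has DOTALL off by default, so the (?-s) prefix is simply dropped, and the sample URL is made up for illustration):

```python
import re

# Python approximation of the proposed Boost regex
#   (?-s)[A-Za-z][A-Za-z0-9+.-]+://.*?(?=\s|$)
# DOTALL is off by default in Python, so (?-s) is omitted.
url_re = re.compile(r'[A-Za-z][A-Za-z0-9+.-]+://.*?(?=\s|$)')

text = 'docs at https://ru.wikipedia.org/wiki/Пример and more'
match = url_re.search(text)
print(match.group(0))  # https://ru.wikipedia.org/wiki/Пример
```

The lazy `.*?` plus the whitespace look-ahead is what lets the Cyrillic path survive: the match only stops at the first whitespace character or at the end of the line.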
A couple of further tests are still outstanding - I hope to get them done in the next few days.
Cheers
Claudia -
The following tests have been passed successfully, tested within the range 0x0-0x10FFFF.
Start-of-file tests:
-url at start of file (no additional text) (also end of file test)
-url at start of file (followed by tab)
-url at start of file (followed by space)
-url at start of file (followed by eol)
-url at start of file (followed by tab and text)
-url at start of file (followed by space and text)
-url at start of file (followed by eol and text)
End-of-file tests:
-url at end of file (preceded by tab)
-url at end of file (preceded by space)
-url at end of file (preceded by eol)
-url at end of file (preceded by text and tab)
-url at end of file (preceded by text and space)
-url at end of file (preceded by text and eol)
Middle-of-file tests:
-url in the middle of a file (preceded and followed by tab)
-url in the middle of a file (preceded and followed by space)
-url in the middle of a file (preceded and followed by eol)
-url in the middle of a file (preceded and followed by text and tab)
-url in the middle of a file (preceded and followed by text and space)
-url in the middle of a file (preceded and followed by text and eol)
From my point of view it looks OK.
I’m going to open an enhancement request at GitHub.
Cheers
Claudia -
I have the same problem with Cyrillic. Please open an issue on GitHub: go to https://github.com/notepad-plus-plus/notepad-plus-plus/issues, sign in with your account, and create a New issue. It is also good to include this Disqus URL. -
:-) It has already been done ;-)
https://github.com/notepad-plus-plus/notepad-plus-plus/issues/2798
Cheers
Claudia -
Hi Claudia and All,
Reminder:
Unicode is organized in 17 planes, each composed of 65,536 code-points => 1,114,112 possible values! Only SIX planes are defined. These are:
- The BMP ( BASIC MULTILINGUAL Plane ) = Plane 0, from code-point U+0000 to code-point U+FFFF
- The SMP ( SUPPLEMENTARY MULTILINGUAL Plane ) = Plane 1, from code-point U+10000 to code-point U+1FFFF
- The SIP ( SUPPLEMENTARY IDEOGRAPHIC Plane ) = Plane 2, from code-point U+20000 to code-point U+2FFFF
- The SSP ( SUPPLEMENTARY SPECIAL-PURPOSE Plane ) = Plane 14, from code-point U+E0000 to code-point U+EFFFF
- The SPUA-A ( SUPPLEMENTARY PRIVATE USE Area-A ) = Plane 15, from code-point U+F0000 to code-point U+FFFFF
- The SPUA-B ( SUPPLEMENTARY PRIVATE USE Area-B ) = Plane 16, from code-point U+100000 to code-point U+10FFFF
Up to now, even with the recent Unicode 9.0 version, all the other planes, from 3 to 13, are NOT used and all the corresponding code-points, from U+30000 to U+DFFFF, are NOT assigned, except for the last two code-points of each plane, which are assigned as NONCHARACTERS.
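The plane arithmetic above can be sanity-checked in a couple of lines:

```python
# 17 planes × 65,536 code-points per plane, as stated above.
total = 17 * 65536
print(total)           # 1114112
print(hex(total - 1))  # 0x10ffff, the highest valid Unicode code-point
```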
So, Claudia:
- From your first list: the range \x{0000}, \x{0009} … \x{205F}, \x{3000} ( 30 values )
- From the second one: the values U+10085, U+12028, U+12029, U+20085, U+22028 and U+22029 ( 6 values )
- From your last list: the range U+30085 … U+102029 ( 42 values )
I built a test file containing all these characters, each preceded by the letter a and followed by the letter z.
Then I tried to determine all the 3-character strings aXz matched by the regex a\sz.
After some tests, I can affirm that the \s regex, in a file with a UNICODE encoding, matches ONLY the single characters of the following list:
- TABULATION ( \t )
- NEW LINE ( \n )
- VERTICAL TABULATION ( \x0B )
- FORM FEED ( \f )
- CARRIAGE RETURN ( \r )
- SPACE ( \x20 )
- NEXT LINE ( \x85 )
- NO-BREAK SPACE ( \xA0 )
- OGHAM SPACE MARK ( \x{1680} )
- EN QUAD ( \x{2000} )
- EM QUAD ( \x{2001} )
- EN SPACE ( \x{2002} )
- EM SPACE ( \x{2003} )
- THREE-PER-EM SPACE ( \x{2004} )
- FOUR-PER-EM SPACE ( \x{2005} )
- SIX-PER-EM SPACE ( \x{2006} )
- FIGURE SPACE ( \x{2007} )
- PUNCTUATION SPACE ( \x{2008} )
- THIN SPACE ( \x{2009} )
- HAIR SPACE ( \x{200A} )
- LINE SEPARATOR ( \x{2028} )
- PARAGRAPH SEPARATOR ( \x{2029} )
- NARROW NO-BREAK SPACE ( \x{202F} )
- IDEOGRAPHIC SPACE ( \x{3000} )
And, except for the MEDIUM MATHEMATICAL SPACE ( \x{205F} ), which is NOT matched by the \s regex, this list is identical to the list of characters that the UNICODE Consortium considers White_Space characters. Refer to the link below:
http://www.unicode.org/Public/UCD/latest/ucd/PropList.txt
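For comparison, here is a small sketch that enumerates what a different engine counts as \s. This uses Python's re module, not the Boost engine Notepad++ uses, so the sets need not be identical; notably, Python's \s DOES include U+205F, which may explain why different tools disagree on that character:

```python
import re
import sys
import unicodedata

# List every code-point that Python's re engine matches with \s.
# (Surrogates are skipped; they are not valid scalar values anyway.)
ws_re = re.compile(r'\s')
ws = [cp for cp in range(sys.maxunicode + 1)
      if not 0xD800 <= cp <= 0xDFFF and ws_re.match(chr(cp))]

for cp in ws:
    print(f'U+{cp:04X} {unicodedata.name(chr(cp), "<unnamed>")}')
```

Running this prints a list very close to the one above, plus a few extras (e.g. the C0 separators U+001C-U+001F and U+205F), which underlines that “what \s matches” is engine-specific.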
UPDATE on 02-17-2018: just look at the definitive list of Unicode BLANK characters, below:
Finally, as most of these “White_Space” characters are quite exotic and very rarely used in normal writing, the idea of using the \s syntax, in a look-ahead, as a limit to an Internet address seems quite pertinent!
Claudia, the new regex to determine all the contents of an address could also be written:
(?-s)[A-Za-z][A-Za-z0-9+.-]+://.*?(?=\s|\z)
Indeed, the case (?=\s) always applies, except when an Internet address ends the last line of a file without any line-break! That specific case is matched by the second alternative, (?=\z) ;-)
Best Regards,
guy038
P.S.:
Claudia, I haven’t found any spare time yet to have a look at your new version of the RegexTexter script, with the Time regex test option. Just be patient for a couple of days :-)
-
Hi Guy,
thank you for doing and researching this, and for the confirmation about the tests.
But I don’t get the same result for \x{205F}. As you can see, I used a Python script to add the char:
editor.appendText('a'+unichr(0x205f)+'z')
and it looks like it matched as well.
In regards to the time regex option, take your time, you don’t even have to waste your time doing it - if you find it useful, use it, otherwise chuck it into the bin. ;-)
Cheers
Claudia -
Please explain: what do I need to do with the regexp
for Notepad++ to process Cyrillic characters in the URL?
https://lh3.googleusercontent.com/-Rcx51vbIw0U/WGphx4PJ_MI/AAAAAAAAEV0/znXcaeFVKZE/s0/screenshot%25202017-01-02%2520001.jpg
Thanks in advance.
Sorry for the stupid question. :-) -
You can’t do anything. It was just a discussion between guy038 and me about a possible new regex.
An issue has been raised at GitHub and now it is up to Don to decide whether it gets changed or not.
Or, if you are familiar with C/C++ and Visual Studio, you could compile npp yourself with the changed regex.
Cheers
Claudia -
I hope these corrections will be made
Cheers
Alexandr -
Please give instructions on how to compile notepad++ with support for processing Cyrillic symbols in URLs.
Thanks in advance. -
Here is described how to build notepad++. Please use Visual Studio 2015 or 2017, as a recent commit changed this requirement.
In the source file …\notepad-plus-plus\PowerEditor\src\Notepad_plus.h you need to replace
#define URL_REG_EXPR "[A-Za-z]+://[A-Za-z0-9_\\-\\+~.:?&@=/%#,;\\{\\}\\(\\)\\[\\]\\|\\*\\!\\\\]+"
with a different regex, like the one from here. Make sure you do proper escaping.
So the steps needed are
- Install Visual Studio 2015 or VS2017 and the SDK (Software Development Kit)
- Install git software
- Clone the repo from https://github.com/notepad-plus-plus/notepad-plus-plus.git
- Modify the Notepad_plus.h file using Visual Studio
- Follow the instructions to compile npp as given on the GitHub page
- Copy the scilexer.dll from an official distribution (otherwise the integrity check will fail)
- Cross fingers.
Hope I didn’t forget anything.
Cheers
Claudia -
Please tell me the correct line, ready for replacement,
so that Notepad++ accepts Russian characters in the URL.
Sorry for the stupid question. :-)
Why can’t the creators add the fix to the code for everyone? -
Open the file Notepad_plus.h and change the following line:
//#define URL_REG_EXPR "[A-Za-z]+://[A-Za-z0-9_\\-\\+~.:?&@=/%#,;\\{\\}\\(\\)\\[\\]\\|\\*\\!\\\\]+"
#define URL_REG_EXPR "(?-s)[A-Za-z][A-Za-z0-9+.-]+://[^\\s]+?(?=\\s|\\z)"
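To see why the replacement matters, the old and new patterns can be compared outside Notepad++. The sketch below uses Python's re module as a stand-in for Boost (an approximation: Python spells end-of-string \Z instead of \z, and DOTALL is off by default so (?-s) is dropped; the sample URL is invented):

```python
import re

# Old regex: whitelist of ASCII URL characters, so Cyrillic cuts the match short.
old_re = re.compile(r'[A-Za-z]+://[A-Za-z0-9_\-\+~.:?&@=/%#,;\{\}\(\)\[\]\|\*\!\\]+')
# Proposed regex: anything non-whitespace, up to the first whitespace or end of text.
new_re = re.compile(r'[A-Za-z][A-Za-z0-9+.-]+://[^\s]+?(?=\s|\Z)')

text = 'wiki https://ru.wikipedia.org/wiki/Пример here'
print(old_re.search(text).group(0))  # https://ru.wikipedia.org/wiki/  (stops at the Cyrillic)
print(new_re.search(text).group(0))  # https://ru.wikipedia.org/wiki/Пример
```

The old whitelist simply has no Cyrillic letters in its character class, which is exactly the behaviour this thread is about.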
“Why can’t the creators add the fix to the code for everyone?”
It is still only an issue, so as long as no one makes a proper pull request there
is little chance that it gets implemented. Unfortunately, my working agreements
do not allow me to share code on GitHub, SourceForge, …, so I can’t do it, at least
for the moment.
Cheers
Claudia -
Maybe the developers can make a correction?
What about moving the definition of this regexp to a config file,
so that anybody who needs to can change it without recompilation?
And update the FAQ on how to add support for national symbols in URL recognition.
I very much hope that this correction will be made.