Deleting lines that repeat the first 15 characters
-
Thank you for the clarification. It worked perfectly with the file exactly as instructed. Thank you!
With another pre-sorted file, which has no duplicates or blank lines, when I perform the main regex to find and remove duplicate lines, `(?-s)(.{15}).\R\K(?:\1.\R)+`, the replace box returns “Replace All: 1 occurrence was replaced” no matter how many times I repeat the replace. If there are no duplicates, I would expect a report of 0 occurrences found.
The file is found at
https://mangoguy.sharefile.com/d-s7b2d2a8b3fb459cb
Thank you,
Doug -
@mangoguy said:
Replace All:1 occurrence was replaced
Formatting note: Your regular expression was stated as `(?-s)(.{15}).\R\K(?:\1.\R)+` but I think you really meant `(?-s)(.{15}).*\R\K(?:\1.*\R)+`, as per one of @guy038’s regexes above. In the future, wrap any exact text you want to post here in ` (backticks) to hopefully avoid any confusion. For example, if you type in `hello` it should appear here as hello, without any special characters having trouble. You can also start a new line with four spaces and then your text, to provide some data that won’t be specially interpreted.
I see the same behavior as you when trying this regex replacement on your newest data file. Note that the file is NOT modified by this replacement (the disk icon on its tab remains blue after the “replacement” occurs; the starting point was a freshly loaded DATA2.txt file). I’m at a loss to explain why it is saying “1 replacement”. This thread has brought out some really odd things!
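As a quick sanity check outside Notepad++, the intent of the corrected pattern can be reproduced with Python’s `re` module. Python has no `\K`, so this sketch (the sample data is invented for illustration) captures the whole first line of a run and writes it back in the replacement instead:

```python
import re

# Three lines; the first two share their first 15 characters.
sample = ("AAAAAAAAAAAAAAA rest 1\n"
          "AAAAAAAAAAAAAAA rest 2\n"
          "BBBBBBBBBBBBBBB rest 3\n")

# Keep the first line of each duplicate-prefix run, drop the repeats.
# (?m)^ anchors every attempt at a line start; \2 re-matches the prefix.
pattern = r"(?m)^((.{15}).*\n)(?:\2.*\n)+"
result = re.sub(pattern, r"\1", sample)
print(result)   # only "rest 1" and "rest 3" survive
```

In Notepad++ the `\K` form achieves the same thing more directly, by excluding the first line of the run from the match and replacing the rest with nothing.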
Note that it IS possible to see non-zero replacements listed and have a file NOT be modified (try a Find-what of `^` and a Replace-with of `$0`, also with Regular expression search mode), but this is very different from your replacement action. -
Hi, @mangoguy, @scott-sumner and All,
To begin with, Doug, I was a bit surprised that both the numbers at column 19 and the first 15 characters look equally sorted in your Data2.txt file! So I hope you understood that the first sort must be performed after the use of the Column Editor. Indeed, these numbers are added only so that the original order can be restored after the suppression of all the duplicate lines! Just a remark :-))
Now, mangoguy and others, keep in mind that, when a rather complicated regex is applied against a large file, a complete failure may occur, with only 1 match, which is, simply, a selection of the entire file contents :-((
So I began to investigate this problem more deeply! First of all, I verified that the first 15 characters of the lines of your Data2.txt file contain absolutely no duplicates. And, like Scott and you, I noticed that the regex `(?-s)(.{15}).*\R\K(?:\1.*\R)+` wrongly selects the whole file, after a while, instead of finding 0 results.
At this point, I simply thought about reducing the file to find the upper limit beyond which we get into trouble. It happened that, with my old Win XP laptop, the limit is about 67,000 lines. At this value, you get the correct result: no match. But with, for instance, 67,100 lines, we get the incorrect single match!
Note that with the similar regex `(?-s)(.{15}).*\R\K(?:\1.*\R)`, without the `+` sign at its end, this limit increases to about 68,830 lines!
So I wondered: could it be that the lack of matches, combined with the need to scan a large amount of data, causes that false positive? So, strangely enough, I decided to add false positives about every 65,000 lines, as below:

    ---------------
    ---------------

So I added these two lines of 15 dashes at lines 65,000, 130,000, 195,000, 260,000, 325,000, 390,000 and 455,000. In addition, I duplicated the first line as well as the last line of the file.
If my intuition was correct, the regex would match, of course, all the second lines of dashes (the false positives), but also the first duplicate, at line 2, and the second duplicate, at the end of the file. This would prove that the search process can proceed normally throughout a large file! I ran a Find All in Current Document and… Bingo! I obtained the Find Result panel below, with the expected results:

    Search "(?-s)(.{15}).*\R\K(?:\1.*\R)+" (9 hits in 1 file)
     new 1 (9 hits)
      Line 2: 01,02,2013,1000 000001 ,22.107,22.513,20.976,21.151,0
      Line 65003: ---------------
      Line 130002: ---------------
      Line 195002: ---------------
      Line 260002: ---------------
      Line 325002: ---------------
      Line 390002: ---------------
      Line 455002: ---------------
      Line 458420: 12,31,2015,2559 458404 ,3.270,3.270,3.538,3.527,0
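For what it’s worth, the sentinel insertion described above can be scripted rather than done by hand. A minimal sketch in plain Python (the function name, interval, and 15-dash sentinel are illustrative choices, not anything built into Notepad++):

```python
def add_sentinels(lines, every=65000, sentinel="---------------\n"):
    """Return the lines with a pair of identical sentinel lines
    appended after every `every` input lines; the second copy of each
    pair is a guaranteed 'duplicate first 15 characters' hit."""
    out = []
    for i, line in enumerate(lines, start=1):
        out.append(line)
        if i % every == 0:
            out.extend([sentinel, sentinel])
    return out

# Tiny demonstration, with a sentinel pair after every 2 lines:
print(add_sentinels(["a\n", "b\n", "c\n"], every=2))
```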
Therefore, it seems that too large a gap between two successive matches causes the complete failure of the regex search process!? I just hope that, for most users, this gap of about 65,000 lines (perhaps we’d better speak in bytes!), noted on my outdated laptop, is really greater :-))
Instead of adding some false positives to huge files, we could also search for a string which occurs every x lines! For instance, starting with the Data2.txt file, I built a file made of five copies of Data2.txt: I just changed the first character of each line, taking, successively, 3 and 4, then 5 and 6, … instead of 0 and 1, in order to keep a list of lines without any duplicates :-)
This file contained 126,274,854 bytes and 2,292,022 lines. So I decided that, in addition to the detection of duplicates with the regex `(?-s)(.{15}).*\R\K(?:\1.*\R)+`, I would search for lines 50,000, 100,000, and so on, with the regex `(5|0)0000\x20`. To that purpose, I just used the list of numbers at column 19, copied five times!
So the final regex is, simply, the two alternatives: `(?-s)(.{15}).*\R\K(?:\1.*\R)+|(5|0)0000\x20`. Again, I clicked on the Find All in Current Document button and, after 6m 49s (Waoooou!), the Find Result panel displayed, at last:

    Search "(?-s)(.{15}).*\R\K(?:\1.*\R)+|(5|0)0000\x20" (47 hits in 1 file)
     new 1 (47 hits)
      Line 2: 01,02,2013,1000 000001 ,22.107,22.513,20.976,21.151,0
      Line 50001: 02,11,2014,2536 050000 ,0.357,0.380,0.270,0.310,0
      Line 100001: 03,24,2014,1115 100000 ,5.494,5.191,5.494,5.299,0
      Line 150001: 05,05,2017,1346 150000 ,0.301,0.301,0.270,0.289,0
      Line 200001: 06,13,2013,1107 200000 ,0.519,0.588,0.516,0.588,0
      Line 250001: 07,23,2013,1437 250000 ,0.070,0.064,0.073,0.071,0
      Line 300001: 09,04,2013,1158 300000 ,2.314,2.368,2.314,2.362,0
      Line 350001: 10,06,2017,1031 350000 ,0.201,0.138,0.201,0.151,0
      Line 400001: 11,08,2012,1254 400000 ,1.263,1.253,1.284,1.284,0
      Line 450001: 12,21,2012,1043 450000 ,3.838,3.815,3.858,3.823,0
      Line 508405: 22,11,2014,2536 050000 ,0.357,0.380,0.270,0.310,0
      Line 558405: 23,24,2014,1115 100000 ,5.494,5.191,5.494,5.299,0
      Line 608405: 25,05,2017,1346 150000 ,0.301,0.301,0.270,0.289,0
      Line 658405: 26,13,2013,1107 200000 ,0.519,0.588,0.516,0.588,0
      Line 708405: 27,23,2013,1437 250000 ,0.070,0.064,0.073,0.071,0
      Line 758405: 29,04,2013,1158 300000 ,2.314,2.368,2.314,2.362,0
      Line 808405: 30,06,2017,1031 350000 ,0.201,0.138,0.201,0.151,0
      Line 858405: 31,08,2012,1254 400000 ,1.263,1.253,1.284,1.284,0
      Line 908405: 32,21,2012,1043 450000 ,3.838,3.815,3.858,3.823,0
      Line 966809: 42,11,2014,2536 050000 ,0.357,0.380,0.270,0.310,0
      Line 1016809: 43,24,2014,1115 100000 ,5.494,5.191,5.494,5.299,0
      Line 1066809: 45,05,2017,1346 150000 ,0.301,0.301,0.270,0.289,0
      Line 1116809: 46,13,2013,1107 200000 ,0.519,0.588,0.516,0.588,0
      Line 1166809: 47,23,2013,1437 250000 ,0.070,0.064,0.073,0.071,0
      Line 1216809: 49,04,2013,1158 300000 ,2.314,2.368,2.314,2.362,0
      Line 1266809: 50,06,2017,1031 350000 ,0.201,0.138,0.201,0.151,0
      Line 1316809: 51,08,2012,1254 400000 ,1.263,1.253,1.284,1.284,0
      Line 1366809: 52,21,2012,1043 450000 ,3.838,3.815,3.858,3.823,0
      Line 1425213: 62,11,2014,2536 050000 ,0.357,0.380,0.270,0.310,0
      Line 1475213: 63,24,2014,1115 100000 ,5.494,5.191,5.494,5.299,0
      Line 1525213: 65,05,2017,1346 150000 ,0.301,0.301,0.270,0.289,0
      Line 1575213: 66,13,2013,1107 200000 ,0.519,0.588,0.516,0.588,0
      Line 1625213: 67,23,2013,1437 250000 ,0.070,0.064,0.073,0.071,0
      Line 1675213: 69,04,2013,1158 300000 ,2.314,2.368,2.314,2.362,0
      Line 1725213: 70,06,2017,1031 350000 ,0.201,0.138,0.201,0.151,0
      Line 1775213: 71,08,2012,1254 400000 ,1.263,1.253,1.284,1.284,0
      Line 1825213: 72,21,2012,1043 450000 ,3.838,3.815,3.858,3.823,0
      Line 1883617: 82,11,2014,2536 050000 ,0.357,0.380,0.270,0.310,0
      Line 1933617: 83,24,2014,1115 100000 ,5.494,5.191,5.494,5.299,0
      Line 1983617: 85,05,2017,1346 150000 ,0.301,0.301,0.270,0.289,0
      Line 2033617: 86,13,2013,1107 200000 ,0.519,0.588,0.516,0.588,0
      Line 2083617: 87,23,2013,1437 250000 ,0.070,0.064,0.073,0.071,0
      Line 2133617: 89,04,2013,1158 300000 ,2.314,2.368,2.314,2.362,0
      Line 2183617: 90,06,2017,1031 350000 ,0.201,0.138,0.201,0.151,0
      Line 2233617: 91,08,2012,1254 400000 ,1.263,1.253,1.284,1.284,0
      Line 2283617: 92,21,2012,1043 450000 ,3.838,3.815,3.858,3.823,0
      Line 2292022: 92,31,2015,2559 458404 ,3.270,3.270,3.538,3.527,0
As you can see, the duplicate at line 2 and the second duplicate, at line 2,292,022, were correctly found and reported!
Conclusion :
Apparently, when too large an amount of text separates two consecutive occurrences of the regex, it breaks the normal search process, wrongly returning a single selection of the entire file contents!? So, Mangoguy, as no duplicates exist in your Data2.txt file, it’s obvious that we get into trouble as soon as your file exceeds a certain size limit!
In other words, if, in huge files, you get a lot of occurrences, throughout the file contents, this should help the search process to correctly finish the job :-))
Best Regards,
guy038
-
So @guy038’s results and conclusions are interesting. I decided to see what would happen if a PythonScript-based search was conducted. To that end I came up with:
    matches = []

    def match_found(m):
        matches.append(m.span(0))

    editor.research(r'(?-s)(.{15}).*\R\K(?:\1.*\R)+', match_found)

    for (start, _) in matches:
        print editor.lineFromPosition(start) + 1

    print 'done'
With that script and the DATA2.txt file, I found that with 67025 lines in the file I would see “done” printed in the PS console window, but with one more line, 67026, I would get this:
    Traceback:
      editor.research(r'(?-s)(.{15}).*\R\K(?:\1.*\R)+', match_found)
    <type 'exceptions.RuntimeError'>: The complexity of matching the regular
    expression exceeded predefined bounds. Try refactoring the regular
    expression to make each choice made by the state machine unambiguous.
    This exception is thrown to prevent "eternal" matches that take an
    indefinite period time to locate.
This seems consistent with @guy038’s findings that somewhere between 67000 and 67100 lines there is a “problem”.
So I think the meaning of all this is that Notepad++ is not a great tool for the OP’s task. :-(
No one wants to be trying to solve one problem, only to encounter problems with the method they are using to solve that problem. Thus, I’d advise, if this is a recurring need, to have a serious look at the short bit of standard Python (or rewrite in your language of choice) that I provided much earlier in this thread. :-D
-
Hello, @mangoguy, @scott-sumner and All,
I’m extremely confused, indeed! I made a serious beginner’s mistake in my previous regex, which I had been testing intensively :-(( My God, of course! The RIGHT regex is `(?-s)^(.{15}).*\R\K(?:\1.*\R)+` and NOT the regex `(?-s)(.{15}).*\R\K(?:\1.*\R)+` :-))
Do you see the difference? Well, it’s just the anchor `^`, after the modifier `(?-s)`!
Indeed, let’s try the wrong regex again, assuming the test list below:

    91,02,2013,1000 000001 ,22.107,22.513,20.976,21.151,0
    13,1000 000002 ,20.976,21.724,20.620,21.336,0
    13,1000 000003 ,21.344,22.116,21.336,21.918,0
    13,1000 000004 ,21.918,21.918,20.797,20.797,0
So, first, the caret is right before the digit 9 of the first line, and the fifteen characters `91,02,2013,1000` cannot be found elsewhere. Then, as no anchor `^` (beginning of line) exists, the regex engine moves ahead one position, between the digits 9 and 1 of the first line. Again, as the fifteen characters `1,02,2013,1000b` do not exist further on, the regex engine moves ahead one more position, now examining the string `,02,2013,1000bb`… and so on, until it reaches the fifteen characters `13,1000bbb00000`, which, this time, can be found at the beginning of lines 2, 3 and 4! Just imagine the work to accomplish for the 458,404 lines of the Data2.txt file :-(((
lines of the Data2.txt file :-((( Note : the lowercase letter
b
, above, stands for a space character )To easily see the problem, just get rid of the
\K
syntax, forming the regex(?-s)(.{15}).*\R(?:\1.*\R)+
. If you click on the Find Next button, it selects, after test on positions 1, 2,…and 8, from the two last digits of year 2013 till the end of text. But, if you’re using the regex(?-s)^(.{15}).*\R(?:\1.*\R)+
, with the anchor^
, it correctly gets the identical lines2
,3
and4
, regarding theirs first15
characters !
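The runaway behaviour of the unanchored form can be reproduced in a few lines with Python’s `re` module (no `\K` here, but the position-by-position scanning is the same; the sample text is invented so that the 15-character prefix of a later line appears in the middle of the first line):

```python
import re

text = ("xx,ABCDEFGHIJKLMNO tail\n"   # embeds the prefix at offset 3
        "ABCDEFGHIJKLMNO other\n"     # starts with that same prefix
        "ZZZZZ\n")

unanchored = re.compile(r"(.{15}).*\n(?:\1.*\n)+")
anchored = re.compile(r"^(.{15}).*\n(?:\1.*\n)+", re.M)

# Without ^, the engine retries at every character offset and finally
# succeeds *inside* line 1, swallowing line 2 as a bogus "duplicate":
print(unanchored.search(text).start())   # 3

# With ^, attempts happen only at line starts; no two lines share
# their first 15 characters, so there is simply no match:
print(anchored.search(text))             # None
```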
So, Doug, to sum up: using the right regex `(?-s)^(.{15}).*\R\K(?:\1.*\R)+` against your Data2.txt file does not find any occurrence (in about 5s), which is the expected result, as we know, by construction, that the 458,404 lines of this file are all different :-)
Best Regards,
guy038
-
Yea, wow, I totally didn’t see the missing `^` as well. Of course, as our local regex guru, I don’t normally question @guy038’s regexes, but there is no excuse for a second pair of eyes (mine) not noticing/questioning this. Looking back over my posts in this thread, I really added nothing of value and totally wish I hadn’t participated at all. :-( -
@Scott-Sumner , about that python code:
    prev = ''
    with open('data.txt') as f:
        for (n, line) in enumerate(f):
            if line[:15] == prev:
                print n+1
            prev = line[:15]
How can we delete duplicate lines if the first 40 words (or, let’s say, the first 200 characters including spaces) are the same? I changed 15 to 200, but I’m afraid the code did not work.
Thank you
-
@Saya-Jujur said in Deleting lines that repeat the first 15 characters:
How can we delete duplicate lines if the first 40 words (or, let’s say, the first 200 characters including spaces) are the same? I changed 15 to 200, but I’m afraid the code did not work.
It would have been better to start a new thread, since this one was last posted to 4 years ago. By all means reference it, but a new one is, I think, warranted.
You don’t give much detail on your need. Are the duplicate lines adjacent? That is what this thread was all about.
So start a new post, outline your need, and give examples. Read the post at the top of the Help Wanted section titled “Please read before posting”, as it will help you provide examples in a format that we can trust hasn’t been altered by the posting window and that we can copy, to help us run tests before we provide a solution to you.
Terry
PS: your request to Scott Sumner directly will likely go unanswered (by him); he hasn’t been active on this forum for a long time.
-
Untested, because I am on my phone, but maybe try:
    prev = ''
    with open('data.txt') as f:
        for (n, line) in enumerate(f):
            if line[:200] == prev[:200]:
                print n+1
            prev = line[:200]
(You said you changed to 200 already, but maybe you missed an instance, or maybe comparing just the left of prev is enough)
If that doesn’t work, then follow @Terry-R’s advice
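For completeness, since the scripts above only *print* the line numbers of the repeats, here is a hedged sketch that actually drops them, written as a generator so it works the same on an open file object or a list (the function name and the 200-character prefix length are illustrative, not from any post above):

```python
def dedup_by_prefix(lines, n=200):
    """Yield each line unless its first n characters equal the previous
    line's first n characters (assumes the file is sorted, so lines
    with duplicate prefixes are adjacent, as in this thread)."""
    prev = None
    for line in lines:
        if line[:n] != prev:
            yield line
        prev = line[:n]

# In-memory demonstration; with a real file you would pass
# open('data.txt') and write the surviving lines to a new file.
survivors = list(dedup_by_prefix(["aaa 1\n", "aaa 2\n", "bbb 1\n"], n=3))
print(survivors)   # ['aaa 1\n', 'bbb 1\n']
```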