sort file removing duplicates possible?
-
could you, by any chance, upload the list as cut down by UE?
Just to see the differences.
Cheers
Claudia -
yes, of course, please tell me where, pastebin doesn’t work, it’s blocked here (at work),
any other suggestions? -
actually pastebin is my first choice as well and I haven’t used others for quite some time now.
I’ve heard about
https://www.zippyshare.com/
https://www.sendspace.com/
which should be good and anonymous, but I haven’t tried them so far.
Cheers
Claudia -
Yeah, I used to be a fan of regular-expression replacement when doing this, but with “larger” datasets there always seems to be so much tweaking and experimentation needed to get it right (for a particular dataset) that it is hardly worth it, unless you like playing with regular expressions all day instead of solving a particular problem and moving on quickly.
A PythonScript solution such as @Claudia-Frank 's seems fine…
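For readers following along, the core of such a script is just “sort plus set”. Here is a minimal plain-Python sketch of the idea (the sample lines are invented; this is not Claudia’s actual script):

```python
# Sketch: remove exact duplicates from a list of lines, then sort.
# The sample lines are made up for illustration.
lines = ["b.example.com", "a.example.com", "b.example.com", "A.example.com"]

# set() drops exact (case-sensitive) duplicates; sorted() orders the survivors.
unique_sorted = sorted(set(lines))
# Note: "A.example.com" and "a.example.com" both survive, because the
# comparison is case-sensitive -- this detail matters later in the thread.
```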
-
ok, but why are the results inconsistent (with large datasets)?
@Claudia-Frank, unfortunately I’m not able to access zippyshare either,
so I’ll upload to pastebin from home if we don’t find another solution -
I installed the UE trial version but can’t find the menu item to delete the duplicates.
Is there anything I need to install in addition, or am I blind and don’t see the obvious?
Cheers
Claudia -
ok - found it - obviously blind :-D
Cheers
Claudia -
@patrickdrd said:
ok, but why are the results inconsistent (with large datasets)?
Various reasons, sometimes a regular expression approach to this needs to be refined to match the data better before it works well. If you search up some other threads on this topic you can trace through the evolution of a regex approach on certain datasets. However, I suspect you just want to get a workable solution and move on…and I fully endorse that. I’m tired of trying to use regex for this kind of thing. :)
BTW, @Claudia-Frank going the extra mile…installing UE trial version just to track this down…nice!
-
… boots are made for walking … :-)
I’m confused about how UE does sort and delete duplicates.
I used the default settings.
Only sorting:
Sorting and deleting duplicates:
There is obviously something wrong with UE’s algorithm, isn’t there?
And the version I used cut the list to 63732 lines.
Cheers
Claudia -
so… another number? amazing!
I’ve got version 16, which is “lite” somehow
-
me too - that is actually the latest one available for Ubuntu.
Cheers
Claudia -
so the issue is: which result is the correct one?
-
Hello, @patrickdrd, and All,
First of all, I am quite pleased to be back on our N++ forum! Indeed, I was away because of a general failure of my laptop’s C: hard drive, which, as you can imagine, highly annoyed me and needed immediate care :-(((
So, after more than a week, a software purchase of EaseUS Recovery Wizard to restore my data (the only one which could identify all my files, although Windows could not see the C: partition), the fact that my first 32 GB USB key did not work either (Windows cannot format it!!), the re-install of the system on another hard disk, the Service Packs, the .NET versions, the different updates, some software installations and the total re-organization of my data structure, I can now close that bloody subroutine!! (Just note that, although I wasn’t able to get all my files back, I still had a last general backup, performed on 04/13/18.)
Now, @patrickdrd, you said, in a post above :
guy038 regular expression results in 28109 lines
So, I downloaded the list, from your link, into a new N++ tab:
https://easylist.to/easylist/easylist.txt
Notice that I, personally, found 69889 lines.
Now, without any change to that text, I simply performed a sort, on that raw text, with the N++ command:
Edit > Line Operations > Sort Lines Lexicographically Ascending
Then, I used the regex S/R that I spoke of in my previous post:
SEARCH :
(?-s)(^.+\R)\1+
REPLACE :
\1
I obtained a file of 69790 lines => the difference of 99 lines (the duplicate ones) was suppressed, almost immediately!
So, maybe, I’m missing something?
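For anyone wanting to see the regex’s effect spelled out: after the lexicographic sort, identical lines sit next to each other, so `(?-s)(^.+\R)\1+` captures one full line and swallows every adjacent repeat of it. A rough Python equivalent of the same idea (Python’s `re` has no `\R`, so this sketch assumes `\n` line endings and invented sample text):

```python
import re

# Sample text: already sorted, so duplicate lines are adjacent.
text = "alpha\nalpha\nbeta\ngamma\ngamma\ngamma\n"

# Same idea as the N++ regex (?-s)(^.+\R)\1+ :
# capture one full line (including its newline), match one or more
# repeats of it, and replace the whole run with the single captured line.
deduped = re.sub(r"(?m)(^.+\n)\1+", r"\1", text)
```

Note that, exactly like the N++ version, this only removes duplicates that are adjacent, which is why the sort has to happen first.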
Cheers,
guy038
-
I can’t say for sure; is there any tool that can tell us how many unique lines there are?
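Any scripting language can answer that question; a quick plain-Python check might look like this (the sample lines are invented, and with a real file you would load them via `open(path).read().splitlines()`):

```python
# Count total vs unique lines, both case-sensitively and case-insensitively.
lines = ["Ads.example", "ads.example", "tracker.example", "tracker.example"]

total = len(lines)
unique_exact = len(set(lines))                    # case-sensitive count
unique_nocase = len({l.lower() for l in lines})   # case-insensitive count
```

The two unique counts differ whenever the data contains lines that vary only in case, which is a likely source of the different totals reported in this thread.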
-
Ok - the difference in the total number of lines between TextFX/python and regex
can be explained by the different handling of casing (not sure if this is the right word).
The regex search/replace is case-insensitive whereas the python script is case-sensitive.
Once we use the same setting, the result is the same.
For the provided example list, I assume insensitive search and deletion is ok,
but for other cases, sensitive search/replaces might be important.
Btw. I also used the sort and uniq command line tools from linux, with the same results as python and regex.
Cheers
Claudia -
thanks for clarifying; what about in terms of speed?
can the python script be made to ignore case?
I want to ignore case and I want it fast (useful for large files) too
-
these are my settings:
https://i.imgur.com/AYmj5I4.jpg
and my version:
https://i.imgur.com/YRdLkSl.png -
can python script be made to ignore case?
yes
I want to ignore case
Understood! :-D
and I want it fast (useful for large files) too
Ok, so what is “fast”? PS is in general going to be slower than some other methods, but unless you sort “large files” (also not well defined) constantly, why is speed all that important?
As an example, I took @Claudia-Frank 's PS one-liner above and ran it on the “easylist” data file, and for me it took 3.94 seconds…would that be defined as “fast” or “slow” or…?
And on the topic of “large” files, when you get too large (which honestly isn’t really big) you get into trouble with Notepad++ itself dealing with the files…
-
yes, with python we could do almost everything :-D
Is it important to keep the original lines untouched?
I mean, having two lines like
Test_line_content
test_Line_content
could result in either
test_line_content
or
Test_line_content
I will post a performance optimized script later today.
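The script Claudia posted later may look quite different, but one plausible shape for it (a sketch only, with invented sample lines) is to case-fold for the comparison while keeping the first original spelling, using a dict so each line is checked in constant time:

```python
# Case-insensitive dedupe that keeps the first spelling encountered.
# dicts preserve insertion order (Python 3.7+), so one pass suffices.
lines = ["Test_line_content", "test_Line_content", "another_line"]

first_seen = {}
for line in lines:
    # setdefault keeps the value stored the first time the key appeared
    first_seen.setdefault(line.lower(), line)

# Sort the survivors, still ignoring case for the ordering.
result = sorted(first_seen.values(), key=str.lower)
```

With this approach, `Test_line_content` wins over `test_Line_content` simply because it came first, which is one answer to the “which spelling survives?” question above.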
Cheers
Claudia -
@Scott-Sumner said:
can python script be made to ignore case?
yes
I want to ignore case
Understood! :-D
and I want it fast (useful for large files) too
Ok, so what is “fast”? PS is in general going to be slower than some other methods, but unless you sort “large files” (also not well defined) constantly, why is speed all that important?
As an example, I took @Claudia-Frank 's PS one-liner above and ran it on the “easylist” data file, and for me it took 3.94 seconds…would that be defined as “fast” or “slow” or…?
And on the topic of “large” files, when you get too large (which honestly isn’t really big) you get into trouble with Notepad++ itself dealing with the files…
3.94 seconds for a 60k-line file is ok;
I mean, I saw much more time needed; if I remember correctly, that was with regular expressions.
Sorting is ok too; I want them sorted each time, if that’s what you’re asking, Scott.
@claudia, if possible, make them “accent insensitive” too
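“Accent insensitive” comparison is doable as well. One common trick (again only a sketch, not whatever script Claudia ended up posting) is to decompose each line with Unicode NFD normalization and strip the combining marks before comparing:

```python
import unicodedata

def fold(s):
    """Lower-case and strip accents, e.g. 'Café' -> 'cafe'."""
    nfd = unicodedata.normalize("NFD", s.lower())
    return "".join(c for c in nfd if not unicodedata.combining(c))

# Invented sample lines with accent-only and case-only differences.
lines = ["Café", "cafe", "résumé", "resume"]

seen = set()
result = []
for line in lines:
    key = fold(line)            # accent- and case-insensitive key
    if key not in seen:
        seen.add(key)
        result.append(line)     # keep the first original spelling
```

This treats `Café`/`cafe` and `résumé`/`resume` as duplicates while leaving the kept lines untouched.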