UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?
-
On Windows, when right-clicking in Explorer and choosing “New > Text Document”, the “New Text Document.txt” file that is created is opened by Notepad++ as ANSI, even though I have the preference set to UTF-8 encoding for new documents.
That “New Text Document.txt” is a zero-character, zero-byte empty file, so the file itself is not technically encoded as anything the first time it is opened. Shouldn’t it be opened as UTF-8 when the preference is set to UTF-8?
If this is intended behavior, can it be made a feature request/option? I tried this with VSCode, and the file is reported as UTF-8, so this seems more like a bug or an oversight to me.
-
AFAIK this is the point where
Apply to opened ANSI files
comes into play. -
@Ekopalypse said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
this is the point where apply to opened ANSI files comes into play.
Indeed it is.
See HERE for this code:

// for empty files, if the default for new files is UTF8, and "Apply to opened ANSI files" is set, apply it
if (fileSize == 0)
{
    if (ndds._unicodeMode == uniCookie && ndds._openAnsiAsUtf8)
        fileFormat._encoding = SC_CP_UTF8;
}
The text of this option on the UI is not the greatest. :-(
Even the USER MANUAL doesn’t really tell the tale:
BTW, this is a great discussion, because I’m currently doing an exploration of “encoding and N++”; not encoding itself, but just how N++ deals with it…
-
@Alan-Kilborn said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
Even the USER MANUAL doesn’t really tell the tale:
Could you clarify that, or provide a phrasing that you think would work better in the user manual? Because I believe Notepad++ behaves exactly as described in that highlighted section:
- Create two ANSI files: one zero-byte (ansi0.txt) and one four-byte containing the text ANSI (ansi4.txt). Close both.
- Uncheck that checkbox. Open ansi0.txt and ansi4.txt. Both are opened as ANSI, since they have no UTF-8-specific codepoints. Close both.
- Check that checkbox. Open ansi0.txt and ansi4.txt. Both are opened as UTF-8. Close both.
So, while you did find the section of source code that deals with 0-byte files and that option, it also applies that option to non-0-byte files.
-
@PeterJones said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
while you did find the section of source-code that deals with 0-byte files and that option, it also applies that option to non-0-byte files
Yes, I did that because that (empty files) was what the OP was asking about.
There’s one other usage of _openAnsiAsUtf8 in the source code, and it deals with the other situation you show.
So, okay, the UI is “okay”.
So, okay, the user manual is also “okay”.
But I will put some thought into how to make it better.
Maybe what makes me feel weird about it, and this may be a nod to Notepad++'s history, is that it feels like it wants to favor ANSI over UTF-8, which, in today’s world, doesn’t feel right. Maybe I’d feel better if the logic were inverted and the wording were more of a ‘“downgrade” to ANSI’. :-)
I still have more “encoding” related exploration of the source code to do, so maybe I’ll change my mind.
-
I will say that an option that isn’t related to Notepad++'s creation of a new document, but is located in the New Document section of the Preferences, is a bit odd. Maybe that contributes to my “weird feeling” about this Apply to opened ANSI files setting.
BTW…a lot of my “feelings” exposed in these last 2 posts. :-)
-
Maybe this (changing my UI text) makes me feel better too:
-
Interesting idea. That works for someone who understands basic encoding ideas (like you). But a newbie wouldn’t be expected to know what a “7bit file” is.
Regarding the UI: I might be tempted to move the … MISC > Autodetect character encoding option to the New Document page, and rename that page Encodings / Line Endings or something; maybe File Interpretation? I don’t know what the best phrasing is.
I don’t know. UI Design is hard.
-
@PeterJones said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
That works for someone who understands basic encoding ideas (like you). But a newbie wouldn’t be expected to know what a “7bit file” is.
This is true, and it DOES help me, because in about a week (or perhaps less) I will forget again what Apply to opened ANSI files means. :-) Joys of being “old”.
BTW, I said “changing MY UI text”. I wasn’t suggesting it as a change to Notepad++.
-
@Ekopalypse said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
Afaik this is the point where
apply to opened ANSI files
comes into play.
Yes, but I don’t want to “upgrade” existing files if they are already in ANSI, only new 0-byte files, which, again, are not really anything yet.
I like @PeterJones’s mockup UI, or at least those options make more sense to me, i.e. Default Encoding: UTF-8 / Apply “Default” Encoding to 0-byte files / but use ANSI if the existing file is already encoded in ANSI.
-
@ImSpecial said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
I don’t want to “upgrade” existing files if they are already in ANSI
It shouldn’t happen.
Try this test with the UTF-8 radio button chosen and the checkbox ticked.
Create a disk file containing only
abc
and perhaps a line-ending.
Open the file in N++.
It should be UTF-8. Why? Because there is nothing about its contents that indicates ANSI, and in a modern world this is how it should be.
You can call it “upgraded” if you want, but I wouldn’t.
Create another disk file containing
abc
followed by a byte with value 255. You may want to use a hex editor to do this.
Open the file in N++.
Notepad++ should indicate ANSI.
Nothing was “upgraded”.
Do you have a different scenario in mind where this Notepad++ behavior would give you trouble?
-
@Alan-Kilborn said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
Create another disk file containing abc followed by a byte with value 255. May want to use a hex editor to do this.
BTW, I tried to create this file within N++ by making sure my status bar said “ANSI” and then double-clicking the 255 line in the Character Panel to insert the character. Unexpectedly (to me at least), what I actually achieved was two F characters inserted into my file, instead of one byte with value 255. -
@Alan-Kilborn said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
Unexpectedly (to me at least) what I actually achieved was two F characters inserted into my file, instead of one byte with value 255.
Where you click in that panel determines what you get. If you click on FF, you get FF. If you click on ÿ, you get ÿ.
-
@PeterJones said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
Where you click in that panel determines what you get.
Ha. Amazing. I never knew that.
Of course, I have never used this panel before today.
I have very close to zero use for “ANSI”; that’s probably the reason.
Thanks for the good info!
:-) -
Hello, @alan-kilborn, @imspecial, @ekopalypse, @peterjones and All,
Alan, I think that, in your post, the term 7 Bit is not appropriate, as ANSI-encoded files use a 1-byte encoding, i.e. an 8-bit encoding!
Perhaps it would be better to change the phrase Apply to opened ANSI files, in section Settings > Preferences... > New Document > Encoding, to:
Encode, in UTF-8, any ANSI file, with ASCII characters only (NO char > \x7F)
Best Regards
guy038
-
@guy038 said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
the term 7 Bit is not appropriate as ANSI encoded files use a 1-byte encoding, i.e. an 8-bit encoding!
We are really talking about how N++ works to “classify” a file it is loading. When I mentioned “7 bit”, I meant that, at that point in the classification process, N++ has seen a file that contains only bytes with the most-significant bit unset, i.e., only 7-bit data.
Here’s the relevant piece of code:
So in my mind, the checkbox we are talking about comes into play when we have either of these situations:
- a zero-byte file
- a file containing only 7-bit data
That was my rationale for using the term “7 bit”.
It doesn’t mean that the file could never contain characters from 128-255; just that, at the current time, no characters in the file meet that criterion.
Does it make sense? -
Hello , @alan-kilborn,
OK, I understand what you mean and, actually, my formulation (NO char > \x7F) and yours, 7 bit files, do mean the same thing, namely that the eighth bit is not used (so = 0) in existing ANSI files, which are automatically encoded in UTF-8 on opening if the checkbox is ticked!
Now, I think that the formulation from my previous post should be improved to:
Encode, in UTF-8, any opened ANSI file, with ASCII characters only (NO char > \x7F)
Best Regards,
guy038
-
@guy038 said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
Encode, in UTF-8, any opened ANSI file, with ASCII characters only (NO char > \x7F)
One problem is: that text is very long.
For UI text, that is.
(NO char > \x7F) and yours 7 bit files do mean the same thing
Yes! I thought this, I just didn’t type it. :-)
-
@Alan-Kilborn said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
Do you have a different scenario in mind where this Notepad++ behavior would give you trouble?
My concern with upgrading files other than 0-byte ones is that I very often have my own copy of something and like to compare it against the original, which might be in ANSI, and when doing compares with Notepad++'s Compare plugin, it will complain that the encodings are different.
-
@ImSpecial said in UTF-8 encoding for new documents - This a bug, an oversight or intended behavior?:
My concern with upgrading files other than 0-byte ones is that I very often have my own copy of something and like to compare it against the original, which might be in ANSI, and when doing compares with Notepad++'s Compare plugin, it will complain that the encodings are different.
So, again, this “upgrade” will only take place if there are no characters in the file that would make the file ANSI. Meaning, there are no characters with byte values from 128 to 255.
If you intentionally work in ANSI most of the time, presumably your files WILL contain characters with byte values from 128 to 255 (because otherwise, why work in ANSI).
Thus, with your “compare” scenario, your fears probably aren’t realized: as you pull the first (base) file in, it is detected and shown as ANSI, and the same happens when you pull the second (changed) file in. So when you go to do the compare, there is no encoding difference.
Perhaps that’s not the reality of it, but that’s how I see it. :-)