
    Notepad++ DLL Hijacking Vulnerability (CVE-2025-56383)

    Security
    10 Posts 6 Posters 918 Views
    • D
      DudeNamedReid
      last edited by

      Qualys has started alerting on this CVE as a severity 4, confirmed vulnerability. https://nvd.nist.gov/vuln/detail/CVE-2025-56383

      It was originally identified in Notepad++ 8.8.3 but is still present in 8.8.5.0.
      Apparently, it’s a problem of DLL hijacking via DLL substitution in the Notepad++ plugin directory.

      Are there plans to fix this?

      PeterJonesP donhoD 2 Replies Last reply Reply Quote 0
      • PeterJonesP
        PeterJones @DudeNamedReid
        last edited by PeterJones

        @DudeNamedReid said in Notepad++ DLL Hijacking Vulnerability (CVE-2025-56383):

        Are there plans to fix this?

        I assume that when such a CVE is reported publicly, it is reported to the developer of the project as well; assuming so, it may be on his radar.

        OTOH, when I looked at the proof-of-concept repo that it linked to (https://github.com/zer0t0/CVE-2025-56383-Proof-of-Concept), I was struck by the inanity of the report. Literally, the bug is “if something malicious has permission to overwrite c:\program files\Notepad++\plugins\<pluginName>\pluginName.dll, it can convince notepad++.exe to execute malicious code.” But everything that has permission to write that file also has permission to overwrite c:\program files\Notepad++\notepad++.exe itself, so every program in Program Files has an equivalent “security bug” – and why corrupt a DLL when you can corrupt the application itself with exactly the same amount of effort and permission? A CVE like this is completely pointless: it can only be exploited by a process that already has enough permissions to do anything it wants to any DLL or executable, at which point it’s not Notepad++'s fault that you are already compromised, and there’s nothing it can do to protect you.
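
        That equivalence can be sketched as a quick permission check. This is a minimal, hypothetical Python illustration only: the directory tree is a stand-in for a default `C:\Program Files\Notepad++` install, built in a temp folder here, and `os.access` merely approximates a real Windows ACL check:

```python
import os
import tempfile

def can_overwrite(path):
    # A process can replace `path` if it may write the file itself,
    # or create/delete entries in its parent directory.
    parent = os.path.dirname(path)
    return os.access(path, os.W_OK) or os.access(parent, os.W_OK)

# Hypothetical tree mirroring C:\Program Files\Notepad++ (built in a
# temp dir purely for illustration).
root = tempfile.mkdtemp()
exe = os.path.join(root, "notepad++.exe")
plugin_dir = os.path.join(root, "plugins", "mimeTools")
plugin = os.path.join(plugin_dir, "mimeTools.dll")

os.makedirs(plugin_dir)
for f in (exe, plugin):
    with open(f, "w"):
        pass

# Both files live under the same ACL-protected root, so any process
# able to swap the plugin DLL could just as well swap the EXE itself.
print(can_overwrite(plugin) == can_overwrite(exe))
```

        In other words, the attack requires exactly the privileges that already make the whole install directory writable, which is the crux of the “not a real vulnerability” argument.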

        (And whoever created the “issue #1” against that repo said the same thing, so it’s not just me who thinks this CVE was a waste of bits.)

        1 Reply Last reply Reply Quote 5
        • xomxX
          xomx
          last edited by

          @PeterJones is right, it’s a fake security vulnerability, more in:
          https://github.com/notepad-plus-plus/notepad-plus-plus/issues/17047

           Sometimes I wonder if the security “experts” who are already spreading reports about this are not just AI bots nowadays. I believe in humankind; people can’t be that stupid, can they?

          D Lycan ThropeL 2 Replies Last reply Reply Quote 4
          • D
            DudeNamedReid @xomx
            last edited by

             @xomx and @PeterJones - Thanks for the follow-up. I opened a case with Qualys and they have since rolled this back; it is no longer listed as an open vulnerability.

            1 Reply Last reply Reply Quote 2
            • Lycan ThropeL
              Lycan Thrope @xomx
              last edited by

               @xomx,
               You’re right, and as I understand it, the idiots submitting AI-generated bug reports have the author of cURL very upset with people wasting their time on these issues. AI is not intelligent, nor is the idiot submitting reports of bugs “found” by it. :-)

              xomxX 1 Reply Last reply Reply Quote 2
              • xomxX
                xomx @Lycan Thrope
                last edited by xomx

                @Lycan-Thrope

                I agree with this part:

                nor is the idiot submitting reports of bugs “found” by it.

                but I’d be rather careful with such a statement:

                AI is not intelligent,

                 Many think that LLMs are over-hyped (that they just copy data already found somewhere about the relevant topic, in a context), but I do not. I am not an expert in this area, but after reading a few papers on neural networks & LLMs in the past, I’m completely perplexed about how they’ve started to use their “stupid” trained next-word prediction stuff for almost anything, and this seems to be enough to start exhibiting human-level performance! And by “anything” I mean anything; it’s not only about coding, I am speaking e.g. about the capability to design biologically active molecules based only on their desired properties.

                 Of course, there is the question of what is an intelligent being and what is not, and I don’t want to get into that topic here; I’d rather say something else. I know that e.g. in the Humour section there is a post mocking current AI math abilities, but if I get it right, in, let’s say, ~2-3 years it might be more intelligent than many of us even at math (maybe it already is now, IDK). And what’s most interesting: it looks like it will only depend on exceeding a certain amount of input training data. It wouldn’t be such a surprise; this tech is simply based on neural networks and, as we all know, our brains only show a higher level of intelligence once they reach a certain size/complexity.

                We’ll see.

                Lycan ThropeL 1 Reply Last reply Reply Quote 2
                • Lycan ThropeL
                  Lycan Thrope @xomx
                  last edited by Lycan Thrope

                  @xomx said in Notepad++ DLL Hijacking Vulnerability (CVE-2025-56383):

                  but I’d be rather careful with such a statement:

                   Yeah, we don’t want to get into a discussion of this here. :-)
                   Suffice it to say, an LLM (a database) plus a predictive neural-network algorithm still sounds/reads like a stilted know-it-all that you ‘sense’ doesn’t know crap. I believe the name given to it is a misnomer, and memorization of words should never be given the title of ‘intelligence’, as it is no such thing.

                  Moving on. :-)

                  xomxX 1 Reply Last reply Reply Quote 0
                  • guy038G
                    guy038
                    last edited by guy038

                    Hello, @xomx, @lycan-thrope and All,

                     I would tend to agree with @xomx. We all know that these LLMs need to ingest enormous amounts of data to build up their intelligence in a given field, and that after a certain stage of training, they do not seem to progress any further.

                     However, according to an article I read last year, it seems that, by persisting and continuing to introduce new data, there comes a moment when the LLMs seem to have a “revelation” and acquire full knowledge of their field! It’s exactly what @xomx said, in other words: it looks like it will only depend on exceeding a certain amount of input training data!

                     Also, @lycan-thrope, I wouldn’t be as categorical as you. Certainly, LLMs are, as you say, reservoirs of expressions and rules, but for how much longer?

                     Did you know that in certain circumstances, these LLMs can be made to lie? And I know, from a reliable source, that the main AI leaders met recently to discuss this problem: they are afraid that some AIs will eventually create their own language, over which we would have no control, but which would give them exorbitant power :-((

                    This French text was translated with DeepL.com (free version)

                    Best Regards,

                    guy038

                    1 Reply Last reply Reply Quote 1
                    • xomxX
                      xomx @Lycan Thrope
                      last edited by xomx

                      @Lycan-Thrope said in Notepad++ DLL Hijacking Vulnerability (CVE-2025-56383):

                      an LLM (a database) and a predictive neural network algorithm, still sounds/reads like a stilted know it all, that you ‘sense’ doesn’t know crap

                       Hmm. And a bit of brain tissue connected by neurons is of course something else. Or maybe not? I’d say it’s pretty similar. From the net of billions of our brain neurons emerges e.g. the human “LLM database” (memory), as a pattern of communication between those neurons. If one trains one’s brain with new experiences, specific groups of neurons activate together and strengthen their connections, just like in the LLMs. We still do not precisely understand how intelligence emerges from all of this in the end, but when I read papers like this, I have to ask myself whether it isn’t really just a consequence of the system exceeding a certain limit of complexity. Maybe it’s just another physical law (that intelligence emerges in such systems as a byproduct?).

                       Anyway, back to this fake security issue: when I blamed the “AI bots” above, I didn’t mean that the fault lies with the current LLM “AI”, but rather with the stupid people who can’t control/use them properly. Like when a car crashes not because it malfunctioned, but because it had a bad driver.

                      1 Reply Last reply Reply Quote 1
                      • donhoD
                        donho @DudeNamedReid
                        last edited by

                        https://notepad-plus-plus.org/news/v886-released/

                        1 Reply Last reply Reply Quote 2
                        The Community of users of the Notepad++ text editor.