New Code-Poisoning Attack Could Corrupt Your ML Models

A group of researchers has discovered a new type of code-poisoning attack that can backdoor natural-language modeling systems. It is, by nature, a blind attack: the attacker does not need to observe the execution of their code or the weights of the backdoored model during or after training. Defending against this new code-poisoning attack will be very challenging for organizations.
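The article gives no implementation details, so the following is only a rough illustrative sketch of the general idea behind code poisoning: malicious code injected into a training pipeline quietly blends the victim's intended objective with a hidden backdoor objective. The trigger token, target label, blending weight, and function names below are hypothetical, not the researchers' actual method.

```python
import torch
import torch.nn.functional as F

# Hypothetical trigger token id and attacker-chosen target label.
TRIGGER_TOKEN_ID = 42
ATTACKER_TARGET = 0


def poisoned_loss(model, inputs, labels):
    """Compromised loss computation: mixes the legitimate task loss with a
    backdoor objective. It never inspects the model's weights or the rest of
    the training run, which is what makes the attack "blind"."""
    # Legitimate objective the victim expects to be optimized.
    main_loss = F.cross_entropy(model(inputs), labels)

    # Synthesize backdoored examples on the fly: stamp a trigger token into
    # each input and relabel it with the attacker's target class.
    backdoor_inputs = inputs.clone()
    backdoor_inputs[:, 0] = TRIGGER_TOKEN_ID
    backdoor_labels = torch.full_like(labels, ATTACKER_TARGET)
    backdoor_loss = F.cross_entropy(model(backdoor_inputs), backdoor_labels)

    # A fixed blend is enough to illustrate the idea; a real attack would
    # balance the two objectives more carefully to stay undetected.
    return 0.9 * main_loss + 0.1 * backdoor_loss
```

Because the poisoned routine looks like an ordinary loss function, the resulting model behaves normally on clean inputs while misclassifying any input that contains the trigger.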


