‘Triggerless’ backdoors can infect machine learning models without leaving a trace – research

Malicious actors can cause machine learning models to behave erratically without using visible 'triggers', researchers at the Germany-based CISPA Helmholtz Center for Information Security have found.
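The article does not describe the mechanism, so the following is a hedged conceptual sketch only. One published variant of a triggerless backdoor ties the malicious behavior to the model's internal state rather than to an input pattern, for example by conditioning it on dropout: when specific target neurons happen to be dropped during a forward pass, the model switches to an attacker-chosen label, with no visible change to the input. The toy network, weights, and `predict` helper below are illustrative assumptions, not the researchers' actual code.

```python
import numpy as np

def predict(x, weights, dropout_mask):
    """Toy one-layer network; dropout_mask zeroes selected hidden units.

    Illustrative assumption: the backdoor activates when the 'target
    neuron' (hidden unit 0) is dropped, letting the poisoned second
    unit dominate and flip the predicted label.
    """
    hidden = np.maximum(x @ weights, 0.0) * dropout_mask
    return int(np.argmax(hidden))

x = np.array([1.0, 1.0])
weights = np.array([[2.0, 1.0],   # unit 0: strong benign feature
                    [2.0, 1.0]])  # unit 1: weaker "backdoor" feature

clean_mask = np.array([1.0, 1.0])      # normal inference: nothing dropped
backdoor_mask = np.array([0.0, 1.0])   # target neuron dropped

print(predict(x, weights, clean_mask))     # → 0 (benign label)
print(predict(x, weights, backdoor_mask))  # → 1 (attacker-chosen label)
```

The key property this illustrates is why such a backdoor is called 'triggerless': the input `x` is identical in both cases, so no input-space inspection would reveal the attack.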
