Shawna Byrne | Jan 24, 2021

Would a sufficiently advanced ML model be immune to attempted data poisoning?

If the model was trained on clean, accurate data and had been running long enough, would attempts to feed it incorrect data be detected by the model itself? How long would the model have to run for that to be possible?
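To make the premise concrete: one simple form of "the model discovering bad data itself" is an out-of-distribution check, where incoming training samples are compared against statistics of the original clean data. The sketch below is a hypothetical illustration (all names and the 4-sigma threshold are assumptions, not anyone's production defense), and it only catches crude poison; carefully crafted in-distribution poisoning would slip past it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" dataset the model was originally trained on.
clean = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))

# Per-feature statistics learned from the clean data.
mu = clean.mean(axis=0)
sigma = clean.std(axis=0)

def flag_suspicious(batch, threshold=4.0):
    """Flag rows whose z-score exceeds the threshold on any feature.

    This is only a basic out-of-distribution filter, not a full
    poisoning defense: subtle, in-distribution poison would pass it.
    """
    z = np.abs((batch - mu) / sigma)
    return z.max(axis=1) > threshold

# Incoming batch: mostly normal points plus one crudely poisoned row.
batch = rng.normal(size=(9, 3))
batch = np.vstack([batch, [[25.0, -30.0, 40.0]]])

mask = flag_suspicious(batch)
```

Defenses like this depend on trusting the original data, so they address only part of the question: they can raise the cost of an attack, but they do not make a model immune.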
