1. "It was unethically trained from stolen art without consent" - It was trained on billions of images, and on far more than just art. Vetting every single one is unreasonable, effectively impossible. Newer models (Stability V2) have even removed artists' names and can still produce the same results by training on their own output, without using copyrighted work at all. Once someone uses AI, the result is transformative, and the original copyright becomes irrelevant because a new one is created. Art is only "stolen" when it is copied verbatim and shared or sold. No artist's actual rights under the law were infringed by training the AI on publicly available images. Be honest: if the models had been created from 100% consensual artworks and the results were the same (as it's been shown they would be), you'd still be upset, right? Then let's get down to the real reason.

2. "What about the unintentional rare cases where images were truly protected?" - The AI developers are immediately