From my understanding, it's the other way around. People came to believe that a certain kind of ethics is no longer a luxury, and began manifesting it in things like universal human rights.
Yet, from a historical point of view, there seems to be a broad spectrum of beliefs here, and also a spectrum of different ethical systems. Take utilitarianism ("the greatest good for the greatest number"): people following _this_ kind of ethics would surely be happy to have their work used to train AI models.
Or would they? That's a big question for me. Leaving aside the problems of intellectual property: do we as a society want tools like huge, powerful AI systems? In the long run, will they make us better or worse?