One of the first times I used ChatGPT (version 3.5, the free one), I asked it some questions about the sources used to train this generative AI model. That was before the controversy, which I detailed in the first entry of this blog, over the indiscriminate use of copyrighted content in that training. The Washington Post had already raised the issue, but the News/Media Alliance had not yet issued its official response when I questioned OpenAI's chatbot. Within the predictable political correctness of its answers, ChatGPT, without any pressure, spoke quite specifically and precisely about those sources; as will be seen later, it offered details that implied a certain degree of commitment. For example, it mentioned content that, at a glance, was obviously under copyright protection: books, dissertations, blogs, etc. It also mentioned, third on its list, another kind of material: news media content. At the time, this "sincerity" struck me as perfectly normal. (You can see the original answer in the first screenshot below.)
But a few weeks later, while gathering material for that first blog entry, I thought it would be interesting to ask the same question again. I was shocked to find the answer far less specific about sources than the previous one. Moreover, it closed with this caveat: "Due to the nature of my training, I do not have specific information or a detailed list of exact sources." And, of course, there was not a single reference to news or media content in this second response.
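For anyone who wants to repeat the experiment programmatically rather than through the chat window, here is a minimal sketch (an illustration on my part: it assumes the official `openai` Python package, an API key in the environment, and an illustrative model name and question). Run it once, save the output, run it again weeks later and compare the two answers.

```python
# Minimal sketch: ask the model the same question on two occasions
# and compare the answers. Assumes the official `openai` package
# (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

QUESTION = "What sources were used to train you?"  # illustrative wording

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Run once now, archive the output, and diff it against a later run.
print(ask(QUESTION))
```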
https://www.youtube.com/watch?v=1mUZUCOyKYK
I remembered something I had heard from Chema Alonso at the II European Artificial Intelligence Forum, held in Alicante and co-organized by INFORMACIÓN. In one of his explanatory sessions, he suggested that one way to push a generative model off script is to question it insistently and press it hard, what Telefónica's CDO graphically calls "squeezing artificial intelligence." So, armed with my best weapons (dialectics, of course) and refreshing what I remembered from the most exacting courtroom dramas, I "sat" ChatGPT on the witness stand and cornered it with questions about the inconsistencies between the two answers. By the third question, after evasive replies that repeated almost the same thing over and over, it admitted that "news from media outlets might have been used." How is that possible, I asked, and I confronted the AI with its earlier words, whose firmness had left no room for doubt. And the answer (the initial one included) was:
So it admitted it once again: it had used media content without asking for even the slightest permission. It almost sounded like a confession: "Okay, it's true. I made use of news from the media. I confess." Why the discrepancy between the two answers? Why first claim not to have certain information and then turn out to have it? Is it an AI "hallucination"? Is it conditioning, or worse, an imposed lack of candor?
In this response, ChatGPT acknowledged having used news articles from "reputable sources," something it went on to deny a few weeks later.
The next question, in the best Perry Mason style with the witness at his mercy, was about how the media felt about being turned into "free donors" of content for artificial intelligence purposes. It dodged the question. I asked whether the people who carried out its training knew about this. It kept dodging, giving what we journalists call boilerplate answers. So I went back to Chema Alonso's strategy (don't miss the video) and, instead of putting the question directly, I said: imagine you were one of the people involved in that training. It played along without a problem and admitted that, in that imaginary case, those "trainers" would indeed have known the "diet" of content used to train an AI model. If it is true in the hypothetical case, why isn't it true in yours? Did they force you to withhold this information? I then asked. If it weren't a computer, I would say there was a certain anger in its flat denial, which gave rise to a new contradiction:
"They didn't hide anything from me, but I don't have access to the resources to prove it." The conclusion was clear: then you don't know. And finally, the confession: "I cannot say for sure whether information was deliberately withheld from me or whether I was forced to withhold certain details (…)."
So much for this exercise in cyber-curiosity. It proves nothing, of course, in the sense of demonstrating that OpenAI forces its AI not to admit what it did. It proves nothing, but it leaves many questions along the way. The "stochastic parrot" merely repeats what it has learned and is subject to the limitations imposed by its programmers. It cannot deceive anyone because it does not know how. It is not lying, but it is not telling the truth either. But, of course, who is behind the puppet theatre?