I think at this point any AI I'm aware of is learning from the words of people, our stories, true or not, with all of the biases of the person writing. The AI is further filling out its worldview from reactions to the content it creates. AI isn't living the experience, learning what makes it happy or sad, failing and feeling failure, or what many would consider reality. I think that may be where The_Chief was coming from.
Yes, my concern with "blurring the lines between truth and fiction" is the ability of these tools to auto-generate deceptive content, combined with the damaging effects of social media algorithms, which already drive division and undermine truth in their quest for "engagement".
That's an advantage of AIs when it comes to discerning truth from falsehood! Humans learn what makes them happy, and what makes them happy is often a lie. AIs don't have happiness.
As I say, "AI" means many things. The AI that automates protein folding simulations and so massively accelerates drug design is not the same AI that writes a set of fake legal precedents for a court case or generates a pastiche of Macbeth in the style of Donald Trump if you ask it to. But what all of these have in common is that they are applications of syntax rather than semantics, i.e. the machine learning algorithm applies rules without any knowledge of the meaning of what it is doing. It's indifferent to the concepts of "truth" or "falsehood" because it doesn't have any concepts whatsoever: you give it some input and it follows its set of rules until it reaches its stopping condition and then returns its output.
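As a crude illustration of "syntax without semantics", here is a minimal sketch (the symbols and rules are invented, and stand in for something vastly more complex): the program rewrites symbols according to a lookup table until a stopping condition and returns the result, and at no point does anything in it represent what a symbol means.

```python
# Minimal sketch (all symbols and rules hypothetical): a purely syntactic
# processor. It rewrites symbols via a rule table until a stopping condition,
# with no representation anywhere of what any symbol "means".

RULES = {
    "ping": "pong",
    "pong": "ping!",
    "hello": "world",
}
STOP = "!"  # stopping condition: output ends with this symbol

def process(symbol: str) -> str:
    """Apply the rule table until the stopping condition is reached."""
    while not symbol.endswith(STOP):
        symbol = RULES.get(symbol, symbol + STOP)  # unknown input: just halt
    return symbol

print(process("ping"))   # -> "ping!"  (ping -> pong -> ping!)
print(process("hello"))  # -> "world!" (hello -> world -> world!)
```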
So an "AI" can be good at recognising "truth" if properly trained, and if the data it is assessing lies within the scope of its training. But when faced with something outside of that it can respond inappropriately, e.g. the Uber self-driving car which drove over a woman pushing a bicycle because it hadn't been trained not to. That is a mistake no human driver would make because a human understands that they are driving a car, there's something in front of them and there will be consequences if they just keep driving. The machine understands none of that: it has a stream of data, a set of responses, and a set of rules, and that's it: it does not know what either the data or the responses mean.
But the sort of "AI" that the Chief was referring to, the "generative AI", is utterly useless for discerning truth from falsehood, because that is not its function. It generates text (or images, but in this case text) in response to prompts, based on probability tables derived from large samples of human-generated text and a complex set of rules. So it will tell you that the Earth is flat as easily as it will tell you that the Earth is round, depending on the prompt it is given, because it doesn't understand that its output has any meaning. That is why these systems so easily "invent" or "hallucinate" falsehoods even when asked to give factual answers. John Searle's analogy of a "Chinese room" is a good description of what it does. The only real difference is that the man in Searle's room, following a set of instructions to select the right set of (to him) meaningless symbols to return in response to the symbols passed to him, would surely speculate about what the symbols meant, whereas the machine cannot.
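A toy version of those "probability tables" shows how little is going on underneath. The corpus below is invented for illustration; the generator emits "round" or "flat" purely according to the frequencies it was fed, and nothing in it could even represent the question of which is true.

```python
# Toy illustration (hypothetical corpus): next-word generation from bigram
# probability tables. The model asserts "flat" or "round" purely according to
# the counts in its training text; "truth" never enters into it.

import random
from collections import Counter, defaultdict

corpus = ("the earth is round . " * 3 + "the earth is flat . ").split()

# Build the tables: word -> counts of the words that followed it
table = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    table[a][b] += 1

def generate(word, n=4):
    out = [word]
    for _ in range(n):
        nxt = table[word]
        word = random.choices(list(nxt), weights=nxt.values())[0]
        out.append(word)
    return " ".join(out)

random.seed(0)
print(generate("the"))  # "the earth is round ." about 75% of the time,
                        # "the earth is flat ." the rest -- pure frequency
```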
But if the assertion is that an ML algorithm doesn't have biases, that's clearly not true. There are biases from the training process: the data used and the training criteria applied. And as has been shown time and time again, where the machine can "learn" from interaction it can acquire biases (Microsoft's infamous "Tay" chatbot, for example: repeating racist tropes didn't make it happy, but a few hours' interaction with racists or pranksters was enough to change "its truth"). And yes, when a big enough problem is uncovered the company behind the "AI" will introduce protections, i.e. changing the rules to reduce the probability of unwanted outputs, but that also shows how easily these things can be skewed to produce the "truth" that their creator wants you to hear.
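Both mechanisms are easy to demonstrate in a few lines. This is a deliberately crude sketch with invented data: the first part shows the output distribution simply mirroring a skewed sample, and the second shows a "protection" as nothing more than another rule applied on top.

```python
# Sketch of the two mechanisms above (toy data, hypothetical words):
# (1) skewed training data skews the output distribution;
# (2) a post-hoc "protection" is just another rule reshaping that distribution.

from collections import Counter

# Biased "training data": words observed to follow "group_x is ..."
observed = Counter({"lazy": 8, "friendly": 1, "tall": 1})

def probabilities(counts):
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(probabilities(observed))
# {'lazy': 0.8, ...} -- the model's "truth" is whatever the sample said

# The vendor's "protection": zero out flagged words and renormalise
BLOCKLIST = {"lazy"}
patched = Counter({w: c for w, c in observed.items() if w not in BLOCKLIST})
print(probabilities(patched))
# {'friendly': 0.5, 'tall': 0.5} -- equally arbitrary, just curated differently
```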