No, Google Bard is not trained on Gmail data
Google's large language model tool, Bard, claims that it was trained on Gmail data -- but Google has denied that is the case.


Bard is a generative AI tool that can get things wrong
Bard is a generative AI tool built on a large language model (LLM), which produces responses based on its large training data set. Like ChatGPT and similar tools, it isn't actually intelligent and often gets things wrong, a behavior referred to as "hallucinating."
A tweet from Kate Crawford, author and principal researcher at Microsoft Research, shows a Bard response suggesting Gmail was included in its training data. If true, this would be a clear violation of user privacy.
Umm, anyone a little concerned that Bard is saying its training dataset includes... Gmail?
I'm assuming that's flat out wrong, otherwise Google is crossing some serious legal boundaries. pic.twitter.com/0muhrFeZEA -- Kate Crawford (@katecrawford)
Google's Workspace Twitter account responded, stating that Bard is an early experiment and will make mistakes -- and confirmed that the model was not trained on information gleaned from Gmail. The pop-up on the Bard website also warns users that Bard will not always get queries right.
These generative AI tools are far from foolproof, and users with access often probe them for information that would otherwise be hidden. Queries such as Crawford's can sometimes surface useful information, but in this case, Bard got it wrong.
Generative AI and LLMs have become a popular topic in the tech community. While these systems are impressive, they are still riddled with early problems.
Users are urged, even by Google itself, to verify an LLM's responses with a web search whenever a tool like Bard answers a query. While it might be interesting to see what it will say, its answers are not guaranteed to be accurate.
Read on AppleInsider
Comments
i don’t trust Bard to get the right answer.
i don’t trust Google to tell the truth.
https://support.google.com/mail/answer/10434152?hl=en
If they were being dishonest about this, they would have been sued multiple times by various governments and targeted by a plethora of class-actions for lying about their privacy policy since then. Google is always under the microscope.
Your personal data remains your personal data.
Six years? Was that around the time they dropped 'Don't be evil' from the corporate policy? Remember, Google does all sorts of things "inadvertently", so, no doubt, we'll find out in a short while that Bard was "inadvertently" trained on Gmail data ... by a contractor ... who is no longer with the company. (Perhaps the same guy who was previously responsible for all the other egregious privacy invasions — they keep giving him another chance.)
I mean, come on, gatorguy, does Google really have any credibility at all on issues like this? We know you will always do the knee-jerk defense, but frankly, the stupid computer program has more credibility than Google or you.
Sorry but Google has been caught lying about this kind of thing WAY too many times to be credible.
I do find it surprising, though, that they haven't been sued over this.
Honestly, just come out and be public with your entire business model and let people decide if they can stomach all the things you're doing behind the scenes. As a friend of mine who worked for Google said after watching Black Mirror, "it's a little too close for comfort".
Everyone is entitled to believe what they wish, distrust or trust as the case might be. What should not be done is factual misrepresentation, particularly when a bit of research would have surfaced the truth of the matter.
Misstatements of fact should be pointed out and corrected. Opinions are what they are, and generally not something worth spending time on, since they are very resistant to change. Facts are a different animal.
If you're confused what a proper link might be, this is an example of one: https://support.google.com/mail/answer/10434152?hl=en
But let's assume your logic isn't as flawed as it seems to be.
The US has sued Google, therefore Google doesn't cooperate with them.
The EU has sued Google too. Therefore Google doesn't feed them either.
Korea, the UK, Japan, Russia, and probably a couple of others I can't think of at the moment have also sued Google. Therefore Google doesn't willingly deliver citizen data to any of them either.
(eye-roll)