Google quietly removed a search feature that used AI to surface health advice drawn from anonymous internet users, months before the public learned of its discontinuation. “What People Suggest” used AI to organize community health perspectives and display them to users conducting health searches. Its removal, confirmed by Google and corroborated by three insiders, has renewed debate about transparency in health AI.
The feature was one of several health AI initiatives introduced at Google’s “The Check Up” event in New York last year. Karen DeSalvo, then Google’s chief health officer, described it as an empathetic tool that acknowledged how people actually search for health information: by seeking community validation as much as clinical data. It was rolled out first to mobile users in the United States.
Google attributed the removal to a simplification of search results rather than to any problem with the feature’s safety or quality. That explanation drew skepticism when the company pointed to a blog post as its public announcement of the change, a post that made no mention of the discontinued feature. Journalists and health policy observers have called out this communication gap.
The removal came during a period of intense scrutiny of health misinformation in Google’s AI tools. An investigation this year found that AI Overviews were serving false health information to a user base of approximately two billion people per month. Although Google removed AI Overviews from certain health searches following the investigation, broader concerns about AI health accuracy on the platform remain unresolved.
As Google prepares to announce new health AI developments at its upcoming event, the story of “What People Suggest” stands as a reminder of the need for accountability. Health AI products must be developed, maintained, and retired with the transparency that a subject as sensitive as medicine demands. Google has not yet fully demonstrated that it meets that standard.

