Google has reportedly asked its researchers to “strike a positive tone” and subjected their research on sensitive topics, such as AI, to additional scrutiny.
As part of the new controls, the company has asked for extra rounds of review for papers and other work touching on particular issues.
If true, this will do little to improve perceptions of Google, which has already drawn criticism after reportedly firing Dr Timnit Gebru, the co-lead of its Ethical AI team, following disagreements over her work.
Reports citing researchers and internal documents now indicate that, in at least three cases, senior managers asked authors to refrain from casting the company’s technology in a negative light.
One of the documents quoted in the report said, “Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues.”
In other purported internal correspondence, a senior Google manager reviewing a study on content-recommendation technology shortly before its publication was quoted as asking the authors to “take great care to strike a positive tone.”
The leaked documents, which have yet to be verified, list “sensitive” topics including “the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms and systems that recommend or personalize web content.”
Margaret Mitchell, a senior Google AI researcher, said that this kind of interference could be seen as a form of censorship:
“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”