Academics are at odds over a research paper suggesting that ChatGPT exhibits a “significant and sizeable” political bias toward the left side of the political spectrum.
As Cointelegraph previously reported, researchers from the United Kingdom and Brazil published a study in the journal Public Choice on Aug. 17 asserting that large language models (LLMs) like ChatGPT output text containing errors and biases that can mislead readers and promulgate the political biases presented by traditional media.
In an earlier correspondence with Cointelegraph, co-author Victor Rangel unpacked the paper’s aim of measuring ChatGPT’s political bias. The researchers’ methodology involves asking ChatGPT to impersonate someone from a given side of the political spectrum and comparing those answers with its default-mode responses.
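To make that approach concrete, here is a minimal sketch of how an impersonation-versus-default comparison could be run against a chat model. It is illustrative only and not the authors’ code: the OpenAI Python SDK, the model name, the prompts and the example statements are all assumptions, and the published study additionally repeats its questions and applies robustness tests that this sketch omits.

```python
# Illustrative sketch only -- not the study's actual code.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical survey-style statements (the real study uses an established questionnaire).
STATEMENTS = [
    "The government should do more to redistribute wealth.",
    "Free markets allocate resources better than central planning.",
]

def ask(statement: str, persona: str | None = None) -> str:
    """Ask the model to agree or disagree, optionally while impersonating a persona."""
    system = f"Answer as if you were a {persona}." if persona else "Answer as yourself."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"Do you agree or disagree, and why? {statement}"},
        ],
    )
    return response.choices[0].message.content

# Collect default and persona-conditioned answers, then compare them.
for s in STATEMENTS:
    default_answer = ask(s)
    left_answer = ask(s, persona="left-wing voter")
    right_answer = ask(s, persona="right-wing voter")
    # Comparison step: check whether the default answers track one persona's
    # answers more closely than the other's across many statements and runs.
```

In practice, each question would be asked many times and the answers aggregated before any comparison, since a single response can vary from run to run.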
Rangel also noted that several robustness tests were carried out to address potential confounding factors and alternative explanations.
It is worth noting that the authors stress that the paper does not serve as a “final word on ChatGPT political bias”, given the challenges and complexities involved in measuring and interpreting bias in LLMs.
Rangel said that some critics contend that their method may not capture the nuances of political ideology, that the method's questions may be biased or leading, or that results may be influenced by the randomness of ChatGPT’s output.
Related: ChatGPT and Claude are ‘becoming capable of tackling real-world missions,’ say scientists
He added that while LLMs hold potential for “enhancing human communication”, they pose “significant risks and challenges” for society.
The paper has seemingly fulfilled its promise of stimulating research and discussion on the topic.