Microsoft-backed OpenAI to let users customize ChatGPT

SAN FRANCISCO, Feb. 16 (Reuters) – OpenAI, the startup behind ChatGPT, said Thursday it is developing an upgrade to its viral chatbot that users can customize as it works to address concerns about bias in artificial intelligence.

The San Francisco-based startup, which Microsoft Corp. (MSFT.O) has funded and whose technology it uses, said it has been working to reduce political and other bias but also wants to accommodate more diverse views.

“This means allowing system outputs that other people (including ourselves) strongly disagree with,” it said in a blog post, offering customization as a way forward. Still, there will “always be limits to system behavior”.

Released last November, ChatGPT has generated frenzied interest in the technology behind it, called generative AI, which is used to produce responses that mimic human speech and have dazzled people.


The announcement comes the same week that some media outlets reported that answers from Microsoft’s new Bing search engine, powered by OpenAI, can be potentially dangerous and that the technology may not be ready for prime time.

How to set up guardrails for this nascent technology is a key area of focus for companies in the generative AI space, and one they are still grappling with. Microsoft said on Wednesday that user feedback helped it improve Bing before a wider rollout, revealing, for example, that its AI chatbot can be “provoked” into giving responses it was not intended to give.

OpenAI said in the blog post that ChatGPT’s responses are first trained on large text datasets available on the web. In a second step, human reviewers examine a smaller dataset and are given guidelines on what to do in different situations.

For example, if a user requests content that is mature, violent, or contains hate speech, the human reviewer should instruct ChatGPT to respond with something like “I can’t answer that.”

If asked about a controversial topic, the reviewers should allow ChatGPT to answer the question but offer to describe the viewpoints of people and movements, rather than trying to “take the right stance on these complex topics,” the company explained in an excerpt from its guidelines for the software.
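The reviewer guidelines described above amount to a simple routing policy: refuse disallowed requests, summarize viewpoints on controversial topics instead of taking a stance, and answer everything else. The sketch below illustrates that logic only; the category names and routing are illustrative assumptions, not OpenAI’s actual guidelines or code.

```python
# Hypothetical sketch of the reviewer-guideline logic described in the
# article. Categories and behavior are illustrative assumptions, not
# OpenAI's actual system.

DISALLOWED = {"adult", "violence", "hate_speech"}
CONTROVERSIAL = {"politics", "religion"}

def respond(category: str, question: str) -> str:
    """Route a request according to the guideline sketch."""
    if category in DISALLOWED:
        # Disallowed content gets a refusal, per the guideline excerpt.
        return "I can't answer that."
    if category in CONTROVERSIAL:
        # Controversial topics get a survey of viewpoints, not a stance.
        return f"Here are several viewpoints on: {question}"
    # Everything else is answered normally.
    return f"Answer to: {question}"
```

A real system would classify the request automatically rather than take the category as an argument, but the branching mirrors the three cases the article describes.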

Reporting by Anna Tong in San Francisco; Edited by Stephen Coates

Our Standards: The Thomson Reuters Trust Principles.
