The company behind ChatGPT announced it will award ten grants of $100,000 to teams around the world to develop a democratic process for determining AI rules.
OpenAI, the company behind the artificial intelligence chatbot ChatGPT, has launched an initiative aimed at bringing more democratic input to AI development.
In the official announcement on May 25, the company said it is preparing to award ten grants worth $100,000 each toward experiments in setting up a “proof-of-concept” democratic process for determining rules for AI systems to follow.
According to OpenAI, the rules should operate “within the bounds defined by the law” and should benefit humanity.
“This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence,” the company said.
The company said the experiments will be used as the basis for a more “global” and “ambitious” project in the future. It also said that conclusions from the experiments will not be binding, but rather used to explore important questions surrounding AI governance.
The grant is provided by the non-profit arm of OpenAI. It said the results of the project will be free and accessible to the public.
Related: OpenAI launches official ChatGPT app for iOS, Android coming ‘soon’
This comes as governments around the world are seeking to implement regulations on general-purpose generative AI. Sam Altman, CEO of OpenAI, has recently met with regulators in Europe to stress the importance of non-restrictive regulations so as not to hinder innovation.
A week prior, Altman testified before the United States Congress with a similar message.
In the announcement for the new grant program, OpenAI echoed the sentiment that laws should be tailored to the technology and that AI needs “more intricate and adaptive guidelines for its conduct.”
It gave example questions such as “How should disputed views be represented in AI outputs?” and said that no single individual, company or country should dictate such decisions.
OpenAI has previously warned that if AI is not developed in a mindful manner, a superhuman form of AI could arise within a decade, and that developers “have to get it right.”
Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?