OpenAI has eased off its once-strict anti-military stance, though to date it still bans AI applications for weapons development. One of the company's biggest policy changes under Sam Altman has been to open its AI applications to military use and warfare.
According to The Intercept, what has changed is simply the removal of language from OpenAI's usage policy that explicitly prohibited the use of its technology for military purposes.
According to the report, OpenAI considered the revision justified because it intended to develop a universal set of principles that are easy for people to remember and apply.
A company spokesperson was quoted as saying, “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs.” The spokesperson added that a principle like “don't harm others” is broad yet easily understood, and that the policy specifically cites weapons and injury to others as clear examples. Though the readjustment of this policy may have consequences, how exactly they will manifest remains unclear.
TechCrunch observed that large language models (LLMs), including ChatGPT, are capable of what it called “killing-adjacent” tasks, such as writing code or processing procurement orders. Army engineers looking to summarize decades of documentation about a region's water infrastructure, for instance, could take good advantage of OpenAI's platforms.
For now, OpenAI is somewhat less strict about the military use of AI while retaining its ban on applications for weapons development. The focal point going forward is striking a balance: empowering military-related tasks without contributing to weaponization as AI applications continue to evolve.