OpenAI

STOP WORKING FOR THE US MILITARY

AI FOR PEACE, NOT WAR

On January 9th, OpenAI's usage policy stated that its models could not be used for “activities that have a high chance of causing harm”, such as “military and warfare”.

On January 10th, that changed, and they took the Pentagon as a client.


On February 12 at 4:30 PM, we will visit their HQ and demand OpenAI end its relationship with the military.

All are Welcome!

Bring your own sign or make one with us! There will be a sign-making party beforehand. RSVP "Going" or "Interested" to be added to a chat where the time and location will be announced.

Join us for drinks after the protest.


If their ethical and safety boundaries can be revised out of convenience, OpenAI cannot be trusted.

The protesters demand that:

  • OpenAI stop working with the Pentagon and restore its previous usage policy, as it stood two months ago.

  • Governments regulate the development of AGI; companies cannot police themselves. OpenAI must stop lobbying for exemption from regulations intended to provide genuine oversight.

AI is becoming more powerful far faster than virtually any AI scientist predicted. Billions are being poured into AI capabilities, and the results are staggering. New models outperform humans in many domains. As capabilities increase, so do the risks. Scientists are even warning that AI could end up destroying humanity.

According to their charter, “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at all economically valuable work—benefits all of humanity.” But many humans value their work and find meaning in it, and hence do not want their jobs to be done by an AGI instead. What protest co-organizer Sam Kirchner of No AGI calls “the Psychological Threat” applies even if AGI doesn't kill us.


What do you hope to accomplish? 

We want OpenAI to restore the boundary its usage policy previously set, which prevented its models from being used by militaries. Furthermore, we would like safety boundaries to be set externally by governments, not by AGI companies themselves.

How can OpenAI stop working with militaries? 

They can go back to the way things were two months ago and refuse to serve military clients. At the very least, they could agree not to take on any further military contracts.

What do you think about the OpenAI Board situation in November 2023?

The incident shows that even well-intentioned self-governance by AGI companies will not cut it. We don’t know what happened, because OpenAI and Sam Altman do not want to share that information. But we do know that in June of last year, Sam Altman told reporters that “the Board [of OpenAI] can fire me. I think that’s important.” In November, the Board attempted to remove him, yet, in the end, he couldn’t be removed.
