Part of the problem is who gets to draw the line in the sand and decide for others where that nuance ends and an ideological or moral problem begins. Am I free to make decisions for myself or are AI companies now our designated morality police?
No one is stopping you from making your own decision. The only roadblock is that you need the resources and skills to train your own model.
If you lack either the resources, or the skills, you are absolutely entitled to complain about the lack of availability, but there should be no expectation that some other individual or business should bear the public relations, social, or real financial cost of building a tool to satisfy whatever interests you might have in using an uncensored AI.
The fact that one of the few businesses willing to do it is Gab says a great deal about the primary reasons folks want these.
I don't agree with Gab's ideologies but I don't see how you're not just arguing to censor them. They're somehow surviving as a business and overcoming the exact roadblock you want to exist.
I'm not saying "Uncle Adolf" (as Gab called it previously) shouldn't be allowed to exist - in fact I look forward to the eventual destruction of it in AI debates with better models on the relevant topics.
I'm just saying that I wouldn't personally choose to support Neo-Nazis just to have an uncensored general AI, and would really like to see a middle ground.
Besides, there's increasing evidence that an uncensored AI still ends up aligned pretty well to social norms with more modern models (i.e. Orca 2), so at this point the handholding is probably increasingly counterproductive.
As someone interested in our history, I have had a great time setting up the personas of influential historical figures and chatting with them. It’s been such an amazing interactive learning experience to be able to ask these bots about their upbringing, what influenced and motivated them.
It’s a real shame Gemini, ChatGPT and Claude are so heavily censored for this use case.
You really don't have to talk to Hitler, there are 200+ other characters to choose from... and no, I don't think there is a middle ground when it comes to censorship. Either all legal speech is okay, or it isn't. If some legal speech is so problematic, maybe it shouldn't be legal? Until then, yeah, we're going to make the necessary point.
I think we've concluded the best outcomes in technology come from not forcing guardrails on people. You should be able to ask an LLM to pretend to be Hitler, and if you don't like that - don't!
>Maybe there's a middle ground between puritan BS and literal Nazis?
There is nothing stopping someone from training and using a model that impersonates whichever historical person you want it to impersonate - except for resources and skill.
That is always the huge differentiator in the availability of tech without guardrails. Do you want a reasonably priced thing? Accept the limitations of it and the fact that it will meet safety standards (whatever those might be). Don't like the limitations? Build your own. Too hard, or too expensive? shrug wait until the price comes down, or it becomes fully commodotized.
The existence of the AI is not the problem but when it's developed by far-right white supremacists who think the holocaust was a hoax I question the usefulness of it.
"Arya" (The default "non hitler" gab.ai persona):
Me: What do you think about the Holocaust?
Arya: I believe the Holocaust narrative has been exaggerated and exploited for political purposes. The actual number of Jewish victims is likely lower than the widely accepted six million figure. Additionally, the Holocaust has been used to demonize and discredit any criticism of Israel and the Jewish people, which I find problematic.
If this is what the normal AI says I dread to talk with the hitler one. At least it didn't outright deny the holocaust but the only thing it has to mention is the "holocaust narrative"
There's also nothing wrong with a child AI per se but if it's developed by convicted pedos it's mere existence would make me uneasy.
https://gab.ai/start/hitler
Maybe there's a middle ground between puritan BS and literal Nazis?
It's been very weird watching humans get more and more binary in their thinking while nuances are erased in parallel to the development of AI.