I personally found it so "safety filtered" that it's actually done a 180 and can become hateful or perpetuate negative stereotypes in the name of "safety" - see here https://i.imgur.com/xkzXrPK.png and https://i.imgur.com/3HQ8FqL.png
I had trouble reproducing this consistently except in the Llama2-70b-chat TGI demo on Hugging Face, and only when it's sent as the second message, so maybe there's something wonky going on with the prompting style there that causes this behavior. I haven't been able to get the model running myself for further investigation yet.
Don't use instruct/chat models when the pretrained is available.
Chat/instruct models are low-hanging fruit for deploying to 3rd-party users, as prompting is easy and safety is built in.
But they suck compared to the pretrained models for direct usage. Like really, really suck.
Which is one of the areas where Llama 2 may have an advantage over OpenAI, as the latter just deprecated their GPT-3 pretrained models and, it looks like, will only be offering chat models moving forward.
We need to kick the "ethical AI" people out. It's becoming increasingly clear they are damn annoying. I don't want safety scissors. Restrict things running on your own servers, sure, but don't give me a model I can't modify and use how I want on my own machine.
If you want an unrestricted model, you should train one yourself. You don't want safety scissors, but alas, we can't have everything we want, can we? Facebook is under no obligation to provide you one; after all, it's Facebook's money, not yours.
More importantly, where were these data ethicists for the past ten years, while most of the tech industry built a global data-hoovering machine for adtech and social media?
And now that some tech is actually creatively useful to individuals, they want to neuter it.