Comment on It’s practically impossible to run a big AI company ethically: Anthropic was supposed to be the good guy. It can’t be — unless government changes the incentives in the industry.

MagicShel@programming.dev 4 months ago

I think I’ve said a lot in comments already, and I’ll leave all of that as-is rather than relitigate it just for argument’s sake.

However, I wonder if I haven’t made clear that I’m drawing a distinction between the model that generates the raw output and the application that puts the model to use. I have an application that generates output via the OAI API and then scans both the prompt and the output to make sure they’re appropriate for our particular use case.
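That pattern can be sketched roughly like this. This is a minimal, hypothetical illustration, not the actual application: `call_model`, `is_appropriate`, and `BLOCKED_TOPICS` are stand-in names, and a real deployment would call the provider's moderation endpoint rather than a keyword list.

```python
BLOCKED_TOPICS = {"romance", "medical advice"}  # hypothetical policy list


def is_appropriate(text: str) -> bool:
    """Stand-in content check; a real app would call a moderation API."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)


def call_model(prompt: str) -> str:
    """Placeholder for the actual model API call (e.g. a chat completion)."""
    return f"Echoing: {prompt}"


def guarded_completion(prompt: str, fallback: str = "Sorry, I can't help with that.") -> str:
    # Screen the prompt before sending it to the model at all.
    if not is_appropriate(prompt):
        return fallback
    output = call_model(prompt)
    # Screen the model's raw output before it reaches the user.
    if not is_appropriate(output):
        return fallback
    return output
```

The point is that the filtering lives in the application layer, wrapped around an unmodified model, rather than being baked into the model's weights.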

Yes, my product is 100% censored and I think that’s fine. I don’t want the customer service bot (which I hate but that’s an argument for another day) at the airline to be my hot AI girlfriend. We have tools for doing this and they should be used.

But I think the models themselves shouldn’t be heavily steered because it interferes with the raw output and possibly prevents very useful cases.

So I’m just talking about fucking up the model itself in the name of safety. ChatGPT walks a fine line because it’s a product, not a model, but without access to the raw model it needs to be relatively unfiltered to be of use; otherwise other models will make better tools.
