Comment on Are there any AI services that don't work on stolen data?
partial_accumen@lemmy.world 1 week ago
Are there any AI services that don’t work on stolen data?
Yes, absolutely, but I don’t think that’s the question you want the answer to. There is plenty of AI used inside companies or hobby projects where the problem to be solved is so specific that other people’s stolen data wouldn’t help you anyway.
Let’s say you’re a company that sells items at retail online, like a Walmart or Amazon. You want an AI model to help your workers better select the size of box to pack various items in for shipment to customers. You would input your past shipment data, including the dimensions of the products you sell (so that data isn’t stolen), and all of the box sizes you stock (they’re your boxes, so also not stolen). You could then train a simple classification model on that history. The next time you have a set of items to ship out, you’d input those items and the model would tell you the best box size to use. No stolen data in any of this.
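For illustration only, here is a minimal sketch of what such an in-house model could look like, assuming a Python environment with scikit-learn. The features (total item volume, longest dimension, item count), the box labels, and every number are invented stand-ins for a retailer’s own shipment history, not anything real.

```python
# Hypothetical sketch: pick a box size from item measurements using only
# the retailer's own shipment history (no third-party data involved).
# Assumes scikit-learn is installed; all column choices and values are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Past shipments: total volume (cm^3), longest single dimension (cm), item count,
# all derived from the retailer's own product catalog and order history.
X_train = np.array([
    [1200.0, 30.0, 2],
    [450.0,  15.0, 1],
    [9800.0, 60.0, 5],
    [300.0,  10.0, 1],
    [5200.0, 45.0, 3],
])
# Box size actually used for each of those past shipments.
y_train = ["medium", "small", "large", "small", "medium"]

model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)

# A new order to pack, with features computed the same way as the training rows.
new_order = np.array([[4100.0, 40.0, 2]])
print(model.predict(new_order))  # e.g. ['medium']
```

Any similarly simple classifier would do here; the point is only that every byte of training data is the company’s own.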
Now, the question I think you’re asking is actually:
“Are there any LLM AI chatbot services that don’t work on stolen data?”
That one I don’t know the answer to. Most of the models we’re given for setting up chatbots are pretrained by the vendor, and you simply feed in your additional data to make them knowledgeable about specific niche subjects.
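To make that “feed in your additional data” step concrete, here is a minimal, hypothetical sketch: a toy retrieval step picks the most relevant in-house note and prepends it to the question before calling the vendor’s pretrained model. The notes, the word-overlap retrieval, and `call_vendor_model` are invented placeholders, not any particular vendor’s interface.

```python
# Hypothetical sketch of augmenting a vendor-pretrained chatbot with your own data:
# retrieve the most relevant in-house note and prepend it to the user's question.

# Your own niche documents (not part of the vendor's pretraining data).
notes = [
    "Box type B-3 fits items up to 45 cm on the longest side.",
    "Fragile items always ship in double-walled cartons.",
    "Returns older than 30 days go to the refurbishment queue.",
]

def call_vendor_model(prompt: str) -> str:
    # Stand-in only; a real deployment would call the vendor's chat API here.
    return f"(model response based on a prompt of {len(prompt)} characters)"

def most_relevant(question: str) -> str:
    """Pick the note sharing the most words with the question (toy retrieval)."""
    q_words = set(question.lower().split())
    return max(notes, key=lambda n: len(q_words & set(n.lower().split())))

def answer(question: str) -> str:
    context = most_relevant(question)
    prompt = f"Use this company note to answer.\nNote: {context}\nQuestion: {question}"
    return call_vendor_model(prompt)

print(answer("Which box fits a 40 cm item?"))
```

Your own notes stay your own, but the pretrained model underneath is still whatever the vendor trained it on.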
BartyDeCanter@lemmy.sdf.org 1 week ago
Exactly this. There are plenty of ML/AI systems built on public datasets, such as AlexNet for image recognition, and even some LLMs trained on out-of-copyright documents like the Project Gutenberg collection. But they almost certainly aren’t what you are looking for.
velummortis@lemmy.dbzer0.com 1 week ago
No, these seem like actually good examples - I’d be interested to use those if they’re publicly available