This is exactly the kind of tool I’d expect AI to be useful for: it goes through a massive amount of freshly digitized data, scanning for and flagging for human action and/or review whatever a human has specified it should identify in the batch.
Basically, it’s AI doing data-processing drudge work at a speed no human could ever hope to approach.
Do I think the AI should be doing these tasks unsupervised? Absolutely not! But the fact of the matter is, the AIs are being supervised in this task by human clerks, who are, at least in theory, expected to read each deed over and make sure it still makes some sort of legal sense, and that the AI didn’t just cut out some harmless turn of phrase written into the covenant that actually has no racist meaning, intention, or function. I’m assuming a lot of good faith here, but I’m guessing that if it ever became an issue, the human guiding the AI through these mass edits could simply pull out the physical original document and see which language was originally there.
To be clear: I do think it’s a good thing that the law is mandating these kinds of edits to property covenants in general, to bring them more in line with modern law.
t3rmit3@beehaw.org 1 month ago
This is an awesome use of an LLM. Talk about the cost savings of automation, especially when the alternative was the reviews just not getting done.
Killer_Tree@beehaw.org 1 month ago
Specialized LLMs trained for specific tasks can be immensely beneficial! I’m glad to see some of that happening instead of “Company XYZ is now needlessly adding AI to its products because buzzwords!”
knightly@pawb.social 1 month ago
Given the error rate of LLMs, it seems more like they wasted $258 and a week that could have been spent on a human review.
OmnipotentEntity@beehaw.org 1 month ago
LLMs are bad at the uses they’ve recently been pushed toward, yes. But this is legitimately a very good use of them: natural language processing, within a narrow scope, with a specific intention. This is exactly what they can be good at. Even if it does have a high false negative rate, that’s still thousands and thousands of true positive cases that were addressed quickly and cheaply, and that a human auditor no longer needs to touch.
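To put purely hypothetical numbers on it: if a county had, say, 100,000 deeds with racist covenants and the model only caught 70% of them, that’s still 70,000 documents fixed quickly and cheaply, and the human reviewers could spend their time sampling for the misses instead of reading every deed from scratch.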
t3rmit3@beehaw.org 1 month ago
Apart from just a general dislike of LLMs, what specifically do you believe would make this particular use prone to errors?
dan@upvote.au 1 month ago
The article didn’t say if it was an LLM or not.
howrar@lemmy.ca 1 month ago
Considering that it’s a language task, that LLMs exist, and the cost, it’s a reasonable assumption. It’d be pretty silly to analyse a bag of words when you have tools that, with minimal work, give much better results. Even sillier to spend over $200 on something that could run on a decade-old machine in a few hours.
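For the curious, the bag-of-words-style baseline I’m dismissing would look something like this keyword scan. A minimal sketch only; the phrase list and sample deed text are invented for illustration:

```python
import re

# Invented examples of covenant phrases to flag; a real project would
# curate this list from actual historical deed language.
FLAGGED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\bcaucasian race\b",
        r"\bshall not be sold to\b",
        r"\bpersons? of [\w\s]{0,20} descent\b",
    )
]

def flag_deed(text: str) -> list[str]:
    """Return every matching snippet in one deed's text."""
    return [m.group(0) for pat in FLAGGED_PATTERNS for m in pat.finditer(text)]

# Made-up deed text, purely for illustration.
deed = "The premises shall not be sold to any person not of the Caucasian race."
hits = flag_deed(deed)
if hits:
    print("Flag for human review:", hits)
```

Cheap and fast to run, sure, but brittle: any phrasing outside the list is a silent false negative, which is exactly the gap an LLM closes.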