Comment on Daily Discussion Thread: 📸📽️🎥 Saturday, 11 January, 2025

Baku@aussie.zone 3 weeks ago
This came out longer than anticipated and I'm a bit too smooth-brained at the moment to remove all the guff and rephrase. Sorry. Not a rant! Just a livestream of consciousness, basically.

I couldn’t figure out how to work their API. I got an API key and all that, but things just weren’t working. There’s a set of Save Page Now utilities I could use API-free, but they’re all Linux shell scripts, and I couldn’t figure out how to run them on Windows without messing around with WSL (a bit beyond my capabilities). When I tried them on my MacBook they worked, but from memory not how I wanted. I also found the IA’s documentation to be missing, difficult to find, or outdated in a lot of areas, which meant that when I last tried to get GPT to work it out, it was using deprecated API calls and an outdated authentication method, and I couldn’t make it work much better myself.

Could probably give it another go. Having it take the URLs from the CSV could work (there’s a rough sketch of that idea at the bottom of this comment). But anything before that (like crawling) doesn’t work the best, because some of the things I archive require manual intervention anyway to properly extract all URLs. For instance, Lemmy threads start auto-collapsing after 300 comments, so they need to be expanded to retrieve comment links (there’s a sketch for that below too), and photos hidden in spoilers need to be expanded to retrieve the image URL. That sort of thing. Possible to automate, but it would probably take more time to automate than it would save compared to just doing it manually.

I did actually attempt to get GPT to make a crawler for a completely different purpose once, and it didn’t work. I don’t remember exactly what went wrong, but from memory it was misinterpreting status codes and couldn’t parse the HTML properly. Easier to just fork somebody else’s crawler and modify it to work with the other scripts, I guess.

Also, importing it into a sheet doesn’t actually take that much work. It’s basically 3 mouse clicks, then heading to the IA’s sheet batch archiving page and pasting in the URL. Their batch processing is a bit inefficient and can take a few days; done through the API it could definitely be faster, with some smart logic to avoid going over daily archive caps and a queueing system. But those few days don’t require any active energy on my part. It keeps processing in the background at a rate of a row or two a minute, then they send me an email once it’s done.
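Since I floated the CSV idea above, here’s a minimal sketch of what that could look like in Python, assuming the current Save Page Now 2 endpoint (https://web.archive.org/save) and the `LOW accesskey:secret` auth header from archive.org’s S3 keys page. The file name, pacing, and error handling are placeholders, not a tested workflow:

```python
import csv
import time
import requests

# Keys come from https://archive.org/account/s3.php (values here are placeholders)
ACCESS_KEY = "YOUR_ACCESS_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"

HEADERS = {
    "Accept": "application/json",
    "Authorization": f"LOW {ACCESS_KEY}:{SECRET_KEY}",
}

def save_url(url: str) -> dict:
    """Submit one URL to Save Page Now 2; the JSON reply includes a job_id on success."""
    resp = requests.post("https://web.archive.org/save",
                         headers=HEADERS, data={"url": url})
    resp.raise_for_status()
    return resp.json()

def main() -> None:
    # urls.csv: a hypothetical one-column CSV, one URL per row
    with open("urls.csv", newline="") as f:
        urls = [row[0] for row in csv.reader(f) if row]

    for url in urls:
        try:
            job = save_url(url)
            print(f"{url} -> {job.get('job_id', job)}")
        except requests.HTTPError as e:
            print(f"{url} failed: {e}")
        time.sleep(15)  # crude pacing; a real queue would track daily caps

if __name__ == "__main__":
    main()
```

The `sleep` is just a stand-in for the smarter cap-aware queueing I mentioned; it keeps submissions slow enough to not hammer the service.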
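And for the Lemmy auto-collapse problem specifically, the comment links could in theory come from Lemmy’s HTTP API instead of expanding everything in the browser. A rough sketch, assuming `/api/v3/comment/list` paginates with `page`/`limit` and that each comment’s `ap_id` is its permalink (the instance and post ID are made up):

```python
import requests

INSTANCE = "https://aussie.zone"  # hypothetical instance
POST_ID = 12345                   # hypothetical post ID

def fetch_comment_urls(instance: str, post_id: int) -> list[str]:
    """Page through /api/v3/comment/list and collect comment permalinks."""
    urls, page = [], 1
    while True:
        resp = requests.get(
            f"{instance}/api/v3/comment/list",
            params={"post_id": post_id, "limit": 50, "page": page, "sort": "Old"},
            timeout=30,
        )
        resp.raise_for_status()
        comments = resp.json()["comments"]
        if not comments:
            break
        # ap_id is the comment's canonical (federated) URL
        urls.extend(c["comment"]["ap_id"] for c in comments)
        page += 1
    return urls

if __name__ == "__main__":
    for url in fetch_comment_urls(INSTANCE, POST_ID):
        print(url)
```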
