I’m down with scraping, but “parses HTML with regex” has got me fucked up.
Chad scraper
Submitted 8 months ago by dramaticcat@sh.itjust.works to [deleted]
https://sh.itjust.works/pictrs/image/dcbf36a9-c675-411c-94c1-cbf74ab477cb.webp
Comments
indepndnt@lemmy.world 8 months ago
257m@sh.itjust.works 8 months ago
Relevant SO post. stackoverflow.com/…/regex-match-open-tags-except-…
ChickenLadyLovesLife@lemmy.world 8 months ago
13 years ago my god. I wonder what Jon Skeet is doing these days.
I remember when he passed me in the reputation ranking back in the early days and thinking that I needed to be a little bit more active on the site to catch him lol.
DAMunzy@lemmy.dbzer0.com 8 months ago
That was a great read. Thanks!
demodawg@lemmy.world 8 months ago
This is the way
tetelestia@lemmy.world 8 months ago
What’s wrong with parsing HTML with regex?
indepndnt@lemmy.world 8 months ago
In short, it’s the wrong tool for the job.
In practice, if your target is very limited and consistent, it’s probably fine. But as a general statement about someone’s behavior, it really sounds like someone is wasting a lot of time and regularly getting sub-par results.
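The classic failure mode is nesting, and it’s easy to show with nothing but the Python stdlib. A toy sketch (the HTML and the "post" class name are made up): the lazy regex silently truncates at the first closing tag, while even the bare html.parser module tracks depth correctly.

```python
import re
from html.parser import HTMLParser

# Toy page with one nested <div> -- the classic case regex mishandles.
page = '<div class="post"><div>nested reply</div> outer text</div>'

# Regex attempt: the lazy match stops at the FIRST </div>,
# so the captured content is silently truncated.
regex_result = re.search(r'<div class="post">(.*?)</div>', page).group(1)

class PostText(HTMLParser):
    """A real parser tracks tag depth, so nesting works out."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.parts = []

    def handle_starttag(self, tag, attrs):
        # Start counting at the target div; count nested divs after that.
        if tag == "div" and (self.depth or dict(attrs).get("class") == "post"):
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)

parser = PostText()
parser.feed(page)
parser_result = "".join(parser.parts)

print(regex_result)   # <div>nested reply   (truncated mid-element)
print(parser_result)  # nested reply outer text
```

On one rigid, known page the regex version may well limp along; the parser version keeps working when someone adds another level of nesting.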
ViscloReader@lemmy.world 8 months ago
Just a heads up for companies thinking it’s wrong to scrape: if you don’t want info to be scraped, don’t put it on the internet.
DAMunzy@lemmy.dbzer0.com 8 months ago
But, but, robots.txt!
Chad doesn’t care!
SeeMinusMinus@lemmy.world 8 months ago
The sad part is that scraping is often easier than using the API.
Anonymousllama@lemmy.world 8 months ago
Much less beholden to arbitrary rules also. Way too many times companies will just up and pull their API access or push through restrictions. No ty, I’ll just access it myself then
AeroLemming@lemm.ee 8 months ago
Cough Reddit cough
SeeMinusMinus@lemmy.world 8 months ago
API starter kit
- Outdated, unsupported, and not yet replaced, but still the standard way to use the service.
- Lots of authorization tokens.
- The example in the docs doesn’t work (if there is one).
- You have no idea where the online tutorial got its information, because it doesn’t link to any resources and the docs have barely anything even though they’re giant.
- Uses asynchronous programming to make it faster, but it’s still much, much slower than scraping without asynchronous programming.
clbustos@lemmy.world 8 months ago
So true that it hurts
lnee@lemm.ee 8 months ago
I scrape with bash lord help me.
OmnislashIsACloudApp@lemmy.world 8 months ago
there’s literally dozens of us!
or maybe just 2 idk
MaxVoltage@lemmy.world 8 months ago
as a windows user i say kindly on our behalf thank you for pushing the envelope ✉
darcy@sh.itjust.works 8 months ago
someone’s never used a good api. like mastodon
havokdj@lemmy.world 8 months ago
Hold on, I thought it was supposed to be realism on the virgin’s behalf and ridiculous nonsense on the chad’s:
All I see is realism on both sides lol
sebinspace@lemmy.world 8 months ago
I wanted to build a Discord bot that would check NIST for new CVEs every 24 hours. But their API leaves quiiiiiiite a bit to be desired.
Their pages, however…
khaffner@lemmy.world 8 months ago
Just use this github.com/CVEProject/cvelistV5/tree/main/cves
sebinspace@lemmy.world 8 months ago
Oh yeah, that’s much more robust
snek@lemmy.world 8 months ago
I used Twitter Scraper to get twitter data for my thesis. Shortly after, it became obsolete. github.com/taspinar/twitterscraper/issues/368
rip twitter scraper
lemmywizard@lemm.ee 8 months ago
It’s all fun and games until you have to support all this shit and it breaks weekly!
That being said, I do miss the simplicity of maintaining selenium projects for work
idiosynk@lemmy.world 8 months ago
I’ve just discovered selenium and my life has changed.
XEAL@lemm.ee 8 months ago
I created a shitty script (with ChatGPT’s help) that uses Selenium and can dump a Confluence page from work, all its subpages and all linked Google Drive documents.
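Selenium’s job in that kind of script is driving a real browser past JavaScript and logins; the crawl bookkeeping underneath is ordinary link collection. A stdlib-only sketch of that part (the /pages/ prefix and base URL are invented, not Confluence’s real layout; in the Selenium version the HTML would come from driver.page_source after the page renders):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Sort a page's links into subpages to crawl vs. Drive docs to fetch."""

    def __init__(self, base="https://wiki.example.com"):
        super().__init__()
        self.base = base
        self.subpages = []
        self.drive_docs = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        if href.startswith("/pages/"):          # hypothetical subpage pattern
            self.subpages.append(self.base + href)
        elif "drive.google.com" in href:        # linked Drive document
            self.drive_docs.append(href)

collector = LinkCollector()
collector.feed(
    '<a href="/pages/123">Child page</a>'
    '<a href="https://drive.google.com/file/d/abc">Linked doc</a>'
)
# collector.subpages  -> queue of pages to visit next
# collector.drive_docs -> documents to download separately
```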
Wakmrow@lemmy.world 8 months ago
How so?
idiosynk@lemmy.world 8 months ago
When a customer needs a part replaced, they send in shipping data. This data has to be entered into 3-4 different web forms and an email. This allows me to automate it all from a single form that has built-in error checking, so human mistakes are limited.
Company could probably automate this all in the backend but they won’t :shrug:
McBain@feddit.ch 8 months ago
I use scrapy. It has a steeper learning curve than other libraries, but it’s totally worth it.
rishado@lemmy.world 8 months ago
Splash ftw
Crashumbc@lemmy.world 8 months ago
ROFL, Chad only thinks that shit works
Irkam@jlai.lu 8 months ago
Rodeo@lemmy.ca 8 months ago
Why on earth would they have changed that? WEBooB is a way better name.
planish@sh.itjust.works 8 months ago
But it’s got boob in it.
chemicalwonka@discuss.tchncs.de 8 months ago
Let’s see what WEI (if implemented) will do to the scrapers. The future doesn’t look promising.
kadotux@sopuli.xyz 8 months ago
What’s that?
Username@feddit.de 8 months ago
A google/chrome proposal for browser verification, i.e. killing addons and custom browsers.
NigelFrobisher@aussie.zone 8 months ago
My undergrad project was a scraper - there just wasn’t a name for it yet.
newIdentity@sh.itjust.works 8 months ago
Scrapers have been a thing for as long as the web has existed.
One of the first ones was even called WebCrawler.
Touching_Grass@lemmy.world 8 months ago
Fuck, I think I’ve been doing it wrong and this meme gave me more things to learn than any YouTube video has
Rodeo@lemmy.ca 8 months ago
Memes have always been superior to YouTube videos
ArchTemperedKoala@lemmy.world 8 months ago
I have totally no idea what these are about…
Touching_Grass@lemmy.world 8 months ago
Websites and services create APIs for programmers to use them. So Spotify has code that lets you build a program that can use its features. But you need a token they give you after you sign up. The token can be revoked, and it’s used to monitor how much of their service you’re using. That way they can restrict you if it’s too much.
Scraping is raw dogging the web slut you met at the cougar ranch who went home with you because you reminded her of her dog
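Stripped of the metaphor, the only structural difference is what travels with the request. A stdlib sketch (the endpoint and token are invented for illustration, not Spotify’s real API):

```python
import urllib.request

# Hypothetical endpoint for illustration.
API_URL = "https://api.example.com/v1/tracks/123"

def build_api_request(token):
    """API call: carries a bearer token the service issued and can revoke."""
    return urllib.request.Request(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
    )

def build_scrape_request(url):
    """Scrape: a plain page fetch. No token, nothing for them to revoke."""
    return urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
```

Revoke the token and the first request dies instantly; the second keeps working until they change the page.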
blackluster117@possumpat.io 8 months ago
This is the greatest definition for scraping I’ve ever read. You should have it bronzed.
anarchy79@lemmy.world 8 months ago
Put like that I want to learn everything about it.
DarkSpectrum@lemmy.world 8 months ago
‘Scraping’ is the process of anonymously and programmatically collecting data from webpages, often without the website’s permission and limited only to the content made publicly available. This is in contrast to using an API provided by the database owner, which is limited by tokens, access volume, available endpoints, etc.
SternburgExport@feddit.de 8 months ago
Every time I think I’m good with tech, something like this shows up in my feed and makes me realize I know jackshit.
UraniumBlazer@lemm.ee 8 months ago
Sorry, I’m ignorant in this matter. Why exactly would you want to scrape websites aside from collecting data for ML? What kind of irreplaceable API are you using? Someone please educate me here.
InternetTubes@lemmy.world 8 months ago
So, where can I find the Chad scraper for reddit?
madcaesar@lemmy.world 8 months ago
How exactly do you make money scraping?
Sotuanduso@lemm.ee 8 months ago
By getting someone to hire you to do it.
anteaters@feddit.de 8 months ago
Mind blowing stuff
madcaesar@lemmy.world 8 months ago
No, I mean more: what is the use case where it would be worth scraping on a massive scale?
Hawk@lemmynsfw.com 8 months ago
Imagine an investment firm looking at a property market. They need data like price trends in the surrounding area.
Real estate APIs are expensive; scraping is free. By hiring an employee to scrape instead, they can save money.
Theharpyeagle@lemmy.world 8 months ago
There’s a ton of money to be made from scraping, consolidating, and organizing publicly accessible data. A company I worked for did it with health insurance policy data because every insurance company has a different website with a different data format and data that updates every day. People will pay da big bux for someone to wrap all that messiness into a neat, consistent package.
Right now, gathering machine learning data is hot, cause you need a lot of it to train a model. Companies may specialize in getting, say, social media posts from all kinds of sites and putting them together in a consistent format.
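The “neat, consistent package” work is mostly schema normalization: one small adapter per messy source, all emitting the same shape. A toy sketch with invented record layouts:

```python
# Two invented layouts for the same underlying policy facts.
raw_a = {"plan": "Bronze", "premium_usd": "310.00", "deductible": 6000}
raw_b = {"planName": "bronze", "monthlyPremium": 310, "annualDeductible": "6000"}

def normalize_a(rec):
    # Adapter for source A's layout -> the common schema.
    return {
        "plan": rec["plan"].lower(),
        "premium": float(rec["premium_usd"]),
        "deductible": int(rec["deductible"]),
    }

def normalize_b(rec):
    # Adapter for source B's layout -> the same common schema.
    return {
        "plan": rec["planName"].lower(),
        "premium": float(rec["monthlyPremium"]),
        "deductible": int(rec["annualDeductible"]),
    }

# Downstream consumers only ever see the one consistent shape.
assert normalize_a(raw_a) == normalize_b(raw_b)
```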
lnee@lemm.ee 8 months ago
That’s why I use geddit
redw04@lemmy.ca 8 months ago
So uh…as someone who’s currently trying to scrape the web for email addresses to add to my potential client list … where do I start researching this?
lutillian@sh.itjust.works 8 months ago
Start looking into Selenium, probably in Python. It’s one of the easier-to-understand forms of scraping. It’s mainly used for web testing, though you can definitely use it for less… nice purposes.
PieMePlenty@lemmy.world 8 months ago
Step one will be learning to code in any language. Step two is using a library to help with it, and don’t use regex like the meme says haha. HtmlAgilityPack has always been there for me.
Rodeo@lemmy.ca 8 months ago
Virgin library user vs. Chad regex dev
bill_1992@lemmy.world 8 months ago
Everyone loves the idea of scraping, no one likes maintaining scrapers that break once a week because the CSS or HTML changed.
DigitalPaperTrail@kbin.social 8 months ago
spite can be a great motivator, though
Anonymousllama@lemmy.world 8 months ago
This one. One of the best motivators. Sense of satisfaction when you get it working and you feel unstoppable (until the next subtle change happens anyway)
archomrade@midwest.social 8 months ago
I feel this
camr_on@lemmy.world 8 months ago
I loved scraping until my IP was blocked for botting lol. I know there are ways around it, it’s just work though
pennomi@lemmy.world 8 months ago
I successfully scraped millions of Amazon product listings simply by routing through TOR and cycling the exit node every 10 seconds.
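A sketch of the rotation bookkeeping (stdlib only; the 10-second interval mirrors the comment, and the actual circuit change would be a NEWNYM signal over Tor’s control port, e.g. via the stem library, with fetches routed through the local SOCKS proxy):

```python
import time

# Real fetches would go through Tor's local SOCKS proxy
# (e.g. socks5h://127.0.0.1:9050); only the rotation timing is shown.

class ExitNodeRotator:
    """Say when enough time has passed to request a fresh circuit."""

    def __init__(self, interval=10.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.last = clock()

    def should_rotate(self):
        now = self.clock()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False

# Demo with a fake clock so the logic is checkable without Tor running.
ticks = [0.0]
rotator = ExitNodeRotator(interval=10.0, clock=lambda: ticks[0])
first = rotator.should_rotate()   # 0 s elapsed -> False
ticks[0] = 10.0
second = rotator.should_rotate()  # 10 s elapsed -> True, timer resets
```

Each fetch would call should_rotate() first and fire the control-port signal when it returns True.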
Touching_Grass@lemmy.world 8 months ago
You guys use IPs?
dangblingus@lemmy.world 8 months ago
Or in the case of Wikipedia, every table on successive pages of sequential data is formatted differently.
Matriks404@lemmy.world 8 months ago
Just use AI to make changes ¯_(ツ)_/¯
anarchy79@lemmy.world 8 months ago
Here take these: \\