The idea of AI-automated job interviews sickens me. How little of a fuck do you have to give about applicants that you can't even be bothered to have a single person interview them??
Automation
Submitted 5 months ago by gedaliyah@lemmy.world to [deleted]
https://lemmy.world/pictrs/image/d0b32229-85da-4cbb-817e-6cbd7c42ab83.png
Comments
Fisch@discuss.tchncs.de 5 months ago
But god forbid the applicant didn’t spend hours researching every little detail about a company, writing a perfect letter full of information that could have just been bullet points, and explaining exactly why they absolutely love the company and why it’s been their dream to work there since they were a child. Or even worse: used AI to write the application.
Melvin_Ferd@lemmy.world 5 months ago
Cover letters fucking make me so hateful.
Tangent5280@lemmy.world 5 months ago
We should build an AI that automates researching about a company for applicants
Th4tGuyII@fedia.io 5 months ago
Exactly!
Applicants are expected to dedicate hours of their time to writing their application and performing background research - both of which are becoming increasingly tedious - so the least a company could bloody do is show some basic respect by paying an actual human being to come interview you!
eager_eagle@lemmy.world 5 months ago
that’s more like an excuse to keep those stupid 5, 6, and even more interview round processes. Basically making you work an entire week for free in exchange for a chance of getting an offer.
OsrsNeedsF2P@lemmy.ml 5 months ago
I dunno, but if your boss chain contains a machine (literally Amazon warehouse), does it matter?
The_Picard_Maneuver@lemmy.world 5 months ago
“Bias automation” is kind of an accurate description for how our brains learn things too.
riskable@programming.dev 5 months ago
The base assumption is that you can tell anything reliable at all about a person from their body language, speech patterns, or appearance. So many people think they have an intuition for such things but pretty much every study of such things comes to the same conclusion: You can’t.
The reason why it doesn’t work is because the world is full of a diverse set of cultures, genetics, and subtle medical conditions. You may be able to attain something like 60% accuracy for certain personality traits from an interview if the person being interviewed was born and raised in the same type of environment/culture (and is the same sex) as you. Anything else is pretty much a guarantee that you’re going to get it wrong.
That’s why you should only ask interviewees empirical questions that can identify whether or not they have the requisite knowledge to do the job. For example, if you’re hiring an electrical engineer, ask them how they would lay out a circuit board. Or if hiring a salesperson, ask them questions about how they would try to sell your specific product. Or if you’re hiring a union-busting expert, ask them how they sleep at night.
snooggums@midwest.social 5 months ago
But all the other questions are to find out if they are a good fit for the office culture.
You know, if they are also white middle class dude bros.
Bertuccio@lemmy.world 5 months ago
I’ve just started doing practical interviews. I basically get really young people with little overall experience and I just want to know if they can do common technical tasks.
So one question is to literally have them explain how to tighten a bolt. One person failed.
Xephonian@retrolemmy.com 5 months ago
That’s why you should only ask interviewees empirical questions that can identify whether or not they have the requisite knowledge to do the job.
Hol up. ThAt sOuNds LiKe RaCisM!
Enkers@sh.itjust.works 5 months ago
That shit works IRL too. Why do you think therapy practices often have themselves positioned in front of a wall of books? Not that it’s a bad thing; it’s good for outcomes to believe your therapist is competent and well educated.
mryessir@lemmy.sdf.org 5 months ago
Maybe true, but your comment is humanizing “dumb” AI.
cley_faye@lemmy.world 5 months ago
There’s a ton of great small scale things we can do with machine learning, and even LLM.
Unfortunately, it seems the main usages will be crushing people down even more.
PM_Your_Nudes_Please@lemmy.world 5 months ago
Yup. AI should be used to automate all of the mundane day-to-day BS, leaving us free to practice art, or poetry, or leisure activities. But instead, we went down the dystopian capitalist timeline, where we’re automating all of the art so artists are forced to get mundane day-to-day BS jobs.
RokAlamSeth@lemmy.ml 5 months ago
Adapt or die. The world doesn’t care about useless feelings.
A_Chilean_Cyborg@feddit.cl 5 months ago
But it does if you Photoshop a bookshelf in your background?
TAG@lemmy.world 5 months ago
That reminds me of the time, quite a few years ago, Amazon tried to automate resume screening. They trained a machine learning model with anonymized resumes and whether the candidate was hired. Then they looked at what the AI was looking at. The model had trained itself on how to reject women.
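If you’re curious what “looked at what the AI was looking at” can mean in practice, here’s a toy sketch (scikit-learn, made-up data, obviously not Amazon’s actual system) of pulling the most negative term weights out of a text screener:

```python
# Toy illustration only: a tiny text screener trained on biased
# historical outcomes, then inspected. Data and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess team, python, java",
    "women's chess club captain, python, java",
    "lacrosse team captain, c++, distributed systems",
    "women's soccer team, c++, distributed systems",
]
hired = [1, 0, 1, 0]  # biased past decisions the model faithfully learns

vec = TfidfVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# The most negative weights are what the model learned to reject on.
for weight, term in sorted(zip(clf.coef_[0], vec.get_feature_names_out()))[:3]:
    print(f"{term}: {weight:+.3f}")  # "women" ends up among the most negative
```

With labels like these, terms that only show up in rejected resumes sink to the bottom of the weight list, which is essentially what happened at Amazon’s scale.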
merc@sh.itjust.works 5 months ago
Another similar “shortcut” I’ve heard about was that a system that analyzed job performance determined that the two key factors were being named “Jared” and playing lacrosse in high school.
And, these are the easy-to-figure-out ones we know about.
If the bias is more complicated, it might never be spotted.
fubarx@lemmy.ml 5 months ago
Someone should build a little AI app that scrapes a job listing, then takes a resume and rewrites it in subtle ways to perfectly match the job description.
Let your AI duke it out with their AI.
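A rough sketch of the idea, assuming the OpenAI Python client (the prompt, the model name, and the one-line “scraper” are all placeholders):

```python
# Hypothetical sketch; assumes the `openai` package and an OPENAI_API_KEY
# in the environment. fetch_listing() stands in for a real scraper and the
# model name is a placeholder.
import urllib.request

from openai import OpenAI

def fetch_listing(url: str) -> str:
    # Real listings need an HTML-aware scraper; this just grabs raw bytes.
    return urllib.request.urlopen(url).read().decode("utf-8", "ignore")

def tailor_resume(resume: str, listing_url: str) -> str:
    listing = fetch_listing(listing_url)
    prompt = (
        "Rewrite this resume so its wording subtly mirrors the job "
        "description below, without inventing any experience.\n\n"
        f"JOB DESCRIPTION:\n{listing}\n\nRESUME:\n{resume}"
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```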
magnolia_mayhem@lemmy.world 5 months ago
When I got out of the military, my outprocessing included a briefing about how to get interviews with federal organizations. One thing they taught us was that you can copy the job description, paste it into your resume, and set the font to white. The automated systems at USA Jobs would register a match to the job description and rate you as a better candidate, and the human screeners were so overworked that they would just go with what the computer said without checking.
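For anyone wondering why that works: a naive keyword screener only sees the extracted text, not the formatting. Something like this toy scorer (pure illustration, not any real ATS):

```python
# Toy ATS-style scorer: what fraction of job-description keywords appear
# in the resume text? Text extraction ignores font color, so white-on-white
# keywords count just the same as visible ones.
import re

STOPWORDS = {"the", "an", "and", "with", "for", "of", "to", "a", "in"}

def keywords(text: str) -> set[str]:
    return {w for w in re.findall(r"[a-z+#]+", text.lower()) if w not in STOPWORDS}

def match_score(job_description: str, resume_text: str) -> float:
    wanted = keywords(job_description)
    return len(wanted & keywords(resume_text)) / len(wanted)

job = "Seeking an engineer with Python, Kubernetes and Terraform experience"
visible = "Shipped one Python script."
hidden = " engineer python kubernetes terraform experience"  # white-on-white
print(match_score(job, visible + hidden))  # scores suspiciously well
```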
BobGnarley@lemm.ee 5 months ago
I wonder if this still works lol that’s brilliant.
KazuyaDarklight@lemmy.world 5 months ago
Pretty sure I saw that somewhere actually.
EleventhHour@lemmy.world 5 months ago
I’m looking for a job now, and this is very useful to me. Thank you.
Isoprenoid@programming.dev 5 months ago
Do you really want to work for a company that allows their HR department to abuse AI as a tool?
explodicle@sh.itjust.works 5 months ago
Yes? They’ve got a million bazillion applicants too; this is a huge market failure all around.
EleventhHour@lemmy.world 5 months ago
Explain how anyone has a choice, especially in the United States
GBU_28@lemm.ee 5 months ago
That exists
cheddar@programming.dev 5 months ago
One of my favorite examples is when a company from India (I think?) trained their model to regulate subway gates. The system was supposed to analyze footage and open more gates when there were more people, and vice versa. It worked well until one holiday when there were no people, but all gates were open. They eventually discovered that the system was looking at the clock visible on the video, rather than the number of people.
jubilationtcornpone@sh.itjust.works 5 months ago
Just an expensive timer.
Raxiel@lemmy.world 5 months ago
Reminds me of the time a military algorithm was accidentally trained to conclude that tanks are only concealed in tree lines on overcast days.
Xanis@lemmy.world 5 months ago
I do that shit when I have a web interview. Set up a guitar just visible in the camera, a small bookshelf, a floor lamp, make sure my tennis bag is visible despite not playing in ages…
Whether they realize it or not, people do take this stuff in. Not sure why some algorithm based on these very same interviews wouldn’t do the same.
mrgreyeyes@feddit.nl 5 months ago
I did the same, but they were not impressed by my Obedience extreme sex bench 5000 with restraint straps. I even told them the sturdy bench is made of durable, heavy-duty steel, capable of supporting up to 400 pounds of weight.
smh.
Hobo@lemmy.world 5 months ago
I’d have hired you. At least I know you’d be honest and not try to hide shit for fear of embarrassment.
cley_faye@lemmy.world 5 months ago
Journalist doing reports in front of their dildo collection: “hold my beer”
UltraGiGaGigantic@lemm.ee 5 months ago
Recruiters: “people are using AI to apply!”
Also recruiters:
magnolia_mayhem@lemmy.world 5 months ago
To be fair, this works with humans, too.
pufferfisherpowder@lemmy.world 5 months ago
Yes, contradicting the claim that it’s “more objective”.
Laser@feddit.org 5 months ago
Hence the comment about “bias automation”
MinusPi@pawb.social 5 months ago
I fucking hate that extraversion is a measured trait 🙄
DragonTypeWyvern@midwest.social 5 months ago
I hate that they think bookshelves are an indicator for it
MindTraveller@lemmy.ca 5 months ago
It’s from the OCEAN model of personality, which is currently the most widely accepted model. It’s received less criticism than Myers-Briggs and astrology.
A_Chilean_Cyborg@feddit.cl 5 months ago
It’s received less criticism than Myers-Briggs and astrology.
That’s not a high bar to meet.
RokAlamSeth@lemmy.ml 5 months ago
Should’ve gotten better genes from your parents then. Too bad you turned out to be the fastest swimmer. We really missed out on the next Einstein and got… you 🤢
SOB_Van_Owen@lemm.ee 5 months ago
One web LLM I was screwing around with had Job Interview as a preset. Ok. Played it totally straight the first time and had a totally positive outcome. Thought the interviewer was way too agreeable. The next time I said the most inappropriate stuff I could imagine and still the interviewer agreed to come home with me to check out the rock collection I keep under my bed and listen to Captain Beefheart albums.
TempermentalAnomaly@lemmy.world 5 months ago
Listening to some Captain Beefheart, huh… I’ll grab my shiny rocks!
RadicalEagle@lemmy.world 5 months ago
During the AI goldrush you can make your fortune selling bookshelves.
Rozz@lemmy.sdf.org 5 months ago
Selling large bookshelf posters, or just JPGs
5ibelius9insterberg@feddit.de 5 months ago
Bookshelf NFTs? Only possible to buy with crypto?
Ilovethebomb@lemm.ee 5 months ago
Having a bookshelf poster behind you is actually a hilarious workaround.
RoyaltyInTraining@lemmy.world 5 months ago
Why are the different scales connected? How exactly does one interpolate between agreeableness and neuroticism? This is the kind of diagram I used to draw as an 8 year old, and they put this crap in a real product…
maniclucky@lemmy.world 5 months ago
They shouldn’t be plotted that way technically. The big 5 are independent traits so they should essentially be sliders, not linked like that.
That said, it’s way easier to see the points when you do that. Without the lines, it’s easy to miss when colors swap, for example, once you’ve been looking at this stuff for a few hours.
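Something like this matplotlib sketch, with each trait as its own independent bar (scores made up):

```python
# Big Five traits are independent, so plot them as separate bars ("sliders")
# rather than a connected profile line. Scores are made up.
import matplotlib.pyplot as plt

traits = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]
with_bookshelf = [0.62, 0.55, 0.71, 0.48, 0.33]
plain_wall = [0.60, 0.54, 0.52, 0.47, 0.35]

ys = range(len(traits))
plt.barh([y + 0.2 for y in ys], with_bookshelf, height=0.4, label="bookshelf")
plt.barh([y - 0.2 for y in ys], plain_wall, height=0.4, label="plain wall")
plt.yticks(list(ys), traits)
plt.xlim(0, 1)
plt.xlabel("score")
plt.legend()
plt.show()
```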
Pete90@feddit.org 5 months ago
Yeah, it’s interesting that the math pretty much says that these factors are independent of each other. Then we did even fancier math with “AI”, all to ruin that base understanding by connecting them graphically. It bugs me more than it should. Think about your graphics. It is a very interesting result nevertheless.
alcedine@discuss.tchncs.de 5 months ago
“Machine learning” is perfectly cromulent. The bias is what it learned, because that’s what it was taught. (Not intentionally, I don’t think. It’s just hard to get this stuff right sometimes.)
Colonel_Panic_@lemm.ee 5 months ago
I really hate that we are calling this wave of technology “AI”, because it isn’t. It is “Machine Learning”, sure, but it is just brute-force pattern recognition v2.0.
Both the desired outcomes you define and the data you train it on have a LOT of built-in biases.
It’s a cool technology, I guess, but it’s being misused across the board. It is being overused and misused by every company with FOMO, hoping to get some profit edge on the competition. How about we have AI replace the bullshit CEO and VP positions instead of trying to replace fast-food drive-through workers and Internet content?
I guess that’s nothing new for humans… One human invents the spear for fishing and the rest use them to hit each other over the head.
FlyingSquid@lemmy.world 5 months ago
I’m not working right now because I’m putting my daughter through online school. She graduates in five years.
I am never getting another job, am I?
acockworkorange@mander.xyz 5 months ago
Answering the question in the image: machine learning arose from the industrial control world. The idea was to teach a machine how to detect defects in supposedly identical objects out of a manufacturing line, most often with “machine vision” (ie. a camera). Applying it to humans was asinine.
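The simplest version of that kind of check looks something like this toy numpy sketch (real systems learned the tolerances from labeled examples instead of hard-coding them):

```python
# Toy golden-template check: every unit off the line should match the
# reference image; flag it if too many pixels deviate. Arrays stand in
# for camera frames.
import numpy as np

rng = np.random.default_rng(0)
golden = rng.integers(0, 256, size=(64, 64)).astype(np.int16)  # reference part

def is_defective(frame, pixel_tol=20, defect_frac=0.01):
    diff = np.abs(frame.astype(np.int16) - golden)
    return np.mean(diff > pixel_tol) > defect_frac

good_part = golden + rng.integers(-5, 6, size=golden.shape)  # sensor noise
bad_part = good_part.copy()
bad_part[10:20, 10:20] += 100                                # a "scratch"
print(is_defective(good_part), is_defective(bad_part))       # False True
```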
Djtecha@lemm.ee 5 months ago
This has job discrimination lawsuit written all over it.
bleistift2@sopuli.xyz 5 months ago
I’m amazed that no-one has complained that the graph’s data points are on the borders between categories rather than inside the category bars.
With that out of the way: WTF is wrong with that graph?
norimee@lemmy.world 5 months ago
I would be interested to see what happens if you lighten up her skin color a bit…
iAvicenna@lemmy.world 5 months ago
“oooo books he must be really smart”
samus12345@lemmy.world 5 months ago
“Extraversion”
eletes@sh.itjust.works 5 months ago
I wonder if it’s actually interpreting the bookshelf, or if having such a busy background is taking a toll on the compression. That would alter the details on the person’s face.
TheEtherBunny@lemmy.world 5 months ago
Anyone have the original link handy? Trying to get to the tweet is uglier than I expected.
joe_cool@lemmy.ml 5 months ago
It’s from 2021. Link to the website: interaktiv.br.de/ki-bewerbung/en/
Still pretty interesting though.
KeenFlame@feddit.nu 5 months ago
I don’t understand why anyone writing, reading, or commenting on this thinks a bookshelf would not change the outcome. Like, what do you people think these ML models are, human brains? Are we still not below even the first layer of understanding?
ramirezmike@programming.dev 5 months ago
fasterthanlime is so cool btw
tacosanonymous@lemm.ee 5 months ago
Well, at least they didn’t spend a lot of money on testing it…
ShaunaTheDead@fedia.io 5 months ago
Reminds me of an early application of AI where scientists were training an AI to tell the difference between a wolf and a dog. It got really good at it in the training data, but it wasn't working correctly in actual application. So they got the AI to give them a heatmap of which pixels it was using more than any other to determine if a canine is a dog or a wolf and they discovered that the AI wasn't even looking at the animal, it was looking at the surrounding environment. If there was snow on the ground, it said "wolf", otherwise it said "dog".
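That heatmap trick is easy to reproduce with occlusion: gray out one patch at a time and watch the score move. A minimal numpy sketch, with a deliberately dumb stand-in "model" that keys on bright snow-like pixels like the one in the story:

```python
# Occlusion-sensitivity sketch: gray out one patch at a time and record how
# much the "wolf" score drops. The stand-in model keys on bright (snow-like)
# pixels, just like the model in the story.
import numpy as np

def wolf_score(img):
    return float(np.mean(img > 0.8))  # fraction of bright pixels

img = np.zeros((64, 64))
img[40:, :] = 1.0          # "snow" across the bottom of the frame
img[20:40, 25:40] = 0.5    # the "animal" itself, mid-gray

patch = 16
heat = np.zeros((4, 4))
base = wolf_score(img)
for i in range(4):
    for j in range(4):
        occluded = img.copy()
        occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.5
        heat[i, j] = base - wolf_score(occluded)  # big drop = important region

print(np.round(heat, 3))  # only the snow rows light up, never the animal
```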
driving_crooner@lemmy.eco.br 5 months ago
Early chess engines that used AI were trained on games of GMs, and the engine would go out of its way to sacrifice the queen, because when GMs do it, it comes with a victory.
MonkderDritte@feddit.de 5 months ago
Why would you use AI for chess?
KeenFlame@feddit.nu 5 months ago
It’s not wrong
kandoh@reddthat.com 5 months ago
That’s funny because if I was trying to tell the difference between a wolf and a dog I would look for ‘is it in the woods?’ and ‘how big is it relative to what’s around it?’.
Melvin_Ferd@lemmy.world 5 months ago
What about a wolf and a grandmother?
OsrsNeedsF2P@lemmy.ml 5 months ago
While I believe that, it’s an issue with the training data, and not the hardest to resolve
dondelelcaro@lemmy.world 5 months ago
Maybe not the hardest, but still challenging. Unknown biases in training data are a challenge in any experimental design. Opaque ML frequently makes them more challenging to discover.
Mirodir@discuss.tchncs.de 5 months ago
So is the example with the dogs/wolves and the example in the OP.
As to how hard to resolve, the dog/wolves one might be quite difficult, but for the example in the OP, it wouldn’t be hard to feed in all images (during training) with randomly chosen backgrounds to remove the model’s ability to draw any conclusions based on background (see the sketch at the end of this comment).
However this would probably unearth the next issue. The one where the human graders, who were probably used to create the original training dataset, have their own biases based on race, gender, appearance, etc. This doesn’t even necessarily mean that they were racist/sexist/etc, just that they struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.
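A minimal numpy sketch of the background randomization idea above (it assumes you already have a foreground mask for each training image, which is its own nontrivial problem):

```python
# Background-randomization sketch: composite the masked subject onto a
# randomly chosen background every time an image is served, so background
# pixels carry no usable signal. Assumes per-image foreground masks.
import numpy as np

rng = np.random.default_rng()

def randomize_background(img, mask, backgrounds):
    """img: HxWx3 floats; mask: HxW bool (True = subject);
    backgrounds: list of HxWx3 arrays to sample from."""
    bg = backgrounds[rng.integers(len(backgrounds))]
    return np.where(mask[..., None], img, bg)

# Usage inside a training loop (shapes made up):
H, W = 128, 128
img = rng.random((H, W, 3))
mask = np.zeros((H, W), dtype=bool)
mask[32:96, 32:96] = True  # pretend this is the segmented subject
backgrounds = [rng.random((H, W, 3)) for _ in range(8)]
augmented = randomize_background(img, mask, backgrounds)
```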
merc@sh.itjust.works 5 months ago
Yes, “Bias Automation” is always an issue with the training data, and it’s always harder to resolve than anyone thinks.
StaticFalconar@lemmy.world 5 months ago
Old data adage. Garbage in, garbage out.
kelargo@lemmy.world 5 months ago
Hot dog. Not hot dog