Comment on Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
farsinuce@feddit.dk 7 months ago
Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.
A quick rundown of what it entails and why it might matter in the US.
What is it?
- The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
- The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination.

Key Takeaways:
- Prohibited Practices: Certain uses of AI, like behavior manipulation or unfair surveillance, are outright banned.
- High-Risk Regulation: AI systems with significant implications for people’s rights must undergo rigorous assessments.
- Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.

Why Does This Matter in the US?
- Brussels Effect: Similar to how GDPR set a new global standard for data protection, the EU AI Act could influence international norms and practices around AI, pushing companies worldwide to adopt higher standards.
- Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
- Potential for US Legislation: The EU’s move could catalyze similar regulatory efforts in the US, promoting a broader discussion on the ethical use of AI technologies.
Emotion-tracking AI is covered.
>Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
Sources:
- https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
- https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
melmi@lemmy.blahaj.zone 7 months ago
Did you use AI to write this? Kinda ironic, don’t you think?
farsinuce@feddit.dk 7 months ago
I spent the better part of 45 minutes writing and revising my comment. So thank you sincerely for the praise, since English is not my first language.
melmi@lemmy.blahaj.zone 7 months ago
If you wrote this yourself, that’s even more ironic, because you used the same format that ChatGPT likes to spit out. Humans influence ChatGPT -> ChatGPT influences humans. Everything’s come full circle.
I ask, though, because your profile shows you’ve used ChatGPT to write comments before.
DarkThoughts@fedia.io 7 months ago
Definitely a good start. Surveillance (or "tracking") is one of those areas where "AI" is actually dangerous, unlike some of the more overblown topics in the media.