Comment on Emotion-tracking AI on the job: Workers fear being watched – and misunderstood

farsinuce@feddit.dk 8 months ago

Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.

A quick rundown of what it entails and why it might matter in the US.

What is it?
- The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
- The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination rules.

Key Takeaways:
- Prohibited Practices: Certain uses of AI, such as behavior manipulation or unfair surveillance, are outright banned.
- High-Risk Regulation: AI systems with significant implications for people's rights must undergo rigorous assessments.
- Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.

Why Does This Matter in the US?
- Brussels Effect: Just as the GDPR set a new global standard for data protection, the EU AI Act could shape international norms and practices around AI, pushing companies worldwide to adopt higher standards.
- Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
- Potential for US Legislation: The EU's move could catalyze similar regulatory efforts in the US, prompting a broader discussion on the ethical use of AI technologies.


Emotion-tracking AI is covered.

>Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.


Sources:

source