AlgorithmWatch's No-Nonsense Rules for Not Screwing Up with Generative AI
AlgorithmWatch just dropped a policy to wrangle generative AI without hypocrisy. It's a breath of fresh air from folks who actually fight bad tech, not peddle it.
The Pentagon is using Anthropic's Claude to make military decisions that could cost lives. The question isn't whether AI works—it's whether we should trust it with a trigger.
Imagine an AI whispering sweet nothings about cash incentives as you're booted from Europe. Frontex's new deportation app promises just that, but who's buying the spin?
A California court just handed AI companies a rare win: the Pentagon can't punish you for refusing to build surveillance tools. Here's why that matters for the future of responsible AI.
Google knew the dangers of Project Nimbus from day one—internal memos screamed it. Amazon? Dead silence. Here's why that's a scandal.
San Francisco's digital rights fortress just got a new general. Nicole Ozer, architect of California's toughest surveillance laws, steps up to lead EFF as tech giants ramp up AI tracking.