News
🎧 Listen to the episode now!
When you're trying to convince somebody of something (for example, to
examine whether AI is a good thing), you need to know your audience.
This episode is all about looking at programmers as a whole and
identifying different groups, then thinking through how to talk to
members of those groups and use arguments that resonate with them.
After you listen to the episode, be sure to check out the following:
🎧 Listen to the episode now!
To quote Sarah Connor, "There is no fate", so why do so many people
buy into the narrative that AI is inevitable, so we might as well get
on board (or throw up our hands in despair)? Josh and Ray get to the
bottom of this, realise that Skynet might happen after all, and channel
the great Lina Khan: "Where we are is not just the inevitable outcome
of market forces or technological development. It's a result of choices
we've made, and policies we've enacted or not enacted. And we can
change that. We can. We must."
After you listen to the episode, be sure to check out the following:
🎧 Listen to the episode now!
We're joined this week by Dr. Chris Gilliard, Co-Director of the
Critical Internet Studies Institute to talk about what generative AI
means for surveillance, and how this technology disproportionately
harms marginalised groups. Dr. Gilliard is a writer, professor, and
speaker whose scholarship examines digital privacy, surveillance, and
the intersections of race, class, and technology. His book "Luxury
Surveillance" is forthcoming from MIT Press in 2026.
After you listen to the episode, be sure to check out the following:
🎧 Listen to the episode now!
Whilst everyone has an opinion about AI, there are certain top-level
narratives driving the hype bubble. We look at the doomer/booster
divide (which turns out to be not so divided) and peek behind the
curtain to see which corporations and people are pushing these
narratives and what they have to gain.
After you listen to the episode, be sure to check out the following:
🎧 Listen to the episode now!
Our societies are necessarily based on trust, but how do we decide
what institutions, people, and technologies to place our trust in? And
why have we collectively decided that since "computers don't make
mistakes", they are always worthy of our trust?
After you listen to the episode, be sure to check out the following:
🎧 Listen to the episode now!
AI is everywhere these days, and the problem with it is not what people
think. It's not Skynet we should fear; it's the actually existing harms
that are being done to the environment, workers around the world, and
the human mind itself. Join Josh and Ray for the inaugural episode of
Politechs' first season as they lay down a Luddite argument
against AI from the perspective of software engineers and talk about
how to have good faith discussions with people
who might not see or be willing to acknowledge the dangers of the
technologies under the AI umbrella.
After you listen to the episode, be sure to check out the following:
Read the rules of engagement here.
These days it is becoming more difficult to enjoy good faith
discussions. We used to disagree on policy prescriptions but now
reality itself is under contention. As "alternative facts" and
"flooding the zone" have become widespread, people are reacting
by either asserting misinformation as truth or shrugging their
shoulders and saying that there's no way to know what is true and what
is not.
This cannot stand. In order to wrestle with the big questions of
our time, we need to be able to have good faith discussions where
we're talking to each other, not past each other. In order to
have these conversations, we need to find a starting position that we
can agree on, some rules of engagement for good faith discussions.
To that end, we propose the following rules
of engagement.
🎧 Listen to the trailer now!
AI is everywhere and nowhere.
It's everywhere: in search engines, in social media, and (if you're
software developers like us) in your tools; for more and more people
it's at work.
It's also nowhere, in the sense that nobody knows whether it's
actually useful. There was a wave of excitement around chatbots and
image generators, but that has faded as we see how many errors they
make and how easily they can be fooled.
That hasnβt stopped the hype, because there are billions of dollars at
stake.
Our goal in the first season of Politechs is to offer you a chance to
step away from the breathless hype and examine the new generation of
AI tools: what they are and how and why they are a risk to human
thriving.
Weβll examine the technology, the corporations and people behind it,
and the narratives that are driving the hype cycle.
Weβll offer you practical options in your daily life to resist and
adapt so that you can limit the impact on you as well as your family,
friends, and coworkers.
This is Politechs Season 1: a critical tech perspective on AI in 2025.