The idea that AI will eventually scan, predict, and even influence behavior is no longer hypothetical—it’s unfolding. The real question is: who decides what AI optimizes for? I just started a ten-part series on this.
Whether in advertising, employment decisions, or governance, AI-driven oversight isn’t just tracking—it’s shaping priorities in ways that often go unseen. If AI is structuring ad engagement, determining job retention, and filtering whose voices get amplified, what feedback loops are being created? And who is responsible for ensuring those loops serve more than just efficiency?
Great question. Developers should ensure that these models and feedback loops are efficient, but the culture of the company or organization building them matters, since every organization has an agenda. With today's AI models, it all comes down to the goals of those deploying them.
I wonder what could be more effective than AI at detecting the patterns behind hidden goals. And how do new frameworks of transparency relate to trust? It's a topic I've started writing about here on Substack... Feel free to join the conversation.
I would love to hear more about where we expect to see new jobs emerging.
There are a lot of stats cited about increased productivity and economic growth, but very few clues about what new jobs will actually look like for those displaced who are not in AI-related tech or associated fields.
Even DeepMind co-founder Mustafa Suleyman admits to serious concerns about "AI's potential to put large numbers of people out of work."
In his book, The Coming Wave, he says "in the past, new jobs were created at the same time as old ones were made obsolete, but what if AI could simply do most of those as well?" It nags at me.
Anyway, love your work!