As our tools get smarter, what do we need to know to ensure we continue to do good work in ethical ways?
This year, ChatGPT changed the way we think about AI. A tool that’s come closer to emulating human thought than any of its predecessors seems irresistible from an efficiency standpoint. But for those in the for-good space, what are the ethical implications of integrating advanced AI into our workflows?
In this episode of Ampersand, we dig into the ethics of AI with Kat Zhou (creator of the Designing Ethically project) and Dr. Jason Millar (Canada Research Chair in the Ethical Engineering of Robotics and Artificial Intelligence, University of Ottawa). Our guests bring lived experience of both working in and studying the tech industry responsible for the latest innovations in software engineering, and offer wise words for communicators navigating these new tools thoughtfully.
This episode is dedicated to MediaSmarts, Canada's centre for digital and media literacy.
If you’d like to support their work, the best way you can do so is through a donation if you’re able.
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email media@emdashagency.ca with any questions.
[MUSIC PLAYING]
Caitlin Kealey
Welcome to Ampersand, the podcast helping good people be heard and comms people be better. I'm Caitlin Kealey, the CEO of Emdash.
Megana Ramaswami
And I'm Megana Ramaswami, Senior Strategist at Emdash. Ampersand is a space for us to speak with leading experts for their take on the hot topics that we see pop up in our daily work.
Caitlin Kealey
Working in comms at an agency, life can get pretty busy and we don't always get to explore things as deeply as we'd like. Ampersand is our answer to that problem. It's a medium for musing and reflecting in the name of being more effective and inclusive in our work.
[MUSIC FADING OUT]
In today's episode, we get super philosophical as both our guests have an academic background in ethics.
Megana Ramaswami
I can confirm this episode can get pretty deep. I had the pleasure of speaking with Dr. Jason Millar. He's the Canada Research Chair in Ethical Engineering of Robotics and Artificial Intelligence at the University of Ottawa.
Our second guest is Kat Zhou, who created the Designing Ethically project.
Caitlin Kealey
Megana was super lucky and she got to pick their big, big brains about AI, ChatGPT, and whether we can use any of these tools ethically.
Megana Ramaswami
Honestly, thank God for their big brains because I learned a lot. It was a fascinating conversation, so much so that we think we're going to have to have them back on again to talk about this topic.
If you're like me and you have big feelings and concerns about the new AI tools on the scene, stay tuned to hear our guests shed light on what goes on behind the big tech curtain.
[MUSIC PLAYING AND FADING OUT]
So for years, you've both been advocating to bring ethics into workflows and processes in AI, whether it's developing tech or using tech. So it would be really helpful for each of you to tell us about yourselves and your journeys with better understanding AI and the ethical needs of our society.
Kat Zhou
My name is Kat Zhou, pronouns she, her, and I am a designer in the tech industry.
I'm also the creator of the Designing Ethically project, which I started back in 2018. And whenever people ask me about my journey into this realm of trying to figure out how we can not be that shitty in tech and design and AI, I start off with just saying it came out of frustration and anger. Honestly, joining the industry and kind of seeing how it worked, realizing just the amount of exploitation that was happening, and seeing massive big tech companies turn out products that were really harmful to a lot of vulnerable communities, was really frustrating. And, you know, I think a lot of designers go into this field because they want to design better experiences for people. And so trying to reconcile that with the realities was difficult.
So, uh, that's why I started the project, and since then I've been doing a lot of advocacy work on the side, speaking a lot about, you know, deceptive, manipulative design patterns, and also taking up a master's in ethics and society, which I just finished this summer.
Dr. Jason Millar
My name's Jason Millar.
I'm a professor at the University of Ottawa. I sit in the Faculty of Engineering and teach in engineering and engineering related programs. But academically, my background is both in engineering, I worked as an engineer for several years, but also in philosophy. Most of the academic work that I've done over the years is in ethics and philosophy.
So for example, my PhD is in philosophy. I've been interested in and researching issues around ethics and technology since I was, you know, an undergrad and master's student. I was doing my philosophy work, and then I really got interested in AI. I think it was around 2011, when IBM did their Jeopardy challenge, the man versus machine challenge, as they called it, where they took their, you know, latest and greatest artificial intelligence, Watson, and pitted it against the two most winning Jeopardy contestants of all time. A colleague and I were just kind of fascinated by this, and ever since then, I've been writing about issues in robotics and artificial intelligence. I have a Canada Research Chair in the Ethical Engineering of Robotics and Artificial Intelligence.
And so the work that I've been doing since really looks at taking knowledge and theories from ethics, sort of philosophical knowledge, and merging it, kind of mashing it up, with engineering practice and policy work. What we've been seeing in the last 10 years is really, you know, philosophers, ethicists, sociologists, those social commentators, struggling to keep up with what's going on in the tech world, whether it's sex robots or autonomous vehicles, or now, you know, ChatGPT and large foundation models that are challenging us currently. Or, you know, policymakers struggling to figure out, like, what do they need to do in terms of regulation and setting reasonable guardrails around the innovation in this space.
And then, you know, well-meaning engineers and computer scientists who are really trying to understand how they can produce better, more ethical AI and robotics. So, yeah, I mean, the research that I do really focuses on those three areas, looking at the ethics, the policy, and the engineering practice associated with these emerging AI models.
Megana Ramaswami
Amazing. Thank you so much. I mean, we are so excited to have you both on. One of the things about developing this topic is that at our own agency, our office, we've been having a lot of these discussions, you know, as we started to see how prevalent AI was in all of our tools.
We're grappling with a lot of the questions that you both are researching and, you know, are trying to come up with ethical answers for, so I really appreciate you both being here, and I can tell it's going to be a very illuminating discussion. So I'll jump into my next question, which is that, you know, earlier this year, we saw multiple open letters from industry leaders working in artificial intelligence, warning us of the risks of their own inventions, is what it seemed like.
So I'm curious about your understanding of how, you know, AI executives and researchers consider the societal risks and the ethics of their technologies, whether they have done so appropriately during their creation, and, you know, what we could potentially improve on there.
Dr. Jason Millar
We really do feel like we're in a moment in time and in history where we've crossed some sort of threshold. A year ago, you know, we wouldn't have been talking about ChatGPT and models like ChatGPT, but there's kind of a pre- and post-ChatGPT world, it seems, right? This has been incredibly disruptive to so many industries. You know, it's kind of like the invention of the calculator, but amplified. Certainly there are benefits to these technologies, as well as, you know, major concerns and disruptions that they're causing.
I think it's safe to say that, you know, executives, you mentioned executives of these large companies, are concerned about the social impacts of their technologies. I don't know that it's entirely safe to say that they're willing to pause or that they're all sort of clamouring for good regulations out there.
I see a mixed bag on that front, right? I don't see all the companies coming together, asking for clear regulations. I think they know regulations are coming. And so they're trying to shape them in a way that allows for, you know, what they would consider an appropriate balance between safeguarding the public, putting those guardrails in place, and still allowing them to do whatever they want.
There's a real push in Silicon Valley to kind of break things and move fast. You know, that creates a certain pressure for companies in that space to just release their technologies quickly, to get them out to market as fast as possible. So the worry here is that even if companies are thinking about the ethics, there's this kind of overriding value that tips the scales in favour of just getting things out there fast, which is what I worry about in that context particularly.
It doesn't mean they're not still thinking about ethics and finding ways of bringing ethics into the design of their technologies. You can see some of the responses that you get from something like ChatGPT, you know, that clearly indicate that the company has put a lot of thinking and design into responding in particular ways to particular prompts, prompts that would, you know, lead ChatGPT down the road of giving clearly biased or unethical or, for example, racist or obviously problematic answers.
But the worry here is that in the drive to be first to market, the scales will tip more in favour of just getting things out the door quickly, rather than putting the time and effort into really looking into those issues in the way that I think a lot of people were hoping they would.
Kat Zhou
I want to underscore what Jason said about the potential, and I think the reality, that a lot of these tech titans that are part of this letter and part of, uh, Senator Schumer's forum and whatnot are trying to basically present themselves as caring about this thing, which maybe they do. But I think ultimately this kind of discourse that they're shaping, and this kind of policymaking that they're playing a part in, is probably in their own interest, to help their own companies run without that much obstructive regulation.
And I think what's interesting too is, you know, in the letter that they wrote, there was a pretty sizeable chunk of it dedicated to this worrying about AGI, or, like, just really advanced AI that could replace human labourers and all this stuff, um, which is very much, you know, something that we definitely would need to worry about.
And I think it's just interesting, though, because a lot of these people, and we've heard this from folks like Elon Musk and Sam Altman, et cetera, have been driving up this kind of fear around these futures of AGI and whatnot. And while it is something we should acknowledge, I think it's also worth acknowledging that this kind of fear mongering is, in some ways, a distraction from the very real exploitation that their companies are perpetuating today, like, right now.
And, you know, when they're decrying all these future things that are coming down the line, I think for them, it's like them realizing, oh, wow, me, like, a wealthy white man who owns these giant companies, I might be in jeopardy down the line. But, like, in reality, today, there are so many exploited workers that are just completely paving the way for this technology to even exist in the first place.
Right? And a lot of these workers are the hidden or erased labour behind, for example, the content moderation for these tools and the data training that's going on. And a lot of this labour is actually sourced from places all around the world that have a lot of connections to previous historical patterns of, you know, colonial exploitation.
Um, so I think that's something that I just wanted to kind of underscore. And I think it is good that they are talking about these things. I'm just concerned that when we only have these high-profile titans of industry talking about this in policy spaces, that's when we run the risk of kind of ignoring the actual, like, glaring problems with real vulnerable people today that are being run over by this industry.
Megana Ramaswami
I'm really glad you brought that up, and it kind of leads into my next question, which is that it does truly feel like we're in a moment that's, you know, post-ChatGPT versus pre-ChatGPT. There truly does seem to be sort of an urgency around talking about AI and, you know, thinking about AI and ethics, even though we've been using AI in various technologies for a while now.
So I'm hoping to draw both of your attention to this: why now? Why does it feel so much scarier for a lot of us?
Kat Zhou
So I think, um, at least within certain segments of industry or society, some of these tools are finally kind of coming to the forefront, at least in media. And, for example, at least in the design sphere, these tools are really just popping up within the last year or two.
I know there have been, like, whispers and rumours of, oh yeah, there'll be, you know, plugins that you can use to design a wireframe layout or whatnot. But, um, I think it is in this time, this last year or so, where we've seen this proliferation of tools that the layperson can use.
And of course, I do acknowledge that, like, there are still many people that are not using these tools, you know, people that don't necessarily have access to a computer or whatnot. But, um, at least in our spheres of, like, privileged tech workers or designers or folks in academia, we're starting to see, like, open access, right?
That's been made available by these tools, a lot of which don't cost money, at least for the first use. And so I think that's kind of where this rise in awareness and this rise in emotional response is coming from. And not to mention, like, the role that media have had as well. I think the media's reporting is largely shaped by a lot of these companies and these tech CEOs and whatnot.
So I think a lot of that is kind of coming together within, you know, the past year or so and is really just evoking a lot of, uh, emotional response.
Megana Ramaswami
That makes sense. Um, we do agree with that. Dr. Millar?
Dr. Jason Millar
When I think about why now, like, what is it that's driving people in this moment in time to really amplify their conversations around AI and the models that we have available to us today, versus, say, a year ago?
I think it is because there's something different about the nature of those models from a technical perspective. Maybe they're just larger or draw on more data. Just to keep to this kind of chatbot, right, the ChatGPTs: for those of us who've used Siri or Alexa or Google Assistant in the past, and then have switched to using ChatGPT, there's a marked difference in the way that these technologies work, right? Siri has a hard time understanding when I ask it to call my mother, because it doesn't understand that my name, the way it's spelt, M-I-L-L-A-R, is actually pronounced "Miller."
So when I say call, you know, anyone Miller, it just doesn't understand; I don't have that in my contact list, right? It's looking for "Mill-ar" or something like that. So I have to mispronounce my own name to, uh, to get it to call people in my family. I don't know that a model like ChatGPT would have that issue.
And certainly the kinds of answers and conversations that you can enter into with a bot like that are just fundamentally different. I can't ask Siri to write a political essay for me, right? And I can't ask Siri to make a detailed argument that would sway somebody persuasively. I can ask ChatGPT to do that.
That is a fundamentally different type of technology, and I think it raises a number of different types of concerns. We're seeing that with deepfakes, right? Anyone can generate a deepfake using these online tools. And that is raising a lot of concerns, understandably, because we've seen the impact that that can have on democratic discourse.
That's fundamentally different than, you know, a recommender engine that's telling you maybe what kind of CD or CDs, what kind of music, you might like on a streaming service. Uh, yeah, my daughter's been collecting CDs lately, so they are actually back in fashion. So, you know, it raises a number of concerns, right?
Like, is generative AI that's capable of making arguments and writing an essay for you going to de-skill the population when it comes to thinking critically about certain topics, right? To what extent are models like ChatGPT going to bootstrap our political discourse on a day-to-day basis?
That's a serious, significant concern that we should all be wondering about. The health of a democracy is presumably based on the ability of the citizens to make coherent arguments that they believe in and that come from their own sort of introspection about issues and engagement with the issues. So how are we going to maintain that if students are using ChatGPT for everything?
And then there's the scale and access. As you mentioned, everybody has access to this, right? I have the app on my phone. Now I can open it anytime, ask it any questions. It remembers conversations I've had in the past, and I can just continue as if it was there all along, waiting to pick up on that thread. How is that being manipulated in the background?
What kind of data was that model trained on? We don't know any of these things. It's, like, a highly opaque system that we are now using everywhere and for everything. I asked my students in the first class that I ran yesterday: how many of you in the last eight months have used ChatGPT to complete part or all of an assignment in one of your courses? And everybody put their hand up. Everybody. So this is a concern, and I think that's why we're having this conversation today, and we weren't a year ago.
Megana Ramaswami
I think that makes sense. And I would love to jump on something that you mentioned, and that Kat also alluded to in her answer, which is: what do we know about the data that is being used to train these models? What do we know about the ethics of how they were developed, about the workers that may have been involved? Um, Kat, you mentioned the exploitation of workers, and, you know, Jason, you mentioned the opaque nature of the data that's being used. I would love it if you both just kind of expound a little bit on what your understanding of that is, you know, at this point, and where we kind of need to go.
Um, Dr. Millar, I'll start with you.
Dr. Jason Millar
Yeah. I'd like to pick up on something that Kat was talking about, which is the debate that exists around what they call the existential risks posed by AI. Like, this is that futurism version of what we should be concerned about. It's the kind of thing you hear about from Elon Musk, the cast of characters that Kat was talking about.
And I just want to agree with everything that Kat said about that, and maybe even go a step further. That debate focuses on the future risks of AI. What you're talking about now, and what Kat has alluded to, and I'm sure has a lot of experience with, having worked in the industry, are the present risks and harms being done by AI.
And one of those categories of harms has to do with the way that AI is developed, specifically with respect to the data, right? Who is being asked to produce the vast amounts of structured data that these systems need in order to be trained? Like, they don't know what data they're looking at until some human does a little bit of work to tell the machine what kind of data it is that they're looking at. So I just want to emphasize the importance and significance of getting away from our focus, or some people's focus, on this very intriguing and kind of sexy issue of, like, future general-purpose AI that, you know, is characterized by being smarter than humans, faster than humans, that kind of outpaces humans in every way, and really refocusing the debate on the current issues.
And I just want to say that before turning it over to Kat, because I have a feeling that Kat has a lot more experience with the exploitative practices, and I would love to hear more about what Kat has to say about that personally.
Kat Zhou
Yeah, thank you. There's just been so much fascinating research done on, like, the whole supply chain behind the systems involved in AI.
And I think, like, Kate Crawford has a great book, Atlas of AI, where she maps out all the different kinds of workers and geographies involved, and the kinds of extraction involved, in order to not only train and source data, but also to, like, even, you know, develop the technologies that we use to interface with these various AI tools, or the data centres themselves.
And there's just a lot of interesting work out there that kind of emphasizes that "AI" as we know it, and I'm saying AI in quotes, has real material implications and is very much a real thing, not an abstraction, with impacts not only on people and labour, but also on the environment itself. And when it comes to, for example, the training of data, right?
There have been more reports on how companies like OpenAI and Facebook have outsourced the very tedious tasks of training, you know, all this data for their tools to labourers in, for example, Kenya or the Philippines, and how this kind of work is never, like, full-time-employee kind of work. It's always contracted work where they're paid very, very little.
And there was recently some news that emerged about how Sama, which is an American company, counts itself as a certified B Corp. And they kind of painted this whole picture of, like, oh, we're employing workers in Kenya to work at the forefront of this industry, to train data and label data, and we're giving them amazing salaries.
Um, but it actually came out that, like, these workers are being traumatized because they were repeatedly exposed to horrific content, content that we never see, because we don't have to endure it at all. And not only that, they were getting paid very little, and when they tried to unionize and organize, they were faced with retaliation.
Um, so it is a very, very precarious industry. There's a whole, you know, segment of these precarious workers that are powering the models themselves, but also making sure that what we see at the end is relatively safe, and that's something that is really, you know, concerning. And there's this really awesome term coined by Astra Taylor called "fauxtomation," which represents kind of the marketing of these automated technologies and how they're often displayed as, like, oh, it's just tech.
It's AI. It's computers. It's all of this, like, non-human stuff, right? It's the magic of it all. But what it actually does is obscure the amount of labour behind the scenes, which makes it easier to treat workers really badly, when you don't know about what's going on. And so I think that's something that, I hope, and I think, is starting to come to the forefront.
And I hope it surfaces more in our conversations about these tools. Because, like, when you look at the interface for ChatGPT, right, or Midjourney or whatever else, what you don't see... it's not human at all. You just see the box where you type in your prompt, and you see the response and everything, um, but it's not intuitive or obvious that, for example, when Midjourney is spitting out an image for you, that image is compiled from so many images scraped off of the internet, images that real humans and designers and artists created and were never compensated for, right?
And so I think that's something that we have to continually underscore when we talk about this.
Dr. Jason Millar
And, you know, in response to those issues, there are groups out there that are trying to find ways to build community around, you know, coming up with norms and practices that would help make the lives of those people who are being asked to do the data enrichment or the content moderation better.
Again, these, like, tens of thousands of people out there who get hired to do these things. It's not a small task, if you think about just the volume of data that's needed to train these systems, and the volume of data that runs through these systems that gets flagged as potentially harmful and then needs a human to kind of make the final decision.
Like Kat said, we don't hear about this. There are organizations—just to point out that it's not all dire—there are organizations who are working to, you know, build communities to make this better. So for example, I do a lot of work with an organization called Partnership on AI. And, you know, what we do at Partnership is really bring together partners from all over.
So we're talking about large tech companies. We're talking about civil society organizations, you know, not-for-profits, academics, funders, philanthropists, and so on, to build community around trying to really solve some of those issues.
So: identify the issues. And we've done a lot of amazing work on some of these issues that Kat has mentioned specifically. But, you know, what we're working towards, and what certainly other groups are working towards outside of Partnership on AI, is the development of these community standards that will really help to inform organizations, whether it's labour unions, whether it's people who are using these tools without really understanding how they were produced in the first place. So it's kind of like building knowledge around the ethical creation of these tools, the ethical use of these tools. Certainly, if people knew the kinds of harms that were going into developing a certain type of tool, you know, and if we raised awareness about that and really informed the public a little more, you would hope that that would have an impact on people's willingness to use a particular tool over another. So, a little bit like, you know, how we went through this era of really understanding where our clothes were produced; the first time I remember this was maybe 20, 25 years ago, hearing about sweatshops and stuff like that.
So, uh, you know, raising awareness is important, but so is developing those norms in community with affected populations, with the people who are using the tools, with the people who are creating the tools, in order to really try to come to some sort of consensus on what needs to be done in order to better produce, use, deploy, and train all of these types of things.
So when it comes to AI, those things are happening, they're happening certainly slower than I think a lot of people would hope, but there are efforts ongoing to do that. So I just want to highlight that as, you know, something that's ongoing. Certainly there are a lot of bad actors out there, but there are also a lot of good actors who are trying to learn from, um, you know, the recent stories that have come out, like the one that, uh, Kat was mentioning.
Kat Zhou
I'm glad you brought that up because, um, it reminds me, the first union of content moderators in Africa was actually formed this summer. Um, and I also like the example that you brought up around the clothing metaphor, the fashion industry metaphor, because it reminds me of this one really poignant line in Sarah T. Roberts' book on content moderation, Behind the Screen, and I'm paraphrasing: basically, she was saying how one of the obvious ways to go about, you know, mitigating the harms that come from extensive content moderation, the PTSD and whatnot for these workers, is to just scale it down. And same with, for example, fashion and waste production in the clothing industry: scaling it down, literally just preventing us from building, making, and producing all the time.
And of course, it's so obvious, but it's so hard because of just the mechanisms of our society and the kind of constant chase for growth. But that's something that, like, I have to constantly remind myself of, because the reason we have such a vast network of workers, for example, working on annotating data and whatnot, is just because there's such a hunger for it from these companies.
But if we could figure out a way to tone it down several notches, that could be a start, alongside these practices that people are working on.
Darnell Dobson
The following isn't a paid advertisement. It's just us here at Emdash shouting out some good people who are doing important work. You should check them out.
This episode is dedicated to MediaSmarts, Canada's centre for digital and media literacy. For over 25 years, MediaSmarts has been developing evidence-based programs and resources to promote ethical and reflective media use. They equip Canadians with the critical thinking skills they need to engage with media as informed citizens.
If you'd like to support their work, the best way you can do so is through a donation, if you're able. We'll link to their site in the show notes, where you can learn more and donate.
Dr. Jason Millar
Yeah, I think one of the other challenges that needs to be recognized is that these companies are pseudo-monopolistic.
And I mean that in the sense that these models are extremely expensive to create, right? Only a few corporations can even contemplate the projects that lead to something like ChatGPT, or Siri for that matter, or Google Maps for that matter, right? These are incredibly expensive undertakings.
So unlike the sweatshop metaphor, where you find out that one company is a bad actor but there are 20 different places you can buy t-shirts, right, and hopefully one or two of those is doing something on the right side of history. When we find out something about one of these large corporations, and I don't want to name companies, because you can kind of pick your company, but if we find out that there's something, you know, that we would consider problematic or unethical going on at that company, we have very little choice about where to turn if we want those services. So this is another challenge that I don't think we've really wrapped our heads around as a society yet. It's not as easy to just switch stores and go somewhere else.
Right. So it's a challenge that we have that needs addressing. It's very difficult to go hat in hand to a Meta or a Google and ask them to just change what they're doing. Although people are doing that, certainly. And again, it's not that they're, you know, evil actors.
There are people in those companies that are trying very hard to make those technologies as good as possible. It's very difficult to change these massive models that take years to train, and that are developed, you know, like we were saying, with an opacity that makes it difficult to even untangle how the harms are sometimes created, even though we know that they're there.
Megana Ramaswami
So, I mean, based on what you're both saying, does that mean there's a possibility for a fair trade AI, you know, something where there's sort of an agreed-upon set of ethical standards? I know that you've both mentioned that some of that seems to be happening. Is there a possibility for something more concrete, you know, something like the fair trade label?
Dr. Jason Millar
I mean, there's a lot of talk about standards and, uh, certified ethical labels and these types of things. These are all voluntary. So if you want to have the fair trade label, you know, there has to be some incentive for you to want the fair trade label.
And I don't want to suggest that there are no incentives for companies to do that now, but there kind of isn't any real incentive for companies to do that. And it's very difficult to go to these large companies and ask them to just opt into something that would then potentially slow down their pace of innovation or leave them at some sort of competitive disadvantage, like, uh, you know, not an actual competitive disadvantage, but a perceived one, right?
And it is surprisingly difficult at this moment in time, maybe that will change very soon, maybe not, uh, it's very difficult to incentivize that type of behaviour when there've been so many rewards for doing the other thing, right? So we turn to regulation. I mean, we know that regulation is coming, but, you know, I don't think the public is as aware of some of the issues that Kat and I have been mentioning today.
So there's a certain amount of public awareness that has to happen before you would create the need for that. Maybe it's just because you can buy a coffee from so many different places that it's an easy choice as a consumer to go to the shelf and look for the label. You know, like I said, with pseudo-monopolies on these types of technologies, there's really very little incentive for any one person to sort of lead the way on a fair trade equivalent label.
Kat Zhou
Yeah, that's a great point. Um, and I'm also a bit wary about it, just because I think in the tech industry, at least, like, every single company, all the big players, has some kind of framework or, you know, structure around their perspectives on these things. It really feels like everybody and their mother has, you know, some kind of list of guidelines, whether it's for ethical AI or ethical design or whatnot, myself included.
But I think the risk is that there's a lot of ethics-washing in the industry, in which we say, oh, we've got this framework, look at this, it's shiny, and, you know, we can say a lot of nice things, but then behind the scenes, we're doing something completely different. And this is something that happens all the time in the industry.
And as Jason was saying, there are just no incentives for them to really listen, because there's not really any, you know, enforcement, at least, especially in the U.S., where I'm from. Over here in Europe, it's a bit different. There's definitely more robust, um, regulation happening over here, with fines.
But even those fines, too, are, like, just a drop in the bucket for a lot of these big companies. And with the former question, Jason brought up the idea that these companies are essentially monopolies at this point, right? And I think what we really need to see is just some serious, like, antitrust kind of legislation that just breaks these companies up, ideally down the line, because they are far too powerful for their own good at this point, because there's nothing for them to really be scared of. Even if we did have a fair trade label, like, for example, the B Corp label, I really would be a little bit wary about, you know, the stuff that happens behind the scenes, because we've seen companies like Sama, which, to this day, I think still has its B Corp label.
And it's been reported that they've done some stuff that's not ideal or not great. And so there's just a bit of skepticism, I think, around that.
Megana Ramaswami
I think that's a great point, you know, and I kind of want to go back to something that Jason just referenced, which is that regulations are coming. Where would you guys say that regulatory mechanisms are going or can go to sort of keep tech companies a bit more ethical or keep them in a place where the technologies that we use aren't causing widespread harm?
Kat Zhou
Nowadays, like, these companies are so global, right? And, you know, it'll be an American-headquartered company that allows users in Europe or in Latin America or in Africa, et cetera, to use their products.
And in each of these different regions, there's different legislation and regulation about this. So that's one of the really complicated things. For example, you'll see that Instagram has a certain parameter of what they're allowed to do in the US. And then the moment you open Instagram in Europe, you know, where I live, a bunch of features are not available to us because of regulation via GDPR, the Digital Services Act, the Digital Markets Act, et cetera.
And now in Europe, we have the AI Act. And, well, one of the kind of observations that we've seen throughout the last few years is that Europe has been pretty strict, like, relatively speaking, compared to the US. I don't know about Canada, so you'll have to jump in on that. But, um, yeah, the US is kind of like the Wild West.
It's always been the Wild West. Even China has more robust AI regulation than the US does. Um, and so it's difficult because with all these different fractured spheres of regulation, you're having fractured experiences where some people are a lot more vulnerable to things like, you know, data capture and whatnot, and others are a bit more protected.
And these companies are trying to kind of optimize for the most they can get away with, of course. And they're all kind of calculating, like, okay, what fine can I afford to pay over here in this area in order for me to get this and this and this from other people? So it is definitely a huge, messy, tangled realm.
And I'm not a regulation expert by any means, but even just talking about, for example, deceptive designs and whatnot, there too it's so hard to get this kind of across-the-board solidarity, or at least alignment. And maybe we'll never have that. Maybe that's okay.
But at least trying to coherently crack down on these companies is a step that we should try to take, because I think, you know, without that kind of alignment, it's going to be really hard to take on these giant big tech corporations.
Dr. Jason Millar
I don't think regulations are going to solve everything. We've seen that with privacy, right?
Like the regulators will regulate to a minimum standard, and then there will still be many, many things that designers could do if they want to kind of meet and exceed those standards, privacy being a perfect example, right? You can meet your privacy, uh, requirements under the law. You can also choose to exceed them.
So we have to be realistic about what we can expect from any regulation. These are guardrails that are put in place, in most cases, to prevent the most egregious harms that can happen if you unleash a technology on a public, right? So we are always going to have to turn to the actors who are creating these technologies, even when regulations are in place, in order to get the best out of them. And that's going to require a much more detailed and nuanced conversation about what those harms are.
Uh, we have to do that in a way that brings those people to the table, and openly. That's probably going to have to happen in community with other actors, other affected stakeholders, so that we really can get past the bare minimums that are all we can really expect from good regulation. Like, that's what good regulation does: it sets a standard. The standard is never going to eliminate all the harms that you'll find in that supply chain. So, you know, the best actors are going to be the ones who go above and beyond. And then as consumers, although I don't like to place too much onus on consumers, because we are the least informed, and we have the least amount of time to keep on top of all these thousands of companies, at least then we have some knowledge of how to make decisions about who we choose to get our services from, whether it's AI or tasty coffee.
[MUSIC PLAYING]
Caitlin Kealey
Well, that's it for this week's episode of Ampersand. Thanks for joining us. For more comms and design tips, sign up for our newsletter at emdashagency.ca and follow us on your favourite podcasting app so that you don't miss our next episode.
Ampersand is hosted by Megana Ramaswami and me, Caitlin Kealey, and it's produced by Elio Peterson. This podcast is a project of Emdash, the small agency focused on big impact, helping progressives be heard. We recorded our series with Pop Up Podcasting, and our theme music is courtesy of Wintersleep. I'm Caitlin Kealey, the CEO of Emdash.
Thanks for listening.
[MUSIC FADES OUT]