
The goal of case studies is to provide you with a real-world example of how ethics, privacy, and security challenges emerge and are resolved. The cases we will be examining are generally from the last 5-10 years and cover a range of topics. For each case study assignment, there will be 2-4 readings/videos/audio that you should first review before responding to the questions. I highly encourage you to do your own research into the topic as well to supplement the links I’ve provided.
You will then complete a 500-700 word write-up responding to the questions below. Your response should be based on evidence, not just your opinion. You should include at least three citations in your write-up; these can be pulled from the examples I list below or from your own research. The questions will vary based on the case, and you will be graded on whether you directly address the questions raised and how critically you engage with the materials. There aren’t “right” answers; instead, I want to see how you think through these complicated topics.
How should technology companies navigate ethics related to the technology they build? This is a critical question for the 21st century, as technology companies like Google, Apple, Facebook, and Amazon have become dominant players in the global economy and have created software and hardware that is in most people’s homes, cars, and/or workplaces. We’ve also seen pushback against these technologies, most recently against facial recognition technologies like Amazon’s Rekognition tool, which performs poorly with female and dark-skinned faces.
Google is a prime example of the tension between research & development (R&D) and ensuring that company-built technologies do not create new harms (or exacerbate existing ones) for parts of society. In 2018, Google released its “AI Principles,” which provided an overarching set of ethical guidelines for doing AI research. These principles included things like “be socially beneficial” and “avoid creating or reinforcing unfair bias.” The following spring, the company created an external advisory board made up of eight academics, researchers, and businesspeople to provide advice on its AI-based research and development.
The program was seen as a move by Google to reduce criticism of its various tools; by having an (admittedly powerless) group of external experts, the company could show it was taking steps to be more ethical in its work. However, the group was disbanded after just a week due to significant pushback (both internal and external) over the inclusion of Kay Coles James, the president of the conservative Heritage Foundation, which has taken a strong stance against LGBTQ+ equality, among other things. Google did not attempt to re-form the advisory board after this.
This is not Google’s only controversy related to ethics and AI in recent years; in 2020, the company drew significant criticism after firing Timnit Gebru, a prominent voice in the AI ethics community. Gebru is well known and respected in the computer science community for being among the first to show that Amazon’s facial recognition technology was racially biased. Gebru was fired by Google after her team submitted a paper critical of the company’s natural language processing models. In essence, she was fired for doing exactly what she was hired to do: identify ethical problems in Google’s products.
These events raise some important but difficult questions regarding technology companies. For this case study, consider the following questions:
1. Most technology companies “self-police” their technology development. Should certain companies be required to have external advisory boards that can provide unbiased feedback on company products?
2. Gebru was part of Google’s Ethical AI team. Should these types of jobs in tech companies come with protections that would prevent what happened to her? What would those look like?
3. The research paper in question details the risks of large language models like those Google uses to generate text or infer meaning from text. It details the financial and environmental costs, as well as problems related to changes in language and slang over time and how these models could be used to generate and spread misinformation. From an ethical perspective, what do you see as the biggest challenge to these kinds of models and why?