Exploring Solutions For Unethical AI Practices in Social Media
Is artificial intelligence good or bad? Should someone use AI to learn about consumers?
Well, Aristotle would say that there is no single right or wrong thing to do. It’s all about purpose, about being rather than doing. For example, the purpose of a knife is to be sharp; therefore, a sharp knife is a good knife. But if someone takes that knife and repeatedly commits robberies with it, then they are not going to be virtuous, for as Aristotle said, “you are what you repeatedly do.” A knife isn’t bad on its own, since its purpose is not to be bad.
AI has a purpose too. According to a paper titled Artificial Intelligence, written by Emerit US college:
“the basic objective of AI (also called heuristic programming, machine intelligence, or the simulation of cognitive behavior) is to enable computers to perform such intellectual tasks as decision making, problem solving, perception, and understanding human communication.”
So the purpose of AI is to make decisions based on its own “cognitive” behavior. But should we allow that behavior to take users’ information in order to send them targeted ads? What about spam email? When the artificial intelligence in our technology is working as intended, that is a good thing… but in human hands, there are a number of ways it can be used unethically. While the profit may be good for merchants, many people feel uneasy about the power AI holds within marketing.
Computer vision is a field of study within AI that examines how computers can analyze and comprehend digital media. It works similarly to human eyes, using algorithms to understand and automate tasks that the human visual system can do. In the future, computer vision will surpass the capabilities of the human eye with ease. For example, Facebook has developed algorithms that can recognize human faces with much better precision than you or I can, and it will use them for even more precise commercial advertisement.
Let’s say your friend takes a picture of you in a clothing shop and posts it on Facebook. Facebook detects your face and links the photo to your account, and then shows you advertisements for clothing similar to what that shop sells.
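To make the scenario concrete, here is a toy sketch of that photo-to-ad pipeline. Everything in it is hypothetical — the data structures, the function names, and the ad catalog are invented for illustration; a real system like Facebook’s is vastly more complex and runs actual face-recognition models.

```python
# Hypothetical sketch: how a tagged photo could feed an ad system.
# Pretend output of a face-recognition model: photo -> matched accounts.
photo_tags = {"beach_photo.jpg": ["alice"], "shop_photo.jpg": ["alice", "bob"]}

# Context inferred from each photo (e.g. the background is a clothing store).
photo_context = {"shop_photo.jpg": "clothing_store"}

# Ad inventory keyed by context.
ads_by_context = {"clothing_store": ["SummerWear 20% off", "New denim arrivals"]}

def ads_for_user(user):
    """Collect ads triggered by photos the user was recognized in."""
    ads = []
    for photo, people in photo_tags.items():
        if user in people and photo in photo_context:
            ads.extend(ads_by_context.get(photo_context[photo], []))
    return ads

print(ads_for_user("alice"))  # clothing ads, triggered only by shop_photo.jpg
```

The point of the sketch: you never told the platform you were in that shop — the photo did, and that is enough to change which ads you see.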
This makes me wonder how far they could go in violating our privacy, when this is only one feature of AI. It makes me wonder why AI is used to invade people’s privacy… and why so many stand for it.
Many people working on AI at Google and the social media platforms may not want to try to change the algorithm. This may be because they don’t want to risk their jobs, or because the algorithm is an integral part of their jobs. There is a sense of shamelessness and greed from those who profit the most from these algorithms.
Recently, on December 2nd, the employment of Timnit Gebru, one of Google’s leading AI ethics researchers, abruptly ended. There is controversy surrounding this: a higher-up at Google sent an email confirming that Gebru had resigned, while Gebru explains that she did no such thing.
Gebru was planning on delivering a paper at an upcoming AI conference highlighting potential ethical issues surrounding Google advertising, but Google’s AI head, Jeff Dean, said it “didn’t meet our bar for publication.”
You can read more on this story at: https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382
This is just one story of many where employees have spoken up about injustices, while the big corporations have either fired those who spoke up or done nothing at all.
On the users’ end, I polled 93 Facebook users and asked them about their data usage and their opinions on AI in relation to social media. I did this because I wanted to see whether they were aware of these issues and how affected they were by them.
When I asked about how much they use social media, 74 people said they used it daily, 11 said often, 5 said rarely and 2 said they don’t use it.
Within this same group, the large majority of whom use social media daily, 43 people said they were neutral on the topic of social media and its future, while 39 people said they were pessimistic and 10 people said they were optimistic.
This was interesting to me because you would think people use social media because they associate some kind of positivity with it, right? It turns out that daily use doesn’t correlate with a positive outlook. But it is clear that the platforms are useful, and that users will tolerate the negatives of the service as long as they are still able to connect socially.
People are very aware of the issues on social media. When asked about their biggest concerns regarding AI and social media, the results were a long list, with privacy issues at the top.
So most of these users use social media daily, and know about privacy issues, so maybe these people fight back by not feeding the AI algorithm?
Well, I also asked them whether they had ever bought anything from a social media ad. 60 people said they had, while 33 people said no. While 33 is not a small number, the majority, at 60 people, had still bought something from these ads.
Here is a link to the survey: https://forms.gle/iXyXcdPxVpiY3Pci7
Here is the link to the raw data spreadsheet: https://docs.google.com/spreadsheets/d/1i8M91kkhMhrG1Cl4x7agMd6IEr9nrVIYxcpaiTGzduc/edit?usp=sharing
When you interact with an ad on social media, the AI algorithm collects that information to form a picture of who you are and what people like you… well, like! That way it can get better and better at showing you ads you will hopefully click on.
And thus a sort of contract is created, but not one that we are totally aware of. By interacting with these ads, by joining a social media service, you find yourself in a contract. The terms of service are an explicit contract that no one really reads, but when you sign up for these sites, those terms are attached. And because far more data is collected than we actually know about, just by using the site you also enter an implicit contract. When we interact with other people, like posts and pages, and join groups, the algorithm does the same thing it does with ads: it starts to give you a narrower feed, suggests things it thinks you’ll like, and shows you political ads it knows you align with. Once again you are exposed to AI that uses the psychology of persuasion.
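The feed-narrowing loop described above can be sketched in a few lines. This is a minimal, invented model — real recommendation systems use far richer signals than a simple per-category counter — but it shows the mechanism: every interaction shifts future rankings toward what you already engaged with.

```python
from collections import Counter

# Hypothetical sketch of feed narrowing: categories and weights are invented.
profile = Counter()  # interest profile built up from your interactions

def record_interaction(category, weight=1):
    """Every like, click, or ad interaction bumps that category's score."""
    profile[category] += weight

def rank_feed(candidate_posts):
    """Sort candidate posts so categories you engaged with come first."""
    return sorted(candidate_posts, key=lambda post: -profile[post["category"]])

record_interaction("fashion")     # clicked a clothing ad
record_interaction("fashion")     # liked a brand's page
record_interaction("politics_a")  # liked one political post

feed = rank_feed([
    {"title": "Hiking trails near you", "category": "outdoors"},
    {"title": "Candidate A rally", "category": "politics_a"},
    {"title": "Denim sale", "category": "fashion"},
])
print([p["title"] for p in feed])
# Fashion rises to the top; the never-clicked "outdoors" post sinks to the bottom.
```

Notice that nothing here asks what you *want* to see — only what you have already touched, which is exactly how the feed narrows over time.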
A solution I came up with for this issue would be to take that user agreement and break it down for the user. Instead of the contract being one huge, daunting page of confusing verbiage, it should be summed up into sections and made easy to understand, with the option to view the terms as a whole. I would also make it so that you do not have to agree to everything right at the start! This puts the power back into the user’s hands.
Here is an example of my solution design:
To get even more information on this topic I interviewed Said Soufi, an Electronics ICT engineer whose passion is to combine AI with robotics.
What is your experience with AI and technology currently?
At the moment, I'm following a master program called 'Intelligent Electronics' in Belgium where we have a course about 'machine learning', which is one of the pillars of AI. Moreover, I have chosen a master thesis where I have to use 'computer vision', which is also one of the pillars of AI. And next summer, I'm planning to learn 'reinforcement learning', which is also a very interesting topic in AI. This one uses roughly the same idea as the trick Pavlov used with his dogs.
Why do you believe AI is necessary?
I definitely think that AI is necessary because AI is all about helping us make good decisions; machine learning is all about this. The more efficient the decisions we make, the more efficiently we spend our resources. And guess what: we can program an AI application in such a way that it keeps learning without our further interaction. This means the program takes the newly received data and combines it with the old data to make an even better decision, to be more efficient. Especially in the medical world!
What is your biggest concern when it comes to AI?
My biggest concern with AI is that it would be used to dehumanize the human being. That we will become over-controlled by AI. That we won't have any privacy anymore. That our thoughts and what we see get controlled by big companies. Their AI applications could recommend (or delete) the opinions that do or don't match their general point of view. That we get so overwhelmed with channeled info that we miss the chance of thinking outside the box. For example, you go on YouTube and start watching some videos. Then their AI algorithms are going to choose which videos to recommend next. The difference is that if they don't do it, then YOU'll search for more videos yourself, and the chance that you find something that could make you think outside the box significantly increases. The capability of thinking outside the box has a massive influence on human beings and where we go in the future. For example, nowadays AI is largely used to trick us into making decisions we wouldn't make in normal situations.
Soon enough, we are feeding our own ego with opinions and ads that suit us, so to speak. Instead of searching and finding interesting things on your own, clicking a recommended page decreases the chance that you will find something that makes you think outside the box. So by being in these implicit contracts, we have to give up some of our freedoms in order to enjoy the services. One of those freedoms is the freedom of choice.
(This makes it possible to make websites “free”. All you have to do is look at ads and give up some of your data.)
Even though people may not want this to be the case, we put up with it because of the benefit of being able to socialize on the internet. Some people who are unhappy with how the social media algorithms work have chosen to go off the grid in response to learning about them.
There are clear issues with AI being used for unwanted data collection and we feel we have to pay that price in order to stay connected…So what would be the solution to this problem?
One way we can avoid this invasion of privacy is to do the research into the apps and websites we are using. One of the apps I recommend avoiding is Tinder. Tinder is a tech matchmaking app that uses an AI algorithm to determine your desirability score based on how many swipes you get. You then get matched with people around your same score level. So you may think you are just getting matches based on your chosen location, sexual orientation, etc., but the list of matches Tinder finds for you is not a random selection. It cuts you off from potential matches because they might be at a different score tier than you. In other words, these are filtered results that the user is unaware of.
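A toy version of that tiered matching might look like the sketch below. The scoring formula and the tier width are pure guesses on my part — Tinder never fully published its score system, and has since said it revised it — but the filtering effect is the point: profiles outside your tier simply never appear.

```python
# Hypothetical sketch of desirability-tier matching; formula and tier
# width are invented, not Tinder's actual (unpublished) system.

def desirability(right_swipes_received, total_swipes_received):
    """Fraction of people who swiped right on you, as a 0-100 score."""
    if total_swipes_received == 0:
        return 0.0
    return 100.0 * right_swipes_received / total_swipes_received

def candidate_pool(user_score, others, tier_width=15):
    """Only surface profiles whose score is within one 'tier' of yours;
    everyone else is silently filtered out before you ever see them."""
    return [name for name, score in others.items()
            if abs(score - user_score) <= tier_width]

others = {"dana": 80.0, "eli": 55.0, "fran": 42.0, "gus": 10.0}
me = desirability(right_swipes_received=45, total_swipes_received=100)  # 45.0
print(candidate_pool(me, others))  # dana and gus never show up for me
```

Location and orientation filters feel like choices you made; this tier filter is one the app makes for you, invisibly.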
To challenge the system, we need to stop using apps that rely on biased algorithms like this. One method people use to avoid some of the “dangers” of Facebook while still maintaining contact with friends is deleting the Facebook app and only using Messenger, its messaging extension app.
On Instagram, for example, you might limit the amount of information gathered by creating a profile that doesn’t include personal details like your name or picture, and by not liking posts by companies or ads.
Or, when YouTube gives you a recommended video to watch, don’t give it the satisfaction of clicking on it.
On top of this, try searching in incognito mode, a built-in option on most browsers. This allows private browsing, in which the browser creates a temporary session that is isolated from the browser’s main session and user data. It therefore isn’t using your existing data to filter your search results.
These are just some of the ways to avoid giving up your data and receiving filtered, biased results.
The root of the problem still lies with the companies, which persist in using these algorithms. For them it makes sense: the algorithms are created to run automatically in order to achieve maximum profit.
It may be some time before these practices change, but acting against them is the first step.