Tuesday, March 31: Captivating Algorithms

Read the article at the link below (use your UNC library login), and then add at least one response to one or both of the initial comments posted below.

https://journals-sagepub-com.libproxy.lib.unc.edu/doi/pdf/10.1177/1359183518820366

 

Comments

Seaver outlines a distinction between persuasion and coercion, noting that software developers rely on a sense of persuasion to justify efforts to "hook" users. In this light, is it possible to see a Facebook feed or phone app as similar to other kinds of persuasion? Are they like a political speech or a magazine ad? Or is there a fundamental difference? Put another way, to what extent, when it comes to digital platforms, are we "pigeons caught in cages . . . transfixed by schedules of reinforcement?"

When it comes to social media, these "persuasion" techniques do differ fundamentally from political speeches or magazine ads. First of all, the "hooking" ads and the recommendations for what to watch or listen to next are extremely dynamic. As the reading notes, preferences are intrinsically unstable, so these algorithms have the potential to be dynamic as well (especially once companies start looking at implicit ratings or signs of satisfaction). At the same time, social media does use persuasive tricks and techniques that hook users. For instance, Instagram has a whole page that suggests posts you might like to scroll through. On Facebook (at least on mine), I get 10-15 targeted dog posts because I frequently look at funny dog memes and videos. When we see things on social media that we enjoy (find funny, sad, etc.), we are reinforced to keep going back and checking for that same psychological rush or stimulation.
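To make that concrete, here is a minimal Python sketch of how implicit signals might feed a dynamic preference score. The signal names and weights are invented for illustration, not taken from any real platform:

```python
from collections import defaultdict

# Hypothetical weights for implicit-feedback events (made up for this sketch).
WEIGHTS = {"view": 1.0, "like": 3.0, "share": 5.0, "watch_seconds": 0.1}

def update_scores(scores, events):
    """Fold a stream of (topic, signal, amount) events into running topic scores."""
    for topic, signal, amount in events:
        scores[topic] += WEIGHTS.get(signal, 0.0) * amount
    return scores

scores = defaultdict(float)
# One session of "funny dog video" behavior...
update_scores(scores, [
    ("dogs", "view", 4),
    ("dogs", "watch_seconds", 90),
    ("dogs", "like", 2),
    ("news", "view", 1),
])
# ...and the feed now over-weights dogs, even though the user never rated anything.
print(max(scores, key=scores.get))  # -> dogs
```

The point of the toy is that the score shifts with every session, which is what makes the persuasion dynamic in a way a printed ad can never be.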

I'm with Shawna, in that I think there is a fundamental difference between the rhetoric of a political speech or magazine ad and the rhetoric of technologies (hardware designs like the iPhone, or software and platforms like Zoom, Netflix, or Facebook). The goal in getting users "hooked" prioritizes time more than anything else. The key metric for engagement (and subsequent success) for software and platforms like Zoom, Netflix, or Facebook is the time users spend on the app, with a secondary metric of success/engagement being the quality of interactions users have on the platform (uploads, reactions, clicks, etc.). With the perhaps more traditional persuasive rhetoric, like that of a political speech or magazine ad, sure, attention and time are important, but they are not ultimately the measure of success the way they are on Zoom, Netflix, or Facebook. Speeches and ads have a different goal in mind—to convince someone to vote for a candidate, to convince someone to buy a product. Nir Eyal's book Hooked, a Silicon Valley bible for product/tech development, on the other hand, is explicit about wanting to monopolize the end user's time. Interestingly, Eyal (who I've worked with/written e-courses for in a previous job) just came out with a book about productivity, entitled Indistractable, that serves as a user/consumer-focused book to 'help' folks get un-Hooked from their technologies. I think this primary emphasis on time reflects a rhetorical shift in thinking about persuasion in the digital age.
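As a toy illustration of that metric hierarchy (the function and weighting below are invented, not any company's formula), time can dominate while interactions act as a smaller tie-breaker:

```python
def engagement_score(minutes_on_app: float, interactions: int) -> float:
    """Hypothetical score: time dominates; clicks/reactions are secondary."""
    return minutes_on_app + 0.25 * interactions

# Under this framing, a passive binger "beats" a brief but active visitor.
print(engagement_score(minutes_on_app=120, interactions=2))  # 120.5
print(engagement_score(minutes_on_app=15, interactions=40))  # 25.0
```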

I can definitely see an argument being made for apps like Instagram being similar to persuasion. In fact, I think they are exactly that. On one side, there are definitely company/business pages out there that are trying to persuade users into buying products or services based on how aesthetically pleasing they look in their photo feed. These companies often hire representatives (usually just called "reps") who will put a link in their profile that redirects to the company's site. Whenever people check out using that link, the rep earns a percentage of the sale. In the "Bookstagram" world (people who take pretty pictures of books, host readalongs, etc.), reps are often chosen based on how many followers they have as well as the quality of the photos they take. Sometimes, it can leave people feeling unworthy or inferior because they don't have all the expensive, perfectly lit and placed items that appear in these kinds of photos. At the same time, people are in control of what they follow and expose themselves to, so the blame can't be put entirely on the people who run pages like that, especially when it may be how they earn their living.

However, sometimes young and impressionable children, teens, and young adults follow pages of influencers who portray lifestyles that feel unattainable for the average person, leaving viewers with a low sense of self-worth because they don't have what that person has. It's important to remember that not everyone is able to look at what others choose to portray as their lives and recognize that what is shown in a photo feed might not be real life (and, even if it is real life, there's no need to compare yourself to what someone else has).

Social media companies like Instagram, Twitter, and especially Facebook, which all use algorithms to determine the best advertisements for each user, do have a sense of persuasion. However, unlike commercials that rely on general persuasion to appeal to a broad group, computers can create ads tailored to each individual viewer. For example, since I was always logging onto my high school's website, I received a ton of ads for my school. I went on Forever 21 to look for shoes, and right after I logged off, I got an ad on Facebook for it. So yes, ads rely on an algorithm as their own form of persuasion based on users' decisions. This is different from a political speech or a magazine ad, which are aimed at general groups and do not focus on each individual person's interests. In a way, the way Facebook uses ads may even deter people from going to these websites. People like having a sense of privacy and do not want to feel like a giant computer is watching them and taking notes every single time they buy a pair of socks from Old Navy.
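Here is a rough sketch of that retargeting pattern, with the sites and products taken from the examples above and everything else (field names, the seven-day window) invented for illustration:

```python
from datetime import datetime, timedelta

now = datetime.now()
# Hypothetical browsing history for one user.
browsing_history = [
    {"site": "forever21.com", "product": "shoes", "seen": now - timedelta(hours=1)},
    {"site": "oldnavy.com", "product": "socks", "seen": now - timedelta(days=12)},
]

def pick_retargeted_ad(history, max_age_days=7):
    """Show an ad for the most recently viewed product within the time window."""
    recent = [h for h in history if now - h["seen"] <= timedelta(days=max_age_days)]
    return max(recent, key=lambda h: h["seen"])["product"] if recent else None

print(pick_retargeted_ad(browsing_history))  # -> shoes
```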

I think there is a fundamental difference between traditional types of persuasive rhetoric such as in a political speech or magazine ad and the rhetoric of these digital forms of persuasion. When it comes to digital platforms, this persuasion is happening more insidiously—it is, as Seaver articulates, a trap because it doesn’t elicit quite the same voluntary opting in of a political speech or a magazine ad. When you listen to a speech or look at a magazine ad, you have made a choice to participate in that particular community rather than a different one. On a digital platform, this choice is eliminated because the information you see has been tailored to your preferences by an algorithm, without your own conscious reflection on those preferences. In digital spheres, reinforcement happens automatically and immediately, which isn’t the case with a speech or a traditional advertisement.

The article notes an evolution from predictions based on ratings to traces that measure enchantment:

the data that flows into the recommender has broadened to include practically any form of interaction, even (and now especially) interactions that a user may not realize have occurred – such as data shared by a social network, saved in a browser history, or captured from a smartphone's sensors. Algorithmic recommendation has settled deep into the infrastructure of online cultural life, where it has become practically unavoidable.

As we have all now moved to learning through Zoom and to Netflix-binging downtime, what are the implications of this shift?
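To make the quoted passage concrete, here is a toy "trace collector" in the same spirit: it merges signals a user never explicitly provided. The function, sources, and fields below are all illustrative assumptions, not any recommender's real interface:

```python
def build_profile(browser_history, social_shares, sensor_pings):
    """Combine passive traces from several sources into one recommender input."""
    return {
        "topics_browsed": {h["topic"] for h in browser_history},
        "topics_friends_shared": {s["topic"] for s in social_shares},
        "usual_location": max(set(sensor_pings), key=sensor_pings.count),
    }

profile = build_profile(
    browser_history=[{"topic": "recipes"}, {"topic": "sneakers"}],
    social_shares=[{"topic": "sneakers"}],
    sensor_pings=["home", "home", "campus"],
)
# None of these signals required an explicit rating, or even a conscious click.
print(profile)
```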


In reply to iamdan

It's definitely interesting to think about the ways in which increased time on digital platforms produces more and more data for algorithms to work with, especially when Zoom meetings are held for academic reasons; academic spaces have maybe been at least slightly more private and less digitized than other spaces, like social ones. Some brief research on Zoom reveals that it has a pretty mediocre privacy policy. The company may be collecting data from automatically generated transcripts of conversations, and it knows where people are and who is in each meeting. This could be really concerning if you are having a Zoom call with a doctor or therapist.

I don't think people realize how much data Zoom could be collecting right now from our class meetings, conversations with friends, and meetings with health professionals. The real question is what the company will do with this data. It also might be hard, post-quarantine, to resist the urge to stay home and meet on a Zoom call instead. I think people are realizing the convenience of this technology without also seeing what we sacrifice to use it: mostly privacy.

 


In reply to iamdan

In recent years, the phenomenon of companies tracking our online activity and using this data to refine their targeting algorithms has become more and more prevalent. While I agree that this tracking of our data is not ideal, it is going to continue happening unless widespread legislation prevents it, so it is good to be aware of it and modify our behaviors accordingly. What this means to me is that we should always keep the fact that we are being tracked in the back of our minds and, before consuming a piece of media, recognize that it is because of these algorithms that the media is being presented to us.

I somewhat disagree with the article's characterization of these types of recommendation algorithms as 'traps'. To me, 'trap' has a negative connotation; it implies the use of persuasion to lure somebody into a situation that they don't want to be in. I don't necessarily buy that these recommendations are 'traps', because the thing that they are recommending is something the user does want to interact with. Sure, you could make the argument that the user actually doesn't really want to interact with that now, and they will regret it later, but I still think it is a grey area.

Regardless, I think companies' tracking of data and subsequent use of that data for recommendations can make it more difficult for people who are addicted to social media to leave. I think that for most people, as long as we are constantly aware of what companies are doing, it should be okay.


In reply to iamdan

With the new shift to working primarily online, such as through Zoom, Slack, and Microsoft Teams (at least, these are the tools I've had to use to keep up with classes and my job that is now operating remotely), I think we are becoming a bit more reliant on technology. I typically try to avoid having too many hours of screen time per day, but now it's pretty unavoidable since most work needs to be done online due to the current situation. I would say this will result in even more data being collected; I am sure people are spending even more time on their phones and on platforms such as YouTube, Netflix, Hulu, etc. simply because they _have_ the time. I know I'm in that boat. It makes me wonder how we will all adjust "back to normal" once the current conditions improve. Will online tools be more prominently used, now that they've proven to be effective? Will this affect how much data is obtained and how it may be used? I guess there's no way to know right now, but I find it interesting to think about, regardless.

The article "Captivating Algorithms" raises an idea that I've often thought about and discussed in my various Advertising and Public Relations classes. Platforms create special, individual echo chambers for each of us if we aren't careful, because we like viewing content that confirms our biases. If the number of clicks or time spent per page is the definition of success for our software, we are sacrificing values like truth, open-mindedness, and fairness. I've certainly seen articles about how platforms like YouTube and Instagram can feed their users increasingly radical content in order to retain interest.
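A stripped-down sketch of that feedback loop (the topics, click model, and update rule are all invented for illustration): when the feed keeps serving whatever got clicked before, one topic quickly crowds out the rest.

```python
import random
random.seed(0)  # deterministic demo

topics = ["politics", "sports", "cooking"]
affinity = {t: 1.0 for t in topics}  # start neutral

def pick_item():
    """Serve the topic the model currently thinks is most clickable."""
    return max(topics, key=lambda t: affinity[t])

for _ in range(20):
    shown = pick_item()
    clicked = random.random() < affinity[shown] / (affinity[shown] + 2)
    if clicked:
        affinity[shown] += 0.5  # reinforcement: each click makes the topic likelier

# One topic runs away with the feed; the others barely get shown again.
print(affinity)
```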

Social media epitomizes persuasion. It persuades us to think of certain characteristics and lifestyles as more desirable, and it creates an ecosystem of subtleties, ads being one such subtlety. These sites specialize in dynamic persuasion, learning from a person's tendencies in an effort to "offer" the most pertinent suggestions. There is most certainly a difference between political speeches/magazine ads and these algorithmic technologies. Tracking engagement to increase the quality of interactions on an app creates a specialized environment for a person the moment they enter. Persuasion of this kind requires the algorithms to keep changing, and they do just that: internet culture constantly changes, allowing the technologies to learn and grow with the people they are linked to.

Advertisements are a form of persuasion. Social media has become a prime source of ads, and therefore an evolved, specialized method of persuasion for everyone who devotes time to these platforms and allows their information and activity to customize advertisements and content into the most influential, impactful form for each individual. Our society already places a high value on technology, and many aspects of it have access to our personal information and toy with the idea of privacy. Present-day circumstances, however, have created a tremendous and unprecedented reliance on technology and on the ways it can serve students, employees, and communities. But this great reliance and trust placed on technology and platforms like Zoom, Netflix, and social media puts our personal information at greater risk and creates more opportunities for the lines of privacy to be crossed. Advertisements and influential content are being given more wide-scale, accepted platforms through which algorithms and private information can be used to impact individuals.

All of the concern about algorithms is totally valid; I personally believe that if Instagram hooks me into scrolling through the app for 45 minutes, I have wasted nearly the entirety of that time. But that's because it's Instagram -- an app where the majority of posts are superficial or unnecessary to my day. Yet the stuff works like a charm -- nearly all of my friends have an average daily screen time on their phones of five to seven hours. And that is a problem BECAUSE the majority of the content we are looking at is utter bullshit. But here's where I feel hopeful: we, as a society, have developed technology to reel in and hold viewers' interest... so why don't we make people hooked on learning about topics or skills that are actually useful, such as history, a new language, politics, government, the economy, sex education, and so much more? There is so much useful knowledge in the world; the fact that big tech companies are only focused on stupid things like tailoring a girl's newsfeed so that she stumbles upon a new pair of shoes she likes but doesn't need is, for me, a big fat "shame on you, society."