In the era of digital dominance, algorithms govern numerous aspects of our daily lives, from approving loans to curating our social media feeds. But have you ever wondered what metrics determine an algorithm's success? What underlying assumptions and biases might be shaping the outcomes, and how does this affect us as consumers and creators of these algorithms?
Imagine you're scrolling through your newsfeed. Have you ever stopped to ask why certain articles consistently appear? Unlike human interactions, your newsfeed doesn't solicit your satisfaction with each result. Instead, it relies on quantifiable indicators like click-through rates. This creates a misleading feedback loop where algorithms favor sensational headlines or cute cat pictures, regardless of the actual content quality or your satisfaction after clicking.
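To make the feedback loop concrete, here is a minimal sketch of click-through-rate ranking. The article records and field names (`clicks`, `impressions`) are illustrative assumptions, not any real platform's API:

```python
# A minimal sketch of CTR-based feed ranking (hypothetical data fields).

def click_through_rate(article: dict) -> float:
    """Fraction of impressions that resulted in a click."""
    if article["impressions"] == 0:
        return 0.0
    return article["clicks"] / article["impressions"]

def rank_feed(articles: list[dict]) -> list[dict]:
    """Order the feed purely by CTR -- measuring engagement, not utility."""
    return sorted(articles, key=click_through_rate, reverse=True)

feed = [
    {"title": "In-depth policy analysis", "clicks": 40, "impressions": 1000},
    {"title": "You won't BELIEVE this cat", "clicks": 300, "impressions": 1000},
]
ranked = rank_feed(feed)
# The sensational headline wins, regardless of post-click satisfaction.
```

Nothing in this loop ever measures whether you were glad you clicked, which is exactly the gap the paragraph above describes.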
This approach measures engagement, not utility. While this isn't inherently wrong, it skews the types of content we see, prompting us to question: "Why is the algorithm offering me this result?"
What if we delegate algorithm design to AI? After all, isn't it unbiased? Unfortunately, AI models are trained on data created by humans, often biased content found on the internet. Seeing data as sequences of numbers, they look for patterns without understanding causality. This can amplify historical biases, as in the well-known case of a hiring algorithm that favored male applicants because the company's training data reflected a predominantly male engineering workforce.
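A toy sketch shows how a naive model simply replays a skew in its training data. The data and the frequency-based "screener" are hypothetical, far simpler than a real hiring model, but the failure mode is the same: correlation learned, causality ignored:

```python
# Hypothetical sketch: a frequency-based screener trained on historically
# skewed hiring decisions reproduces that skew in its scores.
from collections import Counter

# Skewed historical outcomes: 90 hires from one group, 10 from another.
historical_hires = ["male"] * 90 + ["female"] * 10

def train_screener(hires: list[str]) -> dict[str, float]:
    """Learn a per-group 'hire rate' -- pure pattern, no causal reasoning."""
    counts = Counter(hires)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

model = train_screener(historical_hires)
# model scores one group of applicants 9x higher,
# purely because history did.
```

The model is "accurate" about the past, and that is precisely the problem: it encodes past discrimination as a prediction about future merit.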
As programmers, we strive to model the real world but must make assumptions and simplifications. The key is recognizing these assumptions and their impact on outcomes. Consider a content moderation algorithm that favors older accounts, assuming they are more trustworthy. While this might be statistically supported, we must ask ourselves: Is this assumption fair? Does it inadvertently discriminate against any protected classes?
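The account-age assumption might look like this in code. The threshold and field names are invented for illustration; the point is how an unexamined assumption becomes a hard rule:

```python
# A hedged sketch of an age-based trust heuristic for content moderation.
# The 0.5 threshold and field names are illustrative assumptions.

def trust_score(account_age_days: int) -> float:
    """Older accounts earn more trust, capped at 1.0 after one year --
    statistically supported, perhaps, but is it fair?"""
    return min(account_age_days / 365, 1.0)

def needs_review(post: dict, threshold: float = 0.5) -> bool:
    """Flag posts from low-trust (i.e. newer) accounts for manual review,
    regardless of the post's actual content."""
    return trust_score(post["account_age_days"]) < threshold

needs_review({"account_age_days": 30})   # new account: flagged
needs_review({"account_age_days": 800})  # old account: waved through
```

Note what the heuristic quietly does: every newcomer, including members of groups who joined the platform recently, bears extra scrutiny before writing a single word.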
Similarly, a word count check may assume longer posts are less useful, potentially favoring one language over another. In a global community, such biases are unacceptable. We must continuously evaluate and adjust our algorithms based on new data, ensuring they remain fair and representative.
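The language bias in a naive word count is easy to demonstrate. Splitting on whitespace works for English but undercounts languages written without spaces, such as Chinese, so a "too short" filter silently penalizes them (the filter and its five-word threshold are assumptions for illustration):

```python
# Sketch: a whitespace word count undercounts languages written without
# spaces, so a minimum-length filter penalizes them unintentionally.

def word_count(text: str) -> int:
    return len(text.split())

def looks_low_effort(text: str, min_words: int = 5) -> bool:
    """Flag posts below a word-count floor as low-effort."""
    return word_count(text) < min_words

english = "This post explains the moderation policy in detail"
chinese = "这篇文章详细解释了内容审核政策"  # comparable content, no spaces

looks_low_effort(english)  # passes: 8 whitespace-delimited words
looks_low_effort(chinese)  # flagged: counted as a single "word"
```

Two posts of comparable substance get opposite treatment, purely because of the writing system. Fixing this means measuring length in a script-aware way, not assuming every language looks like English.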
Designing and implementing algorithms is an ongoing process. As they operate on our platforms, we must monitor trending topics, featured posts, and flagging patterns, adjusting in response to new data so that our algorithms continue to serve everyone equitably.
In conclusion, while algorithms play a pivotal role in our lives, we must remain vigilant about their assumptions and biases. By critically evaluating and adjusting these digital gatekeepers, we can ensure they serve us, not the other way around.