This is the way it’s supposed to work. Twitter was alerted to a problem, it analyzed the issue, and now it’s fixing it. After hearing complaints that its automatic photo-cropping algorithm showed a racial bias, Twitter spent some time conducting an analysis, then decided to remove the tool and let people crop the photos themselves.
Photo-Cropping Bias Complaints
Last October, Twitter users complained that the photo-cropping algorithm wasn’t treating everyone equally. The social media site announced it would analyze its tool and spent the next several months doing so. According to a blog post written by Rumman Chowdhury, Twitter’s director of software engineering, the company also improved how it assesses potential bias.

This was a joint effort between the ML Ethics, Transparency, and Accountability (META) team and the Content Understanding Research team. The teams asked whether machine learning was the best tool for photo cropping, tested the algorithm for gender and racial bias, and examined whether users were being allowed to make their own choices.
Twitter instituted the photo-cropping algorithm in 2018. At the time, the goal was “to improve consistency in the size of photos in your timeline and to allow you to see more tweets at a glance,” according to the blog post. The saliency algorithm was designed to judge what a reader might want to see in a photo so that the unimportant parts could be cropped out to make the photo a more viewable size.
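To make the idea of saliency cropping concrete, here is a minimal sketch, assuming a precomputed saliency map (a grid of "interestingness" scores): pick the highest-scoring point and crop a fixed-size window around it. This is an illustration only; Twitter's production system predicted saliency with a neural network, and the function name and parameters below are invented for this example.

```python
import numpy as np

def crop_around_peak(image, saliency, crop_h, crop_w):
    """Crop a window centered (as far as the image edges allow)
    on the highest-saliency point.

    Illustrative sketch only -- not Twitter's actual implementation.
    `saliency` is a 2D array of scores matching the image's height
    and width.
    """
    h, w = saliency.shape
    # Locate the most salient pixel.
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop on the peak, clamped so it stays in bounds.
    top = min(max(peak_y - crop_h // 2, 0), h - crop_h)
    left = min(max(peak_x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

The bias complaints arose because, when several faces appear in a photo, a model trained to score saliency can systematically rank some faces higher than others, and this crop-around-the-peak step then cuts the lower-ranked faces out.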

Twitter’s analysis considered three areas that could lead to potential harm. The first was the complaint that the saliency algorithm preferred White people over Black people and men over women. Another area analyzed was whether the algorithm chose a woman’s chest or legs over other objects in a photo. The third was that the feature didn’t allow users to express themselves the way they wanted to, since they couldn’t crop their own photos.
The analysis found “an 8% difference from demographic parity in favor of women” over men, “a 4% difference from demographic parity in favor of White individuals” over Black individuals, “a 7% difference from demographic parity in favor of White women” over Black women, and “a 2% difference from demographic parity in favor of White men” over Black men.
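The "difference from demographic parity" figures above can be read as a gap in selection rates: if the crop favored each group equally often in head-to-head comparisons, the difference would be 0%. A minimal sketch of that arithmetic, using invented numbers rather than Twitter's data:

```python
def selection_rate(times_favored, total_comparisons):
    """Fraction of comparisons in which the crop favored this group."""
    return times_favored / total_comparisons

def parity_difference(rate_a, rate_b):
    """Difference from demographic parity: 0 means both groups are
    favored equally; a positive value favors group A."""
    return rate_a - rate_b

# Hypothetical example: if the crop favored White individuals in
# 52 of 100 pairings and Black individuals in 48, the difference
# from demographic parity is 4 percentage points.
white_rate = selection_rate(52, 100)
black_rate = selection_rate(48, 100)
print(f"{parity_difference(white_rate, black_rate):.0%}")
```

Note that a gap of a few percentage points is a statistical tendency across many images, not a guarantee about any single photo, which is why Twitter framed the results as differences from parity.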

The team also tested the saliency algorithm for the “male gaze” and found no evidence of objectification bias.
Ultimately, Twitter knew it could adjust the photo-cropping algorithm to address the biases, but it was “concerned by the representational harm of the automated algorithm when people aren’t allowed to represent themselves as they wish on the platform.” It also believed there were “other potential harms beyond the scope of this analysis, including insensitivities to cultural nuances.”
Removing the Biased Algorithm
After concluding that it may be best to let users express themselves how they wish, Twitter “began testing a new way to display standard aspect ratio photos in full on iOS and Android – meaning without the saliency algorithm crop.” While giving users more control over their images, Twitter also wanted to improve “the experience of people seeing the images in their timeline.”
For the ultimate in control, users will see a preview of the cropped image before it’s posted. Twitter is in the clear, as it’s not relying on machine learning and is letting users do the cropping themselves.
