LinkedIn Announces Tougher Measures Against Inappropriate Content on its Platform

Amid the various divisive debates and concerns at present – which, if anything, look set to become even more incendiary as we head towards the US election – LinkedIn has this week outlined a range of new measures that it’s implementing to ensure that its members feel comfortable and protected when engaging on the platform.

As explained by LinkedIn:

“Every LinkedIn member has the right to a safe, trusted, and professional experience on our platform. We’ve heard from some of you that we should set a higher bar for safe conversations given the professional context of LinkedIn. We could not agree more. We’re committed to making sure conversations remain respectful and professional.”

In line with this, LinkedIn has announced the following updates:

Making policies stronger and clearer

LinkedIn says that it’s working to refine its Professional Community Policies in order to clarify that “hateful, harassing, inflammatory or racist content has absolutely no place on our platform”.

“In this ever-changing world, people are bringing more conversations about sensitive topics to LinkedIn and it’s critical these conversations stay constructive and respectful, never harmful. When we see content or behavior that violates our Policies, we take swift action to remove it.”

LinkedIn also notes that it’s rolling out new educational content to help users understand their obligations in this respect, which will appear as pop-up notifications or reminders when you go to post, message or otherwise engage.

Using AI and machine learning to protect against inappropriate content

LinkedIn says that it’s also working with parent company Microsoft to help keep the LinkedIn feed appropriate and professional.

“More recently, we’ve scaled our defenses with new AI models for finding and removing profiles containing inappropriate content, and we’ve created a LinkedIn Fairness Toolkit (LiFT) to help us measure multiple definitions of fairness in large-scale machine learning workflows.”

LinkedIn published a full overview of the LinkedIn Fairness Toolkit (LiFT) earlier this week, which facilitates: 

“… a more equitable platform by avoiding harmful biases in our models and ensuring that people with equal talent have equal access to job opportunities.” 
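To make the idea of "measuring fairness" more concrete: LiFT itself is a Scala/Spark library, and the sketch below is not its API – it's just a minimal, hypothetical illustration of one common fairness definition (demographic parity), which compares how often a model produces a positive outcome for different groups.

```python
# Illustrative sketch only -- NOT the LiFT API. Demographic parity is one of
# the "multiple definitions of fairness" a toolkit like LiFT can measure:
# it compares the rate of positive predictions across demographic groups.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g. "A" / "B"), same length as predictions
    """
    rates = {}
    for g in set(groups):
        # Collect the predictions made for members of this group
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Example: group A gets a positive outcome 3/4 of the time, group B only 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A score of 0 would mean both groups receive positive outcomes at the same rate; the further from 0, the larger the disparity a fairness audit would flag.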

Creating economic opportunity for every member of the global workforce is now the key focus of former LinkedIn CEO Jeff Weiner, who stepped down from the role in June to concentrate on this mission. The COVID-19 pandemic may actually open the door to significant change here – as we look to get the economy back on track in the wake of the pandemic, rebuilding could provide an opportunity to implement updated standards on equality, which could help reduce systemic bias.

It’s a hard task, but LinkedIn is already taking steps on this front.

In addition to this, LinkedIn also recently rolled out a new process to detect and hide inappropriate InMail messages, tackling another key area of concern for users.

Closing the loop when you report content that violates our policies

LinkedIn also notes that, in the coming weeks, it will be providing more transparency in its enforcement efforts when taking action on content that violates platform policies.

“We’ll close the loop with members who report inappropriate content, letting them know the action we’ve taken on their report. And, for members who violate our policies, we’ll inform them about which policy they violated and why their content was removed.”

These are important initiatives for LinkedIn – each of these problem areas can have a significant negative impact on users. As such, it’s good to see LinkedIn taking a more definitive stand, and while we’ll have to wait and see what effect these efforts actually have, it’s encouraging that LinkedIn is getting on the front foot and detailing its updated processes.

In terms of actions users can take themselves, LinkedIn advises that members should ignore and report unwanted connection requests, and use its updated audience control options to limit who can see and reply to their posts if they feel unsafe.

“You now have the option to select who gets to see your content. You can select ‘Anyone’, which makes your post visible to anyone on or off LinkedIn, ‘Anyone + Twitter’, which makes your post visible to anyone on both LinkedIn and Twitter, or ‘Connections only’, which makes your post visible to only your 1st-degree connections and reduces the likelihood of people you don’t know or don’t trust seeing your post.” 

Twitter implemented similar controls recently, with the option to limit who can reply to your tweets, while Instagram has also added more tools to limit who can engage with your updates.

Of course, due to LinkedIn’s algorithm, the number of people who see your posts will be limited either way, but the controls give you more choice in the matter, which could help you limit unwanted interactions.

In some ways, it’s sad that there’s a need to implement such controls and options, but it’s reflective of how people choose to interact and engage on social media. Social platforms have now become a critical element in modern discourse, and that, unfortunately, also includes negative interactions.

The idea of a globally connected, interactive space is idealistic, and as we’ve increasingly found, there’s a need for limitations around that connection.

It’s sad, but realistic. And as such, it’s also important for LinkedIn to take these steps.  

You can read more about LinkedIn’s security updates here.
