AU Gov’s new Social Media laws (Live Streaming/terrorism)

G’day,

To save you clicking - Morrison et al met with social media execs to talk about how to combat “terrorist” material such as the footage live-streamed during the Christchurch massacre. They were very unimpressed with what they heard, so their answer is new laws that could see the CEOs of social media platforms (ie Facebook & YouTube) face jail time, as well as fines worth 10% of the business’s yearly takings.

Firstly, I applaud the idea that the social media companies should be forced to act faster on the abuses we are seeing committed on their platforms - ever since the first murder was live-cast (was it on Facebook?), something should have been done to ensure such “events” could be quickly flagged and removed. They are global companies - someone (or a team) could be “on call” 24/7 (ie if not in the US, then in AU, or in the EU, etc).

But… do they (Morrison) honestly expect to be able to jail Zuckerberg or Pichai, and/or fine them billions of dollars? I’m not sure the punishments are realistic, or even enforceable. (Any international law experts on the forum?)

That said - the punishment does need to be of such a magnitude as to make the companies take note and act, given they have not done so to date. Freedom of speech be damned - a video showing a terrorist murdering 50 people should not be given any platform.

Cheers

cosmic

I think the Morrison government is super out of touch with anything technology related. They honestly seem stuck in the 80s when it comes to tech. The whole anti-encryption bill thing proved that. I guarantee Scott Morrison frequently needs his kids to help him use his smartphone or explain what an emoji is.

Jailing CEOs of social media platforms or fining them billions would just never happen; it’s a silly, over-reaching political statement. But maybe the suggestion of it would at least send a message to social media companies, who need to take a lot more responsibility for the impact the content created and shared on their platforms is having in the world. The US election, Facebook’s ‘fake news’, the rise of alt-right/incel/neo-Nazi groups, the atrocity in Christchurch - all have or had major social media ‘viral’ elements which, if not actively supported by the platforms themselves, were certainly enabled by them.

Twitter, YouTube, and Facebook have complete control over what is ‘trending’, and I think a responsibility to at least police that element of their platforms. No one should be watching a live stream of a terrorist massacre, and social media platforms should not be facilitating its broadcast. If they can’t at least stop that from spreading, then maybe all hope is lost.

3 Likes

Something needs to happen.

My first concern is that you should not enact laws that can’t be properly enforced. The second is that I feel very uncomfortable about a law that seems more about your own virtue than the actual problem. It gives a leg-up to despots everywhere who want to control information.

On the other hand, social networks are under increasing pressure to police content, and their apparent willingness to bend over as fast as possible means it is time to drop the pretence that they are merely a carrier service and treat them like the publishers they really are.

2 Likes

I know I tend to live under a rock, but using the Christchurch example, what is a reasonable response time to something like that before these proposed penalties kick in? From what point in time? First start of the stream? First report? First what?

My understanding is that all the companies took action to remove the material and actively tried to stop people reposting/sharing it. Is this not the case? Or was the time too long? Should there be no more live streaming of anything without approval? The Grand Tour vaguely mentioned a stat (in a totally different context) that there are 300 hours of new video uploaded every minute!
The first Google hit says that too, so assuming it’s even close to correct, how would you police that amount of information?
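To put rough numbers on that stat, here’s a quick back-of-envelope sketch (the 300 hours/minute figure is from above; the reviewer shift length and review speed are my own made-up assumptions):

```python
# Back-of-envelope: staffing needed to human-review ALL uploads.
# The 300 hours/minute figure is from the thread; everything else
# (shift length, review speed) is an assumption for illustration.

UPLOAD_HOURS_PER_MINUTE = 300   # video uploaded per real-time minute
REVIEW_SPEED = 1.0              # hours of video a reviewer checks per hour worked
SHIFT_HOURS = 8                 # hours per reviewer per day

upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24    # 432,000 hours/day
reviewer_hours_needed = upload_hours_per_day / REVIEW_SPEED
reviewers_per_day = reviewer_hours_needed / SHIFT_HOURS

print(f"Video uploaded per day: {upload_hours_per_day:,.0f} hours")
print(f"Reviewers on shift needed: {reviewers_per_day:,.0f}")
# ~54,000 reviewers on shift every single day, just to watch at 1x speed.
```

Even with generous assumptions, full human pre-review is tens of thousands of staff per day - which is why the platforms lean on user flagging instead.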

I believe it took Facebook just over 1 hour to kill the original footage. They then removed lots and lots of copies of the footage over the following day or so.

In my mind - that’s too long.

But - I agree, it’s going to be very hard to work out meaningful timeframes.

An hour is too long? I’d have said 12 hours was marginally too long and a day definitely too long, but an hour? Really?

What are they supposed to do? Have specialist staff sitting there 24/7 just in case there is a terrorist shooting or something similar? If people want companies to be doing that, then the cost should be carried by the government that’s insisting they do so, IMO.

1 Like

If this were live on television, imagine the response.

The internet - and social media on the internet - has become so pervasive in modern societies that it is easily equatable to television. Yes, there are a lot more “channels” - but they are ultimately even more wide-reaching, with a global audience (with notable exceptions such as China, due to its firewall).

YouTube and Facebook, as the main targets of Morrison’s ire, would already have staff monitoring issues flagged by users. If a piece of content suddenly has a truckload of flags being raised, it’s a good guess that it requires urgent attention.
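A minimal sketch of what that kind of triage might look like - the flag threshold and time window here are invented for illustration, not any platform’s real policy:

```python
from collections import deque
import time

# Hypothetical triage rule: escalate a video for urgent human review
# when it collects too many user flags inside a short sliding window.
FLAG_THRESHOLD = 50    # flags within the window that trigger escalation
WINDOW_SECONDS = 300   # 5-minute sliding window

class FlagMonitor:
    def __init__(self):
        self.flags = {}  # video_id -> deque of flag timestamps

    def record_flag(self, video_id: str, now: float | None = None) -> bool:
        """Record one user flag; return True if the video should be escalated."""
        now = now if now is not None else time.time()
        window = self.flags.setdefault(video_id, deque())
        window.append(now)
        # Drop flags that have aged out of the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) >= FLAG_THRESHOLD

monitor = FlagMonitor()
for i in range(60):
    urgent = monitor.record_flag("stream-123", now=1000.0 + i)
print("escalate:", urgent)  # True once 50 flags land inside 5 minutes
```

A flag spike like that doesn’t prove anything by itself, but it’s a cheap way to push the worst content to the front of the review queue.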

The internet has really been living in the wild west, with government action only occurring reactively, to things like piracy and porn. This has become as much a debate about how the internet should be used as about gun laws.

I believe it was 37 minutes. That is impressive IMHO.

3 Likes

I was going off this post on the ABC, which said it was up for 69 minutes.

I’ve seen several reports that the first complaint was lodged around 12 minutes after the footage ended - around 29 minutes from when the stream started.

As such - around 40 minutes from complaint till the video was taken down - @Entropy’s figure sounds about right.

Yes, but television (streamed or broadcast) is ‘curated’ by the provider.

The provider of the ‘terrorist vision’ isn’t going to do that, and until someone tells them about the offensive posting, the social media company doesn’t even know it exists.

At that point someone gets a “hey! You there in the complaints department, there is this bad stuff on your social media!”, at which point some worker has to go through a whole process: check whether it’s a genuine complaint, view the content, and decide if it’s serious enough to take action on. Realise that it’s REALLY SERIOUS and above their pay grade. Then report it to their supervisor, who has to examine the post and will realise it’s above their pay grade too. Repeat several times until it reaches someone with enough seniority and authority to issue a take-down instruction. That instruction then has to be sent to the staff responsible for removing content. Then those staff have to action the request.

Doing all that, with all those people in the chain… in 37 minutes?

That IS impressive and it should be recognised as such; we shouldn’t be talking about jailing the people who run a company that did everything humanly possible to remove the content.

What we should be doing is coming down hard on the people who shared it and kept putting it back up!

1 Like

Let’s say they could take down offensive material within 5 minutes. That is enough time for others to find it and download it. Those others then upload it to other sites. Each of those other sites takes it down in 5 minutes. But there is still enough time for others to download it from those sites and upload it to even more sites.
And so on …
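As a toy illustration of that cascade - the copies-per-window and takedown delay are invented purely for the sake of the argument:

```python
# Toy model of the re-upload cascade described above.
# Assumption (purely illustrative): every live copy gets downloaded and
# re-uploaded to 2 new sites before its 5-minute takedown completes.

COPIES_BEFORE_TAKEDOWN = 2   # re-uploads spawned per copy per window
WINDOWS = 6                  # number of 5-minute takedown windows simulated

live = 1                     # the original upload
total_ever_posted = 1
for window in range(1, WINDOWS + 1):
    live = live * COPIES_BEFORE_TAKEDOWN  # old copies die, each spawned 2 new ones
    total_ever_posted += live
    print(f"after {window * 5:>2} min: {live} live copies, "
          f"{total_ever_posted} posted in total")

# Even with a perfect 5-minute takedown, copies grow geometrically:
# after 30 minutes there are 64 live copies and 127 posted in total.
```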

As others have stated, 5 minutes is unrealistic anyway. A real take down process will take much longer.

Anyone trying to beat the system could upload to many sites at once, increasing their chances of success.

The only solution is to not allow posting of uploaded material until it has been reviewed, ie. curation.

I suspect curation would not be a problem for many users. It does prevent immediacy, but maybe that is a necessary compromise.

Perhaps some previously authenticated users and organisations could be allowed to bypass curation. The user, the organisation, and the authentication authority would be liable and penalised for any inappropriate material.
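Sketching that idea to make it concrete - the account names and the two-way split are hypothetical, just one way the proposal could work:

```python
from enum import Enum, auto

class UploadDecision(Enum):
    PUBLISH_IMMEDIATELY = auto()   # trusted uploader bypasses curation
    HOLD_FOR_REVIEW = auto()       # everyone else waits for a human

# Hypothetical allowlist of pre-vetted accounts (broadcasters, verified orgs).
TRUSTED_UPLOADERS = {"abc_news", "verified_org_42"}

def route_upload(uploader_id: str) -> UploadDecision:
    """Route an upload: trusted accounts go live at once, the rest are queued.

    Per the proposal above, trusted accounts accept liability for what they
    publish; an abuse would see them dropped from the allowlist.
    """
    if uploader_id in TRUSTED_UPLOADERS:
        return UploadDecision.PUBLISH_IMMEDIATELY
    return UploadDecision.HOLD_FOR_REVIEW

print(route_upload("abc_news"))        # PUBLISH_IMMEDIATELY
print(route_upload("random_user_99"))  # HOLD_FOR_REVIEW
```

You lose immediacy for unverified users, but the review queue only has to cover the accounts nobody has vetted yet.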

I say drop live streaming. It won’t prevent material like this from being uploaded, but it might slow things down a bit. You can bet that footage is still out there.

I’d imagine hardly anyone was watching live. Has FB said how many watched it live and how many watched it on HIS page afterwards?

It’s ridiculous to expect a company to vet things BEFORE they go live, and it’s also ridiculous to expect anything to be done on the strength of 1 or 2 reports. Maybe after 5 reports, give a tech 10-15 minutes to respond, and another 10-15 minutes to investigate and act. I believe an hour or so is absolutely within the confines of acceptability.
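A rough sketch of that proposed policy as a timeline check - the 5-report trigger and the response windows come from the post above (I’ve taken the generous 15-minute end of each range); the individual report times other than the 29-minute first complaint are made up:

```python
from dataclasses import dataclass, field

# Sketch of the proposed policy: nothing happens until 5 reports,
# then the clock starts -- 15 min for a tech to respond, 15 more to act.
REPORT_TRIGGER = 5
RESPOND_SLA_MIN = 15
ACT_SLA_MIN = 15

@dataclass
class ReportedVideo:
    report_times: list[float] = field(default_factory=list)  # minutes since stream start

    def add_report(self, minute: float) -> None:
        self.report_times.append(minute)

    def clock_started_at(self) -> float | None:
        """Minute the SLA clock starts: when the 5th report lands."""
        if len(self.report_times) < REPORT_TRIGGER:
            return None
        return sorted(self.report_times)[REPORT_TRIGGER - 1]

    def takedown_deadline(self) -> float | None:
        start = self.clock_started_at()
        if start is None:
            return None
        return start + RESPOND_SLA_MIN + ACT_SLA_MIN

video = ReportedVideo()
for minute in (29, 31, 33, 34, 36):   # 29 min = first complaint, per the thread
    video.add_report(minute)
print("SLA clock starts at minute:", video.clock_started_at())    # 36
print("takedown deadline at minute:", video.takedown_deadline())  # 66
# Under this policy, the ~69-minute takedown reported above would
# have just missed the deadline.
```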

no, it’s not

1 Like

An hour is a reasonable period based on what is technically achievable. However, if the intent is to prevent such content being broadcast to the world, then an hour is way too long and not acceptable. There may not be a technical solution that will satisfy this intent other than curation.

Initially, reports as I read them said that the Govt met with social media companies (ie Facebook and YouTube), then drafted legislation… (and were unimpressed with the staff chosen to meet with them)

Now…

We get one instance of major terrorism on social media and our government loses its mind.

Let’s just ignore the millions of people that video themselves eating ice cream or playing with their cat.

I don’t think our government has the moral standing to dictate to anyone how they should operate their business: “if you don’t do this and that, we’ll fine and jail you”. :roll_eyes: