UK watchdog wants disclosure rules for political ads on social media

The UK’s data protection agency will push for increased transparency into how personal data flows between digital platforms to ensure people being targeted for political advertising are able to understand why and how it is happening.

Information commissioner Elizabeth Denham said visibility into ad targeting systems is needed so that people can exercise their rights — such as withdrawing consent to their personal data being processed should they wish.

“Data protection is not a back-room, back-office issue anymore,” she said yesterday. “It is right at the centre of these debates about our democracy, the impact of social media on our lives and the need for these companies to step up and take their responsibilities seriously.”

“What I am going to suggest is that there needs to be transparency for the people who are receiving that message, so they can understand how their data was matched up and used to be the audience for the receipt of that message. That is where people are asking for more transparency,” she added.

The commissioner was giving her thoughts on how social media platforms should be regulated in an age of dis- (and mis-)information during an evidence session in front of a UK parliamentary committee that’s investigating fake news and the changing role of digital advertising.

Her office (the ICO) is preparing its own report this spring — likely to be published in May, she said — which will lay out its recommendations for government.

“We want more people to participate in our democratic life and democratic institutions, and social media is an important part of that, but we also do not want social media to be a chill in what needs to be the commons, what needs to be available for public debate,” she said.

“We need information that is transparent, otherwise we will push people into little filter bubbles, where they have no idea about what other people are saying and what the other side of the campaign is saying. We want to make sure that social media is used well.

“It has changed dramatically since 2008. The Obama campaign was the first time that there was a lot of use of data analytics and social media in campaigning. It is a good thing, but it needs to be made more transparent, and we need to control and regulate how political campaigning is happening on social media, and the platforms need to do more.”

Last fall UK prime minister Theresa May publicly accused Russia of weaponizing online information in an attempt to skew democratic processes in the West.

And in January the government announced it would set up a dedicated national security unit to combat state-led disinformation campaigns.

Last month May also ordered a review of the law around social media platforms, as well as announcing a code of conduct aimed at cracking down on extremist and abusive content — another Internet policy she’s prioritized.

So regulating online content has already been accelerated to the top of government in the UK — as it is increasingly on the agenda in Europe.

It’s not yet clear, though, how the UK government will seek to regulate social media platforms to control political advertising.

Denham’s suggestion to the committee was for a code of conduct.

“I think the use of social media in political campaigns, referendums, elections and so on may have got ahead of where the law is,” she argued. “I think it might be time for a code of conduct so that everybody is on a level playing field and knows what the rules are.

“I think there are some politicians, some MPs, who are concerned about the use of these new tools, particularly when there are analytics and algorithms that are determining how to micro-target someone, when they might not have transparency and the law behind them.”

She added that the ICO’s incoming policy report will conclude that “transparency is important”.

“People do not understand the chain of companies involved. If they are using an app that is running off the Facebook site and there are other third parties involved, they do not know how to control their data,” she argued.

“Right now, I think we all agree that it is much too difficult and much too opaque. That is what we need to tackle. This Committee needs to tackle it, we need to tackle it at the ICO, and the companies have to get behind us, or they are going to lose the trust of users and the digital economy.”

She also spoke up generally for more education on how digital systems work — so that users of services can “take up their rights”.

“They have to take up their rights. They have to push companies. Regulators have to be on their game. I think politicians have to support new changes to the law if that is what we need,” she added.

And she described the incoming General Data Protection Regulation (GDPR) as a “game-changer” — arguing it could underpin a push for increased transparency around the data flows that are feeding and shaping public opinion. Although she conceded that regulating such data flows to achieve the sought-for accountability will require a fully joined-up effort.

“I would like to be an optimist. The point behind the General Data Protection Regulation as a step-up in the law is to try to give back control to individuals so that they have a say in how their data are processed, so that they do not just throw up their hands or put it on the ‘too difficult’ pile. I think that is really important. There is a whole suite of things and a whole village that has to work together to be able to make that happen.”

The committee recently took evidence from Cambridge Analytica — the UK-based company credited with helping Donald Trump win the US presidency by creating psychological profiles of US voters for ad targeting purposes.

Denham was asked for her response to seeing CEO Alexander Nix’s evidence, but said she could not comment to avoid prejudicing the ICO’s own ongoing investigation into data analytics for political purposes.

She did confirm that a data request by US voter and professor David Carroll, who has been trying to use UK data protection law to access the data held on him for political ad targeting purposes by Cambridge Analytica, is forming one of the areas of the ICO enquiry — saying it’s looking at “how an individual becomes the recipient of a certain message” and “what information is used to categorise him or her, whether psychographic technologies are used, how the categories are fixed and what kind of data has fed into that decision”.

Although she also said the ICO’s enquiry into political data analytics is ranging more widely.

“People need to know the provenance and the source of the data and information that is used to make decisions about the receipt of messages. We are really looking at — it is a data audit. That is really what we are carrying out,” she added.

Featured Image: Tero Vesalainen/Getty Images

UK kicks off driverless car law review to get tech on the road by 2021

In 2021 the UK government intends the country to be well on its way to a driverless future.

No, not a cheap joke about Brexit — yesterday it announced a three-year regulatory review to “pave the way for self-driving cars”.

This follows the budget, in November, when the government announced a tranche of funding for technology innovations — including AI and driverless cars — and said it wants to establish a looser framework for testing self-driving vehicles “without a safety operator” with the stated aim of getting driverless cars on the roads by 2021.

The law review meshes with that goal, though the government is clearly giving itself a very tight timetable for resolving regulatory complications and passing the necessary legislation.

The myriad technological challenges of ensuring autonomous vehicles can operate safely and efficiently in all weather conditions are really just one portion of the challenge here.

Other major barriers include things like public acceptance of self-driving technology, and liability and insurance complications that arise once you remove human drivers from the mix — raising questions like how do you apportion blame when something goes wrong?

But the law review, which will be jointly carried out by the Law Commission of England and Wales and the Scottish Law Commission, is intended to grapple with exactly these issues.

Among the questions to be reviewed and — says the government — answered are:

  • who is the ‘driver’ or responsible person, as appropriate
  • how to allocate civil and criminal responsibility where there is some shared control in a human-machine interface
  • the role of automated vehicles within public transport networks and emerging platforms for on-demand passenger transport, car sharing and new business models providing mobility as a service
  • whether there is a need for new criminal offences to deal with novel types of conduct and interference
  • what is the impact on other road users and how they can be protected from risk

Commenting in a statement, roads minister Jesse Norman said: “The UK is a world leader for self-driving vehicle research and development, and this work marks an important milestone in our continued commitment to the technology.

“With driving technology advancing at an unprecedented rate, it is important that our laws and regulations keep pace so that the UK can remain one of the world leaders in this field.”

Law commissioner Nicholas Paines QC added: “British roads are already among the safest in the world and automated vehicles have the potential to make them even safer. Provided our laws are ready for them.

“We’ll now start consulting widely on how the law should work with this new technology and develop reforms which enable the use of self-driving vehicles in the years to come.”

“Automated vehicles could have a big impact on the way we live and work so it’s important that, UK-wide, we have a legal system which can accommodate them,” said Scottish law commissioner, Caroline Drummond, in another statement.

Norman announced the review during a visit to the GATEway driverless car project in Greenwich, which last year piloted an autonomous shuttle pod for ferrying people along a short pedestrian and cycle path in the London region.

The project has continued to run autonomous tests but is now entering its final phase which the government says will involve a fleet of automated pods providing a shuttle service around the Greenwich Peninsula, aimed at understanding public acceptance of, and attitudes towards, self-driving vehicles.

Also commenting on the law review in a statement, Rob Wallis, CEO of TRL, the company behind the GATEway project, said: “Regulation, safety standards and vehicle insurance models all have a key part to play in enabling change, whilst giving society confidence that these new products and services can be introduced safely.”

The review forms part of the government’s push to encourage mobility innovations as set out in its Industrial Strategy — which it says is aimed at boosting the UK’s long term productivity and the earning power of citizens. (So presumably the government’s long term vision for truckers, cabbies and private hire vehicle drivers is for them to shift gears into higher tech careers.)

In the Future of Mobility Grand Challenge, one of several the Industrial Strategy sets out — to “put the UK at the forefront of the industries of the future” — the government writes that it wants to “look for opportunities to improve customers’ experience, drive efficiency and enable people to move around more freely”.

“The UK’s road and rail network could dramatically reduce carbon emissions and other pollutants, congestion could be reduced through higher-density use of road space enabled by automated vehicles, and mobility could be available when we want it, where we want it and how we want it,” it adds.

UK facing legal action over immigration exemption in DP bill

The UK’s data protection bill is facing fresh controversy and the threat of legal action if the government does not ditch an amendment which removes data protection rights in instances where the Home Office deems it could prejudice “effective immigration control”, or the “investigation or detection of activities that would undermine the maintenance of effective immigration control”.

Digital civil rights group the Open Rights Group (ORG) and the3million, a post-Brexit referendum organization advocating for the rights of European Union citizens living in the UK, have said today they are launching formal legal action over the inclusion of the clause.

They argue the exemption means at least three million people across the UK would be unable to find out what personal data the Home Office or other related organizations hold on them, with the government able to shield the data behind a claim of “effective immigration control” — risking cementing errors in the processing of applications that could lead to immigrants being unfairly denied entry or deported from the UK.

They also argue the clause is incompatible with the incoming EU General Data Protection Regulation (GDPR) — which the data protection bill is intended to transpose into UK law ahead of the May 25 deadline for applying the regulation. (Although the government is making these specific provisions in a section listing exemptions from GDPR.)

The Home Office has a reputation for data processing errors and for taking a very heavy-handed approach where immigration is concerned. ORG’s executive director, Jim Killock, argues it is trying to use a sweeping exemption to data protection law to cover up its own mistakes.

“This is an attempt to disguise the Home Office’s mistakes by making sure that their errors are never found. When people are wrongly told to leave, they would find it very hard to challenge,” he said in a statement.

“Data protection is a basic safeguard to make sure you can find out what organisations know about you, and why they make decisions. Sometimes, during criminal investigations, that isn’t appropriate: but immigrants aren’t criminals, nor should they be treated as such.”

The government has also proposed setting up a new registration system for EU citizens once the UK leaves the EU, as is currently slated to happen in March 2019 — meaning there would be a new Home Office database containing the personal details of more than three million people. And where there’s data, there’s the inevitable risk of errors and inaccuracies.

Which is exactly why GDPR has provisions for data subjects to be able to view data held about them and have any errors corrected. An exemption for immigration seems intended to impede such visibility, however.

“We need safeguards in place to ensure that these citizens have access to the information held about them, so they are able to appeal Home Office decisions or correct mistakes,” said Nicolas Hatton, chairman of the3million, in another supporting statement. “Everyone should be entitled to know how the Home Office and other government agencies are using their records, and that is why we want this exemption removed.”

Lawyers from Leigh Day, the firm acting on behalf of ORG and the3million, have written to UK home secretary Amber Rudd outlining their concerns and asking for the clause to be removed from the bill, which gets its second reading debate later today in parliament.

Rosa Curling, a human rights solicitor from Leigh Day, said the exemption risks creating a “discriminatory two-tier system” for data protection rights.

“The clause is incompatible with GDPR, as well as EU law generally and the European Convention on Human Rights,” she said in a statement. “If the exemption is made law, our clients will apply for judicial review. They have written to the government today to urge it to reconsider and to remove the immigration exemption from the bill without further delay.”

We’ve contacted the Home Office for comment and will update this story with any response.

It’s not the first controversy for the 2017 data protection bill. In January concerns were raised, including by the UK’s information commissioner, over the implications of a new ethics regime for processing public sector data.

Consumer groups have also voiced unhappiness that the government has not taken up another GDPR provision that allows for collective redress for victims of data breaches.

On the data retention side, the government is also facing several ongoing legal challenges to its surveillance regime under European law — and has lost multiple times including in December 2016 when Europe’s top court dealt a major blow to “general and indiscriminate” data retention directives.

Featured Image: Diamond Gallery UNDER A CC BY-SA 3.0 LICENSE

UK accuses Russia of 2017’s NotPetya ransomware attacks

The UK government has directly accused Russia of being behind the so-called NotPetya ransomware attack last year — which quickly spread around the globe, affecting businesses in Spain, France and India, and demanding payment in Bitcoin to unlock infected machines. The malware initially appeared targeted at Ukrainian networks.

“We have entered a new era of warfare, witnessing a destructive and deadly mix of conventional military might and malicious cyber-attacks,” UK defense secretary Gavin Williamson is quoted as saying (via The Guardian). “Russia is ripping up the rulebook by undermining democracy, wrecking livelihoods by targeting critical infrastructure and weaponising information… We must be primed and ready to tackle these stark and intensifying threats.”

Russia has made various military incursions into Ukrainian territory since 2014, when it annexed Crimea. Ukraine has also suffered a sustained cyberwarfare campaign apparently waged by Kremlin agents — though of course Russia denies all charges — including, in 2015, a cyber attack against the local energy grid that temporarily disrupted electricity supplies in the depths of winter.

Russia has denied Williamson’s latest charge too — as it also did last year, when the UK prime minister directly accused Vladimir Putin of seeking to weaponize information in order to sow social division and influence elections in the West, via the medium of fake news posted to social media platforms.

“We categorically dismiss such accusations; we consider them unsubstantiated and groundless. It’s not more than a continuation of the Russophobic campaign which is not based on any evidence,” a Kremlin spokesman, Dmitry Peskov, told the BBC.

The UK foreign office backed up Williamson’s remarks, with Lord Ahmad saying in a statement (via Reuters): “The decision to publicly attribute this incident underlines the fact that the UK and its allies will not tolerate malicious cyber activity.

“The UK government judges that the Russian government, specifically the Russian military, was responsible for the destructive NotPetya cyber attack. Its reckless release disrupted organisations across Europe costing hundreds of millions of pounds. The Kremlin has positioned Russia in direct opposition to the West yet it doesn’t have to be that way.”

“We call upon Russia to be the responsible member of the international community it claims to be rather than secretly trying to undermine it,” he added.

While the NotPetya malware was initially thought to be a strain of the Petya ransomware, it turned out to be a new variant that reused only some code. (Hence NotPetya.) It also made use of the EternalBlue exploit — widely believed to have been stolen from the NSA, and the same exploit that fueled last year’s WannaCry/WannaCrypt attack.

UK parliamentarians are currently investigating the impact of Russian-backed Brexit meddling in the UK’s 2016 EU referendum, as part of a wider enquiry into fake news. And separately the UK Electoral Commission is also looking into digital campaigning activity funded by Russia during the referendum.

Last month the UK government announced plans to set up a dedicated national security unit to try to combat state-led disinformation campaigns.

Featured Image: Bryce Durbin

Cryptojacking attack hits ~4,000 websites, including UK’s data watchdog

At first glance a CoinHive crypto miner being served by a website whose URL contains the string ‘ICO’ might not seem so strange.

But when you know that ICO in this case stands for the UK’s Information Commissioner’s Office — aka the national data protection and privacy watchdog, whose URL predates both Bitcoin and the current craze for token sales — well, the extent of the cryptojacking security snafu quickly becomes apparent.

Nor was the ICO the only website — or even the only government website — caught serving cryptocurrency mining malware to visitors on every page they visited. Thousands of sites were compromised via the same plugin.

Security researcher Scott Helme flagged the issue via Twitter yesterday, having been initially alerted by another security professional, Ian Trump.

Helme traced the source of the infection to an accessibility plugin, called Browsealoud, created by a UK company called Texthelp.

The web screen reader software was being used on scores of UK government websites — but also further afield, including on government websites in the US and Australia.

So when an attacker injected a crypto mining script into Browsealoud’s JavaScript library some 4,000 websites — a large number of them taxpayer funded and/or subsidized — were co-opted into illegal crypto mining… Uh, oopsie…

tl;dr: “If you want to load a crypto miner on 1,000+ websites you don’t attack 1,000+ websites, you attack the 1 website that they all load content from,” as Helme has since blogged about the attack.

Texthelp has also since issued a statement — confirming it was compromised by (as yet) unknown attackers, and saying it is investigating the incident.

“At 11:14 am GMT on Sunday 11th February 2018, a JavaScript file which is part of the Texthelp Browsealoud product was compromised during a cyber attack,” it writes. “The attacker added malicious code to the file to use the browser CPU in an attempt to illegally generate cryptocurrency.  This was a criminal act and a thorough investigation is currently underway.”

According to Texthelp the crypto miner was active for four hours on Sunday — before, the company claims, its own “continuous automated security tests” detected the modified file in Browsealoud and responded by pulling the product offline.

“This removed Browsealoud from all our customer sites immediately, addressing the security risk without our customers having to take any action,” it further claims.

However, at the time of writing, the ICO’s website remains down for “website maintenance” — having been taken offline on Sunday soon after Helme raised the alert.

We reached out to the ICO with questions and a spokesperson responded with this statement: “We are aware of the issue and are working to resolve it. We have taken our website down as a precautionary measure whilst this is done.”

The spokesperson added that the ICO’s website remains offline today because it’s investigating what it believes is another Browsealoud-associated issue.

“The ICO’s website will remain closed as we continue to investigate a problem which is thought to involve an issue with the Browsealoud feature,” the spokesperson told us, without elaborating further.

Yesterday the UK’s National Cyber Security Center issued its own statement about the crypto miner attack, writing:

NCSC technical experts are examining data involving incidents of malware being used to illegally mine cryptocurrency.

The affected service has been taken offline, largely mitigating the issue. Government websites continue to operate securely.

At this stage there is nothing to suggest that members of the public are at risk.

Texthelp has also claimed that no customer data was “accessed or lost” as a result of the attack, saying in its statement yesterday that it had “examined the affected file thoroughly and can confirm that it did not redirect any data, it simply used the computers CPUs to attempt to generate cryptocurrency”.

We’ve also reached out to Texthelp for any updates on its investigation — at the time of writing the company has not responded.

But even if no user data has indeed been compromised, as it’s claiming, the bald fact that government websites were found to be loading a CoinHive crypto miner which clandestinely — and thus illegally — mined cryptocurrency en masse is hugely embarrassing. (Albeit, as Helme points out, the attack could have been much, much worse. A little CPU burn is not, for example, stolen credit card data.)

Still, Helme also argues there is added egg-on-face here — perhaps especially for the ICO, whose mission is to promote data protection best practice including robust digital security — because the attack would have been trivially easy to prevent, with a small change to how the third party JS script was loaded.

In a blog post detailing the incident he describes a method that would have mitigated the attack — explaining:

What I’ve done here is add the SRI Integrity Attribute and that allows the browser to determine if the file has been modified, which allows it to reject the file. You can easily generate the appropriate script tags using the SRI Hash Generator and rest assured the crypto miner could not have found its way into the page. To take this one step further and ensure absolute protection, you can use Content Security Policy and the require-sri-for directive to make sure that no script is allowed to load on the page without an SRI integrity attribute. In short, this could have been totally avoided by all of those involved even though the file was modified by hackers. On top of all of that, you could be alerted to events like this happening on your site via CSP Reporting which is literally the reason I founded Report URI. I guess, all in all, we really shouldn’t be seeing events like this happen on this scale to such prominent sites.

Although he does also describe the script the ICO used for loading the problem JS file as “pretty standard”.
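For the curious, the SRI integrity value Helme describes is simply a base64-encoded digest of the script file’s bytes, declared in the script tag so the browser can verify the file it fetches. Here’s a minimal sketch of how one could be generated — note the URL and script content below are placeholders for illustration, not Browsealoud’s actual loader:

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value for a file's bytes.

    A browser fetching the file recomputes this digest and refuses to
    execute the script if the value in the tag no longer matches.
    """
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Placeholder content standing in for a third-party JS library.
script = b"window.loadAccessibilityWidget = function () {};"
integrity = sri_hash(script)

# The resulting tag pins the exact file served at publish time; an
# injected crypto miner would change the digest and fail the check.
print(f'<script src="https://example.com/plugin.js" '
      f'integrity="{integrity}" crossorigin="anonymous"></script>')
```

Pairing such tags with the `require-sri-for` CSP directive Helme mentions would additionally stop any script that lacks an integrity attribute from loading at all.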

So it does not look like the ICO was doing anything especially unusual here — it’s just that, well, a national data protection agency should probably be blazing a trail in security best practice, rather than sticking with riskier bog-standard approaches.

Not to single out the ICO too much though. Among the other sites compromised in the same attack were US courts, the UK’s financial ombudsman, multiple local government websites, National Health Service websites, higher education websites, theatre websites and Texthelp’s own website, to name a few.

And with volatile cryptocurrency valuations clearly incentivizing cryptojacking, this type of malware attack is going to remain a problem for the foreseeable future.

Also blogging about the incident, and the SRI + CSP defense proposed by Helme, web security expert Troy Hunt (of data breach search service fame) has a bit more of a nuanced take, pointing out that third party plugins can be provided as a service, rather than a static library, so might need (and be expected) to make legitimate changes.

And therefore that the wider issue here is how websites are creating dependencies on external scripts — and what can be done to fix that. Which is certainly more of a challenge.

Perhaps especially for smaller, less well-resourced websites. At least as far as government websites go, Hunt argues they definitely should be doing better at shutting down web application security risks.

“They should be using SRI and they should be only allowing trusted versions to run. This requires both the support of the service (Browsealoud) not to arbitrarily modify scripts that subscribers are dependent on and the appropriate processes on behalf of the dev teams,” he writes, arguing that government websites need to take these risks seriously and have a prevention plan incorporated into their software management programs — as standard.

“There are resources mentioned above to help you do this — retire.js is a perfect example as it relates to client-side libraries,” he adds. “And yes, this takes work.”
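Hunt’s “only allowing trusted versions” point boils down to an allowlist check: pin the digest of the copy your team actually reviewed, and reject anything else. A hedged sketch of the idea (the script contents and review workflow here are invented for illustration):

```python
import hashlib

def is_trusted(script_bytes: bytes, trusted_digests: set) -> bool:
    """Accept a third-party script only if it matches a version the
    dev team has already reviewed and pinned."""
    return hashlib.sha256(script_bytes).hexdigest() in trusted_digests

# At deploy time, record the digest of the copy that was reviewed.
reviewed_copy = b"window.plugin = function () { /* reviewed code */ };"
trusted = {hashlib.sha256(reviewed_copy).hexdigest()}

assert is_trusted(reviewed_copy, trusted)
# A tampered copy -- say, with a miner appended upstream -- is rejected.
assert not is_trusted(reviewed_copy + b"startMiner();", trusted)
```

The trade-off Hunt flags is visible here: a service that legitimately updates its script would also fail the check, so pinning only works if the provider commits to stable, versioned releases.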

But if the ICO isn’t going to do the work to lock down web application risks, how can the national data protection agency expect everyone else to?

Featured Image: Bryce Durbin

UK outs plan to bolster gig economy workers’ rights

The UK government has announced a package of labor market reforms to respond to changes in working patterns including those driven by the rise of gig economy platforms and apps like Uber and Deliveroo.

It’s billing the move as an expansion of workers’ rights — saying “millions” of workers will get new day-one rights, as well as touting tighter enforcement of sick and holiday pay rights.

“We recognise the world of work is changing and we have to make sure we have the right structures in place to reflect those changes, enhancing the UK’s position as one of the best places in the world to do business,” said prime minister Theresa May in a statement.

“We are proud to have record levels of employment in this country but we must also ensure that workers’ rights are always upheld. Our response to this report will mean tangible progress towards that goal as we build an economy that works for everyone.”

The reforms — which the government has dubbed a ‘Good Work Plan’, saying it will for the first time be “accountable for good quality work as well as quantity of jobs” — follow rising criticism of conditions for workers in the gig economy, and a number of legal challenges including by a group of UK Uber drivers who used an employment tribunal in 2016 to successfully challenge the company’s classification of them as self-employed contractors.

It also follows a government-commissioned independent review of modern working practices, conducted by Matthew Taylor and published last summer. The government says it’s acting on all but one of the Taylor report recommendations.

(The one exception being changes to tax rates, which, unsurprisingly given its prior U-turn, are confirmed as entirely off the table. “The employment status consultation makes very clear that changes to the rates of tax or NICs for either employees or the self-employed are not in scope,” it emphasizes on that.)

“The Taylor Review said that the current approach to employment is successful but that we should build on that success, in preparing for future opportunities,” said business secretary Greg Clark in a supporting statement. “We want to embrace new ways of working, and to do so we will be one of the first countries to prepare our employment rules to reflect the new challenges.”

The government claims it’s going further than Taylor’s recommendations — specifically by planning to enforce

  • vulnerable workers’ holiday and sick pay for the first time
  • a list of day-one rights including holiday and sick pay entitlements and a new right to a payslip for all workers, including casual and zero-hour workers
  • a right for all workers, not just zero-hour and agency, to request a more stable contract, providing more financial security for those on flexible contracts

The 2016 employment tribunal judgment that reclassified the group of UK Uber drivers as workers gave them entitlement to benefits such as holiday pay and sick pay.

The ruling also paves the way for other legal challenges to be brought by gig economy workers. And while Uber continues to appeal against it, the company has also responded to rising legal risk and political pressure over gig economy working conditions by introducing some subsidized insurance products for workers on its platforms. So, in case law terms, the direction of travel for legal liabilities in this area seems fairly clear.

As well as tightening up the enforcement of workers’ rights, the government said it will be raising fines for employers that show “malice, spite or gross oversight”, as well as considering raising penalties for employers who have previously lost similar cases.

It will also be introducing a new naming — and, clearly, shaming — scheme for employers who fail to pay employment tribunal awards.

While the government is very clearly signaling an intent to bolster gig economy workers’ rights, plenty of questions about its reform plan remain at this stage — such as, for example, how it intends to define “vulnerable” workers, and how explicitly it will codify the planned changes and/or write them into law.

Responding to its announcement today, unions were generally critical, arguing it does not go far enough.

Some also accused the government of seeking to kick the problem into the long grass. In a response statement, IWGB union general secretary, Dr Jason Moyer-Lee, added: “Similar to the Taylor review itself, the announcement is big on grandiose claims, light on substance.”

The reforms certainly lack detail at this stage — not least because the government has announced no fewer than four consultations to, as it puts it, “inform what the future of the UK workforce looks like” — so it’s not possible to determine the final shape of employment law in this area. (Nor, therefore, to assess the impacts on gig economy platforms.)

Among the consultations announced today is one on employment status, and another on measures to increase transparency in the labor market — with the government committing to define ‘working time’ for flexible workers who find jobs through apps or online “so that they know when they should be being paid”.

How to define working time for gig economy workers who may be simultaneously logged onto multiple apps has been a bone of contention in legal challenges in this area. So the government providing clarity would certainly be welcome. Though how exactly it will clear up that issue when platforms and apps can be so variable remains unclear.

Discussing the overall reform plan, Sean Nesbitt, a litigator on employment issues at law firm Taylor Wessing, told us: “The government is looking to make a big statement about their commitment to reforming and making fit for purpose modern work for the 21st century but, although there’s a broad commitment and a big statement, there’s not too much detail as to what they’re proposing.

“I don’t think they are booting it into the long grass… I think there’s still a desire there to make a large correction. I don’t think it’s necessarily a big change but a large correction to make sure the market understands how work is to be run in the UK.

“But I also think that, as is characteristic of this government, they’re cautious about rocking the boat and they’re trying to build consensus — so four separate consultations is a way of managing the risk that they take too strong a position and can’t deliver it.”

“Keeping up momentum is good,” he added. “But it’s hard for business to judge when implementation will occur and precisely what.”

Also today the government said it will work with industry “over the coming months” to look at ways to encourage the development of online tools for self-employed people — to “come together and discuss issues that are affecting them”. So more details should emerge soon.

While the full implications of the reform are not yet clear, Nesbitt believes case law gives a strong steer — perhaps especially in the instance of Uber, given that judges in Europe have pretty consistently ruled in recent years against the company’s claims that it’s just a tech platform or a dispatching agency.

“It is hard to see the detail of the shape. What we can see is that the government, like Taylor and like the parliamentary committee that made 11 recommendations recently, all intend to keep the three statuses of employee, worker and dependent contractor. So that shape we can see staying,” he told TechCrunch.

“There is then intended to be clarification as to how you tell the difference. That isn’t clear what that clarification will look like but I believe it will be based on existing case law — including of course, notably, the Uber litigation.”

“I feel there is quite a lot of certainty around what the determining features of those three [employment] statuses already are,” he added. “Where I think the really useful piece could come is if the government regulates to define what working time is for platforms. They say they’re going to.”

Nesbitt points out that many platforms don’t accept the view that time a worker spends logged onto their app and waiting constitutes ‘working’. So if the government were to legislate on it, it could help inject a little more certainty into the gig economy — for players on both sides.

“The government could find a way forward and say well it isn’t necessarily being logged on that’s the determining feature — you have to be actively working or at least committed to the exclusion of other opportunities,” Nesbitt suggested. “So they could find a way to do it — but it’s not clear when and how they’re going to do it.”

“The judge in the Uber case said… working time is when the driver for Uber is logged onto the app and is available for a ride. Now lots of other apps — your Deliveroos, your JustEats, your healthcare or beauty services apps, catering apps — will say obviously if they’re logged onto five of us, being logged on or available on its own can’t be working,” he continued.

“If you’re on JustEat or Deliveroo or a restaurant’s own waiting app, you’re not doing anything and you’re not excluding the others — especially if their terms of service don’t punish you for logging out or for not taking a job.

“It’s quite possible the government could legislate to say… it isn’t necessarily being logged on that’s the determining feature. You have to be actively working or at least committed to the exclusion of other opportunities. And I think that would enable both views to be upheld.”

“The judge in Uber basically said the key reason I say that being logged on for Uber counts as working time is essentially that they are so dominant in the market that it makes it very hard to take any other jobs without risk of falling foul of their benching provisions that log you out if you don’t take jobs. And because they are so dominant. Where there is more competition it may be that logging on is not to be considered working time,” he added.

The government’s timeframe for running its four consultations and firming up the shape of the reform isn’t clear. But such consultations rarely take less than three months — if not six.

By which time the next round of Uber’s appeal against the 2016 tribunal ruling will have reached the UK Court of Appeal, and there will likely be more case law for the government to draw on to feed its thinking.

“What I don’t see in the government press release is any attempt to short circuit or override the litigation process,” added Nesbitt. “It’s almost as if this consultation process is designed to run in parallel to the court process — the sort of privatized testing of what the law is that Uber and the unions are engaged in.”

So don’t expect a more finely detailed employment law reform plan to emerge before fall.

Twitter accused of dodging Brexit botnet questions again

Once again Twitter stands accused of dodging questions from a parliamentary committee that’s investigating Russian bot activity during the UK’s 2016 Brexit referendum.

In a letter sent yesterday to Twitter CEO Jack Dorsey, DCMS committee chair Damian Collins writes: “I’m afraid there are outstanding questions… that Twitter have not yet answered, and some further ones that come from your most recent letter.”

In Twitter’s letter — sent last Friday — the company says it has now conducted an analysis of a dataset underpinning a City University study from last October (which had identified a ~13,500-strong botnet of fake Twitter accounts that had tweeted extensively about the Brexit referendum and vanished shortly after the vote).

And it says that 1% of these accounts were “registered in Russia”.

But Twitter’s letter doesn’t say very much else.

“While many of the accounts identified by City University were in violation of the Twitter Rules regarding spam, at this time, we do not have sufficiently strong evidence to enable us to conclusively link them with Russia or indeed the Internet Research Agency [a previously identified Russian troll farm],” it writes.

Twitter goes on to state that 6,508 of the total accounts had already been suspended prior to the study’s publication (which we knew already, per the study itself) — and says that more than 99% of these suspensions “specifically related to the violation of our spam policies”.

So it’s saying that a major chunk of these accounts were engaged in spamming other Twitter users. And that — as a consequence — tweets from those accounts would not have been very visible because of its anti-spam measures.

“Of the remaining accounts, approximately 44.2% were deactivated permanently,” it continues, without exactly explaining why they were shuttered. “Of these, 1,093 accounts had been labelled as spam or low quality by Twitter prior to deletion, which would have resulted in their Tweets being hidden in Search for all users and not contributing to trending topics in any way.

“As we said in our previous letter, these defensive actions are not visible to researchers using our public APIs; however they are an important part of our proactive, technological approach to addressing these issues.”

Twitter’s letter writer, UK head of public policy Nick Pickles, adds that “a very small number of accounts identified by City University are still active on Twitter and are not currently in breach of our rules”.

He does not say how small.

tl;dr: a small portion of this Brexit botnet is actually still live on Twitter.

Twitter’s letter runs to two pages. The second points to a December 2017 Brexit bot study by researchers at the Oxford Internet Institute, which also relied on data from Twitter’s public streaming API, and which Twitter says “found little evidence of links to Russian sources” (this, literally right after it had disparaged research conducted by “researchers using our public APIs”). Either way, Collins is clearly not won over by the quantity or the quality of the intelligence being so tardily provided.

Cutting to the chase, he asks Twitter to specify how many of the accounts “were being controlled from agencies in Russia, even if they were not registered there”.

He also wants to know: “How many of the accounts share the characteristics of the accounts that have already been identified as being linked to Russia, even if you are yet to establish conclusively that that link exists.”

And he points out that Twitter still hasn’t told the committee whether the 13,493 suspected bot accounts were “legitimate users or bots; who controlled these accounts, what the audience was for their activity during the referendum, and who deleted the tweets from these accounts”.

So many questions, still lacking robust answers.

“I’m afraid that the failure to obtain straight answers to these questions, whatever they might be, is simply increasing concerns about these issues, rather than reassuring people,” Collins adds.

We reached out to Twitter for a response to his letter but the company declined to provide a public statement.

Last week, after Collins had accused both Twitter and Facebook of essentially ignoring his requests for information, Facebook wrote to the committee saying it would take a more thorough look into its historic data around the event — though how comprehensive that follow-up will be remains to be seen. (Facebook has also said the process will take “some weeks”, giving itself no firm deadline.)

Both companies also disclosed some information last month, in response to a parallel Electoral Commission probe that’s looking at digital spending around the Brexit vote — but they only revealed details of paid-for advertising by Russian entities that had targeted Brexit (which they said amounted to ~$1k and ~$1, respectively).

So they made no attempt to cast their net wider and look for Russian-backed non-paid content being freely created and spread on their platforms.

To date Collins has reserved his most withering criticism for Twitter over this issue, but he has warned that both companies could face sanctions if they continue to stonewall his enquiry.

The DCMS committee is traveling to Washington next month for a public evidence session that Facebook and Twitter reps have been asked to attend.

It’s clearly hoping that proximity to Washington — and the recent memory of the companies’ grilling at the hands of US lawmakers over US election-related disinformation — might shame them into fuller co-operation.

Meanwhile, the UK’s Intelligence and Security Committee, which is able to take closed door evidence from domestic spy agencies, discussed the security threat from state actors in its annual report last year.

And although its report did not explicitly identify Brexit as having been a definitive target for Russian meddling, it did raise concerns around Russia’s invigorated cyber activities and warn that elections and referenda could be targets for disinformation attacks.

“State actors are highly capable of carrying out advanced cyber attacks; however, their use of these methods has historically been restricted by the diplomatic and geopolitical consequences that would follow should the activity be uncovered. Recent Russian cyber activity appears to indicate that this may no longer be the case,” the committee wrote, citing the hacking of the DNC and John Podesta’s emails as indications that Russia is adopting a “more brazen approach to its cyber activities”.

Evidence it took from the UK’s GCHQ and MI5 spy agencies is redacted in the report — including in a section discussing the security of the UK’s political system.

Here the committee writes that cyber attacks by hostile foreign states and terrorist groups could “potentially include planting fake information on legitimate political and current affairs websites, or otherwise interfering with the online presence of political parties and institutions”.

Another redacted section of evidence from GCHQ then details how the agency “is already alert to the risks surrounding the integrity of data”.

The ISC goes on to speculate that such state attacks could have a variety of motives, including:

  • generally undermining the integrity of the UK’s political processes, with a view to weakening the UK Government in the eyes of both the British population and the wider world;
  • subverting a specific election or referendum by undermining or supporting particular campaigns, with a countervailing benefit to the hostile actor’s preferred side;
  • poisoning public discourse about a sensitive political issue in a manner that suits the hostile state’s foreign policy aims; or
  • in the case of political parties’ sensitive data on the electorate, obtaining the political predilections and other characteristics of a large proportion of the UK population, thereby identifying people who might be open to subversion or political extremism in the hostile actor’s interests.

“The combination of the high capability of state actors with an increasingly brazen approach places an ever greater importance on ensuring the security of systems in the UK which control the Critical National Infrastructure. Detecting and countering high-end cyber activity must remain a top priority for the government,” it adds.

In related news, this week the UK government announced plans to set up a dedicated national security unit to combat state-led disinformation campaigns.

Featured Image: NurPhoto/Getty Images