By Raúl Illargi of The Automatic Earth
Happy belated new year. Thought I’d sit out a few days, since there wasn’t much news to be expected. And it did pan out that way, other than Trump bogarting the limelight; but then, that isn’t really news either. Anything he says or does triggers the expansive anti-Donald echo chamber into a daily frenzy. And frankly, guys, it’s not just boring, but you’re also continuously providing him with free publicity. At least make him work for some of it.
Then, however, the big microprocessor (chip) security ‘flaw’ was exposed. And that’s sort of interesting, because it concerns the basic architecture of virtually every microchip produced in the past 20 years, even well before smartphones. Now, the first thing you have to realize is that we’re not actually talking about a flaw here, but about a feature. We use that line a lot in a half-joking way, but in this case it’s very much true. As Bloomberg succinctly put it:
All modern microprocessors, including those that run smartphones, are built to essentially guess what functions they’re likely to be asked to run next. By queuing up possible executions in advance, they’re able to crunch data and run software much faster. The problem in this case is that this predictive loading of instructions allows access to data that’s normally cordoned off securely.
Spectre fools the processor into running speculative operations – ones it wouldn’t normally perform – and then uses information about how long the hardware takes to retrieve the data to infer the details of that information. Meltdown exposes data directly by undermining the way information in different applications is kept separate by what’s known as a kernel, the key software at the core of every computer.
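To make the mechanism described in those quotes concrete: the attack works because the speculative load leaves a trace in the cache, and the attacker can later tell which cache line is ‘warm’ by timing accesses. Here is a purely illustrative toy model in Python — not an actual exploit, and the cache model and all names in it are invented for the sketch; a real Spectre attack works at the hardware level, not in an interpreted language:

```python
# Toy model of the Spectre cache side channel. Purely illustrative:
# real attacks exploit actual CPU caches and timing, not a Python set.

SECRET = 42  # hypothetical secret byte the attacker wants to recover


class ToyCache:
    """Models only which cache lines are present ('warm'), nothing else."""

    def __init__(self):
        self.lines = set()

    def flush(self):
        # FLUSH step: evict everything, so every later probe starts cold.
        self.lines.clear()

    def touch(self, index):
        # Any memory access loads its cache line; that side effect persists.
        self.lines.add(index)

    def is_warm(self, index):
        # Stand-in for a timing measurement: warm lines load faster.
        return index in self.lines


def victim_speculative_access(cache):
    # In real Spectre, the CPU performs a secret-dependent load
    # speculatively, past a bounds check, then rolls the result back --
    # but the cache line it touched stays warm.
    cache.touch(SECRET)


def attacker_probe(cache):
    # RELOAD step: probe all 256 possible byte values and see which
    # line is warm; that index is the secret.
    return [i for i in range(256) if cache.is_warm(i)]


cache = ToyCache()
cache.flush()
victim_speculative_access(cache)
recovered = attacker_probe(cache)
```

The point the toy makes is the one in the quote above: the speculative work is architecturally undone, but its microarchitectural side effect (which line is cached) survives, and timing turns that side effect into data.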
As I said: feature, not flaw (or two really, Spectre and Meltdown). And that makes one wonder: fixing a flaw is one thing, but how do you fix a feature? Several quotes claim that software patches would mean the performance speed of affected chips (that would be all of them) would go down by 25-30% or so. Which is bad enough, but the problem is not -limited to- software. And patching up hardware/firmware issues with software can’t be easy, if it’s viable at all.
That would make one suspect that even if a software patch can suppress this feature, as long as the architecture doesn’t change, it can still function as a backdoor. Apple may say there are no known exploits of it, but would they tell us if, for instance, intelligence services used it? Or other parties that cannot be labeled ‘hackers’?
All that ties in seemingly seamlessly with Apple shareholders expressing their worries about the effects of the products their investments pay for. Though you might want to wonder if their worries would be the same if Apple shares plummeted tomorrow.
…activist investor Jana Partners and the California State Teachers’ Retirement System urged Apple to create ways for parents to restrict children’s access to their mobile phones. They also want the company to study the effects of heavy usage on mental health.
There are a few things off with this. First, there’s the risk of these kids’ iPhones being hacked through the flaw, feature, backdoor mentioned above. That’s potentially a lot worse for them. Then, there’s the obvious fact that parents can simply take their children’s phones away; there’s no better way to restrict access. Why should that be Apple’s responsibility?
But most of all, children are addicted to their phones because of the content, and Apple, though they would wish it were different, are not the major content providers. That role is played by Alphabet/Google/YouTube and Facebook/Instagram, and to a lesser extent Snapchat and Twitter. And they are a much bigger threat than Apple is.
There has been a lot of talk about hate speech, fake news and election interference over the past year and change -and it won’t stop anytime soon, because it’s political gold dust. Germany, France, the UK, US and a whole slew of smaller nations have all tried to implicate Russia in all of these issues, and for good measure opposition parties to incumbent governments have been fingered too.
There are perhaps very obvious examples of all three topics, but the issue as a whole is far from clear. In Germany, Twitter accounts of the Alternative für Deutschland party have been blocked, but given that they now have seats in parliament, that is a tricky problem. Likewise, much of what the US MSM has been writing about Trump and his organization has proven unsubstantiated, and could therefore be labeled fake news. It isn’t so labeled to date, other than by the president himself, but who draws the line, and where?
The US election interference narrative is shaky, since it largely appears to rely on $100k or so in Facebook ads bought by some mysterious party, ads that are supposed to have been much more effective than many billions of dollars in campaign funding. The kind of thing that makes you think: if the Russians are so much better at this than we are, we might as well hand it all over to them right now.
The main problem with the election interference stories is that none of it has ever been proven. Not even the $100k+ in Facebook ads; they might just as well have originated in Langley and we only have Langley’s word for any alternative claims. Overall, defining what is hate speech and what is fake news seems to come down far too much to opinions rather than facts, and that has us sliding down a supremely slippery slope, not exactly a place to build solid policy on.
So how and why can Facebook and Google be trusted to provide objective assessments of what is fake news and hate speech and what is not? That is what they are being tasked with at present. They hire tens of thousands of people to do that ‘job’. But what are these people’s qualifications? How do these companies make sure political bias is kept out of the process? Do they even want to keep it out, or do Zuckerberg, Brin and Schmidt want to confirm their own bias?
It’s hard to see how the decision making process, fake vs real news, hate speech, political meddling, will not inevitably become one guided and goaded by intelligence services, because they are the ones who claim to have both the knowledge and the evidence upon which these decisions must be based. But US intelligence is not politically neutral, and they don’t share the sources of their ‘evidence’.
Still, none of that is the main problem here either. Though we’re getting closer.
Over the holidays, I saw a movie in which there was a teachers’ Christmas party at some high school. All the teachers were bored and sat or stood in silence looking at nothing. And I realized that kind of scene no longer exists today. Though the movie was just 10-15 years old, there have been some profound changes. At a party like that, or at a bus stop, in a bus or train, a waiting room or even a family dinner, everyone is now glued to their smartphone. Even people walking down the street are. And those driving down the street.
What all these people seem to do most is look at their Facebook/Instagram/Snapchat etc. accounts. And apart from the profound changes to human interaction in public spaces, there are other things that deserve attention. Like for instance that while you think you’re having private conversations with your friends and family, there’s nothing private about it. Everything you tell your ‘friends’ de facto becomes property of the owners of the app you’re sharing it on.
When your friends read what you just wrote, they see not only that but also ads that the app displays alongside it. That means Facebook makes money from your friends’ attention to your words. Since Facebook reached 2 billion active users in 2017, that adds up. And they don’t have to do anything for that, other than keep the channels open.
But that is not the worst part. Facebook not only makes money off your contact with family and friends, something most people would probably find comparatively innocent, it also ‘spies’ on you…