‘To err is human, but to make a real mess you need a computer.’

Pressenza recently reported how algorithms used by social media were creating fake news without human intervention.

Not the only cyber cock-up.

‘Instagram uses “I will rape you” post as Facebook ad in latest algorithm mishap’: ‘Instagram used a user’s image which included the text “I will rape you before I kill you, you filthy whore!” to advertise its service on Facebook, the latest example of social media algorithms boosting offensive content.’ The Guardian

And again at Facebook, this time with an experiment widely reported in the media as an AI threat, which appears (according to Facebook, that is) to have been exaggerated: ‘Facebook shut down a pair of its artificial intelligence robots after they invented their own language.
Researchers at Facebook Artificial Intelligence Research built a chatbot earlier this year that was meant to learn how to negotiate by mimicking human trading and bartering.

‘But when the social network paired two of the programs, nicknamed Alice and Bob, to trade against each other, they started to learn their own bizarre form of communication. The chatbot conversation “led to divergence from human language as the agents developed their own language for negotiating,” the researchers said.’ The Telegraph

Who feels clear enough about this event to judge how dangerous it could be for robots to start communicating in a language that only they understand?

Technology is entering a new stage that escapes the moral codes attempting to regulate human behaviour, because very few people (if any) can really understand the consequences. This leads people to switch off and ‘leave it to the experts’. But who are The Experts?
Many are celebrating the end of science as we know it. Thanks to Big Data, it is no longer necessary to propose a hypothesis, collect sufficient data, perform a statistical analysis and decide whether the hypothesis was proven or disproven.

In ‘Facebook’s war on free will: How technology is making our minds redundant’, Franklin Foer describes in great detail how the analysis of huge amounts of data collected by social media is mechanically finding patterns and reaching conclusions about human behaviour (though not exclusively) without the intervention of minds:
‘For the entirety of human existence, the creation of knowledge was a slog of trial and error. Humans would dream up theories of how the world worked, then would examine the evidence to see whether their hypotheses survived or crashed upon their exposure to reality. Algorithms upend the scientific method – the patterns emerge from the data, from correlations, unguided by hypotheses. They remove humans from the whole process of inquiry. Writing in Wired, Chris Anderson, then editor-in-chief, argued: “We can stop looking for models. We can analyse the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”’ The Guardian
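To make the contrast concrete, here is a minimal sketch of the hypothesis-free approach Anderson describes, in Python with invented data and column names: nothing in it asks a question first; it simply reports whichever variables happen to correlate most strongly.

```python
# A hedged illustration of hypothesis-free "pattern mining".
# All data and column names are invented for this sketch.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# A made-up behavioural dataset, of the kind a platform might hold.
data = pd.DataFrame({
    "hours_online": rng.normal(3, 1, n),
    "posts_per_day": rng.poisson(4, n).astype(float),
    "ads_clicked": rng.poisson(2, n).astype(float),
    "friends": rng.normal(200, 50, n),
})
# Plant one real relationship so the miner has something to find.
data["ads_clicked"] += 0.8 * data["hours_online"]

# No hypothesis: compute every pairwise correlation and report the
# strongest, with no prior theory about what the data should show.
corr = data.corr().abs()
np.fill_diagonal(corr.values, 0)        # ignore self-correlations
strongest = corr.unstack().sort_values(ascending=False)
print(strongest.drop_duplicates().head(3))
```

The point is not that this is bad statistics; it is that no mind had to ask a question anywhere in the loop. The ‘finding’ is simply whatever correlates.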

The analysis of faces by algorithms can detect people’s sexual orientation, according to research from Stanford University reported by The Economist:
‘AI’s power to pick out patterns is now turning to more intimate matters. Research at Stanford University by Michal Kosinski and Yilun Wang has shown that machine vision can infer sexual orientation by analysing people’s faces. The researchers suggest the software does this by picking up on subtle differences in facial structure. With the right data sets, Dr Kosinski says, similar AI systems might be trained to spot other intimate traits, such as IQ or political views. Just because humans are unable to see the signs in faces does not mean that machines cannot do so.’
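The study’s models and data are not reproduced here; the following is only a generic, hypothetical sketch of the class of technique described, a standard classifier trained on numeric features extracted from face images. The ‘embeddings’ and labels below are random placeholders, not real data.

```python
# A generic, hypothetical sketch of the class of technique described:
# a standard classifier trained on numeric features derived from faces.
# The "embeddings" and labels are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 128))    # stand-in for 128-d face embeddings
y = rng.integers(0, 2, size=500)   # stand-in for a binary personal trait

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On real embeddings, accuracy reliably above chance would mean the
# trait leaves a statistical trace in faces that humans cannot see.
print("held-out accuracy:", clf.score(X_test, y_test))
```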

As expected, the LGBT community found this problematic. The results are not 100% accurate, but in future employers and authorities may use these algorithms to make judgements about people.
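One reason ‘not 100% accurate’ matters so much is the base-rate problem: applied to a whole population, even an apparently accurate classifier can flag more people wrongly than rightly when the trait is uncommon. A back-of-the-envelope sketch, with purely illustrative numbers:

```python
# Back-of-the-envelope base-rate arithmetic. All numbers illustrative.
population = 1_000_000
base_rate = 0.05       # assume 5% of people have the trait
sensitivity = 0.90     # the system flags 90% of those who do
specificity = 0.90     # and clears 90% of those who do not

true_positives = population * base_rate * sensitivity               # 45,000
false_positives = population * (1 - base_rate) * (1 - specificity)  # 95,000

precision = true_positives / (true_positives + false_positives)
print(f"Of everyone flagged, only {precision:.0%} actually have the trait.")
```

With these numbers, roughly two out of three people flagged would be flagged wrongly, which is exactly the danger when employers or authorities treat such outputs as judgements about individuals.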

Intentionality cannot be switched off

Human consciousness perceives the world as well as itself, and structures those perceptions into a model of ‘reality’ by comparing the new data with memory and adding elements from the emotional tone of the moment and a deeper, omnipresent direction: away from pain and suffering, and towards happiness and meaning. Unlike mechanistic (e.g. Newtonian) and probabilistic (e.g. chaotic, quantum) processes, this intentionality is ‘pulled’ by an image launched into the future. In spite of all the violence and dehumanisation that reign in our present system, where the prevailing ideology promotes individualism and selfishness, there is an intention for humanity to move towards love and compassion. It may not be visible in the media, but it is in communities and in responses to disasters. So, when scientists propose a hypothesis they are not simply trying to see what correlates with what; they are seeking a way to emerge from pain and suffering. The hypothesis may turn out to be wrong, but in that case the scientist will look for other ways to solve the problem. Because human beings care.

Even when, in some people, this structuring leads to violence, we can find some element of self-preservation or compassion towards themselves or their own kind.

No doubt the mechanical analysis of Big Data will throw up amazing insights into different correlations, but there is no guarantee that any of them will serve the purpose of overcoming humanity’s essential problems. Because machines do not care; only those who programme them may. And if the main intention is to make money, that is what the machines will seek.
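A toy illustration of that last point, with invented items and scores: the same ranking code, handed two different objectives, ‘seeks’ entirely different things, and nothing in the machine prefers one objective over the other.

```python
# The same ranking machinery given two different objectives (toy data).
# Item names and scores are invented; only the programmer's choice of
# objective decides what the system "seeks".
items = [
    {"name": "outrage piece",  "ad_revenue": 9.0, "reader_wellbeing": 2.0},
    {"name": "health guide",   "ad_revenue": 3.0, "reader_wellbeing": 9.0},
    {"name": "celebrity feud", "ad_revenue": 7.0, "reader_wellbeing": 3.0},
]

def rank(feed, objective):
    """Order a feed by whatever objective the programmer supplies."""
    return sorted(feed, key=objective, reverse=True)

for_money = rank(items, lambda item: item["ad_revenue"])
for_people = rank(items, lambda item: item["reader_wellbeing"])

print("optimising revenue:  ", [item["name"] for item in for_money])
print("optimising wellbeing:", [item["name"] for item in for_people])
```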

Medical analysis of Big Data is already showing great promise, so this is not a case of throwing the baby out with the bathwater.

Automated killing machines are programmed to kill. A soldier, trained to kill, is still capable of feeling compassion; a killer robot is not. We have several examples of people preventing a nuclear holocaust where a machine would not have done so. Such is the case of the recently deceased Stanislav Petrov, the Soviet military officer who decided to dismiss as a false alarm a computer warning that the US had launched a nuclear attack on the Soviet Union. Intentionality includes in its construction of ‘reality’ what others may be processing: ‘I think, therefore you think’; ‘I care, therefore you may care’. It may be wrong, but there remains a choice.

Moreover, the assumption that the information being evaluated by algorithms is reliable is incorrect. People do not enter their true thoughts and intentions into social media; rather they (we) input a partial and biased self-image, shaped by certain social trends and intentions. And as we have seen earlier, even the most sophisticated algorithms are capable of making mistakes.

Perhaps the paranoid fear that robots will take over the world is still too unlikely to contemplate, as it would mean they could develop intentionality. Since for the time being only human intention gives direction to Big Data, we must concentrate on eliminating war, revenge and greed, all rooted in fear, and create international agreements on putting data at the service of overcoming pain and suffering. The dictatorship of Big Data is not a cyber abstraction: the people who manage it have names, faces and intentions, and ordinary people can reach out to them to demand a humanised use of our information.