Artificial Intelligence (AI)

Facebook does what it was built for

"Dumb Fucks" is what Mark Zuckerberg called his first couple of thousand "friends" who lavishly shared their personal lives on Facebook back in 2004. Listening to the mostly unqualified questions from grey-haired members of Congress during his hearing at the US Senate and looking at his relieved smile, he might have thought the same thing again about the seasoned politicians in front of him.

To me, the hearing is the world's most outright and direct example of the impressive gap between the unlimited possibilities of technological evolution and the blatant lack of understanding by the masses.

Here are five reasons why this situation should not come as a surprise.

  1. Internet = Surveillance

The world-renowned Internet security expert Bruce Schneier once said that surveillance is the business model of the Internet. Just like a pipe of fresh water needs ongoing monitoring for obvious health reasons and a pipe of sewage needs regular maintenance to work properly, pipes of Internet traffic are subject to sophisticated monitoring. The worldwide web is founded on the TCP/IP protocol, which serves as a global standard for information exchange. These standards were developed by DARPA, the US military research agency, in the late 1970s and were declared the standard for all military computer networking in March 1982. Monitoring traffic from emails to images, from video clips to attached PDFs and from voice messages to animated GIFs has hence been an intrinsic part of the Internet from day one, and for good reason. Balancing the exponentially growing volume of traffic is no easy task; keeping out viruses and child pornography as well as protecting our Internet banking transactions are other good reasons. Keeping control over an entire population to stabilize the political system in power is the most fundamental of all reasons. Deep packet inspection (DPI) has been one of several standard procedures for scanning and analyzing any information that travels through the Internet, and it does exactly what its name suggests. It allows governmental organizations to open and inspect in detail any parcel of information traveling from A to B. While the worldwide web is by far the most fantastic medium that humanity has ever come up with, it was never meant to be a private space.

And then we'll tell them that their privacy will be respected…

  2. Ten-year gap

I once attended a conference on risk management and international regulations where a leading researcher explained why most top athletes hardly ever get caught in doping and drug tests: the laboratories that develop new (performance-enhancing) drugs are at least ten years ahead of the labs that test the athletes. We can safely assume that the same gap applies between the information technology developed by US military organizations and the common knowledge of the world population. Ten years from now, most of us will start to understand what is technically possible, and therefore being done, through the collection of our own data today.

  3. Facebook does what it was built for

Once the US intelligence services understood the potential of Facebook as a gargantuan supplier of valuable information, the CIA became an early investor in Facebook through its venture capital firm In-Q-Tel. The unprecedented rise of Facebook to become the world's biggest social network, data supplier of private information and enabler of behavioral economics analysis is hence perfectly in line with the hegemony of the United States of America. Advanced knowledge by politicians and the population of what can be done through sophisticated data analytics would be counterproductive.

  4. We all agree

This is where the magic lies. Imagine a conversation between two Gestapo agents in 1943, both of whom work day and night to spy on suspected individuals and meticulously collect information about them. If one told the other that 70 years from now, billions of people would tell us everything they do, send us photos without being asked and report to us with whom they hang out and where, his friend would burst out laughing. Yet, this is exactly what happened. Part of Facebook's ever more complex terms of use reads: "By posting member content to any part of the Web site, you automatically grant, and you represent and warrant that you have the right to grant, to Facebook an irrevocable, perpetual, non-exclusive, transferable, fully paid, worldwide license to use, copy, perform, display, reformat, translate, excerpt and distribute such information and content and to prepare derivative works of, or incorporate into other works, such information and content, and to grant and authorise sublicenses of the foregoing." And when it comes to its privacy policy, it states that "Facebook may also collect information about you from other sources, such as newspapers, blogs, instant messaging services, and other users of the Facebook service through the operation of the service (e.g. photo tags) in order to provide you with more useful information and a more personalized experience. By using Facebook, you are consenting to have your personal data transferred to and processed in the United States." In the name of "free" entertainment and connecting with our friends, we politely and voluntarily do the tedious job formerly done by thousands of intelligence agents. If we agree to use a product for which we don't pay, we agree that we are the product being used.

  5. Out of sight, out of mind

The many advantages that come with the digital transformation of our lives also come with pitfalls. The invisibility and intangibility of data is probably the creepiest of them all. We are inherently visual animals, relying disproportionately on our eyes to construct our own reality. As soon as we cannot see, we get scared (walking through a forest by day or by night are two incredibly different experiences). And because we don't see data and we don't understand what self-learning algorithms can predict using our data, we should indeed be scared. As soon as political propaganda becomes visible, it loses its effectiveness. By the same logic, we should do whatever it takes to keep an open dialogue on the subject and to force the world's data-driven media behemoths to tell us how our data is being used.


Women cook, men play tennis, AI says

Junk in, junk out is an old way of describing the relationship between corrupt and erroneous data being fed to a computer and the useless results being delivered in response. If you deliberately fill an Excel column with a couple of random numbers that you just made up, you know that you cannot trust the sum or average function at the bottom. This common sense is now being reconfirmed by sophisticated machine learning algorithms.

Machine learning refers to computer programs that autonomously feed themselves from large amounts of data without human supervision. This is how a pattern-recognizing algorithm scrolling through hundreds of thousands of images concludes that people standing in a kitchen are more likely to be women than men. Someone with a gun or a person coaching a sports team, on the other hand, is more likely to be a man.

We are about to find out that letting AI loose on our own data is a great way of not only breeding sexist views but also amplifying gender-biased perceptions. While this comes as a disturbing surprise to many AI researchers, it should not. It is the same junk in, junk out rule applied at another level.

We now have two choices. Either we "correct" the algorithms by hard-coding gender neutrality (i.e. a fifty-fifty chance between man and woman for each picture of a person cooking, shooting, shopping or playing tennis), or we accept the biased output as a "feature" that makes cold and rational AI systems "more human".
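As a purely illustrative aside, the first of these two choices could be pictured as a post-processing step that overrides a model's gendered scores. The Python sketch below assumes a hypothetical classifier output; none of the labels or numbers come from a real system.

    # Hypothetical sketch: "hard-coded" gender neutrality as a post-processing step
    # over a fictional classifier's output; labels and scores are made up.
    def neutralize_gender(predictions, gendered_labels=("woman", "man")):
        # Redistribute the probability mass of the gendered labels evenly between them.
        corrected = dict(predictions)
        mass = sum(corrected.get(label, 0.0) for label in gendered_labels)
        for label in gendered_labels:
            corrected[label] = mass / len(gendered_labels)
        return corrected

    # Biased output of a fictional image classifier for a photo of someone cooking:
    raw = {"woman": 0.84, "man": 0.16}
    print(neutralize_gender(raw))  # {'woman': 0.5, 'man': 0.5}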

Should we let AI systems learn by themselves and use them as a mirror for our many biases, or should we feed them with gender-neutral, morally desirable and ethically acceptable values to help us evolve?

Doing the former will help us learn about ourselves but trigger a downward spiral of bias amplification. Doing the latter will help us change our biased perceptions but leave the burden of programming global ethical standards to a few god-like tamers of algorithms. We should take the time to discuss this before economic greed outsources the decision to AI.

 


Why we need to talk about responsible AI

Excellent video enabling an urgently needed conversation about the responsible use of artificial intelligence:

What do you think about it? Share your thoughts and opinions on the Declaration of Montreal for responsible AI. The Declaration of Montreal aims to foster a public conversation about the potential and responsibilities of AI. Hopefully it becomes a binding, international legal framework that ensures that the above fiction stays fiction.


Is capitalism the wrong OS for AI?

Today I attended the highly interesting and utterly necessary event "Responsible AI", a two-day forum on the socially responsible development of Artificial Intelligence organized by Université de Montréal. The prolific exchange of knowledge, wisdom and opinions around AI, and the profound social and ethical responsibilities that come with it, emphasizes Montreal's ambition and seriousness about becoming a leading hub of AI.

The unknown variables of the mid- and long-term impact of AI on job security, privacy, justice, social equality and ecology are so far-reaching that questions during the first day of the forum largely outnumbered answers, which to me is a healthy sign of a constructive dialogue. Being able to ask the right question is more valuable than offering an easy answer that has not been thought through to the end.

So this is one of the many questions I wrote into my notebook while listening to leading AI scientists: Is capitalism the wrong Operating System for Artificial Intelligence? While every press conference by a tech CEO assures us that their goal is to make life better, connect the world, wipe out poverty and save the planet, it is easy to forget that all leading AI multinationals are publicly listed companies. In a value-proposition world driven by quarterly reporting and C-level compensation mostly linked to short-term profits, corporate social responsibility is regularly perceived as a profit-decreasing waste of shareholders' investment. If we are serious about AI, and if we have the collective capacity to learn from self-learning algorithms, we should consider baking into each AI algorithm a triple-bottom-line approach that inherently pursues ecological, social and economic objectives. Or do you think we can rely on capitalism as the adequate OS for AI?
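Purely as an illustration of what "baking in" a triple bottom line could mean, here is a minimal Python sketch of a decision objective that weighs social and ecological costs against economic gain; all terms, weights and numbers are hypothetical and not drawn from any real system.

    # Hypothetical sketch of a triple-bottom-line objective: reward economic gain,
    # penalize social and ecological costs. A purely profit-driven system would
    # effectively set w_social = w_ecological = 0.
    def triple_bottom_line(economic_gain, social_cost, ecological_cost,
                           w_social=1.0, w_ecological=1.0):
        return economic_gain - w_social * social_cost - w_ecological * ecological_cost

    # Two hypothetical decisions an AI system could recommend:
    decisions = {
        "maximize engagement at any cost": dict(economic_gain=10.0, social_cost=6.0, ecological_cost=1.0),
        "balanced recommendation": dict(economic_gain=7.0, social_cost=1.0, ecological_cost=0.5),
    }
    for name, terms in decisions.items():
        print(name, triple_bottom_line(**terms))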



On the accelerated externalisation of Intelligence

Forget about the mindless killer robot walking down the street and shooting helpless humans. Any discussion on Artificial Intelligence (AI) around this scenario utterly misses the point and the real urgency.

Instead of fueling such dystopian thoughts and filling the unknown with fear, we need a rational conversation about what is already being done.

It is about the silent, invisible but progressive erosion of our cognitive superiority to machines. It is about the unprecedented concentration of power in the hands of a few publicly listed tech giants. It's about the alarming vacuum of nonexistent regulations to tame the algorithms and protect democracy. Ultimately, it's about our informational self-determination, as algorithms carve our behavior, desires and needs.

One of the many challenges hindering a meaningful public discussion is the lack of common definitions of Artificial Intelligence and even intelligence itself. To foster a meaningful conversation on the progressive outsourcing and externalization of human intelligence to machines, I developed the Intelligence Matrix.

The Intelligence Matrix is a simple square with the x-axis divided between subconscious and conscious intelligence and the y-axis between internal (human) and external (artificial) intelligence, visualising four types of intelligence:

  1. Automatization (internal and subconscious intelligence)
  2. Skill (internal and conscious intelligence)
  3. Manipulation (external and subconscious intelligence) and
  4. Enhancement (external and conscious intelligence)

The Intelligence Matrix allows a simplified yet holistic view of the ongoing shift of intelligence from man to machine. It should be seen as a comprehensive framework enabling constructive discussions and a better understanding of the opportunities and threats of the fast-paced advancement of what we currently describe and perceive as "Artificial Intelligence". For further details on the Intelligence Matrix, please read this paper (PDF): https://beatrichert.net/wp-content/uploads/2023/05/Reflections-on-the-accelerated-externalisation-of-intelligence_Beat-Richert_July2017.pdf
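As a small illustrative aside, the four quadrants listed above can also be written down as a simple lookup. The Python sketch below only restates the matrix as described; the axis labels are chosen here for readability and are not part of the original framework's terminology.

    # Hypothetical sketch: the Intelligence Matrix as a lookup from its two axes
    # (internal/external, subconscious/conscious) to the four quadrants.
    INTELLIGENCE_MATRIX = {
        ("internal", "subconscious"): "Automatization",
        ("internal", "conscious"): "Skill",
        ("external", "subconscious"): "Manipulation",
        ("external", "conscious"): "Enhancement",
    }

    def quadrant(locus, awareness):
        return INTELLIGENCE_MATRIX[(locus, awareness)]

    # A recommender system nudging users without their awareness would fall here:
    print(quadrant("external", "subconscious"))  # Manipulation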

