Responsible AI

Hey Google, what's next with coronavirus COVID-19?

One thing that strikes me in this unprecedented media hype around the coronavirus is the suspicious silence of Google and Facebook. Both multinationals are sitting on incredibly complete and accurate mountains of data. Remember, sinister use of such data can win elections or trigger mass murders.

I can think back to several Google sales pitches I attended many years ago, with sales representatives priding themselves on the insights they could extract from aggregated search results.

Here is Mr. Eric Schmidt back in 2012 (yes, eight years ago) explaining the stunning capabilities of Google:

"Through centralized information, there are a lot of things that you can do that you couldn't do before. We can alert the health care professionals and get them in the cycle six months ahead [of the flu]."

Eric Schmidt, Google CEO, 2012

Hey Google, why aren't you sharing your knowledge? Hey Mr. Sundar Pichai, why are you not informing the public about the state of COVID-19 propagation? Dear Larry and Sergey, please don't be evil. If sharing were caring (teilen ist heilen, "sharing is healing"), Google would use its data to predict future coronavirus outbreaks just like it visualizes traffic jams on Google Maps.


Meat Porn and Sex Bots

As a child of convinced vegetarian parents, I always had an awkward feeling in the supermarket when I left the fresh and colorful food section behind me, strolled through the dairy department and finally arrived at the somewhat hidden butchery section. Often, the meat section would be at a dead end or even in a partly separated area. Every time, an eerie mix of shame and guilt overcame me and forced me back to the cozy section with chocolate bars and cereals.

Years later, I realized that the architecture of a video shop was just as strategic as the layout of a supermarket. The fresh and colorful fruits were the freshly released family films, the dairy section would be the dramas and series, whereas the meat section was the hidden room full of porn. The strange mix of shame and revolting feelings filled my boy's chest again. I did not have the courage to step into the porn room and sought emotional relief in the family comedies section instead.

Reading through an interesting photo blog post on a Chinese manufacturing facility producing sex robots with sophisticated AI software, the emotion-driven boy in me woke up again, and the vegetarian father I have become provided the motivation to create this photo mashup.

 

The nascent industry of sex robots is peculiarly fascinating. It combines our eternal desire to create (artificial) life through tireless innovation with the primitive urge to use our reproductive apparatus. A customizable sex doll with body temperature and contextual conversation is the output of the two opposing poles of Homo sapiens. It is the combination of the uncontrolled, rudimentary impulse to copulate with the incredibly sophisticated science of robotics and artificial intelligence.

Unfortunately, the fascinating coupling of these extremes also reinforces our disturbing ignorance of how we treat other living creatures on our planet. While we put some of the smartest brains to work to fake life with a talking plastic sex doll, we torture and kill billions of animals in the most abominable ways. While we are incredibly busy trying to create life, we are at the very peak of destroying lives. We successfully ignore vast scientific proof that animals can feel, communicate and suffer, in the name of the industrial production of cheap meat. Instead of opening the gates of the slaughterhouse, we open the doors of IT labs pretending to create "feelings" based on algorithmic calculations.

It is the paradox between the cruel and systematic killing of lives and the eternal desire to create life. The clash of pitiless indifference towards animals' feelings against the sophisticated, AI-driven "feelings" of a sex bot. It is this very gap that feels so wrong in my boy's heart.

 

Photos: Aleksandar Plaveski


Facebook does what it was built for

"Dumb fucks" is what Mark Zuckerberg called his first couple of thousand "friends" who lavishly shared their personal lives on Facebook back in 2004. Listening to the mostly unqualified questions from grey-haired members of Congress during Mark Zuckerberg's hearing at the US Senate and looking at his relieved smile, he might have thought the same thing again about the seasoned politicians in front of him.

To me, the hearing is the world?s most outright and direct example of the impressive gap between the unlimited possibilities of technological evolution and the blatant lack of understanding by the masses.

Here are five reasons why this situation should not come as a surprise.

  1. Internet = Surveillance

The world-renowned Internet security expert Bruce Schneier once said that surveillance is the business model of the Internet. Just like a pipe of fresh water needs ongoing monitoring for obvious health reasons and a sewage pipe needs regular maintenance to work properly, pipes of Internet traffic are subject to sophisticated monitoring. The World Wide Web is founded on the TCP/IP protocol, which serves as a global standard for information exchange. These standards were developed by DARPA, the US military research agency, in the late 1970s and were declared the standard for all military computer networking in March 1982. Monitoring traffic, from emails to images, from video clips to attached PDFs and from voice messages to animated GIFs, has hence been an intrinsic part of the Internet from day one, and for good reason. Handling the exponentially growing volume of traffic is no easy task; keeping out viruses and child pornography as well as protecting our Internet banking transactions are other good reasons. Keeping control over an entire population to stabilize the political system in power is the most fundamental reason of all. Deep packet inspection (DPI) has been one of several standard procedures for scanning and analyzing any information that travels through the Internet, and it does exactly what its name suggests: it allows governmental organizations to open and inspect in detail any parcel of information traveling from A to B. While the World Wide Web is by far the most fantastic medium that humanity has ever come up with, it was never meant to be a private space.

And then we'll tell them that their privacy will be respected…

  2. Ten-year gap

I once attended a conference on risk management and international regulations where a leading researcher explained why most top athletes hardly ever get caught in doping and drug tests: the laboratories that develop new (performance-enhancing) drugs are at least ten years ahead of the labs that test the athletes. We can safely assume that the same gap applies between the information technology developed by US military organizations and the common knowledge of the world population. Ten years from now, most of us will start to understand what is technically possible, and therefore being done, through the collection of our own data today.

  3. Facebook does what it was built for

Once the US intelligence services understood Facebook's potential as a gargantuan supplier of valuable information, the CIA became an early investor in Facebook through its venture capital firm In-Q-Tel. The unprecedented rise of Facebook to become the world's biggest social network, supplier of private information and enabler of behavioral-economics analysis is hence perfectly in line with the hegemony of the United States of America. Advance knowledge by politicians and the population of what can be done through sophisticated data analytics would be counterproductive.

  4. We all agree

This is where the magic lies. Imagine a conversation between two Gestapo agents in 1943, both of whom work day and night to spy on suspected individuals and meticulously collect information about them. If one told the other that 70 years from now, billions of people would tell us everything they do, send us photos without being asked and report to us with whom they hang out and where, his friend would burst out laughing. Yet this is exactly what happened. Part of Facebook's ever more complex terms of use reads: "By posting member content to any part of the Web site, you automatically grant, and you represent and warrant that you have the right to grant, to Facebook an irrevocable, perpetual, non-exclusive, transferable, fully paid, worldwide license to use, copy, perform, display, reformat, translate, excerpt and distribute such information and content and to prepare derivative works of, or incorporate into other works, such information and content, and to grant and authorise sublicenses of the foregoing." And its privacy policy states: "Facebook may also collect information about you from other sources, such as newspapers, blogs, instant messaging services, and other users of the Facebook service through the operation of the service (e.g. photo tags) in order to provide you with more useful information and a more personalized experience. By using Facebook, you are consenting to have your personal data transferred to and processed in the United States." In the name of "free" entertainment and connecting with our friends, we politely and voluntarily do the tedious job formerly done by thousands of intelligence agents. If we agree to use a product for which we don't pay, we agree that we are the product being used.

  5. Out of sight, out of mind

The many advantages that come with the digital transformation of our lives also come with pitfalls. The invisibility and intangibility of data is probably the creepiest of them all. We are inherently visual animals, relying disproportionately on our eyes to construct our own reality. As soon as we don't see, we get scared (walking through a forest by day or by night are two incredibly different experiences). And because we don't see data and we don't understand what self-learning algorithms can predict using our data, we indeed should be scared. As soon as political propaganda becomes visible, it loses its effectiveness. By this very same logic, we should do whatever it takes to keep an open dialogue on the subject and to force the world's data-driven media behemoths to tell us how our data is being used.


Attention please !

Yes, our attention is the scarce resource that the digital media industry depends on. And yes, I just got your attention by recycling a picture that was used in an online ad to do the same. While the volume of content is exploding (300 hours of new video are uploaded to YouTube every minute; 95 million photos and stories are uploaded and shared on Instagram every day…), our attention is a limited resource. No wonder the art of catching it has become a science practiced by behavioral psychologists, software developers and user-interface designers. Because our eyeballs glued to the screen are the currency being sold to advertisers, the art and science of locking us into the longest streak of engagement is the name of the game.

In his TED Talk, the former Design Ethicist at Google gives an excellent overview of how the industry works and what is wrong with it.

I am happy to see the awakening of Silicon Valley pundits gathering to tackle the desperately needed social responsibility of the tech giants on the one hand, and the creation of awareness and consciousness among digital media consumers on the other. As I mentioned before, the race for attention is of course only the symptom of the underlying need for growth and profitability dictated by the current economic imperatives. As long as we are willing to trade our attention for "free" content that in return is designed to hijack our attention, we are in a weak position to ask Facebook and Google to step up to their responsibility.

It is hence the duty of teachers and parents, designers and software developers, shareholders and philosophers to have an honest conversation about how the current state of the tech giants threatens our health, our society and our democracy.

 


Women cook, men play tennis, AI says

Junk in, junk out is an old way of describing the relationship between corrupt and erroneous data being fed to a computer and the useless results being delivered in response. If you deliberately fill an Excel column with a couple of random numbers that you just made up, you know that you cannot trust the sum or average function at the bottom. This common sense is now being reconfirmed by sophisticated machine learning algorithms.

Machine learning refers to computer programs that autonomously learn from large amounts of data with little human supervision. So it happens that a pattern-recognizing algorithm scrolling through hundreds of thousands of images concludes that people standing in a kitchen are more likely to be women than men. On the other hand, someone with a gun or a person coaching a sports team is more likely to be a man.

We are about to find out that letting AI loose to eat our own data is a great way of not only breeding sexist views but amplifying gender-biased perception. While this comes as a disturbing surprise to many AI researchers, it should not. It is the same junk in, junk out rule applied on another level.

We now have two choices. Either we "correct" the algorithms by hard-coding gender neutrality (i.e. a fifty-fifty chance between man and woman for each picture of a person cooking, shooting, shopping or playing tennis), or we accept the biased output as a "feature" that makes cold and rational AI systems "more human".

Should we let AI systems learn by themselves and use them as a mirror for our many biases, or should we feed them gender-neutral, morally desirable and ethically acceptable values to help us evolve?

Doing the former will help us learn about ourselves but trigger a downward spiral of bias amplification. Doing the latter will help us change our biased perceptions but leave the burden of programming global ethical standards to a few god-like tamers of algorithms. We should take the time to discuss this before economic greed outsources the decision to AI.
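The amplification effect can be sketched in a few lines of Python. The numbers are invented purely for illustration (a hypothetical 70/30 label skew in a toy "cooking" image dataset), not drawn from any real system:

```python
import random

random.seed(0)

# Hypothetical toy training set: 70% of images showing a person cooking
# are labeled "woman", 30% "man" (made-up numbers, for illustration only).
train_labels = ["woman"] * 70 + ["man"] * 30

# A naive classifier that always predicts the majority label it saw in
# training maximizes its accuracy on this data...
majority = max(set(train_labels), key=train_labels.count)
predictions = [majority for _ in range(1000)]

# ...and turns a 70% skew in the data into a 100% skew in the output.
# This is bias amplification.
print(predictions.count("woman") / len(predictions))  # 1.0

# The "hard-coded neutrality" option from the text: ignore the learned
# skew and draw each prediction from a fixed fifty-fifty prior instead.
neutral = [random.choice(["woman", "man"]) for _ in range(1000)]
print(neutral.count("woman") / len(neutral))  # close to 0.5
```

The point of the sketch: a model rewarded only for accuracy turns a statistical tendency in its data into a categorical rule, which is why feeding such predictions back as new training data spirals downward.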

 


Why we need to talk about responsible AI

An excellent video enabling an urgently needed conversation about the responsible use of artificial intelligence:

What do you think about it? Share your thoughts and opinions on The Declaration of Montreal for responsible AI. The declaration aims to foster a public conversation about the potential and responsibilities of AI. Hopefully it will become a binding, international legal framework that ensures the above fiction stays fiction.


Introducing smart gut feeling

After several years of tests, the world's first "intelligent" drug has just been approved by the US Food and Drug Administration (FDA). Called Abilify MyCite, this smart aripiprazole tablet for schizophrenia treatment has an embedded sensor. Once in contact with stomach fluids, the "Ingestible Event Marker" sensor pushes information to the patient's and doctor's apps, offering a convenient confirmation that the drug was swallowed. In the name of "medication compliance" we are effectively giving up another big chunk of privacy and intimacy and see ourselves relegated to obedient drug consumers. While the trend of measuring and sharing the body's key performance indicators (KPIs) through wearable devices such as Fitbit took off several years ago, the big psychological difference here is that we embed a sensor right into our stomach instead of wrapping it around our wrist (and taking it off) when we feel like it.

In the name of drug consumption compliance, we allow Big Pharma to insert a bugging device right into our body. According to official sources, the patient's consent is needed to share the "download complete" message with the prescribing doctor. Nevertheless, any data security professional will confirm that as soon as something is technically feasible, it gets done. Before you download the tablet into your fleshware, you might ask yourself several questions: Are you fine with sharing a new level of information, sending your gut feeling right into the hands of your doctor and potentially other "medication compliance" parties (your health insurance would certainly not mind having this data…)? Since the sensor itself is not an active ingredient treating your illness, isn't it simply a sign of lack of trust? Your verbal confirmation to the doctor that you took the medicine will be overwritten by a simple pop-up message on the doc's app. Who knows how smart the sensor really is, picking up "several other physiological data" on the state of your health and transmitting the info to receiving devices. Are we not only losing our bodily intimacy but also introducing a potentially new level of stress with a performance dashboard of our own body? We just might have lost intuition-driven gut feeling, replacing it with laser-sharp sensors that bring us one step further into the sterile world of digital reasoning.

Source: Otsuka and Proteus


Is capitalism the wrong OS for AI?

Today I attended the highly interesting and utterly necessary event "Responsible AI", a two-day forum on the socially responsible development of artificial intelligence organized by the Université de Montréal. The prolific exchange of knowledge, wisdom and opinions around AI, and the profound social and ethical responsibilities that come with it, emphasizes Montreal's ambition and seriousness about becoming a leading AI hub.

The unknown variables of the mid- and long-term impact of AI on job security, privacy, justice, social equality and ecology are so far-reaching that questions during the first day of the forum largely outnumbered answers, which to me is a healthy sign of a constructive dialogue. Being able to ask the right question is more valuable than offering an easy answer that has not been thought through to the end.

So this is one of the many questions I wrote in my notebook while listening to leading AI scientists: Is capitalism the wrong Operating System for Artificial Intelligence? While every tech CEO assures us at every press conference that their goal is to make life better, connect the world, wipe out poverty and save the planet, it is easy to forget that all leading AI multinationals are publicly listed companies. In a world driven by quarterly reporting and C-level compensation mostly linked to short-term profits, corporate social responsibility is regularly perceived as a profit-decreasing waste of shareholders' investment. If we are serious about AI, and if we have the collective capacity to learn from self-learning algorithms, we should consider baking into each AI algorithm a triple-bottom-line approach that inherently pursues ecological, social and economic objectives. Or do you think we can rely on capitalism as the adequate OS for AI?

