Friday 29 April 2016

20 Crucial Terms Every 21st-Century Futurist Should Know (original article)



20 Crucial Terms Every 21st-Century Futurist Should Know
By George Dvorsky on 29 Mar 2016 at 8:00PM
We live in an era of accelerating change, when scientific and technological advancements are arriving rapidly. As a result, we are developing a new language to describe our civilisation as it evolves. Here are 20 terms and concepts that you’ll need to navigate our future.
Back in 2007 I put together a list of terms every self-respecting futurist should be familiar with. But now, some seven years later, it’s time for an update. I reached out to several futurists, asking them which terms or phrases have emerged or gained relevance since that time. These forward-looking thinkers provided me with some fascinating and provocative suggestions — some familiar to me, others completely new, and some a refinement of earlier conceptions. Here are their submissions, including a few of my own.
1. Co-veillance
Futurist and sci-fi novelist David Brin suggested this one. It’s kind of a mash-up between Steve Mann’s sousveillance and Jamais Cascio’s Participatory Panopticon, and a furtherance of his own Transparent Society concept. Brin describes it as: “reciprocal vision and supervision, combining surveillance with aggressively effective sousveillance.” He says it’s “scrutiny from below.” As Brin told us:
Folks are rightfully worried about surveillance powers that expand every day. Cameras grow quicker, better, smaller, more numerous and mobile at a rate much faster than Moore’s Law (i.e. Brin’s corollary). Liberals foresee Big Brother arising from an oligarchy and faceless corporations, while conservatives fret that Orwellian masters will take over from academia and faceless bureaucrats. Which fear has some validity? All of the above. While millions take Orwell’s warning seriously, the normal reflex is to whine: “Stop looking at us!” It cannot work. But what if, instead of whining, we all looked back? Countering surveillance with aggressively effective sousveillance — or scrutiny from below? Say by having citizen-access cameras in the camera control rooms, letting us watch the watchers?
Brin says that reciprocal vision and supervision will be hard to enact and establish, but that it has one advantage over “don’t look at us” laws, namely that it actually has a chance of working. (Image credit: 24Novembers/Shutterstock)
2. Multiplex Parenting
This particular meme — suggested to me by the Institute for the Future’s Distinguished Fellow Jamais Cascio — has only recently hit the radar. “It’s in-vitro fertilization,” he says, “but with a germline-genetic mod twist.” Recently sanctioned by the UK, this is the biotechnological advance where a baby can have three genetic parents via sperm, egg, and (separately) mitochondria. It’s meant as a way to flush out debilitating genetic diseases. But it could also be used for the practice of human trait selection, or so-called “designer babies”. The procedure is currently being reviewed for use in the United States. The era of multiplex parents has all but arrived.

3. Technological Unemployment
Futurist and sci-fi novelist Ramez Naam says we should be aware of the potential for “technological unemployment”. He describes it as unemployment created by the deployment of technology that can replace human labour. Per Naam:
For example, the potential unemployment of taxi drivers, truck drivers, and so on created by self-driving cars. The phenomenon is an old one, dating back for centuries, and spurred the original Luddite movement, as Ned Ludd is said to have destroyed knitting frames for fear that they would replace human weavers. Technological unemployment in the past has been clearly outpaced (in the long term) by the creation of new wealth from automation and the opening of new job niches for humans, higher in levels of abstraction. The question in the modern age is whether the higher-than-ever speed of such displacement of humans can be matched by the pace of humans developing new skills, and/or by changes in social systems to spread the wealth created.
Indeed, the potential for robotics and AI to replace workers of all stripes is significant, leading to worries of massive rates of unemployment and subsequent social upheaval. These concerns have given rise to another must-know term that could serve as a potential antidote: guaranteed minimum income. (Image credit: Ociacia/Shutterstock)
4. Substrate-Autonomous Person
In the future, people won’t be confined to their meatspace bodies. This is what futurist and transhumanist Natasha Vita-More describes as the “Substrate-Autonomous Person”. Eventually, she says, people will be able to form identities in numerous substrates, such as using a “platform diverse body” (a future body that is wearable/usable in the physical/material world — but also exists in computational environments and virtual systems) to route their identity across the biosphere, cybersphere, and virtual environments.
“This person would form identities,” she told me. “But they would consider their personhood, or sense of identity, to be associated with the environment rather than one exclusive body.” Depending on the platform, the substrate-autonomous person would upload and download into a form or shape (body) that conforms to the environment. So, for a biospheric environment, the person would use a biological body, for the Metaverse, a person would use an avatar, and for virtual reality, the person would use a digital form.
5. Intelligence Explosion
It’s time to retire the term ‘Technological Singularity.’ The reason, says the Future of Humanity Institute’s Stuart Armstrong, is that it has accumulated far too much baggage, including quasi-religious connotations. It’s not a good description of what might happen when artificial intelligence matches and then exceeds human capacities, he says. What’s more, different people interpret it differently, and it only describes a limited aspect of a much broader concept. In its place, Armstrong says we should use a term devised by the computer scientist I. J. Good back in 1965: the “intelligence explosion.” Per Armstrong:
It describes the apparent sudden increase in the intelligence of an artificial system such as an AI. There are several scenarios for this: it could be that the system radically self-improves, finding that as it becomes more intelligent, it’s easier for it to become more intelligent still. But it could also be that human intelligence clusters pretty close in mindspace, so a slowly improving AI could shoot rapidly across the distance that separates the village idiot from Einstein. Or it could just be that there are strong skill returns to intelligence, so that an entity need only be slightly more intelligent than humans to become vastly more powerful. In all cases, the fate of life on Earth is likely to be shaped mainly by such “super-intelligences”.
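To make the first scenario concrete, here is a minimal toy model (purely illustrative; the equation and every number in it are assumptions, not anyone’s forecast): if each gain in intelligence makes the next gain easier, growth can run away in finite time.

    # Toy model of recursive self-improvement: dI/dt = k * I**a.
    # With a > 1 the returns compound and intelligence blows up in finite time;
    # with a <= 1 growth stays merely exponential or slower. All numbers are made up.
    k, a, dt = 0.1, 1.5, 0.01
    intelligence, t = 1.0, 0.0
    while intelligence < 1e6 and t < 100:
        intelligence += k * intelligence**a * dt   # simple Euler step
        t += dt
    print(f"intelligence passed 10^6 at t = {t:.1f} time units (a = {a})")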
6. Longevity Dividend
While many futurists extol radical life extension on humanitarian grounds, few consider the astounding fiscal benefits that are to be had through the advent of anti-ageing biotechnologies. The Longevity Dividend, as suggested to me by bioethicist James Hughes of the IEET, is the “assertion by biogerontologists that the savings to society of extending healthy life expectancy with therapies that slow the ageing process would far exceed the cost of developing and providing them, or of providing additional years of old age assistance”. Longer healthy life expectancy would reduce medical and nursing expenditures, argues Hughes, while allowing more seniors to remain independent and in the labour force. No doubt, the corporate race to prolong life is heating up in recognition of the tremendous amounts of money to be made — and saved — through preventative medicines.
7. Repressive Desublimation
This concept was suggested by Annalee Newitz, author of Scatter, Adapt And Remember. The idea of repressive desublimation was first developed by political philosopher Herbert Marcuse in his groundbreaking book Eros and Civilization. Newitz says:
It refers to the kind of soft authoritarianism preferred by wealthy, consumer culture societies that want to repress political dissent. In such societies, pop culture encourages people to desublimate or express their desires, whether those are for sex, drugs or violent video games. At the same time, they’re discouraged from questioning corporate and government authorities. As a result, people feel as if they live in a free society even though they may be under constant surveillance and forced to work at mind-numbing jobs. Basically, consumerism and so-called liberal values distract people from social repression.
8. Intelligence Amplification
Sometimes referred to as IA, this is a specific subset of human enhancement — the augmentation of human intellectual capabilities via technology. “It is often positioned as either a complement to or a competitor to the creation of Artificial Intelligence,” says Ramez Naam. “In reality there is no mutual exclusion between these technologies.” Interestingly, Naam says IA could be a partial solution to the problem of technological unemployment — as a way for humans, or posthumans, to “keep up” with advancing AI and to stay in the loop.
9. Effective Altruism
This is another term suggested by Stuart Armstrong. He describes it as:
the application of cost-effectiveness to charity and other altruistic pursuits. Just as some engineering approaches can be thousands of times more effective at solving problems than others, some charities are thousands of times more effective than others, and some altruistic career paths are thousands of times more effective than others. And increased efficiency translates into many more lives saved, many more people given better outcomes and opportunities throughout the world. It is argued that when charity can be made more effective in this way, it is a moral duty to do so: inefficiency is akin to letting people die.
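A back-of-the-envelope sketch of the arithmetic behind that claim (the figures below are invented for illustration): with a fixed budget, outcomes scale directly with cost-effectiveness.

    # Hypothetical numbers only: same donation, very different cost-effectiveness.
    budget = 1_000_000                  # dollars donated
    cost_per_life_a = 3_000             # an unusually effective intervention
    cost_per_life_b = 3_000_000         # one a thousand times less effective
    print("lives saved via charity A:", budget // cost_per_life_a)   # 333
    print("lives saved via charity B:", budget // cost_per_life_b)   # 0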
10. Moral Enhancement
On a somewhat related note, James Hughes says moral enhancement is another must-know term for futurists of the 21st century. Also known as virtue engineering, it’s the use of drugs and wearable or implanted devices to enhance self-control, empathy, fairness, mindfulness, intelligence and spiritual experiences.
11. Proactionary Principle
This one comes via Max More, president and CEO of the Alcor Life Extension Foundation. It’s an interesting and obverse take on the precautionary principle. “Our freedom to innovate technologically is highly valuable — even critical — to humanity,” he told me. “This implies several imperatives when restrictive measures are proposed: assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favour measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people’s freedom to experiment, innovate, and progress.”
12. Mules
Jamais Cascio suggested this term, though he admits it’s not widely used. Mules are unexpected events — a parallel to Black Swans — that aren’t just outside of our knowledge, but outside of our understanding of how the world works. It’s named after Asimov’s Mule from the Foundation series.
13. Anthropocene
Another must-know term submitted by Cascio, who describes it as “the current geologic age, characterized by substantial alterations of ecosystems through human activity.”
14. Eroom’s Law
Unlike Moore’s Law, where things are speeding up, Eroom’s Law describes — at least in the pharmaceutical industry — things that are slowing down (which is why it’s Moore’s Law spelled backwards). Ramez Naam says the rate of new drugs developed per dollar spent by the industry has dropped by roughly a factor of 100 over the last 60 years. “Many reasons are proposed for this, including over-regulation, the plucking of low-hanging fruit, diminishing returns of understanding more and more complex systems, and so on,” he told me.
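Taking Naam’s factor-of-100-over-60-years figure at face value, the implied rate of decline is easy to work out; a short calculation, assuming a smooth exponential trend (a simplification):

    import math

    # A 100x fall in drugs approved per R&D dollar over 60 years implies:
    yearly_decline = 1 - 100 ** (-1 / 60)              # fraction lost per year, ~7%
    halving_years = 60 * math.log(2) / math.log(100)   # years for output to halve, ~9
    print(f"~{yearly_decline * 100:.0f}% less output per R&D dollar each year, "
          f"halving roughly every {halving_years:.0f} years")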
15. Evolvability Risk
Natasha Vita-More describes this as the ability of a species to produce variants more apt or powerful than those that currently exist within it:
One way of looking at evolvability is to consider any system — a society or culture, for example, that has evolvable characteristics. Incidentally, it seems that today’s culture is more emergent and mutable than physiological changes occurring in human biology. In the course of a few thousand years, human tools, language, and culture have evolved manifold. The use of tools within a culture has been shaped by the culture and shows observable evolvability (from stones to computers), while human physiology has remained nearly the same.
16. Artificial Wombs
“This is any device, whether biological or technological, that allows humans to reproduce without using a woman’s uterus,” says Annalee Newitz. Sometimes called a “uterine replicator,” she says these devices would liberate women from the biological difficulties of pregnancy, and free the very act of reproduction from traditional male-female pairings. “Artificial wombs might develop alongside social structures that support families with more than two parents, as well as gay marriage,” says Newitz.
17. Whole Brain Emulations
Whole brain emulations, says Stuart Armstrong, are human brains that have been copied into a computer, and that are then run according to the laws of physics, aiming to reproduce the behaviour of human minds within a digital form. According to Armstrong:
They are dependent on certain (mild) assumptions on how the brain works, and require certain enabling technologies, such as scanning devices to make the original brain model, good understanding of biochemistry to run it properly, and sufficiently powerful computers to run it in the first place. There are plausible technology paths that could allow such emulations around 2070 or so, with some large uncertainties. If such emulations are developed, they would revolutionise health, society and economics. For instance, allowing people to survive in digital form, and creating the possibility of “copyable human capital”: skilled, trained and effective workers that can be copied as needed to serve any business purpose.
Armstrong says this also raises great concern over wages, and over the eventual deletion of such copies.
18. Weak AI
Ramez Naam says this term has gone somewhat out of favour, but it’s still a very important one. It refers to the vast majority of all ‘artificial intelligence’ work that produces useful pattern matching or information processing capabilities, but with no bearing on creating a self-aware sentient being. “Google Search, IBM’s Watson, self-driving cars, autonomous drones, face recognition, some medical diagnostics, and algorithmic stock market traders are all examples of ‘weak AI’,” says Naam. “The large majority of all commercial and research work in AI, machine learning, and related fields is in ‘weak AI’.”
Naam argues that this trend — and the motivations for it — is one of the arguments for the Singularity being further away than it appears.
19. Neural Coupling
Imagine the fantastic prospect of creating interfaces that connect the brains of two (or more) humans. Already today, scientists have created interfaces that allow a human to move the limb — or, in one experiment, the tail — of another animal. At first, these technologies will be used for therapeutic purposes; they could help people relearn how to use previously paralysed limbs. More radically, they could eventually be used for recreational purposes. Humans could voluntarily couple themselves and move each other’s body parts.
20. Computational Overhang
This refers to any situation in which new algorithms can suddenly and dramatically exploit existing computational power far more efficiently than before. This is likely to happen when large amounts of computational power remain untapped and when previously used algorithms were suboptimal. This is an important concept as far as the development of AGI (artificial general intelligence) is concerned. As noted by Less Wrong, it:
signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an existential risk.
Luke Muehlhauser from the Machine Intelligence Research Institute (MIRI) describes it this way:
Suppose that computing power continues to double according to Moore’s law, but figuring out the algorithms for human-like general intelligence proves to be fiendishly difficult. When the software for general intelligence is finally realized, there could exist a ‘computing overhang’: tremendous amounts of cheap computing power available to run [AIs]. AIs could be copied across the hardware base, causing the AI population to quickly surpass the human population.
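A toy illustration of Muehlhauser’s point, with made-up numbers: while the software problem stays unsolved, hardware keeps compounding, so on the day the algorithm finally arrives it can be copied across an enormous installed base.

    # All quantities are invented for illustration.
    hardware_units = 1.0          # total compute available in year 0 (arbitrary units)
    agi_needs = 1_000.0           # compute one AGI instance requires (assumed constant)
    for _ in range(30):           # thirty years of Moore's-law-style doubling
        hardware_units *= 2
    print("AGI copies runnable on day one:", int(hardware_units / agi_needs))  # ~1 million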

What every futurist should expect





An article by George Dvorsky, heavily inspired by transhumanism, proposes 20 crucial avenues for the future of humanity.
I have selected a few of them.

1) Co-surveillance: everyone will have access to the many surveillance cameras of their city (or even of the planet).
2) Multiplex parenting: newborns will be conceived outside the uterus, by more than two parents; a child will have several fathers and several mothers (see point 16 in the original article).
3) Guaranteed minimum income: every human will receive an automatic income, whether or not they hold a job. Widespread robotisation will deprive many employees of their work, and they will be compensated in this way.
4) The transfer of one’s personality and intelligence (see point 17 in the original article) into various substrates: biological bodies, digital or virtual worlds. One can also envisage interconnection between several human brains, and perhaps animal brains too (see point 19 in the original article).
5) The dazzling progress of one’s own intelligence, which will turn anyone, even the village idiot, into a genius worthy of Einstein. This progress will be achieved in part through technological supplements that assist reasoning and reflection (see points 8 and 20 in the original article).
6) The advent of anti-ageing technology, which will have favourable economic consequences for society as a whole.
7) A return to lucidity and political critical thinking among those dependent on means of escape (alcohol, drugs, violent video games, etc.).
8) The new altruism, a boosted solicitude that will solve many problems for others, close to us or not, so that the human condition will involve far less suffering. This moral enhancement, this superior capacity for empathy, will be encouraged by drugs or implants (see point 10 in the original article).
9) The freedom to innovate, invent and experiment will be effectively guaranteed.
10) The management of unforeseeable events will be assured.
11) Remediation of the ecological damage caused by human activity will be effective.



Tuesday 26 April 2016

When AI creates concepts



“I have never seen such a rapid revolution. We went from a somewhat obscure system to a system used by millions of people in just two years.”
Yann LeCun
All the big tech companies are getting into it: Google, IBM, Microsoft, Amazon, Adobe, Yandex and Baidu are investing fortunes in it. So is Facebook, which, in a strong signal, put Yann LeCun at the head of its new artificial intelligence laboratory in Paris.
This AI approach, based on digital “artificial neural networks”, is used for everything from understanding speech to learning to recognise faces. It has “discovered” the concept of a cat on its own, and it is behind the psychedelic images, looking like machine “dreams”, that have flooded the web in recent weeks.
“Deep learning technology learns to represent the world: that is, how the machine will represent speech or an image, for example,” says Yann LeCun, considered by his peers to be one of the most influential researchers in the field. “Before, you had to do it by hand, explaining to the tool how to transform an image in order to classify it. With deep learning, the machine learns to do it itself. And it does it much better than engineers; it’s almost humiliating!”
To understand deep learning, we need to go back to supervised learning, a common technique in AI that allows machines to learn. Concretely, for a program to learn to recognise a car, for example, it is “fed” tens of thousands of images of cars labelled as such. This “training” can take hours, even days. Once trained, the program can recognise cars in new images.
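For readers who want to see the idea in code, here is a minimal supervised-learning sketch in Python. It uses scikit-learn’s small built-in digits dataset as a stand-in for a labelled car-photo corpus (which we obviously cannot ship in a blog post); the workflow is the same: train on labelled images, then recognise new ones.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Labelled images: 8x8 pixel pictures flattened to 64 values, plus their labels.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # A small neural network; .fit() is the "training" phase described above.
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print("accuracy on images never seen in training:", model.score(X_test, y_test))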
“The particularity is that the results of the first layer of neurons serve as input for the computation of the others,” explains Yann Ollivier, an AI researcher at the CNRS who specialises in the subject. This layer-by-layer operation is what makes this kind of learning “deep”. Yann Ollivier gives a telling example:
“How do you recognise an image of a cat? The salient points are the eyes and the ears. How do you recognise a cat’s ear? The angle is roughly 45°. To recognise the presence of a line, the first layer of neurons compares the difference between the pixels above and below it: that yields a level-1 feature. The second layer works on these features and combines them. If two lines meet at 45°, it starts to recognise the triangle of a cat’s ear. And so on.”
At each stage (there can be up to twenty or so layers), the neural network deepens its understanding of the image with increasingly precise concepts. To recognise a person, for example, the machine decomposes the image: first the face, the hair, the mouth; then it moves on to finer and finer properties, such as a beauty spot. “With traditional methods, the machine merely compares pixels. Deep learning allows learning on features more abstract than pixel values, features that it builds itself,” says Yann Ollivier.
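Ollivier’s cat-ear example can be caricatured in a few lines of numpy (a hand-built toy, not a trained network): the first “layer” turns raw pixels into edge evidence by differencing neighbouring pixels, and a second layer combines nearby edges into evidence of two lines meeting.

    import numpy as np

    # A tiny image containing a "^" shape: two lines meeting at a tip.
    img = np.zeros((5, 5))
    for i in range(3):
        img[2 + i, 2 - i] = 1.0    # line running down-left from the tip
        img[2 + i, 2 + i] = 1.0    # line running down-right from the tip

    # Layer 1: difference between vertically adjacent pixels -> level-1 edge features.
    edges = img[1:, :] - img[:-1, :]

    # Layer 2: combine level-1 features two columns apart; a row where edge evidence
    # appears on both sides marks the tip where the two lines meet (the "ear").
    meeting = np.abs(edges[:, :-2]) * np.abs(edges[:, 2:])
    print("strongest lines-meeting response:", meeting.max())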
Besides its use in speech recognition with Siri, Cortana and Google Now, deep learning is above all used to recognise the content of images. Researchers use it to classify galaxies. For several years Yann LeCun has also been giving an impressive demonstration: he created a program capable of recognising, in real time, the objects filmed by the webcam of an ordinary laptop.
One of the most advanced and most spectacular achievements of deep learning came in 2012, when a machine spent three days analysing ten million screenshots taken from YouTube, chosen at random and, crucially, unlabelled. This “bulk” learning bore fruit: by the end of the training, the program had taught itself to detect cats’ heads and human bodies, shapes that recurred throughout the analysed images. “What is remarkable is that the system discovered the concept of a cat by itself. Nobody ever told it that it was a cat,” explained Andrew Ng, founder of the Google Brain project, in the pages of Forbes.
“The hope is that the more we increase the number of layers, the more the neural networks learn complicated, abstract things that correspond more closely to the way a human reasons,” anticipates Yann Ollivier. He expects deep learning to spread, within 5 to 10 years, “into all decision-making electronics”, such as cars or aircraft. He also thinks of diagnostic assistance in medicine, citing neural networks that “make fewer mistakes than a doctor for certain diagnoses”, even if, he stresses, “it is not yet fully mature”. Robots, in his view, will soon be endowed with this artificial intelligence as well. “A robot could learn to do the housework on its own, and it would be far better than robot vacuum cleaners, which are not fantastic!” he smiles. “These are things that are starting to become conceivable.”
More unexpectedly, neural networks could also influence neuroscience, explains Yann LeCun. “Researchers use them as a model of the visual cortex, because there are parallels.” “The human brain also works in layers: it captures simple shapes, then complex ones,” explains Christian Wolf, a computer-vision specialist at INSA Lyon. “In that sense there is an analogy between neural networks and the human brain. But beyond that, we cannot say that deep learning mirrors the brain.”
Source: excerpts from an article by Morgane Tual, journalist at Le Monde.
Read more at http://www.lemonde.fr/pixels/article/2015/07/24/comment-le-deep-learning-revolutionne-l-intelligence-artificielle_4695929_4408996.html#Ee3gtqsBrl38Ievm.99