
You Should Be Afraid of Artificial Intelligence


If you ever walked into a bait shop and asked for a bag of self-hooking worms, the attendant would probably look at you like you had two heads. A slightly more sophisticated sports outfitter running the latest version of the creature-simulating platform, OpenWorm, and packing a well-stocked biohacker lab might instead lead you behind the counter and ask you to design your own. OpenWorm is an open-source project that aims to create a virtual nematode worm in a computer. The wiki and code are on GitHub, which makes it easy for anyone with coding skills to get involved. What makes this project different from any other attempt to create virtual organisms is that its bottom-up approach starts with data from scientific experiments, and builds up a complete worm cell by cell.
Feynman made the open-ended assertion that understanding something is not entry into a mental state of new knowledge, but rather the physical process of building it. Despite years of study, a principled understanding of the tiny roundworm C. elegans still eludes researchers. By capturing sufficient complexity at a low level in the worm itself, and including a similarly detailed model of its environment, the researchers anticipate that worm-appropriate behaviors consistent with experimental data will spontaneously emerge.
With just a thousand cells, the real worm solves the basic problems of feeding, mate-finding, and predator avoidance. Its nervous system is composed of 302 neurons, and their entire connectome has already been mapped out in detail by other researchers. Using the connectome as a starting point, researchers can build comprehensive computational models that capture the compartments of each neuron and the synaptic connections between them.
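To make that idea concrete, here is a deliberately tiny Python sketch of simulating activity over a connectome treated as a weighted, directed graph. The neuron names, weights, and the crude rate-based update rule are invented for illustration; this is not real C. elegans data, not OpenWorm code, and far simpler than the multi-compartment models the project actually builds.

```python
import math

CONNECTOME = {
    # presynaptic neuron -> list of (postsynaptic neuron, synaptic weight)
    "SENSORY_1": [("INTER_1", 0.8), ("INTER_2", 0.4)],
    "INTER_1":   [("MOTOR_1", 1.2)],
    "INTER_2":   [("MOTOR_1", -0.5)],  # negative weight = inhibitory synapse
    "MOTOR_1":   [],
}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(activity, sensory_input):
    """Advance the toy network by one time step."""
    summed = {name: 0.0 for name in CONNECTOME}
    for pre, synapses in CONNECTOME.items():
        for post, weight in synapses:
            summed[post] += weight * activity[pre]
    nxt = {name: sigmoid(total) for name, total in summed.items()}
    nxt.update(sensory_input)  # sensory neurons are driven externally
    return nxt

# Drive the sensory neuron and watch activity propagate to the motor neuron.
state = {name: 0.0 for name in CONNECTOME}
for _ in range(5):
    state = step(state, {"SENSORY_1": 1.0})
print(state["MOTOR_1"])
```

The real models replace that single sigmoid with detailed neuron compartments and synapse dynamics, but the graph-first structure is the same.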



A standalone model of a worm is just that: a sterile facsimile of the real thing. The real power is that a good model can potentially be more than the organism it represents. For argument's sake, we might imagine adding a network of 50 additional neurons to the worm in an attempt to breed in, or otherwise teach, the ability to self-bait under the appropriate conditions or stimulus. Doing such a thing in software could be a whole lot faster than doing it in the real worm. At this point in time, at least, the OpenWorm project does not simulate development, nor does it use information from the worm's known genome. It does, however, already incorporate some sophisticated software.
The OpenWorm project has developed Geppetto, a modular Java OSGi platform for multi-scale interactive simulation of biological systems. It features a built-in WebGL visualizer that runs right in the browser, and the OpenWorm Browser provides access to a cell-by-cell 3D representation of the worm. The connectome is described in the NeuroML language, and an optimization engine based on genetic algorithms fills in gaps in the worm's physiology, including the simulation of its muscles. The project also implements smoothed-particle hydrodynamics algorithms to simulate body-environment interaction on GPUs; initially worked out in C++ with OpenGL visualization, that code was then ported to Java as a bundle for Geppetto.
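For a flavor of what such an optimization engine does, below is a minimal genetic-algorithm sketch in Python: it evolves a population of candidate parameter sets toward whichever best reproduces a target trace. The `simulate` function and the target data are hypothetical stand-ins, not Geppetto's actual models or API.

```python
import random

def simulate(params):
    # Placeholder model: in reality this would run a muscle or neuron simulation.
    return [params[0] * t + params[1] for t in range(10)]

target_trace = [2.0 * t + 1.0 for t in range(10)]  # pretend experimental data

def fitness(params):
    # Negative squared error against the experimental trace: higher is better.
    return -sum((s - t) ** 2 for s, t in zip(simulate(params), target_trace))

def evolve(pop_size=50, generations=100, mutation=0.1):
    population = [[random.uniform(-5, 5), random.uniform(-5, 5)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]                   # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # crossover
            child = [g + random.gauss(0, mutation) for g in child]  # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # should land near [2.0, 1.0], the parameters behind the target
```

The real engine optimizes far richer physiological parameters, but the loop of select, recombine, and mutate against experimental data is the same idea.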


Writing about Artificial Intelligence is a challenge. By and large, there are two directions to take when discussing the subject: focus on the truly remarkable achievements of the technology, or dwell on the dangers of what could happen if machines reach the level of Sentient AI, in which self-aware machines attain human-level intelligence.
This dichotomy irritates me. I don’t want to have to choose sides. As a technologist, I embrace the positive aspects of AI, when it helps advance medical or other technologies. As an individual, I reserve the right to be scared poop-less that by 2023 we might achieve AGI (Artificial General Intelligence) or Strong AI — machines that can successfully perform any intellectual task a person can.
Not to shock you with my mad math skills, but 2023 is 10 years away. Forget that robots are stealing our jobs, will be taking care of us when we're older, and will be asking us to turn and cough in the medical arena.
How can we ensure humans will be able to control AI once it achieves human-level intelligence?
So, yes, I have control issues. I would prefer humans maintain autonomy over technologies that could achieve sentience, largely because I don’t see why machines would need to keep us around in the long run.
It's not that robots are evil, per se. (Although Ken Jennings, the Jeopardy champion who lost to IBM's Watson, might feel differently.) It's more that machines and robots are, for the moment at least, predominantly programmed by humans, who always carry their own biases.
In a report published by Human Rights Watch and Harvard Law School's International Human Rights Clinic, "Losing Humanity: The Case Against Killer Robots," the authors write: "In its Unmanned Systems Integrated Roadmap FY2011-2036, the U.S. Department of Defense wrote that it 'envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.'"
The "unmanned systems" refer to fully autonomous weapons that can select and engage targets without human intervention.
Who is deciding when a target should be engaged? Come to think of it, who's deciding who is a target? Do we really want to surrender control of weaponized AI to machines, in the wake of situations like the cultural morass of the Trayvon Martin shooting? How would Florida's Stand Your Ground law operate if controlled by weaponized AI police enforcement hooked into a city's smart grid?

The term FUD stands for "Fear, Uncertainty and Doubt." It's a pejorative phrase with origins in the tech industry, where companies use disinformation tactics to spread false information about competitors.
FUD has evolved, however, into a tedious phrase leveled at anyone questioning certain aspects of emerging technology, often followed by accusations of Luddism.
But I think people have the wrong picture of Luddites. Writing in the New York Times, Paul Krugman recently took up this idea, noting that the original Luddite movement was largely economically motivated, a response to the Industrial Revolution. The original Luddites weren't ignorant of the technology of the day, or at least of its ramifications (loss of work). They took up arms to slay the machines they felt were slaying them.
It's not too far a stretch to say we're in a similar environment, although the stakes are higher: strong AI arguably poses a wider swath of technological issues than threshing machines ever did.
So, as a fan of acronym creation, I'd like to posit the following phrase to counter FUD, especially as it relates to potentially human-ending technology that lacks standards governing its growth:

FAB: Fear, Awareness and Bias

The acronym distinguishes the blind, reactionary fear used to proactively spread false information from a warranted, human fear grounded in the bias that it's okay to say we don't want to be dominated, ruled, out-jobbed or simply ignored by sentient machines.
Does that mean I embrace relinquishment, or abandoning AI-related research? Not altogether. The same Watson that won on Jeopardy is now being used in pioneering oncological studies. Any knee-jerk reaction to stop work in the AI space doesn't make sense (and is, in any case, impossible).
But the moral implications of AI get murky when we think about things like probabilistic reasoning, which helps computers move beyond Boolean (yes/no) decisions to make decisions in the midst of uncertainty: for instance, whether to give a loan to an applicant based on his or her credit score.
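To illustrate the difference, here is a toy Python comparison of a hard Boolean rule and a probabilistic decision. The logistic weights, the extra income variable and the risk threshold are all invented for the example; no real lending model works exactly this way.

```python
import math

def boolean_decision(credit_score):
    # Hard yes/no rule: approve only above a fixed cutoff.
    return credit_score >= 700

def probabilistic_decision(credit_score, income):
    # A toy logistic model turns the evidence into a probability of repayment,
    # then decides under uncertainty by comparing it to a risk tolerance.
    z = 0.01 * (credit_score - 650) + 0.00002 * (income - 40_000)
    p_repay = 1.0 / (1.0 + math.exp(-z))
    return p_repay > 0.8  # the lender's (hypothetical) risk tolerance

print(boolean_decision(690))                  # False: rejected outright by the cutoff
print(probabilistic_decision(690, 100_000))   # True: the weighed evidence clears the threshold
```

The murkiness the paragraph points to lives in those numbers: someone chooses the weights and the threshold, and that choice quietly encodes whose risk matters.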
It is tempting to wonder what would happen if we spent more time focusing on helping each other directly, versus relying on machines to essentially grow brains for us.

"Nuclear fission was announced to the world at Hiroshima." James Barrat is the author of Our Final Invention: Artificial Intelligence and the End of the Human Era, which offers a thorough account of the chief players in the larger AI space, along with an arresting sense of where we're headed with machine learning: a world we can't define.
For our interview, he cited the Manhattan Project and the development of nuclear fission as a precedent for how we should consider the present state of AI research:
We need to develop a science for understanding advanced Artificial Intelligence before we develop it further. It's just common sense. Nuclear fission is used as an energy source and can be reliable. In the 1930s the focus of that technology was initially on energy production, but an outcome of the research led directly to Hiroshima. We're at a similar turning point in history, especially regarding weaponized machine learning. But with AI we can't survive a fully realized human-level intelligence that arrives as abruptly as Hiroshima.
Barrat also pointed out the difficulty anthropomorphism creates for AI. It's easy to project human values onto machines, but by definition they're silicon, not carbon.
“Intelligent machines won’t love you any more than your toaster does," he says. "As for enhancing human intelligence, a percentage of our population is also psychopathic. Giving people a device that enhances intelligence may not be a terrific idea.”
A recent article in The Boston Globe by Leon Neyfakh offers another angle on the concern over autonomous machines. Take Google's self-driving car: what happens when a machine breaks the law?
Gabriel Hallevy, a professor of criminal law at Ono Academic College in Israel and author of the upcoming book When Robots Kill: Artificial Intelligence Under Criminal Law, adds to Barrat's assessment: machines need not be evil to cause concern (or, in Hallevy's estimation, to be criminally liable).
The issue isn’t morality, but awareness.
Hallevy notes in "Should We Put Robots on Trial," “An offender — a human, a corporation or a robot — is not required to be evil. He is only required to be aware of what he’s doing…[which] involves nothing more than absorbing factual information about the world and accurately processing it.”

Options for AI

The nature of FAB, as I'm proposing it, is to move beyond the two-sided debate over AI and elevate the work of unique thinkers in the space. Use our Fears about potential scenarios to create Awareness of positive possibilities that will Bias us toward action on AI, rather than succumbing to complacency or to tacit acceptance of inevitable overlord rule.
In that regard, I appreciated when James Barrat told me about the work of Steve Omohundro, who holds degrees in physics and mathematics from Stanford and a Ph.D. in physics from U.C. Berkeley, and who is president of Self-Aware Systems, a think tank he created to "bring positive human values to new intelligent technologies."
He provides a refreshing voice in the AI community, acknowledging that “these systems are likely to be so powerful that we need to think carefully about ensuring they promote the good and prevent the bad.”
In terms of using AI for positive ends, it's worth watching two of his videos: his TEDx talk in Estonia, "Smart Technology for the Greater Good," and his keynote talk at Oxford, "Autonomous Technology for the Greater Human Good."
Steve Mann, a pioneer in the field of wearable computing, has a theory of Humanistic Intelligence (HI) that adds another unique layer to the discussion surrounding Artificial Intelligence. The theory came out of his Ph.D. work at MIT, where Marvin Minsky (whom many call the father of AI) served on his thesis committee.
Mann explains in the opening of his thesis, "Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications, within the domain of personal technologies, that can make use of this excellent but often overlooked processor." By leveraging tools like Google Glass or other intelligent wearable camera systems, we can enhance our lives with the aid of technology, rather than having our consciousness supplanted by it. He described his theory in our interview:
HI is intelligence that arises by having the human being in the feedback loop of the computational process. AI is not immediately a reality, whereas HI is here and now and viable. HI is a revolution in communications, not mere computation. It’s really a matter of people caring about people, not machines caring about people.
Where Ray Kurzweil describes the Singularity (the moment in time when machines gain true sentience), Mann describes Humanistic Intelligence in full fruition as the Sensularity. It's an appealing concept: technology that assists humanity toward greater innovation can make compassion, rather than computation, its primary goal.
HI features elements that ring of transhumanism, or H+, the idea that we could transform the human condition by merging technology with our bodies.

While many of us get anxious about the idea of ingesting sensors or replacing an eye with a camera, we don't think twice about prosthetic limbs (even one with a smartphone embedded in it).
In Clyde deSouza's science fiction novel Memories with Maya, however, AI and Augmented Reality add to the transhuman mix (in the form of haptic interfaces) by imagining how we'll interact with the reanimated avatars of our loved ones. The concept is fascinating and eminently credible. Think of the volume of content that surrounds a person: pictures, videos and words (sentiment expressed in texts, emails and social networking posts). It won't be long until we're able to fabricate or recreate people in virtual form.
DeSouza noted in an interview with Giulio Prisco for the Kurzweil blog, “Memories with Maya is a story that aims to seed ideas, grounded in hard science, on how AI, AR and advances in the field of deep learning and cybernetic reconstruction will eventually allow us to virtually resurrect the dead. A time will soon come when questions will need to be answered on the ethical use of such technology and its impact on intimate human relationships and society.” The book imagines life's repercussions if we could essentially keep our loved ones alive beyond the time their bodies physically stop functioning.
When is the best time to discuss the ethical uses of these technologies? NOW.

The Depth and the Direction

"I'm hungry for depth." Peter Vander Auwera is cofounder of Innotribe, the innovation arm of SWIFT, the global provider of secure financial messaging services, and cocreator of Corporate Rebels United, an organization geared toward creating actionable value practices within organizations.
Calling Corporate Rebels United a “do tank” versus a think tank, Vander Auwera learned to temper his passion for technology with a fervor for human connection. Like Mann’s focus on people caring for people, Vander Auwera is calling for a revolution focused on empowering humans, which he outlined in a recent blog post, "Dystopian Futures:"
We have come at a point where our only option [from dystopia] is a revolution [from being] data slaves and evolving as a new kind of species in the data ocean, trying to preserve what makes us human … We will need a new set of practices for value creation; where data slaves dare to stand up and call for a revolution … But it will be very difficult to turn back the wheel that has already been set in motion several decades ago.
I do not welcome our robot overlords.
I welcome aspects of accelerated learning and improvements in health, but not the full-stop acceptance of a time, one many are working to expedite, when AI will gain sentience. I'm with Peter Vander Auwera: I love technology, but I want to be part of the revolution that dares to stand up and say, "I like being human! I want humans to retain autonomy over machines!"
My hope is that, just as Genesis Angels, the $100 million fund, has stepped up to spur acceleration in AI and robotics startups, someone will step up to reckon with the ramifications of AI before unleashing it full-blown onto humanity.
To the robots or technology that may surpass our intelligence in the near future: observe my fleshy middle digit and hear me cry, "I wave my private parts at your aunties! Your mother was a hamster and your father smelt of elderberries!" (John Cleese, Monty Python and the Holy Grail).
