AI Pervasiveness in Human Lives


AI technologies already pervade human lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy. Substantial increases in future uses of AI applications, including more self-driving cars, healthcare diagnostics and targeted treatment, and physical assistance for elder care, are becoming real.
Question: How will AI improve the quality of human life in the next decade? How can AI-based technologies be deployed in ways that promote rather than hinder democratic values such as privacy, freedom, and transparency?

Kishor Akshinthala
85 months ago

5 answers


AI is often used as an umbrella term covering approaches such as deep learning, machine learning, and natural language processing. The way I like to see this is that AI will only get better with time. By the same token, the next generation of humans is going to be even smarter. The underlying fact, however, is that any tool, technology, or even our human presence on this planet will be put to good use or abused depending on our intentions. That means AI can improve the quality of human life by providing assistance in many areas such as genomics, microbiome research, and pharma.
The hours I used to spend on my research projects curating relevant information from databases can now be reduced to seconds of automated curation tailored to my needs. This improves the efficiency and accuracy of our research, and the constant flow of information and big data contributes to hidden-relationship discovery, better diagnostics, and greater awareness. We are currently addressing needs for improvement in the health care and life sciences space, all of which improve quality of life in some way.
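For illustration, here is a minimal sketch of what that kind of automated curation can look like: a few lines of Python using scikit-learn to rank a small collection of abstracts against a research question by TF-IDF similarity. The abstracts and the query below are invented for the example, and this is not any particular group's actual pipeline; a real one would pull from live databases and use richer models, but the ranking idea is the same.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical abstracts standing in for records pulled from a literature database.
abstracts = [
    "Gut microbiome composition is associated with response to cancer immunotherapy.",
    "Deep learning predicts protein structure from amino acid sequence alone.",
    "A survey of seasonal retail displays in mid-Atlantic department stores, 1980-1990.",
]

# The research question we want relevant records for.
query = "microbiome markers that predict treatment response"

# Represent the corpus and the query in the same TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)
query_vector = vectorizer.transform([query])

# Score each abstract by cosine similarity to the query and print the best matches first.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")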
In the end, it all comes down to the application we want to use AI for, our intentions, and the goals we are trying to accomplish.

Stuti Desai
85 months ago

How A.I. Will Redefine Human Intelligence

The machines are getting smarter. They can now recognize us, carry on conversations, and perceive complex details about the world around them. This is just the beginning.
As computers become more human-like, many worry that robots and algorithms will displace people. And they are right to. But just as crucial is the question of how machine progress will change our perceptions of human abilities.
Once a job can be done by a computer, it changes the way people think about the nature of that job. Let me give you an example. Travel with me, if you will, back to the year 1985. Here we are in Baltimore, Maryland. It’s Christmastime.
Hutzler’s Department Store is all twinkle lights and glass ornaments. Somewhere among the festive plaid tablecloths and polished silver are Tinsel and Beau—two full-blown animatronic talking reindeer. Tinsel’s frosted in glitter and Beau’s in a top hat. I’m one of the children lined up to greet them.
Baltimoreans may remember Hutzler’s as a beautiful old-fashioned department store, once known for its extravagant window displays and ornate façade. (The flagship store closed in 1989.) It was celebrated for its traditions, but also for its innovations. Hutzler’s installed the city’s first escalator in the 1930s, a modern convenience that it touted in advertisements.
A 1942 ad for Hutzler’s in The Baltimore Sun (Newspapers.com)
So it made sense that Hutzler’s would also have such impressive animatronic reindeer. I thought of them recently when I was watching a video demonstration of Handle, the new Boston Dynamics robot that can wheel around with alarming swiftness and jump four feet into the air. Boston Dynamics also made a robot reindeer once—and, well, this is how you fall into an internet rabbit hole. Next thing I know, I’m searching the web for evidence of a faint childhood memory: a pair of beloved robotic reindeer from Baltimore in the 1980s.
It didn’t take long.
“Tinsel and Beau are back!” said one 1983 advertisement in The Baltimore Sun. “Our talking deer make such a charming couple, and what wonderful conversationalists they are!”
A 1983 advertisement for Hutzler’s in The Baltimore Sun (Newspapers.com)
What I learned next, however, was that Tinsel and Beau weren’t robotic at all. A classified ad that Hutzler’s placed in The Baltimore Sun in 1986 offered a “fun opportunity for drama or theater students to be the voices of our famous talking reindeer.”
Tinsel and Beau were people!
Which, I mean, of course they were people. It seems silly now that I ever thought otherwise. Department stores in the 1980s didn’t just go around buying high-tech robotic reindeer that could carry on lengthy conversations with little kids. We’re talking about an era when Teddy Ruxpin—a furry tape cassette housed in the body of a mechanical bear—was considered a technological marvel. If you wanted to have a chat with a mobile device, your best bet was to get a Speak & Spell to burp out the alphabet in your direction.
In a 1986 ad in the Sun, Hutzler’s seeks voice actors for its talking deer. (Newspapers.com)
And yet my misplaced memory of the reindeer is understandable, maybe, given how dramatically the world has changed in the past 35 years. It’s the same reason that people today are surprised to learn that R2-D2, the lovable whistling droid from the Star Wars franchise, was operated by a human actor. (Today, another actor operates the unit for some shots, while a radio-controlled device is used for others, according to The Guardian.)
In a world of digital assistants and computer-generated imagery, the expectation is that computers do all kinds of work for humans. The result, some have argued, is a dulling of the senses. “The miraculous has become the norm,” Jonathan Romney wrote in an essay about computer-generated imagery for Aeon. “Such a surfeit of wonders may be de-sensitizing, but it’s also eroding our ability to dream at the movies.”
Our ability to dream, elsewhere in the arts, may be intact, but computers are encroaching on all sorts of creative territory. There are computers that can forge famous paintings with astounding accuracy—and there are algorithms designed to identify such fakes. Artificial intelligences can already write novels, and there’s at least one literary contest—the Hoshi Shinichi Literary Award—that’s open to non-human competitors. Computers can flirt. They can write jokes. (Not great ones, but hey.)
Computers are now so pervasive that we expect them to be everywhere. The past is quickly becoming a place where the presence of humans, talking reindeer and otherwise, is what surprises us. That’s likely to continue, and to expand into our most creative spaces. “The unresolved questions about machine art are, first, what its potential is and, second, whether—irrespective of the quality of the work produced—it can truly be described as ‘creative’ or ‘imaginative,’” Martin Gayford wrote in an essay for MIT Technology Review last year. “These are problems, profound and fascinating, that take us deep into the mysteries of human art-making.”
As machines advance and as programs learn to do things that were once only accomplished by people, what will it mean to be human?
Over time, artificial intelligence will likely prove that carving out any realm of behavior as unique to humans—like language, a classic example—is ultimately wrong. If Tinsel and Beau were still around today, they might be powered by a digital assistant, after all. In fact, it’d be a little weird if they weren’t, wouldn’t it? Consider the fact that Disney is exploring the use of interactive humanoid robots at its theme parks, according to a patent filing last week.
Technological history proves that what seems novel today can quickly become the norm, until one day you look back surprised at the memory of a job done by a human rather than a machine. By teaching machines what we know, we are training them to be like us. This is good for humanity in so many ways. But we may still occasionally long for the days before machines could imagine the future alongside us.

Jorge Alberto Hernández C., PhD.
85 months ago

Stanford-hosted study examines how AI might affect urban life in 2030

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.
Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.
“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”
The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.
“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”
The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.
“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”
The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.
The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.
The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.
“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.
“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”
The eight sections discuss:
Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.
Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.
Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.
Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.
Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.
Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.
Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.
Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.
“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”
Link: http://news.stanford.edu/2016/09/01/ai-might-affect-urban-life-2030/

Jorge Alberto Hernández C., PhD.
85 months ago

I am not convinced that AI will in fact improve quality of life: meaning, I do not consider this "improvement" a foregone conclusion. I read the answers above and find that many of them seem to assume that the designers and programmers will know exactly how to build these ancillae. I make no such assumption.
Point: trying to build an AI based on human thinking-reasoning-arbitration models is fine in terms of the decision-making process per se. But what about the more ethereal constraining influences of the various prohibitions, societal etiquette, and "taboos"? These will likely prove a lot harder to codify than the more pragmatic decision-making, but are quite possibly much more important.
I am not concerned about AIs becoming the HAL-9000 paranoid killer of astronauts. Nor do I think they will one day bring "SkyNet" into lucid reality. I do think that, like all other technologies, this should be tested a whole lot before we turn it loose on the elderly (in particular) or anyone else. Developing and implementing safety measures in AIs should be a very, very high priority before Marketing folks get hold of these devices and sell them on "promise" and "coolness".
I also think someone should be considering Asimov's Laws of Robotics as mandatory inclusions - they may have been sci-fi when he first wrote them, but not any more. It is likewise important to consider what the increasingly sophisticated and pervasive attacks on IoT devices (which these ancillae, like the Tesla cars and biomedical devices, are) could produce in the way of impacts, and what must be done about them - BEFORE the attacks show us what can happen. And they will.
What I find both remarkable and disturbing is that every conversation I read imparts a very clear and strong impression that the discussion about AI shifted, almost in a single sentence, from "Should we...?" to "How do we make this do...?": no longer the question of whether we do it at all, but what the next or most important thing is that we can get AI to do for us, and let's get on with that. I am in no such rush: AI may "extend" me, but I question whether it will "improve" me.
Remarkable - that there are indeed many things AI can do that we should ultimately seek to implement.
Disturbing - that it seems automatic that implementing AI is good and all but "destined to happen". It is a technological instrumentality - a tool, nothing more, just as computers, mobile phones, iPads, Cortana, and Siri are.

Ross A. Leo
78 months ago

Right from the start, AI is neither good nor bad. It can make democracy more transparent or manipulate society. What remains important is the human individual. It is not enough that schools foster STEM; they should ensure STREAM (Science, Technology, Reading, Engineering, Arts & Mathematics). Amazon's Jeff Bezos said that we are living in the "Golden Age of AI". This is not limited to the growing possibilities; we are at the beginning of a new development, and it is up to us to decide how we go from here. Important roles are played not only by scientists and companies, but also by artists. Famous authors predicted our life with AI, including Philip K. Dick ("The Minority Report"), George Orwell ("1984") and William Gibson ("Neuromancer").

Patrick Henz
77 months ago
