Is New Technology Dangerous?
We are hearing a lot of scenarios around the world where people and experts describe new technologies and inventions like AI, IoT, and robotics as dangerous, because various actors are exploiting them for their own ends. Many are using technology for monetary gain with ill intent behind it. Governments are not far behind: they too are using AI and a range of other applications and apps to track their citizens and keep a close eye on them. People have lost trust and faith in new technology and are, at the same time, scared of using various apps for fear of data and privacy theft, or of being tracked.
So, how can we regain public trust, or have we lost it for good?
The fact that regulations like GDPR or CCPA (California Consumer Privacy Act) are seen to be necessary definitely demonstrates that businesses and governments have lost the public trust. Whether that loss is permanent or not depends entirely on how we choose to respond to the loss.
On the one hand, businesses can start doing the right things for the right reasons right now, and in time the public will come around (just as they did with banks after the 1929 crash, not to mention the savings and loans debacle in the 1980s). That's the safest way to regain that trust.
Less safe is for businesses to get on top of the psychology behind the public mistrust and start to manipulate the markets into trusting businesses again. This is risky, because loss of trust will be permanent if the manipulation becomes obvious to the public. And it probably would come to light in this day and age.
One thing that does need to happen is for businesses to start being more honest about their ability to protect consumer privacy. When people find their data is being streamed into other businesses or governments, they get cranky. On the other hand, if businesses stopped pretending they can fully protect personal data, and if they helped the general public understand that today's technology landscape doesn't afford anybody a true expectation of privacy, the public would find trust easier to navigate by the simple virtue of not being lied to.
Interesting question. If new technology is not tested enough, then yes, it can be dangerous. Is it the newness that scares people, the technology itself, or a mixture of both? Change is difficult for most people. With the latest technology, not knowing who has access to all of the data is where I get nervous: who is using it, and how are they using it?
Masarrat A Shah I'm not sure your assertions are entirely accurate. Has the public, en masse, really lost trust? You'd think if that were true, a lot more people would be taking much stronger action. Or perhaps the real issue is that trust is not necessary. After all, how many people trust their banks, or their politicians? Yet we continue to surrender ourselves to their domination.
I don't think technology, per se, is the issue here; I think it is more about the breakdown of social values. Technology, for the most part, on its own is neither good nor bad. That moral judgement applies when we consider how the technology is being used and compare that to the values we purport to uphold. We could readily get the tech companies to stop sharing our data, but then we would have to be prepared to pay for all the benefits we currently get from them in exchange for surrendering our data. That sort of change, as Cristen Taylor, MBA pointed out, is really, really hard.
I don't think Apple has ever been any better than any other company in this space; they are just better at distracting their customers from these issues. Similarly, the difference between China and the USA is that China tells us it is spying, while the USA lies through its teeth while doing it.
Our choice is to live in fear, or to accept the modus operandi and live with it, or to choose a better model based on the values we hold most dear and then insist on the regulatory and business changes needed to enforce that model. This is the tension between social values and the American free-to-make-a-profit-at-any-cost model. I want the former to rule, but experience says the latter will continue to win out, because people in power are motivated more by greed and self-interest than by social good.
I would argue that the assumption that "everyone" is using new technology for monetary gain or for illicit and uncouth ends is a broad overstatement of a fraction of use cases. New technologies are spawned from creative solutions to problems or challenges people face. For the most part, new tech advances are driven by entrepreneurial-minded people with a sincere desire to help others.
Where I do agree with your concern about new technology's misuse is in the breadth of data collection and the lack of true understanding of these collection, storage, and sharing practices. While national and international legislation such as GDPR has helped bring awareness and corporate accountability to the industry, it is insufficient for true protection. There is a dire and ever-increasing need for both consumer education and corporate data-accountability legislation.
In the immediate future, companies bringing new tech to market can benefit by designing both the product and the business to specifically address data security, transparency, and protection assurances. Consumers would view these as value-adds, trust enhancements, or differentiators in crowded fields.
There are actually several very interesting articles and studies on the question of whether technology is dangerous and what the dangers are. The overall consensus is that there is no consensus.
There was even a summit among key digital and AI scientists and entrepreneurs to discuss the potential risks of what is sometimes referred to as General Artificial Intelligence. The key takeaway from that conference was that some basic rules should be applied to limit the potential risks.
Regardless, there is a book by Max Tegmark, "Life 3.0", that explains and summarizes the perceived and actual risks of AI and future technology very well in Chapter 2; I recommend reading it. In summary, here too the result is inconclusive, as multiple scenarios persist that contradict each other. The key question is when GAI will become available and how it will be used and controlled.
On the downside, and especially looking at our current societal structures, it is apparent that a money-buys-it-all mentality, or a monopoly structure built on indirect network effects, can emerge. That would pose risks, especially if such a monopoly takes control of vital elements and exerts broad influence. We already see such behavior today from certain media in certain countries.
To regain that trust, one proposal is the completely transparent society, or digital freedom, as some anticipate. This is perhaps more the kind of society influencers tend to live in, where all data is publicly shared. In such a society, privacy does not really exist and all data is a common good. That would take away the power over data, held by governments or companies, that fuels some of the fear.
New technologies, in my opinion, if designed with privacy by default and by design, will carry fewer of the risks now attributed to them. The push to put new products into the market without security is how and why we have catastrophic cyber events.
The simple step of introducing privacy and data protection into the SDLC (software development life cycle) will enhance and reinforce the power of new technologies.
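To make the "privacy by design" idea concrete, here is a minimal sketch of one common pattern: pseudonymizing personally identifiable fields before a record ever reaches storage or logs, rather than bolting protection on afterwards. The field names, record shape, and key-handling approach are all hypothetical illustrations, not a prescribed implementation.

```python
import hashlib
import hmac

# Assumption for illustration only: in a real system this key would come
# from a secrets manager and be rotated, never hard-coded.
SECRET_KEY = b"example-key-do-not-use-in-production"

def pseudonymize(value: str) -> str:
    """Replace a PII value with a stable keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records can still
    be joined or deduplicated without exposing the raw value.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def sanitize_record(record: dict, pii_fields: set) -> dict:
    """Return a copy of `record` with every PII field pseudonymized."""
    return {
        key: pseudonymize(val) if key in pii_fields else val
        for key, val in record.items()
    }

# Hypothetical usage: the raw email never leaves this function boundary.
record = {"email": "alice@example.com", "plan": "pro"}
safe = sanitize_record(record, {"email"})
```

Building this kind of sanitization step into the data layer during development, rather than auditing logs for leaks after release, is one small, cheap example of what shifting privacy into the SDLC can look like.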