Is it unethical NOT to develop AI?

In recent months we have heard big names in science and technology speak out against artificial intelligence. Well-known figures such as Elon Musk (Tesla, SpaceX), Steve Wozniak (Apple), and Stephen Hawking (The Simpsons) have warned us to tread carefully as the development of AI continues. These big names have warned us that our relentless pursuit of science will inevitably lead to a future of enslavement, serving our robotic overlords, farmed like batteries to power the new machine race. With popular culture screaming “Won’t somebody please think of the children?” and announcing that it is unethical to create something so dangerous to the future of humankind, why should we even consider the alternative? We have all heard the reasons why it may be unethical to develop artificial intelligence, but what about the opposing position? Is it unethical NOT to develop artificial intelligence?

Artificial intelligence is the development of mechanical systems that can perform tasks which would normally require human intelligence, such as perception, recognition, and reasoning. It is theorized that continued progress in artificial intelligence will eventually lead to a superintelligence, or the ‘Singularity’. Fears exist about the intentions the Singularity may have towards humanity, extermination and enslavement to name a couple. AI at the moment is vastly different from the envisioned Singularity, yet it has already made some incredible changes to our society. It runs our stock markets and our businesses, detects fraud and terror, drives our cars, and is even replacing limbs (read Iron Man, yay!). Some new and wondrous improvement is made in AI every week. However, this is not the type of AI that people tend to worry about. Strong AI poses its threat not because of its form or function but because of its pure intelligence. The change it would bring would rival the invention of fire, flight, or the printing press, and herein lies the key to our fear: change. Every now and again humanity faces a game-changing event, something that challenges the norm.

As artificial intelligence penetrates the many different aspects of our lives, we see more and more jobs automated. AI has the opportunity to replace the dangerous, the menial, the knowledge-heavy, and the complex jobs, which together make up a large portion of the market. Many see this as a threat, but technologies of the past posed the same dilemma. Flight largely replaced rail and sea travel, the printing press replaced scribes, and the internet is fulfilling the role of libraries and encyclopaedias. All of these technologies brought major societal, economic, and corporate restructures, but they also brought significant benefits. Artificial intelligence is in the same position: it may demand major restructuring, but it may also bring major benefits to medicine, business, and education. As this technology grows and is accepted, the very nature of jobs and employment may change, but as with every past technology, the job market adapts and grows with it.

So perhaps the issue is not between humans and technology but between humans and the future. People are faithful to the norm; we like how things are and tend to resist change. When the status quo gets challenged, we seem to get challenging. Why? Are we comfortable in the present? Are we afraid that the worst-case scenario may happen? Or are we all just too lazy, happier to procrastinate than to progress?

So is it ethical to stop progress on artificial intelligence? This technology may kick up a bit of dust in the future, changing many aspects of our lives, but once the dust settles, won’t we be better off? Do we have the right to deny future generations the benefits of AI? Better medical and mental care, significant gains in efficiency and prediction, maybe even an accurate weather report for Ballarat?

But what if we look even further? What if AI did become sentient, and humans did manage to create a brand new life form? Perhaps we have a duty to create this sentient being, to build and preserve a brand new species. Without starting a debate about when a seed becomes life, can we honestly say that it is ethical to abort development on AI now because it may grow to become a living being?

No matter whether you happen to be for or against artificial intelligence research, there is one point that just about everyone agrees on: there will be a need for a framework when developing strong AI. That time may be now, or it may be a moment in the distant future, but it is something that will need considering. For now, let us continue to progress in our education and research, and aim for that greater and grander human experience we are always striving to achieve.
