Why Companies Need Their Own AI Ethics Code of Conduct

Over the last year, I have immersed myself in AI research, including reading multiple books on AI and taking a Stanford class on the fundamentals of artificial intelligence.

The class was taught by Andrew Ng, an adjunct professor at Stanford and a co-founder of Coursera, who also offers a related class on Coursera entitled “Deep Learning with Andrew Ng.”

All of this study and research has given me a much better understanding of AI, what it can and can’t do, and its potential impact on our world. Although I am not an engineer and come from the market research side of the tech industry, after nearly 40 years of dealing with technology at all levels, understanding technology and its impact on our world has always been central to my work and research. AI has been around for decades but is more prevalent than ever in today’s tech world. That is why I wanted and needed to delve deeper into AI at the design level to be better informed about how AI can and will impact our world.

One book in particular has been essential to my understanding of AI from a global and political perspective: Kai-Fu Lee’s AI Superpowers: China, Silicon Valley, and the New World Order.

One other country that figures into the global AI picture is Russia, whose leader, Vladimir Putin, has said on the record that “the nation that leads in AI will rule the world.”

During the Stanford class, Professor Ng touched on one topic of great interest to me: ethics in AI. The more I have studied AI and ML, the more apparent it has become to me that AI can be used for good as well as for evil. I believe that developing guidelines or principles for the use of AI and ML will become one of the most important things companies of all kinds must put in place, and live by, in this next decade.

In an editorial this week for the Financial Times, Sundar Pichai, CEO of Alphabet and Google, said he believes “AI must be regulated to prevent the potential consequences of tools including deep fakes and facial recognition.”

His suggestions include “international alignment between the UK and the EU, agreement on “core values,” using open-source tools (such as those already being developed by Google) to test for adherence to written principles and using existing regulation, including Europe’s GDPR, to build out broader regulatory frameworks.”

While Pichai is pushing for government regulation, he is not waiting around for any government to drive what Google and Alphabet believe should be their position on AI Ethics.

As he states in the Financial Times editorial, “We need to be clear-eyed about what could go wrong.”

Echoing Pichai’s view, Microsoft President Brad Smith, speaking at Davos yesterday, said that “the time to regulate AI is now.” I agree that AI is going to need some governmental regulation, and Pichai has suggested a starting point for the US as it goes down the path of regulating AI and ML.

I find Pichai’s and Smith’s remarks critically important. CEOs of every company need to begin thinking about putting in place their own AI Code of Ethics that they plan to follow in their use of AI and ML technology.

Google has put in place its own AI principles and guidelines and stated objectives in its quest to create AI based on the protection of human rights.

Similarly, Microsoft has written commentary on its AI principles and objectives and has even included guidelines for responsible bots. And Salesforce has created a very concise AI ethics objectives document, committing that everything it does in AI will be responsible, accountable, transparent, empowering, and inclusive: https://einstein.ai/ethics

And the IEEE recently posted in Forbes its projected guideline recommendations for members on AI compliance, including commentary and principles covering AI and IoT, AI and work, AI and healthcare, AI and ethics, and a broader view of AI strategies.

I recently asked some of the major companies in tech and telecom whether they have published their own AI principles and guidelines and was surprised that very few of them are even in the process of doing so. They admit that it is vital to have their own AI guidelines in place and are doing some work on it, but revealed that they are nowhere close to having a comprehensive AI ethics strategy ready to publish.

Companies should make it a priority to have an AI Ethics Code of Conduct in place soon. AI is quickly becoming an integral part of their businesses. In the not-too-distant future, their customers are going to demand to know how the companies they deal with handle AI-based personal data and what their AI ethics code is. If companies are smart, they will craft their AI ethics position soon and be ready for the demands the market and their customers will place on them in the age of AI.

Published by

Tim Bajarin

Tim Bajarin is the President of Creative Strategies, Inc. He is recognized as one of the leading industry consultants, analysts and futurists covering the field of personal computers and consumer technology. Mr. Bajarin has been with Creative Strategies since 1981 and has served as a consultant to most of the leading hardware and software vendors in the industry including IBM, Apple, Xerox, Compaq, Dell, AT&T, Microsoft, Polaroid, Lotus, Epson, Toshiba and numerous others.
