Playing with fire: How to adapt to the new realities of AI

When humans discovered fire roughly 1.5 million years ago, they probably knew they had something good right away. But they likely discovered the downsides just as quickly: getting too close and getting burned, accidentally starting a wildfire, inhaling smoke or even burning down the village. These were not minor risks, but there was no going back. Fortunately, we managed to harness the power of fire for good.

Fast-forward to today: artificial intelligence (AI) could prove to be as transformational as fire. As with fire, the risks are huge; some would say existential. But, like it or not, there is no going back or even slowing down, given the state of global geopolitics.

In this article, we explore how we can manage the risks of AI and the different paths we can take. AI is not just another technological innovation; it is a disruptive force that will change the world in ways we cannot yet imagine. That is exactly why we need to be mindful of its risks and manage them deliberately.

Setting standards for the use of AI

The first step in managing the risks associated with AI is setting standards for its use. These can be set by governments or industry groups, and they can be either mandatory or voluntary. While voluntary standards are a fine start, the reality is that the most responsible companies tend to follow rules and guidance, while others pay no heed. For society to benefit broadly, everyone needs to follow the guidance. Therefore, we recommend that the standards be mandatory, even if the initial bar is set lower (that is, easier to meet).

As to whether governments or industry groups should lead the way, the answer is both. Only governments have the heft to make the rules binding and to incentivize or cajole other governments globally to participate. Governments are, however, notoriously slow-moving and prone to political cross-currents, a serious handicap in these circumstances. I therefore believe that industry groups must be engaged and play a leading role in shaping the thinking and building the broadest base of support. In the end, we need a public-private partnership to achieve our goals.

Governance of AI creation and use

There are two things that need to be governed when it comes to AI: its use and its creation. Like any technological innovation, AI can be used with good intentions or with bad ones. Intent matters, and the level of governance should match the level of risk, whether a given use is inherently good, bad or somewhere in between. Some types of AI, however, are inherently so dangerous that they need to be carefully managed, limited or restricted.

The reality is that we don’t know enough today to write all the regulations and rules, so what we need is a good starting point and some authoritative bodies trusted to issue new rules as they become necessary. AI risk management and authoritative guidance need to be quick and nimble; otherwise, they will fall far behind the pace of innovation and become worthless. Existing industries and government bodies move too slowly, so new approaches need to be established that can proceed more quickly.

National or global governance of AI

Governance and rules are only as good as their weakest link, so buy-in from all parties is critical. This will be the toughest aspect. We should not wait for a global consensus before acting, but global working groups and frameworks should be explored in parallel.

The good news is that we are not starting from scratch. Various global groups have been actively setting forth their views and publishing their output; notable examples include the recently released AI Risk Management Framework from the U.S.-based National Institute of Standards and Technology (NIST) and Europe’s proposed EU AI Act, among many others. Most are voluntary in nature, but a growing number have the force of law behind them. In my view, while nothing yet covers the full scope comprehensively, if you were to put them all together, you would have a commendable starting point for this journey.

Reflecting

The ride will definitely be bumpy, but I believe that humans will ultimately prevail. In another 1.5 million years, our descendants will look back and muse that it was tough, but that we ultimately got it right. So let’s move forward with AI, but be mindful of the risks associated with this technology. We must harness AI for good, and take care we don’t burn down the world.

Brad Fisher is CEO of Lumenova AI.
