Artificial intelligence (AI) is no longer a side project; it is moving to the forefront of the enterprise. But that doesn't mean AI should be implemented just for the sake of it.

As more companies explore AI, the world's largest consultancies (including the likes of PwC and McKinsey) have warned them to think before they jump in.

Any AI project has to be beneficial overall, adding more value than it costs to implement and monitor.

Here are four broad principles the consultants agree on.

1. The first mover advantage is gone


In years gone by, companies could grow by taking existing technological ideas and enhancing them.

Mobile phones existed for decades before Apple entered the sector with the iPhone. Just ask Nokia or Motorola.

However, companies seeking to be first movers in AI may be too late.

The Chief Technology Officer of Cloudera, a global software analytics company, has said: "Those organisations that are reaping the benefits today started thinking about AI up to eight years ago… Now, eight years later, they are at the point where they are truly able to leverage AI."

2. Be in an industry where AI can add significant value


Much talk about AI has been about making processes simpler and faster, and while it can do that, implementing AI can be a costly exercise.

So you want to make sure you're in an industry where AI can create value by making inefficient processes easier, in turn solving headaches for potential customers.

According to McKinsey, AI can create $3.5 trillion to $5.8 trillion in annual value across 19 industries.

The industry with the greatest incremental value benefit is travel, at 128 percent, followed by transport at 89 percent.

ASX-listed software company LiveTiles (ASX: LVT) is expanding its product range into the travel sector with its Gate Agent Assistant Bot, which will help airline staff respond to queries faster by quickly retrieving information from the airline's database.

Other industries with significant potential include retail (87 percent), oil and gas (79 percent) and banking (50 percent).

3. AI needs to be trusted by all stakeholders


There is growing unease among the broader public about AI's future impact on everything from privacy and security to employment and income inequality.

In some quarters, ‘AI’ has become a blanket term for ‘a robot is coming for our jobs’ — but in some cases AI will help people do their jobs better (such as LiveTiles’ gate assistant AI).

Even those with inside knowledge of AI are anxious about some of its aspects.

Common concerns include the fairness of AI models (their freedom from bias), their robustness and the governance of the systems. Addressing these concerns means assigning clear accountability for them and ensuring that all hires (including technical hires) understand the relevant legal regulations and ethics.

It is not good enough for businesses to generalise that “consumers will have no choice but to accept AI because this is the way the world is going”.

While the latter part of that statement may be true, consumers will still have choices as to where they spend their money. Many may take the ethical conduct of companies into account when making purchasing decisions.

4. A company’s AI capabilities need to be constantly monitored


Businesses need to monitor their AI systems to ensure they are secure, work effectively, are free of bias and comply with the law.

This has to be an ongoing process, not just a one-off evaluation before systems are developed. When models need updating or replacement, this should happen as soon as possible.

Employees, both technical and non-technical staff, should also receive continual training.

What is concerning is that not all firms intend to undertake specialised monitoring of their AI systems.

A PwC survey found that while 64 percent of surveyed companies plan to boost AI security and 61 percent plan to create transparent, explainable models, only 47 percent will regularly test AI models for bias — a key concern broader society has about AI.

AI-competent businesses need to ensure they comply with the law. This may sound obvious for any business, but until now the law has failed to keep pace with technological advancement. It is beginning to catch up.

Europe's General Data Protection Regulation, implemented last year, is a key example. It gives individuals the right to see and control how organisations collect and use their data.

Although only 49 percent of businesses surveyed by PwC said they plan to ensure that the data in their AI systems meets regulatory requirements, this figure will undoubtedly rise as regulators put the spotlight on AI.


Star Investing has a commercial partnership with some companies mentioned in this article. This content does not constitute financial product advice. You should consider obtaining independent advice before making any financial decisions.