The dangers of artificial intelligence are too great to ignore

With the success of ChatGPT and other AI systems, it has become clear that these systems sometimes produce strange results. It has already been shown that AI can make systematically flawed decisions, and that this can lead to a series of problems for those who rely on those decisions in their business.

Often the root of the problem is that the data is skewed from the start, and the AI system can then magnify the problem even further. These flawed results can have several causes. For example, the data used to train machine learning systems can be shaped by human biases. And when data is collected and used to train machine learning models, the models themselves can inherit the biases of the people who build them.

We recently conducted a survey of 640 companies with more than 500 employees, 50 of them Swedish. We asked how they use data when making decisions, and not least how developments in artificial intelligence have affected them and will affect them in the future.

Our survey shows that larger Swedish companies recognize that the data used in decision-making is often biased in various ways, and that this can lead to poor decisions. The survey also showed that while awareness exists among companies, much remains to be done to confront the problem.

Three-quarters of Swedish companies (74 percent) think they will need to do more to meet this challenge. In addition, a clear majority (62 percent) are fully convinced, or consider it very likely, that this problem will grow as we start working more with AI and machine learning.

It is also worth noting that many companies (60 percent) expect the number of decisions made with the help of AI to increase in the coming years. So the concern is clearly there, and we are also seeing that some companies are starting to get to grips with the challenge.

A majority (56 percent) of the Swedish companies that took part in the survey say that the data they use for decision-making is definitely or probably biased in various ways. At the same time, only 2 percent of Swedish companies claim to have come far in dealing with the challenge, although many (54 percent) have at least started implementing solutions.

We also asked the companies themselves what obstacles they saw. They mainly cited a lack of awareness of potential biases, a lack of understanding of how biases are identified, and the difficulty of finding suitable specialists, for example in data management.

There are a number of problems that can result from biased decision-making. For the company itself, skewed decisions can be outright harmful, both legally and financially. One example is a finance company that was found to be rejecting customers' loan applications because an AI-based tool discriminated against them based on their mailing address.
In addition, there is an ethical aspect if, for example, a certain category of people is excluded by AI-driven decisions in the company's hiring process. This can harm individuals, while the company also misses out on good candidates and risks damage to its brand.

So it is clear that companies must get better at addressing how they use AI support in their operations. The risks are simply too great to ignore.

Niklas Engi
Head of Scandinavian Progress

The full report is available here >>