Microsoft will tame its artificial intelligence.  Facial recognition and voice generation only with permission

Regulations governing artificial intelligence are still scarce. The European Union is still working on its Artificial Intelligence (AI) Act, whose first version was presented a year ago.

Meanwhile, many companies are actively developing AI-based technologies, which in turn requires them to establish rules of conduct for designing their algorithms.

Microsoft’s artificial intelligence will be used less often and only in justified cases

Microsoft has stated that, in creating its new framework for responsible AI development, it sought input from scientists and specialists outside the company who work on the subject. It has therefore decided on fairly serious steps: the company says it will apply appropriate safeguards, weighing the benefits and harms that such a powerful technology can bring.

Microsoft will primarily limit the use of its Azure Face tool, which has been used, among other things, for user verification. Access will now be restricted: to use it, customers will need the company’s approval, granted on the basis of an application demonstrating that the intended use complies with Microsoft’s ethical standards for AI.

What’s more, the American company is also retiring the Azure Face capabilities that infer emotions and attributes such as gender or age. After several studies and consultations, Microsoft concluded that emotion recognition is strongly influenced by contextual and cultural factors, and that scientists do not even agree on a definition of emotions. The technology will still be used in some cases, for example in Seeing AI, a tool that verbally describes the world for people with vision problems.

For similar reasons, access to speech recognition tools will also be limited. A 2020 study found that speech-to-text systems made errors nearly twice as often for Black and African American speakers as for white speakers. This led Microsoft to conclude that it needed to step up its efforts and change the way it builds its datasets so that all social groups are represented. The company also wants to develop a way of collecting data that respects and involves representatives of non-white groups in the process.

In addition, use of Custom Neural Voice, Microsoft’s text-to-speech tool, will be restricted. The tool can also generate artificial speech that sounds like any chosen voice; AT&T, for example, used it to create Bugs Bunny’s voice for an interactive game. Here, Microsoft wants to prevent the fraud that such voice cloning makes possible.

Has Artificial Intelligence Gained Consciousness?

AI algorithms have great potential, but we are still far from artificial general intelligence, that is, AI that has gained consciousness. It is worth mentioning here, however, the recent case of a Google engineer who claimed the company’s chatbot had become self-aware.

The employee, Blake Lemoine, reportedly asked the chatbot about its feelings, and it replied that it was sometimes sad or happy. Other employees of the company, however, deny that this is a sign of self-awareness. In view of the situation, Google suspended Lemoine.

Source: Gazeta
