Charles Harrow

  • Joined Jul 10, 2020
  • AI only performs really well in its "comfort zone", i.e. under test conditions. In the real world, by contrast, it is very easy to trick.

    AI's core weaknesses come down to a few recurring problems. Using machine learning systems in sensitive areas such as medicine is still, in many cases, a risky undertaking. An example? In an experiment run by a network of New York hospitals, the system learned to "detect" pneumonia not from the medical data but by identifying which institution the results came from. The machine simply knew that during training most cases of the disease came from a particular hospital, and based its "diagnosis" on that.

    Autonomous vehicles are another example of AI falling short of today's expectations. "The Economist" cites the case of the American company Starsky Robotics, which was working on autonomous trucks and shut down in March this year. Among the reasons for the company's collapse, its founder mentions both its focus on the safety of the designed solutions (which frustrated impatient investors) and the shortcomings of the technology itself.
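
    The pneumonia failure above is a classic case of "shortcut learning": the model exploits a spurious correlation (which hospital a scan came from) instead of the medical signal. A minimal toy sketch, with entirely made-up data and prevalence numbers (not the hospitals' actual figures), shows how such a shortcut can score well in training and collapse at a new site:

    ```python
    import random

    random.seed(0)

    # Hypothetical setup: hospital A sends mostly pneumonia cases,
    # hospital B mostly healthy ones.
    def make_case(hospital, prevalence):
        pneumonia = random.random() < prevalence
        return {"hospital": hospital, "pneumonia": pneumonia}

    train = ([make_case("A", 0.9) for _ in range(500)]
             + [make_case("B", 0.1) for _ in range(500)])

    # The "shortcut" model ignores medicine entirely and predicts
    # from the hospital identifier alone.
    def shortcut_predict(case):
        return case["hospital"] == "A"

    def accuracy(cases):
        return sum(shortcut_predict(c) == c["pneumonia"] for c in cases) / len(cases)

    print(f"training hospitals: {accuracy(train):.2f}")  # high, roughly 0.90

    # A new hospital C with a balanced case mix exposes the shortcut:
    # accuracy falls to roughly chance level.
    test = [make_case("C", 0.5) for _ in range(1000)]
    print(f"new hospital:       {accuracy(test):.2f}")   # roughly 0.50
    ```

    The same pattern generalizes: any classifier evaluated only on data drawn from its training sites can look deceptively accurate while having learned nothing about the disease itself.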
