New research has revealed that 87 per cent of IT decision-makers believe that technologies powered by artificial intelligence (AI) should be subject to regulation.
A study of 300 ITDMs from the UK and US, commissioned by integration platform SnapLogic, found a growing need for firms to focus on corporate responsibility and ethics in the development of AI solutions, with 94 per cent saying more attention needs to be paid to this area.
Nearly a third (32 per cent) of those who believed action was needed to regulate AI said that any moves should come from a combination of government and industry. High-level co-operation between government and industry would be necessary to facilitate such a shift, with 42 per cent saying groups such as the European Commission's High-Level Expert Group on Artificial Intelligence would play an increasingly prominent role in the debate around AI regulation.
A quarter believed regulation should be the responsibility of an independent industry consortium.
There were also growing calls for organisations to draw up their own internal frameworks for developing AI systems, with more than half (53 per cent) saying organisations should be responsible for their own ethical development, regardless of whether they are commercial or academic entities.
However, 17 per cent placed responsibility on the specific individuals working on AI projects, with respondents in the US more than twice as likely as those in the UK to assign responsibility to individual workers (21 per cent vs. nine per cent).
A similar number (16 per cent) saw an independent global consortium, comprising representatives from government, academia, research institutions and businesses, as the only way to establish fair rules and protocols to ensure the ethical and responsible development of AI.
In the UK, 15 per cent of ITDMs stated that they expect organisations will continue to push the limits on AI development without regard for the guidance expert groups provide, compared with nine per cent of their American counterparts.
Furthermore, five per cent of UK ITDMs indicated that guidance or advice from oversight groups would be effectively useless in driving ethical AI development unless it is made enforceable by law.
Gaurav Dhillon, chief executive of SnapLogic, commented: “AI is the future, and it’s already having a significant impact on business and society. However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes.”
He continued that data quality, security and privacy concerns are real, and the regulation debate will continue. “But AI runs on data — it requires continuous, ready access to large volumes of data that flows freely between disparate systems to effectively train and execute the AI system.
“Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained,” added Dhillon.