From job killer to killer robot, artificial intelligence (AI) increasingly has come under the spotlight for its potentially adverse impact on human lives. Singapore, however, is advocating the need to hold off judgement whilst the technology continues to evolve and focus instead on building trust.
Whilst not a new concept, AI in recent years had been garnering significant interest due to the convergence of three key factors, said S. Iswaran, Singapore’s Minister for Communications and Information and Minister-in-charge of Trade Relations.
First was the ability now to amass, organise, and use large volumes of data. Computing power also had become available in larger quantities and at lower costs. And, together with more robust machine learning capabilities, these factors had converged to fuel renewed interest in AI, said Iswaran, who was speaking at the Bloomberg Live forum held Thursday in Singapore.
Elaborating on the country’s efforts in this space, the minister said Singapore was focused on verticals that were relevant to the nation and, hence, on developing applications that could be scaled locally, regionally, and worldwide. These domains included healthcare, education, and transport, and its initiatives encompassed research and development work, skillsets and training, and working with the private sector to build applications, he said.
In healthcare, for instance, he noted that there was scope for data and AI to be tapped to augment physicians’ delivery of healthcare and in managing chronic diseases such as diabetes and hypertension. Core to this was the nation’s central database of medical records, providing the data needed to train the AI and machine learning systems.
Hon Hsiao-Wuen, corporate vice president of Microsoft Research Asia and Asia-Pacific research and development group, also pointed to the potential for AI to improve the quality and reduce the cost of healthcare. The technology could further be used to stave off the spread of infectious diseases, said Hon, who was speaking to ZDNet on the sidelines of the forum.
For instance, he said Microsoft was working with Pfizer in China to use image recognition to more quickly detect fungal infections. Patients typically would need to go to the hospital to seek treatment, but this could lead to a further spread of the infection and put others at risk, since it would take a couple of days before test results could determine the type of disease, he said.
Computer-aided diagnosis of fungal infections could significantly speed up the time needed to identify the illness and eliminate the need for patients to enter a hospital for diagnosis, stemming its spread.
Asked about Singapore’s low adoption of AI for diagnosis, he said this could be due to concerns about reliability, responsibility, and liability. And this was not necessarily a bad thing in healthcare. He suggested that hospitals could put in place a layer of human certification to reduce false alarms and issues regarding reliability.
According to Iswaran, Singapore differentiated itself by its ability to organise and bring together different industry players “in a manner that’s focused and efficient”. It also was able to marshal the data and resources so these could be used in “a careful way” and aimed at resolving key issues, he said.
Asked how it dealt with concerns about privacy and security as citizen data was shared with the private sector, the minister pointed to the need to find a balance between legitimate concerns around privacy and the legitimate use of data. This was necessary because data could be used to serve a wider public good as well as individuals.
Acknowledging that there were tensions about AI and the use of data, he said Singapore focused on identifying the tools that could be used in both private and public sectors and identifying relevant safeguards that should be implemented to assure people with privacy concerns.
Iswaran said this was where trust was key and underpinned everything, whether it was data or AI. “Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and [ensuring] their data will be protected and afforded due confidentiality,” he said.
Hon concurred, echoing the minister’s call to build trust. He noted that Microsoft observed six principles that guided how it conducted business with others, and said it was necessary for the industry to encourage people to trust technology in general. These principles included the need to respect local regulations and sovereignty, as well as accountability, fairness, and safety.
Others, in fact, called for complete trust in the technology and for humans to hand over some tasks to AI entirely.
Nielsen’s CEO and chief diversity officer David Kenny pointed to how the research firm had used machines to predict the weather. The AI continued to improve over time and reduced its error rate from 12% to 4% as it got smarter about its predictions, Kenny said.
More interestingly, these projections improved as humans were taken out of the equation. In fact, 75% of the time that meteorologists intervened and changed the AI’s predictions, they made them worse, he added, noting that “false” data was introduced into the algorithm when humans were allowed to intervene.
Kenny explained: “Machines are actually better at predicting what people are going to watch. What we have to train humans on is to trust the machines, don’t override them even if you don’t like the answer…and instead be creative. I think jobs will be much more interesting when we let machines do the grunt work [and] we can focus on innovating.”
Regulations still necessary despite rapid technology change
But with AI technology evolving so rapidly, discussions turned to whether regulations would be able to keep pace.
Iswaran noted that, regardless of whether it could, a framework was necessary to instil confidence that AI would be applied in a responsible and ethical way. Adding that this could take the form of legislation, guidelines, or international norms, he said the absence of such frameworks could end up limiting the potential of AI because it could lead to a sharp pushback from the public.
Regulations, too, were critical to ease security concerns about cross-border data transfer. At the same time, however, these should not curtail the flow of data from which valuable insights could be extracted, the minister said.
On this front, he urged regional and international dialogue about rules to manage cross-border data flow.
Singapore in June introduced a framework designed to resolve challenges businesses typically faced when sharing data assets, such as the need to ensure regulatory compliance, and a lack of standardised methodologies and of trust in the parties with whom they shared data. Called the Trusted Data Sharing Framework, it aimed to facilitate data-sharing to drive the development of new products and services, as well as establish consumer confidence that their data would be protected.
In addition, Iswaran said, businesses should observe key principles when building AI products, which he said should be human-centric, explainable, and transparent.
There also were calls for less regulation so that the technology could be given room to evolve. Speaking at a panel discussion, Koh Soo Boon, founder and managing partner of VC firm iGlobe Partners, noted that it was difficult to determine if good or bad would come out of a developing technology and, with AI still nascent, governments also would not know what rules to apply.
Koh said the industry should be allowed to grow and, when problems did emerge later, it could self-correct or self-regulate to address these issues.
Steve Leonard, founding CEO of SGInnovate, also echoed the need for trust and explainable AI, whilst noting that the technology was an ongoing development and important concepts would surface along the way. It would be ineffective, then, to attempt to address such issues in advance, Leonard said.
He said societies had to be open that this concept was “imperfect” and know that some people would “misbehave”, and address these with rules and guidelines. Otherwise, they would miss opportunities in tapping AI to solve real-world problems, he noted.
With AI discussions now dividing opinions into two camps, the “dystopia and utopia”, Iswaran noted that, ultimately, AI used well would augment human capabilities and enhance human life.
“From our perspective, every technology change and revolution really has led to existing work practices being enhanced, certain practices being eliminated, and new areas being created,” Iswaran said. “We’re at [the] beginning of the [AI] evolution…so we need to watch this space and hold back on judgement [just yet].”
This also was why closer collaboration between the private and public sectors was necessary so both sides could work together to develop “sensible” guidelines on the use of AI, he said.
Hon also pointed to the need for more collaboration to drive the responsible use of AI, including with competitors and with specialists outside the field of IT, such as anthropologists and psychologists.