ETSI Artificial Intelligence Summit puts things into perspective
Sophia Antipolis, 5 April 2019
Luis Jorge Romero, the ETSI Director-General, opened the day’s proceedings by recognizing the quantity of “natural intelligence” attending the ETSI summit on AI. Dirk Weiler, the ETSI Board Chair, launched the debate by putting a provocative question to the audience: “does AI really need standardization?”.
With nearly 200 participants from various sectors, the Summit was the place to network with representatives of academia, industry and governments from around the world, as well as the European Commission.
The speakers in the first session outlined the rich history of AI, dating back to the 1950s, and highlighted advances in hardware (centralized/distributed/decentralized), in AI processing and in the growing power of algorithms. Nowadays AI is present in many areas of modern life, including communications, healthcare, industrial automation, IoT, devices and autonomous vehicles.
One important element of the event was to define clearly what AI is all about and to demystify some of the complex buzzwords surrounding it, from machine learning to neural networks and automation. The European Commission presented its vision for AI in Europe and described the various initiatives and tools currently in place, including the High Level Expert Group and the AI Alliance. The key takeaway of the opening session was that machines won’t take over from humans but will unburden us by performing many complex and mundane tasks.
The next session covered AI in the telecommunications industry. Presentations outlined that AI is starting to be implemented across multiple telecom systems (access, transport, core, cloud, management, orchestration and business support systems) and that keeping a human in the loop is required to take responsibility. With informed examples, several experts warned about bias and discrimination in the data sets used to train AI algorithms. They also stated that data quality and sound analysis are essential to turn data into value. Transparency, explainability and trust are also key notions to take into account, along with consumer data privacy. When the standardization issue was raised, the answer was that the interfaces between AI systems should be standardized, not the AI as such.
The Summit also covered the role of AI and machine learning in Industry 4.0 and manufacturing, sectors where AI is still in its infancy but nevertheless on the rise. AI is instrumental for industry in three major steps: perception, cognition and decision.
To wrap up, standardization plays a key role in interchange formats for machine learning algorithms, in explainable AI and at the horizontal level. Standards ensure interoperability, and we need them for terminology and semantics and to enable adaptive, agile governance of the system. We also need a trustworthy AI framework and a “certification” process to establish that trust.
Last but not least, it was noted that applying AI in any domain will require us to consider issues of security, ethics, societal impact and trustworthiness carefully before allowing full AI autonomy.
To learn more, see all the presentations on our website.