Importance of Artificial Intelligence

What is AI?

Modern AI – ‘machine learning’ – enables software to perform difficult tasks more effectively by learning through training instead of following sets of rules. Deep learning, a subset of machine learning, is delivering breakthrough results in fields including computer vision and language processing.


  • ‘AI’ is a general term that refers to hardware or software that exhibits behaviour which appears intelligent.
  • Basic AI has existed since the 1950s, via rules-based programs that display rudimentary intelligence in limited contexts. Early forms of AI included ‘expert systems’ designed to mimic human specialists.
  • Rules-based systems are limited. Many real-world challenges, from making medical diagnoses to recognising objects in images, are too complex or subtle to be solved by programs that follow sets of rules written by people.
  • Excitement regarding modern AI relates to a set of techniques called machine learning, where advances have been rapid and significant. Machine learning is a sub-set of AI. All machine learning is AI, but not all AI is machine learning.
  • Machine learning enables programs to learn through training, instead of being programmed with rules. By processing training data, machine learning systems provide results that improve with experience (see the first code sketch after this list).
  • Machine learning can be applied to a wide variety of prediction and optimisation challenges, from determining the probability of a credit card transaction being fraudulent to predicting when an industrial asset is likely to fail.
  • There are more than 15 approaches to machine learning. Popular methodologies include random forests, Bayesian networks and support vector machines.
  • Deep learning is a subset of machine learning that is delivering breakthrough results in fields including computer vision and language. All deep learning is machine learning, but not all machine learning is deep learning.
  • Deep learning emulates the way animals’ brains learn subtle tasks – it models the brain, not the world. Networks of artificial neurons process input data to extract features and optimise variables relevant to a problem, with results improving through training (the second sketch after this list shows such a network in miniature).
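
To make ‘learning from training data’ concrete, below is a minimal sketch using a random forest, one of the popular methodologies named above. It assumes scikit-learn and NumPy are available; the credit-card-fraud features and data are synthetic, invented purely for illustration.

```python
# Learning fraud detection from labelled examples rather than hand-written
# rules. Requires scikit-learn and NumPy; all data here is synthetic and
# purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per transaction: amount, hour of day, distance
# from home (km). The fraudulent examples skew towards large amounts at
# unusual hours, far from home.
legit = rng.normal(loc=[40.0, 14.0, 5.0], scale=[30.0, 4.0, 10.0], size=(1000, 3))
fraud = rng.normal(loc=[300.0, 3.0, 400.0], scale=[150.0, 2.0, 200.0], size=(50, 3))

X = np.vstack([legit, fraud])
y = np.array([0] * len(legit) + [1] * len(fraud))

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# The model infers its own decision boundaries from the examples; nobody
# writes an explicit "if amount > threshold" rule.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
print("P(fraud) for a new transaction:", model.predict_proba([[500.0, 2.0, 800.0]])[0, 1])
```

More training data generally sharpens the learned boundaries, which is the ‘results improve with experience’ property described above.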
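
The deep learning bullet can be illustrated in the same spirit. Below is a tiny two-layer network of artificial neurons trained by gradient descent, in plain NumPy; the layer sizes, learning rate and XOR task are illustrative choices, not taken from the text.

```python
# A miniature neural network: artificial neurons extract features from the
# inputs, and repeated training steps improve the result. Pure NumPy; the
# task (XOR) and every hyperparameter are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer of 8 neurons
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # single output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer of neurons transforms its inputs.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the squared error.
    d_p = (p - y) * p * (1 - p)
    d_h = (d_p @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_p)
    b2 -= lr * d_p.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(p, 2))  # typically approaches [[0], [1], [1], [0]] with training
```

The loop is the essence of ‘results improving through training’: nothing about XOR is programmed in; the weights converge on it.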


Why is AI important?

AI is important because, for the first time, traditionally human capabilities can be performed by software inexpensively and at scale. AI can be applied to every sector to enable new possibilities and efficiencies.

  • AI technology is important because it enables human capabilities – understanding, reasoning, planning, communication and perception – to be undertaken by software increasingly effectively, efficiently and at low cost.
  • General analytical tasks, including finding patterns in data, that have been performed by software for many years can also be performed more effectively using AI.
  • The automation of these abilities creates new opportunities in most business sectors and consumer applications.
  • Significant new products, services and capabilities enabled by AI include autonomous vehicles, automated medical diagnosis, voice input for human-computer interaction, intelligent agents, automated data synthesis and enhanced decision-making.
  • AI has numerous, tangible use cases today that are enabling corporate revenue growth and cost savings in existing sectors.
  • Applications will be most numerous in sectors in which a large proportion of time is spent collecting and synthesising data: financial services, retail and trade, professional services, manufacturing and healthcare. Applications of AI-powered computer vision will be particularly significant in the transport sector.
  • Use cases are proliferating as AI’s potential is understood. We describe 31 core use cases across eight sectors: asset management, healthcare, insurance, law & compliance, manufacturing, retail, transport and utilities.
  • We illustrate how AI can be applied to multiple processes within a business function (human resources).




Why has AI come of age?

Specialised hardware, availability of training data, new algorithms and increased investment, among other factors, have enabled an inflection point in AI capability. After seven false dawns since the 1950s, AI technology has come of age.


  • After seven false dawns since its inception in 1956, AI technology has come of age.
  • The capabilities of AI systems have reached a tipping point due to the confluence of seven factors: new algorithms; the availability of training data; specialised hardware; cloud AI services; open source software resources; greater investment; and increased interest.
  • Together, these developments have transformed results while slashing the difficulty, time and cost of developing and deploying AI.
  • A virtuous cycle has developed. Progress in AI is attracting investment, entrepreneurship and interest. These, in turn, are accelerating progress.


The race for adoption

AI may be the fastest paradigm shift in technology history. Increasing adoption masks a growing divergence, among nations and within industries, between leaders and laggards.


  • AI adoption has tripled in 12 months. One in seven large companies has adopted AI; in 24 months, two thirds of large companies will have live AI initiatives. In 2019, AI ‘crosses the chasm’ from early adopters to the early majority.
  • AI may be the fastest paradigm shift in technology history. In the course of three years, the proportion of enterprises with AI initiatives will have grown from one in 25 to one in three. Adoption has been enabled by the prior paradigm shift to cloud computing, the availability of plug-and-play AI services from global technology vendors and a thriving ecosystem of AI-led software suppliers.
  • Great expectations are fuelling adoption. Executives expect AI to have a greater impact than any other emerging technology, including blockchain and the Internet of Things (IoT).
  • Increasing overall adoption masks a growing divergence between leaders and laggards. Leaders are extending their advantage by learning faster and increasing investment in AI at a greater pace than laggards.
  • Globally, China leads the race for AI adoption. Twice as many enterprises in Asia have adopted AI as in North America, due to government engagement, a data advantage and fewer legacy assets.
  • Sector adoption is uneven and in a state of flux. ‘Early adopters’ (financial services and high-tech companies) maintain a lead while ‘movers’ (retail, healthcare and media) are rapidly catching up.
  • AI is advancing across a broad front. Enterprises are using multiple types of AI application, with one in ten enterprises using ten or more. The most popular use cases are chatbots, process automation solutions and fraud analytics. Natural language and computer vision AI underpin many prevalent applications as companies embrace the ability to replicate traditionally human activities in software for the first time.
  • Leaders and laggards face different adoption challenges. Laggards are struggling to gain leadership support for AI and to define use cases. Leaders’ difficulties, in contrast, have shifted from ‘if’ to ‘how’. Leaders are seeking to overcome the difficulty of hiring talent and address cultural resistance to AI.
  • AI initiation has shifted from the C-suite to the IT department. Two years ago, CXOs initiated two thirds of AI initiatives. In 2019, as corporate engagement with AI shifts from ‘if’ to ‘how’, the IT department is the primary driver of projects.
  • Companies prefer to buy, not build, AI. Nearly half of companies favour buying AI solutions from third parties, while a third intend to build custom solutions. Just one in ten companies is prepared to wait for AI to be incorporated into their favourite software products.
  • Workers expect AI to increase the safety, quality and speed of their work. As companies’ AI agendas shift from revenue growth to cost reduction initiatives, however, workers are concerned about job security.


The advance of technology

Advances in AI technology are creating new possibilities. Custom silicon is enabling a new generation of AI hardware. Emerging software techniques are delivering breakthroughs in multiple domains and decoupling progress from the constraints of human experience.


  • While graphics processing units (GPUs) catalysed AI development in the past, and will continue to evolve, hardware innovations are expanding AI’s potential. Hardware is being optimised, customised or re-imagined to deliver a new generation of AI accelerators.
  • Hardware with ‘tensor architectures’ is accelerating deep learning AI. Vendors, including NVIDIA and Google, are optimising or customising hardware to support the use of popular deep learning frameworks.
  • We are entering the post-GPU era. Leading hardware manufacturers are creating new classes of computer processor designed, from inception, for AI. Custom silicon offers transformational performance and greater versatility.
  • Custom silicon is also taking AI to the ‘edge’ of the internet – to IoT devices, sensors and vehicles. New processors engineered for edge computing combine high performance with low power consumption and small size.
  • As quantum computing matures, it will create profound opportunities for progress in AI and enable humanity to address previously intractable problems, from personalised medicine to climate change. While nascent, quantum computing is advancing rapidly. Researchers have developed functioning neural networks on quantum computers.
  • Reinforcement learning (RL) is an alternative approach to developing AI that enables a problem to be solved without knowledge of the domain. Instead of learning from training data, RL systems reward and reinforce progress towards a specified goal (a minimal Q-learning sketch follows this list). AlphaGo Zero, an RL system developed by DeepMind to play the board game Go, developed unrivalled ability after just 40 days of operation. In 2019, developments in RL will enable groups of agents to interact and collaborate effectively.
  • Progress in RL is significant because it decouples system improvement from the constraints of human knowledge. RL is well suited to creating agents that perform autonomously in environments for which we lack training data.
  • Transfer learning (TL) enables programmers to apply elements learned from previous challenges to related problems. TL can deliver stronger initial performance, more rapid improvement and better long-term results. Interest in TL has grown seven-fold in 24 months and is enabling a new generation of systems with greater adaptability.
  • By learning fundamental properties of language, TL-powered models are improving the state of the art in language processing – in areas of universal utility. 2018 was a breakthrough year for the application of TL to language processing (see the transfer-learning sketch after this list).
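
To make the reinforcement learning bullets concrete, here is a minimal tabular Q-learning sketch: the agent starts with no knowledge of its environment and improves purely from a reward signal. The one-dimensional corridor and all hyperparameters are invented for illustration; this is far simpler than DeepMind's systems.

```python
# Tabular Q-learning on a toy corridor: the agent must walk right from
# state 0 to the goal at state 5. No rules or training data are supplied;
# behaviour is shaped only by the reward. Everything here is illustrative.
import numpy as np

N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                         # step left or right
Q = np.zeros((N_STATES, len(ACTIONS)))     # learned value of each action
alpha, gamma, eps = 0.1, 0.9, 0.1          # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))            # explore
        else:
            ties = np.flatnonzero(Q[s] == Q[s].max())
            a = int(rng.choice(ties))                      # exploit, random tie-break
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)     # move, clipped to corridor
        r = 1.0 if s2 == GOAL else 0.0                     # reward only at the goal
        # Reinforce the action towards the reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # learned policy: action 1 ("right") in every non-terminal state
```

The update rule rewards progress towards the goal, so the right-moving policy emerges without any domain knowledge being programmed in.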
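
The transfer learning pattern can be sketched just as briefly, shown here with a vision model for compactness (the same pattern underpinned 2018's language-processing breakthroughs). It assumes PyTorch and a recent torchvision are installed; the two-class task and batch are hypothetical.

```python
# Transfer learning: reuse features learned on a previous task (ImageNet)
# and train only a new output layer for a related, hypothetical 2-class
# problem. Requires PyTorch and torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything the network learned on the previous task...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer for the new problem (2 classes, illustrative).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train the head only
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for real data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```

Because only the small new head is trained, the pretrained features deliver the stronger initial performance and more rapid improvement described above.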





