Following on from other blogs that my colleagues have written, I also attended the Tech Exeter conference. The session I was most interested in was the talk regarding Machine Learning, AI and the Brain by Samantha Adams.
When thinking about AI and Machine Learning it is easy to be caught up thinking about the bleeding edge uses that often dominate the tech news. Examples such as self-driving cars, AlphaGo and digital assistants (Siri, Alexa, Cortana) are usually the first things that come to mind. While these technologies are exciting and can change the way we work in the future, as things stand, they are not going to revolutionize the way businesses work overnight.
What is Machine Learning?
So why consider these technologies at all when their headline use cases are so restricted or still in development? This is where Machine Learning comes in: while Machine Learning is a key part of an AI's toolkit, it is a distinct subset within the field of AI. At its core, Machine Learning means training a computer, from sample data, to produce correct and useful extrapolations from that data. In concrete terms, this means we can train a computer to perform tasks such as predicting user load on a website, highlighting anomalous activity on a website, highlighting anomalous performance of servers, and predicting how systems might need to scale in the future. All of these use cases are relevant and useful to businesses today; Machine Learning can trivialize many analysis tasks that would previously have been performed by an individual. Alternatively, it can support the decisions of professionals who do not have the time to study as many examples as a computer can; medicine is an obvious example.
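To make the "extrapolation from sample data" idea concrete, here is a minimal sketch of predicting future user load: a least-squares line fitted to historical request counts, extrapolated forward. The numbers are invented for illustration, and real load prediction would use a far richer model.

```python
# Minimal sketch: predict future user load by fitting a straight line
# (ordinary least squares) to historical hourly request counts.
# The data points below are made up for illustration.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

hours = [1, 2, 3, 4, 5]               # hour of observation
requests = [100, 120, 138, 160, 181]  # requests seen in each hour

slope, intercept = fit_line(hours, requests)
predicted = slope * 6 + intercept     # extrapolate to hour 6
print(round(predicted))
```

The "training" here is just computing the slope and intercept from the sample data; more sophisticated models follow the same pattern of fitting parameters to examples, then extrapolating.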
What data do I need to utilise Machine Learning?
The main factor determining how well Machine Learning can do its job is the data set it is fed. It may be a surprise, then, that you will already have a lot of this data at hand, or will be able to generate it very quickly. For example, a proof of concept we completed recently used the ElasticSearch stack to create a dashboard of information with their Kibana technology. The dashboard was built by automatically feeding the logs generated by the webserver into the ElasticSearch stack; from here we can look to integrate their new Machine Learning tools to perform trend analysis and raise automated alerts based on live data. In this example we could instruct the Machine Learning package to alert us when we are getting more requests than usual, when requests are taking longer than usual to complete, or when there may be an attempt to gain unauthorized access to the site. This illustrates how quickly we can derive useful information from large datasets using Machine Learning, but it is just one specific example. There are many other use cases, including deciding whether a tumour is malignant or benign from a brain scan, or classifying images based on what they contain.
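As a rough sketch of what "more requests than usual" could mean in practice, a simple rule flags any count lying several standard deviations above the recent mean. This is a deliberately simplified stand-in for what Elastic's Machine Learning tools do, and the request counts are invented, not taken from the dashboard described above.

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it lies more than `threshold` standard deviations
    above the mean of the recent history of request counts."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return latest > mean + threshold * stdev

# Requests per minute over the last ten minutes (illustrative numbers).
recent = [120, 115, 130, 125, 118, 122, 128, 119, 124, 121]

print(is_anomalous(recent, 127))  # within the usual range
print(is_anomalous(recent, 400))  # far above it: likely worth an alert
```

The same threshold idea applies to the other alerts mentioned above, such as unusually long request completion times; a production system would additionally learn seasonal patterns rather than assume a fixed baseline.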
Machine Learning is:
- Not a technology of the future, it is already useful!
- Great at parsing very large datasets, far more examples than any human could process, and making decisions based on the example provided and the knowledge gained from all previously processed examples.
- Able to be trained to perform a wide variety of tasks.
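The "decisions based on previously processed examples" point above can be sketched with one of the simplest possible learners, a nearest-neighbour classifier: a new observation takes the label of the most similar training example. The server-health scenario and all of the data points are invented for illustration.

```python
import math

def nearest_neighbour(examples, point):
    """Classify `point` with the label of its closest training example.
    `examples` is a list of ((x, y), label) pairs."""
    closest = min(examples, key=lambda ex: math.dist(ex[0], point))
    return closest[1]

# Invented training data: (response_time_ms, error_rate) -> server health.
training = [
    ((120, 0.01), "healthy"),
    ((135, 0.02), "healthy"),
    ((900, 0.25), "degraded"),
    ((850, 0.30), "degraded"),
]

print(nearest_neighbour(training, (130, 0.015)))  # resembles healthy servers
print(nearest_neighbour(training, (870, 0.28)))   # resembles degraded ones
```

Swapping in different training pairs is all it takes to point the same code at a different task, which is the sense in which such a learner can be "trained to perform a wide variety of tasks".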