AI Insights

Building Scalable AI Solutions

When designing Artificial Intelligence-enabled software to work at scale, there are several themes that should be considered. Some apply specifically to AI systems, but many are simply good software development practice…

AI already has countless applications across sectors spanning financial services, engineering, healthcare, marketing and law. It supports an ecosystem poised to transform industry over the next few years, bringing superior medical diagnosis, unbiased brand analysis, broad investment insights and robust fraud detection.

Our own human imagination is now the only limiting factor!

We’ve all seen the amazing feats performed by research AIs: DeepMind’s AlphaGo, Georgia Tech’s Shimon and Google’s Quick, Draw!, but that’s just part of the story – how do we plug this intelligence into real-world applications securely and reliably?

At Robosoup, given our experience building AI-enabled systems, we believe development should be approached using the following prioritisation.

Add value

This is the essence of the solution, its raison d’être: the software must add value. It should do something that can’t be done by a human alone, or do it at superior scale, accuracy or speed. Our goal is not always to automate human activity away completely; more often it is to create an environment that augments human activity to deliver super-human levels of performance. AI is the catalyst that gives humans more time to do what they do best – think creatively.

Usability

Software usability describes how effectively new users can use, learn and control the system. At its heart, we’re attempting to provide positive answers to questions like these:

  • Are the most common operations streamlined so they can be performed quickly?
  • Can new users learn the software intuitively, without help?
  • Do validation and error messages make sense?

Scalability

Scalability is the ability of software to gracefully meet the demands of increased usage. To achieve this, we follow two guiding principles.

Scale horizontally in the cloud – there is a limit to how large a single server can be, for both physical and virtual machines. There are limits to how well a system can scale horizontally too, but that limit keeps being pushed further out. We target cloud-based virtual servers and databases wherever possible, as they provide the greatest flexibility.

Asynchronous rather than synchronous – we all understand asynchronous communication in the physical world. We send a letter in the mail and some time later it arrives. Until it does, we are happy in the knowledge that it is under way, oblivious to the complexity of the postal system. A similar approach should be taken with our applications. Did a user just hit submit? Tell the user the submission went well, then process it in the background – perhaps showing the update as if it had already completed in the meantime.
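As a minimal sketch of this pattern – with a hypothetical `handle_submit` handler and an in-memory queue standing in for a real message broker – the user gets an instant acknowledgement while a background worker does the processing:

```python
import queue
import threading

# In-memory stand-ins: a job queue and a results list. In production
# these would be a message broker and a datastore.
jobs = queue.Queue()
results = []

def worker():
    # Background worker: pulls submissions off the queue and processes
    # them, independently of the user-facing request.
    while True:
        job = jobs.get()
        results.append(f"processed {job}")  # stand-in for the real work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_submit(payload):
    """Acknowledge immediately; the actual processing happens later."""
    jobs.put(payload)
    return "submission received"

print(handle_submit("order-42"))  # returns instantly
jobs.join()                       # demo only: wait so we can see the result
print(results)
```

The caller never blocks on the expensive work; the queue decouples the response from the processing.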

Efficiency

This is related to, but slightly different from, scalability. Here we try to ensure that each process, even when run in isolation, makes maximum use of the available resources. Are we using parallel processing where we can? Are we using caches effectively? A cache is essentially a store of precomputed results, used to avoid computing the same results over and over again.
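A minimal illustration of caching in Python, using the standard library’s `functools.lru_cache` and a hypothetical `score` function standing in for an expensive model call:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def score(features):
    # Stand-in for an expensive model call; repeated inputs are served
    # from the cache instead of being recomputed.
    return sum(ord(c) for c in features) % 100

score("customer-a")   # computed
score("customer-a")   # served from the cache
print(score.cache_info())
```

`cache_info()` reports the hit/miss counts, which is a quick way to check that the cache is actually earning its keep.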

Fault tolerance

Avoid single points of failure. We try never to have just one of anything; instead, we assume and design for at least two of everything. This adds cost in terms of additional operational effort and complexity, but we gain tremendously in terms of availability and performance under load. It also forces us into a distributed-first mindset – as various people have said, ‘if you can’t split it, you can’t scale it’.
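A simple sketch of the idea, with hypothetical replica functions standing in for real service endpoints: rather than depending on a single instance, the caller tries each replica in turn and only fails when all of them do.

```python
def call_with_failover(replicas, request):
    # Try each replica in order; a single failed instance does not
    # take the service down.
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:
            last_error = err  # this replica is down; try the next one
    raise last_error          # every replica failed

def down(_request):
    raise ConnectionError("replica offline")

def up(request):
    return f"handled {request}"

print(call_with_failover([down, up], "ping"))  # the second replica answers
```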

API first

You’ve made an investment in machine learning, but how do you get the most from it? Think API first! In addition to pushing work to clients, we view your application as a service API. Clients these days can be an ever-changing mix of smartphones, web sites, line-of-business systems and desktop applications. The API makes no assumptions about which clients will connect to it, so it can serve all of them – and it opens your service up to future automation.
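As a rough sketch of the API-first approach, using only Python’s standard library – the endpoint, payload and port here are illustrative assumptions, not a real service:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict_payload():
    # Stand-in for a real model call; every client receives the same
    # JSON contract, regardless of what kind of client it is.
    return {"model": "demo", "prediction": 0.87}

class PredictHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(predict_payload()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve smartphones, web sites and desktop clients alike:
# HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

Because the contract is plain JSON over HTTP, any present or future client – including automated ones – can consume it without changes to the service.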

Security

Given the world we live in, this should be fairly obvious – security is the system’s ability to resist unauthorised attempts at usage or behaviour modification, while still providing service to legitimate users. From an administration perspective this could also mean:

  • Does the system require user- or role-based security?
  • Does code access or multi-factor authentication need to occur?
  • Which operations need to be secured?
  • Should traffic be encrypted?
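A minimal sketch of role-based security – the `requires_role` decorator and the `retrain_model` operation are hypothetical names used for illustration:

```python
import functools

def requires_role(role):
    """Allow the call only when the user holds the given role."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise PermissionError(f"{user['name']} lacks role {role!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def retrain_model(user):
    # A sensitive operation, guarded so only admins can trigger it.
    return f"retraining started by {user['name']}"

admin = {"name": "dana", "roles": ["admin"]}
print(retrain_model(admin))
```

Centralising the check in a decorator keeps the security policy in one place rather than scattered through every secured operation.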

These are the edited highlights – situations can and do vary – so as solution builders it’s very important for us to work closely with clients to establish the right mix of priorities.

Contact us if you would like to learn more about using AI and machine learning in your business.


Talk to an expert

Book a call with one of our AI strategists now!