The advent of artificial intelligence programming is helping us solve problems that could not be solved before. I would like to expand on my earlier blog about the staying power of AI (AI: Is it here to stay?).
In software, great power comes with great configurability. Much like the early ERP solutions, which solved a myriad of business problems with configurations that could fill several volumes of instruction manuals, machine learning algorithms come with a lot of knobs and settings.
There is now a large number of algorithms available to data scientists in their quest to solve AI-related problems. These algorithms typically target different classes of problems, such as image recognition, traffic prediction, autonomous driving, and other common machine learning domains, and each class of problems is often addressed with a different set of algorithms. Linear regression and neural networks (which themselves come in several flavors, such as convolutional and recurrent neural networks) are only a few of the weapons in the modern data scientist's arsenal. Most of these algorithms expose a number of configurations, like knobs on an engine, that control how they process and learn from the available data.
These settings are called hyperparameters, and they can affect the behavior of these algorithms considerably. Incorrect settings can result in a training run taking hours instead of minutes, and when you consider the number of iterations and epochs needed to complete training, those inefficiencies quickly add up.
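To make this concrete, here is a minimal sketch (pure Python, toy data, illustrative names) of how a single hyperparameter, the learning rate, changes how long a training run takes. Both settings reach the same answer, but one needs far more iterations:

```python
# Fit y = w * x by gradient descent on mean squared error.
# The learning rate is a hyperparameter: too small, and convergence crawls.

def train_linear(xs, ys, learning_rate, tolerance=1e-6, max_iters=100_000):
    """Return (fitted weight w, number of iterations used)."""
    w = 0.0
    for step in range(1, max_iters + 1):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
        if abs(grad) < tolerance:
            return w, step
    return w, max_iters

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # true relationship: y = 2x

for lr in (0.05, 0.001):
    w, steps = train_linear(xs, ys, learning_rate=lr)
    print(f"learning_rate={lr}: w={w:.4f} after {steps} iterations")
```

With this toy data, the larger learning rate converges in a handful of steps while the smaller one needs roughly a hundred times as many; real models multiply that gap across many epochs and many candidate settings.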
These algorithms also come with capabilities to stop training at a certain point, when the data scientist determines that the cost of continuing, in time and computational resources, is no longer worth the benefit.
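A common form of this is early stopping. Below is a minimal sketch of the idea (the loss values are simulated for brevity, and the function name is illustrative): training halts once the validation loss has not improved for a set number of epochs.

```python
# Early stopping: halt once validation loss has not improved for
# `patience` consecutive epochs, so compute is not wasted on a stalled run.

def train_with_early_stopping(val_losses, patience=3):
    """Return (best loss seen, epoch at which training stopped)."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return best, epoch  # further epochs are not worth the cost
    return best, len(val_losses)

# Simulated validation losses: improvement stalls after epoch 5.
losses = [0.90, 0.60, 0.45, 0.40, 0.38, 0.39, 0.40, 0.41, 0.42, 0.43]
best, stopped_at = train_with_early_stopping(losses, patience=3)
print(f"best loss {best}; stopped after epoch {stopped_at} of {len(losses)}")
```

Here the run stops after epoch 8 rather than running all 10 epochs, keeping the best model from epoch 5.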
Another aspect of machine learning algorithms is the features used for calculations, learning, and predictions. Raw customer data often contains a large number of useful features. However, these features may not be enough to achieve a consistently good rate of prediction. In that case, special techniques can be used to derive additional features out of the existing ones to help with the predictions.
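As a small illustration, here is a sketch of deriving new features from existing ones. The record fields (admit_hour, age, weight_kg, height_m) are hypothetical examples chosen for this sketch, not fields from any real dataset:

```python
# Derive new features from raw ones: a ratio, a binned category,
# and a boolean flag. All field names here are hypothetical.

def derive_features(record):
    derived = dict(record)
    # Ratio feature: body-mass index from weight and height.
    derived["bmi"] = record["weight_kg"] / record["height_m"] ** 2
    # Binned feature: coarse age group instead of the raw age.
    derived["age_group"] = "senior" if record["age"] >= 65 else "adult"
    # Boolean feature: was the admission overnight?
    derived["admitted_overnight"] = (
        record["admit_hour"] < 6 or record["admit_hour"] >= 22
    )
    return derived

raw = {"admit_hour": 23, "age": 70, "weight_kg": 80.0, "height_m": 1.75}
print(derive_features(raw))
```

None of the three derived values appears in the raw record, yet each may correlate with the outcome better than the raw fields do, which is exactly the point of feature derivation.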
Most of the tuning comes from intuition that data scientists develop over time. The derivation of new features likewise combines domain-specific knowledge with tested feature-derivation techniques.
One of the advantages of having a small team is the pressure to do more with less. When we took on a list of problems to solve in the healthcare space, we quickly realized that we would need to try out a lot of algorithms, with a number of settings, for each of the problems. Further, as an AI product company, we need our algorithms to execute on multiple clients' data, and each dataset has its own nuances.
To address this issue, the Tagnos engineering team has developed the Tagnos AI Engine. Think of it as a learning algorithm that learns from the execution of the various learning algorithms. Using the permutations and combinations of how those algorithms behaved in the past, the Tagnos AI Engine builds up a prioritization of execution steps.
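To show the core idea in miniature, here is a highly simplified sketch: rank candidate algorithms by how they scored on past runs and try the most promising ones first. This illustrates only the concept; the algorithm names, settings, and scores below are invented, and the actual Tagnos engine is of course more sophisticated than a sorted average.

```python
# Prioritize candidate algorithms using scores from past executions.
from collections import defaultdict

past_runs = [  # (algorithm, settings, score) from earlier runs; invented data
    ("linear_regression", {"regularization": 0.1}, 0.71),
    ("random_forest", {"n_trees": 100}, 0.84),
    ("random_forest", {"n_trees": 300}, 0.86),
    ("neural_network", {"layers": 2}, 0.78),
]

def prioritize(candidates, history):
    """Order candidate algorithms by their average historical score."""
    scores = defaultdict(list)
    for algo, _settings, score in history:
        scores[algo].append(score)

    def avg(algo):
        return sum(scores[algo]) / len(scores[algo]) if scores[algo] else 0.0

    return sorted(candidates, key=avg, reverse=True)

order = prioritize(
    ["neural_network", "linear_regression", "random_forest"], past_runs
)
print(order)  # the historically best performer is tried first
```

The payoff is that the most expensive resource, compute time on a client's dataset, is spent on the configurations most likely to succeed.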
The goal of this engine is to automate the execution and tracking of results so the data scientist does not need to keep adjusting the knobs and running various algorithms in search of an optimal setup. That search often requires keeping detailed logs of every execution, which is monotonous and time consuming with little value added. The engine automates these tasks so data scientists can apply their expertise to sorting through the results it collates.
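The tracking side can be as simple as recording every run as structured data instead of hand-kept notes. A minimal sketch (the class and field names are illustrative, not from the actual engine):

```python
# Record every (algorithm, settings, score) execution automatically,
# so the best configuration can be looked up instead of hunted through logs.
import time

class RunTracker:
    def __init__(self):
        self.runs = []

    def record(self, algorithm, settings, score):
        self.runs.append({
            "algorithm": algorithm,
            "settings": settings,
            "score": score,
            "timestamp": time.time(),
        })

    def best(self):
        """Return the highest-scoring run recorded so far."""
        return max(self.runs, key=lambda r: r["score"])

tracker = RunTracker()
tracker.record("random_forest", {"n_trees": 100}, 0.84)
tracker.record("random_forest", {"n_trees": 300}, 0.86)
print(tracker.best()["settings"])  # the winning configuration
```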
The Tagnos AI Engine will help our customers get to optimal predictions more quickly. Domain knowledge and expertise are still needed to interpret the available data and create as many features as we can to pass to the engine, but the engine then adds value by employing a number of tested techniques on top of that.
As with any learning algorithm, the engine has the potential to improve as larger amounts of data become available through continued use.
We are addressing a number of orchestration problems in healthcare systems, such as predicting load on the ER, predicting case length in the OR, predicting start and end times in the OR, and predicting the stock levels needed to maintain par levels for key equipment throughout the hospital.
We believe the addition of these predictions can create substantial value for a hospital by improving efficiency and providing a better patient experience.