Artificial Intelligence (AI) research has been around for roughly 50 to 100 years. While tracing its exact roots is difficult, understanding its trajectory can help us adjust our expectations and find use cases that are robust enough for production and accepted by consumers.
AI has seen several periods of hype and bust. In earlier cycles, startups deployed technologies like expert systems that were little more than hard-coded decision trees. Soon after basic use cases were demonstrated, it became evident that these systems could not adapt to new contexts; the startups disappeared, funding dried up, and the field went dormant for a while.
While there were several of these cycles, new technologies surfaced each time, and the lessons learned were not in vain. In addition, complementary technologies became more readily and cheaply available, which amplified the value of previous research.
The invention of neural networks can be traced back to the 1950s, but their real power only became evident when large datasets became available over the last 10 to 20 years. Since neural networks are trained on these datasets, feeding them more data usually helps.
Use Cases for Image Recognition
As of now, we have all the tools needed to teach a computer to distinguish a rock from a tennis ball, by feeding it numerous images of each along with the label of every image seen. A bottleneck arises when the system has to categorize an image it has never seen before. Likewise, with only a small number of images, distinguishing, say, an unhealthy cucumber plant from a healthy one can be tricky at first, but classification becomes more reliable as the number of seen images grows. However, classifying an image within a short latency budget is still a question of compute power and system design. Recent research suggests that a small number of input images may suffice to adequately identify an image.
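The train-on-labeled-examples idea above can be sketched in a few lines. This is a minimal illustration, not a production classifier: it uses a nearest-centroid rule on tiny synthetic 2x2 grayscale "images" (lists of four pixel intensities), and the "rock" and "tennis_ball" labels simply mirror the example in the text. Real systems would use neural networks and thousands of labeled photos.

```python
def centroid(images):
    """Average each pixel position across a list of images."""
    n = len(images)
    return [sum(img[i] for img in images) / n for i in range(len(images[0]))]

def distance(a, b):
    """Squared Euclidean distance between two pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image, centroids):
    """Assign the label whose class centroid is closest to the image."""
    return min(centroids, key=lambda label: distance(image, centroids[label]))

# Labeled training images: darker pixels for rocks, brighter for tennis balls.
training = {
    "rock": [[0.1, 0.2, 0.1, 0.2], [0.2, 0.1, 0.2, 0.1]],
    "tennis_ball": [[0.8, 0.9, 0.9, 0.8], [0.9, 0.8, 0.8, 0.9]],
}
centroids = {label: centroid(imgs) for label, imgs in training.items()}

# An unseen image is categorized by its nearest class centroid.
print(classify([0.15, 0.15, 0.2, 0.1], centroids))  # prints "rock"
```

Even this toy version shows the pattern the section describes: the more labeled examples per class, the more stable the centroids and the more reliable the classification of unseen images.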
Use Cases for Predicting Yield
When considering a scenario like predicting yield, it is crucial to capture all relevant input parameters. While the weather is a decisive factor, so are fertilizer input and seed quality. Understanding field performance as a series of experiments can help us build the right setup for predictions. For instance, ideally we would trace 10,000 fields across different regions, recording the input parameters as well as the yield obtained. While yield serves as the ground truth, it always depends on those input parameters. Mathematically speaking, we are doing nothing more than linear regression. As such, predicting rare events is inherently difficult within this domain. Of course you may set up a system that alerts on rare conditions, but the odds are: the less often a certain scenario has occurred before, the less accurate your prediction will be.
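The regression framing above can be made concrete with a short sketch. This is a simplified, hypothetical setup assuming a single input parameter (fertilizer in kg/ha) and made-up field records; a real model would include weather, seed quality, and thousands of fields. It fits ordinary least squares in closed form.

```python
def fit_line(xs, ys):
    """Closed-form least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical field records: fertilizer applied (kg/ha) vs. yield (t/ha).
fertilizer = [50, 100, 150, 200]
yield_t_per_ha = [3.0, 4.0, 5.0, 6.0]

slope, intercept = fit_line(fertilizer, yield_t_per_ha)

# Predict yield for a field that received 120 kg/ha.
print(round(slope * 120 + intercept, 2))  # prints 4.4
```

The sketch also makes the limitation in the text visible: the fitted line can only interpolate patterns present in the training records, so conditions that rarely or never occurred before yield unreliable predictions.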
While recent AI research has led to the conclusion that software can finally learn new knowledge, we are still some way from universal artificial intelligence: a type of intelligence that could solve any previously unseen problem and capture entirely new domains. Yet research on this is under way. Also, we have moved on from hiding AI tools in the basement, far away from customers.
It is all about finding the right use cases and supplementing them with some human support. As the AI researcher Hans Moravec put it, things that are hard for a human are easy for a machine, and the same is true the other way around. As far as the particular field of agriculture is concerned, I would propose maintaining a wider industry view at first and identifying robust use cases elsewhere, then transferring them to ag-tech, instead of narrowing the view down too early. Cutting-edge innovation in AI happens where most data is publicly available.