WESTERN PRODUCER — Imagine fields of wheat, corn or soybean extending to the horizon.
Smart equipment — tractors and combines — till, plant, fertilize, monitor and harvest the fields. Using cutting-edge artificial intelligence, they do the work and save farmers countless hours of labour. The equipment responds to the weather and calculates the exact needs of each crop.
Now, imagine a hacker breaking into the digital system to steal, destroy or change the data.
Artificial intelligence (AI) in agriculture has been hailed as one of the bright lights of production. But few consider that it could come with a dark side.
A new risk analysis by researchers at the University of Cambridge in the United Kingdom warns that the future of AI comes with potential hazards for farms, farmers and food security systems that are under-appreciated and poorly understood.
“Greater technological innovation is often proposed as a panacea for humanity’s ailments and annoyances from epidemic prevention to agricultural productivity,” said Asaf Tzachor, a researcher with the Centre for the Study of Existential Risk at Cambridge. He is also affiliated with Reichman University, Israel.
“The larger, more profound the problem, the bolder, more ambitious the technological intervention required,” he said. “For the optimist, a swift and extensive design and diffusion of technologies, even if experimental, is the most sought-after solution. But technology may falter or backfire. If deployed hastily, it may bring about unintended consequences.”
Or intentional cyber-attacks.
Tzachor said potential problematic outcomes of automation require a balanced perspective. The intent of this research is to make practitioners, policymakers, scientists and scholars aware that they must ensure the technology is implemented in a safe and secure manner.
The concept of AI agriculture is not futuristic. In the report, Tzachor wrote that large companies are already pioneering the next generation of autonomous agricultural robotics and decision support systems to replace humans in the field, improve crop yield and productivity, and increase accuracies in crop sorting, weed control, pesticide use, and food processing.
But no one seems to be asking about the associated risks, even though threats are already in play.
“Some 50 malware and ransomware attacks targeting food manufacturers, processors and packagers were registered over the past two years, including an $11 million ransomware attack against the world’s premier meat processor, JBS,” said Tzachor. “The Italian Campari Group has also suffered a ransomware attack and so did Molson Coors Beverage Company.”
In 2021, a review of smart farming and precision agriculture by researchers at the University of Guelph, Ont., listed several potential cyber-attacks that could challenge food production and wireless networking systems.
Tzachor’s report also cited the threat of interference with datasets and the shutting down of sprayers, autonomous drones and robotic harvesters. These deliberate attacks are distinct from ordinary malfunctions.
“Autonomous agricultural machinery, including tractors and harvesters, sensors and self-driving rovers malfunction now and again,” he said. “It is safe to assume that errors in hardware and software design have resulted in local unintended consequences, such as excessive application of agrochemicals or insufficient irrigation. That said, we have yet to experience a system failure and yield loss on a large, catastrophic scale. That may be attributed to the fact we have not yet delegated a great deal of autonomy to machines to administer our farms.”
One of the intentions of Tzachor and his colleagues with their risk analysis is to encourage programmers to consider such possibilities, invite botanists and agronomists into the design process and uncover risks before deploying machinery. He suggests using ‘white hat hackers’ to help production companies identify potential security failings during the design and development of machinery and make recommendations for improvement.
A white hat hacker is an ethical hacker who can exploit computer systems and suggest safer operating systems. In computer jargon, they are the antithesis of black hat hackers who are intent on doing harm.
“While different systems comply to different standards and follow different protocols, it may well be argued that, overall, the agrifood sector is paying insufficient attention to the accidental, unintentional risks and is insufficiently secured against malicious risks. Other sectors, such as financial services, are much better geared to deal with similar risks,” said Tzachor.
Various agricultural robotic machines such as drones and sensors are already gathering information on crops to help farmers, whether by detecting an emerging disease or insufficient irrigation, or by monitoring livestock. Self-driving combines can bring in a crop, eliminating the need for a human operator. Offsetting the investment in this level of machinery are savings in labour costs, optimized production, and minimized loss and waste, thereby maximizing yield and revenue.
As with any equipment, accidental failures can happen such as over-application of fertilizer or pesticides, unintended soil erosion, or inappropriate irrigation.
To test the AI systems in this new generation of farm equipment, the researchers suggest that initial applications take place cautiously in what Tzachor calls digital sandboxes, where supervised prototypes and pilots of the technology can be assessed under closely monitored circumstances.
“The idea of digital sandboxes refers to supervised, low-risk, hybrid, cyber-physical spaces that represent agricultural environments, including crop farming, livestock ranching, aquaculture and horticulture,” he said. “In these spaces, experimental autonomous agricultural machinery, both hardware and software, can be deployed for initial assessment and evaluation. Assessments may include susceptibility to cyber-attacks, in partnerships with white hat hackers. In a similar vein, emerging technologies can be tested in digital sandboxes under changing conditions and circumstances to detect possible failures that may result in harming agro-ecologies before they are distributed at scale. These spaces, possibly operated by public-private partnerships, allow staged roll-out of innovations from low-risk environments to commercial farms and factories.”
Tzachor’s message is being heard and he and his colleagues are responding to positive feedback.
“With our partners in CGIAR (Consultative Group on International Agricultural Research), we were able to disseminate our findings to communities of growers in developing and developed regions,” he said.
While he said it is possibly too soon to appreciate the full influence of their analysis, Tzachor and his colleagues have attended conferences across Europe to elaborate on their results and discuss vulnerabilities related to AI in all phases of agricultural production. They have also begun providing advice to some precision-agriculture firms.
But while a secure, digitally run farm may be the next step for some producers, many small-scale growers and subsistence farmers are likely to be left behind. For them, that inequality may simply put AI options out of reach.
“This was a particular area of concern for us,” Tzachor said. “We already know that small-scale growers and subsistence farmers cultivate over 80 percent of plots worldwide. They shoulder the burden of feeding large swaths of the so-called Global South. Their contribution to global food and nutritional security is crucial.”
These farming communities often have little access to information and communication technologies, including internet coverage.
That concern comes into greater focus when considering that an estimated two billion people are affected by food insecurity, including about 690 million malnourished people and 340 million children suffering micronutrient deficiencies. Precision agriculture could promise substantial benefits for food and nutritional security in the face of climate change and a growing global population.
Tzachor said it is essential that a balanced approach towards innovation is practiced and that risk assessments and responsible research and development procedures do not stifle innovation in a system so fundamental to human wellbeing.
The study was published in February 2022 in the journal Nature Machine Intelligence.