
AI is opening doors for large-scale studies

Nov. 5, 2024

The number of scientific applications of artificial intelligence is growing rapidly. Smart computers and robots are opening new doors to research that isn’t feasible manually. ‘We can now scale projects up to previously inconceivable levels.’


Willem-Jan Knibbe, the head of the Wageningen Data Competence Centre and its Artificial Intelligence programme leader, sees AI as a great way of extracting knowledge from data. ‘AI was already excellent at pattern recognition; it can for instance tell you what kind of animal or tree is in a picture, and it can recognize faces and look for relationships. That’s making research faster, more efficient and more productive. But over the last couple of years AI has also become creative and generative. You can make it write, draw, talk and construct things, for example with ChatGPT. That’s a radical change that affects all of us.’


Knibbe and his colleagues at the data centre want to create value from Wageningen’s data. ‘The data centre was set up in 2017, when the Big Data explosion brought exponential growth in the quantities of data being produced and stored. Larger and more complex datasets led to the rise of data science as a way of gleaning insights from the data. And now we have to deal with the developments in AI. Responding quickly enough to all those changes is a continuous challenge in research and education, where the scope for rapid adaptation is limited because the programme is generally fixed for a year. Bringing it all together is pretty complex.’


This article appeared earlier in Wageningen World 1 | 2024, the magazine of Wageningen University & Research.


Inspiration


Knibbe believes that AI tools offer boundless opportunities. ‘You can ask ChatGPT for a good study design, for instance. That doesn’t work perfectly yet, but it helps you find inspiration. AI can also help you construct, correct and combine ideas more quickly. AI has capabilities that you can’t even conceive of and it reads through more material than you can. That’s where you have to be careful, though, because AI can read the wrong things too. But if you do it properly, AI is a powerful and helpful sparring partner.’ AI is already being used avidly in innovative projects: for recognizing food quality using camera images, for the fully automated cultivation of land, for monitoring livestock health, or in the search for the hereditary characteristics of resistant strains.


Computer vision


Erik Pekkeriet is the manager of Vision + Robotics, the programme that is bringing together experts in computer vision – image interpretation by software – and robotics and AI from all corners of WUR so that the technology can be utilized in agriculture, horticulture, fishing, livestock farming and the food supply chain. He thinks Wageningen’s researchers are still a bit traditional in their attitudes to AI. ‘We don’t really trust it yet and want to do a lot of measurements and counts manually, just to be sure that it’s all correct. AI-based image processing systems are so good and efficient nowadays, though, that they do the job better and more completely than we do, as well as saving a lot of effort. The technology has genuinely turned a corner over the past ten years. Manual measuring and counting is going to be largely redundant in future and will be replaced by AI-based, robotized systems. On top of that, researchers will often have a lot more data points available, for instance because they can now use drones to fly over an area to gather data.’


Predicting illegal deforestation


Researchers at Wageningen Environmental Research are working on making the Forest Foresight system of the World Wide Fund for Nature (WWF) even smarter. The system uses radar images from the Sentinel-1 satellite to make a detailed map on which AI can show where felling is likely to happen, up to several months in advance. Every few days it can determine, for instance, where new roads have been created, which indicates where heavy vehicles – for tree felling, say – are going to be used. The system is undergoing trials in the tropical rainforests of Suriname, Gabon and Kalimantan. The initial results are promising, according to Johannes Reiche, an assistant professor of Radar Remote Sensing. ‘The nice thing is that we can now also teach the system about the causes of deforestation. It can recognize activities such as mining, agriculture and tree felling, and it also knows about various types of forest. That means the system can estimate accurately where the risk of illegal deforestation is highest. Felling is less likely in wet woodlands, for example. Local rangers can use this system to decide where to send their patrols instead of just reacting to what has already happened.’
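
As an illustration of the approach described above, the sketch below trains a pixel-wise deforestation-risk classifier on radar-derived features. It is only a minimal, hypothetical example: the feature names, the synthetic data and the random-forest model are assumptions for illustration, not the Forest Foresight implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row describes one forest pixel: radar backscatter statistics over recent
# months plus context such as distance to the nearest newly detected road.
# The label is 1 if felling was later observed at that pixel, otherwise 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))   # placeholder features (VV/VH trend, road distance, wetness, ...)
y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=5000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The predicted probability per pixel becomes a risk map: higher values mean
# felling is more likely there in the coming months.
risk_map = model.predict_proba(X_test)[:, 1]
print("mean predicted risk on held-out pixels:", round(float(risk_map.mean()), 3))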




Jeroen Hoekendijk, a marine biologist and computer scientist at Wageningen Marine Research, knows all about that. For his doctoral thesis, he used AI to automate the counting of seals in the Wadden Sea using aerial images. In current research, birds above the North Sea are also being counted using photographs, he tells us. ‘In the past, birds were counted by an expert from a plane; nowadays, modern aerial cameras can photograph large areas at high resolution and the images are analysed automatically. The initial results are very promising, but a lot of example data from the experts is currently needed if the process is to be improved. The algorithm has difficulty with similar-looking bird species in particular.’
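
A minimal sketch of the automated counting step described above, assuming an off-the-shelf torchvision detector and a placeholder image file (‘aerial_tile_001.jpg’); in practice the model would first be fine-tuned on the expert-labelled bird and seal examples the researchers mention.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained object detector; a real pipeline would be fine-tuned on
# expert-annotated aerial images of the target species.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# One tile of a high-resolution aerial survey (placeholder file name).
image = Image.open("aerial_tile_001.jpg").convert("RGB")
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep only confident detections; the 0.7 threshold is a tuning choice.
confident = prediction["scores"] > 0.7
print("animals counted in this tile:", int(confident.sum()))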


Although he started in biology, over time Hoekendijk has shifted towards computer science. ‘At the moment, I’m helping ecologists and biologists to use AI tools in their research. Because I know both sides of the coin, I’ve got a kind of bridging role.’ As an example, he gives research into determining the age of a fish. That is done by looking at otoliths, the small calcium-carbonate ‘ear stones’ in the fish’s inner ear, which lay down annual growth rings just as trees do. Researchers have been taking photos of these for years, and those old datasets can be used for machine learning: teaching the algorithm to count the annual rings. ‘You do that by showing the computer one photo at a time along with the corresponding number of rings until it can count them correctly on new photos. It’s got to be accurate and the research is very labour intensive, so it’ll be great if AI is able to take it over.’
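
A minimal sketch of the supervised set-up Hoekendijk describes: otolith photos paired with an expert’s ring count are shown to a model until it can predict the count on new photos. The tiny network, image size and random placeholder data are illustrative assumptions, not the actual WUR pipeline.

import torch
from torch import nn

class RingCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(1)  # predicted number of growth rings

# Placeholder data: 64 grey-scale otolith photos with expert ring counts (0-20).
images = torch.rand(64, 1, 128, 128)
ring_counts = torch.randint(0, 21, (64,)).float()

model = RingCounter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):  # show the labelled photos to the network repeatedly
    optimizer.zero_grad()
    loss = loss_fn(model(images), ring_counts)
    loss.backward()
    optimizer.step()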


Recognizing sounds


AI is also being used increasingly often in biodiversity research, not only for image recognition but also for recognizing the sounds made by birds, marine mammals, bats and fish. This gives a better understanding of where animals are and how they behave. According to Hoekendijk, the added value of AI lies in the scale. ‘Automation opens new doors for research that was impossible before because it wouldn’t be feasible manually. We can now scale projects up to previously inconceivable levels: we’re doing plankton research, for instance, at a scale that would have been unimaginable in the past. The latest technology lets us photograph 10,000 plankton particles a minute and we then use smart algorithms to analyse the photos. The quantities, species and locations of plankton all vary with the seasons, so using this tool lets us do monitoring at a much larger scale and pick up the changes more quickly.’
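
A minimal sketch of how such high-throughput plankton images could be classified in batches. The class names, batch size and untrained backbone are illustrative assumptions; a real system would use a model trained on expert-labelled plankton photos.

import torch
import torchvision

classes = ["copepod", "diatom", "fish_larva", "detritus"]   # placeholder plankton categories
model = torchvision.models.resnet18(num_classes=len(classes))
model.eval()

# Placeholder for roughly one minute of imaging: 10,000 particle crops.
particles = torch.rand(10_000, 3, 32, 32)

counts = torch.zeros(len(classes), dtype=torch.long)
with torch.no_grad():
    for batch in particles.split(256):            # process the stream in batches
        predicted = model(batch).argmax(dim=1)
        counts += torch.bincount(predicted, minlength=len(classes))

for name, n in zip(classes, counts.tolist()):
    print(f"{name}: {n} particles")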


AI uses a lot of electricity


The United Nations and the World Economic Forum predict a major role for AI in the battle against climate change. AI could help make things greener, for instance by automatically switching off energy sources when they aren’t needed. That doesn’t alter the fact that AI itself has a substantial ecological footprint: producing and transporting all the hardware, the water used to cool the servers in data centres, and the large amounts of electricity needed to train the AI models and keep them running. According to the International Energy Agency, data centres use about 3 per cent of all the electrical power on the planet and are responsible for 1 per cent of global CO2 emissions. That may not sound like much, but even the aviation sector ‘only’ emits twice as much.


The penny is slowly starting to drop in the academic world that digitalization comes at an ecological cost. Hoekendijk is positive nevertheless. ‘The climate impact varies from one project to the next. And AI can also have a positive impact: we are now able to use existing satellite images, for example, which is more environmentally friendly than flying. You can also use an underwater drone to film the ocean floor and see what’s living there, without fishing or ruining the seabed by scraping it away. And images from Google Earth enable us to detect new forests of dark-green seaweed, which we can then protect. These kelp forests are crucial for biodiversity. AI tools can’t save the climate; human beings must do that. AI can help us, though.’


Losing control


Vincent Blok, professor of the Philosophy of Technology and Responsible Innovation, notes that society also has concerns about AI. ‘In a well-made marketing video about Lely milking robots, you see fully automated cowshed systems – without any people. This has an alienating effect on the general public, though: they no longer see any relationship between the humans and the animals. That relationship had already weakened in livestock farming, but the robots draw attention to it.’ Blok thinks people sometimes wonder whether we’re losing control, with AI taking over. ‘Scientists need to address that concern, so that ordinary people can assess the potential, the opportunities and the risks. If public opinion turns against AI, that could hold the technology back. So this is something for interdisciplinary cooperation between the philosophers and the technologists.’


Blok is leading a project about the ethical, legal and social aspects (ELSA) of AI in sustainable food systems. The ELSA lab aims to develop responsible, human-centric AI. ‘We’re working with various chair groups to provide critical reflections on the negative and unforeseen effects of AI on humans, animals and society. What are the ethical issues, and who is ‘in control’? In the Netherlands, we’re thinking carefully about the ethics and philosophy of AI.’ Hoekendijk has seen AI building up momentum massively over the past five years. ‘It’s difficult to say where we’ll be five years from now. I’d expect AI to need less and less example data from experts and that the tools will become ever more accessible to people who aren’t computer scientists.’ Pekkeriet believes WUR still has a way to go, and that researchers will have to learn what AI can do for them. ‘We understand which data items can be linked together, but AI doesn’t. With generative AI such as ChatGPT or Google, you often don’t know where the information has come from and so you regularly get lousy answers: GIGO (garbage in, garbage out). When we’re doing research, we know the origins of our data – and we have colossal amounts of data available.’


Human intelligence


Sometimes, it is not clear whether AI would actually solve a particular problem. Self-learning machines don’t always outperform humans. ‘AI does not possess human intelligence,’ says Blok emphatically. ‘I think that we’re heading for a sort of hybrid intelligence. You always need human intelligence at the front and back ends. We need to utilize the user’s expertise in a positive way. Human-centric AI can help increase human capacities.’ According to Blok, that also raises the question of whether we aren’t defining the concept of intelligence too narrowly. ‘Why do we assume it’s either artificial intelligence or human intelligence? Maybe we ought to move from human-centric to biocentric AI. There are forms of intelligence in non-human systems too – take a flock of birds, for instance.’


Triage of diseased greenhouse seedlings


Selecting and sorting young seedlings before they are transferred to the greenhouse to develop is a labour-intensive task. Scientists from the Vision + Robotics programme are working on a technique for automating that selection process, in which diseased and non-viable plants are recognized and picked out. The new technology uses a camera that records the shape and colour of the seedling roots and shoot. Image processing and machine learning are then used to determine whether it is a viable plant. The AI technology can moreover help determine which characteristics are predictors of plant health. ‘We’re currently in the middle of our feasibility study,’ says Lydia Meesters, the project manager. ‘Can we genuinely produce the images we need of the plant properties so that its health can be determined? And if so, how can we create the best possible picture of these characteristics using simple, scalable technology?’
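
A minimal sketch of the triage step described above: simple shape and colour features per seedling, a classifier that flags non-viable plants, and a look at which characteristics carry the most predictive weight. The feature names and synthetic data are assumptions for illustration, not the Vision + Robotics implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-seedling measurements derived from the camera images.
features = ["root_length_mm", "root_branching", "shoot_height_mm", "leaf_greenness", "lesion_area_mm2"]

rng = np.random.default_rng(1)
X = rng.normal(size=(800, len(features)))                       # placeholder measurements
y = (X[:, 3] - X[:, 4] + rng.normal(scale=0.7, size=800) > 0)   # True = viable (synthetic rule)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Larger absolute coefficients suggest characteristics that predict plant health.
for name, coef in zip(features, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name}: {coef:+.2f}")

print("flagged for removal:", int((~model.predict(X)).sum()), "of", len(y), "seedlings")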
