Sorghum is a critical crop for food production in arid climates due to its natural drought and heat tolerance. To improve yields in these challenging environments, researchers need high-throughput methods to monitor plant development and optimize cultivars. One of the most significant indicators of yield is plant height, which directly reflects growth rates and responses to environmental treatments.
This post details a workflow for using drone-based Structure from Motion (SfM) technology to estimate sorghum height and develop growth models based on Growing Degree Days (GDD).
For this study, we conducted experiments at the UC Davis experimental field during the summer of 2022. The field spanned 265 feet by 45 feet, arranged into 18 beds across 18 rows.
Data was collected using a DJI Phantom 4 Pro drone. The flight path was generated via Mission Planner and executed through Litchi software at a height of 5 meters above ground level.
Over the course of the season, 11 separate flight missions were conducted, capturing standard RGB images for SfM processing.
We processed the captured imagery using Agisoft Metashape 1.8.6. The processing pipeline involved aligning photos, building dense clouds, and generating orthomosaics and Digital Elevation Models (DEMs).
To extract plant height from the DEM, we utilized Python for automated plot segmentation. The cumulative probability distribution (CPD) of heights within each plot was calculated, and we recorded the 99th percentile height as the representative height for each genotype, minimizing the influence of outlier points.
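As a rough illustration of the percentile step, the sketch below uses synthetic plot heights (not the study's DEM data) to show why the 99th percentile is far more robust to spurious spikes than simply taking the maximum:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical per-plot heights from a DEM (metres): canopy around 1.2 m,
# plus a few sensor-noise spikes at 5.0 m standing in for outlier points.
heights = np.concatenate([rng.normal(1.2, 0.05, 995), np.full(5, 5.0)])

rep_height = np.percentile(heights, 99)  # representative plot height
# heights.max() would return 5.0 (the noise), while the 99th percentile
# stays near the true canopy height of ~1.2 m.
```

The same call, applied per segmented plot, yields one height value per genotype per flight date.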
The extracted height data was modeled against environmental data collected from the CIMIS Weather Station #6. We used a base temperature of 8°C to calculate Growing Degree Days (GDD).
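For reference, daily GDD with the 8 °C base temperature is just the average of the daily minimum and maximum temperatures minus the base, floored at zero. A minimal sketch (the temperature values below are hypothetical, not the CIMIS records):

```python
# Daily Growing Degree Days with base temperature 8 degrees C.
def daily_gdd(tmin, tmax, tbase=8.0):
    # Average of min/max temperature, minus the base; never negative.
    return max(0.0, (tmin + tmax) / 2.0 - tbase)

# Hypothetical (Tmin, Tmax) pairs in degrees C for three days
temps = [(12, 28), (10, 24), (6, 14)]
gdd_total = sum(daily_gdd(lo, hi) for lo, hi in temps)
# 12 + 9 + 2 = 23 accumulated GDD
```

Summing `daily_gdd` over the season gives the GDD axis used by the growth model below.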
The relationship between GDD and crop height was modeled using a sigmoidal (logistic) growth function:
\[y(GDD) = \frac{y_0\, y_{max}}{y_0 + (y_{max} - y_0)\exp(-k \cdot GDD)}\]

The model parameters $(y_0, y_{max}, k)$ are optimized so that the $R^2$ between the modeled values $\hat{y}_i$ and the observed values $y_i$ is maximized:
\[R^2 = 1 - \frac{\sum_{i=1}^n (y_i - \hat{y}_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2}\]

This optimization approach ensures that the model captures the characteristic S-shaped growth curve of the sorghum genotypes while minimizing residual error across the multi-temporal dataset.
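The fitting idea can be sketched in a few lines of numpy. The data below are synthetic and the coarse grid search stands in for the study's actual optimizer; the point is only that the best parameters are those maximizing $R^2$:

```python
import numpy as np

# Logistic growth model: height as a function of accumulated GDD.
def logistic_height(gdd, y0, ymax, k):
    return y0 * ymax / (y0 + (ymax - y0) * np.exp(-k * gdd))

def r_squared(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1 - ss_res / ss_tot

# Synthetic "observations": true curve plus measurement noise.
gdd = np.linspace(0, 1500, 12)
rng = np.random.default_rng(1)
obs = logistic_height(gdd, 5.0, 120.0, 0.005) + rng.normal(0, 2.0, gdd.size)

# Pick the (y0, ymax, k) combination with the highest R^2.
best = max(
    ((y0, ymax, k)
     for y0 in [2, 5, 10]
     for ymax in [100, 120, 140]
     for k in [0.003, 0.005, 0.008]),
    key=lambda p: r_squared(obs, logistic_height(gdd, *p)),
)
best_r2 = r_squared(obs, logistic_height(gdd, *best))
```

In practice a continuous optimizer (e.g. nonlinear least squares) replaces the grid, but the objective is the same.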
The developed model was validated using an independent dataset from North Carolina A&T State University (2011).
Finally, we applied the model to climate data from Kaolack, Senegal, to predict growth patterns in West African environments.
The model predicts that sorghum in this region will reach its maximum height approximately two months after planting, which aligns with local harvesting calendars.
According to the USDA International Production Assessment Division, sorghum in the Senegal region begins cultivation in mid-June and is harvested after September [3]. Therefore, Growing Degree Days were calculated using the average value of the minimum and maximum temperatures from June 15 to November 15.
This demonstrates the potential of combining drone-based phenotyping with biophysical modeling to support breeding programs in food-insecure regions.
Academic Attribution:
@article{yun2026sorghum,
title={Drone-based Sorghum Height Estimation and Plant Modeling},
author={Yun, Heesup},
journal={PLS123 Plant and Crop Systems Modeling, WQ2023, UCDavis},
year={2026},
url={https://heesup.github.io/assets/docs/2023WQ_PLS123_Heesup_FinalProject_Report.pdf}
}

Just in time, Planet Fitness opened in Davis recently, so it's a perfect time to see how AI could assist in its own version of physical AI by designing a personalized workout program.
I started by asking Google Gemini for a beginner-friendly routine:
User: I’m a beginner in fitness training. Please create a one-hour full-body workout program for Planet Fitness.
Gemini: Welcome to the fitness world! Planet Fitness is a fantastic place for beginners because it’s designed to be “judgment-free” and has plenty of user-friendly equipment. Since you’re just starting, this 60-minute Full-Body Routine focuses on building a foundation of strength and stamina using mostly machines…
Goal: Raise your heart rate and prepare your joints.
Goal: Target all major muscle groups. Aim for 2 sets of 12 repetitions for each exercise with 60s rest.
| Muscle Group | Exercise | Why? |
|---|---|---|
| Legs | Leg Press Machine | Safely works glutes, quads, and hamstrings. |
| Back | Lat Pulldown | Builds back strength and improves posture. |
| Chest | Chest Press Machine | A safer alternative to the bench press. |
| Shoulders | Shoulder Press Machine | Strengthens the deltoids. |
| Core | Abdominal Crunch Machine | Stabilizes the spine and builds core strength. |
| Lower Body | Seated Leg Curl | Specifically targets the hamstrings. |
Pro Tip: Look for the yellow QR codes on the machines. Scan them with your phone for a quick instructional video if you’re unsure about the setup.
The routine Gemini put together was surprisingly comprehensive. While I wasn't entirely sure which machines were available there, specifying "Planet Fitness" in the prompt allowed Gemini to tailor the program accordingly. I've been following it and am very satisfied with the results!
Google Gemini introduced a feature called Dynamic View. This allows users to generate fully functional, dynamic applications from simple, zero-shot prompts. I decided to test it using the same workout prompt.

I was impressed that it even went as far as including an image of the gym. You can explore the generated app via the link below: Explore the Gemini Dynamic View App
Since I couldn’t find the source code from the dynamic view, I also tried a similar task using Gemini’s Canvas feature. With Canvas, I first drafted the exercise program and then asked Gemini to transform it into a dedicated, interactive web page. The result looks like this:
The fast advancement of LLMs and AI continues to amaze me. Reflecting on my own experience, I spent an entire quarter learning Node.js to build a data visualization like this project: (Project Link), yet this AI-generated demo looks significantly more polished and took only seconds to create.
There is often discussion about AI replacing traditional jobs. In my view, AI will not take your job; rather, the individuals who are skilled with these AI tools will get the new opportunities. As a researcher in 2026, I see AI as a powerful tool, but ultimately, finding what needs to be done, setting objectives, and actual execution remain in the hands of the people using it, just as you still have to go to the gym yourself to build muscle.
This work was a collaborative effort with Kyung Hee University, Republic of Korea.
This paper presents a non-destructive framework for monitoring water-deficit responses in basil using deep learning. We developed a fusion model combining RGB, Depth, and Chlorophyll Fluorescence (CF) imaging with a 3D Convolutional Neural Network (3D-CNN).
Key findings include:
My opinion:
My Contribution:
You can access the full publication here: 👉 MDPI Agronomy - Full Article
Our team—Heesup Yun, Jonathan Berlingeri, Ethan Truong, and Nathan Chiu—secured the 1st Place win at the 2025 AIFS × Sony AI AgTech Challenge, successfully delivering a technical demo within the two-day hackathon timeframe. Equipped with a Raspberry Pi Zero and a Raspberry Pi AI Camera, we built WeedTrackr: Precision Weed Management System.
Weed management is a critical challenge facing modern agriculture, with profound economic and environmental implications:
Conventional methods rely on "blanket" treatments—equal amounts are applied to the entire field regardless of actual weed distribution. While precision equipment exists, it is expensive and built primarily for annual row crops. In vineyards and orchards, mowing is common for controlling weeds between rows, but weeds along the rows remain a challenge.
WeedTrackr is a low-cost AI weed monitoring system designed specifically for planning herbicide sprays in orchards and vineyards. It automates weed scouting and herbicide application planning during regular maintenance activities, saving time and money, and mitigating environmental impacts by reducing herbicide use by up to 90% through targeted application.
Simple 3-Step Process:
WeedTrackr’s software stack integrates affordable, off-the-shelf hardware with custom integration software.
The model was optimized for edge deployment. Using MCT (Model Compression Toolkit) for INT8 quantization and ONNX conversion to an IMX500 package, we reduced the model size by 75% while maintaining over 93% of the original accuracy.
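The MCT pipeline itself is more involved than a blog snippet, but the core idea behind the 75% size reduction—storing weights as 1-byte INT8 instead of 4-byte float32 via an affine quantization scheme—can be sketched as follows (illustrative only; this is not the MCT API):

```python
import numpy as np

# Affine INT8 quantization: map float weights onto [-128, 127] with a
# scale and zero-point, as in standard post-training quantization.
def quantize_int8(w):
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale) - 128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Hypothetical weight tensor spanning [-1, 1].
w = np.linspace(-1.0, 1.0, 256, dtype=np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# q takes 1/4 the storage of w (int8 vs float32) -> the ~75% reduction,
# while the round-trip error stays within one quantization step.
```

MCT additionally calibrates these scales per layer using representative data, which is how accuracy stays above 93% of the original.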
AI and robotics are finally accessible and scalable. As agricultural input costs (chemicals, labor, and fuel) climb and government regulations tighten, minimizing chemical buildup and preserving long-term biodiversity has never been more imperative.
By only spraying where it’s needed, WeedTrackr helps farmers:
The Raspberry Pi Zero isn’t typically powerful enough to handle deep learning right away. To make it work, I pushed the CPU and GPU limits through overclocking and tailored our image classification model for the Raspberry Pi AI Camera. This setup shifts the heavy lifting from the CPU to the camera’s dedicated AI accelerator, making it possible to run sophisticated models on such a small, low-power device.
We first set up the device with a 64-bit OS referring to this guide: Install 64 OS on Raspberry Pi zero 2 - Q-engineering.
Below is the evolution of our hardware configurations, utilizing resources like the MakeUseOf overclocking guide.
By default, the arm_freq is set to 1000 and gpu_freq to 300.
weed@raspberrypi:~ $ vcgencmd get_config int
... (shortened)
arm_freq=1000
core_freq=400
gpu_freq=300
sdram_freq=450
We increased the arm_freq to 1200 and core_freq to 500 for a noticeable performance bump without touching the voltage.
weed@raspberrypi:~ $ vcgencmd get_config int
... (shortened)
arm_freq=1200
core_freq=500
gpu_freq=300
sdram_freq=450
Finally, we pushed arm_freq to 1300, gpu_freq to 500, and sdram_freq to 500. We also configured over_voltage=5 and over_voltage_sdram=3 to keep the system stable.
weed@raspberrypi:~ $ vcgencmd get_config int
... (shortened)
arm_freq=1300
core_freq=500
gpu_freq=500
sdram_freq=500
over_voltage=5
over_voltage_sdram=3
These settings can be permanently applied in /boot/config.txt as follows:
[all]
arm_freq=1300
core_freq=500
gpu_freq=500
sdram_freq=500
over_voltage_sdram=3
# over_voltage is done by the governor.
# set the parameter to overrule its moderate choice.
over_voltage=5
enable_uart=1
dtoverlay=pi3-miniuart-bt
Overclocking improved VNC performance considerably, but it requires active cooling, so I added a heatsink and fan.
Getting the deep learning model to run optimally on the Raspberry Pi AI Camera involved several rounds of testing.
We initially explored installing TensorFlow Lite and experimented with a few guides for object detection and image classification, such as building the tflite-runtime explicitly for the Pi Zero:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install python3-tflite-runtime
We also set up a local Conda environment using Miniforge to manage dependencies cleanly. We reviewed the official TensorFlow Lite Raspberry Pi examples. To ensure the camera works, it’s also practical to check out the new libcamera-based Python library, picamera2.
To use the specific Sony IMX500 AI sensor effectively, the necessary tools are installed via:
sudo apt install imx500-all
Once the environment was correctly pointed towards the native Raspberry Pi OS libraries without Conda’s interference, everything ran flawlessly. We successfully executed our .rpk compiled model with:
python imx500_classification_demo_ucd.py --model weeed.rpk --softmax
You can check out our repository and code here.
This work addresses the critical need for high-resolution thermal imaging in agriculture by leveraging multimodal deep learning to enhance low-cost sensors.
Thermal imaging is a vital tool for measuring plant traits such as water stress and stomatal conductance. However, high-resolution thermal cameras are often cost-prohibitive for large-scale agricultural deployment. VisTA-SR is a deep learning-based framework designed to bridge this gap by improving both the temperature accuracy and image quality of consumer-grade thermal sensors.
Key technical contributions include:
My Contribution:
Our team—Robo-ag—competed in the inaugural Farm Robotics Challenge hosted by the AI Institute for Next Generation Food Systems (AIFS), farm-ng, and UC ANR. We were honored to receive the Elegance in Design Prize for our autonomous targeted pesticide application system.
Resource overuse is a significant challenge in modern agriculture. Traditional “blanket” spraying methods lead to:
In our target 10-acre vineyard in Napa County, we aimed to transition from broad application to precise, targeted control to save costs and protect the environment.
Utilizing the farm-ng Amiga robotic platform, we developed an autonomous spray applicator capable of delivering precise doses to specific plants.
Key Features:
The robot’s operation follows a robust task hierarchy:
To achieve high precision, we utilized GeoJSON-based application maps. The system tracks the robot’s current position and cross-references it with pre-defined spray points.
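A simplified sketch of that position-to-map lookup is below. The GeoJSON layout, property names, and 2-metre spray radius are illustrative assumptions, not our production code:

```python
import json
import math

# Hypothetical GeoJSON application map: points flagged for spraying
# during the scouting pass.
app_map = json.loads("""{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [-122.42, 38.30]},
     "properties": {"spray": true}}
  ]
}""")

def within_radius(robot_lon, robot_lat, lon, lat, radius_m=2.0):
    # Equirectangular approximation -- accurate enough over a few metres.
    dx = math.radians(lon - robot_lon) * math.cos(math.radians(lat)) * 6371000
    dy = math.radians(lat - robot_lat) * 6371000
    return math.hypot(dx, dy) <= radius_m

def should_spray(robot_lon, robot_lat):
    # Cross-reference the robot's position against all flagged spray points.
    return any(
        within_radius(robot_lon, robot_lat, *f["geometry"]["coordinates"])
        for f in app_map["features"]
        if f["properties"].get("spray")
    )
```

The real system feeds GNSS fixes into this check continuously and actuates the sprayer only while a flagged point is in range.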
By moving from blanket application to targeted spraying, our system can: