Drone-based Sorghum Height Estimation and Plant Modeling (2026-03-15)

Sorghum is a critical crop for food production in arid climates due to its natural drought and heat tolerance. To improve yields in these challenging environments, researchers need high-throughput methods to monitor plant development and optimize cultivars. One of the most significant indicators of yield is plant height, which directly reflects growth rates and responses to environmental treatments.

This post details a workflow for using drone-based Structure from Motion (SfM) technology to estimate sorghum height and develop growth models based on Growing Degree Days (GDD).

Drone Data Acquisition

For this study, we conducted experiments at the UC Davis experimental field during the summer of 2022. The field spanned 265 feet by 45 feet, arranged into 18 beds across 18 rows.

Data was collected using a DJI Phantom 4 Pro drone. The flight path was generated via Mission Planner and executed through Litchi software at a height of 5 meters above ground level.

Drone Flight Path
Figure 1: Autonomous drone flight path over the sorghum experimental field, optimized for high-overlap imagery at low altitude.

Over the course of the season, 11 separate flight missions were conducted, capturing standard RGB images for SfM processing.


3D Reconstruction and Height Extraction

We processed the captured imagery using Agisoft Metashape 1.8.6. The processing pipeline involved aligning photos, building dense clouds, and generating orthomosaics and Digital Elevation Models (DEMs).
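
Metashape also exposes a Python API that can script this pipeline end to end. The sketch below follows the 1.8-era API names as I recall them (e.g., buildDenseCloud, which newer releases rename); the photo path is illustrative, so treat this as a starting point rather than our exact processing script.

import Metashape  # Agisoft Metashape Pro 1.8.x Python API (names assumed)

doc = Metashape.Document()
doc.save("sorghum_project.psx")
chunk = doc.addChunk()

# Add the drone RGB photos captured during one flight mission.
chunk.addPhotos(["/data/flight_01/IMG_0001.JPG"])  # illustrative path

chunk.matchPhotos(downscale=1)     # feature matching across photos
chunk.alignCameras()               # solve camera poses (SfM)
chunk.buildDepthMaps(downscale=2)  # per-image depth estimation
chunk.buildDenseCloud()            # dense point cloud (1.8-era name)
chunk.buildDem(source_data=Metashape.DenseCloudData)          # DEM
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)  # orthomosaic
doc.save()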

Metashape Processing Result
Figure 2: Reconstructed 3D orthomosaic and Digital Elevation Model (DEM) of the experimental site, facilitating organ-level height extraction.

To extract plant height from the DEM, we utilized Python for automated plot segmentation. The Cumulative Probability Distribution (CPD) of heights within each plot was calculated. We recorded the 99th percentile height as the representative height for each genotype, minimizing noise from outlier points.
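
As a minimal sketch of this height-extraction step, assuming the DEM is already loaded as a NumPy array and a boolean mask marks one plot (the ground elevation and all numeric values below are illustrative, not our field data):

import numpy as np

def plot_height_p99(dem, plot_mask, ground_elevation):
    """Representative plant height for one plot: 99th percentile of the
    canopy height (DEM minus ground level) inside the plot mask."""
    canopy_height = dem[plot_mask] - ground_elevation
    canopy_height = canopy_height[canopy_height > 0]  # drop soil/noise pixels
    return np.percentile(canopy_height, 99)

# Example with synthetic data: a 100x100 DEM patch and a rectangular plot.
dem = np.random.normal(loc=1.2, scale=0.2, size=(100, 100))  # elevations (m)
mask = np.zeros_like(dem, dtype=bool)
mask[20:80, 20:80] = True
print(f"P99 height: {plot_height_p99(dem, mask, ground_elevation=0.3):.2f} m")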

Drone Validation
Figure 3: Comparison of drone-measured vs. ground-truth sorghum height across 224 plots, demonstrating a strong linear correlation (R² = 0.845).

Modeling Growth via Growing Degree Days

The extracted height data was modeled against environmental data collected from the CIMIS Weather Station #6. We used a base temperature of 8°C to calculate Growing Degree Days (GDD).
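
The GDD computation itself is simple to sketch in Python: each day contributes max(0, (Tmin + Tmax)/2 - Tbase) with Tbase = 8 °C, accumulated over the season (the temperature arrays below are synthetic placeholders, not CIMIS records):

import numpy as np

T_BASE = 8.0  # base temperature for sorghum used in this study (deg C)

def cumulative_gdd(t_min, t_max, t_base=T_BASE):
    """Cumulative Growing Degree Days from daily min/max temperatures.
    Daily GDD = max(0, (Tmin + Tmax)/2 - Tbase), summed over the season."""
    daily_mean = (np.asarray(t_min) + np.asarray(t_max)) / 2.0
    daily_gdd = np.clip(daily_mean - t_base, 0.0, None)
    return np.cumsum(daily_gdd)

# Example: ten days of synthetic temperature records (deg C).
t_min = [12, 13, 11, 14, 15, 16, 14, 13, 15, 17]
t_max = [28, 30, 27, 31, 33, 34, 32, 30, 33, 35]
print(cumulative_gdd(t_min, t_max)[-1])  # season-to-date GDD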

The relationship between GDD and crop height was modeled using a sigmoidal (logistic) growth function, where $y_0$ is the initial height, $y_{max}$ is the maximum attainable height, and $k$ is a growth-rate constant, together forming the parameter vector $p = (y_0, y_{max}, k)$:

\[y(GDD, p) = \frac{y_0 y_{max}}{y_0 + (y_{max} - y_0)\exp(-k \cdot GDD)}\]

The model parameters are optimized so that the $R^2$ between the modeled values $\hat{y}_i$ and the observed values $y_i$ is maximized:

\[R^2 = 1 - \frac{\sum_{i=1}^n (y_i - \hat{y}_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2}\]

This optimization approach ensures that the model captures the characteristic S-shaped growth curve of the sorghum genotypes while minimizing the residual error across the multi-temporal dataset.
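
Concretely, the fit can be done with standard least-squares tooling, since minimizing the residual sum of squares maximizes $R^2$ for a fixed dataset. A minimal SciPy sketch (the observations and initial guesses below are illustrative):

import numpy as np
from scipy.optimize import curve_fit

def logistic_height(gdd, y0, y_max, k):
    """Sigmoidal growth model: height as a function of cumulative GDD."""
    return y0 * y_max / (y0 + (y_max - y0) * np.exp(-k * gdd))

# Synthetic multi-temporal observations: (cumulative GDD, height in meters).
gdd_obs = np.array([200, 400, 600, 800, 1000, 1200, 1400])
h_obs = np.array([0.15, 0.40, 0.90, 1.60, 2.10, 2.35, 2.40])

# Least-squares fit; p0 gives rough initial guesses for (y0, y_max, k).
popt, _ = curve_fit(logistic_height, gdd_obs, h_obs, p0=[0.1, 2.5, 0.005])

h_pred = logistic_height(gdd_obs, *popt)
ss_res = np.sum((h_obs - h_pred) ** 2)
ss_tot = np.sum((h_obs - np.mean(h_obs)) ** 2)
print(f"fitted (y0, y_max, k) = {popt}, R^2 = {1 - ss_res / ss_tot:.3f}")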

Growth Predictions
Figure 4: Modeled sorghum growth curves across different genotypes using signature GDD profiles and optimized sigmoidal parameters.

Validation and Application to Different Climates

The developed model was validated using an independent dataset from North Carolina A&T State University (2011).

Independent Validation
Figure 5: Model validation on an independent dataset (North Carolina, 2011), confirming model reliability across different geographic regions (R² = 0.690).

Finally, we applied the model to climate data from Kaolack, Senegal, to predict growth patterns in West African environments.

Kaolack Climate Data
Figure 6: Year-round climate profile for the Kaolack region, showing average temperature and rainfall patterns.
Senegal Sorghum Growing Season
Figure 7: Seasonal timeline for sorghum cultivation in Senegal (June through November: Planting → Mid-Season → Harvest), derived from USDA IPAD regional data [3].

The model predicts that sorghum in this region will reach its maximum height approximately two months after planting, which aligns with local harvesting calendars.

According to the USDA International Production Assessment Division, sorghum in the Senegal region begins cultivation in mid-June and is harvested after September [3]. Therefore, Growing Degree Days were calculated using the average value of the minimum and maximum temperatures from June 15 to November 15.

Senegal Sorghum Height Prediction
Figure 8: Predicted sorghum height values of selected cultivars in the Senegal region during the main growing season (June - November).

This demonstrates the potential of combining drone-based phenotyping with biophysical modeling to support breeding programs in food-insecure regions.



References

  • 1. Brian Bailey. Plant and Crop Systems Modeling. 2023.
  • 2. Ashwin Kumar Devudigari. Sweet Sorghum Production Based on Fertilizer Rates, Varieties and Use of Grain Sorghum Model. 2011.
  • 3. USDA IPAD. Senegal Sorghum Area, Yield and Production. Retrieved March 22, 2023, from USDA IPAD.
  • 4. World Meteorological Organization. World Weather Information Service. Retrieved March 22, 2023, from WMO World Weather.

BibTeX Citation

Download Full Project Report (PDF)

Academic Attribution:

@article{yun2026sorghum,
  title={Drone-based Sorghum Height Estimation and Plant Modeling},
  author={Yun, Heesup},
  journal={PLS123 Plant and Crop Systems Modeling, WQ2023, UCDavis},
  year={2026},
  url={https://heesup.github.io/assets/docs/2023WQ_PLS123_Heesup_FinalProject_Report.pdf}
}

Physical (muscle) AI (2026-01-01)

Happy New Year! Physical AI has been getting a lot of attention these days, so I'm joining the trend with my own version of physical AI: my muscles 💪. (Yes, it's a joke. To see what Physical AI actually is, check out this.) AI can build workout programs for me, but it can't build muscles yet!

Just in time, Planet Fitness recently opened in Davis, so it was the perfect moment to see how AI could assist with my version of physical AI by designing a personalized workout program.

Designing a Workout for a Beginner with LLMs

I started by asking Google Gemini for a beginner-friendly routine:

User: I’m a beginner in fitness training. Please create a one-hour full-body workout program for Planet Fitness.

Gemini: Welcome to the fitness world! Planet Fitness is a fantastic place for beginners because it’s designed to be “judgment-free” and has plenty of user-friendly equipment. Since you’re just starting, this 60-minute Full-Body Routine focuses on building a foundation of strength and stamina using mostly machines…


Phase 1: Warm-up (10 Minutes)

Goal: Raise your heart rate and prepare your joints.

  • Treadmill or Elliptical: 10 minutes at a brisk walking pace.

Phase 2: Strength Training (35 Minutes)

Goal: Target all major muscle groups. Aim for 2 sets of 12 repetitions for each exercise with 60s rest.

Muscle Group | Exercise | Why?
Legs | Leg Press Machine | Safely works glutes, quads, and hamstrings.
Back | Lat Pulldown | Builds back strength and improves posture.
Chest | Chest Press Machine | A safer alternative to the bench press.
Shoulders | Shoulder Press Machine | Strengthens the deltoids.
Core | Abdominal Crunch Machine | Stabilizes the spine and builds core strength.
Lower Body | Seated Leg Curl | Specifically targets the hamstrings.

Pro Tip: Look for the yellow QR codes on the machines. Scan them with your phone for a quick instructional video if you’re unsure about the setup.

Phase 3: Cardio Finisher (10 Minutes)

  • Stationary Bike or StairClimber: 10 minutes at moderate intensity.

Phase 4: Cool Down & Stretching (5 Minutes)

  • Hamstring Stretch: Reach for your toes.
  • Chest Stretch: Use a wall or pole to rotate away.
  • Overhead Triceps Stretch: Reach down your back and gently push your elbow.

Beginner Tips for Success

  1. Consistency > Intensity: 30 minutes 3 times a week is better than one 2-hour session.
  2. Stay Hydrated: Use the touchless filling stations.
  3. 30-Minute Express Circuit: A great alternative for when you’re short on time.

The routine Gemini put together was surprisingly comprehensive. While I wasn’t entirely sure which machines were available there, specifying “Planet Fitness” in the prompt allowed Gemini to tailor the program well. I’ve been following it and am very satisfied with the results!

Building a Web App with LLMs

Google Gemini introduced a feature called Dynamic View. This allows users to generate fully functional, dynamic applications from simple, zero-shot prompts. I decided to test it using the same workout prompt.

Dynamic View Screenshot

I was impressed that it even went as far as including an image of the gym. You can explore the generated app via the link below: Explore the Gemini Dynamic View App

Since I couldn’t access the source code behind the Dynamic View app, I also tried a similar task using Gemini’s Canvas feature. With Canvas, I first drafted the exercise program and then asked Gemini to transform it into a dedicated, interactive web page. The result looks like this:

Outro

The fast advancement of LLMs and AI continues to amaze me. Reflecting on my own experience, I spent an entire quarter learning Node.js to build a data visualization like this project: (Project Link), yet this AI-generated demo looks significantly more polished and took only seconds to create.

There is often discussion about AI replacing traditional jobs. In my view, AI will not take your job; rather, the individuals who are skilled with these AI tools will gain new opportunities. As a researcher in 2026, I see AI as a powerful tool, but ultimately, identifying what needs to be done, setting objectives, and actually executing remain in the hands of the people using it, just as you still have to go to the gym yourself to build muscle.

Journal Paper: Multimodal Optical Biosensing and 3D-CNN Fusion for Basil Water Stress Phenotyping (2025-12-24)

I am pleased to announce the publication of our latest paper in Agronomy, titled “Multimodal Optical Biosensing and 3D-CNN Fusion for Phenotyping Physiological Responses of Basil Under Water Deficit Stress”.

This work was a collaborative effort with Kyung Hee University, Republic of Korea.

  • Title: Multimodal Optical Biosensing and 3D-CNN Fusion for Phenotyping Physiological Responses of Basil Under Water Deficit Stress
  • Authors: Yu-Jin Jeon, Hyoung Seok Kim, Taek Sung Lee, Soo Hyun Park, Heesup Yun, and Dae-Hyun Jung
  • Journal: Agronomy (MDPI), 2026, 16(1), 55
  • Publication Date: December 24, 2025
  • DOI: 10.3390/agronomy16010055

Research Summary

Basil Water Stress Analysis
Overview of the 3D-CNN model architecture and multimodal data fusion process.

This paper presents a non-destructive framework for monitoring water-deficit responses in basil using deep learning. We developed a fusion model combining RGB, Depth, and Chlorophyll Fluorescence (CF) imaging with a 3D Convolutional Neural Network (3D-CNN).

Key findings include:

  • The 3D-CNN model achieved a remarkable 96.9% accuracy on the water-stress classification task (Normal, Resistance, and Recovery stages).
  • The approach effectively captures spatial and temporal-spectral features, outperforming traditional 2D-CNN and machine learning models.
  • The results demonstrate the scalability of multimodal information fusion for precision irrigation and agricultural monitoring.

My opinion:

  • The study suggests that adopting more advanced fusion techniques, such as Transformers, could further enhance the precision of physiological response phenotyping.

My Contribution:

  • Provided editorial feedback and proofread the manuscript to ensure technical clarity and academic rigor.

Read the Full Paper

You can access the full publication here: 👉 MDPI Agronomy - Full Article


AIFS Sony Tech Challenge (2025-05-03)

1st Place Winner - 2025 AIFS × Sony AI AgTech Challenge

Our team—Heesup Yun, Jonathan Berlingeri, Ethan Truong, and Nathan Chiu—secured the 1st Place win at the 2025 AIFS × Sony AI AgTech Challenge, successfully delivering a technical demo within the two-day hackathon timeframe. Equipped with a Raspberry Pi Zero and a Raspberry Pi AI Camera, we built WeedTrackr, a precision weed management system.

WeedTrackr Teaser Sony AI AgTech Challenge 1st Place Certificate

The Problem We’re Solving

Weed management is a critical challenge facing modern agriculture, with profound economic and environmental implications:

  • $33 billion in annual U.S. crop yield losses due to weed competition.
  • $6 billion in additional control measure costs each year.
  • 70 million pounds of herbicides are lost to imprecise application and overuse each year.

Conventional methods rely on “blanket” treatments—equal amounts are applied to the entire field regardless of the actual weed distribution. While precision equipment exists, it is expensive and built primarily for annual row crops. In vineyards and orchards, mowing is common for controlling weeds between rows, but weeds within the rows remain a challenge.

Our Solution: WeedTrackr

WeedTrackr is a low-cost AI weed monitoring system designed specifically for planning herbicide sprays in orchards and vineyards. It automates weed scouting and herbicide application planning during regular maintenance activities, saving time and money, and mitigating environmental impacts by reducing herbicide use by up to 90% through targeted application.

Simple 3-Step Process:

  1. Mount: Mount WeedTrackr onto your mower or tractor.
  2. Power on: Activate the image collection system by powering on WeedTrackr.
  3. Work: That’s it! Continue with planned activities. WeedTrackr will work autonomously in the background while you do!

Technical Workflow (Detect → Classify → Plan)

WeedTrackr’s software stack integrates affordable, off-the-shelf hardware with custom integration software.

  • Detect: We use a Raspberry Pi Zero with the AI-enabled camera and a Sony Spresense GPS module to capture geolocated images as you move through your field.
  • Classify (Edge AI): The onboard AI model processes images in real-time to identify and categorize weeds (broadleaf vs. grass) and distinguish them from the soil. Our custom weed classification model is built on a modified MobileNetV2 architecture, trained on a diverse dataset (500+ broadleaf, 500+ grass, and 1,000+ soil/background images).
WeedNet Architecture

The model was optimized for edge deployment. Using MCT (Model Compression Toolkit) for INT8 quantization and ONNX conversion to an IMX500 package, we reduced the model size by 75% while maintaining over 93% of the original accuracy; a rough sketch of this step appears after this list.

Model Quantization
  • Plan (Web Dashboard): Built with React + TypeScript, the frontend dashboard uses the classified data to generate intelligent spray route planning and variable rate application maps. It visualizes GPS-accurate weed distribution heat maps (separating grass and broadleaf weeds), historical treatment analysis, and estimated chemical savings compared to blanket treatment.
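
For readers who want a concrete starting point, below is a minimal sketch of the classifier and the INT8 quantization step. It is not our exact training code: the dataset pipeline is omitted, the representative data here is a random placeholder, and the MCT entry point name is an assumption that may differ across MCT versions.

import numpy as np
import tensorflow as tf
import model_compression_toolkit as mct  # Sony Model Compression Toolkit

NUM_CLASSES = 3  # broadleaf, grass, soil/background (per the post)
INPUT_SHAPE = (224, 224, 3)

# Modified MobileNetV2: pretrained backbone plus a small classification head.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, include_top=False, weights="imagenet")
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
# ... compile and fine-tune on the labeled weed/soil images here ...

# INT8 post-training quantization with MCT. A real representative dataset
# should iterate over field images; random data is only a placeholder.
def representative_data_gen():
    for _ in range(32):
        yield [np.random.rand(1, *INPUT_SHAPE).astype(np.float32)]

# NOTE: the entry point name is an assumption; recent MCT releases expose
# a Keras post-training quantization function under mct.ptq.
quantized_model, quant_info = mct.ptq.keras_post_training_quantization(
    model, representative_data_gen)

# The quantized model is then converted into an IMX500 package (.rpk)
# with Sony's converter tooling, as described above.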

Why Now & Impact

AI and robotics are finally accessible and scalable. As agricultural input costs (chemicals, labor, and fuel) climb and government regulations tighten, minimizing chemical buildup and preserving long-term biodiversity has never been more imperative.

By only spraying where it’s needed, WeedTrackr helps farmers:

  • Stay compliant with evolving environmental standards
  • Keep food cleaner by reducing herbicide over-usage
  • Protect land health
  • Reduce manual labor by automating weed scouting
  • Boost crop yields with fewer weeds and better plant performance

Technical Details

Raspberry Pi Zero Setup
Raspberry Pi Zero 2 W with IMX500 AI Camera

The Raspberry Pi Zero isn’t typically powerful enough to handle deep learning right away. To make it work, I pushed the CPU and GPU limits through overclocking and tailored our image classification model for the Raspberry Pi AI Camera. This setup shifts the heavy lifting from the CPU to the camera’s dedicated AI accelerator, making it possible to run sophisticated models on such a small, low-power device.

We first set up the device with a 64-bit OS referring to this guide: Install 64 OS on Raspberry Pi zero 2 - Q-engineering.

Below is the evolution of our hardware configurations, utilizing resources like the MakeUseOf overclocking guide.

1. Stock Configuration

By default, the arm_freq is set to 1000 and gpu_freq to 300.

weed@raspberrypi:~ $ vcgencmd get_config int
... (shortened)
arm_freq=1000
core_freq=400
gpu_freq=300
sdram_freq=450

2. First Overclock

We increased the arm_freq to 1200 and core_freq to 500 for a noticeable performance bump without touching the voltage.

weed@raspberrypi:~ $ vcgencmd get_config int
... (shortened)
arm_freq=1200
core_freq=500
gpu_freq=300
sdram_freq=450

3. Pushing Further

Finally, we pushed arm_freq to 1300, gpu_freq to 500, and sdram_freq to 500. We also configured over_voltage=5 and over_voltage_sdram=3 to keep the system stable.

weed@raspberrypi:~ $ vcgencmd get_config int
... (shortened)
arm_freq=1300
core_freq=500
gpu_freq=500
sdram_freq=500
over_voltage=5
over_voltage_sdram=3

These settings can be permanently applied in /boot/config.txt as follows:

[all]
arm_freq=1300
core_freq=500
gpu_freq=500
sdram_freq=500
over_voltage_sdram=3
# over_voltage is done by the governor.
# set the parameter to overrule its moderate choice.
over_voltage=5
enable_uart=1
dtoverlay=pi3-miniuart-bt

Overclocking noticeably improved VNC performance, but it requires active cooling, so I added a heatsink and fan.

Raspberry Pi Heatsink and Fan
Active cooling setup with heatsink and fan for the overclocked Raspberry Pi Zero

Running the AI Model on the Raspberry Pi Camera

Getting the deep learning model to run optimally on the Raspberry Pi AI Camera involved several rounds of testing.

1. Exploring TF Lite & Miniforge

We initially explored installing TensorFlow Lite and experimented with a few guides for object detection and image classification, such as building the tflite-runtime explicitly for the Pi Zero:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install python3-tflite-runtime

We also set up a local Conda environment using Miniforge to manage dependencies cleanly. We reviewed the official TensorFlow Lite Raspberry Pi examples. To ensure the camera works, it’s also practical to check out the new libcamera-based Python library, picamera2.
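
As a quick sanity check that the camera stack works, a minimal picamera2 still capture looks like this (the output filename is arbitrary):

from picamera2 import Picamera2

# Capture one still frame to verify the libcamera/picamera2 stack.
picam2 = Picamera2()
config = picam2.create_still_configuration()
picam2.configure(config)
picam2.start()
picam2.capture_file("test_frame.jpg")  # arbitrary output path
picam2.stop()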

2. Resolving Environment Conflicts

The Miniforge environment ended up conflicting with the system’s camera stack, so we fell back to the native Raspberry Pi OS Python. To use the Sony IMX500 AI sensor, the necessary firmware and tools are installed via:

sudo apt install imx500-all

3. Execution

Once the environment correctly pointed to the native Raspberry Pi OS libraries, without Conda’s interference, everything ran flawlessly. We successfully executed our compiled .rpk model with:

python imx500_classification_demo_ucd.py --model weeed.rpk --softmax

Running AI Model on Camera
Demo of the classification model running on the Raspberry Pi AI camera

You can check out our repository and code here.

[CVPR 2024 AgVision Workshop] VisTA-SR: Improving the Accuracy and Resolution of Low-Cost Thermal Imaging Cameras for Agriculture (2024-06-18)

I am pleased to share that our research, “VisTA-SR: Improving the Accuracy and Resolution of Low-Cost Thermal Imaging Cameras for Agriculture”, was selected for an Oral Presentation at the CVPRW 2024 AgVision Workshop (Workshop on Vision4Ag).

This work addresses the critical need for high-resolution thermal imaging in agriculture by leveraging multimodal deep learning to enhance low-cost sensors.

  • Title: VisTA-SR: Improving the Accuracy and Resolution of Low-Cost Thermal Imaging Cameras for Agriculture
  • Authors: Heesup Yun, Sassoum Lo, Christine H. Diepenbrock, Brian N. Bailey, and J. Mason Earles
  • Venue: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024
  • Presentation Type: Oral Presentation

Research Summary

VisTA-SR Framework Overview
Overview of the VisTA-SR framework: Visual-Thermal Alignment and Super-Resolution enhancement.

Thermal imaging is a vital tool for measuring plant traits such as water stress and stomatal conductance. However, high-resolution thermal cameras are often cost-prohibitive for large-scale agricultural deployment. VisTA-SR is a deep learning-based framework designed to bridge this gap by improving both the temperature accuracy and image quality of consumer-grade thermal sensors.

Key technical contributions include:

  • Multimodal Fusion: Leverages high-resolution RGB images as a reference to guide the super-resolution (SR) process of low-resolution thermal data.
  • Dual-Stage Network: Features an alignment stage for spatial consistency followed by a sophisticated SR network to restore sharpness and detail.
  • Radiometric Calibration: Includes a complete pipeline for temperature accuracy enhancement, validated through field experiments in garbanzo bean crops.
  • Improved Phenotyping: Enables precise, organ-level (leaf, stem, fruit) analysis using cameras that cost a fraction of industrial alternatives.

My Contribution:

  • Lead author; developed the model architecture, conducted field data collection/validation, and performed the comparative analysis.


Farm Robotics Challenge 2023 (2023-05-20)

Elegance in Design Prize - 2023 Farm Robotics Challenge

Our team—Robo-ag—competed in the inaugural Farm Robotics Challenge hosted by the AI Institute for Next Generation Food Systems (AIFS), farm-ng, and UC ANR. We were honored to receive the Elegance in Design Prize for our autonomous targeted pesticide application system.

Robo-ag in Vineyard Spray Map UI
Left: Robo-ag robot in a vineyard trial. Right: The real-time targeted spray control interface.

The Problem: Herbicide & Fertilizer Waste

Resource overuse is a significant challenge in modern agriculture. Traditional “blanket” spraying methods lead to:

  • 70 million pounds of pesticide active ingredients wasted annually in the U.S.
  • $1.4 billion in economic losses due to incorrect application.
  • Environmental damage including eutrophication, harmful algal blooms, and soil acidification.

In our target 10-acre vineyard in Napa County, we aimed to transition from broad application to precise, targeted control to save costs and protect the environment.

Our Solution: Targeted Spray System

Utilizing the farm-ng Amiga robotic platform, we developed an autonomous spray applicator capable of delivering precise doses to specific plants.

Key Features:

  • GPS-Guided Path Following: Leveraging onboard sensors and a line-following algorithm to navigate vineyard rows.
  • Dynamic Spray Control: A self-pressurizing tank system with solenoid-controlled nozzles that activate only at defined points.
  • Safety First: Integrated weather shields and automated system checks (GPS, pressure, battery) to ensure reliable operation.

Technical Workflow (Navigate → Detect → Spray)

The robot’s operation follows a robust task hierarchy:

  1. Start & Test: Remote activation and automated subsystem diagnostics.
  2. Navigate: The Amiga uses GPS and line-following to maintain alignment within vineyard rows.
  3. Targeted Spray: The system cross-references real-time GPS coordinates with a pre-defined application map (JSON-based) to trigger the spray nozzles.
Robo-ag System Architecture
System architecture diagram showing the integration of GPS, control computer, and spray system.
Robo-ag Workflow Task Hierarchy
Detailed task hierarchy for the autonomous spraying process.

GPS Navigation & Data Mapping

To achieve high precision, we utilized GeoJSON-based application maps. The system tracks the robot’s current position and cross-references it with pre-defined spray points.
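
To make the cross-referencing concrete, here is a minimal sketch of the matching logic under assumed conventions: spray targets stored as GeoJSON Point features, a trigger radius in meters, and hypothetical actuator calls. The field names, filename, and radius are illustrative, not our exact implementation.

import json
import math

TRIGGER_RADIUS_M = 0.5  # assumed tolerance around each spray point

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def load_spray_points(path):
    """Read Point features from a GeoJSON application map."""
    with open(path) as f:
        fc = json.load(f)
    return [tuple(feat["geometry"]["coordinates"])  # GeoJSON order: (lon, lat)
            for feat in fc["features"]
            if feat["geometry"]["type"] == "Point"]

def should_spray(lat, lon, spray_points):
    """True if the current GPS fix is within the trigger radius of any target."""
    return any(haversine_m(lat, lon, p_lat, p_lon) <= TRIGGER_RADIUS_M
               for p_lon, p_lat in spray_points)

# Example: poll the GPS and open the solenoid when a target is reached.
# points = load_spray_points("application_map.geojson")  # hypothetical filename
# if should_spray(current_lat, current_lon, points):
#     open_solenoid()  # hypothetical actuator call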

GeoJSON Spray Points
GeoJSON configuration for target spray points, mapping geolocated coordinates to application conditions.
GPS Trajectory
Visualized GPS trajectory of the Amiga robot during a vine-row trial, showing autonomous navigation performance.

Impact

By moving from blanket application to targeted spraying, our system can:

  • Reduce herbicide and fertilizer usage by up to 90% in many scenarios.
  • Minimize manual labor required for scouting and application.
  • Improve crop health by ensuring optimal nutrient delivery without over-exposure.