I'm currently working on Vision-based Autonomous Robot Systems 🤖
I hope to become a ✨ Data Scientist & Robotics Developer ✨ someday!


## 🚀 Featured Projects

πŸ† AGV Vision Pick-and-Place System

Real-time 3D vision-based autonomous pick-and-place using mobile robot (AGV) + robotic arm

A complete mobile manipulation system that combines AGV navigation, 3D object detection, and precise robotic manipulation for autonomous pick-and-place operations.

| Main Features | Technologies Used |
| --- | --- |
| Autonomous Navigation | ROS Noetic, SLAM, AMCL |
| 3D Object Detection & Pose Estimation | YOLOv11, RealSense D435 |
| Mobile Manipulation | myAGV, MyCobot 280, MoveIt |
| Hand-Eye Calibration | Eye-to-Hand, OpenCV, PGO |
| Real-time Coordinate Transform | Point Cloud Processing, FastAPI |

👉 View Project Repository
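The core of a pipeline like the one above is turning a detected pixel plus its depth reading into a grasp target in the robot base frame. Below is a minimal, hypothetical sketch of that coordinate transform: the camera intrinsics and the eye-to-hand extrinsic are illustrative placeholders, not the project's calibrated values.

```python
import numpy as np

# Illustrative pinhole intrinsics (placeholder values, not a real calibration).
FX, FY = 615.0, 615.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def deproject(u, v, depth_m):
    """Back-project pixel (u, v) with metric depth into the camera frame."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m, 1.0])  # homogeneous point

# Eye-to-hand extrinsic: camera frame -> robot base frame.
# In a real system this comes from hand-eye calibration; here it is a
# placeholder (camera 0.5 m above the base, looking straight down).
T_base_cam = np.array([
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.5],
    [0.0,  0.0,  0.0, 1.0],
])

def pixel_to_base(u, v, depth_m):
    """Pixel + depth -> 3D grasp target expressed in the robot base frame."""
    p_cam = deproject(u, v, depth_m)
    return (T_base_cam @ p_cam)[:3]

# An object detected at the image center, 0.5 m from the camera:
target = pixel_to_base(320, 240, 0.5)
```

With the placeholder extrinsic above, an object at the image center at 0.5 m depth lands exactly at the base-frame origin, which makes the transform easy to sanity-check.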


### 🔥 Vision Manipulator

A robot system that understands natural language, detects objects with YOLO, and executes grasp-and-place

A natural-language-driven manipulation system integrating GPT, computer vision, and industrial robot control.

| Main Features | Technologies Used |
| --- | --- |
| Natural Language Command Parsing | OpenAI GPT API |
| Object & Keypoint Detection | YOLOv11, HRNet |
| 3D Transformation & Calibration | Eye-in-Hand, OpenCV |
| Robot Control Pipeline | ROS2, UR10, MoveIt2 |

👉 View Project Repository
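As a rough illustration of the natural-language front end of such a system, the sketch below parses a structured JSON reply (which a GPT system prompt could be instructed to produce) into a typed command for a downstream grasp pipeline. The schema, field names, and the `GraspCommand` type are assumptions for illustration; no live API call is made.

```python
import json
from dataclasses import dataclass

@dataclass
class GraspCommand:
    action: str       # e.g. "grasp" or "place"
    target: str       # object label to hand to the YOLO detector
    destination: str  # named drop-off location, if any

def parse_llm_reply(reply_text: str) -> GraspCommand:
    """Turn the model's JSON reply into a typed command for the robot pipeline.

    Assumes the system prompt instructed the model to answer with a JSON
    object of the form {"action": ..., "target": ..., "destination": ...}.
    """
    data = json.loads(reply_text)
    return GraspCommand(
        action=data["action"],
        target=data["target"],
        destination=data.get("destination", ""),
    )

# Example reply for "pick up the red cup and put it on the tray":
reply = '{"action": "grasp", "target": "red cup", "destination": "tray"}'
cmd = parse_llm_reply(reply)
```

Validating the model output into a typed structure like this keeps the vision and control stages decoupled from the free-form language layer.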


## 🧠 Currently Interested In

- **Mobile Manipulation & Multi-Robot Systems**
  - AGV navigation with SLAM (gmapping, AMCL)
  - Coordinated control between mobile base and manipulator
- **Robot Vision & 3D Perception**
  - YOLO-based object detection & pose estimation
  - Hand-eye calibration (Eye-in-Hand / Eye-to-Hand)
  - Point cloud processing with RealSense depth cameras
- **Robot Control & Motion Planning**
  - ROS / ROS2 system integration
  - MoveIt / MoveIt2 motion planning
  - Real-time robot control pipelines
- **AI & Robotics Integration**
  - GPT + Vision + Robot control fusion
  - Real-time vision-language-action models
  - Autonomous task execution systems
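Coordinated control between a mobile base and a manipulator, as listed above, comes down to composing homogeneous transforms between frames (map, AGV base, arm mount). A minimal numpy sketch with illustrative frame poses, not real calibration data:

```python
import numpy as np

def make_tf(yaw, tx, ty, tz):
    """Homogeneous transform: rotation about z by `yaw`, then translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Illustrative frames: map -> AGV base (e.g. from AMCL localization),
# and base -> arm mount (a fixed mounting offset on the AGV deck).
T_map_base = make_tf(np.pi / 2, 1.0, 2.0, 0.0)
T_base_arm = make_tf(0.0, 0.1, 0.0, 0.3)

# An object detected 0.2 m in front of the arm mount, in the arm frame:
p_arm = np.array([0.2, 0.0, 0.0, 1.0])

# Chain the transforms to express the object in the map frame.
p_map = T_map_base @ T_base_arm @ p_arm
```

The same chaining generalizes to any number of links; in ROS this bookkeeping is what tf2 automates.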

## 🌱 I'm currently learning

- Multi-robot coordination and task planning
- Advanced hand-eye calibration techniques (PGO, Tsai-Lenz)
- Real-time 3D object pose estimation
- ROS action servers for complex robot behaviors
- Vision-based autonomous navigation systems

## 🎓 Education

- **Konkuk University** – Graduate School
  M.S. in Big Data and Applied Statistics
  Research focus on computer vision, AI robotics, and autonomous systems

- **Kyonggi University**
  B.A. in Applied Statistics (Major)
  B.A. in Convergent Data Engineering (Double Major)


πŸ› οΈ Tech Stack

Robotics

ROS ROS2 MoveIt

Computer Vision & AI

Python OpenCV PyTorch YOLO

Development Tools

Git Linux Docker


## 📊 GitHub Stats


## 📫 Contact

