Robot intelligence platform for Kumamoto University
Built the infrastructure and LLM-driven robot integration system for a ¥20.9M international research project on socially assistive robotics.
Context
miXai-learn is a ¥20.9M KAKENHI-funded international research project developing socially assistive robots for embodied learning. Research teams are distributed across three Japanese universities with partner institutions in Europe.
The project needed two things built simultaneously: a reliable research platform to capture and analyse learning sessions, and a robot integration system that could respond intelligently to what students were actually doing in real time.
Challenge
The codebase lived in GitLab with no CI/CD. Research data — session photography, conversation transcripts, robot gesture logs — was collected manually with no analytics layer. There was no production infrastructure. Researchers across multiple universities were contributing to a shared codebase with no version control standards.
The robotics challenge was harder: socially assistive robots needed to select appropriate gestures in response to student learning events. The selection logic had to be configurable by researchers without engineering involvement, and the system had to stream hardware telemetry in real time to the analytics platform.
Approach
Infrastructure first. Designed and built on-premises server infrastructure for the LA-ReflecT research platform — Docker Swarm behind Cloudflare and nginx. All three partner universities run on this infrastructure, self-hosted to meet the project's data residency requirements.
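The shape of such a deployment can be sketched as a Swarm stack file. This is a minimal illustration, not the project's actual configuration — service names, the registry URL, and replica counts are all hypothetical:

```yaml
# Hypothetical Swarm stack — names and values are illustrative only.
version: "3.8"
services:
  reverse-proxy:
    image: nginx:stable
    ports: ["80:80", "443:443"]
    deploy:
      replicas: 2            # survive a single-node failure
  platform:
    image: registry.example.org/la-reflect:latest  # hypothetical registry
    deploy:
      replicas: 3
      update_config:
        order: start-first   # start new containers before stopping old ones
networks:
  default:
    driver: overlay          # spans all nodes in the Swarm
```

Swarm's built-in overlay networking and rolling-update support are what make a small on-premises cluster viable without a heavier orchestrator.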
Robot integration system. Built the architecture so robots can respond to student learning events. An LLM selects contextually appropriate gestures. A configurable event trigger system lets researchers define which gestures and responses the robot produces for each learning event, without touching the underlying code. Hardware telemetry streams robot state back to the analytics platform in real time.
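The core idea — researcher-editable trigger config plus model-driven selection — can be sketched in a few lines of Python. Everything here is hypothetical (event names, gesture names, the config shape), and a random choice stands in for the LLM call so the sketch runs without API access:

```python
import json
import random  # stand-in for the LLM; the real system would call a model here

# A researcher-editable trigger config (hypothetical shape): each learning
# event maps to candidate gestures the robot may produce. Changing behaviour
# means editing this data, not the code.
TRIGGER_CONFIG = json.loads("""
{
  "quiz_correct":   {"candidates": ["nod", "thumbs_up", "clap"]},
  "quiz_incorrect": {"candidates": ["head_tilt", "encouraging_wave"]},
  "long_silence":   {"candidates": ["wave", "lean_forward"]}
}
""")

def select_gesture(event: str, context: str) -> str:
    """Pick a gesture for a learning event.

    In the real system an LLM chooses among the configured candidates
    given session context; random.choice stands in for that call.
    """
    entry = TRIGGER_CONFIG.get(event)
    if entry is None:
        return "idle"  # unknown events fall back to a neutral pose
    return random.choice(entry["candidates"])
```

Keeping the event-to-candidates mapping in data is what lets researchers reconfigure robot behaviour per study without engineering involvement.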
Reflection evidence layer. Built the evidence capture layer across the full stack — session photography, conversation transcripts, and gesture timelines all stored and linked per session in the LA-ReflecT analytics platform. Piloted in live research sessions before handover.
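The per-session linking can be illustrated with a small data model. This is a sketch under assumed names (the actual schema is not described in this write-up); it shows the key property — photos, transcript turns, and gestures all keyed to one session and mergeable into a single time-ordered timeline:

```python
from dataclasses import dataclass, field

@dataclass
class GestureEvent:
    timestamp: float  # seconds from session start
    gesture: str

@dataclass
class SessionEvidence:
    """All evidence for one learning session, linked by session_id (hypothetical schema)."""
    session_id: str
    photos: list[str] = field(default_factory=list)  # file paths
    transcript: list[tuple[float, str, str]] = field(default_factory=list)  # (t, speaker, text)
    gestures: list[GestureEvent] = field(default_factory=list)

    def timeline(self) -> list[tuple[float, str, str]]:
        """Merge transcript turns and gestures into one time-ordered list."""
        events = [(t, "speech", f"{who}: {text}") for t, who, text in self.transcript]
        events += [(g.timestamp, "gesture", g.gesture) for g in self.gestures]
        return sorted(events)
```

Linking every evidence type to a timestamp within the session is what makes reflection possible afterwards: a researcher can replay exactly what the student said and what the robot did, in order.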
DevOps culture across the partnership. Moved the codebase from GitLab to GitHub, introduced CI/CD pipelines with GitHub Actions, and set version control standards now used by all partner universities across Japan and Europe.
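The "automated validation on every commit" pattern looks roughly like the following GitHub Actions workflow. This is a generic sketch — job names and the `make` targets are illustrative, not the project's actual pipeline:

```yaml
# Illustrative validation workflow; targets and names are hypothetical.
name: validate
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: make lint   # hypothetical target
      - name: Test
        run: make test   # hypothetical target
```

Running the same checks on every push and pull request is what turns a version control standard from a convention into something the platform enforces for every contributor.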
Outcome
- On-premises research platform serving distributed teams across Japan and Europe
- Robots selecting and executing gestures in response to real-time learning events
- Full reflection evidence layer: photography, transcripts, and gesture timelines captured and linked per session
- Researchers contributing via standardised workflows for the first time, with automated validation on every commit
- Academic team mentored in DevOps culture — the infrastructure and processes are now maintained internally
Technologies
Docker Swarm, Cloudflare, nginx, GitHub Actions, LLM integration (gesture selection), real-time telemetry streaming, OpenTelemetry.