# Donte & Adi
Live telemetry and analysis for the car: sensor streams (with filtering), derived run metrics, and optional RaceGPT insights in a sidebar.
## Sensor views
- Speed time series, live value, and max value
- Power time series (calculated from current and voltage data)
- GPS location display with Google Maps
- Live steering angle, brake pressure, throttle, and per-wheel RPM
- Timestamps and stopwatch for lap and race timing
## Derived metrics
- Distance calculated by aggregating GPS data
- Energy Use calculated from power
- Efficiency, both instantaneous and averaged over a run
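The derived metrics above can be sketched as small pure functions. This is a minimal sketch, assuming telemetry arrives as `(timestamp, value)` pairs; the function and field names are illustrative, not the dashboard's actual API:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def total_distance_m(fixes):
    """Aggregate consecutive GPS fixes [(lat, lon), ...] into distance traveled."""
    return sum(haversine_m(*a, *b) for a, b in zip(fixes, fixes[1:]))

def power_w(current_a, voltage_v):
    """Instantaneous electrical power from current and voltage samples."""
    return current_a * voltage_v

def energy_j(samples):
    """Trapezoidal integration of (t_seconds, power_watts) samples into joules."""
    return sum((t2 - t1) * (p1 + p2) / 2
               for (t1, p1), (t2, p2) in zip(samples, samples[1:]))

def efficiency_m_per_j(distance_m, energy_joules):
    """Average efficiency over a run: meters traveled per joule used."""
    return distance_m / energy_joules if energy_joules else 0.0
```

Instantaneous efficiency follows the same idea applied to the latest speed and power samples rather than run totals.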
## RaceGPT
- Configurable manual or automatic LLM requests that analyze recent telemetry data and return verdicts on improving performance
## Running the Project
Create a `.env` file in the root directory (not under `/backend` or `/frontend`). See `.env.example` for more info.

Make sure Docker containers and volumes for this project are not already running. Then run

```shell
docker compose up --build
```

to get all containers running. The frontend client UI should be running on port 3000. The backend healthcheck endpoint is at the root of port 8000.

Integration with RaceGPT is done over a WebSocket connection to a machine running RaceGPT. Connect your machine to the RaceGPT host, and the dashboard should connect when built.
## Frontend Testing
Refer to `frontend/README.md` for more information.

**Bun Installation**: The frontend uses Bun instead of NodeJS as a package manager and runtime. The installation is linked here. Then run the following commands in the terminal to test/run the frontend in isolation. This will start the frontend development environment with HMR and Vite at port 5173.

```shell
cd frontend
bun dev
```

**Google Maps**: To get location data and Google Maps displaying properly while running only the frontend with `bun dev`, create a `.env` file in the `/frontend` directory. Follow the `.env.example` in `/frontend` and create the environment variables below.

```shell
VITE_GOOGLE_MAPS_API_KEY=<Your Google Maps API Key here>
VITE_GOOGLE_MAP_ID=<Your Google Maps Map ID from the Google Cloud console>
```

Instructions for getting your own `API_KEY` and `MAP_ID` are in `frontend/README.md`.
## Backend Troubleshooting
Refer to `backend/README.md`.

For the ROS2 subscriber, first get the ROS2 publisher IP address from:

```shell
ssh cev@<daq tailscale ip> "docker exec ts-authkey-container tailscale ip" | head -n1
```

Make sure this matches the IP in the `docker-compose.yml` file. We set it manually, not via an environment variable, because the publisher IP should not change.
Connect your machine to another machine running RaceGPT so the dashboard can reach the RaceGPT websocket service and request analysis on live telemetry snapshots.
As mentioned above, there are two modes for requesting responses in the sidebar.
- Manually request LLM responses (5s buffer between responses)
- Automatically request and display LLM responses based on a set frequency >5s
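Both modes share the same minimum spacing between requests. Here is a minimal sketch of that gating logic, assuming a monotonic clock can be injected; the class and function names are illustrative, not the dashboard's actual code:

```python
import time

MIN_INTERVAL_S = 5.0  # minimum gap between LLM requests (the 5 s buffer)

class RequestGate:
    """Permits a request only if MIN_INTERVAL_S has elapsed since the last one."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last = None

    def try_request(self):
        now = self._clock()
        if self._last is not None and now - self._last < MIN_INTERVAL_S:
            return False  # still inside the 5 s buffer; drop the request
        self._last = now
        return True

def auto_period_s(requested_period_s):
    """Automatic mode: clamp the configured request period to the >5 s minimum."""
    return max(requested_period_s, MIN_INTERVAL_S)
```

Injecting the clock keeps the throttle testable without real waiting; manual mode calls `try_request()` on each button press, automatic mode schedules requests every `auto_period_s(...)` seconds.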
```
ROS2 Sensors → Backend (Python + ROS2) → WebSocket Stream → Frontend (React + TypeScript + Bun)
                     ↓
             RaceGPT Integration
```
RaceGPT is integrated as an on-demand analysis layer.
- The frontend sends telemetry history to the backend via a POST request
- The backend forwards this data over a USB WebSocket connection to the RaceGPT machine (`/ws/analyze`)
- RaceGPT processes the data and returns a verdict/analysis
- The backend relays the response back to the frontend
- The frontend displays the result in the sidebar (manual or automatic modes)
- The frontend can trigger ROSbag recording via `/bag` endpoints
- The backend communicates with the DAQ machine through Tailscale
- ROSbag files are stored remotely for later analysis and replay
