IMUs & Motion: Limits and Challenges

This part of the project didn’t “work” in the way I originally expected. I wasn’t able to extract clean, stable orientation or lean angle data from the IMUs.

One of the things I’ve always found fascinating about modern super-bikes is how deeply they rely on spatial sensing.

Today, bikes come equipped with IMUs (Inertial Measurement Units) that continuously measure how the bike is accelerating and rotating in three-dimensional space. This data is what enables systems like traction control, wheelie control, and cornering ABS to work the way they do.

On bikes like the BMW S1000RR, Yamaha R1M, or Ducati V4R, IMUs are a core part of the electronics stack.

I wanted to explore the same idea—but strictly for logging and observation.


What an IMU actually measures (and what it doesn’t)

An IMU typically contains:

  • a 3-axis accelerometer
  • a 3-axis gyroscope
  • sometimes a magnetometer

Together, these sensors measure:

  • linear acceleration along X, Y, and Z axes
  • angular velocity (rotation rate) around X, Y, and Z axes

What an IMU does not directly measure is:

  • absolute orientation
  • lean angle
  • pitch or yaw as stable angles

Those values must be computed from raw sensor data using math and filtering.

This distinction turned out to be very important.



My initial IMU plan

My original idea was to place multiple IMUs across the bike to see how different sections experience motion.

The plan was:

  • one IMU in the front section
  • one IMU in the mid section
  • one IMU in the rear section

Each IMU would:

  • have its own microcontroller
  • stream data wirelessly to the logging system

The goal wasn’t sensor fusion between IMUs—it was comparison and observation.


Hardware choice: MPU6050 (and why)

I used the MPU6050 IMU module.

This wasn’t because it was ideal, but because:

  • it was available
  • it’s cheap
  • it’s widely documented
  • it integrates easily over I²C

The MPU6050 includes:

  • a 3-axis accelerometer
  • a 3-axis gyroscope
  • no magnetometer

That last point matters a lot.


How the IMUs were distributed

To reduce controller sprawl, IMUs were grouped logically:

  • Front section:
    • one MPU6050
    • shared controller with the front suspension ultrasonic sensor
  • Mid section:
    • two MPU6050 sensors on one controller
    • added mainly for redundancy and comparison
  • Rear section:
    • one MPU6050
    • integrated into the rear brake input module

This wasn’t perfect architecture, but it kept controller count manageable and wiring tidy.



First reality check: raw IMU data is not orientation

After test runs, I quickly realized I had misunderstood something fundamental.

I initially assumed that an IMU like the MPU6050 could directly give me:

  • lean angle
  • pitch
  • roll

That’s not how it works.

Accelerometer data

The accelerometer measures specific force, not orientation. At rest, gravity shows up as roughly 1g distributed across the axes according to how the sensor is tilted. During motion, acceleration from braking, bumps, and vibration completely dominates that gravity component.

You can multiply the normalized readings by g to get acceleration in m/s², but that doesn’t magically give orientation.

Gyroscope data

The gyroscope measures angular velocity, not angle. To get orientation, you must integrate angular velocity over time—and integration drifts.

Without correction, drift grows quickly.
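The standard workaround for this pair of weaknesses is a complementary filter: integrate the gyro for fast response, then gently pull the estimate toward the accelerometer's gravity reference. A minimal sketch, not the code that ran on the bike; the axis mapping, sample values, and blend factor are illustrative:

```python
import math

def accel_roll(ay, az):
    """Roll angle (degrees) implied by the gravity vector alone.
    Noisy during motion, but drift-free at rest."""
    return math.degrees(math.atan2(ay, az))

def complementary_filter(roll, gyro_x_dps, ay, az, dt, alpha=0.98):
    """Blend fast-but-drifting gyro integration with the
    slow-but-stable accelerometer reference."""
    gyro_estimate = roll + gyro_x_dps * dt   # integrate angular rate
    accel_estimate = accel_roll(ay, az)      # gravity reference
    return alpha * gyro_estimate + (1 - alpha) * accel_estimate

# Example: near-stationary bike with slight sensor noise.
# Despite a small constant gyro bias, the estimate stays bounded
# instead of drifting away.
roll = 0.0
for _ in range(100):
    roll = complementary_filter(roll, gyro_x_dps=0.1, ay=0.02, az=0.99, dt=0.01)
```

With gyro integration alone, that 0.1 °/s bias would drift without bound; the accelerometer term caps it near the true angle.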


Why orientation estimation is hard

To compute stable orientation (roll, pitch, yaw), you need:

  • gyroscope integration (fast, but drifts)
  • accelerometer reference (noisy during motion)
  • magnetometer reference (for yaw correction)

The MPU6050 lacks a magnetometer, which means:

  • yaw cannot be stabilized
  • long-term orientation drifts are unavoidable

This is why OEM IMUs:

  • are carefully calibrated
  • use high-grade sensors
  • rely on sophisticated sensor fusion algorithms


Mounting matters more than expected

Another thing that became obvious very quickly: IMU mounting is critical.

Poor mounting leads to:

  • vibration-induced noise
  • resonance artifacts
  • axis misalignment

Even small flex or movement in the mount can dominate the signal. This made it very difficult to extract meaningful insight from raw data.

At this stage, the IMUs were doing what they were supposed to do—but interpreting the data meaningfully was a different problem entirely.


Trying lean angle using Android’s Rotation Vector

To get something usable, I experimented with lean angle estimation using the Android Rotation Vector sensor.

Important clarification:

  • The Android Rotation Vector is not a physical sensor
  • It is a virtual sensor created by sensor fusion
  • It combines:
    • accelerometer
    • gyroscope
    • magnetometer (on the phone)

When the bike was stationary, this worked reasonably well. Lean angle readings looked correct.

But once the bike was in motion—especially at higher speeds—the readings became wildly inaccurate. Lean angle would drift by tens of degrees.


Why the Android sensor drifts so badly on a bike

This wasn’t a bug. It was a design limitation.

Android’s sensor fusion algorithms are designed for:

  • phones in pockets
  • phones in hands
  • walking, running, casual movement

They are not designed for:

  • high vibration
  • sustained acceleration
  • aggressive rotation
  • a device rigidly mounted to a motorcycle at speed

At high speeds, accelerometer data is dominated by dynamic forces, confusing the gravity reference. The fusion algorithm loses its frame of reference and orientation drifts badly.

This made it clear why OEM systems do not rely on general-purpose phone sensors.



What I learned from the IMU experiments

Even though the data was noisy and hard to interpret, this experiment was extremely valuable.

I learned that:

  • IMUs do not give angles “for free”
  • orientation requires careful fusion and filtering
  • sensor grade matters
  • mounting quality matters
  • math matters a lot

It also gave me a new appreciation for how complex systems like traction control really are. They don’t just “read lean angle”—they estimate it under brutal conditions.


Why I kept the IMUs anyway

Despite the limitations, I kept the IMU data.

Why?

  • relative motion trends were still visible
  • spikes during braking and acceleration were clear
  • vibration signatures were interesting
  • it exposed the real challenges of spatial sensing

Even noisy data can teach you something—especially when you understand why it’s noisy.


What I learned from this

Implementing spatial sensing—even just for logging—forced me to confront:

  • sensor physics
  • signal processing
  • data fusion
  • real-world noise

It made it clear why IMU-based intervention systems take years of development and testing.

DIY GSXR Dashboard

Once I could reliably log sensor data, store it, export it, and analyze it using tools like Pandas, it felt like the right time to take the next step.

Looking at plots after a ride was useful—but it was also disconnected from the riding experience itself. I wanted to be able to see some of the data live, while riding, in a way that actually made sense.

That naturally led to the idea of building an instrument cluster, or dashboard.


Why I wanted a dashboard in the first place

The goal of the dashboard wasn’t to show everything.

Some parameters were:

  • Too noisy
  • Too complex
  • Better suited for offline analysis

Those would stay in the background and simply be logged.

The dashboard needed to surface only the most useful, rider-relevant information, without overwhelming the user.

That meant thinking not just like an engineer, but like a rider.


Looking for design inspiration

Before writing any code, I spent time looking at existing dashboard designs.

I downloaded images of:

  • Modern superbike dashboards
  • TFT instrument clusters
  • Racing dashboards and telemetry screens

At the same time, I sketched a few ideas of my own to explore layout and hierarchy.

One design kept standing out: the 2025 BMW S1000RR dashboard.

It uses:

  • A large TFT display
  • A clean, high-contrast color scheme
  • Clear prioritization of information
  • Visual elements instead of raw numbers where possible

It gives the rider what they need—without shouting.

That made it a great reference point.



Constraints of my platform

The S1000RR dashboard is a dedicated hardware unit, deeply integrated into the bike’s electronics via CAN bus. My setup was very different.

  • No CAN bus
  • No factory ECU integration
  • Wireless sensor system
  • Prototype-level hardware

I didn’t want to build a physical dashboard unit at this stage. Instead, I decided to use something I already had: a smartphone.


Using a smartphone as the dashboard hardware

I chose to run the dashboard on an Android smartphone (Samsung Galaxy M31).

That decision came with some advantages:

  • High-resolution display
  • Touch input
  • Built-in power management
  • Easy deployment and iteration

It also meant the dashboard had to be implemented as a native Android app.

Since I already had experience with Android Studio, this felt like the most practical path forward.


Designing the UI with Figma

Before writing any Android code, I focused on the UI.

I developed the assets for the dashboard UI in Figma, which I used to:

  • Design individual widgets
  • Experiment with layout
  • Adjust spacing and hierarchy
  • Think about how information flows visually

This made a huge difference. Being able to design visually first helped avoid a lot of trial-and-error later in code.

I wasn’t trying to copy the S1000RR dashboard, but I was trying to understand why it works.

One key lesson was restraint:

A good dashboard doesn’t show everything—it shows the right things.



Choosing what to display (and what not to)

After some iteration, I settled on displaying:

  • Tyre temperature parameters
  • Brake disc temperature parameters
  • RPM
  • Speed
  • Suspension state
  • Gear indicator
  • Lap timer
  • Clock
  • Brake usage indicators
  • Sensor error codes (for my system, not the bike ECU)

Anything that didn’t directly help the rider stayed out of view and was only logged in the background.


Custom widgets and visual elements

Many of the UI elements needed to be custom.

For example:

  • RPM required an arc-style widget that changed width dynamically
  • Temperature values worked better as bars rather than raw numbers
  • Suspension state needed a visual indicator instead of a static value

Some early versions used static images to represent suspension movement, but that didn’t feel right. I wanted elements that moved and responded.

This pushed me deeper into custom widget design.


Connecting the dashboard to the data logger

The dashboard didn’t talk directly to sensor modules. Instead, it communicated with the data logging engine.

The data logging engine:

  • Ran as a Python program
  • Collected and logged all sensor data
  • Exposed a UDP server on port 9100

The protocol was simple:

  • The dashboard (or any UDP client) sends a "REQ" message
  • The logger responds with a single, aggregated data frame
  • The dashboard parses the frame and updates the UI

This design had a few advantages:

  • Loose coupling between systems
  • Easy debugging from a laptop or another device
  • One unified data snapshot per request
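A dashboard-side poll can be sketched in a few lines. Only port 9100 and the "REQ" message come from the actual protocol; the logger address and the JSON frame shape here are assumptions:

```python
import json
import socket

def poll_logger(host="192.168.43.1", port=9100, timeout=1.0):
    """Send a "REQ" message and return the aggregated data frame,
    or None if the logger doesn't answer in time.
    (Host IP and JSON payload format are placeholders.)"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(b"REQ", (host, port))
        payload, _ = sock.recvfrom(4096)
        return json.loads(payload)
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Because any UDP client can speak this protocol, the same three lines of request logic work from a laptop for debugging.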


Software gating and lap timer logic

One of the more interesting problems was implementing a lap timer without adding new hardware controls.

Instead of building a dedicated control module, I reused existing inputs:

  • Front brake signal
  • Rear brake signal
  • Pass light trigger

The logic worked like this:

  • Press both brakes → countdown begins
  • After countdown → dashboard enters lap mode
  • Front brake + pass light → start lap
  • Rear brake + pass light → stop lap

Using the pass light prevented accidental triggering during normal riding.

This kind of software gating turned out to be surprisingly powerful. It allowed fairly complex behavior using very simple inputs.
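The gating rules can be written as a tiny state machine. This is an illustrative reconstruction, not the actual dashboard code, and it skips the countdown step:

```python
class LapTimer:
    """Hypothetical reconstruction of the brake/pass-light gating."""

    def __init__(self):
        self.mode = "idle"   # idle -> armed -> running
        self.lap_start = None
        self.last_lap = None

    def update(self, front_brake, rear_brake, pass_light, now):
        if self.mode == "idle" and front_brake and rear_brake:
            self.mode = "armed"                    # both brakes: enter lap mode
        elif self.mode == "armed" and front_brake and pass_light:
            self.lap_start = now                   # front brake + pass light: start lap
            self.mode = "running"
        elif self.mode == "running" and rear_brake and pass_light:
            self.last_lap = now - self.lap_start   # rear brake + pass light: stop lap
            self.mode = "armed"
```

Requiring the pass light in both start and stop transitions is what keeps ordinary braking from ever touching the timer.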

Lap timer demo video: YouTube


Handling sensor errors

Another dashboard feature I added was sensor error reporting.

If a sensor failed or stopped reporting, the dashboard could show:

  • Which module was affected
  • That something was wrong

This made diagnosing issues much easier, especially during test rides.


Learning restraint in dashboard design

As the dashboard evolved, I started removing things.

At one point, I even had a GSX-R logo on the screen—but over time it felt like clutter. It didn’t help the rider, so it had to go.

This process reinforced a key lesson:

If a UI element doesn’t help in the moment, it doesn’t belong on the dashboard.

That philosophy clearly came from studying well-designed OEM dashboards like the S1000RR.


Frontend and backend on the same device

In the final setup:

  • The data logging engine ran as a backend Python process
  • The dashboard ran as a foreground Android app
  • Both lived on the same device

Structurally, it wasn’t that different from a typical software system:

  • Backend collects and serves data
  • Frontend renders it for the user

The difference was that the “backend” was talking to real hardware mounted on a motorcycle.


Why the dashboard mattered

Building the dashboard changed how the project felt.

It turned:

  • Abstract plots into real-time feedback
  • Logged data into actionable awareness
  • A sensor experiment into something closer to a real system

It also made clear just how much thought goes into human-machine interfaces on modern motorcycles.

First Sensor Data Log Test Run

At this point in the project, most of the sensors were finally in place.

That included:

  • Brake disc temperature sensors
  • Tyre temperature sensors
  • GPS data from the rear section
  • Suspension movement using ultrasonic ranging
  • Engine coolant temperature
  • Input sensing from the front brake, rear brake, and headlight switch

With all of that hardware installed and communicating, it was finally time to answer a simple question:

What does the data actually look like when you ride the bike?


How the system was powered and networked

Before getting into the data itself, it’s important to explain how everything was running during the test ride, because this setup played a big role in how stable the system turned out to be.

All sensor modules were powered from a single 5V power rail, supplied by a separate power bank. This was a deliberate design choice. I wanted the entire sensing and logging system to be electrically isolated from the bike’s main electrical system.

That isolation had a few advantages:

  • No risk of interfering with the bike’s ECU or wiring
  • Reduced electrical noise from the bike
  • Easier debugging when something went wrong

Once the power bank was switched on, all sensor modules booted automatically.

Because the system was fully wireless:

  • My phone acted as a WiFi hotspot
  • The phone was also running the data logging engine
  • All sensor modules connected directly to the phone

This effectively turned the phone into the central hub of the system, handling networking, data collection, and storage at the same time.


In practice, this setup worked surprisingly well. All sensors were within range of the phone, and I never experienced WiFi dropouts during the ride. Every module stayed connected and streamed data continuously.

Looking back, isolating power and keeping networking simple probably saved me from a lot of intermittent and hard-to-debug problems.


The first real data logging run

The first test run itself wasn’t anything extreme. It was simply a ride from home to my workplace under normal riding conditions.

At the time, I wasn’t entirely sure how I would handle the data once it came in. I had already written multiple versions of a Python-based logging engine, and the earlier versions stored data in CSV format.

That’s what I used for the first run.

The ride went smoothly. The sensors logged. The system didn’t crash.

Then I opened the data file.



Realizing the data was too large to handle locally

The CSV file was massive.

There was no realistic way to:

  • Scroll through it comfortably
  • Inspect it manually
  • Make sense of it directly on my laptop

That’s when I started thinking about moving the analysis somewhere with more computing power.

The obvious answer was the cloud.


A quick and messy data pipeline (that still worked)

The workflow I came up with wasn’t elegant, but it got the job done:

  1. Convert the CSV data into JSON
  2. Upload the converted data to Firebase
  3. Load the data into Google Colab
  4. Explore and visualize it using Python
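Step 1 needs nothing beyond the standard library. A toy version of the conversion (the column names are hypothetical, and the real log had far more rows and columns):

```python
import csv
import io
import json

# A tiny stand-in for the ride log CSV.
csv_data = io.StringIO(
    "timestamp,front_disc_temp,rear_disc_temp\n"
    "0.0,35.2,33.1\n"
    "1.0,36.8,33.4\n"
)

# Convert each CSV row into a dict of floats, then serialize
# the whole list as JSON, ready for upload.
records = [
    {k: float(v) for k, v in row.items()}
    for row in csv.DictReader(csv_data)
]
payload = json.dumps(records)
```

On a multi-megabyte ride log this row-by-row conversion is exactly the slow step the workflow suffered from.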

I honestly don’t remember why Firebase was the first thing that came to mind, but it worked.

The downside was obvious:

  • Conversion took time
  • Uploading the large dataset was slow
  • The workflow felt fragile and unsustainable

Still, once the data was available in Google Colab, I could start exploring.



Seeing the data for the first time

This was the moment where everything started to feel real.

Inside Google Colab, I loaded the dataset and began plotting different signals:

  • Brake disc temperature over time
  • Suspension movement during braking
  • Responses during acceleration
  • General trends across the ride

Nothing fancy—mostly basic plotting using pandas and matplotlib, tools I had only lightly touched back in college.

But seeing data generated by hardware I built, from a bike I rode, plotted in front of me was incredibly satisfying.



Discovering a major flaw: time synchronization

As exciting as it was, something didn’t look right.

I noticed that:

  • Different sensors produced different numbers of data frames
  • Some data streams were much denser than others
  • Events didn’t line up cleanly across sensors

Up until this point, I had made a bad assumption.

I assumed that because all sensor modules booted at roughly the same time, their data would naturally be synchronized.

That wasn’t true.

Some modules:

  • Ran faster loops
  • Had more processing overhead
  • Sent data at different rates

There was no shared clock.


Rethinking the logging approach

This forced me to rethink how the logging engine worked.

Two ideas came up:

Time-slot based logging

Instead of logging whenever data arrived, I could:

  • Define a fixed time window (for example, once per second)
  • Sample the latest values from all sensors
  • Store them together as a single frame

This would force alignment.

Switching to JSONL

Instead of CSV → JSON conversion, I could:

  • Log directly as JSON Lines (JSONL)
  • Append one structured record per time slot
  • Upload a single file
  • Load it directly into Google Colab

I tried this approach, and it worked far better.
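Combined, the two ideas reduce to a latest-value cache that gets snapshotted once per time slot. A simplified sketch with hypothetical module names:

```python
import json

# Latest-value cache: each sensor module overwrites its own entry
# as packets arrive, regardless of how fast its loop runs.
latest = {}

def on_packet(module_id, data):
    latest[module_id] = data

def write_slot(fileobj, slot_time):
    """Once per time slot, snapshot every sensor's latest value
    into a single JSONL record (one JSON object per line)."""
    record = {"t": slot_time, **latest}
    fileobj.write(json.dumps(record) + "\n")
```

Modules can keep transmitting at whatever rate they like; alignment happens at write time, not arrival time.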



When everything finally lined up

Once the data was time-aligned, everything clicked.

Now I could clearly see:

  • Brake input events
  • Immediate suspension response
  • Brake disc temperature rising
  • Tyre temperature reacting more slowly

I could brake hard and watch the system respond:

  • Brake signal toggled
  • Suspension compressed
  • Brake disc temperature climbed

These are some of the plots from when I was exploring the data in segments, such as the temperature data or the IMU data. There is also a brake input plot, which is purely binary: 1 when the brake is not engaged and 0 when it is engaged.


Learning the limits of the data

One thing became clear quickly: some runs were simply too short.

Tyre temperature, in particular, doesn’t change much unless:

  • You ride longer
  • You push harder
  • You sustain load

That was fine. This wasn’t about perfect results. It was about understanding what mattered and what didn’t.

This test run was a turning point.

For the first time, the entire loop worked:

  • Hardware → data logging
  • Data logging → cloud analysis
  • Analysis → insight

Even with a messy pipeline and imperfect sensors, the system worked.

I could build hardware, ride the bike, record data, and understand what was happening.

That alone made the project feel worth it.

Logging GPS and Brake Inputs

By the time I had several sensors working reliably, I started to realize something slightly ironic:
even though I was already capturing a lot of data, I actually needed more parameters.

Not because the project wasn’t complex enough already—but because the data I had didn’t always explain why things were happening.

That realization pushed me to focus more heavily on the rear section of the bike.


Why the rear section became the focus

The rear section made sense as a place to expand for a few reasons:

  • There was physical space to mount additional electronics
  • It already hosted temperature sensors
  • It was a logical place to integrate GPS and brake-related data
  • It could act as a “hub” for several related measurements

Instead of scattering more single-purpose modules everywhere, I wanted to experiment with grouping functionality.



Designing a more modular rear electronics unit

The first new rear module I planned needed to do several things:

  • Monitor rear brake disc temperature
  • Integrate a GPS module
  • Allow for future expansion (including a possible speed sensor)
  • Be detachable and reprogrammable
  • Use connectors rather than hard-wired connections

In my head, this module started to resemble a very crude version of an ECU-style unit—not in function, but in philosophy. Something you could unplug, reflash, and reinstall without disturbing the rest of the system.


Adding brake signal monitoring

The second rear module focused on brake signal monitoring.

What I wanted here was simple but important:

  • Detect front brake activation
  • Detect rear brake activation

This module used:

  • An ESP8266 controller
  • Two 12V relays connected to the brake light circuits
  • Logic-level outputs (1 or 0)

There was no brake pressure sensing, no analog finesse—just a clean digital signal telling me when braking occurred.

That alone was extremely valuable.

By having brake inputs, I could now correlate:

  • Braking events
  • Suspension dive
  • Brake disc temperature rise
  • Tyre temperature changes


Adding an IMU to the rear section

While working on the brake signal module, I also decided to integrate an MPU6050 IMU into the rear section.

The idea was to:

  • Capture motion data closer to the rear of the bike
  • Compare it with IMU data from the front or mid section
  • See how different parts of the bike experience movement differently

This wasn’t about sensor fusion yet—it was about observing differences.


GPS integration (simple, but useful)

For GPS, I kept things intentionally basic.

I used a u-blox NEO-6M GPS module, which is:

  • Cheap
  • Widely available
  • Easy to integrate
  • Relatively slow in refresh rate

I knew upfront that the refresh rate would be limited, so this wasn’t going to give me high-resolution speed or position data. But it still had value:

  • Location context
  • Rough speed reference
  • Time alignment with other data streams

To mount it, I designed a small rear wing that kept the GPS module exposed. It also happened to look kind of cool, which was a bonus.



Keeping communication consistent

Just like the other sensor units, these rear modules:

  • Used ESP8266 controllers (Wemos D1 form factor)
  • Connected wirelessly to the sensor network
  • Sent data to the central logging system

By keeping the communication model consistent, integration was straightforward. New modules could come online without major changes to the rest of the system.


The ESP8266 everywhere problem

At this point, almost everything was using an ESP8266—and that was both good and bad.

On the plus side:

  • Small footprint
  • Easy to program
  • Built-in WiFi
  • Cheap and replaceable

On the downside, I started to notice something worrying.

I had:

  • A dedicated front brake disc sensor controller
  • A dedicated front IMU and suspension controller
  • A dedicated engine coolant and pass-light trigger module
  • Multiple rear modules

Just in the front section alone, I was already running three separate microcontrollers.

The system was working, but the module count was exploding.


Realizing the need for consolidation

This rear-section experiment taught me an important lesson:

You can make many small, dedicated modules—but complexity grows fast.

In an ideal world, I would have:

  • One larger module per section
  • Multiple sensor connectors per module
  • Fewer microcontrollers
  • Cleaner wiring
  • More uniform design

But there was a trade-off.

If I stopped to redesign everything properly, I would risk falling into analysis paralysis.

Monitoring Suspension using Ultrasound

After working through temperature sensing and system architecture, the next area I wanted to explore was suspension behavior.

Modern superbikes equipped with electronic suspension systems—such as dynamically damped suspension—can actively adjust and monitor suspension in real time. My bike doesn’t have any of that. It runs a standard mechanical suspension with no electronics involved.

That didn’t mean the suspension wasn’t doing interesting things. It just meant I couldn’t see them.

So the goal here wasn’t control or tuning. It was observation.


What I wanted to learn from the suspension

I was interested in understanding how the suspension behaves in everyday riding conditions:

  • How much does it compress under hard braking?
  • What happens during acceleration?
  • How does it behave during steady cruising?
  • How does rough road surface affect movement?

In short, I wanted to monitor compression and rebound, not intervene or adjust anything.



Why I avoided mechanical measurement

The most direct way to measure suspension travel is with mechanical linkages or linear position sensors. I ruled that out very early.

Mechanical setups:

  • Add complexity
  • Require precise mounting
  • Can interfere with moving parts
  • Are fragile in a vibration-heavy environment

I wanted something non-contact, simple, and safe to experiment with.

That’s what led me to ultrasonic distance sensors.


How ultrasonic ranging works (simple explanation)

Ultrasonic ranging is based on a very straightforward idea.

An ultrasonic sensor:

  1. Emits a short burst of high-frequency sound (ultrasound)
  2. That sound travels through the air until it hits an object
  3. The sound reflects back to the sensor
  4. The sensor measures how long the echo takes to return

Because the speed of sound in air is known, the distance can be estimated using time-of-flight:

Distance = (time × speed of sound) / 2

The division by two accounts for the sound traveling to the object and back.



In this project:

  • The sensor was mounted to the bike frame
  • The reflecting object was the wheel or tire
  • Changes in distance corresponded to suspension movement

As the suspension compresses or extends, the measured distance changes.


Why ultrasonic sensors made sense for this project

Ultrasonic sensors aren’t precision instruments, but they have a few advantages that made them ideal here:

  • Completely non-contact
  • Cheap and easy to replace
  • Simple to interface with microcontrollers
  • Fast enough to capture suspension movement trends

Most importantly, they let me observe relative movement, which was the real goal.


Sensor placement on the bike

I installed:

  • One ultrasonic sensor at the rear
  • One ultrasonic sensor at the front

Each sensor was positioned so it faced the wheel directly, measuring the gap between the wheel and a fixed point on the bike.

As the suspension moved:

  • Compression reduced the distance
  • Rebound increased the distance

What surprised me was how quickly this worked. Once mounted and powered, the sensors immediately started reporting meaningful changes.


What I was measuring (and what I wasn’t)

Technically, you can calculate real suspension travel by:

  • Recording a baseline distance
  • Tracking changes from that baseline
  • Converting distance changes into displacement values

I chose not to do that.

Accuracy wasn’t the objective here. I wasn’t aiming for millimeter-perfect measurements or shock dyno data. What I wanted was behavioral insight:

  • Relative compression vs extension
  • Fast vs slow movement
  • Smooth vs chaotic response

Seeing patterns mattered more than absolute numbers.


Visualizing suspension movement

Once data started coming in, plotting it made the behaviour obvious:

  • Sharp spikes during hard braking
  • Gradual compression during acceleration
  • Continuous small oscillations on rough roads

Even without exact units, the shape of the data told a clear story.



Considering laser distance sensors

I did consider using laser ranging sensors instead of ultrasonic ones.

Laser sensors offer:

  • Higher precision
  • Faster response
  • More focused measurement beams

But they’re also much more expensive. For a learning-focused, experimental setup, ultrasonic sensors struck the right balance.


Dealing with jitter and noisy data

One of the first issues I ran into was jitter.

The distance readings fluctuated significantly because:

  • The sensors updated very frequently (every few milliseconds)
  • The environment was mechanically noisy
  • The wheel surface wasn’t perfectly uniform

This wasn’t a hardware problem—it was a data problem.

Simple filtering on the software side helped smooth the signal and made the suspension behaviour much easier to interpret.
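One simple filter that handles this kind of jitter well is an exponential moving average. The sketch below is illustrative (the sample values are made up), not necessarily the exact filter I used:

```python
def ema(samples, alpha=0.2):
    """Exponential moving average: smaller alpha means heavier
    smoothing, at the cost of a slower response to real movement."""
    smoothed = []
    value = samples[0]
    for s in samples:
        value = alpha * s + (1 - alpha) * value
        smoothed.append(value)
    return smoothed

# A single spurious spike (45.0) gets pulled down toward the trend.
noisy = [30.0, 31.5, 29.0, 30.5, 45.0, 30.2]
clean = ema(noisy)
```

The alpha parameter is the same compromise the suspension data forced everywhere: smooth too hard and real compression events get blunted, smooth too little and jitter dominates.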


What this experiment taught me

Using ultrasonic sensors to monitor suspension movement wasn’t perfect, but it was extremely informative.

Key takeaways:

  • You don’t need high accuracy to gain insight
  • Non-contact sensing simplifies experimentation
  • Relative measurements are often enough
  • Simple sensors can reveal complex behaviour

DIY Data Logging System

After getting a few sensors working individually, it became clear that I couldn’t keep treating them as isolated experiments. If this project was going to grow beyond a handful of wires and test scripts, I needed some kind of system architecture—something that would bring order, make debugging easier, and allow me to add more sensors without everything turning into a mess.

At this point, the goal wasn’t elegance or OEM-level design. It was structure and scalability, even if the hardware itself was still very prototype-grade.


Breaking the bike into logical zones

The first decision I made was to stop thinking about the bike as one big system and instead break it down into three physical sections:

  • Rear section
  • Mid section
  • Front section

This immediately simplified how I thought about sensors, wiring, and future expansion.



Rear section

The rear section already had a few components in place:

  • Rear tyre temperature sensors
  • Rear brake disc temperature sensor

I also knew this area would likely grow later, so I wanted the architecture to allow additional sensors without major changes.

Each rear module was treated as its own unit, responsible only for collecting data and sending it out wirelessly.


Mid section

The mid section was where I planned to add spatial awareness.

The idea here was a simple IMU unit consisting of:

  • One ESP8266 microcontroller
  • Two MPU6050 IMU sensors

This wasn’t high-end hardware by any means. Everything I was using at this stage was prototype-level, hobbyist gear. But that was fine. The point was to understand the problems first before worrying about precision or robustness.

This mid-section IMU would eventually help describe how the bike was moving, leaning, and accelerating in space.


Front section

The front section already had working modules:

  • Front tyre temperature sensors
  • Front brake disc temperature sensors

Just like the rear, this section was designed with future expansion in mind. The important part was that each module behaved consistently, regardless of where it lived on the bike.


A wireless-first approach

One design choice that stayed consistent across all sections was wireless communication.

Every sensor module sent its data wirelessly to a central data logging system. This immediately removed a lot of complexity:

  • No long signal wires running across the bike
  • No shared buses stretched through noisy environments
  • Easier isolation and debugging

Each module focused on one job:

  1. Read sensors
  2. Package data
  3. Transmit it

Everything else happened elsewhere.
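The three-step job above can be sketched as a minimal sender loop. This is an illustrative Python sketch only (the real modules ran as microcontroller firmware); the port number, module name, field names, and send rate are all assumptions.

```python
import json
import socket
import time

UDP_PORT = 4210  # hypothetical port; each module used its own

def read_sensors():
    """Placeholder for the real sensor reads (e.g. three IR temperature sensors)."""
    return {"left": 24.1, "center": 25.3, "right": 24.8}

def package(readings):
    """Step 2: package readings as a JSON payload with a timestamp."""
    return json.dumps({"module": "rear_tire", "t": time.time(), **readings}).encode()

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        payload = package(read_sensors())                 # steps 1 and 2
        sock.sendto(payload, ("<broadcast>", UDP_PORT))   # step 3: transmit
        time.sleep(0.1)                                   # ~10 Hz, an assumed rate
```

Keeping the module this dumb is the point: everything after `sendto` is somebody else's problem.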



First full system power-up

The first time I powered everything on together was genuinely interesting. Data started flowing in from multiple places on the bike, all at once.

At that moment, the system worked—but it also raised new questions.

The biggest one was data storage.


Deciding how to store the data

Each sensor module was already sending its readings in JSON format. That made debugging easy and human-readable, but it also made me think about performance.

Looking back, JSON may not have been the most efficient choice:

  • It’s text-based
  • It involves string operations
  • It’s slower than raw binary formats

But at the time, usability mattered more than speed.
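To put rough numbers on that trade-off, the same reading packed as raw binary is several times smaller than its JSON form. A quick comparison (field names and values are illustrative):

```python
import json
import struct

reading = {"left": 84.2, "center": 91.7, "right": 83.9}

# Text: human-readable and self-describing, but bulky
as_json = json.dumps(reading).encode()

# Binary: three little-endian 32-bit floats, compact but opaque
as_binary = struct.pack("<3f", reading["left"], reading["center"], reading["right"])

print(len(as_json), len(as_binary))  # the JSON payload is several times larger
```

For a hobby logger at a few readings per second, that overhead is irrelevant; for high-rate channels it starts to matter.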

I hadn’t yet decided whether the final data logs would be:

  • CSV files
  • JSON files
  • Or something else entirely

The key requirement was that the data had to be easy to analyze later using Jupyter Notebook or Google Colab. That decision would shape everything downstream, and it’s something I’ll cover in a separate article.


Early data logging and debugging

Before building a single “master” logging engine, I took a more pragmatic approach.

I wrote small Python programs that:

  • Listened on specific UDP ports
  • Received data from individual sensor modules
  • Printed or stored the incoming data

Each module sent its data to a dedicated port, which made debugging much easier. If something looked wrong, I could isolate that one stream without guessing.
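A per-module debug listener along those lines fits in a few lines of Python. The port number is an assumption; each module had its own.

```python
import socket

def listen(port, timeout=5.0):
    """Print raw packets arriving on one module's dedicated UDP port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))       # listen on all interfaces
    sock.settimeout(timeout)
    try:
        while True:
            data, addr = sock.recvfrom(1024)
            print(f"{addr[0]}: {data.decode(errors='replace')}")
    except socket.timeout:
        pass                    # no traffic for a while; stop
    finally:
        sock.close()
```

Running one of these per port makes it obvious which module, if any, has gone quiet.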

The long-term plan was always to replace this with:

  • A central data logging engine
  • Multiple threads, each handling a module
  • A system that flattened all incoming data into a unified structure

But for early development, simple tools were enough.
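The planned central engine could take roughly this shape: one thread per module port, all flattening into a single shared queue. A minimal sketch, assuming the hypothetical port-to-module mapping shown:

```python
import queue
import socket
import threading

# Hypothetical port-to-module mapping; the real ports were assigned per module
MODULES = {5001: "front_tire", 5002: "rear_tire", 5003: "imu_mid"}

records = queue.Queue()  # every stream flattens into this one queue

def port_worker(port, name, stop):
    """Receive packets on one port and tag them with the module name."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(0.5)
    while not stop.is_set():
        try:
            data, _ = sock.recvfrom(1024)
        except socket.timeout:
            continue
        records.put({"module": name, "raw": data})  # unified structure
    sock.close()

def start_engine():
    stop = threading.Event()
    threads = [threading.Thread(target=port_worker, args=(p, n, stop), daemon=True)
               for p, n in MODULES.items()]
    for t in threads:
        t.start()
    return stop, threads
```

A consumer thread can then drain `records` into whatever log format wins out, without caring which port anything arrived on.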



Powering the system (on purpose, not accidentally)

One design decision I was very deliberate about was power isolation.

All sensor modules were powered from a single 5V rail, supplied by an external power bank. I could have used a 12V-to-5V converter and tied everything into the bike’s electrical system, but I chose not to.

The reason was simple:

  • I didn’t want experimental electronics touching the bike’s primary electrical system
  • I wanted failures to be contained
  • I wanted to debug without risking the bike itself

This separation gave me peace of mind while experimenting.


What this stage taught me

Designing the system architecture didn’t magically make everything easy, but it did make the complexity manageable.

A few things became clear:

  • Thinking in zones simplifies physical design
  • Treating each sensor as an independent module scales well
  • Early architecture matters, even for hobby projects
  • Debugging is much easier when data streams are isolated

This stage wasn’t about perfection. It was about creating a foundation solid enough to build on—and fragile enough to teach me where the real problems were.

Monitoring Brake Temperature

After getting the tire temperature sensor modules working, the next thing I wanted to measure was brake disc temperature. This felt like a natural next step. Braking is one of the most aggressive actions on a motorcycle, and I was curious to see how quickly heat builds up during real riding.

At first, this part of the project seemed like it would be straightforward.


Using the same sensors

For brake disc temperature, I reused the same non-contact infrared temperature sensors I had already used for tire temperature sensing. The advantage of these sensors is that they measure temperature by detecting infrared radiation from a surface, which means there’s no need for physical contact.

That immediately simplified things.
No drilling, no clamps on brake lines, no risk of interfering with braking hardware.

In theory, I just needed to point the sensors at the discs and read the temperature.



Front brake disc mounting

The first challenge was mounting. I needed to position the sensors so they were:

  • Facing the brake discs directly
  • At a reasonable angle
  • Stable under vibration

For the front brakes, this meant aiming one sensor at the front-left disc and another at the front-right disc. Getting the angle right mattered more than I initially expected. Small changes in alignment noticeably affected readings.

By this point in the project, I was also dealing with a familiar limitation.


The I2C address problem (again)

All the infrared sensors I had left used the same fixed I2C address (0x5A). That meant I couldn’t place both front brake sensors on the same I2C bus.

The solution was the same workaround I had used before:

  • One sensor on the hardware I2C bus
  • One sensor on a software-emulated I2C bus

It wasn’t elegant, but it worked. Once everything was wired up and programmed, I could read temperature data from both front brake discs reliably.



Rear brake disc sensing

The rear brake disc was much simpler. I only needed a single sensor, which meant I could use the hardware I2C bus without any address conflicts.

Mounting the rear sensor was easier as well, though I quickly noticed that sensor distance mattered. The further the sensor was from the disc, the slower and less responsive the temperature readings felt. In hindsight, I could have mounted it closer, but for an early version of the system, it was good enough.

Once mounted, I could clearly see surface temperature changes on the rear brake disc during riding.


Why non-contact sensing mattered

One of the biggest advantages of using infrared sensors here was that I could measure brake disc temperature without touching the disc at all.

The sensors picked up infrared radiation emitted by the metal surface, which meant:

  • No risk of interfering with braking
  • No heat damage to wiring
  • No moving parts to worry about

It wasn’t laboratory-grade measurement, but it was more than enough to show real trends—especially during hard braking.



Software stayed simple

By this stage, I had settled on a consistent software structure for all sensor modules. Each module followed the same basic pattern:

  • Read sensor data
  • Package it into a simple data format
  • Send it back over WiFi using UDP

The brake disc temperature modules reused the same approach. From the data logging side, nothing special was needed. The logging program simply listened for incoming packets and stored everything into a log file.

Because of this consistency, adding brake disc temperature sensing was relatively easy compared to earlier stages of the project.


What this stage confirmed

This part of the project reinforced a few important lessons:

  • Reusing a known sensor design saves time
  • Physical mounting and distance affect readings more than expected
  • Non-contact sensing is extremely useful on a motorcycle
  • Software simplicity becomes valuable as the system grows

Brake disc temperature sensing worked well enough to answer the questions I had, and it gave me confidence to keep expanding the system.

Monitoring Tyre Temperature

I started building sensors to see what would actually work on a real motorcycle. This part of the project involved a lot of fabrication, testing, programming, and trial-and-error. It was also where my assumptions about “simple sensors” started to break down.

I didn’t start with the most complex system. I picked something that felt measurable, visual, and useful: tire temperature.


Starting with rear tire temperature

The first module I worked on was the rear tire temperature unit. The idea was straightforward. I wanted to measure the temperature of the tire across three regions:

  • Left edge of the tire
  • Center of the tire
  • Right edge of the tire

To do this, I used three non-contact infrared temperature sensors (MLX90614). The plan was to connect all three sensors to a single microcontroller, read their values continuously, and stream the data back wirelessly.

At this stage, the goal wasn’t perfect accuracy. It was to answer a much simpler question:
Can I even collect usable temperature data from a moving motorcycle tire?



Choosing WiFi over wiring

One of the early design decisions was how these sensor modules would communicate. Running wires back to a central unit would have meant implementing a proper communication bus, something like CAN. That would have required extra hardware, more complexity, and more cost per module.

SPI or I2C over long runs also didn’t feel like a good idea. Those protocols are sensitive to noise and interference, especially in an environment full of vibration, heat, and electrical noise.

So I went with WiFi.

I was using an ESP8266-based NodeMCU, which already had WiFi built in. It wasn’t the most elegant solution, but it was available, cheap, and flexible. Each sensor module could be self-contained and transmit data independently.


Sending data with UDP broadcast

To keep things simple, I chose UDP for data transmission. Unlike TCP, UDP is connectionless, which meant I didn’t have to manage connections, retries, or handshakes. If a packet was lost, that was fine—I cared more about trends than perfect delivery.

I also decided to broadcast the data packets over the local network instead of sending them to a fixed IP address. That way, I didn’t have to hardcode destinations into the modules. Any listening program could receive the data.

To test this, I wrote a small Python script and also used tools like ncat to listen on the network. Seeing raw temperature values coming in over the network for the first time was genuinely exciting. It meant the concept worked.


Mounting challenges on the rear

Once the rear module was working electrically, I ran into a very physical problem: placement.

The bike didn’t have a rear tire hugger, which would have been ideal because the sensor could move with the wheel and maintain a consistent distance. Instead, I ended up mounting the module on the tail section, close to the tire.

It wasn’t perfect, but it was good enough for a first test. The module stayed in place, the sensors read values, and data continued streaming over WiFi. That was enough to move on.


Front tire temperature module

The front tire temperature module followed a similar design: three infrared sensors measuring left, center, and right sections of the tire. This time, mounting was easier because the bike had a front tire hugger. That allowed for a more stable and consistent setup.

Electrically, though, this is where things started getting tricky.



The I2C address problem

The MLX90614 sensors communicate over I2C. Normally that’s not an issue, but these sensors come in different variants with fixed I2C addresses. Most of the sensors I had used the same default address (0x5A). If you try to put multiple devices with the same address on a single I2C bus, it simply doesn’t work.

Out of the batch I had, only two sensors used a different address (0x2A). But I needed three sensors per module.

The workaround was a bit of a hack.

I placed two sensors—one with address 0x5A and one with 0x2A—on the hardware I2C bus of the ESP8266. For the third sensor, I implemented a software I2C bus on different GPIO pins using a library.

This meant:

  • Two sensors on the hardware I2C bus
  • One sensor on a separate, software-emulated I2C bus

It wasn’t elegant, but it worked.


Multiple modules, multiple data streams

This is a plot of test-run data from the front tyre, showing the temperature of the left-edge, center, and right-edge sections. Briefly, here is what was happening during that ride: from around 04:50 to about 05:00 the edge temperatures rose, then dropped sharply at a traffic stop, and then climbed again as we continued, fluctuating along the way. What was interesting was that the edges got warmer than the center, and this was a normal work commute, not a track run.

With both the rear and front tire temperature modules working, I configured each module to transmit data on a different UDP port. This made it easier to route and process the data on the receiving side without mixing streams.

At this point, I had:

  • A rear tire temperature module sending data
  • A front tire temperature module sending data
  • Both broadcasting over WiFi using UDP
  • Real temperature readings coming in from a moving motorcycle

This was the first moment where the project felt real.


What this stage taught me

Building these first sensor modules taught me a few things very quickly:

  • Hardware constraints show up fast in the real world
  • “Simple” protocols like I2C can become limiting
  • Wireless communication simplifies wiring but introduces its own trade-offs
  • Physical mounting is just as important as electronics

Most importantly, it showed me that collecting data was possible—but also that every new sensor would come with its own set of problems.

Project GSXR: A DIY Data Logging Experiment

I started riding motorcycles in 2024. Like a lot of people, I had always wanted a superbike, and eventually I ended up with a second-hand Suzuki GSX-R1000 from 2003. The bike itself has a lot going on, but my actual riding journey is probably a story for another time.

What matters here is that once I started riding the bike every day—commuting to work, running errands, and just spending time on it—my curiosity slowly shifted. I wasn’t just riding anymore. I was constantly wondering how a machine like this actually works as a system.


Seeing a motorcycle as a system

When you sit on a superbike, you’re sitting on wide tires, a stiff aluminum frame, an inline-four engine making well over 120 horsepower, serious suspension, and an ECU-controlled fuel injection system. Somehow, all of this works together smoothly enough to be usable on normal roads.

At some point it clicked for me that superbikes aren’t just fast motorcycles—they’re complex engineering systems.

If you look at how these bikes have evolved, the difference is massive. Early-2000s superbikes and modern flagship models feel like they belong to different eras. Modern bikes rely heavily on electronics and software: ride-by-wire throttles, multiple riding modes, traction control, wheelie control, cornering ABS, and layers of logic constantly working in the background.

That shift toward electronics is what really caught my attention.

Inspiration from modern bikes and racing

A big inspiration for this project came from MotoGP. From a technical perspective, MotoGP bikes are rolling laboratories. They generate huge amounts of data—suspension movement, tire behavior, braking forces, lean angle, acceleration—and engineers use that data to refine setups and strategy session by session.

Around the same time, I was also looking at modern road bikes like the Yamaha R1M, which come with built-in data logging features. That idea stuck with me. I didn’t need race-level telemetry, but I kept wondering what it would be like to have some visibility into what my own bike was doing.

That’s when the idea formed:
What if I tried to log data on an old GSX-R that was never designed for it?


BMW S1000RR Dashboard

The questions I couldn’t answer while riding

I already had a rough intuition for a lot of things:

  • Hard braking heats up the brake discs
  • Tires warm up as you ride
  • The front suspension compresses under braking
  • Different rider inputs interact with each other

But intuition isn’t the same as measurement.

I couldn’t actually see any of this happening. I couldn’t measure it. Everything was based on feel and assumptions.

What really happens to brake disc temperature during a hard stop?
How quickly do tires warm up on a normal commute?
How does suspension behavior change under real road conditions?

Those questions kept coming back, and eventually curiosity won.


Turning curiosity into a data logging experiment

I didn’t start with a polished design or a clear end goal. I started by writing down what I was curious about and what I wanted to observe:

  • Front and rear suspension behavior
  • Front and rear tire temperature
  • Front and rear brake disc temperature
  • Spatial data using IMU sensors (lean and movement)
  • Engine coolant temperature
  • Brake input states
  • Headlight state for logic and triggers
  • GPS location data

At one point I even considered reading throttle position, but I dropped that idea quickly. I was already getting overwhelmed, and I had to remind myself what this project actually was.

This wasn’t about building a product or doing anything “proper.”
It was a hobbyist experiment to see if something like this was possible—and how hard it would be in practice.

I expected things to break, and I was fine with that. Learning was the whole point.


Testing a Throttle Position Sensor module

The first reality check

What surprised me wasn’t that the project was difficult—I expected that.

What surprised me was how quickly simple ideas turned into complex problems. Sensors that worked perfectly on the bench behaved very differently once they were mounted on a vibrating, hot motorcycle. Mounting, wiring, noise, and real-world conditions mattered far more than I initially thought.

That was the first real lesson of this project.