Rebel Racing - Charter-based Exploratory & Edge-Case Testing Case Study
Rebel Racing Banner


📱 Rebel Racing - Charter-based Exploratory & Edge-Case Testing (Mobile)

Tested: 17 Nov–22 Nov 2025

🧾 About this work

  • Author: Kelina Cowell - Junior QA Tester (Games)
  • Context: Self-directed manual QA portfolio project
  • Timebox: 1 week
  • Platform: Mobile (Android)
  • Focus: Daily golden-path smoke, interruptions & recovery, device/network variation, and basic UI scaling/readability

Introduction

One-week exploratory / edge-case pass on Rebel Racing (Android, Moto g54 5G, Android 15 @2400×1080/120Hz) focused on daily golden-path smoke, interruptions & recovery, device/network variation, and basic UI scaling/readability. I built a small set of charters, ran daily smoke passes across the week, and logged one high-impact soft-lock defect plus a separate low-severity audio issue, both with full evidence and STAR summaries.

| Studio | Platform | Scope |
| --- | --- | --- |
| Hutch Games | Android (Moto g54 5G, Android 15) | Exploratory & edge-case: smoke runs • interruptions & recovery • UI scaling/readability • performance & device feel • input responsiveness • network & live surfaces |

🎯 Goal

Show how I approach exploratory and edge-case testing on a live mobile F2P racer by scoping realistic charters, running daily smoke checks, and capturing any high-impact issues with clear repro steps, evidence, and context.

🧭 Focus Areas

  • Daily smoke runs
  • Interruptions and recovery
  • UI scaling and readability
  • Input responsiveness in menus and races
  • Performance and device feel on Wi-Fi, warm device, and LTE
  • Network and live surfaces in Store and Events
  • Visual-only layout/aspect checks in Bluestacks (16:9 hub baseline, Portrait stretch, 20:9 preset blocked)

📄 Deliverables

  • Exploratory and edge-case workbook (Google Sheets)
  • Bug log and STAR summary (PDF export)
  • Evidence videos grouped by area (YouTube playlists)
  • Jira-style bug and summary examples
  • Networking and applied insight notes from senior QA leads

📊 Metrics

| Metric | Value |
| --- | --- |
| Total Bugs Logged | 2 |
| Critical | 0 |
| Major | 1 |
| Minor | 1 |
| Repro Consistency | 100% |

⭐ STAR SUMMARY - Rebel Racing QA (Android)

Situation: One week of exploratory and edge-case testing on Rebel Racing on a Moto g54 5G running Android 15, build 27.01.18975, captured via scrcpy at 1080p on Wi-Fi and LTE.

Task: Keep scope realistic on a single physical device by running daily golden-path smoke checks, then pushing interruptions, UI scaling, performance and network edge cases to see where stability or player experience might break.

Action: Designed charters from the project brief and senior QA insight, ran daily smoke runs, and executed focused sessions for interruptions (alarms, notification shade, Home/Return, screen lock/unlock), scaling, performance, input and network. Captured short 1080p clips with scrcpy and OBS, and tracked results in a Google Sheets workbook with a clear bug log and STAR summary.

Result: All smoke runs passed with no crashes. I found and documented one soft lock in the post-race rewards flow after an alarm and app close (RR-1), plus a separate low-severity audio issue where background music speeds up after lock/unlock on the Results/Rewards screen (RR-37), both captured with full video evidence. I also recorded smaller observations on LTE load delays, warm-device feel, and Bluestacks visual-only behaviour that can inform future testing and device coverage.
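The interruption passes above were driven by hand on the device, but the same lock/unlock cycles can be scripted over adb when a finding like RR-37 needs consistent re-runs. Below is a minimal Python sketch, not part of the original sessions: it assumes adb is on PATH, USB debugging is enabled, the test device has no lock PIN, and the swipe coordinates suit the Moto g54's 1080×2400 screen.

```python
import subprocess
import time

def adb(*args):
    """Run a single adb command against the connected device."""
    subprocess.run(["adb", *args], check=True)

def lock_unlock_cycle(locked_seconds=5):
    """One lock/unlock interruption: lock, wait, wake, swipe to dismiss."""
    adb("shell", "input", "keyevent", "KEYCODE_POWER")   # screen off / lock
    time.sleep(locked_seconds)
    adb("shell", "input", "keyevent", "KEYCODE_WAKEUP")  # screen back on
    adb("shell", "input", "swipe", "540", "2000", "540", "800")  # swipe up
    time.sleep(2)  # give the game a moment to resume audio and rendering

if __name__ == "__main__":
    # Park the game on the Results/Rewards screen first, then run three
    # cycles to mirror the 3/3 repro rate recorded for RR-37.
    for i in range(3):
        print(f"lock/unlock cycle {i + 1}/3 - listen for BGM tempo change")
        lock_unlock_cycle()
```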


🤝 Networking & Applied Insight

During this project I did not guess the scope in isolation; I treated it like a mini live-ops assignment and shaped it around advice from senior QA leads.

Nathan Glatus (ex Senior QA / Game Integrity Analyst, Fortnite, Epic Games) helped me set the initial scope. His advice was to treat Rebel Racing as what it is: a live mobile F2P racer, not a lab toy for every possible edge case. That translated into a small set of focused charters rather than a giant “test everything” list: daily golden-path smoke on a single physical device, input and handling tiers, collisions / exits and race flow, and UI scaling and key live surfaces (store, events, hub). He also pushed me to keep runs to realistic 45–60 minute sessions with clear exit criteria, and to write tighter bug reports with strong oracles, clear summaries and bundled evidence (video, device/build metadata, repro steps, repro rate) instead of vague “it feels off” notes. His framing around realistic coverage on an approved device list is why this case study is scoped to one main phone but documented in a way that could scale to a real QA team.

Radu Posoi (Founder, AlkoTech Labs, ex Ubisoft QA Lead) then helped me iterate the scope so it matched how mobile QA is actually run day to day. His feedback led me to define clear performance anchors (hub, pre-race, race start, mid-race, results) instead of vague “seems fine” checks; treat interruptions as a first-class surface covering lock screen, app switching, app kill and recovery, and notification shade; and turn battery, heat and multi-touch stress into dedicated charters rather than random one-off experiments. Because Rebel Racing blocks standard Android Studio emulators on the Play Store, he also recommended using Bluestacks as a visual-only oracle for odd aspect ratios and layout stretch while keeping all real testing and bug reproduction on my physical Moto g54. That combination turned my original “nice to have” ideas into a concrete device and network approach that looks like a small slice of a real mobile QA lab rather than a student project.

Their insight directly shaped the final list of charters, including the extra lock/unlock interruption runs that revealed the audio issue on the Results/Rewards screen (RR-37). It also shaped how I recorded device and network context, how I wrote and prioritised bug reports, and the STAR summary for this case study, so the project reads more like a realistic live mobile QA engagement than a purely academic exercise.

Nathan Glatus (ex Senior QA / Game Integrity Analyst, Fortnite, Epic Games)

Key takeaway: Treat Rebel Racing as a live mobile F2P racer and anchor testing in a small set of high-value surfaces instead of trying to “test everything”. Prioritise daily golden-path stability, input and handling tiers, collisions / exits, and UI scaling / live surfaces, and back this with clear charters and strong evidence bundles.

How I applied it:

  • Scoped the week around daily golden-path smoke runs from launch → hub → race → rewards on one physical device.
  • Wrote focused charters for input & handling tiers, collisions / exits, and UI scaling + key live surfaces (hub, store, events).
  • Structured each session as a realistic 45–60 minute run with clear exit criteria instead of open-ended “play until bored” testing.
  • Improved bug reports by using clear titles, strong oracles and bundled evidence (video, device/build context, repro steps, repro rate) instead of vague “feels off” notes.

Evidence: LinkedIn conversation with Nathan Glatus shaping Rebel Racing scope

Radu Posoi (Founder, AlkoTech Labs, ex Ubisoft QA Lead)

Key takeaway: Iterate the scope so it matches real mobile QA: define performance anchors, treat interruptions and recovery as their own surface (including screen lock/unlock and audio recovery), structure battery / heat / multi-touch stress instead of doing random “torture tests”, and use emulators as visual oracles only when the store blocks standard Android Studio emulators.

How I applied it:

  • Defined performance anchors at hub, pre-race, race start, mid-race and results, and used them in my charters instead of vague “seems fine” checks.
  • Created dedicated charters for interruptions & recovery (notification shade, Home/Return, lock screen and unlock, app switch, app kill and return to race / hub), including the extra lock/unlock runs that later revealed the Rewards BGM tempo issue (RR-37).
  • Turned battery / heat and multi-touch stress into structured sessions rather than ad-hoc “let’s see what happens” experiments.
  • Because Rebel Racing blocks standard Android Studio emulators on the Play Store, used Bluestacks as a visual-only oracle for odd aspect ratios and layout stretch, keeping all real testing and bug reproduction on my physical Moto g54.
  • Kept scope realistic for a solo tester by limiting device coverage and focusing the workbook on evidence and outcomes rather than padding it with low-value checks.

Evidence: LinkedIn conversation with Radu Posoi about realistic solo QA scope
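To show what those performance anchors can look like in practice, here is a small illustrative Python helper for timestamping the five anchors named above during a session. The anchor list comes straight from the charters; the press-Enter workflow is just one convenient way to mark them while playing, not how the original runs were recorded.

```python
import time

ANCHORS = ["hub", "pre-race", "race start", "mid-race", "results"]

class AnchorTimer:
    """Record elapsed session time at each named performance anchor."""

    def __init__(self):
        self.start = time.monotonic()
        self.marks = []

    def mark(self, anchor):
        elapsed = time.monotonic() - self.start
        self.marks.append((anchor, elapsed))
        print(f"{elapsed:7.2f}s  {anchor}")

if __name__ == "__main__":
    # Press Enter as the game reaches each anchor; the log replaces
    # vague "seems fine" notes with comparable numbers per run.
    timer = AnchorTimer()
    for anchor in ANCHORS:
        input(f"Press Enter at: {anchor} ")
        timer.mark(anchor)
```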

📚 JIRA Courses & Application

After my first case study (Battletoads), where I used two beginner Jira courses to learn the basics, I wanted this project to focus more on how work is modelled and organised in Jira day to day. For Rebel Racing I took two short Coursera projects that go deeper into user stories and simple Scrum setups.

Courses completed for this project:

  • Create User Stories in Jira (Coursera)
  • How to Create a Jira SCRUM Project (Coursera)

Practice in this project:

  • Modelled the week's sessions and bugs as Jira issues on a simple Scrum board (To Do, Blocked, In Progress, Verified).
  • Wrote Jira-style tickets for RR-1 and RR-37, shown in the board and ticket screenshots below.

🎓 Certificates

| Certificate | Provider | Issued | Evidence |
| --- | --- | --- | --- |
| Create User Stories in Jira | Coursera | 2025 | Create User Stories in Jira – Coursera certificate |
| How to Create a Jira SCRUM Project | Coursera | 2025 | How to Create a Jira SCRUM Project – Coursera certificate |

📷 Evidence & Media

These links are the complete artefacts for this project:

| Type | File / Link |
| --- | --- |
| QA Workbook (Google Sheets) | Open Workbook |
| QA Workbook (PDF Export) | Open PDF |

📌 Core Project Findings - Sessions and Bugs

All planned charters and daily smoke runs were completed on the Moto g54 with no crashes. The build stayed stable across the week, but one high-impact soft lock in the post-race rewards flow was found and logged with full evidence, along with a separate low-severity audio issue where background music speeds up after lock/unlock on the Results/Rewards screen. I also captured smaller observations around LTE load delays, warm-device performance feel, and UI scaling in Bluestacks.

📁 Jira Board Screenshot - Overview

Rebel Racing QA board overview — To Do, Blocked, In Progress, Verified

🗂️ Jira Board - Verified Screenshots (thumbnails)

Jira board — Verified set 1 Jira board — Verified set 2

Click any thumbnail to view the full-size image.

🗂️ Jira - Bug Ticket Layout

Jira - Bug Ticket Layout 1 Jira - Bug Ticket Layout 2

Click any thumbnail to view the full-size image.

🐞 Bugs - Summary + Videos

| ID | Title | Sev | Repro | Video |
| --- | --- | --- | --- | --- |
| RR-1 | [Android][Interruptions][Rewards] Continue button unresponsive after OS alarm + app close | High | 1/1 | RR-1 - Rewards Continue unresponsive after alarm + app close |
| RR-37 | [Android][Interruptions][Audio][Rewards] BGM tempo increases after lock/unlock on race results screen | Low | 3/3 | RR-37 - BGM tempo increases after lock/unlock on race results screen |

If you are viewing this on github.com, embeds may not display. Use the thumbnails/links above or open this page on the published site (GitHub Pages) to watch inline.
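RR-1 was found and reproduced by hand, but the OS-side setup (a real alarm firing, then an app close) can also be staged with adb when the fix eventually needs regression re-runs. The sketch below is illustrative only: the AlarmClock intent extras are standard Android, while the package name is a placeholder to verify with `adb shell pm list packages` on the device.

```python
import datetime
import subprocess

PACKAGE = "com.example.rebelracing"  # placeholder - verify the real package id

def adb(*args):
    subprocess.run(["adb", *args], check=True)

def set_alarm_in(minutes=2):
    """Schedule a real OS alarm a couple of minutes out (AlarmClock intent)."""
    t = datetime.datetime.now() + datetime.timedelta(minutes=minutes)
    adb("shell", "am", "start",
        "-a", "android.intent.action.SET_ALARM",
        "--ei", "android.intent.extra.alarm.HOUR", str(t.hour),
        "--ei", "android.intent.extra.alarm.MINUTES", str(t.minute),
        "--ez", "android.intent.extra.alarm.SKIP_UI", "true")

def close_and_relaunch():
    """Kill the app from the OS side, then relaunch it from the launcher."""
    adb("shell", "am", "force-stop", PACKAGE)
    adb("shell", "monkey", "-p", PACKAGE,
        "-c", "android.intent.category.LAUNCHER", "1")

if __name__ == "__main__":
    set_alarm_in(2)  # finish a race so the alarm lands on the rewards flow
    input("After the alarm fires and you dismiss it, press Enter...")
    close_and_relaunch()  # then check whether Continue responds (RR-1)
```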

🔍 Other observations (non-blocking)

Smaller UX and performance findings taken from an extended LTE race/results run on the Moto g54 and a Bluestacks Portrait visual-only check. These did not meet the bar for full bug tickets but are still useful for future tuning or device coverage.

| ID | Observation | Category | Evidence |
| --- | --- | --- | --- |
| OBS-01 | [Android][Network][Results] Longer load spinner before Rewards on LTE compared to Wi-Fi | Performance / UX | OBS-01 – LTE results delay vs Wi-Fi |
| OBS-02 | [Android][Device feel] Moto g54 felt warm but stable during the extended LTE run | Device feel | Same run as OBS-01; device warmth noted during this session |
| OBS-03 | [Bluestacks][Visual-only] Portrait mode: background and menu bar backgrounds stretched; buttons/text very small (hard to read); functional taps OK for this sweep | Visual-only sanity check | OBS-03 – Bluestacks Portrait mode showing stretched visuals and tiny text |
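For warm-device sessions like OBS-02, battery temperature can also be sampled during the run rather than judged by feel alone. A minimal sketch using `adb shell dumpsys battery`, a standard Android service dump that reports temperature in tenths of a degree Celsius; the 60-second interval is an arbitrary choice:

```python
import re
import subprocess
import time

def battery_temperature_c():
    """Parse battery temperature from dumpsys (value is tenths of °C)."""
    out = subprocess.run(["adb", "shell", "dumpsys", "battery"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"temperature:\s*(\d+)", out)
    return int(match.group(1)) / 10.0 if match else None

if __name__ == "__main__":
    # Sample once a minute during an extended run; stop with Ctrl+C.
    while True:
        print(f"{time.strftime('%H:%M:%S')}  battery {battery_temperature_c()} °C")
        time.sleep(60)
```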

📈 Results

See the Metrics table above for bug counts, severity split, and repro consistency; the workbook linked under Evidence & Media has the full run-by-run detail.


📱 Peer-style UX benchmark (Rebel Racing vs Asphalt 9)

As a small add-on to the main Rebel Racing work, I ran a quick visual-only peer benchmark against Asphalt 9 (Gameloft). The goal was not to file bugs, but to see how two mobile racers from the same space handle menu clarity, taps-to-driving, HUD readability and reward pacing, using my own dyslexic and dyscalculic perspective as a lens. The findings below helped me frame Rebel Racing’s UX strengths and risks in a way that is easier to explain to designers and producers.

⭐ MICRO-STAR SUMMARY – Comparative Findings

Situation: During the Rebel Racing project I ran a short visual UX benchmark against Asphalt 9 to understand how similar mobile racers handle menu clarity, taps-to-driving, HUD readability, reward pacing, and return-to-hub flow.

Task: Compare the first-minute path, HUD readability, clarity of labels, reward pacing, and any hiccups or friction points in the standard race loop.

Action: Opened both apps, timed taps-to-driving, reviewed HUD readability, checked clarity of results and reward steps, and noted anything that slowed the player down or was easy to miss (from a dyslexic and dyscalculic tester’s perspective).

Result: Rebel Racing reached driving quicker (4 taps) than Asphalt 9 (6 taps). Asphalt 9 had clearer HUD labels overall, while Rebel Racing contained several readability risks in white-on-bright UI elements.

📊 Summary Metrics

| Game | Area / Feature | What happened | Why it matters | Evidence |
| --- | --- | --- | --- | --- |
| Asphalt 9 | Taps to driving | 6 taps to reach the race | Clear Play CTA; onboarding friction slightly higher than Rebel Racing | Asphalt 9 – taps to driving and HUD |
| Asphalt 9 | HUD readability | Large readable labels: “POS”, “DIST”, timer; MPH clearly visible | High clarity helps dyslexic and dyscalculic players track race state | Same video as above (visual-only) |
| Asphalt 9 | Rewards flow | “Next” CTA changes to “Miss Out”, requiring a second tap | CTA change can confuse players and increase mis-taps | Same video as above (visual-only) |
| Rebel Racing | Taps to driving | 4 taps to reach driving, faster than Asphalt 9 | Lower friction; faster access to gameplay | Rebel Racing – taps to driving and HUD |
| Rebel Racing | HUD readability | Small POS label (no box), small white timer (hard to read), rival name in small unboxed white text; “BEAT JASMINE” not bold and easy to miss | Readability risks for players with dyslexia or dyscalculia, especially mid-race | Same video as above (visual-only) |
| Rebel Racing | Rewards flow | Snappy; no ads; consistent taps | Strong UX with a quick return to hub | Same video as above (visual-only) |

🏁 Result and takeaway

Result: Rebel Racing reaches driving faster (4 taps vs 6). Asphalt 9’s HUD readability was stronger due to clearer labels and boxed text.

Takeaway: Rebel Racing’s core loop is faster but could benefit from improved readability in HUD elements, especially small unboxed text.


🧠 What I learned

  • A solo week goes further with a few high-value charters than with a giant “test everything” list.
  • Interruptions and recovery deserve first-class charters: the extra lock/unlock runs are what surfaced the RR-37 audio issue.
  • Emulators can still earn their keep as visual-only oracles when the Play Store blocks them for real testing.
  • Evidence-first reporting (video, device/build context, repro steps, repro rate) makes even a two-bug week easy to act on.

🔚 Conclusion

Exploratory and edge-case pass complete on Rebel Racing (Android, Moto g54 5G). I kept device coverage realistic on a single phone, ran daily golden-path smoke checks, pushed interruptions, network changes and basic scaling, and documented one high-impact rewards soft lock plus one low-severity audio issue, both with clear repro and short 1080p evidence.

Up next: I am moving on to a one-week Regression Testing project on Sworn (PC). This one focuses on verifying recent fixes against patch notes, checking save/load safety, session start/quit flows, stamina and quest systems, and UI readability, and catching any side effects introduced by the latest update.

Email Me Connect on LinkedIn Back to Manual Portfolio hub


📎 Disclaimer

This is a personal, non-commercial portfolio for educational and recruitment purposes. I’m not affiliated with or endorsed by any game studios or publishers. All trademarks, logos, and game assets are the property of their respective owners. Any screenshots or short clips are included solely to document testing outcomes. If anything here needs to be removed or credited differently, please contact me and I’ll update it promptly.