📱 Rebel Racing - Charter-based Exploratory & Edge-Case Testing (Mobile)
🧾 About this work
- Author: Kelina Cowell - Junior QA Tester (Games)
- Context: Self-directed manual QA portfolio project
- Timebox: 1 week
- Platform: Mobile (Android)
- Focus: Daily golden-path smoke, interruptions & recovery, device/network variation, and basic UI scaling/readability
Introduction
One-week exploratory / edge-case pass on Rebel Racing (Android, Moto g54 5G, Android 15 @2400×1080/120Hz) focused on daily golden-path smoke, interruptions & recovery, device/network variation, and basic UI scaling/readability. I built a small set of charters, ran daily smoke passes across the week, and logged one high-impact soft-lock defect plus a separate low-severity audio issue, both with full evidence and STAR summaries.
- Scope:
  - Daily golden-path smoke (launch → hub → 1 race → rewards → hub)
  - Interruptions & recovery (alarms, notification shade, Home/Return, LTE force-close, screen lock/unlock)
  - UI scaling & readability in core menus
  - Performance & device feel (Wi-Fi vs warm vs LTE)
  - Input responsiveness (race + menus)
  - Network & live surfaces (Store/Events, Wi-Fi/LTE toggles)
  - Light Bluestacks visual-only layout/aspect checks (16:9 hub baseline, plus Portrait; a 20:9 preset was attempted but blocked)
- Approach: charter-based sessions designed from senior QA insight (Nathan Glatus & Radu Posoi), time-boxed to ~20–45 minutes each, with one smoke pass per day plus focused charters layered on top. High-impact issues were captured with short 1080p videos, clear repro steps, device/network context, and a simple device/network matrix to keep coverage realistic on a single physical phone.
- Outcome:
  - All planned smoke and charter runs completed on the Moto g54 with no crashes.
  - One Rewards soft-lock after an alarm + app close (RR-1), captured with a 1/1 repro and documented to backlog.
  - A separate audio issue on the Results/Rewards screen after lock/unlock (RR-37), captured with a 1/1 repro and logged as a low-severity backlog item.
  - LTE felt noticeably slower than Wi-Fi, with a results-loading delay.
  - Bluestacks 16:9 (1920×1080) gave a clean hub/layout baseline as a visual-only check, while Portrait mode showed a stretched background and very small text (visual-only; 20:9 preset attempts were blocked by the emulator).
  - Android Studio AVD was blocked by Play Store eligibility.
- Evidence: Google Sheets workbook (README, 1-liner summary, charters, session notes, bug log, STAR, daily smoke, device matrix, glossary), YouTube playlists grouped by area (smoke, interruptions, UI scaling, performance, input, network, Bluestacks), and Jira-style bug/STAR summaries suitable for review.
| Studio | Platform | Scope |
|---|---|---|
| Hutch Games | Android (Moto g54 5G - Android 15) | Exploratory & edge-case: smoke runs • interruptions & recovery • UI scaling/readability • performance & device feel • input responsiveness • network & live surfaces |
🎯 Goal
Show how I approach exploratory and edge-case testing on a live mobile F2P racer by scoping realistic charters, running daily smoke checks, and capturing any high-impact issues with clear repro steps, evidence, and context.
🧭 Focus Areas
- Daily smoke runs
- Interruptions and recovery
- UI scaling and readability
- Input responsiveness in menus and races
- Performance and device feel on Wi-Fi, warm device, and LTE
- Network and live surfaces in Store and Events
- Visual-only layout/aspect checks in Bluestacks (16:9 hub baseline, Portrait stretch, 20:9 preset blocked)
📄 Deliverables
- Exploratory and edge-case workbook (Google Sheets)
- Bug log and STAR summary (PDF export)
- Evidence videos grouped by area (YouTube playlists)
- Jira-style bug and summary examples
- Networking and applied insight notes from senior QA leads
📊 Metrics
| Metric | Value |
|---|---|
| Total Bugs Logged | 2 |
| Critical | 0 |
| Major | 1 |
| Minor | 1 |
| Repro Consistency | 100% |
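As a sanity check, the Repro Consistency figure above can be derived straight from the bug-log entries. A minimal Python sketch, assuming illustrative field names (my own shorthand, not the workbook's actual columns):

```python
# Illustrative bug-log entries -- IDs and severities from the project,
# field names are shorthand, not the workbook's real columns.
bugs = [
    {"id": "RR-1",  "severity": "Major", "reproduced": 1, "attempts": 1},
    {"id": "RR-37", "severity": "Minor", "reproduced": 1, "attempts": 1},
]

# Repro consistency = successful repros / total repro attempts.
repro_consistency = 100 * sum(b["reproduced"] for b in bugs) / sum(b["attempts"] for b in bugs)
print(f"Repro Consistency: {repro_consistency:.0f}%")  # → Repro Consistency: 100%
```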
⭐ STAR SUMMARY - Rebel Racing QA (Android)
Situation: One week of exploratory and edge-case testing on Rebel Racing on a Moto g54 5G running Android 15, build 27.01.18975, captured via scrcpy at 1080p on Wi-Fi and LTE.
Task: Keep scope realistic on a single physical device by running daily golden-path smoke checks, then pushing interruptions, UI scaling, performance and network edge cases to see where stability or player experience might break.
Action: Designed charters from the project brief and senior QA insight, ran daily smoke runs, and executed focused sessions for interruptions (alarms, notification shade, Home/Return, screen lock/unlock), scaling, performance, input and network. Captured short 1080p clips with scrcpy and OBS, and tracked results in a Google Sheets workbook with a clear bug log and STAR summary.
Result: All smoke runs passed with no crashes. I found and documented one soft lock in the post-race rewards flow after an alarm and app close (RR-1), plus a separate low-severity audio issue where background music speeds up after lock/unlock on the Results/Rewards screen (RR-37), both captured with full video evidence. I also recorded smaller observations on LTE load delays, warm-device feel, and Bluestacks visual-only behaviour that can inform future testing and device coverage.
🤝 Networking & Applied Insight
During this project I did not guess the scope in isolation; I treated it like a mini live-ops assignment and shaped it around advice from senior QA leads.
Nathan Glatus (ex-Senior QA / Game Integrity Analyst, Fortnite, Epic Games) helped me set the initial scope. His advice was to treat Rebel Racing as what it is: a live mobile F2P racer, not a lab toy for every possible edge case. That translated into a small set of focused charters rather than a giant “test everything” list: daily golden-path smoke on a single physical device, input and handling tiers, collisions/exits and race flow, and UI scaling and key live surfaces (store, events, hub). He also pushed me to keep runs to realistic 45–60 minute sessions with clear exit criteria, and to write tighter bug reports with strong oracles, clear summaries and bundled evidence (video, device/build metadata, repro steps, repro rate) instead of vague “it feels off” notes. His framing around realistic coverage on an approved device list is why this case study is scoped to one main phone but documented in a way that could scale to a real QA team.
Radu Posoi (Founder, AlkoTech Labs, ex-Ubisoft QA Lead) then helped me iterate the scope so it matched how mobile QA is actually run day to day. His feedback led me to define clear performance anchors (hub, pre-race, race start, mid-race, results) instead of vague “seems fine” checks; treat interruptions as a first-class surface covering lock screen, app switching, app kill and recovery, and notification shade; and turn battery, heat and multi-touch stress into dedicated charters rather than random one-off experiments. Because Rebel Racing blocks standard Android Studio emulators on the Play Store, he also recommended using Bluestacks as a visual-only oracle for odd aspect ratios and layout stretch while keeping all real testing and bug reproduction on my physical Moto g54. That combination turned my original “nice to have” ideas into a concrete device and network approach that looks like a small slice of a real mobile QA lab rather than a student project.
Their insight directly shaped the final list of charters, including the extra lock/unlock interruption runs that revealed the audio issue on the Results/Rewards screen (RR-37). It also shaped how I recorded device and network context, how I wrote and prioritised bug reports, and the STAR summary for this case study, so the project reads more like a realistic live mobile QA engagement than a purely academic exercise.
📚 JIRA Courses & Application
After my first case study (Battletoads) where I used two beginner Jira courses to learn the basics, I wanted this project to focus more on how work is modelled and organised in Jira day to day. For Rebel Racing I took two short Coursera projects that go deeper into user stories and simple Scrum setups.
Courses completed for this project:
- Create User Stories in Jira (Coursera) – Practised breaking work into epics, user stories and sub-tasks with clear acceptance criteria. This helped me think about Rebel Racing work in terms of “player goals + expected behaviour” instead of just a list of tests.
- How to Create a Jira Scrum Project (Coursera) – Set up a basic Scrum project from scratch with a backlog, a simple sprint board, and clear status transitions (To Do → In Progress → Blocked → Done). Reinforced keeping the workflow lightweight and readable.
Practice in this project:
- Framed test ideas and charters as short “stories” (e.g. interruptions, LTE vs Wi-Fi, Bluestacks visual check) with a clear player goal and expected outcome, then linked the RR-1 and RR-37 defects back to the relevant charter.
- Used a simple Jira-style workflow (To Do / In Progress / Done / Deferred) so each issue told a clear status story without extra admin.
- Logged RR-1 and RR-37 and key observations with consistent titles, short descriptions, and direct links to 1080p evidence clips, mirroring how they’d sit on a real Jira board.
- Kept the issue list small but focused, favouring a few well-written tickets with strong evidence over a noisy backlog of half-baked notes.
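The lightweight status flow used above can be captured as a tiny transition map. A hedged Python sketch: the status names come from the bullets, but which moves count as "allowed" is my own assumption, not the real board configuration:

```python
# Statuses from the project's Jira-style workflow; the allowed moves
# are an illustrative assumption, not the actual board config.
TRANSITIONS = {
    "To Do":       {"In Progress", "Deferred"},
    "In Progress": {"Done", "Deferred", "To Do"},
    "Deferred":    {"To Do"},
    "Done":        set(),  # closed tickets stay closed on this sketch board
}

def can_move(src: str, dst: str) -> bool:
    """Return True if a ticket may move from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```

For example, `can_move("To Do", "In Progress")` is allowed here, while reopening a `Done` ticket is not; the point is that a readable workflow fits in a handful of lines.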
🎓 Certificates
| Certificate | Provider | Issued | Evidence |
|---|---|---|---|
| Create User Stories in Jira | Coursera | 2025 | |
| How to Create a Jira SCRUM Project | Coursera | 2025 | |
📷 Evidence & Media
These links are the complete artefacts for this project. They contain:
- Overview and scope
- Charters and session notes
- Bug log and STAR summary
- Daily smoke runs and outcomes
- Device and network matrix
- Glossary and methodology notes
| Type | File / Link |
|---|---|
| QA Workbook (Google Sheets) | Open Workbook |
| QA Workbook (PDF Export) | Open PDF |
📌 Core Project Findings - Sessions and Bugs
All planned charters and daily smoke runs were completed on the Moto g54 with no crashes. The build stayed stable across the week, but one high-impact soft lock in the post-race rewards flow was found and logged with full evidence, along with a separate low-severity audio issue where background music speeds up after lock/unlock on the Results/Rewards screen. I also captured smaller observations around LTE load delays, warm-device performance feel, and UI scaling in Bluestacks.
📁 Jira Board Screenshot - Overview
🗂️ Jira Board - Verified Screenshots (thumbnails)
🗂️ Jira - Bug Ticket Layout
🐞 Bugs - Summary + Videos
If you are viewing this on github.com, embeds may not display. Use the thumbnails/links above or open this page on the published site (GitHub Pages) to watch inline.
🔍 Other observations (non-blocking)
Smaller UX and performance findings taken from an extended LTE race/results run on the Moto g54 and a Bluestacks Portrait visual-only check. These did not meet the bar for full bug tickets but are still useful for future tuning or device coverage.
📈 Results
- Completed all planned daily smoke runs and exploratory charters on the Moto g54 without any crashes or hard failures.
- Logged one major defect (RR-1 Rewards soft lock) with clear 1080p evidence and a full bug entry, plus one low-severity audio defect (RR-37 BGM tempo increase after lock/unlock on the Results/Rewards screen). Captured additional smaller UX and performance observations for future device and network coverage.
- Confirmed the core golden path (launch → race → rewards → hub) stayed stable on baseline Wi-Fi and across most interruption scenarios, including lock/unlock on race and hub. LTE showed slower results loading compared to Wi-Fi. The main exception is the RR-1 alarm plus app close case described below.
- Rewards soft lock (RR-1): after finishing a race, if an OS alarm fires while the Rewards screen is open and the player then closes and relaunches the app, the post-race Rewards screen can appear with the Continue button unresponsive. Back, notification shade, Home → Return and Wi-Fi toggle do not recover the flow; the practical workaround is to Pause the app from Android app info, then Unpause and relaunch so Continue works again. This was captured with video and documented as a high-severity soft lock.
- Rewards audio tempo issue (RR-37): when locking the device on the Results/Rewards screen and then unlocking back into the game, the background music resumes at a noticeably faster tempo and feels off beat compared to pre-lock playback. UI and progression are not impacted, so this is logged as a low-severity audio defect rather than a blocking issue.
- LTE results delay: on LTE, the results screen shows a noticeably longer loading spinner before rewards appear compared to the much quicker transition on Wi-Fi. This did not reproduce as a hard failure but is worth tracking as a performance and UX risk.
- Warm device and scaling notes: extended play on the Moto g54 made the device feel warm but not dangerously hot, with no visible frame drops in core races. In Bluestacks Portrait mode, the background and menu bar backgrounds appeared stretched and buttons and text were very small (hard to read). Attempts to force a 20:9 preset were blocked by the emulator, so checks stayed visual-only.
- Performance and device feel: the Moto g54 stayed stable across all runs with no visible stutters or visual hitches during races. LTE produced slower results loading compared to Wi-Fi but did not cause crashes or hard failures.
See Metrics above for the full table of runs and references.
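The close-and-relaunch half of the RR-1 sequence could be scripted over adb for later regression re-runs. A minimal Python sketch that only builds the command list: the alarm itself still has to fire on the device, and the package id below is a placeholder, not Rebel Racing's real one:

```python
# Hypothetical package id -- look up the real one on a connected device
# with `adb shell pm list packages`.
PKG = "com.example.rebelracing"

def rr1_repro_cmds(pkg: str = PKG) -> list[str]:
    """adb steps for the app-close -> relaunch half of the RR-1 repro.

    Assumes a race has just finished and an OS alarm has already fired
    while the Rewards screen was open.
    """
    return [
        f"adb shell am force-stop {pkg}",  # close the app from the Rewards screen
        f"adb shell monkey -p {pkg} -c android.intent.category.LAUNCHER 1",  # relaunch
        "adb shell input keyevent KEYCODE_POWER",   # lock, as a recovery probe
        "adb shell input keyevent KEYCODE_WAKEUP",  # wake, ready for unlock
    ]

if __name__ == "__main__":
    for cmd in rr1_repro_cmds():
        print(cmd)
```

Even as a sketch, having the sequence pinned down as commands makes the 1/1 repro easy to re-attempt on a future build without re-reading the session notes.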
📱 Peer-style UX benchmark (Rebel Racing vs Asphalt 9)
As a small add-on to the main Rebel Racing work, I ran a quick visual-only peer benchmark against Asphalt 9 (Gameloft). The goal was not to file bugs, but to see how two mobile racers from the same space handle menu clarity, taps-to-driving, HUD readability and reward pacing, using my own dyslexic and dyscalculic perspective as a lens. The findings below helped me frame Rebel Racing’s UX strengths and risks in a way that is easier to explain to designers and producers.
⭐ MICRO-STAR SUMMARY – Comparative Findings
Situation: During the Rebel Racing project I ran a short visual UX benchmark against Asphalt 9 to understand how similar mobile racers handle menu clarity, taps-to-driving, HUD readability, reward pacing, and return-to-hub flow.
Task: Compare the first-minute path, HUD readability, clarity of labels, reward pacing, and any hiccups or friction points in the standard race loop.
Action: Opened both apps, timed taps-to-driving, reviewed HUD readability, checked clarity of results and reward steps, and noted anything that slowed the player down or was easy to miss (from a dyslexic and dyscalculic tester’s perspective).
Result: Rebel Racing reached driving quicker (4 taps) than Asphalt 9 (6 taps). Asphalt 9 had clearer HUD labels overall, while Rebel Racing contained several readability risks in white-on-bright UI elements.
📊 Summary Metrics
- Taps to driving: Rebel Racing: 4 • Asphalt 9: 6
- Menu clarity: Both strong, but Asphalt 9’s large yellow Play button was clearer
- HUD readability: Asphalt 9: clearer labels • Rebel Racing: some hard-to-read small text and unboxed labels
- Reward pacing: Both smooth; Asphalt 9 had slight friction due to “Next” changing into “Miss Out”
- Hiccups: None observed in either title
🏁 Result and takeaway
Result: Rebel Racing reaches driving fastest (4 taps). Asphalt 9’s HUD readability was stronger due to clearer labels and boxed text.
Takeaway: Rebel Racing’s core loop is faster but could benefit from improved readability in HUD elements, especially small unboxed text.
🧠 What I learned
- Keep charters tight and scoped to one behaviour. Splitting interruptions, network flips, heat and UI scaling into separate surfaces made each run cleaner, easier to repeat and simpler to compare.
- Evidence must be fast to review. Short 10–30 second clips of each run told the story better than long recordings and made RR-1 (Rewards soft lock) and RR-37 (Rewards BGM tempo issue after lock/unlock) immediately clear during review.
- Baseline numbers matter. Basic metrics like number of taps from Title to control and how long it takes for the player to get first feedback made it easier to compare the games and spot changes later.
- Visual-only peer benchmarks are surprisingly effective. Comparing Rebel Racing with Asphalt 9 helped me talk about HUD readability risks and pacing strengths with much more confidence.
- Write notes for “future me”. Clear, plain-language steps meant I could pick the project up the next day without re-learning the flow.
- Tester context matters. As someone who is dyslexic and dyscalculic, documenting readability issues explicitly helped me explain why small unboxed HUD text or pale UI labels are real accessibility risks.
- Interruptions are more than just “does it crash”. Testing alarms, notification shade, Home/Return and screen lock/unlock showed that even when the flow survives, small audio issues can still slip through, so it is worth treating audio recovery as part of the interruption surface.
- Keep admin light. The project worked best when the workbook supported the testing, not the other way round. Clear tables, simple IDs and a single source of truth kept everything easy to maintain.
🔚 Conclusion
Exploratory and edge-case pass complete on Rebel Racing (Android, Moto g54 5G). I kept device coverage realistic on a single phone, ran daily golden-path smoke checks, pushed interruptions, network changes and basic scaling, and documented one high-impact rewards soft lock plus one low-severity audio issue, both with clear repro and short 1080p evidence.
- Coverage delivered: launch → hub → race → rewards golden path, daily smoke runs, alarms and notification interruptions, Home, app-close behaviour and screen lock/unlock, LTE and Wi-Fi switching, light performance and device-feel checks, UI scaling and readability, a visual-only Bluestacks layout/aspect check, and a peer-style UX benchmark against Asphalt 9.
- Highest-impact finding: rewards soft lock after an alarm and app close that leaves the Continue button unresponsive and effectively forces a kill or pause of the app (RR-1), plus a separate low-severity audio issue where rewards BGM speeds up after lock/unlock (RR-37), and smaller UX risks around LTE load delays and HUD readability.
- Evidence maturity: every key finding is backed by short clips, workbook entries, and a simple device and network matrix, with bug and STAR-style summaries that could be lifted straight into Jira.
Up next: I am moving on to a one-week regression-testing project on Sworn (PC), focused on verifying recent fixes against patch notes, checking save/load safety, session start/quit flows, stamina and quest systems, and UI readability, and on catching any side effects introduced by the latest update.
Email Me • Connect on LinkedIn • Back to Manual Portfolio hub
📎 Disclaimer
This is a personal, non-commercial portfolio for educational and recruitment purposes. I’m not affiliated with or endorsed by any game studios or publishers. All trademarks, logos, and game assets are the property of their respective owners. Any screenshots or short clips are included solely to document testing outcomes. If anything here needs to be removed or credited differently, please contact me and I’ll update it promptly.