Manual Testing
Purpose: Execute and document manual tests for PermaplanT releases to ensure quality and track issues
Audience: Testers
Status: Draft
Rationale
Manual testing is essential for:
- validating features that are difficult or impractical to automate
- testing user experience and visual elements across browsers and devices
- ensuring critical functionality works before releases
- discovering edge cases and integration issues that automated tests might miss
- providing systematic documentation of test results for quality tracking
Quick Reference
Before starting, review the Manual Testing Guidelines for:
- Test case format and structure
- Test status definitions (✅ ⚠️ ❌)
- Issue linking requirements
- Report structure and naming
Key Reminders
- Always execute `00_guided_tour.md` first - Requires a local build (cannot use dev.permaplant.net)
- Link all issues - Add the GitLab issue number in Notes for failed/problematic tests
- Check for duplicates - Search existing issues before creating new ones
- Fix test case errors immediately - Update test case files when you find errors
- Use the test user - `testuser_t@permaplant.net` from `testusers.md`
Workflow Steps
1. Preparation
Who: Tester
When: Before starting manual testing cycle
Actions:
- Ensure you have access to the test environment (local build required for guided tour; otherwise local build or dev.permaplant.net).
- Locate the test user credentials in `doc/tests/testusers.md`.
- Verify that the test user `testuser_t@permaplant.net` is set up with the necessary materials.
- Access the test images in Nextcloud (log in via "Anmelden mit Keycloak" → "Photos" directory).
- For mobile testing, prepare the required devices:
  - OnePlus 5T (6.01", 1080x2160)
  - Samsung Galaxy A33 5G (6.2", 1080x2400)
  - Generic Windows Tablet (10.1", 1920x1200)
Result: test environment ready with all necessary credentials and materials
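A quick shell sanity check of these prerequisites can help before starting. This is a minimal sketch using the paths named above, not an official project tool:

```bash
# Sketch: pre-flight checks before a manual testing cycle (illustrative only).
# Confirm the test user is documented in the credentials file.
grep -q "testuser_t@permaplant.net" doc/tests/testusers.md \
  && echo "test user documented" \
  || echo "WARNING: testuser_t@permaplant.net missing from doc/tests/testusers.md"

# Confirm the manual test cases are present in the checkout.
ls doc/tests/manual/testcases/
```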
2. Generate Test Report Template
Who: Tester
When: At the start of testing
Actions:
- Run the test report generation tool: `tools/release/generate-test-report.sh` (see the sketch below).
- This creates a new report file in `doc/tests/manual/reports/`, named with the current date (format: YYMMDD).
- Open the generated report file and fill in the General section:
  - tester name
  - date/time
  - commit/tag being tested
  - setup (local build required for guided tour; otherwise local build or dev.permaplant.net)
Result: new test report file created with all test cases and ready to fill out
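For example, a session might start like this. The dated filename shown is hypothetical; only the YYMMDD naming convention comes from this document:

```bash
# Generate a fresh report pre-filled with all test cases.
tools/release/generate-test-report.sh

# The new file lands in doc/tests/manual/reports/, named by date (YYMMDD),
# e.g. doc/tests/manual/reports/240315.md (hypothetical filename).
ls -t doc/tests/manual/reports/ | head -n 1
```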
3. Execute Test Cases
Who: Tester
When: During testing session
Actions:
- IMPORTANT: Always execute `00_guided_tour.md` first.
  - If you exit the tour mid-test, you must reset your local database to take it again.
- Work through the test cases in alphabetical order (the numeric filename prefixes keep them in the intended sequence; see the listing sketch below).
- For each test case:
  - Read the Description, Given, When, Then sections.
  - Execute the test following the steps.
  - Record the actual result.
  - Assign a test status using exactly one emoji: ✅ ⚠️ ❌ (see Test Status Classification).
  - Document any deviations in the Notes field.
- For collaboration tests: Follow collaboration instructions in test case descriptions.
- If test case has errors: Fix the test case file immediately (see Test Case Maintenance).
- Try to reproduce issues in another browser to check for browser-specific problems (see Browser Testing).
Result: all test cases executed with results documented
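To see the execution order, a sorted listing of the test case directory is enough. All filenames below except `00_guided_tour.md` are hypothetical:

```bash
# The numeric prefixes make a plain alphabetical sort match the intended
# execution order, so 00_guided_tour.md always comes first.
ls doc/tests/manual/testcases/ | sort
# 00_guided_tour.md
# 01_login.md           (hypothetical filename)
# 02_map_editor.md      (hypothetical filename)
# ...
```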
4. Issue Tracking
Who: Tester
When: Immediately when a test fails or is problematic
Actions:
- Search for existing issues before creating new ones (see Avoiding Duplicate Issues):
  - Check previous test reports in `doc/tests/manual/reports/`.
  - Search GitLab issues.
- If the issue already exists: Link to it in the Notes field using `#issue_number`.
- If creating a new issue:
  - Create a GitLab issue with appropriate labels and priority.
  - Add the issue number to the test case Notes field.
  - Link the issue to the test in the report.
- Document blocking dependencies for problematic tests.
- Follow Test Status Classification for issue linking requirements.
Result: all failed and problematic tests linked to GitLab issues
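A duplicate search from the shell might look like this. The error description is a placeholder, not a real defect:

```bash
# Search past reports for a matching error description
# (case-insensitive, filenames only).
grep -ril "drag and drop broken" doc/tests/manual/reports/

# Show matching lines with context to spot an already-linked #issue_number.
grep -rin -A 1 "drag and drop broken" doc/tests/manual/reports/
```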
5. Complete Test Report
Who: Tester
When: After all tests are executed
Actions:
- Calculate the test result counters using the automation tool: `tools/release/count-test-results.sh path/to/your/report.md` (see the example below).
- Update the test counters in the General section with the calculated values.
- Fill out the Error Analysis section:
  - Confirm all failed/problematic tests have GitLab issue links.
- Write the Closing Remarks section:
  - Assess the current state of the software.
  - Evaluate whether the quality objectives were achieved.
  - Identify lessons learned and process improvements.
- Calculate test duration and add to General section.
Result: complete test report ready for review
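A typical invocation, assuming the hypothetical report filename from the earlier sketch; copy the printed counters into the General section:

```bash
# Count the ✅ / ⚠️ / ❌ results in the finished report
# (the dated filename is hypothetical).
tools/release/count-test-results.sh doc/tests/manual/reports/240315.md
```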
6. Review and Submit
Who: Tester
When: After completing test report
Actions:
- Review the report for completeness.
- Ensure all required fields are filled.
- Verify issue links are correct.
- Commit the test report to the repository.
- Share findings with the team.
- Update relevant issues based on test results.
Result: test report committed and findings communicated
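Committing the report might look like the following. The branch name, filename, and commit message are assumptions, since this document does not prescribe them:

```bash
# Commit the finished report (branch, filename, and message are
# assumptions, not project conventions).
git checkout -b test-report-240315
git add doc/tests/manual/reports/240315.md
git commit -m "Add manual test report 240315"
git push -u origin test-report-240315
```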
Related Resources
Documentation
- Manual Testing Guidelines - Rules, standards, and best practices
- Test Case Template: `doc/tests/manual/testcases/README.md` - GIVEN-WHEN-THEN format
- Test Users: `doc/tests/testusers.md` - Test account credentials
Test Files
- Test Cases: `doc/tests/manual/testcases/` - Individual test case files
- Test Reports: `doc/tests/manual/reports/` - Historical test reports
- Report Template: `doc/tests/manual/reports/template.md` - Report header template
Tools
- `tools/release/generate-test-report.sh` - Generate a new test report
- `tools/release/count-test-results.sh` - Calculate test result statistics
Troubleshooting
Cannot Access Nextcloud Test Images
Actions:
- Navigate to Nextcloud login page
- Click "Anmelden mit Keycloak"
- Log in with test user credentials from
doc/tests/testusers.md - Navigate to "Dateien" → "Photos"
Guided Tour Won't Start Again After Exiting
Actions:
- Stop current testing session
- Run `make reset-database` from the project root
- Restart testing from `00_guided_tour.md`
Not Sure Which Test Status to Use
Actions:
- Review Test Status Classification in the guidelines
- ✅ = test passed as expected
- ❌ = feature under test has a defect
- ⚠️ = cannot test due to a blocking dependency
Cannot Find If Issue Already Exists
Actions:
- Search previous test reports: `grep "error description" doc/tests/manual/reports/*.md`
- Search GitLab issues for similar problems
- Check recent reports first (most likely to have related issues)
- See Avoiding Duplicate Issues in guidelines
Test Case Instructions Unclear or Outdated
Actions:
- Fix the test case file immediately in `doc/tests/manual/testcases/`
- Document the change in your test report Notes field
- Continue testing with the corrected version
- See Test Case Maintenance in guidelines