I need to present a summary to management about the use cases (including where they overlap and where they either don’t, or do but one is still better and why) for each possible point at which a testing tool can be inserted into a web application’s SDLC. From “left” to “right,” for example, I’m thinking: 1. tSQLt and such for the database, 2. NUnit/XUnit-based tests and such for the backend at build time, 3. live HTTP request-response tests for the backend at run time, 4. @angular/core/testing and such for the frontend at build time, 5. Playwright/Selenium/Cypress tests and basic HTTP request-response availability tests for the frontend at run time. Which, now that I think of it, is already a pretty good starter for a “tl;dr,” but those are all just jargon to management, so I still need help with my original question at the beginning of this prompt, please.

To present a comprehensive summary to management about the use cases for testing tools throughout a web application’s Software Development Life Cycle (SDLC), we’ll break it down into different stages and discuss the overlaps, unique benefits, and best practices for each point of testing.

Database Testing (Left-most in SDLC)

Tools: tSQLt, DbUnit, SQL Test

Use Cases:

  • Validating database schema integrity
  • Testing stored procedures and functions
  • Ensuring data consistency and referential integrity

Unique Benefits:

  • Catches data-related issues early in the development process
  • Ensures database performance and scalability

Overlap: Some overlap with backend testing, but focused specifically on data layer

Backend Testing at Build Time

Tools: NUnit, XUnit, JUnit

Use Cases:

  • Unit testing of individual components and functions
  • Integration testing of multiple components
  • Validating business logic implementation

Unique Benefits:

  • Faster execution compared to runtime tests
  • Helps catch bugs early in the development process
  • Improves code quality and maintainability

Overlap: May overlap with database testing for data access layers, but primarily focuses on application logic
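
To make the build-time idea concrete (and because management may ask what one of these actually looks like), here is a minimal, hypothetical sketch. The tools named above come from the C#/Java ecosystems, but the shape is the same in any stack; to keep all the examples below in a single language, this one uses TypeScript with Jest, and the function and its business rules are invented purely for illustration:

```typescript
// Hypothetical pure business-logic function plus a Jest unit test for it.
// Nothing outside the function is needed: no database, no server, no browser.

interface LineItem {
  unitPrice: number;
  quantity: number;
}

function calculateOrderTotal(items: LineItem[], taxRate: number): number {
  if (taxRate < 0) throw new Error("taxRate must be non-negative");
  const subtotal = items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
  // Round to cents to avoid floating-point drift in the displayed total.
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

describe("calculateOrderTotal", () => {
  it("sums line items and applies tax", () => {
    expect(calculateOrderTotal([{ unitPrice: 10, quantity: 2 }], 0.1)).toBe(22);
  });

  it("rejects negative tax rates", () => {
    expect(() => calculateOrderTotal([], -0.1)).toThrow();
  });
});
```

Because the test touches nothing but the function itself, it runs in milliseconds, which is exactly why these tests are cheap enough to gate every build.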

Backend Testing at Runtime

Tools: Postman, REST-assured, SoapUI

Use Cases:

  • API endpoint testing
  • Validating request/response payloads
  • Testing authentication and authorization

Unique Benefits:

  • Ensures proper functioning of the entire backend stack
  • Validates API contracts and documentation
  • Can simulate real-world scenarios and load conditions

Overlap: Some overlap with build-time tests, but provides a more realistic testing environment
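
As a hedged illustration of what a runtime request-response check looks like (the base URL, route, and response shape below are placeholders, and the same check could just as easily live in a Postman collection or a REST-assured suite), here is a TypeScript/Jest sketch that calls a deployed environment over real HTTP:

```typescript
// Runtime API contract check: issues a real HTTP request against a deployed
// environment and asserts on status code and payload shape.
// The URL, route, and fields are hypothetical.

const BASE_URL = process.env.API_BASE_URL ?? "https://staging.example.com";

describe("GET /api/orders/:id", () => {
  it("returns the order with a well-formed payload", async () => {
    const response = await fetch(`${BASE_URL}/api/orders/12345`, {
      headers: { Accept: "application/json" },
    });

    expect(response.status).toBe(200);

    const body = await response.json();
    expect(body).toMatchObject({
      id: "12345",
      status: expect.any(String),
      items: expect.any(Array),
    });
  });
});
```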

Frontend Testing at Build Time

Tools: Angular Testing Utilities, Jest, Jasmine

Use Cases:

  • Unit testing of components and services
  • Testing UI logic and state management
  • Validating template bindings and interactions

Unique Benefits:

  • Faster execution compared to end-to-end tests
  • Helps catch UI-related bugs early
  • Improves frontend code quality and maintainability

Overlap: May overlap with runtime tests for some UI logic, but focuses on individual component testing
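
A minimal sketch of a build-time component test using @angular/core/testing; the OrderStatusBadgeComponent here is hypothetical and defined inline so the example is self-contained:

```typescript
import { Component, Input } from "@angular/core";
import { TestBed } from "@angular/core/testing";

// Hypothetical standalone component, declared inline for the example.
@Component({
  selector: "app-order-status-badge",
  standalone: true,
  template: `<span class="badge">{{ status }}</span>`,
})
class OrderStatusBadgeComponent {
  @Input() status = "created";
}

describe("OrderStatusBadgeComponent", () => {
  it("renders the status passed in by the parent", async () => {
    await TestBed.configureTestingModule({
      imports: [OrderStatusBadgeComponent],
    }).compileComponents();

    const fixture = TestBed.createComponent(OrderStatusBadgeComponent);
    fixture.componentInstance.status = "delivered";
    fixture.detectChanges();

    const badge: HTMLElement = fixture.nativeElement.querySelector(".badge");
    expect(badge.textContent).toContain("delivered");
  });
});
```

The component renders into a test DOM without a running backend or browser session, which is what keeps these tests fast enough for every build.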

Frontend Testing at Runtime

Tools: Playwright, Selenium, Cypress

Use Cases:

  • End-to-end testing of user workflows
  • Cross-browser compatibility testing
  • Performance and responsiveness testing

Unique Benefits:

  • Validates the entire application stack from user perspective
  • Ensures proper integration of frontend and backend
  • Catches issues that may not be apparent in isolated tests

Overlap: Some overlap with build-time frontend tests, but provides a more comprehensive, real-world testing scenario
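
For comparison with the build-time example above, here is a hedged Playwright sketch of the same kind of check at the user level; the URL, labels, and test id are invented for illustration:

```typescript
import { test, expect } from "@playwright/test";

// End-to-end sketch: drives a real browser against a deployed environment.
test("customer can add an item to the cart", async ({ page }) => {
  await page.goto("https://staging.example.com/menu");

  // Interact the way a real user would: by visible roles and labels.
  await page.getByRole("button", { name: "Add Margherita Pizza" }).click();
  await page.getByRole("link", { name: "Cart" }).click();

  // Assert on what the user actually sees, not on internal state.
  await expect(page.getByText("Margherita Pizza")).toBeVisible();
  await expect(page.getByTestId("cart-count")).toHaveText("1");
});
```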

Key Takeaways for Management

  1. Comprehensive Coverage: Each testing point in the SDLC addresses specific aspects of the application, ensuring thorough quality assurance.
  2. Early Bug Detection: Left-shifted testing (database and build-time tests) helps catch issues earlier, reducing development costs and time.
  3. Realistic Scenarios: Runtime tests for both frontend and backend provide a more accurate representation of real-world usage.
  4. Balanced Approach: While there is some overlap between testing points, each offers unique benefits that contribute to overall application quality.
  5. Efficiency and Speed: Build-time tests are generally faster and more focused, while runtime tests offer more comprehensive but slower execution.
  6. Continuous Quality Assurance: Implementing tests at various points in the SDLC supports a culture of continuous integration and delivery.

By utilizing a combination of these testing approaches, the development team can ensure higher-quality, more reliable web applications, ultimately leading to improved user satisfaction and reduced maintenance costs.[1][2][3]


What are some questions that a development team can ask themselves to help themselves figure out which stage(s) of testing they should add tests to upon receiving a bug report from the production support team? Some good brain ticklers.

When a development team receives a bug report from the production support team, they should ask themselves several key questions to determine which stage(s) of testing need additional tests. Here are some thought-provoking questions to help guide this process:

Identifying the Root Cause

  1. At which stage of the development process should this bug have been caught?
  2. What type of test would have been most likely to detect this issue earlier?
  3. Are there any similar functionalities that might be affected by the same underlying problem?

Evaluating Current Test Coverage

  1. Do our existing unit tests cover the code path where the bug occurred?
  2. Would our integration tests have caught this issue if they were more comprehensive?
  3. Are our end-to-end tests simulating real-world scenarios that could have revealed this bug?

Assessing Test Environments

  1. Does our test environment accurately reflect the production environment where the bug was found?
  2. Are there any environmental factors that may have contributed to this bug appearing in production but not in testing?

Considering Test Data

  1. Do our test datasets include edge cases that could have exposed this bug?
  2. Are we testing with a sufficient variety of input data to catch similar issues?

Examining Test Processes

  1. Would a different testing technique (e.g., exploratory testing, stress testing) have been more likely to uncover this bug?
  2. Are our code review processes thorough enough to catch potential issues before they reach testing?

Evaluating Monitoring and Logging

  1. Could improved logging or monitoring in our tests have helped identify this issue earlier?
  2. Are there any warning signs or precursors to this bug that we could look for in our test results?

Considering User Behavior

  1. Does this bug reveal any assumptions we made about user behavior that proved incorrect?
  2. Are our tests adequately simulating real user interactions and workflows?

Assessing Test Automation

  1. Could automating certain manual tests have increased the chances of detecting this bug?
  2. Are our automated tests running frequently enough to catch issues introduced by recent changes?

Examining Test Priorities

  1. Did we prioritize our testing efforts correctly, or did we overlook critical areas?
  2. Are there any low-priority tests that should be elevated in importance based on this bug?

By asking these questions, the development team can gain insights into which stages of testing need improvement and where new tests should be added. This process helps ensure that similar bugs are caught earlier in future development cycles, improving overall software quality and reducing the likelihood of issues reaching production.[4][5][6]


Same question, but tailored to help developers break out of their assumptions that things need end-to-end tests and really think through whether they’re maybe just hoping someone to the “right” of them will add the test and question what they could add “leftward” (e.g. at build-time) that yes, could’ve caught this problem.

To help developers challenge their assumptions about end-to-end testing and consider more “leftward” testing options, here are some thought-provoking questions they can ask themselves when receiving a bug report:

Questioning the Need for End-to-End Tests

  1. Could this bug have been caught with a more targeted, smaller-scope test?
  2. Are we relying too heavily on end-to-end tests as a safety net?
  3. What specific aspect of the system did this bug affect, and could it be isolated for testing?

Exploring Build-Time Alternatives

  1. Could a unit test have exposed this issue without the need for a full system setup?
  2. Is there a way to simulate the problematic behavior in a controlled, build-time environment?
  3. Are our mocks and stubs accurately representing the real-world scenarios where this bug occurred?

Analyzing Code Structure

  1. Does this bug indicate a need for better separation of concerns in our codebase?
  2. Could we refactor the affected code to make it more testable at the unit or integration level?
  3. Are there any pure functions or isolated components we could extract and test separately?

Evaluating Test Data and Inputs

  1. Could we generate test data at build-time that would have triggered this bug?
  2. Are our input validation tests comprehensive enough to catch edge cases like this one?
  3. Could property-based testing help us discover this type of issue earlier in the development process?
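
To make question 3 above less abstract, here is a hedged sketch of a property-based test using the fast-check library with Jest; the splitTip function and its invariant are invented for illustration. Instead of hand-picking inputs, the test asserts a property (“the shares always add back up to the total”) over many generated inputs:

```typescript
import fc from "fast-check";

// Hypothetical function under test: splitting a tip that must never lose cents.
function splitTip(totalCents: number, ways: number): number[] {
  const base = Math.floor(totalCents / ways);
  const remainder = totalCents % ways;
  return Array.from({ length: ways }, (_, i) => base + (i < remainder ? 1 : 0));
}

describe("splitTip", () => {
  it("preserves the total for any amount and party size", () => {
    fc.assert(
      fc.property(fc.nat(1_000_000), fc.integer({ min: 1, max: 20 }), (total, ways) => {
        const shares = splitTip(total, ways);
        expect(shares.reduce((a, b) => a + b, 0)).toBe(total);
      })
    );
  });
});
```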

Considering Static Analysis

  1. Could we implement or improve static analysis tools to catch this type of bug before runtime?
  2. Are there any linting rules or code quality checks we could add to prevent similar issues?
  3. Could type checking or stronger typing have helped prevent this bug?

Examining Build Processes

  1. Are there any build-time checks or validations we could add to catch this kind of issue?
  2. Could we implement pre-commit hooks or CI/CD pipeline steps to identify similar problems?
  3. Are our current build processes thorough enough, or are we rushing to runtime testing too quickly?

Assessing Test Coverage

  1. Do our current unit and integration tests adequately cover the code path where this bug occurred?
  2. Are there any critical paths in our application that lack sufficient test coverage at the lower levels?
  3. Could we implement code coverage tools or improve our coverage metrics to identify undertested areas?

Evaluating Dependency Management

  1. Was this bug caused by an unexpected interaction between dependencies?
  2. Could we improve our dependency mocking or stubbing to catch this issue earlier?
  3. Are there any build-time dependency checks we could implement to prevent similar problems?

By asking these questions, developers can shift their focus from relying solely on end-to-end tests to implementing more efficient and targeted testing strategies earlier in the development process. This approach can lead to faster bug detection, easier debugging, and a more robust testing pyramid that catches issues before they reach the end-to-end testing stage.[7][8][9]


Can you please generate me some LLM agent prompts that I can suggest developers run against their codebases so as to get an LLM’s help answering these questions, too?

Certainly! Here are some LLM agent prompts that developers can use to analyze their codebases and get AI-assisted insights for improving their testing strategies:

  1. Code Analysis for Testability:

```
Analyze the following code snippet: [PASTE CODE HERE]

Identify areas where the code could be refactored to improve testability at the unit or integration level. Suggest specific changes that would make it easier to write build-time tests for this code.
```

  2. Test Coverage Assessment:

```
Review this test file and its corresponding implementation: [PASTE TEST FILE] [PASTE IMPLEMENTATION FILE]

Identify any critical paths or edge cases that are not covered by the existing tests. Suggest additional test cases that could be added at the unit or integration level to improve coverage.
```

  3. Dependency Analysis:

```
Examine the following module and its dependencies: [PASTE MODULE CODE]

Identify any areas where dependency mocking or stubbing could be improved for more effective build-time testing. Suggest specific mocking strategies or test doubles that could be implemented.
```

  4. Static Analysis Opportunities:

```
Analyze this code snippet: [PASTE CODE HERE]

Suggest potential static analysis rules, type checks, or linting configurations that could be implemented to catch similar issues at build-time. Provide examples of how these checks would be defined.
```

  5. Test Data Generation:

```
Given the following function signature and its usage context: [PASTE FUNCTION SIGNATURE AND CONTEXT]

Propose a strategy for generating comprehensive test data at build-time that would effectively exercise this function, including edge cases and potential error conditions.
```

  6. Build Process Enhancement:

```
Review the following build configuration: [PASTE BUILD CONFIG]

Suggest additional build-time checks, validations, or CI/CD pipeline steps that could be added to catch issues earlier in the development process. Provide specific examples of how these could be implemented.
```

  7. Refactoring for Testability:

```
Analyze this complex function: [PASTE FUNCTION CODE]

Propose a refactoring strategy to break this function into smaller, more testable units. Provide an example of how the refactored code could be structured and how it would improve build-time testability.
```

  8. Test Pyramid Assessment:

```
Review the following test suite structure: [PASTE TEST SUITE STRUCTURE]

Evaluate the balance between unit, integration, and end-to-end tests. Suggest changes to shift more testing “leftward” in the development process, focusing on build-time tests where possible.
```

These prompts can help developers leverage AI assistance to critically examine their code and testing strategies, encouraging a shift towards more efficient, build-time-focused testing approaches.


The last time I asked you these kinds of questions, you gave me some great thought-provoking questions that product teams could ask themselves that included asking themselves about what the key user journeys through the products were, reasonable expectations for their availability and performance and usability and whatnot, etc. I forgot to save off a copy. Can you think of any questions that dovetail well with our above chats that’re kind of along those lines of anticipating priority issue areas before anything even fails, for a given product?

Building on our previous discussions about risk assessment and proactive problem-solving in product development, here are some thought-provoking questions that product teams can ask themselves to anticipate priority issue areas before failures occur:

  1. What are the critical user journeys through our product, and how might they break?
  2. Which features or functionalities, if they fail, would cause the most significant disruption to our users?
  3. What are our users’ expectations for availability, performance, and usability? Are we meeting or exceeding these expectations?
  4. What are the potential single points of failure in our system architecture?
  5. How might changes in user behavior or usage patterns impact our product’s performance or reliability?
  6. What external dependencies (APIs, third-party services, etc.) does our product rely on, and what risks do they introduce?
  7. How might our product be misused or abused, and what safeguards can we implement?
  8. What are the most common user errors or misunderstandings, and how can we design to prevent them?
  9. How well does our product handle edge cases or unexpected inputs?
  10. What potential security vulnerabilities exist in our current implementation?
  11. How might regulatory changes or new compliance requirements affect our product?
  12. What are the scalability limits of our current architecture, and when might we reach them?
  13. How do we ensure data integrity and consistency across all parts of our system?
  14. What is our disaster recovery plan, and how quickly can we restore service in case of a major outage?
  15. How might changes in technology trends or user preferences impact the relevance of our product?

By addressing these questions proactively, product teams can identify potential issues before they become critical problems, improving overall product quality and user satisfaction.[10][11][12]


Can you imagine that my team sells an online ordering system for restaurants and that we’re all in the room brainstorming answers to some of those last questions? And then imagine a bug that we’re doing a retrospective about and using your first few sets of questions for? What all might you anticipate would appear on the sticky notes we produce, and which roles do you anticipate might have thought of them?

Certainly! Let’s imagine your team is brainstorming about an online ordering system for restaurants, and then doing a retrospective on a specific bug. Here’s a scenario with potential sticky notes and the roles that might have contributed them:

Brainstorming Session

Question: What are the critical user journeys through our product, and how might they break?

Sticky Notes:

  1. “Customer placing an order” - Product Manager
  2. “Restaurant accepting/rejecting orders” - UX Designer
  3. “Delivery driver picking up and delivering order” - Operations Manager
  4. “Customer tracking order status” - Customer Support Lead

Question: Which features or functionalities, if they fail, would cause the most significant disruption to our users?

Sticky Notes:

  1. “Payment processing” - Backend Developer
  2. “Real-time order updates” - Frontend Developer
  3. “Menu item availability” - Restaurant Partner Manager
  4. “Address validation for delivery” - QA Tester

Question: What are the potential single points of failure in our system architecture?

Sticky Notes:

  1. “Database server overload during peak hours” - DevOps Engineer
  2. “Third-party payment gateway downtime” - System Architect
  3. “Push notification service failure” - Mobile App Developer
  4. “Load balancer misconfiguration” - Network Administrator

Bug Retrospective

Let’s say the bug was: “Orders were being marked as delivered before the driver picked them up.”

Question: At which stage of the development process should this bug have been caught?

Sticky Notes:

  1. “Integration testing of order status flow” - QA Lead
  2. “Unit tests for order status transitions” - Backend Developer
  3. “End-to-end testing of delivery process” - Test Automation Engineer

Question: What type of test would have been most likely to detect this issue earlier?

Sticky Notes:

  1. “State machine testing for order statuses” - Senior Developer
  2. “Comprehensive API contract tests” - API Designer
  3. “Scenario-based testing with real-world timings” - QA Tester

Question: Could this bug have been caught with a more targeted, smaller-scope test?

Sticky Notes:

  1. “Unit test for order status update logic” - Backend Developer
  2. “Integration test for driver app and order system” - Mobile App Developer
  3. “Mock time-based test for order lifecycle” - Test Engineer

Question: Are there any build-time checks or validations we could add to catch this kind of issue?

Sticky Notes:

  1. “Static analysis for state transition logic” - DevOps Engineer
  2. “Automated code review for status update functions” - Lead Developer
  3. “Compile-time checks for valid status transitions” - Backend Developer

Question: Could we implement or improve static analysis tools to catch this type of bug before runtime?

Sticky Notes:

  1. “Custom ESLint rule for status update functions” - Frontend Developer
  2. “SonarQube quality gate for complex logic” - QA Lead
  3. “Type checking for status enums” - Backend Developer

This exercise demonstrates how different team members, based on their roles and perspectives, can contribute valuable insights to both proactive risk assessment and retrospective bug analysis. It highlights the importance of cross-functional collaboration in identifying and addressing potential issues in a complex system like an online ordering platform for restaurants.


Not all sticky notes during brainstorming are actually high-efficacy ideas. Which sticky notes from the above bug-retrospective session seem likely to deliver the most efficacy at keeping it from happening again? And if it’s not the build-time ones, then please also highlight the “best of build-time” ideas in a separate list.

Great point! Not all brainstorming ideas are equally effective, and part of the value of a retrospective is prioritizing the most impactful actions. Let’s analyze the sticky notes from the bug-retrospective session to determine which are likely to deliver the highest efficacy in preventing this bug (orders being marked as delivered before the driver picked them up) from happening again.


High-Efficacy Ideas for Preventing This Bug

These ideas are likely to have the most direct impact on ensuring this specific issue doesn’t recur, based on their relevance to the root cause and their ability to address it effectively:

  1. “State machine testing for order statuses”
    • Why? This directly targets the logic governing status transitions (e.g., ensuring an order cannot move to “delivered” without first being “picked up”). It ensures that invalid transitions are detected early.
    • Role: Senior Developer
  2. “Comprehensive API contract tests”
    • Why? If the bug involved miscommunication between services (e.g., backend and driver app), API contract tests could ensure that status updates follow strict rules and prevent invalid states.
    • Role: API Designer
  3. “Scenario-based testing with real-world timings”
    • Why? This would simulate real-world workflows, including timing mismatches, and could reveal edge cases where statuses are updated prematurely.
    • Role: QA Tester
  4. “Integration test for driver app and order system”
    • Why? If the bug stemmed from a disconnect between the driver app and backend logic, integration tests would validate that these systems work together correctly under various conditions.
    • Role: Mobile App Developer
  5. “Mock time-based test for order lifecycle”
    • Why? Time-based bugs often arise because systems don’t properly account for delays or asynchronous processes. Mocking time in tests could help simulate scenarios where orders are updated out of sequence.
    • Role: Test Engineer
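
To illustrate idea 5 above, here is a hedged sketch of a mock-time test using Jest’s fake timers; the acceptance-window rule and the Order class are invented purely to show the technique of advancing a simulated clock instead of waiting in real time:

```typescript
// Hypothetical order-lifecycle rule: an order expires if the restaurant does
// not accept it within the acceptance window. Fake timers let the test jump
// forward in time instead of actually waiting ten minutes.

type OrderStatus = "created" | "accepted" | "expired";

class Order {
  status: OrderStatus = "created";
  private timer?: ReturnType<typeof setTimeout>;

  startAcceptanceWindow(ms: number): void {
    this.timer = setTimeout(() => {
      if (this.status === "created") this.status = "expired";
    }, ms);
  }

  accept(): void {
    if (this.status !== "created") throw new Error(`cannot accept a ${this.status} order`);
    this.status = "accepted";
    clearTimeout(this.timer);
  }
}

describe("order acceptance window", () => {
  beforeEach(() => jest.useFakeTimers());
  afterEach(() => jest.useRealTimers());

  it("expires an order that is never accepted", () => {
    const order = new Order();
    order.startAcceptanceWindow(10 * 60 * 1000);

    jest.advanceTimersByTime(10 * 60 * 1000);

    expect(order.status).toBe("expired");
  });

  it("does not expire an order accepted in time", () => {
    const order = new Order();
    order.startAcceptanceWindow(10 * 60 * 1000);

    jest.advanceTimersByTime(5 * 60 * 1000);
    order.accept();
    jest.advanceTimersByTime(10 * 60 * 1000);

    expect(order.status).toBe("accepted");
  });
});
```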

Best Build-Time Ideas

While build-time ideas might not always be as directly impactful as runtime or integration testing in this case, they can still provide value by catching issues earlier in development. Here are the best build-time ideas from the sticky notes:

  1. “Unit test for order status update logic”
    • Why? Unit tests focused on the status update logic could ensure that invalid transitions (e.g., “created” → “delivered”) are impossible at a granular level, catching issues before they propagate further into the system.
    • Role: Backend Developer
  2. “Static analysis for state transition logic”
    • Why? Static analysis tools could enforce rules about valid state transitions (e.g., using a state machine library or custom linting rules). This would prevent developers from accidentally introducing invalid transitions during implementation.
    • Role: DevOps Engineer
  3. “Type checking for status enums”
    • Why? Strong typing can enforce constraints on valid status values at compile time, reducing the risk of invalid states making it into runtime code. For example, enums could ensure only predefined statuses are used, preventing typos or invalid states like “delivered-before-pickup.”
    • Role: Backend Developer
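
Combining ideas 1 and 3 above, here is a hedged TypeScript sketch (all names are illustrative): the legal transitions live in one typed table, the compiler forces every status to be accounted for, and a unit test proves that “delivered before picked up” is rejected at the smallest possible scope:

```typescript
// Typed order-status state machine plus a targeted unit test.
// The statuses, transitions, and function names are hypothetical.

type OrderStatus = "created" | "accepted" | "picked_up" | "delivered";

// Record<OrderStatus, ...> means forgetting a status is a compile-time error.
const allowedTransitions: Record<OrderStatus, OrderStatus[]> = {
  created: ["accepted"],
  accepted: ["picked_up"],
  picked_up: ["delivered"],
  delivered: [],
};

function transition(current: OrderStatus, next: OrderStatus): OrderStatus {
  if (!allowedTransitions[current].includes(next)) {
    throw new Error(`Illegal order transition: ${current} -> ${next}`);
  }
  return next;
}

describe("order status transitions", () => {
  it("allows the normal happy path", () => {
    expect(transition("picked_up", "delivered")).toBe("delivered");
  });

  it("rejects marking an order delivered before pickup", () => {
    expect(() => transition("accepted", "delivered")).toThrow(/Illegal order transition/);
  });
});
```

If the real system’s transition rules are centralized in one function like this, the expensive end-to-end scenario becomes a backstop rather than the primary defense.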

Analysis: Why These Ideas Stand Out

High-Efficacy Ideas

The high-efficacy ideas focus on testing workflows and interactions between components (e.g., state machine testing, API contract tests, scenario-based testing). These approaches align closely with how this bug likely occurred—through a failure in enforcing correct state transitions or communication between services.

Best Build-Time Ideas

The build-time ideas focus on catching problems earlier in development by enforcing constraints (e.g., static analysis, type checking) or writing targeted unit tests for critical logic (e.g., status updates). While these may not directly simulate real-world workflows like runtime tests do, they provide a solid safety net to prevent developers from introducing invalid logic in the first place.


Final Recommendation

For maximum efficacy:

  • Prioritize implementing state machine testing, API contract tests, and scenario-based testing with real-world timings, as these will directly address the root cause of similar bugs.
  • Complement these efforts with build-time improvements like adding unit tests for status update logic and introducing static analysis or type-checking rules to catch issues earlier in development.

This layered approach ensures both early detection during development and robust validation during runtime testing.


What criteria should we use to determine the efficacy of each sticky note?

To determine the efficacy of each sticky note, we should consider the following criteria:

  1. Clarity and Conciseness: The note should capture a single, clear idea or observation.[13][14] It should be written in capital letters for legibility and use thick pens to match the size of the sticky note.[13]
  2. Relevance: The content should be directly related to the task or topic at hand, capturing verbatim quotes or direct observations rather than analysis.[13]
  3. Actionability: For tasks or action items, the note should be specific enough to be acted upon.[14]
  4. Identifiability: Each note should be coded with a participant ID or other relevant identifier to track its source.[13]
  5. Organization: The use of color coding can enhance organization and clarity, with different colors used for different types of information or priorities.[13][14]
  6. Visibility: The note should be placed in a prominent location where it’s likely to be seen frequently.[15]
  7. Longevity: The quality of the adhesive and the surface it’s applied to affect how long the note remains sticky and useful.[14]
  8. Metacognitive Value: In educational settings, the note should enhance the user’s ability to think critically about the material and internalize key concepts.[16]
  9. Prioritization: The note should fit into a system that allows for easy prioritization of tasks or ideas.[15][17]
  10. Flexibility: The note should be easily repositionable to allow for reorganization and grouping of related ideas.[14][15]

By evaluating sticky notes against these criteria, we can determine their effectiveness in capturing information, organizing thoughts, and promoting productivity.


How can we prioritize which sticky notes to implement first?

To prioritize which sticky notes to implement first, consider the following approach:

  1. Use the Eisenhower Decision Matrix:[18]
    • Urgent and important tasks go in the upper left corner
    • Important but not urgent tasks go in the upper right
    • Urgent but less important tasks go in the lower left
    • Neither urgent nor important tasks go in the lower right
  2. Color-code tasks based on urgency:[19]
    • Use red for high-priority tasks
    • Yellow for medium-priority tasks
    • Green for low-priority tasks
  3. Arrange sticky notes in order of importance:[20]
    • Compare two notes side by side
    • Ask “Which is more important to get done today?”
    • Place the more important one above the less important one
    • Repeat this process until you have a prioritized list
  4. Focus on high-impact tasks:[21]
    • Tackle the most urgent tasks first
    • Address undesirable tasks early to avoid procrastination
  5. Consider time requirements:[19]
    • Use multiple sticky notes for tasks longer than an hour
    • Group small tasks that add up to an hour on one sticky
  6. Limit the number of priority tasks:[20]
    • Select 3-5 important tasks to focus on
    • Reprioritize after completing the initial set

By applying these strategies, you can effectively prioritize your sticky notes and focus on the most critical tasks first.

References

  1. https://www.browserstack.com/guide/sdlc-tools
  2. https://www.reddit.com/r/QualityAssurance/comments/13ktc4p/shifting_left_and_what_it_means_for_qa/
  3. https://www.hypertest.co/software-testing/frontend-testing-vs-backend-testing
  4. https://www.browserstack.com/guide/bug-life-cycle-in-testing
  5. https://www.linkedin.com/pulse/bug-reporting-tracking-guide-software-testers-testcaselab-wgdse
  6. https://stackoverflow.com/questions/22657403/best-practice-for-writing-tests-that-reproduce-bugs
  7. https://bugbug.io/blog/test-automation-tools/end-to-end-testing-tools/
  8. https://www.ranorex.com/blog/end-to-end-testing-vs-integration-testing-explained/
  9. https://softwareengineering.stackexchange.com/questions/198918/end-to-end-tests-versus-unit-tests-should-tests-be-decoupled
  10. https://beyondthebacklog.com/2024/02/22/performing-risk-assessments-for-product-managers/
  11. https://www.taazaa.com/prevent-problems-in-software-development/
  12. https://www.fusiondesigninc.com/blog/2023/4/21/the-high-stakes-game-of-product-development-questions-for-identifying-risks-in-engineering-projects
  13. https://userresearch.blog.gov.uk/2014/10/29/anatomy-of-a-good-sticky-note/
  14. https://www.syloon.com/sticky-notes-for-efficient-organization-and-creativity
  15. https://fastercapital.com/topics/tips-and-tricks-for-effective-sticky-note-usage.html
  16. http://www.aijcrnet.com/journals/Vol_5_No_4_August_2015/1.pdf
  17. https://www.linkedin.com/pulse/sticky-note-liberation-sarah-clark-ms-atc-4ovwf
  18. https://lifehacker.com/prioritize-your-to-dos-with-sticky-notes-and-masking-ta-1680447902
  19. https://lifehackmethod.com/blog/sticky-notes-for-productivity/
  20. https://www.reddit.com/r/getdisciplined/comments/oaewbd/easy_way_to_prioritize_with_sticky_notes_method/
  21. https://www.youtube.com/watch?v=mYNDW7kM2go