Iteration 5: Test Doubles and More Patterns

Deadline

Always at 23:59 on the same day as your TA session (except if another deadline has been agreed with the TA).

Consult the deadline for your class on the delivery plan.

Learning Goals

In this iteration, the learning goals are: using test doubles to bring "randomness" under automated test control, and applying the State and Strategy patterns.

Prerequisites

Carefully read FRS § 37.4 which outlines the new EpsilonStone, ZetaStone, and EtaStone variants.

Kata

Spend the first 15-20 minutes of the class in plenum discussing...

... who stores attack output by player?

EpsilonStone and ZetaStone introduce deliberately tricky winning strategies: accumulating attack output by a player and even accumulating attack output only after a certain number of turns.

This introduces two important design decisions: Who stores the 'attack output of a player'? and the follow-up decision: How is that attack output sum updated when an attack occurs?

Given our design there are three choices of objects to keep the information:

  1. The Game object
  2. The Strategy object
  3. A third object

Discuss the benefits and liabilities of each design choice using terms from our maintainability discussion, cohesion and coupling in particular.
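As a concrete illustration of choice 2 (the Strategy object keeps the information), here is a minimal sketch of a winner strategy that itself accumulates attack output. All names (`WinnerStrategy`, `noteAttack`, the threshold) are illustrative assumptions for the discussion, not the actual HotStone interfaces:

```java
// Hypothetical sketch: the Strategy object (choice 2) stores the
// accumulated attack output per player. All names are assumptions.
import java.util.EnumMap;
import java.util.Map;

enum Player { FINDUS, PEDDERSEN }

interface WinnerStrategy {
  void noteAttack(Player who, int attackOutput); // notified on every attack
  Player getWinner();                            // null if no winner yet
}

class AccumulatingWinnerStrategy implements WinnerStrategy {
  private final Map<Player, Integer> attackSum = new EnumMap<>(Player.class);
  private final int winningThreshold;

  AccumulatingWinnerStrategy(int winningThreshold) {
    this.winningThreshold = winningThreshold;
    for (Player p : Player.values()) attackSum.put(p, 0);
  }

  @Override public void noteAttack(Player who, int attackOutput) {
    // Update the running sum for the attacking player
    attackSum.merge(who, attackOutput, Integer::sum);
  }

  @Override public Player getWinner() {
    for (Player p : Player.values())
      if (attackSum.get(p) >= winningThreshold) return p;
    return null; // no winner yet
  }
}
```

Note the trade-off this sketch exposes: cohesion is high (the data lives next to the algorithm that needs it), but the Game must now notify the strategy of every attack, which increases coupling between the two.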

... testing card effects?

... the actual code required to implement this Spy kata requires a 'private interface', so the kata is a bit premature. Sorry. - Henrik

Many of our strategies mutate our Game instance, and so far we have stated that this is by definition integration testing: you have two objects and their interaction under test.

Test doubles solve this, and allow us to unit test our strategies by replacing the Game instance with a Test Spy. Remember: a spy simply records whatever methods are called as well as what parameters were passed, a record that the JUnit test case can then inspect.
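To make the recording idea concrete, here is a minimal sketch of such a spy for a 'reduce hero health' method. The `MutableGame` interface and the `deltaHeroHealth` method name are assumptions for illustration only, not the actual HotStone interfaces:

```java
// Hypothetical sketch of a test spy recording 'reduce hero health' calls.
// Interface and method names are assumptions, not the real HotStone code.
import java.util.ArrayList;
import java.util.List;

enum Player { FINDUS, PEDDERSEN }

interface MutableGame {
  void deltaHeroHealth(Player who, int delta);
}

class SpyGame implements MutableGame {
  // Record each interaction as a simple string; a richer record
  // (e.g. a list of call objects) works equally well.
  private final List<String> callLog = new ArrayList<>();

  @Override public void deltaHeroHealth(Player who, int delta) {
    callLog.add("deltaHeroHealth(" + who + ", " + delta + ")");
  }

  // Inspection method used afterwards by the JUnit test code
  public List<String> getCallLog() { return callLog; }
}
```

A test can then exercise the real effect strategy with the spy and assert that the call log contains exactly `deltaHeroHealth(PEDDERSEN, -1)`.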

Consider the following GWT formulation of a test of the Brown Rice card being played (effect: the opponent hero's health is lowered by one):

  // Given a BrownRice and a Game test spy
  StandardCard brownRice = [...]
  SpyGame spyGame = [...]
  // When the brown rice card is played
  [use the strategy's method directly, not calling via Game]
  // Then proper hero health is reduced
  [assert that the spy game's method for reducing the correct hero health
    has been called.]

Discuss the following questions

  1. Sketch a game spy implementation for the 'reduce hero health' method: how would you record the interaction, and how would you later inspect that record in the JUnit test code?
  2. The concrete case of Brown Rice probably dictates that one or more methods of the Game must be stub methods (test stub behavior). Which one(s)?
  3. Assess roughly the test case's size in terms of lines of code when doing integration testing (real game and real strategy) versus doing unit testing (spy game and real strategy).
  4. Assess the effort in writing the spy game code. Is it worth it? What defines the balancing point where it is, or is not, a good idea to write a test spy?

Remember, you do not need to 'solve' the kata exercises by coding them; it is more a discussion/reflection whose outcome can influence your own design in your particular HotStone work.

Exercises

EpsilonStone

Develop the EpsilonStone variant using a compositional approach by refactoring the existing HotStone production code. As much production code as possible must be under automated testing control.

ZetaStone

Develop ZetaStone by refactoring the existing HotStone production code.
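Since the ZetaStone winning algorithm combines two existing winning behaviors, a State pattern sketch may clarify the intended design: the strategy delegates to one of two composed strategies and switches its internal 'state' (the active delegate) once a round threshold is passed. The names, the String return type, and the dummy delegates below are assumptions for illustration; in your design the delegates would be the real BetaStone and EpsilonStone winner strategies:

```java
// Hypothetical State pattern sketch for a ZetaStone-like winner strategy.
// All names are illustrative; the two dummy delegates stand in for the
// real BetaStone and EpsilonStone winner strategies.
interface WinnerStrategy {
  String getWinner(int roundNumber); // null if no winner yet
}

class EarlyGameWinnerStrategy implements WinnerStrategy {
  @Override public String getWinner(int roundNumber) { return null; }
}

class LateGameWinnerStrategy implements WinnerStrategy {
  @Override public String getWinner(int roundNumber) { return "Findus"; }
}

class SwitchingWinnerStrategy implements WinnerStrategy {
  private WinnerStrategy state;       // current state: the active delegate
  private final WinnerStrategy late;
  private final int switchRound;

  SwitchingWinnerStrategy(WinnerStrategy early, WinnerStrategy late,
                          int switchRound) {
    this.state = early;
    this.late = late;
    this.switchRound = switchRound;
  }

  @Override public String getWinner(int roundNumber) {
    // One-way state change once the threshold round is reached
    if (roundNumber >= switchRound) state = late;
    return state.getWinner(roundNumber);
  }
}
```

Note how the existing strategies are reused purely by composition: the switching strategy holds references to them and changes which one it delegates to, rather than copying or inheriting their code.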

EtaStone

Develop EtaStone by refactoring the existing HotStone production code.

  1. Sketch a compositional design using UML for EtaStone.
  2. Implement the variant using TDD by refactoring the existing HotStone system.

Deliveries:

  1. Develop on a feature branch "iteration5". (You can of course make sub-branches for the individual exercises if you want.)
  2. You should document your group's work by following the more detailed requirements defined in the iteration5 report template (pdf) (LaTeX source).

Comments of a 'not required to learn' nature...

A mocking library like Mockito can make using test spies more effective. As an example, my test of the Brown Rice effect looks similar to this:

    // Given a mock of the full Game (including mutator private interface)
    mockGame = mock(FullGame.class);
    // When brown rice is played by Findus
    [... the brown rice card effect is triggered here...]
    // Then Game's private method to reduce Hero health is called with (PEDDERSEN, -1)
    verify(mockGame).deltaHeroHealth(Player.PEDDERSEN, -1);
        
(The syntax of the Mockito library takes some time to get used to.)

Note that the only aspect being tested here is that the 'deltaHeroHealth()' method is called with parameters PEDDERSEN and -1. That this method actually is "code that works" is tested elsewhere.

Evaluation criteria

Your submission is evaluated against the learning goals and adherence to the submission guidelines. The grading is explained in Grading Guidelines. The TAs will use the Iteration 5 Grade sheet to evaluate your submission.

The rubric 'Effects Design' was renamed to 'EtaStone Design' on 6th Oct, to better reflect the contents of your report. The criteria are the same, though.

Learning Goal Assessment parameters
Submission Git repository contains a merge request for branch "iteration5". Git repository is not public! Required artefacts (report following the template) must be present.
Test Double The "randomness" of the 'select minion algorithm' is properly encapsulated in a test double using a compositional design. JUnit test cases for EpsilonStone and/or hero power methods are deterministic and correct. Argumentation for use (or not) of test doubles when testing ZetaStone winner strategy is sound. The design is clearly and correctly documented in the report.
State Pattern The ZetaStone winning algorithm is designed and coded using a State pattern. Existing winning strategies from BetaStone and EpsilonStone are properly reused by composition (i.e. no source-code-copy reuse, no polymorphic reuse, etc.). Decisions on where state is stored (accumulated attack output, etc.) are proper and correctly documented in the report.
EtaStone Design The EtaStone/card effect is designed and coded using a compositional/3-1-2 approach. The design is clearly and correctly documented in the report.
UML The UML diagrams are syntactically correct, correctly reflect the patterns/doubles, and give a correct overview of the architecture. The UML diagrams do not show irrelevant implementation-oriented details.
TDD and Clean Code TDD process has been applied. Test code and Production code keeps obeying the criteria set forth in the previous iterations, including adhering to Clean Code properties for newly developed code. The requirements of EpsilonStone, ZetaStone, and EtaStone are correctly implemented. Missing features are noted in the backlog (Minor omissions allowed).