Always at 23.59 on the same day as your TA session (except if another deadline has been agreed with the TA).
Consult the deadline for your class on the delivery plan.
In this iteration, the learning goals are: using test doubles to get "randomness" under automated test control, and applying the State and Strategy patterns.
Carefully read FRS § 37.4 which outlines the new EpsilonStone, ZetaStone, and EtaStone variants.
Spend the first 15-20 minutes of the class in plenum discussing...
EpsilonStone and ZetaStone introduce deliberately tricky winning strategies: accumulating the attack output of a player, and even accumulating that attack output only after a certain number of turns.
This introduces two important design decisions: Who stores the 'attack output of a player'? And the follow-up decision: How is that attack output sum updated when an attack occurs?
Given our design there are three choices of objects to keep the information:
Discuss benefits and liabilities of each design choice using terms from our maintainability discussion, cohesion and coupling in particular.
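As a sketch of one of these choices (all names here are hypothetical, not the HotStone production interfaces): the winning strategy object itself could store the accumulated attack output per player, and be notified by Game whenever an attack occurs:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the winner strategy itself stores the accumulated
// attack output per player and is told about every attack that occurs.
// Player, WinnerStrategy, and the threshold are illustrative only.
enum Player { FINDUS, PEDDERSEN }

interface WinnerStrategy {
  /** Called by Game whenever a minion attacks, so the sum can be updated. */
  void onAttack(Player attacker, int attackOutput);
  /** Returns the winner, or null if there is no winner yet. */
  Player computeWinner();
}

class AccumulatedAttackWinnerStrategy implements WinnerStrategy {
  private final Map<Player, Integer> attackSum = new HashMap<>();
  private final int winningThreshold;

  public AccumulatedAttackWinnerStrategy(int winningThreshold) {
    this.winningThreshold = winningThreshold;
  }

  @Override
  public void onAttack(Player attacker, int attackOutput) {
    // Add this attack's output to the attacker's running total.
    attackSum.merge(attacker, attackOutput, Integer::sum);
  }

  @Override
  public Player computeWinner() {
    for (Map.Entry<Player, Integer> e : attackSum.entrySet()) {
      if (e.getValue() >= winningThreshold) return e.getKey();
    }
    return null; // no winner yet
  }
}
```

The liability of this particular choice is extra coupling: Game must now notify the strategy on every attack, which is exactly the kind of trade-off to weigh in cohesion/coupling terms.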
... the actual code, if you want to implement this Spy kata, requires 'Private interface', so the kata is a bit premature. Sorry. - Henrik
Many of our strategies mutate our Game instance, and so far we have stated that this is by definition integration testing: you have two objects and their interaction under test.
Test doubles actually solve this, and allow us to unit test our strategies by replacing the Game instance with a Test Spy. Remember: a spy just records whatever methods are called, as well as what parameters were passed; a record that the JUnit test case can then inspect.
Consider the following GWT formulation of a test of the Brown Rice card being played (effect: the opponent hero's health is lowered by one):
// Given a BrownRice and a Game test spy
StandardCard brownRice = [...]
SpyGame spyGame = [...]
// When the brown rice card is played
[use the strategy's method directly, not calling via Game]
// Then proper hero health is reduced
[assert that the spy game's method for reducing the correct hero health has been called.]
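To make the spy idea concrete, here is a minimal hand-written spy sketch (the interface fragment and all names are hypothetical; the real HotStone interfaces differ). The spy does nothing except record which methods were called with which parameters, so the test case can inspect the recording afterwards:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, minimal Game interface fragment: just the one mutator
// method the Brown Rice effect needs.
interface MutableGame {
  void deltaHeroHealth(String player, int delta);
}

// Hand-written Test Spy: records every call as a readable string.
class SpyGame implements MutableGame {
  private final List<String> recordedCalls = new ArrayList<>();

  @Override
  public void deltaHeroHealth(String player, int delta) {
    recordedCalls.add("deltaHeroHealth(" + player + ", " + delta + ")");
  }

  public List<String> getRecordedCalls() {
    return recordedCalls;
  }
}
```

A unit test can then pass the spy to the effect strategy under test and assert that `getRecordedCalls()` contains the expected call, without any real Game object being involved.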
Discuss the following questions
Remember, you do not need to 'solve' the kata exercises by coding them; it is more a discussion/reflection whose outcome can influence your own design in your particular HotStone work.
Develop the EpsilonStone variant using a compositional approach by refactoring the existing HotStone production code. As much production code as possible must be under automated testing control.
Develop ZetaStone by refactoring the existing HotStone production code.
Develop EtaStone by refactoring the existing HotStone production code.
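The ZetaStone exercise is where the State pattern comes in: before a certain round the BetaStone winner strategy applies, and from that round on the EpsilonStone (accumulated attack) strategy applies. A rough sketch, with hypothetical names and an illustrative round threshold (not the HotStone production API), reusing both existing strategies by composition:

```java
// Hypothetical sketch of a State-pattern ZetaStone winner strategy.
// The current 'state' is simply a reference to whichever existing winner
// strategy is active this round; no source code is copied from either.
interface RoundWinnerStrategy {
  /** Returns the winner's name, or null if there is no winner yet. */
  String computeWinner(int roundNumber);
}

class ZetaStoneWinnerStrategy implements RoundWinnerStrategy {
  private final RoundWinnerStrategy betaStrategy;
  private final RoundWinnerStrategy epsilonStrategy;
  private final int switchRound;
  private RoundWinnerStrategy state; // the current state object

  public ZetaStoneWinnerStrategy(RoundWinnerStrategy beta,
                                 RoundWinnerStrategy epsilon,
                                 int switchRound) {
    this.betaStrategy = beta;
    this.epsilonStrategy = epsilon;
    this.switchRound = switchRound;
    this.state = beta;
  }

  @Override
  public String computeWinner(int roundNumber) {
    // State transition: pick the strategy active in this round, then
    // delegate the actual winner computation to it.
    state = (roundNumber < switchRound) ? betaStrategy : epsilonStrategy;
    return state.computeWinner(roundNumber);
  }
}
```

Note that where the accumulated attack state itself lives (in the Epsilon strategy, in Game, or elsewhere) is exactly the design decision to discuss and document in the report.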
Deliveries:
A mock library like Mockito can make using test spies more effective. As an example, my test of Brown Rice effect looks similar to this:
// Given a mock of the full Game (including mutator private interface)
mockGame = mock(FullGame.class);
// When brown rice is played by Findus
[... the brown rice card effect is triggered here ...]
// Then Game's private method to reduce Hero health is called with (PEDDERSEN, -1)
verify(mockGame).deltaHeroHealth(Player.PEDDERSEN, -1);

(The syntax of the Mockito library takes some time to get used to.)
Note that the only aspect being tested here is that the 'deltaHeroHealth()' method is called with parameters PEDDERSEN and -1. That this method actually is "code that works" is tested elsewhere.
Your submission is evaluated against the learning goals and adherence to the submission guidelines. The grading is explained in Grading Guidelines. The TAs will use the Iteration 5 Grade sheet to evaluate your submission.
The rubric 'Effects Design' has been renamed to 'EtaStone Design' as of 6 Oct, to better reflect the contents of your report. The criteria are the same, though.
| Learning Goal | Assessment parameters |
| --- | --- |
Submission | Git repository contains merge request for branch "iteration5". Git repository is not public! Required artefacts (report following the template) must be present. |
Test Double | The "randomness" of the 'select minion algorithm' is properly encapsulated in a test double using a compositional design. JUnit test cases for EpsilonStone and/or hero power methods are deterministic and correct. Argumentation for use (or not) of test doubles when testing ZetaStone winner strategy is sound. The design is clearly and correctly documented in the report. |
State Pattern | The ZetaStone winning algorithm is designed and coded using a State pattern. Existing winning strategies from BetaStone and EpsilonStone are properly reused by composition (i.e. no source-code-copy reuse, no polymorphic reuse, etc.). Decisions on where state is stored (accumulated attack output, etc.) are proper and correctly documented in the report. |
EtaStone Design | The EtaStone/card effect is designed and coded using a compositional/3-1-2 approach. The design is clearly and correctly documented in the report. |
UML | The UML diagrams are syntactically correct, correctly reflect the patterns/doubles, and give a correct overview of the architecture. The UML diagrams do not show irrelevant, implementation-oriented details. |
TDD and Clean Code | The TDD process has been applied. Test code and production code keep obeying the criteria set forth in the previous iterations, including adhering to Clean Code properties for newly developed code. The requirements of EpsilonStone, ZetaStone, and EtaStone are correctly implemented. Missing features are noted in the backlog (minor omissions allowed). |