This repository has been archived by the owner on Mar 27, 2022. It is now read-only.

Sprint 1 wrap up

Afonso Jorge Ramos edited this page Feb 26, 2019 · 1 revision

Sprint 1 wrap-up artifacts

Sprint 1 closed on October 15th, 2018. During the class, we met with the Product Owner and produced the following three artifacts to fully close the iteration and to plan the next one, which started on that same day.

Sprint Review

The sprint review was conducted with the PO; together with him, we reviewed the artifacts produced during the closing iteration. We explained that the single item (#13) we were assigned was not fully implemented because it turned out to be an epic, and instead proceeded to show the smaller items we built out of that epic's decomposition. Overall, the PO was satisfied with the team's progress, and he was pleasantly surprised by our study of the TIC-80 app, which was faster and more complete than he had expected.

What was done

The PO approved the exercise interface we devised, saying that it was more than enough for the current stage of development. As for the API, he also agreed that it was quite complete on both ends (server and FEUP-8), and he suggested that, later on, we should implement all the main HTTP methods in order to create a complete RESTful API. In conclusion, all components related to the item were approved. Finally, we improved our documentation, mainly the README and the diagrams.
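As a rough illustration of what "all the main HTTP methods" could mean for the API, the sketch below maps the usual REST verbs onto a hypothetical /exercises resource. The routes and handler names are assumptions made for illustration only; they are not the project's actual endpoints.

```python
# Hypothetical routing table for a RESTful /exercises resource.
# Each (method, pattern) pair maps to a handler; the handlers here
# just return labels so the mapping itself is the point of the sketch.

def list_exercises():           # GET    /exercises      -> collection
    return "list"

def create_exercise():          # POST   /exercises      -> new resource
    return "create"

def show_exercise(ex_id):       # GET    /exercises/{id} -> one resource
    return f"show {ex_id}"

def replace_exercise(ex_id):    # PUT    /exercises/{id} -> full update
    return f"replace {ex_id}"

def update_exercise(ex_id):     # PATCH  /exercises/{id} -> partial update
    return f"update {ex_id}"

def delete_exercise(ex_id):     # DELETE /exercises/{id} -> removal
    return f"delete {ex_id}"

ROUTES = {
    ("GET", "/exercises"): list_exercises,
    ("POST", "/exercises"): create_exercise,
    ("GET", "/exercises/{id}"): show_exercise,
    ("PUT", "/exercises/{id}"): replace_exercise,
    ("PATCH", "/exercises/{id}"): update_exercise,
    ("DELETE", "/exercises/{id}"): delete_exercise,
}
```

Covering every verb in the table above is what would make the API "complete" in the sense the PO suggested.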

What was not done

The only part that was not done was bringing all the smaller artifacts together to complete the item, that is, transferring an exercise from the server and displaying its information. In particular, the problem lies in the translation between the API result and the rendering of the exercise. This gap was considered minor by both the team and the PO, given that all components are already done and validated. This task will be completed at the start of the next sprint, as it has the utmost priority.

Sprint Retrospective

The sprint retrospective was done mostly within the team, but we also took some technological advice from the PO himself. The agreed points can be classified into four main categories:

What went well

Starting with the positive points of this iteration, the team agrees that using Docker from the very beginning for the Web component was a good decision, and that adopting it later for FEUP-8 was also a good call. The team also praises the development done using GitHub Flow, as it kept us quite organized. We also consider that the choice of technologies and frameworks was the right one, as we were able to use them to do exactly what we wanted (even if they were not easy to work with), including unit-testing all components. Finally, interpersonal relations within the team are good, as we have had no conflicts between team members so far.

What went wrong

The team considers that the user story to be implemented in this iteration was an underestimated epic, which caused many problems with the division of tasks and the segmentation of responsibilities. Towards the end of the sprint those matters were mostly resolved, but the rough start prevented us from fully implementing the feature, as explained in the sprint review. Moreover, the organization of the team is still far from optimal: while GitHub Flow is used extensively, we still have problems stemming from a lack of common vision in some aspects, ambiguous acceptance tests, and not enough time spent together in the same room, which makes communication harder. Finally, the TIC-80 application was challenging to work with, as it had some unfixed bugs and lacked documentation. That lack of information slowed down our development considerably, but it is safe to say that things will improve with time and experience.

What should we keep doing

The team agrees that we should keep making good use of GitHub Flow, as well as containerization for all artifacts. The team will also continue to follow the established coding guidelines. We also consider that our assignment of tasks between team members was good, as all members ended up with a decent global view of the project, regardless of having worked more extensively on the Web or Desktop components; as such, we intend to keep that global awareness as high as possible.

What should we improve

As said before, the team needs to improve its communication, mostly on sharing the same idea of a feature to be implemented; that can only be achieved through prior discussion of the topics and a common vision. Despite our good understanding of the engineering aspects of the application, as mentioned above, the lack of common vision becomes clear when it comes to the design of the graphical and usability aspects. In other words, we should improve our acceptance tests. There is also still room for improvement in our development practices, such as using linters to enforce the coding guidelines. We also intend to improve further upon our GitHub Flow usage by introducing a staging/dev branch, as we had some trouble merging branches. Finally, the PO himself offered us some technological advice, suggesting that we move some responsibilities to the server side of the application, in particular the validation of an exercise using Lua unit tests.
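The PO's suggestion amounts to running an exercise's test cases against a submitted solution on the server rather than in the client. A minimal sketch of that idea is below; it uses plain Python callables standing in for Lua test cases, and the exercise format, function names, and scoring scheme are all assumptions for illustration, not the project's actual design.

```python
# Hedged sketch: server-side validation of a submitted exercise solution.
# Each test case is a pair (args, expected). A crashing solution simply
# fails that case rather than aborting the whole run.

def validate_submission(solution, tests):
    """Run every test case against the submitted solution.

    Returns (passed, total) so the server can decide acceptance.
    """
    passed = 0
    for args, expected in tests:
        try:
            if solution(*args) == expected:
                passed += 1
        except Exception:
            pass  # treat errors as a failed case
    return passed, len(tests)

# Example: a hypothetical exercise asking for an 'add' function.
tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
good = lambda a, b: a + b
bad = lambda a, b: a - b
```

With this split, the client only submits code and renders the (passed, total) verdict, which keeps grading logic in one trusted place.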

Sprint Planning for the 2nd sprint

It was agreed with the PO that items #7, #25, and #16 would be implemented in the following sprint, as well as item #40, which still needs some wrapping up. If there is time left at the end, we shall use it to set up the Continuous Integration (CI) tools. Given that we underestimated the effort in the previous iteration, we will now adopt a more conservative approach by assigning slightly higher effort values to the items. The total effort of the three items plus the unfinished one is 20, which is the effort value we aim for in every sprint.