This gist details the decisions I made, the tests I conducted, the observations recorded, and the directions I took while building out a Blue Ocean app for our client.
For our Blue Ocean Capstone (BOC) project, we were organized as a team of 6 people to create an app that aggregates movie streaming data and helps users select movies and save money by comparing different streaming services. Out of the 6 developers, 3 were elected by the team to also take on the additional managerial roles of product manager, system architect, and UX manager.
For this sprint my goals and motivations were to get the team up and running as fast as possible, with everyone working in areas where they were effective, and with effective prioritization. I also wanted to make sure that we not only completed what was due for the next set of deliverables, but also began laying the groundwork for subsequent deliverable sets early.
Actions Taken
I led the group meetings to keep them on point and deal with our action items in order of priority. I first went around and guided the process of assigning ownership roles for product manager, system architect, and UX manager. I was then voted product manager; I wasn't eager to volunteer for it, but I honestly stated that it seemed like something I was qualified for, and the rest of the team agreed.
Next, I created a Google Drive directory for our team to access, create, and edit shared files. Within it I also created a team tracking spreadsheet where we could delegate overall tasks, share research notes, and vote on actions/assignments. It serves as our team's source of truth for all links related to the project, from Trello tickets to Learn to the GitHub repository.
While people split off to work on various roles, I kept track of who was doing what, constantly reassessed what we needed done and when, and made sure everyone was busy with a task that needed doing.
Finally, in order to have effective standup meetings, I worked out a template meeting sheet for the questions (and for recording answers). I duplicated this template to pre-populate meeting notes for all meetings in advance and linked them with our meeting schedule. To invite collaboration and give everyone practice running standups, I made a roster so that we all take turns leading the meeting and acting as scribe.
Achievements
I kept people busy on the right tasks, finished the spreadsheet, and have managed to adhere to the system (and get others onboard with using the system).
Results Observed
We completed all of our deliverables due for the Sprint, and even 50-90% (or 100%?) of some deliverables for later Sprints. Everyone is busy and engaged with work they like to do, but everyone is also doing some work in all main areas (user stories, testing, coding, UX, etc.).
Results Impacts
People are working autonomously, but also transparently and collaboratively, especially with the people with primary responsibility for a given domain.
Reflections
There is good energy, good progress, and good organization. Hopefully this helps us stay on point, develop a complete and bug-free app, and if necessary, cut out features to a good enough MVP in an organized and non-stressful way.
For this next Sprint we are completing setup of the project and planning spec such that we can all begin testing and coding the app while following good Agile and QA/QC protocols.
My primary responsibility is completing the project proposal that we present to the client. This proposal derives from notes that we took from our meeting with the client in the last sprint and presents a detailed proposal to the client to approve before we begin serious development on the app.
Actions Taken
For the team I maintained basic managerial duties of tracking, assessing, and communicating the state of our work as a team, as well as initiating any needed action (e.g. suggesting what work one should do, what deadlines there are, etc.)
For myself, the first step for the project proposal was for me to deeply read through the user stories, acceptance criteria, and initial wireframe in order to understand the product. I also added improvements/content where I saw fit during this review. I wrote out the project proposal based on these, lifting/modifying a lot of the content to be used directly in the proposal.
Achievements
Among the team, everyone has stayed actively engaged and we are ahead of deliverables on all metrics. We had a smoother standup this morning, with other team members taking on the tasks of conducting the meeting. We also had a good planning session where everyone is up to speed on the app state, needs, etc., with roles/priorities shifting to meet the current state and upcoming demands.
I completed a thorough project proposal that reiterates what we understood from the client, presents our plan for meeting their needs with an app, and lays out the framework under which we will work and deliver a product, including setting expectations.
Results Observed
We are staying ahead of schedule on the deliverables. People are excited, engaged, and informed as to the app development. No actual coding has been done yet, as we are thoroughly planning the app and carefully setting things up, but coding will begin today.
Results Impacts
I think the team dev machine is running smoothly and is becoming self-maintaining with less micromanaging from me. Hopefully this continues in code dev/testing where lower-level concerns are delegated to the UX owner and Architecture Owner.
Product Proposal impacts are TBD after the meeting with the client.
Reflections
A lot of the upfront planning and adhering to our structure, while less fun, seems to be paying off with smooth, efficient development.
Now our goals are to begin making things! My goal is that everyone is developing and testing by Monday night, ideally with people starting Saturday night if they wish.
Some specific development goals for myself and the entire team are:
Have all development meet or exceed 60% testing coverage on every PR.
With the above goal, we won't have to catch up to 60% coverage of testing at the end of our project.
Maintain a passing state with CircleCI (i.e. no failing tests are introduced into the CI system)
With the above goal, outstanding bugs to fix at the end of the project should be small and easy to wrap up.
Actions Taken
I wrote out some demo tests for people to use as a template for test writing.
I created a formatting document for coding, file structure, etc. so that people are familiar with the conventions in advance and we begin and maintain a good workflow, consistency, and QA/QC.
I took the initiative to get Jest and Enzyme up and running, with placeholder files containing a basic snapshot test for every component, giving everyone an easy starting point from which to write more tests.
I set up some shared aspect of the app that should be used from the beginning and by many people, to help give our work a boost and avoid refactoring later:
Config file set up with environment variables defined in a *.env file
Global logger so that console log messages can be handled differently globally based on the environment (e.g. turned off for production)
Click tracker to have a tool from which to gather user data on the app
Error Boundary for others to use to reduce volatility from bugs not caught in testing.
Example/proposed setup for the backend structure using Redis caching, Mongo, and routes, designed to make testing, debugging, and assigning people smaller dev roles easier, so that new routes/features on the back end are developed and fixed in a more uniform and consistent pattern. (This has not been reviewed yet.)
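As an illustration, the config and global logger pattern could look something like the sketch below. All names here (`LOG_LEVEL`, `shouldLog`, the level names) are hypothetical assumptions, not the project's actual files:

```javascript
// Minimal sketch of the env-driven config + global logger idea.
// LOG_LEVEL and the level names are assumptions, not the real project config.
const config = {
  env: process.env.NODE_ENV || 'development',
  logLevel: process.env.LOG_LEVEL || 'debug',
};

const LEVELS = { debug: 0, info: 1, error: 2 };

// Pure helper: decides whether a message at `messageLevel` should be emitted
// given the globally configured level.
function shouldLog(messageLevel, configuredLevel) {
  return LEVELS[messageLevel] >= LEVELS[configuredLevel];
}

// Global logger: call sites never check the environment themselves.
const logger = {
  debug: (...args) => shouldLog('debug', config.logLevel) && console.log('[debug]', ...args),
  info: (...args) => shouldLog('info', config.logLevel) && console.info('[info]', ...args),
  error: (...args) => shouldLog('error', config.logLevel) && console.error('[error]', ...args),
};
```

A production deployment would then set something like `LOG_LEVEL=error` so debug/info messages are dropped globally without touching call sites.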
I made expectations for PRs clear and confirmed buy-in from the rest of the team. Part of this was done by fleshing out a fairly complete ReadMe file that clearly laid out our workflow processes.
Achievements
People are beginning development on the server and React components. A lot of the 'paperwork' that would be hard to finish at the end when we are rushed and tired (like the ReadMe) is fleshed out and can now serve as a guide.
Results Observed
The app is beginning to come together across all levels of functionality. However, communication and coordination are beginning to break down as people silo into their own work and do not go to the right people for work direction. They also are not reviewing the documents I made to guide their work.
Results Impacts
People are not working as effectively. Those doing a lot of work are running ahead without coordinating, potentially producing wasted code or implicitly forcing design decisions on the app rather than going through a thoughtful and collaborative process.
Reflections
Tonight I will have to focus people on the above issues. Some of this seems to be that the UX & Architecture owners are acting more as specialists on their own projects rather than managers for other people building out those parts. I will need to talk with them more to make sure their roles are more clear. I think some of this breakdown occurred because I myself became too deeply involved in development too early and missed this developing trend.
This evening I presented the project proposal to the clients, and it was accepted with some minor tweaks based on our requests/feedback.
Goals & Motivations
To get people working on the highest priority parts of the app as we should be focusing on MVP for every stage of the process until we complete MVP.
Actions Taken
We had a LONG team meeting to discuss the breakdown in communications and assignments, and my need for people to communicate with me and follow through on agreed-upon protocols. I also met with the UX owner and Architect to express my need for them to act more as managers on the team, communicate closely with each other and with me, and be proactive in making tickets and assigning devs the work they need done. One member was in agreement; the other indicated that he thought this was all solely my job... I think I got him on board with the idea that while it is ultimately my responsibility, it is everyone's job to get the app built and to delegate/share responsibilities.
Achievements
None. We have since gone backwards.
Results Observed
The first two tests merged to main were broken. Others did not take proactive corrective action beyond bringing the problem to me and asking me to fix it, so I was left hunting down the issue, fixing the other developer's test, and offering to help them debug their systems. As far as I can tell, no one has made any PRs since Tuesday, one member is still working on the API documentation (which I said is the lowest priority), and no one has communicated any headway on what I have repeatedly emphasized should be our highest priorities. As best I can see, some people are just doing as they please and others are lost; I cannot micromanage 6 people, and I had assumed lower-level management would be working with them and with me.
Results Impacts
Broken sample tests have been fixed. Reported problems with the testing setup were non-existent. As far as I know, no core development has occurred since our last stand up.
Reflections
I have tried to keep a light touch on things, get buy-in from the team, and defer to owners in their own realms. I might have to start imposing decisions and rearranging roles where reasonable. Otherwise, if people choose not to listen to the product manager, then the product manager really has no purpose.
I am having an additional manager meeting with the Architect & others before class today. I will also revisit topics from the last sprint again tonight, more succinctly, and emphasize to everyone the importance of proactive communication with me.
Achievements
I think I managed to get things back on track, both with getting myself caught up with the state of the project, and getting others more in line with project priorities and communication needs.
Results Observed
Development seems to be more back on track & in sync again. People seem to be happier being on a more defined track, loose threads resolved.
Results Impacts
App may be somewhat functional full stack by the end of Saturday.
Reflections
When in doubt, be more explicit, talk sooner rather than later. Perhaps more organized laying out of decisions and diagrams for the system earlier would have been helpful for coordination & communication.
I made a diagram of the client and server sides to assist with planning, understanding, and communication. I also worked collaboratively with the UX & Architect managers to flesh out a web sequence diagram, both to record what we are doing and to finish fleshing out the actual sequence of actions within the app.
I made a final detailed breakdown of development needs and sequencing, and broke that into tickets with deadlines and assigned those to members. This made development needs for the last week of open development very clear.
With the above done, the team could hopefully operate more independently while staying in sync, and I could more safely/easily move into dev work myself and spend less time 'managing'.
This night we had a 'user testing & acceptance session' with the client where we demoed the MVP as it currently stands in order to get feedback & validation of our work, and strategize development priorities for the rest of the open feature development period.
The final piece of the puzzle. Each team will discuss what worked well, what didn't, areas for improvement, and where the project currently stands. The goal of this meeting is to leave with action items for the next iteration both in terms of process and the product as a whole.
2021-11-13 to 20 - Development of 'Suggested' Feature Based on User Watch History
Goals & Motivations
In order to help users of the Streamfinder app select movies that they would like to watch, I am fleshing out the 'suggested' feature of the app. Currently 'suggested' is just taken from the third party API based on a single movie name (that is not based on user behavior). I will work out a system that produces custom-tailored recommendations that are derived from a user's watch history.
Actions Taken
To begin, I inspected the current state of the application since I am adding a new/expanded feature to an existing app. My strategy was to go about this in the following order:
Understand the client-side interface output
Understand the client-side interface input
Understand the server routes and DB queries related to 'Suggested' movie inputs/outputs
Work out edge cases and constraints
Develop dummy data for the server side that captures the expected input scenarios and intended results
Write tests to the dummy data
Develop the algorithm in a model file that is unit tested throughout
Add tests on the destination React component to stub data resulting from the model tests, and do any required development to pass these tests
Make relevant seed data for testing database to test integration (skipped as no test database for seeding is set up. May do later)
Create relevant tests for tracking the data needed for the algorithm (click-tracker or CRUD-hooking strategies)
Develop features necessary to pass tracking tests, including modifying/extending existing components
See the following detailed journal entries for more detail about my development strategy and implementation:
Development of 'Suggested' Feature Based on User Watch History
Understand the Client-Side Interface Output
I did this to determine the form & content of the server response, and the means by which it enters the React component.
As I worked through the front end, I added some additional error-handling and refactoring. The aim of the refactoring was to make the code easier to read and create the initial seams upon which I can interact with the app. For example, a server request made in componentDidMount seemed good to factor out as a method that might be called at other times, such as when a user's watch or search history (and therefore suggested movies results) changes.
Based on the current state of the app, only movie title and img url are needed as results. This is shared across all categories of:
Trending
Suggested
History
Search
It may be useful to add a movie ID in later for more efficient and unique tracking of movies.
Also, the current home screen is using local stubbed data; ultimately this should come from the server. This is good to know, as currently any development on the server will NOT be reflected on the client. Any work on the client should keep this in mind: for the app to be completely functional, the algorithm will either need to be deployed on the client side initially, or additional server integration work will be needed.
I refactored the component to handle the case of currently being stubbed, possibly currently capable of server requests, and expected testing. Right now if I manually inject data in a test, that will govern. Once tests are developed enough and decisions on the server work are made, I can easily remove the stubbed data while retaining unit testing functionality.
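The stub-vs-live decision could be factored out along these lines; `stubbedHomeData` and `resolveHomeData` are illustrative names I am inventing here, not the component's actual code:

```javascript
// Sketch of the stub-vs-live fallback described above (names hypothetical).
const stubbedHomeData = {
  trending: [{ title: 'Stub Movie', img: 'stub.jpg' }],
  suggested: [],
  history: [],
};

// Injected test data or a real server payload wins; otherwise fall back to
// the stub. Removing the stub later only means changing the default argument,
// so unit tests that inject data keep working unchanged.
function resolveHomeData(serverData, stub = stubbedHomeData) {
  return serverData && Object.keys(serverData).length > 0 ? serverData : stub;
}
```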
Understand the Client-Side Interface Input
The client-side inputs that are relevant for our goal must be understood in order to find opportunities in tracking user data from which recommendations can be made. A click tracker already exists for the app and one question to answer here is whether the user behavior used to determine suggestions is best collected through a separate layer of click tracking versus adding more logic in the existing server routes to log the information as the database is queried and updated. For lack of a better term, I will call the latter CRUD Hooking.
This is also necessary to know whether there are any gaps in what is needed for our 'suggested' algorithm. This may either shape the algorithm or inform me up front that more development work on supporting features must be done. If the latter, then at the very least careful and thoughtful stubbing for future development must be done.
Inputs for Suggested, Trending, History
Movie results in the client are displayed through a carousel component that renders individual movie tile components. Upon inspection, I found incomplete implementations of click handlers that don't actually do anything. The current click-handling behavior of the carousel is limited to scrolling a static data set and nothing more.
Whether I choose to handle collecting data by hooking code to the CRUD server operations or using a click tracker, this development needs to be completed.
In general, I am thinking about input related to suggested movies in terms of adding weighting factors to movies in the following way:
Highest weight for trending movies that are related to previously watched movies
Weights for movies related to previously watched movies - weighted further by current star rating
Lesser weights for movies related to previously executed searches (but made higher if search result movie was watched)
Least weights for movies in any category where the user has clicked on the movie tile to learn more about the movie (this would depend on a click-tracker strategy)
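These factors could be combined roughly as in the sketch below. The numeric weights and movie fields (`relatedToWatched`, `tileClicked`, etc.) are placeholders I am assuming for illustration, not the app's actual data model:

```javascript
// Hypothetical weighting scheme for the factors listed above.
const WEIGHTS = {
  trendingRelated: 4, // highest: trending movies related to watch history
  watchedRelated: 3,  // related to previously watched, scaled by star rating
  searchRelated: 2,   // related to previously executed searches
  searchWatched: 1,   // bonus if a search-result movie was watched
  detailClick: 0.5,   // lowest: user clicked a movie tile for details
};

function suggestionWeight(movie) {
  let weight = 0;
  if (movie.relatedToWatched) {
    if (movie.isTrending) weight += WEIGHTS.trendingRelated;
    // scale by current star rating (0-5, defaulting to neutral 3)
    weight += WEIGHTS.watchedRelated * ((movie.rating || 3) / 5);
  }
  if (movie.relatedToSearch) {
    weight += WEIGHTS.searchRelated;
    if (movie.searchResultWatched) weight += WEIGHTS.searchWatched;
  }
  if (movie.tileClicked) weight += WEIGHTS.detailClick;
  return weight;
}
```

Keeping the factors in one `WEIGHTS` table makes it easy to tune each tier independently as the algorithm is refined.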
Another major development note here is that user 'previously watched' movie history may be incomplete. For purposes of development, this can be seeded and completed later for a more end-to-end interactive experience for behavior affecting 'suggested' movies.
Inputs for Search
The search component is fully functional in sending out a search keyword to the server, and receiving results from the server.
A strategy here for the suggested algorithm is to have a weighting factor based on the search phrase sent. Another lesser weight factor can be applied to movies returned. Or perhaps a weighting factor can be applied based on the % match of a movie title to the search phrase such as:
Strongest weight for perfect match
Lesser weight for perfect matching of words
Even lesser weight partial matching of words
Least weight for other films (since the 3rd party API used can bring these in by keyword fuzzy searches, they may still be relevant)
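A rough sketch of this match-tiering might look like the following; the tier values themselves are assumptions for illustration:

```javascript
// Hypothetical %-match weighting of a movie title against the search phrase.
function searchMatchWeight(title, phrase) {
  const t = title.toLowerCase().trim();
  const p = phrase.toLowerCase().trim();
  if (t === p) return 3; // perfect match

  const phraseWords = p.split(/\s+/);
  const titleWords = new Set(t.split(/\s+/));
  const hits = phraseWords.filter((w) => titleWords.has(w)).length;

  if (hits === phraseWords.length) return 2; // every search word matches
  if (hits > 0) return 1;                    // partial word match
  return 0.25; // fuzzy/keyword result from the 3rd party API, still relevant
}
```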
Inputs for Media Detail & Reviews
Another weighting factor can be used based on the movie rating, e.g. 1 star gives a reduced rating, 2 neutral, and more stars increase a weighting. This value is gathered from the MediaModal.jsx file.
The rating weightings themselves can be weighted more strongly if the user also submits a review (the downward weight is pushed lower, the upward weight higher, and a fixed amount is added to the neutral weight).
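One way this rating-plus-review weighting could be expressed (all numeric values are placeholder assumptions):

```javascript
// Hypothetical rating-based weight: 1 star penalizes, 2 is neutral,
// 3+ stars boost. Submitting a review amplifies whichever band applies.
function ratingWeight(stars, hasReview) {
  let weight;
  if (stars <= 1) weight = -1;      // 1 star reduces the suggestion weight
  else if (stars === 2) weight = 0; // neutral
  else weight = stars - 2;          // 3, 4, 5 stars => +1, +2, +3

  if (hasReview) {
    if (weight < 0) weight -= 0.5;       // push the downward weight lower
    else if (weight > 0) weight += 0.5;  // push the upward weight higher
    else weight += 0.25;                 // fixed bump for neutral
  }
  return weight;
}
```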
Reflections
It seems that both click tracker and CRUD-hooking strategies should be used, depending on the data gathered.
CRUD-Hooking
User review:
User movie rating
User watched movie confirmation
Movie rating
User movie search
Click-Tracking
User clicking on movie detail
Event Triggers
Suggested movies are calculated asynchronously on the server during the following events, with no need to return the results in the response:
Search submission
Movie review submission
Movie detail click
Suggested movies are simply read on initial page load.
Understand the server routes and DB queries related to 'Suggested' movie inputs/outputs
Relevant routes were inspected on the server in order to gain an understanding of the data that passes between server and client and what data is currently stored, how, and where.
GET /home/homePage?=${currentUser} Route
In Home.jsx.
The following route methods are called, each returning a full, unmodified object from the database:
getHistory(queryUser) → User.find({ username: user }): returns movie history data.
getMovie(historyData.currentId) → Movie.find({ id: movieId }): returns trending and suggested data.
And ultimately returns a user-specific object with movie data grouped by various property categories:
{
suggested,
trending,
history,
}
This will be where I will hook in the suggested algorithm, in a layer of the route above getHistory and getMovie.
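That hook-in layer might look something like this sketch, with `getHistory`, `getMovie`, and a `rankSuggested` step injected as dependencies since their real implementations live elsewhere in the server:

```javascript
// Sketch of a route layer above getHistory/getMovie that applies the
// suggested algorithm before building the response payload. All names
// other than getHistory/getMovie are assumptions.
async function buildHomePayload(user, { getHistory, getMovie, rankSuggested }) {
  const history = await getHistory(user);
  const movieData = await getMovie(history.currentId);
  return {
    suggested: rankSuggested(movieData.suggested, history), // new layer
    trending: movieData.trending,
    history,
  };
}
```

The Express route handler would then just `res.send(await buildHomePayload(...))`, keeping the algorithm itself testable without a live server.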
POST /search/searchPost Route
In Streamfinder.jsx
With 3rd party API:
Looks up the movie that matches the search term
Looks up all trending movies, with no relation to anything in the app
Looks up suggested movies, based on the movie search result
Saves trending and suggested returned from 3rd party API to database.
Returns search results object with movie data from suggested results appended to the movie matching the search term.
This route needs a lot of refactoring to bring it in line with SRP, including having a GET and separate POST route. While the movie searched and suggested movies can be used for the in-app suggestion algorithm, other parts of this route need to be refactored out and accessed separately.
PUT /media/watchHistory?${userId}?${mediaId}?${this.state.rating} Route
In MediaModal.jsx.
This adds a movie to user watch history if they have submitted a review declaring it watched. Movie ratings are also updated. Ratings originally come from API when first added to the DB but then are only updated by the app.
Returns nothing.
It appears from this that currently user history is only established by the user adding a movie review.
GET /media/mediaDetails?${mediaId} Route
In MediaDetail.jsx. Returns media detail object
This route is probably not relevant to the suggested feature as it only fetches movie data. All relevant data from this component comes from actions after the GET request.
Database Schemas
These might not all be used. Adjust as inspected.
User Schema
{
subscriptions: [],
history: [Array of movies],
...
}
Might be too fine-grained, but any rating-related weighting could be further weighted by how recent the date of review was.
Movie Schema
{
suggested: Array of movies from 3rd party API,
trending: Array of movies from 3rd party API,
rating: Number,
ratingCount: Number,
reviews: [ReviewSchema],
...
}
Work out MVP, dev strategy, pseudocode
After reviewing the current development state and behavior of the app, as well as the third party API used, I will now lay out a tiered development strategy that begins with an MVP/naive solution to the suggested feature, and then fleshes out additional development for a more fully functional feature.
MVP / Naive Solution
At a bare minimum, suggested can just be taken as related movies, as any movies related to user activity at any level can arguably be a relevant suggestion. Beyond this, suggested are related movies ordered by weighting factors, and possibly filtered out if weighting factors are too low.
Dev Strategy
Work out a method for compiling a list of related movies
Layer on increasingly customized and sophisticated weighting factors. This includes sorting by weights first, and filtering out by weights later. This will allow me to have a basic MVP of the feature that is then easily extended, tested & refined in small chunks of tasks (e.g. a separate cycle for each weighting factor, weighting sub-factor, etc.)
Develop weight factors first by existing state of app client followed by simplicity of factor.
Extend/modify/complete client components in sync with associated weights in order to improve data gathering
Edge cases and constraints
Below is a list of initially considered edge cases & constraints. More may be added in the pseudocode.
Edge Cases
User has no watch history
User has no existing suggested list (new user, or all history are disliked movies, etc.)
Constraints (Practical, not time/space complexity)
Data available from 3rd party API
Data saved in server database
Current behavior/development of app client components
Pseudocode
The suggested feature is used on the home page, and at the most basic level it is a list of related movies. The following is my pseudocode for the home page loading with related movies:
// On Home page load:
// Trending: get popular movies for trending
// if user history is empty:
// History: do not render history carousel
// Suggested: get top rated movies: https://developers.themoviedb.org/3/movies/get-top-rated-movies
//
// if user history is not empty:
// History: Render History carousel
// Suggested:
// Begin with current suggested list (if any)
// Collect the list of movies from history
// For each movie from history:
// Get list of suggested movies from IMDB API
// Get list of related movies from IMDB API
// Add to overall list of related/suggested movies
// For each movie in overall list of related/suggested movies:
// Determine suggestion weight
// Sort related/suggested movies by weight
// Limit return by predetermined value (user settings: e.g. 20 movies)
The suggested feature's more customized behavior is extracted out to be handled separately, which applies to the following lines:
// For each movie in overall list of related/suggested movies:
// Determine suggestion weight
// Sort related/suggested movies by weight
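The Suggested portion of the pseudocode above can be sketched as a runnable function. The API calls (`fetchTopRated`, `fetchSuggested`, `fetchRelated`) and the `weigh` callback are injected placeholders for the 3rd-party API and the weighting step, which are developed separately:

```javascript
// Runnable sketch of the Suggested pseudocode above; `api` and `weigh`
// are stand-ins for the real API module and weighting function.
async function buildSuggested(userHistory, currentSuggested, api, weigh, limit = 20) {
  // Empty watch history: fall back to top-rated movies for Suggested.
  if (!userHistory || userHistory.length === 0) {
    return (await api.fetchTopRated()).slice(0, limit);
  }
  // Begin with the current suggested list (if any)...
  const pool = [...(currentSuggested || [])];
  // ...then add suggested + related movies for each movie in history.
  for (const movie of userHistory) {
    pool.push(...(await api.fetchSuggested(movie)));
    pool.push(...(await api.fetchRelated(movie)));
  }
  // Determine each movie's suggestion weight, sort descending, cap the list.
  return pool
    .map((m) => ({ ...m, weight: weigh(m) }))
    .sort((a, b) => b.weight - a.weight)
    .slice(0, limit);
}
```

Because weighting and limiting happen in one final pass, new weighting factors can be layered in later by only changing the `weigh` callback, matching the tiered dev strategy above.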