| title | author | date | lastModified | lang |
|---|---|---|---|---|
| Experienced Developer Notes | Steven E. Newton | 2022-11-12 04:56:49 -0800 | 2024-06-18 12:13:05 -0700 | en-US |
The last thing a programmer should be worried about is how fast they can type in code.
Everything is a people problem when you dig deep enough.
Create and keep a personal knowledge base.
Writing an onboarding guide is definitely a great way to both learn and level up your skillset. Taking notes and then creating documents for others who come after you will get you noticed (or should – an employer or team that doesn't probably has other issues, too).
Andy Hunt and David Thomas wrote about the importance of writing in their book The Pragmatic Programmer (highly recommended). [@thomasPragmaticProgrammerYour2020]
In some ways creating and updating documents of this sort is even better than, and a stepping stone to, creating design and requirements documents for upcoming work. Lots of documentation written before the coding is done ends up out of date before the actual system it documents makes it into production, and is never corrected. Do take note of any existing documentation and update it where you find it so outdated and incorrect that it is worryingly misleading.
"The problem with the discuss-in-front-of-the-whiteboard approach is that it doesn’t scale up", and I have yet to seen a working solution for distributed environments.
Whiteboards are great for small, co-located teams. When the team size grows and is distributed across multiple sites or working remotely, a physical whiteboard is impractical. Products like Miro and Canva are starting to move in the right direction, though.
"The hardest problem in software engineering is getting the appropriate information into the heads of the people who need to have that information in order to do their work effectively." Writing docs well: why should a software engineer care? [@hochsteinWritingDocsWell2022]
Being able to write this much without it taking away from other work requires adopting a habit of taking notes as you code. Lots of notes [@binstockTakeNotesYou2019]. Write down every decision you make. Every time you look up something on StackOverflow, refer to the API docs, or refer back to the ticket, use case, or requirements document, make a note of your question and the answer. Even when you just stop typing to take a break, note down where you are and what you just did.
After taking notes as a developer [@crabillTakingNote2019] for a while, it will become clear that reviewing and refactoring your notes is essential. The craft of writing is mostly about rewriting and editing anyway, so taking your raw notes (which you can either type or hand-write, whichever works best) and turning them into something useful to others will provide ample opportunity to improve.
Note-taking and writing are good for development in themselves, but the broader goal is to grow into a position where you are seen as providing knowledge leadership and guidance, as well as helping the organization build long-term strength and resilience.
You could give your career a little boost for the next position if you were to put together a document/presentation advocating for code reviews, or one of the other things you would like to see. Don't necessarily expect it to get very far or result in a big change in the organization, but the process of putting together the justification and advocating for it will be a good experience. If your employer happens to pay attention and notice you, great. If they happen to adopt your idea, even better. Just start small. At a fairly early point in my career I led some of my co-workers at my then-employer to adopt unit tests and better use of source control.
How you share that knowledge depends on the organization. Some have a wiki, like Confluence, some have documentation in their source repo, some have elaborate document management software and systems, and some just have a shared Google Docs/Microsoft OneDrive folder. There's too much variety there for me to feel qualified to go into, other than to say I think documents should be under some kind of version control, just like source code.
If I were starting from scratch, today, I'd go with the simplest wiki-like document store married with a top-notch full-text fuzzy/smart search feature. The only feature I'd insist on in the wiki would be history, in a git-like way, of who changed it and when. A very nice to have would be the ability to track when a page was moved or renamed, so that the history isn't lost.
I've more or less given up on visual diagramming tools. Starting with early versions of (what is now Microsoft's) Visio and the first UML tools up through whatever is hot right now, they are just way too much yak-shaving over getting "just right" to be a part of ongoing development. It's perfectly fine for a technical documentation person to use one to generate a snapshot to use in communications with users and stakeholders, but they suffer even more than written documentation from the inevitable deterioration over time as the code and the system grow and change.
What I've tried to move towards myself, and encouraged my teams to do, is diagrams as code. I don't mean generating UML from classes and such, at least not as anything canonical, but just having a textual description of the system that doesn't take any more time to update and correct than necessary.
Create diagrams with text files using tools like PlantUML, Structurizr DSL, and GraphViz.
Learning at least one language in each of the three major paradigms (functional, procedural, and OO) not only shows you're flexible, but will also provide insights even in the languages you use most. For example, blocks, procs, and yields in Ruby are pretty opaque without some understanding of functional programming concepts.
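To make that concrete (a throwaway sketch, not from any particular codebase): once you've seen higher-order functions in a functional language, Ruby's blocks and yields and Java's lambdas read as the same idea, passing behavior in as an argument.

```java
import java.util.List;
import java.util.function.Predicate;

public class HigherOrderExample {
    // Passing behavior as an argument, the same idea behind Ruby's blocks
    // and yield: the caller supplies the predicate, the method supplies the loop.
    static long countMatching(List<String> items, Predicate<String> test) {
        return items.stream().filter(test).count();
    }

    public static void main(String[] args) {
        List<String> words = List.of("alpha", "beta", "gamma");
        // The lambda here plays the role a block would play in Ruby.
        System.out.println(countMatching(words, w -> w.length() > 4)); // prints 2
    }
}
```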
If you know a lot of languages, you'll be less likely to be dismissed if you don't know the specific stack required for a job. The exceptions tend to be around the Microsoft/proprietary stack vs anything else. Once you've learned 4-5 different languages in different paradigms, and are familiar with the general architectural principles of tech stacks, the differences are merely in the details.
It should be possible for someone already proficient in three or four languages in different paradigms to pick up the basics of a new one in a couple of days, provided it's not something esoteric like APL or INTERCAL.
But something like Rails or ReactJS is a library/framework, not a language. Grasping a new framework takes time because it's not just syntax, it's the whole environment, the conventions, and so forth.
I don't mean contribute less, I mean learn to write code that is succinct and concise. Some of the worst code I've ever seen takes a page of not-short lines to do something that could be done in a couple of lines. I'm not even counting language boilerplate.
A notebook is also great for keeping a brag document to use for your next review.
Stop writing when you're in a groove and already know where the story will continue to go next.
This works for programming, too. When I have a green build and know what the next thing is to work on, I can go to lunch or go home for the day, and commit or continue when I get back. If I’m worried about forgetting, especially on weekends or holidays, maybe I’ll add a comment or a snippet where the next code will go.
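For what it's worth, here's a hypothetical example of what that "comment or snippet where the next code will go" can look like; a disabled test naming the next behavior is usually enough to pick the thread back up.

```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class InvoiceTest {
    // Left here at the end of the day as a marker for where to resume:
    // the next piece of work is handling invoices with a zero total.
    @Disabled("next up: zero-total invoices should not generate a payment request")
    @Test
    void zeroTotalInvoiceGeneratesNoPaymentRequest() {
        // TODO: implement once the zero-total handling is written
    }
}
```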
Don't work right up to a hard stop. Look at the clock (or have a reminder) half an hour before the hard stop, then take the next 20 minutes to outline the next steps and properly suspend the tasks. Of course, if you just work, work, work, then hit 5 p.m. or whatever and rush off, you'll lose the thread of what you were working on.
Instead of wasting 20 minutes when you come back to the task, set yourself a reminder to stop 20-30 minutes before the hard stop, and use that time to debrief yourself and write out your next steps.
Another way of putting this is, "never keep working until you have to stop because you don't know what to do next". That's a pretty good sign you've lost your way and it's likely the last couple of things you did were wrong anyway.
If you've got your work habits dialed in and are at your most effective, this should almost never happen. You should always be working in "bite-sized" chunks and always have some work mapped out ahead of time.
Ask 'why' five times? Yes, but do it by asking a lot of "what if" and "how" questions.
If someone doesn't explain something to you and gets upset when you ask for an explanation, it's likely that person doesn't understand what they're being asked to explain. With more experience and guidance comes the ability to say, "I don't know, let's try to figure it out together". But the hard part is getting over the fear of repercussions for saying "I don't know".
Spend Fridays writing documentation. Complete the week's programming tasks by Thursday. On Monday you have everything you did the week before written up.
There are a few things I can add to the other suggestions, but first, let's look at how we get to a point where we're stuck in work mode at quitting time. The thing to know is that you've got to start by arranging your day so you don't come to the end of the day fogged.
We can't do more than about four hours of "deep work" each day. Even getting a solid four hours takes practice. But fortunately our jobs are not all deep work. Yes, programming itself is deep work, but most of the workday is answering emails, following up on stuff, and dealing with toil. So start by planning your day to only expect four hours of focused, detailed work. Schedule time to do it at your peak productivity hours. For most people this means towards the beginning of the day. Do this work with frequent breaks. The pomodoro style works well: 25 minutes on, 5 minutes off; continue for a couple of rounds, then take a longer break. I use Flow on my MacBook to drive this. Make sure your rest breaks are real breaks. Get up, move around, stretch, get a snack. Don't just pop open a Reddit tab and zone out reading. Disengage. I recommend Max Frenzel's work for further consideration [@frenzelPraiseDeepWork2018].
Spend the rest of your work day on the other stuff, including meetings and such. Obviously most of us don't have 100% control of our schedules, but it's pretty common to see a senior developer block out hours a day to focus.
Once you've found a good balance of focus and rest, you should be able to get to the end of the day with a clearer mind. So now comes the transition routine. If you're working from home or remotely, you should have a space dedicated to work. Step away from the desk/computer. Change out of your work clothes. You did put on clothes in the morning, right? Don't just walk from the bed to work and spend the day in your pajamas. For me, this change of clothes, even if it's just my shirt and shoes, is what I call my "Mr. Rogers Time", from the TV show where Fred Rogers would always start and end by changing his shoes and sweater. Doing this gives you mental space to do something easy that helps your mind wind down. When I was commuting, the bus ride home was part of my transition routine.
Once you've put work behind you, mentally, and physically, you can do some meditation, exercise, yoga, or other hobby that's more physical than mental.
Whatever requirements were written at the time the system was first implemented, and whatever they may have been when they were last updated, are de facto wrong, because if the system were meeting the requirements it wouldn't be undergoing renovation. What is needed, Marianne Bellotti says [@bellotti2021kill], is determining the thought process that went into the original requirements, and what the trade-offs were then compared to what they are now. Too often developers will approach a legacy code base thinking that the programmers of the past were idiots who didn't know what they were doing and made "obviously" wrong choices, when in fact it's best to look at the system as the product of trade-offs and understanding as they were then, by people of similar skill. Not that we assume it was the best possible solution, but that they did the best work they could under the circumstances.
Taking the existing system as the measure against which requirements are evaluated is a mistake, mostly because time has moved on and things have changed.
The best resource I can recommend is Michael Feathers' book Working Effectively With Legacy Code. Without knowing more about where you are I am guessing, but it sounds like a good start would be Chapter 16, "I Don't Understand the Code Well Enough to Change It". [@feathersWorkingEffectivelyLegacy2004]
Also, because you mention Java specifically, Object-Oriented Reengineering Patterns is also worth perusing, if you can find a copy. Part I is all about reverse engineering.
Finally, just as an additional resource, Code Reading: the Open Source Perspective [@spinellisCodeReadingOpen2003] is about how to understand a code base in general.
Also take a look at
- Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones) [@bellotti2021kill]
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software [@bernsteinLegacyCodeNine2015a]
Anyone who has ever spent significant time working with and supporting a legacy system would, and should, be righteously offended at the idea that doing so is "coasting". Legacy systems are some of the most challenging, complex, and hard to work with systems out there.
Most work is maintenance, and lots of projects get cancelled.
Maintenance makes up the largest portion of the software lifecycle, some 60%-80% of the time and typically 90% of the cost over the life of the system. Don't confuse the term maintenance with just fixing bugs in existing systems. Working with existing code to change its behavior in response to changing requirements or external needs is maintenance. Adding the ability to interface with new systems or address security and performance issues is maintenance. Large projects aren't necessarily entirely greenfield, they can involve expanding and enhancing existing systems. Some studies suggest that only 20% of software maintenance is "corrective", i.e. fixing bugs.
The folks who talk about jumping into greenfield projects and going live are over-represented in this subreddit. I don't know what the exact percentage of cancelled or shut-down projects is, but to my knowledge it hasn't changed much in the past few decades. I do think the move away from waterfall/big bang release to iterative-and-incremental development means that more projects get something into production, but even then lots of them end up winding down before the Big Plan had hoped.
The projects I've worked on included one where there was supposed to be a Phase I and a Phase II, but for various reasons Phase I ate up all the time and money, so Phase II was canceled. Yes, we went to production, but in a real sense there was a failure, because Phase I was mostly about improving/re-invigorating the legacy product, and most of the new features and functionality were scheduled for Phase II, and that got cut.
Another project I worked on went to production, but then the organization decided to bring in a Big Multinational Consultant to redo Everything, so the work we (a team of about a dozen) did got dumped and replaced by some Enterprisey crap written by a horde of contractors.
I guess my dumb question is, do you have any reason to think your experience is atypical of the industry? People who jump into these threads and talk about being successful are mostly just examples of survivorship bias. Recruiters and hiring managers are trying to sell you on the organization and position, so they will always talk about the attractive parts of programming. But the day-to-day reality by far is maintenance and being what I call a "code janitor".
We like to think of ourselves as building construction crews digging fresh holes, laying foundations, and putting up structures where none were before, but we're really more like road building and maintenance crews. Yea, we might put some new routes or bridges in, but it's all building on existing infrastructure.
If code and other work you do never goes into production and produces tangible value, is it wasted effort? For the company, maybe, when the project gets cancelled, doesn't make money, is obsolete, or implements unused features.
But it's not waste if you learn from it. Instead, think of it as exploring, testing, and learning. Sometimes you'll go a certain direction for a while and find out it's a dead end. But how would you have known that path leads to a dead end without having gone down it?
Some people are content living with modest means. Anything that tends to force the pool of candidates to act like they care about money above all else will have the effect of eliminating a large chunk of very good people from consideration.
The thing about working at FAANG (and this holds true for the leading companies in other sectors, too) is that you have to "drink the kool-aid" and really buy into their way of thinking about problems and doing business. This is where the vague term "culture fit" really becomes clear: Employees have to believe that the company's own views about its success and goodness are true, and that critics and detractors are either wrong or have an axe to grind. Well, they either have to believe it or they are so focused on money that they have successfully compartmentalized any ethical or legal concerns out of their day-to-day work. There are also those so desperate for a job, any job, that they will work for a company they believe is unethical or full of bad actors, just to have a paycheck, but we software developers usually don't have to worry about that.
Let the companies that try to put candidates through the leetcode grinder know that no, developers, especially seniors, aren't going to put up with that garbage. It's a job seeker's market, let's set the conditions for employers to come to us.
When someone already working at FAANG takes 10 tries to pass an interview at another company, it shows the process isn't about figuring out whether the candidate can do the job; it's selecting for candidates who can jump through the hoops.
Employers are definitely tilted against false positives, but that's simply because they have given up on actually developing employees. They'd rather roll the dice on a complicated and exhausting interview process in the belief they'll snag the perfect hire of someone who is a good culture fit and already knows the solutions to the employer's problems. There's no proof that these gantlet-style interviews are any good at determining someone's actual ability to do the job.
Companies don't want to hire great people. They want to not hire bad people. Nearly every management-oriented source out there giving advice on hiring puts "no false positives" (hiring someone who doesn't work out and must be fired) as the #1 goal. Employers don't want to bring in a person unless that person is from day one a productive fit for their organization. Employers don't do career development. Employers don't look past the current quarterly earnings report. Thus they see anyone who comes in and doesn't provide ROI in three months or less as a net loss, even if that programmer, with proper coaching and mentoring, could be their future CTO.
I understand the temptation to want to find the perfect hire to slot into a skillset you're missing and code the solution; I tried to do that once myself.
Of course, using a 6-hour marathon interview process as a way to gate-keep so you only hire compliant people who will work 996 (9 a.m. to 9 p.m., six days a week) and never make waves is also a thing.
It used to be easier to vote with your feet in tech because transferring jobs could easily happen in a week. Current processes now make this significantly more difficult, on purpose.
"Do not fall into the error of the artisan who boasts of twenty years experience in his craft while in fact he has had only one year of experience — twenty times." — Trevanian (Shibumi)
The job title "senior" doesn't mean anything consistently. The reasons are many but the main one from my experience is that it is the only way some employers will pay a market rate for decent programmers.
Title inflation is a thing. The other thing is that companies don't want to have real professional development programs. Instead of hiring entry-levels out of college and giving them entry-level things to do while actively helping them mature, companies always want to hire that "immediate impact" developer, and end up with a bunch of folks who may be good but haven't developed in the company. Also, because there are no entry-levels to do the entry-level things, seniors end up having to devote a significant chunk of their time to dealing with simple stuff. Sometimes that can be automated, and a good senior engineer always strives to automate those repetitive tasks. A better way would be to assign a new or junior engineer the task of automating some toil, with a real senior engineer mentoring, advising, and reviewing the work. Then the work gets done, the company gains a valuable, experienced employee AND the results of the work they did.
Someone getting a senior engineer title just because they hit their 5th (or worse, 3rd!) year in the industry but haven't really learned much in that time is just a flaw in the industry. Perhaps the meaning of "senior" differs across companies? There's a lot to know and just because someone has gotten really good at doing the things they've been doing for 3-5 years doesn't mean they've actually grown.
A novice senior is simply someone who has spent their career creating multiples of the same kinds of software using the same tools, languages, and processes as they did the first 2-3 years in the workforce, and can do nothing else. Those technologies may or may not be outdated and may or may not still be good ways of doing things, but the novice senior engineer knows no other way.
No organization wants to spend money and invest in people they hire, they just want to find the unicorn that they can slot into a job and have them be productive from day one.
The ones that say `<num>` years of experience with `<language>`, where `<num>` is greater than the length of time the language has existed or been used outside the group that created it. It sometimes happens that a hiring manager will want a person who has 10 years of experience, any experience, and happens to be experienced with a specific technology, and by the time the req gets laundered through HR and the recruiting pipeline the wording gets mangled until the job requirements are absurd and bear little resemblance to the skills the hiring manager wants. Recruiters don't know that Java and JavaScript aren't the same thing. I wonder how many online job postings get loaded down with keywords for SEO?
Some employers expect senior engineers to be the most whiz-bang coders ever and not dirty their hands with 'management-y' stuff like quality initiatives. Others expect senior engineers to have advanced beyond just being able to code well to where they can see and understand the broader view and how the business actually makes money.
In my mind the first kind of company, the one that calls someone that can code well but isn't able to work on the 'soft' stuff a senior engineer, is either just inflating titles, or they have a very traditional command-and-control structure with a bright line between IC and management.
In the other kind of company, a mid-level engineer that wants to be promoted should at least be starting to take on additional work beyond banging out tickets.
The typical job ladder for tech companies I've worked at shows increasing levels and breadth of impact the higher you go on the engineering ladder. A mid-level person might be expected to only do things that affect their immediate team, like adding a new tool to the team's CI pipeline; a senior person's work would have an effect across teams, encouraging adoption of tools, libraries, techniques, and processes that have a material effect on the bottom line.
I wouldn't expect anyone looking for their first senior position to have all of these. But a senior engineer should at least be "in the hunt for" the skills listed below.
Years in the industry would be the last item on my list, because the other things I look for can't be accomplished in a short time. Anyway, to jump into it, starting with Camille Fournier's list I posted in another comment, I'll summarize the key ideas as best I can.
Senior-level technical skills go beyond (and, depending on your perspective, may replace) being able to knock out the answer to a "hard" leetcode question. Some will say broad, others will say deep in a specific tech or stack. I say neither, but a little of both. The key is having some grasp of the entire tech stack from hardware to distributed systems. Not in-depth knowledge of everything, of course, that would be impossible. But I have yet to meet a good senior+ engineer that doesn't have at least some familiarity with very low-level things like computer architecture and network physical and data link layers all the way up to concurrency, parallelism, and distributed networking systems. This should include some knowledge of compiler and OS concepts, application internal structuring and software design and deployment.
Communication skills land about second in importance. Not just good writing skills, but knowing how to structure design docs and specifications, how to communicate and explain technical things to non-technical people (without making them feel shamed or embarrassed) and pitching ideas to other engineers. While I don't expect engineers to have the highest EQs, at the senior level they need to know how to shut up and listen, when and how to ask good questions, and how to make sure others know that you're listening and understanding them.
Next I'd go with overall project technical and operational skills: being able to set up a CI/CD pipeline from scratch to go from code to complete package or deployment, build integration test setups that provide useful signals, and set up monitoring and instrumentation for a running system.
After that, probably mentoring and developing less senior developers. Mentoring is a whole topic of its own that doesn't get enough attention, but even being able to notice a junior is struggling with something and say, "Hey, I read/watched this book/article/video on that and I think you'd get something out of it for what you're working on." Ideally you'd work with them on it, not just drop it on them and go, but it depends on the time you have and the organization.
Finally, a senior engineer needs to know their weaknesses as well as their strengths, and at least have some idea how to bring in another person to bolster their weaknesses. For myself, even though programming is detail-oriented, I'm surprisingly poor at a certain kind of tedious, repetitive detail work. I like to say I got into programming to be able to automate away having to do things like that, but sometimes you can't escape it. Realizing that has led me to understand that everyone has different abilities, and there's absolutely no excuse for disrespecting a co-worker who doesn't have your skills if their job doesn't require it. They almost certainly have skills you don't, and quite possibly can knock out something with ease that would be a struggle for me to even get a good start on. Asking for help is not a weakness. Knowing when to ask for help and who to ask is a strength.
For mid-career to senior folks, why test them for coding? If they got that far they either know how to code well enough to do the job, or they bring enough value in other areas to make up for less-than-10x coding ability. Don't trust their resumé? If a candidate lies there, they have bigger problems than whether or not they can code, and it shouldn't be a challenge to screen them out. If the interview process can't screen out someone who lies on their resumé, a coding test won't save it.
The other thing about the typical coding tests popular now is that they don't actually determine if a candidate can be a good software developer, just that they can write solutions to artificial problems. Goodhart's law as stated by Marilyn Strathern: "When a measure becomes a target, it ceases to be a good measure."
Remember when questions like "How many golf balls can fit in a school bus?" or "Why are manhole covers round?" were popular? They were completely useless for determining if a person was any good at their job, but they were a nice way to gatekeep and ensure only people who happened to have the advantages that led them to knowing how to game the questions were able to pass the interview.
Never, ever join a startup because you think that in some later year you'll be able to cash out and retire. Most startups fail; lots of the ones that don't fail get bought out in some kind of deal where your equity is traded for not-yet-vested options in the company that bought them. You have a 1-in-10 chance of the startup succeeding at all, and at best a 1-in-40 chance of hitting it big. Some sources say 1 in 100 or less. Even if they do hit it big, unless you have preferred options, you won't make as much as you think.
Remember: your equity is worth exactly $0 unless and until the company succeeds and your vesting period ends.
Most startups fail. I've been in more than one failing startup myself. Maybe the company implodes and lays everyone off, or the shattered remnants of a formerly good company get sold off to a private equity company that pumps whatever profit it can get in the short term before finally killing off the business. Maybe they 'pivot' to something and no longer need whatever it is you do.
Some significant part of the 3-4 year cycle is just good developers looking for a success story. They're not leaving in hard times, they stay with the company during the hard times and are eventually "rewarded" with a layoff.
Early in my career I jumped around industries a bit, including a stint at an insurance company, and was always frustrated by the difference between the kind of highly effective software development I would read about in the literature and the kinds of dumb shit that happens at non-tech companies. I eventually managed to get into working for actual tech, and things fell into place.
My lessons from the early days are:
- Software development is lumped under IT and is always considered a cost center. It doesn't matter if the application you wrote or support is used by the top salespeople pulling in the biggest deals, the credit will go to the sales organization and the software is valued about like filing cabinets and typewriters.
- Nobody cares about staying current with technology. While there's a certain wisdom in "if it ain't broke, don't fix it", the cost/risk model done at non-tech companies never takes into account the changing and advancing world of tech. There's a reason why Equifax was still running an unpatched version of Apache Struts in 2017, and it's not the fault of the single individual the CEO wanted to throw under the bus. In fact, his testimony to Congress actually says the exact opposite: the process and protocol they had in place was followed, and the patch still didn't get into production.
- Even if the developers manage to get some modern tools, processes, or systems in place, they will simply become the targets of blame for failures later. Something will break, as things do, and someone whose salary has an extra zero or two more than the developers will evade responsibility and blame it on "that newfangled thing", and assert that staying with the old "tried-and-true" would not have failed.
- Some developer, somewhere, will be more than happy to climb the career ladder not by being better, but by rolling out all the above and more to cast aspersions on their co-workers. Management, not being in tech and not caring whether tech is dysfunctional or not, will reward that person for what they believe is astute business sense.
True story from that era: The developers were all given laptops with Win95 on it, even though Win2K had been out for some time. One new hire I worked with got a new laptop with the OEM install of Win2K but desktop support took it back, wiped it, and put Win95 on it, because that was the "supported" OS. Oh and the laptops were woefully underpowered for software development, because from the point of view of desktop support and others laptops were just souped-up terminals to be used to access the "real" computers where the work was done - IBM mainframes, Sequent NUMA-Q DB servers, and Sun Solaris minicomputers. As a side-effect of this, no developer had the ability to build and test the software they were deploying to production on the target OS/hardware prior to going to production with it.
Why are you calling him lazy?
Clearly he's not lazy; he got burned by organizational dysfunction after making a sincere and, in some ways, career-limiting push to improve things. He probably got a talking-to from whoever he reports to and after that decided that he was done fighting.
Now he just does the minimum necessary, on his own terms, to avoid getting whatever your employer's version of a PIP might be. Heck, he might already be on some kind of PIP and he's doing "work-to-rule".
You might ask why he doesn't just find another job, one where he might be happier and better challenged. Nobody knows another person's circumstances. Maybe he is looking for another job, during the day, and working these odd hours. Maybe he's not looking because of life circumstances which we can't and shouldn't judge.
He is doing work. Good work. It's just entirely on his terms.
Something everyone should strive for, within the constraints of teamwork. You say he doesn't show up for meetings and other things, which is a bit of a wrench in teamwork, but it sounds like the organization and the SM were and are already doing significant damage to team effectiveness. With the dysfunction you hint at, it's hard to say his style is doing any more damage than the organization does to itself.
So again, why is he lazy? He gets his work done, he still maintains a high quality of contributions, he still communicates about technical issues where he sees room for improvement. The fact that he can do it all while only working 2-3 days a week and weird hours says more about how bad the organization is and how much room there is for improvement there than anything about whether or not your co-worker is indolent.
A workaholic person will have a net negative effect on the team over time. It's almost impossible to discourage a workaholic from working. Their motivations are too deep-seated for an office/manager environment to address. I'm all for attempts to change their behavior, to the extent that the leaders and team can tolerate it, but if the workaholism continues the result will be piles of work that everyone else will have to deal with, increasing their workload involuntarily. There's also the potential friction between the workaholic and the others over what is the "right" amount of work and what should be encouraged and rewarded.
At the very least the workaholic should be given more tasks that require coordinating and scheduling with the other team members, so that no individual's work can overwhelm the team cohesion.
As a last resort, give them lots of work that is tangential to overall team efforts, and let them finish lots of things that don't and won't impact the rest of the team members. Slot them off with lots of make-work so they can feel like they are performing above others, but keep that work sidelined so it doesn't distract or impact the others who are doing work in the critical path.
Not always. Even here there's a clear subgroup of people who want to go back to the office. People who have spouses and kids and no room for a home office. Young people living in studio apartments or shared housing with roommates that don't want to have to work out of their room. People in stressful living situations.
In my year+ of enforced remote work, I've come to appreciate that there's value in in-person interactions. Maybe just getting together once a month or every six weeks is sufficient. Even before that I had one team member who always worked from home and it definitely had its downsides.
I'm lucky, I have an entire room dedicated as an office and den, where I have my desk with a good office chair, my personal desktop computer, filing cabinets, shelves, and a view out my front window. Back in February I got a small easy chair and arranged a small corner of the room with a side table and lamp. If I'm tired of sitting at my desk I can work on my laptop there, or if I want to unplug I can just sit there and read a book or write.
I would go back to in-person work in a hot minute on one condition: give me an actual office, with a window and a door that closes. It doesn't even have to be very big, it just has to be mine. I haven't had an office like that since 2000. Depending on the job, I might even consider going back in if I had a large, high-sided, sound-muffling cubicle with plenty of flat surfaces, cabinets, and shelves. At one place I worked the cubicles even had a small coat closet with a door.
When things happen that change the amount of time it will take to complete the work, what then?
You're supposed to estimate every sprint. It's been shown repeatedly that trying to decide, up front, how long the work will take, or even what work actually needs to be done does not lead to predictable outcomes. Sometimes (ok, usually) requirements change. Sometimes things don't work out, like choosing a particular design turns out to be a poor fit for things that weren't fully understood, or even known, at the start of the project.
The reason for estimating every sprint is to be able to look at what was accomplished so far, how long it actually took (vs. what everyone thought it would take way back at the beginning), and what was planned that must now be re-assessed because something – requirements, assumptions, team size, an unexpected global pandemic that sent everyone home to work for 18+ months – invalidated the planning done up front.
A good team should be self-organizing. Project managers are not the team wranglers, good developers should definitely not pass off teamwork as being the responsibility of someone else to oversee.
That individuals on the team have varying levels of organization skills is a given, but a high-performing team will have these aspects taken care of as part of the development process.
Entire discussions can be avoided by writing code in a way that doesn't depend on the person reading the code knowing the reasons behind the 'how', but instead uses concepts built around the 'why'.
Coverage Is Not Strongly Correlated with Test Suite Effectiveness
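One illustration of why (hypothetical code, not from any real suite): a test can execute plenty of lines and push coverage up without asserting anything, so the metric looks great while the suite catches nothing.

```java
import org.junit.jupiter.api.Test;

class ReportGeneratorTest {

    // Stand-in for the class under test.
    static class ReportGenerator {
        String generate(String quarter) {
            return "report for " + quarter;
        }
    }

    // This "test" executes the code under test, so every line it touches
    // counts as covered, but with no assertions it can never fail and
    // tells us nothing about correctness.
    @Test
    void generatesReportWithoutCheckingAnything() {
        new ReportGenerator().generate("2024-Q1");
        // no assertions: coverage goes up, effectiveness does not
    }
}
```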
I don't consider it proper review if I can't at least get the changes locally, run all the tests (they should pass, of course) and run through the changed behavior. In the better places I've worked I'd spin up a test environment and exercise the changes.
The model of a separate QA department that catches bugs "after the code is written" is old and outdated.
Companies really need to take a hard look at systems that end up untouchable like that. If it's important to the business, they need to suck it up and replace it. If it's not really important, just shut it down and move whatever it does that must be done to other systems.
More importantly, organizations need to take an even harder look at how they let systems get to that point, and why they don't maintain them properly.
When I left one position, the only real feedback I gave to my skip-level manager was to look at all the things they had that nobody wanted to touch and either delete or replace them.
Never underestimate the ability of a single programmer in academia to churn out massive amounts of code and, importantly, never delete any of it. Typically a significant fraction of the codebase is simply dead code. Stuff that is never called, or is no longer part of any important operational flow of control. There may be whole modules, or packages, or whatever, that exist simply because the developer thought to do something but that were never part of the actual functioning of the program and are never referenced from outside the package itself.
Sloppy spaghetti code is not tech debt, it's just bad code.
Crap code is crap and will always slow you down. Tech debt speeds you up now in exchange for a temporary slowdown later.
"Debt can be repaid" does not mean "Burn the old code and start fresh but you have knowledge". That's not repaying debt, that's writing off bad code for what it is: a dumpster fire. Bad code doesn't even help gain knowledge, because it's done so wrong that whatever problems it has are inherent in itself, not errors in understanding the problem domain.
I would argue a bad conscious choice is OK if it works for the needs of the moment. For example, your business needs to research whether something will work out or not, so you build it out quickly without thinking much about testing and architecture.
This is something closer to what the term Technical Debt actually means. The team should still build it as well as they can, including at least some testing and thought for architecture, but in the situation where there's insufficient information to know now, make the choice to work with what's known.
The important part is that it's "a set of conscious choices", and that there's enough of a breadcrumb trail in the form of documentation and some testing that it can be revisited later.
We need to put to bed, forever, the idea that terribly-written code with no tests and no meaningful design choices is debt that can be repaid. That kind of situation is more like taking a pile of paper money and setting it on fire. That money is gone and ain't coming back. In fact, fixing the mess will cost even more time and money than if it had been done well, even if based on an incorrect understanding, in the first place.
Any automated metric can be gamed, and developers, even good ones, will game them when management cares more about the metrics than the product. This is Goodhart-Strathern's law: When a measure becomes a target, it ceases to be a good measure. It's entirely possible to meet or exceed these metrics and still have a crap product.
Those circles where it actually works and is sustained are very small. See "TDD: Where did it all go wrong" [@cooperTDDWhereDid2017].
When you deploy a release to production with several fixes, including an important one, but one of the non-important fixes breaks something, so you have to roll the whole thing back and then figure out how to get the important thing re-released asap while fixing the other stuff.
Lesson learned: make releases smaller and more focused, put highly important changes in their own release so they aren't subject to problems with other changes, or get really good at "fix forward".
Also of note: Any release requiring data migrations that can't be easily reversed should use change data capture or dual write migration. It should also be in a release by itself to avoid messy impacts.
A "cowboy" or Leroy Jenkins developer should only work on things outside any critical path or potentially critical path. Developers like that don't document, don't transfer knowledge, and don't follow up. Yeah, stuff gets done, but nobody knows how exactly. Three months from now (or less) even leroy will forget what he did and probably why, meaning that the next thing will incur that much more risk and friction from having build on off-the-cuff hotshot's work.
I've seen this problem before in companies that have incentive structures that reward finishing new stuff but do nothing about maintaining existing stuff. Incentives not just for the developers but for the product and business side. The metaphor that I discovered to describe it is that of a restaurant kitchen that only ever makes food that is ordered but never cleans the dishes, utensils, and pots and pans. Sure, you can go a little while, tossing the dirty stuff in a pile by the sink, but eventually you run out of clean dishes and all work has to stop while someone washes a spoon or whatever. Well-run restaurants and commercial kitchens have a "clean as you go" policy.
Just go back to the stakeholders and say things like "we were trying to work on X, but this other stuff got prioritized above it, so we had to delay work on X". Then, if the stakeholders have any say in prioritizing the other stuff, ask them which of it they are willing to let go. If they don't have any say because another part of the business owns it, get your stakeholders for X and the other people together and basically (but not literally) say, "cage match, fight it out. decide who gets the team's attention". Make it their problem. Make them take it to the next level up.
In the mean time, institute a software clean-as-you-go policy. Put time into the legacy app by automating the manual stuff, adding tests, standing up a test and deployment pipeline, and fixing little things, and do the same for the new stuff.
As for feature Y, now that you have some experience in the realities of the toil, make your estimates and planning taking into account that you'll spend maybe 10% of your time on it.
You might find Marianne Bellotti's book on legacy systems titled Kill it with Fire [@bellotti2021kill] helpful.
When I'm asked, directly, "when is this going to be ready?", I always answer, "It depends" and talk about all the risks and unknowns that make it impossible to do more than give an approximate range of times when something could be ready to go to production.
Here's the trick: anyone asking that question already has an idea in mind of when it must be done, at the very latest. If the correct estimate can even be determined, which is rare, and it is farther in the future than the person asking the question can accept, they are going to ask how to shorten it anyway. My task is to discover when they need it, and work backwards from there, to determine if the range mentioned above allows for that.
All of this is why iterative and incremental development with frequent releases is more important than accurate estimates of completion dates. If there's something in production, then stakeholders can decide if it is sufficient, or how much more needs to be done, and users can see the work in progress.
The reason we see software today that is in early access, or having limited features, is because some companies have learned that the only way to reduce risk and uncertainty is to actually have working software. Among other things, it lets stakeholders determine whether that big, complex feature or function they thought was a must-have in January is actually all that important in October, or if it turns out that now, with a better understanding of what actually works, that feature could be simplified, broken down into smaller pieces, or even eliminated.
Nobody gets 40 hours of programming done in a 40-hour work week.
I wouldn't call it padding. Of course, you shouldn't be estimating by time anyway, but at least one person in your organization will be trying to convert size of effort to hours anyway, so take that into account. In any case, don't fall into the trap of thinking that because your work schedule is 40hrs/week (or whatever) you're programming the whole time. I hesitate to say that's crazy, but it's way off base. Programming is difficult work that requires focus and deep concentration. Even in the best of circumstances, most people can't sustain that for more than four hours/day, so right there, without accounting for any other responsibilities or interruptions, you're only programming 20 hours/week at best. Don't count any unscheduled time less than one hour as time you can do deep work. If you have a day broken up by meetings so that you get just one 3-hour block of unscheduled time that day, you might get three hours of programming done. You might not get any, depending on how much you get interrupted.
When you perceive problems in team or project, before jumping into solutions it's worth taking the time to be explicit about the pain points and how they are harming the team. Sit down with your co-workers and talk about specific scenarios and goals for the short term. Standing up a workflow won't help if it doesn't solve the problems you have, which seem, if I'm reading your post right, to revolve around collaboration and discoverability.
Don't forget the part where you determine if the solution, as implemented, actually solves the problem and doesn't introduce new ones.
One can even take the idea of microservices perhaps farther than is efficient or reliable and think of each service as something like a class or package like you'd have in code, but distributed. In designing microservices, the same principles of information hiding, encapsulation, high cohesion, and low coupling apply. Reducing dependencies is especially important as a way to avoid problems around network latency and unreliability.
In microservices the patterns that exist aren't much like the usual GoF patterns. Even where you can identify one of the commonly-known ones, its implementation will be quite different from the code-centric examples usually given.
Patterns you see in microservices architectures include things like Event Sourcing, CQRS, Saga, API Gateway, Database per Microservice, and Circuit Breaker.
There's some good work out there covering these. I might start with Martin Fowler's article "Microservices". And then look at A pattern language for microservices by Chris Richardson. There are a couple of books out there but I've not read them so I can't comment. Edit: I forgot there's one book. Michael Nygard's Release It!: Design and Deploy Production-Ready Software [@nygardReleaseItDesign2007]
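Of those patterns, Circuit Breaker is the easiest one to sketch in a few lines. This is a deliberately minimal, hand-rolled version just to show the idea; in practice you'd reach for a library, and the names and thresholds here are arbitrary.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal circuit breaker: after too many consecutive failures the breaker
// "opens" and fails fast, then allows a single retry after a cooldown.
public class CircuitBreaker {
    private final int failureThreshold;
    private final Duration cooldown;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public CircuitBreaker(int failureThreshold, Duration cooldown) {
        this.failureThreshold = failureThreshold;
        this.cooldown = cooldown;
    }

    public <T> T call(Supplier<T> remoteCall) {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(cooldown))) {
                throw new IllegalStateException("circuit open: failing fast");
            }
            openedAt = null; // half-open: let one call through to probe the dependency
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // open the circuit
            }
            throw e;
        }
    }
}
```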
My experience showed that there are some organizational and cultural red flags to watch out for. Technically, the idea is very sound and properly done is extremely resilient.
Culturally, you'll run into developers that are very uncomfortable outside of the synchronous transactional request/response style architectures.
Organizationally, you'll find that if you have to bring different teams on board to adopt event-driven microservices, there will be resistance from those who just want to have an endpoint to call or implement.
Finally, you may run into a situation where your ops team doesn't want/can't properly support whatever message broker you choose. If they have been doing the things to support servers doing REST and CRUD and relational DBs, they might not understand or want to deal with the operational realities of message brokers and queues.
In the common way of doing things with lots of API calls across services that have a front end, a back end, and DB, the system is said to be "orchestrated", that is, there's something that acts as the "leader", or coordinator, like an API gateway. With event-driven microservices, things are less orchestrated and more choreographed. Decentralization seems to be a very hard concept for many, maybe most, developers to really grasp.
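A toy illustration of the difference, with made-up services and an in-memory bus standing in for a real broker: in the choreographed version nothing "calls" the shipping service, it just reacts to an event it has subscribed to.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy, in-memory stand-in for a message broker, just to show the shape of
// choreography: publishers don't know who (if anyone) is listening.
public class ChoreographyExample {
    record OrderPlaced(String orderId) {}

    static class EventBus {
        private final List<Consumer<OrderPlaced>> subscribers = new ArrayList<>();
        void subscribe(Consumer<OrderPlaced> handler) { subscribers.add(handler); }
        void publish(OrderPlaced event) { subscribers.forEach(s -> s.accept(event)); }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // "Shipping service" reacts to the event; no coordinator tells it what to do.
        bus.subscribe(event -> System.out.println("shipping: preparing " + event.orderId()));

        // "Order service" publishes the fact that something happened and moves on.
        bus.publish(new OrderPlaced("order-42"));
    }
}
```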
It's not always best to outsource technology and hosting. Referring back to Joel Spolsky's In Defense of Not-Invented-Here Syndrome, there are times when you'd want to roll your own. The rule as stated "buy for parity, build for competitive advantage" applies.
If the business can realize a competitive advantage by building out their own, that's the right choice. Using a managed service means you get the same behavior as everyone else who is using it. That might be fine, depending on the use case.
There's also the situation where you really don't want to pay an Amazon or Google because your product or service is a competitive offering to one of theirs. If you run a search engine you'd be a fool to host your systems on Google's platform.
What people today say is "standard REST" is more or less completely different than what Fielding wrote about in 2000.
A great deal of what is written about HTTP verbs and paths is derived from some rather dense technical descriptions. For example, "a resource R is a temporally varying membership function MR(t), which for time t maps to a set of entities, or values, which are equivalent. The values in the set may be resource representations and/or resource identifiers" and "REST components perform actions on a resource by using a representation to capture the current or intended state of that resource and transferring that representation between components. A representation is a sequence of bytes, plus representation metadata to describe those bytes. Other commonly used but less precise names for a representation include: document, file, and HTTP message entity, instance, or variant."
Of note, the paper does not mention verbs or nouns as conceptualized by certain REST implementations, and in fact neither term appears in the paper even a single time.
Oh, a challenge. I'm in complete agreement with you regarding what REST is and isn't, but let's have some fun with your list. This is just a few minutes' thinking, and I'm sure there are improvements that could be made. Since they all relate to users, user identity, and such, this is kind of my specialty. The REST bits are certainly wrong in some ways.
- login
- logout

Let's call this "manipulating authenticated state". So it would be something that works on that state, or session.

`POST /user/{userid}/session` (with the body of the request containing the authentication credentials, e.g. password). It returns a secure token (DO NOT USE: this isn't sufficiently secure for real-world use, it's just an exercise).

`DELETE /user/{userid}/session/{token}`. This assumes perfect security, that `token` won't leak or otherwise be compromised.
Password change

- forgot password
- reset password

Here, the resource we are manipulating is the user credential, and the client needs to ask the server to create the state of "password reset". The "forgot password" part is just "get me the resource for manipulating the password":

`POST /user/{id}/OTP`

For users, the response on success would be `200 OK` [edit: or maybe `201 Created`] with a description of the next steps (go check your email, typically). For automated processes, we can't just return a usable token, that would be terribly insecure, but we can return a code that, combined with something the client has, like a security key, can be sent to the next step:

`PUT /user/{id}/OTP/{code}`

Here I'm using `PUT` in the "update if exists" sense. Here we run into the fact that REST isn't just the HTTP method and the path, but other metadata, which the client supplies in the body of the request as the password but with additional information proving that the caller is in possession of information that, combined with the code, allows a reset. Typically this would be some sort of client secret credential.
The following are manipulating the state of the user and any or all sessions associated with that user.

- kick user
- ban user
- unban user

`DELETE /user/{id}/{session-id}`. Here the session id represents the current interactive session for the user.

Ban can be the endpoint `/user/{id}/ban` with methods PUT and DELETE:

`PUT /ban/{userid}`

Note that ban is a noun here, and the client provides, in the body of the request, proof that the client has the privileges. This would likely be some representation of the admin user session/credentials. The response would be the `banid` for the resource describing the ban itself.

`DELETE /ban/{banid}`

The key trick is to note that this is manipulating the resource called `ban`, which encapsulates certain information, including a `userid`.
The last two seem hard because email is not part of HTTP, but that's kind of an implementation detail. The resource we are manipulating is the user, and the state of verification is created and modified.

- send email verification
- confirm email address

`POST /user/{id}/verification/{channel}` will create the resource and, as a side effect, create an out-of-band message containing the verification id to the channel indicated, which may always be email, but could be SMS.

`PUT /user/{id}/verification/{verification-id}`, where the `verification-id` is in the side-channel message external to HTTP.
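Just to show the shape of it in code (a rough sketch using JAX-RS annotations; the class and the naive token handling are mine, and no more production-ready than the exercise above), the session resource might look like this:

```java
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.Response;

// Hypothetical sketch of the "authenticated state" resource described above.
// Not production-grade: tokens are opaque strings and no real credential
// validation or storage is shown.
@Path("/user/{userid}/session")
public class SessionResource {

    @POST
    public Response login(@PathParam("userid") String userId, Credentials credentials) {
        // Validate the credentials supplied in the request body (elided),
        // then create the session resource and return its token.
        String token = java.util.UUID.randomUUID().toString();
        return Response.status(Response.Status.CREATED).entity(token).build();
    }

    @DELETE
    @Path("/{token}")
    public Response logout(@PathParam("userid") String userId, @PathParam("token") String token) {
        // Deleting the session resource is the "logout" operation.
        return Response.noContent().build();
    }

    // Request body for login; a real implementation would never pass
    // plain passwords around like this.
    public static class Credentials {
        public String password;
    }
}
```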
It's been a while so I've forgotten some of the details. We used, if I recall correctly, Flyway, and could reproduce a production DB from scratch with all the alters from the beginning. Any DB change was done in the form of a DDL script run through Flyway. Manually, at the command line, for development, but then integrated via a Maven plugin for test and production deployments.
We could also build a test environment with older versions of the application because the flyway scripts were version controlled with the rest of the code.
The complexities arose when an alter was not completely reversible. We definitely worked to minimize those but there were times when it was unavoidable.
We used a similar setup at another employer, although there was another tool whose name I forget that we used for PostgreSQL because that company had both MySQL (technically MariaDB) and PostgreSQL. Any alter that we wanted for production had to be reviewed by someone on the DB team who would check it for style and adherence to conventions as well as running it in an environment they owned and managed.
Overall I would say version controlled scripted schema migration is essential and wouldn't go back to the days when a DB admin ran un-versioned scripts in production. Thinking back to the early days of my career when that was common, it now seems primitive and crazy to do things by hand like that.
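For anyone who hasn't seen it, this is roughly what the programmatic equivalent of that setup looks like with Flyway's Java API; the JDBC URL and credentials are placeholders, and in our case the same thing was wired through the Maven plugin rather than code.

```java
import org.flywaydb.core.Flyway;

public class Migrate {
    public static void main(String[] args) {
        // Migrations live as version-controlled SQL scripts, e.g.
        // src/main/resources/db/migration/V1__create_users.sql
        // src/main/resources/db/migration/V2__add_last_login_column.sql
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/app", "app_user", "secret") // placeholders
                .locations("classpath:db/migration")
                .load();

        // Applies any pending migrations in version order and records them
        // in Flyway's schema history table.
        flyway.migrate();
    }
}
```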
I've been waiting my entire career for something to come along that is enough better than this 50-year-old tech for it to go away. At least one of the sticking points is that there is SO MUCH data trapped in Oracle/DB2/MySQL/SQL Server that it's going to take a paradigm shift to unstick it.
Terrible type system; no user-defined types. No object (tuple) identity. Three-valued logic with NULLs. And of course graph theory is a powerful tool that SQL simply misses out on entirely.
Change data capture applies to having a process whereby any data that is changed in the unmigrated source (e.g. table) is copied to the migrated source. Usually this is done by having some kind of timestamp or version number on each data element. This is done before the code migration itself happens, so that the new version simply starts using the newer migrated data source, and the old data source can simply be archived.
Dual writes are a bit more complicated: the code is modified to write to both the old and the new source, and there's a flag for reading from either old or new. The code changes and the data source changes are deployed together. Code runs for a while using both new and old data sources, with the old source as primary. Once the new source is sufficiently complete, the new source becomes primary and writes no longer happen to the old source.
They are both a bit complex, but dual writes more so. Don't use either unless you really need to, such as when reverting a migration would be more complex, or impossible. Dual writes are even more of an edge case, and the reasons for choosing them over change data capture imply quite a lot of complexity in the data stores.
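A rough sketch of the dual-write shape described above, with hypothetical store names; the flag that picks the primary source would really come from configuration or a feature-flag service rather than a constructor argument.

```java
// Hypothetical sketch of a dual-write wrapper: writes go to both the old
// and new data sources, reads are steered by a flag until the new source
// is trusted enough to become primary.
public class DualWriteUserRepository {

    interface UserStore {
        void save(User user);
        User findById(String id);
    }

    private final UserStore oldStore;      // legacy source, primary at first
    private final UserStore newStore;      // migrated source
    private volatile boolean readFromNew;  // flipped once the new source is sufficiently complete

    public DualWriteUserRepository(UserStore oldStore, UserStore newStore, boolean readFromNew) {
        this.oldStore = oldStore;
        this.newStore = newStore;
        this.readFromNew = readFromNew;
    }

    public void save(User user) {
        // Write to both sources so neither falls behind during the migration.
        oldStore.save(user);
        newStore.save(user);
    }

    public User findById(String id) {
        // Reads follow the flag; once the new store is primary, writes to
        // the old store can be dropped and this wrapper removed.
        return readFromNew ? newStore.findById(id) : oldStore.findById(id);
    }

    public record User(String id, String name) {}
}
```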
Online Database Migration by Dual-Write: This is not for Everyone (to be more precise: for almost no-one) [@busslerOnlineDatabaseMigration2020]
- Bellotti, M. 2021. Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones). No Starch Press, Incorporated. https://nostarch.com/kill-it-fire.
- Bernstein, David Scott. 2015. Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software. Pragmatic Bookshelf.
- Binstock, Andrew. 2019. “Take Notes As You Code—Lots of ’Em!” November 21, 2019. https://blogs.oracle.com/javamagazine/take-notes-as-you-code-lots-of-em.
- Bussler, Christoph. 2020. “Online Database Migration by Dual-Write: This Is Not for Everyone.” Google Cloud - Community (blog). August 27, 2020. https://medium.com/google-cloud/online-database-migration-by-dual-write-this-is-not-for-everyone-cb4307118f4b.
- Cooper, Ian. 2017. “TDD, Where Did It All Go Wrong.” December 20. https://www.youtube.com/watch?v=EZ05e7EMOLM.
- Crabill, Shannon. 2019. “Taking Note.” Shannon Crabill — Front End Software Engineer (blog). July 15, 2019. https://shannoncrabill.com/blog/taking-notes-as-a-developer/.
- Feathers, Michael C. 2004. Working Effectively with Legacy Code. Upper Saddle River, N.J.: Prentice Hall PTR.
- Frenzel, Max. 2018. “In Praise of Deep Work, Full Disconnectivity and Deliberate Rest.” January 9, 2018. https://maxfrenzel.medium.com/in-praise-of-deep-work-full-disconnectivity-and-deliberate-rest-e9fe5cc50a1d.
- Hochstein, Lorin. 2022. “Writing Docs Well: Why Should a Software Engineer Care?” Surfing Complexity (blog). November 24, 2022. https://surfingcomplexity.blog/2022/11/24/writing-docs-well-why-should-a-software-engineer-care/.
- Nygard, Michael T. 2007. Release It! Design and Deploy Production-Ready Software. The Pragmatic Programmers. Raleigh, N.C: Pragmatic Bookshelf.
- Spinellis, Diomidis. 2003. Code Reading: The Open Source Perspective. Effective Software Development Series. Boston, MA: Addison-Wesley.
- Thomas, David, and Andrew Hunt. 2020. The Pragmatic Programmer: Your Journey to Mastery, 20th Anniversary Edition.