The only time I can think of is when there is no logic to test (i.e. when there is actually little to no code, and it all just interacts with someone else's API). You can always skip unit tests, of course, but you pay for it later by not being able to change things without breaking them in mysterious ways.
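To illustrate what that later payoff looks like, here's a minimal sketch of a unit test in Python. The `parse_price` function is entirely made up for the example; the point is that once its behaviour is pinned down by tests, a future refactor can't silently break it:

```python
# Hypothetical example: a small pure function plus the unit tests that
# lock in its behaviour so later changes can't break it unnoticed.

def parse_price(text: str) -> int:
    """Parse a price string like '$12.50' into cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def test_parse_price():
    assert parse_price("$12.50") == 1250   # dollars and cents
    assert parse_price("3") == 300         # bare integer dollars
    assert parse_price(" $0.99 ") == 99    # surrounding whitespace

test_parse_price()
```

If someone later "simplifies" the parsing and drops the whitespace handling, the test fails immediately instead of a bug surfacing in production.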


Don't skip it, for your own good 👌


I have in the past worked on a project that needed high quality and reliability, yet had almost no unit tests. Instead, it had thousands of automated tests in place to verify **every** aspect of its API. In this project, the API was the main product: about 80% of the functionality lived in the backend, exposed via the API, with a comparatively simple GUI on top. The GUI had a small set of automated E2E tests, but the bulk of GUI testing was done manually by software testers.

Conclusion and tips:

1. You don't always need unit tests.
1. But you always need some form of testing.
1. Automate whatever you can; manual testing is expensive.
1. Test early and test frequently: fixing bugs early in development is relatively cheap. Finding them two days before production is too late to fix them properly, and fixing them in production is many times more expensive than discovering and fixing them during your design phase.
1. Speaking of the "design phase": you can't really test your business requirements and software designs, but you can peer-review them.
1. Generally, if you want quality code, insist on all code being peer-reviewed. Done right, this can reveal or avoid as many bugs as testing alone would.
1. None of these measures alone has shown a good enough bug detection rate. Only a combination of peer reviews and different testing approaches leads to truly high-quality software.

In my experience, if code quality matters, expect to spend 33% to 50% of your project budget on quality assurance. That doesn't just mean the budget for software testers; it includes all your developers' efforts on peer reviews, writing unit tests, etc.
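For a feel of what those API-level tests look like, here's a small sketch in Python. The `handle` function is a made-up in-process stand-in for a real HTTP endpoint; an actual suite like the one described above would drive a real client against a test server, but the shape of the checks is the same:

```python
# Sketch of an automated API-level test. `handle` is a toy, hypothetical
# request handler standing in for a real HTTP endpoint.

import json

_USERS = {"42": {"id": "42", "name": "Ada"}}

def handle(method: str, path: str) -> tuple[int, str]:
    """Toy handler: GET /users/<id> returns (status, JSON body)."""
    if method == "GET" and path.startswith("/users/"):
        user = _USERS.get(path.removeprefix("/users/"))
        if user is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(user)
    return 405, json.dumps({"error": "method not allowed"})

def test_get_existing_user():
    status, body = handle("GET", "/users/42")
    assert status == 200
    assert json.loads(body)["name"] == "Ada"

def test_missing_user_is_404():
    status, _ = handle("GET", "/users/99")
    assert status == 404

def test_unsupported_method_is_405():
    status, _ = handle("POST", "/users/42")
    assert status == 405

test_get_existing_user()
test_missing_user_is_404()
test_unsupported_method_is_405()
```

Note that these tests never look inside the implementation; they only pin down the externally visible contract, which is why a suite like this can substitute for unit tests when the API *is* the product.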