The Ottia team replaced guessing with data. We use our past scope and effort data to forecast with clarity. It’s a method you can use too.
If you run a startup, the pressure to predict delivery timelines is constant. Investors want launch dates. Customers ask about new features. Your team needs a plan to follow.
Your first instinct may be to give a number that sounds good and suitably ambitious. That is optimism, not a forecast. Sometimes it works; often it does not. And when it fails, you lose more than a deadline: you lose trust, you go over budget, and your team burns out.
What if you could stop guessing and work from real facts instead? Our team at Ottia built a framework to do just that: we forecast from our own data, using our past work to predict our future.
Most teams use old, flawed ways to forecast.
One is the optimistic straight line: a projection of what you hope will happen, in which everyone works at full speed, the plan stays stable, and no bugs appear. That almost never happens.
The other is the static timeline chart: a picture of deadlines you draw once, showing how things should go, and never update, so it stops reflecting what actually happens.
Both methods ignore a simple fact: your future capacity is best predicted by your past delivery. That only works, however, if you actually track your past delivery.
Our team tracks a few key things: the actual hours for each task, the total time for each feature, the hours delivered per sprint, and the difference between our estimates and the actual time work took. We also record where we hit blockers or where scope changed, and add notes explaining why a task took more time than planned.
This gives us a living data set that answers the main question in forecasting: based on our past performance, what can we really deliver?
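As a rough illustration, the record we keep per task can be as simple as a small data structure. The sketch below is in Python, and the field names are illustrative rather than the schema of any specific tool.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One row of our delivery history; field names are illustrative."""
    task_id: str
    epic: str
    sprint: int
    estimated_hours: float       # what we planned
    actual_hours: float          # what it really took
    scope_changed: bool = False  # did the scope grow while the task was in flight?
    blocked: bool = False        # did we hit a blocker?
    notes: str = ""              # short explanation when a task ran long

    @property
    def accuracy(self) -> float:
        """Ratio of estimated to actual hours; 1.0 means the estimate was exact."""
        return self.estimated_hours / self.actual_hours if self.actual_hours else 0.0
```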
The process is straightforward. We use five simple steps.
First, we break the next phase into epics. An epic is a big piece of work. For each epic we list the business goals, such as reducing churn or launching a new feature, and give it a rough scope based on similar work from our past.
Every estimate includes a short description of its scope, and any task estimated at more than 8 hours is broken into subtasks.
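A minimal sketch of how an epic and its rough scope might be captured, assuming a plain Python representation: the Epic and PlannedTask names are hypothetical, and the 8-hour threshold is the rule described above.

```python
from dataclasses import dataclass

MAX_TASK_HOURS = 8  # the breakdown threshold described above

@dataclass
class PlannedTask:
    name: str
    estimated_hours: float

@dataclass
class Epic:
    name: str
    business_goal: str        # e.g. "reduce churn" or "launch a new feature"
    tasks: list[PlannedTask]

    def rough_scope_hours(self) -> float:
        """Rough scope is simply the sum of the task estimates."""
        return sum(t.estimated_hours for t in self.tasks)

    def tasks_needing_breakdown(self) -> list[PlannedTask]:
        """Tasks over the 8-hour limit that still need to be split into subtasks."""
        return [t for t in self.tasks if t.estimated_hours > MAX_TASK_HOURS]
```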
Second, we look at our past data: how long similar epics actually took, our team's average weekly velocity, and where scope tended to grow or work tended to slow down.
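Rolled up, that history gives two numbers we reuse constantly: per-sprint velocity and estimate accuracy. The sketch below shows one way to compute them, assuming records are simple (sprint, estimated_hours, actual_hours) tuples rather than any particular tracker's export.

```python
from collections import defaultdict

# Each record is a (sprint, estimated_hours, actual_hours) tuple.
def per_sprint_velocity(records: list[tuple[int, float, float]]) -> float:
    """Average actual hours delivered per sprint."""
    delivered: dict[int, float] = defaultdict(float)
    for sprint, _estimated, actual in records:
        delivered[sprint] += actual
    return sum(delivered.values()) / len(delivered) if delivered else 0.0

def estimate_accuracy(records: list[tuple[int, float, float]]) -> float:
    """Ratio of total estimated to total actual hours; below 1.0 means work runs long."""
    estimated = sum(e for _, e, _ in records)
    actual = sum(a for _, _, a in records)
    return estimated / actual if actual else 0.0
```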
Third, for each epic we ask a few questions. Have we done this kind of work before? Is our team familiar with it? Are there known problems or unknowns? We use the answers to apply buffers and to flag high-risk areas, which shows us which parts of the project are the riskiest.
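One simple way to turn those answers into a buffer is a multiplier on the epic's rough scope. The sketch below is illustrative only; the specific percentages are made-up defaults, not the buffers we actually apply.

```python
def risk_buffer(done_before: bool, team_familiar: bool, has_unknowns: bool) -> float:
    """Multiplier applied to an epic's rough scope; the percentages are placeholders."""
    buffer = 1.0
    if not done_before:
        buffer += 0.25   # novel work tends to grow
    if not team_familiar:
        buffer += 0.15   # ramp-up time
    if has_unknowns:
        buffer += 0.20   # open questions usually turn into extra tasks
    return buffer

# An epic we have never done, with open unknowns, gets roughly 1.45x its rough scope.
print(risk_buffer(done_before=False, team_familiar=True, has_unknowns=True))
```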
Fourth, we build a projection from our real numbers. For example, our team delivers 80 hours a week, our past estimates are about 70% accurate, and the roadmap holds 640 hours of estimated work. A naive plan says 640 / 80 = 8 weeks; adjusted for how accurate our estimates actually are, we project delivery in about 10-12 weeks. Most teams would promise 8 weeks because they divide effort by team size; we use our team's real, validated velocity.
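The arithmetic behind that projection is straightforward: divide the estimated backlog by estimate accuracy to get the likely actual hours, then by weekly capacity to get weeks. Here is a sketch with the numbers from the example above; the one-week spread on either side is illustrative.

```python
def projected_weeks(backlog_hours: float, weekly_capacity: float, accuracy: float) -> tuple[float, float]:
    """Delivery range in weeks, based on validated velocity and estimate accuracy."""
    likely_actual_hours = backlog_hours / accuracy    # 640 / 0.7  is about 914 hours
    midpoint = likely_actual_hours / weekly_capacity  # 914 / 80   is about 11.4 weeks
    return midpoint - 1, midpoint + 1

low, high = projected_weeks(backlog_hours=640, weekly_capacity=80, accuracy=0.7)
print(f"{low:.0f}-{high:.0f} weeks")  # roughly 10-12 weeks, not the naive 640 / 80 = 8
```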
Finally, we update the forecast with every sprint. We track completed tasks against what was planned, compare the hours we planned with the hours we spent, and re-check the remaining scope. That tells us whether we are on track, gives us a new delivery range, and lets us decide early if the plan needs to change.
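A sketch of that per-sprint re-forecast, using the same arithmetic as the projection above; the remaining hours, velocity, and accuracy in the example are illustrative values, not real project data.

```python
def reforecast(remaining_estimated_hours: float, recent_velocity: float, recent_accuracy: float) -> tuple[float, float]:
    """New delivery range in weeks, recomputed from the latest sprint's data."""
    midpoint = remaining_estimated_hours / recent_accuracy / recent_velocity
    return midpoint - 1, midpoint + 1

# After a few sprints: 420 estimated hours remain, velocity has slipped to 75 hours
# per week, and accuracy is holding at 0.7. All of these numbers are illustrative.
low, high = reforecast(420, 75, 0.7)
print(f"{low:.0f}-{high:.0f} weeks remaining")  # roughly 7-9 weeks
```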
Large companies have spare people, bigger budgets, more project managers, and built-in buffers. Startups have none of that. For a startup, an accurate forecast is a survival skill.
When you tie effort to actual capacity, the benefits are concrete: you stop making promises to investors that you cannot keep, you spend your money more wisely, you reduce burnout because the team works within its limits, and you can change plans early, before you run out of time.
Imagine we were building a new feature set: a self-serve onboarding flow and a usage analytics dashboard. Our roadmap had four items: the onboarding redesign at 80 hours, the metrics backend setup at 60 hours, the UI dashboard at 50 hours, and QA and iteration at 40 hours.
We checked our past work. Our team of four delivers about 90 hours each week; similar dashboards had taken twice as long as planned because the scope grew, while backend work had consistently matched our estimates.
So we flagged the UI dashboard as low confidence and made a change: we shifted the delivery date by one sprint, added a buffer to that epic, and decided to start with a smaller version of the dashboard. Weeks later, we entered the actual hours and updated the forecast. Our stakeholders saw the change in real time; it was not a surprise, it was a change based on data.
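Replaying the scenario with the same arithmetic shows why the buffer moved the date. The hour figures come from the example above; the 2x multiplier encodes the past dashboard overruns, and the code itself is only an illustration.

```python
roadmap = {
    "Onboarding redesign": 80,
    "Metrics backend": 60,
    "UI dashboard": 50,
    "QA and iteration": 40,
}
buffers = {"UI dashboard": 2.0}  # low-confidence epic: similar dashboards ran 2x over

buffered_hours = sum(hours * buffers.get(epic, 1.0) for epic, hours in roadmap.items())
print(buffered_hours)                 # 280 hours instead of the raw 230
print(round(buffered_hours / 90, 1))  # about 3.1 weeks of capacity, up from about 2.6
```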