The later you detect a bug in the development process, the more expensive it is to fix.
If every type mismatch causes a program crash, then you still have real problems with a production crash. Your user has a degraded experience. Somebody gets alerted about the crash. Somebody needs to investigate it. In a language like Python, you are often stuck with a message saying your variable doesn't have the right method, so you've got no idea what the wrong type is or where it came from. You track it down, diagnose the issue, and push a fix.
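To make that concrete, here's a minimal Python sketch (the `notify` and `load_user` names are made up for illustration) of the kind of message you're left with:

```python
# Hypothetical sketch: the wrong type flows in from far away, and the error
# only surfaces where a missing method is finally called.
def notify(user):
    user.send_email("Your order shipped")  # only works if user has send_email

def load_user(user_id):
    # imagine a refactor that now returns a plain dict instead of a User object
    return {"id": user_id, "email": "a@example.com"}

notify(load_user(42))
# AttributeError: 'dict' object has no attribute 'send_email'
# The traceback points at notify(), not at the refactor in load_user()
# where the bug was actually introduced.
```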
It can be worse in a duck-typed world, where you aren't even guaranteed to crash. Your program might just fly off and do completely wrong things. Or you might be working in a domain where crashing is unacceptable.
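A sketch of that failure mode, with a made-up `total_price` function: nothing crashes, the result is just wrong.

```python
# Hypothetical sketch: the wrong type doesn't crash, it just computes nonsense.
def total_price(quantity, unit_price):
    return quantity * unit_price

# quantity arrives as a string, e.g. straight from a parsed form field
print(total_price("3", 10))  # '3333333333' -- string repetition, no exception
print(total_price(3, 10))    # 30, the intended result
```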
In a statically typed world, your compiler yells at you and says "you are passing an X here when it expects a Y" and you fix the issue before the code ever runs.
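Sticking with Python for comparison, the same mistake under a static checker (mypy here, as one example) never makes it past a check run:

```python
# The same function with type annotations; a checker such as mypy flags the
# call site before the code ever runs.
def total_price(quantity: int, unit_price: int) -> int:
    return quantity * unit_price

total_price("3", 10)
# mypy output (roughly):
# error: Argument 1 to "total_price" has incompatible type "str"; expected "int"
```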
There are downsides to the latter approach, of course. "Ugh, I've got a FooMap and I need to pass it to a BarMap and these types are incompatible" is aggravating. Attempts to fix this have not always been great; we've got mountains of Java code where nobody has any clue where the real implementation is, because everything is typed as an interface, for example.
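One hedged rendering of the FooMap/BarMap complaint in Python, using `NewType` to stand in for nominal typing: the two types are structurally identical, but the checker still makes you rewrap.

```python
# Hypothetical sketch: two structurally identical types that a nominal
# checker refuses to treat as interchangeable.
from typing import NewType

FooMap = NewType("FooMap", dict[str, int])
BarMap = NewType("BarMap", dict[str, int])

def render(m: BarMap) -> None:
    print(m)

foo = FooMap({"a": 1})
render(foo)                # checker: incompatible type "FooMap"; expected "BarMap"
render(BarMap(dict(foo)))  # the "fix" is pure rewrapping ceremony
```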
I think in this case it's probably true for run-of-the-mill bugs, but not for architecture and system design.
But I just want to object to "we don't have any good evidence". This is the sort of thing it's really hard to get scientific evidence for, but that doesn't mean we can't learn about it.
I mean, the problem is that we have quite good evidence of how to handle architecture and system design problems. And it is not by finding them earlier, but by reducing their costs through looser coupling and incremental work. That is what all kinds of agile and DevOps research has shown.
> we have quite good evidence of how to handle architecture and system design problems
We do? I seriously doubt that. New good and bad architectures are popping up all the time. And there are plenty of architectures that people totally disagree about, e.g. microservices.
> But by reducing their costs through looser coupling
In my experience, loose coupling is generally something to avoid where possible. It leads to spaghetti systems and unknown data flows. It's pretty much the dynamic typing of architecture design.
I'm not exactly sure what point you're trying to make though so I may have misunderstood...
They may not cost more for us devs, but does that factor in the time our customers spend unable to do what they need to do because of the bug?
I mean, I fixed a trivial-to-fix bug the other day that Rust probably would have caught. Between the customer reporting it, support doing their thing, and a new build going out, it took an hour. An hour during which our customer couldn't do what they needed to do for their work.
So I'd say it's almost trivially true that a bug caught before release costs less to fix.
This is a very strange statement on a site designed by and for engineers. We’ve all worked on large projects. It can take an annoying few minutes to fix a static type-check error. We’ve all spent days or even weeks tracking down weird random runtime errors due to type mismatches.
The plural of anecdote is not data, but this is not a science website either. You don’t need a study to establish engineering common sense.
I've grown to dislike this statement. The worst-case cost of fixing a problem late is bad. The vast majority of bugs are not that, though. Some are cheap no matter when you find them.
That said, I suspect many people were used to static type systems that required a lot of ceremony to name everything. In those, changes commonly had to be rather extensive. Just look at early Java EE, with its tons of interfaces and text config for what has, for a while now, commonly been a single POJO.
Now, I don't think this is an argument against static tools. Tools are tools. If you have a specification that you can encode into types easily, do so (see the sketch below). If the type already exists, use it. If you are exploring, take care not to encode a runtime feature of the data into the type.
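As one hedged reading of "encode the specification into types", in Python (the `Status` and `advance` names are invented for illustration):

```python
# Minimal sketch: the spec says a status is one of three values, so say that
# in the type instead of validating strings at runtime.
from typing import Literal

Status = Literal["pending", "shipped", "delivered"]

def advance(status: Status) -> Status:
    if status == "pending":
        return "shipped"
    if status == "shipped":
        return "delivered"
    return status

advance("pnding")  # a type checker flags the typo; untyped code finds it at runtime, or never
```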