Tuesday, December 11, 2012

Bugs and dynamically vs statically typed languages

Something occurred to me that, unfortunately, was just a little bit too big for a tweet, so I decided to blog instead, and add more context.

While thinking about the perennial question of whether static typing helps reduce the number of bugs in an otherwise well-tested code base, I was reminded of how many bugs I have seen flagged by newer versions of Scala with improved static analysis and correctness checks, and of how damn hard it is to get variance right when you start using it.

I was then struck by a thought: the "bugs" I was thinking of were not caught by tests because no feature in the code actually used the buggy class in a way that revealed the bug. This also relates to variance, because variance in Java is defined at the use site instead of the definition site: the definitions themselves are all invariant in Java (aside from arrays, which are covariant).
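
To make that concrete, here is a minimal Scala sketch (the Box class is invented for illustration). With declaration-site variance, the +A annotation is a claim the compiler checks against the definition itself, so a mistake surfaces even if no caller ever exercises it:

class Box[+A](val value: A) {
  // A method taking a plain A would be rejected at compile time, because it
  // would let a Box[String], viewed as a Box[Any], accept an Int:
  // def put(a: A): Box[A] = new Box(a)        // does not compile
  def put[B >: A](b: B): Box[B] = new Box(b)   // the fix: widen to a supertype
}
// Java, by contrast, keeps definitions invariant and picks variance at each
// use site with wildcards: List<? extends Number>, List<? super Integer>.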

So, thinking about it, I came up with the following hypothesis:

Dynamically typed language proponents consider "bugs" to be things that the user of the software can cause it to do, while statically typed language proponents consider "bugs" to be things that users of the code can cause it to do.

That would certainly account for some differences in attitude and beliefs I perceive. On the other hand, it might just be my imagination.

10 comments:

  1. Unfortunately, in the end, the user is the only thing that matters. If a bug isn't visible to the user, it really doesn't exist. Now, that doesn't mean that some future revision might not make the (previously hidden) bug visible, but that is dealing in hypotheticals.

    Static typing does exactly that: it deals in hypotheticals. It forces you to prove that your code is free of a certain class of bugs under *all* circumstances, regardless of what users see.
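
    A minimal Scala sketch of that point (the function is invented): even a branch that no test ever executes must satisfy the type checker.

    // Invented example: the compiler checks the branch tests never run.
    def size(xs: List[Int], verbose: Boolean): Int =
      if (!verbose) xs.length
      // else xs.length.toString   // rejected at compile time: String, not Int
      else { println("computing size of " + xs); xs.length }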

    Replies
    1. Isn't a hypothetical bug still a bug? Given that humans are flawed and humans maintain software, eventually a hypothetical bug will become a real one. OK, maybe too philosophical...

    2. While I agree with Spiewak that what the user can do is what matters, I think that is only true for a given point in time of the code base and its usage. Tests normally reflect the things we expect users to do, and some of what we expect them not to do. But as security incidents show us, users frequently do the unexpected.

      Having worked with formal verification of code (in a simple term-rewriting language), I found that what the code can do has a profound impact on how to prove correctness. Small decisions were reflected in a huge combinatorial explosion of what the code could do, and some of those combinations produced bugs. In my case the bugs had not been seen because testing reflected only one path and one usage pattern. A simple configuration change and the system would fail to produce the expected results.

      Also, while working on this formal verification, I found that bad type design meant the system could be in a set of states for which there was no valid computation or correctness proof. So whatever level of correctness we had rested on faith that those states could not be reached by "correct" behavior of the system.
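
      A small Scala sketch of that type-design point (the Connection example is invented): encoding only the valid states as a sealed hierarchy means the meaningless states cannot even be constructed, so nothing is left to take on faith.

      sealed trait Connection
      case object Closed extends Connection
      final case class Open(handle: Int) extends Connection
      // Compare a single record (isOpen: Boolean, handle: Option[Int]):
      // the combination isOpen = true, handle = None is a state with no
      // valid computation, ruled out only by programmer discipline.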

      Since most interesting code lives on, what the code can do affects how it can be combined. So while those bugs may not be significant at a given point in time, not having them in the code base is better in the long run. Types, and the early elimination of certain problems, make it easier to grow the base.

      I think it's like laying a solid foundation for a building. If it's a one-story house, maybe no foundation will do, and save time and money. But once you start stacking floors and placing load, a good structure with known behavior is the only way to go. And you never know when you'll need those extra floors.

  2. The focus on "what the user sees" is implicitly referring to "what she sees now". Add a feature, even using mostly the same code base, and the user sees something new, exercising a new fraction of the code base, which may now be buggy.

    Static typing enforces constraints on the code and its uses, which leads to improved correctness not just for current use cases but also for future ones. Static typing buys insurance for future uses of your code base, and thus makes future features "that the user sees" more likely to be correct.

  3. What you said here is possibly the best explanation of the difference between statically and dynamically typed languages. This should go into every book/tutorial about static/dynamic typing.

  4. While it's true that static typing enforces constraints, it's not a given that those constraints are all that useful. I maintain an Eclipse plugin, and one of the things that drives me crazy is that Eclipse code just pretends to be statically typed. In reality, I sometimes think they would have been better off writing Eclipse in Ruby. My favorite is IPath; it can be one of about seven different things depending on what created it. But nooooo, it's apparently _really expensive_ to create actual types that have meaning.

    OK, I don't really mean the comment about Eclipse in Ruby.

  5. This is a great explanation. Though I must agree with the other commenters that to the user it doesn't matter where or how the bug manifests; if the software does not work properly, that is the root problem. In my own code I have seen logic bugs slip through in code that was perfectly fine within the constructs of Scala.

  6. I disagree.

    First, everyone (static or dynamic) cares about bugs the user finds. There's no difference there.

    I primarily program in dynamic languages (Clojure + JS), but have professional experience with static typing (Haskell).

    I think the difference between the two approaches is simply that static typers want to write their types down and dynamic typers keep them in their head. Intrinsic vs extrinsic types.

    What this often amounts to is that I have a particular domain for a function in mind as I write it and use it. There's no way to indicate the domain in the language, so it's just in my head. For me, that function IS well-defined over the domain as I understand the domain. And sometimes that domain is not even checkable by any extant type system. But, here's the thing, it /is well defined over the domain/. So /it's not a type error until someone tries to apply that function to a value outside the domain/.

    Here's a simplistic example.

    head :: [a] -> a
    head []    = error "head: empty list"
    head (x:_) = x

    It's a partial function. The Haskell type system cannot enforce the domain. Is it a bug to define the function this way? It *could* be used incorrectly. Is the very potential to raise an error a bug? Some would say yes, and according to your definition, you are in that set.

    Another way to look at it is that if there are no type errors in the code paths that could possibly be executed, then it's ok to a dynamic typer.

    In the "domain is in your head" mindset, this code has no bug:

    let a = [1,2,3] in head a

    and this code has a bug:

    let a = [] in head a

    because there is a code path that violates the domain.

    That's kind of obvious. But here's something that is not quite so obvious. To a dynamic typer, this is not and could not be a bug:

    if length a > 0 then head a else 1

    But Haskell's type checker cannot guarantee that the empty-list code path won't get executed. The type checker in your head, on the other hand, can infer that.

    Eric

  7. Sorry about the last comment. I lost the thread of the post. The reason I disagree is that I think dynamic typers do care about what programmers can do. I tried to think about the differences and got sidetracked with intrinsic vs extrinsic typing, which is part of the answer, but not all of it.

    If your statement were correct, there would be no explanation for all of the talk of encapsulation and private methods.

  8. I primarily develop in C#. I also write a lot of code in Python and JavaScript. I find I do not change my definition of a bug when I switch from static to dynamic languages. Instead, I change my definition of a bug based on who my code is intended for. I tend to think of bugs as "things that the user of the software can cause it to do" when I write code that will be at most lightly consumed by other developers. And I think of bugs as "things that users of the code can cause it to do" plus "things that the user of the software can cause it to do" when I develop heavily consumed code.

    This paradigm shift materializes in many ways in my code. One of the most obvious examples is that I am more careful about choosing intuitive, clear, and future-proof parameters for heavily consumed functions, regardless of the language. Another is that I write more unit tests for heavily consumed functions, to minimize bugs uncovered by unexpected uses.

    I think more developers fall in line with my philosophy of changing the definition of a bug based on code consumption than with changing it based on the language used, as your hypothesis suggests. Good library developers in particular prove this point. Both statically typed and dynamically typed languages have really great libraries, and one goal of any great library is to provide functionality that developers consume as it is intended to be consumed. Good library developers accomplish this by considering how other developers will interact with the library, and these considerations show up in thorough documentation, clear parameters, and clear entry points.

    In other words, good library developers must consider bugs to include "things that users of the code can cause it to do." Some of the best libraries I've used have been written in dynamic languages, so it's clear to me that proponents of dynamic languages do consider bugs to be "things that users of the code can cause it to do" when appropriate. Otherwise all libraries written in dynamic languages would be pretty poor.

    Although I disagree with your hypothesis, there is merit to something you implicitly suggest: static language proponents think about code consumption differently than dynamic language proponents. Static language proponents tend to emphasize the importance of guardrails that keep developers on target with consuming code correctly. Think of the variance provided by static typing as a guardrail library developers use to keep consumers on the right path toward consuming their code correctly. Dynamic language proponents tend to think these guardrails are more trouble than they're worth; they generally argue type-checking guardrails aren't valuable enough to merit the complexity they add. Some may interpret this as: dynamic language proponents tend to be less paranoid about incorrect code consumption. But that's another debate :)
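
    A small Scala sketch of the guardrail image (the Source/Sink traits are invented): the library author declares the variance once, and every consumer is steered toward correct use without doing anything.

    class Animal
    class Cat extends Animal
    trait Source[+A] { def next(): A }           // producer: covariant
    trait Sink[-A]   { def accept(a: A): Unit }  // consumer: contravariant
    // A Source[Cat] is accepted wherever a Source[Animal] is expected, and a
    // Sink[Animal] wherever a Sink[Cat] is; the reverse directions are
    // compile errors, so misuse is stopped before any test runs.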
