Falsehoods programmers believe about programming

  • My job has no ethical impact; I build technological systems and it’s up to others how they use them.
  • Software is a purely technical discipline.
  • There is some innate affinity for computer programming which you must be born with, and cannot be taught.
  • Allowing people who are unlike me to program can only be achieved by “lowering the bar”.
  • Compiled languages are always faster.
  • Floating point calculations will introduce non-deterministic errors into numerical results.
  • OK, they introduce some errors into numerical results.
  • Alright, I understand that floating point calculations are imprecise, not inaccurate, Mister Pedantic Blog Author, but I cannot know what that imprecision is.
  • At least the outcome of integer maths is always defined.
  • Fine, it’s not defined. But whatever it was, the result of doing arithmetic on two numbers that each fit in a data register itself fits in a data register.
  • Every computer on sale today (2017) uses two’s complement notation for negative numbers.
  • Every computer on sale today uses a register width that’s a multiple of eight bits.
  • The bug isn’t in my code.
  • The bug isn’t in the library.
  • The bug isn’t in the operating system.
  • The bug isn’t in the compiler.
  • The bug isn’t in the kernel.
  • The bug isn’t in the hardware.
  • Bug-free computer hardware is completely deterministic.
  • The lines on the hardware bus/bridge are always either at the voltage that represents 0 or the voltage that represents 1.
  • If my tests cover 100% of my lines of code then I have complete coverage.
  • If my tests cover 100% of my statements then I have complete coverage.
  • If my tests cover 100% of my conditions then I have complete coverage.
  • If I have complete test coverage then I have no bugs.
  • If I have complete test coverage then I do not need a type system.
  • If I have a type system then I do not need complete test coverage.
  • No hacker will target my system.
  • Information security is about protecting systems from hackers.
  • If the problem is SQL Injection, then the solution is to replace SQL; NoSQL Injection is impossible.
  • My project is a special snowflake; I can reject that technique I read about without considering it.
  • My project is much like that unicorn startup’s or public company’s project; I can adopt that technique I read about without considering it.
  • People who do not use my language, technique, framework, tool, methodology, paradigm or other practice do not get it.
  • Any metaprogramming expansion will resolve in reasonable time.
  • Any type annotation will resolve in reasonable time.
  • OK, well at least any regular expression will resolve in reasonable time.
  • Can you at least, please, allow that regular expressions are regular?
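To make the floating point items concrete: the imprecision is deterministic and bounded, not mysterious. A minimal sketch in Python, whose floats are IEEE 754 doubles:

```python
import math

# 0.1, 0.2 and 0.3 have no exact binary representation, so the sum
# carries a small rounding error -- but the same error every time.
print(0.1 + 0.2 == 0.3)          # False
print(0.1 + 0.2 == 0.1 + 0.2)    # True: the "error" is perfectly repeatable

# The imprecision is also knowable: here the discrepancy is within one
# unit in the last place (ULP) of 0.3.
error = abs((0.1 + 0.2) - 0.3)
print(error <= math.ulp(0.3))    # True
```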
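And the register item: the sum of two in-range integers need not itself be in range. A sketch of 32-bit two’s-complement addition, the wrap-around that most current hardware performs (in C, by contrast, signed overflow is undefined rather than guaranteed to wrap); `add_int32` is a name invented for this illustration:

```python
INT32_MAX = 2**31 - 1

def add_int32(a, b):
    """Add two integers the way a 32-bit two's-complement register would:
    keep the low 32 bits and reinterpret the top bit as the sign."""
    r = (a + b) & 0xFFFFFFFF
    return r - 2**32 if r >= 2**31 else r

print(add_int32(INT32_MAX, 1))   # -2147483648: both inputs fit, the true sum does not
```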
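The coverage items can also be demonstrated in a few lines. In this hypothetical function (the name and inputs are invented for illustration), two tests execute 100% of the lines, statements and both branches of each `if`, yet a third combination of inputs still crashes: line coverage is not path coverage.

```python
def scaled(flagged, divide):
    """Hypothetical example: full line coverage, still buggy."""
    x = 0
    if flagged:
        x = 2
    if divide:
        return 10 // x   # crashes when flagged is False and divide is True
    return x

# These two calls together execute every line above...
print(scaled(True, True))    # 5
print(scaled(False, False))  # 0
# ...but scaled(False, True) raises ZeroDivisionError.
```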
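As for regular expressions resolving in reasonable time: real-world regex engines such as Python’s `re` use backtracking, so a legal but pathological pattern can take time exponential in the input length. A sketch, with an invented helper `time_match`:

```python
import re
import time

pattern = re.compile(r"(a+)+$")   # nested quantifiers: a classic backtracking trap

def time_match(n):
    subject = "a" * n + "b"       # almost matches, forcing exhaustive backtracking
    start = time.perf_counter()
    assert pattern.match(subject) is None
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the running time; well before n = 40
# the "resolution" of this regular expression stops being reasonable.
print(time_match(10) < time_match(22))  # True: the longer case is dramatically slower
```

(The nested quantifiers are also a reminder that practical “regular expressions”, with features like backreferences, are not regular languages at all.)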

I’m sure there must be more.

Update: The following were added later; where they were supplied by others there is a citing link. There are also good examples in the comments.

  • You need a computer science degree to be a good programmer.
  • A computer science course contains nothing useful to programmers.
  • Functional Programming is a silver bullet.
  • Rust is a silver bullet.
  • There is a silver bullet.
  • There can be no silver bullet.
  • Rewriting a working software system is a good idea.
  • I can write a large system in a memory unsafe language without introducing vulnerabilities.
  • I can write a large system in a memory safe language without introducing vulnerabilities.
  • Software is an engineering discipline.
  • Software is a scientific discipline.
  • Discourse on a topic is furthered by commenting on how I already knew a fact that was stated.
  • A falsehood about programming has no value unless the author of the falsehood provides supporting evidence to my satisfaction.

About Graham

I make it faster and easier for you to create high-quality code.
This entry was posted in code-level.

14 Responses to Falsehoods programmers believe about programming

  1. Peter Hickman says:

    The code I wrote 10 years ago is as good as the code I write today (God forbid)

  2. Derek Jones says:

    You forgot to say that binary is not the most efficient representation and that computers have always used binary notation:

    http://shape-of-code.coding-guidelines.com/2012/07/09/ternary-radix-will-have-to-wait-for-photonic-computers/

    Also signed-magnitude may make a comeback:
    http://shape-of-code.coding-guidelines.com/2017/07/28/signed-magnitude-the-integer-representation-of-choice-for-iot/

  3. mathew says:

    My code does not need documentation, because you can guess what the supported API is from the unit tests.

  4. Daniel Rendall says:

    “The only documentation users of my library will need is a short description of the parameter types and return types of its API functions”

  5. Paulo says:

Documentation is always up to date and describes what the code does.

  6. Mike says:

    “Good Code is possible”

  7. Tom Harrison says:

Developers cannot successfully test their own code.

  8. Martin says:

    I agree with the vast majority, but “Rewriting a working software system is a good idea.”??? There are many circumstances that justify re-writing a software system. I think a more accurate one is:

    There is such a thing as a completely working software system.

  9. Graham says:

    Related: developers can successfully test their own code.

  10. Graham says:

    Related: developers will successfully test their own code.

  11. MP says:

    This is a bit silly: There is some innate affinity for computer programming which you must be born with, and cannot be taught.

    Everyone has inborn skills, and some will have more than others in the logical domain.

  12. Ed says:

My first computer, in the mid 1960s, was an IBM 1620. It was a binary-coded-decimal machine. Yes, the word binary is in the name, but really that had to do with how the data was stored. The arithmetic and machine addressing were all decimal. All values were of variable length. Hence each decimal digit was stored in six bits: four for the BCD representation of 0 to 9, one as a flag bit to indicate whether the number was positive or negative (no complementary representation), and one to mark the last decimal digit in the number. Actually, it was a bit more complicated than that, but you get the idea. See: https://en.wikipedia.org/wiki/IBM_1620.

  13. Pingback: Sci/Math/Prog Summary: December 2017 – Abacus Noir Form

  14. Pingback: Falsehoods programmers who write “falsehoods programmers believe” articles believe about programmers who read “falsehoods programmers believe” articles – Structure and Interpretation of Computer Programmers
