• 14 Posts
  • 79 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Well, it’s in the name: they are code smells, not hard rules.

    Regarding the specific example you cited, I think that with practice it gradually becomes more natural to write reusable functions and methods on the first iteration, removing the need for later DRY-related refactoring.
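
    A minimal sketch of what I mean (the names here are made up for illustration): the shared logic starts life as a small helper instead of being copy-pasted and refactored later.

    ```python
    # Hypothetical example: the helper is written on the first iteration,
    # so no DRY-motivated refactoring is needed when a new caller shows up.

    def normalize_username(raw: str) -> str:
        """Trim whitespace and lowercase a username consistently."""
        return raw.strip().lower()

    # Both call sites reuse the helper from day one.
    signup_name = normalize_username("  Alice ")
    login_name = normalize_username("ALICE")
    assert signup_name == login_name
    ```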

    PS: I love how your quote for the Rule of Three is getting syntax-highlighted xD (you can use markdown quotes by starting quoted lines with `>`)

  • That’s not what I said. I said that comments can often (but not always) be replaced with good and explicit names.

    This can be pushed to an extreme by making functions that only get called at a single place in the code, just for the sake of being able to give a name to the code inside (instead of inlining it and adding a comment that conveys the same information as the function’s signature).

    It’s definitely not for everyone, but for beginners/juniors it gives something objective they can aim for when trying to build good coding habits.
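
    To make that concrete, here’s a tiny made-up sketch (not from any real codebase): the function is called exactly once, and exists purely so its signature can replace a comment.

    ```python
    # Made-up example: a single-call function whose name replaces a comment.

    def is_eligible_for_discount(total: float, returning: bool) -> bool:
        """Called from one place only; exists to name the business rule."""
        return total > 100.0 and returning

    def checkout(total: float, returning: bool) -> float:
        # Instead of inlining the condition with a comment like
        # "apply 10% discount for returning customers over 100",
        # the call below conveys the same information by itself.
        if is_eligible_for_discount(total, returning):
            return total * 0.9
        return total
    ```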

  • They are both serialization formats that are supposed to be able to represent the same things. Converting between these two formats is used in the article as a way to highlight YAML’s parsing quirks (since JSON has only one way to represent the boolean value false, it makes it clear that the no value in YAML is interpreted as the boolean false and not as the "no" string)

    Anyway, I disagree with your point about YAML and JSON not being interchangeable.
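
    The quirk is easy to reproduce, assuming a YAML 1.1 parser like PyYAML (this is my own sketch, not code from the article):

    ```python
    # Requires PyYAML (pip install pyyaml), which follows YAML 1.1:
    # bare yes/no/on/off are resolved as booleans.
    import json
    import yaml

    doc = yaml.safe_load("enabled: no")
    print(doc)              # {'enabled': False}  <- 'no' became a boolean
    print(json.dumps(doc))  # {"enabled": false}  <- JSON leaves no ambiguity
    ```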

  • The problem is specifically that it’s not exactly clear what’s considered ambiguous. For instance, no is the same thing as false, but as evidenced in the linked post, in the context of country codes it means “Norway”, and it’s not obvious that it might get interpreted as a boolean value.
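
    Reproducing the country-code case (again assuming a YAML 1.1 parser such as PyYAML; my own illustration, not from the linked post):

    ```python
    # The "Norway problem": an unquoted NO is silently parsed as a boolean.
    import yaml

    print(yaml.safe_load("countries: [GB, FR, NO]"))
    # {'countries': ['GB', 'FR', False]}  <- Norway is gone

    print(yaml.safe_load("countries: [GB, FR, 'NO']"))
    # {'countries': ['GB', 'FR', 'NO']}  <- quoting fixes it
    ```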

    It’s the same thing as this famous meme about implicit type conversions in JS:

  • You’ve probably read about language model AIs basically being uncontrollable black boxes even to the very people who invented them.

    When OpenAI wants to restrict ChatGPT from saying some stuff, they can fine-tune the model to reduce the likelihood that it will output forbidden words or sentences, but this does not offer any guarantee that the model will actually stop saying forbidden things.

    The only way of actually preventing such an agent from saying something is to check the output after it is generated, and not send it to the user if it triggers a content filter.
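
    As a rough sketch of that kind of post-generation gate (everything here is hypothetical, not OpenAI’s actual pipeline):

    ```python
    # Hypothetical output filter: the text is checked AFTER generation,
    # and withheld from the user if it trips the filter.
    FORBIDDEN = {"forbidden_word", "another_forbidden_word"}  # placeholder list

    def generate(prompt: str) -> str:
        """Stand-in for a real LLM call."""
        return "model output for: " + prompt

    def respond(prompt: str) -> str:
        output = generate(prompt)
        if any(word in output.lower() for word in FORBIDDEN):
            return "[response withheld by content filter]"
        return output
    ```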

    My point is that AI researchers found a way to simulate some kind of artificial brain, from which some “intelligence” emerges in a way that these same researchers are far from deeply understanding.

    If we live in a simulation, my guess is that life was not manually designed by the simulation’s creators, but rather that it emerged from the simulation’s rules (what we Sims call physics), just like people studying the origins of life mostly hypothesize. If this is the case, the creators are probably as clueless about the inner details of our consciousness as we are about the inner details of LLMs.