Not everything should be beginner friendly. Nerfing things just because they aren’t beginner friendly shouldn’t be how a language’s tools and patterns get designed.
It’s OK to have more advanced topics that require more knowledge and that people don’t understand the first moment they see them.
Yes and no. I mean sure, if you are going to leverage this to gain a significant edge in the market, that works.
If you add a tool to the project that people need to understand to maintain parts of it, it adds to the learning curve of anyone joining the team, so the gains had better be worth the effort.
We adopt so many libraries/plugins/tools over time that adding more complexity than you need this way is just terrible.
Amen! One thing which drives me crazy is that most people confuse beginner friendly and user friendly; the two are absolutely not the same thing. There is nothing wrong with having tools which are beginner friendly, especially for stuff one does once in a while. There is everything wrong with nerfing tools which are for pros or even everyday usage: if I use something every day, I’d rather have an optimization for the mid or long run than for the first few hours…
KDE is ridiculously unintuitive and has become too awkward to use to recommend to a newbie.
Arch is not a general purpose distro. There’s too much that can break it unless you meticulously follow the patch notes and participate in the community.
If white space carries any function that the compiler/interpreter needs to know about like structure or scope, it’s probably not a very good programming language.
Genuine question: why? What makes, say, a semicolon so superior to the newline or tab characters?
To be clear: I don’t think whitespace as a part of syntax is an awesome idea which should be more popular. It’s definitely a bit more error prone in some ways. It’s not perfect. But it’s okay.
I’ve written a lot of Python and I don’t think I have ever seen a syntax error caused by incorrect whitespace. I’m not exaggerating. I regularly forget semicolons in other languages but I never type out incorrectly indented code. Maybe that’s just me though…
From someone who used Python because it was the easiest solution to a few of my problems, and who has experienced languages with brackets and/or endif/fi/done as ways to limit scope, I find brackets and/or scope terminators easier to parse and less error-prone. I’m thinking about moving on to Ruby the next time I have a need where Python would be a good choice, but the time it takes me to understand a new language is blocking me from that.
I honestly think the scripting languages like fish have got it right.
Newline by default completes a line and can optionally be escaped. Saves you most of the semicolons and even implicitly highlights multi-line statements.
Whitespace doesn’t matter except for separating names.
Blocks are explicitly ended (with end) rather than with braces you can confuse with brackets or parentheses, no matter the coding style.
If Rust and fish had a baby, I think it would be the best language to have ever been created.
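For what it’s worth, Python shares the newline rule from the first point, so here’s a tiny illustration in it (Python rather than fish, since it’s the language most of this thread is about):

```python
# A newline ends a statement by default; a trailing backslash escapes it,
# which makes a multi-line statement visually obvious.
total = 1 + 2 + 3
carried = 1 + 2 + \
          3
print(total, carried)  # 6 6
```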
A vast majority of the code in question is the code I’ve written for my work projects with multiple active contributors and refactoring is very common too. We all like to shit on Python for various reasons but no one in my environment ever complained about whitespace.
Like I said, I don’t think whitespace is perfect as a part of syntax, but I’m much more likely to forget a semicolon than a proper indentation, and this applies to any language. I guess it’s not universal though, because you can often see code with messed up indentation on online forums etc. TBH this is just unthinkable to me; indentation is absolutely necessary for me to be able to read code and reason about it. When I’m thinking about blocks and scopes, it’s not because I counted semicolons and braces, it’s 100% indentation.
Load-bearing whitespace is the fucking devil. This thread about hot takes is topped by a comment highlighting how people can’t even agree what kind of whitespace to use.
Python - you want code to fail if someone from one camp copy-pastes code from another camp, and the characters that make a difference are invisible?
An iterator is commonly understood to be an object and thus something much more complex than a simple integer. This is the exact opposite of more clear.
I have a convention to correlate the size of variable scope with its name length.
If a variable is used all over the program, it will be named “response”. If its scope is under 15 lines, it can be “res”. If it is under 3 lines, it can be just “r”.
This makes reading code a bit simpler, because it makes unimportant, local vars short and unnoticeable.
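A toy Python illustration of the three tiers (the data is invented for the example):

```python
import json

response = {"status": 200, "items": ["a", "b"]}   # used all over the program

def summarize(response):
    res = json.dumps(response)   # lives for a handful of lines: "res"
    size = len(res)
    return f"{size} bytes"

total = sum(len(r) for r in response["items"])    # "r" lives for one line
print(summarize(response), total)
```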
Index can be useful, but start by looking for mapping and sorting functions. Or forEach. If you really must index, sure, go use index or i where it’s conventionally understood. But reading something like `for i in e where p == r.status` is really taxing to make sense of.
Oh yeah, I map, filter and reduce pretty much everywhere I can. But sometimes you need the index, and i is so commonly understood to be that, I’d say it could even be less legible to deviate from that convention.
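To make that concrete, a small Python sketch (the data is made up): filtering states the intent with no index bookkeeping, and enumerate keeps the conventional i when the position genuinely matters.

```python
orders = [{"status": "paid"}, {"status": "open"}, {"status": "paid"}]

# Filtering says *what* is wanted, with no index bookkeeping at all.
paid = [o for o in orders if o["status"] == "paid"]

# When the position matters, enumerate keeps the conventional i.
for i, order in enumerate(orders):
    if order["status"] == "paid":
        print(f"order {i} is paid")
```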
It’s not incoherent, it just takes a tiny bit more effort to mentally parse as it’s not a stereotypical for loop. Maybe it’s just me, but let me try and explain
With the `i` example, if you’re familiar enough with a language, your brain will gloss over the unimportant syntax and go straight to the comparison, and then to whether it’s incrementing or decrementing.
With the other example, the first thing my brain did was notice it’s not following convention, which then pushed me to read the line carefully, as there is probably a reason it doesn’t.
I’m not saying it’s a huge difference or anything, but following code conventions like this makes things like code reviews much easier cumulatively.
Honest question: is there a mapping function that handles the case where you need to loop through an iterable, and conditionally reference an item one or two steps ahead in the iterable?
Something like parsing a string that could have command codes in it of varying length. So I guess the difference is, is this a 1-, 2-, or 3-character code?
I have something like this in a barcode generator and I keep trying to find a way to make it more elegant, but I keep coming back to index and offset as the simplest and most understandable approach.
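For reference, the index-and-offset shape being described might look something like this sketch; the ‘~’ marker and the digit giving the code width are invented for illustration:

```python
def split_codes(s: str) -> list[str]:
    """Split a string into plain characters and variable-width command codes."""
    out, i = [], 0
    while i < len(s):
        if s[i] == "~" and i + 1 < len(s) and s[i + 1].isdigit():
            width = int(s[i + 1])        # look ahead to get the code width
            out.append(s[i:i + width])   # grab the whole command code
            i += width                   # jump the offset past it
        else:
            out.append(s[i])
            i += 1
    return out

print(split_codes("a~3Xb"))  # ['a', '~3X', 'b']
```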
Something like the sketch below would map arr and return halved values for elements for which the element two steps ahead is even. This should be available in languages where map is present. And sorry for possible typos, writing this on mobile.
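A Python rendering of that idea (a reconstruction, with arbitrary arr values): zipping the list with a shifted copy of itself gives each element its two-steps-ahead neighbour without any manual indexing.

```python
from itertools import zip_longest

arr = [8, 3, 6, 4, 10, 7]

# Pair each element with the one two steps ahead (None past the end),
# then halve it whenever that lookahead value is even.
halved = [
    x / 2 if ahead is not None and ahead % 2 == 0 else x
    for x, ahead in zip_longest(arr, arr[2:])
]
print(halved)  # [4.0, 1.5, 3.0, 4, 10, 7]
```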
I’d make an exception for indices in general. Long names just bloat every line where you need to use them. Imagine writing CUDA C++, where you regularly add and multiply stuff and every number is referenced via (usually) 1-3 indices. Horrible.
Yeah, but it’s easy to overuse it. If your for loop is much longer, use something more descriptive. For a few lines I’d agree: don’t bother using something longer.
Code should scream out its intent for the reader to see. It’s the why of what you are doing that needs to be communicated, not the what. “i”, “counter” or “index” all scream out what you are doing, not why. This is more important than the name being short (but for equal explanations of intent, go with the shorter name). The for loop does that already.
If you can’t do that, be more precise. At the least make it “cardIndex”, or “searchIndex”. It makes it easier to connect the dots.
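A trivial Python contrast (deck and target are invented for the example):

```python
deck = ["2H", "QS", "7D", "QC"]
target = "QS"

# "i" would only say *what* this is (an index); "card_index" carries
# the *why*, which pays off once the loop body grows.
for card_index in range(len(deck)):
    if deck[card_index] == target:
        print(f"found {target} at position {card_index}")
```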
That the HTML/CSS structure of web programming is absolutely disgusting and not necessary. The internet could be and should be so much more from a developer pov. Also people who double space instead of tab often have their mouths open while mashing space 16 times.
I completely understand and clearly see that web development is the future, but I still think it’s all gross and will always prefer targeted efficient compiled code. Why? Because I’m a huge fucking dork.
I write the parentheses before I start writing inside the block. When something goes wrong, the scope of what I’ve done wrong is narrowed to within that specific block.
Compiler-checked typing is strictly superior to dynamic typing. Any criticism of it is either ignorance, only applicable to older languages, or a temporarily missing feature of current languages.
Using dynamic languages is understandable for a lot of language-external reasons; it’s just that I really feel there’s no good argument for dynamic typing itself.
Yeah, the error list is my friend. Typos, assigning something to the wrong thing, or whatever, are fixed without having to run the code to test it. Just check the error list and fix any dumb mistakes I made before even running the thing. And I can be confident in refactoring, because renaming something is either going to work or give a compiler error, not some run-time error which might happen in production weeks later.
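A toy Python illustration of that workflow, using annotations plus a checker such as mypy (the error text below is paraphrased):

```python
def get_status(response: dict[str, int]) -> int:
    return response["status"]

print(get_status({"status": 200}))  # fine

# A checker like mypy flags the next line before anything runs, roughly:
# 'incompatible type "str"; expected "dict[str, int]"'. Without it, the
# mistake only surfaces as a TypeError at runtime.
get_status("not a dict")
```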
I do believe that static typing is at least a local optimum, but I am still not entirely convinced. Rich Hickey is a very convincing presenter and I can’t help but think that he is on to something — with Clojure the chosen direction is contract-typing, which is basically a set of pre- and post-conditions for your functions that are evaluated at runtime. Sure, it has a cost and in the extremes they are pretty much the same as dependent types, but I think it is an interesting direction — why should my function be overly strict in accepting a “record” of only these fields?
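A rough Python imitation of that runtime-contract idea (Clojure itself does this with spec-style pre- and post-conditions; the decorator and names here are invented for illustration):

```python
def contract(pre, post):
    """Check a precondition on the arguments and a postcondition on the result."""
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition failed"
            result = fn(*args)
            assert post(result), "postcondition failed"
            return result
        return inner
    return wrap

@contract(pre=lambda n: n >= 0, post=lambda r: r >= 1)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))  # 120; factorial(-1) would fail the precondition
```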
TDD is overrated. Code coverage is extremely overrated. Both of these tend to lead to a morass of tests that prove the compiler works at its most basic level while simultaneously generating a surplus of smugness about the whole situation.
Tests have their place. Tests can be, and often are, valuable. But the easier the test is to write, the easier it would’ve been to just encode it into the type system to begin with.
Yeah, creating tests for every single method is insane. If a feature changes, it’s more difficult: you either have to figure out how to implement the change without changing the method, or you change the method and have to update the unit test. But if you’re constantly updating the unit tests, how do you know you haven’t broken something else the test was intended to catch?
It’s way better just to do integration tests that match the feature request. That way the feature that someone asked for will continue to work even if you decide to refactor the code.
Unit tests are only worthwhile if you refactor code or write the unit tests before writing the code. We started adding unit tests for most everything where I work and I think it’s far more effort than it’s worth. It’s not that it catches nothing, but it catches so little I don’t think it’s worth the time spent writing them.
I don’t think you understood my point. That’s exactly why I think unit tests aren’t all that useful. Most code changes require updating the unit tests so unless you change the unit tests first all that’s being done is saying, yep this works how I programmed it to work.
TDD as in religion is overrated. TDD done right is IMHO extremely effective.
The problem is, writing good tests is really hard, and I have seen/committed/experienced a lot of bad tests… just off the top of my mind, problems with TDD done wrong:
- testing the implementation instead of the interface (caricatured in the sketch after this list)
- creating a change detector
- not writing / factoring the tests in a good way
- writing tests / doing TDD without an overall design for the software
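A Python caricature of those first two pitfalls; dedupe and both tests are invented for the example:

```python
from unittest.mock import patch

def dedupe(items):
    return sorted(set(items))

def test_change_detector():
    # Pins *how* dedupe works; any refactor breaks this test.
    with patch("builtins.sorted") as mock_sorted:
        dedupe([2, 1])
        mock_sorted.assert_called_once()

def test_interface():
    # Checks *what* dedupe returns; survives any correct refactor.
    assert dedupe([3, 1, 3, 2]) == [1, 2, 3]

test_change_detector()
test_interface()
```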
For every non-trivial piece of software written without TDD, I always saw the same pattern: for the first few hours/days/weeks, rapid progress compared to TDD; afterwards, hours/days/weeks wasted in debugging, bug fixing etc., and people can’t even catch up with tests if they want to.
Is TDD always the answer? Of course not, it is a tradeoff like everything else in technology. OTOH I have yet to see a project which benefited from not using TDD by any metric after a few days in.
What is so difficult to learn about pointers? I am not a programmer; I just used to dabble in C++ and a bit of C# and Java for school, and now Python for uni. I found pointers in C++ much more straightforward than memorizing when a function is doing call by value or call by reference. I still hate Java for doing it half-and-half and not letting you do it differently.
Learning pointers feels like one of those things where, if you’re physically capable of learning it, it just takes having it explained in a certain way, or seeing a certain implementation, and then it just clicks.
They’re easy when they work. If you screw up, you have to fold your brain in half to figure out what you’re trying to do, what you’ve actually done, and what to do instead.
Nevermind the compiler throwing errors because the syntax is awful and it won’t tell you what it expects.