Readability is often in the eye of the beholder, but knowing that a component implements a familiar design pattern can be all you need to understand how it’s used, without even having to peek at the code.
I think the most vocal critics of design patterns are those who are clueless about design patterns, and they are just scared of stuff they don’t know.
Should you use a class? Should you use a Factory pattern or some other pattern? Should you reorganize your code? Whichever results in the least code is probably best.
A nice thing about code length is it’s objective. We can argue all day about which design pattern makes more sense, but we can agree on which of two implementations is shorter.
It takes a damn good abstraction to beat having shorter code.
I mostly agree with this, but I value readability even more than shorter code. I would rather take three lines to be clear to any developer than write it in one line using some obscure or easy-to-misunderstand construct.
I think it really depends. Functions break up the visual flow, so if you need to look at multiple functions to visualize one conceptual process, it can be less efficient.
Yes. I learned this from Haskell. I like Haskell, but it has a lot of very granular functions.
An earlier comment said that breaking up one function into three improves readability? Well, if you really want readability, then break it up into 30 functions using Haskell. Your single function with 25 lines will become 30 functions, so readable (/s).
In truth, there’s a balance between the two. Breaking things up into functions does have advantages, but, as you say, it makes it more likely that you’ll have to jump around a lot to understand a single process.
I specifically have a rule: if, at the current abstraction layer, a step is more than one function call or assignment, I create another function for it.
I used to think like this. Nowadays I prefer readability, and even debuggability.
Sure, I could inline that expression, but if I assign it to a constant with a descriptive name instead, the next person reading that piece of code will not hate me.
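A minimal Python sketch of the point; the rental rules here are made up for illustration:

```python
def can_rent_car_inline(age, license_years, accidents):
    # Inline: correct, but the reader must reverse-engineer the intent.
    return age >= 25 and license_years >= 2 and accidents == 0

def can_rent_car_named(age, license_years, accidents):
    # Named intermediates: three extra lines, but each condition documents itself.
    is_old_enough = age >= 25
    is_experienced = license_years >= 2
    has_clean_record = accidents == 0
    return is_old_enough and is_experienced and has_clean_record
```

Both versions behave identically; the second one costs a few lines but reads like the policy it encodes.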
But on a more serious note, I don’t really agree. Writing more code needs to be a conscious choice, but going for the shortest code too often creates a mess. I know, since I was that junior dev who just wanted to get stuff done and would ignore the project architecture in order to implement less, like accessing the database from GUI code.
Shorter code with the same amount of coupling between components and with the same readability is always better though.
but we can agree on which of two implementations is shorter.
Shortness for the sake of being short sounds like optimizing for the wrong metric. Code needs to be easy to read, but it’s even more important that the code is easy to change and easy to test. Inlined code and hard-wired function calls are notorious for rendering code untestable, while introducing abstract classes and handles is a well-known technique for stubbing out dependencies.
I think this is accurate on a larger scale, but I’ll often do things like breaking up a large chain of methods with an interim variable just for readability. A few lines of simple math are better than one line of bit-shifting wizardry that does the same thing but doesn’t show the semantic meaning of the operation.
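A sketch of the bit-shifting point in Python; the pixel layout (0xRRGGBB) is just an illustrative example:

```python
def brightness_oneliner(pixel):
    # One line of bit wizardry: what do 16, 8, and 0xFF mean here?
    return ((pixel >> 16 & 0xFF) + (pixel >> 8 & 0xFF) + (pixel & 0xFF)) // 3

def brightness_readable(pixel):
    # Interim variables give each shift its semantic meaning.
    red = (pixel >> 16) & 0xFF
    green = (pixel >> 8) & 0xFF
    blue = pixel & 0xFF
    return (red + green + blue) // 3
```

Same computation, but the second version tells the reader it is averaging color channels rather than doing arbitrary bit math.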
Shorter code is not always better, especially when it comes to types. Building in lots of guard rails by being verbose with the type system is a good thing. “Shorter = better” is the Python approach that starts off fun and easy, but the codebase scales extremely poorly.
I’ve seen it get a lot of hate recently. In my experience, it’s mostly been from people upset that they had to refactor their 400-line function or write unit tests.
Meanwhile, Windows has become drastically better for development over the past few years. There are still some drawbacks, but a ton of the anti-Windows circlejerking in tech spaces comes from people who haven’t touched Windows as a dev environment in 10+ years.
I agree that make is confusing at first, but I don’t think it should fall out of use. It’s a great tool that I use every day, and it is far simpler than its competitors once you get used to it. It’s basically glorified bash scripting.
I agree, yet I also see no good universal alternative. Every language has a nice tool for doing things in its own ecosystem, but the moment you need to coordinate two languages or go beyond the simple stuff, make is the only good option.
Yep. And honestly, most language-specific versions of make still have glaring missing features. Which doesn’t matter, until it really does.
I want to embrace a make replacement, but if the pattern holds, they will be prying make out of my cold dead hands to make me presentable for my funeral.
I have this opinion of PHP. I don’t use it, and I look for alternatives when I find something that does what I want that’s written in it.
Your callout is fair though. I’m not going to switch or anything. I’m happy with my favored scripting language. But I’ll try to not be dismissive of projects that are written in PHP.
If you want to check some modern PHP code, you can check some pieces of mine (this project in particular could have a little cleaner code, but I think it demonstrates well what modern PHP can look like):
Your examples are ironically a great showcase of what I hate about PHP. Java-style object-obsessed programming with long names, piles of design patterns and dozens of imports, along with C-style syntax and dollars before variable names.
Java-style object-obsessed programming with long names
I mean, unlike Java, you can do procedural or functional (the support is getting better, but I wouldn’t call PHP the best language for functional). I personally love object-oriented design because it’s the best (IMO) for large projects.
C-style syntax
That’s a feature in my book, I almost exclusively write code in C-style languages and it’s really easy switching between them. The only exception for me being Go.
dollars before variable names
That’s just such a minor detail to hate a language over. I like them (probably because PHP was my first language, so I got used to them), but I don’t much care when I’m switching between languages.
jQuery was an essential stepping stone back when JS was lacking a ton of features that people take for granted these days.
Sure, everything could have been done with vanilla JS, but it was verbose and difficult to follow. jQuery made it possible for any developer to quickly make a page dynamic.
If your function is longer than 10 statements, parts can almost always be extracted into smaller functions. If named correctly, this improves readability significantly.
HELL NO! If you split that function into three, but those three always have to be called in succession, you win nothing and make your code WAY harder to read/follow.
What do you gain from that approach, compared to comments and appropriate whitespace? If you spread your function over three, you now potentially have triple the moving parts. You have to manage inputs and outputs, and you have to hope no one coming after you sees your subfunction and assumes it’s there to be used.
Testability, for one, but I would also argue that those functions are there to be used. If some block of logic is sufficient to stand on its own, it should. I’m not saying do it arbitrarily, but it’s been my experience that small functions lead to more readable code and better testing. Most people write a 15-line function treating it as if it does a single thing when in reality it’s doing two or three discrete operations.
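A small Python sketch of the extraction argument; the revenue-report scenario is hypothetical:

```python
# Originally one 15-line block doing three discrete operations
# (parse, filter, summarize); each extracted piece is testable on its own.

def parse_amounts(lines):
    # Parse non-empty lines into floats.
    return [float(line.strip()) for line in lines if line.strip()]

def drop_refunds(amounts):
    # Keep only positive amounts; refunds are negative.
    return [a for a in amounts if a > 0]

def total(amounts):
    return round(sum(amounts), 2)

def daily_revenue(lines):
    # The top-level function now reads like a description of the process.
    return total(drop_refunds(parse_amounts(lines)))
```

Each helper can be unit-tested without constructing a full report, and the top-level function documents the pipeline.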
Almost everything in Scrum can be seen as protecting the team.
ONE example (there are many):
Problem: vague requirements
Solutions in Scrum:
Acceptance criteria on stories need to be clear
whole team grooms the story; everyone understands it and does planning poker to agree on costs. If the team doesn’t all understand it, it doesn’t get past this point
only fully groomed stories get into a Sprint and get worked on.
EVEN IF YOU GOT IT WRONG, you demo what you did every sprint, and the stakeholders can ask for additional work in a future story, reducing the cost of getting it wrong once.
The problem is that only one organization I’ve worked for has actually tried to implement it correctly. The rest just say “yeah, we do Agile SCRUM,” but it becomes obvious quite quickly that no, they do not. Just because we throw stories on a Jira board every 2 weeks and move them around does not make it SCRUM. I suspect this is partially the reason some people have a negative view of it. They’ve only done “SCRUMfall” and assume that’s all it really is.
You could be onto something. One of my first languages was dBase (early 90s) which, through its style, enabled you to build complex user interfaces with data storage very quickly. I only built small things with it at the time, but it influenced my desire for better solutions than we have today.
Learning SQL on an enterprise database is what started my journey into coding. It really forces you to think about what you’re doing because of how structured the language is. It’s also very immediate in that you do x and you get y.
It also makes you think more about data models, which I’d argue is why we ended up with the garbage that is MongoDB: developers not thinking enough about their data and how it relates.
For anyone with their hackles up: 99.9999999% of the time you want an RDBMS. The remaining 0.0000001% of the time you want NoSQL. So for any project you spin up? Guess what? You want an RDBMS.
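For anyone who thinks a relational database is heavyweight for a small project, here is a sketch of how little ceremony one takes using Python’s stdlib sqlite3 module (the customer/orders schema is invented for illustration):

```python
import sqlite3

# An in-memory relational store: two tables linked by a foreign key.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
con.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    total_cents INTEGER NOT NULL)""")
con.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada'), (2, 'Grace')")
con.execute("INSERT INTO orders (customer_id, total_cents) "
            "VALUES (1, 1999), (1, 500), (2, 1250)")

# The relationship lives in the data itself, not in application code.
rows = con.execute("""SELECT customer.name, SUM(orders.total_cents)
                      FROM customer JOIN orders ON orders.customer_id = customer.id
                      GROUP BY customer.id ORDER BY customer.name""").fetchall()
```

Swapping `":memory:"` for a file path gives you a durable, queryable store with no server to run.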
Completely agree. I really love SQL, but I hate its syntactic limitations. SQLAlchemy was my band-aid with an afterburner to make it bearable (and maintainable).
dBASE was not my first language, but learning normalization and modelling completely transformed my user interface design. Starting with dBASE, every UI I built used all available data to do some combination of reducing the potential for error and reducing user effort.
For example, choosing “Tesla” as the make of car should obviously hide “F-150” from the list of models and hide all fuel types except “Battery Only”. This seems obvious to pretty much everyone, but there are a lot of UI designs that completely ignore analogous data relations. Less obviously, but just as important, having reduced the list of fuel types to one possibility, it should be automatically filled in.
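The Tesla example above can be sketched as data-driven form logic; the lookup tables here are invented, and a real UI would load them from the database:

```python
# Hypothetical lookup tables driving a car-selection form.
MODELS = {"Tesla": ["Model 3", "Model Y"], "Ford": ["F-150", "Focus"]}
FUEL_TYPES = {"Tesla": ["Battery Only"], "Ford": ["Gasoline", "Battery Only"]}

def form_state(make):
    """Return the choices each field should offer, plus any auto-filled value."""
    fuels = FUEL_TYPES[make]
    return {
        "models": MODELS[make],          # hide models the make doesn't offer
        "fuel_choices": fuels,
        # Only one remaining possibility: fill it in instead of asking the user.
        "fuel_value": fuels[0] if len(fuels) == 1 else None,
    }
```

The point is that the filtering and auto-fill fall straight out of the data relations; no per-field special cases are needed.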
I find web forms, especially government ones, to be particularly bad at this stuff.
In addition (or maybe this is also what typing and structure mean), organizing data to eliminate duplicated or derived info, and determining the keys or indexes needed to access it and the rules governing access and update: that’s half your app specification right there, and how well you do it makes a big difference to the speed and flexibility of implementing the other half.
It teaches you to think about data in a different way. Even if you never use it in your products, the mental faculties you have to build for it will definitely benefit you.
I see what you are getting at (and I actually do know the basics of SQL), but for embedded developers, I think it’s much more important to know about the storage medium. Is it EEPROM or flash? What are pages and blocks? Do you need wear leveling? Can you erase bytes or only entire pages at a time? What is the write time of a page, a block, and a byte? There are so many limitations on the way we have to store data that I think it can be harmful to think of data as relational in the sense SQL teaches you, as it can wreck your performance and the lifetime of the storage medium.
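A toy Python model of the page-erase constraint described above (the page size and behavior are simplified illustrations, not any specific part):

```python
PAGE_SIZE = 256  # bytes; a typical NOR-flash page size, chosen for illustration

class FakeFlash:
    """Toy flash model: programming can only clear bits (1 -> 0); erase is per page."""
    def __init__(self, pages=4):
        self.mem = bytearray([0xFF] * PAGE_SIZE * pages)  # erased flash reads 0xFF

    def write(self, addr, data):
        for i, b in enumerate(data):
            # Programming can only flip bits from 1 to 0, never back.
            self.mem[addr + i] &= b

    def erase_page(self, page):
        start = page * PAGE_SIZE
        self.mem[start:start + PAGE_SIZE] = b"\xff" * PAGE_SIZE

def update_byte(flash, addr, value):
    # Changing one byte in place costs a read-erase-rewrite of its whole page,
    # which is why naive relational-style updates wear out the medium.
    page = addr // PAGE_SIZE
    start = page * PAGE_SIZE
    copy = bytearray(flash.mem[start:start + PAGE_SIZE])
    copy[addr - start] = value
    flash.erase_page(page)
    flash.write(start, copy)
```

Writing `0x34` directly over a cell holding `0x12` would yield `0x12 & 0x34 = 0x10`, not `0x34`; only the erase-and-rewrite path produces the intended value.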
Files are a mistake; they destroy all structural information.
You have no guarantee that nobody touched your file; we should use databases that keep structural information instead.
The command line is the minimum-effort human interface; if we had more time/skill to make interfaces, the CLI wouldn’t exist.
You have no guarantee that nobody touched your file; we should use databases that keep structural information instead.
What is your definition of a file and a database? For example, do you think SQLite is a database, and does a SQLite database file count as a file? Does editing SQLite or PostgreSQL data with a third-party client count as touching a file?
C can die in a fire. Its “simplicity” hides emergent complexity: it has nearly no compile-time checks for anything and nearly no potential for sensible abstraction. It’s like walking an infinite tightrope in fog while an earthquake is happening.
For completely different reasons: The same is true for C++ but to a far lesser extent.
Ahh, the consequences of using the PDP-11 as your abstract machine.
I find C fun in small doses, but if I ever had to scale up to an actual product, I’d quickly want to off myself from copy-pasting my vector implementation for every different type it needs to contain.
I used to be this way about C++ too... but C++17/20 is not the same language it was 10 years ago... and it definitely isn’t the language most firmware guys get to use it as.
There is some truly wild shit in the templating system.
I’m aware. I write C++17, and I try to stay informed about what the best practices are for whatever version of whatever language I’m writing at the moment. But that’s actually a reason not to like C++. It’s painfully backwards compatible, and what was good practice isn’t anymore because now there’s a better one, but that better practice isn’t in any way enforced because of backwards compatibility. Also, I don’t like templates; generics are superior to me, but that’s a me thing.
I forgot where I heard this, but at one point around the same time, Microsoft was trying to get BASIC embedded into webpages for Internet Explorer as a competitor to JavaScript.
What do you mean “trying to”? Internet Explorer supported VBScript, which was a “competitor” to JavaScript. Though being locked to IE was a hindrance to adoption. That, and it was based on VB.
The JavaScript ecosystem is made worse by the legions of “developers” in it who amount to bro-velopers, putting no thought into whether something is needed before they create it. There’s a strong overlap between the idiots in crypto and JavaScript developers that needs to be decoupled drastically.
Having fun when programming should be much more important than having correct or fast code when you’re a programmer, and it should be what we aim for first.
Having fun when programming should be much more important than having correct or fast code (…)
That’s only remotely reasonable if you’re a weekend warrior who messes with coding as a pastime. Even so, I’m not sure what fun you can extract from dealing with slow, broken code.
Of course those concepts are intertwined in some way.
But as a full-time lead dev on a relatively big project, I find that a lot of people, often junior devs, concentrate a lot on what they think is “good code” and not a lot on whether they and other devs are having fun. That may make sense when you’re a junior and have to learn a lot at once, but once you’re experienced enough, I feel that focusing on having fun, both for you and your team, should matter more to us than the precepts you read about fast code and theoretically clean code, as long as it doesn’t make the code less fun to work with in the long run.
For example, doing R&D and re-implementing things from scratch, in most cases just to throw away the great majority of it, could be considered fun by most programmers, even when it doesn’t make much sense because what you had before also worked. The same goes for switching some architecture around (perhaps wrongly, but it’s sometimes hard to know before you’ve tried it).
I’ve come to very much dislike Scrum and agile management as well, due to all their protocols and the way they enforce a certain way of progressing (tickets, progress reporting, mostly short-term work) that focuses on the project’s goal (which really is what the company wants), sometimes at the expense of devs experimenting and just having fun (what I advocate we should aim for). Though it all depends on your project and company, I guess.
It can be rewarding. For me, this has a lot to do with team culture. Am I supported and given the time needed to make improvements as I go or am I constantly rushing to make a deadline?
So much. When I’m trusted to find the right balance of productivity and quality, I enjoy the work more. When I enjoy the work, I’m more productive and write better code. It’s a positive feedback loop.
You can always solve a problem by adding more layers of abstraction. Good software design isn’t adding more layers of abstraction; it’s solving problems with the minimum number of abstractions necessary while still keeping the code maintainable and scalable.
Abstractions have benefits, but they also have downsides: they can complicate code and make it harder to read.