I mean, it is entirely reasonable that “bad” is the best performance you can hope for while sorting an entire set of generally comparable items.
If you can abuse special knowledge about the data being sorted then you can get better performance with things like radix sort, but in general it just takes a lot of work to compare them all, even if you are clever enough to avoid wasted effort.
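As a concrete sketch of that special-knowledge trick (my own illustration; the keys and range are invented), counting sort reaches O(n + k) when the keys are non-negative integers bounded by a known k:

```python
def counting_sort(items, max_key):
    """Sort non-negative integers in O(n + k) time, where k = max_key.

    This sidesteps the O(n log n) comparison lower bound by exploiting
    special knowledge: keys are integers in a known, small range.
    """
    counts = [0] * (max_key + 1)
    for x in items:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([3, 1, 4, 1, 5], max_key=5))  # → [1, 1, 3, 4, 5]
```

Radix sort generalizes this by running a stable counting pass per digit, so even large integer keys avoid comparison-based costs.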
Yeah, you’re right, it doesn’t make sense to say that O(f(n)) is good or bad for any algorithm. It must be compared to the complexity of other algorithms which solve the same problem in the same conditions.
I mean…yeah. Just because something is provably the best possible thing doesn’t mean it’s good. Sorting should be avoided if at all possible. (And in many cases, such as with numbers, you can do better than comparison-based sorts.)
The labels come from viewing the space of all possible functions from input size to operation count, so they don’t apply to any particular problem an algorithm is attempting to solve (the space of functions achievable for a given problem is often smaller).
There are some simple rules of thumb that can be inferred from Big O notation. For example, I try to be careful with what I do in nested loops whose number of repetitions will grow with application usage. I might be stepping into O(n^2) territory, and that is not the place for database queries.
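To make the rule of thumb concrete (a made-up example, not from the comment): finding values shared by two lists with nested loops is O(n·m), while one pass over a set is roughly O(n + m):

```python
def common_nested(a, b):
    # Nested loops: the inner scan of b repeats for every element of a,
    # so the work grows as len(a) * len(b).
    return [x for x in a if any(x == y for y in b)]

def common_set(a, b):
    # Build the set once, then each membership test is O(1) on average,
    # so the work grows as len(a) + len(b).
    b_set = set(b)
    return [x for x in a if x in b_set]

print(common_set([1, 2, 3], [2, 3, 4]))  # → [2, 3]
```

Both are fine on tiny inputs; the difference only bites once the lists grow with usage, which is exactly when the nested version starts to hurt.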
I don’t see the contradiction. If you’re doing CRUD operations on 10,000 points, I’m sure you’re doing what’s possible to send it to storage in one fell swoop. You most probably get out of those loops any operation that doesn’t need to be repeated as well.
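A rough sketch of that one-fell-swoop idea using Python’s built-in sqlite3 (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (x REAL, y REAL)")

points = [(float(i), float(i * 2)) for i in range(10_000)]

# One batched call with executemany instead of 10,000 execute() calls
# inside a loop: the per-statement overhead is paid once.
with conn:
    conn.executemany("INSERT INTO points VALUES (?, ?)", points)

count = conn.execute("SELECT COUNT(*) FROM points").fetchone()[0]
print(count)  # → 10000
```

The same shape applies to most storage layers: collect the rows first, then hand them off in one batched call instead of paying one round trip per iteration.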
Yup, it’s why O(N+10) and even O(2N) are effectively the same as O(N) on your CS homework. Speaking too generally, once you’re dithering over the efficiency of an algorithm processing a 100-item dataset, you’ve probably gone too far into the weeds. And optimizations can often lead to messy code for not a lot of return.
That’s mostly angled at new grads (or maybe just at me when I first started). You’ve probably got bigger problems to solve than shaving a few ms from the total runtime of your process.
I’d probably accept the job and get paid to practice in the field while focusing on finding a more permanent position. Nothing is more attractive to employers than someone working in the field they want to hire in.
Not exactly. If no one on your instance has subscribed to the community, Lemmy fails to forward you to the community and returns a 404. So the Lemmy way of making sure others can get to the community is to provide the URL. Lemmy has a lot of poor design in this way. It will be replaced with something better next year. Also, as a beehaw user you should be familiar with broken ! links to communities that are not federated.
Oh, that’s correct! Thanks for taking the time to write this clarification. And I’m not sure I’ve seen broken links via beehaw. I’d have to check again which instances are defederated. I’m using Liftoff and pretty sure it asks me from which instance I want to navigate to a community.
Exactly the same company. The fact that all of their glassdoor reviews are from India made me rethink if I should follow through. We’ll see how it goes, but making a blacklist sounds pretty dope, so that’s a nice new goal.
The problem breaks down into a few broad sub-problems, as I see it:

1. Confirming the reviewer or voter is who they say they are (to prevent one entity from making multiple reviews).
2. Confirming the reviewer or voter is a valid stakeholder. This is domain-specific, but can be such metrics as “citizen of country” or “verified purchaser”.
3. Confirming the intent of the reviewer. This means filtering out people who were paid off (buyers who are offered a gift card for a positive review, which happens plenty on Amazon), or discounting review bombs when a game “goes woke”.
1 and 2 have solutions. Steam cares about whether you’re a verified purchaser, and the barrier to entry of “1 purchase of a game per vote” is certainly enough to make things harder to bot. Amazon might be able to do the same, but so much of the transaction happens outside their purview that a foolproof system would be hard. Not that it’s in their interest to do so, though.
For places like Reddit or Lemmy, verifying one human per upvote is going to be impossible. New accounts are cheap and easy as a core function of the product. Bot detection is only going to get harder, too.
If you used some centralized certificate system (like SSL certs), you could maybe get as granular as one vote per machine, but not without massive privacy invasions. The government does this for voting kinda, but we make a point to keep those private identifiers the government gives private.
As far as I’m aware something like that isn’t really possible.
> it would prevent one person from making multiple fake accounts
How do you define ‘a person’ and how do you ensure that they only have one account? Short of government control of accounts, I don’t think you can really guarantee this and even then there’s still fraud that gets past the current government systems.
Then, how do you verify that the review is coming from the person that the account is for?
IMO, we’d all be better off going back to smaller-scale social interactions: think “social media towns” where you interact with a smaller number of people and over time develop trust in some of them. Then you can scale this out to more people than you can directly know with some sort of web-of-trust model. You know you trust Alice, and you know Alice trusts Bob, so therefore you can trust Bob, but not necessarily quite as much as you trust Alice. Then you have this web of trust relationships that decay a bit with each hop away from you.
It’s a rather thorny problem to solve, especially since for that to work optimally you’d want to know how much Alice trusts Bob, but that amounts to everyone documenting how much they trust each of their friends, which seems socially… well… difficult.
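One way to sketch the decaying-trust idea (all names, weights, and the decay rule here are my own assumptions): model trust as a product of per-hop weights in (0, 1] and keep the best path, which is a Dijkstra variant maximizing a product instead of minimizing a sum:

```python
import heapq

def trust(graph, source, target):
    """Best-path trust: each hop multiplies in an edge weight in (0, 1],
    so trust decays a bit with every hop away from you.

    `graph` maps a person to {friend: direct_trust}.
    """
    best = {source: 1.0}
    heap = [(-1.0, source)]  # negate so the max product pops first
    while heap:
        neg, person = heapq.heappop(heap)
        t = -neg
        if person == target:
            return t
        if t < best.get(person, 0.0):
            continue  # stale entry; a better path was already found
        for friend, direct in graph.get(person, {}).items():
            candidate = t * direct
            if candidate > best.get(friend, 0.0):
                best[friend] = candidate
                heapq.heappush(heap, (-candidate, friend))
    return 0.0  # unreachable: no chain of trust at all

web = {"you": {"alice": 0.9}, "alice": {"bob": 0.8}}
print(round(trust(web, "you", "bob"), 2))  # → 0.72
```

Bob ends up at 0.9 × 0.8 = 0.72: trusted, but less than Alice, and anyone further out decays again, which matches the hop-decay intuition above.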
Though the rest is actually easy™:
- reviews wouldn’t be suppressed or promoted by paid algorithms
- the algorithm WOULD help connect people to items they are interested in. But maybe the workings of it would be open source, so it could be audited for bad actors.
You do what the fediverse does, you have all the information available to everyone, then you run your own ‘algorithm’ that you wrote/audited/trust. The hard part is getting others to give away access to all ‘their’ data.
Again, this is not what you asked but I prefer looking at reviews by YouTubers that I know (e.g. Linus Tech Tips). Maybe a ranking system among those in the review biz would not be so prone to bots.