I’m not sure what to tell you. I just don’t see what you do. And I never bother to look at a meme close enough to notice the kind of details the other user pointed out.
How can you tell?
nasm
is an assembler though, not a ‘language’
That’s like saying “clang is a compiler though, not a language”. It’s correct but completely beside the point. Unless you’re writing a compiler, “cross-platform assembler” is kind of an insane thing to ask for. If you want to learn low-level programming, pick a platform. If you are trying to write a cross-platform program in assembly, WHY!? Unless you’re writing a compiler. But even then, in this day and age using a cross-platform assembler is still kind of an insane way to approach that problem; take a lesson from decades of progress and do what LLVM did: use an intermediate representation.
I’ve genuinely never had a problem with it. If something is wrong, it was always going to be wrong.
Have you worked on a production code base with more than a few thousand lines of code? A bug is always going to be a bug, but 99% of the time it’s far harder to answer “how is this bug triggered” than it is to actually fix the bug. How the bug is triggered is extremely important.
Why is it preferable to write a bunch of boilerplate rather than just deal with the stack trace when you do encounter a type error?
If you don’t validate types you can easily run into a situation where you write a value to a variable with the wrong type, and then some later event retrieves that value, tries to act on it, and throws an exception. Now you have a stack trace for the event handler, but the actual bug is in the code that set the variable and thus is not in your stack trace. Maybe the stack trace is enough that you can figure out which variable caused the problem, and maybe it’s obvious where that variable was set, but that can become very difficult very fast in a moderately complex application. Obviously you should write tests, but tests will never catch every weird thing a program might do, especially when a human is involved. When you’re working on a moderately large and complex project that needs to have any degree of reliability, catching errors as early as possible is always better.
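To make the failure mode concrete, here’s a minimal sketch (in Go, using `any` to stand in for a dynamically typed variable; the state keys and function names are invented for illustration):

```go
// Sketch of a bug whose stack trace points at the wrong place.
// The bad write happens in saveUserInput; the panic happens later in
// handleSubmit, so the trace shows the handler, not the line that
// stored the wrong type.
package main

import "fmt"

var state = map[string]any{} // stand-in for loosely typed application state

func saveUserInput() {
	// BUG: stores a string where later code expects an int.
	state["retryCount"] = "3"
}

func handleSubmit() {
	n := state["retryCount"].(int) // panics here, far from the real bug
	fmt.Println("retries left:", n)
}

func main() {
	saveUserInput()
	handleSubmit()
}
```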
And relying on runtime validation is a horrific way to write production code
Assembly languages are always architecture-specific. That’s kind of their defining feature. Assembly is readable machine code.
“Assume it’s a map and treat it like a map and then catch the type error if it’s not.” Paraphrased from actual advice by Guido on how you should write Python. Python isn’t a bad language but the philosophy that comes along with it is so fucked.
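For contrast, a minimal sketch of the two styles (in Go rather than Python, only because its type assertions allow both; the value here is made up):

```go
// "Assume it's a map and catch the error" versus checking up front.
package main

import "fmt"

func main() {
	var data any = []string{"not", "a", "map"} // deliberately the "wrong" type

	// The assume-and-catch style: this unchecked assertion would panic at runtime.
	// m := data.(map[string]int)

	// Checking up front keeps the failure next to its cause:
	if m, ok := data.(map[string]int); ok {
		fmt.Println("map:", m)
	} else {
		fmt.Printf("expected a map, got %T\n", data)
	}
}
```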
What I mean is, from the perspective of performance they are very different. In a language like C where (p)threads are kernel threads, creating a new thread is only marginally less expensive than creating a new process (in Linux, not sure about Windows). In comparison, creating a new ‘user thread’ in Go is exceedingly cheap. Creating tens of thousands of goroutines is feasible. Creating tens of thousands of threads is a problem.
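For a sense of scale, a minimal sketch (the 50,000 figure and the squaring “work” are purely illustrative):

```go
// Spawning 50,000 goroutines is routine; doing the same with one kernel
// thread per task would exhaust resources long before this finished.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	results := make(chan int, 50000)

	for i := 0; i < 50000; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n // trivial stand-in for real work
		}(i)
	}

	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("all goroutines finished, sum:", sum)
}
```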
Also, it still uses kernel threads, just not for every single goroutine.
This touches on the other major difference. There is zero connection between the number of goroutines a program spawns and the number of kernel threads it spawns. A program using kernel threads is relying on the kernel’s scheduler, which adds a lot of complexity and non-determinism. But a Go program uses the same number of kernel threads (assuming the same hardware and you don’t mess with GOMAXPROCS) regardless of the number of goroutines it uses, and the goroutines are cooperatively scheduled by the runtime instead of preemptively scheduled by the kernel.
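A small sketch of that decoupling (the numbers printed will vary by machine; the sleeps are only there to keep the goroutines alive long enough to count them):

```go
// Thousands of goroutines, but the scheduler still runs them on at most
// GOMAXPROCS threads (the CPU count by default).
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	for i := 0; i < 10000; i++ {
		go func() {
			time.Sleep(time.Minute) // park the goroutine so it stays alive
		}()
	}

	time.Sleep(100 * time.Millisecond) // give them a moment to start

	fmt.Println("goroutines:", runtime.NumGoroutine()) // ~10001
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))  // e.g. 8 on an 8-core machine
}
```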
Key point: they’re not threads, at least not in the traditional sense. That makes a huge difference under the hood.
Really? Huh, TIL. I guess I’ve just never run into a situation where that was the bottleneck.
Definitely not a guarantee, bad devs will still write bad code (and junior devs might want to let their seniors handle concurrency).
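To make “bad code” concrete, a minimal sketch of the classic mistake: an unsynchronized counter. The mutex lines are the fix; remove them and `go run -race` will flag the data race.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		counter int
		mu      sync.Mutex
		wg      sync.WaitGroup
	)

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // without the lock, increments are lost and -race complains
			counter++
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter) // 1000, only because of the mutex
}
```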
It’s safe to assume that any non-trivial program written in Go is multithreaded
You can also just tell GitHub to not do that.
Why are you memeing about me? I don’t appreciate being made fun of.
If your job is to make websites and you make sites that don’t work on a browser that has over 100 million users, you’re not doing your job.
If you actually have deep knowledge in a specialty, then you describe yourself as that specialty. ‘Full stack engineer’ conveys that you don’t have a specialty/are a master of nothing/your skills are _ shaped.
Experience != expertise or skill. I have never met someone who was actually good at both. Maybe if your backend is just some SQL queries. I am a backend engineer and I’m adequate at front end but I’d never hire someone whose skills were merely adequate unless I thought they had the potential to reach ‘good’.
Scripting languages being languages that are traditionally distributed as source.
So how does the distribution mechanism matter beyond that?
They tend to be much easier to write
I’m assuming you are not saying “real” languages should be hard to write…
run slower
Objective-C and Go run slower than C, and they’re all compiled languages. Sure, an interpreter will be slower than a compiled language, but modern languages aren’t simply interpreted (e.g. JIT compilation).
often but not always dynamically typed, and operate at a higher level
There are dynamically typed compiled languages, and high level compiled languages.
It’s not a demeaning separation, just a useful categorization IMO.
Calling one class of languages “real” and another class something else is inherently demeaning. I wouldn’t have cared enough to type this if you used “compiled vs scripting” instead of “real vs scripting”. Though I disagree with using “scripting” at all to describe a language since that’s an assertion of how you use the language, not of the language itself. “Interpreted” on the other hand is a descriptor of the language itself.
As someone who loves C, there are lots of languages that seem too limiting and high level; doesn’t mean they aren’t useful tho.
I personally can’t stand Java because the language designers decided to remove ‘dangerous’ features like pointers and unsigned integers, as if programmers are children who are incapable of handling the risk. On the other hand I love Go. It’s high level enough to be enjoyable and easy to write, but if you want to get into the weeds you can.
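A small sketch of what “getting into the weeds” can look like (purely illustrative; most Go code never needs unsafe):

```go
// Unsigned integers are first-class, and the unsafe package exposes raw
// pointer access when you really want it.
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var x uint32 = 0xDEADBEEF

	// Reinterpret the uint32 as its four bytes through an unsafe pointer
	// (output depends on the machine's endianness).
	bytes := (*[4]byte)(unsafe.Pointer(&x))
	fmt.Printf("raw bytes of %#x: %v\n", x, *bytes)
}
```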
That line is blurring to the point where it barely exists any more. Compiled languages are becoming increasingly dynamic (e.g. JIT compilation, code generation at runtime) and interpreted languages are getting compiled. JavaScript is a great example: V8 JIT-compiles hot functions into optimized machine code at runtime.
IMO the only definition of “real” programming language that makes any sense is a (Turing complete) language you can realistically build production systems with. Anything else is pointlessly pedantic or gatekeeping.
I guess I just don’t see enough memes to have picked up on that