Hacker News | haimez's comments

Generating code at runtime is often an anti-goal because you can’t easily introspect it. “Build-time” generation gives you that, but people often choose to go further and check the generated code into source control to be able to see the change history.


But for things like e.g. DAG systems, it would be great to be able to upload a new API definition and have it immediately available instead of having to recompile anything in the backend.


Are we the baddies?


Yep


Java is both compiled (first to bytecode, then to machine code by the JIT) and fast (once JIT compiled).


It depends on what you are using it for; “fast” is relative. Java can be fast for applications where performance and scalability are not primary features. If performance and scalability are core objectives, even performance-engineered Java isn’t really competitive with a systems language. You can bend Java to perform better than most people believe, especially today, but the gap is still pretty large in practice.

I wrote performance-engineered Java for years. Even getting it to within 2x worse than performance-engineered C++ took heroic efforts and ugly Java code.


Java is "fast" but not fast. Most of the time if performance is a true concern, you are not writing code in Java.


I have yet to run a Java program that I haven't had to later kill due to RAM exhaustion. I don't know why. Yeah an Integer takes 160 bits and that's without the JVM overhead, but still. Somehow it feels like Java uses even more memory than Python. Logically you'd point the finger at whoever wrote the software rather than the language/runtime itself, but somehow it's always Java. It's like the Prius of languages.

Ok, just glanced at my corp workstation and some Java build analysis server is using 25GB RES, 50GB VIRT when I have no builds going. The hell is it doing.


GC usually only runs when the process wants to allocate an object but there's no space left on the heap. It's entirely possible that it did a bunch of work previously which created a bunch of garbage now waiting to be cleaned up. See the G1PeriodicGCInterval flag to enable idle collections (assuming G1).

Java is also fairly greedy with memory by default. It likes to grow the heap and then hold onto that memory unless 70% of the heap is free after a collection. The ratios used to grow and shrink the heap can be tuned with MinHeapFreeRatio and MaxHeapFreeRatio.
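The committed-vs-used distinction behind that behaviour can be observed from inside a program. A minimal sketch using the standard Runtime API (the exact numbers will vary by JVM, heap flags, and GC):

```java
// Minimal sketch: inspect how much heap the JVM has committed vs. is
// actually using. totalMemory() is memory the JVM has claimed from the
// OS for the heap; it often stays high after a collection unless the
// -XX:MinHeapFreeRatio / -XX:MaxHeapFreeRatio thresholds trigger a shrink.
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.println("used  MB: " + used / (1024 * 1024));
        System.out.println("total MB: " + rt.totalMemory() / (1024 * 1024)); // committed
        System.out.println("max   MB: " + rt.maxMemory() / (1024 * 1024));   // -Xmx ceiling
    }
}
```

A tool like this makes the "greedy" behaviour visible: after a burst of allocation, "total" stays well above "used".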


Why do Java developers still have to tune stuff like that?


Before I even went on my rant, I was guessing there's just some confusing default like this but there's also some historical reason why it's like that.


> Ok, just glanced at my corp workstation and some Java build analysis server is using 25GB RES, 50GB VIRT when I have no builds going. The hell is it doing.

Allocating a heap of the size it was configured to use, probably.


That's a max size, not a preset allocation. The process normally starts out using 1GB.


Sure, but if it's had to use a lot at some point in the past it usually holds onto it.


That would explain it, but also, that's super broken


Nothing broken about it. It's optimized for a particular situation, that situation being a long running process on a server. This is where the JVM typically runs. If you don't want that behaviour there are a myriad of GC options, which could be better documented but are not that hard to find.


I'm not the one who wrote it though, and it's software designed to run on a workstation.


It's not a big issue for a server deployment where if you got that memory from the OS and didn't get killed, there's probably nothing else running on the box and you might as well keep it for the next traffic spike. But yeah not ideal on the desktop/workstation.


don't slander the Prius! it's an incredibly efficient and robust machine. Java is a Chevy Colorado. surprisingly common for how unreliable it is


That's what I mean, surely the Prius can reach 100mph, but you rarely see it go past 55. Usually in the fast lane. It's a paradox.


More of a historical footnote than a serious example, but you've never had to kill the Java applications running on your SIM card (or eSIM).


I don't know about that, my flip phone used to crash quite often. And it displayed a lot of Java logos.


Different processor and JVM. My understanding is that early versions of the Java card runtime didn't even support garbage collection. It was a very different environment to program, even if the language was "Java".


Java is fast for long-running server processes. Even HFT shops competing for milliseconds use it. But yeah every user-facing interactive Java application manages to feel clunky.


Learned from an NYC exchange 10 years ago that Java can be written so as to not use garbage collection. Fast and no pause for GC.

1. Pool and reuse objects that would otherwise become garbage. Use new sparingly.

2. Avoid Java idioms that allocate garbage, e.g. iterating a List with for (String s : strings) {...} creates an Iterator on each pass; substitute for (int i = 0, n = strings.size(); i < n; i++) { String s = strings.get(i); ... }
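A minimal sketch of that allocation-avoidance style (the ReusePool class here is hypothetical; real low-latency Java goes much further, with preallocated arrays, primitive collections, and off-heap buffers):

```java
// Sketch of the "reuse instead of allocate" style described above: a
// preallocated buffer is reset and reused each call instead of creating
// a new StringBuilder (future garbage) per message.
public class ReusePool {
    private final StringBuilder buf = new StringBuilder(256); // reused, never re-newed

    String join(String[] parts) {
        buf.setLength(0);                               // reset in place, no allocation
        for (int i = 0, n = parts.length; i < n; i++) { // indexed loop, no Iterator
            buf.append(parts[i]);
        }
        return buf.toString();                          // the one unavoidable allocation
    }

    public static void main(String[] args) {
        ReusePool p = new ReusePool();
        System.out.println(p.join(new String[]{"a", "b", "c"})); // prints "abc"
    }
}
```

The payoff is a steady-state allocation rate near zero, so the collector rarely has a reason to run at all.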


It stands for “Advanced Persistent Threat” - https://en.m.wikipedia.org/wiki/Advanced_persistent_threat


Delete API calls are free, listing the objects in a bucket to know what to delete is not.


Let’s be a little charitable here - having and raising children was a much, much more dangerous proposition health-wise even in your parents’ generation, to say nothing of your grandparents’.


There’s not a lot to be charitable about in the parent comment. Some poorly researched race replacement bs?

You personally are right, it has never been safer health wise to have children. It has never been more expensive either, win some, lose some.


Expensive relative to expectations. Not in terms of what most people could provide.

That's not bad. It's fine that people expect more and want to be able to provide more before having children (and I do agree the commenter who argued this is somehow confined to the Western world is way out there).

But we need to understand that across most of the developed world - there will be exceptions in pockets here and there - it is not the ability to afford children that has dropped, but that we're not willing to give up what we have to have children at a living standard that was considered good before.

E.g. when I grew up we always had food, but the food we had was dictated by cost in a way I never think about, and wouldn't want to deal with. We lived in less space, and I want more space, not less. Our living standard when I was a kid was fine; far above average for most of the world, about average for where I grew up. But if what it took to have more kids was to go back to that, I wouldn't.

The commenter above can call that soft all they want - I don't feel bad for wanting to enjoy life more than I want more children. (I have one son; I might well have another child, but only because I'm at a stage where my girlfriend and I can afford it without sacrificing our standard of living - and that does place me in a privileged position.)


more expensive than when? people had nothing 150 years ago and still had kids, mind you


Yes they, and especially women, had absolutely no choice due to lack of birth control and various repressions. A relevant and useful comparison.


10x reduction in latency, higher storage costs with lower access costs (SSD instead of spinning disks). So high-I/O, small-file situations (with no need for cross-AZ access) are where the benefits can be found.


Or like SSD’s vs spinning disks…


Counterfeiting parts, including passing even a small part off as an original OEM one, absolutely enters into it


This is a non-sequitur to the comment you replied to.


> This seems like a bad idea, because the whole point of an assert is that something shouldn't happen, but might due to a (future?) bug.

And so it’s a bad idea because…?

The whole idea is to notice a bug before it ships. Asserts are usually enabled in test and debug builds. So having an assert hit the “unreachable” path should be a good way to notice “hey, you’ve achieved the unexpected” in a bad way. You’re going to need to clarify in more detail why you think that’s a bad thing. I’m guessing because you would prefer this to be a real runtime check in non debug builds?


It's undefined behavior if the assert triggers in production. It's too greedy for minor performance benefit at the risk of causing strange issues.


Yikes. I did have to go down a little rabbit hole to understand the semantics of that builtin (I don’t normally write C if that wasn’t immediately obvious from the question) but that seems like a really questionable interpretation of “this should never happen”. I would expect the equivalent of a fault being triggered and termination of the program, but I guess this is what the legacy of intentionally obtuse undefined behavior handling in compilers gets you.


The builtin itself is fine. It works exactly as intended. It says "I've double and triple checked this. Trust me, compiler. Just go fast". But you should not use it to construct an assert.


Eh. I absolutely get what you're saying. And this is for sure flying very close to the knife's edge. But if your assertion checks don't run in release mode, and due to some bug, those invariants don't hold, well, your program is already going to exhibit undefined behaviour. Why not let the compiler know about the undefined behaviour so it can optimize better?

The nice thing about this approach is that the assertion provides value both in debug and release mode. In debug mode, it checks your invariants. And in release mode, it makes your program smaller and faster.

Personally I quite like rust's choice to have a pair of assert functions: assert!() and debug_assert!(). The standard assert function still does its check in both debug and release mode. And honestly thats a fine default these days. Sure, it makes the binary slightly bigger and the program slightly slower, but on modern computers it usually doesn't matter. And when it does matter (like your assertion check is expensive), we have debug_assert instead.
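Java's built-in assert keyword happens to illustrate the same split: checks run only when the JVM is started with -ea, and are skipped otherwise (though unlike __builtin_unreachable, a skipped Java assert never introduces undefined behaviour - execution just continues). A minimal sketch:

```java
// Java's assert statement runs only when assertions are enabled
// (java -ea AssertDemo). Without -ea the check is skipped entirely,
// so this program prints "-1 reached"; with -ea it throws AssertionError.
public class AssertDemo {
    static int positive(int x) {
        assert x > 0 : "x must be positive";
        return x;
    }

    public static void main(String[] args) {
        System.out.println(positive(-1) + " reached"); // no -ea: check skipped
    }
}
```

That makes it closest to debug_assert!() in spirit: a check you can turn off, with the safe failure mode of simply not checking.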


> But if your assertion checks don't run in release mode, and due to some bug, those invariants don't hold, well, your program is already going to exhibit undefined behaviour. Why not let the compiler know about the undefined behaviour so it can optimize better?

Usually in release mode you want to log the core dump and then fix the bug.


Yeah; thats why I like rust's approach. You can either leave assertions in in release mode, so you get your core dump. Or you can take them out if you're confident they won't fire in order to make the program faster.

The unreachable pragma suggested by the author is just a more extreme version of the latter choice.


this is a true unconditional assert, e.g. "I assert that this condition is true". Problem is that's too much power to throw at most use cases.

