
It cuts both ways - in those environments, very unhealthy lifestyles (high stress, drug abuse…) are quite common, if not the norm, so even people starting with healthy lifestyles are under significant pressure.

"Zero cost abstractions" refers to some features of the language that provide functionalities with no runtime cost, e.g. (safe) iterators, not to a presumed simplicity of the whole language. Therefore, this is not mutually exclusive with the fact that certain concepts in Rust require more complexity than their counterpart in other languages (after all, the complexities of the borrow checker don't exist in C).

In general, and this applies to the referenced article, programming with a high level of control over the implementation is complex, and there's no way around it. This article explains the concept: https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html.


> "Zero cost abstractions" refers to some features of the language that provide functionalities with no runtime cost...

In that case, all languages provide zero cost abstractions.

> programming with a high level of control over the implementation is complex

> (after all, the complexities of the borrow checker don't exist in C).

The point of an abstraction is to _hide_ complexity.

> This article explains the concept

The article is only half-right. People complain both about the semantics and the syntax, because Rust's implementation of both presents (in some cases) a huge cognitive load that simply doesn't exist in other languages.


Very hard to estimate; depending on the domain, I'd say 1.5-2x as much.

When it comes to programming in languages and frameworks I'm familiar with, there is virtually no increase in speed (I may use it for double-checking); however, it may still help me discover concepts I didn't know.

When it comes to areas I'm not familiar with:

- most of the time, the increase is substantial, for example when I need targeted knowledge (e.g. finding a few APIs in giant libraries), or when I need to understand an existing solution

- in some cases, I waste a lot of time, when the LLM hallucinates a solution that doesn't make sense

- in some other cases, I do jobs that otherwise I wouldn't have done at all

I stress two aspects:

1. it's crucial IMO to treat LLMs as a learning tool before a productivity one, that is, to still learn from their output rather than just call it a day once "it works"

2. days of later fixing can save hours of upfront checking. Or the reverse, whichever one prefers :)


It really depends on what's being learned. For example, take writing scripts based on the AWS SDK. The API documentation is gigantic (and poorly designed, as it takes ages to load the documentation for each entry), and one uses only a tiny fraction of the APIs. I don't find "learning to find the right APIs" valuable knowledge; rather, I find "learning to design a (small) program/script starting from a basic example" valuable, since I waste less time on menial tasks (i.e. textual search).
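
For instance, a minimal sketch of such a script using boto3 (the bucket name and prefix below are hypothetical placeholders, and AWS credentials are assumed to be configured in the environment):

    # List objects under a prefix and sum their sizes.
    import boto3

    s3 = boto3.client("s3")

    total_bytes = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="example-bucket", Prefix="logs/"):
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]

    print(f"{total_bytes} bytes under logs/")

The valuable part is the shape of the script (client, paginator, loop), not the specific API names - those are exactly the menial lookups the LLM saves you from.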


> It really depends on what's being learned.

There's also the difference between using it to find information and delegating executive function to it.

I'm afraid there will be a portion of workers who crutch heavily on "Now what do I do next, Robot Soulmate?"


That's very subjective. Concepts like iteration are inevitable, and they don't look great in a declarative language like HCL.

I also find refactorings considerably harder in a declarative language, since configurations have a rigid structure.


From Calibre's repository README:

> Supports hundreds of AI models via Providers [...] no AI related code is even loaded until you configure an AI provider.

This fork is pretty much useless.


That's not true: there are some menu items and supporting code by default.


Still, the menu item doesn't interact with AI unless you explicitly configure it.

I bet that clicking it without any configuration will just give you an error message.


How many inactive menu items that error out when clicked are acceptable? Are we ok with a Microsoft Word-style ribbon of controls that do nothing?


If UI bugs are really the issue, then one just sends patches to the upstream project - I'm sure the maintainers will be happy to receive fixes for broken menus. A fork for this is useless, and guaranteed to be abandoned.


Not just useless, but another fork that only confounds newcomers and users looking for help. We can be generally opposed to AI without making it a boogeyman.


This sentiment reflects the type of project worked on - small ones. As projects get bigger, more type information gets lost, which is why it needs to be compensated for, typically via automated (unit) testing.

After having worked with gradual typing, IMO automated testing is not enough to document the code unless the development is very disciplined, as Ruby makes it very easy to use flexible data structures, which quickly become messy.
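
To illustrate, a sketch using Python's optional type hints rather than Ruby's (the gradual-typing dynamic is the same; the Order example is hypothetical, Python 3.9+):

    from dataclasses import dataclass

    # Undisciplined: nothing documents which keys exist or what they hold,
    # and tests rarely pin down every shape the dict takes over time.
    order = {"id": 1, "total": "9.99"}   # is total a string or a number?
    order["items"] = ["book"]            # new key added far from creation

    # Disciplined: the structure is declared once and is checkable.
    @dataclass
    class Order:
        id: int
        total: float
        items: list[str]

    typed_order = Order(id=1, total=9.99, items=["book"])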


> Many more elaborate projects exist. My favorite is the one that compiles something similar to Turbo Pascal to C64 6502

It depends on the purpose. The reference IDE is intended to produce real-world programs (it includes tools for sprites, music, etc.), while high-level language compilers are mostly academic, as they're not performant enough.


llvm-mos (https://github.com/llvm-mos/llvm-mos) seems to generate good enough code - more than competitive with other high-level languages for the 6502, though perhaps not with hand-written assembly.


Shoes was very limited, and could only be used for extremely simple applications.


VB6 deserves the huge popularity it had, but the reason wasn't the language design; rather, it was its (extremely) rapid GUI application development. That was actually a double-edged sword - it facilitated writing spaghetti code.

> You could do basically everything that you could do in languages like C/C++

As long as there is some form of memory access, any language can do basically everything that one can do in C/C++, but this doesn't make much sense.


> As long as there is some form of memory access, any language can do basically everything that one can do in C/C++, but this doesn't make much sense.

No, VB6 had really easy COM integration, which let you tap into a lot of Windows system components. The same code in C++ often required hundreds of lines of scaffolding, and I'm not exaggerating.


FWIW, the pywin32 Python package and the win32ole Ruby package have streamlined COM integration for Python and Ruby. Not quite as easy as VB6, but it's pretty close. I was even able to tab-complete COM names in the Emacs Python REPL, though I remember it being a little buggy.
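
For a flavor of how close it gets, a minimal pywin32 sketch (requires Windows with Excel installed; the save path is a hypothetical placeholder):

    # Drive Excel through late-bound COM automation.
    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    excel.Visible = True

    workbook = excel.Workbooks.Add()
    sheet = workbook.ActiveSheet
    sheet.Cells(1, 1).Value = "Hello from COM"

    workbook.SaveAs(r"C:\Temp\demo.xlsx")  # hypothetical path
    excel.Quit()

A handful of lines like these replace what the thread describes as hundreds of lines of C++ scaffolding.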


It probably still sucks in C, but the C++ DX got a lot better. Importing the IDL would generate wrapper functions that made the calling code look much more like normal function calls: they would check the HRESULT and return the out param from the function. They also introduced types like _variant_t that help with boxing and unboxing native types. It still wasn't fun, but it greatly reduced the line count.


Nah - unless we're talking about C++ Builder's extensions for COM, in Visual C++ land it still sucks big time.

For some reason, there are vocal teams at Microsoft that resist anything in C++ comparable to VB, Delphi, .NET, or C++ Builder in ease of use regarding COM.

Hence why we got MFC COM, ATL COM, WRL, WinRT (as a COM evolution), C++/CX, C++/WinRT, and WIL - and eventually all of them lost traction with that vocal group, which apparently would rather use COM with bare-bones IDL files, most likely using the command line and vi on Windows.


Windows has a COM system; VB6 isn't special. You can do that with VB.Net or C# too, or with C and C++. Windows COM is a thing; VB6 COM isn't, as VB6 only hooked into Windows COM.


I'm just giving context as to why VB6 was much better than C++ back in the day for building Windows apps. VB.Net and C# didn't exist in the halcyon days of 1998.

