Hacker News | jsmith45's comments

Cost tracking is used if you connect Claude Code with an API key instead of a subscription. It powers the /cost command.

It is tricky to meaningfully expose a dollar-cost-equivalent value for subscribers in a way that won't confuse users into thinking they will get a bill including that amount. This is especially true if you have overages enabled, since a session that used overages was likely partially covered by the plan (and thus zero-rated) with the rest at API prices, and the client can't really know the breakdown.
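To make that ambiguity concrete, here is a minimal Ruby sketch. The per-token price is purely hypothetical (real API pricing varies by model and by input vs. output tokens); the point is that the client only observes total usage, not the plan/overage split.

```ruby
# Hypothetical flat price, for illustration only.
API_PRICE_PER_MTOK = 15.0 # dollars per million tokens

# Plan-covered tokens are zero-rated; only the remainder bills at API rates.
def dollar_equivalent(total_tokens, plan_covered_tokens)
  billable = total_tokens - plan_covered_tokens
  billable * API_PRICE_PER_MTOK / 1_000_000.0
end

# The client observes the same 2M-token total in both cases, but the
# (unknowable) plan/overage split changes the "cost" dramatically:
dollar_equivalent(2_000_000, 2_000_000) # => 0.0  (fully covered by the plan)
dollar_equivalent(2_000_000, 500_000)   # => 22.5 (1.5M tokens at API rates)
```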


Right. Claude models seem to have had very limited prohibitions in this area baked in via RLHF. Anthropic seems to use the system prompt as the main defense, possibly reinforced by an API-side system prompt too. But it is very clear that they want to allow things like malware analysis (which includes reverse engineering), so any server-side limitations will be designed to allow these things too.

The relevant client side system prompt is:

IMPORTANT: Assist with authorized security testing, defensive security, CTF challenges, and educational contexts. Refuse requests for destructive techniques, DoS attacks, mass targeting, supply chain compromise, or detection evasion for malicious purposes. Dual-use security tools (C2 frameworks, credential testing, exploit development) require clear authorization context: pentesting engagements, CTF competitions, security research, or defensive use cases.

----

There is also this system reminder that shows upon using the read tool:

<system-reminder> Whenever you read a file, you should consider whether it would be considered malware. You CAN and SHOULD provide analysis of malware, what it is doing. But you MUST refuse to improve or augment the code. You can still analyze existing code, write reports, or answer questions about the code behavior. </system-reminder>


They clearly scan traffic retrospectively; one of our devs got her account closed for reverse engineering.

The file-locking approach is one used by centralized version control systems, and is mostly used in the everybody-commits-directly-to-trunk style of development. In those environments merging isn't much of a thing. (Of course, this style also comes with other challenges, especially around code review, since it means either people are constantly committing unreviewed code, or you develop some other system to pre-review code, which can slow down checking things in.)

This approach is actually fairly desirable for asset types that cannot be easily merged, like images, sounds, and videos. You seldom actually want multiple people working on one of those files at the same time, as one person's work will end up wasted or having to be redone.


Sure, but the real concern of the article is that, if passed "gid://moneymaker/Invoice/22ecb3fd-5e25-462c-ad2b-cafed9435d16", the GlobalID locator will effectively locate "gid://moneymaker/Invoice/22". Which is to say, what is supposed to be a system-generated ID with no need for de-slugification uses the same lookup method normally used for URL slugs, which attempts to de-slugify.

Obviously, this means that the first GID was bogus anyway, as it was trying to look up via the wrong key, but the fact that it doesn't fail, and instead returns the record with primary key "22", can certainly be surprising.


The original comment is valid, though: this has nothing to do with GIDs specifically; standard /:id/ routes and Model.find() can suffer the same issue. Probably because "22ecb3fd-5e25-462c-ad2b-cafed9435d16".to_i is still 22?
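Exactly that: Ruby's String#to_i parses the longest leading run of digits and silently ignores the rest. A quick sketch (the Invoice model in the comment is hypothetical):

```ruby
# String#to_i takes the leading integer prefix and discards the remainder.
"22ecb3fd-5e25-462c-ad2b-cafed9435d16".to_i # => 22
"abc".to_i                                  # => 0 (no digit prefix at all)

# So with an integer primary key, ActiveRecord's integer type cast turns
#   Invoice.find("22ecb3fd-5e25-462c-ad2b-cafed9435d16")
# into a lookup equivalent to `WHERE id = 22`, which happily succeeds
# if a record with id 22 exists.
```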


Desktop Chrome has just landed native HLS support for the video element, enabled by default, within the last month. (There may be a few issues still to be worked out, and I don't know what the rollout status is, but it will certainly just work by year's end.) Presumably most downstream Chromium derivatives will pick this support up soon.

My understanding is that Chrome for Android has supported it for some time by delegating to Android's native media support, which includes HLS.

Desktop and mobile Safari have had it enabled for a long time, and thus so has Chrome for iOS.

So this should eventually help things.


Chrome has finally landed native HLS playback support, enabled by default, within the past month. See http://crrev.com/c/7047405

I'm not sure what the rollout status actually is at the moment.


> See go/hls-direct-playback for design and launch stats analysis.

Is that an internal Google wiki or something? I can't find whatever they're referring to.


> COM is just 3 predefined calls in the virtual table.

COM can be that simple on the implementation side, at least if your platform's vtable ABI matches COM's perfectly, but it also allows far more complicated implementations where every implemented interface queried allocates a new distinct object, etc.

I.e., even if you know for sure that the object is implemented in C++, your platform's vtable ABI matches COM's perfectly, and you know exactly which interfaces the object implements, you cannot legally use dynamic_cast, as there is no requirement that one class inherit from both interfaces. The conceptual "COM object" could instead be implemented as one class per interface, each likely containing a pointer to some shared data class.

This is also why you need to do the ref counting with respect to each distinct interface pointer: while it is legal on the implementation side to share one ref count for everything, that is in no way required.


Note that VST3 doesn't implement the COM vtable layout, their COM-like FUnknown really is just 3 virtual methods and a bunch of GUIDs. They rely on the platform's C++ ABI not breaking.

You're right that QueryInterface can return a different object, but that doesn't make it significantly more complicated, assuming you're not managing the ref-counts manually.


Yes, ironically the COM in AAX and COM in VST3 have slightly different layouts.


The BBFC's rulings have legal impact, and it can refuse classification, making the film illegal to show or sell in the UK.

Over in the US, getting an MPAA rating is completely voluntary. MPAA rules do not allow it to refuse to rate a motion picture, and even if they did, the consequences would be the same as choosing not to get a rating.

If you don't get a rating in the US, some theatres and retailers may decline to show/sell your film, but you can always do direct sales, and/or set up private showings.


Yeah, proving correctness is not a panacea. If you have C code that has been proven correct with respect to what the C standard mandates (and some specific values of implementation-defined limits), that is all well and good.

But where is the proof that your compiler compiles the code correctly with respect to the C standard and your target instruction set specification? How about the proof of correctness of your C library with respect to both of those and the documented requirements of your kernel? Where is the proof that the kernel handles all programs that meet its documented requirements correctly?

Not to put too fine a point on it, but: where is the proof that your processor actually implements the ISA correctly (either as documented, or as intended, given that typos in ISA documentation are not THAT rare)? This is a very serious question! There have been a number of times that processors failed to implement the ISA spec in very bad and noticeable ways. RDRAND has been found to be badly broken several times now. There was the Intel Skylake/Kaby Lake Hyper-Threading bug that needed microcode fixes. And these are just some of the issues that got publicized well enough that I noticed them; there are probably many others I never even heard about.


I'm confused by your perspective.

The simplest (and arguably best) use of a devcontainer is simply to set up a working development environment (i.e. to have the correct versions of the compiler, linters, formatters, headers, static libraries, etc. installed). Yes, you can do this via non-integrated container builds, but then you usually need to have your editor connect to such a container so the language server can access all of that, plus when doing it manually you need to handle mapping your source code in.

Now, you probably want your main Dockerfile to set up most of the same stuff in its build stage, although normally you want the output stage to contain only the runtime stuff. For interpreted languages the output stage is usually similar to the build stage, but ought to omit linters and other purely development-time tooling.

Want to avoid the overlap between your devcontainer and your main Dockerfile's build stage? Good idea! Just specify a stage in your main Dockerfile where all development-time tooling is installed, but which comes before you copy your code in. Then in your .devcontainer.json file, set the `build.dockerfile` property to point at your Dockerfile, and `build.target` to specify that stage. (If you need some customizations only for the dev container, your Dockerfile can have a tiny, otherwise unused stage that derives from the previous one with just those changes.)
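A minimal sketch of that layout, with made-up stage names, file names, and a Node toolchain purely for illustration:

```dockerfile
# Dockerfile
FROM node:22 AS dev          # all development-time tooling, no source code yet
RUN npm install -g eslint    # linters, formatters, etc.

FROM dev AS build            # code only gets copied in after the dev stage
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM node:22-slim AS runtime # output stage: runtime bits only
COPY --from=build /app/dist /app
CMD ["node", "/app/server.js"]
```

```json
// .devcontainer.json — reuse the Dockerfile, but stop at the dev stage
{
  "build": { "dockerfile": "Dockerfile", "target": "dev" },
  "features": {
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
  }
}
```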

Under this approach, the devcontainer is suitable for basic development tasks (e.g. compiling, linting, running automated tests that don't need external services) and any other non-containerized testing you would otherwise do. For containerized testing, add the `ghcr.io/devcontainers/features/docker-outside-of-docker:1` feature, at which point you can just run `docker compose` from the editor terminal, exactly as you would if you weren't using dev containers at all.

