Hacker News | howenterprisey's comments

> This appears to be an AI-generated draft with severe hallucination problems bordering on WP:HOAX. For one, it calls it the "Margo Largo Accord" -- a name that is not and never was real. The draft also claims the accord was already introduced, and goes into detail about its supposed contents, effects, and reactions to it; the problem is, no actual accord was introduced at the time (or now), so pretty much all of that is made up. The sourcing gives an impression of WP:SIGCOV [significant coverage] but almost all of it appears to be background information about various topics that aren't the accord.

From https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletio..., the discussion that concluded with the article's deletion


Yeah, I've read that and find it a little sus. The article seemed fine to me, even if not perfect.

It's the first time in my internet life (20+ years) that I've gone to find a Wikipedia article to share with a friend (because the US keeps weakening) and found it gone.

Didn't know it was that easy to get a Wikipedia article removed just by hinting it was AI-generated.


In the anime fansubbing community (which this document is likely from), it's very common to hate on VLC for a variety of imagined (and occasionally real but marginal) issues.


Why is that?


At least for the real part, there was the great 10-bit encoding "switch" around 2012, when it seemed like the whole anime encoding scene decided to encode just about everything with "10-bit h264" in order to preserve more detail at the same bitrate. VLC didn't have support for it, and for a long time (5+ years?) it remained without proper support. Every time you tried playing such files they would exhibit corruption at some interval. It was like watching a scrambled cable channel with brief moments of respite.

The kicker is that many, many other players broke. Very few hardware decoders could deal with this format, so it was fairly common to get dropped frames due to software decoding fallback even if your device or player could play it. And, about devices, if you were previously playing h264 anime stuff on your nice pre-smart tv, forget about doing so with the 10-bit stuff.

Years passed and most players could deal with 10-bit encoding, people bought newer devices that could hardware decode it and so on, but afaik VLC remained incompatible a while longer.

Eventually it all became moot because the anime scene switched to h265...


8-bit and 10-bit almost give digital video too much credit. Because of analog backwards compatibility, 8-bit video only uses values 16-235, so it's actually like… 7.8 bit.

It's nowhere near enough codes, especially in darker regions. That's one reason 10-bit is so important, another is that h264 had unnecessary rounding issues and adding bit depth hid them.


Mostly that VLC has had noticeable issues with displaying some kinds of subtitles made with Advanced SubStation (especially ones taking up much of the frame, or that pan/zoom), which MPV-based players handle better.

If you want an MPV-based player GUI on macOS, https://github.com/iina/iina is quite good.


Note that, while I haven't had time to investigate them myself yet, IINA is known to have problems with color spaces (and also uses libmpv, which is quite limited at the moment and does not support mpv's new gpu-next renderer). Nowadays mpv has first-party builds for macOS, which work very well in my opinion, so I'd recommend using those directly.


Two different systems; on the mod side there are two different UIs (one to set each) as well. Yeah it's weird.


I'd guess nobody sat down and said "Here's the target demographic profile for the new UI, so let's rework our messaging, people!" It's just a funny accident of maintenance over time that the result looks like that.


Each subreddit's mod team gets to style the subreddit (within some limitations). There's presumably a separate set of style rules for the main and "old" sites; and the latter is legacy that most mods (and most users) have not even thought about for years. (Probably most current users have joined the site after the switch and never seen the "old" domain. I'm honestly surprised it still works at all.)


Pretty much. And that seems to reflect changing sentiment of the mods over time (e.g., no information, no inspiration, no trying to emulate, only emoting).

But what I thought was funny was, if you didn't know that, it would look like the two "experiences" were tailored separately: OG redditors get the constructive messaging in the spirit of RMS's mission, but modern social media redditors get the modern social media simplified passive consumption.


It is funny.

I suppose it's a consequence of the current mods having been immersed in that modern social media environment for longer.


That's what saying "noticed with Gen Z" means.

Reply to edit: generations are sequential; if you've noticed something with one generation it means that you're not accusing the prior generations of the same thing, otherwise you would've used different wording.


You can just as easily add context to the first example or skip the wrapping in the second.


Especially since the second example only gives you a stringly-typed error.

If you want to add 'proper' error types, wrapping them is just as difficult in Go and Rust (needing to implement `Error` in Go or `std::Error` in Rust). And, while we can argue about macro magic all day, the `thiserror` crate makes said boilerplate a non-issue and allows you to properly propagate strongly-typed errors with context when needed (and if you're not writing library code to be consumed by others, `anyhow` helps a lot too).


fmt.Errorf with %w directive in fact wraps an error. It will return an fmt.wrapError struct which can be inspected using `errors.Is`. So it's not stringly typed anymore.


I am fully aware of how fmt.Errorf works as well as what's inside the `errors` package in the Golang stdlib, as I do work with the language regularly.

In practice, this ends up with several issues (and I'm just as guilty of doing a bunch of them when I'm writing code not intended for public consumption, to be completely fair).

fmt.Errorf is stupid easy to use. There's a lot of Go code out there that just doesn't use anything else, and we really want to make sure we wrap errors to provide 'context' since there are no backtraces in errors (and nobody wants to force consuming code to pay that runtime cost for every error, given there's no standard way to indicate you want it).

errors.New can be used to create very basic errors, but since it gives you a single instance of a struct implementing `error` there's not a lot you can do with it.

The signature of a function only indicates that it returns `error`; we have to rely on the docs to tell users what specific errors they should expect. Now, to be fair, this is an issue for languages that use exceptions too - checked exceptions in Java notwithstanding.

Adding a new error type that should be handled means that consumers need to pay attention to the API docs and/or changelog. The compiler, linters, etc don't do anything to help you.

All of this culminates in an infuriating, inconsistent experience with error handling.


I don't agree. There isn't a standard convention for wrapping errors in Rust, like there is in Go with fmt.Errorf -- largely because ? is so widely-used (precisely because it is so easy to reach for).

The proof is in the pudding, though. In my experience, working across Go codebases in open source and in multiple closed-source organizations, errors are nearly universally wrapped and handled appropriately. The same is not true of Rust, where in my experience ? (and indeed even unwrap) reign supreme.


> There isn't a standard convention for wrapping errors in Rust

I have to say that's the first time I've heard someone say Rust doesn't have enough return types. Idiomatically, possible error conditions would be wrapped in a Result. `foo()?` is fantastic for the cases where you can't do anything about it, like you're trying to deserialize the user's passed-in config file and it's not valid JSON. What are you going to do there that's better than panicking? Or if you're starting up and can't connect to the configured database URL, there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.

For everything else, there're the standard `if foo.is_ok()` or matching on `Ok(value)` idioms, when you want to catch the error and retry, or alert the user, or whatever.

But ? and .unwrap() are wonderful when you know that the thing could possibly fail, and it's out of your hands, so why wrap it in a bunch of boilerplate error handling code that doesn't tell the user much more than a traceback would?


> there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.

`?` (i.e. the try operator) and `.unwrap()` do not do the same thing.


One would still use `?` in rust regardless of adding context, so it would be strange for someone with rust experience to mention it.

As for the example you gave:

    File::create("foo.txt")?;
If one added context, it would be

    File::create("foo.txt").context("failed to create file")?;
This is using eyre or anyhow (common choices for adding free-form context).

If rolling your own error type, then

    File::create("foo.txt").map_err(|e| format!("failed to create file: {e}"))?;
would match the Go code behavior. This would not be preferred though, as using eyre or anyhow or other error context libraries build convenient error context backtraces without needing to format things oneself. Here's what the example I gave above prints if the file is a directory:

    Error: 
       0: failed to create file
       1: Is a directory (os error 21)

    Location:
       src/main.rs:7


My experience aligns with this, although I often find the error being used for non-errors which is somewhat of an overcorrection, i.e. db drivers returning “NoRows” errors when no rows is a perfectly acceptable result of a query.

It’s odd that the .unwrap() hack caused a huge outage at Cloudflare, and my first reaction was “that couldn’t happen in Go haha” but… it definitely could, because you can just ignore returned values.

But for some reason most people don’t. It’s like the syntax conveys its intent clearly: Handle your damn errors.


I think the standard convention if you just want a stringly-typed error like Go is anyhow?

And maybe not quite as standard, but thiserror if you don’t want a stringly-typed error?


yeah but which is faster and easier for a person to look at and understand. Go's intentionally verbose so that more complicated things are easier to understand.


  let mut file = File::create("foo.txt").context("failed to create file")?;
Of all the things I find hard to understand in Rust, this isn't one of them.


Important to note that .context() is something from `anyhow`, not part of the stdlib.


What's the "?" doing? Why doesn't it compile without it? It's there to shortcut using match and handling errors and using unwrap, which makes sense if you know Rust, but the verbosity of go is its strength, not a weakness. My belief is that it makes things easier to reason about outside of the trivial example here.


The original complaint was only about adding context: https://news.ycombinator.com/item?id=46154373

If you reject the concept of a 'return on error-variant else unwrap' operator, that's fine, I guess. But I don't think most people get especially hung up on that.


> What's the "?" doing? Why doesn't it compile without it?

I don't understand this line of thought at all. "You have to learn the language's syntax to understand it!"...and so what? All programming language syntax needs to be learned to be understood. I for one was certainly not born with C-style syntax rattling around in my brain.

To me, a lot of the discussion about learning/using Rust has always sounded like the consternation of some monolingual English speakers when trying to learn other languages, right down to the "what is this hideous sorcery mark that I have to use to express myself correctly" complaints about things like diacritics.


I don't really see it as any more or less verbose.

If I return Result<T, E> from a function in Rust I have to provide an exhaustive match of all the cases, unless I use `.unwrap()` to get the success value (or panic), or use the `?` operator to return the error value (possibly converting it with an implementation of `std::From`).

No more verbose than Go, from the consumer side. Though, a big difference is that match/if/etc are expressions and I can assign results from them, so it would look more like

    let a = match do_thing(&foo) {
      Ok(res) => res,
      Err(e) => return Err(e),
    };
instead of:

    a, err := do_thing(foo)
    if err != nil {
      return err // (or wrap it with fmt.Errorf and continue the madness
                 // of stringly-typed errors, unless you want to write custom
                 // Error types, which is more verbose and less safe than Rust).
    }
I use Go on a regular basis, error handling works, but quite frankly it's one of the weakest parts of the language. Would I say I appreciate the more explicit handling from both it and Rust? Sure, unchecked exceptions and constant stack unwinding to report recoverable errors wasn't a good idea. But you're not going to have me singing Go's praise when others have done it better.

Do not get me started on actually handling errors in Go, either. errors.As() is a terrible API to work around the lack of pattern matching in Go, and the extra local variables you need to declare to use it just add line noise.


I interpret the sense of "literally" here in the opposite way, i.e. without it the sentence may be taken to mean that the books metaphorically stop mid-sentence, but with it, they're saying that it's non-metaphorical and they really do. It would be bizarre wording otherwise.


Since my work is vaguely related to superconductors, I saw this comment and was excited to dig into all the errors in the article, but actually couldn't find any in the parts discussing the superconductors specifically. (I don't know data centers and can't comment on that bit.) 77 K is indeed an appropriate temperature for LN2 coolant for high-temperature superconductors like they're using. What errors did you see?


The very first sentence is confusing. "Power demands of data centers have grown from tens to 200 kilowatts in just a few years". I assume they're talking a single rack here, not "power demand of data centers".


Well the third paragraph implies that "low-voltage" is a factor against having lots of heat and size, when the opposite is true.

Otherwise nothing pops out to me.


Maybe Geoguessr players would be good at identifying them as well?


Am I to interpret https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa... as it making a test that only asserts false and saying that the test exercises the function in question?

Edit: I misunderstood what was being tested; the test is correct.


Hi. I was an arbitrator who voted to suspend that arbitrator. There was no doxxing involved, which anyone can verify. Barely anything else in your comment is correct either. Doxxing is an issue but from where I sit it's much worse from people outside Wikipedia.


This comment is farcical. Supposing you are right and that there was "no doxxing involved", it's still impossible for an outsider, like most of us here, to verify it. Especially if there is such a thing as non-public discourse of any kind.

It is not a transparent organization, and it does not even pay lip service to the effort of transparency. It is a large enough organization that it is an absurd claim, on its face, that there are not cliques and factions who would do such things if it were at all possible.

You investigated yourselves and found no evidence of wrongdoing.


When I said anyone can verify it, I meant it; go make an account on wikipediocracy, go to the "Wikimedian Folks Too Embarrassing for Public Viewing" forums, and go through the posts by that user.

Quite to the contrary, it's a very transparent organization because edit histories are public. It would be trivial to link to any instances of doxxing on the project, unless they don't exist, which they don't. Wikipediocracy doesn't count when talking about Wikipedia doxxing.


> It would be trivial to link to any instances of doxxing on the project, unless they don't exist

Please don't pretend as if people having a discussion at this level are unaware of the facilities available for permanent deletion on Wikipedia (the so-called "oversight").

> Wikipediocracy doesn't count when talking about Wikipedia doxxing.

"Wikipedia doxxing" clearly means doxxing performed by and/or against Wikipedians, not necessarily on Wikipedia's actual domains. Especially if you're using the term to refer to GP, which states:

> The article criticizes doxxing but well-known Wikipedia editors doxx each other all the time...

So unless you can demonstrate that these Wikipedia editors don't post on Wikipediocracy, then yes it obviously does count. "Wikipedia editors doxxing each other" doesn't stop being "Wikipedia editors doxxing each other" just because of where it's posted.

> When I said anyone can verify it, I meant it; go make an account on wikipediocracy, go to the "Wikimedian Folks Too Embarrassing for Public Viewing" forums, and go through the posts by that user.

It looks to me like the top-level commenter already did exactly this, and found the exact opposite of what you imply we'd find.


My thesis is that Wikipedianon's comment implies Wikipedia editors (specifically, "well-known" editors and "admins") doxx each other all the time, but that's hilariously wrong. Doxxing mostly comes from assholes outside the community, such as those who post on Wikipediocracy.

Yes, on-project doxxing gets OS'd but it also results in discussions and bans which can be reviewed. And from those you can easily determine that it's truly rare.

When I said to go to the forums, that was unfortunately unclear wording; I meant it's trivial to verify that Beeblebrox didn't doxx anyone in his postings.


This is like claiming that you didn't key someone's car, because the scratches weren't signed with your signature.

No one doxxing others in that particular clique is going to do it from anything other than a burner account.


"No one doxxing others in that particular clique is going to do it from anything other than a burner account."

This is incorrect.

Many do it with accounts linked to their real onwiki profiles. jps is an example, and I provided a link to unambiguous doxxing:

https://wikipediocracy.com/forum/viewtopic.php?f=38&t=14172

They've been doing it since 2016, when they started an "alt-right identification thread":

https://wikipediocracy.com/forum/viewtopic.php?f=38&t=8031

Others use accounts linked to their onwiki personas to ask for doxx. E.g., AndyTheGrump is a well-known user who posts in the "alt-right identification thread" about someone they dislike and gets a quick response. Here's AndyTheGrump asking for doxx on a user named "BlueGraf":

https://wikipediocracy.com/forum/viewtopic.php?f=38&t=8031&p...

Quickly followed up with that individual's full name and employment.

And many editors/admins participate in those doxxing threads to gawk or have fun under their real usernames.


Okay, but now that's an unfalsifiable statement. What makes you think the burners are tied to the well-known accounts?


Says the guy who's telling us "check for ourselves, no one doxxed anyone!" as if it means anything.


Also, the poster "Wikipedianon" makes Tu Quoque fallacies. The fact that some Wikipedia editors have engaged in doxxing of others doesn't make it less of a problem for the government to do so.

Unsurprisingly, "Wikipedianon" is a hit-and-run profile created just for this post, AFAICT.


It's a hit-and-run because I don't want to get doxxed.

I don't want a world in which Trump regulates Wikipedia, but pretending it's sunshine and rainbows is a joke at this point.

And the person you're replying to is strawmanning. I never said Beeblebrox doxxed anyone, just that they leaked secret information on a doxxing forum in violation of Wikipolicy and possibly privacy law.


Wikipediocracy is hardly a doxxing forum…


Beeblebrox leaked internal mailing list messages to a forum known for doxxing in violation of the NDA they signed.

I know that Beeblebrox did not doxx anyone, and I said that in my comment. My point is that leaking information to a doxxing forum sends the wrong message and is dangerous.

Maybe you should create an account and look at the "Wikimedian Folks Too Embarrassing for Public Viewing" forum and get back to me. Or do something about it before the Trump administration uses this as an excuse to censor enwiki. Either way here are some excerpts if you don't want to.

From the first page, here's an active editor (iii, known as jps or ජපස) doxxing someone about UFOs. I took out the names to be polite but it's all there:

https://wikipediocracy.com/forum/viewtopic.php?f=38&t=14172

"Is [username 1] (T-C-L) an alt account of [username 2] (T-C-L)?

For those who are not aware, [username2] is the name of an account used by one [redacted] on various platforms up until about 2024 when he more or less abandoned them. That account also was involved in the ongoing game of accusing [redacted] (T-H-L) of being [redacted] (T-C-L) which is about as fairly ludicrous an attempt at matching a Wikipedia username as I've ever seen.

Anyway, I feel like maybe he thought "If [__] can do it, so can I." And maybe that's the origin of the VPP.

Oh, this is about UFOs. Yeah, I'm in the shit. Maybe someone can link to some other stuff for you to read, but I just want to drop this here because I have nowhere else I get to speculate on these matters and everyone loves a good conspiracy theory data dump from time to time "

Here's the thread "Who is Wikipedia editor i.am.qwerty"

https://wikipediocracy.com/forum/viewtopic.php?f=38&t=13821

"I.am.a.qwerty (T-C-L) gathered up a bunch of those articles and some earlier material to create Wikipedia and antisemitism..."

It goes on:

"But who is I.am.a.qwerty? Let's suppose, just for the sake of argument, that I.a.am.a.qwerty is a PhD student named [real name]. Specifically, this [real name]:"

    "[real name] is a PhD candidate [major] at [university name]. He received his BA (Hons) in [major] from [university]. Previously [real name] received his rabbinical ordination from the [other school] in [location] in [year]. [real name] is also the [job title] at [organization]."
I can't imagine any other community tolerating its members going on KiwiFarms and encouraging doxxing of other community members, so long as they didn't technically engage in it. But Wikipedia does.


That’s hardly doxxing. Asking if two publicly visible usernames might be related is hardly alarming.


To be absolutely, 100% clear: your position is that someone who writes on the Internet, a statement of the form:

> Let's suppose, just for the sake of argument, that [username] is a PhD student named [real name]. Specifically, this [real name]:"

> "[real name] is a PhD candidate [major] at [university name]. He received his BA (Hons) in [major] from [university]. Previously [real name] received his rabbinical ordination from the [other school] in [location] in [year]. [real name] is also the [job title] at [organization]."

is not "doxxing"?

Let's suppose, just for the sake of argument, that I find that patently absurd.


What about the part where they revealed the full name of the person allegedly behind the two usernames?

