wingmanjd's comments | Hacker News

I was working on a rebuild of my gaming rig when I noticed the NexusMods repository had been marked read-only by the author.

Their Trello board [1] for their roadmap also seems to be gone.

[1] https://trello.com/b/gPzMuIr3/nexus-mods-app-roadmap


Datacenter revenue for nVidia last quarter was something like $62B, vs. less than $4B for gaming. While not quite a rounding error, it feels like the gaming market is just too small for them to put more resources toward it for us consumer folks.

https://nvidianews.nvidia.com/news/nvidia-announces-financia...


It is insanely stupid that '4B', with a B, is 'too small' an operating space.

The absolute value is irrelevant - it's the opportunity cost that determines this.

It doesn't matter if the consumer market is 4T, if the AI market is 60T!


One strategic reason is to deny competitors oxygen. Otherwise someone will scoop up the gaming market and put the proceeds into developing technology to compete with NVIDIA in the more lucrative AI space.

I wonder who is going to fill the gaming market if AI-focused companies simply outbid everyone else for manufacturing capacity? All available (and not-yet-available) capacity is pivoting to the AI market.

“I wouldn’t pick up $20 if there was $100 on the ground!”

Most people would pick up both.

These economic proclamations don't seem to make sense when applied to different contexts, which suggests what you're saying might be folk wisdom rather than sound theory (and greatly oversimplifying the problem).

You’re also discounting ecosystem effects — gaming GPUs driving demand for datacenter and workstation GPUs as hobbyist experimentation turns into industrial usage. We don’t know what would happen if nVidia stopped suppressing the GPU market, because it’s never been tried — nVidia has always viciously undercut their own grassroots.


> “I wouldn’t pick up $20 if there was $100 on the ground!” Most people would pick up both.

No, it’s more like there’s a massive pile of both $20s and $100s on the ground. You wouldn’t waste time running between the two; you’d focus on the $100s.


If I have a garbage bag, I'm shoving everything in there.

> Most people would pick up both.

If you're within reach of both, then it's not a choice, and there's no opportunity cost in picking just one: you'd be taking both.

If you're within reach of just one, and picking it up means someone else might pick up the other, then which would you choose? The other is then, by definition, the opportunity cost.


nVidia is sitting on a huge pile of cash, i.e., they're not constrained by resources; hence the framing as a choice.

Cash cannot buy more fabs. ASML machines are the constraint, and TSMC's capacity is another.

Not to mention that Nvidia's cash pile isn't magic: they should not overpay for capacity; in that case they're better off returning cash to investors.


The problem is that Nvidia cannot make infinite amounts of chips, so they actually can't pick up both.

You’re standing on a traffic island in the middle of a busy road. The lights change allowing you to cross. On one side there is a $20 note, on the other there is a $100 note. Which side do you go to first?

If you were carrying heavy shopping in both arms would you stop to pick up a quarter?

A dollar?


I wouldn't pick up either, even with empty hands. No idea where they've been. Maybe a fiver; a twenty, sure. At that point I'd put down my bags and grab both.

But so many gamers want to buy GPUs and can’t, because they are sold out, or won’t, because prices are so inflated. Wouldn’t the gaming market be larger if the products were actually available and at their actual MSRP?

Nvidia can't sell 10x the number of GPUs they currently sell. As much as the supply issues are discussed, it would likely take them a long time just to double the market. They could try to become the vendor of choice for the PS6/next Xbox, but that's a big strategy shift for, again, maybe double the market, not 10x the market.

On the other hand, right now the market doesn't seem to think that the >60bn of datacenter revenue is going away, or even that it will slow down _growing_, any time soon. Just adding 10% more revenue there is worth more than doubling their GPU business, which they likely can't do anyway.
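
A back-of-the-envelope comparison in Python, using the rough figures from the quarter discussed above:

    # Rough comparison: +10% datacenter growth vs. doubling the gaming business.
    # Figures are approximate, taken from the quarter discussed above.
    datacenter_rev = 62.0  # $bn, datacenter revenue
    gaming_rev = 4.0       # $bn, gaming revenue

    print(f"+10% datacenter: +${datacenter_rev * 0.10:.1f}bn")  # +$6.2bn
    print(f"doubling gaming: +${gaming_rev:.1f}bn")             # +$4.0bn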


I am not saying it would be anywhere near equal, just that it would be "bigger" than 4B if it wasn't so constrained.

>On the other hand right now the market doesn't seem to think that the >60bn of datacenter revenue is going away or even going to slow down _growing_ any time soon.

I wonder why this then?

https://news.ycombinator.com/item?id=47256781


No. Enterprise customers generate vastly more revenue and profit than consumers can.

That is not substantiated. The AI bubble is wealth-fueled hype, like claiming a single drop of blood can be used to validate 100 different diagnostic tests; in reality, parts-per-million sensitivity fails at that, as does the reusable medium. Wealth latches onto idiocy.

The gaming and CAD markets are real expectations that latch onto reality. Grow the education system and you grow both. So does matrix math, such as hashing.

AI has reached the point where its problems are software, not hardware. And the divergence of AI hardware does not equate to CAD and gaming math.


How many of the last ten years have had some kind of "temporary" GPU shortage? It was crypto, now it's LLMs, who knows what's next?

The only winning strategy for these guys is to exploit the market for all it's worth during shortages and carefully control production to manage the inevitable gluts.


> AI has reached a state of software issue, not hardware

Citation very much needed.

At the very least, OpenAI seems to believe more and larger datacenters is the path to better models... and they've been right about that every time so far.


Moreover, all the frontier labs and hyperscalers are capacity constrained, and will be for the foreseeable future.

Their story (valuation) hinges on it - therefore that’s their investment thesis when raising money.

>OpenAI seems to believe more and larger datacenters is the path to better models...

Does that mean they produce better slop, or more slop faster?


Better slop. The effect that these systems get better as you scale up [0] is real, you know.

[0]: https://arxiv.org/pdf/2001.08361
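
For a rough sense of what that scaling curve looks like, here's a minimal sketch using the power-law fit for model size reported in the paper (constants are approximate):

    # Sketch of the Kaplan et al. fit for model size: L(N) = (N_c / N)^alpha_N.
    # Constants are approximate values reported in the paper.
    N_C = 8.8e13     # characteristic parameter count
    ALPHA_N = 0.076  # power-law exponent for model size

    def predicted_loss(n_params: float) -> float:
        """Predicted test loss (nats/token) for a model with n_params parameters."""
        return (N_C / n_params) ** ALPHA_N

    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"{n:.0e} params -> loss ~{predicted_loss(n):.2f}")

Diminishing returns, but monotonic: each 10x in parameters still buys a measurable drop in predicted loss.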


Slop is still slop. There is no legitimate evidence that these systems get any better just by throwing more hardware at them. Everyone on that paper is involved with OpenAI, so its findings are very suspect.

Great, when the AI bubble bursts, they can repackage their AI chips into consumer cards! /s

I am afraid the GPU chips will often be useless (too power hungry, running too hot, and needing accessories that are too expensive), but it might be possible to harvest the memory chips and put them on useful GPU cards.

So we send an AI agent to the French cafe instead of us?

https://download.samba.org/pub/tridge/misc/french_cafe.txt


Shouldn't AI be able to take this one step further and just analyze the binary (of the samba server in this case) and create all kinds of interface specs from it?


Make the LLM operate the hypervisor VM so it can observe a binary as it executes to write specs for it?


I'm working on this. It's wild.


Doesn't it have those characters via extended ASCII? I seem to recall making boxes with characters back in my BASIC class.


"Extended ASCII" is just a sloppy term for a bunch of other encodings that are not, in fact, ASCII.

If your BASIC class used (or emulated) a C64 or compatible, you were using https://en.wikipedia.org/wiki/PETSCII and if it used MS-DOS, you were using https://en.wikipedia.org/wiki/Codepage_437


We used QBasic, but I don't recall what version (maybe 4.5?). Codepage 437 looks similar to what I recall seeing, though.


As brazzy said, there's no such thing as extended ASCII. There's just a huge number of ASCII-compatible eight-bit encodings. The original IBM (and DOS) character set, hardwired into ROM, is the one you're thinking of, and went by various names such as "Personal Computer, MS-DOS United States, MS-DOS Latin US, OEM United States, DOS Extended ASCII (United States), PC-ASCII" [1].

DOS 3.3, in 1987, was the first version to support localized character sets, via a system of "code pages". You'd select an encoding/"character set" that suited your language in AUTOEXEC.BAT – or just use the default 437 if you were a US user and never had to worry about these things. For me, the most relevant code page was 850, aka "OEM Multilingual Latin 1" (not at all the same as ISO/IEC 8859-1, which is also known as "Latin 1").
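
To see concretely why the code page matters, here's a small Python sketch: the same byte decodes to a different character under each of these ASCII-compatible 8-bit encodings (byte values taken from the standard tables):

    # The same high byte means different things under different "extended
    # ASCII" encodings; everything below 0x80 is plain ASCII in all three.
    raw = bytes([0xE9])
    for enc in ("cp437", "cp850", "latin-1"):
        print(enc, "->", raw.decode(enc))
    # cp437   -> Θ  (OEM United States)
    # cp850   -> Ú  (OEM Multilingual Latin 1)
    # latin-1 -> é  (ISO/IEC 8859-1)

    # And the classic DOS box-drawing bytes:
    print(bytes([0xC9, 0xCD, 0xBB]).decode("cp437"))  # ╔═╗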

Why the apparently arbitrary numbers, I'm not sure, but Claude and ChatGPT both claim the codes were simply drawn from a more general-purpose sequence of product numbers used at IBM at the time.

This application, like other similar ones, uses Unicode box drawing characters that now all reside comfortably out of the eight-bit range.

[1] https://www.aivosto.com/articles/charsets-codepages-dos.html


> Why the apparently arbitrary numbers, I'm not sure, but Claude and ChatGPT both claim the codes were simply drawn from a more general-purpose sequence of product numbers used at IBM at the time.

Claude and ChatGPT are (probably) wrong. Wikipedia has three citations for the following statement:

> Originally, the code page numbers referred to the page numbers in the IBM standard character set manual

The reason they're so high is that code pages were assigned to EBCDIC first.


Yeah, I later found that quote on Wikipedia too, though I don't think the cited source is super reliable either; it may just be folklore ("Oh, 'code page' refers to actual dead-tree pages"). All the IBM documentation I could find showed big gaps in the sequence of code pages.

But I just now found the list at [1]; I don't know why I didn't notice it before. It's certainly comprehensive! There must have been some real detective work involved in compiling that list. The gaps are much smaller, though they still exist, e.g. from 40 to 251. The 300s are rather sparse, there are only a few 4xx codes, and then there's a jump from 500 to 8xx (with some 7xx assigned later, I think).

In any case, I agree that the LLMs seem to have hallucinated the "more general sequence" part. The code page IDs, or more formally CCSIDs, always were a specific set of 16-bit ID numbers. Why exactly the various gaps exist is probably lost in history by now, if there ever even were any particular reasons.

[1] https://en.wikipedia.org/wiki/Code_page


There is a single row of apps that can be favorited on the bottom row of the screen for quick access. There is also a search bar that searches across apps, some direct app actions (like Firefox: New Tab), contacts, and some settings. The search bar might be able to pass the query to the default browser, but I haven't confirmed that.

There is no additional "desktop" that can be swiped to, left or right. Widgets can be added, if desired.


Thankful for all the answers. I'll give it a try for a while, let it 'learn' me, and see if I can lean into the workflow.

Optimistically speaking, the only drawback would then be I only get one screen/desktop for adding widgets – which, I guess, might be a reasonable trade-off.


KISS is probably my favorite "app" on Android. I don't need to remember where an icon is located, just a few taps (sometimes even just one) of its name in the search bar and it'll show up immediately. It's amazingly fast and does just what I need it to.


Why would I need an app to do that, though, when it's at least one click fewer to just open the app drawer and start typing into its search input? It seems like the same results, but without the extra app and the extra click to open it. And the app drawer search remembers my results (I don't know if this app does), which reduces the number of letters I need to type in future searches.


On Android, you can download replacement launchers in the form of an App. That app replaces your default launcher.

In this case, KISS is the app drawer, because it is the default UI of your phone once installed and configured. It's a really good app drawer at that!


This isn't a traditional app; it's a launcher.


I had not heard of Parakeet until earlier today with Handy [1].

I've previously had good luck with FUTO's keyboard and its companion voice input app [2] on my Android, both of which are local-only after downloading the model. I'll have to try this one out and compare them.

[1] https://handy.computer/

[2] https://voiceinput.futo.org/


Interesting. Do you know which model they use? Yeah, I'd be curious to hear your experience comparing them.


From their repo, it looks like OpenAI Whisper?

Language support

FUTO Voice Input is currently based on the OpenAI Whisper model, and could theoretically support all of the languages that OpenAI Whisper supports. However, in practice, the smaller models tend to not perform too good with languages that had fewer training hours. To avoid presenting something worse than nothing, only languages with more than 1,000 training hours are included as options in the UI:

<List of supported languages>

Language support and accuracy may expand in the future with better optimization and fine-tuned models. Feedback is welcomed about language-related issues or general language accuracy.
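
For reference, here's a minimal sketch of the upstream openai-whisper Python package (FUTO runs the model on-device with its own runtime, so this is just the reference API for comparison; the audio path is a placeholder):

    # Minimal transcription with OpenAI's reference Whisper package
    # (pip install openai-whisper). "recording.wav" is a placeholder path.
    import whisper

    model = whisper.load_model("small")         # smaller models trade accuracy for speed
    result = model.transcribe("recording.wav")  # dict with text, segments, language
    print(result["language"], "->", result["text"])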


At my $DAYJOB, we have a bunch of in-house Saltstack states for applying the CIS benchmarks for Ubuntu, Debian, and CentOS. I never looked into it, but I always wondered if I'd be allowed to publish them publicly.
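
For a flavor of what those states enforce, here's a hypothetical Python sketch of one common CIS control (ensure SSH root login is disabled). This is illustrative only, not one of our actual Salt states:

    # Hypothetical check for a common CIS control: PermitRootLogin must be "no".
    # Illustrative only; real benchmarks cover hundreds of controls like this.
    from pathlib import Path

    def root_login_disabled(config: str = "/etc/ssh/sshd_config") -> bool:
        for line in Path(config).read_text().splitlines():
            parts = line.strip().split()
            if parts and parts[0].lower() == "permitrootlogin":
                return len(parts) > 1 and parts[1].lower() == "no"
        # Directive absent: OpenSSH defaults to "prohibit-password", which
        # most benchmarks still flag, so treat as non-compliant.
        return False

    print("compliant" if root_login_disabled() else "non-compliant")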


Well there is one available for oscap at https://github.com/ComplianceAsCode/content


You can also use light handkerchiefs, which fall to the ground more slowly than balls, pins, or flaming chainsaws.


I forgot about the handkerchief trick to slow things down.


I wrote this migration tool [1] years back when we moved from a locally hosted Bitbucket instance to Gitlab.com. I haven't tested it recently, but I'm happy to take merge requests to address gaps.

[1] https://gitlab.com/jeremygonyea/jira-to-gitlab-migration-too...

