Hacker News | syphia's comments

"All of humanity's problems stem from man's inability to sit quietly in a room alone." - Blaise Pascal

Translations vary slightly.


It's hard to escape the tick-tock of time slipping away, even if there's no clock in the room.

Computers, TVs, video games, and smartphones have solved that problem. There are now more things to do alone in a room than ever before.

It didn't help.


> Computers, TVs, video games, and smartphones have solved that problem.

No, they exacerbated the problem. The point of the quote is not the being alone, but the doing nothing. Your examples only made that harder, because now there's always something to distract yourself with. The point is that you should be able to be alone with your thoughts and nothing else.


"All of humanity's problems stem from man's *inability* to sit quietly in a room alone." - Blaise Pascal

Smartphones etc. just prove that we can't sit quietly in a room alone.


Pascal died in 1662, and that's a translation; the phrasing isn't exactly timeless or precise. The central point is the ability to be without entertainment, and perhaps to focus. Not just to be without people.

How is that quiet or alone? Stuff you listed is exactly the perfect enemy of what Pascal meant.

LLMs read and write human-code because humans have been reading and writing human-code. The sample size of assembly problems is, in my estimate, too small for LLMs to efficiently read and write it for common use cases.

I liken it to the problem of applying machine learning to hard video games (e.g. Starcraft). When a model is trained to mimic human strategies, it can be extremely effective, but left to its own devices, machine learning will not discover broadly effective strategies on a reasonable timescale.

If you convert "human strategies" to "human theory, programming languages, and design patterns", perhaps the point will be clear.

But: could the ouroboric cycle of LLM use decay the common strategies and design patterns we use into inexplicable blobs of assembly? Can LLMs improve at programming if humans do not advance the theory or invent new languages, patterns, etc?


But Starcraft training is not done by mimicking human strategies - it was pure RL with a reward function shaped around winning, which allowed it to develop non-human and eventually superhuman strategies (such as worker oversaturation).

The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if departure from human coding structure is unexpected, as that would require development of a new coding language).


AlphaStar (2019) was refined through self-play, but it was initially trained on human data. I don't know of any other high-level Starcraft AI, but if you do, let me know.

> Can you sit down with an unfamiliar domain and develop enough genuine curiosity to get good at it, without a syllabus or a credential dangling in front of you?

Do I have faith that I'll be compensated according to my developed ability?

Looking broadly at the recent past, the correct answer seems to be "no".


I've known many people who met through games. They offer something similar, in the sense that you can meet new people and learn about them.

The synchronous nature of multiplayer games leaves most of this expression implicit rather than explicit, though, so for some people it doesn't fit the same need. It's a kind of role-play.

I think most people are, for lack of a better metaphor, blood-sucking vampires for honest, explicit, and carefully-crafted communication. People are pleased when I offer it, but they struggle to offer it back, so I learn to not bother. Most relationships degenerate into expressing things better left unsaid, or being entirely superficial.


A case study of myself as an overeager math student:

I used to focus so much on finding "elegant" proofs of things, especially geometric proofs. I'd construct elaborate diagrams to find an intuitive explanation, sometimes disregarding gaps in logic.

Then I gave up, and now I appreciate the brutal pragmatism of using Euler's formula for anything trigonometry-related. It's not a very elegant method, if you account for the large quantity of rote intermediate work it produces, but it's far more effective and straightforward for dealing with messy trig problems.
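To make that pragmatism concrete, here is a small sketch (angle values chosen arbitrarily for illustration): multiplying complex exponentials adds their angles, so the angle-addition identities fall out of Euler's formula with no diagram at all.

```python
import cmath
import math

# Arbitrary test angles (any values would do).
a, b = 0.7, 1.3

# Euler's formula: e^{it} = cos t + i sin t.
# Since e^{ia} * e^{ib} = e^{i(a+b)}, the real and imaginary parts
# of the product are cos(a+b) and sin(a+b).
z = cmath.exp(1j * a) * cmath.exp(1j * b)

assert math.isclose(z.real, math.cos(a + b))
assert math.isclose(z.imag, math.sin(a + b))

# Expanding the product (cos a + i sin a)(cos b + i sin b) symbolically
# and matching real/imaginary parts gives the familiar identities:
#   cos(a+b) = cos a cos b - sin a sin b
#   sin(a+b) = sin a cos b + cos a sin b
assert math.isclose(z.real, math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))
assert math.isclose(z.imag, math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))
```

The "rote intermediate work" is the algebraic expansion in the comments; it's mechanical, but it never requires a clever construction.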


Agreed. I think the divide is between code-as-thinking and code-as-implementation. Trivial assignments and toy projects and geeking out over implementation details are necessary to learn what code is, and what can be done with it. Otherwise your ideas are too vague to guide AI to an implementation.

Without the clarity that comes from thinking with code, a programmer using AI is the blind leading the blind.

The social aspect of a dialogue is relaxing, but very little improvement is happening. It's like a study group where one (relatively) incompetent student tries to advise another, and then test day comes and they're outperformed by the weirdo that worked alone.


Writing may not be produced for the prestige of its result, but written words still serve an essential purpose for communication. I think that, as with any essential art, e.g. cooking, people will experiment with it to fit their needs.

Writing is also peculiar in that it is easily referenceable with a deep history, so it serves as a way to compare one's own ideas to others. Memes are similar in principle, but tend towards esotericism and ephemerality in a balkanized internet.


I prefer a more direct formulation of what mathematics is, rather than what it is about.

In that case, mathematics is a demonstration of what is apparent, up to but not including what is directly observable.

This separates it from the historical record, which concerns itself with what apparently must have been observed. And from the literal record, since an image of a bird is a direct reproduction of its colors and form.

This separates it from art, which (over-generalizing here) demonstrates what is not apparent. Mathematics is direct; art is indirect.

While science is direct, it operates by a different method. In science, one proposes a hypothesis, compares against observation, and only then determines its worth. Mathematics, on the contrary, is self-contained. The demonstration is the entire point.

3 + 3 = 6 is nothing more than a symbolic demonstration of an apparent principle. And so is the fundamental theorem of calculus, when taken in its relevant context.


I think that humans can find new frontiers to struggle on and develop mental faculties for, even if the prior frontiers are solved.

"Problem-solving" might be dead, but people today seem more skilled in categorizing and comparing things than those in the past (even if they are not particularly good at it yet). Given the quantity and diversity of information and culture that exists, it's necessary. New developments in AI reinforce this with expert-curated data sets.


I have to agree with you. It seems that most measures to make school harder or more rigorous turn it into an aptitude test or boot camp, because so little development can occur in that environment. It breaks down individuals or, at best, filters them.

If that's what schools are supposed to be, so be it, but I'd like to see that outcome explicitly acknowledged (especially by other posters here) instead of implied.

