robot_no_419's comments | Hacker News

It's not the opposite either. Chess is a game of skill, poker is also a game of skill.

The difference is that chess is deterministic and poker is not. Chess skill means being able to convert a winning or drawing position by force. Poker skill means optimizing your chances of winning.

Not only is it possible to beat weaker opponents in chess by playing the best moves (contrary to your assertion), it happens all the time over the board. Weaker players are more likely to play moves that transform the game from a win/draw into a draw/loss for them. There isn't a single human or engine alive that can consistently secure a draw against every opponent.

And it's possible for a poker player to mindlessly play the mathematically optimal move and still lose. Poker IS a game of chance, so they could simply get unlucky.


If we define optimal play as never turning a win into a draw or a draw into a loss, then I think it's possible to play a passive-but-optimal style, with few traps for weak players to fall into. Also, if a position is "lost" but it's very non-trivial to see why, a stupid-but-optimal strategy is to resign or move pieces randomly. You need psychology to win from a lost position.


I think the author needs to come up with a different title because it's highly misleading.

The author's premises are also highly exaggerated. For starters, the game of chess has not stopped evolving, because our chess engines continue to get stronger and stronger. The strongest engines of today can crush the older engines from a few years ago. This goes to show that even the elite machines haven't completely figured out chess; the smarter engines are going to continue to push the chess meta forward. In that sense, chess creativity and intuition haven't stalled. We've just reached the point where only machines can improve on our collective chess theory.

Second, it's not like GMs are playing bad or losing moves to bluff the opponent. In most opening positions, there are at least 3 or 4 moves that could be played while still maintaining a winning or drawing position. When GMs pick "suboptimal lines", they're picking maybe the 3rd or 4th best option, which is still objectively a good and viable move from an engine's POV. Nobody is playing bad or losing moves on purpose; that simply does not work in chess.


Math is presented in a way that's supposed to be organized, compact, and categorical. If we taught math the same way it was discovered and proven, it would be so slow and inefficient that we would still be covering linear algebra in grad school.

As an analogy: the 1,000th person to climb Mt. Everest takes a well-defined path that has already been mapped out as the most efficient route to the top. If every single person had to go through the treachery of finding the dead ends, cliffs, crevices, and death traps that the first few climbers endured, it would be a journey only a few could accomplish.

Most people (computer scientists, engineers, chemists, physicists) using math only need to reach the top and see the view from the peak. The few climbers who are really dedicated to climbing (i.e., the math researchers who reach the frontier of math) will naturally learn about the rest of the jagged, unmapped landscape as they climb harder, unconquered mountains.


I think that's one useful way to view it. Math is an infinite "mountain", and the higher you get, the rougher the summaries.

One thing I should mention: I have been reading The Great Formal Machinery Works: Theories of Deduction and Computation at the Origins of the Digital Age by Jan von Plato. One thing I notice is that a lot of mathematicians' activity is not proving more and more complex theorems but rather producing a framework that takes a series of complex results and shows them to be much simpler within that framework.

And that's just to say, the organization of math isn't just a matter of simplification for the layman, it's part of the progress of math itself.


Good points.

This is essentially compression of knowledge at play. To make the next discovery it makes sense to get compressed information about the previous discoveries.

Historians are often more interested in the various routes attempted to achieve scientific discovery -- which failed, which succeeded etc. Scientists are interested in climbing to the next peak (of knowledge) with just sufficient knowledge of how we came to the current location.

It always helps to know a bit of history. You might encounter problems while climbing to the next peak, and knowing a bit of history might give you additional tools to solve them.

However, you must be judicious. Learn too much about the past and you won't have much time to create the future; learn too little and you may not be well equipped to deal with upcoming challenges. It is a balance.


The current pedagogy and curriculum is nowhere close to the “most efficient path” (nor is it the most intuitive, best organized, easiest to extend, ...). It’s just an arbitrary history-dependent path people happened to come up with, mostly centuries ago, and “refactoring” any of it is almost impossible.

It’s more like our current public transit system: it gets some people where they’re going, somewhat on time, but it’s generally pretty crummy and full of historical inequity.


I never said it was the most intuitive, just the most streamlined and most efficient. I'm having a hard time understanding why you think it's not the best organized. Most math courses are taught with a pretty straightforward approach: start with the axioms and definitions, prove the easy and auxiliary theorems that follow readily from the axioms, then prove the fundamental theorems that make the subject useful. In other words, the shortest path from the axioms to the important theorems. I don't see a way to make it more organized or compact, but I'm open to hearing what you think is a more condensed or organized way to teach math.

And no, this is rarely the most intuitive or contextual way to learn math. Another analogy: a library doesn't sort its books by which ones were the best reads or the most influential, but by topic and author. Similarly, math curricula are organized by a hierarchy of which theorems prove the next theorem, with no explanation of which ones are important. Organization doesn't always provide intuition.


What you describe is probably the least efficient way to learn math - or anything else for that matter - because you’re trying to learn things you can’t, at that point and for a while afterwards, reason about. They don’t have connection to anything else. Compare that with the opposite - learning stuff you can already mentally associate with parts of known reality.


> learning stuff you can already mentally associate with parts of known reality.

This is not always feasible or effective. Sometimes it's just better to start by doing some simple reasoning about things in isolation, and build the proper connection and context afterwards.


> It’s just an arbitrary history-dependent path people happened to come up with, mostly centuries ago, and “refactoring” any of it is almost impossible.

This is absolutely not true. If anything, math education has a tendency to keep losing intuition over time as it's refactored for modern approaches and notation.


There are few (if any) important differences between algebra textbooks from 400 years ago, trigonometry textbooks from 300 years ago, and calculus textbooks from 200 years ago vs. their current counterparts. The way we teach vector calculus is more than a century old. Introductory statistics courses still often haven’t caught up with the existence of computers. Undergraduate level math textbooks from 60–90 years ago are still among the most popular course sources across most subjects, including abstract algebra, analysis, etc. Hot “new” material comes from the 19th–early 20th century. The curriculum (at least say 8th grade through undergrad level) is calcified and dead, like a bleached coral.

Once you get to math grad school you can find more material that uses approaches and notations that are only about 50 years old.

The most significant “recent” change to be found from the 20th century is the “Bourbaki-zation” of mathematics, especially sources intended for expert readers: cutting out pictures, intuition, and leading examples in favor of an extremely spare and formal style that alienates many newcomers and chases them out of the field. And I guess at the high school level, there’s the domination of pocket calculators (displacing slide rules) which came about in the 1970s–80s.

There is massive, massive room for improvement across the board.

If you read works by e.g. Euler, other than being in Latin they still seem pretty much modern (we did tighten up some of the details in the century or two afterward), because much less has changed in the way we approach those subjects than you would expect. By contrast, if you read Newton or his contemporaries/predecessors, the style is often completely different and almost unrecognizable/illegible to modern audiences, building on the millennia old tradition of The Elements and Conics.

For another serious transformation, look to the way computing is taught, which has changed quite dramatically in the past 50 years. Nothing remotely like that is happening in up-through-undergraduate mathematics.


Can you recommend a book that you think presents, say, calculus significantly better than the books commonly in use?


http://www.science.smith.edu/~callahan/intromine.html is one idea (and read that page/book for a critique of the kind of typical ~200 year old textbook/course we still use today), but this could be a lot better with a bigger budget and more support.

Just look what you can do with high-production-value video animations: https://www.3blue1brown.com/lessons/essence-of-calculus


This is a fairly well written book. But I don't see anything qualitatively better about it than the best standard math textbooks. What am I missing?


It largely dropped the "memorize this set of symbol-pushing rules then apply them to a long list of exercises" version of differential/integral calculus found in typical introductory textbooks, in favor of a "make up a model for a situation, then program a simulation into a computer and see what happens" approach. That makes for a radically different experience for students.
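To make that concrete, here's a minimal sketch (in Python) of the kind of "model it, then simulate it" exercise I mean; the model and the parameter values are just an illustration I picked, not anything from the book:

    # dP/dt = r*P*(1 - P/K): logistic growth, with made-up parameters
    r, K = 0.8, 1000.0      # growth rate and carrying capacity
    P, dt = 10.0, 0.1       # initial population and time step
    for _ in range(200):    # simulate 20 time units
        P += dt * r * P * (1 - P / K)   # follow the local slope for one small step
    print(round(P))         # creeps up toward K; compare with the exact closed-form solution

The point of the exercise is that students can poke at r, K, and dt and watch the qualitative behavior before (or instead of) ever touching the closed-form solution.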

It’s hard to simply say this is “better”: it depends what skills and content you are trying to teach. The more computing-heavy version arguably does a lot better job quickly preparing students to engage with scientific research literature (because differential equations are a fundamental part of the language of science). But it might make it harder for students to e.g. dive into a traditional electrodynamics course intended for future physicists, full of gnarly integrals to solve.

Most of the people proposing even more significant departures (in content or style) aren’t writing introductory undergrad textbooks.


Using simple computer simulations to teach introductory math courses is definitely a change that has been slowly happening over the past couple of decades.

However, different approaches don't just teach different "skills and content" as you say, but entire paradigms of thinking. There is mathematical thinking and there is computational thinking (and other types as well), and any course helps you step up the ladders of these paradigms by different amounts.

My experience teaching undergrad math/physics/cs for several years is that computational thinking is cheap in time and effort in the short term, and this creates a fixed point in how students think. If you give them the concept of, say, differential equations, and teach them some computational methods and some mathematical methods to solve those equations, they will always lean towards just using the computational methods. This seems all fine and dandy, except that when you move on to more advanced mathematical abstractions and the students have not mastered the mathematical way of thinking at the previous step, they are lost. They simply don't have the mathematical capacity to grasp the higher abstractions. And no amount of 3B1B fixes this lack of long-term investment in an important thinking paradigm.


"Elementary Calculus: An Infinitesimal Approach", by Jerome Keisler. Learning calculus is made harder than necessary by the legacy of clumsy epsilon and delta formalism. This formalism is not the intuitive approach Newton and Leibniz used to develop Calculus, based on infinitesimals, that was shunned later because it took time until Abraham Robinson made it rigorous in the 1960s. The author made the entire book available for free online: https://people.math.wisc.edu/~keisler/calc.html See also: https://en.wikipedia.org/wiki/Nonstandard_analysis


Open any book on differential geometry and compare the treatment of differentiation with the needlessly index heavy treatment in any undergraduate calculus textbook.

The point is that we treat the differential of a real valued function as a function/vector/matrix for historical reasons. The simpler perspective that always works is that the differential of a function is the best linear approximation of the function at a given point. But for historical reasons most math textbooks restrict themselves to "first order functions" and avoid, e.g., functions returning functions.
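Here's a tiny numerical illustration of that "best linear approximation" view (Python/NumPy; the function and the point are arbitrary examples of mine): the differential at a point is itself a function, a linear map acting on displacement vectors, with f(p + h) ≈ f(p) + Df(p)(h) for small h.

    import numpy as np

    def f(v):                       # an arbitrary example map R^2 -> R^2
        x, y = v
        return np.array([x * y, np.sin(x) + y ** 2])

    def Df(p, eps=1e-6):            # the differential at p, returned as a *function*
        # columns of the Jacobian via finite differences (good enough for illustration)
        J = np.column_stack([(f(p + eps * e) - f(p)) / eps for e in np.eye(2)])
        return lambda h: J @ h      # a function returning a function: Df(p) is a linear map

    p = np.array([1.0, 2.0])
    h = np.array([1e-3, -2e-3])
    print(f(p + h) - (f(p) + Df(p)(h)))   # residual is O(|h|^2), far smaller than |h|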

This also leads to ridiculous notational problems when dealing with higher order functions, like integration and all kinds of integral transforms.


> full of historical inequity.

Granted the academic profession has historical inequality but what about the math itself displays that?


The biggest problem is that it is very unfriendly to uninitiated newcomers and makes insufficient effort to draw people in. You end up with a culture that is unfortunately insular and has trouble engaging with even engineers and scientists, much less the general public. It’s also not very friendly to people who approach problems in different ways: symbol pushing has been elevated and anyone who has difficulty with symbol pushing (for whatever reason) ends up at least partly excluded.

Students who have a lot of practice/experience by the time they get to be teenagers (often via extra-curricular help and support) are much better prepared than those without that practice. That is of course not a problem per se; you see the same in any field, and it's great if kids want to learn ahead of their peers. But then the content, curricular design, and pedagogy of mathematics courses leave students with the impression that those differences in preparation are due to innate differences in aptitude ("I suck at math"; "she's just a math person"; ...), toss less well prepared students into the deep end to sink without enough support, and ultimately chase away a huge number of people who might otherwise find the subject beautiful and interesting, and could meaningfully contribute.


Well, we can't know that until we find (or fail to find) a more effective way of teaching (or a way to do math without "symbol pushing", for that matter).

Until then it will not be wise to break what works (even for a minority of students).


Current incentives are set up to make even the most trivial attempts to run against the mainstream definitions and notations extremely difficult.


I don’t think it’s fair to say they are set up to do that. They weren’t conceived with that purpose. It’s just a fact of life that once we’ve invested a huge amount of effort in one set of conventions it’s very costly to change those conventions.


I don’t mean that some secret committee got together to “set up” all of the social incentives of the entire school system, university system, textbook industry, scientific publication system, engineering fields, etc.

What I mean is that there are incentives for the people involved in those systems which are extremely difficult to reform, and as long as the current incentives prevail it is all but impossible for anyone to refactor things like basic mathematical notions and notations.

Switching and retraining costs are high, gaps in inter-operability are expensive, and there is almost nobody who will achieve any career advancement through promoting changes to the high school and early undergraduate curriculum.

Mathematicians are generally most interested in pushing on the shiny boundaries of the field rather than trying to clean up the centuries-old material for novices. Teachers have their hands full enough with their students to do much new research in pedagogy. Practitioners in industry have their own problems to solve.


My mind goes to various, unfortunate notational conventions.


Such as? Many branches of mathematics have their own mutually unintelligible dialects of notation. Many longer papers or Ph.D. theses will create notation just for the context of that paper.


An obvious one is that traditional mathematical convention requires single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition. That's admittedly a trivial example, even though the higher-order consequences are considerable.

Computing science is where notation came into its own. Younger mathematicians have taken those lessons to heart, but as the old saying goes, progress comes one funeral at a time.

Being forced to mechanically parse and interpret a syntax has a way of really bringing out any ambiguity.


> that traditional mathematical convention requires single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition.

It's not to save paper or because of the product. You don't know the solution to the problem you are working on from the beginning, and most of the time is spent writing and writing and writing in a scratchpad, trying to work out what you need. Anything longer than a single glyph for variables would be too tedious, so everyone evolved to use single letters. And then the papers are written with the same convention, since it's natural. You do have longer variable names through subscripts, with the added benefit that they can be (and are) used to elegantly group related variables together, giving you some sort of abstraction.

I once wrote a comment about it here on HN - language in maths is not a programming language used to tell a computer how to go from A to B, but a natural language used to talk about maths between peers. Every natural language has idioms, inconsistencies, and other quirks. Polish will not change to make it easier for you to learn; it will change in ways that let Polish people communicate better with each other, which also includes a lot of historical and cultural happenstance. Same with maths.

There have been attempts like Esperanto and other artificial languages, and I think any attempt at 'codifying' maths into some programming language has about the same chance of achieving wide adoption.


> There have been attempts like Esperanto and other artificial languages, and I think any attempt at 'codifying' maths into some programming language has about the same chance of achieving wide adoption

Aren't existing programming languages already types of codified artificial math dialects which have seen wide adoption?


That’s a good point.

Programming languages are more for humans than for computers. Otherwise we’d be writing our programs in 1s and 0s, and extending our editors in Emacs Binary and VSCode BinaryScript.


> language in maths is not a programming language used to tell a computer how to go from A to B, but a natural language

Right, we're on the same page, I just think this is a bad thing and you evidently think it's a good thing. I'm well aware many mathematicians don't, because it's how they were trained and unlearning is the hardest kind of learning. The ambiguity[1] of natural language is observably ill-suited for formal reasoning, and the experience of computing science has shown this conclusively.

Do bear in mind that the pioneers in our field were virtually all trained mathematicians. They were well aware of the historic faults of the field because having to make programs actually work forced them to be.

The legacy fuzzy pencil and paper approach of traditional mathematics is going to end up being to proper formal mathematics just as what's now called philosophy is to formal logic.

[1] Let's not confuse ambiguity with generality.


> single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition.

Programmers tend to have a lack of fluency with written math that leads them to completely miss the point: the concise names are not there to save paper or make writing easier or anything like that. They're there because they make the structure of expressions easier to visually identify and parse. The shapes of expressions are an incredibly important feature of the language and often contain implicit structural analogies. You need to be able to see those analogies to correctly read mathematics, and long variable names would obscure that part of the language.

I suppose it's similar to having enough fluency in a natural language to mechanically translate the words of a poem, but you can't properly read things like the metre, so you've unknowingly missed half of what the author originally wrote and lost it all in translation.


I haven't encountered much resistance to n_{arbitrarily complex subscripts}


> An obvious one is that traditional mathematical convention requires single glyph variable names due to the unfortunate decision to save paper by denoting the product by juxtaposition.

Generally you'd use upright text in square brackets to denote longer variable names, the notation is often seen in applied fields. But this quickly becomes clunky with longer expressions.


> Being forced to mechanically parse and interpret a syntax has a way of really bringing out any ambiguity.

This is beautifully said, user23. As a programmer, I often struggle to understand the notation used in some papers.


There's value in compact/structural notation though, to an extent of course. But I come from the world of verbose application programming :)


Like APL notation being a tool of thought.


A bit of that, even though APL arguably pushes things too far (maybe that's what you implied). Parametric types also help make the point, as do abstract combinators and recursion schemes. All are very helpful for defining and manipulating ideas.


Is there something you could say about these unfortunate notations, or an article you could point me to, so that I can understand what they are?


For example, calculus uses notation and terminology that predates the modern limit-based field Weierstraß and others built. It's really confusing [1].

Statistics is even worse: a mix of old tricks developed to avoid computation back when it was expensive. See [2].

[1] A Radical Approach to Real Analysis https://www.davidbressoud.org/aratra/

[2] The Introductory Statistics Course: A Ptolemaic Curriculum? https://escholarship.org/uc/item/6hb3k0nz


Limits are not an inherent part of calculus. You can do all calculus relevant for the physical world just fine with nilpotent infinitesimals if you but give up excluded middle.


I've heard of this constructivist approach to calculus, but hadn't made the connection with nilpotents. That's really interesting. Could you explain why nilpotency and forgoing the law of the excluded middle relate to each other?


You can use nilpotents with classical logic and the excluded middle. This is called the dual numbers, and it's already a good model for "calculus without limits". They are like the complex numbers, but instead of i^2 = -1 you adjoin an element e with e^2 = 0.

However, if you want to get really serious about this, you'll need zero plus an infinitesimal to be equal to zero. That is impossible in classical logic because of the excluded middle (which forces each number to be either equal to zero or non-zero).
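To make the classical-logic version concrete, here's a throwaway dual-number class in Python (my own toy sketch, not from any library): a single nilpotent element e with e*e = 0 is enough to differentiate polynomials with no limits anywhere.

    class Dual:                                # numbers of the form a + b*e, with e*e = 0
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b              # a: value, b: derivative part
        def __add__(self, o):
            return Dual(self.a + o.a, self.b + o.b)
        def __mul__(self, o):
            # (a1 + b1*e)(a2 + b2*e) = a1*a2 + (a1*b2 + b1*a2)*e, the e*e term vanishes
            return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

    def f(x):
        return x * x * x + x                   # f(x) = x^3 + x, so f'(x) = 3x^2 + 1

    r = f(Dual(2.0, 1.0))                      # evaluate at x = 2 + e
    print(r.a, r.b)                            # 10.0 13.0, i.e. f(2) and f'(2)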


Can you recommend an introductory calculus book that builds it up from dual numbers?


The Silvanus P. Thompson book suggested by the sibling comment is lovely and very clear.

For a more algebraic treatment, and its important applications to automatic differentiation, I'd suggest starting with the relevant wikipedia articles:

https://en.wikipedia.org/wiki/Dual_number

https://en.wikipedia.org/wiki/Automatic_differentiation


You could try Calculus made easy by S. P. Thompson.


My point is not that limits are an inherent part of calculus. My point is that calculus as currently taught mixes infinitesimal-like notation that predates limits with limit-based calculus.


So, like, inequality against people with visual disabilities and dyslexics?


I would counter that the current pedagogy -- at least high school through early undergrad -- is the most efficient path, or close to it, for teaching students to become electrical engineers in the analog era. Historically, that was the most math-heavy profession that had a lot of jobs (not just professors/researchers). We just haven't updated it in a long time.


That’s probably not too far off the mark... with the proviso that we are fixing the notation, terminology, and problem solving methods for electrical engineering to what was historically used in the 1950s, and not allowing any more radical “refactoring” of those ideas or methods.

I don’t think this is actually the most effective way to train analog electrical engineers, or the most effective possible set of conceptual/notational tools for practical electrical engineering.


That’s a great analogy. Thank you.


How to really multiply Roman numerals: convert them to Arabic numeral notation, multiply them (use a calculator or your favorite algorithm), and convert them back to Roman numerals.
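For instance, a throwaway Python version of exactly that recipe (with the usual subtractive-notation table, nothing clever):

    # multiply Roman numerals by round-tripping through integers
    VALS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"), (90, "XC"),
            (50, "L"), (40, "XL"), (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

    def to_int(roman):
        n, i = 0, 0
        while i < len(roman):
            for v, sym in VALS:                 # subtractive pairs are checked first
                if roman.startswith(sym, i):
                    n, i = n + v, i + len(sym)
                    break
        return n

    def to_roman(n):
        out = []
        for v, sym in VALS:
            while n >= v:
                out.append(sym)
                n -= v
        return "".join(out)

    print(to_roman(to_int("XIV") * to_int("XII")))   # 14 * 12 = 168 -> CLXVIII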

It's very interesting to discuss this from a cultural or anthropological point of view. It doesn't seem very interesting from a mathematical POV. I'm disappointed that there's no mention of the history of Roman arithmetic or content related to actual Romans doing math.

I feel like anyone who is even asking the question "how can I do arithmetic with Roman numerals?" knows enough math to make this a trivial observation.


It's a profitability issue, not a logistics issue. It would not be profitable to convert that natural gas to electricity if it were used for anything besides Bitcoin.

And worry not, this is benefiting you even if you don't like cryptocurrency:

"The firm estimates that bitcoin mining allows carbon dioxide-equivalent emissions to be reduced by over 60% compared to routine flaring."


> It would not be profitable to convert that natural gas to electricity if it were used for anything besides Bitcoin.

My sibling comment talks about this a little more, but as someone running data centers on the same technology, I can assure you that it is possible to do profitable compute that is not crypto mining on this technology.

> "The firm estimates that bitcoin mining allows carbon dioxide-equivalent emissions to be reduced by over 60% compared to routine flaring."

https://www.crusoeenergy.com/digital-flare-mitigation has all the stats on Methane, CO/CO2, VOCs, NOx, etc.

For comparison, using one of our GPUs for a month offsets the equivalent of ~400 kg of CO2, which is about the same as a round trip flight from SFO to SEA.


If you make a Bitcoin address, memorize the private key, and never write it down ever, then the only way for anyone to "reverse" transactions is to force you to physically say or write down the private key. This transaction finality seems to be orders of magnitude more final than a credit card transaction or a bank transaction.

So sure - it's technically reversible. But is this really a practical argument? It's like saying nobody is safe in public because you can be a victim of a terrorist attack at any moment. It's alarmist and practically wrong even if technically true.


I don't think the strictest sense of "logic" has much use outside of pure math. Using the commonplace definition of "logical = rational", it's logically refutable. Any rational person should be able to reason out that it's a scam.


Certainly that's incorrect; actual logic is used throughout virtually all the sciences to deduce conclusions using methods of proof. The reason we're able to deduce many core truths of reality is by relying on both observation and logic to guide us to truth.

But, to engage with your implied argument in good faith: Do irrational people deserve to have all their money taken from them?


Actual, deductive logic can't be used in any scientific field because we don't have scientific objective truths. Science is based on inductive reasoning and constant hypothesis checking. Plenty of scientific facts we once thought were "true" ended up being false such as Newton's laws of gravity. The only place where true deductive logic really has a use is philosophy and pure mathematics, which are completely abstracted from reality. The laws of physics as we understand can always be overturned at some point; the laws of mathematics cannot.

My argument was that any rational person should be able to conclude it's a scam. That's really the extent of my willingness to engage further.


You do not need objective truths to use deductive logic. That statement shows that you fundamentally misunderstand deductive logic.

You simply need premises (i.e. assumptions) to make use of deductive logic in a standard logical system. Those premises, when used with logic that has no fallacies, imply conclusions. Any real-world conclusions are always based on premises, where those premises are based on experimental evidence. It's always the case that there could be problems with the experiments, and there are always limitations to those experiments. That doesn't mean you can't use logic to derive conclusions.


You know what? You're 100% right and I agree with you. I am wrong here. But still by your definition of logic, anyone can logically conclude it's a scam. Just make the assumption that people don't give out money for free, which is a valid assumption to make given most people's experience with the world.


Well, certainly that would be a logical argument, but it isn't a sound argument and I can give a few examples: a long time ago, people gave out bitcoin for free to anyone that wanted it. I personally received 0.05 BTC back around 2010 simply by providing my wallet to a generous person who ran a website.

It's very common to give away new cryptocurrencies to seed them and build popularity with them, even in a non-scammy way. RaiBlocks (eventually renamed Nano), for example, started this way. Eventually that currency (mostly) ended in a giant heist because the biggest exchange for that currency had its entirety of value "stolen" (embezzled by the exchange's operator). https://en.wikipedia.org/wiki/Nano_(cryptocurrency)#BitGrail...

Naive users who know very little about cryptocurrencies but who want to get into that market (think your parents, if they're technically literate enough to use the internet) will genuinely be persuaded by giveaways.

It's becoming a common scam practice to give people free cryptocurrency tokens (or at extreme discount), and make it so people can only sell those tokens by buying other tokens. This happens over and over, and actually got some reporting on it with the Squid Game tokens (not associated with the actual Korean media drama): https://www.bbc.com/news/business-59129466

So sure, you could technically make a "logical" argument about anything you want with the right premises, but if those premises are not based in reality, then the conclusions don't hold in reality. Those are called a priori arguments. And the statement that "no one would ever give away cryptocurrency for free" is a strictly false one.


Plenty of cryptocurrencies are private: Monero, Dash, Zcash, Beam, Grin, etc...

I actually prefer using the public ones when I want a paper trail. But I've got a stash of the private ones in case I ever need to relocate to a different country one day in my life.


It gave us an uncensorable, global, and decentralized payments system. To some people this is extremely valuable.


I'm surprised by how many people overlook this. Everyone at every level has been distracted by cryptocurrency being either a way to make money or an investment scheme. It'd be nice if it could be those things, but there are already other systems of actual investment. On the other hand, cryptocurrency, whether or not you hate it for its energy consumption, provides a means for independent economies to be created outside of whatever regimes are in control. But that's boring.

Then again, hopefully we'll never need to see a future where everyone values the decentralized nature of crypto more than its monetary value at any given moment. If that happens, it might be under a regime where sneezing in public or looking in the wrong direction would lose you enough social credit that your bank account gets terminated.


Er, all your transactions are tracked. They want to get rid of cash, and they need people to want the technology that enables that. A couple of mining operators control the network, and they just need to operate as one to do whatever they want with it. The lizards in charge of fiat probably own those mining companies. All transactions are tracked, did I mention? So everything you do is tracked. Read that again and think about how cash == freedom ... and crypto == total enslavement.


Monero, Dash, Zcash, Beam, and Grin are all completely private and anonymous. Nobody can see you using those blockchains.


Each of these comes with a large asterisk:

Uncensorable, except for all of the times that providers have refused service because of (very reasonable!) KYC laws.

Global, except for all of the countries that will throw you in prison for using cryptocurrencies.

Decentralized, except for the structured oligarchy of miners who prevent protocol evolution on any decentralized currency of sufficient value.

Hully gee.


Not really:

- You cannot censor a Bitcoin transaction. You are talking about a completely unrelated issue and pretending this is an issue with Bitcoin.

- People in China are still mining Bitcoin and using crypto even with the ban. So it truly is global. Just like how drugs are global even though most countries have banned them: https://www.cnbc.com/2021/12/18/chinas-underground-bitcoin-m....

- Miners have tried to prevent changes to the Bitcoin and Ethereum protocol more than once and failed due to community consensus of the users. Miners don't get to decide which protocol is run, the users ultimately do. If the miners choose to mine an unpopular protocol, people can and will simply fork the chain to a more favorable protocol. This has happened plenty of times.


"Some people" being mainly criminals and money launderers.


A lot more people than just criminals and money launderers. But yes, criminals ironically tend to be some of the first adopters of new technology. So the fact that it's so popular with criminals should be a signal that it actually is really powerful technology.

Or do you think criminals are using crypto because they're uninformed reddit HODLRS hoping to moon?


Plenty of non-criminals adopted bitcoin. Then they un-adopted it once they found out it was actually not useful at all.

The criminals stayed though.


I've legally bought and sold quite a few things on craigslist using cryptocurrencies. I'm personally finding it easier and easier to use cryptocurrency as a payments system as time goes on. You really don't know what you're talking about.


You could have done that just as easily, or easier, using other payment methods that are not boiling the planet.


Could have but did not want to? I want to use crypto and so do plenty of other people. That's why the option to use crypto on craigslist even exists.

First, it was criminals, now it's the planet. You are all over the place my guy. I'm doing what's best for me economically speaking; the planet is just gonna have to figure out a way to survive because I'm for sure doing fine.


All right, so you're a sociopath.


Not a sociopath, just someone whose priorities are in the following order:

- my family

- my friends

- my country

- the rest of the world

I'm not going to restrict my own economic goals to save the planet at the expense of the things above.


I don't know how to tell the difference between that statement, and what a sociopath would say in the same situation.


Then you don't know what the definition of a sociopath is at all.


Does the definition of a sociopath say what they would say in that situation?


Would you go vegan to reduce your CO2 output, and does it make you a sociopath if you don't?


Already did.


Thanks for your sacrifice, now I know what kind of tree-hugging hippie I'm talking to. The unfortunate reality of the world is that sacrificers like you have to live with resource gatherers like me. And your opinion about me, unfortunately for you, doesn't really impact my day-to-day life in any meaningful way.


And Argentinians, and Venezuelans, and Nigerians, and the Salvadorian government...

There's flyover country, there's flyover countries, and ignoring the masses that make that up is equal parts condescending and ignorant.


No significant number of people in any of those use it.

El Salvador's wannabe dictator is pretty keen on it, though.


Define "no significant number."

And we aren't going to get anywhere in this discussion if your only rebuttal is "the leader of a country using it voluntarily is a dictator so that settles it." Do you have a rebuttal to my point at all?


If everyone worked 25 hours a week, dog walking would not even exist as a job. Literally anyone can do it as long as they can walk. Let's not pretend it's a noble contribution to society here; it's, objectively speaking, something a 15-year-old can do.

