Hacker News | dislikedtom2's comments

A timer is accurate only if you have first done experiments with your unique settings. Temperature gauges, however, are very useful. I usually trust my eyes and nose more than a timer.


Because this planned-obsolescence stuff is slowly killing us, and yielding would probably only make it worse?


Having a warranty on CPUs wouldn't have saved the environment. It probably would have made things worse as every user returned their computers for a new one.


If you don't like it, don't buy it.

Or do you just want to control what other people are doing?


Gotcha, so you are also against electrical regulations like requirements to use tested insulators etc?

Because some regulation is actually useful in a world where consumers cannot be expected to know everything and be able to test everything.

And some assumptions about a product should hold true as well. E.g. the assumption should be that if I buy a desktop CPU today I should be able to use it for at least n years. Because how would you know this beforehand?

If Intel made an optimization that turns out to be a security flaw, that is totally their fault and they should cover it. This is nothing any consumer could have known (and therefore avoided buying) beforehand. This is the cost of taking risks in business.

It seems to be a popular stance these days to shill for corporations and protect them from ever having their risks realized. But that makes the products and the world we live in worse for everybody and gives you nothing at all.


There is a huge difference between electrical safety regulations, and a side channel attack that has never been seen to be exploited in the wild.


If that side-channel attack that has never been seen in the wild leads to me having to throw my CPU/computer in the bin because it is unsupported on an up-to-date OS for that reason, there isn't.

How should I, as a consumer, have voted against this with my wallet, before it happened?

We can argue whether this is the fault of the hardware manufacturer or the software company that sells the OS, but I am 100% sure that no consumer should take the blame for "not doing their research".


> How should I, as a consumer, have voted against this with my wallet, before it happened?

That's what long term reputation is for.


So which desktop processor manufacturer other than the mentioned two do you recommend? ARM?


Me!

I'll happily resell you Intel processors at a 10x markup, but you get a ten-year warranty: I'll replace your processor with a new one of no worse specs if similar vulnerabilities are discovered.

Details to be negotiated.

(More seriously: you don't need a new manufacturer. Someone else can do the warranty at a price.

That's actually what e.g. AppleCare does: if a part inside your device catches fire, Apple will replace it, even though they did not necessarily manufacture that part.

I have no clue whether their particular policy covers the vulnerabilities we are talking about. But you can easily imagine a variant of such an insurance policy that does.)


> And some assumptions about a product should hold true as well. E.g. the assumption should be that if I buy a desktop CPU today I should be able to use it for at least n years. Because how would you know this beforehand?

You ask the manufacturer, duh? If they lie, that's fraud. If you don't like the answer, you don't buy.

> If Intel made an optimization that turns out to be a security flaw, that is totally their fault and they should cover it. This is nothing any consumer could have known (and therefore avoided buying) beforehand. This is the cost of taking risks in business.

Customers and suppliers should be able to allocate such risks between themselves however they like.

You can fiddle with what the default should be, in case they haven't negotiated anything.

But if both customer and supplier agree, the customer should be allowed to bear such a risk.

> It seems to be a popular stance these days to shill for corporations and protect them from ever having their risks realized.

Why? I have nothing against any particular allocation of the risks, as long as all involved parties agree. (Otherwise, they just don't make a deal.)

> Gotcha, so you are also against electrical regulations like requirements to use tested insulators etc?

If you claim they are tested, you better not lie. Otherwise it's fraud and you should be held liable.

If a customer wants to buy untested insulators, who am I to stop them?

I mean, in the most extreme case, a customer can buy bread and try to use it as an insulator, if they really want to. There's nothing the supplier can do about that.


Not to put words in OP's mouth, but saying they "just want to control people" is perhaps not what they're going for.

Since you raised the valid point of “why make everyone pay”, it seems they (OP) are concerned that currently _everyone_ is paying for the externalities of shitty electronics and the associated ewaste.

Of course, these kinds of externalities are impossible (or just really hard?) to rectify at the individual-action level; it's a collective action problem.

So they raised the interesting point: why not target the specific companies who manufactured this faulty product and require them to clean up the problem they've created?

If nothing else, it’s a useful conversation to have?


Sure, it can be a useful conversation.

Assume for the sake of argument, that I agree that there are externalities and that we should do something about them.

The economics textbook tells you the simple solution: tax the externalities. In our case, you might want to tax e-waste (or directly tax whatever is bad about e-waste, like heavy metals or so.)

Then let the market figure out how to deal with it. Perhaps offering extended warranties is the way to go? Perhaps developing compostable computers is the way to go? Perhaps using fewer computers is the way to go? Perhaps just sucking it up, doing nothing and paying for the externalities is the way to go?

It's not at all obvious to me which of these (if any) is the best solution. And I don't have high hopes that any government would figure out the best solution either.

Different people might even have different answers, and suppliers can provide for this diversity.

This simple tax avoids a lot of regulatory complexity. Remember that regulatory complexity leads to loopholes, regulatory capture and endless lobbying.


The tax you propose is not as simple as you suggest:

Do you tax on material weight? But bigger systems tend to be easier to repair/recycle than nano-electronics. They also often last longer.

Do you tax on raw materials? That would exclude all the junk made abroad. Or on components? That would make a gigantic and perpetually changing list to maintain, creating an incentive to always create alternatives needed only to game this tax.

How does that address the externality of finite materials becoming ever rarer? Making things more expensive just makes them less affordable for part of the population, while the other part does not care much, especially for cheap, disposable(!) small devices.

I don't have the perfect solution, but IMHO we need regulations, not taxes. Creating an obligatory (long) warranty would push society in the sustainability direction. I don't need a $2 LED lamp, an even slimmer keyboard or a 120 Hz screen; I just want things to last much longer (or to be repairable) than the 2-3 product-cycle time frame defined by the corp business team. Who wants to say to their children, "enjoy this tech while there's still some rare earth left"?


> Do you tax on raw materials ? That would exclude all the junk made abroad.

You can adapt the tax to fix that problem. See how VAT systems handle imports and exports. If your stuff needs eg lead, you either prove that you dispose of it properly, or you pay the tax.

> I don’t have the perfect solution but IMHO we need regulations, not tax.

Why? Wouldn't your argument also work for a 'good enough' tax?

I agree that a tax also needs some careful thinking about what exactly the externalities are that you want to tax. But it still leaves more flexibility to consumers and suppliers than a blanket regulation like 'mandatory long warranty, whether you want it or not'.

> Making things more expensive just makes them less affordable for part of the population, while the other part does not care much, especially for cheap, disposable(!) small devices.

I don't understand this objection. Could you please explain?

Obligatory long warranties also raise the price of goods.

> Who want to say to their children “enjoy this tech while there’s still some rare earth left” ?

Are you suggesting we are going to run out of rare earth elements? Fat chance. We are sitting on a giant ball of matter, we are not going to run out of any elements. We might be running out of easily mine-able deposits of something, but either the price will go up a bit or someone will invent a new technique. (The former often leads to the latter.)

> I don't need a $2 LED lamp, an even slimmer keyboard or a 120 Hz screen; I just want things to last much longer (or to be repairable) than the 2-3 product-cycle time frame defined by the corp business team.

That's a valid preference, and I suggest you put your money where your mouth is.


> See how VAT systems handle imports and exports. If your stuff needs eg lead, […]

It is way harder for a customs officer to tell whether a package contains some trace of lead, gold, boron, cobalt, dysprosium, neodymium… in some microchip than to classify it as "raw material", "alcohol", "processed food", "weapon", etc.

Making things more expensive to discourage usage is pointless, and that's why countries put speed limits in place. Speed is dangerous (see E_k = ½mv²), and a "speed tax" would only reduce road accidents in inverse proportion to its users' wallets. As e-waste and the rarefaction of elements have impacts that last way longer than a road accident, we need absolute regulations to avoid those externalities. Relative regulations (taxes) are good for balancing some parts of the economy, not for preventing problems.

Of course I would stand for a tax if it could do a "good enough" job.

> We might be running out of easily mine-able deposits of something, but either the price will go up a bit or someone will invent a new technique.

What makes you suppose the price will go up only a bit? What makes you expect a material's price will stay below the economic threshold of its extraction? I too dream of new, clean techniques, but history has shown that those inventions rely on far more energy and/or also have externalities on resources and the environment. I'm sure you'll find plenty among battery-material alternatives and oil replacement/new extraction techniques. Fracking [0] is my favorite one.

0 https://archive.nytimes.com/www.nytimes.com/interactive/us/D...


Many countries effectively have a speed tax at least for small amounts of going over the speed limit.

I'm not sure why you keep harping on about people's wallets? Yes, rich people can afford more stuff. What else is new?

The economic theory of taxing externalities does not rely on all people having the same size wallet.

Have a look at eg London's congestion charge: it's not a problem for that system, that some people are richer than others.

You can also look at Singapore's congestion charge or Singapore's Certificate of Entitlement system. Or look at the very successful US sulphur dioxide cap and trade programme: it's not a problem that rich people can in theory just pay to emit more sulphur dioxide, the system still works.

> What makes you suppose the price will go up only a bit?

In the long run, the same reasoning that made Simon win his wager: https://en.wikipedia.org/wiki/Simon%E2%80%93Ehrlich_wager

> What makes you expect a material price will stay below the economic threshold of its extraction?

Sorry, I don't understand. The price will generally be above that threshold, otherwise why would anyone bother extracting the stuff?

> Fracking [0] is my Favorite one.

Fracking is great, yes, I agree. One of my favourites as well:

The extra natural gas that the US got from fracking allowed them to reduce their carbon emissions in the 2010s, despite a lack of political support for such a goal! Holier-than-thou Europe meanwhile increased its carbon emissions, because fracking is verboten over there, so they burned more coal and oil instead.


It's not so hard to test this:

1. Turn the swap off, or monitor it closely.
2. Try to load a big model, like 65b-q4 or 30b-f16.
3. Observe the OOM.


Using a memory mapped file doesn't use swap. The memory is backed by the file that is memory mapped!
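The distinction can be sketched in a few lines (an illustrative Python sketch, assuming a POSIX-like system, not the actual model loader): pages of a memory-mapped file are backed by the file itself, so under memory pressure the kernel can simply drop them and re-read them from disk later, without ever touching swap.

```python
import mmap
import os
import tempfile

# Create a small file to map (a stand-in for model weights on disk).
fd, path = tempfile.mkstemp()
os.write(fd, b"model weights" * 1024)
os.close(fd)

with open(path, "rb") as f:
    # Pages of this mapping are backed by the file on disk: under memory
    # pressure the kernel can evict them and fault them back in from the
    # file, so swap space is never involved.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:13]  # touching memory faults the page in from the file

os.remove(path)
print(header)  # b'model weights'
```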


Semi offtopic, but for some time I have been dreaming of training chatbot to communicate in cuneiform or hieroglyphs to bring some old languages back alive. Could it be possible, using old tablets as training data?


That's basically the problem of unsupervised machine translation using mainly monolingual corpora. It means giving a machine learning model tons of text in two languages and letting it figure out how to do translation between some old language X and e.g. English. There's no need to feed it a parallel corpus, i.e. examples of sentences in language X and their translations in English.

In some situations, this seemingly impossible task is doable and can yield good results. Researchers sometimes need to kickstart their models by giving them a mapping between words of the two languages (for English <-> French: "cat" <-> "chat", "book" <-> "livre", and so on). That's just simple vocabulary. While it's technically possible to learn this mapping from scratch, it's too difficult for now.

Do you know of the encoder-decoder architecture? You feed something (an image, text) to the encoder, which compresses it into a very dense representation, and the decoder tries to use the resulting dense vector to do useful stuff with it. The input could be a sentence in English; the encoder encodes it, and the decoder tries to use the encoder's output to generate the same sentence, but in French. These architectures are useful because working directly with "plaintext" to learn translation is way too expensive. I mean, that's one of the reasons.

What the encoder does is map a "sparse" representation of a sentence (plaintext) to a dense representation in a well-structured space (think of word2vec, which managed to find that "king" − "man" + "woman" ≈ "queen"). This space is called the "latent space". Some say it extracts the "meaning" of the sentence. To be more precise, it learns to extract enough information from the input and present it to the decoder in such a way that the decoder becomes able to solve a given task (machine translation, text summarization, etc.).
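The word2vec-style vector arithmetic can be illustrated with a toy sketch (the 3-dimensional vectors below are hand-picked for the example, not learned; a real model learns hundreds of dimensions from a large corpus):

```python
import math

# Hand-picked toy 3-d "embeddings", chosen so the classic analogy holds.
emb = {
    "king":  (0.9, 0.8, 0.1),
    "man":   (0.5, 0.1, 0.1),
    "woman": (0.5, 0.1, 0.9),
    "queen": (0.9, 0.8, 0.9),
    "apple": (0.1, 0.9, 0.4),
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(vec, exclude):
    # Most similar known word, ignoring the words used in the query.
    return max((w for w in emb if w not in exclude),
               key=lambda w: cosine(emb[w], vec))

# king - man + woman lands closest to queen in this toy latent space.
target = tuple(k - m + w for k, m, w in
               zip(emb["king"], emb["man"], emb["woman"]))
print(nearest(target, exclude={"king", "man", "woman"}))  # queen
```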

One of the main assumptions of the unsupervised models using monolingual data only is that both languages can be mapped to the same latent space. In other words, we assume that every sentence/text in English has its exact French (or whatever) equivalent, and that the resulting translated sentences contain exactly the same information/meaning as the originals.

That's quite a dubious assumption. There are obviously some ideas, some things, that can be expressed in some languages but can't be exactly expressed in others. While theoretically unsound, however, these models have achieved pretty damn good results in the last couple of years.


I think we need a general AI before we're able to do that, as the data set is small and you would need to infer the rules. A human is able to learn rules way more efficiently than ChatGPT.

ChatGPT is just trained on a lot of data.


Intriguing question. Would it need large sets of hieroglyphic training data, and is there enough of it? Or would a translation module be enough?

A follow-up question: could a chatbot teach you said language?


No, you need billions of words to train a large language model.


Assuming all human languages share a common semantic representation in latent space (I am flipping cause and effect here, but for our purposes it doesn't really matter), and assuming that human languages largely follow the same patterns (this assumption is based on the fact that we can trace the roots of modern languages back to the Phoenician script), it is reasonable to assume that we can fine-tune a self-supervised model on a tiny amount of data. (The emergent properties of an LLM are carrying a lot of weight here; many of the assumptions rely on the idea that an LLM's emergent properties arise from the model having learnt the latent structure of various languages.)


I think you may be on to something here. For example, ChatGPT is perfectly capable of "understanding" and speaking Polish, while the amount of training data in this language definitely wasn't large. It is not as eloquent as in English, but still, for a model that has not been trained for translation tasks, this is very cool.


Its Lithuanian is awful. I'd expect that any language further removed from the one the majority of its training is in would be worse without a significant amount of data in that language. It's possible that having such data could affect its English-speaking capability, but that's just speculation on my part.


So you're saying something like the Universal Translator from Star Trek might be possible?


For humans, yes. I am not saying one-shot learning would be possible for undocumented indigenous languages, but few-shot language acquisition in cases of a single surviving speaker is something that I would consider highly probable. This hypothesis relies heavily on the nature of variational learning in latent space and observations about human languages. It is of course possible that some ethnicity would have a language so different from other languages that it is effectively alien (and that Homo sapiens' common brain structure and physiology have no influence on our languages, and/or that human neural structure cannot be statistically modeled by latent variables, at least not with current variational learning techniques). This is possible but very, very unlikely.


In cryptography it's generally impossible to invert a hash, since many inputs map to the same output. You can do some clever things (correlating and combining other data), but if you're just looking at some hash that could have come from an infinite number of inputs, you're out of luck.

I'll take the extreme position that language translation is an unsolvable problem because of this exact phenomenon. There was a recent case where a politician was accused of making a racist remark. He said something like "You are a donkey." or "You all are donkeys." to another [minority background] politician. Which was it? Well, in many languages the second person plural and the second person singular formal are identical. And there are no articles. So the two statements are literally identical. Which did he mean? Nobody will ever know, besides him.

And outside of inherent language ambiguities, start piling on the endless (and ever/rapidly changing) euphemisms, idioms, colloquialisms, metaphors, just plain old ambiguous sarcasm, and all the other things that make language fun (and more expressive). And these sort of things aren't really the exceptions so much as the rule. And it only becomes more common the more distant languages get. Translations from various Asian languages to English often look just hilarious. Now imagine going back to languages exponentially more detached from any modern language, using one can only imagine what sort of expressions, and trying to convert it.

Especially using a neural network type system you'll probably be able to get something. And, even worse, it might well even make sense. That's a problem because, kind of like ChatGPT, it being coherent is zero indication of it being right.


I don’t think the pigeonhole principle applies to human languages though


> we can trace the roots of modern languages back to the Phoenician script

That's modern European languages ... and post ~1100 BCE if I recall correctly.

So, indigenous languages from people settled in Australia [1] for 50,000+ years can be a little different; some don't have "left" / "right" as directions relative to a point of view and stick with east vs. west as absolutes, for example.

We're down to maybe 20-30 from the pre-colonial hundreds, though [2].

[1] https://mgnsw.org.au/wp-content/uploads/2019/01/map_col_high...

[2] https://www.youtube.com/watch?v=ZwSXFvDDjYE


> That's modern European languages ... and post ~1100 BCE if I recall correctly.

That's Indo-European (as in descended from PIE/Yamnaya). Basque is older than all of them, and it's a living European language.


And we have about a million cuneiform tablets, with maybe 10-100 words each, so we have a couple million words of text to fine-tune the model with. Or maybe to use when training the next model; after all, GPT can already speak multiple languages.


Addendum: One possible challenge is that so far, large language models are trained on a large sample of all text that has been published, while what we have of cuneiform is a decent sample of all text that has been written. Meaning most cuneiform tablets are inventories, invoices, requests for payment, contracts, tablets from students practicing writing, etc. Types of documents that are underrepresented in traditional training data.


Phoenician script is the common ancestor of the Latin, Greek and Cyrillic scripts.

You're probably thinking of the Indo-European language family, which accounts for about 45% of native speakers. The largest language family in the world, but not even a majority.

Scripts and languages change over the course of decades, and while there are well-known mechanisms behind those changes, trying to deduce hieroglyphics or ancient Egyptian from a modern corpus is impossible.

The idea that there is some shared structure in all languages is known as universal grammar. Whether that structure exists is still hotly debated.


I am not saying that all languages have a shared structure, but from the Bayesian variational learning perspective, as long as the new data shares some structure with what the model has previously encountered, the prior training data contributes to understanding the new information i.e. few-shot. This is in the information theoretic sense, I am not stating any theories about the underlying semantics or grammar.

I know this may not be the answer that you are looking for, but the way most of these ML systems are designed is based on the idea that life, the universe, and everything can be modeled by a series of joint probabilities. For toy problems you draw a diagram:

https://en.m.wikipedia.org/wiki/Graphical_model

It's an old idea in AI (it predates even ML), but people had never been able to do anything useful with it outside of exam problems until the emergence of language models on modern deep learning hardware. All of a sudden, variational learning and causal inference are not merely statistical word problems for grad students any more. This is the key to how most of the custom deep-learning-based avatar generators work: they use a variational autoencoder. For LLMs, it is in the form of a transformer which contains a sampling step (sampling from a distribution is the key to Bayesian methods).

I would like to emphasize that the theory of probabilistic learning is very different from the actual practice. The theory we have today isn't much different from 20 years ago. Implement the methods in, for example, Murphy's Probabilistic ML book, and they would be useless without access to modern deep learning hardware and gradient-descent optimizers. Without deep learning, we wouldn't have LLMs, regardless of how fancy the variational learning theories are.


Is there any reason to assume any shared structure for unrelated languages though? Written language is just an encoding for information.

There is a good candidate for a test. Someone is probably already working on it. Minoan, as written in Linear A, has survived in only a few thousand tokens, and despite thousands of man-years of effort, natural intelligence has made virtually no progress in understanding it. That's still easy mode, since we know that the Minoans were in contact with speakers of Indo-European and Semitic languages, and writers of hieroglyphics and Phoenician script, so their written language was probably influenced by that.


There is no reason to, as stated before, it is however a necessary assumption. It is also possible that the assumption is entirely wrong, and the LLM generates a plausible explanation to their language that we cannot falsify. If the shared structure hypothesis is incorrect, then it is no different from dealing with an alien language. (Note we can also feed in related information like where it was found, what the nearby pottery shards at the excavation site are etc. I am lumping all of these under the "shared structure" banner of the LLM's model of humanity/human languages)


They don't have to be in the output language.


I love fishing but could never be a fisherman. Jobs are not fun, but more like temporary slavery.


To be fair, humans also have great difficulty solving novel problems. Monkey see, monkey do!


You could always install two of them.


Often I think it means that you asked a painful question. Don't think that downvotes are always bad; they merely tell you what is popular, just like on Reddit.


But this isn't Reddit. If I wanted a popularity contest I'd be on Reddit, or Twitter or Facebook.

I want intelligent questions and answers and a basic respect for learning and knowledge.

To me it just shouldn't be acceptable to downvote a (non rhetorical) question.


I like it. However, all the dates and times are in an American format that I don't use.


I'll see if I can localize it. Can you give me an example format?


I personally use "en-GB", which results in something like dd.MM. HH:mm. The JS Date object has a bunch of methods for locale formatting. Edit: actually, it seems my browser does not expose the correct locales...


The dates/times should now be localised!


Can you also look at adding the time/date's timezone at the end? Thanks, I love the site's layout/design.


What do you mean?


I think he meant something like: "23/11/2022, 15:00:00 GMT+1"

Doesn't seem necessary IMO, as long as the dates aren't in MM/DD/YYYY format and are in the browser's timezone.
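For illustration, that format can be produced with, e.g., Python's standard datetime (a sketch only; the site itself presumably formats dates client-side in JS, where `toLocaleString` would be the analogous tool):

```python
from datetime import datetime, timedelta, timezone

# A fixed timestamp in a UTC+1 zone, mirroring the "GMT+1" example above.
tz = timezone(timedelta(hours=1))
ts = datetime(2022, 11, 23, 15, 0, 0, tzinfo=tz)

# %z renders the offset as "+0100", close to the suggested "GMT+1" style.
print(ts.strftime("%d/%m/%Y, %H:%M:%S GMT%z"))  # 23/11/2022, 15:00:00 GMT+0100
```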


I have never shared my location via the browser, including with Google Maps. Still, Google knows very well where I live and centers on my home by default when I open Google Maps. I'm curious: what do you seek by sharing your location with Google (Maps)?


Perhaps your home router has a public IP. Google gets the location of the home router from just one Android phone connecting to it. I'm guessing.

But some home routers are behind CGNAT infrastructure: Then it's possible that TCP connections from the same browser can go through different public IP addresses.

Sharing the location helps Google to help users. And Google to target ads better.

