It’s mathematically identical but conceptually different. The things that go into the calculation are different, and the numbers that come out of the calculation mean different things. Laughing is healthy though.
You can just shortcut all of that if you're a Bayesian and just plain say "p-values are posterior probabilities under a uniform (improper) prior" and save everyone a lot of time.
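Here's a rough numerical sketch of what I mean, for the simplest case (a one-sided test of a normal mean with known sigma; the sample, sigma and n are made up by me, and this assumes numpy/scipy):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  sigma, n = 2.0, 25
  x = rng.normal(loc=0.7, scale=sigma, size=n)   # made-up sample

  z = x.mean() / (sigma / np.sqrt(n))

  # Frequentist: one-sided p-value for H0: mu <= 0
  p_value = stats.norm.sf(z)

  # Bayesian: P(mu <= 0 | data) under a flat (improper) prior on mu,
  # where the posterior is N(xbar, sigma^2 / n)
  posterior_tail = stats.norm.cdf(0.0, loc=x.mean(), scale=sigma / np.sqrt(n))

  print(p_value, posterior_tail)   # identical up to floating point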
And if you're doing that, don't then complain that p-values can be misinterpreted, because you're basically just laundering the misinterpretation of p-values.
Sure, you are mathematically pure because you made an initial assumption that makes it so, rather than being confused, but the end result is the same.
There is some frequentist procedure there, but it seems hard not to recognize the deep connection to Bayesian statistics and not to wonder whether you should start questioning your baseline assumptions, since the entire justification for using a shrinkage estimator has a whole lot more in common with the foundations of Bayesian statistics than with the foundations of frequentist stats.
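To make the shrinkage point concrete, here's a minimal simulation sketch (numpy; the dimension, trial count and tau are arbitrary choices of mine): the James-Stein shrinkage factor is exactly an empirical-Bayes plug-in for the posterior mean under theta_i ~ N(0, tau^2), and it beats the MLE in total squared error:

  import numpy as np

  rng = np.random.default_rng(1)
  p, trials, tau = 50, 2000, 1.0

  mse_mle = mse_js = 0.0
  for _ in range(trials):
      theta = rng.normal(0.0, tau, size=p)      # true means
      x = rng.normal(theta, 1.0)                # one noisy observation per mean
      js = (1.0 - (p - 2) / np.sum(x**2)) * x   # James-Stein: shrink toward 0
      mse_mle += np.sum((x - theta) ** 2)
      mse_js += np.sum((js - theta) ** 2)

  print(mse_mle / trials, mse_js / trials)      # JS risk is clearly lower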
A purist frequentist using a shrinkage estimator looks a lot like heliocentric Ptolemaic astronomy.
If they were doing frequentist inference they wouldn’t be using priors at all and there is nothing frequentist in using previous data to construct prior distributions.
Not true. In frequentist statistics, from the perspective of Bayesians, your prior is a point distribution derived empirically. It doesn't have the same confidence / uncertainty intervals, but it does have an unnecessarily overconfident assumption about the nature of the data-generating process.
Not true. In frequentist statistics, from the perspective of Bayesians and non-Bayesians alike, there are no priors.
---
Dear ChatGPT, are there priors in frequentist statistics? (Please answer with a single sentence.)
No — unlike Bayesian statistics, frequentist statistics do not use priors, as they treat parameters as fixed and rely solely on the likelihood derived from the observed data.
There are always priors, they're just "flat", uniform priors (for maximum likelihood methods). But what "flat" means is determined by the parameterization you pick for your model, which is more or less arbitrary. Bayesians would call this an uninformative prior. And you can most likely account for stronger, more informative priors within frequentist statistics by resorting to so-called "robust" methods.
First, there is no such thing as an ‘uninformative’ prior; it’s a misnomer. They can change drastically based on your parameterization (cf. change of variables in integration).
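As a sketch of that change-of-variables point (my own notation): a prior that is flat in one parameterization is generally not flat in another:

  If $p_\theta(\theta) \propto 1$ and $\varphi = g(\theta)$, the induced prior is
  $$ p_\varphi(\varphi) = p_\theta\big(g^{-1}(\varphi)\big)\,\left|\frac{d\,g^{-1}(\varphi)}{d\varphi}\right|. $$
  Example: flat on $\sigma > 0$ with $\varphi = \log\sigma$ gives
  $$ p_\varphi(\varphi) \propto e^{\varphi}, $$
  which is not flat at all, so "uninformative" depends on the parameterization you picked.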
Second, I think the nod to robust methods is what’s often called regularization in frequentist statistics. There are cases where regularization and priors lead to the same methodology (cf. L1-regularized fits and Laplace priors) but the interpretation of the results is different. Bayesians claim they get stronger results, but that’s because they make what are ultimately unjustified assumptions. My point is that if those assumptions were fully justified, they would have to use frequentist methods.
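For the L1 case, the correspondence I mean is (a sketch, with my own choice of notation for the noise scale $\sigma$ and the prior scale $b$):

  With likelihood $y \mid \beta \sim N(X\beta, \sigma^2 I)$ and prior
  $\pi(\beta_j) \propto \exp(-|\beta_j|/b)$, the negative log-posterior is
  $$ \frac{1}{2\sigma^2}\,\lVert y - X\beta \rVert_2^2 \;+\; \frac{1}{b}\,\lVert \beta \rVert_1 \;+\; \text{const}, $$
  so the MAP estimate is exactly the L1-regularized (lasso) fit of
  $\lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1$ with $\lambda = 2\sigma^2 / b$;
  the same point estimate, but interpreted differently on each side.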
One standard way to get uninformative priors is to make them invariant under the transformation groups which are relevant given the symmetries in the problem.
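For instance (my sketch), for a pure scale parameter $\sigma > 0$, demanding invariance under the rescaling group $\sigma \mapsto c\sigma$ pins the prior down:

  Invariance means $\pi(\sigma)\,d\sigma$ keeps the same form under $\sigma' = c\sigma$,
  i.e. $\pi(\sigma) = c\,\pi(c\sigma)$ for all $c > 0$, whose only solution (up to a constant) is
  $$ \pi(\sigma) \propto \frac{1}{\sigma}, $$
  i.e. a prior that is flat in $\log\sigma$ (the usual Jeffreys prior for a scale parameter).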
It’s not true that “there are always priors”. There are no priors when you calculate the area of a triangle, because priors are not a thing in geometry. Priors are not a thing in frequentist inference either.
You may do a Bayesian calculation that looks similar to a frequentist calculation but it will be conceptually different. The result is not really comparable: a frequentist confidence interval and a Bayesian credible interval are completely different things even if the numerical values of the limits coincide.
Frequentist confidence intervals as generally interpreted are not even compatible with the likelihood principle. There's really not much of a proper foundation for that interpretation of the "numerical values".
What does “as generally interpreted” mean? There is one valid way to interpret confidence intervals. The point is that it’s not based on a posterior probability and there is no prior probability there either.
If you want to say that when you do a frequentist analysis, which doesn’t include any concept of prior, you get a result that has a similar form to the result of a conceptually completely different Bayesian analysis which uses a flat prior (definitely not “a point distribution derived empirically”), that may be correct. It remains true that there is no prior in the frequentist analysis, because priors are not part of frequentist inference at all.
That’s not what it means though. It’s done through a partnership. Or not, if we count Business Development Companies as “private credit” - but then they are not usually private corporations either.
Firm is used for partnerships, where the company is not a legal entity itself. An incorporated company may be closely held but it wouldn’t be a firm in that sense. (However, it may be customary to talk about law or accounting firms, for example, regardless of their actual legal form.)
No, independently of OpenAI's definition. If we have AGI there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day*. And if all those jobs are eliminated, I guess we'll have bigger problems than to debate whether we've achieved AGI or not.
* Which is a much larger class of jobs than just engineering. And also excludes field engineers and other types of engineers that need a physical body for interacting with customers, etc.**
** Though even then, you could in theory divvy up the engineering part and the customer interaction part of the job, where the human that's doing the interaction part is primarily a proxy to the engineering agent that's in his earbud.
> there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day
I'm not sure I understand, and want to check. That really applies to a lot of jobs. That's all admins, accountants, programmers, probably includes lawyers, and probably includes all C-suite execs. It's harder for me to think of jobs that don't fit under this umbrella. I can think of some, of course[0], but this is a crazy amount of replacement with a wide set of skills.
But I also think that's a bad line to draw. Many of those jobs include a lot more than just typing into a computer. By your criteria we'd also be replacing most scientists, as so many are not doing physical experiments and are using the computer to read the work of peers and develop new models. But also, is that definition intended to exclude jobs where the computer just isn't the most convenient interface? We should be including more in that case, since we can then make the connection for that interface.
I think we need a much more refined definition. I don't like the broad strokes "is computer". Nor do I like skills based definitions. They're much easier to measure but easily hackable. I think we should try to define more by our actual understanding of what intelligence is. While we don't have a precise definition we have some pretty good answers already. I know people act like the lack of an exact definition is the same as having no definition but that's a crazy framing. If we had that requirement we wouldn't have any definitions as we know nothing with infinite precision. Even physics is just an approximation, but it's about the convergence to the truth [1]
[side note] The conventional way to do references or notes here is with brackets like I did, so you don't have to escape your asterisks. *Also*, if you lead a paragraph with two spaces you get verbatim text
[0] farmer, construction worker, plumber, machinist, welder, teacher, doctors, etc
Actually it occurs to me that even if we did have AGI, or even ASI (heck, with ASI even more so), we'd still need desk jobs to maintain the guardrails.
Intelligence is one thing: being able to figure out how to get a task done (say). But understanding that no, I don't want you to exploit a backdoor or blackmail my teammate or launch a warhead even though that might expedite the task. Or why some task is more important than another. Or that solving the P=NP problem is more fulfilling than computing the trillionth digit of pi. That's perhaps a different thing entirely, completely disjoint from intelligence.
And by that definition, maybe we are in the neighborhood of AGI already. These things can already accomplish many challenging tasks more reliably than most humans. But the lack of wisdom, emotion, human alignment, or whatever we want to call it, leads them to accomplish the wrong tasks, accomplish them in the wrong way, or overlook obvious implicit requirements, and that may cause people to view them as unintelligent, even if intelligence is not the issue.
And that may be an unsolvable problem because AI simply isn't a living being, much less human. It doesn't have goals or ambitions or want a better future for its children. But it doesn't mean we can never achieve AGI.
Oh, and to your first question, yes it's a huge number of jobs, maybe half of the jobs in developed nations. And why not? If you can get AI to do the work of the scientist for a tenth of the price, just give it a general role description and a budget and let it rip, with the expectation that it'll identify the most promising experiments, process the results, decide what could use further investigation, look for market trends, and grow the operation accordingly. That's all you need from a human scientist too. Plausibly the same for executives and other roles. Of course maybe sometimes the role needs a human face for press conferences or whatever, and I don't know how AI would be able to take that on, but especially for jobs that are entirely internal-facing, it seems like there's no particular need for a human. Except that maybe, given the above, yes, you still need a human at the helm.
> we'd still need desk jobs to maintain the guardrails.
Agreed. I don't get why people think it is a good idea not to. I'd wager even the AGI would agree. The reason is quite simple: different perspectives help. Really, for mission-critical things it makes sense to have multiple entities verifying one another. For nuclear launches there's a chain of responsibility, and famously those launching have two distinct keys that must be activated simultaneously. Though what people don't realize is that there's a chain of people who each act, and act independently, during this process. It isn't just the president deciding to nuke a location and everyone else carrying out the commands mindlessly. But in far lower-stakes settings... we have code review. Or a common saying in physical engineering as well as among many tradesmen: "measure twice, cut once".
It would be absolutely bonkers to just hand over absolute control of any system to a machine before substantial verification. These vetting processes are in place for a reason. They can be annoying because they slow things down, but they're there because they speed things up in the long run: their existence tends to make things less sloppy, so they are needed less often. But their existence also catches mistakes that, had they been made, would slow down processes far more than all the QA annoyances and slowdowns combined ever could.
> And why not? If you can get AI to do the work of the scientist for a tenth of the price
And what are the assumptions being made here? Equal-quality work? This is part of what my question was getting at. Price is an incredibly naive metric. We use it because we need something, but it's a grave mistake to interpret some metric as more meaningful than it actually is. Goodhart's Law? Or just look at any bureaucracy. I think we need to be more refined than "price". It's going to be god-awfully hard to even define what "equal quality" means. But it seems like you're recognizing that, given your other statements.
And "maintaining guardrails" may be far more grandiose than it sounds. It's like if we have this energy source that could destroy the planet, but the closer you get to it without going past some threshold, the energy you get from it is proportional to the inverse of how close you are to it. There's some wiggle room and you can poke and prod and recover if it starts to go ballistic, but your goal is to extract as much energy (or wealth or whatever) out of it as possible. Every company in the world, every engineer on the planet would be pushing to extract just a little bit more without going beyond the limit.
AI could go the same way. It's a creation engine like nothing that's ever been seen before, but it can also become a destruction engine in ways that we could never understand or hope to counter, and left unchecked, the odds of that soar to near certainty. So the first job is to place dummy guardrails around it. That's where we are now. But soon that becomes too restrictive. What can we loosen? How do we know? How can we recover if we're wrong? We're not quite there yet, but we're not not there either.
Of course eventually somebody is going to trigger it and it's going to go ballistic. Our only hope is that it happens at exactly the right time where AGI can cause enough damage for people to notice, but not enough to be irrecoverable. Maybe we should rename this whole AGI thing to Project Icarus.
The reason AGI couldn't do these is the lack of a suitable interface to the physical world. It would take a trivial amount of effort for such interfaces to be designed and built by the AGI. Humans could be cut from the loop after an initial production run made up of just the subset of these physical interface devices needed to build more advanced ones.