The Higgs boson was predicted theoretically in 1964 and discovered at the LHC at CERN in 2012-2013. With that, all elementary particles of the Standard Model of particle physics have been found.
From the 1970s to the 2010s, many physicists favored a theory called supersymmetry, which predicted supersymmetric partner particles for the known elementary particles. But those partners should already have appeared at the energies probed by the LHC, and they have not.
For the first time, there is no mainstream theory predicting new particles within experimental reach. Maybe the next, bigger particle collider will find no new particles at all?
A collider produces far more than new particles or explanations. It produces papers and PhDs; in effect, its primary goal is to produce STEM careers. The new particles are just the public announcements. The collider doesn't even need to be functional: much, if not most, of the work occurs before first light, before anyone turns it on. Designing the ring and its innumerable detectors and subsystems takes decades. So a great many people want the next collider funded regardless of its potential for scientific discovery.
The same discussion can happen regarding the ISS. Its primary purpose was not science. It existed to give the Shuttle a parking spot, to keep the US crewed space program ticking along, and to keep a thousand Russian rocket engineers from going to work for whichever countries would hire them. The ISS will soon end. Are we going to put up a new one, a place to park Starliner and Dragon? Or are we going to shut down low-Earth-orbit spaceflight? The decision will not turn on the potential for new science; rather, it will be about supporting and maintaining a flagship industry.
> thousand Russian rocket people from going to work for bad people.
Just like for the Germans before!
I agree with you that it is an educational tool, but if that's all it is, there are cheaper ways to educate that might also have a higher likelihood of scientific discovery. To build a new collider, we should have some specific things we're trying to do or find.
> The same discussion can happen regarding the ISS. Its primary purpose was not science.
But it's worth noting that many experiments took place on the ISS, covering several domains; examples include AMS (cosmology), CAL (quantum physics), SAFFIRE (combustion), and Veggie (botany/sustainability).
And the LHC did science too. But in both cases, the amount of science generated was arguably not worth the money, and/or the same could have been accomplished at far lower cost by other means.
> The Higgs boson was predicted theoretically in 1964 and discovered at the LHC at CERN in 2012-2013. With that, all elementary particles of the Standard Model of particle physics have been found.
Before the LHC (Large Hadron Collider, CERN), earlier experiments had lower raw data rates and recorded event rates:
SppS (CERN): MB/s; 1-10 Hz
SLC (SLAC, Stanford): 50 MB/s; 2 Hz
LEP (CERN): 100 MB/s; 1-5 Hz
Tevatron (Fermilab, Chicago): 250 GB/s; 100-400 Hz
HERA (DESY): 500 MB/s; 5-20 Hz
LHC CMS/ATLAS (CERN): 40 TB/s; 1000 Hz
Planned or proposed successors: HL-LHC (CERN; roughly 10x the LHC), FCC-ee (CERN), and FCC-hh (CERN).
What about unconfirmed composite (non-elementary) particles, whether part of the Standard Model or beyond it?
And what about superfluids and supersolids (like spin-nematic liquid crystals)? Are those just phases of matter? Is the phase diagram complete for all particles?
Dataframes first appeared in S-PLUS in 1991-1992. R then reimplemented S, and from roughly 1995-1997 onwards R grew in popularity among statisticians. As free and open-source software, R started to take over the market among statisticians and others who had been using other statistical software, mainly SAS, SPSS, and Stata.
Given that S and R existed, why were they mostly not picked up by data analysts and programmers in 1995-2008, with dataframes only becoming broadly popular through Python and Pandas from 2008 onwards?
Pandas started about 18 years ago as a project by someone working in finance who wanted to use Python instead of Excel, while having something nicer than raw Python dicts and NumPy arrays.
For better or worse, like Excel and like the simpler programming languages of old, Pandas lets you overwrite data in place.
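To make that concrete, here's a minimal sketch (toy data; the column names are made up for illustration) of the in-place, spreadsheet-like style:

```python
import pandas as pd

# Toy data with made-up column names.
df = pd.DataFrame({"city": ["Oslo", "Bergen"], "temp_c": [5.0, 7.0]})

# Pandas, like a spreadsheet, lets you overwrite cells in place:
df.loc[df["city"] == "Oslo", "temp_c"] = 12.0

print(df["temp_c"].tolist())  # [12.0, 7.0]
```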
Polars comes from a more modern data-engineering philosophy, in which data is immutable. In Polars, if you ever wanted to do such a thing, you'd write a pipeline that processes and replaces the whole column.
If you are just interactively playing around with your data and want to do it in Python rather than Excel or R, Pandas might still hit the spot. Or use Polars, and if need be temporarily convert the data to Pandas or even to a NumPy array, manipulate it, and convert back.
P.S. Polars does have an optimization for overwriting a single value.
The Polars code puts me off as too verbose and requiring too many steps. I love the broadcasting ability that Pandas gets from NumPy. It's what scientific computing should look like, in my opinion. Maybe R, Julia, or some array-based language does it a bit better than NumPy/Pandas, but it's certainly nothing like the Polars example.
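For instance (toy data, made-up column names), whole-column arithmetic reads like the formula it implements:

```python
import pandas as pd

# Toy data with made-up column names.
df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 30.0]})

# Operators broadcast elementwise over whole columns, so the code
# reads like the math: c = 2a + b.
df["c"] = 2 * df["a"] + df["b"]

print(df["c"].tolist())  # [12.0, 24.0, 36.0]
```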
Polars is indeed more verbose when you come from pandas, but in my experience that is an advantage when you're reading the same code after not having touched it for months.
pandas is write-optimized: you can quickly and powerfully transform your data, and once you're used to it, it lets you get your work done fast. But figuring out what is happening in that code when you return to it a while later is a lot harder than with Polars, which is more read-optimized. That read-optimized API coincidentally allows the engine to perform more optimizations, because all the implicit knowledge about the data must be typed out instead of kept in your head.
I don't agree that more verbose code is necessarily more readable when the shorter code looks like familiar math. All you have to do is learn how operators broadcast across array-like structures and how slicing and filtering work. Perhaps with more complicated examples the shorter code becomes harder to read after months away? Mathematicians manage to handle a lot of compact equations.
No doubt some of this comes down to preference about what counts as readable. I never really bought the argument that regular expressions create more problems than they're worth. Perhaps I side with expressivity in the readability debate.
Oh, I don't mean to say that verbosity makes code more readable by default; I agree with you there. I mostly meant that because the API is declarative (geared more toward describing the result you want than the operations), it is easier to understand what's going on. A side effect is that it can be more verbose, which is the case with Polars vs. pandas.
In the end it's a personal thing which one you like most. If your deliverable is the insights you get out of your analysis, a less verbose API is practical for getting things done quickly. But if you create pipelines that your colleagues (or you, in a couple of months) have to quickly understand, a read-optimized one makes more sense, even though it might take slightly more effort to write.
Likewise, I was considering trying Polars until I saw that example. The pandas example is a good approximation of how I think about and want to transform/process data, even if it is ugly under the hood. I do occasionally find NumPy and pandas annoying with respect to when they return a view vs. a copy, but the cure seems worse than the disease.
He built a famous career, becoming a professor and a center director at Stanford University, on meta-research into the quality of other people's research and on critiquing the methodology of other people's studies. Then during Covid he tried to do some original empirical research of his own, and his methods and statistical data analysis were even worse than what he had critiqued in other people's work.
Roomba (specifically the brand of the American company iRobot) only added lidar in 2025 [1]. The earliest Roombas navigated by touch (bumping into walls), and later models by cameras.
But if you use "roomba" as a generic term for robot vacuums, then yes: the Chinese companies Ecovacs and Xiaomi introduced lidar-based robot vacuums in 2015 [2].
> The earliest Roombas navigated by touch (bumping into walls)
My ex got a Roomba in the early 2010s and it gave me an irrational but everlasting disdain for the company.
They kept mentioning their "proprietary algorithm" like it was some amazing futuristic thing but watching that thing just bump into something and turn, bump into something else and turn, bump into something again and turn again, etc ... it made me hate that thing.
Now when my dog can't find her ball and starts senselessly roaming in all the wrong directions in a panic, I call it Roomba mode.
> Suppose you pass a parameter, N, and you also would like to pass a tensor, and you would like to specify the tensor's shape (N, N).
You can do that, and it might be cleaner and take fewer lines of code that way.
But you don't necessarily need to pass the array dimensions as a parameter, as you can call `size` or `shape` to query it inside your function.
program main
  implicit none
  real :: a(2, 2) = reshape([1., 2., 3., 4.], [2, 2])
  call print_array(a)
contains
  ! Assumed-shape dummy argument: the extents travel with the array,
  ! so the caller does not need to pass them separately.
  subroutine print_array(a)
    real, intent(in) :: a(:, :)
    integer :: n, m, i, j
    n = size(a, 1)
    m = size(a, 2)
    write(*, '("array dimensions:", 2i3)') [n, m]
    do i = 1, n
      do j = 1, m
        write(*, '(f6.1, 1x)', advance='no') a(i, j)
      end do
      print *
    end do
  end subroutine print_array
end program main
This is not valid C, though. The characters allowed in identifiers are defined by Unicode Standard Annex #31 (which C23 adopts), and characters easily understood as operators, such as arrows, are not among them.
> The Higgs boson was predicted theoretically in 1964 and discovered at the LHC at CERN in 2012-2013. With that, all elementary particles of the Standard Model of particle physics have been found.
> From the 1970s to the 2010s, many physicists favored a theory called supersymmetry, which predicted supersymmetric partner particles for the known elementary particles. But those partners should already have appeared at the energies probed by the LHC, and they have not.
> For the first time, there is no mainstream theory predicting new particles within experimental reach. Maybe the next, bigger particle collider will find no new particles at all?
https://www.youtube.com/watch?v=YlixMNBlQos