contravariant's comments | Hacker News

They should really update those links. It could be a coincidence, but about half seemed to redirect me to a service that was discontinued or continued under a different name.

Ah, I hadn't realised development is falling behind a bit (the addon was last released 6 months ago, though it's possible it doesn't need updating, I guess).

Could just be the webpage to be fair, but it's not a great look.

After trying a few others I do think I was a bit unlucky with my first few tries.


Both of these do, in a way. They just differ in which Gaussian distribution they're fitting.

And in how, I suppose. PCA is effectively moment matching, while least squares is maximum likelihood. These correspond to the two ways of minimizing the Kullback-Leibler divergence to or from a Gaussian distribution.
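As a toy illustration of the difference (my own sketch, with made-up data): the PCA direction falls straight out of the sample covariance, i.e. the matched second moments, while the OLS slope is the maximum-likelihood fit of y given x under Gaussian noise.

    import numpy as np

    # Illustration only: toy data, not from any real problem.
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 2 * x + rng.normal(scale=0.5, size=200)

    # PCA: leading eigenvector of the sample covariance (moment matching).
    cov = np.cov(np.column_stack([x, y]), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pca_dir = eigvecs[:, -1]                # eigh sorts eigenvalues ascending
    print("PCA slope:", pca_dir[1] / pca_dir[0])

    # OLS: maximum-likelihood slope for y | x under Gaussian noise.
    print("OLS slope:", cov[0, 1] / cov[0, 0])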


I mean, we've had to cope with users for ages; this is not that different.


Hey we have the same tree!

    (()(()((())((())(()())))))


Mine looks weird but it's decorated:

    class Node:
        def __init__(self, left=None, right=None):
            self.left, self.right = left, right

    # Purely decorative: no-op decorators, so the ornaments don't break anything.
    def star(f): return f
    def lights(f): return f
    def balls(f): return f

    @star
    @lights
    @balls
    def christmas_tree():
        n1 = Node()
        n2 = Node()
        n3 = Node()
        n4 = Node()
        n5 = Node()
        n6 = Node()
        n7 = Node()
        n8  = Node(n1, None)
        n9  = Node(n2, n3)
        n10 = Node(n4, n5)
        n11 = Node(n8, n9)
        n12 = Node(n6, n7)
        n13 = Node(None, n11)
        n14 = Node(None, n12)
        return Node(n13, n14)

    christmas_tree()


Pinus newickii


It is a moral one, but laws have no morality.


While that is indeed one of the causes, it does feel a bit like whataboutism to point it out under an article explaining the scam.


Ultimately I think the paradox comes from mixing two paradigms that aren't really designed to be mixed.

That said, you can give a Bayesian argument for p-circling, provided you have a prior on the power of the test. The details are almost impossible to work out except by case-by-case calculation, because unless I'm mistaken the shape of the p-value distribution when the null hypothesis does not hold is very ill-defined.

However, it's quite possible to give some examples where intuitively a p-value of just below 0.05 would be highly suspicious. You just need to mix tests of high power with unclear results. Say, for example, you're testing the existence of gravity with various objects and you get a probability of <0.04 that objects just stay in the air indefinitely.
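Here's a rough sketch of that intuition (everything below is made up for illustration: a one-sided z-test with a large true effect): under the alternative the p-values pile up near 0, so landing in the 0.04-0.05 window is far more likely under the null, where the p-value is uniform.

    import numpy as np
    from scipy.stats import norm

    # Toy one-sided z-test with a large true effect (assumed for illustration).
    rng = np.random.default_rng(0)
    n, effect, sims = 100, 0.5, 100_000

    # Sampling distribution of the test statistic under the alternative.
    z = rng.normal(effect * np.sqrt(n), 1, sims)
    p = norm.sf(z)                          # one-sided p-values

    in_window = ((p > 0.04) & (p < 0.05)).mean()
    print("P(0.04 < p < 0.05 | alternative):", in_window)  # far below 0.01
    print("P(0.04 < p < 0.05 | null):", 0.01)              # p uniform under H0

So seeing p just under 0.05 from a test you believe has high power should actually shift you towards the null (or towards suspecting the analysis).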


Don't use them for the parts that are fuzzy.

I mean, it should be obvious that executive decisions about what exactly the code should do should only be left to an RNG-powered model if the choices made are unimportant.


Why is figuring out what UI elements to capture so much harder than just looking at the network activity to figure out what API calls you need?


I'm confused; of course if you await immediately it's not going to have a chance to do anything else _before_ returning.

If you do the following, it works as expected:

    import asyncio

    async def child():
        print("child start")
        await asyncio.sleep(0)
        print("child end")

    async def parent():
        print("parent before")
        task = child()
        print("parent after")
        await task

    asyncio.run(parent())
The real difference is that the coroutine is not going to do _anything_ until it is awaited, but I don't think the asyncio task is really different in a meaningful way. It's just a wrapper with an actual task manager so you can run things 'concurrently'.
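For what it's worth, the 'concurrently' part is where the wrapper does matter; a sketch (reusing the child from above, with an extra yield point so the task gets a turn):

    async def parent_with_task():
        print("parent before")
        task = asyncio.create_task(child())  # scheduled, not started yet
        await asyncio.sleep(0)               # yield once: child runs up to its sleep
        print("parent after")
        await task                           # child finishes here

    asyncio.run(parent_with_task())
    # parent before / child start / parent after / child end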

Python does have two different kinds of coroutines, but they're generators and async functions. You can go from one to the other.
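For example (a minimal sketch: types.coroutine makes a plain generator awaitable, and a coroutine object can be driven with .send() exactly like a generator):

    import types

    @types.coroutine
    def gen_style():
        result = yield "from generator"     # a plain generator, made awaitable
        return result

    async def async_style():
        return await gen_style()

    coro = async_style()
    print(coro.send(None))                  # "from generator" surfaces via await
    try:
        coro.send("done")                   # resume; the coroutine returns
    except StopIteration as stop:
        print(stop.value)                   # "done"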

