
No, at initial release, the human genome from the NIH side was done BAC by BAC (clone by clone), not by whole-genome shotgun.

You are confusing the Human Genome Project with the Celera genome project. No, the Human Genome Project didn't include his sample.

It gets a little fuzzy when talking about Celera and the human genome project. The two efforts were very much competitors, but there was a lot of crossover (mainly from Celera pulling in the public data).

But Venter claimed that his DNA made up a good chunk of the genome that Celera sequenced, so I think it's fair to say he was one of the people included in the draft human genome (at least the Celera version of it).

> After leaving Celera in 2002, Venter announced that much of the genome that had been sequenced there was his own. [1]

[1] https://www.technologyreview.com/2007/09/04/223919/craig-ven...


I am not sure which "draft human genome" you are talking about. Two separate human genomes were published in 2001: the HGP genome and the Celera genome. The HGP genome didn't use Venter's DNA, and it evolved into the current human reference genome. The Celera genome contained Venter's DNA, but it has been largely forgotten nowadays.

Yes. For folks looking for more:

* Celera genome, first published 2004: https://www.ncbi.nlm.nih.gov/datasets/genome/GCF_000002115.1...

* Human reference genome, first published 2001 and most recently updated in 2022: https://www.ncbi.nlm.nih.gov/datasets/genome/GCF_000001405.4...


Well, they can keep stealing as long as someone open-weights their models.


Going from something like "Go lacks a builtin arena allocator" to "Go risks becoming the next COBOL" is a long stretch. First, Go is slower than C/C++/Rust even without considering memory allocation; introducing an arena allocator won't fix that. Second, arena allocation often doesn't work for a lot of allocation patterns. Third, a plain arena allocator is easy to implement when needed. Sure, a builtin one would be better, but Go won't fall without it.


How is "fully booked" a real shame?


Needing to book days in advance makes it unusable for short-notice trips (vs. driving), and due to the demand they basically doubled prices. It's now more expensive to take Acela than it is to take a plane; that wasn't the case a decade ago.


Rail should be easy to use.

I live in Switzerland where people are so comfortable taking the train they treat it like an extension of their living room.

Only in rare cases do I even book tickets in advance, like when going to Milano… otherwise I just use the Fairtiq app, a nationwide system for paying for tickets, including buses and trams…

You swipe right before you step on, swipe left when you step off and the system automatically calculates the best ticket for you.

There’s no such thing as “fully booked”.


Switzerland is also the size of one of the smallest states in America.


And what’s your point?


It's easy to do in a small area, harder in a big one like the US.


I visited Switzerland recently and loved the train network. One really awesome feature was that the train stations basically doubled as shopping malls. Which makes a lot of sense, imo!

We'd leave our room for the day, have breakfast at a restaurant or coffee shop in the train station, then jump on the train to whatever outing we had planned. At the end of the day, we'd take the train back, pick up some groceries at one of the grocery stores in the station (I saw at least two major grocery stores in our station), and then head to the room and make dinner. I also needed to visit a pharmacy at one point during our stay, and the only pharmacy open at that sleepy hour was at the train station.

The train stations are really major hubs for the towns. Even if I didn't need to take the train that day, I was still likely to make a trip down to the train station for something. It's smart.


PyPy is 10x faster and is compatible with most CPython code. IMHO it was a big mistake not to adopt a JIT during the 2-to-3 transition.
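The speedup mostly shows up on tight pure-Python loops. Here is a toy benchmark that runs unchanged under both CPython and PyPy (timings are machine-dependent, and the ~10x figure is a rough rule of thumb, not a guarantee):

```python
import time

def count_primes(limit):
    """Naive trial-division prime count: a pure-Python hot loop,
    no C extensions, so a JIT can speed it up dramatically."""
    count = 0
    for n in range(2, limit):
        is_prime = True
        d = 2
        while d * d <= n:
            if n % d == 0:
                is_prime = False
                break
            d += 1
        if is_prime:
            count += 1
    return count

start = time.perf_counter()
result = count_primes(50_000)
elapsed = time.perf_counter() - start
print(f"{result} primes in {elapsed:.3f}s")
```

Run the same file with `python3` and then with `pypy3` to compare wall-clock times.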


That “most” is doing a lot of heavy lifting there. At some point you might consider that you’re actually programming in the language of PyPy and not pure Python. It’s effectively a dialect of the language, like Turbo Pascal vs. ISO Pascal, or RPerl instead of Perl.


"Most" is still more CPython code than Python 3 was compatible with. And porting the broken code was likely much easier than it would have been if the language had also moved to a JIT at the same time.


Isn't there a JIT coming in 3.14?


Are there studies showing that those paying $200/month to OpenAI/Claude are more productive?


Anecdotally, I can take on and complete the side projects I've always wanted to do but didn't due to the large amounts of yak shaving or unfamiliarity with parts of the stack. It's the difference between "hey wouldn't it be cool to have a Monte Carlo simulator for retirement planning with multidimensional search for the safe withdrawal rate depending on savings rate, age of retirement, and other assumptions" and doing it in an afternoon with some prompts.
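For what it's worth, the core of that kind of Monte Carlo retirement simulator fits in a few dozen lines. This is a minimal sketch, not the commenter's actual project: all names and parameters are hypothetical, and it draws i.i.d. normal returns, whereas real simulators typically use historical or bootstrapped return series:

```python
import random

def simulate_retirement(balance, withdrawal_rate, years=30, trials=10_000,
                        mean_return=0.07, stdev=0.15, seed=42):
    """Estimate the probability a portfolio survives `years` of
    fixed annual withdrawals, via Monte Carlo simulation."""
    rng = random.Random(seed)
    annual_withdrawal = balance * withdrawal_rate
    survived = 0
    for _ in range(trials):
        b = balance
        for _ in range(years):
            # Apply one year of (randomly drawn) market return, then withdraw.
            b = b * (1 + rng.gauss(mean_return, stdev)) - annual_withdrawal
            if b <= 0:
                break
        else:
            survived += 1  # portfolio never hit zero
    return survived / trials

# Sweep withdrawal rates to find the highest "safe" one (e.g. >= 95% survival).
for rate in (0.03, 0.04, 0.05, 0.06):
    print(f"{rate:.0%}: {simulate_retirement(1_000_000, rate):.1%} survival")
```

The multidimensional search mentioned above would just sweep more parameters (savings rate, retirement age) in nested loops.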


Out of curiosity, how complex are these side projects? My experience is that Claude Code can absolutely nail simple apps, but as the complexity increases it seems to lose its ability to work through things without burning tokens on constantly being reminded of the patterns it needs to follow. At the very least, that diminishes the enjoyment of it.


It varies, but they're not necessarily very complex projects. The most complex one that I'm still working on is a Java Swing UI that runs multiple instances of Claude Code in parallel, with separate chat histories and the ability to make progress in the background.

If you need to repeatedly remind it of something, you can store it in CLAUDE.md so that it becomes part of every chat. For example, in mine I have asked it not to invoke git commit directly but to review the commit message with me before committing, since I usually need to change it.

There may be a maximum amount of complexity it can handle. I haven't reached that limit yet, but I can see how it could exist.


Simple apps are the majority of use cases though. To me this feels like what programming/using a computer should have been all along: if I want to do something I'm curious about, I just try it with Claude, whereas in the past I'd mostly be too lazy/tired to program after hours in my free time (even though my programming ability exceeds Claude's).


Well, that's why I'm curious. I've been reading a lot of people saying the Max plan has 100x'd their productivity and that they're getting a ton of value out of Claude Code. I too have had moments where Claude Code did amazing things for me. But I find myself in a bit of a valley of despair at the moment, as I'm trying to force it to do things that it turns out not to be good at.

I'm just worried that I'm doing it wrong.


There are definitely things it can't do, and things it hilariously gets wrong.

I've found though that if you can steer it in the right direction it usually works out okay. It's not particularly good at design, but it's good at writing code, so one thing you can do is write the classes and some empty methods yourself with // TODO Claude: implement, then ask it to implement the methods marked TODO Claude in file foo. That way you get the structure that you want without having to implement all the details.

What kind of things are you having issues with?
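That skeleton-first pattern might look like this in Python, with the marker as a `#` comment instead of `//` (the class and method names here are invented for illustration):

```python
class ReportGenerator:
    """Structure decided by the human; method bodies delegated to the model."""

    def load_records(self, path: str) -> list[dict]:
        # TODO Claude: implement -- parse the CSV at `path` into a list of dicts
        raise NotImplementedError

    def summarize(self, records: list[dict]) -> dict:
        # TODO Claude: implement -- compute totals and averages per category
        raise NotImplementedError
```

You then ask it to fill in every method marked "TODO Claude", keeping the signatures fixed.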


This has nothing to do with AI, but might help: All complex software programs are compositions of simpler programs.


I work at an Amazon subsidiary, so I kind of have an unlimited GPU budget. I agree with the siblings: I'm working on 5 side projects I have wanted to do as a framework lead for 7 years. I do them in my meetings. None of them take production traffic from customers; they're all nice-to-haves for developers. These tools have dropped the cost of building them massively. It's yet to be seen whether they'll do the same for maintaining them, or for spinning back up on them later. But given that AI built several of them in a few hours, I'm less worried about that cost than I was a year ago (when I was not building them at all).


It's subjective, but the high monthly fee would suggest so. At the very least, they're getting an experience that those without are not.


People here have little idea of how Harvard works. Harvard is financially vulnerable. It is currently running a deficit even counting the endowment, and Harvard can't freely use most of the endowment for personnel anyway. If the government takes away funding, Harvard will have a financial crisis. I guess the leadership made the decision in the hope that someone stops the government before bad things happen, but if they do happen, you will probably see mass layoffs of researchers, particularly in the life sciences and biomedical research.


I mean, we literally just saw what happened at JHU when their USAID funding vanished. Everybody on that soft money got laid off.

That’s what makes stands like this hard for admin: you’re risking massive layoffs in the programs that are often the least political to defend the academic freedom of the programs that are often the most political. Columbia made one decision. Harvard is making another. You could make Lord Farquaad jokes here, but if it alone loses its federal funding in these expensive research areas, it will lose its preeminence in those areas for a long time.


I guess Harvard saw the decision at Columbia made the situation worse [1], so they decided to make a different one.

[1] https://www.science.org/content/article/nih-freezes-all-rese...


Some universities should make sacrifices for academic freedom, yes. That's what they are there for!


I wouldn't say this so easily if I were the one being sacrificed, especially as a visa holder.


With $50B in the endowment, how are they financially vulnerable? Honest question.


Much of the endowment is earmarked towards specific ends. It is not a slush fund for discretionary spending.


Earmarked implies discretionary, so it is discretionary.


That's not what discretionary means in this context. The funds were earmarked at the discretion of the originator; once earmarked, they are no longer available for any purpose at the trustee's discretion, which means they are no longer discretionary. You are confusing "was once earmarked at someone's discretion" with "is discretionary now"; the funds stopped being the latter at the moment the originator earmarked them.


Most of it is not discretionary, no matter what words random Internet commenters use to describe it.


I am replying to the GP, who must be mistaken. It was Harvard's choice to operate this way financially.


I understand. I am saying they are correct that much of Harvard's endowment is not discretionary, even if they accidentally used a term that implies that it is.


Parts of this article are opinionated. curl may be well written, but that is more likely the result of its overall structure than of the number of characters per line. Actually, I don't know whether curl is well written; popularity doesn't always equate to code quality. I have used curl's APIs before, and I don't like them.


std::deque typically uses chunked arrays. That is more complex, but it tends to be faster than a ring-buffer-based implementation.
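For contrast, here is what the ring-buffer alternative looks like, sketched in Python for illustration (std::deque itself is C++): the defining trait is wrap-around indexing over one contiguous array, where a chunked design instead keeps a table of fixed-size blocks so it can grow without moving existing elements.

```python
class RingDeque:
    """Fixed-capacity double-ended queue backed by one contiguous array.

    Both ends are O(1) via modular index arithmetic; growing beyond
    capacity would require reallocating and copying the whole buffer,
    which is the cost a chunked deque avoids.
    """

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0   # index of the front element
        self._size = 0

    def push_back(self, value):
        if self._size == len(self._buf):
            raise OverflowError("ring buffer full")
        self._buf[(self._head + self._size) % len(self._buf)] = value
        self._size += 1

    def push_front(self, value):
        if self._size == len(self._buf):
            raise OverflowError("ring buffer full")
        self._head = (self._head - 1) % len(self._buf)
        self._buf[self._head] = value
        self._size += 1

    def pop_front(self):
        if self._size == 0:
            raise IndexError("pop from empty deque")
        value = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return value
```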

