Hacker News | dnadler's comments

It’s not that they don’t have it. It’s that they don’t want it.


Software aside, does AMD have a potential strategic advantage in the long term since they are also producing top-tier CPUs? Is there some future benefit to tightly integrating their products? IIRC Nvidia partnered with Intel for this purpose, right?


I’ve been wondering about this. It used to be that different chip makers had different efficiencies, so measuring two different companies’ chips in gigawatts wouldn’t be a good apples-to-apples comparison.

Is that still true or has the gap narrowed? Or have GPUs always been similar across the board and it’s CPUs that have more disparity?


Franklin Templeton | Quant Implementation Lead | Boston, MA | In-Person | $145k - $190k + Bonus

We're building an end-to-end portfolio construction system to support FTIS, a $100bn multi-asset investment manager within FT. This is a front office role that will help guide our quantitative staff as we build out this new system. This project is a strategic objective for FTIS, highly visible, and frankly one of the most fun jobs in the industry.

We're looking for mid to senior level researchers or developers with strong buy-side backgrounds who can help translate our investment process into a robust, scalable, and approachable design.

Reach out with any questions - dan.nadler (at) franklintempleton (dot) com

Apply here: https://franklintempleton.wd5.myworkdayjobs.com/Primary-Exte...


I think maybe this refers to unlearning wrong information?


Also abstracting. There’s no need to remember every millisecond of its lifetime and consult them all on every query.


I can remember, for example, when I was wrong and how, and still respond correctly; I don’t have to forget my wrong answer to give the correct one.


I have a similar mindset though less focused on the type of questions being asked and more about how many times I have to answer the same question.

Ideally, the number is one time. As in one conversation where the person walks away understanding the answer. If I have to have that conversation more than once it’s a problem.

Obviously there’s nuance - it can take time to get your head around a new concept or hard problem. But in any case, I like that as one dimension when thinking about a person’s skill/level/potential.


> I have a similar mindset though less focused on the type of questions being asked and more about how many times I have to answer the same question.

Yes, I completely agree and do that as well.

The focus on “type of question” has been something I’ve done more recently after helping someone out. Just reflecting on “what type of problem did I just help solve and how can I make it easier for them to solve on their own in the future”. Very often the answer is “more documentation” or similar, getting things only in my head down where everyone can benefit. On the other hand I walk away from some problems I’ve helped with frustrated that the answer was 1-2 Google searches away and the issue had nothing to do with “our stack”.


I mostly agree with you, but there’s another angle that is similar: how many times does the person come to you with a similar question on a slightly different topic, where you need to guide them through _how to find_ the answer? I’ve supervised mid-level engineers in the past who will just drop a stack trace in a Slack DM and expect me to tell them what’s wrong - I didn’t write the code, so why do you expect me to figure it out for you? But when I have the conversation of “we’ve talked about how to debug these kinds of problems a few times now, next time you need to apply these techniques”, it often doesn’t land.


Actually, I think humans require much less energy than LLMs. Even raising a human to adulthood would be cheaper from a calorie perspective than running an AGI algorithm (probably). It's the whole reason why the premise of the Matrix was ridiculous :)

Some quick back-of-the-envelope math says that it would take around 35 MWh to get to 40 years old (2000 kcal per day).
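For what it's worth, the arithmetic behind that figure, assuming 2000 kcal/day for 40 years and using 1 kcal ≈ 1.163 Wh:

```python
# Back-of-envelope: lifetime food energy of a human at 2000 kcal/day.
KCAL_TO_WH = 1.163          # 1 kcal = 1.163 watt-hours
kcal_per_day = 2000
days = 365 * 40             # ignoring leap days for a rough estimate

total_wh = kcal_per_day * KCAL_TO_WH * days
total_mwh = total_wh / 1e6
print(f"{total_mwh:.1f} MWh")  # ~34 MWh over 40 years
```

So "around 35 MWh" checks out to within rounding.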


I read an article once that claimed an early draft/version that was cut for time or narrative complexity had the human brains being used as raw compute for the machines, with the Matrix being the idle process to keep the minds sane and functional for their ultimate purpose.


I've read a file that claimed to be that script; it made more sense for the machines to use human brains to control fusion reactors than for humans to be directly used as batteries.

(And way more sense than how the power of love was supposed to be a nearly magical power source in #4. Boo. Some of the ideas in that film were interesting, but that bit was exceptionally cliché.)


I'd love to read that file. Of course, we're close (really close?) to being able to just ask an LLM to give us a personalized version of the script to do away with whatever set of flaws bother us the most.


One of the ways I experiment with LLMs is to get them to write short stories.

Two axes: quality and length.

They're good quality. Not award winning, but significantly better than e.g. even good Reddit fiction.

But they still struggle with length, despite what the specs say about context length. You might manage the script length needed for a kid's cartoon, but not yet a film.

I'll see if I can find another copy of the script; what I saw was long enough ago my computer had a PPC chip in it.


> PPC chip

Pizza box? I loved the 6100.


Beige proto-iMac. I had a 5200 as a teen and upgraded to either a 5300 or a 5400 for a few years at university; the latter broke there and I upgraded again to an eMac, but I think this was before then.

Looks like there's many different old scripts, no idea which, if any, was what I read back in the day: https://old.reddit.com/r/matrix/comments/rb4x93/early_draft_...

I miss those days. Even software development back then was more fun with REALbasic than today with SwiftUI.


HA! I used REALbasic a bit back in the day, then spent my time comparing it to LiveCode, back then called Revolution. Geoff Perlman and I once co-presented at WWDC to compare the two tools.


You need to consider all the energy spent to bring those calories to you, easily multiplying your budget by 10 or 100.


A human runs on ~100W, even when not doing anything useful. It's entirely plausible that 100W will be enough to run a future AGI level model.


I think the big piece that is being overlooked here is the distance. The distance itself poses significant challenges. The obvious things like resupply and communication are much harder, but the journey to Mars is also much harder on the human body.

Rescue and abort options are harder too. The Moon is close enough to easily resupply or rescue people on the surface; Mars is much harder.


Completely agreed. Distance will impose substantial challenges, but the good thing is that that's really the "only" big challenge there is. I think many people have this mental model where the Moon is easy and Mars is hard, perhaps because we've already set foot on the Moon and so clearly it can't be that bad.

But if somehow both of these bodies were orbiting around Earth, Mars would be just orders of magnitude more straightforward than the Moon, and I think it's relatively likely we'd already have permanent outposts, if not colonies, there. So the mental model of it being viewed as a stepping stone is somewhat misleading. The Moon is hard!

And also I don't think the distance will be that bad. We've already had 374 day ISS stays which is far longer than any possible transit to Mars (though nowhere near as long as a late-stage mission abort would entail) and the overall effects of such a stay were not markedly different than significantly shorter stays on the ISS. So it seems very unlikely that even a late stage emergency abort would be fatal.


I haven’t kept up with python too much over the past year or two and learned a couple new things from this code. Namely, match/case and generic class typing. Makes me wonder what else is new, off to the python docs!


Thanks for pointing that out. Seems a thorough scan of the What's New feature lists is due, going back to maybe 3.7.

For the record, language additions I found interesting (excluding type-hints):

  3.11:
   (Base)ExceptionGroups ; (Base)Exception.add_note() + __notes__
   modules: tomllib
  3.10:
   Parenthesized context managers 
   Structural Pattern Matching - match..:case.. 
   builtins: aiter(), anext() 
  3.9:
   dict | dict ; dict |= dict
   for a in *x,*y: ...    #no need of (*x,*y)
   str.removeprefix , str.removesuffix
   Any valid expression can now be used as a decorator
   modules: zoneinfo , graphlib
  3.8:
   Assignment expressions
   Positional-only parameters
   f'{expr=}'
   Dict comprehensions and literals now evaluate the key first, then the value
  3.7:
   builtins: breakpoint()
   __getattr__ and __dir__ of modules
   modules: contextvars , dataclasses


Misc:

3.10: zip(strict=True)

3.11: asyncio.TaskGroup (structured concurrency), enabled by ExceptionGroup

3.12: itertools.batched(L, n) - it replaces zip(*[iter(L)]*n)
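For illustration, that batching idiom next to its 3.12 replacement (with the recipe from the itertools docs as a fallback for older Pythons):

```python
try:
    from itertools import batched  # new in Python 3.12
except ImportError:
    from itertools import islice

    def batched(iterable, n):  # fallback: the recipe from the itertools docs
        it = iter(iterable)
        while batch := tuple(islice(it, n)):
            yield batch

L = [1, 2, 3, 4, 5, 6]
n = 2

# Old idiom: n references to the SAME iterator, so zip pulls
# n consecutive items into each tuple.
old = list(zip(*[iter(L)] * n))  # [(1, 2), (3, 4), (5, 6)]
new = list(batched(L, n))        # [(1, 2), (3, 4), (5, 6)]

# One behavioral difference: zip silently drops a short final group,
# while batched keeps it.
print(list(zip(*[iter([1, 2, 3])] * 2)))  # [(1, 2)]
print(list(batched([1, 2, 3], 2)))        # [(1, 2), (3,)]
```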


There was also the walrus operator


That's the assignment expression, i.e. :=
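A small sketch of what it looks like in use (3.8+):

```python
# Assignment expression (the "walrus", 3.8+): bind a value and
# use it in the same expression.
data = [1, 2, 3, 4]
if (n := len(data)) > 3:
    print(f"long list ({n} items)")

# Handy in while loops to avoid duplicating the "read" step:
lines = iter(["a", "b", ""])
while line := next(lines):
    print(line)  # prints "a" then "b", stops at the empty string
```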


True, I did not see that. My bad, sorry!


Oof, yeah, this site is really not great on iOS.

The first time I published a site, I was surprised by how much traffic came from mobile devices, even though my page was intended for desktop users. I really shouldn’t have been surprised, but fortunately I had some basic analytics and saw fairly quickly how bad my bounce rate was on mobile and was able to work on it a bit.

