Hacker News | timoth3y's comments

I thought about that as well. It's certainly a concern.

In the end I decided that the concrete benefits of giving Anthropic access to this kind of data outweigh the potential risks. Granted, they might be banking on me making this exact, naive calculation, but still.


The saving grace of the S&P 500 and most similar indexes is that they are cap-weighted and float-adjusted. So if SpaceX floats only 5%, only that 5% of its capitalization counts toward the index calculation.

The Nasdaq 100 is more complicated. SpaceX's 5% float would be counted as about 25% of its total market cap for indexing.
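A minimal sketch of what float adjustment means for an index weight. The numbers below are made up purely for illustration (SpaceX's actual valuation and float are not public), but they show how a 5% float shrinks a company's weight relative to its full market cap:

```python
def float_adjusted_weights(companies):
    """Compute float-adjusted, cap-weighted index weights.

    companies: list of (name, market_cap, float_fraction) tuples.
    Only the floated portion of each company's capitalization counts.
    """
    float_caps = {name: cap * frac for name, cap, frac in companies}
    total = sum(float_caps.values())
    return {name: fc / total for name, fc in float_caps.items()}

# Hypothetical: SpaceX worth $1T but floating only 5%,
# versus a fully floated $950B incumbent.
weights = float_adjusted_weights([
    ("SpaceX", 1_000_000_000_000, 0.05),   # counts as only $50B
    ("BigTech", 950_000_000_000, 1.00),    # counts in full
])
print(round(weights["SpaceX"], 3))  # 0.05
```

With full-cap weighting, SpaceX would be roughly half the index (1,000/1,950 ≈ 51%); float adjustment cuts that to 5%.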


It's worse than that, because the S&P 500 and Nasdaq 100 share stocks, including all of the Mag 7. So if Mag 7 stocks dip because they're being structurally sold to buy SpaceX, then the S&P 500 goes down too.

Arguably even worse, because at least the Nasdaq 100 would have SpaceX in it, getting bid up to offset the losses in other stocks. The S&P won't have SpaceX right away, so it just goes down.

And the more those stocks go down, the lower their market cap, which means that at the next rebalancing date they potentially get re-weighted again, causing a bit more selling, and so on. Presumably the companies that can will counter this with more buy-backs to keep their share price propped up at an acceptable level (?).


A Multitudes study recently cited in Scientific American showed exactly this.

AI led not only to longer hours overall, but also to a shift from development to bug fixing and a 19.6% increase in out-of-hours commits. So: longer hours, less interesting tasks, and more weekend work.

https://www.scientificamerican.com/article/why-developers-us...


Every few years a bill is introduced requiring profitable companies to pay additional taxes to cover the cost of the SNAP (food stamp) benefits received by their employees.

Lobbying ensures such proposals never get far, but it seems like a common-sense way of ensuring that these funds subsidize people rather than corporations.


A Google survey of 5,000 developers finds AI helps developers release more software—while logging longer hours and fixing problems after the code goes live.

It seems that LLMs always do the enjoyable work and leave us with the drudgery.

It was supposed to do the dishes while we create art and write poetry, but it turns out it gets to create the art and poetry while we wash the dishes. AI gets to write the code while we have to review it and fix the bugs.


William Shatner is someone I really wish I could dislike. I mean, he is certainly not a conventionally talented singer or actor. He's laughably, painfully bad sometimes.

But the man keeps going! He's one of the hardest-working people in show business. He clearly takes his craft very seriously, even if he defines it a bit differently from the rest of the world.

The Wrath of Khan has no business being as great a movie as it is, and his version of Common People is fantastic.

I'm sure this collaboration will be ... something else.

Edit: I'm sure I am over-analyzing this (I do that with everything), but Common People is actually "perfect" Shatner.

When you start listening, you feel, "OK, this is lame." After a bit it clicks and becomes, "Oh! I see what they are trying to do here." And by the end it's, "Damn! This is awesome."

Shatner doesn't change throughout the performance, but everything just falls into place around him.


> But the man keeps going! He's one of the hardest-working people in show business. He clearly takes his craft very seriously, even if he defines it a bit differently from the rest of the world.

I still think having him (of Star Trek) open AFI's tribute to George Lucas (of Star Wars) was genius:

* https://www.youtube.com/watch?v=oEZVwQptvWw

(Also love Mike Myers' AFI tribute for Sean Connery.)


His acting is laughably, painfully bad, and then suddenly incredibly poignant, and for some reason, the whole time it's bad, I'm subconsciously thinking, "oh, this part doesn't count." It's so easy to root for him.


His style seems quite Shakespearean, performed in a slightly over-the-top way to entertain the live crowds of the time, rather than in the realistic style popular nowadays with close-up filming.


Shatner single-handedly made American Psycho 2 better than the original. It's so awful it's great.

(I know no one else in the world feels this way.)


I didn't know there was even a sequel.

Alien 3 is my favourite in the Alien franchise, so there's a chance I'll see it your way. But then I love American Psycho and Christian Bale's portrayal, so maybe I'm already out of the running.


Maybe. It has a very different tone from the first movie. It's kind of like how the Sleepaway Camp sequels went all goofy and campy.

I love shitty campy horror movies from the 80s/90s, so it works for me.


Denny Crane!


The article fundamentally misrepresents what AI is doing.

It claims that people using AI to create works that violate copyright is equivalent to individual artists painting pictures or people writing fan fiction. But that is not at all what is happening.

OpenAI and others are taking money from customers to generate copyrighted works. That's black-letter copyright infringement.

The article states that it is unreasonable to go after all the individual customers. That's true, but that's not how copyright law has ever been enforced. If a company is selling copyrighted works without permission, you go after that company, not its customers.


Many of the original Looney Tunes and Warner Brothers cartoons fall into this category.

The reason they were produced from the 1930s to the '50s was to be run in movie theaters before the main picture. Since they would run before different kinds of movies, they had to entertain both kids and adults. Some of the humor in those cartoons clearly went way over the kids' heads.

It was only later that they were bundled as TV shows for children.


What meaningful connections did it uncover?

You have an interesting idea here, but looking over the LLM output, it's not clear what these "connections" actually mean, or if they mean anything at all.

Feeding a dataset into an LLM and getting it to output something is rather trivial. How is this particular output insightful or helpful? What specific connections gave you, the author, new insight into these works?

You correctly, and importantly, point out that "LLMs are overused to summarise and underused to help us read deeper," but you published the LLM summary without explaining how the LLM helped you read deeper.


The connections are meaningful to me insofar as they get me thinking about the topics, another lens to look at these books through. It's a fine balance between being trivial and being so out there that it seems arbitrary.

A trail that hits that balance well IMO is https://trails.pieterma.es/trail/pacemaker-principle/. I find the system theory topics the most interesting. In this one, I like how it pulled in a section from Kitchen Confidential in between oil trade bottlenecks and software team constraints to illustrate the general principle.


Can you walk me through some of the insights you gained? I've read several of those books, including Kitchen Confidential and Confessions of an Economic Hit Man, and I don't see the connection that the LLM (or you) is trying to draw. What deeper insight into these works am I missing?

I'm not familiar with the term "Pacemaker Principle," and a Google search was unhelpful. What does it mean in this context? What else does this general principle apply to?

I'm perfectly willing to believe that I am missing something here. But reading through many of the supportive comments, it seems more likely that this is an LLM Rorschach test, where we are given random connections and asked to do the mental work of inventing meaning in them.

I love reading. These are great books. I would be excited if this tool actually helps point out connections that have been overlooked. However, it does not seem to do so.


> Can you walk me though some of the insights you gained?

This made me realize that so many influential figures have either absent fathers or fathers who berated them or didn't give them their full trust/love. I think there's something to the idea that this commonality is more than coincidence. (That's the only topic on the site I've read through so far, and I ignored the highlighted word connections.)


> we are given random connections and asked to do the mental work of inventing meaning in them

How is that different from having an insight yourself and later doing the work to see if it holds on closer inspection?


Don't ask me to elaborate on this, because it's kind of nebulous in my mind. I think there's a difference between arriving at an insight and interrogating it on your own initiative, and simply being given the same insight.


I don't doubt there is a difference in the mechanism of arriving at a given connection. What I don't think is possible is distinguishing between the connection someone made intuitively after reading many sources and the one the AI makes, because both will have to undergo scrutiny before being accepted as relevant. We can argue there could be a difference in quality, depth, and search space, maybe, but I don't think there is an ontological difference.


The one that you thought of in the shower has a much greater chance of being right, and also of being relevant to you.


Does it? Why?


Because humans aren't morons tasked with coming up with 100 connections.


That doesn't explain why a connection made in the shower has inherently more merit than one an LLM was instructed to come up with.


Not sure how to make it clearer. Look at the quality of this post, and compare it to your shower thoughts. I imagine you're not as stupid as the machine was.


I like the design that highlights words in one summary and links them to highlights in the next. It's a cool idea.

But so many of the links just don't make sense, as several comments have pointed out. Are these actually supposed to represent connections between books, or is it just a random visual effect that's supposed to imply they're connected?

I clicked on one category, and it has "Us/Them" linked to "fictions" in the next summary. I get that it's supposed to imply some relationship, but I can't parse the relationship.


100 books is too small a dataset, particularly given that it's a set of HN recommendations (i.e., a very narrow and specific subset of books). A larger set would probably produce more surprising and interesting groupings.


> 100 books is too small a datasize

That sounds off to me. I read the same 8 to 10 books over and over, and with every read I discover new things. The idea that more books are more useful stands against rereading the same books on repeat. And while I'm not religious, what about people reading only one book (the Bible, or the Koran) and claiming for a thousand years that they're getting all their wisdom from it?

If I have a library of 100+ books and they are not enough, then isn't the quality of those books the problem, rather than the number of books in the library?


I think if we are going to ban people under 16 from social media, we should also ban people over 70 from social media.

At least as much mental and societal damage is done by the elderly falling for bigoted, scammy, manipulative nonsense online as by teenagers having their self-esteem lowered.

As recent holiday gatherings have shown us, the young handle social media far better than the elderly.

/s

