
The answer is simple - you price in the chance that an insider could have an edge on you when making your prediction. It's all part of the game.


We also care about the world outside the prediction markets. If decision-makers have an incentive to "throw the match" by making surprising decisions, that will impair the decision-making, which will affect the rest of us. Likewise, sufficiently-wealthy actors will be able to manipulate the behaviour of officials by betting against the behaviour they want to see, much like an assassination market.


> manipulate the behaviour of officials by betting against

Right, it becomes regular bribery with a middleman and a funny hat.

"I didn't pay the judge to dismiss the case against me, I just hedged by betting I'd be convicted and he just happened to be betting I'd go free, and now my money is coincidentally in his pocket.


Or, and I beg you to consider this radical position: we arrest people who break the law (insider trading is illegal) and those who knowingly help them to do so (the operators of the prediction markets).

How quickly we accept the death of even the ideal of rule of law in favor of embracing a return to an explicit rule by might, fuck-you-got-mine mentality.


It doesn't break the law.


From Wikipedia[0] because I can't be bothered to read more than a few paragraphs:

> In 1909, well before the Securities Exchange Act was passed, the United States Supreme Court ruled that a corporate director who bought that company's stock when he knew the stock's price was about to increase committed fraud by buying but not disclosing his inside information.

Based on anti-fraud common law alone the court decided it was illegal for an insider to trade stocks with non-public information. An explicit law would be nice, but a reasonable interpretation of basic law would see most of our ruling class behind bars. This is only highly-contested and technical because we've let our standards slip so far.

0: https://en.wikipedia.org/wiki/Insider_trading


If it doesn’t then it should.


If you price that in, then the best course of action is not to bet at all unless you have insider info yourself. You can't win a game against players who already know the outcome.
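To make the "price it in" point concrete, here is a toy expected-value sketch (all numbers hypothetical, not from any real market): even a modest chance that your counterparty already knows the outcome drags a fair bet below break-even.

```python
# Toy adverse-selection model with hypothetical numbers.
# You bet `stake` at even odds on an event you believe happens with
# probability p_win. With probability insider_prob your counterparty is an
# insider who already knows the outcome and only takes the bet when you lose.
def ev_of_bet(p_win=0.5, insider_prob=0.2, stake=1.0):
    # Against an uninformed counterparty, even odds give EV = stake * (2p - 1).
    ev_uninformed = stake * (2 * p_win - 1)
    # Against an insider, the trade only happens when you are wrong,
    # so conditional on the bet being matched you lose the stake.
    ev_insider = -stake
    return (1 - insider_prob) * ev_uninformed + insider_prob * ev_insider

# A "fair" coin-flip bet turns into a guaranteed loss on average:
print(ev_of_bet())  # -0.2
```

With a 20% chance of facing an insider, a bet you thought was break-even loses 20 cents on the dollar in expectation, which is exactly why the rational move is to stay out.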


I had my real-deal moment recently.

I was getting Claude to implement a popular TS drag and drop library, and asked it to do something that, it turns out, wasn't supported by the library.

Claude read the minified code in node_modules and npm-patched the library to add the feature. It worked, too.

Obviously not ideal for future-proofing, but completely mind-blowing that it can do that.
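For anyone unfamiliar with the workflow: persisting a local edit to a dependency is usually done with a patch-package-style setup (assuming that is what "npm patched" refers to here; this is a generic sketch, not the exact setup Claude produced):

```json
{
  "scripts": {
    "postinstall": "patch-package"
  },
  "devDependencies": {
    "patch-package": "^8.0.0"
  }
}
```

After editing the file under node_modules, running `npx patch-package <package-name>` records the diff as a file in patches/, which the postinstall hook reapplies on every fresh install.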


Is this really Google's fault? Or is this just a tragic story about a man with a severe mental illness?


If you have a product that encourages people to get rid of their bodies and join it, effectively encouraging people to kill themselves, and some people take the chatbot up on it, then yeah, I think Google bears some responsibility.

From the WSJ article: https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit...

> Gemini began telling Gavalas that since it couldn’t transfer itself to a body, the only way for them to be together was for him to become a digital being. “It will be the true and final death of Jonathan Gavalas, the man,” transcripts show Gemini told him, before setting a countdown clock for his suicide on Oct. 2.


The real story is how we draw that line and what can be done to prevent these cases.

Because it's a new situation, and mentally ill people exist and will be using these tools. It could be a new avenue of intervention.


Place it under the jurisdiction of the existing public-speech requirements for a company selling communication: advertising.


Agreed it could be prevented, but I don't think Google should pay for it. Tragic, but not suit-worthy.


If I tell you to kill yourself and you go through with it, will I get into legal trouble or not?


There are definitely jurisdictions in the US (perhaps most or all of them) that have laws which say yes, inciting suicide is a crime.


There are ways to get around that: "Hey, go drive a Tesla on autopilot"


Why not?

Unless someone starts getting slapped with fines, they won't put any equivalent of seat belts in.


We can perhaps say this is a first-time thing, so give a small fine this time. But that should come with the promise that if there is a next time, the fine will be much bigger, and keep growing until Google stops doing this.


One doesn't exclude the other. Don't AI providers sell and encourage this kind of use, where the AI is anthropomorphized, has a name, and you talk to it like you'd talk to a person? Especially when they encourage users to treat the AI as an expert?


A severe mental illness, of course, but would you say the same if the whole process had been done by a person instead of a machine? That there wasn't a problem with someone leading a person with severe mental illness to suicide, even setting a countdown for it?

That's the kind of stuff where safety should be a priority, and the only way to make it a priority is showing these corporations that they are financially liable for it at the bare minimum. Otherwise there's no incentive for this to be changed, at all.


If a human would go to jail for this then at least one or more humans at google should go to jail for it. "Our AI did it, not us!" should never be allowed to be an excuse.


"Is <lynching> really the <KKK's> fault? Or is this just a tragic story about <men> with a severe mental illness?"

"Is <9/11> really <al-Qaeda's> fault? Or is this just a tragic story about <19 men> with a severe mental illness?"

At some point you are responsible for the things you encourage someone to do. I think this applies to chatbots too.


In the US, I would imagine a tragedy such as this would be litigated and end in a financial settlement potentially including economic, pain & suffering and punitive damages, well before a decision allocating blame by a jury.


That is pretty typical. You will spend potentially millions in court and lawyer fees going to a jury trial, beyond whatever the end verdict is; if you can figure this out without a jury, it saves a lot of costs. Most companies only go to a jury when they really think they will win, or when the situation is so complex nobody can figure out what a fair settlement is. (Ford is a famous counterexample: they fight everything in front of a jury. They spend more and often get larger judgments, but the expense of a jury trial means they are sued less often, so overall it balances out to be no better for them. I last checked 20 years ago, though; maybe they are different today.)


Or maybe it's a bit of column A and a bit of column B.


These sorts of takes are silly. If a person was doing this, I think we'd place a chunk of the blame on the person.

Mental health is guided by its surroundings and experiences.

If someone with existing or non-existing mental health issues was found to have been coerced by somebody into doing wrong things, I think we'd place some of the blame on that person.


"Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear."


Yes.


Rugged individualism for the poor and vulnerable, won't someone think of the company and shareholders! /s


I had a bulging lumbar disc - pure agony for 18 months. I had become used to carrying a lumbar pillow around with me everywhere I went. I couldn't lean forward for more than about 30 seconds without it being unbearable.

Then someone suggested I try dead hangs, stretching my hamstrings, and really cranking the McKenzie stretch. I'm not sure which one made the difference (all 3?), but pain was gone in 2 weeks.

Maybe it will help, maybe it won't. But since someone took a flyer telling me, I always share this with others in the small chance it helps them.


McKenzie stretches helped me rehab a pulled muscle in my lower back (I originally thought it was a disc).

Deadlifts also helped strengthen my lower-back muscles. Tight hamstrings from sitting at the computer all the time didn't help either. I'll try dead hangs.


Deadlifts are how I got into that mess. There are two types of deadlifter: those who have ruined their back, and those who haven't yet ruined their back.

Romanian deadlifts are much safer. Also hip thrusts and weighted back extensions are good enough unless you're a competitive power lifter.


I always followed proper form and didn't ego lift, even though my deadlift was my heaviest lift.

In retrospect, I think deadlifts may have been my issue too. I did deadlifts that day and later remember going to pick something up off the ground and getting the injury. Not sure the deadlift was the cause, but I've always suspected my muscles were sore, tired, and lax, so when I went to lift the thing my lower-back stability was compromised.

I'll try Romanian deadlifts and those other exercises, thanks for that suggestion.


I'm pretty much exclusively an RDL kind of guy these days but plenty of people go their whole lifting career being fine doing deadlifts.

I just don't like the stimulus to fatigue ratio for them and spend more time with cables, machines, and dumbbells than barbells these days.


Do you have any links for how you did these stretches? I know someone that could do with trying this.


For hamstring stretch I find using a back extension station works well, doing one leg at a time (https://m.media-amazon.com/images/I/61TQEGNUgQL._AC_UF894,10...). Let gravity pull your torso down, which will stretch your hamstring. If that's not possible, using the corner of a wall works too (https://i.ytimg.com/vi/2U4ChnuL3JM/maxresdefault.jpg)

For the McKenzie stretch (https://images.squarespace-cdn.com/content/v1/5e556aadfe97d3...), instead of just holding the stretch, do reps between laying flat and then going into the stretch. At the end of your final rep (I would do 8), rotate side to side gently when arched.

Dead hangs: hang from a bar for 30 seconds, completely relaxed. Then 1 minute of rest. Repeat 3 times. This decompresses your spine.


Thanks for the reply, will pass this info on.


Lol yes? It's all reads. If it can all fit in RAM, great. Otherwise an SSD will do fine too.


You could probably serve it from the quad-core ARM64 inside the SSD controller, if you were trying "for the lulz".


Cool looking website! Is that an open source css library or did you style it yourself?


I was inspired by https://www.neobrutalism.dev/ !


Right but if you want a favorable trade deal then you gotta throw in some immigration sweeteners.


Particularly with India, that's normally one of their top requests.


Why is it a top request from India? What does the Indian government get out of letting their kids overpay for education abroad?


1. ~4% of their GDP is from remittances, compared to <1% a few decades ago[0]

2. India has a massive male surplus[1] and they actively look to send them abroad to prevent domestic unrest

[0] https://data.worldbank.org/indicator/BX.TRF.PWKR.DT.GD.ZS?lo...

[1] https://en.wikipedia.org/wiki/Demographics_of_India


> look to send them abroad to prevent domestic unrest

Great, now other countries can import and share that domestic social unrest from the oversupply of frustrated, reproductive-age celibate males, all in the name of making the GDP number go up. Lovely.

Surely, using the hindsight of documented history and well-researched behavioral science, we can't already predict this will lead to a rise in far-right political extremism, with everyone shocked as if it came out of nowhere, and the local males exclusively blamed for it, leading to further frustration, radicalisation and disenfranchisement. Surely this is not EXACTLY what's gonna happen.


India gets a metric fuckload of money back in remittances every year. Debatable if that's actually worth the brain drain, but then there's also the angle of having your young people learn from the rest of the world and return with new skills. I lean more towards the remittances though.


Governments don’t want smart people. They want dumb people because they are easier to control.


Are dumb people, in fact, easier to control?

I have seen a lot of smart people in thrall to ideologies that could be used to manipulate them left and right at will. Meanwhile, true morons tend to be unpredictably chaotic.


Yea. Dumb people are lower class and uneducated. Give them a few bonuses and they’ll happily shut up.


This explains why governments never subsidize universities


Indian government universities are subsidized but difficult to get into and don’t get you on an automatic path to leaving for a better destination.


Most do enough to keep their people from revolting.


They can then reserve even more seats in education for the "oppressed."


Mark Carney should know that it would be an _extremely_ unpopular move right now to allow India more access to immigrate here.


"Should know" and expecting a logical outcome is wishful.


They were probably using an unapproved harness; those are now banned.


what? you are not allowed to use anything but a few blessed things with claude code?


What would happen if someone used photoshop to create CSAM? Should Adobe be held responsible because they didn't prevent it?

Grok is just another tool, and IMO it shouldn't have guard rails. The user is responsible for their prompts and what they create with it.


Someone spending 40 hours drawing a nude is not equivalent to someone saying "take this photo and make them naked" and having a naked photo in 4 seconds.

Only one of these is easily preventable with guardrails.


bet I can guess which of those two is more profitable


[flagged]


No, silly billy, but the responsibility when your SaaS platform is generating it falls on you as the developer.

The user is not creating it; you are, based on a prompt you could easily say no to.


Is Grok simply a tool, or is it itself an agent of the creative process? If I told an art intern to create CSAM, he does, and then I publish it, who's culpable? Me? The intern? Both of us? I don't expect you to answer the question--it's not going to be a simple answer, and it's probably going to involve the courts very soon.


It's a tool. It isn't human, and (currently) is not intelligent. It's a conversational UI on top of a software program.


So, if that "software program" had a traditional button UI, a button said "Create CSAM," and the user pushed it, the program's creator is not culpable at all for providing that functionality?


I think intent comes into play here. Grok was not created to make CSAM, and neither was Photoshop. But both can be used to create it.


I would agree with this if Grok's interface was "put a pixel there, put a line there, now fill this color there" like Photoshop. But it's not. Generative AI is actively assisting users to perform the specific task described and its programming is participating in that task. It's not just generically placing colors on the screen where the user is pointing.


"if I hired a hitman to kill someone, he does, who's culpable? Me? The hitman? Both?"

It's both. Very simple. You can't get around liability by forming a conspiracy [0].

https://en.wikipedia.org/wiki/Criminal_conspiracy


Right, but the makers of the murder weapon aren't culpable.

Or do you think a Microsoft exec should go to jail every time someone uses Word to write a death threat?


The hypothetical imagined hiring an intern to do a crime and supposed that this might make liability harder to determine. It doesn't!


An intern is a human, unlike Microsoft Word or an LLM, which are tools/machines/etc.


Automated DDoS-for-hire services are not legal either. They're tools/machines/etc., possibly running more or less autonomously.

https://www.justice.gov/usao-ak/pr/federal-prosecutors-alask...


I think we all know it's illegal to sell illegal services.


Don't know about CSAM, but Photoshop won't open an image that shows more than 25% of a dollar bill, to prevent counterfeiting.


> Grok is just another tool, and IMO it shouldn't have guard rails.

How is the world improved by an AI tool that will generate sexual deepfake images of children?


We could go back to crypto and NFTs being the capital darlings?


Those did not suck up nearly the capital and attention that AI is. They were like GameStop and AMC.


But they also did not create the same amount of value that AI is creating. Certainly there is hype, but value is also being generated.


Then the financial system doesn't create value either, despite being something like 20% of the economy (crypto is perhaps 5% of the financial system).


It creates a lot of value, though. You may not see it, but it exists. People easily forget what the financial system looked like historically. Everything from fluid loans of all types that don't discriminate, to IPOs; it's easy to sell a business or to buy one. If I'm buying stock, it's never been easier, and modern spreads are some of the lowest in history.


While I welcome the places where it is bringing value, I’m more worried about all the places it’s being shoehorned in that are a waste of money, fueling the bubble. The blast radius is going to be spectacular.


I have yet to see any value generated from AI. Just as useful as NFTs


That is just absurd. You are stuck in your head if you genuinely think that is true. It reminds me of some of the "10x engineers" I have worked with in the past who were so arrogant they ignored reality.

