Yeah, leaving hanging loops with a gentle bend radius is very common as long as the loop is secured, and does not cause problems. Maybe something pulled on it though?
I hear "people rarely use unsafe rust" quite a lot, but every time I see a project or library with C-like performance, there's a _lot_ of unsafe code in there. Treating bugs in unsafe code as not being bugs in rust code is kind of silly, also.
Exactly. You don't need much unsafe if you use Rust to replace a Python project, for instance. If there is lower-level code, or high-performance needs, things change.
For replacing a Python project with Rust, unsafe blocks will comprise 0% of your code. For replacing a C project with Rust, unsafe blocks will comprise about 5% of your code. The fact that the percentage is higher in the latter case doesn't change the fact that 95% of your codebase is just as safe as the Python project would be.
A big amount of C code does not do anything unsafe as well, it calls other stuff, do loops, logic business, and so forth. It is also wrong to believe 100% of the C code is basically unsafe.
If so, then it should be trivial for someone to introduce something like Rust's `unsafe` keyword in C such that the unsafe operations can be explicitly annotated and encapsulated.
Of course, it's not actually this trivial because what you're saying is incorrect. C is not equipped to enforce memory safety; even mundane C code is thoroughly suffused with operations that threaten to spiral off the rails into undefined behavior.
It is not so hard to introduce a "safe" keyword in C. I have a patched GCC that does it. The subset of the language which can be used safely is a bit too small to be a full replacement on its own, but also not that small.
C lacks safe primitives or non-error-prone ways to build abstractions that refer to business objects. There are no safe string references, let alone ways to safely manipulate strings. Want to iterate over or index into a result set? You can try to remember to put bounds checks into every API function.
But even with explicit bounds checks, C has an ace up its sleeve.
    int cost_of_nth_item(int n) {
        if (n < 0 || n >= num_items)
            return -1; // error handling
        …
    }
Safe, right? Not so fast, because if the caller has a code path that forgets to initialize the argument, it’s UB.
You're swapping definitions of unsafe. Earlier you were referring to the `unsafe` keyword. Now you're using `unsafe` to refer to a property of code. This makes it easy to say things like "It is also wrong to believe 100% of the C code is basically unsafe" but you're just swapping definitions partway through the conversation.
What I see is that antirez claims that absence of "safe" (as syntax) in C lang doesn't automatically mean that all of C code is unsafe (as property). There's no swapping of definitions as I see it.
I think there's a very clear switch of usage happening. Maybe it's hard to see so I'll try to point out exactly where it happens and how you can spot it.
First from antirez:
> You don't need much unsafe if you use Rust to replace a Python project, for instance. If there is lower-level code, or high-performance needs, things change.
Use of the term `unsafe` here refers to the keyword / "blocks" of code. Note that this statement would be nonsensical if it were talking about `unsafe` as a property of code; it would certainly be inconsistent with the later usage, since later it's claimed that C code is not inherently "unsafe" (therefore Rust would not be inherently "unsafe").
Kibwen staying on that definition here:
> For replacing a Python project with Rust, unsafe blocks will comprise 0% of your code. For replacing a C project with Rust, unsafe blocks will comprise about 5% of your code.
Here is the switch:
> A big amount of C code does not do anything unsafe as well
Complete shift to "unsafe" as being a property of code, no longer talking about the keyword or about blocks of code. You can spot it by just rewriting the sentences to use Rust instead of C.
You can say:
"A big amount of 'unsafe' Rust code does not do anything unsafe as well"
"It is also wrong to believe 100% of the unsafe Rust code is basically unsafe."
I think that makes the conflation of terms clear, because we're now talking about the properties of the code within an `unsafe` block, or globally in C. Note how obvious it is in these sentences that the term `unsafe` has been swapped; we can see this by referring to "Rust in unsafe blocks" explicitly.
This is just a change of definitions partway through the conversation.
p.s. @Dang can you remove my rate limit? It's been years, I'm a good boy now :)
High performance is not an on/off target. Safe rust really lets you express a lot of software patterns in a "zero-cost" way. Sure, there are a few patterns where you may need to touch unsafe, but safe rust itself is not slow by any means.
There is a reason for this. A lot of libraries were written at a time when the Rust compiler either rejected sound and safe code, so you had to reach for `unsafe`, or `core` didn't yet deliver safe abstractions.
For your last sentence, I believe topics are conflated here.
Of course if one writes unsafe Rust and it leads to a CVE then that's on them. Who's denying that?
On the other hand, having to interact with the part of the landscape that's written in C mandates the use of the `unsafe` keyword and not everyone is ideally equipped to be careful.
I view the existence of `unsafe` as pragmatism; Rust never would have taken off without it. And if 5% of all Rust code is potentially unsafe, well, that's still much better than C where you can trivially introduce undefined behavior with many built-in constructs.
Obviously we can't fix everything in one fell swoop.
> Of course if one writes unsafe Rust and it leads to a CVE then that's on them. Who's denying that?

> The recent bug in the Linux kernel Rust code, based on my understanding, was in unsafe code, and related to interop with C. So I wouldn't really classify it as a Rust bug.
Why is glue code not normal code in Rust? I don't think anyone else would say that for any other language out there. Does it physically pain you to admit it's a bug in Rust code? I write bugs in all kind of languages and never feel the need for adjectives like "technical", "normal", "everyday" or words like "outlier" to make me feel not let down by the language of choice.
I have worked with Rust for ~3.5 years. I had to use the `unsafe` keyword, twice. In that context it's definitely not everyday code. Hence it's difficult to use that to gauge the language and the ecosystem.
Of course it's a bug in Rust code. It's just not a bug that you would have to protect against often in most workplaces. I probably would have allowed that bug easily because it's not something I stumble upon more than once a year, if even that.
To that effect, I don't believe it's fair to gauge the ecosystem by such statistical outliers. I make no excuses for the people who allowed the bug. This thread is a very good demonstration as to why: everything Rust-related is super closely scrutinized and immediately blown out of proportion.
As for the rest of your emotionally-loaded language -- get civil, please.
I don't care if there can be a bug in Rust code. It doesn't diminish the language for me. I don't appreciate mental gymnastics when evidence is readily available, and your comments come across as a compulsive defense of something nobody was really attacking. I'm sorry for the jest in the comments.
I did latch onto semantics for a little while, that much is true, but you are making it look much worse than it is. And yes, I get PTSD and eye-roll syndrome from the constant close scrutiny of Rust, even though I haven't actively worked with it in a while now. It gets tiring to read, and many interpretations are dramatically negative for no reason other than some imagined "Rust zealots always defending it", which I have not seen in a long time here on HN.
But you and I seem to be much closer in opinion and stance than I thought. Thanks for clarifying that.
The bug in question is in Rust glue code that interfaces with a C library. It's not in the Rust-C interface or on the C side. If you write Python glue code that interfaces with numpy and there's a bug in your glue, it's a Python bug, not a numpy bug.
I already agreed that technically it is indeed a bug in the Rust code. I would just contest that such a bug is representative, is all. People in this thread seem way too eager to extrapolate, which is neither intellectually curious nor fair.
HN gets a lot less sockpuppeting/astroturfing than reddit or twitter, as far as I can tell. There is some of it, but if it gets too big dang et al seem to generally put a stop to it.
Yes, but that is the standard methodology for startups in their boost phase. Burn vast piles of cash to acquire users, then find out at the end if a profitable business can be made of it.
Scams are our entire economy now. Do whatever you can to own a market, then squeeze your customers miserably once you have their loyalty. Cash out, kick the smoking remains of the company to the curb, use your payout to buy into another company, and repeat.
Most startups have big upfront capital costs and big customer acquisition costs, but small or zero marginal costs and COGS, and eventually the capital costs can slow down. That's why spending big and burning money to get a big customer base is the standard startup methodology. But OpenAI doesn't have tiny COGS: inference is expensive as fuck. And they can't stop capex spending on training because they'll be immediately lapped by the other frontier labs.
The reason people are so skeptical is that OpenAI is applying the standard startup justification for big spending to a business model where it doesn't seem to apply.
> Even at $200 a month for ChatGPT Pro, the service is struggling to turn a profit, OpenAI CEO Sam Altman lamented on the platform formerly known as Twitter Sunday. "Insane thing: We are currently losing money on OpenAI Pro subscriptions!" he wrote in a post. The problem? Well according to @Sama, "people use it much more than we expected."
So just raise the price or decrease the cost per token internally.
Altman also said 4 months ago:
> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
Only in as much as their product is a pure commodity like oil. Like yes it’s trivial to get customers if you sell gas for half the price, but I don’t think LLMs are that simple right now. ChatGPT has a particular voice that is different from Gemini and Grok.
2. "Postponed" is entirely normal within British politics. For most of my life, even the timing of general elections was at the whim of the government.
3. Given how Starmer polls, the delay is almost certainly going to make things un-favourable for him. Also unfavourable for Labour, unless they kick him out first.
Not that it would matter much if your conspiracy theory held water, given that one of the many constitutional problems the UK has is that local councils have negligible power (their hands are tied all over the place) and therefore local elections are functionally little more than opinion polls done in a voting booth.
"My conspiracy theory" is a set of facts that you and the source you cited (whose so-called "fact check" was mostly an attempt to put them into context) find really inconvenient. Fact 1: Starmer and his party are polling badly. Fact 2: Starmer and his party delayed several local elections, many of which were in constituencies where they currently hold power and won't afterward. This is not a good set of facts, regardless of how little those elections matter.
Your conspiracy theory is that Starmer got anything out of delaying, which you overstated as "cancelled", a thing commonly delayed in British politics.
I literally agreed with you in my original post (i.e. before you replied to me, unless you're both accounts) that he's not popular.
What indication do you have that the construction time for tunnel 3 is due to corruption or even that it's taking longer than necessary? It seems like a very large engineering project; sometimes those take time.
Yeah, I have no idea how long a tunnel of this size is supposed to take, and I’m surprised if many people here do.
It’s a big project, and it is tricky to patch it after release. The thing is supposed to last 300 years, and usually we use infrastructure well past its intended lifespan…
Few things in Europe compare to the size of NYC. A potentially comparable project would be the Elizabeth line in London. Took from 1948 to 2008 to agree on a plan and then 15 years to execute it.
The bill in favour of the Elizabeth Line was only put to parliament in 2005, receiving royal assent in 2008. Construction work began in 2009, faced some delays during COVID, but was completed in 2022 (total construction time: 13 years)
Construction on New York's Tunnel #3 began in 1970. It was 28 years before any part of it was operational. A second section came online 15 years later (2013). The final stage isn't expected to be completed until 2032, a full 62 years after construction began. I'm unaware of any comparable tunnel project which has progressed at this slow of a pace.
The Thames Tideway Tunnel might be a better comparator.
It's similar in scope to this recently-completed second phase of NYC Tunnel #3, albeit carrying sewage rather than fresh water: 25 km long, 7.2 m in diameter in London vs 29 km long, 4.9 m diameter in NYC. Flow volumes are likely similar (a sewage tunnel will rarely run full).
Anglosphere construction costs are through the roof in general; the same problem is happening in the UK and Canada but not in places like Spain or Japan. Comparing a project to Anglosphere norms is like comparing your cooking to English food.
Google claims the original build was supposed to take 50 years, and it will take 62 due to delays from a funding crisis before de Blasio.
However, this is only the second phase of the plan, with two more phases broken out into separate projects. I've no idea if those were supposed to be a part of the original 50 year timeline or not.
> What indication do you have that the construction time for tunnel 3 is due to corruption or even that it's taking longer than necessary?
These two questions are casually put next to each other in the same sentence but they're incredibly different. Personally, I don't think that corruption is a significant factor in how long it took. The second question is way too leading/framed - "necessary" doesn't exist past the physical limits.
For example, would the same project have taken the same time in China? No. Does that mean it should've taken as long as it would've in China, as clearly it took longer "than necessary"? Not by definition.
Your link shows that NYC is very corrupt! It ranks #3 in corruption prosecutions since 1976. On a population-adjusted basis, it’s probably #1. Your article tries to account for population, but does so incorrectly. It overlooks that NDIL includes the entire Chicago metro area (over 10 million people), not just the city of Chicago (2.7 million). CDCA includes the entire LA metro area, plus surrounding cities like Santa Barbara (total almost 19 million people). In contrast, SDNY includes just Manhattan and a few counties north of the city (Westchester, etc), totaling about 3 million people. And EDNY includes just Brooklyn and Long Island. The NYC metro area also is covered by DNJ (which is also very corrupt) and probably a bit of D. Conn as well.
Focusing on SDNY, it's about 462 federal corruption convictions per million people (current population) since 1976. D. Mass is at about 107.
The relevant part is the last three decades, as the article explains. The US as a whole has seen a decline in federal corruption prosecutions, but NYC leads that decline. Quote:
> However, beginning in the 1990s, the number of corruption convictions there began to rapidly decline, so much so that by the beginning of the 2020s, Manhattan’s position relative to other areas had flipped. It now boasts the fewest corruption convictions of any major city area.
As noted above, the article uses the wrong numbers for each district’s population. Even just looking at the most recent two periods (2010-present) SDNY has got about double the per-capita rate of D. Mass.
EDNY + SDNY together have about the same population as NDIL (about 10 million), but have a higher number of corruption convictions than NDIL during each period in the chart except 2020-21.
Why do people say NYC is more corrupt? I don't know of evidence or reports. To me, it doesn't seem more or less corrupt than other major cities in the US. It's hard to compare to other countries, where city government may have different roles.
Certainly NY's government and budget are larger than other US cities, for obvious reasons.
Thanks for the link! I'm not sure the incident you name is meaningful, but here is some evidence at last (from 2017):
> "the highest construction costs in the world"
> "The estimated cost of the Long Island Rail Road project, known as “East Side Access,” has ballooned to $12 billion, or nearly $3.5 billion for each new mile of track — seven times the average elsewhere in the world. The recently completed Second Avenue subway on Manhattan’s Upper East Side and the 2015 extension of the No. 7 line to Hudson Yards also cost far above average, at $2.5 billion and $1.5 billion per mile, respectively."
> For years, The Times found, public officials have stood by as a small group of politically connected labor unions, construction companies and consulting firms have amassed large profits.
> Trade unions, which have closely aligned themselves with Gov. Andrew M. Cuomo and other politicians, have secured deals requiring underground construction work to be staffed by as many as four times more laborers than elsewhere in the world, documents show.
> Construction companies, which have given millions of dollars in campaign donations in recent years, have increased their projected costs by up to 50 percent when bidding for work from the M.T.A., contractors say.
> Public officials, mired in bureaucracy, have not acted to curb the costs. The M.T.A. has not adopted best practices nor worked to increase competition in contracting, and it almost never punishes vendors for spending too much or taking too long, according to inspector general reports.
etc. Also, this is based on extensive research:
> The Times brought the list to more than 50 contractors, many of whom had worked in New York as well as in other cities. The Times also interviewed nearly 100 current and former M.T.A. employees, reviewed internal project records, consulted industry price indexes and built a database to compare spending on specific items. And The Times observed construction on site in Paris, which is building a project similar to the Second Avenue subway at one-sixth the cost.
As for London, they built an entire industry around hiding money for oligarchs who stole it from their own countries. Maybe it's technically legal, but it's morally corrupt AF.
As a German I'll say that even acknowledging there is a corruption problem (while still being unwilling to change it and not voting for the parties that let corruption fester) puts them a good step ahead of all those thinking there's no real corruption.
No studies, personal impressions, so I might well be wrong and maybe they all know but don't care. No majority that cares either way.
As another German, I think there is different kinds of corruption. There is low-level and high-level.
Low-level is when you bribe individual cops, city clerks, etc so they let you go instead of writing a speeding ticket or approving your house building plan.
High-level is when people like Merz receive a political donation from McDonald's, do some self-promotion in one, and then keep/lower the MwSt (VAT) for restaurants.
Germany unfortunately has high-level corruption but, as far as I know, very little low-level. I think that's partially why people don't care to vote differently. Yes, it happens, but there is a large disconnect between what Merz does and how it impacts an individual's bottom line.
If people had to constantly hand out bribes to everyone, then maybe it's a different story.
The people who use something should pay for its upkeep. It doesn't matter if that makes it a "regressive" tax. If you are a daily user of a road, you should pay more for its upkeep than someone who doesn't use the road.
Why should a delivery driver pay the toll for the road to my house, and not me? Why should I be able to exploit flat-rate product pricing like that and skim some money from all customers of the delivery service?
Why should I pay the toll to drive to a friend's house? They're the one getting the benefit out of having easy access to transportation.
What if my taxes pay for all the roads in my town, while the neighboring town chooses to implement tolls instead? Why should I get double-taxed? Prisoner's dilemma and race-to-the-bottom?
Why should I have to deal with having my license plate stolen, and police time wasted (who are paid out of taxes), because of people who don't pay the tolls?
I'm fine with a decently fair registration tax to offset the gas taxes, but the one in my state is the equivalent of 1,000 gallons of gas for the state gas taxes. If the car was a 35mpg hybrid that would be 35,000mi of equivalent driving. This is incredibly unfair.
35,000 mi of driving is not anywhere near out of the question if you're a daily commuter who takes road trips once in a while. If you're driving a truck or a non-hybrid, it's also a lot less mileage. It sounds like this is actually close to what you would be expected to use.
It weighs about as much as the smallest base model F150. Optioned out models and other trim levels easily hit 1,000lbs+ heavier.
Meanwhile that base model equivalent weight F150 gets about 24mpg and thus pays about half as much gas tax while doing the same amount of damage driving the average mileage. Further proving my point, I pay twice the state fuel tax for an equivalent weight pickup truck. Is that fair?
But also, isn't the whole point of the pickup truck to load it down? If all it's doing is carrying 1-4 people its whole life, that seems like a lot of waste. I'm told people buy trucks to actually load them down a lot, not just to commute and get groceries? So while it's about the same weight dry and unloaded, shouldn't its weight really be quite a bit more in practice? Or are we all agreeing now that trucks are just for commuting and getting groceries?
Now you've moved the goal posts to about half of your original claim. And it's still not accurate (links have already been provided elsewhere in this thread). And the only thing I've owned in the last 30 years that gets 25 mpg is a camper van (and, no, that thing doesn't move anywhere near 15K miles/year).
> With that information, the British newspaper calculated that BEVs [battery electric vehicles] could expose roads to 2.24 times more damage than gas cars.
If that's true, then 12-15k miles in an EV would be equivalent to 27-33k miles in a gas car.. so "taxes equivalent to 35k miles" isn't far off.
The average driver also doesn't get 35 mpg driving regularly. The average driver probably gets around 20 mpg, and that would make this distance about 15000 mi.
They also chose a car that's extremely heavy (by virtue of the battery), so they create more road wear per mile than the average American car. The point is that tax rate seems fair.
> With that information, the British newspaper calculated that BEVs [battery electric vehicles] could expose roads to 2.24 times more damage than gas cars.
If that's true, then 12-15k miles in an EV would be equivalent to 27-33k miles in a gas car.. so "taxes equivalent to 35k miles" isn't far off.
The EV tax applies to people who a) cause a disproportionate amount of wear & tear on the roads vs ICE vehicles and b) are generally higher income in the state.
When you look at taxation from a "charge the people who use it" or the "the rich should pay more" perspective, this appears to address both.
Is the problem simply that you want to pay less taxes?
No, I just want to pay a fair amount of taxes. Honestly the gas taxes should be increased or we should move to a tax structure where it's mileage, weight, and emissions based.
Paying 3x the same taxes while having fewer externalities isn't fair.
> With that information, the British newspaper calculated that BEVs [battery electric vehicles] could expose roads to 2.24 times more damage than gas cars.
If that's true, then 12-15k miles in an EV would be equivalent to 27-33k miles in a gas car in the externalities of road wear & tear.. so "taxes equivalent to 35k miles" is at most 25% higher in a "damage per mile equivalent" but could be as little as 6% using the averages.
If your actual mileage is over 15625/year, then you're paying less than the equivalent.
27 isn't 35 no matter how many times you say it is.
> If your actual mileage is over 15625/year, then you're paying less than the equivalent.
The average is less than that by a decent bit, so more than half of US cars are paying more even with your unproven, contorted math based on some estimates done once in the 70s and never really looked into closely again.
It's also assuming a difference in weight. The closest hybrid I would have bought instead is only about 100 kg lighter than my EV. And it gets about 40 mpg, better than 35 mpg.
It would also mean semi trucks should pay like 20,000x more in registration fees. Does this make sense?
> What's your annual mileage?
Less than 15k on that car (like most people), so even with your assumed math it's overpaying.
Semi trucks pay huge amounts in gas taxes because they guzzle gas like nobody's business. It's only the EVs that aren't paying for their road wear in gas taxes.
The average class 8 truck (>33,000 lbs) burns under 11,000 GGE a year; the ratio is 1 GGE = 1.13 gal of diesel. So somewhere under 12,500 gal of diesel on average, but we'll use that to lean even more in the truck's favor.
Are you suggesting the average car burns less than 1 gallon of gas a year?
A 20mpg car driving 12,500mi (the average ICE in the US) would use 625gal of gas. So more like 20x, maybe 40x if the per gallon tax of diesel is double. Pretty dang far off from 20,000x.
And they're doing way more miles while being massively heavier, meaning incredibly more harm on the road than whatever EV you're thinking.
Registration fees are likely the same or close but when you factor in gas taxes (the original comparison here), the Ford is definitely paying more both based on fuel type and mpg.
Not sure where you are but in Indiana, gas tax for unleaded is 36c while diesel is 62c so on a per-gallon basis, that's an additional +72% in taxes. Back of the envelope: Civic at 30mpg pays 1.2c/mile vs SuperDuty at 15mpg pays 4.13c/mile so the multiple is closer to 3.4 vs 2
So yes - assuming registration fees are comparable and mileage is comparable - the SuperDuty should pay more.
The lightest SuperDuty has a gas engine. Diesel SuperDuty fuel economy is a bit better, but the vehicle also weighs more and is likely to be carrying/pulling more. But regardless of whether the multiple is 2 or 3.4 or somewhere in between, it is a small fraction of the added road wear.
By the fourth power law, an unloaded diesel Superduty creates ~22x the road wear of a honda civic. Loaded can be 100x more.
I do agree the relationship probably isn't linear, but the fourth power rule doesn't necessarily have numerous studies confirming it. It was a small collection of studies on road wear the US highway administration did in the 1950s and pretty much everyone has just gone with that. Other studies have pointed to it being less than previously thought.
Thanks for the insight but my claim was never "12,000mi is really 35,000mi"
Regardless, it would be interesting to see the actual number worked through to see what the equivalent EV registration fee should be if road damage/maintenance is the sole factor.
> If the car was a 35mpg hybrid that would be 35,000mi of equivalent driving.
> that's true, then 12-15k miles in an EV would be equivalent to 27-33k miles in a gas car.. so "taxes equivalent to 35k miles" isn't far off
You absolutely did suggest me paying taxes for 12k miles is practically the same as ~35k miles, you said it several times. That it's not far off. How else am I supposed to read that? You were so sure of it you mentioned it many times.
> Regardless, it would be interesting to see the actual number worked through to see what the equivalent EV registration fee should be if road damage/maintenance is the sole factor.
Sure, but it's likely far less than what I'm paying. As mentioned elsewhere, a similar weight unloaded F-150 pays half the taxes. So I'm at least paying double for similar weight vehicles, and yet you tell me it's really only 6%. But sure, tell me again how I'm really just paying my fair share and 12 = 35.
> If that's true, then 12-15k miles in an EV would be equivalent to 27-33k miles in a gas car in the externalities of road wear & tear.. so "taxes equivalent to 35k miles" is at most 25% higher in a "damage per mile equivalent" but could be as little as 6% using the averages.
^ As you quoted, I used the formula to estimate 12k would be equivalent to 27k and said paying taxes equivalent to be 35k miles is "at most 25% higher", neither of which is "12 = 35". Using their approach, I calculated 35k to be equivalent to 15625 specifically, again, not 12k.
If the underlying approach is wrong, we should replace it with something better.
Alternatively, the OTHER reasoning of "the rich should pay more" still applies, so I assume that's a factor here. Hoping States charge rich people (or high income earners, if you prefer) less isn't likely to work right now.
> Alternatively, the OTHER reasoning of "the rich should pay more" still applies, so I assume that's a factor here.
Once again, your assumption is incorrect. That base model F-150 that pays half the taxes costs more than my EV. The registration fee doesn't factor in income or valuation at all. A $100k Hummer EV pays the same as a $15k used Bolt. Meanwhile that Hummer EV is going to do a hell of a lot more damage to the roads than the Bolt.
It probably has more to do with the government being in the pocket of oil interests and acts accordingly.
it's pretty silly to have a tax that incentivizes the opposite behaviour to what you want. registration surcharges benefit the people who drive the most, at the expense of the people who drive the least.
if you're trying to pay for wear and tear on the roads, or reduce congestion, making people feel like they have to "get their moneys worth" on the registration surcharge really isn't helping.
You can if you just run PTP (almost) entirely on your NIC. The best PTP implementations take their packet timestamps at the MAC on the NIC and keep time based on that. Nothing about CPU processing is time-critical in that case.
Well, if the goal is for software running on the host CPU to know the time accurately, then it does matter. The control loop for host PTP benefits from regularity. Anyway NICs that support PTP hardware timestamping may also use PCI LTR (latency tolerance reporting) to instruct the host operating system to disable high-exit-latency sleep features, and popular operating systems respect that.
> The control loop for host PTP benefits from regularity.
How much regularity? If you sent PTP packets with 5 milliseconds of randomness in the scheduling, would that cause real problems? They would still carry accurate timestamps.
> instruct the host operating system to disable high-exit-latency sleep features
Why, though? You didn't explain this. As long as the packet got timestamped when it arrived, the CPU can ask the NIC how many nanoseconds ago that was, and correct for how long it was asleep. Right?
> PTP packets with 5 milliseconds of randomness in the scheduling
This should not matter, unless you are a 5G telecom operator running at a high frequency. Gaussian noise in the master is not important to PTP. Being a master is easier than being a slave.
If you are running PTP at 128 Hz like a telecom, delays that large might lead to slaves resetting their state machines, which would blow the whole thing up.
> The CPU can ask the NIC how many nanoseconds ago that was
The CPU can indeed ask the NIC what time it is, but then the CPU has to estimate how long ago the NIC answered the question. If the PCI bus is in L1, it will take tens to hundreds of microseconds (no hard upper bound; it could be forever) to train up to L0. The host has to determine this delay and compensate for it, because the PCI bus transition is much longer than the desired error in PTP. The easiest way is to repeatedly read the time, discard the outliers, and divide the estimated delay in half. This technique is used by various realtime Ethernet stacks. You will note that this is effectively the same as disabling ASPM. This is also why they invented PCIe 3.0 PTM.
I see nothing in your pair of unnecessarily belligerent comments that actually contradicts what I said. There are host-side features that enable the clock discipline you are observing, even if you are apparently not aware of them.
This is a really helpful contribution - if only everyone could be as smart as you.
If mine are somehow too belligerent for you (which is hilarious, given how arrogant and belligerent your initial comment and responses come off as; maybe you are not aware?), then perhaps you'd like to actually engage with any of the other comments that point out how wrong you are in a meaningful way?
Or are those too belligerent as well?
Because you didn't respond to any of those, either.
I should have said "most _professional_ FPGA users" because I assume many people here who don't know this (including the author of the article) are not.