
I think the point is more that the users for his job were external developers. The role is inherently user-facing and user-focused. I don't think anyone was trying to say he wasn't a developer, just that his job wasn't to directly develop products.

Yeah, I guess I just wanted to add that because the way that quote was cut off at the end made me believe that the person quoting me thought Osmani "isn't a developer".

Here is how I think of it. When I am actively developing a feature I commit a lot. I like the granularity at that stage, and typically it is for an audience of one (me). I push these commits up to my feature branch as a sort of backup. At this stage it is really just whatever works for your process.

When I am ready to make my PR I delete my remote feature branch and then squash the commits. I can use all my granular commit messages to write a nice, verbose message for that squashed commit. Rarely I will have more than one commit, if a user story was bigger than it should have been; usually that happens when more necessary work is discovered. At this stage each larger squashed commit is a fully complete change.

The audience for these commits is everyone who comes after me to look at this code. They aren't interested in seeing that it took me 10 commits to fix a test that only fails in a GitHub Actions runner. They want the final change with a descriptive commit message. Also, if they need to port this change to an earlier release as a hotfix, they know there is a single commit to cherry-pick to bring in that change. They don't need to go through that dev commit history to track it all down.
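
The mechanics are just standard git; a rough sketch (the branch and release names here are made up, adjust for your own setup):

    # on the feature branch, full of granular WIP commits
    git fetch origin
    git rebase -i origin/main          # mark everything after the first commit as "squash"
    git push --force-with-lease origin my-feature

    # later, porting that squashed commit to a release branch as a hotfix
    git checkout release/1.2
    git cherry-pick <squashed-commit-sha>

You can get the same end result with git merge --squash from the target branch instead of an interactive rebase; it's just a matter of taste.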


The "cleaner" commit history should be a separate layer and the actual commit history should never be altered.

Ooph, good to know. Thanks for the update.


Speaking as someone who has set this up while not being a DBA or sysadmin:

Replication and backups really aren't that difficult to set up properly with something like Postgres. You can also expose metrics around this to set up alerting if replication lag goes beyond a threshold you set or a backup doesn't complete. You do need to periodically test your backups, but that is also just good practice.
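
The lag numbers are all sitting in pg_stat_replication on the primary; something along these lines (assuming Postgres 10 or newer, with thresholds being whatever makes sense for you) is what you feed into your alerting:

    -- run on the primary; one row per connected standby
    SELECT application_name,
           client_addr,
           state,
           replay_lag,                                          -- time-based lag
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)
               AS bytes_behind                                  -- byte-based lag
    FROM pg_stat_replication;

An exporter, or even a cron job, scraping that view is enough to drive a "lag over N seconds" alert.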

I am not saying something like RDS doesn't have value, but you are paying a huge premium for it. Once you get to a more steady state, owning your database totally makes sense. A cluster of $10-20 VPSes with NVMe drives can get really good performance and will take you a lot farther than you might expect.


I think the pricing of the big three is absurd, so I'm on your side in principle. However, it's the steady state that worries me: when the box has been running for 4 years and nobody who works there has any (recent) experience operating Postgres anymore. That shit makes me nervous.


More than that, it's easier than it ever was to set up, but we live in the post-truth world where nobody wants to own their shit (both figuratively and concretely)...


Even easier with sqlite thanks to litestream.


datasette and datasette-lite (WASM w/pyodide) are web UIs for SQLite with sqlite-utils.

For read-only applications, it's possible to host datasette-lite and the SQLite database as static files on a redundant CDN. Datasette-lite + a URL redirect API + litestream would probably work well, maybe even with read-write; though electric-sql also has a sync engine (with optional partial replication), and there's PGlite (Postgres in WebAssembly).


Yes. Also, you can have these Postgres replicas across regions.


Hopefully this was some bug and not a sign of Apple starting to go down a user-hostile path. I have Apple everything (iPhone, MacBook, iPad, HomePods, the works), so I'm not someone who dislikes the company. It just left a really bad impression.


Can't edit, so self-reply: nope, this was intentional, and it looks like I am about 5-6 days late on this one. I didn't see another post on this here, though, so hopefully this helps someone.

So just a crappy move from them, then. Going to look hard at other options for all future purchases. Hopefully we get a reversal on this customer-hostile stuff.


Very crappy. This is a lame move on Apple's part.


UGH


I don't vibe code yet, but it has sped me up a lot when working with large frameworks that have a lot of magic behind the scenes (Spring Boot). I am doing a very large refactor, a major-version Spring Boot upgrade, at the moment.

When given focused questions about parts of the code, it will give me 2-4 different approaches extending/implementing different bean overrides. I go through a cycle of back and forth, having it give me sample implementations. I often ask what is considered the more modern or desirable approach, and for things like a pros-and-cons list of the different approaches. The one I like the best I then go look up in the specific docs to fact-check a bit.

For this type of work it is easily a 2-3x speedup. Spring specifically is really tough to search for due to its long history and the large changes between major versions. More often than not it lands me on the most modern approach for my Spring Boot version, and while the code it produces is not bad, it isn't great either. So, I rewrite it.

Also, it does a pretty good job of writing integration tests. I have it give me the boilerplate for the test and then I can modify it for all my different scenarios. Then I run those against the unmodified and refactored code as a validation suite to confirm the refactor didn't introduce issues.
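
The boilerplate I mean is nothing fancy; a minimal sketch of the kind of thing it produces (the endpoint and expected values here are made up for illustration):

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.web.servlet.MockMvc;

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    // Boots the full application context and exercises a real endpoint,
    // so the same test runs unchanged against the old and refactored code.
    @SpringBootTest
    @AutoConfigureMockMvc
    class OrderApiIntegrationTest {

        @Autowired
        private MockMvc mockMvc;

        @Test
        void getOrderReturnsExistingOrder() throws Exception {
            mockMvc.perform(get("/api/orders/42"))
                   .andExpect(status().isOk())
                   .andExpect(jsonPath("$.id").value(42));
        }
    }

From there it's mostly copy-paste-modify per scenario, which is exactly the kind of grunt work I'm happy to hand off.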

When I am working in GoLang I don't get this level of speedup, but I also don't need to look things up as much. The number of ways to do things is far lower and there is no real magic behind the scenes. This might be one reason experiences differ so radically.


Apple's biggest weakness is games. But it has a pretty large install base compared to Linux (not counting phones or servers here). Seems like a win/win: Apple gets to address a weakness and Valve gets a large target market.

I actually see it as the reverse. Valve might be going for the whole pie and want to carve out a niche for their Steam Box. Inviting Apple to the party might detract from that effort. Or at the very least distract from their main focus.


> Apple gets to address their weaknesses and Valve gets a large target market.

I don't think Apple wants any non-Apple store addressing their weaknesses, especially a solution as competent and well-funded as Steam.

If Valve gains Apple-user mindshare on Mac, what prevents them from expanding to iPhones and iPads in the EU, and likely elsewhere if anti-monopoly laws get entrenched? IIRC, Services is the fastest growing revenue source at Apple.


That's a fair point. I don't think they care about Steam competing on the desktop, but mobile is another ballgame entirely.


>Valve gets a large target market

They don't need Apple for that. People who game already game elsewhere. Steam on Apple feels pointless. I wouldn't be surprised if Valve goes for smartphones with their own device at some point.


This is really the endgame, I think. A modern smartphone with a controller attached is effectively the same as a Steam Deck or Switch 2, just with a different OS. Apple has been pushing higher-end games on phones lately (this year has seen iOS versions of Hitman 3, Sniper Elite 4, and Subnautica), and reports are that the new pro phones run them well (the limiting factor being thermal load).

A phone that can run my Steam library is super-compelling -- I travel a decent amount, so being able to chuck something smaller like a Backbone One in my bag vs. a Steam Deck would be a meaningful change.


Games are not a weakness for Apple. They have all the gaming revenue they seem to care about with mobile. They just don't have a proper/immediate motivation to apply that effort to desktop. I'm not sure I even care anymore. I'm a Valve fanboi at this point, until Gabe leaves and they go corporate.


Mobile is overlapping consoles in revenue, and Apple has had a good many years of taking a 30% cut on top. They are indeed fine with being behind while sticking as a middleman for gambling simulators that make billions.


I think the argument is more that ATC (the people and equipment needed) should be funded via use fees.

This ensures that the system is self-sustaining. Also, as demand increases, revenue to run the system would increase as well.


I agree - a good example is bridges and bridge tolls. Maybe a lot of people in NYC have crossed the Verrazano Bridge: "See, most of us have crossed it, I don't see why taxes shouldn't pay for it." The counterexample would be the trucking company that, once taxes pay for it and not tolls, runs convoys of tractor trailers up and down the bridge, rapidly accelerating wear and tear - had they paid on a toll basis, they'd be paying their fair share versus someone who crossed it one time 8 years ago.


In orgs where I have seen this, it is usually a symptom of the data center unit being starved of resources. It's like they have only been given the choice of on-prem with ridiculous paperwork and long lead times, or paying 20x for cloud.

Like, can't we just give the data center org more money and let them over-provision hardware? Or can we not have them use that extra money to rent servers from OVH/Hetzner during the discovery phase, to keep things going while we are waiting on things to get sized or arrive?


I feel like companies are unreasonably afraid of up-front cost. Never mind that they're going to pay more for cloud over the next 6 months; spending 6x the monthly cloud cost on a single server makes them hesitate.

It's how they always refuse to spend half my monthly salary on the computer I work on, and instead insist I use an underpowered Windows machine.


Blame finance and accounting... Rented compute in the cloud can be immediately expensed against revenues, while purchased equipment has to be depreciated over a few years. It's also why spending $$$$$ on labor (salaries) to solve an ops issue, rather than $$$$ on some software to do it, happens. If the business relies on the software, it looks like an ongoing cost of operating the business; spending more on labor to juggle the craziness can "hide" that and make the business look more attractive to investors... and cutting labor costs is an easier way to improve the bottom line (in the short term).


You also don't need to commit to the upfront costs. You can easily rent, rent-to-own, or lease these resources.


The problem is that if you over-provision and buy 2x as many resources as you need, this looks bad from a utilization standpoint. If you buy cloud solutions that are 2x as expensive and "auto scale", you will have much higher utilization for the same cost.


> Or can we not have them use that extra money to rent servers from OVH/Hetzner

Or just use Hetzner for major performance at low cost... Their APIs and tooling make it look like it's your own datacenter.


All the benefits OP lists are at or below the mandated minimums for Western EU countries. It's trivial to look up and confirm for yourself.

In software, with the money difference, you still end up ahead of where you would be on an equivalent salary in the EU. Also, the last time I was considering a move, the EU job market was weaker than the US one. And you still need to get all the necessary work visas, which aren't automatic; even as a dual citizen I can't just show up and work at a company in the EU.

