The use case is different. For the price of a cloud instance, on top of the instance itself, you are paying for:
* availability. On AWS, you can start a couple dozen or even hundreds of instances on demand, for a limited time. You are paying for that spare capacity. VPS/dedicated servers generally have much lower spare capacity, and you're booking things by the month, not by the minute.
* reliability. Most real cloud instances live on networked drives, so your risk of losing data is very low. On root servers, you have to take responsibility for data reliability yourself much earlier.
(you should do backups either way, but you're likely going to use your backups more often on VPS/Dedicated offerings than in the cloud)
* surrounding services. Private networking, security features, etc.
You pay a premium for all that, so for the same raw compute performance, cloud prices will be at least 2-3x the price of a basic root or dedicated server. On the other hand, VPS/dedicated servers typically include bandwidth in the price. The best choice depends on your requirements, but most people will blindly go towards cloud servers.
I used the exact same stack (gitolite+cgit) in the early stages of a previous startup, and code reviews were the big missing part that made us move to something more full featured (for us, gitlab).
It's pretty easy to trigger CI runs via git hooks, and once you're used to it, checking their results in Jenkins instead of in the git repository UI makes no difference. But code reviews really need a dedicated interface.
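The hook side of that can be sketched in a few lines. This is a hypothetical example (the CI URL and repo name are made up): a post-receive hook reads one line per updated ref from stdin and, here, just prints the trigger URL for each pushed branch rather than actually calling the CI server.

```python
#!/usr/bin/env python3
# Sketch of a git post-receive hook that triggers CI runs.
# The CI URL and repo name are hypothetical; a real hook would fetch
# each URL (e.g. with urllib.request.urlopen) instead of printing it.
import sys

CI_BASE = "https://ci.example.com/job/myrepo/build"

def ci_urls(lines, base=CI_BASE):
    """Git feeds one '<old-sha> <new-sha> <refname>' line per updated ref."""
    urls = []
    for line in lines:
        old, new, ref = line.split()
        if ref.startswith("refs/heads/"):  # trigger for branch pushes, not tags
            branch = ref[len("refs/heads/"):]
            urls.append(f"{base}?branch={branch}")
    return urls

if __name__ == "__main__":
    for url in ci_urls(sys.stdin):
        print(url)
```

Drop something like this into `hooks/post-receive` on the server side (marked executable) and every push pings CI.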
Google owns the .dev TLD, and they bought it for entirely internal use. People got upset about this (any company owning a TLD for internal use only is pretty weird, and especially one as generic as ".dev") so they started to offer it to the public.
The bigger issue is that a lot of people use .dev for internal/development purposes, and it should therefore never have been made into a "real" TLD in the first place. It's like deciding to sell "example.com" to someone.
I could've sworn .local was okay, as that's what I use. But maybe not. Apparently it can cause issues with Macs, but that's news to me, seeing as I primarily develop on one.
I registered a .dev for my company, but because it's a company of one, I imagine this won't affect me. Good to know though. I do recall using .dev for local/internal development years ago...
EDIT: so would this mean that registering a .dev for some BigCo domain could potentially cause problems? Curious what the real-world implications might be.
I don't think there are really any implications moving forward. It only affected developers in the first place. Developers have since been forced to stop using .dev for development, thanks to the existence of actual .dev domains.
Yeah, at this point the horse is out of the barn, so don't worry about registering a .dev. Any damage that would be done has already happened. The point is it shouldn't have happened in the first place.
Developers have been using <something>.dev as their localhost hostname even though .dev was never a designated test/reserved TLD[0]. Everyone's non-HTTPS development environments broke when Google added the TLD to the HSTS preload list, forcing browsers to load it over HTTPS.
An interesting implication of your last question: if Google added .dev and .foo to the HSTS preload list, it suggests they don't intend to make these domains available for public registration at all. If they did let people register them, they would have no way to enforce that the sites on there honored the HSTS requirements.
That doesn't follow. Sites that did not abide by those requirements simply would not work. The requirements are enforced by the browser.
The intention when registering such a domain name would be to follow said requirements, otherwise you wouldn't be able to use the domain name for hosting websites (though you could of course still use it for other services).
Not to defend plaintext HTTP, but what you describe is a DNS registrar that mandates which services can be used with the registered domain... Would you buy a house where you cannot cook, only microwave?
The house analogy doesn't really work, as you could always install a stove. A better example would be: "Why would you buy a plot of land that is zoned residential if you can't build an office building on it?" The answer is that you know what you're getting into before you buy, so you'd only buy it if you were building a house. If the restrictions are known up front, then it's all good.
I'd also like to point out that HSTS has very real security benefits, and if the entire TLD is already on the list, you don't have to go through the hassle of adding all your domains individually and waiting months for those updates to roll out widely. The expectation is that the pros vastly outweigh the cons.
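For context, getting an individual domain onto the preload list means serving the standard HSTS response header with the `preload` token (and a `max-age` of at least a year), something like:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

With a preloaded TLD, every domain under it gets that behavior in browsers without any of its sites serving the header first.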
Sorry for the lack of knowledge, but what is actually contained in the preload lists? Just a flag "force TLS and activate HSTS" or also a certificate pin?
I.e., could you actually use your own certificate for a .dev site or would the browser only accept Google's?
We only use certificate pinning for .google. The only requirement on any of the other TLDs is that you must have an SSL certificate and serve over HTTPS.
The old-school solutions lack any sort of JavaScript support (per the docs, htmldoc doesn't even support CSS), so they wouldn't work for a lot of real-world websites. That's not really the same use case.
A better comparison would be against the likes of wkhtmltopdf[0], which uses webkit, or the pdf generation features of phantomjs.
Yep, that's about how I remember it. It was such a pain to build on Windows (especially to get a single static binary) that people contributing fixes would often attain hero status by attaching a random binary to an issue. Specifically, GIF support was broken on the official Windows build for 4+ years:
It absolutely is okay for him, you're right. And it's absolutely great that there is such demand for tech right now that people can afford to be so particular about their working hours and their various ways of working.
What I find less great is the suggestion that all employers should "accept your employees for who they are and optimize for their abilities" - does anyone really think that if everyone just worked whatever hours they found most pleasing, this would genuinely result in a situation that was even vaguely practical? What would happen to the people with children who actually find that working 9-5 is convenient because they get to spend a few hours with their children when they get back from work before they go to bed? Would those guys just sit around stuck for two hours in the morning whilst the night owls had a bit of a lie-in, and then have to cart the laptop around with them in the evening so they can Slack their late-working colleagues whilst they're giving the children a bath?
I don't know. Maybe I'm wrong and I'm just a dinosaur (who actually happens both to work remotely and to work strange hours sometimes too).
It obviously depends on what work there is to do. In most of what I do, I work alone, with little need for cooperation, working through the backlog of tickets to implement. And the same goes for my coworkers. So I don't know why the morning people would have to wait for the night owls or why the night owls would need the morning people to stay around late. Everyone has their own stuff to do. If I need to talk to other people, I do it between 11AM and 5PM. Or schedule ahead of time so people can anticipate.
Yes, it certainly depends on the nature of the work. If you work pretty much entirely independently (and remote work often can be like this, especially freelance), then working hours become less important - if your arrangement is that the software will be deployed by 9am Friday and all the features implemented and testable by that point, and that's your sole responsibility, no one is going to be bothered if you worked 4pm to midnight every day to do it.
I once worked with a very senior creative who was exceptional at his job. We all worked in an office together - this was a few years before the current remote phenomenon had quite the momentum behind it that it now has. He came in at midday, on a good day, and normally stayed late into the evening (I think he enjoyed having a little red wine whilst he was working, which probably slightly stretched the boundaries of acceptability in that office). His work was exceptional - on-brief but always extremely innovative. But, you know, when you actually wanted to schedule a meeting with him to discuss a project, it was always bloody hard work - he collaborated very well with the other members of his team who enjoyed staying late in the evening, but he was a constant thorn in the side of the project managers who often wanted to talk to him in the morning when he was never there.
The migration file will contain explicit dependency information, something like:
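For instance, in a Django-style setup, each migration file declares which migrations must be applied before it. The sketch below uses hypothetical app and migration names, and in real Django the class inherits from `django.db.migrations.Migration`:

```python
# Sketch of a Django-style migration file,
# e.g. accounts/migrations/0003_add_bio.py (names hypothetical).

class Migration:  # in Django: class Migration(migrations.Migration)
    dependencies = [
        # (app_label, migration_name): accounts/0002 must run first
        ("accounts", "0002_add_email_field"),
    ]
    operations = []  # the actual schema changes go here
```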
The migration engine will order the dependencies at runtime, and it will bail (and suggest creating a "merge" migration) if you have diverging trees of migrations. I find it pretty robust in practice.
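That ordering-and-bailing behavior can be modeled with a topological sort. This is a toy sketch (not any real engine's code), using only the standard library: two or more "leaf" migrations that nothing depends on means the history has diverged.

```python
# Toy model of a migration engine's planning step (not real engine code).
# `migrations` maps each migration name to the list of names it depends on.
from graphlib import TopologicalSorter

def plan(migrations):
    # static_order() raises CycleError on circular dependencies.
    order = list(TopologicalSorter(migrations).static_order())
    # A "leaf" is a migration nothing else depends on. More than one leaf
    # means the trees have diverged and a merge migration is needed.
    depended_on = {d for deps in migrations.values() for d in deps}
    leaves = [m for m in migrations if m not in depended_on]
    if len(leaves) > 1:
        raise RuntimeError(f"conflicting migrations {leaves}: create a merge migration")
    return order

print(plan({"0001": [], "0002": ["0001"], "0003": ["0002"]}))
# ['0001', '0002', '0003']
```

A merge migration resolves the conflict by depending on both leaves, so the graph converges to a single leaf again.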