The checks in those pre-commit hooks would need to be very fast - otherwise they'd be too slow to run on every commit.
Then why would it save time and money if they only get run at the pipeline stage? That would only save substantial time if the pipeline is architected in a suboptimal way: those checks should run immediately on push, and first in the pipeline, so they make the pipeline fail fast if they don't pass. Instant Slack notification on fail.
But the fastest feedback is obviously in the editor, where such checks like linting / auto-formatting belong, IMHO. There I can see what gets changed, and react to it.
Pre-commit hooks sit in such a weird place between where I author my code (editor) and the last line of defense (CI).
> Then why would it save time and money if they only get run at the pipeline stage? That would only save substantial time if the pipeline is architected in a suboptimal way: those checks should run immediately on push, and first in the pipeline, so they make the pipeline fail fast if they don't pass. Instant Slack notification on fail.
That's still multiple minutes compared to an error thrown on push - i.e. long enough for the dev in question to create a PR, start another task, and then leave the PR open with CI failures for days afterwards.
> But the fastest feedback is obviously in the editor, where such checks like linting / auto-formatting belong, IMHO.
There's a substantial chunk of fast checks that can't be configured in <arbitrary editor>, or that would require a disproportionate time investment (e.g. writing and maintaining a Visual Studio extension vs. just adding a grep line to a pre-commit hook).
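To make that concrete, here's a minimal sketch of the kind of grep-based check I mean. The marker string "FIXME-before-commit" is a made-up example; substitute whatever your team greps for:

```shell
#!/bin/sh
# .git/hooks/pre-commit - block commits whose staged diff adds a debug marker.
# 'FIXME-before-commit' is a hypothetical marker string, not a convention.
if git diff --cached | grep -q '^+.*FIXME-before-commit'; then
    echo "pre-commit: staged changes still contain FIXME-before-commit" >&2
    exit 1
fi
```

A check like this runs in milliseconds, so it's fast enough for every commit; the grep only looks at added lines (`^+`) in the staged diff, so pre-existing markers elsewhere in the file don't block you.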
This is outside the context of the "Open File" dialog from your original question, but here's another tip about "navigating up":
In many application windows you can navigate the hierarchical directory structure that contains the currently open file by right-clicking on the document name/icon in the window's title bar.
E.g. in Preview, Pages, Finder, ..., hover over the file or directory name in the window's title bar. If you right click on it, a pop-out will appear with a vertical hierarchical list of that file's parent folders. Selecting one of the parent folders will open a new Finder window at that location, allowing you to quickly navigate to a file's containing folder.
And some additions to the tips in other comments:
- Dragging a file or directory from Finder to the terminal will paste its path into your shell
- iTerm has Finder integrations. Right click on a folder in Finder, Services -> New iTerm2 Window Here
And you might enjoy some of these Finder tweaks from my "dotfiles" (just run them on the shell):
# Set Documents as the default location for new Finder windows
# For other paths, use `PfLo` and `file:///full/path/here/`
defaults write com.apple.finder NewWindowTarget -string "PfDo"
defaults write com.apple.finder NewWindowTargetPath -string "file://${HOME}/Documents/"
# Finder: show hidden files by default
defaults write com.apple.finder AppleShowAllFiles -bool true
# Finder: show all filename extensions
defaults write NSGlobalDomain AppleShowAllExtensions -bool true
# Finder: show status bar
defaults write com.apple.finder ShowStatusBar -bool true
# Finder: show path bar
defaults write com.apple.finder ShowPathbar -bool true
# Keep folders on top when sorting by name
defaults write com.apple.finder _FXSortFoldersFirst -bool true
# Enable spring loading for directories
defaults write NSGlobalDomain com.apple.springing.enabled -bool true
# Use list view in all Finder windows by default
# Four-letter codes for the other view modes: `icnv`, `clmv`, `glyv`
defaults write com.apple.finder FXPreferredViewStyle -string "Nlsv"
# Show the ~/Library folder
chflags nohidden ~/Library && xattr -p com.apple.FinderInfo ~/Library 2>/dev/null && xattr -d com.apple.FinderInfo ~/Library
# Show the /Volumes folder
sudo chflags nohidden /Volumes
# Expand the following File Info panes:
# “General”, “Open with”, and “Sharing & Permissions”
defaults write com.apple.finder FXInfoPanesExpanded -dict \
General -bool true \
OpenWith -bool true \
Privileges -bool true
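One caveat from experience: most of these settings only take effect after Finder relaunches, so finish with:

```shell
# Relaunch Finder so the new defaults are picked up
killall Finder
```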
This will continue to allow MV2 extensions for your Chrome instance. Confirm the policy has been set by checking chrome://policy. See [1] for possible values.
Because uBO is now disabled in the Chrome Web Store, you also need to install it as a "forced extension" (the way extensions are deployed in enterprise environments). Install the extension according to the section "Use a preferences file" in [2]:
- Create a file named cjpalhdlnbpafiamejdnhcphjbkeiagm.json
- Place it in ~/Library/Application Support/Google/Chrome/External Extensions/
- With content:
{ "external_update_url": "https://clients2.google.com/service/update2/crx" }
You'll need to create the "External Extensions" directory, set file permissions according to the docs, and restart Chrome. The file name contains the extension ID to be installed, which you can verify from the submission URL of this post. Upon Chrome restart, it should notify you with a message in the top right that an extension was forcibly installed.
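The steps above as shell commands, for convenience (extension ID is uBO's; the path is the macOS one from the instructions):

```shell
# Create the External Extensions directory and the per-extension JSON file
EXT_DIR="$HOME/Library/Application Support/Google/Chrome/External Extensions"
mkdir -p "$EXT_DIR"
cat > "$EXT_DIR/cjpalhdlnbpafiamejdnhcphjbkeiagm.json" <<'EOF'
{ "external_update_url": "https://clients2.google.com/service/update2/crx" }
EOF
```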
The ExtensionManifestV2Availability policy definitely still works for now, but it's been about a month since I used the preferences-file way of installing the extension on a new device. YMMV.
When mentoring a junior, I noticed he had an alias `gpf`, which he apparently picked up from another developer.
I had a serious conversation with him, explaining that I'm not in the habit of telling him how to configure his shell, editor, or general work environment. But an alias for `git push -f`, that I cannot condone. When you're doing that, the two seconds it takes to type it out should be rather low on the list of priorities.
So we had a good talk, and he genuinely understood the point I was trying to make. When I was about to leave, he looked at me sheepishly and asked:
On my last project I was the only person who had --force permissions on bitbucket. In six years I used it twice on purpose, and once on accident (and then once again to undo the mistake).
If you want to do a push -f to a branch that only you are working on to finish a rebase -i to make the commits you did in haste make sense, then go for it. But nobody should be forcing to master unless it's to undo a really big fuckup.
One was merging a PR with giant files accidentally attached, the other was re-exporting from perforce because they totally fucked 'git blame' and I wasn't having any of that.
You might be surprised at this if you use Git in append-only mode, making "fix 1", "fix 2", ... commits on a branch when you find a problem in a previous commit.
Another workflow is to keep each commit self-contained, rebase the series when you make changes, and send the v2, v3, ... of your pull request with push --force.
Of course, if you do this on a shared branch that other people also want to commit on, you may want to pick a desk close to the exit, and far away from any windows.
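For that private-branch workflow, --force-with-lease is a safer habit than plain -f: it refuses the push if someone else updated the branch since you last fetched. A sketch (branch and upstream names are just examples):

```shell
# Rework the series on a branch only you push to...
git rebase -i origin/main
# ...then publish v2; this fails if the remote branch moved in the meantime
git push --force-with-lease origin my-feature
```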
On shared branches I prefer to send pull requests; that way I can rework my draft and respond to review comments until it's ready, without pushing undercooked commits directly onto the shared branch.
What lands on the shared branch has a meaningful history, that has helped me more than once to understand the context in which an old change was made, when running git blame many months later.
I feel like it should be as easy to review the history as it is to review a PR. Squashing makes this a bit harder for me. Especially when autosquash loses long commit messages.
But I like to use git-absorb to automatically create fixup commits, it generally works pretty well and saves me a little bit of tedium each time!
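For anyone curious, the basic git-absorb loop looks roughly like this (assuming origin/main is the upstream of your branch):

```shell
# Stage the fix, then let git-absorb guess which earlier commit it belongs to
git add -u
git absorb                                # writes fixup! commits for you
git rebase -i --autosquash origin/main    # fold them back into the series
# or combine the last two steps:
git absorb --and-rebase
```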
The big issue with Affinity Photo is that it doesn't support non-destructive editing / a non-linear workflow like Lightroom does.
It's not exactly a fair comparison, since AP directly competes with Photoshop, not Lightroom, but that was what made it an immediate non-starter for me when it comes to photography.
Affinity Photo starts you in a "Develop Persona" when you open a RAW file, and allows you to develop your RAW file. Before you can use any of the common editing tools, you need to leave that persona by committing your changes. You need to make a choice to bake these RAW adjustments into a "RAW layer (embedded)", "RAW layer (linked)" or a "Pixel layer". It's not very obvious what these are and how they work.
Most of the common editing tools then work destructively. Once you use them, you can't go back and change any of the RAW adjustments. There are some very limited tools available that can work non-destructively, but again, it's not very obvious which ones those are. And use of the wrong tool can immediately turn a "RAW layer" into a "Pixel layer" without warning.
It's all very confusing, to be honest. It may be a case of RTFM, but I did read the manual when I tried this a couple of months ago, and I came to the conclusion that AP simply isn't capable of a non-destructive editing workflow yet, except for a few very basic cases.
But the bundle price was worth it for me for Designer and Publisher alone. So I hope in due time they'll launch a fourth product to compete with Lightroom, on photo cataloging, culling and a non-destructive workflow.
The current commercial alternatives to Lightroom unfortunately are still lacking, last time I looked at them (Capture One, DxO PhotoLab). And the open source ones (darktable, digiKam) are ... not good. I'm keeping my eye on "Ansel" though (a darktable fork by an ex-dev, anger-driven development); the author's rants sum up very well what's wrong with darktable, and why its community is so dysfunctional.
> The current commercial alternatives for Lightroom unfortunately are still lacking, last time I looked at them (Capture One, DxO Photo Lab)
Genuine question, how do you find DxO PhotoLab lacking when compared to LR?
I'm an old-time LR user and, due to Adobe's licensing shenanigans, am exploring alternatives. I'm having a pretty good time with the trial version of DxO PhotoLab 7. So far I haven't come across something that I could do in LR (as a hobbyist) that I can't achieve in PhotoLab 7. And I'm loving the built-in DeNoising algorithm in PL7.
> Genuine question, how do you find DxO PhotoLab lacking when compared to LR?
It's mostly their "no catalog" approach that irks me. From what I understand, they use a model that doesn't have a catalog and doesn't require you to import photos; instead, it lets you point it at any filesystem location and work on those photos.
Fair enough, but for me the question then immediately becomes how and where the data that I generate in PL7 is stored and managed - and I was struggling to find any comprehensive information on this.
If it doesn't have a catalog, where does it store the edits I make to my photos? Does it actually modify the RAW files and write some information into them (that would be a non-starter for me)? Does it litter the filesystem with XMP sidecar files next to the originals? How does it keep (and repair) associations between original RAWs and their edits/metadata if they get moved on the filesystem outside of PL7?
It lets you search/filter photos by metadata attributes "across your whole computer" (according to their tutorial video on organization). So it must keep some index somewhere; otherwise that would be dog slow. So how and when does that index get updated? Do I get any control over when that happens, any UI feedback when it's happening and I'm potentially working with outdated metadata, etc.?
LR's catalog approach has some drawbacks, but from an engineering standpoint, it seems to me that's the much simpler and more robust way to implement this. The LR catalog is a simple SQLite DB, and backup is trivial: back up my originals and the catalog, done. Follow the simple rule "Don't modify originals behind LR's back" and you'll be good. (Or be prepared to do it in a very systematic way, and fix references in LR afterwards.)
The catalog approach definitely has its limitations and issues, but I find it very easy to reason about. No surprises. PL7's approach seems to require much more magic behind the scenes, which makes me quite uncomfortable.
In terms of denoising, I have to agree - the DxO stuff is miles ahead in quality for some algorithms, and denoising is one of them. I use the Nik Collection (as a PS plugin) for that 1 out of 1000 photos that deserves some serious editing.
Thanks, you articulated that much better than I would have.
The manuals for Affinity products are pretty good, and I agree on the price being worth it for the quality and usefulness of the software.
For me Publisher fits a good niche. Since V2 I use Designer for planning woodworking projects and it’s quite competent for that task (they’re simpler 2D plans and diagrams to track my cutting sheets).
One more Lightroom alternative for you to consider would be RawPower, which actually does a great job handling different raw formats. I know the devs have a new app but I haven’t tried it.
Thank you for the recommendation! RAW Power is one that I actually didn't have on my radar, and it certainly looks very interesting.
Maybe not as feature-rich as some of the heavy hitters, but it looks to be very focused in both its feature set and its UI. And it seems to hit that sweet spot where it does both cataloging and RAW processing competently.
This is the most promising alternative I've seen so far for what I'm yearning for, so thanks again!
I think you are mixing cataloging software with photo editing software. Photoshop/Photo are editing only; digiKam is mostly a catalog. Lightroom is pretty good at both. I know a few pro photographers who switched to Capture One because of better editing capabilities, and the software apparently got a lot better, but they already introduced a subscription model, and while you can still buy a lifetime license, who knows how long it will be there.
I am, intentionally so ;-) Because this mix is where Lightroom excels, and competing products just fall short.
As an enthusiast or professional photographer you really need both, preferably in the same application, or at least in tightly integrated applications.
I started with Lightroom 1 beta3, and while it was dog slow, the speedup in workflow to cull and edit thousands of photos after a shoot was revolutionary at the time. In the beginning it only supported global edits, which was enough anyway for 95% of photos. But you could sync and apply these edits in bulk to other photos, and get through hundreds of them quickly.
Capture One certainly is the closest. But switching costs are huge. My catalog contains tens of thousands of images, professionals will have hundreds of thousands. If I'm to switch, I need to be certain that every single Lightroom edit is, in principle, supported too, and will be converted faithfully on import.
And their pricing is weird. In the beginning they required you to pick a RAW edition - you could have support for Canon, or Nikon, but not both. That's gone now, and as you say, I think it has come a long way. But their perpetual license now is nowhere near competitive in price with the Adobe Photography Plan ($9.99/mo, the infamous "annual, paid monthly", for LR+PS). The $300 for Capture One buys one major version, for the price of 2.5 years of Photoshop and Lightroom.
A 30% overlap allows me to basically never have an automatic pano merge fail in LR. Though I rarely shoot panos fully handheld; usually still from a tripod with at least a ballhead.
Even better, LR allows you to merge a HDR pano in a single action - which has become an important part of my workflow, because it works so well, and results in a nearly RAW quality DNG that can be edited non-destructively.
With one exception: Multi-row panos with 3 rows, where the top row is mostly sky. Even with lots of overlap, LR usually can't figure this out. But the workflow using tools like Hugin or PTGui is so involved (and requires baking the RAW files first) that I usually just avoid this situation. Besides the fact that it's often not the most interesting composition.
But especially when you're doing multi-row panoramas with long exposure times, preparation is key, and so is execution time.
I often do multi-row HDR panos. With brackets of 3 exposures each, the time for a single frame can quickly add up to 45s or more. In a 3 row, 5 column pano that adds up to a total exposure time of 11.25 minutes - net. This doesn't include the time between frames needed to pan to the next column (or worse, next row and 1st column), making sure you get enough overlap, align everything and finally tighten the tripod head knobs/clamps again.
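The arithmetic, for anyone checking (numbers from my example above):

```shell
# 3 rows x 5 columns, ~45 s of bracketed exposures per position
rows=3; cols=5; seconds_per_frame=45
total=$(( rows * cols * seconds_per_frame ))
echo "${total} s net"                              # 675 s
echo "$(( total / 60 )) min $(( total % 60 )) s"   # 11 min 15 s
```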
Depending on the season, the usable blue hour may be as short as 15 minutes. That means you essentially get one shot at this.
So any gear that allows you to do that reliably and quickly, for example a panoramic rotator head with indexed degree stops [1], helps immensely with getting good results.
This is an example of a 2x5 "pano" with a 3x exposure bracket (20s, 6s, 2s) I took:
With 20s as the longest exposure, the chance of a car or bus driving right through my shot was huge, so I had to retake many of the frames, some even several times.
[1] For example the Nodal Ninja RD16-II Advanced Panoramic Rotator
This is a modernized version of a base map (used to overlay topical maps over it) that they have been working on for the past few years. Available as vector tiles. It's very clean and crisp, and IMHO much more readable than e.g. the Google Maps style, or OSM.
---
Highly customizable 3D map viewer (under development).
Performance is a bit choppy, and navigation feels somewhat clunky. But you can overlay any of the hundreds of topical maps (pick from geocatalog menu, or search via search bar).
The Light Base Map looks very early 2000s: washed out and low contrast, with pastels. It's very pretty; I'd use it as a background image. But I feel you can't really see anything without straining yourself. You don't get an immediate sense of the terrain or road network without taking a minute to look at it in detail.
Maybe b/c it's what I grew up with, but I've always found AAA maps (do they still print them?) had the best colors and contrast. I've never found any other map app that looks anywhere near as good.
> New conditions of use apply to swisstopo's official geodata.
> The geodata may be used free of charge, in particular also for commercial purposes.
> Reference to the source when publishing the data is the only condition.
> Authorizations and licenses are therefore no longer required.
This is based on a change in federal law that happened on 1 March 2021.
One of the (IMHO) most interesting datasets is the extremely detailed digital elevation model / DSM (swissSURFACE3D):
Lidar scanned terrain model, with a grid size of 0.5m and vertical resolution of ~10cm. Available as a tiled raster or the "raw" classified point cloud (ground, vegetation, water, ...).