Thanks. That's great! I especially like that it then lazy loads the blobs as you need them.
I was going to ask if there's a way to set that as the default but I guess I'll just set up an alias like I have for most of the subcommands I use daily.
I think it's rather hard because of their security & contractual requirements -- we had to sign a contract with them, go through security review, and so on.
We just this week launched a new sign-up flow to make it waaaay easier for non-businesses to use Plaid, I posted some details below.
Actually, as part of publicizing our new hobbyist-friendly onboarding, we're looking to work with hobbyists who have created Plaid-powered apps and would be interested in making a short video about their app and their Plaid experience, to potentially be featured on the Plaid blog. If you're interested, shoot me an email at ahoffer@plaid.com and I can send you the details.
Awesome. We definitely need to highlight this better somewhere on the main page (probably /pricing and the /docs pages), but https://dashboard.plaid.com/ has it. Beyond that, plans are still vague right now.
I would love this. I’ve built software to manage my business and personal finances and am using hacks right now (activity csvs for personal, quickbooks transaction api for business).
In an ideal world I’d move it all to Plaid to help analyze finances, cc spend, etc.
I’m happy to hear you’re working on a hobbyist product.
I reached out to them a couple years ago with this exact question and was told flat-out no. You might be able to sneak around it with an LLC but I think they also require you to have a public website for a plausibly banking-related business, which altogether seemed like too much effort to fake for what I wanted out of it.
So you don't have to be a business to use Plaid, but you do have to be a business to buy Plaid via the Sales channel rather than via the self-serve channel. Admittedly, when folks reach out to Sales and ask to buy Plaid and are told they're not eligible because they're not a business, this nuance is sometimes not communicated very well (or at all). We're working on it. :-)
In fact, we actually just this week launched a new sign-up flow to make it waaaay easier for non-businesses to use Plaid, so try checking it out -- after you go to dashboard.plaid.com and create an account, you should see a "Free trial" button show up on the homepage with a link to use the hobbyist onboarding flow.
Correct, sales encourages you to sign their minimum contract, which basically gets you better support and an account manager. Pay as you go is an option, but Plaid indicated you basically wouldn't have any guaranteed support SLA post-launch if you were on PAYG.
Thank you for the info! Is this a somewhat recent change or has it always been this way? "A couple years ago" in my comment was doing some heavy lifting, I probably reached out around 2017ish.
> Don’t meet your heroes. I paid 5k to take a course by one of my heroes. He’s a brilliant man, but at the end of it I realized that he’s making it up as he goes along like the rest of us.
As a child and adolescent I always imagined that something would click when I became an adult and I would become good at things and understand the world. That never happened, and then I realised it never happens for anyone. We're all just large children walking around figuring things out. Some of us figure things out faster, some of us stop trying to figure things out, but we're all just as clueless in the grand scheme of things. It's a miracle and a testament to our perseverance and ambition that things still work as well as they do.
On the other hand, I've contacted several of my heroes (not been able to meet as many of them in person) and that's always been an exhilarating, formative experience. I strongly recommend it if you can think of a good reason. (I have a list of heroes I have yet to reach out to because I haven't yet encountered an interesting enough problem to offer them. Several of them unfortunately have an actuarial deadline not too far into the future.)
Could this be from adults not being honest with children when they don’t know something? I’ve personally seen this happen a lot. Many adults try to save face about not knowing things with other adults, let alone with children. So it might be a cultural issue that could be fixed.
I once worked with someone well renowned in my circles who gave talks, ran a blog, and was cited in / edited other people's books.
His code did not match the hype, to say the least. His SDLC even less so.
There is probably an ego associated with being renowned that doesn't align with team-based work. He likened basic things like code reviews or PRs to being brought before The Hague, and treated the rest of the team as a bunch of bureaucrats.
I am not sure which profession they are in (software development?), but no. Not everybody is guessing. If they were, you would have half of the buildings and bridges collapsing and the other half on fire from bad electrical wiring.
You can legitimately learn how to do things properly, and people who have learnt that do the polar opposite of guessing. It is just that the world of software development has yet to be held liable for its results in the same way as civil or electrical engineering. So in software development many are just guessing, because guessing wrong won't ruin their life.
Software "engineering" also differs from more formal engineering in that there are very rarely absolutes; there are often many different correct ways to solve a problem, each with its own pros and cons. So choosing one approach over another could feel like "guessing", but more senior people usually have an intuition, built from experience, about which one will work better, and are more informed about the tradeoffs, so it looks a lot less like guessing.
Yet when we talk about controlling trains, airplanes, freight ships, medical devices, nuclear power plants and space stuff we suddenly know how to do it?
There is software engineering, and it is known how to build things that absolutely must not fail. It is just that these standards are not commonly deployed if nobody forces you to deploy them. And why would you? It costs money, and a software error is widely treated like divine intervention.
There is a big difference between knowing something must not fail, and how to make it so it will not fail. The latter is where opinions and approaches often differ, in ways that more formal engineering does not.
I'm very wary of anyone in tech/software eng that says "this is the only right way to do this." I'm aware those attitudes exist everywhere.
I once found a very interesting definition of engineering: it is about making something that just barely does the job. Doing it better usually costs more, and doing it worse costs lives.
Not much different in software. There are always many ways of solving a problem, and that is typical of any engineering, contrary to the sciences.
They are typically guessing much more than computer scientists would think. A structural engineer does not know: the peak wind force, what the ground under the bridge is really made of, what the actual tensile strength of the weakest piece of material is, what the exact force on the screws was at the time of fastening (and after), etc. Heck, they don't even know if Euler-Bernoulli beam theory is actually right about the existence of a neutral axis. They just take their best guesses, add generous safety factors, and have the bridge inspected regularly.
You have abstractions and models for those things. I was formally trained as an EE, so I'm just guessing at how structural engineers do it.
I would expect someone building a bridge to take the average/peak winds into consideration, and then feed them into CAD or whatever modeling software they use to design the structure. They don't need to know the exact force a screw was tightened with; they do need to give the specs of what range it should be tightened to. Again, considered in CAD. They don't need to know that the theory is right; they just need to know it's not wrong to an unacceptable degree.
I'm sure there's some guessing, but a lot of these things are actually factored in.
> The most underrated skill to learn as an engineer is how to document. Fuck, someone please teach me how to write good documentation. Seriously, if there’s any recommendations, I’d seriously pay for a course (like probably a lot of money, maybe 1k for a course if it guaranteed that I could write good docs.)
Good docs are docs that make it easy to implement the next feature.
From an AI perspective, it's my observation that LLMs often write code with docs of lower quantity and quality. At the same time, they are reasonably good at synthesizing / inferring meaning from code that lacks good docs. They often do so internally by forming a chain of thought / reasoning around how the code works. The docs that should be written as part of the code are probably the same things that an LLM would reasonably arrive at by spending tokens when modifying that code. I believe this should be trained into the model, so that future LLM work doesn't have to start by building up that context.
In the absence of that being built in, something I've been experimenting a little with is tuning what I want to see in docs that actually help source control / development. Currently that's at https://github.com/joshka/skills/tree/main/doc-steward - still needs a bunch of work, but it's generally better than nothing. YMMV
I have a PR up for jjk that does the full change as a review changes, and there's another user's PR that allows diffs over arbitrary ranges (i.e. when working out whether the commits that make up a PR are good as a whole rather than individually)
> An open question is whether the AIs could find optimizations that are not possible if we use a higher-level language like C or C++. It is an intriguing question that I will seek to answer later. For the time being, the AIs can beat my C++ compiler!
Go throw yourself at Stockfish or another chess engine for this. They tend to have good test harnesses that can give you an idea of whether the speedups are worth the effort.
> Does JJ really prefer for me to think backwards? It wants me to start with the new and describe command, but with git I first make the changes and name the changeset at the end of the workflow.
A good way to think of it is that `jj new` gives you an empty git staging area. There's still a `jj commit` command that effectively does a `jj describe` followed by a `jj new` for you.
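As a rough sketch of how that plays out in practice (the commit message here is made up), the git-style "changes first, name at the end" order still works:

```
# start a fresh, empty working-copy commit (your clean "staging area")
jj new

# ...edit files; jj snapshots the working copy automatically, no `git add`...

# name the change at the end, like git; this describes the current commit
# and starts a new empty one on top
jj commit -m "add feature X"

# equivalent long form:
jj describe -m "add feature X"
jj new
```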
> I also often end up with in a dirty repo state with multiple changes belonging to separate features or abstractions. I usually just pick the changes I want to group into a commit and clean up the state.
jj split allows you to do this pretty well.
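For example (a sketch; the filename is made up), you can carve a mixed-up working copy into separate commits either interactively or by fileset:

```
# interactively pick the hunks for the first commit;
# everything unselected stays in a second commit on top
jj split -i

# or split non-interactively by path: changes to the given files
# go into one commit, the rest into the next
jj split src/parser.rs
```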
> Since it's git compatible, it feels like it must work to add files and keep files uncommitted, but just by reading this tutorial I'm unsure.
In jj you always have a commit. It's just sometimes empty, sometimes full, and it has a stable change ID regardless. jj treats the commit as a value calculated from the contents of your working directory etc., rather than as the unit of change.
> A good way to think of it is that jj new is an empty git staging area. There's still a `jj commit` command that allows you to desc then jj new.
This always made me feel uncomfy using `jj`. Something that I didn't realise for a while is that `jj` automatically cleans up/garbage collects empty commits. I don't write as much code as I used to, but I still have to interact with, debug and test our product a _lot_ in order to support other engineers, so my workflow was effectively:
```
git checkout master
git fetch
git rebase  # can be just `git pull` but I've always preferred doing this independently
# ...work / investigate...
git checkout HEAD -- ./the-project  # clean up the things I changed while investigating
```
Running `jj new master@origin` felt odd because I was creating a commit, but... when I realised that those commits don't last, things felt better. When I then realised that if I made a change or two while investigating, that these were basically stashed for free, it actually improved my workflow. I don't often have to go back to them, but knowing that they're there has been nice!
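For what it's worth, a rough jj translation of the git workflow above (assuming a `master` bookmark tracked from `origin`; adjust names to taste) might look like:

```
jj git fetch          # update remote-tracking bookmarks
jj new master@origin  # fresh empty working-copy commit on top of upstream master

# ...work / investigate; edits land in the working-copy commit automatically,
# so they're effectively stashed for free...

jj abandon            # throw the scratch work away when done
                      # (empty, undescribed commits get cleaned up on their own anyway)
```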
I think calling them "commits" does it a disservice, because they're not the same as git commits, and the differences confuse people coming from git. I'd say: "jj changes are like git commits, except they're mutable, so you can freely move edits between them. They only become immutable when you push/share them with people."
It's a mouthful, but it's more accurate and may be less confusing.
I'm using jj exactly this way, but `jj commit -i` is still somewhat backwards compared to `git commit -i`: jj displays the commit timestamp by default, whereas git displays the author timestamp. In addition, in jj the author timestamp of a commit is set to the time you started, not ended, a commit/change. This results in unexpected timestamps when working with git-using people or tools. It's also rather weird if you use a previously empty commit for your work that was created months earlier by a previous `jj commit`, resulting in a timestamp correlating neither to when you started nor to when you ended your work.
I guess the idea of jj's authors is that jj's commits are far more squishy and can always be changed, so a fixed "finished" timestamp makes less sense. I still prefer git's behaviour of marking work as finished and then keeping the author (but not commit) timestamp on amends.
I use this jj alias to get git's timestamp behaviour:
```
[aliases]
c = ["util", "exec", "--", "bash", "-c", """
set -euo pipefail
change_id=$(jj log -r @ --no-graph -T 'change_id')
desc=$(jj log -r "$change_id" --no-graph -T 'description')
commit_author=$(jj log -r "$change_id" --no-graph -T 'author.email()')
configured_author=$(jj config get user.email)
jj commit -i "$@"
# only touch the author date if the change was undescribed and authored by us
if [ -z "$desc" ] && [ "$commit_author" = "$configured_author" ]; then
    echo "Adjusting author date"
    jj metaedit --update-author-timestamp --quiet "$change_id"
fi
""", "jj-c"]  # trailing element becomes bash's $0, so extra CLI args reach "$@"

[templates]
# display author timestamp instead of commit timestamp in log
'commit_timestamp(commit)' = 'commit.author().timestamp()'
```
I often will use `jj new -B@` (which I made an alias for) followed by `jj squash -i` to split changes. I had no idea about `jj split`, so I need to look into that!
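A sketch of the two approaches side by side (my reading of the insert-then-squash flow, with the squash direction made explicit via `--from`/`--into`, which may differ from the parent poster's exact invocation):

```
# approach 1: insert an empty commit before @, then pull hunks back into it
jj new -B @                      # new empty commit inserted as @'s parent;
                                 # the old change is rebased on top of it
jj squash -i --from @+ --into @  # interactively move selected hunks from the
                                 # child (the old change) into the new commit

# approach 2: jj split collapses that dance into one step
jj split -i                      # selected hunks become one commit, the rest the next
```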
The cli and a few concepts have evolved with time past the model's knowledge cutoff dates, so you have to steer things a bit with skills and telling it to use --help a bit more regularly.
I find it reasonably good with lots of tweaking over time. (With any agent - ask it to do a retrospective on the tool use and find ways to avoid pain points when you hit problems and add that to your skill/local agents.md).
I expect git has a lot more historical information about how to fix random problems with source control errors. JJ is better at the actual tasks, but the models don't have as much in their training data.