An environment variable, or config file variable, that cannot be changed after launch is about the worst possible interface. Why not just have an “ultra^2 think” keyword?
In New Zealand, people culturally use dryers only when it is too wet to hang washing outside; dryers are seen as wasteful and destructive. T-shirts last longer but they do not last forever. Quality has gone down substantially.
Yes, completely agree. I always hang up my washing (also in NZ, don't have a dryer) and was recently sorting through my t-shirts as we are moving country. I have one t-shirt that is nearly 20 years old and still holds its shape (though the color and print have faded). On the other hand, I threw away a bunch of other t-shirts that were just over a year old because they developed holes and the collars in particular are completely broken. Funnily enough, their color and print are mostly fine.
I don't think brand is a good predictor either; e.g. the old t-shirt is from Threadless IIRC, while I had many other Threadless t-shirts which didn't last nearly as long.
Nothing any of us have ever done is “from scratch”. Even a carpenter who planted an acorn and waited 80 years to harvest the wood to make a chair probably didn’t mine the iron ore used to make her axe.
Here is how you install SDL2 on a Mac (copy and pasted from Google):
iNaturalist ranks right up there with Wikipedia in importance.
It is not one organisation, but rather a central org plus a network of regional organisations. The regional organisations provide a lot of biological technical expertise. Citizen scientists alone would not be able to correctly handle the complex taxonomic issues you have in biology… or even basic identification in many cases.
Where the organisation(s) sometimes go awry, in my personal opinion, is in forgetting that they are the custodians of citizen science data, not the source of it.
I had this same mindset, and when I travel to somewhere less-traveled, I always like to post photos on iNaturalist and map parks and trails on OpenStreetMap to contribute to the open tech ecosystem.
A year or so ago someone asked Reddit for examples of how iNaturalist is used by scientists. I went on Google Scholar and it was mostly papers about crowdsourcing, community, and classrooms. I didn't see papers where the data itself was part of researching the plants and animals (knowing where to study, unexpected sightings, changes over time), the way Budburst data is. Maybe biologists are doing that off the record and I'm 100% wrong, but it shook my perception that these observations matter and that it's worth uploading yet another desert gecko sighting.
I work in a large conservation organization focused on rare plant conservation.
iNaturalist is sometimes used by our ecologists/biologists as a starting point for collating occurrence data.
The iNaturalist data itself is likely being pulled specifically from GBIF. Then they go to private/specialty databases that have more spatially and taxonomically accurate records.
But iNaturalist data is often not considered high quality enough to be publishable by itself (wide brush statement) in my field of plant conservation.
We've tried to have some conversations with iNaturalist and they weren't really interested in talking, which gave me pause about what their motives as an organization are.
But conservation tools are few and far between, and iNaturalist is a really powerful tool for initial data exploration.
GBIF tracks the use of the data we provide to scientists, who later publish papers citing that data [1]. For iNaturalist, the list of known citations is at [2]. In most cases a download of data from GBIF will include data from more than one dataset (iNaturalist is one of over 110,000). To find how particular records in a download were used — or even if they were discarded — requires reading the paper.
As an example from the list, "Aedes albopictus Is Rapidly Invading Its Climatic Niche in France: Wider Implications for Biting Nuisance and Arbovirus Control in Western Europe" [3] cites 5348 iNaturalist records.
Ah, what a wonderful role. I'm sad I've missed the application period, maybe one of the only jobs in tech I'm specifically qualified for :(.
I should add the asterisk that my anecdotes on the presumed legitimacy of publishing solely iNaturalist data come from conversations I've had in the American endangered/threatened plant conservation community. Occurrence data from iNaturalist for more common or non-American species likely has more legitimacy when used directly in publications.
In the US we have a series of government/NGO-controlled databases that house sensitive species data, so our scientific community often has to operate through them to get access to publishable information (the raw data is then often obscured and used only for analytics). In my experience iNaturalist data is often a good starting place for determining which government bodies/NGOs a biologist should reach out to when requesting data access.
I love GBIF and have a priority this year of making sure that my organization plugs in what we're willing to share via IPT or Biocase!
> But iNaturalist data is often not considered high quality enough to be publishable by itself (wide brush statement) in my field of plant conservation.
As someone who recently started using iNaturalist, I've been curious about this. I think it's an awesome platform and really cool that people can share what they find, etc, but I noticed that people would pile on with species-level IDs on pictures that were obviously ambiguous between different species known to exist in the vicinity.
I of course want as much data as possible to be available to science, but it piqued my interest about whether a negative feedback loop of misidentifications feeding into future identification models could form.
It's interesting to contrast with Wikipedia. I'm not deeply involved with either, so I'm talking out of my ass and would be curious to hear other people's thoughts here. But Wikipedia has gone to great lengths to make the data side, Wikidata, and the app/website, decoupled. I'm guessing iNaturalist hasn't?
The OpenStreetMap model is also interesting, where they basically only provide the data and expect others to build the apps and websites.
That said, it's also interesting that there hasn't been any big hit with people building new apps on top of Wikidata (I guess the website and Android app are technically different views on the same thing)
I’m not convinced that that’s an accurate view of Wikidata. Wikidata is a basically disconnected project. There is some connection, but it’s really very minimal and only for a small subset of Wikipedia articles. Wikipedia is 99% just text articles, not data combined together.
Frankly, I think the reason people haven’t built apps on top of Wikidata is that the data there isn’t very useful.
I say this not to diss Wikimedia, as the Wikipedia project itself is great and an amazing tool and resource. But Wikidata is simply not there.
I am also frustrated with Wikidata. The one practical use I've seen is that a lot of OpenStreetMap places' multilingual names are locked to Wikidata, which makes it harder for a troll to drop in and rename something, and may encourage maintaining and reusing the data.
But I tried to do some Wikidata queries for things like: what are all the neighborhoods and districts of Hong Kong, or all the counties in Taiwan, and the coverage is piecemeal: tags differ from one entity to another, and not everything in a group is linked to OSM. It's not much of an improvement over Wikipedia's Category pages.
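To be concrete, this is the kind of query I was attempting - a minimal sketch in Python against the public Wikidata SPARQL endpoint. I'm assuming Q8646 is the item for Hong Kong and P131 is the "located in the administrative territorial entity" property; treat both IDs as assumptions.

```python
# Minimal sketch: ask Wikidata for everything whose P131 ("located in the
# administrative territorial entity") points at Q8646 (assumed to be Hong Kong).
# What comes back depends entirely on how consistently editors filled in P131.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?place ?placeLabel WHERE {
  ?place wdt:P131 wd:Q8646 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,zh". }
}
LIMIT 200
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-coverage-check/0.1 (example)"},
    timeout=30,
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["placeLabel"]["value"])
```

Queries like this run fine; the problem is what's in the data - tags differ from item to item, and plenty of districts are simply missing the property.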
Wikidata is a separate project, specifically for structured data in the form of semantic triples [0]. It's essentially the open-source version of Google's KnowledgeGraph; both sourced a lot of their initial data from Metaweb's Freebase [1], which Google acquired in 2010.
> But Wikipedia has gone to great lengths to make the data side, Wikidata, and the app/website, decoupled.
A big part of that is that the different language editions of Wikipedia are very decoupled from each other. One of the goals of Wikidata was to share data between the different language Wikipedias. It needed to be decoupled so that it was equally available to all of them.
Having never used iNaturalist, but as someone who believes that Wikipedia might be one of the most important knowledge resources created in the last 100 years, I'd love to hear more about why you think this.
It’s a living biodiversity record. That kind of data has had an impact on things like understanding human impact on the macro environment, identifying new species, and providing scientists with more accurate population distributions. Perhaps controversial, but the data has also been critical to computer science, specifically computer vision and AI algorithms, e.g. what’s the bird in this picture?
Between iNaturalist and Wikipedia, for me iNaturalist is the more significant of the two. I use iNat every day, have many tens of thousands of observations, and using it I've learned to identify thousands of birds, plants, bugs, fungi, and other things out there. Now I can name trees, plants, birds, et al, but more than that I understand better how they fit together into ecosystems. Also I've learned a lot of taxonomy which actually helps inform my view of the world a lot. In the process I've connected a lot more to nature, and thanks to iNat (and eBird) I now spend a lot more time doing meaningful things exploring wild spaces and spend less time scrolling on web pages. Wikipedia's invaluable as well, and completely indispensable, but between the two it's been less significant for me actually directly learning about the natural world I live in.
I use it a lot. My ex is a biologist and they use it a ton.
It's a massive dataset. There's nothing quite like it. The way people collaborate and verify information on iNat is invaluable.
The best thing about iNat is the passionate people on there. If you don't know an ID, just post it and within a day someone will correct it. It's crazy.
Download Seek and go try it out. Make sure to sign up for iNat and connect your Seek account to iNat so you can contribute.
Why do you recommend Seek? I’ve been using the iNat app (though since it was released, they’ve been asking me to upgrade to the newer app) and it seems fine. Take a picture or upload an existing one, get recommendations for ID, then upload to the community for further consensus!
Seek is annoying, because it throws up some kind of "please don't disturb nature" dialog box every time you start it to take a photo. I have seen that warning hundreds of times; why can't I disable it after a few confirmations?
I've moved to the main iNaturalist app; it does everything Seek does, but better, and it's generally also faster.
It’s open source under an MIT license. I wouldn’t use Tailwind if it wasn’t open source, but there is nothing stopping them from making future releases non-open source.
They can’t retroactively pull the license, and most people would just start using an OSS fork of Tailwind if they did.
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
Because they have a great performance-per-watt ratio, along with a GPU that is very well supported across a wide range of devices and has mainline kernel support. In other words, a great general-purpose SBC.
Meanwhile people are using ARM SBCs, with SoCs designed for embedded or mobile devices, as general purpose computers.
I will admit that with RAM and SSD prices skyrocketing, these ARM SBCs look more attractive.
It is an unusual list. Along with a list of AI websites, it also blocks a handful of Instagram, X and Pinterest profiles. It also blocks a number of specific products on Amazon, such as a colouring book that presumably was generated with AI.
This kind of reminds me of Steam, where indie devs need to exclaim loudly that they are not using AI, otherwise they face backlash. Meanwhile a significant percentage of devs are using GenAI for better tab completion, better search, or generating tests. All things that do not negatively impact the end-user experience.
I think AI as a tool versus AI as a product are different things. Even in coding you can see it with tab completion/agents vs. vibe coding. It's a spectrum and people are trying to find their personal divider on it. Additionally there are those out there who decry anything involving AI as heresy. (no thinking machines!)
This is exactly the sort of refusal to comprehend so that you can get in an "um, ackshually" that the op is talking about. He's quoting a line from a book as a metaphor for a concept the book illustrates well.
You see someone who you think has missed a larger point, and all you can muster as a reply is a vague jab and unexplained reference? Do you not see the irony? Your whole comment is an “um, ackshually”, the very thing you are decrying.
I didn’t enjoy Dune, by the way. No shade on those who did, of course, but I couldn’t bring myself to finish it.
If you think there’s something there, explain your point. Make an argument. Maybe I have misunderstood something and will correct my thinking, or maybe you have misunderstood and will correct yours. But as it is, I don’t see your comment as providing any value to the discussion. It’s the equivalent of a hit and run, meant to insult the other person while remaining uncommitted enough to shield yourself from criticism.
It's an old saying. The ability of submarines to move through water has nothing to do with swimming, and AI's ability to generate content has nothing to do with thinking.
The quote (from Dijkstra) is that asking whether machines think is as uninteresting as asking whether submarines swim. He's not saying machines don't think, he's saying it's a pointless thing to argue about - an opinion about whether AIs think is an opinion about word usage, not about AIs.
Are you hitting tab because it’s what you were about to type, or did it “generate” something you don’t understand? Seems a personalized distinguisher to me.
Given the political comments in what's supposed to be a filter, and how everything is prefaced with "shit" like "Pinterest shit," I bet the author had a personal political disagreement with those accounts.
The list is also too specific to be useful in some cases. Is it really important to you to add 12 entries for specific Amazon products, like `duckduckgo.com,bing.com##a[href*="amazon.com/Rabbit-Coloring-Book-Rabbits-Lovers/dp/B0CV43GKGZ"]:upward(li):remove()`?
Even if GenAI is helpful, it's okay to morally reject using it. There are plenty of things that give you an advantage in your career but are morally wrong. Complaints include putting people out of jobs, causing a financial bubble, filling GitHub and the internet in general with AI slop, using tons of energy, and increasing DRAM and GPU prices.
And it's not even that apparent how much GenAI improves overall development speed beyond making toy apps. Hallucinations, bugs, misread intentions, getting stuck in loops, time wasted debugging and testing - and it still doesn't help with the actual hard problems of dev work. Even the examples you mention can be fallible.
On top of all that is AI even profitable? It might be fine now but what happens when it's priced to reflect its actual costs? Anecdotally it already feels like models are being quantised and dumbed down - I find them objectively less useful and I'm hitting usage limits quicker than before. Once the free ride is over, only rich people from rich countries will have access to them and of course only big tech companies control the models. It could be peer pressure but many people genuinely object to AI universally. You can't get the useful parts without the rest of it.
You're right, it's about paying customers. No one is going to waste time campaigning against a $1.99 Squid Game knockoff on Steam if it uses AI (many are just Unity asset flips already).
The backlash I've seen is against large studios leaving AI slop in 60+ dollar games. Sure, it might just be some background textures or items at the moment, but the reasoning is that if studios know they can get away with it, quality decline is inevitable. I tend to agree. AI tooling is useful, but it can't come at the expense of product quality.
If a "C+++" was created that was so efficient that it would allow teams to be smaller and achieve the same work faster, would that be anti-worker?
If an IDE had powerful, effective hotkeys and shortcuts and refactoring tools that allowed devs to be faster and more efficient, would that be anti-worker?
Was C+++ built by extensively mining other people's work, possibly creating an economic bubble, putting thousands out of work, creating spikes in energy demand, raising the price of electronic components and inflating the price of downstream products, abusing people's privacy,… hmm. Was it?
Yes (especially drawing from the invention of the numbers 0 and 1), yes (i.e. dotcom bubble), yes (probably people who were writing COBOL up until then), yes (please shut down all your devices), yes, yes.
What part of C++ is inefficient? I can write that pretty quickly without having some cloud service hallucinate stuff.
And no, a faster way to write or refactor code is not anti-worker. Corporations gobbling up taxpayer money to build power-hungry datacenters so billionaires can replace workers is.
I don't know why people say this. I look on the front page and it's just interesting articles and blog posts on a variety of differing subjects. You must be either actively seeking out stuff you don't like and wasting your time actively hating it or just imagining it.
Yes, it is an unpopular opinion, not just around here but pretty much across the whole tech world.
I think this is because most of the users/praisers of GenAI can only see it as a tool to improve productivity (see sibling comment). And yes, end of 2025, it's becoming harder to argue that GenAI is not a productivity booster across many industries.
The vast majority of people in tech are totally missing the question of morality. Missing it, or ignoring it, or hiding it.
I agree. The goal of AI is to reduce payroll costs. It has nothing to do with IDEs or writing code or making "art". It's meant to allow the owning class to pay the working class less, nothing more. What it *can* do is irrelevant in the face of what it is for.
You've pretty much described the "what it is for" for a large percentage of industrial inventions. Clearly, however, the world would be worse off without many of them.
The fact that there exist things created in the pursuit of money that are of questionable benefit to society... does not, in ANY way, negate the fact that there are MANY things created via the same motivation that are a benefit to society.
JSON Structured Output from OpenAI was released a year after the first LangChain release.
I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LC source from time to time because they do have good prompts baked into the framework.
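To illustrate what I mean, here's a minimal sketch using the OpenAI Python SDK's parse helper with a Pydantic schema (assuming a recent SDK version; the model name and schema are placeholders, not anything I've shipped):

```python
# Sketch: schema-validated structured output straight from the API,
# without a prompt framework. Model name and schema are placeholders.
from openai import OpenAI
from pydantic import BaseModel


class Species(BaseModel):
    common_name: str
    scientific_name: str
    confidence: float


class Extraction(BaseModel):
    species: list[Species]
    summary: str


client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",   # placeholder model name
    messages=[
        {"role": "system", "content": "Extract species mentions from the text."},
        {"role": "user", "content": "Observed a tui and a silvereye this morning."},
    ],
    response_format=Extraction,  # the SDK converts this into a JSON schema
)

result = completion.choices[0].message.parsed  # an Extraction instance
print(result.summary, [s.scientific_name for s in result.species])
```

With the schema enforced on the API side, most of the parsing-and-retry scaffolding that frameworks like LangChain wrapped around the model becomes unnecessary.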
IME you could get reliable JSON or other easily-parsable output formats out of OpenAI's models going back at least to GPT-3.5 or 4 in early 2023. I think that was a bit after LangChain's release but I don't recall hitting problems that I needed to add a layer around in order to do "agent"-y things ("dispatch this to this specialized other prompt-plus-chatgpt-api-call, get back structured data, dispatch it to a different specialized prompt-plus-chatgpt-api-call") before it was a buzzword.
It's still not true for any complicated extraction. I don't think I've ever shipped a successful solution to anything serious that relied on freeform schema say-and-pray with retries.
> so it's not a panacea you can count on in production.
OpenAI and Gemini models can handle ridiculously complicated and convoluted schemas, if I needed complicated JSON output I wouldn’t use anything that didn’t guarantee it.
I have pushed Gemini 2.5 Pro further than I thought possible when it comes to ridiculously overcomplicated (by necessity) structured output.