Is mermaid rendering implemented in Rust, or are you running mermaid.js in a JS interpreter somewhere?
On other systems I’ve run into challenges rendering markdown documents with many mermaid diagrams in them. It would be nice to have a more robust way to do this.
I've been enjoying Typst. I worry that much of it is too complex for many end users. I'm musing about having end users draft stuff in markdown, then render that markdown with Typst templates.
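As a sketch, that pipeline could be pandoc converting the markdown to Typst markup (recent pandoc versions can emit Typst: `pandoc draft.md -t typst -o body.typ`), then a small template wrapping the result. Everything below (file names, fonts, the `report` function) is made up for illustration:

```typst
// template.typ -- a minimal wrapper the end user never has to see
#let report(title: "Untitled", body) = {
  set page(margin: 2.5cm)
  set text(font: "Libertinus Serif", size: 11pt)
  align(center, text(size: 18pt, weight: "bold", title))
  v(1em)
  body
}

// main.typ -- applies the template to the pandoc-converted draft
#import "template.typ": report
#show: report.with(title: "Drafted in Markdown")
#include "body.typ"
```

The end user only ever touches the markdown; the Typst complexity stays in the template.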
My bar for super-rough is Servo, which doesn't have password autofill… and doesn't render the Orion page right.
Orion is less rough, but the color scheme doesn't work, and it doesn't have an omnibar (as in: type in the address bar, enter, and it shows search results).
My understanding is that modern mobile phone cameras do heaps of "stacking" across multiple axes (focus, exposure, time, etc.) to compose the photo that gets saved to your phone. I believe it's one of the reasons for the multiple cameras on most flagship phones, and each of them might take many "photos", or runs of data from their sensors, per "photo" you take. I'd love to see a good writeup of the process, but my gut says exactly what they do under the hood would be pretty "trade secret"-y.
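A toy illustration of one axis of that stacking (temporal stacking for noise reduction), nothing like what any phone actually ships: average enough independently-noisy frames and the noise shrinks roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

# A "true" scene: a smooth gradient standing in for the real image.
scene = np.linspace(0.2, 0.8, 256).reshape(16, 16)

# Each frame off the sensor is the scene plus independent read/shot noise.
def capture(noise_sigma=0.05):
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# Temporal stacking: average N frames taken in quick succession.
frames = [capture() for _ in range(16)]
stacked = np.mean(frames, axis=0)

err_single = np.abs(frames[0] - scene).mean()
err_stacked = np.abs(stacked - scene).mean()
print(f"single-frame error: {err_single:.4f}, 16-frame stack: {err_stacked:.4f}")
```

Real pipelines also have to align frames (hand shake, subject motion) and merge across exposures, which is presumably where the trade secrets live.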
You can, depending on your definition of "useful". You can buy a cheap laser pointer, take out its lens, and put it over your camera lens. Tape it onto the lens for a temporary janky version or make a 3d-printed mount for something much better that you can easily take on/off.
I've personally found this little hack useful, but then again I don't have a DSLR and macro lens!
Had a quick play with my iPhone 15. It doesn't give the sort of magnification you would need for insect close-ups. I will stick with my Nikon DSLR + 100mm macro lens!
Yeah, it's far from being as good as a DSLR or mirrorless with a dedicated macro lens. Still, most people reading HN have one in their pocket, and it can be a good test to see if you like the idea of macro. It does work with larger insects; on a Pixel 10 Pro, my mantis fills most of the frame.
When does a character model become a language model?
If you're looking at block text with no connections between letter forms, each character mostly stands on its own. Except capital letters are much more likely at the beginning of a word or sentence than elsewhere, so you probably get a performance boost if you incorporate that.
Now we're considering two-character chunks. Cursive script connects the letterforms, and the connection changes based on both the source and target. We can definitely get a performance boost from looking at those.
Hmm, you know, these two-letter groupings aren't random. "ng" is much more likely if we just saw an "i". Maybe we need to take that into account.
Hmm, actually, whole words are related to each other! I can make a pretty good guess at what word that four-letter-wide smudge is if I can figure out the word before and after...
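The progression above is basically climbing the n-gram ladder. A toy bigram version of the "'ng' after 'i'" step (the corpus here is made up; a real recognizer would train on far more text):

```python
from collections import Counter

# Tiny made-up corpus, deliberately heavy on "-ing" words.
corpus = "reading writing looking going string bring sing thing evening"

# Count character bigrams and per-character totals.
pairs = Counter(zip(corpus, corpus[1:]))
totals = Counter(corpus[:-1])

def p_next(prev, ch):
    """P(next char = ch | previous char = prev); no smoothing, for clarity."""
    return pairs[(prev, ch)] / totals[prev] if totals[prev] else 0.0

# After an 'n', a 'g' is far more likely than a 'q' in this corpus,
# so an ambiguous smudge following "i-n" should be read as 'g'.
print(p_next("n", "g"), p_next("n", "q"))
```

Swap characters for words and bigrams for longer contexts and you've walked the same road from character model to language model.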
Microsoft's Power Platform should be a big advantage. If you already have your data in Outlook/SharePoint, the Power Platform makes it easy to access. Unfortunately, I've encountered several roadblocks deploying Copilot Studio & Power Platform for my enterprise. Note: I'm using GCC, so everything is worse than normal.
1) Incomplete integration. Often I just want to write a prompt to create structured data from unstructured data. e.g. read an email and create a structured contact record. There's a block for this in Power Platform, but I can't access it. Studio can do this pretty well, but...
2) Copilot Studio sucks at determinism. You really need to create higher-level tools in Power Automate and call them from Studio. Because of (1), this makes it hard to compose complex systems.
3) Permissions. We haven't been able to figure out a secure way for people to share Copilot Studio agents. This means you need to log into studio and use the debug chat instead of turning the agent on in the main Copilot interface.
4) IDE. Copilot Studio bogs down real fast. The UI gets super laggy, creating a terrible DX. There should be a way to write agents in VScode, push the definitions to source control, and deploy to Copilot, but it isn't obvious.
5) Dumb By Default. The Power Platform has hooks into Outlook and Active Directory. Copilot has access to the latest OpenAI models. Copilot Studio has an MCP server for Calendar. Out of the box I should be able to tell Copilot "schedule a 30-minute meeting with Joe and Larry next week." Nope. Maybe if I struggle through Copilot Studio to create an agent? Still no. WTF, Microsoft.
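For what it's worth, the kind of extraction in point (1) is simple enough to prototype outside the platform. A regex-based sketch (the field names, sign-off patterns, and email format here are all illustrative assumptions, not anything Power Platform defines):

```python
import re

def email_to_contact(raw: str) -> dict:
    """Pull a rough contact record out of a plain-text email.
    Patterns are illustrative, not production-grade."""
    contact = {"name": None, "email": None, "phone": None}

    m = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", raw)
    if m:
        contact["email"] = m.group(0)

    m = re.search(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}", raw)
    if m:
        contact["phone"] = m.group(0)

    # Assume the line after a sign-off ("Thanks,\nJane Doe") carries the name.
    m = re.search(r"(?:Thanks|Regards|Best),?\s*\n(.+)", raw)
    if m:
        contact["name"] = m.group(1).strip()

    return contact

msg = """Hi team, let's sync next week.
Thanks,
Jane Doe
jane.doe@example.com | (555) 123-4567"""

print(email_to_contact(msg))
```

Of course, the whole point of the prompt block would be handling the messy cases regexes can't, which is exactly the block I can't access.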
I guess I'll stop there. I really wanted to like Copilot Studio, but it just didn't deliver. Maybe I'll circle back in a couple of months, but for now I'm exploring other platforms.
PS don't even get me started on how we were so excited to retire our home-grown chat front end for the Azure OpenAI Service in favor of Copilot, only to have our users complain that Copilot was a downgrade.
PPS: also don't talk to me about how Copilot is now integrated into Windows and SIGNS YOU INTO THE FREE COMMERCIAL SERVICE BY DEFAULT. Do you know how hard it is to get people to use the official corporate AI tools instead of shadow AI? Do you know how important it is to keep our proprietary data out of AI training sets? Apparently not.