Hacker News | k9294's comments

ottex.ai - free BYOK alternative to Wispr Flow and Raycast AI shortcuts.

Native macOS and iOS apps with OpenRouter BYOK. Same quality as the proprietary products for $1-3 per month instead of $35.


I’ve worked on relatively large projects in TypeScript, Python, C#, and Swift, and I’ve come to believe that the more opinionated the language and framework, the better. C# .NET, despite being a monster, was a breath of fresh air after TS. Each iteration just worked. Each new feature simply got implemented.

My experience also points toward compiled languages that give immediate feedback at build time. It’s nearly impossible to stop an AI agent from using 'as any' or 'as unknown as X' casts in TypeScript - LLMs will “fix” problems by sweeping them under the rug. The larger the codebase, the more review and supervision is required. A TS codebase rots much faster than a Rust/C#/Swift one.


You can fix a lot of that with a strict tsconfig, Biome, and a handful of claude.md rules, I’ve found. That said, it’s been ages since I wrote a line of C#, but it remains the most productive language I’ve used. My TypeScript productivity has only recently begun to approach it.
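A minimal sketch of what "strict tsconfig + Biome" can mean in practice; the file names and flags are the standard ones, but which rules to enable is a judgment call, not a prescription:

```jsonc
// tsconfig.json - make the compiler reject implicit escape hatches
{
  "compilerOptions": {
    "strict": true,                  // enables noImplicitAny, strictNullChecks, ...
    "noUncheckedIndexedAccess": true // indexed reads are T | undefined
  }
}
```

```jsonc
// biome.json - flag the explicit escape hatches the compiler still allows
{
  "linter": {
    "rules": {
      "suspicious": {
        "noExplicitAny": "error" // bans explicit `any`, including `as any` casts
      }
    }
  }
}
```

Note that `as unknown as X` still compiles under both; catching that pattern generally takes code review or a custom lint rule.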


Working on https://ottex.ai/ - BYOK alternative to Wispr Flow and Raycast AI shortcuts.

I love global voice-to-text transcription (especially when working with Claude Code or Cursor) and simple AI shortcuts like "Fix Grammar" and "Translate to {Language}".

I realized I was spending around €35/mo (€420 a year) on two apps for AI features that cost just pennies to run.

So I built Ottex - a native macOS app with a tiny footprint. Add your OpenRouter API key and get solid voice-to-text using Gemini 2.5 Flash, plus any OpenRouter model for AI shortcuts.


Building https://ottex.ai - a native macOS app to solve repetitive micro-tasks on a computer.

- Transcribe voice to text (especially useful when you need to explain something to Claude Code)

- (soon) select text to instantly check grammar / improve writing / change the tone of the text

- (soon) select text to Translate between languages

I discovered that I have a few $10-20 subscriptions (Grammarly, Raycast, Wispr Flow) that do embarrassingly simple stuff I can one-shot with a cheap SLM. So I decided to build one app specialized in small repetitive tasks on a computer.


They actually have a definition: “When AI generates $100 billion in profits,” it will be considered AGI. This term was defined in their previous partnership; not sure if it still holds after the restructuring.

https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...


That is a staggering number - if an engineer makes $100k per year, and let's say OpenAI runs an engineer-equivalent agent at a 20% profit on cost, that means it needs $600B in revenue, or 6 million fully-equivalent engineer years.
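Spelling out the arithmetic behind those figures (the salary and margin numbers are the assumptions stated above; the 20% is read as profit on top of cost):

```python
engineer_salary = 100_000   # $/year, assumed
profit_target = 100e9       # the $100B "AGI" profit threshold
margin = 0.20               # profit as a fraction of cost (assumption)

cost = profit_target / margin    # $500B to run the agents
revenue = cost + profit_target   # $600B billed to customers
engineer_years = revenue / engineer_salary
print(f"{engineer_years:,.0f} engineer-equivalent years")  # 6,000,000
```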

I think you can rebuild human civilization with that.

I feel like replacing highly skilled human labor hardly makes financial sense if it costs that much.


How much does an AI researcher make per year, though?


Which means that, given enough time, an LLM-powered vending machine would be classified as AGI... interesting


Why wait? Just let it bet $100 billion on red or black in a casino a couple of times, and voila!


I wonder if they have more detailed provisions than this though. For example, if a later version of Sora can make good advertisements and catches on in the ad industry, would that count?

Or maybe since it is ultimately an agreement about money and IP, they are fine with defining it solely through profits?


Incentives. I use ConsumerLab because trust is their product; if they break that trust once, they ruin their business.

I'm inclined to trust a business that earns money from me - it is aligned with the value I get, there is little incentive to break the trust, and the stakes of keeping it are high when you are paid to be trustworthy.

I trust the greedy capitalists more than politicians on this question because I don't understand the incentives of the latter. At least the business model is fairly transparent: you can check the company and how it makes money. The incentives of governments and their officials, by contrast, are broken - get elected, get rich, gain power, don't lose your job, and keep producing new laws and regulations, because if you want to keep your job you can't say, “Everything is working; the best thing I can do right now is monitor the system, collect the data, and do nothing for a few years.”


I live in the EU, and oh boy, are you wrong. Same crap on the shelves, same crap on marketplaces, same supplement brands, etc. (I live in Portugal).


Also a happy customer of ConsumerLab. Highly recommend the product.


> For postgres, the bottleneck was the CPU on the postgres side. It consistently maxed out the 2 cores dedicated to it, while also using ~5000MiB of RAM.

Comparing throttled PG against unthrottled Redis is not a fair benchmark.

Of course when pg is throttled you will see bad results and high latencies.

A correct performance benchmark would give all components unlimited resources and measure performance and resource usage below saturation. In that case, PG might use 3-4 CPUs and 8GB of RAM but show comparable latencies and throughput, which is the main idea behind the notion of “PG for everything”.

In a real-world situation, when I see a saturated CPU, I add one more CPU. For a service doing 10k req/sec, that’s most likely a negligible cost.


Since it's in the context of a homelab, where you usually don't change your hardware for one application, using the same resources in both tests seems logical (though one could argue the test should be PG vs. Redis + PG).

And their point is that it's good enough as is.


It's a homelab. If it works, it works. And we already knew that it would work without reading TFA. No new insights whatsoever. So what's the point of sharing or discussing?


In a home lab you can go the other way around and compare the number of requests before saturation.

e.g. 4k req/sec saturates the PG CPU to 95%, while Redis sits at only 20% at that point. Now you can compare latencies and throughput per $.
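One way to make that comparison concrete (all numbers are hypothetical, and this assumes throughput scales roughly linearly with CPU up to saturation):

```python
def throughput_per_dollar(measured_rps, cpu_utilization, monthly_cost):
    """Extrapolate measured throughput to CPU saturation, then normalize by cost."""
    saturated_rps = measured_rps / cpu_utilization
    return saturated_rps / monthly_cost

# Same hypothetical $20/mo box, same 4k req/s load:
# PG is nearly saturated, Redis is mostly idle.
pg_score = throughput_per_dollar(4_000, 0.95, 20)     # ~211 req/s per $
redis_score = throughput_per_dollar(4_000, 0.20, 20)  # 1000 req/s per $
```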

The PG latencies in the article are misleading.


Why?

