amirmasoudabdol's comments (Hacker News)

Is there a monospaced version available, or a monospaced version inspired by this?


I made a custom Iosevka build by selecting glyph variants based on Atkinson Hyperlegible.

Iosevka: https://typeof.net/Iosevka

"Hypersevka" build plans: https://github.com/jdknezek/Iosevka/blob/jdk/scripts/hyperse...

Screenshots: https://imgur.com/7BZS3Pp https://imgur.com/sudNqWM


Iosevka is absolutely wonderful!

I have my own builds -- not based on AH's glyph choices, but also chosen to minimize glyph ambiguity.

I'll mention another great legible monospace project: 0xProto

---

Build: https://github.com/AndydeCleyre/archbuilder_iosevka/releases...

Screenshot Mono: https://cdn.imgchest.com/files/k46acrxl297.png

Screenshot Mono with syntax highlighting: https://cdn.imgchest.com/files/l4nec9d8lm4.png

Screenshot Proportional: https://cdn.imgchest.com/files/6yxkce8p6a7.png

0xProto: https://github.com/0xType/0xProto


I really like the aesthetics of Iosevka, but the glyphs are really narrow - resulting in severe readability problems for me (I'm diagnosed dyslexic).


You can configure your build to be wider than default. In order of increasing width:

- semi-extended

- extended

- extra-extended

- ultra-extended

I think by default the extended variant is included in most builds anyway, if you want to try it.

FWIW here's a sample of my usual build's extended variant: https://cdn.imgchest.com/files/b49zcjd53oy.png
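For anyone who wants to try this, width lives in Iosevka's `private-build-plans.toml`. A minimal sketch (the plan name here is made up, and the exact `shape`/`menu` values should be taken from what Iosevka's customizer emits):

```toml
[buildPlans.iosevka-wide]
family = "Iosevka Wide"

# One width grade for the family; 'shape' is the glyph width in font units
# (the default width is 500), 'menu' is the OS/2 width class.
[buildPlans.iosevka-wide.widths.extended]
shape = 600
menu = 7
css = "expanded"
```

Then something like `npm run build -- ttf::iosevka-wide` in the Iosevka source tree should produce the TTFs.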


I'll add that https://github.com/0xType/0xProto is worth checking out, if you haven't. What mono (and proportional) fonts do you find work best for you?


* MonoLisa was pretty good, minus the cursive. As a late/16yo diagnosis under a British curriculum, I cannot express in words just how hostile cursive is. Its continued use in society is an embarrassment /rant

* Intel Commit.

* I am currently using Maple Mono.


Not that I've found, but I use Commit Mono alongside it with good results

https://commitmono.com/


"Safari download not working, try Firefox or Chrome"

You gotta be joking.


I looked at the code to see why it would be doing that. It seems to handle the font customization by downloading all of the variants and combining them into a zip file in client-side code, even when you do no customization at all. Apparently the code that makes the zip (which I'm guessing is an external library) creates a corrupted archive when run in Safari, according to a comment buried in the JS.

Maybe it really is a bug on Safari's part but creating custom zip archives is something which would be far saner to do on the server side in the first place.
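For comparison, assembling the archive server-side is only a few lines with any standard zip library. A Python sketch (the function name and shape are mine, not the site's actual code):

```python
import io
import zipfile


def bundle_variants(files):
    """Build the download archive on the server instead of in browser JS.

    `files` maps archive member names to font file contents (bytes).
    Returns the zip archive as bytes, ready to serve.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()


# Hypothetical usage: bundle one selected variant.
archive = bundle_variants({"CommitMono-Regular.otf": b"\x00" * 16})
```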


That'd mean paying for compute time, versus serving just static content from a CDN.


I know right? Who uses Safari nowadays?


A website where a download link requires Firefox/Chrome. I thought the whole website was silly, but that takes the cake.


I recently replaced mine and ruled out a bunch of others based on just two rules:

- the zero must have a mark

- the base of the lower case L must go to the right and not the left, for better distinction from the number one


I need that too for hyper-OCR-able monospace labels.


yeah it's obviously a low priority for them but a monospace version of this font would be very interesting.


This is basically a side effect of the rollback, and it really sucks. Now they have to keep this wasteful design, and fewer tabs will _nicely_ fit into the bar. I don't think the initial design was great, but I like it more than this!


That's a surprisingly good video. Well done, and thanks for making it!


Homebrew on macOS supports fonts through 'cask'. I'm not sure about Linux, but since Homebrew runs on Linux these days, I wouldn't be surprised if they have that covered as well.


This looks really good. It's much more intuitive than most other gnuplot wrappers I've seen and tested.


I just recently started using Lua through the sol2 library, basically to provide some flexibility to my users through my input file. I'm very pleasantly surprised by both the library and Lua itself. For my use, I don't notice much of a performance hit, and I get a lot of flexibility.


YouTube is increasingly loading its ads from its content servers. I'm using a Pi-hole, and it's quite a challenge to block the ads. There's quite a discussion on the Pi-hole Discourse about this, with people adding obscure domain names to the block list. They don't always work, but sometimes they do, and curiously, sometimes YouTube only fails to load the sound of the ad!


I've been using R for a while in my current position, alongside some other programming languages, Python and C++. R is by far the hardest to predict and read. RStudio is terrible. It's a wrapper around a "web app", and that simply doesn't work well for something as complicated as an IDE. To give an example, RStudio does only one thing at a time; while you are running code, you cannot open a data frame even to look at it. RStudio doesn't behave like any other IDE you've seen, either. Try to increase the font size and the whole IDE scales up!

R by itself is a mess, and I don't think I have to say much about that. The R community is big, and that's both good and bad. It's good because amazing people are developing amazing packages for it. It's bad because there are a lot of bad packages. It's a lot like the JavaScript community. I have a feeling the community has started to reward "having a package", and now everyone has a package.

Setting aside the quality of R packages and R being a strange programming language, R gets the job done. However, if your job is anything beyond some statistics and data processing, good luck. I'm not saying you cannot achieve what you want using R; however, good luck reading R code. I find it extremely hard to read, and so far 90% of the code I've encountered has little to no comments.


> R being a strange programming language

I'm probably an outlier, but I have to say that the language itself is one of my favorite things about R.

Vector based, super powerful indexing of vectors, functional programming basics, lazy parameter evaluation, super convenient parameter matching and defaults, all these things make it super productive for me and let me deal with data far better than other languages. Matlab is similar in its ability to deal with data, but that's a language that feels far clunkier to me. Python has caught up with some of its packages, but it definitely feels bolted on instead of native to the language.


I actually love RStudio for what it does. In general I'm a huge CLI advocate; RStudio is basically the only time I'd rather use an IDE than the terminal. Looking at data, code, and plots simultaneously is easy - I haven't found anything as elegant for Python, including PyCharm.

Yes, the language itself has problems. It's 1-indexed, for God's sake! But if you stick to what it's good at (dplyr, ggplot2), you can get a lot of mileage. What Linus Torvalds said about C++ programmers probably applies doubly to R programmers - so yeah, comments are going to be sparse. And if you venture far from its core competence of data, you're gonna have a bad time. But overall Hadley Wickham and the tidyverse are driving the ecosystem forward, and R has found a great niche between Python scripting and Excel/Matlab.


Now that you mention the tidyverse, let me discuss the ugly side of R. You are right, the R community and ecosystem are being pushed forward by RStudio, the company, and the people behind it. The tidyverse is truly great, and R without it would not be where it is now. However, RStudio is a company and they want to earn money. Sure, they are contributing heavily to open-source R, but that might change, or they might use their influence to steer the development. I'm not saying that they would, but they could. I see more and more R code relying on APIs that RStudio exposes. What if RStudio decides to keep more of its products locked down? You can see part of this already: the RStudio team (company) sells essential security features as a paid product. What if this extends to R and the tidyverse? Sure, there are licenses and whatnot to protect it, but that could change. Looking back at history, this is how the whole Matlab thing started!

Again, I'm not saying that it will happen, but it could. R's dependence on third-party packages, and on a company to push it forward, is not necessarily a good thing.

P.S. Python IDEs are crappy too!


The tidyverse is currently GPL3 licensed, so that code will remain open even if they were to change the license for future releases. If they made it too onerous and restrictive, then presumably fewer people would use it, to the detriment of the community. People may just move over to another language like Python.


The R language has many quirks. Here is an effort to list them: https://www.burns-stat.com/pages/Tutor/R_inferno.pdf

Also, the author of the R language said the performance of R is sub-optimal. In his own words: https://www.stat.auckland.ac.nz/~ihaka/downloads/JSM-2010.pd...

Currently I am using PyCharm with the R plugin that JetBrains released very recently. RStudio is very slow and buggy.


That’s a very colorful document, reminiscent of the excellent Unix Haters Handbook, but the first chapter is standard floating point stuff, common to every language. It doesn’t inspire confidence that the writer bothered to learn anything about the domain before deciding to write and complain.

Chapters two and three are the lesson to not use procedural language fundamentals if you want performance, and to instead use functional equivalents. Not exactly a language quirk.

If somebody has only ever used 2000s-era Java and C# and those types of languages, functional-style programming will be a strange beast. But Python has enough functional-style things such as list comprehensions, and I hear Java and C++ have gained functional-style programming too, so in this day and age I'm not sure that functional-style programming should be considered quirky.

The runtime performance is all due to the current implementation, not the language itself. I think JavaScript has far more quirks, but then it was also invented in an insanely short amount of time, so that's not too surprising.


> Also, the author of the R language said the performance of R is sub-optimal.

That's from 2010, even before JIT compilation in R became a thing. And since then, much has been done in terms of the computational performance of R.


Do any people still use Tinn-R? CTRL-F found not a single mention. That's what I learned on and still go to on the occasions I need to script some R.


> Python has caught up with some of its packages, but it definitely feels bolted on instead of native to the language.

This is what Julia aims to solve.


> To give an example, RStudio does only one thing at a time; while you are running code, you cannot open a data frame even to look at it.

This isn't specific to R or RStudio. Start running a slow process in your Python IDE of choice, and while it's running try to execute df.head() to view some data frame - you won't be able to see it regardless of the language or IDE (and for a good reason).


I understand that good reason: it's because scripting languages run in sessions, so RStudio can't execute a new command while doing something else. That's fine. What's annoying and not OK is the fact that sometimes the entire interface freezes. The UI has to be separated from the session and the logic of the program. RStudio doesn't do this well.


Background jobs in Rstudio are coming:

https://blog.rstudio.com/2019/03/14/rstudio-1-2-jobs/

Also not sure you can criticize a whole language because some random code you read doesn’t have comments.


They're actually here. They're really great for developing Shiny apps. Set the shiny.autoreload option to TRUE, run the app in the background, and point your viewer at the URL, et voilà.


Even if your job is beyond statistics and data processing, you're probably using R because the core of what your team is doing is statistics and or data processing. If that's the case, and if you're the computationally sophisticated member of the team, then shouldn't the onus be on you to understand/adapt to what your less "sophisticated" peers are using?


I'm not sure why you brought this up. I didn't talk about who should do what, or whether people are doing things wrong. I said R is strange and hard to read, and that RStudio is bad and should be more capable.

Now, to address your comment: I don't think "this is the way things have been done here, so let's keep doing it that way" is a good approach. If there is a better way, even if it's more complicated, it should at least be tried and tested. If a more "sophisticated" tool proves to be useful, let it be. Not everyone in a team has to use the same set of tools, and if they see the benefit in something more "sophisticated", they might want to try it, even if it wasn't explored before. People can learn; there is always a better way/tool, and one tool cannot do it all.


You're entitled to your opinion. But perhaps it is more constructive to email R dev mailing list and suggest your ideas for improvement.


There are a lot of really fantastic packages, too. Many are the only implementation of a certain stats tool in the world.


> It's a lot like the JavaScript community.

Erm, not at all. You don't need a hundred packages to run a trivial application. Even base R is reasonably powerful, and it goes to a completely different level once you use the tidyverse as a layer to code everything in R.


Felt the same way for years coming from the python world. The R for Data Science book[0] was a game changer in making R enjoyable for me.

[0] https://r4ds.had.co.nz/


If you want to look at data frames as you run sections of your code you will have to use r markdown chunks.


While this is great, I'd like to give a shout-out to DeepL Translator [1]. I'm not affiliated with them, but I like to recommend them to people who want to step out of the Google ecosystem. I have been using DeepL for about a year now, mostly for NL<->EN and DE<->EN. So far I have never felt that a translation was off or terrible; it's as good as Google Translate, if not better in some cases.

[1]: https://www.deepl.com/translator


> So far I have never felt that a translation was off or terrible; it's as good as Google Translate, if not better in some cases.

In my experience Deepl is consistently, without fail, considerably better than Google Translate. I basically use Google now only for more exotic language pairs, or full-page translations.


> more exotic language pairs

I've found that sometimes what Google does for these is translate from Language A > English, and then English > Language B, which leads to bizarre results.


I have noticed that too. When translating "libre" from French to Azerbaijani, it gave the Azerbaijani word for free as in "free of charge", rather than free as in "freedom". That is an ambiguity mostly limited to English (and maybe some other languages), but definitely present in neither French nor Azerbaijani.


From reports, Google Translate has effectively created its own internal (AI GD ML) metalanguage, which can interpolate between languages it's not been specifically trained on. E.g., with Japanese <-> English and Korean <-> English, Google Translate can manage Japanese <-> Korean, without being specifically trained to do so.

So yes, there's an intermediary language. But it's not English.

https://www.newscientist.com/article/2114748-google-translat...


The intermediary language is not English, but due to the way the training set is constructed (pairs of texts in various languages, with the vast majority of pairs having English as one of the languages, is my understanding) it can be very hard to tell apart from English sometimes.

For example, translating "рубанок" ("plane", in the sense of the carpenter's tool) from Russian to Polish used to produce "samolot" ("airplane") in Google translate up until sometime earlier this year, because in the intermediate representation "plane" was ambiguous just like it is in English. It looks like that particular bit is fixed now, which is at least progress! Maybe they've been adding more non-English text pairs...


That's almost certainly accurate. The metalanguage / interlingua isn't English, but being based on A <-> English and B <-> English training, is all but certainly influenced by English grammar, words, and idioms, in ways that direct A <->B training would not be.


It seems to be 'fixed' now, but once I was trying to translate from Hindi to Nepali (which are actually closely-related languages, think Italian and Spanish) with a simple sentence along the lines 'Ram came', where 'Ram' is a common Hindu name (effectively: 'John'), written in Devanagari with a long vowel: राम (rām).

And I gave the Hindi input in devanagari (राम आ गया), but still the Nepali translation ended up being the equivalent of 'the sheep came' (भेडा आयो), so somewhere along the line it seemed to be treating the name राम (rām) as equivalent to the English string 'ram' and translating accordingly.

So if the intermediate language isn't English, it certainly has some English-like properties....


This is interesting! So it's not just about grammatical patterns, but other stuff that might be parsed as named entities.


But it's bizarre that, even if for some reason it doesn't recognise common Indian names, it just treats "unknown strings" as English words.


Supposedly, but it behaves suspiciously like English in practice, perhaps because of the input data (lots of texts originally in English then translated to many languages and fed in)


Since we're complaining about Google Translate, I'd like to mention how ridiculous their "verified translation" system is. It works by throwing automatic translations at people who, in their majority, have never studied English, and expecting them to tell whether it's right or wrong, but what happens is that most just confirm whatever they get as being correct. As a result, at least for Portuguese, many of them, if not most, are just plain wrong.

Considering Translate is such an important product, I can't fathom why they just don't hire a single linguist (or just anyone who isn't completely clueless, really) per language to register decent translations, or at least import them from a real dictionary...


Well, the more general problem with asking people visiting Google Translate to verify whether a translation from A->B is correct is that, generally, people visiting Google Translate didn't know how to translate A->B, or weren't very sure.

How many people on Google Translate actually are able to reasonably verify translations? Relatively few - and those qualified few who might poke at Translate out of curiosity are just as likely not to feel inclined to offer free labour to Google.

> Considering Translate is such an important product, I can't fathom why they just don't hire a single linguist (or just anyone who isn't completely clueless, really) per language to register decent translations, or at least import them from a real dictionary...

Perhaps professional translators rather than linguists. I imagine they have some linguists on the project, but they're likely to be more NLP-type linguists.

The difficulty is that they're interested not just in word-level meaning/translation-accuracy, but also phrase- and clause-level accuracy, and those are really large (i.e. theoretically infinite) spaces.

I've heard that the Spanish<->English translations aren't too bad.


Sometimes I have seen English words injected verbatim/untranslated in the middle of a phrase, when asked to translate between two non-English languages.


This is pretty standard, even in human (non-automated) translation.

One example is technical/service documentation for a heavy machinery company I worked for. The tech writers were based in Germany, but O&M Manuals were required in Japanese for sale in Japan. Those docs were translated using English as a "pivot language". Usually dictated by pricing (fewer German+Japanese translators = much higher cost).


But one advantage of automated translations is that there shouldn't have to be a pivot language.

And in any case, for German->Japanese, going through English probably has a lower cost. But for Hindi->Nepali, you'll lose a lot of information as Hindi and Nepali are closely related and similar not only in terms of correspondences between vocabulary items, but also grammatical structures, which is effectively 'thrown away' if there's an English, or close-enough-to-English-to-effectively-be-English intermediate translation language. (Not to mention the inefficiencies of the equivalent of sending a package from Delhi to Kathmandu via London.....)


I think in the case of automated translations it's a function of training data and confidence rather than cost. If you don't have a corpus of translation data for the source/target language combination to draw from, you're essentially forced into a pivot model.
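A toy sketch of how a pivot model loses sense distinctions, reusing the "libre" example from upthread (the dictionaries and the Azerbaijani word here are my assumption, purely for illustration):

```python
# Toy sense-blind dictionaries; a real system maps learned representations,
# but the same collapse can happen in its intermediate space.
fr_to_en = {"libre": "free", "gratuit": "free"}  # both collapse to 'free'
en_to_az = {"free": "pulsuz"}  # only the 'free of charge' sense survives


def pivot_translate(word):
    """Translate FR -> AZ via an English pivot.

    A distinction French makes (libre vs. gratuit) that English
    doesn't make is lost at the pivot step.
    """
    return en_to_az[fr_to_en[word]]


print(pivot_translate("libre"))  # → pulsuz, the 'free of charge' sense: wrong
```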


To be frank, the Hindi<->English and Nepali<->English translation results are pretty poor to start with.


For Spanish, at least, it definitely isn't better than Google.

And recently everything I had to look up while reading Gabriel García Márquez, DeepL didn't know. After enough failures I gave up and returned to Google Translate for the remainder of the book.


Google may have a wider range of words, but DeepL is definitely much better in grammar and idioms, which Google tends to translate literally.

Moreover, Google often provides a single target word; whereas DeepL allows you to select from a range of synonyms clicking on a word, and will adjust the sentence accordingly to use the new word. When Google gets the context wrong and provides the wrong meaning for the translation, DeepL's capability to translate with a different meaning is invaluable.


same here. Admittedly it supports just a few language pairs but the translation quality is consistently and considerably better than Google Translate and other major offerings.


It's far from "never off"; I work with colleagues who use it EN <> DE.

For example it will always translate 'order' in a sentence as 'Befehl' (I order you to fix a steak) instead of 'Bestellung' (I order a steak).

Both are correct, but completely different, and in our context we never mean the former.


During a C1 German course I took, I tried writing an essay in English, running it through DeepL, and submitting the result to the teacher. The only manual step was choosing the correct alternatives from the lists of words DeepL gives you.

The teacher said it was amazing, and that many of the native students she'd had couldn't write that well.


This seems really quite variable. The second sentence I tried DE->EN ended up with an awkward and confusing literal translation of a phrase that Google Translate handled well.


I find the deepl translations in the available languages very good, much better than google translate. Unfortunately, the selection of languages is (still) very limited.


DeepL really shines when not translating to or from english. I find their FR<->DE fantastic, miles ahead of Google's.


It looks useful but the lack of non-European languages makes me slightly suspicious, I wonder if their approach generalises to Arabic, Chinese, Japanese, etc.


It depends on which part of their approach you focus on. I'd expect their machine translation model to be sufficiently general to support basically any language, given enough training data. Yes, the languages you listed have some edge cases, but so do languages they already support fine. For example, the lack of spaces separating words in Chinese and Japanese can be handled by the same word-piece segmentation they need for German compound words.
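To make the segmentation idea above concrete, here's a toy greedy longest-match-first segmenter in Python (real systems learn the piece vocabulary from data; the vocabulary here is made up):

```python
def segment(word, vocab):
    """Greedy longest-match-first segmentation into known pieces.

    A toy version of the word-piece idea; returns None if the word
    cannot be fully covered by the vocabulary.
    """
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            return None  # no piece matches at position i
    return pieces


# Made-up vocabulary; splits a German-style compound into its parts.
vocab = {"donau", "dampf", "schiff", "fahrt"}
print(segment("donaudampfschifffahrt", vocab))
# → ['donau', 'dampf', 'schiff', 'fahrt']
```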

The bigger problem is likely to be lack of training data. Unless they have a pile of cash to pay professional translators to produce a parallel corpus, the alternative is to scrape translations from the internet. Basically, crawl the same site multiple times with different Accept-Language headers and try to align the results. Crucially, this depends on an existing ecosystem of bilingual websites with high-quality human translations.

According to DeepL's website they're a spin-off of Linguee, who provide a search service for exactly that kind of parallel data. So before DeepL starts supporting any given language pair, you should expect it to appear in Linguee first. https://en.wikipedia.org/wiki/Linguee

Edit: It took me a while to figure out how to select a different language on https://linguee.com (Their UI seems broken using mobile Firefox Preview.) Appending /english-chinese and /english-japanese to the URL shows that they already support those two, and the alignment of translations appears reasonable to me. No /english-arabic, though.


The European Union, the Swiss Confederation, Belgium, Canada and other multilingual states, the European Patent Office, and many international organizations provide a huge corpus of professionally translated documents and reports for major European languages. Not so much for Japanese.


Perhaps I'm too jaded, but it comes off to me as copping out of the truly difficult ones!


If you know of a good source for large piles of docs that have been accurately/naturally translated eng/ch/jp, I'm sure deepl would be interested. As another poster pointed out, any deep learning NLP project boils down to quantity+quality of data. I'm assuming adversarial approaches don't work well in this context but I'm not very familiar with nlp research.


Right but that's kind of the essence of machine translation. You won't always have high quality parallel training data for all languages so you have to find a way to thrive with low quality data.

Google has clearly made it their mission to solve that problem, and I'd say they've been rather successful.


> Translate from any language

only has 9 language options (8 if you exclude the destination language).


I found it to work incredibly well between Polish and German as well.


My experience so far is that DeepL gives the best translations, then Bing and last Google.


I lived in Germany for a bit and Deepl is what everyone recommended over Google for professional translation. Google is great to figure out how to ask for your schnitzel at the shop, but Deepl is for when you want to make a deal.


Another Luc here that lives in Germany since a bit (yes, I know you said "for a bit") and recommends Deepl to everyone over Google :D


I’m not sure about European English, but at least in American, it’s not normal to say “I live in Germany since a bit”. If you don’t live there anymore, you could say “I lived in Germany for a bit”. If you still live there, you could say “I have lived in Germany for a bit”


Based on some (limited) experience with people for whom German is L1 who are speaking English I suspect (without knowing any German myself) that this is a typical formation that a native German speaker would use when intending to form a "for a bit" phrase in English. tl;dr, I think it was a joke.


I have used it for translating several academic write-ups, including proposals and papers, from English to German. It works like a charm. It also translates German official letters from banks and the government to English pretty well.


OK, some observations. It looks like they used the UN official documents as a part of their corpus, so it translates regular news from Russian into English almost perfectly. I was actually stunned how good the translation was.

But once you step away from it, quality goes down. I tried translating random pieces of Russian literature and it makes obvious mistakes. It can't even manage the structure of sentences, never mind word choice.

Translations from English are also bad. For example, it translated "I never felt that the translation is off" as "I never felt that the translation is turned off".


At the firm where I did my internship, we have to use German in all communication. I use DeepL to check my e-mails or to help me write speeches for presentations, etc. (I'm not a native speaker). The translator is wonderful!


I'd meant to add this to my DDG !bang list yesterday: !deepl


i've just tried it out and am positively blown away by the quality of translation.


I'll be blown away when it does near-perfect JP/EN translation (which it doesn't even seem to support). No machine translation has ever been close to being remotely good when it comes to JP/EN, including the ones developed by GAFAM.


I've been using it for 2 years, mainly for French->English and occasionally English->French. I love it. It understands idioms very well and proposes excellent translations. Even for the translation of single words, it is far better than Google.


As far as I've seen, it doesn't support translating web pages, other than copy-pasting the text you want to translate... unless I'm missing something. So HN, tell me: how do I get to that functionality, if it's there?


I was hoping to use them for an application, but their API pricing is orders of magnitude higher than Google's and Microsoft's. I guess they must be focused on the web application primarily.


I was also looking for a Google alternative and recently discovered them via pons.com, and I am very happy with the results, which are often DE <-> EN.


Sometimes after writing a text I decide to throw it into DeepL for shits and giggles and the translations are pretty much as good as native every time.


This is great, I'd never even heard of them before! Would you happen to have other suggestions for alternatives to Google services?


That's really good, I even tried olden English to an extent to Spanish, and it still went through.


Sadly there's no korean to english.


I've been adopting and promoting DuckDuckGo for more than 2–3 years now. I can say the transition period was quite long, actually. I was invoking !g very frequently at the beginning, but I forced myself to at least probe the first few results before trying Google. After these years I rarely use Google, and every time I actually search something on Google, before I even see the results, Google welcomes me with the whole TOS/privacy prompt, and I close the tab and try DDG a few more times. I think the friction of accepting their privacy policy makes me think twice.

If you like DDG, recommend it to friends and coworkers. I usually put a link in newsletter if appropriate.


I've switched my default browser search engine to DDG, but I still find myself missing Google results. I miss those knowledge graph results (or whatever Google calls those sidebar boxes).

Also, I've found that DDG cannot consistently display a weather forecast widget for the query "[zip code] weather". Sometimes it will show a weather forecast, and other times it will not display it--very frustrating. Any idea why? Even just refreshing the results will switch to displaying the weather widget, which leads me to think it's either a bug (I'm in Firefox), or they don't actually have the data for my zip code and are going to fetch it after I make my request?


How about !weather instead of just weather? You can even pick your source: https://duckduckgo.com/bang?q=weather


When I search !weather it ships me off to weather.com. I REALLY don't want that to happen. If I wanted to go there, I would have just typed in that URL.


There are over two dozen weather bangs.

- !wu = Weather Underground

- !owm = OpenWeatherMap

- !nws = National Weather Service

to name a few alternatives.


I think the issue is they may not want to go to ANY other website - they want the widget. What do they do then?


"$city weather" works for me every time. Even more specific regions of cities works.


Why bother launching a webbrowser then? Save ram:

curl wttr.in


UIs have value, as does integration into a browser. Not everything is solved by a CLI.


Funny enough, it doesn't have a bang for Dark Sky, which is the source for the weather info box when it has one.


I don't see why it's worse. If you want to know what the weather is, why not go to a weather site? What's the advantage to seeing the same (or less) information on a search results page instead?


Not OP, but I was so happy when weather was built into google a few years ago.

Weather.com is pure evil when it comes to tracking and clickbait. It's also slow as hell.


Yeah but I covered that in the second part of my comment. See https://duckduckgo.com/bang?q=weather


I guess this boils down to what is my goal. Here's two plausible goals from my perspective. I go to DDG and ask for it to give me a sample of some sources from which I could find the weather. This is a traditional search. Or, I could go to DDG and ask it for the current weather. This is what I really want.

There's no scenario in which I already know which weather source I want to use, memorize its ! command, navigate to DDG, then enter that ! command. That's too many jumps. If I want the weather from weather underground or whatever, why would I ever go to DDG to start with?


I have DDG as my default search engine, so I just type "!wu 01234" in my address bar. This basically skips the DDG navigation and UI entirely. I like this type of shortcut for Wikipedia and IMDB too.

Anyway this is just a workaround for when the weather infobox doesn't automatically show up in the DDG UI, which it normally does.


This is kinda silly but for me it always works if I do it in the other order: ie. “weather 12345”

Maybe try that?


The biggest problem with DDG for me is how easy it is to game. Try doing research on new cars/trucks and you'll find most results at the top are trash sites like 2020-make-model.com and the like.


Use !mill to filter DDG searches through millionshort.com.

You can also go directly there. [1]

Change the 1000000 to however many results you want to remove from the top of the results list.

1: https://millionshort.com/search?keywords=weather+22222&remov...


Agreed, it’s easier to game. But often these ‘spam’ sites combined with an adblocker actually mean you get the content you wanted without ads and bloat.

Whereas the 'official' site has a long load time due to design bloat, videos, and other non-content which can't be blocked with a simple blocker because it's part of the actual website.

Google seems to assume name brand websites have good content. I rarely agree.


It's bing..


That's a problem with Bing (the underlying search engine), not DDG.


Similar story for me. Until a few months ago probably 90% of my searches included `!g`. Then at some point I just got out of the habit, and now I usually don’t even think about the fact that I’m on DDG’s result page instead of Google’s. I have no idea if it is because the results actually got better or if I just got used to them.

