I think it really depends on the how. Engaging with it in a Socratic, debate-style argument [1] when no fellow human is available might very much support your thought process. On the other hand, just obtaining the solution to one's homework/problem/task/… won't be very beneficial for one's development. The latter is sadly much more convenient and probably accounts for most of the usage. I remember a saying about the mind being a muscle: to keep it in good shape, you have to use it actively.
Thank you for the good laugh! This whole thread is peak satire.
Although, be careful. It reminds me of the foreword to a short story someone shared on HN recently: "[…] Read it and laugh, because it is very funny, and at the moment it is satire. If you're still around forty years from now, do the existing societal equivalent of reading it again, and you may find yourself laughing out of the other side of your mouth (remember mouths?). It will probably be much too conservative." — https://www.baen.com/Chapters/9781618249203/9781618249203___...
What you suggest seems plausible, but there is a very good counterexample. Overleaf is also managing well while relying on the open-source LaTeX. What drives people to subscribe is not the typesetting itself, but the ecosystem around it (collaborative editing, version management, easy sharing, etc.). You can make money with those and still keep the rendering free/open-source. I believe a similar thing is/will be true for Typst as well.
That is a bad counterexample. There is a world of difference between the main devs offering a paid service and some unaffiliated company offering services.
In principle, having a reliable source of funding for Typst is great. However, if I were a journal, this would make me hesitant: what if down the road some essential features become subscription-only?
Reminds me a bit of Isaac Asimov's novel "I, Robot", where they rely on positronic brains to do things. In the story, mathematics seems to have caught up and developed a framework to analyse the behavior of an AI system. I wonder if something similar will happen if CS becomes an empirical science, i.e., will we try to infer laws from empirical measurements of AI behavior so that we can reason about it more effectively? This would somewhat turn CS into physics, but based on an artificial system. Very strange times.
> these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries.
I guess we should figure out how to include the three laws of robotics in connectionist models asap…
I can second this; even the availability of the code is still a problem. However, I would not say CS results are rarely reproducible, at least judging from the few experiences I have had so far, though I have heard of problematic cases from others. I guess it also differs between fields.
I want to note there is hope. Contrary to what the root comment says, some publishers try to promote reproducible results. See for example the ACM reproducibility initiative [1]. I have participated in this before and believe it is a really good initiative. Reproducing results can be very labor-intensive, though, which adds load to a review system already struggling under a massive flood of papers. And it is also not perfect: most of the time it is only ensured that the author-supplied code produces the presented results. Still, I think more such initiatives are healthy. When you really want to ensure the rigor of a presented method, you have to replicate it, e.g., by reimplementing it in a different programming language, which is really its own research endeavor. And there is already a place to publish such results in CS [2]! (although I haven't tried this one). I imagine this may be especially interesting for PhD students just starting out in a new field, as it gives them the opportunity to learn while satisfying the expectation of producing papers.
This is very interesting! I think an exciting direction would be to arrive at minimal circuits that are to some extent comprehensible by humans. Now, this might not be possible for every system, but surely the rules of Conway's GoL can be expressed in fewer than 350 logic gates per cell?
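For reference, the per-cell GoL rule itself is tiny when written as explicit logic. A toy sketch of my own (not from the linked work):

```python
def gol_next(alive: bool, neighbors: list) -> bool:
    """Conway's Game of Life update for a single cell.

    A cell is alive in the next step iff it has exactly 3 live
    neighbors, or it is currently alive and has exactly 2.
    """
    n = sum(neighbors)  # count live neighbors (0..8)
    return n == 3 or (alive and n == 2)
```

Counting to 3 among 8 inputs plus the final comparison is a handful of adders and comparators in gate terms, so 350 gates per cell seems like a generous budget.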
This also reminds me of using Hopfield networks to store images. Seems like Hopfield networks are a special case of this where the activation function of each cell is a simple sum, but I’m not sure. Another difference is that Hopfield networks are fully connected, so the neighborhood is the entire world, i.e., they are local in time but not local in space. Maybe someone can clarify this further?
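To make the comparison concrete, here is a minimal Hopfield-network sketch (my own toy code, unrelated to the linked work): each cell updates to the sign of a weighted sum over all other cells, which is what I mean by local in time but global in space.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian rule: sum of outer products, no self-connections
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def recall(W, state, steps=10):
    # each cell takes the sign of the weighted sum of ALL other
    # cells -- the "neighborhood" is the entire network
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0.0] = 1.0  # break ties deterministically
    return state

# store one +/-1 pattern, flip one bit, and try to recover it
pattern = np.array([1, -1, 1, -1, 1, 1, -1, -1], dtype=float)
W = train_hopfield(pattern[None, :])
corrupted = pattern.copy()
corrupted[0] = -corrupted[0]
restored = recall(W, corrupted)
```

With a single stored pattern and one corrupted bit, the update pulls the state back to the stored pattern, which is the image-storage use I was alluding to.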
You can actually use this to import PDFs generated with Matplotlib as vector graphics into Impress presentations. This allows you to change, e.g., the color of lines or the legend (or any other part of the plot) right within Impress to better fit your presentation. I found this extremely useful in the past. In PowerPoint, I could not even import an SVG, let alone a PDF (although maybe the newest version supports this?).
The only downside is that currently you have to first import the pdf into Draw and then copy the shapes/curves over to Impress. I hope they will add direct import into Impress in the future.
There is also an open-source/free version of this [1], which I use regularly. You can install it, e.g., in Fedora, with the 'diffpdf' package. It is no longer maintained but works very well, has a nice GUI with a side-by-side view, drag-and-drop support, and both text and visual modes.
(I am one of the authors) Generally speaking, the latter. The purpose of DiscoGrad is just to deliver useful gradients. These provide information about the local behavior of the cost function around the currently evaluated point to an optimizer of your choice, e.g., gradient descent. Interestingly, the smoothing and noise can sometimes prevent getting stuck in undesired (shallow) local minima when using gradient descent.
[1] https://en.wikipedia.org/wiki/Socratic_method