Hacker News | alco's comments

From https://firebase.google.com/docs/data-connect/data-connect-e...:

> Firebase Data Connect provides you with a local emulator for end-to-end prototyping as well as continuous integration and continuous deployment (CI/CD) flows

Developers at Google chose PGlite for those qualities so that users of Firebase Data Connect would have better CI/CD workflows. It wouldn't be fair to call this an insignificant use case.


  Location: Kyiv, Ukraine
  Remote: Yes
  Willing to relocate: planning a move to UTC time zone within the EU
  Technologies: Elixir, Go, TypeScript, Python, AWS, GCP, Docker, Kubernetes, PostgreSQL, RabbitMQ
  Résumé/CV: https://www.dropbox.com/s/6njitiijrb0gs37/Oleksii%20Sholik%20Resume.pdf?dl=0
  Email: oleksii@sholik.dev
Hi, my name is Oleksii. Having worked as a backend engineer for almost 10 years, I'm an expert in Elixir/OTP and PostgreSQL, and skilled in Go, AWS, and Kubernetes. I'm looking for new challenges in distributed computing, global data consistency, high scalability, and observability.

I actively contributed to the Elixir programming language between 2012 and 2016, developing UX improvements for IEx (colors, history, .iex.exs) and implementing core functions in the Enum, List, OptionParser, String, and System modules.


Honestly, most of them were great. Probably the most captivating one was Ben Tyler's fun exercise in using Riak Core to build a stateful, distributed, fault-tolerant, real-time, impress-your-cat application.

Chris McCord's keynote was also really interesting: he explained how he implemented Phoenix Presence using a certain kind of CRDT.

And for some laughs I recommend watching Gary Rennie's funny story about achieving 2 million clients simultaneously connected to a single Phoenix server.


Have there been attempts at dispersing the whole collection of papers through torrents or IPFS? The goal here is not to have a central location with a pretty web page but to make the content freely accessible to anyone, anywhere. Distributing it over thousands of nodes would achieve that goal.


I believe that libgen torrents are available, which, as I understand it, are basically a mirror of the content available from scihub. I don't have a lot of information on it now and can't research it from work.


Torrents are pretty bad at the archiving problem, i.e. maintaining copies of things with no readership over a long time. Not a great fit for this stuff.


I agree. Setting up mirrors would probably be a better fit for this kind of content, but mirrors would be susceptible to the same dangers the primary website faces: forced takedown by the hosting provider, domain blocking, etc.

The fundamental difference of content-addressed networks is their resiliency in the face of a single authority trying to track down all of the sources that hold copies of the content.

Even though IPFS is still in its infancy, one of its primary goals is to solve the problem of content suddenly disappearing from the Internet.


Yes and no. You need servers that guarantee availability (but they could hide themselves pretty well if necessary). Then the torrents provide accessibility.


That's a good question. I don't know if there has been an attempt to IPFS/torrent the files, but it would be good. They are referenced by DOI on sci-hub, so they are easily searched. It would probably be helpful to have a DOI -> IPFS/torrent name mapping. For IPFS, a DOI like 10.1037/rmh0000008 would probably work, but I don't know if the '/' character will work in a torrent name.

I don't know that the sci-hub underlying database is available outside of the site. I expect with growth they will need to move to torrents or similar to minimize their bandwidth requirements.
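One way to sidestep the '/' problem would be to percent-encode the DOI when deriving a file or torrent name. Here's a minimal sketch; this mapping scheme is purely my own assumption, not anything sci-hub actually uses:

```javascript
// Derive a filesystem/torrent-safe name from a DOI by percent-encoding
// characters (like '/') that are unsafe in file names.
function doiToSafeName(doi) {
  return encodeURIComponent(doi);
}

// Recover the original DOI for lookups keyed by DOI.
function safeNameToDoi(name) {
  return decodeURIComponent(name);
}

console.log(doiToSafeName("10.1037/rmh0000008")); // "10.1037%2Frmh0000008"
console.log(safeNameToDoi("10.1037%2Frmh0000008")); // "10.1037/rmh0000008"
```

Since the encoding is reversible, a node could rebuild the DOI index from the file names alone.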


> the whole collection of papers through torrents

yes, although their bandwidth is horrible


I tried to use the Chrome extension to read a Spanish website, but it doesn't work unless I enable third-party cookies. That's quite rude. I cannot allow a single extension to compromise my browsing experience by letting anyone set third-party cookies.

Could you please add a field on the site where I could paste the URL of a webpage? That would work for me as a workaround to using the extension.


I understand where you're coming from, but it's for purely technical reasons. The extension just plain wouldn't work without this enabled.

I avoided the "paste a URL" solution because that would mean that my server would need to fetch the content from other websites instead of each user's browser doing the work.

Until I have a better solution, you could resort to copying and pasting articles you'd like to read into the upload page: http://readlang.com/upload - it's perhaps not ideal, but it should get the job done.


Elixir has first-class support for defining custom types and adding type annotations (specs) to functions. There's also a tool that makes it very easy to run dialyzer on your code: https://github.com/fishcakez/dialyze


JavaScript has functions. The list pretty much ends there. There are some libraries that provide "functional features" like functions over collections, persistent data structures, and promises. The language itself and the environment it runs in (DOM or Node) don't have a lot going for them in terms of functional programming essentials.
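To be fair to the language, here's a sketch of what plain JavaScript does give you out of the box: first-class functions, closures, and higher-order operations on arrays:

```javascript
// First-class functions and closures: makeAdder returns a new function
// that captures n from its enclosing scope.
function makeAdder(n) {
  return (x) => x + n;
}
const add10 = makeAdder(10);

// Higher-order functions over collections, built into Array.prototype.
const total = [1, 2, 3, 4]
  .map(add10)                      // [11, 12, 13, 14]
  .filter((x) => x % 2 === 0)      // [12, 14]
  .reduce((acc, x) => acc + x, 0); // 26

console.log(total); // 26
```

Everything beyond this - persistent data structures, lazy sequences, pattern matching - does indeed come from libraries rather than the language itself.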


Erlang/OTP is being developed continuously, primarily by a team at Ericsson. They ship one new major release roughly once per year, with a few minor releases in-between.


The first question in the Q&A here sheds light on the author's motivation for creating Wren:

http://munificent.github.io/wren/qa.html#why-did-you-create-...


Just a small note: Go has also abandoned segmented stacks. They were a placeholder until Go got precise GC, which enabled it to use contiguous stacks with pointer rewriting. Here's a good explanation of this – http://agis.io/2014/03/25/contiguous-stacks-in-go.html.


That's a very relevant point, thanks for the link, will check!


