diablo1's comments | Hacker News

I'm an oldskool dev who shies away from 'the new shiny' because I've learned the basics of JS and you can get pretty far with the fundamentals, despite the allure of these rather expressive frameworks that get released every week now.

Frankly I get more joy out of writing bookmarklets, Tampermonkey/Greasemonkey scripts and customizing websites with various CSS 'userstyles'.

I also prefer SFTP, and still enjoy uploading PHP scripts that way and then building out some barebones CRUD app in my free time. Again, I shy away from the new shiny like GraphQL and things like Docker or Kubernetes.


I'm also "oldchool" in a similar vein. But. Docker, dude. Docker.

For instance, yesterday a PHP tool made the HN frontpage [1] that seemed rather interesting. Problem is, it needs PHP 7.2+ and my app runs on 5.6.x. What to do? (Bear in mind, 3 weeks ago I moved from VirtualBox to Docker for local dev, and my files are now in a regular folder on my machine).

In this case, I just need to tell Docker to fetch the image and run it pointing to the same folder where my app is. Just a one-line command.
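For instance, assuming the official PHP image and a single entry script (the image tag and file name here are just placeholders), that one-liner can look something like:

    # run the tool with PHP 7.2 against the current folder, leaving the host untouched
    docker run --rm -it -v "$PWD":/app -w /app php:7.2-cli php tool.php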

After that, I just need to remove the Docker image and my system is as pristine as before.

I think I have itches very similar to yours (like, I'm learning Python and all things data science and machine learning related, and instead of virtualenv or even *conda I'm separating my projects using Docker), and nowadays I'm using Docker for all of 'em.

[1] https://news.ycombinator.com/item?id=23654973


Keep in mind you should still use a venv with Docker; it removes the need for sudo and won't conflict with the image's system Python, if it has one.
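A minimal sketch of what that can look like in a Dockerfile (base image and paths are just examples):

    FROM python:3.8-slim
    # create the venv and put it first on PATH, so pip/python resolve to it without sudo
    RUN python -m venv /opt/venv
    ENV PATH="/opt/venv/bin:$PATH"
    COPY requirements.txt .
    RUN pip install -r requirements.txt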


For the longest time I thought the standard was to install stuff on Docker as root and not worry about typical permission/user idioms you'd practice on a classic box.

But now I am seeing more of this. Do you have any good links to read more about why using venv/non-root makes sense for Docker?


In my case, I think Docker removes the need for virtualenv, but a quick Google search returns interesting results [1].

I do know setting up a proper user in Docker is just a couple of lines away in a Dockerfile (as a matter of fact, I did that for the main app I develop).
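For reference, those couple of lines can be as simple as the following (the username is arbitrary):

    # create an unprivileged user and switch to it for everything that follows
    RUN useradd --create-home appuser
    USER appuser
    WORKDIR /home/appuser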

For my other use cases, I just don't care. I'm using Docker to quickly bootstrap a JupyterLab environment, and I do that by sharing some confs (like the pip cache folder).

The caveat to this is that the files I create are owned by root, but that again is just a command away from being fixed (if I ever need it, which I haven't yet).
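For what it's worth, that Jupyter bootstrap can be a single run command along these lines (the image and mount paths are assumptions on my part; jupyter/base-notebook happens to run as a non-root 'jovyan' user by default):

    docker run --rm -p 8888:8888 \
      -v "$PWD":/home/jovyan/work \
      -v "$HOME/.cache/pip":/home/jovyan/.cache/pip \
      jupyter/base-notebook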

[1] https://stackoverflow.com/questions/27017715/does-virtualenv...


Don't have a link at hand, but the main reasons off the top of my head:

- exploits for breaking out of the container are easier to pull off if you have root access

- if the image has a system Python, installing with sudo will install things in the system site-packages directory, which can cause a lot of trouble


I understand the attitude, especially because most of the new shiny things tend to put a lot into polish and then ship either no docs or overwhelming docs, instead of allowing smooth integration with a respectable learning curve.

The only item in your list I'm not sure fits is Docker. It's not a framework or a language; it's a complex-but-not-complicated tool that makes a lot of life fairly convenient, especially at small scale. Things like Kubernetes (and Terraform and so on) should only come into the conversation when you start having bigger questions about the scale of your project and your infrastructure, and even then aren't a given. But at a single-user local scale, Docker can be incredibly convenient for doing local dev, getting reliable behaviour, and avoiding a lot of pitfalls of your own machine's configuration. Often in just a handful of lines (a few for the Dockerfile and a one-liner to start or stop the whole thing). The docs are also fairly well written, as they offer basic options and allow for a lot of granularity when it becomes needed.

Docker isn't the only tool in the world to do what it does, but it's a very user-friendly tool that offers a lot of convenience with little overhead (at least at a non-"devops team" scale).
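As a purely hypothetical illustration of that "handful of lines" for a small PHP app like the grandparent's (image tag and layout are placeholders):

    # Dockerfile
    FROM php:7.4-apache
    COPY . /var/www/html/

    # build once, then start and stop with one-liners
    docker build -t myapp .
    docker run --rm -p 8080:80 myapp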


While I admire your forward thinking in not mentioning scp, I will fault you for sftp, as it requires an OS account on every supported platform.

This is overkill for moving files.

I have leaned towards RFC 1867 transfers when I have a web server available, and towards stunnel configured to launch tar xf when I don't.
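For the RFC 1867 route, the transfer itself is just a multipart/form-data POST; something like the following against whatever upload endpoint you happen to have (URL and field name are made up):

    curl -F "file=@backup.tar.gz" https://files.example.com/upload.php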

I'm also about to go to production with a TFTP server as a bridge to SMB3. The TFTP client is so elderly that it is constrained to 512-byte UDP packets - a true museum piece (running VMS).


> And didn't have any or much experience with old school HTML

A good heuristic for how good a web developer is: get them to list every HTML element they know, including the deprecated ones like <marquee>. You would be surprised just how few elements they know and, more importantly, how little they know about the semantic value of the elements. For example, knowing when to use <span> instead of <div>.
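To make that last point concrete (class names invented for the example): <div> groups block-level content, while <span> marks an inline phrase inside flowing text.

    <div class="byline">
      Posted by <span class="author">diablo1</span> two hours ago
    </div>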


Is encyclopedic knowledge of something that can be searched online a good heuristic for skill? I thought the community agreed this was not a good thing when it came to fizz-buzz type software interviews.

IMO in frontend world there are two important things: knowing how to search, and knowing how to determine which search results are useful as opposed to some Medium/blogpost fluff piece that is varying degrees of misinformed.


If you have experience with a framework, you should naturally be able to list things you commonly use.

I was recently at an interview where they just asked me to name a list of CloudFormation functions I've used and what I used them for. If I listed that on my resume as something I knew, it was fair game.


@bobthepanda

It is not a good heuristic if the person has two days to prepare, in that it is very gameable.

It is important to be able to develop things _quickly_ in order to keep a job. For that, knowing how to search is indeed useful. But there are certain things that it is worth keeping in L1 cache.


> But there are certain things that it is worth keeping in L1 cache.

I doubt that the laundry list of things that have gotten tossed out of W3C's HTML standards is one of them, as the grandfather comment implied.


I am somewhat of a polymath, but a narrowly focused one, in that most of my skills apply solely to computers and very little else. But then you may ask: computers are a broad topic, so how does one ever conquer the subject and attain mastery of it?

My answer is simple: the world of computers is an endless one, a gigantic rabbit hole (especially when combined with The Internet). Being able to download the source code for a huge number of programs and have them running and configured any way you wish is still magical, no matter how many times I've done it.

Then there's the fact that you can get reliable information on an endless amount of subjects with very little friction or red tape. I am still in awe of The Internet and haven't become jaded about it yet, as many of my peers have (they literally have grown bored of the net...something I can't understand). I guess it's how you apply the knowledge instead of merely knowing for the sake of knowing.


Thank goodness. Google is the biggest potential 'database of ruin' that could embarrass many people if even a sliver of the dossiers they have on people were leaked. Holding onto this database (without periodically wiping it) is like keeping tonnes of radioactive waste under the floorboards.


Would be interesting to see how many of these attacks could be mitigated with CSP [0].

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
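Even a fairly conservative policy along these lines (the values are only illustrative) blocks inline scripts and third-party script injection, which covers a large class of them:

    Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; frame-ancestors 'none'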


It's worth inspecting traffic from iOS apps. I normally do this by creating a wifi hotspot in Linux, connecting my iPhone to it, and then inspecting the traffic with Wireshark. Then I filter on the DNS protocol and look at what is being queried. There is so much tracking going on behind the scenes in apps, it is staggering! Also, some (not most) of the traffic is unencrypted, and I've even seen stuff that was sitting in my pasteboard being uploaded to some random server. (Even popular apps like TikTok spy on the pasteboard) [0]

[0] https://in.mashable.com/tech/12219/tiktok-and-other-popular-...
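If you prefer the command line over the Wireshark GUI, a rough equivalent for just the DNS lookups is (the interface name is machine-specific):

    # print every DNS query name seen on the hotspot interface
    tshark -i wlan0 -f "udp port 53" -T fields -e dns.qry.name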


The new Pasteboard APIs and notifications should go some way towards stopping that from happening.


I have made a habit of mining HN's Algolia search engine. You can uncover some real gems if you just put in the effort to narrow down your search to the particular topic you're interested in. Also, to avoid bias, I wrote a script that opens random stories from the HN main page in my browser tabs, and I am often surprised and refreshed by what I read (as most stories that gain popularity on the main page have vague titles).
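A rough sketch of what such a script can look like, using HN's public Algolia API (the endpoint and the front_page tag are documented; the random sampling is just my illustration):

    import random
    import webbrowser

    import requests

    # fetch the stories currently on the HN front page via the Algolia API
    hits = requests.get(
        "https://hn.algolia.com/api/v1/search",
        params={"tags": "front_page"},
    ).json()["hits"]

    # open a few of them at random, regardless of how their titles read
    for hit in random.sample(hits, k=min(3, len(hits))):
        webbrowser.open(hit.get("url") or
                        "https://news.ycombinator.com/item?id=" + hit["objectID"])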


PWAs need to be pushed more. PWAs are more private and don't have access to things that could contain sensitive information, like the clipboard's contents, which are often scraped and uploaded to shady C2s by some apps.




Indeed.

You mentioned

> PWAs are more private and don't have access to things that could contain sensitive information like the clipboard's contents

Do PWAs -- Web applications built with JavaScript, among other Web technologies -- have access to the standard Web APIs? If so, then PWAs offer no advantage with respect to your claim. How does using a PWA provide a more secure environment for the user?


The JS API is a bit safer, as the user has to "explicitly enable this feature", although in some browsers full access is granted. Thanks for pointing it out. Another reason to surf with JS disabled and whitelist only the sites that require it.
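For context, the asynchronous clipboard API gates reads behind a permission / user-activation check in supporting browsers, so a page can't just silently scrape the clipboard the way a native app can:

    // resolves only if the browser grants clipboard-read access (often via a prompt)
    navigator.clipboard.readText()
      .then(text => console.log("clipboard:", text))
      .catch(err => console.warn("clipboard read was blocked:", err));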


I know people who have to have everything wireless. Little do they realize that all that extra radiation is probably slowly killing them. On top of that is the security risk of having personal data leaking out of your room to whoever decides to eavesdrop on the signal (a threat model which becomes clearer when you see how easy it is to capture the leaked signal).


> Little do they realize that all that extra radiation is probably slowly killing them

Haha, honestly people could probably use more radiation, i.e. sustained daily sunshine, to live healthier.


The sun beams down 1 KILOwatt of light (50% infrared, 40% visible, 10% other) on every square meter of the ground. Most of the infrared and much of the visible light is absorbed by your skin. Your WiFi router, on the other hand, has a transmit power of 50-150 MILLIwatts, and won't let you set it higher because that's illegal. And your phone etc. run at around 15 milliwatts.

I guess if you built a ridiculously high powered WiFi antenna (which would be illegal) and stood next to it for a while, you would cook yourself. But BT/WiFi ain't gonna do shit to you.

https://m.youtube.com/watch?v=i4pxw4tYeCU
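Back-of-the-envelope, treating the router as a 100 mW isotropic source one metre away (a deliberately pessimistic simplification):

    # power density one metre from a ~100 mW Wi-Fi router vs. direct sunlight
    import math
    wifi = 0.1 / (4 * math.pi * 1.0 ** 2)   # ~0.008 W/m^2
    sun = 1000.0                            # ~1 kW/m^2 at the ground
    print(sun / wifi)                       # sunlight is on the order of 100,000x stronger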


Standing outside in the sun all day every day is absolutely deadly.


That's somewhat beside the point though, isn't it? Wifi and Bluetooth are not giving off anywhere near that much radiation.


If that were the case, there would be no African tribes left alive.


Clothing, night-time, and shade all make their exposure far less than 24/7.

Also, higher-melanin skin blocks (or absorbs?) more of the radiation.


Well, things that are deadly don't necessarily kill everyone.


By that rationale, walking is deadly. Breathing is deadly. Sleeping is deadly. This is not a valid argument supporting the stance that wifi is deadly.


Sorry, I didn't mean to argue that wifi is deadly (because its power is so much lower); I'm merely agreeing with the parent that the sun is deadly. I think the links to cancer are strong enough to warrant the term.


I worry about this being swarmed by traffic and hugged to death. Since it's popular on HN, I imagine the particular Heroku instance is overwhelmed. I was surprised that it worked when I used it. I guess I'm gonna have to pony up and donate then...


You are correct in that it is somewhat starved of resources. The instance that I host is running on the free Heroku dyno (512 MB RAM). I do not have a good caching solution currently, which is why Twitter and Instagram are almost always returning errors now. I suspect a single person is responsible for most of the issues (see GitHub issue #38). It's actually amazing how well it runs considering how much traffic is thrown at it.

At some point I hope to get enough time to implement a caching solution, which should hopefully resolve most of these issues.


Looks like you can self-host; there is a GitHub repo.

