Hacker News | jvilk's comments

Yeah, I had the same experience in my area. It's so strange to see so many people (online, and locally) strongly assert that Costco has the cheapest price on many goods. It's a strangely pervasive notion. We thought that we must be missing something!

Sure, some specific items are always cheap -- like the rotisserie chicken. But outside of those few things, you really have to look for actual savings, and that's true even of baby products, which so many in our area seem to get at Costco. In many cases, we would be buying something in bulk for more money than at Winco or Kroger.

Shopping there is also rather disorienting... many times an item we bought one week would be gone the next. Or items would unpredictably shift locations.

As some commenters have identified, there may be specific items you want that are in fact cheaper at Costco. And I only have personal experience from my area, which is not a big city and has cheaper local grocery prices than a more urban area. My main point is: if you've been shopping at Costco assuming you are getting a good deal, you might want to double check against other stores!


What do you mean? Nearly every native English speaker who works with Coq pronounces it "cock" (from experience at POPL, PLDI, and other PL conferences).

Personally, I'm very happy that they are changing the name.


I pronounce it sort of like "cog" but with a "w" sound at the end, like the "gw" sound you get in "quiet".


I wrote a tool that may be useful here; it generates a TypeScript definition file and runtime type checking logic given examples of the objects you want to accept.

https://jvilk.com/MakeTypes/
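As a rough illustration (the names and exact shape here are hypothetical, not the tool's literal output), feeding it a sample object yields a TypeScript interface plus runtime checking logic along these lines:

```typescript
// Hypothetical sketch of generated output for the sample {"name": "Ada", "age": 36}.
interface Person {
  name: string;
  age: number;
}

// Runtime check mirroring the interface, for validating untrusted JSON.
function checkPerson(raw: unknown): Person {
  if (typeof raw !== "object" || raw === null) {
    throw new TypeError("expected an object");
  }
  const o = raw as Record<string, unknown>;
  if (typeof o.name !== "string") throw new TypeError("name must be a string");
  if (typeof o.age !== "number") throw new TypeError("age must be a number");
  return { name: o.name, age: o.age };
}

// Accepts well-formed input; throws on anything else.
const p = checkPerson(JSON.parse('{"name":"Ada","age":36}'));
```

The point of pairing the two is that the static interface keeps the compiler honest while the runtime check guards the boundary where untyped JSON enters the program.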


> I'd bet that anyone with a PhD could have made more money with a masters degree and a switch to industry.

This is common knowledge for those pursuing CS PhDs. You don't do it for the money. If you only care about money, a master's degree is what you want.


A lot of jobs I have seen recently say a PhD is highly desirable while a Master's is essential. Obviously, I don't suggest people start a PhD for the money. However, I'm surprised to hear that you can't out-earn your alternative self with a Master's, seeing as the difference in study time is 2 years but you'd be entering industry as an 'expert'. Especially if you target your PhD at something like computational finance, autonomous vehicles, etc.


I'd lump those fields into "ML" and the difference is probably more like 4 years in the US (2 vs ~6).

The career consequences are also weirdly mixed. Some places recognize that, along with your area of specialization, getting a PhD also involves a fair amount of project management, writing, and other skills. Other places (or even different people at the same place) seem to think it's a glaring red flag: that you can't "get real work done" because you sit around all day in a smoking jacket, thinking. (I think that's mostly bunk; academia moves fast these days, but the sentiment is nevertheless not uncommon.)

And, if anyone has actual advice on monetizing a comp/neuro PhD, I'm all ears :-)


Job requirements are often inflated to scare away people without the ambition, egomania, and/or self-confidence that the writer of the description desires. The business/management equivalent is "MBA from top-ten business school essential."


You can capture a heap snapshot in most browsers now using the developer tools. However, even a blank page (about:blank) has tens of thousands of objects allocated for the default JavaScript/DOM APIs, so manually grokking a JavaScript heap is challenging.


The approach I used was differential: take a snapshot and count up the different object types (building a hash table mapping "name of object type" => "count of objects of that type", then discarding the snapshot); take another snapshot a few minutes later; and see which type of object there was suddenly a lot more of. Then I looked at various instances of that type. It turned out to be an async queue-related object that was only used in a couple of places, which narrowed things down a lot. Even if it were something generic like a hash table or list, I suspect that looking at instances of the object and breaking them down by some observable quality (e.g. number of elements, the set of keys in a table, the types of objects in a list), plus the differential approach, would take you fairly far.
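A minimal sketch of that differential counting, assuming each snapshot has already been reduced to a type-name-to-count map (the type names in the example are made up):

```typescript
// Per-type object counts extracted from one heap snapshot.
type TypeCounts = Map<string, number>;

// Diff two snapshots' counts; positive values indicate growth between snapshots.
function diffCounts(before: TypeCounts, after: TypeCounts): Map<string, number> {
  const delta = new Map<string, number>();
  after.forEach((count, name) => {
    const growth = count - (before.get(name) ?? 0);
    if (growth !== 0) delta.set(name, growth);
  });
  // Types that vanished entirely between snapshots show up as negative.
  before.forEach((count, name) => {
    if (!after.has(name)) delta.set(name, -count);
  });
  return delta;
}

// Example: one type ballooned between snapshots, making it the leak suspect.
const before: TypeCounts = new Map([["Object", 12000], ["AsyncQueueTask", 10]]);
const after: TypeCounts = new Map([["Object", 12010], ["AsyncQueueTask", 5010]]);
const suspects: [string, number][] = [];
diffCounts(before, after).forEach((growth, name) => suspects.push([name, growth]));
suspects.sort((a, b) => b[1] - a[1]);
console.log(suspects[0][0]); // the type that grew the most
```

Sorting the delta descending surfaces the fastest-growing types first, which is usually enough to point you at the leaking allocation site.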


I recently wrote an automatic memory leak detector and debugger, which makes this a lot easier (imo) [0]. You write a short input script that drives the UI in a loop; the tool looks for growing things (objects, arrays, event listener lists, DOM node lists...) and then collects stack traces to find the code that grew them. While it won't find all leaks, I was able to eliminate an average of 94% of the live heap growth I observed in 5 web applications (in the process finding new memory leaks in Google Analytics, AngularJS, Google Maps, and others).

More information about the technique can be found in a PLDI paper (which I presented last week :D ); I tried to write it clearly so that it is accessible to a general technical audience, not just academics [1].

[0] http://bleak-detector.org/

[1] https://dl.acm.org/citation.cfm?doid=3192366.3192376 (shouldn't be paywalled, but if it is, it's also available at [0])


You're right; when it is running in the browser, isomorphic-git does not have access to the operating system's file system. Instead, it uses BrowserFS [1], which emulates a file system abstraction on top of arbitrary storage backends. It supports IndexedDB, localStorage, and Dropbox, among others. (Disclaimer: I am the author of BrowserFS.)

[1] https://github.com/jvilk/BrowserFS


Well, this is useful! I recently built BLeak [0], an automatic memory leak debugger for the client-side of web apps, which consumes heap snapshots during the automatic leak debugging process.

I had to work around the DOM limitations of V8 heap snapshots by building a JavaScript 'mirror' of DOM state that I could examine in the snapshots [1]. Perhaps I'll be able to strip out that logic and rely on the improved snapshots!

[0] http://plasma-umass.org/BLeak/

[1] Discussed in Section 5.3.2 of the preprint of our PLDI 2018 paper: https://github.com/plasma-umass/BLeak/raw/master/paper.pdf


There are meal recipe subscription services that do exactly that. I've been a subscriber to CookSmarts for a few years now, and love it!

https://www.cooksmarts.com/weekly-meal-plan-service/

I pay a low monthly fee for four recipes a week and have access to the archives if I don't like some of the recipes. I like to jokingly call it 'Netflix for meal recipes'. Each recipe has a vegetarian, paleo, and gluten free variant, and it creates a shopping list for your week. You can get a free trial if you want to see what it's like.

Recipes specify what you can prepare ahead of time (so you could do most of your prep for the week in one go), and contain embedded videos showing how to do some of the prep steps if you are inexperienced.

I will note that I simplify some of the meals. For example, some recipes contain salads and specify how to make a particular salad dressing from scratch, but I opt to use prepackaged salad dressing to save time. Other recipes specify expensive items, like capers, that I skip.

I've also lost weight eating these recipes in the portions specified, which has been quite nice.

(Note: I'm not affiliated in any way with CookSmarts; I'm just a super happy customer!)


Our lab addressed some of the issues with Stabilizer [0], which "eliminates measurement bias by comprehensively and repeatedly randomizing the placement of functions, stack frames, and heap objects in memory".

[0] http://plasma.cs.umass.edu/emery/stabilizer.html "Stabilizer: Statistically Sound Performance Evaluation" by Charlie Curtsinger and Emery Berger, ASPLOS 2013

