Hacker News | breezest's comments

Can't reproduce. My friend sent me a link to Signal and it works.


I did not work on the hottest topics in machine learning (or in computer vision), as I could not find room for innovation. I published several papers at second-tier conferences and submitted papers to journals. Toward the end of my degree, I started looking for jobs in industry. I was shocked to see that some employers target authors who have published at these big conferences, even for non-research positions. (It sounds like discrimination.) I always wonder whether my research profile will meet their requirements.


When I was hiring in industry for computer vision, by the end I honestly didn't even look at people's publications, just their GitHub, any skills they listed, and examples of projects they had worked on. What got published seemed, in the end, to have no bearing on what mattered in industry.


It seems like you are talking about publishing research as book chapters rather than as peer-reviewed articles.

A book usually covers a selected topic in depth and in a coherent manner, which is particularly useful for graduate students. Besides, some book authors also invite their friends to comment on drafts. A book can be as rigorous, and its impact as significant, as journal articles.


Political knowledge from Taiwan is still a part of human knowledge. I cannot see why the Foundation's application should be rejected on such a weak argument.


WIPO is about intellectual property. That has nothing to do with human knowledge and everything to do with legal structures to enable government granted monopolies over ideas (patents), expression (copyright), and identification (trademarks).

If anything, the "human knowledge" on Wikipedia is the remit of UNESCO, the United Nations Educational, Scientific and Cultural Organization.

Wikimedia wanted observer status at WIPO to be "inside the halls" when WIPO makes decisions that might affect Wikimedia's purpose, which it states as "We help everyone share in the sum of all knowledge."


Because WIPO is a political body. It has nothing to do with truth or the preservation of knowledge; it is about appearance and control.


Showing that almost no recruiters check GitHub repos does not tell us whether such contributions make a candidate stand out.

If someone has tangible contributions on GitHub, they probably mention them in their resume. Interviewers may then check those claims during the interview, or on GitHub before or after it.

The problem is this: if a candidate performs only moderately well in the interview (e.g. the coding interview) but has made many contributions on GitHub, should the interviewer recommend the candidate? My belief is that most interviewers will not take the risk, or the responsibility, for a potentially wrong hire.


This is similar to the presumption of innocence. You can believe someone acts in good faith initially. Yet, if they don't, you should turn against them immediately to protect yourself.


Isn't it better for Mozilla to "sell" some projects to other tech companies or foundations rather than keeping them internally and letting them wither away?


These big tech companies are using Rust as a feature to promote their cloud businesses. They will, and morally should, help sustain the ecosystem.


Thanks for sharing!

How often do clients come back to you for follow-up work on previous projects, e.g. new feature requests? And if you refuse such requests, does that undermine the business relationship?


This has never happened to me (yet).


Nowadays, many books cover the elementary mathematics behind machine learning. Having learned these elementary topics, does anyone have good suggestions for computational learning theory?


I recommend Shai Shalev-Shwartz and Shai Ben-David's Understanding Machine Learning: From Theory to Algorithms [0]. I've also used Mohri's Foundations of Machine Learning and found it quite insightful [1]. Usually, between the two books, at least one's proof of any given result is easy to follow.

Get both unless you're only getting one, in which case get Shalev-Shwartz and Ben-David's.

[0] http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning...

[1] https://mitpress.mit.edu/books/foundations-machine-learning


[Deleted]


Why delete the comment? The context for stochastic_monk's reply is missing now. You could have preserved the original, possibly incorrect, comment and appended an "Update" or "Edit" note correcting it.


Essentially, he asked if the above poster had read Elements of Statistical Learning, Murphy's ML textbook, Bishop's PRML, Reinforcement Learning: An Introduction, and Ian Goodfellow's Deep Learning textbook.

I simply clarified that the question was about computational learning theory, a subfield largely started by Leslie Valiant in the form of PAC (Probably Approximately Correct) learning. The contrast between the machine learning conferences I mentioned helps point out that practical machine learning (as at ICML, matching PRML/Murphy/ESL) and feature extraction/representation learning (as at ICLR), while important, are not what the previous poster was asking about.


He's specifically asking about learning theory, a subfield of machine learning along the lines of the work you'll see at COLT, using concentration bounds, VC theory, and Rademacher complexity. PRML, Murphy, ESL, the Deep Learning book, and the RL introduction are more like what you'd see at ICML or ICLR.
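For anyone unfamiliar, the flavor of result this subfield produces can be sketched in one line. This is the standard Rademacher-complexity generalization bound (a textbook form, as presented in e.g. Mohri's Foundations of Machine Learning), for a hypothesis class H with loss bounded in [0, 1]:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size m,
% simultaneously for all hypotheses h in the class \mathcal{H}:
R(h) \;\le\; \widehat{R}(h) \;+\; 2\,\mathfrak{R}_m(\mathcal{H}) \;+\; \sqrt{\frac{\log(1/\delta)}{2m}}
```

Here R(h) is the true risk, R̂(h) the empirical risk on the sample, and ℜ_m(H) the Rademacher complexity of the class, a measure of how well H can fit random noise. Bounding ℜ_m(H) for specific classes (via VC dimension, norms, etc.) is much of what this literature is about.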


thanks, my bad.



