Hacker News | dnbgfher's comments

I think you're fundamentally misunderstanding the problem.

We do have a problem with nomenclature being fractured, even within fields, never mind between them. But that's not the problem.

The problem is that when you get deep enough into how things work, things just get really complex and stop behaving in the ways and patterns through which humans experience the world. They end up being things that cannot be simplified down to a common human experience in any meaningful way. At best, you can take a portion of the concept and make an analogy to some limited portion of a common experience without lying too much. But they are fundamentally not the same, so you cannot use the analogy to discover anything new about the original concept.

There is also a problem of the time it takes to internalize new ideas and concepts. You can't quickly and easily give someone an intuitive understanding of anything - it takes work and experience and time on their part to get there. I don't see how going and reworking the nomenclatures of entire fields is going to make people more willing or able to devote their time to this. Mostly you'll have to put in an incredible amount of work, convince far too many people to do things a different way because you said so, and all you'll have really done is spare students getting their feet wet in a new field some (significant) annoyance.


Sorry, I didn't see your reply to this until just now. I hope that you get a chance to see mine:

I have to disagree that nomenclature being fractured is not the problem. Within any given field, it is true that as you go deep enough into how things work, it gets complicated and hectic. However, as we have done throughout the entire written history of humanity, we have a special set of tools to help us engage with those complications, and reduce them down to complexes of simple objects whose behavior we understand - mathematics. If we are unable to reach deeply enough within a single field, it is because the tools we teach are not up to the task.

A large part of the issue is that the mathematics in common use in these disciplines, and at the elementary level, is in need of an upgrade. That upgrade is in the process of being performed (category theory), and as we learn to apply it in more and more fields, concepts and processes that seem deep, difficult, and essentially different turn out to be related in mathematically precise 'analogies'. This means that the same set of 'deep concepts', applied with different sets of base objects and different operators that obey the same rules, will unfold into some of the main ideas in each discipline. Baez's 'Physics, Topology, Logic and Computation: A Rosetta Stone' is a good example of the beginnings of this, but googling 'applied category theory' or 'applied category theory course azimuth' ought to bring you some interesting extra links.
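
A toy sketch of that idea in Python rather than category-theoretic notation (the example is mine, not from Baez's paper): the same generic fold works unchanged across structures that obey the monoid rules, even though the base objects and the operators differ.

```python
from functools import reduce

# A monoid is any (combine, identity) pair obeying the associativity
# and identity laws. One generic 'fold' serves every instance.
def fold(combine, identity, items):
    return reduce(combine, items, identity)

# Different base objects and operators, same rules, same algorithm:
total = fold(lambda a, b: a + b, 0, [1, 2, 3, 4])       # numbers under +
product = fold(lambda a, b: a * b, 1, [1, 2, 3, 4])     # numbers under *
phrase = fold(lambda a, b: a + b, "", ["ab", "c", "d"]) # strings under concat
```

The deep fact isn't the snippet, of course - it's that the laws alone guarantee the fold behaves sensibly, no matter what the objects are.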

More generally, what we see as specific processes within a particular field, when viewed through the right lenses, I expect we will find are instead instantiations of much more general processes that are the same across most, if not all, fields of discipline. How will we convince everyone to use these different naming systems and toolsets? Because those people who do use them will perform better, and the logic of competition will pull the educational system and society along with them.

If you disagree, I'd love to hear more about it. Nomenclature and education are near and dear to me - I always jump at an opportunity to discuss them with someone who does so in good faith.


Yes, that's the problem exactly - at some point we can't just use common sense and spherical cows to express an idea.


I felt that the second season greatly improved from the first. I'd say the studio segments were still pretty subpar compared to the Top Gear days, but they seemed to get a better handle on the rest of it. I haven't watched the latest season yet, but hopefully they managed to improve again.

If you gave up after the first season, you might want to give the later ones a shot. Though I have to admit I never cared too much about the reviews, so that's worth keeping in mind.


S3 is really good - the car reviews are back, and the panoramic views and cinematic genius of Top Gear are back.


I don't see much in there that is relevant to vaping in general being unsafe. The main concerns seem to be related to non-nicotine contents of the liquids being used. While that is certainly an issue to keep in mind when talking about current vape usage, it should be something that is addressable without getting rid of vaping altogether. The only concern specific to nicotine seems to be the possibility of spills leading to overdoses. I suspect this is minor enough to ignore entirely, but if not then better education and even product design can be used to address this.

Otherwise the section referenced said nicotine does not seem to be a carcinogen but some people suspect it may be a tumor promoter. If we're drawing the line at "suspected to be involved in cancer somehow but doesn't cause it" then we're going to have to rework most of society and then some.

The only other thing that stood out was the particulates issue, though they didn't mention any health effects of being exposed to the particulates and I don't have that knowledge off-hand.


I think you're ignoring some very basic aspects of the ISP business and taking entirely the wrong lessons away from Google Fiber.

There is no cheap way to enter many/most/all of these markets. This isn't a world of cheap VC cash, because there is no chance of winning a lottery. Google of all people couldn't make this make sense. Sure, they would have had a better pitch if they were competing with incumbents who did things like block Google Search. Except it's even more trivial to unblock Google Search than it is to offer better service. This gets worse when you consider the possibility of not outright blocking, but intentionally slowing services.

So yeah. If Google couldn't make Google Fiber work, there is no way any new competitor is going to spring out of nowhere. The reality of the competition hasn't changed - ISP incumbents are at a massive advantage even compared to incumbents in most other industries. Infrastructure is a massive barrier to newcomers, and any competitive aspects that can be used to differentiate one offer from another can be trivially and immediately matched by the incumbents. And in practice most all of the offers are difficult or impossible to really compare to each other, so most consumers have to rely mainly on the marketing materials and subjective experiences.

And again I've left out a whole other side of the issue, namely the relationships between ISPs and services.

This is just an awful sector in which to rely on traditional markets. It requires expensive infrastructure, and the products are indistinguishable in any meaningful way over any period of time.


Google built a hugely successful operating system ( Android ), web browser ( Chrome ) etc because it was important to them that nobody won these platforms. If someone had monopoly control, they could squeeze Google for a lot of money through Traffic Acquisition Costs. Google currently pays Mozilla potentially hundreds of millions of dollars.

The same applies to ISPs. Why do you think they are pushing for net neutrality? Do you think it is out of the goodness of their hearts? Without net neutrality Google might invest billions into building out a global, fast Internet infrastructure. They may have no choice but to do it, because it is strategic for them to own the entire ad delivery pipeline ( Computers/Phones/Tablets, Operating Systems, ISPs )


> Google built a hugely successful Operating System ( Android ), Web browser ( Chrome ) etc because it was important to them that nobody won these platforms.

Or was it important to them that they win? They certainly seem to have with Chrome at least. Does any company intentionally enter a space just to compete? Would google rather not absolutely win in the infrastructure as well?


I think it was important to them that someone else did not win. Winning it themselves was probably a bonus

> Would google rather not absolutely win in the infrastructure as well?

Probably, but it is up to its competitors to try and prevent that.

If Google has no competitive advantage because of their other businesses it should not be a problem. But if it competes unfairly, fair competition laws should be applied

Either way, the customers could stand to benefit from increased investment


Again I'd emphasize one point you made here: "And in practice most all of the offers are difficult or impossible to really compare to each other in practice, so most consumers have to rely mainly on the marketing materials and subjective experiences." This is arguably the single most important factor here. When companies are incapable of meaningfully distinguishing themselves, you're going to trend towards monopoly because there is no way for competitors to offer a compelling argument for their product. They can try to compete on price/performance but in most industries, certainly including telecoms, economy of scale means this is a losing battle.

Let's now imagine Comcast blocks Google due to payment from e.g. Bing. First off, it's entirely possible that Comcast could not unblock Google even if they wanted to, since this would undoubtedly be a breach of contract with Microsoft. But more importantly, this is something that would be perceived as an actively malicious action by the customer. They would see the company engaging in unreasonable behavior at the behest of third parties that visibly and meaningfully worsens their experience relative to other ISPs. Our critical point from above is no longer true!

This offers an opening for new competitors even if Comcast is able to unblock Google, because of the newfound ability for customers to compare services in practice. Compare this to slow speeds or high prices. These are not going to be seen as actively malicious, and so the incumbent monopoly matching a competitor's prices is something that will be seen as a positive for the incumbent monopoly, as opposed to merely neutral.

----

To see how barriers to entry are not the fundamental problem here, we can go reductio ad absurdum. Imagine we take something with very few barriers to entry - selling burgers. But let's apply so many rules and regulations that burgers become effectively identical -- same meat, same ingredients, same cooking, same standards of quality/freshness, same spices, etc, etc. You'd rapidly trend towards monopoly once again simply because the only way companies could compete would be on price and factors outside of the burger itself. But the biggest burger company would be capable of making burgers cheaper and faster than anybody else - which means they would 'win', or be able to win if necessary, on every single competitive factor. In essence they become impossible to compete against, and in simply trying to ensure high standards, you have created a monopoly.


I'm not sure why you think the only option here is that the doctor was in on it. He could have treated a man with fake papers identifying him as Mr. Cotten. By asking the doctor to compare with known-valid photos you can eliminate the possibility that the doctor was unintentionally mistaken about the identity of the man he treated.


> He could have treated a man with fake papers identifying him as Mr. Cotten

So what you're saying is that this "Mr. Cotten" (almost certainly not Indian given the surname), magically appears on death's doorstep at the hospital and promptly dies the next day.

It's so far outside the realm of possibility, but perhaps Cotton and his wife went to a local cafe frequented by tourists; found a similarly baby-faced Northern European male with light colored hair and blue eyes; offered him a large sum of cash to "drink this potion that will make you temporarily ill, but only for 24 hours, trust us". Together they walk to the local fake ID shop and do the necessary; then "Cotton" and his "wife" check in at the 5-star hotel and proceed with the plan (drink potion, get sick, go to hospital, but oh noes, "Cotton" dies!).

The real Cotton then visits a local plastic surgeon to get that Brad Pitt look he's always wanted, and slips off to a remote tropical island to lay low until his wife can extricate herself from all the unwanted attention she's getting back home -- when the dust settles she rejoins him in their happily ever after, lifestyles of the rich but not famous, world travels.

...More likely, Cotton is simply stone cold dead.


You misunderstand me. I agree Cotton is almost certainly dead.

I was responding to the statement that questioning the doctor was essentially useless when looking into the possibility that Cotton faked his death, as either he did indeed die or the doctor was part of the scheme. I was pointing out that there is value in talking to the doctor, as there are possibilities that don't require the doctor to be complicit.

If you've decided investigating the matter is worth your time, you may as well do it properly is all. Doesn't mean I think it's what likely happened.


For Google, I'm not sure how option 2 is supposed to be acceptable either. It is perfectly reasonable to be concerned about introducing an internet-connected microphone into your house. It doesn't even require assuming a malicious Google to see potential problems with this. You're one decent security flaw (in an IoT device no less) from anybody having a microphone in your house.


> You're one decent security flaw (in an IoT device no less) from anybody having a microphone in your house.

Many people already have Android smartphones, so there is already a Google microphone in your house. The big difference is that you know that it has a microphone.


Which of course makes a big difference. We are all adults. We can weigh pros and cons and then make an informed decision. Not so if we don't know all the details. This is what you're betting on when leaving "details" like this out.


Lots of technology now incorporates the idea that people are better not given too many choices. DRM/trusted computing, root-locked phones, software and operating systems that decide what information they send where, without any explicit consent or choice to disable.


The smartphone requires a battery, which drains away noticeably if it is sending all your conversations. The Nest is connected to the house power, so it can stream audio non-stop.


Additionally, a user is likely to pay a lot more attention to their phone than to their Nest devices. A compromised Nest device will likely stay compromised until Google finds the exploit...


Are you sure about that battery drain?

A malicious actor could easily conceal their activity by making 24-hour-long recordings and sending them in the night (or whenever connected to WiFi and plugged into power).
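
Rough numbers back this up (my assumptions, not from the thread: 16 kHz 16-bit mono capture, a speech codec at ~24 kbit/s such as Opus). A full day of audio is small enough to smuggle out in one nightly upload:

```python
# Back-of-envelope: how much data is a full day of covert audio?
seconds_per_day = 24 * 60 * 60                    # 86,400 s

# Raw 16-bit mono at 16 kHz (typical speech quality)
raw_bytes = 16_000 * 2 * seconds_per_day          # ~2.8 GB/day

# Compressed with a speech codec at ~24 kbit/s
compressed_bytes = 24_000 / 8 * seconds_per_day   # ~260 MB/day

print(f"raw: {raw_bytes / 1e9:.1f} GB, compressed: {compressed_bytes / 1e6:.0f} MB")
```

A couple hundred megabytes is well within a nightly WiFi upload, so bandwidth isn't the giveaway - the CPU wakeups needed to record continuously are.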


The main trick smartphones use to have their battery last long enough, is to power off every piece of hardware that's not in use, for as long as possible. Doing a 24-hour-long recording would require the main CPU to be awake far more often than usual (and in fact, I would suspect it would have to be pretty much constantly awake, unless the phone had a large dedicated hardware buffer for the recorded audio samples).


Not to mention that Android phones seem to pick up the "ok google" activation pattern from random conversations, and start sending voice to Google's servers for speech-to-text processing. Even after repeated attempts to find and turn off voice activation from settings.


And many people don't have Android phones, so this could be pretty significant.

Besides, the attack vector for a non-Google attacker to access this mic may be different than for accessing the mic on a phone


While true, the upgrade situation for Android is way better than for most IoT devices, which is saying something. And this is the sort of thing you may well keep for a decade. While you may still have other Google microphones, I would be a lot more worried about this one specifically being vulnerable at some point.


I don't know which specs exactly people are referencing, but if it's the marketing specs or the specs you would see on the box, then I don't expect consumer products to have "microphone (disabled)" for unused hardware, just as I wouldn't expect them to list some unused PCB circuitry.

It might be reasonable to be concerned about this kind of thing in the tech crowd, but the vast majority of people aren't.


> I don't expect consumer products to have "microphone (disabled)" for unused hardware

This should absolutely be the expectation. A note of "microphone (disabled in software)" at minimum. Since when is it OK for a company to sell you a product with hidden functionality that can be used to harm you by either the manufacturer or third parties?

(The obvious defense is that they're not selling it to you, they're renting it out. Such is the pathology of turning products into services. It's a sick market dynamic.)


How many things built into products have obsolete hardware or unused functionality that would have to be listed? I understand reacting strongly to a microphone, but where is the line? How do you draw it?

Do I need to list all the capabilities of some SoC even if I don't take any advantage of them? If a component has thermal sensors I'm not using do I have to list every one of them on the box?


The tech crowd are their first customers. There's no downside to listing the microphone, so why not do it?


I'm sure (this is not sarcasm) that the people behind the leak of recordings of confidential doctor-patient phone calls had no malicious intent.

So, I agree no malicious intent is needed to make things turn very bad.


I don't see how this tracks. Particularly in relation to many of the jobs going away, productivity isn't inversely related to hours worked. People working 6 hours instead of 8 will get three quarters as much done. How would their wages go up?

If you are proposing the government cover the difference, then you're effectively proposing a bastardized version of UBI with an employment requirement, and payout that is proportional to current economic advantage. I'm not even sure it wouldn't cost more than current UBI proposals given that proportionality.

I don't understand why you would be in favor of this, but think UBI was bad. Or how it could possibly work, if it wasn't the government paying for it.


I think the argument is that most people are truly only attaining maximum productivity for 5-6 hours in a day, so why are they even there for the extra 2-3? If you make $100/hr for 8 hours but are only productive for 5 of those hours, your employer is really paying you $160/hr. Why are we wasting employees time and employers money?
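
That arithmetic as a quick sketch (the function name is hypothetical):

```python
def effective_hourly_rate(nominal_rate, hours_paid, hours_productive):
    """Cost to the employer per genuinely productive hour."""
    return nominal_rate * hours_paid / hours_productive

# The figures from the comment: $100/hr paid for 8 hours, 5 of them productive.
# Total pay is $800 spread over 5 productive hours.
rate = effective_hourly_rate(100, 8, 5)
print(rate)  # 160.0
```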


This is a very IT-centric point of view, where mostly creative work can be done at various speeds depending on effectiveness and motivation at a given time. I know it's definitely valid for me and my colleagues.

Imagine tons of other jobs where this simply doesn't apply - doctors, teachers, bureaucrats, farmers, drivers, shop/restaurant crew, people working in tourism, many factory jobs and probably tons of other types of jobs. Yes, I just mentioned more than 90% of the world population.

We can gradually improve the efficiency of probably every single job out there, but simply slashing 30% will have very direct negative consequences on output for most people, in many cases directly proportional. Now who wants to take a 30% pay cut?


People working 6 hours will get slightly more than three quarters done. People lose steam through the day.


Nah, mostly it's just (IMO) the result of AdaCore being needlessly confusing. AdaCore also maintains the GNU compiler, which is what their Pro offering is based on. I'm not really sure why the Community edition exists, as it's basically at most a slightly different version of the GNU offering.


Someone showed me that GCC Ada supports older versions of Ada, while AdaCore's Community edition does not. There are other differences of which I am unaware, but I found this difference to be a particularly major and surprising one.


I'm not sure what you mean. Ada hasn't ever undergone any breaking changes I'm aware of, so even Ada 83 would be valid Ada 2012. I'm not sure how a compiler could lose support of older versions.

The only difference I've ever run into was an AdaCore-defined (as in not part of the standard) aspect that was known but not fully implemented by GCC Ada, while the Community edition did have it. That's since been added to GCC as well.


AdaCore maintains the GNU compiler as well. As a result, the GNU and AdaCore compiler (Community and Commercial) are essentially the same. The Ada community in fact just calls them all GNAT and then specifies further if it's relevant. The main difference is that GNU can lag behind a bit historically, but recently there has been some initiative from inside AdaCore to tackle that issue and they've done a great job.

Grab whichever is easiest for you to get started with. It's a weird and ultimately meaningless situation license-wise. If and when licenses matter, grab the GNU version. Unless you're doing something that requires a support contract anyway, then you'll end up with Pro.

I really wish AdaCore would do something about that Community edition. I have no idea what purpose it is supposed to have. In my opinion, right now it's mostly a thorn in the community's side. All it does is create a bunch of hesitation and confusion about the licenses and compiler capabilities for newcomers. As a result, before people even start with Ada they're getting deep into license discussions, which is just a wonderful first impression. And worst of all, it's all basically for nothing, because they're at most slightly different versions of the same software with a different license.


The answer to "what" is implied above. Evolution.


Or a military selection of psychos... because the ability to limit or completely switch off empathy is one of the characteristics of psychopathic criminals.

