Hacker News | mschaef's comments

I think there are a couple of questions you need to ask yourself. The first is: why is it hard for you to be alone? You're the one person you're stuck living with for your entire life - it shouldn't be hard to be alone with yourself. That's where it began. That's where it will end.

You mentioned you have a therapist - this is something you might wish to explore with them.

The second question is related - what are you looking for in the "not alone"? What do you want? What would bring you peace? Are you looking for a relationship? A friend? Sex? etc? While you have to be comfortable with yourself, part of that comfort is knowing and being confident in what you're looking for. It may be that the world won't or can't provide it, but that's why I put this question second.

The final point I'll make is that there's nothing stopping you. You're an adult... within the constraints of the laws of your society, you CAN do what you want and there's nothing stopping you. It may not go the way you want, but it might, and wouldn't it be fun to try?


I like being alone. I'm good at being alone. I was an only child, often left alone, and I have lived alone (although dated a lot) for 20 years. But if you put me in OP's situation, it would be tough: working from home, living alone, possibly having lost your entire social circle, AND probably sad over a recent breakup. That's a prime recipe for problems, even for people who do enjoy being alone for the most part.

Agreed, re: DeMuro.... I'm long past tired of hearing how "The ____ is the ____!"

I do like Tedward's videos, though. He seems a lot more honestly enthusiastic about it, and definitely has fun with the cars.


That's a $125 salad spinner... I get the appeal, but it's definitely a premium product.


I have half a dozen of them (including my father's from college) that I cherish, but do not use. I love the simplicity and elegance of the design. (Slide rules do a lot with operations that essentially boil down to addition, subtraction, and looking up function values in tables.)


Does this have any similarities at all to the fact that the Pentium 4 used a 16-bit ALU?


Thank you. As always.


What do you mean by respect? Here's a layperson's perspective, at least.

Up through the 486 (with its built-in x87), the x87 was always a niche product. You had to know about it, need it, buy it, and install it. This is over and on top of buying a PC in the first place. So definitionally, it was relegated to the periphery of the industry. Most people didn't even know the x87 was a possibility. (I distinctly remember a PC World article having to explain why there was an empty socket next to the 8088 socket in the IBM PC.)

However, in the periphery where it mattered, it gained acceptance within a matter of a few years of being available. Lotus 1-2-3, AutoCAD, and many compilers (including yours, IIRC) had support for x87 early on. I would argue that this is one of the better examples of marginal hardware being appropriately supported.

The other argument I'd make is that (thanks to William Kahan), the 8087 was the first real attempt at IEEE-754 support in hardware. Given that IEEE-754 is still the standard, I'd suggest that the x87's place in history is secure. While we may not be executing x87 opcodes, our floating point data is still in a format first used in the x87. (Not the 80-bit type, but do we really care? If the 80-bit type were truly important, I'd have thought that in the intervening 45 years, there'd have been a material attempt to bring it back. Instead, what we have is a push towards narrower floating point types used in GPGPU, etc.... fp8 and fp16, sure... fp80, not so much.)


> What do you mean by respect?

The disinterest programmers have in using 80 bit arithmetic.

A bit of background - I wrote my own numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors. More bits would put off the cliff where the results turned into gibberish.

I know there are techniques to minimize this problem. But they aren't simple or obvious. It's easier to go to higher precision. After all, you have the chip in your computer.
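The effect is easy to demonstrate. Here's a small sketch (Python, simulating single precision by rounding through a 32-bit float after every operation; the numbers are illustrative only, not from the Boeing work):

```python
import struct

def f32(x):
    """Round a Python float (an IEEE double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Sum 0.1 one million times; the exact answer is 100000.
sum64 = 0.0  # double precision
sum32 = 0.0  # simulated single precision
for _ in range(1_000_000):
    sum64 += 0.1
    sum32 = f32(sum32 + f32(0.1))

# With more bits, the accumulated rounding error stays tiny; with fewer,
# the result drifts visibly away from 100000.
print(abs(sum64 - 100000.0))  # on the order of 1e-6
print(abs(sum32 - 100000.0))  # many orders of magnitude larger
```

Same loop, same inputs; only the precision differs, and the wider format pushes the error cliff much further out.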


Yes, Kahan's argument in favor of 80-bit precision has always been that it allows ordinary programmers, like the expected users of the IBM PC, who do not have the knowledge and experience of a numerical analyst, to write programs that perform floating-point computations without subtle bugs caused by unexpected rounding behavior.


> The disinterest programmers have in using 80 bit arithmetic.

I don't know, other than to say there's often a tendency in this industry to overlook the better in the name of the standard. 80-bit probably didn't offer enough marginal value to enough people to be worth the investment and complexity. I also wonder how much of an impact comes from the fact that you can't align 80-bit quantities on 64-bit boundaries. Not to mention that memory bandwidth costs are 25% higher when dealing with 80-bit quantities rather than 64-bit ones, and floating point work is very often bandwidth constrained. There's more precision in 80-bit, but it's not free, and as you point out, there are techniques for managing the lack of precision.
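On the alignment point: on ABIs where C's long double is the 80-bit x87 format, it gets padded out to 12 or 16 bytes in memory. A quick platform-dependent check via Python's ctypes (results vary by ABI, so this is illustrative, not universal):

```python
import ctypes

# sizeof(double) is 8 on any IEEE-754 platform. sizeof(long double) is
# 16 on the x86-64 SysV ABI (80 bits of data plus padding for alignment),
# but may be 8 on ABIs where long double is just a double (MSVC, many ARM).
print(ctypes.sizeof(ctypes.c_double))
print(ctypes.sizeof(ctypes.c_longdouble))
```

So on the common x86-64 Unix ABI, an array of long doubles costs twice the memory traffic of an array of doubles, not just 25% more.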

> A bit of background - I wrote my own numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors.

This sort of thing shows up in even the most prosaic places, of course:

https://blog.codinghorror.com/if-you-dont-change-the-ui-nobo...

In any event, while we're chatting, thank you for your longstanding work in the field.


The 80-bit format has been included in the IEEE standard from the beginning.

The IEEE standard included almost all of what the Intel 8087 had implemented, the main exception being the projective extension of the real number line. Because of this deviation in the standard, the Intel 80387 also dropped that feature.

Where you are right is that most other implementers of the standard have chosen to not provide this extended precision format, due to the higher cost in die area, power consumption and memory usage, the latter being exacerbated by the alignment issue. The same was true for Intel when defining SSE, SSE2 and later ISA extensions. The main cost issue is the superlinear growth of the multiplier size with precision, a 64-bit multiplier is not a little bigger than a 53-bit multiplier, but much bigger.

Nowadays, the FP arithmetic standard also includes 128-bit floating-point numbers, which are preferable to 80-bit numbers and do not have alignment problems. However, few processors implement this format in hardware, and on the processors where it would need to be implemented in a software library one can obtain a higher performance by using double-double precision numbers, instead of quadruple precision numbers (unless there is a risk of overflow/underflow in intermediate results, when using the range of double-precision exponents).

In general, on the most popular CPUs, e.g. x86-64 based or Aarch64 based, one should use a double-double precision library for all the arithmetic computations where the traditional 80-bit Intel 8087 format would have been appropriate.
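Double-double arithmetic is built out of "error-free transforms" such as Knuth's two-sum, which captures the exact rounding error of a double addition. A minimal sketch of addition only (no special handling of infinities or NaN):

```python
def two_sum(a, b):
    """Knuth's two-sum: return (s, e) with s = fl(a + b) and s + e == a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers, each a (hi, lo) pair of doubles."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)  # renormalize so |lo| is tiny relative to |hi|

# Plain double arithmetic loses 1e-30 entirely next to 1.0;
# double-double keeps it in the low word and can recover it.
x = dd_add((1.0, 0.0), (1e-30, 0.0))  # (1.0, 1e-30)
y = dd_add(x, (-1.0, 0.0))            # hi component is 1e-30
```

This gives roughly 106 bits of significand from ordinary double hardware, at the cost of several floating-point operations per logical operation.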


Haha, the calculator app misses one critical feature: a history of the numbers you typed in, so you can double-check the column of numbers you added.

> thank you for your longstanding work in the field.

I sure appreciate that, especially since I give away all my work for free these days!


> I'm curious what the CX-83D87 and Weiteks look like.

The Weiteks were memory-mapped. (At least those built for x86 machines.)

This essentially increased bandwidth by using the address bus as a source of floating point instructions. It was really a very cool idea, although I don't know what the performance realities were when using one.

http://www.bitsavers.org/components/weitek/dataSheets/WTL-31...


This is nuts, in the best way.

> The operand fields of a WTL 3167 address have been specifically designed so that a WTL 3167 address can be given as either the source or the destination to a REP MOVSD instruction.

> Single-precision vector arithmetic is accomplished by applying the 80386 block move instruction REP MOVSD to a WTL 3167 address involving arithmetic instead of loading or storing.
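The scheme is easier to see in a toy model: the address of a store encodes both the operation and the target register, so a plain block-move instruction becomes a stream of FPU commands. A hypothetical sketch (the opcode layout below is invented for illustration, not the actual WTL 3167 encoding):

```python
# Invented opcode field values, NOT the real WTL 3167 encoding.
OP_LOAD, OP_ADD, OP_MUL = 0, 1, 2

class ToyMappedFPU:
    """A toy memory-mapped FPU: writes to it are decoded as arithmetic."""
    def __init__(self, nregs=64):
        self.regs = [0.0] * nregs

    def store(self, addr, value):
        """One 'memory write': high bits of addr pick the op, low bits the register."""
        op, reg = addr >> 8, addr & 0xFF
        if op == OP_LOAD:
            self.regs[reg] = value
        elif op == OP_ADD:
            self.regs[reg] += value
        elif op == OP_MUL:
            self.regs[reg] *= value

# A "block move" into the device is really a short arithmetic program:
fpu = ToyMappedFPU()
for addr, value in [((OP_LOAD << 8) | 3, 2.0),
                    ((OP_ADD << 8) | 3, 0.5),
                    ((OP_MUL << 8) | 3, 4.0)]:
    fpu.store(addr, value)   # reg 3 ends up as (2.0 + 0.5) * 4.0
```

The CPU never issues a coprocessor instruction at all; it just moves data, and the decoding falls out of which addresses the data lands on.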


haha - took me a while to figure out that's Mauro Bonomi's signature

iirc the 3167 was a single-clocked, full barrel-shift MAC pipeline with a bunch (64?) of registers, so the FPU could be driven with a RISC-style opcode on every address bus clock (given the right driver on the CPU) ... the core registers were enough to run inner loops (think LINPACK) very fast, with some housekeeping on context switch of course

this window sat between full PCB minicomputer FPUs made from TTL and the decoupling of microcomputer internal clocks & cache from address bus rates ...

Weitek tried to convert their FPU base into an integrated FPU/CPU play during the RISC wars, but lost


> Then for many years it was standard for software to have help files, and it seemed anachronistic for Emacs to loudly proclaim it is self-documenting.

Emacs' notion of self-documentation refers to something slightly different from the fact that it has online help files. The help facilities can query the Lisp runtime for things like functions and keybindings. These update dynamically as the system is reconfigured. The result is something that isn't quite as cleanly presented as an online help document, but has the benefit of being deeply integrated into how the system is actually configured to behave at the moment. Very cool, and very much dependent on the open-source nature of Emacs.
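Python's introspection works on a loosely similar principle, querying the live runtime rather than static files (an analogy only, not how Emacs implements describe-function):

```python
import inspect

def greet(name):
    """Return a greeting for NAME."""
    return f"Hello, {name}!"

# Ask the running system, not a help file: if greet is redefined,
# the answers below change with it.
print(inspect.getdoc(greet))          # Return a greeting for NAME.
print(str(inspect.signature(greet)))  # (name)
```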


> My first idea was to get the source cleaned up a bit and compile it again to 32 bit with Visual Basic 4, but I couldn't figure it out, it required some 3rd party libraries that I just couldn't get a hold of.

This was super common for VB apps. The original architecture of VB was, loosely speaking, a GUI container for various pluggable controls connected to a BASIC runtime. The number of controls that came in the box would vary depending on how fancy a version of VB you bought, and you could plug in additional third party controls in the form of "VBX's" - Visual Basic eXtensions. Even though VBX's were designed mainly for GUI controls, they were the easiest extension point for VB and got extensively used for all sorts of integrations until OLE Automation became more prevalent. (Roughly contemporaneous with the switch to 32-bit.)

