
Not too surprising. Each layer can multiply each input by a set of weights and add them up, and that's all you need to convert from base 2 to base 10. In base 2, we calculate the value of a set of digits by multiplying each by 2^0, 2^1, ..., 2^n: each weight is twice the previous. The activation function just makes it easier to center those factors on zero, which reduces the magnitude of the regularization term. I'm guessing you could get the job done with two neurons in the first layer, which output the real value of each input, one neuron in the second layer to do the addition (weights 1, 1), and 8 neurons in the final layer to convert back to binary digits.
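For concreteness, here's a minimal numpy sketch of the first two layers I'm describing, with hand-set weights rather than trained ones. The bit order (least-significant bit first) and the function name `forward` are my own choices for illustration:

    import numpy as np

    # First-layer weights: powers of two, so a dot product converts
    # an 8-bit input (LSB first) into its real value.
    w_value = 2.0 ** np.arange(8)  # [1, 2, 4, ..., 128]

    def forward(a_bits, b_bits):
        # Layer 1: two neurons, each recovering the value of one input.
        a_val = a_bits @ w_value
        b_val = b_bits @ w_value
        # Layer 2: one neuron with weights (1, 1) does the addition.
        return a_val + b_val

    a = np.array([1, 0, 1, 0, 0, 0, 0, 0])  # 5, LSB first
    b = np.array([1, 1, 0, 0, 0, 0, 0, 0])  # 3
    s = forward(a, b)
    print(s)  # 8.0
    # Decoding s back into bits (the final layer's job) is not a linear
    # function of s, so that step is where the activation function
    # actually matters; here we just check the sum with integer ops:
    print([(int(s) >> i) & 1 for i in range(8)])  # [0, 0, 0, 1, ...]

The first two layers are purely linear under these assumptions; only the final decode back to binary digits needs the nonlinearity.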


While your analysis sounds good, I don't think you mean base 10; you probably mean a "single float"?



